In medicine, AI is not only a technological tool but a catalyst reshaping how we think, learn, and collaborate across disciplines. As universities and health systems worldwide adapt to this new landscape, one key question emerges: what kind of mindset and skill set will the next generation of researchers and clinicians need?
We explored this question with Prof Geraint Rees, Vice-Provost (Research, Innovation & Global Engagement) at UCL, during his visit to Tsinghua University for the Beijing–Tsinghua Health AI Summit.
A leading neuroscientist and global education innovator, Prof Rees shared thought-provoking perspectives on how AI is redefining both scientific discovery and medical training.
Q1: What are the core competencies that define the next generation of AI professionals?
Geraint Rees: When we think about the core competencies that define the next generation of AI professionals, we often categorize them as interdisciplinary thinking, algorithmic competency, or clinical skills. I think the core competency is not any one of those individually — it’s the fluency to switch between them.
We’re going to need professionals who can, in one moment, think about the algorithm they’re using in the AI-enabled hospital of the future, but then know when to switch into clinical skills mode, or when to switch into an interdisciplinary mode. So it’s the cognitive fluency — the ability to flex between those different ways of thinking and interacting — that I think will be really core to the next generation of AI professionals.
Now, how we teach people that is an interesting question. My experience with Demis Hassabis, for example, who came to UCL to do his PhD and whose second supervisor I was lucky enough to be, is that you often develop it through periods of immersion in the discipline you’re trying to learn. Demis was a successful businessman and software engineer who decided to learn neuroscience.
What will the AI-enabled health professionals of the future have to experience? They’ll probably have to spend a period deeply immersing themselves in computer science and AI — or maybe they’ll be engineers who need to immerse themselves deeply in clinical practice.
I also think that kind of learning will be extended over a professional lifetime, because what we know about AI at the moment is that the skills you need to interact with it — even on your smartphone — are changing every six weeks. So the idea that we can train medical professionals once and expect that to last for 30 years seems a little overoptimistic. I think we’re going to need continued periods of learning like that.
Q2: As AI transforms global medical education, how should an international network for medical AI talent be built, and what role can Tsinghua Medicine play?
Geraint Rees: AI is transforming medicine across the world — not in any one country, but in every country — and not in one way, but in ways specific to each country’s needs. Medicine itself is also global. Everyone, whether in the developed world or the majority world, suffers from conditions like heart attacks and strokes, and broadly speaking, human physiology is the same everywhere.
That’s why, when we think about AI-enabled hospitals and the future of AI in medicine, we need to think in terms of global networks. If we develop an AI system that works in London, it also needs to work in Beijing, in Singapore, and in other parts of the world. We won’t achieve that unless we create global collaborations.
We already know that many healthcare professionals train in different parts of the world, and I think it should be exactly the same for AI.
Tsinghua Medicine can play a really important role here. Tsinghua is already an incredibly successful university in a leading city, in a highly successful country, and it has deep strengths in areas critical for AI, such as computer science, biomedical engineering, and related disciplines. My own university, UCL, may not be quite as strong in some of those areas, but it has deep strengths of its own: in computer science, the biomedical and life sciences, and, of course, medicine.
London is a city that shares some similarities with Beijing, but they’re also complementary. So my hope is that Tsinghua Medicine collaborating with UCL, and UCL collaborating with Tsinghua Medicine, will bring out the best in both places.
We’re going to need that because what’s happening with AI in healthcare is incredibly exciting — it’s a global phenomenon, it’s moving very quickly — and we need the best minds in the best places in the world collaborating to achieve the best outcomes for patients.
Q3: In which specific areas do you think UCL and Tsinghua will collaborate in the future?
Geraint Rees: I can’t predict which areas UCL and Tsinghua will collaborate in over the long term, because collaborations naturally start small and then grow. My hope for the initial collaboration is that we begin with a few key areas — perhaps general medicine, cancer, neuroscience, or areas of interest to Tsinghua Medicine such as hepatology and hepatobiliary medicine — classic clinical disciplines. Then we identify the academics, joint appointments, and medical students who will really interact, and expand from there.
The progress we’ve already made is fantastic. We’re visiting each other’s campuses frequently, we’ve established those important relationships, and we’ve signed an MOU with a clear understanding of what comes next. But what ties all of this together is that collaborations are built on people. The most important thing is not just to pick the areas, but to find the people who are going to make a difference — who will work together in a shared collaboration and generate incredibly exciting outcomes. Because without people, collaborations don’t work. And that’s why I’m here in Beijing — because it’s important to talk and to interact face to face.
Q4: How should collaboration between AI and medicine extend beyond research — into policy, ethics, and regulation?
Geraint Rees: It’s really important to think about the dialogue between education and regulation — and how we can establish it. I think we can start by identifying what actually needs regulating in AI. We’re already talking about some of those issues in this symposium: Who is responsible if AI gets it wrong? Is it the developer, the hospital or healthcare setting where it’s deployed, or the healthcare professional who uses it? My answer is all three.
Each bears some part of the responsibility — the developer who builds the system, the hospital or healthcare setting where it’s used, and the professional who uses it. We need to consider what that means for education, because we’re educating healthcare professionals, software developers, and healthcare leaders who will deploy AI. That dialogue, I think, comes naturally.
But as AI develops, its capabilities and applications will keep changing — as we’ve already seen in our daily lives. In just the last two or three years, what AI can do has changed completely. So our education systems at Tsinghua Medicine or UCL Medicine must evolve quickly too — but not so fast that they’re always chasing the next exciting thing. We need to find the right balance.
Regulation also applies to other rapidly changing areas, such as cell and gene therapy. At UCL, we’re doing a lot of work in that field — where the ability to alter the human genome or engineer immune cells raises major regulatory questions. We can learn from what worked well there, and what didn’t. Although AI is incredibly exciting, it’s perhaps not as fundamentally different from previous medical advances as we might think — and that should reassure us.
We can also use our knowledge of AI and its deployment in medicine to think proactively about regulation. For example, one of the most exciting things about AI is that it’s often adaptive — it changes its performance as it learns more about patients or settings. That’s a challenge for regulators, because most medicines don’t change their behavior over time.
But we can think about smart regulation. For instance, we could require that the first derivative of performance (to put it technically) is positive. In plain English, that means if an adaptive AI system is deployed for diagnosis, its changes over time must improve accuracy, not reduce it. That might sound like common sense, but thinking in terms of derivatives is something engineers and mathematicians at Tsinghua Medicine might naturally do, not regulators.
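The monotonicity requirement Prof Rees describes can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any regulator's actual rule: the function names, the simple accuracy metric, and the tolerance parameter are all assumptions for the sake of the example. The idea is simply that an update to a deployed adaptive model is approved only if its accuracy on a held-out evaluation set does not fall.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def approve_update(old_preds, new_preds, labels, tolerance=0.0):
    """Hypothetical 'smart regulation' gate: approve an adaptive
    model update only if held-out accuracy does not decrease,
    i.e. the change in performance is non-negative (within a
    small tolerance)."""
    return accuracy(new_preds, labels) >= accuracy(old_preds, labels) - tolerance
```

A real regulatory check would of course need far more than a single accuracy number (calibration, subgroup performance, drift monitoring), but the core "derivative must not be negative" test really is this simple to state.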
If we use our educational expertise to anticipate these challenges, we can help legal and regulatory professionals understand them before they arise. That’s a really effective way to accelerate progress. Ultimately, if we want to move as fast, as safely, and as broadly as possible — in a way that benefits patients — we need to make all these pieces work in harmony.
Q5: As a neuroscientist, you’ve witnessed how AI and brain science converge. How does this intersection inspire the future of medical education?
Geraint Rees: As a neuroscientist, it’s fascinating to see how AI is evolving. I’ll give you three answers.
First, there’s a lot of interplay between the two fields. Computer scientists — not just people like Demis Hassabis, but across the world — are now thinking about how the brain works and about cognitive architecture as a guide for designing advanced AI systems. Neurologically inspired AI systems are definitely becoming a thing.
Second, in medicine, one might be a little disappointed that we aren’t yet seeing as much of that flow through to medical AI applications. For example, in medical imaging, which I know well, many systems are based on pattern recognition. They perform extremely well at interpreting mammograms or brain scans, but are they neurologically inspired? Not really. That gives me optimism, though, because it shows how much potential remains for future progress.
Finally, there’s the question of how different disciplines think about problems. I’ve spent much of my professional life studying human consciousness — trying to understand the neural mechanisms that underlie our thought processes and our subjective experience of the world. Now, we see a generation of computer scientists who’ve built large language models and are asking: Is my model conscious?
It’s a little amusing to see that field, in a way, reinventing my own — cognitive psychology — which has been asking those questions for over a century. It shows that disciplines still have much to learn from one another in how they understand the world.
Universities like Tsinghua Medicine and UCL can help by encouraging people to cross disciplinary boundaries, to be curious about something outside their specialty, and to learn from others. As university leaders, we can create programs and opportunities for people to study beyond their fields — and that will be crucial for the future of AI.
One area we haven’t talked much about yet is the humanities and the social sciences. Science and engineering can help us treat patients better and live more fully — but the humanities ultimately teach us what it means to live a good life, to find happiness and satisfaction. As we develop AI and AI in medicine, there’s much to learn from the humanities and social sciences — from what they bring and what they can infuse into our understanding of technology and humanity.