Technology Now

According to Alzheimer’s Disease International, there are 10 million new cases of dementia across the world every year. That’s the equivalent of a new case every 3 seconds.

It’s a devastating disease which manifests differently in different patients, but a novel approach could revolutionize treatment. A team from University College London is developing a foundational AI model of the human brain, with the hope it can be trained to trial individualised treatment plans for people living with dementia, as well as to better understand the disease.

Our guest this week is one of the project leaders. Parashkev Nachev is a Professor of Neurology at University College London. His team has been working in collaboration with HPE to create this AI-based digital twin, bringing together the best of AI and human medical expertise.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week we look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what we can learn from it.

Do you have a question for the expert? Ask it here using this Google form:
About the expert:

Sources and statistics cited in this episode:
AI methods for earlier Alzheimer's diagnosis:
Statistics on Alzheimer’s disease:
Precision medicine statistics:
A wearable breast cancer screening device:

Creators & Guests

Aubrey Lovell
Michael Bird

What is Technology Now?

HPE News. Tech Insights. World-Class Innovations. We take you straight to the source — interviewing tech's foremost thought leaders and change-makers that are propelling businesses and industries forward.

Aubrey Lovell (00:09):
Hello and welcome back to Technology Now, our weekly show from Hewlett Packard Enterprise, where we take what's happening in the world and explore how it's changing the way organizations are using technology. We're your hosts, Aubrey Lovell...

Michael Bird (00:22):
And Michael Bird. So in this episode, we are looking at how new developments in AI could help deliver new healthcare treatments by mapping the human brain.

Aubrey Lovell (00:34):
And as we've seen on this podcast before, the benefits that AI and machine learning have brought to the medical profession have been incredible. Models are now used to help identify healthcare trends such as disease outbreaks, as well as potential treatments. Meanwhile, hospitals and organizations can use AI to track patient records and plan their pathways through the care system.

Michael Bird (00:55):
Yeah. So where next? Well, in this episode, we'll be exploring the concept of people receiving a precision medicine tailored specifically to their own AI mapped brain and how current research could revolutionize treatment for degenerative diseases such as Alzheimer's.

Aubrey Lovell (01:13):
Very nice. So as we always say, if you're the kind of person who needs to know why what's going on in the world matters to your organization, you know that this podcast is for you. And if you haven't yet, please subscribe to your podcast app of choice so you don't miss out. All right, let's get into it.

Michael Bird (01:30):
Let's do it.

Aubrey Lovell (01:34):
According to Alzheimer's Disease International, there are 10 million new cases of dementia across the world every year. That's the equivalent of a new case every three seconds.

Michael Bird (01:45):
Yeah, it's a horrible and devastating disease, but there is hope. Researchers at the Alzheimer's Society are currently working on AI methods to bring about earlier diagnosis and understand how dementia symptoms first develop in people. It's promising research which could help provide better medicine for anyone living with the disease and even give us more time to plan for the future.

Aubrey Lovell (02:08):
But it's still a one-size-fits-all approach. Where AI is really starting to help the medical profession is in its ability to provide what's called precision medicine. That's where prevention and treatment are tailored specifically to an individual's genetic makeup as well as their environment and lifestyle. According to a report by MarketsandMarkets, the precision medicine industry was estimated to be worth around $29 billion in 2023. The report suggests that by 2028, it could be worth over $50 billion.

Michael Bird (02:41):
And combining precision medicine with 3D AI brain mapping could change the lives of millions. One of the researchers leading the field is Parashkev Nachev, Professor of Neurology at University College London. He's part of a team working in collaboration with HPE to create foundational models of the human brain, that is, an AI-based digital twin, which in future could be adapted to suit the needs of individual patients and provide individualized care to them, bringing together the best of AI and human medical expertise. Our reporter, Alex Podmore, recently caught up with Professor Nachev and brought us this interview.

Parashkev Nachev (03:21):
So the problem that we are dealing with is understanding the fundamental structure of the brain as seen through the eyes of the tests that we carry out to characterize it, in particular imaging, but also other investigations of this kind. And the task is difficult because the brain is not only extraordinarily complex, it's also highly individual. The brains of each of us differ. And in order for us to be able to deliver individual-level, personalized care, we need to be able to capture that variation. And what that requires is a very large generative model of the space of possible characteristics of the brain. And that's our task now. We are building what you might call a foundation model of the human brain, a model that understands its fundamental essence and can be used in order to accelerate and enable all sorts of downstream tasks.

Alex Podmore (04:15):
Give me a sense of the kind of things you've been looking at throughout your career in neurology and how technology has started to make your life easier when approaching these big problems.

Parashkev Nachev (04:26):
So the nature of these models is they're enormous, they're large, and the reality is that there is no limit to the size and expressivity of the model that we would wish to achieve. The limit is really one of feasibility, and that is because every single patient counts. We can never say to a patient that they're too rare to be worth capturing in a model. So we've always tried to build bigger and bigger models and more and more expressive models. And we are fortunate that the technology in the past few years has changed quite substantially. Obviously there's been the rise of GPUs, but what matters to us in particular is the ability to use high-memory, high-throughput machines where we can parallelize across multiple GPUs across nodes. And our ideal scenario would be one where we are no longer limited, or not substantively limited, by compute, so that we know that we have, given the information in front of us, the best possible model that guides what we do with each individual patient.

Alex Podmore (05:32):
So just to reiterate, you've been working with HPE to use some GPU processing power to run the imaging of the brain that you've been looking at. Can you tell us a little bit more about how that project's come about and what it actually involves day-to-day?

Parashkev Nachev (05:46):
So HPE have supported the development of a supercomputing installation at UCL, and we have been running on the machines within that installation and also other machines that HPE have given us access to. And what we've tried to do there, this is working with the Advanced Research Computing Centre at UCL, is to create a framework that enables models of this kind, large generative models, to be trained. So far, we've been able to develop our models in a way that allows us to parallelize very well across machines. And the current scaling figures are very encouraging. It suggests that we can grow installations of this kind to an even greater size and we should be able to achieve excellent performance at the end of our training runs.

Alex Podmore (06:38):
And just to simplify things and take it back a step, this is an AI based project, but why do we need AI rather than the other technologies available?

Parashkev Nachev (06:47):
No one doubts that we need complex models to model the weather. Indeed, the UK has a billion-pound supercomputer. Most nations have very large computer installations for modeling meteorological phenomena. Yet, no one would doubt that the brain is even more complex than the weather. And yet we currently have, well, nowhere near the kind of computer that is possibly needed. And I suppose the attitude traditionally has been that the complexity is such that it's not even worth trying to model it. All we could do is to fit relatively simple traditional statistical models. And that was true up to about 10 to 15 years ago.

We've now known for some time that the application of complex models to healthcare is rewarding, and the rewards are not merely about creating tools to automate tasks such as radiology. They're about capturing fundamental heterogeneity, the richness, the complexity of human biology, so that we can then target treatments to individuals, so that we can be sure that what we are doing, as much as we can, works for this specific patient. And what that implies is highly expressive models; nothing but a very rich, very complex model can capture something that is itself rich and complex. The same applies to healthcare.

Alex Podmore (08:17):
And just to go back to one thing you said in there, perhaps one of your stated aims would be to make neurology a science that's a little bit more individual based or individual focused. But why is that so hard to do?

Parashkev Nachev (08:28):
Because each individual is located, in the sense of the description of [inaudible 00:08:36], individuality being defined by a very wide multiplicity of factors. The number of variables that determine who we are is huge. If you think about how many variables you need to capture in order to describe somebody's individuality in the face, it's hundreds, maybe thousands. And the same is true of the brain, and that descriptive richness demands models that can absorb it and can determine the relation between two people in that very high-dimensional space. And the reason we need to elicit those relationships, we need to localize people in their nearest neighborhood in the space of description, is because it's only from their neighbors that we can learn about what is best for them. Medicine is naturally counterfactual. You do not normally have experience of a particular illness before you have it. And so we always have to draw intelligence from other people. The closer to your specific characteristics that we can make that intelligence, the better the model will be and the better the outcome will be.

Michael Bird (09:45):
Absolutely fascinating. Thank you so much, and we'll be back with more from Alex's interview with Professor Nachev in a moment. So don't go anywhere. Okay. It is time for Today Alert, the part of the show where we take a look at something happening in the world that we think you should know about.

Aubrey Lovell (10:02):
And we're going to stick with the world of medicine for this one because listen up, ladies. A professor in the US has developed a bra that can screen women for breast cancer between doctors' appointments. Canan Dağdeviren developed the idea for the wearable detector in 2015 after her aunt was diagnosed with the disease, which had grown between regular checkups. Canan now leads the Conformable Decoders research group at her university and has created a physical sample of a wearable ultrasound device that sits in the cup of a bra. The user places the 3D-printed design inside the cup and runs a handheld ultrasonic tracker to detect any abnormalities. The device is held in place by magnets, and small honeycomb-shaped openings allow for contact with the skin.

The device was tested on a 71-year-old woman with a history of cysts in her breasts. The research team could detect cysts as small as 0.3 centimeters in diameter, which is the size of early-stage tumors. The research also showed that the device could provide images at a depth of up to eight centimeters, or about three inches, which is on a par with a traditional ultrasound. Although the scanner does currently have to be connected to the same sort of screens you get at hospitals, the team are working on a version that would be available to view on a smartphone, and that is pretty incredible and also lifesaving.

Michael Bird (11:28):
Yeah, very much so. Thank you Aubrey. All right. It is time to head back to our interview with Professor Parashkev Nachev, courtesy of our reporter, Alex Podmore.

Alex Podmore (11:40):
What is the long-term hope for your project?

Parashkev Nachev (11:42):
The idea is to build a foundation model of the human brain that captures its variability across the full landscape of abnormality, but also across the full spectrum of different pathological disorders. And that also understands how the brain may appear different in the context of variations of the machines that we use to image it and study it and record its function. You can think of this as being a model of everything, and its objective is to be able to draw the maximum amount and precision of intelligence about each individual given the various investigations that we carry out when we see a patient so that then the treatments that we deliver are always as optimal for each individual as they can be.

Alex Podmore (12:36):
How can someone like HPE help take you from A to B when scaling this project? What kind of things will you be relying on?

Parashkev Nachev (12:44):
So the task of architecting these models is very hard, but so is the task of making them trainable on compute at large scale. And so what is very helpful for us is to have an enterprise partner that can help us deploy these models on big training infrastructures. And then of course, further down the line, help create frameworks that would allow these models to be used in practice. But at this stage, the critical task is scaling: how do you achieve the scaling we need in order to train models of the requisite size? That will need a whole load of engineering. Obviously it requires the right hardware and also the right software frameworks, and it requires a development process where the algorithmic generation and the implementation evolve hand in hand. That is, I think, very important because that is the only way that we can really make these models work at scale.

Alex Podmore (13:49):
Amazing. Well, thank you so much for your time.

Parashkev Nachev (13:50):
Thank you very much.

Aubrey Lovell (13:53):
Remarkable. And thanks so much for stopping to chat with the podcast, Parashkev Nachev. You can find more on the topics discussed in today's episode in the show notes. And I do just want to add a personal note on this episode: I have people in my family who have dementia and breast cancer, so these are really critical topics to talk about, and it's really, really cool to see how technology is helping us advance detection and early awareness to get ahead of it, in some cases where it's life-changing. Well, we're getting towards the end of the show, which means it's time for This Week in History, a look at monumental events in the world of business and technology which have changed our lives. And the clue last week, we said, was: in 1961, boldly going went bananas. Did you get it, Michael? Because I clearly didn't.

Michael Bird (14:49):
No, not at all. What is it? What is it?

Aubrey Lovell (14:51):
Well, all will be revealed. It was the first ape into space. So that makes total sense now. So this week, 63 years ago, a four-year-old chimpanzee called Ham blasted off on the Mercury Redstone 2 mission, which would take him to 150 miles above the earth and 5,000 miles per hour. So that's 240 kilometers and a little over 8,000 kilometers per hour respectively. Ham was intensively trained over 18 months for the flight during which he had to pull levers when a light blinked, testing his ability to perform actions while weightless. The capsule actually had a pressure leak during the mission, but Ham's spacesuit protected him and he splashed down in the Atlantic after a little under 17 minutes. The success of the mission was the green light for NASA to begin human space flight missions. He was given an apple for his service and retired to a zoo in North Carolina where he died in 1983 at the age of 25. So Ham, we salute you.

Michael Bird (15:54):
If I'm honest, I'm surprised he came back alive. That's really quite cool. That's really quite sweet that he retired to a zoo. Anyway, next week, the clue is: six years after losing, Gary isn't feeling so blue. Your move, listeners. I think I know what this one is for the first time in a little while.

Aubrey Lovell (16:14):
I'm going to have to think on it.

Michael Bird (16:15):
Anyway, that brings us to the end of Technology Now for this week.

Aubrey Lovell (16:18):
Thank you to our guest, Professor Parashkev Nachev and to you, thank you so much for joining us.

Michael Bird (16:24):
Technology Now is hosted by Aubrey Lovell and myself, Michael Bird with a special thanks to our reporter, Alex Podmore. This episode was produced by Sam Datta-Paulin and Al Booth with production support from Harry Morton, Zoe Anderson, Alicia Kempson, Alison Paisley, Alyssa Mitri, Camilla Patel, Alex Podmore, and Chloe Sewell.

Aubrey Lovell (16:42):
Our social editorial team is Rebecca Wissinger, Judy Ann Goldman, Katie Guarino, and our social media designers are Alejandra Garcia, Carlos Alberto Suarez, and Embar Maldonado.

Michael Bird (16:53):
Technology Now is a Lower Street production for Hewlett Packard Enterprise. And we'll see you next week.

Aubrey Lovell (16:59):
Bye for now.