The Truth Seekers

A shocking medical headline claimed weight loss drugs like Ozempic could slash Alzheimer's risk by 45%, but what if the story was too good to be true? This episode unravels a critical medical mystery, exposing how a sensational claim derived from observational studies fell apart when rigorous clinical trials were conducted. Listeners will discover the stark difference between promising associations and actual medical proof, and learn how media headlines can dramatically misrepresent scientific research. We'll break down how a 45% risk reduction reported in May 2025 turned into a failed clinical trial by November, and why understanding study types matters before jumping to premature conclusions about medical treatments. A quick note: the opinions and analysis shared on Truth Seekers are our own interpretations of published research and should not be used as medical, financial, or professional advice. Always consult qualified professionals for decisions affecting your health or wellbeing.

What is The Truth Seekers?

Truth Seekers: Where Data Meets Reality

Tired of sensational headlines and conflicting health advice? Join Alex Barrett and Bill Morrison as they cut through the noise to uncover what scientific research actually says about the claims flooding your social media feed.

Each week, Alex and Bill tackle a different health, nutrition, or wellness claim that everyone's talking about. From "blue light ruins your sleep" to "seed oils are toxic," they dig into the actual studies, examine the methodologies, and translate the data into plain English.

No agenda. No sponsors to please. No credentials to fake. Just two people committed to finding out what's really true by going straight to the source—the research itself.

Perfect for anyone who's skeptical of influencer health advice but doesn't have time to read every scientific study themselves. New episodes drop regularly, delivering clarity in a world full of clickbait.

Question everything. Verify with data. Find the truth.

Disclaimer: Truth Seekers provides educational content based on published research. Nothing in this podcast should be considered medical, financial, or professional advice. Always consult qualified professionals for decisions affecting your health and wellbeing.

**When the Weight Loss Drug Headlines Crashed Into Reality**

Alex: Right, so May of last year—headlines everywhere. Ozempic and those similar drugs could cut your Alzheimer's risk by forty-five percent.

Bill: Forty-five percent.

Alex: Massive number.

Bill: It really caught fire. I saw people in Reddit threads literally asking their doctors to prescribe these drugs specifically for brain protection. Not even for diabetes or weight loss.

Alex: Well, that makes sense though, doesn't it? If you're already taking it for weight loss and someone's telling you it's also protecting your brain from dementia, that's a brilliant bonus. Or if dementia runs in your family, you're thinking maybe this is worth trying.

Bill: Oh, absolutely. And the source wasn't some dodgy wellness blog—this was JAMA Neurology. Top-tier journal. Meta-analysis of 26 trials, over 164,000 participants.

Alex: So naturally, the headlines ran with it. But—this was May 2025, yeah?

Bill: Yeah.

Alex: What's happened since then?

Bill: So that's the interesting part. November—just six months later—Novo Nordisk announced results from their big Phase 3 trial testing whether semaglutide, that's Ozempic, actually slows Alzheimer's progression.

Alex: And?

Bill: Failed. Completely.

Alex: Hang on.

Bill: Nearly 4,000 patients with early Alzheimer's, and the drug didn't slow cognitive decline compared to placebo.

Alex: So in May, we're told it cuts dementia risk by forty-five percent, and by November, a massive trial shows it doesn't actually work in Alzheimer's patients?

Bill: That's exactly what happened.

Alex: That's... how does that even happen?

Bill: This is a perfect example of why the type of study matters just as much as the results.

Alex: Right. Okay. So let's break this down. What did that May study actually show?

Bill: Okay, so the JAMA Neurology meta-analysis—they looked at people with diabetes or obesity who were taking various glucose-lowering medications. When they compared GLP-1 drugs, that's the class Ozempic belongs to, to other diabetes medications, they found that people on GLP-1 drugs had a 45% lower rate of being diagnosed with dementia.
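
A quick note for anyone reading the transcript: the short sketch below uses an invented 2% baseline diagnosis rate, not a figure from the JAMA Neurology paper, purely to show what a relative figure like "a 45% lower rate" works out to in practice.

```python
# Purely hypothetical arithmetic to unpack what "a 45% lower rate" means.
# The 2% baseline is invented; the meta-analysis reports its own rates.
comparator_rate = 0.020                    # assumed: 2% diagnosed over follow-up
glp1_rate = comparator_rate * (1 - 0.45)   # a rate 45% lower than that

print(f"Comparator medications: {comparator_rate:.1%} diagnosed with dementia")
print(f"GLP-1 drugs:            {glp1_rate:.1%} diagnosed with dementia")
print(f"Relative reduction:  {1 - glp1_rate / comparator_rate:.0%}")
print(f"Absolute difference: {(comparator_rate - glp1_rate) * 100:.1f} percentage points")
```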

Alex: People with diabetes, though.

Bill: Right.

Alex: Not people who already had Alzheimer's or were specifically at risk for it.

Bill: Exactly. These were observational studies embedded in diabetes trials. The researchers were following people being treated for diabetes and obesity, and they noticed this association with lower dementia diagnoses.

Alex: Mmm.

Bill: So it sounds promising on its face.

Alex: It does sound quite promising, actually.

Bill: Which is why the headlines were so confident. But here's the methodological issue—these were observational studies, not randomized controlled trials.

Alex: Which means what, in practical terms?

Bill: In observational studies, you're just watching what happens to people who choose to take a drug versus people who don't. You can't control for all the other differences between those groups. In a randomized trial, you randomly assign people to get the drug or placebo, which removes those confounding factors.

Alex: So the people taking these weight loss drugs might differ from people not taking them in ways that also affect dementia risk.

Bill: Exactly. And here's the thing—

Alex: Wait, sorry, how big were those differences? Like, are we talking about age differences, or...?

Bill: Well, they tried to adjust for the obvious stuff—age, sex, other medications. But here's what they can't fully adjust for: obesity and diabetes are both well-established risk factors for Alzheimer's. So if GLP-1 drugs help people lose weight and control their blood sugar, you'd expect to see lower dementia rates even if the drug itself has zero direct effect on the brain.

Alex: Oh. Oh, that's clever.

Bill: Right?

Alex: So the forty-five percent reduction might just be reflecting that weight loss and better glucose control protect your brain, not that the drug itself is doing anything special for Alzheimer's pathology.

Bill: That's called confounding by indication. The very thing the drug is prescribed for, getting obesity and diabetes under control, would reduce dementia risk on its own, whether or not the drug does anything special in the brain.
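
For transcript readers, here is a minimal Python sketch of the confounding problem Bill describes. Every number in it is invented, and the simulated drug is given zero direct effect on dementia; the point is only that a naive observational comparison can show a sizeable apparent risk reduction for a drug that does nothing, while a randomized comparison of the same population does not.

```python
# A minimal, purely hypothetical sketch of confounding in observational data.
# Every number below is invented for illustration; nothing is taken from the
# studies discussed in the episode, and the simulated drug has ZERO direct
# effect on dementia risk.
import random

random.seed(1)
N = 500_000

def simulate(randomized: bool):
    """Return dementia rates (drug arm, comparator arm) for one scenario."""
    counts = {True: [0, 0], False: [0, 0]}  # on_drug -> [cases, people]
    for _ in range(N):
        good_metabolic_control = random.random() < 0.5
        if randomized:
            on_drug = random.random() < 0.5  # coin-flip assignment
        else:
            # Observational world: people with better metabolic control are
            # more likely to end up on the newer drug.
            on_drug = random.random() < (0.75 if good_metabolic_control else 0.25)
        # Dementia risk depends only on metabolic control, never on the drug.
        risk = 0.01 if good_metabolic_control else 0.04
        counts[on_drug][0] += random.random() < risk
        counts[on_drug][1] += 1
    return (counts[True][0] / counts[True][1],
            counts[False][0] / counts[False][1])

for label, randomized in [("Observational", False), ("Randomized", True)]:
    drug_rate, other_rate = simulate(randomized)
    print(f"{label:>13}: drug arm {drug_rate:.2%}, comparator {other_rate:.2%}, "
          f"relative risk {drug_rate / other_rate:.2f}")
```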

Alex: Hang on, this feels familiar. Didn't we—wasn't there something similar with... the dark chocolate and diabetes thing?

Bill: Yes! That was the same pattern. Observational studies showed people who ate dark chocolate had lower diabetes risk, but—

Alex: But when they actually tested it properly, the mechanism didn't hold up.

Bill: Exactly. Because the people eating dark chocolate were also exercising more, eating more fruit and veg—

Alex: They were just healthier people generally.

Bill: Right. Same thing here.

Alex: Okay. So when Novo Nordisk actually tested this in a proper trial with Alzheimer's patients, what did they do differently?

Bill: The EVOKE and EVOKE Plus trials. They enrolled 3,808 people who already had mild cognitive impairment or early Alzheimer's disease. They randomly assigned half to get semaglutide, half to get placebo, and followed them to see if the drug slowed cognitive decline.

Alex: And it didn't.

Bill: Correct. The drug failed to show any benefit on the primary outcome, which was slowing disease progression.

Alex: Even though—wait, weren't there some biomarker improvements?

Bill: Yeah, they did see some improvements in Alzheimer's-related biomarkers, but that didn't translate to actual cognitive benefit.

Alex: That's properly disappointing. Especially since the mechanism makes sense, doesn't it? There's real science suggesting these drugs could be neuroprotective.

Bill: There is. In animal studies and cell cultures, GLP-1 receptor agonists do show neuroprotective effects. The theory is solid.

Alex: But?

Bill: But this is a reminder that what works in a petri dish or a mouse doesn't always work in humans. And the person saying this isn't some critic—it's Daniel Drucker, who actually helped develop these drugs and just won the 2025 Breakthrough Prize for that work.

Alex: What did he say?

Bill: He called the results "very disappointing" and "a sobering reminder that GLP-1 medicines will not be helpful for every medical condition."

Alex: So when the drug's own developer is saying it doesn't work for this, that's the end of the story, yeah?

Bill: Well...

Alex: Oh, there's more?

Bill: There was a smaller trial published in Nature Medicine in December. The ELAD trial, testing liraglutide, which is another GLP-1 drug.

Alex: And that one worked?

Bill: Sort of. It's complicated. The primary outcome was measuring glucose metabolism in the brain, and that showed no difference between the drug and placebo.

Alex: Okay, so strike one.

Bill: But on one secondary cognitive test—the ADAS-Exec, I think it was called—they did see slower cognitive decline in the drug group compared to placebo.

Alex: That sounds like it did work, though.

Bill: Here's the issue—

Alex: There's always an issue.

Bill: That was an exploratory secondary outcome, meaning it wasn't the main thing they were testing for. And when you test multiple outcomes, you increase the chance of finding something significant just by chance. The researchers themselves said the results weren't corrected for multiple comparisons and need to be interpreted with caution.
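
For transcript readers, a tiny Python sketch of the multiple-comparisons point, assuming an idealized 5% false-positive rate per independent test; the outcome counts are arbitrary and not taken from the ELAD trial.

```python
# A back-of-the-envelope sketch of why exploratory secondary outcomes are
# treated cautiously. Assumes each test has a 5% false-positive rate and the
# tests are independent; the outcome counts are arbitrary, not from ELAD.
for k in (1, 3, 5, 10, 20):
    chance_of_spurious_hit = 1 - 0.95 ** k
    print(f"{k:>2} outcomes tested -> {chance_of_spurious_hit:.0%} chance "
          f"of at least one 'significant' result by luck alone")
```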

Alex: Right. And did that cognitive benefit translate to actual functional improvement? Like, could people do daily tasks better?

Bill: No. They tested that specifically with measures of daily living activities, and there was no significant difference.

Alex: So you might do slightly better on a cognitive test, but it doesn't actually help you function better in real life.

Bill: Exactly.

Alex: That's not particularly useful.

Bill: And this was only 204 people, compared to the nearly 4,000 in the EVOKE trials. When you have a small positive signal in a small exploratory study and a clear negative result in a large definitive trial, the large trial wins.

Alex: Okay, but here's what I don't understand. When I was covering health stories, you'd get these observational studies that would generate massive headlines, and then the follow-up trials would come back negative, and everyone would just move on to the next thing. But you're saying the observational data wasn't actually wrong?

Bill: Well—

Alex: Because if it was showing a forty-five percent reduction and the real answer is zero, that seems pretty wrong to me.

Bill: I think that's too harsh.

Alex: Really?

Bill: Yeah. The observational studies showed a real association. People taking GLP-1 drugs for diabetes did have lower dementia rates. That's not false.

Alex: But the implication was wrong.

Bill: The implication the media drew was wrong. The studies themselves were pretty careful to say they were showing an association, not causation. But by the time it gets to headlines—

Alex: It becomes "this drug prevents Alzheimer's."

Bill: Right. So the data wasn't wrong, it was just incomplete. The association was real, but the reason for it wasn't what people thought.

Alex: Hmm.

Bill: What?

Alex: I don't know, I think I still disagree with you on this. If you publish a study showing a forty-five percent reduction, and it turns out the drug doesn't actually prevent Alzheimer's, then what was the point? You've misled everyone.

Bill: But that's how science works. Observational studies generate hypotheses. Then you test them with RCTs. The system worked exactly as it should.

Alex: The system worked, but the communication didn't. And that's—actually, that's quite important, isn't it? Because by the time the RCTs came back, people had already made decisions based on the observational data.

Bill: That's fair. That's actually a really good point.

Alex: I think my issue is that we present these preliminary findings as if they're conclusive, and then we act surprised when they don't hold up.

Bill: Yeah. Yeah, okay. I think you're right about the communication part. The science itself was working as intended—observational data suggesting a signal, then RCTs testing it properly. But the way it was communicated to the public didn't reflect that uncertainty.

Alex: Right.

Bill: So maybe we need to be clearer about what observational data can and can't tell us.

Alex: That would be nice.

Bill: Although, I mean, "drug shows association with lower dementia risk in observational study of diabetes patients" is a much less exciting headline than "drug cuts Alzheimer's risk by 45%."

Alex: Well, that's the problem, isn't it?

Bill: It is.

Alex: Anyway, what were we—right, so what should people actually take from this? Because millions of people are on these drugs, and they've probably heard the Alzheimer's claims.

Bill: If you're taking a GLP-1 drug for diabetes or obesity, keep taking it if you and your doctor agree it's working for those conditions. The weight loss and metabolic improvements are real, and those likely do offer some brain protection indirectly.

Alex: But don't take it specifically hoping to prevent Alzheimer's.

Bill: Right. The evidence doesn't support that. The clinical trials testing that specific claim came back negative.

Alex: What about people who are concerned about Alzheimer's because it runs in their family? What should they focus on instead?

Bill: The good news is we know quite a bit about what actually does reduce dementia risk. Cardiovascular exercise, cognitive stimulation, social engagement, Mediterranean diet, good sleep, and treating cardiovascular risk factors like high blood pressure and diabetes if you have them.

Alex: And the diabetes point is important, because that was part of the confusion, wasn't it? Treating diabetes does appear protective, whether you use these drugs or other methods.

Bill: Exactly. The benefit might be from improving your metabolic health generally, not from the specific drug.

Alex: Which is actually quite empowering information—you don't necessarily need an expensive medication to get that benefit.

Bill: Right. And when I was doing A/B testing at my old job, we'd see this kind of thing all the time. You'd see a signal in observational data, get really excited, then run a proper test and find out it was correlation, not causation.

Alex: How often would the proper test contradict the observational stuff?

Bill: Often enough that we learned to be skeptical. Maybe—I don't know, maybe half the time? More? It depended on how many confounders were lurking.

Alex: That's depressing.

Bill: It's just how it works. Observational data is useful for generating ideas, but it's not proof.

Alex: Right. Okay, so back to the actual trials. The EVOKE trials—nearly 4,000 people, proper randomization, and nothing. What was the timeline on those? How long did they follow people?

Bill: I'd have to double-check the exact duration, but these were Phase 3 trials, so we're talking at least a couple years of follow-up.

Alex: And genuinely nothing? Not even a small effect?

Bill: Failed on the primary outcome. There might have been some secondary measures that looked better, but when your main outcome doesn't move, that's it.

Alex: And by the time those trials reported in November, the damage was already done. I saw GP surgeries with posters about it.

Bill: Which is why we need to be more careful about how we communicate preliminary findings in the first place.

Alex: The other thing that strikes me is how quickly this all happened. May to November—that's only six months between "this is brilliant news" and "actually, never mind."

Bill: That's the pace of modern science communication. But the science itself was working exactly as it should. Observational studies suggested a signal, researchers designed proper trials to test it, and the trials gave us a definitive answer. The problem is the headlines didn't reflect that uncertainty.

Alex: So the lesson for listeners is: when you see headlines about a dramatic benefit from a drug, check what type of study it's based on. Observational or randomized trial?

Bill: And whether it's testing the actual outcome you care about. The observational studies weren't testing Alzheimer's prevention—they were testing diabetes control and happened to notice lower dementia rates.

Alex: Right. And when they actually tested Alzheimer's prevention, it didn't work. That's the story.

Bill: That's the story. And honestly, even though it's disappointing, it's good to know.

Alex: Yeah.

Bill: Now people can make informed decisions instead of clinging to false hope.

Alex: And maybe the next time a headline promises a miracle drug, people will remember to wait for the clinical trials.

Bill: We can hope.

Alex: Huh. "Hope" might be the wrong word at this point.

Bill: Fair.