The InForm Fitness Podcast

Adam Zickerman and Mike Rogers of InForm Fitness are joined by Dr. Peter Attia to discuss the studying of the studies we are constantly barraged with in the news and in our social media feeds. Dr. Attia explains the mechanics of scientific research and how difficult it is to establish cause and effect, so that we can become better equipped to understand, and when warranted mistrust, the information we are given regarding exercise, nutrition, disease prevention, and more.

Show Notes

Dr. Peter Attia is the founder of Attia Medical, PC, a medical practice in New York City and San Diego that focuses on the applied science of longevity. Dr. Attia also happens to be a client of InForm Fitness.
Dr. Peter Attia's Website: https://peterattiamd.com
Richard Feynman on Scientific Method (1964) https://www.youtube.com/watch?v=0KmimDq4cSU
Do We Really Know What Makes Us Healthy? by Gary Taubes: https://www.nytimes.com/2007/09/16/magazine/16epidemiology-t.html

Adam Zickerman - Power of 10: The Once-A-Week Slow Motion Fitness Revolution http://bit.ly/ThePowerofTen
For a FREE 20-minute strength training full-body workout and to find the InForm Fitness location nearest you, please visit http://bit.ly/Podcast_FreeWorkout

47: Living with Uncertainty - A Conversation About Studies with Dr. Peter Attia (Transcript)

SUMMARY KEYWORDS
study, exercise, relative risk, question, people, risk, difficult, point, drug, read, understand, epidemiology, picking, metformin, called, wrote, smoking, rule, squats, tricycle

SPEAKERS
Tim Edwards, Dr. Attia, Adam, Mike
 
Dr. Attia  00:06
You know, cognitive dissonance and bias are so strong. And once we lock in on an idea, it becomes very difficult to see evidence that contradicts that. And it also becomes very difficult to scrutinize evidence that supports it.
 
Tim Edwards  00:28
Welcome, InForm Nation, to Episode 47 of the InForm Fitness Podcast. I'm Tim Edwards with the Inbound Podcasting Network, and in a minute, Adam Zickerman, the founder of InForm Fitness, and Mike Rogers, general manager slash trainer at InForm Fitness New York City, will be joined by the gentleman whose voice opened the show, Dr. Peter Attia, the founder of Attia Medical, PC, a medical practice with offices in San Diego and New York City focusing on the applied science of longevity. He also happens to be a client of InForm Fitness, like myself. Dr. Attia joins us today to discuss the studying of studies: the mechanics of scientific research, and how hard it is to establish cause and effect. Now you might be asking, what does any of this have to do with slow-motion high-intensity strength training? Well, plenty. How many times in this podcast have we referenced a study based upon exercise and nutrition alone? You see it every day in your social media feeds and on the news. But how many of these studies are actually true or even accurate? How are these studies administered? And can you trust the results? Well, to quote Adam Zickerman: hopefully, after listening to this episode, you'll be a bit more equipped to understand the barrage of information you read and hear about, and not fall hook, line, and sinker for every claim that is reported as a study.
 
Adam  01:53
Hello, everybody, this is Adam. Very excited about this podcast. It's a little heady, it gets into statistics, but hopefully we're going to break it down for the layperson, so that when you read headlines that make crazy claims, your, well, basically your bullshit detector goes off and you can figure out how to navigate that. So we have with us to talk about this Dr. Peter Attia. He's a physician practicing in New York City and California, and his practice focuses on longevity, which he calls the overlap of lifespan and healthspan, which is kind of cool. His clinical interests are in nutritional biochemistry, exercise physiology, lipidology, endocrinology, and a few other cool things. He's also somebody who works out here with us. You just worked out here just now, didn't you?
 
Dr. Attia  02:39
Sure did.
 
Adam  02:40
How you feeling?
 
Dr. Attia  02:42
That hurt. That was...
 
Mike  02:43
Sorry. Sorry, Peter.
 
Adam  02:45
Mike trained him. Yeah.
 
Tim Edwards  02:48
Sorry, not sorry.
 
Adam  02:50
Peter is also an incredible athlete, endurance athlete. And he understands the efficacy of these types of high intensity workouts that we talked about. So Doctor, thank you for coming on the show.
 
Dr. Attia  03:02
Thank you for having me. But I will add one correction: I don't think I can use the word athlete to describe anything I do anymore.
 
Adam  03:10
Former athlete.
 
Mike  03:12
He looks like an athlete. It's like you haven't seen him. He looks pretty good.
 
Adam  03:15
He's a good-looking guy, even all beat up. So, I want to do this podcast because I read the series you wrote, a five-part series called Studying Studies, and you go into great detail about the mechanics, if you will, of scientific research, and how difficult it is to establish beyond a shadow of a doubt the relationship all research tries to show: cause and effect. So Dr. Attia, you start off your series quoting two iconic individuals. First, Mark Twain, who I guess you mentioned was quoting Benjamin Disraeli: "There are lies, damned lies, and statistics." And you also quote the Nobel Prize-winning physicist Richard Feynman, one of my favorites, actually. I read his posthumously published Caltech lecture called The Meaning of It All. Is that where you got this quote from, by the way?
 
Dr. Attia  04:07
Honestly, I don't remember which one. I mean, he said it so famously, so many times.
 
Adam  04:12
Yeah
 
Dr. Attia  04:12
My favorite book, of course, is Surely You're Joking, Mr. Feynman!, which is sort of the entertainment side of Feynman.
 
Adam  04:21
So Dr. Feynman says: "The first principle is that you must not fool yourself, and you are the easiest person to fool. So you have to be very careful about that. And this is our responsibility as scientists, certainly to other scientists, and I think also to laymen." So Doctor, tell me if I'm wrong, but the main point of this whole series of essays is something you actually say in your first essay, which is, and I'm quoting you: "Statistics can be both persuasive and misleading if we're not careful, and it's self-persuasion that we must vigilantly guard against if we really want reliable knowledge about the world we live in." So can you tell us why this is so important to understand?
 
Dr. Attia  05:00
Sure. But before I do, I also want to acknowledge the work that goes into the stuff that I write. It couldn't really be done without the team of analysts I have. Inside my medical practice, I have a team of four analysts whose full-time job it is to help me read, assimilate, digest, and understand the absolute barrage of data that's out there. I think in one of our posts we even mentioned that, at last check, there are just under 100,000 pieces of literature produced monthly on PubMed in the English language. And our estimation is that probably somewhere in the neighborhood of 100 of them would be relevant to what we do, meaning they overlap with our field of interest but also rise to the standard of being worth our attention. To funnel that down from 100,000 to 100 is something I could use a team of 400 analysts to do, but the four guys I have are amazing. In particular Bob Kaplan, who's sort of in charge of the whole operation and is my right hand; without him, this whole thing that we do wouldn't exist. So with that said,
 
Adam  06:12
That's Bob, shout out to Bob
 
Dr. Attia  06:14
And also, I think I would say, we wrote this series knowing it was not going to be that, quote unquote, interesting. There's no clickbait involved here. This is something that we hope will, over time, become the sort of thing people go back to. I forget who said it, but there's an interesting quote that said something to the effect of: you only learn something when you're actually ready to learn it. And of course it was stated much more eloquently. But the point is, a lot of times these things fall on deaf ears. So anyway, with that said, what we've created is a bit of a repository called Studying Studies. And it begins, I think our first one is on relative versus absolute risk, which I'm sure we'll get to. But to your point, which is the important one: we are all, myself included, so guilty of this. Cognitive dissonance and bias are so strong, and once we lock in on an idea, it becomes very difficult to see evidence that contradicts it, and it also becomes very difficult to scrutinize evidence that supports it. And so I think what Feynman says quite eloquently could also be stated the way one of the guys I remember very well from my postdoc put it to me: you have to learn to kill your babies. You're going to have a hypothesis, you're going to do a whole bunch of experiments, you're going to think you're getting an answer, and you're going to start to drive that answer; you're going to look at ways to see your data in a manner that makes your answers seem more believable. But you have to be able to scrutinize that to the point where you ultimately end up killing most of your babies. So I don't pretend to say that this is easy. I think this is actually very difficult.
And in many ways, the easiest way to do it is the way we try to do it, which is to have more than one person involved and have opposing sets of eyes. I've spoken about this in other discussions around the notion of creating a blue team and a red team. This is something that's not uncommon in experimental disciplines of science, like physics, but I think it's frankly valuable even in things like a hedge fund. For example, if you're trying to decide whether a stock is worth the price being asked right now, you typically will send off a team of analysts to do a bunch of research on it. But that exercise can be much more powerful if you send off a blue team and a red team that don't communicate with each other in any way, shape, or form, but go and look at the data and force themselves to make the for and against cases, and then you compare them.
 
Mike  08:50
Create a debate, hoping that there'll be a debate.
 
Dr. Attia  08:54
Absolutely, yeah. So internally we kind of talk about this idea of blue team, red team as a way to keep ourselves honest, because I don't really trust myself that much. And I don't know that any individual should be able to trust themselves.
 
Adam  09:07
No, I know I'm biased, totally, when it comes to the way we train people and how we view exercise. And like you said, you don't learn something until you're ready for it, and that's where I am in my career, trying to push the envelope with exercise science and knowledge. You know, this can't be it, right? There's got to be more to all this. I think it was Carl Sagan who said that if there's an exception to the rule in science, the rule is wrong. And I'm finding exceptions in exercise all the time. So even I need to wake myself up, push the envelope, and question everything that I do, and this is why I have you on the show today.
 
Dr. Attia  09:44
Well, I think that speaks to something that makes biology much more difficult than physics or mathematics. The first thing is, I think there are very few rules in biology in the way that we think of rules in those other disciplines. So to your point, I don't think I could with a straight face make the case that there is one exercise regimen, one nutritional regimen, one endocrinologic regimen, etc., that is optimal for every person. I simply know that's not the case; I could argue it on first principles, and I could argue it empirically. And a corollary of that is that there are no proofs in biology. We should be very careful with the use of that word. If I had a dollar for every time I saw a study that said such-and-such proves X, or this study proves Y... It's incredible sloppiness, either on the part of the journalist or the scientist, because unlike mathematics, where there are proofs, there are no proofs in biology. Everything is stochastic, everything is about probability, everything is about confidence. But there is no proof. And therefore, as an extension, there are likely very few rules, outside of things like the central dogma.
 
Adam  10:56
Exactly. So let's get to this idea of self-persuasion and use some examples. One example you used was a headline; the title of the journal article said, "Cholesterol-fighting drugs lower risk of Alzheimer's disease." And you used that as an example of self-persuasion.
 
Dr. Attia  11:14
Well, you know, I was thinking about that after we wrote it, and it wasn't even a great example. I mean, it was a good example, but there's a far more egregious example, actually, which is the use of hormone replacement therapy in postmenopausal women and the risk of breast cancer associated with it. I remember Bob and I talking about this before we put that piece out and deciding we weren't going to use that example, since it warrants an entire post on its own, which we'll get to. The gist of it is the following. I think many people, if they're up in the peanut gallery and not digging down into the data, will still have some vague notion that hormone replacement therapy increases the risk of breast cancer in women. So the question is, where does that inclination come from? It comes from something called the Women's Health Initiative, which was a study done in response to, interestingly, a series of epidemiologic studies suggesting that women who went on hormone replacement therapy actually saw improvements in many aspects of morbidity (it's obviously not easy to look at mortality), for example, greater cognition, improved bone density, improved heart health, less breast cancer, etc. But, as was the reasonable thing to do, the question was: well, let's put this to the test with a prospective, randomized, double-blind, controlled trial. And that was done. I won't get into the complete shortcomings of the actual study, because the study itself had at least four enormous structural shortcomings that are not the fault of the people who ran it. In other words, at the time the study was done, I think it had to be done incorrectly, as ridiculous as that sounds. But when we look back today, it's very clear that you would do at least four things orthogonally differently, not just slightly differently.
Nevertheless, when the study came out, the headline, the only thing that most people remember, is, quote, breast cancer risk went up 25% in the women getting HRT. And so to this day, if I ever want to have a discussion about HRT with a patient, it begins with about a 30-minute lecture on the Women's Health Initiative, which is one of the papers I actually have sitting on my desk in my office, so I can very easily refer to it.
 
Adam  13:24
It kind of reminds me of the autism and vaccination type of thing.
 
Dr. Attia  13:28
Well, that's an even worse example, because that was actually perpetuated by what effectively turned out to be fraud, or at the very least scientific misconduct. This was an example where, yes, it is true that when you compared the use of estrogen and progestin to a placebo, the hazard ratio for breast cancer was 1.24. So what does that mean? A hazard ratio of 1.24 means a 24% increase in the relative risk of breast cancer. I'll resist all urges to explain why I don't even believe that that is correct. But let's assume it is correct. Let's assume that there was a 24% increase in response to the use of estrogen and/or progestin. The important question that should be asked, but was often overlooked, was: what was the absolute increase in risk? The relative increase in risk doesn't tell you anything by itself. And when you look back at the data, which is very easy to do, you don't actually have to be a statistician or mathematician to do this; it's only basic arithmetic, which I think we walk readers through in that series. The absolute increase in risk was approximately going from four cases per 1,000 to five cases per 1,000. Now, going from four cases per 1,000 to five cases per 1,000 is indeed a 25%, well, actually a 24%, increase in relative risk, but it's only a one-in-1,000, or 0.1%, absolute increase in risk. So the way I try to help women understand HRT is: let's assume that that's correct, and I'll give you seven reasons why I don't believe it is, but if that is correct, if that becomes our ceiling of risk, you then have to ask the question: is a 0.1% increase in your risk of breast cancer worth the alleviation of some of the symptoms you might actually have during menopause? And of course, I think the answer would be: it depends on how bad those symptoms are.
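The arithmetic in this example is easy to verify for yourself; here is a short Python sketch using the four-per-1,000 and five-per-1,000 figures quoted above (an illustration, not part of the conversation):

```python
# Relative vs. absolute risk, using the figures quoted above:
# control group: 4 breast cancer cases per 1,000 women
# treatment group: 5 cases per 1,000 women

control_risk = 4 / 1000    # 0.004, i.e. 0.4%
treatment_risk = 5 / 1000  # 0.005, i.e. 0.5%

relative_increase = (treatment_risk - control_risk) / control_risk
absolute_increase = treatment_risk - control_risk

print(f"Relative increase: {relative_increase:.0%}")  # 25%
print(f"Absolute increase: {absolute_increase:.1%}")  # 0.1%
```

The same one-case-per-1,000 difference is honestly described by both numbers; the headline simply picked the larger one.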
 
Adam  15:31
right
 
Dr. Attia  15:31
If a woman is sailing through menopause and she has no symptoms, then one could maybe convincingly say, look, it's not worth the hassle slash the risk. But when a woman is debilitated by a number of symptoms, including all the vasomotor symptoms, which are usually the first ones, you have to ask the question: how bad is a 0.1% absolute increase in a disease that has a one-in-12 chance of killing you?
 
Adam  15:53
Especially now, since you can probably factor in family history of breast cancer.
 
Dr. Attia  15:56
Absolutely. And we have much better ways to further refine risk for an individual. So this, to me, is one of the most important things the layperson wants to have in their toolkit: if ever I read something, whether in the actual paper itself or, more typically, in the way it's reported, I want to make sure I understand the difference between absolute and relative risk.
 
Adam  16:21
So Peter, is it too unwieldy a question to ask you to help us, help our listeners, figure out what relative risk is versus absolute risk?
 
Dr. Attia  16:31
It might be, but I can give it a try. The example I just gave is probably as good a one as you can use, although I think we talked about Alzheimer's disease elsewhere. The relative risk, to define it without using the word relative, says: from wherever you start, and I don't know, or I'm going to ignore, where you're starting, how much does the probability of this event go up? And that relative risk in this example is 25%. So you went from something that has a 0.4% chance of happening to something that has a 0.5% chance of happening, and going from 0.4 to 0.5 is mathematically an increase of 25%. So the relative risk is 25%. But of course, if you went from a 40% likelihood of happening to a 50% likelihood of happening, that would also be a 25% relative risk. The difference is that in the latter example, your absolute increase in risk is 10%, whereas in the former, your absolute increase in risk is 0.1%. And that's why it's very difficult to make decisions clinically without knowing both. But because we are, I think, inherently a bunch of lazy people, and I include myself in that as much as anybody, we want headlines, we want what's going to sound more exciting: there was a 0.1% increase in the risk of something, or there was a 25% increase. Both are correct, but both are incomplete in isolation.
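The contrast Dr. Attia draws here, the same 25% relative risk arising from very different absolute risks, can be captured in a small helper function (illustrative only; the function name is ours, not from the discussion):

```python
def risk_change(baseline, new):
    """Return (relative increase, absolute increase) for a change in risk."""
    return (new - baseline) / baseline, new - baseline

# 0.4% -> 0.5%: the HRT-style example
rel1, abs1 = risk_change(0.004, 0.005)
# 40% -> 50%: the same relative jump, very different stakes
rel2, abs2 = risk_change(0.40, 0.50)

print(f"{rel1:.0%} relative, {abs1:.1%} absolute")  # 25% relative, 0.1% absolute
print(f"{rel2:.0%} relative, {abs2:.0%} absolute")  # 25% relative, 10% absolute
```

Identical relative risks, a hundredfold difference in absolute risk, which is exactly why a headline quoting only one of the two numbers is incomplete.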
 
Adam  18:03
You gave another example about the relative risk for a new drug that reduced cancer incidence by 50%. That was a relative risk, right? But the absolute risk really goes from two in 1,000 to one in 1,000, correct? So everyone thinks, wow, I have a 50% chance of reducing my risk of cancer, when really that's actually not true. Well, it's true...
 
Dr. Attia  18:28
It's true in a relative sense, but then you have to evaluate: is dropping my absolute risk by 0.1% really worth the trade-off of taking whatever this drug is?
 
Adam  18:40
Exactly right. And you say we're messy creatures, and there are so many moving parts and pieces to what will influence the results of a study. For example, the exercise habits of the individuals that make up the sample, their access to health care, or their smoking history can confound the results of the study you're trying to use to prove an association. So one confounder I'd love you to talk about is this healthy user bias, because that's a common one, and it's kind of easy to understand how it plays into studies. What I'm trying to do is show how easy it is for things to screw up a study, and how the results really might not be what we think they are.
 
Dr. Attia  19:27
Yeah. I mean, Richard Feynman does a great job with this. There's a great clip on YouTube, I know we've linked to it somewhere in the blog; he's giving a lecture at, I think, either Caltech or Cornell. Maybe you can find it and link to it in the show notes. It's a beautiful, beautiful exercise of him on a blackboard walking through the scientific method. He explains it in a way that I won't even try to reproduce, because it's just so priceless and Feynmanesque, but it's effectively: you make an observation, you make a guess, you design an experiment and compute the consequences of your guess, in other words, what the experimental outcome would be if your guess or hypothesis is correct, then you run the experiment and measure the outcome against the outcome you predicted. Now, he explains it much more eloquently. The gist of it is: outside of doing experiments, we actually cannot establish cause and effect, except in the most extreme circumstances, and those tend to be by far the exceptions and rarely the rules. So the healthy user bias you're talking about becomes an issue when we rely on things outside of experiments, which we unfortunately have to do very often, or at least choose to do very often, in human biology, because, as you pointed out, humans are messy. We live for long periods of time; you can't study us in captivity. So it's very difficult to do an experiment. For example, suppose you wanted to ask: do people who exercise four times a week or more have lower risks of (pick your favorite disease) versus people who don't exercise? Well, to do that experiment is almost impossible.
Because what are you going to do, get thousands of people and randomize them into two groups? By randomization, I mean that you've mathematically created a large enough sample that you now know your two groups are statistically identical. And now you have one group exercise five times a week and the other group never exercise. First of all, you have to assume they're actually doing that; how you could make them do that without putting them in captivity for 20 years, I don't know. But at the end of 20 years, or even 10 years, whatever, you then ask the question: is there a difference in heart disease, cancer, Alzheimer's disease, etc.? Now, we certainly think there would be. But we don't do that. Instead, what we do is say, well, let's just go and survey the population. Let's take a backwards look: let's find people, do surveys, and find out how much exercise everybody does. Then we'll stratify people and do a mathematical analysis to try to adjust for other variables, and see if it gives us the answer. The problem with that is, if you're comparing people who on their own are choosing to exercise five times a week versus people who on their own are choosing not to exercise, the likelihood that you are able to tease out every other difference, for example, their sleep habits, their eating habits, is very low. Yes, using statistical analyses you can probably adjust for some of the more obvious differences, such as smoking; as a general rule, I think you'd find a higher incidence of smoking in the non-exercising crowd than the exercising crowd, and that could probably be accounted for. But many of these things cannot. And there's actually an amazing essay on this, or essay may be the wrong word, by Gary Taubes, who I know you must know. He wrote a great piece in 2007 in The New York Times Magazine; I can't remember the title of it.
But it's something you should also link to, because I think it's a real gem of a piece. The gist of it is: what do we not know about studies? I think it's one of the better pieces on the limitations of epidemiology, and specifically this healthy user bias that really makes it difficult to understand the impact of nutritional choices, exercise choices, other quote-unquote lifestyle choices, frankly even drug choices, on health outcomes. Because when you do these analyses, the hazard ratios, meaning the magnitude of difference between the groups, are usually so small that they fall well below the threshold at which epidemiology can support a claim. And it's not significance; you can often find statistical significance. It's: can you be confident that there's a causal relationship here? And I would say, unfortunately, the answer today is virtually never.
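To make the healthy user bias concrete, here is a toy simulation (our illustration, not Dr. Attia's; all the probabilities are made up). Exercisers self-select, and the people who choose to exercise are also more likely to have an unmeasured healthy habit, so the naive observational comparison overstates the true effect of exercise, while randomization recovers it:

```python
import random

random.seed(0)

def simulate(randomized: bool, n: int = 100_000) -> float:
    """Return the apparent risk reduction attributed to exercise."""
    sick = {True: 0, False: 0}   # disease counts by exerciser status
    count = {True: 0, False: 0}  # group sizes
    for _ in range(n):
        healthy_habits = random.random() < 0.5  # unmeasured confounder
        if randomized:
            exercises = random.random() < 0.5   # assigned by coin flip
        else:
            # self-selected: health-conscious people exercise more often
            exercises = random.random() < (0.8 if healthy_habits else 0.2)
        # true effects: exercise and the healthy habit each cut risk by 5 points
        risk = 0.20
        if exercises:
            risk -= 0.05
        if healthy_habits:
            risk -= 0.05
        disease = random.random() < risk
        sick[exercises] += disease
        count[exercises] += 1
    return sick[False] / count[False] - sick[True] / count[True]

print(f"observational gap: {simulate(randomized=False):.3f}")  # ~0.08, inflated
print(f"randomized gap:    {simulate(randomized=True):.3f}")   # ~0.05, true effect
```

The observational comparison credits exercise with roughly an 8-point risk reduction when the true effect is 5 points; the extra 3 points belong to the confounder that the survey never measured.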
 
Adam  23:57
Another example of healthy user bias that comes to mind: I hear often that people who floss their teeth have less cardiac disease, as if flossing your teeth itself has an actual effect on your heart.
 
Dr. Attia  24:11
Now, it could. The point is, we can't learn it from that study.
 
Adam  24:15
We can't learn it, because people who floss their teeth are also usually healthier in other ways and have healthier habits in other ways, and that might be the reason they don't have heart disease as often. So that's the healthy user bias.
 
Dr. Attia  24:26
That's exactly right. And that's a great example of a question that would be quite vexing to me, because, using cardiac disease as the example, it's certainly a disease that's driven by lipoproteins, inflammation, endothelial function, all these other things. So is there a plausible mechanism by which having poor dentition could increase your risk of heart disease? Absolutely, there's a plausible mechanism there. But my guess is the hazard ratio in those studies is sufficiently low that, at best, it's generating a hypothesis to be tested.
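The distinction Dr. Attia keeps drawing, between statistical significance and an effect large enough for epidemiology's blunt tool, can be sketched with a hand-rolled two-proportion z-test (a toy example with made-up incidence figures, not data from any study he cites):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Z statistic for the difference between two observed proportions."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error
    return (p1 - p2) / se

# A tiny difference: 5.2% vs 5.0% disease incidence (risk ratio only 1.04)
z_small = two_proportion_z(0.052, 2_000, 0.050, 2_000)       # ~0.29, not significant
z_large = two_proportion_z(0.052, 500_000, 0.050, 500_000)   # ~4.55, "significant"

print(round(z_small, 2), round(z_large, 2))
```

With enough people, the same trivial 1.04 risk ratio clears the conventional 1.96 threshold for significance, yet it remains far too small an effect for an observational design to support a causal claim.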
 
Adam  25:00
Yeah, well, regardless, whether it affects your heart or not, you should floss your teeth, everybody.
 
Mike  25:07
We think, probably.
 
Adam  25:12
So, you've been throwing around a couple of words like epidemiology, and an epidemiological study is another term for an observational study. An observational study, or epidemiological study, is different from what they call a randomized controlled study. And you touched on how hard that is, you'd practically have to put people in captivity, because you have to control for all these variables; otherwise they'll all confound the results. So why, and you touched on this a little bit, maybe you can say a little more, why aren't there more randomized controlled studies, which are considered the researcher's gold standard? Why are we relying on all these epidemiological studies, which, as you just pointed out, have all these confounders, all these problems?
 
Dr. Attia  25:58
So I think it comes down to a couple of things, but one of the biggest issues is the most obvious one, which is a logistics problem. It is very difficult to do randomized controlled trials, and it is often much, much more expensive. It also takes much longer. Say you want to study, let's go back to the previous example we had: does the frequency of exercise impact, just to pick one metric, your risk of Alzheimer's disease? That's a very important question, and certainly one that logically would make sense: more exercise, better blood flow, a number of neurotrophic factors, BDNF, all of these things. You can come up with a hundred mechanisms why. But if it's put to the test, are we going to get two groups of several hundred people and force one of them not to exercise and force the other one to exercise? And again, are we going to put these people in captivity? That's impossible. So they're going to have to live in the free world, and you hope the compliance is high enough that you create enough discrimination between the two groups, and then follow for an outcome. Now, it's not to say that anything I described there cannot be done. All of it can be done. The question is: how difficult is it? How expensive is it? And, oh, by the way, who's paying for that study? Because it's very difficult to remunerate on an exercise study. We have a much higher appetite for doing randomized controlled trials in pharmacology, and part of it is because the FDA says you have to: we're not approving this drug if you haven't demonstrated safety, efficacy, and effectiveness in prospective clinical trials. So there's an incentive to spend a billion dollars, which is about how much it costs today, a little over a billion, to get a drug approved through that process. But it's very difficult to imagine doing that with something for which there's no remuneration.
That's the first fundamental problem. The second one comes down to the ethical problem. Sometimes the most interesting questions are ones in which we just really don't think it would be ethical to randomize people to one of the two groups.
 
Adam  28:04
So you can't test harmful effects.
 
Dr. Attia  28:06
Yeah, we can't. Not that this is debatable today, but there was certainly a great period of time between the landmark Surgeon General's report in the early 1960s demonizing smoking and the widespread acceptance of the role smoking played in lung cancer. And the question is: with all of that mounting evidence, and those were examples where the hazard ratios were more than 10, so this gets into the territory where epidemiology may actually be sufficient for determining causality, would we, if we were the scientists and physicians designing that study, be comfortable randomizing people to a forced-smoking group? And the answer is no. Similarly, today, if I were involved in a clinical trial, I don't know that I'd be terribly excited about randomizing people to a don't-exercise group, or a let's-disrupt-your-sleep-for-the-next-10-years group. Certainly, on a short-term basis, it's probably reasonable to do sleep-disruption studies over a period of weeks or even months to test the validity of theories around the importance of sleep for, for example, glucose homeostasis.
 
Mike  29:11
In a silly way, that's what the guy did in Super Size Me, you know, how he just ate all that crappy food
 
Dr. Attia  29:16
That's right.
 
Mike  29:17
You know, I mean, he chose to do that to himself and then tested his health markers throughout the period.
 
Adam  29:24
A sample size of one, yeah.
 
Dr. Attia  29:26
Yeah. But basically, these are the fundamental issues: compliance, logistics, duration, cost, and ethics.
 
Adam  29:37
Yes. Okay. So hence the reason we rely on so many observational, or epidemiological, studies. You even pointed out that 52 observational studies were looked at, and these particular 52 observational studies were actually tested by randomized control trials — and the randomized control trials showed that every single conclusion of the 52 observational studies was wrong. That's zero for 52. That's ridiculous. So what gives? I mean,
 
Dr. Attia  30:14
I think a lot of the low-hanging fruit in epidemiology is gone. I think that's the bottom line. When you think of the real gems, the real case studies of what made epidemiology great — epidemiology is such a blunt tool that you need enormous amounts of discrimination in what you're trying to detect. Smokers getting lung cancer or not getting lung cancer; people exposed to massive amounts of asbestos getting mesothelioma, yes or no; chimney sweeps or non-chimney-sweeps getting scrotal cancer, yes or no. These exposures had such enormous impacts that the epidemiology could give us much, much more confidence in an answer. Today, we're dealing with things where — and to be clear, going back to those 52 cases, I'm not suggesting that every one of those epidemiology studies was incorrect. It's also possible that some of those randomized control trials were so poorly done that they missed the mark. What I'm saying is, at that point, we're outside the discriminatory capacity of the tool to measure. And I think we've written about this as well — what's more upsetting is when you look at John Ioannidis, who, if anybody's interested in this space and is looking for one person to read, in terms of a very thoughtful academician with some of the most insight on this topic — it's this fellow by the name of John Ioannidis at Stanford University. John's written some of the most cited papers on this topic, including a very famous paper, I believe in 2005, in PLoS Medicine, that gets at this notion of how most published research is incorrect.
 
Adam  31:57
So, all right, which kind of brings me to the grand finale question: now that we're thoroughly confused, and we can't fully trust observational studies or randomized control studies, because they're so difficult to do and often not done well — what do we do as laypeople? We're trying to lead healthy lives, trying to improve ourselves, trying to make decisions about what drug to take or not take. What do we do? I mean, like
 
Mike  32:22
Is anyone really prepared to do that deep dive? Like, step one,
 
Adam  32:25
Even with the deep dive, it seems like, you know, you can read a book on statistics and still do your homework and still figure out, holy cow, we still don't know for sure. So what do we do?
 
Dr. Attia  32:37
Well, I mean, I guess there are a couple of ways to think about this. The first is to get comfortable with uncertainty. I don't think we're wired to deal with uncertainty very well.
 
Adam  32:50
that's why there's religion.
 
Dr. Attia  32:51
Yeah, I think that's why there are a lot of actual problems, frankly. And when I say we, I'm being very deliberate to include myself, because even though I'm trained in mathematics, and I think as probabilistically as anybody I know, in my darkest moments I tend to revert to binary thinking. And so I think
 
Adam  33:14
what does that mean?
 
Dr. Attia  33:15
Meaning it's black or white, it's this or it's that — as opposed to thinking in probability distributions. The real way we should always be thinking about life is in probabilities.
 
Adam  33:26
Yeah
 
Dr. Attia  33:26
Now, in some cases, we love the examples where the probabilities are so clear. If I drop this pen, what is the probability that it will hit the floor? You can actually describe that with a probability function in physics and mathematics. And it will show you that
 
Adam  33:41
there's a small probability that it won't actually hit the floor
 
Dr. Attia  33:44
Yeah, there's a theoretically small probability that it won't hit the floor. From a practical standpoint, in that example, the answer is zero. So we have a bunch of rules that govern our universe that get us, I think, overly comfortable with the notion of yes-or-no, black-or-white, zero-or-one, binary-type answers. In engineering, we describe that as being digital: on or off.
 
Adam  34:10
Gotcha
 
Dr. Attia  34:11
The opposing concept is called analog, where you have a sliding scale from zero to one — you turn up the lever. So to me, biology is much more analog than digital. And therefore, for every question, you just have to say, well, we may never actually know the answer to this. What I have to do if I'm making a decision — or what my doctor has to do if she's making the decision, or at least advising me on it — is understand the risk-adjusted return on this investment. The very simplest way I try to explain this to patients is using a two-by-two matrix. On the x axis is where I talk about the reward. On the y axis, the vertical axis, is where I talk about the risk. Now, even though those are —
 
Adam  35:01
the hormone replacement therapy might be a good example.
 
Dr. Attia  35:04
Sure, but frankly, everything that we do is an example of this. In fact, Bob and I often will draw this two-by-two matrix on a whiteboard and shade in the parts of it that we think represent any type of intervention. Where does high-intensity interval training fit on this? Where does long-distance endurance training fit into this? Where does taking metformin fit into this? Where does intermittent fasting fit into this? Anything that you can do should be able to be placed on that matrix. And it's not a point, it's not a dot, by the way — it's like a shaded
 
Adam  35:36
curve
 
Dr. Attia  35:37
Curve, yeah, exactly. So it's complicated. But you can simplify this into a two-by-two as opposed to just a continuum. And the two-by-two would be: when dealing with the reward, or the payoff, think about picking up a bitcoin versus picking up a dollar bill. Now, this example might be irrelevant in a few years, if Bitcoin's irrelevant — but let the record show that at the moment, a bitcoin is still worth something. So you're asking the question: am I picking up a bitcoin, or am I picking up a dollar bill — or a penny, for that matter, something we would ascribe very little value to? On the risk side, the question is: am I picking this thing up while it's sitting in front of a moving tricycle, or a moving train? I try to look at everything I do through that lens. And the first thing you want to realize is you never, ever want to be picking up dollar bills in front of moving trains. That's an obvious statement, but it's worth thinking through, because there are lots of things that people propose to do that, in my mind, amount to picking up a dollar bill in front of a moving train. You could get a dollar, but it could also be a catastrophic outcome. Conversely, there are very few opportunities that we aren't already aware of that are akin to picking up bitcoins in front of tricycles. Most of those things have been realized. For example, not smoking is picking up a bitcoin in front of a tricycle.
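The two-by-two he describes can be sketched as a toy function. This is purely illustrative — the quadrant labels and example placements below are one reading of the conversation, not Dr. Attia's actual chart:

```python
# A toy version of the reward/risk two-by-two described above.
# "bitcoin" vs. "dollar" is the payoff axis; "tricycle" vs. "train"
# is the risk axis. Placements are illustrative assumptions only.

def quadrant(reward: str, risk: str) -> str:
    """Map a (reward, risk) pair onto the bitcoin/dollar x tricycle/train grid."""
    assert reward in ("bitcoin", "dollar") and risk in ("tricycle", "train")
    if reward == "bitcoin" and risk == "tricycle":
        return "rare: high payoff, low risk (e.g. not smoking)"
    if reward == "dollar" and risk == "train":
        return "never do this: trivial payoff, catastrophic risk"
    if reward == "bitcoin" and risk == "train":
        return "high payoff, high risk: needs careful judgment"
    return "low payoff, low risk: probably fine, probably unimportant"

print(quadrant("dollar", "train"))
```

In practice an intervention occupies a shaded region of the grid rather than a single cell, as he notes — but the point of the simplification is that the "dollar bill in front of a train" cell should always be empty.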
 
Adam  36:43
Yeah
 
Dr. Attia  36:46
It's got a huge multiplier effect, and it's relatively safe to not smoke. Similarly, being insulin sensitive, as an outcome, is another one of those things where there's just no disagreement — there is complete convergence on the importance of insulin sensitivity with respect to cardiovascular health, cerebral health, cancer, etc. Now, the how-to becomes more problematic. Should one do this type of exercise versus that type of exercise? Should one take this drug versus that drug? Metformin I brought up a moment ago because, about 10 years ago, it started to become pretty clear in what are called cohort studies — where you took backwards looks at data that were collected for other reasons —
 
Adam  37:55
retrospective
 
Dr. Attia  37:56
Yeah, these are retrospective cohort studies that were looking at patients with diabetes who were taking metformin versus those who weren't, and they corrected for all sorts of factors. And the suggestions looked pretty interesting, actually, on both an absolute and a relative basis: the metformin takers were getting a lot less cancer. Their relative risk reduction for cancer was about 25%, and their relative risk reduction in mortality was about 38 to 40%. Again, I don't remember the absolute numbers, but they weren't trivial — it wasn't one of these 0.1% questions. So it begs the question: should we be taking metformin for cancer prevention? At least three out of five patients who come into my office want to know if they should be taking metformin. And full disclosure, I do — I've been taking metformin for 10 years. But I also like to point out to patients that I'm taking it in a really off-label way, because I don't actually have data on insulin-sensitive people taking metformin to reduce the risk of cancer. The benefit of taking metformin if you have diabetes is obvious — it might be high enough that it's your bitcoin — and I think metformin is a relatively safe drug, probably closer to the tricycle than the train. But in someone like me, to be brutally honest, it's really picking up a dollar bill. I don't believe I'm getting nearly the benefit of the patients with type 2 diabetes, and I say that based on subsequent cohort studies that looked at obese non-diabetics who were insulin resistant versus not, etc. So, to your macro question — how does one actually make sense of this? — the short answer is you don't. There is no making total sense of this; there is no knowing what to do at all times.
There's simply a process by which you think about things and notice everything in that process involves your own lens for risk.
 
Adam  39:50
Well, you know, when you described the tricycle versus the train and picking up a penny — that says it all. I think that's the answer. Because when you were saying that, I was thinking about a comment one of my clients made to me. This is a guy I considered overtrained — he just worked out way too much, always getting hurt, always sore. And I said to him, you know, I'm more of your risk manager than I am your trainer. In other words, squats are a great exercise, deadlifts are a great exercise — but at what risk? Doing squats with a barbell and weights on your spine is just not worth the benefit, because it can be catastrophic. You might as well do the leg press. Maybe a leg press machine is not as effective as a loaded squat, but it's a lot less risky. These are the decisions we're making in how we train people, how we exercise. Maybe squats are better, who knows — but I know they are a lot riskier. So let's take the less risky route.
 
Dr. Attia  40:56
Yeah, I think that's a great example. I mean, I'm a huge proponent of squats and deadlifts, but I'll still go through periods of time — you know, Mike, you asked me before we started the workout, hey, you got anything bugging you? And it's like, yeah, my right SI joint's been bugging me for a little while. I actually took six weeks off deadlifting and just focused on single-leg isolated work — lots of lunges, lots of lateral stuff — and basically just gave the thing a rest. Now, was that absolutely necessary? No, I'm sure I could have pushed through it. But the view was, look, I can probably get 80% of the benefit without the exposure and the risk. When you are doing very, very heavy compound joint movements, rule number one is don't get hurt. Talk to any good investor and they'll say rule number one is don't lose money — Charlie Munger probably gets credit for being the first to say that. Any investor will say: rule number one, don't lose money; rule number two, don't forget rule number one. And rule number one of exercise is don't get hurt.
 
Adam  41:51
Yeah. Thank you so much. That was great. And sorry that we didn't have any grand conclusion for you, people, but that's reality — there's a lot of uncertainty in this world. Now, I highly, highly recommend you read everything that Dr. Attia writes. He has a website, PeterAttiaMD.com — that's Peter Attia, A-T-T-I-A, MD dot com. Read everything he writes; I highly recommend it. Thank you so much for coming on our show. That was very informative, very articulate.
 
Dr. Attia  42:21
Thanks for having me. Thanks for the workout.
 
Mike  42:24
It's my pleasure. Thanks for being here.
 
Tim Edwards  42:28
Thanks, guys. We'll include links in the show notes not only to Dr. Attia's website, but also to an article that he referenced earlier in the episode, written by Gary Taubes, titled "Do We Really Know What Makes Us Healthy?" Dr. Attia also referenced a video that you can find on YouTube, "Richard Feynman on Scientific Method"; we'll have a link in the show notes to that as well. You'll find additional links in the show notes that will direct you to the InForm Fitness website, where you'll find a free slow-motion high-intensity workout waiting for you. Just click the Try Us Free button right there on the homepage, fill out the form, pick your location, and then you can experience a free full-body workout that you can complete in just 20 to 30 minutes. It's informfitness.com. And yet another link in the show notes is for Adam's book, titled Power of 10: The Once-a-Week Slow Motion Fitness Revolution. That link will take you to Amazon, and for less than 15 bucks you'll find a ton of nutritional tips, including a handy list of foods that support the Power of 10 protocol, and some effective demonstrations of exercises that you can perform in the comfort of your own home. We have close to 50 episodes for you to binge-listen to if you're new to the podcast, so don't forget to hit subscribe in whichever podcast app you might be listening. And if you don't mind, we'd really appreciate it if you took a couple of moments to leave us a review. Until next time, for Adam Zickerman and Mike Rogers of InForm Fitness, I'm Tim Edwards with the Inbound Podcasting Network.


What is The InForm Fitness Podcast?

Now listened to in 100 countries, The InForm Fitness Podcast with Adam Zickerman is a presentation of InForm Fitness Studios, specializing in safe, efficient, High Intensity strength training.
Adam discusses the latest findings in the areas of exercise, nutrition and recovery with leading experts and scientists. We aim to debunk the popular misconceptions and urban myths that are so prevalent in the fields of health and fitness and to replace those sacred cows with scientific-based, up-to-the-minute information on a variety of subjects. The topics covered include exercise protocols and techniques, nutrition, sleep, recovery, the role of genetics in the response to exercise, and much more.

SPEAKERS
Tim Edwards, Dr. Attia, Adam, Mike

Dr. Attia 00:06
You know, cognitive dissonance and bias are so strong. And once we lock in on an idea, it becomes very difficult to see evidence that contradicts that. And it also becomes very difficult to scrutinize evidence that supports it.

Tim Edwards 00:28
Welcome, InForm Nation, to Episode 47 of the InForm Fitness Podcast. I'm Tim Edwards with the Inbound Podcasting Network, and in a minute, Adam Zickerman, the founder of InForm Fitness, and Mike Rogers, general manager slash trainer at InForm Fitness New York City, will be joined by the gentleman whose voice opened the show, Dr. Peter Attia, the founder of Attia Medical, PC, a medical practice with offices in San Diego and New York City focusing on the applied science of longevity. He also happens to be a client of InForm Fitness, like myself. Dr. Attia joins us today to discuss the studying of studies — the mechanics of scientific research, and how to distinguish the relationship between showing cause and effect. Now, you might be asking, what does any of this have to do with slow-motion high-intensity strength training? Well, plenty. How many times on this podcast have we referenced a study about exercise and nutrition? You see it every day in your social media feeds and on the news. But how many of these studies are actually true, or even accurate? How are these studies administered? And can you trust the results? Well, to quote Adam Zickerman, hopefully after listening to this episode, you'll be a bit more equipped to understand the barrage of information you read and hear about, and not to fall hook, line, and sinker for every claim that is reported as a study.

Adam 01:53
Hello, everybody, this is Adam. Very excited about this podcast. It's a little heady — it gets into statistics — but hopefully we're going to break it down for the layperson, so that when you read headlines that make crazy claims, your — well, basically, your bullshit detector goes off and you can figure out how to navigate that. So we have with us to talk about this Dr. Peter Attia. He's a physician practicing in New York City and California, and his practice focuses on longevity, which he calls the overlap of lifespan and healthspan, which is kind of cool. His clinical interests are in nutritional biochemistry, exercise physiology, lipidology, endocrinology, and a few other cool things. He's also somebody who works out with us here. You just worked out here just now, didn't you?

Dr. Attia 02:39
Sure did.

Adam 02:40
How you feeling?

Dr. Attia 02:42
That hurt. That was —

Mike 02:43
sorry. Sorry, Peter.

Adam 02:45
Mike trained him. Yeah.

Tim Edwards 02:48
Sorry, not sorry.

Adam 02:50
Peter is also an incredible athlete, endurance athlete. And he understands the efficacy of these types of high intensity workouts that we talked about. So Doctor, thank you for coming on the show.

Dr. Attia 03:02
Thank you for having me. But I will add one correction. I don't think I can use the word athlete to describe anything I do anymore

Adam 03:10
former athlete,

Mike 03:12
He looks like an athlete. Tim, you haven't seen him — he looks pretty good.

Adam 03:15
He's a good-looking guy, even beat up. I wanted to do this podcast because I read the series you wrote — a five-part series called Studying Studies. You venture into much detail about the mechanics, if you will, of scientific research, and how difficult it is to establish, without a shadow of a doubt, the relationship that all research tries to show: cause and effect. So, Dr. Attia, you start off your series quoting two iconic individuals. First, Mark Twain — who, I guess you mentioned, was quoting Benjamin Disraeli — who says there are lies, damned lies, and statistics. And you also quote the Nobel Prize-winning physicist Richard Feynman, one of my favorites, actually. I read his posthumously published Caltech lectures, called The Meaning of It All. Is that where you got this quote from, by the way?

Dr. Attia 04:07
Honestly, I don't remember where — I mean, he said it so famously, so many times.

Adam 04:12
Yeah

Dr. Attia 04:12
My favorite book, of course, is Surely You're Joking, Mr. Feynman!, which is sort of the entertainment side of Feynman.

Adam 04:21
So Dr. Feynman says: "The first principle is that you must not fool yourself — and you are the easiest person to fool. So you have to be very careful about that. And this is our responsibility as scientists, certainly to other scientists, and I think also to laymen." So, Doctor, tell me if I'm wrong, but the main point of this whole series of essays is something you actually say in your first essay, which is — and I'm quoting you — "statistics can be both persuasive and misleading if we're not careful, and it's self-persuasion that we must vigilantly guard against if we really want reliable knowledge about the world we live in." So can you tell us why this is so important to understand?

Dr. Attia 05:00
Sure. But before I do, I also want to acknowledge the work that goes into the stuff that I write. It couldn't really be done without the team of analysts I have. Inside my medical practice, I have a team of four analysts whose full-time job is to help me read, assimilate, digest, and understand the absolute barrage of data that's out there. I think in one of our posts we even mentioned that, at last check, there are just under 100,000 pieces of literature produced monthly on PubMed in the English language. And our estimation is that probably somewhere in the neighborhood of 100 of them would be relevant to what we do — meaning they overlap with our field of interest, but also rise to the standard of being worth our attention. To funnel that down from 100,000 to 100 is something I could use a team of 400 analysts to do, but the four guys I have are amazing. In particular Bob Kaplan, who's sort of in charge of the whole operation, and my right hand — without him, this whole thing that we do wouldn't exist. So with that said,

Adam 06:12
That's Bob, shout out to Bob

Dr. Attia 06:14
And also, I would say, we wrote this series knowing it was not going to be that, quote-unquote, interesting. There's no clickbait involved here. This is something that we hope will, over time, become the sort of thing people go back to. I forget who said it, but there's an interesting quote to the effect that you only learn something when you're actually ready to learn it — and of course it was stated much more eloquently. The point is, a lot of times these things fall on deaf ears. So anyway, what we've created is a bit of a repository called Studying Studies. It begins, I think, with one on relative versus absolute risk, which I'm sure we'll get to. But to your point, which is the important one: we are all — myself included — so guilty of this. Cognitive dissonance and bias are so strong, and once we lock in on an idea, it becomes very difficult to see evidence that contradicts it, and also very difficult to scrutinize evidence that supports it. So I think what Feynman says quite eloquently could also be stated the way one of the guys from my postdoc, who I remember very well, said it to me: you have to learn to kill your babies. You're going to have a hypothesis, you're going to do a whole bunch of experiments, you're going to think you're getting an answer, and you're going to start to drive that answer — you're going to look for ways to see your data in a manner that makes your answers seem more believable. But you have to be able to scrutinize that to the point where you ultimately end up killing most of your babies. So I don't pretend to say that this is easy. I think this is actually very difficult.
In many ways, the easiest way to do it is the way we try to do it, which is to have more than one person involved and have opposing sets of eyes. I've spoken about this in other discussions around the notion of creating a blue team and a red team. This is not uncommon in experimental disciplines of science, like physics. But I think it's frankly valuable even in something like a hedge fund. For example, if you're trying to decide whether a stock is worth the price being asked right now, you typically will send off a team of analysts to do a bunch of research on it. But that exercise can be much more powerful if you send off a blue team and a red team that don't communicate with each other in any way, shape, or form, but go look at the data and force themselves to make the for and against cases — and then you compare.

Mike 08:50
Create a debate — like, hoping that there'll be a debate.

Dr. Attia 08:54
Absolutely, yeah. So internally we talk about this idea of blue team, red team as a way to keep ourselves honest, because I don't really trust myself that much. And I don't know that any individual should be able to trust themselves.

Adam 09:07
No, I know I'm biased — totally — when it comes to the way we train people and how we view exercise. And that's why, like you said, you don't learn something until you're ready for it. That's where I am in my career: trying to push the envelope with exercise science and knowledge. You know, this can't be it, right? There's got to be more to all this. I think it was Carl Sagan who said, if there's an exception to the rule in science, the rule is wrong. And I'm finding exceptions in exercise all the time. So even I need to wake myself up, push the envelope, and question everything that I do — and this is why I have you on the show today.

Dr. Attia 09:44
Well, I think that speaks to something that makes biology much more difficult than physics or mathematics. The first thing is, I think there are very few rules in biology in the way we think of rules in those other disciplines. So, to your point, I don't think I could with a straight face make the case that there is one exercise regimen, one nutritional regimen, one endocrinologic regimen, etc., that is optimal for every person. I simply know that's not the case — I could argue it on first principles, and I could argue it empirically. A corollary of that is that there are no proofs in biology. And we should be very careful with the use of that word. If I had a dollar for every time I saw a study that said such-and-such proves X, or this study proves Y — it's incredible sloppiness, either on the part of the journalist or the scientist. Because unlike mathematics, where there are proofs, there are no proofs in biology. Everything is stochastic, everything is about probability, everything is about confidence — but there is no proof. And therefore, as an extension, there are likely very few rules, outside of things like the central dogma.

Adam 10:56
Exactly. So let's get to this idea of self-persuasion, and use some examples. One example you used was a headline — the title of the journal article said cholesterol-fighting drugs lower risk of Alzheimer's disease — and you used that as an example of self-persuasion.

Dr. Attia 11:14
Well, you know, I was thinking about that after we wrote it, and it wasn't even the best example. It was a good one, but there's a far more egregious example, actually, which is the use of hormone replacement therapy in postmenopausal women and the risk of breast cancer associated with it. I remember Bob and I talking about this before we put that piece out and deciding we weren't going to use that example, since it warrants an entire post on its own, which we'll get to. The gist of it is the following. I think many people, if they're up in the peanut gallery and not digging down into the data, will still have some vague notion that hormone replacement therapy increases the risk of breast cancer in women. So the question is, where does that inclination come from? It comes from something called the Women's Health Initiative, a study that was done in response to, interestingly, a series of epidemiologic studies suggesting that women who went on hormone replacement therapy actually saw improvements in many aspects of morbidity — obviously it's not easy to look at mortality — for example, greater cognition, improved bone density, improved heart health, less breast cancer, etc. The reasonable thing to do was to put this to the test with a prospective, randomized, double-blind, controlled trial, and that was done. I won't get into the complete shortcomings of the actual study, because the study itself had at least four enormous structural shortcomings that are not the fault of the people who ran it. In other words, at the time the study was done, I think it had to be done incorrectly — as ridiculous as that sounds. But when we look back today, it's very clear that you would do at least four things orthogonally differently, not just slightly differently.
Nevertheless, when the study came out, the headline — the only thing most people remember — is, quote, breast cancer risk went up 25% in the women getting HRT. And so to this day, if I ever want to have a discussion about HRT with a patient, it begins with about a 30-minute lecture on the Women's Health Initiative, which is one of the papers I actually have sitting on my desk in my office, so I can very easily refer to it.

Adam 13:24
It kind of reminds me of the autism and vaccination type of thing.

Dr. Attia 13:28
Well, that's an even worse example, because that was actually perpetuated by what effectively turned out to be fraud — or at least scientific misconduct. This was an example where, yes, it is true that when you compared the use of estrogen and progestin to a placebo, the hazard ratio for breast cancer was 1.24. So what does that mean? A hazard ratio of 1.24 means a 24% increase in the relative risk of breast cancer. I'll resist all urges to explain why I don't even believe that that is correct. But let's assume it is — let's assume there was a 24% increase in response to the use of estrogen and/or progestin. The important question that should be asked, but was often overlooked, is: what was the absolute increase in risk? The relative increase in risk doesn't tell you anything by itself. And when you look back at the data — which is very easy to do; you don't have to be a statistician or mathematician, it's only basic arithmetic, which I think we walk readers through in that series — the absolute increase in risk was approximately going from four cases per 1,000 to five cases per 1,000. Now, going from four cases per 1,000 to five cases per 1,000 is indeed roughly a 25% increase in relative risk, but it's only a one-in-1,000, or 0.1%, absolute increase in risk. So the way I try to help women understand HRT is: let's assume that that's correct — and I'll give you seven reasons why I don't believe it is — but if that becomes our ceiling of risk, you then have to ask the question: is a 0.1% increase in your risk of breast cancer worth the alleviation of some of the symptoms you might actually have during menopause? And of course, I think the answer is, it depends on how bad those symptoms are.
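For readers who want to check the arithmetic just described, here is a minimal Python sketch. It uses the rounded figures quoted in the conversation — four versus five breast-cancer cases per 1,000 women — not the raw WHI data:

```python
# Back-of-the-envelope version of the arithmetic described above,
# using the rounded per-1,000 case counts quoted in the conversation.
cases_placebo, n_placebo = 4, 1000
cases_hrt, n_hrt = 5, 1000

risk_placebo = cases_placebo / n_placebo   # 0.004, i.e. 0.4%
risk_hrt = cases_hrt / n_hrt               # 0.005, i.e. 0.5%

risk_ratio = risk_hrt / risk_placebo           # 1.25 (the reported HR was ~1.24)
relative_increase = risk_ratio - 1             # the "25% more breast cancer" headline
absolute_increase = risk_hrt - risk_placebo    # one extra case per 1,000 women

print(f"relative increase: {relative_increase:.0%}")  # 25%
print(f"absolute increase: {absolute_increase:.1%}")  # 0.1%
```

Both numbers come from the same two risks; the headline simply chose the larger-sounding one.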

Adam 15:31
right

Dr. Attia 15:31
If a woman is sailing through menopause and she has no symptoms, then one could maybe convincingly say, look, it's not worth the hassle slash the risk. But when a woman is debilitated by, you know, a number of symptoms, including the vasomotor symptoms, which are usually the first ones, you have to ask the question: how bad is a 0.1% absolute increase in a disease that has a one in 12 chance of killing you,

Adam 15:53
especially now, since you can probably factor in family history of breast cancer

Dr. Attia 15:56
Absolutely. And we have much better, you know, ways to further refine risk for an individual. So this is, to me, one of the most important things that the lay person wants to have in their toolkit, which is: if ever I read something, you know, whether it be in the actual paper itself or, more typically, in the way it's reported, I want to make sure I understand the difference between absolute and relative risk.

Adam 16:21
So Peter, is it too unwieldy a question to ask you to help us, you know, our listeners, figure out what relative risk is versus absolute risk?

Dr. Attia 16:31
It might be, but I can give it a try. So the example I just gave is probably as good a one as you can use, although I think we talked about Alzheimer's disease elsewhere. The relative risk, the reason we use the word relative to define it, says: from wherever you start, and I don't know where you're starting, or I'm going to ignore where you're starting, how much does the probability of this event go up? And that relative risk in this example is 25%. So you went from something that has a 0.4% chance of happening to something that has a 0.5% chance of happening, and going from 0.4 to 0.5 is mathematically an increase of 25%. So the relative risk is 25%. But of course, if you went from a 40% likelihood of happening to a 50% likelihood of happening, that would also be a 25% relative risk. The difference is, in the latter example, your absolute increase in risk is 10%, right? Whereas in the former, your absolute increase in risk is 0.1%. And that's why it's very difficult to make decisions clinically without knowing both. But because we are, I think, inherently a bunch of lazy people, and I include myself in that as much as anybody, we want headlines, we want what's going to sound more exciting. There was a 0.1% increase in the risk of something, or there was a 25% increase: both are correct, but both are incomplete in isolation.
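[Editor's note: the arithmetic being described is simple enough to sketch in a few lines of Python. The numbers below are the ones quoted in the discussion (0.4% to 0.5%, and 40% to 50%); the function name is our own illustration, not anything from the episode.]

```python
def risk_change(baseline, new):
    """Compare two event rates as absolute and relative risk changes.

    baseline and new are probabilities (e.g. 0.004 means 4 cases per 1000).
    """
    absolute = new - baseline                 # change in percentage points
    relative = (new - baseline) / baseline    # proportional change from where you started
    return absolute, relative

# HRT example: 4 per 1000 -> 5 per 1000
abs1, rel1 = risk_change(0.004, 0.005)
print(f"absolute: {abs1:.1%}, relative: {rel1:.0%}")  # absolute: 0.1%, relative: 25%

# Same 25% relative risk, very different absolute risk: 40% -> 50%
abs2, rel2 = risk_change(0.40, 0.50)
print(f"absolute: {abs2:.0%}, relative: {rel2:.0%}")  # absolute: 10%, relative: 25%
```

Both headlines ("a 0.1% increase" and "a 25% increase") come out of the same pair of numbers, which is exactly the point being made: neither figure is complete on its own.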

Adam 18:03
You gave another example about the relative risk for this new drug that reduced cancer incidence by 50%. That was a relative risk, right? But the absolute risk really goes from two in 1000 to one in 1000, correct? So is that really a...? Everyone thinks, wow, I have a 50% chance of reducing my risk of cancer, when really that's actually not true? Well, it's true

Dr. Attia 18:28
It's true in a relative sense, but then you now have to evaluate: is dropping my absolute risk by 0.1% worth the trade-off of taking whatever this drug is?

Adam 18:40
Exactly right. And you say we're messy creatures, and there are so many moving parts and pieces to what will influence the results of a study. For example, the exercise habits of the individuals that make up the sample, or their access to health care, or their smoking history can confound the results of the study you're trying to use to prove an association. So one confounder I'd love you to talk about is this healthy user bias, because that's a common one, and it's kind of easy to understand how that plays into studies. What I'm trying to do is show how easy it is to have things screw up a study, and the results really might not be what we think they are.

Dr. Attia 19:27
Yeah, so, I mean, Richard Feynman does a great job. There's a great clip on YouTube, and I know we've linked to it somewhere on the blog. He's giving a lecture at, I think, either Caltech or Cornell, but maybe you can find it in the show notes and link to it. It's a beautiful, beautiful exercise of him on a blackboard walking through the scientific method. And he explains it in a way that I won't even try to reproduce, because it's just so priceless and Feynmanesque, but it's effectively: you make an observation, you make a guess, you design an experiment and compute the consequences of your guess; in other words, what would be the experimental outcome if your guess or hypothesis is correct? You run the experiment, and then you measure the outcome versus the outcome you predicted, the outcome that would be true if the thing is correct. Now, he explains it much more eloquently. The gist of it is: outside of doing experiments, we actually cannot establish cause and effect, outside of the most extreme circumstances, which tend to be by far the exceptions and rarely the rules. So what you're talking about, healthy user bias, becomes an issue when we rely on things outside of experiments, which we unfortunately have to do very often, or at least choose to do very often, in human biology, because, as you pointed out, humans are messy. We live for long periods of time; you can't study us in captivity. So it's very difficult to do an experiment. For example, if you wanted to demonstrate: do people who exercise four times a week or more have lower risk of, pick your favorite disease, versus people who don't exercise? Okay, well, to do that experiment is almost impossible.
Because what are you going to do? Get, you know, thousands of people and randomize them into two groups, meaning, by randomization, that you've mathematically created a large enough sample that you now know your two groups are statistically identical. And now you have one group exercise five times a week and the other group never exercise. First of all, you assume that they're doing that; how you could make them do that without putting them in captivity for 20 years, I don't know. But at the end of 20 years, or even 10 years, whatever, you then ask the question: is there a difference in heart disease, cancer, Alzheimer's disease, etc.? Now, we certainly think there would be. But we don't do that. Instead, what we do is we say, well, let's just actually go and survey the population. Let's take a backwards look and ask the question. Okay, let's find people, and do surveys, and find out how much exercise everybody does. And then we'll stratify people, and we will then do a mathematical analysis to try to adjust for other variables and see if it gives us the answer. The problem with that is, if you're comparing people who on their own are choosing to exercise five times a week versus people who on their own are choosing not to exercise, the likelihood that you are able to also tease out every other difference, for example, their sleep habits, their eating habits, is very low. Yes, using statistical analyses you can probably adjust for some of the more obvious differences, such as smoking; as a general rule, I think you'd find a higher incidence of smoking in the non-exercising crowd than in the exercising crowd, and that could probably be extrapolated. But many of these things cannot. And there's actually an amazing essay on this, or essay may be the wrong word, but Gary Taubes, who I know you must know, wrote a great piece in 2007 in the New York Times Magazine. I can't remember the title of it.
But it's something you should also link to, because I think it's a real gem of a piece. The gist of it is, what do we not know about studies, and I think it's one of the better pieces on the limitations of epidemiology, and specifically this healthy user bias that really makes it difficult to understand the impact of nutritional choices, exercise choices, other quote-unquote lifestyle choices, frankly even drug choices, on hard outcomes. Because when you do these analyses, the hazard ratios, meaning the magnitude of difference between the groups, are usually so small that they fall well below the threshold at which epidemiology can rise to the level of saying... and it's not significance; you can often find statistical significance. It's just: can you be confident that there's a causal relationship here? And I would say, unfortunately, the answer today is virtually never

Adam 23:57
Another example of healthy user bias that comes to mind: I often hear that people who floss their teeth have less cardiac disease, as if flossing your teeth itself has an actual effect on your heart.

Dr. Attia 24:11
Now, it could. The point is, we can't learn it from that study.

Adam 24:15
We can't learn it, because people who floss their teeth are also usually healthier in other ways, and they have healthier habits in other ways, and that might be the reason why they don't have heart disease as often. So that's the healthy user bias.

Dr. Attia 24:26
That's exactly right. And that's a great example of a question that would be quite vexing to me, because, using cardiac disease as an example, it's certainly a disease that's driven by lipoproteins, inflammation, endothelial function, all these other things. So is there a plausible mechanism by which, you know, having poor dentition could increase your risk of heart disease? Absolutely, there's a plausible mechanism there. But my guess is the hazard ratio in those studies is sufficiently low that, at best, it's generating a hypothesis to be tested.

Adam 25:00
Yeah, well, regardless, whether it affects your heart or not, you should floss your teeth, everybody.

Mike 25:07
we think probably.

Adam 25:12
So, you know, you've been throwing around a couple of words, like epidemiology. And epidemiology is another word for an observational study. An observational study, or an epidemiological study, is different from what they call a randomized control study. And you touched on how hard that is, putting people in captivity, because you have to control for all these variables; otherwise, they'll all confound the results. So why, and you touched on this a little bit, maybe you can touch on it a little more, why aren't there more, you know, randomized control studies, which are considered the researcher's gold standard of studies? Why are we relying on all these epidemiological studies, which, as you just pointed out, have all these confounders, all these, you know, problems?

Dr. Attia 25:58
So I think it comes down to a couple of things. But one of the biggest issues is the most obvious one, which is a logistics problem: it is very difficult to do randomized control trials, and it is often much, much more expensive. It also takes much longer. So if you want to study the effect of, let's go back to the previous example we had: does the frequency of exercise impact, even just pick one metric, your risk of Alzheimer's disease? Which is a very important question, and certainly one that logically would make sense, you know: more exercise, better blood flow, a number of neurotrophic factors, BDNF, all of these things. You could come up with 100 mechanisms why. But if put to the test, are we going to say we're going to get two groups of several hundred people and force one of them to not exercise and force the other to exercise? And again, are we going to put these people in captivity? That's impossible. So they're going to have to live in their free world, and you hope that the compliance is high enough that you create enough discrimination between the two groups, and then follow for an outcome. Now, it's not to say that nothing I described there can be done. All of that can be done. The question is: how difficult is it? How expensive is it? And, oh, by the way, who's paying for that study? Because it's very difficult to remunerate on an exercise study. So we have a much higher appetite for doing randomized control trials in pharmacology, and part of it is because the FDA says you have to; we're not approving this drug if you haven't demonstrated safety, efficacy, and effectiveness in prospective clinical trials. But there's an incentive to spend a billion dollars, which is about how much it will cost today, a little over a billion, to get a drug approved through that process. It's very difficult to imagine doing that with something for which there's no remuneration.
That's the first fundamental problem. The second one comes down to the ethical problem. Sometimes the most interesting questions are ones in which we just really don't think it would be ethical to randomize people to one of the two groups.

Adam 28:04
So you can't test harmful effects

Dr. Attia 28:06
Yeah, we can't. Not that this is debatable today, but there was certainly a long period of time between the landmark Surgeon General's report in the early 1960s demonizing smoking and the widespread acceptance of the role that smoking played in lung cancer. But the question is: could you, with all of that mounting evidence, and those were examples where the hazard ratios were more than 10, so this now gets into the territory where epidemiology may actually be sufficient for determining causality, but would we, if we were, you know, the scientists and physicians designing that study, be comfortable randomizing people to a forced smoking group? And the answer is no. Similarly, today, if I were involved in a clinical trial, I don't know that I'd be terribly excited about randomizing people to a group of don't exercise, right, or let's disrupt your sleep for the next 10 years. Certainly, on a short-term basis, it's probably reasonable to do sleep disruption studies over a period of weeks or even months to test the, you know, validity of theories around the importance of sleep for, for example, glucose homeostasis

Mike 29:11
In a silly way, that's what the guy did in Super Size Me, you know, how he just ate all that crappy food

Dr. Attia 29:16
that's right

Mike 29:17
you know, I mean, he chose to do that to himself and then tested his health markers throughout the period

Adam 29:24
A sample size of one, yeah.

Dr. Attia 29:26
Yeah. But basically, these are the fundamental issues, which is, you know: compliance, logistics, duration, cost, and ethics.

Adam 29:37
Yes. Okay. So hence the reason why we are relying on so many observational studies, or epidemiological studies. You even pointed out that 52 observational studies were looked at, and these particular 52 observational studies were actually tested by randomized control studies, and the randomized control studies showed that every single conclusion of the 52 observational studies was wrong. That's zero for 52. That's ridiculous. So what gives? I mean,

Dr. Attia 30:14
I think a lot of the low-hanging fruit in epidemiology is gone. I think that's the bottom line. When you think of the real gems, the real case studies of what made epidemiology great, it was in areas where... so epidemiology is such a blunt tool that you need enormous amounts of discrimination in what you're trying to detect. And so: smokers getting lung cancer or not getting lung cancer; people exposed to massive amounts of asbestos getting mesothelioma, yes or no; chimney sweeps or non-chimney sweeps getting scrotal cancer, yes or no. These questions had such enormous effects that the epidemiology could give us much, much more confidence in an answer. Today, we're dealing with things where, and to be clear, going back to those 52 cases, I'm not suggesting that every one of those epidemiology studies was incorrect. It's also possible that some of those randomized control trials were so poorly done that they missed the mark. What I'm saying is, at that point, we're now outside of the discriminatory capacity of the tool to measure. And I think we've written about this as well. What's even more upsetting is when you look at John Ioannidis, who, if anybody's interested in this space and is looking for one person to be reading, in terms of just, like, who's a very thoughtful academician with some of the most insights on this topic, it's this fella by the name of John Ioannidis at Stanford University. And John's written some of the most cited papers on this topic, including a very famous paper, I believe in 2005, in PLOS, that gets at this notion of how most published research is incorrect.

Adam 31:57
So, all right, which kind of brings me to the grand finale question, which is: now that we're thoroughly confused, and we can't trust observational studies, or randomized control studies, because they're so difficult to do and often not done well, what do we do as lay people? We're trying to lead healthy lives, we're trying to improve ourselves, we're trying to make decisions about what drug to take or not take. What do we do? I mean, like

Mike 32:22
Is anybody prepared to do that deep dive? Like, well, step one,

Adam 32:25
Even with the deep dive, it seems like, you know, you can read a book on statistics and still do your homework and still figure out, holy cow, we still don't know for sure. So what do we do?

Dr. Attia 32:37
Well, I mean, I guess there's a couple of ways to think about this. I think the first way is to get comfortable with uncertainty. I don't think we're wired to deal with uncertainty very well, we don't really think

Adam 32:50
that's why there's religion.

Dr. Attia 32:51
Yeah, I think that's why there's a lot of actual problems, frankly, which is, we, and again, when I say we, I'm being very deliberate to include myself, because even though I'm trained in mathematics, and I think as probabilistically as anybody I know, in my darkest moments I tend to revert to binary thinking. And so I think

Adam 33:14
what does that mean?

Dr. Attia 33:15
Meaning it's black or white, it's this or it's that, there's no in-between, you know, as opposed to thinking in probability distributions. The real way we should always be thinking about life is in probabilities.

Adam 33:26
Yeah

Dr. Attia 33:26
Now, in some cases, we love the examples where the probabilities are so clear. So if I drop this pen, what is the probability that it will hit the floor? Well, you can actually describe that with a probability function in physics and mathematics. And it will show you that

Adam 33:41
there's a small probability that it won't actually hit the floor

Dr. Attia 33:44
Yeah, there's a theoretically small probability that it won't hit the floor. From a practical standpoint, in that example, the answer is zero. So we have a bunch of rules that govern our universe that get us, I think, overly comfortable with the notion of yes-or-no, black-or-white, zero-or-one, binary-type answers. In engineering, we describe that as being digital: on or off.

Adam 34:10
Gotcha

Dr. Attia 34:11
The opposing concept is called analog, where you have a sliding scale from zero to one; you turn up the lever. And so, to me, biology is much more analog than digital. And therefore, for every question, you just have to say: well, we may never actually know the answer to this. What I have to do if I'm making a decision, or what my doctor has to do if she's making the decision, or at least advising me on the decision, is be able to understand the risk-adjusted return on this investment. So the very simplest way I try to explain this to patients is using a two-by-two matrix. On the x-axis is where I talk about the reward. On the y-axis, the vertical axis, is where I talk about the risk. Now, even though those are

Adam 35:01
the hormone replacement therapy might be a good example.

Dr. Attia 35:04
Sure, although frankly, everything that we do is an example of this. In fact, Bob and I often will draw a two-by-two matrix like this on a whiteboard and shade in the parts of it that we think represent any type of intervention. So where does high-intensity interval training fit in on this? Where does, you know, long-distance endurance training fit into this? Where does taking metformin fit into this? Where does, you know, intermittent fasting fit into this? Anything that you can do should be able to be placed on that matrix. And it's not a point, it's not a dot, by the way; it's like a, you know, a shaded

Adam 35:36
curve

Dr. Attia 35:37
Curve, yeah, exactly. So it's complicated. But you can simplify this into a two-by-two as opposed to just a continuum. And the two-by-two would be: when dealing with the reward or the payoff, think about picking up a bitcoin versus picking up a dollar bill. Now, this example might be irrelevant in a few years, if Bitcoin's irrelevant, but let the record show that at the moment, a bitcoin is still worth something. So you're either asking the question: am I picking up a bitcoin? Am I picking up a dollar bill? Or a penny, for that matter, right, something that we would ascribe very little value to? On the risk side, the question is: am I picking this thing up while it's sitting in front of a moving tricycle or a moving train? So I try to look at everything I do through that lens. And the first thing you want to realize is: you never, ever want to be picking up dollar bills in front of moving trains. That's an obvious statement, but it's worth thinking through the things that you might do. There are lots of things that people propose to do that, in my mind, amount to picking up a dollar bill in front of a moving train. Yeah, you could get a dollar, but it could also be a catastrophic outcome. Conversely, there are very few opportunities that we aren't already aware of that are akin to picking up bitcoins in front of tricycles. Most of those things have been realized. For example, not smoking is picking up a bitcoin in front of a tricycle.
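[Editor's note: the two-by-two framing being described can be sketched as a simple lookup. The labels and verdicts below are illustrative paraphrases of the analogy, our own wording rather than anything from Dr. Attia's practice.]

```python
def classify(reward, risk):
    """Place an intervention in the two-by-two reward/risk matrix.

    reward: "high" (a bitcoin) or "low" (a dollar bill / a penny)
    risk:   "low" (a passing tricycle) or "high" (a moving train)
    """
    quadrants = {
        ("high", "low"):  "bitcoin in front of a tricycle: clear win",
        ("high", "high"): "bitcoin in front of a train: weigh very carefully",
        ("low", "low"):   "dollar bill in front of a tricycle: fine, small payoff",
        ("low", "high"):  "dollar bill in front of a train: never do this",
    }
    return quadrants[(reward, risk)]

print(classify("high", "low"))   # e.g. not smoking
print(classify("low", "high"))   # the quadrant to avoid entirely
```

In practice, as the discussion notes, an intervention occupies a shaded region of the matrix rather than a single cell, but the lookup captures the first-pass triage.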

Adam 36:43
Yeah

Dr. Attia 36:46
It's got a huge multiplier effect, and it's, you know, relatively safe to not smoke. Similarly, being insulin sensitive, as an outcome, is another one of those things where there's just no disagreement; there is complete convergence around the importance of insulin sensitivity with respect to cardiovascular health, cerebral health, cancer, etc. Now, the how-to becomes more problematic. So, you know, should one do this type of exercise versus that type of exercise? Should one take this drug versus that drug? You know, metformin I brought up a moment ago because, I think, about 10 years ago, it started to become pretty clear in what are called cohort studies, where you take backwards looks at data that were collected for other reasons

Adam 37:55
retrospective

Dr. Attia 37:56
Yeah, these are retrospective cohort studies that were looking at patients with diabetes who were taking metformin versus those who weren't taking metformin. And they corrected for all sorts of factors. And, you know, the suggestions looked pretty interesting, actually, both on an absolute and a relative basis: the metformin takers were getting a lot less cancer. Their relative risk reduction for cancer was about 25%, and their relative risk reduction in mortality was about 38 to 40%. Again, I don't remember the absolute numbers, but they weren't trivial; it wasn't like one of these 0.1% questions. So it begs the question: should we be taking metformin for cancer prevention? Now, at least three out of five patients who come into my office want to know if they should be taking metformin. And full disclosure, I do; I've been taking metformin for 10 years. But I also like to point out to patients that I'm taking it in a really off-label way, because I don't actually have data on insulin-sensitive people taking metformin to reduce the risk of cancer. So to me, the benefit of taking metformin if you have diabetes might be high enough that that's like your bitcoin, and I think metformin is a relatively safe drug, so it's probably closer to the tricycle than it is to the train. But in someone like me, and I'm going to be brutally honest, it's really picking up a dollar bill; I don't believe I'm getting nearly the benefit of the patients with type 2 diabetes. And I say that based on subsequent cohort studies that looked at obese non-diabetics who were insulin resistant versus not, etc. So, to your macro question, how the hell does one actually make sense of this? I think the short answer is: you don't. There is no making total sense of it; there is no knowing what to do at all times.
There's simply a process by which you think about things, and notice that everything in that process involves your own lens for risk.

Adam 39:50
Well, you know, when you described the tricycle versus the train and picking up a penny, I mean, that says it all, I think. That's the answer. Because when you were saying that, I was thinking about a comment one of my clients made to me. This is a guy that I considered overtrained; you know, he just worked way too much, always getting hurt, always sore. And I said to him, you know, I'm more of your risk manager than I am your trainer. In other words, squats are a great exercise, deadlifts are a great exercise, but at what risk? Because doing squats with barbells or weights on your spine is just not worth the benefit, because it can be catastrophic. You might as well do the leg press. And maybe a leg press machine is not as effective as a squat with weights, you know, but it's a lot less risky. So these are the decisions that we're making in how we train people, how we exercise. Yeah, maybe squats are better, who knows, but I know they are a lot riskier. So let's take the less risky route.

Dr. Attia 40:56
Yeah, I think that's a great example. I mean, I'm a huge proponent of squats and deadlifts, but I'll still go through periods of time... You know, Mike, we were talking earlier at the workout; you asked me before we started, hey, you got anything kind of bugging you? And it's like, yeah, you know, my right SI joint's been bugging me for a little while. And I actually took six weeks off deadlifting and just focused on all single-leg isolated work, you know, lots of lunges, lots of lateral stuff, and basically just gave the thing a rest. Now, was that absolutely necessary? No, I'm sure I could have pushed through it. But the view was: look, I can probably get 80% of the benefit without the exposure and the risk. And I think when you are doing very, very heavy compound joint movements, rule number one is don't get hurt. You talk to any good investor, they'll say rule number one is don't lose money. Now, Charlie Munger probably gets credit for being the first to say that, but I think any investor will say that rule number one is don't lose money, and rule number two is don't forget rule number one. And rule number one of exercise is don't get hurt.

Adam 41:51
Yeah. Thank you so much. That was great. And sorry that we didn't have any great conclusion for you, people, but that's reality: there's a lot of uncertainty in this world. Now, I highly, highly recommend you read everything that Dr. Attia writes. He has a website, PeterAttiaMD.com. Right, Peter Attia, A-T-T-I-A, MD.com. Read everything he writes; I highly recommend it. Thank you so much for coming on our show. That was very informative, very articulate.

Dr. Attia 42:21
Thanks for having me. Thanks for the workout.

Mike 42:24
It's my pleasure. Thanks for being here.

Tim Edwards 42:28
Thanks, guys. We'll include links in the show notes not only to Dr. Attia's website, but also to an article that he referenced earlier in the episode, written by Gary Taubes. The article is titled "Do We Really Know What Makes Us Healthy?" Dr. Attia also referenced a video that you can find on YouTube, Richard Feynman on Scientific Method; we'll have a link in the show notes to that as well. You'll find additional links in the show notes that will direct you to the InForm Fitness website, where you'll find a free slow-motion high-intensity workout waiting for you. Just click the Try Us Free button right there on the homepage, fill out the form, pick your location, and then you can experience a free full-body workout that you can complete in just 20 to 30 minutes. It's informfitness.com. And yet another link in the show notes is for Adam's book, titled Power of 10: The Once-A-Week Slow Motion Fitness Revolution. That link will take you to Amazon, and for less than 15 bucks you'll find a ton of nutritional tips, including a handy list of foods that support the Power of 10 protocol, and some effective demonstrations of exercises that you can perform in the comfort of your own home. You know, we have close to 50 episodes for you to binge-listen to if you're new to the podcast, so don't forget to hit subscribe in whichever podcast app you might be listening. And if you don't mind, we'd really appreciate it if you took a couple of moments to leave us a review. Until next time, for Adam Zickerman and Mike Rogers of InForm Fitness, I'm Tim Edwards with the Inbound Podcasting Network.

Transcribed by https://otter.ai