GiveWell Conversations

GiveWell has built its reputation on rigorous research—analyzing randomized controlled trials, building cost-effectiveness models, and reviewing monitoring data to identify cost-effective ways to save and improve lives.

In an effort to supplement this desk research and make better decisions, we’ve been working to gather more information directly from the people who live and work in the countries where we fund programs.

In this episode, GiveWell CEO and co-founder Elie Hassenfeld speaks with Principal Researcher Alex Cohen about GiveWell’s work to gather local insights to check our assumptions and figure out what we might be missing.

Elie and Alex discuss:
  • Testing key hypotheses about the data GiveWell relies on: We’re working to improve the inputs in our decision making. This includes checks on coverage data, more information about how programs work in practice, and assessing whether estimated program effects are plausible. These efforts have already identified discrepancies between coverage surveys, as well as other areas for improvement.
  • Addressing the limitations of global health data: GiveWell depends on credible data for its research, yet global health and development data is quite limited. For example, basic measurements like child mortality rates rely on household surveys conducted only every five years. We’re employing multiple approaches to address these limits: funding independent survey firms; conducting site visits; hiring local consultants; and strengthening networks with government officials, implementing organizations, and other funders.
  • Balancing the trade-offs between local work and desk research: Desk research will continue to make up the vast majority of our work. We believe that complementing that research with additional information we gather from local sources could meaningfully improve our grantmaking. We expect to dedicate around 5% of our research team’s time and around 1% of our total grantmaking to these efforts, which we believe will have an outsized impact.

By prioritizing efforts to learn from people in the places where we fund programs, we hope to better understand how programs are being implemented, identify bottlenecks, and check our assumptions. We believe that incorporating this information will improve our decision-making and our work to help people as much as we can. It provides checks on our primary models, increases confidence in our conclusions, and could highlight where we might be missing something important.

Visit our All Grants Fund page to learn more about how you can support this work, and listen or subscribe to our podcast for our latest updates.

This episode was recorded on December 23, 2025 and represents our best understanding at that time.

What is GiveWell Conversations?

Welcome to GiveWell’s podcast sharing the latest updates on our work. Tune in for conversations with GiveWell staff members discussing current priorities of our Research team and recent developments in the global health landscape.

Elie Hassenfeld: [00:00:00] Hey everyone, this is Elie Hassenfeld, GiveWell's co-founder and CEO. Today we're going to talk about some work that we're doing to deepen our research and hopefully make better decisions. We're going to talk about work that we're doing that we call local insights, and that's trying to gather information directly from people who are living and working in the countries where we fund programs. This could be through surveys, through hiring consultants, through site visits, but all of this is intended to complement the desk research that we spend the vast majority of our time doing, to help us better understand the way programs work and whether our understanding from the desk research matches the way that programs are actually being implemented on the ground.

You know, some of the problem that you can imagine us having is, you know, we are trying to determine where to direct grants to people who are living thousands of [00:01:00] miles away from us in very different contexts. We believe, and part of the philosophy of GiveWell is that desk-based research, looking at academic studies, often randomized controlled trials, building spreadsheets that help us estimate the cost-effectiveness of programs, and looking at monitoring and evaluation data from nonprofit organizations—all of that forms the vast majority of what we look at and really is the most important part of our decision making. But then there's always this question of what we might be missing and where the model, the research, the distance leads us to have a misimpression of how programs are implemented.

And so while desk research is the most important, we know that we have opportunities to improve by building up more of that local insight, that direct observation from us or others that help us really understand how programs are implemented.

And so today I'm talking to Alex Cohen, a researcher at GiveWell who's leading this work, about why we decided to spin this up, some of the things we might be missing, how big a problem we think [00:02:00] this is in our research today, some of the things we've done so far, and where we're going in the future. So Alex, thanks for joining us.

Alex Cohen: So glad to be here. Thanks.

Elie Hassenfeld: So let's just start in. Help us understand, help me understand, a little bit more. What do you think we might be missing today in our research and, you know, how could this help us address that problem?

Alex Cohen: Yeah, so I think there are a few areas where these sorts of local insights could create some big surprises. The first example is additional checks on the coverage data that we get. So when we fund bednet campaigns or vitamin A supplementation campaigns, we want to know what percent of people actually received bednets or vitamin A supplements. We typically get data on this from our grantees. They commission survey firms to go and ask people whether they received these commodities.

But you can think of a lot of reasons why it would be good to have an additional check on this data from an independent survey. You know, it's possible that an independent survey might find [00:03:00] different coverage than the grantee survey is reporting. If that coverage is lower, for example, that would mean that the program is less effective than we think. It might also point to ways that we should be considering boosting coverage of the program. And so it could be useful in other ways too.

The second one is a little broader, but it's more just: how do these programs work in practice? Is it plausible that they have the effect that we think? So we find, via desk research, RCTs [randomized controlled trials] on the effect of conditional cash transfers for vaccination. These tell us that, you know, there's decent evidence that giving cash to caregivers when they bring their kids in to get vaccinated boosts vaccination in different contexts. Sometimes it's useful to check that against ground truth data about why people don't use vaccines: Are the reasons people don't use them consistent with barriers that would be alleviated by cash?

There are also lots of times where we don't have an RCT. We fund programs that provide technical assistance to [00:04:00] increase vitamin A supplementation coverage. There's not an RCT of that. Instead we're relying on an understanding that the activities that are done by the organization actually lead to more VAS coverage, vitamin A supplementation coverage. And talking to people that are involved in those campaigns, talking to local decision makers, could give us a better sense of whether they're providing meaningful support or whether, maybe, there's not a plausible reason that these activities could increase coverage. Those are a couple examples.

Elie Hassenfeld: Let's just use one program; you can pick the program. Explain what that program is, then the current state of the data that we get about that program and what it tells us, and then what you could imagine additional investigation via local insights potentially telling us.

And I think something just to say at the outset is that one of the kinds of things it might tell us is, you know, unknown unknowns. In some ways, with everything that we do in desk research, it's like, well, we have specific questions. You know, what is the effect size of this program? How many people does it [00:05:00] help? What's the mortality effect of giving out bednets? And just getting closer and asking a lot of questions can raise other questions that we didn't even know we didn't know the answers to. So let's leave that to the side for a second. But let's just use one program to walk through how this works and what it could do for us.

Alex Cohen: Yeah, maybe let's say we're considering expanding water chlorination programs. These are programs that provide chlorine to households so that they disinfect their water and their children and other folks in the household are less susceptible to diarrhea and other waterborne illnesses. And suppose we're investigating whether to launch this program in Borno State. This is a state in northeast Nigeria. One of the things that we want to understand is, are people using chlorination there currently, or are they treating their water in other ways? And if they aren't treating their water, why not?

Elie Hassenfeld: Alex, let me just ask you, so why do we want to know that? Why is that important?

Alex Cohen: Yeah. We want to know that because if people are already chlorinating their water, then funding a program that is intended to raise [00:06:00] chlorination rates is going to be less effective.

And so that's kind of a basic test that we want to know. We also want to know, if people aren't, why aren't they? Is it because they can't afford chlorine? Is it because they don't know about it? Is it because they do know about it but they don't like the taste? That might point to different programs that we would fund. This matters in a few ways.

So, the way we might investigate that is to look at the DHS [Demographic and Health Surveys], a massive set of surveys that's done across countries, and then also try to do phone calls with people that have worked in the state. It's kind of challenging to do because this is a pretty remote part of Nigeria. I think the ideal or, you know, what would be better is if we had better sources of information on this.

So, could we run our own survey that collects information from households in the state about a variety of things, one of them being chlorination or water treatment? Do you treat your water? And if not, why not? If we're considering a program [00:07:00] for inline chlorination, which requires a certain set of water piping, gathering information on what people use would be valuable.

Then on the more qualitative side, you know, it'd be great if we could talk to community health workers. These are people that work in these communities and have a good sense of the health issues for poor people. What do they see as the barriers to chlorine use? Would the program that we're considering actually address those?

Elie Hassenfeld: Right, so maybe one way that I'm summing this up in my head is to draw a spectrum, and this will be overly simplified, but it might help explain it, and then tell me if this seems reasonable. You know, on one end of the spectrum, you might have something that you could consider, like the behavior of a nonprofit program that's not very data driven. And so you might say, all right, we want to run this program and we're just going to roll it out because we think it will be good. And so that's operating with, you know, very, very limited data—so that's not what GiveWell does, but you could imagine some hypothetical program kind of operating with the belief that their activity is [00:08:00] impactful without knowing a lot about its background.

Then you have this next stage where GiveWell says, all right, we're interested in rolling out water chlorination, supporting organizations that do that. The lowest touch way we can determine the need is to look at existing data sources that are, like, numerical sources on use of water chlorination. And there's this survey, the DHS survey that we talked about on an episode of the podcast several months ago, which, like every five years, does very intense data collection in a country, so you could rely on that. And then also, you know, we can call people up who are nearby. So it's sort of data that is more or less available, or information available at fairly low intensity. Those both miss a lot. And what they're missing is hearing from people directly about what they do, why they do what they do, in ways that could give us insights about whether the program would be effective.

And then there are our options. You know, one of the primary tools we could use is funding a survey on our own to send people [00:09:00] out, a research firm, a survey firm, to go house to house and get answers to questions that could inform us about the challenges to the program, the potential need for the program, how much people want the program, et cetera.

I'm curious what you think about that spectrum, how closely that maps to the way you're thinking about this.

Alex Cohen: Yeah. I think that's totally right, and I think one thing to just highlight there is we've been talking about surveys, so talking to a bunch of people and asking them a series of specific questions. I think there's a qualitative angle to this too. When we go on site visits, it's great to talk to the community health workers and ask them about programs they've seen. And, you know, maybe there was a chlorination program that got tried three years ago and it failed because of this obvious issue that someone who was on the ground would realize. So just to highlight, it's not just more quantitative surveys, but qualitative work too.

Elie Hassenfeld: So how do you then think about the cost-benefit here? You know, on one hand, how big a difference could this [00:10:00] make to our decision making, and also how much in resources is it worth investing, either money or time? And just, like, when you think about the different options, how do you think about that?

Alex Cohen: Yeah. So typically, when we're thinking about funding research at GiveWell, we use a value of information framework. The way this looks is, say there's a program that we think is maybe below our cost-effectiveness bar, but we're not really sure. And we fund an RCT, a randomized controlled trial, to see if it's better than we think. And that's valuable because there's some chance that we fund the trial, it causes us to believe that it's more cost-effective, and then we can direct funding to this program in the future. And you can kind of cash that all out into a cost-effectiveness number. I think the general idea is similar here, right?

We fund this work because we think it could cause us to learn things that maybe make us think certain grants are [00:11:00] more effective than we thought, and so we could direct more funding. Or cause us to think they're less effective than we thought, and so could direct less funding. Maybe they cause us to identify new grant opportunities that we wouldn't have found before. That's valuable too.

So I think it's a similar framework, cashing that out in terms of, well, we should spend $1 million or $10 million or $100 million on this. I think that's challenging to do, especially in this local insights case where, yeah, we have a few hypotheses that we want to test: Are coverage levels as high as we think? Are the barriers to these programs what we think they are?
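
[To make the value of information framework Alex describes concrete, here is a minimal sketch in Python. Every number in it is an illustrative assumption invented for this example, not a GiveWell estimate.]

```python
# Minimal value-of-information sketch: is a study worth funding?
# All numbers are illustrative assumptions, not GiveWell figures.

STUDY_COST = 1_000_000         # cost of the RCT or survey, in dollars (assumed)
P_UPDATE = 0.3                 # assumed chance the study changes our conclusion
FUNDING_AT_STAKE = 20_000_000  # grant dollars whose direction depends on the answer (assumed)
VALUE_GAIN_PER_DOLLAR = 0.25   # assumed gain per redirected dollar vs. the default use

# Expected benefit: chance we update, times the funding we would redirect,
# times the per-dollar gain from redirecting it.
expected_benefit = P_UPDATE * FUNDING_AT_STAKE * VALUE_GAIN_PER_DOLLAR

print(f"Expected benefit: ${expected_benefit:,.0f}")          # $1,500,000
print(f"Study cost:       ${STUDY_COST:,.0f}")                # $1,000,000
print(f"Worth funding?    {expected_benefit > STUDY_COST}")   # True
```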

Elie Hassenfeld: Yeah. So you said that there are a few hypotheses that we want to test. You know, it seems like there are these key inputs into our decision making that we rely on. You know, one example is we fund a program that delivers some health good. Then we get a coverage survey that tells us how successfully that health good was distributed.

We want to do extra work to answer the [00:12:00] question: How credible is that data about the coverage? Can you just walk us through the big hypotheses that we want to test?

Alex Cohen: Yeah, so another one is, our grantmaking decisions depend a lot on our moral weights. So this is how we trade off programs that increase consumption or the amount of income that people have to spend versus programs that reduce child mortality or mortality of other age groups.

Trading off these things is really challenging. We do it because we're considering programs some of which make people richer, some of which improve health, some of which do both. And we need a way to trade these off. We've used a variety of approaches to get at this. And one has been trying to ask people in countries where we fund, how they would make these trade-offs if they were deciding whether to give all people in their village $100 or access to a medication that could lower the chance of mortality by a certain amount, which would they pick? This is a really [00:13:00] challenging thing to elicit preferences about. But we think getting this input from people in places that we fund is important.

We've done this in a couple countries so far. We have not yet done it in Nigeria, which is a country that receives a large share of our funding. That's one area where we could learn more.
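
[To make the elicitation Alex describes concrete, here is a minimal sketch in Python of how a response to that village-level trade-off question can be turned into an implied moral weight. The village size and mortality reduction are assumptions invented for this example, not figures from the surveys.]

```python
# Minimal sketch: backing out an implied moral weight from a
# hypothetical village-level trade-off question. All numbers are
# illustrative assumptions, not survey results.

VILLAGE_SIZE = 1_000         # people in the hypothetical village (assumed)
CASH_PER_PERSON = 100        # dollars each person receives under the cash option
MORTALITY_REDUCTION = 0.002  # assumed drop in each person's chance of dying
                             # under the medication option

total_cash = VILLAGE_SIZE * CASH_PER_PERSON                   # $100,000
expected_deaths_averted = VILLAGE_SIZE * MORTALITY_REDUCTION  # 2.0

# A respondent who is exactly indifferent between the two options values
# averting one death at the cash the village would give up per death:
implied_value = total_cash / expected_deaths_averted
print(f"Implied value per death averted: ${implied_value:,.0f}")  # $50,000

# Respondents who pick the medication imply a value at least this high;
# respondents who pick the cash imply a value below it.
```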

Elie Hassenfeld: So we talked about coverage surveys. That's a kind of data that we rely on consistently. Then there are preferences about how to trade off between different kinds of goods. What else is on the list?

Alex Cohen: Yeah, we also talked about kind of plausibility checks on whether our grant funding increases coverage. We fund this program to provide technical assistance. What are the exact activities that they're doing? Do those seem valuable to people in that country?

The other one I'd add is, what would happen if we didn't fund these programs, you know, if we didn't fund a bednet campaign, would people get bednets? Would they get them from other sources? Would another funder step in? This is a challenging thing to get at, but I think there are a couple [00:14:00] avenues we could consider.

So one example is, yeah, there are some states in Nigeria that have not gotten net campaigns in several years. What does their bednet coverage look like? So again, we've got data from the DHS on this, but we can also dig a little bit deeper and understand, you know, if DHS says, actually, net coverage is pretty high in some of these places where net campaigns haven't occurred in a long time, should that make us think that campaigns are not going to increase coverage that much? Or is it the case that people are using really worn out nets, really low-quality nets? We could try to dig deeper on that, either through surveys or by talking to people with experience on the ground. Digging deeper into this question of, if we didn't fund net campaigns, would people get nets anyway—that's the parameter we're trying to learn more about.
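
[To make that parameter concrete, here is a minimal sketch in Python of how counterfactual coverage feeds into an impact estimate. The coverage numbers are illustrative assumptions, not estimates for any actual state.]

```python
# Minimal sketch of how counterfactual coverage feeds into an impact
# estimate. All numbers are illustrative assumptions.

COVERAGE_WITH_CAMPAIGN = 0.80     # assumed net coverage after a funded campaign
COVERAGE_WITHOUT_CAMPAIGN = 0.30  # assumed coverage if we hadn't funded it
                                  # (people buying nets, other funders stepping in)

# The campaign's true effect is only the coverage it adds on top of what
# would have happened anyway.
coverage_attributable = COVERAGE_WITH_CAMPAIGN - COVERAGE_WITHOUT_CAMPAIGN
print(f"Coverage attributable to the campaign: {coverage_attributable:.0%}")  # 50%

# If surveys showed, say, 60% coverage in states with no recent campaign,
# the attributable effect would shrink to 20 percentage points, making
# the campaign look considerably less cost-effective.
```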

Elie Hassenfeld: Got it. And so we're talking about a few different kinds of activities. There's funding survey firms to collect data from [00:15:00] a group of people whose input will affect our decisions. Then there's maybe funding people to report back on some form of qualitative information. Maybe that is the survey firm or someone else. There's visits that we make ourselves directly—like our staff, you or me and others. And we just look at programs. We talk to people, we see what we learn. What else, if anything, is included or does that really cover the gamut of what we're talking about here?

Alex Cohen: So I would include maybe two more things. One is just developing our networks of people that live and work in these countries. This is everybody from government officials working at the Ministry of Health to people that have implementation experience. You know, we can build these relationships by visiting in person, but these are people that we can oftentimes talk to from our desks too. Either way, deepening those networks is really important.

And then the other one I'd add is there are some more, I guess, experimental approaches that we're looking at. So [00:16:00] one is doing AI chatbot–based cell phone surveys. We're working with an organization called Dimagi that has access to a group of community health workers in northern Nigeria. And they're able to send questions to this group about their activities, about what they're seeing in their communities. Then an AI chatbot has a conversation with them to probe more on questions about coverage or questions about major issues people in their community are facing, major barriers they're facing as a community health worker. That's something that we're starting to explore. That's a newer area, and there are maybe some other more novel ways to collect data too, but that's the other one I'd add.

Elie Hassenfeld: I think one thing that might be interesting to think about for a second is, you know, how we are choosing to solve or address this challenge and how others do the same thing.

So, well, first, I mean, one way that I see this challenge, which I think may not be obvious to people on the outside, is that, [00:17:00] relative to many other sectors in the economy, development is, in some sense, a very data-poor environment. We just talked about GiveWell working in global health, and that's a data-rich environment relative to other areas of philanthropy. But relative to, say, the financial markets or the US economy, the data available about what is happening in a given state in Nigeria is extremely limited. And so on one hand it is hard to know what is really happening with the data available, sort of to an extreme extent.

A second part of the problem is, you know, we—GiveWell—can talk to a nonprofit, you know, say, I don't know, CHAI, the Clinton Health Access Initiative, about a program. But it's like, GiveWell talks to a leader at CHAI in, you know, the US or the UK, and then they talk to a program person who leads their work on malaria or perhaps their work in a particular country, Nigeria. Then that person, you know, has to talk to someone else. And so the chain of people that information has to pass through to go from implementation [00:18:00] to the GiveWell decision maker is very long.

And so there are these two challenges—yeah, I'm trying to lay out my thinking and I'm curious how you see this. Number one, we're trying to make decisions with good information, but it's very hard to make good decisions because data is limited. Number two, access to information requires this long chain.

And then there are different ways to solve this problem. One way to solve the problem is to spend a lot of time doing visits yourself, you know, sending staff, our staff, there. So you kind of do an end run around the full chain and talk to the people directly. Well, the challenge with that is it's extremely time-consuming. And I still think it's the case that something like 90% of the relevant information for the decision comes from desk-based research. And so we don't want to allocate too much of our staff time to doing visits, because, well, that means they won't have the time to do something else.

And then another way organizations solve this problem is you set up country offices and country teams. So you could imagine GiveWell hiring full-time staff in Nigeria who lead our Nigeria work. [00:19:00] And then the challenge we have with that approach, aside from the fact that GiveWell is still a relatively small team, is that even if you did that, you'd still have the challenge of feeding the information from that team back into the overall GiveWell approach.

And one thing that GiveWell tries to do, more or less, because we can't do this exactly, is try to compare all programs on similar footing. And so, one way that organizations sometimes solve this is through decentralization. So there's a decision maker sitting in a country office in a particular country, and they are given the discretion to grant funds based on their best judgment.

Well, that does a lot to increase the weight on local insights, but what it also does is decrease the weight on sort of the aggregation of data across the whole enterprise. And so I think this is the challenge we face. It's a low-information environment where it's hard to get data. Decentralization doesn't seem like it really works for us, because we believe strongly in trying to do our best to quantify and compare things in like terms. And so the approach that we're taking [00:20:00] is trying to collect our own data and, you know, in one way or another, feed it back into the machine, the GiveWell decision-making machine. And, yeah, I'm curious where that description seems right and where you think it's missing important nuance.

Alex Cohen: Yeah, so I think broadly that seems right. Maybe let me add another challenge and then maybe put in a plug for slightly more local insights and slightly less desk research.

So the other challenge is just that we often hear conflicting things from different people. It's not like we have this chain, we can ask people questions or we can go in person and ask people, and we get a clear answer that everyone agrees on. Oftentimes, yeah, there's disagreement. People in one area may say that bednets are really popular here, there are no issues. In another area, they may say no, people hate them, they're too stiff. We have to triangulate that.

I think that's not an unfamiliar problem. I think even when we're [00:21:00] doing desk research, you know, the numbers might not agree. Sources not lining up is not an unknown problem, but it makes things challenging, I think, especially in a case like this, where we're probably more skilled at resolving discrepancies in empirical work than we are with on-the-ground information. So I think that's another challenge.

Yeah, I think the other thing I'd add is that going on site visits and thinking more about this local insights work also just makes me less sanguine about the desk research. A desk research project might be: let's pull the data that we can get on bednet coverage.

This past summer, I went with some other GiveWell staff to shadow the enumerators that do post-distribution monitoring for bednet programs. They ask households that receive nets, did you receive your net? Are you using it? Can we see it? Is it in good condition? So we shadowed these enumerators for a couple days, and I think it was a learning experience in a lot of ways, but it just really drives home that these surveys are [00:22:00] collected by people and they have incentives to take shortcuts or they've just got a lot of work to do in a limited amount of time. And so just because we're at our desks and see a piece of information, I don't know, in a PDF that we're reviewing, doesn't mean that it's the truth. And I think we've always known that, but I think doing these visits drives that home for me.

Elie Hassenfeld: Yeah. And so how do you counter the claim or the question that's like come on, GiveWell, or come on, Alex, this is self-evident. How can you take any of this data to mean anything? Like, it's all, you know, garbage in, garbage out. I mean, what do you say back to that kind of objection?

Alex Cohen: Yeah, it's a good question. I think it's good to be skeptical of these data in the sense that we should understand where they're coming from, what sources of bias are in there. But yeah, I guess my very high-level abstract answer is, yeah, I think there's a decent amount of signal in a lot of data that we use. And the way we get that is by triangulating [00:23:00] different sources of information, trying to get different estimates, trying to talk to different people about this, trying to supplement what we read in a study with what the experts in the field really say and what people who work in the country really say. I think it's a lot of trying to triangulate, trying to sense check things. I don't think we should close up shop and say it's all useless, we should just, you know, throw a dart at the board and decide what to fund. But I think it's good to have some skepticism.

Elie Hassenfeld: Yeah. I mean, I feel like when we've dug in, it seems like what we have often found is that the data we're getting is clearly imperfect or off by some amount, but also has a lot of meaningful signal in it. And then, you know, I think part of where we're coming from with the local insights work is that it's also very easy—I mean, I think we've all had this experience. I've certainly had it. If you go on a site visit, a site visit is literally five or six days looking at various programs. And it's very easy to think that it is appropriate to draw a conclusion based on one observation.

And I think part of the challenge here is, on one [00:24:00] hand, recognizing the limits in the aggregate data that we're receiving. And then, on the other, recognizing the limits of just direct anecdotal observation, and trying to marry the two as best we can. The way you put this earlier is a framing that really resonates with me, which is that we have some hypotheses and we want to test the hypotheses.

So one hypothesis that we have is, you know, coverage survey data is pretty good, but we really want to check how good it is. And so doing extra work to check that hypothesis is valuable. And you know, if we learn that it's far off, then we'll just adapt to that and can do more. And on the other hand, if it is looking even better, you know, more precisely estimated than we previously thought, then that can also teach us something.

And so a lot of this is just trying to, you know, as best we can, operate in an information-poor environment and recognize that any path we take has significant limitations. Just relying on the data we get, with some adjustments, is limited, as is anecdotal observation, so we try to marry the two.

And then, just because of the nature of the challenge of [00:25:00] gathering the data, it's time-consuming and expensive to even make progress. You know, we're operating at the pace of years, maybe months, you know, rather than days, in terms of being able to experiment and turn around new information.

I wonder if you could just talk a little bit about that, because I think it's helpful too: the timeframes here and the feedback loops. We're talking about these hypotheses, and then we need to test them. And it's just not the case that we can go out and get an answer in a week. It takes a long time to really learn something from even an individual project that's part of this larger initiative. So maybe just talk through the timeframe for a single project.

Alex Cohen: So let's take the in-person surveys that we're funding. So I think with these, it's about three months between when the survey goes to the field and starts collecting data, and when we get the results: the share of people that are using nets, or people who are resistant to vaccines for various reasons, or how people trade off these questions about income versus mortality for our moral weights.

The nice thing, I guess, about doing [00:26:00] a survey like this, relative to other research like a randomized controlled trial, is that we're just going to get a snapshot at a point in time. It's not like we do a pre and post, or have treatment and control groups and then follow up with them over time. So it's relatively quick in that sense.

When we're talking to experts or working with local consultants, you know, they're out, they're collecting information, they're reporting back to us. Or if we're doing these cell phone surveys, we can get that at a relatively faster clip.

So, it's not immediate. It definitely takes less time than doing desk research typically. I think the other piece is it requires work on our side. So if we're commissioning this in-person survey, we want to be thoughtful about which questions we're asking and the right way to ask them, so that we're getting unbiased answers. That takes a fair amount of work on our side too.

Elie Hassenfeld: Right. I mean, often questions that we have are somewhat abstract. They can be difficult to communicate, then they have to be [00:27:00] translated into another language. On one of the visits I did to Kenya, one of the things we did was go house to house and sit with survey enumerators, the people whose job it is to go house to house and ask people questions.

And, you know, just to give a sense, to walk through a set of 10 high-level questions—What's the health of your children? What is their health history? What food do you eat? How often do you eat it? What's your income level? What challenges do you face?—I mean, that kind of survey takes, you know, an hour and a half and a lot of skill by the enumerator to be able to administer it. And so it's just, you know, a time-consuming and expensive activity to gather this data.

Alex Cohen: Yeah, that's right. Yeah, definitely, we've got other examples of that. So, part of this survey is asking about vitamin A supplementation, and this is one where we've observed differences in coverage across different surveys. We think part of it is because sometimes vitamin A supplementation is asked about in the section on vaccines and sometimes it's not. And the reason is that some caregivers think that vitamin A is just part of the vaccine sequence, so if you [00:28:00] ask about it outside of that context, they're confused. They don't know what it is. But if you ask about it in the context of vaccines, they remember. And so there's lots of stuff like that. It's challenging.

Elie Hassenfeld: And then just to give a sense, what portion of GiveWell's research time do you think will go into this in 2026? You know, even ballpark, you know, is this like 50% or 5%? I mean, I know it's not 50%, but I don't know how you would categorize the time. And then what portion of the funds we direct next year will be going into this kind of activity, just to give some ballpark sense.

Alex Cohen: Yeah, so I think closer to 5% seems about right across the research team, in terms of this work. In terms of grant funding, it's likely less than 5% of our grantmaking. These surveys are, I guess, fairly inexpensive. So maybe we spend $3 million or so on this, max. So definitely more of a staff time investment than grantmaking. That's my best guess.

Elie Hassenfeld: And does that include [00:29:00] folks going on site visits? Is that included in that number in your head? And that basically comes to, you know, with the research team at between 50 and 60 people, let's say 60 people right now, 5% is about three full-time equivalents working on this for a year.

Alex Cohen: Yeah. I think that's right. So, we did quite a few site visits this past year. My instinct is we'll do fewer this next year. Definitely not zero. But my guess is that those will go down a little bit as we focus more on grantmaking targets.

Elie Hassenfeld: Yeah. And so basically, you know, this is an activity where the investment is more the time it takes to figure out how to do it well, and to do it well, than the actual money involved. And so, yeah, it's like 5% of our time and less than 1% of our grantmaking. Yeah, that's great. Give me your thoughts on how valuable this could be for our grantmaking.

Alex Cohen: So I mentioned that I don't have the value of information calculation in terms of number of dollars of grantmaking that could be affected. But I think there are a couple examples that stand out to me and make me think there probably are areas where we might be off.

And so [00:30:00] the first example is, I think, over the past year or two, we've learned a lot about chlorination programs that we've funded. We recently published a lookback that talked about some discrepancies we got between independent coverage survey estimates and grantee-funded survey estimates of chlorination rates. The independent coverage estimates were much lower. This was also consistent with some observations that folks had on site visits, noticing that, you know, these coverage surveys are pretty subjective. It seems like there could be wiggle room there. That's one example.

And the other is just my own visit to Uganda this past summer. And I'll repeat the note about not extrapolating too much from a couple weeks, but there were things that changed my perspective quite a bit. One was just seeing examples of how challenging implementation of these programs is. You think something like distributing a bednet should be straightforward. You bring the bednets. You give them to people. We heard stories about bednets being thrown into crowds of people and [00:31:00] you have to grab one out of the air. That's one example, but it makes you realize that these things can be more challenging than you think.

And then the other was, we talked to community health workers in a few villages in Uganda. And I was surprised by how little ability they had to provide care. I was under the impression, based on some background reading, that they were able to provide some basic level of treatment, basic testing. But when we talked to individuals in these communities, both people that would go to the health worker and the health worker themselves, they mentioned they didn't have access to malaria tests, malaria treatment. The best they could do is refer someone to the government health center.

And that was a big surprise. Again, not something that makes me think we should adjust cost-effectiveness up or down, but the delta between what I thought going in and what I saw there makes me think there are a lot of cases where our understanding could be off.

Elie Hassenfeld: Yeah. Thanks, Alex, for talking this through.

Alex Cohen: Thanks, Elie.
--

Elie Hassenfeld: [00:32:00] Hey, it's Elie again. So, you know, I think one of the things that really comes through in this conversation is just how challenging it is for GiveWell to figure out the answers to the questions we have so we can allocate funds well. I think that's sort of obvious from the outside: it's hard to help people living 10,000 miles away. But when you get into the details of what it actually takes, looking at data from various surveys when that data is only collected infrequently and at low granularity, you see that really knowing how a program like malaria nets, one of the most studied programs that exists, works in practice is so challenging. And, you know, we recognize that, and that's why we're trying to spin this work up. In our view, the imperative to help people overseas is there because the needs are so great and money can go so far. It's clear that we should rely on serious, rigorously collected data to form the bulk of our decision making. And that's what we've done in our entire history.

And then we know that that data has flaws and is certainly not the [00:33:00] truth with a capital T. And so this year we plan to keep investing more in this effort to understand where the data we get is falling short and where it can be better so we can make better decisions in the future.

As always, thank you for listening and for your interest in engaging with our work. If you want to support the kind of work that we do or the organizations that we grant to, you can find options to do that on our website at GiveWell.org/donate. Thank you so much.