You’re Still Looking for Your Keys Under the Streetlight

Emmett Shear went on the Clearer Thinking podcast and discussed effective altruism among other things. We can split up his points into things I agree with and things that seem incorrect. I’ll do this using Scott Alexander’s Effective Altruism As A Tower of Assumptions for some landmarks. The tower in question:

Scott’s note: “Not intended to be canonical; realistically it would be more of a tree or flowchart than a tower.”

Shear:

…what people want is a place to put money so they can buy indulgences, so they can feel less guilty. And unfortunately, that’s not a thing that exists. No one is printing indulgences that you can just give money here and you have done your part. That’s, unfortunately, just not how it works. Do your best. Do enough. That’s good. I love that Giving What We Can pledge. I think that’s a hugely beneficial idea like, “Hey, what if we all just took 10% and we said that was enough?” That would actually be way more than people give today. And it would also be enough, I think, if we all did it. Then people could stop beating themselves up over not feeling guilty about not doing enough, which I think is acting from a fear of “I am not good enough.” That’s one of the most dangerous things you can do.

First of all, as always, if you’re critiquing EA, that means you’re doing effective altruism. You’ve agreed that there are multiple ways to do good in the world, and you think some are better than others. You’re already at the base of the tower. Next, Shear agrees that donating 10% of your income is good. Giving What We Can seems pretty squarely inside EA. It’s possible Shear doesn’t realize that GWWC is a “…public commitment to give a percentage of your income or wealth to organisations that can most effectively help others”. So either Shear is endorsing effective giving, or he didn’t realize what it was and he’s making a critique of a group of ideas he apparently has little understanding of (I suspect it’s the latter).

The part about indulgences makes no sense to me. Perhaps I’m an outlier. I’m a big-tent libertarian as well as a big-tent EA, so I have zero guilt about the money I make. I like the free enterprise system, and I think it makes the world better. I tend to think EAs are much more accepting of market benefits than other groups in the NGO space, which can skew very left-wing, but maybe other EAs actually do feel guilty about making money. If they do, I agree with Shear that the point of EA isn’t to make you feel less guilty.

However, I think the point of EA is really obviously not that! EA was created out of a specific problem that actual people like Dustin Moskovitz, Holden Karnofsky, and Elie Hassenfeld actually had, which was:

  • I’ve got a ton of money
  • I’d like to donate it in ways that use the money well, but
  • There’s no data on which charities actually accomplish good things

Indulgences have nothing to do with it. Next, let’s move up the tower to cause prioritization:

The malaria bed nets thing is the classic like, drunk looking for his keys under the streetlight…It’s a little unfair. There are keys to be found underneath the spotlight of quantifiable, measurable impact. And it is good work to go do that. But like most good that can be done in the world, unfortunately, is not easily measurable by randomized controlled trial or highly measurable, highly quantified, very trustworthy impact statements. To the degree we find good to be done on those, we should fund that stuff.

…You’re reducing the variance on your giving by insisting on high measurability, because you know for sure you’re having this impact. It’s not that doing that kind of low variance giving is bad, it’s just, obviously, the highest impact stuff is going to be more leveraged than that. And it’s also going to be impossible to predict, probably non-repeatable a lot of the time, and so, sure fund the fucking bednets. But, that’s not going to be THE answer. It’s just AN answer.

Obviously Shear is going off the cuff, but it’s clear he’s never heard of Open Philanthropy’s post on Hits-Based Giving, which is like 8 years old at this point. GiveWell has a fund explicitly to incubate new ideas that haven’t been proven yet. It’s well known that not every opportunity is going to fit into a rigorous RCT scenario. A major benefit of GiveWell is to provide a better baseline compared to what most people donate to. GiveWell really does save more lives than donating to your college. If Emmett thinks donating to Yale (his alma mater) is better, he should make that case!

But sure, I agree that even if we gave $100 billion to GiveWell over the next 10 years, and even if they knew exactly the highest-impact thing to do with it, it’s not like all of humanity’s problems would be solved. There are a lot of very intractable political stability and institutional issues around the world, and bednets won’t necessarily solve that. But check where we are on the tower! GiveWell is there to be better than the generic charitable giving most people do, and I think it’s pretty good at that.

The conversation turns to x-risk, but this also misses a lot of work. OpenPhil gave money to the Institute for Progress, which does all sorts of innovation policy work on immigration, biotechnology, and more. OpenPhil gives money to animal welfare, to land use reform, and to global health scientific research. To critique EAs for being too focused on measurable RCTs is just bonkers. But let’s talk about x-risk:

And I’d say on the other side of it, the “Oh, but isn’t it more important to go after nuclear risks and stuff like that, or AI risk or whatever?” “More important” is the problem. That idea that you can rank all the things by importance and that you could know, in a global sense, which of these things is most important to work on, like what is most important for you to do is contextual to you. It’s driven by where you will be able to have the best impact, which is partly about the problem, but also partly about you, and where you live, and what you know, and what you’re connected to. And if you care about one of these, you think you have an inclination that there’s a big risk over there, learning more about that and growing in that direction might be a good idea.

But, the world is full of unknowns. To think that you’ll have THE correct answer is like, “No, you won’t. You’ll not only not know THE correct answer, you won’t even have a full ranking. You’ll just have a bunch of ideas of stuff where your estimates all overlap each other and have high variance… Or how you, in order to get out of analysis paralysis, insist to yourself, “We have found the correct answer: AI x-risk is the most important thing. That is all I’ll devote my life to because nothing else is nearly as important because that’s the thing.” And like, maybe, maybe not. How do you know? You don’t know. You can’t possibly know, because the world is complicated.

As we’ve said earlier, if you’re making a critique of EA, then you’re already admitting that some causes might be better than others. Shear is trying to get around this by dismissing all prioritization as impossible. This contradicts what he said about bednets, where he argued that the highest-impact work is not going to be provable under an RCT regime. But I think we can charitably restate his argument thus: charitable impact is a Hayekian information problem, where the relevant information is scattered and everyone holds specialized knowledge. In this world, you can’t standardize impact because it’s unique to each person.

Again, if Emmett wants to argue that Yale donations are as good as GiveWell, he should do that! I’m not convinced. But let’s talk about individual advice. Does EA just tell everyone to focus on AI, only, all the time? No, of course not. Shear is just repeating what effective altruists already do as if it were some fundamental demolition of their core beliefs. 80,000 Hours doesn’t tell everyone to go into AI risk. Most EA money doesn’t go towards AI x-risk. The point is that AI risk wasn’t something people worried about at all until effective altruists started talking about it, well before AI blew up after the transformer paper.

And moreover, the EA record on cause prioritization is really good! It turns out prioritization is possible after all! There’s been a lot of interest in risks from pandemics for a long time, and EAs weren’t the only ones talking about it (see the Bill Gates talk from 2014). But EAs flagged this as an important and neglected area before COVID-19 killed millions of people. We need more people being ahead of the curve like this, and we should see who has a good track record of working on problems early so that we have solutions on hand when the problems become real.

To Emmett’s credit, he later goes on to say that an AI which makes it easier to build a better AI could in fact be “end-of-the-world dangerous”. But it strikes me as strange to believe AI could end the world while also holding that it’s impossible to prioritize some resources toward it.

If there’s a single EA concept that it’s clear Emmett Shear doesn’t understand, it’s “neglectedness”. Doing something good is nice. Doing something good, that no one else has thought of yet, where there’s lots of low-hanging fruit and high payoff: that’s superb.

…What’s the most highly measurable impact that I can have? But you know, the charitable work I’ve done that probably had the biggest impact in terms of leveraged impact has always been opportunistic. There’s a person in a place, and I know because of who I am and who they are that I can trust them and this is a unique opportunity, and I have an asymmetric information advantage. And I am going to act fast with less oversight and give them money to make it happen more quickly. And that’s not replicable. I can’t give that to you as another opportunity to go do because most high impact things don’t look that way…

This sounds plausible until you think about what effective altruists have already done. When Elie and Holden formed GiveWell, there was no repeatable, replicable way to give money to save the most lives per dollar. Imagine if they had had this attitude when starting out. They would have thrown up their hands and said “welp, guess we’ll go home and donate our money to Yale!” Instead they built from scratch an organization that tried to understand what global health charities actually did and whether they were helpful. And in 2022 GiveWell raised $600 million and directed it towards places where they expect to save a life for roughly $5,000. Things aren’t replicable and scalable until you recreate the world to make it so. You’d think a Silicon Valley CEO would know better!

Maybe one of the highest impact things you could have done was to invest money in YouTube because YouTube has created this massive amount of impact in terms of people’s ability to learn new skills or whatever. Or donate money to Wikipedia earlier or something. But that’s not replicable. Once it’s done, it’s done. You need to figure out the next thing…

There’s a lot to say here. There’s a big difference between market transactions, which pay for themselves, and charitable work, which doesn’t. It’s completely reasonable to ask whether there are situations where the free market won’t solve a problem but altruistic giving could, so I don’t think the YouTube analogy makes any sense.

But setting that aside, I need to shout “NEGLECTEDNESS” loudly into the void until someone hears me. There is no world in which the internet exists but there’s no major video-focused social media site. For God’s sake, when Google bought YouTube, they already had their own video hosting site in Google Video. If Google had cancelled that project, I’m 100% sure Facebook would have grabbed the free money; they tried to create a video hosting platform even when YouTube already existed. Your investment in YouTube is worthless on the margin when it comes to altruistic impact, because there’s a vibrant market searching for business opportunities in the space. Distributing bednets that would never have been distributed otherwise actually makes a difference!

And of course, I totally endorse acting fast with less oversight. Fast Grants was good! ACX grants are good! I think this counts as EA, but to the extent that other people don’t, I will agree with Shear on the Silicon Valley mindset of variance and experimentation. I just think effective altruism does this already.

Atop the Tower

Alright, so we’re at the top of the tower. Maybe there are EA orgs which should be criticized more explicitly. Could be. I think the ones I brought up here, like GiveWell, OpenPhil, and 80,000 Hours, are actually already doing the stuff Emmett Shear says they should be doing. And there are individual projects and focus areas that I don’t think are very impactful, but if Shear thinks there are specific higher-impact projects, he doesn’t do a good job of conveying that.

I also actually think there probably are some big blind spots in EA as a whole. EA isn’t left-wing, but it’s a lot more left-leaning than I am. I suspect there are real conservative critiques that EA hasn’t internalized. I’m sure there are traditional values doing load-bearing work we don’t realize, and since EA is pretty explicitly opposed to most conservative values, I suspect that could result in some poor outcomes in ways that are hard to predict. It’s a hard problem, and I wish someone with more time could think about it more deeply.

But what frustrates me to no end is that EA critics never seem to bring up good points. Their critiques are often just like Emmett Shear’s: they attack a strawman that doesn’t exist, they don’t understand that EA invests in a broad array of cause areas, and they bring up points that EAs have been discussing for years as if they were novel. They say the effective altruists are looking for their keys under a streetlight, because they’ve never bothered to move out from under their own streetlights.