Book Review: The Precipice

I have titled my annual blog post summarizing where I donate my charitable budget as “How can we use our resources to help others the most?” This is the fundamental question of the Effective Altruism movement, which The Precipice’s author, Toby Ord, helped found. For a while, Toby Ord focused on figuring out how to fight global poverty, doing the most good for the worst-off people in the world. Now, he is focusing on the long-term future and existential risk.

The Precipice is fantastic. It’s incredibly well written, engaging, and approachable. It covers a lot of ground, from why we should care about the world to what risks humanity faces in the future, how we might think about tackling those risks, and what the future might look like if we succeed.

The Precipice eloquently interweaves fairly philosophical arguments with more empirical analysis of the sources of existential risk, and tries to statistically bound them. The book tackles a deeply concerning topic, the potential end of humanity, but it does so with an eminently reasonable approach. The complexities of philosophy, science, probability, epidemiology, and more are all brought into the narrative, yet made easily digestible for any reader. I honestly wish Toby Ord could teach me about everything; his writing is that clear and engaging.

The main discussion never overwhelms with technical detail, but if you find a point interesting, even the footnotes are amazing. At one point I came up with a counterpoint to Ord’s position and wrote it down in my notes, only to find that the next several paragraphs addressed it in full, with an entire appendix going into even more detail. Honestly, this will be less of a book review and more of a summary with a couple of final thoughts, because I think this book is not only excellent, but its content is perhaps the most important thing you can read right now. You are welcome to read the rest of this blog post, but if you have found this compelling so far, feel free to stop reading and order Toby Ord’s book posthaste.

Existential Risk

The consequences of 90% of humans on Earth dying would be pretty terrible, and given our relatively poor response to recent events, perhaps we should better explore other potential catastrophes and how we can avoid them. But The Precipice goes further. Instead of 90% of humans dying, what happens if 100% of us die out? Certainly that’s strictly worse (100 > 90), but in fact these outcomes are far apart in magnitude: if all humans die out today, then all future humans never get to exist.

There’s no reason we know of that would stop our descendants from continuing to live for billions of years, eventually colonizing the stars, and allowing for the existence of trillions of beings. Whatever it is that you enjoy about humanity, whether that’s art, engineering, or the search for truth, that can’t continue if there aren’t any humans. Full stop. As far as we know, we’re the only intelligence in the universe. If we screw up and end humanity before we get off this planet, then we don’t just end it for ourselves but perhaps we end all intelligent life for the remaining trillions of years of the universe.

Even though I was aware of the broad thesis of the book, I was continually impressed with just how many different angles Ord explores. Early on, he notes that while we might normally think of a catastrophic extinction event, like an asteroid impact, as the thing we are keen on avoiding, there are in fact several scenarios that would be similarly devastating. For example, if humanity were to suffer some calamity that did not kill everyone but left civilization stuck at pre-industrial technology, that would also preclude humanity from living for trillions of years and colonizing the stars. A 1984-style global totalitarian state could also halt humanity’s progress, perhaps permanently.

Ord also discusses the fundamental moral philosophy implications of his thesis. The natural pitch relies on utilitarian arguments, as stated above: if humanity fails to reach its potential, this harms not only any humans currently alive but all future generations. Other arguments against extinction include a duty to our past and what we owe to our ancestors, the rights of those future generations who don’t get to decide for themselves, and the simple fact that we would lose everything we currently value.

The book categorizes three types of risk: natural, anthropogenic, and future risks. Natural risks include asteroids, supervolcanoes, and stellar explosions. These are pretty diverse topics, and Ord is quite informative. The story about asteroid risk was particularly fascinating to me. In the 1990s, the then-recent discovery that an asteroid impact killed off the dinosaurs led Congress to task NASA with identifying all the largest near-Earth asteroids to see if they pose a threat to Earth. Congress allocated some money, and NASA tracked every near-Earth asteroid over 10 km across and determined that none pose a threat in the next century. NASA then moved on to asteroids over 1 km and has now mapped the vast majority of those as well. The total cost of the program was also quite small for the information provided: only about $70 million.

This is one of the rare successes in existential risk mitigation so far. Unfortunately, as Ord points out several times in the book, global spending on foundational existential risk research is currently no more than about $50 million a year. Given the stakes, this is deeply troubling. For context, Ord points out that the global ice cream market is about $60 billion, some 1,000 times larger.

I’ll skip the other natural risks here, but the book bounds natural risk quite skillfully: humans have already survived about 200,000 years, roughly 2,000 centuries, so natural extinction risk can’t be much higher than about 0.05% per century. Even then, we’d expect our technologically advanced civilization to be more robust to these risks than we have been in the past. Many species survived even the largest mass extinctions, and none of them had integrated circuits, written language, or the scientific method.
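
To make the arithmetic behind that bound explicit (my own sketch of the reasoning, not a quote from the book): if the per-century risk p from natural causes were much higher than 1 in 2,000, then surviving all of human history so far would have been wildly improbable.

```latex
% Back-of-the-envelope bound (my own sketch): surviving N centuries at a
% constant per-century extinction risk p has probability (1-p)^N.
% With N = 2000 centuries (~200,000 years), demanding that our survival
% so far not be wildly improbable, say better than e^{-1}, forces:
\[
  (1 - p)^{2000} \approx e^{-2000\,p} > e^{-1}
  \quad\Longrightarrow\quad
  p < \tfrac{1}{2000} = 0.05\% \text{ per century.}
\]
```

The e^{-1} cutoff is arbitrary, but any reasonable threshold puts p at roughly this order of magnitude, which is the spirit of the book’s bound.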

That doesn’t mean that all risk has declined over time. On the contrary, according to Ord, the vast majority of existential risk is anthropogenic in origin. Nuclear weapons and climate change dominate this next section. It’s remarkable just how cavalier early tests of nuclear weapons really were. Ord recounts two major calculations undertaken by a committee of Berkeley physicists before the Manhattan Project got underway in earnest. One was whether the temperature of a sustained nuclear reaction would ignite the entire atmosphere in a conflagration (the committee believed it would not). The other was whether lithium-7 would contribute to a thermonuclear explosion (it was believed it would not). It turns out that lithium-7 can contribute to a thermonuclear explosion, as was discovered when the Castle Bravo test came out about three times larger than expected, irradiating some 15 nearby islands.

The other calculation turned out to be correct, and the first nuclear explosion in 1945 did not ignite the atmosphere. But clearly, given the failure of the lithium-7 calculation, the level of confidence here was not high enough to warrant risking the end of all life on Earth.

Luckily, the current risk that nuclear weapons or climate change would wipe out humanity entirely seems quite low (although not zero). Even a nuclear winter scenario or high sea-level rise would not make the entire Earth uninhabitable, and humans could likely adapt, although the loss of life would still be quite catastrophic.

Instead, the bulk of the risk identified by Toby Ord lies in future technologies that grow more capable every year. These include engineered pandemics, enabled by our increasingly powerful and cheap control over DNA synthesis, and unaligned artificial intelligence, enabled by our increasingly powerful and integrated computer systems.

The threat of engineered pandemics feels particularly pressing as I write this in August 2020, with SARS-CoV-2 still sweeping the world. Ord notes that even under quite optimistic assumptions about how few people would want to destroy the world with a virus, if the cost is cheap enough, it only takes one crazy death cult to pull the trigger. Even the accidental creation or release of a superweapon is a serious risk, as production is cheap and there are many past examples of bioweapons accidentally leaking from government laboratories. Unfortunately, we are also woefully unprepared on this front: the Biological Weapons Convention had a budget of just $1.4 million in 2019, which Ord notes is less than the budget of an average McDonald’s restaurant.

Risks from unaligned artificial intelligence are similarly tied to technical advancement. Ord notes that artificial intelligence has had some impressive achievements recently, from photo and face identification to translation and language processing to games like Go and StarCraft. As computer hardware gets better and more specialized, and as we discover more efficient algorithms, we should expect this trend to continue. It therefore seems plausible that sometime in the future, perhaps this century, we will see artificial intelligence exceed human performance across a wide variety of tasks. The Precipice notes that, were this to happen with some sort of general intelligence, humanity would no longer be the most intelligent species on the planet. Unless we have foresight and strategies in place, a superior intelligence with its own goals could be considerably dangerous.

Unfortunately, we are already quite poor at getting complex algorithms to achieve complicated goals without causing harm (just look at the controversies around social media and misinformation, or social media and copyright enforcement algorithms). The use of deep neural networks in ever higher-stakes environments means we could be facing opaque algorithmic outcomes from systems we cannot verify were correctly specified to achieve the goals we actually want. Throw in the fact that human civilizational goals are multifaceted and highly debated, and there is a great deal of room for error between what humans “want” and what a superior intelligence attempts to accomplish. While Toby Ord doesn’t think we should shut down AI research, he does suggest we take this source of risk more seriously by devoting resources to addressing it and working on the problem.
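
To make the specification problem concrete, here is a deliberately tiny toy sketch (entirely my own illustration, not from the book; all names and numbers are made up): an optimizer that maximizes the proxy metric we wrote down rather than the goal we actually had in mind.

```python
# Toy illustration of goal misspecification (hypothetical numbers).
# We score articles by a proxy we can measure (clicks) rather than the
# goal we actually care about (readers being informed and satisfied).

ARTICLES = {
    "careful_reporting": {"clicks": 0.3, "satisfaction": 0.9},
    "outrage_bait":      {"clicks": 0.9, "satisfaction": 0.2},
}

def proxy_reward(name: str) -> float:
    """What we told the system to maximize."""
    return ARTICLES[name]["clicks"]

def true_value(name: str) -> float:
    """What we actually wanted, but never encoded."""
    return ARTICLES[name]["satisfaction"]

# A perfectly competent optimizer of the proxy picks the wrong article.
chosen = max(ARTICLES, key=proxy_reward)
print(f"promoted: {chosen}")                        # outrage_bait
print(f"proxy reward: {proxy_reward(chosen):.1f}")  # 0.9 (a 'success')
print(f"true value:   {true_value(chosen):.1f}")    # 0.2 (a failure)
```

The point is not the toy itself but that the optimizer did exactly what it was told, flawlessly; the failure lives entirely in the gap between the proxy and the goal, and that gap gets more dangerous as the optimizer gets more capable.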

So What Do We Do?

I’ve spent a lot of time enumerating risks because I think they are a concrete way to get someone who is unfamiliar with existential risk to think about these ideas. But Ord isn’t writing a book of alarmism just to freak out his audience. Instead, starting with the high levels of risk and adding the extremely negative consequences, Ord details how we might begin to tackle these problems. Unprecedented risks come with modeling challenges: if an existential catastrophe, by definition, has never occurred, how can we know how likely it is? We have to acknowledge this limitation, use what incomplete knowledge we do have access to (counting near misses is a good start), and start building institutions to focus on solving these hard problems.
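
As a concrete sketch of the near-miss idea (my own illustration, not from the book, with entirely made-up inputs): if we assume near misses arrive at a steady rate and each one had some chance of escalating, we can back out a rough catastrophe rate.

```python
import math

def catastrophe_prob(near_misses: int, years_observed: float,
                     p_escalate: float, horizon_years: float) -> float:
    """Probability of at least one catastrophe over the horizon, under a
    simple Poisson model: near misses arrive at a steady rate, and each
    would have escalated with probability p_escalate."""
    rate = (near_misses / years_observed) * p_escalate  # events per year
    return 1 - math.exp(-rate * horizon_years)

# Made-up inputs purely for illustration: 3 recorded close calls over
# 75 years, each with a 1-in-20 chance of having gone catastrophic.
p = catastrophe_prob(near_misses=3, years_observed=75,
                     p_escalate=0.05, horizon_years=100)
print(f"~{p:.0%} chance per century")  # ~18% with these made-up numbers
```

Every input here is contestable, which is part of why Ord argues for building institutions to study these questions properly: the estimates are rough, but rough estimates beat no estimates when the stakes are this high.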

International coordination is a major factor here. Many of these problems are collective action problems. Humanity has found ways around collective action issues with international institutions before (nuclear arms treaties), and so we need to replicate those successes. Of course, we can’t establish new or better institutions unless we get broad agreement that these issues are major problems that need to be solved. Obviously, that’s why Ord wrote this book, but it’s also why I feel compelled to blog about it as well. More on that momentarily.

In this section of the book, The Precipice outlines preliminary directions we can work towards to improve our chances of avoiding existential catastrophe. These include obvious steps like increasing funding for the Biological Weapons Convention, but also discussions of how to think about technological progress, since much of our future existential risk rises as technology improves. We also, of course, need more research on existential risk generally.

Finally, I want to wrap up by discussing Appendix F, which gathers all of Ord’s general policy recommendations in one place. As policy prioritization has long been an interest of mine, I found Toby Ord’s answer quite fascinating. I wrote a post a few months back discussing the highest-impact policies actually being discussed in American politics this election cycle. Comparing it to Toby Ord’s recommendations, the overlap is essentially nonexistent, except for some points on climate change that most Democrats support, such as the U.S. rejoining the Paris Climate Agreement. There’s also a point about leveraging the WHO to better respond to pandemics, and given that Trump has essentially done the exact opposite by removing U.S. funding for the WHO, I suppose I should at least count that as a relevant policy debate.

I want to emphasize that Ord has nine pages of policy ideas, and many of them are likely uncontroversial (improve our understanding of long-period comets, give the Biological Weapons Convention a real budget), but our political system is failing to even address these challenges, and I think it’s important to highlight that.

There is room for optimism; human knowledge is improved by discussion and research, and that includes reading and blogging. If you find these ideas interesting, or even more broadly, if you think there are valuable things in the world, one of the most effective activities you could do this year might be to just read The Precipice. Even without the weight of humanity’s future, the concepts, problem solving, and prose are worth the read all by themselves. This is definitely my favorite book I’ve read this year, and I’ve skipped over summarizing whole sections in the interest of time. Ord even has a whole uplifting chapter about humanity’s future potential, and is overall quite positive. Please attribute any gloominess on this topic to me and not the book.

And if you do read this book, it also just makes for intriguing conversation. I couldn’t help but tell people about some of the ideas here (“are supervolcanoes a national security threat?”), and the approach is wonderfully different, novel, and cross-disciplinary.

For more on this, but slightly short of reading the whole book, I also recommend Toby Ord’s excellent interview on the 80,000 Hours podcast. On that page you can also find a host of links to related research and ideas about existential risk. I’ll also link Slate Star Codex’s longer review of The Precipice, and places to buy the book.

Links 2019-03-07

First links post in a while, because I have some housekeeping. After trying to host comments only on reddit, I’ve realized it makes much more sense to have comments right below the articles again. I really don’t like the WordPress default comment system, so I’ve opted instead for Disqus. These have been implemented for a while, but I wanted to bring your attention to them.

I’ve also finally updated the site to default to https. Kind of an embarrassment for a site promoting encryption to not have https defaulted, but this blog is a volunteer project done for personal interest (and personal expense!).

I’ve removed Greg Mankiw’s blog from the sidebar because I realized I wasn’t reading it much anymore and it doesn’t cover interesting econ topics very often. I also removed Jeffrey Tucker’s blog Beautiful Anarchy, because I don’t think he posts there anymore now that he’s running aier.org.

I’ve added gwern.net because this past year I’ve realized how much more often I’ve been visiting his site, even though I’ve known about it for a long time. Gwern is a rationalist independent researcher. He doesn’t really write blog posts so much as essays on a topic. I recommend his site wholeheartedly. Seriously, his site is the first link in this post for a reason. If you are overwhelmed by the amount of content, see if anything in his “Most Popular” or “Notable” categories jumps out at you and start there. I personally found “Embryo Selection For Intelligence” to be quite engrossing.

Slate Star Codex has had some good posts about the importance of OpenAI’s GPT-2. First, some background on GPT-2. Next, GPT-2 seems to have learned things haphazardly, in an almost human-like way, to attain its goal of producing good responses to prompts. It connects things in a stream of consciousness reminiscent of a child’s thoughts. As Scott says, simply pattern matching at a high level is literally what humans do.

Also on AI, I found an amazing 2018 AI Alignment Literature Review and Charity Comparison by LessWrong user Larks. It’s a very impressive, in-depth look at groups working on the AI alignment problem.

From Vox: “The case that AI threatens humanity, explained in 500 words”.

Noah Smith writes A Proposal for an Alternative Green New Deal. It makes vastly more sense than the vague progressive wishlist discussed by current Democratic members of Congress. However, even Smith’s suggestions seem pretty poorly thought out to me; he endorses massive subsidies to green technology, on the order of $30 billion a year, without addressing how the state will know where to invest the money. As I recall, the government isn’t a great central planner. He also just kind of tosses in universal health insurance, apparently paid for by the government, which sounds like Medicare-for-all. That seems both to massively complicate the politics of anything actually trying to fix climate change and to destroy the entire federal budget, which I think is a national security problem.

Related, on a more nuanced note, John Cochrane discusses a letter signed by many economists endorsing a carbon tax, which seems much more precise and useful to people concerned about climate change. To make it politically palatable, they suggest paying a carbon dividend to all taxpayers out of the tax revenue. Noah Smith also endorsed this approach as just one piece of his Green New Deal. On brand, The Economist endorses carbon taxes as well.

Bitcoin Hivemind developer Paul Sztorc writes about Bitcoin’s future security budget. It’s a really good technical discussion of how Bitcoin’s security can be funded in the future, and why we may need sidechains to help pay the cost of keeping Bitcoin secure.

Bruce Schneier writes about the need for Public Interest Cybersecurity, envisioning it as a parallel to public interest legal work. It’s an interesting take, and I’m not sure how I feel about it. On the one hand, he’s right that lawmakers know little about the technologies they are supposed to regulate, but that’s also true of literally every industry. Sure, it would be great if we had more organizations like the EFF, but I’d have to ask 80,000 Hours whether they think people going into charity work should work for the EFF or on AI alignment research or other existential risks. I’m also not sure I agree that there aren’t enough incentives to invent new security protocols. Google is taking security very seriously on its own, but so are tons of Bitcoin and cryptocurrency developers who are constantly seeking ways to make their projects more secure and do more creative things with crypto.

The U.S. trade deficit hit a 10-year high. Here is the actual Bloomberg article. This is silly political bickering, so I won’t spend much time on it, but it reflects just how badly the president fails to grasp very simple economics. The trade deficit doesn’t mean anything by itself; it’s just a measure of the goods traded, and it’s not even very good at that (goods designed here but manufactured in another country see their whole value “subtracted” in the trade deficit despite American labor inputs). The drivers of the trade deficit are things like relative currency values and national savings rates, not the levels of tariffs. Meanwhile, Trump’s tax cuts have spurred U.S. growth while the rest of the world has been sluggish, leading to higher trade deficits because Americans are relatively wealthier. This flurry of economic activity prompted the Fed to raise rates to stave off inflation, which also drives up the trade deficit, and so Trump has taken the horrible tack of publicly attacking the Fed to lower rates, which is terrible for any sort of responsible Fed policy. The whole thing is a ridiculous mess that could have been avoided if Trump had any semblance of economic knowledge.

The Fifth Column podcast is a highly entertaining libertarian politics podcast. Episode 132 is a little different: Michael Moynihan takes the opportunity to interview Mark Weisbrot, Co-Director of the Center for Economic and Policy Research, a left-wing think tank, about the Maduro government in Venezuela. I have a lot of thoughts on this interview, but my foremost is whether Weisbrot counts as an actual representative voice of the Left. I think one of the worst things social media does is hold up the most controversial person on one side, because they generate the most clicks and buzz and force both sides to jump in and flame each other. In an hour-long interview, Weisbrot takes, as far as I can tell, no opportunity to criticize the Maduro regime, nor offers any way in which they could have improved their policies. He accepts and touts statistics that support his view, and dismisses, minimizes, or ignores stats that counter him, even when they are all from the same source! Even though he’s a big deal at a left-wing think tank, I have to point out that most left-leaning academics don’t need to be in think tanks, because most university politics skew left. This might explain how someone with this level of willful ignorance could hold such a key position. I think the interview is worth listening to if you would like to see the extent of what humans can do to put up mental barriers to seeing their own logical inconsistencies and motivated reasoning. Nonetheless, I feel bad about linking to this interview, as I think it unfairly represents actual socialists who would like to nationalize all industries and seize the means of production.