Book Review: The Precipice

I have titled my annual blog post summarizing where I donate my charitable budget as “How can we use our resources to help others the most?” This is the fundamental question of the Effective Altruism movement, which The Precipice’s author, Toby Ord, helped found. For a while, Ord focused on figuring out how to fight global poverty, doing the most good for the worst-off people in the world. Now he is focusing on the long-term future and existential risk.

The Precipice is fantastic. It’s incredibly well written, engaging, and approachable. It covers a lot of ground, from why we should care about the world and what risks humanity faces, to how we might tackle those risks and what the future might look like if we succeed.

The Precipice eloquently interweaves philosophical arguments with more empirical analysis of the sources of existential risk, and it tries to bound those risks statistically. The book takes on a deeply concerning topic, the potential end of humanity, with an eminently reasonable approach. The complexities of philosophy, science, probability, epidemiology, and more are all brought into the narrative, yet made easily digestible for any reader. I honestly wish Toby Ord could teach me about everything; his writing is that clear and engaging.

The main discussion is never overwhelmed by technical detail, but if you find a point interesting, even the footnotes are amazing. At one point I came up with a counterpoint to Ord’s position and wrote it down in my notes, only to find that the next several paragraphs addressed it in full, with an entire appendix going into more detail. Honestly, this will be less a book review and more a summary with a couple of final thoughts, because I think this book is not only excellent, but its content is perhaps the most important thing you can read right now. You are welcome to read the rest of this blog post, but if you have found this compelling so far, feel free to stop reading and order Toby Ord’s book posthaste.

Existential Risk

The consequences of 90% of humans on Earth dying would be pretty terrible, and given our relatively poor response to recent events, perhaps we should better explore other potential catastrophes and how we can avoid them. But The Precipice goes further. Instead of 90% of humans dying, what happens if 100% of us die out? Certainly that’s strictly worse (100 > 90), but these outcomes are in fact far apart in magnitude: if all humans die today, then all future humans never get to exist.

There’s no reason we know of that would stop our descendants from continuing to live for billions of years, eventually colonizing the stars, and allowing for the existence of trillions of beings. Whatever it is that you enjoy about humanity, whether that’s art, engineering, or the search for truth, that can’t continue if there aren’t any humans. Full stop. As far as we know, we’re the only intelligence in the universe. If we screw up and end humanity before we get off this planet, then we don’t just end it for ourselves but perhaps we end all intelligent life for the remaining trillions of years of the universe.

Even though I was aware of the broad thesis of the book, I was continually impressed with just how many different angles Ord explores. Early on, he notes that while we might normally think of a catastrophic extinction event, like an asteroid impact, as the thing we are keen on avoiding, there are in fact several scenarios that would be similarly devastating. For example, if humanity suffered some calamity that did not kill everyone but left civilization permanently stuck at pre-industrial technology, that would also preclude humanity from living for trillions of years and colonizing the stars. A 1984-style global totalitarian state would likewise halt humanity’s progress, perhaps permanently.

Ord also discusses the fundamental moral philosophy implications of his thesis. The natural pitch relies on the utilitarian argument stated above: if humanity fails to reach its potential, this harms not only the humans currently alive but all future generations. Other arguments against extinction include a duty to our past and what we owe our ancestors, the rights of future generations who don’t get to decide for themselves, and the simple fact that we would lose everything we currently value.

The book categorizes risks into three types: natural, anthropogenic, and future risks. Natural risks include asteroids, supervolcanoes, and stellar explosions. These are pretty diverse topics, and Ord is quite informative. The story about asteroid risk was particularly fascinating to me. In the 90s, the relatively recent discovery of what caused the dinosaurs’ demise led Congress to task NASA with identifying the largest near-Earth asteroids to see if any pose a threat to Earth. Congress allocated some money, NASA tracked every near-Earth asteroid over 10 km in diameter, and it determined that none pose a threat in the next century. NASA then moved on to 1 km asteroids and has now mapped the vast majority of those as well. The total cost of the program was also quite small for the information provided: only $70 million.

This is one of the rare successes in existential risk mitigation so far. Unfortunately, as Ord points out several times in the book, global spending on foundational existential risk research is no more than $50 million a year. Given the stakes, this is deeply troubling. For context, Ord notes that the global ice cream market is about $60 billion, some thousand times larger.

I’ll skip the other natural risks here, but the book bounds natural risk quite skillfully: humans have been around for about 200,000 years, or 2,000 centuries, so if the natural extinction risk per century were much higher than 1 in 2,000 (0.05%), our long track record of survival would be very surprising. Even then, we’d expect our technologically advanced civilization to be more robust to these risks than our ancestors were. Many species survived even the largest mass extinctions, and none of them had integrated circuits, written language, or the scientific method.
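To make that bound concrete, here is a minimal sketch of the survival-probability reasoning. It assumes a constant, independent extinction risk per century, which is my simplification, not Ord’s more careful treatment:

```python
# Toy bound on natural extinction risk, assuming a constant and
# independent risk per century (a simplification of Ord's argument).
CENTURIES_SURVIVED = 200_000 / 100  # roughly 2,000 centuries of human history

def survival_probability(risk_per_century: float) -> float:
    """Chance of surviving every one of ~2,000 centuries at a fixed risk."""
    return (1 - risk_per_century) ** CENTURIES_SURVIVED

for risk in [0.0005, 0.001, 0.005]:  # 0.05%, 0.1%, 0.5% per century
    print(f"risk {risk:.2%}/century -> P(still here) = {survival_probability(risk):.3%}")
```

At 0.05% per century, our survival to date is unremarkable (about a 37% chance); at 0.5% per century, it would be a near miracle (under 0.005%). That is the sense in which the historical record bounds natural risk.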

That doesn’t mean that all risk has declined over time. On the contrary, according to Ord, the vast majority of existential risk is anthropogenic in origin. Nuclear weapons and climate change dominate this next section. It’s remarkable just how cavalier the early tests of nuclear weapons really were. Ord recounts two major calculations undertaken by a committee of Berkeley physicists before the Manhattan Project got underway in earnest. One was whether the temperature of a sustained nuclear reaction would ignite the entire atmosphere in a conflagration (the committee believed it would not). The other was whether Lithium-7 would contribute to a thermonuclear explosion (it was believed it would not). It turns out that Lithium-7 can contribute to a thermonuclear explosion, as was discovered when the Castle Bravo test came in about three times larger than expected, irradiating some 15 nearby islands.

It turned out the atmospheric calculation was correct, and the first nuclear explosion in 1945 did not ignite the atmosphere. But given the failure of the Lithium-7 calculation, the level of confidence here was clearly not high enough to warrant risking the end of all life on Earth.

Luckily, the risk that nuclear weapons or climate change would wipe out humanity entirely seems quite low (although not zero). Even a nuclear winter scenario or high sea-level rise would not make the entire Earth uninhabitable, and humans could likely adapt, although the loss of life would still be catastrophic.

Instead, the bulk of the risk Toby Ord identifies lies in future technologies that grow more capable every year. These include engineered pandemics, enabled by our increasingly powerful and cheap control over DNA synthesis, and unaligned artificial intelligence, enabled by our increasingly powerful and integrated computer systems.

The threat of engineered pandemics is particularly salient as I write this in August 2020, with SARS-CoV-2 still sweeping the world. Ord notes that even under quite optimistic assumptions about how few people would want to destroy the world with a virus, if the cost gets cheap enough, it only takes one crazy death cult to pull the trigger. Even the accidental creation of a superweapon is a serious risk, as production is cheap and there are many past examples of bioweapons accidentally leaking from government laboratories. Unfortunately, we are also woefully unprepared on this front. The Biological Weapons Convention had a budget of $1.4 million in 2019, which Ord notes is less than that of most McDonald’s franchises.

Risks from unaligned artificial intelligence are similarly tied to technical advancement. Ord notes that artificial intelligence has had some impressive achievements recently, from photo and face identification to translation and language processing to games like Go and Starcraft. As computer hardware gets better and more specialized, and as we discover more efficient algorithms, we should expect this trend to continue. It therefore seems plausible that sometime in the future, perhaps this century, artificial intelligence will exceed human ability across a wide variety of tasks. The Precipice notes that, were this to happen with some sort of general intelligence, humanity would no longer be the most intelligent species on the planet. Unless we have foresight and strategies in place, a superior intelligence with its own goals could be profoundly dangerous.

Unfortunately, we are already quite poor at getting complex algorithms to achieve complicated goals without causing harm (just look at the controversy around social media and misinformation, or social media and copyright algorithms). The use of deep learning neural networks in higher-stakes environments means we could face opaque algorithmic outcomes without knowing whether we have correctly specified the goals we actually want. Throw in the fact that human civilizational goals are multifaceted and highly debated, and there is a great deal of room for error between what humans “want” and what a superior intelligence attempts to accomplish. While Toby Ord doesn’t think we should shut down AI research, he does suggest we take this source of risk more seriously by devoting resources to addressing it and working on the problem.

So What Do We Do?

I’ve spent a lot of time enumerating risks because I think they are a concrete way to get someone unfamiliar with existential risk to think about these ideas. But Ord isn’t writing a book of alarmism just to freak out his audience. Instead, starting with the high levels of risk and the extremely negative consequences, Ord details how we might begin to tackle these problems. Unprecedented risks come with modeling challenges: if an existential catastrophe, by definition, has never occurred, how can we know how likely it is? We have to acknowledge this limitation, use the incomplete knowledge we do have access to (the number of near misses is a good start), and start building institutions focused on solving these hard problems.

International coordination is a major factor here. Many of these problems are collective action problems. Humanity has overcome collective action problems with international institutions before (nuclear arms treaties), so we need to replicate those successes. Of course, we can’t establish new or better institutions without broad agreement that these issues are major problems that need solving. That’s obviously why Ord wrote this book, but it’s also why I feel compelled to blog about it. More on that momentarily.

In this section of the book, The Precipice outlines preliminary directions we can pursue to improve our chances of avoiding existential catastrophe. These include obvious things like increasing funding for the Biological Weapons Convention, but also discussions of how to think about technological progress, since much of our future existential risk rises as technology improves. We also obviously need more research on existential risk generally.

Finally, I want to wrap up by discussing Appendix F, which gathers all of Ord’s general policy recommendations in one place. As policy prioritization has long been an interest of mine, I found Toby Ord’s answers quite fascinating. I wrote a post a few months back discussing the highest-impact policies actually being debated in American politics this election cycle. Comparing it to Toby Ord’s recommendations, the overlap is essentially nonexistent, except for some points on climate change that most Democrats support, such as the U.S. rejoining the Paris Climate Agreement. There’s also a point about leveraging the WHO to better respond to pandemics, and given that Trump has done essentially the opposite by pulling U.S. funding from the WHO, I suppose I should at least count that as relevant policy debate.

I want to emphasize that Ord has nine pages of policy ideas, and many of them are likely uncontroversial (improve our understanding of long-period comets, give the Biological Weapons Convention a real budget), yet our political system is failing to even address these challenges, and I think it’s important to highlight that.

There is room for optimism; human knowledge is improved by discussion and research, and that includes reading and blogging. If you find these ideas interesting, or more broadly, if you think there are valuable things in the world, one of the most effective activities you could do this year might be to just read The Precipice. Even without the weight of humanity behind it, the concepts, problem solving, and prose are worth the read all by themselves. This is definitely my favorite book I’ve read this year, and I’ve skipped over summarizing whole sections in the interest of time. Ord even has a whole uplifting chapter about humanity’s future potential, and he is overall quite positive. Please attribute any gloominess on this topic to me and not the book.

And if you do read this book, it also just makes for intriguing conversation. I couldn’t help telling people about some of the ideas here (“are supervolcanoes a national security threat?”), and the approach is wonderfully different, novel, and cross-disciplinary.

For more on this, but slightly short of reading the whole book, I also recommend Toby Ord’s excellent interview on the 80,000 Hours podcast. On that page you can also find a host of links to related research and ideas about existential risk. I’ll also link Slate Star Codex’s longer review of The Precipice, and places to buy it.

A 2020 Policy Platform Proposal

It’s election season, so it’s time to start talking electoral politics again. The Trump administration has been particularly successful at ignoring policy discussions in favor of political point-scoring. This isn’t too surprising given Trump’s lack of consistent ideology, apart from perhaps opposition to free trade and immigration. Impeachment has also helped focus attention on Trump’s political situation rather than his policies, or lack thereof. Don’t get me wrong, I think there is a strong non-policy case against Trump, and I think Congressman Justin Amash in particular has done an excellent job articulating why Trump’s behavior is concerning.

However, I think there is also a policy-based critique of Trump. To make that case properly and compare Trump’s policies to Biden’s or other candidates’, we must establish a foundation declaring which problems are most important and what policies could address them. My criteria involve some utilitarian calculus, i.e., how to improve the lives of the most people by the largest amount. Thus, the first of these policies is actually a meta-policy: a way to improve congressional capacity to pass laws and run the state. Changing the way we make policy can affect all of our future policymaking.

Countering this interest in utilitarian idealism is a preference for some political feasibility; in other words, while I might prefer to emphasize revolutionary changes that would significantly improve the country (changing all of our voting systems to approval voting or quadratic voting, or switching taxation to be based on land value), I’ve left them off this list because they are not just unpopular but virtually never discussed. If you find a particular policy interesting, please follow the links in that section for additional policy discussion and details.

Finally, there is uncertainty here, and I’ll mention other policies that didn’t make this list at the end. Trying to filter major talking points out of a broader range of political ideas is difficult. Policies and political philosophies are interconnected, and the boundaries I draw are necessarily somewhat arbitrary. Nonetheless, these ideas should form a good basis for uniformly judging candidates’ policies.

Congressional Power

Any policy platform has to address the fact that our current system of governance, for crafting and enacting policy, is deeply flawed. We have uncompetitive and broken elections, we have bad ways of choosing candidates, and we have too much power concentrated in the executive branch. Executive authority compounds our problems by making each election a stark, singular choice between polarized sides instead of a well-rounded government built on a legislature with many interests represented. I can’t fix all of these in this policy platform, so improving the balance between the president and Congress seems like a good place to start.

The entire budget for the legislative branch, including congressional staff, offices, and congressional agencies like the GAO and CBO, is about $5 billion. Congress is then responsible for oversight and legislative action for the entire $5 trillion federal government, a budget a thousand times larger than its own. The CBO has a staff of just 250 people and can’t even research and score all congressional bills. This is absolute insanity.

Congress needs to be able to flex its muscles. It should not be relying on executive branch bureaucracies as unbiased experts evaluating their own performance. It should have a better-staffed research arm that can oversee all aspects of the massive American bureaucracy. Members of Congress also need more policy-focused positions on their own staffs, along with fewer committee assignments. National Affairs has an excellent in-depth discussion of the thinking behind this brief overview. Legislators are currently underpaid amateurs who spend half their time outside of Washington focused on things other than governance. This does not allow for knowledgeable congressional oversight of the federal government.

Cato also has some excellent ideas for strengthening Congress, such as creating a standing committee to review executive overreach beyond statutory law and forcing votes on major rules implemented by regulators. Other ideas include expanding the congressional calendar, creating a new Congressional Regulatory Service to oversee the regulations made by independent and executive agencies, and requiring all civil asset forfeitures to be deposited into the Treasury to be spent by Congress, not the executive.

Unfortunately, even after a recent impeachment trial, this is simply not a major issue in this year’s campaign, and no candidate is running with strengthening Congress as a priority. In fact, there are essentially no meta-policy ideas being floated at all. Yet ideas are not hard to come by!

Liberalizing Immigration

The U.S. immigration system is terrible (see section 8 here). It is esoteric, slow, and in need of a complete overhaul. It should focus on merit rather than the nation-of-origin and family-ties criteria it uses now. It should be simpler for high-skilled workers to be hired by American companies, and it should definitely be easier for young workers, educated at excellent American colleges, to be hired by American companies and remain in the United States, where they can pay taxes for decades.

Why is this so high up on the list?

This is a matter of national security. China is a growing power, but crucially, it cannot expand its influence or economy through immigration. The Chinese state has largely decided that ethnicity matters; China is not seeking to create a multicultural amalgamation to improve the world, but rather a nationalist state. The U.S. isn’t restricted in this way: anyone can become an American. Immigrants are also more likely to start businesses and take risks. That means the most creative and ambitious people in the world can come to the United States and contribute to our culture, knowledge, technology, and wealth. Moreover, these remarkable people already want to come here. Increased dynamism and economic growth also make the rest of our geopolitical challenges easier; they mean the national debt is less of a burden, and national defense spending can be higher in absolute terms while costing a smaller percentage of GDP.

This is also perhaps the best and simplest way to improve the world quickly. It’s extremely difficult to improve nations with poor institutions, yet people who struggle in developing nations can become immediately more productive if they are transplanted to the U.S. And of course many are quite willing to do so, uprooting their entire lives for a chance at the American Dream. We could pursue limitations on their access to public money, or a simple tax upon immigrating, but either way we should be voting to improve the world in the most altruistic and nationalistic way possible: expanding legal immigration in order to make more Americans!

Federal Incentives to Build More Housing in U.S. Cities

This is a specific policy taken from the Niskanen Center’s Will Wilkinson, whom I have cited on this blog before. He suggests giving federal money to urban areas that add large amounts of new housing stock. Why? Because American cities are absurdly expensive to live in, yet new housing is extremely difficult to develop due to overregulation and restrictive zoning laws.

The impact of our poor housing policy is enormous. Economists suggest housing constraints have lowered U.S. GDP by as much as a third over the last 50 years. Think about that. We could be missing a third of GDP because millions of people who wanted to move somewhere for a better job couldn’t find a place to live. It’s clear that the most productive areas in the U.S., especially cities like New York and San Francisco, are prohibitively expensive, keeping out potential new productive workers.

Wilkinson’s suggestion isn’t the only possible policy solution; another is to make zoning hyper-local, decided by the residents of a single street or city block. This would allow experimentation and innovation, instead of leaving control with immovable local land interests that keep out prospective residents who can’t vote in today’s elections.

While the viable solutions are still up for debate, the impact is clear: the lack of housing development in U.S. cities due to overregulation may be the single greatest barrier to economic growth, earning it a place on this short list of policies.

Decriminalization of All Drugs

Ever since Pete Buttigieg announced his support for this policy, I’ve had it circled for inclusion on this list. The War on Drugs has been a colossal failure: it has not reduced drug use, and it has radically increased prison populations. It has imposed extraordinary costs on taxpayers and stripped citizens of civil liberties and assets. Massive application of state force has handed a near-monopoly on funding to the most bloodthirsty and gruesome organized criminal elements in the world, including terrorists. There have even been spillover effects as governments crack down on prescription painkillers, leaving patients in agony.

This policy is wrong morally, practically, and economically. It is not the place of the state to determine what substances informed adults can consume or inject. It is also abundantly clear the state has zero capability to halt the trade or consumption of drugs. Rather, enforcement of drug laws has bolstered a black market where information is asymmetric and scarce, endangering all involved. The only thing the state has succeeded in doing is making organized crime more financially viable. The resulting conflict in Mexico has killed over 150,000 people, making it one of the largest conflicts of the 21st century, behind only the Iraq War, the Syrian Civil War, and Darfur. It is this monstrous loss of human life as a result of changeable government policy that places this item so high on the list.

And of course this massive assistance to organized crime comes at great financial cost. Estimates for enforcement, prosecution, incarceration, and military interventions run as high as $50 billion a year. State prohibition of private, mutually consensual transactions also erodes our rights in ways that defy concrete financial measurement. The ACLU notes that extensive surveillance has been justified under the guise of drug enforcement, while increasingly militarized police forces have abused their power to break into homes unannounced or shoot people preemptively, all in the name of stopping transactions among consenting adults. It’s time to end this failed policy.

Catastrophic Risk

It’s clear today that the federal government does not respond well to large disasters. Perhaps too much relies upon the whims of the executive who happens to be in power, but it seems likely that we could institutionalize better responses to catastrophic events. Yes, this includes pandemics, but also major earthquakes, solar flares, artificial intelligence, and even plans for averting nuclear war (for a more detailed analysis, read Toby Ord’s recent book, The Precipice).

This is a highly neglected problem and thus one of the highest-impact policy areas we could undertake (climate change could go here, but it has not been as neglected as other risks, so I detail it later). At the beginning of 2020, I would not have included this in the list of top policies, not because it was low impact, but simply because it was not being discussed as a major political issue. The failure of the federal government to respond to a deadly virus has pushed catastrophic risks into the mainstream. While the likelihood of any given catastrophe is low, it is the enormous impact of the tail risk that should concern us; preparing now will mean the difference between devastation and mere hardship.

We should look to create public commissions that investigate our preparedness for various catastrophic events, identify what can be done now for relatively small budgets with larger payoffs when disaster comes, and then pass legislation that enshrines this knowledge institutionally in ways that do not rely on the whims and competence of whoever happens to be president. It is vital that any such commissions look beyond pandemics; preparedness for unexpected events is not selected for by democratic pressures, which is perhaps how we ended up in our current difficult situation with COVID-19. It would be wise to use this opportunity to prepare not just for the next viral outbreak, but for other unlikely events as well.

Other Topics

There are arguments for including many more policies. I’ll run through several quickly.

It matters a lot who the president appoints to the Federal Reserve, and whether they are well qualified and independent. I’ve left this off the list mostly because we’ve lucked out: Trump’s appointments haven’t been that different from normal, and when odd choices were floated, they were largely quashed. Independence is obviously still at risk with the president tweeting criticism of his own appointees, so this issue shouldn’t be overlooked, but since I treat it like a pass/fail grade, we can reasonably hope this will be a “pass” for all candidates in 2020. I wish I could say that more definitively, but I can’t.

Healthcare is a huge part of the federal budget and has an outsized impact on the economy. We also don’t have great solutions, but this is another issue that could easily have made the list. The most important aspects are ending the reliance on employer-provided health insurance (which makes it much harder for workers to take risks and switch jobs) and expanding coverage for the least well-off. How we do that is difficult to answer in such a small space, but I’m wary of radical changes that seek to quickly re-imagine the U.S. healthcare industry from the top down.

Climate change is a potentially expensive disaster waiting to happen, and if the past months have taught us anything, it is that waiting for disasters to happen is not the correct strategy. Instituting a small carbon tax seems like a good place to start. It could be refunded to taxpayers equally, or even designed to incentivize carbon sequestration with refundable tax credits for carbon removed from the atmosphere.

Free trade has had a massive impact on reducing poverty worldwide while also improving economies everywhere. There’s also some evidence that it reduces the chances of war between important trading partners. Aligning American and Chinese commercial interests through trade will be a vital part of avoiding a war between these world powers, and free trade is a vital vehicle for continuing the global poverty reduction seen over the last 30 years.

U.S. interventions in the Middle East have been among the largest contributors to excess deaths from U.S. policy. Obviously there is high uncertainty over whether many of these conflicts would have continued without American intervention, but that seems unlikely in at least a couple of large instances (the Iraq War being the biggest one). U.S. support of regimes like Saudi Arabia also seems to show negative payoffs from a humanitarian calculus. Nor does it seem that larger 21st-century goals, like opposing authoritarianism in China and avoiding large-scale wars, are served by Middle Eastern interventions.

Candidates’ Priorities Matter Too

While this is a nice policy platform, ultimately the goal is to judge candidates by their relationship with these policies.

A major problem for this approach of separating out policies isn’t that most people running for office oppose these positions, but that they might be indifferent or even favorable toward these high-impact policies while centering their campaigns on entirely different, radical ideas. Elizabeth Warren’s many proposals come to mind here. There are some meritorious critiques in Warren’s proposals; competition is vital to a well-functioning market, and some of her ideas could enhance competition. But many are far more radical, with, at best, unknown effects on competition and the economy generally. These include the eradication of private equity, the restructuring of corporate boards, and an unprecedentedly large wealth tax that could significantly curtail investment. If Warren scored highly on the top policies put forward here (she does all right on immigration, housing, and drug policy), how do we balance that against the relatively radical (and I’d argue unhelpful) economic proposals she made the centerpiece of her campaign?

Unfortunately, we have to take those points seriously. While I have tried to rank these policies in a somewhat utilitarian, impact-centered way (policies within the Overton window that help the most people by the greatest amount), radical policies that backfire could have impacts large enough to shove aside the ideas proposed here.

And that goes for both parties. If Trump did well on these policies (unlikely, yes), but then also centered his campaign on radical ideas like defaulting on the national debt, shutting off the internet, or throwing away nuclear arms control treaties, then not implementing those policies might become the highest impact.

There is a lot of uncertainty that remains; some of these policies could be higher on the list, and I’ve likely excluded some high-impact policies that have not yet occurred to me. Major policies could matter in the future that we just haven’t encountered yet. And of course these are only policy preferences; as noted in my last post, simple competency is an important factor as well. Despite all of these caveats, this is an important step in laying a foundation of policy discussion and analysis against which we can measure candidates. Electoral politics is messy and tribal; its discussions confound concise and consistent frameworks, but when they do swerve toward policy, these points should help form the questions that need to be asked.

Artificial General Intelligence and Existential Risk

The purpose of this post is to discuss existential risk and why artificial intelligence is a relatively important aspect of it. There are other essays about the dangers of artificial intelligence, which I will link to throughout and at the end of this post. This essay takes a different approach, one that will perhaps appeal to someone who has not seriously considered artificial general intelligence as an issue requiring civilization’s attention. At the very least, I’d like to signal that it should be more socially acceptable to discuss this problem.

First is a section on how I approached thinking about existential risk. My train of thought is a follow-up to Efficient Advocacy. Also worth reading: Electoral Reform Fantasies.

Background

Political fights, especially the culture war battles President Trump seems so fond of, are loud, obnoxious, and tend to overshadow more impactful policy debates. For example, abortion is a common, highly discussed political issue, but there have been almost no major policy changes since the Supreme Court’s decision over 40 years ago. The number of abortions in the US has declined since the 1980s, but the decline seems uncorrelated with any political movements or electoral victories. If there aren’t meaningful differences between political outcomes, and if political effort, labor, and capital are limited, these debates distract from other areas that could impact more people. Trump seems especially good at finding meaningless conflicts to divide people, like NFL players’ actions during the national anthem or tweeting about LaVar Ball’s son being arrested in China.

Theorizing about how to combat this problem, I started making a list of impactful-but-popular (or at least not unpopular) policies that would make up an idealized congressional agenda: nominal GDP futures markets, ending federal prohibition of marijuana, upgrading Social Security numbers to be more secure, reforming bail. However, there is a big difference between “not unpopular”, “popular”, and “prioritized”. I’m pretty sure nominal GDP futures markets would have a quite positive effect on Federal Reserve policy, and I can’t think of any political opposition, but almost no one is talking about them. Marijuana legalization is pretty popular across most voters, but it’s not a priority, especially for this Congress. So what do you focus on? Educating more people about nominal GDP futures markets so they know such a solution exists? Convincing more people to prioritize marijuana legalization?

The nagging problem is that effective altruist groups like GiveWell have taken a research-based approach to identifying the best ways to use our money and time to improve the world. For example, the cost of distributing anti-mosquito bed nets is extremely low, so on average a life is saved from malaria for a cost in the thousands of dollars. The result is that we now know our actions have a significant opportunity cost; if a few thousand dollars’ worth of work or donations doesn’t obviously have as good an impact as literally saving someone’s life, we need a really good argument as to why we should do that activity instead of contributing to GiveWell’s top charities.

One way to make the case that there are other things worth spending money on besides GiveWell’s top charities is to take a long-term outlook, trying to effect a large change that would impact a large number of people in the future. For example, improving institutions in various developing countries would help those populations become richer. Another approach would be to improve the global economy, which would both allow for more investment in technology and push investment into developing countries looking for returns. Certainly long-term approaches are riskier than direct-impact charities that improve outcomes as soon as possible, but long-term approaches can’t be abandoned either.

Existential Risk

So what about the extreme long term? What about existential risk? This blog’s philosophy takes consequentialism as a founding principle, and if you’re interested in the preceding questions of which policies are the most helpful and where we should focus our efforts, you’ve already accepted that we should be concerned about the effects of our actions. The worst possible event, from a utilitarian perspective, would be the extinction of the human race: it would not just kill all the humans alive today (making it worse than a catastrophe that kills only half of them), but also end the potential descendants of all of humanity, possibly trillions of beings. If we have any concern for the outcomes of our civilization, we must investigate sources of existential risk. Another way to state this is: assume it’s the year 2300, and humans no longer exist in the universe. What is the most likely cause of our destruction?
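To see how lopsided the utilitarian arithmetic is, here is a back-of-the-envelope sketch; both population figures are round numbers assumed for illustration, not estimates from any particular source:

```python
# Back-of-the-envelope: a catastrophe that kills half of humanity vs. extinction.
# Both population figures are illustrative assumptions.
current_population = 8e9        # roughly everyone alive today
potential_future_beings = 1e12  # the "possibly trillions" of descendants

catastrophe_loss = 0.5 * current_population
extinction_loss = current_population + potential_future_beings

print(f"catastrophe (kills half): {catastrophe_loss:.1e} lives lost")
print(f"extinction:               {extinction_loss:.1e} lives lost")
print(f"extinction is roughly {extinction_loss / catastrophe_loss:.0f}x worse")
```

With these assumed numbers, extinction comes out roughly 250 times worse than the half-fatal catastrophe, not twice as bad, and the gap only grows with every additional future generation you count.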

Wikipedia actually has a very good article on Global Catastrophic Risk, which is a broad category encompassing things that could seriously harm humanity on a global scale. Existential risks are a strict subset of those events, which could end humanity’s existence permanently. Wikipedia splits them up into natural and anthropogenic. First, let’s review the non-anthropogenic risks (natural climate change, megatsunamis, asteroid impacts, cosmic events, volcanism, extraterrestrial invasion, global pandemic) and see whether they qualify as existential.

Natural climate change and megatsunamis do not appear to be existential in nature. A megatsunami would be terrible for everyone living around the affected ocean, but humans on the other side of the earth would appear to be fine. Humans can also live in a variety of climates, so natural climate change would likely be slow enough for some humans to adapt, even if such an event causes increased geopolitical tensions.

Previous asteroid impacts have been devastating, notably the Cretaceous-Paleogene extinction event some 66 million years ago. This is a clear existential risk, but it takes a very large asteroid hitting Earth, which is rare. Larger asteroids are also more easily identified from further away, giving humanity more time to do something (push it off its path, blow it up, etc.). The chances here are thus pretty low.

Other cosmic events are also low probability. Gamma-ray bursts are pretty devastating, but one would have to be close by (within a few hundred light-years) as well as aimed directly at Earth. Neither is likely within the next million years.

Volcanism also has the potential to be pretty bad, perhaps at an existential level (see the Toba catastrophe theory), but it is also pretty rare.

An alien invasion could easily destroy all of humanity; any species capable of traveling across interstellar space with military ambitions would be extremely technologically superior to us. However, we don’t see any evidence of a galactic alien civilization (see Fermi Paradox 1 & 2 and The Great Filter). Additionally, solving this problem seems somewhat intractable: on a cosmic timescale, an alien civilization that arose before our own would likely have preceded us by millennia, meaning the technology gap between us and them would be hopelessly and permanently large.

A global pandemic seems pretty bad, and certainly much more likely in the short term than anything else we’ve covered. The risk is exacerbated by human actions creating a more interconnected globe. However, it is counterbalanced by the fact that no previous pandemic has ever been 100% lethal, and modern medicine is much better than it was during the Black Death. This is a big risk, but it may not be existential. Definitely on our shortlist of things to worry about, though.

Let’s talk about anthropogenic risks next: nuclear war, conventional war, anthropogenic climate change, agricultural crises, mineral exhaustion, artificial intelligence, nanotechnology, biotechnology.

A common worry is nuclear war. A massive nuclear exchange seems somewhat unlikely today, even if a regional disagreement on the Korean peninsula goes poorly in the worst possible way. It’s not common knowledge, but the “nuclear winter” scenario is still somewhat controversial, and I remain unconvinced that it poses a serious existential threat, although clearly a nuclear exchange would kill millions. Conventional war is also out, as it seems strictly less dangerous than nuclear war.

For similar reasons to nuclear winter, I’m not especially worried about global warming in purely existential terms. Global warming may be very expensive, and it may cause widespread weather, climate, and ecological problems, but I don’t believe humanity will be entirely wiped out. I am open to corrections on this.

Agricultural crises and mineral exhaustion seem catastrophic-but-not-existential as well. These would cause economic crises, but by definition economic crises need humans to exist; with fewer humans, an agricultural crisis would stop being an issue.

The remaining issues are largely technological in nature: artificial intelligence, biotechnology, nanotechnology, or technical experiments gone wrong (as if the first nuclear test had set the atmosphere on fire). These all seem fairly concerning.

Technological Existential Risk

Concern arises because technological progress means the likelihood that we will have these technologies grows over time, and once they exist, we should expect their cost to decrease. Additionally, unlike the other topics listed here, these could wipe out humanity permanently. For example, a bioengineered virus could be far more deadly than anything that would occur naturally, possibly resulting in a zero percent survival rate. The cost of DNA technology has steadily dropped, so over time we should expect the number of organizations or people with the knowledge and funding to engineer deadly pathogens to increase. The more people who have this ability, the more likely it is that someone makes a mistake and releases a deadly virus that kills everyone. An additional issue is that military research teams are quite likely researching bioweapons such as engineered pathogens right now. The incentives behind such weapons research are unlikely to change even as DNA engineering improves, meaning this threat should grow over time.
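To illustrate why falling costs matter so much, here is a toy model of my own (not anyone’s published estimate) that treats each capable actor as an independent chance of catastrophe per year; the probability used is purely an assumption:

```python
# Toy model: each actor capable of engineering a pathogen has a small,
# independent chance p of causing a release in a given year. The chance
# that at least one release occurs grows quickly with the number of actors n.
def p_any_release(p: float, n: int) -> float:
    """Probability that at least one of n independent actors causes a release."""
    return 1 - (1 - p) ** n

p = 1e-4  # assumed per-actor, per-year probability, purely illustrative
for n in [10, 100, 1_000, 10_000]:
    print(f"{n:>6} capable actors -> yearly risk {p_any_release(p, n):.2%}")
```

As the technology gets cheaper, n moves from tens toward thousands even if each individual actor stays just as careful, and a tiny per-actor probability compounds into a substantial aggregate risk.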

Nanotechnology also has the potential to end all life on the planet, especially under a so-called “grey goo” scenario, where self-replicating nanobots transform all the matter on Earth. This has a lot of similarities to an engineered pathogen, except that the odds of any human developing immunity no longer matter, and all non-human life, indeed all matter on Earth, is also forfeit, not just the humans. Like biotechnology threats, we don’t have this technology yet, but it is an active area of research. We would also expect this risk to grow over time.

Artificial General Intelligence

Finally, artificial general intelligence shares the key features of the other technological risks: as technology advances, we have a higher chance of creating it; the more people who can create it, the more dangerous it is; and once it is created, it could be deadly.

This post isn’t a thesis on why AI is or isn’t going to kill all humans. We made an assumption that we were looking exclusively at existential risk in the near future of humanity; given that assumption, our question is why AI would be more likely to end humanity than anything else. Nonetheless, there are lingering questions as to whether AI is an actual, “real” threat to humanity or just an unrealistic sci-fi trope. I will outline three basic objections to AI being dangerous, with three basic counterarguments.

The first objection is that AI itself will not be dangerous because it will be too stupid. Related points are that AI is too hard to create, or that we can just unplug it if its values differ from ours. The counterargument is that experts disagree on exactly when we will create human-level AI, but most agree that it’s plausible in the next hundred or couple hundred years (AI Timelines). It’s also true that we’ve seen AI solve more general and more complex problems over time: AlphaZero learned to play both Go and chess better than any human without changes to its base code; YouTube uses algorithms to determine what content to recommend and what content to remove ads from, scanning thousands of hours of video every minute; Google’s Pixel phone creates software-based portrait photos via machine learning rather than needing multiple lenses. We should expect this trend to continue, just as with other technologies.

However, the difference between other technological global risks and AI is that machine learning optimization could eventually be applied to machine learning itself. This is the concept of an “intelligence explosion”, where an AI uses its intelligence to design and create successively better versions of itself. Thus, it’s not just that an organization might make a dangerous technological breakthrough, like an engineered virus, but that once the breakthrough occurs, the AI could rapidly become uncontrollable and vastly more intelligent than us. The analogy here is that a mouse isn’t just less smart than a human; it literally doesn’t comprehend that its environment can be so manipulated by humans that entire species depend on human actions (i.e., conservation, rules about overhunting) for their own survival.
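As a rough illustration of why recursive self-improvement differs from ordinary technological progress, here is a toy model; the units and growth constant are arbitrary assumptions chosen only to show the shape of the curve:

```python
# Toy "intelligence explosion": each generation's improvement scales with
# current capability, so progress compounds on itself. Units and the
# constant k are arbitrary illustrative assumptions.
capability = 1.0  # define human-level capability as 1.0
k = 0.1           # assumed fraction of capability turned into self-improvement

for generation in range(1, 11):
    capability *= 1 + k * capability  # smarter systems make bigger improvements
    print(f"generation {generation:>2}: capability {capability:.2f}")
```

A fixed gain per generation would grow linearly; here the gain itself grows with capability, so the curve bends ever upward, which is the qualitative point of the intelligence explosion argument.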

Another objection is that if an AI is actually as intelligent as we fear it could be, it wouldn’t make “stupid” mistakes like destroying all of humanity or consuming the planet’s resources, because that wouldn’t count as “intelligent”. The counterpoint is the Orthogonality Thesis, which states that an AI can have any goal: intelligence and goals are orthogonal and independent. Moreover, an AI’s goal does not have to explicitly target humans (e.g., “kill all the humans”) to cause us harm. For example, a goal to calculate all the digits of pi or to solve the Riemann Hypothesis might require as much computing power as possible. As part of achieving this goal, a superintelligence would determine that it must manufacture computing equipment and maximize the energy supplied to that equipment. Humans use energy and are made of matter, so as a way to achieve its goal, it would likely exterminate humanity and convert all the matter it could into computation equipment. Due to its superintelligence, it would succeed.

A final objection is that even if experts believe human-level AI will happen in the next 100 years, if not sooner, there is nothing to be done about it today, or working on the problem now is a waste of time. This is also known as the “worrying about overpopulation on Mars” objection, comparing the worry about AI to something several scientific advancements away. Scott Alexander has an entire blog post on this subject, which I recommend checking out. The basic summary is that AI advancement and AI alignment research are somewhat independent, and we really need to figure out how to properly align AI values before we get human-level AI.

We have a lot of theoretical philosophy that we need to figure out how to impart to a computer: things like how humans actually make decisions, or how to value different moral tradeoffs. This could be extraordinarily complicated, as an extremely capable optimization algorithm could misinterpret almost everything we say if it did not already share our values for human life, health, and general brain state. Computer scientists set out to teach computers to understand natural human language some 60 years ago, and we still haven’t quite nailed it. If imparting philosophical truths is similarly difficult, there is plenty of work to be done today.

Artificial intelligence could also advance from human-level to greater-than-human very quickly: the best human Go player lost to an AI (AlphaGo) in 2016, and a year later AlphaGo lost to a new version, AlphaGo Zero, 100 games to none. It would thus not be surprising if a general intelligence achieved superhuman status a year after achieving human-comparable status, or sooner. There’s no fire alarm for artificial general intelligence. We need to be working on these problems as soon as possible.

I’d argue, then, that of all the scenarios listed here, a misaligned AI is the most likely to actually destroy all of humanity, as a result of the Orthogonality Thesis. I also think that, unlike many of the other scenarios listed here, human-level AI will exist relatively soon compared to the timescale of asteroids and volcanism (see AI Timelines; estimates are highly variable, anywhere from 10 to 200 years). There is also a wealth of work to be done on AI value alignment. Correctly aligning future AI with goals compatible with human values is thus one of the most important challenges facing our civilization within the next hundred years or so, and probably the most important existential threat we face.

The good news is that there are some places doing this work, notably the Machine Intelligence Research Institute, OpenAI, and the Future of Humanity Institute. The bad news is that despite the importance of this issue, there is very little in the way of conversation, money, or advocacy. Total AI safety research spending is hard to calculate, as some research is likely done by private software companies, but it is optimistically on the order of tens of millions of dollars a year. By comparison, the U.S. Transportation Security Administration, which failed to find 95% of test weapons in a recent audit, costs $7.5 billion a year.

Further Reading

I have focused this essay on conveying the mindset of thinking about existential risk generally and why AI is specifically worrying in this context. I’ve also tried to keep it short. The following are further resources on why artificial general intelligence is worth worrying about in a broader context, arranged by length. If you feel my piece did not go deep enough on whether AI itself is worth being concerned about, I urge you to read one of the more in-depth essays here, which focus on that question directly.



Leave a comment on the official reddit thread.