Electoral Reform

This is the first post in a series on the 2020 U.S. election. The next post will likely be on strategic voting in the U.S. electoral system. But before we get there, this is a short biennial plea to remind you that the current way the U.S. conducts elections and government is not the only way. You may not always be able to reform the electoral system while voting from inside it, but opportunities do arise, and reform should always be in the back of our minds.

All democracies have drawbacks of some kind, but the American electoral system seems to have a lot of issues, many of them fixable. I’ve had a lot to say about the issue (and more, and even more).

I’m not just talking about the electoral college, although yes, that is a problem (some good critiques here). Our use of first-past-the-post voting is the worst of all possible voting systems. I’ve often advocated approval voting, but there are many good alternatives. Nonetheless, all voting systems will trend towards two parties under winner-take-all single-member districts like we have today. We might consider multi-member districts, although discussion of such an idea is essentially nonexistent. Worse still, House districts are gerrymandered to create uncompetitive elections. Perhaps you’d hope other parties might be able to enter those uncompetitive elections, but ballot access laws place barriers to entry for alternative parties, sometimes costing thousands of dollars in signature-gathering just to get on the ballot, while requirements are waived for Republicans and Democrats. This also makes it generally more difficult for alternative voting coalitions to arise.
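To show how simple approval voting is, here is a hypothetical tally sketch (the candidate names and ballots are invented for illustration): each voter marks every candidate they approve of, and the candidate approved by the most voters wins.

```python
from collections import Counter

# Each ballot is the set of candidates the voter approves of.
# Names and ballots are made up purely for illustration.
ballots = [
    {"Alice", "Bob"},
    {"Alice"},
    {"Bob", "Carol"},
    {"Alice", "Carol"},
    {"Alice"},
]

# Count one approval per candidate per ballot.
tally = Counter(candidate for ballot in ballots for candidate in ballot)

winner, approvals = tally.most_common(1)[0]
print(f"{winner} wins with {approvals} of {len(ballots)} voters approving")
# → Alice wins with 4 of 5 voters approving
```

Unlike first past the post, a voter can support a minor-party candidate without abandoning a front-runner, since approving one candidate never costs support for another.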

Unfortunately, many general election races are already decided before you even consider whom to vote for in November. So perhaps primaries are the place to exercise your right to vote? Sadly, primaries have many issues of their own: they also use first past the post, they have an extremely narrow electorate, and their structure incentivizes ignoring moderates, who either can’t vote or are split between the primaries of the two parties. Even if you know of a competitive primary in a state where the general won’t be close (for example, Republicans usually win in the deep south, but the Republican primary might be competitive), in many states you have to be registered with that party (or sometimes as an independent) to vote in that party’s primary. That often means spending time changing your voter registration while predicting ahead of time whether the primary will be close. Each state is different, so this can be a major headache if you are trying to cast a genuinely decisive vote. Note that there are plenty of good primary reform ideas as well; St. Louis Approves is campaigning for a simple blanket primary with approval voting, with the top vote-getters going on to the general election.

So far we’ve covered a lot of voting issues and possible reforms, but I want to also emphasize that there are important democratic channels outside of pure voting. For example, voting provides no feedback on specific legislation, so representatives don’t receive direct electoral feedback about how they are voting. A better way to express opinions here is to call legislators’ offices and complain directly. Note that legislators will probably only care whether you are a voter, not whether you spent any actual time and effort researching whom to vote for. We’ll revisit that in the next post.

Legislative institutions also have a major impact on how policy becomes law, and they have their own problems. Representatives in the House have very little ability to offer amendments on most legislation, which is instead crafted by House leadership from the top down. This discourages broadly popular coalitions in favor of partisan priorities. Moreover, Congress has continually ceded power to the president, which hypercharges the importance of the imperial presidency. The result is division, with every presidential election becoming a winner-take-all, high-stakes competition. If Congress were powerful and moderate, much less would ride on every presidential election.

In conclusion: the median American voter this year will vote in an uncompetitive non-swing state in the electoral college, have uncompetitive Senate and House elections, and have uncompetitive state legislative elections about which they know very little. This is not great.

All hope is not lost though. Last time I wrote this type of post, I mentioned that Reform Fargo was trying to get an approval voting system implemented for Fargo municipal elections. That effort passed, and Fargo is now using approval voting, which has already resulted in council members winning with broad support instead of the tiny fractions of the vote they were getting before. This year, St. Louis is looking at implementing an approval voting system as well. Both of these efforts were helped by the Center for Election Science, which is one of the charities I suggested donating to in my end-of-year charity discussion.

While most of us won’t yet have a chance to vote to improve our election system, it does seem like improvements are possible. And look out for my next post, discussing in more depth the electoral landscape we will be facing this year.

Picture credit: David Maiolo licensed under CC-BY-SA 3.0 Unported.

Book Review: The Precipice

I have titled my annual blog post summarizing where I donate my charitable budget as “How can we use our resources to help others the most?” This is the fundamental question of the Effective Altruism movement, which The Precipice’s author, Toby Ord, helped found. For a while, Toby Ord focused on figuring out how to fight global poverty, doing the most good for the worst off people in the world. Now, he is focusing on the long term future and existential risk.

The Precipice is fantastic. It’s incredibly well written, engaging, and approachable. It covers a lot of ground: why we should care about the world, what risks humanity faces in the future, how we might think about tackling those risks, and what the future might look like if we succeed.

The Precipice eloquently interweaves fairly philosophical arguments with more empirical analysis that tries to identify and statistically bound the sources of existential risk. The book discusses the deeply concerning topic of the potential end of humanity, but it does so with an eminently reasonable approach. The complexities of philosophy, science, probability, epidemiology, and more are all brought into the narrative, yet made easily digestible for any reader. I honestly wish Toby Ord could teach me about everything, his writing was so clear and engaging.

The main discussion is never overwhelming with technical details, but if you ever find a point interesting, even the footnotes are amazing. At one point I came up with a counterpoint to Ord’s position and wrote it down in my notes, only to find that the next several paragraphs addressed it in full, with an entire appendix going into even more detail. Honestly, this will be less of a book review and more of a summary with a couple of final thoughts, because I think this book is not only excellent, but its content is perhaps the most important thing you can read right now. You are welcome to read the rest of this blog post, but if you have found this compelling so far, feel free to stop reading and order Toby Ord’s book posthaste.

Existential Risk

The consequences of 90% of humans on Earth dying would be pretty terrible, and given our relatively poor response to recent events, perhaps we should better explore other potential catastrophes and how we can avoid them. But The Precipice goes further. Instead of 90% of humans dying, what happens if 100% of us die out? Certainly that’s strictly worse, since 100 > 90, but in fact these outcomes are far apart in magnitude: if all humans die out today, then all future humans never get to exist.

There’s no reason we know of that would stop our descendants from continuing to live for billions of years, eventually colonizing the stars, and allowing for the existence of trillions of beings. Whatever it is that you enjoy about humanity, whether that’s art, engineering, or the search for truth, that can’t continue if there aren’t any humans. Full stop. As far as we know, we’re the only intelligence in the universe. If we screw up and end humanity before we get off this planet, then we don’t just end it for ourselves but perhaps we end all intelligent life for the remaining trillions of years of the universe.

Even though I was aware of the broad thesis of the book, I was continually impressed with just how many different angles Ord explores. Early on, he notes that while we might normally think of a catastrophic extinction event, like an asteroid impact, as the thing we are keen on avoiding, there are in fact several scenarios that would be similarly devastating. For example, if humanity were to suffer some calamity that did not kill everyone but left civilization stuck at pre-industrial technology, that would also preclude humanity from living for trillions of years and colonizing the stars. A 1984-style global totalitarian state would also halt humanity’s progress, perhaps permanently.

Ord also discusses the fundamental moral philosophy implications of his thesis. The natural pitch relies on utilitarian arguments as stated above; if humanity fails to reach its potential, this not only harms any humans currently alive but all future generations. Other arguments against extinction include a duty to our past and what we owe to our ancestors, the rights of those future generations who don’t get to decide for themselves, and the simple fact that we would lose everything we currently value.

The book categorizes three types of risk: natural, anthropogenic, and future risks. Natural risks include asteroids, supervolcanoes, and stellar explosions. These are pretty diverse topics, and Ord is quite informative. The story about asteroid risk was particularly fascinating to me. In the 90s, the relatively new discovery of what caused the dinosaurs’ demise led Congress to task NASA with identifying the largest near-Earth asteroids to see whether any pose a threat to Earth. Money was allocated, and NASA tracked every near-Earth asteroid over 10 km across and determined that none pose a threat in the next century. They then moved on to 1 km asteroids and have now mapped the vast majority of those as well. The total cost of the program was quite small for the information provided: only $70 million.

This is one of the rare successes in existential risk so far. Unfortunately, as Ord points out several times in the book, spending on foundational existential risk research currently totals no more than $50 million a year. Given the stakes, this is deeply troubling. For context, Ord points out that the global ice cream market is about $60 billion, some 1000x larger.

I’ll skip the other natural risks here, but the book bounds natural risk quite skillfully: humans have been around for about 200,000 years, or some 2,000 centuries, so if the per-century risk were much higher than 1 in 2,000 (0.05%), we would have been unlikely to survive this long. Even then, we’d expect our technologically advanced civilization to be more robust to these risks than our ancestors were. Many species survived even the largest mass extinctions, and none of them had integrated circuits, written language, or the scientific method.
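The survival-based bound can be sketched numerically (the numbers are just the ones quoted above, not additional data from the book):

```python
# Humanity has survived roughly 200,000 years, i.e. about 2,000 centuries.
centuries_survived = 200_000 / 100

# If the per-century natural extinction risk were p, the chance of
# surviving 2,000 centuries would be (1 - p) ** 2000. A risk of
# 1 / 2000 = 0.05% per century is about where survival stops being likely.
p = 1 / centuries_survived
print(f"bound on per-century risk: {p:.2%}")  # → 0.05%

# At exactly that risk level, surviving this long has probability ~1/e:
survival_probability = (1 - p) ** centuries_survived
print(f"chance of surviving 2,000 centuries: {survival_probability:.2f}")  # → 0.37

# A risk 10x higher would have made our survival wildly improbable:
print(f"at 0.5% per century: {(1 - 10 * p) ** centuries_survived:.6f}")
```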

That doesn’t mean that all risk has declined over time. On the contrary, according to Ord, the vast majority of existential risk is anthropogenic in origin. Nuclear weapons and climate change dominate this next section. It’s remarkable just how cavalier early tests of nuclear weapons really were. Ord recounts two major calculations undertaken by a committee of Berkeley physicists before the Manhattan Project got underway in earnest. One was whether the temperature of a sustained nuclear reaction would ignite the entire atmosphere in a conflagration (the committee believed it would not). The other was whether lithium-7 would contribute to a thermonuclear explosion (it was believed it would not). In fact, lithium-7 can contribute to a thermonuclear explosion, as was discovered when the Castle Bravo test came out about three times larger than expected, irradiating some 15 nearby islands.

It turned out the other calculation was correct, and the first nuclear explosion in 1945 did not ignite the atmosphere. But clearly, given the failure of the other calculation, the level of confidence here was not high enough to warrant the risk of ending all life on Earth.

Luckily, the risk that nuclear weapons or climate change would wipe out humanity entirely seems quite low (although not zero). Even a nuclear winter scenario or high sea level rise would not make the entire Earth uninhabitable, and it is likely humans could adapt, although the loss of life would still be quite catastrophic.

Instead, the bulk of the risk identified by Toby Ord is in future technologies which grow more capable every year. These include engineered pandemics from our increasingly powerful and cheap control over DNA synthesis, as well as artificial intelligence from our increasingly powerful and integrated computer systems.

The threat of engineered pandemics is particularly salient as I write this in August 2020, with SARS-CoV-2 still sweeping the world. Ord notes that even given quite positive assumptions about how few people would want to destroy the world with a virus, if the cost is cheap enough, it only takes one crazy death cult to pull the trigger. Even the accidental creation of a superweapon is a serious risk, as production is cheap and there are many past examples of accidental leaks of bioweapons from government laboratories. Unfortunately, we are also woefully unprepared on this front. The Biological Weapons Convention had a budget of $1.4 million in 2019, which Ord notes is less than that of the average McDonald’s restaurant.

Risks from unaligned artificial intelligence are similarly tied to technical advancement. Ord notes that artificial intelligence has had some impressive achievements recently, from photo and face identification to translation and language processing to games like Go and Starcraft. As computer hardware gets better and more specialized, and as we discover more efficient algorithmic applications of artificial intelligence, we should expect this trend to continue. It therefore seems plausible that sometime in the future, perhaps this century, we will see artificial intelligence exceed human ability across a wide variety of tasks. The Precipice notes that, were this to happen with some sort of general intelligence, humanity would no longer be the most intelligent species on the planet. Unless we have foresight and strategies in place, sharing the planet with a superior intelligence that has its own goals could be quite dangerous.

Unfortunately, we are already quite poor at getting complex algorithms to achieve complicated goals without causing harm (just look at the controversy around social media and misinformation, or social media and copyright algorithms). The use of deep learning neural networks in higher-stakes environments means we could be facing opaque outcomes from artificial intelligence systems that we cannot verify were correctly programmed to achieve the goals we actually want. Throw in the fact that human civilizational goals are multifaceted and highly debated, and there are a great many mistakes that could occur between what humans “want” and what a superior intelligence attempts to accomplish. While Toby Ord doesn’t think we should shut down AI research, he does suggest we take this source of risk more seriously by devoting resources to it and working on the problem.

So What Do We Do?

I’ve spent a lot of time enumerating risks because I think they are a concrete way to get someone unfamiliar with existential risk to think about these ideas. But Ord isn’t writing a book of alarmism just to freak out his audience. Instead, starting with the high levels of risk and adding the extremely negative consequences, Ord details how we might begin to tackle these problems. Unprecedented risks come with modeling challenges: if an existential risk, by definition, can never have occurred, how can we know how likely it is? We have to acknowledge this limitation, use what incomplete knowledge we do have access to (the number of near misses is a good start), and start building institutions focused on solving these hard problems.

International coordination is a major factor here. Many of these problems are collective action problems. Humanity has found ways around collective action issues with international institutions before (nuclear arms treaties), and so we need to replicate those successes. Of course, we can’t establish new or better institutions unless we get broad agreement that these issues are major problems that need to be solved. Obviously, that’s why Ord wrote this book, but it’s also why I feel compelled to blog about it as well. More on that momentarily.

In this section of the book, The Precipice outlines preliminary directions we can work towards to improve our chances of avoiding existential catastrophes. These include obvious things like increasing the funding for the Biological Weapons Convention, but also discussions on how to think about technological progress, since much of our future existential risk rises as technology improves. We also obviously need more research on existential risk generally.

Finally, I want to wrap up by discussing Appendix F, which collects all of Ord’s general policy recommendations in one place. As policy prioritization has long been an interest of mine, I found Toby Ord’s answer quite fascinating. I wrote a post a few months back discussing the highest-impact policies actually being debated in American politics this election cycle. Comparing it to Toby Ord’s recommendations, the overlap is essentially nonexistent, except for some points on climate change that most Democrats support, such as the U.S. rejoining the Paris Climate Agreement. There’s also a point about leveraging the WHO to better respond to pandemics, and given that Trump has essentially done the exact opposite by removing U.S. funding for the WHO, I suppose I should at least count that as relevant policy debate.

I want to emphasize that Ord has 9 pages of policy ideas, and many of them are likely uncontroversial (improve our understanding of long-period comets, give the Biological Weapons Convention a real budget), but our political system is failing to even address these challenges, and I think it’s important to highlight that.

There is room for optimism; human knowledge is improved by discussion and research, and that includes reading and blogging. If you find these ideas interesting, or even more broadly, if you think there are valuable things in the world, one of the most effective activities you could do this year might be to just read The Precipice. Even without the weight of humanity, the concepts, problem solving, and prose are worth the read all by themselves. This is definitely my favorite book I’ve read this year, and I’ve skipped over summarizing whole sections in the interest of time. Ord even has a whole uplifting chapter about humanity’s future potential, and is overall quite positive. Please attribute any gloominess on this topic to me and not the book.

And if you do read this book, it also just makes for intriguing conversation. I couldn’t help but tell people about some of the ideas here (“are supervolcanoes a national security threat?”), and the approach is just wonderfully different, novel, and cross-disciplinary.

For more on this, but slightly short of reading the whole book, I also recommend Toby Ord’s excellent interview on the 80,000 Hours Podcast. On that page you can also find a host of awesome links to related research and ideas about existential risk. I’ll also link Slate Star Codex’s longer review of The Precipice, and places to buy it.