You’re Still Looking for Your Keys Under the Streetlight

Emmett Shear went on the Clearer Thinking podcast and discussed effective altruism among other things. We can split up his points into things I agree with and things that seem incorrect. I’ll do this using Scott Alexander’s Effective Altruism As A Tower of Assumptions for some landmarks. The tower in question:

Scott’s note: “Not intended to be canonical; realistically it would be more of a tree or flowchart than a tower.”

Shear:

…what people want is a place to put money so they can buy indulgences, so they can feel less guilty. And unfortunately, that’s not a thing that exists. No one is printing indulgences that you can just give money here and you have done your part. That’s, unfortunately, just not how it works. Do your best. Do enough. That’s good. I love that Giving What We Can pledge. I think that’s a hugely beneficial idea like, “Hey, what if we all just took 10% and we said that was enough?” That would actually be way more than people give today. And it would also be enough, I think, if we all did it. Then people could stop beating themselves up over not feeling guilty about not doing enough, which I think is acting from a fear of “I am not good enough.” That’s one of the most dangerous things you can do.

First of all, as always, if you’re critiquing EA, that means you’re doing effective altruism. You’ve agreed that there are multiple ways to do good in the world, and you think some are better than others. You’re already at the base of the tower. Next, Shear agrees donating 10% of your income is good. Giving What We Can seems pretty squarely inside EA. It’s possible Shear doesn’t realize that GWWC is a “…public commitment to give a percentage of your income or wealth to organisations that can most effectively help others”. So either Shear is endorsing effective giving, or he didn’t realize what it was and is critiquing a group of ideas he apparently has little understanding of (I think it’s the latter).

The part about indulgences makes no sense to me. Perhaps I’m an outlier. I’m a big tent libertarian as well as a big tent EA, so I have zero guilt about the money I make. I like the free enterprise system, and I think it makes the world better. I tend to think EAs are much more accepting of market benefits than other groups in the NGO space, which can skew very left-wing, but maybe other EAs actually do feel guilty about making money. If they do, I agree with Shear that the point of EA isn’t to make you feel less guilty.

However, the point of EA is really obviously not that! EA was created out of a specific problem that actual people like Dustin Moskovitz, Holden Karnofsky, and Elie Hassenfeld had, which was:

  • I’ve got a ton of money
  • I’d like to donate it in ways that use the money well, but
  • There’s no data on which charities actually accomplish good things

Indulgences have nothing to do with it. Next, let’s move up the tower to cause prioritization:

The malaria bed nets thing is the classic like, drunk looking for his keys under the streetlight…It’s a little unfair. There are keys to be found underneath the spotlight of quantifiable, measurable impact. And it is good work to go do that. But like most good that can be done in the world, unfortunately, is not easily measurable by randomized controlled trial or highly measurable, highly quantified, very trustworthy impact statements. To the degree we find good to be done on those, we should fund that stuff.

…You’re reducing the variance on your giving by insisting on high measurability, because you know for sure you’re having this impact. It’s not that doing that kind of low variance giving is bad, it’s just, obviously, the highest impact stuff is going to be more leveraged than that. And it’s also going to be impossible to predict, probably non-repeatable a lot of the time, and so, sure fund the fucking bednets. But, that’s not going to be THE answer. It’s just AN answer.

Obviously Shear is going off the cuff, but it’s clear he’s never heard of Open Philanthropy’s post on Hits Based Giving, which is like 8 years old at this point. GiveWell has a fund explicitly to incubate new ideas that haven’t been proven yet. It’s well known that not every opportunity is going to fit into a rigorous RCT scenario. A major benefit of GiveWell is to provide a better baseline compared to what most people donate to. GiveWell really does save more lives than donating to your college. If Emmett thinks donating to Yale (his alma mater) is better, he should make that case!

But sure, I agree that if we gave $100 billion to GiveWell over the next 10 years and even if they knew exactly the most high impact thing to do with it, it’s not like all of humanity’s problems would be solved. There are a lot of very intractable political stability and institutional issues around the world, and bednets won’t necessarily solve that. But check where we are on the tower! GiveWell is there to be better than the generic charitable giving most people do, and I think it’s pretty good at that.

The conversation turns to x-risk, but this also misses a lot of work. OpenPhil gave money to the Institute for Progress, which does all sorts of innovation policy work on immigration, biotechnology, and more. OpenPhil gives money to animal welfare, to land use reform work, and to global health scientific research. To critique EAs for being too focused on measurable RCTs is just bonkers. But let’s talk about x-risk:

And I’d say on the other side of it, the “Oh, but isn’t it more important to go after nuclear risks and stuff like that, or AI risk or whatever?” More important is the problem. That idea that you can rank all the things by importance and that you could know, in a global sense, which of these things is most important to work on, like what is most important for you to do is contextual to you. It’s driven by where you will be able to have the best impact, which is partly about the problem, but also partly about you, and where you live, and what you know, and what you’re connected to. And if you care about one of these, you think you have an inclination that there’s a big risk over there, learning more about that and growing in that direction might be a good idea.

But, the world is full of unknowns. To think that you’ll have THE correct answer is like, “No, you won’t. You’ll only not know THE correct answer, you won’t even have a full ranking. You’ll just have a bunch of ideas of stuff where your estimates all overlap each other and have high variance… Or how you, in order to get out of analysis paralysis, insist to yourself, “We have found the correct answer: AI x-risk is the most important thing. That is all I’ll devote my life to because nothing else is nearly as important because that’s the thing.” And like, maybe, maybe not. How do you know? You don’t know. You can’t possibly know, because the world is complicated.

As we’ve said earlier, if you’re making a critique of EA, then you’re already admitting that some causes might be better than others. Shear is trying to get around this by dismissing all prioritization as impossible. It contradicts what he said about bednets, where he argued that the highest impact work is not going to be provable under an RCT regime, but I think we can charitably restate his argument thusly: charitable impact is like a Hayekian information problem where the relevant information is scattered, with everyone having specialized knowledge. In this world, you can’t standardize impact because it’s unique to each person.

Again, if Emmett wants to argue that Yale donations are as good as GiveWell, he should do that! I’m not convinced. But let’s talk about individual advice. Does EA just tell everyone to focus on AI only all the time? No, of course not. Shear is just repeating what effective altruists already do as if it’s some fundamental demolition of their core beliefs. 80000 Hours doesn’t tell everyone to go into AI risk. Most EA money doesn’t go towards AI x-risk. The point is that AI risk wasn’t something people worried about at all until effective altruists started talking about it, well before AI blew up after the transformers paper.

And moreover, the EA record here on cause prioritization is really good! It turns out prioritization is possible after all! There’s been a lot of interest in risks from pandemics for a long time. EAs weren’t the only ones talking about it (Bill Gates talk from 2014). But EAs noted this was an important and neglected area before COVID-19 killed millions of people. We need more people being ahead of the curve like this, and we should see who has a good track record of working on problems so that we have solutions on hand when the problems become real.

To Emmett’s credit, he later goes on to say that an AI which makes it easier to build a better AI could in fact be “end-of-the-world dangerous”. But it strikes me as strange to believe AI could end the world and also that it’s impossible to prioritize some resources there.

If there’s a single EA concept that it’s clear Emmett Shear doesn’t understand, it’s “neglectedness”. Doing something good is nice. Doing something good, that no one else has thought of yet, where there’s lots of low-hanging fruit and high payoff: that’s superb.

…What’s the most highly measurable impact that I can have? But you know, the charitable work I’ve done that probably had the biggest impact in terms of leveraged impact has always been opportunistic. There’s a person in a place, and I know because of who I am and who they are that I can trust them and this is a unique opportunity, and I have an asymmetric information advantage. And I am going to act fast with less oversight and give them money to make it happen more quickly. And that’s not replicable. I can’t give that to you as another opportunity to go do because most high impact things don’t look that way…

This sounds plausible until you think about what effective altruists have already done. When Elie and Holden formed GiveWell there was no repeatable, replicable way to give money to save the most lives per dollar. Imagine if they had had this attitude when starting out. They would have thrown up their hands and said “welp, guess we’ll go home and donate our money to Yale!” Instead they built from scratch an organization that tried to understand what global health charities actually did and whether they were helpful. And in 2022 GiveWell raised $600 million and directed it towards places where they expect they can save a life per $5000. Things aren’t replicable and scalable until you recreate the world to make it so. You’d think a Silicon Valley CEO would know better!
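
Those two numbers imply a staggering scale, and the arithmetic is worth doing explicitly. Here’s a back-of-envelope sketch; the $5,000 figure is GiveWell’s rough average, and the division below is my illustration, not their published estimate:

```python
# Back-of-envelope scale of GiveWell's 2022 money moved (illustrative only).
money_directed = 600_000_000  # dollars GiveWell directed in 2022
cost_per_life = 5_000         # rough expected cost to save a life

implied_lives = money_directed / cost_per_life
print(f"Implied lives saved: {implied_lives:,.0f}")  # ~120,000
```

That’s the kind of replicable, scalable impact that didn’t exist until someone built the infrastructure to find it.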

Maybe one of the highest impact things you could have done was to invest money in YouTube because YouTube has created this massive amount of impact in terms of people’s ability to learn new skills or whatever. Or donate money to Wikipedia earlier or something. But that’s not replicable. Once it’s done, it’s done. You need to figure out the next thing…

There’s a lot to say here. There’s a big difference between market transactions, which pay for themselves, and charitable work, which doesn’t. It’s completely reasonable to ask if there are situations where the free market won’t solve a problem but altruistic giving could, so I don’t think the YouTube analogy makes any sense.

But setting that aside, I need to shout “NEGLECTEDNESS” loudly into the void until someone hears me. There is no world in which the internet exists but there’s no major video-focused social media site. For God’s sake, when Google bought YouTube, they already had their own video hosting site in Google Video. If Google had cancelled that project, I’m 100% sure Facebook would have grabbed the free money, as they tried to create a video hosting platform even when YouTube already existed. Your investment in YouTube is literally worthless when it comes to altruistic impact, because there’s a vibrant market searching for business opportunities in the space. Distributing bednets that would never have been distributed otherwise actually makes a difference!

And of course, I totally endorse acting fast with less oversight. Fast Grants was good! ACX grants are good! I think this counts as EA, but to the extent that other people don’t, I will agree with Shear on the Silicon Valley mindset of variance and experimentation. I just think effective altruism does this already.

Atop the Tower

Alright, so we’re at the top of the tower. Maybe there are EA orgs which should be criticized more explicitly. Could be. I think the ones I brought up here, like GiveWell, OpenPhil, and 80000 Hours, are actually already doing the stuff Emmett Shear says they should be doing. And there are individual projects and focus areas that I don’t actually think are very impactful, but if Shear thinks there are higher impact specific projects, he doesn’t do a good job of conveying that.

I also actually think there probably are some big blind spots in EA as a whole. EA isn’t left-wing, but it’s a lot more left-leaning than I am. I suspect there are real conservative critiques that EA hasn’t internalized. I’m sure there are traditional values doing load-bearing work we don’t realize, and since EA is pretty explicitly against most conservative values, I suspect that could result in some poor outcomes in ways that are hard to predict. It’s a hard problem and I wish someone with more time could think about it more deeply.

But what frustrates me to no end is that EA critics never seem to bring up good points. Their critiques are often just like Emmett Shear’s: they attack a strawman that doesn’t exist, they don’t understand that EA invests in a broad array of cause areas, and they bring up points that EAs have been discussing for years as if they were novel. They say the effective altruists are looking for their keys under a streetlight, because they’ve never bothered to move out from under their own streetlights.

Book Review: Harry Potter and the Methods of Rationality

Harry Potter and the Methods of Rationality is a Harry Potter fanfic written by Eliezer Yudkowsky (which you can find here). Yudkowsky is the creator of LessWrong, which I often use as a shorthand for the entire Rationality space. I wasn’t around at the time, but I eventually found Scott Alexander’s work at Slate Star Codex, which is linked in the sidebar and is one of the inspirations for this blog. Through SSC, I found writings by Yudkowsky, mostly about artificial intelligence. Last year I tried to read the essays known as “The Sequences” or Rationality: A-Z at LessWrong. I read the first couple of books, but they’re pretty dense and my interest dropped off.

HPMOR, unlike The Sequences, is fiction, and I found it incredibly easy to read. I assume the project began as a way to teach rationality through a common pop culture phenomenon, and it’s pretty solid at doing that. Concepts like Bayesian updating and evolutionary biology are well explained, often by way of critiques of the Harry Potter universe.

The general premise is that Harry Potter’s Aunt Petunia marries an Oxford professor instead of Vernon Dursley. Thus, Harry is raised by his adoptive parents with extensive training in the scientific method, as well as the latest understanding of probability, physics, biology, chemistry, etc. He comes to Hogwarts armed with the scientific method and sets about trying to understand how magic works.

The Harry Potter universe is an excellent substrate in which to do this, because its world is so creative, popular, and complete. But J.K. Rowling’s world also has glaring problems which can be explored in ways that teach social and physical science, and even philosophy. The wizarding government apparently sends people to Azkaban where they are not only kept separate from other people (as we do in our world with regular prisons), but they are also psychologically tortured in the most horrifying way, with their good memories drained by beings of pure evil. They also apparently have no trials, as Hagrid is sent to be tortured in the second book with no oversight. Additionally, wizards have magical healing capabilities and can magically create copies of food, yet muggles in the non-wizarding world die of malaria, tuberculosis, and starvation all the time, and this is never mentioned in the original source material.

A lot of science fiction has similar problems; in The Matrix, the machines harvest human beings’ thermal energy, but instead of just putting people in comas, they create a massively complex neural interactive computer simulation. This both creates the ranks from which their enemies recruit and uses tons of additional energy. To be sure, science fiction isn’t necessarily made worse by these internal inconsistencies, but good science fiction should explore these ideas instead of papering over them with hand-waving.

The writing of HPMOR is delightful. A fanfic that explores Bayesian probability in the Harry Potter universe shouldn’t be this well written, but it really is. It’s creative, funny, intense, emotional, and continually pushed me to want more. The actual size of the six books in the series is gargantuan, somewhere over half a million words, or something like the first four actual Harry Potter books combined. If you read just the first “book”, I think you’ll get a general idea and know if you find it interesting yourself. I couldn’t put it down. It makes me wonder in particular about Eliezer Yudkowsky; I had previously thought of him as an AI alignment researcher, so it would appear he is something of a polymath who can both write incredibly addictive fan fiction and be a leading advocate and researcher for AI alignment.

The exploration of magic is the inspiration for the story, but that’s not where it stops. The plot itself is highly compelling and different enough from the actual book(s) that I wasn’t sure exactly where it was going. It also does a nice job of rebuilding the Harry Potter world in a somewhat internally consistent way after all the criticisms. Yudkowsky comes up with a lot of original ideas here that fit into the pre-existing universe really well. There are also a lot of influences from Ender’s Game, which makes a lot of sense given that it is also a story of a young child genius using logic and game theory to make it through a school.

This story also makes me think about intellectual property and copyright lengths again. HPMOR is perhaps the best example I’ve ever seen of someone creating an incredible story in a world that they didn’t have IP rights to, and it makes me wonder how many more stories like this could exist if copyright lengths were shorter. Harry Potter was such a huge phenomenon that it really required the modern world to have that huge impact, like Star Wars or Marvel movies. But if copyrights only lasted 30 years, we might see amazing works like HPMOR building careers on great franchises in the public domain while those franchises were still relevant!

In short, I strongly recommend this fan fiction on the condition that you enjoy Harry Potter. Otherwise, a lot of the setting and characters may not make sense and I’m sure all of the jokes will fall flat. Beyond that, if you’re already reading this blog, you presumably have some vague enjoyment of rationality, empiricism, systematized thinking, etc., and this story is educational, creative, and addictive.

Links 2018-07-09

My new series focusing on policy summaries made me realize that while the political world and Twittersphere may not discuss policy much, there are groups of people who research policy professionally and have probably covered some of what I want to do with my “Policies in 500 Words or Less” series. So after looking around, I found that the Cato Institute has an excellent page called the Cato Handbook for Policymakers. It contains a ridiculous 80 entries of policy discussion, including a top agenda of important items and sections on legal and government reform, fiscal, health, entitlement, regulatory, and foreign policy. I will definitely be pulling some ideas from that page for future policy summaries.

I recently found the YouTube channel of Isaac Arthur, who makes high quality, well researched, and lengthy videos on futurism topics, including space exploration. I’d like to take a moment to highlight the benefits of a free and decentralized market in the internet age. Adam Smith observed that the division of labor is limited by the extent of the market, and with a market the size of the internet, labor can become incredibly specialized. Arthur has a successful Patreon with weekly videos on bizarre and niche topics that regularly get hundreds of thousands of views (24 million total for his channel), and they are available completely free, no studio backing necessary. Such an informative career could not have existed even 10 years ago.

The 80000 Hours Podcast, which was recently mentioned in our top podcasts post, had Dr. Anders Sandberg on (broken into two episodes) to discuss a variety of related topics: existential risk, solutions to the Fermi Paradox, and how to colonize the galaxy. Sandberg is a very interesting person and I found the discussion enlightening, even if it didn’t focus much on how to change your career to have large impacts, like 80000 Hours usually does.

Reason magazine’s July issue is titled “Burn After Reading”. It contains various discussions and instructional articles on how to do things that are on the border between legal and illegal, such as how to build a handgun, how to make good pot brownies, or how to hack your own DNA with CRISPR kits. It’s an impressive demonstration of the power of free speech, but also a testament to the cyberpunk ideal that information is powerful and can’t be contained.

George Will writes in support of Bill Weld’s apparent aim to become the 2020 Libertarian Party nominee. I admit I wasn’t hugely impressed with Weld’s libertarian bona fides when he was running in 2016, but I thought his campaigning and demeanor were easily better than Gary Johnson’s, who was already the LP’s best candidate in years, maybe ever. I think a better libertarian foundation paired with Weld’s political skills would make an excellent presidential candidate for the LP.

Related: last week was the 2018 Libertarian Party National Convention. I don’t know if it’s worth discussing or whether it’s actually going to matter, but I have seen some good coverage from Matt Welch at Reason and Shawn Levasseur.

I read this very long piece by Democratic Senator (and likely Presidential hopeful) Cory Booker at Brookings. It was a pretty sad look at current issues of employment, worker treatment, and stagnant wages. There was a compelling case that firms are getting better at forcing labor to compete, subcontracting out work to avoid paying employee benefits. This leads to monopsony labor purchasing by large firms, squeezing workers who don’t have the same amount of market bargaining power. He also mentions non-compete clauses and growing differences between CEO pay and average pay for workers. I don’t have good answers to these points, although his suggestion of a federal jobs guarantee seems very expensive and likely wasteful. His proposed rules about stock buybacks also seem to miss the point. Maybe stricter reviews of mergers would work, but perhaps larger firms are more efficient in today’s high tech economy; it’s hard to know. Definitely a solid piece from a source I disagree with, which is always valuable.

Somewhat related: Scott Alexander’s post from a couple months ago on why a jobs guarantee isn’t that great, especially compared to a basic income guarantee. Also worth reading, Scott’s fictional post on the Gattaca sequels.

Uber might have suspended testing of self-driving automobiles, but Waymo is going full steam ahead. They recently ordered over 80,000 new cars to outfit with their autonomous driving equipment, in preparation for rolling out a taxi service in Phoenix. Timothy B. Lee at Ars Technica has a very interesting piece, arguing the setbacks for autonomous vehicles only exist if you ignore the strides Waymo has made.

Augur, a decentralized prediction market platform similar to Paul Sztorc’s Hivemind (which I’ve discussed before), is launching on the Ethereum mainnet today. Ethereum has its own scaling problems, although I’d hope at some point sharding will actually be a real thing. But for now, transactions on Augur may be pretty expensive, and complex prediction markets may remain illiquid. That may mean the only competitive advantage Augur will offer is the ability to create markets of questionable legality. Exactly what those will be remains to be seen, but this is an exciting step in the continuing development of prediction markets.


Narrow Your Gun Debates

This is an update from my post two years ago, since gun debates are in the news again and have yet to be narrowed.

My position on most issues leans towards the ability of individuals to operate without restrictions, so on firearms I’m open to robust gun ownership, but I wrote this post to explore the issue more thoroughly. I’m by no means a gun purist, to the dismay of many more intense libertarians I know. If there were more stringent regulations on firearms purchases, changing those laws would not be among my policy priorities.

Nonetheless, many people do feel strongly about gun ownership in the United States, and I wonder if this is a position where efficient advocacy could help us understand whether those feelings are warranted. Unfortunately, gun ownership and gun control are complex issues with many different parts.

Artificial General Intelligence and Existential Risk

The purpose of this post is to discuss existential risk, and why artificial intelligence is a relatively important aspect of existential risk to consider. There are other essays about the dangers of artificial intelligence that I will link to throughout and at the end of this post. This essay is a different approach that perhaps will appeal to someone who has not seriously considered artificial general intelligence as an issue requiring civilization’s attention. At the very least, I’d like to signal that it should be more socially acceptable to discuss this problem.

First is the section on how I approached thinking about existential risk. My train of thought is a follow-up to Efficient Advocacy. Also worth reading: Electoral Reform Fantasies.

Background

Political fights, especially the culture war battles President Trump seems so fond of, are loud, obnoxious, and tend to overshadow more impactful policy debates. For example, abortion debates are pretty common, highly discussed political issues, but there have been almost no major policy changes since the Supreme Court’s decision 40 years ago. The number of abortions in the US has declined since the 1980s, but it seems uncorrelated with any political movements or electoral victories. If there aren’t meaningful differences from different political outcomes, and if political effort, labor, and capital are limited, these debates seem to distract from other areas that could impact more people. Trump seems especially good at finding meaningless conflicts to divide people, like NFL players’ actions during the national anthem or tweeting about LaVar Ball’s son being arrested in China.

Theorizing about how to combat this problem, I started making a list of impactful-but-popular (or at least not unpopular) policies that would make up an idealized congressional agenda: nominal GDP futures markets, ending federal prohibition of marijuana, upgrading Social Security Numbers to be more secure, reforming bail. However, there is a big difference between “not unpopular”, “popular”, and “prioritized”. I’m pretty sure nominal GDP futures markets would have a positive effect on Federal Reserve policy, and I can’t think of any political opposition to them, but almost no one is talking about them. Marijuana legalization is pretty popular across most voters, but it’s not a priority, especially for this congress. So what do you focus on? Educating more people about nominal GDP futures markets so they know such a solution exists? Convincing more people to prioritize marijuana legalization?

The nagging problem is that effective altruist groups like GiveWell have taken a research-based approach to identify the best ways to use our money and time to improve the world. For example, the cost of distributing anti-mosquito bed nets is extremely low, resulting in an average life saved from malaria at a cost in the thousands of dollars. The result is that we now know our actions have a significant opportunity cost; if a few thousand dollars worth of work or donations doesn’t obviously have as good an impact as literally saving someone’s life, we need a really good argument as to why we should do that activity as opposed to contributing to GiveWell’s top charities.
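
To make the opportunity cost concrete, here’s a minimal back-of-envelope sketch in Python; the per-life figure is purely illustrative of “in the thousands of dollars”, not an official GiveWell estimate:

```python
# Illustrative opportunity-cost arithmetic, not an official estimate.
cost_per_life_saved = 3_000   # assumed "thousands of dollars" per life via bed nets

alternative_spending = 12_000 # any other use of the same money
lives_forgone = alternative_spending / cost_per_life_saved
print(f"${alternative_spending:,} spent elsewhere forgoes ~{lives_forgone:.0f} lives saved")
```

The point isn’t the exact numbers; it’s that once any intervention has a measured cost per life, every other use of money inherits a price tag denominated in lives.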

One way to make a case for why there are other things worth spending money on besides GiveWell’s top charities is to take a long-term outlook, trying to effect a large change that would impact a large number of people in the future. For example, improving institutions in various developing countries would help those populations become richer. Another approach would be to improve the global economy, which would both allow for more investment in technology and push investment into developing countries looking for returns. Certainly long-term approaches are more risky compared to direct impact charities that improve outcomes as soon as possible, but long-term approaches can’t be abandoned either.

Existential Risk

So what about the extreme long term? What about existential risk? This blog’s philosophy takes consequentialism as a founding principle, and if you’re interested in the preceding questions of what policies are the most helpful and where we should focus our efforts, you’ve already accepted that we should be concerned about the effects of our actions. The worst possible event, from a utilitarian perspective, would be the extinction of the human race, as it would not just kill all the humans alive today (making it worse than a catastrophe that only kills half the humans), but also end the potential descendants of all of humanity, possibly trillions of beings. If we have any concern for the outcomes of our civilization, we must investigate sources of existential risk. Another way to state this is: assume it’s the year 2300, and humans no longer exist in the universe. What is the most likely cause of our destruction?
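
To see why extinction is categorically worse than even a catastrophe killing half of humanity, here’s a toy utilitarian tally; every number is an illustrative placeholder (the “possibly trillions” of descendants especially):

```python
# Toy comparison of catastrophe vs extinction (all numbers illustrative).
current_population = 8e9          # people alive today, roughly
potential_descendants = 1e12      # "possibly trillions" of future beings

loss_half = 0.5 * current_population
loss_extinction = current_population + potential_descendants

print(f"Half of humanity dies: {loss_half:.1e} lives lost")
print(f"Extinction:            {loss_extinction:.1e} lives lost")
print(f"Extinction is roughly {loss_extinction / loss_half:.0f}x worse")
```

Under these assumptions, extinction isn’t merely twice as bad as losing half of humanity; it’s hundreds of times worse, because the potential future population dwarfs the present one.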

Wikipedia actually has a very good article on Global Catastrophic Risk, which is a broad category encompassing things that could seriously harm humanity on a global scale. Existential risks are a strict subset of those events, which could end humanity’s existence permanently. Wikipedia splits them up into natural and anthropogenic. First, let’s review the non-anthropogenic risks (natural climate change, megatsunamis, asteroid impacts, cosmic events, volcanism, extraterrestrial invasion, global pandemic) and see whether they qualify as existential.

Natural climate change and megatsunamis do not appear to be existential in nature. A megatsunami would be terrible for everyone living around the affected ocean, but humans on the other side of the earth would appear to be fine. Humans can also live in a variety of climates, so natural climate change would likely be slow enough for some humans to adapt, even if such an event causes increased geopolitical tensions.

Previous asteroid impacts have been devastating, notably the Cretaceous-Paleogene extinction event some 66 million years ago. This is a clear existential risk, but you need a pretty large asteroid to hit Earth, which is rare. Larger asteroids can also be more easily identified from further away, giving humanity more time to do something (push it off path, blow it up, etc). The chances here are thus pretty low.

Other cosmic events are also low probability. Gamma-ray bursts are pretty devastating, but they’d have to be close by (within a few hundred light-years) as well as aimed directly at Earth. Neither of these is likely within the next million years.

Volcanism is also something that has the potential to be pretty bad, perhaps at an existential level (see the Toba Catastrophe Theory), but it is also pretty rare.

An alien invasion could easily destroy all of humanity. Any species capable of traveling across interstellar space with military ambitions would be extremely technologically superior. However, we don’t see any evidence of a galactic alien civilization (see Fermi Paradox 1 & 2 and The Great Filter). Additionally, solving this problem seems somewhat intractable; on a cosmic timescale, an alien civilization that arose before our own would likely have preceded us by millennia, meaning the technology gap between us and them would be hopelessly and permanently large.

A global pandemic seems pretty bad, certainly much more likely than anything else we’ve covered in the short term. This is also exacerbated by human actions creating a more interconnected globe. However, it is counterbalanced by the fact that no previous pandemic has ever been 100% lethal, and that modern medicine is much better than it was during the Black Plague. This is a big risk, but it may not be existential. Definitely on our shortlist of things-to-worry-about though.

Let’s talk about anthropogenic risks next: nuclear war, conventional war, anthropogenic climate change, agricultural crises, mineral exhaustion, artificial intelligence, nanotechnology, biotechnology.

A common worry is nuclear war. A massive nuclear exchange seems somewhat unlikely today, even if a regional disagreement in the Korean peninsula goes poorly in the worst possible way. It’s not common knowledge, but the “nuclear winter” scenario is still somewhat controversial, and I remain unconvinced that it poses a serious existential threat, although clearly a nuclear exchange would kill millions. Conventional war is also out as it seems strictly less dangerous than a nuclear war.

For similar reasons to nuclear winter, I’m not quite worried about global warming on purely existential terms. Global warming may be very expensive, and it may cause widespread weather, climate, and ecological problems, but I don’t believe humanity will be entirely wiped out. I am open to corrections on this.

Agricultural crises and mineral exhaustion seem pretty catastrophic-but-not-existential as well. These would result in economic crises, but by definition, economic crises need humans to exist; if a crisis left fewer humans, an agricultural shortage would stop being an issue well before extinction.

The remaining issues are largely technological in nature: artificial intelligence, biotechnology, nanotechnology, or technical experiments going wrong (like the fear that the first nuclear test would set the atmosphere on fire). These all seem fairly concerning.

Technological Existential Risk

Concern arises because technological progress means the likelihood that we will have these technologies grows over time, and, once they exist, we would expect their cost to decrease. Additionally, unlike other topics listed here, these could wipe out humanity permanently. For example, a bioengineered virus could be far more deadly than what would naturally occur, possibly resulting in a zero survival rate. The cost of DNA technology has steadily dropped, and so over time, we might expect the number of organizations or people who have the knowledge and funding to engineer deadly pathogens to increase. The more people who have this ability, the more likely that someone makes a mistake and releases a deadly virus that kills everyone. An additional issue is that it is quite likely that military research teams are right now researching bioweapons like an engineered pathogen. Incentives leading to the research of dangerous weapons like this are unlikely to change, even as DNA engineering improves, meaning the risk of this threat should grow over time.

Nanotechnology also has the potential to end all life on the planet, especially under a so-called “grey goo” scenario, where nanobots transform all the matter on Earth. This has a lot of similarities to an engineered pathogen, except the odds of any human developing immunity no longer matter, and additionally all non-human life, indeed, all matter on Earth is also forfeit, not just the humans. Like biotechnology threats, we don’t have this technology yet, but it is an active area of research. We would also expect this risk to grow over time.

Artificial General Intelligence

Finally, artificial general intelligence contains some similar issues to the others: as technology advances, we have a higher chance of creating it; the more people who can create it, the more dangerous it is; once it is created, it could be deadly.

This post isn’t a thesis on why AI is or isn’t going to kill all humans. We made an assumption that we were looking exclusively at existential risk in the near future of humanity. Given that assumption, our question is why AI would be more likely to end humanity than anything else. Nonetheless, there are lingering questions as to whether AI is an actual “real” threat to humanity, or just an unrealistic sci-fi trope. I will outline three basic objections to AI being dangerous with three basic counterarguments.

The first objection is that AI itself will not be dangerous because it will be too stupid. Related points are that AI is too hard to create, or that we can just unplug it if it has differing values from us. Counterarguments are that experts disagree on exactly when we can create human-level AI, but most agree that it’s plausible in the next hundred or couple hundred years (AI Timelines). It’s also true that we’ve seen improvements in AI’s ability to solve more general and more complex problems over time: AlphaZero learned how to play both Go and Chess better than any human without changes in its base code; YouTube uses algorithms to determine what content to recommend and what content to remove ads from, scanning through thousands of hours of video content every minute; Google’s Pixel phone can create software-based portrait photos via machine learning rather than needing multiple lenses. We should expect this trend to continue, just like with other technologies.

However, the difference between other technological global risks and AI is that machine learning optimization algorithms could eventually be applied to machine learning itself. This is the concept of an “intelligence explosion”, where an AI uses its intelligence to design and create successively better versions of itself. Thus, it’s not just that an organization might make a dangerous technological breakthrough, like an engineered virus, but that once the breakthrough occurs, the AI would rapidly become uncontrollable and vastly more intelligent than us. The analogy is that a mouse isn’t just less smart than a human; it literally doesn’t comprehend that its environment can be so manipulated by humans that entire species depend on the actions of humans (i.e. conservation, rules about overhunting) for their own survival.
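
To make the “optimization applied to the optimizer” intuition concrete, here’s a toy simulation; the dynamics and every number in it are made up for illustration and claim nothing about real AI systems:

```python
# Toy model of recursive self-improvement (purely illustrative).
# Ordinary progress: capability grows by a fixed step each generation.
# Recursive progress: the step itself scales with current capability.
capability_linear = 1.0     # "human-level" normalized to 1.0
capability_recursive = 1.0
step = 0.1                  # made-up improvement rate

for generation in range(1, 11):
    capability_linear += step                                # fixed gains
    capability_recursive += step * capability_recursive ** 2 # smarter -> faster gains
    print(f"gen {generation:2d}: linear={capability_linear:.2f} "
          f"recursive={capability_recursive:.2f}")
# The linear track reaches 2.0 after ten generations; the recursive track
# accelerates past 6.0 and, under these made-up dynamics, eventually diverges.
```

The toy isn’t a forecast; it just shows why feedback from capability into the rate of improvement changes the shape of the curve entirely.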

Another objection is that if an AI is actually as intelligent as we fear it could be, it wouldn’t make “stupid” mistakes like destroying all of humanity or consuming the planet’s resources, because that wouldn’t count as “intelligent”. The counterpoint is the Orthogonality Thesis. This simply states that an AI can have any goal. Intelligence and goals are orthogonal and independent. Moreover, an AI’s goal does not have to explicitly target humans as bad (e.g. “kill all the humans”) to cause us harm. For example, a goal to calculate all the digits of pi or solve the Riemann Hypothesis might require as much computing power as possible. As part of achieving this goal, a superintelligence would determine that it must manufacture computing equipment and maximize energy to its computation equipment. Humans use energy and are made of matter, so as a way to achieve its goal, it would likely exterminate humanity, and convert all matter it could into computation equipment. Due to its superintelligence, it would accomplish this.

A final objection is that despite experts believing human-level AI will happen in the next 100 years, if not sooner, there is nothing to be done about it today, or that it is a waste of time to work on this problem now. This is also known as the “worrying about overpopulation on Mars” objection, comparing the worry about AI to something that is several scientific advancements away. Scott Alexander has an entire blog post on this subject, which I recommend checking out. The basic summary is that AI advancement and AI alignment research are somewhat independent, and we really need to learn how to properly align AI values before we get human-level AI.

We have a lot of theoretical philosophy that we need to figure out how to impart to a computer. Things like how humans actually make decisions, or how to value different moral tradeoffs. This could be extraordinarily complicated, as an extremely smart optimization algorithm could misinterpret almost everything we say if it did not already share our values for human life, health, and general brain state. Computer scientists set out to teach computers how to understand natural human language some 60 years ago, and we still haven’t quite nailed it. If imparting philosophical truths is similarly difficult, there is plenty of work to be done today.

Artificial intelligence could advance rapidly from human level to greater than human very quickly; the best human Go player lost to an AI (AlphaGo) in 2016, and a year later, AlphaGo lost to a new version, AlphaGo Zero, 100 games to none. It would thus not be surprising if a general intelligence achieved superhuman status a year after achieving human-comparable status, or sooner. There’s no fire alarm for artificial general intelligence. We need to be working on these problems as soon as possible.

I’d argue, then, that of all the scenarios listed here, a misaligned AI is the most likely to actually destroy all of humanity, as a result of the Orthogonality Thesis. I also think that unlike many of the other scenarios listed here, human-level AI will exist sometime soon, compared to the timescale of asteroids and volcanism (see AI Timelines; estimates are highly variable, anywhere from 10 to 200 years). There is also a wealth of work to be done surrounding AI value alignment. Correctly aligning future AI with goals compatible with human values is thus one of the most important challenges facing our civilization within the next hundred years or so, and probably the most important existential threat we face.

The good news is that there are some places doing this work, notably the Machine Intelligence Research Institute, OpenAI, and the Future of Humanity Institute. The bad news is that despite the importance of this issue, there is very little in the way of conversations, money, or advocacy. Total AI safety research funding is hard to calculate, as some research is likely done by private software companies, but it is optimistically on the order of tens of millions of dollars a year. By comparison, the U.S. Transportation Security Administration, which failed to find 95% of test weapons in a recent audit, costs $7.5 billion a year.

Further Reading

I have focused this essay on trying to convey the mindset of thinking about existential risk generally and why AI is specifically worrying in this context. I’ve also tried to keep it short. The following are further resources on the specifics of why Artificial General Intelligence is worth worrying about in a broader context, arranged by length. If you felt my piece did not go in depth enough on whether AI itself is worth being concerned about, I would urge you to read one of the more in depth essays here which focus on that question directly.

Leave a comment on the official reddit thread. 

Links 2017-1-12

As we approach the time when free trade is the heretical advice rather than the obvious logical one, it’s time to brush up on our free trade arguments. Here’s an interesting one: would you ban new technology to save the jobs tied to the technology it replaces? Would you ban light bulbs to save candlemakers? Cars to save horsebreeders? It’s a ridiculous proposition to freeze the economy at a certain point in time. Well, there’s no economic difference between new technology and free trade. In fact, we can treat international trade as a fancy machine where we send corn away on a boat and the machine turns the corn into cars.  

And speaking of free trade, this is the economic modeling for why a tariff is unequivocally inefficient. One of the impacts of a tariff, by the way, is an increase in the market price of a good. Anyone saying that a tariff won’t have negative effects on consumers is just plain wrong.
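
For intuition, here’s a minimal numeric sketch of that modeling, with made-up linear demand and a small-open-economy simplification (imports at a fixed world price, domestic supply ignored); the triangle it computes is surplus recovered by no one:

```python
# Minimal tariff deadweight-loss sketch (all curves and numbers made up).
def quantity_demanded(price):
    return 100 - price      # illustrative linear demand

world_price = 20            # price of the imported good under free trade
tariff = 10                 # per-unit tariff

q_free = quantity_demanded(world_price)            # 80 units traded freely
q_tariff = quantity_demanded(world_price + tariff) # 70 units under the tariff

# Consumers pay more on the units still bought (that part becomes tariff
# revenue), but the units no longer bought at all are a pure loss:
deadweight_loss = 0.5 * tariff * (q_free - q_tariff)
print(f"Quantity falls {q_free} -> {q_tariff}; deadweight loss = ${deadweight_loss:.0f}")
```

Even if every dollar of tariff revenue were rebated to consumers, the trades that stopped happening are simply gone, which is why the inefficiency is unequivocal.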

The excellent open source encrypted messaging app Signal is so useful, it has to avoid having its application servers blacklisted by oppressive regimes. It uses a workaround of routing encrypted connections through a content delivery network, in this case, Google itself. Moxie Marlinspike, the creator of Signal, says “Eventually disabling Signal starts to resemble disabling the internet.”

One of the biggest problems with Trump I pointed out last year was the total unknown of his policies. He keeps changing his mind on almost every issue, and when he does speak, he wanders aimlessly, using simplified language that is more blunt and less precise. Fitting right into this pattern, Trump has taken to Twitter for much of his communication, even since winning the election. Twitter is a short and imprecise tool for communication, and this New York Times article shows just how much uncertainty Trump creates with his tweets.

Related: Bill Perry is terrified of increased nuclear proliferation. The article is a little alarmist, but it’s worth remembering that nuclear war was a real threat just 30 years ago. It should not be taken for granted that nuclear war will never occur, and Trump seems the most likely of the post-Soviet presidents to get involved in a confrontation with a major nuclear power.

Scott Alexander reveals his ideal cabinet (and top advisers) if he were president. It’s not only remarkably better than Trump’s, it’s probably better than any cabinet and appointees we’ve ever had (Bernie Sanders notwithstanding). Highlights include: Alex Tabarrok as head of the FDA, Scott Sumner as Chairman of the Fed, Charles Murray as welfare czar, Peter Thiel as Commerce Secretary, and Elon Musk as both Secretary of Transportation and Energy.

Speaking of cabinets, George Will details just how out of touch soon-to-be-Attorney General Jeff Sessions is, recounting his 2015 defense of unlimited civil asset forfeiture, a procedure by which the government takes cash and property from civilians who have been convicted of no crime and therefore have no recourse or due process protections. Don’t buy into the story that all of Trump’s appointees are horrific and terrifying; there is a gradient of his cabinet appointments depending on their authoritarian tendencies and the importance of their department, and unfortunately Jeff Sessions as Attorney General is by far the most concerning.

Missed this earlier last year, and worth keeping in mind as BuzzFeed gets hammered this week over their publishing of an unverified dossier: apparently the FBI already has daily aerial surveillance flights over American cities. These seem to be for general investigative use, not vital national security issues: “But most of these government planes took the weekends off. The BuzzFeed News analysis found that surveillance flight time dropped more than 70% on Saturdays, Sundays, and federal holidays.” 

Speaking of BuzzFeed and the crisis of “fake news”, which itself may not even be anything compared to the crisis of facts and truth itself, Nathan Robinson has an excellent take on the matter (very long read). With the lack of facts in the election, the media and Trump’s critics generally have to be twice as careful to rebuild trust in the very concept that objective truth exists and can be discussed in a political context.

Government regulations have hidden, unexpected costs. These regulations hurt people regardless of their political affiliations, as a Berkeley professor found out when trying to evict a tenant that refused to pay rent. California’s rather insane tenant laws mean that serial rent-cheaters can go from place to place staying rent free for months at a time.

I’ve often thought about the right ordering of presidents from best to worst, taking into account a libertarian, liberty-promoting approach. One difficulty is the non-comparability of presidents separated by centuries. However, this blog post from 2009 actually does a nice job of scoring the presidencies. I don’t agree with each one, but it’s a rough categorization that makes sense. It even gave me an additional appreciation for Ulysses Grant, who I figured was mostly president by the luck of being the general in charge when his army won the Civil War. Other highlights include William Henry Harrison scoring 11th, thus beating over three quarters of the competition despite only being in office for a month. I feel like the list could have found worse things to say about Andrew Jackson, and in general I agreed with it more the closer I got to the end.

Jeffrey Tucker at FEE has a nice article about the difference between spreading ideas and actual economic production of goods. His thesis is that we have much less control over the development of ideas than we do over the production of normal rivalrous goods. And since libertarians are pretty solid at grasping the idea that the production of goods cannot be controlled from the top down, we should also acknowledge that top-down approaches to developing ideas are even more preposterous, especially in the digital age of decentralized information. I’ve thought about this a fair amount, considering I like blogging but I’m well aware few people read this blog. The simplest way to restate Tucker’s point is that you need good ideas more than good distribution. I don’t know if that’s an accurate take, but certainly good ideas are the single most important part of spreading your ideas.

There’s a saying on the internet that “Democracy is two wolves and a lamb voting on what to eat for lunch”. The 2016 election is an excellent demonstration of just how badly democracy can fail, but what are our alternatives? How about Futarchy? This is Robin Hanson’s idea to improve public policy: “In futarchy, democracy would continue to say what we want, but betting markets would now say how to get it. That is, elected representatives would formally define and manage an after-the-fact measurement of national welfare, while market speculators would say which policies they expect to raise national welfare.” Let’s hold a referendum on it; those seem to work out.

Bitcoin has been on the rise in recent months. So have other cryptocurrencies. But rather than focus on just the price of the cryptocurrency, why not look at the total market valuation of those currencies? Sure, you might have heard that Bitcoin was up to $1000 again recently, but did you know that its total market cap is ~$13 billion? At the very peak of the Bitcoin bubble in 2013, all Bitcoins together were valued around $13 billion, but only for a matter of days. This time Bitcoin has kept that valuation for over 3 weeks. With more markets and availability, Bitcoin is becoming a real alternative for people whose national currencies have failed them.
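
Market cap is just price times circulating supply, so the figure is easy to sanity-check; both numbers below are rough early-2017 approximations, not exact quotes:

```python
# Sanity-checking the ~$13 billion market cap figure (rough numbers).
price_usd = 800            # approximate BTC price after pulling back from ~$1,000
circulating_supply = 16e6  # roughly 16 million coins mined by early 2017

market_cap = price_usd * circulating_supply
print(f"Market cap ~ ${market_cap / 1e9:.1f} billion")  # ~ $12.8 billion
```

The same arithmetic explains why a coin’s price alone says little: a currency with a high price but a tiny supply can still be a much smaller market.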

Postlibertarian throwback: When Capitalism and the Internet Make Food Better. A reminder that despite the ongoing horrors of government we are witnessing, the market is still busy providing better products and cheaper prices.


Leave a comment on the official reddit discussion thread.

Links 2016-12-2

Added the awesome Conor Friedersdorf and Megan McArdle to the Libertarian Web Directory.

First, all the Trump-related links:

I’ve been saying this for a while, but Robby Soave at Reason articulates why the left bears a lot of the blame for Trump due to their aggressive pushing of political correctness.

Slate Star Codex talks about a similar problem, crying wolf about Trump. It was even mentioned in Episode 33 of The Fifth Column.

Tyler Cowen on why Trump’s plan to keep jobs in the US is pretty awful.

Nonetheless, also read why Bryan Caplan isn’t freaking out about Trump.

The Nerdwriter, on YouTube, makes the case that Trump is a magician, using the media to distract our attention from where it should be.  Maybe I should stop reading about him so much.

Now, other related political posts not explicitly about Trump:

Megan McArdle had a good piece talking about bridging the gap between the “right-wing media” and the regular “media”. If you want to bring conservatives back into the mainstream, you have to stop politicizing everything and stop hiring only left-leaning news reporters who want to cover the local food movement and how evil Walmart is.

Related: Bryan Caplan discusses that if you just talk about how great cohesion is and despair at the political divisions we see, you’ll never get outgroups to come back in, because to them you sound like you’re telling them to conform. You have to actually unilaterally reach out to them and show them respect despite how much you dislike them.

Philosopher Nick Land argues that contrary to the notion that fascism as a societal system has been largely dead since WWII, in fact almost all political philosophies in the world today are largely rooted in fascism, including the major political philosophies of the United States, progressivism and conservatism.

What is the most prominent social science debate happening at Peking University today? At the most prestigious university in still-technically-Communist-Party-controlled China, it isn’t Maoism vs Stalinism; it’s planned economies vs markets.

Scott Sumner has a hopeful take on fiscal policy and specifically reducing government budgets.

Here is a terrifying story about the unintended consequences of overcriminalization and deference to state power. A woman with a previous arrest for prostitution was picked up and charged with “loitering for the purposes of prostitution”. Loitering is not a criminal activity, but can be applied to anyone standing still. Loitering for the purpose of doing something else is quite speculative. Of course, prostitution itself is already a criminalization of a voluntary transaction, so now anyone who has been arrested for a voluntary interaction other people find distasteful cannot stand anywhere without being accused of a crime. In fact, if cops think women are dressed too lewdly, they can also be arrested for intent to prostitute themselves. Since this woman is relatively poor (thus the loitering while waiting for a ride outside of a trailer park), she’s forced to plead guilty to the charges and go to jail for 2 months.

Related: Adam Ruins Everything this week is about how important prostitution was to settling the American west, and, interestingly, empowering women in that region of the country far before they had similar rights in the east.

Why build higher? This video takes a look at the history of skyscrapers, but also delves into important areas of urbanization and how humanity will live in the future. Cities are more and more important to human civilization, and improving urban areas to exploit efficiencies of concentrated living is one of the most important challenges we face.  

Crash Course has a 10 minute intro video for the philosophy of utilitarianism. Since that’s an important building block for many of the arguments on this blog, I would definitely recommend it.  

Finally, to wrap up the short videos category, Learn Liberty has a great 5 minute video on one of the most fundamental economic concepts: Opportunity Costs. Every choice we make has a hidden cost of what could have been done with those resources and time. Ignoring those opportunity costs can lead to paradoxical ideas like the Broken Window Fallacy.

For the best coverage of the death of the dictator Fidel Castro, this long piece at the Miami Herald is the most comprehensive take available.

Postlibertarian throwback: read about the politics of outrage back in 2014. Unfortunately we have…not fixed our focus on outrage yet. 2017 and the age of Trump isn’t looking so great either.


Comment on the official reddit thread.

The Election Doesn’t Change Trump’s Bad Policies

The Trump Issues

In the Trump election aftermath, many on the left have discussed how best to approach this new challenge. Many have talked about trying to understand the concerns of Trump voters. This is a worthwhile undertaking. The people who voted for Trump have several worries spanning cultural differences, economic hardship, and perhaps even existential fear for the country as a whole. First, let’s go over those concerns.

The first, and perhaps most important, concern for Trump voters was that the alternative was Hillary Clinton. This blog had an extensive discussion of Hillary’s shortcomings, including her flouting of the law, her foreign policy, her defense of Obamacare, her tax increases, and her slant towards government power in every sphere. I would argue some of these flaws are also present in Trump, but many Trump voters could at least hope the Trump unknown would deliver something more to their liking than the known failure of a Hillary presidency.

Granting all of Hillary’s problems, why did they think a Trump unknown was worth risking? Broadly, one area we did know where Trump stood was on the culture wars, and for that he was initially hailed as a hero against the left. I think the left has to shoulder a huge part of the blame here, because people have been trying to tell progressives their culture is intolerant for years.  See: Scott Alexander on tribalism and tolerance in 2014, Clarkhat on Gamergate in 2014, this blog last year, another blog, and Robby Soave did a good job summing it up after the election. I don’t think there’s much to add here.

On economic hardship, the more stereotypical Trump supporters (Trump won older voters, rural voters, and uneducated voters) have something to complain about as well. If you want to be depressed, please read this ridiculously long piece called “Unnecessariat” (or skim this American Conservative piece for some key points). The takeaway is that Trumpland is hurting because it has been economically abandoned, not just culturally isolated. With services dominating the economy, the prospects for those living outside of cities have diminished as well. We are seeing increased suicides, drug addiction, and hopelessness in these areas.

Finally, combine these worries with media that feeds panic about disasters and internet echo chambers, and you get stark existential panic about entirely separate threats.

Cracked had an interesting piece on Trumpism and how we got here, and what caught my eye was the idea of urban culture slowly making its way out to the country. Cracked claims that older, less educated, rural folks saw the abandonment of Christian traditional culture in the hedonistic wonderlands of coastal “liberal” cities and thought there would be dire consequences for the nation. Lo and behold, they see: “Chaos…Blacks riot, Muslims set bombs, gays spread AIDS, Mexican cartels behead children, atheists tear down Christmas trees.”

The Trump Solutions

The problem is that many of these perceptions are just wrong. We are healthier, less likely to be murdered, and safer than ever before. Maybe we blame clickbait media, maybe we blame gullible people for believing it, but living in cities just isn’t that scary.

Last year, I met an acquaintance who had grown up in a smaller town in the South, but was now moving to another state near a major urban center. He found out I had grown up in his destination city, and despite having just met 5 minutes prior, he peppered me with bizarre questions about whether I thought it was safe to live there. I assured him that it was a major metropolitan area where millions live and work without a problem every day. He made it seem like he was moving to Afghanistan. Look, I’m sure it was pretty hairy to live in New York/Miami/Chicago/LA in the 80s, but crime rates have collapsed over the last 25 years. The number of people murdered in the first season of Daredevil in Hell’s Kitchen likely exceeds the total number of murders in all of Manhattan last year. Our perspective is all off. And if we imagine that law and order is collapsing, our solution is going to vastly over-correct.

That’s part of a bigger point I’ve already made: Trump’s political victory doesn’t mean his supporters have any good ideas about improving the country, or even their own situations. It just means enough people thought there were enough problems for more voters to cast a ballot for Trump over Hillary in Michigan, Wisconsin, and Pennsylvania. For instance, I think there is a real basis for complaining about the intolerant left-wing culture that has grown more bold over the last 10 years. But the Trump response has been his own version of intolerance, just copying the left and doing nothing to improve the situation.

On the economy, Trump’s plan is at best a mixed bag. Experts are mediocre at predicting economic growth, so figuring out the best economic policies to help growth may also be difficult. Trump and his supporters might blame globalism for their woes, but putting tariffs on imports and striving to shut down commerce with some of our largest trade partners will hit the poor the hardest. Price increases on low cost imported products will harm low income earners much more than upper middle class households with savings and easier means of substitution. Maybe in the long run this will spur some industrial investment, but I think it’s just as likely to speed up automation. In 4 years, many economic problems scaring Trump voters could easily be exacerbated.

More to the point, the government can’t reverse the decline of manufacturing jobs in the United States. The world has changed, and short of seizing control of the economy via a five-year plan, the government can’t change it back. Manufacturing jobs peaked in the early 80s (BLS), and while globalization has accelerated the trend, it didn’t start it. Of course, “globalization” isn’t really an entity either; the decisions that changed where firms do business were made by millions of individuals looking at cost-benefit analyses and comparing prices. The government didn’t say “move these factories to Mexico”; the government said “technology is making it easier to communicate and do business in other countries, so we will reduce taxes and import quotas to make it easier for businesses and shareholders to do things they already want to do”. Trump can’t come back and order companies to make bad business decisions unless he wants a Soviet-style command economy with capital controls.

The United States has such a strong economy due to many factors, including its large, diverse, and skilled working populace, an abundance of natural resources, heavy investment in research and capital, and strong and interconnected financial markets. Our consumer market is the largest in the world, our trade dominates the globe in both goods and services. International economic institutions from the New York Stock Exchange to the World Bank and International Monetary Fund are based in (and often dominated by) the United States.

Trump’s push to cut us off from strong trade ties will certainly harm the American centrality to the global economic system. Obviously, to many Trump fans, this is a bonus, not a problem. But long term decline in American trade would likely be connected to more sluggish growth as native industries are protected from competition; for example, Apple has pushed innovation in the smartphone market since 2007 which radically changed the status quo of what phones could do. It has had ripple effects throughout the economy as the spread of widely accessible powerful mobile computers has changed everything from transportation to social interaction to navigation and even shopping. But we should remember that the smartphone revolution was made possible by cheap global supply chains, and without them, we are likely to see stagnation.

And those older, rural, less educated Trump voters? No one is going to want to hire them unless the economy is clicking and demanding more workers. The sluggish, competition-free growth bred by protectionist policies won’t help them.

Maybe Trump’s tax cuts and deregulation pushes will jumpstart the economy enough to overcome his bad trade polices. It’s possible, but I’m not betting on it. If it doesn’t work, in four years we will simply have the same economic problems just with tons more debt. That’s a big risk he’s taking. And it’s made more risky by Trump’s plan to expand the police state and start deporting at least two million people  (not to mention increasing military spending from the $500 billion a year we spend already).  The ACLU has gone into detail about the difficulties we face if Trump attempts to carry out his campaign promises. It’s very difficult to deport millions of people without doing away with probable cause; how do you find and arrest only the people here illegally? If they aren’t caught by the police while engaged in crime, then by necessity the police must come to them, requiring sweeps of entire residential areas, stopping people with no probable cause at all. At the very least this is grossly expensive, and more likely it will harass and catch thousands of innocent American citizens in a dragnet. And none of this even touches on registration of Muslims, continued mass surveillance, and use of torture.

In four years if the economy hasn’t improved much, debt has accumulated, and the police state has been vastly expanded, will Trump admit his policies haven’t worked? This seems unlikely as Trump has never really apologized for any stances he’s taken or mistakes he’s made. It seems far more likely that he’ll use this built up police state to harass his political enemies.

If Trump is willing to place trade barriers and dramatically reduce the world-leading $2.4 trillion worth of goods imported, how much will he be willing to use government subsidies to pay companies to “invest” in the United States? Does this sound like government direction of the economy? If things aren’t going well, will he seize more control of the economy?

I should note, I haven’t even brought up Trump’s extensive conflicts of interest, where representing American diplomatic interests may run counter to his profit-seeking ones. I also haven’t mentioned that someone who is extremely thin-skinned will be in charge of the nuclear launch codes. Many of the concerns of Trump voters don’t make much sense, many of the policy solutions of Trump and his voters are bad and would make things worse, and on top of that, Trump is irresponsible, incompetent, authoritarian, and many other things I’ve argued before. Continued opposition to Trump’s policies is vital over the next four years.


Comment on the official reddit thread.

The Age of Em

I.

I recently had the opportunity to see George Mason Professor Robin Hanson talk about his book, The Age of Em. I also was able to work my way into having a long conversation with him after his presentation.

For those who don’t know, it’s perhaps the strangest book you’ve ever heard of. Hanson looks to project forward in time when the technology exists to easily upload human brains into computer simulations. These “emulated” brains will have certain characteristics from residing in computer hardware: they can make copies of themselves, save versions of themselves for later, or delete versions of themselves. They will even be able to run faster or slower than normal human brains depending on what hardware they are running on. Hanson spends the book working through the implications of this new society. And there are a lot of fascinating insights.

Hanson discusses the pure physics of this world, as suddenly speed-of-light delays in communication mean a lot; if an em is running at a million times human speed, then a bad ping of 50 ms feels the way a nearly 14-hour delay for a message would feel to us today. This leads to ems clustering in very close physical locations, concentrating them in large cities. Their economy also grows much faster than ours due to the rapid speed at which their brains are thinking, although it may be constrained by how quickly their hardware can be physically manufactured. The economy also quickly moves to subsistence wages, as even the most productive members of society can have their brains copied as many times as needed to fill all roles. Elon Musk is no longer a one-of-a-kind genius, and in fact anyone who cannot compete with an Elon Musk copy in their job would likely be cast aside. For a more detailed summary and examples of bizarre ideas, I recommend Part III of Scott Alexander’s post on the book.
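To make that latency arithmetic concrete, here’s a minimal sketch of the conversion. The million-fold speedup is the reference point used above; the exact numbers are illustrative, not from the book:

```python
# Convert a real-world network delay into the subjective time an em
# experiences, given an assumed speedup over a biological brain.

def subjective_delay(real_seconds: float, speedup: float) -> float:
    """How long a real-world delay feels to an em, in subjective seconds."""
    return real_seconds * speedup

SPEEDUP = 1_000_000  # assumed: an em running a million times human speed
PING = 0.050         # a bad 50 ms network round trip

felt = subjective_delay(PING, SPEEDUP)
print(f"{felt:,.0f} subjective seconds = {felt / 3600:.1f} subjective hours")
# 50,000 subjective seconds = 13.9 subjective hours
```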

II.

In that blog post, Scott goes on to discuss in Part IV the problem of value drift. Hanson does a good job pointing out that past human societies would not have approved of what we now consider acceptable. In some areas, the change in values is stunning. Merely 10 years ago, many had reservations about gay marriage. Merely 50 years ago, many Americans had serious reservations about interracial marriage. On the scale of humans’ existence as a species, the amount of time we have accepted that people have the right to worship their own religion is minuscule. The section of human history where subsistence existence was not the only option is likewise small. Professor Hanson told our group that by far the most common reaction to his painting of the future was rejection.

I even asked him specifically about it: Hanson had stated several times that it was not his job or intention to make us like or hate this future, only to know about it. I pointed out that many AI researchers were very concerned about the safety of artificial intelligence and what it might do if it hits an intelligence explosion. To me, there seems to be little difference between the AI intelligence explosion and the Em economy explosion. Both would be human creations, making decisions and changing their values rapidly, at a pace that leaves most “normal” traditional physical humans behind. If many of the smartest people studying AI think that we should do a lot of work to make sure AI values line up with our own, shouldn’t we do the same thing with Ems? Hanson’s answer was basically that if we want to control the value systems of our descendants thousands of mental years in the future, well, good luck with that.

Scott in Part IV of his review demonstrates the problem with just allowing this value drift to happen. Hanson calls the era we live in the “dream time”, since it’s evolutionarily unusual for any species to be wealthy enough to have any values beyond “survive and reproduce”. For most of human history, there wasn’t much ability to build cities or share knowledge because too many resources were focused on survival. Today, we have become so productive and intelligent that humans have elevated Earth’s carrying capacity high above the number of people we have. We don’t have to spend all our resources on survival, and so we can come up with interesting philosophical ideas about morality and the meaning of life. We’ve also harnessed this evolutionary competitiveness to fuel our market economy, where the determiner of what survives isn’t nature but human desires. Unfortunately, when you switch to the Age of Em, suddenly the most productive part of the economy is plunged back into a Malthusian trap with all resources going to keep the Ems alive. Fulfilling human wants may be what drives the economy, but if there are other pressures on Ems, they will be willing to sacrifice any values they have to keep themselves alive and competitive. If the economy gives up on fulfilling human demand, I wouldn’t call that a drift in values; I’d call that an absence of values.

If we live in the dream time, then we live in a unique situation where only we can comprehend and formulate higher morality and philosophical purpose. I think we should take advantage of that if we can.

III.

Hanson’s observations given his assumption that the Age of Em will happen are excellent, considering he is predicting far into the future. It’s likely things won’t work out exactly this way, as perhaps a single company will have a patent on brain scanning for a decade before the market really liberalizes; this could seriously delay the rapid economic growth Hanson sees. He acknowledges this, and keeps his book more of a prediction of what will happen if we don’t oppose this change. I’m not sure how far Hanson believes that regulation/intellectual property will not be able to thwart the age of em, but it seems that he’s more confident it will not be stopped than that it will be. This may be an economist mistake where regulation is sort of assumed away as the realm of political science. It’s not unprecedented that weird inefficient institutions would last far into the future. Intellectual property in the digital age is really weird, all things considered. Software patents especially seem like a way to patent pure logic. But there are others: banking being done with paper checks, daylight savings time, the existence of pennies, and, of course, Arby’s. There are also plenty of examples of new technologies that have evolved much faster than regulation, like supplements, e-commerce, and ride-sharing. It remains to be seen what brain emulations will be.

There is also the possibility that emulated brains won’t be the next big shift in human society. Hanson argues that this shift will rival the agricultural revolution and the industrial revolution. That makes a lot of sense if brain emulation is indeed the next big change. Eliezer Yudkowsky (and Scott) think this is incorrect and that artificial intelligence will beat it. This seems like a real possibility. Scott points out that we often come up with technological equivalents of human biology far before actually emulating biology, mostly because biology has figured things out accidentally via evolution and is thus often needlessly complicated. For example, aircraft usually fly via fixed-wing aerodynamics, not by flapping. It seems likely that we will reach human-level problem solving via software rather than via brain scanning. Even if we don’t, software could plausibly optimize a simulation based on a preliminary brain scan too rough to produce a proper hardware emulation; such software-assisted reconstruction could experiment with neuron simulation and create an emulation better designed and more specialized than any directly scanned human brain.

It also seems possible that other things could happen first that change human history: very expensive climate change, a crippling pandemic (antibiotic resistance), genetic and epigenetic engineering, and of course some technological revolution we haven’t even imagined (the unknown). Certainly, if we assume continued economic growth, brain emulation, artificial intelligence, and genetic engineering all seem like likely candidates to transform humanity. Hanson thinks AI research is really overrated (he used to be an AI researcher) and isn’t progressing very fast. But he was an AI researcher about 25 years ago, and we’ve seen some pretty impressive improvements in machine learning and natural language processing since then. To be fair, we’ve also seen some improvement in brain emulation technology. Genetic engineering was hailed as the next revolution in the 1990s, but has floundered ever since. Recently, though, the use of CRISPR in genome engineering has dramatically increased the feasibility of actually picking and choosing specific genes. Any of these could drastically change human society. Perhaps any genetic improvements would be overshadowed by brain emulation or AI. I guess it depends on the importance of the physical world vs the digital one.

Of course, not all changes would come from improved technology. There’s a significant risk of a global multi-drug-resistant pandemic. Our overuse of antibiotics, the difficulty of getting everyone to stop overusing them, and our highly integrated world mean we’ve created an excellent scenario for a superbug to appear and spread. Anything resembling the 1918 Spanish Flu epidemic could be devastating to the world population and to economic growth. Climate change poses a similar risk to both life and the economy. If either of these were to happen, it could significantly deter the Age of Em from occurring, or at least delay it along with a lot of the progress of our civilization. And that’s not even mentioning freak natural disasters like coronal mass ejections.

Overall, predictions are very difficult and if I had to bet, I’d bet that the next big change in human civilization won’t be emulated brains. A good competitor is definitely artificial superintelligence, but when you add in genetic engineering, natural disasters, drug resistant bacterial epidemics, and so on, you have to take the field over brain emulations.

Nonetheless, this book really does make you think about the world in a different way with a perspective both more global and more forward looking. It even makes you question what it means to be human. The ins and outs of the 2016 election really fade away (despite my continued interest and blogging). Political squabbling doesn’t compare to the historical trends of human civilization and the dawn of transhumanism.


Comment on reddit.

First They Came For The Data Analysts, And I Did Not Speak Out…

Data storage is cheap, and odds are good that any information you store today – if you care just a little about preserving it – can last well beyond your own lifespan. If you’re an intelligence agency and you’re collecting all of the surveillance information you possibly can, the easiest part of your job is probably siloing it so that you’ll have it for hundreds of years. If you’ve got any kind of budget for it, it’s easy to hold on to data practically indefinitely. So, if you’re the subject of surveillance by any of that sort of intelligence agency, all sorts of information collected about you may exist in intelligence silos for decades to come, probably long after you’ve forgotten it. That information exists, for practical purposes, effectively forever.
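To put “cheap” in perspective, here’s a back-of-envelope sketch. Every constant below (price per terabyte, data volume per person, replication factor) is a rough assumption for illustration, not a sourced figure:

```python
# Back-of-envelope: the cost of archiving personal records indefinitely.
# Every constant is a rough illustrative assumption, not a sourced figure.

COST_PER_TB = 25.0        # assumed dollars per terabyte of raw disk
GB_PER_PERSON_YEAR = 1.0  # assumed: messages, logs, location trails
REPLICAS = 3              # keep multiple copies for durability

def archive_cost(people: int, years: int) -> float:
    """Dollars of raw disk to store everyone's records for the period."""
    terabytes = people * years * GB_PER_PERSON_YEAR / 1000 * REPLICAS
    return terabytes * COST_PER_TB

# A decade of records on a million people, stored in triplicate:
print(f"${archive_cost(1_000_000, 10):,.0f}")  # $750,000
```

Under those assumptions, a decade of records on a million people costs well under a million dollars of raw disk. For an agency with any kind of budget, that’s a rounding error.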

Suppose that your nation’s intelligence agency decides to collect information in bulk on every citizen it can, including you, and you judge that they are responsible and deserving of your trust, so you don’t mind that they are gathering this information about you and storing it indefinitely. Suppose that they actually are deserving of your trust, and the potentially massive amount of information that they collect and silo about you (and everyone else) is never abused, or even seen by a human analyst. Instead it sits in some massive underground data center, occasionally browsed through by algorithms combing for actual, specific security threats.

Trustworthy governments seem to be pretty stable governments, which is fortunate for people lucky enough to be governed by them. Year after year, there is a very high likelihood that the government will still be pretty great. But that likelihood can never be 100%, which is unfortunate because when you have a non-zero likelihood of something happening and you then compound it over a time scale like “effectively forever”, that puts you in uncomfortable territory. It’s hard to anticipate what sort of threats might exist five years from now, and harder to anticipate what might happen in 20. You have no idea what sort of world you’ll live in 40 years from now, but there are good odds that the extensive information siloed away today will still be around.
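The uncomfortable territory is just compounding arithmetic. As a minimal sketch, assume (purely for illustration) a 1% chance in any given year that the government stops being one you’d trust with the silo:

```python
# Chance of at least one "bad government" event within n years, given an
# assumed constant annual probability. The 1% figure is purely illustrative.

def cumulative_risk(annual_p: float, years: int) -> float:
    return 1 - (1 - annual_p) ** years

for horizon in (5, 20, 40, 100):
    print(f"{horizon:>3} years: {cumulative_risk(0.01, horizon):.1%}")
# Prints roughly 4.9%, 18.2%, 33.1%, and 63.4%.
```

A risk small enough to ignore on a five-year horizon compounds to nearly a two-in-three chance over a century, which is well within the lifespan of the data.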

When I read Scott Alexander’s review of Manufacturing Consent, it was apparent that throughout the 20th century and clear into the present day, places that were stable at one point in time become unstable, and death squads followed shortly after. The Khmer Rouge killed about 25% of the population of Cambodia from 1975 to 1979. 1975 is too close to the present to comfortably say that we exist in a modern world where we don’t have to worry about genocide and mass-murdering states.

We have no idea what the mass-murderers of the distant future will care about. Many of them will probably have fairly commonplace criteria for the groups they want to purge, based on such things as race, religion, cultural heritage, sexual orientation, and so on. But some will devise criteria we can’t even begin to imagine. In the middle of the 19th century, only a tiny minority of people had even heard of communism, but a generation or so later that doctrine caused the death of millions of people in camps, wars, purges, and famines. Perhaps we’ve exhausted the space of ideologies that are willing to kill entire categories of people, and maybe we’ve identified every category of people that someone could single out and decide to purge. But are you willing to bet money, much less your life, on the prediction that you won’t belong to some future class of deplorables?

In some of the purges of history, people had a chance to pretend not to be one of the undesirables. There’s no obvious sign that a Pear Party-affiliated death squad can use to identify a member of the Pineapple Party when the Pineapple Party government is toppled, so long as the Pineapplists know that they’re being targeted by Pear partisans and now is the time to scrape off their Pineapple Party ’88 bumper stickers. High-profile Pineapplists have no option but to flee the country, but the average member can try to lay low through the ensuing sectarian violence. That’s how it used to be, at least. But today people can scroll back 5 years in your Facebook profile and see that you were posting pro-Pineapple links then that you’ve since forgotten.

But open support of the Pineapple Party is too obvious. The undesirables of the future may have enough foresight to cover their tracks when it comes to clear-cut evidence like that. But, returning to the trustworthy intelligence agency we’ve mandated with finding people who want to harm us but also don’t want to be found, there are other ways to filter people. Machine learning and big data analysis are mixed bags. If you really, really need them to preemptively identify people who are about to commit atrocities, you’re probably going to be let down. It’s hard to sift through immense streams of data to find people who don’t want to be found. Not impossible, but machine learning isn’t a magic wand. That said, people are impressed with machine learning for a reason. Sometimes it pulls a surprising amount of signal out of what was previously only noise. And we are, today, the worst at discerning signal from noise that we will ever be. Progress in computational statistics could hit a wall next year, and then we can all temper our paranoia about targeted advertisements predicting our deepest, darkest secrets and embarrassing us with extremely specific ad pitches when our friends are looking over our shoulders. Maybe.

But perhaps it’s possible, if you’re patient and have gigantic piles of data lying around, to combine text analysis, social graph information, and decades-old Foursquare check-ins in order to identify closeted Pineapple Party members. And maybe it requires a small army of statisticians and programmers to do so, so you’re really not worried when the first paper is published that shows that researchers were able to identify supporters of Pineapplism with 65% accuracy. But then maybe another five years goes by and the work that previously took that small army of researchers months to do is now available as an R package that anyone with a laptop and knowledge of Statistics 101 can download and use. And that is the point where having gigantic piles of data siloed for a practically infinite amount of time becomes a scary liability.

The scenario where Pearists topple the government, swarm into the intelligence agency’s really big data center, and then know exactly where to go to round up undesirables might be fairly unlikely on its own. But there is actually a much larger number of less-obvious opportunities for would-be Pearist mass-murderers. Maybe someone finds a decades-old flaw in a previously trusted security protocol and Pear-affiliated hackers breach the silo. Maybe they get information from the giant surveillance silo of a country that, now that we think of it, no one should have sold all of that surveillance software to. Maybe the intelligence agency has a Pearist mole. Maybe the whole intelligence apparatus was Pear-leaning the whole time. Maybe a sizeable majority of the country elects a Pearist demagogue who promises to round up Pineapplists and put them in camps. This sort of thing isn’t behind us.

The data silo is a threat to everyone. In the long run, we can’t anticipate who will have access to it. We can’t anticipate what new category will define the undesirables of the future. And those unknowing future undesirables don’t know what presently-inconspicuous evidence is being filed away in the silo now to resurface decades in the future. But the trend, as it exists, points to a future where large caches of personal data are a liability because future off-the-shelf machine learning tools may be as easy to use and overpowered relative to machine learning’s bleeding edge today as our smartphones are compared to the Apollo Guidance Computer. The wide availability of information on the open internet might itself be dangerous looked at through this lens. But if your public tweets are like dry leaves accumulating in your yard and increasing the risk of a dangerous data-fueled-pogrom wildfire, then mass surveillance silos are like giant rusty storage tanks next to your house that intelligence agencies are pumping full of high-octane petroleum as fast as they can.


Comment on reddit.

Picture credit: Wikimedia Foundation Servers by Wikipedia user Victor Grigas, licensed under CC-BY-SA-3.0.