How can we use our resources to help others the most?

This is the fundamental question of the Effective Altruism movement, and it should be the fundamental question of all charitable giving (and indeed, this post is largely copied from my similar post last year). I think the first fundamental insight of effective altruism (borrowed from Peter Singer) is that the right donation can change someone’s life, while the wrong donation can accomplish nothing. People tend not to think of charity in terms of “investments” and “payoffs”, yet GiveWell estimates that you can save a human life for on the order of $3,000.

Many American households donate that much to charity every year. Simply put, if the charities we donate to don’t try to maximize their impact, our donations may not help many people at all, when they could be saving lives.

This post is a short reminder that we have well-researched empirical evidence that you can make a difference in the world! The EA movement has already done very impressive work on how we might evaluate charitable giving, why the long-term future matters, and what the most important and tractable issues might be.

Apart from the baseline incredible giving opportunities in global poverty (see GiveWell’s top charities), the long-term future is an important and underexplored area of research. If humanity survives for a long time, then the vast majority of humans who will ever exist will live in the far future. Taking steps to ensure their existence could have massive payoffs, and concrete research in this area, such as work to avoid existential risk, seems very important and underfunded.

I write this blog post not to shame people into donating their entire incomes (see Slate Star Codex on avoiding being eaten by consequentialist charitable impacts), but rather to ask donors to evaluate where they are sending their money within their budgets and to consider whether they are paying too high an opportunity cost. Alma maters and church groups are the most common recipients of American charitable giving, but the impact of donations in these areas seems much lower than that of donations to global poverty programs or the long-term future.

Finally, part of this blog post is simply to publicly discuss what I donate to and to encourage others to create a charitable budget and allocate it to problems that are large in the number of people they impact, highly neglected, and highly solvable. I thus donate about a third of my budget to GiveWell as a baseline, relying on evidence-backed research to save lives today. I then donate another third of my budget to long-term/existential-risk causes, where I think the impact is the highest but the tractability is perhaps the lowest. The primary place I’ve donated to this year is the Long Term Future Fund from EA Funds. I remain uncertain about the best ways to improve the long-term future, so anything I haven’t spent from this budget item I’ve sent to GiveWell as part of my baseline giving.

The last third of my budget is reserved for policy, which is where I believe the EA movement is currently weakest. I donate money to the Center for Election Science, especially after their impressive performance this year bringing approval voting to St. Louis. I also donate to the Institute for Justice, as they work on fairly neglected problems in a tractable way, winning court cases that improve civil liberties for U.S. citizens. Finally, I donated a small amount to the Reason Foundation, which publishes Reason magazine, as they are one of the larger organizations advocating big tent libertarian ideas today. It would be great to be able to move good policies to polities with bad institutions (e.g., many developing nations), but that problem seems highly intractable. It may be that the best we can do is create good institutions here and hope they are copied. I’m open to different ideas, but I am a relatively small donor, so I believe that taking risks with a portion of my donations in ways that differ from the main EA thrust is warranted.

There are many resources from the Effective Altruism community, and I’ll include several links to similar recommendations from around it. If you haven’t heard of EA charities, consider giving some of your charity budget to GiveWell or another EA organization you find convincing. If you don’t have a charity budget, consider making one for next year. Even modest annual donations can cumulatively save dozens of lives today, or perhaps hundreds in the far future!
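To make that arithmetic concrete, here is a minimal back-of-envelope sketch in Python (the $3,000-per-life figure is GiveWell’s rough estimate from above; the annual budget and time horizon are hypothetical numbers of my own, not anything GiveWell publishes):

```python
# Back-of-envelope: cumulative lives saved by a steady annual charity budget,
# using GiveWell's rough order-of-magnitude figure of ~$3,000 per life saved.
COST_PER_LIFE = 3_000   # USD, GiveWell's rough estimate
ANNUAL_BUDGET = 3_000   # USD per year, hypothetical donor
YEARS = 40              # hypothetical giving career

lives_saved = ANNUAL_BUDGET * YEARS / COST_PER_LIFE
print(f"~{lives_saved:.0f} lives saved over {YEARS} years")  # ~40 lives
```

Scale the numbers however you like; the point is that consistent, well-targeted donations compound into many lives over a giving career.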

Book Review: The Precipice

I have titled my annual blog post summarizing where I donate my charitable budget “How can we use our resources to help others the most?” This is the fundamental question of the Effective Altruism movement, which The Precipice‘s author, Toby Ord, helped found. For a while, Ord focused on figuring out how to fight global poverty, doing the most good for the worst-off people in the world. Now, he is focusing on the long-term future and existential risk.

The Precipice is fantastic. It’s incredibly well written, engaging, and approachable. It covers a lot of ground, from why we should care about the world, to what risks humanity faces in the future, to how we might think about tackling those risks, and what the future might look like if we succeed.

The Precipice eloquently interweaves fairly philosophical arguments with more empirical analysis of the sources of existential risk, and it tries to statistically bound them. The book tackles a pretty grim topic, the potential end of humanity, but it does so with an eminently reasonable approach. The complexities of philosophy, science, probability, epidemiology, and more are all brought into the narrative, yet made easily digestible for any reader. I honestly wish Toby Ord could teach me about everything, his writing is so clear and engaging.

The main discussion never overwhelms with technical details, but if you ever find a point interesting, even the footnotes are amazing. At one point I came up with a counterpoint to Ord’s position and wrote it down in my notes, only to find that the next several paragraphs addressed it in full, and that there was a whole appendix going into more detail. Honestly, this will be less of a book review and more of a summary with a couple of final thoughts, because I think this book is not only excellent, but its content is perhaps the most important thing you can read right now. You are welcome to read the rest of this blog post, but if you have found this compelling so far, feel free to stop reading and order Toby Ord’s book posthaste.

Existential Risk

The consequences of 90% of humans on Earth dying would be pretty terrible, and given our relatively poor response to recent events, perhaps we should better explore other potential catastrophes and how we can avoid them. But The Precipice goes further. Instead of 90% of humans dying, what happens if 100% of us die out? Certainly that’s strictly worse (100 > 90), but in fact these outcomes are far apart in magnitude: if all humans die out today, then all future humans never get to exist.

There’s no reason we know of that would stop our descendants from continuing to live for billions of years, eventually colonizing the stars, and allowing for the existence of trillions of beings. Whatever it is that you enjoy about humanity, whether that’s art, engineering, or the search for truth, that can’t continue if there aren’t any humans. Full stop. As far as we know, we’re the only intelligence in the universe. If we screw up and end humanity before we get off this planet, then we don’t just end it for ourselves but perhaps we end all intelligent life for the remaining trillions of years of the universe.

Even though I was aware of the broad thesis of the book, I was continually impressed with just how many different angles Ord explores. Early on, he notes that while we might normally think of a catastrophic extinction event, like an asteroid impact, as the thing we are keen on avoiding, there are in fact several scenarios that would be similarly devastating. For example, if humanity were to suffer some calamity that did not kill everyone but left civilization stuck at pre-industrial technology, that would also preclude humanity from living for trillions of years and colonizing the stars. A 1984-style global totalitarian state would likewise halt humanity’s progress, perhaps permanently.

Ord also discusses the fundamental moral philosophy implications of his thesis. The natural pitch relies on utilitarian arguments as stated above: if humanity fails to reach its potential, this harms not only the humans currently alive but all future generations. Other arguments against extinction include a duty to our past and what we owe our ancestors, the rights of future generations who don’t get to decide for themselves, and the simple fact that we would lose everything we currently value.

The book categorizes three types of risk: natural, anthropogenic, and future risks. Natural risks include asteroids, supervolcanoes, and stellar explosions. These are pretty diverse topics, and Ord is quite informative. The story about asteroid risk was particularly fascinating to me. In the 1990s, the then-recent discovery of what killed the dinosaurs led Congress to task NASA with identifying the largest near-Earth asteroids to see if any posed a threat to Earth. Congress allocated some money, and NASA tracked every near-Earth asteroid over 10 km in diameter, determining that none poses a threat in the next century. NASA then moved on to 1 km asteroids and has now mapped the vast majority of those as well. The total cost of the program was quite small for the information provided: only $70 million.

This is one of the rare successes in existential risk reduction so far. Unfortunately, as Ord points out several times in the book, spending on foundational existential risk research currently totals no more than $50 million a year. Given the stakes, this is deeply troubling. For context, Ord points out that the global ice cream market is about $60 billion, some 1000x larger.

I’ll skip the other natural risks here, but the book bounds natural risk quite skillfully: humans have been around for about 200,000 years, or roughly 2,000 centuries, so if the per-century natural extinction risk were much higher than 1 in 2,000 (0.05%), our surviving this long would be very unlikely. Even then, we’d expect our technologically advanced civilization to be more robust to these risks than our ancestors were. Many species survived even the largest mass extinctions, and none of them had integrated circuits, written language, or the scientific method.
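Here is a minimal sketch of the survival arithmetic behind that bound, assuming (simplistically) a constant, independent extinction risk per century:

```python
# Ord-style bound on natural extinction risk: humanity has survived roughly
# 2,000 centuries, so a constant per-century risk r implies a survival
# probability of (1 - r) ** 2000. Risks much above ~0.05% per century would
# make our track record a wild fluke.
for r in (0.0005, 0.005, 0.05):  # 0.05%, 0.5%, 5% per century
    p_survive = (1 - r) ** 2000
    print(f"risk {r:.2%}/century -> P(surviving 2,000 centuries) = {p_survive:.2g}")
```

At 0.05% per century, surviving this long has about a 37% chance; at 0.5%, roughly 1 in 20,000; at 5%, it is effectively impossible.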

That doesn’t mean that all risk has declined over time. On the contrary, according to Ord, the vast majority of existential risk is anthropogenic in origin. Nuclear weapons and climate change dominate this next section. It’s remarkable just how cavalier early nuclear weapons testing really was. Ord recounts two major calculations from the history of nuclear testing. One, undertaken by a committee of Berkeley physicists before the Manhattan Project got underway in earnest, was whether the temperature of a sustained nuclear reaction would ignite the entire atmosphere in a conflagration (the committee believed it would not). The other was whether lithium-7 would contribute to a thermonuclear explosion (it was believed it would not). It turns out that lithium-7 can contribute to a thermonuclear explosion, as was discovered when the Castle Bravo test came out about three times larger than expected, irradiating some 15 nearby islands.

The atmospheric calculation turned out to be correct, and the first nuclear explosion in 1945 did not ignite the atmosphere. But given the failure of the lithium-7 calculation, the level of confidence here was clearly not high enough to warrant risking the end of all life on Earth.

Luckily, the risk that nuclear weapons or climate change could wipe out humanity entirely seems quite low (although not zero). Even a nuclear winter scenario or high sea level rise would not make the entire Earth uninhabitable, and humans could likely adapt, although the loss of life would still be catastrophic.

Instead, the bulk of the risk Toby Ord identifies lies in future technologies that grow more capable every year. These include engineered pandemics, enabled by our increasingly powerful and cheap control over DNA synthesis, and unaligned artificial intelligence, enabled by our increasingly powerful and integrated computer systems.

The threat of engineered pandemics feels particularly salient as I write this in August 2020, with SARS-CoV-2 still sweeping the world. Ord notes that even given quite optimistic assumptions about how few people would want to destroy the world with a virus, if the cost is cheap enough, it only takes one crazy death cult to pull the trigger. Even the accidental creation and release of a superweapon is a serious risk, as production is cheap and there are many past examples of bioweapons accidentally leaking from government laboratories. Unfortunately, we are also woefully unprepared on this front. The Biological Weapons Convention had a budget of $1.4 million in 2019, which Ord notes is less than the budget of most McDonald’s franchises.

Risks from unaligned artificial intelligence are similarly tied to technical advancement. Ord notes that artificial intelligence has had some impressive achievements recently, from photo and face identification, to translation and language processing, to games like Go and StarCraft. As computer hardware gets better and more specialized, and as we discover more efficient algorithms, we should expect this trend to continue. It therefore seems plausible that sometime in the future, perhaps this century, we will see artificial intelligence exceed human ability across a wide variety of tasks. The Precipice notes that, were this to happen with some sort of general intelligence, humanity would no longer be the most intelligent species on the planet. Unless we have foresight and strategies in place, sharing the planet with a superior intelligence that has its own goals could be considerably dangerous.

Unfortunately, we are already quite poor at getting complex algorithms to achieve complicated goals without causing harm (just look at the controversy around social media and misinformation, or social media and copyright algorithms). The use of deep learning neural networks in higher-stakes environments means we could be facing opaque algorithmic outcomes from systems we cannot verify were programmed to achieve the goals we actually want. Throw in the fact that human civilizational goals are multifaceted and highly debated, and there are a great many ways things could go wrong between what humans “want” and what a superior intelligence attempts to accomplish. While Toby Ord doesn’t think we should shut down AI research, he does suggest we take this source of risk more seriously by devoting resources to it and working on the problem.

So What Do We Do?

I’ve spent a lot of time enumerating risks because I think they are a concrete way to get someone unfamiliar with existential risk to think about these ideas. But Ord isn’t writing a book of alarmism just to freak out his audience. Instead, starting from the high levels of risk and the extremely negative consequences, Ord details how we might begin to tackle these problems. Unprecedented risks come with modeling challenges: if an existential catastrophe, by definition, has never occurred, how can we know how likely one is? We have to acknowledge this limitation, use what incomplete knowledge we do have access to (the number of near misses is a good start), and start building institutions focused on solving these hard problems.
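As one illustration of squeezing an estimate out of sparse data (this is a generic statistical device, not necessarily the method Ord uses, and the numbers below are purely illustrative), Laplace’s rule of succession assigns a non-zero probability even to events that have never occurred:

```python
# Laplace's rule of succession: after observing k occurrences in n trials,
# estimate the probability of occurrence on the next trial as (k+1)/(n+2).
# Crucially, it returns a non-zero estimate even when k = 0, making it a
# crude but useful prior for unprecedented events.
def laplace_rule(k: int, n: int) -> float:
    return (k + 1) / (n + 2)

# Purely illustrative: ~75 years of the nuclear era, zero extinction events.
print(f"{laplace_rule(0, 75):.4f} per year")  # ~0.0130, a cautious upper-end prior
```

Near misses sharpen this kind of estimate further: each close call is evidence about how often the trigger is nearly pulled, even when the catastrophe itself never arrives.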

International coordination is a major factor here. Many of these problems are collective action problems. Humanity has found ways around collective action issues with international institutions before (nuclear arms treaties), and we need to replicate those successes. Of course, we can’t establish new or better institutions unless we get broad agreement that these issues are major problems that need to be solved. That’s why Ord wrote this book, and it’s also why I feel compelled to blog about it. More on that momentarily.

In this section of the book, The Precipice outlines preliminary directions we can work towards to improve our chances of avoiding existential catastrophes. These include obvious things like increasing funding for the Biological Weapons Convention, but also discussions of how to think about technological progress, since much of our future existential risk rises as technology improves. We also obviously need more research on existential risk generally.

Finally, I want to wrap up by discussing Appendix F, which collects all of Ord’s general policy recommendations in one place. As policy prioritization has long been an interest of mine, I found Toby Ord’s answers quite fascinating. I wrote a post a few months back discussing the highest-impact policies actually being debated in American politics this election cycle. Comparing it to Toby Ord’s recommendations, the overlap is essentially nonexistent, except for some points on climate change that most Democrats support, such as the U.S. rejoining the Paris Climate Agreement. There’s also a point about leveraging the WHO to better respond to pandemics, and given that Trump has essentially done the exact opposite by pulling U.S. funding for the WHO, I suppose I should at least count that as a relevant policy debate.

I want to emphasize that Ord has 9 pages of policy ideas, many of them likely uncontroversial (improve our understanding of long-period comets, give the Biological Weapons Convention a real budget), yet our political system is failing to even address these challenges, and I think it’s important to highlight that.

There is room for optimism; human knowledge is improved by discussion and research, and that includes reading and blogging. If you find these ideas interesting, or even more broadly, if you think there are valuable things in the world, one of the most effective activities you could do this year might be to just read The Precipice. Even without the weight of humanity’s future, the concepts, problem solving, and prose are worth the read all by themselves. This is definitely my favorite book I’ve read this year, and I’ve skipped over summarizing whole sections in the interest of time. Ord even has a whole uplifting chapter about humanity’s future potential, and he is overall quite positive. Please attribute any gloominess on this topic to me and not the book.

And if you do read this book, it also just makes for intriguing conversation. I couldn’t help but tell people about some of the ideas here (“are supervolcanoes a national security threat?”), and the approach is wonderfully novel and cross-disciplinary.

For more on this, but slightly short of reading the whole book, I also recommend Toby Ord’s excellent interview on the 80,000 Hours Podcast. On that page you can also find a host of awesome links to related research and ideas about existential risk. I’ll also link Slate Star Codex’s longer review of The Precipice, and places to buy it.

How can we use our resources to help others the most?

This is the fundamental question of the Effective Altruism movement, and it should be the fundamental question of all charitable giving. I think the first fundamental insight of effective altruism (borrowed from Peter Singer) is that the right donation can change someone’s life, while the wrong donation can accomplish nothing. People tend not to think of charity in terms of “investments” and “payoffs”, yet GiveWell estimates that you can save a human life for on the order of $2,500.

Many American households donate that much to charity every year. Simply put, if the charities we donate to don’t try to maximize their impact, our donations may not help many people at all, when they could be saving lives.

This post is a short reminder that we have well-researched empirical evidence that you can make a difference in the world! The EA movement has already done very impressive work on how we might evaluate charitable giving, why the long-term future matters, and what the most important and tractable issues might be.

Apart from the baseline incredible giving opportunities in global poverty (see GiveWell’s top charities), the long-term future is an important and underexplored area of research. If humanity survives for a long time, then the vast majority of humans who will ever exist will live in the far future. Taking steps to ensure their existence could have massive payoffs, and concrete research in this area, such as work to avoid existential risk, seems very important and underfunded.

I write this blog post not to shame people into donating their entire incomes (see Slate Star Codex on avoiding being eaten by consequentialist charitable impacts), but rather to ask donors to evaluate where they are sending their money within their budgets and to consider whether they are paying too high an opportunity cost. Alma maters and church groups are the most common recipients of American charitable giving, but the impact of donations in these areas seems much lower than that of donations to global poverty programs or the long-term future.

Finally, part of this blog post is simply to publicly discuss what I donate to and to encourage others to create a charitable budget and allocate it to problems that are large in the number of people they impact, highly neglected, and highly solvable. I thus donate about a third of my budget to GiveWell as a baseline, relying on evidence-backed research to save lives today. I then donate another third of my budget to long-term causes, where I think the impact is the highest but the tractability is perhaps the lowest. Top charities I’ve donated to here include the Machine Intelligence Research Institute for AI alignment research, as well as the Long Term Future Fund from EA Funds.

The last third of my budget is reserved for policy, which is where I believe the EA movement is currently weakest. I donate money to the Institute for Justice, as they work on fairly neglected problems in a tractable way, winning court cases that improve civil liberties for U.S. citizens. I also like the Center for Election Science, as they work to improve democratic processes in the US. It would be great to be able to move good policies to polities with bad institutions (e.g., many developing nations), but that problem seems highly intractable. It may be that the best we can do is create good institutions here and hope they are copied. I’m open to different ideas, but I am a relatively small donor, so I believe that taking risks with a portion of my donations in ways that differ from the main EA thrust is warranted. This is by far my most uncertain category, and thus I usually will not entirely fulfill my budget for policy charities. I plan on giving anything remaining to GiveWell.

There are many resources from the Effective Altruism community, and I’ll include several links to similar recommendations from around it. If you haven’t heard of EA charities, consider giving some of your charity budget to GiveWell or another EA organization you find convincing. If you don’t have a charity budget, consider making one for next year. Even modest annual donations can cumulatively save dozens of lives!

Podcast Recommendations October 2019

Last year I wrote up a post discussing my recommended podcasts, and I figured it was about time to update my list. Podcasts have grown significantly in the last 10 years to the point where I honestly haven’t listened to terrestrial radio stations for several years. Podcast distribution is decentralized, and the barrier to entry is low. We live in a world where if you have a niche interest, there’s going to be a podcast and several YouTube channels covering it.

But since podcast discussion is decentralized, my most common way of hearing about podcasts is through other people. In that light, I have created this list of recommendations. It is loosely ordered: podcasts I have listened to longer and/or enjoy more are at the top, while more recent discoveries, or shows whose episodes I find hit or miss, are towards the bottom.

I’d also like to take a second to recommend a method of podcast listening: have a low barrier to skipping an episode of a podcast that you otherwise enjoy. This was actually a recommendation by 80,000 Hours podcast host Rob Wiblin. He encourages his listeners to skip episodes they find uninteresting, because he’d rather they continue to enjoy the pieces of content from the podcast that they do like than feel they have to slog through parts they don’t. Moreover, there is just so much good content out there that you should never waste your time with something you don’t find interesting. And now the (slightly sorted!) list:

Reason Podcast

First up, the Reason Podcast includes several different types of excellent content. My favorite is the Monday Editor’s Roundtable, which usually includes Katherine Mangu-Ward, Matt Welch, Nick Gillespie, and Peter Suderman. It’s well edited, sharp, witty, and always tackles the latest news of the week from a libertarian perspective. In the last few years I often find myself wondering if the political world has lost its mind, and on Mondays I get the message that yes, everyone has gone crazy, but you’re still not alone: these four libertarian weirdos are right there with you. Moreover, Nick and Matt’s obscure 70s and 80s pop culture references and cynicism play well off of Katherine and Peter’s more techno-libertarian science fiction vibe.

However, that’s not the only content here! There are many interviews, from presidential candidates to authors and professors. Audio from the monthly SoHo Forum debates is also posted, and I always listen to at least the opening statements (audience Q&As are less interesting to me). Overall, I almost never skip an episode, and they produce a ton of great content!

80,000 Hours

80,000 Hours is an effective altruist organization researching how people can do the most good with their careers. The effective altruist movement does great work, and I think anyone seriously interested in making a difference in the world should be aware of it and of the approach with which effective altruists analyze the world. But more than that, this podcast is just more awesome than other interview shows. Rob Wiblin, the host, is excellent at interviewing. He presses his guests on issues but is also willing to accept strange ideas about the world and follow them to their interesting conclusions.

The interviews are also long, sometimes resulting in 3-hour episodes. This is on purpose, as they can cover in depth why people have the beliefs they do and what specialized knowledge they have accumulated working in niche roles. Sample episodes include Vitalik Buterin (founder of Ethereum) on ways to revamp public goods, blockchains, and effective giving; Paul Christiano (AI alignment researcher at OpenAI) on messaging the future, increasing compute power, and how CO2 interacts with the brain; and Philip Tetlock (author/inventor of Superforecasting) on why forecasting matters for everything.

This one is perhaps a bit more intense than some of the more chill “people hanging out” podcasts, but I listen to every episode.

EconTalk

EconTalk is at its core an economics podcast, hosted by Russ Roberts. It’s funded by the Library of Economics and Liberty, and Roberts leans libertarian, but he is a courteous and thoughtful interviewer. He knows his biases and acknowledges them during discussions. The podcast strays into many related fields, not just economics; Russ is interested in personal philosophy and introspection as well.

Of late, Russ has had particular concerns about the economics field and how free market policies fall short of what we might hope for. In particular, he has discussed themes of societal disillusionment and isolation that the “material” concerns dominating economic metrics cannot capture. I wouldn’t say I always agree with Russ, and certainly not with all of his guests, but I can say I listen to almost every episode because there are so many good insights discussed.

The Fifth Column

I recently heard the term “dive podcast”. This is an excellent description of The Fifth Column, a talk show hosted by Kmele Foster, Matt Welch, Michael Moynihan, and Anthony Fisher. All lean libertarian in various degrees and shades, and they discuss the news and/or critique the ever-continuous stream of takes in print media, on television, online, on Twitter, and so on, all while in various states of inebriation. This is much less a cerebral lecture and more a “rhetorical assault,” as Kmele calls it.

I find the show incredibly entertaining, often informative, and very funny. I listen to all episodes as soon as they are posted.

Hello Internet

Hello Internet is another talk show, hosted by YouTubers CGP Grey and Brady Haran. It isn’t really related to any topics we cover here on the blog, but it is nonetheless entertaining and charming. Unlike The Fifth Column, there is no alcohol involved in the making of this podcast, but it does have an amusing self-grown culture and language.

For example, there is an official flag of the podcast, chosen after a referendum of listeners was held, though one of the losing flags is occasionally taken up by rebellious fans. There are also unofficial official birds of Hello Internet (the Reunion Swamphen, with limited-edition t-shirts). Topics covered include YouTube and technology, but also the various interests of Brady and Grey, such as mountain climbing or Apple products. There’s no simple way to convey this podcast, but I do recommend it, and I do listen to every episode.

Rationally Speaking

Rationally Speaking is an interview show hosted by Julia Galef, founder of the Center for Applied Rationality, whom I’ve heard described as one of the major pillars of the rationality community. Like Russ Roberts of EconTalk, Galef is an excellent, fair, and thoughtful interviewer. However, the subjects of these interviews are much broader than EconTalk’s admittedly broad discussion of economics, with a general focus on the philosophy of why we believe what we believe. I do tend to skip more episodes of Rationally Speaking than of the previously mentioned interview podcasts, but I estimate I still listen to 90% of all episodes, and I would absolutely recommend this very accessible podcast to everyone.

The Economist Editor’s Picks

This one is pretty straightforward. In a world where we tend to get news continuously from the internet or our smartphones, this podcast is a short, roughly 20-minute weekly selection of important topics from a global perspective that you might not know much about, and that may have gotten swept away in the torrent of your daily information deluge. The Economist is certainly opinionated, but I think it does a good job of promoting moderate, liberal ideas that would improve the world. This podcast is an excellent way to expose yourself to some of those important concepts in a global context.

Anatomy of Next

From Founders Fund, this is a bit of an outlier podcast on this list. It’s much more a series of scripted journalistic pieces or lectures than recorded unscripted discussions between people. However, it is quite ambitious in its ideas. The latest season, entitled “New World,” which finished up in early 2019, is about how to build a human civilization on Mars. Anatomy of Next explores everything involved, most of which does not exist yet but perhaps could: terraforming, genetic engineering, sci-fi launch concepts, and more.

I wouldn’t say this podcast is for everyone, but if you feel like you are missing out on human optimism, and want to hear people talk compellingly about settling Mars with technology that doesn’t yet exist, this is a podcast you should definitely check out. Also, thanks to Nick Gillespie and Reason for interviewing Mike Solana and letting me know about this podcast in the first place!

Building Tomorrow

Building Tomorrow is a podcast about technology and innovation, and how they lead to and interact with individual liberty. It’s hosted at Libertarianism.org, a project of the Cato Institute. I only recently discovered this podcast, and it sits lower on my list simply because I haven’t had a chance to listen to as many episodes as I would like. Nonetheless, every episode I have listened to has been really great! Of course, this program is the perfect niche for me to enjoy, but I would definitely recommend it to anyone who enjoys this blog.

Conversations with Tyler

Tyler Cowen co-hosts one of the most popular econ blogs in the world, Marginal Revolution, and, of course, he is quite an accomplished economist and author. I recently discovered his podcast, and it’s pretty wonderful. I admit I don’t listen to every episode, as it turns out Cowen’s interests and mine diverge somewhat, which is quite alright. On the episodes that I do find interesting, Cowen is an excellent, although unorthodox, interviewer. I rarely go into an episode knowing much about the interviewee or even thinking that I’d really enjoy the topic, but I am always impressed.

There are some additional podcasts I listen to sporadically, but either don’t fit the context of this blog, or I haven’t listened to enough episodes to recommend them here. Nonetheless, it’s worth mentioning that I have listened to a handful of episodes from the Neoliberal Podcast, and I hypothesize that if I wrote this list again in 3 months, it would likely be here.

If you have any podcast recommendations, please tweet at me or leave a comment! I’m always interested in more podcasts.

Links 2018-07-09

My new series focusing on policy summaries made me realize that while the political world and Twittersphere may not discuss policy much, there are groups of people who research policy professionally and have probably covered some of what I want to do with my “Policies in 500 Words or Less” series. After looking around, I found that the Cato Institute has an excellent page called the Cato Handbook for Policymakers. It contains a remarkable 80 entries of policy discussion, including a top agenda of important items and sections on legal and government reform, as well as fiscal, health, entitlement, regulatory, and foreign policy. I will definitely be pulling some ideas from that page for future policy summaries.

I recently found the YouTube channel of Isaac Arthur, who makes high quality, well researched, and lengthy videos on futurism topics, including space exploration. I’d like to take a moment to highlight the benefits of a free and decentralized market in the internet age. As Adam Smith observed, the division of labor is limited by the extent of the market, and the internet has made that market enormous, allowing for incredible specialization. Arthur has a successful Patreon with weekly videos on bizarre and niche topics that regularly get hundreds of thousands of views (24 million total for his channel), and they are available completely free, no studio backing necessary. Such a career could not have existed even 10 years ago.

The 80,000 Hours Podcast, recently mentioned in our top podcasts post, had Dr. Anders Sandberg on (split across two episodes) to discuss a variety of related topics: existential risk, solutions to the Fermi Paradox, and how to colonize the galaxy. Sandberg is a very interesting person, and I found the discussion enlightening, even if it didn’t focus much on how to change your career to have a large impact, as 80,000 Hours usually does.

Reason magazine’s July issue is titled “Burn After Reading”. It contains various discussions and instructional articles on how to do things that sit on the border between legal and illegal, such as how to build a handgun, make good pot brownies, or hack your own DNA with CRISPR kits. It’s an impressive demonstration of the power of free speech, and a testament to the cyberpunk ideal that information is powerful and can’t be contained.

George Will writes in support of Bill Weld’s apparent aim to become the 2020 Libertarian Party nominee. I admit I wasn’t hugely impressed with Weld’s libertarian bona fides when he was running in 2016, but I thought his campaigning and demeanor were easily better than Gary Johnson’s, and Johnson was already the LP’s best candidate in years, maybe ever. I think a firmer libertarian foundation paired with Weld’s political skills would make for an excellent LP presidential candidate.

Related: last week was the 2018 Libertarian Party National Convention. I don’t know if it’s worth discussing or whether it’s actually going to matter, but I have seen some good coverage from Matt Welch at Reason and Shawn Levasseur.

I read this very long piece by Democratic Senator (and likely presidential hopeful) Cory Booker at Brookings. It was a pretty sad look at current issues of employment, worker treatment, and stagnant wages. There was a compelling case that firms are getting better at figuring out ways to force labor to compete, such as subcontracting out work to avoid paying employee benefits. This leads to monopsony power in labor markets, with large firms squeezing workers who don’t have the same market bargaining power. He also mentions non-compete clauses and the growing gap between CEO pay and average worker pay. I don’t have good answers to these points, although his suggestion of a federal jobs guarantee seems very expensive and likely wasteful. His proposed rules about stock buybacks also seem to miss the point. Maybe stricter reviews of mergers would work, but perhaps larger firms are more efficient in today’s high tech economy; it’s hard to know. Definitely a solid piece from a source I disagree with, which is always valuable.

Somewhat related: Scott Alexander’s post from a couple months ago on why a jobs guarantee isn’t that great, especially compared to a basic income guarantee. Also worth reading, Scott’s fictional post on the Gattaca sequels.

Uber might have suspended testing of self-driving automobiles, but Waymo is going full steam ahead. They recently ordered over 80,000 new cars to outfit with their autonomous driving equipment, in preparation for rolling out a taxi service in Phoenix. Timothy B. Lee at Ars Technica has a very interesting piece arguing that the setbacks for autonomous vehicles only exist if you ignore the strides Waymo has made.

Augur, a decentralized prediction market platform similar to Paul Sztorc’s Hivemind (which I’ve discussed before), is launching on the Ethereum mainnet today. Ethereum has its own scaling problems, although I’d hope at some point sharding will actually become a real thing. For now, transactions on Augur may be pretty expensive, and complex prediction markets may remain illiquid. That may mean the only competitive advantage Augur offers is the ability to create markets of questionable legality. Exactly what those will be remains to be seen, but this is an exciting step in the continuing development of prediction markets.