Bitcoin Hard Fork Predictions

Tomorrow there is scheduled to be a hard fork of the Bitcoin blockchain and network, creating a new currency called Bitcoin Cash. There’s a fair amount of uncertainty over what will happen. The hashrate is unknowable until the fork occurs. Bitcoin Cash’s price currently seems to be around 10% of Bitcoin’s. However, not many exchanges will be accepting the new currency, and there are even fewer places you can actually spend it.

I’m going to make some predictions about it to put on record what I think is going to occur and to see how correct or incorrect I end up being.

  1. There will be a Bitcoin Cash block mined before 12 AM August 2, US Eastern time: 80%
  2. The price of Bitcoin Cash at 12 AM August 2, US Eastern time will be <10% of Bitcoin’s price: 70%
  3. The price of Bitcoin Cash on August 5 will be < 10% of Bitcoin’s price: 90%
  4. The price of Bitcoin Cash on September 1 will be < 10% of Bitcoin’s price: 90%
  5. The value of all transactions of Bitcoin Cash around September 1 (maybe averaged over a week?) will be < 10% of the value of all transactions in Bitcoin: 95%
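
One straightforward way to grade these later is a Brier score (mean squared error between stated probabilities and outcomes). Here’s a minimal sketch of how I might score them; the outcomes below are placeholders until the events resolve, not results:

```python
# Brier-score sketch for grading the predictions above once outcomes are
# known. Probabilities mirror the list; outcomes are placeholders (None
# until resolved), not actual results.
predictions = [
    ("Bitcoin Cash block mined before 12 AM Aug 2", 0.80, None),
    ("BCH price < 10% of BTC at 12 AM Aug 2", 0.70, None),
    ("BCH price < 10% of BTC on Aug 5", 0.90, None),
    ("BCH price < 10% of BTC on Sep 1", 0.90, None),
    ("BCH transaction value < 10% of BTC around Sep 1", 0.95, None),
]

def brier_score(preds):
    """Mean of (probability - outcome)^2 over resolved predictions.
    Lower is better; always guessing 50% would score 0.25."""
    resolved = [(p, o) for _, p, o in preds if o is not None]
    if not resolved:
        return None
    return sum((p - o) ** 2 for p, o in resolved) / len(resolved)

print(brier_score(predictions))  # None until outcomes (1 or 0) are filled in
```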

I have mixed hopes for the success of Bitcoin Cash. On the one hand, I wrote previously that if the two factions in Bitcoin split, we could have a competitive market showing which rules were better. On the other hand, due to network effects, I still don’t think a forked chain can be very successful. Supposing it did succeed (i.e., sustained a fairly high market price), what would that mean? I suppose it would mean forks would become more common. That might be better for competition, but not for the stability of the currency.

Ultimately, if it turned out to be fairly easy to make a successful hard fork of Bitcoin, that would be pretty devastating to Bitcoin’s health. It would mean consensus doesn’t count for much, and it would mean the Bitcoin community could splinter easily, which in turn would reduce Bitcoin’s usefulness as a currency as each part of the community retreated to its own forked blockchain and coin. Something like sidechains seems like a much better implementation of this idea.

I should probably also disclose that I do not have much faith in the current governance model of Bitcoin Cash, and that does concern me a bit as well. I hope that hasn’t clouded my judgment of the actual technological and economic implications, but only time will tell if my predictions are true.

Unpopular Net Neutrality Opinions

Net neutrality has benefits, and regulation has a role in ensuring its continuing existence, but there are several problems inherent in FCC telecom policy and the debate about net neutrality.

History

The new FCC chair (and Trump appointee) Ajit Pai has proposed reclassifying internet service providers so that they are no longer “common carriers” under Title II of the 1934 Communications Act, thus reducing the FCC’s available regulatory options.

Net neutrality is the concept that all internet traffic should be treated identically by Internet Service Providers (like cable companies) or governments regardless of content, protocol, users, destination, sources, etc. It means that loading a webpage from this blog would not cost you more than loading a webpage from a large company, assuming the content size is similar.

The FCC has broadly promoted net neutrality in the past. Around 2008, the FCC blocked Comcast from slowing the speed of users who were using BitTorrent to download videos. Comcast appealed and won, with an appellate court ruling that the FCC did not have ancillary jurisdiction over Comcast’s network management (Comcast v. FCC). The FCC next tried to issue an Open Internet Order in 2010, but in Verizon v. FCC the same appellate court largely vacated that order, ruling that the FCC could not regulate ISPs unless it classified them as common carriers under Title II of the 1934 Communications Act. In 2015, the FCC classified ISPs as common carriers under Title II and enforced net neutrality rules.

Problems with Title II

A big problem with Title II is that it was written in 1934, 21 years before Tim Berners-Lee, the inventor of the world wide web, was born. In fact, the vast majority of Title II is so useless that when Tom Wheeler proposed classifying ISPs as common carriers, he said the FCC would forbear from applying all but six of the title’s 61 sections (201, 202, 208, 222, 254, and 255).

One question I cannot answer without more specific legal expertise is whether Wheeler’s rule only allows the application of those six sections, or whether a future FCC could unilaterally decide (without a vote) to apply other sections of Title II now that ISPs are classified as common carriers. For example, Section 224 of Title II covers pole attachments. Could a future FCC regulate broadband providers’ pole attachments under Wheeler’s rule if it wanted to? Even if it could not, it could certainly write a new rule applying all of Title II with a full vote of the commission.

Perhaps a better solution would be for Congress to pass a new law allowing the FCC to regulate net neutrality, but bar the FCC from regulating ISPs under Title II otherwise. This would narrow the FCC’s focus officially to what consumers care about. Of course, that would require nuanced Congressional action which is likely impossible given the many competing interests in both houses.

Is Title II regulation overwhelming and innovation-killing? Ajit Pai has argued so. The New York Times editorial board disagrees, but their argument seems quite lacking. They dismiss Pai’s claim that broadband capital investment has declined since Title II classification as “alternative facts”, but a simple Google search reveals why their numbers conflict with Pai’s. Pai’s source, the Free State Foundation, fit a trend line to broadband capital expenditures going back to 2003 and compared expected post-Title II expenditures with actual ones. So while capital expenditures did increase after the regulation, they increased less than the trend line suggests they should have.

Is it misleading for Pai to say capital expenditures decreased? Yes, or at the very least it’s imprecise. Is it misleading for Title II proponents to say there has been no effect? Probably, although trend lines are tricky. Additionally, the Times argues that the pattern of increased consolidation in the telecoms industry is a symptom of a healthy economic sector. This is a non sequitur. Mergers and acquisitions can be symptoms of profitable or unprofitable companies, depending on who is buying whom, but ultimately it seems to me more indicative that economies of scale exist. One possible explanation for a recent increase in economies of scale could be an increased regulatory burden. I don’t know if that’s the case, but to suggest that Charter’s purchase of Time Warner Cable is a symptom of a healthy telecoms sector is the Times projecting its own political views onto market actions.

Problems with Net Neutrality

Ajit Pai has argued (in this Reason interview) that ISPs were not favoring some internet traffic over others. This seems incorrect. Comcast v. FCC was specifically about Comcast slowing some types of traffic. John Oliver points out that Google Wallet was not allowed to function on phones on the AT&T, Verizon, and Sprint networks because it competed with a joint electronic-wallet venture of those carriers. On the other hand, Google Wallet still out-competed the carriers’ own payment system despite being banned on those platforms. Consumer response on other networks was so positive that customers demanded it on AT&T and Verizon. Eventually the joint venture folded and was absorbed into Google Wallet/Android Pay.

Moreover, a few phone networks have run afoul of net neutrality rules by giving consumers free data for certain services, e.g. T-Mobile allowing streaming music to not count against a customer’s data cap. If the service provided by the content producer is so profitable that it can afford to pay for its own bandwidth, is it wrong to give that bandwidth to customers free of charge?

The economics here is complicated. In a perfectly competitive market, content producers could only charge for the marginal cost of producing more content while ISPs could only charge the marginal cost of additional bandwidth. Consumers would pay each company for their respective consumption of their products.

But we don’t have a competitive market, either for content producers (only HBO has Game of Thrones, only Netflix has Stranger Things) or, especially, for ISPs. Since cable ISPs are state-granted monopolies, there is a solid argument for regulating them, as they have leverage over content producers. That argument weakens, though, where there is competition, such as in wireless broadband.

It is also worth pointing out that the importance of “neutrality” towards content is only narrowly valid. For example, bandwidth at certain times of day is more valuable. The Economist has suggested that electric power be charged at different rates when used at different times, and similar arguments could be made for internet usage. It is also undeniable that some internet traffic really is more important: consumers would be willing to pay more to have their bank notifications or business calls come through faster than YouTube videos, which they might be fine letting buffer. Certainly we would want consumers making this decision and not ISPs, especially when there is little ISP competition for most end users. Such prioritization could even be done by software on the consumer/LAN side of the router, with ISPs acting as dumb pipes that deliver what we tell them to.
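
As a toy illustration of that last point, here is a sketch of consumer-side prioritization; the traffic classes and priorities are invented for the example, and a real home router would use OS-level queueing rather than Python:

```python
import heapq

# Toy model of consumer-side traffic prioritization: the user, not the ISP,
# decides that bank alerts and business calls jump the queue ahead of
# buffering video. Lower number = sent first (priorities invented here).
USER_PRIORITIES = {"bank_notification": 0, "voip_call": 1, "web": 2, "video": 3}

class HomeRouterQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal-priority packets stay FIFO

    def enqueue(self, traffic_class, packet):
        priority = USER_PRIORITIES.get(traffic_class, 2)
        heapq.heappush(self._heap, (priority, self._counter, packet))
        self._counter += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = HomeRouterQueue()
q.enqueue("video", "youtube chunk")
q.enqueue("bank_notification", "low balance alert")
q.enqueue("voip_call", "business call frame")
print(q.dequeue())  # the bank alert goes out first, as the user chose
```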

Finally, we should be cautious about locking in rules even if they make sense today. Markets change over time, and there is a possibility that past rules will restrict innovation in the future. Since competition itself can defend against bad ISP behavior (perhaps even better than the FCC), having the FCC focus on increasing competition seems at least as vital as net neutrality. Interestingly, this is what Ajit Pai has argued for (see Reason interview above).

Conclusion

Today it seems likely that a policy of net neutrality by cable ISPs is more beneficial than not. It also seems likely that to protect that idea today, some form of regulation is needed for cable companies that are state-granted monopolies in a given area. Such regulation is not as clearly necessary for wireless providers, and we should continually review the importance of FCC regulations in order to avoid curtailing innovation. Additionally, any regulation should come from new Congressional legislation, not a law written over 80 years ago. However, the benefits of net neutrality should not be taken as given. Variations in how much consumers value priority for different content, as well as bandwidth scarcity during peak hours, are perfectly acceptable reasons to prioritize internet traffic. The problem arises when monopoly ISPs are doing the prioritizing rather than consumers.

 


Leave a comment on the official reddit thread.

A Few Thoughts on Bitcoin

I have been aware of Bitcoin’s existence for a while, and while I was excited about it a few years ago, it had somewhat dropped off my radar. Perhaps because Bitcoin has seen a big increase in value over the past few months, I started to revisit it and analyze it as a technology. My experience has been nothing short of breathtaking.

A few years ago, Bitcoin was pretty cool. I even wrote a paper about it, discussing the huge potential of the technology and how decentralized, autonomous transactions could totally upend the banking industry. But back when I first got into Bitcoin, I was also interested in Austrian economics, which I’m largely over now. The Austrians’ focus on control of the money supply and dire warnings about the Federal Reserve weren’t really borne out by the rather mundane economic growth of the last few years.

Nonetheless, the Bitcoin community has kept working without me, and it has paid off: you can now use Bitcoin to purchase from all sorts of retailers, including Dell, Overstock.com, Newegg, and more. You can also buy all sorts of internet-specific services, which to me seems like the clearest use case. These include Steam credit, VPNs, cloud hosting, and even Reddit gold.

The price jumped to over $1000 by the end of April 2017 (that’s over $18 billion in total market value of all Bitcoins), and it was briefly even higher a month ago on speculation that the SEC would allow a Bitcoin ETF. The ETF was rejected, but the potential of the currency remains. And technologically, Bitcoin is far more impressive than it was, most notably with a concept called the Lightning Network.

This technology would allow for instantaneous Bitcoin transactions (without having to accept risky zero-confirmation transactions). These transactions would have the full security of the Bitcoin network and would likely allow massive scaling of the Bitcoin payment network. Drivechain is another project with great potential to scale Bitcoin and allow applications to be built on top of the Bitcoin blockchain. It would create a two-way peg, enforced by miners, that allows tokens to be moved from Bitcoin to other sidechains and back again. This would allow experimentation with tons of new applications without risk to the original Bitcoin blockchain.
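
To make the payment-channel idea behind the Lightning Network a bit more concrete, here is a toy sketch of two parties making many instant off-chain balance updates and settling once on-chain. This is only an illustration of the concept; the real protocol involves multisig funding transactions, co-signed commitment transactions, and penalties for broadcasting old states.

```python
# Toy two-party payment channel: many instant off-chain updates, one
# on-chain settlement. Real Lightning channels enforce this with Bitcoin
# scripts and signatures; this sketch only shows the core bookkeeping idea.
class PaymentChannel:
    def __init__(self, alice_deposit, bob_deposit):
        # Opening the channel is an on-chain event (the funding transaction).
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.state_number = 0
        self.is_open = True

    def pay(self, sender, receiver, amount):
        # An off-chain update that both parties would co-sign; it supersedes
        # all earlier states but never touches the blockchain.
        assert self.is_open and self.balances[sender] >= amount
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.state_number += 1

    def close(self):
        # Closing broadcasts the latest co-signed state for on-chain settlement.
        self.is_open = False
        return self.state_number, dict(self.balances)

channel = PaymentChannel(alice_deposit=100_000, bob_deposit=0)  # in satoshis
for _ in range(3):
    channel.pay("alice", "bob", 10_000)  # instant, no block confirmations
print(channel.close())  # (3, {'alice': 70000, 'bob': 30000})
```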

Hivemind is particularly exciting as a decentralized prediction market that is not subject to a central group creating markets; anyone can create a market and rely on a consensus algorithm to declare outcomes. If attached to the Bitcoin blockchain, it also wouldn’t suffer from the cannibalization that Ethereum-based projects like Augur can suffer from.

Mimblewimble is another interesting sidechain idea. It combines concepts of confidential transactions with (I think) homomorphic encryption to allow for completely hidden transaction amounts and untraceable transaction histories. It would do this while keeping the data required to run the blockchain fairly low (whereas the Bitcoin blockchain only grows over time). It would have to be implemented as a sidechain, but any transactions that occur there would be completely untraceable.
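
A rough way to see how this kind of math can hide amounts while still letting the network verify that inputs equal outputs: commitments to amounts can be combined, and the combination is a commitment to the sum. The sketch below uses deliberately tiny, insecure integer parameters just to show the algebra; real confidential transactions use Pedersen commitments over elliptic curves.

```python
# Toy additively homomorphic commitment (Pedersen-style, over integers mod p).
# The parameters are tiny and insecure on purpose; the point is only the
# algebra that keeps amounts hidden while their sums remain checkable.
import random

p = 2_147_483_647   # a prime modulus (toy-sized)
g, h = 3, 7         # two fixed "generators" (chosen arbitrarily here)

def commit(amount, blinding):
    return (pow(g, amount, p) * pow(h, blinding, p)) % p

r1, r2 = random.randrange(p - 1), random.randrange(p - 1)
c1 = commit(25, r1)  # commits to 25 without revealing it
c2 = commit(17, r2)  # commits to 17 without revealing it

# Multiplying the commitments yields a commitment to the sum of the amounts:
assert (c1 * c2) % p == commit(25 + 17, (r1 + r2) % (p - 1))
print("commitments combine homomorphically")
```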

And there are even more cool projects: Namecoin, JoinMarket, the Elements Project, and of course other cryptocurrencies like Ethereum, Monero, and Zcash. This really makes the future of Bitcoin and cryptocurrencies seem pretty bright.

However, we’ve skipped a big point, which is that most of these cool innovations can’t be done with Bitcoin’s present architecture. Moreover, the number of Bitcoin transactions per block has just about maxed out at ~1,800. This has resulted in something called the Scaling Debate, which centers on the best way to scale the Bitcoin blockchain. Upgrades to the blockchain must be done through consensus: miners mine new types of blocks, institutions running nodes approve of those new blocks, and users continue to create transactions that are included in new blocks (or else find another cryptocurrency). Before any of that can happen, developers have to write the code that miners, validation nodes, and users will run.
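
For a rough sense of why ~1,800 transactions per block is so limiting, here’s the back-of-the-envelope arithmetic; the average transaction size is my own ballpark assumption, not a figure from any source:

```python
# Back-of-the-envelope Bitcoin throughput: ~1 MB blocks roughly every
# 10 minutes, assuming an average transaction size of ~550 bytes.
block_size_bytes = 1_000_000
avg_tx_bytes = 550
block_interval_seconds = 600

tx_per_block = block_size_bytes // avg_tx_bytes        # ~1,818
tx_per_second = tx_per_block / block_interval_seconds  # ~3 tx/s
print(tx_per_block, round(tx_per_second, 1))
```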

Right now, there is a big political fight that could very briefly be described as between users who support the most common implementation of the Bitcoin wallet and node (known as Bitcoin Core) and those who generally oppose that implementation and the loose group of developers behind it. I certainly am not here to take sides, and in fact it would probably have some long term benefits if both groups could go their separate ways and have the market decide which blockchain consensus rules are better. However, there is not much incentive to do that, as there are network effects in Bitcoin and any chain split would reduce the value of the entire ecosystem. The network effects would likely mean any two-chain system would quickly collapse to one chain or the other. No one wants to be on the losing side, yet no side can convince the other, and so there has been political infighting and digging in, resulting in the current stalemate.

There will eventually be a conclusion to this stalemate; there is too much money on the line to avoid one. Either the sides will figure out a compromise, the users or the miners will trigger a fork of the chain in some way and force the issue, or eventually, a couple of years down the road, another cryptocurrency will overtake Bitcoin as the most prominent store of value and widely used blockchain. A compromise would obviously be the least costly. A chain split would be more expensive, but could possibly settle the disagreement more completely than a compromise. Another cryptocurrency winning would be by far the most expensive and damaging outcome, since all the development and code security that went into Bitcoin would have to be redone on the new cryptocurrency. Nonetheless, Litecoin just this week seems to have approved Segregated Witness, the code change at the center of the current Bitcoin stalemate. If Bitcoin’s stalemate continues for years, Litecoin is going to start looking pretty great.

Obviously it’s disappointing that even a system built on trustless transactions can’t avoid the pettiness of human politics, but it’s a good case study demonstrating just how pervasive and pernicious human political fights are. Ultimately, because cryptocurrencies are built in a competitive market, politics cannot derail this technology forever. And when the technology does win out, the impact on the world will be revolutionary. I just hope it’s sooner rather than later.

 


Bitcoin featured picture is a public domain image.

Leave a comment on the official reddit thread.

What is Postlibertarianism? v2.0

When I started blogging here about 18 months ago, I knew that I was having trouble identifying myself as exactly “libertarian”, despite having blogged primarily from that perspective for years before. I’ve mapped out important parts of this “new” position in previous posts, but now I think it makes sense to put everything in one place. This post is labeled “2.0” since former postlibertarian.com blogger Joshua Hedlund defined the term pretty well back in 2011; this is a more in-depth analysis.
Continue reading What is Postlibertarianism? v2.0

Encrypted Communication Apps

I have discussed this idea in the past, but normally I’ve only gotten excitement about encrypted communication from my fellow libertarians and netsec friends. With the current Presidential situation, though, there seems to be more interest in communicating without being overheard by the government, even among my government-loving left-wing friends. And this is excellent! Even if you don’t need privacy, by communicating securely all the time you make it less notable when you do have to communicate securely, and you create more encrypted traffic for actual targets of government surveillance to blend into.

First, let’s go over a very quick summary of encryption. If you’re already familiar with encryption, skip down past this section and the pictures to the list.

Public Key Encryption in 5 Minutes

An encryption algorithm takes information, like text, numbers, or picture data (it’s all just 0s and 1s to computers), and outputs different data on the other side. A good encryption algorithm will produce output that looks randomly generated, so that no information can be gained about the source text. That output is then sent out in the clear (over the internet, where people might be spying) to the recipient. The recipient then reverses the process, decrypting the message and getting the original text, numbers, picture data, etc. However, if an algorithm always created the same output from the same input, bad guys could figure out what you were saying pretty quickly. This introduces the idea of keys. A key is a number the algorithm uses to change the output in a predictable way. If both the sender and the recipient have a secret key, they can use their keys and the algorithm to send messages that only they can read (without the right key, the algorithm won’t reverse the encryption):

Symmetric key encryption. Public domain image.
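
For the code-inclined, here’s roughly what that symmetric flow looks like in practice, using Python’s cryptography library (Fernet is just one convenient symmetric scheme; the point is that the same shared key both locks and unlocks the message):

```python
# Symmetric encryption sketch: one shared secret key both encrypts and
# decrypts. Requires the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()  # both parties must somehow share this secret
cipher = Fernet(shared_key)

ciphertext = cipher.encrypt(b"meet at the usual place at noon")
print(ciphertext)                   # looks like random bytes on the wire
print(cipher.decrypt(ciphertext))   # anyone holding the key recovers the text
```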

But we can do better! In our previous scenario, we need to somehow communicate the secret key separately from our message. That’s a problem, since we likely are using encryption precisely because we can’t communicate openly. The solution is something called public key encryption. In this system, each person has two keys, one public and one private. To send someone a message, you can encrypt the message with their public key, and then send it to them. Then only they alone can decrypt the message with their private key.

Public key cryptography. Public domain image.

The reality of the mathematics is slightly more complicated, but for our purposes, what matters is how the public and private keys are handled in each messaging app. Managing these keys is difficult and confusing for users, but losing control of a private key means communication is no longer secure. Therefore, when using encrypted messaging, it’s important to be aware of how the app uses and manages the keys.
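
And here is the public-key version of the same idea, sketched with the PyNaCl library: anyone can encrypt to the recipient’s public key, but only the matching private key can decrypt.

```python
# Public-key encryption sketch using PyNaCl (pip install pynacl).
# The recipient publishes a public key; the sender needs no shared secret.
from nacl.public import PrivateKey, SealedBox

recipient_private_key = PrivateKey.generate()            # stays on the recipient's device
recipient_public_key = recipient_private_key.public_key  # shared with the world

# The sender encrypts using only the recipient's public key.
ciphertext = SealedBox(recipient_public_key).encrypt(b"my message, nobody else's business")

# Only the holder of the matching private key can decrypt.
print(SealedBox(recipient_private_key).decrypt(ciphertext))
```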

The Best Apps

The following is my ranked order of preferred secure communication:

1. Signal. This is the gold standard of encrypted communication apps. It’s open source, free, has group chat, works on mobile and desktop, and of course is end-to-end encrypted. It even has encrypted voice calls. The one significant drawback is that it requires a phone number, which it uses to distribute your public key to everyone who needs to contact you. Because of this, it offers excellent encryption (requiring no security knowledge!) but no anonymity. If you want that, check the next entry.

2. PGP-encrypted email. This one is a bit more involved. OpenPGP (PGP stands for Pretty Good Privacy) is an open protocol for sending encrypted messages. Unlike the other entries on this list, PGP isn’t an app, and it therefore requires you to produce and manage your own keys. The tools you can find at the link will let you produce a private and public key pair. To send a message to someone else, you have to obtain that person’s public key, use the software to encrypt the message with it, and then send the result. Because it is so much work, I have this method second on the list, but there is no better way to communicate securely and anonymously. To better distribute your public key, I recommend keybase.io (use that link to send us encrypted emails!). The good thing about PGP is that it can be used with any email, or really any other method of insecure communication. Additionally, it’s open source, free, and strongly encrypted.

Both Signal and PGP are very secure methods of communication. The following apps are good, but they are not open source and thus are not as provably secure. They are still better than just using unencrypted methods like SMS text, email, etc.

3. WhatsApp. WhatsApp is pretty good. It’s free, widely used, implements the Signal protocol (and requires a phone number), works on mobile and desktop, has group chat and encrypted phone calls, and is encrypted by default. Moxie Marlinspike, the creator of Signal (the number one app on this list), actually implemented the Signal protocol on WhatsApp himself. That’s great, but unfortunately WhatsApp isn’t open source, so while Moxie vouches for WhatsApp now, we don’t know what could happen in the future. WhatsApp could push out an update that quietly does bad things, like turning off secure defaults. It’s also important to acknowledge that WhatsApp’s implementation already isn’t perfect, though it’s not broken. If you use WhatsApp, make sure notifications for key changes are turned on. Otherwise, it’s an excellent, widely used texting substitute.

4. Threema. Threema has the advantage that it isn’t based in the U.S., and it’s more security-focused than WhatsApp. It is fairly feature-rich, including group chat, but it isn’t free, it’s limited to mobile, and it isn’t open source. Threema uses the open source library NaCl, and it has a validation procedure which provides some comfort, although I haven’t looked at it in depth and can’t tell whether it proves the cryptography was done perfectly. This paper seems to indicate that there’s nothing obviously wrong with the implementation. Nonetheless, it cannot be higher on this list while remaining closed source.

5. FB Messenger secret conversations. Facebook Messenger is a free app, and when you use its secret conversations option, the Signal protocol is used. The app is widely used, but it takes effort to switch a conversation to secret, and an encrypted app that isn’t encrypted by default doesn’t do much good. FB Messenger does let you look at your keys, but it isn’t as easy to check them as it is in WhatsApp, and since it isn’t open source, keys could be mismanaged or defaults changed without us knowing. It also doesn’t have other features like group chat or desktop versions.

6. iMessage. Apple has done a good job with an excellent secure protocol for iMessage. It’s also feature-rich, with group chat and more, but it’s only “free” if you are willing to shell out for Apple products. While Apple documents its protocols well, iMessage is not open source, which means we can’t verify how the protocol was implemented. Moreover, we cannot view our own keys in the app, so we don’t know if they change, and we don’t know how Apple manages those keys. It is therefore possible that Apple could either loop government spying into the system (by encrypting all messages with an extra master key) or simply turn over specific keys to the government. How much you rely on iMessage for secure communication should be determined by how much you trust Apple to withstand government attempts, both legal and technological, to access its security system.

Things I have deliberately not listed:

  1. Don’t use SMS. It’s unencrypted and insecure. It would be good not to use it even for 2-factor authentication if you have a better option.
  2. Don’t use email. It’s unencrypted and insecure.
  3. Don’t use Telegram. They rolled their own “homemade” crypto, which you should NEVER EVER DO. Their protocol is insecure and their encryption is not on by default. In fact, there are at least two known vulnerabilities.

Leave a comment on the official Reddit thread.

The Age of Em

I.

I recently had the opportunity to see George Mason Professor Robin Hanson talk about his book, The Age of Em. I also was able to work my way into having a long conversation with him after his presentation.

For those who don’t know, it’s perhaps the strangest book you’ve ever heard of. Hanson projects forward to a time when the technology exists to easily upload human brains into computer simulations. These “emulated” brains will have certain characteristics that come from residing in computer hardware: they can make copies of themselves, save versions of themselves for later, or delete versions of themselves. They will even be able to run faster or slower than normal human brains depending on what hardware they are running on. Hanson spends the book working through the implications of this new society. And there are a lot of fascinating insights.

Hanson discusses the pure physics of this world, since suddenly speed-of-light delays in communication mean a lot: if an em is running at a million times human speed, then a bad ping of 50 ms is subjectively equivalent to waiting over 12 hours for a message today. This leads to ems being located very close together physically, concentrating them in large cities. Their economy also grows much faster than ours due to the rapid speed at which their brains think, although it may be physically constrained by how quickly their hardware can be manufactured. The economy also quickly moves to subsistence wages, as even the most productive members of society can have their brains copied as many times as needed to fill all roles. Elon Musk is no longer a one-of-a-kind genius, and in fact anyone who cannot compete with an Elon Musk copy at their job would likely be cast aside. For a more detailed summary and examples of bizarre ideas, I recommend Part III of Scott Alexander’s post on the book.
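
(A quick sanity check on that figure, for anyone who wants the arithmetic:)

```python
# A 50 ms network delay, as experienced by an em running at 1,000,000x
# human subjective speed.
delay_seconds = 0.050
speedup = 1_000_000
print(delay_seconds * speedup / 3600, "subjective hours")  # ~13.9 hours
```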

II.

In that blog post, Scott goes on in Part IV to discuss the problem of value drift. Hanson does a good job pointing out that past human societies would not have approved of what we now consider acceptable. In some areas, the change in values is stunning. Merely 10 years ago, many had reservations about gay marriage. Merely 50 years ago, many Americans had serious reservations about interracial marriage. On the scale of humanity’s existence as a species, the amount of time we have accepted that people have the right to worship their own religion is minuscule. The stretch of human history in which subsistence was not the only option is likewise small. Professor Hanson told our group that by far the most common reaction to his painting of the future was rejection.

I even asked him specifically about it: Hanson had stated several times that it was not his job or intention to make us like or hate this future, only to know about it. I pointed out that many AI researchers were very concerned about the safety of artificial intelligence and what it might do if it hits an intelligence explosion. To me, there seems to be little difference between the AI intelligence explosion and the Em economy explosion. Both would be human creations, making decisions and changing their values rapidly, at a pace that leaves most “normal” traditional physical humans behind. If many of the smartest people studying AI think that we should do a lot of work to make sure AI values line up with our own, shouldn’t we do the same thing with Ems? Hanson’s answer was basically that if we want to control the value systems of our descendants thousands of mental years in the future, well good luck with that.

Scott in Part IV of his review demonstrates the problem with just allowing this value drift to happen. Hanson calls the era we live in the “dream time”, since it’s evolutionarily unusual for any species to be wealthy enough to have values beyond “survive and reproduce”. For most of human history, there wasn’t much ability to build cities or share knowledge because too many resources were devoted to survival. Today, we have become so productive and intelligent that humans have pushed Earth’s carrying capacity high above the number of people we actually have. We don’t have to spend all our resources on survival, so we can come up with interesting philosophical ideas about morality and the meaning of life. We’ve also harnessed this evolutionary competitiveness to fuel our market economy, where the determiner of what survives isn’t nature but human desires. Unfortunately, when you switch to the Age of Em, suddenly the most productive part of the economy is plunged back into a Malthusian trap, with all resources going to keep the ems alive. Fulfilling human wants may be what drives the economy, but if there are other pressures on ems, they will be willing to sacrifice any values they have to keep themselves alive and competitive. If the economy gives up on fulfilling human demand, I wouldn’t call that a drift in values; I’d call it an absence of values.

If we live in the dream time, then we live in a unique situation where only we can comprehend and formulate higher morality and philosophical purpose. I think we should take advantage of that if we can.

III.

Given his assumption that the Age of Em will happen, Hanson’s observations are excellent, considering how far into the future he is predicting. It’s likely things won’t work out exactly this way; perhaps a single company will hold a patent on brain scanning for a decade before the market really liberalizes, which could seriously delay the rapid economic growth Hanson sees. He acknowledges this, and keeps his book more of a prediction of what will happen if we don’t oppose the change. I’m not sure how strongly Hanson believes that regulation and intellectual property won’t be able to thwart the age of em, but he seems more confident that it won’t be stopped than that it will. This may be an economist’s mistake, where regulation is assumed away as the realm of political science. It’s not unprecedented for weird, inefficient institutions to last far into the future. Intellectual property in the digital age is really weird, all things considered; software patents especially seem like a way to patent pure logic. But there are others: banking being done with paper checks, daylight savings time, the existence of pennies, and, of course, Arby’s. There are also plenty of examples of new technologies that have evolved much faster than regulation, like supplements, e-commerce, and ride-sharing. It remains to be seen which category brain emulations will fall into.

There is also the possibility that emulated brains won’t be the next big shift in human society. Hanson argues that this shift will rival the agricultural revolution and the industrial revolution, which makes a lot of sense if brain emulation is indeed the next big change. Eliezer Yudkowsky (and Scott) think this is incorrect and that artificial intelligence will beat it. This seems like a real possibility. Scott points out that we often come up with technological equivalents of human biology far before actually emulating biology, mostly because biology figured things out accidentally via evolution and is thus often needlessly complicated. For example, aircraft usually fly via fixed-wing aerodynamics, not by flapping. It seems likely that we will reach human-level problem solving via software rather than via brain scanning. Even if we don’t, it seems likely that software could quickly optimize a simulation based on a preliminary brain scan that was too rough to yield a proper brain emulation on its own; such software-assisted reconstruction could experiment with neuron simulation and create an emulation better designed and more specialized than any straight human brain emulation.

It also seems possible that other things could change human history first: very expensive climate change, a crippling pandemic (antibiotic resistance), genetic and epigenetic engineering, and of course some technological revolution we haven’t even imagined (the unknown). Certainly if we assume continued economic growth, brain emulation, artificial intelligence, and genetic engineering all seem like likely candidates to transform humanity. Hanson thinks AI research is really overrated (he used to be an AI researcher) and isn’t progressing very fast. But he was an AI researcher about 25 years ago, and we’ve seen some pretty impressive improvements in machine learning and natural language processing since then. To be fair, we’ve also seen some improvement in brain emulation technology. Genetic engineering was hailed as the next revolution in the 1990s but has floundered ever since; in the last year, though, the use of CRISPR in genome engineering has dramatically increased the feasibility of actually picking and choosing specific genes. Any of these could drastically change human society. Perhaps any genetic improvements would be overshadowed by brain emulation or AI. I guess it depends on the importance of the physical world versus the digital one.

Of course, not all changes would come from improved technology. There’s a significant risk of a global multi-drug-resistant pandemic. Our overuse of antibiotics, the difficulty of making everyone stop overusing them, and our highly integrated world mean we’ve created an excellent scenario for a superbug to appear and spread. Anything resembling the 1918 Spanish Flu epidemic could be devastating to the world population and to economic growth. Climate change poses a similar risk to both life and the economy. If either of these were to happen, it could significantly deter the Age of Em from occurring, or at least delay it, along with a lot of the progress of our civilization. And that’s not even mentioning freak natural disasters like coronal mass ejections.

Overall, predictions are very difficult and if I had to bet, I’d bet that the next big change in human civilization won’t be emulated brains. A good competitor is definitely artificial superintelligence, but when you add in genetic engineering, natural disasters, drug resistant bacterial epidemics, and so on, you have to take the field over brain emulations.

Nonetheless, this book really does make you think about the world in a different way with a perspective both more global and more forward looking. It even makes you question what it means to be human. The ins and outs of the 2016 election really fade away (despite my continued interest and blogging). Political squabbling doesn’t compare to the historical trends of human civilization and the dawn of transhumanism.


Comment on reddit.

First They Came For The Data Analysts, And I Did Not Speak Out…

Data storage is cheap, and odds are good that any information you store today – if you care just a little about preserving it – can last well beyond your own lifespan. If you’re an intelligence agency and you’re collecting all of the surveillance information you possibly can, the easiest part of your job is probably siloing it so that you’ll have it for hundreds of years. If you’ve got any kind of budget for it, it’s easy to hold on to data practically indefinitely. So, if you’re the subject of surveillance by any of that sort of intelligence agency, all sorts of information collected about you may exist in intelligence silos for decades to come, probably long after you’ve forgotten it. That information exists, for practical purposes, effectively forever.

Suppose that your nation’s intelligence agency decides to collect information in bulk on every citizen it can, including you, and you judge that they are responsible and deserving of your trust, so you don’t mind that they are gathering this information about you and storing it indefinitely. Suppose that they actually are deserving of your trust, and the potentially massive amount of information that they collect and silo about you (and everyone else) is never abused, or even seen by a human analyst. Instead it sits in some massive underground data center, occasionally browsed through by algorithms combing for actual, specific security threats.

Trustworthy governments seem to be pretty stable governments, which is fortunate for people lucky enough to be governed by them. Year after year, there is a very high likelihood that the government will still be pretty great. But that likelihood can never be 100%, which is unfortunate because when you have a non-zero likelihood of something happening and you then compound it over a time scale like “effectively forever”, that puts you in uncomfortable territory. It’s hard to anticipate what sort of threats might exist five years from now, and harder to anticipate what might happen in 20. You have no idea what sort of world you’ll live in 40 years from now, but there are good odds that the extensive information siloed away today will still be around.

When I read Scott Alexander’s review of Manufacturing Consent, it was apparent that throughout the 20th century and clear into the present day, places that were stable at one point in time became unstable, and death squads followed shortly after. The Khmer Rouge killed about 25% of the population of Cambodia between 1975 and 1979. 1975 is too close to the present to comfortably say that we exist in a modern world where we don’t have to worry about genocide and mass-murdering states.

We have no idea what the mass-murderers of the distant future will care about. Many of them will probably have fairly commonplace criteria for the groups they want to purge, based on such things as race, religion, cultural heritage, sexual orientation, and so on. But some will devise criteria we can’t even begin to imagine. In the middle of the 19th century, only a tiny minority of people had even heard of communism, yet a couple of generations later that doctrine caused the death of millions of people in camps, wars, purges, and famines. Perhaps we’ve exhausted the space of ideologies that are willing to kill entire categories of people, and maybe we’ve identified every category of people that someone could single out and decide to purge. But are you willing to bet money, much less your life, on the prediction that you won’t belong to some future class of deplorables?

In some of the purges of history, people had a chance to pretend not to be one of the undesirables. There’s no obvious sign that a Pear Party-affiliated death squad can use to identify a member of the Pineapple Party when the Pineapple Party government is toppled, so long as the Pineapplists know that they’re being targeted by Pear partisans and now is the time to scrape off their Pineapple Party ’88 bumper stickers. High-profile Pineapplists have no option but to flee the country, but the average member can try to lay low through the ensuing sectarian violence. That’s how it used to be, at least. But today people can scroll back 5 years in your Facebook profile and see that you were posting pro-Pineapple links then that you’ve since forgotten.

But open support of the Pineapple Party is too obvious. The undesirables of the future may have enough foresight to cover their tracks when it comes to clear-cut evidence like that. But, returning to the trustworthy intelligence agency we’ve mandated with finding people who want to harm us but also don’t want to be found, there are other ways to filter people. Machine learning and big data analysis are mixed bags. If you really, really need them to preemptively identify people who are about to commit atrocities, you’re probably going to be let down. It’s hard to sift through immense streams of data to find people who don’t want to be found. Not impossible, but machine learning isn’t a magic wand. That said, people are impressed with machine learning for a reason. Sometimes it pulls a surprising amount of signal out of what was previously only noise. And we are, today, the worst at discerning signal from noise that we will ever be. Progress in computational statistics could hit a wall next year, and then we can all temper our paranoia about targeted advertisements predicting our deepest, darkest secrets and embarrassing us with extremely specific ad pitches when our friends are looking over our shoulders. Maybe.

But perhaps it’s possible, if you’re patient and have gigantic piles of data lying around, to combine text analysis, social graph information, and decades-old Foursquare check-ins in order to identify closeted Pineapple Party members. And maybe it requires a small army of statisticians and programmers to do so, so you’re really not worried when the first paper is published that shows that researchers were able to identify supporters of Pineapplism with 65% accuracy. But then maybe another five years goes by and the work that previously took that small army of researchers months to do is now available as an R package that anyone with a laptop and knowledge of Statistics 101 can download and use. And that is the point where having gigantic piles of data siloed for a practically infinite amount of time becomes a scary liability.
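
To be concrete about how low that barrier could get: the R-package scenario above is hypothetical, but even today the basic “classify people from their text” workflow is only a few lines of off-the-shelf Python. Everything below is invented for the fruit-party hypothetical; the point is how little code and expertise it takes, not that this toy model works well.

```python
# Illustrative only: a tiny off-the-shelf text classifier for the invented
# fruit-party scenario. The "data" is fabricated; the point is the low
# barrier to entry, not the quality of this toy model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "pineapple on pizza is a human right",
    "the pineapple party picnic was great",
    "pears are the only honest fruit",
    "vote pear for a crisper tomorrow",
]
labels = ["pineapple", "pineapple", "pear", "pear"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["loved the pineapple rally last night"]))
```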

The scenario where Pearists topple the government, swarm into the intelligence agency’s really big data center, and then know exactly where to go to round up undesirables might be fairly unlikely on its own. But there are actually many less-obvious opportunities for would-be Pearist mass-murderers. Maybe someone finds a decades-old flaw in a previously trusted security protocol and Pear-affiliated hackers breach the silo. Maybe they get information from the giant surveillance silo of a country that, now that we think of it, no one should have sold all of that surveillance software to. Maybe the intelligence agency has a Pearist mole. Maybe the whole intelligence apparatus was Pear-leaning the whole time. Maybe a sizeable majority of the country elects a Pearist demagogue who promises to round up Pineapplists and put them in camps. This sort of thing isn’t behind us.

The data silo is a threat to everyone. In the long run, we can’t anticipate who will have access to it. We can’t anticipate what new category will define the undesirables of the future. And those unknowing future undesirables don’t know what presently-inconspicuous evidence is being filed away in the silo now to resurface decades in the future. But the trend, as it exists, points to a future where large caches of personal data are a liability because future off-the-shelf machine learning tools may be as easy to use and overpowered relative to machine learning’s bleeding edge today as our smartphones are compared to the Apollo Guidance Computer. The wide availability of information on the open internet might itself be dangerous looked at through this lens. But if your public tweets are like dry leaves accumulating in your yard and increasing the risk of a dangerous data-fueled-pogrom wildfire, then mass surveillance silos are like giant rusty storage tanks next to your house that intelligence agencies are pumping full of high-octane petroleum as fast as they can.


Comment on reddit.

Picture credit: Wikimedia Foundation Servers by Wikipedia user Victor Grigas, licensed under CC-BY-SA-3.0.

Should Tesla charge more for their cars?

Tesla Motors announced that their newest car, the Model 3, is now available for pre-order. It has always been Tesla’s stated purpose to bring down the cost of electric vehicles dramatically: first sell high-end cars, then use those profits to innovate costs down to levels affordable for the general public. It’s an admirable goal that combines the best intentions with good incentives, using idealism to drive profits.

Tesla has, confusingly, sold three models of cars prior to the Model 3: the early Tesla Roadster (all priced over $109k), the ultra-luxury sedan Model S (starting at $76k, but most sell for over $100k), and the newer Model X SUV (about $5,000 more than the Model S). Few Model X’s have been shipped, and only about 2,500 Tesla Roadsters were ever built. The vast majority of Tesla’s automobiles have been Model S’s sold between 2012 and 2016; in that four-year span, roughly 107,000 cars were sold worldwide, with about 63,000 in the United States.

In the past couple of weeks, Tesla has received almost 400,000 Model 3 pre-orders (early counts put the number at 325,000).

Making matters worse, Tesla has said they expect to start shipping at the end of 2017. Some analysts say that Tesla will ship about 12,000 cars in 2017 and another 60,000 in 2018. But even this might be optimistic, since Tesla was supposed to start building the Model X last year but only got a couple hundred out of the factory before January.

Tesla will get better at manufacturing, but they are not ready to switch from the high-end market to the mass market (or as mass-market as a $35,000 base model gets). The Tesla “master plan” is not ready to attack this level of the market yet, but that’s not to say it isn’t succeeding in other ways; as Ben Thompson wrote: “The real payoff of Musk’s ‘Master Plan’ is the fact that Tesla means something.” In fact it means so much that the demand for a $35k Tesla in 2017 is something like 10x the predicted supply. Tesla should take advantage of this.

The obvious economic answer to quantity demanded outstripping quantity supplied is to raise the price. Scaling Tesla’s manufacturing output to new heights is not going to be easy, but it will be easier with additional resources, and Tesla could use some additional resources (they lost about $300 million in 2014). Right now, people will be waiting around for their cars for years. Why not take more money from the people who want a car sooner, so that more innovation can be done to help the people at the back of the line? That’s Tesla’s whole plan anyway. Creating an affordable family car that you can only make 50,000 of every year doesn’t help many families!

Now, of course, it’s true that some of the appeal of Tesla is that they are trying to transform the auto industry, and if they charge more for the Model 3, one could argue they aren’t as transformational as they claim.  But I’d counter with Thompson’s comparison to Apple, in that the Tesla brand itself is drenched in cool. Tesla’s brand is quite valuable, and the best way to help humanity with that brand is to push harder for innovation.

An awesome, widely available $35,000 electric car will come, but for now, Tesla has the opportunity to marshal more resources to build a better future; it would be silly to not take advantage of that.


Photo Credit: “Candy Red Tesla Model 3” is a derivative of this photo by Steve Jurvetson, used under CC BY 2.0. “Candy” is licensed under CC BY 2.0 by Mariordo.

Legal Innovation: Warrant Canaries

I recently came across a fascinating legal concept called warrant canaries. I’m going to cover the idea briefly, but if you want to know more about them in detail, I highly recommend this Warrant Canary FAQ at the Electronic Frontier Foundation.

The context is that many online services based in the United States can be compelled by the FBI to give whatever information they have to law enforcement through National Security Letters. Those documents often come with gag orders preventing the companies from informing their customers that they are being spied on, even if the service is being provided specifically so that users can get encrypted, private communication. It’s hard to pin down the exact constitutionality of NSLs. They were ruled unconstitutional in 2013, but it looks like the case was remanded in 2015 after the passage of the USA Freedom Act. Given the government’s continued efforts to obtain information regardless of constitutionality and the limitations placed on it by Congress, it would be nice if a service had some way to communicate that it is under duress from the government.

The usefulness of warrant canaries (I’ll get to what they are in a moment) is based on two legal concepts: (1) it’s not illegal to inform anyone of a warrant you haven’t been served, and (2) the state cannot compel false speech.

The first statement is common sense, since you can’t be prohibited from stating that something hasn’t happened. The second is a bit more subtle; a stronger statement would be that the state cannot compel speech at all, but that’s not always true. The state can sometimes compel commercial speech in order to inform consumers so they can make accurate decisions. The EFF elaborates that “…the cases on compelled speech have tended to rely on truth as a minimum requirement”.

This is essential because it allows companies with encryption products to convey highly relevant information to their customers. Companies can publicly post a message indicating they have not received a warrant because of the first legal concept, and they can immediately take down their public message when they do receive a warrant because the state cannot compel false speech.

To ensure the authenticity of the message stating that the given company has not been subject to a NSL, many go an extra step and sign their messages with a PGP key (example here).
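
The signing step itself is simple. As an illustration of the mechanics (using Ed25519 signatures via PyNaCl rather than actual PGP, which is what real canaries typically use), the provider publishes a dated statement plus a signature that anyone can check against the provider’s well-known public key, and simply stops re-publishing the statement if a warrant ever arrives:

```python
# Warrant-canary sketch using Ed25519 signatures (PyNaCl) in place of PGP.
# The mechanics are the same: publish a dated "no NSLs received" statement
# and a signature, and stop publishing it if that ever changes.
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

provider_signing_key = SigningKey.generate()            # kept secret by the provider
provider_verify_key = provider_signing_key.verify_key   # published for everyone

canary = b"Example Service has received no National Security Letters as of the date of this statement."
signed_canary = provider_signing_key.sign(canary)       # this is what gets posted

# Any user can verify the posted canary against the well-known verify key.
try:
    provider_verify_key.verify(signed_canary)
    print("canary verified: no warrant as of the stated date")
except BadSignatureError:
    print("bad signature: treat the canary as missing")
```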

Of course, a foolproof way to ensure no data can be handed over is to simply encrypt all of it, like Apple has done with the iPhone, ProtonMail does for email, and everyone who has ever sent PGP-encrypted emails has been doing since the 90s. But I still like this idea, because individuals who run encryption services should not be forced to become government puppets, as the FBI hoped to do to Ladar Levison.

The weakness is that we don’t know what we don’t know, so it’s possible the government already has a new Secret National Security Letter which it uses to compel companies to lie under some made up interpretation of an arcane piece of legislation. The only real security is end-to-end encrypted communication or being Hillary Clinton.

 

Banning Unbreakable Smartphone Encryption is Stupid

At least two states, New York and California, have introduced legislation that would ban the sale of smartphones in those states if the phones cannot be searched at the request of law enforcement. This would likely mean no phones would be sold with unbreakable encryption, although I suppose Apple or Samsung could manufacture two types of phones and then just sell all the encrypted ones from New Hampshire or something. These bills are still somewhat controversial, and as they have gotten press coverage, a House bill has been introduced that would preempt state legislation like the bills in New York and California. Continue reading Banning Unbreakable Smartphone Encryption is Stupid