Encrypted Communication Apps

I have discussed this idea in the past, but normally I’ve only gotten excitement about encrypted communication from my fellow libertarians and netsec friends. But with the current Presidential situation, there seems to be more interest in communicating without being overheard by the government, even among my government-loving left-wing friends. And this is excellent! Even if you don’t need privacy, by communicating securely all the time, you make it less notable when you do have to communicate securely, and you create more encrypted traffic that other government targets of surveillance can blend into.

First, let’s go over a very quick summary of encryption. If you’re already familiar with encryption, skip down past this section and the pictures to the list.

Public Key Encryption in 5 Minutes

An encryption algorithm takes information, like text, numbers, or picture data (it’s all just 0s and 1s to computers), and outputs different data on the other side. A good encryption algorithm produces output that looks randomly generated, so that no information can be gained about the source text. That output is then sent in the clear (over the internet, where people might be spying) to the recipient. The recipient then reverses the process, decrypting the message and recovering the original text, numbers, picture data, etc. However, if an algorithm always produced the same output from the same input, bad guys could figure out what you were saying pretty quickly. This introduces the idea of keys. A key is a number the algorithm uses to change the output in a predictable way. If both the sender and the recipient have a secret key, they can use their keys and the algorithm to send messages that only they can read (without the right key, the algorithm won’t reverse the encryption):

Symmetric key encryption. Public domain image.
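
To make the symmetric idea concrete, here’s a toy sketch in Python using XOR with a shared random key. This is purely illustrative; XOR with a repeating key is not a real cipher, and real algorithms like AES are far more sophisticated.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the key; running it a second time undoes it
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(32)                  # the shared secret key
ciphertext = xor_cipher(b"meet at noon", key)  # gibberish without the key
plaintext = xor_cipher(ciphertext, key)        # the same key reverses it
assert plaintext == b"meet at noon"
```

The important property is that the same key both scrambles and unscrambles the message, which is exactly why both parties have to keep it secret.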

But we can do better! In the previous scenario, we need to somehow communicate the secret key separately from our message. That’s a problem, since we are likely using encryption precisely because we can’t communicate openly. The solution is something called public key encryption. In this system, each person has two keys, one public and one private. To send someone a message, you encrypt it with their public key and send it to them. They alone can then decrypt the message with their private key.

Public key cryptography. Public domain image.
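
The best-known example of this idea is RSA. Here’s a textbook sketch with deliberately tiny numbers; real keys are hundreds of digits long, and real implementations add padding and many other safeguards, so treat this as illustration only.

```python
# Textbook RSA with tiny primes -- for illustration only, wildly insecure.
p, q = 61, 53
n = p * q                # n = 3233 is shared as part of both keys
phi = (p - 1) * (q - 1)  # kept secret; computable only if you know the primes
e = 17                   # public exponent: (e, n) is the public key
d = pow(e, -1, phi)      # private exponent: (d, n) is the private key (Python 3.8+)

message = 65                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
decrypted = pow(ciphertext, d, n)  # only the private key reverses it
assert decrypted == message
```

The security comes from the fact that recovering the private exponent requires factoring n into p and q, which is infeasible when the primes are large enough.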

The reality of the mathematics is slightly more complicated, but for our purposes, what matters is how the public and private keys are handled in each messaging app. Managing keys is difficult and confusing for users, but if a private key is compromised, the communication is no longer secure. Therefore, when using encrypted messaging, it’s important to be aware of how the app uses and manages the keys.

The Best Apps

The following is my ranked order of preferred secure communication:

1. Signal. This is the gold standard encrypted communication app. It’s open source, free, has group chat, works on mobile and desktop, and of course is end-to-end encrypted. It even has encrypted voice calls. The one significant drawback is that it requires a phone number. It uses your phone number to distribute your public key to everyone who needs to contact you. Because of this, it offers excellent encryption (requiring no security knowledge!), but no anonymity. If you want that, check the next entry.

2. PGP Encrypted email. So this one is a bit complicated. OpenPGP (PGP stands for Pretty Good Privacy) is an open protocol for sending encrypted messages. Unlike the other entries on this list, PGP isn’t an app, and it therefore requires you to produce and manage your own keys. The tools you can find at the link will allow you to produce a private and public key pair. To send a message to someone, you have to obtain that person’s public key from them, use the software to encrypt the message with their public key, and then send it to them. Because it is so much work, I have this method second on the list, but there is no better way to communicate securely and anonymously. To better distribute your public key, I recommend keybase.io (use that link to send us encrypted emails!). The good thing about PGP is that it can be used with any email, or really any other method of insecure communication. Additionally, it’s open source, free, and very encrypted.

Both Signal and PGP are very secure methods of communication. The following apps are good, but they are not open source and thus are not as provably secure. They are still better than just using unencrypted methods like SMS text, email, etc.

3. WhatsApp. WhatsApp is pretty good. It’s free, widely used, works on mobile and desktop, has group chat and encrypted phone calls, and is encrypted by default. Moxie Marlinspike, who made Signal, the number one app on this list, actually implemented the same Signal protocol in WhatsApp (which, like Signal, requires a phone number). That’s great, but unfortunately WhatsApp isn’t open source, so while Moxie vouches for WhatsApp now, we don’t know what could happen in the future. WhatsApp could push out an update that does sneaky but bad things, like turning encryption off by default. It’s also important to acknowledge that WhatsApp’s implementation already isn’t perfect, though it’s not broken. If you use WhatsApp, make sure notifications are turned on for key changes. Otherwise, it’s an excellent, widely used texting substitute.

4. Threema. Threema has an advantage in that it isn’t based in the U.S., and it’s more security focused than WhatsApp. Threema is fairly feature rich, including group chat, but it isn’t free, it’s limited to mobile, and it isn’t open source. Threema uses the open source library NaCl, and they have a validation procedure which provides some comfort, although I haven’t looked at it in depth and can’t tell whether it proves the cryptography was implemented perfectly. This paper seems to indicate that there’s nothing obviously wrong with their implementation. Nonetheless, it cannot be higher on this list while remaining closed source.

5. FB Messenger secret conversations. Facebook Messenger is a free app, and when you use its secret conversations option, the Signal protocol is used. The app is also widely used, but it takes effort to switch a conversation to secret, and an encrypted app that isn’t encrypted by default doesn’t do much good. FB Messenger does let you look at your keys, but it isn’t as easy to check as in WhatsApp, and since it isn’t open source, keys could be mismanaged or defaults changed without us knowing. It also doesn’t have other features like group chat or desktop versions.

6. iMessage. Apple has done a good job with an excellent secure protocol for iMessage. It’s also feature rich, with group chat and more, but it’s only “free” if you are willing to shell out for Apple products. While Apple does a good job documenting their protocols, iMessage is not open source, which means we can’t verify how the protocol was implemented. Moreover, we cannot view our own keys in the app, so we don’t know if they change, and we don’t know how Apple manages those keys. It is therefore possible that Apple could either loop government spying into their system (by encrypting all messages with an extra master key) or simply turn over specific keys to the government. How much you are willing to use iMessage for secure communication should be determined by how much you trust Apple to withstand government attempts, both legal and technological, to access their security system.

Things I have deliberately left off this list:

  1. Don’t use SMS. It’s unencrypted and insecure. Ideally, don’t even use it for 2-factor authentication if you have a better option.
  2. Don’t use email. It’s unencrypted and insecure.
  3. Don’t use Telegram. They rolled their own “homemade” crypto library, which is something you should NEVER EVER DO. Their protocol is insecure and their encryption is not on by default. In fact, there are at least two known vulnerabilities.

Leave a comment on the official Reddit thread.

2017 Predictions

It’s fun to have opinions, and it’s easy to craft a narrative to fit your beliefs. But it’s especially dangerous to look back at events and place them retroactively into your model of the world. You can’t learn anything if you’re only ever looking for evidence that supports you.  However, if you try to use your model of the world to create testable predictions, those predictions can be proven right or wrong, and you can actually learn something. Incorrect predictions can help update our models.

This is, of course, the basis of the scientific method and of generally increasing our understanding of the world. Making predictions also keeps us humble; we don’t know everything, and putting our beliefs to the test forces us to reduce our certainty about subjects we haven’t researched instead of making baseless claims. Confidence levels are an important part of predictions, as they force us to think in terms of value and betting: a 90% confidence level means I would take a bet paying $100 if I’m right that required me to put up anything less than $90. Moreover, making predictions isn’t just a good way to increase your knowledge; people who have opinions but refuse to predict things with accompanying confidence levels, and who therefore refuse to subject their theories to scrutiny and testability, should be regarded as fraudulent and intellectually dishonest.
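
The betting logic reduces to a one-line expected value calculation: at 90% confidence, a bet paying $100 is worth $90 on average, so any smaller stake is profitable. A minimal sketch:

```python
def expected_profit(confidence: float, payout: float, stake: float) -> float:
    # You pay `stake` up front and receive `payout` if the prediction comes true
    return confidence * payout - stake

# At 90% confidence, staking anything under $90 on a $100 payout is positive
# expected value; anything over $90 is negative.
assert expected_profit(0.90, 100, 89) > 0
assert expected_profit(0.90, 100, 91) < 0
```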

First let’s take a look at how I did this past year, and see if my calibration levels were correct. Incorrect predictions are crossed out.

Postlibertarian Specific

  1. Postlibertarian to have >10 additional posts by July 1, 2016:  70%
  2. Postlibertarian Twitter to have more than 240 followers:  70%
  3. Postlibertarian.com to have >10k page loads in 2016: 50% (had 30k according to StatCounter)
  4. The predictions on this page will end up being underconfident: 60%

World Events

  1. Liberland will be recognized by <5 UN members: 99% (recognized by 0)
  2. Free State Project to reach goal of 20,000 people in 2016: 50% (occurred February 3rd)
  3. ISIS to still exist: 80%
  4. ISIS to kill < 100 Americans 2016: 80% (I think <100 were killed by any terrorists, fewer in combat)
  5. US will not get involved in any new major war with death toll of > 100 US soldiers: 80%
  6. No terrorist attack in the USA will kill > 100 people: 80% (50 did die in the Orlando shooting unfortunately)
  7. Donald Trump will not be Republican Nominee: 80% (whoops)
  8. Hillary Clinton to be Democratic nominee: 90%
  9. Republicans to hold Senate: 60%
  10. Republicans to hold House: 80%
  11. Republicans to win Presidential Election: 50% (I predicted in December, Nate Silver had Trump at 35% the day of, who’s a genius now??)
  12. I will vote for the Libertarian Presidential Candidate: 70% *
  13. S&P 500 level end of year < 2500: 70%
  14. Unemployment rate December 2016 < 6% : 70%
  15. WTI Crude Oil price < $50 : 80%
  16. Price of Bitcoin > $500:  60%
  17. Price of Bitcoin < $1000: 80%
  18. Sentient General AI will not be created this year: 99%
  19. Self-driving cars will not be available this year to purchase / legally operate for < $100k: 99%
  20. I will not be able to rent trips on self-driving cars from Uber/ Lyft: 90% **
  21. Humans will not land on moon by end of 2016: 95%
  22. Edward Snowden will not be pardoned by end of Obama Administration: 80% ***

*I didn’t personally vote for the libertarian candidate, but I did trade my vote, resulting in Gary Johnson getting two votes more than he would have had I not voted at all. I’m counting this as at least a vote for Johnson.

**Technically, I am not particularly able to get a ride on a self-driving Uber because I don’t live in Pittsburgh, but I don’t think that’s what I meant. I also didn’t expect any self-driving Uber rides to be available anywhere, so I’m counting it against me.

***Obama still has a few weeks to pardon Snowden, but it’s not looking good

So let’s take a look at how I did by category:

  • Of items I marked as 50% confident, 3 were right and 0 were wrong.
  • Of items I marked as 60% confident, 3 were right and 0 were wrong.
  • Of items I marked as 70% confident, 4 were right and 1 was wrong.
  • Of items I marked as 80% confident, 7 were right and 2 were wrong.
  • Of items I marked as 90% confident, 1 was right and 1 was wrong.
  • Of items I marked as 95% confident, 1 was right and 0 were wrong.
  • Of items I marked as 99% confident, 3 were right and 0 were wrong.
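
The tallies above are easy to turn into observed frequencies for a calibration check (numbers taken directly from the list):

```python
# (stated confidence, correct, incorrect) from the tallies above
results = [(0.50, 3, 0), (0.60, 3, 0), (0.70, 4, 1),
           (0.80, 7, 2), (0.90, 1, 1), (0.95, 1, 0), (0.99, 3, 0)]

for stated, right, wrong in results:
    observed = right / (right + wrong)
    # Perfect calibration would have observed roughly equal to stated
    print(f"stated {stated:.0%}: observed {observed:.0%} over {right + wrong} predictions")
```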

As you can see from this data graphed, I have absolutely no idea what I’m talking about when it comes to predictions.

You’re supposed to be as close to the perfect calibration line as possible. The big problems are that I only had 2 or 3 predictions for the 50%, 60%, and 90% confidence intervals. For example, my slip-up on predicting Uber wouldn’t have self-driving cars this year means I was only 1 for 2 on 90% predictions. Clearly I need to find more things to predict, as I had 5 and 9 predictions for the 70% and 80% confidence levels, which were right about on the mark. Luckily for next year, I have almost double the number of predictions:

Predictions for 2017:

World Events

  1. Trump Approval Rating end of June <50% (Reuters or Gallup): 60%
  2. Trump Approval Rating end of year <50% (Reuters or Gallup): 80%
  3. Trump Approval Rating end of year <45% (Reuters or Gallup): 60%
  4. Trump 2017 Average Approval Rating (Gallup) <50%: 70%
  5. ISIS to still exist as a fighting force in Palmyra, Mosul, or Al-Raqqah: 60%
  6. ISIS to kill < 100 Americans: 80%
  7. US will not get involved in any new major war with death toll of > 100 US soldiers: 60%
  8. No terrorist attack in the USA will kill > 100 people: 90%
  9. France will not vote to leave the EU: 80%
  10. The UK will trigger Article 50 this year: 70%
  11. The UK will not fully leave the EU this year: 99%
  12. No country will leave the Euro (adopt another currency as their national currency): 80%
  13. S&P 500 2017 >10% growth: 60%
  14. S&P 500 will be between 2000 and 2850: 80% (80% confidence interval)
  15. Unemployment rate December 2017 < 6% : 70%
  16. WTI Crude Oil price > $60 : 70%
  17. Price of Bitcoin > $750: 60%
  18. Price of Bitcoin < $1000: 50%
  19. Price of Bitcoin < $2000: 80%
  20. There will not be another cryptocurrency with market cap above $1B: 80%
  21. There will not be another cryptocurrency with market cap above $500M: 50%
  22. Sentient General AI will not be created this year: 99%
  23. Self-driving cars will not be available this year for general purchase: 90%
  24. Self-driving cars will not be available this year to purchase / legally operate for < $100k: 99%
  25. I will not be able to buy trips on self-driving cars from Uber/Lyft in a location I am living: 80%
  26. I will not be able to buy a trip on a self-driving car from Uber/Lyft without a backup employee in the car anywhere in the US: 90%
  27. Humans will not land on moon by end of 2017: 95%
  28. SpaceX will bring humans to low earth orbit: 50%
  29. SpaceX successfully launches a reused rocket: 60%
  30. No SpaceX rockets explode without launching their payload to orbit: 60%
  31. Actual wall on Mexican border not built: 99%
  32. Some increased spending on immigration through expanding CBP, ICE, or the border fence: 80%
  33. Corporate Tax Rate will be cut to 20% or below: 50%
  34. Obamacare (at least mandate, community pricing, pre-existing conditions) not reversed: 80%
  35. Budget deficit will increase: 90%
  36. Increase in spending or action on Drug War (e.g. raiding marijuana dispensaries, increased spending on DEA, etc): 70%
  37. Some tariffs raised: 90%
  38. The US will not significantly change its relationship to NAFTA: 60%
  39. Federal government institutes some interference with state level legal marijuana: 60%
  40. At least one instance where the executive branch violates a citable civil liberties court case: 70%
  41. Trump administration does not file a lawsuit against any news organization for defamation: 60%
  42. Trump not impeached (also no Trump resignation): 95%

Postlibertarian

  1. Postlibertarian.com to have >15 more blog posts by July 1, 2017: 80%
  2. Postlibertarian.com to have >30 blog posts by end of year: 70%
  3. Postlibertarian.com to have fewer hits than last year (no election): 60%
  4. Postlibertarian Twitter account to have <300 followers: 90%
  5. Postlibertarian Twitter account to have >270 followers: 60%
  6. Postlibertarian Subreddit to have <100 subscribers: 90%

 


Leave a comment on this post’s reddit thread.

The Age of Em

I.

I recently had the opportunity to see George Mason Professor Robin Hanson talk about his book, The Age of Em. I also was able to work my way into having a long conversation with him after his presentation.

For those who don’t know, it’s perhaps the strangest book you’ve ever heard of. Hanson projects forward to a time when the technology exists to easily upload human brains into computer simulations. These “emulated” brains will have certain characteristics from residing in computer hardware: they can make copies of themselves, save versions of themselves for later, or delete versions of themselves. They will even be able to run faster or slower than normal human brains depending on what hardware they are running on. Hanson spends the book working through the implications of this new society, and there are a lot of fascinating insights.

Hanson discusses the pure physics of this world, where suddenly speed-of-light delays in communication matter a lot: if an em is running at a million times human speed, then a bad ping of 50 ms feels the way a message taking over 12 hours to arrive would feel today. This pushes ems into very close physical proximity, concentrating them in large cities. Their economy also grows much faster than ours due to the rapid speed at which their brains think, although it may be physically constrained by how quickly their hardware can be manufactured. The economy also quickly moves to subsistence wages, as even the most productive members of society can have their brains copied as many times as needed to fill all roles. Elon Musk is no longer a one-of-a-kind genius; in fact, anyone who cannot compete with an Elon Musk copy in their job would likely be cast aside. For a more detailed summary and examples of bizarre ideas, I recommend Part III of Scott Alexander’s post on the book.
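
The subjective-time arithmetic is easy to check. Under the book’s assumption of a million-fold speedup, a 50 ms delay stretches to nearly 14 subjective hours, consistent with the “over 12 hours” figure:

```python
speedup = 1_000_000  # an em running a million times human speed
ping = 0.050         # a bad 50 ms network delay, in seconds

# Real-world seconds multiplied by the speedup give subjective seconds
subjective_hours = ping * speedup / 3600
print(subjective_hours)  # about 13.9 subjective hours
```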

II.

In that blog post, Scott goes on to discuss in Part IV the problem of value drift. Hanson does a good job pointing out that past human societies would not have approved of what we now consider acceptable. In some areas, the change in values is stunning. Merely 10 years ago, many had reservations about gay marriage. Merely 50 years ago, many Americans had serious reservations about interracial marriage. On the scale of humans’ existence as a species, the amount of time we have accepted that people have the right to worship their own religion is minuscule. The section of human history where subsistence existence was not the only option is likewise small. Professor Hanson told our group that by far the most common reaction to his painting of the future was rejection.

I even asked him specifically about it: Hanson had stated several times that it was not his job or intention to make us like or hate this future, only to know about it. I pointed out that many AI researchers were very concerned about the safety of artificial intelligence and what it might do if it hits an intelligence explosion. To me, there seems to be little difference between the AI intelligence explosion and the Em economy explosion. Both would be human creations, making decisions and changing their values rapidly, at a pace that leaves most “normal” traditional physical humans behind. If many of the smartest people studying AI think that we should do a lot of work to make sure AI values line up with our own, shouldn’t we do the same thing with Ems? Hanson’s answer was basically that if we want to control the value systems of our descendants thousands of mental years in the future, well good luck with that.

Scott, in Part IV of his review, demonstrates the problem with just allowing this value drift to happen. Hanson calls the era we live in the “dream time,” since it’s evolutionarily unusual for any species to be wealthy enough to have values beyond “survive and reproduce.” For most of human history, there wasn’t much ability to build cities or share knowledge because too many resources were focused on survival. Today, we have become so productive and intelligent that humans have elevated Earth’s carrying capacity high above the number of people we have. We don’t have to spend all our resources on survival, so we can come up with interesting philosophical ideas about morality and the meaning of life. We’ve also harnessed this evolutionary competitiveness to fuel our market economy, where the determiner of what survives isn’t nature but human desires. Unfortunately, when you switch to the Age of Em, suddenly the most productive part of the economy is plunged back into a Malthusian trap, with all resources going to keep the ems alive. Fulfilling human wants may be what drives the economy, but if there are other pressures on ems, they will be willing to sacrifice any values they have to keep themselves alive and competitive. If the economy gives up on fulfilling human demand, I wouldn’t call that a drift in values; I’d call that an absence of values.

If we live in the dream time, then we live in a unique situation where only we can comprehend and formulate higher morality and philosophical purpose. I think we should take advantage of that if we can.

III.

Hanson’s observations, given his assumption that the Age of Em will happen, are excellent, considering he is predicting far into the future. It’s likely things won’t work out exactly this way; perhaps a single company will hold a patent on brain scanning for a decade before the market really liberalizes, which could seriously delay the rapid economic growth Hanson sees. He acknowledges this, and frames his book as more of a prediction of what will happen if we don’t oppose this change. I’m not sure how confident Hanson is that regulation and intellectual property won’t be able to thwart the age of em, but he seems more confident that it won’t be stopped than that it will be. This may be an economist’s mistake, where regulation is sort of assumed away as the realm of political science. It’s not unprecedented for weird, inefficient institutions to last far into the future. Intellectual property in the digital age is really weird, all things considered; software patents especially seem like a way to patent pure logic. But there are others: banking done with paper checks, daylight savings time, the existence of pennies, and, of course, Arby’s. There are also plenty of examples of new technologies that have evolved much faster than regulation, like supplements, e-commerce, and ride-sharing. It remains to be seen which category brain emulation will fall into.

There is also the possibility that emulated brains won’t be the next big shift in human society. Hanson argues that this shift will rival the agricultural revolution and the industrial revolution, which makes a lot of sense if brain emulation is indeed the next big change. Eliezer Yudkowsky (and Scott) think this is incorrect and that artificial intelligence will beat it. This seems like a real possibility. Scott points out that we often come up with technological equivalents of human biology far before actually emulating biology, mostly because biology has accidentally figured things out via evolution and is thus often needlessly complicated. For example, aircraft usually fly via fixed-wing aerodynamics, not by flapping. It seems likely that we will reach human-level problem solving via software rather than via brain scanning. Even if we don’t, software could quickly optimize a simulation based on a preliminary brain scan that was too rough to yield a proper brain emulation on its own; such software-assisted reconstruction could experiment with neuron simulation and create an emulation better designed and more specialized than any faithful copy of a human brain.

It also seems possible that other things could happen first that change human history: very expensive climate change, a crippling pandemic (antibiotic resistance), genetic and epigenetic engineering, and of course some technological revolution we haven’t even imagined (the unknown). Certainly, if we assume continued economic growth, brain emulation, artificial intelligence, and genetic engineering all seem like likely candidates to transform humanity. Hanson thinks AI research is really overrated (he used to be an AI researcher) and isn’t progressing very fast. But he was an AI researcher about 25 years ago, and we’ve seen some pretty impressive improvements in machine learning and natural language processing since then. To be fair, we’ve also seen some improvement in brain emulation technology. Genetic engineering was hailed as the next revolution in the 1990s but has floundered ever since; last year, though, the use of CRISPR in genome engineering dramatically increased the feasibility of actually picking and choosing specific genes. Any of these could drastically change human society. Perhaps any genetic improvements would be overshadowed by brain emulation or AI. I guess it depends on the importance of the physical world versus the digital one.

Of course, not all changes could be from improved technology. There’s a significant risk of a global multi-drug resistant pandemic. Our overuse of antibiotics, the difficulty in making everyone stop overusing them, and our highly integrated world means we’ve created an excellent scenario for a superbug to appear and spread. Anything resembling the 1918 Spanish Flu Epidemic could be devastating to the world population and to economic growth. Climate change poses a similar risk to both life and the economy. If either of these were to happen, it could significantly deter the Age of Em from occurring or at least delay it, along with a lot of the progress of our civilization. And that’s not even mentioning additional freak natural disasters like coronal mass ejections.

Overall, predictions are very difficult and if I had to bet, I’d bet that the next big change in human civilization won’t be emulated brains. A good competitor is definitely artificial superintelligence, but when you add in genetic engineering, natural disasters, drug resistant bacterial epidemics, and so on, you have to take the field over brain emulations.

Nonetheless, this book really does make you think about the world in a different way with a perspective both more global and more forward looking. It even makes you question what it means to be human. The ins and outs of the 2016 election really fade away (despite my continued interest and blogging). Political squabbling doesn’t compare to the historical trends of human civilization and the dawn of transhumanism.


Comment on reddit.

First They Came For The Data Analysts, And I Did Not Speak Out…

Data storage is cheap, and odds are good that any information you store today – if you care just a little about preserving it – can last well beyond your own lifespan. If you’re an intelligence agency and you’re collecting all of the surveillance information you possibly can, the easiest part of your job is probably siloing it so that you’ll have it for hundreds of years. If you’ve got any kind of budget for it, it’s easy to hold on to data practically indefinitely. So, if you’re the subject of surveillance by any of that sort of intelligence agency, all sorts of information collected about you may exist in intelligence silos for decades to come, probably long after you’ve forgotten it. That information exists, for practical purposes, effectively forever.

Suppose that your nation’s intelligence agency decides to collect information in bulk on every citizen it can, including you, and you judge that they are responsible and deserving of your trust, so you don’t mind that they are gathering this information about you and storing it indefinitely. Suppose that they actually are deserving of your trust, and the potentially massive amount of information that they collect and silo about you (and everyone else) is never abused, or even seen by a human analyst. Instead it sits in some massive underground data center, occasionally browsed through by algorithms combing for actual, specific security threats.

Trustworthy governments seem to be pretty stable governments, which is fortunate for people lucky enough to be governed by them. Year after year, there is a very high likelihood that the government will still be pretty great. But that likelihood can never be 100%, which is unfortunate because when you have a non-zero likelihood of something happening and you then compound it over a time scale like “effectively forever”, that puts you in uncomfortable territory. It’s hard to anticipate what sort of threats might exist five years from now, and harder to anticipate what might happen in 20. You have no idea what sort of world you’ll live in 40 years from now, but there are good odds that the extensive information siloed away today will still be around.

When I read Scott Alexander’s review of Manufacturing Consent, it was apparent that throughout the 20th century and clear into the present day, places that were stable at one point in time became unstable, and death squads followed shortly after. The Khmer Rouge killed about 25% of the population of Cambodia from 1975 to 1979. 1975 is too close to the present to comfortably say that we live in a modern world where we don’t have to worry about genocide and mass-murdering states.

We have no idea what the mass-murderers of the distant future will care about. Many of them will probably have fairly commonplace criteria for the groups they want to purge, based on such things as race, religion, cultural heritage, sexual orientation, and so on. But some will devise criteria we can’t even begin to imagine. In the middle of the 19th century, only a tiny minority of people had even heard of communism, but a generation or so later that doctrine caused the death of millions of people in camps, wars, purges, and famines. Perhaps we’ve exhausted the space of ideologies that are willing to kill entire categories of people, and maybe we’ve already identified every category of people that someone could single out and decide to purge. But are you willing to bet money, much less your life, on the prediction that you won’t belong to some future class of deplorables?

In some of the purges of history, people had a chance to pretend not to be one of the undesirables. There’s no obvious sign that a Pear Party-affiliated death squad can use to identify a member of the Pineapple Party when the Pineapple Party government is toppled, so long as the Pineapplists know that they’re being targeted by Pear partisans and now is the time to scrape off their Pineapple Party ’88 bumper stickers. High-profile Pineapplists have no option but to flee the country, but the average member can try to lay low through the ensuing sectarian violence. That’s how it used to be, at least. But today people can scroll back 5 years in your Facebook profile and see that you were posting pro-Pineapple links then that you’ve since forgotten.

But open support of the Pineapple Party is too obvious. The undesirables of the future may have enough foresight to cover their tracks when it comes to clear-cut evidence like that. But, returning to the trustworthy intelligence agency we’ve mandated with finding people who want to harm us but also don’t want to be found, there are other ways to filter people. Machine learning and big data analysis are mixed bags. If you really, really need them to preemptively identify people who are about to commit atrocities, you’re probably going to be let down. It’s hard to sift through immense streams of data to find people who don’t want to be found. Not impossible, but machine learning isn’t a magic wand. That said, people are impressed with machine learning for a reason. Sometimes it pulls a surprising amount of signal out of what was previously only noise. And we are, today, the worst at discerning signal from noise that we will ever be. Progress in computational statistics could hit a wall next year, and then we can all temper our paranoia about targeted advertisements predicting our deepest, darkest secrets and embarrassing us with extremely specific ad pitches when our friends are looking over our shoulders. Maybe.

But perhaps it’s possible, if you’re patient and have gigantic piles of data lying around, to combine text analysis, social graph information, and decades-old Foursquare check-ins in order to identify closeted Pineapple Party members. And maybe it requires a small army of statisticians and programmers to do so, so you’re really not worried when the first paper is published that shows that researchers were able to identify supporters of Pineapplism with 65% accuracy. But then maybe another five years goes by and the work that previously took that small army of researchers months to do is now available as an R package that anyone with a laptop and knowledge of Statistics 101 can download and use. And that is the point where having gigantic piles of data siloed for a practically infinite amount of time becomes a scary liability.
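To make concrete just how low that bar could get, here is a hypothetical sketch of the statistical core: a naive Bayes text classifier in plain Python, no libraries required. Everything here, the party names and the toy “posts”, is invented for this essay’s Pineapple/Pear thought experiment; a real deanonymization pipeline would be vastly more sophisticated, but the point is that this much is already commodity code.

```python
from collections import Counter
import math

def train(docs):
    """docs: list of (text, label) pairs. Returns per-label word counts and doc totals."""
    counts, totals = {}, Counter()
    for text, label in docs:
        c = counts.setdefault(label, Counter())
        for word in text.lower().split():
            c[word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the highest log naive Bayes score."""
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(totals.values())
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        score = math.log(totals[label] / n_docs)   # class prior
        denom = sum(c.values()) + len(vocab)       # Laplace smoothing
        for word in text.lower().split():
            score += math.log((c[word] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Invented toy data for the thought experiment:
posts = [
    ("pineapple on pizza is great vote pineapple", "pineapple"),
    ("pineapple party forever", "pineapple"),
    ("pears are the best fruit vote pear", "pear"),
    ("pear party rally tonight", "pear"),
]
counts, totals = train(posts)
print(classify("i love the pineapple party", counts, totals))  # prints: pineapple
```

Train it on a handful of labeled posts and it will happily guess the affiliation of a new one; scale the same idea up with real tooling and gigantic piles of siloed data, and you have the scenario above.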

The scenario where Pearists topple the government, swarm into the intelligence agency’s really big data center, and then know exactly where to go to round up undesirables might be fairly unlikely on its own. But there’s actually a much larger number of less-obvious opportunities for would-be Pearist mass-murderers. Maybe someone finds a decades-old flaw in a previously trusted security protocol and Pear-affiliated hackers breach the silo. Maybe they get information from the giant surveillance silo of a country that, now that we think of it, no one should have sold all of that surveillance software to. Maybe the intelligence agency has a Pearist mole. Maybe the whole intelligence apparatus is Pear-leaning the whole time. Maybe a sizeable majority of the country elects a Pearist demagogue who promises to round up Pineapplists and put them in camps. This sort of thing isn’t behind us.

The data silo is a threat to everyone. In the long run, we can’t anticipate who will have access to it. We can’t anticipate what new category will define the undesirables of the future. And those unknowing future undesirables don’t know what presently-inconspicuous evidence is being filed away in the silo now to resurface decades in the future. But the trend, as it exists, points to a future where large caches of personal data are a liability because future off-the-shelf machine learning tools may be as easy to use and overpowered relative to machine learning’s bleeding edge today as our smartphones are compared to the Apollo Guidance Computer. The wide availability of information on the open internet might itself be dangerous looked at through this lens. But if your public tweets are like dry leaves accumulating in your yard and increasing the risk of a dangerous data-fueled-pogrom wildfire, then mass surveillance silos are like giant rusty storage tanks next to your house that intelligence agencies are pumping full of high-octane petroleum as fast as they can.


Comment on reddit.

Picture credit: Wikimedia Foundation Servers by Wikipedia user Victor Grigas, licensed under CC-BY-SA-3.0.

Oracle v Google is Everything that’s Wrong with Copyright

This week, the Oracle v Google trial came to a close, with a jury finding that Google’s use of Oracle’s Java API names was fair use.  This is, of course, not the end, as Oracle has vowed to appeal the decision.

The outcome is monumental, but only because the courts have previously erred significantly and ruled that APIs are copyrightable at all.  The Supreme Court denied certiorari, declining to examine that ruling of an Appellate court, which in turn was a reversal of a District court decision (EFF has all the details).  Interestingly, this most recent case was heard under the original judge, so it’s quite possible the Appellate Court will reverse again.  I think it’s crazy to suggest that API names are even copyrightable, but given that they’ve been ruled as such, I can’t see how use of APIs isn’t fair use.

Google didn’t copy Oracle’s code; they rewrote it themselves, but used the same names for the code functions, and then packaged it into a much better product than anything Oracle had created.  And it’s not like this negatively impacted Java’s market viability (contrary to what Oracle claims); Android likely saved Java from becoming a defunct language used only in big enterprise environments.  Younger aspiring developers want to program in languages for apps and new web technologies like Ruby, Node, Swift, and even Python. The only new reason to learn Java is that Android exists; if Android had picked Python, that’s what everyone would be learning to make Android apps. It’s ridiculous.
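To illustrate what was, and wasn’t, copied, here is a toy sketch in Python (deliberately not Java, and not any actual Oracle or Android source; the names are invented). Two “vendors” expose the identical API, the same function name and contract, while the implementations underneath are written independently:

```python
# Toy illustration of reimplementing an API (hypothetical code).
# Both classes expose the same name and signature that callers depend on;
# only the implementation underneath differs.

class OracleStyleLib:
    @staticmethod
    def max_of(a, b):
        # original implementation (hypothetical)
        return a if a >= b else b

class CleanRoomLib:
    @staticmethod
    def max_of(a, b):
        # independently rewritten implementation, same name and contract
        if a < b:
            return b
        return a

def caller(lib):
    # Code written against the API *name* works with either implementation.
    return lib.max_of(3, 7)

print(caller(OracleStyleLib), caller(CleanRoomLib))  # prints: 7 7
```

Any caller coded against the shared name works unchanged with either implementation; that interchangeability is the whole point of an API, and it is what the standards analogy in the quote that follows is getting at.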

But more fundamentally, the use of API names can’t be restricted! That defeats the whole purpose of having them! Sure, Twitter has the right to restrict the calling of functions on their servers through their APIs, but the actual names of the REST calls aren’t theirs to own forever. Steve and Leo on Security Now said it very well about APIs:

It’s driving a car. If we didn’t have a single uniform car/driver interface, meaning brake and accelerator, and this is how the steering wheel works, it would be a disaster. And as I thought about this more, I realized that this notion of standards is what this comes down to. And standards are such a part of our life that it’s even – it’s almost hard to appreciate the degree to which we depend upon them. I mean, think about even threads, you know, nuts and bolts with standard threading. If everyone just made up their own, so that screws were not interchangeable, it would just be a catastrophe.

I would go even further; a steering wheel is a patentable invention that other car companies would have to pay to use…but calling it a “steering wheel” isn’t something you can restrict. Doing so would be a blatant misuse of copyright and a horrific reduction in free speech. Steve Gibson continued:

And, I mean, so I guess my real complaint is that Oracle has historically benefited from the spread and the use of Java. And so because they allowed that to happen, it’s done as well as it has. And suddenly now Google has capitalized on it, and they’re wanting to take their marbles back and to say – or basically, essentially, this is a $9.3 billion lawsuit. So they’re saying we want some of the revenue which Google is obtaining as a consequence of doing a far better job in commercializing and leveraging Java for profit than we ever could. Because all we’re doing is telling everyone to get Java out of their computers…

…The BIOS is another perfect example. The fact that IBM gave us an interface called the Basic I/O System allowed all kinds of programs to be written without regard for whether it was, for example, a color graphic display or a monochrome graphic display. They were completely different. They occupied different hardware regions. Yet the BIOS hid those differences so that a program didn’t have to worry about what type of hardware you had. And that was an API, a standard. But just in general this kind of standardization, you can sort of imagine sort of a Mad Max post-cataclysm world where you no longer have standards, and everyone’s thing is just made from scratch, and they’re not – nothing’s interoperable. And it would just be a bizarre place.

And I think one of the major things that the Industrial Revolution did was it taught us the power of interoperability. And here Oracle is trying to say, yeah, we’re going to get a toll for you using something that we purchased and never figured out how to use. 

I’ve said it in the past, and I’ll keep saying it: the purpose of all intellectual property law is not to help the owners of intellectual property, but rather to promote creativity and new works.  Ruling that API names are copyrightable does literally nothing to promote interoperability or improve technology; it only makes it harder to improve the world. Getting this fair use ruling is better than nothing, but it should never have come to this.


Photo credit: Android Lineup by Rob Bulmahn, licensed under CC-BY-2.0.

Model-Breaking Observations in the Senate

It’s rare that an idea, or piece of evidence, comes along so striking that it forces you to rethink your entire model of the world. The recently released Feinstein-Burr encryption bill has done just that.

It has been described as “technically illiterate”, “chilling”, “ridiculous”, “scary”, and “dangerous”.  Not only are the issues with the bill fairly obvious to anyone with a cursory understanding of encryption, the problems are of such magnitude that they thwart any attempt to understand the Senators’ actions.  Let’s look at the effects of the hypothetical law.

The biggest issue is that this bill would significantly damage the United States’ national security. We live in a highly insecure world where cyberattacks, both foreign and domestic, are omnipresent. The Feinstein-Burr bill would fundamentally reduce the security of all technology infrastructure in the country. Jonathan Zdziarski, in a blog post linked above, gives some details:

Due to the backdooring of encryption that this legislation implies, American electronics will be dangerously unsafe compared to foreign versions of the same product. Diplomats, CEOs, scientists, researchers, politicians, and government employees are just a few of the people whose data will be targeted by foreign governments and hackers both while traveling, but also whenever they’re connected to a network.

That’s awful, and even if you have the most America-first, protect-American-lives mentality, weakening American encryption is the worst thing you could do; it literally endangers American lives.

I think there’s also a strong case to be made that this will do very little to combat terrorism. Unbreakable, strong encryption is widely available on the internet for free, forever; if bad people want to use it, they will.  Moreover, terrorism, as awful as it is, is relatively rare; Americans are about 1,000 times more likely to die from a non-terrorism-related homicide. And many more “common” homicides occur due to heat-of-the-moment arguments, which means there would be no encrypted messages detailing conspiracies. All this bill does is remove the ability of average, non-technically inclined Americans to secure their data.

And the people whose data will be most at risk will be those consumers who are less educated or less technically adept. Better-informed consumers might have the ability to install foreign encryption software on their phones to keep their data safe, but most uninformed consumers just use default settings.  Thus, criminals who try to commit identity theft will greatly benefit from this legislation; they wouldn’t usually bother targeting knowledgeable users anyway, and with security stripped away from phones, it will be much easier to steal data from susceptible users. The people most in need of help to protect their data will be disproportionately harmed by this legislation.

On the other hand, most companies are not uninformed users. They have IT departments who understand the value of encrypting their data, and they will continue to purchase strong security software, even if it is no longer sold in the United States.  Foreign-produced software works just as well.  Banning strong encryption will debilitate the American technology sector, one of the biggest and most important parts of the economy.  This will cost Americans jobs and diminish America’s influence on the future of the world, as technological innovation moves overseas.  But this isn’t just bad for Americans; it’s not easy to simply move an entire company or product overseas. There are huge capital investments these companies have made that will not be available in other countries immediately, if ever, and this will set back the global technology industry by billions if not trillions of dollars.

So this raises the question of why Senators Dianne Feinstein and Richard Burr introduced this bill; given their stated obsession with national security, and given the horrific effect this bill would have on American national security, there’s no good way to reconcile their stated beliefs with their actions. Here are a few theories to explain their behavior, and some discussion as to why each respective theory is unsatisfying.

The Senators are actually foreign spies purposefully trying to weaken American national security.  Obviously, if this theory is true, it’s self-evidently very bad that our elected officials not only don’t represent us, but actually represent foreign governments likely trying to harm Americans. Sure, it’s quite unlikely, since it’s very difficult to become a U.S. Senator at all, and no spy agency would send agents in with a plan to become a U.S. Senator.  As to whether they were turned into foreign agents after being elected, I can only speculate. But it strikes me as improbable. Nonetheless, it is true that this legislation is exactly what foreign security agencies would want to introduce to make the United States more vulnerable.  I was curious, so I checked the constitutional definition of treason as well as the Espionage Act, but it seems that you need to literally give secrets to other people, not just make them easier to obtain. But there is that one case where a high-ranking official is in trouble for storing documents insecurely…

They’re power-hungry politicians. The idea of the Senators being foreign spies is a bit far-fetched.  But what we know for sure is that they are politicians, which means they chose a career path that would give them more power to change things. Maybe Burr and Feinstein are sick of technology companies telling the FBI that they can’t assist their investigations, and they wanted to put them in their place.  If this theory is true, it’s pretty self-evidently evil; people in power using their power indiscriminately to harm citizens is the exact problem Thomas Jefferson identified in the Declaration of Independence.  Of course, it’s not usually a big problem, because James Madison helped construct a whole host of ways to check the power of government. The most important check for our situation is that senators are voted in by the people. So as long as people know about this dumb bill, they’ll kick these guys out…right?

Hanlon’s Razor (origin disputed) states that one should “never attribute to malice that which is adequately explained by stupidity.”  This theory would mean that two sitting, highly experienced U.S. Senators are too stupid to realize the ill effects this will have on national and economic security.  Obviously, Congress has to make laws in areas that its members are not always familiar with…but Burr and Feinstein are the chair and vice chair, respectively, of the Intelligence Committee. If anyone knows about intelligence, they do. And Feinstein is even on the Judiciary Subcommittee on Privacy, Technology and the Law! If even these people are too stupid to understand the effects of their own policies, we might as well stop sending representatives to a legislature at all and just have run-of-the-mill uneducated voters pass everything directly through referendum. Sure, they’d have no idea what they’re doing, but apparently neither do Senators!

What I think is most likely, and most terrifying, is that American Democracy incentivizes members of Congress to make bad policy if it’s politically beneficial. With all the aides and staff Senators have, plus the amount of pressure they receive from outside groups, it seems unlikely they never heard about the bad effects of the bill. Yet, they did it anyway. Given they don’t work for law enforcement, there is no Frank Underwood endgame for passing this bill; banning encryption doesn’t directly allow Burr and Feinstein to look at their political enemies’ phones (…probably), just criminals and the police.  So then maybe their incentive was to appear tough on crime and terrorism, consequences be damned. Richard Burr is in a reelection year in North Carolina, so let’s look at the effect this horrible bill has had on his chances to win according to Predictit.org:

Primary was in mid-March, bill introduced in early April

As you can see, the bill had very little effect on his perceived chances. Now, it could be that voters have already factored in Senator Burr’s position on destroying, er, defending American national security, and he needed to introduce this legislation to maintain his position. But it looks identical to a situation where North Carolina voters couldn’t care less about Senator Burr’s position on encryption, and his introduction of legislation consequently had no effect on his reelection chances. If it’s the former, then we are in serious trouble, because our legislative representatives are incentivized to make horrible policies when voters aren’t well informed.  If it’s the latter, then we have to dismiss this explanation and go back to one of the other three.

Whatever the explanation is, it reflects poorly on how the government constructs policy, and it reflects poorly on American Democracy. Moreover, assuming any of the discussed theories is true, they imply massive issues that will be difficult or impossible to solve.  Reforming democracy as many progressives would like, through campaign finance, wouldn’t even address any of these issues; it is the technology corporations and privacy NGOs that have been advocating for more privacy and making unbreakable encryption more accessible, while law enforcement and other government agencies have been advocating for less security.  But as far as I can tell, even they haven’t demanded anything like this bill.  Thus, more campaign spending by private groups would help, not hinder, good policy.

No matter how you look at it, this bill indicates a big failure for democratic government and illustrates the dangers of discretionary state power.


Photo credit: Caïn venant de tuer son frère Abel, by Henry Vidal in Tuileries Garden in Paris, France, photo by Alex E. Proimos, licensed under CC-BY-2.0.

Links 2016-04-17

Counting past infinity is easy! It was the infinity raised to infinity and infinite number of times that I really got lost.

I’ve settled on the right way to show the date in these links posts: the international standard ISO-8601.  It’s about time, given that it has been the standard since 1988.

The Niskanen Center names social-justice-aware libertarianism “neoclassical liberalism”. I like this idea, as it’s strictly superior to progressivism, and I’ve been trying to come up with a good name for it. Scott Alexander called it left-libertarianism-ist, which just isn’t as catchy. Of course, maybe pure libertarianism is better, but neoclassical liberalism is far more politically palatable. It is also more “conservative”, meaning that it is closer to the status quo.

Merrick Garland would not be a good SCOTUS justice. Randy Barnett discusses with Reason why he opposes Garland’s nomination: he’s completely deferential to executive and legislative authority and does not protect individual rights from the state. Does it make sense for the Senate to not give him a hearing? Maybe, maybe not. Did it make sense to declare prior to his announcement that any candidate wouldn’t get a hearing? Hard to say; if that hard line approach made Obama nominate an old white guy who endorses state power in the name of national security, that’s certainly a win for neoconservatives. I don’t think anyone should take an outrage stance on the Supreme Court opening because this really is a complicated game theory situation with nested layers of strategy. Even though I’m sure he is one of the most un-libertarian nominees ever, it’s impossible to say if he would be worse than a Hillary appointee or even a Trump appointee.

How to fight the War on Drugs: hit their wallets. Legal marijuana causes Mexican drug cartel revenues to plummet. 

Heard through Slate Star Codex: anti-censorship blog Status 451 (linked in the sidebar) held a fund-raiser for LambdaConf, a functional programming conference I had no idea existed until a week ago. Apparently, after an anonymous review of submitted papers, the LambdaConf organizers selected a paper to be presented at the conference by Curtis Yarvin, a.k.a. Mencius Moldbug, perhaps the most well-known neo-reactionary.  Certainly I think neo-reactionaries are a bit nuts, but Mr. Yarvin has also invented the intriguing functional programming language Urbit. Even if we don’t agree with him politically, we can learn and grow our knowledge by understanding what he has to say, especially in the technological areas where he is an expert! Alas, as Eric S. Raymond recounts, the social justice movement did not see it that way and pressured LambdaConf to remove Yarvin from the event. LambdaConf refused, and the activists moved to pressuring sponsors to drop out. Incredibly, Status 451 started an Indiegogo campaign to save LambdaConf, which was funded within the day. This is a big victory for anyone who wants to live in a tolerant, knowledgeable, and free society, but if you want to know their motivations firsthand, please read what they have to say.  Status 451 are also true believers, calling out some on the right for their similarly censorious response.

Related in Not the Onion news: Emory vows to hunt down students who politically disagree with the Left.

Bryan Caplan on liberalizing expertise and the link with defending free speech from the attacks of economic licensing.

A great write up on derivatives, what they are, how they work, and why it’s misleading to suggest that the derivatives market has a quadrillion dollars in risk.

Another excellent reddit post, this one asking soldiers what things they don’t tell you about war. In short: the smell.

Apparently the music industry thinks the DMCA doesn’t do enough to stop copyright infringers (more on the RIAA at TorrentFreak). It seems they’d like to target the safe harbor provisions of the DMCA, the only parts of it that are useful. Techdirt has a great series of posts from the other side, detailing the many abuses of DMCA takedown notices. Right now, there is no legal check on whether a takedown request comes from someone who actually owns the copyright, or even on whether that copyrighted work is used fairly for criticism or commentary. This isn’t an easy problem to solve by any means, but we should remember that the point of copyright is to encourage production of new works, and if there’s anything YouTube does right, it’s making it easier to create new content. Moreover, it’s helpful to remember that YouTube is run at a loss of more than $150 million a year. Trying to force YouTube to pay for content policing is one of the dumber ideas they’ve ever had, which is saying something. So what should be done instead? A good start would be to make false copyright claims a criminal offense, and to require claimants to prove they own the copyright in the claim.  It would also help if, when a copyright claim turns out to be wrong, the ad money did not go to the claiming party but was instead held in escrow until the dispute was resolved. This would allow YouTube to better focus on actual infringers and stop the torrent of false claims. Of course, another big looming problem for the RIAA is Facebook video, which doesn’t even have the semi-transparent (though flawed) takedown-notice system of YouTube.  Ultimately, given how little money YouTube makes after 10 years on the internet, if YouTube were held liable for infringing uploads, it would either go out of business or cease being a free platform anyone could use. This would be a monumental failure of the copyright regime; yes, it might end up getting RIAA members more money, but that is not the purpose of copyright. Copyright exists to help make new content, not to destroy content platforms.

California is raising its minimum wage, eventually to $15 an hour. FiveThirtyEight’s Ben Casselman is excited at least to get some data on large minimum wage hikes, although judging from the headlines, it seems like he thinks this is a good idea. I’m fairly confident it is not, and Matt Zwolinski makes one good point to support me: the minimum wage doesn’t fight poverty.  There’s a lot of data surrounding the minimum wage, and it’s apparent that unemployment does not automatically rise when minimum wage increases occur.  Nonetheless, longer-term unemployment effects are essentially impossible to study, and it’s likely there are some effects on businesses. If businesses could absorb 20-40% increases in labor costs easily, then why aren’t businesses already getting more out of their employees, and why aren’t more firms entering the market to capture the excess profits? There is evidence of long-term job growth being harmed, as well as of higher prices (see last link).  Ultimately, I predict there will be negative consequences for California, but it’s hard to find a concrete metric worth predicting. I could predict that California’s employment and workforce participation rates will trail the country average by more than they do now (check this in the future). It’s also likely that low-cost goods will see price increases, but I don’t have an easy way to check that over the next five years.

Robin Hanson has a good thought experiment to show that most people don’t vote to change the outcomes of elections. This would explain why anyone votes at all, given the uselessness of voting generally.

GiveWell tries a new tactic to persuade more people to fund their top researched causes: “First of all. Just so you understand, this guy is a total loser. He begged me to be his peer reviewer, I said ‘NO THANKS.’ Pathetic!”

Related: We can’t stop here, this is Cruz country!

Daniel J. Bernstein taking over crypto is good.