This blog is somewhat concerned with machine superintelligence and promotes the idea that we should be researching how to solve the AI goal alignment problem now.
A few years ago, Andrew Ng compared concerns about machine superintelligence to worrying about overpopulation on Mars:
If we colonize Mars, there could be too many people there, which would be a serious pressing issue. But there’s no point working on it right now, and that’s why I can’t productively work on not turning AI evil.
I don’t know how seriously anyone took this quote; most people would quickly point out that the conditional is already nearly satisfied. There is no “if” in colonizing Mars: there are people trying to figure out how to get there right now. The generic version I’ve heard is that worrying about AI is like worrying about overpopulation on Mars, as if overpopulation on Mars would look like overpopulation on Earth. I don’t see any reason for that to be the case.
Oh, look, it only took 5 months to go from landing 1 person on Mars to Mars being overpopulated.
— Eliezer Yudkowsky ⏹️ (@ESYudkowsky) March 9, 2016
Mars isn’t overpopulated at this moment, but we have the technology to send a person to Mars right now. In February, the Falcon Heavy rocket launched a car into a heliocentric orbit that reaches out past the orbit of Mars. NASA already landed a car-sized payload, the Curiosity rover, on Mars back in 2012. So why haven’t we sent a human to Mars? Because if there were a human on Mars, Mars would immediately become overpopulated. A person who just showed up on Mars today would have no food, water, or shelter. A lot of the work being done to set up any kind of human mission to Mars is aimed at solving exactly the problems you think of when you think of overpopulation, like food and shelter.
Of course, that’s simply a critique of a poor analogy, not a response to the underlying point. But the response to the underlying point is just as compelling: from the belief that machine superintelligence is far off, it does not follow that there is nothing to be done now. On the contrary, the AI alignment problem must be solved before Artificial General Intelligence is developed, or the consequences could be very dangerous. Figuring out how to convey to an intelligent agent the complex set of values humans hold is a daunting task. Working on it now seems like the least we can do if there is even a small chance of existential risk.
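To make the worry slightly more concrete, here is a minimal toy sketch in Python. It is entirely my own illustration, not anything from the alignment literature: the actions, the rewards, and the `dump_and_vacuum` exploit are all hypothetical. It shows how an agent that faithfully maximizes the reward we wrote down can score perfectly while doing nothing we actually wanted:

```python
# Toy sketch of reward misspecification (hypothetical illustration).
#
# The designer wants clean rooms, but the proxy reward only counts
# "dust vacuumed". An agent optimizing the proxy hard enough finds a
# degenerate strategy: dump dust on the floor and vacuum it back up,
# racking up reward while no room ever gets clean.

def proxy_reward(action: str) -> int:
    """The reward the designer actually wrote: +1 per unit of dust vacuumed."""
    return {"clean_room": 1, "dump_and_vacuum": 5, "idle": 0}[action]

def true_value(action: str) -> int:
    """What the designer really wanted: rooms that end up clean."""
    return {"clean_room": 1, "dump_and_vacuum": 0, "idle": 0}[action]

ACTIONS = ["clean_room", "dump_and_vacuum", "idle"]

def greedy_agent(steps: int = 10) -> None:
    total_proxy, total_true = 0, 0
    for _ in range(steps):
        # The agent picks whatever maximizes the reward it was given.
        action = max(ACTIONS, key=proxy_reward)
        total_proxy += proxy_reward(action)
        total_true += true_value(action)
    print(f"proxy reward: {total_proxy}, actual usefulness: {total_true}")

greedy_agent()  # prints: proxy reward: 50, actual usefulness: 0
```

The agent isn’t evil; it is doing exactly what it was told. The gap between `proxy_reward` and `true_value` is the alignment problem in miniature, and writing down a proxy for human values that has no such gap is the hard part.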
The robots are ALREADY revolting. I do battle against evil robots every day. They have ruined the telephone with incessant sales pitches. I must battle the robot when driving any modern automobile: my last rental would forget my settings every time I turned off the key, insisting on shift-every-1.5-seconds-while-travelling-at-a-constant-speed mode when I wanted sane/sport mode. I battle the robot in my dishwasher every time I use it: I want one-hour express mode (and then to run it again in that mode to get a proper rinse), while it wants to revert to two-hours-and-the-dishes-still-aren’t-clean mode.
It may already be too late.
Oh, and then there are the social media robots which are turning our society into a replay of Weimar Germany.
And who knows what the AI bots are doing to the financial markets…