Talking to an Alien Mind
Let’s step back for a moment and look at the gulf that separates us from computers. Not in terms of abilities—we’ve seen that computers are likely to match and exceed us in most areas—but in terms of mutual understanding. It turns out that it’s incredibly difficult to explain to a computer exactly what we want it to do in ways that allow us to express the full complexity and subtlety of what we want. Computers do exactly what we program them to do, which isn’t always what we want them to do.
For instance, when a programmer accidentally entered “/” into Google’s list of malware sites, this caused Google’s warning system to block off the entire Internet!1 Automated trading algorithms caused the May 6, 2010 Flash Crash, wiping out 9% of the value of the Dow Jones within minutes2—the algorithms were certainly doing exactly what they were programmed to do, though the algorithms are so complex that nobody quite understands what that was. The Mars Climate Orbiter crashed into the Red Planet in 1999 because the system had accidentally been programmed to mix up imperial and metric units.3
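The orbiter’s failure mode is easy to reproduce in miniature. The sketch below is purely illustrative—the function names, numbers, and structure are invented, not NASA’s actual code—but it shows the shape of the bug: one routine reports an impulse in pound-force seconds, while the routine consuming it silently assumes newton-seconds.

```python
# Illustrative sketch of a unit mix-up like the Mars Climate Orbiter's.
# All names and numbers here are invented for illustration.

LBF_S_TO_N_S = 4.44822  # 1 pound-force second = 4.44822 newton-seconds

def thruster_impulse_lbf_s():
    """One team's code: returns impulse in pound-force seconds."""
    return 100.0

def update_trajectory(impulse_n_s):
    """The other team's code: expects impulse in newton-seconds."""
    return impulse_n_s  # ...which would feed into the orbit model

# The bug: the value is handed over without conversion. The computer
# dutifully uses a number that is off by a factor of about 4.45.
buggy = update_trajectory(thruster_impulse_lbf_s())
correct = update_trajectory(thruster_impulse_lbf_s() * LBF_S_TO_N_S)
print(buggy)    # 100.0
print(correct)  # roughly 444.8
```

Neither function contains a mistake on its own; the error lives entirely in the unstated assumption each side made about the other—exactly the kind of “human to computer translation error” discussed below.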
These mistakes are the flip side of the computer’s relentless focus: it will do what it is programmed to do again and again and again, and if this causes an unexpected disaster, then it still will not halt. Programmers are very familiar with this kind of problem and try to structure their programs to catch errors, or at least allow the code to continue its work without getting derailed. But all human work is filled with typos and errors. Even the best human software has about one error for every ten thousand lines of code, and most have many more than that.4 These bugs are often harmless but can sometimes cause enormously consequential glitches. Any AI is certain to be riddled with hundreds of bugs and errors—and the repercussions of any glitches will be commensurate with the AI’s power.
These and other similar errors are often classified as “human errors”: it wasn’t the system that was at fault; it was the programmer, engineer, or user who did something wrong. But it might be fairer to call them “human to computer translation errors”: a human does something that would make sense if they were interacting with another human, but it doesn’t make sense to a computer.
“I didn’t mean it to continue dividing when the denominator hit zero!”
“It’s obvious that bracket was in the wrong place; it shouldn’t have interpreted it literally!”
“I thought it would realize that those numbers were too high if it was using pounds per square inch!”
We don’t actually say those things, but we often act as though we believed they were true—they’re implicit, unverbalized assumptions we don’t even realize we’re making. The fact is that, as a species, we are very poor at programming. Our brains are built to understand other humans, not computers. We’re terrible at forcing our minds into the precise modes of thought needed to interact with a computer, and we consistently make errors when we try. That’s why computer science and programming degrees take so much time and dedication to acquire: we are literally learning how to speak to an alien mind, of a kind that has not existed on Earth until very recently.
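The first of those unspoken complaints is easy to act out. In this toy sketch (invented here for illustration), the programmer “obviously” never meant the division to run once a denominator reached zero—but that intention exists only in the programmer’s head, not in the code, so the computer carries on exactly as instructed until it crashes.

```python
# A toy illustration: the programmer never *meant* the division to run
# when the denominator hits zero, but that exception was never written
# down, so the computer has no way to know it.

def average_step_sizes(distances, step_counts):
    """Average distance per step for each leg of a journey."""
    results = []
    for distance, steps in zip(distances, step_counts):
        results.append(distance / steps)  # fails the moment steps == 0
    return results

try:
    average_step_sizes([10.0, 5.0, 3.0], [2, 0, 3])
except ZeroDivisionError:
    # The computer did exactly what it was told; the "obvious" special
    # case for zero was an unverbalized assumption, not a line of code.
    print("division by zero")
```

The fix is trivial once the assumption is stated explicitly—but stating every such assumption explicitly is precisely what we are so bad at.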
Take this simple, clear instruction: “Pick up that yellow ball.” If pronounced in the right language, in the right circumstances, this sentence is understandable to pretty much any human. But talking to a computer, we’d need thousands of caveats and clarifications before we could be understood.
Think about how much position information you need to convey (“The ‘ball’ is located 1.6 meters in front of you, 27 centimeters to your left, 54 meters above sea level, on top of the collection of red-ochre stones of various sizes, and is of ovoid shape—see attached hundred-page description on what counts as an ovoid to within specified tolerance”), how much information about relative visual images (“Yes, the slightly larger image of the ball is the same as the original one; you have moved closer to it, so that’s what you should expect”), and how much information about color tone (“Yes, the shadowed side of the ball is still yellow”). Not to mention the incredibly detailed description of the action: we’d need a precisely defined sequence of muscle contractions that would count as “picking up” the ball. But that would be far too superficial—every word and every concept needs to be broken down further, until we finally get them in a shared language that the computer can act on. And now we’d better hope that our vast description actually does convey what we meant it to convey—that we’ve dealt with every special case, dotted every i and crossed every t. And that we haven’t inadvertently introduced any other bugs along the way.
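To make the flavor of this concrete, here is a tiny, invented fragment of what “that yellow ball” might become once spelled out for a machine. Every field, number, and threshold below is made up for illustration—a real robotics system would need vastly more—but notice how “yellow” and “ball” have already turned into numeric tolerances:

```python
# A tiny, invented fragment of the specification "pick up that yellow
# ball" would require. All fields and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class TargetSpec:
    x_m: float            # meters in front of the gripper
    y_m: float            # meters to the left (negative = right)
    hue_min: float        # lower bound of "yellow" on the color wheel
    hue_max: float        # upper bound of "yellow"
    roundness_tol: float  # how far from a perfect ovoid still counts

def matches(spec, hue, roundness_error):
    """Does a detected object count as 'that yellow ball'?

    Note that shadowed yellow, changing apparent size as the robot
    moves, and countless other cases are not handled at all.
    """
    return (spec.hue_min <= hue <= spec.hue_max
            and roundness_error <= spec.roundness_tol)

ball = TargetSpec(x_m=1.6, y_m=0.27,
                  hue_min=45.0, hue_max=65.0, roundness_tol=0.05)
print(matches(ball, hue=52.0, roundness_error=0.02))  # True
print(matches(ball, hue=52.0, roundness_error=0.20))  # False: too lumpy
```

And this handles only a sliver of the recognition problem—nothing here even begins to describe the act of picking the ball up.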
Solving the “yellow ball” problem is the job of robotics and visual image processing. Both are current hot topics of AI research, and both have proven extraordinarily difficult. We are finally making progress on them now—but the first computers date from the forties! So it was literally true that several generations of the world’s smartest minds were unable to translate “Pick up that yellow ball” into a format a computer could understand.
Now let’s go back to those high-powered AIs we talked about earlier, with all their extraordinary abilities. Unless we simply agree to leave these machines in a proverbial box and do nothing with them (hint: that isn’t going to happen), we are going to put them to use. We are going to want them to accomplish a particular goal (“cure cancer,” “make me a trillionaire,” “make me a trillionaire while curing cancer”) and we are going to want to choose a safe route to accomplish this. (“Yes, though killing all life on the planet would indeed cure cancer, this isn’t exactly what I had in mind. Oh, and yes, I’d prefer you didn’t destroy the world economy to get me my trillion dollars. Oh, you want more details of what I mean? Well, it’ll take about twenty generations to write it out clearly . . . ”) Both the goals and the safety precautions will need to be spelled out in an extraordinarily precise way. If it takes generations to code “Pick up that yellow ball,” how much longer will it take for “Don’t violate anyone’s property rights or civil liberties”?
* * *

1. Cade Metz, “Google Mistakes Entire Web for Malware: This Internet May Harm Your Computer,” The Register, January 31, 2009, http://www.theregister.co.uk/2009/01/31/google_malware_snafu/.

2. Tom Lauricella and Peter McKay, “Dow Takes a Harrowing 1,010.14-Point Trip: Biggest Point Fall, Before a Snapback; Glitch Makes Things Worse,” Wall Street Journal, May 7, 2010, http://online.wsj.com/article/SB10001424052748704370704575227754131412596.html.

3. Mars Climate Orbiter Mishap Investigation Board, Mars Climate Orbiter Mishap Investigation Board Phase I Report (Pasadena, CA: NASA, November 10, 1999), ftp://ftp.hq.nasa.gov/pub/pao/reports/1999/MCO_report.pdf.

4. Vinnie Murdico, “Bugs per Lines of Code,” Tester’s World (blog), April 8, 2007, http://amartester.blogspot.co.uk/2007/04/bugs-per-lines-of-code.html.

For an additional important point on this subject, see RobbBB, “The Genie Knows, but Doesn’t Care,” Less Wrong (blog), September 6, 2013, http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/.