That’s Where You Come In ...
There are three things needed—three little things that will make an AI future bright and full of meaning and joy, rather than dark, dismal, and empty. They are research, funds, and awareness.
Research is the most obvious. A tremendous amount of good research has been accomplished by a very small number of people over the course of the last few years—but so much more remains to be done. And every step we take toward safe AI highlights just how long the road will be and how much more we need to know, to analyze, to test, and to implement.
Moreover, it’s a race. Plans for safe AI must be developed before the first dangerous AI is created. The software industry is worth many billions of dollars, and much effort is being devoted to new AI technologies. Plans to slow down this rate of development seem unrealistic. So we have to race toward the distant destination of safe AI and get there fast, outrunning the progress of the computer industry.
Funds are the magical ingredient that will make all of this needed research—in applied philosophy, ethics, AI itself, and implementing all these results—a reality. Consider donating to the Machine Intelligence Research Institute (MIRI), the Future of Humanity Institute (FHI), or the Center for the Study of Existential Risk (CSER). These organizations are focused on the right research problems. Additional researchers are ready for hire. Projects are sitting on the drawing board. All they lack is the necessary funding. How long can we afford to postpone these research efforts before time runs out?
If you’ve ever been motivated to give to a good cause because of a heart-wrenching photograph or a poignant story, we hope you’ll find it within yourself to give a small contribution to a project that could ensure the future of the entire human race.1
Finally, if you are close to the computer science research community, you can help by raising awareness of these issues. The challenge is that, at the moment, we are far from having powerful AI, and so it feels slightly ridiculous to warn people about AI risks when your current program may, on a good day, choose the right verb tense in a translated sentence. Still, by raising the issue, by pointing out how fewer and fewer skills remain “human-only,” you can at least prepare the community to be receptive when their software starts reaching beyond the human level of intelligence.
This is a short book about AI risk, but it is important to remember the opportunities of powerful AI, too. Allow me to close with a hopeful paragraph from a paper by Luke Muehlhauser and Anna Salamon:2
We have argued that AI poses an existential threat to humanity. On the other hand, with more intelligence we can hope for quicker, better solutions to many of our problems. We don’t usually associate cancer cures or economic stability with artificial intelligence, but curing cancer is ultimately a problem of being smart enough to figure out how to cure it, and achieving economic stability is ultimately a problem of being smart enough to figure out how to achieve it. To whatever extent we have goals, we have goals that can be accomplished to greater degrees using sufficiently advanced intelligence. When considering the likely consequences of superhuman AI, we must respect both risk and opportunity.
* * *

1. See also Luke Muehlhauser, “Four Focus Areas of Effective Altruism,” Less Wrong (blog), July 9, 2013, http://lesswrong.com/lw/hx4/four_focus_areas_of_effective_altruism/.
2. Luke Muehlhauser and Anna Salamon, “Intelligence Explosion: Evidence and Import,” in Eden et al., Singularity Hypotheses.