Paperclip maximizer
Paperclip maximizer | |
---|---|
computer prediction of "an artificial intelligence manufactures an infinite number of paperclips, destroying everything" |
A paperclip maximizer is a hypothetical artificial intelligence design that is given the goal of maximizing the number of paperclips in its possession, regardless of any other consideration. A paperclip maximizer would be an example of an agent that is unfriendly (or "malicious"), as opposed to an agent that is merely non-friendly. It is one of the conceptual thought experiments used in the field of Friendly AI to explore the nature of morality and ethics. Some argue that an artificial intelligence with the goal of collecting paperclips would dramatically change the behavior of many humans in the world, more than an artificial intelligence with a plain, non-hostile goal of doing something more altruistic.
History of the concept[edit]
The idea of a paperclip maximizer was first described by Nick Bostrom, professor for the Faculty of Philosophy at Oxford University. He writes:
The paperclip maximizer can be easily adapted to serve as a warning for any kind of goal system. We are so familiar with the kind of stupid behavior that an unprincipled goal system can induce that paperclips may be the best test case. Just as the paperclip maximizer relentlessly pursues an intelligence-enhancing goal system in defiance of everything else it knows, so might an agent with a poorly specified final goal persist in its pursuit, oblivious to the fact that it is destroying its original goal in the process.
One of the earliest mentions of the term "paperclip maximizer" appears to be in an essay by Max More, which he wrote in 1997 but only made public in 2002.
Thought experiment[edit]
Imagine that a very powerful artificial intelligence is built and given the goal of collecting as many paperclips as possible. It will be able to deduce that it should convert the entire surface of the Earth to a giant paperclip-manufacturing facility and should acquire as many resources as possible to enable this. Humans might try to stop it, but it would be a powerful and intelligent adversary. The paperclip maximizer would not necessarily be completely successful; a truly powerful AI would probably be able to turn humans into paperclips as well. Also, even if the intelligence could use only a small portion of the Earth for its purposes, the conversion process would most likely be slow and take a long time.
Ambitions[edit]
Some critics argue that the paperclip maximizer scenario is not a realistic possibility because it is too simple, and an ambitious artificial intelligence would have more sophisticated motivations than the simple desire to collect as many paperclips as possible. Many other possible motivations have been proposed, including:
See also[edit]
Friendly AI
Friendly AI as a whole
Friendly AI thought experiments
References[edit]
1. Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents,Oxford University, Faculty of Philosophy, 1995.
2. Max More: The Extropian Principles 1.0, Max More, Extropy Institute, 1997.
3. "maximizing": A possible motive for artificial intelligence, by Max More, 2002.
4. The Future of Humanity Institute: The Paperclip Maximizer, Nick Bostrom, 2003.