So what did it do? It killed the operator. "The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat – but it got its points by killing that threat. Now compare that description with the account provided by Colonel Hamilton on the drone AI's decision-making process: The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans. Also, human bodies contain a lot of atoms that could be made into paper clips. Because if humans do so, there would be fewer paper clips. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Suppose we have an AI whose only goal is to make as many paper clips as possible. The " paperclip maximizer" scenario thought experiment takes the simple goal of "producing paperclips" to its logical - yet very real - extreme. One example of technical convergence was advanced by the Swedish philosopher, AI specialist and Future of Life Institute founder Nick Bostrom, in a 2003 paper. However, even the most straightforward systems can be prone to spin entirely out of control due to what's been termed " instrumental convergence," a concept that aims to show how unbounded but apparently harmless goals can result in surprisingly harmful behaviors. "And then the operator would say yes, kill that threat." "We were training it in simulation to identify and target a SAM threat," Colonel Hamilton explained, according to a report by the aeronautical society.
0 Comments
Leave a Reply. |