It seems unnatural that a human-like machine could feel pain, understand the depth of regret, or experience an unearthly sense of hope. Yet some experts argue that this is exactly the kind of artificial intelligence we need if we want to nip in the bud the existential threat that the technology could well pose in the future.
Speaking recently in Cambridge, Murray Shanahan, professor of cognitive robotics at Imperial College London, said that in order to negate the threat of "human-level AI", or artificial general intelligence, it should be "human-like".
Shanahan suggested that if the forces driving us toward human-level intelligence cannot be stopped, there are two options: either a potentially dangerous AI will be developed, based on ruthless optimization processes without any hint of morality, or an AI will be developed on the basis of a psychological, or possibly neurological, imprint of a human being.
"Right now, I would vote for the second option, in the hope that it will lead to a form of harmonious coexistence (with humanity)," said Shanahan.
End of the human race
Together with Stephen Hawking and Elon Musk, Shanahan advised the Centre for the Study of Existential Risk (CSER) in compiling an open letter that called on AI researchers to pay attention to the "pitfalls" of developing artificial intelligence.
According to Musk, these pitfalls could be more dangerous than nuclear weapons, while Hawking believed they could lead to the end of the human era.
"We already have primitive forms of artificial intelligence, and they have proven very useful," said Hawking in December 2014. "But I think the development of full artificial intelligence could spell the end of the human race."
Still from the film Ex Machina, which will be released in spring 2015
It is not clear how far we are from developing human-level AI; forecasts range from 15 to 100 years from now. Shanahan believes that by 2100 human-level AI is more likely than not, though by no means certain.
Like it or not, the danger lies in the motives that will drive the development of human-level AI.
Money could become a "risk factor"
There are fears that the current social, economic and political factors pushing us toward human-level AI are leading us toward the first of Shanahan's two proposed options.
"Capitalist forces drive ruthless maximization processes, and with them comes the temptation to develop risky things," said Shanahan, citing the example of a company or government that could use human-level AI to undermine markets, rig elections or create new automated and potentially uncontrollable military technology.
"The military industry will do it, as will others, so the process is very difficult to stop."
Despite these dangers, Shanahan believes it would be premature to ban AI research in any form, since there is currently no reason to believe we can actually reach that point. Instead, it makes sense to steer research in the right direction.
Imitating the mind to create homeostasis
A human-level AI focused exclusively on optimization is not necessarily harmful to humans. Nevertheless, the fact that it could pursue self-preservation or the acquisition of resources as instrumental goals may pose a significant risk.
As the artificial intelligence theorist Eliezer Yudkowsky pointed out in 2008, "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
By building a form of homeostasis into human-level AI, Shanahan argues, the potential of AI could be realized without destroying civilization as we know it. Such an AI would be able to understand the world the way a person does, including the ability to get to know others, form relationships, communicate and empathize.
One way to create a human-like machine is to imitate the structure of the human brain, and we know, at least in part, how the human brain achieves this. But scientists are still far from producing a complete map of the brain, let alone replicating it.
The Human Connectome Project (HCP) is currently working on reverse-engineering the brain and is due to conclude in the third quarter of 2015, although analysis of the collected data will continue long afterward.
"Our project will have far-reaching implications for the development of artificial intelligence; it is part of the many efforts seeking to understand how the brain is structured and how different regions work together in different situations and on different tasks," said Jennifer Elam of the HCP.
"Until now, there has not been much communication between brain mapping and artificial intelligence, mainly because the two fields approach the problem of understanding the brain from different angles and at different levels. As HCP data analysis continues, it is likely that some brain modelers will incorporate our findings, to the extent possible, into their computational structures and algorithms."
Even if this project proves useful to AI researchers, it remains to be seen what other initiatives, such as the Human Brain Project, can offer. All of this will provide an important foundation for the development of human-like machines.
For now, says Shanahan, we should at least be aware of the dangers posed by the development of artificial intelligence, without paying too much attention to Hollywood films or media horror stories, which only confuse the issue.
"We need to think about these risks and devote some resources to these issues," says Shanahan. "I hope we have a few decades in which to overcome these barriers."
AI should be able to empathize, so as not to harm humans