AI as a threat to humanity- seriously?
I started blogging about AI since 2016 with one post each year (read my 2016, 2017). In this blog, I am going to pitch further on the dangers to human extinction, and why this is possible. And what can be done about it, I have to say I am not optimistic. Read on to see why (from many scientists and researchers in this field).
Most of the information below are from this December 2018 article, in which it poses 9 questions on Artificial Intelligence covering the basics, and latest developments in the field. Below are some excerpts which are both interesting and alarming…
- the computer doing what we told it to do but not what we wanted it to do. They are goal-driven and they are excellent at that. But that means it’d strive to achieve the goals it’s set (programmed) to do, and may take whichever avenue to achieve it the most effective and efficient manner- that may include cheating, or some immoral (according to human values) ways. Not that the creator/programmer of the AI machine intend that way, or may have include every means he can think of preventing it taking such inhumane actions, but the creator/programmer may (and this is likely as/when AI surpasses human intelligence) not even have thought out those possibilities. To quote an extreme scenario- if the AI who’s goal to find an answer to an immensely complicated problem and the most effective way is to maximize it’s computer resources (such as by grabbing every computer in the word for its use), then one quick way is to stop all other computer consumption such as by any human beings, an effective way it may think of would be to kill all humans by release of certain bio-pathogens which it may have under it’s control in whatever convoluted way (that humans may never think of?)?
- AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default. They’ll have the goals that their training environment set them up for, and no doubt this will fail to encapsulate the whole of human values.
- “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.” – Nick Bostrom wrote in 2014, heis a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program
- Facebook’s chief AI scientist Yann LeCun, is a vocal voice on the skeptical side. But while he argues we shouldn’t fear AI, he still believes we ought to have people working on, and thinking about, AI safety. This I totally agree, and that this is urgent.
- Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly. This is vitally important, but I’m not optimistic about it. Just think about all those fake food harming people, those who make them know about it, but that does not prevent them from doing it just for own profit.
- To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.
Why I am pessimistic and people should be worried?
In the 6th question of the said article – What are we doing right now to avoid an AI apocalypse?
“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper this year reviewing the state of the field. The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety. Artificial narrow intelligence (as opposed to AGI) is basically focused on specific purposes/goals like playing chess (Deep Blue is an example), these are rather advanced and widely available now.
Look at the more talked about topic on global climate change and the imminent detrimental effect on humans, just observe how the politicians are reacting to this. Global climate change would probably not kill all on earth, but AGI can- seriously! The particular danger of AGI on affecting humans is that, unlike the advance of nuclear fusion (the atomic bombs) which is immediately understood and easy to explain to all, people and politicians can continue with endless debates while just one advance in such super-intelligence (which is capable of creating more and better ones) without proper precautions and enforceable laws would spell the end of all!
This is not just my pessimistic view, but that of many prominent scientists and researchers in this field since 1965- the latest being from Stephen Hawking and Elon Musk. I do hope I am only talking about science fiction.
Credits and Sources:
The case for taking AI seriously as a threat to humanity. (2018). Vox. Retrieved 23 December 2018, from https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment
Benefits & Risks of Artificial Intelligence – Future of Life Institute. (2016). Future of Life Institute. Retrieved 23 December 2018, from https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/