CL's Blog

AI as a threat to humanity: seriously?

I have been blogging about AI since 2016, with one post each year (read my 2016 and 2017 posts). In this post, I am going to go further into the danger of human extinction and why it is possible. As for what can be done about it, I have to say I am not optimistic. Read on to see why, drawing on the views of many scientists and researchers in this field.

Most of the information below is from this December 2018 article, which poses nine questions on artificial intelligence covering both the basics and the latest developments in the field. Below are some excerpts that are both interesting and alarming…

Why am I pessimistic, and why should people be worried?

The sixth question of that article asks: What are we doing right now to avoid an AI apocalypse?

“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper this year reviewing the state of the field. The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety. Artificial narrow intelligence (as opposed to AGI) is focused on specific purposes or goals, such as playing chess (Deep Blue is an example); such systems are already quite advanced and widely available.

Consider the much more talked-about topic of global climate change and its imminent harm to humans, and observe how politicians are reacting to it. Global climate change would probably not kill everyone on Earth, but AGI could. The particular danger of AGI is that, unlike nuclear weapons, whose threat is immediately understood and easy to explain to everyone, people and politicians can carry on with endless debates while a single advance in such a super-intelligence (one capable of creating more and better versions of itself), made without proper precautions and enforceable laws, could spell the end of us all.

This is not just my pessimistic view, but that of many prominent scientists and researchers in this field going back to 1965, the most recent being Stephen Hawking and Elon Musk. I do hope I am only talking about science fiction.

Credits and Sources:

The case for taking AI seriously as a threat to humanity. (2018). Vox. Retrieved 23 December 2018, from https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment

Benefits & Risks of Artificial Intelligence – Future of Life Institute. (2016). Future of Life Institute. Retrieved 23 December 2018, from https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
