Nov 23 • 3min read
The Real Reason We Should Fear AI
By Launchbase

In a world of Black Mirrors, I, Robots and Terminators, it’s no wonder people fear technology, or more specifically, robots. For all the progress we are making, the message portrayed on television, in the media and in books is loud and clear: artificial intelligence (AI) is going to end us.

Elon Musk, Bill Gates and Stephen Hawking have all shared concerns about artificial intelligence, and, while they haven’t yet alluded to murderous robots, their concerns are not far off. Musk claimed in 2014 that AI is the most serious threat to the survival of the human race. We actually thought the same until Trump was elected.

Musk himself has invested in AI research through his non-profit OpenAI, claiming he wants to ‘keep an eye on what’s going on’ because of the potentially dangerous outcomes. Well, I don’t know about you, but if there’s one person we trust to take care of us when the robots attack, it’s Musk.

Bill Gates has also voiced concerns. Although he said it will be ‘decades’ before artificial intelligence poses a threat to humans, he admitted that he is ‘in the camp that is concerned about super intelligence’. Stephen Hawking took his concerns even further, saying he believes that the ‘development of full artificial intelligence could spell the end of the human race’. In an open letter to the AI community this year, Hawking wrote that ‘one can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand’.

Facebook founder Mark Zuckerberg has a rather more optimistic approach. Zuckerberg believes that ‘along the way we will figure out how to make [AI] safe’, and compared our fears to how we would originally have felt about planes. If people were focused on safety first, said Zuckerberg, ‘no one would ever have built a plane. This fearful thinking might be standing in the way of real progress’.

As Zuckerberg has experienced the benefits of AI first hand, it’s no wonder he’s feeling so chirpy. He revealed earlier this year that he is dedicating the year to building a home AI butler, telling The Verge that it can already control his home and make his breakfast. Future plans include teaching it to recognise his friends’ faces and let them into his home when they ring the doorbell, as well as adding chatbot capabilities. Just you wait until it turns evil, Mark.

The real worry

It seems reasonable to argue that currently, AI is not that dangerous. Yes, people remain sceptical of ‘super intelligence’, but at the moment it’s not too much of a threat to the human race. We can’t see Amazon’s Echo conspiring against us.

But we still have a number of reasons to be concerned. What if it’s not a case of what AI will do to humans, but in fact the opposite? What could humans do to AI? Can we trust the humans behind artificial intelligence not to abuse their power? Essentially the biggest worry right now is, what happens when Trump gets involved?

The real concern here is the security of AI’s programming. There is no guarantee that robots won’t be maliciously hacked by criminals. In that case it would not be the robots turning bad of their own accord, but humans turning them bad.

Although this fear itself seems a little far-fetched (although less so than the idea of fully conscious murderous robots roaming our streets), hacking is a modern-day concern that can affect us all. Just this year, Charlie Miller and Chris Valasek remotely hacked a moving car, bypassing the vehicle’s safeguards. Miller hoped that their work on the security of self-driving cars would help manufacturers build better protections into their vehicles, and the message hasn’t been ignored: Chrysler are offering a cash reward to hackers who inform the company of flaws in its vehicles. This is an example of AI being hacked; can we really trust that it won’t happen to our super-intelligent robotic butlers?

So, although the AI we see in the media doesn’t seem to be a current threat, it is still entirely rational to be concerned about AI. Yes, AI could soon assist us, and in fact do a huge number of tasks for us: it could work in deadly environments where humans cannot, instantly translate a huge number of languages and diagnose diseases. Yet we cannot ignore the potentially dangerous consequences. A hundred years from now, will AI have saved or destroyed us? The jury’s out.

 
