
“Immature” Superintelligence

February 6, 2019


I met up with Tony Czarnecki of Sustensis recently to follow up on some of his ideas on superintelligence and the Singularity. Tony is the author of a book called “Who can save Humanity from Superintelligence?”, and is concerned that such developments could happen very quickly and would need international co-operation to manage the risks.

Despite the many rapid advances of AI across different fields, I’ve always been sceptical about the speed with which Artificial General Intelligence (AGI) might come about. The UN has suggested maybe 2045–50, which would give us time to adjust – but even that I thought was too “optimistic”. Tony had in mind just 10 years before real threats emerge, so I was interested to hear why.

Tony’s intriguing new idea was the concept of a malevolent “Immature” Superintelligence, which on first hearing made me think of AI behaving like an adolescent – rather terrifying! AI doesn’t have to reach full AGI before it could pose serious threats to society. It could purposefully set off malicious process-control events, or it could make mistakes, erroneously executing its tasks. Such threats could include switching off critical infrastructure, releasing bacteria from controlled labs, creating false military postures (and hence over-reactions), even firing nuclear weapons.

The underlying reason these risks develop is the inherent difficulty of training AI systems to meet specific objectives. In the confines of a highly structured environment like chess or Go, success or failure is clear. But in real-world situations even defining what “good” looks like is itself a challenge. Setting up measures and metrics is fraught with difficulty and often produces unwanted consequences – think of Ofsted, who have just decided that education is not all about exam results. Anyone who has ever designed a bonus system for salespeople knows how inventive their bonus-maximising behaviours can be: you had better be pretty sure you are rewarding the right things.
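To make the metric-gaming point concrete, here is a minimal sketch in Python – entirely my own hypothetical numbers, not anything from Tony’s book – of a bonus scheme that rewards revenue while the business actually cares about profit:

```python
# Hypothetical toy example: a salesperson's bonus rewards revenue alone,
# ignoring margin. The bonus-maximising move is to discount heavily,
# which lifts the measured metric while destroying the real objective.

def bonus(revenue: float) -> float:
    """Proxy objective: 5% of revenue; margin is invisible to the metric."""
    return 0.05 * revenue

def outcome(list_price: float, base_units: int, discount: float):
    """Assume deeper discounts sell more units but erode unit profit."""
    units = base_units * (1 + 4 * discount)       # demand rises with discount
    price = list_price * (1 - discount)
    cost = 0.7 * list_price                       # fixed unit cost
    return price * units, (price - cost) * units  # revenue, profit

for d in (0.0, 0.25):
    revenue, profit = outcome(list_price=100.0, base_units=100, discount=d)
    print(f"discount={d:.0%}  revenue={revenue:8.0f}  "
          f"profit={profit:8.0f}  bonus={bonus(revenue):6.0f}")
```

The bonus-maximising agent takes the heavy discount – exactly the behaviour the designer did not intend. AI objective functions fail in the same way, only less visibly.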

There are also issues around AI systems setting their own goals, or at least sub-goals in service of some meta-level objective, and around having them identify when specific rules should be broken or ignored in the context of the greater good. When do you drive through a red light (it might not be working properly)? When do you put country before party? When do you act against your own self-interest?
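As a minimal sketch of how hard that is to encode (again a hypothetical illustration of mine, using the red-light example): any exception to a hard rule must itself be written as a rule, and the exception predicate is just as fuzzy as the judgement it replaces.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    colour: str            # "red" or "green"
    seconds_on_red: float  # how long the light has shown red

def may_proceed(signal: Signal, fault_threshold: float = 300.0) -> bool:
    """Rule: stop on red. Exception: treat the light as faulty after a
    long wait. The threshold is arbitrary; the real judgement ("is the
    light actually broken, and is proceeding safe?") resists encoding."""
    if signal.colour == "green":
        return True
    return signal.seconds_on_red > fault_threshold

print(may_proceed(Signal("red", 30.0)))   # False: obey the rule
print(may_proceed(Signal("red", 600.0)))  # True: rule overridden
```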

This leads us into the difficult area of ethics. And, in a global context, whose ethics? Even if a UN for AI could be established, the chances of agreement on a set of “Universal Values of Humanity” must be low – the Universal Declaration of Human Rights is a major achievement, but hardly a comprehensive success. Can we expect to see religious AI? LGBTQ+ AI?

This takes us on to the “wetware” argument – that human (and animal) intelligence is intrinsically related to its physical nature. Octopus intelligence is different because the octopus senses the world differently. Basically, this challenges the Mind–Body dualism of Cartesian philosophy, which seems unwittingly to underpin much AI work: the claim is that AI cannot become superior to human intelligence until it learns to play, feel pain, become emotional, become unstable, love.

So, much of my scepticism about AGI arriving any time soon remains. But I do agree with Tony that there are huge risks in how AI is implemented, and that concerted action is needed to make sure the world knows that and does something to control what Amy Webb has called the G-MAFIA (Google, Microsoft, Apple, Facebook, IBM, Amazon) actually build.

Written by Huw Williams, SAMI Principal

The views expressed are those of the author and not necessarily of SAMI Consulting.

SAMI Consulting was founded in 1989 by Shell and St Andrews University. They have undertaken scenario planning projects for a wide range of UK and international organisations. Their core skill is providing the link between futures research and strategy.

If you enjoyed this blog from SAMI Consulting, the home of scenario planning, please sign up for our monthly newsletter at newreader@samiconsulting.co.uk and/or browse our website at http://www.samiconsulting.co.uk

One comment

February 9, 2019, 11:51 am:

In a recent workshop on ICT and safety at work, one group came up with a futures headline of “Out of control robot ‘gets rid of’ inefficient workers – by killing them”!
