AI and ethics – event review
This is the first of two blogposts by Huw Williams about the Law Society event on “AI and ethics”. It covers the Opening Address by Lord Clement-Jones, and the Keynote Address by Professor Richard Susskind. The second blogpost will cover the three panel sessions and the closing remarks by Christina Blacklaws, Vice President of the Law Society.
At the end of April, the Law Society organised an “AI and ethics” event, hosted by Hogan Lovells in their modern lecture theatre in Holborn. The timing was very pertinent: the previous day the Government had announced an AI Sector Deal worth almost £1 billion, including almost £300 million of private sector investment and 1,000 new government-funded AI PhDs. And a week earlier the House of Lords Select Committee on Artificial Intelligence had published its report on AI, containing some 78 recommendations.
The Opening Address was given by Lord Clement-Jones, chair of the House of Lords Select Committee on AI. In the time available he wasn’t able to go into much detail about the report and its recommendations – we may review the full report in a future posting. But he did stress that the UK was well placed to lead the debate on ethical AI, because of its history and contacts. He argued that it was important not to stifle the development of novel AI systems through over-regulation, so the committee was looking instead to ensure an ethical dimension in the actions of existing regulators, such as Ofcom. He also noted that AI systems were already operating, so there was no time to waste.
He spent much of his time on the committee’s recommended five-point “AI Code”, intended for adoption both nationally and internationally:
- Artificial intelligence should be developed for the common good and benefit of humanity.
- Artificial intelligence should operate on principles of intelligibility and fairness.
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
These sound like worthy and sensible points, but the closer one looks at them, the more the difficulties emerge:
- In a capitalist system, how can one ensure “common good and benefit”?
- Artificial intelligence often lacks transparency (a point much debated later), so it may be hard to ensure intelligibility or guarantee fairness.
- The Cambridge Analytica scandal has already highlighted how hard it is to manage a sensible line on data privacy.
- Education is clearly a vital need, but will it be funded? Lifetime learning will be essential, as the world of work is transformed.
- Several automated weapon systems already exist. The Phalanx close-in weapon system, fitted to US Navy ships such as the amphibious assault ship USS Boxer, is essentially a radar-guided Gatling gun that can detect and automatically destroy anything coming its way.
The Keynote Address was given by Professor Richard Susskind, joint author with his son Daniel of the book “The Future of the Professions”.
He said that he would concentrate primarily on “narrow AI” rather than “artificial general intelligence”, but noted that even there significant developments are already happening. Commentators often overstate the impact of new technology in the short term, but understate it in the long term (say ten years for AI). He stressed that in assessing which areas of human thinking AI could replace, people often argued that the human method of thinking could not be mimicked. However, all that is required is for the narrow AI system to achieve the same or a better outcome: an autonomous vehicle, for example, does not need a humanoid robot sitting behind the steering wheel.
Furthermore, when you delve into experts’ explanations of why they took a decision, it often comes down to intuition or judgement – even they are not fully “transparent”. The AlphaGo system that beat the world Go champion made a move which experts at first thought was a mistake but ultimately described as “creative”. The inductive nature of machine learning, he argued, makes transparent explanations impossible. And a narrow AI system doesn’t “know” anything – AlphaGo didn’t go out celebrating with friends after its win!
Turning to ethics and meta-ethics (how we build ethical systems), he said that ethics were “normative” (what ought to be) rather than objective (what is). Can anything be argued to be objectively right or wrong?
He ended with four concerns:
- a) Is there an existential threat to the human race from AI? Bill Gates, Elon Musk and Stephen Hawking have all suggested there could be. Susskind suggested reading Nick Bostrom’s book “Superintelligence” on the subject. Personally, he felt the possibility was very far off, but AI would have major disruptive societal effects well before then.
- b) Are there moral limits to the decisions we should let AI systems make, even if they could make them? Would you want an AI doctor deciding to switch off a life-support system? An AI judge passing a life sentence? A death sentence? How do we establish where the “no-go” areas are?
- c) What will happen to the future of work? Will change come so quickly that we face technological unemployment, or will enough new jobs be created to offset those lost?
- d) What will be the effects of AI ownership and control? Income will flow as a return to capital rather than to labour, which is likely to increase societal inequality – wealth is concentrated already, but will become more so. Does this call for new forms of redistribution?
Being an optimist, Susskind suggested that new bodies such as the Centre for Data Ethics and Innovation and the Nuffield Foundation’s Ada Lovelace Institute would help us move towards some answers.
With a tight agenda, there was only time for one question. An AI entrepreneur asked whether we needed to worry about AI equivalents of Greenpeace resisting “progress”. Susskind replied that balanced, sensible debate would ensure good ideas weren’t crowded out. To my mind, this highlighted a more fundamental question: whose ethics is it anyway?
Written by Huw Williams, SAMI Principal
The views expressed are those of the author and not necessarily of SAMI Consulting.
SAMI Consulting was founded in 1989 by Shell and St Andrews University. They have undertaken scenario planning projects for a wide range of UK and international organisations. Their core skill is providing the link between futures research and strategy.
If you enjoyed this blog from SAMI Consulting, the home of scenario planning, please sign up for our monthly newsletter at newreader@samiconsulting.co.uk and/or browse our website at http://www.samiconsulting.co.uk