AI and ethics – event review part 2
This is the second of two blogposts by Huw Williams about the Law Society event on “AI and ethics”. The first covered the Opening Address by Lord Clement-Jones, and the Keynote Address by Professor Richard Susskind. This blogpost covers the three panel sessions and the closing remarks by Christina Blacklaws, Vice President of the Law Society.
The Opening Address gave a brief overview of the House of Lords Select Committee on Artificial Intelligence’s report, in particular highlighting a recommended AI Code. Professor Susskind’s talk raised several issues of concern, notably whether there are “no-go zones” where AI systems should not be used, and the discussion raised a deep question of “whose ethics is it anyway?”
Susskind was followed by three panel sessions. The first panel addressed the question of how to develop a multi-disciplinary approach to AI. The audience apparently represented a wide range of disciplines, not just lawyers, and the idea was that diversity of this kind was essential for any debate on ethics. The panellists themselves came from academia, consultancy, software development and government.
Issues raised included:
- How to measure the performance of an AI system: are we content with a system which relies on a good average performance? Or are we concerned about its worst performance? Or its performance in non-standard cases? Maybe it depends on what the system does – occasional bad performance in film recommendations may be tolerable, whereas in cancer diagnosis it might not be.
- The debate on AI should be conducted in language all could understand – one panellist quoted a Sun headline to make the point – it needs to be a comprehensible debate if it is to engender public trust.
- How people in Japan appear to be more accepting of robot technology, even in personal care settings.
- How the new AI Sector Deal offers new opportunities for the service sector.
In the Q&A session, discussion covered:
- how to help Boards and technologists communicate the risks and values; how to establish ethical review boards that engaged at the development stage;
- how, given that AI systems are already out there, ethical development can keep up; the lack of transparency of the AI “black box” was seen as a major concern; how, in the absence of transparency, there at least needed to be accountability;
- how ethics boards ought to be constituted – lawyers, technologists, consultants!
The second panel addressed the role of global standards and regulation. We could use the concepts of liability that apply to other products – had systems been developed with “reasonable care” and “due diligence”? It was important that developers demonstrated that they had understood the data which the system used – to what extent was it biased? What were its limitations, eg when applied to different ethnic groups? The IEEE has crowd-sourced the views of 250 “global thought experts” to produce guidelines for “ethically aligned design”, based on principles of human rights and wellbeing.
One panellist, Patricia Christias of Microsoft, argued that their designs were based on “timeless values” of:
- Fairness and diversity
- Privacy
- Safety and reliability (planning for unintended consequences)
- Inclusivity; and
- Transparency and accountability.
The Q&A session soon challenged Christias’ view. Were there really universal, timeless values? This surely is a fundamental point – views on slavery, gay marriage and meat-eating will vary over time and between cultures. If much AI development is done in China, does that mean that Chinese communist orthodoxy is intrinsically built-in? How would Jewish and Muslim traditions be accommodated in an AI coroner system? Whose ethics do we give preference to?
Other questions covered:
- Transparency: to what extent was it possible? Was it a fallacy anyway? Should the “editorial policies” of algorithms be as obvious as those of newspapers?
- How can we ensure higher levels of digital literacy? A better understanding of the context of data?
- Can we build a stronger culture of responsible research? Would people be prepared to pay more for systems labelled “ethics inside”, as they do for organic food?
- Do we hold AI and people to the same standards? Or stricter ones?
- Two developers were concerned that they were already providing AI systems which hadn’t been through any ethical vetting; we need to get on with this.
A key element of the response was the notion of “trustworthiness” – some form of certification process, as for aircraft, where the probability of failure is not zero but is at an “acceptably” low level.
The final panel was chaired by Christina Blacklaws, the Law Society Vice President and President-Elect. Its topic was “No-Go and Must-Go Zones” – are there any solid absolutes in this area?
The first panellist advocated a political process akin to Human Rights Treaties and building ethics into AI curricula, with professors in different countries adapting the courses to local cultures. Boards must be educated as well; we needed certification bodies and regulators; and developing countries should be engaged. She called for a Global AI Council, probably under the UN. In other words, it was a political discussion rather than a technical one.
Other panellists argued for:
- “critical active engagement”, a combination of human and machine learning approaches, as a way of navigating differing ethical values
- Transparency and open data, perhaps incentivised in some way. One panellist challenged the assertion made earlier in the day that film and book recommendations were somehow of less concern – filtering news and cultural exposure led to self-reinforcing “bubbles”, reducing debate in society.
- Current case law as a good base, more proactive regulators and “must-go” education – teach philosophy to primary school children.
Summing up the morning, Christina Blacklaws highlighted:
- Multi-disciplinarity and inclusiveness: all views were needed
- Trustworthiness: to avoid a public reaction like that to GM foods
- Whose values, at what time? Only political debate could resolve that.
Congratulations to the Law Society for arranging such an intensive, intriguing and thought-provoking morning. Let’s hope that these initiatives and ideas are able to keep up with the pace of technological change. It’s clear the House of Lords report is just a first step in this process.
Written by Huw Williams, SAMI Principal
The views expressed are those of the author and not necessarily of SAMI Consulting.
SAMI Consulting was founded in 1989 by Shell and St Andrews University. They have undertaken scenario planning projects for a wide range of UK and international organisations. Their core skill is providing the link between futures research and strategy.
If you enjoyed this blog from SAMI Consulting, the home of scenario planning, please sign up for our monthly newsletter here and/or browse our website.