Robot Ethics – Reports, Articles, Blogs and News Links


A collection of reports, blogs, papers and other written material about robot ethics.


Reports

IEEE Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems

For Public Discussion – By 18th March 2018

Version 2 of this report is available by registering at:
http://standards.ieee.org/develop/indconn/ec/auto_sys_form.html

Public comment on version 1 of this document was invited by March 2017. The document encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies. It was created by committees of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, comprising over one hundred global thought leaders and experts in artificial intelligence, ethics, and related issues.

Version 2 presents the following principles/recommendations:

Candidate Recommendation 1 – Human Rights
To best honor human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans:
1. Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and of traceability to contribute to the building of public trust in A/IS.
2. A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.
3. For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control.

Candidate Recommendation 2 – Prioritizing Wellbeing
A/IS should prioritize human well-being as an outcome in all system designs, using the best available, and widely accepted, well-being metrics as their reference point.

Candidate Recommendation 3 – Accountability
To best address issues of responsibility and accountability:
1. Legislatures/courts should clarify issues of responsibility, culpability, liability, and accountability for A/IS where possible during development and deployment (so that manufacturers and users understand their rights and obligations).
2. Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.
3. Multi-stakeholder ecosystems should be developed to help create norms (which can mature to best practices and laws) where they do not exist because A/IS-oriented technology and their impacts are too new (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.).
4. Systems for registration and record-keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS. Manufacturers/operators/owners of A/IS should register key, high-level parameters (an illustrative registry record is sketched after this list), including:

• Intended use
• Training data/training environment (if applicable)
• Sensors/real world data sources
• Algorithms
• Process graphs
• Model features (at various levels)
• User interfaces
• Actuators/outputs
• Optimization goal/loss function/reward function
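
Purely as an illustration, and not as anything specified by the IEEE report, a registry record capturing these parameters could be sketched as a simple data structure. Every class name and field name below is hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AISRegistryRecord:
    """Hypothetical registration record for an A/IS, mirroring the
    high-level parameters listed in Candidate Recommendation 3."""
    responsible_party: str                              # who is legally responsible
    intended_use: str
    training_data: Optional[str] = None                 # training data/environment, if applicable
    sensors: List[str] = field(default_factory=list)    # real-world data sources
    algorithms: List[str] = field(default_factory=list)
    process_graphs: List[str] = field(default_factory=list)
    model_features: List[str] = field(default_factory=list)
    user_interfaces: List[str] = field(default_factory=list)
    actuators: List[str] = field(default_factory=list)  # outputs
    optimization_goal: Optional[str] = None              # loss function/reward function

# Example entry (values invented for illustration)
record = AISRegistryRecord(
    responsible_party="Example Manufacturer Ltd",
    intended_use="Home assistance robot",
    sensors=["camera", "microphone"],
    optimization_goal="task completion subject to user-comfort constraints",
)
```

The point of such a record, as the recommendation states, is traceability: given a deployed A/IS, anyone should be able to look up who is legally responsible for it and what it was designed to do.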

Standard Reference for Version 2
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. IEEE, 2017. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html

Report, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2, IEEE, December 2017, 136 pages

http://standards.ieee.org/develop/indconn/ec/auto_sys_form.html


Growing the artificial intelligence industry in the UK

This independent review, carried out for the UK government (Business Secretary and Culture Secretary) by Professor Dame Wendy Hall (Professor of Computer Science at the University of Southampton, UK) and Jérôme Pesenti (CEO BenevolentTech), reports on how the Artificial Intelligence industry can be grown in the UK.

The recommendations cover:

  • Improving data access
  • Improving the supply of skills
  • Maximising UK AI research
  • Supporting the uptake of AI

Notably, searches of the recommendations for the words ‘ethics’, ‘morality’, ‘caution’, ‘safety’, ‘principle’ and ‘harm’ all return zero results. Transparency and accountability are mentioned in relation to supporting the uptake of AI (recommendation 14 in the main report). The report refers the reader to The Royal Society and the British Academy review on the needs of a 21st century data governance system (see below).

In the main report (page 14) it states:
‘Trust, ethics, governance and algorithmic accountability: Resolving ethical and societal questions is beyond the scope and the expertise of this industry-focused review, and could not in any case be resolved in our short time-frame.

However, building public confidence and trust will be vital to successful development of UK AI. Therefore this Review stresses the importance of industry and experts working together to secure and deserve public trust, address public perceptions, gain public confidence, and model how to deliver and demonstrate fair treatment. Fairness will be part of gaining economic benefits, and addressing ethical issues effectively to support wider use of AI could be a source of economic advantage for the UK.’

Page 66 of the main report:

‘As noted above, AI can also create new situations with new implications for fairness, transparency and accountability. AI could also change the nature of many areas of work.

AI in the UK will need to build trust and confidence in AI-enabled complex systems. There is already collective activity to work towards guidelines in ethics for automation, but we can expect this field to grow and change. A publicly visible expert group drawn from industry and academia, which engages with these issues would help to build that trust and confidence.’

Page 68 has a section on ‘Explainability of AI-enabled uses of data’ which concludes with the recommendation:

‘Recommendation 14: The Information Commissioner’s Office and the Alan Turing Institute should develop a framework for explaining processes, services and decisions delivered by AI, to improve transparency and accountability.’

and

‘Further on, it is possible that new applications of AI may hold solutions on transparency and explainability, using dedicated AIs to track and explain AI-driven decisions.’

Report, Growing the artificial intelligence industry in the UK, Professor Dame Wendy Hall and Jérôme Pesenti, October 2017
https://www.gov.uk/government/publications/growing-the-artificial-intelligence-industry-in-the-uk


The Royal Society and the British Academy review on the needs of a 21st century data governance system.

‘The amount of data generated from the world around us has reached levels that were previously unimaginable. Meanwhile, uses of data-enabled technologies promise benefits, from improving healthcare and treatment discovery, to better managing critical infrastructure such as transport and energy.

These new applications can make a great contribution to human flourishing but to realise these benefits, societies must navigate significant choices and dilemmas: they must consider who reaps the most benefit from capturing, analysing and acting on different types of data, and who bears the most risk.’

Article and Report, The Royal Society, Data management and use: Governance in the 21st century – a British Academy and Royal Society project, October 2017
https://royalsociety.org/topics-policy/projects/data-governance/


ASILOMAR AI Principles

A set of principles signed by 1,273 AI/Robotics researchers and 2,541 others at the BAI 2017 conference, the Future of Life Institute’s second conference on the future of artificial intelligence, held in January 2017.

‘Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.’

Conference Report, ASILOMAR AI Principles, Future of Life Institute, January 2017
https://futureoflife.org/ai-principles/


BS 8611:2016 – Robot Ethics Standard

Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems

BS 8611 gives guidelines for the identification of potential ethical harm arising from the growing number of robots and autonomous systems being used in everyday life. The standard also provides additional guidelines to eliminate or reduce the risks associated with these ethical hazards to an acceptable level. The standard covers safe design, protective measures and information for the design and application of robots.

This standard is for robot and robotic device designers and managers, and the general public. It was written by scientists, academics, ethicists, philosophers and users to provide guidance on specifically ethical hazards associated with robots and robotic systems and how to put protective measures in place. It recognizes that these potential ethical hazards have broader implications than physical hazards, so it is important that different ethical harms and remedial considerations are addressed. The new standard builds on existing safety requirements for different types of robots, covering industrial, personal care and medical robots.

Hardcopy and PDF document, BS 8611:2016 – Robots and robotic devices, British Standards Institution, April 2016
Available from the British Standards Institution (BSI) Shop




Articles


What jobs will still be around in 20 years?

‘Jobs won’t entirely disappear; many will simply be redefined. But people will likely lack new skillsets required for new roles and be out of work anyway’

Newspaper Article, What jobs will still be around in 20 years? Read this to prepare your future, The Guardian, June 2017
https://www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robots-skills-creative-health?CMP=share_btn_link


The Doomsday Invention

Article, The Doomsday Invention, Raffi Khatchadourian – Review of Nick Bostrom’s warning about AI, The New Yorker, November 2015
https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom




Blogs

Regulator Against the Machine

Artificial intelligence (AI) is part of today’s zeitgeist: whether it be parallels with ‘I, Robot’ dystopias or predictions about its impact on society. But for all the potential, the development of machines that learn as they go remains slow in health care.

Blog post, Regulator Against the Machine, British Journal of Health Computing, November 2017
http://www.bj-hc.co.uk/blog-regulator-against-machine


What do Robots Believe? – Ways of Knowing

How do we know what we know? This article considers: (1) the ways we come to believe what we think we know; (2) the many issues with the validation of our beliefs; and (3) the implications for building artificial intelligence and robots based on the human operating system.

Blog Post, Human Operating System 4 – Ways of Knowing, Rod Rivers, September 2017
http://www.wellbeingandcontrol.com/?p=1062




Books

Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence 1st Edition

Patrick Lin (Editor),‎ Keith Abney (Editor),‎ Ryan Jenkins (Editor)

Expanding discussions on robot ethics ‘means listening to new voices; robot ethics is no longer the concern of a handful of scholars. Experts from different academic disciplines and geographical areas are now playing vital roles in shaping ethical, legal, and policy discussions worldwide. So, for a more complete study, the editors of this volume look beyond the usual suspects for the latest thinking. Many of the views as represented in this cutting-edge volume are provocative–but also what we need to push forward in unfamiliar territory.’

Book, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, 1st Edition, ISBN-13: 978-0190652951, ISBN-10: 0190652950, 440 pages, October 2017
https://global.oup.com/academic/product/robot-ethics-20-9780190652951?cc=us&lang=en&


Robot Ethics: The Ethical and Social Implications of Robotics

(Intelligent Robotics and Autonomous Agents series)
Patrick Lin (Editor),‎ Keith Abney (Editor),‎ George A. Bekey (Editor)

‘Starting with an overview of the issues and relevant ethical theories, the topics flow naturally from the possibility of programming robot ethics to the ethical use of military robots in war to legal and policy questions, including liability and privacy concerns. The contributors then turn to human-robot emotional relationships, examining the ethical implications of robots as sexual partners, caregivers, and servants. Finally, they explore the possibility that robots, whether biological-computational hybrids or pure machines, should be given rights or moral consideration.’

Book, Robot Ethics: The Ethical and Social Implications of Robotics, January 2014
https://mitpress.mit.edu/books/robot-ethics


Moral Machines: Teaching Robots Right from Wrong

1st Edition by Wendell Wallach (Author),‎ Colin Allen (Author)

‘Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue that even if full moral agency for machines is a long way off, it is already necessary to start building a kind of functional morality, in which artificial moral agents have some basic ethical sensitivity. But the standard ethical theories don’t seem adequate, and more socially engaged and engaging robots will be needed. As the authors show, the quest to build machines that are capable of telling right from wrong has begun.

Moral Machines is the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics.’

Book, Moral Machines: Teaching Robots Right from Wrong, ISBN-13: 9780195374049, first published January 2009
http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780195374049.001.0001/acprof-9780195374049




News Items


Data is the fuel for AI

Newsletter Post, Data is the fuel for AI, so let’s ensure we get the ethics right, Birgitte Andersen, City AM Newsletter, December 2017

http://www.cityam.com/276828/data-fuel-ai-so-lets-ensure-we-get-ethics-right


BBC News: MEPs vote on robots’ legal status – and if a kill switch is required

MEPs have called for the adoption of comprehensive rules for how humans will interact with artificial intelligence and robots.
The report makes it clear that it believes the world is on the cusp of a “new industrial” robot revolution.

It looks at whether to give robots legal status as “electronic persons”.
Designers should make sure any robots have a kill switch, which would allow functions to be shut down if necessary, the report recommends.
Meanwhile users should be able to use robots “without risk or fear of physical or psychological harm”, it states.

BBC News, MEPs vote on robots’ legal status – and if a kill switch is required, By Jane Wakefield, Technology reporter, January 2017
http://www.bbc.co.uk/news/technology-38583360


BBC News, Sex robots: Experts debate the rise of the love droids

Would you have sex with a robot? Would you marry one? Would a robot have the right to say no to such a union?

These were just a few of the questions being asked at the second Love and Sex with Robots conference, hastily rearranged at Goldsmiths, University of London, after the government in Malaysia – the original location – banned it.

BBC News, Sex robots: Experts debate the rise of the love droids, By Jane Wakefield, Technology reporter, December 2016
http://www.bbc.co.uk/news/technology-38389862


About This Blog

This series of blog postings takes a multi-disciplinary approach to social policy, bringing together ideas from psychology, economics, neuroscience, philosophy and related subjects to inform policy makers and other professionals about how we might think in new ways about the individual and society. There are some easy ways to read it:

• Very Easy – Just read the blog titles: most blog titles are propositions that the blog content attempts to justify. Just reading the titles in order from first to last will provide an overview of the approach.

• Quite Easy – Just read the text in bold. This brings out the main points in each posting.

• Easy – Just watch the videos. This is easy but can take a while. The running time of each video can be seen in the caption above it. Hover over the video to see the controls – play and pause, large screen, and navigate around.

• Harder – Read the whole blog. Useful if you are really interested, want to learn, or want to comment, disagree with the content, have another angle or whatever. The blog is not being publicised yet but please feel free to comment and I will try to respond if and when I can.

The blog attempts not to be a set of platitudes about what you should do to be happy. In fact, I would like to distance myself from the ‘wellbeing marketplace’ and all those websites/blogs that try and either sell you something or proffer advice. This is something quite different. It takes as its premise that there is a relationship between wellbeing, needs and control in both the individual and society. If needs are not being met and you have no control to alter the situation, then wellbeing will suffer.

While this may seem obvious, there is something to be gained by understanding the implications of this simple idea. We are quite used to thinking about wellbeing in terms of specifics like money, health, relationships, work and so on, but less familiar with dealing with the more generic and abstract concepts of need and control.

Taking a more abstract approach helps filter out much of the distraction and noise of our usual perceptions. It focuses on the central issues and their applicability across many specifics that affect how we think and feel.

The blog often questions our current models of the way we think about the human condition and society. It looks at the things we all know and talk about – decisions and choices, relationships and loss, jobs and taxes, wealth and health – but in a way in which they are not usually described. It tries to develop a new account that draws on a broadly based understanding of what we now know from science, culture and common sense.

If you are looking for simple answers you will not find them here. This is not because the answers are complex. It is because the answers are not necessarily what you expect.

If you are looking to explore in some depth the nature of wellbeing and how it is influenced by what you can control, and what others can control that may affect you, then read on. Playing through some of these ideas into the specifics of policy, at the level of society and the individual, will take time but I hope you will see the virtue of working from first principles.

When walking through any landscape different people will see different things. A geologist might see an ice-age come and go, forming undulations in its wake. A politician might see territorial boundaries. Somebody else may see a hill they have to climb together with the weight of their back-pack.

Taking a perspective of wellbeing and control is different from how we normally look at the world. It's a deeper look at why and how things happen as they do and the consequences on wellbeing. It questions the relationship between intention and outcome.

We normally see and act through the well-worn habits of our thoughts and behaviours as they have evolved to deal with things as they are now. We mainly choose the easy options that require the least resource. As a survival strategy this generally works well, but it also entrenches patterns of thought, behaviour and emotion that sometimes, for the benefit of our wellbeing, need to be changed. When considering change, people often say ‘well, I wouldn’t start from here’. And that’s the position I take. I am not starting from the way things are or have evolved, but from the place they might have been had we known what we know now and had designed them.

The blogs argue that, in an era of specialisation, we have forgotten the big picture – we act specifically and locally within the silos of our specialised education and experience. We check process rather than outcomes. We often fail to integrate our knowledge and apply it to the design of our social and work systems (as well as our own thoughts and behaviours).

To understand society we first need to understand the individual, and to this end a psychological account of how we feel, think and behave, based on notions of wellbeing and control, is proposed. And not in an abstract, airy-fairy kind of way, but as a more or less precise theory that forms the basis of a predictive and testable computational model. The theory is essentially about how, both as individuals and as a society, we manage multiple (and often conflicting) intentions in real time within limited resources. I call this model 'the human operating system'. This is like a computer operating system except that it is motivated by emotions, modulated by reason and is expressed in the language of mind and its qualities of agency and intentionality.

Just as in the mathematics of fractal geometry, complex structures can emerge from simple rules. The explanation given of the interplay between emotions, physical bodily states, thoughts and behaviours shows how much of the complexity in the individual can be accounted for by a set of relatively simple rules. This can be modelled using a system of symbolic representation and manipulation involving intentions and priorities operating in a complicated and changing environment.
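
The ‘human operating system’ is described here only in prose. Purely as a hedged illustration of the kind of simple rule set the paragraph above has in mind – not the author’s actual model – the toy sketch below ranks competing intentions by urgency-weighted benefit per unit cost and acts on as many as a fixed resource budget allows. All class names, field names and numbers are hypothetical:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Intention:
    name: str
    urgency: float   # how pressing the underlying need is right now (0-1)
    benefit: float   # expected wellbeing gain if acted on (0-1)
    cost: float      # resource (time/energy) required to act on it

def choose_actions(intentions: List[Intention], budget: float) -> List[Intention]:
    """Toy rule: rank competing intentions by urgency-weighted benefit per
    unit cost, then act on as many as the resource budget allows."""
    ranked = sorted(intentions,
                    key=lambda i: (i.urgency * i.benefit) / i.cost,
                    reverse=True)
    chosen, spent = [], 0.0
    for intention in ranked:
        if spent + intention.cost <= budget:
            chosen.append(intention)
            spent += intention.cost
    return chosen

# Example: an agent with limited "energy" decides which needs to act on first.
today = [
    Intention("eat", urgency=0.9, benefit=0.8, cost=1.0),
    Intention("finish report", urgency=0.6, benefit=0.7, cost=3.0),
    Intention("call friend", urgency=0.3, benefit=0.9, cost=1.0),
]
print([i.name for i in choose_actions(today, budget=3.0)])
```

Even a rule this simple produces different behaviour as urgency, benefit and available resources shift, which is the kind of complexity-from-simple-rules that the paragraph above describes.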

The language and models that we use to understand the individual can also be applied to organisations and other structures in society. Through an understanding of what makes for wellbeing in the individual we can also understand what makes for better wellbeing in society generally. The focus, therefore, is on understanding the individual and then using that understanding to inform how we might think about other structures in society and how all these structures relate to each other from the point of view of wellbeing, shifting patterns of control and the implications for social policy.