June 2016 Pulse
Without doubt one of the most fascinating and rapidly developing fields in modern technology is robotics. From surgical robots to DNA nano-robots capable of bipedal motion, the advances in this field in the past decade have been so staggering and surreal that they seem more like the stuff of science fiction than of reality.
But accompanying the robot revolution are numerous complex philosophical, ethical and legal issues that have profound social implications.
As Paolo Dario has observed, the shift in robotic engineering from a discipline in which robots perform tasks assigned by humans to one in which robots work alongside humans presents new and profound ethical challenges.
One of the more perplexing issues confronting the newly minted field of ‘roboethics’ (a term first coined at a conference in Italy in 2004) is the question of responsibility. Simply put, who should be held responsible when a robot does harm?
For example, who should bear the blame when a military robot malfunctions and kills civilians instead of enemy combatants – the designer, the manufacturer, the programmer, the operator or the robot itself?
When the robots in question are merely machines controlled directly by users, the questions of liability and responsibility are much the same as those raised by other machines, such as cars, and therefore pose no new ethical problems. The responsibility for robot failure may rest with the designer, manufacturer or user, depending on the causes of the malfunction and the circumstances.
Here, we must distinguish between two types of responsibility.
When a user deliberately directs the robot to cause harm, both the user and the robot may be said to be causally responsible for the harm, but only the user (not the robot) is morally responsible. This is because the robot, controlled by the user, has no choice of its own.
But what about semi-autonomous or autonomous robots that are capable of performing tasks by themselves, without direct or explicit human control? Who should be held responsible for the failures of such robots? Can autonomous robots be said to be morally responsible for the harm they cause?
Put differently, can autonomous robots be considered as moral agents?
Some ethicists argue that in order to achieve clarity on this question, we must go to the source of any robotic moral agency.
The actions of the robot are based on software that begins as code written by human beings. The robot is therefore an amoral agent and cannot be said to be morally responsible for its actions because it is unable to behave independently of the way it is programmed.
However, with increasing sophistication in programming and artificial intelligence (AI), the picture becomes much more complex.
The philosopher of science Peter Asaro has suggested a continuum of moral agency for robots, ranging from robots that are wholly amoral to those that may be considered fully autonomous moral agents, depending on the sophistication of their programming.
He suggests that we should think of different tiers of robots above amoral status on this continuum of moral agency. The first tier comprises what he calls ‘robots with moral significance’, by which he means robots that are able to make decisions with ethical consequences.
Asaro describes the second tier of ‘moral’ robots as machines ‘with moral intelligence’. This category differs from ‘robots with moral significance’ in that these robots are able to assess the ethics of a particular course of action because moral precepts are built into their programming.
Superior even to these are the machines of the third tier, which possess what Asaro calls ‘dynamic moral intelligence’. These machines not only have the ability to reason morally but are also able to learn new ethical lessons from their experiences and even develop their own moral codes.
Finally, we have machines that are fully moral agents. For Asaro, this would mean that such machines would have acquired self-awareness, possess some form of consciousness, have a sense of self-preservation and could even feel the threat of pain or destruction (death).
If this final stage were possible – and many scientists and philosophers remain sceptical – it would raise serious philosophical and theological issues of the possible ‘personhood’ of such machines.
Be that as it may, each advance in robotics would present different moral and legal challenges.
For many years, Isaac Asimov’s Three Laws of Robotics have provided the framework for thinking about issues of liability and responsibility.
The Laws are as follows: (1) a robot may not injure a human being or, through inaction, allow a human being to be injured; (2) a robot must obey orders given by human beings except when those orders conflict with the first law; and (3) a robot must protect its own existence, but in doing so it must not transgress the first or second law.
But with highly autonomous and intelligent robots (those belonging to the third and fourth tiers), these laws no longer apply. In fact, as Daniel Howlader has rightly observed, ‘To create machines that can make their own choices, be aware of their existence, and at the same time subordinate that free will to the benefit of humanity is frankly unethical’.
Here, the question ‘Who is responsible?’ must not only be looked at from a different angle but also be radically broadened. The question must be directed not just at the designer, manufacturer, programmer, user and the robot itself. It must be directed at society as a whole for allowing some kinds of machines to come into existence in the first place.
As researchers Pawel Lichocki, Peter Kahn and Aude Billard have poignantly put it, in designing and building robots, ‘We might be motivated by the beauty of our artifacts. Or by their usefulness. Or by the economic rewards. But in addition we are morally accountable for what we design and put into our world’.
Dr Roland Chia is Chew Hock Hin Professor of Christian Doctrine at Trinity Theological College and Theological and Research Advisor of the Ethos Institute for Public Christianity. This article was originally published in the December 2016 issue of the Methodist Message.