
Pulse
4 March 2024

In 2018, a group of scholars from the United States, Canada and Singapore published a paper entitled ‘Building Ethics into Artificial Intelligence.’ The paper examines ‘recent advances in techniques for incorporating ethics into AI …’

The explosion of AI in many different sectors has been welcomed by many.

However, scholars working on the phenomenon of AI from the perspectives of different disciplines have also recognised a number of significant limitations and vulnerabilities in machine learning, despite its rapid advances and impressive achievements.

One obvious limitation is that machine learning depends on a vast amount of data if it is to work well. The amount of data the system is fed determines the predictive power that the AI-powered machine will possess.

Also obvious is the fact that for machine learning to perform optimally, it needs not only a huge amount of data but also quality data. If the machine learning algorithm is trained on inadequate or inaccurate data, it will make bad predictions, sometimes with disastrous outcomes.

Finally, the machine learning algorithm itself must be well-designed. As Matthew Liao, the Director of the Centre for Bioethics at New York University, puts it, ‘… even if a machine learning algorithm receives adequate and accurate data, if the algorithm itself is bad, it will also make bad predictions.’
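To make the last two limitations concrete, consider the following toy sketch in Python. It is not drawn from the article or from Liao; the data points, labels and both ‘learners’ are invented purely to illustrate that corrupted data and a badly designed algorithm each lead to bad predictions.

```python
# A toy illustration: the same simple learner gives bad predictions when fed
# corrupted data, and a badly designed learner gives bad predictions even on
# good data. Everything below is invented for this example.
from collections import Counter
from typing import List, Tuple

LabelledPoint = Tuple[float, str]

def nearest_neighbour_predict(train: List[LabelledPoint], x: float, k: int = 3) -> str:
    """A reasonable learner: majority vote among the k training points closest to x."""
    nearest = sorted(train, key=lambda pair: abs(pair[0] - x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def farthest_neighbour_predict(train: List[LabelledPoint], x: float, k: int = 3) -> str:
    """A badly designed learner: votes among the k training points FARTHEST from x."""
    farthest = sorted(train, key=lambda pair: abs(pair[0] - x), reverse=True)[:k]
    return Counter(label for _, label in farthest).most_common(1)[0][0]

good_data = [(1.0, "low"), (1.1, "low"), (1.2, "low"), (5.0, "high"), (5.2, "high")]
bad_data  = [(1.0, "high"), (1.1, "low"), (1.2, "high"), (5.0, "high"), (5.2, "high")]  # mislabelled

print(nearest_neighbour_predict(good_data, 1.05))   # -> "low"  (good data, sound algorithm)
print(nearest_neighbour_predict(bad_data, 1.05))    # -> "high" (corrupted data, bad prediction)
print(farthest_neighbour_predict(good_data, 1.05))  # -> "high" (good data, bad algorithm)
```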

These limitations and vulnerabilities raise numerous ethical issues in relation to machine learning and AI.

Besides addressing these issues, scholars are also exploring the possibility of designing machines run by artificial intelligence to behave ethically by building moral rules into them. In other words, while attention must be given to the ethics of AI, some believe that it is also important to explore the possibility of creating an ethical AI.

Broadly speaking, there are three main approaches to ethics.

In consequentialist ethics, an agent is deemed ethical when it evaluates the possible consequences of each choice and elects the one which has the most moral or ethical outcome. This approach is also sometimes associated with utilitarian ethics, which states that the most ethical choice is the one which results in the greatest good for the greatest number.
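As a purely illustrative sketch of the consequentialist idea in its simplest, utilitarian form, the following Python fragment scores each available action by the total welfare of its foreseeable outcome and selects the highest. The action names and welfare numbers are invented and are not drawn from any of the works discussed in this article.

```python
# A minimal, hypothetical sketch of a utilitarian decision rule: score each
# action by the sum of its welfare effects on those involved and pick the best.
from typing import Dict, List

def utilitarian_choice(actions: List[str], outcomes: Dict[str, List[float]]) -> str:
    """Return the action whose foreseeable outcome yields the greatest total welfare."""
    return max(actions, key=lambda action: sum(outcomes[action]))

# Invented example: two actions affecting three people.
actions = ["action_a", "action_b"]
outcomes = {
    "action_a": [1.0, 1.0, -0.5],  # helps two people, mildly harms one (total 1.5)
    "action_b": [0.2, 0.2, 0.2],   # mildly helps all three (total 0.6)
}
print(utilitarian_choice(actions, outcomes))  # -> "action_a"
```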

Deontological (or duty) ethics – commonly associated with the Enlightenment philosopher Immanuel Kant – maintains that an agent is considered ethical only if it respects the duties, rights and obligations that pertain to a given situation. And finally, virtue ethics states that an agent is ethical if its actions are governed by moral values such as justice.

In an article entitled ‘Prospects for a Kantian Machine’, the American philosopher Thomas Powers discusses the possibility of creating an algorithm that would enable a machine to follow Kant’s categorical imperative. Others have proposed creating moral machines that operate on the basis of the utilitarian calculus.

In 1942, the great science fiction writer Isaac Asimov devised The Three Laws of Robotics, which continue to be discussed by ethicists working in the area of machine ethics today. The Laws are as follows:

  1. A robot may not injure or harm a human being or, through inaction, allow a human being to be harmed.
  2. A robot must obey the orders given by human beings except where the orders in question conflict with the First Law.
  3. A robot must protect its existence so long as in doing so it does not transgress the First or Second Laws.

Programming intelligent machines to obey these laws appears to be less ambitious – and certainly less complicated – than designing them to behave according to the principles of Kantian deontological ethics.
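To see why encoding such rules looks tractable, here is one entirely hypothetical way the Three Laws might be written as ordered filters over a robot’s candidate actions. The Action fields below are invented boolean flags, and it is precisely this tidiness that real situations refuse to supply, as the following paragraphs suggest.

```python
# A hypothetical encoding of Asimov's Three Laws as ordered filters over
# candidate actions. The boolean fields are invented for illustration only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    harms_human: bool               # would the action injure a human being?
    allows_harm_by_inaction: bool   # would it let a human come to harm?
    disobeys_order: bool            # does it disobey a human order?
    endangers_self: bool            # does it endanger the robot itself?

def choose_by_three_laws(candidates: List[Action]) -> Optional[Action]:
    # First Law is a hard filter: no harming humans, no harm through inaction.
    lawful = [a for a in candidates if not (a.harms_human or a.allows_harm_by_inaction)]
    if not lawful:
        return None
    # Second Law: among First-Law-compliant actions, prefer those that obey orders.
    obedient = [a for a in lawful if not a.disobeys_order]
    pool = obedient or lawful
    # Third Law: finally, prefer actions that do not endanger the robot itself.
    safe = [a for a in pool if not a.endangers_self]
    return (safe or pool)[0]

candidates = [
    Action("obey an order that would injure a bystander", True, False, False, False),
    Action("refuse the order", False, False, True, False),
]
print(choose_by_three_laws(candidates).name)  # -> "refuse the order"
```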

Yet, there are problems with writing Asimov’s laws into an intelligent machine. As Matthew Liao has observed, even Asimov has repeatedly indicated in his novels that these laws do not always operate successfully in a robot.

‘[W]hat Asimov’s stories teach us,’ writes Liao, ‘is that building into an AI explicit laws against harming humanity does not seem to work.’ He adds: ‘Indeed the premise of most of his novels is that the Three Laws of Robotics repeatedly fail to prevent robots from harming humans in various situations.’

Still others are of the view that the way forward is to adopt what has been described as the case-driven approach.

For example, the Massachusetts Institute of Technology (MIT) initiated the Moral Machine Project, whose aim is to see whether it is possible to design an intelligent machine based on how humans would respond to various situations. Participants are presented with various possible scenarios that an autonomous vehicle (AV) might encounter and are asked to judge them and select the outcomes they prefer.

The decisions of the participants are then analysed according to the following considerations: (1) saving more lives, (2) protecting passengers, (3) upholding the law, (4) avoiding intervention, (5) gender preference, (6) species preference, (7) age preference, and (8) social value preference.
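As a rough, hypothetical illustration of what analysing such decisions might involve, the sketch below tallies invented responses along just one of those dimensions, saving more lives. The data, field names and logic are not MIT’s actual method; they merely show the shape of a case-driven, majority-based analysis.

```python
# A hypothetical tally of participants' choices in invented AV scenarios,
# scored along one dimension only: whether the chosen option saves more lives.
from collections import Counter
from typing import Dict, List

# Each record gives the lives saved under two options and which one was chosen.
responses: List[Dict[str, int]] = [
    {"lives_saved_a": 5, "lives_saved_b": 1, "chose": 0},  # chose option A
    {"lives_saved_a": 5, "lives_saved_b": 1, "chose": 0},
    {"lives_saved_a": 5, "lives_saved_b": 1, "chose": 1},  # chose option B
]

tally = Counter()
for r in responses:
    saves_more = 0 if r["lives_saved_a"] >= r["lives_saved_b"] else 1
    tally["prefers_saving_more_lives" if r["chose"] == saves_more else "prefers_otherwise"] += 1

print(tally)  # Counter({'prefers_saving_more_lives': 2, 'prefers_otherwise': 1})
```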

According to the authors of the paper mentioned at the beginning of this article, ‘Based on feedbacks from 3 million participants, the Moral Machine project found that people generally prefer the AV to make sacrifices if more lives can be saved.’ ‘If an AV can save more pedestrian lives by killing its passenger,’ they add, ‘more people prefer others.’

There is much to be said for this approach, which is based on the majoritarian viewpoint and sensibilities. But decisions made in real-life situations can be far more varied and complex than the scenarios presented to the participants, and they can also place considerable strain on this broadly utilitarian approach.

Consider the following two variations of the famous ‘Trolley Problem’.

Sidetrack: A runaway trolley is headed towards five people who will be killed if the trolley is not stopped or diverted. You can hit a switch that will turn the trolley onto a sidetrack where another person is sitting. The trolley will kill him instead of the five.

 

Footbridge: A runaway trolley is headed towards five people who will be killed if the trolley is not stopped or diverted. You are standing next to a very large man on a footbridge above the track. If you push the large man off the bridge, he will die, but his body will stop the trolley, thereby saving the five.

 

In both cases, we have the choice of killing one person or letting five people die. Based on the findings of the Moral Machine Project, the choice of the majority is that one person should be killed in order to save five lives. As I mentioned earlier, this decision is in keeping with the fundamental principle of utilitarian ethics.

Yet, many ethicists would argue that the two cases do not present exactly the same moral challenge. They would argue that while it is ethically permissible – under the circumstances – to switch the trolley onto the sidetrack and kill the person on it in order to save the five, it is not permissible to push the large man onto the track to achieve the same result.

Ethicists use the principle of Double Effect – frequently invoked in medical ethics – to explain the difference between the two cases. The principle of Double Effect states that it is sometimes permissible to cause a harm as an unintended but foreseen side effect (‘double effect’) of bringing about a good.

This principle can be traced to the medieval theologian Thomas Aquinas who argued that killing someone’s assailant in self-defence is permissible, provided one does not intend to kill him. In Summa Theologica, Thomas writes: ‘Nothing hinders one act from having two effects, only one of which is intended, while the other is beside the intention … Accordingly, the act of self-defence may have two effects: one, the saving of one’s life; the other, the slaying of the aggressor.’

The key here is the intention behind the act. Applying this principle to the two cases, we may argue that it is permissible to switch the trolley onto a sidetrack to save five people even if it has the foreseeable consequence of killing the one person on the track.

This is to be distinguished from pushing a man onto the path of the runaway trolley. Liao explains: ‘In contrast, in Footbridge, because it seems that you intend the innocent bystander to be hit by the trolley as a means to stopping the trolley from hitting the five, it is not permissible for you to push him off the footbridge.’
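A small, hypothetical illustration, not drawn from Liao, of why this matters for machine ethics: if all a machine’s inputs record are the outcomes, Sidetrack and Footbridge collapse into the same decision problem, and the role of intention simply disappears.

```python
# A hypothetical outcome-only decision rule: act whenever acting leads to fewer deaths.
def utilitarian_verdict(deaths_if_act: int, deaths_if_refrain: int) -> str:
    return "act" if deaths_if_act < deaths_if_refrain else "refrain"

# Sidetrack: divert the trolley (one dies) or do nothing (five die).
print(utilitarian_verdict(deaths_if_act=1, deaths_if_refrain=5))  # -> "act"

# Footbridge: push the man (one dies) or do nothing (five die).
print(utilitarian_verdict(deaths_if_act=1, deaths_if_refrain=5))  # -> "act"

# Both cases yield "act", yet the Double Effect analysis above treats them
# differently: it asks whether the one death is intended as a means or merely
# foreseen as a side effect -- a distinction absent from these inputs.
```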

The question here is whether it is at all possible to programme a machine in such a way that it is able to take into consideration all the complexities involved in making a moral decision. This brings us to the even deeper question concerning the assumptions behind the very concept of a ‘moral machine’ – assumptions that must be interrogated, challenged and even refuted.

Just as ‘artificial intelligence’ points to the superficial and contrived nature of machine ‘intelligence’, so ‘moral machines’ makes specious use of the concept of ‘morality’.

Perhaps it is only when we strip our discussion of machine ethics of the anthropomorphisms in which we have habitually clothed it that we can have a clearer idea of what machines – however sophisticated their design – are and are not capable of.

As Brendon Dixon has arrestingly put it in his article provocatively entitled ‘The “Moral Machine” is Bad News for Ethics’:

The Moral Machine exposes the shallow thinking behind the many promises made for artificial intelligence. Machines are not humans; we must not pretend that they are. Machines can help us do what we do better but they cannot replace that which we alone possess: minds.

 


Dr Roland Chia is Chew Hock Hin Professor at Trinity Theological College (Singapore) and Theological and Research Advisor of the Ethos Institute for Public Christianity.