Pulse
19 February 2024
On 31 January 2023, Channel News Asia published an article which explores the question: ‘Who do you think should be responsible when artificial intelligence or algorithms malfunction: The programmer, manufacturer or user?’
As Singapore aims to be a global leader in AI by 2030, it has to address the many thorny ethical issues surrounding the use of this rapidly advancing technology.
This brief article examines some of the issues surrounding the responsible use of AI. It raises the kind of questions that must be addressed even as this new technology becomes more pervasive in society. The aim of this article is not to provide a comprehensive treatment of these issues, but to bring to the fore their complex nature from the standpoint of ethics.
Some of these ethical issues have been discussed globally, and tentative solutions and protocols have been proposed and implemented. But as AI technology advances and the range of its applications expands, these issues must be revisited to address new permutations that had not previously emerged.
Before the dawn of this century, the whole area of machine ethics was mostly found in the realm of science fiction, as enthusiasts imagined future scenarios and the challenges they might pose. In the past two decades or so, however, the ethics of machine algorithms and AI has ceased to be a subject of mere thought experiments and abstract speculation.
Within the space of a few short years, machine learning and AI have become ubiquitous in our society, impacting our lives in numerous ways, both positively and negatively. Machine ethics, especially in relation to AI, has become a growing concern, and a topic of intense debate across the globe.
There is no consistent definition of what constitutes responsible AI (RAI). According to Sray Agarwal and Shashin Mishra, this is because ‘even though there have been multiple works that call for the need for RAI, there has not been an end-to-end guide that covers the different facets and explains how to achieve them for your product.’
Be that as it may, it is possible to sketch the goals of responsible AI, albeit in very broad strokes. For example, a responsible AI system is one that seeks to ensure a high level of fairness and to be free from bias. However, it should also be a system capable of positive discrimination, so that the mistakes or wrongs of the past can be identified and corrected.
While these goals are clear and enjoy universal endorsement, achieving them is neither straightforward nor easy.
Take, for instance, the identification and removal of biases. The challenge here has to do with both the detection and the eradication of the biases inherent in a system. There is generally a lack of understanding of how this can be consistently and satisfactorily done.
Detecting the presence of bias in any system requires a thorough evaluation of the data for historical bias, distorted representations of certain user groups, and undetected ‘proxy features’. This is itself a herculean task, for a number of technical reasons.
However, as Agarwal and Mishra explain, this process can get even more complicated:
In some instances, biases that were not present in the original data can get introduced through engineered features, and in others, engineered features can hide or mask the discrimination that a model can still learn, making it even more difficult to detect the biases.
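To give a flavour of what even the simplest bias check involves, the sketch below (in Python) computes the gap in approval rates between two groups of applicants, one common starting point for detecting disparate outcomes. The records, group labels and threshold are purely illustrative assumptions; a real audit would examine many more metrics and, as the passage above notes, would still have to contend with engineered features that hide or introduce bias.

```python
from collections import defaultdict

# Hypothetical loan decisions: (applicant group, approved? 1 = yes, 0 = no).
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in records:
    totals[group] += 1
    approvals[group] += approved

# Approval rate per group, and the gap between the best- and worst-treated group.
rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print(f"Approval rates by group: {rates}")
print(f"Demographic parity gap:  {gap:.2f}")
if gap > 0.1:  # illustrative threshold, not a standard
    print("Warning: approval rates differ markedly across groups.")
```

A check like this only surfaces a symptom; deciding whether the gap reflects genuine bias, a skewed sample, or a hidden proxy feature is where the hard interpretive work begins.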
Another issue of great importance in regard to responsible AI is that of data and model privacy. While much attention has been given to data privacy, more needs to be done to ensure that the model being trained is secure as well.
Studies have shown that attackers can reverse-engineer a model to uncover its training data. They can then use that data to replicate the model or to distort the dataset, thereby compromising the performance of the model itself.
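The intuition behind one such attack can be illustrated with a short sketch. A model that has memorised its training data tends to perform noticeably better on those records than on records it has never seen, and an attacker can exploit that gap to guess which records were used in training (a so-called membership inference attack). The synthetic data and the deliberately overfitted model below are illustrative assumptions, not a reconstruction of any real attack.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with some label noise, so the model genuinely overfits.
X, y = make_classification(n_samples=600, n_features=15, n_informative=5,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# A fully grown decision tree memorises its training records.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print(f"Accuracy on training records: {model.score(X_train, y_train):.2f}")
print(f"Accuracy on unseen records:   {model.score(X_test, y_test):.2f}")

# A naive membership guess: claim a record was in the training set
# whenever the model classifies it correctly. The wider the gap above,
# the more often this guess is right.
correct_on_members = (model.predict(X_train) == y_train).sum()
wrong_on_non_members = (model.predict(X_test) != y_test).sum()
guess_accuracy = (correct_on_members + wrong_on_non_members) / (len(y_train) + len(y_test))
print(f"Membership-guess accuracy: {guess_accuracy:.2f}")
```

A guess accuracy well above 50 percent means the model itself is leaking information about the records it was trained on, which is why responsible AI treats the model, and not only the raw data, as something to be protected.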
Responsible AI is not confined to issues pertaining to bias, discrimination and privacy – important though they obviously are. It extends to the question of who is liable when intelligent machines cause serious damage to property, physical harm to humans, or even death.
On 10 July 2023, Impakter, a website dedicated to creating a bridge between millennials and baby boomers, published an article on self-driving cars entitled ‘Revolutionising Transportation.’ According to the article, Ernst Dickmanns launched the first AI-powered vehicle in 1987, one that was ‘capable of self-driving without human assistance in traffic-free environments.’
Since then, the technology has advanced steadily. The aim is to develop an AI-powered vehicle that performs all driving tasks, including navigating through traffic, without the need for human attention or intervention.
In their excellent Introduction to Ethics in Robotics and AI, Christoph Bartneck et al. present a scenario in which an autonomous vehicle is involved in an accident. They ask:
So if an accident happens during the time when the car was in control, who would be held responsible or liable? The driver was not in control, not even required to do so. The technology did just as it was programmed to do. The company maybe? But what if they did everything possible to prevent an accident – and still it happened?
The answers to these questions are far from clear.
There are several ways in which product liability has been conceived and understood in the legal system. Under ‘strict liability’, a company or person is held liable even if they were not directly involved in any wrongdoing in the strict sense. For example, the company that produces an autonomous vehicle can be held liable even if the concrete harm caused by the vehicle was not intended or planned.
This approach is taken in the German Ethics Code for Automated and Connected Driving, especially in cases where the vehicle’s system is in full control. It requires that a monitoring device (a ‘black box’) be installed to record the activity of the car, in particular who was in control at each moment – the driver or the car.
However, there are cases where ‘many hands’ are involved in a wrongful death, making it difficult or impossible to assign blame to any particular person or entity. An example would be an autonomous weapon system (AWS) that accidentally kills non-combatants due to misidentification. In such a situation of ‘complex liability’, where often no single individual is at fault, blame is assigned to a collective entity, such as the State operating the AWS.
Returning to our consideration of autonomous vehicles, things can get even more complicated. This is because of what is often called the dilemma of robotic morality, which can be illustrated by a simple adaptation of the trolley problem often used in ethical discussions.
Consider the following scenario. An autonomous car, fully powered by AI and carrying five passengers, approaches a heavy vehicle. For some inexplicable reason, the heavy vehicle suddenly swerves towards the autonomous vehicle.
The sensors of the autonomous car detect the oncoming heavy vehicle, and its algorithms calculate that the high impact of the collision would kill all five passengers. This can be averted if the car swerves towards the pavement on its right.
However, there is an elderly pedestrian walking on the pavement. If the car swerves to the right to avoid the collision that would kill all five of its passengers, it will most certainly kill the elderly pedestrian.
‘This particular dilemma of robotic morality,’ writes Ali Srour, ‘has long been chewed on in science fiction books and movies. But in recent years it has become a serious question for researchers working on autonomous vehicles who must, in essence, programme moral decisions into the machine.’
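To see why this has become a programming problem and not only a philosophical one, consider the sketch below. It encodes one candidate rule, minimising expected fatalities, and applies it to the scenario just described. The manoeuvres, probabilities and the rule itself are illustrative assumptions; no production driving system reduces its choices to a table like this, and the rule is precisely what many people would dispute.

```python
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    p_fatal: float        # estimated probability the manoeuvre ends in fatalities
    people_at_risk: int   # how many people would be involved in those fatalities

    @property
    def expected_fatalities(self) -> float:
        return self.p_fatal * self.people_at_risk

# Illustrative numbers for the scenario described above.
options = [
    Manoeuvre("stay in lane and collide with the heavy vehicle", 0.9, 5),
    Manoeuvre("swerve onto the pavement and strike the pedestrian", 0.95, 1),
]

# One contested rule: choose whichever manoeuvre minimises expected fatalities.
choice = min(options, key=lambda m: m.expected_fatalities)
print(f"The 'minimise expected fatalities' rule selects: {choice.name}")
```

The moral weight lies entirely in the choice of rule and in the numbers fed into it, not in the few lines of code that apply it; that is what makes the dilemma a matter for ethicists and regulators, and not only for engineers.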
In 2017, a survey was conducted in which 56.32 percent of the participants answered ‘No’ when asked whether a driverless car should hit a pedestrian in order to save its passengers’ lives; 26.32 percent replied in the affirmative.
The question of liability or responsibility was also addressed in the survey: 77.78 percent of the participants said they would blame the developer of the software installed in the driverless car if it were involved in an accident, while 11.11 percent would blame the manufacturer.
Additionally, it is pertinent to note that although 55 percent of participants said they would use an autonomous car if the technology were available, 40 percent had safety concerns and worried about the ethical decisions that AI-powered cars would make.
All this suggests that while AI technology can and will radically change our lives for the better, there are important and complex issues that just cannot be ignored or papered over. They must be taken very seriously indeed.
Furthermore, because of the complexities of the issues involved, the ‘solutions’ to current ethical challenges and the drive towards responsible AI cannot be left only to certain groups of people – programmers, manufacturers, or even policymakers. They must be the concern of all stakeholders.
As Agarwal and Mishra have rightly pointed out:
Within every product team utilizing AI to add intelligence to their products, the responsibility of ‘responsible AI’ does not lie with a few stakeholders. All stakeholders have a role to play in there, from product owners and business analysts to the data scientists. Similarly, external stakeholders also influence the development of the standards of adoption, like an ethics committee internal to the organisation or a regulator.
Dr Roland Chia is Chew Hock Hin Professor at Trinity Theological College (Singapore) and Theological and Research Advisor of the Ethos Institute for Public Christianity.