March 2021 Pulse
On 23 January 2019, Minister for Communications and Information S. Iswaran announced that Singapore had released a framework on how artificial intelligence (AI) can be used ethically and responsibly in businesses. Making this announcement at the World Economic Forum in Davos, Switzerland, Mr Iswaran said that the framework is a ‘living document’ that will evolve as the technology develops and as its applications diversify.
From the second half of the last century onwards, computers, supercomputers, robots and AI have increasingly become part of our lives, signalling the dawn of what Byron Reese has described as the Fourth Age. From self-driving vehicles to the Internet-of-Things, these new technologies, enhanced by the ever-expanding capabilities of AI, have not only changed the way we live and work but also how we relate to one another.
While these game-changing technologies have no doubt benefited society in numerous ways, they have also generated certain anxieties and worries that previously did not exist or were not as acute. It should not surprise us that some writers have speculated about dystopian futures as autonomous machines with super-intelligences take control of society and subjugate their makers.
There is, however, no need to cast our eyes to such a distant future to note the ethical and social concerns surrounding these new technologies (even with their current capabilities). Much debate has already been generated over a wide range of issues such as job loss, privacy, safety, security, inequality and even discrimination (resulting from algorithmic biases). These are large and complex issues on which meaningful and constructive consensus is difficult to achieve.
Space allows us to briefly discuss only two of these concerns.
The first is safety. Already, there are many documented examples of people getting hurt or killed when a robot powered by AI (used for military purposes, for example) malfunctions. For instance, in 2007, a semi-autonomous robotic cannon controlled by a computer and deployed by the South African army killed nine ‘friendly’ soldiers when it malfunctioned. Similar risks also present themselves if a robot in a factory or an autonomous vehicle were to malfunction.
Alongside the problem of safety is also that of security. Robots animated by AI, when hacked, can quickly become a security concern. As Patrick Lin points out: “What makes a robot useful — its strength, ability to access and operate in difficult environments, expendability, and so on — could also be turned against us, either by criminals or simply mischievous persons”. And, when robots are networked with other machines and computers in the Internet-of-Things, the safety and security threats are multiplied many times over.
But quite apart from these practical concerns, the unstoppable march of smart machines also raises more fundamental philosophical questions that science is unable to adequately address — not to mention resolve — on its own. Debate on the philosophical, ethical and social challenges that AI-powered machines pose must therefore be truly inter-disciplinary. This means that it must also take seriously the contributions of religious thinkers.
One such concern, according to Joanna Bryson and Philip Kime, has to do with the ‘general confusion about humanity’ that this new age of intelligent machines can bring about. According to these authors, at the very heart of this confusion is our misidentification (or over-identification) with machine intelligence and how this can distort our ethical judgement resulting in serious consequences.
This misidentification is due to our conception of human nature, which, as a result of the influence of philosophers like Descartes and Kant, is focused mainly on capabilities like ‘reason’. As robots (enhanced by AI) acquire more ‘human’ capabilities like intelligence, memory, and, some would even venture to predict, consciousness, we tend to over-identify with these machines, thereby ‘humanising’ them.
As Bryson and Kime point out, by identifying with machines in this way, “we endow them with the rights and privileges of ethical status”. It is quite common to find in the literature discussions on whether it is ethical to unplug our computers once they become ‘conscious’, or whether an ‘autonomous’ machine that has caused harm should be ‘punished’. The anthropomorphising of intelligent machines has profound cultural and social ramifications.
Additionally, scholars have expressed concern that over-reliance on smart machines can result in the phenomenon they describe as ‘technological dependency’. Taken to the extreme, this would lead to the gradual erosion of skills and knowledge across different fields. For example, as robots out-perform their human counterparts in surgeries that require a certain level of precision, the number of human practitioners with the requisite knowledge and skills to perform those surgeries will decrease drastically.
But technological dependency can also cause society and its institutions to become more fragile and vulnerable. One striking example is the worldwide panic that the Y2K problem caused because so many critical systems such as air-traffic control and banking were dependent on computers.
The Christian response to these unprecedented advances in technology must therefore be characterised by caution and responsible stewardship. Believing that both science and technology are made possible by God’s common grace in the world, Christians should encourage their development and proper use.
But the Christian’s welcome and embrace of these new technologies can never be uncritical or naïve. These cultural artefacts are fashioned by fallen human beings and are therefore in some ways marred by human sinfulness and the distortions and perversions that result.
Thus, due to human sinfulness, the very technologies that were created to benefit the human community can be used against it. To put this differently, these technologies, whose purpose is to serve humankind, can also quite easily become an idol that enslaves their human creators.