November 2014 Pulse

The title of this article is inspired by the third instalment of the wildly successful Terminator franchise, directed by Jonathan Mostow, in which Arnold Schwarzenegger starred (for the last time) as the humanoid whose mission is to protect John Connor from Skynet, a self-aware artificial intelligence system that seeks to dominate and destroy humans.

To fulfil his mission, Schwarzenegger's Terminator had to engage a new and far more sophisticated model called the T-X (played by Kristanna Loken), whose liquid-metal exterior could morph into different forms and whose endoskeleton came with built-in weapons.

In a sense, the Terminator series exploits the anxieties that have accompanied the rapid advance of modern robotics by imagining a future where intelligent machines turn against their makers. Whether these anxieties are warranted or not is moot. The fact remains that robots are here to stay, and their sophistication and ubiquity will only increase in the future.

While the subjugation of humans by intelligent machines portrayed in the Terminator series is probably too far-fetched a scenario, theologians and philosophers are concerned about the ways in which the growing presence of robots could change human relationships and society itself.

One such concern has to do with the possible social implications of technological dependency, as robots increasingly encroach on human life. Robots are replacing humans in certain jobs, especially those that require special skill and precision.

For example, as robots prove to be much better than their human counterparts at performing difficult surgeries, the fear concerns not only job losses but, more crucially, the erosion of medical skill and knowledge.

Some commentators worry that the Robotics Revolution that is presently underway will result in job losses on an unprecedented scale. They draw striking parallels with the Industrial Revolution of the 18th and 19th centuries, when factories and automation replaced scores of workers who performed the same tasks by hand.

Companion robots can also bring about profound changes in human relationships. For example, robots like Wakamaru and Nao are designed to share living spaces with humans. They can recognise ten thousand words, communicate by speech and gesture, place phone calls and read their owner's email.

As some of these robots are programmed with ‘artificial emotions’ and are capable of expressing happiness and even anger, many owners relate to them as if they were human. The psychological and social impact of this is still not fully known.

The introduction of sophisticated anthropomorphised robots with artificial intelligence and emotions has brought to the fore other important ethical issues. When an intelligent robot makes a serious mistake, perhaps causing the death of a human being, who is to be held accountable?

For example, when a US military robot programmed to target only combatants kills women and children, who should be made responsible for its actions? Although there is a chain of those who could be held accountable – the manufacturer, the programmer, the robot's handler, the military procurement officer, the field commander, and even the President of the United States – it is not at all clear who should take the blame.

If the robot is an autonomous machine programmed to distinguish between armed combatants and unarmed civilians, perhaps the responsibility should be assigned to the robot itself. But if intelligent robots can be said to be responsible for their own actions, should they also be accorded rights? Should they be treated with respect, as if they were human?

Roboticists are now exploring the possibility of designing robots capable of making autonomous moral decisions. Some philosophers argue that this is impossible, since machines can never acquire the genuine human consciousness and emotions required for making autonomous moral judgements. At best, a robot can be programmed to display a 'functional morality' that adheres to some version of Isaac Asimov's Three Laws of Robotics.

Others opine that although creating an artificial moral agent (AMA) poses a great challenge, it will become possible once we are able to develop a computational model of human cognition and decision-making.

The question is: should engineers create AMAs if they have the technology to do so? This question has been addressed at a number of conferences on robotics, and it comes as no surprise that many roboticists answer in the affirmative. Several reasons are offered, including the argument that such creations would in turn help scientists to better understand the relationship between the brain and moral behaviour in humans.

History, however, has taught us the importance of foresight. Vaccines, cars, computers, gunpowder and the printing press can no doubt be described as game-changers for past generations, in that their invention and introduction changed the world in remarkable ways. Not all their consequences were foreseen or even foreseeable. Some could perhaps have been predicted, but we did not bother to think hard enough or imaginatively enough to do so.

The new game-changers are neuroscience, nanotechnology, synthetic biology, 3D printing and, of course, robotics. To predict what these new technologies might do for us and to us, we must not only think hard. We must also think imaginatively. We must think 'science fictionally' because, as we have seen time and again in the 20th century and in our own, what was fiction in the past has, in a very short time, become fact.


Dr Roland Chia


Dr Roland Chia is Chew Hock Hin Professor of Christian Doctrine at Trinity Theological College and Theological and Research Advisor of the Ethos Institute for Public Christianity.