August 2020 Pulse
In an animated debate with Elon Musk at the three-day World Artificial Intelligence Conference in Shanghai in August 2019, Jack Ma, the co-founder of Alibaba, insisted that artificial intelligence poses no threat to humanity. ‘Computers may be smart’, Ma said to the industrialist behind Tesla and SpaceX, ‘but human beings are smarter. We invented the computer – I’ve never seen a computer invent a human being’.
The term ‘artificial intelligence’ (AI) was coined in 1956 by John McCarthy at a conference which discussed whether machines could indeed be made intelligent – more intelligent than their creators. With the advent of computers and super-computers, AI technology has been advancing by leaps and bounds, bringing in its wake anticipation and anxiety in almost equal measure.
Whether we realise it or not, AI is already ubiquitous in our daily lives. AI algorithms are involved whenever we make a purchase using a credit card, use a GPS to find our way around or use Google to search for the best Vietnamese restaurant in Berlin.
AI is used to power robots like the Baxter robot that can work with humans in production chains and bots that can take care of the tasks of an entire warehouse. AI is the magic behind companion bots such as Nao, Pepper, Aibo and Giraff that can entertain and talk to humans and help the elderly stay connected to relatives, friends and doctors.
Self-driving cars are another exciting and innovative technology that depends on AI. So is that intricate web of interrelated computing and mechanical devices and machines – ranging from cell-phones, washing machines, headphones and lamps to the jet engine of an airplane and the drill of an oil rig – that we have come to describe as the Internet-of-Things (IoT).
That AI has the potential to change the way we live, work and communicate for the better and in unprecedented ways is never in doubt. But the question that must be asked is whether there is a dark side to its ubiquity, pervasiveness and dominance in so many areas in our lives.
What are some issues that must be addressed and uncomfortable questions asked even as the use of AI continues to advance?
Before we address these questions, we need to clarify what it is that we actually mean when we ascribe intelligence to a machine or computer. When we say that Mary is intelligent, we refer to her ability to acquire knowledge, to reason and make judgements based on experience.
Human intelligence is a complex phenomenon that includes not only knowledge but also the ability to interpret it. Human intelligence therefore includes wisdom, moral judgement, intuition, imagination and the control of the emotions (emotional intelligence) in the use of knowledge and acquired experience in a particular set of circumstances and in specific contexts.
Machine ‘intelligence’ is profoundly different. It is based on the ability to store a large amount of data – many times more than a human brain is capable of retaining – and the ability to make sense of the data at great speed.
This means that in some cases and under certain conditions machines such as computers have the ability to evaluate certain situations and come up with a good solution or strategy in a way that exceeds human capability. One good example is chess: while the human brain can evaluate only a handful of move sequences in a short time, a computer can search through millions.
With its ability to store, interpret and use data, AI can be deployed in a variety of different fields and arenas ranging from banking to healthcare. But in order for society to be able to properly reap the benefits of AI systems, people must be able to trust them.
Developing trust in AI is no different from building trust in any technology such as an automobile or an airplane. Trust is built when these technologies show themselves to be reliable (trustworthy) and consistent. As someone has rightly observed: ‘Put simply, we trust things that behave as we expect them to’.
In addition, trust is built upon accountability, and in the case of AI systems this means that transparency and, to some extent, comprehensibility are of utmost importance. As the European Parliament paper on AI states, AI systems ‘need to be able to explain their behaviour in terms that humans can understand – from how they interpreted their input to why they recommend a particular output. To do this, we recommend all AI systems should include explanation-based collateral systems.’
In order to build public trust, ethical principles must play a prominent role to ensure that industries as diverse as aviation and food security follow best practices that put the welfare of human beings and society first. Ethics must ensure that AI is not used to deceive, manipulate or coerce the unsuspecting public – an issue that is already of great concern today.
AI ethics should not only focus on practical, operational questions, but should also address bigger issues such as the changes that this new and developing technology can bring about in human relationality, in how our society is ordered and in issues concerning justice.
There are also questions concerning technological transcendence, that is, our growing dependence on this new technology and even our subservience to it. There are questions concerning what AI can and cannot do, and how its prowess can delude us into thinking that it will eventually be able to solve most, if not all, of the world’s problems.
These questions are not new. Nor is the naïve way in which scientists, politicians and the general public can pin their hopes on a particular technology, elevating it to ‘saviour’ status. We have seen this kind of naivety in certain attitudes towards science and biotechnology.
We must subject our scientific, technological and business communities, as well as political and government bodies, to critical scrutiny, reflection and evaluation as far as the use of technology is concerned, especially AI. This kind of self-examination and interrogation should be conducted across countries and jurisdictions – it should be done on a global scale, and it should be particularly attentive to nations that are less technologically advanced.
This is not to suggest that we all become Luddites, resisting new advances in technology at every turn. It is to ensure that we do not swing to the opposite extreme, that of embracing every new gadget and artefact that our science and technology are able to conjure (like a child let loose in Toys “R” Us!) and thus slavishly obeying the technological imperative without ever questioning their possible impact on society.
The excitement over AI and the miraculous wonders it promises in so many areas of our lives can be seductive and also corrupting. It can generate the ‘colossalism’ of the human spirit that is matched only by the story of the building of the Tower of Babel (Genesis 11:1-9). It can be the display of that sinful and rebellious human arrogance that would eventually lead to its downfall.
Dr Roland Chia is Chew Hock Hin Professor at Trinity Theological College (Singapore) and Theological and Research Advisor of the Ethos Institute for Public Christianity.