August 2015 Pulse
In the summer of 1956, a group of scientists gathered at Dartmouth College for a two-month workshop that would launch the modern artificial intelligence (AI) programme. In their proposal for the workshop, the organisers, John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon, conjectured that ‘every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.’
Since then, research in AI has advanced at a phenomenal pace, as computers have continued to double in information-processing power roughly every two years. In 1997, IBM’s ‘Deep Blue’ amazed the world when it defeated world chess champion Garry Kasparov. Some estimate that computers will match the information-processing capacity of the human brain in the near future, around the year 2025.
Some scientists even speculate that it would be possible to create computers with a form of advanced AI they call superintelligence. Oxford philosopher Nick Bostrom defines superintelligence as any intellect that ‘vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.’
Scientists even think that it would one day be possible for superintelligent computers to engage in moral reasoning. Insofar as ethics is a cognitive pursuit, they argue, a machine with superintelligence should be able to solve ethical problems based on available evidence and logic better than its human counterparts.
To be sure, the possibility of creating such machines has led some scientists to express hope that they will help to eradicate some of the most crippling problems in our world.
As Bostrom confidently predicts, ‘It is hard to think of any problem that a superintelligence could not either solve or at least help us to solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating.’
Others, however, are not so sanguine. In fact, some have argued the exact opposite: that the creation of superintelligent computers would spell destruction for humankind.
‘Within thirty years,’ writes Vernor Vinge in his 1993 essay, ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, ‘we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Can the Singularity be avoided? If not to be avoided, can events be guided so that we may survive? What does survival even mean in a Post-Human Era?’
Both the optimism and fear surrounding AI are, however, misguided.
Superintelligent machines cannot solve the world’s problems of hunger, disease and poverty, because these problems are the result of that destructive form of human inwardness called sin. Although superintelligent machines, like most of the science and technology already available, can alleviate human suffering, they are unable to eradicate it.
Science and technology, however advanced, cannot bring about a ‘new heavens and a new earth’ – a man-made utopia, where the evils of the world are vanquished and where the deep fractures they inflict are fully healed.
To think that superintelligent machines can do ethics better than humans is to adopt the most naïve and reductionist concept of ethics. Ethics can never be reduced to a puzzle-solving exercise.
Ethics has to do with human relationality, with our appropriate and positive response to each other and to the world in which we live. Only the creatures created to be bearers of the divine image are capable of this set of attitudes, judgements and behaviour we call morality or ethics. Ethical discourse and conduct are epiphanies of human transcendence which no machine, however intelligent, can replicate.
The exaggerated fears about superintelligent machines taking over the planet and orchestrating the extinction of the human species are also misplaced. In fact, they can distract us from the real issues surrounding advanced technologies.
These issues are not new. They have been with us since the dawn of modern science and technology. And they have to do not so much with how superintelligent machines might take over the world and destroy their creators as with how such technologies can be misused by some to the detriment of others.
As Joanna Bryson and Philip Kime perceptively point out: ‘The real dangers of AI are no different from those of other artifacts in our culture: from factories to advertising, weapons to political systems. The danger of these systems is the potential for misuse, either through carelessness or malevolence, by the people who control them.’
But there is one other aspect of this debate that perhaps is not given the serious attention it warrants. In reflecting on the development of any technology, it is important not only to ask what it can do for us. We must also ask what it can do to us.
As intelligent machines intrude into our lives and take on significant tasks, the way in which they may change how we perceive our own humanity and our relationships simply cannot be ignored.
AI may impact our society in radical and sometimes unwelcome ways, and we must try to imagine how society should navigate the changes it brings about, embracing some and averting others.
Dr Roland Chia is Chew Hock Hin Professor of Christian Doctrine at Trinity Theological College and Theological and Research Advisor of the Ethos Institute for Public Christianity.