Pulse
20 May 2024

On 11 June 2022, The Washington Post reported that Google engineer Blake Lemoine was convinced that LaMDA (Language Model for Dialogue Applications), the chatbot he had been working with, had become conscious.

Together with a collaborator, Lemoine tried to present evidence to Google that LaMDA was sentient. However, as the article reports, Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, considered the claims and summarily dismissed them.

The question of whether it is possible for AI-powered machines to be conscious is being debated by a number of philosophers, AI researchers and ethicists. Many writers have pointed out that this question is important because of the profound implications of its answer. Grace Huckins puts it this way:

AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code.


It goes without saying that a variety of viewpoints and opinions have been offered in response to such a ‘devilishly tricky intellectual puzzle.’

The renowned philosopher of mind David Chalmers, whose entire life’s work has been focussed on the question of consciousness, cautions that although large language models are able to mimic human writing in very impressive ways, they lack too many of the requisites of consciousness for us to conclude that they are indeed conscious.

That said, Chalmers believes that the chances of developing conscious AI in the next ten years are above one in five.

Before we can even begin to speculate about the possibility of conscious AI, we must first clarify what exactly this thing called consciousness is. Unfortunately, while there has been no shortage of theories about consciousness, there is also no consensus among philosophers, neuroscientists and AI researchers on what it is.

Many writers have therefore described human consciousness as a mystery. For example, the British biologist Ian Glynn writes in his book An Anatomy of Thought that ‘Consciousness has always been a mystery.’ David Chalmers agrees. In his book The Conscious Mind, he writes that ‘Conscious experience is at once the most familiar thing in the world and the most mysterious.’


WHAT IS CONSCIOUSNESS?

Be that as it may, we must turn now to the difficult task of describing consciousness – although what can be attempted here is obviously a very superficial (and woefully inadequate) sketch. I agree with Bennett and Hacker that despite it being a mystery – that is, despite the fact that we don’t fully understand it – it is nevertheless possible to say something sound and significant about the phenomenon we call consciousness.

Adam Zeman explains in his book Consciousness: A User’s Guide, published in 2002 by Yale University Press, that the English word consciousness is a combination of two Latin words – scio, which means ‘I know’, and cum (which becomes con when used as a prefix), which means ‘with’. This means that consciousness is ‘knowledge with’ or knowledge that is shared with another person or with oneself.

These two senses of conscio are presented in the writings of Thomas Hobbes and John Bunyan respectively. Hobbes drew on the first sense (knowledge shared with another person) when he wrote: ‘When two or more men know of one and the same fact [i.e., deed] they are said to be conscious of it one to another.’ And Bunyan was referring to the second sense when he wrote: ‘I am conscious to myself of many failings.’

But what do we mean by ‘consciousness’ in colloquial English?

To answer this question, we should ask another: what do we mean when we say that someone is conscious? There are several senses in which the words ‘conscious’ or ‘consciousness’ can and have been used.

The first sense of consciousness is simply being awake. The second has to do with awareness – to be conscious of something is to be aware of it.

In this second sense, consciousness has to do with what we might call ordinary experience – the encounter and awareness of the things around us such as people, trees, traffic, and food.

Yet another way in which consciousness can be understood is in relation to the mind, that is, the subjective, interior aspect of the human being. ‘This sense of “conscious”’, Zeman explains, ‘is the most wide-ranging, the most inclusive, of the three. It encompasses all that we know, think, mean, intend, all that we can hope, wish, remember or believe.’

Human consciousness also involves self-consciousness. In one sense, this refers to our ability to detect things that are happening to and around us. Children develop self-consciousness, understood in this sense, as they grow older.

To be self-conscious is also to be self-recognising. This includes not only an idea of oneself, but also what Zeman calls second-order evaluation of emotions such as pride, guilt, shame, etc. These emotions are only possible when there is a sense of oneself as the object of the intention and action of others.

Finally, being self-conscious can be understood in the broader sense of having self-knowledge, which, according to Zeman, has to do with ‘our knowledge of the entire psychological and social context in which we come to know ourselves.’

My ‘idea of me’, Zeman explains, ‘takes in not just a body and a mind, but also membership of a cultural community, a profession, a family group, the use of a particular language and so on.’

It must be clarified at this point that being conscious does not always imply self-consciousness, self-awareness, or being ‘aware of awareness.’ As Iain McGilchrist has pointed out in his brilliant work entitled The Matter with Things:

I discriminate, reason, make judgements, find things beautiful, solve problems, imagine possibilities, weigh possible outcomes, take decisions, exercise acquired skills, fall in love, and struggle to balance competing desires and moral values … without being reflexively aware of it.


This rather preliminary and sketchy description of that complex phenomenon called consciousness should already give us pause about too hastily ascribing consciousness to machines or even entertaining that possibility.

To this already complex picture, we must also add that human consciousness cannot be properly understood without reflecting on how it relates to experience. Experience here refers to everything that one is subjectively aware of at a single moment or over a duration of time. Experience is therefore the sequence and confluence of lived events that can be brought into some form of unity.

So important is the relationship between consciousness and experience that McGilchrist advances the concept of consciousness as experientiality. He describes consciousness as the ‘field of me’, which permits and guides one’s intentions, decisions and actions.

Thus, as a conscious being, the things that I do and love (McGilchrist avers):

… rely on my whole embodied being, my experience, my history, my memory, my feelings, my thoughts, my personality, even – I dare say? – my soul: ‘psyche’ in the broadest sense.


Far from being located only in an organ or in an insulated and disembodied ‘ego’, consciousness has to do with the whole person in his embodied reality and the experiences that his environment, history, relationships, and so on afford him and by which he is shaped.

This complex understanding of consciousness confronts the reductionism associated with Cartesian dualism, on the basis of which the philosophy of AI is arguably developed – a philosophy that further reduces the Cartesian ‘ego’ or ‘mind’ to mere information fed to a computer.


CONSCIOUSNESS AND THE BRAIN

This brings us to the relationship between consciousness and the brain. Many neuroscientists and philosophers make the connection between consciousness and this most complex organ in the human body.

For example, Colin McGinn, in an essay entitled ‘Could a Machine Be Conscious?’, writes that ‘The brain has some property which confers consciousness upon it’. He reflects on what it is about the human brain that makes it ‘uniquely the organ of consciousness.’

In their book Consciousness: How Matter Becomes Imagination (2000), the biologist and Nobel laureate (1972) Gerald Edelman and the psychiatrist Giulio Tononi maintain that ‘consciousness arises as a particular kind of brain process’, adding that it is a ‘special kind of physical process that arises in the structure and dynamics of certain brains’.

Prominent theories of consciousness point in the same direction. The Global Neuronal Workspace Theory postulates that the brain is an information-processing organ which is responsible for consciousness. In a similar vein, the Integrated Information Theory, which identifies the thalamocortical system as the substrate of consciousness, also emphasises the brain as the source of consciousness.

This idea that the brain is the locus and source of consciousness is often wedded to what has been described as the computational theory of the brain in the work of some AI researchers. This is hardly surprising since artificial intelligence, according to Drew McDermott, ‘is a field of computer science that explores computational models of problem solving, where the problems to be solved are of the complexity of problems solved by human beings.’

When these two ideas – ‘consciousness is solely associated with the brain’ and ‘the brain is essentially a computer’ – are combined, the notion that AI-powered machines may one day become conscious is also hardly surprising.

However, to maintain that the brain is the source of consciousness is to commit a mereological fallacy – a mistake to which neuroscience is prone and to which conceptualisations about AI are also susceptible.

Mereology is an aspect of metaphysics and philosophy of mind which examines the relationship between parts and the whole, and vice versa. A mereological fallacy occurs when properties of the whole are ascribed to its parts, or properties of the parts are ascribed to the whole.

A mereological mistake or fallacy is committed when one ascribes consciousness to the brain. As neuroscientist Max Bennett and philosopher Peter Hacker describe it in their profoundly insightful book Philosophical Foundations of Neuroscience (2022):

This widely accepted idea is a particular instance of what we have called ‘the mereological fallacy in neuroscience’, inasmuch as it involves ascribing to the brain – that is, to a part of an animal – an attribute which makes sense to ascribe only to the animal as a whole.

Once this mereological fallacy is exposed and addressed, a path is opened for greater clarity to be achieved in our attempt to understand human consciousness.

As Bennett and Hacker brilliantly put it, ‘It is not the brain that is conscious or unconscious, but the person whose brain it is.’ They add: ‘The brain is not the organ of consciousness. One sees with one’s eyes and hears with one’s ears, but one is not conscious with one’s brain.’

This important (and in my view correct) understanding of consciousness disabuses us of the reductionisms that often plague neuroscience and our understanding of AI – especially as it relates to machine intelligence and consciousness.

It introduces complex concepts such as personhood, which are often ignored or neglected in discussions of conscious AI. But it is precisely these important concepts – for which the theological and philosophical traditions can provide a wealth of resources – that must not be set aside or glossed over if we are to think clearly about AI and consciousness.

MARY’S ROOM

Philosophers and neuroscientists are agreed that a conscious being is capable of subjective experience. The term ‘qualia’ is used to signify the ‘qualitative character of experience.’

David Chalmers argues that a mental state is conscious,

… if it has a qualitative feel – an associated quality of experience. These qualitative feels are also known as phenomenal qualities, or qualia for short. The problem of explaining these phenomenal qualities is just the problem of explaining consciousness.


The philosopher of mind, John Searle, explains qualia in this way:

Every conscious state has a certain qualitative feel to it, and you can see this if you consider examples. The experience of tasting beer is very different from hearing Beethoven’s Ninth Symphony, and both of those have a different qualitative character from smelling a rose or seeing a sunset. These examples illustrate the different qualitative features of conscious experiences.


Qualia are such an inextricable part of consciousness that if AI were ever to achieve this state of mind, it must also be capable of them. The question, however, is whether the information we feed into AI can make it capable of having that subjective experience which philosophers have called qualia.

As far back as 1982, the Australian analytic philosopher Frank Cameron Jackson answered this question emphatically in the negative. He argued that physical information processing is radically different from subjective experience, and he illustrated this with the now-classic thought experiment known as Mary’s Room.

Mary, a scientist and an expert in colorimetry, has lived her entire life in a black-and-white room. Although she has amassed and mastered all the information about colours, she has never actually seen any.

When Mary is finally able to leave her room, she sees (experiences) colours for the very first time. The most significant question here is: did Mary learn anything new?

Jackson maintains that Mary did indeed learn something new. This is because all the information she had analysed about colour could not convey the intimate knowledge of what it means to experience colour.

This means that even if all the information available in the entire world were to be fed into and processed by an AI computer, there is no possibility whatsoever that the machine would achieve subjective experience. And if consciousness is indeed experientiality, then it follows that it is impossible for a machine – however sophisticated – to ever become conscious.

But there is another reason why this is so. We recall the profound statement that Bennett and Hacker penned – that it is the person (not the brain) that is conscious. This brings us to the whole question of ontology, which AI researchers often fail to address.

The machine can never be conscious because it is not and can never be the kind of creature that possesses personhood. Thus, while the machine or computer powered by AI can utter the words ‘I am conscious’ (perhaps in every language known to man!), it can never do so in any meaningful way.


Dr Roland Chia is Chew Hock Hin Professor at Trinity Theological College (Singapore) and Theological and Research Advisor of the Ethos Institute for Public Christianity.