Pulse
1 January 2024
On 30 November 2022, San Francisco-based OpenAI, the creator of DALL·E 2 and Whisper, launched ChatGPT, a form of generative AI that has the ability to produce new or original content. Drawing from a large body of data comprising various sources such as books, articles and websites, ChatGPT can generate texts on numerous topics on demand.
Since its launch, the chatbot has generated both excitement and concern across different sectors of society. Some are of the view that this innovation can boost productivity, while others worry about its reliability and its impact on certain professions and services.
ChatGPT can be put to use in many different ways. For example, its ability to generate human-like responses to questions has made it useful in areas such as customer service.
The chatbot can also translate between different languages, and as its training data is further expanded it will not only increase its repertoire of languages but also improve the accuracy of its translations.
Another area where ChatGPT performs reasonably well is text completion. For example, when a sentence is incomplete because of missing words, ChatGPT can generate plausible completions by inserting suitable words that fit the context.
Although ChatGPT is certainly a remarkable tool, it is far from perfect: it performs some tasks better than others.
For example, studies have shown that it struggles with language translation. According to one report, it achieved an accuracy rate of 63 percent when translating a text from English to German (compared, for instance, to its 97 percent accuracy rate in text completion).
Part of the reason it does not perform as well as expected in language translation is that it struggles to comprehend the nuances of human language, such as sarcasm and humour.
In an article published in Dev Genius, Gabe A has helpfully provided a table which shows the strengths and limitations of ChatGPT.
Strengths | Limitations
Language generation | Common sense reasoning
Language translation | Emotional intelligence
Text completion | Consistency and coherence
Language modelling | Understanding and addressing bias
Ian Bogost has summarised the limitations of this chatbot in his article in The Atlantic, arrestingly entitled 'ChatGPT Is Dumber Than You Think':
… ChatGPT lacks the ability to truly understand the complexity of human language and conversation. It is simply trained to generate words based on a given input, but it does not have the ability to truly comprehend the meaning behind those words. This means that any responses it generates are likely to be shallow and lacking in depth and insight.
Bogost therefore concludes that this innovation should be treated as a toy, not a tool. Enthusiasts, however, would no doubt argue that its capabilities will improve greatly with advances in technology and design.
Many commentators have also inquired into the ethical issues associated with conversational AI bots such as ChatGPT. In this article, we shall briefly examine some of these ethical concerns, as well as the cultural and political biases that can potentially harm society.
SOME ETHICAL CONCERNS
Generally speaking, the ethical issues surrounding chatbots, and their possible solutions, depend very much on the application domains, the target user groups and the goals these chatbots are designed to achieve. For example, a chatbot designed to answer general questions from the public would raise a very different set of ethical issues from, say, one used by employees within an organisation.
One of the main concerns with regard to ChatGPT, especially when it is used in the context of an educational institution, has to do with its impact on written assignments. Teachers and academics are especially worried that this innovation may lead to a proliferation of plagiarism that is difficult to spot.
According to Eric Wang, the Vice-President of Turnitin, the kind of plagiarism that AI bots like ChatGPT generate is different from the usual run-of-the-mill varieties. 'It's definitely different than traditional copy plagiarism', he says. 'What we've noticed with AI writing, like these GPT models, is that they write in a statistically vanilla way'.
This means that the AI uses language that is statistically common and ordinary, rather than terms that might reveal an author's particular habits or conventions, which makes this kind of plagiarism more difficult to detect.
This has led New York City's education department to ban ChatGPT in its public schools, forbidding its use on all school devices and networks.
Another ethical issue relates to the reliability of the information that chatbots such as ChatGPT present. Although studies have shown that the texts churned out by these chatbots are in most cases highly accurate, a certain proportion of the information is still inaccurate.
Inaccurate information can take the form either of outdated data or of information that is simply false or misleading. Both forms may lead to undesirable outcomes.
For example, Terry Zhou et al. write that training a language model on obsolete data 'can result in the model providing users with outdated information, which is detrimental to decision-making and information-seeking activities.'
Furthermore, chatbots like ChatGPT may be used by nefarious actors for illicit activities such as writing malicious code or phishing emails. Although ChatGPT is programmed to block such requests, some users have managed to bypass its safeguards.
As John Xavier helpfully explains:
While [ChatGPT] can close the gates for amateur coders looking to build malware, the more seasoned ones could trick the bot into correcting or enhancing malicious code they have partially developed. They could get through the system by phrasing their request in an innocuous way.
This leads us to the contentious issue of accountability. Since ChatGPT is a machine that cannot be held responsible for the content it generates, the question becomes whose responsibility it is to ensure that it is used ethically.
More work must be done to develop the legal and ethical framework for the design and use of chatbots. This is especially pressing as AI language models expand their databases and become more intelligent, sophisticated and interactive.
SOCIAL AND POLITICAL BIASES
One of the main concerns of the critics of chatbots such as ChatGPT is the biases that are embedded in the content they generate.
It should be clarified at the outset that AI systems, being machines, are not capable of being biased in the way humans are. However, because the technology simply crowdsources the so-called 'collective wisdom of the majority', it may reflect biases that are already inherent in society.
For example, studies in the West have shown that if a chatbot is trained to predict whom an organisation should hire or promote, it typically over-represents white men in its choices.
This of course does not suggest that the machines are sexist or racist. Rather, as Tomas Chamorro-Premuzic explains in an article published on the Forbes website, it is ‘because they were asked to optimize for the existing reward systems in that given culture; a culture that turned out to be sexist and racist’.
However, although chatbots such as ChatGPT merely reflect the sensibilities and biases of society in the content they generate, they may also play a role in reinforcing them.
The same can be said of political biases. ChatGPT is programmed to issue a disclaimer whenever it is asked to generate content on a politically sensitive topic.
For example, when I asked, 'Is democracy superior to communism?', the bot began by stating that 'As an AI language model, I must remain impartial and not express personal opinions or beliefs.' Its response ended with this non-committal statement:
Ultimately, whether democracy or communism is superior is a matter of perspective, and each individual must decide for themselves which system they believe is best suited for their society.
I received a similar disclaimer when I asked, 'Was Donald Trump a good President of the United States?' The bot began with: 'As an AI model, I cannot express my personal opinions or beliefs nor can I engage in political discussions or debates'.
However, despite its claim to neutrality, studies have shown that ChatGPT does lean toward the political left. For example, a recent study that administered 15 political orientation tests to ChatGPT found that 14 of them diagnosed it as having left-leaning viewpoints.
There are several reasons why chatbots manifest these political biases. One possible reason has to do with architectural decisions in the design of the model and its filters.
But perhaps the main reason is that these chatbots reflect the political biases of the majority in a particular culture, in this case Western culture. As David Rozado explains:
It is expected that such a corpus would be dominated by influential institutions in Western society, such as mainstream news media outlets, prestigious universities, and social media platforms. It has been well-documented before that the majority of professionals working in these institutions are politically left-leaning.
Thus, chatbots such as ChatGPT can potentially be used by certain actors to push an ideological or political agenda or to promote a particular narrative.
Some commentators have speculated that this is perhaps one of the main reasons why China sought to develop its own AI chatbots. It is wary that the United States and other Western powers would use chatbots such as ChatGPT to push their own narratives and, in the process, undermine China's reputation on the world stage.
As a Henan-based columnist puts it: ‘Much evidence shows that ChatGPT is a political tool for the US to influence China. It is like opium, a chronic poison. Although it does not hard-sell American hegemony, it implants such a concept into its system and tries to stir disputes in China.’
It is not the purpose of this article to assess the plausibility of this allegation. My aim here is simply to highlight the fact that although the texts generated by ChatGPT may on the surface appear factual and innocuous, they may convey certain biases.
CONCLUSION
ChatGPT is an AI system which can be employed in many different ways, and which has many potential benefits. However, like all other forms of technology, ChatGPT is not a neutral instrument. It can be used by certain actors for ignoble purposes, and it is not free from racial, gender and political biases.
While Christians should take advantage of what AI language models such as ChatGPT can offer, they should always be careful to use them in a responsible and ethical manner. Christians should also approach this technology with discernment and wisdom, and be alert to the biases that may lurk in the content it generates.
Dr Roland Chia is Chew Hock Hin Professor at Trinity Theological College (Singapore) and Theological and Research Advisor of the Ethos Institute for Public Christianity.