Pulse
15 April 2024
The spectacular success and relentless advance of AI in almost every sector of society have been greeted with enthusiasm by some and concern by others. There is heightened awareness among scholars and the general public alike of the possible negative ramifications of this technology for economics, politics and society.
The impact of the pervasive presence of AI is evident in many aspects of political life. There are ongoing discussions, for example, on the algorithmic shaping of digital communication and how it may degrade political discourse. Generative AI can be used to flood the public arena with misinformation that misleads the public and erodes trust.
Numerous writers have reflected on how the deployment of AI in the political arena can not only destabilise local politics but also present a real threat to the democratic process itself. For example, in a paper entitled ‘Artificial Intelligence, Democracy and Elections’ (2023), the European Parliamentary Research Service (EPRS) acknowledges that AI ‘poses multiple risks to democracies, as it is also a powerful tool for disinformation and misinformation, both of which can trigger tensions resulting in electoral-related opinions that do not represent the public sentiment.’
In this brief article, we discuss some of the threats that AI can present to the democratic process if it is not properly and rigorously governed and regulated. However, while regulations in the form of protocols, laws, guidelines and so on are without doubt extremely important, we must also regard them with a healthy dose of realism – no amount of regulation, however thorough, can fully safeguard the democratic process from the interference of nefarious actors.
AI-powered technologies are, as we all know, a double-edged sword. Thus, before discussing the threats and dangers that such technologies may present to the democratic process, and therefore to democracy itself, let us briefly examine how they can be used constructively in politics.
SERVING DEMOCRACY
There is no doubt whatsoever that AI is a tremendous instrument that can be used for the benefit of democracy and the democratic process – if it is used in a principled and ethical manner. Whatever expression of democracy we have in view – majoritarian, representative, deliberative or participatory – AI can be employed to assist politicians and electorate alike.
One of the most important requirements of democracy is surely citizens’ access to information – including viewpoints and arguments – that will deepen their understanding of issues and enable them to make informed decisions at the polling stations. AI can help to make such information available to voters more effectively than older technologies and media.
As the EPRS paper puts it, ‘AI can serve to educate citizens in the principles of democratic life, whether by gaining knowledge about a policy issue or getting familiar with a politician’s stance.’ This augurs well for the democratic process in that it will encourage and enhance civic debates which should play an important role in any democracy.
AI can also benefit politicians in a variety of ways. For example, it can collate, analyse and summarise the viewpoints of citizens expressed at public consultations. This would give politicians a better understanding of the deepest concerns of citizens. A study published in the Michigan Law Review in 2022 showed that the use of AI in such processes does not undermine voters’ trust as long as there is sufficient transparency and human oversight.
Finally, as the EPRS paper points out: ‘AI could also play an important role in the policymaking process and generate more value in each of its five stages: identification, formulation, adoption, implementation and evaluation.’ For example, AI could process huge amounts of data and summarise complex problems, thereby enabling policymakers to better identify and address societal issues.
The list of benefits that AI can offer the democratic process can easily be expanded. However, as alluded to above, AI is a double-edged sword: it offers immense help even as it presents serious issues, dangers and threats.
The proper approach is, of course, not to shun the new technology altogether but to be fully aware of all its pitfalls and risks, and to address them as best as we can as a global community.
GROWING CONCERNS
In recent years, a spate of books examining the threats of AI to democracy have appeared, such as Yuval Noah Harari’s Homo Deus: A Brief History of Tomorrow (2016), which argues that the abundance of data can in fact be detrimental to democracy, and Shoshana Zuboff’s The Age of Surveillance Capitalism (2019), which examines how mass behaviour modification techniques can overthrow the people’s sovereignty.
Scholars writing on this topic have raised a great many issues indeed. Space, however, allows me to discuss only a few of their concerns, all of which have grave implications not only for politics, but also for wider issues such as societal health and wellbeing.
Perhaps the most obvious concern has to do with the use of AI technology to spread misinformation and disinformation. The ability of AI to shape the information environment of citizens and voters is especially significant in this regard because democracy is profoundly associated with the idea of people being able to make decisions about themselves and their communities.
As Andreas Jungherr explains:
AI affects these informational foundations of self-rule directly. This includes how people are exposed to and can access political information, can voice their views and concerns, and how these informational foundations potentially increase opportunities for manipulation.
There are many ways in which misinformation can be generated and spread with the use of AI. For example, AI can be used to conduct what have been termed ‘astroturf’ campaigns, in which a small group presents itself as a grassroots movement, thereby generating a distorted and skewed view of public opinion. It can also be used to create the false impression that political consensus has been reached on a particular issue by posting millions of automatically generated content entries on that issue online.
The advances in generative AI have also led to the advent of deepfake videos that are easy to produce and are becoming more and more convincing. The EPRS paper therefore aptly warns that ‘Deepfakes have a huge potential for misinformation (false or inaccurate information), or even disinformation (information having as its intention to mislead) …’
‘Overall, deepfakes severely risk undermining trust in the information environment’, it adds. ‘They also make it easier for some politicians to dodge responsibility for their real words, on the pretext of having fallen victim to AI-generated content.’
Disinformation is often used to manipulate citizens and voters into rejecting certain narratives and accepting others. Malicious actors can employ AI technology to interfere with the electoral process, for example, with a view to swaying voters towards certain outcomes.
This is achieved by launching a barrage of AI-generated communicative interventions into the informational environments of voters, undermining their individual informational autonomy. In employing AI in this manipulative way, nefarious actors erode the very informational foundations of democracy, or self-rule.
AI can also be used to radically alter the perspective and change the philosophy of voters. In his book You Are Not a Gadget (2010), Jaron Lanier chillingly describes how easily this can be done:
We tinker with your philosophy by direct manipulation of your cognitive experience, not indirectly, through argument. It takes only a tiny group of engineers to create technology that can shape the entire future of human experience with incredible speed.
‘In the context of representative democracy,’ writes Mark Coeckelbergh in his important and insightful book The Political Philosophy of AI, ‘AI and other digital technologies can nudge voters towards supporting a particular politician or party through personalised advertisements, and so on.’
Since the dawn of social media, theologians and culture-watchers have warned about the negative effects of echo chambers, which reinforce a particular viewpoint by refusing to consider others. In the arena of politics, echo chambers not only have the potential to narrow the perspectives of citizens; they can also be detrimental to the future of a nation.
Echo chambers have the potential to undermine the epistemic foundations of democracy. As Dave Kinkead and David M. Douglas explain:
One risk to the epistemic virtue of democracy is that closed social networks appropriate the public sphere and make it private. Once private and shared only among similar individuals, political discourse loses some of its epistemic robustness as ideas are no longer challenged by diverse perspectives.
Finally, we have the problem of AI and bias. This issue has been discussed in many different areas where AI technology is used – in population studies and healthcare, for example. It is especially pertinent here, because a healthy democracy depends very much on people having equal rights of participation as well as representation.
The problem is complex, with many different aspects and layers. For example, there is a risk that AI will simply reinforce existing biases in society, since it differentiates people according to certain criteria represented in data points. Some scholars have even warned that AI may carry discriminatory patterns that had been discontinued in the past into the present and future.
AI also has trouble recognising people who belong to groups that are underrepresented in its training data. Thus, minorities that are traditionally underrepresented risk remaining invisible to AI. This, of course, has detrimental consequences for democracy, as Andreas Jungherr explains:
… the systematic invisibility of specific groups means they would be diminished in any AI-based representation of the body politic and in predictions about its behaviour, interests, attitudes, and grievances. Accordingly, already disenfranchised people could risk further disenfranchisement and discrimination in the rollout of government services, the development of policy agendas based on digitally mediated preferences and voice, or face heightened persecution from state security apparatus.
To conclude: AI, which can be used as a tool to support and enhance the democratic process, can also be used directly or indirectly to undermine democracy. It is vitally important, therefore, that jurisdictions produce robust and adaptable legal frameworks to address these dangers and to promote the use of trustworthy and accountable AI systems.
Dr Roland Chia is Chew Hock Hin Professor at Trinity Theological College (Singapore) and Theological and Research Advisor of the Ethos Institute for Public Christianity.