
Pulse
16 Sept 2024

In 'The Global Expansion of AI Surveillance', a paper published in 2019, the Carnegie Endowment for International Peace reports:

AI surveillance technology is spreading at a faster rate to a wider range of countries than experts have commonly understood. At least seventy-five out of 176 countries globally are actively using AI technologies for surveillance purposes. This includes: smart city/safe city platforms (fifty-six countries), facial recognition systems (sixty-four countries), and smart policing (fifty-two countries).


Adapted from the French, surveillance simply means ‘watching over’ (sur = ‘over’, ‘from above’; veillance = ‘watching’, ‘monitoring’). In the past, public surveillance had distinctive limitations, being inherently physical, spatial and visual. But with the advent of surveillance technologies over the last century, especially those enabled by AI, these limitations have largely been overcome.

With AI, surveillance can be conducted retrospectively, contemporaneously, and even predictively – in anticipation of a possible activity such as a crime.

The phenomenon of public surveillance has led philosophers and scholars to devise constructs such as ‘surveillance society’ (David Lyon), ‘ubiquitous transparency’ (David Brin), ‘location and tracking’ (Roger Clarke), ‘equiveillance’ (Steve Mann), and ‘uberveillance’ (K. Michael and M.G. Michael).

The French philosopher Michel Foucault regards certain kinds of surveillance as the exercise of power, describing it in terms of the Panopticon (a term he borrowed from Jeremy Bentham) – a prison in which the inmates are watched but are unable to see who is watching them.

While AI surveillance is employed mostly by governments and law enforcement, or in corporate contexts, it is also used in private spheres such as social media. Such surveillance is therefore not only vertical (i.e., by the state or the social media company) but also horizontal (i.e., by users, mediated by algorithms).

It is not at all surprising that authoritarian regimes have taken advantage of the latest surveillance technologies to monitor and control the population. As Steven Feldstein observes, ‘Around the world, AI systems are showing their potential for abetting repressive regimes and upending the relationship between citizen and state, thereby accelerating a global resurgence of authoritarianism.’

There are many ways in which these regimes can use AI-powered technologies to exercise control over citizens. For example, tracking technologies embedded in chat platforms enable these regimes to know immediately when protest crowds begin to form.

AI systems can also be used to monitor crowd density, and facial-recognition technologies can be used in urban public spaces to monitor persons of interest. The regimes can also use AI to impose selective censorship as well as online disinformation to spawn confusion or circumvent potential opposition or protests. As Feldstein explains:

This can take the form of denial-of-service attacks against protest campaigns (undercutting the ability of opponents to organise and effectively censoring vital information) or of bot-driven information-distortion campaigns (producing a flurry of misleading posts to blur opponents’ messaging and overwhelm information channels with noise).


Scholars have pointed out that the misuse of AI technology is not confined to authoritarian regimes. Democratic governments must also be circumspect in their use of these technologies, which greatly enhance their monitoring and surveillance capabilities. Prevailing policies and practices must be revisited continually to ensure that the use of these technologies remains within acceptable limits.

The principled use of these technologies is imperative because their misuse would violate something highly prized in democracies, namely, individual freedom.

Freedom is not only highly valued in secular society. It is also a central concept in Christian theological anthropology – a Christian understanding of what it means to be a human being, the bearer of the divine image.

How the theological understanding of freedom radically differs from secular accounts need not detain us at this point, important though that question is. What matters here is the centrality of this concept in the Christian faith.

That human freedom is of great concern for Christianity is evident in the freedom movements for the abolition of slavery in the past (e.g., William Wilberforce) and the present (e.g., International Justice Mission).

The culture of monitoring and surveillance that AI technologies not only shape but also invigorate is worrying because it profoundly affects the freedom of individuals.

More specifically, this culture puts at considerable risk what political philosophers have called ‘negative liberty’, which Isaiah Berlin famously defined as freedom from interference. In his now-classic essay ‘Two Concepts of Liberty’ (first delivered in 1958), Berlin states that negative freedom is concerned with the question: ‘What is the area within which the subject – a person or a group of persons – is or should be left to do or be what she is able to do or be, without interference from other persons?’

But the insidiousness of the surveillance culture goes beyond the violation of the (negative) freedom of individuals. It subtly changes our understanding of freedom itself and reshapes human sociality. As Daniel Solove perceptively explains in his book The Digital Person: Technology and Privacy in the Information Age (2004):

Digital dossiers do cause a serious problem that is overlooked by the Big Brother metaphor, one that poses a threat not just to our freedom to explore the taboo, but to freedom in general. It is a problem that implicates the type of society we are becoming, the way we think, our place in the larger social order, and our ability to exercise meaningful control over our lives.


In the wake of ever more pervasive surveillance and monitoring by ever more sophisticated AI systems, we should ask ourselves again and again just what kind of society we are becoming, and whether this is the kind of society we want. Are we willing to embrace an Orwellian world in which individual freedom is incrementally surrendered? Or will we be careful to set limits so that technology is never allowed to rob us of our humanity?

There is a fundamental difference between the dystopian future sketched by George Orwell in his famous 1984 and our present situation. While the citizens in Orwell’s novel were forced into subjugation, we appear to embrace it willingly. But our acquiescence and willing participation in the surveillance culture – which gives us an illusory sense of autonomy and control – may well result in a form of enslavement.

This is especially the case when individuals increasingly come under the control of large organisations by willingly surrendering their personal data and information.

Scholars have commented on the ‘culture of transparency’ that social media has brought about, in which everything is expected to be shared and exposed to public view, and on the threat this poses to society. ‘Full transparency’, writes Mark Coeckelbergh, ‘… threatens liberal societies, and big tech plays an important role in this.’ He adds:

Using social media, we voluntarily create digital dossiers about ourselves, with all kinds of personal and detailed information that we willingly share, without any governmental Big Brother forcing us to give it or having to do the painstaking work to acquire it in covert ways. Instead, tech companies openly and shamelessly take the data.


In his 1925 novel The Trial, the Czech writer Franz Kafka tells the story of Joseph K., who is arrested by officials and eventually executed on the basis of information about him that he is unable to verify, facing accusations that, up to his death, remain unclear to him. As Solove puts it, Kafka

… captures the sense of helplessness, frustration, and vulnerability one experiences when a large bureaucratic organisation has control over a vast dossier of details about one’s life. At any time, something could happen to Joseph K.; decisions are made based on his data, and Joseph K. has no say, no knowledge, and no ability to fight back. He is completely at the mercy of the bureaucratic process.


This returns us to the question of how the use of AI-powered surveillance technology should be moderated and controlled so that human dignity and freedom are protected. This requires careful analysis of the profound implications of these ‘dual use’ technologies, which have the potential to be both beneficial and harmful.

To be sure, there must be deeper collaboration between policymakers, engineers and researchers to ensure the ethical use of these technologies. Perhaps the rule of thumb articulated so well by Carissa Véliz in her insightful book Privacy Is Power (2020) – that ‘nothing more should be subjected to public scrutiny than what is necessary to protect individuals and cultivate a wholesome collective life’ – is the place to start when thinking about the kinds of controls we must implement.

But the regulation of the use of AI-fuelled surveillance and monitoring technologies should not be left only to policymakers. Democratic societies should also consider how the capacity of civil society can be strengthened and its full participation in shaping guidelines for AI use encouraged.

The obvious reason for this is that the negative consequences of a surveillance culture – of uberveillance and ubiquitous transparency – affect every member of society, often in dehumanising ways.


Dr Roland Chia is Chew Hock Hin Professor at Trinity Theological College (Singapore) and Theological and Research Advisor of the Ethos Institute for Public Christianity.