Feature
4 November 2024
“We are entering an era when our enemies can make it look like anyone is saying anything at any point in time,” warned the former U.S. president Barack Obama about the growing dangers of fake news and false videos in a video clip back in 2018. The irony: Obama did not make that clip. It was a deepfake created by Jordan Peele, whose intention was to raise awareness about the potential dangers of deepfakes.
One of the more controversial deepfake productions must be a 2020 video on Channel 4 (a British public broadcast television channel) that gave the impression the Queen performed a pop dance at the end of her traditional Christmas message. In all fairness, the channel informed viewers at the start and labelled the short broadcast an “alternative Christmas message.” The broadcast deployed deepfake technology to create an altered video of the Queen’s address, with the voice of an actress superimposed onto the footage.
Amid waves of anger and annoyance, Channel 4’s director Ian Katz underlined the crux of the matter when he claimed that “we can no longer trust our eyes,” describing deepfake technology as the “terrifying new frontier between disinformation and truth.”
Between Disinformation and Truth
Simply put, a deepfake combines algorithmic power, machine learning, and modern information-processing platforms to create a fabricated setting into which a real person’s oral and visual information (voice, facial and body expressions, etc.) can be inserted. In short, it is an act of digital fabrication with ethical implications.
This technological imitation portrays the subject as doing something that the person never did, such as expressing a political opinion or appealing to the masses. A case in point: in 2022, a deepfake video released in Russia featured Ukrainian President Volodymyr Zelensky appealing to Ukrainians to surrender.
Public figures implicated aside, what is insidious about deepfakes lies beyond their veneer of imitation. Far more destructive is the way computer-generated images persuade us almost instantaneously to believe something just by looking at it. Credibility, then, hinges on our sight, as we treat the deepfake video itself as evidence.
One way deepfakes catch us off guard is their subtlety: when the subjects of the fabricated videos are familiar to us, such as public figures, we call a certain amount of validated information about those subjects to mind. Psychologically, then, we momentarily suspend our disbelief and intuitively extend some degree of trust to the authenticity of the content before our eyes.
In short, this means truth and trust are at stake. With countless advertisements and claims bombarding us daily, our memory begins to register these things as familiar information, and consequently, we assume them to be true. In the long run, this adaptive process, also known as a “fluency-based heuristic for judging truth,” inhibits our ability to separate fact from fiction, leaving us vulnerable to deception. Is it any wonder that one of the questions most often heard today is “how can we tell if a video is real?”
Yet the challenges we face are more than just the verification of what is real or true.
The Creative Goodness
Two areas come to mind. Deepfakes remind us of the ethical questions involved in using them, and of the ramifications and exponential harm that accompany their use. One thing is obvious: deepfakes, created with the intention of mimicking people, will always embed ambiguities of purpose.
I surmise that it is not the creative, cultural production that is the issue but the intent of fabrication to masquerade as the real, turning truth into fictitious imagination, and thus possessing the power to indulge viewers in sensationalism. Discovering the intentions of deepfake creators is almost impossible. Some claim it is for creative fun, while others do it for unscrupulous profit.
The fact remains that considerable harm is done to a nonconsensual victim who is portrayed in a deepfake video as doing or saying something they did not. One of the most prevalent early and ongoing uses of deepfakes is creating nonconsensual sexual images and videos by superimposing one person’s face onto another person’s body. In 2020, Japan’s leading newspaper, Asahi Shimbun, reported how mainstream Japanese female celebrities fell victim to face-swapping misappropriated by porn sites.
Damage to the lives of the victims can be irreparable, given the visual system’s ability to distort our thoughts and the impact that deepfakes can have on one’s sense of self, especially in cases of revenge porn. While such fear and suspicion can fuel a vendetta against the mediascape, let us also be aware that generative AI, of which the deepfake is one type, has made positive contributions to the arts and sports industries.
A 2023 sports advertisement supporting the French women’s football team in the Women’s World Cup adopted deepfake editing to place the better-known men’s football stars over the women’s faces, then revealed the truth at the end. The slogan accompanying the montage declared its aim to debunk the “misogynistic trope” that women’s football is not as entertaining as men’s. Fans’ responses across media platforms were overwhelmingly positive, praising both the visual effects and the message.
Attunement, Anyone?
Deepfakes, a corollary of frighteningly powerful AI, have gained much, perhaps too much, of our attention. Within the limited scope and length of this discussion, I propose two possible types of attunement as we strive to find effective solutions. One focuses on the perpetrators, and the other on us.
In Erotic Attunement, Cristina L. H. Traina speaks of attunement as the perceptive attention and adjustment to needs, desires, and feelings in human relationships. Extending this analogy, we could begin to consider the unattuned feelings, even conditions of alienation, of perpetrators who create harmful deepfake content.
Generally, the world looks superficially at these perpetrators’ crimes with contempt. Since legal punishment, rehabilitation, and moral correction of a person require time, resources, and professional approaches, the least we as media users can do, instead, is practice reciprocal attunement. “Reciprocal” requires us to re-examine our stewardship of time and our accountability to God in our own moral formation.
Additionally, in the name of efficiency and intellectual advancement, have we overlooked our innermost hubris in our push for digital technological sophistication? In the process, have we made technology after our own image, placing our trust in these “advancements”? (Ps. 115:8) Perhaps it is our insecurity that compels us to exercise control and power over materiality, in the manner of King Rehoboam, who filled his palace with counterfeit brass shields after the gold shields were carried off by Shishak (1 Kings 14).
Attuning ourselves as responsible and ethical users of technology, we can periodically abstain from the constant stream of online images and visual culture in general. The purpose is not to shun images altogether as false depictions, but to approach them with more deliberation, reflection, and astuteness.
Kris H.K. Chong (PhD, Fuller Theological Seminary; MPhil, Cambridge University) currently teaches at Baptist Theological Seminary. She has a prose column 《海角一方》in the Lianhe Zaobao newspaper. Recent academic publications include: Transcendence and Spirituality in Chinese Cinema: A Theological Exploration (Routledge, 2020) and “(Not) the End: From Death to Life in East Asian Films” (Concilium: International Journal for Theology, Dec. 2021). Kris worships at Paya Lebar Chinese Methodist Church.