Pulse
3 June 2024

On 2 and 3 November 2023, delegates from 27 governments around the world, together with the heads of leading artificial intelligence companies, gathered at a museum at Bletchley Park near London for the world’s first AI Safety Summit. The Bletchley Declaration on AI was signed by 28 countries, including the U.S., U.K., China, and India, as well as the European Union.

According to TIME Magazine, ‘The declaration said AI poses both short-term and long-term risks, affirmed the responsibility of the creators of powerful AI systems to ensure they are safe, and committed to international collaboration on identifying and mitigating the risks.’

One of these risks must surely be the use of AI in the military. As an article published on the United Nations University website puts it plainly:

The militarization of AI has profound implications for global security and warfare … However, these developments raise concerns regarding the escalation of conflicts, the possibilities of autonomous weapons being compromised or misused and the possibility of an AI arms race.


The twentieth century saw the emergence of two groundbreaking technologies that have shaped the trajectory of the human future, and continue to do so. Interestingly, both technologies – nuclear technology and artificial intelligence – appeared at about the same time, in the 1950s.

These two technologies have no doubt contributed significantly to human society and its progress.

Nuclear technology, in the form of nuclear energy, has provided a reliable and efficient source of power. In addition, nuclear power plants have contributed to the global fight against climate change by generating electricity with low greenhouse gas emissions.

In the same way, AI has revolutionised numerous industries by enhancing both efficiency and productivity. Machine learning algorithms, for example, have made it possible to analyse vast datasets, optimise processes, and even make predictions.

However, both nuclear and AI technologies are dual-use technologies.

The European Commission defines dual-use goods or technologies as ‘items, including software and technology, which can be used for both civil and military purposes.’ The U.S. government’s Code of Federal Regulations describes dual-use technologies as ‘items that can be used both in military and other strategic uses … and commercial applications.’

This implies that dual-use technologies can be used for the good of humanity as well as for its harm.

This is brought out clearly in the 2004 report by the National Academy of Sciences (NAS) entitled ‘Biotechnology Research in an Age of Terrorism’. Focusing specifically on biology, the report describes the dual-use dilemma as arising ‘when the same technologies can be used legitimately for human betterment and misused for bioterrorism.’

One of the ways in which AI technology can be used in warfare is to integrate it with advanced weapons systems deployed simultaneously across multiple combat zones. This would enable these systems to respond at machine speed to threats and attacks.

However, despite the obvious tactical advantages of responding at great speed, experts warn that the reactions of machines to different combat situations may elude human comprehension. This could result in commanders being unable to control, contain, or even terminate AI-initiated actions.

As James Johnson explains:

… while AI-enabled autonomous early-warning systems would theoretically allow defence planners to identify and monitor threats faster and more reliably than before, the lack of human judgement and supervision coupled with the inherent brittleness (i.e., a lack of real-world common sense to deal with new situations) and ‘black-box’ (or opaque and unexplainable) characteristics of AI-machine learning algorithms mean that risk of destabilising accidents and false alarms will likely arise.


AI-powered military systems also have the potential to escalate warfare in ways that might lead to disastrous outcomes. ‘Military AI systems functioning at machine speed’, writes Johnson, ‘could push the pace of combat to a point where the actions of machines eclipse the ability of human decision-makers to control (or even comprehend) events.’

He adds:

In extremis, human commanders might lose control of the outbreak, course, and termination of warfare. Were humans to effectively lose (or pre-delegate) control of warfare to machines, inadvertent escalation pathways and crisis instability would increase, potentially with catastrophic results.


The risks that we have been discussing are greatly compounded when AI technology is integrated into weapons with the capability of mass destruction, such as nuclear, chemical, and biological weapons. According to a brief published by the Future of Life Institute, the consequences of AI integration in terms of nuclear risks have ‘received relatively limited exploration, research and international dialogue.’ This is a complex issue that the international community must address with great urgency.

As Jill Hruby, who served as Under Secretary of Energy for Nuclear Security and Administrator of the National Nuclear Security Administration of the United States, points out:

Given the lack of technological maturity, fully autonomous nuclear-weapon systems are highly risky. These risks, combined with the potential instability these weapons may cause, suggest that a ban on fully autonomous systems is warranted until the technology is better understood and proven.


That said, much is being done to regulate the use of AI technology in warfare, especially with regard to its integration into nuclear weapons.

For example, on 3 January 2022, the People’s Republic of China, the French Republic, the Russian Federation, the United Kingdom of Great Britain and Northern Ireland, and the United States of America issued a document entitled ‘Joint Statement of the Leaders of the Five Nuclear-Weapon States on Preventing Nuclear War and Avoiding Arms Races.’

Although the document does not specifically address the question of the use of AI technology, it makes this sobering statement: ‘We affirm that a nuclear war cannot be won and must never be fought.’

It also articulates its vision of ‘a world without nuclear weapons’: ‘We underline our desire to work with all states to create a security environment more conducive to progress on disarmament with the ultimate goal of a world without nuclear weapons with undiminished security for all.’

We should applaud the drafters and signatories of this statement even if we remain understandably sceptical about whether their vision can ever be realised.

Statements such as these, however, do go a long way towards reinforcing existing nuclear treaties such as the Treaty on the Non-Proliferation of Nuclear Weapons (1970), the Treaty on the Prohibition of Nuclear Weapons (2017), and New START, the Strategic Arms Reduction Treaty (2011).

In light of the use of AI technologies in warfare, Hruby suggests the following additional requirements:

  • States with nuclear weapons should make clear the role of human operators in nuclear-weapon systems and the prohibition or limitations of AI use;
  • The international community should increase dialogue on the implications of the use of AI in nuclear weapons, ‘including how AI could affect strategic and crisis stability, and explore areas where international cooperation or development of international norms, standards, limitations, or bans could be beneficial’.


Christians must support every effort to improve regulations on the application of AI in the military, especially when weapons of mass destruction such as nuclear weapons are involved.

However, in attempting to do this, the international community faces immense challenges. Some of these have to do with legitimate concerns about national security, which may impede full transparency and cooperation.

But the integration of AI into nuclear weapons itself introduces new challenges which the international community has not hitherto encountered and may be ill-prepared to address.

In their insightful book The Age of AI, Henry Kissinger, Eric Schmidt and Daniel Huttenlocher explain the complexities that accompany the militarisation of AI:

The management of nuclear weapons, the endeavour of half a century, remains incomplete and fragmentary. Yet the challenge of assessing the nuclear balance was comparatively straightforward. Warheads could be counted, and their yields were known. Conversely, the capabilities of AI are not fixed; they are dynamic. Unlike nuclear weapons, AIs are hard to track; once trained, they may be copied easily and run on relatively small machines. And detecting their presence or verifying their absence is difficult or impossible with the present technology.


The authors suggest ‘a strategy of responsible use, complete with restraining principles’ as the way forward. Indeed, this is the only strategy that the international community can realistically take.

However, the question of what restraint should appropriately require cannot be easily answered in debates where national interests very often trump even the existential threat of annihilation.


Dr Roland Chia is Chew Hock Hin Professor at Trinity Theological College (Singapore) and Theological and Research Advisor of the Ethos Institute for Public Christianity.