The arts: ammunition in an AI arms race?
If you ask artists whether they want their art to be militarized, there’s a good chance they’ll answer: No! But wait… what are you even talking about? Yet killer robots are no longer science fiction: they’re in the field, and research that combines art and artificial intelligence is contributing to their development. In fact, never before have artists had such a vital role to play in the development of a new technology that can automate the act of killing and increase its efficiency.
The arms race and the escalating use of killer robots
An arms race incorporating artificial intelligence (AI) is in full swing, resulting in “increased global spending by developed countries to create autonomous weapons, battlefield analysis systems and other artificial intelligence tools.” The U.S. Department of Defense has announced a generative AI task force for the integration of autonomous weapons systems and military decision support systems. Canada’s Department of National Defence is expected to finalize its own AI strategy soon.
What does this mean in practice? In this article, Radio-Canada explains the Israeli military’s use of AI target identification systems in Gaza. These systems belong to the larger family of Lethal Autonomous Weapon Systems (LAWS). Here are some excerpts:
The Israeli army has designated tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human verification and based on a permissive casualty policy.
To save time and enable the mass identification of human targets, officers were not required to independently review the AI system’s assessments, a senior officer explained.
Time for information verification was allowed only when the alleged target was a senior Hamas commander.
The program was initially used only as an auxiliary tool, but military leadership reportedly approved the widespread use of AI-suggested kill lists around two weeks into the war.
In another report published the same day, CBC notes that two of the soldiers who attacked a humanitarian convoy were dismissed for “failure to follow procedures”. The article doesn’t specify whether AI target identification systems were involved, but World Central Kitchen, the organization whose seven aid workers were killed in the attack, is calling for an investigation.
The use of LAWS is growing, with applications reported in the war in Ukraine and on African soil. The proliferation of LAWS is causing concern among NGOs, and while a consensus on their governance seems to be taking shape, there are as yet no clear international rules to limit and regulate their use.
AI and the arts: are artists ready to go to the battlefield?
Generative AI is also used to generate images, text, music and videos: in short, cultural content. You may have heard, with enthusiasm, fear, or a little of both, of generative systems that produce Monets in a matter of seconds, or of OpenAI’s model that can generate 60-second videos.
The interactive exhibition Algorithmic Frontiers offers an in-depth exploration of the ethical, legal, economic and political implications of generative AI in the arts, so it is the militarization of art that I want to bring to the table today.
Generally speaking, a technology or algorithmic system is called “dual-use” when it can be applied both for the good of humanity and to cause it serious harm. In the field of Art and AI research-creation, the dual use of algorithms includes an improved ability to:
Interpret complex visual environments and distinguish targets under a variety of conditions, improving targeting accuracy, object recognition and scene interpretation in surveillance imagery;
Create “deepfakes” and other manipulated content to deceive adversaries or civilian populations, or to personalize the targeting of disinformation and polarization campaigns.
For example, in this research, caricatures are used to improve surveillance systems and to balance datasets in terms of gender, race, age and image type, a balance essential to an effective surveillance system.
In San Francisco, a seemingly innocuous research project, the Brainwash Café dataset created by Stanford University researcher Russell Stewart and colleagues, was later used by researchers affiliated with China’s National University of Defense Technology (NUDT) for human head detection research aimed at improving object detection capabilities to more accurately isolate the target region in an image. The NUDT is controlled by the People’s Liberation Army (PLA).
In both of these examples, neither the end use nor the end user is what first comes to mind when you use a caricature or photo app on your phone, or take part in an academic research project in a café.
OpenAI recently changed its policies to allow use of its products for military purposes. Since the usage policies of generative AI platforms, unlike laws, are not adopted democratically, and since they can change at the mere will of the organization, our ability to limit the militarization of art and AI research is in danger of disappearing. Meanwhile, Google, Microsoft and a whole ecosystem of companies are competing, quite creatively I might add, to answer the Pentagon’s calls.
Yet in 2018, Mustafa Suleyman, co-founder of Google DeepMind and now head of a new AI division at Microsoft, was among the signatories of a pledge against using AI to develop killer robots.
AI researchers can refuse to participate in the development of autonomous weapons, but cannot control what others do with the discoveries they have published.
Yoshua Bengio, an AI pioneer at Mila, told the Guardian that if this pledge could shame the companies and military organizations building autonomous weapons, public opinion would swing against them. “This approach worked for landmines, thanks to international treaties and public shaming, even though big countries like the US didn’t sign the landmine ban treaty. American companies have stopped building landmines,” he declared. Mr. Bengio signed the pledge to express his “deep concerns about lethal autonomous weapons”. (Excerpts from the Guardian article.)
The year is 2024, and it’s high time to renew this commitment.
Art and AI partnerships for peace and democracy
AI and generative AI, like other technological revolutions (gunpowder, nuclear weapons), are driving an arms race and transforming the nature of military power and conflict. What is new this time is the role of artists.
Artists have never had such a vital role to play in the development of a new technology that can automate the act of killing and increase its efficiency.
To avoid finding ourselves in a situation where, without our knowledge or consent, our art contributes to increasing the capabilities of automated weapons, it’s useful to make the link between art, AI and research-creation in the generation of visual, sound, written and other cultural content, on the one hand, and the growing interest of big American tech and AI companies, such as Google, in art and artists exploring AI, on the other.
Since research in art and AI contributes to LAWS research and development, and no international consensus yet exists on the governance of these systems, being intentional in our choice of partnerships and sponsors for our Art and AI residencies is critical.
The aim of this article is to ensure that artists are aware that their art can be militarized. It’s then up to each of us to decide whether the gamble is worth it. As for me, my choice is unequivocal: it led me to launch a petition, signed by over 2,000 people, and the Art Impact AI coalition.
And I can’t wait to announce the AI, Art, Law and Society platform of Alliance Impact (AIIA), for which the values of peace, digital sovereignty and the promotion of democracy are fundamental.
Stay tuned. Click here to subscribe to our newsletter.
Thanks to Serife (Sherry) Wong for her research support in writing this article.