Who Benefits from AI Art? Is Big Tech Stealing Our Democracy?

How we regulate AI in a democratic capitalist system relies on the capacity of governments to listen to and protect their citizens. When technological and scientific breakthroughs are important enough to cause both industrial and information revolutions, leaders will attempt to manage the chaos with rules.

Corporations have control over “inventions”, which in the case of AI we would call “innovative integrations of AI systems” into marketable products and/or monetizable information (data). Corporations might have invested in the costly research that led to some of those breakthroughs and will want a return on investments. Corporations will want rules that maximise their benefits.

Who else wants to inform and shape the rules? Us. The People. Civil society. This is not one homogeneous group of people who agree on what we want out of AI systems. The benefits and the risks for one social group will likely be very different from the impacts on another. For one class of workers, AI's usefulness might increase safety and efficiency, but for others it could mean a complete and unrequested end to their practices and livelihoods. However, an informed civil society capable of engaging in deliberative approaches to policy innovation on AI governance is the very foundation of democracy.

Changing Definitions with Big Consequences

The arts and cultural sector, specifically, has a vital role in AI stewardship (governance) that governments and policy makers must protect.

To start, AI, and more specifically for now Generative AI, will not only disrupt (eliminate old and create new) work; it will disrupt information systems. It will disrupt media creation and transmission: how we create and share content. Maybe it's a little like inventing and deploying the printing press at the same time as Ford's assembly line…times ten?

What is certain is that the definitions of Generative AI, Art and Media are blending very rapidly, affecting all profit or non-profit sectors in that broad range of activity. That blending will have critical policy, legal and economic repercussions and could shake the very ground of capitalist democracies.

Generative AI is poised to be a $1.3 trillion market by 2032, so we can expect a lot of push and shove at the regulation table, because regulation means 1) a share of that pie and 2) control of information media. In the rush to innovate and scale, are leaders losing sight of the foundational value of quality information and genuine culture? With their eyes on the “prize”, are they forgetting what gave them power in the first place?

Arts in AI Governance

A little background

In 2019, I had the incredible opportunity to travel across Canada and meet with artists, cultural workers, arts administrators, community leaders and elders for the Art Impact AI program. The report includes a summary of the recommendations made by many of the workshop participants, which resulted in international policy recommendations underscoring the role of artists in AI governance. It also led to a community coming together around some of those questions. In the Art + AI Pre-Residencies, participants ranged from a 76-year-old oil painter to academic researchers in Natural Language Processing (vital to what everyone now knows as Generative AI). At the AI on a Social Mission 2020 conference, the first pandemic edition, the entire program was about the future of storytelling, featuring experts such as the founder of MIT’s Open Documentary Lab, William Uricchio, and Sandra Gaudenzi from the University of Westminster. In 2021, with AI on a Social Mission’s supporters, we raised the money to offer modest grants to artists who wanted to engage in the ethics and politics of AI. I was honoured to be gifted with learning from a number of Indigenous artists and cultural leaders throughout those years of exploration.

In 2022, I regrouped those thoughts in a working paper, Art Shaped AI: Emerging Scientific Orientation in Machine Learning Design and Best Curatorial Practices for AI Ethics. It is those Best Practices that I refer to in the ART X AI Conversation petition.

Policy and Regulatory Innovation Versus Lobbying

In short, what that paper attempted to say was that the arts are an excellent tool to inform and engage citizens in AI, teaching them about its many social implications and allowing busy workers to understand the potential and risks of AI, so that they can make free, informed civic choices. This in turn, through new “consultation” mechanisms, would help the government make better, more democratic, legitimate and trustworthy policy and regulatory choices.

In Canada, only 1 in 4 Canadians trust the government to regulate AI.

Access to information, freedom of the press and the expression of political opinions through new forms of media are key to democracy. Being able to communicate civil society’s perspectives and being heard is also key. “Public participation in public policy is more important than ever. We can’t afford to wait 3–5 years to vote on something that is advancing so rapidly. This is where electoral democracy needs to be complemented with deliberative democracy,” says PeiChin Tay, summarizing a panel discussion on this geopolitically sensitive topic at Asia TechX2023, an important conference on AI in Asia.

Policy innovation in deliberative democracies is different from lobbying and regulatory capture, which serve very different purposes.

Regulatory capture is “a form of corruption of authority that occurs when a political entity, policymaker, or regulator is co-opted to serve the commercial, ideological, or political interests of a minor constituency, such as a particular geographic area, industry, profession, or ideological group. When regulatory capture occurs, a special interest is prioritized over the general interests of the public, leading to a net loss for society.”

As the push to regulate AI intensifies, we must prioritize independent information and dialogue creation (news, images and visuals, education, social work, and all forms of art).

Art and Canada’s proposed AI and Data Law, Bill C-27

Following an emergency meeting of the Canadian AI Advisory Council held by Minister Champagne, co-signatories gathered to ask our elected officials to support Bill C-27, which includes one of the world’s first legal frameworks to regulate AI. As a member of the AI Advisory Council, I expressed some priorities which were addressed in the final draft; however, as many experts pointed out, the framework’s “flexibility” left key concepts undefined, such as what constitutes harm, a high-impact system, or a reasonable person. Optimistically, I referred to Recommendation no. 5 made by the Public Awareness Group (a subgroup of the AI Advisory Council), which highlights the important role of the arts in AI engagement.

This gave me a reasonable amount of hope that this was a commitment, and that following the eventual adoption of the Bill, a true engagement effort would be made to inform, engage, and listen to civil society, and importantly that artists would have an essential role to play. It was a clear win for different ways of civil society engaging with our government, of being heard, and of shaping emerging AI regulations.

Bill C-27 must come hand in hand with a serious commitment towards the art and cultural sector.

What Are The Privileges That Come With Defining Generative AI As Art?

Artistic, Literary or Journalistic Purpose: an Exception to Privacy Protection

Privacy protection in Canada has some exceptions for good reason, but as definitions of art and media blur, those exceptions might become a serious cause for concern.

Bill C-27 determines who can collect citizens’ personal data without their consent. Part 1 of the Act reiterates an existing exception in privacy law: “An organization may collect an individual’s personal information without their knowledge or consent if the collection is solely for journalistic, artistic or literary purposes.” Section 38, Bill C-27

Jurisprudence defining the “artistic exception” is scarce (few artists can afford going to court). In one case, CBC supported Google’s claim that Google was protected by the journalistic exception (they lost), and in another, seismic data architecture was recognized as “art work” (they won). That’s about it. In both cases, the uses that the parties sought to have considered artistic or journalistic exceptions were actually technological uses.

If generative AI is defined as art, the exception “for artistic purposes” could open the door to a number of uses of personal data without consent, such as immersive and interactive art forms that collect participants’ sensitive biometric data, for purposes other than public good.

AI, Art and…Security?

If Generative AI is recognized as art, or as literary art, or journalism, the terminology morphs into Generative “Art”. Generative “Art” can and will be used for military purposes (deepfakes/fake news, facial and biometric surveillance, automated headhunters to increase online radicalization).

Large tech companies bid on very lucrative contracts, and an increase in the military applications of AI is to be expected. DeepMind (a subsidiary of Google) will be at the first AI and Security conference to be held this fall in the UK, while the “Foreign Secretary will also convene the first ever briefing of the UN Security Council on the opportunities and risks of Artificial Intelligence for international peace and security.”

Conclusion

Definitions matter legally and economically, and if Generative AI is defined as Art, via various funding and regulatory capture mechanisms, this will have critical policy, legal and economic repercussions. Making sure that artists, and the organizations that represent them, are at the helm of how art is defined could make the difference between a strong democracy that values its news, art and media makers, and the following legal sci-fi scenario:

  • Generative AI is recognized as art, or as literary art, or journalism, becoming Generative “Art”.

  • No consent necessary to collect personal data for Generative “Art” purposes.

  • This data is used to train AI systems with likely uses for military strategies, increased surveillance, polarization, decreased trust in public and democratic institutions, social unrest, etc.

We still have the power to choose.

Here’s how you can support the arts and cultural sector:

  • You can sign our ArtXAI Conversation petition;

  • Follow me on social media as I am planning a meeting with key stakeholders at the table;

  • If you are an AI research lab, you can adopt policies that prevent Big Tech from accessing your publicly funded research and urgently prevent them from making “policy recommendations”, a.k.a. lobbying;

  • If you’re a decision maker in government, you can make a public and strong commitment to protect the vital role of the arts including in deliberative mechanisms and sustainable democracies.

Photo credit: Gaëlle Vuillaume.
