The Group of Seven: Big Tech Heroes Commit to AI Adoption Strategy.

This week, seven US AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI) made a list of commitments to the White House to ensure that AI’s development and deployment uphold principles of safety, security and trust. Although this can be seen as a step in the right direction, it looks more like a huge marketing stunt that hands some of the biggest players in the game even more power.

I discuss below how Principle №5, which covers watermarking, and Principle №8, which covers AI for Good and algorithmic literacy, appear to function as branding and promotional mechanisms, more appropriately labelled an AI Adoption Strategy than a commitment to safe, secure and trustworthy AI.

The 8 Principles

1) Commit to internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas. This includes bias and discrimination.

2) Work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards.

3) Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.

4) Incent third-party discovery and reporting of issues and vulnerabilities.

5) Develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated, including robust provenance, watermarking, or both, for AI-generated audio or visual content.

6) Publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks, such as effects on fairness and bias.

7) Prioritize research on societal risks posed by AI systems, including on avoiding harmful bias and discrimination, and protecting privacy. This includes protecting children.

8) Develop and deploy frontier AI systems to help address society’s greatest challenges.

Principle №5.

Principle №5 is meant to protect trust in AI products. Here is how it is explained in the White House announcement:

“Companies making this commitment recognize that it is important for people to be able to understand when audio or visual content is AI-generated. To further this goal, they agree to develop robust mechanisms, including provenance and/or watermarking systems for audio or visual content created by any of their publicly available systems within scope introduced after the watermarking system is developed. They will also develop tools or APIs to determine if a particular piece of content was created with their system. (…) The watermark or provenance data should include an identifier of the service or model that created the content, but it need not include any identifying user information. More generally, companies making this commitment pledge to work with industry peers and standards-setting bodies as appropriate towards developing a technical framework to help users distinguish audio or visual content generated by users from audio or visual content generated by AI.”
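
To make the quoted mechanism concrete, here is a minimal sketch, in Python with the Pillow imaging library, of what metadata-based provenance could look like in practice. The field name (“ai_provenance”) and the service identifier are hypothetical illustrations, not any company’s actual scheme, and this approach is far weaker than the “robust” watermarking the commitment calls for: a screenshot or format conversion strips such metadata entirely.

```python
# Hypothetical sketch of metadata-based provenance tagging; not any
# company's actual scheme. Requires Pillow: pip install Pillow
from typing import Optional

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_image(src_path: str, dst_path: str, service_id: str) -> None:
    """Embed a provenance identifier as a PNG text chunk."""
    img = Image.open(src_path)
    meta = PngInfo()
    # "ai_provenance" is an invented key; a real framework would be
    # standardized (by a standards-setting body, as the pledge says).
    meta.add_text("ai_provenance", service_id)
    img.save(dst_path, pnginfo=meta)


def read_tag(path: str) -> Optional[str]:
    """Return the embedded identifier if it survived, else None."""
    img = Image.open(path)
    # PNG text chunks are exposed via the .text mapping.
    return img.text.get("ai_provenance")


# Usage: tag_image("gen.png", "gen_tagged.png", "ExampleModel-v1")
#        read_tag("gen_tagged.png")  ->  "ExampleModel-v1"
# Note: re-saving as JPEG or taking a screenshot drops this metadata,
# which is why "robust" watermarking is a much harder problem.
```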

Does this mean that the watermark would identify the company that created a given piece of content, as in OpenAI’s logo, or will they all agree on a shared watermark that shouts out Made in America? More importantly, what criteria will determine when a piece of work is to be watermarked by OpenAI, Microsoft or Google, and who gets to decide that?

Which Gen AI visuals count as Art, and will Art be exempt from watermarking?

One thing is for sure: artists and the organizations that represent them should be at the table and leading this discussion. It is not surprising that the Art X AI Conversation already has over 1,500 signatures! Experts from all disciplines (including some world-renowned leaders in their respective fields) and citizens are expressing their support for artists, creative workers and arts organizations.

Let’s remember that Microsoft fired the AI ethics team that was trying to minimize the impact of image generators on artists, so one would hope that they aren’t in charge of distinguishing “illegally web-scraped, algorithmically generated text, audio and visuals” from “art creations that use legally and equitably built Generative AI platforms as a tool, while remaining mostly human created”. Google has its eyes on the astronomical potential gains to be made from generative “art” in military applications of AI, so, again, it is probably not best positioned to lead policy innovation on these questions.

The criteria that guide that distinction will have major legal and economic repercussions for the arts and cultural sector, and potentially destructive implications for democracy and everyone’s well-being. The question “What is Art?” has reached new levels of political importance, and that is why the Art Impact AI Coalition and its partners are asking that artists, not Big Tech, be at the forefront of determining the guiding criteria.

Principle №8.

Advocating for Mission-Oriented AI has been AI Impact Alliance’s mission since it was founded in 2017. The commitment in Principle №8 should be music to our ears; however, a dissonant tone rings between the lines.

This group of seven agrees to support “research and development (R&D) of AI systems that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.”

Google and Microsoft already do this type of R&D via their AI for Good programs and sponsorships, so it’s not clear to me what’s new here. Is it a commitment to change their business model to ensure this R&D trickles down to equal access to quality health care for all?

The group then further commits to “support initiatives that foster the education and training of students and workers to prosper from the benefits of AI, and to helping citizens understand the nature, capabilities, limitations, and impact of the technology.”

There is definitely a foreseeable positive outcome to this; on the other hand, important questions also come to mind.

  • Which workers exactly are to be trained on how to prosper from the benefits of AI?

  • Which initiatives are they talking about? Are these in-house programs where project selection, as well as the resulting narratives and “policies”, remains under their control? Or will they sponsor independent, civil society-led programs with a hands-off approach, leading to legitimate policy and regulatory innovation?

In the end, if industry commits to using AI only for the Sustainable Development Goals, for improved health and climate action, and leaves policy and regulation to be adopted through proper, independent democratic mechanisms, then it appears we have taken a step forward.

Sceptically, though: AI Impact Alliance has been advocating for more capacity for the civil sector (civil society organizations, NGOs, mission-oriented non-profits and AI social-preneurs) for over five years, with little or no change in government and industry practices since, so it’s hard to imagine this is about to change. Yet it has never been more important…

Conclusion?

The media fanfare around the group of seven Big Tech CEOs’ commitment to uphold security, safety and trust must be taken with lots of salt.

What does salt mean in this context?

It can mean reading, listening and learning about AI and politics, and voicing your thoughts to elected representatives in government; it could mean joining the Art Impact AI Coalition (1,500 signatures and growing, supported by Canada’s leading arts organizations) or becoming a member of AI Impact Alliance (as soon as our membership re-opens), as we are stronger together.

Thou Art the Salt of the Earth, Mixed media, analog and digital paint, 2018, Valentine Goddard
