Emerging Scientific Orientations in Machine Learning System Design and Best Curatorial Practices in Artificial Intelligence Ethics

Valentine Goddard, AI Impact Alliance

Introduction

The arts are instrumental in the future of artificial intelligence (AI): as a tool for digital and scientific literacy, as a means of civic engagement in a digital democracy, and as part of emerging interdisciplinary machine learning design methods. While there is already substantial literature on the roles the arts can play in illustrating complex notions, depicting realities that would otherwise be hushed, or opposing the status quo, the literature on the role of art in the development and governance of AI is still emerging. The intersection of new scientific directions in machine learning design with inter-arts curatorial practices in AI ethics invites the reader to imagine a creative, sustainable and inclusive AI.

In this chapter, the author outlines systemic barriers that, from the earliest stages of the design of AI systems, undermine the ethical and responsible development of AI. Specifically, this chapter addresses three barriers: a gendered digital divide, a general lack of understanding of the ethical and social implications of AI, and a body of AI ethics guidance that suffers from the underrepresentation of civil society.

New scientific orientations offer solutions whereby the social sciences and humanities can, throughout the different stages of algorithmic model design, strengthen the quality, legitimacy and governance of AI. Indeed, a growing number of machine learning researchers express the need for interdisciplinary design of learning systems. The author links these new scientific directions with the inter-arts discipline of AI ethics to explain how art, including algorithmic art, can intervene in a socio-technical “pipeline” of data collection and algorithmic design while contributing to a more inclusive narrative and understanding of the ethical, social, legal, cultural, economic and political implications of AI. The case study presented, PearAI.Art, promotes a participatory approach to data collection and annotation, and interdisciplinary machine learning design, in the form of an algorithmic art project to counter gender bias in image generation. The chapter concludes with a presentation of best practices to consider when curating projects involving AI ethics.

  1. Some obstacles to ethical and responsible AI

1.1        A gendered digital divide

“Socioeconomic (income and other) inequalities are closely associated with digital inequalities, as the former typically shapes the latter, which in turn reinforces existing inequalities, creating a vicious circle. Tackling socio-economic inequality through digital technologies can therefore only address the symptoms, but not the root causes, of inequalities. Policies to reduce the digital divide must be multidimensional: technological, economic, social and educational (awareness raising), and should address both socioeconomic and digital inequalities simultaneously.” (UNDESA)

Moreover, digital transformation has accelerated during the pandemic due to health measures, but so has the digital divide, which disproportionately affects women. It is therefore important, if not paramount, to emphasize upfront that:

“The policies in response to COVID-19 highlight the ways in which many women and girls are disadvantaged due to digital exclusion and lack of digital equality. Equitable digital engagement is necessary to ensure women’s full economic engagement, to amplify their voices, and to enforce laws that enshrine women’s rights. Transformative interventions must focus on cultural change and gender equality policies […]. Failure to do so means that women will continue to ‘pay the price’ for the systemic inequalities amplified by the pandemic. Systemic change also implies the need to scale up interventions to address the challenges of a gendered digital divide.” (Nefresh-Clarke et al.)

Starting a race when more than half of the runners cannot even reach the starting line was a problem before the pandemic. Several authors, such as Cathy O’Neil, Safiya Umoja Noble, Virginia Eubanks, Joy Buolamwini and Timnit Gebru, among many others, have demonstrated how serious and tangible the risk is that AI will deepen social inequality, a concern shared by many AI experts. “Left to its current course, the digital economy is likely to widen both regional and gender divides” (Benjamin). Ultimately, the social, political, and economic system in which AI is deployed will determine what benefits it can bring, and to whom.

1.2 Future citizenship: lack of capacity to make informed political choices

Experts agree that ethical guidelines for the development and governance of artificial intelligence require accountability, fairness, and transparency. However, the definition of what these terms entail can differ quite significantly. We assume that transparency goes beyond the ability to explain the results of algorithms (a concept called “explainability” or “interpretability”) and is not just about being able to explain an algorithmic decision to disgruntled customers/investors/judges. It is fundamentally about enabling citizens to make informed decisions about the use of their data in algorithms. Yoshua Bengio, a renowned AI expert and researcher, is adamant: “We have a responsibility not to leave (these decisions) in the hands of a few people because the impacts (of AI) will affect everyone. There are political choices to be made and the ordinary citizen needs to understand them.” Niskar et al. found that in order to design legitimate policies, policymakers must ensure that a large number of citizens with diverse perspectives understand the implications of new technologies or scientific applications, and their research shows that the arts are among the most effective tools for achieving these goals. Recent United Nations policy recommendations emphasize the important role of civil society and the arts in sustainable and ethical digital governance (UNDESA); however, on the ground, a better understanding of the implications of AI remains a goal to be achieved.

1.3 Controlling the narrative around new AI technologies creates a lack of trust

A new technology understood by a limited number of experts (Gagné) and investors (Brandusescu) is fertile ground for restricting its benefits to this set of players. The history of the regulation of new technologies shows that it is strongly influenced by powerful consortia of private interests. In Electric Sounds: Technological Change and the Rise of Corporate Mass Media, Steve J. Wurtzler explains how corporations built strategic alliances to control both the narrative of the new technology and its ownership through the creation of patent pools, defined as agreements between patent owners to share the profits. Innovation in acoustics thus exacerbated an increasing concentration of ownership and power within the U.S. mass media. During this same period, acoustic innovation was lauded as a “tool of public necessity” when in fact its independent and educational uses were elided by the above strategies (Wurtzler).

Without serious intervention at the systemic level, history will repeat itself as a parallel emerges with the commercialization of AI technologies, a discussion often distinct from discussions on Responsible AI. From 2005 to 2018, the five largest technology companies in the US spent US$582 million to influence legislation (Dellinger). Worldwide, the benefits of AI are being privatized: 26 of the top 30 AI patent applicants are corporate conglomerates. Only four of those 30 are from universities or public research organizations and are based in China (WIPO). In Canada, public funding for AI is overwhelmingly reserved for the private sector, a gap that raises concerns about how the values and priorities of capitalist business models shape the impact of AI on society (Brandusescu).

A growing number of frameworks are taking shape and shaping AI and its impact on society. These guidelines inform judges, politicians and business leaders, and determine what counts as acceptable use of AI. However, as of 2019, the values incorporated into these guidelines represented those of only a limited number of citizens and economic sectors. Researchers identified 84 written documents containing ethical principles, guidelines or frameworks and analyzed who contributed to their guidance: 54.7% of these documents were produced by or with the private sector, compared to only 2% by civil society organizations such as trade unions, NGOs and independent non-profit organizations (Jobin et al.). The resulting norms risk prioritizing some values over others and leading to exclusionary policies in a digital democracy. Moreover, self-regulation by industry does not inspire trust or effectiveness (Colclough; Jordan; ICTC). The independence of the processes by which normative frameworks in AI are adopted can be enhanced by supporting the creation of more frameworks led by civil society organizations (NGOs, NPOs, etc.), or by public-civil partnerships. In the second part of the chapter, the author introduces how the arts can contribute to ensuring the independence and transparency of AI governance, and to increasing trust in it.

Finally, an informed social dialogue, constructive deliberation and critical design are part of the processes needed to build greater collective decision-making capacity in the face of important policy choices. Let us digress briefly here to point out that recent amendments to data protection and consumer protection laws and policies are fundamentally based on the notion of meaningful consent. In Quebec civil law, this concept rests on Article 1399 of the Civil Code of Québec, which requires that valid consent be free and informed, and not vitiated by error, fear or lesion. To give informed consent, a person must understand the impact of his or her choice, and the arts are a tool that increases the ability of the population to understand the various social, legal, economic and political implications of these choices. On a day-to-day basis, the arts can help inform citizens’ consent to the use of their personal data. As such, the arts are instrumental in the societal progression needed to keep pace with the rapid implementation of AI.

Over the next few sections, the reader is invited to discover emerging methodologies of data collection, annotation, and machine learning design in order to better understand the role of the arts in the socio-technical pipeline of AI development and governance. The chapter then introduces how these emerging scientific directions intersect with an artistic discipline, inter-arts curation.

  2. Sociotechnical pipeline: issues and intervention methods

2.1 Gender bias in the socio-technical pipeline, from input data to algorithmic output

The term “sociotechnical pipeline” should be read, in the context of this chapter, as a space of intervention intended to reduce the harm that algorithms might cause, or to increase their benefits (Suresh and Guttag). The pipeline starts from the design of the questions asked and solutions sought, and includes data collection, data preparation (annotation, labeling), data architecture design, algorithmic model development, and its governance (ethical and normative frameworks). It is from the beginning of the data-algorithm pipeline to its end, and ideally in a continuous loop, that an inter-arts transdisciplinary approach can intervene to foster ethical and responsible development and governance of AI.
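To make this space of intervention concrete, the sketch below models the pipeline stages named above as a simple reviewable structure. The stage names follow the paragraph above; the attached review questions are illustrative assumptions drawn from the chapter’s themes, not an established checklist.

```python
from enum import Enum

class Stage(Enum):
    """Stages of the sociotechnical pipeline, as listed above."""
    PROBLEM_FRAMING = "design of questions asked / solutions sought"
    DATA_COLLECTION = "data collection"
    DATA_PREPARATION = "data preparation (annotation, labeling)"
    DATA_ARCHITECTURE = "data architecture design"
    MODEL_DEVELOPMENT = "algorithmic model development"
    GOVERNANCE = "governance (ethical and normative frameworks)"

# Illustrative review questions an interdisciplinary team might attach to
# each stage; meant to be revisited in a continuous loop, not a single pass.
REVIEW_QUESTIONS = {
    Stage.DATA_COLLECTION: ["Whose data is collected, and whose is missing?"],
    Stage.DATA_PREPARATION: ["Who annotates the data, and from what positionality?"],
    Stage.GOVERNANCE: ["Which communities contributed to the framework?"],
}

for stage in Stage:
    for question in REVIEW_QUESTIONS.get(stage, []):
        print(f"[{stage.value}] {question}")
```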

The data that is collected, and the data that is not, reflect biases rooted in our history, resulting in incomplete or unrepresentative data. Systemic racism and sexism influence the questions asked, how the answers (data) are used, and the design of AI technologies. They favour support for some uses of AI over others, and restrict access to the social and economic potential that AI could bring to a limited number of players. It is fundamental “to interrogate the norms and values that underlie the creation of datasets, as these are often extractive processes that benefit only the collector and users of the datasets” (Chan et al.).

Most of the databases currently used in machine learning are the result of a technology culture heavily dominated by white men. As Catherine D’Ignazio and Lauren F. Klein argue in their work on Data Feminism:

“Today, data science is a form of power. It has been used to expose injustice, improve health outcomes, and overthrow governments. But it has also been used to discriminate, police and monitor. This potential for good, on the one hand, and harm, on the other, makes it essential to ask: Data science by whom? Data science for whom? Data science for whose benefit? The narratives around big data and data science are overwhelmingly white, male, and techno-heroic.”

Commonly used databases contain sexually charged, derogatory, and discriminatory annotations or words, which teach AI systems a distorted meaning of words like “woman.” In turn, machine learning algorithms internalize and reiterate biases about women and other people and communities underrepresented in AI. Fuelled by a distorted representation of ideas of femininity, AI suffers from a severe gender crisis that affects us all, a consequence of algorithms developed by a small percentage of human actors. Moreover, the gender gap in the technology sector may partly explain a feedback loop between the low number of women in the field and algorithms that increase gender inequalities in employment, for example (Luccioni and Bengio).
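A minimal sketch of how such learned associations can be probed in practice, assuming pretrained GloVe word vectors downloaded through the gensim library; this is a generic illustration of embedding-bias probes, not an analysis performed in the chapter.

```python
import gensim.downloader as api

# Download (on first run) and load small pretrained GloVe word vectors.
vectors = api.load("glove-wiki-gigaword-50")

# Classic analogy probe: vector("doctor") - vector("man") + vector("woman").
# Stereotyped associations learned from the training corpus often surface here.
print(vectors.most_similar(positive=["woman", "doctor"], negative=["man"], topn=5))

# Compare how strongly gendered words associate with occupation words.
for occupation in ["nurse", "engineer", "homemaker", "programmer"]:
    print(occupation,
          "woman:", round(vectors.similarity("woman", occupation), 3),
          "man:", round(vectors.similarity("man", occupation), 3))
```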

“Smart” technologies cannot ignore the history and social context from which they originate and in which they are deployed. Note that to date, 80 percent of AI faculty are men (West et al.), between five and 20 percent of AI workers are women (variation by country and industry) (Yuan), only 15 percent of science graduates come from working-class households (Nature), and black/African American AI workers in the tech industry make up less than five percent of the workforce (Alake).

2.2 AI generates images based on the words of humans

The range of algorithmic applications and models in AI is vast; the author chose a case study that focuses specifically on deep learning models used to automate image generation. Automated image generation uses deep neural networks trained on large amounts of data consisting of images and corresponding written descriptions (Xu et al., 2018, and references in Goddard et al., 2021). These models, along with the data collection and annotation processes, replicate existing systemic discrimination in society, in this case discrimination against women or people who identify as women.
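To ground the discussion, here is a heavily simplified PyTorch sketch of the core idea behind text-to-image models such as AttnGAN: a caption embedding, concatenated with random noise, conditions an image generator. This is an illustrative toy, not the actual AttnGAN architecture (which adds attention mechanisms, multi-stage generators, and learned text encoders).

```python
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    """Toy text-to-image generator: a caption embedding conditions synthesis."""
    def __init__(self, text_dim=256, noise_dim=100, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + noise_dim, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),           # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, img_channels, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, text_embedding, noise):
        # The caption embedding steers what is drawn; noise adds diversity.
        return self.net(torch.cat([text_embedding, noise], dim=1))

generator = TextConditionedGenerator()
caption = torch.randn(1, 256)   # stands in for an encoded phrase
image = generator(caption, torch.randn(1, 100))
print(image.shape)              # torch.Size([1, 3, 32, 32])
```

In a trained model, the biases discussed above enter through the image-caption pairs on which both the text encoder and the generator are trained.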

There is a consensus of intent around prioritizing uses of AI that serve beneficial social and sustainable development goals. Given that what counts as beneficial differs from culture to culture, community to community, and individual to individual, iterative methods of collective data collection and socio-annotation in the design of algorithmic models are rising to the top of AI experts’ lists of recommendations.

A consensus is emerging in the recent AI literature, and new scientific orientations aim at an interdisciplinary design of machine learning models. Note that interdisciplinarity in science is not new; however, the integration of the social sciences (a STEAM approach) in AI still meets persistent systemic resistance when research and development teams are formed. This is why recent publications recommending an interdisciplinary design of AI models, one that integrates the social sciences, law, humanities and arts right from the preparation of the datasets on which machine learning models are based, are in line with the author’s recommendations about inter-arts curatorial practices in AI ethics.

2.3 Interdisciplinarity to improve the quality of AI algorithms and technologies

First, we note the extensive research of Scheuerman et al., who analyzed 113 machine vision (computer vision) datasets to identify the values that framed the choice of data collected or rejected. Four dominant values were identified: efficiency, impartiality, universality, and model work (improving the algorithmic model). Other values, by contrast, were neglected or implicitly devalued in favour of those selected. Thus:

  • Efficiency is privileged over empathy and caring (an approach to data curation that is considered more progressive).
  • Impartiality is preferred to positionality, i.e., taking into account social and political influences on the understanding of the world.
  • Universality is preferred to contextuality, which consists in focusing on more specific tasks, places or audiences.
  • Model work is valued over data work, i.e. most authors of the datasets studied focus little on data practices in favour of efforts to improve machine learning models.

According to a study corroborating these findings, data curation practices have been guided not by a concern for equitable representation or diversity, but rather by task or convenience, which contributes to a lack of inclusivity in AI (Jo et al.). Scheuerman et al. recommend prioritizing the values of contextuality, positionality, caring, and data work, via proactive interventions throughout the sociotechnical pipeline, from data curation to algorithmic model development. In doing so, the authors are confident that the result will be greater trust in the models developed and a more ethical, human-centered AI (Scheuerman et al.).

This study reinforces the recommendations of research published earlier that year emphasizing the importance of deliberate, interdisciplinary, and participatory methods of machine learning design. To address the problems of automation bias and inequality, “a new specialization should be formed within AI that is focused on data collection and annotation methodologies, is more conscious and systematic in data curation, and leverages interdisciplinary expertise” (Jo et al.).

Inspired by participatory governance methods, another group of researchers seeks to design algorithms in a way that balances divergent interests in a moral and legitimate manner, concluding that “participatory algorithm design improved both procedural fairness and equitable algorithm outcomes, increased participants’ algorithmic literacy, as well as helped to identify inconsistencies in human decision making in the governing organization” (Lee et al.).

2.4 Inter-arts transdisciplinary research in AI ethics meets emerging scientific directions in machine learning design 

This chapter focuses on “inter-arts” practice, an artistic discipline recognized by the Canada Council for the Arts (CCA). The CCA defines inter-arts practice as the exploration or integration of multiple traditional and/or contemporary artistic disciplines, merged in such a way that no single artistic discipline dominates the final result. These transdisciplinary methods intersect the arts with other, non-arts disciplines to explore a theme or issue. The author, a legal scholar and inter-arts curator, promotes iterative and participatory research into the social, legal, economic, political and ethical implications of AI, using algorithmic art as one of its tools. Inter-arts interventions focus on specific issues such as social justice or climate change.

PearAI.Art, the case study presented later in this chapter, is an inter-arts project that uses algorithmic art to counter gender bias in AI; but let us get back to basics for a moment. The real power of art lies in its capacity to help us see, feel, hear and imagine alternative digital futures. “Art is not about stagnation, conformity, fear. Art is about risk-taking, resistance, empowerment and transformation. If we are to reorganize society after the pandemic, we need […] institutions that focus on systemic solutions and collective/collaborative practices that promote community care and participation, collective consciousness, and the realization of concrete actions.” (Salas) Algorithmic art fits into this dynamic, as it can be a tool that helps eliminate gender, racial, and cultural biases. Definitions of algorithmic art vary and include various literary, musical, and performance disciplines; for the purposes of this chapter, however, we refer to the visual output generated by a deep learning model, sometimes adapted by an artist using digital and/or analog methods.

2.5 PearAI.Art: data collection and participatory rehabilitation of algorithms

Biases are not always reflected in numbers; they can also be reflected in the words we use to describe the world around us (Luccioni and Bengio). In their study, D. Smith et al. concluded that different words are used to describe male and female leaders, and that women are assigned significantly more negative attributes. For this reason, diversity of perspective in labeling images is essential, because both data collection and annotation are highly subjective processes (Haralabopoulos et al.).
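Because annotation is subjective, disagreement between annotators is itself a signal worth measuring. A minimal sketch using scikit-learn’s Cohen’s kappa follows; the annotators and labels here are invented for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Two (fictional) annotators labeling the same six images.
annotator_a = ["woman", "person", "woman", "leader", "woman", "person"]
annotator_b = ["person", "person", "woman", "woman", "woman", "leader"]

# Kappa corrects raw agreement for chance; 1.0 = perfect, 0.0 = chance level.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Inter-annotator agreement (Cohen's kappa): {kappa:.2f}")
```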

When, in 2019, an algorithmic model interpreted the words “woman,” “beauty,” and “imperfection” as a pear shape, the project “Algorithmic Art to Counteract Gender Bias in AI” (2020-2022), www.PearAI.Art (hereafter PearAI.Art), was born. The goals were to better understand where this result originated, and how it might be possible to re-educate image generation models to produce different images.

OpenAI further emphasizes the importance of studying the social impacts of automated image generation: “Work involving generative models has the potential to have significant societal impacts”, and the lab “plans to analyze how models like DALL-E relate to societal issues such as the economic impact on certain work processes and professions, the potential for bias in model output, and the longer-term ethical challenges involved with this technology” (OpenAI). Such a discussion is beyond the scope of this chapter, but the reader can find a more in-depth treatment of the state of the art in automated image generation, and of the application design process, in Algorithmic Art to Counter Gender Bias in Artificial Intelligence: Changing AI’s Mis-pear-ceptions of Us (Goddard et al., 2021).

The PearAI.Art project comprises a data collection phase, including the design of a word crowd-annotation application; an AI research phase aimed at eliminating gender bias in text-to-image generation models (planned for Fall 2021 and Winter 2022 with support from researchers in the NSERC CREATE program in Responsible AI); and a creation phase.

The PearAI.Art crowd-annotation app invites women, and people who identify as women, to redefine the concepts of femininity, beauty and imperfection with nine words. The app initiates a process of engagement and algorithmic literacy, as its text invites participants to learn more about this particular AI technique.
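As an illustration of the kind of input handling such an app needs, here is a hypothetical validation routine for a nine-word submission. The actual PearAI.Art application logic is not described in the chapter, so the rules below are assumptions.

```python
def validate_submission(words):
    """Check a crowd-annotation entry of nine words (hypothetical rules)."""
    cleaned = [w.strip().lower() for w in words if w.strip()]
    if len(cleaned) != 9:
        raise ValueError("Exactly nine words are required.")
    if any(" " in w for w in cleaned):
        raise ValueError("Each entry must be a single word.")
    if len(set(cleaned)) != len(cleaned):
        raise ValueError("Words must be distinct.")
    return cleaned

print(validate_submission(["strong", "ocean", "tree", "empathy", "resilient",
                           "creative", "wise", "brave", "whole"]))
```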

The words collected by the application are saved in a Google Sheet document, hereafter referred to as the PearAI dataset. This dataset will be used for AI research, for digital print creation, and for algorithmic literacy workshops. At the time of submission, PearAI.Art had already collected nearly 2,000 words over a one-month period, with entries from over 40 countries. By inputting words of their choosing early in the algorithmic data-to-results pipeline, participants help deconstruct automated biased perceptions and build an “ontology of becoming” (Maruska).
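A minimal sketch of how such a dataset could be summarized once exported, assuming a local CSV export of the Google Sheet with hypothetical “country” and “words” columns (nine comma-separated words per row); the real sheet’s structure is not specified in the chapter.

```python
import csv
from collections import Counter

# Hypothetical CSV export of the PearAI Google Sheet, one submission per row.
with open("pearai_dataset.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

word_counts = Counter(
    word.strip().lower()
    for row in rows
    for word in row["words"].split(",")
    if word.strip()
)

print("Submissions:", len(rows))
print("Countries represented:", len({row["country"] for row in rows}))
print("Most frequent words:", word_counts.most_common(10))
```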

Immersing oneself in these thousands of words (the PearAI dataset) already provokes a pleasant feeling of well-being, like a breath of relief and inspiration for women and people who identify as women. The words collected convey a sense of resilience, creativity and benevolence; they speak of strength and empathy at the same time, and propose allegories of trees, oceans and mountains, all concepts that go beyond a bodily form (the pear). As an addendum, the reader will find two digital prints offered as examples, along with some preliminary observations inspired by the submitted words (Figures 2, 3 and 4).

The author hopes that this chapter has allowed the reader to connect the emerging scientific orientations in AI that recommend an interdisciplinary and participatory approach to machine learning design, data collection and annotation with those of inter-arts practices in AI ethics. The case study presented, PearAI.Art, can be summarized as a set of inter-arts interventions throughout the socio-technical pipeline, from “data work” to “model work”, aimed at designing more inclusive algorithmic models. The next section serves as a guide for projects integrating art and AI ethics.

  3. Best practices in projects integrating art and the ethics of AI

Part of the history of generative art is the desire to escape the darker side of humanity, which resides in its subjective nature, and the aspiration to find objective ways to support democratic, transparent and participatory processes of collective communication. Part of the thinking was that if machines could remove the subjectivity from art and aesthetic judgment and imbue them with the transparency and clarity of science, we could achieve clearer communication (Caplan). Contrary to these hopes, in 2021 one need only read a technology news feed to recognize that AI systems are neither neutral nor unbiased, and that machines alone cannot provide the hoped-for impartial communication tool.

According to the author, algorithmic art should “embrace the subjectivity of humans, the diversity of their lived experiences as a result of their physical, political and cultural contexts” (Ellis and Flaherty). Algorithmic art offers the opportunity to embrace positionality, and to engage a wide range of people, with diverse perspectives, in the important choices involved in a digital democracy.

What follows is a best practice guide for projects integrating art and AI ethics. It is designed to feed curatorial thinking in the design of interactive and immersive installations or digital literacy workshops, but also to inform the development of public and private funding policies in AI research and development.

3.1 Algorithmic art must be political and contribute to the evolution of AI governance

“Art is born of its social context and must always be in dialogue with this social element: art has a social purpose [and] art belongs to the people. And art is unashamedly, unembarrassedly, if that word exists, social. It is political, it is economic. Those who tell you ‘Don’t put too much politics in your art’ are not honest. If you look carefully, you will see that they are the same people who are quite happy with the situation as it is. And what they are saying is not don’t bring in politics. What they are saying is don’t upset the system. They are just as political as any of us. It’s just that they are on the other side.” (Achebe)

Algorithmic art is social and political: it involves notions of data ownership and touches closely on issues of cultural appropriation (e.g., using cultural data to generate images) that are beyond the scope of this chapter.

“All technical systems are cultural and social systems. Every piece of technology is an expression of cultural and social frameworks for understanding and engaging with the world. AI system designers must be aware of their own cultural frameworks, socially dominant concepts, and normative ideals; beware of the biases that arise from them.” (Lewis).

In addition, the Indigenous Protocol and Artificial Intelligence Position Paper explains that the concepts of ownership and appropriation do not reflect how Indigenous communities wish to govern the use of their cultural knowledge (data). The Protocol also highlights the important role of the arts in developing AI that reflects diverse Indigenous values, including the governance frameworks under which AI technologies will be deployed.

3.2 Avoiding dystopia and fostering a sense of “agency”

Sommer and Klöckner’s research, based on environmental psychology theory, identified the mechanisms by which engaged art affects its audience. They concluded that artists who care about the impact of their work should move away from depicting issues such as climate change or the impact of AI on human rights in a dystopian way, and should instead prefer designs that offer the audience solutions. The artworks that most engaged participants were those that highlighted the personal consequences for participants and their own role in the situation. Their research recommends fostering a sense of “empowerment” in the audience.

Art can be boring, useless or even harmful when it comes to raising awareness and inviting action. Art can also highlight negative and destructive scenarios that draw attention, but painting things black and inducing fear only reduces motivation (O’Neill et al.). As Noam Chomsky said, “If you assume there is no hope, you guarantee there will be no hope. If you assume that there is an instinct for freedom, that there are opportunities to change things, then there is an opportunity to help make the world a better place.”

 3.3 We learn best together

An exploratory study on the impact of collective immersion concluded that immersive art installations and environments promote learning, but that participants learn best when they are in the environment with others (Du Vignaux et al.).

3.4 Inclusion in Design

Good curatorial practice in the design of games, or other forms of artistic intervention, that explore the ethical implications of AI should include the (paid) participation of people underrepresented in AI. For example, the Art Impact AI games (Goddard) were designed by a team of artists from communities underrepresented in AI and allowed for an open dialogue about the implications of facial recognition, recommendation, and decision support systems.

3.5 Get out of institutions, favour public places

The same research concludes that it is best to take art out of institutions and into public spaces, not only to reach a wider audience, but also because it avoids the connotation that art is reserved for a certain elite population (Sommer and Klöckner). Jer Thorp’s book, Living in Data, invites citizens to collect data about themselves, and to allow artists to use that data to, in turn, engage citizens on important social issues. He says that while data visualization can be a powerful tool, the tools and knowledge to use it effectively are not always accessible. For this reason, analog art forms, as well as simple tools like cardboard boxes, can be very effective in expressing the meaning of data.

3.6 Recognize the plurality of knowledge sources in co-construction processes

Capturing data is a way of documenting our perceptions of a facet of reality. The results, rendered by traditional media or by a new AI technology, are a way of co-constructing a documentary. Assuming that the goal of this process is beneficial social change (human rights, sustainable development goals), the recommendations of authors and experts in emerging media emphasize the importance of highlighting and appreciating this plurality of knowledge sources (Auguiste et al., 2020). It is one person’s questions, another’s wonder, an author’s research, a chance reading, a painting from another era that inform best practices and ethical frameworks, which evolve through an equitable, iterative process promoting greater inclusion and diversity of perspectives.

3.7 Authenticity and concrete objectives 

Algorithmic art, within a framework of engaged inter-arts practice, is an important tool for challenging societal and automated systems that promote gender, racial, and cultural biases and the systemic discrimination that follows. It must therefore foster “a climate in which there is genuine concern for (and a concrete commitment to achieving) full equal rights,” and avoid the “danger that using the law to achieve change” will “focus too much on the (minimal) changes deemed necessary” (H. Smith et al.).

  4. Conclusion

The author hopes to have demonstrated the importance of the arts, and particularly of an inter-arts practice via the algorithmic arts, in data curation and machine learning design, in addition to their role as a fundamental literacy and civic engagement tool in a digital democracy. This meeting between an inter-arts practice of AI ethics and the emerging scientific orientations of machine learning design leads us to a transdisciplinary approach that transcends the traditional boundaries and definitions of each of the disciplines involved; it aims beyond interdisciplinarity between two disciplines and becomes a discipline in itself (Choi and Pak). Let us call it the inter-arts design of AI for the purposes of this conclusion.

Inter-arts AI design is a constructive response that contributes to greater digital literacy and fosters AI governance processes that build greater trust. It is a tool, an emerging discipline, that joins the scientific orientations of AI experts who put forward an interdisciplinary conception of AI. It increases the number of citizens able to take an active part in a digital economy. It facilitates the involvement of more women and people from diverse communities in the development and governance of the digital future, thus contributing to the political balance of a digital democracy. Given the significant impact that this new discipline and field of practice could have, it will be essential to focus on the funding policies that will facilitate its adoption.

“In the scientific and cultural transformation needed to align AI technologies with our values and well-being, and thus reduce discrimination, projects like PearAI.Art, which combine art and appropriation of AI by all and for all have an important role.” (Bengio)

In closing, 2021 was the International Year of Creative Economy for Sustainable Development, and the author hopes that an inter-arts design of AI and the proposed best practices can provide a framework for AI development and governance in line with the UN’s goals. “Data collection, consultation with (creative) industry workers, and a gender-based perspective can serve as a guideline for working together toward a truly inclusive and prosperous creative economy.” (UNESCO)

Acknowledgements: for research on the current state of knowledge in automated image generation systems, Daniel Harris, AI Impact Alliance; for the design of the PearAI.Art application, Jonathan Reyes and Marta Kersten-Oertel of Applied Perception Lab, Concordia University; for miscellaneous contributions, Giulia Taurino; for scientific illustration, Audrey Désaulniers. I would also like to thank the members of the Art + AI collective, the participants of the AI in Social Mission workshops, and the partners who made the AI Impact Alliance action-research possible.

References

Achebe, Chinua, from Conversations with James Baldwin, James Baldwin, Fred L Standley, Louis H Pratt (eds.), University Press of Mississippi, 1989.

Alake, Richmond, “Are There Black People In AI?”, Towards Data Science, 2020. https://towardsdatascience.com/are-there-black-people-in-ai-fb6928166d73

Auguiste, R., De Michiel, H., Longfellow, B., Naaman, D., Zimmermann, P.R., “Co-creation in Documentary: Toward Multiscalar Granular Interventions Beyond Extraction”, Afterimage, University of California Press, March 2020.

Bengio, Yoshua, Future of AI, Launch of AI Impact Report, 2020.

Benjamin, Ruha, “Ruha Benjamin on Deep Learning: Computational Depth Without Sociological Depth is ‘Superficial Learning’”, Venture Beat, April 2020. https://venturebeat.com/2020/04/29/ruha-benjamin-on-deep-learning-computational-depth-without-sociological-depth-is-superficial-learning/

Bond, S.E., and Junior, N., “How Racial Bias in Tech Has Developed the ‘New Jim Code’”, 2020. https://hyperallergic.com/593074/how-racial-bias-in-tech-has-developed-the-new-jim-code

Buolamwini, Joy, Algorithmic Justice League, https://www.ajl.org/

Brandusescu, Ana, “Artificial intelligence policy and funding in Canada: Public investments, private interests”, Centre for Interdisciplinary Research on Montreal, McGill University, 2021. https://www.mcgill.ca/centre-montreal/files/centre-montreal/aipolicyandfunding_report_updated_mar5.pdf

Caplan, Lindsay, “The Social Conscience of Generative Art”, Art in America, 2020. https://www.artnews.com/art-in-america/features/max-bense-gustav-metzger-generative-art-1202674265/

Chan, A., Okolo, C. T., Terner, Z., Wang, A., “The Limits of Global Inclusion in AI Development”, Association for the Advancement of Artificial Intelligence, 2021.

Choi B.C., Pak A. W. “Multidisciplinarity, interdisciplinarity and transdisciplinarity in health research, services, education and policy: 1. Definitions, objectives, and evidence of effectiveness”, Clin Invest Med., 2006 Dec;29(6):351-64. PMID: 17330451.

Chomsky, N., “Noise: Noam Chomsky interviewed by Fred Branfman”, Hotwired, 1997. https://chomsky.info/199702__/

Colclough, C., “From whistleblower laws to unions: How Google’s AI ethics meltdown could shape policy”, 2020. https://venturebeat.com/2020/12/16/from-whistleblower-laws-to-unions-how-googles-ai-ethics-meltdown-could-shape-policy/

Dellinger, A. J., “How the Biggest Tech Companies Spent Half a Billion Dollars Lobbying Congress,” Forbes, 2019 https://www.forbes.com/sites/ajdellinger/2019/04/30/how-the-biggest-tech-companies-spent-half-a-billion-dollars-lobbying-congress/?sh=3b64df857c96

D’Ignazio, Catherine, and Klein, Lauren, F., Data Feminism, MIT Press, 2020.

Du Vignaux, Maÿlis Merveilleux, Léger, P-M., Charland, P., Salame, Y., Durand, E., Bouillot, N., Pardoen, M., Sénécal, S., “An Exploratory Study on the Impact of Collective Immersion on Learning and Learning Experience”, Multimodal Technol. Interact. 5(4), 17, 2021. https://doi.org/10.3390/mti5040017

Ellis, Carolyn, and Flaherty, M. G. (Eds.), “Investigating Subjectivity: Research on Lived Experience”, American Psychological Association, 1992. https://psycnet.apa.org/record/1992-97468-000

Eubanks, Virginia, Automating Inequality, How High-Tech Tools Profile, Police and Punish the Poor, St-Martin’s Press, 2018.

Haralabopoulos, G., Tsikandilakis, M., Torres, M. T., & Mcauley, D., “Objective Assessment of Subjective Tasks in Crowdsourcing Applications”. Proceedings of the 12th Language Resources and Evaluation Conference, 2020. https://nottingham-repository.worktribe.com/output/4554093/objective-assessment-of-subjective-tasks-in-crowdsourcing-applications

Gagné, J.F., “Global AI Talent Report 2019”, 2019. https://jfgagne.ai/talent-2019/

Goddard, Valentine, Art Impact Report, https://a07cf5.a2cdn1.secureserver.net/wp-content/uploads/2021/06/Art-Impact-AI-Observations-and-recommendations.pdf

Goddard, V., Harris, D., Reyes, J., Taurino, G., Ratté, S., Kersten-Oertel, M., “Algorithmic Art to Counter Gender Bias in Artificial Intelligence: Changing AI’s Mis-pear-ceptions of Us”, Transformations Journal, submitted for publication, 2021.

Information and Communications Technology Council, “Responsible Innovation in Canada and Beyond:  Understanding and Improving the Social Impacts of Technology”, 2021. https://www.ictc-ctic.ca/wp-content/uploads/2021/01/ICTC_Report_SocialImpact_Print.pdf

Jo, Eun Seo, and Gebru, T., “Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning”, Conference on Fairness, Accountability, and Transparency (FAT* ‘20), January 27–30, 2020, Barcelona, Spain. 2020. https://arxiv.org/abs/1912.10389

Jobin, Anna, Ienca, M., Vayena, E., “The Global landscape of AI Ethics guidelines”, Nature Machine Intelligence, 2019. https://www.nature.com/articles/s42256-019-0088-2

Jordan, C., “International Policy Standards: An Argument for Discernment”, 2018, CIGI Policy Brief No.135. https://ssrn.com/abstract=3258591

Lee, Min Kyung, Kusbit, D., Kahng, A., Kim, J.T., Yuan, X., Chan, A., See, D., Noothigattu, R., Lee, S., Psomas, A., and Procaccia, A.D., “WeBuildAI: Participatory Framework for Algorithmic Governance”, Proc. ACM Hum.-Comput. Interact., 3, CSCW, Article 181, 2019. https://doi.org/10.1145/3359283

Lewis, Jason Edward, (ed.), Indigenous Protocol and Artificial Intelligence Position Paper. Honolulu, Hawaiʻi: The Initiative for Indigenous Futures and the Canadian Institute for Advanced Research (CIFAR), 2020.

Luccioni, A., and Bengio, Y., “On the Morality of Artificial Intelligence”, December 2019, https://arxiv.org/pdf/1912.11945.pdf

Maruska, Jennifer Heeg, “Feminist Ontologies, Epistemologies, Methodologies, and Methods in International Relations”, Oxford University Press, 2010. https://doi.org/10.1093/acrefore/9780190846626.013.178

Nature, “Is Science Only for the Rich?”, Nature, 2016. https://www.nature.com/news/is-science-only-for-the-rich-1.20650#/elite

Nefresh-Clarke, L., Orser, B., Thomas, M., “COVID-19 Response Strategies, Addressing Digital Gender Divides”, Frontiers in Psychology, 2020.

Noble, Safiya Umoja, Algorithms of Oppression: How Search Engines Reinforce Racism, New York University Press, 2018.

WIPO, World Intellectual Property Report, 2019. https://www.wipo.int/wipr/fr/

O’Neil, Cathy, Weapons of Math Destruction, Crown Books, 2016. https://www.programmer-books.com/weapons-of-math-destruction-pdf/

O’Neill, S.J., Hulme, M., Turnpenny, J., Screen, J., “Disciplines, Geography, and Gender in the Framing of Climate Change”, Bulletin of the American Meteorological Society, 2010.

OpenAI, “DALL·E: Creating Images from Text”, January 2021. https://openai.com/blog/dall-e/

Salas, Carmen, “What should we expect from art in the next few years/decades? What is Art Anyway?”, Medium, May 2020, https://medium.com/@CarmenSP/what-should-we-expect-from-art-in-the-next-few-years-decades-and-what-is-art-anyway-be9f75c3d1ae

Scheuerman, M. K., Denton, E., Hanna, A., “Do Datasets Have Politics? Disciplinary Values in Computer Vision Dataset Development”, August 2021, https://arxiv.org/pdf/2108.04308.pdf

Smith, D.G., Rosenstein, J.E., and Nikolov, M.C., “The Different Words We Use to Describe Male and Female Leaders”, Harvard Business Review, 2018. https://hbr.org/2018/05/the-different-words-we-use-to-describe-male-and-female-leaders

Smith, H. J.L., Ginley, B., Goodwin, H., “Beyond Compliance? Museums, Disability and the Law”, Museums, Equality and Social Justice, Routledge, 2012.

Sommer, L.K. and Klöckner, C.A. “Does activist art have the capacity to raise awareness in audiences? A study on climate change art at the ArtCOP21 event in Paris”. Psychology of Aesthetics, Creativity, and the Arts (2019)

Suresh, H., and Guttag, J., “A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle”, June 2021, https://arxiv.org/pdf/1901.10002.pdf

Thorp, Jer, Living in Data: A Citizen’s Guide to a Better Information Future, MCD, 2021.

UNDESA, “Socially just transition towards sustainable development: The role of digital technologies on social development and well-being of all”, August 2020. https://www.un.org/development/desa/dspd/2020-meetings/socially-just-transition-digital-technologies.html

UNESCO, “International Year of Creative Economy for Sustainable Development”, 2021. https://fr.unesco.org/commemorations/international-years/creativeeconomy2021

West, Sarah, M., Whittaker, M., Crawford, K., “Discriminating Systems: Gender, Race and Power in AI”, AI Now Institute, 2019 https://ainowinstitute.org/discriminatingsystems.html

Wurtzler, Steve J., Electric Sounds, Technological Change and the Rise of Corporate Mass Media, Columbia University Press, 2007.

Xu, T., Zhang, P., Huang, Q., Zhang, H., Gan, Z., Huang, X. and He, X. “AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks”. 2018. https://arxiv.org/abs/1711.10485

Yuan, Yuan, “Exploring Gender Imbalance in AI: Numbers, Trends, and Discussions”, Synced, 2020, https://syncedreview.com/2020/03/13/exploring-gender-imbalance-in-ai-numbers-trends-and-discussions/

 

Addendum 

The digital art prints created from selected words in the PearAI dataset have allowed for some preliminary observations. These words, submitted to the image generation model, in turn raise questions about existing algorithmic models and the datasets on which they still rely.

Figure 2: Word clouds representing the words collected by the PearAI.Art application. Word cloud by Marta Kersten-Oertel.
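For readers who want to reproduce this kind of figure, here is a minimal sketch using the open-source wordcloud package on a stand-in word list; the chapter does not specify the tooling actually used for Figure 2.

```python
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Stand-in sample; in practice this would be the full PearAI dataset.
words = ["strength", "resilience", "creativity", "empathy",
         "ocean", "tree", "mountain", "benevolence"]

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate(" ".join(words))

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.savefig("pearai_wordcloud.png", bbox_inches="tight")
```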

Digital print 1: Woman on the Moon

The word “moon” did not generate an image on its own when fed into the algorithmic model, but the word “vagina” did render an image whose shape matched the anatomical reality fairly closely, topped by what appears to be a human head.

Figure 3: Image generated by the AttnGAN model using Runway ML with the phrase “The Woman on the Moon”; digital editing and painting by Valentine Goddard.

Digital print 2: The beauty of imperfection

In the PearAI dataset, the word most often used to define “Imperfection” is “Beauty”, hence the title of this second digital print.

Figure 4: Image generated by the AttnGAN model on Runway ML using the phrase “Imperfectly beautiful woman”; digital editing and painting by Valentine Goddard.