Auxane Boch & Alexander Kriebitz, both AI ethicists, share their views on diversity in AI
Authors: Auxane Boch & Alexander Kriebitz
Dr. Alexander Kriebitz is an AI ethicist, political scientist, postdoctoral researcher at the Technical University of Munich, and co-founder of iuvenal research GmbH. Alexander's work focuses on the intersection of international law, business ethics and international relations. His research is mainly concerned with the impact of artificial intelligence on human rights and the ethical handling of technological and economic exchange with authoritarian regimes.
Auxane Boch is an AI ethicist, cyberpsychologist and doctoral candidate at the Technical University of Munich (TUM). Auxane's work involves human- and society-centred ethical evaluation of interactive technologies from a human-computer or human-robot interaction perspective, exploring the impact of artificial intelligence on people's general health, well-being and behaviour. She also has expertise in cultural AI ethics and AI governance. She works closely with Women in AI and is a Women in Games ambassador, as her expertise extends to video game ethics and psychology.
Diversity in AI seems to remain a huge concern today. To understand what it is all about, could you first tell us what diversity means exactly?
Diversity encompasses many dimensions, and its exact understanding can be contentious. Nevertheless, existing definitions of diversity share an essential commonality: they emphasise the distinctions within a group and are context-dependent. Further, diversity also stands for raising awareness of individual differences and recognising them as qualities that enhance society.
According to the American Psychological Association (APA), building on psychological research including but not limited to age, gender and cultural studies, diversity encompasses a broad spectrum of factors, including age, biological sex, gender identity, sexuality, race, ethnicity, nationality, religion, education, livelihood, ability, and marital status, among others. These characteristics are mostly demographic, but in a work context they can also be professional, such as diverse backgrounds in training and experience. In other words, a “diverse team” could consist of people with different nationalities, gender identities, political views and life experiences.
Beyond its conceptual and practical dimensions, diversity carries significant ethical implications. It speaks to the aspiration of representing society as a whole, particularly in positions of power, societal influence or wealth, but also in areas dedicated to attention and care.
Now that we have a clearer idea of what diversity is, could you tell us what are the key aspects we need to understand to grasp the concept of diversity?
Diversity has many contextual layers, particularly when determining which group characteristics are the most important to consider in a given context. These priorities continuously evolve, reflecting societal changes and historical patterns of discrimination, including injustices such as Apartheid or slavery. That being said, the focus of diversity lies in including groups in society that have been historically marginalised and that often remain largely underrepresented in given positions or contexts.
Furthermore, the concept of intersectionality plays a crucial role in understanding diversity. Individuals possess multiple identities and characteristics that intersect, making them unique and defying simplistic categorisations. For instance, black females are likely to face different types of discrimination in the U.S. labour market than black males.
How does all that relate to the development and deployment of artificial intelligence?
The performance of artificial intelligence (AI) solutions relies heavily on analysing diverse datasets using statistical methods, as underscored by the European Union's Annex to the AI Act (AIA). At its core, an AI system derives its outcomes from identifying patterns in data. Therefore, the quality of an AI system's underlying dataset is crucial for its effectiveness.
Diversity is pivotal for AI, mirroring the statistical foundations of the technology. In mathematical terms, diversity refers to the composition of a group and the representation of individuals within it. Statistical metrics can effectively measure the diversity or homogeneity of a group, which has significant implications for AI systems.
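To illustrate the kind of statistical metric referred to here (our example, not one the interviewees prescribe), Shannon entropy over group proportions gives a single number for how evenly groups are represented in a dataset; the group labels below are hypothetical.

```python
# Illustrative sketch only: one common way to quantify how evenly groups
# are represented in a dataset is Shannon entropy over group proportions.
from collections import Counter
import math

def shannon_diversity(labels):
    """Return Shannon entropy (in bits) of the group composition.

    0 means a fully homogeneous group; the maximum, log2(k) for k groups,
    means all groups are equally represented.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical nationality labels attached to training records.
sample = ["DE", "DE", "DE", "FR", "FR", "NG", "BR"]
print(f"Shannon diversity: {shannon_diversity(sample):.2f} bits")
print(f"Maximum for 4 groups: {math.log2(4):.2f} bits")
```

A low value relative to the maximum flags a homogeneous dataset before any model is trained on it.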
The use of AI implies that data analysis and decision-making are increasingly performed by autonomous processes rather than humans. This brings risks and opportunities for promoting diversity in sensitive social, economic and political domains. A key question is how AI impacts diversity in areas essential for societal participation.
The question of diversity within the AI context has also garnered attention in forthcoming legislation. Notably, NYC Local Law 144 exemplifies this focus by addressing concerns about bias in AI-facilitated recruitment processes.
Similarly, the European Union (EU) AIA emphasises measures to identify and mitigate biases in data management, but it also calls for diverse teams to develop AI solutions. Recognising the significance of diversity and its impact on the accuracy of AI solutions, the EU aims to establish guidelines and safeguards that promote fairness and transparency.
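To make this concrete, bias audits of AI-assisted hiring tools often compare selection rates across demographic groups. The sketch below is illustrative only, not the statutory method of Local Law 144 or the AIA; the group names and outcomes are hypothetical.

```python
# Illustrative: compare each group's selection rate against the
# highest-selected group's rate for a hypothetical screening tool.
from collections import defaultdict

def impact_ratios(decisions):
    """Selection rate per group, divided by the highest group's selection rate."""
    selected = defaultdict(int)
    totals = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outcomes produced by an AI recruitment tool.
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
print(impact_ratios(decisions))  # {'group_a': 1.0, 'group_b': 0.5}
```

A ratio well below 1.0 for a group is the kind of signal such audits and regulations are designed to surface.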
We’re talking here mainly about AI at large. Can you elaborate on the Data aspect?
The importance of diversity in AI becomes evident when considering the representation of individuals in the data sets that these systems analyse. AI solutions are not designed to consider all demographic groups equally, and this inherent bias can lead to significant performance disparities. An involuntary lack of diversity within a data set can result in what can be termed “discrimination”, or unequal treatment. In some cases, the lack of consideration for underrepresented groups might even be regarded as “racist” or “sexist”, particularly if it is a conscious decision not to cater to underrepresented groups.
For instance, biases within AI systems can lead to discrimination in various domains. In hiring, AI-driven recruitment tools trained on biased data can perpetuate existing disparities in employment opportunities. In healthcare, underestimating the health needs of underrepresented patients compared to their well-represented counterparts, despite the same level of risk, can have serious, even life-and-death, consequences.
In summary, diversity in data sets is not merely a theoretical concern but a practical imperative for AI systems. Failure to ensure it can result in biased outcomes and discrimination, both of which have far-reaching consequences in fields with a high impact on individuals' lives and can lead to unequal access to essential resources.
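One simple way to see how such disparities surface in practice is to compare an error rate per demographic group; the groups, labels and predictions below are entirely hypothetical and only sketch the idea.

```python
# Illustrative: uneven representation often shows up as uneven error rates.
def error_rate_by_group(records):
    """records: list of (group, true_label, predicted_label); returns error rate per group."""
    errors, totals = {}, {}
    for group, truth, pred in records:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + int(truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical risk-flagging predictions for two patient groups with equal risk.
records = [
    ("well_represented", 1, 1), ("well_represented", 0, 0), ("well_represented", 1, 1),
    ("underrepresented", 1, 0), ("underrepresented", 1, 1), ("underrepresented", 0, 1),
]
print(error_rate_by_group(records))
# e.g. {'well_represented': 0.0, 'underrepresented': 0.67}
```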
You mentioned “the representation of individuals in the data sets”. Doesn’t that raise question about privacy?
Balancing diversity and privacy in AI poses challenges. People may be reluctant to disclose personal data such as ethnicity or sexuality if they fear discrimination, given past misuse of such information. However, diversity requires more representative data.
We must strike a balance between safeguarding individuals through stringent privacy regulations and ensuring enough diversity to prevent bias when developing and deploying AI systems. Considering both privacy and diversity is complex but vital when building fair, transparent AI; in legislation, we are still searching for a way to navigate this trade-off.
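One possible, simplified way to navigate this trade-off (an assumption on our part, not a method proposed in the interview) is to collect demographic attributes for representativeness checks but only report group counts above a minimum size, so that small groups cannot be singled out; the threshold and labels below are hypothetical.

```python
# Illustrative: suppress small-group counts before reporting a dataset's composition.
from collections import Counter

MIN_GROUP_SIZE = 5  # hypothetical disclosure threshold

def reportable_composition(labels, min_size=MIN_GROUP_SIZE):
    """Group counts, with groups below the threshold folded into 'suppressed'."""
    counts = Counter(labels)
    report = {g: n for g, n in counts.items() if n >= min_size}
    suppressed = sum(n for n in counts.values() if n < min_size)
    if suppressed:
        report["suppressed"] = suppressed
    return report

labels = ["A"] * 12 + ["B"] * 7 + ["C"] * 2   # hypothetical self-reported groups
print(reportable_composition(labels))          # {'A': 12, 'B': 7, 'suppressed': 2}
```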
So you’re advocating for more diversity in teams working on AI systems?
Team diversity’s implications go beyond the development process and extend to understanding specific use cases, contexts, and firsthand experiences.
The AI industry faces a lack of representation from diverse groups, which hinders the development of AI tools. Diverse teams bring varied backgrounds and experiences, making them sensitive to different issues and able to design AI tools accordingly. Technical expertise is essential, but diverse perspectives provide unique viewpoints on data collection, use and privacy, for example. Involving human and social scientists, as well as individuals from diverse training and life paths, throughout the AI lifecycle promotes inclusivity, ethical considerations and targeted development for different populations.
Building diverse teams can be achieved through mentoring programs and actively seeking representation from intended users and stakeholders. Overall, diversity throughout the AI lifecycle leads to better-designed, ethical, and inclusive AI tools that address the needs of diverse populations while mitigating biases and inadequate implementations.