Lentis/Algorithmic bias by gender

Algorithmic bias refers to “systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one group over another”. It can emerge in programs designed to learn from large datasets and make predictions on new data. Algorithmic bias can appear in technologies like facial recognition, self-driving cars, and resume analyzers, where algorithms may unintentionally favor certain groups due to imbalances in their training data.[1]

Gender Bias in AI

Gender bias in artificial intelligence (AI) arises when algorithms are trained on data that reflect the demographics of male-dominated fields, meaning the data primarily capture male experiences and can skew algorithmic outputs.[2] This bias occurs when an algorithm produces systematically unfair outcomes based on gender, often because of imbalances in the training data or biased practices within the industry.[3] For instance, the lack of female representation in medical research data can result in AI systems that provide less accurate diagnoses and treatment suggestions for women.[3] Similarly, hiring algorithms in technology fields may favor male applicants due to the industry's demographic composition, inadvertently reinforcing gender inequalities in hiring decisions.[4]

Algorithmic vs AI Bias

The difference between algorithmic and AI bias is subtle but significant. An algorithm is a set of instructions given to a computer that produces an output based on an input. Algorithms span a broad range of complexity, from sorting numbers to recommending social media content. In contrast, AI systems simulate human intelligence to make decisions and can adapt to new situations.[5] Thus, the decision-making process of an algorithm is explicitly defined, whereas it is less explicit in AI systems. The biases in algorithms and AI come from different sources but can manifest in similar applications.

Sources of Bias

Bias in AI stems from many sources, each impacting how algorithms interpret and respond to data. Some biases are unintended and arise from unbalanced datasets, while others mirror long-standing social inequalities.[2] AI training data can include biased human decisions or reflect historical and social inequalities. These human biases often underlie algorithmic biases, which influence decision-making. For instance, researchers at MIT showed that facial analysis technologies have higher error rates for minorities and women, a finding attributed to unrepresentative training data: most of the training images were of white men.[6]

Another source of bias is intentionally encoded bias, particularly in Natural Language Processing (NLP) models that use "word embeddings" to process text. Word embeddings are mathematical representations of words that capture semantic relationships by analyzing word patterns in large text datasets.[7] This technique enables algorithms to identify word associations based on language use and frequency, but because embeddings are learned from the statistical, grammatical, and semantic patterns of human writing, they often encode the biases present in that writing. While accounting for human-like biases can improve model accuracy by aligning with human language patterns, it can also produce unintended effects in applications reliant on these models.[7]
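
The associations that word embeddings absorb can be probed directly. The sketch below is a minimal illustration, assuming the gensim library and the publicly available "glove-wiki-gigaword-100" vectors (illustrative choices, not models named in the cited research); it compares how close occupation words sit to "he" versus "she".

```python
# Minimal sketch of probing gender associations in pretrained word embeddings.
# Assumes gensim and the public GloVe vectors "glove-wiki-gigaword-100"
# (illustrative choices, not taken from the cited studies).
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # downloads the vectors on first use

def gender_association(word):
    """Positive: closer to 'he'; negative: closer to 'she'."""
    return model.similarity(word, "he") - model.similarity(word, "she")

for occupation in ["engineer", "programmer", "doctor", "nurse", "homemaker"]:
    score = gender_association(occupation)
    leaning = "male-leaning" if score > 0 else "female-leaning"
    print(f"{occupation:12s} {score:+.3f}  {leaning}")
```

Published audits use curated word lists and statistical tests, but the underlying comparison is the same.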

Bias in Generative AI

Most generative AI models are trained on data scraped from the internet, but the information available online is not representative and is full of biases. Unless companies take active measures to curate their training data, the AI models they create will perpetuate and exacerbate those biases.[8] Machine learning is a field dominated by white men, who frequently do not see the importance of ensuring their models are representative; women make up only 20% of technical employees in machine learning.[9]

Unless it is purposefully removed, these training sets contain a large amount of sexual and pornographic material depicting women. One reporter tested a model that creates avatars based on headshots: she received hypersexualized images, while her male colleagues' avatars were of astronauts or explorers. Of the 100 generated avatars, 16 were fully nude and 14 were scantily clad or posed suggestively.[8][10] Additionally, according to one report, "of 15,000 deepfake videos it found online, an astonishing 96 percent were non-consensual pornographic content featuring the swapped-in faces of women".[8]

Models trained on biased data will perpetuate the stereotypes they learn. Bloomberg used Stable Diffusion to generate images for seven "high-paying" jobs, seven "low-paying" jobs, and three types of criminals, and found that the model made gender and race disparities worse.[11] For example, 34% of judges in the United States are women, but only about 3% of the images generated with the keyword "judge" were of women.[11]

Generative AI companies are aware of this disparity, and some have tried to mitigate it. DALL-E 2 and DALL-E 3 add words to users' prompts to increase diversity: for example, the model may append words like "woman" or "black" to a user's prompt (without the user knowing) in order to produce more diverse images.[12] This solution does not fix the underlying biased training data and instead focuses on debiasing the output.
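
This "shadow prompting" approach can be illustrated in a few lines. The sketch below is a hypothetical, simplified version of such output-side debiasing: it appends a randomly chosen demographic descriptor to prompts that mention a person before they are sent to an image model. It is not OpenAI's actual implementation, whose trigger logic and word lists are not public.

```python
# Hypothetical sketch of output-side "shadow prompting": appending demographic
# descriptors to a user's prompt before image generation. This illustrates the
# idea only; the real systems' word lists and trigger logic are not public.
import random

DESCRIPTORS = ["woman", "man", "Black", "Asian", "Hispanic", "older", "younger"]
PERSON_WORDS = ["person", "doctor", "judge", "ceo", "engineer", "nurse"]

def augment_prompt(user_prompt, rng=random):
    """Append a random descriptor when the prompt appears to depict a person."""
    if any(word in user_prompt.lower() for word in PERSON_WORDS):
        return f"{user_prompt}, {rng.choice(DESCRIPTORS)}"
    return user_prompt

print(augment_prompt("a portrait of a judge in a courtroom"))
# e.g. "a portrait of a judge in a courtroom, woman"
```

Because the intervention happens between the user's prompt and generation, the skewed training distribution itself is left untouched, which is the limitation noted above.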

Case study

These biases can have severe consequences when applied to hiring, healthcare, and other critical areas. A well-known example involves Amazon's AI-driven hiring tool, which inadvertently reinforced gender stereotypes.[4] In 2018, Amazon stopped using a hiring algorithm after discovering that it was biased toward selecting male applicants. The model was trained on data from previous hiring cycles, and given the gender imbalance in tech, most of the candidates who had been hired were men. As a result, the model learned to favor words and qualifications associated with men, and it penalized candidates who included the word "women's" (as in "women's chess club captain") in their resume or who attended all-women's colleges.[4] By not ensuring equal gender representation in the data, Amazon created an AI that further reinforced the gender imbalance in its field.
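
The mechanism at work in the Amazon case, a model absorbing proxies for gender from skewed historical outcomes, can be reproduced in miniature. The sketch below trains a simple scikit-learn text classifier on a tiny, invented dataset in which past hires skew male; the learned weight on the token "women" comes out negative. The data and model are illustrative only and are not Amazon's system.

```python
# Toy illustration (not Amazon's system): a resume classifier trained on
# skewed historical hiring outcomes learns to penalize the token "women".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training data: label 1 = hired, 0 = rejected; past hires skew male.
resumes = [
    "chess club captain, python developer",              # hired
    "software engineer, hackathon winner",               # hired
    "python developer, football team captain",           # hired
    "women's chess club captain, python developer",      # rejected
    "women's coding society lead, software engineer",    # rejected
    "graduate of a women's college, hackathon winner",   # rejected
]
labels = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
clf = LogisticRegression().fit(X, labels)

idx = vectorizer.vocabulary_["women"]   # "women's" is tokenized as "women"
print(f"learned weight for 'women': {clf.coef_[0][idx]:+.2f}")  # negative
```

The same dynamic, scaled up to real resumes and richer features, is what Amazon's team reportedly observed before retiring the tool.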

Social Perspectives

Many different social groups are involved in the effort to eradicate gender bias in AI algorithms, each with its own agenda, approach, and initiatives.

Corporations

Technology Companies

Tech companies such as Google, Apple, and Facebook make most of the progress on the latest machine learning models, which are then open-sourced for the world to use. When these models contain bias, that bias propagates to many applications. For example, the training data behind Google's Switch Transformer and T5 models, which other businesses use for NLP applications, was shown to have "been extensively filtered" in a way that removed writing by Black and Hispanic authors, as well as materials related to gay, lesbian, and other minority identities.[13] This affected countless NLP applications without the developers of those applications being aware. Activists argue that responsibility for building unbiased AI therefore rests with a small number of engineers at the largest tech firms, not the broader public. Corporations often publicly commit to addressing algorithmic bias.[14][15] Algorithmic bias negatively affects AI performance, and these companies recognize the necessity of mitigating this impact.

Scale AI

Scale AI is a company founded in 2016 by Alexandr Wang and Lucy Guo that provides training data for AI and machine learning systems.[16] Scale AI provides labeled datasets to help organizations build AI systems; notable customers include Microsoft, Meta, OpenAI, Nvidia, the US Army, the US Air Force, and General Motors. Scale AI offers services for labeling data such as images, text, videos, and 3D sensor data for applications like autonomous vehicles, robotics, NLP, defense, and more. The company reached a valuation of over $14 billion in May 2024.

To keep up with the labeling demand while preserving profit margins, Scale AI established Remotasks in 2017. Fairwork, which evaluates companies offering web-based labor based on five principles (fair pay, conditions, contracts, management, and representation), gave Remotasks a score of 1 out of 10 in 2023. [17] Remotasks exploits its workers through low wages, poor transparency, delayed and withheld payments, and inconsistent work. Thus, Scale AI needs drastic changes in its ethics and priorities before substantial progress toward ethical and accountable AI can be made.

Scale AI and other companies that provide labeled training data for AI development are among the few groups that can debias AI systems.

Social Groups

Feminist Data Manifest-No

The Feminist Data Manifest-No is a manifesto committed to the cause of "Data Feminism", a movement that aims to change how data is collected, used, and understood so that it better aligns with the ideals of gender equality espoused in modern-day feminism.[18][19]

The Manifest-No consists of a series of points, each broken into a refusal and a commitment. The refusal rejects a currently established idea about data and algorithms, prefaced with the words "we refuse"; the commitment then offers a rebuttal to that idea in the form of a new standard embraced by the authors, prefaced with the words "we commit". This language creates a sense of community within the manifesto and enables the authors to clearly lay out both their understanding of the current world and their vision for a better one. The Manifest-No, while not always explicitly used or cited, forms a basis for many modern approaches to combating gender bias in algorithms.

Algorithmic Justice League

The Algorithmic Justice League (AJL) is an advocacy organization founded by Dr. Joy Buolamwini in 2016. Its mission is "to raise public awareness about the impacts of AI, equip advocates with resources to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to prevent AI harms."[20] The project is funded by Allied Media Projects, a nonprofit that supports media-based initiatives for social justice.[21] The AJL advocates for equitable and accountable AI through a combination of art and research, raising awareness and educating individuals, institutions, and policymakers through research, storytelling, and creative media that expose discriminatory practices in AI systems.

The AJL pursues its strategy through many types of initiatives, including research, talks, events, advocacy, press releases, exhibitions, educational resources, and other projects. One such initiative is the film Coded Bias, available on Netflix, a call to action that uses anecdotes to demonstrate how civil rights and democracy are threatened by AI.[22] Another was the study by Joy Buolamwini and Timnit Gebru titled Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, which evaluated the accuracy of AI-powered gender classification algorithms from IBM, Microsoft, and Face++ and concluded that all three classifiers performed best for lighter-skinned individuals and males and worst for darker-skinned individuals and females.[23]
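
The kind of audit performed in Gender Shades, reporting a classifier's accuracy per intersectional subgroup rather than as one aggregate number, can be expressed compactly. The sketch below is a generic illustration with hypothetical field names and made-up records; it is not the study's actual evaluation code.

```python
# Generic sketch of an intersectional accuracy audit in the spirit of Gender
# Shades: accuracy per (gender, skin type) subgroup instead of one overall
# figure. Field names and records are hypothetical.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: dicts with 'gender', 'skin_type', and boolean 'correct' keys."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        key = (r["gender"], r["skin_type"])
        totals[key] += 1
        hits[key] += int(r["correct"])
    return {key: hits[key] / totals[key] for key in totals}

sample = [
    {"gender": "female", "skin_type": "darker",  "correct": False},
    {"gender": "female", "skin_type": "lighter", "correct": True},
    {"gender": "male",   "skin_type": "darker",  "correct": True},
    {"gender": "male",   "skin_type": "lighter", "correct": True},
]
for group, accuracy in sorted(subgroup_accuracy(sample).items()):
    print(group, f"{accuracy:.0%}")
```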

Research Institutions

Berkeley Haas Center for Equity, Gender, and Leadership

The Berkeley Haas Center for Equity, Gender, and Leadership has written a large playbook for business leaders on mitigating algorithmic bias within their companies.[24] The playbook consists of overviews and deep dives into the issue of bias in AI algorithms and how businesses can address it. Its existence reflects the authors' belief that effective mitigation must come from the top down, though they acknowledge that grassroots organizers can provide effective incentives for change even when they lack the power to implement it themselves.

To this end, the Berkeley Haas Center for Equity, Gender, and Leadership maintains a public list of gender-biased algorithms that companies have used over the years. Because biased algorithms attract negative press, publicizing each occurrence is meant to encourage companies to correct and avoid bias.[25]

The Lamarr Institute for Machine Learning and Artificial Intelligence

The Lamarr Institute is a German group that researches and develops ethical AI, focused on ensuring that AI in the European Union is trustworthy and used responsibly.[26] They believe there needs to be a paradigm shift in the AI space; their research focuses on integrating knowledge, data, and context to create AI systems that benefit people and organizations.[27]

On their blog, the Lamarr Institute wrote about the ethical use of training data, fairness, and data protection, outlining key steps for detecting and preventing bias. Before training begins, developers should ensure they have enough representative data, consider the ethical implications of letting AI make the intended decisions, and assess the potential impacts of the system. Additional best practices throughout the development process include scrutinizing data collection, using diverse test sets, cross-validation, avoiding overfitting, and continuous monitoring. The blog post also discusses the General Data Protection Regulation, a set of strict guidelines that regulate the collection, processing, storage, and sharing of personal data in the European Union.[28]
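
The first of those pre-training checks, confirming that each group is sufficiently represented in the data before any model is fit, amounts to a simple audit of group proportions. The sketch below is a minimal, hypothetical version of such a check; the field values and the 30% threshold are illustrative assumptions, not Lamarr Institute guidance.

```python
# Minimal sketch of a pre-training representativeness check: flag any group
# whose share of the dataset falls below a threshold. The 30% threshold and
# the sample data are illustrative assumptions, not official guidance.
from collections import Counter

def representation_report(group_labels, minimum_share=0.30):
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {
        group: (n / total, "OK" if n / total >= minimum_share else "UNDER-REPRESENTED")
        for group, n in counts.items()
    }

training_genders = ["male"] * 800 + ["female"] * 200   # made-up dataset
for group, (share, status) in representation_report(training_genders).items():
    print(f"{group:7s} {share:.0%}  {status}")
```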

Proposed Solutions and Future Directions

Addressing gender bias in AI requires a multifaceted approach that includes improving data representation, implementing fairness algorithms, fostering diversity in AI development, and ensuring ongoing monitoring and accountability.

One of the most effective ways to reduce gender bias in AI is to ensure the training data used by algorithms are more representative of diverse groups. As bias often arises from unbalanced datasets, it is crucial to gather data that better reflects the diversity of the populations that AI systems will serve. Researchers and developers should prioritize data collection practices that include underrepresented groups, ensuring that women, minorities, and other marginalized communities are adequately represented in the datasets used for training AI systems. Ensuring diverse datasets helps avoid unintentional exclusion or bias in AI decision-making processes, particularly in fields like healthcare and hiring where these biases can have severe real-world consequences [5],[6].

Transparency and accountability are also essential in mitigating algorithmic bias. Developers must be more transparent about how algorithms are built, what data they use, and the criteria for their decisions. Public awareness and open-access tools that allow others to review and audit algorithms could help uncover hidden biases and promote accountability among tech companies. Furthermore, creating frameworks for algorithmic accountability—where the creators of algorithms are held responsible for the biases in their models—can incentivize businesses to adopt more ethical practices [7].

Additionally, fairness algorithms must be integrated into AI systems to actively detect and correct for bias during the development process. Techniques such as regular audits, bias detection tools, and fairness constraints can be used to ensure that AI systems provide equitable outcomes. These measures should be implemented throughout the AI lifecycle, from data collection and model training to deployment and real-world application. Cross-validation, diverse test sets, and continuous monitoring are key strategies in identifying and addressing any emerging biases after the model is launched [8],[9].
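
Such audits typically rest on a handful of standard fairness metrics. The sketch below computes two of the most common for a binary classifier, the demographic parity difference (the gap in positive-prediction rates between groups) and the true-positive-rate gap used in equal-opportunity audits; it is a generic illustration, not any particular vendor's tool.

```python
# Generic sketch of two common bias-audit metrics for a binary classifier:
# demographic parity difference and true-positive-rate (equal opportunity) gap.
def rate(values):
    return sum(values) / len(values) if values else 0.0

def audit(y_true, y_pred, group):
    """y_true, y_pred: 0/1 lists; group: group label per example (e.g. 'f'/'m')."""
    selection, tpr = {}, {}
    for g in sorted(set(group)):
        idx = [i for i, gg in enumerate(group) if gg == g]
        selection[g] = rate([y_pred[i] for i in idx])       # selection rate
        positives = [i for i in idx if y_true[i] == 1]
        tpr[g] = rate([y_pred[i] for i in positives])       # true positive rate
    return {
        "demographic_parity_diff": max(selection.values()) - min(selection.values()),
        "equal_opportunity_gap": max(tpr.values()) - min(tpr.values()),
    }

# Made-up audit data: the model selects men more often at equal qualification.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]
group  = ["m", "m", "m", "m", "f", "f", "f", "f"]
print(audit(y_true, y_pred, group))
```

Thresholds for acceptable gaps are a policy choice; the metrics only make the disparity visible so that it can be monitored over the model's lifecycle.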

Promoting diversity within the AI development teams is another critical factor in mitigating bias. Research indicates that a lack of diverse perspectives in tech teams often leads to biased AI systems. By hiring more women and individuals from diverse backgrounds, companies can ensure that the AI systems they create are more inclusive and less likely to perpetuate harmful stereotypes. Promoting diversity within AI research and development not only leads to fairer outcomes but also encourages innovation in the field [5],[10].

Lastly, collaboration between researchers, industry leaders, and policymakers is necessary to create ethical standards and regulations that govern AI. Government policies should be updated to reflect the challenges posed by algorithmic bias, and organizations like the Lamarr Institute emphasize the importance of establishing strict ethical guidelines to promote fairness in AI systems [11],[12]. Such standards can set limits for bias in all sectors and ensure that AI is used responsibly, in line with human rights and social justice principles.

These strategies should be viewed as part of an ongoing effort to tackle algorithmic bias.

Conclusion and further steps

As AI continues to shape decision-making processes across industries, addressing algorithmic bias is crucial to achieving equitable outcomes in society.[1] Tech companies and organizations should ensure that the researchers and engineers who collect data and design AI algorithms are aware of potential bias against minorities and underrepresented groups. Some companies appear to be taking steps in this direction; for example, Google has published a set of guidelines for AI use both internally and for businesses that use its AI infrastructure.[29]

It is imperative to study both the anticipated and unforeseen consequences of using artificial intelligence algorithms in a timely manner, especially since current government policies may not be sufficient to identify, mitigate, and eliminate the effects of such subtle bias in legal contexts. Solving algorithmic bias solely by technical means will not produce the desired results. The global community has begun considering standardization and the development of ethical principles to establish a framework for the equitable use of artificial intelligence in decision-making, and special rules are needed that set limits for algorithmic bias in all sectors.

References

  1. a b Siau, K., & Wang, W. (2020). Artificial Intelligence (AI) ethics. Journal of Database Management, 31(2), 74–87. https://doi.org/10.4018/jdm.2020040105
  2. a b Smith, G., & Rustagi, I. (2021, March 31). When good algorithms go sexist: Why and how to advance AI gender equity (SSIR). Stanford Social Innovation Review: Informing and Inspiring Leaders of Social Change. https://ssir.org/articles/entry/when_good_algorithms_go_sexist_why_and_how_to_advance_ai_gender_equity.
  3. a b Akter, S., Dwivedi, Y. K., Biswas, K., Michael, K., Bandara, R. J., & Sajib, S. (2021). Addressing algorithmic bias in AI-Driven Customer Management. Journal of Global Information Management, 29(6), 1–27. https://doi.org/10.4018/jgim.20211101.o
  4. a b c Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
  5. a b c ashutosh (2024-03-05). "How Do AI and Algorithms Different From Each Other?". SDLC Corp. Retrieved 2024-12-10.
  6. a b Manyika, J., Presten, B., & Silberg, J. (2019, October 25). What do we do about the biases in AI? Harvard Business Review. https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.
  7. a b c Caliskan, A. (2021, May 10). Detecting and mitigating bias in Natural Language Processing. Brookings. https://www.brookings.edu/research/detecting-and-mitigating-bias-in-natural-language-processing/
  8. a b c d Lamensch, Marie. "Generative AI Tools Are Perpetuating Harmful Gender Stereotypes". Centre for International Governance Innovation. Retrieved 2024-11-15.
  9. a b "Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes". UNESCO. 5 July 2024.
  10. a b Heikkilä, Melissa (December 12, 2022). "The viral AI avatar app Lensa undressed me—without my consent". MIT Technology Review. Retrieved 2024-11-15.
  11. a b c Nicoletti, Leonardo; Bass, Dina (June 9, 2023). "Humans Are Biased. Generative AI Is Even Worse". Bloomberg.com. https://www.bloomberg.com/graphics/2023-generative-ai-bias/.
  12. a b Salvaggio, Eryk (2023-10-19). "Shining a Light on "Shadow Prompting" | TechPolicy.Press". Tech Policy Press. Retrieved 2024-11-15.
  13. Anderson, M. (2021, September 24). Minority voices 'filtered' out of google natural language processing models. Unite.AI. https://www.unite.ai/minority-voices-filtered-out-of-google-natural-language-processing-models/
  14. Google. (n.d.). Our principles. Google AI. https://ai.google/principles/
  15. Meta. (2021, April 8). Shedding light on fairness in AI with a new data set. Meta AI. https://ai.facebook.com/blog/shedding-light-on-fairness-in-ai-with-a-new-data-set/.
  16. "About Us | Scale AI". scale.com. Retrieved 2024-12-10.
  17. "Cloudwork". fair.work. Retrieved 2024-12-10.
  18. Cifor, M., & Garcia, P. (n.d.). Feminist data manifest. Manifest-No. https://www.manifestno.com/.
  19. Cifor, M., & Garcia, P. (n.d.). Feminist data manifest. Full Version of Manifest-No. https://www.manifestno.com/home.
  20. "Mission, Team and Story - The Algorithmic Justice League". www.ajl.org. Retrieved 2024-12-10.
  21. "About". Allied Media Projects. Retrieved 2024-12-10.
  22. "VIRTUAL CINEMA". CODED BIAS. Retrieved 2024-12-10.
  23. "Gender Shades". gendershades.org. Retrieved 2024-12-10.
  24. Smith, G., & Rustagi, I. (2020, July). Mitigating bias. Haas School of Business, Berkeley Haas UCB Playbook. https://haas.berkeley.edu/wp-content/uploads/UCB_Playbook_R10_V2_spreads2.pdf.
  25. Bias in AI: Examples Tracker https://docs.google.com/spreadsheets/d/1eyZZW7eZAfzlUMD8kSU30IPwshHS4ZBOyZXfEBiZum4/edit#gid=1838901553
  26. "About the Lamarr Institute for Machine Learning and Artificial Intelligence". Lamarr Institute for Machine Learning and Artificial Intelligence. 2024-02-21. Retrieved 2024-12-08.
  27. "Research on ML and AI » Lamarr Institute". Lamarr Institute for Machine Learning and Artificial Intelligence. 2024-02-12. Retrieved 2024-12-08.
  28. Dethmann, Thomas; Spiekermann, Jannis (2024-07-03). "Ethical Use of Training Data: Ensuring Fairness & Data Protection in AI". Lamarr Institute for Machine Learning and Artificial Intelligence. Retrieved 2024-12-08.
  29. Responsible AI practices. Google AI. (2021, October). https://ai.google/responsibilities/responsible-ai-practices