Information Technology and Ethics/Algorithmic Bias and Fairness


What is Algorithmic Bias


Algorithmic bias occurs when a computer system makes systematic, repeatable errors that create unfair outcomes or discriminate against particular people or groups. Often, the factors driving that discrimination are characteristics such as race, gender, and socioeconomic status. Bias can emerge in several places, including the design of the algorithm, uses of the algorithm that differ from its intended use, and the data used to train it. This bias can have a profound effect on the people it is applied to and can perpetuate societal inequalities.

History


Algorithmic bias was first described when Joseph Weizenbaum wrote about it in his 1976 book Computer Power and Human Reason. Weizenbaum first suggested that bias could manifest both through the data given to a program and through the way the program was written[1]. Because a program can only process data and reach decisions using the set of rules it was given, there is a concern that the program will carry the same biases and expectations as its writer. Weizenbaum also wrote that whenever data is fed into a program, the selection of that data is another instance of human decision-making. He pointed to a further way algorithmic bias shows itself: the writer's blind trust in the program. A writer who cannot understand the decision-making of the program, he argued, is like a tourist making his way through a hotel, turning left and right based on the flip of a coin[1]. Even if the solution ends up being correct, it is irreproducible and inconsistent.

An example of algorithmic bias from this era was the applicant screening program used by St. George's Hospital Medical School. If an applicant was female or had a "foreign-sounding" name, the program docked points from the application, giving white males a much higher chance of admission. The program was written by Dr. Geoffrey Franglen, who intended it to reduce discriminatory decisions and to make the initial application round easier[2]. Franglen thought that ceding the responsibility to a program would make the process both easier and fairer. Instead, he had coded the bias directly into the program, and it perpetuated the same bias that the human assessors had shown.

In the modern day, algorithms are more expertly built and often avoid much of the bias that a writer might accidentally introduce through rules or data, but algorithmic bias still appears, often in unpredictable ways that are hard to account for. Algorithms are frequently treated less as tools for achieving some end and more as authoritative figures that generate that end while presenting themselves as neutral means. Instead of using algorithms to study human behavior, they can become a way for human behavior to be defined[3]. In response, working groups co-created the group Fairness, Accountability, and Transparency in Machine Learning (FAT)[4]. Its members proposed reviewing the outcomes of algorithms and voting on whether particular algorithms have harmful effects and should be controlled or restricted. Many, however, think that FAT cannot serve effectively because many of its members are funded by large corporations.

Existing Frameworks and Regulations


Frameworks


When it comes to frameworks for artificial intelligence, there is an ever-growing number to choose from. Each differs at least slightly from the others because each is designed with a specific purpose in mind; for example, according to Intel, JAX is a framework "designed for complex numerical computations on high performance devices like GPUs and TPUs"[5]. Many other frameworks are available for nearly any project imaginable, and more are being created all the time. So far, ethics receives little explicit consideration within AI frameworks, since the technology is still new and evolving, but there are many potential areas for concern. Imagine, for example, an AI chatbot built on a framework that took no account of ethics. Through interactions with people, that chatbot could say things it should not say or expose information it was not supposed to spread. That could land a company in serious trouble, both from the incident itself and from the resulting damage to its reputation.
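
To make the flavor of such a framework concrete, the short sketch below (an illustrative example, not taken from Intel's materials; it assumes the jax package is installed and uses an invented loss function) differentiates and JIT-compiles a simple computation of the kind JAX is built to accelerate on GPUs and TPUs:

```python
# Minimal sketch of JAX-style numerical computation (hypothetical example;
# assumes the jax package is installed).
import jax
import jax.numpy as jnp

def mse_loss(w, x, y):
    # Mean squared error of a simple linear model y ~ x @ w.
    pred = x @ w
    return jnp.mean((pred - y) ** 2)

# grad() returns a function computing d(loss)/dw; jit() compiles it so it can
# run efficiently on accelerators such as GPUs or TPUs when available.
loss_grad = jax.jit(jax.grad(mse_loss))

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (8, 3))
y = jnp.ones(8)
w = jnp.zeros(3)
print(loss_grad(w, x, y))  # gradient of the loss with respect to the weights
```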

Regulations


With regard to the regulation of artificial intelligence, many states have passed legislation, much of it aimed at safeguarding data privacy and ensuring the accountability and transparency of AI. For example, according to Rachel Wright, "Texas HB 2060 (2023) is one such example. This bill established an AI advisory council consisting of public and elected officials, academics and technology experts. The council was tasked with studying and monitoring AI systems developed or deployed by state agencies as well as issuing policy recommendations regarding data privacy and preventing algorithmic discrimination."[6] There is also a Blueprint for an AI Bill of Rights, produced by the Office of Science and Technology Policy, which lays out the rights people should have when AI is being used. The rights it outlines are protection from unsafe systems, protection from algorithmic discrimination, data privacy measures, the right to know when AI is being used, and the right to opt out of its use[7].

Case Studies


Case Study 1: Predictive Policing: Chicago Police Department and the Strategic Subjects List[8]


Predictive policing algorithms use data analysis and machine learning to predict where crime is likely to occur and to distribute law enforcement resources accordingly. Advocates believe these systems can reduce crime and improve public safety, but critics fear they may be biased and infringe on civil liberties.

A study by Rudin et al. in 2020 examined the use of predictive policing algorithms in Chicago. The research found that these algorithms focused disproportionately on Black and Hispanic neighborhoods, resulting in unequal monitoring and policing of minority communities. Furthermore, the algorithms depended on historical crime data, which can mirror biases in policing practices and uphold systemic disparities.

Ethical Implications


The employment of predictive policing algorithms gives rise to ethical concerns regarding fairness, transparency, and accountability. Critics claim that these systems may worsen current inequalities in policing and erode trust between law enforcement and marginalized groups.

Public Debate and Reform Efforts


Groups advocating for civil rights, community organizations, and advocates are pushing for more transparency, community involvement, and accountability in the creation and use of predictive policing algorithms such as the SSL.

Legislation has been implemented in certain areas to oversee the use of predictive policing algorithms, with a focus on transparency, accountability, and preventing bias and discrimination.

The case of the Chicago Police Department's Strategic Subjects List (SSL) shows how predictive policing algorithms have intricate ethical and social consequences. Although these algorithms offer the potential to reduce crime and improve public safety, they also bring up important issues regarding transparency, accountability, fairness, and the risk of bias and discrimination. Dealing with these obstacles involves thoughtful reflection on the ethical principles and values that should steer the creation and utilization of predictive policing technologies to guarantee they advance justice, fairness, and the safeguarding of civil liberties.

Case Study 2: Amazon Hiring System


To assist in hiring top talent, Amazon created an AI-driven recruitment tool in 2014. The AI model was trained on resumes submitted to Amazon over a ten-year period, most of which came from men, since men predominate in the computing field. The system thus learned to devalue graduates of all-women's colleges and to penalize resumes containing the phrase "women's". Unable to fully fix the biased system, Amazon abandoned the project a few years later. The example demonstrated how, if not properly examined and corrected, AI algorithms can inherit and magnify social biases contained in their training data. It served as a warning about the ethical perils of using AI recruiting tools without sufficient testing and bias-mitigation measures, and it highlighted the need for a diverse workforce in AI and for thorough bias evaluations during development.
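
The underlying mechanism is easy to reproduce on toy data. The sketch below is purely illustrative and is not Amazon's system: the resumes and hiring labels are invented, but they show how a classifier trained on historically skewed decisions can assign a negative weight to a token such as "womens" simply because it rarely appears in past positive examples (assumes scikit-learn is installed):

```python
# Toy illustration of how skewed historical hiring data can teach a model to
# penalize a gendered token. Synthetic data; not Amazon's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, software engineering intern",          # hired
    "software engineering intern, hackathon winner",               # hired
    "robotics team lead, systems programming project",             # hired
    "captain of womens chess club, software engineering intern",   # rejected
    "womens coding society president, hackathon winner",           # rejected
    "volunteer tutor, retail experience",                          # rejected
]
hired = [1, 1, 1, 0, 0, 0]  # historical (biased) decisions

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the token "womens": it is negative because
# the token appears only in historically rejected resumes.
weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
print(weights["womens"])
```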

Challenges


Data Bias


Algorithms trained on data sets that are skewed, reflecting past biases or social disparities, will pick up on and reinforce those biases. ProPublica's 2016 analysis uncovered racial bias in the COMPAS software used in the American criminal justice system: the algorithm was more likely to misclassify Black defendants as high-risk compared with white defendants. This highlights a common issue, unbalanced data sets. If certain demographics are underrepresented in the training data, the algorithm may perform poorly for those groups.[9]
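
A first line of defense against this problem is a simple audit of the training data itself. The sketch below (synthetic rows and hypothetical column names, assuming pandas is available) tabulates how many examples and what base rate of favorable labels each group contributes:

```python
# Sketch: audit group representation and label base rates in training data
# (synthetic rows; the column names are hypothetical).
import pandas as pd

train = pd.DataFrame({
    "race":  ["black", "black", "black", "white", "white", "white", "white", "white"],
    "label": [1, 1, 0, 0, 0, 1, 0, 0],
})

audit = train.groupby("race")["label"].agg(n_examples="size", positive_rate="mean")
print(audit)
# Skewed counts or very different base rates between groups are early warnings
# that a model trained on this data may serve one group worse than another.
```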

Model Bias

Algorithmic Design Choices

The way algorithms are designed can introduce bias. Choices about which factors to include and how to weight them can lead to discriminatory outcomes, so even with unbiased data, models can still exhibit biases due to the algorithm's design or assumptions. In 2019, the U.S. Department of Housing and Urban Development (HUD) filed a charge against Facebook, accusing the company's advertising platform of enabling discrimination based on race, gender, and other protected characteristics. This situation highlights the legal and ethical consequences of biased algorithms in advertising and housing.[10]

Perpetuating Bias through Feedback Loops


Algorithms that depend on user data have the potential to magnify existing biases as time goes on. If an algorithm favors a specific group, it could produce outcomes that perpetuate that bias in user engagements. Algorithms that produce biased results can reinforce societal biases when incorporated into decision-making processes, establishing a cycle of perpetuation. YouTube's recommendation algorithm has been criticized for promoting extremist content. This bias arises from the algorithm's tendency to recommend content similar to what a user has previously watched, potentially leading users down paths of radicalization.[11]
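
The dynamics of such a feedback loop can be shown with a tiny simulation. The numbers below are invented and the recommender is deliberately simplistic: it favors whichever topic leads in past clicks, so a small initial imbalance grows round after round even though the simulated user likes both topics equally:

```python
# Toy simulation of a recommendation feedback loop (invented numbers).
import random

random.seed(0)
clicks = {"topic_a": 55, "topic_b": 45}   # small initial imbalance in logged clicks
click_prob = 0.5                          # the user actually likes both topics equally

for step in range(1, 6):
    leader = "topic_a" if clicks["topic_a"] >= clicks["topic_b"] else "topic_b"
    other = "topic_b" if leader == "topic_a" else "topic_a"
    for _ in range(100):                          # 100 recommendations per round
        shown = leader if random.random() < 0.9 else other  # favor the current leader
        if random.random() < click_prob:                    # user clicks half of what is shown
            clicks[shown] += 1
    share = clicks["topic_a"] / (clicks["topic_a"] + clicks["topic_b"])
    print(f"round {step}: topic_a share of all clicks = {share:.2f}")
# The small initial imbalance snowballs: exposure, not user preference,
# drives the data the algorithm keeps learning from.
```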

Lack of Transparency


Many algorithms are complex "black boxes": it is difficult to understand how they arrive at decisions, which makes bias hard to identify and address. One case involving the US Department of Homeland Security concerned a US citizen who was selected for extra screening at the border because of an algorithm used by Customs and Border Protection (CBP). The court granted the plaintiff the right to contest the algorithm's decision because of its lack of transparency.[12]

Fairness Metrics and Trade-offs


Achieving a balance between different concepts of fairness in algorithmic decision-making is difficult because prioritizing one fairness measure can mean sacrificing another. In 2016, Google researchers found that an image recognition algorithm had a higher error rate for individuals with darker skin than for those with lighter skin. When they tried to counteract this bias by fine-tuning the model to lower overall error rates, the error rates for people with lighter skin rose, illustrating the trade-offs involved in tackling algorithmic bias.[13]
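
The trade-off can be made concrete with two standard group-fairness measures. In the sketch below (invented scores and labels, not Google's data), choosing group-specific thresholds that equalize selection rates (demographic parity) widens the gap in false positive rates, one component of equalized odds:

```python
# Sketch: two fairness criteria pulling against each other (invented data).
import numpy as np

scores = {"A": np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3]),
          "B": np.array([0.7, 0.6, 0.5, 0.4, 0.3, 0.2])}
labels = {"A": np.array([1, 1, 1, 0, 0, 0]),
          "B": np.array([1, 1, 0, 0, 0, 0])}

def report(thresholds):
    sel_rate, fpr = {}, {}
    for g in ("A", "B"):
        selected = scores[g] >= thresholds[g]
        sel_rate[g] = selected.mean()               # share of the group selected
        negatives = labels[g] == 0
        fpr[g] = selected[negatives].mean()         # false positive rate
    print(f"selection-rate gap = {abs(sel_rate['A'] - sel_rate['B']):.2f}, "
          f"FPR gap = {abs(fpr['A'] - fpr['B']):.2f}")

report({"A": 0.5,  "B": 0.5})   # one threshold: unequal selection rates, small FPR gap
report({"A": 0.65, "B": 0.5})   # equal selection rates: the FPR gap widens
```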


Discrimination Laws


Current laws aimed at preventing discrimination, like the Civil Rights Act of 1964 in the US, forbid bias related to protected traits such as race, sex, and religion. Yet the complexity of algorithmic systems makes it challenging to apply these laws and to pinpoint discriminatory intent. In 2019, the Apple Card was accused of gender bias when it was found that women were receiving significantly lower credit limits than men with comparable credit scores. Apple and Goldman Sachs faced criticism for a lack of transparency in how the card's credit limits were determined.[14]

Privacy and Data Protection


Laws like the GDPR in the EU and the CCPA in the US seek to safeguard individuals' rights over how their personal data is collected, used, and shared. Increasingly, these rules focus on the transparency and accountability of algorithms, requiring companies to explain automated decisions that significantly affect people. The French Data Protection Authority (CNIL) imposed a €50 million fine on Google in 2019 for breaching GDPR rules on transparent consent for personalized ads. In 2020, the EU's GDPR was also invoked to challenge the facial recognition technology operated by Clearview AI, and the company was later fined for gathering biometric information from individuals in the EU without their consent.[15]

Regulatory Challenges


Policymakers face significant challenges in regulating algorithmic bias and fairness because of the rapid pace of technological innovation and the global reach of digital platforms. Attempts to create regulatory structures frequently encounter opposition from industry participants worried about inhibiting innovation or enforcing onerous compliance standards. Nonetheless, regulatory efforts are being made in different areas to tackle the issues of algorithm transparency, accountability, and fairness. The European Commission's proposed Artificial Intelligence Act aims to regulate high-risk AI systems, including those with potential biases that could harm individuals.[16]

Ethical Considerations


Data Bias


AI algorithms are trained on data; they learn from the datasets we provide. However, if the data contains biases, the algorithm might inadvertently strengthen or even amplify them. To prevent this, it is crucial to carefully select and prepare a diverse, well-rounded, and balanced dataset, so that any underlying biases can be identified and minimized for fairer AI systems.

Algorithmic Bias


Even with impartial training data, the algorithm itself may generate biases through its design, assumptions, or optimization goals. It is critical to assess how well the algorithm performs across various subgroups and to make sure it does not discriminate against any one group.
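
In practice this assessment means slicing the evaluation by subgroup rather than reporting one aggregate score. A minimal sketch (synthetic predictions and a hypothetical group column, assuming pandas and scikit-learn are installed) might look like this:

```python
# Sketch: evaluate a trained model separately for each subgroup
# (synthetic predictions; the group column is hypothetical).
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

results = pd.DataFrame({
    "group":     ["A"] * 5 + ["B"] * 5,
    "label":     [1, 0, 1, 1, 0, 1, 0, 1, 0, 0],
    "predicted": [1, 0, 1, 1, 0, 0, 0, 1, 1, 0],
})

for name, g in results.groupby("group"):
    acc = accuracy_score(g["label"], g["predicted"])
    rec = recall_score(g["label"], g["predicted"])
    print(f"group {name}: accuracy={acc:.2f}, recall={rec:.2f}")
# A large gap between groups on any metric is evidence the model disadvantages
# one of them, even if overall accuracy looks acceptable.
```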

Data Privacy


Large volumes of personal data are frequently used by AI systems, which presents privacy issues. To preserve people's privacy, strong data protection mechanisms including anonymization, data encryption, and access control must be put in place.

Promote Transparency


Organizations can create autonomous oversight tools to keep an eye on the behavior of AI systems and use algorithmic auditing procedures to encourage accountability and transparency in AI. Algorithmic impact evaluations are one type of transparency tool that might improve credibility and accountability.

Diverse Perspective


One way to lessen biases in the creation and use of AI systems is to make sure AI teams are inclusive and diverse. Diverse viewpoints and life experiences can aid in recognizing and resolving possible biases and ethical quandaries.

Human Oversight


Even if AI algorithms are capable of automating decision-making procedures, human supervision and responsibility must always be maintained. Particularly in delicate or high-stakes circumstances, humans ought to examine and confirm the algorithm's conclusions.

Continuous Monitoring and Updating


Biases may appear or change over time in AI systems since they work in dynamic contexts. To preserve fairness and reduce newly developing biases, the algorithm's performance must be regularly monitored and updated as appropriate.
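
Such monitoring can be as simple as recomputing a fairness measure on each new batch of decisions and alerting when it drifts past a tolerance. The sketch below uses invented batch counts and the common "four-fifths" rule of thumb as its alert threshold:

```python
# Sketch: monitor a fairness metric over successive batches of decisions.
# Invented numbers; the 0.8 threshold follows the common "four-fifths" rule.
batches = [
    {"selected_a": 40, "total_a": 100, "selected_b": 38, "total_b": 100},
    {"selected_a": 42, "total_a": 100, "selected_b": 35, "total_b": 100},
    {"selected_a": 45, "total_a": 100, "selected_b": 30, "total_b": 100},
]

for i, b in enumerate(batches, start=1):
    rate_a = b["selected_a"] / b["total_a"]
    rate_b = b["selected_b"] / b["total_b"]
    disparate_impact = min(rate_a, rate_b) / max(rate_a, rate_b)
    status = "OK" if disparate_impact >= 0.8 else "ALERT: review the model"
    print(f"batch {i}: disparate impact = {disparate_impact:.2f} -> {status}")
```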

Future Directions and Innovations


Advanced Fairness Metrics and Tools


Fairness Metrics


Establishing fairness metrics in the field of AI ethics, especially regarding algorithmic bias and accuracy, is important to ensure fair and accurate AI systems. Bias in AI comes from systematic errors that lead to negative results, often arising from assumptions made during development phases such as data collection, algorithm design, and model training. For example, a scoring algorithm trained on biased data can favor candidates from certain populations, preserving existing biases in AI systems.

Fairness metrics help ensure that AI models treat everyone fairly, regardless of factors such as age, gender, race, and socioeconomic status, and technology managers must define and use such metrics to guide the development of ethical AI systems. Although the US government does not yet have dedicated legislation of this kind, the legal environment surrounding AI and equity is changing. Existing laws, such as the Fair Credit Reporting Act and the Equal Credit Opportunity Act, already affect AI fairness. Around the world, countries are moving forward with AI legislation, with the EU and Canada leading the way in promoting transparency, accountability, and fairness in AI systems.[17]
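
Many of the most widely used fairness metrics reduce to comparisons of group-level rates. The sketch below (illustrative definitions with invented data; the function names are not from any particular library) expresses two of them, statistical parity difference and equal opportunity difference, as plain functions:

```python
# Sketch: two common group-fairness metrics as plain functions.
# y_true and y_pred are 0/1 arrays; group holds a group identifier per example.
import numpy as np

def selection_rate(y_pred, group, g):
    # Share of group g that receives the favorable prediction (1).
    return y_pred[group == g].mean()

def statistical_parity_difference(y_pred, group, g1, g2):
    # Difference in favorable-prediction rates between two groups.
    return selection_rate(y_pred, group, g1) - selection_rate(y_pred, group, g2)

def equal_opportunity_difference(y_true, y_pred, group, g1, g2):
    # Difference in true positive rates between two groups.
    def tpr(g):
        mask = (group == g) & (y_true == 1)
        return y_pred[mask].mean()
    return tpr(g1) - tpr(g2)

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])
group  = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])
print(statistical_parity_difference(y_pred, group, "m", "f"))
print(equal_opportunity_difference(y_true, y_pred, group, "m", "f"))
```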

Fairness Tools


Tools like IBM's AI Fairness 360 (AIF360)[18] provide a framework for detecting and mitigating bias in machine learning models and a foundation for real-time monitoring solutions. The toolkit was designed as part of IBM's broader effort to bring discipline to the delivery of AI, and it offers a comprehensive set of fairness-focused algorithms, metrics, and datasets. AIF360 includes over 70 fairness metrics and more than 10 bias mitigation algorithms.

AI Fairness 360 can be used in a variety of industries and fields such as finance, human resources, healthcare, and criminal justice, where AI decision-making systems can have a significant impact on people's lives. By adopting this set of tools, organizations can make their AI systems fairer and more accountable, reducing the risk of harmful biases that lead to discrimination.
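
A rough sketch of how such a toolkit might be applied is shown below. The data, column names, and group encodings are invented for illustration, and the exact class names and signatures should be checked against the AIF360 documentation for the installed version:

```python
# Sketch of an AI Fairness 360 workflow (illustrative; verify API details
# against the AIF360 documentation for your installed version).
import pandas as pd
from aif360.datasets import StandardDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Invented toy data: 'sex' is the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "sex":    [1, 1, 1, 1, 0, 0, 0, 0],
    "income": [50, 60, 40, 55, 45, 30, 35, 25],
    "hired":  [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = StandardDataset(df, label_name="hired", favorable_classes=[1],
                          protected_attribute_names=["sex"],
                          privileged_classes=[[1]])
privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("disparate impact before mitigation:", metric.disparate_impact())

# Reweighing is one of the toolkit's pre-processing mitigation algorithms:
# it adjusts instance weights so outcomes are less dependent on group membership.
reweighed = Reweighing(unprivileged_groups=unprivileged,
                       privileged_groups=privileged).fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(reweighed,
                                        unprivileged_groups=unprivileged,
                                        privileged_groups=privileged)
print("disparate impact after reweighing:", metric_after.disparate_impact())
```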

Stakeholder Collaboration and Public Engagement


Stakeholder collaboration and public participation are important steps in addressing algorithmic bias and equity in AI ethics. These efforts demonstrate the importance of collaboration and the power of public action in ensuring that AI systems are developed and deployed ethically and fairly.

Interdisciplinary Collaborations


Collaborative efforts across multiple disciplines are critical to developing fair and sustainable AI systems. These collaborations involve stakeholders from fields such as technology, social science, ethics, and law, with the objective of developing AI in a comprehensive manner that incorporates ethical considerations at every stage. This interdisciplinary approach helps in understanding the complex nature of algorithmic bias and in developing comprehensive mitigation strategies.[19][20]

Public Participation in AI Ethics


Public participation is essential in governing AI technology. It includes community engagement through communications, open forums, and participatory design processes, which helps ensure that the development of AI technologies is consistent with public values and social norms and makes AI systems better understood and more accountable. Public engagement can be promoted using methods such as advisory voting, which brings together diverse groups of people to discuss and provide input on AI policy.[19]

Educational Initiatives and Awareness


To effectively address algorithmic bias and fairness through educational initiatives and public awareness campaigns, several approaches have been explored and implemented across various organizations and educational bodies.

Educational Initiative


The knowledge and attitudes of technologists and policymakers can be shaped by training curricula that focus on the ethics and legitimacy of AI. For example, projects such as the AI + Ethics curriculum developed by the MIT Media Lab[21] for high school students aim to raise awareness of AI technology and its impact on society, including the issue of algorithmic bias.

Public Awareness Campaigns


Organizations like AI for Humanity are using popular culture to raise public awareness of the implications of AI for social justice and to focus attention on the impact of these technologies on Black communities. This includes legislative efforts to ensure transparency and accountability in AI applications.[22]

Collaborative Research and Public Discussions


Initiatives like the Carnegie Council's Artificial Intelligence & Equality Initiative[23] bring academics together to discuss and address biases in AI, such as gender bias and inequality perpetuated by algorithms. These discussions not only raise awareness but also encourage research that informs policy and practice.

References

  1. a b Weizenbaum, Joseph (1976). Computer power and human reason: from judgment to calculation. San Francisco: Freeman. ISBN 978-0-7167-0464-5.
  2. "Untold History of AI: Algorithmic Bias Was Born in the 1980s - IEEE Spectrum". spectrum.ieee.org. Retrieved 2024-04-21.
  3. Lash, Scott (May 2007). "Power after Hegemony: Cultural Studies in Mutation?". Theory, Culture & Society. 24 (3): 55–78. doi:10.1177/0263276407075956. ISSN 0263-2764.
  4. Garcia, Megan (2016-12-01). "Racist in the Machine". World Policy Journal. 33 (4): 111–117. doi:10.1215/07402775-3813015. ISSN 0740-2775.
  5. "AI Frameworks". Intel. Retrieved 2024-04-23.
  6. "Artificial Intelligence in the States: Emerging Legislation - The Council of State Governments". 2023-12-06. Retrieved 2024-04-23.
  7. "Blueprint for an AI Bill of Rights | OSTP". The White House. Retrieved 2024-04-23.
  8. "Predictions Put Into Practice: a Quasi-experimental Evaluation of Chicago's Predictive Policing Pilot | National Institute of Justice". nij.ojp.gov. Retrieved 2024-04-22.
  9. Larson, Jeff; Angwin, Julia; Kirchner, Lauren; Mattu, Surya. "How We Analyzed the COMPAS Recidivism Algorithm". ProPublica. Retrieved 2024-04-22.
  10. "Facebook Settles Civil Rights Cases by Making Sweeping Changes to Its Online Ad Platform | ACLU". American Civil Liberties Union. 2019-03-19. Retrieved 2024-04-22.
  11. Haroon, Muhammad; Wojcieszak, Magdalena; Chhabra, Anshuman; Liu, Xin; Mohapatra, Prasant; Shafiq, Zubair (2023-12-12). "Auditing YouTube's recommendation system for ideologically congenial, extreme, and problematic recommendations". Proceedings of the National Academy of Sciences. 120 (50). doi:10.1073/pnas.2213020120. ISSN 0027-8424. PMC 10723127. PMID 38051772.
  12. Olsson, Alex (2024-02-09). "Racial bias in AI: unpacking the consequences in criminal justice systems". IRIS Sustainable Dev. Retrieved 2024-04-22.
  13. Ferrara, Emilio (2023-12-26). "Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies". Sci. 6 (1): 3. doi:10.3390/sci6010003. ISSN 2413-4155.
  14. Osoba, Osonde A. (2019). "Did No One Audit the Apple Card Algorithm?". RAND Corporation. https://www.rand.org/pubs/commentary/2019/11/did-no-one-audit-the-apple-card-algorithm.html.
  15. "Clearview AI gets third €20 million fine for illegal data collection". BleepingComputer. Retrieved 2024-04-22.
  16. "European approach to artificial intelligence | Shaping Europe's digital future". digital-strategy.ec.europa.eu. 2024-04-05. Retrieved 2024-04-22.
  17. Forbes Councils. "AI & Fairness Metrics: Understanding & Eliminating Bias". councils.forbes.com. Retrieved 2024-04-22.
  18. "Trusted-AI/AIF360". GitHub. Trusted-AI. 2024-04-21. Retrieved 2024-04-22.
  19. a b "Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms". Brookings. Retrieved 2024-04-22.
  20. "Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir"". ar5iv. Retrieved 2024-04-22.
  21. "Project Overview ‹ AI Audit: AI Ethics Literacy". MIT Media Lab. Retrieved 2024-04-22.
  22. Gupta, Damini; Krishnan, T. S. (2020-11-17). "Algorithmic Bias: Why Bother?". California Management Review Insights.
  23. "Artificial Intelligence & Equality Initiative | AI Ethics | Carnegie Council". www.carnegiecouncil.org. Retrieved 2024-04-22.