Engineering Education in 2050/Keeping Humans in Responsible Charge of AI

Introduction

The rapid emergence of artificial intelligence (AI) in the 21st century presents significant ethical challenges, particularly in its development and societal impact. This chapter, drawing on the perspectives of AI researchers Geoffrey Hinton and Timnit Gebru, highlights these issues, including existential risk, encoded biases, and labor exploitation in AI development. It advocates a reformed engineering curriculum focused on equipping the engineers of 2050 with the skills and ethics needed to address these challenges, ensuring that AI advancements align with inclusivity and social responsibility.

Future Risk of AI

Hinton's Projections and Concerns for AI's Future

Geoffrey Hinton, often referred to as "the Godfather of AI," has voiced concerns about the trajectory of AI development. In 2023 he stated that AI could surpass human performance within as little as five years, revising his earlier estimate of twenty to thirty years.[1] His warnings stress that AI systems must be aligned with human values and designed as tools that augment humanity rather than replace it. Despite this, Hinton does not advocate halting AI research, arguing that global research efforts will continue regardless.[1] His concerns encompass the spread of misinformation, the potential automation of jobs such as assistants and paralegals, and the wider societal effects of job displacement.[2][3]

Gebru's Critique of AI Ethics

Timnit Gebru, a former co-lead of Google's Ethical AI team, brings a different but equally critical perspective. Her dismissal from Google after she criticized large language models (LLMs) underscores the tension between profit and safety in AI development. Gebru highlighted how these models, driven by profit motives, often fail to address inherent biases.[4] Biased data trains biased AI, and this reflection of human prejudice in machine output underscores the need for the 2050 engineering curriculum to emphasize comprehensive ethical training, critical thinking, and bias mitigation strategies.

Labor Exploitation of AI Workers

The exploitation of labor in AI development highlights unforeseen costs of AI's rapid growth. In the Philippines, the annotation platform Remotasks has faced criticism for exploiting workers tasked with classifying images for AI datasets, often in poor conditions with low pay, delayed wages, and canceled payments.[5] Similarly, Sama in Kenya hired workers to review explicit, graphic content so that models like ChatGPT could learn to flag inappropriate material.[6] These workers, some taking home less than $2 an hour, endure lasting mental health impacts from the distressing content.[6] Such exploitation underlines the ethical problem of building advanced technologies on the labor of disadvantaged individuals, reinforcing a cycle of inequality.

AI Biases and Ethical Considerations

By 2050, AI is predicted to be both more powerful and more deeply integrated into everyday life. With this comes the potential for algorithmic decisions to grow more biased as systems present users with ever-larger amounts of information accumulated over the years. A key consideration for that year is the possibility that algorithms are reinforcing biases rather than mitigating them, a dynamic illustrated by the sketch below.
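As a minimal, hedged illustration of how such reinforcement can arise, the following Python sketch simulates a hypothetical recommender that favors whichever item has been clicked most. The items, probabilities, and click model are invented for illustration, not drawn from any real system.

```python
# Hypothetical simulation of a recommender feedback loop: the item that
# starts with slightly more clicks gets recommended more often, and
# therefore accumulates even more clicks.
import random

random.seed(0)  # deterministic for the example
clicks = {"item_a": 51, "item_b": 49}  # nearly balanced starting data

for _ in range(1000):
    top = max(clicks, key=clicks.get)                # current favorite
    other = "item_b" if top == "item_a" else "item_a"
    shown = top if random.random() < 0.9 else other  # favor the favorite
    clicks[shown] += 1                               # the shown item is clicked

print(clicks)  # the small initial gap grows into a large one
```

Even a near-even starting split drifts into a large imbalance: this is the feedback loop that engineers must be trained to watch for.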

What is AI Bias?

Biases can emerge from the data used for training, from the algorithms themselves, or from design decisions made during development.[7] Important case studies of AI bias include a 1988 finding that a British medical school's screening program discriminated against women and applicants with non-European names when reviewing applications. A similar case occurred in 2018, when Amazon's experimental recruiting models proved biased against women because they had been trained mostly on resumes submitted by men.[8] In another example, the news organization ProPublica found that a criminal justice algorithm labeled African American defendants "high risk" at twice the rate of white defendants.[9] AI bias can clearly lead to misinformation and unequal opportunities for users and the public, so is there a way to stop it completely with the help of engineers and their professional responsibility?
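The kind of disparity audit ProPublica performed can be expressed in a few lines of code. The sketch below, using hypothetical records and group labels rather than real data, computes a risk classifier's false positive rate per group, i.e. how often people who did not reoffend were still flagged as high risk.

```python
# Minimal sketch of a group-fairness audit: compute each group's false
# positive rate, P(flagged high risk | did not reoffend), from labeled
# records. All records below are hypothetical.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
]

def false_positive_rates(records):
    flagged = defaultdict(int)    # flagged high risk but did not reoffend
    negatives = defaultdict(int)  # everyone who did not reoffend
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rates(records))
# {'group_a': 0.5, 'group_b': 1.0} - group_b is falsely flagged at twice
# the rate of group_a, exactly the kind of disparity worth investigating.
```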

A New Curriculum

Behind AI development are the professionals who create these algorithms. To keep AI in check, completely eradicating bias is an unrealistic goal; instead, we propose that future engineers be trained to recognize such biases in data. Engineers need to understand their professional responsibility for ensuring that AI technologies align with ethical standards and societal values. To build this mindset, we predict that AI bias recognition will be integrated into course curricula. Following principles articulated by Geoffrey Hinton, courses or electives will be created to help engineering students navigate the complex ethical dilemmas that are likely to arise in AI development.

The main topics to address are user control, privacy, and the transparency of AI systems in a diverse environment. Engineers will learn to design AI that keeps users in control of the system, rather than the system controlling the user. Respecting user privacy is equally important: personal information must be handled responsibly, with a low risk of data breaches and in compliance with established privacy regulations. Finally, ensuring that AI systems are transparent and explainable builds trust and lets users understand how decisions are made. A system might, for example, display warnings on information at higher risk of bias, keeping users aware that the information and perspectives presented have flaws and showing how to use AI more effectively in their lives.

To address these topics, hands-on exercises, such as finding innovative ways to reduce bias in AI, will prepare students to make responsible decisions when faced with real-world scenarios. One such exercise is learning to filter out data sources at high risk of bias, as sketched below. In addition, seminars with guest speakers, whether local professors or experts from around the world, will be more widely available for students to attend on their own time as engineers discuss global AI standards.
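As a hedged sketch of what such an exercise might look like, the Python snippet below audits a training set for under-represented groups and surfaces the kind of bias warning described above. The dataset, field name, and 20% threshold are all hypothetical choices for illustration.

```python
# Classroom-exercise sketch: before training, warn about groups whose
# share of the dataset falls below a chosen threshold. The field name
# and threshold are illustrative assumptions.
from collections import Counter

def representation_warnings(rows, group_field, min_share=0.2):
    counts = Counter(row[group_field] for row in rows)
    total = sum(counts.values())
    return [
        f"WARNING: '{group}' is only {count / total:.0%} of the data; "
        f"model outputs for this group may be less reliable."
        for group, count in counts.items()
        if count / total < min_share
    ]

# Echoes the Amazon case: a resume dataset dominated by one group.
resumes = [{"gender": "men"}] * 85 + [{"gender": "women"}] * 15
for warning in representation_warnings(resumes, "gender"):
    print(warning)
```

A real curriculum would go further, pairing checks like this with techniques for re-balancing or re-weighting data, but even this simple audit makes the idea of a "bias warning sign" concrete.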

Opportunities and Challenges

As AI infiltrates more of our daily lives, it has the potential to reinforce biases in human thinking and behavior because of the data it is trained on. It is unrealistic, however, to eliminate bias completely; we therefore predict a change in mindset, with multidisciplinary courses and electives encouraging engineers to be more adaptive. It is important that humans, the creators of AI systems, heed Hinton's warning by remaining aware and reflective of AI biases and ethical standards. This is crucial for creating a future in which AI technologies contribute positively to society, and in which humans do not blindly rely on these powerful yet flawed systems.

AI Law in Engineering Schools

As previously mentioned, our vision sees engineering education taking the necessary steps to teach CS students how AI should align with inclusivity and social responsibility. Another way to reach this end would be to offer Pre-Law coursework to undergraduate engineering students.

Current AI Law

Presently, laws around AI are at an intermediate stage and have yet to take full effect. By 2050, with AI more present in everyday life, we expect there will be many AI laws. Already, 15 U.S. states have enacted legislation around AI, each taking a different approach.[10] Some state legislatures focus on collecting data and establishing policies, while others clarify that AI does not legally qualify as a person.[11] Although current legislation is scarce and disjointed, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights in 2022, an indication that nationwide AI policies will be formed in the coming years.[12]

AI Litigation

Wherever policies or laws exist, some citizens will eventually break them. We predict AI will be no different, creating a new field of AI litigation: cases involving harmful bias caused by AI, plagiarism by AI software, liability disputes when someone follows AI instructions, and more. CS students would be ideal candidates for the role of AI lawyer, since they understand the software at the center of an AI law case and can give context to a judge or jury. This parallels patent law, which focuses on litigating patent cases and writing patent applications and currently requires patent lawyers to hold at least one technical degree. People with technical backgrounds better understand technical terminology and models, allowing them to grasp the inner workings of the invention a patent case revolves around; with AI law, the same would apply.

Introducing Pre-Law topics to undergraduate CS students would expose more of them to this predicted career path. Engineering schools could offer classes that begin with the intersection of law and engineering, followed by higher-level courses teaching legal basics to prepare students for law school and the subsequent exams. We predict there would be an AI Law Bar similar to the Patent Bar. With AI lawyers, the future of AI could be tried legally, keeping creators responsible.

AI Certifications

It would also be important for CS students to learn about future regulations and their purposes. In our vision of 2050, anyone who creates AI software professionally would be required to hold a certification to do so within a specific jurisdiction. This model would combine the current Software Engineering Code of Ethics, which does not yet address AI, with the Fundamentals of Engineering (FE) exam that civil engineers take as a step toward becoming professional engineers. Such certifications would ensure that professionals who create AI systems can be held accountable for knowing AI regulations, since the certification would have tested them on those regulations as well as on technical skills. One present challenge is determining the jurisdiction in which a person should be certified, since a single piece of AI software can be used outside the state, or even the country, where it was created. In addition, because the FE exam charges a fee, a similar test would presumably require payment as well, which could create barriers to who receives the certification.

Conclusion

These issues lead us to predict a significant shift in engineering education by 2050. This new approach will prioritize ethical grounding, bias prevention, human-centric AI design, interdisciplinary collaboration, and continual learning. By equipping future engineers with the skills and ethical frameworks to navigate these challenges, the vision for 2050 is one where AI is a force for positive change, developed in a manner that respects both human dignity and moral responsibility.

References

  1. Allyn, Bobby (2023-05-28). "'The godfather of AI' sounds alarm about potential dangers of AI". NPR. Retrieved 2023-12-05.
  2. ""Godfather of AI" Resigns From Role At Google, Expresses Worries About Artificial Intelligence". Hypebeast. 2023-05-01. Retrieved 2023-12-08.
  3. Taylor, Josh; Hern, Alex (2023-05-02). "'Godfather of AI' Geoffrey Hinton quits Google and warns over dangers of misinformation". The Guardian. ISSN 0261-3077. https://www.theguardian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning.
  4. "Why Timnit Gebru Isn't Waiting for Big Tech to Fix AI's Problems". Time. 2022-01-18. Retrieved 2023-12-08.
  5. "MSN". www.msn.com. Retrieved 2023-12-08.
  6. "Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer". Time. 2023-01-18. Retrieved 2023-12-08.
  7. "AI Bias - What Is It and How to Avoid It?". levity.ai. Retrieved 2023-12-08.
  8. Dastin, Jeffrey (2018-10-10). "Insight - Amazon scraps secret AI recruiting tool that showed bias against women". Reuters. Retrieved 2023-12-08.
  9. "Untold History of AI: Algorithmic Bias Was Born in the 1980s". IEEE Spectrum. Retrieved 2023-12-08.
  10. "Artificial Intelligence 2023 Legislation". www.ncsl.org. Retrieved 2023-12-08.
  11. "Artificial Intelligence 2023 Legislation". www.ncsl.org. Retrieved 2023-12-08.
  12. "Blueprint for an AI Bill of Rights | OSTP". The White House. Retrieved 2023-12-08.