
Information Technology and Ethics/Future Robotic Ethics


Autonomous Systems and Their Applications


Autonomous systems, a subset of artificial intelligence, are systems capable of acting and making decisions without human input or supervision. These systems and their applications have had a significant impact not only on the field of technology but also on areas such as agriculture, marine technology, and geospatial imaging, and they can be found in self-driving vehicles, service robotics, healthcare, and more[1].

Autonomous systems are viewed as a driver of a country's economy and as vital in reducing pressure on the workforce, and for these reasons they have been developed, and continue to be improved, at a rapid pace. However, concerns about the ethics of these systems, and of the decisions they make, have existed for as long as the technology has. Any decision an autonomous system makes should align with human reasoning, taking into account emotions, context, and the future consequences of the action. Rules and regulations need to be put in place to ensure that, even without human supervision, autonomous systems do not cross set limits or take a morally or ethically ambiguous path. The technology has certainly come a long way, but it still has far to go before it can be trusted completely, as illustrated by a group of hackers who were able to "trick" a Tesla vehicle into driving at 85 mph in a 35 mph zone by strategically placing tape on a speed limit sign. The vehicle was still performing to the best of its ability, but a human driver would also consider the character of the road when judging whether 85 mph is a safe speed[2].
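
As a rough illustration of that kind of contextual cross-check (a sketch only, not a description of Tesla's actual software; the road classes, cap values, and function name are assumptions made for this example), the snippet below refuses to trust a sign-read speed limit that exceeds a cap implied by the road class:

  # Illustrative sketch only (not Tesla's actual logic): sanity-check a
  # sign-read speed limit against a cap implied by the road class, the way a
  # human driver would weigh the sign against the road they are on.
  ROAD_CLASS_CAP_MPH = {
      "residential": 35,      # assumed caps for this example
      "urban_arterial": 50,
      "rural_highway": 65,
      "interstate": 85,
  }

  def plausible_speed_limit(detected_mph: int, road_class: str) -> int:
      """Return the detected limit unless it exceeds the road-class cap."""
      cap = ROAD_CLASS_CAP_MPH.get(road_class, 35)  # conservative default
      return min(detected_mph, cap)

  # A tampered "85" sign on a residential street is rejected in favor of the cap.
  print(plausible_speed_limit(85, "residential"))  # -> 35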

Autonomous systems are also widely used in warfare, where they are known as lethal autonomous weapons (LAW), autonomous weapons systems (AWS), or lethal autonomous robots (LAR). The growing use of unmanned drones and sentry guns worldwide has raised ethical concerns, with some governments and human rights groups advocating a complete ban[3]. As in any moral situation, human emotions and values play a deciding role in the outcome of combat, and removing humans from the equation effectively bypasses the moral and ethical judgements that might otherwise be made in the heat of battle. With criticism raised consistently on this subject, rules are starting to be put into place; the Department of Defense Directive on Autonomy in Weapon Systems states in clause 4.b: "Persons who authorize the use of, direct the use of, or operate autonomous and semi-autonomous weapon systems must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement (ROE)."[4]

However, many opponents of lethal autonomous weapons still resist the use of "killer robots" on the principle that laws and legislation do not necessarily reflect our moral values. As a result, attention is shifting from developing lethal autonomous weapons that target humans to developing systems that target infrastructure, although that raises issues of its own. While destroying infrastructure is, in theory, more ethical than taking human lives, targeting infrastructure carries no guarantee against loss of life; moreover, the use of autonomous weapons systems, by its very nature, relieves governments and militaries of accountability.

The International Committee of the Red Cross (ICRC) held a round-table conference in Geneva in August 2017 to discuss the ramifications of using lethal autonomous weapons in warfare, and received support for its proposal to ban them[5]. Conclusive action has yet to be taken, however; the United States government released a statement acknowledging the ICRC's concerns but arguing that the technology is too young for such a definitive stance[6].

Ethics and codes of conduct


The invention of high-tech autonomous machines has required substantial changes to the rules and regulations that frame their use. "Moreover, stability analysis for non-autonomous systems usually requires specific and quite different tools from the autonomous ones (systems with constant coefficients)"[7]. Among these machines, drones are one of the most developed and most researched areas of A.I. Researchers are finding more and more ways to use drones in the day-to-day world, and so far drones have been successful in the areas in which they are used and have enriched those fields. This success has increased demand, and as drones expand into new areas, additional protocols must be followed before they can be put to use. A large percentage of drones are used by militaries and for defense purposes along national borders. Because A.I. is such a vast field, it must pass through many different rules and regulations so that the requirements of every department are met. Ethical consideration is necessary to make the best use of autonomous technology, particularly for drones, which are reaching new heights day by day. To achieve this, their use must be cleared through aviation ministry rules, defense ministry protocols, and military commanders.

Drones are already part of our daily life: they are used in armed conflicts, to monitor livestock, for journalism and film production, for traffic surveillance, and more. The evolution of drones has been much faster than that of the regulatory frameworks, which is why new regulations are emerging little by little and on the fly. Drones are very easy to buy online, so seeing one flying over a populated area of a city is not strange at all, and we are not surprised when told that one has caused an accident. Just as the role of drones in society is gradually being regulated, so is the need to regulate the privacy issues they raise, since drones can now be equipped with much more than traditional cameras, which implies a serious invasion of privacy. Rules about what can and cannot be done with drones are also emerging little by little; for example, the Professional Society of Drone Journalists holds that if information can be obtained by other means, drones should not be used to obtain it.

Since ancient times, combatants have sought to inflict maximum damage on opposing forces while protecting their own. The quest to act at a greater distance than the adversary, or with greater protection, has been a historical constant, and no self-respecting leader would seek victory by unnecessarily risking the lives of their troops. Drones answer this desire not to risk lives unnecessarily, and they have been used in armed conflicts for more than a decade. This creates an understandable concern, since systems of this type can revolutionize the way combat itself is fought: they turn war into a kind of game that is directed, controlled, and executed through a computer screen.


In addition, the distinction between a civilian and a soldier must be determined in combat. It must be established who is responsible when a drone violates the rights of civilians in such conflicts, and whether war crimes have actually been committed. Issues of this kind have already been addressed by the Human Rights Council, in particular the use of lethal weapons such as drones in military operations, because the rise of unmanned aerial vehicles has made air strikes less transparent and created what is called a vacuum of responsibility. In any case, the debate basically raises two classic factors in relation to the use of force: self-defense and preemptive action. Precision technology, armed drones, and the introduction of robotics are distancing society from the harsh reality of warfare in which, as always, combatants need the support of their rearguard.

Anthropomorphism

HitchBot

The issue of anthropomorphism comes into play when robots act and feel more human than ever before, and when humans begin to take on these new bots as pets, significant others, even family members. Most people are oblivious to the fact that robots already dominate our most tedious tasks in manufacturing and industrial workplaces, but assign a robot a face, speech recognition, and a few pet-like features and the tables turn: emotions become a factor. The attribution of human traits, emotions, or intentions to a non-human entity is considered an innate tendency of human psychology.

A good example of this tendency can be observed in the peculiar case of a robot named HitchBot. In 2015, HitchBot, a hitchhiking robot, was set off as an experiment to hitchhike across the country. The robot traveled a considerable number of miles but was ultimately vandalized beyond repair along the way. The story of this incident made headlines, and there was an outpouring of sympathy for the little hitchhiking robot. Many tweets nationwide responded to an act of violence that many considered cruel and inhumane, such as "Sorry lil Guy for the behavior of some humans whom are bad people", with others expressing grief and sorrow. Consider also the case of Spot, a dog-like robot created by Boston Dynamics, at the time a Google-owned company. A video online demonstrated how stable the robot was by showing it being kicked hard over and over as it trotted along, maintaining its composure. People were outraged and even contacted PETA, the animal rights organization, in an attempt to shame the company for its "abuse" of a dog-like robot.

Transhumanism


Transhumanism is the research and development of robotic human enhancements: the augmentation of the human body with robotic technology to extend human abilities. Transhumanists believe that technology is needed to surpass the limits of human evolution, but this raises moral and ethical issues. One ethical argument against transhumanism is that enhancements would raise the baseline of what is considered normal and widen the gap between those who have access to them and those who do not, creating further inequality. Given human nature, there is a concern that those who are not enhanced would be treated as second-class citizens. There is also the medical concern that, over time, enhancements could damage the body and pose harm to the user and to others. Privacy is another major concern given the current state of privacy and technology: certain enhancements could collect tracking data on their users, and corporations could use that data without consent. The argument in favor is that enhancements could help level the playing field and produce a more equitable outcome for all, but there would still be the ethical problem of deciding who can and cannot receive them.

Theory of the Uncanny Valley


Human attachment to things is a very real force that drives much of what we do. We want to feel that the items we own are ours and that there is nothing else like them. The same applies to robotics and how robots are designed: when we create a robot to appear more human, we want to create something that is our own, "the only one out there". The issue is that the closer a robot's design comes to being human, the more off-putting it appears to a human. This is called the "Uncanny Valley".

A graph of the emotional response (eeriness) to robots as they become more human-like

The "uncanny valley" is a dip in emotional response that happens when we encounter an entity that is almost, but not quite, human. According to The Conversation, "It was first hypothesized in 1970 by Japanese robot designer Masahiro Mori who identified that as robots became more human-like, people would find them to be more acceptable and appealing than their mechanical counterparts, but this only held true up to a point" (Lay, S. (2015, November 10)). The article goes on to explain that when robots come close to human appearance yet remain distinguishable from a real person, a sense of unease and discomfort develops. However, when robot design is pushed past this point, the emotional response becomes positive again. The name "uncanny valley" is derived from this dip (Lay, S. (2015, November 10)).
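
As a rough numerical illustration of the shape Mori hypothesized (the curve below is an arbitrary construction for this sketch, not data from Mori or any later study), affinity climbs with human likeness, drops sharply just short of full likeness, and recovers at the far end:

  # Toy model of the uncanny-valley shape: affinity rises with human likeness,
  # dips sharply just before full likeness, then recovers. The function is an
  # arbitrary illustration, not data from Mori or any study.
  import numpy as np

  def toy_affinity(likeness: np.ndarray) -> np.ndarray:
      """Affinity as a function of human likeness in [0, 1]."""
      rising_trend = likeness                                   # general upward trend
      valley = 0.9 * np.exp(-((likeness - 0.85) ** 2) / 0.003)  # sharp dip near 0.85
      return rising_trend - valley

  likeness = np.linspace(0.0, 1.0, 21)
  for x, y in zip(likeness, toy_affinity(likeness)):
      print(f"likeness={x:.2f}  affinity={y:+.2f}")  # affinity goes negative in the valley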

One proposed reason for this effect has to do with our psyche and how we view our own lives. Minor flaws and imperfections in an almost-human appearance can give us the feeling of seeing dead matter impersonating a human, like watching a zombie movie in real life. This can elicit fear or disgust toward the robot, since it has been proposed that it reminds us of our own mortality.

A newer element of the uncanny valley is that scientists and doctors have now found ways to make robots look nearly identical to humans; it is one thing to see a robot that is blatantly not human, and quite another when it could pass for one. Dr. Fumiya Yonemitsu of Kyushu University ran an experiment to test people's reactions: the team asked participants to look at photos of individuals with the same face (clones), of individuals with different faces, and of a single person. Participants were asked to rate how realistic the images appeared, their emotional reaction to them, and how eerie they found them.

The response was fairly conclusive and straightforward: the more identical a clone looked to someone, the more scared and uneasy people became. The researchers dubbed this phenomenon the "clone devaluation effect". This is part of why shows like Westworld and Humans have done so well; people were intrigued when robots were made to look human for a screen story, but now that robots really are being made to look like humans, many people find it very unsettling.

How Automation of Labor Will Affect the Job Market


Autonomous machines and systems represent a desire to both supplement and replace human labor in certain jobs. Predictions for the use of AI in the workforce point to both positive and negative consequences for the parties involved: artificial intelligence systems can completely replace people in specific occupations, eliminating those jobs, but they also create a multitude of other jobs and opportunities, in what can be considered job displacement.

Job Losses


As artificial intelligence systems become more and more adept at completing tasks that normally require human intelligence, they will eventually be used to replace human labor in the industries that allow it. These are the industries whose tasks are most easily automated, including the following[8]:

  • Manufacturing – AI systems can do more than any human can and can fully replace workers in certain tasks, at a lower cost.[9]
  • Agriculture – AI companies have been training robots to perform multiple tasks in farming fields, such as controlling weeds and harvesting crops, at a faster pace with higher volumes compared to humans.[10]
  • Retail – Many retail jobs have already been replaced by self-checkout kiosks, AI concierges, mobile payment systems, and Amazon Go-style stores, with an automation potential of 53%.[11]
  • Transportation and logistics – While driverless vehicles have proven far more difficult to bring to market than expected, they still present a solution that aims to eliminate the need for human drivers. Delivery drivers in particular are predicted to see an automation potential of 78%.[12]

Estimates have shown that in the U.S. alone, 47 percent of jobs are at high risk of "computerization".[13] These losses disproportionately affect lower-skilled workers, such as blue-collar workers and those working on assembly lines.[14] While the goal of artificial intelligence may not have been to run people out of their jobs, as with all technological advancements there is always some older technology or group of people that is negatively impacted.

Job Creation


While artificial intelligence systems will eventually replace lower-skilled jobs and/or jobs with repetitive or easily programmable tasks, they will also create a multitude of other jobs and opportunities for different segments of the population. The result can be considered an overall job displacement, since new technologies not only destroy jobs but also create them.[15]

A report from the World Economic Forum in 2018 estimated that AI would create a net total of 58 million new jobs by 2022, and that many of these jobs will likely not follow the traditional full-time employment model, instead expanding to remote workforces.[16] As a result, most of these new jobs would require the "upskilling" and "reskilling" of the workforce,[17] since artificial intelligence systems would begin to supplement existing jobs and occupations.

By 2030, AI is expected to add an estimated $15.7 trillion to global GDP, a 26% increase.[18] The jobs created will allow workers to focus on higher-value and higher-touch tasks, creating benefits for both businesses and individuals.[19] As artificial intelligence research advances, it will lead to an overall increase in the number of jobs created, especially high-skill and high-demand jobs, but it will ultimately decrease the need for lower-skill labor.
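
As a quick consistency check on those two figures (assuming the 26% is measured against the same baseline to which the $15.7 trillion is added), the implied baseline of global GDP works out to roughly $60 trillion:

  # Back-of-the-envelope check of the figures cited above: a $15.7 trillion
  # gain that represents a 26% increase implies a baseline of about $60 trillion.
  gain_trillions = 15.7
  growth_fraction = 0.26

  baseline = gain_trillions / growth_fraction   # ~60.4
  projected = baseline + gain_trillions         # ~76.1

  print(f"Implied baseline global GDP: ${baseline:.1f} trillion")
  print(f"Implied 2030 global GDP:     ${projected:.1f} trillion")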

References

  1. Human Centered Artificial Intelligence, Special Interest Groups. https://www.uu.nl/en/research/human-centered-artificial-intelligence/special-interest-groups
  2. Hackers Made Tesla Cars Autonomously Accelerate Up To 85 In A 35 Zone. (2020, February 19). https://www.forbes.com/sites/daveywinder/2020/02/19/hackers-made-tesla-cars-autonomously-accelerate-up-to-85-in-a-35-zone/?sh=71f981147245
  3. Ethics and Autonomous Weapon Systems: An Ethical Basis for Human Control? (2018, July). https://www.armscontrol.org/act/2018-07/features/document-ethics-autonomous-weapon-systems-ethical-basis-human-control
  4. Department of Defense Directive, Autonomy in Weapon Systems. (2012, November 21). https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf
  5. Ethics and autonomous weapon systems: An ethical basis for human control?. (2018, April 3). https://www.icrc.org/en/document/ethics-and-autonomous-weapon-systems-ethical-basis-human-control
  6. CCW: U.S. Opening Statement at the Group of Governmental Experts Meeting on Lethal Autonomous Weapons Systems. (2018, April 9). https://geneva.usmission.gov/2018/04/09/ccw-u-s-opening-statement-at-the-group-of-governmental-experts-meeting-on-lethal-autonomous-weapons-systems/
  7. Finite-time stability of linear non-autonomous systems with time-varying delays. Advances In Difference Equations, 2018(1), 1-10. doi:10.1186/s13662-018-1557-3
  8. Technology Org. (2021, February 1). The impact of artificial intelligence on unemployment. Technology Org Science & Technology News. https://www.technology.org/2019/12/17/the-impact-of-artificial-intelligence-on-unemployment/
  9. The Manufacturer. (2020, June 25). How AI and IOT affects the manufacturing job market. The Manufacturer. https://www.themanufacturer.com/articles/ai-iot-affects-manufacturing-job-market/
  10. Jain, P. (2021, July 23). Artificial Intelligence in Agriculture: Using Modern Day AI to Solve Traditional Farming Problems. AI in agriculture: Application of Artificial Intelligence in agriculture. https://www.analyticsvidhya.com/blog/2020/11/artificial-intelligence-in-agriculture-using-modern-day-ai-to-solve-traditional-farming-problems/
  11. McDonald, S. (2019, May 29). Nearly Half of All Retail Sales Jobs May Soon Be Replaced by Automation. Report: Automation and AI to Replace Nearly Half of Retail Sales Jobs. https://footwearnews.com/2019/business/technology/automation-retail-sales-jobs-ai-1202786189/
  12. Deming, D. (2020, January 30). The Robots Are Coming. Prepare for Trouble. https://www.nytimes.com/2020/01/30/business/artificial-intelligence-robots-retail.html
  13. Saner, M., & Wallach, W. (2015). Technological unemployment, AI, and workplace standardization: The convergence argument. Journal of Ethics and Emerging Technologies, 25(1), 74-80.
  14. Vincent, J. (2017, March 28). Robots do destroy jobs and lower wages, says New Study. https://www.theverge.com/2017/3/28/15086576/robot-jobs-automation-unemployent-us-labor-market
  15. United Nations. (n.d.). Will robots and ai cause mass unemployment? not necessarily, but they do bring other threats. United Nations. https://www.un.org/en/desa/will-robots-and-ai-cause-mass-unemployment-not-necessarily-they-do-bring-other
  16. Technology Org. (2021, February 1). The impact of artificial intelligence on unemployment. Technology Org Science & Technology News. https://www.technology.org/2019/12/17/the-impact-of-artificial-intelligence-on-unemployment/
  17. Technology Org. (2021, February 1). The impact of artificial intelligence on unemployment. Technology Org Science & Technology News. https://www.technology.org/2019/12/17/the-impact-of-artificial-intelligence-on-unemployment/
  18. Kande, M., & Sonmez, M. (2020, October 26). Don't fear AI. The tech will lead to long-term job growth. World Economic Forum. https://www.weforum.org/agenda/2020/10/dont-fear-ai-it-will-lead-to-long-term-job-growth/
  19. Kande, M., & Sonmez, M. (2020, October 26). Don't fear AI. The tech will lead to long-term job growth. World Economic Forum. https://www.weforum.org/agenda/2020/10/dont-fear-ai-it-will-lead-to-long-term-job-growth/