Information Technology and Ethics/Current Robotic Ethics
Artificial Intelligence
Artificial intelligence (AI) refers to algorithms designed to replicate human logical reasoning in order to increase the autonomy of technology. AI has a robust history and many applications and functions beyond improving the efficiency of software, big data analytics, and related fields. Public interest in AI has grown recently, largely because of the moral, societal, and legal consequences of AI decisions and actions. These concerns raise questions about the societal and moral ramifications of such decisions and the ethical considerations surrounding the technology, and they will help determine the extent to which society places its trust in AI.
History of Artificial Intelligence
In 1950, the English mathematician Alan Turing published his famous paper “Computing Machinery and Intelligence.” In it, he asked whether machines could exhibit intelligence similar to that of humans [1]. Turing proposed the Turing Test as a way of judging whether or not computers could think. This publication was the world’s first introduction to artificial intelligence, but the field would not be formally established until the 1956 Dartmouth Conference [1].
This conference, better known as the Dartmouth Summer Research Project on Artificial Intelligence, is credited as the birthplace of the field. The term “artificial intelligence” was first used when four of the future attendees submitted a proposal for the event the previous year [2]. This group consisted of John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon.
In the beginning, AI was a central topic of conversation; it caught the attention of the US government and received funding [1]. One of the first successful projects was the early natural language processing program ELIZA, developed between 1964 and 1966 (Wortzel, 2007). It was spearheaded by the German-American computer scientist Joseph Weizenbaum at MIT and became one of the world's earliest chatterbots. Despite its primitive design, it was reportedly able to pass the Turing Test (Wortzel, 2007). In 1974, due to slow progress, the United States withdrew its funding, bringing the field to an abrupt stop. This standstill lasted from 1974 to 1980 and became known as the first AI Winter [1]. Research was then picked up by the British government but dropped again in 1987. From 1987 to 1993 there was another standstill, known as the second AI Winter. It would not be until 1996 that the next breakthrough occurred, with IBM’s Deep Blue.
Four types of Artificial Intelligence
Artificial intelligence (AI) describes a field with many different subfields and classifications. Not all artificial intelligence is created equal. AI classification is an important part of AI ethics, as different types of AI are capable of different things, and some do not even exist yet.
AI classification can be broken down into two types: the first is a practical and tangible definition of AI, covering what it can do and how it is able to think; the second is more theory-oriented.
Type 1
This first classification can be broken down into four subclasses: reactive machines, limited memory machines, theory of mind machines, and self-aware machines.
Reactive machines are able to react to a limited set of input parameters. They are not able to “think” in the sense of handling inputs that are not already known. They are “without memory and past experience, they analyze possible situations and choose the most strategic/suitable move” [3]. One of the great examples of a reactive machine was Deep Blue, the IBM supercomputer that defeated the reigning world chess champion, Garry Kasparov.
Limited memory machines build upon reactive machines, where they are able to use past experiences and information to make future decisions. In a sense, these machines are able to “think” and not just react to predefined inputs and perform pre-programmed outputs. Autonomous vehicles use this system for some of their features [3].
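To make the contrast concrete, here is a minimal sketch in Python, with invented rules and hypothetical observation names rather than anything from a real vehicle or chess system: a reactive agent maps each known input directly to an output, while a limited-memory agent also consults a short window of past observations.

```python
# Contrast of the two classes: reactive vs. limited memory.
# All rules and observation names below are hypothetical illustrations.
from collections import deque

class ReactiveAgent:
    """Reacts only to the current input; unknown inputs cannot be handled."""
    RULES = {"obstacle_ahead": "brake", "clear_road": "accelerate"}

    def act(self, observation: str) -> str:
        return self.RULES.get(observation, "stop")  # no memory, no learning

class LimitedMemoryAgent:
    """Keeps a short window of past observations to inform the next action."""
    def __init__(self, window: int = 3):
        self.history = deque(maxlen=window)

    def act(self, observation: str) -> str:
        self.history.append(observation)
        # Past experience changes the decision: repeated obstacles trigger caution.
        if list(self.history).count("obstacle_ahead") >= 2:
            return "slow_down_preemptively"
        return "brake" if observation == "obstacle_ahead" else "accelerate"

if __name__ == "__main__":
    reactive, limited = ReactiveAgent(), LimitedMemoryAgent()
    for obs in ["clear_road", "obstacle_ahead", "obstacle_ahead"]:
        print(obs, "->", reactive.act(obs), "|", limited.act(obs))
```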
Theory of mind refers to artificial intelligence systems that are expected to operate under the assumption that the decisions of others are shaped by their own beliefs, desires, and intentions [3]. Such systems do not exist yet, as this requires machines to be able to process complex human emotions and thought processes.
Self-aware machines are the final stage of artificial intelligence and are currently only theoretical. These are AI systems comparable to the human brain in that they have developed self-awareness [4].
Type 2
The second type of artificial intelligence classification has three subcategories: Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Superintelligence.
An artificial intelligence is classified as an Artificial Narrow Intelligence (ANI) if it is specialized in a specific area and is a reactive machine. IBM’s Deep Blue is classified as an ANI, since it is reactive and specializes in playing chess. Some modern-day examples of ANIs are the personal assistants built into smartphones, such as Siri or Google Assistant, as well as “videogames, search engines, social networks, web cookies, online advertising services, data miners and data scrapers, autopilots, traffic control software, automated phone answering services and so on” [5].
Artificial General Intelligence (AGI) represents “human-level AIs”: computers as smart as humans in every respect and capable of performing all intellectual tasks that humans can [5]. Currently, AIs are extremely good at performing complex tasks and calculations very quickly, but human-level tasks such as voice and image recognition remain very difficult, because it is hard to specify predetermined conditions for an AI to identify when external conditions appear random. AI scientists estimate that humanity will develop an AGI around the year 2030 [5].
While AGIs match human-level cognitive abilities, artificial superintelligences (ASIs) are artificial intelligences that are far smarter than humans in practically every field, including scientific creativity, general wisdom, and social skills [6]. These are the types of artificial intelligence depicted in science fiction, and what many people worry about when it comes to developing computers far smarter than humans. The point at which ASIs are created is when the idea of the singularity comes to fruition: a point in time where technological progress becomes uncontrollable, because discoveries are made by ASIs themselves faster than humans could ever research.
Ethical Considerations
Ethics
Many of the practical use cases for artificial intelligence raise ethical concerns. While it has the potential to be extremely powerful and beneficial for humanity, artificial intelligence can also be used by malicious actors. Ethical debates are complex, and to understand them they must be broken down by category. This section discusses practical use cases for artificial intelligence and the ethical concerns that come with each.
Privacy
Data privacy is a hot topic in today’s age of online presence and social media. Billions of people globally are on multiple social media platforms, generating enormous amounts of data and metadata that companies use in data mining practices. “Data mining is related to machine learning, information retrieval, statistics, databases, and even data visualization” [7].
Features generated by people using social media platforms can be used by artificial intelligence to find correlations and make predictions. This was first seen in politics during the 2008 presidential election, when social media sites were first used as a significant way for candidates to reach voters [8]. The information gathered from these online interactions allowed researchers from MIT to find correlations between the amount of social media used by candidates and the outcome of the 2008 election [9].
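As a toy illustration of this kind of analysis (the figures below are invented, not the MIT data), a Pearson correlation coefficient can be computed between candidates' social media activity and their vote share:

```python
# Minimal correlation sketch; all numbers are hypothetical.
from statistics import correlation  # available in Python 3.10+

social_media_posts = [120, 340, 80, 510, 260]  # invented activity counts
vote_share = [41.0, 48.5, 38.2, 52.3, 45.1]    # invented outcomes (%)

r = correlation(social_media_posts, vote_share)
print(f"Pearson r = {r:.2f}")  # values near +1 suggest a strong positive link
```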
Thus, the information generated by online activity can be extremely valuable, and there is much ethical debate over how social media companies may use this data, or whether they should be storing it at all. The General Data Protection Regulation (GDPR) was the first piece of sweeping data privacy legislation of its kind for European citizens, reflecting the belief that people should have rights over their online data.
Transparency
Another ethical concern is the transparency of these AI systems as they find correlations and make predictions, especially when those predictions have real-world implications. Many AI systems follow a closed, black-box approach, meaning that only the developers understand how the AI is structured and how its algorithms function. This leaves an opportunity for algorithmic biases to go undetected. With some machine learning techniques, the algorithms are opaque even to the experts working on them, making these biases difficult to spot.
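One way researchers probe such black boxes is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops; a large drop means the model leans heavily on that feature. The sketch below is a self-contained toy version with a stand-in model, not any particular production system:

```python
# Toy permutation-importance probe of a "black box" model.
import random

def black_box_model(row):            # stand-in for an opaque trained model
    return 1 if row[0] + 0.1 * row[1] > 0.5 else 0

def accuracy(data, labels):
    return sum(black_box_model(x) == y for x, y in zip(data, labels)) / len(labels)

random.seed(0)
data = [[random.random(), random.random()] for _ in range(1000)]
labels = [1 if x[0] + 0.1 * x[1] > 0.5 else 0 for x in data]
baseline = accuracy(data, labels)

for feature in range(2):
    shuffled_col = [row[feature] for row in data]
    random.shuffle(shuffled_col)  # break the feature's link to the labels
    perturbed = [row[:feature] + [v] + row[feature + 1:]
                 for row, v in zip(data, shuffled_col)]
    drop = baseline - accuracy(perturbed, labels)
    print(f"feature {feature}: importance ~ {drop:.3f}")
```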
Bias
Algorithmic bias is a major ethical concern for artificial intelligence, because AI systems have real-world consequences: artificial intelligence is used in financial, medical, and even political decision-making, and this use will continue to grow as AI systems become more advanced.
That is why the ethical debate over AI bias matters in tandem with transparency concerns. If artificial intelligence is going to make decisions that affect many different people, it is argued that it is of the utmost importance for those systems and algorithms to be developed and reviewed by a diverse group of people, in hopes of minimizing algorithmic bias and correcting it when needed.
Bias also raises the questions of whether algorithmic bias is inevitable, and of who is responsible for the decisions an AI makes while in operation.
One prime example of artificial intelligence algorithmic bias and transparency issues was seen in 2015 with the Google Photos application, which used artificial intelligence to tag people, places, and objects in photos. This is an example of an AI system that processes large amounts of data, much like the facial recognition systems used by police departments and other government agencies. The Google Photos application was caught labeling African Americans as “gorillas,” and overall showed biases against women and people of color. These systems were described as too complex for anyone to be able to predict what they will ultimately do [10].
Safety
As seen in all the milestones described previously, AI has made leaps and bounds of progress. With such undeniable power, safeguards must be implemented. The first step in ensuring the safe and ethical management of this technology is developing guidelines for AI safety. To guarantee the proper use of artificial intelligence in any form, physical or digital, companies should create frameworks that consider the moral side of using such technology.
This would include regulations that define the relationships among software engineers, users, and other parties involved with AI. The use of artificial intelligence must comply with the governance system, and companies need to remain aware of their responsibility for making sure that AI is not harmful to society [11]. Since these systems hold billions of bytes of data, it is important that users maintain the integrity of that data: it must remain fair and non-discriminatory to keep the public's trust [11]. This includes implementing safeguards to protect against and detect any malicious actors on the system [11]. To help ensure that an AI developed by one party will not be harmful, extensive testing of the system is required before it is made public.
Human and AI Interactions and Combinations
Autonomous Vehicles
The convergence of artificial intelligence with advancements in sensor technology has made autonomous vehicles a reality. Many safety features originally developed for collision avoidance have evolved into sensors capable of providing the sensory information required for AI control units to navigate a vehicle without driver input [12].
The National Highway Traffic Safety Administration (NHTSA) has defined six levels of autonomy, from Level 0, meaning no automation, to Level 5, meaning full automation. At Level 5, the occupants do not need to pay attention to the operation of the vehicle, and there may not even be manual controls, such as a steering wheel, in the vehicle [13]. While there can be tremendous benefits in automating the driving function, in terms of the efficient use of the roadway and the immeasurable benefit of lives saved by removing the human factor from the equation, great care must be taken in the development and implementation of such a system in terms of liability [14].
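For reference, the six levels can be captured in a simple data structure; the one-line summaries below are paraphrased rather than NHTSA's exact wording:

```python
# Compact encoding of the six driving-automation levels; summaries paraphrased.
from enum import IntEnum

class AutomationLevel(IntEnum):
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # a single assist feature, e.g. cruise control
    PARTIAL_AUTOMATION = 2      # combined assists; driver stays fully engaged
    CONDITIONAL_AUTOMATION = 3  # car drives itself but driver must take over
    HIGH_AUTOMATION = 4         # no driver needed within defined conditions
    FULL_AUTOMATION = 5         # no driver, possibly no manual controls

print(AutomationLevel(5).name)  # FULL_AUTOMATION
```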
Consider the classic “trolley problem,” in which the AI control unit must now decide between two undesirable outcomes. An AI control unit that undergoes machine learning will have to be trained on what is considered the more desirable outcome. On what authority would developers train it to make life-and-death decisions?
There is the potential for sensor, hardware, and software failures, which will need to be addressed with contingency procedures for a fully autonomous vehicle to be viable. Even with lower levels of automation, education and a complete understanding of the functionality are critical for the safety of the occupants and surrounding persons. If a driver “believes” they have invoked the autopilot functionality and hops into the back seat, when in fact the autopilot was not engaged, where does the responsibility lie [15]?
There are tremendous benefits to be had with automation, but careful consideration of the safety of the occupants and surrounding persons needs to outweigh being first to market and leading the deployment of autonomous vehicles.
Military Weapons - AI in Conflict
Missiles, drones, self-correcting ammunition, and similar weapons have enabled servicemen and women to stay out of harm’s way while continuing to bring the fight to the enemy. This imbalance in the scales of human conflict, however, has raised ethical concerns. We place a great burden on armed services personnel by asking them to take a human life in accordance with the rules of war; because the consequences of their actions can be felt for a lifetime, those actions are not entered into lightly. A machine that feels no remorse, by contrast, may be an unethical tool for militaries to use to do their bidding [16] [17].
Ground Systems
Ground-system-specific ethical concerns currently include the use of robotic droids to deliver and detonate explosives on human targets, as seen in the downtown Dallas shootout on July 7, 2016. Other issues arise from the introduction of artificial intelligence into robotics: for instance, whether an emotional bond with a robot is desirable, particularly when the robot is designed to interact with children or the elderly. Managing artificial intelligence within a robotic frame is currently the most important issue facing both robotics and artificial intelligence, and it will remain so moving forward. Everything from the coding of AI behavior to the safety parameters for shutting down a robot equipped with AI deserves intense scrutiny, under the provision that such robots do not harm humans and obey orders.
Aerial Systems
Issues specific to aerial systems include surveillance and the use of such systems to take human life. The Obama administration ordered 526 drone strikes, which killed up to 117 civilians worldwide. Surveillance-specific issues include illegal audio and video recording of private citizens.
Drones
The sales of drones have risen steadily over the last several years. Drone sales were expected to grow from 2.5 million in 2016 to 7 million in 2020, according to a report released by the Federal Aviation Administration. Hobbyist sales were projected to more than double from 1.9 million drones in 2016 to 4.3 million in 2020, the agency said, while business sales would triple over the period from 600,000 to 2.7 million (Vanian 2016, p. 1). It is already common practice to restrict the flight of drones near airfields, stadiums, and other public events. Drones are already equipped with applications that allow them to follow a designated user, who may be snowboarding, golfing, or hiking through the woods. The ethical implications that arise from such applications still include weaponization in addition to surveillance. The FAA believed that 2017 would be the big turning point in drone adoption by businesses, which use them for everything from scanning power lines to inspecting rooftops for insurance companies. Commercial sales were expected to reach 2.5 million, after which sales would increase only slightly for the next few years. Currently, companies must obtain FAA certification to fly drones for business purposes. Some businesses and drone lobbying groups have grumbled that this regulation is partly to blame for preventing the drone industry from taking off in the United States. As of March 2016, the FAA had granted over 3,000 business-class drone licenses [18].
Aquatic Systems
Aquatic robotic ethical concerns relate to surveillance and warfare. A current issue is the seizure of an American submarine drone by China in December 2016. The drone was eventually returned, but future incursions are likely. It is also possible to weaponize an aquatic drone, like its aerial counterpart, to deliver lethal strikes.
Medical Devices and Decisions
Exoskeletons
Exoskeletons, developed for the purpose of treating persons with limb injuries or disorders, are now becoming “smarter” through AI. Using AI in conjunction with an exoskeleton can provide control and stability functions that the patient lacks [19].
Ethical considerations include the potential for this convergence to provide an unfair advantage, and for the technology to be misused for illegal or immoral purposes. The unfair-advantage concern can be seen in simple mechanical-advantage arguments, as in the case of Blake Leeper losing his appeal to be allowed to use prosthetic legs in the 2021 Tokyo Olympic Games [20].
Implants
Neurotechnology is a growing area of research that initially aimed to help with neurological disorders and injuries, such as those of quadriplegic patients. The connection between human and computer is being tested by Elon Musk’s Neuralink company, which has created an implant that can wirelessly transmit brain activity [21]. This connection enables applications and technology that convert brain activity into actions in the physical world. It is a step toward cyborgs: beings that are part human and part machine. The ethical questions here are also partly religious ones: at what point does medical intervention go beyond prolonging life and become “playing God”?
Mental Treatments
Uses of AI in mental health treatment should also carefully weigh the ethical issues against the intended benefits. Technology has been used to create a virtual presence of a deceased loved one, allowing survivors to interact with the digital version in order to find closure in dealing with their loss. In the era of COVID-19, when many lost the ability to grieve at the bedside of a dying loved one, technology has attempted to bridge the gap; video conferencing with patients in hospitals that barred visitors due to the pandemic has helped some find closure. Bridging AI technology with human psychology can be a very beneficial but also a very dangerous tool [21].
Anthropomorphism
The issue of anthropomorphism comes into play as robots act and feel more human than ever before, and as humans begin to take on these new bots as pets, significant others, even family members. Most people are oblivious to the fact that, in the manufacturing and industrial workplace alone, robots already dominate our most tedious tasks; but assign a robot a face, speech recognition, and a few pet-like features, and the tables turn: emotions become a factor. The attribution of human traits, emotions, or intentions to a non-human entity is considered an innate tendency of human psychology. A good example of this human characteristic can be observed in the peculiar case of a robot named hitchBOT. In 2015, hitchBOT, a hitchhiking robot, was sent off as an experiment to hitchhike across the United States. The robot traveled a considerable number of miles but along the way was ultimately vandalized beyond repair. The story of this tragic incident made headlines, and there was an outpouring of sympathy for the little hitchhiking robot. Many tweets nationwide responded to this act of violence, which many considered cruel and inhumane, such as “Sorry lil Guy for the behavior of some humans whom are bad people,” while others expressed grief and sorrow. Consider also the case of Spot, a dog-like robot created by Boston Dynamics, then a Google-owned company. A video online demonstrated how stable the robot dog was by showing it being kicked hard over and over as it trotted along, maintaining its composure. People were outraged and even contacted PETA, the animal-rights organization, in an attempt to shame the company for its abuse of a “dog-like” robot.
Ethics of the End User
Ethics of the end user concerns how a designed AI is used: in what scenarios and under what circumstances. For example, it would raise a moral concern if an autonomous AI-based weapon killed an innocent individual. With the vast majority of models being trained by human employees, it is vital that users know what information the system needs in order to achieve its goal, so that alerts about anomalies can be as accurate as possible. Ensuring end-user trust requires not only clarity but also transparency: users should be made aware of what unusual activities can occur with the use of a particular AI. Some artificial intelligence makes mistakes, like Alexa’s responses to audio requests to play Spotify tunes, but such mistakes make little difference, unlike those of self-driving or autopilot cars. One would like to be certain about how self-driving cars work, in order to know how safe one will be while driving in autopilot mode. Problems can stem either from misuse of the technology or from mistakes made by the technology itself.
Robots now work alongside humans in factories, hospitals, and banking, and even fly aircraft and perform surgeries. Unlike humans, robots cannot differentiate between pain and pleasure, or judge what is right or wrong and whether an action is justifiable. Performing surgery on a human patient is a matter of life and death: if the algorithm or the command provided goes slightly wrong, who will be responsible? The robot, since the surgery was performed using AI?
Misuse of AI Technology
Any misuse of artificial intelligence systems can cause disruptions and unprecedented results. AI systems help analyze integral data and make predictions over large amounts of data, capabilities that can be misused by cybercriminals for ill gain. Another popular abuse of AI is deepfakes, which use AI techniques to manipulate audio or video content to make it look authentic. Deepfakes are tough to distinguish from legitimate content, even with technological solutions. In one example, a UK-based energy firm was tricked into transferring around 200,000 British pounds (approximately $270,000) to a Hungarian bank account after a malicious individual used deepfake audio technology to impersonate the voice of the firm’s CEO and authorize the payment.
Unintended Consequences
Another concern is the unintended consequences of technology that demonstrates or elicits emotion. One such endeavor was the virtual recreation of deceased loved ones or family members, so that survivors could have some closure when deprived of the opportunity at the time of death. The potential issue is that the virtual version of the deceased may behave in a way contrary to how the real person would have behaved, which could work against closure and instead create heartache [22].
Safety Override
Should there be a procedure to override an AI decision as a safeguard? It can be argued that the reason for having AI make the decision in the first place is to remove the sleep-deprived, caffeine-addicted, near-sighted, emotional human from the equation. However, if controls for intervention do not exist, how can the AI’s decision be altered or corrected in the case of an error?
Robotic Responsibilities and Rights
There is already widespread interest in robots and artificial intelligence, and one issue often debated is whom to hold responsible when an algorithm predicts poorly, especially in the medical and automotive industries. Some have suggested that the individual engineer should be held responsible, since the impact of their model affects other lives. In extreme hypotheses, it has even been argued that advanced AI may have the potential to destroy human life. While companies developing AI for commercial use, such as Amazon, Microsoft, Google, IBM, Facebook, and Apple, have undertaken individual as well as collective efforts to build safeguards around AI, academic institutions like the University of California, Berkeley, Harvard, and the Universities of Oxford and Cambridge have also shown commitment to working on a set of universal ethics and safety standards in artificial intelligence (Archana Khatri Das). When we give artificial intelligence our rights, we also hand it some of our unwanted responsibilities. Some nations are working on giving robots rights like those humans have; the European Union is exploring the possibility of granting AI “personhood.” Several nations have already advanced by granting rights to non-human entities. Some examples of non-human entities enjoying some degree of what were previously human rights are listed below:
1. Shibuya Mirai, a civil-servant chatbot with the “personality” of a small boy, was given “official residency” by the Shibuya neighborhood of Tokyo, Japan.
2. Sophia the robot was given actual citizenship by Saudi Arabia.
3. Erica the fembot was given a job.
There is even an American Society for the Prevention of Cruelty to Robots, the ASPCR (“Robots are people too! Or at least they will be someday”). Estonia has gone a step further and promised to grant full human rights to any entity that can pass the so-called Kratt Law test, which is named after a mythological creature made of household objects that gains sentience after its “creation.”
Three Laws of Robotics
One of the best-known fictional works on the subject, “Runaround” by Isaac Asimov, published in Astounding Science Fiction in March 1942 (Asimov 1950), set out his “Three Laws of Robotics”:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.
In addition, Asimov added a “Zeroth Law” in 1983 to precede the other three laws, providing that a robot may not harm humanity or, by inaction, allow humanity to come to harm. Many scholars, however, have pointed out deficiencies in Asimov’s laws. What rights and responsibilities should be given to robots is still an ongoing discussion; according to Christopher Stone, certain criteria must be satisfied for anything to be a holder of legal rights.
Policy and Procedures in AI
The value of utilizing AI comes from its ability to improve human lives, while policies and procedures guide the development and deployment of AI in order to avoid major concerns and lower risks. AI needs to be designed to be understandable and trustworthy, and policies need to be in place to implement AI safely, as debate around AI deployment involves how privacy is protected and whether AI bias can lead it to perform harmful acts. For example, with the advancement of driverless car technology, governments have begun to develop regulations to guide or restrict the testing and use of self-driving vehicles. “The National Highway Traffic Safety Administration (NHTSA) has issued a preliminary statement of policy which advises states against authorizing members of the public to use self-driving vehicle technology at this time.” The updated preliminary statement of policy would facilitate and encourage, wherever possible, the development and deployment of technologies with the potential to save lives. When it comes to AI such as autonomous vehicles, the United States has been active in producing policies and regulations: twenty-nine states have enacted legislation related to autonomous vehicles, and eleven governors have issued executive orders related to them. While the benefits of AI are significant, it is important to take a calculated approach to AI through policies, procedures, and regulations.
AI and Employment
[edit | edit source]Job Creation
It is no secret that AI has changed the workforce, and its use will only grow over time. Artificial intelligence has helped create jobs but has unfortunately also contributed to job disappearance. It is important to remember that the jobs created by technological advancement will require a different skill set from those that disappear. Edward Tenner notes that “Computers tend to replace one category of worker with another…You are not really replacing people with machines; you are replacing one kind of person-plus-machine with another kind of machine-plus-person” [19]. For example, the use of robots in the workforce has led to an increased need for designers, operators, and technicians who can repair the robots. In addition, Smith and Anderson’s survey results find that “half of the experts who responded (52%) expect that technology will not displace more jobs than it creates by 2025” [17].
Job Displacement
For all the job-creation potential that comes with integrating AI into modern society, there are also losses, the most important being the replacement of human employees. AI expert Kai-Fu Lee, the current CEO of Sinovation Ventures, expects AI to take over 50% of the job market within the next fifteen years (Stein, 2018), across industries ranging from healthcare to agriculture (Stein, 2018). While this will leave billions displaced, it may also raise the incentive for people to pursue higher education in hopes of getting a job. The implementation of AI is still not possible without the assistance of people, for at the end of the day humans will be managing these intelligent machines.
Social Media
Social media has undoubtedly become the world’s most far-reaching means of communication. Companies have not missed a beat in implementing artificial intelligence in their applications to enhance the user experience and rake in more users. One example is the face recognition software utilized by Facebook to better target advertisements to users (Heller, 2019). Another is LinkedIn, a popular social networking site for job searching, which deploys artificial intelligence to connect employers and employees based on the interests in their feeds (Ivanschitz & Korn, 2017). Just as much as AI can help the user, it can also be turned to managing the users on the platform, for example by blacklisting keywords and filtering text that goes against the standards of the company [23].
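A minimal sketch of the keyword-blacklisting idea follows; the banned terms are placeholders, and real platforms rely on far more sophisticated, often ML-based, moderation:

```python
# Toy content filter: flag posts containing blacklisted terms for review.
import re

BLACKLIST = {"scamlink", "buy-followers"}  # hypothetical banned keywords

def flag_post(text: str) -> bool:
    # Tokenize into lowercase words (hyphens kept so compound terms match).
    words = set(re.findall(r"[a-z0-9-]+", text.lower()))
    return bool(words & BLACKLIST)

posts = ["Check out this scamlink now!", "Had a great day at the park."]
for post in posts:
    print("FLAGGED" if flag_post(post) else "ok", "-", post)
```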
Open Source
In the world of computing, it is common for software developers to open their projects to the public, letting others manage, change, and distribute the software. By making their code open source, programmers allow development to be carried out by the community [24]. Artificial intelligence is no exception to this trend, as it too has become available to the public. One organization managing such distribution is OpenAI, whose mission is to develop AI that benefits humanity as a whole. While there are plenty of benefits to be reaped by publicizing one’s code, there are still plenty of repercussions the industry has to be aware of [24]. One of these dangers is deepfakes, which can manipulate an image or voice to appear like someone else; any malicious author could take advantage of this to damage companies and individuals [24]. This is only one example of many that come with opening source code to the millions of users of the internet.
Benefits vs Risks of AI
Benefits
Home security
“These systems utilize machine learning and facial recognition software to create a catalog of frequent visitors to your home. This allows the system to identify strangers instantly. AI-powered security is an initial step toward home automation, which offers many other useful features, such as notifying you the moment children arrive home from school or tracking the movements of pets. These systems can even notify emergency services autonomously, which makes them a great alternative to other similar subscription-based services” [25]
When people think of their own home, they want it to be the most secure place for them. AI technology used in home automation helps keep everyone one step ahead of any crime that might occur. Tracking and knowing what is happening around the home is extremely beneficial, especially given the possibility of a home invasion.
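The “catalog of frequent visitors” idea can be sketched simply: each face is reduced to a numeric embedding, and a new face is matched against stored embeddings by distance. The embeddings, names, and threshold below are made up; real systems derive embeddings with deep networks:

```python
# Toy face-catalog matcher using distance between hypothetical embeddings.
import math

known_visitors = {                       # invented stored face embeddings
    "resident": [0.12, 0.88, 0.35],
    "mail_carrier": [0.75, 0.20, 0.60],
}
THRESHOLD = 0.3                          # max distance to count as a match

def identify(embedding):
    best, dist = None, float("inf")
    for name, ref in known_visitors.items():
        d = math.dist(embedding, ref)    # Euclidean distance (Python 3.8+)
        if d < dist:
            best, dist = name, d
    return best if dist <= THRESHOLD else "stranger"

print(identify([0.11, 0.90, 0.33]))      # close to "resident"
print(identify([0.99, 0.99, 0.01]))      # no close match -> "stranger"
```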
Digital Media
“Machine learning has vast potential in the entertainment industry, and it is already used in streaming services like Netflix, Google Play, and Amazon Prime. These sites employ algorithms that act like neural networks to eradicate low-quality playback and buffering, offering you top quality from your internet service provider. Algorithms that are powered by AI also assist in media production. News stories are already being produced by AI algorithms to increase efficiency” [25].
Almost every platform we use for our day-to-day needs now has some AI implementation. The algorithms learn the kinds of movies or music we like to entertain ourselves with, and AI technology is also used to improve playback quality and other features that keep us tied in.
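A toy version of such a recommendation algorithm might compare a user's taste profile to unseen titles by cosine similarity over simple genre vectors; the catalog, titles, and profile below are invented for illustration:

```python
# Toy content-based recommender using cosine similarity over genre vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# feature vectors: [action, comedy, documentary]
catalog = {
    "Space Chase": [0.9, 0.1, 0.0],
    "Laugh Riot":  [0.1, 0.9, 0.0],
    "Deep Oceans": [0.0, 0.1, 0.9],
}
liked = [0.8, 0.2, 0.0]  # profile built from titles the user enjoyed

ranked = sorted(catalog, key=lambda t: cosine(liked, catalog[t]), reverse=True)
print("Recommended order:", ranked)  # the action title should rank first
```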
Self-driving cars
“AI technology is hastening the development of self-driving cars. In fact, according to research by Google, AI-powered cars already surpass human drivers when it comes to safety, as AI allows self-driving cars to adapt immediately to changing conditions and learn from new situations. Currently, most car manufacturers are looking to integrate AI technology in future product offerings.”
Self-driving cars are the future of the automobile. We want to believe that a car will be able to perform as well as a very safe driver. Using AI to stay ahead of the curve will bring us even closer to actual self-driving cars, whose safety mechanisms are expected to work for us so that we can reach our destinations more safely.
Ride Sharing Applications
“Ride-sharing services like Uber employ AI to determine the time needed to transport users to their desired locations. The technology lets users know details such as when their driver will arrive, when they will arrive at their destination, and how long it will take for food to be delivered. Uber also uses AI to set prices depending on what they think you are willing to pay. According to The Independent, Uber also uses AI to determine if a rider is drunk before a driver accepts a pickup. It does this by analyzing and comparing factors like walking speed and typing patterns.”
Ride-sharing applications are only getting more advanced. These applications use AI to estimate how much a ride or food delivery will cost and how long it will take to arrive. AI can even assess the state a rider is in: comprehending the difference between a sober person and a drunk person might seem out of the ordinary, but it may help an Uber driver be better prepared for situations that can arise when a customer is under the influence.
Fraud Prevention
“Banks are using AI to send mobile notifications to help detect fraud. For instance, if an unusually large transaction is posted to your account, you might receive a warning notification on your phone, or if purchases occur in a location far from your home, your account might be flagged and you may be asked to confirm the purchases. AI enables such warnings by analyzing your typical daily transactions to identify unusual patterns in your spending behavior.”
AI is here to help improve our daily lives, and banking is one of the most apparent areas for keeping our money “safe.” Since the banking industry carries a great deal of responsibility, it is natural that banks would apply stricter analysis to their customers’ accounts, which builds greater trust between a bank and its customers.
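A bare-bones sketch of the pattern the quote describes: flag a transaction if it sits far from the customer's typical spending. Here a z-score over past amounts stands in for the much richer models banks actually use, and the transaction history is invented:

```python
# Toy fraud check: flag transactions far from the customer's usual spending.
from statistics import mean, stdev

past_amounts = [24.50, 8.99, 42.00, 15.75, 31.20, 12.40, 27.80]  # hypothetical

def is_suspicious(amount, history, threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > threshold  # z-score rule of thumb

for amount in (35.00, 950.00):
    verdict = "FLAG for confirmation" if is_suspicious(amount, past_amounts) else "ok"
    print(f"${amount:>7.2f}: {verdict}")
```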
Improves Work Efficiency
“AI-powered systems are well trained to perform every task that is done by humans. Marvelous work efficiency is ensured using AI technology. AI machines rectify human errors and deliver the best business results. Moreover, AI-based machines can operate 24/7. For instance, AI Chatbot can understand and respond to client queries at any time. Thus, AI chat assistants will improve the company’s sales.”
One of the biggest perks of AI technology is that it improves work efficiency. The quality of work done by humans depends heavily on the performance of the person working, and some days differ from others. AI technology can create an easier, more efficient, and more consistent work environment. Machines that make fewer errors and understand the different needs that may arise for clients will only help businesses thrive.
Reduces Operational Costs
“Using Machine Learning (ML) algorithms such as deep learning and neural networks, systems can perform repetitive tasks efficiently. Also, ML-driven systems eliminate the need to write code every time to learn new things. Machine Learning can optimize machine abilities to learn new patterns from the information. Thus, compared to humans, AI machines can reduce operational costs. For example, AI machines can perform tasks repeatedly without any breaks or variations in results.”
Machine learning allows systems to work much more efficiently. Creating this type of environment saves businesses a great deal of money and frees up attention for fixing other issues. However, it could also pose a threat to employees.
High Accuracy in Tasks Completion
“Artificial Intelligence is used to train machines to perform work more efficiently than humans. AI systems can do critical tasks, solve complex problems, and obtain accurate results. Because of this advantage, AI is being widely adopted by the healthcare sector. Robots are detecting life-threatening diseases accurately and also performing surgeries to save human lives. Moreover, AI in healthcare has greatly influenced radiology and digital consultation applications.”
AI technology can help create a very solid work environment, and the tasks completed by AI machines can be more dependable. An AI machine can potentially work much better than a standard employee: accuracy in particular can be much greater, since repetitive tasks are learned and then performed at a greater rate. AI machines are also creating a much more competitive environment, as they are known to surpass an employee’s accuracy.
Automates and Improves Work Processes
“By the above AI benefits, it was proved that AI-powered machines automate the end-to-end work process. AI machines securely process the given data. In addition, valuable insights into data are delivered. Next, new opportunities will open up to businesses. Thus, overall business performance is increased. Many researchers agree that AI can do all tasks that are performed by humans. For instance: AI chatbots can assist customers when they visit a web portal; AI-powered machines can automate the work process across all industries; AI robots can exhibit human emotions like happiness/sadness and love/hate.”
Clearly, with all the above benefits, AI technology continues to thrive and prosper in many business-related functions. Business professionals are trying their best to get their hands on any type of AI that will successfully meet their business’s needs, and businesses will continue to build a better understanding of AI in order to implement it. Could AI potentially be taking over?
Risks
[edit | edit source]Misuse Can Lead To Severe Threats
“With the increasing adoption of AI, human concerns about artificial intelligence technology are also rising in many ways. For instance, the misuse of AI-powered autonomous weapons could cause mass destruction. This means that if autonomous weapons fall into the wrong hands, they can be turned against humans. Thus, AI could become a major threat to humans in the future.”
AI technology could fall into the wrong hands at any given moment. People have used AI to create much more efficient working environments, but the possibility of more malicious intent is very real. The power of AI technology is vast and could potentially be very dangerous.
Data Discrimination
“Super-intelligent machines can gather, store, and process vast amounts of user data. The drawback is that these machines can use your personal information without your permission. AI will give many benefits to society, but unfortunately, its advanced capabilities can also be used to perform harmful or dangerous actions.”
A much more advanced kind of AI is Artificial Superintelligence (ASI). This kind of intelligence could cause many problems, especially if it is not created in a proper way. It is upsetting that, with such advanced technology within reach, people with bad intentions could create a world of terrible events. ASI would be extraordinarily powerful; the timeline is not definite according to many sources, but it may not be too far off.
Reduces Employment
“AI is a trend in the market. Experts estimate that AI will eliminate approximately 75% of employment in the future. Most industries are already using AI machines, devices, and apps. Thus, the replacement of humans with AI machines can lead to global unemployment. Driven by the usage of AI, the human workforce has to depend on machines to a large extent. Thus, employees lose their creative power.”
One of the biggest benefits explained above was that AI technology can perform much better than a standard human: the accuracy, efficiency, and other aspects of work are delivered at a much higher level. This can cause an increase in job loss and create many more difficulties. Competition is already very tough, and adding a near-perfect machine to do a job will only increase unemployment rates.
Making Humans Lazy
“Artificial Intelligence is obviously making humans lazy with its apps automating most tasks. There are chances for humans to get addicted to these advanced technologies, which may create a problem for future generations. AI machines can’t think out of the box. They can only do what they are programmed to do. Though they can do tasks easier and faster, AI machines cannot touch the power of human intelligence. AI devices can store vast information and draw patterns, but cannot map the data to the requirements without minimal human intervention.”
One of the biggest challenges facing the advancement of AI is that it may create a world of much greater laziness. People are getting too comfortable letting machines do what they want for them, and this laziness makes creativity less important: instead of getting better at a particular task, it is much easier to have an intelligent machine do the work. This makes laziness more apparent and out-of-the-box thinking rarer. It is very important for people to educate themselves about AI machines before fully adapting to that kind of lazy lifestyle.
Cost of AI Platform is high
“The development of AI systems requires immense costs, as they are complex machines. The maintenance of AI machines also requires enormous costs, and the built-in software programs in AI machines have to be upgraded to provide the best results. If any severe breakdown occurs in a system, the code-recovery process may cost both time and money.”
AI is very expensive, but that is not stopping corporations from adopting these machines for their business needs. As technology constantly evolves, the only way is up: AI technology will get less expensive over the years as implementation spreads. As with any kind of technology, this may be one that forever changes the way businesses function.
References
- ↑ a b c d Benko, A., & Lányi, C. S. (2009). History of artificial intelligence. In Encyclopedia of Information Science and Technology, Second Edition (pp. 1759-1762). IGI Global.
- ↑ Flasiński, M. (2016). History of artificial intelligence. In Introduction to Artificial Intelligence (pp. 3-13). Springer, Cham.
- ↑ a b c Maity, S. (2019, September 20). Identifying opportunities for artificial intelligence in the evolution of training and development practices. Retrieved April 18, 2021, from https://www.emerald.com/insight/content/doi/10.1108/JMD-03-2019-0069/full/html
- ↑ Hassani, H., Silva, E., Unger, S., TajMazinani, M., & Mac Feely, S. (2020, April 12). Artificial intelligence (AI) or intelligence Augmentation (IA): What is the future? Retrieved April 18, 2021, from https://www.mdpi.com/2673-2688/1/2/8/html
- ↑ a b c Gurkaynak, G., Yilmaz, I., & Haksever, G. (2016). Stifling artificial intelligence: Human perils. Computer Law & Security Review, 32(5), 749-758. doi:10.1016/j.clsr.2016.05.003
- ↑ Bostrom, N. (1998, January 01). Nick Bostrom, how long BEFORE Superintelligence? - PhilPapers. Retrieved April 18, 2021, from https://philpapers.org/rec/BOSHLB
- ↑ Larose, D. T. (2004). Discovering knowledge in data. doi:10.1002/0471687545
- ↑ Qualman, E. (2009). Socialnomics. Knopf Books for Young Readers, New York.
- ↑ Gloor, P., Krauss, J., Nann, S., Fischbach, K., & Schoder, D. (2009). Web science 2.0: Identifying trends through semantic social network analysis. Volume 4, pages 215-222, August 2009.
- ↑ Metz, C. (2019, November 11). We Teach A.I. Systems Everything, Including Our Biases. Retrieved April 18, 2021, from https://people.eou.edu/soctech/files/2020/12/BERTNYT.pdf
- ↑ a b c Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529
- ↑ Artificial intelligence driving autonomous vehicle development. (2020, January 30). Retrieved from IHS Markit: https://ihsmarkit.com/research-analysis/artificial-intelligence-driving-autonomous-vehicle-development.html
- ↑ Lynberg, M. (n.d.). Automated Vehicles for Safety. Retrieved from National Highway Traffic Safety Administration: https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety.
- ↑ Bogost, I. (2018, March 20). TECHNOLOGY Can You Sue a Robocar? Retrieved from The Atlantic: https://www.theatlantic.com/technology/archive/2018/03/can-you-sue-a-robocar/556007/.
- ↑ Pietsch, B. (2021, April 18). 2 Killed in Driverless Tesla Car Crash, Officials Say. Retrieved from The New York Times: https://www.nytimes.com/2021/04/18/business/tesla-fatal-crash-texas.html .
- ↑ US military adopts 'ethical' AI guidelines https://www.dw.com/en/us-military-adopts-ethical-ai-guidelines/a-52517260
- ↑ a b Armed robots, autonomous weapons and ethical issues http://centredelas.org/actualitat/armed-robots-autonomous-weapons-and-ethical-issues/?lang=en.
- ↑ Vanian, J. (2016, March 25). Federal Government Believes Drone Sales Will Soar By 2020. Retrieved May 02, 2017, from http://fortune.com/2016/03/25/federal-governmen-drone-sales-soar/
- ↑ a b Researchers Working on AI for Exoskeletons. (2021, March 22) Retrieved from Paint Square: https://www.paintsquare.com/news/?fuseaction=view&id=23530
- ↑ Blake Leeper loses appeal to use prosthetic legs in Olympics bid. (2020, October 26) Associated Press Retrieved from ESPN https://www.espn.com/olympics/story/_/id/30196806/runner-blake-leeper-loses-appeal-use-prosthetic-legs-olympics-qualifying-bid
- ↑ a b Waltz, E. (2020, August 28) Elon Musk Announces Neuralink Advance Toward Syncing Our Brains With AI. Retrieved from IEEE Spectrum: https://spectrum.ieee.org/the-human-os/biomedical/devices/elon-musk-neuralink-advance-brains-ai
- ↑ Manley, J. (2020, November 26). The Ethics of Rebooting the Dead. Retrieved from Wired: https://www.wired.com/story/ethics-reviving-dead-with-tech./
- ↑ Ivanschitz, R., & Korn, D. (2017). Digital Transformation and Jobs: Building a Cloud for Everyone. The University of Miami Inter-American Law Review, 49(1), 41-50. Retrieved April 30, 2021, from https://www.jstor.org/stable/26788342
- ↑ a b c Etzioni, A., & Etzioni, O. (2017). Should Artificial Intelligence Be Regulated? Issues in Science and Technology, 33(4), 32-36. Retrieved April 30, 2021, from http://www.jstor.org/stable/44577330
- ↑ a b Kevin Gardner (n.d.). Six Ways AI Improves Daily Life. Digitalist Magazine. https://www.digitalistmag.com/improving-lives/2019/05/28/6-ways-ai-improves-daily-life-06198539/.