Professionalism/Knightscope and Autonomous Data Machines
Introduction
Founded in 2013, Knightscope is a privately held Silicon Valley startup in the security industry. It targets both the private and public sectors, with its primary product being various models of Autonomous Data Machines (ADMs).
Product
There are several models of ADM, including stationary, mobile (indoor and outdoor), and an all-terrain version under development.[1]
These ADMs can be rented for $7/hour and are deployed in 15 states. Customers must rent ADMs 24 hours a day for the duration of their contract.[2]
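The pricing makes the economics concrete. A back-of-the-envelope sketch, using the $7-an-hour, 24-hours-a-day figures above and assuming the $7.25 US federal minimum wage as a comparison point (not a figure from Knightscope):

```python
# Rough monthly cost comparison: one ADM vs. one guard post staffed 24/7.
# The $7.25 federal minimum wage is an assumed comparison baseline.
ADM_RATE = 7.00   # dollars per hour, billed around the clock
MIN_WAGE = 7.25   # US federal minimum wage, dollars per hour

hours_per_month = 24 * 30
adm_monthly = ADM_RATE * hours_per_month    # one ADM for a 30-day month
guard_monthly = MIN_WAGE * hours_per_month  # minimum-wage labor for the same hours

print(f"ADM:   ${adm_monthly:,.2f}/month")    # ADM:   $5,040.00/month
print(f"Guard: ${guard_monthly:,.2f}/month")  # Guard: $5,220.00/month
```

Even against the lowest legal wage, the ADM undercuts round-the-clock human staffing, which is the pricing tension discussed under "Appeals to a Sense of Control" below.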
ADMs have various features, including eye-level 360-degree video streaming and recording, people detection, license plate recognition, thermal anomaly detection, signal detection, live audio broadcast, a two-way intercom, and pre-recorded messages. Customers can monitor live feeds from their ADMs through the Knightscope Security Operations Center, a monitoring application provided with every ADM.[3]
Importantly, ADMs are "not armed and never will be," according to CEO William Santana Li. He says they "provide both a physical deterrence and generate 90+ terabytes of data per machine per year."[4] According to Santana Li, the data they generate is meant to help security personnel do their jobs, not replace them.[5]
Knightscope is currently developing "visible and concealed weapon detection" to identify suspicious bulges as weapons.[6]
Knightscope Rhetoric
Knightscope's executives acknowledge that their success will "depend largely on [their] customers' acceptance" of their robots.[7] Evaluating their rhetoric with this in mind lends insight into its authenticity.
Appeals to Patriotism
Knightscope's stated mission is to "make the United States of America the safest country in the world, changing everything for everyone."[8] CEO William Santana Li claims the events of 9/11 inspired him, a New Yorker, to innovate in the security space.[9] He says he grew up "believing in [New York City's] skyline" representing "the American dream," and that on 9/11 "freedom was attacked."[10] He believes citizens have a "fundamental right to be safe from crime, safe from violence, and safe from terrorism."[11]
These appeals to patriotism and national tragedy appear elsewhere as well. Santana Li has said that Knightscope was founded "after what happened at Sandy Hook" because our country is "never going to have an armed officer in every school."[12] Each default Knightscope model also bears an American flag, and one of the company's taglines is "Join Us and Be A Force for Good."[13]
Appeals to Monetary Interest
The Knightscope website states that "crime has a $1+ trillion Negative Economic Impact on the US Annually."[14] Santana Li says that if their machines could "cut the trillion-dollar problem in half," they would "change everything."[15] He speaks of these machines cutting theft from major corporations, thus "lowering prices for everyone,"[16] and says they could "increase the value of your home because you now live in a safer neighborhood."[17] One of Knightscope's website testimonials highlights "more than $125,000" saved in costs associated with "traditional security services."[18]
Appeals to a Sense of Control
Appeals to patriotism or monetary interest aside, people might still have concerns about security personnel unemployment, AI bias, or failures of the technology where humans would succeed. Knightscope addresses these concerns by appealing to a sense of human control.
One website testimonial highlights that "robots... add" to existing security forces, and "dispatchers are able to see what the robots are seeing."[19] This testimonial portrays the robots as tools rather than decision makers.
CEO Santana Li calls Knightscope's approach "decidedly 'Software + Hardware + Humans'," with his goal being "machines [doing] monotonous and computationally heavy work and... humans [doing] the strategic and decision making work."[20] Santana Li's rhetoric places humans in the driver's seat with robots only doing boring or mindless work and humans calling the shots. His rhetoric distracts from the 'autonomous' in 'autonomous data machine.'
Despite the fact that these machines are deliberately priced below minimum wage and have led to reductions in security forces,[21] Santana Li is "infuriated" that the more than 2 million "law enforcement and private security professionals who... are willing to take a bullet for you and your family" are provided with technology that is "beneath the dignity of this nation."[22] Knightscope paints its machines as tools to aid the noble security professional, despite the many cases where ADMs replace those professionals.
Privacy
Homeless Groups
The robots constantly collect data as they survey their surroundings, so it is no surprise that they have angered homeless people who live in public areas. As Jennifer Friedenbach, Executive Director of San Francisco's Coalition on Homelessness, said, "when you're living outdoors, the lack of privacy is really dehumanizing after a while."[23] The homeless are already subject to passersby during the day, and the addition of autonomous data machines leaves them with no semblance of privacy. This response stems largely from the San Francisco SPCA's use of an ADM in late 2017.[24]
Expectation of Privacy
The machines do not just affect the homeless; they affect anyone who uses public space. Anything that occurs in the presence of one of these robots can be recorded. While some would call this loss of privacy unreasonable, Santana Li says the expectation itself is unfounded: "Privacy in a public area is a little bit odd. You have no expectation of privacy… where all these machines are operating."[25] There is some truth to this, as most people carry a camera in their pocket in the form of a phone. However, the machines do more than record video: they can also detect wireless signals, sense thermal anomalies, and perform other functions that phones cannot.[26] While Knightscope does provide extra security, it removes what little privacy the public might still expect in public spaces, especially for the homeless. Whether the added security is worth that loss will be decided by business owners and regulators.
Data Privacy
Each Knightscope machine generates 90+ terabytes of data per year.[27] Knightscope says it will only use the data for security purposes, but recent privacy scandals have shown that companies do not always keep their word.[28] Moreover, while Knightscope makes some statements about what it will not do with this data, it does not state what it will do. Knightscope has stated: "Eventually, we will be able to predict and prevent crime."[29] It is therefore likely the data is being used toward that goal.
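For a sense of scale, the 90-terabyte-per-year figure can be converted into daily and per-second rates. A rough sketch, assuming decimal units (1 TB = 1000 GB) and a 365-day year:

```python
# Rough scale of one ADM's data output, from the 90 TB/year figure above.
# Decimal units assumed: 1 TB = 1000 GB, 1 GB = 1000 MB.
TB_PER_YEAR = 90

gb_per_day = TB_PER_YEAR * 1000 / 365             # ~247 GB per machine per day
mb_per_second = gb_per_day * 1000 / (24 * 3600)   # ~2.85 MB per second, continuously

print(f"{gb_per_day:.0f} GB/day, {mb_per_second:.2f} MB/s")  # 247 GB/day, 2.85 MB/s
```

A sustained stream of nearly 3 MB every second, per machine, is far more than any human operator can meaningfully review, which is the problem taken up in "The Human Aspect" below.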
Risk of Bias
[edit | edit source]Ubiquity and Fairness
Knightscope's goal is to make everyone feel safer. However, the technology it uses can run contrary to that goal. One feature of the ADM is People Detection.[30] But facial and human detection are inherently biased, as the technology has not developed enough to guarantee fairness.[31] Someone who knows they are more likely to be misidentified or labeled as "high-risk" cannot feel safer.
Sense of Security
A future feature that Knightscope would like to add to the ADM is a detection system for suspicious bulges. There are two scenarios in which such a system fails to make people safer. Person 1, who legally conceals a legally acquired weapon, is flagged as high risk while posing no danger to the area. Person 2, whose clothing or body is simply bulky or oddly shaped, is likewise flagged as high risk. It is easy to assume the system would primarily serve its goal of catching criminals, but its success depends on human judgment superseding the ADM's judgment in cases of false alarms.
The Human Aspect
Preventing the displacement of human judgment by machine judgment is difficult. Each ADM generates roughly 250 GB of data per day, and it is unlikely that a human will pay attention to anything that is not "interesting." Examples where this led to serious consequences are plentiful. Uber entrusted a human driver to "check" on its autonomous vehicle, leading to the death of Elaine Herzberg.[32] Police officers take the results of facial recognition, pardon the pun, at face value, and people are wrongly arrested or worse because of it.[33] It is not ethical to offload responsibility onto the user while knowing how people actually interact with intelligent systems.
Public Perception
[edit | edit source]Opposition
The rise or fall of autonomous data machines depends on public perception. Santana Li himself believes that fears of the machines taking human jobs will generate public resistance.[34] Their improved efficiency and cost effectiveness will need to outweigh this cost. The machines have already faced physical attacks. In one incident, a drunk man began punching a K5.[35] When asked why, he said that it "looked at him funny."[36] In another, homeless people covered a K5 with a tarp and smeared it with barbecue sauce.[37] For the homeless, public space is their home, and the constant intrusion of a robot is an understandable nuisance. While the homeless are directly affected by the K5s, other groups fear the development of robots themselves. The comedian Aristotle Georgeson has noted that some of his most popular posts attacked and made fun of robots; some of his fans say people should keep doing so "so robots can never rise up."[38] They fear the development of an artificial intelligence that could control humanity, as much science fiction has portrayed. As Wykowska puts it, humans dislike robots because they are not human and are not part of our "group."[39]
Support
However, many people have begun humanizing the Knightscope robots. When a Knightscope robot fell into a fountain, one Twitter user joked that the K5 must have been stressed out from work.[40] Santana Li has also stated that most clients end up naming the machines.[41] While this may not indicate acceptance of the Knightscope robots as "human," it may indicate that clients are beginning to consider them part of our "group." When students were asked how they felt about the Knightscope robots, most said the robots should not be attacked because they are property.[42] They do not deserve rights, but they should not be harmed.[43] As these different views clash, the prevailing view will determine the success of the Knightscope agenda.
Parallels with Other Controversies
In 2012, the Los Angeles Police Department began using the surveillance device known as the "Stingray." Stingrays are capable of intercepting significant amounts of cell phone traffic by mimicking a cell tower, with the goal of discovering criminal information by parsing this data. Within one four-month period, the device was used in twenty-one investigations covering both violent and non-violent crime.[44] These uses are significantly broader than the use cases cited as necessitating the Stingray, such as investigation of terrorism and violent crime. Many organizations have called out the use of the Stingray as a breach of privacy. The FBI has used similar devices since 1995, but the issue has become more visible as local authorities adopt them.[45] The device is now in use in Baltimore, Tallahassee, and Milwaukee.[46]
While the Stingray is used by law enforcement agencies, Knightscope ADMs are used by private organizations. Additionally, ADMs openly patrol the spaces they monitor, while Stingrays are deployed quietly. The most alarming parallel is the handling of data: the Stingray casts a wide net and captures more data from innocent civilians than from criminals. It is unclear what is done with this data after collection, and some police departments actively conceal their use of the device.[47]
The Stingray thus parallels the ADM in its questionable breach of privacy in public spaces. The Stingray covertly invades the privacy of cellular traffic while the ADM patrols visibly, but neither changes the fact that "public" spaces are rapidly becoming places of unknown observation.
References
- ↑ https://www.knightscope.com
- ↑ https://www.sec.gov/Archives/edgar/data/1600983/000114420416141283/v455625_253g2.htm
- ↑ https://www.knightscope.com
- ↑ https://www.knightscope.com/invest
- ↑ https://medium.com/@WSantanaLi/what-if-a-robot-could-save-your-life-88632a76cb3f
- ↑ https://www.sec.gov/Archives/edgar/data/1600983/000114420416141283/v455625_253g2.htm
- ↑ https://www.sec.gov/Archives/edgar/data/1600983/000114420416141283/v455625_253g2.htm
- ↑ https://www.knightscope.com
- ↑ https://medium.com/@WSantanaLi/what-if-a-robot-could-save-your-life-88632a76cb3f
- ↑ https://medium.com/@WSantanaLi/what-if-a-robot-could-save-your-life-88632a76cb3f
- ↑ https://medium.com/@WSantanaLi/what-if-a-robot-could-save-your-life-88632a76cb3f
- ↑ https://www.mysecuritysign.com/blog/knightscope-k5-robot-replace-security-guards/
- ↑ https://www.knightscope.com/
- ↑ https://www.knightscope.com
- ↑ https://medium.com/@WSantanaLi/what-if-a-robot-could-save-your-life-88632a76cb3f
- ↑ https://medium.com/@WSantanaLi/what-if-a-robot-could-save-your-life-88632a76cb3f
- ↑ https://medium.com/@WSantanaLi/what-if-a-robot-could-save-your-life-88632a76cb3f
- ↑ https://www.knightscope.com
- ↑ https://www.knightscope.com
- ↑ https://medium.com/@WSantanaLi/what-if-a-robot-could-save-your-life-88632a76cb3f
- ↑ https://www.knightscope.com/
- ↑ https://medium.com/@WSantanaLi/what-if-a-robot-could-save-your-life-88632a76cb3f
- ↑ https://www.wired.com/story/the-tricky-ethics-of-knightscopes-crime-fighting-robots/
- ↑ https://www.theverge.com/2017/12/13/16771148/robot-security-guard-scares-homeless-san-francisco
- ↑ https://www.wired.com/story/the-tricky-ethics-of-knightscopes-crime-fighting-robots/
- ↑ https://www.knightscope.com/
- ↑ https://www.knightscope.com/invest
- ↑ https://www.nytimes.com/2018/09/28/technology/facebook-hack-data-breach.html
- ↑ https://www.youtube.com/watch?v=89ok1fla1ks&feature=youtu.be
- ↑ https://www.knightscope.com/knightscope-k5
- ↑ https://www.wired.com/story/amazon-facial-recognition-congress-bias-law-enforcement/
- ↑ w:Death of Elaine Herzberg
- ↑ https://www.wired.com/story/amazon-facial-recognition-congress-bias-law-enforcement/
- ↑ https://www.sec.gov/Archives/edgar/data/1600983/000114420416141283/v455625_253g2.htm
- ↑ https://abc7news.com/technology/police-say-drunk-man-knocked-down-robot-in-mountain-view/1915713/
- ↑ https://abc7news.com/technology/police-say-drunk-man-knocked-down-robot-in-mountain-view/1915713/
- ↑ https://daily.jstor.org/do-security-robots-signal-the-death-of-public-space/
- ↑ https://www.nytimes.com/2019/01/19/style/why-do-people-hurt-robots.html
- ↑ https://www.nytimes.com/2019/01/19/style/why-do-people-hurt-robots.html
- ↑ https://twitter.com/SparkleOps/status/887038957262786560/photo/1
- ↑ https://www.nytimes.com/2019/01/19/style/why-do-people-hurt-robots.html
- ↑ https://www.nytimes.com/2019/01/31/learning/what-students-are-saying-about-how-to-treat-robots-being-resilient-and-ghosting.html
- ↑ https://www.nytimes.com/2019/01/31/learning/what-students-are-saying-about-how-to-treat-robots-being-resilient-and-ghosting.html
- ↑ https://www.eff.org/deeplinks/2013/02/secretive-stingray-surveillance-tool-becomes-more-pervasive-questions-over-its
- ↑ https://epic.org/foia/fbi/stingray/
- ↑ https://www.citylab.com/equity/2016/10/racial-disparities-in-police-stingray-surveillance-mapped/502715/
- ↑ https://www.citylab.com/equity/2016/10/racial-disparities-in-police-stingray-surveillance-mapped/502715/