Information Technology and Ethics/Surveillance Capitalism

Introduction

In today’s digital age, individuals generate vast amounts of data every day that can be tracked, recorded, and analyzed. Actions like clicking a link, browsing the internet, or liking a post on social media can paint a surprisingly detailed portrait of someone’s digital life without their knowledge. Not all of this data is collected to improve services: companies can sell and trade it, and use it to predict and influence our future behavior. This is what is known as surveillance capitalism.

The use of behavioral data to increase profit raises serious ethical and legal concerns. These include violations of privacy, inadequate consent mechanisms, and manipulation of behavior. In response, regulatory efforts such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States aim to increase transparency, empower users, and introduce accountability into data-driven systems. As issues regarding personal data continue to grow, it becomes more important to prioritize fairness and accountability.

Predictive Analytics and Behavior Targeting

In the context of surveillance capitalism, a term coined by Shoshana Zuboff, predictive analytics and behavioral targeting algorithms allow companies to extract value from user data in a sophisticated manner. These technologies are used not only to understand consumer behavior (and how best to influence it), but also to turn people and their experiences into products.

Predictive Analytics: Turning Behavior into Forecasts

Predictive analytics applies machine learning and statistical modeling to historical data in order to forecast future behavior. In surveillance capitalism, user data collected via search histories, location tracking, social media interactions, and even biometric feedback is transformed into a profile of an individual’s behavior and identity. Companies that practice surveillance capitalism effectively collect and analyze this data to predict what a user is going to do, want, or purchase next. They sell those predictions as "behavioral futures" to advertisers, insurance companies, and other third parties looking to shape or profit from the user's future actions[1].

For example, both Google and Facebook analyze user behavior across a vast digital ecosystem to improve their ad-targeting models. These models learn to predict the very moment a user is most likely to click, engage, or make a purchase, and the companies use those predictions to maximize the effectiveness of the ads they serve.
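
To make this mechanism concrete, the sketch below trains a toy click-prediction model on simulated behavioral logs. It is a minimal Python illustration using scikit-learn, not a description of any company's actual system: the features and data are invented, but the core step of turning behavioral records into a probability of a future action is the same.

    # A toy illustration of click prediction, not any real company's model.
    # The features (minutes on site, pages viewed, hour of day) and the
    # training data are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Simulated behavioral logs: [minutes on site, pages viewed, hour of day]
    X = rng.uniform([0, 1, 0], [60, 50, 24], size=(1000, 3))

    # Simulated ground truth: longer, deeper sessions click more often.
    p_click = 1 / (1 + np.exp(-(0.05 * X[:, 0] + 0.02 * X[:, 1] - 2.0)))
    y = rng.random(1000) < p_click

    model = LogisticRegression().fit(X, y)

    # The "behavioral future" sold to advertisers: this user's predicted
    # probability of clicking during their next session.
    new_session = [[45.0, 30.0, 23.0]]
    print(model.predict_proba(new_session)[0, 1])

In a production pipeline, a predicted probability of this kind would feed directly into ad selection and bidding, which is where the "behavioral futures" described above acquire market value.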

Algorithmic Behavior Targeting: Influencing Actions in Real-Time

Apart from prediction, companies use algorithmic behavioral targeting to alter users' behavior in real time. Algorithms continuously scrutinize users' digital footprints across different online spaces and use that data to re-rank and re-craft the content each user sees. In effect, the algorithm rearranges content in response to the user's every action, aiming to shape behavior toward the company's commercial goals, such as maximizing time on screen, encouraging engagement, or increasing conversion rates. This behavioral modification is the feedback-loop mechanism at the heart of surveillance capitalism.

The best-known example is Facebook's ad platform, where algorithmic targeting steers users toward provocative content that keeps them engaged, giving the platform more opportunities to sell its advertisers algorithm-targeted impressions and clicks.[2]
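
The feedback loop described above can be sketched in a few lines of code. The following is deliberately schematic, with hypothetical item types and a crude scoring rule; real ranking systems are vastly more complex, but the loop itself (rank by engagement, observe behavior, update scores, rank again) is the structural point.

    # A schematic engagement feedback loop, not any platform's real ranker.
    # The item types and the scoring rule are hypothetical placeholders.
    from collections import defaultdict

    scores = defaultdict(lambda: 1.0)   # current ranking score per item type
    items = ["news", "outrage_post", "friend_photo", "ad"]

    def rank_feed(items, scores):
        """Order content by score: this is the feed the user would see."""
        return sorted(items, key=lambda item: scores[item], reverse=True)

    def observe(item, engaged, scores, step=0.5):
        """Update scores from observed behavior. Engagement is rewarded, so
        engaging content is shown more, which invites more engagement:
        the feedback loop described above."""
        scores[item] += step if engaged else -step

    print(rank_feed(items, scores))   # initial feed, all scores equal
    observe("outrage_post", engaged=True, scores=scores)
    observe("news", engaged=False, scores=scores)
    print(rank_feed(items, scores))   # provocative content now ranks first

Because the update only ever rewards engagement, whatever content provokes a reaction is amplified, regardless of whether the reaction is good for the user.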

Monetizing Behavioral Data

Surveillance capitalism is the process by which human behavior is turned into a commodity. Google and Meta, for instance, collect a behavioral surplus: information that is not needed to provide a service but has value for predictive purposes. That surplus, gathered in the course of providing the service, is then fed into algorithms and targeted advertising[1]. This predictive data becomes the commerce of a behavioral futures market, in which behaviors are predicted and profited from.

Behavioral data is monetized in multiple ways:

  • Ad Auctions: Real-time bidding marketplaces for ads managed by organizations like Google and Meta, in which advertisers pay for the chance to run targeted ads based on predictive data (a toy auction is sketched after this list).
  • Data Brokerage: User behavior data may be shared and/or sold to a data broker, which aggregates and resells it to various entities, such as the marketing departments of political campaigns and of financial or manufacturing companies.[3]
  • Product Development: Behavioral data may also be used to develop or enhance services, further cementing users into data-producing ecosystems.[4]
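
To make the auction mechanism concrete, the following is a minimal Python sketch of a sealed-bid second-price auction in which each advertiser's bid is weighted by a predicted click probability derived from behavioral data. This illustrates the general mechanism, not any exchange's actual implementation.

    # A minimal sketch of a sealed-bid second-price ad auction. Real
    # exchanges run far more elaborate auctions (several have moved to
    # first-price rules); all names and numbers here are hypothetical.
    def run_ad_auction(bids):
        """bids maps advertiser -> effective bid. Returns the winner and
        the price paid, which is the runner-up's effective bid."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
        return winner, price

    # Effective bid = advertiser's value per click x the predicted click
    # probability for this user, derived from behavioral data.
    bids = {
        "shoe_brand":    2.50 * 0.08,   # $2.50 per click x 8% predicted CTR
        "car_dealer":    6.00 * 0.02,   # high value, but an unlikely click
        "streaming_app": 1.20 * 0.15,
    }
    winner, price = run_ad_auction(bids)
    print(winner, round(price, 3))      # shoe_brand wins and pays 0.18

The multiplication is the key point for surveillance capitalism: the better the behavioral prediction, the more each impression is worth, so richer data translates directly into higher revenue.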

Often, these processes take place with little transparency and little in the way of meaningful consent from the user. As Pasquale observes, they operate as opaque systems ("black boxes") that hide how data is collected and sold. These circumstances raise ethical questions about manipulation, autonomy, and the future of privacy in democratic societies.[5]

Criticisms and Implications

Critics claim this commodification of behavior takes away individual choice and reduces human experience to a number. Zuboff cautions that surveillance capitalism “unilaterally claims human experience as free raw material” for commercial purposes, creating an informational imbalance, and therefore a power imbalance, between the corporation and the individual.

Relatedly, predictive and behavioral targeting systems have been connected to algorithmic bias, filter bubbles, and the manipulation of public opinion, as in the Cambridge Analytica scandal.

Ethical Concerns

Legal and Regulatory Responses

As surveillance capitalism becomes more common, lawmakers around the world have started to take action to protect people’s personal data and online privacy. This is especially important because many companies collect huge amounts of information about users and use it for their own benefit, making money from ads and behavioral tracking, often without people even knowing what data is being collected. In response, several major laws and regulations have been created to give people the opportunity to decline or revise the information being collected, rather than leaving them powerless when inaccurate information spreads.

What is the GDPR? The General Data Protection Regulation (GDPR) is a law made by the European Union to protect people’s personal information, like names, emails, or even things like where you live or what websites you visit. A person must give consent before their data is collected.[6] The GDPR also clearly states other rights: for example, anyone can ask what information an organization holds about them and request a correction if something is incorrect.[6] Moreover, even a company that is not based in Europe must follow the GDPR if it collects information about people who live there.[6] There is no equivalent federal law in the United States, but California has created something similar: the California Consumer Privacy Act (CCPA).

The California Consumer Privacy Act (CCPA) is a law that helps protect people’s personal information. It gives California residents the right to know what kind of personal data a business collects about them, how it’s used, and who it’s shared with. People can also ask companies to delete their personal information and can say no if they don’t want their data to be sold. If someone chooses to exercise these rights, the business isn’t allowed to treat them unfairly or differently because of it.[7] Protections like these are needed across the country, and other states should implement similar laws to protect citizens’ information and rights.

In 2020, voters approved an update to the law called the California Privacy Rights Act (CPRA). This update added even more protections starting in 2023. Now, people also have the right to ask companies to fix any wrong information held about them and to limit how much sensitive information is used or shared.[7] This expands people’s control over how their information is released: they can make sure certain information stays concealed while other information is corrected before it is shared.

While these laws are major steps forward in protecting the people, there are still some problems. Many states in the U.S. don’t have privacy laws at all, and some companies find ways to follow the rules while still collecting lots of data. Because of this, many experts believe we need a national privacy law in the U.S. that would protect everyone equally and make the rules clearer. There’s also a growing push for new rules to make sure artificial intelligence and algorithms are used fairly, since these technologies often rely on personal data too.[8]

Societal Impacts

Surveillance capitalism has completely shifted the relationship between users and digital platforms by turning every human action into predictive data and leveraging it for profit[1]. This shift is not simply technological; it is societal in scale and consequence. The widespread and largely invisible monitoring and collection of user data has changed how users interact with digital spheres, with implications for autonomy, identity, democracy, and public trust[9].

Perhaps the most troubling societal impact of surveillance capitalism is the erosion of autonomy. Platforms monetize user behavior by steering users toward certain actions (e.g., buying products) on the basis of behavioral data. This is accomplished quietly through advertisements, recommendations, and algorithmic manipulation of the information to which users are privy. Such influence extends beyond consumer decisions to opinions, beliefs, and even voting habits, raising ethical concerns about free will and consent[10].

In addition, the normalization of mass data surveillance promotes a society of self-censorship. The more users are aware that they are being watched, even informally, the more their digital expression can be limited, ultimately curtailing individual creativity, dissent, and free, open discourse, especially among minority communities[10]. This chilling effect flies in the face of the essential characteristics that support a free and open society.

Surveillance capitalism also deepens social inequalities. Wealthy tech companies disproportionately profit from data produced by low-income users, who are more likely to rely on 'free' digital services. At the same time, those users have little transparency into, or power over, the means by which their data is used and monetized. These disparities in power contribute to what some scholars call "data colonialism," in which data is treated as an extractive resource owned by a few[11].

Finally, trust in public and private institutions is declining as surveillance practices go unexamined. High-profile data breaches, algorithmic bias, and the absence of enforceable regulation have created a culture of mistrust around technology use. Decreasing trust not only damages the user-platform relationship; it also has implications for the health of democratic governance, which depends on the informed consent of its citizens.

What the Future Holds/Conclusion

What the Future Holds

The trajectory of surveillance capitalism points toward digital ecosystems woven ever deeper into the fabric of everyday life. Emerging technologies such as ambient computing, emotion-aware AI, and biometric surveillance will incorporate data extraction more seamlessly, enabling companies to monitor not only what people do but also their emotional and physical states. This will enable ever more precise prediction and shaping of behavior, raising issues of consent, agency, and autonomy at a basic level.[12]

As Shoshana Zuboff warns in The Age of Surveillance Capitalism, this continuous slide is toward a “new economic order” that markets human experience, reorganizing it as feedstock for prediction industries. Her warnings include the systemic threat surveillance capitalism poses to the pillars of a democratic society. Unless halted, these technologies can normalize a climate of invisible control in which prediction algorithms shape personal conduct without notice or actual consent.[13]

But backlash is forthcoming: one can expect mounting public clamor and legislative pushback. Regulatory frameworks such as the GDPR and California's CCPA articulate a new international norm of transparency, data minimization, and user empowerment. Data justice, ethical AI, and digital sovereignty movements are gaining a foothold and demanding new social contracts for the digital era. Calls for a federal privacy law in the US and for ethics-based auditing of algorithmic systems suggest that regulatory and civil countercurrents are already confronting the unfettered dominance of the tech behemoths.

Moreover, technical countermeasures are developing in parallel. Privacy-enhancing technologies, decentralized platforms, encrypted communication, and user-owned data more generally are remodeling the way humans interact with digital services. Such alternatives point to a critical shift away from extractive digital economies and toward participatory, consent-based ones.

Ultimately, the future of surveillance capitalism rests on the capacity of democratic institutions, ethical designers, and informed publics to come together and push technology toward transparency and fairness. The next ten years will decide whether digital realms enrich the few at the cost of the many or become systems that respect human dignity, autonomy, and rights.

Conclusion

Surveillance capitalism is a fundamental change in how power is exercised in the digital age: not only through ownership of material resources, but through ownership of our behavioral data. While companies have made tens of billions of dollars by monetizing human experience, the moral, legal, and social costs have inspired a worldwide crisis of conscience. The opacity of data practices, the dilution of privacy, the manipulation of behavior, and the amplification of social inequality all indicate that we need systemic reforms.

As we move deeper into the digital age, our laws must evolve with it, safeguarding justice, autonomy, and democratic participation. What is troubling is that surveillance capitalism subverts the very principles of informed consent and digital freedom on which any digital future must be built; yet it also presents an opportunity for society to imagine what kind of future it wants to live in. The way forward is more than regulation; it is a collective re-envisioning of technology that places human dignity before profit. A future where ethical design, transparency, and accountability are the norm is not just possible, it is imperative. Without intentional action, we risk locking in systems of digital control in which access comes at the cost of freedom.

References

  1. a b c Hongladarom, Soraj (2023-12-01). "Shoshana Zuboff, The age of surveillance capitalism: the fight for a human future at the new frontier of power". AI & SOCIETY. 38 (6): 2359–2361. doi:10.1007/s00146-020-01100-0. ISSN 0951-5666.
  2. Tufekci, Zeynep (2015-01-01). "Algorithmic Harms Beyond Facebook and Google: Emergent Challenges of Computational Agency". Colorado Technology Law Journal. 13 (2): 203. ISSN 2374-9032.
  3. Andrejevic, Mark B (2010-09-09). "Surveillance and Alienation in the Online Economy". Surveillance & Society. 8 (3): 278–287. doi:10.24908/ss.v8i3.4164. ISSN 1477-7487.
  4. Laniuk, Yevhen (2021-06-04). "Freedom in the Age of surveillance capitalism: Lessons from Shoshana Zuboff". Ethics & Bioethics. 11 (1–2): 67–81. doi:10.2478/ebce-2021-0004.
  5. Pasquale, Frank (2016). The Black Box Society: The Secret Algorithms That Control Money and Information (First Harvard University Press paperback ed.). Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-97084-7.
  6. a b c "What is GDPR, the EU's new data protection law?". GDPR.eu. 2018-11-07. Retrieved 2025-04-21.
  7. a b "California Consumer Privacy Act (CCPA)". State of California - Department of Justice - Office of the Attorney General. 2018-10-15. Retrieved 2025-04-21.
  8. "Federal privacy legislation should protect civil rights". Brookings. Retrieved 2025-04-21.
  9. Couldry, Nick; Mejias, Ulises A. (2019-05-01). "Data Colonialism: Rethinking Big Data's Relation to the Contemporary Subject". Television & New Media. 20 (4): 336–349. doi:10.1177/1527476418796632. ISSN 1527-4764.
  10. a b Cohen, Julie E. (2019-10-14). Between Truth and Power. New York: Oxford University Press. ISBN 0-19-024669-3.
  11. West, Sarah Myers (2019-01-01). "Data Capitalism: Redefining the Logics of Surveillance and Privacy". Business & Society. 58 (1): 20–41. doi:10.1177/0007650317718185. ISSN 0007-6503.
  12. Makanadar, Ashish (2024-11-13). "Digital surveillance capitalism and cities: data, democracy and activism". Humanities and Social Sciences Communications. 11 (1): 1–7. doi:10.1057/s41599-024-03941-2. ISSN 2662-9992.
  13. Hongladarom, Soraj (2023-12-01). "Shoshana Zuboff, The age of surveillance capitalism: the fight for a human future at the new frontier of power". AI & SOCIETY. 38 (6): 2359–2361. doi:10.1007/s00146-020-01100-0. ISSN 1435-5655.