In 2018, Nick Couldry and Ulises A. Mejias published a major reassessment of our understanding of the role of Big Data in contemporary global society. The abstract to their paper states, “The capture and processing of social data unfolds through a process we call data relations, which ensures the ‘natural’ conversion of daily life into a data stream. The result is nothing less than a new social order, based on continuous tracking, and offering unprecedented new opportunities for social discrimination and behavioral influence.”
The authors “propose that this process is best understood through the history of colonialism.” They assert that the very process of “normalizing the exploitation of human beings through data” is nothing less than a modern version of “historic colonialism[, which] appropriated territory and resources and ruled subjects for profit. Data colonialism paves the way for a new stage of capitalism whose outlines we only glimpse: the capitalization of life without limit.”
Unlike the extraction of natural resources, this "appropriation" is built into the passive agreements that people accept whenever they use the web-based systems and internet services that virtually everyone on the planet relies on. Yet people have no control over, or real influence on, how the data is gathered, used, shared, or maintained. Perhaps worse, there are no legal or ethical standards guaranteeing individual privacy. Researchers today are recognizing that some type of international conventions and/or legal framework governing the capture, maintenance, and use of this data is needed.
Experts from the Thematic Research Network on Data and Statistics (TReNDS) explain, “Paralleling past patterns of colonial resource extraction, many Northern corporations profit from the personal data they are gathering around the world and are undermining the data sovereignty of nations in the Global South. The process of ‘digital colonialism’—whereby the Global North monopolizes the digital technology supply—can impede Southern economies, particularly in Africa, to develop their own digital economies, manufacturing capabilities, and other domestic industries.”
The Alan Turing Institute’s description of a 2021 panel discussion it hosted, Data as an Instrument of Coloniality, examined this phenomenon, sharing, “Only by looking at the histories of colonial extraction and appropriation of land, nature and labor can we understand that our lives are being reconfigured in unprecedented ways, through the medium of data.” The infrastructures of data colonialism “perpetuate old racial, gender and class injustices,” and decolonizing data “in this context means developing new strategies for resisting the new extractivist order, and for re-imagining internet governance and the digital commons.” (The full discussion is available on YouTube.)
RESEARCHERS SEEK NEW WAYS TO COUNTER ASSUMPTIONS
Researchers are questioning the new standards and models being developed and applied across the globe—especially in non-Western countries and cultures. The TReNDS article notes that "not only are countries in the Global South prompted to prioritize the statistical needs set by the Global North, but new data methods and systems based in the North risk undermining their statistical credibility altogether."
In 2020, Couldry and Mejias wrote an opinion piece in Al Jazeera arguing that data colonialism is critically important to the future of us all. They say that it "is the startling new social order based on continuous tracking of our devices and online lives that has created unprecedented opportunities for social discrimination and behavioural influence by corporations. It goes well beyond the social media platforms and search engines that have attracted most criticism, and comprises a complete reorganisation of everyday life and business."
In 2021, a Google Research team wrote that AI and machine learning (ML) are centered on Western history and concerns: “We find that in India [specifically], data is not always reliable due to socio-economic factors, ML makers appear to follow double standards, and AI evokes unquestioning aspiration. We contend that localising model fairness alone can be window dressing in India, where the distance between models and oppressed communities is large. Instead, we re-imagine algorithmic fairness in India and provide a roadmap to re-contextualise data and models, empower oppressed communities, and enable Fair-ML ecosystems.”
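The Google team's warning that "localising model fairness alone can be window dressing" refers to the narrow statistical checks commonly used in Fair-ML work. As an illustration only (the paper does not prescribe any particular metric, and all names and data below are hypothetical), here is a minimal sketch of one such check, a demographic-parity gap, which measures whether a model's positive-prediction rate differs across demographic groups:

```python
# Illustrative sketch of a demographic-parity check -- the kind of narrow
# "model fairness" metric the researchers argue is insufficient on its own.
# All function names and data are hypothetical, not from the paper.

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each demographic group."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs (1 = approved) for two demographic groups.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

The point of the critique is that such a number says nothing about whether the underlying data is reliable, whether the group labels are meaningful in the local context, or how far the model sits from the communities it affects—which is why the authors call for re-contextualizing data and models rather than relying on metrics alone.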
SOCIAL DATAFICATION INCLUDES US ALL
The book The Black Box Society: The Secret Algorithms That Control Money and Information, by Brooklyn Law School professor Frank Pasquale, notes that “every day, corporations are connecting the dots about our personal behavior—silently scrutinizing clues left behind by our work habits and Internet use. The data compiled and portraits created are incredibly detailed, to the point of being invasive.” The Federation of American Scientists’ Steven Aftergood writes in Nature, “Everyone who uses the Internet for entertainment, education, news or commerce is implicated in a web of data collection whose breadth surpasses ordinary awareness.”
Data & Society is an independent nonprofit research organization focused on “the social implications of data-centric technologies & automation.” One of its current initiatives is to track “how the public sphere is currently understood, controlled, and manipulated in order to spark a richer conversation about what interventions should be considered to support the ideal of an informed and engaged citizenry.”
THE RISE OF CRITICAL DATA STUDIES
The field of critical data studies is pushing researchers to re-envision data less as an impartial set of numbers and more as a form of power: built on assumptions, and accumulated rapidly and easily by the private sector with little oversight in terms of professional standards, ethical guidelines, or legal frameworks. In short, critical data studies goes beyond technical solutions and incorporates research from the social sciences and humanities on unequal social structures. According to the abstract from the paper "Big Data From the South(s): Beyond Data Universalism," it requires that we see datafication as having "put new weapons in the hands of institutions and corporations in the business of managing people. And it seems to hit harder where people, laws, and human rights are the most fragile."
ORGANIZATIONS ARE BEGINNING TO ACT
In the past several years, we have seen a growing number of national and international rulemaking organizations establish priorities, policies, and best practices for data stewardship. They include the Asia-Pacific Economic Cooperation (APEC) Privacy Framework, the European Union’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the European Commission’s 2019 High-Level Expert Group on Artificial Intelligence report.
In the U.S., stakeholders are starting to examine how issues of privacy and data protection intersect with the 14th Amendment, as well as to expand research into the impact of data-driven decision making on fairness in social systems and governmental assessments. At the same time, there is the potential for Big Data blacklisting—designating someone guilty until proven innocent because of suspicious data—by government agencies.
FINDING NEW MODELS TO IMPROVE DATA QUALITY
There are many different approaches being suggested to better understand (and perhaps even find ways to control) the huge stores of personal data being collected, used, and sold. Some of these models come from Indigenous and non-Western cultures. They include the following:
- Comprehensive Community Planning (CCP), a process that is “designed to be very inclusive, culture-and-community-specific, and long term. CCPs range from 25-100 year plans, with high-level goals and a vision that represents the overarching dreams for the community.”
- Data governance frameworks that focus on developing strategies, structures, legislation, and policies based on the specific interests and priorities of Indigenous people and non-Western countries. This approach has been researched by Canada’s First Nations Information Governance Centre and other groups.
- The South American modernity/coloniality school of thought, led by the work of Aníbal Quijano and others, focuses on ethnocentrism, with Quijano noting that "[neither] this model of power, [n]or any other, can mean that historical-structural heterogeneity has been eradicated within its dominions. Its globality means that there is a basic level of common social practices and a central sphere of common value orientation for the entire world."
In his book Antisocial Media: How Facebook Disconnects Us and Undermines Democracy, University of Virginia’s Siva Vaidhyanathan focuses on the effects that Facebook has had on our lives and offers a number of proposals to attack the problems social media poses to our global society. The book describes how Facebook’s very design is intended to satisfy users by leveraging their assembled data to build profiles that feed their desires and perceptions, calling the company a “surveillance machine.”
In a 2019 Deloitte Insights article, the authors argue that “the infusion of AI into common activities such as social media interactions, credit decisions, and hiring can be controlled to avoid unintended or adverse outcomes for individuals and businesses.” However, there are few standards in development to guide researchers or protect societies from biased or inaccurate conclusions. Individuals still have no real engagement or involvement with data collection or use, and the degree of oversight or protective jurisdiction is nearly nonexistent.
“Culturally sensitive research,” as described in a 2017 SAGE article, “involves integrating cultural beliefs, characteristics, attitudes, values, traditions, experiences, and norms of a target population into research design, implementation, evaluation, and materials. Increasing globalization and growing multiculturally diverse populations have highlighted the need for research to move beyond focusing predominantly on European-American, White, middle-class participants, as research findings may not generalize to nonmajority cultures.”
FINDING WAYS TO MOVE FORWARD
Researchers and their professional organizations are working for change. In Minnesota, for example, the Center for Economic Inclusion—a nonprofit sponsored by key local corporations—recently unveiled the Minnesota Racial Equity Dividends Index: “Companies can register for the index online and assess their equity and inclusion progress against 37 ‘standards of excellence’ used to build racially inclusive workplaces and supply chains.”
Boston Consulting Group recently created a Diversity and Inclusion Assessment for Leadership (DIAL) tool, which “analyzes diversity and inclusion benchmarking data within and across industries and geographies. By drawing on our database of more than 25,000 responses from around the globe, we help companies identify the right targets and metrics to keep people strategy moving in a positive direction.”
These are both relatively small efforts. National and international laws have not kept pace with the flood of new tools and applications; legislators and regulators have been unable to effectively legislate, monitor, or otherwise stay current with changes in data and its use. Decisions made internally by companies and other organizations remain beyond any effective international convention or legal authority.
Information professionals, who are dedicated to providing quality information, are a key audience for—and collaborator in—emerging efforts to guarantee that critical information and data remain impartial, reliable, and easily available for everyone. The need to develop critical thinking skills in our users and to follow the development of better analytical tools and standards for the use and evaluation of data is becoming urgent. The challenge is enormous, especially given the amount of private sector, copyrighted, or legally protected data that exists and drives not only our global economy, but also decisions made at local, national, and international levels across the globe.