Data Genocide: Are Minority Opinions Being Murdered Online?

Privacy Co-op Media Staff
49 min read · May 6, 2021


Rose Davis

This paper, from Data Privacy Journal, underwritten by the Privacy Co-op Media Staff, is re-issued here for free viewing with the author’s permission. Site of record can be found here: https://doi.org/10.52785/J2S8V3N7. While linking to this article is permitted, please do not distribute or copy without express written permission of author. Formal citations can be copied from https://www.academia.edu/48765398, © 2021.

INTRODUCTION

With the advent of the internet in 1983, many people believed it offered humanity the chance to evolve as a civilization, no longer needing to resort to war, genocide, and violence as means of gaining power and control. The internet brought freedom: a new way of communicating, sharing information, connecting people globally, and moving data. Few at the time likely anticipated that it would quickly become a digital wild west, where laws were scarce and the ability to manipulate, influence, control, and ultimately dominate would become powerful forces of change impacting society well beyond the digital realm.

From its early days to today, an unimaginable amount of data and information has been created, containing secrets and traces of every participant in this arena. The rights of individuals to own their data were seemingly lost to the legal constructs of internet companies through crafty terms of service, license agreements, and sophisticated collection mechanisms. Tech giants like Facebook, Apple, Amazon, Netflix, and Google, known as the FAANGs, are examples of companies that collect untold amounts of information on every user of their services. While they are not alone in their pursuit of data, they are clear examples of successful companies that recognized this unclaimed territory and quickly took advantage of it to create their own empires. Today, they have built something we could never have imagined back in 1983: a digital empire where we can instantly connect with each other across the globe, quickly access extraordinary amounts of information, and conveniently complete many everyday tasks.

While this all seems pleasant, what lies underneath is a group of Big Tech companies clawing their way to the top and destroying new competitors in their path in pursuit of power and control. The benefits we “receive” are a facade for their true motivation: gaining total power and control over many forms of information. Voices have been silenced for expressing their beliefs, for sharing opinions opposed to the common narrative, and for becoming competitive threats. Meanwhile, we continue to scroll, click, upload, and share our lives and information again and again. What is worse is how easy it is for them to do this in the absence of strong oversight and accountability. Rather, society at large has let them self-govern and freely push forward their own agendas. These tech giants are now making up the rules as they go, while we, as a world, have ignored the signs and warnings from countless scholars on the dangers of self-regulation and feigned moral authority. What we are experiencing now is a threat to everyone’s free speech, individual rights over their data, and online identity, and, as this paper will set forth, perhaps even our digital lives, if not, in certain circumstances, our physical lives as well; something everybody should be worried about.

A NEW AGE OF DATA GENOCIDE

The digital revolution was a natural evolution for society, even more so than the industrial revolution, because it offered easier problem-solving with practical applications for improving every aspect of life. Everything from commerce to entertainment was made easier by the digital revolution, and dependence on it came more swiftly than on its industrial counterpart. People readily ceded everything from their livelihoods to even love to its conveniences. In doing so, the internet has rapidly become life itself for much, likely most, of humanity. With the rise of the digital age, our connectedness is life, and separation from it would be a death in a mental or figurative sense. Think about it: it is like the common question, “if you didn’t post it online, did it really happen?” We live life through these devices, and separation from them affects us all. These social media platforms are crucial to how we perform our day-to-day tasks, communicate, and express ourselves. So, since we are dependent on these platforms, what does this new death look like? It is the complete removal of individuals online, the silencing of news outlets and public figures, and the dehumanization and victimization of people groups.

It sounds familiar, right? These tactics are not new; we have seen them historically with Hitler, Pol Pot, and Slobodan Milošević. They wiped out masses and suppressed populations to push forward their agendas of totalitarian control. It seems as if we are on the brink of a new age of genocide, or rather a data genocide, a 1984-like dystopia, where we are all the victims.

To pause for a second: it is understood that there is incredible weight behind the word genocide. The purpose of using this word is not to take away from the many who have lost their lives and voices to the atrocities that have occurred, but to argue that we are living in the shadow of a new age of data genocide, in which we no longer control our information, opinions, or expressions and can be deleted against our will, contradicted, and ultimately erased.

This is not hard to recognize. There are many recent examples, and the actions of data genocide are becoming prevalent and unfortunately increasingly accepted as the norm. Here is a partial list of pertinent, contemporary events.

1) Twitter and Facebook have permanently banned the former president of the United States, Donald J. Trump, from freely expressing his ideas online (Satariano, 2021);

2) Parler, a nonpartisan alternative to Twitter that encourages the right to free speech, was kicked off Amazon Web Services (AWS) after allegedly violating its terms of service in connection with the 2021 Capitol riots in the United States (Allyn, 2021);

3) Google has threatened to shut down its branch in Australia, blocking life-saving search capabilities for all citizens, as the country tries to propose a solution that would bring money back into the hands of news reporters and journalists (Hart, 2021);

4) countless voices are being shadow banned on popular social media sites and internet services like TikTok, Facebook, Twitter, Instagram, and YouTube for speaking about varying political beliefs (Grimes, 2020).

Is this a new type of genocide? To put this in perspective, genocide is by definition the deliberate killing of a large number of people from a particular nation or ethnic group with the aim of destroying that nation or group (United Nations, n.d.). Breaking this down reveals two elements: 1) a mental element, the “intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such”; and 2) a physical element, which includes the following five acts:

  • Killing members of the group,
  • Causing serious bodily or mental harm to members of the group,
  • Deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part,
  • Imposing measures intended to prevent births within the group, and
  • Forcibly transferring children of the group to another group (United Nations, n.d.).

What do “killing”, “physical destruction”, “births”, and “transferring” mean in the context of this new data genocide? Has our online world replaced the need to be actively present in the physical world with being actively present in the digital? Can “killing” mean having your online presence and data permanently deleted because these Big Tech companies do not agree with your minority group (cohort), or your content, or even your thoughts?

Raphael Lemkin, who coined the word genocide in 1944, argued that the mental element alone could constitute genocide (1947). This makes it possible for genocide to occur in the digital space, which is why we can use a term like data genocide to describe the expungement and deletion of people’s presence online. It is a misconception that genocide must be physical, especially if we are living in world 2.0, the digital world.

Gregory Stanton of Genocide Watch has mapped out several stages of genocide that are relevant here. These stages include classification, symbolization, discrimination, dehumanization, organization, polarization, preparation, persecution, and extermination (2013).

Big Tech has classified people by their beliefs through algorithms. These algorithms classify and categorize clusters of people using techniques that often carry an inherent bias toward those deemed to be of value and importance versus those who are not (Praharaj, 2020). As such, they may inadvertently discriminate based on religion, skin color, gender, and so on, or intentionally discriminate on certain non-protected categories. Likewise, these platforms foster polarization in areas like political opinion and separate the views from the person as a form of dehumanization (Praharaj, 2020).
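To make the classification step above concrete, here is a purely illustrative sketch of how an algorithm can sort users into behavioral clusters. The data, features, and algorithm (a toy k-means) are assumptions for illustration, not any platform’s actual system; the point is that a few lines of unsupervised code suffice to partition people by behavior without ever naming a protected category.

```python
from math import dist

def kmeans(points, centroids, iters=10):
    """Toy k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its assigned points, and repeat."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(coord) / len(pts) for coord in zip(*pts)) if pts else centroids[i]
            for i, pts in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical "engagement vectors": (clicks on political content, clicks on shopping)
users = [(9, 1), (8, 2), (9, 2), (1, 9), (2, 8), (1, 8)]
centroids, clusters = kmeans(users, centroids=[(10, 0), (0, 10)])
# The six users are partitioned into two behavioral cohorts.
```

The cluster labels themselves carry no sensitive attributes, yet because behavior correlates with identity, such cohorts can still proxy for religion, politics, or ethnicity, which is the bias concern raised above.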

While the mental element is strong enough to classify this as genocide on its own, there is also a physical element that is taking place. We do not have control over our own data, rather Big Data has control and chooses what to do with our information (Wheeler, 2020). They have the power to delete and suppress voices, competitors, and more.

Willard Waller, an American sociologist whose work on family, education, and military sociology spanned 1929 to 1945, and who expanded on Sigmund Freud’s premises on the will, concluded that control has two manifestations: authority and power (1932). Waller held that authority is derived largely from sacrifice or provision, while power is derived from withholding love, one from another, or from a taking (1932). As to provision, the internet is filled with a plethora of “free” information and tools, such as free maps served up to us by Big Tech, usually at the cost of our data. Equally evident is Big Tech’s power over us through the deletion of accounts and the denial of service or other restrictions used to ban certain users. Thus, both states of Waller’s control exist.

If we look at the digital space in terms of oppression, we see it there too. The definition of oppression has two components: (a) keeping in subservience or hardship, (b) especially by the unjust exercise of power or authority.

In most regulations governing data around the world, two types of roles are generally recognized: (a) providers, who are routinely required to act justly, and (b) publishers, who may act with bias. For providers, “acting justly” means acting in a way that is morally right, fair, and equal. The concept of equality is inalienable from the rest of the definition: justice is blind. In the United States, Section 230 of the 1996 Communications Decency Act limits liability for providers. These same protections are withheld from publishers.

Larger tech companies that enjoy these protections resist being considered publishers, yet routinely behave with bias. For example, in early 2021 Twitter banned Donald J. Trump while he was technically still in office, citing “the potential to promote rhetoric that may incite violence,” while simultaneously allowing Ayatollah Khamenei to regularly tweet messages directly calling for the destruction of others, including entire countries, without a similar challenge. They behave with bias and, as such, self-identify through their actions more as publishers than providers.

To preface, I am not concerned with the ideologies of any particular group, but with the “in-group” and “out-group” mentality that has led to the current conditions of power and control. In this way, I am attempting to ask questions nobody else seems to be asking, such as: why do these Big Tech companies want to remove voices, influential or not? What is in it for them? What is their end goal?

Many of the same tactics used in the physical oppression of past genocides are being digitally upgraded and deployed at a larger scale, with 57% of the world’s population now interconnected online. This time it is your digital identity at stake. They have the power, at any moment, to say we will not permit your existence, and, as the rest of this paper will show, they have subsequently checked off the remaining points of the definitional breakdown of the word genocide.

DYSTOPIAN TOLERANCE

If that is the case, what is this data genocide really meant to do? In our modern age, it is less about bloodlust and more about the desire to eliminate something deemed to be intolerable by a controlling group. How does that happen? Part of the answer is the desire of certain groups to fully control the narrative by convincing people that tolerance or intolerance is a virtue that may ultimately lead to physical actions and violence.

The initial flaw is that tolerance, in and of itself, is a measurement of something else: the virtue of temperance. Temperance is the actual virtue, not tolerance; a measurement cannot be a virtue. Temperance is defined as controlled strength and reason, gentleness, and self-control; against such things, there is no law. You can measure its strength or weakness in an individual by how tolerant or intolerant that person becomes on any given subject. Temperance requires controlling your strength and applying reason, which makes it the enemy of genocide. Tolerance or intolerance is the measurement of how far temperance bends.

If a leadership can convince a populace that tolerance is the virtue, then it becomes easier to convince them first that a group is intolerable and second that the only course of action is to eliminate it. It was Radovan Karadžić’s blatant propaganda, suggesting that Serbians were the victims of ethnic attacks in the former Yugoslavia and needed reparations, that led to the deaths of thousands of Muslim Bosniaks. It was the Young Turks blaming Armenians for the financial disparity in the Ottoman Empire that led to the killing of hundreds of thousands of Armenians. The increased intolerance pouring out of the leading groups is what led to these deaths. In the early days of the Communist Party in China, Mao Zedong forcibly took control through his Little Red Book, which indoctrinated a large majority of the nation’s youth into intolerance and ultimately ostracized those who disagreed.

The device of “tolerance” is often used to persuade a group to abandon its own temperance and willingly replace it with complete tolerance or intolerance. Once that is accomplished, the larger populace becomes more easily controlled. This is where we get our modern pseudo-political term “dog whistle”: a soundbite metaphor that instantly reminds a large group of people of their decided tolerance or intolerance for a thing.

SUPPRESSION OF SPEECH

Why is free speech important in society? Freedom of thought and the right to free speech create avenues for a society to evolve financially, governmentally, morally, physically, and emotionally, and they encourage research and invention. Free speech is crucial to the evolution of a society because of the free flow of ideas. It supports the development of a thriving democratic culture.

Free speech means a diversity of views can be shared openly. The old saying ‘two heads are better than one’ connotes that two people can look at the same problem and have two different ways of seeing and solving it. Holding a divergent point of view should not carry the weight or stigma of disassociation but ought merely to be perceived as another point of view.

When a society takes a stance against a differently leaning point of view, and only one perspective can be tolerated, or even heard, it can stunt the society’s growth. It can produce an atmosphere in which one ideal leads the way, or dictates the way, for the whole tribe.

America’s founding fathers were passionate about the republic being a country ruled by the people. They were passionate about creating a more perfect union, implying that she would be ever changing in order to form that more perfect union. Freedom of speech was instrumental in establishing justice, ensuring domestic tranquility, providing for the common defense, promoting the general welfare, and securing the blessings of liberty to ourselves and our posterity.

Despite the desires of the founding fathers, in recent days we are seeing more instances of canceled or suppressed speech. One does not have to look far to find examples of people calling for the removal of thoughts or words. On January 22, 2021, in an interview, Bari Weiss, a former New York Times op-ed editor, discussed her scathing public resignation from the New York Times. She views a free press as an entity that “holds up the mirror to society so people can make rational decisions about where to live, where to work, where to invest their money, and who to vote for.” She left no stone unturned as she systematically listed her reasons for her departure. She discussed her reasons for joining the prestigious paper and how she was brought on to add a more balanced perspective. Weiss, per her interview, leans politically between center-left and center-right. She described what she called ideological succession: how left-leaning but liberally minded institutions that once claimed to speak the truth for most readers swung radically. She stated that this radicalism has become the story of the press, universities, the K-12 educational system, publishing houses, Hollywood, and increasingly the story of corporate America. She also stated that the New York Times was enthralled with ideological succession and was not open to differing points of view.

She said, “My own forays into Wrongthink have made me the subject of constant bullying by colleagues who disagree with my views.” Moreover, it is insightful to see what she wrote about Twitter. “Twitter is not on the masthead of The New York Times. But Twitter has become its ultimate editor. As the ethics and mores of that platform have become those of the paper, the paper itself has increasingly become a kind of performance space. Stories are chosen and told in a way to satisfy the narrowest of audiences, rather than to allow a curious public to read about the world and then draw their own conclusions. I was always taught that journalists were charged with writing the first rough draft of history. Now, history itself is one more ephemeral thing molded to fit the needs of a predetermined narrative.”

Another pertinent paragraph from her letter speaks to how two years have significantly changed what can and cannot be printed in the paper of record. She writes, “Op-eds that would have easily been published just two years ago would now get an editor or a writer in serious trouble, if not fired. If a piece is perceived as likely to inspire backlash internally or on social media, the editor or writer avoids pitching it. If she feels strongly enough to suggest it, she is quickly steered to safer ground. And if, every now and then, she succeeds in getting a piece published that does not explicitly promote progressive causes, it happens only after every line is carefully massaged, negotiated and caveated.” This kind of suppression hinders a full view of what is happening in society.

Again, this pattern of suppression rears its head in the Twitter space. Tom MacDonald, a 32-year-old white, tattoo-laced Canadian rap artist living in America, released his song Fake Woke on January 29, 2021. In a little over a week, it amassed 4.1 million views and was the second most downloaded song on iTunes on February 6, 2021. It touched on many politically incorrect topics like cancel culture and wokeism. Twitter user @wysoKapri said, “if you like Tom MacDonald u gotta be racist.”

It seems illogical to assume that because one listens to a certain song or reads a certain book, one is racist, but quick assumption and association by affiliation seem to be a popular theme today. Another Twitter user, @YeahIdkAnymoree, tweeted, “If someone listens to Tom MacDonald cut them completely off.” Twitter users appear to feel a sense of empowerment to cut out voices contrary to their own. Having a platform whereby words can be easily disseminated makes it convenient to express one’s views without doing so in person.

In this way, people with opposing views use being offended as a gambit in debate: the one who is offended first wins. It becomes a rationale for disclaiming liability and withholding empathy for any pain they may cause from that point forward.

Another extreme example of complete suppression of speech affected the leader of the free world. As briefly mentioned previously, on January 8, 2021, Twitter permanently suspended Trump’s @realDonaldTrump account. According to Twitter, Trump tweeted the following,

“The 75,000,000 great American Patriots who voted for me, AMERICA FIRST, and MAKE AMERICA GREAT AGAIN, will have a GIANT VOICE long into the future. They will not be disrespected or treated unfairly in any way, shape or form!!!” (2021)

He later tweeted,

“To all of those who have asked, I will not be going to the Inauguration on January 20th.” (2021)

Twitter made a swift decision in the wake of the Capitol riots, deeming that his tweets incited violence and went against its Glorification of Violence policy. Twitter determined that these two tweets could inspire others to “replicate the violent acts that took place on January 6, 2021,” and consequently suspended his account permanently. Facebook followed suit, though it is unclear whether his ban there will be permanent.

According to The Jerusalem Post, Democrats had been calling for Trump’s account to be suspended for some time, since it was the main avenue he used to reach the general public, one not subject to the filters of prime-time newscasts.

Because Twitter is not a government entity, it is not subject to the same First Amendment rules that bind the government, and is therefore not obliged to let Trump tweet at will. It is interesting to note the slight sense of hypocrisy within the Twitter realm, given that they allow Ayatollah Ali Khamenei to remain active in light of their Glorification of Violence policy.

Last year, Khamenei tweeted urging “Jihad” against Israel. He tweeted,

“everyone must help the Palestinian fighters” (2020)

and,

“the struggle to free Palestine is Jihad in the way of God. Victory in the struggle has been guaranteed because a person, even if killed, will receive ‘one of the two excellent things.’” (2020)

He also tweeted,

“Zionist regime is a deadly cancerous growth and it must be uprooted and destroyed.” (2020)

When Twitter executives were asked about this violent speech, their response was that Khamenei’s comments “did not violate the hate-speech rules since they are considered ‘foreign policy saber-rattling.’” The disparity between Trump’s tweets saying he would not attend the inauguration and Khamenei’s calls for the genocide of Israel is vast. How does Twitter justify canceling one account and allowing another? How does it become the moral voice for one and not hold its morals in line across the board and across the world?

This question invites deeper thinking, given that our democratic society was founded on free speech, and the seemingly constant barrage against minority views is becoming not only tolerated but expected, even applauded, by political leadership.

Jed Rubenfeld, a prominent Yale Law School constitutional lawyer, and Vivek Ramaswamy wrote an op-ed in the Wall Street Journal entitled Save the Constitution from Big Tech (2021). In it, they discuss how the tech giants banned Trump from their platforms and cut off Parler, Twitter’s, albeit small, competitor, based on their “ever-changing terms of service” (2021). They submit that this censorship was politically motivated. Accordingly, Rubenfeld states, “Conventional wisdom holds that technology companies are free to regulate content because they are private, and the First Amendment protects only against government censorship. That view is wrong: Google, Facebook, Twitter should be treated as state actors under existing legal doctrines. Using a combination of statutory inducements and regulatory threats, Congress has co-opted Silicon Valley to do through the back door what government cannot directly accomplish under the Constitution.” (2021)

On January 9, 2021, the BBC reported that Parler, a “free speech” social network platform, was removed from the Google Play Store over its failure to remove “egregious content.” Apple followed suit, and Amazon suspended it from using its servers. Parler had become a popular app, with nearly 20 million users, after the 2020 elections among people who felt censored by Facebook and Twitter. It was a place where people could have a voice, but its life was cut short after the Capitol riots of January 6, 2021.

Apple said it had seen “accusations that the Parler app was used to plan, coordinate and facilitate” the attacks on the US Capitol. In an interview on January 13, 2021, John Matze, Parler’s founder and former CEO, disagreed with that assessment, saying the riots were allegedly coordinated on Facebook. He also stated that this complete shutdown by Big Tech caused a tidal wave of reaction from his user base (2021).

Despite Parler being offline, the domain was still able to post sporadic updates from high-profile users. Senator Rand Paul parleyed,

“Competition is the surest means to preserving free speech. Everyone, left and right, should be horrified at Big Tech’s attempt to stamp out speech. I wish Parler and all other innovators success in keeping the marketplace of ideas open and uncensored.” (2021)

Sean Hannity also chimed in saying, “I’ve spent my entire career fighting for free speech, even for those I strongly disagree with. We stand with Parler in the fight for free and open dialogue.” Parler’s takedown reverberated within conservative circles and Representative Andy Barr (R-KY) was quick to condemn their actions. Politico reports that he tweeted,

“Amazon, Google, and Apple’s decisions to block the download or use of Parler by their consumers is dangerous. This blatant monopolistic behavior is designed to shut down debate and silence conservatives.” (2021)

“Removing someone from social media, especially for political speech, should indeed follow a specific legislative path,” says Dia Kayyali, a Germany and US-based expert in surveillance and content moderation, who leads advocacy for Mnemonic, an organization supporting human rights including through moderation and documentation of digital media content,

“but this path is simply not available in the US. Most countries impose certain limits on freedom of speech — Germany, for instance, specifically regulates hate speech online — but in the US, freedom of expression, as enshrined in the first amendment, is all but unlimited.

“In the US, this idea of the first amendment is so strong that there has not been much action in regulating platforms. The interpretation of what constitutes incitement of violence, for instance, is too narrow and outdated to account for the reach and reality of social media and should therefore be reinterpreted.

“This legislative hole leaves it to the tech giants to regulate themselves, even though they are not legally liable for any content published on their platforms. Similarly, social media platforms and other digital publishing channels are largely free to moderate content they consider inappropriate, even when it is protected by the first amendment. They have not traditionally done a good job of it, often failing, or even refusing, to moderate conversations or suspend accounts.” (2021)

MODUS OPERANDI

To take an objective approach to proving that data genocide indeed follows the patterns of past genocides, one need look no further than the structure of a recognized element of genocide and determine whether there is evidence to support each of its components. For this analysis, the components are quoted directly from the third element of the “Crime of Genocide” definition (in bold), with past and current examples for each component (2000).

Genocide by deliberately inflicting conditions of life calculated to bring about physical destruction:

1. Conduct

· The perpetrator inflicted certain conditions of life upon one or more persons.

Past: Dehumanization — Non-Jewish German neighbors were conditioned, even trained, to hate Jews and make their lives intolerable, and were encouraged to confront or shun them.

Today: Dehumanization — People of one political view are encouraged, by open rhetoric, to hate opponents and make their lives intolerable. Statements from one side are routinely allowed to stand, such as Congresswoman Maxine Waters’: “If you see anybody from that Cabinet in a restaurant, in a department store, at a gasoline station, you get out and you create a crowd and you push back on them, and you tell them they’re not welcome anymore, anywhere.” Other such statements from the opposition result in censorship.

· Note: The term “conditions of life” may include, but is not necessarily restricted to, deliberate deprivation of resources indispensable for survival, such as food or medical services, or systematic expulsion from homes.

Past: Minorities were removed from areas of commerce and discourse by forced segregation.

Today: Individuals and even large groups (AWS expulsion of Parler’s 20 million users) are removed from areas of commerce and discourse.

2. Consequences and Circumstances

· The conditions of life were calculated to bring about the physical destruction of that group, in whole or in part.

Past: Banopticon (profiling technologies used to determine whom to place under surveillance) — various minorities were required to self-identify and were informed against, marked, and tracked, as were the Jews (Bigo, 2008).

Today: Banopticon in the form of algorithms, such as those employed in a conflation of data from Square and Twitter to analyze posts and buying habits and to identify people as QAnon, a non-membership, amorphous, and largely unorganized group sharing inconsistent subsets of ideas and conspiracy theories; Twitter banned 70,000 accounts after they were so profiled.

As this paper is being written, Google is promoting another Banopticon in the form of its new Federated Learning of Cohorts (FLoC) technology (covered in more detail later in this paper), which corrals groups of minorities into what Google terms “cohorts,” claimed to be anonymous groups without titles (Bohn, 2021). These cohorts are given unique identifiers, such as numbers. Privacy critics are already pointing out that larger tech companies, such as Facebook, will be able to map your identity to the FLoC cohorts you belong to by cross-referencing them against other known groups from their 2 billion members, and will be able to track and even manipulate cohorts with relative ease, while smaller competitors will have to go through Google to manage things such as ad sales. This gives a decided advantage to the larger tech companies, galvanizing their market monopolies and power in the name of privacy.
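FLoC’s cohort assignment has been described as a SimHash-style locality-sensitive hash computed over a user’s browsing history. The following is a rough, illustrative sketch of that idea only; the domain names, bit width, and hashing details are assumptions for demonstration, not Google’s actual implementation:

```python
import hashlib

def toy_cohort_id(domains, bits=8):
    """SimHash-style locality-sensitive hash: each visited domain 'votes'
    on each output bit, so similar browsing histories tend to land in the
    same or nearby cohorts. Illustrative sketch only, not FLoC itself."""
    totals = [0] * bits
    for d in domains:
        # Stable 64-bit hash of the domain name.
        h = int.from_bytes(hashlib.sha256(d.encode()).digest()[:8], "big")
        for i in range(bits):
            totals[i] += 1 if (h >> i) & 1 else -1
    # The cohort ID is the sign pattern of the accumulated votes.
    return sum(1 << i for i, t in enumerate(totals) if t > 0)

history_a = ["news.example", "shop.example", "forum.example"]
history_b = ["news.example", "shop.example", "blog.example"]
cohort_a = toy_cohort_id(history_a)
cohort_b = toy_cohort_id(history_b)
# Identical histories always collide; mostly-overlapping ones usually land close.
```

The privacy critique in the paragraph above follows directly from this construction: anyone who can observe a cohort ID alongside other identifying signals can cross-reference the two, because the ID is a deterministic function of behavior.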

· Such person or persons belonged to a particular national, ethnical, racial, or religious group.

Past: Jews in Germany, Armenians in Eastern Syria, Cambodians by the Khmer Rouge

Today: Any group found out of favor with powerful technology leaders, currently largely conservatives, Christians, and jingoistic nationalists. In the recent past, however, this has included people who espoused liberal points of view that were unpopular at the time, such as when ABC fired Bill Maher for comments about the U.S., due largely to internet backlash fostered by large tech companies.

3. Intent

· The perpetrator intended to destroy, in whole or in part, that national, ethnical, racial, or religious group, as such.

Past: in January 1942, S.S. General Heydrich assembled a group of Nazi leaders to discuss a plan called the "Final Solution," to "evacuate" Germany's Jews and other undesirables as a deliberate and sustained effort. The change was claimed to be the result of a crisis: America's entrance into the war.

Today: in January 2021, a leaked video showed Jack Dorsey and an assembled group of Twitter's leadership discussing a plan called "Full Retro," to "ban" QAnon and other undesirables as a deliberate and sustained effort. The change was claimed to be the result of a crisis: the storming of the U.S. Capitol on January 6, 2021.

4. Context

· The conduct took place in the context of a manifest pattern of similar conduct directed against that group or was conduct that could itself effect such destruction.

Past: Nazi propaganda (Goebbels) casting Jews as undesirables

Today: Domination of mainstream narrative (propaganda)

· Note: The term "in the context of" would include the initial acts in an emerging pattern; the term "manifest" is an objective qualification.

Past: Increased as a result of crisis: political unrest and inflation

Today: Increased as a result of crisis: political unrest, COVID, BLM

As one can see from these examples, each component necessary to establish the elements of genocide is present and fully satisfied. Therefore, by accepted definition, "data genocide" is indeed real and occurring in the world today.

WHY NOW?

Why are we seeing this all now? As previously stated in Willard Waller's theory, everyone has a will to control. So, the question may be better asked, "Why do we not see this all the time?" As covered by generally accepted genocide theory, widescale genocide is greatly aided by coincident factors: a global crisis and civil unrest, common goals among leadership, and an ability to function with absolute power. All three elements converged in 2020. Toward the end of 2019, neither a global crisis nor civil unrest was present with the ubiquity needed to play a supporting role in genocidal efforts. Primary news stories of the day included refugee population growth in countries with liberal border policies, lagging economic growth in secondary markets, and isolated, diminishing pockets of terrorism.

The primary economic driver at that time was the U.S., which was enjoying some of its best economic numbers across the board since 1945. Crime and unemployment were low, and the hottest issues were largely posturing for a new election cycle. In this environment, tech companies were routinely grilled by various governments and were for the most part openly contrite, while continuing to strengthen their control in various silos such as social media, operating systems, devices, marketplaces, and media content.

But with the dawn of 2020, the pandemic caused by COVID-19 brought mandated lockdowns. At first, these were promised to flatten the curve within seven weeks and then be lifted, but as that deadline continued to be pushed out, anxieties and other psychological issues increased. More volatile and short-lived emotions gave way to older and more vitriolic feelings of sadness and bitterness.

Three notable police-involved killings of minorities occurred in a short span of time, ending with George Floyd's death on May 25, 2020. As protestors took to the streets, violence likely fueled by vitriol crescendoed. During the concurrent political cycle in various countries, ruling parties allowed more latitude for crimes related to these protests. Emboldened, various other groups, such as those seeking anarchy, began to expand social unrest. A compelling argument can be made that those already in control of our data were fomenting and protecting the growth of global crisis and civil unrest. Their actions certainly did little to deter either.

During this time, tech platforms clearly allowed strong rhetoric from these protestors to stand, while striking down counter-rhetoric for various policy infractions such as racial bias or incitement. As they de-personed critics of the new insurgent violence, first by shadow-banning, then by banning, and ultimately by deleting accounts, tech companies saw some pushback; but according to authorized agents that handle opt-outs and data-deletion requests, less than 1% of users acted.

In January 2021, Twitter began large-scale deletions in what they called “Full Retro” of 70,000+ accounts.

By January 10, 2021, the practice of de-personing had become so commonplace that first Apple, then Google crippled the ability of a large platform called Parler to distribute its app, followed days later by AWS completely expelling Parler's servers. This immediately silenced 20 million people (about the population of New York) and cut off e-commerce for many small and large businesses that depended on the platform for traffic. One such business, MySmartPrivacy, reportedly measured a drop in website visits from an average of 70,000 per day to zero immediately following Parler's expulsion from AWS.

When asked for the reasoning behind the de-platforming of Parler, Apple, Google, and Amazon each gave their own version of a potential for inciting violence by a handful of Parler users, citing only one specific account to reporters. Yet, as CNN reported, there were tens of thousands of similar posts on Twitter and Instagram. Those platforms were allowed to continue operating uninterrupted.

YouTube (Google) also froze the account of the sitting U.S. President, Donald J. Trump. This move affected more than Trump, as it also shut off all ability to comment below his posts, thereby silencing voices on both sides of any issue represented.

James Grimmelmann, a professor of digital and information law at Cornell University, said at the time, "If we're in a world where they do have that kind of power, then we want them to have things like due process and fairness and evenhanded treatment of different viewpoints." "Next time," he mused, "the target could be a merchant that Amazon is retaliating against over a business dispute. If this kind of ban continues, it could create a subterranean bifurcation of web infrastructure, as clients deemed unacceptable by the big players seek web services elsewhere." (Rivero, 2021) Meanwhile, the grilling of tech companies by various governments continued, but tech leadership became less contrite, pushing back and even visibly arguing with lawmakers.

Tech companies have also developed internal mechanisms for self-governance, setting up or hiring friendly organizations to be arbiters of truth for them. For example, in a report by MSN on January 16, 2021, one such organization, Zignal Labs, measured misinformation about election fraud while at the same time posting a dubious statement on its own webpage: that one of its advisors, Josh Ginsberg, worked on the presidential campaign of Arnold Schwarzenegger, who is ineligible to run for president, having been born in another country (Williams, 2021).

These flawed and malleable internal arbiters are routinely assigned blame by these tech companies for decisions to ban individuals or groups, thereby becoming a scapegoat. This provides these tech companies with absolute power and yet maintains plausible deniability.

CONNECTEDNESS vs. GROUPTHINK

Humans desire to be connected. Basic sociology has studied and proven this many times over. One need only look at overcrowded cities, in every culture and country, sitting within ten miles or so of vast open countryside to understand this basic human need.

Now, if we all lost our connectedness at once, we would likely adapt and cope. In the early days of the 2020 pandemic, we saw campaigns of “Alone Together,” as public service announcements attempted to help mitigate the commonality of separation. Losing connectedness is a challenge, but humanity overcomes when it is a common experience shared by many.

Yet, if one group could forcibly choke out another group while keeping all the life for itself, what would the analog future look like for the victims? Mental harm? Surely. Physical destruction? Likely, and for some, absolutely. Death? Sadly, there is recorded data supporting even that, as people take their own lives, die of declining health, or are killed by others incited to do so.

These results would fit the required elements for a coordinated event to be considered genocide. What about births? Most genocide definitions include the suppression or forced prevention of births within a minority as a required element.

In the earlier era, when online accounts were canceled by platforms, rejected users would simply sign back on under a new account. But as tech platforms evolved, increasingly sophisticated algorithms began using geographical location, Internet Protocol (IP) addresses, browser configurations, and many other data points to create what is known as a digital fingerprint. This technology evolved out of security and privacy efforts, but it has clearly been co-opted to pierce the veil of privacy and identify people, even by biometrics such as how they type. Users may think they are skirting observation, but the larger tech companies already know who they are before they enter a new username (Smith, 2019).
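The basic mechanism can be sketched in a few lines (the signal names below are illustrative assumptions, and real systems combine far more signals plus behavioral biometrics): many weakly identifying attributes are canonicalized and hashed into one stable identifier that survives an account change.

```python
import hashlib
import json

def device_fingerprint(signals):
    """Toy device fingerprint: hash a bundle of observable signals
    into a short, stable identifier. A sketch, not a vendor's code."""
    # Canonicalize so the same signals always hash the same way.
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two "different accounts" created on the same device still
# produce the same fingerprint.
signals = {
    "ip": "203.0.113.7",
    "user_agent": "Mozilla/5.0 (example)",
    "screen": "1920x1080",
    "timezone": "UTC-5",
    "fonts": ["Arial", "Calibri"],
}
print(device_fingerprint(signals))
```

Because the fingerprint is derived from the device and network rather than the username, deleting an account and creating a new one changes nothing that this hash depends on.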

This means that some users who are banned may find it very hard to create a new account, or, perhaps more insidiously, may be allowed to create one and appear to communicate while no one actually sees their messages. This is called shadow banning, and it may extend to other people in their household or even their sphere of influence. This can certainly be considered the digital equivalent of prohibiting births.
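The defining trick of shadow banning described above can be shown in a minimal sketch (hypothetical data; no platform's actual logic): a shadow-banned author still sees their own posts, so everything looks normal from their side, while the posts are silently dropped from everyone else's feed.

```python
def visible_posts(posts, viewer, shadow_banned):
    """Toy shadow-ban filter: keep a post if its author is not banned,
    or if the viewer IS the banned author (so they notice nothing)."""
    return [
        p for p in posts
        if p["author"] not in shadow_banned or p["author"] == viewer
    ]

posts = [{"author": "alice", "text": "hi"}, {"author": "bob", "text": "yo"}]
banned = {"bob"}
print(visible_posts(posts, "bob", banned))    # bob sees both posts
print(visible_posts(posts, "carol", banned))  # carol sees only alice's
```

The asymmetry is the point: the banned user has no error message, no notification, and no direct way to detect that their audience has been removed.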

Other genocide framework elements include the transference of children from their original home to another. People, and even businesses, who worked for years to establish a digital home on one platform, only to find themselves shut out, may find the only suitable alternative is to start all over again in a new digital home. For millions, this happened when, after first being booted from Twitter, they moved to Parler, only to find themselves digitally homeless again and moving to other platforms such as Gab.

Ultimately, the result is an inability of one group to re-establish a sense of community while the group in control increases its connectedness. Breaking down the connectedness of the minority while strengthening one's own surely satisfies this component of genocidal framework theory.

These forced mutations of community from healthy to dysfunctional, such as moving from one home to another and starting your work all over again under new rules, are meant to change how you think and are a critical part of implementing genocidal activity.

Be that as it may, how could those following orders to execute such cruelty continue to do so without question? Another element of genocide found in the strengthening of the majority’s connectedness is the blind following of such orders without question.

Often those reporting directly to a supreme leader will be passionate in their allegiance, and yet relatively ignorant of their own beliefs, being far enough removed from their origins. In this environment, groupthink, the practice of thinking or making decisions as a group in a way that discourages creativity or individual responsibility, becomes prevalent.

There is an established formula for identifying groupthink. Passionate ignorance leads to insecurity and self-doubt, which is dissuaded in the company of like-mindedness. When many people are far enough removed from the cause of a movement, and yet collectively pledged to it, their actions can more easily become extreme. History has borne this out, as most major religious groups have sought to destroy nonbelievers roughly 700 to 1,000 years after their origin.

Individual insecurity in the presence of groupthink can more easily lead to sociopathy. Sociopathy is defined as willful harm caused to another person, void of any empathy for the pain that person suffers, because the sociopath can justify their actions through some rationalization. Collective sociopathy has been prevalent in most genocides of the past. While the current data genocide has had some rogue detractors, such as those who established the Center for Humane Technology, most rank-and-file workers routinely communicate their support for their company's efforts.

Recently, a Disney Imagineer, Chris Kidder, mocked Disney Park fans and tweeted negatively about Gina Carano, who was fired for her social views. Reportedly, Kidder is not a low-level employee either; he has correspondence in his Twitter feed with legendary former Imagineer Joe Rohde.

Here are the tweets:

“if LucasFilm can make sure Gina Carano wont be in another Star Wars related project, 100 US Senators can make sure Donald Trump is never allowed to hold public office again!” (2021)

“Thank you for doing the right thing LucasFilm! https://t.co/tVrNuFkW5C” (2021)

These posts are just one of many examples of this type of celebration of harm caused to another person, and Kidder appears void of any empathy, rationalizing that the fans and Ms. Carano had it coming (2021).

TYRANNICAL SURVEILLANCE AGE

The term "surveillance capitalism" began showing up several years ago in conversations and articles published by subject matter experts in the area of trusted identity in cyberspace. Beginning with a white paper published by the Obama White House, and subsequently answered by a green paper from the Commerce Department, the interwoven topics of identity and privacy began to coalesce into a standardization effort out of the National Institute of Standards and Technology (NIST) called the National Strategy for Trusted Identities in Cyberspace (NSTIC).

As existing coalitions of such subject matter experts began to apply NSTIC goals to technology, it became apparent that some players were leveraging identity to collect insights and monetize predictive analytics. The fact that this data was derived from unexpected surveillance gave rise to the term "surveillance capitalism."

By embedding surveillance components in products and services, businesses can offer better and better products at reduced prices wherever they can embed these forms of digital identity. For example, a voice-activated personal assistant with early adaptive artificial intelligence from Blackberry cost nearly $1,000 just ten years ago; during the rise of surveillance capitalism, Amazon rolled out Alexa for $46, later lowering it to less than $30 while offering far superior features and performance, including free smart-home interfaces that turn all of the isolated IoT devices in your home into informants for Alexa.

Today, it would be impossible to compete with a product not subsidized by advertising or data sales. After achieving dominance, these tech companies make bold moves to stifle or even co-opt competitors.

David Lyon put it this way, “Surveillance today sorts people into categories, assigning worth or risk, in ways that have real effects on their life-chances. Deep discrimination occurs, thus making surveillance not merely a matter of personal privacy but of social justice.”

“We are now living in a “Tyrannical Surveillance Age” that has given rise to corporate authoritarianism and cyber oppression, due to the predatory surveillance and data-mining business practices that dominate telecom, tech, and electronic products of necessity.

“Rather than being oppressed by way of politics (though one could argue that politics plays a part), people will be oppressed by way of corporate authoritarianism which is a form of digital oppression associated with smartphones, tablet PCs, and connected technology that’s rooted in surveillance capitalism.

“As my research indicates, and as reported by CNBC, data-driven technology providers such as Google and Amazon are going to be able to monitor, track, and data-mine nearly every aspect of your life by way of telecom, tech, and electronic products of necessity, whether you like it or not, because our government isn’t enforcing existing consumer and antitrust laws.

“Regarding antitrust, companies such as Google, Apple, Microsoft, Amazon, Facebook, and others are cutting exclusive deals with PC manufacturers, smartphone manufacturers, original equipment manufacturers, auto manufacturers, and electronics manufacturers to ensure that all products are supported by predatory surveillance and data-mining technology (operating systems, apps, etc.) developed by all technology providers concerned.” (Lee, 2019)

What are these surveillance oligopolies doing with this information? Big Data has “marked” us by their algorithms, categorizing us for our political views, belief systems, and values. Surveillance is used as social sorting. They do this in order to divide us. The so-called digital divide is no longer merely a matter of access to information. Information itself can be the means of creating divisions.

They realize that we are becoming increasingly dependent on our online lives. Irma van der Ploeg, in "Biometrics and the Body as Information," observed that "the world has moved towards a more disembodied and virtual existence" (2003). Van der Ploeg asks how far the very distinction between the embodied person and information about that person can be maintained (2003). In a day when body data, biometrics, are increasingly sought for surveillance purposes, could it be that such information is not merely "about" the body but part of what the body is? Illustrating her case with examples such as fingerprinting, iris scans, and facial recognition systems, all common to today's high-tech social sorting practices, she concludes that our normative approaches, or ethics, require rethinking.

Today, these tech companies likewise fingerprint our devices. The Electronic Frontier Foundation's Panopticlick test platform, since relaunched as Cover Your Tracks, demonstrates how these businesses identify the uniqueness of a device even when various privacy software is installed. You can try it right now while reading this paper by visiting this site: https://coveryourtracks.eff.org/.

All a system would need is for you to log in using that device, and it would be able to tie your communications to your biometrics and, finally, to its version of your identity. What happens to personal data is a deeply serious question if that data in part constitutes who the person is. Questions of identity are central to surveillance, both as a question of data from embodied persons and of the larger systems within which those data circulate.

"We are where we surf" and "If you are not paying for the product, then you are the product" have become common expressions in the study of surveillance capitalism. The Internet has become a major means of classifying and categorizing its users through an array of increasingly sophisticated devices that began with "cookies." But as browser makers seek further dominance, they are supporting the demands of privacy experts to do away with cookies. The irony is that the browser makers will still know exactly who you are; without cookies, it is their competition that will find it harder to know the same thing, giving browser companies a huge advantage. Google, for example, sells advertising, provides identity services, and supplies an industry-standard browser, Chrome. How can any other advertising company compete?

What is subsequently occurring is an unrecognized adoption of common technology, paring down our options. Just 10 years ago, there were more than 8 mobile operating systems sharing a split marketplace. Now there are two.

This means that larger and larger groups of people are communicating in a common technology vocabulary of functionalities and services. If you want to emphasize something you are communicating, you likely use the same tool and the same features as others around the world. Analog social constructs with which society has functioned for centuries, such as etiquette and social protocol, are being automated in the process. Click trails, cookies, and context analytics that sort out transactions, interactions, visits, calls, and other activities are the invisible doors that permit or deny access to a multitude of events, experiences, and processes, dividing us into segregated groups drawn by the very organizations that surveil us.

To see this in action, find a friend who differs from you in some minority aspect and, sitting next to each other, log into Google on two different devices. In the search field, type the same first few words of something controversial, such as "Donald Trump is." You will likely see completely different suggestions for the rest of the search, taking you down entirely different paths and providing each of you with access to different data experiences, a practice known as digital gating.
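Digital gating of this kind reduces to a ranking step keyed off the stored profile. A minimal sketch, assuming a hypothetical candidate pool with per-profile affinity scores (this is an illustration of the concept, not any search engine's actual ranking):

```python
def personalized_suggestions(prefix, candidates, profile, top_n=3):
    """Toy digital gating: identical prefixes yield different
    completions because ranking is weighted by the stored profile."""
    scored = [
        (c["text"], c["affinity"].get(profile, 0))
        for c in candidates if c["text"].startswith(prefix)
    ]
    # Highest profile-affinity completions float to the top.
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [text for text, _ in scored[:top_n]]

candidates = [
    {"text": "topic x is great", "affinity": {"profile_a": 9, "profile_b": 1}},
    {"text": "topic x is terrible", "affinity": {"profile_a": 1, "profile_b": 9}},
]
print(personalized_suggestions("topic x", candidates, "profile_a"))
print(personalized_suggestions("topic x", candidates, "profile_b"))
```

Both users typed the same prefix; only the profile differed, and so did the path each was steered down.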

It is often, but not always, accomplished by means of remote networked databases whose algorithms enable digital discrimination to take place. Risk management, we are reminded, is an increasingly important spur to surveillance. Its categories are constructed in socio-technical systems by human agents and software protocols, and are subject to revision or even removal. Additionally, their operation depends in part on the ways surveillance is accepted, negotiated, or resisted by those whose data is being processed. Technology firms do not rely on automation alone to identify unfavorable minorities, however. Most platforms have a button labeled "report as offensive," and these button clicks are aggregated by algorithms as originating from either favorable or unfavorable individuals.

In a recent meeting of 250 banks in Paris to discuss fighting climate change, different ideas were tested in what they called a beta test, including assigning every person a carbon credit score similar to existing financial credit scores. In these tests, evaluation algorithms were infused with additional weighted factoring from the carbon credit scores. In this way, the social change desired by the elites running the tests can drive even the labeling of "unfavorable" as an additional data "feature" in a larger machine-learning algorithm, punishing undesirable behaviors while rewarding favorable ones (Greenfield & Makortoff, 2021).

The same weighting applies to moderation: clicks of "report as offensive" from unfavorable individuals are largely ignored, while those from favorable individuals receive action with higher confidence and priority. Tech companies and banks are not alone in harnessing the power of majority informants. With sufficient desensitization (rendering people emotionally insensitive or unresponsive, as by long exposure or repeated shocks of a particular kind), even governments can convert citizens into informants against a minority group that has fallen out of favor. As technology companies continue to raise the bar of increasingly invasive surveillance, society has decreased its sensitivity to the subject, whether through market capitalism, with products and services people voluntarily engage in, or by government mandate.
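Favorability-weighted reporting is easy to state as code. In this sketch (the user names, weights, and threshold are hypothetical), each "report as offensive" click is scaled by how "favorable" the platform's profile says the reporter is, so identical clicks carry very different weight:

```python
def moderation_score(reports, reporter_weight):
    """Toy weighted moderation: sum each report's weight, where the
    weight reflects how 'favorable' the platform deems the reporter.
    Reporters with no assigned weight count for nothing."""
    return sum(reporter_weight.get(reporter, 0.0) for reporter in reports)

weights = {"favored_user": 1.0, "unfavored_user": 0.1}
reports_on_post = ["favored_user", "favored_user", "unfavored_user"]
print(moderation_score(reports_on_post, weights))  # prints 2.1
```

Two clicks from favored users nearly carry the post over a hypothetical takedown threshold on their own, while a flood of clicks from unfavored users would barely register, which is exactly the asymmetry described above.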

Governments are now employing sweeping surveillance that would have caused widespread shock and reaction from citizens only a handful of years ago. In China, a project known as "Sharp Eyes" rolled out in 2013. Tens of thousands of cameras were deployed in relatively small areas, such as a single town or province. Those cameras were monitored not only by police and automated facial-recognition algorithms: through special TV boxes installed in their homes, local residents could watch live security footage and press a button to summon the police if they saw anything amiss. The footage could also be viewed on smartphones.

In order to function in a society whose business options are governed by those who surveil, we then must accept, largely without negotiation, their Notice and Consent contracts. Some lawyers call these adhesion contracts because they are so one-sided, and caution that they may be considered void ab initio if they are ever challenged. Yet, to date, they have not been seriously challenged, likely due to the perception that without an agreement, we will not be permitted to function in society.

Under a government employing tyrannical surveillance, citizens are subject to that government's self-given legal authority. If the rights of citizens are granted by some authority above their lawgivers, as in the United States, where rights are held to be endowed by a creator, then citizens can challenge their government, as it is a government of the people. Where rights are granted by a self-sovereign government, such as a dictatorship, there may be little direct influence the people can exert against tyrannical surveillance.

THE RIGHT TO BE REMEMBERED

The landmark European law known as the GDPR contains a rather famous provision known as "the right to be forgotten." It imposes an obligation on social media companies to remove links and data that are personal in nature. Many consumers feel this means all their data will be deleted; however, this is a misconception. It does not mean they will be expunged from the internet. Businesses, for example, will need to maintain records of transactions, whether purchase data or posting data, to provide for nonrepudiation, a legal term meaning that if the company is sued, it has records to back up its defense.

Some companies, such as Verizon Wireless, famously attempted to twist the language of their privacy policies to somehow honor GDPR requirements while remaining truthful about the data, using the phrase "you can delete how we use your data." The data remains, but its use for further productization or publication will be curbed.

Individuals request data deletion for a variety of motivations. It is an individual right but has limitations.

Now consider if there were a similar right to be remembered. This is based on ownership and rights equivalent to ownership over personal content. Unfortunately, we have obfuscated many of those rights through clever, crafty user agreements. The right to forget you has seemingly become the mantra of many tech giants.

This is more than an interesting legal curiosity. The right to be remembered speaks directly to other legal precepts well established for physical property, such as the probate of inherited information rights.

The viability and potential durability of the information rights owned by consumers are protected by law, regulations, tools, and rules that apply just the same as other estate assets. As part of customer viability, inherent control requires methods for 1) deletion, 2) disassociation, or 3) otherwise sufficient modification to the point that it is not recognizable as a “substantial part” of the original. There will be many uses for such an ability to effectively wipe information in a smart way including heritage (post-probate death of the customer, etc.), and the “right to be forgotten.” In other words, how an estate uses assets in perpetuity is just as important as how it disposes of the same assets.
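The second of the three methods above, disassociation, can be sketched in a few lines (the field names are hypothetical, and this is an illustration of the idea, not a compliance recipe): stable identifiers are replaced with one-way random tokens so the records survive for nonrepudiation but no longer point back to a person.

```python
import secrets

def disassociate(records, id_field="user_id"):
    """Toy disassociation: swap stable identifiers for random tokens.
    Records stay linkable to each other but not to the person,
    once the token map is destroyed."""
    token_map = {}
    out = []
    for rec in records:
        original = rec[id_field]
        # Reuse one token per original ID so related records stay linked.
        token = token_map.setdefault(original, secrets.token_hex(8))
        out.append({**rec, id_field: token})
    return out, token_map  # destroy token_map to make this irreversible

records = [{"user_id": "u1", "purchase": "book"},
           {"user_id": "u1", "purchase": "pen"}]
anon, mapping = disassociate(records)
print(anon[0]["user_id"] == anon[1]["user_id"])  # True: linkable, not identifiable
```

Whether the token map is retained or destroyed is exactly the estate-control question the paragraph raises: whoever holds the map holds the ability to re-identify.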

For some individuals, the value of their likeness and memory exceeds that of their physical property. If a power wanted to destroy opposing views, it would certainly want to dispose of these information-rights assets by complete deletion; but now we see alteration as another tool of destruction.

Margaret Court is an Australian sports star, retired tennis player, and former world №1. She won 24 Grand Slam women’s singles titles in her career, 19 Grand Slam doubles titles, and 21 Grand Slam mixed doubles titles. She won more Grand Slam titles than any other player in history and is considered one of the greatest. There is a stadium named after her in Australia.

Ms. Court is also a fundamentalist Christian who has argued of late against nationally recognized same-sex marriage laws. Taking matters into its own hands, Google renamed Margaret Court Arena to Evonne Goolagong Arena on Google Maps ("Google Maps 'renames' Margaret Court Arena", 2017).

Is Google digitally murdering Ms. Court solely for her beliefs? Is her right to be remembered being reversed by Google's right to have her forgotten? What is the goal when powerful groups remove or alter the physical evidence that we exist?

Now that our thinking has arrived here, why do they want to remove these people? There are likely several complementary answers to that question. For an important percentage of the antagonists, it has historically included insecurity in their own belief systems.

"I believe in a particular god" is a common statement. For those who truly believe, it drives them to further and further scholarship; their knowledge increases, apologetics become easier, and their tolerance grows. For those with only a passing knowledge combined with a passionate heart, even the most trivial debate causes intolerance to rise, and they seek to destroy any opposition or threat to their belief system.

The right to be forgotten, or remembered, will either remain the sole election of the information-rights owner, or it will become a tool in the hands of those seeking to perpetrate genocide.

CHANGING THE PARADIGM, TECH GIANTS ARE UTILITIES

U.S. tech giants jealously guard their role as a provider when it comes to the protections of Section 230 of the Communications Decency Act. They do not want to be considered publishers, which would open them to liability and cost them Section 230's protection. At the same time, they carefully balance that desire against being viewed as a utility, which would expose them to regulation and government oversight, and they avoid the term "monopoly" for the same reason, insisting that there is ample opportunity for competitors to spring up and challenge their market position.

If you think about it, what message are they conveying when they call the contract they write between themselves and you, to co-manage your information rights, a "Terms of Service"? Clearly, they are attempting to establish themselves as a service provider rather than a product maker.

With bitter irony, there are also examples where the deletion of individuals, and even whole platforms, for political bias has had the additional benefit of quashing competition: for example, the previously mentioned moves by Amazon, Apple, Google, Twitter, and Facebook to shut down Parler.

It has become clear to even casual observers that the tech giants want to have their cake and eat it too. And until these conditions change, they have demonstrated both the power and the willingness to wade into the waters of data genocide.

Many experts feel the most insidious aspect of the move to be viewed as benevolent is when the tech giants make decisive and strategic moves to gain and use their power to crush minority groups and opinions online. The latest example is Google's recent move to create what it calls a Privacy Sandbox.

As a self-proclaimed pious effort to protect our privacy and rid the world of cookies (those small bits of data that companies deposit on your hard drive for many purposes, not all of them sinister), Google established a common development environment for other tech companies to experiment with new ideas for functioning without cookies. Of course, for identity services and the like, they would all just use Google's technology, because Google was the benevolent company providing the resources.

This led to a new approach called FLoC (Federated Learning of Cohorts), as mentioned earlier in this paper. Google's FLoC replaces individual identifiers with a system that puts users into groups, or cohorts, based on common interests. Of course, Google runs the system.
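The grouping step can be illustrated with a toy locality-sensitive hash over browsing history, in the spirit of the SimHash-style approach FLoC proposals described (this is a simplified sketch with made-up domains, not Google's actual algorithm): similar histories tend to collapse to the same short cohort number.

```python
import hashlib

def simhash_cohort(domains, bits=8):
    """Toy SimHash: collapse a browsing history into a short cohort ID.
    Users with overlapping histories tend to land in the same cohort,
    which is the property interest-cohort grouping relies on."""
    counts = [0] * bits
    for domain in domains:
        digest = hashlib.sha256(domain.encode()).digest()
        for i in range(bits):
            bit = (digest[i // 8] >> (i % 8)) & 1
            counts[i] += 1 if bit else -1
    # Majority vote per bit position yields the cohort number (0..255 here).
    return sum((1 << i) for i, c in enumerate(counts) if c > 0)

history_a = ["news.example", "knitting.example", "recipes.example"]
history_b = ["news.example", "knitting.example", "garden.example"]
print(simhash_cohort(history_a), simhash_cohort(history_b))
```

The output is just a number with no label attached, which is the basis of the anonymity claim; the concern raised earlier is that anyone with enough logged-in users can attach the label themselves.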

Google even admits the plan is flawed: "FLoC will never be able to prevent all misuse," Google said in a recent statement. "There will be categories that are sensitive in contexts that weren't predicted," and companies that use the technology "will need to ensure that people are treated fairly." (Xiao & Karlin, 2021)

A recent blog post by technology expert Don Marti was titled "Your cohorts are just ethnic affinity groups. Change my mind." Marti details how this new, self-proclaimed benevolent system can easily be used to target minority groups, raising the question of whether that can even be prevented. For example, he writes, "Facebook has enough logged-in Google Chrome users that they will know which FLoC cohorts match up to their old ethnic affinity groups as soon as FLoC turns on." (Marti, 2021)
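Marti's concern can be made concrete. Any service with enough logged-in users already assigned to known audience segments could tally which cohort IDs its members report and flag every cohort in which one segment dominates. The sketch below is a hypothetical illustration of that correlation, not any company's actual pipeline; the observations, segment names, threshold, and `dominant_segments` helper are all invented:

```python
from collections import Counter, defaultdict

# Hypothetical observations: (cohort_id, known_segment) per logged-in user.
observations = [
    (17, "segment_A"), (17, "segment_A"), (17, "segment_A"), (17, "segment_B"),
    (42, "segment_B"), (42, "segment_B"), (42, "segment_B"),
]

def dominant_segments(obs, threshold=0.7):
    """Flag cohorts in which a single known segment dominates."""
    by_cohort = defaultdict(Counter)
    for cohort, segment in obs:
        by_cohort[cohort][segment] += 1
    flagged = {}
    for cohort, counts in by_cohort.items():
        segment, hits = counts.most_common(1)[0]
        if hits / sum(counts.values()) >= threshold:
            flagged[cohort] = segment
    return flagged

print(dominant_segments(observations))  # {17: 'segment_A', 42: 'segment_B'}
```

Once a cohort is flagged this way, targeting the cohort is effectively targeting the segment, even though no individual identifier ever changed hands.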

In short, the tech giants are developing digital gating that puts stars on the identities of people who match certain ethnic affinity groups, and they are doing it in the utilitarian name of "privacy."

HAS THE GOVERNMENT FAILED PEOPLE?

We can see governments like Australia's trying to take back power from Google, but the tech giants are fighting to maintain that power.

Other governments, rather than fighting the tech giants, have identified a new source of income: privacy regulations. All they have to do is set up regulatory tripwires that businesses will inevitably stumble through, and then collect hefty fines.

A good question to ask is, "Why have the tech giants gone along with the development of such regulations in multiple countries and fought so little against paying the fines?"

The main reason is likely that the large fines are small relative to the tech giants' cash positions but would mean financial ruin for their up-and-coming competition. In other words, the tech giants actually help craft these regulations to further cement their positions of power and their ability to destroy whatever minority opinions they choose.

Tech companies also embrace and declare obligations to report suspected illegal activity because they get to be the arbiters of what "suspected" means. For example, Facebook and Twitter have been able to slap COVID-19 warnings onto every post that even obscurely relates to the virus, yet they admit they cannot take down all sexually violent material because it is too difficult to get the algorithms right (Statt, 2020; Crawford, 2017). This reveals their priorities.

One area where governments have helped, perhaps as an unintended consequence, is the establishment of Authorized Agents. These are typically nonprofits that can act as a kind of limited power of attorney for individuals in the area of information rights, largely defined as opt-out and the right to be forgotten, bringing strength in numbers to average consumers.

There are many such Authorized Agents. The oldest known is the Privacy Co-op, founded in 2018; newer ones include the Data Dividend Project, founded by former U.S. presidential candidate Andrew Yang in 2020. These groups bring education and action to a global community of data creators and may hold the key to ending the current data genocide.

But this requires people to join and support them in large numbers.

WHAT CAN WE DO?

The introduction of this paper set forth that the ability of each of us to own our data was seemingly lost to the legal constructs of internet companies through crafty terms of service, license agreements, and sophisticated collection mechanisms, potentially resulting in oppression through data genocide. Later we discussed how these contracts are considered by some to be adhesion contracts that may be void ab initio, but they have not been seriously challenged in court, and so they still stand.

This is what we have been led to believe. In truth, however, there is a recently rediscovered solution: the Battle of the Forms, a long-standing global legal construct that paves the way for us to re-assert rights equivalent to ownership through various nonprofits acting as Authorized Agents. The Battle of the Forms is the exchange that ensues when two sides forming a contract each insist on their own language.

These nonprofits can act as a sort of aggregating power of attorney, through which many users can speak with one voice and demand to opt out of all secondary uses of their information until the tech giants capitulate to our terms and agreements. This also works to the companies' benefit, as it brings them licensed opt-ins that can, in turn, reduce their risk from multinational privacy fines and settlements.

When you learn there is a problem as serious as the one discussed in this paper, data genocide, you have taken the first step. The history of genocide is replete with millions of people who took that first step but abandoned the second, so tell other people:

1. Consider sharing this message with your social clubs, schools, religious organizations, and communities. Authorized Agents such as the Privacy Co-op (booking@privacyco-op.com) and Andrew Yang's Data Dividend Project are great places to get speakers.

2. Be concerned when you do not see countering opinions on your social media feed and internet searches.

3. Sign petitions and encourage your elected officials to highlight this problem.

4. Support print media companies.

When there are people who want to make healthy change happen, the next step is to equip them. In this case, the most powerful tool is something you already own: a legal concept called "rights equivalent to ownership." You own the rights to the information contained within the data you generate. These rights have two different parents in the law of most countries.

One primary right is privacy, which is typically a sovereign right guarded by the laws of a sovereign state like a nation. In the United States, privacy is considered a personal or individual right. It cannot be bought or sold, and it is encapsulated in federal laws.

Another primary right is publicity, which is also guarded by laws, typically of a more local region within a sovereign state. In the United States, publicity is considered a contractual right or property. It can be bought or sold, and it is encapsulated in state laws.

Combining these two, we find that in most countries individuals can control their "rights equivalent to ownership" through at least four torts, which carry significant power. These torts are typically something like intrusion (Peeping Tom laws), publication of private facts, misappropriation, and defamation of character. Relying largely on these, nations or states craft their privacy regulations.

In most countries, including most of those cited in this paper, individuals own their information rights and can opt out of all secondary uses of data listed in a company's privacy policy, which is a contract between the company and its users. Courts do not consider that privacy policy to be a single contract; it is a contract per user. That means Facebook has over two billion data user contracts.

These businesses count on fewer than 1% of 1% of users ever opting out, and they consider it significant if that number exceeds 1%. In some cases, if 20 or more individuals were to opt out at once, it would get a company's attention because it would represent a significant threat to its profitability.
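To put those percentages in perspective at the scale of the largest platforms, a quick back-of-the-envelope calculation, using the two-billion-contract figure above (the numbers are illustrative, and smaller firms would reach their pain threshold at far smaller counts):

```python
# Back-of-the-envelope arithmetic using the two-billion-contract figure above.
contracts = 2_000_000_000

usual_optouts = contracts * 0.01 * 0.01  # the "1% of 1%" background rate
threshold = contracts * 0.01             # the 1% attention threshold

print(f"{usual_optouts:,.0f} opt-outs")  # 200,000 opt-outs
print(f"{threshold:,.0f} opt-outs")      # 20,000,000 opt-outs
```

Even the "background" opt-out rate at that scale runs to six figures, which is why the argument below turns on organized, collective opt-outs rather than scattered individual ones.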

Meanwhile, citizens wait for and look to their governments to fine these large entities. In the rare cases where this happens, even if the fine is significant, the government uses the income for other purposes and typically does not share it with the individuals who created the data value in the first place. This means that such fines do little to curb abuse and are viewed largely as 1) a tax on users collected through the business, or a taking, 2) a payoff to politicians from the tech companies, or 3) an affordable way to quash competition that cannot afford large fines. It may actually be all three at once.

Based on all of this, the most powerful tool we have to fight data genocide is to form meaningful groups and submit our opt-outs. Authorized Agents such as the aforementioned Privacy Co-op reportedly make this quick and easy for most people: you can use their tools to look up any business, quickly understand how it uses your data, and click a button to opt out.

The key is learning more, sharing it with others, and then using tools to remedy the problem.

Social clubs, schools, religious organizations, and communities can hold a single meeting, get a meaningful number of people to opt out of a business or two, and collectively that number could cross the 1% margin.

When people opt out through Authorized Agents, a cease-and-desist letter is typically sent to the company demanding opt-outs for all those listed. This begins a legal chain of events through which harm and remedy can be established under the aforementioned torts.

This relatively simple act would likely have an effect similar to a town standing up to a small band of armed terrorists. Historically, the only real lever tyranny has had is the inaction of the masses.

REFERENCES

Allyn, B. (2021, January 21). Judge Refuses To Reinstate Parler After Amazon Shut It Down. NPR. https://www.npr.org/2021/01/21/956486352/judge-refuses-to-reinstate-parler-after-amazon-shut-it-down.

Amoretti, F., & Santaniello, M. (2016). Between Reason of State and Reason of Market: The developments of internet governance in historical perspective. Soft Power, Vol. 3, no. 1 (ene.-jun. 2016); p. 147–167.

Ash, T. G. (2016). Free speech: Ten principles for a connected world. Yale University Press.

Barlow, J. P. (2019). A Declaration of the Independence of Cyberspace. Duke Law & Technology Review, 18(1), 5–7.

Bohn, D. (2021, March 30). Privacy and ads in Chrome are about to become FLoCing complicated. The Verge. https://www.theverge.com/2021/3/30/22358287/privacy-ads-google-chrome-floc-cookies-cookiepocalypse-finger-printing.

Bigo, D. (2006). Security, exception, ban and surveillance. Theorizing surveillance: The panopticon and beyond, 46–68.

Bigo, D. (2008). Globalized (in) security: the field and the ban-opticon. In Terror, insecurity, and liberty (pp. 20–58). Routledge.

Bigo, D., & Tsoukala, A. (Eds.). (2008). Terror, insecurity, and liberty: illiberal practices of liberal regimes after 9/11. Routledge.

Bunn, M. (2015). Reimagining repression: New censorship theory and after. History and Theory, 54(1), 25–44.

Callamard, A. (2017). The Control of "Invasive" Ideas in a Digital Age. Social Research: An International Quarterly, 84(1), 119–145.

Crawford, A. (2017, March 7). Facebook failed to remove sexualised images of children. BBC News. https://www.bbc.com/news/technology-39187929.

Fussell, J. (2000, July 6). Elements of the Crime of Genocide. http://www.preventgenocide.org/genocide/elements.htm.

Gandy Jr, O. (2006). Data Mining, Surveillance, and Discrimination in the Post-9/11 Environment. In The new politics of surveillance and visibility (p. 363).

Gershgorn, D. (2021, March 2). China's 'Sharp Eyes' Program Aims to Surveil 100% of Public Space. Medium. https://onezero.medium.com/chinas-sharp-eyes-program-aims-to-surveil-100-of-public-space-ddc22d63e015.

Godwin, M. (2003). Cyber rights: Defending free speech in the digital age. MIT press.

Gerstenfeld, P. B., Grant, D. R., & Chiang, C. P. (2003). Hate online: A content analysis of extremist Internet sites. Analyses of social issues and public policy, 3(1), 29–44.

Greenfield, P., & Makortoff, K. (2020, March 18). Study: global banks ‘failing miserably’ on climate crisis by funneling trillions into fossil fuels. The Guardian. https://www.theguardian.com/environment/2020/mar/18/global-banks-climate-crisis-finance-fossil-fuels.

Grimes, K. G. K. (2020, October 7). Conservatives, Republican Candidates Report Censorship and Shadow Banning by Twitter, Facebook, Google. California Globe. https://californiaglobe.com/section-2/conservatives-republican-candidates-report-censorship-and-shadow-banning-by-twitter-facebook-google/.

Hart, R. (2021, January 22). Google Threatens To Shut Down Search Engine In Australia If Forced To Pay Publishers For News. Forbes. https://www.forbes.com/sites/roberthart/2021/01/22/google-threatens-to-shut-down-search-engine-in-australia-if-forced-to-pay-publishers-for-news/?sh=4fb9238f1291.

Jonassohn, K., & Chalk, F. (1987). A typology of genocide and some implications for the human rights agenda. Genocide and the Modern Age: Etiology and Case Studies of Mass Death, 3(4).

Lyon, D. (Ed.). (2003). Surveillance as social sorting: Privacy, risk, and digital discrimination. Psychology Press.

Lemkin, R. (1947). Genocide as a crime under international law. American Journal of International Law, 41(1), 145–151.

Marti, D. (2021, March 12). Your cohorts are just ethnic affinity groups. Change my mind. https://blog.zgp.org/floc-affinity/.

Moore, M., & Tambini, D. (Eds.). (2018). Digital dominance: the power of Google, Amazon, Facebook, and Apple. Oxford University Press.

Moses, A. D. (2010). Raphael Lemkin, culture, and the concept of genocide. In The Oxford handbook of genocide studies.

Neilsen, R. S. (2015). ‘Toxification’ as a more precise early warning sign for genocide than dehumanization? An emerging research agenda. Genocide Studies and Prevention: An International Journal, 9(1), 9.

Praharaj, K. (2020, June 29). How Are Algorithms Biased? Medium. https://towardsdatascience.com/how-are-algorithms-biased-8449406aaa83.

Ramaswamy, V., & Rubenfeld, J. (2021, January 11). Opinion | Save the Constitution From Big Tech. The Wall Street Journal. https://www.wsj.com/articles/save-the-constitution-from-big-tech-11610387105.

Rivero, N. (2021, January 13). The Parler problem shows just how much the US has outsourced free speech protections. Quartz. https://qz.com/1956380/amazons-parler-ban-displays-big-techs-power-over-online-speech/.

Satariano, A. (2021, January 14). After Barring Trump, Facebook and Twitter Face Scrutiny About Inaction Abroad. The New York Times. https://www.nytimes.com/2021/01/14/technology/trump-facebook-twitter.html.

Savage, R. (2013). Modern genocidal dehumanization: a new model. Patterns of Prejudice, 47(2), 139–161.

Smith, C. (2018, April 16). Facebook tracks you even if you’re not a user, and you can’t really do anything about it. BGR. https://bgr.com/tech/facebook-tracking-non-users-5624952/.

Stanton, G. (2013). The ten stages of genocide. Genocide Watch.

Statt, N. (2020, August 12). Facebook will now show a warning before you share articles about COVID-19. The Verge. https://www.theverge.com/2020/8/12/21365305/facebook-covid-19-warning-notification-post-misinformation.

The Daily Telegraph. (2017, June 8). Google Maps ‘renames’ Margaret Court Arena after tennis controversy. https://www.dailytelegraph.com.au/sport/tennis/google-maps-renames-margaret-court-arena-after-tennis-controversy/news-story/d44141e86f396b85388d6216f9eba0c5.

United Nations. (n.d.). United Nations Office on Genocide Prevention and the Responsibility to Protect. United Nations. https://www.un.org/en/genocideprevention/genocide.shtml.

Van der Ploeg, I. (2003). Biometrics and the body as information. Surveillance as social sorting: Privacy, risk and digital discrimination, 57–73.

Waller, W. (1932). The sociology of teaching.

Wheeler, T. (2020, July 31). Big Tech and antitrust: Pay attention to the math behind the curtain. Brookings. https://www.brookings.edu/blog/techtank/2020/07/31/big-tech-and-antitrust-pay-attention-to-the-math-behind-the-curtain/.

Wu, T. S. (1996). Cyberspace Sovereignty — The Internet and the International System. Harv. JL & Tech., 10, 647.

Xiao, Y., & Karlin, J. (2021, April 13). Federated Learning of Cohorts. https://wicg.github.io/floc/.

York, J. C. (2016). Facebook and Twitter Are Getting Rich by Building a Culture of Snitching.

ABOUT THE AUTHOR

Rose Davis graduated with honours in Communication and Sociology from the University of Toronto, specializing in genocide studies, which led her to travel and study first-hand the effects of the Holocaust and the Cambodian genocide. She is an advocate for human rights and is passionate about putting a stop to human trafficking conducted through digital platforms.


Privacy Co-op Media Staff

https://Privacy.coop You own the rights to your information, and businesses desire your direction. Learn about your choices and direct them in less than 3 minutes.