The looping effects of 2020: cancel culture, Uber and Yelp

Miranda Marcus
Nov 15, 2020

The way we categorise things is often socially constructed. For example, race, one of the most common demographic categories we are surrounded by, is pretty universally agreed across biology, anthropology and sociology to be a socially constructed label. So it isn’t surprising that the way we categorise ourselves affects the way we behave.

Every categorisation or classification represents a different form of being, and when we change how that form of being is described and grouped, it changes the self-conception of those in that group and, accordingly, their behaviour. This is a phenomenon which Ian Hacking has termed ‘the looping effects of human kinds’. He describes how ‘human kinds’ are distinguishable from ‘natural kinds’ by their tendency to loop in on themselves recursively:

“There is a looping or feedback effect involving the introduction of classifications of people… Kinds are modified, revised classifications are formed, and the classified change again, loop upon loop.”

Take the debates this year about cancel culture. Meredith Clark, in her paper ‘DRAG THEM: A brief etymology of so-called “cancel culture”’, writes about how social media callouts have evolved from their roots in Black vernacular tradition to their misappropriation in the digital age by social elites. She argues that the application of useful anger by minoritized people and groups has been effectively challenged by the dominant culture’s ability to narrativize the process of being “canceled” as a disproportionate moral panic and an attempt to curtail freedom of speech. Those being canceled found themselves in a new category based on their behaviours. In response, the concept of ‘cancel culture’ emerged, and with it a set of apparent behaviours that described those doing the canceling. As both categories evolved, so too did the behaviours they described and the groups of people aligned with them. They looped, and then looped again.

This is an example of explicit categorisation, but many of the conversations about the data economy, and its ability to reinforce structural racism through platform technologies and their use of data, are fundamentally about the same dynamic. The ever-shifting recategorisation of ourselves and our environments through data, employed by the ad-tech industry and beyond, gently nudges us to behave one way or another. This implicit power is the trick that makes surveillance capitalism tick.

The way data is treated perpetuates the idea that it is objective: a pure, Euclidean reflection of our messy, situated, social world. Those deciding what data gets collected, and how, tend to perform the god trick of removing themselves from the picture. In doing so, they erase the setting from the data set, and along with it the values they have inevitably embedded in the space between the data points. These values then get amplified through the use of networked technologies, mainly for financial and sometimes political gain, and it all gets very complicated.

Often we think of this as being perpetrated by invisible AIs wielded by ad tech, oppressive governments and mega-corporations. And it is. But it is also visible in technologies that afford the collective categorisation of others, such as Uber and Yelp.

Uber is currently facing a class action lawsuit, filed by a former driver, which claims that the ratings system that allows riders to rate their drivers violates the US Civil Rights Act. The argument is that non-white drivers receive disproportionately low ratings, which in turn affects their ability to earn an income. Riders embed their implicit racism into their ratings, and those ratings are then aggregated across the system. A commercial infrastructure intended to enforce meritocracy through mutual ratings in fact ends up reinforcing structural racism. Uber denies this and maintains that, because riders are also rated on their behaviour, it is a level playing field. This of course does not take into account the commercial dynamics between customer and service provider, the fact that the rider can always get a bus, and the fact that no technology is neutral.
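To see how that aggregation works mechanically, here is a minimal toy simulation in Python. It is not Uber’s actual system; the rating scale, the size of the bias, the number of rides and the deactivation cutoff are all assumptions for illustration. Two drivers deliver identical service, but one receives ratings that carry a small average penalty, and the platform deactivates anyone whose average falls below a threshold.

```python
import random

random.seed(0)

# All numbers below are illustrative assumptions, not real platform parameters.
N_RIDES = 500        # rides per driver
TRUE_QUALITY = 4.8   # both drivers deliver identical service
BIAS = 0.3           # average rating penalty applied by biased riders
CUTOFF = 4.6         # hypothetical deactivation threshold

def average_rating(bias: float) -> float:
    """Aggregate one driver's star rating when each ride carries a per-ride bias."""
    ratings = []
    for _ in range(N_RIDES):
        noise = random.gauss(0, 0.4)             # ordinary rider-to-rider variability
        raw = TRUE_QUALITY - bias + noise        # identical service, biased perception
        ratings.append(min(5.0, max(1.0, raw)))  # clamp to the 1-5 star scale
    return sum(ratings) / len(ratings)

unbiased_avg = average_rating(0.0)
biased_avg = average_rating(BIAS)

print(f"driver rated without bias: {unbiased_avg:.2f}  deactivated: {unbiased_avg < CUTOFF}")
print(f"driver rated with bias:    {biased_avg:.2f}  deactivated: {biased_avg < CUTOFF}")
```

The service is identical, yet only one driver ends up below the line: the aggregation step converts many small individual prejudices into what looks like an objective performance score.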

But it’s not all bleak. Hacking also suggests that once new categories are made, they open up the space for new realities to take effect.

“Social change creates new categories of people, but the counting is no mere report of developments. It elaborately, often philanthropically, creates new ways for people to be.”

And so it is not a small thing when new categories emerge. In contrast to the Uber example, following the Black Lives Matter protests over the summer, Yelp announced that it will label businesses accused of racist behaviour in the US.

Example of the Yelp alert

The platform, which lets users rate bars, restaurants and attractions, said the labels would require “resounding evidence”, in the form of public attention and “floods of reviews”. Such a label would of course have a big impact on the businesses under fire, and Yelp already struggles against a never-ending tide of fake reviews. But despite the complexities of implementing such a category fairly, it demonstrates that it is possible for platforms not only to acknowledge their power, and the power of the data they collect and distribute, but to act on it. In acknowledging this, Yelp is taking an active standpoint rather than perpetuating the idea that technology is neutral.

So pay attention to the new categories that we are creating around us, and the infrastructures we use to allocate them. They pack a punch.


Miranda Marcus

Acting Head BBC News Labs / Wellcome Trust Data For Mental Health Research. ex Open Data Institute. Writes about data, design, digital, and anthropology.