Little Big Brothers

Every second of every day, there are thousands and thousands of people all over the world sifting through the content of social media.

Journalist

Write to the author | Published 17 December 2018
14 minutes

Facebook, Twitter, YouTube… It’s not only machines and algorithms filtering millions of posts, videos, photos and tweets. Artificial intelligence has not reached the point where it can replace human intervention and, paradoxically, the more users there are on the networks, the more a human eye is required to vet the huge flow of data that constantly zips across the planet.

Their work is hush-hush, carried out behind closed doors. But recently some have managed to pry these doors open, allowing us a glance inside.

They work at a breakneck pace, under stressful and isolated conditions; their pay is low and their contracts are ambiguous. A lot of the simple “filtering” takes place in countries like India and the Philippines, but there are also country-specific moderators, reflecting the limitations, laws and particular characteristics of each nation. A checker in Manila cannot be expected to understand the offensive subtleties of an Italian dialect, to give just one example…

A furtive, submerged world, but one that is becoming increasingly decisive. Social media exists and thrives on two basic, non-negotiable precepts: emotional engagement and the customer experience of those using the platform. In the case of Facebook especially, something is starting to give. The level of conflict, rage and division into competing “partisan packs” has reached such a point that people are abandoning Zuckerberg’s little game in droves and turning to other social networks. This mechanism of rejection is also starting to develop on Twitter and YouTube, with tentative inroads even into what was always considered the most benign of the social networks, Instagram, which has long since become the preferred online platform for the under-20s.

So in the face of the steady increase in “haters” on social networks, the role of moderators and checkers is ever more important.

One of the first people to analyse and study these workers – giving the job the specific name of Commercial Content Moderation (CCM) – was Sarah Roberts, Assistant Professor of Information Studies at UCLA (https://gseis.ucla.edu), who spoke exclusively to Estreme Conseguenze:

 

EC: How long have you been studying this phenomenon of “checkers”?

SR: Interest in these issues is relatively recent. My field of study and research centres on people who work as professional “checkers” in an organised manner. I make this distinction because until recently this kind of work was often done by volunteers. The best-known example is Wikipedia. The major social networks, though, need to organise the staff and pay them. And in secret.

We are talking about huge numbers of people the world over. A rough estimate suggests there are at least 100,000 checkers. There are a variety of strategies, and knowledge of the language and cultural characteristics of every single nation is a vital factor. In all likelihood there is an “Italian team”, just as there will be a French one or a German one, because the trend among these big tech companies is towards ever more competent and localised personnel. For some types of visual content the vetting mechanism is simpler, but once you enter into cultural or political content, you need people who understand the context in which it is posted.

 

EC: Why is there so much secrecy shrouding these workers, so much so that requests for information to the companies’ national offices are rarely answered?

SR: There are a variety of reasons behind this secrecy. The first admissions by Facebook, YouTube and the others that they employ this kind of worker date back to the start of 2017, so not a lot of time has passed since then. I think they released this information because the public was starting to ask questions about how social networks functioned, especially after the Trump campaign. That was the turning point, when people started to ask: “Is anyone vetting this content?” And the majors had to reply that, yes, they had dedicated personnel dealing with these issues. The veil of secrecy was due to the fact that the companies wanted to resolve the problems themselves; they didn’t want to advertise either the problem or how they were remedying it, because that would have been tantamount to admitting the problem existed in the first place, and it would have opened them up to criticism of the internal solutions they were coming up with. For years they proclaimed that social media was a platform for open expression, with everyone able to contribute to vetting, partly to avoid, for financial reasons, alarming advertisers. There is a third element, less tangible but just as important: these companies consider themselves IT companies rather than media companies. They feel like technology firms with a sideline in mass communication. So they firmly believe they have the technology to resolve any problem from within.

It was an epochal step for them: understanding that certain issues could not be resolved with a new algorithm or machine checks, but required the intervention of human beings. Their ideology revolves around numbers and artificial intelligence, so having to rely on people was almost traumatic for them. It overturned their worldview, not only with regard to social media but also, for example, in the construction of smartphones. It was as if the predestined trajectory of an envisioned wholly automated management had been derailed by problems of another nature.

 

EC: What can and can’t be censored?

SR: Let’s begin with the fact that every rule can be bent and is constantly updated. On the one hand, there is content that is obviously illegal and socially unacceptable, such as child pornography, depictions of rape, bullying, disturbing or violent images, and incitement to hatred, where removal by the company is a legal obligation. This is where the majority of the checkers’ work comes into play: they watch dozens of violent videos and other such content day in, day out. And this takes its toll in terms of stress, because they have to watch or read 8-10 hours per day of horrifying videos or text posted by anonymous users who know full well how to work the social network system.

Their working conditions vary greatly from country to country, in line with local labour laws, but if we try to imagine a standard model, we should picture a call centre in a place like India or the Philippines, with hundreds of people in an open space, usually underpaid. In Western nations these workers have better conditions, perhaps working in the ultra-modern Google or Facebook headquarters, but they are not paid directly by Big Tech: they are subcontracted to smaller firms. And this is a problem. There was a particularly high-profile case in the USA in which two checkers sued Microsoft over this very issue, that is, over the fact that they worked for the IT giant to all intents and purposes but were treated as external staff, with all the contractual and salary differences that entails. It is surprising that the case came to light, because I would have thought Microsoft would have preferred an internal settlement without generating negative publicity, as with many other lawsuits we have never heard about; in the end, they reached a pre-trial settlement. There are still unresolved lawsuits against Facebook: https://mashable.com/article/facebook-moderator-sues-ptsd/?europe=true.

Sometimes these workers operate remotely, all alone, and I think this is even more dangerous. They should be guaranteed psychological assistance by the big companies, something that does not happen. And this is just the beginning; there is as yet no data on the damage caused by prolonged exposure to this kind of content. Studies need to be carried out, because over the next few years the number of checkers will increase drastically. And they will cost the companies more money, because they need more safeguarding and higher pay. Will Big Tech be willing to spend the necessary money? Time will tell.

 

EC: What else do they vet?

SR: There’s a whole other universe of fake news, propaganda, and distorted or partial information, and this universe is more complicated because these same companies often have no precise guidelines on how to deal with such content. Nobody has clear-cut ideas or defined policies. It is undoubtedly uncharted terrain. And that is why I think the number of checkers is on the rise: the moderation of so much information is destined to increase, and we have to come to terms with the fact that social networks are not the open, uncensored places we have always fooled ourselves into thinking they are. The true problem, at root, is the mechanism by which social networks function: their engagement with users, the volume of views that group content (or perhaps better, pack content) generates, which inevitably favours negative aspects over positive ones. Social networks thrive on hatred; it brings people together more effectively. But this mechanism is coming under increasing pressure from society, because it is causing havoc the world over, something that is apparent every day.

Now Big Tech finds itself at a crossroads: on one side, the satisfaction of the “pack”; on the other, the growing dissatisfaction of users who find themselves in ever more violent and aggressive situations; indeed, “mature” users in more developed countries are gradually abandoning social networks. For now, Facebook and YouTube’s primary aim is to maintain their number of users rather than increase it. And here too, the role of checkers is fundamental, in that they have to ensure a pleasant environment without too much disturbance within the network. Artificial intelligence will always play a role, and in the future they will probably find a way to implement it better. But perhaps this will mean that even more human eyes are required to verify the increased volume of data processed by machines!

 

EC: How do you see the future of social networks?

SR: Big Tech is reaching its critical mass of users. I think they will be obliged to concentrate on content and quality, even if it means losing users. I think users will be the first to demand a more user-friendly environment, holding the platforms to account for the content posted on them. So there will be more filters, more checkers and more data available, but fewer users overall for each individual social network, which will increasingly have to differentiate themselves to stand out and survive.

To further explore Sarah Roberts’ work, visit: https://illusionofvolition.com

 


Journalist

Daniele De Luca worked for 15 years as an editor at Radio Popolare in Milan, moving from local news to the national news bulletins. He was a United States correspondent for Radio Popolare. He has contributed to Diario and the weekly L’Espresso. He was editor-in-chief at CNRMedia. He is the director of ‘FuoriDiMilano’, the first free-press magazine produced by an editorial team of mental health service users.
