
The 21st-century hazardous job

Are hazardous jobs a thing of the past in Europe? In the age of artificial intelligence, we may need to start thinking differently about what such a job looks like, focusing less on the lungs or the heart and more on the mind. One rapidly expanding industry in Europe is producing growing evidence that mental ill-health is a systemic problem for its workers. That industry is content moderation. And a network of workers, researchers and politicians is catching on and fighting back.

“There are surely some content moderators that haven’t suffered mental health problems connected to the job, but I haven’t met them,” says sociologist and computer scientist Milagros Miceli, who has studied the content moderation industry for the past six years. “I have no doubt that content moderation, like coal mining, is a hazardous job.”

Coal mining, known for the proliferation of ‘black lung disease’, is a classic example of a hazardous job, but there are only approximately 200,000 coal miners left in the whole of the European Union. There are plenty of other jobs which come with dangers, but few where ‘risk to your health’ is still written into the job description.

Content moderation may, however, qualify as the new exception. Just as exposure to silica dust caused lung diseases in miners, endless toxic and disturbing content is a threat to the mental health of those employed to engage with it on a daily basis.

Content moderators are essentially the security guards of social media. They are tasked by platforms like Facebook and TikTok with removing content that breaches their guidelines. The posts that they filter out include hate speech; violent, graphic and pornographic content (including child sexual exploitation); content from proscribed organisations such as terror groups; bullying and harassment; and suicide and self-harm.

The content that reaches the screens of content moderators has either been flagged by a user or identified by an AI system as a potential candidate for removal. A big part of content moderators’ work is labelling the content they see on screen in order to train the AI to become more proficient at identifying harmful content.

Chris Gray is the first former content moderator to take Meta to court in Europe. He worked in Dublin for CPL, an outsourcing firm for Meta, from 2017 to 2018. It was only years after he was fired that Gray began to come to terms with how the job had affected him. “I met a journalist who wanted the human-interest angle, she was pushing me to talk about disturbing content,” he recounts. “I hadn’t spoken to anyone about that. I didn’t speak to my wife about it. The NDA [non-disclosure agreement] had been hammered into me: ‘Don’t ever talk about the work’.”

“As I started to tell her, I had a complete meltdown, I just lost control of myself completely. I sat in a coffee shop with tears streaming down my face. The journalist insisted I go to a doctor, and it started from there.” Gray was diagnosed with PTSD and in 2019 he took a claim against CPL and Meta to the Irish High Court, alleging psychological injuries from repeated exposure to extremely disturbing content. The case is still pending.

In the United States, a similar case involving Meta content moderators was settled out of court, with the workers receiving damages of up to 50,000 US dollars per person. Gray has not yet been offered an out-of-court settlement, and says he wouldn’t accept one if he were. “As part of the US settlement, they had to accept that no one was harmed to get the pay-out. That’s no good to me.”

“Content moderation is like where the tobacco industry was in the 1960s. Everyone knows it’s harmful but it hasn’t been proven yet and there’s a huge vested interest in maintaining the fiction that it isn’t an issue. I want it to be proven by a court of law that this job is harmful to workers’ health. Once we’ve established that it’s harmful, we can start the conversation about how to mitigate the risk.”

While content moderators work for the big social media platforms, they are hired almost exclusively through outsourcing firms, typically known as ‘BPOs’ (‘business process outsourcing’ companies). A veil of secrecy surrounds this industry.

No major platform has been willing to say how many content moderators are hired on its behalf by these firms, or even to provide a list of the BPOs it works with, but there is little doubt that this is a big and rapidly growing sector: three million posts were flagged for removal every day on Facebook alone in 2021.

Some content moderation work can be offshored, with the Philippines in particular a major global centre for the industry. However, Antonio Casilli, an expert in ‘click work’ (of which content moderation is one sub-section), says that the platforms cannot avoid employing content moderators within the EU.

“Sometimes content moderation has to happen in Europe for legal reasons, because they are managing content and data that is subject to GDPR [the EU General Data Protection Regulation]. Also, there are linguistic reasons. You can’t find people in Africa, for instance, who speak specific languages, like Lithuanian or Swedish. Some things can’t be outsourced to lower-income countries.”

According to Casilli, the European content moderator industry has become highly concentrated in recent years, with a few big firms buying out rivals and dominating the sector. Teleperformance, Appen and Telus are three of the biggest players. These BPOs organise the work in call centre-style office environments where surveillance of workers is intensive and secrecy is a top priority.

“Their contracts are extremely strict on non-disclosure agreements, they are really NDAs disguised as work contracts,” explains Casilli. “In these contracts, not much is said about the rights that workers have. Basically there is a lot of emphasis on secrecy and confidentiality. And they do not mention the specific health risks that are associated with this work.”

Another typical feature of the content moderator industry is that the workers are migrants. Casilli is one of the authors of Who Trains the Data for European Artificial Intelligence?, a new study by the EnCOre Initiative on click workers (including content moderators), commissioned by The Left Group in the European Parliament. The researchers held focus groups with content moderators working at BPO firms Telus and Accenture in Germany (in Berlin and Essen) and at an anonymised BPO firm in Portugal.

At the Portuguese site, every worker they spoke to was a migrant: from Russia, Poland, India and Turkey. At the German sites, most of the workers were migrants, including many from Asia and Africa. “They are contractually blackmailed because their work status is often linked to their visa,” explains Casilli. “So if they stop working for these companies, or if they whistle-blow, they face the risk of deportation.”

From the BPOs to the NDAs to the migrant visas, the big social media platforms are protected from accountability for the working conditions of their content moderators by layers of deniability, secrecy and marginalisation. But behind these walls of opacity there exist real people with real stories, and some of them are determined to be heard, despite the barriers they face in speaking out.

Like Chris Gray, Ayda Eyvazzade, an Iranian former content moderator in Berlin, is in no doubt about the harms of the job. “I experienced some really traumatising moments,” she tells HesaMag. “I remember watching a child being exploited sexually, and that image has stayed in my mind. You feel very alone and solitary when doing that work; very hopeless and insecure. The quality of my sleep was really damaged. I would see the images in my nightmares. I would wake up more tired than when I went to sleep.”

Eyvazzade was sacked in November 2023 after working for almost five years for an outsourcing firm (which she does not wish to name due to the NDA she signed). She says that the combination of human and digital surveillance intensifies the pressures of the job. The content moderators have KPIs (‘key performance indicators’) which they have to meet, and time away from the computer due to the distress of the images and videos that they have witnessed counts as ‘unproductive’ time.

“If something you see is really difficult then you can leave your desk, but at that moment you have to remember to put on your computer that you are on ‘wellbeing’,” explains Eyvazzade. “But if the supervisors think you are using ‘wellbeing’ more than you should, they will intervene. They would say: ‘Your ‘production’ time is a bit lower than expected, you have been on ‘wellbeing’ a lot.’ So you are pressured to increase your time on ‘production’ by decreasing your ‘wellbeing’.”

After a content moderator in Telus’ Essen office committed suicide, the company changed its policy so that workers could have unlimited ‘wellbeing’ time. However, Milagros Miceli, who has conducted research with content moderators in Essen, says that the pressure to watch a lot of content in a short space of time still exists.

“The content moderators have the right to wellness breaks, but the KPIs that you need to achieve, you won’t achieve them if you take too many breaks,” she explains. “KPIs are always the most important disciplining factor for workers managed via algorithmic management.”

The EnCOre Initiative study, which Miceli also co-authored, found ‘incidents in which workers have fainted, suffered from burnout, experienced psychotic episodes, and, tragically, in at least one instance, committed suicide’. None of this will be news to Meta founder and CEO Mark Zuckerberg.

In a leaked audio recording of a meeting in 2019, he was told by staff that many content moderators were suffering from PTSD. The CEO responded that “some of the reports, I think, are a little overdramatic”.

Miceli, who has spoken to hundreds of content moderators, believes the exact opposite. “The problems are much worse than people think,” she says. “I’ve heard a man say his wife has left him because he cannot perform sexually after reviewing paedophilia content. All of these workers have real conditions, certified by real psychiatrists.”

The BPOs say they provide in-house counsellors, but both Gray and Eyvazzade said that most of the counsellors they spoke to were under-qualified. Miceli agrees. “Quite a lot of the in-house counsellors are not certified therapists. Also, a lot of workers have the suspicion that counsellors inform managers about what they are told by the workers.”

She thinks the German union ver.di (Vereinte Dienstleistungsgewerkschaft, the United Services Union) could have been more assertive in challenging the company, including through the courts.

Miceli also believes that if unions are going to grow in this sector, they need to be better prepared to confront the secrecy and intimidation that dominates content moderation work. “Definitely part of the issue is unions struggling to adapt to new times and struggling to relate to the migrant workers who are leading the organising in these new digitised spaces,” she says.

If unions have work to do in this area, surely regulators should also be taking a close look. The 2022 EU Digital Services Act (DSA) significantly increased the burden on platforms to police content, leading to the content moderator industry burgeoning in Europe, but the DSA paid no attention to the safety of content moderators themselves.

“The Digital Services Act has increased the amount of moderation but it has also increased the centralisation of moderators,” Casilli says. “The market is becoming bigger and bigger but with fewer actors.”

In May, the European Commission announced a new investigation into Meta over potential breaches of the DSA in relation to the safety of children using Instagram and Facebook. A senior Commission official has also questioned how X can be meeting its DSA obligations while hiring significantly fewer content moderators than Meta and TikTok. But the more the platforms hire content moderators to cope with political pressure from the EU, the more workers will be placed at risk: a tricky balance that has yet to be addressed.

“It’s totally about what is politically important,” content moderator Chris Gray says of the regulatory debate in Europe. “Everyone who has kids cares about their kids not being exposed to horrible stuff on social media, but how many of those parents care that there’s a bunch of people in a room somewhere that has to look at this stuff over and over again to stop your kid seeing it?”

The content moderators’ Works Council at Telus in Essen has made a number of proposals to improve working conditions: increased vacation time to alleviate mental stress, access to professional mental health support without fear of it being reported to management, fair compensation, recognition of their work as a skilled profession, and recognition that this is a hazardous job, with appropriate mitigation measures applied accordingly.

In the German Bundestag (federal parliament), a summit for content moderators was held in 2023, at which the content moderators presented a manifesto and one of the workers from the Essen Works Council gave testimony. But in a clear sign that the BPOs have little appetite for change, this worker was subsequently suspended by Telus for breaching his NDA. The Bundestag has yet to act on the content moderators’ recommendations.

Leila Chaibi, the Member of the European Parliament who leads The Left Group’s work on AI and work, says the EnCOre Initiative study highlights the need for European regulatory action in this area. “This report should be a wake-up call to the EU decision-makers to act to protect the rights of click workers, and address their specific needs,” she tells HesaMag.

Despite the secrecy of the platforms and BPOs, it’s only a matter of time before the content moderator industry is brought out of the shadows and into the light. At that point, platforms like Meta and TikTok will have to answer for why they can have hundreds of pages of guidelines to ensure the safety of their users, but none concerning the safety of their content moderators.

This article was first published in HesaMag, the ETUI magazine on health and safety at work.