Self-harm? Smartphones, Facebook & Social Media

Just two minutes of scrolling through bad-news stories on Twitter — a habit known as ‘doomscrolling’ — or viewing them on YouTube is enough to cause a person’s positive mood to plummet. Several scientific studies have found that levels of anxiety and depression increased during the pandemic, particularly among people who read covid-related news every day.

Psychologists from the University of Essex, led by Dr Kathryn Buchanan, set out to discover how quickly the negative impact was felt after exposure to covid content. In two separate studies, people were randomly assigned to spend a few minutes consuming covid-related information, either by reading a real-time Twitter feed or by watching a YouTube video of someone commenting on bad covid news.

In both studies, participants reported lower well-being compared with a control group who had not been exposed to any covid news. The researchers found that as little as two minutes of bad news about the pandemic was enough to have a powerful effect on people’s mood and emotions.

Perhaps unsurprisingly, positive covid stories about random acts of kindness did not have the same negative effect. This suggests that it is not simply time spent on social media that is the problem, but rather the consumption of bad news.

If just a few minutes of exposure to bad covid news can result in an immediate reduction in well-being, then extended and repeated exposure may, over time, add up to significant mental health consequences. People should be mindful of their own news consumption on social media, which in many countries is on the rise. This is despite the fact that most people are aware that news on these platforms is of lower quality, accuracy and impartiality, and is less trustworthy.

Half of adults in the UK now use social media to keep up with the news, including 16% who use Twitter and 35% who use Facebook. Even minimal exposure to bad news on these platforms can have negative consequences.

People should attempt to offset the negative effects by balancing bad news with more positive information.

But today’s teenagers are lonelier at school than those of 20 years ago, because smartphones stop them talking with friends. Teens often feel excluded if they see online pictures of peers having fun without them.

Researchers — and parents — believe the problem stems from the widespread use of smartphones and social media by this age group. The researchers also say adolescent wellbeing began to decline after 2012, at the same time as the rise in smartphone access.

In 2000, 10% of 15- and 16-year-olds in the UK had high levels of school loneliness. By 2012, the figure had increased to 15%, soaring to 25% in 2015 and 33% in 2018.

Around 60,000 teenagers were interviewed for the research. They were asked to rate how much they agreed or disagreed with statements including ‘I feel like an outsider, or I feel left out of things at school’ and ‘I feel awkward and out of place in school’.

The research team, led by San Diego State University, studied children worldwide and said school loneliness is a predictor of low well-being and depression among adolescents. The authors added that social media in particular is having a negative effect because it may heighten feelings of missing out, or lead to cyberbullying.

The research confirmed a strong correlation between smartphone use and loneliness, although the causal connection, and thus the blame, cannot be categorically proved.

Generally, the increases were higher among girls than boys. 

The study was published in the Journal of Adolescence.

The negative impact of social media on young people’s mental health has long been recognised, but a new study suggests that the age at which they are most susceptible differs between girls and boys.

Researchers asked adolescents about their use of social media sites such as Instagram and Twitter and their level of ‘life satisfaction’, and then looked for a link between the two.

They found girls experience a negative link between social media use and life satisfaction when they are 11–13 years old and boys when they are 14–15 years old.

Sensitivity to social media use is probably linked to developmental differences, such as changes in the structure of the brain, or puberty, which occurs later in boys than in girls, although the exact mechanisms will require further research. But the team also found that lower life satisfaction can drive increased social media use, contributing to a vicious circle.

However, the key findings indicate that in girls, social media use between the ages of 11 and 13 years was associated with a decrease in life satisfaction the following year. In boys, this occurred between the ages of 14 and 15 years.

In both females and males, social media use at the age of 19 was again associated with a decrease in life satisfaction. Decreases in life satisfaction also predicted increases in social media use one year later; unlike the forward link, this reverse association did not change with age or differ between the sexes.

Not only can social media use negatively impact wellbeing, but lower life satisfaction can drive increased social media use. However, beyond the sex-based differences, the team were unable to predict which individuals are most at risk.

The study was led by Dr Amy Orben, a group leader at the MRC Cognition and Brain Sciences Unit, University of Cambridge.

Dr Orben said “Changes within our bodies, such as brain development and puberty, and in our social circumstances appear to make us vulnerable at particular times of our lives… With our findings, rather than debating whether or not the link exists, we can now focus on the periods of our adolescence where we now know we might be most at risk and use this as a springboard to explore some of the really interesting questions.”

The team cannot prove ‘causality’ — meaning they can’t specifically conclude that increases in social media use cause decreases in life satisfaction, but it is the most likely cause, if only because it is the only thing that has really changed in society in the last two decades.

According to the researchers, there is still ‘considerable uncertainty’ about the precise mechanisms by which social media use relates to wellbeing. To better establish which individuals might be influenced by social media, the researchers are now calling on social media companies such as Meta (owner of Facebook and Instagram) to share their data with scientists. The chances of them doing that are, of course, roughly nil.

Professor Andrew Przybylski is the director of research at the Oxford Internet Institute, University of Oxford.

Professor Przybylski says: “To pinpoint which individuals might be influenced by social media, more research is needed that combines objective behavioural data with biological and cognitive measurements of development. We therefore call on social media companies and other online platforms to do more to share their data with independent scientists, and, if they are unwilling, for governments to show they are serious about tackling online harms by introducing legislation to compel these companies to be more open.”

The research team – which included psychologists, neuroscientists and modellers – analysed two UK datasets comprising some 84,000 individuals aged between 10 and 80. These included longitudinal data (data that tracks individuals over a period of time) on 17,400 young people aged between 10 and 21.

For each participant, data on social media use and self-reported life satisfaction were collected once a year between 2011 and 2018. To measure social media use, adolescents were asked:

“On a normal weekday during term time, how many hours do you spend on social networking or messaging sites or apps on the internet, such as Facebook, Twitter and WhatsApp?”

The team looked for a connection between estimated social media use and reported life satisfaction, on the expectation that a spike in the former could cause a fall in the latter. They found key periods of adolescence in which social media use was associated with a decrease in life satisfaction 12 months later. In the opposite direction, the researchers also found that teens with lower-than-average life satisfaction used more social media 12 months later.

The team acknowledge that the data points were collected a year apart, so that was the only resolution available: the association might well appear sooner than a year, but participants simply weren’t asked about their wellbeing in between.

Results also showed that increased social media use was again linked with lower life satisfaction at age 19 – for both males and females. At this age, important social changes, such as leaving home or starting work, may make us particularly vulnerable.

At other points in teenagers’ lives the link between social media use and life satisfaction was not statistically significant. The study also looked at averages, meaning social media use will have a positive impact on some teenagers but not others. Some may use social media to connect with friends, to cope with a particular problem, or because they have no one to talk to about a problem or how they feel. For these individuals, social media can provide valuable support.

There is no easy or straightforward answer to whether social media is harmful – assessing vulnerability in adolescents is a complex and dynamic process which needs to consider multiple factors at any one point in time – and in the 21st century, that includes teens’ relationship with social media.

This major study of longitudinal community data has identified different points of vulnerability to social media use in males and females, but it is still unable to answer the crucial questions of why this might be. The study only covers a period up to 2018. Since then, social media use has become ever more prominent in young people’s lives, particularly during the pandemic, and emotional difficulties, notably in older adolescent girls, have risen significantly. It is vital to build on this research to understand both the harmful as well as supportive role of social media in young people’s lives.

One thing, however, is certain… if not managed with care, social media can adversely affect youngsters’ mental wellbeing.

In 2003, social media was still in its infancy. One of the early social networks – MySpace – was founded in 2003. Facebook would be established in 2004. Currently, social media networks such as Facebook and Instagram are used by more than 3.6 billion people worldwide.

Academics at the University of Technology Sydney reviewed over 50 research papers published between 2003 and 2018 and identified 46 harmful effects linked to the use of social media – in particular the use of Facebook, Twitter and Instagram – and they are not just mental health–related.

Focussing on the benefits and potential of social media meant the negative effects were overlooked. Among the harmful effects identified were privacy violation, deception, panic, conflict with others and an increased appetite for financial risk-taking.

Overall, issues of social media range from physical and mental health problems to negative impacts on job and academic performance, as well as security and privacy issues.

The harmful effects of sites like Twitter and Facebook can be grouped into distinct themes:

1. Cost of social exchange: includes both psychological harms, such as depression, anxiety or jealousy, and other costs such as wasted time, energy and money
2. Annoying content: includes a wide range of content that annoys, upsets or irritates, such as disturbing or violent content or sexual or obscene content
3. Privacy concerns: includes any threats to personal privacy related to storing, repurposing or sharing personal information with third parties
4. Security threats: refers to harms from fraud or deception such as phishing or social engineering
5. Cyberbullying: includes any abuse or harassment by groups or individuals such as abusive messages, lying, stalking or spreading rumours
6. Low performance: refers to negative impact on job or academic performance
7. ‘Flaming’ or ‘roasting’: involves posting or sending offensive messages in order to provoke a response

A greater awareness of the potential dangers of social media can encourage user moderation and help software engineers, educators and policymakers develop ways to minimise the negative effects. In fact, it would be a very good idea if these issues were tackled and discussed in schools.

The study was published in the Journal of Global Information Management.

Internet bullies are just as mean in real life

The internet has created a breed of people who post inflammatory, irrelevant or offensive comments online. What motivates someone to become a troll?

A study from the political science department at Aarhus University debunks the long-held theory that people are only nasty when posting anonymously online. The study also found that people who are nice may choose to avoid all political discussions online, whether the forums are hostile or not.

The researchers did find that hostility levels in online political discussions are worse than in offline discussions, but the frequency of hostile behaviour was about the same online as in real life.

The behaviour of an internet troll is ‘much more visible’ than the behaviour of the same person offline. The reason many people feel that online political discussions are so hostile has to do with the visibility of aggressive behaviour online.

The researchers began their paper with a dig at Mark Zuckerberg, who in 2010 was named Time magazine’s ‘Person of the Year’. The magazine stated that “Facebook wants to populate the wilderness, tame the howling mob and turn the lonely, antisocial world of random chance into a friendly world.”

But the researchers noted that efforts from social media giants to get people to engage in civil discussions on topics such as politics have failed spectacularly. The truth is that online discussions about politics turned out to be nasty, brutish and not nearly short enough.

They cited a 2017 Pew Research Center survey which found that 62% of Americans believe online harassment has become a major problem, with online discussions significantly more hostile than offline ones.

While conducting their study, the researchers considered the ‘mismatch hypothesis’, one of the most common theories in the academic debate about online hostility. The mismatch hypothesis states that people who are ‘otherwise agreeable’ can turn into nasty trolls when they cannot physically see the person they are arguing with.

Alexander Bor, a postdoctoral researcher who co-authored the study, said: “The people hateful on Twitter offend others in face-to-face conversations too.”

There are many psychological reasons people can get angry online – one being that fast-paced written communication can easily lead to misunderstandings. Hostile people know their words hurt, and that’s why they use them. The best way to deal with online hostility would be more efficient moderation.

The research suggests that it is necessary to spell out what is and is not acceptable, and concludes that future studies could evaluate whether the actions of provocateurs – like Russia’s infamous Internet Research Agency – could create hostility even in nicer, more reasonable people by hijacking online discussions.

Aggression however, is not an accident triggered by unfortunate circumstances, it is a strategy hostile people employ to get what they want – and this includes a feeling of status and dominance in online networks.

Researchers from Brigham Young University found those who share such content have the ‘dark triad’ personality traits – narcissism, Machiavellianism and psychopathy – coupled with schadenfreude, a German word for pleasure taken in other people’s misfortunes. Ironically, those with schadenfreude, according to the team, consider trolling to be a form of communication that enriches, rather than obstructs, online discussion.

The University’s public relations professor and co-author of the study, Pamela Brubaker, claims: “People who exhibit those traits known as the ‘dark triad’ are more likely to demonstrate trolling behaviours if they derive enjoyment from passively observing others suffer.”

According to a study published in the journal Social Media and Society, online trolls have been described as self-aggrandising, individualistic and unremorseful in their behaviour. Research suggests that trolls possess dark personality traits, including psychopathy, narcissism, sadism and Machiavellianism – a trait that encompasses cunning, manipulativeness and a drive to use any means necessary to gain power, including, but not limited to, socially irresponsible behaviour and disregarding or violating the rights of others.

People who get pleasure out of seeing others fail actually consider trolling an acceptable behaviour. Women who participated in the survey viewed trolling as dysfunctional, while men were more likely to view it as functional, although the study does not report how many men and women took part.

Study co-author Professor Scott Church claims the behaviour may persist because it feels appropriate to the medium: people who regularly comment online may consider any and all trolling ‘functional’ simply because it’s what people do.

The team also noted that those who possess schadenfreude do not care how their online words come across to others; nor do they see trolling as destructive behaviour, but simply as a way of communicating.

But surely there is a difference between being outspoken and trolling? Trolls must be aware that the people they target have feelings, that they too are human. Or do they really forget they are real people, and see them merely as usernames or avatars? The answer must be that they get some kind of pleasure from trolling – and that points to psychopathy.

Social media gives us the power to connect with people who have both similar and different ideas and experiences from our own. Other people’s opinions and perspectives may not align with our own, but that is not a reason to engage in what in reality amounts to hate.

How Facebook and Instagram are harming children…

While working in the firm’s ‘Integrity Unit’ – a position quite high up in the company – Frances Haugen secretly copied internal memos, now known as ‘the Facebook Papers’. They revealed how, for years, bosses ignored internal complaints from staff in order to put profits first, and how they “lied” to investors to shield Mr Zuckerberg from public scrutiny.

Disillusioned with the way Facebook was exploiting its users, she felt compelled to become a whistleblower and go public with a stark warning to parents… we should take heed of that warning…

Ms Haugen first aired her bombshell revelations in front of the US Senate, where she argued a federal regulator was needed to oversee digital giants like Facebook. She said that Facebook knew Instagram was dangerous for young people but did not want to act because “young users are the future of the platform and the earlier they get them the more likely they’ll get them hooked.”

Speaking in Britain to a parliamentary committee of MPs scrutinising the government’s Online Safety Bill, Ms Haugen said that Facebook meant childhood bullying was no longer confined to the classroom. Instead, she said it follows them home and into their bedrooms.

At present, users must be at least 13 years old to use the service, but it is easy for kids to lie about their age. Facebook had been developing an ‘Instagram Kids’ app specifically for children, but the idea has been put on hold due to a raft of concerns.

Facebook’s own research says that now the bullying follows children home, it goes into their bedrooms… The last thing they see at night is someone being cruel to them… The first thing they see in the morning is a hateful statement, and that is just so much worse…

The kids say, ‘This makes me unhappy, I don’t have the ability to control my usage of it, and I feel if I left I would be ostracised…’

Children don’t have as good self-regulation as adults do; that’s why they’re not allowed to buy cigarettes… When kids describe their usage of Instagram, Facebook’s own research describes it as ‘an addict’s narrative’…

“I am deeply worried that it may not be possible to make Instagram safe for a 14-year-old, and I sincerely doubt that it is possible to make it safe for a 10-year-old.”

Ms Haugen said Facebook could estimate people’s ages with “a great deal of precision” but did not act to stop underage users.

“Facebook could make a huge dent in this if they wanted to, and they don’t, because they know that young users are the future of the platform.”

Ms Haugen claimed that the firm’s own research found Instagram to be more dangerous than other social media such as TikTok and Snapchat, because the platform is focused on “social comparison about bodies, about people’s lifestyles, and that’s what ends up being worse for kids”.

The ‘Facebook Papers’ revealed the social media giant was working to target children as young as 6 years old to expand its consumer base and generate greater profits.

Facebook already targets children starting at 13 years old, but an internal blog post announced that the company was in the process of hiring people to re-envision its full range of products for kids aged 6 to 9 and tweens aged 10 to 12.

The post, titled ‘The internet wasn’t built with young people in mind, but we’re about to change that,’ was among documents released by Ms Haugen’s legal team and provided to Congress and the Securities and Exchange Commission.

The leaked post discusses the five different groups Facebook plans to establish: kids 6 to 9 years old, tweens 10 to 12 years old, early teens from 13 to 15 years old, late teens from 16 to 17 years old, and adults 18 and above.

“Our company is making a major investment in youth and has spun up a cross-company virtual team to make safer, more private, experiences for youth that improve their and their household’s well-being… For many of our products, we historically haven’t designed for under 13.

“These five age groups can be used to define education, transparency, controls and defaults that will meet the needs of young users,” the post states. It then goes on to cite the Age Appropriate Design Code (AADC) – a new statutory code that will apply to the company’s products in Europe.

In the post, Facebook stated that they were looking to hire people who had a background in ‘global research among youth (particularly kids, tweens, and their caregivers)’ and ‘partnering with external parties (e.g. academic, policymakers, regulators, child advocates).’

Open positions are listed for Privacy Research, Instagram Child Safety and Family Center, MK Youth Research, Instagram Youth Research, Instagram Youth Overall, Instagram Child Safety, and Messenger Kids/Youth Platform — Messenger Kids is a video calling and messaging app created by Facebook that is currently available in app stores.

A diagram titled ‘Where We’ve Been and Where We’re Going’ outlines how the company plans to expand beyond Facebook’s current target market to reach kids and tweens.

The company acknowledges the Federal Trade Commission’s current regulations, the Children’s Online Privacy Protection Rule (COPPA), which imposes requirements on online services dealing with children under 13. But while the new diagram shows that Facebook will now be targeting users under 13, it doesn’t explain how the company will manage COPPA.

A spokesperson for Facebook responded to the Wall Street Journal’s reporting:

“Companies that operate in a highly competitive space — including the Wall Street Journal — make efforts to appeal to younger generations. Considering that our competitors are doing the same thing, it would actually be newsworthy if Facebook didn’t do this work.”

Instagram announced it would temporarily halt plans to develop a version of the photo-sharing app aimed at children, on the heels of a damning study that said the app is harmful to young girls’ body image.

In other leaked material from Facebook, the company described children ages 10-12 years old as a valuable ‘untapped audience’ and even suggested they could appeal to younger children by ‘exploring playdates as a growth lever.’

Facebook formed a team to study ways to get tweens (aged 10 to 12) to use its platform, after becoming concerned by the threat from rivals such as TikTok and Snapchat.

“Why do we care about tweens? They are a valuable but untapped audience. Our ultimate goal is message primacy with US tweens,” said a Facebook document from 2020, obtained by The Wall Street Journal. Even ‘young kids’ aged 0–4 were included in the chart, suggesting Facebook may eventually try to recruit infants to its site. Another slide asked, “Is there a way to leverage playdates to drive word of hand/growth among kids?”

The leaked Facebook Papers have shed light on a large body of evidence that former employees say proves the company is aware of many of its problems, including the negative impact it has on its users’ mental health — specifically young girls. Previously leaked research revealed that since at least 2019, Facebook has been warned that Instagram harms young girls’ body image.

One message posted on an internal message board in March 2020 revealed that 32% of girls said Instagram made them feel worse about their bodies when they were already feeling insecure about them.

Another slide, from a 2019 presentation, said: “We make body image issues worse for one in three teen girls… Teens blame Instagram for increases in the rate of anxiety and depression. This reaction was unprompted and consistent across all groups.”

Another presentation found that among teens who felt suicidal, 13% of British users and 6% of American users traced their suicidal feelings to Instagram.

The research not only reaffirms what has been publicly acknowledged for years — that Instagram can harm a person’s body image, especially if that person is young — but it confirms that Facebook management knew as much and was actively researching it.

According to Ms Haugen, “Facebook believes in a world of flatness… [they] won’t accept the consequences of their actions and, so, I think that is negligence and ignorance, but I can’t see into their hearts, so I don’t want to consider it malevolent…” The social network is filled with ‘kind, conscientious’ people, but systems that reward growth make it hard for Facebook to change.

How Instagram has harmed young girls and boys:

QUESTION: Did any of the things you’ve felt in the last month start on Instagram?
I’m not attractive      41% (US) — 43% (UK)
I don’t have enough money      42% (US) — 42% (UK)
I don’t have enough friends      32% (US) — 33% (UK)
I feel down, sad or depressed      10% (US) — 13% (UK)
Wanted to kill themselves      6% (US) — 13% (UK)
Wanted to hurt themselves      9% (US) — 7% (UK)

QUESTION: In general, how has Instagram affected the way you feel about yourself / about your mental health?

Much worse
US total: 3% US boys: 2% US girls: 3%
UK total: 2% UK boys: 1% UK girls: 2%

Somewhat worse
US total: 16% US boys: 12% US girls: 18%
UK total: 19% UK boys: 13% UK girls: 23%

No effect
US total: 41% US boys: 37% US girls: 43%
UK total: 46% UK boys: 50% UK girls: 44%

Somewhat better
US total: 29% US boys: 32% US girls: 29%
UK total: 28% UK boys: 31% UK girls: 26%

Much better
US total: 12% US boys: 18% US girls: 8%
UK total: 5% UK boys: 5% UK girls: 4%

How Facebook promotes hate

Mark Zuckerberg’s public comments about the company are often at odds with internal messaging.

Facebook staff have reported that for years they were concerned about the company’s failure to police hate speech, and that Facebook executives knew the platform was becoming less popular among young people but shielded the numbers from investors.

Staff failed to anticipate the disastrous January 6 Capitol riot despite monitoring a range of individual right-wing accounts. On an internal messaging board that day, staff said Facebook had been “fuelling this fire for a long time and we shouldn’t be surprised it’s now out of control.”

Facebook’s algorithm “prioritises” extreme content and, although the firm is “very good at dancing with data”, it is “unquestionably” making online hate worse and pushing users towards extremism. “I am extremely, extremely worried about the state of our societies. I am extremely concerned about engagement-based ranking, which prioritises extreme content.”

Whistleblower Frances Haugen said Facebook was reluctant to sacrifice even a “slither of profit” to make the platform safer, and said the UK could be particularly vulnerable because its automated safety systems may be more effective with US English than British English. One effect of Facebook’s algorithm was to give hateful advertising greater traction, meaning it was “cheaper” for companies and pressure groups to produce angry messages rather than positive ones. She described this process as “subsidising hate” and said that “the failures of Facebook are making it harder for us to regulate Facebook…”

“Facebook has been trying to make people spend more time on Facebook, and the only way they can do that is by multiplying the content that already exists on the platform with things like groups and re-shares… One group might produce hundreds of pieces of content a day, but only three get delivered…

“Only the ones most likely to spread will go out… You see a normalisation of hate and dehumanising others, and that’s what leads to violent incidents… Facebook has studied who has been most exposed to misinformation and it is… people who are socially isolated…

“I am deeply concerned that they have made a product that can lead people away from their real communities and isolate them in these rabbit holes and these filter bubbles…

“What you find is that when people are sent targeted misinformation to a community it can make it hard to reintegrate into wider society because now you don’t have shared facts…

“We didn’t invent hate, we didn’t invent ethnic violence. And that is not the question. The question is what is Facebook doing to amplify or expand hate… or ethnic violence…?

“When we see something like an oil spill, that oil spill doesn’t make it harder for a society to regulate oil companies. But right now the failures of Facebook are making it harder for us to regulate Facebook…

“Anger and hate is the easiest way to grow on Facebook. We are literally subsidising hate on these platforms. It is substantially cheaper to run an angry hateful divisive ad than it is to run a compassionate, empathetic ad…

“Part of why I came forward is that I am extremely worried about the condition of our societies … and of the interaction of the choices that Facebook has made and how it plays out more broadly… it is very good at dancing with data.” 

She said that Twitter and Google were “far more transparent” than Facebook, as she called for Mr Zuckerberg to hire 10,000 extra engineers to work on safety instead of 10,000 engineers to build its new ‘metaverse’ initiative. “The current system is biased towards bad actors and those who push Facebook to the extremes.”

“Situations like [ethnic violence in] Ethiopia are just the opening chapters of a novel that is going to be horrific to read… Facebook is closing the door on us being able to act. We have a slight window of time to regain people control over AI – we have to take advantage of this moment.”

Ms Haugen urged MPs to regulate paid-for advertisements on Facebook, because hateful ones were drawing in more users and, as she had noted, were substantially cheaper to run than compassionate, empathetic ones.

Ms Haugen said systems for reporting employee concerns at Facebook were a ‘huge weak spot’ at the company. “Right now there’s no incentives internally, that if you make noise saying we need more help, like, people will not get rallied around for help, because everyone is underwater.”

The company changed its corporate name to Meta in an attempt to rebrand itself after the Facebook Papers highlighted other troubling accusations against the social media titan, including incentives to promote hate and misinformation, a list of high-profile people who can skirt censorship, and even human trafficking.

Apple threatened to delete Facebook and Instagram from its app store because they were being used to traffic women to work as maids in the Middle East. Facebook acknowledged it was not doing enough to prevent its platforms assisting the abusive trade – and Apple relented, but the incident appears to have had little effect. Accounts continued to show images of African and South Asian women with ages and prices next to their pictures. 

Worse, Facebook founder Mark Zuckerberg and senior executives intervened to allow US politicians and celebrities to post content that broke its rules, while Vietnamese dissidents were censored.

Several British Department for Digital, Culture, Media and Sport (DCMS) officials have recently gone on to work for Facebook, though after working elsewhere first. There is no suggestion they have solicited information from former Civil Service colleagues.

Another Facebook whistleblower, Sophie Zhang, raised the alarm after finding evidence of online political manipulation in countries such as Honduras and Azerbaijan. 

She was fired.

Facebook International 

After publicly promising to crack down, Facebook acknowledged in internal documents that it was ‘under-enforcing on confirmed abusive activity’, even as Filipina maids complained on the social media site of being abused.

Even today, a quick search for ‘khadima’, or ‘maids’ in Arabic, will bring up accounts featuring posed photographs of Africans and South Asians with ages and prices listed next to their images. The Philippines government has a team of workers that does nothing but scour Facebook posts each day to try to protect desperate job seekers from criminal gangs and unscrupulous recruiters.

Apple threatened to remove Facebook’s apps from the App Store because of the company’s failure to police the trafficking of Filipina maids.

While the Middle East remains a crucial source of work for women hoping to provide for their families at home in Asia and Africa, Facebook acknowledged some countries across the region have ‘especially egregious’ human rights issues when it comes to labourers’ protection.

Domestic workers frequently complained of being unpaid, locked in their homes, starved, forced to extend their contracts indefinitely, and repeatedly sold to other employers without their consent. There was also evidence that recruitment agencies were dismissing more serious crimes, such as physical or sexual assault, rather than helping domestic workers.

Despite the continued spread of ads exploiting foreign workers in the Middle East, in a statement to the Associated Press Facebook said it took the problem seriously: “We prohibit human exploitation in no uncertain terms… We’ve been combating human trafficking on our platform for many years and our goal remains to prevent anyone who seeks to exploit others from having a home on our platform.” Except of course for Facebook itself.

One of the problems is that its AI tools lack the capability to reliably pick out hateful commentary, and there aren’t enough staff with the language skills to do it manually.

Internal Facebook documents offer detailed snapshots of how employees in recent years have sounded alarms about problems with the company’s tools aimed at rooting out or blocking speech that violated its own standards. Put simply, the world’s largest social network has repeatedly failed to protect users from problems on its own platform and has struggled to monitor content across languages.

There have been no screening algorithms for languages used in some of the countries Facebook has deemed most ‘at-risk’ for potential real-world harm and violence. For example, when United Nations experts investigated a brutal campaign of killings and expulsions against Myanmar’s Rohingya Muslim minority in 2018, they found Facebook was widely used to spread hate speech toward the group.

Facebook’s former head of policy for the Middle East and North Africa, Ashraf Zeitoon (who left in 2017) said the company’s approach to global growth has been ‘colonial’ and focused on monetisation without safety measures.

Facebook has long touted the importance of its artificial–intelligence (AI) systems, in combination with human review, as a way of tackling objectionable and dangerous content on its platforms. Machine–learning systems detect such content with varying levels of accuracy. But more than 90% of Facebook’s active users are outside the United States or Canada. Languages spoken outside the United States, Canada and Europe have been a stumbling block for Facebook’s automated content moderation. The company did not have screening algorithms to identify misinformation in Burmese, the language of Myanmar, or hate speech in the Ethiopian languages of Oromo or Amharic. These gaps allow abusive posts to proliferate in the countries where Facebook itself has determined the risk of real–world harm is high.

In Myanmar, where Facebook–based misinformation has been linked repeatedly to ethnic and religious violence, the company acknowledged it had failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.

The Rohingya’s persecution, really ethnic cleansing, led Facebook to publicly pledge in 2018 that it would recruit 100 native Myanmar language speakers to police its platforms.

The company did not disclose how many content moderators it ultimately hired or reveal which of the nation’s many dialects they covered. Despite Facebook’s promises, the rights group Global Witness said the company’s recommendation algorithm continued to amplify army propaganda and other content that breached the company’s Myanmar policies.

Reuters found posts in Amharic, one of Ethiopia’s most common languages, referring to different ethnic groups as the enemy and issuing death threats. Facebook said the company now has proactive detection technology to detect hate speech in Oromo and Amharic and has hired more people with ‘language, country and topic expertise,’ including people who have worked in Myanmar and Ethiopia.

In an undated document, which someone familiar with the disclosures said was from 2021, Facebook employees also shared examples of ‘fear–mongering anti–Muslim narratives’ spread on the site in India, including calls to oust the large minority Muslim population there. The document read “Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned.” Internal posts and comments by employees also noted the lack of classifiers in the Urdu and Pashto languages to screen problematic content posted by users in Pakistan, Iran and Afghanistan.

Facebook’s human review of posts — crucial for nuanced problems like hate speech — has gaps across key languages. Content moderation struggled with Arabic-language dialects of multiple ‘at-risk’ countries, leaving it constantly ‘playing catch up’. Even among its Arabic-speaking reviewers, ‘Yemeni, Libyan, Saudi Arabian (really all Gulf nations) are either missing or have very low representation.’

Facebook acknowledges Arabic language content moderation “presents an enormous set of challenges.”

Three former Facebook employees who worked for the company’s Asia Pacific and Middle East and North Africa offices in the past five years told Reuters they believed content moderation in their regions had not been a priority for Facebook management. They said leadership did not understand the issues and did not devote enough staff and resources.

Facebook says it cracks down on abuse by users outside the United States with the same intensity applied domestically and it uses AI proactively to identify hate speech in more than 50 languages. It said it bases its decisions on where to deploy AI on the size of the market and an assessment of the country’s risks. It declined to say in how many countries it did not have functioning hate speech classifiers.

Facebook also says it has 15,000 content moderators reviewing material from its global users.

Facebook claims that in the past two years, it has hired people who can review content in Amharic, Oromo, Tigrinya, Somali, and Burmese, and this year added moderators in 12 new languages, including Haitian Creole.

Facebook’s users are a powerful resource for identifying content that violates the company’s standards, and the company has built a system for them to do so, but it acknowledges that the process can be expensive and time-consuming for users in countries without reliable internet access. According to documents and digital rights activists, the reporting tool has also had bugs, design flaws and accessibility issues for some languages. Facebook’s content review system is not always able to see objectionable text accompanying videos and photos in some posts reported by users.

That issue prevented serious violations, such as death threats, from being properly assessed. Facebook said the issue was fixed in 2020, adding that it takes feedback seriously and continues to work to improve its reporting systems.

Language coverage remains a problem. A Facebook presentation included in the documents concluded that “there is a huge gap in the Hate Speech reporting process in local languages” for users in Afghanistan, where the recent pullout of U.S. troops after two decades ignited an internal power struggle. So-called ‘community standards’ — the rules that govern what users can post — are also not available in Afghanistan’s main languages of Pashto and Dari.

A Reuters review this month found that community standards weren’t available in about half the more than 110 languages that Facebook supports with features such as menus and prompts.

Facebook said it aims to have these rules available by the end of 2022.