Hate speech online promotes ethnic tensions and violence, heightening trafficking vulnerabilities

Online hate speech is growing across Asia, and social media companies are struggling to manage it, fueling ethnic, ethno-religious, and gender-based tensions and violence that could heighten trafficking vulnerabilities for marginalized communities.

For years, Facebook has been accused of contributing significantly to the Rohingya crisis in Myanmar. Demands persist for Facebook’s parent company, Meta, to be held accountable for its role in amplifying hate speech, disseminating anti-Rohingya propaganda, and facilitating coordinated attacks against Rohingya communities. Those attacks contributed to mass killings, sexual violence, and the displacement of hundreds of thousands of Rohingya people to Bangladesh and elsewhere. Moreover, as the whistle-blower disclosures known as the “Facebook Papers” showed, company executives were fully aware that toxic content targeting ethnic minorities and other groups was spreading on the platform, yet the company failed to heed warnings and took inadequate measures in response. Rohingya refugees remain at significant risk of trafficking, forced labor, and other forms of exploitation perpetrated by trafficking syndicates in the region. While Facebook’s role in the Rohingya crisis is an extreme example of social media’s power, it is not the only platform with unchecked influence, and, as in other regions of the world, online hate speech continues to grow across Asia.

The tools used to combat misinformation, hate speech, and other abuses are ill-equipped for the job because they were not built to accommodate the vast diversity of languages or the social, historical, economic, and cultural contexts in which they operate. In several South and Southeast Asian countries, the dynamics underlying political divides have built up over decades, if not longer, though even newer divides can be intense and corrosive to democracy. As The New York Times puts it, “Facebook moves into a country without fully understanding its potential impact on local culture and politics, and fails to deploy the resources to act on issues once they occur.” Furthermore, “eighty-seven percent of the company’s global budget for time spent on classifying misinformation is earmarked for the United States, while only 13 percent is set aside for the rest of the world — even though North American users make up only 10 percent of the social network’s daily active users.”

Asian innovators are developing technology to combat the problem in ways tailored to local contexts; however, it is unclear whether their efforts can match the sheer scale of hate speech and misinformation, which can go viral and rack up thousands of views in the hours or days before it is identified and removed. Meta and others have tried to outsource content moderation to third parties, but according to Equal Times, “content moderators employed by these third-party companies have complained of inadequate training and on-the-job trauma from reviewing post after disturbing post.” Platforms like Facebook, Twitter, and YouTube draw the most scrutiny, but other social media apps operate similarly while flying under the radar. For example, disinformation campaigns also spread via WhatsApp, which has about half a billion users in India alone; because WhatsApp messages are end-to-end encrypted, the moderation tools built for Facebook’s public feeds do not work well there.

Hate speech online and its capacity to incite violence are not limited to extreme cases such as the Rohingya in Myanmar. Ethnic tensions are particularly incendiary when stoked by political leaders, as seen in the recent violence in Manipur, India, where nearly 60,000 people have been displaced after angry mobs burned people out of their homes in a dispute over access to land and jobs. The tensions leading up to this episode were exacerbated by migration from Myanmar, demonstrating how difficult it can be to contain violence or its effects.

Meanwhile, in Malaysia, racial tensions played a key role in the most recent election, where anti-Chinese sentiment pushed on social media was used to spread disinformation and shift voting patterns. A report published by Malaysia’s Centre for Independent Journalism showed that refugees and migrants were also heavily targeted on social media. While the targeting of these two groups did not appear to be part of any coordinated political campaign, it ranked highest for severity of hate speech. According to the report, “posts about these two communities contained explicit suggestions calling for physical harm, damage or death. The dehumanisation of these two marginalised groups was made worse when the Immigration Department asked social media users to complain and submit information about people they suspected were without documents. It is however noted that the attacks against refugees and migrants on social media have been consistent and in fact continued post elections.”

As a recent report from the United Nations Office of the High Commissioner for Human Rights (OHCHR) argued, “when not addressed, episodes of incitement to discrimination, violence and mass disinformation campaigns have significantly increased the risks experienced by marginalized communities and translated into serious real-world violence.” The report not only emphasizes the impact of ethnic discrimination; it also highlights misogynistic online attacks, including trolling and doxing, that target women, feminist movements, LGBTQIA+ activists, nonbinary people, and sexual minorities. Concern is also growing that moderation could slide into censorship of political speech and the silencing of critics of those in power. As the OHCHR report stresses, it is also important to ensure access to effective remedies for those who have experienced harm.

Artificial intelligence tools can handle the bulk of content screening; however, human moderators with sufficient language proficiency, cultural and historical expertise, and training to manage the work’s psychological toll will remain essential.

Have You Considered…?

Nonprofit Lowdown has a podcast episode on how to use ChatGPT for fundraising. Generating copy for routine communications with AI tools can be a time saver, and AI can help with some types of brainstorming. However, these tools should be used with caution: effective use still requires deep knowledge of the material, skill with language, and discretion. Unskilled use can lead to communication failures or produce copy that reads as inauthentic, at the risk of turning off donors. As this lawyer’s case shows, ChatGPT is not a substitute for research and can generate false information. To protect data privacy, sensitive or personal information should never be entered into these tools. Finally, questions of ethical use, copyright, and attribution remain hotly debated, especially when AI-generated copy is used extensively. Here’s how to spot AI-generated material.

Share your news

Post your experiences from the field and initiatives you’d like to see featured
