The United Kingdom moved a step closer to regulating social media in December when a parliamentary committee recommended major changes to the country’s Online Safety Bill to hold internet service providers responsible for material published on their platforms. “We need to call time on the Wild West online,” said committee chair Damian Collins. “What’s illegal offline should be regulated online.” The draft law, which will be submitted to the British parliament in 2022, is aimed at penalizing companies that allow content relating to crimes like child abuse and online harassment; news reports and free expression groups have flagged similar efforts in Kazakhstan, Australia, Indonesia, Chile, and Canada, among other countries.
Social media regulation is significant for journalists who use platforms for work, especially when the legislative focus is on information or speech. In 2021, U.S. nonprofit Freedom House found that at least 24 countries were seeking to govern how platforms regulate content. States like the U.K., whose draft safety bill sets out to prevent platforms from censoring journalistic posts, face thorny questions about whose posts merit protection and how regulations should be enforced.
Many journalists are themselves demanding that governments regulate social media to help solve issues that affect the press, like online abuse, disinformation, or falling advertising revenue, but regulation can carry unforeseen consequences. Lawmakers in the United States, the U.K., India, Pakistan, and Mauritius are among those discouraging platforms from offering encrypted messaging, which helps journalists communicate safely. Legislation mandating that platforms share data with police would be bad news in countries that jail journalists for social media posts. Some social media laws, like Turkey’s, affect news websites and search engines as well; others have implications for news websites with comments sections.
At worst, authoritarians can jump on the regulatory bandwagon to stifle reporting. In 2020, a report by Danish think tank Justitia found 25 countries had drawn inspiration from Germany’s 2017 Network Enforcement Act to “provide cover and legitimacy for digital censorship.” Such laws leave social media companies with a difficult decision: comply, or leave the country.
CPJ’s Alicia Ceccanese spoke with Kian Vesteinsson, a research analyst for technology and democracy at Freedom House, and Jacob Mchangama, executive director of Justitia, about their respective research.
Each told CPJ how social media regulations can incentivize platforms to remove more news:
Banning broad categories of content
Governments are “outsourcing the policing of online content that [they] don’t like to the platforms themselves,” essentially requiring technology companies “to do the dirty work for them,” according to Mchangama.
In 2018, David Kaye, then the United Nations Special Rapporteur on freedom of opinion and expression, noted that broadly worded, restrictive laws on topics like extremism, blasphemy, defamation, and false news were being used to require companies to suppress legitimate discussion on social media.
A troubling example:
- Reuters reported last year that Facebook had committed to restricting significantly more Vietnamese content after Vietnam passed a cybersecurity law in 2018, and Amnesty International documented some of the impact on activists and journalists. CPJ noted that the law’s vague prohibitions on posts that would offend leaders, distort history, or cause confusion were a clear threat to press freedom.
Enforcing short takedown windows
Germany requires platforms to remove “manifestly unlawful content” within 24 hours, or within up to seven days if the legality is unclear, and other countries have followed its example without adopting the same rule of law protections, according to Mchangama.
“Typically [it takes a court] more than a year to process a single case of hate speech,” he said. “Some of these states then demand that social media companies make roughly the same legal assessment in 24 hours.”
Under pressure, platforms take down more content, according to Vesteinsson. “Companies overcorrect,” he said.
Tight deadlines incentivize companies to use solutions like artificial intelligence to automatically screen posts for anything that might be illegal, according to the Washington, D.C.-based Center for Democracy and Technology. But recent analysis of leaked internal Facebook documents indicates that such filters have been ineffective, especially in certain languages, as have poorly trained human moderators, according to The Associated Press and international journalism nonprofit Rest of World.
A troubling example:
- Information Technology Rules introduced in India in February require content takedowns within 36 hours of receiving a notice from a government agency or a court, according to the digital rights organization Electronic Frontier Foundation. CPJ has previously documented how social media accounts sharing news and opinion on Kashmir were restricted through an opaque process.
Eroding intermediary liability protection
Best practice protects intermediaries like social media companies from legal action over content posted by others, which “safeguards [companies] to moderate and remove content on their platforms and shields them from legal liability for the activities of their users,” Vesteinsson told CPJ. Exposure to liability makes them less likely to push back against censorship and surveillance demands, he said.
Mchangama agreed. Laws that erode liability protections provide an “obvious incentive for platforms to say, ‘better safe than sorry,’” when governments make requests, he said.
A troubling example:
- On December 24, a Russian court fined Google nearly $100 million, the largest of several recent fines against major platforms accused of failing to remove banned content, according to The Washington Post. Local access to Twitter has been slowed for the same reason under a law passed in July, according to Reuters. The nature of the content involved in each case wasn’t clear, but regulators separately warned journalists and social media companies not to allow information about anti-government protests earlier in the year.
Requiring localization
Localization laws mandate that social media companies base staff (often local nationals) and store data in country, under the eye of local authorities. Those representatives risk being hauled into court if the company doesn’t comply with the government’s rules, according to a recent analysis by Rest of World.
“Companies [will] think twice about whether they want to challenge these governments [and] risk the freedom and safety of their employees on the ground,” Mchangama said.
A troubling example:
- Legislation in effect in Turkey since October 2020 requires social media platforms with over 1 million daily users to open local offices and enforce content restrictions; in 2018, CPJ found dozens of Turkish journalists had their Twitter accounts hidden from local readers based on legal demands.