"The happiest place on earth"


Thursday, July 14, 2022

New laws to fight online trolls should specify timeline for removal of harmful content: Experts

Need to tackle discriminatory beliefs in fight against online harms

When Tara (not her real name) received an anonymous message on Instagram threatening to post sexually explicit photos of her along with her real name online, she felt at a loss as to what to do.

Reporting the threat to the police was one option, but the photos in question had been posted by the 24-year-old herself last year, using a pseudonym on the online adult content platform OnlyFans.

"I was worried I might get into trouble instead if I reported the person to the police, especially after the Titus Low incident," she said.

She was referring to the influencer who is awaiting trial after being charged with transmitting obscene materials by electronic means last December. Low had posted nude photos of himself on OnlyFans.

The only other avenue Tara felt she had was to report the user to the social media platform.

Over the next few weeks, Tara lived in fear that compromising photos of her could be leaked online.

She was also worried that the photos might not be taken down swiftly even after she flagged them to the platform - a common complaint.

Tara spent a few hours every day for about a month searching the Internet for her own name and usernames. She was relieved that nothing noteworthy came up.

"In the end, I decided to just close my OnlyFans account and keep a low profile. Thankfully, it seems like the person didn't follow through with the threat to expose me," she said.

The anonymous account of her stalker was blocked about two days after she reported it.

When Singapore's new codes of practice to combat online harms take effect, users like Tara who experience online harassment or are victims of "revenge porn" will have more assurance that corrective actions will be carried out swiftly.

She could also have more options to report such unwanted interactions.

Under the proposed Code of Practice for Online Safety and the Content Code for Social Media Services, platforms will also need to put in place additional safeguards for users who are under 18 years old.

The codes will also cover other kinds of harmful content, such as racially offensive videos and posts that incite violence.

The Ministry of Communications and Information (MCI) has not released details about the specific requirements under the new codes as they are still being developed in collaboration with social media platforms.

However, similar laws enacted elsewhere suggest a takedown timeframe of 24 hours.

Germany's Network Enforcement Act - which took effect in 2018 - requires online platforms with more than two million local users to remove or block clearly illegal content within 24 hours of receiving a user complaint.

Australia's Online Safety Act, which came into force in January this year, gives platforms the same 24-hour limit to remove harmful material after being notified by the country's online safety regulator, eSafety.

Tara said she hopes Singapore's new rules will impose a stricter timeline for social media platforms to respond, as much reputational damage and harm can be done in a short time.

"Even 24 hours is too long, as the photos and information could quickly spread outside of the platform and it will be too late to contain it. Ideally, it should be as immediate as possible," she said.

But experts and observers said having Singapore's laws impose a single fixed timeframe to review user reports and remove all kinds of harmful content would be too restrictive. They proposed a range of time limits instead.

"They are asked to make a judgment call (on the content)… it might not be possible for them to do it within 24 hours," said Mr Gilbert Leong, a senior partner at law firm Dentons Rodyk & Davison.

Mr Leong said the time limits could be scaled to the severity of the harm. For example, a video inciting a racial riot or a school shooting should be taken down immediately, before it goes viral.

For other cases, where it is not obvious whether the content is harmful, social media platforms could be given up to seven or 14 days to make a decision, said Reed Smith lawyer Bryan Tan.

Germany's Network Enforcement Act, for instance, allows social media platforms up to seven days to act on a user complaint in cases where it is less clear that the content is unlawful.

One criticism of the current reporting procedures on some social media platforms is that the moderators employed to handle user reports are unresponsive or slow to act.

"By the time something is done, the harmful content may have already gone viral," said Withers KhattarWong lawyer Jonathan Kok.

He suggested requiring social media platforms to regularly test their reporting procedures.

Experts also said tech platforms could do more to help victims gather evidence against online harassers.

For instance, victims should be able to obtain information from tech platforms to identify their harassers, which will be useful when lodging police reports or commencing lawsuits under the Protection from Harassment Act, said Mr Leong.

More protection for users under 18 years old can also be expected when Singapore's new rules kick in, though tech platforms will likely be given the flexibility to choose which tools to use.

Last month, Meta started testing a new age verification tool on Instagram, which screens users trying to change their listed age from under 18 to 18 or older. These users are required to record a video selfie, which is analysed by artificial intelligence software to estimate their real age. Friends in their network can also be called on to vouch for their reported age.

Tech firms can expect to face fines if they fail to comply with the new laws, similar to penalties rolled out elsewhere.

For instance, the European Union's proposed Digital Services Act and Britain's proposed Online Safety Bill specify fines of up to 6 per cent and 10 per cent, respectively, of the non-compliant firm's annual global turnover.

This approach is not new in Singapore, said Mr Leong, noting that an upcoming amendment to the Personal Data Protection Act will allow large non-compliant firms to be fined up to 10 per cent of their annual turnover here.

He suggested temporarily shutting down local access to the errant platform as another possible penalty. This punishment is set out in the EU's Digital Services Act and Britain's Online Safety Bill.

"Maybe that will be more effective. The platforms have a lot of advertising revenue, and this (penalty) may hurt them more than a fine," said Mr Leong.

When contacted, Twitter, Meta and TikTok said they are working on more methods to better protect users, but added they already have measures to combat online harm.

TikTok's head of public policy for South-east Asia and Singapore, Ms Teresa Tan, said the platform proactively enforces its community guidelines using a mix of technology and human moderation.

For instance, the platform trains its safety moderation team to spot signs that an account may be used by a child under the age of 13, so that suspected underage account holders can be removed. It also analyses keywords and crowdsources reports from the TikTok community to surface potential underage accounts.

Twitter said it has taken steps to not amplify or recommend potentially harmful content to its users, among other things. "This is driven by a hybrid of tech and human review that allows us to respect the uniqueness and culturally-specific nuances of online speech, while using tech to proactively remove particular patterns of egregious behaviors, such as terrorism, predictable forms of account-level abuse, and child sexual exploitation," it said.

Tara, the former OnlyFans user, said she is glad moves are being made to raise standards for online safety.

She added: "It is a recognition that what happens online can be just as harmful as in real life and holds social media companies responsible for ensuring they aren't being abused."

 

It is interesting that someone reposting the (illegal) photos that you yourself posted is framed as "online harassment" or "revenge porn" - this is a great example of LPPL

But in any event, the new laws won't help people like "Tara": as The Honourable Choo Han Teck J noted in Buergin Juerg v Public Prosecutor 2013, "I am not aware of any known defence in criminal law that a person is not guilty of an offence if he was a victim of some other offence"
