Policing the internet

You will likely have heard that this week, Elon Musk bought Twitter for $44bn. Concerns have arisen regarding how he hopes to ‘transform’ the social media platform in the name of free speech. Many critics have speculated that this openly invites hate speech and bigotry.

The controversy has sparked debate, with many questioning the ethics of regulating people’s thoughts and content. Others have argued that stopping hate speech and restricting dangerous content matters more than one’s right to offend.

IoT Insider looks at how technology interacts with the law, whose responsibility it is to police the internet and why this can be complicated.

EU laws for big tech

Recent EU legislation has ruled that big tech companies, including Google and Meta, now have a legal responsibility to monitor and prevent hate speech and discrimination across their digital platforms.

In 2021, Facebook whistleblower Frances Haugen revealed that the company was knowingly ignoring dangerous content targeted at young users. Calls to tighten the reins on social media have consequently resulted in the EU banning online adverts aimed at minors.

The initiative is packaged under the Digital Services Act, which “aim[s] to create a safer digital space where the fundamental rights of users are protected and to establish a level playing field for businesses.”

Under the rules, a company can be fined up to 6% of its global turnover for violations, though they don’t come into force for another two years. Similarly, the British government has introduced the Online Safety Bill, which seeks to “make the UK the safest place in the world to be online while defending free expression.”

Both laws are set to end the reign of self-regulating tech companies. Margrethe Vestager, Executive Vice-President of the European Commission, said: “Platforms should be transparent about their content moderation decisions, prevent dangerous disinformation from going viral and avoid unsafe products being offered on marketplaces.”

Policing social media

As the onus on digital regulation shifts away from tech giants and towards the police, Sky News has highlighted the knowledge disparity between regular officers and specialists in collecting evidence from technology companies.

In cases that involve social media and online platforms, officers have been told they must obtain specialist support, but they have not been trained in how such support could aid an investigation. Many have made urgent calls for transformation within policing, particularly in the way forces deal with digital crime.

After the horrific crimes of Abdul Elahi, an online predator who blackmailed thousands of victims and illegally sold their explicit images to paedophiles, it emerged that police had failed to connect more than a dozen reports from his victims. Sky News further reported that fewer than one in five police officers knew how to collect evidence from technology companies.

Cyber investigations are becoming ever more common as crime moves into an increasingly digital world. Without the proper training or tools, police don’t understand the extent of the evidence that can be recovered. Even end-to-end encrypted messages on WhatsApp can be accessed, as can deleted messages in cases where victims have blocked their harasser’s number.

On top of inexperience or a lack of digital understanding, police often struggle to obtain online evidence because legal requirements vary across jurisdictions. Data can take so long to obtain that, by the time it is received, the statute of limitations may have expired.

Hopefully, with the introduction of tighter digital safety laws, tech firms will have to respond more quickly to data-sharing requests, aiding investigations. More rigorous content monitoring should also put checks in place to stop cybercrimes from escalating to the extent they so often do.

Whilst these changes are welcome, we can’t help but wonder what implications they might have for the future of the metaverse. The ever-changing nature of the web means there is an abundance of complexities standing in the way of collecting evidence in a ubiquitous internet network.

How will data protection interact with a virtual world? Who will police this world? What kinds of limitations will the tech giants that own these worlds have in place?