Top executives at tech companies could face jail time under proposed UK laws if they fail to meet regulators’ demands. After nearly a year of consultation, the laws were introduced to Parliament on Thursday in the form of an Online Safety Bill.
In May of last year, the UK government began work on proposed legislation to impose a duty of care on social media platforms, requiring tech companies to protect users from harmful content such as disinformation and online abuse.
“When we buckle our seat belts to protect ourselves while driving, we don’t give it a second thought. Given the numerous dangers that exist online, it is only reasonable that we implement similar basic safeguards for the digital age,” Nadine Dorries, the Digital Secretary, said.
Executives of tech companies could face prosecution or jail time if they refuse to cooperate with information notices issued by Ofcom, the UK’s communications regulator, under the proposed legislation. The Bill would give Ofcom the authority to issue information notices to determine whether tech companies are fulfilling their online safety obligations.
Senior managers at in-scope companies will be held criminally liable if they destroy evidence, fail to attend interviews with Ofcom or provide false information in them, or obstruct the regulator when it enters company offices, according to the Bill.
The Bill also proposes requiring social media platforms, search engines, and other apps and websites that allow users to post their own content to take steps to protect children, combat illegal activity, and adhere to their stated terms and conditions.
Mandatory age checks for pornographic websites, criminalization of cyber flashing, and a requirement that large social media platforms give adults the ability to automatically block people who have not verified their identity on the platforms are among the measures.
If passed, the proposed laws would compel social media platforms to step up their moderation efforts, with the Bill requiring platforms to remove paid-for scam ads as soon as they are made aware of them. The Bill would also require social media platforms to moderate “legal but harmful” content, with large platforms obliged to conduct risk assessments on this type of content. Platforms will also need to spell out their terms of service.
“Companies will have to say if they intend to remove, limit, or allow specific types of content,” Dorries said.
The digital secretary added that the agreed categories of “legal but harmful” content will be laid out in secondary legislation that will be released later this year.
While the UK government has described the Online Safety Bill as “world-leading online safety legislation,” law experts have criticised it for using vague language in the “legal but harmful” classification, which they say could lead to unintended consequences.
“The Online Safety Bill is a dreadful piece of legislation, doomed not only to fail in its stated purpose but also to make it much more difficult for tech companies and make the internet less safe, especially for children,” said Paul Bernal, an IT law professor at the University of East Anglia.
The UK government isn’t alone in wanting to enact legislation governing how social media platforms handle content moderation. The Australian federal government is considering two pieces of legislation, one aimed at combating online defamation and the other at protecting online privacy.
The federal government has framed the defamation laws as anti-trolling laws, designed to force social media companies to reveal the identities of anonymous accounts that post potentially defamatory content on their platforms.
Online abuse victims and privacy advocates have criticised Australia’s proposed online defamation laws, claiming that they could have unintended, negative consequences.