Tech Giants Face Scrutiny as U.K.’s Online Safety Act Takes Effect

The U.K. has officially implemented its sweeping Online Safety Act, ushering in tighter controls on harmful online content and exposing tech giants like Meta to the threat of hefty fines.

According to the communications regulator, Ofcom, social media platforms still have significant work to do to meet the requirements of the new law, which aims to protect both children and adults from harmful material. While Ofcom published codes of practice and guidance on Monday to help tech companies comply, it warned that many major platforms have yet to fully implement necessary safeguards.

“We don’t think any of them are doing all of the measures,” said Jon Higham, Ofcom’s online safety policy director. “There is still work to be done.” 

Under the law, all platforms within the scope of the act, ranging from Facebook and Google to Reddit and OnlyFans, have three months to assess the risks of illegal content on their sites. By March 17, they must begin implementing safety measures to address these risks, with Ofcom overseeing their progress. Platforms that adopt the measures set out in the codes of practice will be treated as complying with the act.

The law applies only to services that host user-generated content, including major social media platforms and search engines, and covers more than 100,000 online services. It targets 130 “priority offenses,” such as child sexual abuse, terrorism, and fraud, which tech companies will now be required to address proactively through stronger content moderation systems.

Technology Secretary Peter Kyle wrote in The Guardian that the guidelines mark a significant shift in online safety policy. “For the first time, tech firms will be forced to proactively take down illegal content,” he stated. “If they don’t, they will face enormous fines, and Ofcom can ask the courts to block access to their platforms in Britain.”

The Ofcom guidelines include measures such as appointing a senior executive to oversee compliance, ensuring platforms have sufficient moderation teams to remove illegal content swiftly, improving algorithm testing to prevent the spread of harmful material, and shutting down accounts linked to terrorist organizations. Platforms will also be required to offer easy-to-use tools for users to report harmful content and be transparent about the handling of complaints.

Additionally, platforms must implement automated systems to detect and remove child sexual abuse material, using techniques like “hash matching” to identify known abusive content. File-sharing services, including Dropbox and Mega, will now also fall under these regulations because of the high risk that they are used to distribute abusive material.
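
The act does not prescribe a particular implementation, but the core idea of hash matching is simple: each uploaded file is reduced to a fingerprint, which is compared against a database of fingerprints of previously identified abusive material, such as the hash lists maintained by the Internet Watch Foundation. The sketch below is a minimal illustration using a plain SHA-256 digest and a placeholder hash set; production systems generally rely on perceptual hashes, such as Microsoft’s PhotoDNA, that still match after an image has been resized or re-encoded.

    import hashlib
    from pathlib import Path

    # Placeholder set of fingerprints of known abusive files, standing in for
    # a hash list supplied by a body such as the Internet Watch Foundation.
    # The value below is illustrative only.
    KNOWN_HASHES = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def file_hash(path: Path) -> str:
        """Return the SHA-256 digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def matches_known_content(path: Path) -> bool:
        """True if the file's fingerprint appears in the known-content list."""
        return file_hash(path) in KNOWN_HASHES

In practice, a platform would run a check like this at upload time and block or flag any file whose fingerprint matches the list, rather than relying solely on user reports after the material has spread.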

However, child safety advocates, including the Molly Rose Foundation, formed after the tragic death of 14-year-old Molly Russell, expressed disappointment over the lack of specific measures to address self-harm and suicide-related content. The NSPCC also raised concerns that platforms like WhatsApp might not be required to remove illegal content if it is technically difficult to detect.

To combat fraud, platforms will be required to establish dedicated reporting channels with bodies like the National Crime Agency and the National Cyber Security Centre.

Ofcom is also planning consultations for the spring on creating protocols for handling crises such as the riots that followed the Southport murders earlier this year.