A UK parliamentary committee that has spent almost half a year scrutinizing the government's populist yet controversial plan to regulate Internet services, by applying a child safety-focused framing to content moderation, has today published its report on the draft legislation, offering a series of recommendations to further tighten the legal requirements on platforms.
Ministers will have two months to respond to the committee’s report.
The committee broadly welcomes the government's push to go beyond industry self-regulation by enforcing compliance with a set of rules intended to hold tech giants accountable for the content they spread and monetize, including via a series of codes of practice, and with the media regulator, Ofcom, given a major new oversight and enforcement role over Internet content.
In a statement accompanying the report, the joint committee on the draft Online Safety Bill’s chair, Damian Collins, said: “The Committee were unanimous in their conclusion that we need to call time on the Wild West online. What’s illegal offline should be regulated online. For too long, big tech has gotten away with being the land of the lawless. A lack of regulation online has left too many people vulnerable to abuse, fraud, violence and in some cases even loss of life.
“The Committee has set out recommendations to bring more offences clearly within the scope of the Online Safety Bill, give Ofcom the power in law to set minimum safety standards for the services they will regulate, and to take enforcement action against companies if they don’t comply.
“The era of self-regulation for big tech has come to an end. The companies are clearly responsible for services they have designed and profit from, and need to be held to account for the decisions they make.”
The committee backs the overarching premise that what’s illegal offline should be illegal online — but it’s concerned that the bill, as drafted, will fall short of delivering on that, warning in a summary of its recommendations: “A law aimed at online safety that does not require companies to act on, for example, misogynistic abuse or stirring up hatred against disabled people would not be credible. Leaving such abuse unregulated would itself be deeply damaging to freedom of speech online.”
To ensure the legislation does what it claims on the tin (i.e. making platforms accountable for major safety issues), the committee wants Ofcom to be “required to issue a binding Code of Practice to assist providers in identifying, reporting on and acting on illegal content, in addition to those on terrorism and child sexual exploitation and abuse content”.
Here, MPs and peers are pushing for the bill to take a more comprehensive approach to tackling illegal content in contested areas such as hate speech, arguing that regulatory guidance from a public body will “provide an additional safeguard for freedom of expression in how providers fulfil this requirement”.
In earlier iterations the legislative plan was given the government shorthand “Online Harms”, and the draft continues to target a very broad array of content for regulation, from stuff that's already explicitly illegal (such as terrorism or child sexual abuse material) to unpleasant but (currently) legal content, such as certain types of abuse or content that celebrates self-harm.
Critics have therefore warned that the bill poses huge risks to free speech and freedom of expression online as platforms will face the threat of massive fines (and even criminal liability for execs) for failing to comply with an inherently subjective concept of ‘harm’ baked into UK law.
To simplify compliance and avoid the risk of major sanctions, platforms may simply opt to purge challenging content entirely (or take other disproportionate measures), rather than risk being accused of exposing children to inappropriate/harmful content. So the committee is trying to find a way to ensure a public interest interpretation (i.e. of what content should be regulated) in order to shrink the risks the bill poses to democratic freedoms.
Despite the bill attracting huge controversy on the digital rights and speech front, where critics argue it will introduce a new form of censorship, there is broad, cross-party parliamentary support for regulating tech giants. So — in theory — the government can expect few problems getting the legislation through parliament.
This is hardly surprising. Internet giants like Facebook have spent years torching goodwill with lawmakers all over the world (and especially in the UK), and are widely deemed to have failed to self-regulate given a never-ending parade of content scandals: from data misuse for opaque voter targeting (Cambridge Analytica); to the hate and abuse directed at people on platforms like Twitter (UK footballers have, for example, recently been campaigning against racist abuse on social media); to suicide and self-harm content circulating on Instagram; all of it compounded by recent revelations from Facebook whistleblower Frances Haugen, which included the disclosure of internal research suggesting Instagram can be toxic for teens.
All of which is reflected in a pithy opener the committee pens to summarize its report: “Self-regulation of online services has failed.”
“The Online Safety Bill is a key step forward for democratic societies to bring accountability and responsibility to the internet,” it goes on, adding: “Our recommendations strengthen two core principles of responsible internet governance: that online services should be held accountable for the design and operation of their systems; and that regulation should be governed by a democratic legislature and an independent regulator — not Silicon Valley.
“We want the Online Safety Bill to be easy to understand for service providers and the public alike. We want it to have clear objectives, that lead into precise duties on the providers, with robust powers for the regulator to act when the platforms fail to meet those legal and regulatory requirements.”
The committee is suggesting the creation of a series of new criminal offences in relation to what it describes as “harmful online activities”, such as “encouraging serious self-harm”; cyberflashing (aka the sending of unsolicited nudes); and communications intended to stir up hatred against those with protected characteristics. Parliamentarians are also endorsing recommendations by the Law Commission to modernise communications offences and hate crime laws to take account of an age of algorithmic amplification.
So the committee is pushing for the (too) subjective notion of ‘harmful’ content to be tightened to content that's explicitly defined in law as illegal, to avoid the risk of tech companies being left to interpret overly fuzzy rules themselves at the expense of hard-won democratic freedoms. If the government picks up on that suggestion it would be a major improvement.
In another intervention, the committee has revived the thorny issue of age checks for accessing porn websites (preventing kids from accessing adult content online is something the UK has been trying, and failing, to figure out how to do for over a decade) by suggesting: “All statutory requirements on user-to-user services, for both adults and children, should also apply to Internet Society Services likely to be accessed by children, as defined by the Age Appropriate Design Code”; and arguing that the change would “ensure all pornographic websites would have to prevent children from accessing their content”.
Back in 2019 the government quietly dropped an earlier plan to introduce mandatory age checks for accessing adult websites — saying it wanted to take a more comprehensive approach to protecting children from online harms via what’s now called the Online Safety Bill.
However child safety campaigners want the bill to go further, and so, it seems, does the joint committee, albeit in a “proportionate” way.
“We want all online services likely to be accessed by children to take proportionate steps to protect them,” the committee writes. “Extreme pornography is particularly prevalent online and far too many children encounter it — often unwittingly. Privacy-protecting age assurance technologies are part of the solution but are inadequate by themselves. They need to be accompanied by robust requirements to protect children, for example from cross-platform harm, and a mandatory Code of Practice that will set out what is expected. Age assurance, which can include age verification, should be used in a proportionate way and be subject to binding minimum standards to prevent it being used to collect unnecessary data.”
Other recommendations pick up on specific suggestions made by Facebook whistleblower Haugen in her testimony to UK lawmakers earlier this fall, with the committee calling for the law to include a requirement on service providers to conduct internal risk assessments to record “reasonably foreseeable threats to user safety”, further specifying that this should include “the potential harmful impact of algorithms, not just content” [emphasis theirs].
The committee is also urging ministers to extend the scope of the bill to include scams and fraud that stem from paid-for advertising (not merely user-generated content scams), an issue that has been the subject of high-profile campaigning by UK consumer advice personality Martin Lewis, who previously sued Facebook for defamation over scam investment ads misusing his image.
UK lawmakers further suggest individual Internet users should be able to make complaints to an ombudsman when platforms fail to comply with the new law, and recommend that regulated platforms be required to have a senior manager, at board level or reporting to the board, who is designated the “Safety Controller.”
“In that role they would be made liable for a new offence: The failure to comply with their obligations as regulated service providers when there is clear evidence of repeated and systemic failings that result in a significant risk of serious harm to users,” the committee suggests.
While the suggestions are not binding on the government, during her own evidence session before the committee last month, the secretary of state for digital, Nadine Dorries, told lawmakers she's “open” to their suggestions for improving the legislation, which she argued will change Internet culture for good, further predicting “huge kickback” from platforms that have got used to being able to mark their own homework.
“I believe that there will be huge, huge [change],” she said. “This will set off a culture change in terms of our online environments and landscape. There will be huge kickback. Because you have to follow the money — people are making a huge amount of money from these platforms and sites. And of course there will be kickback. But we must not forget the world is watching what we are doing in terms of legislating to hold those platforms to account. That is why it has to be watertight.”
Law and digital rights experts, meanwhile, have given a cautious thumbs up to the committee's intervention, while continuing to warn that the bill itself still poses major risks to online freedoms and also looks set to usher in a nightmarishly complex compliance regime for UK Internet services…