
The government is facing renewed pressure to strengthen online safety laws after rejecting key recommendations designed to curb the spread of misinformation, despite agreeing with most MPs’ findings on the scale of the problem.
The Science, Innovation and Technology Committee today published the government and Ofcom’s responses to its July report, which concluded that the Online Safety Act (OSA) does not address algorithmic amplification of false content and leaves users exposed to fast-spreading misinformation – much of it produced with generative AI.
Both the government and Ofcom accepted the committee’s assessment that disinformation poses significant risks, yet ministers refused to adopt several key recommendations, including calls to expand online safety legislation to explicitly cover generative AI platforms. The committee said such platforms are capable of spreading large amounts of inauthentic content and should be regulated in line with other high-risk online services.
The government rejected this proposal, insisting that AI-generated content is already covered by the OSA – a position that contradicts Ofcom’s previous testimony before the committee, in which the regulator said the legal status of AI-generated content was “not entirely clear” and suggested that more work was needed.
MPs also warned that misinformation cannot be meaningfully tackled without confronting digital advertising business models that incentivize social media companies to promote harmful content. The government acknowledged the link between advertising and content amplification, but refused to commit to reform, instead saying the issue would remain “under review.”
The chair of the committee, Chi Onwurah MP, criticized the government’s reluctance to take the necessary measures. “If the government and Ofcom agree with our conclusions, why do they stop short of adopting our recommendations?” she said. “The committee is not persuaded by the argument that the OSA already covers generative AI. The technology is evolving much faster than the legislation, and it is clear that more will have to be done.”
She added that failure to address monetization of harmful content leaves a major loophole: “Without addressing the ad-based models that incentivize platforms to algorithmically amplify misinformation, how can we stop it?”
Onwurah warned that complacency poses real risks to public safety. “It is only a matter of time before riots fueled by misinformation, like those in the summer of 2024, happen again,” she said. “The government urgently needs to close the loopholes in online safety law before further damage is done.”
