
For three months the Online Safety Act has been busily punishing the companies that actually comply with it. Should we start preparing for outright blocking?
It’s been three months since the Online Safety Act’s major duties came into force in the UK, and so far, the only people not criticizing it seem to be the ones who wrote it. Demand for VPNs has skyrocketed, while a petition against the law collected nearly half a million signatures in just a few days. Still, there’s no sign of anyone in the government reconsidering it. Xeovo looks into who’s actually benefited, what the costs have been, and whether the promised results are showing up at all.
How it works
The Office of Communications (Ofcom) hasn’t published a definitive list of platforms required to implement age verification. In practice, this covers nearly every platform that could possibly host content “potentially harmful for children.” That means file-sharing services, gaming platforms, marketplaces, social networks — basically “the biggest platforms where children spend most time.” Children, however, use roughly the same platforms as adults. The vagueness here is convenient — it avoids accusations of bias and leaves room to expand the law’s scope whenever needed.
Yet Ofcom has released its list of “harmful content” categories, and it reads like a control freak’s wishlist: not only child sexual abuse, firearms, and drugs, but also hate speech, terrorism, unlawful immigration, financial fraud, and “foreign interference offences.” The last one refers to search results containing content “not aligned with UK interests” and “linked to foreign governments.” Hardly the kind of threat children are most likely to encounter.
Unlike porn sites, social media and hosting platforms don’t primarily deal in “harmful” content — and they already enforce strict moderation. Facebook, Instagram, and YouTube routinely remove anything that could upset large groups of users or, worse, advertisers. Algorithms label certain material as violent or sensitive and either hide it from younger audiences automatically or require age verification before viewing.
Those algorithms already have a pretty good idea of a user’s age range — based on interactions, friends, searches, and the date of birth given at registration. “Adult” videos (flagged by frame analysis or self-assigned ratings) are restricted until the user proves they’re not a minor.
Platforms without their own age-analysis systems must rely on external providers — and the stricter the risk level of the content, the tougher the verification has to be.
Verification providers
Verification platforms use one or several methods suggested by Ofcom: open banking, photo ID matching (uploading a photo ID alongside a selfie), facial age estimation, mobile network data, credit card checks, digital ID wallets, or even email-based age estimation.
These providers return only the verification result to the requesting site. They swear to respect privacy — keeping no data longer than a few days, encrypting all images and videos, or analyzing them entirely on the user’s device. They have no access to user actions on the verified website.
That’s the promise, anyway.
Users who want convenience can create accounts directly with a verification provider. This becomes their digital ID, reusable across any partnered site. Naturally, that allows the provider to link those sites together. Matching the same email or phone number across verifications? Technically possible. Whether they do it or not is another question.
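The flow described above can be sketched in a few lines of Python. This is a hypothetical model, not any real provider’s API: the class and field names are our own, and the point is simply that the partner site receives only a bare yes/no while the provider, holding the email internally, can correlate one user across every partnered site.

```python
# Hypothetical sketch of the verification data flow; names are illustrative.
import hashlib
from dataclasses import dataclass


@dataclass
class VerificationResult:
    over_18: bool  # the only thing the requesting site receives
    method: str    # e.g. "face_scan" or "photo_id"


class Provider:
    def __init__(self):
        self._seen = {}  # email hash -> set of partner sites that asked

    def verify(self, site: str, email: str, over_18: bool) -> VerificationResult:
        # The site gets a bare result; the raw email never leaves the provider.
        key = hashlib.sha256(email.lower().encode()).hexdigest()
        self._seen.setdefault(key, set()).add(site)
        return VerificationResult(over_18=over_18, method="face_scan")

    def linked_sites(self, email: str) -> set:
        # The provider itself, however, can link one user across partner sites.
        key = hashlib.sha256(email.lower().encode()).hexdigest()
        return self._seen.get(key, set())
```

Nothing in this sketch forces the provider to keep `_seen` at all — that is exactly the “whether they do it or not” question.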
The verification providers used by some prominent platforms are listed in the table below (major social networks and a few other platforms, such as Amazon and Match Group, verify users’ ages themselves):
| Platform | Verification provider | Verification methods |
| --- | --- | --- |
| | Yoti (UK) | Face scan, ID verification |
| | Internal system and All Pass Trust (Cyprus), including VerifyMyAge Ltd, OneID Ltd, Google LLC | Credit card, email, mobile network operator, open banking, digital ID |
| | Persona (USA) | Selfie, ID verification |
| | Facetec (USA) | Face scan, photo ID matching |
| | Yoti | Selfie, ID verification, mobile provider, credit card check |
| | Kid Web Services (KWS) | Face scan, ID verification, payment card |
| | k-ID (Singapore) | Face scan, ID verification |
| | k-ID | Face scan, ID verification |
| | Persona | Photo ID matching |
| | k-ID | Face scan |
Among the most common methods are face scans and ID checks — a compromise between accuracy and data breach risk. But either way, verification isn’t cheap. Yoti, one of the biggest players, charged between $0.17 and $0.42 per verification from mid-April to mid-May, depending on the method used.
The costs
Platforms that refuse to comply face fines of up to £18 million or 10% of global revenue — whichever is higher. So far, Ofcom has flagged 69 violators. One of them is 4chan, fined £20 million plus £100 per day until compliance. Still, that’s less than Google’s fines in Russia — so for many platforms, it’s simply the cost of doing business until the political weather changes and those astronomical penalties are removed. Unless, of course, Ofcom decides to start blocking access entirely.
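The penalty arithmetic above is simple enough to write down. A minimal sketch, using only the figures in the text (£18 million or 10% of global revenue, whichever is higher; and 4chan’s £20 million plus £100 per day):

```python
# Statutory ceiling under the OSA: £18M or 10% of global revenue,
# whichever is higher.
def max_penalty_gbp(global_revenue_gbp: float) -> float:
    return max(18_000_000, 0.10 * global_revenue_gbp)


# 4chan's case: £20M up front plus £100 per day until compliance.
def accrued_fine_gbp(days_noncompliant: int) -> float:
    return 20_000_000 + 100 * days_noncompliant
```

For anyone below £180 million in global revenue, the flat £18 million floor is what bites — which is precisely why the fine dwarfs a small platform’s entire business.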
Rebellion here isn’t about principle so much as practicality. Most non-compliant platforms just can’t afford it. Take 4chan again: its annual revenue is estimated at less than $12.2 million, with roughly 7% of its global traffic — about 180,000 monthly users — coming from the UK. If age verification causes 90% of users to drop off, there’s literally no economic reason to implement it.
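A back-of-envelope check of those numbers, using Yoti’s published per-check rates quoted above. One assumption here is ours, not the article’s: that each remaining UK user is verified once per month.

```python
# Figures from the text; verification frequency is an assumption.
annual_revenue_usd = 12_200_000       # 4chan's estimated annual revenue
uk_share = 0.07                       # share of traffic from the UK
uk_monthly_users = 180_000
dropoff = 0.90                        # assumed user loss after age gates
cost_per_check_usd = (0.17, 0.42)     # Yoti's per-verification range

uk_revenue = annual_revenue_usd * uk_share            # ~$854k/year
retained_revenue = uk_revenue * (1 - dropoff)         # ~$85k/year left
remaining_users = uk_monthly_users * (1 - dropoff)    # ~18k users/month
yearly_cost = [remaining_users * c * 12 for c in cost_per_check_usd]
# ~$37k to ~$91k/year just on verification fees
```

Even verifying only the 10% of users who stay would cost roughly as much as the UK revenue they bring in — hence “literally no economic reason to implement it.”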
And that rejection rate isn’t likely to shrink anytime soon, given the privacy concerns. A recent cyberattack on Discord’s age-verification provider leaked ID photos from a third of its users — not exactly reassuring. Add in the fact that some verification providers track user activity or have ties to intelligence agencies, and public trust sinks even lower.
Sure, no one wants kids addicted to porn. No one wants them stumbling into “foreign interference offences” or cringey AI slop either. But can we guarantee they won’t just turn to one of the 3,000 clone sites pirating content from the originals — the ones now hidden behind age gates? No, we can’t.
Can we guarantee they won’t dive deeper into the darker corners of the internet to avoid those gates — and find something far worse than porn? Also no. Not to mention false positives from face-scanning errors or kids simply asking their older siblings or schoolmates to pass the check for them.
So far, so…not good
- **Circumvention.** Despite earlier talk of banning VPNs after the OSA’s adoption, their popularity exploded once the law took effect. By late July, VPNs made up half of all free app downloads in the App Store, and the internet is flooded with guides on bypassing age verification. Less common but still popular workarounds include AI deepfakes and even video game screenshots with faces.
- **Punishing compliance and copyright erosion.** The Washington Post examined 90 UK adult sites and found that traffic soared for 14 of those that didn’t implement age checks, while compliant platforms lost users en masse. And it’s not just the rule-breakers benefiting: clone sites are thriving too, mirroring major platforms and cross-posting their content without permission or age verification. The result is a booming underground of unverified, pirated adult sites.
- **Censorship creep.** The Act allows authorities to block any content deemed harmful to children, including informational material. Access has already been restricted to footage of anti-immigration protests and to a speech by MP Katie Lam about sexual assaults (ironically, her own party pushed the bill). Platforms like Reddit and X (formerly Twitter) have begun self-censoring preemptively, hiding potentially “sensitive” content not just from UK users but globally.
- **Eroded privacy and rising fraud.** The Discord case likely won’t be the last. As many have pointed out, once the door is open to one actor (verification providers, in this case), it’s open to everyone. British law enforcement has yet to present any breakthrough solution to the growing wave of impersonation, ID theft, and credit card fraud the Act may soon indirectly fuel.
This law mostly punishes those who try to follow it. It’s boosted piracy, normalized circumvention tools, and handed a windfall to verification providers. Maybe someday it’ll genuinely help protect children from harmful content — but for now, its main effect has been to erode privacy rights.
The silver lining? VPNs still work beautifully. And given that regulators haven’t yet found a way to force compliance, they may soon reach for something harsher.

Silence censorship. Protect your privacy and bypass restrictions with Xeovo VPN. Use code “HBR-10”.
Link to the original article: https://habr.com/ru/articles/1024266/