To ban or not to ban: Is that even the question?
Australia’s Under-16 Social Media Ban: The Parent in Me Cheers. The Professional in Me Worries.
By a parent and a youth-facing professional, watching Australia’s “world‑first” under-16 social media ban with two minds at once.
Australia’s ban came into effect on 10 December 2025, requiring major platforms to take “reasonable steps” to prevent under-16s from holding accounts, with penalties that can reach up to $49.5 million for non-compliance.
It’s a bold move that’s already shaping debates in the UK and elsewhere about whether we should follow. But it’s also an experiment, and like most experiments involving young people and technology, the outcomes are likely to be messy and mixed.
I’m writing this with two hats on. As a professional, I disagree with the ban; I think education for young people and tough, enforced penalties for companies (paired with design reform) would work better.
Why? Well, because I know teenagers. The second something is out of bounds, they’re going to find a way to break the rules; it's why we love them, right?! They’ll find VPNs and workarounds.
Education will drift out of schools because the mindset kicks in: well, 'it's illegal', so they shouldn't be using it anyway. Use will go underground, arguably making our children far more vulnerable.
The evidence from bringing RSHE into schools shows that more education and training, delivered by the right people, can dramatically reduce harmful outcomes.
But as a parent, I understand the appeal deeply: the ban takes responsibility out of households in a way that feels realistic when parents are constantly outgunned by peer pressure and brain cell-killing algorithms. Australia’s own messaging leans into exactly that “resetting the norm” logic.
As a parent, I would really appreciate the ability to say, "It's illegal, so no," instead of constantly battling and debating with my child. I want to protect them from being manipulated into damaging their attention span, being influenced by algorithms, and being exposed to content that promotes toxic opinions.
I could go on, but ultimately, I and many other like-minded parents are looking for an easier way to navigate these challenges: removing obstacles to building a positive and fruitful relationship with our children, where we are the main source of influence over their values and morals, NOT social media and toxic influencers.
Is it really too much to ask? It's hard enough raising children in this age as it is, right?
So let's delve a little deeper…
What Australia actually did (and didn’t do)
First, some clarity, because “ban” can mean different things. Australia’s rules are primarily about accounts, not total access. Under‑16s can still view some publicly available content without logging in, but covered platforms must prevent under‑16s from creating or keeping accounts, and they face fines if they don’t take “reasonable steps.”
This approach places the legal burden on companies rather than punishing families.
Australia’s eSafety Commissioner published detailed guidance explaining what “reasonable steps” should look like. This includes focusing first on identifying and removing existing underage accounts, building accessible reporting pathways, and putting review processes in place if someone is wrongly flagged.
Importantly, the guidance also says platforms cannot require government ID as the only option; they must offer a reasonable alternative. So the design is not just “show us your passport or lose your account.”
The parent case for the ban: relief, norms, and peer pressure
Let me start with the parent voice, because it’s the one most people recognise in day-to-day life. Parenting around social media often feels like being told to build a fence while someone else keeps moving the cliff edge.
Even if you set household rules, you still must manage the social consequences: group chats, streaks, invitations, social status, and the fear of exclusion.
Australia’s leaders explicitly framed the ban as easing that pressure, turning “my parents won’t let me” into “we’re not allowed yet.”
Appealing, right?
There’s also the raw emotional reality: many parents believe the harms are real, immediate, and hard to control (and they’re not wrong), from cyberbullying to grooming risks to the sleep-destroying pull of endless scrolling.
Australia’s ABC explainer notes the policy intent is to reduce harms like doomscrolling, cyberbullying and grooming (of all kinds), even while acknowledging it won’t be perfect.
And early “how it’s going” coverage suggests some teens report feeling “freer” without the pressure to maintain constant presence, even as others try to bypass the rules.
The professional case against the ban: blunt tools and unintended harms
Now, the youth professional in me. A ban is a blunt instrument applied to a complex reality. Young people don’t just use social media for entertainment; many use it for community, identity exploration, and support, especially those who feel isolated or marginalised.
Australian reporting has highlighted worries that the ban may worsen isolation for some young people in regional communities, and for LGBTQIA+ teens, neurodivergent teens, and other vulnerable groups who have found support online. That’s not a theoretical concern; it’s an immediate wellbeing concern.
The second worry is displacement. When you close doors on the most visible platforms, you don’t eliminate the underlying needs (connection, belonging, curiosity).
You can push young people into less regulated spaces or newer apps that aren’t yet covered.
An Australian journalist has been tracking alternative apps being put “on notice” as usage surges. The risk is that we swap “big platforms with resources” for “smaller platforms with weaker safety systems.”
What seems to be working (so far): compliance activity and account removals
Australia has shown that a government can prompt major platforms to move quickly. The eSafety guidance explicitly expected platforms to focus first on detecting and removing existing under‑16 accounts.
In January 2026, the Australian government publicly claimed that more than 4.7 million under-16 accounts had already been deactivated, removed or restricted (based on early information gathered by eSafety).
That’s not nothing. It indicates measurable compliance activity, at least in the first phase.
Reuters also framed the ban as the beginning of a global test case: a “live experiment” that other governments are watching because they’re frustrated at what they see as slow or insufficient voluntary action from tech firms.
In that sense, Australia’s policy is already “working” as a forcing mechanism, pushing platforms into visible action and making age assurance a board-level issue rather than a PR promise.
What hasn’t worked (yet): circumvention, inconsistency, and “reasonable steps”
But even in the early reporting, you can see the limitations. Australia’s own government and media have stressed the ban won’t be flawless, and ABC has described concerns about age‑assurance tools being fooled, with some teenagers expected to “slip through the cracks.”
The regime relies on “reasonable steps”, a standard that allows variation between platforms and creates a cat‑and‑mouse dynamic between young people and enforcement methods.
There are also practical implementation problems: different platforms will deploy different checks, with different error rates and different user experiences.
That matters because false positives (wrongly blocking someone) and false negatives (missing underage users) both carry harms.
eSafety requires accessible review mechanisms, but real-world friction can still hit families hard, particularly where children feel excluded because friends found workarounds. Journalistic accounts from the rollout have described exactly that unevenness.
Will it make a difference?
Here’s the point I most want a UK audience to hold: account takedowns are not the same as improved wellbeing. We have early indicators (like removal numbers) that enforcement activity is happening, and we have technical feasibility claims in the regulator’s guidance.
We do not yet have strong causal evidence that the ban reduces anxiety, improves sleep, reduces bullying, or improves learning outcomes, because it’s too soon and the pathways are complicated. But, in my opinion, we will see those results in time; we might not catch every underage account, but we can dramatically reduce harm.
There is, however, a clear intent to measure impact over time. Reuters reported that Australia’s regulator planned a longer‑term analysis involving academic expertise to study outcomes over multiple years. That’s the right direction, but until those evaluations mature, much of what we’re trading in is lived experience (not to be undervalued), fear, hope, and early anecdote.
Where I land: holding both hats
I get the ban; I think I’m more for it than against it. But I’d like us to fix, just as much, the systems and the society that lead young people into the hands of algorithms and influencers: teaching young people critical thinking, the physiology of wellbeing, managing relationships, and more.
As a parent, I empathise with the relief: a policy that changes the social norm can genuinely help families breathe.
As a professional, I worry a ban can become a substitute for the harder work: product redesign, effective moderation, algorithmic accountability, and enforced penalties that change company behaviour, not just children’s access.
Critiques from digital rights and policy communities argue that age‑gating can expand data collection and surveillance incentives while leaving the underlying “harmful by design” dynamics untouched.
If the UK is tempted to follow Australia, my plea would be: don’t treat “ban vs no ban” as the only choice. Combine robust enforcement with education, youth participation in policy design, and reforms that make platforms safer by default.
If we do adopt restrictions, we must invest heavily in the “what happens next” infrastructure: support services, reporting pathways, and safe alternative online spaces, so that vulnerable young people don’t lose connection without gaining protection.
UK-focused academic commentary has already warned that bans can create trust issues and displace harms rather than resolving them.
A practical checklist for a better approach (ban or no ban)
If you’re reading this as a parent, youth worker, policymaker or simply someone who cares, here’s what I think matters most:
Enforcement that’s real: penalties that are applied, not just threatened.
Safety-by-design rules: reduce addictive patterns and risky recommender loops, not just under‑age accounts.
Credible education: co-designed digital literacy, not one-off assemblies.
Privacy‑protecting age assurance: minimal data, alternatives to ID, transparent standards and audits.
Support pathways: for those losing online communities, especially marginalised young people.
Closing thought
Australia’s ban is a mirror: it reflects how exhausted families feel, and how little trust many of us have that platforms will voluntarily put children’s wellbeing ahead of the bottom line.
Whether the policy ultimately helps or harms will depend on enforcement, adaptation, and what else governments do alongside it. My parent self says, “Thank goodness someone acted.” My professional self says, “Yes, but let’s not stop here.”
Sources
The Ban Argument
ABC News (Australia): Clear explainer of what the ban does in practice
https://www.abc.net.au/news/2025-12-10/australias-social-media-ban-for-under-16s-starts-today/106119800
eSafety Commissioner (Australia): The regulator’s official guidance on “reasonable steps,” age assurance, safeguards
https://www.esafety.gov.au/sites/default/files/2025-09/eSafety-SMMA-Regulatory-Guidance.pdf
Reuters (via U.S. News republish): Trusted international reporting + global context
https://www.usnews.com/news/world/articles/2025-12-08/australia-social-media-ban-set-to-take-effect-sparking-a-global-crackdown
RSHE/RSE evidence (education + trained delivery)
UK Parliament (POSTnote): Accessible summary of evidence + what makes RSE effective (incl. trained delivery)
https://researchbriefings.files.parliament.uk/documents/POST-PN-0576/POST-PN-0576.pdf
BMJ Open evidence synthesis: Best‑practice features and what works in SRE/RSE (still readable)
https://bmjopen.bmj.com/content/7/5/e014791.full.pdf
UNESCO/UNFPA evidence overview: Pulls together systematic‑review evidence on comprehensive sexuality education
https://unesco.org.uk/site/assets/files/10492/comprehensive_sexuality_education_-_an_overview_of_the_international_systematic_review_evidence.pdf