In a groundbreaking move, Australia has proposed a ban on social media access for children under 16, covering platforms such as Instagram, TikTok, Facebook and X. The legislation, expected next year, is intended to shield young minds from harmful content, including body image pressures and misogynistic messaging that can damage mental health. While well-meaning, the ban raises questions about its practicality and likely effectiveness. Some countries have tried to restrict children’s access to social media through age verification and parental controls, but these efforts have proven difficult to enforce. Australia proposes to enforce its ban through biometric and government ID checks.
Studies show that children on social media frequently encounter cyberbullying and online predators, and that such exposure can lead to emotional distress, anxiety and eating disorders. In one extreme case, 14-year-old Molly Russell was found to have viewed self-harm-related content before taking her own life. Social media companies have been consistently criticised for profiting from engagement algorithms that expose children to harmful content. A tough stance is indeed needed to protect vulnerable youngsters. At the same time, an outright ban could push children towards darker, unregulated corners of the Internet. One industry expert has called Australia’s approach a ‘20th-century’ solution, warning that blocking access risks cutting young users off from essential online resources and support networks. Age-appropriate platforms, digital literacy and better safeguards, the expert argues, would be more effective.
Australia’s proposal highlights a global dilemma: how to balance children’s safety online with nurturing responsible digital engagement. If effectively implemented, the policy could set an important precedent. Safeguarding children requires cooperation among governments, tech companies and parents, with a shared commitment to addressing social media’s risks while fostering healthy online habits.