The world of social media is facing a growing concern over the age of its users. With younger and younger people joining platforms like TikTok, lawmakers are scrambling to come up with effective solutions to keep children safe online.
TikTok's latest move has been hailed as a compromise between regulatory pressure and user privacy concerns. The company has implemented an age-detection system across Europe that uses a combination of profile data, content analysis, and behavioral signals to identify accounts belonging to minors under 13. Unlike automatic bans, this new system flags suspicious accounts for human moderators to review.
However, experts say that even with the best intentions, TikTok's strategy still requires the platform to surveil users more closely than before. "This is a fancy way of saying that TikTok will be surveilling its users' activities and making inferences about them," says law professor Eric Goldman. While the system may not automatically ban underage users, the increased surveillance raises digital privacy concerns.
Goldman also warns that age verification mandates can have unintended consequences. "Users probably aren't thrilled about this extra surveillance, and any false positives—like incorrectly identifying an adult as a child—will have potentially major consequences for the wrongly identified user." The lack of standardization in age-verification methods across platforms also poses scalability issues.
Data & Society's director of research Alice Marwick agrees that while TikTok's new approach may seem marginally better than automatic bans, it still requires closer surveillance. "This will inevitably expand systematic data collection, creating new privacy risks without any clear evidence that it improves youth safety."
Meanwhile, in other parts of the world, governments are taking a more drastic approach to online child safety. Australia has banned social media for children under 16, and the European Parliament is advocating for mandatory age limits on online platforms.
The debate around online child safety raises broader questions about whether technology alone can resolve what is fundamentally a policy and societal challenge. Advocacy groups in Canada are calling for the creation of a dedicated regulatory body to address online harms affecting young people.
If TikTok's strategy becomes the industry norm, it remains to be seen whether it will be enough to keep children safe online. One thing is certain, however: policymakers must tread carefully when balancing regulation with user privacy. The future of social media depends on it.