Strasbourg, France – On November 26, the European Parliament, sitting in Strasbourg, passed a landmark resolution aiming to set the default minimum age for unsupervised use of social media platforms and AI-driven chatbots at 16. The resolution also proposes banning children under 13 from creating social media accounts, uploading videos, or interacting with “AI companion” applications.
EU lawmakers described the measure as a direct response to sharply rising rates of anxiety, depression, body-image disorders, and self-harm among adolescents who spend more than three hours a day on algorithmically curated feeds.
Although the resolution itself is non-binding, it instructs the European Commission to draft enforceable legislation within the next twelve months. Under the proposed rules, all platforms operating within the EU would be required to verify users’ ages via “privacy-preserving, state-certified digital wallets.” Users identified as under 16 would be automatically switched to a “maximum-safety” mode, disabling features such as targeted advertising, infinite scroll, and push notifications by default.
National data-protection authorities would be empowered to levy fines of up to 6% of a company’s global annual turnover for violations. Furthermore, EU member states would be required to fund digital-literacy courses for parents and teachers.
The Parliament rejected industry-backed amendments that sought to keep the minimum age at 13, arguing that between the ages of 13 and 15 the adolescent brain remains highly vulnerable to dopamine-driven design tricks. Civil-society groups welcomed the move but cautioned that strict technological enforcement must be paired with enhanced mental-health support services.
If the Commission's legislation is enacted, the rules would establish the strictest child online-safety standards in the democratic world and could become a global benchmark, much as the EU's General Data Protection Regulation (GDPR) did for privacy.