Meta Uses AI to Detect Underage Users by Scanning Photos and Videos for Visual Age Clues
By admin | May 05, 2026 | 2 min read
Meta announced on Tuesday that it will begin using artificial intelligence to analyze photos and videos for visual indicators that suggest a user might be under 13, potentially leading to their removal from Facebook and Instagram. The company stated that these visual cues could include factors like a person’s height or bone structure. "We want to be clear: this is not facial recognition," Meta explained in its blog post. "Our AI looks at general themes and visual cues, for example height or bone structure, to estimate someone’s general age; it does not identify the specific person in the image. By combining these visual insights with our analysis of text and interactions, we can significantly increase the number of underage accounts we identify and remove."
The visual analysis system is currently operational in select countries, and Meta is working toward a wider rollout. The company says the system is part of its broader effort to keep children under 13 off its platforms. That effort already includes using AI to scan entire profiles for contextual clues, such as mentions of birthday celebrations or school grades. Meta looks for these signals across various formats, including posts, comments, bios, and captions. The company plans to expand the technology to more parts of its apps in the future, including Instagram Live and Facebook Groups.
If Meta determines that a person may be underage, it will deactivate their account. The user will then need to prove their age through the company’s age verification process to prevent their account from being permanently deleted. The announcement comes weeks after a New Mexico jury ordered Meta to pay $375 million in civil penalties for misleading consumers about platform safety and putting children at risk. The court also required Meta to implement fundamental changes to its platforms, and Meta has since threatened to shut down its social media services in the state. The case is one of many child safety lawsuits facing Meta and other Big Tech companies.
In a separate announcement on Tuesday, Meta said it is expanding the technology that automatically places teens into stricter "Teen Accounts" on Instagram to the 27 EU countries and Brazil. These accounts provide a more restrictive experience with additional safeguards, such as receiving direct messages only from people they follow or are already connected to, hiding harmful comments, and setting accounts to private by default. Meta also confirmed that it is bringing the technology to Facebook for the first time, starting in the U.S., followed by the U.K. and the EU in June.