To determine whether users are underage, Meta will use AI to examine their height and bone structure
Meta has announced a new AI-powered system designed to identify and remove users under the age of 13 from its platforms, including Facebook and Instagram. The move is part of the company’s broader effort to improve child safety and comply with regulations that restrict young users from accessing social media services.
The new system uses artificial intelligence to analyze photos and videos for visual indicators of age. These include general features such as height, body proportions, and bone structure. Meta emphasized that the technology does not use facial recognition or identify individuals, but instead estimates a user’s approximate age based on overall visual patterns. This visual analysis is combined with other signals, such as text, captions, profile information, and user interactions, to improve accuracy.
For example, the system may detect clues like birthday celebrations, mentions of school grades, or language patterns that suggest a user may be underage. By combining these different data points, Meta aims to significantly increase the number of underage accounts it can detect and remove.
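The reporting above describes combining several independent signals, visual age estimates plus textual clues, into a single determination. A minimal sketch of how such signal fusion might work in principle (the signal names, weights, and threshold here are illustrative assumptions, not Meta's actual implementation):

```python
# Hypothetical illustration of combining age signals into one score.
# All signals, weights, and the threshold are assumptions for clarity;
# Meta has not disclosed its actual model or scoring logic.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    visual_age_estimate: float       # rough age inferred from photos/videos
    mentions_school_grade: bool      # e.g. a caption about starting 6th grade
    birthday_post_age: Optional[int] # age inferred from a birthday post, if any

def underage_score(s: AccountSignals) -> float:
    """Combine independent signals into a 0..1 underage-likelihood score."""
    score = 0.0
    if s.visual_age_estimate < 13:
        score += 0.5
    if s.mentions_school_grade:
        score += 0.25
    if s.birthday_post_age is not None and s.birthday_post_age < 13:
        score += 0.25
    return score

def should_flag(s: AccountSignals, threshold: float = 0.6) -> bool:
    """Flag only when multiple signals agree, not on any single clue."""
    return underage_score(s) >= threshold
```

The point of the sketch is the design choice the article implies: no single clue (a visual estimate alone, or one caption) clears the threshold by itself, which is why combining data points can raise detection rates without flagging on weak evidence.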
If the AI system flags an account as potentially belonging to someone under 13, Meta will deactivate the account. The user will then be required to go through an age verification process to regain access. This step is intended to prevent false removals while still enforcing platform rules.
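The enforcement flow described here, flag, deactivate, then restore only after verification, can be pictured as a small state machine (the state names and transitions below are an illustrative assumption, not Meta's actual account system):

```python
# Hypothetical sketch of the flag-then-verify account flow described above.
# State names and transition rules are assumptions for illustration only.

from enum import Enum, auto

class AccountState(Enum):
    ACTIVE = auto()
    DEACTIVATED = auto()  # flagged as potentially under 13
    RESTORED = auto()     # age verified, access regained

def handle_flag(state: AccountState) -> AccountState:
    """An AI flag deactivates an active account pending verification."""
    return AccountState.DEACTIVATED if state == AccountState.ACTIVE else state

def handle_verification(state: AccountState, verified_13_plus: bool) -> AccountState:
    """A successful age check restores a deactivated account;
    a failed check leaves it deactivated."""
    if state == AccountState.DEACTIVATED and verified_13_plus:
        return AccountState.RESTORED
    return state
```

Modeling it this way makes the safeguard explicit: deactivation is reversible, so a false positive costs the user a verification step rather than a permanent ban.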
Currently, the AI-based detection system is being tested in select countries, but Meta plans to expand it globally in the near future. The company also intends to extend this technology to more features, including Instagram Live and Facebook Groups.
This announcement comes shortly after a legal setback for Meta. A jury in New Mexico ordered the company to pay $375 million in penalties for misleading users about platform safety and failing to adequately protect children. The ruling also requires Meta to implement major changes, highlighting growing legal and regulatory pressure on Big Tech companies regarding child safety.
In addition to the AI detection system, Meta is expanding its “Teen Accounts” feature on Instagram to the 27 countries of the European Union as well as Brazil. These accounts include stricter safety settings, such as limiting direct messages to known contacts, filtering harmful content, and setting profiles to private by default. The company is also rolling out similar protections on Facebook in the United States, with plans to expand to the U.K. and EU soon.
Meta says these combined efforts reflect its commitment to creating safer online environments for younger users, though the approach may continue to raise questions around privacy and accuracy.