A Cinema-Inspired Classification to Govern Online Content
In response to ongoing concerns about the safety of young users online, and following recent measures to identify minors posing as adults, Instagram is taking a new direction.
As recently announced on Meta’s website, the social network is set to undergo a significant overhaul of its moderation system for minors. Drawing inspiration from the American film rating system, this update will implement a so-called “PG-13” approach, with stricter content restrictions for teenagers.
A Film-Inspired Model to Screen Posts
In the 1960s, the American film industry developed its own classification system ranging from “G” to “R” to avoid government-imposed regulations.
This is the model that Meta now intends to apply to Instagram. Specifically, the accounts of teenagers aged 13 to 18 will automatically be set to a mode designed to mirror the PG-13 movie experience.
Meta has stated that posts featuring profanity, suggestive poses, risky challenges, or content related to alcohol and drugs will be hidden or not recommended. These restrictions will also extend to searches, and certain keywords linked to adult content will now be blocked.
Accounts that frequently share content deemed inappropriate will no longer be able to interact with teenagers, nor appear in their recommendations.
New Control Tools for Parents
Beyond the automatic filters, Meta is introducing a setting for parents who want stricter controls. It will let them further restrict which posts are visible and disable comments or certain interactions with the platform’s integrated AI systems.
According to Meta, this update is supported by feedback from thousands of parents, with 95% finding these new settings helpful and easier to understand.
In addition, parents will also be able to report posts they consider inappropriate, aiding Instagram’s teams in refining the classification system.
Protection Still Imperfect
This update comes amid increasing pressure on Meta and its competitors, who stand accused of contributing to mental health problems among young users.
Unfortunately, this protection may still be inadequate. A recent survey by Ofcom found that 22% of 17-year-olds claimed to be 18 or older on social media platforms.
Meta also says it may rely on identity verification and video-selfie age checks, on top of measures already in place and others expected to roll out by 2026.

Samantha Klein is a seasoned tech journalist with a sharp focus on Apple and mobile ecosystems. With over a decade of experience, she brings insightful commentary and deep technical understanding to the fast-evolving world of consumer technology.