A Cinema-Inspired Classification to Govern Online Content
In response to ongoing concerns about the safety of young users online, and following recent measures to identify minors posing as adults, Instagram is taking a new direction.
As recently announced on Meta’s website, the social network is set to undergo a significant overhaul of its moderation system for minors. Drawing inspiration from the American film rating system, this update will implement a so-called “PG-13” approach, with stricter content restrictions for teenagers.
A Film-Inspired Model to Screen Posts
In 1968, the American film industry developed its own classification system, with ratings ranging from “G” for general audiences to “R” for restricted, to avoid government-imposed regulation.
This is the model that Meta now intends to apply to Instagram. Specifically, accounts belonging to teenagers aged 13 to 17 will automatically be placed in a mode designed to mirror the PG-13 movie experience.
Meta has stated that posts featuring profanity, suggestive poses, risky challenges, or content related to alcohol and drugs will be hidden or not recommended. These restrictions will also extend to searches, and certain keywords linked to adult content will now be blocked.
Accounts that frequently share content deemed inappropriate will no longer be able to interact with teenagers, nor appear in their recommendations.
New Control Tools for Parents
Beyond automatic filters, Meta is introducing a setting for parents who want to enforce stricter controls. This feature will allow for further restrictions on visible posts and the ability to disable comments or certain interactions with the platform’s integrated AI systems.
According to Meta, this update is supported by feedback from thousands of parents, with 95% finding these new settings helpful and easier to understand.
Parents will also be able to report posts they consider inappropriate, helping Instagram’s teams refine the classification system.
Protection Still Imperfect
This update comes amid increasing pressure on Meta and its competitors, who stand accused of contributing to mental health problems among young users.
Unfortunately, this protection may still be inadequate. A recent survey by Ofcom found that 22% of 17-year-olds claimed to be 18 or older on social media platforms.
Meta also points to identity verification and video-selfie checks, in addition to measures already in place and others expected to roll out by 2026.

Samantha Klein is a seasoned tech journalist with a sharp focus on Apple and mobile ecosystems. With over a decade of experience, she brings insightful commentary and deep technical understanding to the fast-evolving world of consumer technology.