Instagram STILL Fails to Protect Teens Despite Warnings

A Meta whistleblower and university researchers released damning findings showing Instagram’s teen safety features remain largely ineffective despite years of congressional pressure. The report from Arturo Béjar, a former Facebook engineering director, and Cybersecurity for Democracy, a research group at New York University and Northeastern University, finds that 64 percent of the platform’s safety measures fail to protect younger users.

The comprehensive analysis examined 47 safety features and found most of them ineffective at preventing harm to teens and children. Ian Russell and Maurine Molak, who lost children to suicide following cyberbullying and exposure to depression- and suicide-related content, contributed to the study through their organizations, the Molly Rose Foundation and ParentsSOS. Working alongside the children’s safety group Fairplay, they demonstrated how the company continues to prioritize engagement and profits over youth well-being.

Most Safety Tools Fail Basic Protection Standards

The research team assigned a red rating to safety measures that proved trivially easy to circumvent within three minutes. These failed protections included keyword and offensive-content filters for comments, warnings on captions and chats, blocking capabilities, and messaging restrictions between adults and teens, all of which users can bypass without meaningful effort.

(Teens pose holding smartphones in front of an Instagram logo. Image: www.reuters.com)

Another 19 percent received a yellow rating for reducing harm while facing significant limitations. Features such as letting users swipe away and delete inappropriate comments fell into this middle category because the offending accounts can continue commenting without consequence. Parental supervision tools, designed to restrict teens’ usage or inform parents when a child files a report, also earned yellow ratings due to low adoption among parents.

Limited Success in Teen Protection Measures

Only 17 percent of safety tools received a green rating for effective protection. These successful features include the ability to turn off comments, restrictions on who can tag or mention teens, and tools prompting parents to approve or deny changes to default settings on their kids’ accounts. However, even these working protections represent a small fraction of Instagram’s overall safety infrastructure.

The blocking feature exemplifies the platform’s approach to safety measures. While the tool functions as designed, researchers noted that it gives users no way to state a reason for blocking an account, depriving Instagram of a valuable signal for detecting malicious accounts and preventing future harm to other teens.

Meta Disputes Research Findings and Methods

Meta, the parent company of Instagram and Facebook, called the report “misleading and dangerously speculative” in a statement to The Hill. The company claims the analysis “undermines” the conversation around teen safety and “misrepresents” their efforts to empower parents and protect teens. Meta accused researchers of “misstating” how their safety tools work and how millions of parents and teens use them daily.

The company defended its track record by highlighting that teens using these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night. Meta emphasized that parents have access to “robust tools” for limiting usage and monitoring interactions. The company expressed a commitment to improving its tools and welcoming “constructive feedback” but dismissed this particular analysis.

Whistleblower Testimony Exposes Internal Safety Concerns

Béjar, the former Facebook employee who participated in the report, testified before Congress in 2023 about executives dismissing warnings that teens were experiencing unwanted sexual advances and bullying on Instagram. His testimony came two years after Facebook whistleblower Frances Haugen alleged that the company knew its products harmed the mental health of teenage girls.

Nearly four years later, additional Meta whistleblowers continue to raise concerns about the company’s safety practices. Six current and former employees accused the company earlier this month of doctoring or restricting internal safety research, particularly research involving young users on its virtual and augmented reality platforms. These ongoing revelations suggest systemic issues with how Meta assesses and implements safety measures across its platforms.
