Instagram revamps teen safety features as Congressional pressure mounts

Meta’s Instagram on Tuesday unveiled what was billed as a major overhaul of child safety features, a move that online watchdogs quickly criticized as an attempt to head off a looming congressional crackdown on the social media giant.

Instagram said it will automatically place users under 18 into “teen accounts” and block people who don’t follow them from viewing their content or interacting with them.

It will also mute Instagram app notifications for teenage users between 10 p.m. and 7 a.m. and send “time limit reminders” urging teens to close the app after 60 minutes per day.

Mark Zuckerberg issued a surprising apology to the families of victims of online harm in January. AP

Parents will be able to see which accounts their child has recently messaged, set daily time limits, and block teens from using the app during specific time periods.

Additionally, users under the age of 16 will need parental permission to make changes to their account security settings.

Meta’s announcement was not well received by online safety groups, many of which said the improvements were inadequate.

Sacha Haworth, executive director of the Tech Oversight Project, said parents “should ignore Meta’s latest fake ad” and noted that the company has a history of “broken promises and policy changes” when it comes to online safety.

“Meta can release as many kid- or teen-focused features as it wants, but that won’t change the fact that its core business model is based on making a profit and encouraging kids and teens to become addicted to its products — and American parents are aware of this hustle and are demanding legislative action,” Haworth said in a statement to The Post.

The overhaul was announced as the bipartisan Kids Online Safety Act (KOSA) gains momentum in Congress. The landmark bill would impose a legal “duty of care” on Instagram parent Meta, as well as TikTok and other social media companies, to protect children from online harm.

In July, the Senate passed KOSA in an overwhelming 91-3 vote, along with another bill known as COPPA 2.0, which would ban advertising to minors and data collection without their consent and give parents and children the option to remove their information from social media platforms.

The House Energy and Commerce Committee is expected to approve the bills on Wednesday, a key procedural step that would clear the way for a floor vote in the near future.

Another watchdog, the Tech Transparency Project, argued that Meta “has claimed for years that it is already implementing” versions of the features detailed in Tuesday’s announcement.

For example, Meta originally announced plans to make teen accounts private by default and limit their interactions with strangers starting in 2021, according to previous blog posts.

The group also noted that several of the online safety experts who promoted Meta’s changes in the company’s blog post work for organizations that have received funding from the company.

“Not only is Meta presenting these efforts as new while simultaneously claiming for years that it was implementing these safety tools, but it is also touting Meta-funded voices and presenting them as independent experts,” the Tech Transparency Project wrote on X.

Instagram on Tuesday announced new safety features for kids and their parents. Ink drop – stock.adobe.com

Fairplay for Kids, one of the groups leading the push to pass KOSA, criticized Meta’s announcement as an attempt to bypass a significant legislative crackdown.

“Default private accounts for minors and disabling notifications in the middle of the night are safety measures that Meta should have implemented years ago,” said Josh Golin, CEO of Fairplay. “We hope lawmakers are not fooled by this attempt to block legislation.”

“The Kids Online Safety Act and COPPA 2.0 will require companies like Meta to ensure their platforms are safe and protect young people’s privacy at all times, not just when it is politically convenient,” Golin added.

Alix Fraser, director of the Council for Responsible Social Media, had a similar view of the announcement.

“The fact is that this announcement comes at a time when pressure from Congress is mounting and support for the bipartisan Kids Online Safety Act continues to grow,” Fraser said. “It would not be the first time Meta has made a promise to avoid congressional action and then either never followed through or quietly backed out of it.”

Online security groups accused Meta of trying to evade a legislative crackdown. New Africa – stock.adobe.com

The Post has reached out to Meta for comment.

Policymakers have called out Meta for failing to protect children from “sextortion” scams and other forms of online sexual abuse.

Critics have also accused apps like Instagram of fueling a youth mental health crisis with negative outcomes ranging from anxiety and depression to eating disorders and even self-harm.

Last fall, a coalition of state attorneys general sued Meta, alleging the company had relied on addictive features to hook children and boost profits at the expense of their mental health.

In January, Zuckerberg issued a stunning apology to the families of victims of online abuse during a tense hearing on Capitol Hill.


Despite its easy passage in the Senate, KOSA’s final prospects in the House of Representatives remain uncertain, with some critics in both parties raising concerns about its impact on online freedom of expression.

In July, US Surgeon General Vivek Murthy called for a “warning label” on social media apps, similar to those on cigarette packages, to raise awareness of their potential mental health risks, including depression and anxiety.
