Synopsis

Meta plans to use artificial intelligence to automate up to 90% of the internal reviews that assess privacy, safety, and risk across Instagram, WhatsApp, and Facebook, reserving human experts for novel or complex cases, according to internal documents reviewed by NPR.

Meta Platforms plans to automate up to 90% of internal checks that evaluate privacy, safety, and risk implications across its apps — including Instagram, WhatsApp, and Facebook — using artificial intelligence, according to internal documents reviewed by US news outlet NPR.

These product risk reviews, which previously relied heavily on human reviewers, assess whether new features could cause harm to users, violate privacy, or spread harmful content. Under the new system, AI tools will approve most updates — including changes to Meta’s core algorithms, safety tools, and content-sharing policies — without requiring manual scrutiny or human debate.

The internal documents indicate that human experts will be involved only in “novel or complex” cases, while low-risk changes will be fully automated.

The shift has raised concerns internally. A former Meta executive told NPR that faster product rollouts with fewer checks could increase the risk of real-world harm. “Insofar as this process functionally means more stuff launching faster, with less rigorous scrutiny and opposition, it means you’re creating higher risks,” the former executive said on condition of anonymity. “Negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”

Meta responded to the report, saying the goal is to streamline decision-making while maintaining compliance and oversight. “We leverage technology to add consistency and predictability to low-risk decisions and rely on human expertise for rigorous assessments and oversight of novel or complex issues,” a company spokesperson told TechCrunch.

Meta is required to conduct internal privacy reviews under a 2012 agreement with the US Federal Trade Commission. Until now, these checks have been largely human-led.

The company said it has invested more than $8 billion in its privacy programme and is committed to balancing innovation with compliance.

Internal records cited in the report also suggest that Meta could extend AI oversight to highly sensitive areas including youth safety, misinformation, and AI-related risk.
