
Meta wants to use AI for assessing privacy violations: Report
What's the story
Meta, the parent company of Facebook and Instagram, is planning to use artificial intelligence (AI) for its risk assessment process.
This move comes after years of human-led reviews assessing the potential risks, such as privacy violations or harm to minors, posed by new features on these platforms.
As per internal documents obtained by NPR, up to 90% of all risk assessments could soon be automated.
Automation impact
AI to approve major changes on Meta's platforms
The shift toward AI means that major changes such as algorithm updates, safety features, and content-sharing policies will be largely approved by an automated system.
This is a departure from the traditional method where staffers would debate potential unforeseen consequences or misuse of platform changes.
The change is seen as a win for product developers, who can now roll out app updates and features faster.
Worries
Concerns over AI's role in risk assessments
Despite the benefits, former Meta employees are worried about AI making complex judgments on how the company's apps could lead to real-world harm.
A former executive told NPR that this process could mean more things debut faster with less rigorous scrutiny and opposition, creating higher risks.
They added that the negative externalities of product changes are less likely to be prevented before they cause problems in the world.
Company statement
Meta's response to privacy concerns
In response to the concerns, Meta said it has invested billions of dollars in user privacy.
The company also clarified that the changes in product risk review are aimed at streamlining decision-making.
It added that "human expertise" is still being used for "novel and complex issues," and only "low-risk decisions" are being automated.
However, the internal documents show Meta is considering automating reviews for sensitive areas like AI safety and youth risk.
AI integration
New process for risk assessment and decision-making
The new process for risk assessment involves product teams getting an "instant decision" after filling out a questionnaire about their project.
This AI-driven decision identifies the risk areas and requirements to address them.
Before launch, the product team has to confirm it has met those requirements.
Under the old system, product and feature updates could not be sent to billions of users without approval from risk assessors.
Diverse opinions
Mixed reactions to Meta's automation of risk reviews
The automation of risk reviews has drawn mixed reactions from former employees.
Zvika Krieger, a former director at Meta, said while there is room for improvement in streamlining reviews through automation, "if you push that too far, inevitably the quality of review and the outcomes are going to suffer."
Another ex-employee questioned whether speeding up risk assessments was a good strategy for Meta, given the scrutiny each new product launch faces.