Oversight Board urges Meta to tackle AI deepfakes, misinformation
Meta's independent Oversight Board is urging the company to crack down on AI-generated deepfakes and misinformation, especially during conflicts such as the current U.S.-Israel-Iran hostilities.
The board is pushing for clearer guidelines so users can more easily spot AI-manipulated content, with a focus on making sources and edits obvious.
Board's recommendations include better AI detection tools
The board suggests better AI detection tools, clear labels for AI-made media (including a "High Risk AI" tag), and penalties if creators hide that something's been altered.
This push follows an AI-generated video from the June 2025 Israel-Iran conflict that the board said lacked the "High Risk AI" label it recommended.
Meta has 30 days to respond. While it is not required to adopt the board's recommendations, some jurisdictions (for example, India) have their own legal requirements for labeling AI-generated content.