Facebook bug reveals moderators' identities to suspected terror groups
Reportedly, over 1,000 moderators across 22 departments who used Facebook's moderation software to remove problematic and inappropriate content were affected.
Out of them, 40 worked in Facebook's counter-terrorism unit in Dublin.
Details of the leak
The leak was identified in November 2016, when moderators began receiving friend requests from people linked to the terror groups whose content they were censoring.
Upon identifying the leak, Facebook reportedly warned all employees it believed were affected.
By the time the bug was finally fixed, it had been active for over a month and had retrospectively exposed the identities of moderators who had censored content and accounts from August 2016 onwards.
How did the profiles get leaked?
As a result of the software bug, the personal profiles of the content moderators appeared as notifications in the activity logs of Facebook groups from where content and/or users had been removed for breaching Facebook's terms of service.
These identities therefore became viewable by the remaining admins of those groups.
Facebook's measures to protect and support those affected
Facebook offered to install home alarm systems and provide transport to and from work to employees who were assessed to be in the "high risk" category.
It also offered counselling through its employee assistance programme to help those affected deal with the panic and stress.
Moderator went into hiding for five months
The Guardian interviewed one of six "high priority" victims, all of whom worked in Facebook's counter-terrorism unit.
The Iraqi-born Irish moderator fled to eastern Europe for five months after discovering that his profile had been viewed by members of an Egypt-based group that backed Hamas and sympathized with the Islamic State.
He is currently seeking compensation for the psychological damage caused.