FDA's AI tool used for drug approvals generates fake studies
Elsa fabricates non-existent studies: FDA employee


Jul 24, 2025, 09:37 am

What's the story

The US Food and Drug Administration's (FDA) artificial intelligence (AI) tool, Elsa, has been accused of generating fake studies. The revelation comes from a CNN report based on interviews with six current and former FDA employees. While three of them found Elsa helpful for tasks like meeting notes and summaries, the other three raised concerns over the system's accuracy.

AI concerns

Elsa 'hallucinates confidently,' fabricating non-existent studies

The employees who raised concerns over Elsa's reliability said it often fabricates non-existent studies, a phenomenon known in AI as "hallucination." One unnamed FDA employee told CNN, "Anything that you don't have time to double-check is unreliable. It hallucinates confidently." The issue is not unique to Elsa: all AI chatbots can hallucinate, which is why their output requires human verification.

Cost claims

Elsa touted as 'cost-effective' in internal documents

FDA Commissioner Marty Makary introduced Elsa to the agency on June 2. An internal slide leaked to Gizmodo touted the system as "cost-effective," putting its first-month cost at just $12,000. Despite those savings, concerns remain that the tool can produce inaccurate or misrepresented information.

Efficiency debate

Concerns over accuracy of drug approvals

Robert F. Kennedy Jr., the Secretary of Health and Human Services, has testified that AI is already being used to "increase the speed of drug approvals." The accuracy of those approvals, however, remains a major concern: if an FDA employee asks Elsa to summarize a lengthy paper on a new drug, there is no easy way to verify the summary's accuracy or catch red flags present in the original report.