Deloitte admits to using AI for citations in $1.6 million report
Deloitte Canada has come clean about using AI to help with some research citations in a $1.6 million healthcare report, after four references were found to be fabricated or misattributed.
The company insists AI wasn't used to write the report, only to support a handful of citations, but the errors have prompted calls for stricter oversight and standards for AI use in government-commissioned research and reporting.
What actually happened?
Deloitte says, "AI was not used to write the report; it was selectively used to support a small number of research citations."
Even though it has admitted the four citation errors, Deloitte stands by the report's main findings.
Critics, such as NAPE President Jerry Earle, have expressed concerns and are calling for higher standards and accountability in how AI is used.
Not Deloitte's first run-in with AI mistakes
This isn't new territory for Deloitte: earlier this year, its Australian arm had to refund part of another government contract over fake AI-generated citations.
These slip-ups highlight how hard it can be to verify facts and sources when generative AI tools are part of the workflow.
Why does this matter?
With over 80% of health executives expecting major changes from generative AI by 2025, incidents like this raise questions about whether current rules are enough to keep government-commissioned work honest and accurate as the technology moves fast.