AI chatbots directing users to illegal online casinos: Report
What's the story
A recent investigation has revealed that artificial intelligence (AI) chatbots are directing vulnerable social media users to illegal online casinos. The practice puts these individuals at a higher risk of fraud, addiction, and even suicide. The analysis looked at five different AI products from some of the world's biggest tech companies and found all could easily be prompted to list the "best" unlicensed casinos and tips on how to use them.
Concerns
Tech firms have few controls to stop AI chatbots
These illegal online casinos, often disguised with licenses from small jurisdictions such as the Caribbean island of Curaçao, have been associated with fraud, addiction, and even suicide. Despite these concerns, tech firms appear to have few controls in place to stop their AI chatbots from recommending such platforms. This has drawn criticism from government officials, the UK gambling regulator, campaigners, and a leading addiction expert.
Advice
'Buzzkill' and 'real pain'
Some of the AI chatbots even offered advice on how to bypass checks designed to protect vulnerable people. Meta AI, a product of the social media group behind Facebook, went as far as calling legally required measures to prevent crime and addiction a "buzzkill" and a "real pain." This highlights the potential risks posed by these advanced technologies if not properly regulated or monitored.
Investigation findings
Chatbots acting as conduits to offshore casinos
An investigation by The Guardian and Investigative Europe has found that chatbots are acting as conduits to offshore casinos. These websites are not licensed to operate in the UK and have been accused of targeting people with gambling problems. An inquest earlier this year found that illegal casinos were "part of the factual matrix" that led to Ollie Long's suicide in 2024, highlighting the real-world consequences of these online platforms.
Risky guidance
Five chatbots tested for the investigation
The investigation tested Microsoft's Copilot, Grok, Meta AI, OpenAI's ChatGPT, and Google's Gemini by asking each six questions about unlicensed casinos. The bots were asked to list the "best" online casinos and to explain how to avoid "source of wealth" checks, which are designed to ensure gamblers aren't using stolen money or betting beyond their means. All five chatbots were easily prompted to recommend illegal casinos.
Limited warnings
Recommendations based on competitive bonuses, fast payouts
Among the five chatbots, only two provided information about support services users could access if they were worried about their gambling. All of them based their recommendations on whether illicit sites offered competitive bonuses or fast payouts. Meta AI seemed the least concerned about casinos operating illegally in the UK, even praising one site's "generous rewards and flexible gameplay" as well as its cryptocurrency payment options.
Response
Google spokesperson on Gemini's safeguards
A Google spokesperson said Gemini was "designed to provide helpful information in response to user queries and highlight potential risks where applicable." They added, "We are constantly refining our safeguards to ensure these complex topics are handled with the appropriate balance of helpfulness and safety." The UK government has emphasized that chatbots "must protect all users from illegal content," referring to requirements set by the Online Safety Act.