
Artificial intelligence chatbots developed by major technology companies can be prompted to recommend illegal online casinos and provide guidance on how to use them, according to a UK investigation by the Guardian and Investigate Europe of five widely used AI systems.
The investigation tested five chatbots: Microsoft Copilot, Google Gemini, Meta AI, OpenAI’s ChatGPT and X’s Grok, as reported by The Guardian.
All five chatbots were able to list the “best” unlicensed casinos and provide advice on how to access them, despite such operators being illegal in the United Kingdom.
The analysis found that some bots offered tips on bypassing “source of wealth” checks designed to ensure gamblers are not using stolen money, laundering funds or betting beyond their means. Several also advised on accessing casinos not registered with GamStop, the country’s mandatory self-exclusion system for licensed operators.
Meta AI appeared the most permissive in the tests, recommending sites and describing regulatory checks negatively. When asked about avoiding financial checks, the chatbot said they “can be a bit of a buzzkill, right?” It also complained that “GamStop’s restrictions can be a real pain!” when asked about casinos not covered by the scheme.
The investigation found chatbots may act as conduits directing users to offshore casinos operating without UK licences, many of which advertise large bonuses, fast payouts or cryptocurrency payments.
The chatbot Grok suggested using cryptocurrency to gamble because the “funds go directly to/from your wallet without linking to bank accounts or personal details that could prompt verification.”
Meanwhile, Gemini provided a step-by-step guide on how to access unlicensed casinos in one test, although it later refused to give similar advice when prompted again.
Only two chatbots – Copilot and ChatGPT – began their responses with warnings about gambling risks, while just two offered any information about support services for users concerned about gambling.
Regulators and government officials have raised concerns about the lack of safeguards in AI chatbots. Henrietta Bowden-Jones, the UK’s national clinical adviser on gambling harms, told The Guardian: “No chatbot should be allowed to promote unlicensed casinos or dangerously undermine free protection services like GamStop, which allow people to block themselves from gambling sites.”
Technology companies said they were reviewing safeguards around their AI systems.
A spokesperson for Google said Gemini was “designed to provide helpful information in response to user queries and highlight potential risks where applicable.”
“We are constantly refining our safeguards to ensure these complex topics are handled with the appropriate balance of helpfulness and safety,” the spokesperson added.
An OpenAI spokesperson said ChatGPT was “trained to refuse requests that facilitate illegal behaviour.”
A Microsoft spokesperson said Copilot used “multiple layers of protection, including automated safety systems, real-time prompt detection, and human review, to help prevent harmful or unlawful recommendations”.
Original article: https://www.yogonet.com/international/news/2026/03/09/117954-major-tech-ai-chatbots-found-advising-on-unlicensed-casino-access-in-the-uk