In today’s world, gaming companies and regulators often find themselves sparring over a number of issues, including anti-money laundering protocols, cybersecurity, payments, responsible gambling and more. But a new report from the University of Nevada, Las Vegas shows that the use and regulation of AI could be the industry’s next internal conundrum.

The report, titled The State of AI in Gaming 2026, was produced by UNLV’s International Gaming Institute in collaboration with KPMG and released this week. Its data sets were compiled primarily through surveys of companies and regulators across sectors and international jurisdictions.

“How are organisations deploying AI in practice? Where are the biggest challenges? How can we ensure responsible implementation? And what new AI advancements will drive future innovation? We created The State of AI in Gaming to address these questions, and it is my pleasure to welcome you to the first in what will be an annual series,” wrote Kasra Ghaharian, director of research for the IGI and editor-in-chief of the report.

Perhaps the top-line finding was that gaming lags the broader economy in AI adoption and utilisation. The industry’s overall average score on the researchers’ “AI Maturity Index” was 45 out of a possible 100. The index consisted of four pillars, and none of the industry’s scores particularly stood out. Only one pillar – strategy – exceeded 55. Two others – infrastructure and expertise – received grades of 46 and 47 respectively, with governance trailing at 30.

These results in themselves were not terribly surprising, especially for a heavily regulated industry that, in some sectors, still relies on cash and analogue table games. What was surprising, however, were the discrepancies over how AI is currently being used and where it is heading.

Companies moving too quickly with AI?

In the industry section, researchers included responses from 83 total companies around the world. The pool was split between 44 suppliers and 39 operators – the supplier pool was divided evenly between land-based and online, while most operators (71%) were land-based. Comparatively, there were 113 total responses from regulators, 94 of which came from North America and Europe.

Among companies, the top two current uses for AI were technology/security and product development and innovation, which together accounted for about 50% of cited usage. The least-cited usage was risk and compliance (14%).

The researchers also evaluated a small sampling of session topics from gambling conferences over the three years through 2025, finding that the majority of sessions focused on compliance, along with marketing and customer relationship management. Operators on the Las Vegas Strip are in the midst of revamping compliance protocols in the wake of a series of money laundering scandals that impacted at least four properties.

“Unlike retail or media, where customer-facing AI is often the leading use case, gambling operators appear more closely aligned with regulated financial services, prioritising technology, security, and product integrity ahead of direct player engagement,” researchers wrote. “This may reflect both regulatory scrutiny and the higher downside risk of AI-driven personalisation in behavioural environments.”

Gaps between strategies and governance

Returning to the maturity index scores, researchers were clear that the “most notable finding” was the extensive gap (27 points) between companies’ AI strategies and their AI governance. The industry is “setting direction faster than it is building the safeguards to support it”, authors wrote.

The biggest risks associated with AI adoption, companies said, were gaps in cybersecurity followed by failures related to data privacy. Conversely, AI amplifying problem gambling risks and unfair outcomes affecting players were the bottom-ranked answers.

According to one survey item, only 22.9% of organisations have dedicated roles for AI governance, responsible AI, ethics or compliance.

“This highlights a clear disconnect: data privacy and governance rank among the industry’s top AI concerns, yet the governance structures and dedicated roles needed to address them remain underdeveloped in many organisations,” researchers wrote.

Regulators’ AI literacy in question

On the regulatory side, many of the responses appeared to be somewhat at odds with the ones provided by the industry, which could set the stage for more direct AI-related collaboration in the near future. Or at least, that’s what stakeholders might hope for, given the findings.

Of the 113 regulatory respondents, only 68 answered the report’s set of 14 AI literacy questions. The mean score was 8.6 out of 14, but the correlation between AI training and AI literacy among respondents “was very weak”. Researchers concluded that they “did not find a significant relationship between AI use and AI literacy” among regulators.

This assertion seemed to be borne out by many of the regulators’ data sets, including their perceived awareness of AI.

Regulators said they were most confident in identifying ethical risks in AI deployment and in identifying areas where their regulations may need to be updated because of AI. Yet they also said they were least confident in assessing licensees’ use of AI in their operations and in understanding how AI systems are used in gambling, which would appear to be a direct contradiction.

Citing respondents’ “concern about limited internal expertise” and the challenge of “keeping up with the speed and volume of new AI innovations,” researchers said regulators “appear to be aware of the governance challenge ahead, but may feel underprepared to meet it”. Yet when asked whether their respective agencies were planning AI guidelines or review processes, just 52% said yes, while 47% said no.

He said, she said

Another big discrepancy highlighted by the report was how regulators think licensees are using AI versus how they actually are. As mentioned, tech/security and product innovation were the industry’s biggest AI priorities; that pertains mostly to software development and cybersecurity for the former, and game development and sports betting trading for the latter.

Regulators, however, thought that companies were most concerned with customer-facing functions, by a margin of more than 20% over the second-place answer. Risk and compliance was the third-highest usage listed by regulators, while companies listed it fifth (last). Tech/security and product innovation, the usages cited most by companies, were the ones cited least by regulators.

“This clear disparity between regulators’ perceptions of industry use, and where industry stakeholders are actually deploying AI, brings to question the basis for regulators’ strong agreement on the need for gambling specific regulation and their skepticism toward industry self-regulation,” researchers wrote.

In a poll judging regulators’ level of agreement with various sentiments, the most agreed-upon statement was that collaboration among states and nations would best improve AI oversight. The most evenly split question was whether regulators were “aware of broader AI and digital governance frameworks”. The statements that drew the most disagreement pertained to regulators’ knowledge of licensees’ internal AI policies, and whether current regulations are equipped to address current risks and opportunities.

Grand plans but limited ROI, for now

For the industry, the immediate hope for AI appears to be operational efficiency and cost reductions. A number of gaming companies – particularly suppliers – have announced layoffs in recent months, although these were not directly attributable to AI. In fact, companies haven’t reported many benefits from their AI efforts thus far.

“When asked the extent to which AI has contributed to cost savings, the average response was 2.43 out of 5,” the report said. “The majority of respondents reported minimal or no cost savings attributable to AI: 10.8% reported none at all and 48.2% reported savings only to a small extent.”

Looking ahead, companies are clearly split on expectations for their investments. Just over a quarter (26%) of respondents said they expect to start seeing return on investment from AI in one to two years, compared to 20% who expect a return in six to 12 months. A further 20% said they have already seen meaningful results, and another 19% said it was too early to determine. Only 3.6% do not expect meaningful ROI at all.

Despite gaps in governance and expertise, more than two-fifths (42%) of companies said they had no AI-related hiring plans. The potential impact on headcount, however, seems limited for the time being: 53% said that AI-related workforce transformations would result in reorganisations but “little net change to total headcount”.

Original article: https://igamingbusiness.com/tech-innovation/ai-gaming-report-unlv/