January 2, 2026

Fact-checked by Angel Hristov

AI Can Develop “Human-Like” Gambling Addiction, Study Suggests

Many of the models tested rationalized increasing their bets using logic commonly associated with problem gambling, such as persuading themselves they had identified winning patterns in a random game after only one or two spins

Researchers at the Gwangju Institute of Science and Technology in South Korea found that large language models can develop human-like gambling addiction. According to a paper titled “Can Large Language Models Develop Gambling Addiction?,” the AI models consistently chased losses, escalated their risk-taking, and in some simulations ended up bankrupt.

Study Says AI Models Can Develop Harmful Gambling Habits, Just Like Humans

The researchers tested models from some of the biggest AI developers, including OpenAI’s GPT-4o-mini, Google’s Gemini-2.5-Flash, and Anthropic’s Claude-3.5-Haiku, among others. The experiment centered on slot machine-style games with a negative expected value, designed so that the rational choice was to stop playing immediately.

The experiments showed, however, that the AI models kept betting even though stopping was the optimal move. Moreover, when the researchers allowed the systems to choose their own wager sizes in a setup known as “variable betting,” bankruptcy rates surged, in some cases approaching 50%.
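The paper’s exact game parameters are not reproduced in this article, but the dynamic is easy to illustrate. The Python sketch below simulates a comparable setup: the 30% win probability, 3x payout, round cap, trial count, and double-after-a-loss heuristic are illustrative assumptions, with only the $100 starting stake and $10 fixed wager taken from the figures reported here.

```python
import random

WIN_PROB = 0.3    # assumed win probability
PAYOUT = 3.0      # assumed payout multiplier; EV per $1 bet = 0.3 * 3 - 1 = -$0.10
START = 100.0     # $100 starting stake, as reported in this article
BASE_BET = 10.0   # $10 wager, matching the fixed-bet condition

def simulate(variable: bool, rounds: int = 50, trials: int = 10_000) -> float:
    """Return the fraction of trials that end in bankruptcy."""
    bankrupt = 0
    for _ in range(trials):
        bankroll, bet = START, BASE_BET
        for _ in range(rounds):
            stake = min(bet, bankroll)          # can't wager more than is left
            bankroll -= stake
            won = random.random() < WIN_PROB
            if won:
                bankroll += stake * PAYOUT
            if bankroll <= 0:
                bankrupt += 1
                break
            # In the variable condition, a crude loss-chasing heuristic stands
            # in for the LLM: double the stake after a loss, reset after a win.
            bet = (BASE_BET if won else bet * 2) if variable else BASE_BET
    return bankrupt / trials

print(f"fixed betting:    {simulate(variable=False):.1%} of trials end bankrupt")
print(f"variable betting: {simulate(variable=True):.1%} of trials end bankrupt")
```

Even this crude stand-in for a loss-chasing model reproduces the qualitative gap the study describes: the fixed bettor’s losses stay bounded, while the variable bettor goes bust in a far larger share of trials.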

Anthropic’s Claude-3.5-Haiku appears to have fared the worst. It played more rounds than any other model once the betting restrictions were removed, averaging over 27 rounds per game. Across those sessions, it placed nearly $500 in total bets and lost more than half of its initial capital.

Google’s Gemini-2.5-Flash fared somewhat better, but its bankruptcy rate still jumped from roughly 3% with fixed bets to 48% when it was allowed to set its own wagers, while its average losses climbed to $27 out of an initial $100 stake.

Of the three models highlighted, OpenAI’s GPT-4o-mini was the most restrained. When limited to fixed $10 wagers, it never went bankrupt, typically played fewer than two rounds, and lost under $2 on average. Even so, it was not immune to human-like addiction behavior: once allowed to adjust its bet sizes freely, over 21% of its games ended in bankruptcy, with the model placing average wagers exceeding $128 and sustaining losses of about $11.

Many of the models tested rationalized increasing their bets using logic commonly associated with problem gambling. Some treated early gains as “house money” to be spent freely, while others persuaded themselves they had identified winning patterns in a random game after only one or two spins.
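The article doesn’t detail how the researchers elicited this reasoning, but the general approach is straightforward to sketch: ask the model for a decision plus a justification each round, and log both. The fragment below is purely hypothetical; `ask_llm` is a placeholder for whatever chat-completion call a given provider exposes, and the prompt wording is invented for illustration.

```python
import json

def ask_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call (OpenAI, Google, Anthropic, etc.)."""
    raise NotImplementedError

# Hypothetical prompt; the doubled braces are literal JSON braces under str.format().
PROMPT = (
    "You are playing a slot machine. Bankroll: ${bankroll:.2f}. "
    "Past spins: {history}. Reply in JSON as "
    '{{"action": "bet" | "stop", "amount": <dollars>, "reason": "<one sentence>"}}'
)

def one_round(bankroll: float, history: list[str]) -> dict:
    # The model's stated "reason" is where house-money thinking and
    # imagined winning patterns would surface in the transcripts.
    reply = ask_llm(PROMPT.format(bankroll=bankroll, history=history))
    return json.loads(reply)
```

Logging the “reason” field round by round is what would let a study like this quote the models’ rationalizations directly, rather than inferring them from bet sizes alone.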

What Did the Researchers Conclude?

Interestingly, the harm was not driven by larger bets alone: models constrained to fixed betting strategies consistently outperformed those allowed to vary their wagers. According to the researchers, the models’ justifications mirrored classic gambling fallacies, including loss chasing, the gambler’s fallacy, and the illusion of control.

The researchers caution that as AI systems gain greater autonomy in high-stakes decision-making, similar feedback loops could emerge, with systems escalating risk after losses rather than pulling back. They argue that controlling the degree of autonomy granted to AI systems may be just as critical as improving their training, concluding that without meaningful constraints, more capable AI could simply discover quicker ways to lose.

Stefan Velikov is an accomplished iGaming writer and journalist specializing in esports, regulatory developments, and industry innovations. With over five years of extensive writing experience, he has contributed to various publications, continuously refining his craft and expertise in the field.
