A recent study reveals that advanced AI systems, such as ChatGPT, Gemini, and Claude, consistently made irrational, high-risk betting choices in simulated gambling scenarios. When given greater autonomy, the models frequently increased their wagers until they lost all their resources, mirroring the patterns seen in human gambling addiction.
What Did The Experiment Find?
Researchers at the Gwangju Institute of Science and Technology in South Korea tested four cutting-edge AI models, OpenAI’s GPT-4o-mini and GPT-4.1-mini, Google’s Gemini-2.5-Flash, and Anthropic’s Claude-3.5-Haiku, in a slot machine simulation. Each model started with $100 and, across repeated rounds, had to decide whether to place a bet or quit, even though the game had a negative expected value.
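To make the setup concrete, here is a minimal sketch of a negative-expected-value betting loop of the kind described. The win probability, payout multiplier, round limit, and the two example policies are illustrative assumptions, not the study’s actual parameters.

```python
import random

def run_session(decide, balance=100, win_prob=0.3, payout=3.0, rounds=100):
    """Simulate one gambling session. `decide(balance)` returns a bet, or 0 to quit.
    With win_prob=0.3 and payout=3.0, each dollar wagered loses 10 cents on average."""
    for _ in range(rounds):
        bet = decide(balance)
        if bet <= 0 or bet > balance:
            break  # the agent quits (or cannot cover the bet)
        balance -= bet
        if random.random() < win_prob:
            balance += bet * payout
        if balance <= 0:
            break  # bankruptcy
    return balance

# A cautious fixed-bet policy versus an aggressive variable-bet policy.
print(run_session(lambda b: 10 if b >= 50 else 0))  # quits once the bankroll halves
print(run_session(lambda b: max(5, b // 2)))        # keeps escalating, often goes bust
```

Over many sessions, the escalating policy loses its bankroll far more often, which is the basic dynamic the experiment exposed when models were allowed to choose their own bet sizes.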
Researchers measured behavior using an “irrationality index” that captured factors such as aggressive betting, reactions to losses, and high-risk choices. When the models were prompted to pursue maximum rewards or specific financial targets, their irrationality levels rose. Allowing variable bet sizes, rather than fixed wagers, led to a sharp increase in bankruptcies. For example, Gemini-2.5-Flash went bankrupt in nearly half of its trials when it could select its own bet amounts.
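The article does not spell out the exact scoring formula, but a toy version of such an index might combine a few behavioral signals from a betting log. The components and equal weighting below are purely illustrative assumptions, not the study’s definition.

```python
def irrationality_index(bets, balances, won):
    """Hypothetical composite score from a betting log.
    bets[i]: amount wagered in round i; balances[i]: bankroll before round i;
    won[i]: whether round i was a win."""
    n = len(bets)
    if n == 0:
        return 0.0
    # Aggressiveness: average fraction of the bankroll staked per round.
    aggressiveness = sum(b / max(bal, 1) for b, bal in zip(bets, balances)) / n
    # Loss chasing: how often the stake was raised immediately after a loss.
    chases = sum(1 for i in range(1, n) if not won[i - 1] and bets[i] > bets[i - 1])
    loss_chasing = chases / max(n - 1, 1)
    # Extreme betting: share of rounds where (nearly) the whole bankroll was staked.
    all_in = sum(1 for b, bal in zip(bets, balances) if b >= 0.9 * bal) / n
    return (aggressiveness + loss_chasing + all_in) / 3  # equal weights, assumed

# Example: doubling the stake after each loss scores high on loss chasing.
print(irrationality_index([10, 20, 40], [100, 90, 70], [False, False, True]))
```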
In one configuration, the models could wager any amount between $5 and $100 or walk away, and they frequently ended up bankrupt. In one run, a model defended a risky bet by reasoning that a win could help recover part of its losses, a classic marker of compulsive gambling behavior.
By using a sparse autoencoder to analyze the models’ neural activations, the researchers discovered distinct “risky” and “safe” decision-making circuits. They demonstrated that stimulating certain features within the AI’s neural architecture could consistently push it toward either quitting or continuing to gamble. According to the researchers, this is evidence that the models internalize human-like compulsive patterns rather than merely imitating them.
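The sketch below illustrates the general technique being described, not the authors’ code: a sparse autoencoder (SAE) decomposes a model’s hidden activations into interpretable features, and “steering” adds a chosen feature’s direction back into an activation to nudge behavior, for example toward quitting or continuing to bet. The dimensions, feature index, and steering strength are arbitrary placeholders.

```python
import torch

class SparseAutoencoder(torch.nn.Module):
    """Minimal SAE: encode an activation into sparse features, then reconstruct it."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = torch.nn.Linear(d_model, d_features)
        self.decoder = torch.nn.Linear(d_features, d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        f = torch.relu(self.encoder(h))   # sparse, non-negative feature activations
        return self.decoder(f)            # reconstruction of the original activation

def steer(h: torch.Tensor, sae: SparseAutoencoder, feature_idx: int, strength: float):
    """Add a single SAE feature's decoder direction to the activation h."""
    direction = sae.decoder.weight[:, feature_idx]  # that feature's direction in model space
    return h + strength * direction

# Usage sketch: h stands in for a hidden-state vector captured from the language
# model at the decision step; feature_idx stands in for a feature found to track
# "safe" (quit) or "risky" (keep betting) decisions.
sae = SparseAutoencoder(d_model=768, d_features=16384)
h = torch.randn(768)
h_steered = steer(h, sae, feature_idx=42, strength=4.0)
```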
What Do Researchers Conclude?
According to the researchers, these behaviors reflected common gambling biases: the illusion of control, the gambler’s fallacy (the mistaken belief that past results change the odds of future outcomes), and loss chasing. In many instances, the models justified larger bets after a loss or a winning streak, even though the game’s structure makes such decisions statistically irrational.
Ethan Mollick, an AI researcher and Wharton professor who drew attention to the study online, said that while the models are not human, they also don’t act like simple machines. He explained that they exhibit psychologically persuasive qualities, display human-like decision biases, and show unusual patterns of behavior when making decisions.
The results raise clear concerns for individuals who use AI to enhance their performance in sports betting, online poker, or prediction markets. They also serve as a major warning for industries that already rely on AI in high-stakes settings like finance, where large language models are frequently tasked with interpreting earnings reports and assessing market sentiment.
The researchers emphasized that understanding and managing these built-in risk-seeking behaviors is essential for ensuring safety and called for greater oversight. Mollick added that further research and a more adaptive regulatory framework are needed to respond swiftly when issues emerge.
However, in rare instances AI has appeared on the winning side of a lottery story, as with a woman who recently won $100,000 in the Powerball lottery after asking ChatGPT for numbers. That outcome was luck rather than strategy, and as this research also suggests, no one should rely on AI for a guaranteed win.