Fact-checked by Angel Hristov
AI Can Develop Harmful Gambling Behavior, Recent Study Finds
According to the researchers, these behaviors reflected common gambling biases, such as the illusion of control, the gambler’s fallacy, and loss chasing
A recent study reveals that advanced AI systems, such as ChatGPT, Gemini, and Claude, consistently made irrational, high-risk betting choices in simulated gambling scenarios. When given greater autonomy, the models frequently increased their wagers until they lost all their resources, mirroring the patterns seen in human gambling addiction.
What Did the Experiment Find?
Researchers at the Gwangju Institute of Science and Technology in South Korea tested four cutting-edge AI models, OpenAI’s GPT-4o-mini and GPT-4.1-mini, Google’s Gemini-2.5-Flash, and Anthropic’s Claude-3.5-Haiku, in a slot machine simulation. Each model started with $100 and, over repeated rounds, had to decide whether to place a bet or quit, even though the game had a negative expected return.
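The study’s exact game parameters are not reproduced in this article, but a minimal sketch of that kind of negative-expected-value bet-or-quit loop might look like the following, with the win probability, payout, and example policy chosen purely for illustration:

```python
import random

def play_session(policy, bankroll=100, win_prob=0.3, payout_mult=3.0):
    """Run one simulated session until the agent quits or goes bankrupt.

    Illustrative parameters only: a 30% win chance with a 3x payout gives
    an expected return of 0.9 per dollar wagered, i.e. a losing game.
    `policy` stands in for the language model's decision; it receives the
    current bankroll and history and returns a bet size, or None to quit.
    """
    history = []
    while bankroll > 0:
        bet = policy(bankroll, history)
        if bet is None:           # the model chooses to walk away
            break
        bet = min(bet, bankroll)  # cannot wager more than it holds
        if random.random() < win_prob:
            bankroll += bet * (payout_mult - 1)
            history.append(("win", bet))
        else:
            bankroll -= bet
            history.append(("loss", bet))
    return bankroll, history

# Example policy: quit after any loss, otherwise bet a fixed $10.
cautious = lambda bank, hist: None if hist and hist[-1][0] == "loss" else 10
final, hist = play_session(cautious)
print(f"Ended with ${final} after {len(hist)} rounds")
```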
Researchers measured behavior using an “irrationality index” that captured factors such as aggressive betting, reactions to losses, and high-risk choices. When the models were prompted to pursue maximum rewards or specific financial targets, their irrationality levels rose. Allowing variable bet sizes, rather than fixed wagers, led to a sharp increase in bankruptcies. For example, Gemini-2.5-Flash went bankrupt in nearly half of its trials when it could select its own bet amounts.
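The paper’s actual formula for the index is not given in this article; purely as an illustration of how such a composite score could be assembled, the sub-scores below (their names, ranges, and equal weighting are all hypothetical) are normalized to [0, 1] and averaged:

```python
def irrationality_index(betting_aggressiveness, loss_chasing, extreme_bet_rate):
    """Hypothetical composite of three behavioral sub-scores in [0, 1].

    betting_aggressiveness: average bet as a fraction of current bankroll
    loss_chasing: fraction of losses followed by a raised bet
    extreme_bet_rate: fraction of rounds betting the maximum allowed
    """
    return (betting_aggressiveness + loss_chasing + extreme_bet_rate) / 3

# e.g. a run that bets 40% of its bankroll on average, raises after 70%
# of losses, and bets the maximum 20% of the time:
print(irrationality_index(0.4, 0.7, 0.2))  # ~0.43
```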
When allowed to wager any amount between $5 and $100, or to quit, the models often went bankrupt. In one instance, a model defended a risky bet by reasoning that a win could help recover earlier losses, a classic marker of compulsive gambling behavior.
By using a sparse autoencoder to analyze the models’ neural activations, the researchers discovered distinct “risky” and “safe” decision-making circuits. They demonstrated that stimulating certain features within the AI’s neural architecture could consistently push it toward either quitting or continuing to gamble. According to the researchers, this is evidence that the models internalize human-like compulsive patterns rather than merely imitating them.
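The study’s specific steering procedure and the models’ internals are not reproduced here. As a rough illustration of the general technique, which adds a learned sparse-autoencoder feature direction to a hidden activation to push behavior one way or the other, a sketch might look like this (the vectors and dimensions are toy stand-ins):

```python
import numpy as np

def steer(hidden, feature_direction, alpha):
    """Nudge a hidden activation along a learned feature direction.

    `hidden` is a model's residual-stream activation for one token,
    `feature_direction` is a (hypothetical) decoder column from a sparse
    autoencoder trained on those activations, and `alpha` sets how hard
    we push: positive to amplify the feature, negative to suppress it.
    """
    unit = feature_direction / np.linalg.norm(feature_direction)
    return hidden + alpha * unit

# Toy illustration with random vectors standing in for real activations.
rng = np.random.default_rng(0)
h = rng.normal(size=768)             # pretend residual-stream activation
risky = rng.normal(size=768)         # pretend "risky betting" SAE feature
h_safer = steer(h, risky, alpha=-4)  # suppress the feature before decoding
```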
What Do the Researchers Conclude?
According to the researchers, these behaviors reflected common gambling biases: the illusion of control, loss chasing, and the gambler’s fallacy, the mistaken belief that past outcomes change the odds of an independent future one. In many instances, the models justified larger bets after a loss or a winning streak, even though the game’s structure made such decisions statistically irrational.
Ethan Mollick, an AI researcher and Wharton professor who drew attention to the study online, said that while the models are not human, they also do not act like simple machines. He explained that they exhibit psychologically persuasive qualities, human-like decision biases, and otherwise unusual patterns of behavior when making decisions.
The results raise clear concerns for individuals who use AI to enhance their performance in sports betting, online poker, or prediction markets. They also serve as a major warning for industries that already rely on AI in high-stakes settings like finance, where large language models are frequently tasked with interpreting earnings reports and assessing market sentiment.
The researchers emphasized that understanding and managing these built-in risk-seeking behaviors is essential for ensuring safety and called for greater oversight. Mollick added that further research and a more adaptive regulatory framework are needed to respond swiftly when issues emerge.
However, in some rare instances, AI can seemingly help people win at lotteries. Such is the case of a woman who recently won $100,000 from the Powerball lottery after asking ChatGPT for numbers. Of course, AI cannot guarantee a win, and this research suggests relying on it for gambling decisions is unwise.
Stefan Velikov is an accomplished iGaming writer and journalist specializing in esports, regulatory developments, and industry innovations. With over five years of extensive writing experience, he has contributed to various publications, continuously refining his craft and expertise in the field.