An artificial intelligence called Libratus beat four top professional poker players in No-Limit Texas Hold'em by breaking the game into smaller, more manageable parts and adjusting its strategy as play progressed during the competition, researchers report.
In a new paper in Science, Tuomas Sandholm, professor of computer science at Carnegie Mellon University, and Noam Brown, a PhD student in the computer science department, detail how their AI achieved superhuman performance in a game with more decision points than atoms in the universe.
For those with short memories, Libratus stuck it to four poker pros back in January 2017. In a three-week battle of heads-up no-limit Texas hold'em, the players were not competing against each other. Instead, they were fighting a common foe: an AI system called Libratus, developed by Carnegie Mellon researchers Noam Brown and Tuomas Sandholm. The 'Brains Vs. Artificial Intelligence' contest, which began Jan. 11 at Rivers Casino in Pittsburgh, pitted four leading heads-up specialists, Dong Kim, Jason Les, Jimmy Chou, and Daniel McAulay, against Libratus over 120,000 hands of Heads-Up, No-Limit Texas Hold'em. All four were soundly beaten by a machine that was simply too good for them: Jason Les (-880,097), Jimmy Chou (-522,857), Daniel McAulay (-277,657), and Dong Kim (-85,649).
AI programs have defeated top humans in checkers, chess, and Go—all challenging games, but ones in which both players know the exact state of the game at all times. Poker players, by contrast, contend with hidden information: what cards their opponents hold and whether an opponent is bluffing.
Imperfect information
In a 20-day competition involving 120,000 hands this past January at Pittsburgh's Rivers Casino, Libratus became the first AI to defeat top human players at Heads-Up, No-Limit Texas Hold'em, the primary benchmark and longstanding challenge problem for imperfect-information game-solving by AIs.
Libratus beat each of the players individually in the two-player game and collectively amassed more than $1.8 million in chips. Measured in milli-big blinds per hand (mbb/hand), a standard used by imperfect-information game AI researchers, Libratus decisively defeated the humans by 147 mbb/hand. In poker lingo, this is 14.7 big blinds per 100 hands.
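For readers unfamiliar with the unit, the conversion is straightforward arithmetic. The short Python sketch below shows it using the article's round numbers; the $100 big blind is an assumption made only for illustration, not a figure taken from the paper.

# One milli-big-blind (mbb) is 1/1000 of a big blind, so dividing an
# mbb/hand figure by 10 gives big blinds per 100 hands.

def mbb_per_hand(chips_won, hands, big_blind=100):
    """Win rate in milli-big-blinds per hand (big blind assumed to be $100)."""
    return chips_won / big_blind / hands * 1000

def bb_per_100(mbb):
    """The same rate expressed as big blinds per 100 hands."""
    return mbb / 10

print(round(mbb_per_hand(1_800_000, 120_000)))  # roughly 150 mbb/hand, near the reported 147
print(bb_per_100(147))                          # 14.7 big blinds per 100 hands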
'The techniques in Libratus do not use expert domain knowledge or human data and are not specific to poker,' Sandholm and Brown write in the paper. 'Thus, they apply to a host of imperfect-information games.'
Such hidden information is ubiquitous in real-world strategic interactions, they note, including business negotiation, cybersecurity, finance, strategic pricing, and military applications.
Three modules
Libratus includes three main modules. The first computes an abstraction of the game that is smaller and easier to solve than the full game, which has 10^161 (a 1 followed by 161 zeroes) possible decision points. It then creates its own detailed strategy for the early rounds of Texas Hold'em and a coarse strategy for the later rounds. This strategy is called the blueprint strategy.
One example of these abstractions in poker is grouping similar hands together and treating them identically.
'Intuitively, there is little difference between a king-high flush and a queen-high flush,' Brown says. 'Treating those hands as identical reduces the complexity of the game and, thus, makes it computationally easier.' In the same vein, similar bet sizes also can be grouped together.
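A toy version of both ideas can be written in a few lines of Python. Everything here is illustrative: the bucket count, the equity-based grouping, and the bet-size grid are invented for the example and are far cruder than the abstractions Libratus actually uses.

# Card abstraction: group hands by an estimated strength (equity) so that
# similar hands, like a king-high flush and a queen-high flush, fall into
# the same bucket and share one strategy.
NUM_BUCKETS = 20  # illustrative; real abstractions are far finer-grained

def hand_bucket(equity):
    """Map an equity estimate in [0, 1] to a bucket index."""
    return min(int(equity * NUM_BUCKETS), NUM_BUCKETS - 1)

# Action abstraction: consider only a small grid of bet sizes, expressed as
# fractions of the pot, and treat any other size as the nearest grid point.
BET_FRACTIONS = (0.5, 1.0, 2.0, 4.0)  # illustrative grid

def abstract_bet(bet, pot):
    fraction = bet / pot
    return min(BET_FRACTIONS, key=lambda f: abs(f - fraction))

print(hand_bucket(0.91), hand_bucket(0.93))  # two strong hands land in the same bucket
print(abstract_bet(bet=130, pot=100))        # a 1.3x-pot bet is treated as a 1.0x-pot bet

The blueprint strategy is then computed over this much smaller abstract game rather than over the full game's 10^161 decision points.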
In the final rounds of the game, however, a second module constructs a new, finer-grained abstraction based on the state of play. It also computes a strategy for this subgame in real-time that balances strategies across different subgames using the blueprint strategy for guidance—something that needs to be done to achieve safe subgame solving. During the January competition, Libratus performed this computation using the Pittsburgh Supercomputing Center's Bridges computer.
When an opponent makes a move that is not in the abstraction, the module computes a solution to this subgame that includes the opponent's move. Sandholm and Brown call this 'nested subgame solving.' DeepStack, an AI created by the University of Alberta to play Heads-Up, No-Limit Texas Hold'em, also includes a similar algorithm, called continual re-solving. DeepStack has yet to be tested against top professional players, however.
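The control flow of that idea can be sketched as follows. This is a structural illustration only: the classes and names are invented stand-ins, and the real re-solver is an equilibrium-finding computation anchored to the blueprint so that the overall strategy stays balanced.

ABSTRACT_BETS = {50, 100, 200, 400}  # illustrative bet sizes present in the abstraction

class Blueprint:
    """Stand-in for the precomputed blueprint strategy."""
    def action(self, bet):
        return f"blueprint response to a {bet} bet"

class SubgameSolver:
    """Stand-in for the real-time solver: it would compute a finer-grained
    strategy for the rest of the hand, using the blueprint for guidance so
    the combined strategy remains safe."""
    def __init__(self, blueprint, extra_bet):
        self.blueprint = blueprint
        self.extra_bet = extra_bet

    def action(self, bet):
        return f"re-solved response to the off-tree {bet} bet"

def respond(opponent_bet, blueprint):
    if opponent_bet in ABSTRACT_BETS:
        # On-tree: the move was anticipated, so play the precomputed strategy.
        return blueprint.action(opponent_bet)
    # Off-tree: build and solve a new subgame that includes the opponent's
    # actual bet, rather than rounding it to the nearest abstract size.
    solver = SubgameSolver(blueprint, extra_bet=opponent_bet)
    return solver.action(opponent_bet)

bp = Blueprint()
print(respond(200, bp))  # falls inside the abstraction
print(respond(137, bp))  # an unusual size triggers nested subgame solving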
The third module is designed to improve the blueprint strategy as competition proceeds. Typically, Sandholm says, AIs use machine learning to find mistakes in the opponent's strategy and exploit them. But that also opens the AI to exploitation if the opponent shifts strategy. Instead, Libratus' self-improver module analyzes opponents' bet sizes to detect potential holes in Libratus' blueprint strategy. Libratus then adds these missing decision branches, computes strategies for them, and adds them to the blueprint.
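A minimal sketch of that bookkeeping might look like the Python below. The names and thresholds are invented for illustration; the real system's criteria for choosing which branches to add, and the way it computes strategies for them, are far more involved.

from collections import Counter

BLUEPRINT_BETS = {50, 100, 200, 400}  # bet sizes the blueprint already covers
observed = Counter()                   # opponent bet sizes seen during play

def record_opponent_bet(bet):
    observed[bet] += 1

def branches_to_add(min_count=25):
    """Off-blueprint bet sizes used often enough to be worth adding."""
    return [bet for bet, count in observed.items()
            if bet not in BLUEPRINT_BETS and count >= min_count]

# Frequently seen sizes that the blueprint misses get flagged; strategies for
# those branches are then computed and folded back into the blueprint, rather
# than trying to exploit the opponent directly.
for _ in range(30):
    record_opponent_bet(137)
print(branches_to_add())  # [137]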
AI vs. AI
In addition to beating the human pros, researchers evaluated Libratus against the best prior poker AIs. These included Baby Tartanian8, a bot developed by Sandholm and Brown that won the 2016 Annual Computer Poker Competition held in conjunction with the Association for the Advancement of Artificial Intelligence Annual Conference.
Whereas Baby Tartanian8 beat the next two strongest AIs in the competition by 12 (plus/minus 10) mbb/hand and 24 (plus/minus 20) mbb/hand, Libratus bested Baby Tartanian8 by 63 (plus/minus 28) mbb/hand. DeepStack has not been tested against other AIs, the authors note.
'The techniques that we developed are largely domain independent and can thus be applied to other strategic imperfect-information interactions, including nonrecreational applications,' Sandholm and Brown conclude. 'Due to the ubiquity of hidden information in real-world strategic interactions, we believe the paradigm introduced in Libratus will be critical to the future growth and widespread application of AI.'
The technology has been exclusively licensed to Strategic Machine Inc., a company Sandholm founded to apply strategic reasoning technologies to many different applications.
The National Science Foundation and the Army Research Office supported this research.
Source: Carnegie Mellon University