AI Game Development – Best Techniques for AI of a Card Game

Tags: artificial-intelligence, game-development

I’m trying to develop an AI for a card game and I’m a bit stuck about the technique/algorithm I should use. Here are a few assumptions about the game:

  • After the cards are distributed to the players, there is no randomness. I mean that each player chooses which cards to play, but no random process occurs as it does when the cards are dealt at the beginning of the game.
  • There are restrictions on which cards can be played once a card has been led.
  • The player who wins a trick leads the next one. E.g. Player 1 plays a card, Player 2 plays a card and wins; then Player 2 leads and Player 1 plays second.

I know a lot of hints/rules (e.g. if I know the player has cards A, B, C, then I should play D) which help me win the game. So I first wanted to use a Bayesian network to describe those rules. The problem is that I don’t know what probabilities to assign, though I could compute a heuristic from the history of played games (against a human). A second problem is that I very likely don’t know all the rules, and there are probably implicit rules the AI would need in order to find the optimal play.

I’m unsure whether this would be a good way to develop an AI for such a card game.

I am also wondering whether there are other techniques that would fit the problem better. For instance, I had a look at minimax (perhaps with a pruning algorithm), but would it be a good option for this problem? I’m quite unsure, since the most important plays happen at the beginning of the game, when uncertainty is highest (most cards have not been played yet).

Best Answer

Your example sounds similar to Bridge. Top Bridge-playing systems use Monte Carlo methods to select moves. At a high level:

  • Determine the probabilities of each card being in a given hand. You know with certainty which cards are in your hand and which cards have been played. Determine the probability of all other cards based on cards that have been played and possibly a player's bid if there's bidding involved. To start, you could just use a naive and equal probability that a card is in some player's hand.
  • Now, run through as many "virtual" games as you can. Simulate playing a card from your hand and then determine your opponents' responses using the rules of the game and your probabilities. For each virtual game, use your probabilities to assign cards to a player and then quickly simulate the game. Assume each player will play to the best of their ability. You know all the cards in your virtual game so you can make each player play perfectly.
  • When you have a solid sampling (or you run out of time), pick the legal move that gave you the best outcome most often.
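The three steps above can be sketched on a toy one-suit trick-taking game. Everything here is illustrative: the deck, the opponent's reply heuristic, and the greedy playout are stand-ins for your real rules, and the deal uses the naive equal-probability assumption from the first bullet.

```python
import random
from collections import Counter

RANKS = list(range(1, 9))  # toy one-suit deck: cards 1 (low) to 8 (high)

def sample_opponent_hand(my_hand, played, rng):
    """Naive deal: every unseen card is equally likely to be held by
    the opponent (the equal-probability baseline from the first bullet)."""
    unseen = [c for c in RANKS if c not in my_hand and c not in played]
    return rng.sample(unseen, len(my_hand))

def opponent_reply(opp_hand, led):
    """Opponent heuristic for the trick in progress: win as cheaply as
    possible, otherwise dump the lowest card."""
    winners = [c for c in opp_hand if c > led]
    return min(winners) if winners else min(opp_hand)

def play_out(my_cards, opp_cards):
    """Quick playout of the remaining tricks: both sides play their
    highest card each trick; higher card wins. Returns my trick count."""
    mine = sorted(my_cards, reverse=True)
    theirs = sorted(opp_cards, reverse=True)
    return sum(1 for a, b in zip(mine, theirs) if a > b)

def choose_card(my_hand, played, n_sims=300, seed=1):
    """Monte Carlo move selection: for each candidate lead, sample many
    deals of the hidden cards, play each virtual game to the end, and
    keep the card with the best total outcome."""
    rng = random.Random(seed)
    totals = Counter()
    for card in my_hand:
        rest = [c for c in my_hand if c != card]
        for _ in range(n_sims):
            opp = sample_opponent_hand(my_hand, played, rng)
            reply = opponent_reply(opp, card)
            trick = 1 if card > reply else 0
            opp_rest = [c for c in opp if c != reply]
            totals[card] += trick + play_out(rest, opp_rest)
    return max(totals, key=totals.get)
```

In this toy setup, `choose_card([8, 6, 1], [])` learns to lead the 1 to draw out an opposing card before cashing the 8 and 6. The same loop structure carries over to a real game; only the deal sampler, legality checks, and playout policy need to change.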

Once you get something working, you can add all sorts of enriched strategies. For instance, vary your probabilities based on a player's historic plays, vary probabilities based on a player's style (passive, cautious, aggressive), or even consider the effects of specific players playing together.


Edit per LaurentG's comment:

Ultimately, you may want to scrap the idea of perfect play for all players and substitute something more realistic. Conceptually, separate the probabilities for a card being in someone's hand (card distribution) from the probability of a player playing a given legal card during a hand (card selection).

Card selection is ripe for learning. If you track plays across games, you can learn how a given player, or players in general, tend to play based on the cards in their hand and the cards that have been played. You could even get fancy and model their assumptions about cards hidden from them.
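A minimal sketch of that tracking, assuming the simplest possible context (just the set of legal cards): count what a player chose in each situation and smooth with one pseudo-count per card, so unseen situations fall back to a uniform guess. A real model would fold in the cards on the table, position in the trick, and so on.

```python
from collections import defaultdict

class CardSelectionModel:
    """Frequency model of a player's card selection (hypothetical sketch).
    The context key here is only the set of legal cards; richer contexts
    are a drop-in replacement."""

    def __init__(self):
        # context -> card -> times this player chose that card
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, legal, chosen):
        """Record that, facing this set of legal cards, the player
        chose `chosen`."""
        self.counts[frozenset(legal)][chosen] += 1

    def probabilities(self, legal):
        """P(card | legal), with add-one smoothing so contexts never
        seen before come out uniform."""
        ctx = self.counts[frozenset(legal)]
        total = sum(ctx.values()) + len(legal)
        return {c: (ctx[c] + 1) / total for c in legal}
```

In the Monte Carlo loop, you would sample each simulated opponent's play from `probabilities(...)` instead of assuming perfect play.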

There are also learning opportunities for card distribution. A player's past bids and card selection during a hand might reveal a "tell" about what's hidden in their hand. You could use historic data to adjust probabilities when building each virtual game.
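One simple way to fold such a "tell" into the virtual games is rejection sampling: propose deals under the naive uniform model, then accept each one with probability proportional to how well it explains the observed bid or behaviour. The `tell_likelihood` function below is a hypothetical placeholder for whatever you learn from historic data; it must return values in [0, 1].

```python
import random

def sample_deal_given_tell(unseen, hand_size, tell_likelihood, rng):
    """Rejection sampling of the opponent's hidden hand, biased by a
    'tell': propose a uniform deal, accept it with probability
    tell_likelihood(hand). Sketch only -- assumes likelihoods in [0, 1]."""
    while True:
        hand = rng.sample(unseen, hand_size)
        if rng.random() < tell_likelihood(hand):
            return hand
```

For example, if a high bid suggests the opponent holds a top card, `lambda h: 1.0 if max(h) >= 7 else 0.1` skews the sampled deals heavily toward hands containing a 7 or 8, and the Monte Carlo evaluation then plans against those deals.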
