
Writing an AI for a turn-based board game

I'm currently programming a board game (8x8) for which I need to develop an AI.

I have read a lot of articles about AI in board games, minmax with or without alpha-beta pruning, but I don't really know how to implement this, I don't know where to start...

About my game: it is turn-based, each player has pieces on the board, and on each turn a player picks one of them and chooses between moving it (1 or 2 cells max) or cloning it (1 cell max).

At the moment, I have a really stupid AI which chooses a random piece and then a random move to play...

Could you please give me some clues on how to implement this functionality?

Best regards


(Edit: 8/8/2013) - I've written some sample code: Ultimate Tic-Tac-Toe, where I have a generic implementation of min/max game search. The code is written in a silly variant of C++, called prepp.


Your best resource would be a college-level course on Artificial Intelligence. The classic textbook is "Artificial Intelligence: A Modern Approach" by Russell & Norvig.

I'll try to give you a breakdown of the key concepts.

Game state - this is a set of variables which can be used to determine:

  • The estimated "Value" of the current state
  • Is the game over?
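To make that concrete, a state for your 8x8 board could be as simple as the following Python sketch (the class name, the 'R'/'B' piece labels and the starting squares are placeholders I made up, not something from your game):

    from dataclasses import dataclass
    from typing import List, Optional

    # A cell holds 'R' (a red piece), 'B' (a blue piece) or None (empty).
    Cell = Optional[str]

    @dataclass
    class GameState:
        board: List[List[Cell]]   # the 8x8 matrix of cells
        player: str               # whose turn it is next: 'R' or 'B'

    def initial_state() -> GameState:
        board: List[List[Cell]] = [[None] * 8 for _ in range(8)]
        board[0][0] = 'R'         # example starting pieces, invented here
        board[7][7] = 'B'
        return GameState(board=board, player='R')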

Value - the "value" of a particular state is a function of that state evaluated by the current player, let's call it f(x). Another name for this function is the "heuristic" function. It determines the best possible value that the current player could achieve, given board state, x. How you model x is up to you, but x should encompass everything used in the natural flow of the game. So, in your case, x may be an 8x8 matrix of values mapped to pieces on the board. And, f would be a function that gives the "Value" of that board configuration for the Red player (or some such.)

A possible simplification to consider in a 2-player turn-based game is to encode the "current player" as part of the state. This adds another input to the heuristic function: instead of f(x), we have f(x, player). However, that can be simplified by rolling player into x. Either way, f will always return the "Value" for the same player, but according to whose turn is next.

To clarify by (simplified) example, f in chess should nearly always return a very high score if white can kill black's queen. But if black will have the possibility of killing white's queen in the next move, then f should return a very low number.

Note that so far there is no mention of the tree-searching part of minmax. f is always defined based on the current state. So, when I said "black will have the possibility...", what I mean is that your heuristic function can determine, for example, that given x, there is a line of sight (diagonal, horizontal, or vertical) between a black bishop or rook and white's queen. The heuristic for chess should be sophisticated enough to know that this alone creates a threat, and therefore it should be scored accordingly when computing the total value of f.
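For the clone/move game you describe, one very simple guess at f (only an illustration, certainly not the best heuristic for your game) is the piece-count difference seen from Red's side, reusing the GameState sketch above:

    # Minimal heuristic sketch: piece difference from Red's point of view.
    # Win/loss detection (the +/- infinity cases discussed below) is not
    # handled here and would be layered on top.
    def evaluate(state: GameState) -> float:
        red = sum(cell == 'R' for row in state.board for cell in row)
        blue = sum(cell == 'B' for row in state.board for cell in row)
        return float(red - blue)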

Before I get to minmax, note that optimizations such as alpha-beta pruning exist for searching the space of possible x values more efficiently. But you should not worry about optimization until you fully understand and have implemented minmax.

So, the minmax algorithm is a way to organize your AI's decision-making process. The things you'll need to be able to do in order to implement minmax:

  • Generate valid game states from an existing game-state x. (A sketch of this follows this list.)
  • Stop looping or recursing after a particular depth has been reached.
  • Remember the "Value" for each level of recursion by passing it back to the caller.
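For the first point, a successor generator for the rules you describe (move a piece 1 or 2 cells, or clone it 1 cell) could look roughly like the sketch below. The 8-directional movement and the "land on an empty cell" condition are assumptions on my part; adapt them to your actual rules. It reuses the GameState sketch above:

    from copy import deepcopy
    from typing import Iterator

    def neighbours(r: int, c: int, dist: int) -> Iterator[tuple]:
        # All cells within `dist` king-steps of (r, c), assuming 8-directional movement.
        for dr in range(-dist, dist + 1):
            for dc in range(-dist, dist + 1):
                if (dr, dc) != (0, 0) and 0 <= r + dr < 8 and 0 <= c + dc < 8:
                    yield r + dr, c + dc

    def generate_states(state: GameState) -> Iterator[GameState]:
        other = 'B' if state.player == 'R' else 'R'
        for r in range(8):
            for c in range(8):
                if state.board[r][c] != state.player:
                    continue
                # Clone: target within 1 cell.  Move: target within 2 cells.
                for dist, is_clone in ((1, True), (2, False)):
                    for nr, nc in neighbours(r, c, dist):
                        if state.board[nr][nc] is not None:
                            continue
                        board = deepcopy(state.board)
                        board[nr][nc] = state.player
                        if not is_clone:
                            board[r][c] = None   # the piece moved rather than cloned
                        yield GameState(board=board, player=other)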

The basic flow starts with understanding whose turn it is. As I mentioned earlier, f will always return high values to indicate success for player 1, and low values to indicate success for player 2. At each deeper level of recursion, knowing the current player tells you whether to choose the minimum or the maximum of the potential values discovered through your recursion.

Let me say that another way. There are two reasons to actually evaluate f(x).

  1. You want to know if the game is over at state x.
  2. You have reached the deepest level of recursion and you'd like to evaluate what the score will be if the game progressed this far.

So, your code would do something like this:

function minmax(x, player) returns value

  • My current state is x, and the next player to move is known. I know this because
    • function choose-move told me so. (see below)
    • I was called recursively by myself. (see below)
  • Is the game over? Ask f(x). If it returns positive infinity, then white has won. If it returns negative infinity, then black has won. If your game has a stalemate option, or has an end score (perhaps as part of a larger game), then you will simply have to make up a way to represent winning + score as a return value of f. Or, if you want to separate it out, just make a new function g(x) which does this. If the game is over, return that value to your caller. If the game is not over, continue.
  • From x I will enumerate all possible x'. In an 8x8 game, there may be a few, to tens or hundreds of possible valid moves from any single game state. This depends on your game's rules.
  • If the deepest level of desired recursion has been reached,
    • For each x', call f(x', player). For each call, we associate its return value with the particular x' we had passed in.
  • Else
    • For each x', recursively call minmax(x', other-player). (Note that other-player is the opposite of player at this point.) For each call, we associate its return value with the particular x' we had passed in.

At this point, all possible next moves should have a "value" associated with them.

  • If player is player 1, choose the maximum value from all values associated with all x', and return that value. That is the best that player 1 will be able to do from this game state.
  • If player is player 2, choose the minimum value from all values associated with all x', and return that value. That is the best that player 2 will be able to do from this game state.

end function minmax

function choose-move(x, player) return *next_state*

  • My current state is x, and we're trying to figure out what player's next move should be.
  • Check whether the game is over, as described above.
  • From x I will enumerate all possible x'. (You should be able to re-use whatever code you had to do this in minmax.)
    • For each x', call minmax(x', player). For each call, we associate its return value with the particular x' we had passed in.
  • If player is player 1, choose the x' with the maximum associated value, and return that x'. That is the best next state player 1 can reach from this game state.
  • If player is player 2, choose the x' with the minimum associated value, and return that x'. That is the best next state player 2 can reach from this game state.

end function choose-move

Your driver code will just need to call choose-move, and it should return the next game board. Obviously, you can also encode the return value as a "move" instead of a state, for a different representation.
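Putting that together, here is one way the two functions could look in Python. This is only a sketch of the recursion described above: it reuses the evaluate and generate_states sketches from earlier, uses a fixed depth limit, and the game_over check is a placeholder you would replace with your game's real end condition (including the +/- infinity win scores discussed above):

    MAX_DEPTH = 4   # how many moves ahead to look; tune this for your game

    def game_over(state: GameState) -> bool:
        # Placeholder only: substitute your game's real end condition.
        return not any(cell is None for row in state.board for cell in row)

    def minmax(state: GameState, depth: int) -> float:
        # Returns the "Value" of `state` for player 'R' ('R' maximizes,
        # 'B' minimizes), exactly as in the description above.
        if game_over(state) or depth == 0:
            return evaluate(state)
        values = [minmax(s, depth - 1) for s in generate_states(state)]
        if not values:                    # no legal moves: treat as terminal
            return evaluate(state)
        return max(values) if state.player == 'R' else min(values)

    def choose_move(state: GameState):
        # Returns the successor state that is best for the player to move.
        best_state, best_value = None, None
        for s in generate_states(state):
            value = minmax(s, MAX_DEPTH - 1)
            better = (best_value is None or
                      (value > best_value if state.player == 'R'
                       else value < best_value))
            if better:
                best_state, best_value = s, value
        return best_state

    # Driver: ask the AI for the next board position.
    next_state = choose_move(initial_state())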

Hope this helps. Sorry for the long-winded explanation, I just had some coffee.


Classically, you would need to decide how many moves in advance (depth) you would want your AI to calculate. Depending on your game, the maximum depth could vary considerably. Think Tic-Tac-Toe versus Checkers versus Chess.

You also would want to quantify board position (value) in a way that would allow you to compare various board states.

At the most extreme, you need to take the current board and calculate its value. Then consider every possible move you could make. Then consider, for each of those, every possible move your opponent could make. Iterate to maximum depth. You will have constructed a tree structure (breadth first).

Prune your tree. Any branch that the opponent can force to an outcome worse than one you can already guarantee elsewhere can safely be pruned.

Then, somehow, compare the remaining branches. I do not know the best way to do this - but it seems fairly straightforward: either optimistically weight the best possible outcomes and choose those branches, or continue pruning based on the potential worst outcomes.
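One standard way to make that pruning precise is the alpha-beta pruning mentioned in the question: stop exploring a branch as soon as you know the opponent would never let the game reach it. A generic, depth-limited sketch (you supply the is_terminal, evaluate and successors functions for your own game):

    import math

    def alphabeta(state, depth, alpha, beta, maximizing,
                  is_terminal, evaluate, successors):
        # Depth-limited minimax with alpha-beta pruning.
        if depth == 0 or is_terminal(state):
            return evaluate(state)
        if maximizing:
            value = -math.inf
            for child in successors(state):
                value = max(value, alphabeta(child, depth - 1, alpha, beta, False,
                                             is_terminal, evaluate, successors))
                alpha = max(alpha, value)
                if alpha >= beta:     # opponent will never allow this branch
                    break             # prune the remaining children
            return value
        else:
            value = math.inf
            for child in successors(state):
                value = min(value, alphabeta(child, depth - 1, alpha, beta, True,
                                             is_terminal, evaluate, successors))
                beta = min(beta, value)
                if alpha >= beta:
                    break
            return value

A typical root call would look like alphabeta(start, 4, -math.inf, math.inf, True, is_terminal, evaluate, successors).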

Hope this helps.


To start, you only need to think about 2 functions:

  • generate a list of valid moves based on a given game state
  • evaluate how "good" the current game state is for a given player

If you want to keep it simple, you can break down the first item into 2 separate functions (sketched below):

  • generate a list of all moves
  • check if a given move is valid based on a given game state
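In code, that breakdown could be as small as this skeleton (the function names here are just placeholders for whatever you call them in your game):

    def all_moves(state):
        # Every move a piece could make in principle, ignoring legality.
        return []        # fill in with your game's candidate moves

    def is_valid(move, state):
        # Check one candidate move against the rules for this state.
        return True      # fill in with your game's rules

    def valid_moves(state):
        # The list that the search algorithm actually consumes.
        return [m for m in all_moves(state) if is_valid(m, state)]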

If you have implemented this functionality you can use one of the existing AI algorithms (e.g. minmax or alpha-beta) for building and pruning the search tree and returning the best available move.

Keep in mind that the faster you can build the list of valid moves and the faster you can evaluate a game state, the deeper you can search the tree.

I once wrote an article about this subject (board game AI), which you might find interesting: http://proghammer.wordpress.com/2010/11/30/chess08-implementing-an-ai-artificial-intelligence-player/
