Saturday 7 August 2010

Sniffing out Pacman

How would you write the AI for the ghosts in Pac-Man? Your first thought is probably to centralize the logic in a ghost object or function and go from there. This approach has quite a few traps, and a lot of complexity: for example, how do you deal with dead ends? Eventually the algorithm is going to get pretty complicated (something like A* search).

The excellent paper Programming Anti-Objects shows a very simple way to solve this problem with a bit of lateral thinking. Instead of centralizing the logic for ghost behaviour, why not distribute it? Reconceptualizing the problem in this way brings the background objects to the fore - the behaviour does not live in the ghost; it lives in the background tiles. The simple approach described in the paper is based on diffusion - each target has a scent that is distributed via the background tiles. Complex behaviour can emerge because some tiles distribute the scent (paths) whereas some block scent (obstacles).

I've represented the tiles as a map from Point to a list of Agents. The list is really a stack where the top-most agent is the only one that really matters. I've assumed that the tiles are arranged in a square, hence the one size variable and I keep the current location of the pursuers and goal handy.
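A minimal sketch of that representation (the names `Agent`, `Environment` and `emptyEnv` are my own guesses at the shape of the code, not necessarily the original definitions):

```haskell
import qualified Data.Map as M

type Point = (Int, Int)

-- The top-most agent on a tile's stack is the one that matters.
data Agent = Obstacle          -- blocks scent
           | Path Double       -- carries a scent value
           | Pursuer
           | Goal
           deriving (Eq, Show)

data Environment = Environment
  { board    :: M.Map Point [Agent]  -- stack of agents per tile
  , size     :: Int                  -- tiles form a size x size square
  , pursuers :: [Point]              -- current pursuer locations
  , goal     :: Point                -- current goal location
  }

-- An empty n x n environment, every tile an unscented path.
emptyEnv :: Int -> Environment
emptyEnv n = Environment
  { board    = M.fromList [ ((x, y), [Path 0]) | x <- [0 .. n-1], y <- [0 .. n-1] ]
  , size     = n
  , pursuers = []
  , goal     = (0, 0)
  }
```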



The algorithm consists of two stages: diffusing the scent over the grid, followed by updating the pursuers using a simple hill-climbing approach (i.e. move to the surrounding tile with the highest scent).

I've looked at the diffusion algorithm before in Barely Functional Fluid Dynamics, but I implemented it in an imperative style that makes it complicated to understand. This time around, I looked at a purely functional solution. Calculating the diffusion for any cell is a matter of doing some simple calculations on the four cells around it (also known as the von Neumann neighbourhood). The equation below is taken from the paper:

Diffusion Equation from Collaborative Diffusion - Programming AntiObjects
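The update adds, for each neighbour, a diffusion-coefficient-weighted difference between the neighbour's scent and the centre cell's scent. As a plain Haskell function for a single cell (the function and parameter names are mine), that might look like:

```haskell
-- u0' = u0 + d * sum of (ui - u0) over the (up to four) von Neumann neighbours
diffuse :: Double -> Double -> [Double] -> Double
diffuse d u0 neighbours = u0 + d * sum [ u - u0 | u <- neighbours ]
```

With d = 0.25 and four equal neighbours, the cell simply takes their common value, which is what you'd expect of a diffusion step at equilibrium.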

One way to solve this in Haskell would be to use some mutable state and write imperative-style code that mutates it in place. This would probably give the best performance, but at the cost of making the code destructive and difficult to follow. Alternatively, we can express the whole thing functionally. When I looked at the Floyd-Warshall algorithm, it made use of dynamic programming and, in particular, of defining a data structure recursively. We can use a similar trick to define updates to the grid in terms of previously calculated values.

Grid Diffusion Image

In the image above, if we are calculating the diffusion value for the red tile, then we can reference the "new" diffused values for the green tiles, but we must still use the original values for the blue tiles.
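One way to sketch that recursive definition, assuming the scent lives in a simple `Map Point Double` (the names, and the left/up = new, right/down = old split, are my reading of the figure rather than the original code):

```haskell
import qualified Data.Map as M

type Point = (Int, Int)

-- One diffusion step. Cells above and to the left of the current cell
-- (the green tiles) are read from the map being built; cells below and
-- to the right (the blue tiles) come from the previous step.
diffuseGrid :: Double -> Int -> M.Map Point Double -> M.Map Point Double
diffuseGrid d n old = new
  where
    new = M.fromList [ ((x, y), cell (x, y)) | x <- [0 .. n-1], y <- [0 .. n-1] ]
    cell p@(x, y) =
        u0 + d * sum [ m M.! q - u0 | (m, q) <- sources p, inBounds q ]
      where u0 = old M.! p
    sources (x, y) = [ (new, (x-1, y)), (new, (x, y-1))   -- already recomputed
                     , (old, (x+1, y)), (old, (x, y+1)) ] -- not yet recomputed
    inBounds (x, y) = x >= 0 && y >= 0 && x < n && y < n
```

Laziness is what makes the self-reference safe: each cell only demands values that come strictly earlier in the traversal order, so forcing the map never cycles.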



Ignoring the pursuer-update function for a moment, we update the board by constructing a map defined in terms of both the previous environment and itself.

Updating the pursuers is simple: just find the neighbouring path cell with the highest scent and move to it. Putting this all together, we can run some little tests. The example below shows a pursuer (in green) solving a little maze. It takes a while to get going whilst the scent diffuses, but once it has, the pursuer homes in on a solution very quickly.
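The hill-climbing step can be sketched as follows (`stepPursuer` is my name for it; it assumes the diffused scent is a `Map Point Double` in which obstacle tiles are simply absent):

```haskell
import qualified Data.Map as M
import Data.List (maximumBy)
import Data.Ord (comparing)

type Point = (Int, Int)

-- Move a pursuer to whichever surrounding tile has the highest scent;
-- the current tile is included as a candidate so the pursuer can stay put.
stepPursuer :: M.Map Point Double -> Point -> Point
stepPursuer scent p = maximumBy (comparing (scent M.!)) candidates
  where
    candidates        = p : filter (`M.member` scent) (neighbours p)
    neighbours (x, y) = [(x-1, y), (x+1, y), (x, y-1), (x, y+1)]
```

Because obstacles never appear in the scent map, filtering on membership handles walls and grid edges in one go.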



As explained in the paper, collaborative behaviour emerges because pursuers block the scent from the goal. This means if you have multiple paths to the goal, the pursuers will each choose a different route purely because once one pursuer is on a route it blocks out the scent for those that follow. In the example scenario below, the agents will never take the same route and will always ensure there's no escape for the target.
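That blocking behaviour falls out of a one-line tweak: before each diffusion step, zero the scent on any tile occupied by a pursuer. A sketch, assuming the same `Map Point Double` scent representation (`blockPursuers` is my name, not the original code):

```haskell
import qualified Data.Map as M

type Point = (Int, Int)

-- A pursuer standing on a tile hides the goal's scent from anyone behind it,
-- so a diffusion step sees an occupied tile exactly like a scentless wall.
blockPursuers :: [Point] -> M.Map Point Double -> M.Map Point Double
blockPursuers ps scent = foldr (\p m -> M.insert p 0 m) scent ps
```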



The code for this is available on my GitHub page, together with a simple GUI that allows you to build an environment and run the simulation.