Intrinsic Algorithm’s Dave Mark, a fixture at the Game Developer Conference’s AI Summit, is also the author of Behavioral Mathematics for Game AI.
Most game AI literature covers the basics in a general way, such as finite state machines, flocking and steering to control movement, pathfinding algorithms such as Dijkstra's or A*, goal-oriented action planning (or GOAP), and more.
Mark’s book, however, covers a specific topic in great depth: game AI decision-making.
You might have a character that can do interesting things such as hunt, flee, eat, track, and alert nearby allies, but if you don't create a good system that allows that character to decide between those behaviors, it may not convince your players that it is intelligent at all.
The goal is to create behavioral algorithms to get computer-controlled agents responding to their environment in believable and sensible ways. To get there involves a journey through the subjects of psychology, decision theory, utilitarian philosophy, and probability and statistics, among others.
Mark is great at walking you through each step of this journey, combining theory with detailed explanations and examples, including code. Sometimes the explanations were more detailed than I needed, but at no point did I feel lost.
He never made a leap in logic that left me behind because he was holding my hand at every step of the way. Sometimes I appreciated that hand-holding, especially for the more involved statistics, but there were a couple of times when I found myself getting a bit impatient and wanting to run ahead.
That impatience is probably partly due to the fact that it's a long journey. At one point, I realized I was over 300 pages into the book without feeling like I knew how to integrate all of the individual tools I was learning into a cohesive system.
The examples he used to illustrate his points were sometimes bizarrely relatable. I have never tried to create a model of my behavior related to when I decide to replace my older razor blades with newer ones, but Mark did, and I actually found myself nodding with recognition that I do tend to use the last blade in the refill pack for far longer than the other blades.
Other examples demonstrate how his utility-based decision-making system can address problems with past games, such as the strategy game AI that kept sending its attack force towards the most vulnerable target. Savvy players could keep defensive forces just outside the city walls and garrison them at the last moment, while pulling units out of another city far away so that it looked like the softer target. By doing so, they could keep the AI's units marching back and forth, never able to carry out an attack, for as long as they wanted.
Solving this issue involves giving the AI a kind of decision momentum by making each decision include all of the relevant information. The AI isn't just deciding which city to attack. A single decision is which city to move to AND attack, which incorporates the time it takes to travel to a target city. Suddenly, changing course to attack a different city is a bit more painful, and so the current target is more likely to be maintained.
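To make that idea concrete, here is a rough sketch of my own (not code from the book) of what folding travel time into a single "move to and attack" score might look like. The City and Army types, the curve shapes, and the commitment bonus are all made up for illustration:

```python
from dataclasses import dataclass
from typing import Optional
import math

@dataclass
class City:
    x: float
    y: float
    defense_strength: float  # normalized 0..1; higher means better defended

@dataclass
class Army:
    x: float
    y: float
    speed: float

    def travel_time_to(self, city: City) -> float:
        return math.hypot(city.x - self.x, city.y - self.y) / self.speed

def attack_utility(army: Army, city: City, current_target: Optional[City] = None) -> float:
    # Vulnerability and travel cost are folded into ONE score, so a distant
    # soft target no longer automatically beats the city the army is
    # already marching toward.
    vulnerability = 1.0 - city.defense_strength
    travel_penalty = 1.0 / (1.0 + army.travel_time_to(city))
    score = vulnerability * travel_penalty
    if current_target is not None and city is current_target:
        score *= 1.2  # small commitment bonus discourages ping-ponging
    return score

def choose_target(army: Army, cities, current_target: Optional[City] = None) -> City:
    return max(cities, key=lambda c: attack_utility(army, c, current_target))
```

Because travel time is baked into the score, a player shuffling defenders around a distant city has to make it look dramatically more vulnerable before the AI will abandon a target it is already close to.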
I appreciated that he covered the problem of having the AI always make the best decision. While academic researchers might love that result, players are likely to find such an AI unrealistic and, worse, uninteresting. And if multiple AI agents all do exactly the same thing simultaneously, it's even more of a problem. So there's an entire chapter on ways to ensure that the game AI can be reasonable yet still interesting from one play session to the next.
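One common way to get that kind of variety (again my own sketch, not necessarily the exact approach from the chapter) is to score every candidate action, keep only the top few, and then pick among them with weighted randomness:

```python
import random

def choose_with_variety(scored_actions, top_n=3):
    # scored_actions: list of (action, score) pairs with non-negative scores.
    # Keeping only the top few candidates keeps the choice reasonable;
    # weighting by score keeps better options more likely without making
    # every agent do the same thing every time.
    best = sorted(scored_actions, key=lambda pair: pair[1], reverse=True)[:top_n]
    actions, weights = zip(*best)
    return random.choices(actions, weights=weights, k=1)[0]

# Example: three agents facing the same situation won't all pick "attack".
options = [("attack", 0.9), ("flank", 0.8), ("take cover", 0.7), ("flee", 0.1)]
print([choose_with_variety(options) for _ in range(3)])
```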
The book was published in 2009, so you might think that anything you could glean from it would be obsolete after almost a decade of progress in the field. And yet the basic decision-making system that drives behaviors is still relevant.
One of the benefits of reading an older book is seeing the ideas of that book illustrated in front of you in other media.
You can see Mark’s talks with Kevin Dill from GDC 2010 and 2012 in the GDC Vault. Improving AI Decision Modeling through Utility Theory and Embracing the Dark Art of Mathematical Modeling in AI both introduce the use of this utility-based system in games.
In 2013, Mark’s portion of the panel Architecture Tricks: Managing Behaviors in Time, Space, and Depth introduced the Infinite Axis Utility System, which takes the concepts from the book and puts them together into a simple yet powerful architecture.
In 2015, Mark and Mike Lewis presented Building a Better Centaur: AI at Massive Scale, in which they describe the Infinite Axis Utility System that was the architecture behind an MMO.
I’ve seen these videos before, but having now read the book and rewatched them, I finally understand the sections on response curves and how they apply to the actions the IAUS chooses.
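For readers who haven't watched the talks: a response curve maps a normalized input (ammo remaining, distance to a threat, health, and so on) onto a 0-to-1 score, and the scores of an action's considerations are multiplied together. Here is a minimal sketch in that spirit; the parameter names and example numbers are my own, not taken from the book or the talks:

```python
def response_curve(x, m=1.0, k=1.0, b=0.0, c=0.0):
    # Parametric curve: clamp(m * (x - c)**k + b) to [0, 1].
    # x is a normalized consideration input in [0, 1].
    y = m * (x - c) ** k + b
    return max(0.0, min(1.0, y))

def action_score(inputs, curves):
    # An action's score is the product of each consideration's curved value,
    # so any single consideration near zero effectively vetoes the action.
    score = 1.0
    for x, params in zip(inputs, curves):
        score *= response_curve(x, **params)
    return score

# Example: a "reload" action that likes low ammo and low immediate threat.
ammo_fraction, threat_level = 0.2, 0.3
print(action_score(
    [ammo_fraction, threat_level],
    [dict(m=-1.0, k=1.0, b=1.0),   # inverted linear: less ammo -> higher score
     dict(m=-1.0, k=2.0, b=1.0)],  # inverted quadratic: low threat -> higher score
))
```

The appeal of this setup is that a designer tunes behavior by reshaping curves and adding considerations rather than rewriting decision logic, which is what makes the architecture feel simple yet powerful.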
Behavioral Mathematics for Game AI is not a beginner’s book, but if you want to give your AI powerful reasoning abilities that produce rich, believable behaviors for your players, and you want a system that is easy to understand and design with, I’m not aware of another book on the subject as accessible as this one.