In this blog post, Paul talks about the principles, and some of the techniques, behind Bullion's in-game AI.
We've all played games with bad AI.
Games where the computer players flawlessly pull off impossibly accurate shots at the first attempt. Games where the computer players win not by being better, but by being faster, having more resources, or where the odds are stacked overwhelmingly in their favour. Games where the computer-controlled characters work on a fixed pattern, with the same movement and behaviour regardless of what the human players are doing. Or games where the computer players clearly know more about the environment than the human players could (e.g. the locations of randomised treasure, which could not have been learnt from previous plays of the game, and which imply the AI can access internal game data that isn't visible on screen to the human players).
It is important to us that the enemies and computer-controlled players in Bullion behave fairly and sensibly. We want the AI both to control the enemy characters and to provide opponent players when someone wants to play the game on their own, so it's important that the AI can play the game and feel like another human player, not like a machine. It needs to react to what the other players are doing. It needs to bear grudges and exact vengeance. It needs to set goals and construct plans to achieve them - but constantly review whether those goals are still both achievable and sensible, and re-plan accordingly if not.
Movement
It would be very easy for Bullion's AI to directly set the position of the characters it controls. It might not intentionally move them at warp speed, but they could still behave unnaturally compared to the human-controlled characters. The solution is quite simple: the AI is given a "virtual joystick", so the way it controls its characters is essentially the same as the way the human players control theirs. This also simplifies the game logic!
So it is up to the AI to track its own progress against its plan, and direct its characters to move in the appropriate direction.
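A minimal sketch of what that "virtual joystick" might look like - the names here are illustrative, not Bullion's actual code:

```python
from dataclasses import dataclass

# Hypothetical sketch: the AI writes to the same input structure a
# physical controller would, so the game's movement code can't tell
# AI-controlled and human-controlled characters apart.

@dataclass
class JoystickState:
    x: float = 0.0       # -1.0 (left) .. 1.0 (right)
    y: float = 0.0       # -1.0 (down) .. 1.0 (up)
    action: bool = False

class AIController:
    def __init__(self):
        self.stick = JoystickState()

    def steer_towards(self, char_pos, target_pos):
        """Push the virtual stick in the direction of the next waypoint."""
        dx = target_pos[0] - char_pos[0]
        dy = target_pos[1] - char_pos[1]
        length = max((dx * dx + dy * dy) ** 0.5, 1e-6)
        self.stick.x = dx / length
        self.stick.y = dy / length

# The game loop then reads controller.stick exactly as it would read a
# human player's pad, and moves the character from that.
```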
It would be unfair if the AI could react immediately. For a person playing the game, it takes time for the brain to process what it is seeing, decide what to do, and make the appropriate muscle movements. The latter in particular is where the lag is most prevalent. Therefore the AI also has a "lag" built in so that it takes a small amount of time for its intentions to manifest into a change in the input controls. Without this lag, it can appear to human opponents that the AI had foreknowledge of what was going to happen, as the AI is already reacting whilst the human is still processing the change in game state.
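One simple way to model that lag (a sketch, not necessarily how Bullion does it) is to queue the AI's intended inputs and only apply them after a short, human-like delay:

```python
import collections

# Hypothetical sketch: the AI's decisions are queued and only reach the
# virtual joystick after a reaction delay, so it can't respond faster
# than a human could.

class LaggedInput:
    def __init__(self, delay_seconds=0.15):
        self.delay = delay_seconds
        self.queue = collections.deque()  # (time_decided, intended_state)

    def decide(self, now, intended_state):
        """Record what the AI wants to do right now."""
        self.queue.append((now, intended_state))

    def current(self, now, last_state):
        """Return the input that has 'reached the thumbs' by this frame."""
        state = last_state
        while self.queue and now - self.queue[0][0] >= self.delay:
            _, state = self.queue.popleft()
        return state
```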
Path Finding
The most fundamental part of the AI movement/plan execution process is moving around the play arena. The islands are covered with impassable obstacles - some permanent, such as the trees and rocks; some temporary, such as the chests; and some mobile, such as other players and enemies. Not all terrain is created equal, either - different surfaces can be traversed at different speeds; the bulls walk faster over grass than sand, and slower through marshy ground or in the sea itself. The AI needs to route its character efficiently around these obstacles (or decide, where possible, that it would be faster to pass through them) on its way to its destination. It also has to do this in a way that looks natural - it shouldn't walk directly towards its goal and "bump" its way around obstacles. It should look at the arena, decide on the best route from A to B, then walk along it - and re-plan if something happens along the way to change the optimum route.
Path-finding algorithms are easy to find on the Internet, but need to be adapted to the circumstances. You'd use a different approach for a twisty maze than for an arena that consists mostly of open spaces with the occasional obstacle. In Bullion there are few walls, but the playing area can become quite crowded with rocks, trees and treasure chests.
In Bullion we do this by laying an invisible grid over the playing area - squares or hexagons tend to work well for this. We calculate a 'cost' for travelling through each grid cell - those with permanent impassable objects get a very high cost; others are relative to the time it would take to pass through the cell, according to the type of surface. We then compute a cumulative "cost distance" for each cell from the target back to the character's current position, and walk back along the resulting trail. The trick is in implementing this efficiently, in choosing the right cell size, and in balancing the cost parameters correctly. Make the cells too big and the character moves unnaturally; make them too small and computing a path becomes too expensive, so the route can't be updated quickly enough and the character keeps bumping into things.
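The broad shape of that computation looks something like the following - a minimal Dijkstra-style flood fill over a square grid of per-cell movement costs (the costs, grid and function names here are made up for illustration; cell size and cost balancing are where the real tuning happens):

```python
import heapq

# Hypothetical sketch: flood-fill a cumulative "cost distance" out from
# the target, then the character simply steps to whichever neighbouring
# cell leads back towards the target most cheaply.

def cost_distance_field(costs, target):
    """costs[r][c] = time to cross that cell (float('inf') for rocks/trees).
    Returns a dict mapping each reachable cell to its cumulative cost from target."""
    rows, cols = len(costs), len(costs[0])
    dist = {target: 0.0}
    frontier = [(0.0, target)]
    while frontier:
        d, (r, c) = heapq.heappop(frontier)
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + costs[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(frontier, (nd, (nr, nc)))
    return dist

def next_step(dist, cell):
    """Walk 'downhill' along the cost field: pick the cheapest neighbour."""
    r, c = cell
    neighbours = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    reachable = [n for n in neighbours if n in dist]
    return min(reachable, key=dist.get) if reachable else None
```

Because the field is computed from the target outwards, every character heading for the same spot can reuse it, and re-planning after a chest appears is just a matter of recomputing the field with the new costs.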
Reactive Agents
The AI has no "secret knowledge" about the playing area - we only let it "see" what the players can see. That doesn't mean we force the AI to look at the rendered pixels and detect what's happening, or analyse the audio. If there is a sound effect to accompany a change in the game (e.g. a new enemy has appeared) we provide that information to the AI - but it is up to the AI to "find" that new enemy on the map, and to keep track of where it is going. We don't share the plans of different AI-controlled players and characters with each other, as that would be unfair.
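In practice that might look like the game pushing the AI the same cues a player could perceive, and the AI maintaining its own picture of the world from them. A hypothetical sketch, with made-up names:

```python
# Hypothetical sketch: the AI keeps its own "mental map", updated only
# from events a human player could also perceive - no peeking at the
# full game state.

class AIWorldModel:
    def __init__(self):
        self.known_enemies = {}        # enemy_id -> last seen position
        self.known_chests = set()      # positions of chests the AI has spotted
        self.area_to_investigate = None

    def on_spawn_sound(self, approximate_area):
        """A spawn jingle played: something new exists, but we must go and look."""
        self.area_to_investigate = approximate_area

    def on_seen(self, entity_id, kind, position):
        """Called only for entities currently visible on screen."""
        if kind == "enemy":
            self.known_enemies[entity_id] = position
        elif kind == "chest":
            self.known_chests.add(position)
```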
The environment is constantly changing, so the AI needs to react. A chest has popped up on my course: do I route around it or smash my way through? An enemy is heading my way - do I ignore, avoid or confront it? Another player is approaching the treasure chest I was heading for - are they closer than me, should we divide the spoils, or should I pick another?
The AI is constantly validating that its current plan is still achievable, and is the most appropriate thing to be doing. As in nature, the AI is first concerned with its own survival - so if it is attacked, it will abandon its current activity to flee or retaliate. The action it takes will depend on its own speed, strength and stamina levels compared to those of the attacker - and also on any "history" between the characters. Enemies and AI players have their own characteristics that help prescribe their behaviour - speed and stamina, but also more esoteric parameters: cowardice; a propensity for thievery; curiosity. Should I go after that unguarded chest, hover near players already attacking chests, or go for that weakened player who's carrying a lot of treasure already? If a player is already being attacked by several enemies, am I better off joining them and taking my share of the spoils, or going after another player alone - risking the fight, but claiming all the booty for myself if I succeed?
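One common way to express that kind of decision (a sketch only - Bullion's actual weightings and goal names will differ) is a utility score per candidate goal, shaped by the character's personality parameters and rescored whenever the situation changes:

```python
from dataclasses import dataclass

# Hypothetical sketch: each candidate goal gets a score shaped by the
# character's personality; the AI pursues the highest-scoring goal.

@dataclass
class Personality:
    cowardice: float   # 0..1, higher = more likely to flee when attacked
    thievery: float    # 0..1, higher = prefers grabbing treasure to fighting
    curiosity: float   # 0..1, higher = more likely to go and investigate

def score_goal(goal, p, my_health, threat_level):
    if goal == "flee":
        return threat_level * (0.5 + p.cowardice) * (1.0 - my_health)
    if goal == "grab_treasure":
        return p.thievery * (1.0 - threat_level)
    if goal == "investigate":
        return p.curiosity * 0.5
    if goal == "fight_back":
        return threat_level * (1.0 - p.cowardice) * my_health
    return 0.0

def choose_goal(p, my_health, threat_level):
    goals = ["flee", "grab_treasure", "investigate", "fight_back"]
    return max(goals, key=lambda g: score_goal(g, p, my_health, threat_level))
```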
The last aspect of a reactive AI is to provide a fair and proportionate opponent for the human players. Players don't really want to have to choose between "easy", "medium" and "hard" computer opponents. Instead, you want the game to detect the skill level of the human players and adapt accordingly. This needs to be finely tuned. As noted in the introduction, it shouldn't be done by biasing where treasure chests appear (starving a winning AI, or showering a losing AI with gold) - players would soon tire of such tactics, which feel like cheating. The AI should instead try to create a level playing field amongst the characters, to make the fight and the challenge as equal as possible. This might mean that an AI player that has gathered far more treasure spends more time hunting enemies, while a losing AI might decide to go after more chests or attack a high-scoring player.
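Again as a sketch rather than Bullion's actual tuning, that levelling can come in as a bias on the AI's goal selection rather than on the game itself:

```python
# Hypothetical sketch: instead of rigging chest spawns, a winning AI
# biases itself away from treasure-gathering and towards riskier,
# lower-value activities; a losing AI does the opposite.

def handicap_bias(my_score, best_opponent_score):
    """Roughly -1..1: positive = behind (try harder), negative = ahead (ease off)."""
    denominator = max(my_score, best_opponent_score, 1)
    gap = (best_opponent_score - my_score) / denominator
    return max(-1.0, min(1.0, gap))

def adjust_goal_score(goal, base_score, bias):
    # When well ahead (bias < 0), chase enemies more and chests less.
    if goal == "grab_treasure":
        return base_score * (1.0 + 0.5 * bias)
    if goal == "hunt_enemies":
        return base_score * (1.0 - 0.5 * bias)
    return base_score
```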
Whatever the AI decides, it needs to feel natural to other players - and not just pace on the spot, or pointlessly commit suicide-by-shark, while waiting for human players to "catch up". No-one likes being condescended to. This isn't a turn-based game such as chess or pool, where one might expect the AI player to be "perfect" and any deviation from that would seem comical (such as the AI hugely missing an easy shot to give the human opponent a chance) - open-field play such as Bullion's offers more subtle ways of levelling the playing field.
Above all: play fair, because that's what's fun.