Learning Techniques


As computer games become more complex and consumers demand more sophisticated computer-controlled opponents, game developers must place a greater emphasis on the artificial intelligence (AI) aspects of their games. Modern computer games are usually played in real time, allow very complex player interaction and provide rich, dynamic virtual environments. Techniques from the AI fields of autonomous agents, planning, scheduling, robotics and learning therefore appear far more important than those developed for traditional games such as Connect Four, chess, Go and tic-tac-toe.

Computer games typically rely on so-called bots: computer-controlled autonomous agents within the game, usually employed as “enemy” figures that the player engages in combat. These agents need to exhibit some sort of intelligent behaviour, firstly to provide an engaging experience for the player, and secondly to provide an effective opponent. As part of this, the agent needs to be able to navigate rapidly around the game world, moving from place to place in a convincing manner.

This problem is usually solved by building a map of the game world, stored in the engine, that constitutes an efficient representation of the area. Given the problem of finding a route from point A to point B on the map, the game engine uses a search algorithm such as A* to compute an appropriate route, which is then presented to the relevant agent.
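The grid-based form of this idea can be sketched as follows. This is a generic A* implementation over a 2D occupancy grid; the function and map representation are illustrative, not the engine's actual format:

```python
import heapq

def astar(grid, start, goal):
    """A* search over a 2D occupancy grid, where grid[y][x] == 0 is walkable.
    Returns a list of (x, y) waypoints from start to goal, or None if no
    route exists."""
    def h(p):
        # Manhattan distance: an admissible heuristic for 4-way movement
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), start)]   # priority queue ordered by f = g + h
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:
            # walk the came_from chain back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                ng = g_cost[node] + 1
                if ng < g_cost.get((nx, ny), float("inf")):
                    g_cost[(nx, ny)] = ng
                    came_from[(nx, ny)] = node
                    heapq.heappush(open_set, (ng + h((nx, ny)), (nx, ny)))
    return None
```

The route returned is a sequence of waypoints the engine can hand to the agent; on a static, fully known map this is sufficient, which is exactly the assumption the next paragraph relaxes.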

However, this solution is no longer always valid, as increasingly it is not possible to have a pre-defined fixed map of the game world available. The reasons for this are two-fold. Firstly, newer games employ physics simulation: players (and indeed agents) can knock over walls, kick down doors, and effectively change the layout and geometry of the world as the game progresses. Routes that were valid at the start of the game may therefore become blocked off as play progresses, and traditional path-finding based on a pre-defined static map cannot cope with this situation. Secondly, players can often define their own worlds and geometry in which play takes place, so no maps are available in advance.

Our solution to this problem was to provide the agent with a means of navigating its own way around the world, rather than simply relying on routes provided by the game engine via a path-finding algorithm such as A*. Providing an agent with this functionality means providing it with two important abilities. Firstly, it needs the ability to examine its environment in some way, in order to know what is in front of it and around it. Secondly, it needs some way of processing this information to accomplish tasks such as steering around obstacles that have been placed in its path.

The first ability is achieved by embedding sensors in the agent. This is a concept borrowed from the robotics literature, where ultrasound or infrared sensors are common. We adapted this idea for virtual agents by casting rays out from the agent and testing for intersections with the geometry of the game world. In this way the agent is provided with information about the proximity of objects within its field of vision. The second ability is to process this information in some way, and our solution was to furnish each agent with an Artificial Neural Network (ANN) which takes the sensor information as input. The ANN is a learning algorithm that we trained to exhibit the behaviour we wanted, namely that the agent would have the ability to steer around objects. Because of the nature of the neural network this provides very robust steering behaviour that is extremely tolerant of noisy data. Another advantage of this approach is that the amount of processing required is minimal, so multiple agents can be imbued with this behaviour without placing a major strain on the CPU.
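A minimal sketch of the two pieces follows, assuming a 2D occupancy grid in place of Quake 2's 3D geometry. The function names, ray counts and network sizes are illustrative, and in the real system the ANN weights come from training rather than being hand-set:

```python
import math

def cast_ray(grid, pos, angle, max_dist=10.0, step=0.25):
    """March a ray outward from pos until it leaves the grid or hits a
    blocked cell (grid[y][x] == 1); return the normalised distance in [0, 1]."""
    dx, dy = math.cos(angle), math.sin(angle)
    d = 0.0
    while d < max_dist:
        cx, cy = int(pos[0] + dx * d), int(pos[1] + dy * d)
        if not (0 <= cy < len(grid) and 0 <= cx < len(grid[0])) or grid[cy][cx] == 1:
            return d / max_dist
        d += step
    return 1.0

def sense(grid, pos, heading, n_rays=5, fov=math.pi / 2):
    """Fan of rays spread evenly across the agent's field of vision."""
    angles = [heading - fov / 2 + fov * i / (n_rays - 1) for i in range(n_rays)]
    return [cast_ray(grid, pos, a) for a in angles]

def steer(sensors, w_hidden, w_out):
    """Tiny feed-forward ANN: sensor readings in, a steering correction in
    [-1, 1] out. The weights would be produced by training."""
    hidden = [math.tanh(sum(w * s for w, s in zip(row, sensors))) for row in w_hidden]
    return math.tanh(sum(w * h for w, h in zip(w_out, hidden)))
```

The low cost mentioned above is visible here: one update is a handful of ray marches plus two small dot products, cheap enough to run for many agents per frame.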

This must be used in conjunction with a traditional path-finding algorithm such as A*. The path-finding algorithm works out a path for the agent, but the sensors and ANN are responsible for moving the agent along that path, and are capable of adapting it to steer around obstacles or other dynamically introduced geometric changes.

We implemented our system using the Quake 2 game engine and extensively tested these ideas against more traditional approaches to path-finding. Our results indicate that this approach is extremely useful in situations where the environment is dynamic.
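This division of labour can be sketched as a simple update loop. Here `sense(pos, heading)` and `steer(readings)` are hypothetical interfaces standing in for the agent's ray sensors and trained network, not Quake 2 APIs:

```python
import math

def follow_path(waypoints, sense, steer, pos, speed=0.5, max_steps=200):
    """Hybrid navigation loop: a path-finder supplies the waypoints, while a
    sensor/ANN layer supplies local corrections. `sense` and `steer` are
    assumed callables for the agent's sensor fan and steering network."""
    path = list(waypoints)
    for _ in range(max_steps):
        if not path:
            return pos                      # route complete
        tx, ty = path[0]
        # global component: head toward the next waypoint on the route
        heading = math.atan2(ty - pos[1], tx - pos[0])
        # local component: let the network bend the heading around obstacles
        heading += steer(sense(pos, heading))
        pos = (pos[0] + speed * math.cos(heading),
               pos[1] + speed * math.sin(heading))
        if math.hypot(tx - pos[0], ty - pos[1]) < 1.0:
            path.pop(0)                     # waypoint reached, advance
    return pos
```

The key property is that the waypoint list can go stale (a door is blown shut, a wall collapses) without the agent walking blindly into the change, because the sensor term is re-evaluated every step.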

Researcher: Ross Graham
Project Leader: Hugh McCabe and Stephen Sheridan
Funding Agency: Postgraduate R&D Skills Programme
Duration: 2002-2004

Relevant Publications

Graham, R. Real-Time Agent Navigation with Neural Networks for Computer Games, M.Sc. thesis, 2006.

Graham, R., McCabe, H. and Sheridan, S. Realistic Agent Movement in Dynamic Game Environments, in Proceedings of the DIGRA 2005 Conference: Changing Views – Worlds in Play, pp. 249-259, June 2005, Vancouver, Canada.

Graham, R., McCabe, H. and Sheridan, S. Neural Pathways for Real Time Dynamic Computer Games, in Proceedings of the Sixth Eurographics Ireland Chapter Workshop, ITB, June 2005, Eurographics Ireland Workshop Series Volume 4, ISSN 1649-1807, pp. 13-16.

Graham, R., McCabe, H. and Sheridan, S. Neural Networks for Real-Time Pathfinding in Computer Games, in Proceedings of the ITB Research Conference 2004, 22/23 April 2004.

Graham, R., McCabe, H. and Sheridan, S. Pathfinding in Computer Games, ITB Journal, Issue 8, December 2003.

Sheridan, S. A Review of Parallel Mappings for Feed Forward Neural Networks using the Backpropagation Learning Algorithm, ITB Journal, Issue 4, December 2001.

Sheridan, S. Non-deterministic Processing in Neural Networks, ITB Journal, Issue 2, December 2000.