Abstract
Artificial intelligence (AI) in gaming is becoming more and more realistic as we head well into the twenty-first century. But will we ever reach a state where games replicate humans perfectly and consistently, and learn their instincts? Games are getting more complicated, thanks both to more advanced algorithms and to new hardware that allows more computation. Games would not be where they are today without artificial intelligence. I will discuss the importance of artificial intelligence in genres such as real-time strategy (RTS) games, first-person shooter (FPS) games, and sports games. I will also discuss the most popular algorithms used today for artificial intelligence in games: the A* algorithm, finite state machines, and artificial neural networks.
Introduction
AI in gaming refers to the methods used to produce intelligent behavior in non-player characters or computer players. These methods are usually adapted from existing techniques in AI, although the approach often differs substantially from mainstream AI. In first-person shooter (FPS) games, the computer is essentially cheating: it knows where the human player is at all times in order to act as intelligently as possible. In many cases, the AI of the non-player character has to be lowered or tweaked to give the human player a sense of fairness and a chance to win. Programmers should avoid cheating as much as possible. Cheating makes the game's challenge unrealistic, because it is not based on intelligence, and it draws programmers' focus away from building more human-like bots. AI can also be thought of as simulating thinking, or as representing intelligent behavior in an algorithmic way [3]. When creating AI for games, one question should always be in the programming team's head: what would a human player do? The obvious goal is to give human players a realistic challenge, which ensures the game does not get boring and shoved onto the shelf within a week.
Many techniques are used to achieve human-like behavior in computer players. The most popular are the A* algorithm, finite state machines, and artificial neural networks. Each contributes to game AI in a different way. Without these techniques, AI would not be where it is today, not only in games but in all other software that tries to replicate human intelligence.
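To give a feel for the simplest of these three techniques, here is a minimal sketch of a finite state machine for a game bot. The states and events here (patrol, attack, flee) are illustrative assumptions, not taken from any particular game; a real game would drive the transitions from its main loop.

```python
# Minimal finite state machine sketch for a game bot.
# States and events are hypothetical illustration values.

class BotFSM:
    # Transition table: (current_state, event) -> next_state
    TRANSITIONS = {
        ("patrol", "enemy_seen"): "attack",
        ("attack", "low_health"): "flee",
        ("attack", "enemy_lost"): "patrol",
        ("flee", "healed"): "patrol",
    }

    def __init__(self, start="patrol"):
        self.state = start

    def handle(self, event):
        # Look up the transition; unknown events leave the state unchanged.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

bot = BotFSM()
bot.handle("enemy_seen")   # patrol -> attack
bot.handle("low_health")   # attack -> flee
print(bot.state)           # flee
```

The appeal of this structure for game AI is that behavior is a plain data table: designers can add or tune transitions without touching the control logic.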
History
Video games first entered the world in the 1960s and early 1970s with the likes of Spacewar!, Pong, and Gotcha [1]. These games had no AI; they used some discrete logic but were based on competition between two human players. Artificial intelligence began to be implemented in games in the 1970s, with single-player modes pitting the human against computer enemies [1]. The enemies moved using stored patterns: paths that never change, which the computer players follow exactly as stored in the game. When microprocessors came into play, game AI became more complex because of the speed and amount of computation they allowed. When Space Invaders was released in 1978, it introduced distinct movement patterns, which differ from stored patterns in that the computer player can take different paths [1]. The following year, Galaxian stepped things up a notch from Space Invaders by allowing more complex and varied enemy movements [1]. Pac-Man debuted in 1980 and became the first popular game character, selling all kinds of memorabilia. As the years went on, enemies with different personalities were introduced, as were fighting games such as Karate Champ in 1984 [1]. A big accomplishment for AI came in 1983, when a human player was defeated by a computer player at chess; this was the first recorded victory for AI against a human player at anything, and it opened many eyes in the software industry. An even bigger accomplishment came in 1997, when Deep Blue, a chess-playing computer developed by IBM, defeated the world champion Kasparov [1]. This event gave AI developers the confidence to believe that almost anything could be programmed. Sports games advanced game AI in the early 1990s with the use of expert systems [1].
An expert system is a program that contains the knowledge of one or more human experts on a subject. An example is the popular Madden Football series, whose AI tries to mimic the coaching and managerial style of John Madden himself. Extensive work and time were put into this game to maximize the accuracy of the AI. In the newer Madden games, the user can change the opponent's variables to make it play the way the user wants. Most games now also have a difficulty setting, which lets the user pick a level that matches their experience with the game.
Finite state machines started to enter the gaming world in the 1990s, as did artificial neural networks. Real-time strategy and first-person shooter games also first entered the market in the '90s. The first real-time strategy games had notorious problems, including path-finding, real-time decision making, and economic play, among many others. For example, in the game Dune II the enemy cheats, and when the computer players attack a human base, they attack in a beeline instead of from varied positions [1]. RTS games did improve in the AI department and have grown more popular. For instance, one of the most popular franchises is the WarCraft series, and in the 1990s the first WarCraft game became the first to implement path-finding algorithms at such a large scale, with hundreds of units engaging in huge battles [2]. Graphics cards also arrived in the '90s, freeing up processor time to be spent on AI. What we are seeing now in the 21st century is that more games are successfully implementing artificial neural networks. Battlecruiser 3000AD was the first game to use neural networks, in 1996 [2]. It was followed by Creatures and Black & White, where learning by computer-controlled characters was used for the first time. Hyper-threaded processors arrived in the early 2000s, and more recently dual-core processors, allowing even more complex AI engines to be built.
One script-selection technique used in first-person shooter games is a system of buckets that leak some of their contents over time; the script that is run is the one whose bucket is fullest [3]. For example, suppose there are buckets named flee, fight, and restock. Events fill the buckets to different extents, such as:

Seen enemy: add 5% to flee and 10% to fight
Low ammo: add 20% to restock
Low health: add 20% to flee and 10% to restock
A lot of ammo and health: add 50% to fight, remove 20% from flee and restock
Lost 50% health in one hit: add 50% to flee, add 20% to restock, and remove 50% from fight

After an event occurs, the script belonging to the fullest bucket is run. So if the computer player sees an enemy and has 45% in its flee bucket and 30% in its fight bucket, it will flee instead of fight.

Almost all first-person shooter games use some sort of path-finding system. Path-finding is based on graphs describing the world, where each vertex represents a location [2]. The location could be anything in the game, such as a room or a tower. When the computer player is ordered to get to a given point, it navigates by heading toward the points it should pass through consecutively to reach the specified location [2]. The A* algorithm is the most popular technique used for path-finding; it guarantees finding the shortest path between two points.

The animation system, which handles computer-player movement in first-person shooters, has to play an appropriate animation sequence at a chosen speed; it must stay in sync with the AI system. Different body parts should be able to play different animation sequences, for example a soldier running and aiming at the enemy, then shooting and reloading while still running [2]. Such games often use an inverse kinematics (IK) system, which computes the pose of a human body from a set of constraints [4].
In an IK animation system, parameters can be calculated for an arm-positioning animation so that the hand can grab an object off a shelf or the ground [2].
Figure 1: Representation of the world in an FPS-type game, with the red lines being a path.
The goal-based engine responds by moving units capable of air defense into position [3].
Figure 2: Representation of the world in an RTS-type game, with the red lines being a path.
AI in Sports Games
A lot of the AI in sports games involves some sort of cheating. For example, in racing games, two paths are marked on the track: the first represents the optimal driving path; the second represents the path used when passing opponents. This is essentially the same as stored patterns. The track is also split into sectors, and each sector's length is calculated. These sectors are used to build a graph describing the track and to acquire characteristics of the track in the vehicle's closest vicinity [2]. In effect, the computer driver knows when to slow down for an approaching curve because it knows which sectors are coming up. The AI system in racing games must also analyze the terrain to detect objects in the road so the computer driver can go around them, or hit them if preferred. There must also be co-operation between the AI system and the physics module [2]. The physics module provides information such as when the car is skidding, and the AI system must react immediately to get the vehicle under control.
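The sector-based speed control described above can be sketched as follows. The track layout, speeds, and lookahead distance are made-up illustration values, not taken from any actual racing game.

```python
# Sketch of sector-based speed control for a computer driver:
# the track is split into sectors, and the driver slows down
# when an upcoming sector contains a curve.

# Each sector: (length_in_meters, is_curve) -- illustration values.
TRACK = [(120, False), (80, False), (60, True), (100, False), (50, True)]

def target_speed(sector_index, lookahead=1, fast=180, slow=90):
    # Look at the current sector plus a few ahead; brake early
    # if any of them is a curve. The track wraps (it is a loop).
    n = len(TRACK)
    for i in range(sector_index, sector_index + lookahead + 1):
        if TRACK[i % n][1]:
            return slow
    return fast

print(target_speed(0))  # 180: sectors 0 and 1 are straight
print(target_speed(1))  # 90: sector 2, one ahead, is a curve
```

This is the "knows what sectors are approaching" behavior in miniature: the driver's apparent anticipation is just a table lookup ahead of its current position.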
Figure 3: Segmentation and driving paths of the track. Blue represents the optimal path; dashed pink represents the passing path.

Cheating can be found in other sports games as well. Often the computer player's move is decided before its turn even begins. This is not realistic AI and should be avoided: the computer player should evaluate the human player's move before deciding on its own.
The A* Algorithm
A key problem in most games is finding a path between two points on a map. The A* algorithm not only finds a path between two points, it finds the shortest one. The computer-game world has thoroughly explored the issue of path-finding, and to date the A* algorithm has become a standard in games where path-finding is a major part of computer-player movement.
Prerequisites

Two prerequisites are needed for A* to work. First, there must be a graph or map of the game world. The map can be divided into squares, each containing a value; each square has eight neighbors unless it is located on the edge of the map or next to an obstacle. Second, a method is needed to estimate the distance between two points. This is the heuristic of the algorithm. The heuristic must underestimate the distance to ensure the algorithm finds the shortest path; if it overestimates, the algorithm runs slightly faster and still finds a path, but not necessarily the shortest one.

Method

Why not try all paths and choose the shortest? That would work, but it takes time and resources. This is where A* comes into play. A* minimizes the area of the map that must be examined by orienting the search toward the target, while still guaranteeing the shortest path. This is its main advantage: it is fast, it guarantees the shortest, most optimal path, and in today's games we cannot go without it. The algorithm uses a heuristic to orient the search toward the target. Typically, the heuristic is the estimated cost of getting to the destination, that is, the distance from the point currently being examined to the target. Again, the heuristic must be an underestimate of the actual cost for the shortest path to be guaranteed. Note that optimal does not always mean shortest: additional factors may need to be taken into account, such as the terrain, the angle of turns, the number of enemies in an area, and others depending on the game [2]. Some games have the algorithm avoid certain areas or situations on the map; for example, if there is a battle or weapons fire, you may want to keep your distance from teammates or friendly units.

Algorithm

The algorithm can be a bit confusing at first.
Once a few concepts the algorithm uses are understood, it is not as daunting. The first concept is the open list, which contains the nodes to be considered as possible starts for future growth of the path [5]. The start node is typically placed in this list at the beginning. Alongside the open list there is a closed list, which contains the nodes whose neighbors have all been added to the open list [5]. Each node in the open and closed lists has a G score, which indicates the distance, or weight, of the path leading from the start node to that node [5]. There is also an H score, the heuristic: like the G score, it represents a distance, but from the current node to the endpoint [5]. Because the path has not been traced out ahead of time, this distance cannot really be known; as discussed above, a method is used to estimate it.
At the start, the closed list is empty and the starting point is in the open list. Each node holds its G score and a pointer to the node that was used to reach it, for backtracking [5]; these are typically referred to as parent and child nodes. The path is extended until the target destination is reached. This is done by calculating the H score for all of the nodes in the open list and then picking the node with the lowest sum of G and H scores [5]. If the open list is empty, there is no path from the starting point to the end point. Call the picked point T. Every point adjacent to T that is not in the closed list gets added to the open list [5]; if it is already in the open list, it is not added again. The G score of each new node is the G score of T plus the distance between the new node and T [5], and its parent node is T. If a node adjacent to T was already in the open list, check whether the new G score is less than its current one; if so, update its G score and its parent pointer, otherwise do nothing [5]. If the new point is the target point, the optimal path has been found. Finally, T is moved to the closed list and the process starts over [5].
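The steps above can be sketched as a short implementation. This version uses a square grid with 4-way movement and the Manhattan distance as the underestimating heuristic (the text describes 8 neighbors per square; 4-way is used here only to keep the sketch small). The open list is kept as a heap ordered by G + H.

```python
import heapq

# Minimal A* sketch on a square grid, following the open/closed-list
# description above. Blocked cells act as obstacles.

def a_star(start, goal, blocked, width, height):
    def h(p):
        # Admissible heuristic: Manhattan distance never overestimates
        # the true cost under 4-way movement.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries: (G + H, G, node)
    parent = {start: None}
    g_score = {start: 0}
    closed = set()

    while open_heap:
        _, g, t = heapq.heappop(open_heap)   # T: lowest G + H in open list
        if t == goal:
            # Backtrack through parent pointers to recover the path.
            path = []
            while t is not None:
                path.append(t)
                t = parent[t]
            return path[::-1]
        if t in closed:
            continue                         # stale duplicate entry
        closed.add(t)
        x, y = t
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= n[0] < width and 0 <= n[1] < height):
                continue
            if n in blocked or n in closed:
                continue
            new_g = g + 1                    # G of T plus the step cost
            if new_g < g_score.get(n, float("inf")):
                g_score[n] = new_g
                parent[n] = t                # T becomes the parent node
                heapq.heappush(open_heap, (new_g + h(n), new_g, n))
    return None  # open list exhausted: no path exists

# A 3x3 grid with a two-cell wall; the only route bends around it.
path = a_star((0, 0), (2, 2), {(1, 0), (1, 1)}, 3, 3)
print(path)  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
```

Rather than updating entries in place, this sketch pushes a fresh heap entry whenever a better G score is found and discards stale entries when popped; that is a common simplification over the update-the-open-list step described above.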
Figure 5: Artificial neural network representation

There has been debate about how to implement artificial neural networks in computer games, and it has been a trendy topic of late. Artificial neural networks are said to have huge potential in games. Colin McRae Rally 2 was one of the first games to successfully implement an artificial neural network, in 2001. The trained neural network in this game is responsible for keeping the computer player's car on the track while going around it as fast as possible [2]. The input to the neural network could be the curvature of the road, the distance from a corner, the type of road, the speed of the car, or the car's properties [2]. From this input, the neural network generates an output that is optimal for the given conditions, which is then passed to the physics module. The application of neural networks in games is limited by a few problems, such as [2]:

Choosing appropriate input for the network
The need to re-train the network when the game changes
Complicated theory and debugging
Training, which takes time and is complicated
Choosing the network's input parameters can be a serious problem. The parameters need to be chosen in a way that lets the network learn to solve situations it has not yet encountered [2]. A general rule of thumb is to have the input data represent as much information about the game world as possible. A set of input data is needed for training the network; several hundred samples is normal for most games today, and producing it usually requires significant effort and time from the developers. The last step is to train the neural network. Testing should occur simultaneously with training to ensure the game does not become too difficult or too easy and in need of further tuning [2]. Fuzzy logic is often used with neural networks, as it allows for more human-like reasoning [2]. It usually takes the form: if variable is set then action. For example, "if road is dry then maintain normal speed" or "if road is wet then slow down". When the simultaneous use of fuzzy logic and neural networks succeeds, the results are incomparable to what can be achieved with rules hard-coded using traditional logic [2].
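A toy version of the "if road is wet then slow down" rules can illustrate what makes fuzzy logic different from traditional logic: conditions hold to a degree in [0, 1] rather than being simply true or false, and the rule outputs are blended accordingly. The membership function and the half-speed figure below are assumptions for illustration only.

```python
# Toy fuzzy-logic sketch of the wet/dry road rules above.
# Membership values are degrees in [0, 1], not booleans.

def wetness(rain_level):
    # Degree to which the road counts as "wet" (0 = dry, 1 = soaked).
    return min(1.0, max(0.0, rain_level))

def choose_speed(max_speed, rain_level):
    wet = wetness(rain_level)
    dry = 1.0 - wet
    # Defuzzified (weighted) blend of the two rule outputs:
    #   if road is dry then maintain normal speed
    #   if road is wet then slow down (here: half speed, an assumption)
    return dry * max_speed + wet * (max_speed * 0.5)

print(choose_speed(200, 0.0))  # 200.0  fully dry: normal speed
print(choose_speed(200, 1.0))  # 100.0  fully wet: half speed
print(choose_speed(200, 0.4))  # 160.0  partially wet: blended speed
```

With hard-coded traditional logic the driver would jump abruptly between two speeds at some rain threshold; the fuzzy blend instead degrades smoothly, which is the "more human-like reasoning" the text refers to.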
Conclusion
Where would the game world be today without artificial intelligence? I can guarantee it would not be where it is now. It is not possible to create a game without AI in today's world; a game without AI is like the Earth without the sun. AI in gaming has come a long way since the 1970s. A*, finite state machines, and artificial neural networks give games the most human-like computer players possible. Tremendous accomplishments keep occurring, such as neural networks recently being implemented in a game for the first time. We are slowly moving toward artificial neural networks and fuzzy logic being a standard in games, just as the A* algorithm is now. Luckily for gamers, that day is nearly here.
References
[1] Game Artificial Intelligence. Wikipedia Encyclopedia. September 7, 2006. http://en.wikipedia.org/wiki/Game_artificial_intelligence
[2] Grzyb, Janusz. Artificial Intelligence in Games. Software Developer's Journal. June 2005.
[3] Petersson, Anders. Artificial Intelligence in Games. WorldForge Newsletter. August 2001. http://worldforge.org/project/newsletters/August2001/AI/#SECTION00020000000000000000
[4] Popovic, Zoran; Martin, Steven; Hertzmann, Aaron; Grochow, Keith. Style-Based Inverse Kinematics. 2004. http://grail.cs.washington.edu/projects/styleik/styleik.pdf
[5] A*. The Game Programming Wiki. September 15, 2006. http://gpwiki.org/index.php/A_star
[6] Finite State Machine. Wikipedia Encyclopedia. November 1, 2006. http://en.wikipedia.org/wiki/Finite_state_machine