
The Current State of Human-Level Artificial Intelligence in Computer Simulations and Wargames.

D. Ezra Sidran
22C:290:004 Computer Science Readings for Research
Summer 2003
University of Iowa
Professor Joe Kearney

Table of Contents

Statement of the Problem
The Simulation Environment
Methods of Research
Human-Level AI's Killer Application: Interactive Computer Games. Laird; van Lent (2001)
Tricks of the Windows Game Programming Gurus. André LaMothe
Game Programming Gems. Edited by Mark DeLoura (2000)
Artificial Intelligence in Games: A Review. Sweetser (2002)
Qualitative Spatial Interpretation of Course-of-Action Diagrams. Ferguson; Rasch, Jr.; Turmel; Forbus (2000)
Enhanced Military Decision Modeling Using a MultiAgent System Approach. Sokolowski (2003)
Pathfinding, Strategic, Tactical, and Terrain Analysis: A Look at Artificial Intelligence in Real Time Strategy Games. Fisher (2001)
Smart Moves: Intelligent Pathfinding. Stout (1996)
Tactical Movement Planning for Individual Combatants. Reece; Kraus; Dumanoir (2001)
A Bayesian Network Representation of a Submarine Commander. Yu (2003)
Strategy and Tactics in Military WarGames: The Role and Potential of Artificial Intelligence. Veale (1997?)
    Lecture I: Introduction to WarGaming
    Lecture II: Movement, Terrain and Visibility
    Lecture III: Modeling Combat and Predicting Losses
    Lecture IV: Modeling Fatigue and Morale
    Lecture V: Ancient Masters -- how much algorithmic content can we find in Sun Tzu's "The Art of War"?
Fabulous Graduate Research Topics in CGF, Improving the Army Graduate Program. Marshall; Argueta; Harris; Law; Rieger (2001)
The Current State of Human-Level Artificial Intelligence in Computer Simulations and Wargames. Page 2 of 116

Army Digitization Overview for NDIA. Balough (2000)
DoD Modeling and Simulation (M&S) Glossary
U.S. Army Field Manual FM 101-5: Staff Organization and Operations
U.S. Army Field Manual FM 34-130: Intelligence Preparation of the Battlefield
Computing Machinery and Intelligence. Turing (1950)
Do We Really Want Computer Generated Forces That Learn? Petty (2001)
Realtime Initialization of Planning and Analysis Simulations Based on C4ISR System Data. Furness; Isensee; Fitzpatrick (2002)
Representing Computer Generated Forces Behaviors Using eXtensible Markup Language (XML) Techniques. Lacy; Dugone (2001)
Moneyball: The Art of Winning an Unfair Game. Michael Lewis (2003)
Heterogeneous Agent Operations in JWARS. Burdick; Graf; Huynh; Grell; MacQueen; Argo; Bross; Prince; Johnson; Blacksten (2003)
The Representation and Storage of Military Knowledge to Support the Automated Production of CGF Command Agents. Allsopp; Harrison; Sheppard (2001)
Variability in Human Behavior Modeling for Military Simulations. Wray; Laird (2003)
An Intelligent Action Algorithm for Virtual Human Agents. Undeger; Isler; Ipekkan (2001)
Terrain Analysis in Realtime Strategy Games. Pottinger (2001?)
An Exploration into Computer Games and Computer Generated Forces. Laird (2001)
A Modified Naive Bayes Approach for Autonomous Learning in an Intelligent CGF. Chia; Williams (2003)
Modeling Cooperative, Reactive Behaviors on the Battlefield with Intelligent Agents. Ioerger; Volz; Yen (2001)
A Distributed Intelligent Agent Architecture for Simulating Aggregate-Level Behavior and Interactions on the Battlefield. Zhang; Biggers; He; Reddy; Sepulvado; Yen; Ioerger (2001)
Modeling Adaptive, Asymmetric Behaviors. Bloom (2003)
Developing an Artificial Intelligence Engine. van Lent; Laird (1999)

Agent Based Toolkit for Intelligent Model Development. Napierski; Aykroyd; Jacobs; White; Harper (2003)
CIANC3: An Agent-Based Intelligent Interface for Future Combat Systems Command and Control. Wood; Zaientz; Beard; Frederiksen; Huber (2003)
Game AI: The State of the Industry, Part One. Woodcock (2000)
Game AI: The State of the Industry, Part Two. Pottinger; Laird (2000)
Computer Player AI: The New Age. Pottinger (2003)
Sketching for Military Courses of Action Diagrams. Forbus; Usher; Chapman (2003)
Calculi for Qualitative Spatial Reasoning. Cohn (1996)
Describing Topological Relations With Voronoi-Based 9-Intersection Model. Chen; Li; Li; Gold (1998)
How Qualitative Spatial Reasoning Can Improve Strategy Game AIs. Forbus; Mahoney; Dill (2000)
Potential of Qualitative Spatial Reasoning in Strategy Games. Chew (2002)
Commander Behavior and Course of Action Selection in JWARS. Vakas; Prince; Blacksten; Burdick (2001)
PC Games: A Testbed for Human Behavior Representation Research and Evaluation. Biddle; Stretton; Burns (2003)
A Method for Incorporating Cultural Effects into a Synthetic Battlespace. Mui; LaVine; Bagnall; Sargent (2003)
A Methodology for Modeling and Representing Expert Knowledge that Supports Teaching-Based Intelligent Agent Development. Bowman; Tecuci; Boicu (2000)
JWARS Output Analysis. Blacksten; Jones; Poumade; Osborne; Stone (2001)
AI Middleware: Getting into Character, Part 1 - BioGraphic Technologies' AI.implant. Dybsand (2003)
AI Middleware: Getting into Character, Part 2 - MASA's DirectIA. Dybsand (2003)
AI Middleware: Getting into Character, Part 3 - Criterion's RenderWare AI 1.0. Dybsand (2003)
Other Papers of Interest
Conclusions

The Problem
When we talk about Artificial Intelligence in the context of strategy games or military simulations, we can all picture what we're talking about, even if we can't agree on the methods used to create it or even on a precise name for it. John Laird has used the phrase "Human-Level Intelligence," by which he means an AI that is able to compete with a human on an equal level. George and Cardullo1 use the term "humanlike expertise in the military domain."

Grant's Vicksburg campaign (top) and MacArthur's landing at Inchon (right). These campaigns are classics of military strategy because the maneuvers were completely unsuspected and contradicted established military doctrine. That is also why they were successful.

However, we know what we want the AI to do: confronted with the situation that faced Grant before Vicksburg in 1863, we want the AI to ignore conventional military wisdom, send its troops below the objective, and conduct a lightning campaign that ends with the envelopment and surrender of the enemy forces. Likewise, in a simulation of the Korean peninsula in September 1950, we want the AI to conceive of a daring amphibious invasion 180 miles behind enemy lines, as MacArthur did. We want an AI that understands the rules and the physics of its environment and yet can think outside of the box when necessary.

One of the books making quite a stir this summer is Michael Lewis' Moneyball. The book describes the management style of Billy Beane of the Oakland A's, but that is not what is controversial. Rather, it is the book's description of Bill James' Sabermetrics, and how it throws conventional baseball wisdom out the window, that is causing so much discussion around the ball fields of America. Bill James makes a very convincing case that 150 years of common-sense baseball is completely wrong. Billy Beane, embracing the tenets of Sabermetrics, has built a winning ball club for a fraction of the cost of his opponents. We want an AI that can come to the same counterintuitive, but correct, conclusions.

1 "Intelligent systems such as CGF must possess humanlike expertise in the military domain. Like a human or group of humans in a military organization they must be able to adapt and learn." - G. R. George and F. Cardullo, "Application of Neuro-Fuzzy Systems to Behavioral Representation in Computer Generated Forces," Proceedings of the Eighth Conference on Computer Generated Forces and Behavioral Representation, Orlando, FL, May 11-13, 1999, pp. 575-585.

The Environment
What military and sporting simulations have in common is that they are not Discrete State Games. Chess is a Discrete State Game.2 There are a finite number of possible positions;3 it is an extraordinarily large number, but it is a definite number. If one had a computer, a book, or a file cabinet capable of storing every conceivable position in chess along with the best possible move to make at every discrete state in the game, then one would have the ultimate, unbeatable chess master. Call it AI, call it an expert system, or just call it a giant lookup table, but it would be definitive and unbeatable.

2 Also sometimes called an NxM game. NxM game: a normal form game for two players, where one player has N possible actions and the other one has M possible actions. In such a game, the payoff pairs for any strategy combination can be neatly arranged in a matrix, and the game is easily analyzable. See http://www.xs4all.nl/~mgsch/gaming/theory_glossary.htm.

3 For frequently cited estimates of the number, see http://mathworld.wolfram.com/Chess.html.


Indeed, any Discrete State Game strategy or Discrete State problem can and will be solved. It does not matter what methods are used in the short term (neural nets, genetic algorithms, or brute force); there is a definitive solution waiting out there. This is not the case for simulations. Computer simulations, by definition, are emulations of the real world. The real world rarely rests in discrete states.
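The "giant lookup table" idea can be seen working on a toy scale. The sketch below (my illustration, not from any of the surveyed papers) exhaustively solves a tiny subtraction game: N counters, players alternately remove one to three, and whoever takes the last counter wins. Memoized search fills in a complete table of states, a miniature of the file cabinet holding every chess position.

```cpp
#include <cassert>
#include <map>

// Exhaustively solve a tiny discrete-state game. Every state is either
// a forced win or a forced loss for the player to move, so a complete
// lookup table settles the game once and for all.
std::map<int, bool> table;  // counters remaining -> mover wins?

bool playerToMoveWins(int n) {
    if (n == 0) return false;  // no counters left: the previous player took the last one
    auto it = table.find(n);
    if (it != table.end()) return it->second;
    bool win = false;
    for (int take = 1; take <= 3 && take <= n; ++take)
        if (!playerToMoveWins(n - take)) win = true;  // some move leaves the opponent lost
    return table[n] = win;
}
```

For this game the table reproduces the known result that the mover loses exactly when N is a multiple of four; the point is that a discrete-state game, unlike a real-world simulation, has such a definitive answer at all.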

Methods
The purpose of this research is to collect, catalog, and summarize published and available papers written within the last few years on the subject of human-level artificial intelligence for computer simulations and wargames. Work in this area falls into three distinct categories:

1. Research done in academia that is published in ACM, IEEE, or similar journals.

2. Research done under contract to the military and published in symposium proceedings such as Behavior Representation in Modeling and Simulation (BRIMS), the Defense Modeling and Simulation Office (DMSO) Journal, or similar publications.

3. AI routines created for specific commercial computer games that are presented at industry conferences such as the Game Developers Conference (GDC) or appear as articles in industry-specific books such as DeLoura's Game Programming Gems or Laramée's AI Game Programming Wisdom.

A conclusion appears at the end of the next section.


Human-Level AI's Killer Application: Interactive Computer Games. Laird; van Lent (2001)
Type of Research: Academic. This article appeared in the summer 2001 issue of AI Magazine. Also on this disk: John Laird's PowerPoint presentation "Toward Human-Level AI for Computer Games." Summary: Laird's article is practically a call to arms for the AI community. It is frequently cited in papers that touch on AI and the computer game industry, and begins: "Although one of the fundamental goals of AI is to understand and develop intelligent systems that have all the capabilities of humans, there is little active research directly pursuing this goal. We propose that AI for interactive computer games is an emerging application area in which this goal of human-level AI can successfully be pursued." And, later: "The thesis of this article is that interactive computer games are the killer application for human-level AI. They are the application that will need human-level AI. Moreover, they can provide the environments for research on the right kinds of problem that lead to the type of incremental and integrative research needed to achieve human-level AI." Laird discusses the advent of Computer Generated Forces (CGF) in 1991 and describes how he had hoped that this would be "the right application for our research that requires the breadth, depth and flexibility of human intelligence." However, "by late 1997, we started to look for another application area, one where we could use what we learned from computer-generated forces and pursue further research on human-level intelligence. We think we have found it in interactive computer games." Laird then proceeds to enumerate the basic computer game genres (Action Games, Role-Playing Games, Adventure Games, Strategy Games, God Games, Team Sports, and Individual Sports) and to describe the AI challenges and opportunities in each category.

In his conclusion, Laird writes: "One attractive aspect of working in computer games is that there is no need to attempt a Manhattan Project approach with a monolithic project that attempts to create human-level intelligence all at once. Computer games provide an environment for continual, steady advancement and a series of increasingly difficult challenges."


Tricks of the Windows Game Programming Gurus. André LaMothe. New York, Sams Publishing, 1999.
Book, 1005 pages plus CD-ROM of example source code. This book is not so much a bible of the computer game industry as it is a grimoire.4 It is especially loved because it contains numerous source code examples, both in the text and on the accompanying CD-ROM. Chapter 12, "Making Silicon Think with Artificial Intelligence," does a fine job of introducing, discussing, and showing with source code the most common techniques of AI for computer games. These include, in order:

Deterministic AI - These are predetermined or preprogrammed behaviors (including random vectors and velocities). All external input is ignored. Pac-Man is a classic example of deterministic AI.

Tracking Algorithms - These can be as simple as the standard if (player_x > monster_x) monster_x++; or as complicated as curved-trajectory tracking that calculates the future target location and adjusts vector and velocity accordingly.

Evasion Algorithms - These are simply the reverse of the tracking algorithms above.

Patterns and Basic Control Scripting - Patterns are frequently stored as defines or bits and use a simple vocabulary such as: GO_FORWARD = 1, TURN_RIGHT_90 = 2, WAIT_X = wait X cycles, etc. These commands are then read and interpreted through a series of switch statements.

Patterns with Conditional Logic Processing - This combines pre-designed patterns with some conditional logic such as:

if (test_distance > 7)
    AI_State = TRACK_PLAYER;
else
    AI_State = AVOID_PLAYER;

Modeling Behavioral State Systems - To create a truly robust Finite State Machine (FSM) you need two properties: "1. A reasonable number of states, each of which represents a different goal or motive. 2. Lots of input to the FSM, such as the state of the environment and the other objects within the environment." (p. 729)

4 grimoire (noun) - a manual of black magic (for invoking spirits and demons). From http://define.ansme.com/words/g/grimoire.html
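LaMothe's pattern-scripting idea, commands stored as simple codes and dispatched through a switch, can be sketched in a few lines. The command vocabulary follows the text; the tiny agent struct and grid movement are my invention for illustration.

```cpp
#include <cassert>
#include <vector>

// Illustrative pattern vocabulary (names follow the text; code is a sketch).
enum Command { GO_FORWARD, TURN_RIGHT_90, WAIT };

struct Agent { int x = 0, y = 0, heading = 0; };  // heading: 0=N, 1=E, 2=S, 3=W

// Read each stored command and dispatch it through a switch statement,
// the interpretation style the chapter describes.
void runPattern(Agent& a, const std::vector<Command>& pattern) {
    for (Command c : pattern) {
        switch (c) {
            case GO_FORWARD:
                if (a.heading == 0) a.y++;
                else if (a.heading == 1) a.x++;
                else if (a.heading == 2) a.y--;
                else a.x--;
                break;
            case TURN_RIGHT_90:
                a.heading = (a.heading + 1) % 4;
                break;
            case WAIT:
                break;  // burn one cycle doing nothing
        }
    }
}
```

In a game the pattern vector would be loaded from data rather than hard-coded, which is what makes this a primitive scripting system rather than compiled behavior.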

A master Finite State Machine (FSM) with substates (from p. 730). Elementary State Machines - This involves separating the AI into high-level states, medium-level logic, and low-level logic. See the next diagram (from p. 731).
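The master-FSM-with-substates arrangement can be sketched as follows; the state names, the health threshold, and the distance test are invented for illustration, not LaMothe's.

```cpp
#include <cassert>

// A master FSM whose states each own a smaller set of substates
// (a sketch of the master-FSM-with-substates diagram; names invented).
enum MasterState { ATTACK, RETREAT };
enum SubState { APPROACH, FIRE, RUN, HIDE };

struct Monster {
    MasterState master = ATTACK;
    SubState sub = APPROACH;
};

// The master FSM reacts to coarse input (health); the chosen master
// state then selects a substate from its own, smaller repertoire.
void update(Monster& m, int health, int distanceToPlayer) {
    m.master = (health < 25) ? RETREAT : ATTACK;           // high-level decision
    if (m.master == ATTACK)
        m.sub = (distanceToPlayer > 5) ? APPROACH : FIRE;  // attack substates
    else
        m.sub = (distanceToPlayer > 5) ? HIDE : RUN;       // retreat substates
}
```

The benefit of the hierarchy is that each level stays small: the master machine never has to enumerate every combination of goal and situation.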


LaMothe continues, stating that "a personality is nothing more than a collection of predictable behaviors," and, later, that "you should be able to model personality types using logic and probability distributions that track a few behavioral traits and place a probability on each." LaMothe also suggests adding radii of influence that further adjust and influence a computer personality (see below):

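The personality-as-probability-distribution idea above can be sketched as a cumulative-distribution lookup; the traits and the percentages are invented for illustration, not LaMothe's figures.

```cpp
#include <cassert>

// A personality as a probability distribution over a few behavioral
// traits. The three probabilities (percent) must sum to 100.
struct Personality { int pAttack, pDefend, pFlee; };

enum Behavior { B_ATTACK, B_DEFEND, B_FLEE };

// Given a roll in 0-99, walk the cumulative distribution to pick a
// behavior. Different personalities are just different tables.
Behavior chooseBehavior(const Personality& p, int roll) {
    if (roll < p.pAttack) return B_ATTACK;
    if (roll < p.pAttack + p.pDefend) return B_DEFEND;
    return B_FLEE;
}
```

In a game, roll would come from something like rand() % 100 each time a decision is due; a "timid" personality would simply carry a different row of numbers, which is what makes the behaviors predictable in aggregate but varied moment to moment.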

Modeling Memory and Learning with Software - This is a fairly obvious AI routine with the addition of agents transferring knowledge and data. See the sample source code for memory and learning. (Note: Word may not correctly open the file, which is a Visual Studio C++ file.)

Planning and Decision Trees - LaMothe states that high-level AI = planning = a high-level set of actions. He also suggests building a decision tree of nodes, each representing a production rule and/or an action, which can be loaded at runtime from a file.

Pathfinding - LaMothe begins by describing basic, primitive pathfinding algorithms, including trial and error.

Contour Tracing - This method involves constantly shooting a line from the subject to the goal. Whenever an obstacle is encountered, a new path is traced around it:

As LaMothe says, "This works, but it looks a little stupid because the algorithm traces around things instead of going through the obvious shortest path. But it works." And there is something to be said for an algorithm that is guaranteed to work.

Collision Avoidance Tracks - This method involves calculating (possibly pre-computing), or drawing during level design, avoidance paths around all obstacles and storing them for later use. See diagram below:


This system is fine if you have the luxury of pre-computing or pre-designing terrain.

Waypoint Pathfinding - This method consists of constructing a series of predefined waypoints and paths (see diagram below). Source code is also included showing an implementation of this method in a driving/racing game.
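Waypoint pathfinding in this style reduces to walking a precomputed list of points. The sketch below is mine, not LaMothe's racing-game code: the agent heads toward its current waypoint and advances to the next once within a small arrival radius.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Follow a predefined waypoint chain: steer toward the current
// waypoint; when close enough, snap to it and take the next one.
struct WaypointFollower {
    std::vector<Point> waypoints;
    std::size_t current = 0;

    // One movement step of length `speed`. Returns false once the
    // final waypoint has been reached (or if there are no waypoints).
    bool step(Point& pos, double speed) {
        if (current >= waypoints.size()) return false;
        Point t = waypoints[current];
        double dx = t.x - pos.x, dy = t.y - pos.y;
        double dist = std::sqrt(dx * dx + dy * dy);
        if (dist <= speed) { pos = t; ++current; }            // arrived: advance
        else { pos.x += speed * dx / dist; pos.y += speed * dy / dist; }
        return current < waypoints.size();
    }
};
```

The appeal, and the limitation, is the same as with collision-avoidance tracks: all the hard spatial reasoning is done ahead of time by a designer, so the runtime cost is trivial but the terrain must be known in advance.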


Robust Pathfinding - Breadth-first, bidirectional breadth-first, depth-first, Dijkstra's, and A* searches are described. However, LaMothe cautions that none of these are applicable to real-time pathfinding.

Advanced AI Scripting - LaMothe describes some basic scripting techniques and suggests using the compiler pre-processor commands to turn a pseudo-English scripting language into standard C/C++ code. The script file (xxx.scr) would be added as an include to the code and then compiled and linked into the executable.

Artificial Neural Networks - Neural nets have been around since they were first described by McCulloch and Pitts in 1943. A neural net basically emulates the human brain by creating logic circuits that fire when the summed inputs are greater than a predetermined value (theta). A bias can also be added to simulate long-term memory. See LaMothe's in-depth article here (neural.doc). Also in the same subdirectory are source code and executable examples of neural network implementation.
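A McCulloch-Pitts style neuron of the kind described here takes only a few lines. This sketch (mine, not the neural.doc code) wires the same neuron into AND and OR gates purely by the choice of theta.

```cpp
#include <cassert>
#include <vector>

// A McCulloch-Pitts style neuron: fires (outputs 1) when the weighted
// sum of its inputs reaches the threshold theta.
int neuron(const std::vector<double>& w, const std::vector<double>& in,
           double theta) {
    double sum = 0.0;
    for (std::size_t i = 0; i < w.size(); ++i) sum += w[i] * in[i];
    return sum >= theta ? 1 : 0;
}
```

With weights {1, 1}, a threshold of 2 makes the neuron an AND gate and a threshold of 1 makes it an OR gate, which is the "logic circuit" behavior the text describes.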


Genetic Algorithms - LaMothe only briefly describes genetic algorithms, which emulate the process of natural selection to solve AI tasks. Obviously, the key to creating robust genetic algorithms rests with the evaluation criteria.

Fuzzy Set Theory, Fuzzy Linguistic Variables and Rules, Fuzzy Manifolds and Membership, Fuzzy Associative Matrices - Key fuzzy logic terms and concepts:

DOM = Degree of Membership
FLV = Fuzzy Linguistic Variable
FAM = Fuzzy Associative Matrix
Logical AND means the minimum of the set
Logical OR means the maximum of the set


The key principles of fuzzy logic are: 1. As opposed to crisp logic, fuzzy linguistic variables (FLVs) do not fall into precise categories; rather, they have Degrees of Membership (DOM). See chart 12.46 above. 2. Sets of FLVs are ANDed (take the minimum of the set) or ORed (take the maximum of the set) and then used to index the Fuzzy Associative Matrix (FAM), as in chart 12.47 above. Each cell of the FAM can hold a number of values or instructions.
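A minimal sketch of these two principles, using an invented triangular membership function (the set shapes in the book's charts are not reproduced here):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Degree of Membership (DOM) in a triangular fuzzy set peaking at
// `peak` with half-width `spread` (an illustrative set shape).
double dom(double x, double peak, double spread) {
    return std::max(0.0, 1.0 - std::fabs(x - peak) / spread);
}

// Principle 2: fuzzy AND takes the minimum membership, fuzzy OR the
// maximum. The resulting strengths are what index the FAM cells.
double fuzzyAnd(double a, double b) { return std::min(a, b); }
double fuzzyOr(double a, double b)  { return std::max(a, b); }
```

An input sitting exactly on a set's peak has DOM 1.0; one a half-spread away has DOM 0.5, so an input can belong partially to "near" and partially to "far" at the same time, which is exactly what crisp logic forbids.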


Game Programming Gems. Edited by Mark DeLoura. Rockland, Massachusetts, Charles River Media, 2000.
Book, 614 pages plus CD-ROM of example source code. This book is another classic of the computer game industry and, like LaMothe's Tricks of the Windows Game Programming Gurus above, includes a CD-ROM of example source code. Gems, however, is composed of a series of articles, some written specifically for this book and some reprinted from other sources. Chapter 3, "Artificial Intelligence," includes the following articles:

Designing a General Robust AI Engine by Steve Rabin. This is a general overview of AI engines used in games; it suggests a method as described in the flow chart below.

A central message router sorts messages and routes them to game objects, which then pass them to a specific state machine for processing. Rabin's message consists of five fields: a descriptive name, the name of the sender, the name of the receiver, the time it should be delivered, and relevant data. An example message from the article is:
name:damaged, from:dragon, to:knight, deliver_at_time:245.34, data:10 (damage)
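That five-field message and its delayed delivery can be sketched like this; the field names follow the article's example, but the router class itself is my sketch, not Rabin's code.

```cpp
#include <cassert>
#include <queue>
#include <string>
#include <vector>

// A message with Rabin's five fields: descriptive name, sender,
// receiver, delivery time, and relevant data.
struct Message {
    std::string name, from, to;
    double deliverAtTime;
    int data;
};

// Order the pending queue so the earliest delivery time is on top.
struct LaterFirst {
    bool operator()(const Message& a, const Message& b) const {
        return a.deliverAtTime > b.deliverAtTime;
    }
};

// The router holds messages until game time reaches their delivery
// time; everything passing through it forms the "paper trail".
struct MessageRouter {
    std::priority_queue<Message, std::vector<Message>, LaterFirst> pending;

    void send(const Message& m) { pending.push(m); }

    std::vector<Message> deliver(double now) {
        std::vector<Message> out;
        while (!pending.empty() && pending.top().deliverAtTime <= now) {
            out.push_back(pending.top());
            pending.pop();
        }
        return out;
    }
};
```

A delivery loop run once per frame hands each due message to its receiver's state machine; logging every message as it passes through the router is what gives the debugging trail Rabin praises.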

Rabin also notes the value of this system because it leaves an electronic "paper trail," which is useful for debugging, and the desirability of sending delayed messages which can be acted upon at a later time. Caveat: ensure that the game object has not been destroyed before a delayed message is acted upon.

A Finite-State Machine Class by Eric Dybsand. This article is primarily source code for implementing an FSM in C++.

Game Trees by Jan Svarovsky. Game trees are only applicable to discrete games such as chess and checkers and, therefore, are not of interest here.

The Basics of A* for Path Planning by Bryan Stout. Note: Stout also wrote Smart Moves: Intelligent Pathfinding. Stout (1996), below. Theoretically, an A* search will eventually return the cheapest path (as defined by the code, which can weight various terrain types with different costs). However, on large maps an A* search can create thousands of nodes, which can quickly consume all available memory. Stout writes, "The case in which A* is most inefficient is in determining that no path is possible between the start and goal locations; in that case, it examines every possible location accessible from the start before determining that the goal is not among them." See also Amit's Thoughts on Path-Finding on this disk.

A* Aesthetic Optimizations by Steve Rabin. This is an interesting short article describing various methods of smoothing A* paths by using Catmull-Rom and other spline techniques.


A* Speed Optimizations by Steve Rabin. A* can be slow because, in the worst case, it must examine every square in a map; a 1,000 x 1,000 grid produces one million squares that may need to be evaluated. This article presents two basic methods for speeding up A* path searches. The first is reducing the number of squares.

The diagrams above represent: a) a rectangular (or hexagonal) grid; b) an actual polygonal floor; c) a polygonal floor representation; and d) points of visibility. A is, of course, the slowest. B uses the actual polygons created by a 3D program, which can be extraordinarily complex in many cases. C cheats by creating a pathing map and overlaying it on the real map; this method is completely unacceptable for terrains that are not predesigned. D, again, represents another predesigned method. Another technique suggested is a hierarchical pathfinding system which, again, is restricted to certain methods of terrain and/or obstacle representation. Lastly, modifications of the heuristic cost of the A* search will produce optimization, but it is impossible to predict at what cost to the quality of the path found.

Simplified 3D Movement and Pathfinding Using Navigation Meshes by Greg Snook. This method involves cheating and precalculated paths, which are not applicable to my area of interest.

Flocking: A Simple Technique for Simulating Group Behavior by Steven Woodcock. The boids.c source code is included here.

Fuzzy Logic for Video Games by Mason McCuskey. A very good, but very short, primer on fuzzy logic.


A Neural-Net Primer by André LaMothe. This appears to be an expanded version of the article that first appeared in his Tricks of the Windows Game Programming Gurus (see above). It also includes source code on the CD-ROM and briefly describes Hebbian nets and Hopfield autoassociative nets.


Artificial Intelligence in Games: A Review. Sweetser (2002)


Type of Research: Academic. Summary: This is an extremely detailed (62 pp.) overview of AI techniques currently used in commercial games. The topics include: Finite State Machines, Agents, Scripting, Fuzzy Logic, Fuzzy State Machines, Flocking, Decision Trees, Neural Networks, Genetic Algorithms, Extensible AI, and Middleware and Research Centres. Each chapter includes an explanation of the technique and examples of current commercial games that use it.


Qualitative Spatial Interpretation of Course-of-Action Diagrams. Ferguson; Rasch, Jr; Turmel; Forbus (2000)
Type of Research: Academic research funded by the Defense Advanced Research Projects Agency (DARPA). Summary: This paper is a precursor of work done by Ferguson et al. (see Sketching for Military Courses of Action Diagrams. Forbus; Usher; Chapman (2003)) that summarizes the work of the Qualitative Reasoning Group of Northwestern University, funded by DARPA. This project was involved with the "digitization" (in the words of Ken Forbus) of certain staff functions performed by the G2 and S2 officers of a division or corps (see U. S. Army Field Manual FM 101-5: Staff Organization and Operations). A typical Course of Action (COA) diagram taken from Qualitative Spatial Interpretation of Course-of-Action Diagrams. Caption: "Figure 4: Example from a Course-of-Action diagram. Three friendly brigade-level task forces attack three enemy positions. The main attack is against objective Buford, and the supporting attack is against objective Grant." Ferguson et al. constructed two COA diagram interpreters using their qualitative spatial reasoning engine, GeoRep (see GeoRep: A Flexible Tool for Spatial Representation of Line Drawings. Ferguson; Forbus (2000)). The first system used GeoRep to interpret individual COA glyphs (unit icons and movement orders). The second system then used GeoRep to describe geographic relationships implied by the symbol arrangements. This allows the program to respond to questions such as: What coa-object is located on/at coa-area? Or: what is the ordinal direction of coa-obj1 relative to coa-obj2?
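At its very simplest, an ordinal-direction query of the kind quoted above reduces to comparing coordinates. The sketch below is my toy illustration of such a relation, not GeoRep, which works over full qualitative representations of line drawings.

```cpp
#include <cassert>
#include <string>

// Crude ordinal-direction query between two map objects, the kind of
// spatial relation a COA interpreter must be able to answer.
// Convention: +y is north, +x is east.
std::string ordinalDirection(double x1, double y1, double x2, double y2) {
    std::string ns = (y1 > y2) ? "north" : (y1 < y2) ? "south" : "";
    std::string ew = (x1 > x2) ? "east"  : (x1 < x2) ? "west"  : "";
    if (ns.empty() && ew.empty()) return "co-located";
    return ns + ew;  // e.g. "north" + "east" -> "northeast"
}
```

A real interpreter must go much further, reasoning about regions, containment, and glyph semantics rather than point positions, but the output vocabulary is the same qualitative one.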


Enhanced Military Decision Modeling Using a MultiAgent System Approach. Sokolowski (2003)
Type of Research: Government-funded (Joint Warfighting Center, U.S. Joint Forces Command & Defense Modeling and Simulation Office) academic research. Summary: The author (Sokolowski) is associated with the Virginia Modeling, Analysis & Simulation Center at Old Dominion University. In this paper he describes the methods he used to create a multiple-agent system, employing Recognition-Primed Decision Making (RPD), entitled RPDAgent, designed to simulate strategic decisions made by a senior military commander. The author first examined von Neumann's classical decision theory before deciding upon RPD (a model developed by Klein5 to describe Naturalistic Decision Making (NDM)). The test bed that Sokolowski created was a hypothetical amphibious landing. Sokolowski presented the scenario to thirty senior U.S. and Coalition military officers and recorded their responses at four strategic decision-making points: 1. Which of four beaches to attack? 2. When to attack? 3. Should the timing of the attack be changed due to enemy troop movements? 4. Should the attack be called off after encountering high casualties and strong enemy resistance? RPDAgent evaluated 20 narrowly defined variables such as Beach Topography: Sand Type: Coarse or Fine (see also Representing Knowledge and Experience in RPDAgent. Sokolowski (2003)). RPDAgent, which also employed stochastic methods, was run 200 times, and its responses to the four decisions above were recorded. The RPDAgent decisions were remarkably similar to those of the 30 senior military officers (showing a standard deviation of between 0 and 2.4185). Sokolowski then conducted a version of the famous Turing Test (see Computing Machinery and Intelligence) by showing the results of the RPDAgent simulation runs
5

Klein, G. Sources of Power: How People Make Decisions. The MIT Press, Cambridge, MA, 1998 and Klein, G. "Strategies of Decision Making," Military Review, Vol. 69, No. 5, pp. 56-64, 1989.


combined with the responses of the original 30 senior military officers to five generals (three four-star and two three-star generals) and recorded their responses. The results (shown below) would qualify as passing the Turing Test.

SME       Number of human or     Number of correct human     Num. of        Percent
          model responses        or computer responses       "can't tell"   correct
Gen. A    0                      0                           20             0%
Gen. B    14                     8                           6              40%
Gen. C    7                      3                           13             15%
Gen. D    2                      2                           18             10%
Gen. E    3                      2                           17             10%

Pathfinding, Strategic, Tactical, and Terrain Analysis: A Look at Artificial Intelligence in Real Time Strategy Games. Fisher (2001)
Type of Research: Academic; student research. Summary: This article is a very broad overview of pathfinding techniques currently being used in commercial Real Time Strategy (RTS) games. The author points out that "Most games today use some variation of the A* algorithm. This algorithm is based on a heuristic function that ranks nodes in a possible path by an estimate of the best path through that node." This algorithm and numerous variations of it are covered in greater detail in Smart Moves: Intelligent Pathfinding. Stout (1996). The author also made observations about the implementation of pathfinding algorithms in two classic RTSs, Warcraft and Age of Empires 2, and about their ability to look ahead. Fisher writes, "[G]ains are readily apparent when comparing Warcraft (the oldest game in the pack) with Age of Empires 2 (the newest). The Age of Empires units are better able to determine a 'good' route to a goal. In Warcraft, the units would not reevaluate their positions when moving around objects. They would treat each new obstacle as a separate entity and move around it until reaching the original vector of their path. This wastes time and is an obvious flaw in the pathfinding. Also, the Warcraft units are slower to react (even on the same test machine as the Age units), slower to calculate an alternative when faced with an obstacle, and less likely to take a 'good' route when there is unit congestion (a clustering of units, or traffic jam). Furthermore, there is more unit congestion overall as the units are less able to detangle themselves from other units."


Smart Moves: Intelligent Pathfinding. Stout (1996)


Type of Research: Computer Game. Summary: Stout created a test-bed computer program (screen shot below) that allows various pathfinding techniques to be tested.

These include bidirectional breadth-first search, Dijkstra's algorithm, depth-first search, iterative-deepening depth-first search, best-first search and the A* search algorithm. Each is demonstrated in his test bed. This is a superb starting point for examining various search algorithms. In addition, Stout discusses generalized cylinders, convex polygons and quadtrees. The paper includes numerous graphics and code samples.
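As a concrete illustration of the family of algorithms Stout surveys, here is a minimal A* search over a 2-D tile grid. This is my own sketch, not Stout's code; the map encoding, with '#' marking blocked tiles, is an assumption made for the example:

```python
import heapq

def astar(grid, start, goal):
    # A* over a 4-connected grid of strings; '#' cells are impassable.
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path so far)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] != '#':
                ng = g + 1
                if ng < best_g.get(nxt, float('inf')):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # unreachable goal
```

Because the Manhattan heuristic never overestimates the remaining cost, the first path popped at the goal is optimal; replacing h with a constant zero degrades this to Dijkstra's algorithm, and ranking by h alone gives best-first search, two of the other techniques Stout demonstrates.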


Tactical Movement Planning for Individual Combatants. Reece; Kraus; Dumanoir (2001)
Type of Research: Military; private corporation, Science Applications International Corporation (SAIC), funded by the U.S. Army's Simulation, Training and Instrumentation Command (STRICOM). Summary: This research was undertaken to deal with problems encountered by SAIC with their Dismounted Infantry Semi-Automated Forces (DISAF) project (for more information see this web page). Specifically, SAIC's Modular Semi-Automated Forces (ModSAF) software proved inadequate to the task of moving individual soldiers rather than vehicles. This paper describes a multi-step process for pathfinding. The first step uses four levels of decision making to set objective points:
1. Long distance or strategic planning (which is ignored here).
2. Intermediate distance of a few hundred meters.
3. Short distance of about a hundred meters.
4. Fine motion, which presumably covers everything less than 100 meters.
However, the details of setting the objective points are not discussed in this paper. The paper then discusses four different pathfinding algorithms:
1. Cell decomposition.
2. Skeletons.
3. Weighted regions.
4. Potential functions.
Cell decomposition consists of casting a grid across the terrain and then using an A* algorithm to assess potential paths. Each cell in the grid, which can be rectangular, hexagonal or irregular, may be given weighted values representing anything related to movement, e.g., trafficability, ground slope, or exposure to threats. The skeleton method employs Voronoi diagrams (see How Qualitative Spatial Reasoning Can Improve Strategy Game AIs. Forbus; Mahoney; Dill (2000) and Describing Topological Relations With Voronoi-Based 9-Intersection Model. Chen; Li; Li; Gold (1998) for a description of Voronoi diagrams as applied to terrain pathfinding). The weighted regions method was first described in A Stochastic Approach to the Weighted-Region Problem: Design and Testing of a Path Annealing Algorithm and employs a method of first assigning values to each cell and then casting rays in all directions from the start point; "each ray bends at region boundaries to obey the optimality criterion... The goal point must be trapped between two rays. The path found will be closer to optimal if more rays are used." The potential method of pathfinding constructs "a scalar potential (as in energy) function that steadily decreases to a minimum as the distance to the goal decreases. The potential function is high at obstacle boundaries and decreases as the distance to the obstacle increases." The path is determined by following the steepest descent down the potential value surface until the goal is reached. The paper then evaluates each algorithm against the following criteria:
1. Path planning through an obstacle field.
2. Planning an optimal path through regions of different cost.
3. Planning an optimal path through terrain with continuously varying cost.
4. Using exposure to threats as a terrain cost.
The table below summarizes the strengths and weaknesses of each algorithm as indexed with the evaluation criteria:
Requirement            Cell decomp.   Skeleton   Weighted region   Potential Fn.
Free path              Fair           Good       Good              Fair
Varying costs          Good           Poor       Good              Poor
Continuous variation   Good           Poor       Poor              Poor
Threats                Good           Poor       Poor              Poor

The authors conclude, "From the table, it is apparent that there is no approach that is good for all the requirements, but the cell decomposition approach is at least fair for all. We have selected cell decomposition as the path planning mechanism." The authors then construct a search space: a grid of 1-meter squares, no more than 250 meters long by 200 meters wide, oriented from the starting point to the objective point. The grid is then populated with polygonal terrain obstacles.


An A* algorithm is then employed to determine a path. Each cell is weighted based on ground slope and exposure to enemy threats. When a path is returned by the A* algorithm it is post-processed with the following procedures:
1. Exposed segments of the path are grown by one node so that the agent will have a chance to accelerate to a rushing speed before being exposed.
2. Segments requiring the agent to be prone are grown by one node so that the agent will be prone before it enters the area where it needs to be prone.
3. Standing segments between prone segments are checked to see how long they are. Short standing segments are replaced by prone segments so that the agent does not waste time standing up for just a short segment. If the standing segment is exposed, the distance threshold is increased so that the agent will more likely stand and run.
4. The path is examined to see if uniform segments (segments of the path that don't change direction more than 45 degrees) ...
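The first of these post-processing steps is easy to picture in code. The sketch below is my own illustration (the paper gives no source code); it assumes the path's exposure analysis has already been reduced to one boolean per path node:

```python
def grow_exposed(flags):
    # Grow each run of exposed nodes by one node on each side, so the
    # agent can accelerate to a rushing speed before actually entering
    # the exposed area.  `flags` holds one boolean per path node;
    # True means that node is exposed to a threat.
    grown = list(flags)
    for i, exposed in enumerate(flags):
        if exposed:
            if i > 0:
                grown[i - 1] = True
            if i + 1 < len(flags):
                grown[i + 1] = True
    return grown
```

The prone-segment step (2) is the same operation applied to a "must be prone" flag instead of an exposure flag.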

Left: Example of a path planned through a building to avoid exposure to a threat (star). Right: A path planned up a steep hill; the path goes laterally and then traverses the hill at an angle so that the effective slope is reduced.
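For contrast with the cell-decomposition approach the authors selected, the potential-function method they evaluated can be sketched as greedy steepest descent over a grid. This is my own illustration; the attraction/repulsion weighting is an arbitrary assumption. The local-minimum failure mode visible here is one reason the method rated poorly in the comparison table:

```python
import math

def potential(p, goal, obstacles, repulse=2.0):
    # Scalar potential: low near the goal, high near obstacles.
    attract = math.dist(p, goal)
    rep = sum(repulse / (math.dist(p, o) + 0.1) for o in obstacles)
    return attract + rep

def descend(start, goal, obstacles, max_steps=500):
    # Greedy steepest descent on a unit grid.  Can stall in a local
    # minimum between obstacles, in which case the partial path is
    # returned; this is the weakness the paper notes.
    p, path = start, [start]
    for _ in range(max_steps):
        if p == goal:
            return path
        nbrs = [(p[0] + dx, p[1] + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]
        nxt = min(nbrs, key=lambda q: potential(q, goal, obstacles))
        if potential(nxt, goal, obstacles) >= potential(p, goal, obstacles):
            return path  # stuck in a local minimum
        p = nxt
        path.append(p)
    return path
```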


A Bayesian Network Representation of a Submarine Commander. Yu (2003)


Type of Research: Military; private corporation, Frontier Technology, Inc. (FTI), funded by the U.S. Defense Modeling and Simulation Office (DMSO). Summary: This paper describes FTI's efforts to create a Human Behavior Representation (HBR) as part of a DMSO Challenge Problem to simulate a human submarine commander's decision-making processes during a Sea Scenario test. The object of the game was to command a submarine to protect a high-value asset (HVA, an aircraft carrier) and two picket ships from submarine attacks while not accidentally attacking a non-aggressive submarine. The only commands that the HBR could give were:
1) Station on Port - positions the submarine in a location to monitor the enemy port
2) Transit - moves the submarine to the specified location at the specified speed
3) Station on Unit - positions the submarine in a location to monitor either the HVA or one of the picket ships
4) Station in Area - positions the submarine at a specified location
5) Trail - follow a specified enemy sub
6) Disengage Trail - stop following an enemy sub
7) Fire - launch a torpedo against an enemy sub
8) Report - send an information report to HQ
9) Cease Reporting - stop sending an information report
10) Set Speed - set the speed of motion
Furthermore, the HBR did not concern itself with the intricacies of submarine command, such as changes in depth or changing course to properly align the boat for a torpedo attack. Yu's group decided to implement a belief-based Bayesian network for the HBR. It was driven by the following beliefs:
- Belief that Red will attack the HVA (or a picket ship)
- Belief that our submarine is capable of a successful attack on Red
- Belief in the location of Red
- Belief that Red is trying to attack our submarine
- Belief in what our optimal speed should be
- Belief that we need to monitor the port area
- Belief that we should make a report
- Belief in our best course of action (based on all of the above)

They then purchased Netica, a Commercial-Off-The-Shelf (COTS) software package that implements Bayesian networks (see their web site at http://www.norsys.com/). The Bayesian network evaluated a number of conditional probabilities. They also created four different files representing commander personalities (aggressive, conservative, skeptical and trusting) that could be loaded at runtime. The paper concludes that "[t]he behaviors witnessed by the test monitors were deemed appropriate."
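Netica's internals are proprietary, but the core update behind any such belief node is just Bayes' rule. A single-belief sketch follows; all probabilities here are hypothetical numbers of my own, not FTI's:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # Posterior belief in hypothesis H after observing evidence E:
    #   P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# Hypothetical numbers: prior belief that Red will attack the HVA is 0.2,
# and a closing sonar contact is four times as likely if an attack is intended.
belief = bayes_update(prior=0.2, p_e_given_h=0.8, p_e_given_not_h=0.2)
```

Chaining updates like this across the eight beliefs listed above, with conditional probability tables linking them, is essentially what the Netica network does at larger scale.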


Strategy and Tactics in Military WarGames: The Role and Potential of Artificial Intelligence. Veale (1997?)
Lecture I: Introduction to WarGaming
Lecture II: Movement, Terrain and Visibility
Lecture III: Modeling Combat and Predicting Losses
Lecture IV: Modeling Fatigue and Morale
Lecture V: Ancient Masters -- how much algorithmic content can we find in Sun Tzu's "The Art of War"?
Type of Research: Academic. This is a series of lectures from a course in Artificial Intelligence and Games taught at Dublin City University, Dublin, Ireland by Tony Veale. Summary: The author covers basic wargaming and military terms and history, including spatial resolution, facing, early board wargames, kriegspiel, imperfect information (fog of war) and Line of Sight (LOS).
[Figure: Battle of Cannae, 216 BC. Labels include the Roman legions and allies, Roman heavy cavalry, the River Aufidus, and the African, Spanish/Gallic and Numidian allies (Phase Three).]

Also covered are formulas for observation probabilities, the Lanchester Attrition Model, combat cycles and combat efficiency coefficients. Also included and cited are combat resolution tables from this researcher's commercial computer wargame, The War College (published by GameTek, 1995). This is a very good overview of the history of wargaming over the last 200 years and is cited in a number of other papers.
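The Lanchester attrition model Veale presents has a standard "square law" form, in which each side's loss rate is proportional to the size of the opposing force. A discrete-time sketch follows; the effectiveness coefficients are hypothetical, and real wargames layer terrain, fatigue and morale modifiers on top:

```python
def lanchester_square(red, blue, red_eff, blue_eff, dt=0.01):
    # Square law (aimed fire): dR/dt = -blue_eff * B,  dB/dt = -red_eff * R.
    # Integrate with small simultaneous Euler steps until one side is gone.
    while red > 0 and blue > 0:
        red, blue = red - blue_eff * blue * dt, blue - red_eff * red * dt
    return max(red, 0.0), max(blue, 0.0)

# With equal effectiveness, a 2:1 numerical advantage is decisive: the
# square law predicts the larger side survives with sqrt(1000^2 - 500^2),
# roughly 866 effectives remaining.
red_left, blue_left = lanchester_square(1000, 500, red_eff=0.1, blue_eff=0.1)
```

The "square" in the name comes from the conserved quantity red_eff*R^2 - blue_eff*B^2, which is why doubling numbers is worth far more than doubling per-soldier effectiveness under aimed fire.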


Fabulous Graduate Research Topics in CGF, Improving the Army Graduate Program; Marshall; Argueta; Harris; Law; Rieger (2001)
Type of Research: Military. Summary: The intended audience for this paper is Army officers who are attending graduate school (primarily the Simulation Masters program at the University of Central Florida), with the hope of directing their research towards topics that are immediately applicable to specific needs of STRICOM (Simulation, Training and Instrumentation Command). The 17 specific topics are:
Weapon & Ammunitions Prioritization: a system that automatically determines the right weapon and ammunition for the target rather than depending on human judgment.
Target Prioritization & Disengagement: Given an array of targets in an intense battle, one of the most difficult issues a crew has is which target it should engage.
User Control Interface (UCI)/Workstation & Execution Matrices Improvements: Not only how information is displayed but how low-level units interact and how information is shared between them.
Maneuver/Obstacle Avoidance/Routing Improvement: One of the most common complaints from SAF (Semi-Automated Forces) operators is the maneuver capability of the SAF entities.
Suppressive Fire Effects: One weakness in current virtual simulations and CGFs is a realistic simulation of suppressive fire effects.
Stability and Support Operations (SASO), Nuclear Biological Chemical (NBC), or Amphibious Operations Requirement Analysis: These new unit types are not accurately modeled or included in CGF simulations.
Virtual Warrior/Infantry Simulation: Most of the simulators and CGFs were based on supporting tank simulation. As the virtual and CGF technology progressed it became apparent that infantry had to be added to the simulation. The addition of infantry has proven to be difficult because of its unique behavior and maneuver characteristics.
Military Operations in Urban Terrain (MOUT): There are numerous problems with urban combat simulation, including Line of Sight (LOS), 3D maneuvering, changing terrain (building damage), etc.
Situational Behavior Improvements: The CGF AI needs a great deal of improvement.
Digital Message Processing: Creating the ability for the CGF to send, receive and respond to messages.
Fair Fight with Manned Simulators: Manned simulators and SAF are not equal in information.
Executive Framework Control: Optimization of ModSAF and CCTT SAF, which have some definite time and size problems.
Demand-Driven Simulation: Optimization of training simulations so that they do not spend CPU time and resources calculating unnecessary data.
Scalability Analysis of Simulation Problems and Algorithms: "This analysis could characterize the potential for time-space tradeoffs in algorithms [e.g. Line-of-Sight determination, Route Planning] for solving these problems, determine the extent to which these problems can be solved via parallel or distributed algorithms, and describe the effects of bandwidth constraints and network latency on parallel or distributed solutions."
Command from Simulator (CFS) Issues: In order to support Command Force Exercises (CFX), whose focus is the training and exercise of commanders and leaders, an operational mode called CFS was created in CCTT SAF. In other SAFs it has been referred to as a "tethered unit." Apparently this kludge has caused numerous problems, including units not stopping on an intervisibility line such as a ridgeline.
Aggregate-Disaggregate Algorithms: For example, "the destruction of one tank in a platoon is not the same as minor damage to two of the tanks, but commonly used reaggregation techniques might map these back to the same constructive representation."
Opposing Forces (OPFOR) Skill Levels/Variable Skill Leadership Behaviors: Current SAF systems have methods to create different skill levels that modify detection, engagement rates and delivery accuracy based on the competency level identified. These implementations are mainly speculative and need an empirical analysis based on actual field data.


Army Digitization Overview for NDIA. Balough (2000)


Type of Research: Military. Summary: This is a PowerPoint slide presentation that serves as the blueprint for the complete digitization of the U.S. Army. Digitization includes not only computer military simulations but also making every unit and CCCI as digital as possible.

Secretary of Defense in a real-time conference with the Qatar War Room; part of the Army's digitization plan in operation.

DoD Modeling and Simulation (M&S) Glossary


Type of Research: Military. Summary: Definitions and glossary for a blizzard of acronyms. I've found this 185-page document indispensable in deciphering some papers.


U. S. Army Field Manual FM 101-5: Staff Organization And Operations.


Type of Research: Military. Summary: These are the step-by-step instructions to create Courses of Action (COAs). Any computer program attempting to simulate the COA procedure would have to use this manual as a guide.

U. S. Army Field Manual FM 34-130: Intelligence Preparation of the Battlefield.


Type of Research: Military. Summary: This manual was prepared for Intelligence Officers creating COAs. It is to be used in conjunction with FM 101-5: Staff Organization and Operations (see above). Comments: This particular manual is very well written and includes a number of important insights into the history of successful strategy and tactics.

Computing Machinery And Intelligence; Turing (1950)


Type of Research: Academic. Summary: Turing's classic paper that introduces the Turing Test. I added this not only because it is such an important document for me, but because it's nice to always be able to find a copy of it when you need it.


Do We Really Want Computer Generated Forces That Learn? Petty (2001)


Type of Research: Academic. Summary: The main thesis is that we (not surprisingly) do not want CGFs that learn (at least most of the time). The argument can be summed up by the chart (below), which shows the positives (+) and minuses (-) for learning behavior in each category of CGF use.
CGF behavior category vs. CGF application:

Unchanged
  Training: 0   Analysis: 0   Experimentation: 0
Improved
  Training: + Improved behavior; + Varying experience; - Loss of training control; - Cost of learning phase
  Analysis: + Improved behavior; - Loss of repeatability; - Confounded results; - Cost of learning phase
  Experimentation: + Improved behavior; + Richer experiment; - Confounded results; - Cost of learning phase
Altered
  Training: + Varying experience; - Loss of training control; - Non-doctrinal behavior
  Analysis: - Loss of repeatability; - Confounded results; - Non-doctrinal behavior
  Experimentation: + Richer experiment; - Confounded results; - Non-doctrinal behavior
Martian
  Training: + Varying experience; - Loss of training control; - Unrealistic behavior
  Analysis: - Loss of repeatability; - Confounded results; - Unrealistic behavior
  Experimentation: + Richer experiment; - Confounded results; - Unrealistic behavior

I would argue that there are some important categories omitted from the above chart, such as "Learning Something New," "Looking Outside the Box" and "Reevaluating Standard Military Doctrine."


Realtime Initialization of Planning and Analysis Simulations Based on C4ISR System Data. Furness; Isensee; Fitzpatrick (2002)
Type of Research: Private Corporation and Military. Summary: C4ISR = Command, Control, Communications, Computers, Intelligence, Surveillance & Reconnaissance. There is a very real problem in that, heretofore, there has not been a consistent data structure among the numerous DoD systems. Ideally, the data that is displayed on the computer screens inside command vehicles and in the War Room should be seamlessly imported into simulation programs. This paper proposes using the GCCS Ambassador as an interface: "As part of previous C4ISR-Simulation interoperability experiments, the Naval Research Laboratory (NRL) had already developed a GCCS HLA-compliant interface (known as the 'GCCS Ambassador') that allowed simulation data to be sent via the RTI and deposited into the GCCS Track Database Manager (TDBM). This interface was originally developed to work with the Joint Theater Level Simulation (JTLS), an aggregate-level simulation of air, ground, and sea operations used primarily by the Joint Warfighting Center (JWFC) in Joint training exercises. While any simulation that is HLA compliant could in theory derive data from GCCS (Global Command & Control System) in this manner, there are only a select few that are used in the execution of COA (Course Of Action) and deliberate planning applications that really need this capability. The Joint Warfare System (JWARS) is one of the simulations that is currently considering adopting this technology for use. As JWARS has a requirement specified in its Operations Requirements Document (ORD) for operating as a COA application [JWARS, 1998] in support of CINC analysis requirements, this type of initialization scheme may be well suited for it. JWARS has recently developed an HLA interface for use with other applications, which would provide a straightforward approach for implementation with the GCCS Ambassador."
"While the GCCS COP provides a significant amount of information on enemy and friendly units, it does not contain several bits of data of interest to the analysis of COAs. Among the data sources that would be beneficial for inclusion in this federation are the Master Intelligence Database (MIDB), Status of Resources and Training Systems (SORTS), and Joint Operation Planning and Execution System (JOPES) data. Also, since GCCS is an aggregated view of the entire theater battle, it tends to lack certain detail that can be found in component data sources such as Theater Battle Management Core System (TBMCS), and the Joint Common Database (JCDB). Implementation of COA analysis at lower echelons may require finer resolution data that can only be found in these component C4ISR systems."


Representing Computer Generated Forces Behaviors Using eXtensible Markup Language (XML) Techniques. Lacy; Dugone (2001)
Type of Research: Private Corporation and Military. Summary: The lack of a standard language and data structure that can transfer data and CGF behaviors between applications is a constant hindrance in the development of CGF applications (see also Realtime Initialization of Planning and Analysis Simulations Based on C4ISR System Data. Furness; Isensee; Fitzpatrick (2002) above). This paper recommends XML as a standard and points out that numerous COTS (Commercial Off The Shelf) XML applications are available.


Moneyball: The Art of Winning an Unfair Game. Michael Lewis New York, W. W. Norton & Company, 2003.
Book, 288 pages. This may be the most important book, and certainly one of the most influential books, that I read this summer. While this book is, on the surface, about Billy Beane, the General Manager of the Oakland A's, and his successful rebuilding of a team on a shoestring budget, it also introduced me to the history of sabermetrics (also sabremetrics and sabrmetrics) that Beane used as his manifesto. Sabermetrics (which derives its name from the Society for American Baseball Research (SABR)) was created by Bill James in the late 1970s (see http://www.baseball1.com/c-sabr.html for a good overview of the subject). James, who to the best of my knowledge was neither a statistician, a mathematician nor a computer programmer, discovered, after poring over thousands of pages of statistical data, an extraordinarily important, simple truth about baseball that had been overlooked for over 150 years: outs are bad for the team at bat. Anything that creates outs is to be eliminated; anything that decreases outs is to be encouraged. From this simple statement an entirely new concept in baseball was created which held that batting averages are irrelevant, RBIs are irrelevant, and the only thing that matters is a player's On Base Percentage (OBP). About 15 years later Voros McCracken, a disciple of James, discovered the corollary for pitching: the only things that are actually under the pitcher's control are strikeouts, walks and home run balls; everything else is luck. Consequently, the Earned Run Average (ERA) is irrelevant. Obviously any science (or cult, for that matter) that has produced thousands of pages of statistics and theory is going to be a bit more complex than my simple summing up in the above paragraph, but I have captured the heart of sabermetrics: outs bad, OBP important, ERA irrelevant. There is also one other important fact that I learned from this book: conventional baseball wisdom was wrong. This was an epiphany for me.
Previously, if I had been hired by a computer game company to write the AI for a baseball simulation, I would have written it using the complicated (and sometimes contradictory) rules of conventional baseball wisdom that I had been taught since my youth. For some time I had suspected that many of these conventional baseball rules were wrong (they had just felt wrong) but I certainly would never have jumped to the conclusions that James had. However, the results of sabermetrics are irrefutable. Now, what intrigues me is this: to write a computer program that crunched all the data that James did by hand is not a big programming accomplishment (indeed, there's a nice little C program at http://www.baseball1.com/bbdata/grabiner/brock2.c that will do that); rather, it was the conclusion that James arrived at that interested me, and I asked myself, "Is it possible to write a computer program that, given the rules of baseball and the same data set that James had, would arrive at a similar solution?" Of course, the next logical step, if such a program were created, would be to ask: could an all-purpose (universal, generic) version of this program be written that would:
1. Load in a (game/simulation) rule set.
2. Load in data files (which could include maps, statistics, etc.).
3. Derive an optimum winning strategy.
I can only assume that this is not an original idea. Internet searches for "optimum strategy reasoning software" return agent-based systems and game theory. Internet searches for "universal [strategy] reasoning software" return case-based reasoning, Forbus's Qualitative Spatial Reasoning, Universal Learning Machines and computational reasoning.
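For the record, the statistic James elevated has a simple closed form. The sketch below uses the standard OBP definition; the season line in the example is invented for illustration, not a real player's:

```python
def on_base_percentage(hits, walks, hbp, at_bats, sac_flies):
    # Standard OBP: times reaching base divided by the plate
    # appearances that count toward the statistic.
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

# An invented season line: 180 hits, 70 walks, 5 hit-by-pitch,
# 550 at-bats, 5 sacrifice flies.
obp = on_base_percentage(hits=180, walks=70, hbp=5, at_bats=550, sac_flies=5)
```

Note what the denominator counts that batting average ignores: walks and hit-by-pitch, the "free" ways of avoiding an out that conventional wisdom undervalued.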


Heterogeneous Agent Operations in JWARS. Burdick; Graf; Huynh; Grell; MacQueen; Argo; Bross; Prince; Johnson; Blacksten (2003)
Type of Research: Private Corporation and Military Summary: This is an overview of how an individual unit in the JWARS wargame is programmed (for lack of a better term). A unit in a JWARS simulation is called a Battle Space Entity (BSE).

Figure 1 (above) shows the general structure of a JWARS Battle Space Entity (BSE), the primary object for all JWARS behavior. For JWARS land forces, agents are configured as military units and civilian groups. These units are not decomposable, but can temporarily spawn subordinate units for specific tasks, if the need arises.


Figure 2 (above) KBs for New Units are Built from Multiple Sources Appropriate for the Particular Unit
Unit Attributes: Coalition/Side; Nationality; Function: Cbt, CS, or CSS; Echelon: Bn, Brigade; Type: Armor, Mech, Inf; Is a Headquarters?; Role: Left Flank, Reserve; Rank or Skill Level.
Unit Situation: Unit is under fire; Days in combat; Unit is in contact; # of enemy in contact; Unit Current Strength; Formation/Orientation; Unit Current Objective; Local Activity/Mission; Unit is On/Off Plan; Has specific asset.
Global Conditions: Is Day/Night; Weapons free/tight; Chemical Use Authorized; Unit is in Country X; Vegetation type; Terrain type; Weather; Civilians are present.

Figure 3 (above): Primitive Knowledge Base Facts. It is interesting to note that only 28 variables are stored for each unit, and they do not include such factors as leadership, morale, attack, defense or speed, which are standard in most commercial wargames.


Primitive Facts -> Related Derived Facts:
- Resources (# of Personnel; Skill Levels; Type of Equipment; Condition/Strength; Amount of Supplies; Expected Resupply) -> Knowledge of Capabilities
- Ability to Sense -> Enemy Situation
- Communications -> Intercept Potential
- Own Operations -> Available Options
- Enemy Operations -> Enemy Intentions
- Own Doctrine -> Expected Enemy Reaction
- Environment: Weather (current) -> Favorable for unit; Weather (forecast) -> Favorable for unit; Terrain -> Favorable for unit

Figure 4 (above): Reasoning from Available Facts. "Derived facts may only have applicability to the units triggering them." These appear to be very simple extrapolations from the individual BSE data available.

Figure 5 (above): JWARS Rule Builder User Interface. This appears to be a simple Boolean logic / conditional statement editor. "Without the KB, there have been instances at the lower echelons in JWARS when combat units that should have engaged one another have missed that opportunity by simply returning fire and continuing on to their objective. Circumstances have also occurred where units without KBs have inappropriately engaged in combat. For example, in some situations a logistics support unit may inappropriately assess the situation and close with an enemy infantry battalion. The KB supports the desired outcome by improving the ability to assess the situation and subsequently altering individual unit courses of action. As shown in Figure 7 (below), attacking combat units sense enemy fire, close with the enemy units, and, after destroying them or forcing them to withdraw, resume their original mission."
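The Rule Builder's condition/action pairs amount to an ordered rule table. Here is a toy sketch of that evaluation loop; the fact names and rules are my own invented examples, echoing Figure 3's primitive facts, and not actual JWARS rules:

```python
def first_matching_action(facts, rules):
    # Evaluate condition/action rules in priority order against a
    # dictionary of unit facts; the first condition that holds wins,
    # mirroring the Boolean expressions the Rule Builder edits.
    for condition, action in rules:
        if condition(facts):
            return action
    return "continue mission"  # default when no KB rule fires

rules = [
    (lambda f: f["under_fire"] and f["function"] == "Cbt", "close with enemy"),
    (lambda f: f["under_fire"] and f["function"] == "CSS", "withdraw and report"),
]

action = first_matching_action({"under_fire": True, "function": "CSS"}, rules)
```

This is exactly the mechanism that lets a logistics unit under fire withdraw instead of closing with an infantry battalion, the failure case the paper describes for units without a KB.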

This diagram, labeled Figure 7 (above), was added to the article to illustrate "Altering Course of Action Using KB Rules"; however, I am extremely dubious about the appropriateness of a unit making an abrupt right-angle turn and exposing its left flank to two hostile units, even if it is now attacking a headquarters unit. Indeed, this appears to demonstrate a very common and very old problem with commercial wargame AIs: units making individual decisions as opposed to an entire army working as a cohesive whole. See my revised diagram below:


I have indicated with thick green arrows (above) the revised Course of Action (COA) that certainly follows current doctrine and would probably be the COA that most (if not all) staff officers would recommend. Conclusion: JWARS is the most important computer simulation currently used by the Joint Chiefs and strategic planners for wargaming. It does not have any artificial intelligence capabilities, and the introduction of Agents and Knowledge Bases appears to be the first attempt to add them. In this case the Agents are used to control subordinate units and not to simulate enemy forces and responses. It appears that the JWARS program has encountered many of the same problems that commercial computer wargames have dealt with for the last twenty years.


The Representation and Storage of Military Knowledge to Support the Automated Production of CGF Command Agents. Allsopp; Harrison; Sheppard (2001)
Type of Research: Military (United Kingdom funded). Summary: This paper describes the procedures used for creating a central repository for the Knowledge Bases used to program Agents in wargames. "To derive a highly autonomous CGF agent, the knowledge engineer has to resort to interviewing a subject matter expert (SME) or to examining source doctrine to capture the skills needed within the program. The process is iterative and can involve several SME interviews or several reviews of source doctrine to clarify issues. The CGF development cycle could be improved by reusing CGF components from previous applications leading to cost and time savings." While I would not have assumed there would be a great deal of expense involved in interviewing SMEs and programming the rules, it is certainly a good idea to create one central set of rules. The architecture for the system now being developed and used in the United Kingdom looks like this:


Variability in Human Behavior Modeling for Military Simulations. Wray; Laird (2003)
Type of Research: Academic, funded by the Office of Naval Research, Virtual Training & Environments Program.
Summary: This article is an overview of the approach that Laird is using to create urban warfare (MOUT) agents (MOUTBots) with his SOAR language. It does not include any specific source code, and many of the ideas mentioned in the article have not yet been implemented. The two basic themes of the article are:
[Figure 1 (diagram): the space of all possible behaviors (given some initial situation) contains a smaller good/correct behavior space; paths for Agent 1, Agent 2, Agent 2 (alternate run), and Agent 3 illustrate across-subject and within-subject variability.]

Figure 1: A notional view of behavior space. As illustrated in Figure 1, within the space of all correct or good behaviors, different [human behavior models] HBMs (or the same HBM at a different time) can follow different paths through the behavior space.


In other words: when properly implemented, two MOUTBots would make different decisions, but both would still fall within the good-decision category.
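The within-subject versus across-subject distinction can be sketched in a few lines: constrain choice to the set of acceptable actions, then vary the selection within it. This is a toy illustration, not Wray and Laird's implementation; the action names, scores, threshold, and the "caution" parameter are all invented.

```python
import random

# Hypothetical action scores: anything above the threshold counts as
# falling inside the "good/correct behavior space".
ACTION_SCORES = {"clear-room": 0.9, "hold-position": 0.8,
                 "flank-left": 0.75, "charge-blindly": 0.2}
GOOD_THRESHOLD = 0.5

def choose_action(rng, caution=1.0):
    # Restrict choice to the good-behavior space, then sample within it.
    good = {a: s for a, s in ACTION_SCORES.items() if s >= GOOD_THRESHOLD}
    # Different 'caution' values model across-subject variability
    # (different agents); different rng seeds model within-subject
    # variability (the same agent on different runs).
    weights = [s ** caution for s in good.values()]
    return rng.choices(list(good), weights=weights, k=1)[0]

agent1 = random.Random(1)
agent2 = random.Random(2)
print(choose_action(agent1, caution=1.0))
print(choose_action(agent2, caution=3.0))
```

Either way the chosen action differs between runs and between agents, yet "charge-blindly" can never be selected: all variation stays inside the good-behavior space.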
[Figure 3 (diagram, "Potential architectural sources of variability"): an Agent Architecture box containing Perceive, Reason, and Act components; Agent Knowledge (situation interpretation, mission, doctrine, command skills, tactics, rules of engagement) feeds the architecture; INPUT arrives from and OUTPUT returns to the Task Environment.]

Figure 3: Variability influences all aspects of HBM behavior generation. [W]e introduce a simple model of behavior generation with perception, reasoning, and action components. Figure 3 provides a notional view of such an agent. Agents receive input from some external task environment. This input must be transformed to a representation that the agent can understand. Internally, the process of perception is mediated by situation interpretation knowledge, allowing the external situation to be interpreted in terms of the current mission, goals, and rules of engagement, etc. Reasoning is the process of generating, evaluating, and selecting goals and actions. Reasoning is influenced by the agent's current situational assessment, its background knowledge, emotional state, and current physical capabilities. The selections of goals lead to actions that are executed in the external environment.

Conclusion: This seems like a logical way of creating agents inside an urban warfare simulation that display certain human characteristics including variability of

decisions. However, this article is completely theoretical and abstract. It appears that so far there has been no attempt to actually create these agents inside a computer program.


An Intelligent Action Algorithm for Virtual Human Agents. Undeger; Isler; Ipekkan (2001)
Type of Research: Academic, probably funded by the military.
Summary: This article presents a method for moving intelligent agents across a 3D landscape towards a common goal.

Part of this problem, of course, includes determining Line of Sight (LOS) for the agents.


The algorithm for individual agent movement is:


Main loop
    For each group
        For each agent
            If the agent is a red team member (intelligent agent)
                Construct the sensor detection list (perceptions) for the agent  > (A)
                Analyse the detected list and update the knowledgebase  > (B)
                Execute behavior module  > (C)
                Update the physical appearance

A
    Backup the previous detected list and create a new list
    For each group
        For each agent (target) except himself
            Calculate the seeing and hearing statistics between the agent and the target  > (D)

B
    For each member of detected list
        If the detection exists in the previous list and unsensed because of the
        probability test and no probability change occurred after that time
            Mark the member of new detected list as unsensed
    For each sensed member of new detected list
        Do a comparison to knowledgebase, find the similarities
        If similarity found
            Update the knowledge and check the member of detected list as similarity_found
    For each member of new detected list which is not checked as similarity_found
        Do a probability test and if it is passed
            Add the list member to the knowledgebase as a new perception
        Else
            Check the member of detected list as unsensed

C
    Find who the commander is
    Find the status of the mission plan
    If the agent is a commander
        If there is no abnormal condition
            Follow the path
        Else
            Execute the reaction planning module
    Else
        If there is no abnormal condition
            If the commander position is known
                Follow the commander
            Else
                Stop and search for the commander
        Else
            Execute the reaction planning module


D
    Compute the statistics between the agent and the target (line of sight, viewing angle, range, etc.)
    If there is any possibility of detection, add the perception to the detected list
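Step D's seeing statistics (line of sight, viewing angle, range) reduce to a simple geometric test. The sketch below is a generic reconstruction, not the authors' code; the range and field-of-view thresholds, and the flat `los_clear` flag standing in for a terrain-occlusion check, are invented.

```python
import math

def detection_possible(agent_pos, agent_facing_deg, target_pos,
                       max_range=300.0, fov_deg=120.0, los_clear=True):
    """Rough analogue of step D: compute range and viewing angle between
    an agent and a target, then decide whether detection is possible."""
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    rng = math.hypot(dx, dy)
    if rng > max_range or not los_clear:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest signed angle between facing direction and bearing to target.
    off_axis = abs((bearing - agent_facing_deg + 180) % 360 - 180)
    return off_axis <= fov_deg / 2

print(detection_possible((0, 0), 0.0, (100, 10)))    # in range, inside FOV
print(detection_possible((0, 0), 180.0, (100, 10)))  # target is behind the agent
```

A probabilistic version would replace the boolean result with a detection probability that falls off with range and off-axis angle, which is closer to the probability test the paper's pseudocode mentions.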

Comments: The authors are using a DTED map file and seem to have duplicated much of the work that my group and I completed in 1995 for The War College (below):


They also use an extremely simplified movement matrix (1 movement point when crossing an embedded-graph cell vertically or horizontally, and 1.4 movement points when crossing a cell diagonally). They do not factor in any terrain costs (woods versus grassland versus swamp, etc.). They do, however, calculate slope and adjust movement accordingly. Their algorithm for coordinated agent movement follows:
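A movement-cost function along these lines (1.0 orthogonal, 1.4 diagonal, adjusted for slope) might look like the following sketch. The elevation values, cell size, and slope-penalty factor are illustrative assumptions, since the paper does not give its slope formula.

```python
import math

# Illustrative DTED-style elevation grid (metres); the values are made up.
ELEV = [[10, 12, 15],
        [11, 14, 18],
        [13, 17, 22]]
CELL_SIZE = 30.0  # metres per cell: an assumption

def move_cost(r0, c0, r1, c1, slope_penalty=5.0):
    """1.0 for orthogonal steps, 1.4 for diagonal steps (as in the paper),
    plus a slope surcharge proportional to rise over run."""
    diagonal = (r0 != r1 and c0 != c1)
    base = 1.4 if diagonal else 1.0
    run = CELL_SIZE * (math.sqrt(2) if diagonal else 1.0)
    slope = abs(ELEV[r1][c1] - ELEV[r0][c0]) / run
    return base * (1.0 + slope_penalty * slope)

print(move_cost(0, 0, 0, 1))  # gentle orthogonal step
print(move_cost(0, 0, 1, 1))  # diagonal step up a slope
```

Terrain-type costs (woods, swamp, and so on) would slot in as one more multiplier per cell, which is exactly the factor the authors omit.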
Description / Pseudo Code:

// Create the world database; create the DTED matrix
cPlatoon * plt;  cAgent * agent;  cGoalPoint * point;  cGoalItem * item;
world->Initialize();
world->Create_World(...);
world->Create_DTED(...);
// Add a group and two agents to the environment
plt = world->Add_Group( GroupRed );
agent = plt->Add_Agent(...);
agent = plt->Add_Agent(...);
// Set group position and formation of the agents
plt->SetGroupPosition( 9163.0, 10259.0 );
plt->Set_Agents_Group_Formation( 8, true );
// Get the starting control point; move fast to the next point
point = plt->plt_goals->GetObjectFromIndex(0);
point->movingstate = MovingFast;
// Goal item 1.1: add a goal item for the point
item = point->Add_GoalItem( giWaitUntilReceiveKeyword );
item->data.ReceiveFromID = 1;
item->SetKeyword( "First Step Go..." );
// Add a control point
point = plt->Add_Plt_GoalPoint( gpPassThrough, 9345.0, 9742.0 );
// Add a tactical control point; wait until group 2 leaves from point 3
point = plt->Add_Plt_GoalPoint( gpTactical, 9220.0, 9595.0 );
// Goal item 3.1
item = point->Add_GoalItem( giWaitUntilContinueMission );
item->data.ReceiveFromID = 2;
item->data.GoalPointID = 3;
// Add another tactical control point; move slow to the next point
point = plt->Add_Plt_GoalPoint( gpTactical, 8945.0, 9270.0 );
point->movingstate = MovingSlow;
// Goal item 4.1: wait until group 1 arrives at point 5
item = point->Add_GoalItem( giWaitUntilArrival );
item->data.ReceiveFromID = 1;
item->data.GoalPointID = 5;
// Goal item 4.2: when ready to continue, send a keyword to group 1
item = point->Add_GoalItem( giReadyToContinueDo );
item->data.doAction = doSendKeyword;
item->data.SendToID = 1;
item->SetKeyword( "Group 3 Ready" );
// Goal item 4.3: wait for a response from group 1 to leave the point
item = point->Add_GoalItem( giWaitUntilResponseKeyword );
item->data.ReceiveFromID = 1;
item->SetKeyword( "Mission Start" );
// Add the target point; put a bomb at the point and move fast after leaving
point = plt->Add_Plt_GoalPoint( gpTarget, 8935.0, 9130.0 );
point->movingstate = MovingFast;
// Goal item 5.1
item = point->Add_GoalItem( giArrivalDo );
item->data.doAction = doPutBomb;
// Goal item 5.2: while leaving the point, send the keyword "mission completed"
item = point->Add_GoalItem( giContinueMissionDo );
item->data.doAction = doSendKeyword;
item->data.SendToID = -1;
item->SetKeyword( "Mission Complete" );
// Add another control point
point = plt->Add_Plt_GoalPoint( gpPassThrough, 9212.0, 9124.0 );
// Home point
point = plt->Add_Plt_GoalPoint( gpHome, 9683.0, 8400.0 );


Terrain Analysis in Realtime Strategy Games. Pottinger (2001?)


Type of Research: Commercial computer game development.
Summary: Pottinger, the technical director for Ensemble Studios (the creators of Age of Empires (AOE) and Age of Empires 2 (AOE2)), describes the methods used in the games to analyze and store homogeneous terrain types. AOE, AOE2, and most other Real Time Strategy (RTS) games must support randomly generated maps. Consequently, after the random map is generated, but before the game actually starts, the map must quickly be analyzed and the results stored for future use.

Figure 3: Influence map around gold mines.

Figure 5: Terrain from Figure 4 mapped into subareas.

The technique employed by Pottinger involves mapping influence (or what we used to call "spheres of influence") around key areas. Furthermore, Pottinger divides the map into subareas (see Figure 5 above). These areas are determined by a flood fill algorithm (old but effective) with some modifications to keep from overrunning the stack. The 3D maps are created using the proprietary BANG! 3D engine, which allows for multiple layers of influence mapping and storage. It is interesting to note that Pottinger writes, "Pathfinding is one of the slowest things most RTS games do (e.g. Age of Empires 2 spends roughly 60 to 70% of simulation time doing pathfinding)." Having played AOE and AOE2, and being aware of the fairly small size of the maps (usually less than 100 x 100 tiles), I would


have thought it would have been possible to pre-calculate paths and store them, leaving the CPU time for other things. Pottinger concludes with this advice:
Useful Tidbits:
- Terrain analysis doesn't need to be exact. If you try to make it perfect, you'll either end up spending a lot of time tweaking heuristics or you'll have a lot of extra areas that really don't buy you anything.
- Abstract the area representation (tile vs. polygonal) away from the users (e.g. the CP AI) as much as possible. This will allow you to change it to fix bugs or upgrade as needed.
- Build it to support dynamic terrain. Even if you don't have dynamic terrain in your shipping game, this helps debugging and testing.
- Write all of the processing so that it is time-sliced. Even if you don't use it right away, someone will inevitably code (or ask you to code) a feature that blows out your carefully constructed time budgets. If you have time slicing built into the system from the start, that's not a problem. This also lets you run the terrain analysis during the game much more easily and effectively.
- Build area specification tools into the scenario editor. Use these to simplify the processing. There's no reason why a scenario author can't put hints for the terrain analysis into the scenario. This also lets the scenario author tweak how the CP AI thinks about the map.
- Since the random map generator already has to manage areas and connectivity to generate a decent map, it should just pass that data to the terrain analysis. If the random map also has a general shape (e.g. AOE2's Mediterranean map), that information can also simplify the terrain analysis task.
- Don't use tiles as a measure of distance. Be trendy and use meters as the unit of measure so that you can change the tile size without having to rebalance your game data and your AI heuristics.
- Pattern recognition is a hit-or-miss proposition. If scenario authors do a little work and the basic properties of the game's random maps are known, real pattern recognition may not be needed to do effective terrain analysis. If real pattern recognition is needed, though, a couple of easy options present themselves. Since the tile subareas have too much data and convex areas have unknown holes inside the areas, convex hulls of the tile subareas could be used instead. The data reduction would make recognition easier and the smaller areas would tend to have fewer holes. Running pattern matching on the mip-mapped version of the map is also another option.
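The subarea pass Pottinger describes, a flood fill guarded against overrunning the stack, can be sketched with an explicit stack instead of recursion. The grid contents below are invented, and this is a generic sketch, not Ensemble's BANG!-engine code.

```python
# Terrain grid of passability classes; 0 = land, 1 = water (illustrative).
GRID = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [1, 1, 1, 1]]

def label_subareas(grid):
    """Flood-fill each maximal region of identical terrain into a numbered
    subarea, using an explicit stack so deep regions cannot blow the
    call stack (the stack-overrun precaution Pottinger mentions)."""
    rows, cols = len(grid), len(grid[0])
    labels = [[None] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            terrain = grid[r][c]
            stack = [(r, c)]
            labels[r][c] = next_label
            while stack:
                cr, cc = stack.pop()
                for nr, nc in ((cr-1, cc), (cr+1, cc), (cr, cc-1), (cr, cc+1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and labels[nr][nc] is None
                            and grid[nr][nc] == terrain):
                        labels[nr][nc] = next_label
                        stack.append((nr, nc))
            next_label += 1
    return labels, next_label

labels, count = label_subareas(GRID)
print(count)  # 2: one land subarea, one water subarea
```

The resulting label grid is exactly the kind of precomputed structure the CP AI can query at run time without re-scanning tiles.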


An Exploration into Computer Games and Computer Generated Forces. Laird (2001)
Type of Research: Academic, funded in part by grant N61339-99-C-0104 from ONR and NAWCTSD.
Summary: Laird writes, "When CGFs are used for training (and other purposes), the primary goal is to replicate the behavior of a human; that is the behavior should be realistic. Without realistic, human-like behavior, the danger is that human trainees interacting with the CGFs will have negative training." In other words, he does not want AI that is superior to humans or that behaves in a non-human manner. Laird's group, which created the SOAR language, also created the Quakebot, an agent that plays in Quake II tournaments.
Below is a list of the main tactics the Quakebot uses. These are implemented across the top-level operators.
Collect-powerups
- Pick up items based on their spawn locations
- Pick up weapons based on their quality
- Abandon collecting items that are missing
- Remember when missing items will respawn
- Use shortest paths to get objects
- Get health and armor if low on them
- Pick up other good weapons/ammo if close by
Attack
- Use circle-strafe (walk sidewise while shooting)
- Move to best distance for current weapon
Retreat
- Run away if low on health or outmatched by the enemy's weapon
Chase
- Go after enemy based on sound of running
- Go where enemy was last seen
Ambush
- Wait in a corner of a room that can't be seen by enemy coming into the room
Hunt
- Go to nearest spawn room after killing enemy
- Go to rooms enemy is often seen in
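The tactic list above amounts to a prioritized selection among top-level operators, which can be caricatured in a few lines. The predicates and thresholds below are invented; they are not the Quakebot's actual Soar rules.

```python
# A minimal sketch of priority-driven top-level operator selection in the
# spirit of the Quakebot's tactics; all numbers and state keys are invented.
def select_operator(state):
    if state["health"] < 30:                            # Retreat trumps all
        return "retreat"
    if state["enemy_visible"]:
        return "attack"
    if state["enemy_heard"] or state["enemy_last_seen"]:
        return "chase"
    if state["health"] < 70 or state["ammo_low"]:
        return "collect-powerups"
    return "wander"

state = {"health": 80, "enemy_visible": False, "enemy_heard": True,
         "enemy_last_seen": None, "ammo_low": False}
print(select_operator(state))  # chase: the enemy was heard but not seen
```

In Soar the equivalent selection is done by proposing operators in parallel and letting preference rules arbitrate, rather than by a fixed if/else cascade, but the observable priority ordering is similar.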

A flow chart of the Quakebot's AI is below:


[Flow chart of the Quakebot's operator hierarchy; node labels include attack, wander, collect-powerups, explore, get-item, goto-item, goto-next-room, face-item, move-to-item, stop, and notice-item-missing.]
The Quakebot is programmed by rules. An example is:


If predict-enemy is the current operator and there is an enemy with health 100, using the blaster, in room #11, and I am distance 2 from room #3, then predict that the enemy will go to room #3 through door #7.

Inherent to Soar is a learning mechanism, called chunking, that automatically creates rules that summarize the processing within impasses. Chunking creates rules that test the aspects of the situation that were relevant during the generation of a result. The action of the chunk creates the result. Chunking can speed up problem solving by compiling complex reasoning into a single rule that bypasses the problem solving in the future. Chunking is not used with the standard Quakebot because there is little internal reasoning to compile out; however, with anticipation, there can be a long chain of internal reasoning that takes significant time (a few seconds) for the Quakebot to generate. In that case, chunking is perfect for learning rules that eliminate the need for the Quakebot to regenerate the same prediction. The learned rules are specific to the exact rooms, but that is appropriate because the predictions are only valid under special circumstances.

Comments: Laird is well-known for SOAR (http://www.eecs.umich.edu/~soar/), his presentations at the Computer Game Developers Conference (CGDC), his military research (most notably for the Air Force) and the Quakebot. Laird has consistently championed the importance of the commercial gaming industry, and its developments, to the military establishment.
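A loose software analogue of chunking is memoization: cache the result of a long chain of reasoning so that an identical situation never triggers the reasoning again. This is not Soar itself, and the prediction logic below is invented, echoing the example rule above only in spirit.

```python
import functools

# Caching the prediction plays the role of the learned chunk: the first
# query runs the (stand-in) reasoning, later identical queries fire the
# cached "rule" instead.
@functools.lru_cache(maxsize=None)
def predict_enemy_room(enemy_room, enemy_weapon, my_distance):
    # Stand-in for seconds of internal look-ahead; the logic is invented.
    return "room-3" if my_distance <= 2 else enemy_room

print(predict_enemy_room("room-11", "blaster", 2))   # room-3
print(predict_enemy_room.cache_info().misses)        # 1: reasoning ran once
predict_enemy_room("room-11", "blaster", 2)
print(predict_enemy_room.cache_info().hits)          # 1: cached "chunk" fired
```

Like Soar's learned chunks, the cache entries are specific to the exact arguments, which is appropriate when the prediction is only valid under those circumstances.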


On a personal note, I can remember a very abortive attempt by a group from UC Davis to use SOAR in conjunction with the CyberWar XXI game that I consulted on. At the time I thought the problem was with SOAR itself. However, the problem may have been with the programmers instead. To the best of my knowledge, SOAR has been used to program only individual agents; it has not been used as an AI controller of armies, corps or divisions, nor to perform any sort of strategic reasoning.


A Modified Naive Bayes Approach for Autonomous Learning in an Intelligent CGF. Chia; Williams (2003)
Type of Research: Academic (University of Central Florida).
Summary: This paper shows the results of a series of experiments with TankSoar (a simple tank game using the SOAR language) and knowledge acquisition (KA) learning.

Above: a screen capture of TankSoar. The game is played on a 14 x 14 grid and the rules are extremely simple by wargame standards. A run is between two tanks with different AIs: one is pre-programmed, the other employs a learning routine; one is aggressive, the other non-aggressive. A flowchart for the learning procedure is below:


The results of the experiments were not what was expected. The learning AI took longer and longer to win, and the lessons learned were probably not desirable in the real world. See below:


Indeed, the experiment produced only six learned behaviors (see table above). The four learned behaviors for the non-aggressive tanks were to retreat. Unfortunately, the retreat was simply backing up, so the tank was still hit by the incoming missile. The two learned behaviors for the aggressive tanks were, in effect, if you see something on your radar, attack it.

Comments: While KA is almost certainly an important tool in creating CGF AI, this experiment, in my opinion, is almost worthless. First, one of the goals of the authors is: "To implement a truly autonomous agent, the agent on its own must encode the appropriate attributes which are relevant to the execution of behaviors. This remains a challenge for the simulation of human behavior and for furthering an understanding of learning." However, human behavior, especially within a military context, is not simply aggressive or non-aggressive. General S. L. A. Marshall did a great deal of research into this very subject, which was published in his classic work, Men Against Fire. Marshall discovered that less than 25% of combatants in World War II could be classified as "self starters" (he believed the actual number was probably closer to 5%). The actions of these self starters determined the actions of the rest of the troops in their immediate vicinity. If they moved forward, the rest of the platoon moved forward. If they were killed or incapacitated, the rest of the platoon ground to a halt. Marshall's work greatly influenced the equations used in my wargames to determine when and why a unit retreated. Second, the game TankSoar is far too simple to have any military value whatsoever. Indeed, while the experimenters use terms like tank, missile and radar, the pieces could just as well have been pawns. The tanks did not have any of the characteristics of tanks, and the battlefield bore no resemblance to real terrain whatsoever.
Lastly, while the tanks did, indeed, learn some lessons, they were either not the lessons that you would have wanted them to learn in real life (in the words of Monty Python, "run away!") or so absurdly obvious (go towards enemy units) that any CGF AI should have been programmed to do this in the first place. While the authors claim the experiment was a success ("The proposed approach was able to allow Soar CGFs to learn autonomously over successive trials and to

adaptively add and drop rules according to their experience with little human involvement. These adaptations were achieved efficiently at a very fast learning rate."), any real CGF AI using these methods is considerably further down this research road.
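Setting the critique aside, the underlying technique can be illustrated. A minimal naive-Bayes behavior selector with Laplace smoothing is sketched below; the feature names and counting scheme are invented and far simpler than the paper's modified approach.

```python
from collections import defaultdict

class NaiveBayesBehavior:
    """Toy naive-Bayes behaviour selector: the agent counts
    feature/outcome co-occurrences over its trials and picks the action
    with the higher estimated probability of a winning outcome."""
    def __init__(self):
        self.outcome_counts = defaultdict(int)   # (action, won) -> count
        self.feature_counts = defaultdict(int)   # (feature, (action, won)) -> count

    def observe(self, features, action, won):
        label = (action, won)
        self.outcome_counts[label] += 1
        for f in features:
            self.feature_counts[(f, label)] += 1

    def score(self, features, action):
        label = (action, True)
        total = sum(self.outcome_counts.values()) or 1
        p = self.outcome_counts[label] / total
        for f in features:
            # Laplace smoothing so unseen features don't zero the product.
            p *= ((self.feature_counts[(f, label)] + 1)
                  / (self.outcome_counts[label] + 2))
        return p

    def choose(self, features, actions=("attack", "retreat")):
        return max(actions, key=lambda a: self.score(features, a))

nb = NaiveBayesBehavior()
nb.observe({"enemy-on-radar"}, "attack", True)
nb.observe({"enemy-on-radar"}, "attack", True)
nb.observe({"enemy-on-radar"}, "retreat", False)
print(nb.choose({"enemy-on-radar"}))  # attack
```

Even this toy version exhibits the paper's core failure mode: the quality of the learned behavior depends entirely on which attributes the agent is counting, which is exactly the encoding problem the authors acknowledge.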


Modeling Cooperative, Reactive Behaviors on the Battlefield with Intelligent Agents. Ioerger, Volz; Yen (2001) A Distributed Intelligent Agent Architecture for Simulating Aggregate-Level Behavior and Interactions on the Battlefield. Zhang, Biggers, He, Reddy, Sepulvado, Yen, Ioerger (2001)
Type of Research: Academic (Texas A & M) in cooperation with University of Texas (Austin) and Ft. Hood; DoD funded.
Terms and abbreviations used in these papers:
TOC = tactical operations center
OTB = OneSAF Testbed
SAF = Semi-Automated Forces
OPFOR = opposing forces
S2 = intelligence officer
S3 = maneuver officer
FRAGOs = Fragmentary Orders
TRL = Task Representation Language
TAIs = targeted areas of interest
RFIs = requests for information or status reports
Jess = Java Expert System Shell
PDU = Protocol Data Unit
Puckster = a human who enters data by hand

Summary: These two papers describe their work on the same project: In this paper, we describe our initial work on the University XXI project, in cooperation with Ft. Hood and sponsored through the Department of Defense. In the preliminary phase of this project, we are tasked with developing an intelligentagent architecture that can make intelligent decisions for units on the battlefield - in particular, battalion TOCs - such that the battalions carry out orders, cooperate with each other and brigade, and sufficiently react to situations that deviate from expectations. The system is built on top of OneSAF Testbed (OTB), of which we can monitor the state and issue commands to control entities. On the other end, the system provides an interface for brigade staff officers to interact, including


communicating with simulated battalions, requesting status reports, issuing commands (e.g. FRAGOs), responding to battalion requests for information, support, or resources, etc. (This is only part of a digital battle-staff trainer.) In the University XXI project at Texas A&M University, we are developing a digital battle-staff trainer (DBST) system. For implementing the DBST, a multi-agent architecture called TaskableAgents has been designed for simulating the internal operations, decision-making, and interactions of battalion TOC staffs. The core of the TaskableAgents Architecture is a knowledge representation language called TRL. The application of knowledge in TRL to select actions for an agent is carried out by an algorithm called APTE. By communicating and sharing information based on reasoning about each other's roles, the TaskableAgents Architecture allows multiple agents to work together as teams to accomplish collective goals.
[Figure 2 (diagram): a five-level task-decomposition tree; task nodes (T1, T2, T3, T4, T5, T15, T18, T40, T45) alternate with method nodes (M1, M7, M12, M60, M92) across levels 1-5.]

Figure 2. Task-decomposition tree. The levels alternate among task nodes and method nodes, with methods expanding into process networks (levels 3 and 5) that contain sub-tasks or primitive operations, and so on, producing the hierarchical structure. The arrows within a process net depict the execution of the net. For example, at level 3, the arrows from T2 to T3 and T4 indicate a parallel structure that merges again at T5. The dotted arrows to the methods indicate the methods selected to implement the task. Each task points to a unique selected method. The pointers from methods into the process nets keep track of which step (task node) they are at in the process, and these pointers get updated by the appropriate Step algorithm described below.

The system itself consists of an intelligent agent (one agent for each battalion) that makes decisions about what to do and how to communicate in the context of a battle. The agent uses a large knowledge base of procedural knowledge about how to carry out various functions within a TOC, gathered from manuals (TTP, doctrine) and from interviewing military experts. This knowledge is encoded in a special knowledge representation language we have devised, called TRL (Task Representation Language). TRL has:
- Goals are conditions to be achieved, such as enemy-defeated or brigade-informed.
- Tasks are higher-level actions to be done, such as surround-enemy or track-enemy.
- Methods are descriptions of specific procedures that can be used to accomplish tasks, which can refer to sub-tasks or operators.
- Operators are primitive actions (from the perspective of TRL), such as move or fire, which can be directly executed in the environment, along with communications (RFIs, status reports, call for fire support...) with brigade and other units.

Goals are described by giving a condition to achieve, and specifying tasks that can be used to achieve those conditions:

<GOAL> ::= (:GOAL <name> (<variables>*)
             (:COND <condition>)
             (:TASK <name> <value>*) )

Syntactically, a task contains the keyword :TASK, the task name, some arguments, any termination conditions (termination has the interpretation of failure, by default, though successful terminations could easily be added), and then the method specification. Aside from naming and providing input arguments to methods, they may also be assigned priorities or preference conditions.

<TASK> ::= (:TASK <name> (<variables>*)
             (:TERM-COND <condition>)
             [ (:METHOD (<name> <value>*)
                 [ (:PRIORITY <int>) | (:PREF-COND <condition>) ]+ ) ] )

Here is an example of a task description:

(:Task attack-enemy (?company-id ?enemy-id)
  (:Term-cond (enemy-destroyed ?enemy-id))
  (:Method (call-for-indirect-fire


             ?company-id ?enemy-id)
    (:Pref-cond (have-priority-of-fire ?company-id))))

The syntax for methods is:

<METHOD> ::= (:METHOD <name> (<variables>*)
               [ (:PRE-COND <condition>) ]
               [ (:TERM-COND <condition>) ]
               (:PROCESS <process>) )
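The goal/task/method/operator layering of TRL can be mimicked with ordinary data structures. The sketch below is a toy rendering of the attack-enemy example, not the authors' APTE algorithm, and every identifier in it is an invented stand-in.

```python
# Toy TRL-style knowledge: operators are primitive actions, methods wrap
# operator sequences behind preconditions, tasks choose among methods.
OPERATORS = {
    "call-for-indirect-fire": lambda state: state["fires"].append("mission"),
}

METHODS = {
    "call-for-indirect-fire-method": {
        "pre_cond": lambda s: s["have_priority_of_fire"],
        "process": ["call-for-indirect-fire"],
    },
}

TASKS = {
    "attack-enemy": {
        "term_cond": lambda s: s["enemy_destroyed"],
        "methods": ["call-for-indirect-fire-method"],
    },
}

def step_task(task_name, state):
    """One simplified step: stop if the termination condition holds,
    otherwise run the first method whose precondition is satisfied."""
    task = TASKS[task_name]
    if task["term_cond"](state):
        return "terminated"
    for m in task["methods"]:
        method = METHODS[m]
        if method["pre_cond"](state):
            for op in method["process"]:
                OPERATORS[op](state)
            return "executed " + m
    return "no applicable method"

state = {"enemy_destroyed": False, "have_priority_of_fire": True, "fires": []}
print(step_task("attack-enemy", state))  # executed call-for-indirect-fire-method
print(state["fires"])                    # ['mission']
```

The real architecture adds priorities and preference conditions for arbitrating among several applicable methods, and the term-cond is interpreted as failure rather than success; both are omitted here for brevity.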

[System architecture diagram: brigade staff trainees interact through a graphical interface; assertions and queries flow between the Intelligent Agent (task decomposition trees and algorithms), its procedural knowledge in TRL, and Jess with its domain knowledge base (via variable bindings); entity states are updated from OTB PDUs, with a puckster keying commands into OTB.]

Notes: However, because of limitations of time, we have relied on a human (puckster) to key in commands to OTB (OneSAF Testbed) via a console, based on decisions made by our agent.


Modeling Adaptive, Asymmetric Behaviors. Bloom (2003)


Type of Research: Private corporate (AT&T Government Solutions, Inc.).
Summary: This paper is a theoretical overview of how to model MOOTW (Military Operations Other Than War) operations. These procedures have not yet been implemented. The author suggests using scripts (see below):
Entry Conditions: Conditions that must be satisfied before the events in the script can occur.
Result: Conditions that will be true after the script executes.
Props: Objects that are involved in the execution of the script. The presence of these objects can be inferred even if they are not mentioned explicitly.
Roles: Slots for people who are involved in the events of the script.
Track: A variation of a more general pattern that is represented by the script.
Scenes: The actual sequence of events that occur.
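The script slots above translate directly into a record type. The sketch below is one possible encoding; the checkpoint scenario filling it in is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Script:
    # Slots taken directly from the table above; the contents assigned
    # below (the checkpoint scenario) are invented.
    entry_conditions: list
    result: list
    props: list
    roles: list
    track: str
    scenes: list

def can_execute(script, world_facts):
    # A script may fire only when every entry condition is satisfied.
    return all(cond in world_facts for cond in script.entry_conditions)

checkpoint = Script(
    entry_conditions=["crowd approaching checkpoint"],
    result=["crowd dispersed or detained"],
    props=["barrier", "signage"],
    roles=["squad leader", "interpreter", "crowd"],
    track="non-violent variant of a crowd-control script",
    scenes=["warn", "slow traffic", "search", "pass or detain"],
)

print(can_execute(checkpoint, {"crowd approaching checkpoint"}))  # True
```

Executing the scenes in order and asserting the result conditions afterwards would complete the scheme, but, as the author concedes, none of this has actually been built.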

Also, the author suggests SEs (Synthetic Environments), three-layer AI, and AI personalities. Without any actual work on implementation, this is all very much blue-skying and wishful thinking.


Developing an Artificial Intelligence Engine. van Lent; Laird (1999)


Type of Research: Academic (University of Michigan) with DARPA funding.
Summary: This paper describes Laird's and his students' efforts in the Soar/Games project to create bots for Quake and Descent. Their system uses an Inference Machine: The job of the inference machine is to apply knowledge from the knowledge base to the current situation to decide on internal and external actions. The agent's current situation is represented by data structures representing the results of simulated sensors implemented in the interface and contextual information stored in the inference machine's internal memory. The inference machine must select and execute the knowledge relevant to the current situation. This knowledge specifies external actions, the agent's moves in the game, and internal actions, changes to the inference machine's internal memory, for the machine to perform. The inference machine constantly cycles through a perceive, think, act loop, which is called the decision cycle.
1. Perceive: Accept sensor information from the game
2. Think: Select and execute relevant knowledge
3. Act: Execute actions in the game
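The decision cycle reduces to a loop skeleton. The rule format and stubbed game interface below are invented simplifications of the Soar/Games design, not their code.

```python
# Perceive/think/act decision cycle, reduced to a sketch: the knowledge
# base is a list of (condition, actions) pairs; the game is a plain list.
class InferenceMachine:
    def __init__(self, rules):
        self.memory = {}    # internal memory (contextual information)
        self.rules = rules  # knowledge base

    def perceive(self, sensors):
        self.memory.update(sensors)             # 1. accept sensor info

    def think(self):
        acts = []
        for condition, actions in self.rules:   # 2. select relevant knowledge
            if condition(self.memory):
                acts.extend(actions)
        return acts

    def act(self, actions, game):
        for a in actions:                       # 3. execute in the game
            game.append(a)

rules = [(lambda m: m.get("enemy_visible"), ["turn", "fire"])]
machine, game_log = InferenceMachine(rules), []
machine.perceive({"enemy_visible": True})
machine.act(machine.think(), game_log)
print(game_log)  # ['turn', 'fire']
```

A real bot runs this cycle continuously against the game's network interface, which, as noted below, is the genuinely difficult piece of engineering.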


Comments: The QuakeBot is, frankly, fairly obvious in its design. The interface with Quake (which is not mentioned except to credit Steve Houchard) seems like a more difficult piece of coding.


Agent Based Toolkit for Intelligent Model Development. Napierski; Aykroyd; Jacobs; White; Harper (2003)
Type of Research: Private corporation (Charles River Analytics).
Summary: CRA creates large-scale, multi-agent systems for the Air Force. In this paper, we present the latest implementation of our developing SAMPLE agent architecture, within the context of our advanced GRADE agent toolkit. Below is a screen shot of the GRADE toolkit:

Each GRADE component provides a specialized graphical user interface (GUI) for the construction and testing of the corresponding SAMPLE component. For example, our expert system component allows the user to define the antecedents and consequents of each rule, and then test the behavior of the rule on user defined inputs. Within GRADE, creating a communication link between two components is done by dragging a linking tool from one component to another. The developer must then ensure that the components are capable of processing the content they receive by defining an XSL transformation that will reformat and filter the outgoing message data of the transmitting component to conform to the schema of the receiving component.
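The link-plus-transformation idea can be sketched without XML: a link between two components carries a function that reshapes the sender's output to match the receiver's expected schema. The message fields and component class below are invented, and a Python callable stands in for the XSL transformation GRADE actually uses.

```python
# Sketch of GRADE-style component linking: the link owns a transform that
# reformats and filters the sender's message for the receiver's schema.
def make_link(transform):
    def deliver(message, receiver):
        receiver.receive(transform(message))
    return deliver

class RuleComponent:
    def __init__(self):
        self.inbox = []
    def receive(self, msg):
        self.inbox.append(msg)

# Sender emits {"tgt", "rng_km"}; this receiver expects {"target",
# "range_m"} -- the transform bridges the two schemas.
link = make_link(lambda m: {"target": m["tgt"], "range_m": m["rng_km"] * 1000})
expert_system = RuleComponent()
link({"tgt": "radar-site", "rng_km": 12}, expert_system)
print(expert_system.inbox[0])  # {'target': 'radar-site', 'range_m': 12000}
```

Keeping the transformation on the link, rather than inside either component, is what lets GRADE components be rewired graphically without rewriting them.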


CIANC3: An Agent-Based Intelligent Interface for Future Combat Systems Command and Control. Wood; Zaientz; Beard; Frederiksen; Huber (2003)
Type of Research: Private corporation (Soar Technology, Inc.), a spin-off from Laird's University of Michigan work; funded by the Army Research Institute.
Terms and abbreviations used in this paper:
CIANC3 = Cooperative Interface Agents for Networked Command, Control and Communications
FCS = Future Combat Systems
BLOS = Beyond-Line-Of-Sight
COP = Common Operational Picture
SALUTE reports = Size, Activity, Location, Unit, Time, and Equipment of an observed enemy
OCU = Operator Control Unit
MMBL = Mounted Maneuver Battle Lab
UAMBL = Unit of Action Maneuver Battle Lab
ACL = Agent Communication Language
Summary: The vision of the Army's FCS includes the use of mixed teams of human and robotic forces on a dynamic and rapidly changing battlefield. The focus of this project has been to identify the human-interface issues, design potential solutions and create intelligent agent software that support the commander... To accomplish this task we have implemented an agent architecture based on decomposing the command and control problem into three main task areas: Monitoring, Coordinating and Tasking. Interface agents are a specific form of software designed to reduce the complexity of human-system interaction. These Interface Agents are based on the roles found in current command staffs. Command staffs commonly provide five basic functions to commanders in support of reconnaissance, security, offensive, and defensive operations (cf. Army FM 17-95):
- Provide timely and accurate information.
- Anticipate requirements and prepare estimates.

The Current State of Human-Level Artificial Intelligence in Computer Simulations and Wargames. Page 75 of 116

Determine courses of action and make recommendations. Prepare plans and orders. Supervise execution of decisions. In CIANC3 these functions are divided between three classes of agents: tasking, monitoring, and coordinating. This division aligns the agents with the three C3 concepts of command, control, and communication respectively. The goal is to provide a virtual command staff for echelons that do not current have that support.

Tasking agents will be used to assist commanders and controllers to rapidly issue battlefield commands. Coordinating agents are responsible for facilitating communication and coordination across and within echelons within the command hierarchy.


Monitoring agents are responsible for assisting the commander in maintaining an accurate awareness of the current situation (situational awareness) at all times. The current CIANC3 system integrates Soar-based interface agents into a combined simulation and operational environment for robotic control. The agents communicate using the FIPA protocol, and a user interface to the agents was created using Tcl.

Comments: This (FCS) is an extremely ambitious project. The DoD has been interested in utilizing more unmanned or robotic systems for a number of years. We have recently seen the RQ-1 Predator Unmanned Aerial Vehicle used in Afghanistan and Iraq. The Predator, however, is not an autonomous robot or Intelligent Agent. Rather, it is a system.... (with) a ground control station (GCS), a Predator Primary Satellite Link (PPSL), and 55 personnel for continuous 24 hour operations. The CIANC3 system, however, envisions a very active system of agents that more or less seem to replace (or, perhaps, supplement) a command staff.
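A skeletal sketch of the Monitoring/Coordinating/Tasking decomposition described in the summary (a rough illustration only: the class and method names are mine, and CIANC3's actual agents are implemented in Soar):

```python
class TaskingAgent:
    """Command: helps the commander rapidly issue battlefield orders."""
    def issue(self, order, unit):
        return f"{unit}: {order}"

class CoordinatingAgent:
    """Communication: relays messages across and within echelons."""
    def __init__(self):
        self.log = []
    def relay(self, sender, receiver, message):
        self.log.append((sender, receiver, message))

class MonitoringAgent:
    """Control: maintains the commander's situational awareness."""
    def __init__(self):
        self.picture = {}
    def report(self, unit, status):
        self.picture[unit] = status   # e.g. a SALUTE-style observation
```

In CIANC3 the three roles would communicate through an agent communication language rather than direct method calls; the point of the sketch is only the three-way division of staff functions.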


Game AI: The State of the Industry, Part One. Woodcock (2000) Game AI: The State Of The Industry, Part Two. Pottinger; Laird (2000) Computer Player AI: The New Age. Pottinger (2003) in July 2003 Game Developer magazine. Not yet available in electronic form.
Type of Research: Gamasutra / Computer Game Developer Magazine / Academic. Summary: Woodcock's Game AI: State of the Industry article (Gamasutra) is an overview of techniques and trends in the computer game industry.

Chart (left) shows that more projects now include a full-time AI developer. The Sims (right) are an example of A-Life. Among the trends that Woodcock discusses are:

An increase in dedicated AI coders on commercial game projects (see chart above).

A-Life. The power of A-Life techniques stems from its roots in the study of real-world living organisms. A-Life seeks to emulate that behavior through a variety of methods that can use hard-coded rules, genetic algorithms, flocking algorithms, and so on.
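As a minimal illustration of the flocking idea mentioned above, here is a one-dimensional, two-rule "boids"-style update (the weights and the 1-D setup are my own, not taken from any shipped game):

```python
def flock_step(positions, velocities, dt=0.1, cohesion=0.05, separation=0.5):
    """One update of a minimal 1-D boids-style flock: each agent steers
    toward the group centroid (cohesion) and away from very close
    neighbours (separation). All weights are illustrative only."""
    n = len(positions)
    centre = sum(positions) / n
    new_positions, new_velocities = [], []
    for i, (p, v) in enumerate(zip(positions, velocities)):
        v = v + cohesion * (centre - p)              # steer toward the flock
        for j, q in enumerate(positions):
            if j != i and abs(p - q) < 1.0:          # too close: push apart
                v = v + separation * (p - q)
        new_velocities.append(v)
        new_positions.append(p + v * dt)
    return new_positions, new_velocities
```

Run repeatedly, agents drift together until the separation rule balances the cohesion rule; the emergent grouping, rather than any scripted formation, is what gives A-Life its appeal.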


Formations, both military and sports. According to Woodcock, developers are using finite and fuzzy state machines to implement formations, as well as hierarchical AI: Interplay's Starfleet Command and Red Storm's Force 21 take such an approach, using higher-level strategic "admirals" or "generals" to issue general movement and attack orders to tactical groups of units under their command.

Visibility Graphs. They work as follows: Assume you are looking down at a map that has a hill in the center and a pasture with clumps of trees all around it. Let appropriately shaped polygons represent the hill and the trees. The visibility graph for this scene uses the vertices of the polygons for the vertices in the graph, and builds the edges of the graph between the vertices wherever there is a clear (unobstructed) path between the corresponding polygon vertices. The weight of each connecting line equals the distance between the two corresponding polygon vertices. This gives you a simplified map against which you can run a pathfinding algorithm to traverse the map while avoiding the obstacles.

In part two, Pottinger (who is closely involved with Ensemble Studios) discusses many of the same trends as Woodcock in part one, with a special emphasis on hierarchical AI. The hierarchical AI being discussed appears to be a much simpler (two- or three-level) version of the four-level AI I designed for UMS II: Nations at War (1992).

John Laird (of the University of Michigan) writes in Bridging the Gap Between Developers & Researchers: When game developers look at AI research, they find little work on the problems that interest them, such as nontrivial pathfinding, simple resource management and strategic decision-making, bot control, behavior-scripting languages, and variable levels of skill and personality -- all using minimal processing and memory resources. Game developers are looking for example "gems": AI code that they can use or adapt to their specific problems. Unfortunately, most AI research systems are big hunks of code that require a significant investment of time to understand and use effectively. Laird concludes, Although there is currently a significant gap between game developers and AI researchers, that gap is starting to close. The inevitable march of Moore's law is starting to free up significant processing power for AI, especially with the advent of graphics cards that move the graphics processing off the CPU. The added CPU power will make more complex game AI possible. A second, equally powerful force that is closing the gap is sociological. Students who grew up loving computer games are getting advanced degrees in AI. This has the dual effect of bringing game research to universities and university research to game companies -- already there are at least five AI Ph.D.s at game companies.

Pottinger's 2003 article in Game Developer explains in great detail the methods behind the AI in Ensemble's Age of Mythology. The system is based on a Knowledge Base (KB). The KB keeps track of all enemy units encountered as well as enemy buildings and structures (a key part of the game). Above the KB are AI Plans: complex state machines that know how to do tasks such as building structures, attacking a variety of target types, gathering and the like. Above the AI Plans are AI Goals: super-high-level constructs such as "Attack Player 4" or "Build a forward base in Player 3's direction".
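The KB / AI Plans / AI Goals layering Pottinger describes might be sketched like this (purely illustrative Python; these are not Ensemble's actual structures or names):

```python
class KnowledgeBase:
    """Tracks every enemy unit and building encountered so far."""
    def __init__(self):
        self.sightings = {}
    def record(self, entity, location):
        self.sightings[entity] = location

class Plan:
    """A state machine that knows how to do one task (build, attack, gather).
    Here the 'states' are just a list of named steps."""
    def __init__(self, name, steps):
        self.name, self.steps, self.current = name, steps, 0
    def tick(self):
        if self.current < len(self.steps):
            self.current += 1
        return self.current >= len(self.steps)   # True once the plan is done

class Goal:
    """A super-high-level construct ('Attack Player 4') that owns plans."""
    def __init__(self, name, plans):
        self.name, self.plans = name, plans
    def tick(self):
        return all(plan.tick() for plan in self.plans)
```

A goal such as "Attack Player 4" would own several plans, each ticked every update and each free to consult the knowledge base for targets.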


Sketching for Military Courses of Action Diagrams. Forbus; Usher; Chapman (2003)
Type of Research: Academic funded by the DARPA Command Post of the Future and Rapid Knowledge Formation programs. Summary: This paper describes the creation of the nuSketch COA diagramming application. My PowerPoint presentation describes the history of military symbology, the evolution of Course of Action diagrams, and gives an overview of nuSketch itself (screen shot below).

nuSketch is designed to be an interface to battlespace reasoning systems, built both by us and by others. Spatial reasoning is a crucial component in most battlespace reasoners. Consequently, we incorporate a suite of visual computations in nSB (nuSketch Battlespace) that use sketched input to provide a combination of domain-specific and domain-independent qualitative spatial reasoning. This paper, in turn, introduces the important topic of qualitative spatial reasoning (see below).


Calculi for Qualitative Spatial Reasoning; Cohn (1996) Describing Topological Relations With Voronoi-Based 9-Intersection Model. Chen; Li; Li; Gold (1998) How Qualitative Spatial Reasoning Can Improve Strategy Game AIs. Forbus; Mahoney; Dill (2000) Potential of Qualitative Spatial Reasoning in Strategy Games. Chew (2002)
Type of Research: Academic. Summary: Cohn's paper is a good introduction to qualitative spatial reasoning (QSR). QSR is a subfield of AI that deals with not only our everyday commonsense knowledge about the physical world, but also the underlying abstractions used by engineers and scientists when they create quantitative models. One of the interesting standard assumptions in QSR is that change is continuous. This becomes obvious when you think of QSR in terms of 3D terrain data (such as DTED or DEM files): while two adjacent points might have values of +2 and +4, there must be a place between those two points with a value of +3. Forbus et al. suggest that QSR is a logical tool for strategy game AI routines. They specifically look at two types of problems (terrain analysis and pathfinding):


The three figures above illustrate the Massed Fire problem (Figure 3 is the solution for Figure 1) and the Ambush problem. Pathfinding (a constant source of angst in computer wargame AI) could, theoretically, be aided by Voronoi diagrams:


In the above four snapshots:
1. The original tactical battlefield map.
2. The areas of the map that are impassable to armored vehicles.
3. The Voronoi diagram (which calculates a path equidistant between obstructions).
4. The free space areas.

This description of free space and corridors still needs work -- for example, there are edge effects where the distinctions between regions and paths seem visually unnatural. However, we do find it encouraging, given that we have only just started exploring this space of algorithms. From a practical standpoint, it is important to note that these computations only need to be done once per map. The entire sequence of computations described here took six seconds on a mid-range machine, using a Java-based general-purpose implementation.
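Forbus et al. compute true Voronoi diagrams from the obstacle polygons; a cheap grid-based stand-in for the same "equidistant between obstructions" idea is a multi-source BFS distance transform (sometimes called brushfire), whose ridge cells approximate the Voronoi corridors. A sketch, using only the standard library:

```python
from collections import deque

def clearance_map(grid):
    """Multi-source BFS (brushfire) out from every obstacle cell.
    grid: 2-D list, 1 = impassable, 0 = free.
    Returns each cell's 4-connected distance to the nearest obstacle."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    frontier = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                dist[r][c] = 0
                frontier.append((r, c))
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                frontier.append((nr, nc))
    return dist

def ridge_cells(dist):
    """Free cells at least as far from obstacles as all 4-neighbours:
    a discrete stand-in for the Voronoi 'equidistant' corridors."""
    rows, cols = len(dist), len(dist[0])
    ridges = set()
    for r in range(rows):
        for c in range(cols):
            if dist[r][c] and all(
                    dist[r][c] >= dist[r + dr][c + dc]
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols):
                ridges.add((r, c))
    return ridges
```

On a map with impassable strips along either edge, the ridge falls down the middle corridor, which is roughly where a unit seeking maximum clearance from obstacles would want to path. Like the Voronoi computation in the paper, this only needs to be run once per map.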


Commander Behavior and Course of Action Selection in JWARS. Vakas; Prince; Blacksten; Burdick (2001)
Type of Research: Corporate (work done under DMSO support to advance the state of Land Warfare Command and Control (C2) in JWARS). Summary: This paper describes a plug-in module that attempts to introduce behavior patterns for synthetic commanders. These behavior patterns are driven by fuzzy logic. It uses data from well-known personality tests in fuzzy rule sets to influence the interpretation of this doctrine, and a chess-like look-ahead engine to see the results of various applications of this doctrine. It then chooses the COA that gets to a goal while best satisfying other values, such as minimize attrition of friendly troops.
Commander Model (CM)
- Situation Assessment
- COA Selector
  - Conventional Rules
  - Fuzzy Rules
  - Game Board

Commander Behavior Model (CBM) (Optional)
- Personality Traits
- Personality to Attitudes Converter
- Attitudes to Values Converter
- Consequence Evaluator
- Value/Consequence Resolver

Staff Support Elements
- Maneuver Planner
- Intel Planner
- Resupply Planner
- Fire Support Coordinator

Information Knowledge Base (enemy, friendly, etc.)
Fuzzy Knowledge Base (supports COA Selection and CBM)
Above: Major components of the JWARS Land Warfare Commander model. The JWARS Land Commander Model is responsible for three primary functions. First, it has the responsibility for assessing the situation.... Second, the CM selects the course of action (COA) from those that are available. And third, the CM monitors the plan and makes the decision when to modify or abandon the plan and select a new one.


[Figure: the CBM data flow. Mission and schedule (superiors' orders), own troops (subordinates), and the enemy and environment (as perceived) feed a Unit-Level Knowledge Base - which includes doctrine, TTP, local customs, history, etc. for this unit - along with NGIC Country Behavior Data, DIA Unit Behavior Data, and Commander Personality Data. Individual Perceptions and Individual Decision Criteria then drive the COA Selector and the Commander Behavior Model (CBM), producing the Selected Course of Action (COA).]
Examples of some of the fuzzy logic rules for COA:
- If force A is flank assaulting force B, then B must face A.
- If force A's strength is much greater than B's strength and B's loss rate is very attriting, then B should withdraw.
- If attacker A is much stronger than defender B and A is close to objective C, then the objective is likely.
- A person whose decision-making axis is feeling has a benevolent attitude towards others.
- If friendly frontal attack is true, friendly attrition is very true.
- A commander whose attitude towards others is benevolent has a minimizing loss COA evaluation strategy.

This is an interesting concept: trying to simulate a commander's decisions based on fuzzy logic and the MMPI (Minnesota Multiphasic Personality Inventory) and MBTI (Myers-Briggs Type Indicator) tests.
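A toy version of the second rule above ("if force A's strength is much greater than B's and B's loss rate is very attriting, then B should withdraw") can be written with simple membership functions. The thresholds below are my own illustrative choices, not JWARS values:

```python
def much_greater(a_strength, b_strength):
    """Fuzzy membership for "A's strength is much greater than B's":
    0 at a strength ratio of 1, rising linearly to 1 at a ratio of 3.
    (Illustrative thresholds only.)"""
    ratio = a_strength / b_strength
    return max(0.0, min(1.0, (ratio - 1.0) / 2.0))

def very_attriting(loss_rate):
    """Fuzzy membership for "B's loss rate is very attriting":
    fully true at a loss rate of 10% per time unit. (Illustrative.)"""
    return max(0.0, min(1.0, loss_rate / 0.10))

def should_withdraw(a_strength, b_strength, b_loss_rate):
    """Fire the rule, using min() as the fuzzy AND of the antecedents."""
    return min(much_greater(a_strength, b_strength),
               very_attriting(b_loss_rate))
```

The rule's output is itself a degree of truth; a COA selector would weigh it against the outputs of competing rules (and, in the CBM, against personality-derived values such as "minimizing loss") rather than acting on it directly.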


In the future they plan on implementing the following:


Current Prototype:
- Agents treat perceptions as ground truth, i.e. have full confidence in them.
- Agents all have the same knowledge of themselves that the opponent has.
- Agents all have the same doctrinal rule sets.
- Agents have a simple scoring function.
- Agents have the same moves available to them at all times.

Long Term JWARS CBM/COA Selector:
- Agents recognize they only perceive situations (do not know truth), and take different fuzzy likelihoods into account in their decisions.
- Each agent has a model of what their opponent likely knows about that agent, based on the opponent's position, sensor capability and intelligence, making deception and information operations possible.
- Agents have individual rule sets based on their country's doctrine, and a model of opponents' rule sets based on their identification of the opponent and its doctrine.
- Agents act based on the assumption that opponents will do what is best for them given their doctrine and personality.
- Agents' doctrinally correct responses are constrained by their forces and supplies, by the enemy situation, and by the environment, with only the top n options considered in COA tree creation.
[Figure: construction of the COA tree. From the aggregate situation, situation assessment rules produce the commander's perception of the situation: possible game boards of the actual case (a very likely actual-case game board, a less likely one, etc.), depending on the commander's assessment of both enemies and friends, according to rules of situation assessment from partial knowledge and confidence levels. Rules of response then generate reactions and likelihood-rated next game boards. Using the commander's model of the opponent's situation assessment rules, several game boards of what the opponent thinks is going on are created for each likely actual case and fed through the opponent's rules of response. Each plausible perception is fed back into the rules of response: the commander's actual-case game board for a single set of reactions, and the several possible opponent perceptions for that case into opponent situation assessment and then rules of response for multiple sets of reactions. All of a commander's plausible responses are combined with all of the opponent's, for that particular case, and the simulation is run on the commander's game board to arrive at the next level of game boards.]

Above: How a COA tree is created with second order perception.

Comments: This is a very ambitious undertaking and, I feel, a very dangerous precedent. While there are a number of stories7 - some apocryphal, others not - of a commanding general altering his strategy because of some perceived inside knowledge of his opponent's personality, COA analysis, by definition, is supposed to include a best-case and a worst-case scenario. Commanders come and go (and are frequently replaced in combat). I would have much more confidence in a computer analysis of what an opposing force could do based on the laws of physics rather than what it might do based on some vague psychological principles.

7 Lee's comments on Grant after he assumed command in the east and Patton's comments about Rommel before the battles in Tunisia come to mind.


PC Games: A Testbed for Human Behavior Representation Research and Evaluation. Biddle; Stretton; Burns (2003)
Type of Research: Corporate (Sonalysts, Inc.). Sonalysts created the commercial naval wargames Jane's Fleet Command and Sub Command. Summary: This paper proposes using commercial off-the-shelf (COTS) PC games as a testbed for military research. The availability of these types of AI editing tools has enabled HBR researchers to use games as testbeds for conducting research to advance their HBR technique. For example, Laird has used Quake extensively in support of AI development. Additionally, AI research at the MOVES Institute at the Naval Postgraduate School has involved commercial PC games.

Screen from Sub Commander's Mission Editing utility.


A Method for Incorporating Cultural Effects into a Synthetic Battlespace. Mui; LaVine; Bagnall; Sargent (2003)
Type of Research: Corporate (Micro Analysis & Design, Inc. and Northrop Grumman Mission Systems) funded by the Air Force Research Laboratory (AFRL).

Terms and abbreviations used in this paper:
CART = Combat Automation Requirements Testbed
ARL = Army Research Laboratory
IMPRINT = Improved Performance Research Integration Tool
HLA = High Level Architecture
RTI = Run Time Infrastructure
COM = Component Object Model
HPM = Human Performance Modeling
HBR = Human Behavioral Representation
CMC2 = Cultural Modeling of Command and Control

Summary: This technical effort consisted of three primary objectives:
1. Investigate cultural factors and add cultural modeling capabilities to the Combat Automation Requirements Testbed (CART), an existing human performance modeling (HPM) tool, to allow users to easily inject cultural effects into a human performance model.
2. Create a client-server interface between CART and the Joint Integrated Mission Model (JIMM) constructive simulation to allow JIMM entities to receive higher-fidelity behavioral representation from an external simulation tool.
3. Develop a model of an Integrated Air Defense System (IADS) for two different cultures to demonstrate the functionality of the enhanced CART tool as well as the interaction of CART and JIMM operating in a client-server environment.

The Cultural Lens is a concept for a tool which would help leaders from one culture view a situation from the perspective of another culture. To implement this they created a cultural editing tool (the CMC2 Tool). The first step is to create human performance task networks, breaking down actions into functions (or sub-networks) and tasks (screen shot below):


Above: the CART task network. Below: the Cultural Parameter Interface.


Above: the Cultural Macro Interface. Below: the Cultural Template Interface.


[Figure: the interface between JIMM and CART. An HLA or shared-memory interface connects JIMM's entities (SOC, CRC, CRP) to the corresponding human performance models (SOC HPM, CRC HPM, CRP HPM) running in CART.]

This is how their Cultural Lens was implemented. They used variables such as:
- Distribution of Power (DP) is the perceived difference in power between an individual (member of the military) and his superior.
- Willingness to Take Risk (WR) is defined for this project as an individual's willingness to make decisions that place him in vulnerable situations, thereby risking the consequences of errors.
- Familiarity with the Enemy (FE) is defined as the extent to which a culture has had prior interaction with its enemy.

Comments: This project certainly had a feeling of "déjà vu all over again" (it is very reminiscent of the CyberWar XXI project that I worked on). When amorphous, fuzzy and hard-to-define subjects such as Willingness to Take Risk are digitized (a value is arbitrarily assigned to them) I get decidedly uncomfortable about the validity of the simulation. This is not the same process as converting analog sound files to digital. At some point a human being is going to arbitrarily assign values to all these variables that will then define a human - or in this case, cultural - personality.


Many times the customer gets a feeling that the results of these simulations are somehow more accurate just because they are run on a computer. But the truth of the matter is that once a human arbitrarily assigns values to these variables that will affect the results of the simulation the simulation has no more validity (and possibly quite a bit less) than the same human being pontificating on the subject. Indeed, a human predicting that troops from Country X will run away at the first sign of battle has just as much validity as a computer simulation that has been arbitrarily weighted to come to the same results.


A Methodology for Modeling and Representing Expert Knowledge that Supports Teaching-Based Intelligent Agent Development. Bowman; Tecuci; Boicu (2000)
Type of Research: Academic (George Mason University). Summary: This is an extremely short paper; however, their approach and subject matter (COA analysis) are interesting. The domain expert is given a specific problem to solve (such as to Assess COA411 with respect to the Principle of Objective) and solves it through task reduction, as illustrated in Figure 1. To perform this assessment, the expert needs a certain amount of information about COA411. This information is obtained through a series of questions and answers that help reduce the initial assessment task to simpler and better-defined ones, until the expert has enough information to perform the assessment.

There are several important results of this modeling process:
1) Necessary concepts and features are identified - they guide the import of relevant ontological knowledge from external repositories such as CYC (Lenat, 1995), Loom (MacGregor, 1999) or Ontolingua (Farquhar et al. 1996), leading to the definition of the agent's ontology.
2) Each task reduction step represents an example from which the Disciple agent will learn a general rule through the application of a mixed-initiative multistrategy learning method. In particular, the question and the answer from the example reduction guide the agent in generating an explanation of the reduction, which is a central element in rule learning.
3) The learned rules will include generalizations of the natural language phrases from the modeling tree. These phrases are used to generate solutions and justifications in natural language. For instance, an abstract justification of an assessment task is generated by simply instantiating the sequence of the questions and answers that led to the assessment.
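The question-and-answer-driven task reduction can be sketched as a recursion over a reduction table (all task names and the Expert stand-in below are invented for illustration; Disciple's real knowledge base is an ontology, not a pair of dictionaries):

```python
class Expert:
    """Stands in for the domain expert: either answers a task directly or
    reduces it, via a question, to simpler subtasks."""
    def __init__(self, reductions, direct_answers):
        self.reductions = reductions          # task -> (question, [subtasks])
        self.direct_answers = direct_answers  # task -> assessment

def assess(task, expert):
    """Recursively reduce a task until every leaf can be assessed directly.
    In Disciple, each reduction step (with its question and answer) also
    becomes an example from which a general rule is learned."""
    if task in expert.direct_answers:
        return {task: expert.direct_answers[task]}
    _question, subtasks = expert.reductions[task]
    result = {}
    for subtask in subtasks:
        result.update(assess(subtask, expert))
    return result

expert = Expert(
    reductions={"Assess COA411 wrt Objective":
                ("Is the objective decisive and attainable?",
                 ["Objective is decisive?", "Objective is attainable?"])},
    direct_answers={"Objective is decisive?": True,
                    "Objective is attainable?": True})
```

Instantiating the sequence of questions and answers along the recursion is what yields the natural-language justification the paper describes.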


JWARS OUTPUT ANALYSIS; Blacksten; Jones; Poumade; Osborne; Stone (2001)


Type of Research: Academic / Corporate / Military (Joint Warfare Systems Office). Summary: The Joint Warfare System (JWARS) is a campaign-level model of military operations that is currently being developed under contract by the U.S. Office of the Secretary of Defense (OSD) for use by OSD, the Joint Staff, the Services, and the Warfighting Commands. The following constructs are important to understanding how the JWARS simulation is set up and run:
- Scenario - A specified set of problem domain input data (Playbox, Environment, Order of Battle (OOB), Plans, system performance parameters, etc.)
- Replication - A single execution of a scenario, corresponding to a unique initial random number seed.
- Run - Scenario data set plus user-selected control data, including identification of data to be captured, number of replications to be run, initial replication number (surrogate for random number seed), etc.
- Run Definition - A named, stored set of parameters defining a run.


When a run is submitted for execution, the JWARS administrative control system (JACS) assigns the run an identification code based on a time stamp, e.g., J2001-04-30163955690000. That run ID is included in all outputs from the run. JWARS analysis products consist of reports addressing essential elements of analysis (EEAs), quantified by measures that are calculated from data elements captured by instruments during the simulation. From the bottom up, these terms are defined as follows:
- Data element - e.g., heading, longitude, unit ID, missile type, current unit activity, and current unit attrition.
- Instrument - technically, a specific software method (used in the object-oriented programming sense) designed to capture and output a set of data elements whenever it is triggered.
- Measure - a quantitative result computed from data elements; JWARS also uses the term measure for the collection of instruments and data elements needed to calculate that result.
- Report - a set of instrument output data or measures that have been processed into a graph or table that helps to answer one or more EEAs.
- Essential element of analysis - an aggregate-level grouping concept found within the HCI System. EEAs may be considered as both: (1) statements of the overarching questions that the decision maker seeks to answer (e.g., Are forces in Theater X sufficient to prevent Nation Y from pushing from the DMZ to the yy Parallel in less than eight days?); and (2) a means for selecting those instruments and measures contributing to the resolution of the question (e.g., the instruments and measures shown in the relationships page of the EEA).

Another class of outputs consists of information generated and displayed to the user's workstation during a replication. This includes:


- Message - a debug-like text string written to a message log when triggered by an associated simulation event.
- Message category - a logical grouping of messages, e.g., Simulation Model C4ISR.


- Message log - the sequential file of messages generated during the JWARS replication. The JWARS user may choose to have this file displayed in a message log window during simulation execution and saved following the run.

- Active map - a visual map display of the campaign, replete with military as well as geographical entities. (JWARS also provides a capability to play back the replication on the map after the simulation is finished.)

In addition, JWARS can output data via CSV (Comma Separated Values) files for charts such as this:


Comments: JWARS is THE wargame / military simulation used by the Pentagon to plan all military actions. In many ways it is reminiscent of AMVER that I was introduced to during the SimSAR II project for MSIAC:

The application certainly appears to be rude and crude by commercial computer game standards. It is, apparently, restricted to 16 colors, with an interface that looks like it was written in Visual Basic. See also Logistics Modeling in JWARS for a PowerPoint presentation about the history of JWARS and the future enhancements planned for it.


AI Middleware: Getting into Character; Part 1 - BioGraphic Technologies' AI.implant; Dybsand (2003) AI Middleware: Getting into Character; Part 2 - MASA's DirectIA; Dybsand (2003) AI Middleware: Getting into Character; Part 3 - Criterion's Renderware AI 1.0; Dybsand (2003)
Type of Research: Commercial Game Development Software Reviews (Gamasutra). Summary: These are reviews of three middleware (i.e., turnkey third-party) AI products for computer games.

AI.implant flow chart above. Basically, AI.implant is designed to work with Maya or 3D Studio Max as a plug-in (you call it from inside Maya or 3DS). You fill in a number of fields - and voilà, you've got an instant NPC (Non-Player Character).


In the example above the blue lines show a waypoint network (yes, you preprogram the waypoints, so you don't actually do any pathfinding). Then you get to fill in some more fields:

and when you're done you have an autonomous agent. This product also comes with an SDK. You have to fire up the AI.implant module in your code and then you can call it like so:

aiSolver->AddSubSolver( new ACE_BehaviourSolver );

This is definitely a canned AI that is appropriate for D&D or RPG games. I don't see how applicable it would be to RTS games. The next product reviewed is DirectIA. As the name implies, it is an Intelligent Agent creation product. It is primarily script-driven. An example of a stimulus script and an emotion script follows:


//------------------------------------------------------------
// Stimulus for greed. Activated by the perception of valuable objects.
//------------------------------------------------------------
// declare the stimulus with a unique name to identify it
stimulus SeeTreasure
{
    float rStimulusValue = 0.0;

    // a C callback function is called that will return the list of all
    // objects in the game. There are also some built-in functions that
    // could have been called to get a list of the IDs of all entities
    // from the world, which would have been filtered to retrieve the
    // 'object' entities.
    selection selObjects = c_GetObjects();

    // The loop instruction in the DirectIA script language
    with( y in selObjects )
    {
        // DirectIA is a strongly typed language, but allows dynamic
        // conversion within a structure hierarchy to get the correct
        // type of an object. A 'selection' is a collection of 'thing's
        // and in this case 'y' is a thing
        RealObject obj = y;
        float rBoost;

        // 'myself' is the reserved word for the current agent. Here
        // other script functions are called to see if the object is of
        // interest to the agent. The 'InSameRoom' function is called
        // due to game rules: the agent is only stimulated by objects
        // that are in the same room as him.
        if( InSameRoom( myself, obj ) && CareAboutObject( obj, rBoost ) )
            // If the object is of some value for the agent, boost the stimulus
            rStimulusValue = rStimulusValue + c_Love( obj );
    }

    // Here the final value of the stimulus is set to the value just computed.
    DIA_SetStimulusValue( SeeTreasure, rStimulusValue );
}
Code Fragment Copyright MASA (Mathematiques Appliquees SA) 2002


// The emotion is declared with a unique name to identify it.
// This emotion will be used somewhere else in the engine to modify
// the strength of a behavior activation. The value of this emotion
// can be retrieved in the script by calling DIA_GetEmotion( greed ).
emotion greed
{
    // Factor applied to the input stimulus
    // (In this case, the input value is equal to the stimulus)
    stim_factor = 1;
    // The surprise factor is used to simulate a boost when the emotion
    // increases. Here there is no surprise
    surprise_factor = 0;
    // define the max increase and decrease rate of the value per time unit.
    // Here the emotion can vary a lot quickly.
    max_decrease_rate = 300;
    max_increase_rate = 300;
    // Used to simulate the adaptation of an agent to a stimulus
    // (for example, an agent facing a treasure long enough will get
    // used to seeing it, and will no longer feel any greed).
    // Here there is no persistence
    habit_factor = 0;
    // The name of the input stimulus used with this emotion
    used_stimulus = SeeTreasure;
    // The state variable that is modified by the output of this emotion.
    // Here we only use the output of the emotion as a direct value in
    // another part of the script by calling DIA_GetEmotion( greed ).
    // No state variable is influenced.
    modified_sv = {};
}

Code Fragment Copyright MASA (Mathematiques Appliquees SA) 2002

As before, the module has to be fired up at the beginning of the code and then called from the main game loop: DIA_Workspace::GetTime() is used to calculate when it should execute, and then DIA_Engine::ExecuteAllActions() is called.
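The pattern is the familiar one of driving an AI engine from a fixed-step game loop. A minimal sketch of the idea in Python (the class, the millisecond clock, and the think interval are my invention for illustration, not MASA's API):

```python
class AIScheduler:
    """Stand-in for an engine like DirectIA's: the game loop asks the
    engine each frame whether its agents are due to think and act."""
    def __init__(self, think_interval_ms):
        self.think_interval_ms = think_interval_ms
        self.next_think_ms = 0
        self.actions_executed = 0

    def maybe_execute(self, now_ms):
        # Mirrors the GetTime()/ExecuteAllActions() pattern: check the
        # clock, and only run agent actions when an update is due.
        if now_ms >= self.next_think_ms:
            self.actions_executed += 1
            self.next_think_ms = now_ms + self.think_interval_ms

engine = AIScheduler(think_interval_ms=100)   # agents think 10x per second
for frame in range(500):                      # 10 simulated seconds at 50 fps
    engine.maybe_execute(frame * 20)          # 20 ms per frame
```

Decoupling the AI's think rate from the frame rate in this way keeps the agents from eating the rendering budget.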


The last AI package reviewed is Criterion's Renderware AI. Basically, this product creates entities which have predefined behaviors:

CGoToAgent - directs an NPC to a specific destination.
CFollowerAgent - enables an NPC to follow a specific entity, from a defined distance and at a defined bearing.
CFleeAgent - directs an NPC to move away from a set of locations and to stop when hidden from all of them.
CPathWayAgent - directs an NPC to follow a series of way points that form a predetermined path.
CWanderAgent - enables an NPC to wander to random destinations.
CHideAgent - directs an NPC to hide out of sight of specific entities.
CShooterAgent - enables an NPC to move into a firing position and shoot at a targeted enemy with the current weapon, based on line-of-sight and weapon ranges.
CAttackNNAgent - enables an NPC to fight a targeted enemy in melee combat, using predefined strategies.
CTestAgent - provides a testing and debugging agent tool to the game developer for use prior to any agent modification or development.

Prices are not shown for any of the three products, but I seem to recall that they are in the four- to five-figure range. All three products got good reviews and would probably be handy for any RPG-type game. However, I certainly wouldn't want to advertise that any of the NPCs so created would exhibit intelligent behavior.


Other Papers of Interest


A Theory of Universal Artificial Intelligence based on Algorithmic Complexity. Hutter (2000)
Development of a Concept Learning Capability for a Human Performance Model. Glenn; Le Mentec; Ryder; Santarelli; Stokes; Zachary (2003)
Knowledge Scoring Engine (KSE) for Real-Time Knowledge Base Generation. Jodlowski; Carruth; Lowe (2003)
Pattern-based AI Scripting using ScriptEase. McNaughton; Redford; Schaeffer; Szafron (2003)
The Incorporation of Validated Combat Models Into a Discrete Event Simulation (DES) CGF. Courtemanche (2000)
Toward An Object Oriented Design For Reproducible Results in Military Simulations. Youmans (2001)
Situation Awareness and Assessment: An Integrated Approach and Applications. Zhang; Hill, Jr. (2001)
Developing Requirements for Building MOUT Representations in M&S Using the Battles for Grozny as Case Studies. Lannon (2003)
Spatial Plans, Communication, and Teamwork in Synthetic MOUT Agents. Best (2003)
Facilitating Learning in a Real Time Strategy Computer Game. Sweetser; Dennis (2001)
Representations and Solutions for Game-Theoretic Problems. Koller; Pfeffer (1997)
Intelligent Agents for an Interactive Multi-media Game. Nicholson; Dutta (1997)
Utilizing Artificial Intelligence to Achieve Dominant Battlespace Awareness/Knowledge (DBA/DBK). Fillman (1999)


Intelligence Preparation of the Information Battlespace: A Methodical Approach to Cyber Defense Planning. Moore; Williams; McCain (2001)
Generating and Solving Imperfect Information Games. Koller; Pfeffer (1995)
Role of Pattern Recognition in Computer Games. Kaukoranta; Smed; Hakonen (2002)
Efficient, Realistic NPC Control Systems using Behavior-Based Techniques. Khoo; Dunham; Trienens; Sood (2002)
GeoRep: A Flexible Tool for Spatial Representation of Line Drawings. Ferguson; Forbus (2000)
Representing Knowledge and Experience in RPDAgent. Sokolowski (2003)
Joint Warfare System (JWARS) Operational Requirements Document (ORD)
An Overview of The Joint Warfare System (JWARS). Maxwell (2000)


Conclusions
Research into developing human-level AI can be divided into the following areas:

Intelligent Agents

A recurring theme in many of the military and academic papers cited above is the Intelligent Agent (IA): an agent that implements smart behaviors or attempts to mimic the personality traits of commanders or cultures. Laird's work with bots can also be put into the category of Intelligent Agents, as bots are relatively small, self-contained applets. IAs are also becoming more common in commercial Role-Playing Games (RPGs) to implement Non-Player Characters (NPCs). There are now commercial third-party (middleware) packages for creating IAs (see DirectIA above). IAs really do not introduce any new ideas in AI coding. An IA may be created using scripting, fuzzy logic, finite state machines, fuzzy state machines or even, theoretically, genetic algorithms. Their strengths include the ability to perform certain specific tasks (such as a QuakeBot's ability to hide, ambush and fight) and their "set it and forget it" autonomy. While they can greatly contribute to the authenticity of a simulation, the path to human-level intelligence probably does not lead through Intelligent Agents.

Finite State Machines

Finite State Machines (FSMs) were described in Turing's Computing Machinery And Intelligence (though he called them Discrete State Machines) over fifty years ago. To Turing an FSM was a physical device.8 Since then the term FSM, while it can still refer to hardware, is more likely to mean a series of AI routines that are linked together with specific entry and exit points. The following image is a detail from a software phone-number FSM.

8 "The digital computers considered in the last section may be classified amongst the 'discrete state machines.' These are the machines which move by sudden jumps or clicks from one quite definite state to another. These states are sufficiently different for the possibility of confusion between them to be ignored. Strictly speaking there are no such machines. Everything really moves continuously." - Computing Machinery And Intelligence


Obviously, FSMs are basic building blocks of AI and will continue to be so far into the future. However, it is their very nature of chopping real-life actions into discrete compartments that makes me suspicious of FSMs playing more than an incidental or supporting role in the quest for Human-Level AI. Indeed, as Turing wrote, "Strictly speaking there are no such machines. Everything really moves continuously."
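As a concrete illustration of "routines linked together with specific entry and exit points," here is a minimal guard-NPC FSM sketched in Python; the states, field names, and the health threshold of 30 are invented for the example:

```python
# Each state is a plain routine with explicit exits: it inspects the
# agent and names the next state.
def patrol(agent):
    return "attack" if agent["enemy_visible"] else "patrol"

def attack(agent):
    if agent["health"] < 30:
        return "flee"
    return "attack" if agent["enemy_visible"] else "patrol"

def flee(agent):
    return "patrol" if agent["health"] >= 30 else "flee"

STATES = {"patrol": patrol, "attack": attack, "flee": flee}

def step(agent):
    """Run the current state's routine and follow its exit."""
    agent["state"] = STATES[agent["state"]](agent)

guard = {"state": "patrol", "enemy_visible": False, "health": 100}
step(guard)                    # no enemy: stays in patrol
guard["enemy_visible"] = True
step(guard)                    # patrol -> attack
guard["health"] = 20
step(guard)                    # attack -> flee
```

Note how the sketch exhibits exactly the property complained about below: the guard is always wholly in one compartment, never partly fleeing and partly fighting.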

Fuzzy Logic

While the Department of Defense has, not surprisingly, funded the largest computer wargames ever created (WARSIM, JWARS, TACOPS), they are all curiously devoid of strategic or even tactical AI. Indeed, when I sent email to a friend who works on WARSIM asking him about the AI used in the wargame, he replied, "We don't really use AI, since the enemy is played by Humans, but we do have some pretty smart behaviors as well as a Fuzzy Logic system that attempts to portray some of the soft factors of war."


Fuzzy Logic (which is described in numerous citations above) shows some promise for the creation of Human-Level AI. Humans rarely think in absolute terms. The driver's education book states definitively that you are to signal a turn precisely 100 ft. before the intersection, but people never get out and measure the exact distance. The same is true with military or strategic axioms. It is good to get behind an enemy. It is good to cut an enemy's lines of supply and communication. However, there are no precise mathematical definitions for these terms.

In 1985 I unknowingly used fuzzy logic in my commercial wargame, UMS: The Universal Military Simulator (screen shots at right). The program understood concepts such as "line" and "flanks." This was accomplished by first drawing an imaginary box around all the units in an army, then determining the army's facing by comparison with the enemy army. It was then easy to determine the left and right flanks as well as the center of the army's lines (a double envelopment is simply a simultaneous left and right flank attack). It is reasonable to assume that fuzzy logic will play a role in the development of Human-Level AI.
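The bounding-box-then-facing approach can be sketched as follows. This is my reconstruction of the idea in Python, not the original 1985 code, and the 0.5 "center" band is an arbitrary threshold:

```python
import math

def bounding_box(units):
    """Imaginary box drawn around all the units in an army."""
    xs = [x for x, y in units]
    ys = [y for x, y in units]
    return min(xs), min(ys), max(xs), max(ys)

def center(units):
    x0, y0, x1, y1 = bounding_box(units)
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def classify_flanks(army, enemy):
    """Label each unit of `army` as left flank, center, or right flank,
    relative to the army's facing toward `enemy`."""
    ax, ay = center(army)
    ex, ey = center(enemy)
    fx, fy = ex - ax, ey - ay          # facing: our center -> enemy center
    norm = math.hypot(fx, fy)
    fx, fy = fx / norm, fy / norm
    labels = []
    for ux, uy in army:
        # Signed cross product: which side of the facing axis the unit
        # stands on (positive = left of the line of advance).
        side = fx * (uy - ay) - fy * (ux - ax)
        if side > 0.5:
            labels.append("left")
        elif side < -0.5:
            labels.append("right")
        else:
            labels.append("center")
    return labels
```

With an army deployed along a west-east line and the enemy due north, the westernmost unit comes out as the left flank, as a human observer would expect; a double envelopment would simply task the "left" and "right" groups simultaneously.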


Neural Nets

Surprisingly, it is the commercial gaming industry that is showing the most interest in Neural Nets; I did not find any references to Neural Nets currently being employed for military applications. André LaMothe, a very respected writer on computer game programming, has produced an in-depth piece on the subject (here neural.doc). Neural Nets have gone in and out of style since they were first proposed over fifty years ago and are bound to become popular again. Neural Nets are very reminiscent of the "meat machine" view of the human brain popularized in the '60s. Indeed, part of the allure of Neural Nets is the almost homeopathic belief that mimicry of the wiring of the human brain will, eventually, produce a sentient, human-level intelligence. I remain skeptical.

Fuzzy State Machines

"A fuzzy state machine (FuSM) brings together fuzzy logic and finite state machines (FSMs). Instead of determining that a state has or has not been met, a FuSM assigns different degrees of membership to each state (Russel & Norvig, 2002). Therefore, instead of the states on/off or black/white, a FuSM can be in the states slightly on or almost off. Furthermore, a FuSM can be in both the on and off states simultaneously to various degrees." - Sweetser

Fuzzy State Machines will almost certainly be a part of any attempt to create Human-Level AI.
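A minimal sketch of the idea in Python; the membership functions are invented for illustration. An agent's aggression value maps to overlapping degrees of "defend" and "attack," and, unlike in a crisp FSM, both can be active at once:

```python
def fuzzy_states(aggression):
    """Degrees of membership in two overlapping states for an aggression
    value in [0, 1]. Unlike an FSM, both memberships can be non-zero at
    the same time (partly defending AND partly attacking)."""
    defend = max(0.0, 1.0 - 1.5 * aggression)
    attack = max(0.0, 1.5 * aggression - 0.5)
    return {"defend": defend, "attack": attack}

def blend_stance(aggression):
    """Weight each state's influence by its membership instead of
    switching hard between behaviors (0 = all defend, 1 = all attack)."""
    m = fuzzy_states(aggression)
    total = m["defend"] + m["attack"]
    return m["attack"] / total if total else 0.5
```

At aggression 0.5 both memberships are 0.25, so the agent is simultaneously "slightly defending" and "slightly attacking" - exactly the in-between behavior a crisp FSM cannot express.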

A Life

A Life (Artificial Life) is the current hot topic in computer game AI. The umbrella term has been used to describe any program that emulates or simulates living creatures, even though the techniques employed can be quite different. The Sims, frequently cited as an example of A Life, actually uses a system of Intelligent Agents (see Some notes on programming objects in The Sims; Forbus; Wright). Other A Life programs employ genetic algorithms. At this time the term A Life is too amorphous to have any value in this discussion.


Scripting

Scripting refers to any system in which a previously created file of behaviors (often a text file) is loaded at runtime to control the actions of NPCs or units. AI scripts, in a way, are not unlike book openings in chess: they are a series of canned responses.

For The War College (1996) (screen shot at right) I created three scripts for each army that contained detailed instructions for the coordinated movements of each unit. At runtime one script, picked at random, was loaded. The computer AI continued executing the script until any of a number of triggers (hostile units approaching within so many meters, hostile units attacking, etc.) was encountered, at which time a series of heuristic tactical combat routines took over.

Scripts can be a very powerful tool, but they presuppose some knowledge of the simulation. This was possible because The War College contained simulations of the battles of Antietam, Pharsalus, Tannenberg and Austerlitz. Being familiar with the real events and knowing the terrain, I could create reasonably intelligent scripts for the armies to follow. Generic scripts could also be created that are not tied to a specific terrain or simulation, such as football pass-route scripts or guard patrol scripts. In the right circumstances, scripting can be a very powerful AI tool.
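The load-a-random-script-until-triggered pattern can be sketched in a few lines of Python. The script contents, unit names, and 500 m trigger range are invented for the example; The War College's actual scripts were far more detailed:

```python
import random

# Each script is an ordered list of (unit, order) pairs; in a real game
# these would be loaded from text files at runtime.
SCRIPTS = {
    "flank_left": [("1st Div", "advance west"), ("2nd Div", "hold")],
    "frontal":    [("1st Div", "advance north"), ("2nd Div", "advance north")],
    "envelop":    [("1st Div", "advance west"), ("2nd Div", "advance east")],
}

TRIGGER_RANGE = 500  # metres; hostile contact closer than this aborts the script

def run_script(name, contact_distances):
    """Execute scripted orders in sequence until a trigger fires, then
    hand control to the heuristic tactical combat routines."""
    executed = []
    for order, distance in zip(SCRIPTS[name], contact_distances):
        if distance < TRIGGER_RANGE:
            executed.append("tactical AI takes over")
            break
        executed.append(order)
    return executed

# One script per army, picked at random at load time:
chosen = random.choice(sorted(SCRIPTS))
```

The random pick gives the player some replay variety; the trigger check is what keeps a canned plan from marching blindly into contact.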

Genetic Algorithms

Genetic Algorithms (GAs) are an interesting concept, especially since they seem to promise quite a bit of AI for very little human coding. While a GA can, theoretically, deliver an ever more intelligent AI, the downside is that GAs are notorious for consuming memory and CPU cycles. Because of this, the commercial gaming world has not yet explored these methods.
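A toy GA in Python, maximizing the number of 1-bits in a short genome (all parameters are arbitrary). Even this toy version re-evaluates fitness across the whole population every generation, which hints at where the CPU cycles go:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=40, seed=1):
    """Tiny generational GA: tournament selection, one-point crossover,
    bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if rng.random() < 0.1:               # occasional bit flip
                i = rng.randrange(genome_len)
                child[i] ^= 1
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve(sum)   # fitness = number of 1-bits in the genome
```

For a wargame, the genome would encode behavior parameters rather than bits, and each fitness evaluation would mean playing out a whole engagement: hence the cost.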


Belief-based Bayesian Networks

I encountered only one example of this method (A Bayesian Network Representation of a Submarine Commander. Yu) and the results were less than impressive. Bayesian Networks (see http://www.cs.ualberta.ca/~greiner/bn.html) certainly seem to be very useful for representing uncertainty and making deductions, but they do not seem directly applicable to the creation of human-level AI.
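For reference, the core update in any belief network is just Bayes' rule. A one-node sketch in Python (the numbers are invented, not taken from Yu's model):

```python
def update_belief(prior, likelihood_if_present, likelihood_if_absent):
    """One application of Bayes' rule: posterior probability that a
    contact is hostile, given that the evidence was observed."""
    joint_present = prior * likelihood_if_present
    joint_absent = (1.0 - prior) * likelihood_if_absent
    return joint_present / (joint_present + joint_absent)

# Hypothetical numbers: prior belief that a contact is hostile is 0.2;
# this sonar signature occurs 90% of the time for hostiles and 10% of
# the time for neutrals.
posterior = update_belief(0.2, 0.9, 0.1)
```

This kind of update is excellent for tracking what a commander should believe; the open question is turning those beliefs into good decisions.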

Hierarchal AI

Pottinger described a two-tier hierarchal AI as one of the methods that he used in Age of Mythology. The highest-level AI, which Pottinger termed "AI Goals," comprises super-high-level constructs such as "Attack Player 4" or "Build a forward base in Player 3's direction." Beneath this, the lower-level AI, dubbed "Plans," would fetch the appropriate script and execute the behavior.

In my UMS II: Nations at War (1992) I designed a four-tiered hierarchal AI that corresponded to a nation's Order of Battle Table (screen shot at right). For example, the highest level (here represented as Montgomery, the 21st Army Group Commander) would pass down the order to attack the German player. The next level down, Bradley, would then assign objectives such as attacking Berlin. The next level down, VII Corps in this case, would then assign a series of way points utilizing the road net to Berlin (passing through Paris, for example) and give orders to the divisions under its control. The lowest-level units would then execute pathfinding algorithms to get onto the road net and advance toward their first waypoint (Paris).

A multi-tiered hierarchal AI system is crucial for emulating the behavior of complex military formations; each tier of the hierarchy corresponds to an equivalent level of the Order of Battle Table. Indeed, any Human-Level AI that is operating in a strategic simulation, be it a military campaign or a football game, will need to implement a hierarchal AI. I am certain of this.
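The order-refinement cascade can be sketched in Python as follows: a three-tier toy version of the design, with names and refinement rules invented for the example:

```python
class Echelon:
    """One tier of the order-of-battle hierarchy: refines the order it
    receives and passes the result down to its subordinates."""
    def __init__(self, name, refine, subordinates=()):
        self.name = name
        self.refine = refine              # order -> more concrete order
        self.subordinates = list(subordinates)

    def issue(self, order, log):
        concrete = self.refine(order)
        log.append((self.name, concrete))
        for sub in self.subordinates:
            sub.issue(concrete, log)

# Hypothetical three-tier chain loosely modelled on the example above:
division = Echelon("Division", lambda o: "march route to " + o.split()[-1])
corps = Echelon("Corps", lambda o: "objective " + o.split()[-1], [division])
army_group = Echelon("Army Group", lambda o: "attack Berlin", [corps])

log = []
army_group.issue("attack the German player", log)
```

Each tier knows only its own level of abstraction: the army group never plans march routes, and the division never chooses strategic objectives, which is exactly what makes the structure scale to large formations.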

Final Thoughts

In my research I did not encounter any program that had made substantial progress towards Human-Level AI as described in the chapter "The Problem" at the top of this paper. That was not surprising. What was surprising was that I could not find any research group even attempting to create AI of that caliber.

Research in developing Human-Level AI can be divided into two distinct areas: the commercial game industry and military-sponsored research (which includes military-funded academia). The two groups approach the problem differently because they have different motivations and interests. Commercial computer game developers are primarily interested in creating a challenging opponent that can play the game that they designed. A great deal of commercial gaming AI is purpose-built and frequently hardwired for a specific game. Furthermore, there is a long history of cheating in commercial computer game AI design. Even when it is not overtly cheating, many of the AI routines now being created have no real value outside the original application. For example, one recent technique is for the level designer or artist to pre-plot waypoints to be used in lieu of heuristic pathfinding. Indeed, this is even a built-in feature of AI.implant (see above).

At the same time, the wargames created for the military (WARSIM, JWARS, TACOPS) do not have any strategic AI whatsoever (recall the comment from my friend who works on WARSIM: "We don't really use AI, since the enemy is played by Humans"). While creating Human-Level strategic AI is a daunting task, the failure to do so can be extraordinarily dangerous (recall V Corps commander Lt. Gen. William S. Wallace's recent comments from Iraq: "The enemy we're fighting is a bit different from the one we wargamed against."). The enemy they had wargamed against was controlled by U. S. officers following U. S. military doctrine. What they needed was to wargame against an enemy controlled by a Human-Level AI that could think outside the box, so that problems were encountered during simulations and not on the battlefield.

John Laird of the University of Michigan admits that there is not much of a technology transfer from academia to the commercial gaming industry, either ("When game developers look at AI research, they find little work on the problems that interest them, such as nontrivial pathfinding, simple resource management and strategic decision-making" - Bridging the Gap Between Developers & Researchers).


Yet it is inevitable that Human-Level strategic AI will be created, and it will probably be created by the end of this decade. Most of the pieces to the puzzle are probably already out on the table. They just need to be assembled in a new way.

- David Ezra Sidran, July 27, 2003, Davenport, Iowa

