Tag Archives: AI

What’s Taking So Long?

This is what a brilliant game publisher looks like: Marten Davies. I was looking for a photo of Marten from the ’80s and I found this 2019 photo (credit: University of Texas). This is exactly what he looked like in ’87. He hasn’t aged a day.

Marten Davies, my first publisher and still a close friend, was painfully accurate when he said, “Take any time estimate that Ezra gives you and multiply it by three.” Now, in my defense, Marten probably said that because I had a contractual obligation to deliver UMS: The Universal Military Simulator in some insanely short period of time like six months and I actually delivered it in eighteen. To my credit I delivered a #1 game (Europe and US). To Marten’s credit, he was very easy to work with and it remains the best experience of my career. The bottom line was that Marten knew that I needed more time to make a better game and he made sure I got it.

So, what’s the hold-up? Well, that would be me, again.

Specifically, it’s the AI. As many of you know, I’ve been working on AI for wargames for a long time, and I was hoping to turn General Staff into a showcase for my work. Some of this has already been accomplished.[1] These are the algorithms that I’ve written about in my doctoral thesis and in various published papers.

The MATE[2] set of tactical AI routines builds upward from low-level routines, like 3D Line of Sight (3DLOS),[3] Range of Influence (ROI), and least weighted path algorithms,[4] to battlefield analysis,[5] to the selection of objectives,[6] and finally to the implementation of offensive maneuvers[7] to achieve those objectives.

So, given a list of objectives the AI can implement a tactical plan (Course of Action, or COA); but the AI doesn’t have any comprehension of the greater strategic picture. For example, MATE’s analysis of Gettysburg is that Blue[8] (Confederates) should not attack because Red (Union) has interior lines, a superior defensive position, and greater troop strength.

MATE representation of Gettysburg: Confederates (blue), Union (red). MATE screen shot. Click to enlarge.

MATE output analysis of Gettysburg. Confederates (BLUEFOR) should not attack because REDFOR (Union) has interior lines, a superior defensive position, and greater troop strength.

But, what the AI doesn’t understand is that the Confederates were desperate for a victory on Union soil which required them to attack at Gettysburg.

Screen capture of the Battle of Little Bighorn in the General Staff Scenario Editor. Click to enlarge.

Or consider the tactical positions at the battle of the Little Bighorn (above). My goal is to write a human-level tactical AI and, clearly, the historical attack (splitting forces) by Blue (7th Cavalry) against the far superior Red (Native American) forces was very ill advised (perhaps calling into question the definition of ‘human-level tactical AI’).

After a lot of thought I realized that I needed to create an AI Editor program. You use the Army Editor to create armies for the General Staff Wargaming System, the Map Editor to create maps, the Scenario Editor to place armies on the map and now you use the AI Editor to quickly and easily select strategies for an army. For example:

Screen shot of the General Staff AI Editor being used to program AI strategies for the Union forces at Antietam. Click on a battle group and select strategies from the drop down menus. Click to enlarge.

The AI Editor is very easy to use. You just load a scenario (created in the Scenario Editor, of course), tell the AI to separate the units on the field into battle groups (this creates unique groups based on proximity rather than the Order of Battle hierarchy), and then select strategies from the drop-down menus. It’s important to note that units won’t follow the direct paths drawn between objectives; those are just there to show the order of the objectives. The MATE AI algorithms will be engaged to actually move units on the battlefield and implement tactical maneuvers like envelopments and turning attacks. Also, note that the AI Editor creates one set of strategies that the AI will follow, if so instructed, when you actually play a simulation. You will also have the option to let the machine learning algorithms select strategies, though these may not follow the historical strategies.
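To give a rough idea of what “battle groups based on proximity rather than the Order of Battle hierarchy” means in practice, here is a minimal sketch (my own illustration in Python, with made-up unit names and an arbitrary distance threshold; it is not the actual General Staff code) that clusters units into one group whenever a chain of units lies within the threshold distance:

```python
# Sketch only: form "battle groups" by proximity, not by the OOB hierarchy.
# The 500-unit threshold and the data layout are assumptions for illustration.
from math import dist

def battle_groups(unit_positions, threshold=500.0):
    """unit_positions: dict of unit name -> (x, y).
    Returns lists of unit names; units end up in the same group whenever a
    chain of units connects them at distances <= threshold (single linkage)."""
    names = list(unit_positions)
    parent = {n: n for n in names}

    def find(n):                            # union-find with path halving
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    for i, a in enumerate(names):           # union any two units close enough
        for b in names[i + 1:]:
            if dist(unit_positions[a], unit_positions[b]) <= threshold:
                parent[find(a)] = find(b)

    groups = {}
    for n in names:
        groups.setdefault(find(n), []).append(n)
    return list(groups.values())

# Hypothetical example: two widely separated clusters of Union corps
print(battle_groups({"I Corps": (100, 100), "XII Corps": (300, 150),
                     "IX Corps": (4000, 3800), "V Corps": (4200, 3900)}))
# -> [['I Corps', 'XII Corps'], ['IX Corps', 'V Corps']]
```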

It took less than a minute to set up Custer’s strategy at Little Bighorn:

Screen shot from the General Staff AI Editor showing Custer’s historical strategy at Little Bighorn. Click to enlarge.

With the creation of the AI Editor the last piece is in place for the General Staff Wargaming System. We now have nine battlefield maps with more on the way (we’re hoping to ship the finished game with about 20 battlefield maps) and fifteen armies (we hope to have about sixty when we’re finished). We’re now registered with Steam and are getting our ‘store’ set up. We will be using Steam for player vs. player games. With a bit of luck we’re hoping to start player vs. player testing in about sixty days.

As always, please feel free to contact me directly with questions, comments or complaints. I’m sorry for the delay, but we’re creating something that hasn’t been done before and that always takes a bit longer.

References

1. Antietam & AI
2. Machine Analysis of Tactical Environments; see http://riverviewai.com/
3. https://www.general-staff.com/tag/3d-line-of-sight/
4. https://www.general-staff.com/tag/least-weighted-path-algorithm/
5. “Algorithms for Generating Attribute Values for the Classification of Tactical Situations”: http://riverviewai.com/papers/Algorithms-Tactical_Class.pdf
6. https://www.general-staff.com/antietam-ai/
7. “Implementing the Five Canonical Offensive Maneuvers in a CGF Environment”: http://riverviewai.com/papers/ImplementingManeuvers.pdf
8. MATE always labels the attacker as Blue.

Antietam & AI

MATE AI selected Objectives for Blue, 3D Line of Sight (3DLOS) and Range of Influence (ROI) displayed for the Antietam: Dawn General Staff scenario. Screen shot from General Staff Sand Box. Click to enlarge.

The author walking across Burnside’s Bridge in 1966 (age 12).

I have been thinking about creating an artificial intelligence (AI) that could make good tactical decisions for the battle of Antietam (September 17, 1862, Sharpsburg, Maryland) for over fifty years. At the time there was little thought of computers playing wargames.[1] What I was envisioning was a board wargame with some sort of look-up tables and coffee-grinder slide rules that, properly configured (not sure how, actually), would display what we now call a Course of Action (COA), or a set of tactical orders. I didn’t get too far on that project, but I did create an Antietam board wargame when I was 13, though it was hardly capable of solitaire play.

The Antietam scenario from The War College (1992). This featured 128 pre-rendered 3D views generated from USGS Digital Elevation Model Maps.

In 1992 I created my first wargame with an Antietam scenario: The War College (above). It used a scripted AI that isn’t worth talking about. However, in 2003, when I began my doctoral research into tactical AI, I had the firm goal in mind of creating software that could ‘understand’ the battle of Antietam.

TIGER Analysis of the battle of Antietam showing Range of Influence of both armies, battle lines and RED’s avenue of retreat. TIGER screen shot. Appears in doctoral thesis, “TIGER: A Machine Learning Tactical Inference Generator,” University of Iowa 2009

The TIGER program met that goal (the definition of ‘understand’ being: performing a tactical analysis that is statistically indistinguishable from a tactical analysis performed by 25 subject matter experts, e.g., active duty command officers, professors of tactics at military institutes, etc.).

In the above screen shot we get a snapshot of how TIGER sees the battlefield. The darker the color the greater the firepower that one side or the other can train on that area. Also shown in the above screen shot is that RED has a very restricted Avenue of Retreat; the entire Confederate army would have to get across the Potomac using only one ford (that’s the red line tracing the road net to the Potomac).  Note how overlapping ROIs cancel each other out. In my research I discovered that ROIs are very important for determining how battles are described. For example, some terms to describe tactical positions include:

  • Restricted Avenue of Attack
  • Restricted Avenue of Retreat
  • Anchored Flanks
  • Unanchored Flanks
  • Interior Lines
  • No Interior Lines

A Predicate Statement list generated by MATE for the battle of Antietam.

Between the time that I received my doctorate in computer science for this research and the time I became a Principal Investigator for DARPA on this project, the name changed from TIGER to MATE (Machine Analysis of Tactical Environments) because DARPA already had a project named TIGER. MATE expanded on the TIGER AI research and added the concept of Predicate Statements. Each statement is a fact ascertained by the AI about the tactical situation on that battlefield; the most important statements appear in bold.
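To make the idea concrete, a predicate statement can be thought of as a named fact about one side plus a flag marking its importance. Here is a minimal sketch (my own, in Python; the real MATE data structures are certainly richer than this) using the Antietam facts listed below:

```python
# Illustrative only: MATE-style predicate statements as simple records.
from dataclasses import dataclass

@dataclass
class PredicateStatement:
    subject: str              # e.g. "REDFOR" or "BLUEFOR"
    predicate: str            # the fact being asserted
    value: bool
    important: bool = False   # the most important statements appear in bold

antietam_statements = [
    PredicateStatement("REDFOR", "flanks_anchored", True, important=True),
    PredicateStatement("REDFOR", "has_interior_lines", True, important=True),
    PredicateStatement("REDFOR", "avenue_of_retreat_restricted", True, important=True),
    PredicateStatement("BLUEFOR", "avenue_of_attack_restricted", False),
    PredicateStatement("BLUEFOR", "has_superior_force", True),
]

for s in antietam_statements:
    prefix = "**" if s.important else "  "
    print(f"{prefix} {s.subject}: {s.predicate} = {s.value}")
```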

The key facts about the tactical situation at Antietam that MATE recognized were:

  • REDFOR’s flanks are anchored. There’s no point in attempting to turn the Confederate flanks because it can’t be done.
  • REDFOR has interior lines. Interior lines are an important tactical advantage. They allow Red to quickly shift troops from one side of the battlefield to the other while the attacker, Blue, has a much greater distance to travel.
  • REDFOR’s avenue of retreat is severely restricted. If Blue can capture the area that Red must traverse in a retreat, the entire Red army could be captured if defeated. Lee certainly was aware of this during the battle.
  • BLUEFOR’s avenue of attack is not restricted. Even though the Blue forces had two bridges (Middle Bridge and Burnside’s Bridge) before them, MATE determined that Blue had the option of a wide maneuver to the north and then west to attack Red (see below screen shot):

MATE analysis shows that Blue units are not restricted to just the two bridge crossings to attack Red. MATE screen shot.

  • BLUEFOR has the superior force. The Union army was certainly larger in men and materiel at Antietam.
  • BLUEFOR is attacking across level ground. Blue is not looking at storming a ridge like at the battle of Fredericksburg.

MATE AI selects these objectives for Blue’s attack. General Staff Sand Box screen shot. Click to enlarge.

We now come to General Staff, which uses the MATE AI. General Staff has a much higher resolution than the original TIGER program (1155 x 805 terrain / elevation data points versus 102 x 66, or approximately 138 times the detail). In the above screen shot the AI has selected five Objectives for Blue. I’ve added the concept of a ‘battle group’ – units that share a contiguous battle line – which in this case works out to one or two corps. Each battle group has been assigned an objective. How each battle group achieves its objective is determined by research that I did earlier on offensive tactical maneuvers.[2]

As always, I appreciate comments and questions. Please feel free to email me directly with either.

References

1. However, it is important to note that Arthur Samuel had begun research in 1959 into a computer program that could play checkers. See Samuel, Arthur L. (1959), “Some Studies in Machine Learning Using the Game of Checkers,” IBM Journal of Research and Development.
2. See “Implementing the Five Canonical Offensive Maneuvers in a CGF Environment”: http://riverviewai.com/papers/ImplementingManeuvers.pdf

Wargame AI Continued: Range of Influence

In two previous blog posts I wrote about how Artificial Intelligence (AI) for wargames perceives battle lines and terrain and elevation. Today the topic is how computer AI has changed ‘Range of Influence’ (ROI) or ‘Zone of Control’ (ZOC) analysis. Range of Influence and Zone of Control are terms that can be used interchangeably. Basically, what they mean is, “how far can this unit project its power?”

One of the first appearances of range as a wargame variable was in Livermore’s 1882 American Kriegsspiel: A Game for Practicing the Art of War Upon a Topographical Map (superb article on American Kriegsspiel here).  Note that incorporated into the ‘range ruler’ (below) is also a linear ‘effectiveness scale’.

Detail of Plate IV, “The Firing Board,” from the American Kriegsspiel, showing a ruler for artillery range printed along the top. Note that the accuracy declines (apparently linearly) in proportion to the distance. Click to enlarge.

The introduction of hexagon wargames (first at RAND and then later by Roberts at Avalon Hill; see here) created the now familiar 6 hexagon ‘ring’ for a Zone of Control:

Zone of Control explained in the Avalon Hill Waterloo (1962) manual. Author’s Collection.

I seem to remember an Avalon Hill game where artillery had a 2 hex range; but I may well be mistaken.

Ever since the first computer wargames that I wrote back in the ’80s I have earnestly tried to make the simulations as accurate as possible by including every reasonable variable. With the General Staff Wargaming System we’ve added two new variables to ROI: 3D Line of Sight and an Accuracy curve.

Order of Battle for Antietam showing Hamilton’s battery being edited. Screen shot from the General Staff Army Editor. Click to enlarge.

In the above image we are editing a Confederate battery in Longstreet’s corps. Every unit can have a unique unit range and accuracy. You can select an accuracy curve from the drop-down menu or you can create a custom accuracy curve by clicking on the pencil (Edit) icon.

Window for editing the artillery accuracy curve. There are 100 points and you can set each one individually. This also supports a digitizing pen and drawing tablet. Screen shot from General Staff Army Editor. Click to enlarge.

In the above screen shot from the General Staff Wargaming System Army Editor the accuracy curve for this particular battery is being edited. There are 100 points that can be edited. As you move across the curve, the accuracy at that range is displayed in the upper right-hand corner. Note: every unit in the General Staff Wargaming System can have a unique accuracy curve as well as a unique range and every other variable.
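As a sketch of how such a 100-point curve might be sampled when resolving fire (assuming the curve is stored as 100 accuracy values spanning zero range out to the unit’s maximum range; the actual General Staff internals may differ), linear interpolation between the two nearest points is enough:

```python
# Sketch: sample a 100-point accuracy curve at an arbitrary range.
def accuracy_at_range(curve, target_range, max_range):
    """curve: list of 100 accuracy values (0.0-1.0) covering 0..max_range.
    Returns the linearly interpolated accuracy at target_range."""
    if target_range >= max_range:
        return 0.0                                   # beyond maximum range
    position = (target_range / max_range) * (len(curve) - 1)
    lower = int(position)
    upper = min(lower + 1, len(curve) - 1)
    fraction = position - lower
    return curve[lower] * (1.0 - fraction) + curve[upper] * fraction

# Example: a linearly declining curve, much like the American Kriegsspiel range ruler
linear_curve = [1.0 - i / 99 for i in range(100)]
print(accuracy_at_range(linear_curve, 450, 900))     # 0.5
```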

Screen shot showing the Range of Influence fields for a scenario from the 1882 American Kriegsspiel book. Click to enlarge.

In the above screen shot from the General Staff Sand Box (which is used to test AI and combat) we see the ROI for a rear guard scenario from the original 1882 American Kriegsspiel. Notice that the southernmost Red Horse Artillery unit has a mostly unobstructed field of vision, and you can clearly see how accuracy diminishes as range increases. Also, notice how the ROI for the one Blue Horse Artillery unit is restricted by the woods, which obstruct its line of sight.
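Put together, a unit’s contribution to the ROI field at any one map cell depends on range, its accuracy curve, and whether line of sight is clear. A minimal sketch (the `los_clear` test here merely stands in for the full 3D Line of Sight calculation over the elevation data, and the accuracy function is just a linear stand-in):

```python
# Sketch: one unit's Range of Influence contribution to a single map cell.
from math import dist

def roi_contribution(unit_xy, cell_xy, max_range, accuracy_fn, los_clear):
    """Returns the firepower the unit can project onto the cell: zero if the
    cell is out of range or line of sight is blocked, otherwise the
    accuracy-curve value at that range."""
    r = dist(unit_xy, cell_xy)
    if r > max_range or not los_clear(unit_xy, cell_xy):
        return 0.0
    return accuracy_fn(r, max_range)

# Toy usage: linear accuracy falloff, nothing blocking the line of sight
print(roi_contribution((0, 0), (300, 400), 1000,
                       accuracy_fn=lambda r, m: 1.0 - r / m,
                       los_clear=lambda a, b: True))          # 0.5
```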

Screen shot of Antietam (dawn) showing Red and Blue ROI and battle lines. Click to enlarge.

In the above screen shot we see the situation at Antietam at dawn. Blue and Red units are rushing onto the field and establishing battle lines. Again, notice how terrain and elevation affect ROI. In the above screen shot Blue artillery’s ROI is restricted by the North Woods.

The above ROI maps (screen shots) were created by the General Staff Sand Box program to visually ‘debug’ the ROI (confirm that it’s working properly). We probably won’t include this feature in the actual General Staff Wargame unless users would like to see it added.

This is a topic that is very near and dear to my heart. Please feel free to contact me directly if you have any questions or comments.

Battle Lines, Commanders & Computers

When we look at maps of battles even the novice armchair general can quickly trace the battle lines of the armies. Recognizing battle lines is one of the most important skills a commander – or a wargaming Artificial Intelligence (AI) – can possess. Without this ability how will you identify the flanking units? And if you can’t identify the units at the end of a line, how will you implement a flanking attack around them? Equally important is the ability to identify weak points in a battle line.

The algorithm for detecting battle lines and flank units is one of the ‘building block’ algorithms of my TIGER / MATE tactical AI and first appeared in my paper, “Implementing the Five Canonical Offensive Maneuvers in a CGF Environment” (http://riverviewai.com/papers/ImplementingManeuvers.pdf). I will discuss how the algorithm works at the end of this blog. For now, just accept that it finds lines and flanks.

Let’s look at some examples of the General Staff AI ‘parsing’ unit positions. First, the battle of Antietam, situation at dawn (by the way, Antietam is one of the free scenarios included with the General Staff Wargaming System):

The battle of Antietam, dawn, September 17, 1862. Screen shot from the General Staff Sand Box program. Click to enlarge.

This is how a human sees the tactical situation: units on a topographical map. But, the computer AI sees it quite differently. In the next image, below, the battle lines and elevation are displayed as the AI sees the battle (note: the AI also ‘sees’ the terrain but, for clarity, that is not being shown in this screen capture):

The battle of Antietam, September 17, 1862 dawn, with computer AI battle lines and elevation displayed. Note: the identification of flank units. Both red and blue forces are assembling on the field. Click to enlarge.

What is immediately obvious is that Red (Confederate) forces are hastily constructing a battle line while Blue (Union) forces are beginning to pour onto the battlefield to attack. Let us now ask the question: what is the weakest point of the Red battle line? Where should Blue attack? This point is sometimes called the Schwerpunkt, German for the point of maximum effort (see also “Clausewitz’s Schwerpunkt Mistranslated from German, Misunderstood in English,” Military Review, 2007, https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/MilitaryReview_20070228_art014.pdf). Where should Blue concentrate its forces?

Computer AI representation of battle lines for Antietam, dawn September 17, 1862. The AI is locating the Schwerpunkt or place to attack. Click to enlarge.

Now that the weakest points of Red’s battle line have been identified, Blue (assuming Blue is being controlled by the AI) can exploit them by attacking the gaps in Red’s battle line. The Blue AI can order either a Penetration or an Infiltration Maneuver to exploit these gaps (the following images are from my paper, “Implementing the Five Canonical Offensive Maneuvers in a CGF Environment”). Note: in the TIGER / MATE screen shots below Range of Influence (ROI) is also visible:

From the paper, “Implementing the Five Canonical Offensive Maneuvers.”

Both of these maneuvers are possible because the AI has identified weak points in the OPFOR (Opposing Forces) battle lines. Equally important when discussing battle lines is the location of the flanks. The next two images use the original American Kriegsspiel (1882) map, which is also included in the General Staff Wargaming System:

The original American Kriegsspiel map (1882) restored and now used in the General Staff Wargaming System. Screen shot from the General Staff Sand Box AI test program. Click to enlarge.

In this screen capture from the General Staff Wargaming System Sand Box AI test program battle lines are displayed by the AI. Note the flank units and especially the unanchored (or open) Blue flank. Click to enlarge.

Identifying flank units is vitally important in the Turning Maneuver and the Envelopment Maneuver.

Knowing the location of flank units is also important for classifying tactical positions (this will be the subject of an upcoming blog).

So, how does this algorithm work?

I’ve never been a fan of graph theory, or heavy mathematical lifting in general. One of the required classes in grad school was Design and Analysis of Algorithms and it got into graph theory quite a bit. The whole time I was thinking, “I’m never going to use any of this stuff, but I have to get at least a B+ to graduate,” so I took a lot of notes and studied hard. Later, when I was looking for a framework to understand tactics and to write a tactical AI, it became obvious that graph theory was at least part of the solution. Maps are routinely divided into a grid, and unit locations can be points (or vertices) at the intersections of these lines. Battle lines can be edges that connect the vertices. I need to publicly thank my doctoral advisor, Dr. Alberto Segre, for first suggesting that battle lines could be described using something called a Minimum Spanning Tree, or MST (https://en.wikipedia.org/wiki/Minimum_spanning_tree). An MST connects all the vertices in a tree (or a group, as I call them in the above screen shots) with the minimum possible total distance (edge weight, to be precise).

I ended up implementing Kruskal’s algorithm (https://en.wikipedia.org/wiki/Kruskal%27s_algorithm) for identifying battle lines. It is what is called a ‘greedy algorithm’ and it runs in O(E log V), which means it gets slower as we add more units; but we’re never dealing with gigantic numbers of individual units in an Order of Battle (probably around 50 is the maximum), so it takes less than a second to calculate and display battle lines for both Red and Blue.
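For readers who want to see it spelled out, here is a compact sketch of Kruskal’s algorithm applied to unit positions (my illustration, not the General Staff source): treat each unit as a vertex, every pairwise distance as a candidate edge, and greedily keep the shortest edges that don’t close a cycle.

```python
# Sketch: a battle line as a Minimum Spanning Tree over unit positions (Kruskal).
from math import dist

def battle_line_mst(positions):
    """positions: dict of unit name -> (x, y).
    Returns the MST as a list of (unit_a, unit_b) edges."""
    names = list(positions)
    edges = sorted((dist(positions[a], positions[b]), a, b)
                   for i, a in enumerate(names) for b in names[i + 1:])
    parent = {n: n for n in names}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    mst = []
    for _, a, b in edges:            # shortest edges first (the 'greedy' part)
        root_a, root_b = find(a), find(b)
        if root_a != root_b:         # skip any edge that would close a cycle
            parent[root_a] = root_b
            mst.append((a, b))
    return mst

line = battle_line_mst({"A": (0, 0), "B": (10, 2), "C": (21, 0), "D": (33, 3)})
print(line)                          # [('A', 'B'), ('B', 'C'), ('C', 'D')]
```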

Lastly, and I guess this is my contribution to military graph theory, I realized that the flank units of any battle line must be the maximally separated units. That is to say, the two units in a battle line that are farthest apart are the flank units.
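That observation translates almost directly into code; a minimal sketch, assuming unit positions as (x, y) pairs within one battle group:

```python
# Sketch: the flanks of a battle line are its two maximally separated units.
from itertools import combinations
from math import dist

def find_flanks(positions):
    """positions: dict of unit name -> (x, y) for one battle group.
    Returns the pair of units that are farthest apart."""
    return max(combinations(positions, 2),
               key=lambda pair: dist(positions[pair[0]], positions[pair[1]]))

print(find_flanks({"A": (0, 0), "B": (10, 2), "C": (21, 0), "D": (33, 3)}))   # ('A', 'D')
```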

Obviously, this is a subject that I find fascinating so please feel free to contact me directly if you have any questions or comments.

 

Computational Military Reasoning Part 4: Learning

In my previous three posts on computational military reasoning (tactical artificial intelligence) I introduced my algorithms for detecting the absence or presence of anchored and unanchored flanks, interior lines, and restricted avenues of attack (approach) and retreat. In this post I present my doctoral research,[1] which utilizes these algorithms, and others, in the construction of an unsupervised machine learning program that is able to classify the current tactical situation (battlefield) in the context of previously observed battles. In other words, it learns and it remembers.

‘Machine learning’ is the computer science term for learning software (in computer science ‘machine’ has often meant ‘software’ or ‘program’ ever since the ‘Turing Machine’,[2] which was not a physical machine but an abstract thought experiment).

There are two forms of machine learning: supervised and unsupervised. Supervised machine learning requires a human to ‘teach’ the software. An example of supervised machine learning is the Netflix recommendation system. Every time you watch a show on Netflix you are teaching their software what you like. Well, theoretically. Netflix recommendations are often laughably terrible (no, I do not want to see the new Bratz kids movie regardless of how many times you keep recommending it to me).

Unsupervised machine learning is a completely different animal. Without human intervention, an unsupervised machine learning program tries to make sense of a series of ‘objects’ that are presented to it. For the TIGER / MATE program, these objects are battles, and the program classifies them into similar clusters. In other words, every time TIGER / MATE ‘sees’ a new tactical situation it asks itself whether this is something similar (and how similar) to what it has seen before, or something entirely new.

I use convenient terms like ‘a computer tries to make sense of’ or a ‘computer sees’ or a ‘computer thinks’ but I’m not trying to make the argument that computers are sentient or that they see or think. These are just linguistic crutches that I employ to make it easier to write about these topics.

So, a snapshot of a battle (the terrain, elevation and unit positions at a specific time) is an ‘object’ and this object is described by a number of ‘attributes’. In the case of TIGER / MATE, the attributes that describe a battle object are:

  • Interior Line Value
  • Anchored / Unanchored Flank Value
  • REDFOR (Red Forces) Choke Points Value
  • BLUEFOR (Blue Forces) Choke Points Value
  • Weighted Force Ratio
  • Attack Slope

The algorithms for calculating the metrics for the first four attributes were discussed in the three previous blog posts cited above. The algorithms for calculating the Weighted Force Ratio and Attack Slope metrics are straightforward: the Weighted Force Ratio is the strength of Red over the strength of Blue, weighted by unit type, and the Attack Slope is just that, the slope (uphill or downhill) that the attacker is charging over.
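As a concrete illustration (my own sketch; the exact scales and definitions used by TIGER / MATE are more involved), a battlefield snapshot can be represented as a small attribute vector, with the last two attributes computed directly from the Order of Battle and the terrain:

```python
# Sketch: a battlefield snapshot as the six-attribute object described above.
from dataclasses import dataclass

@dataclass
class BattleSnapshot:
    interior_lines: float          # interior line value
    anchored_flanks: float         # anchored / unanchored flank value
    red_choke_points: float        # REDFOR choke points value
    blue_choke_points: float       # BLUEFOR choke points value
    weighted_force_ratio: float    # Red strength / Blue strength, weighted by unit type
    attack_slope: float            # slope the attacker moves over (+ uphill, - downhill)

def weighted_force_ratio(red_units, blue_units, weights):
    """Units are (type, strength) pairs; weights maps unit type -> multiplier.
    The weight values below are purely illustrative."""
    red = sum(strength * weights[kind] for kind, strength in red_units)
    blue = sum(strength * weights[kind] for kind, strength in blue_units)
    return red / blue

weights = {"infantry": 1.0, "cavalry": 1.2, "artillery": 3.0}
print(weighted_force_ratio([("infantry", 30000), ("artillery", 200)],
                           [("infantry", 50000), ("artillery", 300)],
                           weights))                           # ~0.60
```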

TIGER / MATE constructs a hierarchical tree of battlefield snapshots. This tree represents the relationship and similarity of different battlefield snapshots. For example, two battlefield situations that are very similar will appear in the same node, while two battlefield situations that are very different will appear in disparate nodes. This will be easier to follow with a number of screen shots. Unfortunately, we first have to introduce the Category Utility Function.

So, first let me apologize for all the math. It isn’t necessary for you to understand how the TIGER / MATE unsupervised machine learning process works, but if I don’t show it I’m guilty of this:

The Category Utility Function (or CU, for short) is the equation that determines how similar or dissimilar two objects (battlefields) are. This is the CU function:
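The equation itself appeared as an image in the original post. For reference, here is a hedged reconstruction: TIGER’s clustering follows the COBWEB / CLASSIT family of algorithms, and for numeric attributes the category utility of splitting a node into classes C_1 … C_K is usually written as

$$CU(C_1,\dots,C_K) \;=\; \frac{1}{K}\sum_{k=1}^{K} P(C_k)\,\frac{1}{2\sqrt{\pi}}\sum_{i}\left(\frac{1}{\sigma_{ik}} - \frac{1}{\sigma_{ip}}\right)$$

where P(C_k) is the probability that a battle falls in class C_k, σ_ik is the standard deviation of attribute i (interior lines, anchored flanks, and so on) within class k, and σ_ip is its standard deviation in the parent node. The ‘acuity’ described next is the floor placed on σ so that the 1/σ terms stay finite when a node contains only a single battle.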

‘Acuity’ is the concept of the minimum value that separates two ‘instances’ (in our case, battles). It has to have a value of 1.0 or very bad things will happen.

 

So, let’s recap what we’ve got:

  • A series of algorithms that analyze a battlefield and return values representing various conditions that SMEs agree are significant (flanks, attack and retreat routes, unit strengths, etc.).
  • A Category Utility Function (CU) that uses the products of these algorithms to determine how similar analyzed battlefields are.

So now, we just need to put this all together. A battlefield (tactical situation) is analyzed by TIGER / MATE. It is ‘fed’ into the unsupervised machine learning function and, using the Category Utility Function, one of four things happens (a control-flow sketch follows the list):

  1. All the children of the parent node are evaluated using the CU function and the object (tactical situation) is added to the existing node with the best score.
  2. The object is placed in a new node all by itself.
  3. The two top-scoring nodes are combined into a single node and the new object is added to it.
  4. A node is divided into several nodes, with the new object added to one of them.
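A minimal control-flow sketch of those four options (illustrative only; the `score` argument stands in for the Category Utility Function evaluated on the candidate tree shape):

```python
# Sketch: the four COBWEB/CLASSIT-style choices when adding a new battle to the tree.
def incorporate(parent, battle, score):
    """parent: {'battles': [...], 'children': [...]}; score: callable standing in
    for the Category Utility Function. Returns the chosen operation and target."""
    candidates = []

    # 1. Try adding the battle to each existing child node.
    for child in parent['children']:
        candidates.append((score('insert', parent, child, battle), 'insert', child))

    # 2. Try placing the battle in a brand-new node by itself.
    candidates.append((score('new', parent, None, battle), 'new', None))

    # 3. Try merging the two best-scoring nodes and adding the battle to the result.
    if len(parent['children']) >= 2:
        candidates.append((score('merge', parent, None, battle), 'merge', None))

    # 4. Try splitting a node into its children and adding the battle to one of them.
    for child in parent['children']:
        if child['children']:
            candidates.append((score('split', parent, child, battle), 'split', child))

    best_score, operation, target = max(candidates, key=lambda c: c[0])
    return operation, target
```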

These rules (above) construct a hierarchical tree structure. TIGER was fed 20 historical tactical situations (below):

  1. Kasserine Pass, February 14, 1943
  2. Kasserine Pass, February 19, 1943
  3. Lake Trasimene, 217 BCE
  4. Shiloh Day 2
  5. Shiloh Day 1, 0900 hours
  6. Shiloh Day 1, 1200 hours
  7. Antietam 0600 hours
  8. Antietam 1630 hours
  9. Fredericksburg, December 10
  10. Fredericksburg, December 13
  11. Chancellorsville May 1
  12. Chancellorsville May 2
  13. Gazala
  14. Gettysburg, Day 1
  15. Gettysburg, Day 2
  16. Gettysburg, Day 3
  17. Sinai, June 5
  18. Waterloo, 1000 hours
  19. Waterloo, 1600 hours
  20. Waterloo, 1930 hours

In addition to these 20 historical tactical situations, five hypothetical situations were created, labeled A-E. This is the resulting tree that TIGER created:

The hierarchical tree created by TIGER from 20 historical and 5 hypothetical tactical situations. The numbers in the nodes refer to the above legend. Battles placed in the same nodes are considered very similar by TIGER. Click to enlarge.

If we look at the tree that TIGER constructed, we can see that it placed Shiloh Day 1, 0900 hours and Shiloh Day 1, 1200 hours together in cluster C35. Indeed, as we look around the tree we observe that TIGER did a remarkable job of analyzing tactical situations and placing like with like. But that’s easy for me to say; I wrote TIGER. My opinion doesn’t count. So we asked 23 SMEs, which included:

  • 7 Professional Wargame Designers
  • 14 Active duty and retired U.S. Army officers, including:
      • Colonel (Ret.) USMC infantry, 5 combat tours, 3 advisory tours
      • Maj. USA (SE Core), Project Leader, TCM-Virtual Training
      • Officer at TRADOC (U.S. Army Training and Doctrine Command)
      • West Point Warfighting Simulation Center
      • Instructor, Dept. of Tactics, Command & General Staff College
  • PhD student at RMIT
  • Tactics Instructor at Kingston (Canada)

And, in a blind survey, we asked them not what TIGER did but what they would do. For example:

Twenty-three SMEs were asked this question: is this hypothetical tactical situation (top) more like Kasserine Pass or Gettysburg? Click to enlarge.

And this is how they responded:

Results from 23 SMEs answering the above question. Overwhelmingly, the SMEs agreed that the hypothetical tactical situation was most like the battle of Kasserine Pass.

So, 91.3% of SMEs agreed that the hypothetical tactical situation was more like Kasserine Pass than Gettysburg Day 1. Unbeknownst to the SMEs, TIGER had already classified these three tactical situations like this:

How TIGER classified Kasserine Pass (1), Gettysburg Day 1 (14) and a hypothetical tactical situation (B). The cluster C1 contains two tactical situations that both have restricted avenues of attack caused by armor traveling through narrow mountainous passes. These passes also partially create restricted avenues of retreat. REDFOR does not have anchored flanks. Click to enlarge.

In conclusion: over the last four blog posts about Computational Military Reasoning we have demonstrated:

  • Algorithms for analyzing a battlefield (tactical situation).
  • Algorithms for implementing offensive maneuvers.
  • An Unsupervised Machine Learning system for classifying tactical situations and clustering like situations together. Furthermore, this system is never-ending: as it encounters new tactical situations it will continue this process, which enables the AI to plan maneuvers based on previously observed and annotated situations.

This is the AI that will be used in General Staff. It is unique and revolutionary. No computer military simulation – either commercially available or used by any of the world’s armies – employs an AI of this depth.

As always, please feel free to contact me directly with questions or comments. You can use our online email form here or write to me directly at Ezra [at] RiverviewAI.com.

References

1. TIGER: An Unsupervised Machine Learning Tactical Inference Generator, which can be downloaded here.
2. https://en.wikipedia.org/wiki/Turing_machine