Category Archives: Algorithm

A Human-Level Tactical Artificial Intelligence at Ligny

Map 159 from the superb West Point Atlas of the Napoleonic Wars (Esposito & Elting, 1999, Stackpole). Scanned from the author’s collection.

The seeds of Napoleon’s defeat at Waterloo were sown two days earlier at his victory near Ligny. Napoleon needed to surround the Prussian army and remove it entirely as a viable force on the battlefield. Instead, the Prussians escaped north to Wavre and resurfaced at the worst possible time on Napoleon’s right flank two days later at Waterloo.

MATE [1] 2.0 is now capable of analyzing the battle of Ligny, June 16, 1815, from both the Blue (L’Armée du Nord, on the offensive) and Red (Prussian, on the defensive) positions. (MATE is the AI behind General Staff: Black Powder. For more information about MATE see these links.)

The Ligny map was donated by Glenn Frank Drover. Jared Blando is the artist. Ed Kuhrt did the elevation, roads and mud terrain overlays. The unit positions are from the West Point Atlas of the Napoleonic Wars (above) and from David Chandler’s Waterloo: The Hundred Days. If anyone has a better source for unit positions, please contact me directly.

Screen capture of the Ligny scenario in General Staff. Elevation and slope layers enhanced.

Below is MATE’s analysis from the Blue (L’Armée du Nord) perspective. MATE correctly identifies the key positions and realities of the battlefield:

  • Red is on the defensive
  • Red has an exposed flank
  • There are two key choke points on the route to Red’s exposed flank

MATE then creates an appropriate Course of Action (COA) for Blue:

  • Battle Group #1 (The French III Corps) is assigned the flanking maneuver.
  • Battle Group #0 (The Imperial Guard) is assigned the objective of St. Amand with the support of Battle Group #4 (IV Corps cavalry and reserve artillery).
  • Battle Group #2 (IV Corps) demonstrates against Ligny.
  • Battle Group #3 (The Cavalry Reserve) seizes Balatre and a crucial bridge located there.

MATE’s analysis of Blue’s position at Ligny. Screen capture.

A log of MATE’s thought processes, with my commentary, follows:

Text output of MATE’s analysis of Blue’s position at Ligny.

MATE also analyzed Ligny from the Prussian (Red) perspective:

Screen capture of MATE’s analysis of Ligny for Red (Prussian army). MATE recognizes the two choke points on the route of the enemy’s flank attack and dispatches cavalry units to cover these critical areas.

Analyzing the Prussian (Red) position, MATE correctly recognizes that it is on the defensive, that it has an exposed flank, and that there are two crucial choke points on the route Blue will take for its flanking maneuver; it dispatches two cavalry units to cover the bridges. A log of MATE’s thought processes, with my commentary, follows:

Text output of MATE’s analysis of Red’s position at Ligny.

Critique of MATE’s analysis:

As the author of MATE, any critique I have of its performance should be taken with a grain of salt (also, see this video). If I were back in academia I would put together twenty or thirty Subject Matter Experts (SMEs), set up a double-blind web site, collect all the SMEs’ solutions to the problem, and compare their solutions to MATE’s. If they matched with statistical significance, that would support the ‘human-level’ claim. But I’m not in academia anymore and you’ll just have to take my word for it. That said, MATE did what I expected it to do.

It first sussed out whether it was on offense or defense and what it had to do to win.

Then, as Blue, MATE discovered a back door to Red’s position and ordered a classic enveloping maneuver. MATE assigned Blue Battle Group #1 the task of implementing the flanking maneuver. Blue Battle Groups #0, #4 and #2 are the fixing force. See my paper, Implementing the Five Canonical Offensive Maneuvers in a CGF Environment (free download here), for details and algorithms. The Blue Cavalry Reserve is given the COA to seize the town of Balatre. This, in my opinion, is a pretty good tactical plan.

When MATE finds itself on defense, as it does as Red at Ligny, one of the first things it does is ask itself, “How would I attack myself?” So, of course, it finds the back door right away. Then it compiles a list of available units that are not actively engaged in holding crucial parts of the defensive line, selects the optimal (fastest) units and assigns them orders to defend the crucial choke points. This was a better plan than Field Marshal Gebhard Leberecht von Blücher actually had. So, again, I’m going to argue that MATE is operating at a ‘human level’.
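As an illustration only, here is a minimal Python sketch of that defensive sequence: filter out the units already committed to the line, then greedily send the fastest remaining unit to each choke point. Every name in it is invented; MATE’s actual implementation is not shown on this blog.

```python
# Hypothetical sketch of MATE's defensive choke-point assignment, as
# described above. Names and data structures are invented for
# illustration; the real MATE internals are not published here.

def assign_choke_point_defenders(units, choke_points, travel_time):
    """Send the fastest uncommitted units to cover each choke point.

    units        -- list of friendly unit objects
    choke_points -- map locations flagged as critical
    travel_time  -- travel_time(unit, point) -> hours via the road net
    """
    # 1. Compile the units not actively holding part of the defensive line.
    available = [u for u in units if not u.engaged and not u.holds_key_terrain]

    orders = {}
    for point in choke_points:
        if not available:
            break  # nothing left to send
        # 2. Select the optimal (fastest-arriving) unit for this point.
        best = min(available, key=lambda u: travel_time(u, point))
        # 3. Issue the order and remove the unit from the available pool.
        orders[best] = point
        available.remove(best)
    return orders
```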

As always, please feel free to write me with any questions or comments. MATE is going to take a look at Antietam next.

References
1. Machine Analysis of Tactical Environments

A Human-Level Tactical Artificial Intelligence at Bull Run

West Point Atlas Map 20; the situation near Manassas & Centreville, July 20, 1861. From the Library of Congress; original source.

I’ve been looking for interesting tactical problems for MATE [1] to solve and I found a good one after reading William C. Davis’s Battle at Bull Run. [2] The actual battle (called 1st Bull Run by the Union, which named battles after nearby waterways, and 1st Manassas by the Confederates, who named them after nearby towns and geographic features) was a tragicomedy fought on July 21, 1861 in which both commanders (Irvin McDowell for the Union and P. G. T. Beauregard for the Confederates) had little control of their own forces after their initial battle orders were given. Indeed, the battle came down to a series of charges up and down Henry Hill with units committed piecemeal as they arrived on the field. Large elements of both armies were never committed. All in all, not a particularly interesting tactical situation for MATE to analyze.

However, the tactical position the day before (see West Point Atlas map #20, above) is quite a different situation. The Union army is massed at Centreville (Washington, D.C. is off the map, about 30 miles to the east). The two armies are separated by Bull Run, which can only be crossed at eleven fords and bridges. Confederate general Beauregard is certain that McDowell’s attack will come almost due south from Centreville and will cross Bull Run at Mitchell’s and Blackburn’s fords. He has assembled almost all of his forces there. This is a tactical situation that turns on which avenues of attack are open and which are closed.

Troop positions and topographical data fed to MATE for this analysis come from the McDowell Map, below:

Map of the battlefield of Bull Run, Virginia. Brig. Gen. Irvin McDowell commanding the U.S. forces, Gen. [P.] G. T. Beauregard commanding the Confederate forces, July 21st 1861. From the Library of Congress.

MATE’s assessment of this situation from the Confederate (RED) perspective is below. I use a program called the AI Editor (which, ironically, doesn’t actually edit AI) to observe what MATE is thinking and seeing.

Screen shot of the AI Editor.

The left window contains a series of predicate statements, conclusions and inferences. Predicate statements, to MATE, are simple factual statements that MATE knows to be true; e.g., statement #4: The enemy needs 300 Victory Points to win is a basic factual statement. MATE can combine statements (such as #4 and #5: The enemy currently controls 125 Victory Points) to come to a logical conclusion (indicated by beginning the new statement with the logical symbol “∴”, or therefore): #6 ∴ The enemy needs to seize 175 more Victory Points.
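A toy reconstruction of that inference step (the statement numbers come from the log above; the code itself is mine, not MATE’s):

```python
# Toy reconstruction of statements #4-#6 from the AI Editor log.
victory_points_to_win = 300    # statement #4: enemy needs 300 VP to win
victory_points_held   = 125    # statement #5: enemy controls 125 VP

# Conclusion (#6), derived by combining #4 and #5:
victory_points_needed = victory_points_to_win - victory_points_held
print(f"∴ The enemy needs to seize {victory_points_needed} more Victory Points.")
```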

The left window is divided into two panes of scrolling text. I printed out the complete list of all statements and conclusions and added commentary (below) so you can follow the thread of MATE’s thought processes:

List of premises and conclusions with commentary from MATE’s analysis of Manassas.

The right window shows the graphic output of some of MATE’s views of the battlefield (see Layers: Why a Military Simulation Is Like a Parfait). In the above screen shot it is displaying the terrain and elevation layers of the map plus all RED and BLUE forces. The yellow line is how MATE would attack if it were BLUE. Yes, that is correct: MATE analyzes its own defensive position by planning to attack it from the enemy’s perspective. The yellow line (the path using the road net) is how it would turn its own flank. It was this analysis that triggered the creation of statement #31: I have an exposed flank! To see the complete algorithm click here (PDF). The red line is the optimal route of the 30th Virginia Cavalry to Sudley Ford, indicated on the screen by the black box labeled CHOKE POINT.

MATE’s analysis of Manassas certainly appears to be a reasonable solution to this tactical problem. It also generated a COA (Course of Action) ordering a regiment of cavalry to secure a critical choke point. This, in fact, was better than Confederate General Beauregard’s actual performance.

Is there more work to do? Certainly. MATE uses heuristics. Here is the classic definition of heuristics: “A heuristic function, also simply called a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow.”

Here is my definition of a heuristic: a function that groks [3] the problem. MATE uses dozens of heuristic algorithms. MATE is pretty good at discovering, and pouncing on, an exposed flank. MATE groks exposed flanks. MATE also groks interior lines, the high ground, the road net, and constricted avenues of attack and retreat. That may not be a long list but it ticks more boxes than most 19th century generals.
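To make the idea concrete, here is a small, invented example of a heuristic in the classic sense – a function that ranks alternatives – applied to one of the things MATE groks, the high ground. This is an illustration, not MATE’s actual code:

```python
# A minimal, invented example of a tactical heuristic: a function that
# scores alternatives so a search can decide which branch to follow.

def high_ground_heuristic(position, elevation_map, enemy_positions):
    """Score a candidate position: prefer height over nearby enemies."""
    my_elev = elevation_map[position]
    nearby = [e for e in enemy_positions if distance(position, e) < 1000]
    if not nearby:
        return 0.0
    # Average elevation advantage over enemies within 1,000 meters;
    # negative means the enemy holds the slope above us.
    return sum(my_elev - elevation_map[e] for e in nearby) / len(nearby)

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
```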

References
1. Machine Analysis of Tactical Environments 2.0
2. William C. Davis, Battle at Bull Run, Stackpole Books, Mechanicsburg, PA, 1995.
3. To understand profoundly and intuitively; from Heinlein’s Stranger in a Strange Land.

Why Machines May Kill Us In Our Sleep

An amazing screen capture of the AI’s solution to a problem. It has found a one-pixel gap between the data and the edge of the screen and is exploiting it to successfully find an ‘open flank’ of Red.

Professor Alberto M. Segre was my thesis advisor and one day he said to me, “You know when your AI is really working because it will surprise you.” Today I got to have one of those weird surprises.

The screen shot (above) is a visual representation of what the AI is up to. You won’t get to see this in the actual game. The program that’s running is called the AI Editor, which is a bit of a misnomer because you don’t actually edit the AI in it; you mostly just get to observe what it’s doing. There’s a lot of stuff going on in the above image. There are multiple layers visually displaying different types of data (check out the blog post Layers: Why a Military Simulation Is Like a Parfait for more information about these). But what interests us are the AI layers: Battle Groups, Objectives, and that thin yellow line that snakes from a group of blue units, crosses Antietam Creek at the Middle Bridge and then, amazingly, exploits a data anomaly to reach its goal: a point far behind enemy lines.

Some background on the situation:

The map of the Antietam Battlefield (screen shot) with terrain and elevation layers displayed.

Underlying all the clutter from the first screen capture, top, is the battle of Antietam (above). The map has been rotated 90 degrees to the left so north is now pointing to the left; east is at the top of the screen.

After adding the Blue (Union) and Red (Confederate) units to the map in their historical positions at 0600, September 17, 1862, the AI performed a tactical analysis from the perspective of Blue.

The AI ‘strategic’ analysis for Antietam playing Blue (Union).

The above is a list of Predicate Statements, all of which the AI knows to be true. Statements preceded by the logical sign ∴ (therefore) are conclusions, or inferences, derived from the predicate statements referenced in the brackets. It is this analysis that determines whether the AI will be on the offensive or defensive and what its objectives will be.

Next, the AI performs Range of Influence (ROI) calculations for the entire observable battlefield. I plan on doing a video about this later, but for now: the darker the red (in the topmost screen capture), the more, and more powerful, weapons the Red army can bring to bear on that point. The AI next divides all the units on the map into a forest of minimum spanning trees called Battle Groups (see the sketch below). I want to do a video about this, too. However, if you can’t wait, these subjects are covered in my paper, Implementing the Five Canonical Offensive Maneuvers in a CGF Environment (free download).
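For those who really can’t wait, here is a rough sketch of the Battle Group idea: build a minimum spanning tree over unit positions and cut any edge longer than a mutual-support range, leaving a forest whose trees are the Battle Groups. The threshold value and all names below are my inventions for illustration, not MATE’s published code:

```python
# Rough sketch: partition units into Battle Groups by running Kruskal's
# minimum-spanning-tree algorithm over unit positions and skipping any
# edge longer than a mutual-support threshold. The result is a forest;
# each tree is one Battle Group. The 2 km threshold is invented.

from itertools import combinations

SUPPORT_RANGE = 2000.0  # meters; units farther apart than this don't group

def battle_groups(positions):
    """positions: dict unit_id -> (x, y). Returns a list of groups (sets)."""
    parent = {u: u for u in positions}

    def find(u):  # union-find with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    # Process edges shortest-first; stop once edges exceed support range.
    edges = sorted(
        (dist(positions[a], positions[b]), a, b)
        for a, b in combinations(positions, 2)
    )
    for d, a, b in edges:
        if d > SUPPORT_RANGE:
            break
        parent[find(a)] = find(b)

    groups = {}
    for u in positions:
        groups.setdefault(find(u), set()).add(u)
    return list(groups.values())

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
```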

Again, referring to the top screenshot you can see the AI’s calculations to this point:

  • It has determined it (Blue) will be on the offensive.
  • It has calculated enemy ROI.
  • It has assigned objectives to the first Battle Group.

Flanking algorithm published in “Algorithms for Generating Attribute Values for the Classification of Tactical Situations.”

Now the AI needs to determine if the enemy has an ‘open’ or ‘unanchored’ flank. In Algorithms for Generating Attribute Values for the Classification of Tactical Situations I published the Algorithm for the Flanking Attribute Value Function (right). It basically comes down to this: can the AI trace an unbroken path from the center of the Blue Battle Group to a specific point (called the Retreat Point) far behind enemy lines without crossing into ‘No Go Areas’ (water, swamp) or entering any area controlled by Red’s ROI (literally the red areas in the topmost screen shot)?
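Under those rules the path test can be as simple as a breadth-first search over the map grid. The sketch below is my own bare-bones reconstruction; the grid encoding and names are assumptions, not the published algorithm:

```python
from collections import deque

# Bare-bones sketch of the flanking test described above: is there an
# unbroken path from the Battle Group's center to the Retreat Point
# that avoids No Go terrain and every cell under Red's ROI?
# Grid encoding (invented for this sketch): 0 = passable,
# 1 = No Go (water, swamp), 2 = inside Red's Range of Influence.

def flank_is_open(grid, start, retreat_point):
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == retreat_point:
            return True  # Red has an exposed flank
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False  # every route is blocked: the flanks are anchored
```

Note that a one-pixel strip of passable cells along the map edge is, to a search like this, just another legal corridor – which is exactly what the AI found and exploited above.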

The reason I was using Antietam as a test case for anchored/unanchored flanks is that years ago I had analyzed the battle for my doctoral thesis and knew it to be a classic example of anchored flanks: Lee’s left flank rests on the Potomac and his right flank is anchored on Antietam Creek. Granted, the Confederate flanks were held by Stuart’s cavalry with a little horse artillery support, but they were still, by definition, anchored flanks.

Due to an error in the data that made up the Antietam terrain map, a one-pixel (about 3.8 meters wide) strip of ‘no terrain’ was inserted at the far right-hand edge of the map (see the blow-up of the screen capture, right; it’s the thin line between the water, represented in red, and the brown edge of the map). This meant there was a ‘land bridge’ across Antietam Creek where none existed in real life. A digital parting of the Red Sea, if you will. But by the rules of the game the AI perfectly performed its function. There was no error in the AI – again, the AI performed better than I had dared hope – the error was in the data set.

And that’s how fifty years from now I can see a cyber-detective standing over the chalk marks around a body saying, “Yeah, the machine performed perfectly, brilliantly, in fact. But, the error in the data set killed him.”

It’s already happened in real life. For cars with autopilot, the data set of the world in which they operate is crucial. However, “against a bright spring sky, the car’s sensor system failed to distinguish a large white 18-wheel truck and trailer crossing the highway,” Tesla said. The car attempted to drive full speed under the trailer, “with the bottom of the trailer impacting the windshield of the Model S.” The driver died. The AI functioned perfectly. But the error in the data set killed him.

So, I fixed the error in the data set (probably caused by not using the right values in Inkscape when I converted the Antietam Water.bmp into paths), imported it back into the Antietam map using the General Staff Map Editor, saved it out, ran the AI Editor again and saw this:

The AI did not display a yellow path from the center of the Blue Battle Group to the Red Retreat Point because none existed. Instead, it just wrote the first Predicate Statement in the Tactical Analysis stack: “Red’s flanks are anchored”.

Again, the machine was performing perfectly. And its results were no longer surprising.

Addendum

I recently got to experience this again (though this time it was caused by a different data bug) when I was reviewing the AI’s decisions at the battle of Manassas:

Because the Range of Influence was not calculating the very bottom row, the AI found another, perfectly legal, way to reach its goal. Screen shot from the General Staff AI Editor.

In this instance, the error in the database was caused by the Range of Influence (basically a map of what Red and Blue can see and hit) not calculating the very last row. Consequently, the AI was able to legally trace a path from the Blue forces in the northeast to their goal at the bottom of the map.
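This is the classic off-by-one bug. Here is my reconstruction of how such a gap could creep into an ROI pass; it illustrates the bug class, not the actual General Staff code:

```python
# Invented reconstruction of the off-by-one that left the bottom row of
# the Range of Influence uncalculated. Python's range(stop) is already
# exclusive, so the buggy bound silently skips the last row, which then
# reads as "uncontrolled" and becomes a legal corridor for pathfinding.

def compute_roi_buggy(height, width, batteries, roi_at):
    roi = [[0.0] * width for _ in range(height)]
    for row in range(height - 1):        # BUG: last row never computed
        for col in range(width):
            roi[row][col] = sum(roi_at(b, row, col) for b in batteries)
    return roi

def compute_roi_fixed(height, width, batteries, roi_at):
    roi = [[0.0] * width for _ in range(height)]
    for row in range(height):            # FIX: include every row
        for col in range(width):
            roi[row][col] = sum(roi_at(b, row, col) for b in batteries)
    return roi
```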

After this bug was corrected the AI performed as expected:

The AI correctly sees going around Red’s left flank as the solution to the problem. Screen shot from the General Staff AI Engine.

In the above screen shot the AI has demonstrated the correct solution to the tactical problem facing Blue at Manassas on July 20, 1861 (the day before the actual battle). Red’s left flank is unanchored. It’s wide open. Note how the AI identifies the one choke point (Sudley Springs Ford) in the plan.

So, the AI surprised me again. I think it’s looking pretty good. When you play against it, watch your flanks.

Ty Bomba’s Primer on Strategy & Tactics

Legendary wargame designer Ty Bomba.

I can think of no better introduction for Ty Bomba than his Wikipedia entry: “Ty Bomba is a prolific wargame designer from the United States. He is credited as the designer of over 125 board games or game items. At times between 1976 and 1988, Bomba held a security clearance as a certified Arabic and Russian linguist for the US Air Force, US Army, and the National Security Agency. In 1988, he was elected to the Charles Roberts Awards Hall of Fame. He was previously a senior editor at Strategy & Tactics Press. Bomba was co-founder and designer for XTR Corporation, a company that existed between 1989 and 2001.” In other words: a very impressive career in wargame design and in strategic and tactical thinking.

Ty recently posted his Primer on Strategy & Tactics on Facebook and I asked his permission to repost it here, which he very kindly gave. I have spent much of my professional career trying to create computer algorithms for military tactics and strategy (a subject that I call ‘computational military reasoning’ and have written extensively about here). Ty has very succinctly stated much of what I’ve attempted to accomplish in his Primer below. Ty can be found on Facebook as ‘Ty Bomba’.

Ty Bomba’s Primer on Strategy and Tactics

“Everything in strategy is very simple, but that does not mean everything is very easy.” – Carl von Clausewitz

Strategy Defined
A plan or policy intended to achieve a major or overall aim, and having to be achieved in the face of opposition from others. All strategy is a contextual interpretation of a problem and a compromised rationalization of a solution. There are no formulas to end the tensions inescapably imposed by uncertain intentions, faulty assumptions, unknown capabilities and vaguely understood risks.

Laws of Strategy

  1. Know your own capabilities.
  2. Know your opponent’s capabilities and objectives.
  3. Pit your strengths against your opponent’s weaknesses.
  4. Prevent your opponent from pitting his strengths against your weaknesses.
  5. Never pit your strengths against your opponent’s strengths.
  6. Maintain an emergency reserve of five to 25 percent of your strength.
  7. Keep in mind your desired end-state: only do things that move you closer to it.
  8. Never repeat an already failed strategy with the expectation of getting a better result from it.
  9. The overarching objective of your strategy should be to create a state of surprise in your opponent. That uncertainty will delay, and otherwise make less efficient, his countermoves. That is a force multiplier for you.

Common Reasons for Strategic Failure

  1. Overconfidence due to previous successes.
  2. Analyzing information only after sifting it through the filter of dogma.
  3. Operating with insufficient reserves.
  4. Mirror imaging – using one’s own rationales to interpret the actions or intentions of an opponent – is the most common fault among decision makers.
  5. Objectives not well explained to those below the highest level of command.
  6. Objectives not adjusted according to new data coming from the operational environment.
  7. Unanticipated outside influences.

Tactics Defined
An action intended to achieve a specific end, undertaken while in contact with the enemy.

Laws of Tactics

  1. Always seek to control the local high ground or its aerial or outer space equivalent.
  2. Move in short bounds from cover to cover so as not to be caught in the open by your opponent.
  3. Maneuver so as to engage your opponent on his flank or from behind, and so as to prevent him from engaging you in that way.
  4. Don’t confuse “concealment” with “cover.” The former only gets you out of sight; the latter also offers protection from enemy fire.

Juncture of Tactics & Strategy
Your superior strategy can make up for your poor tactics; however, your superior tactics will not make up for your poor strategy. As Sun Tzu put it: “Good strategy combined with poor tactics is the slowest route to victory; good tactics combined with poor strategy is just so much noise before your final defeat.”

Surprise
Surprise is a state of confusion in your opponent, induced by your introducing the unexpected. At the strategic level, surprise is often viewed as the tool of the weaker side, as the stronger side has the option of simply applying greater force. At the tactical level, surprise is considered a force multiplier for the side causing it by creating a temporary period of confusion and vulnerability in the surprised force. Having multiple objectives lies at the heart of creating surprise in an opponent.

The Most Difficult Thing
The most difficult thing in a dynamic situation is to know when to change strategies. If you do it too soon or too often, you’re not a strategist; you’re an opportunist. If you do it too late, or refuse to do it no matter what, again you’re not a strategist; you’re a fanatic. Opportunists and fanatics are both easily defeated by good strategists.

Feeding the Machine

The famous Turing Machine [1] was a thought experiment and, until recently, did not physically exist. [2] When computer scientists talk about machines we don’t mean the “lumps of silicon that we use to heat our offices” (thanks, Mike Morton, for this wonderful quote) but, rather, the software programs that actually do the computing. When we talk about Machine Learning we don’t think that the physical hardware actually learns anything. This is because, as Alan Turing demonstrated in the paper cited below, the software functions as a virtual machine; albeit much more efficiently than a contraption with pens, gears, rotors and an infinitely long paper strip.

When I talk about “feeding the machine,” I mean giving the program (the AI for General Staff is called MATE: Machine Analysis of Tactical Environments; the initial research was funded by DARPA) more data to learn from. Yesterday, the subject at machine learning school was Quatre Bras.

Screen shot of the General Staff AI Editor after analysis of Quatre Bras and calculation of the flanking Schwerpunkt, or point of attack (blue square).

The MATE tactical AI algorithms produce a plan of attack around a geographic point on the battlefield that has been calculated and tagged as the Schwerpunkt, or point where maximum effort is to be applied. In the above (Quatre Bras) scenario the point of attack is the extreme left flank of the Anglo-Allied (Red) army. I apply the ‘reasonableness test’ [3] and think, “Yes, this looks like a very reasonable plan of attack – a flanking maneuver on the opponent’s unanchored left flank – and, in fact, a better plan than what Marshal Ney actually executed.”

It would be good at this point to step back and talk about the differences between ‘supervised’ and ‘unsupervised’ machine learning and how they work.

Supervised machine learning employs training methods. A classic example of supervised learning is Netflix’s (or any other TV app’s) movie recommendations. You’re the trainer: every time you pick a movie you train the system to your likes and dislikes. I don’t know if Netflix, or any of the others, uses a weighting for how long you watched (the percentage watched of the show’s total length), but that would be a good metric to add in, too. Anyway, that’s how those suggestions get flashed up on the screen: “Because you watched Das Boot you’ll love The Sound of Music!” Well, yeah, they both got swastikas in them, so… [4]
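If someone did add that completion-fraction weighting, a toy version might look like the following. This is entirely my invention; no claim is made about how Netflix or anyone else actually does it:

```python
# Toy supervised-learning update weighted by how much of a show the
# viewer actually finished. Invented for illustration only.

def update_preferences(prefs, tags, minutes_watched, total_minutes):
    """Reinforce each of the movie's tags by the fraction watched."""
    completion = minutes_watched / total_minutes  # 0.0 .. 1.0
    for tag in tags:
        prefs[tag] = prefs.get(tag, 0.0) + completion
    return prefs

prefs = {}
update_preferences(prefs, ["submarine", "wwii"], 140, 149)  # finished it
update_preferences(prefs, ["musical", "wwii"], 15, 172)     # bailed early
# prefs now ranks "wwii" and "submarine" far above "musical".
```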

Supervised machine learning uses templates and reinforcement: the more the user picks this thing, the more the user gets this thing. MATE is unsupervised machine learning. It doesn’t care how often a user does something; it cares about always making an optimal decision within an environment that it can compare to previously observed situations. Furthermore, MATE is a series of algorithms that I wrote and that I adjust after seeing how they react to new scenarios. For example, in the above Quatre Bras scenario, MATE originally suggested an attack on Red’s right flank. This recommendation was probably influenced by the isolated Red infantry unit (1st Netherlands Brigade) in the Bois de Bossu woods. After seeing this I added a series of hierarchical priorities, ranking “a flank attack into a woods (or swamp) is not as optimal as an attack on an exposed flank over clear terrain” above pouncing on an isolated unit (see the sketch below). And so I, the designer, learn and MATE learns.
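That hierarchy can be expressed as a simple ordered scoring of candidate attack types. The sketch below uses invented labels and weights, not MATE’s actual rule set:

```python
# Sketch of the hierarchical priority added after the Quatre Bras run:
# an exposed flank over clear terrain outranks a flank attack through
# woods or swamp, which outranks merely pouncing on an isolated unit.
# Labels and weights are invented for illustration.

PRIORITY = {
    "exposed_flank_clear_terrain": 3,
    "flank_attack_through_woods_or_swamp": 2,
    "isolated_enemy_unit": 1,
}

def best_course_of_action(candidates):
    """candidates: list of (label, details). Highest priority wins."""
    return max(candidates, key=lambda c: PRIORITY.get(c[0], 0))

plan = best_course_of_action([
    ("isolated_enemy_unit", "1st Netherlands Brigade in the Bois de Bossu"),
    ("exposed_flank_clear_terrain", "Red's unanchored left flank"),
])
# -> Red's unanchored left flank is chosen, matching the corrected AI.
```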

My main concern is that MATE must be able to ‘take care of itself’ out there, ‘in the wild’, and make optimal decisions when presented with previously unseen tactical situations. This is not writing an AI for a specific battle. This is a general-purpose AI, and it is much more difficult to write than a battle-specific AI. One of the key aspects of the General Staff Wargaming System is that users can create new armies, maps and scenarios. MATE must make good decisions in unusual circumstances.

Previously, I have shown MATE’s analysis of 1st Bull Run and Antietam. Below is the battle of Little Bighorn in the General Staff AI Editor:

The battle of Little Bighorn in the General Staff AI Editor. Normally the MATE AI would decline to attack. However, when ordered to attack, this is MATE’s optimal plan.

I would like to expose MATE to at least thirty different tactical situations before releasing the General Staff Wargame. This is a slow process. Thanks to Glenn Frank Drover of Forbidden Games, Inc. for donating the superb Quatre Bras map. He also gave us maps for Ligny and Waterloo which will be the next two scenarios submitted to MATE. We still have a way to go to get up to thirty. If anybody is interested in helping to create more scenarios please contact me directly.

References
1. It was first described in Turing’s “On Computable Numbers, with an Application to the Entscheidungsproblem” (1936), which can be downloaded here: https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf. Also, a very good book on the subject is Charles Petzold’s The Annotated Turing: A Guided Tour through Alan Turing’s Historic Paper on Computability and the Turing Machine.
2. Yes, somebody has built one, and you can see what Turing described here: https://www.youtube.com/watch?v=E3keLeMwfHY
3. Thank you, Dennis Beranek, for introducing me to the concept of the ‘reasonableness test’. See https://www.general-staff.com/schwerpunkt/ for an explanation.
4. Part of the problem with Netflix’s system is that they hire out-of-work scriptwriters to tag each movie with a number of descriptive phrases. Correctly categorizing movies is more complex than this.