Tag Archives: AI

Computational Military Reasoning (Tactical Artificial Intelligence) Part 1

I coined the phrase ‘computational military reasoning’ in grad school to describe what my doctoral thesis in computer science was about. ‘Computational reasoning’ is a formal method for solving problems (technically, you don’t even need a computer); for our purposes it means a computer solving ‘human-level’ problems. A classic example is calculating the fastest route on a map between two points. In computer science we call this a ‘least weighted path’ algorithm; I did my Q (Qualifying) Exam on this subject and have written extensively about it, including in these blog posts.

So, ‘computational military reasoning’ is a computer solving human-level military problems. We can divide computational military reasoning into two subcategories, strategic and tactical (Russian military doctrine also adds a third category, ‘grand strategy’); for now, let’s concentrate on tactical artificial intelligence: battlefield decisions.

Tactical AI is divided into two parts: analyzing, or reading, the battlefield, and acting on that information by creating a set of coherent orders (commonly known as a COA, or Course of Action) that exploits the weaknesses in the enemy’s position found during that battlefield analysis.


It is said that as Napoleon traveled across Europe with his staff he would question them about the terrain they were passing: “Where is the best defensive position? What are the best attack routes? Where would you position artillery? What ground is favorable for a cavalry attack?”

We take it for granted that such analysis of terrain, and of the opposing forces positioned upon it, is a skill that can be taught to humans. My doctoral research [1] successfully demonstrated the hypothesis that an unsupervised machine learning program could also learn this skill and perform battlefield analysis that was statistically indistinguishable [2] from analyses performed by Subject Matter Experts (SMEs) such as instructors at West Point and active duty combat command officers.

Supervised & Unsupervised Machine Learning

Netflix recommendations come from a supervised learning program. Every time you ‘like’ a movie the program ‘learns’ that you like, for example, documentaries. Any program that has you ‘like’ or ‘dislike’ offerings is a supervised learning program: you are the supervisor, and by clicking ‘like’ or ‘dislike’ you are teaching the program.

TIGER is an unsupervised machine learning program, which means it has to figure everything out for itself. Rather than being taught, TIGER is ‘fed’ a series of ‘objects’ that have ‘attributes’, and it sorts them into like categories. For TIGER, the objects are snapshots of battlefields.

Screen capture from TIGER. An ‘object’ has been loaded into TIGER for analysis; in this case a ‘snapshot’ of the battle of Antietam at 1630 hours on September 17, 1862. Click to enlarge.

How TIGER perceives the battlefield

When you and I look at a battlefield our brains, somehow, make sense of all the MIL-STD-2525B icons scattered around the topographical map. I don’t know how our brains do it, but this is how TIGER does it:

Screen capture from TIGER: How TIGER converts unit positions into lines and frontages using a Minimum Spanning Tree (MST). Click to enlarge.

By combining 3D Line of Sight and Range of Influence (how far weapons can fire and how accurate they are at greater distances, displayed above with lighter colors) with a Minimum Spanning Tree algorithm [3], TIGER produces the image above: this is how TIGER ‘sees’ the battlefield of Antietam. It is an important first step in evaluating object attributes.
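For readers who want to see the idea in code, here is a minimal sketch of Kruskal’s algorithm grouping unit positions into a line. This is an illustration with invented coordinates, not TIGER’s actual source; in practice, unusually long MST edges are natural places to cut the tree into separate frontages:

```python
import math
from itertools import combinations

def kruskal_mst(points):
    """Minimum Spanning Tree over unit positions via Kruskal's algorithm.
    points: list of (x, y) coordinates. Returns MST edges as ((i, j), length)."""
    parent = list(range(len(points)))

    def find(i):  # union-find 'find' with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Consider every possible edge, shortest first; greedily keep each edge
    # that joins two units not already connected (i.e., creates no cycle).
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    mst = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append(((i, j), d))
    return mst

# Invented positions: four units in a rough line plus one detached unit.
units = [(0, 0), (1, 0.2), (2, 0.1), (3, 0.4), (2.5, 3.0)]
for (i, j), d in kruskal_mst(units):
    print(f"unit {i} -- unit {j} ({d:.2f})")
```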

How to determine the attribute of anchored or unanchored flanks

Battlefields are ‘objects’ that are made up of ‘attributes’. One of these attributes is the concept of anchored and unanchored flanks. While anyone who plays wargames probably has a good idea of what is meant by a ‘flank’, following formal scientific methods I first had to prove that there was agreement among Subject Matter Experts (SMEs) on the subject. This is from one of the double-blind surveys given to SMEs:

Screen shot from online double-blind survey of Subject Matter Experts on identifying the presence of Anchored and Unanchored Flanks. Click to enlarge.

And their responses to the situation at Antietam:

Subject Matter Experts response to the question of the presence of Anchored or Unanchored flanks at Antietam. Click to enlarge.

And another double-blind survey question asked of the SMEs about anchored or unanchored flanks at Chancellorsville:

Response to double-blind survey question asked of SMEs about anchored and unanchored flanks at Chancellorsville. Click to enlarge.

So, we have now shown that there is agreement among Subject Matter Experts about the concept of ‘anchored’ and ‘unanchored’ flanks and, furthermore, that some battlefields exhibit this attribute and others don’t.
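The actual algorithm is described in the DARPA slides below; purely as an illustrative stand-in (this is not MATE’s method, and every name and number in it is invented), one could flag a flank as ‘anchored’ when the end of a line rests close enough to impassable terrain:

```python
import math

def flank_is_anchored(flank_pos, impassable_cells, anchor_radius=2.0):
    """Purely illustrative heuristic (NOT MATE's actual algorithm): call a
    flank 'anchored' if the endmost unit of a line sits within anchor_radius
    of impassable terrain (a river, a lake, the map edge, etc.)."""
    return any(math.dist(flank_pos, cell) <= anchor_radius
               for cell in impassable_cells)

# Invented example: a line of impassable river cells along y = 10.
river = [(x, 10.0) for x in range(50)]
print(flank_is_anchored((3.0, 9.5), river))   # True  -> anchored on the river
print(flank_is_anchored((3.0, 2.0), river))   # False -> open (unanchored)
```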

Following is a series of slides from a debriefing presentation that I gave to DARPA (the Defense Advanced Research Projects Agency) as part of my DARPA funded research grant (W911NF-11-200024), describing the algorithm that MATE (the successor to TIGER) uses to calculate whether a flank is anchored or unanchored and how to tactically exploit this situation with a flanking maneuver. This briefing is not classified. Click to enlarge slides.

How to generate a Course of Action for a flank attack

Once TIGER / MATE has detected an ‘open’ or unanchored flank, it plots a Course of Action (COA) to maneuver its forces to perform either a Turning Maneuver or an Envelopment Maneuver. Returning to the previous DARPA debriefing presentation (click to enlarge):

MATE analysis of the battle of Marjah (Operation Moshtarak February 13, 2010)

The following two screen captures are part of MATE’s analysis of the battle of Marjah, suggesting an alternative COA (an envelopment maneuver) to the direct frontal assault that the U.S. Marine force actually performed at Marjah. Click to enlarge:

Conclusions & Comments about Computational Military Reasoning (Tactical Artificial Intelligence) & Battlefield Analysis (Part 1)

Usually, at this point when I give this lecture, I look out to my audience and ask for questions. I really don’t want to lose anybody and we’ve got a lot more Tactical AI to talk about. So far, I’ve only covered how my programs (TIGER / MATE) analyze a battlefield in one particular way (does my enemy – OPFOR in military terms – have an exposed flank that I can pounce on?) and there is a lot more battlefield analysis to be performed.

It’s easy, as a computer scientist, to use computer science terminology and shorthand when explaining algorithms. But I worry that the non-computer scientists in the audience will not quite get what I’m saying.

Do you have any questions about this? If so, I would really like to hear from you. I’ve been working on this research for my entire professional career (see A Wargame 55 Years in the Making) and, frankly, I really like talking about it. As a TA said to me many years ago when I was an undergrad, “There are no stupid questions in computer science.” So, please feel free to write to me either using our built-in form or by emailing me at Ezra [at] RiverviewAI.com.

References

1. TIGER: An Unsupervised Machine Learning Tactical Inference Generator; this thesis can be downloaded free of charge here.
2. Using a one-sided Wald test resulted in p = 0.0001. In other words, it was extremely unlikely that TIGER was ‘guessing correctly’.
3. Kruskal’s algorithm, https://en.wikipedia.org/wiki/Kruskal%27s_algorithm

A Wargame 55 Years in the Making (Part 4)

It’s easy to say that you want to create an artificial intelligence capable of making human-level tactical decisions, but that’s just not how it’s done in academia. First off, the term ‘human-level’ is vague. And then there’s the question of how you prove your claim. I am indebted to Professor (now Dean) Joe Kearney who proposed the following hypotheses to state my doctoral thesis:

Hypothesis 1: There is agreement among military experts that tactical situations exhibit certain features (or attributes) and that these features can be used by SMEs (Subject Matter Experts) to group tactical situations by similarity.

Hypothesis 2: The best match by TIGER (the Tactical Inference Generator program) of a new scenario to a scenario from its historical database predicts what the experts would choose.

In other words, a preponderance of SMEs will describe a tactical situation in the same way (like ‘Blue has a severely restricted line of retreat’ or ‘Red has anchored flanks’), and a computer program can be written that will describe the same battlefield in the same way as the human experts. And if TIGER correctly predicts what the SMEs would choose in four out of five tests (a one-sided Wald test resulted in p = 0.0001, which is statistically significant), you get a PhD in Computer Science. By the way, I am also indebted to Dr. Joseph Lang of the Department of Statistics and Actuarial Science at the University of Iowa, who calculated the p value.

An example of an online survey tactical description question asked of Subject Matter Experts. Image from Sidran’s TIGER: A Tactical Inference Generator. Click to enlarge.

The results of the above survey question: 100% of SMEs agree that RED’s left flank is anchored on the Potomac; 79% agree that RED’s right flank is anchored on the Antietam. Image from Sidran’s TIGER: A Tactical Inference Generator. Click to enlarge.

The descriptors (features or attributes) that were identified by the SMEs included Anchored Flanks, Unanchored Flanks, Restricted Avenues of Attack, Unrestricted Avenues of Attack, Restricted Avenues of Retreat, Unrestricted Avenues of Retreat and Interior Lines. If you are interested in the methodology and algorithms there are links at the end of this post.

Now that the features have been identified (and algorithms written and tested that return a value representing the extent of each attribute), the next step, separating battlefield situations into categories, is creating a machine learning classifier program.

There are two kinds of machine learning programs: supervised and unsupervised. Imagine two kinds of fish coming down a conveyor belt with a human being watching this on a TV with two buttons to push. If he pushes the button on the left, the fish is classified as, say, ‘tuna’; if he pushes the button on the right, the fish is classified as a ‘catfish’. (Why tuna and catfish are coming down this conveyor belt is beyond me, but please stay with the explanation.) In this way the program is taught the difference between a tuna and a catfish (tuna are bigger and longer). This is called supervised learning and is the method used by Netflix and Spotify to select movies or songs that are similar to choices you have previously made. I don’t like supervised systems because they have to be ‘trained’ and, in my opinion, have a tendency to oversimplify classification problems (for example, Netflix movie suggestions are usually awful).

Unsupervised machine learning works differently: there are a number of ‘objects’ that are described with certain ‘attributes’. These objects are presented to the ‘machine’ and the machine separates them into categories based on the likelihood (probability) that they belong together and then displays the results in a hierarchical tree structure. This is how the TIGER program works. The ‘objects’ are battlefield maps and the attributes are things like ‘anchored flanks’ and ‘restricted lines of attack’.
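To make that concrete, here is a toy example, not TIGER’s code, of battlefields represented as attribute vectors and matched by similarity; the attribute values are invented:

```python
# Toy illustration (not TIGER's code): battlefields as attribute vectors,
# matched by similarity. All attribute values here are invented.
battles = {
    "Antietam 0600":  {"anchored_flanks": 1.0, "restricted_retreat": 0.8},
    "Antietam 1630":  {"anchored_flanks": 0.9, "restricted_retreat": 0.9},
    "Lake Trasimene": {"anchored_flanks": 0.2, "restricted_retreat": 1.0},
}

def similarity(a, b):
    """Similarity = negative Euclidean distance over the attribute values."""
    total = sum((a[k] - b[k]) ** 2 for k in a)
    return -(total ** 0.5)

new_battlefield = {"anchored_flanks": 0.95, "restricted_retreat": 0.82}
best = max(battles, key=lambda name: similarity(battles[name], new_battlefield))
print("Most similar historical snapshot:", best)   # -> Antietam 0600
```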

In the image, below, one branch of a tree structure of classified battles (both real and hypothetical) is displayed:

TIGER has classified four battlefield snapshots (Lake Trasimene 217 BC, Antietam 0600 hours, Antietam 1630 hours, and a test battlefield submitted to TIGER and the SMEs) as being similar. Note how TIGER sees the two Antietam snapshots as ‘more similar’ and puts them in their own node. Image taken from TIGER: An Unsupervised Machine Learning Tactical Inference Generator. Click to enlarge.

The method that TIGER uses to classify battlefields is Gennari, Fisher and Langley’s ClassIT algorithm, which is explained in detail in my thesis (link below). Basically, a number of objects are ‘fed’ to the machine (in computer science the terms ‘machine’ and ‘program’ are synonymous) and every time the machine ‘consumes’ an object it asks itself, “Does this new object belong in a previously existing category, or in a new category? Should I combine two existing categories and then add this new object, or should I split an existing category and add this new object to one of the results?” Caveat: just as I explained in the previous blog post, computers don’t actually ‘say’ or ‘ask’ anything, but it’s easier to explain these processes using those terms. This is unsupervised because there is no human involvement or training. And there is no limit to the number of objects (battlefields) that can be shown to the program. TIGER is constantly learning.
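The full ClassIT loop, with its ‘category utility’ scoring, is in the thesis. As a greatly simplified sketch of the incremental idea, here is a toy version that only chooses between the first two operations, joining the nearest existing category or starting a new one; the threshold and attribute values are invented:

```python
import math

def incorporate(categories, obj, new_cat_threshold=0.3):
    """categories: list of [centroid, members]; obj: tuple of attribute values.
    Toy version of ClassIT's choice: join the nearest existing category or
    start a new one. (Real ClassIT also merges and splits categories, and
    scores every option with 'category utility'.)"""
    if categories:
        nearest = min(categories, key=lambda c: math.dist(c[0], obj))
        if math.dist(nearest[0], obj) <= new_cat_threshold:
            nearest[1].append(obj)            # join the existing category
            n = len(nearest[1])               # update its centroid incrementally
            nearest[0] = tuple((c * (n - 1) + x) / n
                               for c, x in zip(nearest[0], obj))
            return
    categories.append([tuple(obj), [obj]])    # start a new category

# Invented attribute vectors, e.g. (anchored flanks, restricted retreat):
cats = []
for snapshot in [(0.9, 0.8), (0.95, 0.85), (0.2, 1.0)]:
    incorporate(cats, snapshot)
print(len(cats), "categories")   # -> 2: the two similar snapshots share one
```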

Below is an example of a blind survey question given to more than 20 SMEs to validate TIGER’s ability to predict what the majority of SMEs would choose. My good friend Ralph Sharp, who has worked on the art for many of my games, did the hypothetical battlefield maps.

An example of the blind survey questions asked of SMEs: is the hypothetical battlefield situation on the top more like the historical battlefield in the middle (Kasserine Pass) or the historical battlefield at the bottom (Gettysburg)? Click to enlarge.

The results show that a statistically significant number of SMEs agree that the hypothetical battlefield situation most closely resembles Kasserine Pass.

The results from the above blind survey question. The SMEs overwhelmingly state that the battle of Kasserine Pass most resembles the hypothetical battle situation. The TIGER program also chose Kasserine Pass. Click to enlarge.

Once again this week’s post ran longer than I anticipated. It looks like this story won’t conclude until at least Part 5. It has been said that by the time a PhD dissertation is defended only five people in the world are capable of understanding it. I certainly hope that wasn’t the case with my research. Below is a link to download a PDF of my thesis. Please feel free to contact me directly if I can answer any questions.

Lastly, my good friend Mike Morton, sent me a link to this piece just before my defense:  The “Snake Fight” Portion of Your Thesis Defense. Anybody thinking of getting a PhD should probably read this first (and laugh and then cry).


Papers that were cited in this post with download links:

“Algorithms for Generating Attribute Values for the Classification of Tactical Situations.”

In PDF Format

TIGER: An Unsupervised Machine Learning Tactical Inference Generator.

In PDF Format


A Wargame 55 Years in the Making (Part 2)

After The War College I created a couple of non-wargames, including Online Mysteries, a massively multiplayer online mystery game written for AOL’s WorldPlay. WorldPlay was envisioned as a 3D online world populated with avatars. It was similar in concept to Second Life but, like a lot of great ideas, it was ahead of its time. AOL shut WorldPlay down before most of the games, including Online Mysteries, launched.

Screen shot (Windows) of Mysteries Unlimited, a massively multiplayer online mystery game created for AOL/WorldPlay (click to enlarge).

By 2000 the game publishing industry was going through another convulsion of consolidations, buyouts and contractions. Publishers were producing fewer games, but the ones that were being created had large teams, long development cycles and massive budgets. The days when an independent developer could pitch a game idea, get an advance and then develop it outside of a publisher’s studio were gone. And the last thing the big publishers were interested in was wargames.

Over the previous fifteen years I had received inquiries from active duty military and Pentagon project managers about my wargames (known as Commercial Off The Shelf, or COTS, in Pentagon-lingo) and about whether I would be available to consult on various wargaming projects. Unfortunately, I was lacking a key prerequisite for this: a doctorate. I returned to academia, first to a small local college where I also taught computer game design, and in 2003 I was accepted into the computer science PhD program at the University of Iowa (one of the oldest computer science departments in the world).

Before I ever set foot in MacLean Hall (the home of the Department of Computer Science at the University of Iowa) I knew what I would spend the next six years of my life researching and studying: tactical and strategic AI (I would eventually coin the phrase ‘computational military reasoning’ to describe this field).  What I soon discovered was that very little work had ever been done in this research area. What was even more surprising was my discovery that most ‘professional’ military wargames (i.e. wargames used by the US Army, NATO, England, Australia, France, etc.) had absolutely no AI whatsoever. Instead, they employed ‘pucksters’ (usually retired military officers) who made all the moves for OPFOR (Opposition Forces, AKA ‘the enemy’) from another computer in another room.

Pucksters are humans (usually retired military officers) who make the decisions and moves for enemy (Opposition Forces = OPFOR) units during a wargame. Note the sign OPFOR & EXCON (Exercise Control) over the puckster’s work station.

To earn a doctorate at an American ‘Research One’ university requires 90 graduate credits (about 30 classes), a GPA > 3.5 (out of 4.0) and passing three major examinations. The first examination on the road to a doctorate is the Qualifying Examination (or Q Exam, as everyone calls it). The topic of my Q exam was “An Analysis of Dimdal’s (ex-Jonsson’s) ‘An Optimal Pathfinder for Vehicles in Real-World Terrain Maps’.” This is the area of computer science and graph theory known as ‘least weighted path’ algorithms. GPS devices and map apps use a least weighted path algorithm, except they’re only interested in roads; they don’t consider terrain, slope and other things that are important to a military unit maneuvering on a battlefield.

Now, if you were to wander into the ivied halls of academic computer science (like MacLean Hall) and inquire of a tenured faculty member how to calculate the fastest path between two points on a sparse grid, they would almost certainly reply, “Dijkstra’s algorithm.” Dr. Dijkstra invented his algorithm in 1956 and it works like this: starting at the origin, methodically expand outward, calculating the shortest distance to every point on the map, until you finally reach the destination. It is exhaustive and painfully slow. In fact, it’s so slow that it isn’t used for GPS or game AI. In computer science we use ‘Big O’ notation to describe how long an algorithm takes to run. The classic implementation of Dijkstra’s algorithm runs in O(|V|²). This means that as the number of vertices, or points on the map (that’s the |V| part), increases, the time it takes for the algorithm to complete goes up with the square of that number. In other words, as the map gets bigger the algorithm gets a lot slower.

Dimdal, I and most of the gaming world do not use Dijkstra’s algorithm. Instead we use A* (pronounced ‘A Star’), which was designed in 1968 primarily by Nils Nilsson with later improvements by Peter Hart and Bertram Raphael. Below is an example of A* used in General Staff (note that the algorithm doesn’t look at every point on the map, just the ones it thinks are relevant to the problem at hand). With a good heuristic, A* examines only a small fraction of the map, so in practice its running time grows roughly with the length of the path rather than with the square of the map.

A screen shot of the A* algorithm running. The green areas are where the algorithm searched for a least weighted path; the brown line is the shortest path (mostly following a road).

Graph showing the difference between Dijkstra’s algorithm and the A* algorithm. The blue line that increases rapidly shows that Dijkstra’s algorithm takes much more time as the map gets bigger. A* (the green line) is not affected as much by the size of the map.
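For the programmers in the audience, here is a compact sketch of A* on a small grid. This is an illustration with invented terrain costs (1 for open ground, 3 for woods, None for impassable river cells), not the General Staff implementation:

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 2D grid. grid[y][x] is the cost of entering a cell
    (None = impassable). The heuristic is Manhattan distance, which never
    overestimates for 4-way movement with costs >= 1."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]   # (f, g, cell, path so far)
    best_g = {}
    while open_set:
        f, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path                          # cheapest path found
        if best_g.get(cell, float("inf")) <= g:
            continue                             # already reached more cheaply
        best_g[cell] = g
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) \
                    and grid[ny][nx] is not None:
                ng = g + grid[ny][nx]
                heapq.heappush(open_set,
                               (ng + h((nx, ny)), ng, (nx, ny), path + [(nx, ny)]))
    return None                                  # no route exists

# Invented terrain costs: 1 = open ground, 3 = woods, None = river (impassable).
terrain = [[1, 1, 3, 1],
           [1, None, 3, 1],
           [1, None, 1, 1],
           [1, 1, 1, 1]]
print(a_star(terrain, (0, 0), (3, 0)))
```

The crucial difference from Dijkstra’s algorithm is the h() term: by adding an optimistic estimate of the remaining distance to each candidate cell, A* expands toward the goal instead of flooding outward in every direction.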

As part of my research into computational military reasoning I made further modifications to A* to take into account the slope of the terrain (which can affect the speed of some units), the range of enemy units (OPFOR ROI, e.g. areas controlled by enemy artillery) and enemy line of sight (LOS), which is to be avoided. My MATE (Machine Analysis of Tactical Environments) project used all of these options:

A slide from my presentation to DARPA showing how my modified A* avoids enemy range of weapons. The likelihood of taking casualties is indicated by the darkness of the red coloring.
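The exact cost terms and weights that MATE used are documented in the briefing slides; purely as a hedged sketch of the general idea, the per-cell cost that A* minimizes can be inflated by uphill slope, by enemy Range of Influence, and by enemy line of sight. All names and weights below are invented for illustration:

```python
# Illustrative edge-cost function (NOT MATE's actual weights): the per-cell
# cost that A* minimizes grows with uphill slope, with enemy Range of
# Influence (expected casualties), and when the cell is in enemy line of sight.
def edge_cost(base_cost, uphill_slope, enemy_roi, in_enemy_los,
              slope_penalty=2.0, roi_penalty=10.0, los_penalty=3.0):
    cost = base_cost
    cost += slope_penalty * max(0.0, uphill_slope)   # climbing slows units
    cost += roi_penalty * enemy_roi                  # enemy_roi in [0, 1]
    if in_enemy_los:
        cost += los_penalty                          # prefer concealed routes
    return cost

# A road cell under enemy guns can cost more than rough ground out of sight:
print(edge_cost(1.0, 0.0, 0.8, True))    # 12.0
print(edge_cost(2.0, 0.1, 0.0, False))   # 2.2
```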

While working on General Staff I came up with a new optimization of the A* algorithm which I’ve called EZRoadStar. EZRoadStar first looks at the roadnet and attempts to utilize it for rapid troop movement; only after ascertaining how far the roads will take it toward its goal does the algorithm look for off-road paths. EZRoadStar is much faster than traditional A*, especially for wargames and military simulations.
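The production algorithm is more involved, but the core roads-first idea can be sketched as follows. This is a simplified illustration, not EZRoadStar’s actual code, and the two path functions passed in are stand-ins for any road-network shortest path and any off-road A* search:

```python
# Simplified illustration of the roads-first idea (NOT EZRoadStar's actual
# code). road_graph_path and offroad_path are stand-ins: any road-network
# shortest path and any off-road A* (such as the sketch above) would do.
def roads_first_path(start, goal, road_nodes, road_graph_path, offroad_path):
    # 1. Snap the start and the goal to their nearest road nodes.
    def nearest(p):
        return min(road_nodes,
                   key=lambda r: abs(r[0] - p[0]) + abs(r[1] - p[1]))
    entry, exit_ = nearest(start), nearest(goal)
    # 2. Ride the roadnet between the two snap points...
    road_leg = road_graph_path(entry, exit_)
    # 3. ...and only search off-road for the (short) legs at either end.
    return offroad_path(start, entry) + road_leg + offroad_path(exit_, goal)

# Tiny demo with stub path functions that just jump point-to-point:
road = [(0, 0), (5, 0), (9, 0)]
print(roads_first_path((0, 2), (9, 3), road,
                       road_graph_path=lambda a, b: [a, b],
                       offroad_path=lambda a, b: [a, b]))
# -> [(0, 2), (0, 0), (0, 0), (9, 0), (9, 0), (9, 3)]
```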

An example of the EZRoadStar least weighted path algorithm. What’s the fastest way from point A to point B (the yellow arrow)? Taking the road, of course. This algorithm looks at a battlefield like a commander and utilizes the roadnet first before looking at other options. Click to enlarge.

Well, this wargame may be 55 years in the making, and it looks like describing some of the key things that went into it may take almost as long. Clearly, I’m going to have to continue this story with yet another post. We’ve just barely scratched the surface of my work on wargame AI. The next installment will (hopefully) cover algorithms for ‘the five canonical offensive maneuvers’ (i.e. the Envelopment Maneuver, the Turning Maneuver, Penetration, Infiltration and Frontal Assault). These are the algorithms that are ‘under the hood’ of General Staff. If any of my readers would like to know more about these topics (links to my published papers on the subject or whatever), please drop me a line at Ezra [at] RiverviewAI.com.


A Wargame 55 Years in the Making

I was introduced to wargames (Avalon Hill, of course) about 55 years ago, when I was seven years old, by my buddy Carl Hoffman, who lived down the street. Carl and I ended up owning every Avalon Hill wargame and we played them all. Eventually Carl and his family moved away (I think Carl became a professor at LSU, but I can’t confirm that) and I was faced with the grim realization that all of us Grognards inevitably confront: there was nobody to play wargames with.

Tactics II from Avalon Hill. The first wargame I played. What was the first wargame you played?

Over the years I would occasionally find someone interested in playing (or, more likely, coerce a friend who really had little interest in wargames) and set up a game, but rarely would we ever complete a full campaign or battle. By the late 1970s playing wargames was, for me, a rare event (the one exception being my good friend from my undergrad college days, Corkey Custer). Then, about this time, I saw an episode of PBS’s NOVA on what was then the infancy of computer graphics (CGI). There, flickering on my black & white TV, was what we would eventually call ‘wire frame’ 3D. Wire frame 3D shows just the edges of 3D objects; they are not filled in with a texture (as CGI is done now). Computers just didn’t have the processing power to pull that off back in the late ’70s and ’80s. But I immediately thought to myself, “this technique could be used to render 3D battlefields!” And that was the spark from which UMS: The Universal Military Simulator was born.

Screen capture of the original UMS running in 640 x 400 x 2 resolution in MS DOS. This is an example of wire frame 3D.

By the mid 1980s I was about to graduate with a bachelor’s degree in computer animation and, more importantly, a working demo of UMS. I had hypothesized that one of the biggest selling points of computer wargames was the ability to always have an opponent handy (the artificial intelligence, or AI). I borrowed a copy of the Software Writers Market from the library and sent out dozens (maybe even hundreds) of letters and, eventually, papered an entire wall of my apartment with rejection letters from game publishers. But Dr. Ed Bever, who designed Microprose’s Crusade in Europe, Decision in the Desert and NATO Commander (pretty much the only computer wargames that were out at the time), saw my pitch letter and gave me a call. Microprose wasn’t interested in publishing UMS per se; rather, their CEO, “Wild Bill” Stealey, wanted me to come work for Microprose (and they would own UMS). That deal didn’t appeal to me, but a short time later, at the 1986 Consumer Electronics Show in Chicago, Ed Bever introduced me to Microprose’s competitor, Firebird/Rainbird. Within 48 hours I had my first game publishing deal.

Crusade in Europe from Microprose (1985), designed by Dr. Ed Bever, original programming by Sid Meier.

UMS sold about 128,000 units and was the #1 game in the US and Europe for a while.

The European Microdealer Top 30 Chart with UMS as #1 with a bullet on the 16-bit game chart. What other classic games can you find on the charts? (Click to enlarge)

Ed Bever helped with the design of UMS II: Nations at War, an extremely ambitious global wargame that had unprecedented detail and allowed the user to edit numerous variables and equations:

UMS II: Nations at War screen shots from the MS DOS version. Click to enlarge.

UMS II screen shot (Atari ST) showing active weather systems.

An example of the numerous variables that the user could adjust in UMS II; in this case, adjusting the attrition level based on experience for ground units. Macintosh screen shot. Click to enlarge.

UMS II was named “Wargame of the Year” by Strategy Plus magazine and enjoyed strong sales. In many ways it was the ‘ultimate’ in complex wargaming. It was far more detailed than any other wargame I’ve seen before or since (and that includes my work on DARPA, Department of Defense, US Army, Office of Naval Research and Modeling Simulation Information Analysis Center (MSIAC) wargames). Ironically, it was published by Microprose, because they had bought out Firebird/Rainbird and, with it, my publishing contract.

My next wargame, in 1993, was The War College. We used data from the US Geological Survey to create the 3D maps. Again, it was very detailed; it allowed the user to edit combat values and featured an interactive, hyperlinked history of each scenario (it shipped with Antietam, Austerlitz, Pharsalus and Tannenberg). Unfortunately, our publisher, GameTek, pretty much ceased to exist just as we were about to release the game. To this day I have no idea how many units it sold; we never got a royalty statement.

Screen shot (MS DOS) of the Austerlitz scenario in The War College. Click to enlarge.

The War College also had an interactive history for each battle (screen shot from MS DOS, click to enlarge).

The War College allowed users to adjust melee effectiveness and values. MS DOS screen shot, click to enlarge.

I would also be terribly remiss if I did not mention my good friends who worked on the various ports of the above-mentioned games: Ed Isenberg (Amiga and MS DOS), Andy Kanakares (Apple IIGS and MS DOS) and Mike Pash (MS DOS).

(left to right) Ed Isenberg, Andy Kanakares, Ezra Sidran. Not pictured: Mike Pash.

One of the problems of getting old is that your stories take longer to tell. This blog post is far longer than I intended, and we haven’t even gotten to this century, with all my research into computational military reasoning (tactical and strategic AI) and my new wargame, General Staff. I’m afraid we’ll have to end this here and pick up the story in Part 2 next week.

Slope Weight Added to Least Weighted Path Calculations

An example of slope avoidance in General Staff. Note how the cavalry unit skirts the edge of the hill on the way to its objective. (Click to enlarge)

We have added a new feature, the cost of traversing up a slope, to our proprietary least weighted path algorithm. This will create even more realistic (and, frankly, more optimal) paths for unit orders. A key element of General Staff is its ability to assist the user in calculating optimal paths for units, so the user only has to click where he wants the unit to go and the AI figures out the rest. In fact, the user doesn’t even have to click on the map, but can select the unit’s destination from a list of objectives.
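To see why slope weighting changes a route (like the cavalry unit skirting the hill in the screen shot above), it is enough to make steep cells more expensive to enter in a standard A* search, such as the one sketched in Part 2, above. The costs in this toy demonstration are invented:

```python
# Toy demonstration (invented costs) using the a_star sketch from Part 2,
# above: once steep cells cost more to enter, the least weighted path skirts
# the hill rather than climbing it.
flat, hill = 1.0, 6.0                 # cost to enter flat vs. steep cells
terrain = [[flat, hill, flat],
           [flat, hill, flat],
           [flat, flat, flat]]
# a_star(terrain, (0, 0), (2, 0)) routes around the bottom of the hill
# (total cost 6.0) instead of straight across it (total cost 7.0).
```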

The original least weighted path algorithm with slope weighting was created by Dr. Sidran when he was in graduate school. Dr. Sidran said, “I should probably write a paper describing the new algorithm, called EZRoadStar. However, as I am no longer an active member of academia, there is no pressure to ‘publish or perish’.” Instead, he will concentrate on finishing General Staff.

General Staff is expected to be released for Windows, XBox and iOS later this year.