The Social
Simulation Strategy of Person(s):
Noncomputerized
Validation and Verification of Analysis
by Red Teaming
Dominated and Undominated Options
by
Dallas F. Bell, Jr.
(Vide: Social Simulation
Sequencing: Constructing the Software Architecture
for Systematic Political Science)
The Case for Red Teaming
In a contest a person is shown
three shells lying on a table. It is explained that only one shell
has a pea hidden under it. The contestant will win a prize if
he or she correctly chooses the one shell that hides the pea.
After the contestant has chosen a shell, the host removes one of the
remaining two shells to reveal that it did not have the pea. The
host then asks the contestant if he or she would like to switch their
choice to the last shell. The payoff is the same, but switching
wins with a probability of two-thirds, the undominated option, while
not switching wins with only a one-third probability, the dominated
option. That
reality is not always apparent to everyone until the contest is expanded
to three hundred shells with a pea hidden under only one shell.
Your choice of shell A has a one in three-hundred probability
of having the pea and winning the prize. All the remaining shells,
except shell B, are turned over to reveal that they did not contain
the pea. Aided by the tool of Bayes' theorem, it is virtually
certain, a 299-in-300 probability, that the pea is under shell B.
Granted, a one-in-three probability of winning is more favorable than
a one-in-three-hundred probability, but in both versions the principle
of switching to the undominated option yields the consistently highest
probability of a payoff. Reinforcing the understanding of those odds
may enable the intuition threshold, the point at which a person merely
feels that choice A is better than choice B, to be surpassed.
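The switching advantage described above can be checked with a short
Monte Carlo sketch (the helper name and trial count here are
illustrative assumptions, not part of the original contest):

```python
import random

def play(n_shells, switch, trials=100_000):
    """Simulate the shell game; return the empirical win rate."""
    wins = 0
    for _ in range(trials):
        pea = random.randrange(n_shells)      # shell hiding the pea
        choice = random.randrange(n_shells)   # contestant's first pick
        if switch:
            # The host reveals every empty shell except one; that
            # survivor holds the pea unless the contestant's first
            # pick was already correct.
            wins += pea != choice
        else:
            wins += pea == choice
    return wins / trials

print(play(3, switch=True))     # ~0.667 (two-thirds)
print(play(3, switch=False))    # ~0.333 (one-third)
print(play(300, switch=True))   # ~0.997 (299/300)
```

The 300-shell run makes the Bayesian point vivid: almost all of the
probability mass collects under the one shell the host leaves closed.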
The risk threshold is defined
as the point at which the risk of choosing and losing is perceived
to be less than the potential reward of choosing and winning. For
example, a lottery jackpot is increased to a point where spending the
money on a ticket to win is perceived to outweigh using the funds for
something else. This assumes, of course, the absence of a primary
disorder such as pathological gambling or any comorbidity, that is,
a simultaneously existing relevant medical condition. The risk
threshold is lowest for those in the survival need
level. Even those in the next need level, economic security, may
elect to play the lottery and forgo using the money to increase their
interest-bearing savings account at the bank. Lotteries are
widely recognized as being funded largely by those least able to afford
the probable loss.
In California, it is estimated
that there is a one in eighteen-million probability of winning the lottery.
Those odds are the same as finding one specific foot of ground
along the more than 3,400 miles between Portland, Oregon, and Acadia
National Park, Maine. Furthermore, to drive ten miles or more to buy the ticket
for the California lottery results in a three times higher probability
of the buyer being killed in an auto accident than the probability of
winning. Politicians, businesses, and other advertisers attempt
to enhance a perception within their target audience toward their
dominated option; this is propaganda. Conspiracy theories often
abound contrary to the evidence for the probability of an undominated
option occurring, because conspiracy theorists wrongly perceive the
odds of a reality x leading to a y outcome as being less than the
odds of their alternative explanation.
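The risk-threshold idea from the preceding paragraph can be put in
expected-value terms using the one-in-eighteen-million odds cited
above (the one-dollar ticket price is an assumption for illustration):

```python
p_win = 1 / 18_000_000   # California lottery odds cited in the text
ticket_price = 1.0       # assumed price; not stated in the text

def expected_value(jackpot):
    """Expected net gain from one ticket (ignoring taxes and splits)."""
    return p_win * jackpot - ticket_price

print(expected_value(10_000_000))   # negative: a losing proposition
print(expected_value(36_000_000))   # positive: jackpot exceeds break-even
```

Under these assumptions the break-even jackpot is eighteen million
dollars; perception of the threshold, of course, rarely tracks the
arithmetic.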
Some logical fallacies are
categorized under the term gambler's fallacy. They include the
misconceptions that a random event is more, or less, likely to occur
because it has not happened for a period of time, and that a random
event is more, or less, likely to occur because it happened recently.
Those false ideas are reinforced by the illusion of control and by
math errors, such as thinking one sequence of numbers is more or less
likely to occur than another.
The T2 and T3 theological beliefs in astrology or a lucky/karma force
or psychic abilities etc. are commonly observed. There is ample
empirical evidence that those deified concepts are untrue. Thus,
neither a T2 nor a T3 theology could be considered a viable option,
dominated or undominated, yet their believers refuse to switch to the
undominated theological option of an infinite God, T1. There are also conjunction
errors explained as the thought that a case is more probable if another
condition is added despite the reality that the extra aspect makes the
case less probable. This is seen in the experiment where 85 percent
of the participants chose the statement "Linda is a bank teller and
active in the feminist movement" as being more likely than the
statement "Linda is a bank teller."
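The conjunction error can be demonstrated with a toy population; the
two trait probabilities below are arbitrary assumptions, since any
values yield the same inequality:

```python
import random

random.seed(0)
# Hypothetical population: each person is (bank_teller, feminist),
# with the two traits drawn independently. The probabilities 0.05
# and 0.30 are made up for illustration.
people = [(random.random() < 0.05, random.random() < 0.30)
          for _ in range(100_000)]

p_teller = sum(t for t, f in people) / len(people)
p_both = sum(t and f for t, f in people) / len(people)

# A conjunction can never be more probable than either conjunct.
print(p_both <= p_teller)   # True, regardless of the trait rates
```

Whatever the underlying rates, the "bank teller and feminist" group
is a subset of the "bank teller" group, so its probability cannot be
larger.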
J. Edward Russo and Paul J.
H. Schoemaker wrote that decision makers should avoid the traps of
failing to generate or consider undominated alternatives to dominated
options (Ralph Keeney and Howard Raiffa), gathering inadequate
information, using irrelevant information, and frame blindness, which
can include deciding to solve the wrong problem. In 1981, Amos Tversky and Daniel Kahneman demonstrated
that framing, or the manner that a problem is presented, can affect
the outcome. A group of participants were asked to choose between
two United States programs to prepare for an Asian disease expected
to kill 600 people. Program A would save 200 affected people
and program B had a one-third probability that 600 people would
be saved and a two-thirds probability that no people would be saved.
A total of 72 percent of the participants were risk averse and preferred
A while 28 percent preferred B. A second choice was
given between program C where 400 people would die or program
D where there was a one-third probability that no one dies and a
two-thirds probability that 600 people will die. This time 78
percent preferred the risk-taking option of D and 22 percent
chose C. The programs were technically identical, yet the
change in the decision frame between the two choices produced a
preference reversal.
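That the four programs are technically identical can be verified by
converting both frames into expected lives saved (a minimal
arithmetic sketch of the published problem):

```python
total = 600  # people expected to die without intervention

# Frame 1, stated as lives saved
ev_A = 200                        # 200 saved for certain
ev_B = (1/3) * 600 + (2/3) * 0    # gamble: 1/3 chance all are saved

# Frame 2, stated as lives lost, converted to lives saved
ev_C = total - 400                          # 400 die for certain
ev_D = total - ((1/3) * 0 + (2/3) * 600)    # gamble: 2/3 chance all die

print(ev_A, ev_B, ev_C, ev_D)   # every program saves 200 in expectation
```

The expected outcomes coincide exactly; only the gain-versus-loss
wording changes, which is what produces the preference reversal.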
Cognitive bias is considered
to be prominent in the decision making process. This involves
undue optimism or pessimism, peer pressure and groupthink, experience
limitations, and a premature ending of a search for evidence.
The anterior cingulate cortex and the orbitofrontal cortex are the brain
regions considered to process decision making. Distinct patterns
of neural activation have been recorded in those locations, depending
on whether decisions were made on the basis of personal volition or
in response to orders from another entity.
Tversky and Kahneman have identified
four assumptions in decision theory regarding utility functions.
The first is cancellation, eliminating irrelevant alternatives.
For example, if A is preferred to B, then "A if it rains
next week" should be preferred to "B if it rains next week,"
unless a desire such as playing golf makes the options more weather
dependent.
The second is transitivity. If A is preferred over
B and B is preferred over C, then A must be
preferred over C. Douglas J. Navarick and Edmund Fantino
found that choice in concurrent-chain procedures can violate the
stochastic transitivity required to scale reinforcements on a single
dimension. The third assumption is dominance, the strongest form
being stochastic dominance. An option may be said to dominate
another option if in every possible state it has an outcome perceived
to be at least as good. A choice is considered to be stochastically
dominant over another choice if, across all probabilities, it has a
perceived superior outcome. The final assumption of expected utility theory
is invariance. This means that different representations of
the same problem should yield the same results. The failure of
that idea was demonstrated by the Asian disease problem presented earlier
in this paper.
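The transitivity assumption lends itself to a mechanical check. The
sketch below (the function and the sample preference tables are
illustrative assumptions) tests whether a set of pairwise preferences
contains a cycle of the kind Navarick and Fantino observed:

```python
from itertools import permutations

def is_transitive(pref):
    """pref[(a, b)] is True when a is preferred to b.
    Returns False if some chain a > b > c lacks a > c."""
    options = {x for pair in pref for x in pair}
    return all(
        not (pref.get((a, b)) and pref.get((b, c))) or pref.get((a, c))
        for a, b, c in permutations(options, 3)
    )

consistent = {("A", "B"): True, ("B", "C"): True, ("A", "C"): True}
cyclic     = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True,
              ("A", "C"): False}

print(is_transitive(consistent))   # True
print(is_transitive(cyclic))       # False: A > B > C but C > A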
The biblical Hebrew King Solomon
stated that safety is found by having a multitude of counselors.
That was true thousands of years ago and is true today. Using
the template of systematic political science, people can explore all
the known options and eventually make decisions with the most preferred
outcomes. This process should include robust give-and-take among
all the players and is often called red teaming. Therein rests
the historically proven best hope of avoiding the analysis pitfalls
inherent to finite decision makers.
Red Teaming
Years ago military leaders
practiced war maneuvers against a surrogate opposition force (OPFOR)
composed of friendly soldiers. Traditionally, the OPFOR was designated
in simulation planning as the red team and the resident force was called
the blue team. Controllers established and monitored the parameters
of the tasks and conditions, or cognitive model, for the exercise.
At the end of each event, reports were written concerning the things
that worked well and the things that needed to be changed.
Today red teaming is practiced
by many organizations and has more than a surrogate mission. Like
the other two social simulation strategies of computers only and combinations
of computers and persons, the social simulation strategy of persons
comprises the simulation scope of microsimulation (simulation of an
individual or individuals), intermediatesimulation (simulation of the
groups of individuals that make up an institution or institutions),
or macrosimulation (simulation of the institutions that make up a
nation-state or nation-states). The simulation scenarios are either
specific, where the number and types of agents are known by the time
and environment of action, or nonspecific, where neither the number
and/or type of agents nor the time or environment of action is
specifically known.
J. Edward Russo and Kurt A.
Carlson have identified five phases of the decision process. The
first is the representation of the decision task. The second phase
is the generation of alternatives. The third is the acquisition
and evaluation of information. Phase four involves the resolution of
multiple units of information to distinguish one alternative as superior.
Lastly, five is post-commitment distinction, implementation, and learning.
Russo and Carlson's systematic
process recognizes the necessity of first finding and categorizing dominated
and undominated options. This is the beginning goal of the red
teaming mission of validation. Independent of the blue
team, the red team should create and collate options. When that
phase is determined to have been accomplished, the red team can generate
more alternatives using the blue team's conclusions and vice versa.
Acquiring and evaluating information is the red team's operation for
the third phase.
Another red teaming mission
is one of verification. It serves to transition phase three
into phase four which is the resolution of units of information to distinguish
one alternative as superior. In the last phase the red team challenges
the implementation, learning, and training of the results.
For surrogate missions, a red
team should be composed of persons operating with the same need
levels and problem-solving skills (META formulae) as the OPFOR they
are simulating. Whatever the red team operation may entail, its
makeup and equipment should be adjusted accordingly. The art is
generally to mix those with different skills to maintain the forensic
aspect within the red team itself.
Diane Vaughan and others have
argued that management decisions tend to become routine and evolve over
time. An active red team can help to ensure that the erosion of
standards by decision makers is minimized. Red teaming is not
a cure-all; the better the red team, the better the input will
be for the blue team's action and reaction.
Another red team concern is
known as the observer effect. The mere act of observing causes
behavior to change. Therefore, the behaviors of both red and blue
teams can be expected to be modified along with those of the true
oppositional force(s). Also, the (Heisenberg) uncertainty principle
implies that a particle's position and momentum cannot both be
precisely determined at the same time. This quantum observation
can be applied to a red teaming surrogate operation. The enemy's
geographic positions and decision-tree will always be considered difficult
to simultaneously determine.
The challenges for implementing
and maintaining red teaming operations are admittedly great. However,
the investment offers low risk and high reward for those who want the
best analysis to consistently make superior decisions in a highly competitive
world. Chapter four and verse six of the Old Testament book of
Hosea addresses the alternative by declaring, "My people are destroyed
for lack of knowledge..."
ALL RIGHTS RESERVED
© 2006 DALLAS F. BELL, JR.