The Role of Group Dynamics in Mental Model Development: An
Experimental Comparison of the Effect of Case Study and
Management Flight Simulator under Two Levels of Facilitation
Michelle Shields
University Affiliation: North Carolina State University
54 Steadman Road
Avonhead/Russley
Christchurch, New Zealand
(H) 011 643 342 1280
michelle.shields@mindspring.com
Abstract
Group Dynamics is theorized to play a pivotal role in how mental model development occurs as
a result of Group Model Building. To test this theory, a two-part experimental investigation was
conducted. In the first part, Mental Model Development was assessed using two levels of group
model building Method (case study and management flight simulator) and two levels of
Facilitation (non-facilitation and structured facilitation).
In the second experiment, the four experimental conditions were retained and the original
sample was divided into fourteen sub-groups that worked together to develop a team strategy for
a hypothetical change management task. Audio recordings of the activity were analyzed for
comments related to strategy, process, rationale and facilitation. A second measure assessed
participants’ perceived level of agreement with and input on their group’s final strategy. This
paper describes the results of this second experiment and discusses outcomes in relation to the
findings from Experiment I.
Key Words: Group Dynamics, Management Flight Simulator, Case Study, Mental Models,
Facilitation, Group Model Building
Introduction and Overview
As our knowledge-based society unfolds (Reich, 1992), the need for team-based learning
processes that support strategic decision making in organizations and facilitate the development of
shared mental models is expected to grow (Akkermans & Vennix, 1997; de Geus, 1988;
Morecroft & Sterman, 1994; Randers, 1976; Senge, 1990a; Stata, 1989; Vennix, 1996). While
interest in group-based modeling processes to enhance organizational learning capabilities is on
the rise, there have been only a handful of studies that empirically evaluate the effectiveness of
variations in approach. In an effort to address this need, a two-part investigation was conducted
at a major airline located in the southwestern United States. The research upon which this paper
is drawn consisted of two experiments. The first examined whether, and how, model conceptualization,
simulation and facilitation of these two modeling processes contribute to the elaboration and
revision of individually held mental models (for further detail see the 2001 System Dynamics
Conference proceedings or contact the author). As the level of group dynamics is purported to
be an important component of mental model development, the purpose of the second experiment
was to better understand the impact that the two methods (case study and management flight
simulator) and two levels of facilitation (non-facilitated and scripted facilitation) had on the
person-to-person interaction that resulted during a group-based modeling activity and on the level
of participant buy-in to the process and its results.
This paper consists of three parts. The first describes the theoretical and empirical
foundation needed to better understand the impact of modeling method and facilitation on group
dynamics. The second part outlines the experimental model, procedures and results. The third
part of this paper discusses results in relation to outcomes from the first experiment pertaining to
mental model development and discusses implications for theory, research and practice.
Part I Theoretical and Empirical Foundation
Mental Models and Organizational Learning
Mental models are believed to be important to organizational success because it is through
their development and transference across group members that the collective process known as
“organizational learning” is said to occur (Gary & Charyk, 1996; Kim, 1993a; Morecroft &
Sterman, 1994). By organizing a group’s knowledge into a collective framework and allowing
for discussion regarding the validity of the framework (Zohar, 1997), individually held mental
models may be revised to more closely align with a shared view of the organizational system
(Größler, 1996; Lane, 1994; Morecroft & van der Heijden, 1994; Senge, 1990a; Stata, 1989).
Inherent in this theory of shared mental models is the belief that individually held mental models
may be altered as a result of person-to-person interaction. When ideas are discussed and debated,
mental models may be altered as the brain forms new concept “associations” (Luria, 1973). As a
result, new information may be added or current mental data rearranged to form new connections
between concepts resulting in greater coherence or robustness of the original mental model
(Morecroft, 1994). A more coherent mental model is one in which the concepts contained are
rational, logically consistent with each other, have no mutual contradictions, and reflect
interactions between the system's components and higher-level effects that are built up (Lane).
Altering Mental Models with System Dynamics Methodologies
To describe how the system dynamics model building process may alter mental models,
Senge (1990a) and Senge and Sterman (1994) describe a recursive process of mental model
development involving three stages: mapping mental models, challenging mental models to reveal
inconsistencies, and improving mental models. The first stage, mapping mental models, is based
on the premise that mental models cannot evolve unless they are first made explicit (de Geus,
1988; Forrester, 1975). This is accomplished by having participants talk about and answer
questions about the variables they see as important to system functioning. In talking with one
another, group members may relate examples from their domain specific experiences, and in the
process, make these mental models explicit and known to others in the group (Bakken, Gould &
Kim, 1994; Eden, 1989; Narayanan & Fahey, 1990).
In the second validation stage of mental model development, an attempt is made to
uncover internal contradictions, inconsistencies or incompleteness in previously articulated
thinking (Senge, 1990a). For example, in a learning session utilizing a computer-based simulation
- a “management flight simulator” - Bakken et al. (1994) reported that participants thought that the
workload assumptions made for an insurance adjustment claims task were incredibly light
compared to what they should be. When asked what made the revised numbers the right ones, the
participants reported, “Because it had always been that way”. In that instant, the participants are
said to have realized how this unquestioned assumption had been a driving force in their decision-
making and contributed to their group’s poor performance (Bakken et al.).
Senge and Sterman (1994) assert that mental model articulation is a necessary precursor
to mental model development as, only after people have gone public with their mental model can
they begin to discover inconsistencies and contradictions with other sources of information.
Further, not until these contradictions are resolved should it be expected that individuals will
move to the third stage of mental model development involving the revision of potentially
erroneous assumptions (de Geus, 1988; Senge & Sterman). Even if these first three criteria are
met, it should not be expected that a mental model will be readily revised, however (Senge,
1990a), as new conceptual perspectives are assimilated gradually (Levitt & March, 1988) and
sometimes not at all if current models, though perhaps simpler, seem to function satisfactorily
(Woods et al., 1994).
Group Model Building
Over the last several years, system dynamics modelers such as Vennix (1996), Lane
(1994), Morecroft and Sterman (1994) and others (see, for example, the special issue of the
System Dynamics Review, Summer 1997) have explored the process of building system dynamics
models with groups of individuals from a variety of organizations. According to Morecroft and
Sterman (1994), three defining characteristics of the group model building approach are: client
ownership of all analytical work performed; models created through group process; and
consultant acting as facilitator of group process to capture team knowledge. The group model
building methodology is based on the idea that models should capture the knowledge and mental
data of policy makers and should support team reasoning through the development of a more
systemic and dynamic perspective (Morecroft & Sterman; Senge 1990a; Senge & Sterman, 1994).
Through group interaction and the discussion of business issues, group model building is believed
to enhance team learning, foster consensus and build commitment in decision making (Vennix,
1996).
Advancing the Use of Group Model Building through Abbreviated Interventions
One drawback to the group modeling approach is that building a model from scratch and
developing it to the stage of policy experimentation requires a substantial time investment. Even
with improvements in computer technology that have made the modeling process much easier and
more accessible to non-experts, building a model that leads to meaningful insights may take two
or more years to complete (e.g., Isaacs & Senge, 1994; Vennix, 1996). As a result, group model
building efforts must be designed to leverage as much learning from the process as possible when
working within a shortened time frame with limited opportunities to work together as a group
(Gary & Charyk, 1996). To achieve the desired goals within this narrow scope, the group may
focus on only one aspect of the overall system dynamics modeling process. They may focus on
conceptual modeling in an effort to generate a shared understanding of the problem from which to
act (e.g., Checkland, 1989; Eden, 1989; Rosenhead, 1989; and Wolstenholme & Coyle, 1994) or
focus on simulation by utilizing an already operating model to conduct policy experimentation
(e.g., Paich & Sterman, 1993; Senge, cited in Morecroft & Sterman, 1994; Sterman, 1989).
The Role of Facilitation and Group Interaction in Group Model Building
Facilitation is widely regarded as critical to group model building success (e.g., Morecroft
& Sterman, 1994). With the aid of a facilitator, individually held assumptions about how the
system functions may be elicited and challenged. In addition to encouraging debate among group
members, facilitation is asserted to make unique contributions to achieving the learning objectives
in each of the two abbreviated group modeling approaches. According to Morecroft and
Sterman, during model conceptualization, a facilitator may help ensure that all group members’
ideas and opinions are heard and reflected in the conceptual model. During simulation, a
facilitator may be instrumental in making sure that policy experiments are carried out in a
scientific manner and rationale for changes in strategy are articulated. In either approach,
facilitation may be important on a rudimentary level as it may serve to foster group interaction, a
factor asserted to play a central role in mental model development.
The Role of Social Interaction in Mental Model Development
It is asserted that, in the process of group model building, group interaction plays an
important role in mental model development (e.g., Lane, 1994; Morecroft & Sterman, 1994;
Vennix, 1996). This assertion is based on the idea that mental models may be enriched the longer
a person thinks about a topic (Morecroft, 1994; Woods et al., 1994). When group interaction
occurs, people may bring to mind more facts and concepts than they would in isolation. This
enhanced recall may result in cognitive models that include not only a network of ‘familiar’ facts
and concepts, but a vast matrix of ‘potential’ connections stimulated by the flow of conversation
(Forrester, 1975). Forrester asserts that “within one individual, a mental model changes with
time and even during the flow of conversation. The human mind assembles a few relationships to
fit the context of discussion. As the subject shifts so does the model...” (p. 199). Observations by
Anderson, Tolmie, Howe, Mayes & MacKenzie (1992) support this idea. Based on videotapes
of an ‘interactive protocol’ - a live exchange between people working on a task - they observed
that working with a peer resulted in improvements in the subjects’ mental models used for
prediction, particularly if the individual contrasted his or her pre-test predictions with those of
another and then entered into discussions as to possible explanations of the phenomena.
When considered from a cognitive psychology perspective, group interaction may affect
mental model development because the process of social interaction allows for the use of two
separate but cooperative types of working memory - spatial working memory and phonetic
working memory (Wickens, 1992). Spatial working memory, which represents objects in visual
or spatial form, may be tapped by the visual aspects of the modeling process while, verbal or
phonetic working memory, which represents information as words or sounds, may be tapped by
the discussion aspects of the process. This duality of input may prove instrumental to learning.
Since these two forms of working memory are thought to be complementary, they do not
necessarily draw on the same memory resources. If so, there may be more cognitive resources to
devote to accessing information that may otherwise be latent. This enhanced recall by extension,
may foster a more cognitively complex network of concepts to draw on during decision-making.
This in turn, may increase the potential for mental model enhancement.
It cannot be said categorically that group interaction will necessarily improve problem-solving
performance, however, as it may aid performance in some groups and not in others
depending on the dynamics that result. For instance, group interaction may be hindered if one
individual dominates group discussions so that participation is not equal among group members
(Bakken et al., 1994; Eden, Jones & Sims, 1983; Hodgson, 1994; Vennix & Gubbels, 1994). To
guard against such an
occurrence, a neutral third party or ‘facilitator’ may be added to the group mix to inquire into the
meaning of statements, ensure that all group members’ contributions are heard and to mediate
group members’ opposing views.
Theoretical and Empirical Support for Abbreviating the Model Building Process
Even though it is desirable to have decision-making teams involved in all aspects of the
modeling process, limitations in time to meet as a group make this difficult. Hence, there is a
growing interest in reports of significant benefits derived when utilizing just one of the two
primary sub-activities of the system dynamics model building methodology. When employing an
abbreviated group model building effort, the focus is on either the conceptual model building
(e.g., Coyle & Alexander, 1997; Hodgson, 1994; Rosenhead, 1989; Wolstenholme & Coyle,
1983) or the simulation aspects of the system dynamics process (e.g., Cavaleri & Thompson,
1996; Sterman & Senge, 1994).
Studies that employed one of the two primary components of the group model building
process are reviewed in the next section of this paper and are organized along three lines: 1) studies
that evaluate conceptualization-only efforts or the development of a group derived “cognitive or
causal map” that shows how key variables and concepts in the system relate to one another; 2)
efforts that evaluate outcomes from simulation-only efforts using a “management flight simulator”
- a computer-based learning environment embodying a system dynamics model; and 3) studies
that empirically compare the case analysis method (a conceptual modeling task) to gaming (a
simulation task) in a business education setting.
Theoretical Relationship of Model Conceptualization, Group Dynamics and Learning
The conceptual model may be used to initiate in-depth discussion and debate about the
relationship between structure and behavior in the real system (Lane, 1993; Richardson &
Andersen, 1995; Stevenson, 1993). Once agreed upon, this shared understanding helps group
members prepare for taking the actions necessary to achieve their objectives (Rosenhead, 1989).
Moreover, because conceptual models generally use a simple, symbolic language to show
connections and causal relationships between elements in the system, they may effectively convey
ideas in a way that can be understood by people from a variety of different backgrounds and
disciplines (Espejo, 1994). In conclusion, it may be posited that conceptual model building
provides a unique mode of communication that encourages group interaction.
A primary aim in conceptual model building is to reach some agreement about the nature
of the problem so that each group member feels committed to finding an appropriate solution
(Checkland, 1989; Eden, 1989). When debating the nature of model structure, mental models
may be altered as a group seeks agreement on the structure of system variables, boundaries,
process flows, and information feedback loops (Randers, 1976; Stevenson, 1993; Vennix,
Andersen, Richardson & Rohrbaugh, 1994).
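The structural questions a group debates here can be made concrete with a small sketch. The snippet below, a hypothetical illustration in Python whose variable names are not drawn from any cited study, encodes a conceptual model as a signed directed graph and checks the polarity of a feedback loop:

```python
# A minimal sketch of a causal map as a signed directed graph.
# The variables and causal links are hypothetical illustrations only.
causal_links = {
    ("workload", "errors"): "+",   # more workload -> more errors
    ("errors", "rework"): "+",     # more errors -> more rework
    ("rework", "workload"): "+",   # rework adds to workload (closes a loop)
    ("staffing", "workload"): "-", # more staff -> less workload per person
}

def loop_polarity(loop, links):
    """A feedback loop is reinforcing if it contains an even number of
    negative links, balancing if the number is odd."""
    negatives = sum(1 for edge in loop if links[edge] == "-")
    return "balancing" if negatives % 2 else "reinforcing"

loop = [("workload", "errors"), ("errors", "rework"), ("rework", "workload")]
print(loop_polarity(loop, causal_links))  # -> reinforcing
```

Representing the map this way lets a group see at a glance whether a proposed loop is reinforcing or balancing, one of the points about system structure that, as noted above, groups debate when seeking agreement.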
In the last few years, preliminary studies that support the use of conceptual modeling
using system dynamics methodologies for bringing about organizational learning have been
reported in the system dynamics literature (e.g., Huz, Andersen, Richardson & Boothroyd, 1996;
Vennix et al., 1996; Wolstenholme, 1994). In a study exploring the effectiveness of group model
building techniques, Vennix et al. reviewed cases involving conceptual modeling in order to
evaluate whether group model building could induce, in a time-efficient manner, the kind of
strategic learning and change in attitude and behavior considered necessary for organizational
success. Their findings were in keeping with those of Huz et al. who reported that following
conceptual model building, participants were in greater alignment about the goals of the
organizational system but demonstrated no increase in alignment in perception regarding
strategies for change. In other words, groups developed more agreement on what the problem
was but no further agreement on what to do about it. Even though participants in the Vennix et
al. study reported having gained considerable insight into the problem and said that the process
was effective in revealing relationships and feedback processes between problem elements, they
felt that their initial opinions had not changed much. This suggests that no change in participants’
mental models occurred, or that if it did, participants were unable to distinguish this change.
Overall, the effort was viewed as a success as analysis of the individual workbooks used in the
study indicated that the number of variables identified by participants in the post-assessment
increased (almost no variables were removed), concepts became more detailed, and new relations
between variables and feedback processes were added. This suggests that the interaction process
was effective in the enhancement of mental models.
In summary, outcomes from the Vennix et al. (1996) study indicating greater consensus
around the problem but no agreement on potential courses of action were supported by Huz et al.
(1996) and fit with assertions made by Doyle et al. (1996) who posit that group consensus and a
shared mental model are not the same thing. In conclusion, it may be possible to achieve group
consensus regarding the nature of the problem without developing a shared mental model that
translates into collective or coordinated action. From a research standpoint it may be expected
that individuals would perceive higher levels of input on a chosen strategy while at the same time
perceiving lower levels of agreement with the strategy.
Theoretical Relationship of Model Simulation to Group Dynamics and Learning
The use of system dynamics computer-based simulations, also known as ‘management
flight simulators’ or ‘microworlds’ (Sterman & Morecroft, 1994), is the second approach often used
in an abbreviated group model building process. A management flight simulator is a computer-based
learning environment based on a previously developed model that allows people to experiment
with the model without having to build it from scratch (Senge & Sterman, 1994). Once a
conceptual model is developed and the relationships depicted are quantified using specialized
computer software (e.g., iThink by High Performance Systems) a computer interface or “cockpit”
is added so that users can easily manipulate a range of model assumptions. The resulting
“management flight simulator” can then be used for policy experimentation. Computer-based
simulations, or management flight simulators, are said to be effective for enhancing mental
models because they provide: a framework for clarifying mental models through controlled
experimentation; a forum through which assumptions can be questioned in a non-threatening way;
opportunities for participating in ongoing debate regarding strategy change; and feedback on
performance that may lead to insight into how the system functions under various conditions
(Bakken et al., 1994; Diehl, 1992; Größler, 1996; Lyneis, Reichelt & Sjoblom, 1994; Morecroft,
1988).
When using system dynamics simulations to achieve learning objectives, emphasis is
placed on the experimentation aspects of the group model building process (Wolstenholme &
Coyle, 1983). As users try something, see how it works, seek to understand how the system
structure facilitated or defeated the action and then try again, they can see how the assumptions
they make about system structure and processes may play out over time (Stevenson, 1993). This
repeated experimentation may contribute to a deeper appreciation of the dynamics of the system
and the feedback processes that produce them (Paich & Sterman, 1993) and may in some
circumstances lead to improved decision-making (Cavaleri & Thompson, 1996; Lyneis et al.,
1994; Morecroft, 1988; Stevenson) and contribute to organizational learning (de Geus, 1988;
Simon, as cited in Morecroft).
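The try-observe-revise cycle described above can be illustrated with a toy stock-and-flow sketch. The model below (a hypothetical claims backlog drained by adjusters) is an illustrative example only, not a model from any of the studies cited; a management flight simulator wraps this kind of structure in an interface so users can vary policy levers interactively:

```python
# Toy stock-and-flow simulation in the spirit of a management flight
# simulator. The structure (a claims backlog drained by adjusters) is
# a hypothetical illustration, not a model from the literature cited.
def simulate_backlog(adjusters, weeks=12, arrivals=100.0,
                     productivity=10.0, backlog=200.0):
    """Return the backlog trajectory for a given staffing policy."""
    history = [backlog]
    for _ in range(weeks):
        # Outflow is capped by processing capacity.
        completions = min(backlog + arrivals, adjusters * productivity)
        # Stock accumulation: stock = stock + inflow - outflow.
        backlog = backlog + arrivals - completions
        history.append(backlog)
    return history

# Policy experimentation: compare two staffing assumptions.
low = simulate_backlog(adjusters=8)    # capacity below arrivals: backlog grows
high = simulate_backlog(adjusters=12)  # capacity above arrivals: backlog drains
print(low[-1], high[-1])  # -> 440.0 0.0
```

Re-running the simulation with different assumptions and asking why one trajectory diverges from another is a miniature version of the policy experimentation these authors describe.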
One of the reasons that model simulation is believed to be effective is that it may play a
vital role in the “validation” stage of mental model development. When the dynamics observed
are counter-intuitive to users’ expectations, they may experience a surprise reaction (i.e., an ‘aha’),
an emotional response considered critical to learning (Argyris & Schon, 1996; Luria, 1973;
Restak, 1984; Wack, 1985b). If given an opportunity to reflect on a surprising outcome, users
may be prompted to modify their preexisting models so that they are more in line with the new
discovery or, if they believe this outcome is invalid, they may recommend changes be made to the
underlying simulation model so that it more closely matches their perceptions of reality (Argyris &
Schon, 1978; Größler, 1996; Williams et al., 1983). There are, of course, other less desirable
outcomes possible. A counter-intuitive result may also lead to the rejection of the model
altogether particularly if it is not viewed as an adequate representation of the context being
explored (Doyle et al., 1996; Lane, 1995; Wack, 1985a, 1985b) or to no response at all (Woods
et al., 1994) if a relevant context has not been established.
Recent evaluation studies support the use of management flight simulators for enhancing
users’ mental models (e.g., Akkermans & Vennix, 1997; Bakken et al., 1994; Doyle et al., 1996).
In a study by Doyle et al. in which half the subjects were allowed to play a simulation game
designed to coincide with data demonstrating a particular organizational dynamic, some
participants showed significantly different content in mental models, an outcome the researchers
attributed to the use of the simulation. They also noted that the participants’ post-experience
mental models did not replace the original models but rather, that what the participants learned
from the simulation was integrated into them as evidenced by the addition of variables to assessed
models.
Research by Cavaleri and Thompson (1996) suggests that there may be specific factors
such as the backgrounds of the users that influence the extent to which simulation may be
expected to contribute to learning. In a questionnaire administered following the use of a
computer simulation, they observed that managers, more than students, felt that the microworld
helped deepen their understanding of management practice. Even though it might be expected
that the simulation context was closer to managerial experience, this finding suggests that
effectiveness of these methods may be contingent on specific conditions. Three conditions
advocated as essential to ensuring the effectiveness of simulation are that it must be: incorporated into
a planned learning framework; relevant to the backgrounds of the users; and designed so that
users can grasp the underlying model dynamics driving simulation performance.
Empirical Studies Comparing the Case Study Method to Gaming
Business schools have long utilized case study and simulation or gaming methods to
provide an experiential approach to teaching business strategy (Wolfe, 1976). The case study
method, which is characterized by an analysis of those variables considered most critical to the
problem, is similar to conceptual modeling in that both encourage managers to think
strategically, view the business as a whole, and adopt the perspective of the general manager
(Graham, Morecroft, Senge & Sterman, 1994). While management simulations differ from games
in terms of the degree to which individuals give input and make decisions, many experts in the
field view simulations and games as synonymous (Lane, 1995). Given these similarities, studies
that evaluate the effective use of case study and gaming methods may be considered alongside
those pertaining to model conceptualization and simulation. This creates a wider base of
literature from which to draw on in establishing an empirical foundation regarding the
effectiveness of abbreviated group modeling methods and the role that group dynamics plays. A
review of the relevant case and gaming studies follows.
Moore (1967), Strother (as cited in Wolfe, 1975b), Wolfe (1973, 1975a, 1975b) and
Wolfe and Guth (1975) compared the relative contributions of the case and gaming methods to
students’ understanding of business issues. Overall, it may be said that results of these studies
have been contradictory. While Moore found that games utilized in business policy teaching were
not superior to traditional methods in teaching production management, Wolfe (1973) and Wolfe
and Guth found that the use of games in teaching business policy was better than traditional case
study methods if guidance and structure were provided.
Wolfe’s (1973, 1975a) research points to the importance of effective communication in
student performance in simulated business environments. Wolfe (1975a) looked at effective
performance of business students who utilized a simulated policy and decision-making
environment and found several behaviors that were associated with successful performance.
Among them were: formulating a long run strategy or plan; talking with other individuals during
play; quantifying statements and rationalizing techniques; taking ample time for discussion among
team members; assuming an experimental and questioning attitude; and demonstrating flexibility in
the face of changing conditions.
In a second study, Wolfe (1975b) pointed out the importance of facilitation in the learning
process. In this study he compared a traditional teaching approach where the instructor leads the
learning process to an experiential teaching approach in which the instructor's role is largely
passive once the initial learning structure is established. In comparing the quantity and type of
knowledge acquired by each group using a six-question test before and after the play, Wolfe
found no gain in test scores in the non-facilitated, experiential learning condition. In
contrast, he observed a gain in overall knowledge and principle mastery in a facilitated, traditional
approach condition.
Finally, Strother et al. (cited in Wolfe, 1975b) observed that students who utilized gaming
demonstrated erratic decision-making behavior as they applied their decision-making techniques
ad hoc and inconsistently. He asserts that students in a gaming situation are often aware of issues
or problems during play but fail to apply formal and rational analyses needed to solve them. He
also noted that participants became so involved in play that they did not take time to objectively
understand what they were doing. Many of these problems could, asserts Wolfe, be eliminated
through facilitation of the process.
Theoretical Model of the Role of Group Dynamics in Group Model Building
The number of studies upon which to form a general theory of mental model development
as a result of system dynamics methodologies are few and, in general, are often dependent on case
study methods and self-report measures. Even though limited in scope and number, however,
when taken together, these preliminary evaluations form a starting point for exploring the
effectiveness of group model building and point the way for the formation of a general theory for
the role that group dynamics plays in mental model development.
In the illustration in Figure 1, the application of the system dynamics methodology
comprised of either conceptualization, simulation, group process facilitation or any combination
of these practices, is asserted to increase the level of group dynamics observed in the group’s
interaction. This increased interaction, in turn, is believed to serve as the catalyst for enhancing
existing mental models so that they are altered from Time 1 to Time 2.
[Figure 1 is a diagram: Existing Mental Models (Time 1) enter the System Dynamics Methodology, whose sub-components (Conceptualization and Simulation), together with Group Process Facilitation, lead to Increased Group Dynamics, which in turn yields Enhanced Mental Models (Time 2).]
Figure 1 The Theoretical Role of Group Dynamics in Mental Model Development
In the theoretical model of Figure 1 it is proposed that the sub-components of the system
dynamics methodology work together with group facilitation to moderate mental model
development in a group model building situation. It is further inferred that, while both modeling
method and facilitation may have an independent impact on mental model development, this effect
may be greatest when methodology and facilitation are combined. The rationale for this assertion
is twofold. First, both modeling method and facilitation have the potential to increase the level of
productive discussion while at the same time reducing negative defensive behaviors in the group
interaction process. Second, while both methods help make mental models explicit, facilitation
may take this process one step further by encouraging individuals to consider how valid the
assumptions being made are in the current context. It is posited here that unless the modeling
framework and effective facilitation are used together, mental models may evolve simply as a
function of time but not to the extent possible when both method and facilitation are used
together. The reason for this is that both variables contribute meaningfully and reciprocally to the
other and in so doing, increase the level of group interaction or dynamics.
Defining Group Dynamics
Group dynamics is operationalized here as the observable level of group debate and
discussion in the group problem solving forum. To that effect, increased group dynamics implies
that there are greater levels of these behaviors as a result of a particular group model building
strategy. Previously it was said that group discussion was an integral part of mental model
development because it serves to unearth knowledge that may be otherwise unavailable or inert in
memory. The implication of this assumption is that hearing another speak about a topic may
stimulate or trigger a thought in another and potentially, over time, allow individuals to develop
cognitive connections between seemingly unrelated concepts in memory. In this way, effective
person-to-person dialogue increases the potential for forming multiple links in a cognitive network
as contributions by person A stimulate thoughts in person B. From a group modeling standpoint,
when person B asks person A, “what do you mean when you say...?” or, “why do you say that...?”
this ongoing exchange may lead to further articulation of assumptions, followed by potentially
more opportunities for debate. Further, facilitation of the dialogue process may magnify this
discourse and bring more individuals with differing viewpoints into the discussion. As a result,
person A may stimulate thoughts not only in person B, but in persons C and D as well. Likewise
they, in turn, may stimulate thoughts in each other, creating an exponential increase in the
potential cognitive network which may be developed as a result of group discussion.
As suggested previously, facilitation alone may not be enough for this elaboration of the
cognitive network to occur. Without the ongoing framework provided by the modeling method,
there may be little thought given to how the boundaries of the problem are defined and how the
parts of the system interrelate. So, for example, during model conceptualization when person A,
who possesses knowledge about Event 1, shares this information with person B and Person B in
turn, offers their unique perspective on Event 2, the declarative and procedural knowledge of how
the system functions may be enhanced for both members.
Group dynamics may also be affected by the modeling method when, during simulation
modeling, individuals working as part of a team assert their opinions about which variables to use
to achieve the greatest impact on system performance. If person A asserts that setting a variable
assumption at a specific level will help control the system, persons B and C may urge him or her
to explain the reasoning behind this assertion. Given adequate explanation by this individual, this
assertion may then be tested. Depending on outcomes, the team may go on to discuss how results
observed fit with their expectations and to consider the implications of these effects. In summary,
it may be asserted that combining a particular modeling method with a meaningful level of
facilitation may lead to group dynamics that alter individual thinking such that preconceived
notions about system functioning are revised.
Part II Description of the Research Effort
The following paragraphs introduce the two experiments that comprised the total study.
In the first experiment, conceptual modeling via case study was compared to simulation modeling
via a management flight simulator under two levels of facilitation in order to assess the effects of
these modeling approaches on mental model development in a group setting. In the second
experiment, the modeling methods and two levels of facilitation were further assessed to
determine if, and how, these variables affected the observable levels of interpersonal group
interactions. The following section outlines the general design of these two experiments and the
results that were expected in Experiment II, the focus of this paper, under varying
experimental conditions.
Overview of the Research
The aim of Experiment I was to test hypotheses related to two group modeling methods
and two levels of facilitation on mental models. In the 2 x 2 x 2 repeated measure fixed effects
design, two levels of modeling method - conceptualization and simulation, and two levels of
facilitation - scripted facilitated and non-facilitated, made up the experimental groups in each of
four conditions (i.e., case study with no facilitation; simulation with no facilitation; case study
with scripted facilitation; and simulation with scripted facilitation). Each experimental group
served as its own control group through the application of the repeated measures design, thus
constituting an extension of the Latin Square design as described by Cook & Campbell (1979).
Experiment II was designed to assess the effect that modeling method and facilitation had
on the level and type of Group Dynamics that could be observed during a group modeling
activity. Unlike the pre- and post-test design used in Experiment I, in Experiment II, the element
of Time was omitted leaving a fixed effects post-test only design as illustrated in Figure 2. The
aim of Experiment II was to assess the level of group dynamics that occurred as a result of a
group model building activity under each of the four treatment conditions.
                                 MODELING METHOD
                                 Conceptualization    Simulation
FACILITATION   Scripted
LEVEL          Facilitation            G3                 G4
               Non-Facilitated         G1                 G2
Figure 2 Fixed Effects Post-test Design Used for Experiment II
In the 2 x 2 matrix of Figure 2, the randomization of the participants in the study to
various treatment conditions sufficed for the lack of a pre-test (Campbell & Stanley, 1963). In
this experiment, the effect of facilitation on group dynamics was assessed by comparing the
interpersonal interactions that could be observed in Groups 1 and 2 (non-facilitated) to those observed
in Groups 3 and 4 (facilitated). Similarly, in order to test the effect of the modeling method on
group dynamics, the results of Groups 1 and 3 (conceptual modeling) were compared with
Groups 2 and 4 (simulation).
Description of Cell Components and Theorized Expected Outcomes
To facilitate a more complete understanding of the four cells depicted in Figure 2, the
following paragraphs discuss each treatment condition in terms of the level of group dynamics it
was expected to exhibit.
Conceptualization in a Non-Facilitated Group Session (G1)
In the non-facilitated conceptual modeling condition (G1), it was expected that the level of
group dynamics would be lower than in the other groups. As a ‘leaderless’ group, it was anticipated
that there would be little discussion during the group session and that, lacking a facilitator, one or
two group members would dominate discussion. Hence, individuals would not perceive high levels of
input into the process, and discussion would center on just a few elements of the conceptual
problem. It was also expected that verbalized opinions would be taken at face value with little
prompting for further discussion regarding the reasoning behind these expressed ideas. Thus it
was anticipated that there would be lower levels of discussion regarding rationale for a chosen
strategy.
Simulation in a Non-Facilitated Group Session (G2)
When the simulation method was employed in a non-facilitated setting (G2), it was
anticipated that the effect on group dynamics would be much like those observed in the non-
facilitated conceptual modeling condition. As such, it was expected that one or two dominant
members of the experimental group would assert their views for strategy with little rebuttal from
other members of the group. It was further assumed that during the group activity, some
group members would not participate in the discussion while others who felt more confident in
their opinions, or with using the computer, would dominate the group’s activities. Finally, it was
expected that there would be little time spent discussing outcomes of the experiments, and that
trials would continue one after the other in search of better performance without exploration as to
why the simulation model reflected the outcomes that it did.
Effect of Facilitation Combined with Conceptualization (G3)
In the facilitated conceptual modeling group (G3), it was expected that group dynamics
would be significantly different from either of the two non-facilitated conditions. It was
anticipated that the facilitator would effectively draw out the opinions of the group members and
encourage them to make their rationale for their assessments explicit. By posing questions such
as, “why do you think this particular strategy will make a difference?” the degree of group
dynamics would increase as members would be prompted to explain their rationale for their
chosen strategy. This increased discussion would, in turn, lead to the generation of a greater
number of potential solutions; raise the level of debate regarding the assumptions being made; and
contribute to improvements in the group’s ability to predict the impact of variables on system
behavior. Moreover, by having the facilitator ask the group questions like, “Do all of you agree
with Person A’s assessment of the situation, why, or why not?” it was expected that the facilitator
would effectively increase the level of perceived input on the final strategy.
Effect of Facilitation with Simulation (G4)
Finally, in the facilitated simulation group (G4) it was expected that the level of group
dynamics would be greatest as emphasis on obtaining better results with each successive trial and
determining how the system could be made to function most effectively would draw group
members into discussion more. Further, it was theorized that, because the simulation would
effectively function as a transitional object for eliciting mental models, participants would
feel more comfortable making their opinions known and disagreements would be focused on the
simulation output and its perceived accuracy to participants’ experiences. Moreover, it was
anticipated that the facilitator would help increase the level of discussion regarding rationale by
asking for explanations to observed outcomes. This would elicit responses not only from the
individual whose strategy was tested, but also from others in the group as well as they would be
drawn into the group’s performance challenge. Hence it was expected that the Facilitated
Simulation groups would perceive higher levels of input on and agreement with final strategy and
levels of comments related to rationale and strategy would be greater than in the other three
conditions.
General Hypotheses for Experiment II
Based on the theoretical assertions made regarding the effect of model building method
and facilitation on group dynamics, the following main and interaction effects were asserted.
Table 1 Summary of Null Research Hypotheses
Main Effects
Effect of Modeling Method
Ho1: There would not be a significant modeling method effect on Group Dynamics
Effect of Facilitation Level
Ho2: There would not be a significant facilitation effect on Group Dynamics
Interaction Effects
Effect of Modeling Method x Facilitation Level
Ho3: There would not be a significant Modeling Method x Facilitation interaction
Measuring Group Dynamics
The Group Dynamics construct is operationalized as the observable level of group
discussion as manifest by statements made during a group model building activity. Group
dynamics was assessed with two measures, one a variation on the “think aloud” method and the
other, an anonymous voting procedure. The ‘think-aloud’ methodology proceeds by presenting
individuals with a problem and asking them to isolate the key variables involved in controlling the
system. Individuals are asked to verbally express all thoughts while working to solve the problem
within a pre-set time limit. For purposes of this study, a group activity served as a natural context
for gathering ‘think aloud’ data. To do this, the group activity proceedings, or “think aloud”
data, were recorded, categorized and quantified. These audio proceedings were analyzed by
having two independent raters count the number of times participants made suggestions for
strategy, provided rationale for their chosen strategy, proposed group process procedures or
made comments intended to facilitate their group’s process.
In addition to counting the behavioral incidences that occurred in each group session as
recommended by Volpe, Cannon-Bowers and Salas (1998), group dynamics was also assessed
with an anonymous voting procedure that took place after the group model building activity. This
ballot (see example in Appendix ..) assessed the degree to which individuals felt their group’s final
strategy reflected their input and the degree to which they agreed with the strategy (Graham et al.,
1994). This measure was analyzed with a five-point quantitative response scale. Figure 3
summarizes the two measures used to assess the group dynamics construct.
Construct          Dependent Variables                   Measurement Method
Group Dynamics     Assessment by Raters                  audio recording of group activity
                   Level of Agreement with Solutions     individual anonymous voting procedure
Figure 3 Dependent Variables and Measurement Methods Used in Experiment II
The Research Design
The experimental design was intended to test whether Method and
Facilitation had an effect on the interpersonal dynamics that occurred during the group model
building activity. As with the first experiment, Experiment II involved a fixed effects design
involving two levels of Method - model conceptualization (case study) and model simulation
(management flight simulator) and two levels of Facilitation - facilitated and non-facilitated.
As shown in Figure 4, for the research design, the original sample of 58 participants was
divided into fourteen sub-groups comprised of four to five participants each. The use of sub-
groups was based on the rationale that the larger the group, the fewer people participate in
discussion and the more discussions tend to be dominated by a few group members (Bales et al.,
1951 cited in Vennix, 1996). Limiting the size of the sub-groups to five was based on findings
showing that a group size of five was optimal for ensuring group member satisfaction (Slater
1958, cited in Vennix).
[Figure graphic not recoverable: 2 x 2 matrix assigning the fourteen sub-groups to the four conditions defined by Modeling Method (Conceptualization, Simulation) and Facilitation Level (Scripted Facilitation, Non-Facilitated)]
Figure 4 Experiment II Design to Assess Group Dynamics under Two Levels of
Modeling Method and Facilitation
Experimental Procedures
Following an introduction to three general change management theories, a short orientation
to Systems Thinking and a pretest used for assessing mental model development in Experiment I,
participants in each of the four experimental sessions were randomly divided into sub-groups by
having participants count-off from left to right across the room. For each experimental session
there were two to four subgroups as shown in Figure 4. Once the groups were organized, half of
the sub-groups moved to an adjacent meeting room while the remaining subgroup(s) organized
their teams in a dispersed fashion in the primary meeting space. Once seated together, each of the
sub-groups was given a team strategy sheet and instructed to record their team’s strategy,
provide its rationale, and prepare to test their strategy using The Tipping Point management flight
simulator when the larger group reconvened. Throughout the 50-minute time period in which the
group worked together, their discussion was recorded on audiotape. Immediately following the
group activity, participants returned to their original seating arrangements, answered two
questions on the anonymous voting procedure and completed the posttest used for assessing Time
effects in mental model development in Experiment I.
Dependent Variables in Experiment II
The two measures employed in Experiment II are described in greater detail below.
Measure 1 - Audiotapes of the Group Activity
Audiotapes of each sub-group were independently content analyzed by two trained raters.
To prepare for this assessment the two raters reviewed records made during the pilot study and
practiced, first together and then independently, categorizing the interactions that could be heard
according to the descriptions provided on the pilot study coding sheet. Upon review, these
classifications were refined to derive the assessment form used in Experiment II. The four types
of statements assessed were selected based on their purported role in the development of mental
models wherein mental models must first be articulated (i.e., comments on Strategy) in order to
be examined for validity (i.e., comments on Rationale). Moreover, theoretical assertions about
the benefits of a structured framework (i.e., comments on Process) and the proposed importance
of including all group members in the discussion (i.e., Facilitative comments) guided the
classification scheme. The four variables are detailed in Table 2.
Table 2 Experiment II, Measure 1 - Four Types of Comments Assessed from the
Recorded Group Activity Proceedings
Variable Name Operational Measurement
Comments on Strategy number of times individuals proposed specific
solutions to the problem
Comments on Process number of statements made about how the group
should carry out their task
Comments on Rationale number of times individuals gave explanations for
their approach
Comments on Facilitation number of times individuals made comments that
were intended to solicit input from others
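The tally step behind Table 2 can be sketched in a few lines of Python. This is a hypothetical illustration, not the raters' actual procedure: the single-letter codes and the utterance list below are invented for the example.

```python
# Hypothetical sketch of the Measure 1 tally: each utterance a rater hears is
# assigned one of four comment codes, then counts are totaled per sub-group.
from collections import Counter

# Invented code letters for the four comment types in Table 2
CODES = {"S": "Strategy", "P": "Process", "R": "Rationale", "F": "Facilitative"}

# Hypothetical rater output for one 50-minute sub-group session
coded_utterances = ["S", "R", "S", "P", "F", "R", "S", "F"]

tally = Counter(coded_utterances)
counts = {name: tally.get(code, 0) for code, name in CODES.items()}
print(counts)
```

With two independent raters, the same tally would be run once per rater and the resulting counts compared to check agreement before analysis.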
Measure 2 - Anonymous Voting Procedure
The second measure of group dynamics was the post-activity voting process. In this
procedure, individuals were asked to rate on a scale of 1-5, “the degree to which they agreed with
the group’s final strategy” (Variable 5) and, “the degree to which they felt the group’s strategy
reflected their input” (Variable 6). The rationale for including the voting procedure in the group
dynamics construct was based on group model building theory (Graham et al., 1994) asserting
that to the extent that group members feel that their team’s strategy reflects their input, the level
of group dynamics that has occurred is said to be higher than in those groups in which individuals
do not agree with the strategy and do not feel that the strategy reflects their input.
The variables used to assess Group Dynamics in Experiment II are summarized in Table 3.
Table 3 Summary of Group Dynamics Dependent Variables
Group Level Data
Measure 1: Audio Taped Recordings
Variable 1 Comments on Strategy
Variable 2 Comments on Process
Variable 3 Comments on Rationale
Variable 4 Facilitative Comments
Individual Level Data
Measure 2: Anonymous Voting Procedure
Variable 5 Level of Agreement with Final Strategy
Variable 6 Level of Input on Final Strategy
The Experimental Field Site
Subsequent to a pilot study at a small electronics firm, the main experiment was conducted
in June 1999 at a major airline in the southwestern United States. One of the advantages of
participating in this study was that the sponsoring organization had an opportunity to expose their
employees to a management flight simulator that, for nearly all participants, was a new method of
training. In addition, the subject matter of the training - organizational change - was specifically
of interest to the host organization as they were undergoing high levels of organizational change.
The Experimental Protocol
The experimental protocol used for this study was developed according to the learning
objectives of the field site organization. These objectives included a desire to have all
participants, regardless of their experimental group, receive some level of exposure to the
management flight simulator and to engage in meaningful discussion around the issue of
organizational change. The following experimental protocol was used for Experiment I and II.
Pre-Experimental Session Procedures
Prior to attending an experimental session, all participants were invited to attend one of
the four experimental sessions offered. While the electronic invitation (i.e., e-mail) sent out did
not indicate whether individuals would participate in a conceptual or simulation-based session,
they were informed that they would receive some exposure to the management flight simulator
and were informed of the topic of the session - change management.
Experimental Session Procedures
Each experimental session began with an introduction to the training, including the purpose of
the research; collection of an ‘informed consent form’ and sample demographics/screening
questionnaire; an overview on Organizational Models of Change; and an introduction to Systems
Thinking. The Systems Thinking orientation included how to label a causal-loop diagram using
arrows, same/opposite or +/- labels, and system delay indicators. Depending on whether
participants were in the Conceptualization or Simulation condition, they read a business case or
observed an orientation to the Management Flight Simulator used. Both method orientations
included an explanation of the six variables or “levers” that could be used to control the
hypothetical organizational system and an overview of the organization’s objectives. Following
the overview and introduction to the change management issue to be addressed, each participant
completed a three-part pretest consisting of an open-ended question, a two part ratings task and a
diagramming task. The results of this pretest were used in the analysis of mental model
development which was the focus of Experiment I. This three part test was repeated following
the group activity, the focus of Experiment II.
Group Activity Protocol
According to McGrath (1984), a small group should consist of two or more people but be
small enough that individuals can be mutually aware of and potentially in interaction with one
another. Following the pre-test, the group model building activity was described and study
participants were randomly assigned to groups consisting of 4-5 people. All groups were given
the same assignment: to derive a strategy for implementing a change in a fictitious
organizational system utilizing the six control levers described in the method orientation.
The Case/Simulation Task
The use of hypothetical problems or test cases to conduct an analysis of problem solving
reasoning is supported by prior research (e.g., ... et al., 1995). To reduce the possibility of prior
exposure to the case/simulation task, a newly developed management flight simulator called The
Tipping Point (Shapiro, 1998) was used in the simulation modeling condition and a business case
based on the simulation’s underlying system dynamics model was written especially for use by
participants in the conceptual modeling condition.
Those participants in the Conceptualization condition worked in small groups to conduct an
analysis of the written case describing the organizational context, management challenge and
potential mechanisms for controlling the system. The strategy they devised was to include the
requisite level(s) of the six control levers the group would use and a prediction of what they
thought would happen to the hypothetical organization’s three populations of people over a five-
year time frame if their strategy were employed. Participants in the Simulation condition used a
laptop computer equipped with The Tipping Point to answer this same series of questions.
Facilitation Protocol
Depending on whether participants were in a facilitated or non-facilitated group, variations in
assistance with group processes for completing the team objectives were provided. If participants
were in a Non-Facilitated condition, they were instructed to use any group process techniques
they chose to develop their strategy. If they were in a Facilitated condition, they were assisted by
a trained facilitator (one of eight volunteers from a local Organizational Development professional
association). The facilitator was instructed to guide the group through the case analysis with the
aid of a pre-established script. Based on the theory that the purpose of mental models is to
describe, predict and explain (Rouse & Morris, 1986) this script included a description of the
facilitator’s role and questions the facilitator should ask including, “what are the key relationships
in the system?” (describe); “what makes these important?” (explain); and, “what will happen if
you implement your proposed strategy?” (predict).
With the exception of assistance in how to operate the Management Flight Simulator, the
Non-Facilitated Simulation groups were given no further guidance in completing the group
assignment. Those in the Facilitated Simulation condition used the same scripted process used by
the Facilitated Case groups. During the fifty-minute activity the group was to decide on a
strategy and record it on the team strategy sheet given.
Post-treatment and Session Conclusion Procedures
After the fifty-minute group activity, participants returned to their original seating
arrangement and completed the anonymous ballot designed to assess the degree to which they
agreed with their team’s final strategy and whether they thought their team’s strategy included
their input. Following the voting, participants completed the posttest - a repeat of the open-
ended question, ratings, and diagramming tasks completed as part of the pretest.
As part of the workshop structure, the teams presented their strategies to the larger group
detailing the level of each variable they used and their rationale. Each team’s strategy was then
tested using the Tipping Point management flight simulator and results of each “run” were
displayed on a projected overhead image for all participants to view and discuss.
Part III Results and Discussion
Summary of Experiment I Results
The experimental hypothesis that facilitation level would significantly affect mental model
development (Ho2: Facilitation) was strongly supported (p < .01). A review of the variables
analyzed in the univariate ANOVA analysis showed that facilitated groups performed significantly
better than non-facilitated groups on the Open-ended Question and Ratings Task I dependent
variables but not on Ratings Task II.
The first and third experimental hypotheses pertaining to Method (Ho1: Method) and Time
(Ho3: Time) were moderately supported (.01 < p < .05). Although significant from a MANOVA
perspective, no significant method effects were discerned when the three dependent variables
(Open-ended Question, Ratings Task I & II) were submitted to separate univariate analysis. In
contrast, the hypothesis that Time would have a significant effect on performance and that
participants would perform better on the posttest than they did on the pretest was moderately
supported by MANOVA as well as by separate univariate analyses.
Of particular interest was the significant interaction between all three independent
variables (Ho: M x F x T). The hypothesis that the Facilitation x Method interaction would not
be invariant over time was moderately supported by the MANOVA analysis. Separate univariate
ANOVA analyses were instrumental in providing insight as to the source and nature of the
observed effect. When submitted to univariate analyses, significant three-way interactions were
observed on Ratings Task I and Ratings Task II but not on the Open-ended Question dependent
variables. A surprising finding was that the direction of the relationship between Facilitation and
Method varied between these two measures. In comparing the posttest to the pretest scores, the
Simulation groups in general performed better than the case groups by a wide margin on Ratings
Task I and by a small margin on Ratings Task II. While the level of performance due to method
alone remained fairly constant across the two variables, there was a striking contrast in
performance when the effect of facilitation was considered. This contrast in results suggests that
differences between the pretest and the posttest may be attributable not only to differences in the
nature of these two measurement tasks but also to the level of facilitation in the groups.
In summary, a key finding in Experiment I was that as posited, unless the modeling
framework and effective facilitation are used together, mental models may evolve simply as a
function of time but not to the extent possible when both method and facilitation are used
together. The reason for this is that both variables contribute meaningfully and reciprocally to the
other and in so doing, increase the level of group interaction or dynamics.
Results Experiment II
A Univariate Analysis of Variance on each of the variables in the two Group Dynamics
measures was conducted using the SAS Statistical package (SAS Institute, 1999). As with
Experiment I, there were two levels of fixed effects for Method and Facilitation. Given that the
audiotape data (Group Dynamics Variables 1-4) were generated at the group-level and that the
post-activity voting data (Group Dynamics Variables 5 and 6) were generated at the individual
level, these two data types were analyzed separately. Each of the independent variables (Method
and Facilitation) was modeled against each of the four comment types from Measure 1 and the
two questions from Measure 2. The error term used for the four comment types on the Measure
1 analyses was the between group error term.
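To make the group-level analysis concrete, the sums of squares behind a two-way fixed-effects ANOVA can be computed by hand. The sketch below assumes a balanced 2 x 2 layout with invented comment counts; the study's own analysis used SAS on an unbalanced design, so this is an illustration of the decomposition, not a reproduction of the reported results.

```python
# Hand-computed two-way (Method x Facilitation) fixed-effects ANOVA for a
# balanced 2 x 2 layout. All data values are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

# cells[(method, facilitation)] -> group-level comment counts (hypothetical)
cells = {
    ("case", "non-fac"): [1.0, 2.0],
    ("sim",  "non-fac"): [3.0, 4.0],
    ("case", "fac"):     [2.0, 3.0],
    ("sim",  "fac"):     [4.0, 5.0],
}

all_obs = [x for obs in cells.values() for x in obs]
grand = mean(all_obs)
n_cell = len(cells[("case", "non-fac")])   # observations per cell
n_level = len(all_obs) // 2                # observations per factor level

def level_mean(idx, level):
    # idx 0 selects the Method factor, idx 1 the Facilitation factor
    return mean([x for key, obs in cells.items() if key[idx] == level
                 for x in obs])

# Sums of squares for main effects, interaction, and within-cell error
ss_m = n_level * sum((level_mean(0, m) - grand) ** 2 for m in ("case", "sim"))
ss_f = n_level * sum((level_mean(1, f) - grand) ** 2 for f in ("non-fac", "fac"))
ss_cells = n_cell * sum((mean(obs) - grand) ** 2 for obs in cells.values())
ss_mf = ss_cells - ss_m - ss_f             # interaction SS
ss_err = sum((x - mean(obs)) ** 2 for obs in cells.values() for x in obs)

ms_err = ss_err / (len(all_obs) - len(cells))   # df_error = N - number of cells
f_method, f_fac, f_inter = ss_m / ms_err, ss_f / ms_err, ss_mf / ms_err
print(f_method, f_fac, f_inter)
```

Each effect has 1 numerator degree of freedom in a 2 x 2 design, so each F ratio is simply the effect's sum of squares divided by the error mean square, as in the tables that follow.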
ANOVA Experiment II Measure 1 Results
Measure 1 - Dependent Variable 1
Analysis of Variance (ANOVA) results for the first dependent variable - Comments on
Strategy - are presented in Table 4.
Table 4 ANOVA for Experiment II, Measure 1, Variable 1 - Comments on Strategy
Source                   SS     df     MS       F
Between groups
Method (M)               .15     1     .15    5.18**
Facilitation (F)         .00     1     .00     .09
Method x Facilitation    .08     1     .08    2.58
Group error              .29    10    (.03)
* p < .10  ** p < .05  *** p < .01
Table 4 indicates that on the first group level variable used to assess Group Dynamics, a
significant main effect occurred for Method, F (1, 10) = 5.18, p < .05, with better performance
observed in the Simulation conditions (M case = 1.17, M simulation = 1.36). No other significant
effects were observed.
Measure 1 - Dependent Variable 2
Analysis of variance (ANOVA) results for the second group level variable - Comments on
Process - are presented in Table 5.
Table 5 ANOVA for Experiment II, Measure 1, Variable 2 - Comments Related to Process
Source                   SS     df     MS       F
Between groups
Method (M)               .18     1     .18    3.47*
Facilitation (F)         .01     1     .01     .19
Method x Facilitation    .09     1     .09    1.74
Group error              .51    10    (.05)
* p < .10  ** p < .05  *** p < .01
Method effects for Variable 2, Comments on Process, were found to be weakly significant, F
(1, 10) = 3.47, p < .10, with better performance observed in the Case condition (M case = .74; M
simulation = .01). No other significant effects were observed.
Measure 1 - Dependent Variable 3
ANOVA results for the third variable in the group level Measure 1 - Comments related to
Rationale - are presented in Table 6.
Table 6 ANOVA for Experiment II, Measure 1, Variable 3 - Comments Related to Rationale
Source                   SS     df     MS       F
Between groups
Method (M)               .85     1     .85    9.16**
Facilitation (F)         .16     1     .16    1.75
Method x Facilitation    .00     1     .00     .02
Group error              .93    10    (.09)
* p < .10  ** p < .05  *** p < .01
Consistent with prior results, Method effects were strongly significant F (1, 10) = 9.16, p <.05,
with better performance observed in the Case groups (M case = 1.70; M simulation = 1.15). There
were no other significant effects observed.
Measure 1 - Dependent Variable 4
ANOVA results for Comments related to Facilitation are shown in Table 7.
Table 7 ANOVA for Experiment II, Measure 1, Variable 4 - Comments Related to Facilitation
Source                   SS     df     MS       F
Between groups
Method (M)               .00     1     .00     .01
Facilitation (F)         .30     1     .30    5.40**
Method x Facilitation    .00     1     .00     .00
Group error              .54    10    (.05)
* p < .10  ** p < .05  *** p < .01
In contrast to the first three Group Dynamics variables which showed a significant main
effect for Method, only Facilitation showed a significant effect F (1,10) = 5.40, p <.05 on the last
variable. Results were as expected: the Facilitated conditions had higher levels of facilitative
comments than the Non-Facilitated groups (M Facilitation = .77; M Non-Facilitation = .46). There were no
other significant effects observed.
ANOVA Experiment II Measure 2 Results
The second measure used to assess Group Dynamics was the anonymous voting procedure.
In this assessment, individuals were asked to rate on a five-point scale their level of agreement
with their group’s final strategy and the degree to which they thought the strategy reflected their
input. Unlike the first four group level variables that comprised the first measure of Group
Dynamics, the two dependent variables that comprised the second Group Dynamics measure were
based on data collected from each individual.
Measure 2 - Dependent Variable 5
Analysis of Variance results for the fifth dependent variable are shown in Table 8.
Table 8 ANOVA for Experiment II, Measure 2, Variable 5 - Level of Agreement with Final
Strategy
Source SS df MS F
Between subjects
Method (M) .11 1 .11 .07
Facilitation (F) 3.00 1 3.00 1.96
Method x Facilitation .26 1 .26 .17
Subject error 82.75 54 (1.53)
*p < .10 **p < .05 ***p < .01
As can be seen in Table 8, no significant effects were observed for any factors.
Measure 2 - Dependent Variable 6
ANOVA results for the second variable assessing the participant's perceived level of input on the
group’s final strategy are given in Table 9.
Table 9 ANOVA Experiment II, Measure 2, Variable 6 - Level of Input on Final Strategy
Source SS df MS F
Between subjects
Method (M) 6.21 1 6.21 4.94**
Facilitation (F) .75 1 .75 .59
Method x Facilitation .20 1 .20 .16
Subject error 67.88 54 (1.26)
*p < .10 **p < .05 ***p < .01
As can be seen in Table 9, a significant Method main effect F (1, 54) = 4.94, p < .05 was
observed. An examination of the mean scores for this variable indicated higher mean scores in the
Simulation groups (M simulation = 4.16; M case = 3.48). No other significant effects were observed.
Summary of Experiment II ANOVA Results
Of the four types of comments analyzed on Measure 1 of Experiment II, a significant main
effect for Method was found on comments of Strategy, Process and Rationale, and a significant
main effect for Facilitation was observed on the fourth variable, Facilitative comments. No significant interaction
effects between Method and Facilitation were found. Of the three variables that were significant
under Method, the experimental conditions with the higher mean scores were not consistent
across variables. The Simulation condition generated a greater number of comments of a
Strategic nature while the Case groups had significantly more comments related to Process and
Rationale. In the analysis of Measure 2 dependent variables, Method was significant for the
second question on the voting procedure with the Simulation groups perceiving higher levels of
input on their team’s strategy than individuals in the Case groups.
MANOVA Results for Experiment II
To the extent that the dependent variables used to measure Group Dynamics share
commonalities both theoretically and empirically, a Multivariate Analysis of Variance (MANOVA)
was conducted using the Measure 1 dependent variables. On this MANOVA, the four comment
types for Measure 1 were modeled using the same two levels of Method and Facilitation used in
the univariate analyses.
Results of the MANOVA on the Four Statement Types of Measure 1
Results of the MANOVA procedure, in which the four classifications of statements were
modeled jointly as dependent variables, are shown in Table 10.
Table 10 MANOVA Using the Four Group Level Dependent Variables
Source Λ df F η²
Between groups
Method (M) .0727 4,7 22.31** .9273
Facilitation (F) .4082 4,7 2.54 .5919
Method x Facilitation .2154 4,7 6.37** .7846
*p < .10 **p < .05 ***p < .01
In keeping with results observed on the univariate analyses of variance, the MANOVA
analysis detailed in Table 10 also shows a significant main effect for Method, F (4, 7) = 22.31, p <
.01. However, the main effect for Facilitation observed on the ANOVA analysis for variable four
- Facilitative Comments - was not significant on the MANOVA analysis. Finally, whereas there
were no significant interaction effects between Method and Facilitation observed on the four
ANOVA tests from Measure 1, when the four comment types were analyzed together a significant
Method x Facilitation interaction effect F (4, 7) = 6.37, p <.05 emerged.
As only one of the two variables from the Measure 2 voting procedure showed
significant effects in the ANOVA analysis, a multivariate analysis for this measure was
unwarranted.
Summary of Results
While the focus of Experiment I was on the assessment of how the two methods and levels
of facilitation affected mental models over time, in Experiment II, the focus shifted to
understanding better the relationship between Method and Facilitation on Group Dynamics - a
hypothesized moderator of mental model development. In this experiment it was proposed that
the method used would not significantly affect the level of group dynamics observed in the group
activity (Ho1: Method). Further, it was asserted that the level of facilitation provided to the group
would lead to significant differences in group performance (Ho2: Facilitation). Finally, it was
hypothesized that the difference in group dynamics levels between Facilitation groups would differ
according to Method level (Ho3: M x F). These experimental hypotheses and the test results
are summarized in Table 11:
Table 11 Experimental Hypotheses Testing Results for Experiment II
Experimental Hypothesis Test Results
Ho1: The method used would not have a Rejected
significant effect on Group Dynamics
Ho2: Facilitation of the group process would have Rejected
a significant effect on Group Dynamics
Ho3: There would be a significant Method x Moderately
Facilitation interaction Supported
On the first measure of the Group Dynamics assessment, the classification of the four types of
comments from the audiotapes, there was a significant main effect for Method. Unlike the
ANOVA analyses, which showed no two-way interaction effects on the four group level variables, the
MANOVA analysis showed a significant interaction effect for Method x Facilitation. Whereas the
Facilitative comments (Variable 4) were significant in the ANOVA analysis of Measure 1, they
were not significant on the subsequent MANOVA analysis. On Measure 2 of the Group
Dynamics assessment, there was only one significant effect observed for Method. It occurred on
the second variable of Measure 2 - Level of Input on Final Strategy. Since this was the only
significant dependent variable on Measure 2, the individual level measure, a MANOVA analysis
was not required.
Discussion of Results
Experimental Hypothesis 1, asserting that there would not be a significant Method
effect, was rejected. Likewise, Experimental Hypothesis 2, proposing that there would be a
significant effect due to Facilitation, was also rejected. Experimental Hypothesis 3, pertaining to an
expected Method x Facilitation interaction effect, was moderately supported (.01 < p < .05).
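The "moderately supported" bracket (.01 < p < .05) can be verified from the MANOVA's approximate F statistic, F(4, 7) = 6.37. In practice one would use a statistics library; the standard-library-only sketch below integrates the F density numerically (the integration cutoff and step count are numerical conveniences, not part of any established API):

```python
import math

def f_sf(x, d1, d2, upper=1000.0, n=100_000):
    """P(F > x) for an F distribution with (d1, d2) degrees of freedom,
    via composite Simpson integration of the density over [x, upper].
    The tail beyond `upper` is negligible for small d1, d2."""
    lbeta = math.lgamma(d1 / 2) + math.lgamma(d2 / 2) - math.lgamma((d1 + d2) / 2)

    def pdf(t):
        # log-density of the F distribution, exponentiated for stability
        return math.exp((d1 / 2) * math.log(d1 / d2) + (d1 / 2 - 1) * math.log(t)
                        - ((d1 + d2) / 2) * math.log1p(d1 * t / d2) - lbeta)

    h = (upper - x) / n
    total = pdf(x) + pdf(upper)
    for i in range(1, n):
        total += pdf(x + i * h) * (4 if i % 2 else 2)
    return total * h / 3

p = f_sf(6.37, 4, 7)
print(f"P(F(4,7) > 6.37) = {p:.4f}")  # falls between .01 and .05
```

Tabled critical values for F(4, 7) are roughly 4.12 at the .05 level and 7.85 at the .01 level, so an observed F of 6.37 lands between the two, consistent with the "moderately supported" conclusion.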
The first hypothesis in Experiment II, stating that there would not be significant
differences due to variations in Method, was strongly rejected when subjected to multivariate
analysis. The univariate analysis of variance showed that Simulation groups made significantly
more comments related to Strategy and reported significantly higher levels of agreement on the
final strategy whereas Case groups made more comments related to Process and Rationale -
regardless of the level of facilitation provided. Although not significant in a multivariate sense,
the differences in mean facilitative comments were significant when analyzed by univariate
ANOVA. Higher levels of Facilitative comments in groups where facilitation was provided
support the internal validity of the facilitation variable manipulation: when a facilitator was
present, facilitative comments occurred, and when there was not a facilitator, facilitative comments
were made significantly less often.
Results for the second experimental hypothesis, asserting that there would be differences in
the level of group dynamics as a function of facilitation provided, were the opposite of what was
projected. Results of the multivariate analysis showed that facilitation level alone did not result in
significantly higher levels of group dynamics when all four dependent variables were
simultaneously considered.
The third experimental hypothesis, that the group dynamics differences between methods
would differ significantly across facilitation levels, was moderately supported (.01 < p < .05).
Interestingly, in this situation, none of the dependent variables consisting of the four comment
types (Strategy, Process, Rationale and Facilitative) used to measure group dynamics was
sensitive enough on its own to detect a significant Facilitation x Method interaction on the
univariate analysis. It was only when these variables were analyzed together as a system in the
multivariate analysis that a significant difference between the four groups was observed. In
general, no one group was shown to have consistently higher levels across the different
comment types. Perhaps of most interest, however, were the groups exhibiting the greatest levels
of discussion on strategy alternatives and rationale for strategy - two theorized indicators of
mental model content. Although not significant on the univariate ANOVA analysis, the significant
MANOVA results showed that the Facilitated Case groups had higher levels of discussion on
Rationale (i.e., why a proposed strategy would be effective) while the Non-facilitated Simulation
groups had more discussion on Strategy (i.e., how to achieve desired outcomes). Facilitative
comments were greatest in the Facilitated Case groups and comments related to Process were
greatest in the Non-facilitated Case groups.
Integration of Experiment I and Experiment II Results
In order to better understand the results described above, Experiment II results need to be
looked at in light of the results observed on the first experiment assessing mental models. The
following paragraphs highlight some of the most important Experiment I outcomes and their
potential relationship to results from the experiment described here. Overall, the most
important, yet counterintuitive finding, was the differential shape of the univariate three-way
Method x Facilitation x Time interactions across the Ratings Task I and II as shown in Figure 5.
[Figure: two interaction plots, one per dependent variable ("Ratings Task I, M x F x T" and "Ratings Task II, M x F x T"), each plotting pre-post mean differences by Method (Case, Simulation) for the Facilitated and Non-facilitated conditions.]
Figure 5 Pre- and Post Differences in M x F x T for Ratings Task I and II
When mean Time differences were compared across Method levels, the shape for Facilitation was
reversed across the two ratings tasks. This was particularly evident in the Simulation groups
where facilitation of the group resulted in a performance decrement on the first ratings task in
contrast to a performance enhancement on the second ratings task. Even though the effect was
not as striking, facilitation also had a reverse effect on the two ratings tasks in the Case
conditions. Most puzzling perhaps was that the comparative effect observed in the Case groups
was just the opposite of that observed in the Simulation groups, where facilitation appeared to
suppress performance on the first ratings task but to aid performance on the second task.
This contrast suggests that variations in performance may be due to differences in the
nature and difficulty of the two ratings tasks. As noted previously, mean performance dropped
for both the Facilitated and Non-facilitated groups on the second ratings task which asked
subjects to indicate the level of influence they thought each of the six control variables would have
on the three populations of people in the system. This task required that participants rate eighteen
different paired combinations on a five-point scale. In comparison, the first ratings task required
that participants rate six different combinations on a three point scale. Had the variations in
performance between ratings tasks been consistent across case and simulation groups, differences
in the complexity of task could explain the variations in performance on the two tasks. But
because facilitation appears to have had an opposite effect on performance for each of the two
tasks depending on the method used, it is necessary to look at the relationship between Method,
Facilitation, and the measurement task.
Relationship between Experiment I and II
The analysis in Experiment II showed that comments related to process were highest in
the Non-facilitated Case groups. This suggests that these groups may have spent more time figuring
out how to proceed with their activities than on the activities themselves. Interestingly, while
there were no significant differences between groups in the level of Facilitative comments made as
a function of method type, there were significant differences in the levels of Strategic, Process and
Rationale comments based on the method used. This suggests that the nature of the discussion
that occurred was affected by the method used during the group activity. These variations in
types of comments given may have contributed to differences in performance on the two ratings
tasks as shown in Figure 5.
Relationship between the Ratings Task I and Comment Type as a Function of Method
As a starting point, recall the differences in complexity of the Ratings I and II tasks. On
Ratings Task I, participants were asked to assess the effect of the six control levers on the system
functioning overall, while on Ratings Task II they were asked to assess the effect the six control
levers on each of the three populations of people presumed to exist in the hypothetical
organization. The following paragraphs consider the differences in the complexity of these tasks
in light of the nature of comments made during the group activity in each of the two method
groups.
First, comments related to Strategy were found to be highest in the
Simulation groups for Ratings Task I. Here, it could be asserted that feedback from the
simulation allowed group members to identify the most effective levels of each variable to use to
achieve the best overall performance - the objective of Ratings Task I. While the Case groups did
not have the same level of reiterative performance feedback, this did not seem to be a detriment to
performance in the Facilitated Case groups. Perhaps in these groups, a facilitator helped group
members explore why their strategy would work and encouraged discussion on how the six
variables would affect performance.
For Ratings Task I, in Non-facilitated Simulation groups, the pre-post mean improvement
was more than twice that of the Facilitated Case group. Perhaps without facilitation, simulation
groups had more time for trial-and-error, as there was no one urging them to discuss their
rationale or the performance observed. This may have given these groups more information on
system functioning that contributed to their better performance.
In contrast, comments related to Process were significantly higher in the Case groups.
More time spent on process in the Case groups may have meant more point-by-point discussion of
how each system control lever would affect the overall system. Whereas Non-facilitated Case
groups may have spent undue time discussing only a few of the control levers because more time
was spent on group process, a facilitator in these groups may have ensured that ample time was
given to more thoroughly completing the task by considering all variables. Indeed, the Facilitated
Case groups' mean differences on the Ratings Task I were second highest overall, followed closely
by the Non-facilitated Case groups. Further, in the Non-facilitated Simulation groups, comments
related to Process were lowest and performance on the first ratings task was also lowest. This
suggests that the Non-facilitated Simulation groups spent little time establishing an experimental
process and tracking performance that would allow them to better understand the effects of the
variables on system performance overall. Moreover, even when facilitation was provided, it made
little difference in increasing the amount of time spent on process. Thus it would appear that in
Simulation groups it was not the facilitator that controlled the process.
Finally, comments on Rationale were found to be significantly higher in the Case groups
on Ratings Task I. In these groups, more discussion of Rationale for strategy meant more
discussion on the effect of the control levers on the system. This may have helped Case groups
maintain high levels of performance on Ratings Task I, even without the performance feedback
that would have been provided by the simulation. In contrast, in Non-facilitated Simulation
groups where comments related to Rationale were lowest, performance on Ratings Task I was
also lowest.
Relationship between Experiment I and Experiment II, Measure 1
On Ratings Task II, participants were asked to assess the effect that each of the six control
levers would have on each of the three populations of people presumed to exist in the hypothetical
organization. Although all experimental groups exhibited a drop in performance on the Ratings II
posttest, the Facilitated Simulation group still performed marginally better than other groups as
can be seen in Figure 5 shown previously. As with Ratings Task I, facilitation of the group
appeared to be the differentiating factor affecting performance. However, consideration should
also be given to the types of comments made during the group model building activity as some
comment types were found to be significantly greater as a function of method used. These
differences in performance on Ratings Task II, considered in light of the comment types that
differed significantly by method, are described in greater detail below.
First, comments related to Strategy as well as performance on Ratings Task II were
significantly higher in the simulation groups. Here, output from the ‘runs’ of the simulation may
have provided information that was helpful in understanding how the strategy employed
affected specific populations of people - the focus of Ratings Task II. However, even with this
visual feedback cue, lacking a process for tracking these results, participants in a simulation
condition would find it difficult to retain the details of these runs over repeated trials.
Secondly, comments related to Process should also be considered in light of performance
observed on Ratings Task II. While the Non-facilitated groups may have performed well on
Ratings Task I as a result of both system feedback and opportunity for more uninterrupted trials,
they had significantly lower levels of comments related to process. This finding supports the
theory that, without a structure to track multiple trials, it was difficult to perform well on Ratings
Task II. If so, then it would seem that adding a facilitator to simulation groups to help the group
follow a structured experimentation process would lead to better performance. As such, the
feedback inherent to the simulation, combined with a facilitator to instill process, should ensure
more discussion on strategy and tracking of trials to better understand the
more complex second order relationships measured on Ratings Task II. Certainly it would be
tempting to accept this as an explanation as to why the Facilitated Simulation groups performed
marginally better than the Non-facilitated Case groups, as the assertion that a facilitator would
make up for the lack of attention paid to devising a process for experimentation was the
theoretical supposition made in Chapter 2. The problem with this theory however, is that the
Simulation groups had the lowest levels of discussion related to Process regardless of whether
they were facilitated or not. Hence, even though the Facilitated Simulation groups had better
performance on Ratings Task II overall, their performance was only marginally better than the
Non-facilitated Case groups that exhibited significantly higher levels of discussion related to
Process. Perhaps the case groups systematically discussed each variable and its effects both on
the system and on the specific populations of people.
The third comment type, comments pertaining to Rationale, was also significantly higher
in Case groups (p < .05), which supports the idea that Case groups systematically discussed each
variable and its effects more than other groups. Moreover, attention paid to “why” a chosen
strategy would affect the system variables in a particular way may have been influenced by the
presence of a facilitator, as theorized, since it was the Facilitated Case groups that had the highest
levels of comments of Rationale overall.
Relationship between Experiment I and Experiment II Measure 2
Lastly, in considering the relationship between the Ratings Task I and II tasks and the
perceived levels of input on final strategy as reported on the individual voting measure in
Experiment II, it was observed that even without higher levels of process or more detailed
discussion of rationale, individuals in the Simulation groups felt the final strategy reflected their
input significantly more than those in the Case groups (p < .05). This suggests that there were
aspects of the Simulation method that were more effective than the Case method for drawing
group members into the process and making them feel involved in decision making.
Discussion of Unexpected Results
Results of the hypotheses proposed in Experiment II were surprising in several ways.
First, the assertion made that Method would not significantly affect group dynamics had to be
reconsidered in light of the statistical impact that Method did have on comments related to
Strategy, Process and Rationale. Secondly, that Facilitation did not have a significant effect as
theorized, was also surprising considering the emphasis placed on Facilitation in the literature.
Finally, given that Method did affect group dynamics while Facilitation did not, and given the
significant interaction observed for Method x Facilitation on the MANOVA analysis for
Experiment II, it may be theorized that method moderates the effectiveness of facilitation.
What was revealed by the MANOVA analysis in Experiment II that was not evident in the
univariate analysis was the effect that Method had on facilitative comments. Surprisingly, the
greatest number of facilitative comments occurred in the Non-facilitated Simulation groups and
the second highest level occurred in the Facilitated Simulation groups. That there were
higher levels of Facilitative comments in the two simulation groups even though one was non-
facilitated suggests that there are inherent aspects of the simulation process that serve a
facilitative function. Perhaps the participant in charge of keyboard input or the individual
vocalizing the input values acts as an informal leader. That the simulation is structured with an
item-by-item input field might also suggest that the man-machine interface serves to facilitate the
group’s process and in so doing reduces the level of discussion related to process that is required.
That Simulation groups spent significantly less time on discussion of process supports this
idea.
Conversely, in the Case groups, the levels of Facilitative comments were lowest, which is
surprising given that without the simulation variable input process to drive the group’s activity as
proposed above, it would seem that a facilitator would have much more impact on the group’s
process. Perhaps the levels of Strategic and Rationale statements made served to reduce the level
of facilitative comments that were possible. For instance, whereas Strategic comments, which
were highest in Simulation groups, could be expressed relatively quickly and in succession ("do
this, try that, go higher with that, etc."), comments explaining Rationale take longer to express, as
people are inclined to give examples, explain the meaning of words and clarify their opinions.
Hence, while the short, succinct strategy comments made may lead to a greater level of
understanding of overall system functioning, they may suppress the level of statements relating to
rationale needed to develop a better understanding of the second order system complexities
measured on Ratings Task II in Experiment I. This notion is supported by the differences
observed on the individual level measures of group dynamics where those in the Simulation
groups reported significantly higher levels of perceived input on the final strategy than those in
Case groups but where no significant difference was observed in the level of agreement with the
final strategy. In short, everybody had a say in the process in the Simulation groups, but what was
said was not necessarily heard, understood, and agreed upon by others in the group.
How Results of Experiment II Fit with Earlier Studies
==> Insert how these results fit with Akkermans and Vennix's finding that good communication
coincided with fair to high levels of group consensus.
==> Insert how these results fit with the assertion that it may be expected that individuals would
perceive higher levels of input on a chosen strategy while at the same time perceiving lower levels
of agreement with the strategy (based on findings by Doyle et al.).
==> Insert discussion on how these results fit with the expectation that individuals would not
perceive high levels of input from the process and discussion would center on just a few elements
of the conceptual problem.
==> Insert discussion on how these results fit with the expectation that non-facilitated case
groups would have lower levels of discussion regarding rationale for a chosen strategy.
==> Insert discussion on how these results fit with the expectation that there would be little time
spent discussing outcomes of the experiments, and that trials would continue one after the other in
search of better performance without exploration as to why the simulation model reflected the
outcomes that it did in non-facilitated simulation groups.
==> Insert how these results fit with the expectation that the facilitator would effectively increase
the level of perceived input on the final strategy.
==> Insert discussion on how results fit with the expectation that the Facilitated Simulation
groups would perceive higher levels of input on and agreement with final strategy and levels of
comments related to rationale and strategy would be greater than in the other three conditions.
Further Considerations for Results in Relation to Earlier Studies
Results of this study support Wolfe's (1975b) findings that participants using simulations
often failed to apply formal, rational analyses in devising a strategy and did not take the time to
articulate the rationale for their strategy without being prompted to do so by a facilitator. As with
behaviors observed by Wolfe, statements of Rationale were significantly lower in Simulation
groups as a function of the method used.
Next, Wolfe’s (1973, 1975a) finding that effective communication aided performance on
business gaming situations was also supported by this study although not in the same way.
Whereas Wolfe noted that those groups that had higher levels of communications performed
better, results here showed that even though the Case groups had more discussion related to
Process and Rationale, this did not translate to significantly better performance on Experiment I,
Ratings Task I, as performance was actually better in the Simulation groups. Study results
complement Wolfe's findings by showing that the type of communication affects performance.
Where the level of discussion related to strategy was higher in the Simulation groups,
performance was also significantly higher on the first ratings task. In contrast, in Case groups
where discussion of Process and Rationale were higher, performance on Ratings Task I was
actually lower whether facilitated or not. Finally, whereas the majority of findings are compatible
with earlier works, the results of this study did not support the case versus gaming comparison
made by Wolfe and Guth (1975), who found that the use of games in the teaching of business
policy was superior to case studies, but only if teacher guidance and structure were provided. In
this study, Non-facilitated Simulation groups (a teacher-less gaming corollary) outperformed the
Facilitated Case groups on Ratings Task I. However, whereas facilitation did not aid simulation
performance on Ratings Task I, it appeared to have helped on Ratings Task II. This finding
suggests that while Method and Facilitation interact, the nature of the interaction may vary as
a function of the measurement method.
Implications of Results for Theory, Research and Practice
The results and insights on how mental model development may be due in part to
variations in the nature of group dynamics have implications for future theory development and
research design. The following paragraphs highlight some of these implications.
First, responses given in Experiment I did not necessarily reflect the same level of thinking
from pre- to posttest conditions. For instance, participants in the simulation conditions referred to
"we" in their posttest responses on the open-ended question more often than the case groups,
suggesting that the simulation activity was more effective at building a sense of team camaraderie.
This team spirit may have carried over to responses that were more in keeping with what "the team"
would say rather than what the individual would say. When individuals indicated that they disagreed with
the final strategy, their answers to “why” they did not agree sometimes indicated that there was a
particular variable they wanted given more or less emphasis in the final strategy. Thus, an answer
given to a question when phrased as “we think that...” may reflect one perspective, while the level
of individual agreement with this strategy may reflect another.
Secondly, in this study, the effectiveness of facilitation varied depending on the tasks to be
completed. This suggests that some tasks are better suited to a facilitative process than others
and that the design of the group model building activity should reflect these considerations.
Assuming that the nature of group dynamics may vary under different methods, consideration
should be given to the goals of the model building exercise. If the goal is greater exploration of
the system variables with less concern on team performance, using a facilitator might encourage
greater exploration of rationale but allow less time for strategy trials.
Next, the recognition that differences in the effectiveness of facilitation may be due to
variations in method and task has implications for future evaluative research. Results herein
suggest that simply asking whether facilitation is better or worse is too narrow a research
question. The evaluation paradigm needs to be extended to consider how facilitation effectiveness
varies according to method and performance task. Lastly, generalizations regarding facilitation
practice must be limited to those situations that most closely match the fixed levels of method and
facilitation employed here.
Lessons Learned: Practitioner Guidelines
Opting to emphasize model conceptualization or simulation processes independently of
one another is at best a satisficing practice. This is not to say that it should not be done, but
rather that the trade-offs in utilizing one aspect of the system dynamics model building paradigm
at the expense of others should be recognized. Based on this assessment, the appropriate method
and level of facilitation can then be matched to the learning objectives.
In determining the level of facilitation required, consideration should first be given to the
method to be employed. If the goal of the group modeling activity is to teach about system
complexity, results from Experiment I suggest that a facilitator will not have as much impact in a
conceptual modeling activity based on a case study as they will in a simulation activity. Based on
results of Experiment II detailed here, it can be expected that case groups will spend more time
devising a process for going about the modeling activity with or without a facilitator. However,
the same cannot be said for simulation groups, which will spend little time devising a process to
systematically test components of the system. In using a simulation without a facilitator, groups
can be expected to spend little or no time attempting to understand why their strategy affects
system performance the way it does and will opt instead to run as many trials as possible in the
time allotted.
Furthermore, case groups can be expected to consider more carefully the rationale behind
presumed system relationships even in the absence of a facilitator. They may however, spend a
great deal of time establishing a process to do this. Therefore, if exposing relatively inexperienced
group members to all the primary relationships in a given system is the goal of the initiative, a
facilitator may be instrumental in ensuring timely completion of the process. If, on the other hand,
the session is designed to improve understanding of second order system relationships with
groups whose members have greater expertise, facilitation may be superfluous. Finally, there are
secondary benefits that may be derived from using either a case study or a management flight
simulator. For instance, these methods may be incorporated into a larger systems thinking
learning initiative or serve as a vital introduction to a related organizational change effort. As
such, they may play a critical role in achieving a specific organizational rather than individual
learning objective. Given the contrast in the effect of facilitation on performance across the
different mental model measurements used in Experiment I, facilitation of the process is clearly
important and should be tailored not only to the methods used, but to the learning agenda.
==> Insert References