Wakeland, Wayne, "The Judging Process for SymBowl: a High School System Dynamics Modeling Competition", July 20-23, 1998

The Judging Process for SymBowl: a High School System Dynamics Modeling Competition
by Wayne Wakeland, Systems Science Ph.D. Program, Portland State University, 5/7/98

Introduction

This "paper" describes the judging process used to determine the winners in SymBowl, a high
school system dynamics modeling competition held in Portland, Oregon, for the past three years.
SymBowl was created by Ed Gallaher, a medical researcher at the Portland VA Hospital and
Associate Professor at Oregon Health Sciences University.

The judging criteria and judging process were developed by Wakeland, who has served as the
judging coordinator for the past three years, overseeing the process, compiling results, etc.
Wakeland is an Adjunct Professor of Systems Science at Portland State University, where he
teaches graduate-level modeling and simulation classes.

Included as attachments are: A) the judging criteria, B) a document provided to students and
teachers, C) a document provided to the judges, D) a judge response form, and E) the
spreadsheets used prior to and during the event to facilitate scoring. The spreadsheets have
been converted to tables for ease of inclusion.

Wakeland developed the attached materials, except for Attachment B, which was co-developed
by Gallaher, Wakeland, and Diana Fisher, with significant input from Tim Joy and Ron Zaraza.
The latter three are high school teachers who have incorporated system dynamics concepts into
their classes. Fisher and Zaraza are also co-principal investigators for CC-SUSTAIN, an
NSF-funded grant that teaches system dynamics to high school teachers.

Evolution of the Judging Process

In SymBowl 96, five judges each read six or seven of the 16 papers, and scored them on the
combined quality of the paper and model. Because the mean scores varied greatly by judge, the
absolute scores were converted to a set of rankings for each judge, and the rankings were
summed. From these rankings, three definite finalists and six potential finalists were identified.
The morning of the event, the judges focused on, and ranked, the poster sessions of these top
contenders. These scores were added to the previous scores to determine the finalists. The
finalists then made presentations, which were also ranked by the judges. The presentation scores
were factored in to determine the final order of finish.

This process was problematic for several reasons. First, even though the judges had not read all
the papers, they still had to evaluate all of the top contenders in the poster session, and all of the
finalists' presentations. It had been thought that this would not matter because the scoring for the
poster session and presentation was intended to be independent of the paper and model scores.
However, the judges who had not read a particular paper felt they were at a disadvantage when
scoring the poster sessions and presentations.


A second problem was that the teams that were not top contenders received virtually no attention
from the judges during the poster session. In retrospect, it was clear that this needed to be
remedied.

The third problem was the use of rank ordering to combine the judges' scores. This method was
selected because the judges had very different absolute scales due to their varying backgrounds:
H.S. teacher, modeling consultant, professor, etc. Rank ordering eliminated the possibility
that one team might be judged by "hard" judges while another might be judged by "easy" judges.
The problem with focusing on rankings was that there was no way to tell whether two or more
projects were nearly identical in score or far apart.
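The information loss is easy to see in code. The sketch below mimics the SymBowl-96 style of
rank aggregation (the scores and function names are invented for illustration, not actual
competition data): each judge's absolute scores become within-judge ranks before summing, so a
near-tie and a wide gap look identical afterwards.

```python
def scores_to_ranks(scores):
    """Convert one judge's absolute scores to ranks (1 = best)."""
    order = sorted(scores, reverse=True)
    return [order.index(s) + 1 for s in scores]

# Judge A sees projects 1 and 2 as a near-tie; judge B mildly prefers project 2.
judge_a = [4.9, 4.8, 2.0]
judge_b = [3.0, 4.0, 3.5]

rank_sums = [a + b for a, b in zip(scores_to_ranks(judge_a),
                                   scores_to_ranks(judge_b))]
print(rank_sums)  # [4, 3, 5] -- the 4.9-vs-4.8 near-tie has become invisible
```

Project 2 beats project 1 on summed ranks, even though one judge considered them essentially
tied; the magnitudes never survive the conversion.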

Consequently, for SymBowl 97, the process was changed:

• All judges read all the papers.

• Each judge provided separate scores for the paper and the model.

• Judges were provided detailed criteria for scoring each aspect (see Attachment A), although
they were asked to submit only final scores on a 5-point scale for the paper and model.

• Large discrepancies in scores were highlighted by the judging coordinator and discussed prior
to the event; judges would decide whether or not to revise their scores based on the discussion.

• All teams would be visited and scored by three judges during the poster session, regardless of
whether they were top contenders; to facilitate this, a detailed schedule on 15-minute intervals
was developed (see Attachment C).

This process worked much better. A very small number of “outliers” were identified and
discussed. Two of them were compilation errors, and were corrected. No other changes were
made. As a result of having absolute scores, it was clear after the poster session that instead of
five finalists, there was a natural break point at six, so the judges recommended having six
finalists.
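The coordinator's outlier check can be sketched as follows. This is a rough illustration, assuming
the check compares each judge's score against the team's mean; the threshold, function name, and
data are all invented, not taken from the actual SymBowl spreadsheets.

```python
def flag_outliers(scores_by_team, threshold=1.5):
    """Return (team, judge index, score) for scores far from the team mean."""
    flags = []
    for team, scores in scores_by_team.items():
        mean = sum(scores) / len(scores)
        flags += [(team, i, s) for i, s in enumerate(scores)
                  if abs(s - mean) >= threshold]
    return flags

paper_scores = {"Team A": [3.0, 3.5, 3.0, 1.0, 3.5, 3.0],  # judge 4 is far low
                "Team B": [4.0, 4.5, 4.0, 4.0, 3.5, 4.5]}
print(flag_outliers(paper_scores))  # [('Team A', 3, 1.0)]
```

A flagged score is not automatically "wrong"; as described above, it simply triggers a discussion,
after which the judge decides whether to revise it.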

For SymBowl 98, two things were changed: the weights for the various aspects of the
competition were modified, and one criterion was added. The weight for the model was increased
from 25% to 40%, and the other three aspects were reduced from 25% to 20% each. In addition, a
criterion called "endogenous creation of the behavior of interest" was added to the model aspect.
This was in response to seeing a number of student projects that used table functions to "drive"
the model.
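The distinction behind the new criterion can be shown with a toy example (all numbers and
variable names here are invented): a table-driven stock simply replays a lookup, while an
endogenous model generates a similar trajectory from its own feedback structure.

```python
DT, STEPS = 1.0, 5

# Table-driven: the trajectory is read from a lookup, so the model structure
# cannot explain *why* the stock grows -- the table "drives" the model.
table = [10.0, 12.0, 14.4, 17.3, 20.7]
stock_driven = [table[t] for t in range(STEPS)]

# Endogenous: a comparable trajectory emerges from a reinforcing feedback
# loop, with the inflow proportional to the stock itself.
stock, rate, stock_endog = 10.0, 0.2, []
for _ in range(STEPS):
    stock_endog.append(round(stock, 1))
    stock += rate * stock * DT

print(stock_driven)
print(stock_endog)  # [10.0, 12.0, 14.4, 17.3, 20.7] -- same shape, but explained
```

Both runs produce growth, but only the second says anything about the mechanism, which is
what the "endogenous creation" criterion rewards.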

During SymBowl 98, one project made it into the top three even though it began the competition
in 7th place based on the paper and model scores. This was the result of an excellent poster
session and an excellent presentation. Another project did not make the finalist cut due to a poor
poster session, even though it was in 2nd place based on the initial paper and model scoring. The
selection of finalists was partly subjective, since two projects were virtually "tied" for the fifth
slot based on the numerical scores. Selection of third place also required a judgment call, since
two projects were tied based on the numerical scores. In both cases, the project with the better
poster session and presentation, respectively, was selected.


Further Explanation of the Attachments

Attachment A gives the scoring criteria for the report, model, poster session and presentation.
Eight criteria are listed for the report itself, with two of them having the most weight. Eleven
criteria are provided for the model, of which five have the most weight. Four criteria are
provided for the poster session, all of which have about the same weight. Four criteria are
provided for the presentation, with two having the most weight.

Attachment B is a copy of the document provided as a guideline to the students and teachers.
Written by Gallaher, Wakeland, and Fisher, it is formatted to serve as a model for the paper, and
shows how to present a modeling project: Introduction, The “Core” Model, Verification &
Validation, Model Enhancements, Conclusions & Future Plans, Source Materials, and
Appendices. General guidelines and formatting are provided in the body, and additional details
are provided as footnotes that are quite extensive in some cases.

Attachment C is a copy of the document provided to the judges. It contains contact information,
a schedule of events pertinent to the judging process, a room layout showing where each poster
session will be located, a schedule showing which judge visits which team at each 15 minute
interval, and scoring related information.

Attachment D is simply a scoring sheet that makes it easier to consolidate the judges' scores.

Attachment E provides the two spreadsheets used for the judging process. The first one was used
prior to the event to combine the paper and model scores. On this sample, from the 1998
SymBowl, there are a number of blank spots. Due to a miscommunication, we had to score
several additional projects; we chose to pre-screen them and have some of the projects scored by
three judges instead of six. We do not recommend this process, but it was necessary in this case.
The second spreadsheet was used during the event to incorporate the scores from the poster
sessions and presentations.

Closing Remarks

The process used to judge SymBowl appears to us to be working reasonably well. While it is
very time-consuming for the judges to read all the papers in a very short timeframe, we feel it is
appropriate and necessary.

The process also relies on the use of a 5-point "absolute" scoring scale with explicit criteria
identified, but the judges determine their own process for arriving at a final score. Some judges
score the papers and models on each criterion and then compute a score, while others use the
criteria more subjectively. We feel this has been effective, due in part to the use of a feedback
loop to resolve major discrepancies.

We would welcome any input regarding how we might improve the process.


Attachment A: Scoring Criteria

Criteria for Scoring the Report (*** = most weight)

• Problem Def'n, Model Purpose, Reference Behavior Pattern ***
• Modeling Process
• Interpretation of Results, Conclusions, Future Directions ***
• References (required)
• Readability
• Organization
• Effective Use of Figures & Tables
• Format (Use of Instant Manual highly recommended)

Criteria for Scoring the Model (*** = most weight)

• Model Contains Feedback (appropriately) ***
• Description of Key Variables, Interactions, Equations ***
• Clarity of the Flow Diagram ***
• Endogenous "creation" of the behavior of interest ***
• Assumptions/Limitations Clearly Indicated
• Analysis of Feedback Loops (as recommended)
• Originality
• Working "Core" Model (highly recommended)
• Verification, Comparison to Reference Behavior Pattern
• Exercising the Model, Sensitivity Testing ***
• Impression of Overall Model "Quality" or Validity

Criteria for Scoring the Poster Session (all about equal weight)

• Model Purpose and Results Clearly Shown
• Model Structure & Key Assumptions/Relationships Clearly Documented
• Informative Overview Provided by the Students
• Ability of Students to Answer Probing Questions

Criteria for Scoring the Presentation (*** = most weight)

• Clarity of Model Purpose and Interpretation of Results ***
• Clarity of Model Structure & Key Assumptions/Relationships ***
• Sense of the Process including Verification & Validity
• Organization & Delivery


Attachment B: Document Provided to Teachers & Students
by Edward J. Gallaher, Wayne Wakeland, and Diana Fisher

Interesting/Useful Title
(in form of a question if possible)

by
Jane Allison
Mark Baird
(alphabetical)

Name of School
(System Dynamics Club)
Date
(Prepared for the 1998 SymBowl)
System Dynamics Advisor: (First I. Last)
Department
School

Outside Advisors:

First I. Last
Title or Position

Primo M. Fino
Title or Position


Introduction

Why are We Building Models, and for Whom?

Background and Problem Definition: Explain the problem you studied, what you wanted to find
out, and why this is important or interesting.[1]

Purpose of the Model: This section should include a clear description of the intended audience
and what you are trying to communicate to them.[2] It is also important to identify the
"stakeholders," that is, the groups of people who have a special interest in this problem.[3]

Resources Utilized

People: Summarize who you spoke with to get the information you needed (details in the
Bibliography). Identify two major outside consultants. How did you find them? How often did
you meet with them (or talk to them)?

Reference Material: Brief summary (details in Bibliography).

Data Sources: Some attempt should be made to compare the model and simulations with
real-world data.[4]

[1] In broad terms, we develop models and simulations to learn more about the world around us, and to make
predictions which allow us to solve problems. Our goal is to make the world a better place to live. Unfortunately,
modelers can get so wrapped up in the modeling process that they lose sight of the real-world problem they are
addressing. Defining the problem up front, in writing, helps to avoid this.

Real-world problems are often difficult to solve due to the complex interconnections among the relevant
components. For example, our attempts to intervene and "manage" natural resources often fail due to unforeseen
side effects caused by the various feedback loops present in the system. This is exactly why system dynamics
models are needed and useful.

[2] No one model should be expected to be all things to all people. For example, you might build a salmon
depletion model aimed at high school students to indicate some of the major issues (weather, dams, disease,
overfishing, pollution). Models of salmon depletion for fisheries biologists, politicians, voters, or sports fishermen
would probably be somewhat different.

Suggestion: an ideal target audience would be a cross-section of high school students. You know this group the
best, and you can probably develop an excellent model aimed at this population. This would be analogous to a
school newspaper reporter writing articles for the population of the school.

[3] Identifying stakeholders can provide a very different perspective on how your model fits into the real world!
Despite the fact that your early models cannot include each "player," it is crucial that you consider who these
players might be. For example, in a study of salmon depletion, we tend to think of fishermen (both commercial and
sport), logging interests (who may damage the spawning habitat), and the utility industry with its dams that prevent
fish from navigating the rivers. Who else might be very interested in the political aspects of this problem? How
about the aluminum industry with its vast appetite for electricity? And economists who are concerned about the
world price of aluminum? And farmers who want to use the water for irrigation?

In a larger model, adding these factors will greatly expand the complexity of the model. For the purposes of this
project we want to focus on simpler models.


Challenges: Explain some of the problems you encountered as you built your model, and how
you overcame them.[5]

Reference Behavior

What are your expectations? Describe the behavior of the system, or the expected behavior of
the system, such as oscillation, rapid growth and collapse, exponential decay, etc. Graphical or
tabular data is very helpful.[6]

The “Core” Model

Where to Start

Simplicity: By far the most common, often disastrous, mistake is to begin with a model that is
too complicated. It will be very hard to troubleshoot, and you will probably never get it to work!
A complex model is not necessarily superior to a simple model. In fact, the opposite is most
often the case.[7] Consequently, it is strongly recommended that you use the basic elements of
System Dynamics (stocks, flows, sources and sinks, auxiliary variables, connectors), and avoid
"space compression objects," sectors, submodels, ovens, queues, and conveyors, unless you are
quite convinced that these are necessary for your model.[8]

Originality: Originality is important! At least a portion of the model logic must be of your own
design. You can either build a simple model of

[4] This may be collected by the team members (in an actual physics experiment, for example), or obtained from
outside sources.

[5] Examples include limited resources, lack of data, lack of understanding about how the system works, etc.

[6] This is one of the most challenging aspects of modeling and simulation. But it is critical that you make an
attempt to do this before building the model, even if your reference behavior is simply a hand-drawn sketch of your
expectations. Without this documented (!) expectation, it is too easy to accept the behavior of the model after seeing
what the simulation tells you. This is a common trap, so try to avoid it by documenting your expectations in
advance.

You may change your "expectations" as you learn more about your system's behavior, but the modeling process
should not be a series of exercises of the form: "Let's see what happens," followed by, "Oh yeah, that makes sense."

[7] In any case, an excellent complex model always begins as an excellent simple model.

[8] These items are rarely needed to reproduce the fundamental behaviors of interest in a feedback system.
However, if you really think you need a specific feature, say, for example, a conveyor, then go ahead and use it;
then try to duplicate the behavior without using the feature. More often than not, you will find this is possible. As
you gain experience, you will probably be surprised at the variety of phenomena that can be modeled quite
adequately using only stocks, flows, and converters.


your own design, or enhance an existing model.[9] In the latter case, be sure to credit the source
of your initial model.[10]

Starting with a "Core" Model: It is recommended that you focus initially on a very simple "core"
model, and get this model working properly before making refinements or adding
complexity.[11]

How to Describe/Document Your Model for Others

Key Variables: Describe the most important variables as you perceive them before building the
model (i.e., a "laundry list").[12]

Core Model Flow Diagram: Figure 1 shows the core model developed by the team. If you begin
with an existing model, show this model first, in a separate figure, and describe where it came
from. Then, show your additions in a second figure. Your original contribution should be quite
clear to the reader.

Figure 1. The Core Model. Always put a caption under your figure, numbering the figure and
providing a brief explanation.

Core Model Logic and Key Equations: Explain how the stock-and-flow diagram is organized,
and the overall structure of the model. This may take several paragraphs.
• For each flow, describe the source, trace through the various reservoirs, and describe its
sink(s).
• For each stock, describe all inputs and outputs.
• Describe the reasoning behind the key equations in the model.[13]

Explanation of Additional Aspects (optional): You may want to expand on the overall
description in order to better explain certain parts of the model or to document the additional
model components that were added to an existing model.

Figure 2. Insert caption here.

[9] This might be, for example, adding a new submodel or some other new feature to a population model,
ecosystem model, or basic exponential decay model.

[10] There is no penalty for starting with an existing core model, and no bonus for starting from scratch. You must,
however, clearly explain what you started with, and what your team added as an original contribution.

[11] "Working properly" means that even though the model may not yet fully capture the reference behavior due to
its simplicity, the variables do not "blow up," go the wrong direction, oscillate when they shouldn't, etc.

[12] For example, salmon population, number of fishermen, length of the fishing season, sea lions which sit in
front of the fish ladder and scarf up salmon, etc. (Be aware that what you think are the most important variables
when you start often do not turn out to be the most important when you are finished. But that is why we are using
System Dynamics, isn't it?)

[13] The equation set, along with its graphics, can be copied from the Equations layer by carefully outlining the
equations with Flash-It.


Identification and Analysis of Feedback Loops: Identify each feedback loop in the model, and
describe how it impacts the behavior of the model.[14]

Additional Considerations

Major Assumptions: Document the major simplifying assumptions you made.[15]

Parameter Values: Explain the rationale or data source for the values of those parameters that
strongly influence behavior, such as the initial values for the stocks, growth rates, etc.
Depending on the data you are able to obtain, these may be educated guesses. Nevertheless,
there should be some explanation as to how you chose the particular values.

Choice of Time Specs: Explain the choices you made for Time Specs, and why. This includes
the basic unit of time (minutes, months, etc.), the beginning and ending time, the length of the
simulation, and the choice of DT.
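A tiny numerical illustration of why the choice of DT matters, sketched in plain Python rather
than STELLA (the decay model and all values here are invented for illustration): Euler
integration of dS/dt = -0.5*S over four time units, with a coarse and a fine step.

```python
import math

def simulate(dt, t_end=4.0, s0=100.0, k=0.5):
    """Euler-integrate exponential decay dS/dt = -k*S with step size dt."""
    s, t = s0, 0.0
    while t < t_end - 1e-9:
        s += -k * s * dt   # simple Euler update
        t += dt
    return s

exact = 100.0 * math.exp(-0.5 * 4.0)   # analytic answer, about 13.53
print(round(simulate(1.0), 2))         # 6.25  -- DT too coarse, large error
print(round(simulate(0.125), 2))       # 12.68 -- much closer to exact
```

A useful habit is to halve DT and rerun: if the results change noticeably, DT was too large.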

Core Model Results

Graph for the Core Model: Describe the specific parameters for the initial simulation, and then
capture an output graph to illustrate the behavior of the model.[16]

Figure 3. Salmon Population between 1965 and 1995.

Interpretation of the Graph: Explain the graph(s), and your interpretation of each of the curves
based on your understanding of the system.

Tabular Output for the Core Model: Create a table to present the values of the important
elements of the model. Choose an appropriate output interval for the table (not necessarily
every DT).

Figure 4. Tabular output.

Interpretation of the Table: Explain what the numbers mean to you, and how they support what
you believe the model demonstrates.

[14] This is extremely useful and easily done. All loops will contain at least one storage, so one strategy is to pick a
storage and see if any of the arrows emanating from it can be traced through a sequence of connections back to the
starting point.
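That tracing strategy amounts to a graph search. The sketch below (a hypothetical salmon
fragment; the variable names and adjacency-list representation are my own, not from the
guideline) picks a variable and follows arrows until a path leads back to it.

```python
# Hypothetical model connections: variable -> variables it influences.
model = {
    "salmon": ["spawning"],
    "spawning": ["births"],
    "births": ["salmon"],      # flows back into the stock: a loop
    "fishermen": ["catch"],
    "catch": [],
}

def find_loop(graph, start):
    """Depth-first trace from `start`; return a path back to it, or None."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        for nxt in graph.get(node, []):
            if nxt == start:
                return path + [nxt]
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return None

print(find_loop(model, "salmon"))     # ['salmon', 'spawning', 'births', 'salmon']
print(find_loop(model, "fishermen"))  # None -- no loop through this variable
```

Whether a discovered loop is reinforcing or balancing then depends on the signs of the
influences along the path, which the student would reason about by hand.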

[15] For example: 1) In order to simplify the model, we assumed that all species of salmon could be lumped
together. 2) We also assumed that the reproductive level for salmon is the same from year to year, even though in
reality..., etc.

[16] If you Copy and Paste, rather than capturing with Flash-It, you will be able to edit the individual objects in the
Graph window. For example, you can delete the "pin" at the upper left, as well as the date and time that you ran the
simulation, and you can enlarge the fonts used for the x- and y-axes.


Verification and Validation

Preliminary Testing

Verification: Make sure that the information you entered into the computer model accurately
represents your conceptual model (e.g., no typos, no incorrectly-entered parameters, etc.).

Validation: Compare the behavior of the model to the real world (the reference behavior
described in the first section). Also, what do your advisors think of your model and results?[17]

"Error" Analysis: If the model does not agree with your expectations, can you explain why?
Were your perceptions of the reference behavior correct? Is the structure of the model
appropriate for the real-world situation you are trying to model?

Sensitivity Testing: "Exercising" the Model[18]

Test Each Parameter Separately: Describe the results of systematically varying the values of
individual parameters, over the entire range of possible values.[19] Additional graphs may be
useful here. Repeat this with each key parameter.[20] Is one parameter more influential than you
expected? Or less?

Figure X. Enter caption here.

Testing Multiple Parameters (optional): You might also want to study what happens when you
vary more than one parameter at a time. If so, explain why you chose the specific combinations.
Why these values? Do you need to test all of them?[21] Are some combinations more likely to
occur than others? Are some more likely to provide meaningful information?
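A one-parameter sweep of this kind can be sketched in a few lines of Python (a toy logistic
"population" model; the function, parameter names, and values are invented for illustration, not
part of the guideline): the growth-rate parameter is varied from very low to very high while
everything else is held fixed.

```python
def run(growth_rate, capacity=1000.0, s0=100.0, dt=0.25, t_end=20.0):
    """Euler-integrate a toy logistic stock-and-flow model; return final stock."""
    s, t = s0, 0.0
    while t < t_end - 1e-9:
        s += growth_rate * s * (1 - s / capacity) * dt
        t += dt
    return s

# Examine low, high, and in-between values, as the guideline suggests.
for r in [0.05, 0.1, 0.2, 0.4, 0.8, 1.6]:
    print(f"growth_rate={r:<5} final population={run(r):8.1f}")
```

Plotting the resulting curves together (one per parameter value) gives the "family of curves"
view that makes a parameter's influence easy to see at a glance.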

[17] Keep in mind that many subject experts are not familiar with System Dynamics modeling, so you may have to
help them interpret what you've done.

[18] This is an aspect of modeling that is very often under-appreciated. It is tempting to begin enhancing your
model before developing a full understanding of the core model, but this would be a big mistake.

[19] You will have to decide how small or large the intervals might be from one simulation to the next, but you
should examine very low values, very high values, and enough values in between (perhaps 6-10) to get a good feel
for the influence of each key parameter.

[20] Note that if you have four key parameters, this may take forty to fifty simulation runs! It might be useful to
use the Sensitivity Specs feature to make this process more efficient. Also, it is often informative to plot each
parameter as a family of curves.

[21] Note that if you have two critical parameters, and you test all combinations of 6 values of each parameter, you
will require 36 simulations. Three parameters tested in this way will require 216 (6^3) simulations. Four
parameters will require 1296 (6^4) simulations. Thus, it is usually not feasible to test all possible combinations!
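The footnote's arithmetic can be checked directly; this small sketch just counts the
full-factorial combinations with the standard library.

```python
from itertools import product

VALUES_PER_PARAM = 6
for n_params in (2, 3, 4):
    runs = sum(1 for _ in product(range(VALUES_PER_PARAM), repeat=n_params))
    print(f"{n_params} parameters -> {runs} simulations")
# 2 parameters -> 36, 3 -> 216, 4 -> 1296
```

The exponential growth in run count is exactly why one-parameter-at-a-time sweeps, plus a few
carefully chosen combinations, are the practical approach for student projects.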


Test Some More? Or Begin Enhancing the Model? Before enhancing a model, it is important to
first make sure that the core model works properly, and is well understood. Nevertheless, one's
interests quickly expand beyond the core model. Even the simplest model will suggest four or
five obvious enhancements. It is tempting to make these enhancements as soon as they are
conceived in order to see what happens.

However, it is very important to properly test the core model before adding "bells and whistles."

Model Enhancements (optional)

Description of Enhancements: Describe your attempts to enhance the core model, whether these
attempts were successful or not.[22] For each modification:
• What were you trying to accomplish?[23]
• Why did this seem to be a reasonable next step?

Results: Describe your results. As appropriate, use formats, subheadings, and figures similar to
the earlier sections. When the work is incomplete, be sure to say so.[24] Describe the next
step(s) you would take if more time were available.

Problems: Discuss problems with the attempted enhancements, and your ideas regarding what
might be done to rectify these problems.

Conclusions and Future Plans

• What can you conclude from the results of your modeling activity?
• Is the final model appropriate for your intended audience?
• Can you now answer the question(s) you were originally trying to answer?
• Were your conclusions what you expected?
• How would you change the model to test other ideas?
• How would you change the model to illustrate similar ideas to a different audience?

[22] In contrast to the initial core model, it is not a requirement that the enhancements be as well-tested. In fact,
you may run out of time and they may not work at all.

[23] Describe the logic behind any potential additions, and how these additions contribute to the original purpose of
the model. You might build several new diagrams, each illustrating one new feature, followed by a final diagram
including all the new features. Or, you might build one (slightly) expanded model, and proceed through all the
disciplined steps of listing assumptions, sensitivity testing, etc.

[24] The standards for this section are less rigorous than for the core model described earlier (although eventually
all these steps would need to be carried out before the enhanced model could be used for anything).


Source Materials

References: List all references, in correct bibliographic format. (Select an appropriate format
from any reference manual.)

Consultants: List the people you talked to, in person or over the phone.

Acknowledgements: Acknowledge any sources for modeling ideas not covered by the
references.

Appendices

Attach a listing of the final model equations, DOCUMENTED (mandatory!) using STELLA's documentation feature.

Appendix Z: Judging Criteria

[The scoring criteria, see Attachment A, were provided here for the teachers and students]


Attachment C: Document Provided to Judges

SymBowl Judges & Organizers

Judges | Work Phone | Home Phone | Address | E-mail & FAX #
Judge 1 | | | |
Judge 2 | | | |
Etc. | | | |

Organizers | Work Phone | Home Phone | Address | E-mail & FAX #
Organizer 1 | | | |
Organizer 2 | | | |
Etc. | | | |

Schedule
Collects and Photocopies Papers (Friday, 4/17/98)
• As papers are obtained, they are assigned a "team number."
• For a given school, team numbers are assigned at random (1 to 20, recording which numbers
have been used) to assure that teams from a given school will be scattered around the room.
• Before copying the papers, team numbers are written at the top of the first page of each paper.
• Make at least 10 copies (6 judges + organizers + extras).

FedEx copies to out-of-town judges (Friday, 4/17/98, 5:00PM)
[address information here]
• NOTE: please check the "Priority" box, the "Saturday" box, and the "release" box at the bottom,
so they can leave the package.

Initial Meeting of Judging Panel (Friday, 4/17/98, 5:30PM at ___)
• [how to get to the location goes here]
• ___ will bring the copies of the papers.
• Judges pick up their copies of the papers.
• Judges confirm the process, judging criteria and weights, etc.

Preliminary Scoring of Report & Model Completed (by Wed., 4/22/98, 7PM)
• Judges finish reviewing and scoring the papers.
• Results are emailed or telephoned [or FAXed?] to the judging coordinator (please use the
judging form provided, for consistency).
• As judges review the papers, they are to make notes for the teams they are scheduled to visit
during the event, to give later to the students.

Compilation of Scores (Wed., 4/22/98, 10PM)
• Judging coordinator emails the results, with any significant outliers highlighted.

Reconciliation of any Major Discrepancies (Thursday, 4/23/98)
• Discrepancies could be the result of errors in compilation or different interpretations by the
judges.
• Judges will call or email the judging coordinator regarding any errors or changes.
• Judging coordinator will update the scoring spreadsheet to be used during the event, as
required.


SymBowl (Friday, 4/24/98)
• 8:00AM: Students set up their displays at the proper station for their team number; judges
meet briefly to confirm final scores for the papers/models.
• 8:45AM: Welcome; introduce judges.
• 9:00AM: Judges evaluate assigned booths, according to the schedule below.
• 12:00PM: Judges meet to identify finalists (and eat box lunch).
• 12:45PM: Finalists announced. Finalists get their presentations ready.
• 1:15-1:30PM: Reconvene in auditorium; introduction, ground rules.
• 1:30-2:45PM: Presentations by the finalists (10 min. each + 5 min. for questions); judges
evaluate; audience is free to ask questions.
• 2:45-3:15PM: Break; judges determine final order of finish.
• 3:15-3:45PM: Presentation of awards.
• 3:45-4:15PM: Pictures of winning teams and teachers.
• 4:30PM: Clear the area.

Feedback for Teams Submitted by Judges (Friday, 5/1/98)
• Using the notes they made as they read the papers, plus any notes from the poster session and
presentations, judges will prepare a few typewritten paragraphs of feedback for the teams they
visited the day of the event. This feedback will discuss what the team did well and what the
team could improve, with respect to both the paper and the model.
• Judges mail, FAX, or email this feedback to ___.
• ___ makes sure this feedback gets to the teacher advisors, so that this written feedback gets to
the students on the teams.

[Room layout diagram: entrance at the top; poster stations 1-12 in Room 221A and stations
13-20 in Room 210, off the foyer and adjacent to the auditorium.]


Here is the initial plan for which judge sees which team, in 15-minute intervals. This gives the
teams 45 minutes between visits, and assures that visits at the same time do not occur at
adjacent tables. We will modify this as required at our initial meeting to balance the "reading"
load.

Team | Assigned judges (three visits each, between 9:00 and 11:45)
Team 1 | 1, 2, 5
Team 2 | 4, 6, 3
Team 3 | 6, 1, 2
Team 4 | 2, 5, 1
Team 5 | 3, 4, 6
Team 6 | 6, 3, 2
Team 7 | 3, 1, 4
Team 8 | 1, 2, 3
Team 9 | 5, 6, 3
Team 10 | 4, 6, 5
Team 11 | 5, 4, 6
Team 12 | 4, 5, 1
Team 13 | 4, 6, 2
Team 14 | 1, 2, 4
Team 15 | 3, 4, 1
Team 16 | 6, 3, 2
Team 17 | 5, 3, 6
Team 18 | 2, 5, 1
Team 19 | 1, 2, 4
Team 20 | 2, 5, 6
Overall Scoring:

Report | Model (based on report) | Poster Session | Presentation (finalists)
20% | 40% | 20% | 20%

Scores for each “aspect” (report, model, poster session, presentation):

5 Outstanding

4 Superior/Very Good

3 Good

2 Satisfactory

1 Marginal

* Scores for each aspect are computed by averaging the judges' scores for that aspect.
* Finalists are determined by the team's overall score for report, model, and poster session
(computed as a weighted average of the scores on each aspect, as indicated above)
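In code, the two rules above amount to the following. This is an illustrative sketch only: the 20/40/20/20 weights come from the table above, but the function and variable names are invented, not part of the original SymBowl materials.

```python
# Sketch of the scoring rules (weights from the Overall Scoring table above;
# names are invented for illustration).

WEIGHTS = {"report": 0.20, "model": 0.40, "poster": 0.20, "presentation": 0.20}

def aspect_average(judge_scores):
    """Score for one aspect: the average of the judges' 1-5 scores."""
    return sum(judge_scores) / len(judge_scores)

def overall_score(aspect_scores):
    """Weighted average over whichever aspects have been scored so far."""
    total_weight = sum(WEIGHTS[a] for a in aspect_scores)
    weighted_sum = sum(WEIGHTS[a] * s for a, s in aspect_scores.items())
    return weighted_sum / total_weight

# Finalists are chosen before the presentations, so only three aspects
# enter the preliminary overall score:
prelim = overall_score({"report": 3.7, "model": 3.8, "poster": 3.67})
```

Renormalizing by the weights actually present lets the same function rank teams before and after the presentation scores arrive.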

Scoring Criteria

[The scoring criteria, see Appendix A, were provided here for the judges]


Attachment D: Judge Response Form

Scores

Team | Paper | Model

[The form continues as a blank grid: one numbered row per team, with empty Paper and Model score columns.]

Attachment E: Scoring Spreadsheets
Spreadsheet used to consolidate preliminary scores from judges.
| Paper | | Model | | Combined |
Team | 1 2 3 4 5 6 | avg. | 1 2 3 4 5 6 | avg. | wgtd. | rank
| 3.00 4.50 2.00 | 3.2 | 2.27 2.40 3.00 | 2.6 | |
| 3.00 3.40 2.00 | 2.8 | 2.50 3.40 3.00 | 3.0 | 8.7 |
| 2.00 2.50 3.60 | 2.7 | 2.50 1.75 2.73 | 2.3 | 7.4 |
| 1.00 1.00 1.50 | 1.2 | 1.20 1.00 1.50 | 1.2 | 3.6 |
| 3.00 4.00 3.00 4.00 3.90 3.00 | 3.5 | 2.50 3.00 2.00 2.25 3.53 3.00 | 2.7 | 8.9 |
| 3.25 4.00 3.88 3.50 3.60 4.00 | 3.7 | 3.20 4.00 3.27 3.25 3.87 5.00 | 3.8 | 11.2 | 4
| 3.50 3.50 3.88 3.25 3.70 4.00 | 3.6 | 4.25 3.50 4.09 4.00 3.67 3.00 | 3.8 | 11.1 |
| 2.75 3.00 3.75 3.75 3.30 2.00 | 3.1 | 2.10 3.50 3.18 2.75 3.47 2.00 | 2.8 | 8.8 |
| 2.50 2.50 3.10 | 2.7 | 2.10 3.50 2.67 | 2.8 | 8.2 |
| 3.00 3.40 3.00 | 3.1 | 2.00 3.00 3.00 | 2.7 | 8.5 |
| 3.00 3.00 4.50 4.25 4.00 3.00 | 3.6 | 3.10 5.00 3.70 4.00 3.73 3.00 | 3.8 | 11.1 | 5
| 1.75 2.00 4.00 | 2.6 | 1.20 1.50 2.00 | 1.6 | 5.7 |
| 4.25 4.00 2.75 3.00 3.60 4.00 | 3.6 | 3.30 5.00 3.45 3.25 3.80 5.00 | 4.0 | 11.5 | 2
| 2.00 2.00 2.00 | 2.0 | 1.70 1.50 3.00 | 2.1 | 6.1 |
| 2.50 2.00 3.00 | 2.5 | 1.09 1.00 2.67 | 1.6 | 5.7 |
| 3.50 5.00 4.75 5.00 4.40 4.00 | 4.4 | 4.10 5.00 4.81 4.75 4.33 5.00 | 4.7 | 13.8 | 1
| 3.50 4.50 3.60 4.00 3.60 4.00 | 3.9 | 3.30 4.50 3.00 4.00 3.40 4.00 | 3.7 | 11.3 | 3
| 3.00 3.00 2.75 2.50 3.50 3.00 | 3.0 | 2.40 3.50 2.73 2.25 3.47 3.00 | 2.9 | 8.7 |
| 2.50 4.00 3.88 3.75 3.30 2.00 | 3.2 | 2.25 3.00 3.09 2.25 3.33 2.00 | 2.7 | 8.5 |
| 2.00 3.10 3.00 | 2.7 | 1.00 2.73 3.00 | 2.2 | 7.2 |
| 2.75 3.50 3.50 3.75 3.00 4.00 | 3.4 | 3.10 3.00 2.90 2.75 3.00 3.00 | 3.0 | 9.3 | 8
| 3.75 2.75 2.50 | 3.0 | 2.50 1.63 1.75 | 2.0 | 6.9 |
| 3.75 4.00 3.13 3.00 3.20 4.00 | 3.5 | 4.25 3.50 2.70 3.00 3.60 4.00 | 3.5 | 10.5 | 7
avg. | 2.9 3.4 3.3 3.1 3.5 3.2 | 3.1 | 2.8 3.4 2.9 2.6 3.3 3.3 | 2.8 | 8.7 |

Note: the team numbers have been randomized and resorted to preserve anonymity.
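The arithmetic behind the "avg." and "wgtd." columns appears to be: average each aspect's judge scores, then compute paper average + 2 × model average, i.e. the 20%/40% weights scaled by a factor of five (e.g. 4.4 + 2 × 4.7 = 13.8 for the rank-1 row). A minimal sketch under that assumption, with invented names:

```python
# Sketch of the consolidation step (assumption inferred from the table above:
# the "wgtd." column equals paper_avg + 2*model_avg, the 20%/40% weights x5).

def consolidate(paper_scores, model_scores):
    """Return (paper avg, model avg, weighted total) for one team."""
    paper_avg = sum(paper_scores) / len(paper_scores)
    model_avg = sum(model_scores) / len(model_scores)
    return paper_avg, model_avg, paper_avg + 2 * model_avg

def preliminary_ranking(weighted_totals):
    """Indices of teams sorted by weighted total, best first."""
    return sorted(range(len(weighted_totals)),
                  key=weighted_totals.__getitem__, reverse=True)
```

Because only two aspects exist at this stage, scaling the weights by five keeps the totals on an intuitive 0-15 range without changing the ranking.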

Spreadsheet used for scoring during the event.
| | | | | Poster Session | | | New Totals | | Presentation | | | Final
Team | papr avg. | modl avg. | wgtd total | prel. rank | 1 2 3 | total | avg. | score | rank | 1 2 3 4 5 | total | avg. | score
| 3.17 | 2.56 | 8.28 | | 4.5 3.5 2.5 | 10.5 | 3.5 | 11.8 |
| 2.80 | 2.97 | 8.73 | | 2 3.5 4 | 9.5 | 3.17 | 11.9 |
| 2.70 | 2.33 | 7.35 | | 2.75 3 4.5 | 10.3 | 3.42 | 10.8 |
| 1.17 | 1.23 | 3.63 | | 2.5 1 3 | 6.5 | 2.17 | 5.8 |
| 3.48 | 2.71 | 8.91 | | 2.5 4 4.5 | 11 | 3.67 | 12.6 |
| 3.70 | 3.77 | 11.23 | 4 | 2.5 4 4.5 | 11 | 3.67 | 14.9 | 5
| 3.64 | 3.75 | 11.14 | 5 | 3.75 5 4.75 | 13.5 | 4.5 | 15.6 | 2 | 4 3.5 2.5 3.75 3.5 | 20.3 | 3.4 | 19.0
| 3.09 | 2.83 | 8.76 | | 4 5 5 | 14 | 4.67 | 13.4 |
| 2.70 | 2.76 | 8.21 | | 3 4.5 4.5 | 12 | 4 | 12.2 |
| 3.13 | 2.67 | 8.47 | | 4 | 13 | 4.33 | 12.8 |
| 3.63 | 3.76 | 11.14 | 5 | 4.5 3.5 5 | 13 | 4.33 | 15.5 | 4 | 3 3.5 2.5 3.5 3 | 18.5 | 3.1 | 18.6
| 2.58 | 1.57 | 5.72 | | | 0 | 0 | 5.7 |
| 3.60 | 3.97 | 11.53 | 2 | 3 2.5 3 | 8.5 | 2.83 | 14.4 | 7
| 2.00 | 2.07 | 6.13 | | 5 2.25 4 | 11.3 | 3.75 | 9.9 |
| 2.50 | 1.59 | 5.67 | | 1.75 2.25 3.5 | 7.5 | 2.5 | 8.2 |
| 4.44 | 4.67 | 13.77 | 1 | 4.5 5 5 | 14.5 | 4.83 | 18.6 | 1 | 4 3.5 3.25 4.75 3.5 | 22 | 3.7 | 22.3
| 3.87 | 3.70 | 11.27 | 3 | 4 3.75 5 | 12.8 | 4.25 | 15.5 | 3 | 3.75 3 3.75 4.25 3.5 | 22.3 | 3.7 | 19.2
| 2.96 | 2.89 | 8.74 | | 3.5 3.75 3.75 | 11 | 3.67 | 12.4 |
| 3.24 | 2.65 | 8.54 | | 3.5 3 4.6 | 11.1 | 3.7 | 12.2 |
| 2.70 | 2.24 | 7.19 | | 3 5 4 | 12 | 4 | 11.2 |
| 3.42 | 2.96 | 9.33 | 8 | 3.25 2 3.25 | 8.5 | 2.83 | 12.2 |
| 3.00 | 1.96 | 6.92 | | 2.75 4.5 5 | 12.3 | 4.08 | 11.0 |
| 3.51 | 3.51 | 10.53 | 7 | 4.5 4.25 4 | 12.8 | 4.25 | 14.8 | 5 | 3.5 4 4.25 4.7 4.5 | 25.3 | 4.2 | 19.0

Note: the team numbers have been randomized and resorted to preserve anonymity.
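The during-event updates in the spreadsheet above appear to be additive on the same scale: the poster-session judge average is added to the preliminary weighted total, and for finalists the presentation average is added to that (e.g. 13.77 + 4.83 ≈ 18.6 for the top team). A sketch under that assumption, with invented names:

```python
# Sketch of the during-event updates (assumption: each later aspect carries a
# 20% weight on the same x5 scale, so its judge average is added on once).

def add_poster(prelim_weighted, poster_scores):
    """New total after the poster session."""
    return prelim_weighted + sum(poster_scores) / len(poster_scores)

def add_presentation(new_total, presentation_scores):
    """Final score for a finalist after the presentations."""
    return new_total + sum(presentation_scores) / len(presentation_scores)
```

Keeping the updates additive meant judges could re-rank teams during the event with one spreadsheet column per new aspect, rather than recomputing every weighted average.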

