[to be published in Interactive Learning Environments]

Making Dynamic Modeling Accessible to Pre-College Science Students

Shari L. Jackson, Steven J. Stratford, Joseph Krajcik, Elliot Soloway

University of Michigan 1101 Beal Ave. Ann Arbor, MI 48109-2110

{sjackson, sstrat, krajcik, soloway}@umich.edu

Abstract

Dynamic modeling, for many pre-college science students, is an out-of-reach cognitive activity: professional tools are too hard for novices, user-unfriendly, and provide no support for learners. We have designed a new modeling tool, Model-It, that provides intentionally designed scaffolding for learners, enabling them to build and test dynamic models of complex systems easily, using object-oriented and qualitative techniques. Model-It's scaffolding strategies are intended to ground the learner in prior knowledge and experience, bridge the learner from novice to expert understandings and practices, and couple the learner's mental model with testing actions and model feedback. Our user and classroom testing shows evidence that these scaffolding strategies support learners' active construction of knowledge and that students can create meaningful models.

Introduction

"It makes you think more about a real-life situation, where there's no real answer, you set it up and everything."

This statement by a 9th-grade science student about Model-It speaks volumes about why constructing models in an interactive learning environment can be valuable. "It makes you think"–if learning is active construction of knowledge, with the emphasis on `active,' then this student is an active learner. The thinking is directed toward "a real-life situation"– an authentic context where the learning has meaning. "There's no real answer"–she realizes that any answer she finds will be tentative and that the process of generating an answer will be one of inquiry and investigation. "You set it up and everything"–by emphasizing the word `you,' she indicates that the modeling problem is personally meaningful and valuable and recognizes that "set[ting] it up and everything" will require her active participation.

Scientists build models to test theories and to improve their understanding about complex systems. In this exploratory, speculative style of modeling,

Simulation is used at a prototheoretical stage, as a vehicle for thought experiments. The purpose of a model lies in the act of its construction and exploration, and in the resultant, improved intuition about the system's behavior, essential aspects and sensitivities. (Kreutzer, 1986)

As our opening quote illustrates, students, too, can benefit from building models in order to develop their own understanding of natural phenomena. We should encourage theory-building and experimentation, since both are important activities of science (Tinker, 1990). Building models gives students opportunities to use their existing knowledge, to perform "thought experiments," and to gain insight into the behavior of complex systems.

The problem is that modeling as it is currently practiced is very hard for students to do – it requires a great deal of prior knowledge and mathematical ability. However, by redefining the modeling task and providing appropriate support, modeling can be made accessible to high school science students. For example, the recent Project 2061 curriculum reforms suggest a high-level, qualitative approach to modeling:

In modeling phenomena, students should encounter a variety of common kinds of relationships depicted in graphs (direct proportions, inverses, accelerating and saturating curves, and maximums and minimums) and therefore develop the habit of entertaining these possibilities when considering how two quantities might be related. None of these terms need be used at first, however. `It is biggest here and less on either side' or `It keeps getting bigger, but not as quickly as before' are perfectly acceptable - especially when phenomena that behave like this can be described (American Association for the Advancement of Science, 1993).

The challenge in making modeling accessible in the pre-college science classroom is to create a modeling environment which requires minimal prior knowledge from other domains, which incorporates advanced interface design, and which not only enables rapid generation of simple models, but facilitates the learner's transition toward more expert-like modeling practices. We have designed a constructivist interactive learning environment, Model-It, that provides methods of qualitative expression and that utilizes learner-centered design techniques to attempt to meet the challenge. In this paper, we discuss the theory, strategies, and techniques that were applied in its design and implementation, and present results from our first year of classroom testing.

Enabling Student Access to Modeling

Barriers to Accessibility

Several researchers have introduced computer-based modeling into high school and middle school classrooms (Mandinach & Thorpe, 1988; Mandinach & Cline, 1992; Mandinach & Cline, 1994). However, students found the modeling process extremely difficult. In other studies, students had to learn to program in order to create working models. Some studies have found that students don't have the requisite mathematical knowledge for creating rigorous quantitative models, or for properly interpreting the graphical output of models (Roberts, 1985; Feurzeig, 1992). In these cases modeling was out of reach mainly because the cognitive load was simply too great; students lacked the requisite prior knowledge in mathematical and programming domains.

Sometimes modeling has been inaccessible because the software environment in which the modeling activities occurred was not designed to be user-friendly (for example, models were constructed with a command-line interface), or the modeling environment incorporated large numbers of expert level modeling functions. These kinds of environments were designed for experts, to enable the construction of sophisticated, mathematically precise models; learners, however, may be overwhelmed or discouraged by confusing interfaces and too many options.

Modeling in Professional Practice

Recent literature on modeling and simulation provides some new ways of thinking about modeling that we believe are especially beneficial for learners. In particular, there is a growing interest by the scientific community in the application of object-oriented languages for modeling and simulation, and the application of qualitative techniques for knowledge representation. We will discuss the advantages of these two techniques with respect to the design of modeling environments and with respect to student learning.

Object-Oriented Modeling

Object-oriented programming languages are particularly appropriate for the design and implementation of computer-based models, since they offer a natural mapping to the phenomena being modeled (Kreutzer, 1986). For example, objects can be used to represent each of the interacting populations of an ecosystem (Saarenmaa, Stone, Folse, Packard, Grant, Maleka, & Coulson, 1988). Providing students with an object-oriented framework for modeling allows them to think about the phenomena that they are modeling in a more natural way, by matching the interacting objects that they can see in the world with what they see in the modeling environment, instead of having to translate those objects into abstract representations.

An object-oriented approach also enables a simpler mechanism for model construction. Students simply specify pair-wise relationships between variables, and the underlying simulation engine handles the complexity of combining multiple impacts on the same variable (see Appendix). So not only does the object-oriented technique make sense from the programmer's point of view, it also provides the learner with a straightforward mechanism for dealing with what would otherwise be a difficult, if not intractable, mathematical modeling problem.
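To illustrate the mechanism, the following sketch shows how a simulation engine might combine several pairwise impacts on a single factor by averaging the target value each relationship proposes. This is a minimal sketch with hypothetical names, written under an assumed averaging rule; the actual mathematics used by Model-It is given in the Appendix.

class Factor:
    def __init__(self, name, value):
        self.name = name
        self.value = value      # current value, e.g. on a 0-100 scale
        self.impacts = []       # incoming pairwise relationships

def step(factor):
    """One simulation step: each relationship proposes a target value
    for the factor, and the engine averages the proposals."""
    if factor.impacts:
        targets = [impact() for impact in factor.impacts]
        factor.value = sum(targets) / len(targets)

quality = Factor("stream quality", 80.0)
phosphate = Factor("stream phosphate", 20.0)
temperature = Factor("stream temperature", 60.0)

# Two pair-wise relationships, each defined independently:
quality.impacts.append(lambda: 100 - phosphate.value)          # more phosphate, lower quality
quality.impacts.append(lambda: 100 - 0.5 * temperature.value)  # warmer water, lower quality

step(quality)
print(quality.value)  # 75.0: both impacts combined by the engine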

Qualitative Modeling

Models, especially computer-based models, are typically based on mathematical equations, so that in order to build a model it is first necessary to derive the equations that represent its behavior. Recently, however, much modeling and simulation research has explored applications for qualitative modeling (e.g., Cochran & Paul, 1990; Green, 1990; Guerrin, 1991; Salski, 1992).

Scientists often think qualitatively about a model before quantifying the relationships (White & Fredericksen, 1990), and when models are used to speculate or gain insight about a system, no numerical results may ever be generated (Hamming, 1962; Kreutzer, 1986). Similarly, providing qualitative representations for causal relationships allows students to focus and reflect on these relationships at the conceptual level, instead of on a level that requires a great deal of technical or mathematical knowledge.

Qualitative modeling is particularly useful for ecological modeling, since ecological systems are often too complex or insufficiently investigated to permit formal numerical reasoning (Karplus, 1983). For example, LARKS, a fuzzy knowledge-based system for ecological research, allowed the definition of linguistic rules based on the natural language that ecologists typically use to describe their knowledge about ecosystems, without requiring precise data (Salski, 1992). One such rule:

IF "vegetation-height" is "low" and
"population of larks" is "very high" and
"vegetation density" is "minimally smaller than standard"
THEN "number of territories" is "high"

Students can similarly benefit from qualitative modeling, because classroom-based projects often aim at high-level conceptual knowledge of complex domains (e.g., life sciences, ecosystems) for which students may lack precise or complete data.
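To make the idea concrete, here is a minimal sketch of how such a linguistic rule might be encoded and evaluated. The encoding is ours, not LARKS's implementation: it matches linguistic labels crisply, whereas a true fuzzy system would combine degrees of membership.

# A crisp simplification of a LARKS-style linguistic rule (hypothetical
# encoding; a real fuzzy system would match degrees of membership
# rather than exact labels).

observations = {
    "vegetation-height": "low",
    "population of larks": "very high",
    "vegetation density": "minimally smaller than standard",
}

rules = [
    # (conditions, conclusion)
    ({"vegetation-height": "low",
      "population of larks": "very high",
      "vegetation density": "minimally smaller than standard"},
     ("number of territories", "high")),
]

def infer(observations, rules):
    """Fire every rule whose linguistic conditions all hold."""
    return [conclusion
            for conditions, conclusion in rules
            if all(observations.get(k) == v for k, v in conditions.items())]

print(infer(observations, rules))  # [('number of territories', 'high')]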

Learning Theory and Modeling

Current educational theories emphasize the "active, reflective and social nature of learning" (Brown & Campione, in press). Learners have come to be viewed as active constructors of knowledge rather than passive receivers of transmitted information. We also now recognize that constructing understanding is not an isolated activity but occurs within the framework of a learning community. In this model of teaching and learning, teachers attempt to provide engaging and motivating learning opportunities for students (Blumenfeld, Soloway, Marx, Krajcik, Guzdial, & Palincsar, 1991). Such opportunities are fostered by situating learning in an authentic, real-world context (Brown, Collins, & Duguid, 1989; Cognition and Technology Group at Vanderbilt, 1990) and by ensuring that the activities are non-trivial and personally meaningful for the learner.

Computer applications associated with constructivist learning are typically designed as "interactive learning environments" that emphasize student-directed learning activities, not computer-driven tutoring. Such environments may function as cognitive tools, amplifying and extending the cognitive abilities of learners (Salomon, 1990) and scaffolding them in the creation of artifacts of learning (Wisnudel, Stratford, Jackson, Krajcik, & Soloway, in press). In addition, such environments may foster the development of communities of learners (Brown & Campione, 1994; Cognition and Technology Group at Vanderbilt, 1994) because they promote critical thinking and reflection skills.

We apply these theoretical tenets to the task of model construction as an opportunity to understand complex systems. When students build and test models of familiar natural phenomena, using supportive cognitive tools, in authentic and interesting contexts, with the support of their peers, they may be able to test and refine their mental representations and understandings of that system. What follows, then, is a description and rationale for strategies incorporated into Model-It, strategies intended to support model building and testing processes.

 

Scaffolding Strategies to Support Modeling

In designing software for education, we are mindfully designing for learners. In the Highly Interactive Computing (HI-C) group at the University of Michigan, we have formulated a rationale for learner-centered design (LCD) (Soloway, Guzdial, & Hay, 1994; Jackson, Stratford, Krajcik, & Soloway, 1995; Soloway, Jackson, Klein, Quintana, Reed, Spitulnik, Stratford, Studer, Jul, Eng, & Scala, to appear). Learners are also users, so the principles of user-centered design certainly apply (Norman & Draper, 1986). [1] However, user-centered design guidelines are not sufficient to address certain unique needs of learners, such as intellectual growth, diversity of learning styles, and motivational needs. For example, learners should have software available to them that represents information in a familiar way, but that also helps introduce them to more professional or symbolic representations. By designing software to support learners' growth from apprenticeship towards mastery, we create an environment that is learner-friendly.

The central claim of LCD is that software can incorporate learning supports - scaffolding - to address the learner's needs. Scaffolding is important because it enables the learner to achieve goals or accomplish processes that would otherwise be out of reach (Vygotsky, 1978; Wood, Bruner, & Ross, 1975). Vygotsky said that these goals or processes lie in the learner's zone of proximal development. This concept is variously expressed as enabling the learner to engage in out-of-reach activities; having a "knowledgeable other" or "more capable peer" to bring the learner along; or having something or someone "share the cognitive load." The choice of goal and process is equally important: we want to scaffold tasks, such as modeling, that are rich learning experiences. The environment in which the learning takes place, such as that provided by a computer program and/or a classroom, also contributes to the scaffolding and to the value, or lack thereof, of the learning experience.

Scaffolding strategies can be implemented in many ways. Software-realized scaffolding strategies for supporting programming include coaching, communicating process, and eliciting articulation (Guzdial, 1993; 1995). In a recent paper, we described a general framework for software-realized scaffolding (Soloway, Jackson, Klein, Quintana, Reed, Spitulnik, Stratford, Studer, Jul, Eng & Scala, to appear). In this article, however, we will focus on the following three scaffolding strategies that we have identified as particularly appropriate for supporting model building and testing:

  • Grounding in Experience and Prior Knowledge
  • Bridging Representations
  • Coupling Actions, Effects, and Understanding

Implementing Scaffolding Strategies in Model-It

In this section, we describe the Model-It program and explain how each of the scaffolding strategies was implemented in Model-It (see Table 1).

 
 

Scaffolding Strategy – Model-It Implementation

Grounding in Experience and Prior Knowledge
  • Pre-defined high-level objects
  • Digitized, personalized photographs and graphics
  • Qualitative, verbal representation of relationships

Bridging Representations
  • Textual to graphical representations of relationships
  • Qualitative to quantitative definition of relationships
  • Concrete to abstract representations of the model

Coupling Actions, Effects, and Understanding
  • Direct manipulation of factor values while a simulation is running
  • Immediate, visual feedback of the effect of the user's changes in factor values

Table 1: Scaffolding strategies and their implementation in Model-It

 

Grounding in Experience and Prior Knowledge

The primary task of creating a model is to recreate the phenomenon (or some part of it) in such a way that the structure and behavior of the model reflect the phenomenon itself. The objects and relationships that the learner sees and experiences in the world must somehow be re-represented within the modeling environment. To assist the learner in making the transition from what she already knows of the world over to computerized model representations, Model-It provides a set of pre-defined high-level objects (e.g., stream, macroinvertebrate population, golf course) with which she can build a model [2]. These "physical" objects provide a close conceptual match with the learner's knowledge representation of the domain, in contrast to an expert's knowledge representation, which might consist of domain-independent input, output, function, and state primitives. In effect, the modeling environment situates the model in the prior knowledge and experience of the learner.

Objects are represented visually with digitized photographs and graphics. Figure 1 shows the Simulation Window of Model-It, with the stream object already in place, and below, a palette of other objects that can be added to the model. Students may also create objects and paste in their own pictures to represent those objects. The stream is represented by a photograph of the actual stream the students measured, collected macroinvertebrates from, and got their feet wet in. This personalized representation may help to create an authentic context through which the activity has meaning.

Figure 1: Simulation Window

 

To add objects to a model, students select objects from the object palette (at the bottom of Figure 1 above). Factors of those objects (measurable quantities or values associated with the objects) can then be defined: for example, the total phosphates measured in a stream, or the count of a population of macroinvertebrates. Figure 2 shows the Object Editor for the stream object, and the Factor Factory where the stream's phosphate factor is being defined.

Figure 2a: Object Editor

 

Figure 2b: Factor Factory
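As a rough illustration of the building blocks the Object Editor and Factor Factory manipulate, the sketch below models an object as a named entity holding measurable factors. The class and attribute names are hypothetical; Model-It's internal design may differ.

class ModelObject:
    """A high-level object (e.g. a stream or a macroinvertebrate
    population) holding named, measurable factors."""
    def __init__(self, name, picture=None):
        self.name = name
        self.picture = picture   # digitized photo shown in the Simulation Window
        self.factors = {}

    def add_factor(self, name, initial, lo=0.0, hi=100.0):
        self.factors[name] = {"value": initial, "range": (lo, hi)}

stream = ModelObject("stream", picture="stream_site.jpg")
stream.add_factor("phosphate", initial=20.0)  # total phosphates measured in the stream
stream.add_factor("quality", initial=80.0)    # overall stream quality, 0-100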

 

Next, the student can define relationships between the factors (to show how the value of one factor affects the value of another). Model-It supports a qualitative, verbal representation of relationships, rather than requiring formal mathematical expressions. Students can define a relationship simply by selecting descriptors in a sentence, e.g., "As stream phosphate increases, stream quality decreases by less and less" (Figure 3) [3]. This is another example of grounding, on a conceptual basis–learners create relationships simply by re-representing them on the screen as English-like sentences (presumably the language of their prior knowledge and experience). This scaffolding is important for learners because their knowledge structures and skills don't initially include the same quantitative command of the concepts that experts would have. As students discuss the best representation for a relationship (e.g., by gathering data, consulting experts, finding reference resources, etc.), they may construct more sophisticated understandings of that relationship.

Figure 3: Qualitative relationship definition: Text View
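One way to picture how such a sentence becomes model behavior: each qualitative descriptor can be mapped to a curve shape over the factor's normalized range. The mapping below is a sketch with illustrative curve choices of ours; Model-It's exact curves are defined in the Appendix.

import math

# Hypothetical mapping from qualitative descriptors to curve shapes
# over a normalized 0-1 range (illustrative choices only).
DESCRIPTORS = {
    "increases at about the same":  lambda x: x,                # linear
    "increases by more and more":   lambda x: x * x,            # accelerating
    "decreases by less and less":   lambda x: 1 - math.sqrt(x), # saturating decrease
}

def relate(descriptor, source_value, lo=0.0, hi=100.0):
    """Map a source factor's value through the named curve to a
    proposed target value on the same 0-100 scale."""
    x = (source_value - lo) / (hi - lo)
    return lo + (hi - lo) * DESCRIPTORS[descriptor](x)

# "As stream phosphate increases, stream quality decreases by less and less"
print(relate("decreases by less and less", 25.0))  # 50.0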

 

Model-It also supports a qualitative definition of rate relationships, in which one factor sets the rate of change of another factor over time (Figure 4). (The appendix provides a detailed description of the mathematics behind both types of relationships.)

Figure 4: Qualitative relationship definition of Rate Relationships
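A rate relationship can be read as "at each time step, add the rate factor to the accumulating factor," which amounts to a simple discrete integration. The sketch below is a minimal illustration under that reading; the Appendix gives the precise update rule.

growth_rate = 5.0   # e.g. mayfly population growth per time step
count = 100.0       # e.g. mayfly count

for t in range(10):         # run ten simulation steps
    count += growth_rate    # the rate factor accumulates over time

print(count)  # 150.0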

 

Bridging Representations

Model-It provides simultaneous, linked textual and graphical representations of relationships. Given a qualitative, textual definition, the software translates the text into a quantitative, visual representation; e.g., "decreases by less and less" is interpreted as shown by the graph in Figure 3. Although the learner can easily create relationships that are grounded in internal understandings using the English-language representation, he is also presented with corresponding, more abstract mathematical representations. These simultaneous representations establish a bridge between simple and more expert-like representations.

The same principle applies when switching from the qualitative text view to a quantitative table view (Figure 5). Now, the textual representation is re-represented as a table, which can then be edited to more accurately represent the learner's understanding of the relationship.

 

Figure 5: Quantitative relationship definition: Table View
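The Table View can be thought of as re-representing the relationship's curve as an editable list of data points, with values interpolated between them. The following sketch is a hypothetical scheme of ours, not necessarily how the Table View is implemented.

# The relationship as an editable table of (source, target) points,
# with linear interpolation in between (hypothetical scheme).
table = [(0, 100), (25, 50), (50, 30), (75, 20), (100, 15)]

def lookup(table, x):
    """Interpolate the target value for a source value x."""
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("source value outside table range")

print(lookup(table, 60))  # 26.0: finer control than the text view allows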

 

Model-It also incorporates strategies intended to provide a bridge from concrete to abstract representations of the model. Although the Simulation Window (Figure 1) provides a concrete, semi-realistic representation of the objects being modeled, the Factor Map presents a more structural and relational (and thus more abstract) representation of the model (Figure 6).

 

Figure 6: Visualizing abstract structure: Factor Map

 

In the Factor Map, factors are represented by iconized pictures of the objects, providing a bridge between Simulation Window and Factor Map representations. The Factor Map view may help students construct mental representations of the system they are modeling by providing a way to visualize the relational network of factors and relationships. This view is interactive–students can rearrange the nodes in a visually meaningful way and can also make changes (e.g., to create and define a new relationship, one can simply drag a line from one factor to another).

Coupling Actions, Effects, and Understanding

Once objects, factors, and relationships have been defined, the student can run simulations with his/her model (Figure 7). The student selects factors to view during a simulation using meters (vertical indicators for dependent factors, controls for independent factors) and graphs (displays of factor values as they change over time). During a simulation, these meters and graphs provide immediate, visual feedback of the current state of the simulation. Students can directly manipulate current factor values even while the model is running, and immediately see the impact. The student both provides and experiences interactive feedback with the model. "What if?" questions are generated and answered nearly simultaneously; hypotheses can be tested and predictions verified within moments. This interactivity may provide opportunities for students to refine and revise their mental models, by comparing the interactive feedback they initiate and receive with the feedback they expected to receive. It may also engage students with low motivation or short attention spans who would otherwise be uninvolved.

 

Figure 7: Running a simulation
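The loop behind this interactivity might look something like the sketch below: learner-set slider values override the independent factors on every step, so a change takes effect on the very next update. This is a structural sketch with hypothetical factor names, not Model-It's actual simulation loop.

factors = {"discharge": 50.0, "phosphate": 40.0, "quality": 60.0}
sliders = {"discharge": 50.0}   # independent factors the learner controls

def step(factors, sliders):
    factors.update(sliders)     # read the sliders first...
    factors["phosphate"] = factors["discharge"]       # ...then propagate impacts
    factors["quality"] = 100 - factors["phosphate"]   # meters redraw from these values

step(factors, sliders)          # quality meter reads 50
sliders["discharge"] = 80.0     # learner drags the slider mid-run
step(factors, sliders)          # quality meter drops to 20 on the next step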

 

Research Questions

In order to gain some sense of Model-It's utility and effectiveness, and to explore whether our scaffolding strategies made modeling accessible to high school students, we pilot- and classroom-tested the software. These were our research questions:

  • Did the process of building and testing models help students develop their understanding of complex systems?
  • How did the scaffolding strategies support students in their modeling activities?


Footnotes

[1]All of the interface components of Model-It are implemented through the graphical user interface (GUI) of the Macintosh. We use GUI components like windows, lists, pop-up menus, buttons, sliders, and editable text boxes. Sliders can be used to set initial values and change values while the model runs; new factors are automatically entered into lists and pop-up menus, object pictures can be cut and pasted, etc. In addition, the positioning of pop-up menus is carefully chosen, particularly in the Relationship Maker window, in which the menus are part of the sentence at the top of the screen that defines which factor affects which other, and the sentence going down the right of the screen that defines which qualitative relationship to use (e.g., Figure 3). Instead of having to remember and/or type in the names of factors repeatedly, students can quickly pop up menus to find what they're looking for. Instead of having to laboriously type in data points, they can quickly define a relationship's graph by clicking on the graph itself (Figure 5).

[2]Model-It can be used to build a wide range of process flow models; for our preliminary classroom study we chose the domain of stream ecosystems. In our description of the program, we use examples from this domain.

[3]Stream quality refers to a standard index called the Water Quality Index (WQI) developed by the National Sanitation Foundation (Mitchell & Stapp, 1994). The WQI is determined by nine tests: dissolved oxygen, fecal coliform, pH, biochemical oxygen demand, temperature, total phosphate, nitrates, turbidity, and total solids. Associated with each test is a weighting curve chart which converts the value of the test into a 0-100 scale Q-value, indicating the impact of that test result on the health of the stream. The WQI is calculated as a weighted average of the Q-values for the nine tests, giving a measure of the overall stream quality.
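A minimal sketch of the WQI computation described in this footnote: each test result is first converted to a 0-100 Q-value by its weighting curve (not reproduced here), and the Q-values are then combined as a weighted average. The weights below are illustrative placeholders, not the NSF's published values.

# Weighted-average WQI over the nine tests (illustrative weights).
WEIGHTS = {
    "dissolved oxygen": 0.17, "fecal coliform": 0.16, "pH": 0.11,
    "biochemical oxygen demand": 0.11, "temperature": 0.10,
    "total phosphate": 0.10, "nitrates": 0.10, "turbidity": 0.08,
    "total solids": 0.07,
}  # weights sum to 1.0

def wqi(q_values):
    """Weighted average of the nine 0-100 Q-values."""
    return sum(WEIGHTS[test] * q for test, q in q_values.items())

print(wqi({test: 70.0 for test in WEIGHTS}))  # 70.0 when every Q-value is 70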

Method

Background and Context

Model-It is designed to be used within a project-based science classroom (Krajcik, Blumenfeld, Marx, & Soloway, 1994), a method of science instruction that focuses on students doing inquiry. We are working with science teachers at a local public alternative high school who are developing a new project-based curriculum called "Foundations of Science," in which computing technologies are routinely used and the subject matter of earth science, chemistry, and biology is combined within the context of meaningful, long-term projects. The high school is "alternative" in the sense that community-based and innovative instructional techniques are encouraged, and that students must apply and be accepted in order to attend. The students in the studies reported here were generally Caucasian, primarily of middle to upper-middle socioeconomic class, and of average to above-average ability.

The ninth and tenth grade students who took this class were engaged in a long-term project investigating the question "How safe is our water?" Specifically, they studied a tributary of a local river that flows near the school, collecting a variety of data to determine the quality of the water. Because this water eventually ended up in their drinking fountains, the question was likely to be motivating and personally meaningful to the students. Their project investigations also included using various technologies to conduct and report detailed biological, physical, and chemical assessments.

Overview of the Studies

Model-It has been used three times with a Foundations of Science class of 22 students. First, we pilot tested the software with six ninth grade students from the class, and then we used the software twice in the classroom – once as the ninth grade final project, and then again by the same students early in the fall of their tenth grade. Table 2 describes the dates, students, time frame, and data collected for these studies.

 
 

Study 1 (pilot), early spring
  Students and time frame: 6 selected ninth grade students (2 pairs, 2 individuals), 1.5 to 2 hrs. each, during one session
  Data collected: video/audiotape of training and practice sessions; models

Study 2 (classroom), late spring
  Students and time frame: 22 ninth grade students (working in pairs) for four 50-minute class periods
  Data collected: video/audiotape of students using Model-It; models

Study 3 (classroom), early fall
  Students and time frame: 22 tenth grade students (same students, working in pairs) for over one week
  Data collected: same as above, plus post-interviews

Table 2: Model-It Studies, Subjects, and Data

 

In each case, the students used Model-It to construct and test models of stream ecology. They collaborated with partners on open-ended projects in which they built models of their own design to represent their choice of particular stream phenomena. They were given several general scenarios from which to choose, including the option to create their own scenario. One scenario suggested a land use practice model of the impact of golf courses or parking lots on stream quality. Another suggested modeling the relationships leading to cultural eutrophication and algae blooms. The Factor Map, as described above, had not yet been implemented for these three studies; however, students were encouraged to draw conceptual maps representing their models to help them design and visualize the overall organization and structure of their model. Following is a more detailed description of each study.

Study 1

For the pilot study, we worked with six students individually or in pairs, for 1.5 to 2 hours each during one session. We first asked the students to brainstorm about the objects, factors, and relationships in a stream ecosystem with which they were familiar. Then we briefly demonstrated the program to the students by showing them how to build and test a few simple relationships, and finally (for the majority of the session) suggested that they add more factors and relationships to the model, based on what they already knew about stream ecosystems. A researcher sat with the students to answer questions, to prompt the students to talk about what they were doing, to take notes, etc. Our main focus was to evaluate Model-It's learnability and to identify potential sources of confusion for students. The Study 1 data consisted of a transcribed videotape of each session, along with the models students created during those sessions.

Study 2

In the first classroom testing of Model-It, 22 students used the program for four class periods of 50 minutes each. They used a study guide during the first three days, to help them learn to use the program to make models. Working through the guide in groups of two, students created models with qualitative immediate and rate relationships, wrote explanations for their relationships, made predictions about the behavior of the resulting models, explored their model's behavior by manipulating independent factors, and wrote explanations about the behavior they observed. On the fourth day, they constructed a model of one of several proposed scenarios: benthic macroinvertebrates as indicators of water quality; stream phosphate and algae "blooms"; land use practices and their impact on the stream; or an open-ended design of their own choice. During the fifth and final class period, the students discussed their models with the class and with each other. Study 2 data consisted of transcribed videotape of two pairs of students using the software for all four days. The video track recorded the computer screen output, while the audio track recorded the students' conversations. All of the models that students created on the fourth day were also saved.

Study 3

Early in the fall of the following school year, the second classroom testing occurred. The same 22 students, now tenth graders, used another prepared guide designed to introduce them to some added functionality of Model-It (specifically, the run-time graphs), and to give them some experience with creating and using population objects and setting up predator/prey models. All students worked for two days on the prepared guide, in groups of two, and then four groups of two and one group of three used Model-It for several more days to construct models of their stream's quality and how it had changed since the previous year. Study 3 data consisted of audio/videotape for one pair and one trio, similar to that obtained in Study 2. We also saved the five models produced by the five groups.

 

Data Analysis and Results

Our method and data analysis to date have been formative; in looking at and analyzing the various data sources (models, tapes, interviews, and log files), we have attempted to identify themes related to models, modeling, and scaffolding in action. We chose models and conversations from among the three studies for their illustrative value rather than their representative value.

Models

We begin with an evaluation of the models that the students actually created with Model-It. For all three studies, we looked to see if the models that students created were generally reasonable, that is, if the relationships defined were scientifically accurate and appropriate, and if the model worked to illustrate the intended scientific phenomena. For Study 2, we conducted a more formal evaluation, judging qualitatively for accuracy, complexity, and completeness. To give the reader an idea of what kinds of models were constructed, Figures 8, 9, and 10 show Factor Map representations [1] of models from each of the three studies.

Study 1

Figure 8 shows a typical model from the pilot testing. The student who built the model in Figure 8 started with some chemical factors of the stream, then created relationships to show how these factors affected various populations of organisms that live in the stream (bacteria, mayflies, and midge flies). She created and tested this model in about an hour. There are a few errors (e.g., the relationship from midge fly count to stream quality is wrong because macroinvertebrate counts are indicators of water quality, not components of it); overall, however, the model is sound.

 

Figure 8: Factor Map representation of a model created by a student in Study 1.

Study 2

Figure 9 shows one of the better models from the fourth day of the first classroom testing. The pair of students who created this model chose to represent the impact of a golf course on a stream ecosystem, by showing how fertilizer runoff from golf courses and fecal matter deposited by geese living nearby can affect water quality. As Figure 9 indicates, their model also showed how a change in water quality could affect a population of mayfly larvae in the stream.

 

Figure 9: Factor Map representation of a model created by a pair of students in Study 2.

 

We conducted a more formal assessment of the models in this study, since we had a decent sample size (12), and since the models were more varied and interesting than in the other two studies. We evaluated the models, judging qualitatively for accuracy and completeness. We used a continuous scale from 1 ("poor") to 4 ("excellent"), with a poor rating given to a model with few or no accurate relationships or factors, and an excellent rating given to models which accurately represented the modeling scenario without errors. Models rated 2.5 or above were judged to be "accurate and reasonable," even though they might be imperfect or incomplete in some way. The model in Figure 9 was judged an excellent model, because the relationships were reasonable and accurately represented that group's chosen scenario (potential human impact on stream quality).

According to our assessment criteria, two-thirds of the groups in Study 2 created reasonably good quality models (quality rating 2.5 or above). The average rated model quality was 2.6 with a low of 1.5 and a high of 4. Most groups were able to set up at least three reasonable relationships in their model. Some models contained errors in the way relationships were defined. For example, one common error was to confuse a population's rate of growth with its count. A few had relationships that were backwards, contradictory, or that made no sense; however, for the most part, students were successful in their model creation efforts.

Study 3

Figure 10 shows a typical model from the second classroom testing. Students spent several days on the task assigned by the teachers: to build a model of stream quality, showing its effect on various populations of macroinvertebrates, and including an outside factor to account for differences in data from last year to this year. All groups chose rain as the outside factor, indicating different ways that rain could cause more pollutants to be washed into the stream, thereby lowering water quality. All the groups' models looked somewhat similar, and generally reflected valid scientific relationships. The main variation between models was the accuracy with which students created the relationships to stream quality; groups that were most concerned with accuracy used the quantitative "Table View" in order to construct relationships instead of the qualitative "Text View."

 

 

Figure 10: Factor Map representation of a model created by a pair of students in Study 3.

 

Model Building and Testing

We examined the videotapes and interview transcripts for scenarios that were good examples of typical model building and testing activities, or in which a particular scaffolding strategy seemed to be particularly salient in the students' conversations and actions with the computer. We listened to what they were talking about and watched what they were doing as they constructed a model, watched and listened as they tested their models, and considered the supportive role of the software in those activities.

We report our results as an illustrated commentary of students creating models in a scaffolded learning environment, and draw freely from all three studies for our analysis (indicating the source for each illustration in parentheses). For each section, we state the research question, some answers to the question, and then present some illustrations to support our analysis.

Did the process of building and testing models help the students develop their understanding of complex systems? We find evidence for the following: 1) building models leads to refining and articulating understanding, 2) building and testing leads to model extension, 3) testing leads to the discovery of flaws in models or suggests refinements, and 4) building their own models is motivating for students. The following examples illustrate these findings.

Building models leads to refining and articulating understanding

During the process of constructing models, students often found that they had to refine their understanding of a phenomenon in order to represent it. They might at first say just that X is related to Y, but the act of defining that relationship required its further articulation. To refine their understanding, students engaged in thoughtful discussion, and referred to the field manual to learn more about the phenomena they were trying to model.

"Now I need to get oxygen in there. [looks in manual] Sunlight affects oxygen. Phosphates affects oxygen. `Cause phosphates make plants grow out of control, and then they eat all the oxygen. I think. [reading from total phosphate description in manual] `Also decreases the number of pollution...' Okay. The dissolved oxygen level goes down as phosphates go up. So." (Study 1)

Building and testing leads to model extension

Given an open-ended project, we also saw that the process of building and testing a model inspired students to extend their model. For example, when a student and her partner lowered the phosphate to 0 while testing their model, the student commented that the change in phosphate should affect all living things in their model, so she suggested that they add one of the macroinvertebrate populations to represent that phenomenon. The same pattern appeared in Study 2, when two students had been testing their model, and, with several meters still displayed on the screen, student A suddenly expressed a series of ideas about several more factors and relationships that could be incorporated into their model:

A: So we could make a direct relationship thing. We could put, we could add insects, and then we could make a relationship between phosphate and insects. ... Yeah! and then the more insects the better the stream quality, I suppose we could put, yeah.

B: It depends on what taxa.

Both examples illustrate that when students are engaged in building and testing a model, we see them coming up with ideas as to how their model can be extended or improved.

Testing leads to the discovery of flaws in models or suggests refinements

Examination of students' testing strategies suggests that testing often led to possible refinements or to the discovery of flaws in their model. For example, one group had erroneously defined factors (bacteria and algae) that had already been pre-defined as objects. When they tested their model, they observed behavior that didn't agree with how they thought it should work, and consequently discovered their error. Another group, while testing their model, realized that it would make sense to link two of their unrelated factors together by creating another relationship, so they went back and defined the new relationship. In both of these examples, the students discovered flaws or found areas for improvement while they were testing their model.

Building their own models is motivating for students

The open-ended modeling tasks that Model-It supports gave students the flexibility to branch off and explore different topics, and to express their own understanding of various phenomena. For example, to demonstrate land use impacts, students C and D chose to incorporate the golf course object into their model, and show how factors of the golf course might affect the stream and the organisms living in it:

D: Let's use that one.

C: The golf course?

D: Yeah, we haven't used that one yet.

C: How the golf course affects what, though?

D: How the golf course affects, um, bacteria.

C: Too hard.

D: It's easy. Because the golf course, a lot of geese are on the golf course, and the geese feces go in the water.

C: Oh, and it affects fecal coliform

D: Which in turn affects the bacteria, and the fecal coliform grows on bacteria.

C: Okay, where do you want the golf course?

D: Right there.

This opportunity to build their own models was extremely motivating; students displayed excitement and enthusiasm throughout the model building process. For instance, once students C and D had completed their initial goal of representing the golf course impact, they branched out on their own to create another relationship, from the stream quality to the mayfly population. They expressed pride in their model, and called the teacher over to show it off to her. (Figure 9, above, shows the factor map of their final model.) The next day, in class discussion, they proudly and excitedly described how their model worked (unfortunately, the transcript text does not do justice to the students' high level of participation and animation):

[Teacher draws a map of their model as they talk]

C: The size of the golf course affected the geese, the number of geese...

D: The more land there is the more geese... And the more geese the more fecal coliform.

C: The golf course size affected nitrates and phosphates...because the bigger golf course has more fertilizer and fertilizer has nitrates and phosphates in it.

Teacher: Do you have any [relationships] going to quality?

C: Well I'm getting there, okay? This is complicated! Okay, fecal coliform goes to quality, phosphate goes to quality, nitrate goes to quality... And then the quality went to rate of growth.

Teacher: Why?

C: Because the better quality...

D: There is the more mayflies can grow. And then the growth went to count and the decay went to the count.


Footnotes

[1] These Factor Maps were created by researchers using later versions of Model-It, as an aid to visualizing the models students built.

 

 

Scaffolding

How did the scaffolding strategies support students in their modeling activities? Here we present a number of examples to illustrate how the scaffolding strategies of grounding, bridging, and coupling supported students' modeling efforts. Each scaffolding strategy is discussed in terms of its Model-It implementation.

Grounding in Experience and Prior Knowledge

Digitized photographs and graphics. The digitized photographs of a familiar place and the graphics used in the Model-It learning environment seemed to provide a grounding around which students could talk and think about the real stream ecosystem. Seeing pictures of the stream from which they collected data seemed to help them think of relationships they could model. We also saw a number of instances of the students wanting to place objects in a visually appealing and "realistic" way. In the following three examples, we illustrate how the digitized photographs and graphics help ground the students in their prior knowledge and experience.

Example 1: (Study 1) Students identified with the photorealistic graphics. The graphics also seemed to inspire them to use objects they already knew about. For example, one student saw the stream photo, and decided to build a model of "how our stream was when we tested it." She then proceeded to set the factors to the values she remembered from those real-world tests to see what would happen in her model. Her decisions about what modeling activities to pursue were grounded in her own prior knowledge of stream factors.

Example 2: (Study 1) This example illustrates how the photorealistic graphics provide grounding in prior knowledge. This student referred to the stream photo while brainstorming:

"...Or since this one's [referring to the stream] by a road, roads can have pollution, they have things from the car like oil, lots of salt from the road can get in, and that can change what's in it,..."

Example 3: (Study 2) In this conversation, students E and F are beginning a new model. They are manipulating the objects in the environment, placing them in visually appealing places, and commenting on the appearance. Notice that student F comments that she wanted the object placed where "it looks real," and then said the placement was "pretty good." Student E had some fun placing the parking lot object in the portion of the stream picture where there was water.

E: [clicks on drainpipe object, moves cursor around on stream ecosystem picture]

F: Put it somewhere really nice ... so it looks real.

E: [still moving cursor around]

F: Where we had it before, click like right there, see what it looks like.

E: [Clicks on the right-hand side of the picture]

F: That looks pretty good.

E: I've always wanted to do this. [selects the parking lot object and places it in the water area of the picture]

F: Why?

E: Right in the middle of the stream. That's funny!

The video/audio data of other students showed a number of instances of students placing, removing, and re-placing objects in slightly different positions, often commenting on whether one place was more visually appealing than another.

Recalling that the picture of the stream was a digitized photograph of the actual stream site where they performed water quality testing, we felt that students' mindful manipulation of on-screen objects gave some indication of how closely they related to those objects in the Model-It environment. The photorealistic images and graphics they saw on-screen were grounded in their own real-life experience and prior knowledge.

Qualitative representation of relationships. The qualitative representation of relationships in the Model-It environment gave students an accessible way to construct relationships. Students seemed comfortable expressing themselves qualitatively. The qualitative text view appeared to help them think about defining relationships and to enable them to construct complex relationships quickly. The analysis suggests that representing a relationship qualitatively is much closer to the way students naturally think and express themselves than representing it quantitatively or mathematically. The following two examples are illustrative.

Example 1: (Study 2) In this example, reading the qualitative sentence out loud helped the students realize a mistake and think of a better way to define their factors. Students C and D had previously defined a "golf course" factor of the "Golf course" object, a definition that probably doesn't make sense. Here they were looking at the Relationship Maker, and constructing a rate relationship between "Golf course:golf course" and "Bacteria:count."

C: [reading from the screen] "At each time step, add golf course to bacteria count"

[pauses and moves cursor back and forth, tracing over the words "Golf course:golf course"]

C: Wait but it should say like golf course .. `size' or something. Wait we did that one thing wrong.

D: Yeah. [they go back to the Factor Factory and change the name of the `Golf course: golf course' factor to `Golf course: size']

C: [now in the Object Editor] So the object is a golf course, the factor of the golf course [pointing with the cursor to the word `size']–see we could have different factors–like we could have `number of geese.'

Student C, reading out loud from the screen, suddenly realized that the factor is not the object itself, but is something related to the object. He may not have reached this realization as quickly had he not been reading the qualitative, verbal representation of the relationship on the screen. Not only did he find his error, but later on he showed a better understanding of factors by proposing another golf course factor – geese.

Example 2: (Study 2) In this example, we see students C and D each contributing ideas to the model they were building, ideas that were incorporated into the model in short order. They seemed to be comfortable expressing themselves qualitatively, and, using the qualitative definition of relationships, they were able to build complex relationships very quickly.

C: As geese increases fecal coliform increases at about the same. [saves geese -> fecal coliform relationship] And then if we want, do you want, it won't take long to put in nitrates.

D: Okay.

C: We can add that in. [closes Relationship Maker]

D: Cause that's part of fertilizer...

C: Cause that's part of fertilizer, yeah. So we go to stream [opens Factor Factory] okay...let's see...nitrates N I T nitrates. [types name into Factor Factory]

D: Lesser and lesser.

Within the next two minutes they proceeded to construct a relationship from `Golf course: size' to `Stream: nitrates' and one from `Stream: nitrates' to `Stream: quality.' This was accomplished by opening only two windows, pulling down six menus, and pressing six buttons. The relationships they understood from their prior investigations in the stream seemed to translate easily and quickly into the qualitative representations in the modeling environment.

Pre-defined high-level objects. The pre-defined high-level objects in Model-It provided students with simple and accessible manipulatives, as opposed to low-level programming primitives. Students had little trouble learning the object-oriented environment and the high-level building blocks of objects, factors, and relationships; in fact, the object-oriented representation seemed to correspond well with the way students expressed their understanding and with what they already knew.

Example 1: (Study 1) Talking about objects, factors, and relationships seemed to come naturally to students. In Study 1, in the brainstorming session before they had ever been introduced to the program, students could list factors of the stream ("oxygen, total solids, insects, all those nine chemical tests") and describe relationships ("colder water has more oxygen, and warmer water has less oxygen"). We see, then, that the modeling vocabulary used in Model-It is grounded in the familiar language of students and thus supports their modeling efforts.

Example 2: (Study 2) Students referred to multiple causal relationships in various ways, sometimes as `chains' or `hooks.' Here students E and F were working on their model (drainpipe discharge affects stream phosphate which affects stream quality), and were in the process of writing down an explanation. Student E talked through a causal relationship, suddenly realizing that all of them are related causally.

E: High growth rate is caused by phosphorus which increases plant life which decays and they feed on it.

F: Yeah.

E: Whoa! We got a whole chain here.

F: That's it! That's it!

The student's reference to a "chain" is an indication that she was thinking about the objects in the stream as "things" that could be physically connected. She related her perception of the multiple relationships to her prior experience with chains and links, which gave her a way of thinking about Model-It relationships.

Bridging Representations

Textual and graphical representations of relationships. Providing both the text and graph views of relationships may scaffold learning; we saw students using one representation to help them interpret the other. This helped them connect unfamiliar representations with familiar ones, promoting sense-making and connection-building.

Example: (Study 1) This student, thinking out loud, is trying to decide which relationship to choose between two factors. She initially tries to think it through using the qualitative words, but, finding them inadequate, turns to the more quantitative graph at the right of the Relationship Maker window. This helps her verbalize the relationship and make a decision.

Should we say more and more. Cause it gets more... I don't really understand that one. More and more. Should we say more and more? Well, let's look at the graph thing. If the stream temperature is like, really high, then it [oxygen] starts going down.

In this case, the student used the graphical relationship to confirm her thinking that the textual representation "more and more" might be an appropriate qualitative definition for the relationship. We expected the opposite, that the text view would be used to interpret the graphical view. Perhaps, then, the simultaneous representation of the relationship both as a verbal sentence and a mathematical graph forms a bridge between whichever representation the student is more familiar with, to the less familiar one.

Qualitative and quantitative definition of relationships. The qualitative and quantitative definitions of relationships helped students make connections between commonsense and more abstract ideas. Students were able to use whichever representation suited their abilities or goals. When the verbal representation suited them, they used it to define their relationships; where they perceived it to be inadequate to express their understanding of the relationship (particularly as it related to accuracy), they used the more quantitative table view.

Example: (Study 2) In Study 2, students were not given any instructions in the guide as to how to make quantitative relationships with the table view; consequently, most constructed their models with the qualitative text view exclusively. However, one student was dissatisfied with the levels of accuracy possible with the text view (since he was trying to re-create graphical relationships pictured in his water quality manual), and, on his own, discovered the table view. With this more quantitative representation, he created relationships that more closely matched his understanding of those relationships.

 

Concrete and abstract representations of the model. Providing students with both concrete and abstract representations of models may give learners different ways to think about their model and help them transition from novice to expert representations. Students sometimes arranged meters on the screen to represent the underlying structure of the model. Instead of placing meters randomly on the screen, they placed them in a (left-to-right) order that indicated how objects to the left caused changes in objects to the right.

Example: (Study 2) In this example, the initial random arrangement of meters on the screen didn't match this student's mental representation. She decided to try to create a closer match by rearranging them in the order of causality, which, in effect, became for her an abstract representation of the model. Here, she had six meters up on the screen, randomly placed (oxygen, fecal coliform, bacteria, quality, algae, and phosphate, placed somewhat lower than the others). She moved the phosphate meter up to the same level as the others. Then,

"We should have this in the order that it goes. [begins moving meters] Drainpipe discharge [moves discharge to the far left position] Discharge affects phosphorus [moves phosphorus to right of discharge] Phosphorus which affects algae [moves algae to right of phosphorus] which affects bacteria [moves bacteria to right of algae] which affects oxygen [moves oxygen to right of bacteria] which affects .. we don't need fecal coliform for this."

Although the student had already constructed the model, and probably knew what relationships she had built into it, she still found it useful to be able to rearrange the meters to visually represent the structure of the model on the screen.

Coupling Actions, Effects, and Understanding

 

Direct manipulation of factor values while a simulation is running. The opportunity to manipulate factor values directly while the model is running provides a link between the learner's cognitions and the model. Meters provide a speedy way to test out a hypothesis; students used information from the meters to explore how their model worked and to verify its operation. We also saw that the visual feedback allowed students to evaluate their model and compare its behavior with their expectation or prediction of how the model should behave; in essence, they compared the behavior of their Model-It model with their mental model.

Example 1: (Study 2) Sometimes manipulating a meter in an interactive testing process helped students to understand the model's behavior in a way that went beyond just understanding the individual relationships. In this example, two students had just been testing their stream quality model (see Figure 9), watching how the quality affected the population of the mayfly larvae. One of the students was slowly increasing the size of the golf course (larger golf courses put more fertilizer in the stream, which lowers the quality, which adversely affects the mayfly population). He had an idea that the population of mayflies should thrive until the golf course reached a certain size, above which the mayflies would die off:

[The Golf course: size is very small, and he increases it to 38 (on a `scale' of 1 to 100 acres) Various factors in the stream increase or decrease, but the mayfly population stays high.] "The mayfly count still goes up. You have to have a golf course over that many, over 50 acres" [as he increases it to 51]. [The stream quality drops further, and the mayfly count immediately starts decreasing as the model runs.] "Jeez, look at it go!"

By manipulating the model he created, in a few moments he was able to form and verify a hypothesis that there existed a critical golf course size, above which the impact upon the mayfly population would be much larger.

Example 2: (Study 2) Manipulating meters and receiving immediate feedback led to students being able to verify their models very easily. In this example, two students had been creating relationships between drainpipe discharge, phosphate levels, and stream quality. Here is one student's interaction with the program, as she verified that the model worked as she thought it should:

See, look, watch. Now if I move this down, this will go down, and that will go up. [Discharge and phosphate meters go down, stream quality goes up] Now if I move this up, this will go up and that will go down. [Discharge and phosphate meters go up, stream quality goes down]

This short episode embodies all of the possibilities of Model-It. She phrased her statements in the form of predictions; using the meters provided by Model-It, she generated and tested four simple predictions as to how her model would work, and, receiving immediate feedback, confirmed not only that her predictions were correct but that her model was working according to her own mental model.

Example 3: (Study 2) The students used meters and graphs in various ways to understand how their models worked. Students G and H had already figured out that their model properly implemented phosphate's and fecal coliform's effects on stream quality. They removed one meter (stream quality) in order to check other relationships and gain a better understanding of how their model works.

[Phosphate, quality, oxygen, and fecal coliform meters are showing on the screen.]

G: Say, let's close stream quality.

H: Um hm.

G: [The student removes the quality meter.] Now let's see what happens. It's running. It doesn't have any effect on them.

H: It does! Well, let's see, fecal coliform...

G: It's true because we're trying to see if it has an effect on these and it doesn't have any effect on this, see? [Student displays slider for fecal coliform, and moves it up and down. None of the meters change]

H: You're right, you're right, it has no effect whatsoever. ... This isn't a very complex model, so far, because we have no other relationships besides to quality.

By removing one meter from the screen and running one additional test with the meter, these students accomplished two critical modeling activities. First, they were able to focus their attention on three particular factors. Second, they were able to obtain information that allowed them to find out how one of those factors affected the others (and in fact, they found out that it didn't).

Real-time, visual feedback of the effect of user's changes in factor values. The real-time, visual feedback provided by the Model-It meters while the learner changes values provides a scaffold that allows the learner to visualize the behavior of the model as it runs. Often, in the process of testing their model, students were led to further exploration and expansion which could be implemented quickly; the testing process revealed students' understanding of the behavior of the phenomenon in real life.

Example 1: (Study 1) In this example, we see a student talking out loud while watching the meters as her model ran. (Figure 8, above, shows a representation of her model.) She was comparing what she saw happening on the meters with what she thought should be happening, evaluating the level of each factor as she read it off the meter. She seemed able to explain her reasoning as to why a particular factor was at a good, bad, or "okay" level. Implicit in her remarks was the stamp of approval upon the whole model she had built – she was obviously happy with it. She explained what she was seeing on the meters as she ran her model:

"The fecal coliform is at an okay level, so the oxygen is at an okay level. Oxygen's actually at a really good level. And that means the mayflies can thrive in it, so the mayflies are just having a picnic, and they're all growing. The stream quality is really good because there's lots of mayflies,[...] The midge fly count is 0 which is really good because midge flies show bad conditions. Bacteria count is going down to 0 and that's good, because, you don't need it, well, I mean, you need bacteria, but this is sort of an extreme. And the stream phosphate is down a lot, it's down to a decent level."

The ease with which she could play "what if" games emboldened her. Knowing that there was no real cost, she described several experiments that would be impossible in real life, experiments that could be "dangerous":

"And then, this'll be fun, I know, I want to start playing around, and you can make just the most dangerous thing in the world, I mean just like, tons of midge flies, and lots of runoff, and... this'll be just totally dangerous."

The real-time visual feedback appeared to provide a scaffold for visualizing the model's behavior, and also provided opportunities to experiment with what-if situations, even implausible situations.

Example 2: (Study 2) The real-time visual feedback allowed model testing and expansion to proceed without interruption, and stimulated model expansion. In this example, these students were testing a partially complete model. During their testing, they used the meters to try different values of golf course size, and in the process realized that the size of the golf course should also affect the number of geese on the golf course. So the testing process with immediate feedback helped them think of additional relationships that could be constructed. One said,

"So, golf course size affects golf course geese. Yeah, we can do it. As golf course size increases, geese increases by about the same." [they subsequently put the relationship `Golf course: size' affects `Golf course: geese']

Example 3: (Study 3) At times the interactive nature of the meters and graphs resonated so strongly with the students' own experiences that they were immediately able to relate what they saw on the screen to their real-life knowledge. Students I and J were working through the guide, exploring rabbit population growth.

I: [reading from guide] `Start the simulation and raise the rate of growth to point 6.' OK

J: Go.

I: Wait. OK. Start. And I want it to be ... point 6. [student moves slider on Rabbit rate of growth to 0.6]

J: Yes. [as the model runs, both watch the graph of the Rabbit count go up sharply]

I: Whoa the rabbit count is--whoa they're multiplying like rabbits!

J: Ha ha.

The feedback they received from their action was immediately related back to their prior knowledge about how rabbits multiply in the `real world.'


Future Directions

Since the time these data were collected, several new features have been implemented that should increase the impact of the scaffolding strategies. Students can now create their own objects, using their own pictures and graphics, including replacing the stream picture and creating population objects. Also, the Factor Map has been completely implemented.

Other future changes to the program have been suggested by the classroom testing. For example, the data showed that frequent and iterative testing of models as they were built tended to result in better models. We therefore intend to redesign the interface to guide the learner toward these testing strategies, either by providing prompts to test each relationship as it is created or by making the build-and-test cycle more explicit. We also noticed that predicting model behavior was useful for students, yet they often didn't make predictions before testing. We intend to scaffold students in making predictions about their models, and in analyzing and explaining those predictions after testing. Other future goals include providing built-in checks for common student errors and supporting a wider range of modeling activities, from an even more qualitative starting point to more advanced types of relationships beyond the current level. In addition, we would like to support scientific argumentation by implementing ways for learners to keep track of model "runs," both for comparison between different versions of a model and for use in presentations.

Finally, there are more long-term goals for research with the software, in which we consider other tasks that the program might support. For example, we are looking into networking the software so that students can interact and share their models. Students might even work on different sections of the same model, so that their changes affect each other's model. For example, if the students "upstream" add pollution to the stream, the students "downstream" might see their organisms dying off. We also plan to integrate Model-It with real-world data, so that students can generate realistic relationships from real world data, and can validate their models by comparing simulation results with data from the real world.

Concluding Remarks

Our research indicates that with Model-It, students can create working models of complex phenomena, a task which is usually inaccessible to learners in high school science classrooms. Influenced by current trends in scientific ecosystem modeling, Model-It redefines modeling as an object-oriented, initially qualitative task. The students therefore spent minimal time on the mechanics of programming a model, and instead used their cognitive energies to concentrate on thinking about and modeling the phenomena. We have seen the value of this model building activity in helping students construct, test, and refine their understanding of complex systems, just as model building helps scientists test theories and explore their ideas.

Furthermore, with regard to research and development of interactive learning environments, this study demonstrates that learner-centered scaffolding strategies seem to hold promise for the design of educational software in general. The grounding strategy makes the program accessible by situating activities in learners' prior knowledge and experience. The bridging strategy, providing multiple graphical and textual representations at increasing levels of abstraction, helps learners make connections between what they know and what they ought to know in order to make progress toward more abstract modeling. The coupling strategy gives learners an interactive "handle" into their mental model of a phenomenon, allowing them to quickly generate and answer "what if" questions, make and test predictions, and run and revise both the model artifacts in the software environment and their own mental models.

Acknowledgments

We would like to extend our great appreciation to the other members of the Highly Interactive Computing (HI-C) research group, and the teachers and students at Community High School in Ann Arbor, for their feedback and support. This research has been supported by the National Science Foundation (RED 9353481) and the University of Michigan.


References

American Association for the Advancement of Science. (1993). Benchmarks for Science Literacy. New York, NY: Oxford University Press.

Blumenfeld, P. C., Soloway, E., Marx, R. W., Krajcik, J. S., Guzdial, M., & Palincsar, A. (1991). Motivating project-based learning: Sustaining the doing, supporting the learning. Educational Psychologist, 26(3 & 4), 369-398.

Bobrow, D. G. (1984). Qualitative reasoning about physical systems: An introduction. Artificial Intelligence, 24, 1-6.

Brown, A. L., & Campione, J. C. (in press). Psychological theory and the design of innovative learning environments: On procedures, principles, and systems. To appear in L. Schauble & R. Glaser (Eds.), Contributions of Instructional Innovation to Understanding Learning. Hillsdale, NJ: Erlbaum.

Brown, A. L., & Campione, J. C. (1994). Guided discovery in a community of learners. In K. McGilly (Ed.), Classroom Lessons: Integrating Cognitive Theory and Classroom Practice. Cambridge, MA: MIT Press/Bradford Books.

Brown, J., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32-42.

Clement, J., Brown, D. E., & Zietsman, A. (1989). Not all preconceptions are misconceptions: finding `anchoring conceptions' for grounding instruction in students' intuitions. International Journal of Science Education, 11(Special Issue), 554-565.

Cochran, J. K., & Paul, B. K. (1990). QUAL: A microcomputer system for qualitative simulation. Simulation, November, 300-308.

Cognition and Technology Group at Vanderbilt. (1990). Anchored instruction and its relationship to situated cognition. Educational Researcher, 19(6), 2-10.

Cognition and Technology Group at Vanderbilt. (1994). From visual word problems to learning communities: Changing conceptions of cognitive research. In K. McGilly (Ed.), Classroom Lessons: Integrating Cognitive Theory and Classroom Practice (pp. 157-200). Cambridge, MA: MIT Press.

Draper, F., & Swanson, M. (1990). Learner-directed systems education: A successful example. System Dynamics Review, 6(2), 209-213.

Feurzeig, W. (1992). Visualization tools for model-based inquiry. Paper presented at the Conference on Technology Assessment, Los Angeles.

Green, D. G. (1990). Syntactic modeling and simulation. Simulation, June, 281-286.

Guerrin, F. (1991). Qualitative reasoning about an ecological process: Interpretation in hydroecology. Ecological Modelling, 59, 165-201.

Guzdial, M. (1995). Software-Realized Scaffolding to Facilitate Programming for Science Learning. Interactive Learning Environments, 4(1), 1-44.

Guzdial, M. (1993). Emile: software-realized scaffolding for science learners programming in mixed media. Unpublished Ph.D. dissertation, University of Michigan.

Hamming, R. W. (1962). Numerical Methods for Scientists and Engineers. New York, NY: McGraw-Hill.

Jackson, S., Stratford, S., Krajcik, J., & Soloway, E. (1995, March). Model-It: a case study of learner-centered software for supporting model building. Proceedings of the Working Conference on Technology Applications in the Science Classroom, The National Center for Science Teaching and Learning, Columbus, OH.

Karplus, W. (1983). The spectrum of mathematical models. Perspectives in Computing, 3(2), 4-13.

Krajcik, J., Blumenfeld, P., Marx, R. W., & Soloway, E. (1994). A collaborative model for helping science teachers learn project-based instruction. Elementary School Journal, 94(5), 483-498.

Kreutzer, W. (1986). Systems Simulation: Programming Styles and Languages. Wokingham, England: Addison-Wesley.

Mandinach, E. B., & Cline, H. F. (1994). Classroom dynamics: implementing a technology-based learning environment. Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Mandinach, E., & Thorpe, M. (1988). The systems thinking and curriculum innovation project (Technical report).

Mandinach, E., & Cline, H. (1992). The impact of technological curriculum innovation on teaching and learning activities. Paper presented at the American Educational Research Association.

Miller, R., Ogborn, J., Briggs, J., Brough, D., Bliss, J., Boohan, R., Brosnan, T., Mellar, H., & Sakonidis, B. (1993). Educational tools for computational modelling. Computers in Education, 21(3), 205-261.

Mitchell, M. K., & Stapp, W. B. (1994). Field manual for water quality monitoring: an environmental education program for schools. (8th ed.). Dexter, MI: Thomson-Shore Printers.

Mokros, J. R., & Tinker, R. F. (1987). The impact of microcomputer-based labs on children's ability to interpret graphs. Journal of Research in Science Teaching, 24(4), 369-383.

Norman, D., & Draper, S. (Eds.). (1986). User Centered System Design. Hillsdale, NJ: Lawrence Erlbaum Associates.

Roberts, N. (1985). Model building as a learning strategy. Hands On!, 9(1), 4-7.

Saarenmaa, H., Stone, N. D., Folse, L. J., Packard, J. M., Grant, W. E., Makela, M. E., & Coulson, R. N. (1988). An artificial intelligence modeling approach to simulating animal/habitat interactions. Ecological Modelling, 44, 125-141.

Salomon, G. (1990). Cognitive effects with and of computer technology. Communication Research, 17(1), 26-44.

Salski, A. (1992). Fuzzy knowledge-based models in ecological research. Ecological Modelling, 63, 103-112.

Silvert, W. (1993a). Object-oriented ecosystem modelling. Ecological Modelling, 68, 91-118.

Silvert, W. (1993b). The distributed derivative: an aid to modular modelling. Ecological Modelling, 68, 293-302.

Soloway, E., Jackson, S. L., Klein, J., Quintana, C., Reed, J., Spitulnik, J., Stratford, S. J., Studer, S., Jul, S., Eng, J., & Scala, N. (to appear). Learning theory in practice: Case studies of learner-centered design. To appear in ACM CHI '96 Human Factors in Computing Systems, Vancouver, B.C.

Soloway, E., Guzdial, M., & Hay, K. E. (1994). Learner-centered design: the challenge for HCI in the 21st century. Interactions, 1(2), 36-48.

Swartzman, G., & Kaluzny, S. (1987). Ecological Simulation Primer. New York, NY: Macmillan.

Tinker, R. (1990). Teaching theory building: Modeling: Instructional materials and software for theory building. The Technical Education Research Centers, Inc.

Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.

White, B. Y., & Frederiksen, J. R. (1990). Causal model progressions as a foundation for intelligent learning environments. Artificial Intelligence, 42, 99-157.

Wisnudel, M., Stratford, S. J., Jackson, S., Krajcik, J., & Soloway, E. (in press). Educational technology to support students' artifact construction in science. In K. Tobin & B. J. Fraser (Eds.), International Handbook of Science Education. The Netherlands: Kluwer.

Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17, 89-100.

 

Appendix: A discussion of Model-It relationships, and the particular subset of dynamic modeling that they represent

Model-It is designed using an object-oriented approach, which has the advantage of allowing the independent representation of different objects in the system and of their relationships to each other (Silvert, 1993a, 1993b). In contrast, traditional ecosystem modeling represents the rate of change of a variable by a single differential equation, even when that equation involves multiple variables that may be associated with several different objects. For example, the rate of change of a fish population might traditionally be described by the following equation:

d(biomass of fish)/dt = (growth) + (recruitment) - (reproductive outputs) - (natural mortality) - (fishing mortality) + (net migration)

Instead, by using object-oriented representations, we can implement a "distributed derivative" (Silvert, 1993b) in which each variable's effect on the rate of change is represented by a separate equation, and the overall effect is built up as a sum of the effects generated by each variable. The advantages of this approach are that each equation becomes simpler to create and to understand, and that the impact of new populations can be easily added to the system without changing existing equations. The following sections describe how this approach was implemented in Model-It.
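To make the distributed derivative concrete, here is a minimal sketch in Python (an illustration language of our choosing; this is not Model-It's actual source code, and all names and coefficients are invented for the example). Each effect on the fish biomass is an independently defined function of the model state, and the overall derivative is simply the sum of the registered effects, so a new pressure on the population can be added without modifying any existing equation.

    # A minimal sketch of the "distributed derivative" (Silvert, 1993b).
    # Each effect on d(biomass)/dt is a separate, independently defined
    # function of the model state; the overall derivative is their sum.
    # All names and coefficients are invented for illustration.

    def growth(state):
        return 0.10 * state["biomass"]            # growth adds biomass

    def natural_mortality(state):
        return -0.03 * state["biomass"]           # mortality removes biomass

    def fishing_mortality(state):
        return -state["fishing_rate"] * state["biomass"]

    # Adding a new population or pressure means appending one more function;
    # no existing equation needs to change.
    effects = [growth, natural_mortality, fishing_mortality]

    def d_biomass_dt(state):
        return sum(effect(state) for effect in effects)

    state = {"biomass": 1000.0, "fishing_rate": 0.02}
    print(d_biomass_dt(state))                    # 100 - 30 - 20 = 50.0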

Types of Relationships

Model-It provides a constrained control structure – two types of relationships – with which users can define how one factor affects another. The functionality of those relationships ("immediate" and "rate") was carefully chosen to support the basic essentials for implementing models, so that students can achieve the goals associated with model building without the overhead of learning complex mathematics or a programming language.

1.    Immediate relationships

Immediate relationships are of the form y = f(x); they are used to define a relationship in which the value of the affected factor is immediately calculated based on the value of the causal factor. Immediate relationships represent constraint systems whose mathematical analogs are collections of simultaneous algebraic equations (Miller, Ogborn, Briggs, Brough, Bliss, Boohan, Brosnan, Mellar, & Sakonidis, 1993). That is, for a simulation defined entirely by immediate relationships, the values of all dependent factors are calculated from the values of the independent factors (state variables), and define the state of the model. Nothing changes until the user gives a new value to an independent factor, putting the model into a new state. This form of modeling permits the simplest exploration of basic modeling concepts, such as chains of relationships (e.g., the sunlight affects the photosynthesis level, which affects the oxygen production of the plants, which affects the oxygen level of the stream, which affects the quality of the stream) and combinations of relationships (e.g., the scenario in which stream phosphate and stream oxygen both affect stream quality). Through the use of immediate relationships, middle and high school learners can easily define and explore models to gain an understanding of basic modeling concepts.
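For concreteness, the following sketch (again illustrative Python, not Model-It's implementation; the factor names, functions, and ranges are assumptions for the example) shows how a chain of immediate relationships can be evaluated: each dependent factor is recomputed from its causal factor in dependency order, so giving a new value to the one independent factor simply moves the model to a new state.

    # Sketch: a chain of immediate relationships, each of the form y = f(x),
    # evaluated in dependency order. Factor names, functions, and ranges are
    # invented for illustration; this is not Model-It's actual code.

    def photosynthesis(sunlight):
        return 0.8 * sunlight                  # more sun, more photosynthesis

    def oxygen_production(photo):
        return 0.5 * photo

    def stream_oxygen(production):
        return min(100.0, 20.0 + production)   # baseline oxygen plus production

    def stream_quality(oxygen):
        return oxygen                          # quality simply tracks oxygen here

    def evaluate(sunlight):
        # Recompute every dependent factor from the one independent factor;
        # the model stays in this state until the user changes sunlight again.
        photo = photosynthesis(sunlight)
        production = oxygen_production(photo)
        oxygen = stream_oxygen(production)
        return stream_quality(oxygen)

    print(evaluate(50.0))   # sunlight 50 -> photosynthesis 40 -> quality 40.0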

2.    Rate relationships

Rate relationships define feedback equations of the general form y(t+1) = y(t) ± x. This equation can also be considered as a discrete time step approximation of the linear differential equation dy/dt = ±x, where x is the rate of change of y. Rate relationships are implemented slightly differently for objects that represent populations, such as mayflies. For rate relationships involving populations, we want to take into account the impact of each organism in the population, so that the form of the equation becomes dy/dt = ±(x · n), where n is the count.

For our example, to model the rate of growth (rg) of the population (n), the equation becomes dn/dt = rg · n. At each time step, we multiply the population's rate of growth by the population's count, and add that product to the count. This is still a linear differential equation, but the function it defines is exponential. We present this concept to the user with a qualitative sentence such as: "At each time step, and for each Mayfly, add the Mayfly rate of growth to the Mayfly count" (Figure 4). The user chooses the "sign" of the relationship, either "add" or "subtract."
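A short numerical sketch (with made-up values) shows why this simple update rule produces exponential growth: each step is a linear update, yet the counts compound.

    # Sketch of the population rate relationship dn/dt = rg * n, stepped
    # discretely. Each step adds (rate of growth x count) to the count, as in
    # "for each Mayfly, add the Mayfly rate of growth to the Mayfly count".
    # Values are illustrative only.

    count = 100.0   # initial Mayfly count
    rg = 0.6        # rate of growth per time step (cf. the rabbit example)

    for step in range(1, 6):
        count += rg * count        # a linear update at each step...
        print(step, round(count, 1))
    # ...but the resulting sequence 160, 256, 409.6, 655.4, 1048.6 is
    # exponential: count = 100 * (1 + rg) ** step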

Rate relationships provide support for exploring dynamic, time-based models. They represent the basic flow equations that are the basis for linear models, the simplest dynamic models used to represent ecosystems, and the first to be taught in a typical collegiate simulation modeling textbook (Swartzman & Kaluzny, 1987). The relationship between the rate of growth and count of a population, in particular, supports the exploration of the process of exponential growth, without the student having to first express it as an equation.

Underlying Simulation Engine

To make calculations based on the relationships, we first convert any qualitative definitions into quantitative functions [1]. Specifically, the text-based immediate relationships are converted into the functions presented by their associated graphs, and scaled to the defined range of values for each factor; e.g. Figure 3 shows the curve associated with "decreases by less and less" scaled to the phosphate range of 0 to 10 and quality range of 0 to 100. The functions are stored internally as 11 data points (for the 10×10 grid used by the graph view). Linear interpolation between these points is used to draw the graph, and to calculate the function for arbitrary input values.
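The following sketch shows one plausible way to evaluate such a tabulated relationship (our reconstruction for illustration; the stored points below merely approximate a "decreases by less and less" curve and are not taken from the actual program): store 11 points for the curve, map the input onto the grid, and interpolate linearly between neighboring points.

    # Sketch: evaluating a relationship stored as 11 data points (the 10x10
    # grid of the graph view), with linear interpolation for arbitrary inputs.
    # The points approximate "decreases by less and less" over a phosphate
    # range of 0-10 and a quality range of 0-100; all values are illustrative.

    points = [100, 50, 30, 20, 14, 10, 7, 5, 3, 1, 0]   # y at x = 0, 1, ..., 10

    def evaluate(x, x_min=0.0, x_max=10.0):
        t = (x - x_min) / (x_max - x_min) * 10.0   # position within the grid
        i = min(int(t), 9)                         # index of the left-hand point
        frac = t - i
        # Linear interpolation between the two neighboring stored points.
        return points[i] + frac * (points[i + 1] - points[i])

    print(evaluate(0.5))   # halfway between 100 and 50 -> 75.0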

Objects maintain lists of their associated factors, and factors keep track of their initial value and current value. Factors also keep track of which relationships they "cause." When a simulation is running, the Modeler's central Controller cycles through a loop, such that in each cycle (represented as a "time step"), it executes two functions:

(1) Firing relationships – The Controller tells each object in the world to "fire" its relationships, and each Object passes the message on to each of its factors, which then fire all of their "causal" relationships. When a relationship is fired, it calculates a new value for the affected factor (based on the equations described above), and tells the affected factor its new value. The affected factor stores the new value in a list.

(2) Calculating new values – Once all relationships have been fired, the factors calculate their new value, if any, from the stored list of new values. If there is more than one new value in the list (because the factor was affected by multiple factors), the new values are resolved as follows, depending on whether the factor was affected by immediate relationships or rate relationships (the same factor cannot be affected by both): for immediate relationships, the factor's new value is the average of the calculated values; for rate relationships, in keeping with the distributed derivative approach, the individual changes are summed and added to the factor's current value.
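Putting the pieces together, here is a compact sketch of the two-phase time step (an illustrative reconstruction; the class and method names, and the exact resolution policies, are our assumptions rather than Model-It's actual code).

    # Sketch of the Controller's two-phase time step. Phase 1: every
    # relationship fires and posts a proposed value (immediate) or a change
    # (rate) to its affected factor. Phase 2: each factor resolves its posted
    # list into a single new value. All names are illustrative assumptions.

    class Factor:
        def __init__(self, name, value, kind="immediate"):
            self.name, self.value, self.kind = name, value, kind
            self.causal_relationships = []   # relationships this factor causes
            self.posted = []                 # values posted during this step

        def fire(self):
            for rel in self.causal_relationships:
                rel.affected.posted.append(rel.compute(self.value))

        def resolve(self):
            if not self.posted:
                return
            if self.kind == "immediate":
                # Assumption: multiple immediate effects are averaged.
                self.value = sum(self.posted) / len(self.posted)
            else:
                # Rate effects are changes; sum them and add to the current
                # value, in keeping with the distributed derivative.
                self.value += sum(self.posted)
            self.posted = []

    class Relationship:
        def __init__(self, affected, compute):
            self.affected, self.compute = affected, compute

    def step(factors):
        for f in factors:    # phase 1: fire all relationships first...
            f.fire()
        for f in factors:    # ...then phase 2: calculate all new values
            f.resolve()

    # Usage: phosphate immediately lowers quality; growth adds to the count.
    quality = Factor("quality", 50.0, kind="immediate")
    phosphate = Factor("phosphate", 2.0)
    phosphate.causal_relationships.append(
        Relationship(quality, lambda p: 100.0 - 10.0 * p))
    mayflies = Factor("mayfly count", 100.0, kind="rate")
    growth = Factor("rate of growth", 0.1)
    growth.causal_relationships.append(
        Relationship(mayflies, lambda rg: rg * mayflies.value))

    for _ in range(3):
        step([phosphate, quality, growth, mayflies])
        print(quality.value, mayflies.value)   # 80.0 110.0; 80.0 121.0; ...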



Footnotes

[1] The growing field of qualitative reasoning (see Bobrow, 1984, for an overview) suggests an alternative approach in which qualitative systems are simulated through artificial intelligence techniques such as constraint propagation and logic-based expert systems.
