7
Models as Interfaces

Steffen Bayer,1 Tim Bolt,2 Sally Brailsford3 and Maria Kapsali4

1Program in Health Services & Systems Research, Duke-NUS Graduate Medical School Singapore

2Faculty of Health Sciences, University of Southampton, UK

3Southampton Business School, University of Southampton, UK

4Umea School of Business and Economics, Umea University, Sweden

7.1 Introduction: Models at the Interfaces or Models as Interfaces

Simulation models and simulation modelling are used in many different ways. The contexts and objectives of modelling projects vary as much as the approaches and tools used. The system modelled is only part of what determines the modelling process, and the modeller is often only one of the stakeholders influencing or being influenced by the model. Others, namely model users such as decision makers, students in a teaching context or participants in a group model building project, also interact directly with the model. Further stakeholders might be influenced indirectly by decisions made based on the model.

In this chapter we pay particular attention to the use of models in group model building projects, where a group of domain experts or other stakeholders come together with a modelling expert to build a model jointly. The chapter draws attention to how models can function as an ‘interface’ between participants in a modelling project. An interface can be understood as a point of interaction that allows two systems to communicate across a boundary. In everyday use, however, ‘interface’ also often refers to what affords and enables the process of communication across the boundary: the interface as the means or mechanism that makes the boundary between subsystems permeable. In this chapter we use the metaphor ‘interface’ loosely in this sense.

The ‘interface’ metaphor highlights the potential of models (and of the modelling process) to support information transmission in the widest sense between the participants in a group modelling project, but also across the boundary between the model and those engaging with the model. The aim of this chapter is to explore and illustrate how the interface metaphor can shed light on the variety of functions models can have, especially in a group modelling context.

The chapter is motivated by the assumption that reflection on the use of simulation models, and on the roles stakeholders give them, supports the modeller in making a modelling project more effective, more implementable or more insightful for the different stakeholders. Such reflection could also help to avoid conflicts and misunderstandings where stakeholders have different understandings of the function of simulation models and the role of the modelling process, or where the role of a model evolves over time within a project.

7.2 The Social Roles of Simulation

When participants from different organizational and professional backgrounds come together to develop a simulation model to solve a problem, the emerging model can support them in communicating their diverse experiences and perspectives (which are often expressed in very specific language influenced by their organizational and professional backgrounds). The discussion of relationships and mechanisms to be included in the model, as well as the agreement on decisions about timescales and model boundaries, can give rise to new insights and a shared understanding of the system. Moreover, experiments with a model can also help participants to understand what others are talking about when presenting their perspective of the system. Modelling can support communication and foster understanding in modelling groups with diverse members. Silos, communities of interest and practice, professional and organizational differences, and languages can all make communication between stakeholders difficult. During the process of model building the simulation can take on a variety of social roles.

Boundary objects are artefacts shared between communities of practice, which have their own specific informational codes (Star and Griesemer, 1989; Carlile, 2002; Sapsed and Salter, 2004). Boundary objects can address some of the difficulties in communicating and creating knowledge across (disciplinary and organizational) boundaries. These difficulties include not only the syntactic and semantic challenges of having to overcome differences in language and interpretation, but also those challenges inherent in creating new shared knowledge and dealing with any negative consequences for the participants which might arise from this shared knowledge creation process (Carlile, 2002). Boundary objects can be repositories of knowledge, standardized forms and methods, objects or models, or boundary maps to support interdisciplinary working (Star and Griesemer, 1989). However, while boundary objects can be the basis of negotiation and knowledge exchange, they can also be ineffectual, precisely because their role is at the margins of communities, and their use depends on the frequency of interaction and level of understanding within groups (Sapsed and Salter, 2004).

Simulation can sometimes serve as a boundary object. In a variety of domains, modelling has been shown to be able to support situations where disparate stakeholders need to create new knowledge. In large, complex, transdisciplinary research areas, models can become the facilitators of interdisciplinarity, integrating different knowledge bases (Mattila, 2005). Simulation modelling has been shown to act as a boundary object in engineering planning (Dodgson, Gann and Salter, 2007a), helping to bridge disparate communities involved in innovating, and in particular allowing disparate groups to engage within innovative projects and propose potential solutions to engineering problems (Dodgson, Gann and Salter, 2007b). Models have also been conceptualized as interfaces in policy decision processes drawing on scientific research. In such situations, models can be an interface between scientists and policy makers, making scientific research findings accessible to decision makers (Kolkman, Kok and van der Veen, 2005).

Models can be used to make predictions about outcomes in the real world and allow decision makers to experiment with different courses of action in a safe, quick and low-cost way. However, as has been shown in the case of engineering (Dodgson, Gann and Salter, 2007b), simulation modelling can also help to shape the conversation between stakeholders solving problems together and to foster collaboration. Models can help to achieve this across professional, institutional and other interpersonal or interorganizational boundaries.

By building on the insights of the literature on boundary objects and also its application to group model building (Zagonel, 2002), and drawing on the distinct literature on objects as epistemic and technical objects (Ewenstein and Whyte, 2009), it is possible to distinguish four different social roles of models (Kapsali et al., 2011). Boundary objects and representative objects can both be epistemic and technical objects. Models used as epistemic objects help to create new knowledge either because the system overview provided by the model already triggers new insights for the group, or because the simulation of the model results in new insights. In either case, engagement with the model adds to the total knowledge available, while for models used as technical objects the existing knowledge is communicated among the group members.

The two dimensions of boundary vs representational objects and epistemic vs technical objects therefore allow a stylized classification of four types of model roles to be made (Kapsali et al., 2011). Models which as boundary objects facilitate communication between stakeholders with different knowledge bases can be used to create new knowledge (as epistemic objects) by the stakeholder group, or can be used to make available across the group knowledge which individual members might already possess (as technical objects). Models primarily used to represent a reality which is seen as principally unproblematic can again be used in two different ways: as a micro world or management flight simulator to allow the user to learn; or as a predictive tool to allow the user to draw on the knowledge embodied in the model without necessarily requiring an understanding of the relationships within the system (Kapsali et al., 2011).

While in a typical modelling project these roles will not appear in their pure forms, they nevertheless point towards the different ways models are used and the different ways models can act as interfaces. Different stakeholders might have different views of the role of the model: a client might, for example, have a predictive tool in mind at the outset, while the modelling process might show that what is required (or, in some cases, what is achievable) is learning as a group. The role of a model might also change over time: learning as a group might be followed by the expression of knowledge and experimentation, and then by the development of a predictive tool for other users or of a game as a learning environment for students to explore (Kapsali et al., 2011). This discussion of the social roles of models highlights that models have complex and potentially changing roles which go beyond ‘prediction’ (as a technical and representative object), and that in these other roles the transmission of information across boundaries is important.

This interface character can be seen across these social roles (see Table 7.1). As a technical and boundary object a model helps to transmit information between group members by demonstrating already existing knowledge (‘express’) and by giving group members lacking this knowledge the opportunity to engage with the system (‘experiment’). As an epistemic and representative object the model allows those who engage with it to create new insights by exploring a filtered and simplified version of the real world. Here the model can be seen as an interface not between members of the stakeholder group but to engage with the real-world relationships captured (partly and selectively) in the model. As an epistemic and boundary object a model acts as an interface in both the ways just described: it allows a stakeholder group to learn as a group by acting as an interface between the group members, as well as an interface to engage with the preliminary relationships captured in the model. In this latter case, the relationships captured in the model might be seen as relatively more problematic and more preliminary in their claim to make a statement about the ‘real world’ than for a model seen as a representative object: essentially these relationships are seen as capturing the mental models of (some members of) the group. In any case, a model is only a snapshot of the whole system: the model necessarily (and importantly for clarity) has to concentrate on what is relevant for a decision, discussion or system behaviour. As Alfred Korzybski memorably put it, ‘The map is not the territory’ (Korzybski, 1931). So even a model as representative object can be seen as an interface, filtering aspects of reality and making available outputs which have some predictive power with regard to the expected behaviour of the ‘real world’.

Table 7.1 Simulation models as interfaces across social roles.

  • Boundary object (facilitate communication across boundaries) × Epistemic object (create knowledge): the model allows the stakeholder group to learn by acting as an interface between the group members, as well as an interface to engage with the (preliminary) relationships captured in the model.
  • Boundary object × Technical object (make knowledge available): the model helps to transmit information between group members by demonstrating already existing knowledge and by giving group members lacking this knowledge the opportunity to engage with the system.
  • Representative object (represent reality) × Epistemic object: the model allows exploration of a filtered and simplified version of the real world, acting as an interface to engage with the ‘real-world’ relationships captured (partly and selectively) in the model.
  • Representative object × Technical object: access to model outputs which contain predictions about a constrained subset of the ‘real world’.

7.3 The Modelling Process

A simulation model is not only a product available to others; developing a simulation model is also a process, and there are opportunities to learn from both the product and the process. As members of the group learn from each other, they develop and change the model – but also change their own assumptions. Engagement with the model and learning might be a recursive process. Facilitation can play an important role here. Learning can happen throughout the stages of the modelling process and can occur as learning from the model, from modelling and from simulation of the model.

A modelling group can involve very different participants and organizations. Their interaction and later decisions are influenced by explicit and implicit decisions on the interactive use of the model and the model building process. The process of engagement with each other is helped by the model acting as an interface to translate between group members and allow learning. Modelling encourages the group to reach agreement and to code what has been agreed consistently into the model. Clearly, there is a danger that some group members will not fully understand what has been included in the model, or will be ignored and overruled, and that disagreement or uncertainty is therefore essentially ‘black boxed’ (i.e. remains concealed).

During the process of building a simulation model in a group, a shared conceptual model of the issue and the system needs to be developed, which can then be reified or coded as a simulation model (Jonassen, Strobel and Gottdenker, 2005). Engaging with a model allows the members of a simulation group not only to clarify their own understanding, but also to examine the consistency of their own preconceptions through hypothesis testing by simulating the model.

Changes in the model as different versions are built over time reflect changes in the mental models of the members of a model building group. This development is, of course, also shaped and limited by what the specific model building approach allows to be implemented easily: simulation modelling approaches impose their own syntax. The model-based reasoning that the group engage in involves a critical examination of the variables, factors, parts and relationships within their own conceptual models (Jonassen, Strobel and Gottdenker, 2005).

The modelling process begins with a great degree of openness during the creation of the model, which then becomes a well-defined artefact that allows experimentation only within its boundaries. Simulation models are constrained structures, which limits how they can be interpreted (see also Knuuttila and Voutilainen, 2003), and they are less flexible than a drawing on a blackboard. Simulation models become content rich and increase in rigidity and structure during their development process. This materialization process depends on the learning processes, the composition and often the professional background of the group participants, among other factors.

Information about the system will be drawn from the model. Group participants learn about parts of the system, or roles within the system, with which they are not familiar. The model therefore allows participants to get an overview of the whole system. Often, there are important insights to be gained from achieving such an overview, in particular in highly fragmented systems where individuals often have constricted roles and no conscious exposure to the wider interrelationships in the system. This might partly explain why ‘whole-systems modelling’ has been particularly promoted in healthcare settings, where such roles are prominent and where complex, often messy problems involving numerous stakeholders and overlapping priorities have to be considered.

Both the conceptual models of the members of the modelling group and the computer model are necessarily incomplete and fragmentary representations of how the system actually works. As an externalized collective mental model is agreed on and built in the group, the group members will construct, deconstruct and reconstruct their conceptual models drawing on experiences to create sets of structures, factors and variables.

The final computer model is inevitably a compromise between the different conceptual models of the different stakeholders. Other factors also play a role in the conceptual and problem definition phases, such as the definition of needs.

As the model is simulated on the computer, the simulation becomes a source of information: as the model is run it acts as an interface between expected behaviour and the behaviour of the model implied by the assumptions embedded in it. Three sources of learning then build on each other: the modelling, which leads to the model, which can then be simulated; all three are potential sources of insight (see Figure 7.1).

Figure 7.1 Modelling, model and simulation as sources of information.

How models can influence and support learning and function as an interface will depend on how the model development process is facilitated and managed; group learning and model scope are influenced, guided and shaped by the management of the model development process.

7.4 The Modelling Approach

The way models act as interfaces will also depend on the level of abstraction or aggregation in the model. Models need to have the right level of abstraction and aggregation to be useful. From a technical perspective, the amount of complexity and model scope needs to be considered to capture what is relevant for the problem at hand. However, in addition to this technical perspective, what is appropriate for the client and modelling group is also important. These stakeholders might often deal with systems on a more detailed level and might object if some of ‘their detail’ is not included – even if the modeller argues that it does not affect system behaviour and might obscure some of the clarity of the model. Simplicity in the model itself might, however, not help its communicability if it omits what the audience is looking for.

For the ways in which models act as interfaces described in this chapter, what we normally consider the interface of the model, that is the visual interface, might be significant, as it helps to make model outputs and/or relationships within the system accessible to a lay audience. Models can support ‘learning as a group’ particularly well if they are easily changeable, so that suggestions from the model building group and experiments can be rapidly implemented and interactively explored. Such models would typically be simple and visually accessible to stakeholders who might have only limited experience of simulation modelling or limited understanding of the mathematical underpinning of models. Frequently in such modelling projects, insight into relationships between variables or parts of the system might be more the focus than precision of the modelling output. Models used to ‘predict’ might in contrast be fixed, detailed and precisely focused. While the visual interfaces might still be important, the emphasis might now be more on the visual attractiveness of the output than on the degree to which the visual interfaces support an understanding of the relationships within the modelled system. When models are used to experiment and explore, devices such as sliders to change parameters quickly, gaming interfaces, or the ability to run rapid sensitivity analyses or immediate model simulations can be important. The requirements of the other two roles in our framework fall between these extremes: models used to ‘experiment’ with and ‘express’ knowledge, or to ‘explore’ a system, should make insights into relationships easily accessible but need not be so easily changeable in their structure.

While specific models might not correspond completely to these ideal types, and while the exact model requirements will be context specific, we nevertheless believe that this classification of ideal types is informative. These ideal types are the forms through which simulation models can take up their different social roles.

System dynamics (SD) and discrete-event simulation (DES) can be seen to represent the two ends of a spectrum in their emphasis and explanatory power, though both may be applied to the same situations. The methods have been discussed and compared in the literature since the mid-1990s, most notably by Sweetser (1999), Lane (2000) and Brailsford and Hilton (2001). These themes have been more fully explored by Morecroft and Robinson (2005, 2006) and Tako and Robinson (2009a, 2009b), as well as by modellers subsequently looking at strategies for combining SD and DES in hybrid models.

The differences between the two approaches can be classed into four categories (see Table 7.2):

  • the characteristics of the problem/decision under consideration;
  • the data requirements and the development process;
  • the type of understanding derived; and
  • the model output and usability by clients (often based on visual representation).

Table 7.2 Key differences between SD and DES modelling in the modelling literature.

Problem/decision type
  • Decision level. SD: strategic decisions at systemic and population levels. DES: operational decisions.
  • Perspective. SD: systemic overview at the population level, where individual variation is statistically subsumed. DES: operations level, where events impact one another and the variations of individuals cumulate or interact.

Data requirements and development process
  • Base data sources. SD: qualitative data to identify system behaviour and find feedback loops, then supported by data to complete stock levels and flow rates. DES: model built up from individual components, putting together entities.
  • Uncertainty and randomness. SD: deterministic runs based on provided parameters, feedback loops and delays. DES: explicit randomness in the parameters of each modelled activity and event.

Type of understanding derived
  • Key technical learning. SD: systemic interactions and feedback effects. DES: impact of randomness/variation and potential bottlenecks across runs.
  • Scope of learning. SD: overall population-level changes for long-term planning. DES: variation expected for service delivery decisions and contingency planning.

Model output and usability by clients
  • Primary usage mode. SD: understanding influences, not optimization. DES: playing with the model; ‘what-ifs’.
  • Representation. SD: system represented as stocks and flows with explicit feedback. DES: system represented as events and queues with implicit feedback effects.
  • Common user concerns about entities. SD: lack of individuality among human entities. DES: probability distributions required for each event and entity.
  • Common user concerns about structure. SD: continuous, smooth curves and stock accumulation do not match users’ perceptions. DES: rearranging components completely changes interactions.
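The contrast in Table 7.2 can be made concrete with a short sketch of the deterministic stock-and-flow logic typical of SD. This is a minimal illustration, not a model from the chapter: the hospital stock, the admission rate and the average length of stay are hypothetical parameters, and the integration is simple Euler stepping.

```python
# Minimal sketch of an SD stock-and-flow structure (hypothetical parameters):
# one 'patients in hospital' stock with a constant admission inflow and a
# discharge outflow that feeds back on the stock level. Runs are
# deterministic, as is typical of SD.

def simulate_sd(admission_rate=10.0, avg_stay=5.0, dt=0.25, horizon=50.0):
    """Return the time series of the hospital stock (patients)."""
    stock = 0.0                            # patients currently in hospital
    series = []
    for _ in range(int(horizon / dt)):
        inflow = admission_rate            # admissions per day (constant)
        outflow = stock / avg_stay         # discharges: feedback on the stock
        stock += (inflow - outflow) * dt   # Euler integration of the net flow
        series.append(stock)
    return series

trajectory = simulate_sd()
# The stock rises smoothly towards its equilibrium,
# admission_rate * avg_stay = 50 patients.
print(round(trajectory[-1], 1))
```

The point of such a sketch is the aggregate, population-level view: individual patients never appear, only the stock and the flows that change it.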

The social roles of models will affect to a large degree the success of problem solving and decision making and the levels of understanding and interaction among the stakeholders. While it has to be recognized that the actual domains of use of SD and DES might overlap to a wide extent, and modellers can successfully apply SD and DES tools to problems across a spectrum from strategic to tactical (Tako and Robinson, 2009a, 2009b), the characterizations in the SD and DES literature nevertheless allow a hypothesis to be formulated about the ‘natural domains’ of both modelling approaches. While the combination of epistemic and boundary object (the top left cell in Table 7.1) can be suggested as the natural domain of SD, the combination of technical and representative object (the bottom right cell in Table 7.1) seems more the home of DES. In this way, consideration of social roles can be added to the traditional criteria for selecting modelling techniques, leading to a more comprehensive toolbox that will benefit the group decision-making process.

The focus on feedback loops and systemic interactions in SD models more readily supports experimenting with the model in such a way as to create new knowledge about the relationship between system structure and behaviour, and might help to reveal systemic effects of policies and other interventions, while DESs might be very powerful in predicting the impact of randomness on system behaviour.
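The explicit, event-level randomness that makes DES powerful for predicting the impact of variation can likewise be sketched in a few lines. The single-server queue, the exponential interarrival and service times and all parameter values below are illustrative assumptions, not taken from the chapter's case studies.

```python
import random

# Minimal sketch of a DES single-server queue (hypothetical parameters):
# entities arrive one by one with random interarrival times and queue for a
# single server with random service times -- randomness is explicit for each
# event, in contrast to the deterministic aggregate flows of SD.

def simulate_des(n_patients=10_000, mean_interarrival=10.0,
                 mean_service=6.0, seed=42):
    """Return the average waiting time over n_patients entities."""
    rng = random.Random(seed)
    clock = 0.0            # simulation clock, advanced by arrival events
    server_free_at = 0.0   # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_patients):
        clock += rng.expovariate(1.0 / mean_interarrival)  # next arrival
        start = max(clock, server_free_at)                 # wait if busy
        total_wait += start - clock
        server_free_at = start + rng.expovariate(1.0 / mean_service)
    return total_wait / n_patients

avg_wait = simulate_des()
# For these parameters the theoretical M/M/1 mean wait is 9.0 time units;
# any single run scatters around that value.
print(round(avg_wait, 2))
```

Even this toy model shows the characteristic DES insight: substantial waiting arises purely from variability, since on average the server is busy only 60% of the time.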

More empirical work is required to analyse whether this suggested understanding of the natural domains of different modelling approaches corresponds to the actual use of these two approaches and to successful outcomes. Clearly such work will also have to consider the differences between stakeholders in terms of knowledge domains, language used, incentives and social ties, as well as the problem characteristics and the system (e.g. importance of randomness and feedback, relevant level of aggregation, operational vs strategic focus), together with the goal of the planning process or of the modelling engagement.

7.5 Two Case Studies of Modelling Projects

In this section we illustrate the discussion with two case studies of group model building projects which show how models can be used as interfaces in the ways discussed in this chapter.

Although both projects were in the healthcare area, they nevertheless illustrate the diversity of simulation projects. In both projects, modellers met with the expert group on a number of occasions, using evidence and knowledge provided by the expert group, together with other sources, to build the model and populate it with data.

In the first project a team of consultants helped to develop a model to understand the choices available in providing services and developing public health strategies to reduce a specific type of hospital admission in a locality. This modelling project used SD and involved a diverse stakeholder group drawn from a variety of organizations. A consultant worked on the model between meetings, and the developing and changing model was presented at every meeting of the stakeholder group, where it was discussed in detail. Over time the model took on (or was given) different roles and became an interface in all the ways indicated in Section 7.2 (see Table 7.3). Especially in the early phase of the project, when it was important to agree on the structure of the system, the model served as a focus for the group to clarify their different understandings of the system and what group members from different backgrounds actually meant when talking about the system and their experience. The need to capture relationships in a precise, quantified form was a challenge for the participants in presenting their views on those relationships. The model helped to transmit information between the group members even before scenarios were explored or it was used to predict the possible impact of different strategies. The model gave the participants an access point to engage with the mechanisms and real-world relationships captured in the model, and helped them to understand the interdependencies between parts of the system when experiments were suggested and conducted in the group meetings. Finally, the model produced simulation results as predictions of the likely impact of chosen strategies.

Table 7.3 The model as interface in the first project.

Interface between the group members and interface to engage with (preliminary) relationships:
  • Discussions about definitions in order to develop language, ‘because every member in the group had a different definition for every word that you could possibly come up with’ (project 1, interview 3)
  • ‘I think it helps people discuss…yes, it did create a lot of discussion, not necessarily about the model, but about where we were at…what we were doing, what needed to be done, so it definitely generated a lot of discussion’ (project 1, interview 8)
  • ‘the more complicated discussion was, well, what evidence have we got to show the effectiveness of that treatment, affecting the rate of flow from one stock to the next’ (project 1, interview 1)
Transmit information between group members:
  • ‘So at this point they have learnt along the way how it works, what it does and how it works. It's not about how the model works, it's about how the system works, and they don't necessarily see the system’ (project 1, interview 3)
Interface to engage with the ‘real-world’ relationships as captured in the model:
  • ‘was a good learning process…to plan scenarios, test scenarios in the model, because that brought it to life’ (project 1, interview 1)
Access to model outputs:
  • ‘I think the main part for me was the capacity that, using the model, you could see what would have the biggest influence and where you needed to target your resources’ (project 1, interview 8)
  • ‘it was a visual representation that sort of helped me to understand and see the effects’ (project 1, interview 4)

In this first project learning occurred in close relationship with the model and throughout the phases of the project. Reflecting on the project, both modellers and stakeholders emphasized the learning aspects of the work. The group were so diverse and came from such different backgrounds and roles that engaging in conversation to define the model helped reveal various insights (‘learning from modelling’). The group finally agreed on a structure, even though some members were concerned that aspects of the system they were particularly involved with were not in the model. However, one can clearly see that this model structure contributed to learning (‘learning from the model’), albeit to a much smaller extent than learning from the modelling process and also learning from the simulation. There was some degree of overlap of these phases: experimenting with the model and the results from test runs helped decide which elements of the system were relevant for inclusion in the final model. This was not a smooth process and was not without conflict; not all disagreements could be resolved.

The second project was a DES project where a simulation model was built based on discussions with an expert group drawn from two departments providing the same type of service to different parts of a larger healthcare organization. This work investigated the operational efficiency of different practices and studied the impact of changes in the availability of resources on operational performance. The expert group were (in addition to a dedicated data collection exercise) used as a source of data and also involved in decisions about what aspects of the system should be included and what scenarios should be explored. Interestingly the expert group were never shown the actual model on a screen, but instead were presented with a flowchart showing the processes, as well as with results from model experiments that had been run between meetings. The model was, however, not modified and simulations were not run during the group sessions.

In this model there was a lot of shared decision making on what should be included in it and what should be tested with it, but the model itself played only a small role within the group: ‘we had a model which obviously will be run through experiments, and … we'll discuss experiments and the results of the experiments, but … there was no such thing as a coding or a drawing of the schematic representation or a model … together’ (project 2, interview 1).

The learning process was different from that in the first project: the stakeholders involved had a good overview of the system and knew each other well, so there was much less to be learned from the modelling process. The expertise of the stakeholders informed the model building and decisions on what was important to include, but the participants themselves gained less from the discussion. However, there were some insights drawn from the detailed data collection which was undertaken to build the model. Learning from modelling and learning from the model were far less important than learning from the simulation: in this project the model served mainly as a tool to provide answers to stakeholders' questions about the consequences of resource use and of operational policies for performance. Interestingly, there was no direct engagement with the model at all: the modellers built the model and experimented with it offline, only reporting the results back to the expert group.

The emphasis of the work was very much on developing as accurate a model as possible so that results could be derived from the simulation; the spirit of the modellers was very much one of ‘put this in, try this, try that and then some results’ (project 2, interview 2). The group made suggestions to the modeller on what they wanted examined with the model: ‘The things that we wanted it to model or to test, we put those ideas through but we didn't actually sit and do them’ (project 2, interview 3). Even here, however, the participants did learn from the need to make their assumptions precise and from the data collected for the model: ‘And there certainly was things that were perhaps assumptions or unwritten assumptions, unwritten rules that came out of the woodwork’ (project 2, interview 4). The meetings were used to discuss the underlying assumptions and outputs, as well as to investigate why the behaviour of the model differed from what was expected: ‘I would say that's the most creative we got. It was exploring the reasons why perhaps the result wasn't how we expected’ (project 2, interview 5). So there was collective learning in the group about the model and the system, but it centred very much on the results of the simulation.

There were a number of reasons why the modellers did not make use of the model in the expert group meetings, including the size and complexity of the model and the fact that the modellers had used complicated ‘workarounds’ to implement some features of the process under the constraints imposed by the modelling package, workarounds which were difficult to communicate to non-modellers. Showing an animation of a model run was considered unwise, as a single run would not be representative of the variance of the stochastic process and would therefore very likely be misleading.

The two projects described here are illustrative, rather than representative, of the different ways models can be used in a model building group. They show the breadth of difference between modelling projects even when both are in healthcare and conducted with the participation of an expert group. They also illustrate typical differences between SD and DES projects. While there is clearly flexibility in how these approaches are used, the two projects are not atypical and illustrate the key differences between the two approaches: the more strategic project used SD, while the more operational project used DES. This operational vs strategic difference is related to the homogeneity vs diversity of the stakeholder group: the more homogeneous stakeholder group in the second project had far less difficulty in understanding each other and far more understanding of the system, so there was much less need for the model as an interface between group members. The basic working of the system was relatively clear to everybody in the group; the uncertainty lay more in the operational level variance in process times, which was captured in a detailed data collection exercise involving observations. Since the system structure was well understood, there was less scope to learn from the model structure than in the first project. Insights were gained mostly from the detailed data collected and from the simulation runs, and far less from an increased understanding of systemic relationships. For this reason the model in the second project had less of a role as an interface to the ‘real-world’ relationships captured in the model.
The fact that the data required was at an individual level, and that stochastic effects were important to the performance of the system, also influenced how the model was built and simulated within the modelling group: an individual run would be misleading in a DES model but not in a typically deterministic SD model. This further reduced the role of the model in the group meetings of this project.
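The contrast between a deterministic SD trajectory and the run-to-run variability of a stochastic DES can be sketched in a few lines of code. The following Python fragment is purely illustrative (a hypothetical stock-flow update and a hypothetical single-server queue, not the models from either project): the deterministic model returns the same trajectory on every run, whereas individual replications of the stochastic queue differ, which is why only a set of replications, not a single animated run, is informative.

```python
import random
import statistics

def sd_run(steps=50):
    """Deterministic SD-style stock-flow model: identical on every run."""
    stock = 100.0
    for _ in range(steps):
        inflow = 10.0            # constant inflow per step
        outflow = 0.12 * stock   # outflow proportional to the stock
        stock += inflow - outflow
    return stock

def des_run(seed, entities=200):
    """Stochastic single-server queue: mean waiting time varies run to run."""
    rng = random.Random(seed)
    arrival = 0.0   # clock tracking arrival times
    free_at = 0.0   # time at which the server next becomes free
    waits = []
    for _ in range(entities):
        arrival += rng.expovariate(1.0)            # inter-arrival time, mean 1.0
        start = max(arrival, free_at)              # wait if the server is busy
        waits.append(start - arrival)
        free_at = start + rng.expovariate(1.25)    # service time, mean 0.8
    return statistics.mean(waits)

# The deterministic model gives the same answer every time ...
assert sd_run() == sd_run()

# ... while single stochastic runs spread widely across replications.
replications = [des_run(seed) for seed in range(20)]
print(min(replications), max(replications))
```

Any one element of `replications` could be quoted as "the" result, yet the spread between the smallest and largest mean wait shows why a single animated run of a stochastic model risks misleading an expert group.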

7.6 Summary and Conclusions

A model can help a group to communicate by laying down assumptions about how the system works in a consistent way and by helping to show the consequences of those assumptions: modelling, models and simulation can all support learning.

To classify the social roles of simulation models, two dimensions can be distinguished: models can be boundary objects or representative objects, and they can be epistemic or technical objects. The metaphor of an interface is relevant for all these social roles. In a group modelling exercise, the model can act as an interface between the members of the modelling group (as a boundary object), as well as in different ways between the group and the system: it can transmit information between group members, provide a way of engaging with ‘real-world’ relationships as captured in the model, and give access to model outputs. Projects vary in the extent to which models serve as interfaces in these different ways: SD group model building projects typically (though not necessarily) emphasize the model much more as an interface between group members and as an interface to systemic relationships, while DES models are more typical in situations where the model is mainly an interface to the model output, as a prediction of the likely consequences of implementing different choices in the represented system.
