Since their inception, Computer Security Incident Response Teams (CSIRTs) have been afflicted by chronic problems concerning workload, quality of service (QoS), and sustaining their constituency. We have cooperated with one of the oldest CSIRTs to model the most challenging issues. Low-priority and high-priority incident response cause distinct problems. In a previous paper we dealt with the impact of the exponential growth of low-priority incidents on the CSIRT workload. In this paper we deal with high-priority incident response and its impact on the CSIRT workload and quality of service. One observes long-term instabilities in workload and QoS and, ominously, an oscillatory, decreasing recognition of the CSIRT by its constituency. Improved communication of the service level provided by the CSIRT is the most effective policy to mitigate long-term instability in the workload and quality of service.
Since their inception, Computer Security Incident Response Teams (CSIRTs) have been afflicted by chronic problems concerning workload, quality of service (QoS), and sustaining their constituency. We have cooperated with one of the oldest CSIRTs to model the most challenging issues. Low-priority and high-priority incident response cause distinct problems. Low-priority reports grow exponentially, which overwhelms the limited CSIRT resources. For high-priority incident response, one observes long-term instabilities in workload and QoS and, ominously, an oscillatory, decreasing recognition of the CSIRT by its constituency. In this paper we focus on low-priority incident response, leaving high-priority response for two companion papers. For low-priority response, the CSIRT tends to handle the workload by adjusting the productivity of manually handled incidents, a futile task owing to the exponential growth in incidents. A more fundamental solution is automated incident response, but its implementation requires careful planning of timing and resources.
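The futility of productivity adjustment against exponential growth can be illustrated with a minimal sketch (not the authors' model); all parameter values here are invented for illustration:

```python
# Minimal sketch: exponentially growing low-priority reports versus a
# CSIRT that reacts to backlog by raising manual productivity up to a
# hard ceiling. All parameter values are illustrative assumptions.

def simulate(years=10, dt=0.25):
    reports = 100.0          # low-priority reports per quarter (assumed)
    growth = 0.4             # ~40% annual growth in reports (assumed)
    analysts = 10
    productivity = 20.0      # incidents handled per analyst per quarter
    max_productivity = 60.0  # ceiling on manual productivity (assumed)
    backlog = 0.0
    for _ in range(int(years / dt)):
        capacity = analysts * productivity
        handled = min(reports + backlog, capacity)
        backlog = backlog + reports - handled
        if backlog > 0:
            # the CSIRT pushes productivity up, but only to the ceiling
            productivity = min(max_productivity, productivity * 1.05)
        reports *= (1 + growth) ** dt  # exponential report growth
    return backlog, productivity

backlog, productivity = simulate()
# productivity saturates at its ceiling while the backlog keeps growing
```

In this toy run productivity hits its ceiling within a few years, after which the backlog grows without bound, which is the qualitative argument for automated incident response.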
The National Institutes of Health (NIH) and the research community have been concerned for decades about the lengthening periods of training and the rate of entry of new investigators into the pool of funded Principal Investigators (PIs). Since 1970, newly trained investigators have experienced longer periods of training prior to applying for NIH research grant support. These longer periods of training are reflected in the average age at which investigators receive their first independent research grant, which increased from 34.3 to 42.4 years between 1970 and 2006. Because of concern about sustaining the enterprise and assuring a continuing supply of new investigators, the NIH launched a collaboration with viaSim to model the biomedical PI workforce, to estimate the rate of replenishment necessary to balance the age of the entire pool, and to test policies that could encourage reductions in the duration of training. This paper provides an overview of the model developed for the project, as well as some initial simulations of policies related to the duration of training and the entry of new investigators. The final section addresses how the NIH-specific model could be applied to the national STEM workforce.
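The replenishment question in the abstract above can be illustrated with a minimal aging-chain sketch (not the NIH/viaSim model): PIs tracked by single year of age, new entrants arriving at a fixed entry age, retirement at a fixed age. Entry age, retirement age, and cohort sizes are invented:

```python
def simulate(entry_rate, entry_age=42, retire_age=70, years=10):
    """Aging chain: each year every cohort ages one year, the cohort
    reaching retire_age leaves, and entry_rate new PIs enter at
    entry_age. Returns (pool size, mean age). Illustrative only."""
    pool = {age: 100.0 for age in range(entry_age, retire_age)}
    for _ in range(years):
        pool = {age + 1: n for age, n in pool.items() if age + 1 < retire_age}
        pool[entry_age] = pool.get(entry_age, 0.0) + entry_rate
    total = sum(pool.values())
    mean_age = sum(age * n for age, n in pool.items()) / total
    return total, mean_age

# doubling the entry rate lowers the mean age of the pool over a decade
_, mean_base = simulate(entry_rate=100)
_, mean_high = simulate(entry_rate=200)
```

This kind of aging chain is the standard system dynamics structure for workforce questions like balancing the age of the PI pool; policies such as shortening training would appear here as a lower `entry_age`.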
The healthy exchange of ideas within an organization leads to faster problem solving, mitigates short- and long-term risk, and opens the possibility of disruptive technological change. We introduce a new tool (GYRUS) for the simulation and optimization of idea propagation within an organization. The tool represents the organizational topology and internal processes, and implements an individual knowledge model to examine idea propagation. The topology represents both the formal and informal networks of idea movement within an organization. The processes include all activities resulting in the exchange or introduction of ideas within the organization. The knowledge model concerns how individuals store and propagate ideas. We apply the tool to a simple organizational topology to understand the propagation characteristics of ideas and the coupling of ideas between entities in the structure.
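The topology-plus-propagation setup described above can be sketched in a few lines (this is not GYRUS itself): individuals are nodes, formal and informal links are edges, and an idea spreads to neighbors with some adoption probability. The network and parameters are invented:

```python
import random

def propagate(edges, seed, steps=10, p_adopt=0.5, rng=None):
    """Spread an idea over an undirected organizational network:
    each step, every idea-holder may pass the idea to each neighbor
    with probability p_adopt. Returns the set of idea-holders."""
    rng = rng or random.Random(0)
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    has_idea = {seed}
    for _ in range(steps):
        new = set()
        for person in has_idea:
            for peer in adj.get(person, ()):
                if peer not in has_idea and rng.random() < p_adopt:
                    new.add(peer)
        has_idea |= new
    return has_idea

# a toy hierarchy plus one informal cross-link (hypothetical names)
org = [("CEO", "A"), ("CEO", "B"), ("A", "A1"), ("A", "A2"),
       ("B", "B1"), ("B", "B2"), ("A2", "B1")]  # last edge is informal
reached = propagate(org, seed="A1")
```

Informal cross-links like the `A2`–`B1` edge are exactly what lets an idea bypass the formal hierarchy, which is the coupling effect the tool is built to examine.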
Sustained competitive advantage is a major issue in the field of management research. A growing number of scholars invoke the Dynamic Capabilities View to explain above-average performance and adaptability of a firm, especially in the face of radical innovations that threaten a firm's survival. Owing to the abstract character of the concept, the nature and impact of dynamic capabilities are still vague and empirical evidence is rare. This paper presents a formal simulation model that builds on previous work on the accumulation of dynamic capabilities to explore the micro foundations of the concept. To generate pseudo-empirical data, a mixed agent-based and system dynamics modeling approach is developed. Judging from preliminary results, further development of the method promises to be fruitful for understanding the micro foundations of dynamic capabilities.
The aim of this paper is to extend a recent "war of attrition" model for counterinsurgency (Kress & Szechtman, 2008) to include the impact on the war of influence operations aimed at popular support and at defections from the insurgency. The model has the following five sectors: (1) Competitive Contagion for Popular Support; (2) Recruitment and Defections; (3) Quality of Intelligence; (4) War of Attrition; and (5) Collateral Damage. Two messaging policies were compared, though the results of such comparisons depend heavily on model parameterization and the formulation of effect functions. Still, a model such as this one can in principle be used to inform policy development by making assumptions transparent and by clarifying causal links. For instance, popular-support messaging can reduce the effectiveness of insurgent fighters and their ability to recruit. Alternatively, defection messaging can help recruit defectors and glean intelligence for targeting, which could limit civilian casualties and reduce insurgent recruitment, thus bringing the war to an earlier close. This effort was completed, in part, for the U.S. Air Force Research Laboratory at Wright-Patterson Air Force Base (Contract No. FA8650-04-D-6405 TO 25 and TO 33).
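The causal links named in the abstract above can be caricatured in a toy attrition system (this is neither the Kress & Szechtman model nor the paper's extension; every parameter and functional form is invented): recruitment scales with popular support, messaging erodes that support, and collateral damage from strikes restores it.

```python
def simulate(steps=500, dt=0.1, messaging=0.03):
    insurgents = 1000.0  # insurgent fighters (illustrative)
    support = 0.5        # popular support for the insurgency, in [0, 1]
    for _ in range(steps):
        recruit = 0.10 * support * insurgents  # support drives recruitment
        attrition = 0.08 * insurgents          # counterinsurgent attrition
        collateral = 0.01 * attrition          # civilian harm from strikes
        # messaging erodes support; collateral damage restores it
        d_support = -messaging * support + 0.0002 * collateral
        insurgents = max(0.0, insurgents + (recruit - attrition) * dt)
        support = min(1.0, max(0.0, support + d_support * dt))
    return insurgents, support

# stronger popular-support messaging shrinks the insurgency faster
i_weak, _ = simulate(messaging=0.01)
i_strong, _ = simulate(messaging=0.05)
```

Even in this caricature the comparison between messaging policies depends entirely on the chosen coefficients, which is the parameterization caveat the abstract raises.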
In this study we develop a system dynamics model of teachers' adoption of an e-learning system. We identify environment variables and teachers' individual characteristics as the two main factors affecting teachers' adoption. Consequently, we integrate the well-known Technology Acceptance Model into our dynamic model. This study also proposes three policies to enhance teachers' adoption. Each policy is analyzed individually, and a policy comparison is also performed.
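Adoption dynamics of the kind described in this abstract are commonly rendered as a Bass-style stock-and-flow; the sketch below (not the authors' model) adds an environment factor that scales peer influence, with all parameters invented:

```python
def simulate(steps=30, p=0.01, q=0.3, support=1.0, teachers=1000.0):
    """Bass-style diffusion: adoption from external influence (p) plus
    peer influence (q), with an environment factor `support` scaling
    the latter. Returns the adopter stock after `steps` periods."""
    adopters = 0.0
    for _ in range(steps):
        potential = teachers - adopters
        adoption = (p + q * support * adopters / teachers) * potential
        adopters += adoption
    return adopters

# a stronger adoption environment accelerates diffusion
high = simulate(support=1.0)
low = simulate(support=0.5)
```

A policy in such a model is simply a change to a parameter over time (e.g. raising `support` through training or incentives), which is how the three proposed policies could be compared against a baseline run.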
This paper aims to support growth management for firms that have no stable growth logic. Based on Schön's reflective management perspective (Schön, 1983), we propose an iterative, system dynamics-based reflective strategy development process to help managers organize and develop a firm's growth logic. Unlike typical system dynamics modeling, which is based on existing dynamic structures, the iterative system dynamics modeling process designed here develops models that evolve with managers' ideal designs toward the implementation of expected growth patterns. Action science research is conducted on a case to illustrate the iterative SD model-based growth management process. How the case under discussion enhanced its understanding of the growth problem it confronted, and how it developed its growth logic to guide the formulation of relevant growth strategies, are described in detail.
The benefits of a strategically balanced product portfolio, as a key driver of long-term business success, are well documented. Nevertheless, many firms have been unable to achieve a balanced product portfolio. An important cause is the failure to develop dynamic capabilities, that is, the capabilities to reconfigure internal and external competences to address dynamic business environments. In times of environmental instability and financial decline, top managers face difficulties in adapting their strategy to changes in market and competitive conditions. Firms can thus become seriously trapped in a reinforcing negative loop, in which the changing environment is counteracted with inadequate strategic actions, which in turn further decrease financial performance. This so-called suppression mechanism serves to explain why so many firms fail at building dynamic capabilities. We draw on system dynamics modeling to build and simulate a model of the causes, consequences, and potential solutions of the suppression mechanism. The model is derived from the literature on dynamic capabilities and, more broadly, strategy and innovation studies. The main contribution of this paper to the literature on dynamic capabilities is the definition and codification of the suppression mechanism.
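The suppression mechanism described above can be rendered as a toy reinforcing loop (not the authors' model; all parameters illustrative): declining performance cuts the investment that builds dynamic capability, weakening adaptation to environmental change and depressing performance further.

```python
def simulate(steps=40, invest_frac=0.1):
    performance = 1.0   # financial performance (index)
    capability = 1.0    # stock of dynamic capability (index)
    env_change = 0.05   # constant drag from a changing environment
    for _ in range(steps):
        # investment in capability building is cut as performance falls
        investment = invest_frac * max(0.0, performance)
        capability += investment - 0.08 * capability   # build vs. decay
        # capability above parity offsets the environmental drag
        performance += 0.1 * (capability - 1.0) - env_change
    return performance

# low investment locks in the reinforcing decline; high investment escapes it
declining = simulate(invest_frac=0.05)
escaping = simulate(invest_frac=0.3)
```

The same loop structure runs in both directions: once capability building outpaces decay and environmental drag, the feedback becomes virtuous rather than suppressive, which is why the timing of the intervention matters.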
Software projects have traditionally been problematic in terms of quality, cost, and time. Researchers and practitioners have focused on agile software development as an alternative to overcome these problems. Agile methods employ iterative development cycles (typically 20 working days), interspersed with user feedback. The key to agile projects is the sense of urgency created by the need to deliver at regular intervals. This paper examines this construct, i.e., schedule pressure. We investigate the relationship between the level of agility (the length of the iterative cycle) and project outcomes. We argue that project outcomes may suffer either from a team being too inactive, e.g., under sequential development or low levels of agility, or from a team being over-active for too long, a situation likely to occur at high levels of agility. We hypothesize that moderate levels of agility are likely to result in the best project outcomes. We test our hypothesis through simulation and find a U-shaped pattern: performance is better with iteration lengths of 50 working days than with the 20-working-day cycles widely used in practice. Our analysis provides both theoretical insights into the dynamics of agile software development and practical suggestions for managing these projects.
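The schedule-pressure trade-off described above can be caricatured in a toy calculation (not the paper's simulation model): in each iteration only the few days before the deadline run at full effort, the rest run at a slack rate, while frequent deadlines inflate the error and rework rate. All numbers are invented:

```python
def daily_progress(iteration_days, slack_effort=0.4, urgent_days=5,
                   error_per_deadline=1.15):
    """Average useful progress per working day for a given iteration
    length, under invented effort and error parameters."""
    urgent = min(urgent_days, iteration_days)
    # full effort near each deadline, slack effort otherwise
    effort = (urgent * 1.0
              + (iteration_days - urgent) * slack_effort) / iteration_days
    # sustained urgency (frequent deadlines) inflates the error rate
    error = min(0.95, error_per_deadline * urgent / iteration_days)
    return effort * (1.0 - error)

def project_duration(iteration_days, work=100.0):
    return work / daily_progress(iteration_days)

# U-shaped duration: a 50-day cycle beats both 20- and 100-day cycles
d20, d50, d100 = (project_duration(L) for L in (20, 50, 100))
```

Short cycles keep urgency high but pay for it in errors; long cycles avoid the errors but spend most days in slack. The minimum of `project_duration` sits at a moderate iteration length, mirroring the U-shaped pattern the paper reports.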