Synchronous parallel system for emulation and discrete event simulation
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S. (Inventor)
1992-01-01
A synchronous parallel system for emulation and discrete event simulation having parallel nodes responds to received messages at each node by generating event objects having individual time stamps, stores only the changes to state variables of the simulation object attributable to the event object, and produces corresponding messages. The system refrains from transmitting the messages and changing the state variables while it determines whether the changes are superseded, and then stores the unchanged state variables in the event object for later restoral to the simulation object if called for. This determination preferably includes sensing the time stamp of each new event object and determining which new event object has the earliest time stamp as the local event horizon, determining the earliest local event horizon of the nodes as the global event horizon, and ignoring the events whose time stamps are less than the global event horizon. Host processing between the system and external terminals enables such a terminal to query, monitor, command or participate with a simulation object during the simulation process.
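The commit rule described in this abstract can be illustrated with a small sketch (names are illustrative, not the patented implementation): each node takes the earliest time stamp among its newly generated events as its local event horizon, the minimum across nodes is the global event horizon, and only events at or below that horizon have their messages released and their state changes made permanent; later events may still be superseded and rolled back.

```
from dataclasses import dataclass, field

@dataclass
class Event:
    timestamp: float
    messages: list = field(default_factory=list)   # held until committed

@dataclass
class Node:
    processed: list                  # events optimistically processed this cycle
    released: list = field(default_factory=list)

    def local_horizon(self):
        # Earliest time stamp among this cycle's newly generated events.
        return min((e.timestamp for e in self.processed), default=float("inf"))

def commit_cycle(nodes):
    """One synchronization cycle: only events at or below the global
    event horizon are committed and their messages transmitted."""
    geh = min(n.local_horizon() for n in nodes)    # global event horizon
    for n in nodes:
        deferred = []
        for e in n.processed:
            if e.timestamp <= geh:
                n.released.extend(e.messages)      # safe: cannot be superseded
            else:
                deferred.append(e)                 # may yet be rolled back
        n.processed = deferred
    return geh

nodes = [Node([Event(1.0), Event(4.0)]), Node([Event(2.5)])]
print(commit_cycle(nodes))   # -> 1.0
```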
Synchronous Parallel System for Emulation and Discrete Event Simulation
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S. (Inventor)
2001-01-01
A synchronous parallel system for emulation and discrete event simulation having parallel nodes responds to received messages at each node by generating event objects having individual time stamps, stores only the changes to the state variables of the simulation object attributable to the event object and produces corresponding messages. The system refrains from transmitting the messages and changing the state variables while it determines whether the changes are superseded, and then stores the unchanged state variables in the event object for later restoral to the simulation object if called for. This determination preferably includes sensing the time stamp of each new event object and determining which new event object has the earliest time stamp as the local event horizon, determining the earliest local event horizon of the nodes as the global event horizon, and ignoring events whose time stamps are less than the global event horizon. Host processing between the system and external terminals enables such a terminal to query, monitor, command or participate with a simulation object during the simulation process.
An extension of the OpenModelica compiler for using Modelica models in a discrete event simulation
Nutaro, James
2014-11-03
In this article, a new back-end and run-time system is described for the OpenModelica compiler. This new back-end transforms a Modelica model into a module for the adevs discrete event simulation package, thereby extending adevs to encompass complex, hybrid dynamical systems. The new run-time system built within the adevs simulation package supports models with state-events and time-events that comprise high-index differential-algebraic systems. Finally, although the procedure for effecting this transformation is based on adevs and the Discrete Event System Specification, it can be adapted to any discrete event simulation package.
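The Discrete Event System Specification (DEVS) referenced here defines an atomic model by internal and external transition functions, an output function, and a time advance. The following is a minimal sketch of that interface with a toy hybrid component of the kind a translated Modelica model would produce; adevs itself is a C++ library, so these names are illustrative, not the adevs API:

```
class AtomicModel:
    """Skeleton of a DEVS atomic model."""
    def time_advance(self):                      # time until next internal event
        raise NotImplementedError
    def output(self):                            # emitted before internal transition
        raise NotImplementedError
    def delta_internal(self):                    # state change at a time event
        raise NotImplementedError
    def delta_external(self, elapsed, inputs):   # state change on input arrival
        raise NotImplementedError

class Integrator(AtomicModel):
    """Toy hybrid component: integrates dx/dt = rate and fires a state
    event when x reaches a threshold, as a Modelica state-event would."""
    def __init__(self, x0, rate, threshold):
        self.x, self.rate, self.threshold = x0, rate, threshold
    def time_advance(self):
        if self.rate <= 0:
            return float("inf")                  # no crossing ahead
        return max((self.threshold - self.x) / self.rate, 0.0)
    def output(self):
        return ("threshold_crossed", self.threshold)
    def delta_internal(self):
        self.x = self.threshold                  # event fires exactly at crossing
        self.rate = 0.0
    def delta_external(self, elapsed, inputs):
        self.x += self.rate * elapsed            # advance state, then apply input
        self.rate = inputs.get("rate", self.rate)
```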
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S. (Inventor)
1998-01-01
The present invention is embodied in a method of performing object-oriented simulation and a system having inter-connected processor nodes operating in parallel to simulate mutual interactions of a set of discrete simulation objects distributed among the nodes as a sequence of discrete events changing state variables of respective simulation objects so as to generate new event-defining messages addressed to respective ones of the nodes. The object-oriented simulation is performed at each one of the nodes by assigning passive self-contained simulation objects to each one of the nodes, responding to messages received at one node by generating corresponding active event objects having user-defined inherent capabilities and individual time stamps and corresponding to respective events affecting one of the passive self-contained simulation objects of the one node, restricting the respective passive self-contained simulation objects to only providing and receiving information from the respective active event objects, requesting information and changing variables within a passive self-contained simulation object by the active event object, and producing corresponding messages specifying events resulting therefrom by the active event objects.
Event-driven simulation in SELMON: An overview of EDSE
NASA Technical Reports Server (NTRS)
Rouquette, Nicolas F.; Chien, Steve A.; Charest, Leonard, Jr.
1992-01-01
EDSE (event-driven simulation engine), a model-based event-driven simulator implemented for SELMON, a tool for sensor selection and anomaly detection in real-time monitoring, is described. The simulator is used in conjunction with a causal model to predict future behavior of the model from observed data. The behavior of the causal model is interpreted as equivalent to the behavior of the physical system being modeled. An overview of the functionality of the simulator and the model-based event-driven simulation paradigm on which it is based is provided. Included are high-level descriptions of the following key properties: event consumption and event creation, iterative simulation, and synchronization and filtering of monitoring data from the physical system. Finally, how EDSE stands with respect to the relevant open issues of discrete-event and model-based simulation is discussed.
An Empirical Study of Combining Communicating Processes in a Parallel Discrete Event Simulation
1990-12-01
...dynamics of the cost/performance criteria which typically make up computer resource acquisition decisions, offering a broad range of tradeoffs in the way... It is the hypothesis of this study that the way communicating processes are combined into logical processes has a significant impact on simulation performance... The excerpt also preserves a fragment of the logical-process event loop: while the next-event queue is not empty, pop the next event, set the logical-process clock to its time stamp, simulate it, and enqueue any new events it creates.
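The loop fragment preserved in the excerpt is the standard sequential discrete event simulation kernel; a cleaned-up sketch (illustrative names):

```
import heapq

def simulate(initial_events, handler, end_time=float("inf")):
    """Classic next-event loop: pop the earliest event, advance the
    clock to its time stamp, and enqueue any events it spawns."""
    queue = list(initial_events)               # (time, event) pairs
    heapq.heapify(queue)
    clock = 0.0
    while queue:
        time, event = heapq.heappop(queue)     # next-event = pop(queue)
        if time > end_time:
            break
        clock = time                           # lp-clock = next-event.time
        for item in handler(clock, event):     # simulate; enqueue new events
            heapq.heappush(queue, item)
    return clock

# e.g. an arrival that schedules a departure one time unit later:
print(simulate([(0.0, "arrival")],
               lambda t, e: [(t + 1.0, "departure")] if e == "arrival" else []))
```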
Assessment of Critical Events Corridors through Multivariate Cascading Outages Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Samaan, Nader A.; Diao, Ruisheng
2011-10-17
Massive blackouts of electrical power systems in North America over the past decade have focused increasing attention upon ways to identify and simulate network events that may potentially lead to widespread network collapse. This paper summarizes a method to simulate power-system vulnerability to cascading failures in response to a supplied set of initiating events, synonymously termed extreme events. The implemented simulation method is currently confined to simulating the steady-state power-system response to a set of extreme events. The outlined method of simulation is meant to augment, and provide new insight into, bulk power transmission network planning that at present remains mainly confined to maintaining power system security for single and double component outages under a number of projected future network operating conditions. Although one of the aims of this paper is to demonstrate the feasibility of simulating network vulnerability to cascading outages, a more important goal has been to determine vulnerable parts of the network that may potentially be strengthened in practice so as to mitigate system susceptibility to cascading failures. This paper proposes to demonstrate a systematic approach to analyze extreme events and identify vulnerable system elements that may be contributing to cascading outages. The hypothesis of critical events corridors is proposed to represent repeating sequential outages that can occur in the system for multiple initiating events. The new concept helps to identify system reinforcements that planners could engineer in order to 'break' the critical events sequences and therefore lessen the likelihood of cascading outages. This hypothesis has been successfully validated with a California power system model.
Simulation of a master-slave event set processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comfort, J.C.
1984-03-01
Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer, in which all non-event set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed, and was run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant parameters of the system (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulated run times for a one-processor system was used to assist in the validation of the simulation. 7 references.
Conceptual Modeling of a Quantum Key Distribution Simulation Framework Using the Discrete Event System Specification
Morris, Jeffrey D.
2014-09-18
Dissertation presented to the faculty of the Department of Systems... The snippet also cites work on full-scale experimental verification toward ground-to-satellite quantum key distribution.
Teaching sexual history-taking skills using the Sexual Events Classification System.
Fidler, Donald C; Petri, Justin Daniel; Chapman, Mark
2010-01-01
The authors review the literature about educational programs for teaching sexual history-taking skills and describe novel techniques for teaching these skills. Psychiatric residents enrolled in a brief sexual history-taking course that included instruction on the Sexual Events Classification System, feedback on residents' video-recorded interviews with simulated patients, discussion of videos of simulated bad interviews, and a competency scoring form used to score a video of a simulated interview. After the course, residents completed an anonymous survey to assess the usefulness of the experience; most felt more comfortable taking sexual histories. They described the Sexual Events Classification System and simulated interviews as practical methods for teaching sexual history-taking skills. The Sexual Events Classification System and simulated patient experiences may serve as a practical model for teaching sexual history-taking skills to general psychiatric residents.
Synchronization Of Parallel Discrete Event Simulations
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S.
1992-01-01
Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.
Program For Parallel Discrete-Event Simulation
NASA Technical Reports Server (NTRS)
Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.
1991-01-01
User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.
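The Time Warp mechanism implemented by TWOS (optimistic execution, state saving, rollback, and anti-messages) can be caricatured in a few lines; this is a simplified illustration of the mechanism, not TWOS code, and it assumes messages know how to produce their own anti-message:

```
import copy

class TimeWarpLP:
    """Logical process that executes optimistically and rolls back
    when a straggler (out-of-order) message arrives."""
    def __init__(self, state):
        self.state = state
        self.lvt = 0.0                     # local virtual time
        self.history = []                  # (time, saved state, msgs sent)

    def process(self, t, msg, send):
        if t < self.lvt:                   # straggler: roll back first
            self.rollback(t, send)
        self.history.append((t, copy.deepcopy(self.state), []))
        self.lvt = t
        out = self.handle(msg)             # user-defined event logic
        self.history[-1][2].extend(out)
        for m in out:
            send(m)

    def rollback(self, t, send):
        while self.history and self.history[-1][0] >= t:
            _, saved, sent = self.history.pop()
            self.state = saved             # restore pre-event snapshot
            for m in sent:
                send(m.anti())             # anti-message cancels m
        self.lvt = self.history[-1][0] if self.history else 0.0

    def handle(self, msg):
        return []                          # override in a concrete model
```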
Event Classification and Identification Based on the Characteristic Ellipsoid of Phasor Measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Jian; Diao, Ruisheng; Makarov, Yuri V.
2011-09-23
In this paper, a method to classify and identify power system events based on the characteristic ellipsoid of phasor measurement is presented. The decision tree technique is used to perform the event classification and identification. Event types, event locations and clearance times are identified by decision trees based on the indices of the characteristic ellipsoid. A sufficiently large number of transient events were simulated on the New England 10-machine 39-bus system based on different system configurations. Transient simulations taking into account different event types, clearance times and various locations are conducted to generate simulated phasor measurements. Bus voltage magnitudes and recorded reactive and active power flows are used to build the characteristic ellipsoid. The volume, eccentricity, center and projection of the longest axis in the parameter space coordinates of the characteristic ellipsoids are used to classify and identify events. Results demonstrate that the characteristic ellipsoid and the decision tree are capable of detecting the event type, location, and clearance time with very high accuracy.
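A minimal sketch of the feature pipeline the abstract describes, using a covariance-based characteristic ellipsoid and scikit-learn's decision tree; the paper's exact index definitions may differ:

```
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ellipsoid_features(window):
    """window: samples x channels matrix of PMU quantities
    (voltage magnitudes, active/reactive flows)."""
    cov = np.cov(window, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    # Volume of the covariance ellipsoid is proportional to sqrt(det(cov)).
    volume = np.sqrt(np.prod(np.clip(evals, 1e-12, None)))
    eccentricity = np.sqrt(1 - evals.min() / evals.max())
    center = window.mean(axis=0)
    longest_axis = evecs[:, np.argmax(evals)]   # projection direction
    return np.concatenate(([volume, eccentricity], center, longest_axis))

# X: one feature vector per simulated transient event; y: labels such as
# event type / location / clearance time from the 39-bus study.
# clf = DecisionTreeClassifier(max_depth=8).fit(X, y)
```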
NASA Technical Reports Server (NTRS)
Mukhopadhyay, A. K.
1978-01-01
The Data Storage Subsystem Simulator (DSSSIM), which simulates (in ground software) the occurrence of discrete events in the Voyager mission, is described. Functional requirements for Data Storage Subsystem (DSS) simulation are discussed, and discrete event simulation/DSSSIM processing is covered. Four types of outputs associated with a typical DSSSIM run are presented, and DSSSIM limitations and constraints are outlined.
DISCRETE EVENT SIMULATION OF OPTICAL SWITCH MATRIX PERFORMANCE IN COMPUTER NETWORKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imam, Neena; Poole, Stephen W
2013-01-01
In this paper, we present an application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly. This situation may arise if the system under study does not exist yet or if the cost of studying the system directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly larger and more complex, sophisticated DES tool chains have become available for both commercial and academic research. Some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST for the purpose of simulating the multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight into device performance and aids in topology and system optimization.
Ghany, Ahmad; Vassanji, Karim; Kuziemsky, Craig; Keshavjee, Karim
2013-01-01
Electronic prescribing (e-prescribing) is expected to bring many benefits to Canadian healthcare, such as a reduction in errors and adverse drug reactions. As there currently is no functioning e-prescribing system in Canada that is completely electronic, we are unable to evaluate the performance of a live system. An alternative approach is to use simulation modeling for evaluation. We developed two discrete-event simulation models, one of the current handwritten prescribing system and one of a proposed e-prescribing system, to compare the performance of these two systems. We were able to compare the number of processes in each model, workflow efficiency, and the distribution of patients or prescriptions. Although we were able to compare these models to each other, using discrete-event simulation software was challenging. We were limited in the number of variables we could measure. We discovered non-linear processes and feedback loops in both models that could not be adequately represented using discrete-event simulation software. Finally, interactions between entities in both models could not be modeled using this type of software. We have come to the conclusion that a more appropriate approach to modeling both the handwritten and electronic prescribing systems would be to use a complex adaptive systems approach using agent-based modeling or systems-based modeling.
Wu, Sheng; Li, Hong; Petzold, Linda R.
2015-01-01
The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy. PMID:26609185
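For reference, the exact direct-method SSA that such spatial methods build on is shown below; the method in this paper replaces constant propensities with time-dependent ones so that individual diffusion events need not be simulated. A minimal sketch of the baseline:

```
import math, random

def ssa_direct(x, reactions, t_end):
    """x: dict of species counts; reactions: list of
    (propensity_fn, update_fn) pairs. Returns final time and state."""
    t = 0.0
    while t < t_end:
        props = [a(x) for a, _ in reactions]
        a0 = sum(props)
        if a0 == 0:
            break                                   # nothing left to fire
        t += -math.log(1.0 - random.random()) / a0  # exponential waiting time
        r = random.random() * a0                    # pick reaction by propensity
        for (a, update), p in zip(reactions, props):
            r -= p
            if r <= 0:
                update(x)
                break
    return t, x

# Toy example: A -> B with rate constant 0.1.
state = {"A": 100, "B": 0}
def convert(s):
    s["A"] -= 1
    s["B"] += 1
print(ssa_direct(state, [(lambda s: 0.1 * s["A"], convert)], t_end=50.0))
```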
Parallel discrete event simulation: A shared memory approach
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1987-01-01
With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
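The conservative rule at the heart of Chandy-Misra synchronization: a logical process may only consume events up to the minimum time stamp guaranteed on its input channels, with null messages carrying lookahead to avoid deadlock. An illustrative sketch, not the paper's code:

```
import heapq, itertools

class ConservativeLP:
    _seq = itertools.count()                 # tie-breaker for the event heap

    def __init__(self, inputs, lookahead):
        self.channel_clock = {ch: 0.0 for ch in inputs}  # last time per input
        self.lookahead = lookahead
        self.pending = []

    def receive(self, channel, time, msg=None):
        self.channel_clock[channel] = time   # null messages advance clocks too
        if msg is not None:
            heapq.heappush(self.pending, (time, next(self._seq), msg))

    def safe_time(self):
        # Events up to this bound can be processed without causality risk.
        return min(self.channel_clock.values(), default=float("inf"))

    def step(self, process, send_null):
        while self.pending and self.pending[0][0] <= self.safe_time():
            t, _, msg = heapq.heappop(self.pending)
            process(t, msg)
        # Promise neighbours nothing earlier than this will be sent:
        send_null(self.safe_time() + self.lookahead)
```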
On-Die Sensors for Transient Events
NASA Astrophysics Data System (ADS)
Suchak, Mihir Vimal
Failures caused by transient electromagnetic events like Electrostatic Discharge (ESD) are a major concern for embedded systems. The component that fails is often an integrated circuit (IC). Determining which IC is affected in a multi-device system is a challenging task. Debugging errors often requires sophisticated lab setups in which various parts of the system, which might not be easily accessible, are intentionally disturbed and probed. Opening the system and adding probes may change its response to the transient event, which further compounds the problem. On-die transient event sensors were developed that require relatively little area on die, making them inexpensive; they consume negligible static current and do not interfere with normal operation of the IC. When a transient event affects the IC, these circuits can be used to determine the pin involved and the level of the event, thus allowing the user to debug system-level transient events without modifying the system. The circuit and detection scheme design has been completed and verified in simulations in the Cadence Virtuoso environment. Simulations accounted for the impact of the ESD protection circuits and parasitics from the I/O pin, package, and I/O ring, and included a model of an ESD gun to test the circuit's response to an ESD pulse as specified in IEC 61000-4-2. Multiple detection schemes are proposed. The final detection scheme consists of an event detector and a level sensor. The event detector latches on the presence of an event at a pad, to determine on which pin an event occurred. The level sensor generates a current proportional to the level of the event. This current is converted to a voltage and digitized at the A/D converter to be read by the microprocessor. The detection scheme shows good performance in simulations when checked against process variations and different kinds of events.
A simulation model for probabilistic analysis of Space Shuttle abort modes
NASA Technical Reports Server (NTRS)
Hage, R. T.
1993-01-01
A simulation model which was developed to provide a probabilistic analysis tool to study the various space transportation system abort mode situations is presented. The simulation model is based on Monte Carlo simulation of an event-tree diagram which accounts for events during the space transportation system's ascent and its abort modes. The simulation model considers just the propulsion elements of the shuttle system (i.e., external tank, main engines, and solid boosters). The model was developed to provide a better understanding of the probability of occurrence and successful completion of abort modes during the vehicle's ascent. The results of the simulation runs discussed are for demonstration purposes only; they are not official NASA probability estimates.
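Monte Carlo over an event tree amounts to repeatedly walking the tree with sampled branch outcomes and tallying the leaves. A toy sketch with made-up branch probabilities (like the paper's results, for demonstration only, not NASA estimates):

```
import random

# Hypothetical branch probabilities for one ascent phase (illustrative only).
P_ENGINE_OUT = 0.01
P_ABORT_OK = {"RTLS": 0.90, "TAL": 0.95, "ATO": 0.99}

def one_ascent():
    if random.random() >= P_ENGINE_OUT:
        return "nominal"
    mode = random.choice(list(P_ABORT_OK))    # abort mode reached
    return mode if random.random() < P_ABORT_OK[mode] else "loss"

def estimate(n=100_000):
    counts = {}
    for _ in range(n):
        outcome = one_ascent()
        counts[outcome] = counts.get(outcome, 0) + 1
    return {k: v / n for k, v in counts.items()}

print(estimate())
```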
Koh, Wonryull; Blackwell, Kim T
2011-04-21
Stochastic simulation of reaction-diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives a most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction-diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling of only the net or observed diffusion events from higher to lower concentration gradients rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed for speeding up nonspatial or homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies.
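The first strategy, sampling only the net diffusion down concentration gradients, can be caricatured for two neighbouring subvolumes as follows; this is a deliberate simplification of the paper's method, with an assumed Poisson model for the net transfer:

```
import numpy as np

def net_diffusion_step(n_i, n_j, d, tau, rng=np.random.default_rng()):
    """Sample only the net transfer between two neighbouring subvolumes
    over a leap of length tau; transfers go down the population gradient."""
    diff = n_i - n_j
    if diff == 0:
        return n_i, n_j                       # no gradient, no net transfer
    moved = rng.poisson(d * abs(diff) * tau)  # net events, not all events
    moved = min(moved, abs(diff))             # keep populations non-negative
    if diff > 0:
        return n_i - moved, n_j + moved
    return n_i + moved, n_j - moved
```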
SIGMA--A Graphical Approach to Teaching Simulation.
ERIC Educational Resources Information Center
Schruben, Lee W.
1992-01-01
SIGMA (Simulation Graphical Modeling and Analysis) is a computer graphics environment for building, testing, and experimenting with discrete event simulation models on personal computers. It uses symbolic representations (computer animation) to depict the logic of large, complex discrete event systems for easier understanding and has proven itself…
NASA Technical Reports Server (NTRS)
Dubos, Gregory F.; Cornford, Steven
2012-01-01
While the ability to model the state of a space system over time is essential during spacecraft operations, the use of time-based simulations remains rare in preliminary design. The absence of the time dimension in most traditional early design tools can however become a hurdle when designing complex systems whose development and operations can be disrupted by various events, such as delays or failures. As the value delivered by a space system is highly affected by such events, exploring the trade space for designs that yield the maximum value calls for the explicit modeling of time. This paper discusses the use of discrete-event models to simulate spacecraft development schedule as well as operational scenarios and on-orbit resources in the presence of uncertainty. It illustrates how such simulations can be utilized to support trade studies, through the example of a tool developed for DARPA's F6 program to assist the design of "fractionated spacecraft".
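A toy illustration of the kind of time-explicit trade study described: Monte Carlo over a development schedule with uncertain task durations and an on-orbit failure date, scoring delivered value. All numbers and the value function are hypothetical:

```
import random

def mission_value(n_runs=10_000):
    total = 0.0
    for _ in range(n_runs):
        # Development: two serial tasks with uncertain durations (months).
        launch = random.triangular(10, 20, 14) + random.triangular(6, 18, 9)
        # Operations: exponential time to on-orbit failure, 60-month design life.
        life = min(random.expovariate(1 / 48), 60)
        # Value accrues during operations, discounted for a late launch.
        value = life * max(0.0, 1 - 0.01 * max(0, launch - 24))
        total += value
    return total / n_runs

print(mission_value())
```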
Parallel Stochastic discrete event simulation of calcium dynamics in neuron.
Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W
2017-09-26
The intra-cellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g. spines) are so small and calcium concentrations so low that one extra molecule diffusing in by chance can make a nontrivial difference in its concentration (percentage-wise). These rare events can affect dynamics discretely in such a way that they cannot be evaluated by a deterministic simulation. Stochastic models of such a system provide a more detailed understanding of these systems than existing deterministic models because they capture their behavior at a molecular level. Our research focuses on the development of a high performance parallel discrete event simulation environment, Neuron Time Warp (NTW), which is intended for use in the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator which is widely used within the neuroscience community. We simulate two models, a calcium buffer and a calcium wave model. The calcium buffer model is employed in order to verify the correctness and performance of NTW by comparing it to a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.
Evaluation of the Navy's Sea/Shore Flow Policy
2016-06-01
...CNA developed an independent Discrete-Event Simulation model to evaluate and assess the effect of alternative sea/shore flow policies... a more steady manning level, but the variability remains, even if the system is optimized. In building a Discrete-Event Simulation model, we discovered key factors that should be included in the... steady-state model. In FY 2014, CNA developed a Discrete-Event Simulation model to evaluate the impact of sea/shore flow policy (the DES-SSF model) and compared the results with the SSFM...
An investigation into pilot and system response to critical in-flight events, volume 2
NASA Technical Reports Server (NTRS)
Rockwell, T. H.; Giffin, W. C.
1981-01-01
Critical in-flight events are studied using mission simulation and written tests of pilot responses. Materials and procedures used in knowledge tests, written tests, and mission simulations are included.
Implementing system simulation of C3 systems using autonomous objects
NASA Technical Reports Server (NTRS)
Rogers, Ralph V.
1987-01-01
The basis of all conflict recognition in simulation is a common frame of reference. Synchronous discrete-event simulation relies on fixed points in time as the basic frame of reference. Asynchronous discrete-event simulation relies on fixed points in the model space as the basic frame of reference. Neither approach provides sufficient support for autonomous objects. The use of a spatial template as a frame of reference is proposed to address these insufficiencies. The concept of a spatial template is defined and an implementation approach offered. Discussed are the uses of this approach to analyze the integration of sensor data associated with Command, Control, and Communication systems.
DynamO: a free O(N) general event-driven molecular dynamics simulator.
Bannerman, M N; Sargant, R; Lue, L
2011-11-30
Molecular dynamics algorithms for systems of particles interacting through discrete or "hard" potentials are fundamentally different to the methods for continuous or "soft" potential systems. Although many software packages have been developed for continuous potential systems, software for discrete potential systems based on event-driven algorithms is relatively scarce and specialized. We present DynamO, a general event-driven simulation package, which displays the optimal O(N) asymptotic scaling of the computational cost with the number of particles N, rather than the O(N log N) scaling found in most standard algorithms. DynamO provides reference implementations of the best available event-driven algorithms. These techniques allow the rapid simulation of both complex and large (>10^6 particles) systems for long times. The performance of the program is benchmarked for elastic hard sphere systems, homogeneous cooling and sheared inelastic hard spheres, and equilibrium Lennard-Jones fluids. This software and its documentation are distributed under the GNU General Public license and can be freely downloaded from http://marcusbannerman.co.uk/dynamo. Copyright © 2011 Wiley Periodicals, Inc.
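Event-driven molecular dynamics advances from collision to collision; the core computation is the pair collision time, the smallest positive root of |r + v t| = sigma for relative position r and relative velocity v. A minimal sketch:

```
import math

def collision_time(r, v, sigma):
    """Time until two hard spheres at relative position r with relative
    velocity v touch (centre distance sigma), or None if they never do."""
    b = sum(ri * vi for ri, vi in zip(r, v))         # r . v
    if b >= 0:
        return None                                   # moving apart
    v2 = sum(vi * vi for vi in v)
    r2 = sum(ri * ri for ri in r)
    disc = b * b - v2 * (r2 - sigma * sigma)
    if disc < 0:
        return None                                   # glancing miss
    return (-b - math.sqrt(disc)) / v2                # earliest contact

print(collision_time(r=(2.0, 0.0, 0.0), v=(-1.0, 0.0, 0.0), sigma=1.0))  # 1.0
```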
Expert systems and simulation models; Proceedings of the Seminar, Tucson, AZ, November 18, 19, 1985
NASA Technical Reports Server (NTRS)
1986-01-01
The seminar presents papers on modeling and simulation methodology, artificial intelligence and expert systems, environments for simulation/expert system development, and methodology for simulation/expert system development. Particular attention is given to simulation modeling concepts and their representation, modular hierarchical model specification, knowledge representation, and rule-based diagnostic expert system development. Other topics include the combination of symbolic and discrete event simulation, real time inferencing, and the management of large knowledge-based simulation projects.
An Overview of Importance Splitting for Rare Event Simulation
ERIC Educational Resources Information Center
Morio, Jerome; Pastel, Rudy; Le Gland, Francois
2010-01-01
Monte Carlo simulations are a classical tool to analyse physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of Monte Carlo. Importance sampling has some drawbacks when the problem dimensionality is high or when the optimal importance sampling density is complex to obtain. In this…
Design and Evaluation of Simulations for the Development of Complex Decision-Making Skills.
ERIC Educational Resources Information Center
Hartley, Roger; Varley, Glen
2002-01-01
Command and Control Training Using Simulation (CACTUS) is a computer digital mapping system used by police to manage large-scale public events. Audio and video records of adaptive training scenarios using CACTUS show how the simulation develops decision-making skills for strategic and tactical event management. (SK)
2010-01-01
...gross vehicle response, and the effects of blast mitigation material, restraint system, and seat design on the loads developed on the members of an occupant. A Blast Event Simulation sysTem (BEST) has been developed for facilitating the easy use of the LS-DYNA solvers for conducting such analyses... In this paper the Eulerian solver of LS-DYNA is employed for simulating the soil-explosive-air interaction in blast events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilke, Jeremiah J; Kenny, Joseph P.
2015-02-01
Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
Enabling parallel simulation of large-scale HPC network systems
Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...
2016-04-07
Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining such rare-event probabilities using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
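A minimal sketch of the subset-simulation estimate on a toy scalar problem: the rare-event probability P(X > x_rare) is built as a product of conditional probabilities, each level populated with a modified Metropolis chain. This is illustrative of the general method, not the paper's biochemical setting:

```
import math, random

def phi(x):                          # unnormalized standard normal density
    return math.exp(-0.5 * x * x)

def subset_simulation(x_rare, n=2000, p0=0.1, step=1.0):
    """Estimate P(X > x_rare) for X ~ N(0,1) as a product of
    conditional probabilities (toy version of subset simulation)."""
    samples = sorted((random.gauss(0, 1) for _ in range(n)), reverse=True)
    prob = 1.0
    while True:
        k = int(p0 * n)
        level = samples[k]                     # next intermediate threshold
        if level >= x_rare:                    # final level reached
            return prob * sum(x > x_rare for x in samples) / n
        prob *= p0
        seeds = samples[:k]                    # all lie above the new level
        # Modified Metropolis: sample N(0,1) conditioned on X > level.
        samples = []
        while len(samples) < n:
            x = random.choice(seeds)
            cand = x + random.uniform(-step, step)
            if random.random() < phi(cand) / phi(x) and cand > level:
                x = cand
            samples.append(x)
        samples.sort(reverse=True)

print(subset_simulation(4.0))        # exact value is about 3.2e-5
```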
NASA Technical Reports Server (NTRS)
Wood, Charles C.
1991-01-01
The following topics are presented in tabular form: (1) simulation capability assessments (no propulsion system test); (2) advanced vehicle simulation capability assessment; (3) systems tests identified events; (4) main propulsion test article (MPTA) testing evaluation; (5) Saturn 5, 1B, and 1 testing evaluation. Special vehicle simulation issues that are propulsion-related are briefly addressed.
NASA Astrophysics Data System (ADS)
Wehner, Michael; Pall, Pardeep; Zarzycki, Colin; Stone, Daithi
2016-04-01
Probabilistic extreme event attribution is especially difficult for weather events that are caused by extremely rare large-scale meteorological patterns. Traditional modeling techniques have involved using ensembles of climate models, either fully coupled or with prescribed ocean and sea ice. Ensemble sizes for the latter case range from several hundred to tens of thousands. However, even if the simulations are constrained by the observed ocean state, the requisite large-scale meteorological pattern may not occur frequently enough, or even at all, in free-running climate model simulations. We present a method to ensure that simulated events similar to the observed event are modeled with enough fidelity that robust statistics can be determined given the large-scale meteorological conditions. By initializing suitably constrained short-term ensemble hindcasts of both the actual weather system and a counterfactual weather system where the human interference in the climate system is removed, the human contribution to the magnitude of the event can be determined. However, the change (if any) in the probability of an event of the observed magnitude is conditional not only on the state of the ocean/sea ice system but also on the prescribed initial conditions determined by the causal large-scale meteorological pattern. We will discuss the implications of this technique through two examples: the 2013 Colorado flood and the 2014 Typhoon Haiyan.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Basham, Bryan D.
1989-01-01
CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
Novel high-fidelity realistic explosion damage simulation for urban environments
NASA Astrophysics Data System (ADS)
Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya
2010-04-01
Realistic building damage simulation has a significant impact on modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these simulation systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and their surrounding entities. However, none of the existing building damage simulation systems faithfully realizes the level of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity, and runtime-efficient explosion simulation system to realistically simulate destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also takes account of rubble pile formation and applies a generic and scalable multi-component based object representation to describe scene entities, and a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system has the capability to realistically simulate rubble generation, rubble flyout and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles and pedestrians in clusters of sequential and parallel damage events.
Scott, J; Botsis, T; Ball, R
2014-01-01
Spontaneous Reporting Systems [SRS] are critical tools in the post-licensure evaluation of medical product safety. Regulatory authorities use a variety of data mining techniques to detect potential safety signals in SRS databases. Assessing the performance of such signal detection procedures requires simulated SRS databases, but simulation strategies proposed to date each have limitations. We sought to develop a novel SRS simulation strategy based on plausible mechanisms for the growth of databases over time. We developed a simulation strategy based on the network principle of preferential attachment. We demonstrated how this strategy can be used to create simulations based on specific databases of interest, and provided an example of using such simulations to compare signal detection thresholds for a popular data mining algorithm. The preferential attachment simulations were generally structurally similar to our targeted SRS database, although they had fewer nodes of very high degree. The approach was able to generate signal-free SRS simulations, as well as mimicking specific known true signals. Explorations of different reporting thresholds for the FDA Vaccine Adverse Event Reporting System suggested that using proportional reporting ratio [PRR] > 3.0 may yield better signal detection operating characteristics than the more commonly used PRR > 2.0 threshold. The network analytic approach to SRS simulation based on the principle of preferential attachment provides an attractive framework for exploring the performance of safety signal detection algorithms. This approach is potentially more principled and versatile than existing simulation approaches. The utility of network-based SRS simulations needs to be further explored by evaluating other types of simulated signals with a broader range of data mining approaches, and comparing network-based simulations with other simulation strategies where applicable.
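For concreteness, the PRR statistic referred to above is computed from a 2x2 contingency table of reports; a minimal sketch with made-up counts:

```
def prr(a, b, c, d):
    """Proportional reporting ratio.
    a: reports with the drug of interest and the event of interest
    b: reports with the drug of interest and other events
    c: reports with other drugs and the event of interest
    d: reports with other drugs and other events"""
    return (a / (a + b)) / (c / (c + d))

# Signal under the stricter threshold explored in the paper:
print(prr(30, 970, 400, 98600) > 3.0)   # -> True (PRR is about 7.4)
```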
Using a simulation assistant in modeling manufacturing systems
NASA Technical Reports Server (NTRS)
Schroer, Bernard J.; Tseng, Fan T.; Zhang, S. X.; Wolfsberger, John W.
1988-01-01
Numerous simulation languages exist for modeling discrete event processes, and many have now been ported to microcomputers. Graphics and animation capabilities were added to many of these languages to assist users in building models and evaluating the simulation results. With all these languages and added features, the user is still plagued with learning the simulation language. Furthermore, the time to construct and then validate the simulation model is always greater than originally anticipated. One approach to minimizing the time requirement is to use pre-defined macros that describe various common processes or operations in a system. The development of a simulation assistant for modeling discrete event manufacturing processes is presented. A simulation assistant is defined as an interactive intelligent software tool that assists the modeler in writing a simulation program by translating the modeler's symbolic description of the problem and then automatically generating the corresponding simulation code. The discussion emphasizes an overview of the simulation assistant, its elements, and the five manufacturing simulation generators. A typical manufacturing system is modeled using the simulation assistant, and the advantages and disadvantages are discussed.
High-speed extended-term time-domain simulation for online cascading analysis of power system
NASA Astrophysics Data System (ADS)
Fu, Chuan
A high-speed extended-term (HSET) time domain simulator (TDS), intended to become part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) ability to simulate both fast and slow dynamics for 1-3 hours in advance, (iii) inclusion of rigorous protection-system modeling, (iv) intelligence for corrective action identification, storage, and fast retrieval, and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the differential-algebraic equations describing the dynamics of a power system, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses higher-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable-step-size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time domain simulation (HSET-TDS) for online purposes, this thesis presents principles for designing numerical solvers of differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (direct solution method), integration methods (HH4), nonlinear solvers (very dishonest Newton), and linear solvers (SuperLU). We have implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the PJM 13029-bus system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time-domain simulation software for supercomputers. The stiffness-decoupling method combines the advantages of implicit methods (A-stability) and explicit methods (less computation). With the new stiffness detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task via scale, with the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task via the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events partitions the whole simulation via the time axis through a simulated sequence of cascading events. Of the strategies proposed, the strategy of partitioning cascading events is recommended, since the subtasks for each processor are totally independent and therefore require minimum communication time.
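The "very dishonest Newton" tactic named in the solver stack amounts to reusing one Jacobian factorization across Newton iterations (and often across time steps), refactorizing only periodically or when convergence degrades. A schematic sketch, not the thesis implementation:

```
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def vdhn_solve(F, J, x, tol=1e-8, max_iter=50, refresh_every=5):
    """Newton iteration that keeps a stale LU-factored Jacobian."""
    lu = lu_factor(J(x))                  # factor once, reuse below
    for k in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            return x
        if k and k % refresh_every == 0:
            lu = lu_factor(J(x))          # 'honest' refresh when stale
        x = x - lu_solve(lu, r)
    raise RuntimeError("VDHN did not converge; refactorize and retry")

# Toy test: F(x) = x^2 - [4, 9] componentwise.
F = lambda x: x * x - np.array([4.0, 9.0])
J = lambda x: np.diag(2 * x)
print(vdhn_solve(F, J, np.array([1.5, 2.5])))   # converges to [2. 3.]
```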
Valli, Katja; Revonsuo, Antti; Pälkäs, Outi; Ismail, Kamaran Hassan; Ali, Karzan Jalal; Punamäki, Raija-Leena
2005-03-01
The threat simulation theory of dreaming (TST) states that dream consciousness is essentially an ancient biological defence mechanism, evolutionarily selected for its capacity to repeatedly simulate threatening events. Threat simulation during dreaming rehearses the cognitive mechanisms required for efficient threat perception and threat avoidance, leading to increased probability of reproductive success during human evolution. One hypothesis drawn from TST is that real threatening events encountered by the individual during wakefulness should lead to an increased activation of the system, a threat simulation response, and therefore, to an increased frequency and severity of threatening events in dreams. Consequently, children who live in an environment in which their physical and psychological well-being is constantly threatened should have a highly activated dream production and threat simulation system, whereas children living in a safe environment that is relatively free of such threat cues should have a weakly activated system. We tested this hypothesis by analysing the content of dream reports from severely traumatized and less traumatized Kurdish children and ordinary, non-traumatized Finnish children. Our results give support for most of the predictions drawn from TST. The severely traumatized children reported a significantly greater number of dreams and their dreams included a higher number of threatening dream events. The dream threats of traumatized children were also more severe in nature than the threats of less traumatized or non-traumatized children.
Traffic Congestion Detection System through Connected Vehicles and Big Data
Cárdenas-Benítez, Néstor; Aquino-Santos, Raúl; Magaña-Espinoza, Pedro; Aguilar-Velazco, José; Edwards-Block, Arthur; Medina Cass, Aldo
2016-01-01
This article discusses the simulation and evaluation of a traffic congestion detection system which combines inter-vehicular communications, fixed roadside infrastructure and infrastructure-to-infrastructure connectivity and big data. The system discussed in this article permits drivers to identify traffic congestion and change their routes accordingly, thus reducing the total emissions of CO2 and decreasing travel time. This system monitors, processes and stores large amounts of data, which can detect traffic congestion in a precise way by means of a series of algorithms that reduces localized vehicular emission by rerouting vehicles. To simulate and evaluate the proposed system, a big data cluster was developed based on Cassandra, which was used in tandem with the OMNeT++ discrete event network simulator, coupled with the SUMO (Simulation of Urban MObility) traffic simulator and the Veins vehicular network framework. The results validate the efficiency of the traffic detection system and its positive impact in detecting, reporting and rerouting traffic when traffic events occur. PMID:27136548
Modeling a maintenance simulation of the geosynchronous platform
NASA Technical Reports Server (NTRS)
Kleiner, A. F., Jr.
1980-01-01
A modeling technique used to conduct a simulation study comparing various maintenance routines for a space platform is discussed. A system model is described and illustrated, the basic concepts of a simulation pass are detailed, and sections on failures and maintenance are included. The operation of the system across time is best modeled by a discrete event approach with two basic events: failure and maintenance of the system. Each overall simulation run consists of introducing a particular model of the physical system, together with a maintenance policy, demand function, and mission lifetime. The system is then run through many passes, each pass corresponding to one mission, and the model is re-initialized before each pass. Statistics are compiled at the end of each pass, and after the last pass a report is printed. Items of interest typically include the time to first maintenance, total number of maintenance trips for each pass, average capability of the system, etc.
Discrete event simulation as a tool in optimization of a professional complex adaptive system.
Nielsen, Anders Lassen; Hilwig, Helmer; Kissoon, Niranjan; Teelucksingh, Surujpal
2008-01-01
Similar urgent needs for improvement of health care systems exist in the developed and developing world. The culture and the organization of an emergency department in developing countries can best be described as a professional complex adaptive system, where each agent (employee) is ignorant of the behavior of the system as a whole; no one understands the entire system. Each agent's action is based on the state of the system at the moment (i.e. lack of medicine, unavailable laboratory investigation, lack of beds and lack of staff in certain functions). An important question is how one can improve the emergency service within the given constraints. The use of simulation is one new approach to studying issues amenable to improvement. Discrete event simulation was used to simulate part of the patient flow in an emergency department. A simple model was built using a prototyping approach. The simulation showed that a minor rotation among the nurses could reduce the mean number of visitors that had to be referred to alternative flows within the hospital from 87 to 37 per day, with a mean utilization of the staff between 95.8% (the nurses) and 87.4% (the doctors). We conclude that even faced with resource constraints and lack of accessible data, discrete event simulation is a tool that can be used successfully to study the consequences of changes in very complex and self-organizing professional complex adaptive systems.
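As a concrete illustration of the kind of model the abstract describes, the following is a minimal discrete event sketch of an emergency queue in which patients who find all nurses busy and the waiting area full are referred to an alternative flow. The staffing level, waiting capacity and rates are illustrative placeholders, not figures from the study.

```python
import heapq
import random

def simulate_ed(n_nurses=3, waiting_cap=10, mean_interarrival=5.0,
                mean_service=12.0, horizon=24 * 60):
    """Toy discrete event model of an emergency department (minutes).

    Patients arrive at random; if every nurse is busy and the waiting
    area is full, the patient is referred to an alternative flow."""
    events = [(random.expovariate(1 / mean_interarrival), "arrival")]
    busy = waiting = referred = seen = 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            # schedule the next arrival, then try to seize a nurse
            heapq.heappush(events,
                           (t + random.expovariate(1 / mean_interarrival), "arrival"))
            if busy < n_nurses:
                busy += 1
                heapq.heappush(events,
                               (t + random.expovariate(1 / mean_service), "departure"))
            elif waiting < waiting_cap:
                waiting += 1
            else:
                referred += 1            # rerouted out of this department
        else:                            # departure frees a nurse
            seen += 1
            if waiting:
                waiting -= 1
                heapq.heappush(events,
                               (t + random.expovariate(1 / mean_service), "departure"))
            else:
                busy -= 1
    return seen, referred

random.seed(1)
print(simulate_ed())                     # (patients seen, patients referred)
```

Changing `n_nurses` or the rotation implied by `mean_service` in such a model is how staffing questions like the one studied here can be explored before any real-world change is made.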
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, John; Jankovsky, Zachary; Metzroth, Kyle G
2018-04-04
The purpose of the ADAPT code is to generate Dynamic Event Trees (DET) using a user specified set of simulators. ADAPT can utilize any simulation tool which meets a minimal set of requirements. ADAPT is based on the concept of DET which uses explicit modeling of the deterministic dynamic processes that take place during a nuclear reactor plant system (or other complex system) evolution along with stochastic modeling. When DET are used to model various aspects of Probabilistic Risk Assessment (PRA), all accident progression scenarios starting from an initiating event are considered simultaneously. The DET branching occurs at user specified times and/or when an action is required by the system and/or the operator. These outcomes then decide how the dynamic system variables will evolve in time for each DET branch. Since two different outcomes at a DET branching may lead to completely different paths for system evolution, the next branching for these paths may occur not only at separate times, but can be based on different branching criteria. The computational infrastructure allows for flexibility in ADAPT to link with different system simulation codes, parallel processing of the scenarios under consideration, on-line scenario management (initiation as well as termination), analysis of results, and user friendly graphical capabilities. The ADAPT system is designed for a distributed computing environment; the scheduler can track multiple concurrent branches simultaneously. The scheduler is modularized so that the DET branching strategy can be modified (e.g. biasing towards the worst-case scenario/event). Independent database systems store data from the simulation tasks and the DET structure so that the event tree can be constructed and analyzed later. ADAPT is provided with a user-friendly client which can easily sort through and display the results of an experiment, precluding the need for the user to manually inspect individual simulator runs.
Lampoudi, Sotiria; Gillespie, Dan T; Petzold, Linda R
2009-03-07
The Inhomogeneous Stochastic Simulation Algorithm (ISSA) is a variant of the stochastic simulation algorithm in which the spatially inhomogeneous volume of the system is divided into homogeneous subvolumes, and the chemical reactions in those subvolumes are augmented by diffusive transfers of molecules between adjacent subvolumes. The ISSA can be prohibitively slow when the system is such that diffusive transfers occur much more frequently than chemical reactions. In this paper we present the Multinomial Simulation Algorithm (MSA), which is designed, on the one hand, to outperform the ISSA when diffusive transfer events outnumber reaction events, and on the other, to handle small reactant populations with greater accuracy than deterministic-stochastic hybrid algorithms. The MSA treats reactions in the usual ISSA fashion, but uses appropriately conditioned binomial random variables for representing the net numbers of molecules diffusing from any given subvolume to a neighbor within a prescribed distance. Simulation results illustrate the benefits of the algorithm.
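A rough sketch of the sampling step, assuming a per-neighbor jump rate d and a uniform split of the leavers among neighbors; the published MSA conditions its binomial draws more carefully (e.g., on the prescribed diffusion distance), so this only conveys the flavor of the technique:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusive_transfers(n_molecules, d, tau, n_neighbors):
    """Sketch of MSA-style diffusive transfers over one step of length tau.

    Each molecule leaves its subvolume with total probability
    p = 1 - exp(-n_neighbors * d * tau), and the leavers are split
    among the neighbors by successive conditioned binomial draws."""
    p_leave = 1.0 - np.exp(-n_neighbors * d * tau)
    leavers = rng.binomial(n_molecules, p_leave)   # total molecules leaving
    transfers, remaining = [], leavers
    for k in range(n_neighbors, 0, -1):
        to_k = rng.binomial(remaining, 1.0 / k)    # conditioned uniform split
        transfers.append(to_k)
        remaining -= to_k
    return leavers, transfers

print(diffusive_transfers(n_molecules=1000, d=0.5, tau=0.01, n_neighbors=4))
```

The point of the binomial batching is visible here: one step moves many molecules at once, instead of simulating each diffusive hop as a separate ISSA event.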
Simulating Coronal Loop Implosion and Compressible Wave Modes in a Flare Hit Active Region
NASA Astrophysics Data System (ADS)
Sarkar, Aveek; Vaidya, Bhargav; Hazra, Soumitra; Bhattacharyya, Jishnu
2017-12-01
There is considerable observational evidence of implosion of magnetic loop systems inside solar coronal active regions following high-energy events like solar flares. In this work, we propose that such collapse can be modeled in three dimensions quite accurately within the framework of ideal magnetohydrodynamics. We furthermore argue that the dynamics of loop implosion is only sensitive to the transmitted disturbance of one or more of the system variables, e.g., velocity generated at the event site. This indicates that to understand loop implosion, it is sensible to leave the event site out of the simulated active region. Toward our goal, a velocity pulse is introduced to model the transmitted disturbance generated at the event site. Magnetic field lines inside our simulated active region are traced in real time, and it is demonstrated that the subsequent dynamics of the simulated loops closely resemble observed imploding loops. Our work highlights the role of plasma β with regard to the rigidity of the loop systems and how that might affect the imploding loops' dynamics. Compressible magnetohydrodynamic modes such as kink and sausage are also shown to be generated during such processes, in accordance with observations.
Modeling a Million-Node Slim Fly Network Using Parallel Discrete-Event Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, Noah; Carothers, Christopher; Mubarak, Misbah
As supercomputers close in on exascale performance, the increased number of processors and processing power translates to an increased demand on the underlying network interconnect. The Slim Fly network topology, a new low-diameter and low-latency interconnection network, is gaining interest as one possible solution for next-generation supercomputing interconnect systems. In this paper, we present a high-fidelity Slim Fly flit-level model leveraging the Rensselaer Optimistic Simulation System (ROSS) and Co-Design of Exascale Storage (CODES) frameworks. We validate our Slim Fly model with the Kathareios et al. Slim Fly model results provided at moderately sized network scales. We further scale the model size up to an unprecedented 1 million compute nodes; and through visualization of network simulation metrics such as link bandwidth, packet latency, and port occupancy, we gain insight into the network behavior at the million-node scale. We also show linear strong scaling of the Slim Fly model on an Intel cluster, achieving a peak event rate of 36 million events per second using 128 MPI tasks to process 7 billion events. Detailed analysis of the underlying discrete-event simulation performance shows that a million-node Slim Fly model simulation can execute in 198 seconds on the Intel cluster.
VME rollback hardware for time warp multiprocessor systems
NASA Technical Reports Server (NTRS)
Robb, Michael J.; Buzzell, Calvin A.
1992-01-01
The purpose of the research effort is to develop and demonstrate innovative hardware to implement specific rollback and timing functions required for efficient queue management and precision timekeeping in multiprocessor discrete event simulations. The previously completed phase 1 effort demonstrated the technical feasibility of building hardware modules which eliminate the state saving overhead of the Time Warp paradigm used in distributed simulations on multiprocessor systems. The current phase 2 effort will build multiple pre-production rollback hardware modules integrated with a network of Sun workstations, and the integrated system will be tested by executing a Time Warp simulation. The rollback hardware will be designed to interface with the greatest number of multiprocessor systems possible. The authors believe that the rollback hardware will provide for significant speedup of large scale discrete event simulation problems and allow multiprocessors using Time Warp to dramatically increase performance.
Simulating Impacts of Disruptions to Liquid Fuels Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Michael; Corbet, Thomas F.; Baker, Arnold B.
This report presents a methodology for estimating the impacts of events that damage or disrupt liquid fuels infrastructure. The impact of a disruption depends on which components of the infrastructure are damaged, the time required for repairs, and the position of the disrupted components in the fuels supply network. Impacts are estimated for seven stressing events in regions of the United States, which were selected to represent a range of disruption types. For most of these events the analysis is carried out using the National Transportation Fuels Model (NTFM) to simulate the system-level liquid fuels sector response. Results are presented for each event, and a brief cross comparison of event simulation results is provided.
Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.
Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang
2007-01-01
Since Head Mounted Displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "Virtual Training Systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.
Knowledge-based simulation for aerospace systems
NASA Technical Reports Server (NTRS)
Will, Ralph W.; Sliwa, Nancy E.; Harrison, F. Wallace, Jr.
1988-01-01
Knowledge-based techniques, which offer many features that are desirable in the simulation and development of aerospace vehicle operations, exhibit many similarities to traditional simulation packages. The eventual solution of these systems' current symbolic processing/numeric processing interface problem will lead to continuous and discrete-event simulation capabilities in a single language, such as TS-PROLOG. Qualitative, totally-symbolic simulation methods are noted to possess several intrinsic characteristics that are especially revelatory of the system being simulated, and capable of ensuring that all possible behaviors are considered.
LISP based simulation generators for modeling complex space processes
NASA Technical Reports Server (NTRS)
Tseng, Fan T.; Schroer, Bernard J.; Dwan, Wen-Shing
1987-01-01
The development of a simulation assistant for modeling discrete event processes is presented. Included are an overview of the system, a description of the simulation generators, and a sample process generated using the simulation assistant.
Actionable Capability for Social and Economic Systems (ACSES)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandez, Steven J; Brecke, Peter K; Carmichael, Theodore D
The foundation of the Actionable Capability for Social and Economic Systems (ACSES) project is a useful regional-scale social-simulation system. This report is organized into five chapters that describe insights that were gained concerning the five key feasibility questions pertaining to such a system: (1) Should such a simulation system exist, would the current state of data sets or collectible data sets be adequate to support such a system? (2) By comparing different agent-based simulation systems, is it feasible to compare simulation systems and select one appropriate for a given application with agents behaving according to modern social theory rather than ad hoc rule sets? (3) Provided that a selected simulation system for a region of interest could be constructed, can the simulation system be updated with new and changing conditions so that the universe of potential outcomes is constrained by events on the ground as they evolve? (4) As these results are constrained by evolving events on the ground, is it feasible to still generate surprise and emerging behavior to suggest outcomes from novel courses of action? (5) As these systems may for the first time require large numbers (hundreds of millions) of agents operating with complexities demanded of modern social theories, can results still be generated within actionable decision cycles?
NASA Astrophysics Data System (ADS)
Intriligator, D. S.; Sun, W.; Detman, T. R.; Dryer, M.; Intriligator, J.; Deehr, C. S.; Webber, W. R.; Gloeckler, G.; Miller, W. D.
2015-12-01
Large solar events can have severe adverse global impacts at Earth. These solar events also can propagate throughout the heliosphere and into the interstellar medium. We focus on the July 2012 and Halloween 2003 solar events. We simulate these events starting from the vicinity of the Sun at 2.5 Rs. We compare our three dimensional (3D) time-dependent simulations to available spacecraft (s/c) observations at 1 AU and beyond. Based on the comparisons of the predictions from our simulations with in-situ measurements, we find that the effects of these large solar events can be observed in the outer heliosphere, the heliosheath, and even into the interstellar medium. We use two simulation models. The HAFSS (HAF Source Surface) model is a kinematic model. HHMS-PI (Hybrid Heliospheric Modeling System with Pickup protons) is a numerical magnetohydrodynamic solar wind (SW) simulation model. Both HHMS-PI and HAFSS are ideally suited for these analyses since, starting at 2.5 Rs from the Sun, they model the slowly evolving background SW and the impulsive, time-dependent events associated with solar activity. Our models naturally reproduce dynamic 3D spatially asymmetric effects observed throughout the heliosphere. Pre-existing SW background conditions have a strong influence on the propagation of shock waves from solar events. Time-dependence is a crucial aspect of interpreting s/c data. We show comparisons of our simulation results with STEREO A, ACE, Ulysses, and Voyager s/c observations.
Parallel discrete-event simulation of FCFS stochastic queueing networks
NASA Technical Reports Server (NTRS)
Nicol, David M.
1988-01-01
Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments), which has proven itself to be effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. It is shown how lookahead can be computed for FCFS queueing network simulations; performance data are given that demonstrate the method's effectiveness under moderate to heavy loads, and performance tradeoffs between the quality of lookahead and the cost of computing lookahead are discussed.
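The lookahead itself rests on a simple FCFS property: a job arriving now cannot depart before the current backlog is served plus its own presampled service time, so the server can promise its neighbors an "appointment" no earlier than that bound. A minimal sketch with invented numbers:

```python
import random

def fcfs_lookahead(now, backlog_clears_at, presampled_service):
    """Lower bound on the next time this FCFS server can affect a neighbor.

    Because service is first-come-first-served and service times can be
    drawn in advance, any job arriving now cannot depart before the
    existing backlog is served plus its own service time; that bound is
    the 'appointment' promised to neighboring processors."""
    return max(now, backlog_clears_at) + presampled_service

random.seed(7)
now = 10.0
backlog_clears_at = 14.5            # when the current queue empties
svc = random.expovariate(1.0)       # service time sampled ahead of need
print("no event before t =", fcfs_lookahead(now, backlog_clears_at, svc))
```

Neighbors can then safely simulate forward to the appointment time without waiting, which is exactly the synchronization slack that makes the method effective.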
A Deep Space Orbit Determination Software: Overview and Event Prediction Capability
NASA Astrophysics Data System (ADS)
Kim, Youngkwang; Park, Sang-Young; Lee, Eunji; Kim, Minsik
2017-06-01
This paper presents an overview of deep space orbit determination software (DSODS), as well as validation and verification results on its event prediction capabilities. DSODS was developed in the MATLAB object-oriented programming environment to support the Korea Pathfinder Lunar Orbiter (KPLO) mission. DSODS has three major capabilities: celestial event prediction for spacecraft, orbit determination with deep space network (DSN) tracking data, and DSN tracking data simulation. To achieve its functionality requirements, DSODS consists of four modules: orbit propagation (OP), event prediction (EP), data simulation (DS), and orbit determination (OD) modules. This paper explains the highest-level data flows between modules in the event prediction, orbit determination, and tracking data simulation processes. Furthermore, to address the event prediction capability of DSODS, this paper introduces the OP and EP modules. The role of the OP module is to handle time and coordinate system conversions, to propagate spacecraft trajectories, and to handle the ephemerides of spacecraft and celestial bodies. Currently, the OP module utilizes the General Mission Analysis Tool (GMAT) as a third-party software component for high-fidelity deep space propagation, as well as time and coordinate system conversions. The role of the EP module is to predict celestial events, including eclipses and ground station visibilities, and this paper presents the functionality requirements of the EP module. The validation and verification results show that, for most cases, event prediction errors were less than 10 milliseconds when compared with flight-proven mission analysis tools such as GMAT and Systems Tool Kit (STK). Thus, we conclude that DSODS is capable of predicting events for the KPLO in real mission applications.
DOT National Transportation Integrated Search
1981-06-01
The purpose of Task 5 in the Extended System Operations Studies Project, DPM Failure Management, is to enhance the capabilities of the Downtown People Mover Simulation (DPMS) and the Discrete Event Simulation Model (DESM) by increasing the failure mo...
Acceleration techniques for dependability simulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Barnette, James David
1995-01-01
As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
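Two of the supporting techniques named here are easy to illustrate: inverse-transform generation of exponential variates, and single-pass statistics gathering via Welford's algorithm, which avoids storing every observation from a long run. The pairing is an illustrative sketch, not code from the thesis:

```python
import math
import random

def exp_variate(rate):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    -ln(1 - U) / rate is Exponential(rate)."""
    return -math.log(1.0 - random.random()) / rate

class RunningStats:
    """Welford's single-pass mean/variance, so a long simulation
    never has to keep every observation in memory."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

random.seed(3)
stats = RunningStats()
for _ in range(100000):
    stats.push(exp_variate(rate=2.0))
print(stats.mean, stats.variance)    # approaches 0.5 and 0.25
```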
Modeling and Simulation of Metallurgical Process Based on Hybrid Petri Net
NASA Astrophysics Data System (ADS)
Ren, Yujuan; Bao, Hong
2016-11-01
In order to achieve the goals of energy saving and emission reduction in iron and steel enterprises, an increasing number of modeling and simulation technologies are used to research and analyse the metallurgical production process. In this paper, the basic principle of the Hybrid Petri net is used to model and analyse the metallurgical process. Firstly, the definition of the Hybrid Petri Net System of Metallurgical Process (MPHPNS) and its modeling theory are proposed. Secondly, the model of MPHPNS based on material flow is constructed. The dynamic flow of materials and the real-time change of each technological state in the metallurgical process are simulated vividly by using this model. The simulation process can implement interaction between the continuous event dynamic system and the discrete event dynamic system at the same level, and plays a positive role in production decision making.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkins, Casey J.; Brigantic, Robert T.; Keating, Douglas H.
There is a need to develop and demonstrate technical approaches for verifying potential future agreements to limit and reduce total warhead stockpiles. To facilitate this aim, warhead monitoring systems employ both concepts of operations (CONOPS) and technologies. A systems evaluation approach can be used to assess the relative performance of CONOPS and technologies in their ability to achieve monitoring system objectives, which include: 1) confidence that a treaty accountable item (TAI) initialized by the monitoring system is as declared; 2) confidence that there is no undetected diversion from the monitoring system; and 3) confidence that a TAI is dismantled as declared. Although there are many quantitative methods that can be used to assess system performance for the above objectives, this paper focuses on a simulation perspective, primarily for the ability to support analysis of the probabilities that are used to define operating characteristics of CONOPS and technologies. This paper describes a discrete event simulation (DES) model comprising three major sub-models: TAI lifecycle flow, monitoring activities, and declaration behavior. The DES model seeks to capture all processes and decision points associated with the progressions of virtual TAIs, with notional characteristics, through the monitoring system from initialization through dismantlement. The simulation updates TAI progression (i.e., whether the generated test objects are accepted and rejected at the appropriate points) all the way through dismantlement. Evaluation of TAI lifecycles primarily serves to assess how the order, frequency, and combination of functions in the CONOPS affect system performance as a whole. It is important, however, to note that discrete event simulation is also capable (at a basic level) of addressing vulnerabilities in the CONOPS and interdependencies between individual functions as well. This approach is beneficial because it does not rely on complex mathematical models, but instead attempts to recreate the real world system as a decision and event driven simulation. Finally, because the simulation addresses warhead confirmation, chain of custody, and warhead dismantlement in a modular fashion, a discrete-event model could be easily adapted to multiple CONOPS for the exploration of a large number of “what if” scenarios.
Simulation modeling of route guidance concept
DOT National Transportation Integrated Search
1997-01-01
The methodology of a simulation model developed at the University of New South Wales, Australia, for the evaluation of performance of Dynamic Route Guidance Systems (DRGS) is described. The microscopic simulation model adopts the event update simulat...
3D Simulation of External Flooding Events for the RISMC Pathway
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prescott, Steven; Mandelli, Diego; Sampath, Ramprasad
2015-09-01
Incorporating 3D simulations as part of the Risk-Informed Safety Margins Characterization (RISMC) Toolkit allows analysts to obtain a more complete picture of complex system behavior for events including external plant hazards. External events such as flooding have become more important recently; however, these can be analyzed with existing and validated simulated physics toolkits. In this report, we describe these approaches specific to flooding-based analysis using an approach called Smoothed Particle Hydrodynamics. The theory, validation, and example applications of the 3D flooding simulation are described. Integrating these 3D simulation methods into computational risk analysis provides a spatial/visual aspect to the design, improves the realism of results, and can provide visual understanding to validate the analysis of flooding.
Chuang, Ming-Tung; Fu, Joshua S; Jang, Carey J; Chan, Chang-Chuan; Ni, Pei-Cheng; Lee, Chung-Te
2008-11-15
Aerosol is frequently transported by a southward high-pressure system from the Asian Continent to Taiwan, and a 100% increase in mass level on event days compared to non-event days was recorded from 2002 to 2005. During this time period, PM2.5 sulfate was found to increase by as much as 155% on event days as compared to non-event days. In this study, Asian emission estimations, the Taiwan Emission Database System (TEDS), and meteorological simulation results from the fifth-generation Mesoscale Model (MM5) were used as inputs for the Community Multiscale Air Quality (CMAQ) model to simulate a long-range transport of PM2.5 event in a southward high-pressure system from the Asian Continent to Taiwan. The simulated aerosol mass level and the associated aerosol components were found to be within a reasonable accuracy. During the transport process, the percentage of semi-volatile PM2.5 organic carbon in the PM2.5 plume only slightly decreased from 22-24% in Shanghai to 21% near Taiwan. However, the percentage of PM2.5 nitrate in PM2.5 decreased from 16-25% to 1%. In contrast, the percentage of PM2.5 sulfate in PM2.5 increased from 16-19% to 35%. It is interesting to note that the percentages of PM2.5 ammonium and PM2.5 elemental carbon in PM2.5 remained nearly constant. Simulation results revealed that transported pollutants dominate the air quality in Taipei when the southward high-pressure system moves to Taiwan. Such a condition demonstrates the dynamic chemical transformation of pollutants during the transport process from continental origin over the sea area and to the downwind land.
Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah
Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
Simulation Of Combat With An Expert System
NASA Technical Reports Server (NTRS)
Provenzano, J. P.
1989-01-01
Proposed expert system predicts outcomes of combat situations. Called COBRA (combat outcome based on rules for attrition), the system selects rules for mathematical modeling of losses and discrete events in combat according to previous experiences. Used with another software module known as the "Game". The Game/COBRA software system, consisting of the Game and COBRA modules, provides for both quantitative aspects and qualitative aspects in simulations of battles. Although COBRA is intended for simulation of large-scale military exercises, concepts embodied in it have much broader applicability. In industrial research, the knowledge-based system enables qualitative as well as quantitative simulations.
An Investigation of Computer-based Simulations for School Crises Management.
ERIC Educational Resources Information Center
Degnan, Edward; Bozeman, William
2001-01-01
Describes development of a computer-based simulation program for training school personnel in crisis management. Addresses the data collection and analysis involved in developing a simulated event, the systems requirements for simulation, and a case study of application and use of the completed simulation. (Contains 21 references.) (Authors/PKP)
Discrete-Event Simulation in Chemical Engineering.
ERIC Educational Resources Information Center
Schultheisz, Daniel; Sommerfeld, Jude T.
1988-01-01
Gives examples, descriptions, and uses for various types of simulation systems, including the Flowtran, Process, Aspen Plus, Design II, GPSS, Simula, and Simscript. Explains similarities in simulators, terminology, and a batch chemical process. Tables and diagrams are included. (RT)
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Flores, Luis; Fleming, Land; Throop, Daiv
2002-01-01
A hybrid discrete/continuous simulation tool, CONFIG, has been developed to support evaluation of the operability of life support systems. CONFIG simulates operations scenarios in which flows and pressures change continuously while system reconfigurations occur as discrete events. In simulations, intelligent control software can interact dynamically with hardware system models. CONFIG simulations have been used to evaluate control software and intelligent agents for automating life support systems operations. A CONFIG model of an advanced biological water recovery system has been developed to interact with intelligent control software that is being used in a water system test at NASA Johnson Space Center.
Schwartz, Rafi; Lahav, Ori; Ostfeld, Avi
2014-10-15
As a complementary step towards solving the general event detection problem of water distribution systems, injection of the organophosphate pesticides chlorpyrifos (CP) and parathion (PA) was simulated at various locations within example networks, and hydraulic parameters were calculated over a 24-h duration. The uniqueness of this study is that the chemical reactions and byproducts of the contaminants' oxidation were also simulated, as well as other indicative water quality parameters such as alkalinity, acidity, pH and the total concentration of free chlorine species. The information on the change in water quality parameters induced by the contaminant injection may facilitate on-line detection of an actual event involving this specific substance and pave the way to development of a generic methodology for detecting events involving introduction of pesticides into water distribution systems. Simulation of the contaminant injection was performed at several nodes within two different networks. For each injection, concentrations of the relevant contaminants' mother and daughter species, free chlorine species and water quality parameters were simulated at nodes downstream of the injection location. The results indicate that injection of these substances can be detected at certain conditions by a very rapid drop in Cl2, functioning as the indicative parameter, as well as a drop in alkalinity concentration and a small decrease in pH, both functioning as supporting parameters, whose usage may reduce false positive alarms. Copyright © 2014 Elsevier Ltd. All rights reserved.
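The detection logic suggested by these results, a rapid drop in free chlorine as the indicative parameter confirmed by a drop in a supporting parameter such as alkalinity, can be sketched as a simple rule over node time series. The thresholds, window and data below are invented for illustration, not values from the paper:

```python
def detect_event(cl2_series, alk_series, cl2_drop=0.3, alk_drop=5.0, window=3):
    """Flag a time step when free chlorine falls rapidly AND alkalinity
    also falls; the supporting-parameter check cuts false alarms.

    cl2_series : free chlorine (mg/L) per time step
    alk_series : alkalinity (mg/L as CaCO3) per time step
    thresholds : illustrative only, not from the study"""
    alarms = []
    for t in range(window, len(cl2_series)):
        d_cl2 = cl2_series[t - window] - cl2_series[t]
        d_alk = alk_series[t - window] - alk_series[t]
        if d_cl2 >= cl2_drop and d_alk >= alk_drop:
            alarms.append(t)
    return alarms

cl2 = [0.80, 0.79, 0.80, 0.78, 0.40, 0.15, 0.10]   # rapid chlorine drop
alk = [120, 121, 120, 119, 112, 105, 101]          # supporting drop
print(detect_event(cl2, alk))                      # time steps flagged
```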
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponciroli, Roberto; Passerini, Stefano; Vilim, Richard B.
Advanced reactors are often claimed to be passively safe against unprotected upset events. In common practice, these events are not considered in the context of the plant control system, i.e., the reactor is subjected to classes of unprotected upset events while the normally programmed response of the control system is assumed not to be present. However, this approach constitutes an oversimplification since, depending on the upset involving the control system, an actuator does not necessarily go in the same direction as needed for safety. In this work, dynamic simulations are performed to assess the degree to which the inherent self-regulating plant response is safe from active control system override. The simulations are meant to characterize the resilience of the plant to unprotected initiators. The initiators were represented and modeled as an actuator going to a hard limit. Consideration of failure is further limited to individual controllers, as there is no cross-connect of signals between these controllers. The potential for passive safety override by the control system is then relegated to the single-input single-output controllers. Here, the results show that when the plant control system is designed by taking into account and quantifying its impact on accident scenarios, there is very limited opportunity for the preprogrammed response of the control system to override passive safety protection in the event of an unprotected initiator.
Ponciroli, Roberto; Passerini, Stefano; Vilim, Richard B.
2017-06-21
Advanced reactors are often claimed to be passively safe against unprotected upset events. In common practice, these events are not considered in the context of the plant control system, i.e., the reactor is subjected to classes of unprotected upset events while the normally programmed response of the control system is assumed not to be present. However, this approach constitutes an oversimplification since, depending on the upset involving the control system, an actuator does not necessarily go in the same direction as needed for safety. In this work, dynamic simulations are performed to assess the degree to which the inherent self-regulating plant response is safe from active control system override. The simulations are meant to characterize the resilience of the plant to unprotected initiators. The initiators were represented and modeled as an actuator going to a hard limit. Consideration of failure is further limited to individual controllers, as there is no cross-connect of signals between these controllers. The potential for passive safety override by the control system is then relegated to the single-input single-output controllers. Here, the results show that when the plant control system is designed by taking into account and quantifying its impact on accident scenarios, there is very limited opportunity for the preprogrammed response of the control system to override passive safety protection in the event of an unprotected initiator.
Computer Simulation as an Aid for Management of an Information System.
ERIC Educational Resources Information Center
Simmonds, W. H.; And Others
The aim of this study was to develop methods, based upon computer simulation, of designing information systems and illustrate the use of these methods by application to an information service. The method developed is based upon Monte Carlo and discrete event simulation techniques and is described in an earlier report - Sira report R412 Organizing…
Toward real-time regional earthquake simulation of Taiwan earthquakes
NASA Astrophysics Data System (ADS)
Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.
2013-12-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.
The Gypsy Moth Event Monitor for FVS: a tool for forest and pest managers
Kurt W. Gottschalk; Anthony W. Courter
2007-01-01
The Gypsy Moth Event Monitor is a program that simulates the effects of gypsy moth, Lymantria dispar (L.), within the confines of the Forest Vegetation Simulator (FVS). Individual stands are evaluated with a susceptibility index system to determine the vulnerability of the stand to the effects of gypsy moth. A gypsy moth outbreak is scheduled in the...
Successful Demonstration of New Isolated Bridge System at UCB Shaking Table
On May 26, 2010, an audience of over 100 attended the successful demonstration of a new isolated bridge system at the PEER Earthquake Simulator Laboratory at UC Berkeley.
Simulating and Forecasting Flooding Events in the City of Jeddah, Saudi Arabia
NASA Astrophysics Data System (ADS)
Ghostine, Rabih; Viswanadhapalli, Yesubabu; Hoteit, Ibrahim
2014-05-01
Metropolitan cities in the Kingdom of Saudi Arabia, such as Jeddah and Riyadh, are more frequently experiencing flooding events caused by strong convective storms that produce intense precipitation over a short span of time. The flooding in the city of Jeddah in November 2009 was described by civil defense officials as the worst in 27 years. As of January 2010, 150 people were reported killed and more than 350 were missing. Another flooding event, less damaging but comparably spectacular, occurred one year later (January 2011) in Jeddah. Anticipating floods before they occur could minimize human and economic losses through the implementation of appropriate protection, provision and rescue plans. We have developed a coupled hydro-meteorological model for simulating and predicting flooding events in the city of Jeddah. We use the Weather Research and Forecasting (WRF) model, assimilating all available data in the Jeddah region, for simulating the storm events in Jeddah. The resulting rain is then used at 10-minute intervals to feed an advanced numerical shallow water model that has been discretized on an unstructured grid using different numerical schemes based on the finite element or finite volume techniques. The model was integrated on a high-resolution grid with size varying between 0.5 m within the streets of Jeddah and 500 m outside the city. This contribution will present the flooding simulation system and the simulation results, focusing on the comparison of the different numerical schemes on the system performances in terms of accuracy and computational efficiency.
Coupled atmosphere-ocean-wave simulations of a storm event over the Gulf of Lion and Balearic Sea
Renault, Lionel; Chiggiato, Jacopo; Warner, John C.; Gomez, Marta; Vizoso, Guillermo; Tintore, Joaquin
2012-01-01
The coastal areas of the North-Western Mediterranean Sea are one of the most challenging places for ocean forecasting. This region is exposed to severe storm events that are of short duration. During these events, significant air-sea interactions, strong winds and a large sea-state can have catastrophic consequences in the coastal areas. To investigate these air-sea interactions and the oceanic response to such events, we implemented the Coupled Ocean-Atmosphere-Wave-Sediment Transport Modeling System to simulate a severe storm in the Mediterranean Sea that occurred in May 2010. During this event, wind speed reached up to 25 m/s, inducing significant sea surface cooling (up to 2°C) over the Gulf of Lion (GoL) and along the storm track, and generating surface waves with a significant wave height of 6 m. It is shown that the event, associated with a cyclogenesis between the Balearic Islands and the GoL, is relatively well reproduced by the coupled system. A surface heat budget analysis showed that ocean vertical mixing was a major contributor to the cooling tendency along the storm track and in the GoL, where turbulent heat fluxes also played an important role. Sensitivity experiments on the ocean-atmosphere coupling suggested that the coupled system is sensitive to the momentum flux parameterization as well as air-sea and air-wave coupling. Comparisons with available atmospheric and oceanic observations showed that the use of the fully coupled system provides the most skillful simulation, illustrating the benefit of using a fully coupled ocean-atmosphere-wave model for the assessment of these storm events.
2016-03-14
...flows, or continuous state changes, with feedback loops and lags modeled in the flow system. Agent-based simulations operate using a discrete event... DeLand, S. M., Rutherford, B. M., Diegert, K. V., & Alvin, K. F. (2002). Error and uncertainty in modeling and simulation. Reliability Engineering... ...intrinsic complexity of the underlying social systems fundamentally limits the ability to make...
Grammatical Aspect and Mental Simulation
ERIC Educational Resources Information Center
Bergen, Benjamin; Wheeler, Kathryn
2010-01-01
When processing sentences about perceptible scenes and performable actions, language understanders activate perceptual and motor systems to perform mental simulations of those events. But little is known about exactly what linguistic elements activate modality-specific systems during language processing. While it is known that content words, like…
NASA Astrophysics Data System (ADS)
Hong, Xiaodong; Reynolds, Carolyn A.; Doyle, James D.; May, Paul; O'Neill, Larry
2017-06-01
Atmosphere-ocean interaction, particularly the ocean response to strong atmospheric forcing, is a fundamental component of the Madden-Julian Oscillation (MJO). In this paper, we examine how model errors in previous MJO events can affect the simulation of subsequent MJO events due to increased errors that develop in the upper ocean before the MJO initiation stage. Two fully coupled numerical simulations with 45-km and 27-km horizontal resolutions were integrated for a two-month period from November to December 2011 using the Navy's limited area Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS®). Three MJO events occurred in succession during the simulations, in early November, mid-November, and mid-December. The 45-km simulation shows an excessive warming of the SSTs during the suppressed phase that occurs before the initiation of the second MJO event, due to erroneously strong surface net heat fluxes. The simulated second MJO event stalls over the Maritime Continent, which prevents the recovery of the deep mixed layer and associated barrier layer. Cross-wavelet analysis of solar radiation and SSTs reveals that the diurnal warming is absent during the second suppressed phase after the second MJO event. The mixed layer heat budget indicates that the cooling is primarily caused by horizontal advection associated with the stalling of the second MJO event, and the cool SSTs fail to initiate the third MJO event. When the horizontal resolution is increased to 27 km, three MJOs are simulated and compare well with observations on multi-month timescales. The higher-resolution simulation of the second MJO event and the more realistic upper-ocean response promote the onset of the third MJO event. Simulations performed with analyzed SSTs indicate that the stalling of the second MJO in the 45-km run is a robust feature, regardless of ocean forcing, while the diurnal cycle analysis indicates that both the 45-km and 27-km ocean resolutions respond realistically when provided with realistic atmospheric forcing. Thus, the problem in the 45-km simulation appears to originate in the atmosphere. Additional simulations show that while the details of the simulations are sensitive to small changes in the initial integration time, the large differences between the 45-km and 27-km runs during the suppressed phase in early December are robust.
Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S; Seal, Sudip K
2010-01-01
The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the μsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
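The reversible-model idea can be sketched in a few lines: each event's forward handler applies its state change while keeping just enough information to undo it exactly, so a straggler message can trigger rollback without full state snapshots. The event type and state below are invented for illustration:

```python
class CountEvent:
    """Reversible event in the optimistic-PDES style: the forward
    handler applies the change and stashes only what is needed to
    undo it; rollback replays reverse handlers in LIFO order."""
    def __init__(self, cell):
        self.cell = cell
        self.saved = None                 # incremental state saving

    def forward(self, grid):
        self.saved = grid[self.cell]      # save just the touched value
        grid[self.cell] += 1              # (an increment could even be
                                          # reversed computationally; the
                                          # saved copy stands in for
                                          # non-invertible updates)

    def reverse(self, grid):
        grid[self.cell] = self.saved      # exact undo

grid = [0, 0, 0]
log = []
for cell in (1, 2, 1):                    # optimistically process events
    ev = CountEvent(cell)
    ev.forward(grid)
    log.append(ev)
print(grid)                               # [0, 2, 1]
while log:                                # straggler arrives: roll back
    log.pop().reverse(grid)
print(grid)                               # [0, 0, 0]
```

Combining reverse computation for invertible updates with tiny saved copies for the rest is what keeps the rollback memory footprint small at scale.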
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh
2014-06-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 min after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high resolution SEM mesh model is developed for the whole Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.
Computer Simulation of the Neuronal Action Potential.
ERIC Educational Resources Information Center
Solomon, Paul R.; And Others
1988-01-01
A series of computer simulations of the neuronal resting and action potentials are described. Discusses the use of simulations to overcome the difficulties of traditional instruction, such as blackboard illustration, which can only illustrate these events at one point in time. Describes systems requirements necessary to run the simulations.…
Pineles, Lisa L; Morgan, Daniel J; Limper, Heather M; Weber, Stephen G; Thom, Kerri A; Perencevich, Eli N; Harris, Anthony D; Landon, Emily
2014-02-01
Hand hygiene (HH) is a critical part of infection prevention in health care settings. Hospitals around the world continuously struggle to improve health care personnel (HCP) HH compliance. The current gold standard for monitoring compliance is direct observation; however, this method is time-consuming and costly. One emerging area of interest involves automated systems for monitoring HH behavior, such as radiofrequency identification (RFID) tracking systems. To assess the accuracy of a commercially available RFID system in detecting HCP HH behavior, we compared direct observation with data collected by the RFID system in a simulated validation setting and in a real-life clinical setting across 2 hospitals. A total of 1,554 HH events was observed. Accuracy for identifying HH events was high in the simulated validation setting (88.5%) but relatively low in the real-life clinical setting (52.4%). This difference was significant (P < .01). Accuracy for detecting HCP movement into and out of patient rooms was also high in the simulated setting but not in the real-life clinical setting (100% on entry and exit in the simulated setting vs 54.3% entry and 49.5% exit in the real-life clinical setting, P < .01). In this validation study of an RFID system, almost half of the HH events were missed. More research is necessary to further develop these systems and improve accuracy prior to widespread adoption. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
Innovative Composite Structure Design for Blast Protection
2007-01-01
2007-01-0483. Innovative Composite Structure Design for Blast Protection. Dongying Jiang, Yuanyuan Liu, MKP Structural Design Associates, Inc. ...protect vehicle and occupants against various explosives. The multi-level and multi-scenario blast simulation and design system integrates three major... numerical simulation of a BTR composite under a blast event. The developed blast simulation and design system will enable the prediction, design, and...
Medicanes in an ocean-atmosphere coupled regional climate model
NASA Astrophysics Data System (ADS)
Akhtar, Naveed; Brauch, Jennifer; Ahrens, Bodo
2014-05-01
So-called medicanes (Mediterranean hurricanes) are meso-scale, marine and warm-core Mediterranean cyclones which exhibit some similarities with tropical cyclones. The strong cyclonic winds associated with them are a potential threat for highly populated coastal areas around the Mediterranean basin. In this study we employ an atmospheric limited-area model (COSMO-CLM) coupled with a one-dimensional ocean model (NEMO-1d) to simulate medicanes. The goal of this study is to assess the robustness of the coupled model in simulating these extreme events. For this purpose 11 historical medicane events are simulated by the atmosphere-only and the coupled models using different set-ups (horizontal grid spacings: 0.44°, 0.22°, 0.088°; with/without spectral nudging). The results show that at high resolution the coupled model is not only able to simulate all medicane events but also improves the simulated track length, warm core, and wind speed of simulated medicanes compared to atmosphere-only simulations. In most of the cases the medicane trajectories and structures are better represented in coupled simulations compared to atmosphere-only simulations. We conclude that the coupled model is a suitable tool for systematic and detailed study of historical medicane events and also for future projections.
NASA Astrophysics Data System (ADS)
Jie, Cao; Zhi-Hai, Wu; Li, Peng
2016-05-01
This paper investigates the consensus tracking problems of second-order multi-agent systems with a virtual leader via event-triggered control. A novel distributed event-triggered transmission scheme is proposed, which is intermittently examined at constant sampling instants. Only partial neighbor information and local measurements are required for event detection. Then the corresponding event-triggered consensus tracking protocol is presented to guarantee second-order multi-agent systems to achieve consensus tracking. Numerical simulations are given to illustrate the effectiveness of the proposed strategy. Project supported by the National Natural Science Foundation of China (Grant Nos. 61203147, 61374047, and 61403168).
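A minimal numerical sketch of such a scheme is given below: three double-integrator agents on a line graph track a sinusoidal virtual leader, with the event condition checked only at constant sampling instants and the controller driven by the last broadcast states. The graph, gains, threshold and leader trajectory are all illustrative, and the leader's acceleration is not fed forward, so tracking is only approximate:

```python
import numpy as np

# Event-triggered consensus tracking for three double-integrator agents
# on a line graph, with a virtual leader pinned to agent 0. All numbers
# are illustrative, not taken from the paper.
dt, steps = 0.05, 400
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # adjacency
L = np.diag(A.sum(1)) - A                                  # graph Laplacian
b = np.array([1., 0., 0.])                                 # pinning gains
k1, k2, thresh = 2.0, 3.0, 0.05

x = np.array([1.0, -2.0, 3.0])          # positions
v = np.zeros(3)                          # velocities
x_hat, v_hat = x.copy(), v.copy()        # states at the last trigger
n_events = 0
for step in range(steps):
    t = step * dt
    x0, v0 = np.sin(t), np.cos(t)        # virtual leader state
    # event check at each sampling instant: broadcast on large mismatch
    fire = np.abs(x_hat - x) + np.abs(v_hat - v) > thresh
    x_hat[fire], v_hat[fire] = x[fire], v[fire]
    n_events += int(fire.sum())
    # controller uses only broadcast (held) neighbor states
    u = -k1 * (L @ x_hat) - k2 * (L @ v_hat) \
        - b * (k1 * (x_hat - x0) + k2 * (v_hat - v0))
    x, v = x + dt * v, v + dt * u        # Euler step of agent dynamics
print("broadcasts:", n_events,
      "final tracking errors:", np.round(x - np.sin(steps * dt), 3))
```

The appeal of the triggered scheme shows up in the broadcast count: state transmissions happen only when the local mismatch grows past the threshold, rather than at every sampling instant.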
Quality Improvement With Discrete Event Simulation: A Primer for Radiologists.
Booker, Michael T; O'Connell, Ryan J; Desai, Bhushan; Duddalwar, Vinay A
2016-04-01
The application of simulation software in health care has transformed quality and process improvement. Specifically, software based on discrete-event simulation (DES) has shown the ability to improve radiology workflows and systems. Nevertheless, despite the successful application of DES in the medical literature, the power and value of simulation remains underutilized. For this reason, the basics of DES modeling are introduced, with specific attention to medical imaging. In an effort to provide readers with the tools necessary to begin their own DES analyses, the practical steps of choosing a software package and building a basic radiology model are discussed. In addition, three radiology system examples are presented, with accompanying DES models that assist in analysis and decision making. Through these simulations, we provide readers with an understanding of the theory, requirements, and benefits of implementing DES in their own radiology practices. Copyright © 2016 American College of Radiology. All rights reserved.
Software engineering and simulation
NASA Technical Reports Server (NTRS)
Zhang, Shou X.; Schroer, Bernard J.; Messimer, Sherri L.; Tseng, Fan T.
1990-01-01
This paper summarizes the development of several automatic programming systems for discrete event simulation. Emphasis is given on the model development, or problem definition, and the model writing phases of the modeling life cycle.
Discrete event simulation tool for analysis of qualitative models of continuous processing systems
NASA Technical Reports Server (NTRS)
Malin, Jane T. (Inventor); Basham, Bryan D. (Inventor); Harris, Richard A. (Inventor)
1990-01-01
An artificial intelligence design and qualitative modeling tool is disclosed for creating computer models and simulating continuous activities, functions, and/or behavior using developed discrete event techniques. Conveniently, the tool is organized in four modules: library design module, model construction module, simulation module, and experimentation and analysis. The library design module supports the building of library knowledge including component classes and elements pertinent to a particular domain of continuous activities, functions, and behavior being modeled. The continuous behavior is defined discretely with respect to invocation statements, effect statements, and time delays. The functionality of the components is defined in terms of variable cluster instances, independent processes, and modes, further defined in terms of mode transition processes and mode dependent processes. Model construction utilizes the hierarchy of libraries and connects them with appropriate relations. The simulation executes a specialized initialization routine and executes events in a manner that includes selective inherency of characteristics through a time and event schema until the event queue in the simulator is emptied. The experimentation and analysis module supports analysis through the generation of appropriate log files and graphics developments and includes the ability of log file comparisons.
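The central pattern, continuous behavior advanced between discrete events, with each event's effect switching the mode that governs the dynamics, can be caricatured as follows. The tank-and-valve system and its numbers are an invented stand-in, not a model from the tool:

```python
import heapq

# Hybrid discrete/continuous loop: integrate the continuous flow up to
# the next discrete event, apply the event's effect (a mode change),
# and continue. Times and rates are illustrative.
events = [(0.0, "valve_open"), (4.0, "valve_close"), (7.0, "valve_open")]
heapq.heapify(events)

t, level, inflow = 0.0, 0.0, 0.0        # time, tank level, current mode
horizon = 10.0
while events or t < horizon:
    t_next, action = heapq.heappop(events) if events else (horizon, None)
    level += inflow * (t_next - t)      # continuous behavior between events
    t = t_next
    if action == "valve_open":
        inflow = 2.0                    # mode-dependent process: filling
    elif action == "valve_close":
        inflow = 0.0                    # mode-dependent process: idle
    print(f"t={t:4.1f}  level={level:5.1f}  inflow={inflow}")
    if action is None:                  # reached the simulation horizon
        break
```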
A cascading failure analysis tool for post processing TRANSCARE simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
This is a MATLAB-based tool to post-process simulation results from the EPRI software TRANSCARE for massive cascading failure analysis following severe disturbances. The tool provides several key modules to: (1) automatically create a contingency list to run TRANSCARE simulations, including substation outages above a certain kV threshold and N-k (1, 2, or 3) generator and branch outages; (2) read in and analyze a CKO file of PCG definitions, an initiating event list, and a CDN file; (3) post-process all the simulation results saved in a CDN file and perform critical event corridor analysis; (4) provide a summary of TRANSCARE simulations; (5) identify the most frequently occurring event corridors in the system; and (6) rank the contingencies using a user-defined security index to quantify consequences in terms of total load loss, total number of cascades, etc.
Scavenging and recombination kinetics in a radiation spur: The successive ordered scavenging events
NASA Astrophysics Data System (ADS)
Al-Samra, Eyad H.; Green, Nicholas J. B.
2018-03-01
This study describes stochastic models to investigate the successive ordered scavenging events in a spur of four radicals, a model system based on a radiation spur. Three simulation models have been developed to obtain the probabilities of the ordered scavenging events: (i) a Monte Carlo random flight (RF) model, (ii) hybrid simulations in which the reaction rate coefficient is used to generate scavenging times for the radicals, and (iii) the independent reaction times (IRT) method. The results of these simulations are found to be in agreement with one another. In addition, a detailed master equation treatment is also presented and used to extract simulated rate coefficients of the ordered scavenging reactions from the RF simulations. These rate coefficients are transient; those obtained for subsequent reactions are effectively equal and in reasonable agreement with the simple correction for competition effects that has recently been proposed.
Simulation methods with extended stability for stiff biochemical kinetics.
Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin
2010-08-11
With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
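For concreteness, one basic Poisson tau-leap step fires every reaction channel a Poisson-distributed number of times within the leap. The sketch below shows this baseline update (which the paper extends to Runge-Kutta form) on a reversible isomerization A <-> B; the rate constants, fixed tau, and initial counts are illustrative assumptions.

```python
import numpy as np

# Basic Poisson tau-leap for A <-> B (the paper's RK extension builds on this).
# Rate constants, tau, and initial counts are illustrative assumptions.
stoich = np.array([[-1, 1],    # reaction 1: A -> B
                   [ 1, -1]])  # reaction 2: B -> A
k = np.array([1.0, 0.5])

def propensities(x):
    return k * x               # a1 = k1*A, a2 = k2*B (mass action)

rng = np.random.default_rng(42)
x = np.array([1000.0, 0.0])
tau = 0.05
for _ in range(100):
    a = propensities(x)
    fires = rng.poisson(a * tau)             # firings per channel within tau
    x = np.maximum(x + fires @ stoich, 0.0)  # update state; clamp at zero
print("final counts (A, B):", x)  # expect roughly k2/(k1+k2) and k1/(k1+k2) of total
```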
Validation of ground-motion simulations for historical events using SDoF systems
Galasso, C.; Zareian, F.; Iervolino, I.; Graves, R.W.
2012-01-01
The study presented in this paper is among the first in a series of studies toward the engineering validation of the hybrid broadband ground‐motion simulation methodology by Graves and Pitarka (2010). This paper provides a statistical comparison between seismic demands of single degree of freedom (SDoF) systems subjected to past events using simulations and actual recordings. A number of SDoF systems are selected considering the following: (1) 16 oscillation periods between 0.1 and 6 s; (2) elastic case and four nonlinearity levels, from mildly inelastic to severely inelastic systems; and (3) two hysteretic behaviors, in particular, nondegrading–nonevolutionary and degrading–evolutionary. Demand spectra are derived in terms of peak and cyclic response, as well as their statistics for four historical earthquakes: 1979 Mw 6.5 Imperial Valley, 1989 Mw 6.8 Loma Prieta, 1992 Mw 7.2 Landers, and 1994 Mw 6.7 Northridge.
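The peak-response statistics in such a validation come from integrating each SDoF oscillator over a recorded or simulated ground motion. A standard choice for the elastic case is Newmark's average-acceleration method, sketched below; the damping ratio, periods, and synthetic accelerogram are illustrative assumptions, not the study's inputs.

```python
import numpy as np

# Elastic SDoF peak displacement under a ground motion, integrated with
# Newmark's average-acceleration method (gamma=1/2, beta=1/4).
# Period, damping, and the synthetic accelerogram are illustrative.
def sdof_peak(ag, dt, period, zeta=0.05):
    w = 2 * np.pi / period
    m, c, k = 1.0, 2 * zeta * w, w * w           # unit-mass oscillator
    u = v = 0.0
    a = -ag[0]                                   # initial acceleration
    umax = 0.0
    denom = m + c * dt / 2 + k * dt * dt / 4
    for ag1 in ag[1:]:
        rhs = (-m * ag1
               - c * (v + dt / 2 * a)
               - k * (u + dt * v + dt * dt / 4 * a))
        a1 = rhs / denom                         # acceleration at step end
        v1 = v + dt / 2 * (a + a1)
        u1 = u + dt * v + dt * dt / 4 * (a + a1)
        u, v, a = u1, v1, a1
        umax = max(umax, abs(u))
    return umax

dt = 0.01
t = np.arange(0, 20, dt)
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)  # toy record
for T in (0.1, 0.5, 1.0, 2.0):
    print(f"T={T:3.1f} s: peak displacement {sdof_peak(ag, dt, T):.4f} m")
```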
The Application of SNiPER to the JUNO Simulation
NASA Astrophysics Data System (ADS)
Lin, Tao; Zou, Jiaheng; Li, Weidong; Deng, Ziyan; Fang, Xiao; Cao, Guofu; Huang, Xingtao; You, Zhengyun; JUNO Collaboration
2017-10-01
The JUNO (Jiangmen Underground Neutrino Observatory) is a multipurpose neutrino experiment designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters. As one of the important systems, the JUNO offline software is being developed using the SNiPER framework. In this paper, we focus on the requirements of the JUNO simulation and present a working solution based on SNiPER. The JUNO simulation framework is in charge of managing event data, detector geometries and materials, physics processes, simulation truth information, etc. It glues the physics generator, detector simulation and electronics simulation modules together to achieve a full simulation chain. In the implementation of the framework, many attractive characteristics of SNiPER have been used, such as dynamic loading, flexible flow control, multiple event management and Python binding. Furthermore, additional efforts have been made to make both the detector and electronics simulation flexible enough to accommodate and optimize different detector designs. For the Geant4-based detector simulation, each sub-detector component is implemented as a SNiPER tool, which is a dynamically loadable and configurable plugin, so it is possible to select the detector configuration at runtime. The framework provides the event loop to drive the detector simulation and interacts with Geant4, which is implemented as a passive service. All levels of user actions are wrapped into different customizable tools, so that user functions can be easily extended by just adding new tools. The electronics simulation has been implemented following an event-driven scheme. The SNiPER task component is used to simulate data processing steps in the electronics modules. The electronics and trigger are synchronized by triggered events containing possible physics signals. The JUNO simulation software has been released and is being used by the JUNO collaboration for detector design optimization, event reconstruction algorithm development and physics sensitivity studies.
Numerical Simulation of the 9-10 June 1972 Black Hills Storm Using CSU RAMS
NASA Technical Reports Server (NTRS)
Nair, U. S.; Hjelmfelt, Mark R.; Pielke, Roger A., Sr.
1997-01-01
Strong easterly flow of low-level moist air over the eastern slopes of the Black Hills on 9-10 June 1972 generated a storm system that produced a flash flood, devastating the area. Based on observations from this storm event, and also from the similar Big Thompson 1976 storm event, conceptual models have been developed to explain the unusually high precipitation efficiency. In this study, the Black Hills storm is simulated using the Colorado State University Regional Atmospheric Modeling System. Simulations with homogeneous and inhomogeneous initializations and different grid structures are presented. The conceptual models of storm structure proposed by previous studies are examined in light of the present simulations. Both homogeneous and inhomogeneous initialization results capture the intense nature of the storm, but the inhomogeneous simulation produced a precipitation pattern closer to the observed pattern. The simulations point to stationary tilted updrafts, with precipitation falling out to the rear as the preferred storm structure. Experiments with different grid structures point to the importance of removing the lateral boundaries far from the region of activity. Overall, simulation performance in capturing the observed behavior of the storm system was enhanced by use of inhomogeneous initialization.
NASA Astrophysics Data System (ADS)
Nissen, Katrin; Ulbrich, Uwe
2016-04-01
An event-based detection algorithm for extreme precipitation is applied to a multi-model ensemble of regional climate model simulations. The algorithm determines the extent, location, duration and severity of extreme precipitation events. We assume that precipitation in excess of the local present-day 10-year return value will potentially exceed the capacity of the drainage systems that protect critical infrastructure elements. This assumption is based on legislation for the design of drainage systems which is in place in many European countries. Thus, events exceeding the local 10-year return value are detected. In this study we distinguish between sub-daily events (3-hourly) with high precipitation intensities and long-duration events (1-3 days) with high precipitation amounts. The climate change simulations investigated here were conducted within the EURO-CORDEX framework and have a horizontal resolution of approximately 12.5 km. The period 1971-2100, forced with observed and scenario (RCP 8.5 and RCP 4.5) greenhouse gas concentrations, was analysed. Examined are changes in event frequency, duration and size. The simulations show an increase in the number of extreme precipitation events for the future climate period over most of the area, which is strongest in Northern Europe. The strength and statistical significance of the signal increase with increasing greenhouse gas concentrations. This work has been conducted within the EU project RAIN (Risk Analysis of Infrastructure Networks in response to extreme weather).
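The detection step itself, grouping consecutive exceedances of the local return level into events with a duration and severity, reduces to a run-length computation on each grid cell's time series. A minimal sketch follows; the series and the return level are synthetic stand-ins for the model output and the 10-year return value.

```python
import numpy as np

# Group consecutive exceedances of a return level into events, reporting
# start index, duration, and severity. Data and threshold are synthetic.
def detect_events(precip, return_level):
    exceed = precip > return_level
    edges = np.diff(exceed.astype(int))          # rising/falling edges of mask
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if exceed[0]:
        starts = np.r_[0, starts]
    if exceed[-1]:
        ends = np.r_[ends, exceed.size]
    return [(s, e - s, precip[s:e].sum()) for s, e in zip(starts, ends)]

rng = np.random.default_rng(7)
series = rng.gamma(shape=0.4, scale=4.0, size=240)   # 3-hourly totals, mm
for start, duration, severity in detect_events(series, return_level=12.0):
    print(f"event at step {start}: {duration} steps, {severity:.1f} mm total")
```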
Discrete Event Simulation of a Suppression of Enemy Air Defenses (SEAD) Mission
2008-03-01
component-based DES developed in Java® using the Simkit simulation package. Analysis of ship self air defense system selection (Turan, 1999) is another... Institute of Technology, Wright-Patterson AFB OH, March 2003 (ADA445279). Turan, Bulent. A Comparative Analysis of Ship Self Air Defense (SSAD) Systems
Ghosh, Preetam; Ghosh, Samik; Basu, Kalyan; Das, Sajal K; Zhang, Chaoyang
2010-12-01
The challenge today is to develop a modeling and simulation paradigm that integrates structural, molecular and genetic data for a quantitative understanding of physiology and behavior of biological processes at multiple scales. This modeling method requires techniques that maintain a reasonable accuracy of the biological process and also reduces the computational overhead. This objective motivates the use of new methods that can transform the problem from energy and affinity based modeling to information theory based modeling. To achieve this, we transform all dynamics within the cell into a random event time, which is specified through an information domain measure like probability distribution. This allows us to use the "in silico" stochastic event based modeling approach to find the molecular dynamics of the system. In this paper, we present the discrete event simulation concept using the example of the signal transduction cascade triggered by extra-cellular Mg2+ concentration in the two component PhoPQ regulatory system of Salmonella Typhimurium. We also present a model to compute the information domain measure of the molecular transport process by estimating the statistical parameters of inter-arrival time between molecules/ions coming to a cell receptor as external signal. This model transforms the diffusion process into the information theory measure of stochastic event completion time to get the distribution of the Mg2+ departure events. Using these molecular transport models, we next study the in-silico effects of this external trigger on the PhoPQ system. Our results illustrate the accuracy of the proposed diffusion models in explaining the molecular/ionic transport processes inside the cell. Also, the proposed simulation framework can incorporate the stochasticity in cellular environments to a certain degree of accuracy. We expect that this scalable simulation platform will be able to model more complex biological systems with reasonable accuracy to understand their temporal dynamics.
NASA Astrophysics Data System (ADS)
Choudhury, Diptyajit; Angeloski, Aleksandar; Ziah, Haseeb; Buchholz, Hilmar; Landsman, Andre; Gupta, Amitava; Mitra, Tiyasa
Lunar explorations often involve the use of a lunar lander, a rover [1],[2] and an orbiter which rotates around the moon with a fixed radius. The orbiters are usually lunar satellites orbiting along a polar orbit to ensure visibility with respect to the rover and the Earth station, although with varying latency. Communication in such deep space missions is usually done using a specialized protocol like Proximity-1 [3]. MATLAB simulation of Proximity-1 has been attempted by some contemporary researchers [4] to simulate all features like transmission control, delay, etc. In this paper it is attempted to simulate, in real time, the communication between a tracking station on Earth (earth station), a lunar orbiter and a lunar rover using concepts of Distributed Real-time Simulation (DRTS). The objective of the simulation is to simulate, in real time, the time-varying communication delays associated with the communicating elements, with a facility to integrate specific simulation modules to study different aspects, e.g., the response due to a specific control command from the earth station to be executed by the rover. The hardware platform comprises four single board computers operating as stand-alone real-time systems (developed by MATLAB xPC Target and inter-networked using the UDP-IP protocol). A time-triggered DRTS approach is adopted. The earth station, the orbiter and the rover are programmed as three standalone real-time processes representing the communicating elements in the system. Communication from one communicating element to another constitutes an event which passes a state message from one element to another, augmenting the state of the latter. These events are handled by an event scheduler which is the fourth real-time process. The event scheduler simulates the delay in space communication taking into consideration the distance between the communicating elements. A unique time synchronization algorithm is developed which takes into account the large latencies in space communication. The DRTS setup thus developed serves as an important and inexpensive test bench for trying out remote controlled applications on the rover, for example, from an earth station. The simulation is modular and the system is composable. Each of the processes can be augmented with relevant simulation modules that handle the events to simulate specific functionalities. With stringent energy saving requirements on most rovers, such a simulation set up, for example, can be used to design optimal rover movement control strategies from the orbiter in conjunction with autonomous systems on the rover itself. References 1. Lunar and Planetary Department, Moscow University, Lunokhod 1, "http://selena.sai.msu.ru/Home/Spa 2. NASA History Office, Guidelines for Advanced Manned Space Vehicle Program, "http://history.nasa.gov/35ann/AMSVPguidelines/top.htm" 3. Consultative Committee For Space Data Systems, "Proximity-1 Space Link Protocol" CCSDS 211.0-B-1 Blue Book, October 2002. 4. Segui, J. and Jennings, E., "Delay Tolerant Networking-Bundle Protocol Simulation", in Proceedings of the 2nd IEEE International Conference on Space Mission Challenges for Information Technology, 2006.
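The event scheduler's central computation, delivering each state message after a propagation delay set by the distance between communicating elements, can be sketched in a few lines. The positions and message below are made up for illustration; this is not the authors' scheduler or synchronization algorithm.

```python
import heapq
import math

C = 299_792_458.0  # speed of light, m/s

# Deliver each state message after a light-time delay set by the
# sender-receiver distance, as a distance-aware event scheduler would.
def schedule(queue, now, sender_pos, receiver_pos, message):
    delay = math.dist(sender_pos, receiver_pos) / C
    heapq.heappush(queue, (now + delay, message))

queue = []
earth_station = (0.0, 0.0, 0.0)
orbiter = (3.844e8, 0.0, 0.0)          # roughly Earth-Moon distance, m
schedule(queue, 0.0, earth_station, orbiter, "rover drive command")
deliver_time, msg = heapq.heappop(queue)
print(f"'{msg}' delivered at t = {deliver_time:.3f} s")   # about 1.282 s
```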
Lin, Yen Ting; Chylek, Lily A; Lemons, Nathan W; Hlavacek, William S
2018-06-21
The chemical kinetics of many complex systems can be concisely represented by reaction rules, which can be used to generate reaction events via a kinetic Monte Carlo method that has been termed network-free simulation. Here, we demonstrate accelerated network-free simulation through a novel approach to equation-free computation. In this process, variables are introduced that approximately capture system state. Derivatives of these variables are estimated using short bursts of exact stochastic simulation and finite differencing. The variables are then projected forward in time via a numerical integration scheme, after which a new exact stochastic simulation is initialized and the whole process repeats. The projection step increases efficiency by bypassing the firing of numerous individual reaction events. As we show, the projected variables may be defined as populations of building blocks of chemical species. The maximal number of connected molecules included in these building blocks determines the degree of approximation. Equation-free acceleration of network-free simulation is found to be both accurate and efficient.
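The accelerate-by-projection loop (short exact burst, finite-difference slope of a coarse variable, forward projection, re-initialization) can be illustrated on a toy birth-death process. The sketch below is a generic equation-free projective integration over a Gillespie simulation; it is not the authors' network-free machinery, and all rates, burst lengths, and step sizes are assumptions.

```python
import numpy as np

# Equation-free projective integration over a toy birth-death process:
# short exact SSA bursts estimate d<n>/dt, which is projected forward.
# Rates, burst length, and projection step are illustrative assumptions.
rng = np.random.default_rng(3)
BIRTH, DEATH = 50.0, 0.5            # n -> n+1 at BIRTH, n -> n-1 at DEATH*n

def ssa_burst(n, t_burst):
    t = 0.0
    while True:
        rates = np.array([BIRTH, DEATH * n])
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        if t + dt > t_burst:
            return n
        t += dt
        n += 1 if rng.random() < rates[0] / total else -1

n, t = 5, 0.0
while t < 10.0:
    n_mid = ssa_burst(n, 0.5)        # first half of the exact burst
    n_end = ssa_burst(n_mid, 0.5)    # second half
    slope = (n_end - n_mid) / 0.5    # finite-difference estimate of dn/dt
    n = max(0, round(n_end + 2.0 * slope))   # project 2.0 time units ahead
    t += 1.0 + 2.0                   # burst time + projected time
print(f"t={t:.1f}: n={n} (steady state is BIRTH/DEATH = {BIRTH/DEATH:.0f})")
```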
Jahn, Beate; Theurl, Engelbert; Siebert, Uwe; Pfeiffer, Karl-Peter
2010-01-01
In most decision-analytic models in health care, it is assumed that there is treatment without delay and availability of all required resources. Therefore, waiting times caused by limited resources and their impact on treatment effects and costs often remain unconsidered. Queuing theory enables mathematical analysis and the derivation of several performance measures of queuing systems. Nevertheless, an analytical approach with closed formulas is not always possible. Therefore, simulation techniques are used to evaluate systems that include queuing or waiting, for example, discrete event simulation. Including queuing in decision-analytic models requires a basic knowledge of queuing theory and of the underlying interrelationships. This tutorial introduces queuing theory. Analysts and decision-makers gain an understanding of queue characteristics, modeling features, and the strengths of the approach. Conceptual issues are covered, but the emphasis is on practical issues like modeling the arrival of patients. The treatment of coronary artery disease with percutaneous coronary intervention including stent placement serves as an illustrative queuing example. Discrete event simulation is applied to explicitly model resource capacities and to incorporate waiting lines and queues in the decision-analytic modeling example.
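Where no closed-form queuing result applies, the event-driven simulation the tutorial turns to is straightforward to write down. Below is a minimal M/M/1 patient-queue simulation built on an explicit event list; the arrival and service rates are illustrative assumptions, and the simulated mean wait can be checked against the analytical M/M/1 value.

```python
import heapq
import random

# Minimal M/M/1 discrete event simulation of a patient queue.
# lam/mu are illustrative; the mean wait is checkable against theory:
# Wq = lam / (mu * (mu - lam)) for an M/M/1 queue.
random.seed(4)
lam, mu, horizon = 0.8, 1.0, 200_000.0
events = [(random.expovariate(lam), "arrival")]
queue_arrivals = []          # arrival times of waiting patients
server_busy = False
waits = []

while events:
    t, kind = heapq.heappop(events)
    if t > horizon:
        break
    if kind == "arrival":
        heapq.heappush(events, (t + random.expovariate(lam), "arrival"))
        if server_busy:
            queue_arrivals.append(t)
        else:
            server_busy = True
            waits.append(0.0)
            heapq.heappush(events, (t + random.expovariate(mu), "departure"))
    else:  # departure
        if queue_arrivals:
            waits.append(t - queue_arrivals.pop(0))
            heapq.heappush(events, (t + random.expovariate(mu), "departure"))
        else:
            server_busy = False

print(f"simulated mean wait {sum(waits)/len(waits):.2f}  "
      f"(theory {lam / (mu * (mu - lam)):.2f})")
```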
Dynamic model based novel findings in power systems analysis and frequency measurement verification
NASA Astrophysics Data System (ADS)
Kook, Kyung Soo
This study selects several advanced topics in power systems and verifies their usefulness using simulation. In the study on the ratio of the equivalent reactance and resistance of bulk power systems, the simulation results give a more accurate value of X/R for the bulk power system, which explains why active power compensation is also important in voltage flicker mitigation. In the application study of the Energy Storage System (ESS) for wind power, a new model implementation of the ESS connected to wind power is proposed, and the control effect of the ESS on the intermittency of the wind power is verified. This study also conducts intensive simulations to clarify the behavior of the wide-area power system frequency as well as the possibility of on-line instability detection. In our POWER IT Laboratory, the U.S. national frequency monitoring network (FNET) has been operating continuously since 2003 to monitor the wide-area power system frequency in the U.S. Using the measured frequency data, power system events are triggered, and their location and scale are estimated. This study also explores the possibility of using simulation technologies to contribute to the applications of FNET, finds similarity in the event detection orders between the frequency measurements and the simulations in the U.S. Eastern power grid, and develops a new methodology for estimating the event location based on simulated N-1 contingencies using the frequency measurements. It has been pointed out that simulation results cannot represent the actual response of power systems due to the inevitable limits of modeling power systems and the different operating conditions of the systems at every second. However, given the need to test such an important infrastructure supplying electric energy without taking any risk with it, software-based simulation is the best solution for verifying new technologies in power system engineering, and, for doing this, new models and better applications of the simulation should be proposed. Conducting extensive simulation studies, this dissertation verified that the actual X/R ratio of bulk power systems is much lower than what has been known as its typical value, showed the effectiveness of the ESS control in mitigating the intermittency of wind power from the perspective of the power grid using the newly proposed simulation model of an ESS connected to wind power, and found many characteristics of wide-area frequency wave propagation. The possibility of using the simulated responses of the power system to replace measured data was also confirmed, which is very promising for the future application of simulation to on-line analysis of power systems based on FNET measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horiike, S.; Okazaki, Y.
This paper describes a performance estimation tool developed for the modeling and simulation of open distributed energy management systems to support their design. The approach of discrete event simulation with detailed models is adopted for efficient performance estimation. The tool includes basic models constituting a platform, e.g., Ethernet, communication protocols, operating system, etc. Application software is modeled by specifying CPU time, disk access size, communication data size, etc. Different types of system configurations for various system activities can be easily studied. Simulation examples show how the tool is utilized for the efficient design of open distributed energy management systems.
Discrete-event system simulation on small and medium enterprises productivity improvement
NASA Astrophysics Data System (ADS)
Sulistio, J.; Hidayah, N. A.
2017-12-01
Small and medium industries in Indonesia are currently developing. The problem faced by SMEs is the difficulty of meeting the growing demand coming into the company. Therefore, SMEs need an analysis and evaluation of the production process in order to meet all orders. The purpose of this research is to increase the productivity of the SME production floor by applying discrete-event system simulation. This method is preferred because it can solve complex problems due to the dynamic and stochastic nature of the system. To increase the credibility of the simulation, the model was validated by comparing the averages of two trials, the variances of two trials, and a chi-square test. Afterwards, the Bonferroni method was applied to develop several alternatives. The article concludes that the productivity of the SME production floor increased by up to 50% by adding capacity to the dyeing and drying machines.
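The validation step described, comparing means and variances between trials plus a chi-square test, maps onto standard hypothesis tests. A sketch using scipy follows; the observed and simulated samples are synthetic, and this is one plausible reading of the paper's procedure rather than its exact computation.

```python
import numpy as np
from scipy import stats

# Validate a simulation against observed data: compare means (Welch t-test),
# variances (F-test), and binned distributions (chi-square goodness of fit).
# The two samples here are synthetic stand-ins.
rng = np.random.default_rng(11)
observed = rng.normal(52.0, 6.0, size=60)    # e.g., measured cycle times
simulated = rng.normal(51.0, 6.5, size=60)   # model output for the same metric

t_stat, p_mean = stats.ttest_ind(observed, simulated, equal_var=False)
print(f"means:      p = {p_mean:.3f}")

f = np.var(observed, ddof=1) / np.var(simulated, ddof=1)
df = len(observed) - 1
p_var = 2 * min(stats.f.cdf(f, df, df), stats.f.sf(f, df, df))
print(f"variances:  p = {p_var:.3f}")

bins = np.quantile(observed, [0, 0.25, 0.5, 0.75, 1.0])
obs_counts, _ = np.histogram(observed, bins)
sim_counts, _ = np.histogram(simulated, bins)
exp = sim_counts * obs_counts.sum() / sim_counts.sum()  # scale expected counts
chi2, p_dist = stats.chisquare(obs_counts, f_exp=exp)
print(f"chi-square: p = {p_dist:.3f}")   # large p-values: no evidence of misfit
```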
Update of KDBI: Kinetic Data of Bio-molecular Interaction database
Kumar, Pankaj; Han, B. C.; Shi, Z.; Jia, J.; Wang, Y. P.; Zhang, Y. T.; Liang, L.; Liu, Q. F.; Ji, Z. L.; Chen, Y. Z.
2009-01-01
Knowledge of the kinetics of biomolecular interactions is important for facilitating the study of cellular processes and underlying molecular events, and is essential for quantitative study and simulation of biological systems. The Kinetic Data of Bio-molecular Interaction database (KDBI) has been developed to provide information about experimentally determined kinetic data of protein–protein, protein–nucleic acid, protein–ligand, nucleic acid–ligand binding or reaction events described in the literature. To accommodate increasing demand for studying and simulating biological systems, numerous improvements and updates have been made to KDBI, including new ways to access data by pathway and molecule names, data files in Systems Biology Markup Language format, a more efficient search engine, access to published parameter sets of simulation models of 63 pathways, and a 2.3-fold increase of data (19 263 entries of 10 532 distinctive biomolecular binding and 11 954 interaction events, involving 2635 proteins/protein complexes, 847 nucleic acids, 1603 small molecules and 45 multi-step processes). KDBI is publicly available at http://bidd.nus.edu.sg/group/kdbi/kdbi.asp. PMID:18971255
Greenberg, Michael; Lioy, Paul; Ozbas, Birnur; Mantell, Nancy; Isukapalli, Sastry; Lahr, Michael; Altiok, Tayfur; Bober, Joseph; Lacy, Clifton; Lowrie, Karen; Mayer, Henry; Rovito, Jennifer
2013-11-01
We built three simulation models that can assist rail transit planners and operators to evaluate high and low probability rail-centered hazard events that could lead to serious consequences for rail-centered networks and their surrounding regions. Our key objective is to provide these models to users who, through planning with these models, can prevent events or more effectively react to them. The first of the three models is an industrial systems simulation tool that closely replicates rail passenger traffic flows between New York Penn Station and Trenton, New Jersey. Second, we built and used a line source plume model to trace chemical plumes released by a slow-moving freight train that could impact rail passengers, as well as people in surrounding areas. Third, we crafted an economic simulation model that estimates the regional economic consequences of a variety of rail-related hazard events through the year 2020. Each model can work independently of the others. However, used together they help provide a coherent story about what could happen and set the stage for planning that should make rail-centered transport systems more resistant and resilient to hazard events. We highlight the limitations and opportunities presented by using these models individually or in sequence. © 2013 Society for Risk Analysis.
Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment
Khan, Md. Ashfaquzzaman; Herbordt, Martin C.
2011-01-01
Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations. PMID:21822327
Evaluation of NASA's end-to-end data systems using DSDS+
NASA Technical Reports Server (NTRS)
Rouff, Christopher; Davenport, William; Message, Philip
1994-01-01
The Data Systems Dynamic Simulator (DSDS+) is a software tool being developed by the authors to evaluate candidate architectures for NASA's end-to-end data systems. Via modeling and simulation, we are able to quickly predict the performance characteristics of each architecture, to evaluate 'what-if' scenarios, and to perform sensitivity analyses. As such, we are using modeling and simulation to help NASA select the optimal system configuration, and to quantify the performance characteristics of this system prior to its delivery. This paper is divided into the following six sections: (1) The role of modeling and simulation in the systems engineering process. In this section, we briefly describe the different types of results obtained by modeling each phase of the systems engineering life cycle, from concept definition through operations and maintenance; (2) Recent applications of DSDS+. In this section, we describe ongoing applications of DSDS+ in support of the Earth Observing System (EOS), and we present some of the simulation results generated for candidate system designs. So far, we have modeled individual EOS subsystems (e.g. the Solid State Recorders used onboard the spacecraft), and we have also developed an integrated model of the EOS end-to-end data processing and data communications systems (from the payloads onboard to the principal investigator facilities on the ground); (3) Overview of DSDS+. In this section we define what a discrete-event model is, and how it works. The discussion is presented relative to the DSDS+ simulation tool that we have developed, including its run-time optimization algorithms that enable DSDS+ to execute substantially faster than comparable discrete-event simulation tools; (4) Summary. In this section, we summarize our findings and 'lessons learned' during the development and application of DSDS+ to model NASA's data systems; (5) Further Information; and (6) Acknowledgements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunsman, David Marvin; Aldemir, Tunc; Rutt, Benjamin
2008-05-01
This LDRD project has produced a tool that makes probabilistic risk assessments (PRAs) of nuclear reactors - analyses which are very resource intensive - more efficient. PRAs of nuclear reactors are being increasingly relied on by the United States Nuclear Regulatory Commission (U.S.N.R.C.) for licensing decisions for current and advanced reactors. Yet, PRAs are produced much as they were 20 years ago. The work here applied a modern systems analysis technique to the accident progression analysis portion of the PRA; the technique was a system-independent multi-task computer driver routine. Initially, the objective of the work was to fuse the accident progression event tree (APET) portion of a PRA to the dynamic system doctor (DSD) created by Ohio State University. Instead, during the initial efforts, it was found that the DSD could be linked directly to a detailed accident progression phenomenological simulation code - the type on which APET construction and analysis relies, albeit indirectly - and thereby directly create and analyze the APET. The expanded DSD computational architecture and infrastructure that was created during this effort is called ADAPT (Analysis of Dynamic Accident Progression Trees). ADAPT is a system software infrastructure that supports execution and analysis of multiple dynamic event-tree simulations on distributed environments. A simulator abstraction layer was developed, and a generic driver was implemented for executing simulators on a distributed environment. As a demonstration of the use of the methodological tool, ADAPT was applied to quantify the likelihood of competing accident progression pathways occurring for a particular accident scenario in a particular reactor type using MELCOR, an integrated severe accident analysis code developed at Sandia. (ADAPT was intentionally created with flexibility, however, and is not limited to interacting with only one code. With minor coding changes to input files, ADAPT can be linked to other such codes.) The results of this demonstration indicate that the approach can significantly reduce the resources required for Level 2 PRAs. From the phenomenological viewpoint, ADAPT can also treat the associated epistemic and aleatory uncertainties. This methodology can also be used for analyses of other complex systems. Any complex system can be analyzed using ADAPT if the workings of that system can be displayed as an event tree, there is a computer code that simulates how those events could progress, and that simulator code has switches to turn on and off system events, phenomena, etc. Applying ADAPT to particular problems still requires human expertise: while the human resources for the creation and analysis of the accident progression are significantly decreased, knowledgeable analysts are still necessary for a given project to apply ADAPT successfully. This research and development effort has met its original goals and then exceeded them.
Dong, Lu; Zhong, Xiangnan; Sun, Changyin; He, Haibo
2017-07-01
This paper presents the design of a novel adaptive event-triggered control method based on the heuristic dynamic programming (HDP) technique for nonlinear discrete-time systems with unknown system dynamics. In the proposed method, the control law is only updated when the event-triggered condition is violated. Compared with the periodic updates in the traditional adaptive dynamic programming (ADP) control, the proposed method can reduce the computation and transmission cost. An actor-critic framework is used to learn the optimal event-triggered control law and the value function. Furthermore, a model network is designed to estimate the system state vector. The main contribution of this paper is to design a new trigger threshold for discrete-time systems. A detailed Lyapunov stability analysis shows that our proposed event-triggered controller can asymptotically stabilize the discrete-time systems. Finally, we test our method on two different discrete-time systems, and the simulation results are included.
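The mechanism the paper exploits, updating the control law only when an event-triggered condition is violated, can be illustrated with a generic event-triggered state-feedback loop. This is not the paper's HDP/actor-critic design; the plant, gain, and trigger threshold below are assumptions chosen for the example.

```python
import numpy as np

# Generic event-triggered state feedback for a discrete-time plant:
# the control input is recomputed only when the gap between the current
# state and the last-transmitted state violates a threshold.
# Plant, gain, and trigger threshold are illustrative assumptions.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # discretized double integrator
B = np.array([[0.0],
              [0.1]])
K = np.array([[-10.0, -5.0]])       # stabilizing feedback gain
sigma = 0.2                         # trigger threshold parameter

x = np.array([1.0, 0.0])
x_hat = x.copy()                    # state last transmitted to the controller
updates = 0
for k in range(200):
    gap = np.linalg.norm(x - x_hat)
    if gap > sigma * np.linalg.norm(x):   # event-triggered condition violated
        x_hat = x.copy()                  # transmit state, update control law
        updates += 1
    u = (K @ x_hat).item()
    x = A @ x + (B * u).ravel()
print(f"control updated {updates}/200 steps, final |x| = {np.linalg.norm(x):.2e}")
```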
The use of discrete-event simulation modelling to improve radiation therapy planning processes.
Werker, Greg; Sauré, Antoine; French, John; Shechter, Steven
2009-07-01
The planning portion of the radiation therapy treatment process at the British Columbia Cancer Agency is efficient but nevertheless contains room for improvement. The purpose of this study is to show how a discrete-event simulation (DES) model can be used to represent this complex process and to suggest improvements that may reduce the planning time and ultimately reduce overall waiting times. A simulation model of the radiation therapy (RT) planning process was constructed using the Arena simulation software, representing the complexities of the system. Several types of inputs feed into the model; these inputs come from historical data, a staff survey, and interviews with planners. The simulation model was validated against historical data and then used to test various scenarios to identify and quantify potential improvements to the RT planning process. Simulation modelling is an attractive tool for describing complex systems, and can be used to identify improvements to the processes involved. It is possible to use this technique in the area of radiation therapy planning with the intent of reducing process times and subsequent delays for patient treatment. In this particular system, reducing the variability and length of oncologist-related delays contributes most to improving the planning time.
Modeling extreme (Carrington-type) space weather events using three-dimensional MHD code simulations
NASA Astrophysics Data System (ADS)
Ngwira, C. M.; Pulkkinen, A. A.; Kuznetsova, M. M.; Glocer, A.
2013-12-01
There is growing concern over possible severe societal consequences related to adverse space weather impacts on man-made technological infrastructure and systems. In the last two decades, significant progress has been made towards the modeling of space weather events. Three-dimensional (3-D) global magnetohydrodynamics (MHD) models have been at the forefront of this transition, and have played a critical role in advancing our understanding of space weather. However, the modeling of extreme space weather events is still a major challenge even for existing global MHD models. In this study, we introduce a specially adapted University of Michigan 3-D global MHD model for simulating extreme space weather events with a ground footprint comparable to (or larger than) that of the Carrington superstorm. Results are presented for an initial simulation run with "very extreme" constructed/idealized solar wind boundary conditions driving the magnetosphere. In particular, we describe the reaction of the magnetosphere-ionosphere system and the associated ground-induced geoelectric field to such extreme driving conditions. We also discuss the results and what they might mean for the accuracy of the simulations. The model is further tested using input data for an observed space weather event to verify the MHD model's consistency and to draw guidance for future work. This extreme space weather MHD model is designed specifically for practical application to the modeling of extreme geomagnetically induced electric fields, which can drive large currents in earth conductors such as power transmission grids.
Causal simulation and sensor planning in predictive monitoring
NASA Technical Reports Server (NTRS)
Doyle, Richard J.
1989-01-01
Two issues are addressed which arise in the task of detecting anomalous behavior in complex systems with numerous sensor channels: how to adjust alarm thresholds dynamically, within the changing operating context of the system, and how to utilize sensors selectively, so that nominal operation can be verified reliably without processing a prohibitive amount of sensor data. The approach involves simulation of a causal model of the system, which provides information on expected sensor values, and on dependencies between predicted events, useful in assessing the relative importance of events so that sensor resources can be allocated effectively. The potential applicability of this work to the execution monitoring of robot task plans is briefly discussed.
TRACE Model for Simulation of Anticipated Transients Without Scram in a BWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, L. Y.; Baek, J.; Cuadra, A.
2013-11-10
A TRACE model has been developed for using the TRACE/PARCS computational package [1, 2] to simulate anticipated transients without scram (ATWS) events in a boiling water reactor (BWR). The model represents a BWR/5 housed in a Mark II containment. The reactor and the balance-of-plant systems are modeled in sufficient detail to enable the evaluation of plant responses and the effectiveness of automatic and operator actions to mitigate this beyond-design-basis accident. The TRACE model implements features that facilitate the simulation of ATWS events initiated by turbine trip and closure of the main steam isolation valves (MSIV). It also incorporates control logic to initiate actions to mitigate the ATWS events, such as water level control, emergency depressurization, and injection of boron via the standby liquid control system (SLCS). Two different approaches have been used to model boron mixing in the lower plenum of the reactor vessel: modulating coolant flow in the lower plenum by a flow valve, and using control logic.
Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo
2015-07-01
Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
High-speed event detector for embedded nanopore bio-systems.
Huang, Yiyun; Magierowski, Sebastian; Ghafar-Zadeh, Ebrahim; Wang, Chengjie
2015-08-01
Biological measurements of microscopic phenomena often deal with discrete-event signals. The ability to automatically carry out such measurements at high-speed in a miniature embedded system is desirable but compromised by high-frequency noise along with practical constraints on filter quality and sampler resolution. This paper presents a real-time event-detection method in the context of nanopore sensing that helps to mitigate these drawbacks and allows accurate signal processing in an embedded system. Simulations show at least a 10× improvement over existing on-line detection methods.
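A common baseline for such detectors is threshold crossing with hysteresis on the ionic current trace: an event opens when the current dips below a low threshold and closes when it recovers above a higher one. The sketch below applies this to a synthetic trace; the thresholds and signal parameters are illustrative, and a real embedded detector would add filtering and an adaptive baseline.

```python
import numpy as np

# Threshold-with-hysteresis detector for nanopore current blockades.
# The synthetic trace and thresholds are illustrative assumptions.
rng = np.random.default_rng(5)
fs, baseline = 100_000, 100.0                 # sample rate (Hz), open-pore current (pA)
trace = baseline + 3.0 * rng.normal(size=fs)  # 1 s of noisy open-pore current
trace[20_000:20_300] -= 40.0                  # two synthetic blockade events
trace[60_000:60_150] -= 35.0

low, high = 0.75 * baseline, 0.90 * baseline  # open/close thresholds (hysteresis)
events, start = [], None
for i, sample in enumerate(trace):
    if start is None and sample < low:          # event opens on a deep dip
        start = i
    elif start is not None and sample > high:   # closes on recovery
        dwell_ms = (i - start) / fs * 1e3
        depth = baseline - trace[start:i].mean()
        events.append((start, dwell_ms, depth))
        start = None

for start, dwell_ms, depth in events:
    print(f"event at {start/fs*1e3:7.2f} ms: dwell {dwell_ms:.2f} ms, "
          f"mean blockade {depth:.1f} pA")
```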
Integrated G and C Implementation within IDOS: A Simulink Based Reusable Launch Vehicle Simulation
NASA Technical Reports Server (NTRS)
Fisher, Joseph E.; Bevacqua, Tim; Lawrence, Douglas A.; Zhu, J. Jim; Mahoney, Michael
2003-01-01
The implementation of multiple Integrated Guidance and Control (IG&C) algorithms per flight phase within a vehicle simulation poses a daunting task to coordinate algorithm interactions with the other G&C components and with vehicle subsystems. Currently being developed by Universal Space Lines LLC (USL) under contract from NASA, the Integrated Development and Operations System (IDOS) contains a high fidelity Simulink vehicle simulation, which provides a means to test cutting edge G&C technologies. Combining the modularity of this vehicle simulation with Simulink's built-in primitive blocks provides a quick way to implement algorithms. To add discrete-event functionality to the unfinished IDOS simulation, Vehicle Event Manager (VEM) and Integrated Vehicle Health Monitoring (IVHM) subsystems were created to provide discrete-event and pseudo-health-monitoring processing capabilities. MATLAB's Stateflow is used to create the IVHM and Event Manager subsystems and to implement a supervisory logic controller, referred to as the Auto-commander, as part of the IG&C to coordinate the control system adaptation and reconfiguration and to select the control and guidance algorithms for a given flight phase. Manual creation of the Stateflow charts for all of these subsystems is a tedious and time-consuming process. The Stateflow Auto-builder was developed as a MATLAB-based software tool for the automatic generation of a Stateflow chart from information contained in a database. This paper describes the IG&C, VEM and IVHM implementations in IDOS. In addition, this paper describes the Stateflow Auto-builder.
Recent examples of mesoscale numerical forecasts of severe weather events along the east coast
NASA Technical Reports Server (NTRS)
Kocin, P. J.; Uccellini, L. W.; Zack, J. W.; Kaplan, M. L.
1984-01-01
Mesoscale numerical forecasts utilizing the Mesoscale Atmospheric Simulation System (MASS) are documented for two East Coast severe weather events. The two events are the thunderstorm and heavy snow bursts in the Washington, D.C. - Baltimore, MD region on 8 March 1984 and the devastating tornado outbreak across North and South Carolina on 28 March 1984. The forecasts are presented to demonstrate the ability of the model to simulate dynamical interactions and diabatic processes and to note some of the problems encountered when using mesoscale models for day-to-day forecasting.
NASA Astrophysics Data System (ADS)
Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Savara, Aditya
2017-10-01
Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of "KMC stiffness" (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events, and increasing the probability of slow process events-allowing rate limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
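The throttling idea, damping the rates of fast frivolous processes so that rare rate-limiting events are actually observed within a KMC run, can be caricatured in a few lines. The sketch below applies a generic frequency-based throttle to a two-process KMC model; it is not the published SQERTSS algorithm, and the rates, target, and damping rule are assumptions.

```python
import numpy as np

# Cartoon of rank-based KMC throttling: processes whose observed event
# counts exceed a target are damped so slow processes get sampled.
# Rates, target, and the damping rule are illustrative assumptions,
# not the published SQERTSS algorithm.
rng = np.random.default_rng(9)
base_rates = np.array([1.0e6, 1.0])   # fast frivolous process vs slow process
throttle = np.ones(2)                 # multiplicative damping factors
counts = np.zeros(2)
target = 100                          # desired events per process per window
t = 0.0

for window in range(5):
    counts[:] = 0
    for _ in range(1000):             # KMC steps in this window
        rates = base_rates * throttle
        total = rates.sum()
        t += rng.exponential(1.0 / total)          # KMC time advance
        i = rng.choice(2, p=rates / total)         # pick process to fire
        counts[i] += 1
    # re-rank: damp any process firing more often than the target
    throttle *= np.minimum(1.0, target / np.maximum(counts, 1))
    print(f"window {window}: counts={counts.astype(int)}, throttle={throttle}")
```

After a few windows the fast process is damped enough that slow events begin to appear in the trajectory, at the cost of distorted time advances during the transient, echoing the caveat in the abstract.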
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Prescott, Steven R; Smith, Curtis L
2011-07-01
In the Risk Informed Safety Margin Characterization (RISMC) approach we want to understand not just the frequency of an event like core damage, but how close we are (or are not) to key safety-related events and how we might increase our safety margins. The RISMC Pathway uses the probabilistic margin approach to quantify impacts to reliability and safety by coupling both probabilistic (via stochastic simulation) and mechanistic (via physics models) approaches. This coupling takes place through the interchange of physical parameters and operational or accident scenarios. In this paper we apply the RISMC approach to evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes that is responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., system activation) and to perform statistical analyses (e.g., run multiple RELAP-7 simulations where the sequencing/timing of events has been changed according to a set of stochastic distributions). By using the RISMC toolkit, we can evaluate how the power uprate affects the system recovery measures needed to avoid core damage after the PWR has lost all available AC power due to tsunami-induced flooding. The simulation of the actual flooding is performed using a smoothed particle hydrodynamics code: NEUTRINO.
Evaluation of PET Imaging Resolution Using 350 μm Pixelated CZT as a VP-PET Insert Detector
NASA Astrophysics Data System (ADS)
Yin, Yongzhi; Chen, Ximeng; Li, Chongzheng; Wu, Heyu; Komarov, Sergey; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan
2014-02-01
A cadmium-zinc-telluride (CZT) detector with 350 μm pitch pixels was studied in high-resolution positron emission tomography (PET) imaging applications. The PET imaging system was based on coincidence detection between a CZT detector and a lutetium oxyorthosilicate (LSO)-based Inveon PET detector in virtual-pinhole PET geometry. The LSO detector is a 20 × 20 array with 1.6 mm pitch and 10 mm thickness. The CZT detector uses a 20 × 20 × 5 mm substrate, with 350 μm pitch pixelated anodes and a coplanar cathode. A NEMA NU4 Na-22 point source, 250 μm in diameter, was imaged by this system. Experiments show that the image resolution of single-pixel photopeak events was 590 μm FWHM, while the image resolution of double-pixel photopeak events was 640 μm FWHM. The inclusion of double-pixel full-energy events increased the sensitivity of the imaging system. To validate the imaging experiment, we conducted a Monte Carlo (MC) simulation of the same PET system in the Geant4 Application for Tomographic Emission (GATE). We defined the LSO detectors as a scanner ring and the 350 μm pixelated CZT detectors as an insert ring. The GATE-simulated coincidence data were sorted into an insert-scanner sinogram and reconstructed. The image resolution of the MC-simulated data (which did not factor in positron range and acolinearity effects) was 460 μm FWHM for single-pixel events. The image resolutions of the experimental data, the MC-simulated data, and the theoretical calculation are all close to 500 μm FWHM when the proposed 350 μm pixelated CZT detector is used as a PET insert. The interpolation algorithm for charge-sharing events was also investigated. The PET image reconstructed using the interpolation algorithm shows improved resolution compared with the image reconstructed without it.
Liu, Changxin; Gao, Jian; Li, Huiping; Xu, Demin
2018-05-01
Event-triggered control is a promising solution for cyber-physical systems, such as networked control systems, multiagent systems, and large-scale intelligent systems. In this paper, we propose an event-triggered model predictive control (MPC) scheme for constrained continuous-time nonlinear systems with bounded disturbances. First, a time-varying tightened state constraint is computed to achieve robust constraint satisfaction, and an event-triggered scheduling strategy is designed in the framework of dual-mode MPC. Second, sufficient conditions for ensuring feasibility and closed-loop robust stability are developed. We show that robust stability can be ensured and communication load can be reduced with the proposed MPC algorithm. Finally, numerical simulations and comparison studies are performed to verify the theoretical results.
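The triggering idea can be illustrated with a minimal sketch: the control input is recomputed only when the plant deviates from the last sampled state by more than a threshold. This is not the paper's algorithm; the plant, the threshold sigma, the disturbance bound, and the linear feedback standing in for the dual-mode MPC solve are all assumptions.

import numpy as np

def f(x, u):
    # Simple nonlinear plant (illustrative, not the paper's system)
    return np.array([x[1], -np.sin(x[0]) - 0.5 * x[1] + u])

def simulate(T=20.0, dt=1e-3, sigma=0.3):
    x = np.array([1.0, 0.0])
    x_last = x.copy()          # state at the last triggering instant
    u = 0.0
    triggers = 0
    for _ in range(int(T / dt)):
        # Event-triggering rule: recompute control only when the deviation
        # from the last sampled state exceeds the threshold sigma.
        if np.linalg.norm(x - x_last) > sigma:
            x_last = x.copy()
            u = -2.0 * x_last[0] - 1.5 * x_last[1]   # placeholder for the MPC solve
            triggers += 1
        w = 0.01 * (2 * np.random.rand(2) - 1)       # bounded disturbance
        x = x + dt * f(x, u) + w * dt
    return triggers

print("control updates:", simulate())   # far fewer than the 20,000 integration steps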
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schrenkenghost, Debra K.
2001-01-01
The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
2017-06-01
A designed experiment was used to model and explore a ship-to-shore logistics process supporting dispersed units via three types of ULSs, which vary primarily in... Subject terms: systems, simulation, discrete event simulation, design of experiments, data analysis, simplekit, nearly orthogonal and balanced designs.
Kunkel, Amber; McLay, Laura A
2013-03-01
Emergency medical services (EMS) provide life-saving care and hospital transport to patients with severe trauma or medical conditions. Severe weather events, such as snow events, may lead to adverse patient outcomes by increasing call volumes and service times. Adequate staffing levels during such weather events are critical for ensuring that patients receive timely care. To determine staffing levels that depend on weather, we propose a model that uses a discrete event simulation of a reliability model to identify minimum staffing levels that provide timely patient care, with regression used to provide the input parameters. The system is said to be reliable if there is a high degree of confidence that ambulances can immediately respond to a given proportion of patients (e.g., 99%). Four weather scenarios capture varying levels of snow falling and snow on the ground. An innovative feature of our approach is that we evaluate the mitigating effects of different extrinsic response policies and intrinsic system adaptation. The models use data from Hanover County, Virginia to quantify how snow reduces EMS system reliability and necessitates increasing staffing levels. The model and its analysis can assist in EMS preparedness by providing a methodology to adjust staffing levels during weather events. A key observation is that when it is snowing, intrinsic system adaptation has similar effects on system reliability as one additional ambulance.
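A minimal discrete event simulation in this spirit, an Erlang-loss style model, estimates the fraction of calls that find an ambulance immediately available under normal and snow conditions. The arrival rates, service times, and snow multipliers below are invented placeholders, not the Hanover County inputs.

import heapq, random

def immediate_coverage(n_ambulances, rate_per_hr, mean_service_hr,
                       horizon_hr=10_000, seed=7):
    """Fraction of calls that find an ambulance free (loss-system DES)."""
    rng = random.Random(seed)
    busy_until = []            # min-heap of completion times of busy ambulances
    t, served, total = 0.0, 0, 0
    while t < horizon_hr:
        t += rng.expovariate(rate_per_hr)          # next call arrival
        while busy_until and busy_until[0] <= t:   # release finished units
            heapq.heappop(busy_until)
        total += 1
        if len(busy_until) < n_ambulances:
            served += 1
            heapq.heappush(busy_until, t + rng.expovariate(1.0 / mean_service_hr))
    return served / total

# Assumed: snow roughly doubles call volume and stretches service times.
for c in range(4, 9):
    normal = immediate_coverage(c, rate_per_hr=3.0, mean_service_hr=1.0)
    snow = immediate_coverage(c, rate_per_hr=6.0, mean_service_hr=1.5)
    print(f"{c} ambulances: normal {normal:.3f}, snow {snow:.3f}")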
Reaction Event Counting Statistics of Biopolymer Reaction Systems with Dynamic Heterogeneity.
Lim, Yu Rim; Park, Seong Jun; Park, Bo Jung; Cao, Jianshu; Silbey, Robert J; Sung, Jaeyoung
2012-04-10
We investigate the reaction event counting statistics (RECS) of an elementary biopolymer reaction in which the rate coefficient is dependent on states of the biopolymer and the surrounding environment, and discover a universal kinetic phase transition in the RECS of the reaction system with dynamic heterogeneity. From an exact analysis for a general model of elementary biopolymer reactions, we find that the variance in the number of reaction events is dependent on the square of the mean number of reaction events when the measurement time is short on the relaxation time scale of rate coefficient fluctuations, which does not conform to renewal statistics. On the other hand, when the measurement time interval is much longer than the relaxation time of rate coefficient fluctuations, the variance becomes linearly proportional to the mean reaction number, in accordance with renewal statistics. Gillespie's stochastic simulation method is generalized for the reaction system with a fluctuating rate coefficient. The simulation results confirm the correctness of the analytic results for the time-dependent mean and variance of the reaction event number distribution. On the basis of the obtained results, we propose a method of quantitative analysis for the reaction event counting statistics of reaction systems with rate coefficient fluctuations, which enables one to extract information about the magnitude and the relaxation times of the fluctuating reaction rate coefficient, without a bias that can be introduced by assuming a particular kinetic model of conformational dynamics and the conformation-dependent reactivity. An exact relationship is established between a higher moment of the reaction event number distribution and the multitime correlation of the reaction rate for the reaction system with a nonequilibrium initial state distribution as well as for the system with the equilibrium initial state distribution.
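The crossover described here can be reproduced with a small generalized Gillespie simulation in which the rate coefficient switches between two values, a minimal assumed model of dynamic heterogeneity: at measurement times short relative to the switching relaxation time the variance grows faster than the mean, while at long times the variance-to-mean ratio settles to a constant.

import random, statistics

def count_events(T, rng, k_states=(0.5, 2.0), gamma=0.1):
    """Events of a reaction whose rate switches between two values with
    switching rate gamma (competing exponential channels, Gillespie-style)."""
    t, s, n = 0.0, 0, 0
    while True:
        k = k_states[s]
        dt = rng.expovariate(k + gamma)           # next occurrence among two channels
        if t + dt > T:
            return n
        t += dt
        if rng.random() < k / (k + gamma):
            n += 1                                # a reaction event fired
        else:
            s = 1 - s                             # the environment switched state

rng = random.Random(42)
for T in (1.0, 10.0, 100.0, 1000.0):
    counts = [count_events(T, rng) for _ in range(2000)]
    m, v = statistics.mean(counts), statistics.variance(counts)
    print(f"T={T:7.1f}  mean={m:9.2f}  variance/mean={v/m:6.2f}")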
Understanding Emergency Care Delivery Through Computer Simulation Modeling.
Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L
2018-02-01
In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
USDA-ARS?s Scientific Manuscript database
A pilot-scale, recirculating-flow-through, non-steady-state (RFT-NSS) chamber system was designed for quantifying nitrous oxide (N2O) emissions from simulated open-lot beef cattle feedlot pens. The system employed five 1 square meter steel pans. A lid was placed systematically on each pan and heads...
NASA Astrophysics Data System (ADS)
Console, R.; Vannoli, P.; Carluccio, R.
2016-12-01
The application of a physics-based earthquake simulation algorithm to the central Apennines region, where the 24 August 2016 Amatrice earthquake occurred, allowed the compilation of a synthetic seismic catalog lasting 100 ky and containing more than 500,000 M ≥ 4.0 events, without the limitations that real catalogs suffer in terms of completeness, homogeneity and time duration. The algorithm on which this simulator is based is constrained by several physical elements, such as: (a) an average slip rate for every single fault in the investigated fault systems, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small magnitude events. Events nucleated in one fault are allowed to expand into neighboring faults, even those belonging to a different fault system, if they are separated by less than a given maximum distance. The seismogenic model upon which we applied the simulator code was derived from the DISS 3.2.0 database (http://diss.rm.ingv.it/diss/), selecting all the fault systems that are recognized in the central Apennines region, for a total of 24 fault systems. The application of our simulation algorithm provides typical features in the time, space and magnitude behavior of the seismicity, which are comparable with those of real observations. These features include long-term periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the linear Gutenberg-Richter distribution in the moderate and higher magnitude range. The statistical distribution of earthquakes with M ≥ 6.0 on single faults exhibits a fairly clear pseudo-periodic behavior, with a coefficient of variation Cv of the order of 0.3-0.6. We found in our synthetic catalog a clear trend of long-term acceleration of seismic activity preceding M ≥ 6.0 earthquakes and quiescence following those earthquakes. Lastly, as an example of a possible use of synthetic catalogs, an attenuation law was applied to all the events reported in the synthetic catalog for the production of maps showing the exceedance probability of given values of peak ground acceleration (PGA) on the territory under investigation.
Non-Lipschitz Dynamics Approach to Discrete Event Systems
NASA Technical Reports Server (NTRS)
Zak, M.; Meyers, R.
1995-01-01
This paper presents and discusses a mathematical formalism for simulation of discrete event dynamics (DED) - a special type of 'man-made' system designed to aid specific areas of information processing. A main objective is to demonstrate that the mathematical formalism for DED can be based upon the terminal model of Newtonian dynamics, which allows one to relax Lipschitz conditions at some discrete points.
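For context, a canonical non-Lipschitz example consistent with this formalism, under the assumption that it uses the standard terminal-attractor form, is

\[ \dot{x} = -k\,x^{1/3}, \qquad k > 0, \]

whose right-hand side violates the Lipschitz condition at the equilibrium $x = 0$, since $\partial \dot{x}/\partial x = -(k/3)\,x^{-2/3} \to -\infty$ as $x \to 0$. Separation of variables with $x(0) = x_0 > 0$ gives

\[ x(t) = \bigl(x_0^{2/3} - \tfrac{2k}{3}\,t\bigr)^{3/2}, \]

so the trajectory reaches $x = 0$ at the finite time $t^{*} = \tfrac{3}{2k}\,x_0^{2/3}$ and remains there, a discrete "event" that a Lipschitz system could only approach asymptotically.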
Adaptive Stress Testing of Airborne Collision Avoidance Systems
NASA Technical Reports Server (NTRS)
Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Brat, Guillaume P.; Owen, Michael P.
2015-01-01
This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.
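A compact sketch of the approach is given below, with an invented toy simulator standing in for the collision avoidance system: MCTS searches over per-step random seeds (the controlled stochasticity) and rewards trajectories that reach the event or come close to it. The reward shaping, node statistics, and all constants are assumptions, not the paper's formulation.

import math, random

class Walk:
    """Toy stand-in for a black-box simulator: the 'event' is |x| >= 5.
    Stochasticity is fixed by the seed sequence, so runs are repeatable."""
    def __init__(self, seeds):
        self.x = 0.0
        for s in seeds:
            self.x += random.Random(s).choice([-1.0, 1.0])
    def reward(self):
        hit = abs(self.x) >= 5.0     # large bonus for the event, else distance shaping
        return (1000.0 if hit else 0.0) - abs(5.0 - abs(self.x))

def stress_test(horizon=12, iters=4000, branching=4, c=50.0):
    stats = {}                                # seed-prefix -> (visits, total reward)
    best, best_r = None, -math.inf
    for _ in range(iters):
        path = ()
        while len(path) < horizon:            # UCB selection over per-step seeds
            children = [path + (s,) for s in range(branching)]
            fresh = [ch for ch in children if ch not in stats]
            if fresh:
                path = random.choice(fresh)
                break
            n = sum(stats[ch][0] for ch in children)
            path = max(children, key=lambda ch: stats[ch][1] / stats[ch][0]
                                  + c * math.sqrt(math.log(n) / stats[ch][0]))
        while len(path) < horizon:            # random rollout to the horizon
            path += (random.randrange(branching),)
        r = Walk(path).reward()
        for k in range(1, horizon + 1):       # back-propagate along the prefix
            v, tot = stats.get(path[:k], (0, 0.0))
            stats[path[:k]] = (v + 1, tot + r)
        if r > best_r:
            best, best_r = path, r
    return best, best_r

print(stress_test())   # best seed path and its reward (~1000 once an event is found)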
Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bucknor, Matthew D.; Grabaskas, David; Brunett, Acacia J.
2016-01-01
Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended due to deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Centering on an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive reactor cavity cooling system following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. While this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability for the reactor cavity cooling system (and the reactor system in general) to the postulated transient event.
Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event
Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.; ...
2017-01-24
We report that many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Lastly, although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.
Collaborative Project: Development of an Isotope-Enabled CESM for Testing Abrupt Climate Changes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zhengyu
One of the most important validations for a state-of-the-art Earth System Model (ESM) with respect to climate changes is the simulation of the climate evolution and abrupt climate change events in the Earth's history of the last 21,000 years. However, one great challenge for model validation is that ESMs usually do not directly simulate geochemical variables that can be compared directly with past proxy records. In this proposal, we have met this challenge by developing the simulation capability of major isotopes in a state-of-the-art ESM, the Community Earth System Model (CESM), enabling us to make direct model-data comparison by comparing the model directly against proxy climate records. Our isotope-enabled ESM incorporates the capability of simulating key isotopes and geotracers, notably δ18O, δD, δ14C, δ13C, Nd and Pa/Th. The isotope-enabled ESM has been used to perform simulations for the last 21,000 years. The direct comparison of these simulations with proxy records has shed light on the mechanisms of important climate change events.
Using soft systems methodology to develop a simulation of out-patient services.
Lehaney, B; Paul, R J
1994-10-01
Discrete event simulation is an approach to modelling a system in the form of a set of mathematical equations and logical relationships, usually used for complex problems, which are difficult to address by using analytical or numerical methods. Managing out-patient services is such a problem. However, simulation is not in itself a systemic approach, in that it provides no methodology by which system boundaries and system activities may be identified. The investigation considers the use of soft systems methodology as an aid to drawing system boundaries and identifying system activities, for the purpose of simulating the outpatients' department at a local hospital. The long term aims are to examine the effects that the participative nature of soft systems methodology has on the acceptability of the simulation model, and to provide analysts and managers with a process that may assist in planning strategies for health care.
A Madden-Julian oscillation event realistically simulated by a global cloud-resolving model.
Miura, Hiroaki; Satoh, Masaki; Nasuno, Tomoe; Noda, Akira T; Oouchi, Kazuyoshi
2007-12-14
A Madden-Julian Oscillation (MJO) is a massive weather event consisting of deep convection coupled with atmospheric circulation, moving slowly eastward over the Indian and Pacific Oceans. Despite its enormous influence on many weather and climate systems worldwide, it has proven very difficult to simulate an MJO because of assumptions about cumulus clouds in global meteorological models. Using a model that allows direct coupling of the atmospheric circulation and clouds, we successfully simulated the slow eastward migration of an MJO event. Topography, the zonal sea surface temperature gradient, and interplay between eastward- and westward-propagating signals controlled the timing of the eastward transition of the convective center. Our results demonstrate the potential for making month-long MJO predictions when global cloud-resolving models with realistic initial conditions are used.
Evolution of Flow channels and Dipolarization Using THEMIS Observations and Global MHD Simulations
NASA Astrophysics Data System (ADS)
El-Alaoui, M.; McPherron, R. L.; Nishimura, Y.
2017-12-01
We have extensively analyzed a substorm on March 14, 2008 for which we have observations from THEMIS spacecraft located beyond 9 RE near 2100 local time. The available data include an extensive network of all sky cameras and ground magnetometers that establish the times of various auroral and magnetic events. This arrangement provided an excellent data set with which to investigate meso-scale structures in the plasma sheet. We have used a global magnetohydrodynamic simulation to investigate the structure and dynamics of the magnetotail current sheet during this substorm. Both earthward and tailward flows were found in the observations as well as the simulations. The simulation shows that the flow channels follow tortuous paths that are often reflected or deflected before arriving at the inner magnetosphere. The simulation shows a sequence of fast flows and dipolarization events similar to what is seen in the data, though not at precisely the same times or locations. We will use our simulation results combined with the observations to investigate the global convection systems and current sheet structure during this event, showing how meso-scale structures fit into the context of the overall tail dynamics during this event. Our study includes determining the location, timing and strength of several current wedges and expansion onsets during an 8-hour interval.
Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique
2011-05-01
In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
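For contrast with voltage stepping, the exactly solvable case mentioned above, the leaky integrate-and-fire (LIF) model, admits a fully event-driven simulation in closed form: between synaptic events the membrane potential decays analytically, so the state only needs updating when an input spike arrives. A minimal sketch with assumed parameters:

import math

TAU, V_TH, V_RESET = 10.0, 1.0, 0.0   # ms, threshold, reset (assumed values)

def lif_event_driven(input_events, w=0.4):
    """Exact event-driven LIF with no external drive:
    V(t) = V0 * exp(-dt/TAU) between events, plus a kick w per input spike."""
    v, t_last, out = 0.0, 0.0, []
    for t in input_events:
        v *= math.exp(-(t - t_last) / TAU)    # closed-form decay to the event time
        v += w                                 # instantaneous synaptic kick
        if v >= V_TH:                          # threshold crossing is itself an event
            out.append(t)
            v = V_RESET
        t_last = t
    return out

print(lif_event_driven([5, 6, 7, 30, 31, 32, 33]))   # spike times, here [7, 32]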
Workshop on data acquisition and trigger system simulations for high energy physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1992-12-31
This report discusses the following topics: DAQSIM: A data acquisition system simulation tool; Front End and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview; DAGAR -- A Synthesis System; Proposed Silicon Compiler for Physics Applications; Timed-LOTOS in a PROLOG Environment: an Algebraic Language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; and A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies.
GPS synchronized power system phase angle measurements
NASA Astrophysics Data System (ADS)
Wilson, Robert E.; Sterlina, Patrick S.
1994-09-01
This paper discusses the use of Global Positioning System (GPS) synchronized equipment for the measurement and analysis of key power system quantities. Two GPS-synchronized phasor measurement units (PMUs) were installed before testing. The tests indicated that the PMUs recorded the dynamic response of the power system phase angles when the northern California power grid was excited by artificial short circuits. Power system planning engineers perform detailed computer-generated simulations of the dynamic response of the power system to naturally occurring short circuits. The computer simulations use models of transmission lines, transformers, circuit breakers, and other high-voltage components. This work compares computer simulations of the same event with field measurements.
NASA Astrophysics Data System (ADS)
Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi
2015-11-01
Large-scale regional evacuation is an important part of national security emergency response plans. The emergency evacuation of large commercial shopping areas, as typical service systems, is an active research topic. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined within the context of a case study involving evacuation within a commercial shopping mall. Pedestrian walking is based on Cellular Automata and the event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns; the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of pedestrian movement routes, the model takes into account the purchase intention of customers and the density of pedestrians. Based on the evacuation model combining Cellular Automata with a Dynamic Floor Field and the event-driven model, we can reflect the behavior characteristics of customers and clerks in normal and emergency evacuation situations. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model using the combination of Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
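A minimal floor-field CA sketch in Python illustrates the core mechanics: a static field of distances to the exit, and pedestrians that sequentially step to the neighboring cell with the lowest field value. The grid size, exit location, and update order are invented, and the purchase-intention and clerk layers are omitted.

import random
from collections import deque

W, H = 12, 8
EXIT = (0, 4)

def static_field():
    """Static floor field: BFS distance from the exit over the open grid."""
    dist = {EXIT: 0}
    q = deque([EXIT])
    while q:
        x, y = q.popleft()
        for nb in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nb[0] < W and 0 <= nb[1] < H and nb not in dist:
                dist[nb] = dist[(x, y)] + 1
                q.append(nb)
    return dist

def evacuate(n_peds=30, seed=3):
    rng = random.Random(seed)
    field = static_field()
    cells = [(x, y) for x in range(W) for y in range(H) if (x, y) != EXIT]
    peds = set(rng.sample(cells, n_peds))
    t = 0
    while peds:
        t += 1
        for p in sorted(peds, key=lambda c: field[c]):   # closest move first
            x, y = p
            options = [c for c in ((x+1, y), (x-1, y), (x, y+1), (x, y-1), p)
                       if c in field and (c == p or c not in peds)]
            target = min(options, key=lambda c: field[c])  # descend the floor field
            peds.discard(p)
            if target != EXIT:                # reaching the exit removes the pedestrian
                peds.add(target)
    return t

print("evacuation time steps:", evacuate())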
Monte Carlo Simulations for a LEP Experiment with Unix Workstation Clusters
NASA Astrophysics Data System (ADS)
Bonesini, M.; Calegari, A.; Rossi, P.; Rossi, V.
Modular systems of RISC CPU-based computers have been implemented for large productions of Monte Carlo simulated events for the DELPHI experiment at CERN. From a pilot system based on DEC 5000 CPUs, a full-size system based on a CONVEX C3820 UNIX supercomputer and a cluster of HP 735 workstations has been put into operation as a joint effort between INFN Milano and CILEA.
Time Warp Operating System, Version 2.5.1
NASA Technical Reports Server (NTRS)
Bellenot, Steven F.; Gieselman, John S.; Hawley, Lawrence R.; Peterson, Judy; Presley, Matthew T.; Reiher, Peter L.; Springer, Paul L.; Tupman, John R.; Wedel, John J., Jr.; Wieland, Frederick P.;
1993-01-01
Time Warp Operating System, TWOS, is a special-purpose computer program designed to support parallel simulation of discrete events. It is a complete implementation of the Time Warp software mechanism, which implements a distributed protocol for virtual synchronization based on rollback of processes and annihilation of messages. TWOS supports simulations and other computations in which both virtual time and dynamic load balancing are used. The program utilizes the underlying resources of the operating system. Written in the C programming language.
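The rollback mechanism at the heart of Time Warp can be sketched in a few lines. This is an illustrative toy, not TWOS itself; it omits anti-messages and the re-insertion of rolled-back events into the input queue.

import copy

class TimeWarpLP:
    """Minimal optimistic logical process: executes events eagerly, saves
    state snapshots, and rolls back when a straggler (an event with a
    timestamp in its past) arrives."""
    def __init__(self):
        self.lvt = 0.0                 # local virtual time
        self.state = {"count": 0}
        self.log = [(0.0, copy.deepcopy(self.state))]   # (time, snapshot)

    def execute(self, t, delta):
        if t < self.lvt:
            self.rollback(t)
        self.state["count"] += delta
        self.lvt = t
        self.log.append((t, copy.deepcopy(self.state)))

    def rollback(self, t):
        while self.log and self.log[-1][0] >= t:   # discard superseded snapshots
            self.log.pop()
        self.lvt, self.state = self.log[-1][0], copy.deepcopy(self.log[-1][1])

lp = TimeWarpLP()
for t, d in [(1.0, 1), (3.0, 1), (5.0, 1), (2.0, 10)]:   # 2.0 is a straggler
    lp.execute(t, d)
print(lp.lvt, lp.state)   # 2.0 {'count': 11}: t=3 and t=5 were undone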
NASA Astrophysics Data System (ADS)
Coyne, Kevin Anthony
The safe operation of complex systems such as nuclear power plants requires close coordination between the human operators and plant systems. In order to maintain an adequate level of safety following an accident or other off-normal event, the operators often are called upon to perform complex tasks during dynamic situations with incomplete information. The safety of such complex systems can be greatly improved if the conditions that could lead operators to make poor decisions and commit erroneous actions during these situations can be predicted and mitigated. The primary goal of this research project was the development and validation of a cognitive model capable of simulating nuclear plant operator decision-making during accident conditions. Dynamic probabilistic risk assessment methods can improve the prediction of human error events by providing rich contextual information and an explicit consideration of feedback arising from man-machine interactions. The Accident Dynamics Simulator paired with the Information, Decision, and Action in a Crew context cognitive model (ADS-IDAC) shows promise for predicting situational contexts that might lead to human error events, particularly knowledge driven errors of commission. ADS-IDAC generates a discrete dynamic event tree (DDET) by applying simple branching rules that reflect variations in crew responses to plant events and system status changes. Branches can be generated to simulate slow or fast procedure execution speed, skipping of procedure steps, reliance on memorized information, activation of mental beliefs, variations in control inputs, and equipment failures. Complex operator mental models of plant behavior that guide crew actions can be represented within the ADS-IDAC mental belief framework and used to identify situational contexts that may lead to human error events. This research increased the capabilities of ADS-IDAC in several key areas. The ADS-IDAC computer code was improved to support additional branching events and provide a better representation of the IDAC cognitive model. An operator decision-making engine capable of responding to dynamic changes in situational context was implemented. The IDAC human performance model was fully integrated with a detailed nuclear plant model in order to realistically simulate plant accident scenarios. Finally, the improved ADS-IDAC model was calibrated, validated, and updated using actual nuclear plant crew performance data. This research led to the following general conclusions: (1) A relatively small number of branching rules are capable of efficiently capturing a wide spectrum of crew-to-crew variabilities. (2) Compared to traditional static risk assessment methods, ADS-IDAC can provide a more realistic and integrated assessment of human error events by directly determining the effect of operator behaviors on plant thermal hydraulic parameters. (3) The ADS-IDAC approach provides an efficient framework for capturing actual operator performance data such as timing of operator actions, mental models, and decision-making activities.
Using the Statecharts paradigm for simulation of patient flow in surgical care.
Sobolev, Boris; Harel, David; Vasilakis, Christos; Levy, Adrian
2008-03-01
Computer simulation of patient flow has been used extensively to assess the impacts of changes in the management of surgical care. However, little research is available on the utility of existing modeling techniques. The purpose of this paper is to examine the capacity of Statecharts, a system of graphical specification, for constructing a discrete-event simulation model of the perioperative process. The Statecharts specification paradigm was originally developed for representing reactive systems by extending the formalism of finite-state machines through notions of hierarchy, parallelism, and event broadcasting. Hierarchy permits subordination between states so that one state may contain other states. Parallelism permits more than one state to be active at any given time. Broadcasting of events allows one state to detect changes in another state. In the context of the perioperative process, hierarchy provides the means to describe steps within activities and to cluster related activities, parallelism provides the means to specify concurrent activities, and event broadcasting provides the means to trigger a series of actions in one activity according to transitions that occur in another activity. Combined with hierarchy and parallelism, event broadcasting offers a convenient way to describe the interaction of concurrent activities. We applied the Statecharts formalism to describe the progress of individual patients through surgical care as a series of asynchronous updates in patient records generated in reaction to events produced by parallel finite-state machines representing concurrent clinical and managerial activities. We conclude that Statecharts capture successfully the behavioral aspects of surgical care delivery by specifying permissible chronology of events, conditions, and actions.
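A toy flavor of parallelism and event broadcasting, with invented state and event names rather than the paper's model, might look like this: two parallel regions advance independently, and a transition in the clinical region broadcasts events that drive the managerial region.

class Chart:
    """Toy Statecharts flavor: two parallel regions (clinical, managerial);
    a transition in one region broadcasts an event consumed by the other."""
    def __init__(self):
        self.clinical = "pre_op"
        self.managerial = "room_idle"
        self.queue = []                     # broadcast event queue

    def fire(self, event):
        self.queue.append(event)
        while self.queue:
            ev = self.queue.pop(0)
            # Parallelism: both regions see every broadcast event.
            if self.clinical == "pre_op" and ev == "patient_ready":
                self.clinical = "surgery"
                self.queue.append("room_occupied")   # broadcast to the other region
            elif self.clinical == "surgery" and ev == "procedure_done":
                self.clinical = "recovery"
                self.queue.append("room_released")
            if self.managerial == "room_idle" and ev == "room_occupied":
                self.managerial = "room_in_use"
            elif self.managerial == "room_in_use" and ev == "room_released":
                self.managerial = "cleanup"

c = Chart()
for ev in ["patient_ready", "procedure_done"]:
    c.fire(ev)
    print(c.clinical, c.managerial)   # surgery room_in_use, then recovery cleanup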
NASA Astrophysics Data System (ADS)
Akushevich, I.; Filoti, O. F.; Ilyichev, A.; Shumeiko, N.
2012-07-01
The structure and algorithms of the Monte Carlo generator ELRADGEN 2.0 designed to simulate radiative events in polarized ep-scattering are presented. The full set of analytical expressions for the QED radiative corrections is presented and discussed in detail. Algorithmic improvements implemented to provide faster simulation of hard real photon events are described. Numerical tests show high quality of generation of photonic variables and radiatively corrected cross section. The comparison of the elastic radiative tail simulated within the kinematical conditions of the BLAST experiment at MIT BATES shows a good agreement with experimental data.
Catalogue identifier: AELO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1299
No. of bytes in distributed program, including test data, etc.: 11 348
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: All
Operating system: Any
RAM: 1 MB
Classification: 11.2, 11.4
Nature of problem: Simulation of radiative events in polarized ep-scattering.
Solution method: Monte Carlo simulation according to the distributions of the real photon kinematic variables that are calculated by the covariant method of QED radiative correction estimation. The approach provides rather fast and accurate generation.
Running time: The simulation of 10^8 radiative events for itest:=1 takes up to 52 seconds on a Pentium(R) Dual-Core 2.00 GHz processor.
Integration of scheduling and discrete event simulation systems to improve production flow planning
NASA Astrophysics Data System (ADS)
Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.
2016-08-01
The increased availability of data and computer-aided technologies such as MRP I/II, ERP and MES systems allows producers to be more adaptive to market dynamics and to improve production scheduling. Integration of production scheduling and computer modelling, simulation and visualization systems can be useful in the analysis of production system constraints related to the efficiency of manufacturing systems. An integration methodology based on a semi-automatic model generation method is proposed to eliminate problems associated with model complexity and the labour-intensive, time-consuming process of simulation model creation. Data mapping and data transformation techniques for the proposed method have been applied. This approach is illustrated through examples of practical implementation of the proposed method using the KbRS scheduling system and the Enterprise Dynamics simulation system.
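The data mapping step can be pictured as a small transformation from a scheduler's output table to a time-ordered event list for the simulator. The row format and event names below are hypothetical, not the KbRS or Enterprise Dynamics formats.

# Hypothetical mapping from a scheduler's output table to a discrete event
# simulator's input: each scheduled operation becomes a pair of timed events.
schedule = [
    {"job": "J1", "machine": "M1", "start": 0.0, "dur": 4.0},
    {"job": "J2", "machine": "M1", "start": 4.0, "dur": 3.0},
    {"job": "J1", "machine": "M2", "start": 4.0, "dur": 2.0},
]

def to_events(rows):
    events = []
    for r in rows:
        events.append((r["start"], "seize", r["machine"], r["job"]))
        events.append((r["start"] + r["dur"], "release", r["machine"], r["job"]))
    return sorted(events)          # simulators consume events in time order

for ev in to_events(schedule):
    print(ev)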
NASA Astrophysics Data System (ADS)
Bastin, Sophie; Champollion, Cédric; Bock, Olivier; Drobinski, Philippe; Masson, Frédéric
2005-03-01
Global Positioning System (GPS) tomography analyses of water vapor, complemented by high-resolution numerical simulations are used to investigate a Mistral/sea breeze event in the region of Marseille, France, during the ESCOMPTE experiment. This is the first time GPS tomography has been used to validate the three-dimensional water vapor concentration from numerical simulation, and to analyze a small-scale meteorological event. The high spatial and temporal resolution of GPS analyses provides a unique insight into the evolution of the vertical and horizontal distribution of water vapor during the Mistral/sea-breeze transition.
The simulation library of the Belle II software system
NASA Astrophysics Data System (ADS)
Kim, D. Y.; Ritter, M.; Bilka, T.; Bobrov, A.; Casarosa, G.; Chilikin, K.; Ferber, T.; Godang, R.; Jaegle, I.; Kandra, J.; Kodys, P.; Kuhr, T.; Kvasnicka, P.; Nakayama, H.; Piilonen, L.; Pulvermacher, C.; Santelj, L.; Schwenker, B.; Sibidanov, A.; Soloviev, Y.; Starič, M.; Uglov, T.
2017-10-01
SuperKEKB, the next generation B factory, has been constructed in Japan as an upgrade of KEKB. This brand new e+ e- collider is expected to deliver a very large data set for the Belle II experiment, which will be 50 times larger than the previous Belle sample. Both the triggered physics event rate and the background event rate will increase by at least a factor of 10 over the previous ones, creating a challenging data-taking environment for the Belle II detector. The software system of the Belle II experiment is designed to execute this ambitious plan. A full detector simulation library, which is a part of the Belle II software system, was created based on Geant4 and has been tested thoroughly. Recently the library was upgraded with Geant4 version 10.1. The library behaves as expected and is actively used to produce Monte Carlo data sets for various studies. In this paper, we explain the structure of the simulation library and the various interfaces to other packages, including geometry and beam background simulation.
μπ: A Scalable and Transparent System for Simulating MPI Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2010-01-01
μπ is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of μπ are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by μπ is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, μπ has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, due to its use of a purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, μsik. In the largest runs, μπ has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
NASA Astrophysics Data System (ADS)
Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.
2017-12-01
Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.
Perkins, Casey; Muller, George
2015-10-08
The number of connections between physical and cyber security systems is rapidly increasing due to centralized control from automated and remotely connected means. As the number of interfaces between systems continues to grow, the interactions and interdependencies between them cannot be ignored. Historically, physical and cyber vulnerability assessments have been performed independently. This independent evaluation omits important aspects of the integrated system, where the impacts resulting from malicious or opportunistic attacks are not easily known or understood. Here, we describe a discrete event simulation model that uses information about integrated physical and cyber security systems, attacker characteristics and simple response rules to identify key safeguards that limit an attacker's likelihood of success. Key features of the proposed model include comprehensive data generation to support a variety of sophisticated analyses, and full parameterization of safeguard performance characteristics and attacker behaviours to evaluate a range of scenarios. Lastly, we also describe the core data requirements and the network of networks that serves as the underlying simulation structure.
Wu, Ching-Han; Hwang, Kevin P
2009-12-01
To improve ambulance response time, matching ambulance availability with the emergency demand is crucial. To maintain the standard of 90% of response times within 9 minutes, the authors introduce a discrete-event simulation method to estimate the threshold for expanding the ambulance fleet when demand increases and to find the optimal dispatching strategies when provisional events create temporary decreases in ambulance availability. The simulation model was developed with information from the literature. Although the development was theoretical, the model was validated on the emergency medical services (EMS) system of Tainan City. The data are divided: one part is for model development, and the other for validation. For increasing demand, the effect was modeled on response time when call arrival rates increased. For temporary availability decreases, the authors simulated all possible alternatives of ambulance deployment in accordance with the number of out-of-routine-duty ambulances and the durations of three types of mass gatherings: marathon races (06:00-10:00 hr), rock concerts (18:00-22:00 hr), and New Year's Eve parties (20:00-01:00 hr). Statistical analysis confirmed that the model reasonably represented the actual Tainan EMS system. The response-time standard could not be reached when the incremental ratio of call arrivals exceeded 56%, which is the threshold for the Tainan EMS system to expand its ambulance fleet. When provisional events created temporary availability decreases, the Tainan EMS system could spare at most two ambulances from the standard configuration, except between 20:00 and 01:00, when it could spare three. The model also demonstrated that the current Tainan EMS has two excess ambulances that could be dropped. The authors suggest dispatching strategies to minimize the response times in routine daily emergencies. Strategies of capacity management based on this model improved response times. The more ambulances that are out of routine duty, the better the performance of the optimal strategies that are based on this model.
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure create complex analyses for which analytic solutions are available only in simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
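A minimal failure/repair Monte Carlo in this spirit, for a single component with assumed exponential dwell times, looks like the sketch below; system-level failure definitions would combine several such components.

import random

def availability(mtbf=100.0, mttr=8.0, horizon=1e6, seed=5):
    """Monte Carlo of one component alternating failure and repair events."""
    rng = random.Random(seed)
    t, up_time, up = 0.0, 0.0, True
    while t < horizon:
        dwell = rng.expovariate(1.0 / (mtbf if up else mttr))
        dwell = min(dwell, horizon - t)   # clip the last dwell at the horizon
        if up:
            up_time += dwell
        t += dwell
        up = not up
    return up_time / horizon

print(f"simulated availability: {availability():.4f}")   # ~ mtbf/(mtbf+mttr) = 0.926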
Dynamic partitioning for hybrid simulation of the bistable HIV-1 transactivation network.
Griffith, Mark; Courtney, Tod; Peccoud, Jean; Sanders, William H
2006-11-15
The stochastic kinetics of a well-mixed chemical system, governed by the chemical Master equation, can be simulated using the exact methods of Gillespie. However, these methods do not scale well as systems become more complex and larger models are built to include reactions with widely varying rates, since the computational burden of simulation increases with the number of reaction events. Continuous models may provide an approximate solution and are computationally less costly, but they fail to capture the stochastic behavior of small populations of macromolecules. In this article we present a hybrid simulation algorithm that dynamically partitions the system into subsets of continuous and discrete reactions, approximates the continuous reactions deterministically as a system of ordinary differential equations (ODE) and uses a Monte Carlo method for generating discrete reaction events according to a time-dependent propensity. Our approach to partitioning is improved such that we dynamically partition the system of reactions, based on a threshold relative to the distribution of propensities in the discrete subset. We have implemented the hybrid algorithm in an extensible framework, utilizing two rigorous ODE solvers to approximate the continuous reactions, and use an example model to illustrate the accuracy and potential speedup of the algorithm when compared with exact stochastic simulation. Software and benchmark models used for this publication can be made available upon request from the authors.
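The partitioning rule can be sketched on a toy gene/protein model (all rate constants and the threshold theta are invented, and forward Euler stands in for the rigorous ODE solvers): each step, any reaction channel whose propensity exceeds the threshold is integrated deterministically, and the rest fire as discrete events.

import math, random

def simulate(T=100.0, dt=0.01, theta=10.0, seed=2):
    rng = random.Random(seed)
    g, p = 1, 0.0       # slow gene switch (discrete); protein, often abundant
    k_on, k_off, k_prod, k_deg = 0.02, 0.02, 40.0, 0.4
    for _ in range(int(T / dt)):
        a_toggle = k_on if g == 0 else k_off
        a_prod, a_deg = k_prod * g, k_deg * p
        dp = 0.0
        # Dynamic partitioning: compare each propensity to the threshold.
        if a_prod > theta:
            dp += a_prod * dt                          # continuous production
        elif rng.random() < 1 - math.exp(-a_prod * dt):
            dp += 1                                    # discrete production event
        if a_deg > theta:
            dp -= a_deg * dt                           # continuous degradation
        elif rng.random() < 1 - math.exp(-a_deg * dt):
            dp -= 1                                    # discrete degradation event
        p = max(0.0, p + dp)
        if rng.random() < 1 - math.exp(-a_toggle * dt):
            g = 1 - g                                  # rare toggling stays discrete
    return g, p

print(simulate())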
Terminal Dynamics Approach to Discrete Event Systems
NASA Technical Reports Server (NTRS)
Zak, Michail; Meyers, Ronald
1995-01-01
This paper presents and discusses a mathematical formalism for simulation of discrete event dynamics (DED) - a special type of 'man-made' system designed to serve specific purposes of information processing. The main objective of this work is to demonstrate that the mathematical formalism for DED can be based upon a terminal model of Newtonian dynamics which allows one to relax Lipschitz conditions at some discrete points.
Henricksen, Jared W; Altenburg, Catherine; Reeder, Ron W
2017-10-01
Despite efforts to prepare a psychologically safe environment, simulation participants are occasionally psychologically distressed. Instructing simulation educators about participant psychological risks and having a participant psychological distress action plan available to simulation educators may assist them as they seek to keep all participants psychologically safe. A Simulation Participant Psychological Safety Algorithm was designed to aid simulation educators as they debrief simulation participants perceived to have psychological distress and categorize these events as mild (level 1), moderate (level 2), or severe (level 3). A prebrief dedicated to creating a psychologically safe learning environment was held constant. The algorithm was used for 18 months in an active pediatric simulation program. Data collected included level of participant psychological distress as perceived and categorized by the simulation team using the algorithm, type of simulation that participants went through, who debriefed, and timing of when psychological distress was perceived to occur during the simulation session. The Kruskal-Wallis test was used to evaluate the relationship between events and simulation type, events and simulation educator team who debriefed, and timing of event during the simulation session. A total of 3900 participants went through 399 simulation sessions between August 1, 2014, and January 26, 2016. Thirty-four (<1%) simulation participants from 27 sessions (7%) were perceived to have an event. One participant was perceived to have a severe (level 3) psychological distress event. Events occurred more commonly in high-intensity simulations, with novice learners and with specific educator teams. Simulation type and simulation educator team were associated with occurrence of events (P < 0.001). There was no association between event timing and event level. Severe psychological distress as categorized by simulation personnel using the Simulation Participant Psychological Safety Algorithm is rare, with mild and moderate events being more common. The algorithm was used to teach simulation educators how to assist a participant who may be psychologically distressed and document perceived event severity.
DOT National Transportation Integrated Search
1981-01-01
The System Availability Model (SAM) is a system-level model which provides measures of vehicle and passenger availability. The SAM operates in conjunction with the AGT Discrete Event Simulation Model (DESM). The DESM output is the normal source of th...
Scaling an urban emergency evacuation framework : challenges and practices.
DOT National Transportation Integrated Search
2014-01-01
Critical infrastructure disruption, caused by severe weather events, natural disasters, terrorist : attacks, etc., has significant impacts on urban transportation systems. We built a computational : framework to simulate urban transportation systems ...
Extreme weather and climate events with ecological relevance: a review.
Ummenhofer, Caroline C; Meehl, Gerald A
2017-06-19
Robust evidence exists that certain extreme weather and climate events, especially daily temperature and precipitation extremes, have changed in regard to intensity and frequency over recent decades. These changes have been linked to human-induced climate change, while the degree to which climate change impacts an individual extreme climate event (ECE) is more difficult to quantify. Rapid progress in event attribution has recently been made through improved understanding of observed and simulated climate variability, methods for event attribution and advances in numerical modelling. Attribution for extreme temperature events is stronger compared with other event types, notably those related to the hydrological cycle. Recent advances in the understanding of ECEs, both in observations and their representation in state-of-the-art climate models, open new opportunities for assessing their effect on human and natural systems. Improved spatial resolution in global climate models and advances in statistical and dynamical downscaling now provide climatic information at appropriate spatial and temporal scales. Together with the continued development of Earth System Models that simulate biogeochemical cycles and interactions with the biosphere at increasing complexity, these make it possible to develop a mechanistic understanding of how ECEs affect biological processes, ecosystem functioning and adaptation capabilities. Limitations in the observational network, both for physical climate system parameters and even more so for long-term ecological monitoring, have hampered progress in understanding bio-physical interactions across a range of scales. New opportunities for assessing how ECEs modulate ecosystem structure and functioning arise from better scientific understanding of ECEs coupled with technological advances in observing systems and instrumentation. This article is part of the themed issue 'Behavioural, ecological and evolutionary responses to extreme climatic events'. © 2017 The Author(s).
Pyranometer offsets triggered by ambient meteorology: insights from laboratory and field experiments
NASA Astrophysics Data System (ADS)
Oswald, Sandro M.; Pietsch, Helga; Baumgartner, Dietmar J.; Weihs, Philipp; Rieder, Harald E.
2017-03-01
This study investigates the effects of ambient meteorology on the accuracy of radiation (R) measurements performed with pyranometers contained in various heating and ventilation systems (HV-systems). It focuses particularly on instrument offsets observed following precipitation events. To quantify pyranometer responses to precipitation, a series of controlled laboratory experiments as well as two targeted field campaigns were performed in 2016. The results indicate that precipitation (as simulated by spray tests or observed under ambient conditions) significantly affects the thermal environment of the instruments and thus their stability. Statistical analyses of laboratory experiments showed that precipitation triggers zero offsets of -4 W m-2 or more, independent of the HV-system. Similar offsets were observed in field experiments under ambient environmental conditions, indicating a clear exceedance of BSRN (Baseline Surface Radiation Network) targets following precipitation events. All pyranometers required substantial time to return to their initial signal states after the simulated precipitation events. Therefore, for BSRN-class measurements, the recommendation would be to flag the radiation measurements during a natural precipitation event and 90 min after it in nighttime conditions. Further daytime experiments show pyranometer offsets of 50 W m-2 or more in comparison to the reference system. As they show a substantially faster recovery, the recommendation would be to flag the radiation measurements within a natural precipitation event and 10 min after it in daytime conditions.
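The day/night flagging rule recommended above maps directly onto a small filter. The sketch below is a minimal illustration in Python, assuming precipitation events arrive as (start, end) timestamp pairs; the 90-minute and 10-minute windows come from the abstract, while the data layout and the crude hour-based day test are invented for illustration.

from datetime import datetime, timedelta

NIGHT_WINDOW = timedelta(minutes=90)   # post-event flag window at night
DAY_WINDOW = timedelta(minutes=10)     # post-event flag window in daytime

def flag_radiation(samples, precip_intervals, is_daytime):
    """Return True for each radiation sample that should be flagged."""
    flags = []
    for t in samples:
        window = DAY_WINDOW if is_daytime(t) else NIGHT_WINDOW
        flags.append(any(start <= t <= end + window
                         for start, end in precip_intervals))
    return flags

def is_daytime(t):                     # crude placeholder day test
    return 6 <= t.hour < 18

rain = [(datetime(2016, 5, 1, 2, 0), datetime(2016, 5, 1, 2, 30))]
obs = [datetime(2016, 5, 1, 2, 45), datetime(2016, 5, 1, 5, 0)]
print(flag_radiation(obs, rain, is_daytime))   # [True, False]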
A Systems Approach to Scalable Transportation Network Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2006-01-01
Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.
The impact of bathymetry input on flood simulations
NASA Astrophysics Data System (ADS)
Khanam, M.; Cohen, S.
2017-12-01
Flood prediction and mitigation systems are essential for improving public safety and community resilience worldwide. Hydraulic simulations of flood events are becoming an increasingly efficient tool for studying and predicting flood events and susceptibility. A consistent limitation of hydraulic simulations of riverine dynamics is the lack of information about river bathymetry, as most terrain data record water surface elevation. The impact of this limitation on the accuracy of hydraulic flood simulations has not been well studied over a large range of flood magnitudes and modeling frameworks. Advancing our understanding of this topic is timely given emerging national and global efforts to develop automated flood prediction systems (e.g., the NOAA National Water Center). Here we study the response of flood simulations to the incorporation of different bathymetry and floodplain surveillance sources. Two hydraulic models are compared: Mike-Flood, a 2D hydrodynamic model, and GSSHA, a hydrology/hydraulics model. We test the hypothesis that the impact of including or excluding bathymetry data on hydraulic model results varies in magnitude as a function of river size. This will allow researchers and stakeholders to make more accurate predictions of flood events, providing useful information that will help local communities in vulnerable flood zones mitigate flood hazards. It will also help evaluate the accuracy and efficiency of different modeling frameworks and gauge their dependency on detailed bathymetry input data.
Integrating physically based simulators with Event Detection Systems: Multi-site detection approach.
Housh, Mashor; Ohar, Ziv
2017-03-01
The Fault Detection (FD) problem in control theory concerns monitoring a system to identify when a fault has occurred. Two approaches to FD can be distinguished: signal-processing-based FD and model-based FD. The former develops algorithms to infer faults directly from sensors' readings, while the latter uses a simulation model of the real system to analyze the discrepancy between sensors' readings and the values expected from the simulation model. Most contamination Event Detection Systems (EDSs) for water distribution systems have followed signal-processing-based FD, which relies on analyzing the signals from monitoring stations independently of each other rather than evaluating all stations simultaneously within an integrated network. In this study, we show that a model-based EDS, which utilizes physically based water quality and hydraulic simulation models, can outperform the signal-processing-based EDS. We also show that the model-based EDS can facilitate the development of a Multi-Site EDS (MSEDS), which analyzes the data from all the monitoring stations simultaneously within an integrated network. The advantage of the joint analysis in the MSEDS is expressed by increased detection accuracy (more true positive alarms and fewer false alarms) and shorter detection time. Copyright © 2016 Elsevier Ltd. All rights reserved.
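A hedged sketch of the model-based, multi-site idea: readings from all monitoring stations are compared at once against the values a simulator predicts, and an event is declared when the joint residual grows large. The threshold, the example numbers, and the constant predicted vector below are hypothetical stand-ins for the physically based hydraulic and water quality models the abstract describes.

import numpy as np

def detect_event(readings, predicted, threshold):
    residual = readings - predicted              # discrepancy at every station
    score = np.linalg.norm(residual) / np.sqrt(len(residual))
    return score > threshold                     # one joint decision, not per-station

readings = np.array([0.42, 0.39, 0.75, 0.41])    # e.g. chlorine levels (mg/L)
predicted = np.array([0.40, 0.40, 0.40, 0.40])   # simulator's expected values
print(detect_event(readings, predicted, threshold=0.1))   # True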
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip
This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
LCG MCDB—a knowledgebase of Monte-Carlo simulated events
NASA Astrophysics Data System (ADS)
Belov, S.; Dudko, L.; Galkin, E.; Gusev, A.; Pokorski, W.; Sherstnev, A.
2008-02-01
In this paper we report on LCG Monte-Carlo Data Base (MCDB) and the software which has been developed to operate MCDB. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC Collaborations by experts. In many cases, the modern Monte-Carlo simulation of physical processes requires expert knowledge in Monte-Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase mainly dedicated to accumulating simulated events of this type. The main motivation behind LCG MCDB is to make the sophisticated MC event samples available for various physical groups. All the data from MCDB are accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project. Program summary. Program title: LCG Monte-Carlo Data Base. Catalogue identifier: ADZX_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZX_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public Licence. No. of lines in distributed program, including test data, etc.: 30 129. No. of bytes in distributed program, including test data, etc.: 216 943. Distribution format: tar.gz. Programming language: Perl. Computer: CPU: Intel Pentium 4, RAM: 1 Gb, HDD: 100 Gb. Operating system: Scientific Linux CERN 3/4. RAM: 1 073 741 824 bytes (1 Gb). Classification: 9. External routines: perl >= 5.8.5; Perl modules DBD-mysql >= 2.9004, File::Basename, GD::SecurityImage, GD::SecurityImage::AC, Linux::Statistics, XML::LibXML > 1.6, XML::SAX, XML::NamespaceSupport; Apache HTTP Server >= 2.0.59; mod_auth_external >= 2.2.9; edg-utils-system RPM package; gd >= 2.0.28; rpm package CASTOR-client >= 2.1.2-4; arc-server (optional). Nature of problem: Often, different groups of experimentalists prepare similar samples of particle collision events or turn to the same group of authors of Monte-Carlo (MC) generators to prepare the events. For example, the same MC samples of Standard Model (SM) processes can be employed for investigations either in SM analyses (as a signal) or in searches for new phenomena in Beyond Standard Model analyses (as a background). If the samples are made available publicly and equipped with corresponding and comprehensive documentation, it can speed up cross checks of the samples themselves and the physical models applied. Some event samples require a lot of computing resources for preparation, so a central storage of the samples prevents possible waste of researcher time and computing resources that would otherwise be spent preparing the same events many times. Solution method: Creation of a special knowledgebase (MCDB) designed to keep event samples for the LHC experimental and phenomenological community. The knowledgebase is realized as a separate web-server (http://mcdb.cern.ch). All event samples are kept on tape at CERN. Documentation describing the events is the main content of MCDB. Users can browse the knowledgebase, read and comment on articles (documentation), and download event samples. Authors can upload new event samples, create new articles, and edit their own articles. Restrictions: The software is adapted to solve the problems described in the article; there are no additional restrictions. Unusual features: The software provides a framework to store and document large files with a flexible authentication and authorization system.
Different external storages with large capacity can be used to keep the files. The WEB Content Management System provides all of the necessary interfaces for the authors of the files, end-users and administrators. Running time: Real time operations. References: [1] The main LCG MCDB server, http://mcdb.cern.ch/. [2] P. Bartalini, L. Dudko, A. Kryukov, I.V. Selyuzhenkov, A. Sherstnev, A. Vologdin, LCG Monte-Carlo data base, hep-ph/0404241. [3] J.P. Baud, B. Couturier, C. Curran, J.D. Durand, E. Knezo, S. Occhetti, O. Barring, CASTOR: status and evolution, cs.oh/0305047.
Kittipittayakorn, Cholada; Ying, Kuo-Ching
2016-01-01
Many hospitals are currently paying more attention to patient satisfaction since it is an important service quality index. Many Asian countries' healthcare systems have a mixed-type registration, accepting both walk-in patients and scheduled patients. This complex registration system causes a long patient waiting time in outpatient clinics. Different approaches have been proposed to reduce the waiting time. This study uses the integration of discrete event simulation (DES) and agent-based simulation (ABS) to improve patient waiting time and is the first attempt to apply this approach to solve this key problem faced by orthopedic departments. From the data collected, patient behaviors are modeled and incorporated into a massive agent-based simulation. The proposed approach is an aid for analyzing and modifying orthopedic department processes, allows us to consider far more details, and provides more reliable results. After applying the proposed approach, the total waiting time of the orthopedic department fell from 1246.39 minutes to 847.21 minutes. Thus, using the correct simulation model significantly reduces patient waiting time in an orthopedic department.
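The discrete event half of such a hybrid can be captured in a few lines. The toy event-queue model below, in Python, mixes random walk-in arrivals with fixed appointment times at a single registration desk; every rate and duration is invented, and the agent-based half of the approach would replace the fixed service step with per-patient behavior rules.

import heapq, random

rng = random.Random(0)
events = []                                   # heap of (time in minutes, kind)
t = 0.0
while t < 240:                                # walk-ins over a 4-hour clinic
    t += rng.expovariate(1 / 6)               # one walk-in every ~6 min on average
    heapq.heappush(events, (t, "walk-in"))
for s in range(0, 240, 10):                   # scheduled patients every 10 min
    heapq.heappush(events, (float(s), "scheduled"))

desk_free, total_wait, n = 0.0, 0.0, 0
while events:
    t, kind = heapq.heappop(events)           # next arrival in time order
    start = max(t, desk_free)                 # wait if the desk is still busy
    total_wait += start - t
    n += 1
    desk_free = start + rng.uniform(2, 5)     # registration takes 2-5 min
print(f"{n} patients, mean wait {total_wait / n:.1f} min")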
Parallelized direct execution simulation of message-passing parallel programs
NASA Technical Reports Server (NTRS)
Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.
1994-01-01
As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.
Dual-stage periodic event-triggered output-feedback control for linear systems.
Ruan, Zhen; Chen, Wu-Hua; Lu, Xiaomei
2018-05-01
This paper proposes an event-triggered control framework, called dual-stage periodic event-triggered control (DSPETC), which unifies periodic event-triggered control (PETC) and switching event-triggered control (SETC). Specifically, two period parameters h1 and h2 are introduced to characterize the new event-triggering rule, where h1 denotes the sampling period, while h2 denotes the monitoring period. By choosing some specified values of h2, the proposed control scheme can reduce to the PETC or SETC scheme. In the DSPETC framework, the controlled system is represented as a switched system model and its stability is analyzed via a switching-time-dependent Lyapunov functional. Both the cases with/without network-induced delays are investigated. Simulation and experimental results show that the DSPETC scheme is superior to the PETC scheme and the SETC scheme. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
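A rough illustration of the dual-stage idea, under invented parameters: the state is sampled every h1 seconds, but the event condition is only checked at the slower monitoring period h2, and a transmission happens only when the held value drifts past a relative threshold. The scalar plant, gain, and threshold are placeholders, not the paper's design.

h1, h2, sigma = 0.01, 0.05, 0.05   # sampling period, monitoring period, threshold
a, b, k = 1.0, 1.0, 2.0            # plant x' = a*x + b*u with control u = -k*x_hat
ratio = round(h2 / h1)             # event condition checked every `ratio` samples

x, x_hat = 1.0, 1.0                # true state and last transmitted state
transmissions, steps = 0, 200
for n in range(1, steps + 1):
    u = -k * x_hat                 # controller holds the last transmitted value
    x += h1 * (a * x + b * u)      # Euler step at the sampling period h1
    if n % ratio == 0 and abs(x - x_hat) > sigma * abs(x):
        x_hat = x                  # event fires: transmit a fresh measurement
        transmissions += 1
print(f"{transmissions} transmissions vs {steps} for purely periodic sampling")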
Amplitude modulation of alpha-band rhythm caused by mimic collision: MEG study.
Yokosawa, Koichi; Watanabe, Tatsuya; Kikuzawa, Daichi; Aoyama, Gakuto; Takahashi, Makoto; Kuriki, Shinya
2013-01-01
Detection of a collision risk and avoiding the collision are important for survival. We have been investigating neural responses when humans anticipate a collision or intend to take evasive action by applying collision-simulating images in a predictable manner. Collision-simulating images and control images were presented in random order to 9 healthy male volunteers. A cue signal was also given visually two seconds before each stimulus to enable each participant to anticipate the upcoming stimulus. Magnetoencephalograms (MEG) were recorded with a 76-ch helmet system. The amplitude of alpha band (8-13 Hz) rhythm when anticipating the upcoming collision-simulating image was significantly smaller than that when anticipating control images even just after the cue signal. This result demonstrates that anticipating a negative (dangerous) event induced event-related desynchronization (ERD) of alpha band activity, probably caused by attention. The results suggest the feasibility of detecting endogenous brain activities by monitoring alpha band rhythm and its possible applications to engineering systems, such as an automatic collision evasion system for automobiles.
High resolution modelling of extreme precipitation events in urban areas
NASA Astrophysics Data System (ADS)
Siemerink, Martijn; Volp, Nicolette; Schuurmans, Wytze; Deckers, Dave
2015-04-01
Present-day society needs to adjust to the effects of climate change. More extreme weather conditions are expected, which can lead to longer periods of drought, but also to more extreme precipitation events. Urban water systems are not designed for such extreme events. Most sewer systems are not able to drain the excessive storm water, causing urban flooding. This leads to high economic damage. In order to take appropriate measures against extreme urban storms, detailed knowledge about the behaviour of the urban water system above and below the streets is required. To investigate the behaviour of urban water systems during extreme precipitation events, new assessment tools are necessary. These tools should provide a detailed and integral description of the flow in the full domain of overland runoff, sewer flow, surface water flow and groundwater flow. We developed a new assessment tool, called 3Di, which provides detailed insight in the urban water system. This tool is based on a new numerical methodology that can accurately deal with the interaction between overland runoff, sewer flow and surface water flow. A one-dimensional model for the sewer system and open channel flow is fully coupled to a two-dimensional depth-averaged model that simulates the overland flow. The tool uses a subgrid-based approach in order to take high resolution information of the sewer system and of the terrain into account [1, 2]. The combination of the high resolution information and the subgrid-based approach results in an accurate and efficient modelling tool. It is now possible to simulate entire urban water systems using extremely high resolution (0.5 m x 0.5 m) terrain data in combination with a detailed sewer and surface water network representation. The new tool has been tested in several Dutch cities, such as Rotterdam, Amsterdam and The Hague. We will present the results of an extreme precipitation event in the city of Schiedam (The Netherlands). This city deals with significant soil consolidation and the low-lying areas are prone to urban flooding. The simulation results are compared with measurements in the sewer network. References [1] Stelling, G.S., 2012. Quadtree flood simulations with subgrid digital elevation models. Water Management 165 (WM1): 1329-1354. [2] Casulli, V. and Stelling, G.S., 2013. A semi-implicit numerical model for urban drainage systems. International Journal for Numerical Methods in Fluids, Vol. 73: 600-614. DOI: 10.1002/fld.3817
THYME: Toolkit for Hybrid Modeling of Electric Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James Joseph; Perumalla, Kalyan
2011-01-01
THYME is an object oriented library for building models of wide area control and communications in electric power systems. This software is designed as a module to be used with existing open source simulators for discrete event systems in general and communication systems in particular. THYME consists of a typical model for simulating electro-mechanical transients (e.g., as are used in dynamic stability studies), data handling objects to work with CDF and PTI formatted power flow data, and sample models of discrete sensors and controllers.
ATLAS EventIndex monitoring system using the Kibana analytics and visualization platform
NASA Astrophysics Data System (ADS)
Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration
2016-10-01
The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, on all processing stages. As it consists of different components that depend on other applications (such as distributed storage, and different sources of information) we need to monitor the conditions of many heterogeneous subsystems, to make sure everything is working correctly. This paper describes how we gather information about the EventIndex components and related subsystems: the Producer-Consumer architecture for data collection, health parameters from the servers that run EventIndex components, EventIndex web interface status, and the Hadoop infrastructure that stores EventIndex data. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytic and visualization package, provided by CERN IT Department. EventIndex monitoring is used both by the EventIndex team and ATLAS Distributed Computing shifts crew.
Towards the reliable calculation of residence time for off-lattice kinetic Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Alexander, Kathleen C.; Schuh, Christopher A.
2016-08-01
Kinetic Monte Carlo (KMC) methods have the potential to extend the accessible timescales of off-lattice atomistic simulations beyond the limits of molecular dynamics by making use of transition state theory and parallelization. However, it is a challenge to identify a complete catalog of events accessible to an off-lattice system in order to accurately calculate the residence time for KMC. Here we describe possible approaches to some of the key steps needed to address this problem. These include methods to compare and distinguish individual kinetic events, to deterministically search an energy landscape, and to define local atomic environments. When applied to the ground state Σ5(210) grain boundary in copper, these methods achieve a converged residence time, accounting for the full set of kinetically relevant events for this off-lattice system, with calculable uncertainty.
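For context, the residence time whose convergence is at stake comes from the standard rejection-free KMC step: given a complete catalog of rates for the current state, the expected residence time is the inverse of the total rate, and the next event is drawn in proportion to its rate. The sketch below uses arbitrary rates; the hard part the abstract addresses, building a complete off-lattice catalog, is assumed away here.

import math, random

def kmc_step(rates, rng=random.random):
    """One rejection-free KMC step: (event index, waiting time)."""
    total = sum(rates)
    residence_time = -math.log(rng()) / total   # exponential waiting time, mean 1/total
    r, acc = rng() * total, 0.0
    for i, rate in enumerate(rates):            # pick event i with prob rates[i]/total
        acc += rate
        if r < acc:
            return i, residence_time
    return len(rates) - 1, residence_time       # guard against rounding at the edge

event, dt = kmc_step([1e3, 5e2, 2e4])           # illustrative rates in 1/s
print(f"fired event {event} after {dt:.2e} s")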
Medicanes in an ocean-atmosphere coupled regional climate model
NASA Astrophysics Data System (ADS)
Akhtar, N.; Brauch, J.; Dobler, A.; Béranger, K.; Ahrens, B.
2014-03-01
So-called medicanes (Mediterranean hurricanes) are meso-scale, marine, and warm-core Mediterranean cyclones that exhibit some similarities to tropical cyclones. The strong cyclonic winds associated with medicanes threaten the highly populated coastal areas around the Mediterranean basin. To reduce the risk of casualties and overall negative impacts, it is important to improve the understanding of medicanes with the use of numerical models. In this study, we employ an atmospheric limited-area model (COSMO-CLM) coupled with a one-dimensional ocean model (1-D NEMO-MED12) to simulate medicanes. The aim of this study is to assess the robustness of the coupled model in simulating these extreme events. For this purpose, 11 historical medicane events are simulated using the atmosphere-only model, COSMO-CLM, and coupled model, with different setups (horizontal atmospheric grid-spacings of 0.44°, 0.22°, and 0.08°; with/without spectral nudging, and an ocean grid-spacing of 1/12°). The results show that at high-resolution, the coupled model is able to not only simulate most of medicane events but also improve the track length, core temperature, and wind speed of simulated medicanes compared to the atmosphere-only simulations. The results suggest that the coupled model is more proficient for systemic and detailed studies of historical medicane events, and that this model can be an effective tool for future projections.
Asynchronous sampled-data approach for event-triggered systems
NASA Astrophysics Data System (ADS)
Mahmoud, Magdi S.; Memon, Azhar M.
2017-11-01
While aperiodically triggered network control systems save a considerable amount of communication bandwidth, they also pose challenges such as coupling between control and event-condition design, optimisation of the available resources such as control, communication and computation power, and time-delays due to computation and the communication network. With this motivation, the paper presents separate designs of the control and event-triggering mechanisms, thus simplifying the overall analysis; an asynchronous linear quadratic Gaussian controller, which tackles delays and the aperiodic nature of transmissions; and a novel event mechanism, which compares the cost of the aperiodic system against a reference periodic implementation. The proposed scheme is simulated on a linearised wind turbine model for pitch angle control and the results show significant improvement against the periodic counterpart.
Event-triggered consensus tracking of multi-agent systems with Lur'e nonlinear dynamics
NASA Astrophysics Data System (ADS)
Huang, Na; Duan, Zhisheng; Wen, Guanghui; Zhao, Yu
2016-05-01
In this paper, the distributed consensus tracking problem for networked Lur'e systems is investigated based on event-triggered information interactions. An event-triggered control algorithm is designed with the advantages of reducing controller update frequency and sensor energy consumption. By using tools of the S-procedure and the Lyapunov functional method, some sufficient conditions are derived to guarantee that consensus tracking is achieved under a directed communication topology. Meanwhile, it is shown that Zeno behaviour of triggering time sequences is excluded for the proposed event-triggered rule. Finally, some numerical simulations on coupled Chua's circuits are performed to illustrate the effectiveness of the theoretical algorithms.
Numerical simulation diagnostics of a flash flood event in Jeddah, Saudi Arabia
NASA Astrophysics Data System (ADS)
Samman, Ahmad
On 26 January 2011, a severe storm hit the city of Jeddah, the second largest city in the Kingdom of Saudi Arabia. The storm resulted in heavy rainfall, which produced a flash flood in a short period of time. This event caused at least eleven fatalities and more than 114 injuries. Unfortunately, the observed rainfall data are limited to the weather station at King Abdul Aziz International airport, which is north of the city, while the most extreme precipitation occurred over the southern part of the city. This observation was useful for comparison with simulation results even though it does not reflect the severity of the event. The Regional Atmospheric Modeling System (RAMS) developed at Colorado State University was used to study this storm event. RAMS simulations indicated that a quasi-stationary mesoscale convective system developed over the city of Jeddah and lasted for several hours. It was the source of the huge amount of rainfall. The model computed a total rainfall of more than 110 mm in the southern part of the city, where the flash flood occurred. This precipitation estimate was confirmed by the actual observations of the weather radar. While the annual rainfall in Jeddah during the winter varies from 50 to 100 mm, the amount of rainfall resulting from this storm event exceeded the climatological total annual rainfall. The simulation of this event showed that warm sea surface temperature, combined with high humidity in the lower atmosphere and a large amount of convective available potential energy (CAPE), provided a favorable environment for convection. It also showed the presence of a cyclonic system over the north and eastern parts of the Mediterranean Sea, and a subtropical anti-cyclone over Northeastern Africa that contributed to cold air advection toward the Jeddah area. In addition, an anti-cyclone (blocking) centered over the east and southeastern parts of the Arabian Peninsula and the Arabian Sea produced a low level jet over the southern part of the Red Sea, which transported large water vapor amounts over Jeddah. The simulation results showed that the main driver behind the storm was the interaction between these systems over the city of Jeddah (an urban heat island) that produced strong low-level convergence. Several sensitivity experiments were carried out, showing that other variables could have contributed to storm severity as well. These experiments included simulations in which physiographic properties were altered by removing the water surfaces, the urban heat island environment was removed from the model, and the concentration of cloud condensation nuclei was changed. The results of these sensitivity experiments showed that these properties have significant effects on storm formation and severity.
NASA Astrophysics Data System (ADS)
Goswami, B. B.; Khouider, B.; Phani, R.; Mukhopadhyay, P.; Majda, A.
2017-01-01
To better represent organized convection in the Climate Forecast System version 2 (CFSv2), a stochastic multicloud model (SMCM) parameterization is adopted and a 15-year climate run is made. The last 10 years of the simulation are analyzed here. While retaining an equally good mean state (if not better) as the parent model, the CFS-SMCM simulation shows significant improvement in synoptic and intraseasonal variability. The CFS-SMCM provides a better account of convectively coupled equatorial waves and the Madden-Julian oscillation (MJO). The CFS-SMCM exhibits improvements in northward and eastward propagation of the intraseasonal oscillation of convection, including MJO propagation beyond the maritime continent barrier, which is the Achilles heel of coarse-resolution global climate models (GCMs). The distribution of precipitation events is better simulated in CFS-SMCM and spreads naturally toward high-precipitation events. Deterministic GCMs tend to simulate a narrow distribution with too much drizzling precipitation and too few high-precipitation events.
A Simulation of Alternatives for Wholesale Inventory Replenishment
2016-03-01
algorithmic details. The last method is a mixed-integer, linear optimization model. Comparative Inventory Simulation, a discrete event simulation model, is designed to find fill rates achieved for each National Item... Keywords: simulation; event graphs; reorder point; fill-rate; backorder; discrete event simulation; wholesale inventory optimization model.
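As a rough illustration of the fill-rate metric such models report, the sketch below simulates a reorder-point/order-quantity (R, Q) policy against random daily demand and measures the fraction of demand filled from stock. All parameters are invented, and unmet demand is simply lost rather than backordered.

import random

def simulate_fill_rate(R=20, Q=50, lead_time=5, days=10_000, seed=1):
    rng = random.Random(seed)
    on_hand = R + Q
    arrivals = {}                               # day -> quantity due to arrive
    filled = demanded = 0
    for day in range(days):
        on_hand += arrivals.pop(day, 0)         # receive any order due today
        demand = rng.randint(0, 8)              # toy daily demand, mean 4
        demanded += demand
        filled += min(demand, on_hand)
        on_hand = max(on_hand - demand, 0)
        position = on_hand + sum(arrivals.values())   # include outstanding orders
        if position <= R:                       # reorder-point policy
            arrivals[day + lead_time] = arrivals.get(day + lead_time, 0) + Q
    return filled / demanded

print(f"fill rate: {simulate_fill_rate():.3f}")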
Methodology for Collision Risk Assessment of an Airspace Flow Corridor Concept
NASA Astrophysics Data System (ADS)
Zhang, Yimin
This dissertation presents a methodology to estimate the collision risk associated with a future air-transportation concept called the flow corridor. The flow corridor is a Next Generation Air Transportation System (NextGen) concept to reduce congestion and increase throughput in en-route airspace. The flow corridor has the potential to increase throughput by reducing the controller workload required to manage aircraft outside the corridor and by reducing separation of aircraft within the corridor. The analysis in this dissertation is a starting point for the safety analysis required by the Federal Aviation Administration (FAA) to eventually approve and implement the corridor concept. This dissertation develops a hybrid risk analysis methodology that combines Monte Carlo simulation with dynamic event tree analysis. The analysis captures the unique characteristics of the flow corridor concept, including self-separation within the corridor, lane change maneuvers, speed adjustments, and the automated separation assurance system. Monte Carlo simulation is used to model the movement of aircraft in the flow corridor and to identify precursor events that might lead to a collision. Since these precursor events are not rare, standard Monte Carlo simulation can be used to estimate their occurrence rates. Dynamic event trees are then used to model the subsequent series of events that may lead to collision. When two aircraft are on course for a near-mid-air collision (NMAC), the on-board automated separation assurance system provides a series of safety layers to prevent the impending NMAC or collision. Dynamic event trees are used to evaluate the potential failures of these layers in order to estimate the rare-event collision probabilities. The results show that the throughput can be increased by reducing separation to 2 nautical miles while maintaining the current level of safety. A sensitivity analysis shows that the most critical parameters in the model related to the overall collision probability are the minimum separation, the probability that both flights fail to respond to the traffic collision avoidance system, the probability that an NMAC results in a collision, the failure probability of the automatic dependent surveillance-broadcast receiver, and the conflict detection probability.
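The hybrid structure can be shown in miniature: plain Monte Carlo estimates the frequency of the non-rare precursor, and an event tree then chains conditional failure probabilities of the safety layers to reach the rare collision probability. Every number below is a placeholder, not a value from the dissertation.

import random

rng = random.Random(42)
N = 100_000
# Monte Carlo stage: count (non-rare) conflict precursors per simulated flight hour
precursors = sum(rng.random() < 0.002 for _ in range(N))
p_precursor = precursors / N

# Event-tree stage: separation assurance fails AND both crews fail to respond
# AND the resulting NMAC actually becomes a collision (all made-up values)
p_layers = 1e-3 * 1e-2 * 0.1
print(f"collision probability ~ {p_precursor * p_layers:.2e} per flight hour")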
Choi, Yun Ho; Yoo, Sung Jin
2018-06-01
This paper investigates the event-triggered decentralized adaptive tracking problem of a class of uncertain interconnected nonlinear systems with unexpected actuator failures. It is assumed that local control signals are transmitted to local actuators with time-varying faults whenever predefined conditions for triggering events are satisfied. Compared with the existing control-input-based event-triggering strategy for adaptive control of uncertain nonlinear systems, the aim of this paper is to propose a tracking-error-based event-triggering strategy in the decentralized adaptive fault-tolerant tracking framework. The proposed approach can relax drastic changes in control inputs caused by actuator faults in the existing triggering strategy. The stability of the proposed event-triggering control system is analyzed in the Lyapunov sense. Finally, simulation comparisons of the proposed and existing approaches are provided to show the effectiveness of the proposed theoretical result in the presence of actuator faults. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Examining Passenger Flow Choke Points at Airports Using Discrete Event Simulation
NASA Technical Reports Server (NTRS)
Brown, Jeremy R.; Madhavan, Poomima
2011-01-01
The movement of passengers through an airport quickly, safely, and efficiently is the main function of the various checkpoints (check-in, security, etc.) found in airports. Human error combined with other breakdowns in the complex system of the airport can disrupt passenger flow through the airport, leading to lengthy waiting times, missing luggage and missed flights. In this paper we present a model of passenger flow through an airport using discrete event simulation that will provide a closer look into the possible reasons for breakdowns and their implications for passenger flow. The simulation is based on data collected at Norfolk International Airport (ORF). The primary goal of this simulation is to present ways to optimize the work force to keep passenger flow smooth even during peak travel times and for emergency preparedness at ORF in case of adverse events. In this simulation we ran three different scenarios: real world, increased check-in stations, and multiple waiting lines. Increased check-in stations increased waiting time and instantaneous utilization, while the multiple waiting lines decreased both the waiting time and instantaneous utilization. This simulation was able to show how different changes affected the passenger flow through the airport.
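As a sanity check on scenarios like these, a bank of check-in stations can be crudely approximated as an M/M/c queue, for which the Erlang-C formula gives the probability of waiting and the mean queue delay. The arrival and service rates below are invented; the discrete event model above captures effects (multiple lines, human error, breakdowns) that this closed form cannot.

from math import factorial

def erlang_c(lam, mu, c):
    """Probability an arriving passenger must wait in an M/M/c queue."""
    a = lam / mu                              # offered load in Erlangs
    if a >= c:
        return 1.0                            # unstable regime: everyone waits
    top = (a**c / factorial(c)) * (c / (c - a))
    return top / (sum(a**k / factorial(k) for k in range(c)) + top)

lam, mu = 3.0, 0.8                            # passengers/min, services/min per station
for c in (4, 5, 6):
    pw = erlang_c(lam, mu, c)
    wq = pw / (c * mu - lam)                  # mean wait in queue (minutes)
    print(f"{c} stations: P(wait)={pw:.2f}, mean wait={wq:.2f} min")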
Comparison of thunderstorm simulations from WRF-NMM and WRF-ARW models over East Indian Region.
Litta, A J; Mary Ididcula, Sumam; Mohanty, U C; Kiran Prasad, S
2012-01-01
Thunderstorms are typical mesoscale systems dominated by intense convection. Mesoscale models are essential for the accurate prediction of such high-impact weather events. In the present study, an attempt has been made to compare the simulated results of three thunderstorm events using the NMM and ARW model cores of the WRF system and to validate the model results against observations. Both models performed well in capturing stability indices, which are indicators of severe convective activity. Comparison of model-simulated radar reflectivity imageries with observations revealed that the NMM model simulated the propagation of the squall line well, while the squall line movement was slow in ARW. From the model-simulated spatial plots of cloud top temperature, we can see that the NMM model better captured the genesis, intensification, and propagation of the thunder squall than the ARW model. The statistical analysis of rainfall indicates the better performance of NMM over ARW. Comparison of model-simulated thunderstorm-affected parameters with observations showed that NMM performed better than ARW in capturing the sharp rise in humidity and drop in temperature. This suggests that the NMM model has the potential to provide unique and valuable information for severe thunderstorm forecasters over the east Indian region.
Design of 3D simulation engine for oilfield safety training
NASA Astrophysics Data System (ADS)
Li, Hua-Ming; Kang, Bao-Sheng
2015-03-01
Aiming at the demand for rapid custom development of 3D simulation systems for oilfield safety training, this paper designs and implements a 3D simulation engine based on a script-driven method, a multi-layer structure, pre-defined entity objects and high-level tools such as a scene editor, script editor and program loader. A scripting language has been defined to control the system's progress, events and operating results. Training teachers can use this engine to edit 3D virtual scenes, set the properties of entity objects, define the logic scripts of tasks, and produce a 3D simulation training system without any programming skills. By extending entity classes, this engine can be quickly applied to other virtual training areas.
2016-08-01
Figure 9. Discharge time series for the Miller pump system. In C2, the Miller Canal pump system was implicitly simulated by a time series of outflows assigned to model cells. This flow time series was representative of how the pump system would operate during the storm events simulated in this work (USACE 2004). The outflow time series for the Miller...
A conceptual modeling framework for discrete event simulation using hierarchical control structures.
Furian, N; O'Sullivan, M; Walker, C; Vössner, S; Neubacher, D
2015-08-01
Conceptual Modeling (CM) is a fundamental step in a simulation project. Nevertheless, it is only recently that structured approaches towards the definition and formulation of conceptual models have gained importance in the Discrete Event Simulation (DES) community. As a consequence, frameworks and guidelines for applying CM to DES have emerged and discussion of CM for DES is increasing. However, both the organization of model-components and the identification of behavior and system control from standard CM approaches have shortcomings that limit CM's applicability to DES. Therefore, we discuss the different aspects of previous CM frameworks and identify their limitations. Further, we present the Hierarchical Control Conceptual Modeling framework that pays more attention to the identification of a model's system behavior, control policies and dispatching routines and their structured representation within a conceptual model. The framework guides the user step-by-step through the modeling process and is illustrated by a worked example.
Dynamically adaptive data-driven simulation of extreme hydrological flows
NASA Astrophysics Data System (ADS)
Kumar Jain, Pushkar; Mandli, Kyle; Hoteit, Ibrahim; Knio, Omar; Dawson, Clint
2018-02-01
Hydrological hazards such as storm surges, tsunamis, and rainfall-induced flooding are physically complex events that are costly in loss of human life and economic productivity. Many such disasters could be mitigated through improved emergency evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses on the novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model utilizes adaptive mesh refinement (AMR), a process by which a computational mesh can adapt in time and space based on the current state of a simulation. The forward solution is combined with ensemble based data assimilation methods, whereby observations from an event are assimilated into the forward simulation to improve the veracity of the solution, or used to invert for uncertain physical parameters. The novelty in our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested using actual data from the Chile tsunami event of February 27, 2010. These advances offer the promise of significantly transforming data-driven, real-time modeling of hydrological hazards, with potentially broader applications in other science domains.
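The assimilation step can be sketched compactly. Below is a minimal stochastic ensemble-Kalman-style update for a scalar state (say, a water level), offered as a hedged stand-in for the ensemble filtering the authors couple to AMR; the ensemble size, observation, and error statistics are all invented.

import numpy as np

rng = np.random.default_rng(3)
ensemble = rng.normal(2.0, 0.5, size=50)     # forecast water levels (m)
obs, obs_err = 2.6, 0.1                      # observed level and its std dev

P = ensemble.var(ddof=1)                     # forecast error variance from the ensemble
K = P / (P + obs_err**2)                     # Kalman gain
perturbed = obs + rng.normal(0, obs_err, size=ensemble.size)
analysis = ensemble + K * (perturbed - ensemble)
print(f"forecast mean {ensemble.mean():.2f} -> analysis mean {analysis.mean():.2f}")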
Simulations of Sea Level Rise Effects on Complex Coastal Systems
NASA Astrophysics Data System (ADS)
Niedoroda, A. W.; Ye, M.; Saha, B.; Donoghue, J. F.; Reed, C. W.
2009-12-01
It is now established that complex coastal systems with elements such as beaches, inlets, bays, and rivers adjust their morphologies according to time-varying balances between the processes that control the exchange of sediment. Accelerated sea level rise introduces a major perturbation into these sediment-sharing systems. A modeling framework based on the new SL-PR model, an advanced version of the aggregate-scale CST Model, together with the event-scale CMS-2D and CMS-Wave combination, has been used to simulate the recent evolution of a portion of the Florida panhandle coast. This combination of models provides a method to evaluate coefficients in the aggregate-scale model that were previously treated as fitted parameters. That is, by carrying out simulations of a complex coastal system with runs of the event-scale model representing more than a year, it is now possible to directly relate the coefficients in the large-scale SL-PR model to measurable physical parameters in the current and wave fields. This cross-scale modeling procedure has been used to simulate the shoreline evolution at Santa Rosa Island, a long barrier island that houses significant military infrastructure on the northern Gulf Coast. The model has been used to simulate 137 years of measured shoreline change and to extend these results to predictions of future rates of shoreline migration.
Network simulation using the simulation language for alternate modeling (SLAM 2)
NASA Technical Reports Server (NTRS)
Shen, S.; Morris, D. W.
1983-01-01
The simulation language for alternate modeling (SLAM 2) is a general purpose language that combines network, discrete event, and continuous modeling capabilities in a single language system. The efficacy of the system's network modeling is examined and discussed. Examples are given of the symbolism that is used, and an example problem and model are derived. The results are discussed in terms of the ease of programming, special features, and system limitations. The system offers many features which allow rapid model development and provides an informative standardized output. The system also has limitations which may cause undetected errors and misleading reports unless the user is aware of these programming characteristics.
CONFIG: Integrated engineering of systems and their operation
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
This article discusses CONFIG 3, a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operations of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. CONFIG supports integration among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. CONFIG is designed to support integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems.
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2016-01-01
This paper presents a novel adaptive neural network (NN) control of single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.
Simulation of Rate-Related (Dead-Time) Losses In Passive Neutron Multiplicity Counting Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, L.G.; Norman, P.I.; Leadbeater, T.W.
Passive Neutron Multiplicity Counting (PNMC) based on Multiplicity Shift Register (MSR) electronics (a form of time correlation analysis) is a widely used non-destructive assay technique for quantifying spontaneously fissile materials such as Pu. At high event rates, dead-time losses perturb the count rates, with the Singles, Doubles and Triples being increasingly affected. Without correction these perturbations are a major source of inaccuracy in the measured count rates and the assay values derived from them. This paper presents the simulation of dead-time losses and investigates the effect of applying different dead-time models on the observed MSR data. Monte Carlo methods have been used to simulate neutron pulse trains for a variety of source intensities and with ideal detection geometry, providing an event by event record of the time distribution of neutron captures within the detection system. The action of the MSR electronics was modelled in software to analyse these pulse trains. Stored pulse trains were perturbed in software to apply the effects of dead-time according to the chosen physical process; for example, the ideal paralysable (extending) and non-paralysable models with an arbitrary dead-time parameter. Results of the simulations demonstrate the change in the observed MSR data when the system dead-time parameter is varied. In addition, the paralysable and non-paralysable models of dead-time are compared. These results form part of a larger study to evaluate existing dead-time corrections and to extend their application to correlated sources. (authors)
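The pulse-train perturbation described above reduces to a short routine: apply a dead-time tau to an ordered list of event times and keep only the survivors, where under the paralysable model every arrival, recorded or not, extends the dead period. The source rate and tau below are arbitrary; the recorded fractions should land near the textbook n/(1+n*tau) and n*exp(-n*tau) expectations.

import random

def apply_dead_time(times, tau, paralysable):
    recorded, blocked_until = [], -1.0
    for t in times:                       # times must be sorted
        if t >= blocked_until:
            recorded.append(t)
            blocked_until = t + tau
        elif paralysable:
            blocked_until = t + tau       # a lost event still extends the dead period
    return recorded

rng = random.Random(7)
t, train = 0.0, []
while t < 1.0:                            # ~1 s of Poisson events at 1e5 per second
    t += rng.expovariate(1e5)
    train.append(t)

tau = 2e-6
for mode in (False, True):
    kept = apply_dead_time(train, tau, mode)
    label = "paralysable" if mode else "non-paralysable"
    print(f"{label}: {len(kept)}/{len(train)} events recorded")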
DOE Office of Scientific and Technical Information (OSTI.GOV)
Auld, Joshua; Hope, Michael; Ley, Hubert
This paper discusses the development of an agent-based modelling software development kit, and the implementation and validation of a model using it that integrates dynamic simulation of travel demand, network supply and network operations. A description is given of the core utilities in the kit: a parallel discrete event engine, interprocess exchange engine, and memory allocator, as well as a number of ancillary utilities: a visualization library, database IO library, and scenario manager. The overall framework emphasizes the design goals of generality, code agility, and high performance. This framework allows the modeling of several aspects of a transportation system that are typically done with separate stand-alone software applications, in a high-performance and extensible manner. Integrating models such as dynamic traffic assignment and disaggregate demand models has been a long-standing issue for transportation modelers. The integrated approach shows a possible way to resolve this difficulty. The simulation model built from the POLARIS framework is a single, shared-memory process for handling all aspects of the integrated urban simulation. The resulting gains in computational efficiency and performance allow planning models to be extended to include previously separate aspects of the urban system, enhancing the utility of such models from the planning perspective. Initial tests with case studies involving traffic management center impacts on various network events, such as accidents, congestion and weather events, show the potential of the system.
Analysis and Prediction of West African Moist Events during the Boreal Spring of 2009
NASA Astrophysics Data System (ADS)
Mera, Roberto Javier
Weather and climate in Sahelian West Africa are dominated by two major wind systems, the southwesterly West African Monsoon (WAM) and the northeasterly (Harmattan) trade winds. In addition to the agricultural benefit of the WAM, the public health sector is affected given the relationship between the onset of moisture and the end of meningitis outbreaks. Knowledge and prediction of moisture distribution during the boreal spring is vital to the mitigation of meningitis by providing guidance for vaccine dissemination. The goal of the present study is to (a) develop a climatology and conceptual model of the moisture regime during the boreal spring, (b) investigate the role of extra-tropical systems and Convectively Coupled Equatorial Waves (CCEWs) in the modulation of westward-moving synoptic waves and (c) determine the efficacy of a regional model as a tool for predicting moisture variability. Medical reports during 2009, along with continuous meteorological observations at Kano, Nigeria, showed that the advent of high humidity correlated with cessation of the disease. Further analysis of the 2009 boreal spring elucidated the presence of short-term moist events that modulated surface moisture on temporal scales relevant to the health sector. The May moist event (MME) provided insight into interplays among climate anomalies, extra-tropical systems, equatorially trapped waves and westward-propagating synoptic disturbances. The synoptic disturbance initiated on 7 May and traveled westward to the coast by 12 May. There was a marked, semi-stationary moist anomaly in the precipitable water field (kg m-2) east of 10°E through late April and early May that moved westward at the time of the MME. Further inspection revealed that a mid-latitude system may have played a role in increasing the latitudinal amplitude of the MME. CCEWs were also found to have an impact on the MME. A coherent Kelvin wave propagated through the region, providing increased monsoonal flow and heightened convection. A Tropical Depression-type (TD-type) system developed on 7 May at 20°E and traveled westward with the MME. As this system progressed westward it induced important changes in surface moisture. The TD-type and Kelvin waves underwent phase coupling over central Nigeria (8°E), strengthening the westward-moving feature on 9 May. Further evidence is presented that an equatorial Rossby (ER) wave also contributed to the development of the TD-type system. The Weather Research and Forecasting model (WRF) was employed to simulate the environment during 2009 in seasonal and real-time forecast modes. WRF was configured for the 2006 boreal spring, given the increase in meteorological information through the African Monsoon Multidisciplinary Analyses project. The model simulated the moist events but tended to have a dry bias and a 2-day delay of the MME in the seasonal simulation. Real-time simulations were able to simulate the MME better than the seasonal run, temporally and spatially. The ensemble simulations served as a testbed for a new tool for the analysis of ensemble prediction skill called the extended ROC (EROC) method. The EROC retains the appealing simplicity of the traditional ROC method and the ability of the EV method to provide evaluation of the performance of an ensemble climate prediction system (EPS) for a hypothetical end user defined by the cost-loss ratio (μ = C/L). Seasonal simulations varied in their usable skill, with Bamako (Mali) as the location with the highest value.
This study has revealed that moist events could be of crucial importance to meningitis mitigation. The systems constituting the MME represent predictable phenomena that can be forecast days in advance. Real-time model simulations were able to diagnose the event 10 days in advance.
NASA Astrophysics Data System (ADS)
Kawecki, Stacey; Steiner, Allison L.
2018-01-01
We examine how aerosol composition affects precipitation intensity using the Weather Research and Forecasting Model with Chemistry (version 3.6). By changing the prescribed default hygroscopicity values to updated values from laboratory studies, we test model assumptions about the individual component hygroscopicity values of ammonium, sulfate, nitrate, and organic species. We compare a baseline simulation (BASE, using default hygroscopicity values) with four sensitivity simulations (SULF, increasing the sulfate hygroscopicity; ORG, decreasing organic hygroscopicity; SWITCH, using a concentration-dependent hygroscopicity value for ammonium; and ALL, including all three changes) to understand the role of aerosol composition in precipitation during a mesoscale convective system (MCS). Overall, the hygroscopicity changes influence the spatial patterns and intensity of precipitation. Focusing on the maximum precipitation in the model domain downwind of an urban area, we find that changing the individual component hygroscopicities leads to bulk hygroscopicity changes, especially in the ORG simulation. Reducing bulk hygroscopicity (e.g., ORG simulation) initially causes fewer activated drops, weakened updrafts in the midtroposphere, and increased precipitation from larger hydrometeors. Increasing bulk hygroscopicity (e.g., SULF simulation) produces more numerous and smaller cloud drops and increases precipitation. In the ALL simulation, a stronger cold pool and downdrafts lead to precipitation suppression later in the MCS evolution. In this downwind region, the combined changes in hygroscopicity (ALL) reduce the overprediction of intense events (>70 mm d-1) and better capture the range of moderate-intensity (30-60 mm d-1) events. The results of this single MCS analysis suggest that aerosol composition can play an important role in simulating high-intensity precipitation events.
Elliott, Elizabeth J.; Yu, Sungduk; Kooperman, Gabriel J.; ...
2016-05-01
The sensitivities of simulated mesoscale convective systems (MCSs) in the central U.S. to microphysics and grid configuration are evaluated here in a global climate model (GCM) that also permits global-scale feedbacks and variability. Since conventional GCMs do not simulate MCSs, studying their sensitivities in a global framework useful for climate change simulations has not previously been possible. To date, MCS sensitivity experiments have relied on controlled cloud resolving model (CRM) studies with limited domains, which avoid internal variability and neglect feedbacks between local convection and larger-scale dynamics. However, recent work with superparameterized (SP) GCMs has shown that eastward propagating MCS-like events are captured when embedded CRMs replace convective parameterizations. This study uses a SP version of the Community Atmosphere Model version 5 (SP-CAM5) to evaluate MCS sensitivities, applying an objective empirical orthogonal function algorithm to identify MCS-like events, and harmonizing composite storms to account for seasonal and spatial heterogeneity. A five-summer control simulation is used to assess the magnitude of internal and interannual variability relative to 10 sensitivity experiments with varied CRM parameters, including ice fall speed, one-moment and two-moment microphysics, and grid spacing. MCS sensitivities were found to be subtle with respect to internal variability, and indicate that ensembles of over 100 storms may be necessary to detect robust differences in SP-GCMs. Furthermore, these results emphasize that the properties of MCSs can vary widely across individual events, and improving their representation in global simulations with significant internal variability may require comparison to long (multidecadal) time series of observed events rather than single season field campaigns.
2015-02-01
Sustainable design measures such as the use of “green” technology (e.g., photovoltaic panels, solar collection, heat recovery systems, wind turbines, green...) ... explosive test events. During a 1,000-pound explosive test event, the sound pressure level can cause tinnitus (ringing of the ears) with a temporary... quality. In addition, biological simulant testing would only occur when winds are from the south, ensuring lands off the installation would be
Roh, Min K; Gillespie, Dan T; Petzold, Linda R
2010-11-07
The weighted stochastic simulation algorithm (wSSA) was developed by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] to efficiently estimate the probabilities of rare events in discrete stochastic systems. The wSSA uses importance sampling to enhance the statistical accuracy in the estimation of the probability of the rare event. The original algorithm biases the reaction selection step with a fixed importance sampling parameter. In this paper, we introduce a novel method where the biasing parameter is state-dependent. The new method features improved accuracy, efficiency, and robustness.
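The selection-biasing idea is compact enough to sketch. The fragment below is a minimal, illustrative single step of a weighted SSA, not the authors' implementation; `propensities` and `bias` are hypothetical user-supplied functions, and a bias of all ones recovers the ordinary SSA. A state-dependent bias, as proposed in the paper, simply means that `bias` inspects the current state.

```python
import math
import random

def weighted_ssa_step(x, propensities, bias, t):
    """One step of a weighted SSA with a state-dependent selection bias.

    propensities(x) -> list of reaction propensities a_j; bias(x) -> list
    of biasing factors gamma_j (both hypothetical user-supplied functions).
    Returns the new time, the fired reaction index, and the likelihood-
    ratio weight for this step.
    """
    a = propensities(x)
    a0 = sum(a)
    # The waiting time is drawn from the *unbiased* total propensity.
    tau = -math.log(random.random()) / a0
    # Reaction selection is biased: b_j = gamma_j * a_j.
    b = [g * aj for g, aj in zip(bias(x), a)]
    b0 = sum(b)
    r = random.random() * b0
    j, acc = 0, b[0]
    while acc < r:
        j += 1
        acc += b[j]
    # Likelihood ratio corrects the estimator for the biased selection.
    w = (a[j] / a0) / (b[j] / b0)
    return t + tau, j, w
```

Multiplying the per-step weights along a trajectory and averaging the products over many runs yields an unbiased estimate of the rare-event probability.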
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.
We report that many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Lastly, although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2016-09-01
This paper presents an event-triggered near-optimal control scheme for uncertain nonlinear discrete-time systems. Event-driven neurodynamic programming (NDP) is utilized to design the control policy. A neural network (NN)-based identifier, with event-based state and input vectors, is utilized to learn the system dynamics. An actor-critic framework is used to learn the cost function and the optimal control input. The NN weights of the identifier, the critic, and the actor NNs are tuned aperiodically once every triggered instant. An adaptive event-trigger condition to decide the trigger instants is derived, so that a suitable number of events are generated to ensure a desired accuracy of approximation. Near-optimal performance is achieved without using value and/or policy iterations. A detailed analysis of nontrivial inter-event times, with an explicit formula to show the reduction in computation, is also derived. The Lyapunov technique is used in conjunction with the event-trigger condition to guarantee the ultimate boundedness of the closed-loop system. Simulation results are included to verify the performance of the controller. The net result is the development of event-driven NDP.
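As a rough illustration of the event-triggered idea (a fixed-threshold stand-in, not the paper's adaptive NN-based condition), the sketch below recomputes the control input only when the trigger condition fires; `f`, `controller`, and `sigma` are placeholder assumptions.

```python
import numpy as np

def simulate_event_triggered(f, controller, x0, steps, sigma=0.1):
    """Event-triggered loop: the control input is recomputed only when the
    gap between the current state and the last transmitted state violates
    a state-dependent trigger condition."""
    x = np.asarray(x0, dtype=float)
    x_last = x.copy()               # state at the last trigger instant
    u = controller(x_last)
    n_events = 0
    for _ in range(steps):
        # Trigger when the measurement error outgrows a fraction of ||x||.
        if np.linalg.norm(x - x_last) > sigma * np.linalg.norm(x):
            x_last = x.copy()
            u = controller(x_last)  # identifier/actor/critic updates go here
            n_events += 1
        x = f(x, u)                 # plant update under zero-order hold
    return x, n_events
```

The ratio of `n_events` to `steps` is the kind of quantity the paper's inter-event-time analysis bounds analytically.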
Event-chain Monte Carlo algorithms for three- and many-particle interactions
NASA Astrophysics Data System (ADS)
Harland, J.; Michel, M.; Kampmann, T. A.; Kierfeld, J.
2017-02-01
We generalize the rejection-free event-chain Monte Carlo algorithm from many-particle systems with pairwise interactions to systems with arbitrary three- or many-particle interactions. We introduce generalized lifting probabilities between particles and obtain a general set of equations for lifting probabilities, the solution of which guarantees maximal global balance. We validate the resulting three-particle event-chain Monte Carlo algorithms on three different systems by comparison with conventional local Monte Carlo simulations: i) a test system of three particles with a three-particle interaction that depends on the enclosed triangle area; ii) a hard-needle system in two dimensions, where needle interactions constitute three-particle interactions of the needle end points; iii) a semiflexible polymer chain with a bending energy, which constitutes a three-particle interaction of neighboring chain beads. The examples demonstrate that the generalization to many-particle interactions broadens the applicability of event-chain algorithms considerably.
Model-Based Adaptive Event-Triggered Control of Strict-Feedback Nonlinear Systems.
Li, Yuan-Xin; Yang, Guang-Hong
2018-04-01
This paper is concerned with the adaptive event-triggered control problem of nonlinear continuous-time systems in strict-feedback form. By using an event-sampled neural network (NN) to approximate the unknown nonlinear function, an adaptive model and an associated event-triggered controller are designed by exploiting the backstepping method. In the proposed method, the feedback signals and the NN weights are aperiodically updated only when the event-triggered condition is violated. A positive lower bound on the minimum intersample time is guaranteed to avoid an accumulation point. The closed-loop stability of the resulting nonlinear impulsive dynamical system is rigorously proved via Lyapunov analysis under an adaptive event sampling condition. Compared with the traditional adaptive backstepping design with a fixed sample period, the event-triggered method samples the state and updates the NN weights only when necessary. Therefore, the number of transmissions can be significantly reduced. Finally, two simulation examples are presented to show the effectiveness of the proposed control method.
Predicting System Accidents with Model Analysis During Hybrid Simulation
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Fleming, Land D.; Throop, David R.
2002-01-01
Standard discrete event simulation is commonly used to identify system bottlenecks and starving and blocking conditions in resources and services. The CONFIG hybrid discrete/continuous simulation tool can simulate such conditions in combination with inputs external to the simulation. This provides a means for evaluating the vulnerability of a system's design, operating procedures, and control software to system accidents. System accidents are brought about by complex unexpected interactions among multiple system failures, faulty or misleading sensor data, and inappropriate responses of human operators or software. The flows of resource and product materials play a central role in the hazardous situations that may arise in fluid transport and processing systems. We describe the capabilities of CONFIG for simulation-time linear circuit analysis of fluid flows in the context of model-based hazard analysis. We focus on how CONFIG simulates the static stresses in flow systems. Unlike other flow-related properties, static stresses (or static potentials) cannot be represented by a set of state equations. The distribution of static stresses is dependent on the specific history of operations performed on a system. We discuss the use of this type of information in hazard analysis of system designs.
NASA Astrophysics Data System (ADS)
Ulutas, E.; Inan, A.; Annunziato, A.
2012-06-01
This study analyzes the response of the Global Disasters Alerts and Coordination System (GDACS) in relation to a case study: the Kepulauan Mentawai earthquake and related tsunami, which occurred on 25 October 2010. The GDACS, developed by the European Commission Joint Research Centre, combines existing web-based disaster information management systems with the aim of alerting the international community in case of major disasters. The tsunami simulation system is an integral part of the GDACS. In more detail, the study aims to assess the tsunami hazard on the Mentawai and Sumatra coasts: the tsunami heights and arrival times have been estimated employing three propagation models based on long wave theory. The analysis was performed in three stages: (1) pre-calculated simulations using the tsunami scenario database for that region, used by the GDACS system to estimate the alert level; (2) near-real-time simulated tsunami forecasts, automatically performed by the GDACS system whenever a new earthquake is detected by the seismological data providers; and (3) post-event tsunami calculations using the GCMT (Global Centroid Moment Tensor) fault mechanism solutions proposed by the US Geological Survey (USGS) for this event. The GDACS system estimates the alert level based on the first type of calculations and on that basis sends alert messages to its users; the second type of calculations is available within 30-40 min after the notification of the event but does not change the estimated alert level. The third type of calculations is performed to improve the initial estimations and to better understand the extent of the possible damage. The automatic alert level for the earthquake was given between Green and Orange Alert, which, in the logic of GDACS, means no or moderate need of international humanitarian assistance; however, the earthquake generated 3 to 9 m of tsunami run-up along the southwestern coasts of the Pagai Islands, where 431 people died. The post-event calculations indicated medium-high humanitarian impacts.
NASA Astrophysics Data System (ADS)
Ricciuto, D. M.; Warren, J.; Guha, A.
2017-12-01
While carbon and energy fluxes in current Earth system models generally have reasonable instantaneous responses to extreme temperature and precipitation events, they often do not adequately represent the long-term impacts of these events. For example, simulated net primary productivity (NPP) may decrease during an extreme heat wave or drought, but may recover rapidly to pre-event levels following the conclusion of the extreme event. However, field measurements indicate that long-lasting damage to leaves and other plant components often occurs, potentially affecting the carbon and energy balance for months after the extreme event. The duration and frequency of such extreme conditions are likely to shift in the future, and it is therefore critical for Earth system models to better represent these processes for more accurate predictions of future vegetation productivity and land-atmosphere feedbacks. Here we modify the structure of the Accelerated Climate Model for Energy (ACME) land surface model to represent long-term impacts and test the improved model against observations from experiments that applied extreme conditions in growth chambers. Additionally, we test the model against eddy covariance measurements that followed extreme conditions at selected locations in North America, and against satellite-measured vegetation indices following regional extreme events.
Modeling the Historical Flood Events in France
NASA Astrophysics Data System (ADS)
Ali, Hani; Blaquière, Simon
2017-04-01
We will present simulation results for different scenarios based on the flood model developed by the AXA Global P&C CAT Modeling team. The model uses a Digital Elevation Model (DEM) with 75 m resolution, a hydrographic system (DB Carthage), daily rainfall data from "Météo France", water levels from "HYDRO Banque", the French hydrological database (www.hydro.eaufrance.fr), for more than 1500 stations, a hydrological model from IRSTEA, and an in-house hydraulic tool. In particular, the model re-simulates the most important and costly flood events that occurred in France during the past decade: we will present the re-simulated meteorological conditions since 1964 and estimate the insurance loss incurred on the current AXA portfolio of individual risks.
Hierarchical Engine for Large-scale Infrastructure Co-Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-04-24
HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
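The integration of event-driven and time-series federates amounts to a time-synchronization loop. The following is a conceptual broker sketch written against an assumed federate interface (`next_time`, `advance`); it illustrates the idea only and is not the HELICS API, which coordinates through per-federate time requests rather than a central loop.

```python
def cosimulate(federates, t_end):
    """Conceptual broker loop mixing event-driven and time-stepped federates.
    Each federate is assumed to expose next_time() -> float (its next event
    or next fixed step) and advance(t) (execute up to t, publish values)."""
    while True:
        # The smallest requested time is safe to grant: no federate can
        # receive data from another federate's past.
        t = min(f.next_time() for f in federates)
        if t >= t_end:
            break
        for f in federates:
            if f.next_time() <= t:
                f.advance(t)  # execute events / take a step at time t
```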
Discrete Event Modeling and Massively Parallel Execution of Epidemic Outbreak Phenomena
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S; Seal, Sudip K
2011-01-01
In complex phenomena such as epidemiological outbreaks, the intensity of inherent feedback effects and the significant role of transients in the dynamics make simulation the only effective method for proactive, reactive or post-facto analysis. The spatial scale, runtime speed, and behavioral detail needed in detailed simulations of epidemic outbreaks make it necessary to use large-scale parallel processing. Here, an optimistic parallel execution of a new discrete event formulation of a reaction-diffusion simulation model of epidemic propagation is presented to help dramatically increase the fidelity and speed with which epidemiological simulations can be performed. Rollback support needed during optimistic parallel execution is achieved by combining reverse computation with a small amount of incremental state saving. Parallel speedup of over 5,500 and other runtime performance metrics of the system are observed with weak-scaling execution on a small (8,192-core) Blue Gene/P system, while scalability with a weak-scaling speedup of over 10,000 is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes, exceeding several hundreds of millions of individuals in the largest cases, are successfully exercised to verify model scalability.
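Reverse computation, named above as the rollback mechanism, undoes an event by inverting its arithmetic instead of restoring a saved state copy; only irreversible pieces, such as random draws, are saved incrementally. A toy sketch under those assumptions:

```python
class Cell:
    """Toy reverse-computation example for optimistic rollback: the forward
    event is undone by inverse arithmetic, and only the irreversible piece
    (here a random draw) is saved incrementally. Illustrative only."""

    def __init__(self):
        self.infected = 0
        self.saved_draws = []          # small incremental state save

    def handle_infection_event(self, k, draw):
        self.infected += k             # constructive update, cheap to invert
        self.saved_draws.append(draw)  # cannot be recomputed, so save it

    def reverse_infection_event(self, k):
        self.infected -= k             # exact inverse of the forward event
        return self.saved_draws.pop()  # restore the saved random draw
```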
NASA Astrophysics Data System (ADS)
Haustein, Karsten; Otto, Friederike; Uhe, Peter; Allen, Myles; Cullen, Heidi
2015-04-01
Extreme weather detection and attribution analysis has emerged as a core theme in climate science over the last decade or so. By using a combination of observational data and climate models it is possible to identify the role of climate change in certain types of extreme weather events, such as sea level rise and its contribution to storm surges, extreme heat events and droughts, or heavy rainfall and flood events. These analyses are usually carried out after an extreme event has occurred, when reanalysis and observational data become available. The Climate Central World Weather Attribution (WWA) project will exploit the increasing forecast skill of seasonal forecast prediction systems such as the UK Met Office GloSea5 (Global seasonal forecasting system) ensemble forecasting method. This way, the current weather can be fed into climate models to simulate large ensembles of possible weather scenarios before an event has fully emerged. This effort runs along parallel and intersecting tracks of science and communications that involve research, message development and testing, staged socialization of attribution science with key audiences, and dissemination. The method we employ uses a very large ensemble of simulations of regional climate models to run two different analyses: one to represent the current climate as it was observed, and one to represent the same events in the world that might have been without human-induced climate change. For the weather "as observed" experiment, the atmospheric model uses observed sea surface temperature (SST) data from GloSea5 (currently) and present-day atmospheric gas concentrations to simulate weather events that are possible given the observed climate conditions. The weather in the "world that might have been" experiments is obtained by removing the anthropogenic forcing from the observed SSTs, thereby simulating a counterfactual world without human activity. The anthropogenic forcing is obtained by comparing the CMIP5 historical and natural simulations from a variety of CMIP5 model ensembles. Here, we present results for the UK 2013/14 winter floods as proof of concept, and we show validation and testing results that demonstrate the robustness of our method. We also revisit the record temperatures over Europe in 2014 and present a detailed analysis of this attribution exercise, as it is one of the events demonstrating that we can make a sensible statement about how the odds of such a year occurring have changed while the event is still unfolding.
Developing Performance Measures for Army Aviation Collective Training
2011-05-01
simulation-based training, such as ATX, is determined by performance improvement of participants within the virtual-training environment (Bell & Waag... of the collective behavior (Bell & Waag, 1998). In ATX, system-based (i.e., simulator) data can be used to extract measures such as timing of events... to CABs. References: Bell, H. H., & Waag, W. L. (1998). Evaluating the effectiveness of flight simulators for training combat
NASA Astrophysics Data System (ADS)
Loikith, Paul C.; Waliser, Duane E.; Kim, Jinwon; Ferraro, Robert
2017-08-01
Cool season precipitation event characteristics are evaluated across a suite of downscaled climate models over the northeastern US. Downscaled hindcast simulations are produced by dynamically downscaling the Modern-Era Retrospective Analysis for Research and Applications version 2 (MERRA2) using the National Aeronautics and Space Administration (NASA)-Unified Weather Research and Forecasting (NU-WRF) regional climate model (RCM) and the Goddard Earth Observing System Model, Version 5 (GEOS-5) global climate model. NU-WRF RCM simulations are produced at 24-, 12-, and 4-km horizontal resolutions using a range of spectral nudging schemes, while the MERRA2 global downscaled run is provided at 12.5 km. All model runs are evaluated using four metrics designed to capture key features of precipitation events: event frequency, event intensity, event total, and event duration. Overall, the downscaling approaches result in a reasonable representation of many of the key features of precipitation events over the region; however, considerable biases exist in the magnitude of each metric. Based on this evaluation there is no clear indication that higher resolution simulations produce more realistic results in general; however, many small-scale features, such as orographic enhancement of precipitation, are only captured at higher resolutions, suggesting some added value over coarser resolutions. While the differences between simulations produced using nudging and no nudging are small, there is some improvement in model fidelity when nudging is introduced, especially at a cutoff wavelength of 600 km compared to 2000 km. Based on the results of this evaluation, dynamical regional downscaling using NU-WRF results in a more realistic representation of precipitation event climatology than the global downscaling of MERRA2 using GEOS-5.
Advanced Simulation of Coupled Earthquake and Tsunami Events
NASA Astrophysics Data System (ADS)
Behrens, Joern
2013-04-01
Tsunami-earthquakes represent natural catastrophes threatening the lives and well-being of societies in a solitary and unexpected extreme event, as tragically demonstrated in Sumatra (2004), Samoa (2009), Chile (2010), and Japan (2011). Both phenomena are consequences of the complex system of interactions of tectonic stress, fracture mechanics, rock friction, rupture dynamics, fault geometry, ocean bathymetry, and coastline geometry. The ASCETE project forms an interdisciplinary research consortium that couples the most advanced simulation technologies for earthquake rupture dynamics and tsunami propagation to understand the fundamental conditions of tsunami generation. We report on the latest research results in physics-based dynamic rupture and tsunami wave propagation simulation, using unstructured and adaptive meshes with continuous and discontinuous Galerkin discretization approaches. Coupling the two simulation tools - the physics-based dynamic rupture simulation and the hydrodynamic tsunami wave propagation - will make it possible to conduct highly realistic studies of the interaction of rupture dynamics and tsunami impact characteristics.
Pan, Chong; Zhang, Dali; Kon, Audrey Wan Mei; Wai, Charity Sue Lea; Ang, Woo Boon
2015-06-01
Continuous improvement in process efficiency for specialist outpatient clinic (SOC) systems is increasingly being demanded due to the growth of the patient population in Singapore. In this paper, we propose a discrete event simulation (DES) model to represent the patient and information flow in an ophthalmic SOC system at the Singapore National Eye Centre (SNEC). Different improvement strategies to reduce the turnaround time for patients in the SOC were proposed and evaluated with the aid of the DES model and the Design of Experiments (DOE). Two strategies for better patient appointment scheduling and one strategy for dilation-free examination are estimated to have a significant impact on turnaround time for patients. One of the improvement strategies has been implemented in the actual SOC system at the SNEC, with promising improvement reported.
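Patient-flow models of this kind are straightforward to express in a process-oriented DES library. The sketch below uses SimPy, a generic Python DES package, and is not the SNEC model; the stations, capacities, and service-time distributions are illustrative assumptions.

```python
import random
import simpy

def patient(env, registration, doctor, turnaround_log):
    arrive = env.now
    with registration.request() as req:                # queue at registration
        yield req
        yield env.timeout(random.expovariate(1 / 5))   # ~5 min registration
    with doctor.request() as req:                      # queue for consultation
        yield req
        yield env.timeout(random.expovariate(1 / 15))  # ~15 min consultation
    turnaround_log.append(env.now - arrive)

def run_clinic(n_doctors=3, arrivals_per_hour=20, horizon_min=8 * 60):
    env = simpy.Environment()
    registration = simpy.Resource(env, capacity=2)
    doctor = simpy.Resource(env, capacity=n_doctors)
    log = []

    def source():
        while True:  # Poisson arrivals
            yield env.timeout(random.expovariate(arrivals_per_hour / 60))
            env.process(patient(env, registration, doctor, log))

    env.process(source())
    env.run(until=horizon_min)
    return sum(log) / max(len(log), 1)   # mean turnaround time (minutes)
```

Improvement strategies such as revised appointment schedules then become parameter changes (arrival pattern, capacities) compared across replicated runs, in the spirit of the DOE used in the study.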
Image Reconstruction for a Partially Collimated Whole Body PET Scanner
Alessio, Adam M.; Schmitz, Ruth E.; MacDonald, Lawrence R.; Wollenweber, Scott D.; Stearns, Charles W.; Ross, Steven G.; Ganin, Alex; Lewellen, Thomas K.; Kinahan, Paul E.
2008-01-01
Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary. PMID:19096731
NASA Astrophysics Data System (ADS)
Sarr, A.
2016-12-01
This study investigates less well-known weather events, off-season rains, which affect the western Sahel (mainly Senegal, Cape Verde and Mauritania) during boreal winter. They are characterized by mid-level cloudy conditions, which can trigger light, long-lasting rains. In January 2002, an extreme case occurred from the 9th to the 11th, producing unusually heavy rains that had dramatic consequences for livestock and irrigated crops. The Weather Research and Forecasting model (WRF-ARW, version 3.4) is used to simulate the event, which affected the western coast around the land/ocean interface and caused huge damage in Senegal and Mauritania. The model was able to reasonably simulate the event and its intensity 2 to 3 days in advance, demonstrating the usefulness of such a tool for early warning systems (EWS), which could help mitigate the impacts. The location of the rain band was closer to the observed situation in higher-resolution domains. The study identified key dynamic and thermodynamic conditions associated with the event. The evolution of precipitable water (PW) played a central role in the intensity of the event. The deep trough associated with the disturbance forced northeastward transport of moisture from the Intertropical Convergence Zone (ITCZ) over the ocean towards Senegal and Mauritania.
Simulating advanced life support systems to test integrated control approaches
NASA Astrophysics Data System (ADS)
Kortenkamp, D.; Bell, S.
Simulations allow for testing of life support control approaches before hardware is designed and built. Simulations also allow for the safe exploration of alternative control strategies during life support operation. As such, they are an important component of any life support research program and testbed. This paper describes a specific advanced life support simulation being created at NASA Johnson Space Center. It is a discrete-event simulation that is dynamic and stochastic. It simulates all major components of an advanced life support system, including crew (with variable ages, weights and genders), biomass production (with scalable plantings of ten different crops), water recovery, air revitalization, food processing, solid waste recycling and energy production. Each component is modeled as a producer of certain resources and a consumer of certain resources. The control system must monitor (via sensors) and control (via actuators) the flow of resources throughout the system to provide life support functionality. The simulation is written in an object-oriented paradigm that makes it portable, extensible and reconfigurable.
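The producer/consumer structure described here can be sketched with a plain event queue. Component names, rates, and units below are hypothetical placeholders; the actual simulation models crew, biomass, water, air, waste, and energy in far more detail.

```python
import heapq

class LifeSupportSim:
    """Minimal event-queue sketch of a producer/consumer resource model."""

    def __init__(self):
        self.t = 0.0
        self._queue = []                 # (time, seq, action) min-heap
        self._seq = 0
        self.stores = {"O2": 100.0, "H2O": 500.0, "biomass": 0.0}

    def schedule(self, delay, action):
        self._seq += 1                   # seq breaks ties between events
        heapq.heappush(self._queue, (self.t + delay, self._seq, action))

    def crew_consumes(self):             # crew: consumer of O2 and water
        self.stores["O2"] -= 0.84        # illustrative kg per crew-day
        self.stores["H2O"] -= 3.5
        self.schedule(1.0, self.crew_consumes)

    def plants_produce(self):            # biomass: producer of O2 and food
        self.stores["O2"] += 0.6
        self.stores["biomass"] += 0.3
        self.schedule(1.0, self.plants_produce)

    def run(self, until):
        self.schedule(0.0, self.crew_consumes)
        self.schedule(0.0, self.plants_produce)
        while self._queue and self._queue[0][0] <= until:
            self.t, _, action = heapq.heappop(self._queue)
            action()                     # a controller would act on stores here
```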
NASA Technical Reports Server (NTRS)
Greenberg, Albert G.; Lubachevsky, Boris D.; Nicol, David M.; Wright, Paul E.
1994-01-01
Fast, efficient parallel algorithms are presented for discrete event simulations of dynamic channel assignment schemes for wireless cellular communication networks. The driving events are call arrivals and departures, in continuous time, to cells geographically distributed across the service area. A dynamic channel assignment scheme decides which call arrivals to accept, and which channels to allocate to the accepted calls, attempting to minimize call blocking while ensuring co-channel interference is tolerably low. Specifically, the scheme ensures that the same channel is used concurrently at different cells only if the pairwise distances between those cells are sufficiently large. Much of the complexity of the system comes from ensuring this separation. The network is modeled as a system of interacting continuous time automata, each corresponding to a cell. To simulate the model, conservative methods are used; i.e., methods in which no errors occur in the course of the simulation and so no rollback or relaxation is needed. Implemented on a 16K processor MasPar MP-1, an elegant and simple technique provides speedups of about 15 times over an optimized serial simulation running on a high speed workstation. A drawback of this technique, typical of conservative methods, is that processor utilization is rather low. To overcome this, new methods were developed that exploit slackness in event dependencies over short intervals of time, thereby raising the utilization to above 50 percent and the speedup over the optimized serial code to about 120 times.
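The co-channel separation rule at the heart of the scheme can be stated in a few lines. This sketches only the constraint check (the data layout is an assumption), not the parallel simulation itself:

```python
def reuse_ok(channel, cell, assignments, positions, d_min):
    """Co-channel separation check: `channel` may be assigned at `cell`
    only if every cell currently using it is at least d_min away.
    assignments: dict cell -> set of channels; positions: dict cell -> (x, y)."""
    cx, cy = positions[cell]
    for other, channels in assignments.items():
        if other != cell and channel in channels:
            ox, oy = positions[other]
            if ((cx - ox) ** 2 + (cy - oy) ** 2) ** 0.5 < d_min:
                return False
    return True
```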
Behavior coordination of mobile robotics using supervisory control of fuzzy discrete event systems.
Jayasiri, Awantha; Mann, George K I; Gosine, Raymond G
2011-10-01
In order to incorporate the uncertainty and impreciseness present in real-world event-driven asynchronous systems, fuzzy discrete event systems (DESs) (FDESs) have been proposed as an extension to crisp DESs. In this paper, first, we propose an extension to the supervisory control theory of FDES by redefining fuzzy controllable and uncontrollable events. The proposed supervisor is capable of enabling feasible uncontrollable and controllable events with different possibilities. Then, the extended supervisory control framework of FDES is employed to model and control several navigational tasks of a mobile robot using the behavior-based approach. The robot has limited sensory capabilities, and the navigations have been performed in several unmodeled environments. The reactive and deliberative behaviors of the mobile robotic system are weighted through fuzzy uncontrollable and controllable events, respectively. By employing the proposed supervisory controller, a command-fusion-type behavior coordination is achieved. The observability of fuzzy events is incorporated to represent the sensory imprecision. As a systematic analysis of the system, a fuzzy-state-based controllability measure is introduced. The approach is implemented in both simulation and real time. A performance evaluation is performed to quantitatively estimate the validity of the proposed approach over its counterparts.
Integrated modeling for assessment of energy-water system resilience under changing climate
NASA Astrophysics Data System (ADS)
Yan, E.; Veselka, T.; Zhou, Z.; Koritarov, V.; Mahalik, M.; Qiu, F.; Mahat, V.; Betrie, G.; Clark, C.
2016-12-01
Energy and water systems are intrinsically interconnected. Due to an increase in climate variability and extreme weather events, the interdependency between these two systems has recently intensified, resulting in significant impacts on both systems and on energy output. To address this challenge, an Integrated Water-Energy Systems Assessment Framework (IWESAF) is being developed to integrate multiple existing or newly developed models from various sectors. The IWESAF currently includes an extreme climate event generator to predict future extreme weather events, hydrologic and reservoir models, a riverine temperature model, a power plant water use simulator, and a power grid operation and cost optimization model. The IWESAF can facilitate interaction among the modeling systems and provide insights into the sustainability and resilience of the energy-water system under extreme climate events and their economic consequences. A case demonstration for the Midwest region will be presented. Detailed information on some of the individual modeling components will also be presented in several other abstracts submitted to AGU this year.
A defense in depth approach for nuclear power plant accident management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chih-Yao Hsieh; Hwai-Pwu Chou
2015-07-01
An initiating event may lead to a severe accident if the plant safety functions have been challenged or operators do not follow the appropriate accident management procedures. Beyond-design-basis accidents correspond to events of very low occurrence probability, but such an accident may lead to significant consequences. The defense-in-depth approach is important to assure nuclear safety even in a severe accident. Plant Damage States (PDS) can be defined by the combination of the possible values for each of the PDS parameters, which are shown on the nuclear power plant simulator. PDS is used to identify what the initiating event is, and can also give information on the status of the safety systems, i.e., whether they are bypassed or inoperable. The initiating event and the safety systems' status are used in the construction of a Containment Event Tree (CET) to determine containment failure modes using probabilistic risk assessment (PRA) techniques. Different initiating events correspond to different CETs. With these CETs, the core melt frequency of an initiating event can be found. The use of Plant Damage States (PDS) is a symptom-oriented approach; the use of the Containment Event Tree (CET), on the other hand, is an event-oriented approach. In this study, Taiwan's fourth nuclear power plant, the Lungmen Nuclear Power Station (LNPS), which is an advanced boiling water reactor (ABWR) with a fully digitized instrumentation and control (I&C) system, is chosen as the target plant. The LNPS full-scope engineering simulator is used to generate the testing data for method development. The following common initiating events are considered in this study: loss of coolant accidents (LOCA), total loss of feedwater (TLOFW), loss of offsite power (LOOP), and station blackout (SBO). Studies have indicated that the combination of the symptom-oriented approach and the event-oriented approach can be helpful in finding mitigation strategies and is useful for accident management.
The Impact of Warm Pool El Nino Events on Antarctic Ozone
NASA Technical Reports Server (NTRS)
Hurwitz, Margaret M.; Newman, P. A.; Song, In-Sun; Frith, Stacey M.
2011-01-01
Warm pool El Nino (WPEN) events are characterized by positive sea surface temperature (SST) anomalies in the central equatorial Pacific in austral spring and summer. Previous work found an enhancement in planetary wave activity in the South Pacific in austral spring, and a warming of 3-5 K in the Antarctic lower stratosphere during austral summer, in WPEN events as compared with ENSO neutral. In this presentation, we show that the weakening of the Antarctic vortex during WPEN affects the structure and magnitude of high-latitude total ozone. We use total ozone data from TOMS and OMI, as well as station data from Argentina and Antarctica, to identify shifts in the longitudinal location of the springtime ozone minimum from its climatological position. In addition, we examine the sensitivity of the WPEN-related ozone response to the phase of the quasi-biennial oscillation (QBO). We then compare the observed response to WPEN events with Goddard Earth Observing System chemistry-climate model, version 2 (GEOS V2 CCM) simulations. Two 50-year time-slice simulations are forced by annually repeating SST and sea ice climatologies, one set representing observed WPEN events and the second set representing neutral ENSO events, in a present-day climate. By comparing the two simulations, we isolate the impact of WPEN events on lower stratospheric ozone and, furthermore, examine the sensitivity of the WPEN ozone response to the phase of the QBO.
Simulation - Concepts and Applications
NASA Astrophysics Data System (ADS)
Silva, Pedro Sá; Trigo, António; Varajão, João; Pinto, Tiago
Simulation has been widely used in recent decades to analyze the impact of different scenarios in several areas such as health, the military, and business, among many others. When well used, it is an excellent tool for analyzing alternative actions and anticipating their impact, in order to rationalize the spending of resources. This paper introduces and summarizes some of the main concepts of simulation, identifying and describing: systems; models; entities and attributes; resources; contexts of use; and, in particular, discrete-event simulation.
NASA Technical Reports Server (NTRS)
Mizell, Carolyn Barrett; Malone, Linda
2007-01-01
The development process for a large software development project is very complex and dependent on many variables that are dynamic and interrelated. Factors such as size, productivity and defect injection rates will have substantial impact on the project in terms of cost and schedule. These factors can be affected by the intricacies of the process itself as well as by human behavior, because the process is very labor intensive. The complex nature of the development process can be investigated with software development process models that utilize discrete event simulation to analyze the effects of process changes. The organizational environment and its effects on the workforce can be analyzed with system dynamics, which utilizes continuous simulation. Each has unique strengths, and the benefits of both types can be exploited by combining a system dynamics model and a discrete event process model. This paper will demonstrate how the two types of models can be combined to investigate the impacts of human resource interactions on productivity and ultimately on cost and schedule.
Markov state modeling of sliding friction
NASA Astrophysics Data System (ADS)
Pellegrini, F.; Landes, François P.; Laio, A.; Prestipino, S.; Tosatti, E.
2016-11-01
Markov state modeling (MSM) has recently emerged as one of the key techniques for the discovery of collective variables and the analysis of rare events in molecular simulations. In biochemistry in particular, this approach is successfully exploited to find the metastable states of complex systems and their evolution in thermal equilibrium, including rare events such as a protein undergoing folding. The physics of sliding friction and its atomistic simulations under external forces constitute a nonequilibrium field where the relevant variables are in principle unknown and where a proper theory describing violent and rare events such as stick-slip is still lacking. Here we show that MSM can be extended to the study of nonequilibrium phenomena and in particular friction. The approach is benchmarked on the Frenkel-Kontorova model, used here as a test system whose properties are well established. We demonstrate that the method allows the least prejudiced identification of a minimal basis of natural microscopic variables necessary for the description of the forced dynamics of sliding, through their probabilistic evolution. The steps necessary for the application to realistic frictional systems are highlighted.
Time Warp Operating System (TWOS)
NASA Technical Reports Server (NTRS)
Bellenot, Steven F.
1993-01-01
Designed to support parallel discrete-event simulation, TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual-time synchronization based on process rollback and message annihilation.
Liu, Nianbo; Liu, Ming; Zhu, Jinqi; Gong, Haigang
2009-01-01
The basic operation of a Delay Tolerant Sensor Network (DTSN) is to accomplish pervasive data gathering in networks with intermittent connectivity, while the publish/subscribe (Pub/Sub for short) paradigm is used to deliver events from a source to interested clients in an asynchronous way. Recently, the extension of Pub/Sub systems to DTSNs has become a promising research topic. However, due to the frequent-partitioning characteristic unique to DTSNs, extending a Pub/Sub system to a DTSN is a considerably difficult and challenging problem, and there are no good solutions in published works. To adapt Pub/Sub systems to DTSNs, we propose CED, a community-based event delivery protocol. In our design, event delivery is based on several unchanged communities, which are formed by sensor nodes in the network according to their connectivity. CED consists of two components: event delivery and queue management. In event delivery, events in a community are delivered to mobile subscribers once a subscriber comes into the community, to improve the data delivery ratio. The queue management employs both the event successful delivery time and the event survival time to decide whether an event should be delivered or dropped, to minimize the transmission overhead. The effectiveness of CED is demonstrated through comprehensive simulation studies.
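The queue-management decision can be sketched as a filter over the buffered events. Field names and the exact policy are assumptions, since the abstract specifies only the two governing times:

```python
def manage_queue(queue, now):
    """Keep only events still worth carrying: not yet delivered and still
    within their survival time. Each event is a dict with hypothetical
    fields created_at, survival_time, and (optionally) delivered_at."""
    return [e for e in queue
            if e.get("delivered_at") is None                  # undelivered
            and now - e["created_at"] <= e["survival_time"]]  # still alive
```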
Dust Storm Feature Identification and Tracking from 4D Simulation Data
NASA Astrophysics Data System (ADS)
Yu, M.; Yang, C. P.
2016-12-01
Dust storms cause significant damage to health, property and the environment worldwide every year. To help mitigate the damage, dust forecasting models simulate and predict upcoming dust events, providing valuable information to scientists, decision makers, and the public. Normally, the model simulations are conducted in four dimensions (latitude, longitude, elevation and time) and represent the three-dimensional (3D), spatially heterogeneous features of the storm and its evolution over space and time. This research proposes an automatic multi-threshold, region-growing-based identification algorithm to identify critical dust storm features and to track the evolution of dust storm events through space and time. In addition, a spatiotemporal data model is proposed that can support the characterization and representation of dust storm events and their dynamic patterns. Quantitative and qualitative evaluations of the algorithm are conducted to test its sensitivity and its capability to identify and track dust storm events. This study has the potential to support better early warning for decision-makers and the public, thus making hazard mitigation plans more effective.
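A multi-threshold region-growing pass of the kind proposed can be sketched as repeated breadth-first growth at successively lower concentration thresholds, so the storm core found at a strict threshold seeds its weaker envelope. The grid, threshold values, and connectivity below are illustrative assumptions, not the authors' algorithm.

```python
from collections import deque
import numpy as np

def grow_storm_region(field, seed, thresholds):
    """Multi-threshold region growing on a 2-D concentration field.
    thresholds are descending (hypothetical values, e.g. [200, 100, 50]);
    the region found at each stricter threshold seeds growth into the
    weaker envelope at the next one. 4-connectivity is assumed."""
    ni, nj = field.shape
    region = np.zeros(field.shape, dtype=bool)
    seeds = [tuple(seed)]
    for th in thresholds:
        visited = set()
        queue = deque(seeds)
        while queue:
            cell = queue.popleft()
            if cell in visited:
                continue
            visited.add(cell)
            i, j = cell
            if field[i, j] < th:
                continue                      # outside the feature at this level
            region[i, j] = True
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < ni and 0 <= b < nj:
                    queue.append((a, b))
        seeds = [tuple(p) for p in np.argwhere(region)]
    return region
```

Tracking then reduces to associating regions that overlap between consecutive model output times.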
Zhao, Bo; Ding, Ruoxi; Chen, Shoushun; Linares-Barranco, Bernabe; Tang, Huajin
2015-09-01
This paper introduces an event-driven feedforward categorization system, which takes data from a temporal contrast address event representation (AER) sensor. The proposed system extracts bio-inspired cortex-like features and discriminates different patterns using an AER based tempotron classifier (a network of leaky integrate-and-fire spiking neurons). One of the system's most appealing characteristics is its event-driven processing, with both input and features taking the form of address events (spikes). The system was evaluated on an AER posture dataset and compared with two recently developed bio-inspired models. Experimental results have shown that it consumes much less simulation time while still maintaining comparable performance. In addition, experiments on the Mixed National Institute of Standards and Technology (MNIST) image dataset have demonstrated that the proposed system can work not only on raw AER data but also on images (with a preprocessing step to convert images into AER events) and that it can maintain competitive accuracy even when noise is added. The system was further evaluated on the MNIST dynamic vision sensor dataset (in which data is recorded using an AER dynamic vision sensor), with testing accuracy of 88.14%.
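The tempotron integrates weighted postsynaptic kernels from incoming address events and classifies by threshold crossing. Below is a generic sketch of that potential computation; the time constants and interface are assumptions, not the paper's implementation.

```python
import numpy as np

def tempotron_potential(spike_trains, weights, t_grid, tau=20.0, tau_s=5.0):
    """Membrane potential of a tempotron-style leaky integrate-and-fire
    neuron: each input address event adds a weighted postsynaptic kernel
    (difference of exponentials with time constants tau and tau_s, in ms)."""
    v = np.zeros_like(t_grid)
    for w, times in zip(weights, spike_trains):   # one spike list per afferent
        for t_i in times:
            dt = np.maximum(t_grid - t_i, 0.0)    # kernel is causal
            v += w * (np.exp(-dt / tau) - np.exp(-dt / tau_s))
    return v

# A pattern is assigned to the positive class iff v.max() exceeds the
# firing threshold learned during training.
```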
Response of the Antarctic Stratosphere to Warm Pool El Nino Events in the GEOS CCM
NASA Technical Reports Server (NTRS)
Hurwitz, Margaret M.; Song, In-Sun; Oman, Luke D.; Newman, Paul A.; Molod, Andrea M.; Frith, Stacey M.; Nielsen, J. Eric
2011-01-01
A new type of El Nino event has been identified in the last decade. During "warm pool" El Nino (WPEN) events, sea surface temperatures (SSTs) in the central equatorial Pacific are warmer than average. The El Nino signal propagates poleward and upward as large-scale atmospheric waves, causing unusual weather patterns and warming the polar stratosphere. In austral summer, observations show that the Antarctic lower stratosphere is several degrees (K) warmer during WPEN events than during the neutral phase of the El Nino/Southern Oscillation (ENSO). Furthermore, the stratospheric response to WPEN events depends on the direction of tropical stratospheric winds: the Antarctic warming is largest when WPEN events coincide with westward winds in the tropical lower and middle stratosphere, i.e., the westward phase of the quasi-biennial oscillation (QBO). Westward winds are associated with enhanced convection in the subtropics, and with increased poleward wave activity. In this paper, a new formulation of the Goddard Earth Observing System Chemistry-Climate Model, Version 2 (GEOS V2 CCM) is used to substantiate the observed stratospheric response to WPEN events. One simulation is driven by SSTs typical of a WPEN event, while another is driven by ENSO-neutral SSTs; both represent a present-day climate. Differences between the two simulations can be directly attributed to the anomalous WPEN SSTs. During WPEN events, relative to ENSO neutral, the model simulates the observed increase in poleward planetary wave activity in the South Pacific during austral spring, as well as the relative warming of the Antarctic lower stratosphere in austral summer. However, the modeled response to WPEN does not depend on the phase of the QBO. The modeled tropical wind oscillation does not extend far enough into the lower stratosphere and upper troposphere, likely explaining the model's insensitivity to the phase of the QBO during WPEN events.
Reaching extended length-scales with temperature-accelerated dynamics
NASA Astrophysics Data System (ADS)
Amar, Jacques G.; Shim, Yunsic
2013-03-01
In temperature-accelerated dynamics (TAD), a high-temperature molecular dynamics (MD) simulation is used to accelerate the search for the next low-temperature activated event. While TAD has been quite successful in extending the time-scales of simulations of non-equilibrium processes, the computational work scales approximately as the cube of the number of atoms, so until recently only simulations of relatively small systems have been carried out. Recently, we have shown that significantly improved scaling is possible by combining spatial decomposition with our synchronous sublattice algorithm. However, in this approach the size of activated events is limited by the processor size, and the dynamics is not exact. Here we discuss progress in developing an alternate approach in which high-temperature parallel MD, along with localized saddle-point (LSAD) calculations, is used to carry out TAD simulations without restricting the size of activated events while keeping the dynamics "exact" within the context of harmonic transition-state theory. In tests of our LSAD method applied to Ag/Ag(100) annealing and Cu/Cu(100) growth simulations, we find significantly improved scaling of TAD while maintaining a negligibly small error in the energy barriers. Supported by NSF DMR-0907399.
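The time-scale gain in TAD comes from harmonic transition-state theory: an event observed at high temperature is mapped to its expected occurrence time at the low temperature of interest. A one-line sketch of that extrapolation (the constants and example values are illustrative assumptions):

```python
import math

KB_EV = 8.617e-5  # Boltzmann constant in eV/K

def extrapolate_event_time(t_high, e_barrier, T_high, T_low):
    """Harmonic transition-state-theory extrapolation used in TAD: an event
    observed at time t_high during high-temperature MD corresponds to
        t_low = t_high * exp(E_a / kB * (1/T_low - 1/T_high))
    at the low temperature of interest."""
    return t_high * math.exp((e_barrier / KB_EV) * (1.0 / T_low - 1.0 / T_high))

# e.g. a 0.5 eV event seen 1 ns into a 1000 K run maps to roughly 8e-4 s
# at 300 K: extrapolate_event_time(1e-9, 0.5, 1000.0, 300.0)
```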
NASA Astrophysics Data System (ADS)
Parodi, Antonio; Boni, Giorgio; Ferraris, Luca; Gallus, William; Maugeri, Maurizio; Molini, Luca; Siccardi, Franco
2017-04-01
Recent studies show that highly localized and persistent back-building mesoscale convective systems represent one of the most dangerous flash-flood-producing storm types in the north-western Mediterranean area. Substantial warming of the Mediterranean Sea in recent decades raises concerns over possible increases in the frequency or intensity of these types of events, as increased atmospheric temperatures generally support increases in water vapor content. Analyses of available historical records do not provide a univocal answer, since they are likely affected by a lack of detailed observations for older events. In the present study, 20th Century Reanalysis Project initial and boundary condition data in ensemble mode are used to address the feasibility of performing cloud-resolving simulations, with 1 km horizontal grid spacing, of a historic extreme event that occurred over Liguria (Italy): the San Fruttuoso case of 1915. The proposed approach focuses on the ensemble Weather Research and Forecasting (WRF) model runs, as they are the ones most likely to best simulate the event. It is found that these WRF runs generally do show wind and precipitation fields that are consistent with the occurrence of highly localized and persistent back-building mesoscale convective systems, although precipitation peak amounts are underestimated. Systematic small north-westward position errors with regard to the heaviest rain and strongest convergence areas imply that the Reanalysis members may not adequately represent the amount of cool air over the Po Plain outflowing into the Ligurian Sea through the Apennines gap. Regarding the role of historical data sources, this study shows that, in addition to Reanalysis products, unconventional data such as historical meteorological bulletins, newspapers, and even photographs can be very valuable sources of knowledge in the reconstruction of past extreme events.
2009-09-01
2.1 Participants. Twelve civilians (7 men and 5 women) with no prior experience with the Robotic NCO simulation participated in this study. The mean... operators in a multitasking environment. Subject terms: design guidelines, robotics, simulation, unmanned systems, automation. ... model of operator performance, or a hybrid method which combines one or more of these different invocation techniques (e.g., critical events and
A methodology for evacuation design for urban areas: theoretical aspects and experimentation
NASA Astrophysics Data System (ADS)
Russo, F.; Vitetta, A.
2009-04-01
This paper proposes a unifying approach for the simulation and design of a transportation system under conditions of incoming safety and/or security threats. Safety and security are concerned with threats generated by very different factors which, in turn, generate emergency conditions, such as the 9/11, Madrid and London attacks, the Asian tsunami, and Hurricane Katrina, considering only the last five years. In transportation systems, when an exogenous event occurs and there is a sufficient interval between the instant when the event happens and the instant when it affects the population, it is possible to reduce the negative effects by evacuating the population. In every such case the evacuation can be prepared over the short and long term; for other events, real-time evacuation can also be planned within the general risk methodology. The development of models for emergency conditions in transportation systems has not received much attention in the literature, and the main findings in this area are limited to a few public research centres and private companies. In general, there is no systematic analysis of risk theory applied to transportation systems. Very often, in practice, vulnerability and exposure in the transportation system are treated as similar variables or, in other worse cases, exposure variables are treated as vulnerability variables. Models and algorithms specified and calibrated under ordinary conditions cannot be directly applied in emergency conditions under the usual hypotheses. This paper is developed with the following main objectives: (a) to formalize the risk problem with a clear diversification (in terms of consequences) in the definition of vulnerability and exposure in a transportation system, thus offering improvements over consolidated quantitative risk analysis models, especially transportation risk analysis models (risk assessment); (b) to formalize a system of models for evacuation simulation; (c) to calibrate and validate the system of models for evacuation simulation using a real-world experiment. In relation to these objectives: (a) a general framework for risk analysis is reported in the first part, with specific methods and models to analyze urban transportation system performance in emergency conditions when exogenous phenomena occur and to specify the risk function; (b) a formulation of the general evacuation problem in the standard "what if" simulation context is specified in the second part, with reference to the model considered for simulating the transportation system under ordinary conditions; (c) the models specified in the second part are calibrated and validated against a real experiment in the third part. The experiment was carried out in the central business district of an Italian village, where about 1000 inhabitants were evacuated in order to construct a complete database. It required that socioeconomic information (population, number employed, public buildings, schools, etc.) and transport supply characteristics (infrastructures, etc.) be measured before and during the experiment. The real evacuation data were recorded with 30 video cameras for laboratory analysis.
The results are divided into six closely connected tasks: demand models; supply and supply-demand interaction models for users; simulation of refuge areas for users; design of path choice models for emergency vehicles; pedestrian outflow models in buildings; and the planning process and guidelines.
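To make objective (a) concrete, the following is a minimal sketch assuming the standard quantitative decomposition of risk into occurrence probability, vulnerability, and exposure; the function names and all numbers are illustrative, not taken from the paper.

    # Minimal sketch of a quantitative risk function for a transportation link,
    # assuming the common decomposition R = P x V x E (occurrence probability,
    # vulnerability, exposure). Names and values are illustrative only.

    def link_risk(p_event: float, vulnerability: float, exposure: float) -> float:
        """Expected consequence on one link: probability that the exogenous
        event occurs, times the fraction of performance lost if it occurs
        (vulnerability), times the people/goods present (exposure)."""
        assert 0.0 <= p_event <= 1.0 and 0.0 <= vulnerability <= 1.0
        return p_event * vulnerability * exposure

    def network_risk(links):
        """Total risk over a set of (p, v, e) triples, one per network link."""
        return sum(link_risk(p, v, e) for p, v, e in links)

    if __name__ == "__main__":
        # Two links: a bridge with low event probability but high exposure,
        # and an urban street with higher probability but lower exposure.
        links = [(0.01, 0.8, 5000), (0.05, 0.3, 800)]
        print(f"network risk = {network_risk(links):.1f} expected persons affected")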
The Association between Past and Future Oriented Thinking: Evidence from Autism Spectrum Disorder
ERIC Educational Resources Information Center
Lind, Sophie E.; Williams, David M.
2012-01-01
A number of recently developed theories (e.g., the constructive episodic simulation, self-projection, and scene construction hypotheses) propose that the ability to simulate possible future events (sometimes referred to as episodic future thinking, prospection, or foresight) depends on the same neurocognitive system that is implicated in the…
MESA: An Interactive Modeling and Simulation Environment for Intelligent Systems Automation
NASA Technical Reports Server (NTRS)
Charest, Leonard
1994-01-01
This report describes MESA, a software environment for creating applications that automate NASA mission operations. MESA enables intelligent automation by utilizing model-based reasoning techniques developed in the field of Artificial Intelligence. Model-based reasoning techniques are realized in MESA through native support for causal modeling and discrete event simulation.
DOT National Transportation Integrated Search
1982-07-01
In order to examine specific automated guideway transit (AGT) developments and concepts, UMTA undertook a program of studies and technology investigations called Automated Guideway Transit Technology (AGTT) Program. The objectives of one segment of t...
Cost-effective solutions to maintaining smart grid reliability
NASA Astrophysics Data System (ADS)
Qin, Qiu
As aging power systems increasingly operate close to their capacity and thermal limits, maintaining sufficient reliability has been of great concern to government agencies, utility companies, and users. This dissertation focuses on improving the reliability of transmission and distribution systems. Based on wide-area measurements, multiple model algorithms are developed to diagnose transmission-line three-phase short-to-ground faults in the presence of protection misoperations. The multiple model algorithms utilize the electric network dynamics to provide prompt and reliable diagnosis outcomes. The computational complexity of the diagnosis algorithm is reduced by using a two-step heuristic. The multiple model algorithm is incorporated into a hybrid simulation framework, which consists of both continuous state simulation and discrete event simulation, to study the operation of transmission systems. With hybrid simulation, a line switching strategy for enhancing the tolerance to protection misoperations is studied based on the concept of a security index, which involves the faulted mode probability and stability coverage. Local measurements are used to track the generator state, and faulted mode probabilities are calculated in the multiple model algorithms. FACTS devices are considered as controllers for the transmission system. The placement of FACTS devices into power systems is investigated with a criterion of maintaining a prescribed level of control reconfigurability. Control reconfigurability measures the small-signal combined controllability and observability of a power system with an additional requirement on fault tolerance. For the distribution systems, a hierarchical framework, including a high-level recloser allocation scheme and a low-level recloser placement scheme, is presented. The impacts of recloser placement on the reliability indices are analyzed. Evaluation of reliability indices in the placement process is carried out via discrete event simulation. The reliability requirements are described with probabilities and evaluated from the empirical distributions of reliability indices.
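The hybrid framework described above interleaves continuous-state integration with discrete protection events. A minimal sketch of such a loop follows, with an invented one-state plant and placeholder breaker events; none of this is the dissertation's actual power-system model.

    import heapq

    # Illustrative hybrid simulation loop: continuous dynamics are integrated
    # with a fixed step, while discrete protection events (trips, recloser
    # actions) are drawn from an event queue. Model and numbers are placeholders.

    def hybrid_simulate(t_end=1.0, dt=1e-3):
        x = 1.0                      # continuous state (e.g., a voltage deviation)
        breaker_closed = True        # discrete state changed by events
        events = [(0.4, "trip"), (0.7, "reclose")]   # (time, event) pairs
        heapq.heapify(events)
        t = 0.0
        while t < t_end:
            # apply any discrete event scheduled within this step
            while events and events[0][0] <= t + dt:
                t_ev, ev = heapq.heappop(events)
                breaker_closed = (ev == "reclose")
            # forward-Euler step of the continuous dynamics; the vector field
            # depends on the current discrete mode
            decay = 5.0 if breaker_closed else 0.5
            x += dt * (-decay * x)
            t += dt
        return x

    print(f"final state: {hybrid_simulate():.4f}")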
Discrete Event Supervisory Control Applied to Propulsion Systems
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Shah, Neerav
2005-01-01
The theory of discrete event supervisory (DES) control was applied to the optimal control of a twin-engine aircraft propulsion system and demonstrated in a simulation. The supervisory control, which is implemented as a finite-state automaton, oversees the behavior of a system and manages it in such a way that it maximizes a performance criterion, similar to a traditional optimal control problem. DES controllers can be nested such that a high-level controller supervises multiple lower-level controllers. This structure can be expanded to control large, complex systems, providing optimal performance and increasing autonomy with each additional level. The DES control strategy for propulsion systems was validated using a distributed testbed consisting of multiple computers, each representing a module of the overall propulsion system, to simulate real-time hardware-in-the-loop testing. In the first experiment, DES control was applied to the operation of a nonlinear simulation of a turbofan engine (running in closed loop using its own feedback controller) to minimize engine structural damage caused by a combination of thermal and structural loads. This enables increased on-wing time for the engine through better management of engine-component life usage. Thus, the engine-level DES acts as a life-extending controller through its interaction with and manipulation of the engine's operation.
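Since the supervisor is described as a finite-state automaton overseeing a plant, a hedged sketch of that structure may help; the states, events, and transitions below are invented for illustration and are not the engine supervisor from the paper.

    # Sketch of a DES supervisor as a finite-state automaton: states are
    # operating modes, and events reported by the plant move the supervisor
    # between modes, enabling or disabling lower-level controllers.

    TRANSITIONS = {
        ("cruise", "temp_high"):         "derate",       # reduce thermal load
        ("derate", "temp_normal"):       "cruise",
        ("cruise", "damage_warn"):       "life_extend",
        ("life_extend", "damage_clear"): "cruise",
    }

    class Supervisor:
        def __init__(self, state="cruise"):
            self.state = state

        def on_event(self, event: str) -> str:
            """Consume an event; stay in the same mode if no transition is defined."""
            self.state = TRANSITIONS.get((self.state, event), self.state)
            return self.state

    sup = Supervisor()
    for ev in ["temp_high", "temp_normal", "damage_warn"]:
        print(ev, "->", sup.on_event(ev))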
Optimising the performance of an outpatient setting.
Sendi, Pedram; Al, Maiwenn J; Battegay, Manuel
2004-01-24
An outpatient setting typically includes experienced and novice resident physicians who are supervised by senior staff physicians. The performance of this kind of outpatient setting, for a given mix of experienced and novice resident physicians, is determined by the number of senior staff physicians available for supervision. The optimum mix of human resources may be determined using discrete-event simulation. An outpatient setting represents a system where concurrency and resource sharing are important. These concepts can be modelled by means of timed Coloured Petri Nets (CPN), which is a discrete-event simulation formalism. We determined the optimum mix of resources (i.e. the number of senior staff physicians needed for a given number of experienced and novice resident physicians) to guarantee efficient overall system performance. In an outpatient setting with 10 resident physicians, two staff physicians are required to guarantee a minimum level of system performance (42-52 patients are seen per 5-hour period). However, with 3 senior staff physicians system performance can be improved substantially (49-56 patients per 5-hour period). An additional fourth staff physician does not substantially enhance system performance (50-57 patients per 5-hour period). Coloured Petri Nets provide a flexible environment in which to simulate an outpatient setting and assess the impact of any staffing changes on overall system performance, to promote informed resource allocation decisions.
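A toy rendering of the resource-sharing idea behind the timed CPN model: patient tokens and staff-physician tokens must both be available for the consultation transition to fire. Staffing levels, consultation time, and patient counts are illustrative, not the paper's calibrated CPN.

    import heapq

    # Toy timed-Petri-net flavour of the clinic: tokens in "waiting" are
    # patients; a consultation transition fires only when both a patient token
    # and a staff-physician token are available (resource sharing).

    def clinic(n_patients=60, n_staff=2, consult_min=12.0, horizon=300.0):
        waiting, staff, seen = n_patients, n_staff, 0
        completions = []                   # future "consultation ends" times
        t = 0.0
        while True:
            while waiting and staff:       # transition enabled: consume tokens
                waiting -= 1; staff -= 1
                heapq.heappush(completions, t + consult_min)
            if not completions:
                break
            t = heapq.heappop(completions) # advance clock to next completion
            if t > horizon:                # clinic closed before it finished
                break
            staff += 1; seen += 1          # return the staff token
        return seen

    for s in (2, 3, 4):
        print(f"{s} staff physicians -> {clinic(n_staff=s)} patients seen in 5 h")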
A Systems Thinking approach to post-disaster restoration of maritime transportation systems
Lespier, Lizzette Pérez; Long, Suzanna K.; Shoberg, Thomas G.
2015-01-01
A Systems Thinking approach is used to examine elements of a maritime transportation system that are most likely to be impacted by an extreme event. The majority of the literature uses a high-level view that can fail to capture the damage to sub-system elements. This work uses a system dynamics simulation for a better view and understanding of the Port of San Juan, Puerto Rico, as a whole system, and uses Hurricane Georges (1998) as a representative disruptive event. The model focuses on the impacts of natural disasters at the sub-system level with a final goal of determining the sequence needed to restore an ocean-going port to its pre-event state. This work in progress details model development and outlines steps for using real-world information to assist maritime port managers in planning, with recommendations for best practices to mitigate disaster damage.
NASA Technical Reports Server (NTRS)
Horst, Richard L.; Mahaffey, David L.; Munson, Robert C.
1989-01-01
The present Phase 2 small business innovation research study was designed to address issues related to scalp-recorded event-related potential (ERP) indices of mental workload and to transition this technology from the laboratory to cockpit simulator environments for use as a systems engineering tool. The project involved five main tasks: (1) Two laboratory studies confirmed the generality of the ERP indices of workload obtained in the Phase 1 study and revealed two additional ERP components related to workload. (2) A task analysis of flight scenarios and pilot tasks in the Advanced Concepts Flight Simulator (ACFS) defined cockpit events (i.e., displays, messages, alarms) that would be expected to elicit ERPs related to workload. (3) Software was developed to support ERP data analysis. An existing ARD-proprietary package of ERP data analysis routines was upgraded, new graphics routines were developed to enhance interactive data analysis, and routines were developed to compare alternative single-trial analysis techniques using simulated ERP data. (4) Working in conjunction with NASA Langley research scientists and simulator engineers, preparations were made for an ACFS validation study of ERP measures of workload. (5) A design specification was developed for a general purpose, computerized, workload assessment system that can function in simulators such as the ACFS.
Comparison of Thunderstorm Simulations from WRF-NMM and WRF-ARW Models over East Indian Region
Litta, A. J.; Mary Ididcula, Sumam; Mohanty, U. C.; Kiran Prasad, S.
2012-01-01
Thunderstorms are typical mesoscale systems dominated by intense convection. Mesoscale models are essential for the accurate prediction of such high-impact weather events. In the present study, an attempt has been made to compare the simulated results of three thunderstorm events using the NMM and ARW cores of the WRF system and to validate the model results against observations. Both models performed well in capturing stability indices, which are indicators of severe convective activity. Comparison of model-simulated radar reflectivity imagery with observations revealed that the NMM model simulated the propagation of the squall line well, while the squall-line movement was slow in ARW. From the model-simulated spatial plots of cloud top temperature, we can see that the NMM model better captured the genesis, intensification, and propagation of the thunder squall than the ARW model. The statistical analysis of rainfall indicates the better performance of NMM over ARW. Comparison of model-simulated thunderstorm-affected parameters with observations showed that NMM performed better than ARW in capturing the sharp rise in humidity and drop in temperature. This suggests that the NMM model has the potential to provide unique and valuable information for severe thunderstorm forecasters over the east Indian region. PMID:22645480
Simulation of the 1994 Charlotte Microburst with Look-Ahead Windshear Radar
NASA Technical Reports Server (NTRS)
Proctor, F. H.; Bracalente, E. M.; Harrah, S. D.; Switzer, G. F.; Britt, C. L.
1995-01-01
A severe microburst occurred on 2 July 1994 at Charlotte, NC, and was associated with the crash of USAir Flight 1016 (FL-1016) (Salottolo 1994; Phillips 1994). The inbound DC-9 unexpectedly encountered a rapidly intensifying rainshaft just seconds before it was to touch down on runway 18R. The aircraft crashed after encountering strong windshear, killing 37 of the 57 souls on board. The pilots did not recognize the windshear condition in time to prevent the accident and received no warning from the aircraft's Honeywell in-situ windshear detection system or from ground-based systems (Charlotte maintains both an ASR-9 weather radar and a Phase-2 LLWAS). Also, two other aircraft landed ahead of FL-1016 without incident and reported smooth approaches to 18R. Section 2 of this paper reports briefly on the reconstruction of the event based on numerical results generated by the Terminal Area Simulation System (TASS), as presented at the National Transportation Safety Board (NTSB) public hearing (Proctor 1994). Section 3 discusses the simulation of this event with a look-ahead windshear radar.
Laser Scanner Tests For Single-Event Upsets
NASA Technical Reports Server (NTRS)
Kim, Quiesup; Soli, George A.; Schwartz, Harvey R.
1992-01-01
The microelectronic advanced laser scanner (MEALS) is an opto/electro/mechanical apparatus for nondestructive testing of integrated memory circuits, logic circuits, and other microelectronic devices. It is a multipurpose diagnostic system used to determine ultrafast time response, leakage, latchup, and electrical overstress. It is also used to simulate some of the effects of heavy ions accelerated to high energies, in order to determine the susceptibility of digital devices to single-event upsets.
The three-dimensional Event-Driven Graphics Environment (3D-EDGE)
NASA Technical Reports Server (NTRS)
Freedman, Jeffrey; Hahn, Roger; Schwartz, David M.
1993-01-01
Stanford Telecom developed the Three-Dimensional Event-Driven Graphics Environment (3D-EDGE) for NASA Goddard Space Flight Center's (GSFC) Communications Link Analysis and Simulation System (CLASS). 3D-EDGE consists of a library of object-oriented subroutines which allow engineers with little or no computer graphics experience to programmatically manipulate, render, animate, and access complex three-dimensional objects.
Incorporating discrete event simulation into quality improvement efforts in health care systems.
Rutberg, Matthew Harris; Wenczel, Sharon; Devaney, John; Goldlust, Eric Jonathan; Day, Theodore Eugene
2015-01-01
Quality improvement (QI) efforts are an indispensable aspect of health care delivery, particularly in an environment of increasing financial and regulatory pressures. The ability to test predictions of proposed changes to flow, policy, staffing, and other process-level changes using discrete event simulation (DES) has shown significant promise and is well reported in the literature. This article describes how to incorporate DES into QI departments and programs in order to support QI efforts, develop high-fidelity simulation models, conduct experiments, make recommendations, and support adoption of results. The authors describe how DES-enabled QI teams can partner with clinical services and administration to plan, conduct, and sustain QI investigations. © 2013 by the American College of Medical Quality.
StochKit2: software for discrete stochastic simulation of biochemical systems with events.
Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R
2011-09-01
StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/. petzold@engineering.ucsb.edu.
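For readers unfamiliar with the method StochKit2 implements, here is a minimal direct-method SSA for a toy birth-death model. This is a generic sketch of Gillespie's algorithm, not StochKit2's own API; the rates and model are invented for illustration.

    import math, random

    # Minimal direct-method SSA (Gillespie) for a birth-death process:
    # A -> A+1 at rate k1, A -> A-1 at rate k2*A.

    def ssa(a0=10, k1=1.0, k2=0.1, t_end=50.0, seed=1):
        random.seed(seed)
        t, a = 0.0, a0
        while t < t_end:
            rates = [k1, k2 * a]
            total = sum(rates)
            if total == 0.0:
                break
            t += -math.log(random.random()) / total   # exponential waiting time
            r = random.random() * total               # choose which reaction fires
            a += 1 if r < rates[0] else -1
        return a

    print("population at t_end:", ssa())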
NASA TileWorld manual (system version 2.2)
NASA Technical Reports Server (NTRS)
Philips, Andrew B.; Bresina, John L.
1991-01-01
This manual documents the commands of the NASA TileWorld simulator and provides information about how to run and extend it. The simulator, implemented in Common Lisp with Common Windows, encodes a particular range in a spectrum of domains for controllable research experiments. TileWorld consists of a two-dimensional grid of cells, a set of polygonal tiles, and a single agent which can grasp and move tiles. In addition to agent-executable actions, there is an external event over which the agent has no control; this event corresponds to a 'gust of wind'.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jankovsky, Zachary Kyle; Denman, Matthew R.
It is difficult to assess the consequences of a transient in a sodium-cooled fast reactor (SFR) using traditional probabilistic risk assessment (PRA) methods, as numerous safety-related systems have passive characteristics. Often there is significant dependence on the value of continuous stochastic parameters rather than binary success/failure determinations. One form of dynamic PRA uses a system simulator to represent the progression of a transient, tracking events through time in a discrete dynamic event tree (DDET). In order to function in a DDET environment, a simulator must have characteristics that make it amenable to changing physical parameters midway through the analysis. The SAS4A SFR system analysis code did not have these characteristics as received. This report describes the code modifications made to allow dynamic operation as well as the linking to a Sandia DDET driver code. A test case is briefly described to demonstrate the utility of the changes.
NASA Astrophysics Data System (ADS)
Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko
2017-08-01
We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
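As a point of reference for the "less than 1 FIT" figure, a back-of-envelope FIT calculation (failures per 10^9 device-hours) follows; the per-bit upset rate and ECC coverage below are placeholders, not the paper's measured values.

    # Back-of-envelope FIT calculation: FIT = failures per 1e9 device-hours.
    # Inputs are illustrative placeholders.

    def fit(upsets_per_bit_per_day: float, n_bits: int,
            correctable_fraction: float = 0.0) -> float:
        upsets_per_hour = upsets_per_bit_per_day * n_bits / 24.0
        failures_per_hour = upsets_per_hour * (1.0 - correctable_fraction)
        return failures_per_hour * 1e9

    # 1 Gbit working memory, a very low per-bit SEU rate, most upsets ECC-corrected
    print(f"{fit(1e-16, 2**30, correctable_fraction=0.9):.3f} FIT")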
Analysis of the Space Propulsion System Problem Using RAVEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Smith, Curtis; Rabiti, Cristian
This paper presents the solution of the space propulsion problem using a PRA code currently under development at Idaho National Laboratory (INL). RAVEN (Reactor Analysis and Virtual control ENvironment) is a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities. It is designed to derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures) and to perform both Monte Carlo sampling of randomly distributed events and Event Tree based analysis. In order to facilitate input/output handling, a Graphical User Interface (GUI) and a post-processing data-mining module are available. RAVEN also allows interfacing with several numerical codes, such as RELAP5 and RELAP-7, and with ad-hoc system simulators. For the space propulsion system problem, an ad-hoc simulator has been developed in the Python language and then interfaced to RAVEN. This simulator fully models both deterministic behaviors (e.g., system dynamics and interactions between system components) and stochastic behaviors (i.e., failures of components/systems such as distribution lines and thrusters). Stochastic analysis is performed using random-sampling-based methodologies (i.e., Monte Carlo). This analysis is carried out both to determine the reliability of the space propulsion system and to propagate the uncertainties associated with a specific set of parameters. As also indicated in the scope of the benchmark problem, the results generated by the stochastic analysis are used to generate risk-informed insights, such as conditions under which different strategies can be followed.
Prototype software model for designing intruder detection systems with simulation
NASA Astrophysics Data System (ADS)
Smith, Jeffrey S.; Peters, Brett A.; Curry, James C.; Gupta, Dinesh
1998-08-01
This article explores using discrete-event simulation for the design and control of a defence-oriented, fixed-sensor-based detection system in a facility housing items of significant interest to enemy forces. The key issues discussed include software development, simulation-based optimization within a modeling framework, and the expansion of the framework to create real-time control tools and training simulations. The software discussed in this article is a flexible simulation environment where the data for the simulation are stored in an external database and the simulation logic is implemented using a commercial simulation package. The simulation assesses the overall security level of a building against various intruder scenarios. A series of simulation runs with different inputs can determine the change in security level with changes in the sensor configuration, building layout, and intruder/guard strategies. In addition, the simulation model developed for the design stage of the project can be modified to produce a control tool for the testing, training, and real-time control of systems with humans and sensor hardware in the loop.
Rejeb, Olfa; Pilet, Claire; Hamana, Sabri; Xie, Xiaolan; Durand, Thierry; Aloui, Saber; Doly, Anne; Biron, Pierre; Perrier, Lionel; Augusto, Vincent
2018-06-01
Innovation and health-care funding reforms have contributed to the deployment of Information and Communication Technology (ICT) to improve patient care. Many health-care organizations consider the application of ICT a crucial key to enhancing health-care management. The purpose of this paper is to provide a methodology to assess the organizational impact of a high-level Health Information System (HIS) on the patient pathway. We propose an integrated HIS performance evaluation approach through the combination of formal modeling using the Architecture of Integrated Information Systems (ARIS) models, a micro-costing approach for cost evaluation, and a Discrete-Event Simulation (DES) approach. The methodology is applied to the consultation process for cancer treatment. Simulation scenarios are established to draw conclusions about the impact of the HIS on the patient pathway. We demonstrate that although a high-level HIS lengthens the consultation, the occupation rate of oncologists is lower and the quality of service is higher (through the amount of information available and accessed during the consultation to formulate the diagnosis). The method also allows determining the most cost-effective ICT elements to improve the care process quality while minimizing costs. The methodology is flexible enough to be applied to other health-care systems.
NASA Astrophysics Data System (ADS)
Koo, Cheol Hea; Lee, Hoon Hee; Moon, Sung Tae; Han, Sang Hyuck; Ju, Gwang Hyeok
2013-08-01
In aerospace research and practical development, the use of simulation in software development, component design, and system operation has steadily increased, and the pace of this increase is getting faster. This phenomenon stems from the ease of handling simulations and the power of their output. Simulation brings many benefits owing to the following characteristics: it is easy to handle (it is never broken or damaged by mistake); it never wears out (it never gets old); and it is cost effective (once built, it can be distributed to 100~1000 people). GenSim (Generic Simulator), which is being developed by KARI and is compatible with the ESA SMP standard, provides such a simulation platform to support flight software validation and mission operation verification. The user interface of GenSim is shown in Figure 1 [1,2]. As shown in Figure 1, GenSim has, as most simulation platforms typically do, a GRD (Graphical Display) and an AND (Alpha Numeric Display). But frequently more complex and powerful handling of the simulated data is required for actual system validation, for example in mission operation. In Figure 2, a system simulation result of COMS (Communication, Ocean, and Meteorological Satellite, launched on June 28, 2008) is drawn by the Celestia 3D program. In this case, the data needed by Celestia is supplied by one of the simulation models resident in the system simulator through a UDP network connection. But the required display format, data size, and communication rate vary, so the developer has to manage the connection protocol manually for each time and each case. This brings chaos to the simulation model design and development, and ultimately leads to performance issues. Performance issues occur when the required data magnitude is higher than the capacity of the simulation kernel to process the required data safely. The problem is that the data sent to a visualization tool such as Celestia is provided by a simulation model, not by the kernel. Because the simulation model has no way to know the status of the simulation kernel's load in processing simulation events, the simulation model sends the data as frequently as needed. This can create many potential problems, such as lack of response, failure to meet deadlines, and data integrity problems with the model data during the simulation. SIMSAT and EuroSim give a warning message if a user-requested event, such as printing a log, cannot be processed as planned or requested. As a consequence, the requested event will be delayed or not processed at all, which means that this phenomenon may violate the planned deadline. In most soft real-time simulations this can be neglected and causes only minor inconvenience to users. But it should be noted that if user requests are not managed properly in some critical situations, the simulation results may end in a mess. Having traced the disadvantages of letting the simulation model service user requests, we conclude that the simulation model is not the appropriate place to provide such services; this kind of work should be minimized as much as possible.
NASA Astrophysics Data System (ADS)
Li, Jiaqiang; Choutko, Vitaly; Xiao, Liyi
2018-03-01
Based on the collection of error data from the Alpha Magnetic Spectrometer (AMS) Digital Signal Processors (DSPs), on-orbit Single Event Upsets (SEUs) of the DSP program memory are analyzed. The daily error distribution and the time intervals between errors are calculated to evaluate the reliability of the system. The particle density distribution of the International Space Station (ISS) orbit is presented, and the effects from the South Atlantic Anomaly (SAA) and the geomagnetic poles are analyzed. The impact of solar events on the DSP program memory is assessed by combining data analysis and Monte Carlo (MC) simulation. From the analysis and simulation results, it is concluded that the area corresponding to the SAA is the main source of errors on the ISS orbit. Solar events can also cause errors in DSP program memory, but the effect depends on the on-orbit particle density.
Boundary Avoidance Tracking for Instigating Pilot Induced Oscillations
NASA Technical Reports Server (NTRS)
Craun, Robert W.; Acosta, Diana M.; Beard, Steven D.; Hardy, Gordon H.; Leonard, Michael W.; Weinstein, Michael
2013-01-01
In order to advance research in the area of pilot-induced oscillations (PIOs), a reliable method to create PIOs in a simulated environment is necessary. Using a boundary avoidance tracking task, researchers performing an evaluation of control systems were able to create PIO events in 42% of cases using a nominal aircraft, and in 91% of cases using an aircraft with reduced actuator rate limits. The simulator evaluation took place in the NASA Ames Vertical Motion Simulator, a high-fidelity motion-based simulation facility.
NASA Astrophysics Data System (ADS)
Nucita, A. A.; Licchelli, D.; De Paolis, F.; Ingrosso, G.; Strafella, F.; Katysheva, N.; Shugarov, S.
2018-05-01
The transient event labelled TCP J05074264+2447555, recently discovered towards the Taurus region, was quickly recognized as an ongoing microlensing event on a source located at a distance of only 700-800 pc from Earth. Here, we show that observations with a high sampling rate close to the time of maximum magnification revealed features that imply the presence of a binary lens system with components of very low mass ratio. We present a complete description of the binary lens system, which hosts an Earth-like planet with a most likely mass of 9.2 ± 6.6 M⊕. Furthermore, the estimated source location and detailed Monte Carlo simulations allowed us to classify the event as being due to the closest lens system known, at a distance of ≃380 pc and with mass ≃0.25 M⊙.
NASA Astrophysics Data System (ADS)
Wagenbrenner, N. S.; Forthofer, J.; Gibson, C.; Lamb, B. K.
2017-12-01
Frequent strong gap winds were measured in a deep, steep, wildfire-prone river canyon of central Idaho, USA during July-September 2013. Analysis of archived surface pressure data indicates that the gap wind events were driven by regional-scale surface pressure gradients. The events always occurred between 0400 and 1200 LT and typically lasted 3-4 hours. The timing makes these events particularly hazardous for wildland firefighting applications, since the morning is typically a period of reduced fire activity and unsuspecting firefighters could easily be endangered by the onset of strong downcanyon winds. The gap wind events were not explicitly forecast by operational numerical weather prediction (NWP) models due to the small spatial scale of the canyon (1-2 km wide) compared to the horizontal resolution of operational NWP models (3 km or greater). Custom WRF simulations initialized with NARR data were run at 1 km horizontal resolution to assess whether higher-resolution NWP could accurately simulate the observed gap winds. Here, we show that the 1 km WRF simulations captured many of the observed gap wind events, although the strength of the events was underpredicted. We also present evidence from these WRF simulations which suggests that the Salmon River Canyon is near the threshold of WRF-resolvable terrain features when the standard WRF coordinate system and discretization schemes are used. Finally, we show that the strength of the gap wind events can be predicted reasonably well as a function of the surface pressure gradient across the gap, which could be useful in the absence of high-resolution NWP. These are important findings for wildland firefighting applications in narrow gaps where routine forecasts may not provide warning for wind effects induced by high-resolution terrain features.
Energy system contribution during 200- to 1500-m running in highly trained athletes.
Spencer, M R; Gastin, P B
2001-01-01
The purpose of the present study was to profile the aerobic and anaerobic energy system contribution during high-speed treadmill exercise that simulated 200-, 400-, 800-, and 1500-m track running events. Twenty highly trained athletes (Australian National Standard) participated in the study, specializing in either the 200-m (N = 3), 400-m (N = 6), 800-m (N = 5), or 1500-m (N = 6) event (mean VO2 peak [mL x kg(-1) x min(-1)] +/- SD = 56+/-2, 59+/-1, 67+/-1, and 72+/-2, respectively). The relative aerobic and anaerobic energy system contribution was calculated using the accumulated oxygen deficit (AOD) method. The relative contribution of the aerobic energy system to the 200-, 400-, 800-, and 1500-m events was 29+/-4, 43+/-1, 66+/-2, and 84+/-1% (+/-SD), respectively. The size of the AOD increased with event duration during the 200-, 400-, and 800-m events (30.4+/-2.3, 41.3+/-1.0, and 48.1+/-4.5 mL x kg(-1), respectively), but no further increase was seen in the 1500-m event (47.1+/-3.8 mL x kg(-1)). The crossover to predominantly aerobic energy system supply occurred between 15 and 30 s for the 400-, 800-, and 1500-m events. These results suggest that the relative contribution of the aerobic energy system during track running events is considerable and greater than traditionally thought.
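A small worked example of the AOD bookkeeping behind these percentages; the demand and uptake numbers are illustrative (chosen to land near the reported 800-m figures), while the study's actual O2-demand estimates come from submaximal treadmill regressions.

    # Sketch of the accumulated-oxygen-deficit (AOD) calculation:
    # AOD = accumulated O2 demand - accumulated O2 uptake, and the aerobic
    # share is uptake/demand. Inputs are illustrative.

    def energy_split(o2_demand_ml_kg: float, o2_uptake_ml_kg: float):
        """Both inputs are accumulated over the event, in mL O2-equivalents/kg."""
        aod = o2_demand_ml_kg - o2_uptake_ml_kg
        return aod, 100.0 * o2_uptake_ml_kg / o2_demand_ml_kg

    # e.g. an 800-m effort: ~140 mL/kg total demand, ~92 mL/kg met aerobically
    aod, aerobic_pct = energy_split(140.0, 92.0)
    print(f"AOD = {aod:.1f} mL/kg, aerobic contribution = {aerobic_pct:.0f}%")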
Reconstruction and numerical modelling of a flash flood event: Atrani 2010
NASA Astrophysics Data System (ADS)
Ciervo, F.; Papa, M. N.; Medina, V.; Bateman, A.
2012-04-01
The work intends to reproduce the flash-flood event that occurred in Atrani (Amalfi Coast, southern Italy) on 9 September 2010. In the days leading up to the event, an intense low-pressure system affected northern Europe, attracting hot humid air masses from the Mediterranean and pushing them towards the southern regions of Italy. These conditions contributed to the development of strong convective storm systems of the Mesoscale Convective System (MCS) type. The development of intense convective rain cells over an extremely confined area led to a cumulative daily rainfall of 129.2 mm; the maximum precipitation in 1 hr was 19.4 mm. The Dragone river is artificially forced to flow underneath the urban estate of Atrani through a culvert until it finally flows out into the sea. At the culvert inlet, a minor fraction of the water discharge (5.9 m^3/s), skimming over the channel cover, flowed onto the street and invaded the village. The channelized flow generated overpressure that broke the culvert cover slab and created a new discharge inlet (20 m^3/s) onto the street, modifying the downstream flood dynamics. Information acquired soon after the event through interviews with local people and field measurements contributed significantly to the reconstruction of the rainfall event and to the characterization of the induced effects. In the absence of hydrometric data, the support of amateur videos was of crucial importance for the development and calibration of the hydraulic model. A geomorphology-based rainfall-runoff model of the WFIUH type (Width Function Instantaneous Unit Hydrograph) is implemented to extract the hydrograph of the hydrological event. All analyses are performed with GIS support based on a 5x5 m Digital Terrain Model (DTM). Two parameters have been used to calibrate the model: the average watershed velocity (Vmean = 0.08 m/s) and the hydrodynamic diffusivity (D = 10^-6 m^2/s). The model is calibrated on the assessed peak discharge value (98.5 m^3/s) and the observed hydrological response time (1 hr). The flood hydrograph thus obtained constituted the upstream boundary condition for the simulation of the propagation processes in the urban area. The flow propagation has been simulated with the 2D FLATModel code. FLATModel is a numerical code for solving the 2D shallow-water equations; it belongs to the family of explicit Godunov schemes. In this work the code is tested on an unstructured mesh. Unstructured meshes are particularly useful for detailed analyses and small-scale hydraulic studies; they allow the digital surface to adapt to complex urban estates, significantly improving the resolution of the simulation results. The use of unstructured meshes also entails a significant reduction of the computational burden, allowing the cells of the domain to be refined where better resolution is required. The results of the simulations are in good agreement with the field observations; therefore the implemented approach seems suitable for the simulation and prediction of possible future flash-flood events in similar areas.
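The WFIUH step amounts to convolving effective rainfall with a unit hydrograph. A generic sketch follows, with a placeholder IUH shape and invented rainfall and catchment values rather than the calibrated Atrani model.

    # Illustrative IUH convolution of the kind a WFIUH rainfall-runoff model
    # performs: discharge = effective rainfall convolved with the catchment's
    # instantaneous unit hydrograph. All values are placeholders.

    def simulate_hydrograph(rain_mm, iuh, area_km2, dt_s=600.0):
        """rain_mm: effective rainfall per step; iuh: unit response (sums to 1).
        Returns discharge in m^3/s per time step."""
        n = len(rain_mm) + len(iuh) - 1
        q = [0.0] * n
        for i, r in enumerate(rain_mm):
            vol = r * 1e-3 * area_km2 * 1e6          # rainfall volume in m^3
            for j, w in enumerate(iuh):
                q[i + j] += vol * w / dt_s           # spread the volume over the IUH
        return q

    iuh = [0.1, 0.3, 0.3, 0.2, 0.1]                  # placeholder width-function IUH
    rain = [1.0, 4.0, 10.0, 4.0]                     # mm of effective rain per 10-min step
    peak = max(simulate_hydrograph(rain, iuh, area_km2=9.5))
    print(f"peak discharge ~ {peak:.1f} m^3/s")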
Görges, Matthias; Winton, Pamela; Koval, Valentyna; Lim, Joanne; Stinson, Jonathan; Choi, Peter T; Schwarz, Stephan K W; Dumont, Guy A; Ansermino, J Mark
2013-08-01
Perioperative monitoring systems produce a large amount of uninterpreted data, use threshold alarms prone to artifacts, and rely on the clinician to continuously visually track changes in physiological data. To address these deficiencies, we developed an expert system that provides real-time clinical decisions for the identification of critical events. We evaluated the efficacy of the expert system for enhancing critical event detection in a simulated environment. We hypothesized that anesthesiologists would identify critical ventilatory events more rapidly and accurately with the expert system. We used a high-fidelity human patient simulator to simulate an operating room environment. Participants managed 4 scenarios (anesthetic vapor overdose, tension pneumothorax, anaphylaxis, and endotracheal tube cuff leak) in random order. In 2 of their 4 scenarios, participants were randomly assigned to the expert system, which provided trend-based alerts and potential differential diagnoses. Time to detection and time to treatment were measured. Workload questionnaires and structured debriefings were completed after each scenario, and a usability questionnaire at the conclusion of the session. Data were analyzed using a mixed-effects linear regression model; the Fisher exact test was used for workload scores. Twenty anesthesiology trainees and 15 staff anesthesiologists with a combined median (range) of 36 (29-66) years of age and 6 (1-38) years of anesthesia experience participated. For the endotracheal tube cuff leak, the expert system caused mean reductions of 128 (99% confidence interval [CI], 54-202) seconds in time to detection and 140 (99% CI, 79-200) seconds in time to treatment. In the other 3 scenarios, a best-case decrease of 97 seconds (lower 99% CI) in time to diagnosis for anaphylaxis and a worst-case increase of 63 seconds (upper 99% CI) in time to treatment for anesthetic vapor overdose were found. Participants were highly satisfied with the expert system (median score, 2 on a scale of 1-7). Based on participant debriefings, we identified avoidance of task fixation, reassurance to initiate invasive treatment, and confirmation of a suspected diagnosis as 3 safety-critical areas. When using the expert system, clinically important and statistically significant decreases in time to detection and time to treatment were observed for the endotracheal tube cuff leak scenario. The observed differences in the other 3 scenarios were much smaller and not statistically significant. Further evaluation is required to confirm the clinical utility of real-time expert systems for anesthesia.
DOT National Transportation Integrated Search
1982-06-01
In order to examine specific Automated Guideway Transit (AGT) developments and concepts, and to build a better knowledge base for future decision-making, the Urban Mass Transportation Administration (UMTA) undertook a new program of studies and techn...
Integrated Medical Model (IMM) 4.0 Enhanced Functionalities
NASA Technical Reports Server (NTRS)
Young, M.; Keenan, A. B.; Saile, L.; Boley, L. A.; Walton, M. E.; Shah, R. V.; Kerstman, E. L.; Myers, J. G.
2015-01-01
The Integrated Medical Model (IMM) is a probabilistic simulation model that uses input data on 100 medical conditions to simulate expected medical events, the resources required to treat them, and the resulting impact on the mission for specific crew and mission characteristics. The newest development version, IMM v4.0, adds capabilities that remove some of the conservative assumptions underlying the current operational version, IMM v3. While IMM v3 provides the framework to simulate whether a medical event occurred, IMM v4.0 also simulates when the event occurred during a mission timeline. This allows for more accurate estimation of mission time lost and resource utilization. In addition to the mission timeline, IMM v4.0 features two enhancements that address IMM v3 assumptions regarding medical event treatment. Medical events in IMM v3 are assigned the untreated outcome if any resource required to treat the event is unavailable. IMM v4.0 allows for partially treated outcomes that are proportional to the amount of required resources available, thus removing the dichotomous treatment assumption. An additional capability of IMM v4.0 is to use an alternative medical resource when the primary resource assigned to the condition is depleted, more accurately reflecting the real-world system. The additional capabilities defining IMM v4.0 (the mission timeline, partial treatment, and alternate drug selection) result in more realistic predicted mission outcomes. The primary model outcomes of IMM v4.0 for the ISS6 mission, including mission time lost, probability of evacuation, and probability of loss of crew life, are compared to those produced by the current operational version of IMM to showcase the enhanced prediction capabilities.
Hamman, William R; Beaubien, Jeffrey M; Beaudin-Seiler, Beth M
2009-12-01
The aims of this research are to begin to understand health care teams in their operational environment, establish metrics of performance for these teams, and validate a series of simulation scenarios that elicit team and technical skills. The focus is on defining the team model that will function in the operational environment in which health care professionals work. Simulations were performed across the United States in 70- to 1000-bed hospitals. Multidisciplinary health care teams analyzed more than 300 hours of videos of health care professionals performing simulations of team-based medical care in several different disciplines. Raters were trained to enhance inter-rater reliability. The study validated event sets that trigger team dynamics and established metrics for team-based care. Team skills were identified and modified using simulation scenarios that employed the event-set design process. Specific skills (technical and team) were identified by criticality measurement and task analysis methodology. In situ simulation, which includes a purposeful and Socratic method of debriefing, is a powerful intervention that can overcome inertia in clinician behavior and latent environmental systems that present a challenge to quality and patient safety. In situ simulation can increase awareness of risks, personalize the risks, and encourage the reflection, effort, and attention needed to make changes both to behaviors and to systems.
Evaluating average and atypical response in radiation effects simulations
NASA Astrophysics Data System (ADS)
Weller, R. A.; Sternberg, A. L.; Massengill, L. W.; Schrimpf, R. D.; Fleetwood, D. M.
2003-12-01
We examine the limits of performing single-event simulations using pre-averaged radiation events. Geant4 simulations show the necessity, for future devices, to supplement current methods with ensemble averaging of device-level responses to physically realistic radiation events. Initial Monte Carlo simulations have generated a significant number of extremal events in local energy deposition. These simulations strongly suggest that proton strikes of sufficient energy, even those that initiate purely electronic interactions, can initiate device response capable in principle of producing single event upset or microdose damage in highly scaled devices.
NASA Astrophysics Data System (ADS)
Fiori, E.; Comellas, A.; Molini, L.; Rebora, N.; Siccardi, F.; Gochis, D. J.; Tanelli, S.; Parodi, A.
2014-03-01
The city of Genoa, which lies between the Ligurian Sea and the Apennine mountains (Liguria, Italy), was rocked by severe flash floods on 4 November 2011. Nearly 500 mm of rain, a third of the average annual rainfall, fell in six hours. Six people perished and millions of euros in damages occurred. The synoptic-scale meteorological system moved across the Atlantic Ocean and into the Mediterranean, generating floods that killed 5 people in southern France before moving over the Ligurian Sea and Genoa, producing the extreme event studied here. Cloud-permitting simulations (1 km) of the finger-like convective system responsible for the torrential event over Genoa have been performed using the Advanced Research Weather and Forecasting Model (ARW-WRF, version 3.3). Two different microphysics schemes (WSM6 and Thompson) as well as three different convection closures (explicit, Kain-Fritsch, and Betts-Miller-Janjic) were evaluated to gain a deeper understanding of the physical processes underlying the observed heavy rain event and the model's capability to predict, in hindcast mode, its structure and evolution. The impact of forecast initialization and of model vertical discretization on hindcast results is also examined. Comparison between model hindcasts and observed fields provided by raingauge, satellite, and radar data shows that this particular event is strongly sensitive to the details of the mesoscale initialization, despite having evolved from a relatively large-scale weather system. Meso-γ details of the event were not well captured even by the best setting of the ARW-WRF model, and so peak hourly rainfalls were not especially well reproduced. The results also show that specification of microphysical parameters suitable to these events has a positive impact on the prediction of heavy precipitation intensity values.
NASA Technical Reports Server (NTRS)
Holt, James M.; Clanton, Stephen E.
1999-01-01
Results of the International Space Station (ISS) Node 2 Internal Active Thermal Control System (IATCS) gross leakage analysis are presented for evaluating total leakage flowrates and volume discharge caused by a gross leakage event (i.e. open boundary condition). A Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA/FLUINT) thermal hydraulic mathematical model (THMM) representing the Node 2 IATCS was developed to simulate system performance under steady-state nominal conditions as well as the transient flow effects resulting from an open line exposed to ambient. The objective of the analysis was to determine the adequacy of the leak detection software in limiting the quantity of fluid lost during a gross leakage event to within an acceptable level.
NASA Technical Reports Server (NTRS)
Holt, James M.; Clanton, Stephen E.
2001-01-01
Results of the International Space Station (ISS) Node 2 Internal Active Thermal Control System (IATCS) gross leakage analysis are presented for evaluating total leakage flow rates and volume discharge caused by a gross leakage event (i.e. open boundary condition). A Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA85/FLUINT) thermal hydraulic mathematical model (THMM) representing the Node 2 IATCS was developed to simulate system performance under steady-state nominal conditions as well as the transient flow effect resulting from an open line exposed to ambient. The objective of the analysis was to determine the adequacy of the leak detection software in limiting the quantity of fluid lost during a gross leakage event to within an acceptable level.
Event-Based Robust Control for Uncertain Nonlinear Systems Using Adaptive Dynamic Programming.
Zhang, Qichao; Zhao, Dongbin; Wang, Ding
2018-01-01
In this paper, the robust control problem for a class of continuous-time nonlinear systems with unmatched uncertainties is investigated using an event-based control method. First, the robust control problem is transformed into a corresponding optimal control problem with an augmented control and an appropriate cost function. Under the event-based mechanism, we prove that the solution of the optimal control problem can asymptotically stabilize the uncertain system with an adaptive triggering condition. That is, the designed event-based controller is robust to the original uncertain system. Note that the event-based controller is updated only when the triggering condition is satisfied, which can save communication resources between the plant and the controller. Then, a single-network adaptive dynamic programming structure with an experience replay technique is constructed to approximate the optimal control policies. The stability of the closed-loop system with the event-based control policy and the augmented control policy is analyzed using the Lyapunov approach. Furthermore, we prove that the minimal intersample time is bounded below by a nonzero positive constant, which excludes Zeno behavior during the learning process. Finally, two simulation examples are provided to demonstrate the effectiveness of the proposed control scheme.
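A toy illustration of the event-triggered idea: control is recomputed only when the gap between the current state and the last-sampled state violates a state-dependent threshold, rather than at every step. The plant, gain, and triggering rule below are invented stand-ins for the paper's ADP-based design.

    # Event-triggered feedback sketch: the control input is held between
    # triggering instants and recomputed only when the measurement error
    # exceeds a state-dependent threshold.

    def simulate(t_end=10.0, dt=1e-3, sigma=0.1):
        x, x_sampled = 1.0, 1.0
        u = -2.0 * x_sampled            # last computed control
        updates = 0
        steps = int(t_end / dt)
        for _ in range(steps):
            if abs(x - x_sampled) > sigma * abs(x):   # triggering condition
                x_sampled = x
                u = -2.0 * x_sampled                  # recompute the control
                updates += 1
            x += dt * (x - x**3 + u)                  # toy nonlinear plant
        return x, updates, steps

    x, updates, steps = simulate()
    print(f"x(T) = {x:.4f}; control recomputed {updates}/{steps} steps")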
Neural robust stabilization via event-triggering mechanism and adaptive learning technique.
Wang, Ding; Liu, Derong
2018-06-01
The robust control synthesis of continuous-time nonlinear systems with an uncertain term is investigated via an event-triggering mechanism and an adaptive critic learning technique. We mainly focus on combining the event-triggering mechanism with adaptive critic designs, so as to solve the nonlinear robust control problem. This not only makes better use of computation and communication resources, but also approaches controller design from the perspective of intelligent optimization. Through theoretical analysis, nonlinear robust stabilization can be achieved by obtaining an event-triggered optimal control law of the nominal system with a newly defined cost function and a certain triggering condition. The adaptive critic technique is employed to facilitate the event-triggered control design, where a neural network is introduced as an approximator in the learning phase. The performance of the event-triggered robust control scheme is validated via simulation studies and comparisons. The present method extends the application domain of both event-triggered control and adaptive critic control to nonlinear systems possessing dynamical uncertainties. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naitoh, Masanori; Ujita, Hiroshi; Nagumo, Hiroichi
1997-07-01
The Nuclear Power Engineering Corporation (NUPEC) has initiated a long-term program to develop the simulation system "IMPACT" for analysis of hypothetical severe accidents in nuclear power plants. IMPACT employs advanced methods of physical modeling and numerical computation, and can simulate a wide spectrum of scenarios ranging from normal operation to hypothetical, beyond-design-basis-accident events. Designed as a large-scale system of interconnected, hierarchical modules, IMPACT's distinguishing features include mechanistic models based on first principles and high-speed simulation on parallel processing computers. The present plan is a ten-year program starting from 1993, consisting of an initial year of preparatory work followed by three technical phases: Phase-1 for development of a prototype system; Phase-2 for completion of the simulation system, incorporating new achievements from basic studies; and Phase-3 for refinement through extensive verification and validation against test results and available real plant data.
Solution to the indexing problem of frequency domain simulation experiments
NASA Technical Reports Server (NTRS)
Mitra, Mousumi; Park, Stephen K.
1991-01-01
A frequency domain simulation experiment is one in which selected system parameters are oscillated sinusoidally to induce oscillations in one or more system statistics of interest. A spectral (Fourier) analysis of these induced oscillations is then performed. To perform this spectral analysis, all oscillation frequencies must be referenced to a common, independent variable - an oscillation index. In a discrete-event simulation, the global simulation clock is the most natural choice for the oscillation index. However, past efforts to reference all frequencies to the simulation clock generally yielded unsatisfactory results. The reason for these unsatisfactory results is explained in this paper and a new methodology which uses the simulation clock as the oscillation index is presented. Techniques for implementing this new methodology are demonstrated by performing a frequency domain simulation experiment for a network of queues.
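A minimal sketch of such an experiment, with the simulation clock as the oscillation index: a parameter is oscillated sinusoidally as a function of the clock, and the induced oscillation in a response statistic is recovered by Fourier analysis. The queue-like response model is invented for illustration.

    import numpy as np

    # Frequency domain simulation experiment sketch: oscillate a service-rate
    # parameter against the simulation clock and locate the induced frequency
    # in a response statistic via the FFT.

    rng = np.random.default_rng(0)
    n = 4000
    clock = np.arange(n) * 1.0                 # simulation clock as oscillation index
    f0 = 0.01                                  # driving frequency (cycles per clock unit)
    service_rate = 1.0 + 0.2 * np.sin(2 * np.pi * f0 * clock)

    # crude response statistic: smoothed congestion driven by the oscillated rate
    load = 0.8 / service_rate + 0.05 * rng.standard_normal(n)
    response = np.convolve(load, np.ones(16) / 16, mode="same")

    spectrum = np.abs(np.fft.rfft(response - response.mean()))
    freqs = np.fft.rfftfreq(n, d=1.0)
    print(f"peak response frequency: {freqs[spectrum.argmax()]:.4f} (driving f0 = {f0})")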
A View on Future Building System Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael
This chapter presents what a future environment for building system modeling and simulation may look like. As buildings continue to require increased performance and better comfort, their energy and control systems are becoming more integrated and complex. We therefore focus in this chapter on the modeling, simulation and analysis of building energy and control systems. Such systems can be classified as heterogeneous systems because they involve multiple domains, such as thermodynamics, fluid dynamics, heat and mass transfer, electrical systems, control systems and communication systems. Also, they typically involve multiple temporal and spatial scales, and their evolution can be described by coupled differential equations, discrete equations and events. Modeling and simulating such systems requires a higher level of abstraction and modularisation to manage the increased complexity compared to what is used in today's building simulation programs. Therefore, the trend towards more integrated building systems is likely to be a driving force for changing the status quo of today's building simulation programs. This chapter discusses evolving modeling requirements and outlines a path toward a future environment for modeling and simulation of heterogeneous building systems. A range of topics that would require many additional pages of discussion has been omitted. Examples include computational fluid dynamics for air and particle flow in and around buildings, people movement, daylight simulation, uncertainty propagation and optimisation methods for building design and controls. For different discussions and perspectives on the future of building modeling and simulation, we refer to Sahlin (2000), Augenbroe (2001) and Malkawi and Augenbroe (2004).
NASA Astrophysics Data System (ADS)
José Gómez-Navarro, Juan; Raible, Christoph C.; Blumer, Sandro; Martius, Olivia; Felder, Guido
2016-04-01
Extreme precipitation episodes, although rare, are natural phenomena that can threaten human activities, especially in densely populated areas such as Switzerland. Their relevance demands the design of public policies that protect public assets and private property. Therefore, the current understanding of such exceptional situations needs to be improved, i.e. the climatic characterisation of their triggering circumstances, severity, frequency, and spatial distribution. Such increased knowledge shall eventually lead us to produce more reliable projections about the behaviour of these events under ongoing climate change. Unfortunately, the study of extreme situations is hampered by the short instrumental record, which precludes a proper characterization of events with return periods exceeding a few decades. This study proposes a new approach that allows studying storms based on a synthetic, but physically consistent, database of weather situations obtained from a long climate simulation. Our starting point is a 500-yr control simulation carried out with the Community Earth System Model (CESM). In a second step, this dataset is dynamically downscaled with the Weather Research and Forecasting model (WRF) to a final resolution of 2 km over the Alpine area. However, downscaling the full CESM simulation at such high resolution is infeasible nowadays; hence, a number of case studies are selected beforehand. This selection is carried out by examining the precipitation averaged over an area encompassing Switzerland in the ESM. Using a hydrological criterion, precipitation is accumulated over several temporal windows: 1 day, 2 days, 3 days, 5 days and 10 days. The 4 most extreme events in each category and season are selected, leading to a total of 336 days to be simulated. The simulated events are affected by systematic biases that have to be accounted for before this dataset can be used as input to hydrological models. Thus, quantile mapping is used to remove such biases. For this task, a 20-yr high-resolution control simulation is carried out. The extreme events belong to this distribution, and can be mapped onto the distribution of precipitation obtained from a gridded precipitation product provided by MeteoSwiss. This procedure yields bias-free extreme precipitation events which serve as input to hydrological models that eventually produce simulated, yet physically consistent, flooding events. Thereby, the proposed methodology guarantees consistency with the underlying physics of extreme events, and reproduces plausible impacts of situations with return periods of up to five centuries.
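A minimal empirical quantile mapping of the kind described, run on synthetic data: a simulated value is replaced by the observed value at the same quantile of the control-period distribution. The Gamma distributions and values are illustrative, not the CESM/WRF or MeteoSwiss data.

    import numpy as np

    # Empirical quantile mapping: map each simulated value through
    # obs_quantile(sim_cdf(value)), estimated from control-period samples.

    def quantile_map(sim_values, sim_control, obs_control):
        sim_sorted = np.sort(sim_control)
        obs_sorted = np.sort(obs_control)
        # empirical CDF position of each value within the simulated control run
        ranks = np.searchsorted(sim_sorted, sim_values, side="right") / len(sim_sorted)
        ranks = np.clip(ranks, 1e-6, 1 - 1e-6)
        return np.quantile(obs_sorted, ranks)

    rng = np.random.default_rng(42)
    obs = rng.gamma(2.0, 5.0, size=7300)       # ~20 yr of daily observed precipitation
    sim = rng.gamma(2.0, 4.0, size=7300)       # control run with a dry bias
    event = np.array([55.0, 80.0])             # extreme daily totals from the long run
    print("bias-corrected event:", quantile_map(event, sim, obs).round(1))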
Statistical and Probabilistic Extensions to Ground Operations' Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Trocine, Linda; Cummings, Nicholas H.; Bazzana, Ashley M.; Rychlik, Nathan; LeCroy, Kenneth L.; Cates, Grant R.
2010-01-01
NASA's human exploration initiatives will invest in technologies, public/private partnerships, and infrastructure, paving the way for the expansion of human civilization into the solar system and beyond. As it has been for the past half century, the Kennedy Space Center will be the embarkation point for humankind's journey into the cosmos. Functioning as a next generation space launch complex, Kennedy's launch pads, integration facilities, processing areas, and launch and recovery ranges will bustle with the activities of the world's space transportation providers. In developing this complex, KSC teams work through the potential operational scenarios: conducting trade studies, planning and budgeting for expensive and limited resources, and simulating alternative operational schemes. Numerous tools, among them discrete event simulation (DES), were matured during the Constellation Program to conduct such analyses with the purpose of optimizing the launch complex for maximum efficiency, safety, and flexibility while minimizing life cycle costs. Discrete event simulation is a computer-based modeling technique for complex and dynamic systems where the state of the system changes at discrete points in time and whose inputs may include random variables. DES is used to assess timelines and throughput, and to support operability studies and contingency analyses. It is applicable to any space launch campaign and informs decision-makers of the effects of varying numbers of expensive resources and the impact of off-nominal scenarios on measures of performance. In order to develop representative DES models, methods were adopted, exploited, or created to extend traditional uses of DES. The Delphi method was adopted and utilized for task duration estimation. DES software was exploited for probabilistic event variation. A roll-up process was developed to reuse models and model elements in other, less-detailed models. The DES team continues to innovate and expand DES capabilities to address KSC's planning needs.
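As a rough illustration of the technique this abstract describes (state changes at discrete points in time, with random inputs), the toy Python sketch below advances a simulation clock through an event list. The triangular durations stand in for Delphi-style three-point estimates; every number and name here is invented for the example, not drawn from the KSC models.

```python
import heapq, random

def simulate_campaign(n_tasks=5, seed=1):
    """Toy discrete event simulation: a chain of processing tasks whose
    durations are random (triangular low/high/mode, like a three-point
    Delphi estimate) and whose completions are the discrete events."""
    random.seed(seed)
    clock, events = 0.0, []
    heapq.heappush(events, (random.triangular(2, 10, 5), 1))  # (time, task id)
    while events:
        clock, task = heapq.heappop(events)           # advance to the next event
        print(f"t={clock:7.2f}: task {task} complete")
        if task < n_tasks:                            # schedule the successor task
            heapq.heappush(events, (clock + random.triangular(2, 10, 5), task + 1))
    return clock

print("campaign duration:", simulate_campaign())
```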
Developing a discrete event simulation model for university student shuttle buses
NASA Astrophysics Data System (ADS)
Zulkepli, Jafri; Khalid, Ruzelan; Nawawi, Mohd Kamal Mohd; Hamid, Muhammad Hafizan
2017-11-01
Providing shuttle buses for university students to attend their classes is crucial, especially when the number of students is large and the distances between their classes and residential halls are far. These factors, in addition to the currently non-optimal bus services, typically require the students to wait longer, which eventually leads to complaints. To considerably reduce the waiting time, it is thus important to provide the optimal number of buses to transport students from location to location and effective route schedules that fulfil the students' demand at the relevant time ranges. The optimal bus number and schedules are to be determined and tested using a flexible decision platform. This paper thus models the current student shuttle bus services of a university using a Discrete Event Simulation approach. The model can flexibly simulate whatever changes are configured to the current system and report their effects on the performance measures. How the model was conceptualized and formulated for future system configurations is the main interest of this paper.
Reducing acquisition risk through integrated systems of systems engineering
NASA Astrophysics Data System (ADS)
Gross, Andrew; Hobson, Brian; Bouwens, Christina
2016-05-01
In the fall of 2015, the Joint Staff J7 (JS J7) sponsored the Bold Quest (BQ) 15.2 event and conducted planning and coordination to combine this event into a joint event with the Army Warfighting Assessment (AWA) 16.1 sponsored by the U.S. Army. This multipurpose event combined a Joint/Coalition exercise (JS J7) with components of testing, training, and experimentation required by the Army. In support of Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ASA(ALT)) System of Systems Engineering and Integration (SoSE&I), Always On-On Demand (AO-OD) used a system of systems (SoS) engineering approach to develop a live, virtual, constructive distributed environment (LVC-DE) to support risk mitigation utilizing this complex and challenging exercise environment for a system preparing to enter limited user test (LUT). AO-OD executed a requirements-based SoS engineering process starting with user needs and objectives from Army Integrated Air and Missile Defense (AIAMD), Patriot units, Coalition Intelligence, Surveillance and Reconnaissance (CISR), Focused End State 4 (FES4) Mission Command (MC) Interoperability with Unified Action Partners (UAP), and Mission Partner Environment (MPE) Integration and Training, Tactics and Procedures (TTP) assessment. The SoS engineering process decomposed the common operational, analytical, and technical requirements, while utilizing the Institute of Electrical and Electronics Engineers (IEEE) Distributed Simulation Engineering and Execution Process (DSEEP) to provide structured accountability for the integration and execution of the AO-OD LVC-DE. As a result of this process implementation, AO-OD successfully planned for, prepared, and executed a distributed simulation support environment that responsively satisfied user needs and objectives, demonstrating the viability of an LVC-DE environment to support multiple user objectives and support risk mitigation activities for systems in the acquisition process.
Development of a GCR Event-based Risk Model
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Ponomarev, Artem L.; Plante, Ianik; Carra, Claudio; Kim, Myung-Hee
2009-01-01
A goal at NASA is to develop event-based systems biology models of space radiation risks that will replace the current dose-based empirical models. Complex and varied biochemical signaling processes transmit the initial DNA and oxidative damage from space radiation into cellular and tissue responses. Mis-repaired damage or aberrant signals can lead to genomic instability, persistent oxidative stress or inflammation, which are causative of cancer and CNS risks. Protective signaling through adaptive responses or cell repopulation is also possible. We are developing a computational simulation approach to galactic cosmic ray (GCR) effects that is based on biological events rather than average quantities such as dose, fluence, or dose equivalent. The goal of the GCR Event-based Risk Model (GERMcode) is to provide a simulation tool to describe and integrate physical and biological events into stochastic models of space radiation risks. We used the quantum multiple scattering model of heavy ion fragmentation (QMSFRG) and well-known energy loss processes to develop a stochastic Monte Carlo based model of GCR transport in spacecraft shielding and tissue. We validated the accuracy of the model by comparing it to physical data from the NASA Space Radiation Laboratory (NSRL). Our simulation approach allows us to time-tag each GCR proton or heavy ion interaction in tissue, including correlated secondary ions often of high multiplicity. Conventional space radiation risk assessment employs average quantities, and assumes linearity and additivity of responses over the complete range of GCR charges and energies. To investigate possible deviations from these assumptions, we studied several biological response pathway models of varying induction and relaxation times, including the ATM, TGF-Smad, and WNT signaling pathways. We then considered small volumes of interacting cells and the time-dependent biophysical events that the GCR would produce within these tissue volumes to estimate how GCR event rates mapped to biological signaling induction and relaxation times. We considered several hypotheses related to signaling and cancer risk, and then performed simulations for conditions where aberrant or adaptive signaling would occur on long-duration space missions. Our results do not support the conventional assumptions of dose, linearity and additivity. A discussion is given of how event-based systems biology models, which focus on biological signaling as the mechanism to propagate damage or adaptation, can be further developed for cancer and CNS space radiation risk projections.
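The mapping of GCR event rates onto signaling induction and relaxation times can be caricatured with a Poisson arrival process. The sketch below is illustrative only: the traversal rate, relaxation time and mission length are assumed values chosen for the example, not GERMcode quantities.

```python
import numpy as np

rng = np.random.default_rng(42)
rate_per_day = 0.5     # assumed mean GCR traversals per cell volume per day
relax_days = 2.0       # assumed signaling relaxation time
mission_days = 365.0

# Poisson process: exponential inter-arrival times between traversal events
hits = np.cumsum(rng.exponential(1.0 / rate_per_day, size=1000))
hits = hits[hits < mission_days]

# Fraction of events whose successor arrives inside the relaxation window,
# i.e. where responses could interact rather than add independently
overlap = np.mean(np.diff(hits) < relax_days)
print(f"{len(hits)} events; {overlap:.1%} followed within the relaxation time")
```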
Assessing Inhalation Exposures Associated with ...
This paper presents a simulation-based approach for assessing short-term, water-distribution-system-wide inhalation exposures that could result from showering and the use of humidifiers during contamination events.
NASA Astrophysics Data System (ADS)
Yucel, Ismail; Onen, Alper
2013-04-01
Evidence is showing that global warming or climate change has a direct influence on changes in precipitation and the hydrological cycle. Extreme weather events such as heavy rainfall and flooding are projected to become much more frequent as the climate warms. Regional hydrometeorological system models, which couple the atmosphere with physically based, gridded surface hydrology, provide efficient predictions for extreme hydrological events. Such a modeling system can be used for flood forecasting and warning, as it provides continuous monitoring of precipitation over large areas at high spatial resolution. This study examines the performance of the Weather Research and Forecasting (WRF-Hydro) model, which performs the terrain, sub-terrain, and channel routing, in producing streamflow from WRF-derived forcing of extreme precipitation events. The capability of the system with different options, such as data assimilation, is tested for a number of flood events observed in basins of the western Black Sea region of Turkey. Rainfall event structures and associated flood responses are evaluated with gauge and satellite-derived precipitation and measured streamflow values. The modeling system shows skill in capturing the spatial and temporal structure of extreme rainfall events and the resulting flood hydrographs. High-resolution routing modules activated in the model enhance the simulated discharges.
Wildland fire probabilities estimated from weather model-deduced monthly mean fire danger indices
Haiganoush K. Preisler; Shyh-Chin Chen; Francis Fujioka; John W. Benoit; Anthony L. Westerling
2008-01-01
The National Fire Danger Rating System indices deduced from a regional simulation weather model were used to estimate probabilities and numbers of large fire events on monthly and 1-degree grid scales. The weather model simulations and forecasts are ongoing experimental products from the Experimental Climate Prediction Center at the Scripps Institution of Oceanography...
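The truncated abstract does not state the model form, but in related work by these authors such probabilities are commonly obtained from a logistic regression on a fire danger index. The sketch below shows that general idea only; the coefficients are invented for illustration.

```python
import numpy as np

def fire_probability(index, b0=-6.0, b1=0.08):
    """Illustrative logistic link between a monthly mean fire danger index
    and the probability of at least one large fire in a grid cell."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * index)))

for idx in (20, 50, 80):
    print(f"index {idx:3d}: P(large fire) = {fire_probability(idx):.3f}")
```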
Resource Contention Management in Parallel Systems
1989-04-01
technical competence include communications, command and control, battle management, information processing, surveillance sensors, intelligence data ... two-simulation approach since they require only a single simulation run. More importantly, since they involve only observed data, they may also be ... we use the original, unobservable RAC of Section 2 and handle unobservable transitions by generating artificial events, when required, using a random
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenkranz, Joshua-Benedict; Brancucci Martinez-Anido, Carlo; Hodge, Bri-Mathias
Solar power generation, unlike conventional forms of electricity generation, has higher variability and uncertainty in its output because solar plant output is strongly impacted by weather. As the penetration rate of solar capacity increases, grid operators are increasingly concerned about accommodating the increased variability and uncertainty that solar power provides. This paper illustrates the impacts of increasing solar power penetration on the ramping of conventional electricity generators by simulating the operation of the Independent System Operator -- New England power system. A production cost model was used to simulate the power system under five different scenarios, one without solar power and four with increasing solar power penetrations up to 18%, in terms of annual energy. The impact of solar power is analyzed on six different temporal intervals, including hourly and multi-hourly (2- to 6-hour) ramping. The results show how the integration of solar power increases the 1- to 6-hour ramping events of the net load (electric load minus solar power). The study also analyzes the impact of solar power on the distribution of multi-hourly ramping events of fossil-fueled generators and shows increasing 1- to 6-hour ramping events for all different generators. Generators with higher ramp rates such as gas and oil turbine and internal combustion engine generators increased their ramping events by 200% to 280%. For other generator types -- including gas combined-cycle generators, coal steam turbine generators, and gas and oil steam turbine generators -- more and higher ramping events occurred as well for higher solar power penetration levels.
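Counting multi-hourly ramping events from a net-load series is a simple windowed difference. The sketch below uses a synthetic hourly load and solar profile and an invented 150 MW threshold; none of the numbers are from the study.

```python
import numpy as np

hours = np.arange(8760)
load = 1000 + 200 * np.sin(2 * np.pi * hours / 24)                 # toy hourly load (MW)
solar = np.clip(300 * np.sin(2 * np.pi * (hours % 24 - 6) / 12), 0, None)
net_load = load - solar                                            # load minus solar power

# k-hour ramp: change in net load over a k-hour window, k = 1..6
for k in range(1, 7):
    ramps = net_load[k:] - net_load[:-k]
    big = np.sum(np.abs(ramps) > 150)                              # illustrative threshold
    print(f"{k}-hour ramps exceeding 150 MW: {big}")
```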
USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation
2016-09-01
Approved for public release; distribution is unlimited. By Timothy A. Curling. ... optimization and discrete-event simulation. This construct can potentially provide an effective means of improving order management decisions. However
ANALYSIS OF INPATIENT HOSPITAL STAFF MENTAL WORKLOAD BY MEANS OF DISCRETE-EVENT SIMULATION
2016-03-24
AFIT-ENV-MS-16-M-166, by Erich W. ... Approved for public release; distribution unlimited.
Modeling and simulation of queuing system for customer service improvement: A case study
NASA Astrophysics Data System (ADS)
Xian, Tan Chai; Hong, Chai Weng; Hawari, Nurul Nazihah
2016-10-01
This study aims to develop a queuing model of UniMall using a discrete event simulation approach to analyze the service performance that affects customer satisfaction. The performance measures considered in this model include the average time in system, the total number of students served, the number of students in the waiting queue, the waiting time in queue, and the maximum buffer length. The ARENA simulation software is used to develop the simulation model, and the output is analyzed. Based on the analysis of the output, it is recommended that the management of UniMall consider introducing shifts and adding another payment counter in the morning.
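For readers without Arena, the same kind of question (one versus two payment counters) can be explored with a minimal event-based queue in Python. The arrival and service rates below are invented, and the logic is a generic FCFS multi-server queue rather than the authors' model.

```python
import random

def sim_counters(n_servers, arrival_rate=1.0, service_rate=0.6, n_cust=5000, seed=3):
    """Minimal M/M/c queue: returns the mean waiting time in queue."""
    random.seed(seed)
    t, arrivals = 0.0, []
    for _ in range(n_cust):
        t += random.expovariate(arrival_rate)      # Poisson arrivals
        arrivals.append(t)
    free_at = [0.0] * n_servers                    # when each counter next frees up
    wait_total = 0.0
    for a in arrivals:                             # FCFS: take the earliest-free counter
        i = min(range(n_servers), key=free_at.__getitem__)
        start = max(a, free_at[i])                 # served immediately or after queueing
        wait_total += start - a
        free_at[i] = start + random.expovariate(service_rate)
    return wait_total / n_cust

for c in (1, 2):
    print(f"{c} counter(s): mean wait = {sim_counters(c):.2f}")
```

With one counter the toy system is overloaded and waits grow without bound, while a second counter stabilizes the queue, which is the shape of trade-off the Arena study evaluates.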
NASA Technical Reports Server (NTRS)
Hurwitz, M. M.; Song, I.-S.; Oman, L. D.; Newman, P. A.; Molod, A. M.; Frith, S. M.; Nielsen, J. E.
2010-01-01
"Warm pool" (WP) El Nino events are characterized by positive sea surface temperature (SST) anomalies in the central equatorial Pacific. During austral spring. WP El Nino events are associated with an enhancement of convective activity in the South Pacific Convergence Zone, provoking a tropospheric planetary wave response and thus increasing planetary wave driving of the Southern Hemisphere stratosphere. These conditions lead to higher polar stratospheric temperatures and to a weaker polar jet during austral summer, as compared with neutral ENSO years. Furthermore, this response is sensitive to the phase of the quasi-biennial oscillation (QBO): a stronger warming is seen in WP El Nino events coincident with the easterly phase of the quasi-biennial oscillation (QBO) as compared with WP El Nino events coincident with a westerly or neutral QBO. The Goddard Earth Observing System (GEOS) chemistry-climate model (CCM) is used to further explore the atmospheric response to ENSO. Time-slice simulations are forced by composited SSTs from observed WP El Nino and neutral ENSO events. The modeled eddy heat flux, temperature and wind responses to WP El Nino events are compared with observations. A new gravity wave drag scheme has been implemented in the GEOS CCM, enabling the model to produce a realistic, internally generated QBO. By repeating the above time-slice simulations with this new model version, the sensitivity of the WP El Nino response to the phase of the quasi-biennial oscillation QBO is estimated.
NASA Technical Reports Server (NTRS)
Hurwitz, M. M.; Song, I.-S.; Oman, L. D.; Newman, P. A.; Molod, A. M.; Frith, S. M.; Nielsen, J. E.
2011-01-01
"Warm pool" (WP) El Nino events are characterized by positive sea surface temperature (SST) anomalies in the central equatorial Pacific. During austral spring, WP El Nino events are associated with an enhancement of convective activity in the South Pacific Convergence Zone, provoking a tropospheric planetary wave response and thus increasing planetary wave driving of the Southern Hemisphere stratosphere. These conditions lead to higher polar stratospheric temperatures and to a weaker polar jet during austral summer, as compared with neutral ENSO years. Furthermore, this response is sensitive to the phase of the quasi-biennial oscillation (QBO): a stronger warming is seen in WP El Nino events coincident with the easterly phase of the quasi-biennial oscillation (QBO) as compared with WP El Nino events coincident with a westerly or neutral QBO. The Goddard Earth Observing System (GEOS) chemistry-climate model (CCM) is used to further explore the atmospheric response to ENSO. Time-slice simulations are forced by composited SSTs from observed NP El Nino and neutral ENSO events. The modeled eddy heat flux, temperature and wind responses to WP El Nino events are compared with observations. A new gravity wave drag scheme has been implemented in the GEOS CCM, enabling the model to produce e realistic, internally generated QBO. By repeating the above time-slice simulations with this new model version, the sensitivity of the WP El Nino response to the phase of the quasi-biennial oscillation QBO is estimated.
NASA Astrophysics Data System (ADS)
Rudaz, Benjamin; Loye, Alexandre; Mazotti, Benoit; Bardou, Eric; Jaboyedoff, Michel
2013-04-01
The Materosion project, conducted jointly by the Swiss canton of Valais (CREALP) and the University of Lausanne (CRET), aims at forecasting sediment transfer in alpine torrents using the sediment cascade concept. The study site is the high Anniviers valley, around the village of Zinal (Valais). The torrents are divided into homogeneous reaches, to and from which sediments are transported by debris flows and bedload transport events. The model runs simulations of 100 years, with a 1-month time step, each step being assigned a random meteorological event ranging from no activity up to high magnitude debris flows. These events are calibrated using local rain data and the corresponding observed debris flow frequencies. The model is applied to ten torrent systems with variable geological contexts, watershed geometries and sediment supplies. Given the high number of possible event scenarios, 10'000 simulations per torrent are performed, giving a statistical distribution of cumulated volumes and an event size distribution. A way to visualize the complex results data is proposed, and a back-analysis of the internal sediment cascade dynamic is performed. The back-analysis shows that the results' distribution stabilizes after ~5'000 simulations. The model results, especially the range of debris flow volumes, are crucial for maintaining mitigation measures such as retention dams, and give clues for future sediment cascade modeling.
The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine
NASA Astrophysics Data System (ADS)
Liu, Yuan; Zhang, Xin; Zhang, Tianhong
2017-11-01
A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults by Kalman filter banks and reconstruct the signal by a real-time on-board adaptive model combining a simplified real-time model and an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment has been carried out. Besides the real I/O interfaces, controller hardware and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, semi-physical simulation has a higher degree of confidence. In order to meet the needs of semi-physical simulation, a rapid prototyping controller with fault-tolerant control ability based on the NI CompactRIO platform was designed and verified on the semi-physical simulation test platform. The results show that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
NASA Technical Reports Server (NTRS)
Pomerantz, M. I.; Lim, C.; Myint, S.; Woodward, G.; Balaram, J.; Kuo, C.
2012-01-01
The Jet Propulsion Laboratory's Entry, Descent and Landing (EDL) Reconstruction Task has developed a software system that provides mission operations personnel and analysts with a real-time telemetry-based live display, playback and post-EDL reconstruction capability that leverages the existing high-fidelity, physics-based simulation framework and modern game-engine-derived 3D visualization system developed in the JPL Dynamics and Real Time Simulation (DARTS) Lab. Developed as a multi-mission solution, the EDL Telemetry Visualization (ETV) system has been used for a variety of projects including NASA's Mars Science Laboratory (MSL), NASA's Low Density Supersonic Decelerator (LDSD) and JPL's MoonRise lunar sample return proposal.
NASA Astrophysics Data System (ADS)
Filipcic, A.; Haug, S.; Hostettler, M.; Walker, R.; Weber, M.
2015-12-01
The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing Centre, was the highest-ranked European system on the TOP500 list in 2014, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular, a custom-made integration with the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, a partial GPU acceleration of the Geant4 detector simulations has been implemented.
Probabilistic Approaches for Multi-Hazard Risk Assessment of Structures and Systems
NASA Astrophysics Data System (ADS)
Kwag, Shinyoung
Performance assessment of structures, systems, and components for multi-hazard scenarios has received significant attention in recent years. However, the concept of multi-hazard analysis is quite broad in nature and the focus of existing literature varies across a wide range of problems. In some cases, such studies focus on hazards that either occur simultaneously or are closely correlated with each other, for example, seismically induced flooding or seismically induced fires. In other cases, multi-hazard studies relate to hazards that are not dependent or correlated but have a strong likelihood of occurrence at different times during the lifetime of a structure. The current approaches for risk assessment need enhancement to account for multi-hazard risks. They must be able to account for uncertainty propagation in a systems-level analysis, consider correlation among events or failure modes, and allow integration of newly available information from continually evolving simulation models, experimental observations, and field measurements. This dissertation presents a detailed study that proposes enhancements by incorporating Bayesian networks and Bayesian updating within a performance-based probabilistic framework. The performance-based framework allows propagation of risk as well as uncertainties in the risk estimates within a systems analysis. Unlike conventional risk assessment techniques such as fault-tree analysis, a Bayesian network can account for statistical dependencies and correlations among events and hazards. The proposed approach is extended to develop a risk-informed framework for quantitative validation and verification of high-fidelity system-level simulation tools. Validation of such simulations can be quite formidable within the context of a multi-hazard risk assessment in nuclear power plants. The efficiency of this approach lies in the identification of critical events, components, and systems that contribute to the overall risk. Validation of any event or component on the critical path is relatively more important in a risk-informed environment. The significance of multi-hazard risk is also illustrated for the uncorrelated hazards of earthquakes and high winds, which may result in competing design objectives. It is also illustrated that the number of computationally intensive nonlinear simulations needed in performance-based risk assessment for external hazards can be significantly reduced by using the power of Bayesian updating in conjunction with the concept of equivalent limit states.
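The core arithmetic behind the Bayesian updating this dissertation builds on can be shown in a single-node example; the prior and likelihoods below are invented for illustration, a far simpler case than the networks the work actually constructs.

```python
def bayes_update(prior, p_obs_given_fail, p_obs_given_ok):
    """Posterior failure probability after observing one piece of evidence."""
    num = p_obs_given_fail * prior
    return num / (num + p_obs_given_ok * (1.0 - prior))

prior = 0.01                           # assumed prior probability of component failure
post = bayes_update(prior, 0.9, 0.1)   # evidence much more likely under failure
print(f"posterior after evidence: {post:.3f}")
```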
ARC Collaborative Research Seminar Series
... been used to formulate design rules for hydration-based TES systems. Don Siegel is an Associate ... structural-acoustics, design of complex systems, and blast event simulations. ... interests include advanced fatigue and fracture assessment methodologies, computational methods for ...
A One Dimensional, Time Dependent Inlet/Engine Numerical Simulation for Aircraft Propulsion Systems
NASA Technical Reports Server (NTRS)
Garrard, Doug; Davis, Milt, Jr.; Cole, Gary
1999-01-01
The NASA Lewis Research Center (LeRC) and the Arnold Engineering Development Center (AEDC) have developed a closely coupled computer simulation system that provides a one dimensional, high frequency inlet/engine numerical simulation for aircraft propulsion systems. The simulation system, operating under the LeRC-developed Application Portable Parallel Library (APPL), closely coupled a supersonic inlet with a gas turbine engine. The supersonic inlet was modeled using the Large Perturbation Inlet (LAPIN) computer code, and the gas turbine engine was modeled using the Aerodynamic Turbine Engine Code (ATEC). Both LAPIN and ATEC provide a one dimensional, compressible, time dependent flow solution by solving the one dimensional Euler equations for the conservation of mass, momentum, and energy. Source terms are used to model features such as bleed flows, turbomachinery component characteristics, and inlet subsonic spillage while unstarted. High frequency events, such as compressor surge and inlet unstart, can be simulated with a high degree of fidelity. The simulation system was exercised using a supersonic inlet with sixty percent of the supersonic area contraction occurring internally, and a GE J85-13 turbojet engine.
NASA Technical Reports Server (NTRS)
Leonard, Daniel; Parsons, Jeremy W.; Cates, Grant
2014-01-01
In May 2013, NASA's GSDO Program requested a study to develop a discrete event simulation (DES) model that analyzes the launch campaign process of the Space Launch System (SLS) from an integrated commodities perspective. The scope of the study includes launch countdown and scrub turnaround and focuses on four core launch commodities: hydrogen, oxygen, nitrogen, and helium. Previously, the commodities were only analyzed individually and deterministically for their launch support capability, but this study was the first to integrate them to examine the impact of their interactions on a launch campaign as well as the effects of process variability on commodity availability. The study produced a validated DES model, built in Rockwell Arena, which showed that Kennedy Space Center's ground systems are capable of supporting a 48-hour scrub turnaround for the SLS. The model will be maintained and updated to provide commodity consumption analysis of future ground system and SLS configurations.
NASA Technical Reports Server (NTRS)
Proctor, Fred H.
1994-01-01
On 8 July 1989, a very strong microburst was detected by the Low-Level Windshear Alert System (LLWAS) within the approach corridor just north of Denver Stapleton Airport. The microburst was encountered by a Boeing 737-200 in a 'go-around' configuration, which was reported to have lost considerable airspeed and altitude during penetration. Data from LLWAS revealed a pulsating microburst with an estimated peak velocity change of 48 m/s. Wilson et al. reported that the microburst was accompanied by no apparent visible clues such as rain or virga, although blowing dust was present. Weather service hourly reports indicated virga in all quadrants near the time of the event. A National Center for Atmospheric Research (NCAR) research Doppler radar was operating, but according to Wilson et al., meaningful velocities could not be measured within the microburst due to the low radar-reflectivity factor and poor siting for windshear detection at Stapleton. This paper presents results from the three-dimensional numerical simulation of this event, using the Terminal Area Simulation System (TASS) model. The TASS model is a three-dimensional nonhydrostatic cloud model that includes parameterizations for both liquid and ice phase microphysics, and has been used in investigations of both wet and dry microburst case studies. The focus of this paper is the pulsating characteristic and the very low radar reflectivity of this event. Most of the surface outflow contained no precipitation. Such an event may be difficult to detect by radar.
Biological Event Modeling for Response Planning
NASA Astrophysics Data System (ADS)
McGowan, Clement; Cecere, Fred; Darneille, Robert; Laverdure, Nate
People worldwide continue to fear a naturally occurring or terrorist-initiated biological event. Responsible decision makers have begun to prepare for such a biological event, but critical policy and system questions remain: What are the best courses of action to prepare for and react to such an outbreak? Where should resources be stockpiled? How many hospital resources, such as doctors, nurses, and intensive-care beds, will be required? Will quarantine be necessary? Decision analysis tools, particularly modeling and simulation, offer ways to address and help answer these questions.
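A minimal compartment model illustrates how simulation can address the hospital-resource question raised above. The SIR parameters and hospitalization fraction below are assumptions made for this sketch, not values from the paper.

```python
def sir_peak_beds(beta=0.3, gamma=0.1, N=1_000_000, hosp_frac=0.05, days=365):
    """Toy SIR outbreak model (daily Euler steps); returns the peak
    simultaneous hospital demand under the assumed parameters."""
    S, I, R = N - 1.0, 1.0, 0.0
    peak = 0.0
    for _ in range(days):
        new_inf = beta * S * I / N       # new infections this day
        new_rec = gamma * I              # new recoveries this day
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        peak = max(peak, I * hosp_frac)  # assumed fraction needing a bed
    return peak

print(f"peak beds needed: {sir_peak_beds():,.0f}")
```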
A framework for service enterprise workflow simulation with multi-agents cooperation
NASA Astrophysics Data System (ADS)
Tan, Wenan; Xu, Wei; Yang, Fujun; Xu, Lida; Jiang, Chuanqun
2013-11-01
Dynamic process modelling for service businesses is a key technique for service-oriented information systems and service business management, and the workflow model of business processes is the core part of service systems. Service business workflow simulation is the prevalent approach for dynamically analysing service business processes. The generic method for service business workflow simulation is based on discrete event queuing theory, which lacks flexibility and scalability. In this paper, we propose a service-workflow-oriented framework for the process simulation of service businesses using multi-agent cooperation to address the above issues. The social rationality of agents is introduced into the proposed framework. Adopting rationality as one social factor in decision-making strategies, flexible scheduling of activity instances has been implemented. A system prototype has been developed to validate the proposed simulation framework through a business case study.
Console, Rodolfo; Nardi, Anna; Carluccio, Roberto; Murru, Maura; Falcone, Giuseppe; Parsons, Thomas E.
2017-01-01
The use of a newly developed earthquake simulator has allowed the production of catalogs lasting 100 kyr and containing more than 100,000 events of magnitudes ≥4.5. The model of the fault system upon which we applied the simulator code was obtained from the DISS 3.2.0 database, selecting all the faults that are recognized in the Calabria region, for a total of 22 fault segments. The application of our simulation algorithm provides typical features in the time, space and magnitude behavior of the seismicity, which can be compared with those of real observations. The results of the physics-based simulator algorithm were compared with those obtained by an alternative method using a slip-rate-balanced technique. Finally, as an example of a possible use of synthetic catalogs, an attenuation law has been applied to all the events reported in the synthetic catalog to produce maps showing the exceedance probability of given values of PGA over the territory under investigation.
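The final step, turning a synthetic catalog into exceedance probabilities via an attenuation law, can be sketched as follows. The attenuation form, its coefficients and the catalog statistics are illustrative stand-ins, not those used by Console et al.

```python
import numpy as np

rng = np.random.default_rng(1)
mags = 4.5 + rng.exponential(0.5, 100_000)   # synthetic catalog magnitudes (assumed)
dists = rng.uniform(5, 120, 100_000)         # epicentral distances to one site (km)

# Illustrative attenuation law: ln(PGA[g]) = a + b*M - c*ln(R + r0)
ln_pga = -3.5 + 0.9 * mags - 1.6 * np.log(dists + 10.0)
pga = np.exp(ln_pga)

years = 100_000                              # catalog length (100 kyr)
for level in (0.1, 0.2, 0.3):
    rate = np.sum(pga > level) / years       # exceedances per year at this site
    p50 = 1 - np.exp(-rate * 50)             # Poisson probability over 50 years
    print(f"PGA > {level:.1f} g: {p50:.2%} in 50 yr")
```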
Scalable File Systems for High Performance Computing Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, S A
2007-10-03
Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LSDYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent the interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two-element model before being employed to perform the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LSDYNA duplicated the near steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent the bond failures seen during penetration events was not established.
A spatial DB model to simulate the road network efficiency in hydrogeological emergency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michele, Mangiameli, E-mail: michele.mangiameli@dica.unict.it; Giuseppe, Mussumeci
We deal with the theme of the simulation of risk analysis using a technological approach based on the integration of exclusively free and open source tools: PostgreSQL as the Database Management System (DBMS) and Quantum GIS-GRASS as the Geographic Information System (GIS) platform. The case study is a seismic area in Sicily characterized by steep slopes and frequent instability phenomena. This area includes a city of about 30,000 inhabitants (Enna) that lies on the top of a mountain at about 990 m a.s.l. Access to the city is assured by few and very winding roads that are also highly vulnerable to seismic and hydrogeological hazards. When exceptional rainfall events occur, the loss of efficiency of these roads could compromise the timeliness and effectiveness of rescue operations. The data of the sample area have been structured in the adopted DBMS, and the connection to the GIS functionalities allows simulating the exceptional events. We analyzed the hazard, vulnerability and exposure related to these events and calculated the final risk, defining three classes for each scenario: low (L), medium (M) and high (H). This study can be a valuable tool to prioritize risk levels and set priorities for intervention on the main road networks.
A spatial DB model to simulate the road network efficiency in hydrogeological emergency
NASA Astrophysics Data System (ADS)
Michele, Mangiameli; Giuseppe, Mussumeci
2015-12-01
We deal with the theme of the simulation of risk analysis using a technological approach based on the integration of exclusively free and open source tools: PostgreSQL as the Database Management System (DBMS) and Quantum GIS-GRASS as the Geographic Information System (GIS) platform. The case study is a seismic area in Sicily characterized by steep slopes and frequent instability phenomena. This area includes a city of about 30,000 inhabitants (Enna) that lies on the top of a mountain at about 990 m a.s.l. Access to the city is assured by few and very winding roads that are also highly vulnerable to seismic and hydrogeological hazards. When exceptional rainfall events occur, the loss of efficiency of these roads could compromise the timeliness and effectiveness of rescue operations. The data of the sample area have been structured in the adopted DBMS, and the connection to the GIS functionalities allows simulating the exceptional events. We analyzed the hazard, vulnerability and exposure related to these events and calculated the final risk, defining three classes for each scenario: low (L), medium (M) and high (H). This study can be a valuable tool to prioritize risk levels and set priorities for intervention on the main road networks.
Hsu, Ling-Yuan; Chen, Tsung-Lin
2012-11-13
This paper presents a vehicle dynamics prediction system, which consists of a sensor fusion system and a vehicle parameter identification system. The sensor fusion system can obtain the six degree-of-freedom vehicle dynamics and two road angles without using a vehicle model. The vehicle parameter identification system uses the vehicle dynamics from the sensor fusion system to identify ten vehicle parameters in real time, including vehicle mass, moment of inertia, and road friction coefficients. With the above two systems, the future vehicle dynamics is predicted by using a vehicle dynamics model, obtained from the parameter identification system, to propagate with time the current vehicle state values, obtained from the sensor fusion system. Compared with most existing literature in this field, the proposed approach improves the prediction accuracy both by incorporating more vehicle dynamics into the prediction system and by on-line identification to minimize the vehicle modeling errors. Simulation results show that the proposed method successfully predicts the vehicle dynamics in a left-hand turn event and a rollover event. The prediction inaccuracy is 0.51% in a left-hand turn event and 27.3% in a rollover event.
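The prediction step, propagating current states forward with an identified model, can be illustrated with a generic linear two-state lateral dynamics sketch. The coefficients below merely stand in for the outputs of an identification stage; they are not the paper's ten parameters or its model structure.

```python
import numpy as np

def predict(state, params, steer, dt=0.01, horizon=1.0):
    """Propagate [yaw_rate, lateral_v] forward with a linear 2-DOF model whose
    coefficients (from a hypothetical identification step) are in `params`."""
    a11, a12, a21, a22, b1, b2 = params
    x = np.array(state, dtype=float)
    for _ in range(int(horizon / dt)):
        dx = np.array([a11 * x[0] + a12 * x[1] + b1 * steer,
                       a21 * x[0] + a22 * x[1] + b2 * steer])
        x += dt * dx                      # forward-Euler time propagation
    return x

params = (-4.0, -0.5, -1.0, -6.0, 12.0, 2.0)   # illustrative identified values
print(predict(state=[0.0, 0.0], params=params, steer=0.05))
```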
Hsu, Ling-Yuan; Chen, Tsung-Lin
2012-01-01
This paper presents a vehicle dynamics prediction system, which consists of a sensor fusion system and a vehicle parameter identification system. The sensor fusion system can obtain the six degree-of-freedom vehicle dynamics and two road angles without using a vehicle model. The vehicle parameter identification system uses the vehicle dynamics from the sensor fusion system to identify ten vehicle parameters in real time, including vehicle mass, moment of inertia, and road friction coefficients. With the above two systems, the future vehicle dynamics is predicted by using a vehicle dynamics model, obtained from the parameter identification system, to propagate with time the current vehicle state values, obtained from the sensor fusion system. Compared with most existing literature in this field, the proposed approach improves the prediction accuracy both by incorporating more vehicle dynamics into the prediction system and by on-line identification to minimize the vehicle modeling errors. Simulation results show that the proposed method successfully predicts the vehicle dynamics in a left-hand turn event and a rollover event. The prediction inaccuracy is 0.51% in a left-hand turn event and 27.3% in a rollover event. PMID:23202231
Program For Simulation Of Trajectories And Events
NASA Technical Reports Server (NTRS)
Gottlieb, Robert G.
1992-01-01
Universal Simulation Executive (USE) program accelerates and eases generation of application programs for numerical simulation of continuous trajectories interrupted by or containing discrete events. Developed for simulation of multiple spacecraft trajectories, with events such as one spacecraft crossing the equator, two spacecraft meeting or parting, or the firing of a rocket engine. USE also simulates the operation of a chemical batch-processing factory. Written in Ada.
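Simulating a continuous trajectory interrupted by discrete events typically requires locating the time at which an event function changes sign. The sketch below shows a standard bisection approach, with a toy out-of-plane coordinate standing in for a real trajectory; it illustrates the general technique, not USE's Ada implementation.

```python
import math

def locate_event(f, t0, t1, tol=1e-9):
    """Bisect for the time at which event function f changes sign in [t0, t1],
    e.g. f(t) = z(t) for an equator crossing."""
    assert f(t0) * f(t1) <= 0.0, "interval must bracket a sign change"
    while t1 - t0 > tol:
        tm = 0.5 * (t0 + t1)
        if f(t0) * f(tm) <= 0.0:   # sign change in the left half
            t1 = tm
        else:                      # otherwise it is in the right half
            t0 = tm
    return 0.5 * (t0 + t1)

z = lambda t: math.sin(0.1 * t + 0.3)   # toy out-of-plane coordinate
print("crossing at t =", locate_event(z, 20.0, 40.0))
```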
NASA Astrophysics Data System (ADS)
Sun, Ying; Ding, Derui; Zhang, Sunjie; Wei, Guoliang; Liu, Hongjian
2018-07-01
In this paper, the non-fragile $l_{2}$-$l_{\infty}$ control problem is investigated for a class of discrete-time stochastic nonlinear systems under event-triggered communication protocols, which determine whether the measurement output should be transmitted to the controller or not. The main purpose of the addressed problem is to design an event-based output feedback controller, subject to gain variations, guaranteeing the prescribed disturbance attenuation level described by the $l_{2}$-$l_{\infty}$ performance index. By utilizing the Lyapunov stability theory combined with the S-procedure, a sufficient condition is established to guarantee both the exponential mean-square stability and the $l_{2}$-$l_{\infty}$ performance for the closed-loop system. In addition, with the help of the orthogonal decomposition, the desired controller parameter is obtained in terms of the solution to certain linear matrix inequalities. Finally, a simulation example is exploited to demonstrate the effectiveness of the proposed event-based controller design scheme.
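The event-triggered communication idea, transmitting a measurement only when it has drifted sufficiently from the last transmitted value, can be shown in a few lines. The absolute-error trigger and all constants below are generic illustrations of the mechanism, not the paper's exact triggering condition.

```python
import numpy as np

def run(steps=50, sigma=0.05, seed=7):
    """Event-triggered transmission: the sensor sends its measurement only
    when it deviates from the last transmitted value by more than a threshold."""
    rng = np.random.default_rng(seed)
    y, y_sent, sent = 0.0, 0.0, 0
    for _ in range(steps):
        y = 0.95 * y + rng.normal(0.0, sigma)   # toy measurement process
        if abs(y - y_sent) > 0.1:               # triggering condition (assumed)
            y_sent, sent = y, sent + 1          # transmit to the controller
    return sent, steps

sent, steps = run()
print(f"transmitted {sent}/{steps} samples")
```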
NASA Astrophysics Data System (ADS)
Schmid, P. E.; Niyogi, D.
2012-12-01
The Indianapolis region exhibits a precipitation distribution indicative of urban weather modification: negative bias upwind and positive bias downwind. The causes for such a distribution within an urban area arise from a combination of land-surface heterogeneity and urban aerosol-cloud interaction. This study investigates the causes of the precipitation distribution with a 120-day simulation using the Regional Atmospheric Modeling System (RAMS) coupled with the Town Energy Budget (TEB) model. Using a nested grid with a maximum resolution of 500 m, a seasonal simulation of May through August 2008 is conducted. Land surface conditions are varied, removing, expanding, and intensifying the Indianapolis urban area. Aerosol conditions are scaled by a three-dimensional combination of MODIS and CALIPSO observations, and varied in concentration and plume extent. Results from the study demonstrate the paradigm of urban precipitation modification on a seasonal time scale. The boundary between the rural and urban land surfaces weakens approaching systems upwind, decreasing precipitation in the city center. A larger urban extent diminishes the systems further. The aerosol plume downwind increases cloud lifetimes via cloud-nucleating aerosol, then invigorates precipitation via large drizzle-invigorating aerosols. The overall effect reproduces the observed negative precipitation bias upwind and positive bias downwind of the urban center. A lower concentration of aerosols leads to a higher proportion of stratiform rain over a larger area, whereas a higher concentration of aerosols leads to more convective rain and heavy rain events. This manifests in a weekly cycle of precipitation with rain most likely on weekends, and with less frequent but heavier rain events most likely during midweek, when aerosol concentrations are the highest. More intense urbanization, via both land surface and aerosol effects, creates more frequent heavy rainfall events and exacerbates dry periods, potentially leading to premature drought onset. The wetter-than-average May, June, and July received more total rainfall from the heavy rainfall events, while the dry August became drier due to the lack of stratiform precipitation. Smart planning solutions can partially mitigate the urban precipitation problem. In a simulation where a more intense urban Indianapolis is surrounded by a greenbelt and green roofs are implemented in the city, the urban precipitation bias becomes less significant. Upwind, the greenbelt provides surface moisture and mitigates how much precipitation systems weaken. Downwind, the greenbelt slows the transport of drizzle-invigorating aerosol, reducing the heavy rain events. The green roofs reduce the urban-rural gradient and slow the initial weakening of systems.
NASA Astrophysics Data System (ADS)
Michnovicz, Michael R.
1997-06-01
A real-time executive has been implemented to control a high altitude pointing and tracking experiment. The track and mode controller (TMC) implements a table-driven design, in which the track mode logic for a tracking mission is defined within a state transition diagram (STD). The STD is implemented as a state transition table in the TMC software. Status events trigger the state transitions in the STD. Each state, as it is entered, causes a number of processes to be activated within the system. As these processes propagate through the system, the status of key processes is monitored by the TMC, allowing further transitions within the STD. This architecture is implemented in real time, using the VxWorks operating system. VxWorks message queues allow communication of status events from the Event Monitor task to the STD task. Process commands are propagated to the rest of the system processors by means of the SCRAMNet shared memory network. The system mode logic contained in the STD will autonomously sequence an acquisition, tracking and pointing system through an entire engagement sequence, starting with target detection and ending with aimpoint maintenance. Simulation results and lab test results will be presented to verify the mode controller. In addition to implementing the system mode logic with the STD, the TMC can process prerecorded time sequences of commands required during startup operations. It can also process single commands from the system operator. In this paper, the author presents (1) an overview, in which he describes the TMC architecture, the relationship of an end-to-end simulation to the flight software and the laboratory testing environment, (2) implementation details, including information on the VxWorks message queues and the SCRAMNet shared memory network, (3) simulation results and lab test results which verify the mode controller, and (4) plans for the future, specifically as to how this executive will expedite the transition to a fully functional system.
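A table-driven mode controller of the kind described reduces to a lookup from (state, status event) to next state. The states and events in this sketch are hypothetical examples, not the experiment's actual modes.

```python
# Hypothetical state transition table: {(state, status_event): next_state}
STD = {
    ("IDLE",    "target_detected"): "ACQUIRE",
    ("ACQUIRE", "track_locked"):    "TRACK",
    ("TRACK",   "aimpoint_set"):    "POINT",
    ("TRACK",   "track_lost"):      "ACQUIRE",
    ("POINT",   "engagement_done"): "IDLE",
}

def step(state, event):
    """Table-driven mode logic: look up (state, event); ignore unknown events."""
    return STD.get((state, event), state)

state = "IDLE"
for ev in ["target_detected", "track_locked", "track_lost",
           "track_locked", "aimpoint_set", "engagement_done"]:
    state = step(state, ev)
    print(f"{ev:16s} -> {state}")
```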
Automatic programming of simulation models
NASA Technical Reports Server (NTRS)
Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.
1988-01-01
The objective of automatic programming is to improve the overall environment for describing the program. This improved environment is realized by a reduction in the amount of detail that the programmer needs to know and is exposed to. Furthermore, this improved environment is achieved by a specification language that is more natural to the user's problem domain and to the user's way of thinking and looking at the problem. The goal of this research is to apply the concepts of automatic programming (AP) to modeling discrete event simulation systems. Specific emphasis is on the design and development of simulation tools to assist the modeler in defining or constructing a model of the system and then automatically writing the corresponding simulation code in the target simulation language, GPSS/PC. A related goal is to evaluate the feasibility of various languages for constructing automatic programming simulation tools.
Alexander Hegedus Lightning Talk: Integrating Measurements to Optimize Space Weather Strategies
NASA Astrophysics Data System (ADS)
Hegedus, A. M.
2017-12-01
Alexander Hegedus is a PhD Candidate at the University of Michigan, and won an Outstanding Student Paper Award at the AGU 2016 Fall Meeting for his poster "Simulating 3D Spacecraft Constellations for Low Frequency Radio Imaging." In this short talk, Alex outlines his current research on analyzing data from both real and simulated instruments to answer heliophysics questions. He then sketches out future plans to simulate science pipelines in a real-time data assimilation model that uses a Bayesian framework to integrate information from different instruments to determine the efficacy of future Space Weather Alert systems. MHD simulations made with Michigan's own Space Weather Model Framework will provide input to simulated instruments, acting as an Observing System Simulation Experiment to verify that a certain set of measurements can accurately predict different classes of Space Weather events.
Improving the performance of a filling line based on simulation
NASA Astrophysics Data System (ADS)
Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.
2016-08-01
The paper describes a method of improving the performance of a filling line based on simulation. The study concerns a production line located in a manufacturing centre of an FMCG company. A discrete event simulation model was built using data provided by the maintenance data acquisition system. Two types of failures were identified in the system and were approximated using continuous statistical distributions. The model was validated taking into consideration line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation. The outcomes of the simulations formed the basis of a financial analysis. NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.
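The NPV computation mentioned above is a plain discounted sum. The sketch below uses invented cash flows, with a single discount rate standing in for the inflation, depreciation and CIT adjustments the study describes.

```python
def npv(cashflows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

investment = -120_000                  # assumed cost of the improvement scenario
yearly_gain = 45_000                   # assumed yearly gain from simulated throughput
flows = [investment] + [yearly_gain] * 5
rate = 0.08                            # assumed discount rate

print(f"NPV = {npv(flows, rate):,.0f}")
print(f"ROI = {sum(flows) / -investment:.1%}")   # net gain over investment
```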
Parallel discrete event simulation using shared memory
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1988-01-01
With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments, using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues, is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
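The essence of the Chandy-Misra scheme, processing only events proved safe by input-channel clocks and exchanging null messages that carry a lookahead guarantee, can be caricatured with two logical processes emulated sequentially. This is a toy under stated assumptions (one circulating job, fixed lookahead), not the authors' shared-memory implementation.

```python
import heapq

LOOKAHEAD = 1.0   # each LP promises not to send with a timestamp < clock + LOOKAHEAD

def run(until=10.0):
    """Toy two-LP Chandy-Misra-style run: each LP processes only events with
    timestamps covered by its input channel clock; null messages propagate
    the lookahead guarantee so neither LP waits forever."""
    pending = {0: [(0.0, "job")], 1: []}   # per-LP timestamped event heaps
    chan = {0: 0.0, 1: 0.0}                # clock seen on each LP's input channel
    clock = {0: 0.0, 1: 0.0}
    while min(clock.values()) < until:
        for lp in (0, 1):
            other = 1 - lp
            # process every event that is provably safe (<= input channel clock)
            while pending[lp] and pending[lp][0][0] <= chan[lp]:
                t, kind = heapq.heappop(pending[lp])
                clock[lp] = t
                if kind == "job":          # forward the work to the other LP
                    heapq.heappush(pending[other], (t + LOOKAHEAD, "job"))
                    chan[other] = max(chan[other], t + LOOKAHEAD)
            # null message: promise nothing earlier than clock + lookahead
            chan[other] = max(chan[other], clock[lp] + LOOKAHEAD)
            clock[lp] = max(clock[lp], min(chan[lp], until))
    print("final clocks:", clock)

run()
```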
Realistic training scenario simulations and simulation techniques
Dunlop, William H.; Koncher, Tawny R.; Luke, Stanley John; Sweeney, Jerry Joseph; White, Gregory K.
2017-12-05
In one embodiment, a system includes a signal generator operatively coupleable to one or more detectors; and a controller, the controller being both operably coupled to the signal generator and configured to cause the signal generator to: generate one or more signals each signal being representative of at least one emergency event; and communicate one or more of the generated signal(s) to a detector to which the signal generator is operably coupled. In another embodiment, a method includes: receiving data corresponding to one or more emergency events; generating at least one signal based on the data; and communicating the generated signal(s) to a detector.
NASA Astrophysics Data System (ADS)
Buhari, Abudhahir; Zukarnain, Zuriati Ahmad; Khalid, Roszelinda; Zakir Dato', Wira Jaafar Ahmad
2016-11-01
The applications of quantum information science are reaching ever greater heights in next-generation technology. Especially in the fields of quantum cryptography and quantum computation, the world has already witnessed various ground-breaking tangible products and promising results. Quantum cryptography is one of the mature fields of quantum mechanics, and products are already available in the market. Nevertheless, quantum cryptography is still under active research as it works towards the maturity of digital cryptography. The complexity of quantum cryptography is higher due to the combination of hardware and software. The lack of an effective simulation tool to design and analyze quantum cryptography experiments delays progress in the field. In this paper, we propose a framework for an effective non-entanglement-based quantum cryptography simulation tool. We applied a hybrid simulation technique, i.e. discrete event, continuous event and system dynamics. We also highlight the limitations of experiments based on a commercial photonic simulation tool. Finally, we discuss ideas for achieving a one-stop simulation package for quantum-based secure key distribution experiments. All the modules of the simulation framework are viewed from the computer science perspective.
NASA Technical Reports Server (NTRS)
Rockwell, T. H.; Griffin, W. C.
1981-01-01
Critical in-flight events (CIFE) that threaten the aircraft were studied. The scope of the CIFE was described and defined with emphasis on characterizing event development, detection and assessment; pilot information requirements, sources, acquisition, and interpretation; pilot response options, decision processes, and decision implementation; and event outcome. Detailed scenarios were developed for use in simulators and in paper-and-pencil testing, both for developing relationships between pilot performance and background information and for an analysis of pilot reaction, decision and feedback processes. Statistical relationships among pilot characteristics and observed responses to CIFEs were developed.
Scientific Benefits of Space Science Models Archiving at Community Coordinated Modeling Center
NASA Technical Reports Server (NTRS)
Kuznetsova, Maria M.; Berrios, David; Chulaki, Anna; Hesse, Michael; MacNeice, Peter J.; Maddox, Marlo M.; Pulkkinen, Antti; Rastaetter, Lutz; Taktakishvili, Aleksandre
2009-01-01
The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. CCMC provides a web-based Run-on-Request system, by which an interested scientist can request simulations for a broad range of space science problems. To allow the models to be driven by data relevant to particular events, CCMC developed a tool that automatically downloads data from data archives and transforms them into the required formats. CCMC also provides a tailored web-based visualization interface for the model output, as well as the capability to download the simulation output in a portable format. CCMC offers a variety of visualization and output analysis tools to aid scientists in the interpretation of simulation results. During the eight years since the Run-on-Request system became available, the CCMC has archived the results of almost 3000 runs covering significant space weather events and time intervals of interest identified by the community. The simulation results archived at CCMC also include a library of general-purpose runs with modeled conditions that are used for education and research. Archiving the results of simulations performed in support of several Modeling Challenges helps to evaluate the progress in space weather modeling over time. We will highlight the scientific benefits of the CCMC space science model archive and discuss plans for further development of advanced methods to interact with simulation results.
Discrete event simulation modelling of patient service management with Arena
NASA Astrophysics Data System (ADS)
Guseva, Elena; Varfolomeyeva, Tatyana; Efimova, Irina; Movchan, Irina
2018-05-01
This paper describes a simulation modeling methodology intended to aid in solving practical problems of researching and analysing complex systems. The paper gives a review of simulation platforms and an example of simulation model development with Arena 15.0 (Rockwell Automation). The provided example of a simulation model for patient service management helps to evaluate the workload of the clinic doctors; determine the number of general practitioners, surgeons, traumatologists and other specialized doctors required for patient service; and develop recommendations to ensure timely delivery of medical care and improve the efficiency of the clinic operation.
Impacts of storm chronology on the morphological changes of the Formby beach and dune system, UK
NASA Astrophysics Data System (ADS)
Dissanayake, P.; Brown, J.; Karunarathna, H.
2015-07-01
Impacts of storm chronology within a storm cluster on beach/dune erosion are investigated by applying the state-of-the-art numerical model XBeach to the Sefton coast, northwest England. Six temporal storm clusters of different storm chronologies were formulated using three storms observed during the 2013/2014 winter. The storm power values of these three events nearly halve from the first to second event and from the second to third event. Cross-shore profile evolution was simulated in response to the tide, surge and wave forcing during these storms. The model was first calibrated against the available post-storm survey profiles. Cumulative impacts of beach/dune erosion during each storm cluster were simulated by using the post-storm profile of an event as the pre-storm profile for each subsequent event. For the largest event the water levels caused noticeable retreat of the dune toe due to the high water elevation. For the other events the greatest evolution occurs over the bar formations (erosion) and within the corresponding troughs (deposition) of the upper-beach profile. The sequence of events impacting the size of this ridge-runnel feature is important as it consequently changes the resilience of the system to the most extreme event that causes dune retreat. The highest erosion during each single storm event was always observed when that storm initialised the storm cluster. The most severe storm always resulted in the most erosion during each cluster, no matter when it occurred within the chronology, although the erosion volume due to this storm was reduced when it was not the primary event. The greatest cumulative cluster erosion occurred with increasing storm severity; however, the variability in cumulative cluster impact over a beach/dune cross section due to storm chronology is minimal. Initial storm impact can act to enhance or reduce the system resilience to subsequent impact, but overall the cumulative impact is controlled by the magnitude and number of the storms. This model application provides inter-survey information about morphological response to repeated storm impact. This will inform local managers of the potential beach response and dune vulnerability to variable storm configurations.
On Mixed Data and Event Driven Design for Adaptive-Critic-Based Nonlinear $H_{\infty}$ Control.
Wang, Ding; Mu, Chaoxu; Liu, Derong; Ma, Hongwen
2018-04-01
In this paper, based on the adaptive critic learning technique, the control for a class of unknown nonlinear dynamic systems is investigated by adopting a mixed data- and event-driven design approach. The nonlinear control problem is formulated as a two-player zero-sum differential game, and the adaptive critic method is employed to cope with the data-based optimization. The novelty lies in combining the data-driven learning identifier with the event-driven design formulation in order to develop the adaptive critic controller, thereby accomplishing the nonlinear control. The event-driven optimal control law and the time-driven worst-case disturbance law are approximated by constructing and tuning a critic neural network. With the event-driven feedback control applied, the closed-loop system is built and its stability analyzed. Simulation studies are conducted to verify the theoretical results and illustrate the control performance. It is significant to observe that the present research provides a new avenue for integrating data-based control and event-triggering mechanisms into the establishment of advanced adaptive critic systems.
A simulation framework for mapping risks in clinical processes: the case of in-patient transfers.
Dunn, Adam G; Ong, Mei-Sing; Westbrook, Johanna I; Magrabi, Farah; Coiera, Enrico; Wobcke, Wayne
2011-05-01
To model how individual violations in routine clinical processes cumulatively contribute to the risk of adverse events in hospital using an agent-based simulation framework. An agent-based simulation was designed to model the cascade of common violations that contribute to the risk of adverse events in routine clinical processes. Clinicians and the information systems that support them were represented as a group of interacting agents using data from direct observations. The model was calibrated using data from 101 patient transfers observed in a hospital and results were validated for one of two scenarios (a misidentification scenario and an infection control scenario). Repeated simulations using the calibrated model were undertaken to create a distribution of possible process outcomes. The likelihood of end-of-chain risk is the main outcome measure, reported for each of the two scenarios. The simulations demonstrate end-of-chain risks of 8% and 24% for the misidentification and infection control scenarios, respectively. Over 95% of the simulations in both scenarios are unique, indicating that the in-patient transfer process diverges from prescribed work practices in a variety of ways. The simulation allowed us to model the risk of adverse events in a clinical process, by generating the variety of possible work subject to violations, a novel prospective risk analysis method. The in-patient transfer process has a high proportion of unique trajectories, implying that risk mitigation may benefit from focusing on reducing complexity rather than augmenting the process with further rule-based protocols.
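As a rough illustration of the approach (not the authors' calibrated agent-based model), a Monte Carlo sketch in Python shows how small per-step violation probabilities compound into an end-of-chain risk; all step names, probabilities and the accumulation threshold below are hypothetical:

```python
import random

rng = random.Random(1)

# Hypothetical transfer steps and per-step violation probabilities.
STEPS = {"verify_identity": 0.10, "hand_hygiene": 0.30,
         "handover_notes": 0.20, "update_chart": 0.15}

def simulate_transfer():
    """One in-patient transfer: return the tuple of steps that were violated."""
    return tuple(step for step, p in STEPS.items() if rng.random() < p)

def end_of_chain_risk(n=100_000, threshold=2):
    """Fraction of transfers where violations accumulate past a threshold.

    Assumption: an adverse event becomes possible once `threshold` or more
    violations occur within the same transfer.
    """
    return sum(len(simulate_transfer()) >= threshold for _ in range(n)) / n

print(f"estimated end-of-chain risk: {end_of_chain_risk():.1%}")
```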
Topological events in single molecules of E. coli DNA confined in nanochannels
Reifenberger, Jeffrey G.; Dorfman, Kevin D.; Cao, Han
2015-01-01
We present experimental data concerning potential topological events such as folds, internal backfolds, and/or knots within long molecules of double-stranded DNA when they are stretched by confinement in a nanochannel. Genomic DNA from E. coli was labeled near the ‘GCTCTTC’ sequence with a fluorescently labeled dUTP analog and stained with the DNA intercalator YOYO. Individual long molecules of DNA were then linearized and imaged using methods based on the NanoChannel Array technology (Irys® System) available from BioNano Genomics. Data were collected on 189,153 molecules of length greater than 50 kilobases. A custom code was developed to search for abnormal intensity spikes in the YOYO backbone profile along the length of individual molecules. By correlating the YOYO intensity spikes with the aligned barcode pattern to the reference, we were able to correlate the bright intensity regions of YOYO with abnormal stretching in the molecule, which suggests these events were either a knot or a region of internal backfolding within the DNA. We interpret the results of our experiments involving molecules exceeding 50 kilobases in the context of existing simulation data for relatively short DNA, typically several kilobases. The frequency of these events is lower than the predictions from simulations, while the size of the events is larger than simulation predictions and often exceeds the molecular weight of the simulated molecules. We also identified DNA molecules that exhibit large, single folds as they enter the nanochannels. Overall, topological events occur at a low frequency (~7% of all molecules) and pose an easily surmountable obstacle for the practice of genome mapping in nanochannels. PMID:25991508
Comparison of spatial interpolation of rainfall with emphasis on extreme events
NASA Astrophysics Data System (ADS)
Amin, Kanwal; Duan, Zheng; Disse, Markus
2017-04-01
The sparse network of rain gauges has long motivated scientists to find more robust ways to include the spatial variability of precipitation. Turning Bands Simulation, External Drift Kriging, Copula and Random Mixing are among them. Remote sensing technologies, i.e., radar and satellite estimates, are widely known to provide a spatial profile of the precipitation; however, during extreme events the accuracy of the resulting areal precipitation is still under discussion. The aim is to compare the areal hourly precipitation results of a flood event from RADOLAN (Radar online adjustment) with the gridded rainfall obtained via the Turning Bands Simulation (TBM) and Inverse Distance Weighting (IDW) methods. The comparison is mainly focused on performing an uncertainty analysis of the areal precipitation through the said simulation and remote sensing techniques for the Upper Main Catchment. The results obtained from TBM, IDW and RADOLAN are considerably similar near the rain gauge stations, but the degree of ambiguity grows with increasing distance from the gauge stations. Future research will be carried out to compare the forecasted gridded precipitation simulations with the real-time rainfall forecast system (RADVOR) to make the flood evacuation process more robust and efficient.
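Of the methods compared, IDW is the simplest to state: the estimate at an ungauged point is a distance-weighted mean of gauge values, with weights $w_i = d_i^{-p}$. A minimal NumPy sketch (gauge coordinates and rainfall values are invented):

```python
import numpy as np

def idw(xy_gauges, values, xy_targets, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: estimate rainfall at target points
    as a distance-weighted mean of gauge observations."""
    d = np.linalg.norm(xy_targets[:, None, :] - xy_gauges[None, :, :], axis=-1)
    w = 1.0 / (d + eps) ** power           # closer gauges get larger weights
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Hypothetical gauges (x, y in km) and hourly rainfall (mm)
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
rain = np.array([12.0, 4.0, 20.0])
grid = np.array([[2.0, 1.0], [7.0, 5.0]])
print(idw(gauges, rain, grid))
```

The `eps` term guards against division by zero when a target point coincides with a gauge; the growing uncertainty far from the gauges noted in the abstract is exactly where these weights become nearly uniform and uninformative.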
NASA Astrophysics Data System (ADS)
Kusangaya, Samuel; Warburton Toucher, Michele L.; van Garderen, Emma Archer
2018-02-01
Output from downscaled General Circulation Models (GCMs) is used to forecast climate change and provides information used as input for hydrological modelling. Given that our understanding of climate change points towards an increasing frequency, timing and intensity of extreme hydrological events, there is a need to assess the ability of downscaled GCMs to capture these extreme hydrological events. Extreme hydrological events play a significant role in regulating the structure and function of rivers and associated ecosystems. In this study, the Indicators of Hydrologic Alteration (IHA) method was adapted to assess the ability of streamflow simulated using downscaled GCMs (dGCMs) to capture extreme river dynamics (high and low flows), as compared to streamflow simulated using historical climate data from 1960 to 2000. The ACRU hydrological model was used for simulating streamflow for the 13 water management units of the uMngeni Catchment, South Africa. Statistically downscaled climate models obtained from the Climate System Analysis Group at the University of Cape Town were used as input for the ACRU model. Results indicated that high flows and extreme high flows (one-in-ten-year high flows/large flood events) were poorly represented in terms of timing, frequency and magnitude. Simulated streamflow using dGCM data also captures more low flows and extreme low flows (one-in-ten-year lowest flows) than streamflow simulated using historical climate data. The overall conclusion was that although dGCM output can reasonably be used to simulate overall streamflow, it performs poorly when simulating extreme high and low flows. Streamflow simulation from dGCMs must thus be used with caution in hydrological applications, particularly for design hydrology, as extreme high and low flows are still poorly represented. This arguably calls for further improvement of downscaling techniques in order to generate climate data more relevant and useful for hydrological applications such as design hydrology. Nevertheless, the availability of downscaled climatic output provides the potential to explore climate model uncertainties in different hydro-climatic regions at local scales, where forcing data are often less accessible, but with more accuracy and adequate spatial detail at finer spatial scales.
Integrated Turbine-Based Combined Cycle Dynamic Simulation Model
NASA Technical Reports Server (NTRS)
Haid, Daniel A.; Gamble, Eric J.
2011-01-01
A Turbine-Based Combined Cycle (TBCC) dynamic simulation model has been developed to demonstrate all modes of operation, including mode transition, for a turbine-based combined cycle propulsion system. The High Mach Transient Engine Cycle Code (HiTECC) is a highly integrated tool comprised of modules for modeling each of the TBCC systems whose interactions and controllability affect the TBCC propulsion system thrust and operability during its modes of operation. By structuring the simulation modeling tools around the major TBCC functional modes of operation (Dry Turbojet, Afterburning Turbojet, Transition, and Dual Mode Scramjet) the TBCC mode transition and all necessary intermediate events over its entire mission may be developed, modeled, and validated. The reported work details the use of the completed model to simulate a TBCC propulsion system as it accelerates from Mach 2.5, through mode transition, to Mach 7. The completion of this model and its subsequent use to simulate TBCC mode transition significantly extends the state-of-the-art for all TBCC modes of operation by providing a numerical simulation of the systems, interactions, and transient responses affecting the ability of the propulsion system to transition from turbine-based to ramjet/scramjet-based propulsion while maintaining constant thrust.
Hydrologic modeling of two glaciated watersheds in Northeast Pennsylvania
Srinivasan, M.S.; Hamlett, J.M.; Day, R.L.; Sams, J.I.; Petersen, G.W.
1998-01-01
A hydrologic modeling study, using the Hydrologic Simulation Program - FORTRAN (HSPF), was conducted in two glaciated watersheds, Purdy Creek and Ariel Creek in northeastern Pennsylvania. Both watersheds have wetlands and poorly drained soils due to low hydraulic conductivity and the presence of fragipans. The HSPF model was calibrated in the Purdy Creek watershed and verified in the Ariel Creek watershed for the June 1992 to December 1993 period. In Purdy Creek, the total volume of observed streamflow during the entire simulation period was 13.36 x 10^6 m^3 and the simulated streamflow volume was 13.82 x 10^6 m^3 (5 percent difference). For the verification simulation in Ariel Creek, the difference between the total observed and simulated flow volumes was 17 percent. Simulated peak flow discharges were within two hours of the observed for 30 of 46 peak flow events (discharge greater than 0.1 m^3/sec) in Purdy Creek and 27 of 53 events in Ariel Creek. For 22 of the 46 events in Purdy Creek and 24 of 53 in Ariel Creek, the differences between the observed and simulated peak discharge rates were less than 30 percent. These 22 events accounted for 63 percent of the total volume of streamflow observed during the selected 46 peak flow events in Purdy Creek. In Ariel Creek, these 24 peak flow events accounted for 62 percent of the total flow observed during all peak flow events. Differences in observed and simulated peak flow rates and volumes (on a percent basis) were greater during snowmelt runoff events and summer periods than at other times.
Simulating flaring events in complex active regions driven by observed magnetograms
NASA Astrophysics Data System (ADS)
Dimitropoulou, M.; Isliker, H.; Vlahos, L.; Georgoulis, M. K.
2011-05-01
Context. We interpret solar flares as events originating in active regions that have reached the self-organized critical state, by using a refined cellular automaton model with initial conditions derived from observations. Aims: We investigate whether the system, with its imposed physical elements, reaches a self-organized critical state and whether well-known statistical properties of flares, such as scaling laws observed in the distribution functions of characteristic parameters, are reproduced after this state has been reached. Methods: To investigate whether the distribution functions of total energy, peak energy and event duration follow the expected scaling laws, we first applied a nonlinear force-free extrapolation that reconstructs the three-dimensional magnetic fields from two-dimensional vector magnetograms. We then locate magnetic discontinuities exceeding a threshold in the Laplacian of the magnetic field. These discontinuities are relaxed in local diffusion events, implemented in the form of cellular automaton evolution rules. Subsequent loading and relaxation steps lead the system to self-organized criticality, after which the statistical properties of the simulated events are examined. Physical requirements, such as the divergence-free condition for the magnetic field vector, are approximately imposed on all elements of the model. Results: Our results show that self-organized criticality is indeed reached when applying specific loading and relaxation rules. Power-law indices obtained from the distribution functions of the modeled flaring events are in good agreement with observations. Single power laws (peak and total flare energy) are obtained, as are power laws with exponential cutoff and double power laws (flare duration). The results are also compared with observational X-ray data from the GOES satellite for our active-region sample. Conclusions: We conclude that well-known statistical properties of flares are reproduced after the system has reached self-organized criticality. A significant enhancement of our refined cellular automaton model is that it commences the simulation from observed vector magnetograms, thus facilitating energy calculation in physical units. The model described in this study remains consistent with fundamental physical requirements, and imposes physically meaningful driving and redistribution rules.
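The load-relax loop at the heart of such models is compact. The sketch below is a generic Lu & Hamilton (1991)-style cellular automaton (slow random driving plus threshold redistribution of the discrete Laplacian), offered as an illustrative stand-in, not the authors' magnetogram-driven rules; grid size, driving amplitude and threshold are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def soc_flares(n=32, steps=20000, zc=1.0):
    """Sandpile-style SOC automaton: drive one random cell, then relax any
    cell whose discrete Laplacian exceeds the threshold zc."""
    b = np.zeros((n, n))
    events = []
    for _ in range(steps):
        i, j = rng.integers(1, n - 1, size=2)
        b[i, j] += rng.random() * 0.4                 # slow driving (loading)
        e = 0.0
        while True:
            lap = np.zeros_like(b)
            lap[1:-1, 1:-1] = b[1:-1, 1:-1] - 0.25 * (
                b[:-2, 1:-1] + b[2:, 1:-1] + b[1:-1, :-2] + b[1:-1, 2:])
            hot = np.argwhere(np.abs(lap) > zc)
            if hot.size == 0:
                break
            for x, y in hot:
                db = lap[x, y]
                b[x, y] -= 0.8 * db                   # 4/5 leaves the site...
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    b[x + dx, y + dy] += 0.2 * db     # ...1/5 to each neighbour
                e += abs(db)                          # proxy for released energy
        if e > 0:
            events.append(e)                          # one "flare" per avalanche
    return np.array(events)

ev = soc_flares()
print(f"{ev.size} avalanches recorded")
```

Histogramming the avalanche energies from long runs is what yields the power-law distributions the abstract compares against GOES observations.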
Truman, C C; Strickland, T C; Potter, T L; Franklin, D H; Bosch, D D; Bednarz, C W
2007-01-01
The low-carbon, intensively cropped Coastal Plain soils of Georgia are susceptible to runoff, soil loss, and drought. Reduced tillage systems offer the best management tool for sustained row crop production. Understanding of runoff, sediment, and chemical losses from conventional and reduced tillage systems is expected to improve if the effect of a variable rainfall-intensity storm is quantified. Our objective was to quantify and compare the effects of a constant (Ic) intensity pattern and a more realistic, observed, variable (Iv) rainfall intensity pattern on runoff (R), sediment (E), and carbon losses (C) from a Tifton loamy sand cropped to conventional-till (CT) and strip-till (ST) cotton (Gossypium hirsutum L.). Four treatments were evaluated: CT-Ic, CT-Iv, ST-Ic, and ST-Iv, each replicated three times. Field plots (n=12), each 2 by 3 m, were established on each treatment. Each 6-m2 field plot received simulated rainfall at a constant (57 mm h(-1)) or variable rainfall intensity pattern for 70 min (12-run ave.=1402 mL; CV=3%). The Iv pattern represented the most frequently occurring intensity pattern for spring storms in the region. Compared with CT, ST decreased R by 2.5-fold, E by 3.5-fold, and C by 7-fold. Maximum runoff values for Iv events were 1.6-fold higher than those for Ic events and occurred 38 min earlier. Values for Etot and Ctot for Iv events were 19-36% and 1.5-fold higher than corresponding values for Ic events. Values for Emax and Cmax for Iv events were 3-fold and 4-fold higher than corresponding values for Ic events. Carbon enrichment ratios (CER) were
Cigrand, Charles V.
2018-03-26
The U.S. Geological Survey (USGS) in cooperation with the city of West Branch and the Herbert Hoover National Historic Site of the National Park Service assessed flood-mitigation scenarios within the West Branch Wapsinonoc Creek watershed. The scenarios are intended to demonstrate several means of decreasing peak streamflows and improving the conveyance of overbank flows from the West Branch Wapsinonoc Creek and its tributary Hoover Creek where they flow through the city and the Herbert Hoover National Historic Site located within the city. Hydrologic and hydraulic models of the watershed were constructed to assess the flood-mitigation scenarios. To accomplish this, the models used the U.S. Army Corps of Engineers Hydrologic Engineering Center-Hydrologic Modeling System (HEC–HMS) version 4.2 to simulate the amount of runoff and streamflow produced from single rain events. The Hydrologic Engineering Center-River Analysis System (HEC–RAS) version 5.0 was then used to construct an unsteady-state model that may be used for routing streamflows, mapping areas that may be inundated during floods, and simulating the effects of different measures taken to decrease the effects of floods on people and infrastructure. Both models were calibrated to three historic rainfall events that produced peak streamflows ranging between the 2-year and 10-year flood-frequency recurrence intervals at the USGS streamgage (05464942) on Hoover Creek. The historic rainfall events were calibrated by using data from two USGS streamgages along with surveyed high-water marks from one of the events. The calibrated HEC–HMS model was then used to simulate streamflows from design rainfall events of 24-hour duration ranging from a 20-percent to a 1-percent annual exceedance probability. These simulated streamflows were incorporated into the HEC–RAS model. The unsteady-state HEC–RAS model was calibrated to represent existing conditions within the watershed. HEC–RAS model simulations with the existing conditions and streamflows from the design rainfall events were then done to serve as a baseline for evaluating flood-mitigation scenarios. After these simulations were completed, three different flood-mitigation scenarios were developed with HEC–RAS: a detention-storage scenario, a conveyance-improvement scenario, and a combination of both. In the detention-storage scenario, four in-channel detention structures were placed upstream from the city of West Branch to attenuate peak streamflows. To investigate possible improvements to conveying floodwaters through the city of West Branch, a section of abandoned railroad embankment and an old truss bridge were removed in the model, because these structures were producing backwater areas during flooding events. The third scenario combines the detention and conveyance scenarios so their joint efficiency could be evaluated. The scenarios with the design rainfall events were run in the HEC–RAS model so their flood-mitigation effects could be analyzed across a wide range of flood magnitudes.
Object-oriented approach for gas turbine engine simulation
NASA Technical Reports Server (NTRS)
Curlett, Brian P.; Felder, James L.
1995-01-01
An object-oriented gas turbine engine simulation program was developed. This program is a prototype for a more complete, commercial-grade engine performance program now being proposed as part of the Numerical Propulsion System Simulator (NPSS). This report discusses architectural issues of this complex software system and the lessons learned from developing the prototype code. The prototype code is a fully functional, general-purpose engine simulation program; however, only the component models necessary to model a transient compressor test rig have been written. The production system will be capable of steady-state and transient modeling of almost any turbine engine configuration. Chief among the architectural considerations for this code was the framework in which the various software modules interact. These modules include the equation solver, simulation code, data model, event handler, and user interface. Also documented in this report is the component-based design of the simulation module and the inter-component communication paradigm. Object class hierarchies for some of the code modules are given.
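The component-based pattern described here is straightforward to illustrate: each component exposes a residual that a shared equation solver drives to zero. The Python sketch below is hypothetical (the class names, residual and fixed-point iteration are invented for illustration, not NPSS's actual API):

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """Base class for engine components in a component-based simulation."""
    def __init__(self, name):
        self.name = name

    @abstractmethod
    def residual(self, state: dict) -> float:
        """Return a conservation-law error the solver drives to zero."""

class Compressor(Component):
    def __init__(self, name, pressure_ratio, efficiency):
        super().__init__(name)
        self.pr, self.eff = pressure_ratio, efficiency

    def residual(self, state):
        # mass-flow continuity error between demanded and map-derived flow
        return state["mdot_in"] - state["mdot_map"]

class Solver:
    """Toy fixed-point iteration standing in for the equation-solver module."""
    def solve(self, components, state, iters=50, relax=0.5):
        for _ in range(iters):
            for c in components:
                state["mdot_in"] -= relax * c.residual(state)
        return state

state = {"mdot_in": 10.0, "mdot_map": 9.2}
print(Solver().solve([Compressor("HPC", 8.0, 0.85)], state)["mdot_in"])
```

The design point is that the solver knows nothing about compressors or burners; new components plug in by subclassing, which is the extensibility argument the report makes for the object-oriented approach.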
Realization of planning design of mechanical manufacturing system by Petri net simulation model
NASA Astrophysics Data System (ADS)
Wu, Yanfang; Wan, Xin; Shi, Weixiang
1991-09-01
Planning design means working out an overall, long-term plan. In order to guarantee that a mechanical manufacturing system (MMS) obtains maximum economic benefit, it is necessary to carry out a reasonable planning design for the system. First, some principles of planning design for MMS are introduced. Problems of production scheduling and their decision rules for computer simulation are presented, and the realization of each production scheduling decision rule in a Petri net model is discussed. Second, conflict resolution rules for conflicts arising while running the Petri net are given. Third, based on the Petri net model of the MMS, which includes part flow and tool flow, and according to the principle of minimum event time advance, a computer dynamic simulation of the Petri net model, that is, a computer dynamic simulation of the MMS, is realized. Finally, the simulation program is applied to a simulation example, so that the scheme of a planning design for the MMS can be evaluated effectively.
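The core loop of such a timed Petri net simulation (fire enabled transitions, then advance to the minimum pending event time) fits in a few lines. Below is a toy Python sketch of a one-machine net; the places, marking and processing time are hypothetical, not the paper's MMS model:

```python
import heapq

# Places hold token counts; the 'start' transition fires while enabled,
# and its completion is scheduled on an event list that is processed in
# order of minimum event time.
marking = {"part_waiting": 3, "machine_free": 1, "part_done": 0}
PROC_TIME = 4.0

events = []          # heap of (completion_time, event_name)
clock = 0.0

def try_start():
    """Fire the 'start' transition for as long as it is enabled."""
    while marking["part_waiting"] >= 1 and marking["machine_free"] >= 1:
        marking["part_waiting"] -= 1
        marking["machine_free"] -= 1
        heapq.heappush(events, (clock + PROC_TIME, "finish"))

try_start()
while events:
    clock, name = heapq.heappop(events)      # minimum event time advance
    if name == "finish":
        marking["machine_free"] += 1
        marking["part_done"] += 1
        try_start()
    print(f"t={clock:4.1f}  {name}  marking={marking}")
```

With three waiting parts and one machine, the finish events occur at t = 4, 8 and 12, which is exactly the serialization a single shared resource imposes.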
NASA Astrophysics Data System (ADS)
Pistolesi, Marco; Cioni, Raffaello; Rosi, Mauro; Aguilera, Eduardo
2014-02-01
The ice-capped Cotopaxi volcano is known worldwide for the large-scale, catastrophic lahars that have occurred in connection with historical explosive eruptions. The most recent large-scale lahar event occurred in 1877, when scoria flows partially melted ice and snow of the summit glacier, generating debris flows that severely impacted all the river valleys originating from the volcano. The 1877 lahars have been considered in recent years as a maximum expected event to define the hazard associated with lahar generation at Cotopaxi. Conversely, recent field-based studies have shown that such debris flows have occurred several times during the last 800 years of activity at Cotopaxi, and that the scale of lahars has been variable, including events much larger than that of 1877. Despite a rapid retreat of the summit ice cap over the past century, in fact, there are no data clearly suggesting that future events will be smaller than those recorded in the deposits of the last 800 years of activity. In addition, geological field data prove that the lahar triggering mechanism also has to be considered as a key input parameter and, under appropriate eruptive mechanisms, a hazard scenario of a lahar with a volume three times larger than the 1877 event is likely. In order to analyze the impact scenarios in the southern drainage system of the volcano, simulations of inundation areas were performed with a semi-empirical model (LAHARZ), using input parameters including variable water volume. Results indicate that a lahar three times larger than the 1877 event would invade much wider areas than those flooded by the 1877 lahars along the southern valley system, eventually impacting highly urbanized areas such as the city of Latacunga.
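For context, LAHARZ rests on the semi-empirical scaling relations of Iverson et al. (1998), which link a lahar's volume $V$ to its inundated cross-sectional area $A$ and planimetric area $B$; the coefficients quoted below are the published defaults for lahars and should be checked against the calibration actually used in the study:

$$A = 0.05\,V^{2/3}, \qquad B = 200\,V^{2/3}.$$

The $V^{2/3}$ scaling implies that tripling the volume enlarges both areas by a factor of $3^{2/3} \approx 2.1$, which is consistent with the substantially wider inundation found for the three-times-1877 scenario.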
NASA Astrophysics Data System (ADS)
Park, Y.-J.; Sudicky, E. A.; Brookfield, A. E.; Jones, J. P.
2011-12-01
Precipitation-induced overland and groundwater flow and mixing processes are quantified to analyze the temporal (event and pre-event water) and spatial (groundwater discharge and overland runoff) origins of water entering a stream. Using a distributed-parameter control volume finite-element simulator that can simultaneously solve the fully coupled partial differential equations describing 2-D Manning and 3-D Darcian flow and advective-dispersive transport, mechanical flow (driven by hydraulic potential) and tracer-based hydrograph separation (driven by dispersive mixing as well as mechanical flow) are simulated in response to precipitation events in two cross sections oriented parallel and perpendicular to a stream. The results indicate that as precipitation becomes more intense, the subsurface mechanical flow contributions tend to become less significant relative to the total pre-event stream discharge. Hydrodynamic mixing can play an important role in enhancing pre-event tracer signals in the stream. This implies that temporally tagged chemical signals introduced into surface-subsurface flow systems from precipitation may not be strong enough to detect the changes in the subsurface flow system. It is concluded that diffusive/dispersive mixing, capillary fringe groundwater ridging, and macropore flow can influence the temporal sources of water in the stream, but any sole mechanism may not fully explain the strong pre-event water discharge. Further investigations of the influence of heterogeneity, residence time, geomorphology, and root zone processes are required to confirm the conclusions of this study.
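The tracer-based separation mentioned here conventionally rests on a two-component mass balance (a standard relation, not spelled out in the abstract): with $Q$ discharge, $C$ tracer concentration, and subscripts $s$, $e$, $p$ denoting stream, event, and pre-event water,

$$Q_s = Q_e + Q_p, \qquad C_s Q_s = C_e Q_e + C_p Q_p \;\;\Rightarrow\;\; \frac{Q_p}{Q_s} = \frac{C_s - C_e}{C_p - C_e}.$$

The dispersive-mixing result above then says that the simulated $C_s$ can carry a stronger pre-event signature than the mechanical flow fractions alone would imply.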
Managed traffic evacuation using distributed sensor processing
NASA Astrophysics Data System (ADS)
Ramuhalli, Pradeep; Biswas, Subir
2005-05-01
This paper presents an integrated sensor network and distributed event processing architecture for managed in-building traffic evacuation during natural and human-caused disasters, including earthquakes, fire and biological/chemical terrorist attacks. The proposed wireless sensor network protocols and distributed event processing mechanisms offer a new distributed paradigm for improving reliability in building evacuation and disaster management. The networking component of the system is constructed using distributed wireless sensors for measuring environmental parameters such as temperature, humidity, and detecting unusual events such as smoke, structural failures, vibration, biological/chemical or nuclear agents. Distributed event processing algorithms will be executed by these sensor nodes to detect the propagation pattern of the disaster and to measure the concentration and activity of human traffic in different parts of the building. Based on this information, dynamic evacuation decisions are taken for maximizing the evacuation speed and minimizing unwanted incidents such as human exposure to harmful agents and stampedes near exits. A set of audio-visual indicators and actuators are used for aiding the automated evacuation process. In this paper we develop integrated protocols, algorithms and their simulation models for the proposed sensor networking and the distributed event processing framework. Also, efficient harnessing of the individually low, but collectively massive, processing abilities of the sensor nodes is a powerful concept behind our proposed distributed event processing algorithms. Results obtained through simulation in this paper are used for a detailed characterization of the proposed evacuation management system and its associated algorithmic components.
Active Control of Solar Array Dynamics During Spacecraft Maneuvers
NASA Technical Reports Server (NTRS)
Ross, Brant A.; Woo, Nelson; Kraft, Thomas G.; Blandino, Joseph R.
2016-01-01
Recent NASA mission plans require spacecraft to undergo potentially significant maneuvers (or dynamic loading events) with large solar arrays deployed. Therefore there is an increased need to understand and possibly control the nonlinear dynamics in the spacecraft system during such maneuvers. The development of a nonlinear controller is described. The utility of using a nonlinear controller to reduce forces and motion in a solar array wing during a loading event is demonstrated. The result is dramatic reductions in system forces and motion during a 10 second loading event. A motion curve derived from the simulation with the closed loop controller is used to obtain similar benefits with a simpler motion control approach.
NASA Astrophysics Data System (ADS)
Edouard, Simon; Vincendon, Béatrice; Ducrocq, Véronique
2018-05-01
Intense precipitation events in the Mediterranean often lead to devastating flash floods (FF). FF modelling is affected by several kinds of uncertainties, and Hydrological Ensemble Prediction Systems (HEPS) are designed to take those uncertainties into account. The major source of uncertainty comes from the rainfall forcing, and convective-scale meteorological ensemble prediction systems can manage it for forecasting purposes. But other sources are related to the hydrological modelling part of the HEPS. This study focuses on the uncertainties arising from the hydrological model parameters and initial soil moisture, with the aim of designing an ensemble-based version of a hydrological model dedicated to simulating Mediterranean fast-responding rivers, the ISBA-TOP coupled system. The first step consists of identifying the parameters that have the strongest influence on FF simulations by assuming perfect precipitation. A sensitivity study is carried out, first using a synthetic framework and then for several real events and several catchments. Perturbation methods varying the most sensitive parameters as well as initial soil moisture allow the design of an ensemble-based version of ISBA-TOP. The first results of this system on some real events are presented. The direct perspective of this work will be to drive this ensemble-based version with the members of a convective-scale meteorological ensemble prediction system to design a complete HEPS for FF forecasting.
Pawlowski, Andrzej; Guzman, Jose Luis; Rodríguez, Francisco; Berenguel, Manuel; Sánchez, José; Dormido, Sebastián
2009-01-01
Monitoring and control of the greenhouse environment play a decisive role in greenhouse production processes. Assurance of optimal climate conditions has a direct influence on crop growth performance, but it usually increases the required equipment cost. Traditionally, greenhouse installations have required a great effort to connect and distribute all the sensors and data acquisition systems. These installations need many data and power wires to be distributed along the greenhouses, making the system complex and expensive. For this reason, and others such as the unavailability of distributed actuators, individual sensors are usually located at a fixed point that is selected as representative of the overall greenhouse dynamics. On the other hand, the actuation system in greenhouses is usually composed of mechanical devices controlled by relays, and it is desirable to reduce the number of commutations of the control signals from security and economic points of view. Therefore, in order to face these drawbacks, this paper describes how greenhouse climate control can be represented as an event-based system in combination with wireless sensor networks, where low-frequency dynamic variables have to be controlled and control actions are mainly calculated in response to events produced by external disturbances. The proposed control system allows cost savings through wear minimization and prolonged actuator life, while keeping promising performance results. Analysis and conclusions are given by means of simulation results. PMID:22389597
Modeling Anti-Air Warfare With Discrete Event Simulation and Analyzing Naval Convoy Operations
2016-06-01
Master's thesis by Ali E. Opcin, June 2016; thesis advisor: Arnold H. Buss. In this study, a discrete event simulation (DES) was built by modeling ships, and their sensors and weapons, to simulate convoy operations under
NASA Technical Reports Server (NTRS)
Springer, P.
1993-01-01
This paper discusses the method by which the Cascade-Correlation algorithm was parallelized so that it could be run using the Time Warp Operating System (TWOS). TWOS is a special-purpose operating system designed to run parallel discrete event simulations with maximum efficiency on parallel or distributed computers.
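The optimistic synchronization idea behind Time Warp (execute events speculatively, save state, roll back when a straggler arrives in the past) can be sketched compactly. The following is a toy single-process Python illustration with invented state and events; a real Time Warp system such as TWOS also handles anti-messages and global virtual time:

```python
import copy

class LogicalProcess:
    """One Time Warp-style logical process: speculative execution with
    state saving and rollback. A toy sketch, not the TWOS design."""
    def __init__(self):
        self.state = {"count": 0}
        self.processed = []             # (timestamp, delta, state_before)
        self.lvt = 0.0                  # local virtual time

    def _execute(self, t, delta):
        self.processed.append((t, delta, copy.deepcopy(self.state)))
        self.state["count"] += delta
        self.lvt = t

    def receive(self, t, delta):
        redo = []
        if t < self.lvt:                # straggler arrived in our past
            while self.processed and self.processed[-1][0] > t:
                et, ed, before = self.processed.pop()
                self.state = before     # undo: restore the saved state
                redo.append((et, ed))   # undone events must re-execute
            self.lvt = self.processed[-1][0] if self.processed else 0.0
        self._execute(t, delta)
        for et, ed in sorted(redo):     # replay in timestamp order
            self._execute(et, ed)

lp = LogicalProcess()
for t, d in [(1.0, 1), (3.0, 10), (2.0, 100)]:   # t=2.0 is a straggler
    lp.receive(t, d)
print(lp.lvt, lp.state)                # -> 3.0 {'count': 111}
```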
de Carvalho, Elias Cesar Araujo; Batilana, Adelia Portero; Claudino, Wederson; Reis, Luiz Fernando Lima; Schmerling, Rafael A; Shah, Jatin; Pietrobon, Ricardo
2012-01-01
With the exponential expansion of clinical trials conducted in BRIC (Brazil, Russia, India, and China) and VISTA (Vietnam, Indonesia, South Africa, Turkey, and Argentina) countries, corresponding gains in cost and enrolment efficiency quickly outpace the consonant metrics in traditional countries in North America and the European Union. However, questions still remain regarding the quality of data being collected in these countries. We used ethnographic, mapping and computer simulation studies to identify/address areas of threat to near miss events for data quality in two cancer trial sites in Brazil. Two sites in Sao Paulo and Rio de Janeiro were evaluated using ethnographic observations of workflow during subject enrolment and data collection. Emerging themes related to threats to near miss events for data quality were derived from observations. They were then transformed into workflows using UML-AD and modeled using System Dynamics. 139 tasks were observed and mapped through the ethnographic study. The UML-AD detected four major activities in the workflow: evaluation of potential research subjects prior to signature of informed consent, visit to obtain the subject's informed consent, regular data collection sessions following the study protocol, and closure of the study protocol for a given project. Field observations pointed to three major emerging themes: (a) lack of a standardized process for data registration at the source document, (b) multiplicity of data repositories and (c) scarcity of decision support systems at the point of research intervention. Simulation with the policy model demonstrates a reduction of the rework problem. Patterns of threats to data quality at the two sites were similar to the threats reported in the literature for American sites. Clinical trial site managers need to reorganize staff workflow by using information technology more efficiently, establish new standard procedures and manage professionals to reduce near miss events and save time/cost. Clinical trial sponsors should improve relevant support systems.
System status display evaluation
NASA Technical Reports Server (NTRS)
Summers, Leland G.
1988-01-01
The System Status Display is an electronic display system which provides the crew with an enhanced capability for monitoring and managing the aircraft systems. A flight simulation in a fixed base cockpit simulator was used to evaluate alternative design concepts for this display system. The alternative concepts included pictorial versus alphanumeric text formats, multifunction versus dedicated controls, and integration of the procedures with the system status information versus paper checklists. Twelve pilots manually flew approach patterns with the different concepts. System malfunctions occurred which required the pilots to respond to the alert by reconfiguring the system. The pictorial display, the multifunction control interfaces collocated with the system display, and the procedures integrated with the status information all had shorter event processing times and lower subjective workloads.
Water Hammer Simulations of Monomethylhydrazine Propellant
NASA Technical Reports Server (NTRS)
Burkhardt, Zachary; Ramachandran, N.; Majumdar, A.
2017-01-01
Fluid transient analysis is important in the design of a spacecraft propulsion system to ensure the structural stability of the system in the event of sudden closing or opening of a valve. The Generalized Fluid System Simulation Program (GFSSP), a general-purpose flow network code developed at NASA/MSFC, is capable of simulating the pressure surge due to sudden opening or closing of a valve when thermodynamic properties of the real fluid are available for the entire range of simulation. Specifically, GFSSP needs an accurate representation of the pressure-density relationship in order to predict the pressure surge during a fluid transient. Unfortunately, the available thermodynamic property programs such as REFPROP, GASP or GASPAK do not provide the thermodynamic properties of monomethylhydrazine (MMH). This work illustrates the process used for building a customized table of properties of state variables from available properties and the speed of sound, as required by GFSSP for simulation. Good agreement was found between the simulations and measured data. This method can be adopted for modeling flow networks and systems with other fluids whose properties are not known in detail, in order to obtain general technical insight.
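The table-building step relies on the thermodynamic identity $a^2 = \partial P / \partial \rho$, so density increments can be accumulated from speed-of-sound data as $\Delta\rho \approx \Delta P / a^2$. A minimal NumPy sketch of that integration follows; the reference density and sound speed below are placeholders, not MMH property data:

```python
import numpy as np

# Build a pressure-density table from a reference density and speed-of-sound
# data, using a^2 = dP/drho  =>  drho = dP / a^2.
P = np.linspace(1.0e5, 3.0e6, 200)       # pressure grid, Pa
a = np.full_like(P, 1640.0)              # speed of sound, m/s (placeholder)
rho0 = 870.0                             # reference density, kg/m^3 (placeholder)

drho = np.diff(P) / a[:-1] ** 2          # incremental density change per step
rho = rho0 + np.concatenate([[0.0], np.cumsum(drho)])

# The resulting (P, rho) pairs form the custom property table a flow-network
# solver can interpolate during a water-hammer transient.
print(f"density rises by {rho[-1] - rho[0]:.3f} kg/m^3 over the table")
```

Because the surge amplitude scales with how stiff the liquid is, even small errors in this pressure-density slope translate directly into mispredicted peak pressures.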
Automatic programming of simulation models
NASA Technical Reports Server (NTRS)
Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.
1990-01-01
The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on the application of an element of rapid prototyping, or automatic programming, to assist the modeler in defining the problem specification. Then, once the problem specification has been defined, an automatic code generator is used to write the simulation code. Two domains were selected for evaluating the concepts of software engineering for discrete event simulation: a manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS); (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.
ATLAS Simulation using Real Data: Embedding and Overlay
NASA Astrophysics Data System (ADS)
Haas, Andrew; ATLAS Collaboration
2017-10-01
For some physics processes studied with the ATLAS detector, a more accurate simulation in some respects can be achieved by including real data into simulated events, with substantial potential improvements in the CPU, disk space, and memory usage of the standard simulation configuration, at the cost of significant database and networking challenges. Real proton-proton background events can be overlaid (at the detector digitization output stage) on a simulated hard-scatter process, to account for pileup background (from nearby bunch crossings), cavern background, and detector noise. A similar method is used to account for the large underlying event from heavy ion collisions, rather than directly simulating the full collision. Embedding replaces the muons found in Z→μμ decays in data with simulated taus at the same 4-momenta, thus preserving the underlying event and pileup from the original data event. In all these cases, care must be taken to exactly match detector conditions (beamspot, magnetic fields, alignments, dead sensors, etc.) between the real data event and the simulation. We will discuss the status of these overlay and embedding techniques within ATLAS software and computing.
NASA Astrophysics Data System (ADS)
Darma Tarigan, Suria
2016-01-01
Flooding is caused by excessive rainfall flowing downstream as cumulative surface runoff. A flooding event is the result of complex interactions among natural system components such as rainfall events, land use, soil, topography and channel characteristics. Modeling flooding events as a result of the interaction of those components is a central theme in watershed management, and such models are usually used to test the performance of various management practices, both vegetative and structural, in flood mitigation. Existing hydrological models such as SWAT and HEC-HMS have limited ability to accommodate discrete management practices such as infiltration wells, small farm reservoirs and silt pits, due to the lumped structure of these models. The aim of this research is to use the raster spatial analysis functions of a Geo-Information System (RGIS-HM) to model a flooding event in the Ciliwung watershed and to simulate the impact of discrete management practices on surface runoff reduction. The model was validated using data from the Ciliwung watershed flooding event of 29 January 2004, for which hourly hydrograph and rainfall data were available. The validation gave a good result, with a Nash-Sutcliffe efficiency of 0.8. We also compared the RGIS-HM with the Netlogo Hydrological Model (NL-HM); the RGIS-HM has a capability similar to the NL-HM in simulating discrete management practices at the watershed scale.
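The goodness-of-fit score quoted here is the Nash-Sutcliffe efficiency, $NSE = 1 - \sum_t (O_t - S_t)^2 / \sum_t (O_t - \bar{O})^2$, which equals 1 for a perfect fit and 0 for a model no better than the observed mean. A small Python helper (the discharge series below are invented):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# hypothetical hourly discharges (m^3/s) during a flood event
obs = [5.0, 12.0, 30.0, 22.0, 14.0, 8.0]
sim = [6.0, 10.0, 27.0, 24.0, 13.0, 9.0]
print(f"NSE = {nse(obs, sim):.2f}")
```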
Kundu, Anupam; Sabhapandit, Sanjib; Dhar, Abhishek
2011-03-01
We present an algorithm for finding the probabilities of rare events in nonequilibrium processes. The algorithm consists of evolving the system with a modified dynamics for which the required event occurs more frequently. By keeping track of the relative weight of phase-space trajectories generated by the modified and the original dynamics, one can obtain the required probabilities. The algorithm is tested on two model systems of steady-state particle and heat transport, where we find a huge improvement over direct simulation methods.
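The weight-tracking idea is a form of importance sampling. As a self-contained illustration (an exponentially tilted Gaussian random walk, not the paper's transport models), the modified dynamics biases every step toward the rare event and each trajectory is reweighted by its exact likelihood ratio:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def rare_prob(n=20, b=15.0, tilt=0.75, samples=200_000):
    """Estimate P(sum of n standard normals >= b) by simulating tilted
    (modified) dynamics and reweighting each trajectory by the ratio of
    original to modified path densities."""
    steps = rng.normal(loc=tilt, scale=1.0, size=(samples, n))  # modified dynamics
    s = steps.sum(axis=1)
    # log relative weight: product over steps of N(0,1)-pdf / N(tilt,1)-pdf
    logw = -tilt * s + n * tilt ** 2 / 2.0
    return np.mean(np.exp(logw) * (s >= b))

est = rare_prob()
exact = 0.5 * (1 - erf(15.0 / sqrt(2 * 20)))   # P(N(0, 20) >= 15)
print(f"importance-sampled {est:.3e} vs exact {exact:.3e}")
```

The tilt is chosen near b/n so the "rare" event becomes typical under the modified dynamics; direct simulation would need orders of magnitude more samples to see the same event even a handful of times.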
NASA Astrophysics Data System (ADS)
Lin, Hai-Nan; Li, Jin; Li, Xin
2018-05-01
The detection of gravitational waves (GWs) provides a powerful tool for constraining cosmological parameters. In this paper, we investigate the possibility of using GWs as standard sirens to test the anisotropy of the universe. We consider the GW signals produced by the coalescence of binary black hole systems and simulate hundreds of GW events from the Advanced Laser Interferometer Gravitational-Wave Observatory and Virgo. It is found that the anisotropy of the universe can be tightly constrained if the redshift of the GW source is precisely known. The anisotropic amplitude can be constrained with an accuracy comparable to the Union2.1 compilation of type-Ia supernovae if ≳ 400 GW events are observed. As for the preferred direction, ≳ 800 GW events are needed in order to achieve the accuracy of Union2.1. With 800 GW events, the probability of pseudo-anisotropic signals with an amplitude comparable to Union2.1 is negligible. These results show that GWs can provide a tool complementary to supernovae in testing the anisotropy of the universe.
Hydrology of Fritchie Marsh, coastal Louisiana
Kuniansky, E.L.
1985-01-01
Fritchie Marsh, near Slidell, Louisiana, is being considered as a disposal site for sewage effluent. A two-dimensional, finite element, surface-water modeling system was used to solve the shallow water equations for flow. Factors affecting flow patterns are channel locations, inlets, outlets, islands, marsh vegetation, marsh geometry, the stage of the West Pearl River, flooding over the lower Pearl River basin, gravity tides, wind-induced currents, and sewage discharge to the marsh. Four steady-state simulations were performed for two hydrologic events at two rates of sewage discharge. The events, neap tide with no wind or rain and neap tide with a tide differential across the marsh, were selected as worst-case events for sewage effluent dispersion and were treated as steady-state events. Because inflows and outflows to the marsh are tidally affected, steady-state simulations cannot fully define the hydraulic characteristics of the marsh for all hydrologic events. Model results and field data indicate that, during neap tide with little or no rain, large parts of the marsh are stagnant, and sewage effluent, at existing and projected flows, has minimal effect on marsh flows. (USGS)
Deep Space Storm Shelter Simulation Study
NASA Technical Reports Server (NTRS)
Dugan, Kathryn; Phojanamongkolkij, Nipa; Cerro, Jeffrey; Simon, Matthew
2015-01-01
Missions outside of Earth's magnetic field are impeded by the presence of radiation from galactic cosmic rays and solar particle events. To overcome this issue, NASA's Advanced Exploration Systems Radiation Works Storm Shelter (RadWorks) has been studying different radiation protective habitats to shield against the onset of solar particle event radiation. These habitats have the capability of protecting occupants by utilizing available materials such as food, water, brine, human waste, trash, and non-consumables to build short-term shelters. Protection comes from building a barrier with the materials that dampens the impact of the radiation on astronauts. The goal of this study is to develop a discrete event simulation, modeling a solar particle event and the building of a protective shelter. The main hallway location within a larger habitat similar to the International Space Station (ISS) is analyzed. The outputs from this model are: 1) the total area covered on the shelter by the different materials, 2) the amount of radiation the crew members receive, and 3) the amount of time for setting up the habitat during specific points in a mission given an event occurs.
Discerning Trends in Performance Across Multiple Events
NASA Technical Reports Server (NTRS)
Slater, Simon; Hiltz, Mike; Rice, Craig
2006-01-01
Mass Data is a computer program that enables rapid, easy discernment of trends in performance data across multiple flights and ground tests. The program can perform Fourier analysis and other functions for the purposes of frequency analysis and trending of all variables. These functions facilitate identification of past use of diagnosed systems and of anomalies in such systems, and enable rapid assessment of related current problems. Many variables whose computation usually requires extensive manual manipulation of raw downlist data are automatically computed and made available to all users, eliminating what would otherwise be an extensive amount of engineering analysis. Data from flight, ground test, and simulation are preprocessed and stored in one central location for instantaneous access and comparison for diagnostic and trending purposes. Rules are defined so that an event log is created for every flight, making it easy to locate information on similar maneuvers across many flights. The same rules can be created for test sets and simulations, and are searchable, so that information on like events is easily accessible.
NASA Astrophysics Data System (ADS)
Batmunkh, Munkhbaatar; Bugay, Alexander; Bayarchimeg, Lkhagvaa; Lkhagva, Oidov
2018-02-01
The present study is focused on the development of optimal models of neuron morphology for Monte Carlo microdosimetry simulations of the initial radiation-induced events of heavy charged particles in specific types of cells of the hippocampus, the most radiation-sensitive structure of the central nervous system. The neuron geometry and particle track structures were simulated with the Geant4/Geant4-DNA Monte Carlo toolkits. The calculations were made for beams of protons and heavy ions with different energies and doses corresponding to real fluxes of galactic cosmic rays. A simple compartmental model and a complex model with realistic morphology extracted from experimental data were constructed and compared. We estimated the distribution of energy deposition events and the production of reactive chemical species within the developed models of CA3/CA1 pyramidal neurons and DG granule cells of the rat hippocampus under exposure to different particles at the same dose. Similar distributions of energy deposition events and concentrations of some oxidative radical species were obtained in both the simplified and realistic neuron models.
Fault trees for decision making in systems analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Howard E.
1975-10-09
The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
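The notion of ranking basic events by probabilistic importance can be illustrated with a small example: the Birnbaum importance of a basic event is the sensitivity of the top-event probability to that event. The three-event tree and failure probabilities below are invented; this is a sketch of the concept, not the IMPORTANCE code itself.

```python
from itertools import product

# Tiny fault tree: TOP = A OR (B AND C), with illustrative failure
# probabilities. The Birnbaum importance of event i is
# P(top | i failed) - P(top | i working).
probs = {"A": 0.01, "B": 0.05, "C": 0.10}

def top(state):                            # state: event name -> failed?
    return state["A"] or (state["B"] and state["C"])

def p_top(fixed):
    """Top-event probability with some events fixed to failed/working."""
    free = [n for n in probs if n not in fixed]
    total = 0.0
    for bits in product([False, True], repeat=len(free)):
        state, pr = dict(fixed), 1.0
        for n, b in zip(free, bits):
            state[n] = b
            pr *= probs[n] if b else 1 - probs[n]
        if top(state):
            total += pr
    return total

for name in probs:                         # rank events by sensitivity
    ib = p_top({name: True}) - p_top({name: False})
    print(f"{name}: Birnbaum importance = {ib:.4f}")
```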
Recurring Occultations of RW Aurigae by Coagulated Dust in the Tidally Disrupted Circumstellar Disk
NASA Astrophysics Data System (ADS)
Rodriguez, Joseph E.; Reed, Phillip A.; Siverd, Robert J.; Pepper, Joshua; Stassun, Keivan G.; Gaudi, B. Scott; Weintraub, David A.; Beatty, Thomas G.; Lund, Michael B.; Stevens, Daniel J.
2016-02-01
We present photometric observations of RW Aurigae, a Classical T Tauri system, that reveal two remarkable dimming events. These events are similar to the one we observed in 2010-2011, which was the first such deep dimming observed in RW Aur in a century's worth of photometric monitoring. We suggested the 2010-2011 dimming was the result of an occultation of the star by its tidally disrupted circumstellar disk. In 2012-2013, the RW Aur system dimmed by ~0.7 mag for ~40 days and in 2014-2015 the system dimmed by ~2 mag for >250 days. The ingress/egress duration measurements of the more recent events agree well with those from the 2010-2011 event, providing strong evidence that the new dimmings are kinematically associated with the same occulting source as the 2010-2011 event. Therefore, we suggest that both the 2012-2013 and 2014-2015 dimming events, measured using data from the Kilodegree Extremely Little Telescope and the Kutztown University Observatory, are also occultations of RW Aur A by the tidally disrupted circumstellar material. Recent hydrodynamical simulations of the eccentric fly-by of RW Aur B suggest the occulting body to be a bridge of material connecting RW Aur A and B. These simulations also suggest the possibility of additional occultations which are supported by the observations presented in this work. The color evolution of the dimmings suggests that the tidally stripped disk material includes dust grains ranging in size from small grains at the leading edge, typical of star-forming regions, to large grains, ices or pebbles producing gray or nearly gray extinction deeper within the occulting material. It is not known whether this material represents arrested planet building prior to the tidal disruption event, or perhaps accelerated planet building as a result of the disruption event, but in any case the evidence suggests the presence of advanced planet building material in the space between the two stars of the RW Aur system.
Hubley, Darlene; Peacocke, Sean; Maxwell, Joanne; Parker, Kathryn
2015-01-01
Simulation has the potential to invigorate teaching practices, facilitate professional development and impact client care. However, there is little literature on using simulation at the level of organizational change in healthcare. In this paper, the authors explore Holland Bloorview Kids Rehabilitation Hospital's experience using simulation to enhance the use of technology at the point-of-care. The simulation event demonstrated documentation using technology in two typical practice environments and allowed learners to discuss the challenges and opportunities. Participant feedback was positive overall, and this article reveals important lessons to support the future use of simulation as an educational tool for organizational change.
Integrated assessment of water-power grid systems under changing climate
NASA Astrophysics Data System (ADS)
Yan, E.; Zhou, Z.; Betrie, G.
2017-12-01
Energy and water systems are intrinsically interconnected. Due to an increase in climate variability and extreme weather events, the interdependency between these two systems has recently intensified, resulting in significant impacts on both systems and on energy output. To address this challenge, an Integrated Water-Energy Systems Assessment Framework (IWESAF) is being developed to integrate multiple existing or newly developed models from various sectors. In this presentation, we focus on recent improvements in the development of a thermoelectric power plant water use simulator, a power grid operation and cost optimization model, and the model integration that facilitates interaction between water and electricity generation under extreme climate events. The process-based thermoelectric power water use simulator includes heat-balance, climate, and cooling system modules that account for power plant characteristics, fuel types, and cooling technology. The model is validated against more than 800 fossil-fired, nuclear, and gas-turbine power plants with different cooling systems. The power grid operation and cost optimization model was implemented for a selected region in the Midwest. A case study is presented to evaluate the sensitivity and resilience of thermoelectricity generation and the power grid under various climate and hydrologic extremes, and the potential economic consequences.
Lisiecki, R S; Voigt, H F
1995-08-01
A 2-channel action-potential generator system was designed for use in testing neurophysiologic data acquisition/analysis systems. The system consists of a personal computer controlling an external hardware unit. This system is capable of generating 2 channels of simulated action potential (AP) waveshapes. The AP waveforms are generated from the linear combination of 2 principal-component template functions. Each channel generates randomly occurring APs with a specified rate ranging from 1 to 200 events per second. The 2 trains may be independent of one another or the second channel may be made to be excited or inhibited by the events from the first channel with user-specified probabilities. A third internal channel may be made to excite or inhibit events in both of the 2 output channels with user-specified rate parameters and probabilities. The system produces voltage waveforms that may be used to test neurophysiologic data acquisition systems for recording from 2 spike trains simultaneously and for testing multispike-train analysis (e.g., cross-correlation) software.
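A sketch of the event-generation logic is below: channel 1 fires as a Poisson process, and each of its events excites channel 2 with a user-specified probability after a short latency, on top of channel 2's own background rate. Synthesis of the AP waveshapes from principal-component templates is omitted; all rates and constants are illustrative, not the instrument's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

RATE1, RATE2 = 50.0, 20.0        # background rates, events/s (illustrative)
P_EXCITE, LATENCY = 0.3, 0.002   # excitation probability, 2 ms latency
T_END = 10.0                     # simulated seconds

def poisson_train(rate, t_end):
    """Event times of a homogeneous Poisson process on [0, t_end)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t >= t_end:
            return np.array(times)
        times.append(t)

ch1 = poisson_train(RATE1, T_END)
ch2 = poisson_train(RATE2, T_END)                 # independent background
# Each channel-1 event excites channel 2 with probability P_EXCITE.
triggered = ch1[rng.random(ch1.size) < P_EXCITE] + LATENCY
triggered = triggered[triggered < T_END]
ch2 = np.sort(np.concatenate([ch2, triggered]))
print(f"channel 1: {ch1.size} events, channel 2: {ch2.size} events")
```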
Haj, Adel E.; Christiansen, Daniel E.; Viger, Roland J.
2014-01-01
In 2011 the Missouri River Mainstem Reservoir System (Reservoir System) experienced the largest volume of flood waters since the initiation of record-keeping in the nineteenth century. The high levels of runoff from both snowpack and rainfall stressed the Reservoir System's capacity to control flood waters and caused massive damage and disruption along the river. The flooding and resulting damage along the Missouri River brought increased public attention to the U.S. Army Corps of Engineers (USACE) operation of the Reservoir System. To help understand the effects of Reservoir System operation on the 2011 Missouri River flood flows, the U.S. Geological Survey Precipitation-Runoff Modeling System was used to construct a model of the Missouri River Basin to simulate flows at streamgages and dam locations with the effects of Reservoir System operation (regulation) on flow removed. Statistical tests indicate that the Missouri River Precipitation-Runoff Modeling System model is a good fit for high-flow monthly and annual streamflow estimation. A comparison of simulated unregulated flows and measured regulated flows shows that regulation greatly reduced spring peak flow events, consolidated two summer peak flow events into one with a markedly decreased magnitude, and maintained higher than normal base flow beyond the end of water year 2011. Further comparison of results indicates that without regulation, flows greater than those measured would have occurred and been sustained for much longer, frequently in excess of 30 days, and flooding associated with high-flow events would have been more severe.
Prehistoric land use and Neolithisation in Europe in the context of regional climate events
NASA Astrophysics Data System (ADS)
Lemmen, C.; Wirtz, K. W.; Gronenborn, D.
2009-04-01
We present a simple, adaptation-driven, spatially explicit model of pre-Bronze Age socio-technological change, called the Global Land Use and Technological Evolution Simulator (GLUES). The socio-technological realm is described by three characteristic traits: available technology, subsistence style ratio, and economic diversity. Human population and culture develop in the context of global paleoclimate and regional paleoclimate events. Global paleoclimate is derived from CLIMBER-2 Earth System Model anomalies superimposed on the IIASA temperature and precipitation database. Regional forcing is provided by abrupt climate deteriorations from a compilation of 138 long-term high-resolution climate proxy time series from mostly terrestrial and near-shore archives. The GLUES simulator provides a novel way to explore the interplay between climate, climate change, and cultural evolution both on the Holocene timescale and for short-term extreme event periods. We successfully simulate the migration of people and the diffusion of Neolithic technology from the Near East into Europe in the period 12000-4000 a BP. We find good agreement with recent archeological compilations of Western Eurasian Neolithic sites. No causal relationship between climate events and cultural evolution could be identified, but the speed of cultural development is found to be modulated by the frequency of climate events. From the demographic evolution and regional resource consumption, we estimate regional land use change and prehistoric greenhouse gas emissions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, Elizabeth J.; Yu, Sungduk; Kooperman, Gabriel J.
The sensitivities of simulated mesoscale convective systems (MCSs) in the central U.S. to microphysics and grid configuration are evaluated here in a global climate model (GCM) that also permits global-scale feedbacks and variability. Since conventional GCMs do not simulate MCSs, studying their sensitivities in a global framework useful for climate change simulations has not previously been possible. To date, MCS sensitivity experiments have relied on controlled cloud resolving model (CRM) studies with limited domains, which avoid internal variability and neglect feedbacks between local convection and larger-scale dynamics. However, recent work with superparameterized (SP) GCMs has shown that eastward propagating MCS-like events are captured when embedded CRMs replace convective parameterizations. This study uses a SP version of the Community Atmosphere Model version 5 (SP-CAM5) to evaluate MCS sensitivities, applying an objective empirical orthogonal function algorithm to identify MCS-like events, and harmonizing composite storms to account for seasonal and spatial heterogeneity. A five-summer control simulation is used to assess the magnitude of internal and interannual variability relative to 10 sensitivity experiments with varied CRM parameters, including ice fall speed, one-moment and two-moment microphysics, and grid spacing. MCS sensitivities were found to be subtle with respect to internal variability, and indicate that ensembles of over 100 storms may be necessary to detect robust differences in SP-GCMs. Furthermore, these results emphasize that the properties of MCSs can vary widely across individual events, and improving their representation in global simulations with significant internal variability may require comparison to long (multidecadal) time series of observed events rather than single season field campaigns.
Event-triggered Kalman-consensus filter for two-target tracking sensor networks.
Su, Housheng; Li, Zhenghao; Ye, Yanyan
2017-11-01
This paper is concerned with the problem of event-triggered Kalman-consensus filtering for two-target tracking sensor networks. According to the event-triggered protocol and a mean-square analysis, a suboptimal Kalman gain matrix is derived and a suboptimal event-triggered distributed filter is obtained. Based on the Kalman-consensus filter protocol, each sensor, depending only on its neighbors' information, can track its corresponding target. Furthermore, utilizing the Lyapunov method and matrix theory, some sufficient conditions are presented for ensuring the stability of the system. Finally, a simulation example is presented to verify the effectiveness of the proposed event-triggered protocol.
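The event-triggered idea can be sketched as follows: the filter predicts at every step but incorporates a measurement only when the innovation exceeds a threshold, saving communication. The sketch below uses the standard Kalman gain rather than the paper's suboptimal gain, and a single scalar-measurement target; all matrices and the threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Constant-velocity target tracked by a Kalman filter that incorporates a
# measurement only when the innovation exceeds DELTA (the event-trigger).
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
H = np.array([[1.0, 0.0]])               # we measure position only
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[1.0]])                    # measurement noise covariance
DELTA = 1.5                              # event-trigger threshold

x_true = np.array([0.0, 1.0])
x_hat, P = np.zeros(2), np.eye(2)
sent = 0
for k in range(100):
    x_true = F @ x_true + rng.multivariate_normal([0, 0], Q)
    z = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]), 1)
    # The prediction step runs at every sample.
    x_hat, P = F @ x_hat, F @ P @ F.T + Q
    innovation = z - H @ x_hat
    if abs(innovation[0]) > DELTA:       # event: measurement is informative
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_hat = x_hat + K @ innovation
        P = (np.eye(2) - K @ H) @ P
        sent += 1
print(f"measurements used: {sent}/100")
```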
Event-based aquifer-to-atmosphere modeling over the European CORDEX domain
NASA Astrophysics Data System (ADS)
Keune, J.; Goergen, K.; Sulis, M.; Shrestha, P.; Springer, A.; Kusche, J.; Ohlwein, C.; Kollet, S. J.
2014-12-01
Despite the fact that recent studies focus on the impact of soil moisture on climate and especially land-energy feedbacks, groundwater dynamics are often neglected or conceptual groundwater flow models are used. In particular, in the context of climate change and the occurrence of droughts and floods, a better understanding and an improved simulation of the physical processes involving groundwater on continental scales is necessary. This requires the implementation of a physically consistent terrestrial modeling system, which explicitly incorporates groundwater dynamics and the connection with shallow soil moisture. Such a physics-based system enables simulations and monitoring of groundwater storage and enhanced representations of the terrestrial energy and hydrologic cycles over long time periods. On shorter timescales, the prediction of groundwater-related extremes, such as floods and droughts, is expected to improve, because of the improved simulation of components of the hydrological cycle. In this study, we present a fully coupled aquifer-to-atmosphere modeling system over the European CORDEX domain. The integrated Terrestrial Systems Modeling Platform, TerrSysMP, consisting of the three-dimensional subsurface model ParFlow, the Community Land Model CLM3.5 and the numerical weather prediction model COSMO of the German Weather Service, is used. The system is set up with a spatial resolution of 0.11° (12.5 km) and closes the terrestrial water and energy cycles from aquifers into the atmosphere. Here, simulations of the fully coupled system are performed over events, such as the 2013 flood in Central Europe and the 2003 European heat wave, and over extended time periods on the order of 10 years. State and flux variables of the terrestrial hydrologic and energy cycle are analyzed and compared to both in situ (e.g. stream and water level gauge networks, FLUXNET) and remotely sensed observations (e.g. GRACE, ESA CCI ECV soil moisture and SMOS). Additionally, the presented modeling system may be useful in the assessment of groundwater-related uncertainties in virtual reality and scenario simulations.
Ichikawa, Kazuhisa; Suzuki, Takashi; Murata, Noboru
2010-11-30
Molecular events in biological cells occur in local subregions, where the molecules tend to be small in number. The cytoskeleton, which is important for both the structural changes of cells and their functions, is also a countable entity because of its long fibrous shape. To simulate the local environment using a computer, stochastic simulations should be run. We herein report a new method of stochastic simulation based on random walk and reaction by the collision of all molecules. The microscopic reaction rate P(r) is calculated from the macroscopic rate constant k. The formula involves only local parameters embedded for each molecule. The results of the stochastic simulations of simple second-order, polymerization, Michaelis-Menten-type and other reactions agreed quite well with those of deterministic simulations when the number of molecules was sufficiently large. An analysis of the theory indicated a relationship between variance and the number of molecules in the system, and results of multiple stochastic simulation runs confirmed this relationship. We simulated Ca²⁺ dynamics in a cell by inward flow from a point on the cell surface and the polymerization of G-actin forming F-actin. Our results showed that this theory and method can be used to simulate spatially inhomogeneous events.
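A stripped-down version of the collision-based scheme is sketched below for A + B -> C on a periodic 2D lattice: molecules random-walk, and co-located A-B pairs react with a microscopic probability. The calibration of that probability against a macroscopic rate constant (the paper's P(r) from k) is not reproduced; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# A + B -> C on a 2D lattice: molecules random-walk and react on collision
# with probability P_REACT (illustrative, not calibrated to a real k).
L, N0, P_REACT, STEPS = 50, 200, 0.5, 2000
A = rng.integers(0, L, size=(N0, 2))
B = rng.integers(0, L, size=(N0, 2))

for step in range(STEPS):
    A = (A + rng.integers(-1, 2, A.shape)) % L     # random walk, periodic box
    B = (B + rng.integers(-1, 2, B.shape)) % L
    # Find A-B pairs sharing a site and let them react.
    sites_b = {tuple(p): i for i, p in enumerate(B)}
    gone_a, gone_b = [], set()
    for i, p in enumerate(map(tuple, A)):
        j = sites_b.get(p)
        if j is not None and j not in gone_b and rng.random() < P_REACT:
            gone_a.append(i)
            gone_b.add(j)
    A = np.delete(A, gone_a, axis=0)
    B = np.delete(B, sorted(gone_b), axis=0)

print(f"A remaining after {STEPS} steps: {len(A)} of {N0}")
```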
Extending the FairRoot framework to allow for simulation and reconstruction of free streaming data
NASA Astrophysics Data System (ADS)
Al-Turany, M.; Klein, D.; Manafov, A.; Rybalchenko, A.; Uhlig, F.
2014-06-01
The FairRoot framework is the standard framework for simulation, reconstruction and data analysis for the FAIR experiments. The framework is designed to optimise accessibility for beginners and developers, to be flexible and to cope with future developments. FairRoot enhances the synergy between the different physics experiments. As a first step toward the simulation of free streaming data, time-based simulation was introduced to the framework. The next step is event source simulation, which is achieved via a client-server system. After digitization, the so-called "samplers" can be started; each sampler reads the data of its corresponding detector from the simulation files and makes it available to the reconstruction clients. The system makes it possible to develop and validate the online reconstruction algorithms. In this work, the design and implementation of the new architecture and the communication layer are described.
TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1
NASA Technical Reports Server (NTRS)
Bellenot, S. F.
1994-01-01
The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization. The user does not have to add any more special logic to aid in synchronization, nor give any synchronization advice, nor even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. It is designed to run on the Sun3/Sun4 series computers and the BBN "Butterfly" GP-1000 computer. The standard distribution medium for this package is a .25 inch tape cartridge in TAR format. TWOS was developed in 1989 and updated in 1991. This program is a copyrighted work with all copyright vested in NASA. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
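The core rollback idea behind Time Warp can be sketched in a few lines: a logical process snapshots its state after each event and, when a straggler message arrives, restores the latest snapshot older than the straggler and re-executes the affected events. Anti-messages, GVT computation, and fossil collection are omitted; the state and message payloads below are illustrative, not TWOS internals.

```python
import copy

class LogicalProcess:
    """One Time Warp-style logical process with snapshot-based rollback."""

    def __init__(self):
        self.state = {"count": 0}
        self.log = []                        # [(ts, payload, state_after)]

    def execute(self, ts, payload):
        self.state["count"] += payload
        self.log.append((ts, payload, copy.deepcopy(self.state)))

    def handle(self, ts, payload):
        if self.log and ts < self.log[-1][0]:          # straggler arrived
            # Restore the last state saved before ts, then re-execute the
            # rolled-back events together with the new one.
            keep = [e for e in self.log if e[0] < ts]
            redo = [(t, p) for t, p, _ in self.log if t >= ts]
            self.state = copy.deepcopy(keep[-1][2]) if keep else {"count": 0}
            self.log = keep
            print(f"rollback: re-executing {len(redo) + 1} events from t={ts}")
            for t, p in sorted([(ts, payload)] + redo):
                self.execute(t, p)
        else:
            self.execute(ts, payload)

lp = LogicalProcess()
for ts, payload in [(1.0, 1), (3.0, 1), (5.0, 1), (2.0, 1)]:   # 2.0 is late
    lp.handle(ts, payload)
print(lp.state)                                                # count == 4
```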
A novel fiber-optical vibration defending system with on-line intelligent identification function
NASA Astrophysics Data System (ADS)
Wu, Huijuan; Xie, Xin; Li, Hanyu; Li, Xiaoyu; Wu, Yu; Gong, Yuan; Rao, Yunjiang
2013-09-01
The capacity of the sensor network is a persistent bottleneck for the novel FBG-based quasi-distributed fiber-optic defending system. In this paper, a highly sensitive sensing network with FBG vibration sensors is presented to relieve the pressure on capacity and system cost. However, higher sensitivity may cause higher Nuisance Alarm Rates (NARs) in practical use. It is therefore necessary to further classify the intrusion pattern or threat level and determine the validity of an unexpected event. An intelligent identification method is proposed that extracts statistical features of the vibration signals in the time domain and inputs them into a 3-layer Back-Propagation (BP) Artificial Neural Network to classify the events of interest. Experiments with both simulated and field-test signals are carried out to validate its effectiveness. The results show recognition rates of up to 100% for the simulated signals and as high as 96.03% in the real tests.
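A sketch of such a recognition pipeline is given below: simple time-domain statistics are computed per signal window and fed to a small neural network. scikit-learn's MLPClassifier stands in for the paper's 3-layer BP network, and the synthetic "intrusion" and "nuisance" signals are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

def features(x):
    """Simple time-domain statistics of one vibration window."""
    return [x.mean(), x.std(), np.abs(x).max(),
            np.mean(np.abs(np.diff(x))),                 # mean slope
            np.mean(np.sign(x[:-1]) != np.sign(x[1:]))]  # zero-cross rate

def window(kind, n=1024):
    t = np.arange(n) / 1024
    if kind:   # "intrusion": impulsive burst on noise
        burst = np.exp(-40 * t) * np.sin(2 * np.pi * 150 * t)
        return burst + 0.1 * rng.normal(size=n)
    return 0.3 * np.sin(2 * np.pi * 5 * t) + 0.1 * rng.normal(size=n)

X = np.array([features(window(k % 2)) for k in range(400)])
y = np.array([k % 2 for k in range(400)])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[:300], y[:300])
print("holdout accuracy:", clf.score(X[300:], y[300:]))
```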
Creating a Realistic Weather Environment for Motion-Based Piloted Flight Simulation
NASA Technical Reports Server (NTRS)
Daniels, Taumi S.; Schaffner, Philip R.; Evans, Emory T.; Neece, Robert T.; Young, Steve D.
2012-01-01
A flight simulation environment is being enhanced to facilitate experiments that evaluate research prototypes of advanced onboard weather radar, hazard/integrity monitoring (HIM), and integrated alerting and notification (IAN) concepts in adverse weather conditions. The simulation environment uses weather data based on real weather events to support operational scenarios in a terminal area. A simulated atmospheric environment was realized by using numerical weather data sets. These were produced from the High-Resolution Rapid Refresh (HRRR) model hosted and run by the National Oceanic and Atmospheric Administration (NOAA). To align with the planned flight simulation experiment requirements, several HRRR data sets were acquired courtesy of NOAA. These data sets coincided with severe weather events at the Memphis International Airport (MEM) in Memphis, TN. In addition, representative flight tracks for approaches and departures at MEM were generated and used to develop and test simulations of (1) what onboard sensors such as the weather radar would observe; (2) what datalinks of weather information would provide; and (3) what atmospheric conditions the aircraft would experience (e.g. turbulence, winds, and icing). The simulation includes a weather radar display that provides weather and turbulence modes, derived from the modeled weather along the flight track. The radar capabilities and the pilot's controls simulate current-generation commercial weather radar systems. Appropriate data-linked weather advisories (e.g., SIGMET) were derived from the HRRR weather models and provided to the pilot consistent with NextGen concepts of use for Aeronautical Information Service (AIS) and Meteorological (MET) data link products. The net result of this simulation development was the creation of an environment that supports investigations of new flight deck information systems, methods for incorporation of better weather information, and pilot interface and operational improvements for better aviation safety. This research is part of a larger effort at NASA to study the impact of the growing complexity of operations, information, and systems on crew decision-making and response effectiveness; and then to recommend methods for improving future designs.
Event management for large scale event-driven digital hardware spiking neural networks.
Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean
2013-09-01
The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and a SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms.
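In software, the same event-management role is played by a priority queue ordered by delivery time, as in the Python sketch below (logarithmic-time insert and extract, analogous to the paper's logarithmic logic scaling). The pipelined structured heap queue is the hardware realisation of this structure; the toy fan-out network is illustrative.

```python
import heapq

# A binary heap keyed by delivery time orders spike events for an
# event-driven network. The toy rule "every delivered spike triggers two
# delayed downstream spikes" is illustrative, not the paper's SNN.
N, DELAY = 65_536, 1.0
queue = [(0.0, 0)]                       # (delivery_time, neuron_id)
processed = 0

while queue and processed < 10_000:
    t, n = heapq.heappop(queue)          # always the earliest pending event
    processed += 1
    # Firing neuron n schedules spikes at two downstream neurons.
    heapq.heappush(queue, (t + DELAY, (n + 1) % N))
    heapq.heappush(queue, (t + DELAY, (n + 7919) % N))

print(f"processed {processed} events; {len(queue)} still queued")
```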
Mioni, Giovanna; Bertucci, Erica; Rosato, Antonella; Terrett, Gill; Rendell, Peter G; Zamuner, Massimo; Stablum, Franca
2017-06-01
Previous studies have shown that traumatic brain injury (TBI) patients have difficulties with prospective memory (PM). Considering that PM is closely linked to independent living, it is of primary interest to develop strategies that can improve PM performance in TBI patients. This study employed the Virtual Week task as a measure of PM, and we included future event simulation to boost PM performance. Study 1 evaluated the efficacy of the strategy and investigated possible practice effects. Twenty-four healthy participants performed Virtual Week in a no-strategy condition, and 24 healthy participants performed it in a mixed condition (no strategy - future event simulation). In Study 2, 18 TBI patients completed the mixed condition of Virtual Week and were compared with the 24 healthy controls who undertook the mixed condition of Virtual Week in Study 1. All participants also completed a neuropsychological evaluation to characterize the groups on level of cognitive functioning. Study 1 showed that participants in the future event simulation condition outperformed participants in the no-strategy condition, and these results were not attributable to practice effects. Results of Study 2 showed that TBI patients performed PM tasks less accurately than controls, confirming prospective memory impairment in these patients, but that future event simulation can substantially reduce TBI-related deficits in PM performance. The future event simulation strategy also improved the controls' PM performance. These studies showed the value of the future event simulation strategy in improving PM performance in healthy participants as well as in TBI patients.
NASA Technical Reports Server (NTRS)
Intriligator, Devrie S.; Detman, Thomas; Gloecker, George; Gloeckler, Christine; Dryer, Murray; Sun, Wei; Intriligator, James; Deehr, Charles
2012-01-01
We report the first comparisons of pickup proton simulation results with in situ measurements of pickup protons obtained by the SWICS instrument on Ulysses. Simulations were run using the three-dimensional (3D) time-dependent Hybrid Heliospheric Modeling System with Pickup Protons (HHMS-PI). HHMS-PI is an MHD solar wind model, expanded to include the basic physics of pickup protons from neutral hydrogen that drifts into the heliosphere from the local interstellar medium. We use the same model and input data developed by Detman et al. (2011) to now investigate the pickup protons. The simulated interval of 82 days in 2003-2004 includes both quiet solar wind (SW) and the October-November 2003 solar events (the Halloween 2003 solar storms). The HHMS-PI pickup proton simulations generally agree with the SWICS measurements and the HHMS-PI simulated solar wind generally agrees with SWOOPS (also on Ulysses) measurements. Many specific features in the observations are well represented by the model. We simulated twenty specific solar events associated with the Halloween 2003 storm. We give the specific values of the solar input parameters for the HHMS-PI simulations that provide the best combined agreement in the times of arrival of the solar-generated shocks at both ACE and Ulysses. We show graphical comparisons of simulated and observed parameters, and we give quantitative measures of the agreement of simulated with observed parameters. We suggest that some of the variations in the pickup proton density during the Halloween 2003 solar events may be attributed to depletion of the inflowing local interstellar medium (LISM) neutral hydrogen (H) caused by its increased conversion to pickup protons in the immediately preceding shock.
NASA Astrophysics Data System (ADS)
Rankin, Drew J.; Jiang, Jin
2011-04-01
Verification and validation (V&V) of safety control system quality and performance is required prior to installing control system hardware within nuclear power plants (NPPs). Thus, the objective of the hardware-in-the-loop (HIL) platform introduced in this paper is to verify the functionality of these safety control systems. The developed platform provides a flexible simulated testing environment which enables synchronized coupling between the real and simulated world. Within the platform, National Instruments (NI) data acquisition (DAQ) hardware provides an interface between a programmable electronic system under test (SUT) and a simulation computer. Further, NI LabVIEW resides on this remote DAQ workstation for signal conversion and routing between Ethernet and standard industrial signals as well as for user interface. The platform is applied to the testing of a simplified implementation of Canadian Deuterium Uranium (CANDU) shutdown system no. 1 (SDS1) which monitors only the steam generator level of the simulated NPP. CANDU NPP simulation is performed on a Darlington NPP desktop training simulator provided by Ontario Power Generation (OPG). Simplified SDS1 logic is implemented on an Invensys Tricon v9 programmable logic controller (PLC) to test the performance of both the safety controller and the implemented logic. Prior to HIL simulation, platform availability of over 95% is achieved for the configuration used during the V&V of the PLC. Comparison of HIL simulation results to benchmark simulations shows good operational performance of the PLC following a postulated initiating event (PIE).
UAV Swarm Operational Risk Assessment System
2015-09-01
The prototype combines a discrete-event simulation of UAV swarm attacks built in ExtendSim, statistical analysis of the simulation data using Minitab, and a graphical user interface; for practicality in developing the prototype, the interface was created using the MATLAB GUI language.
Rough Mill Simulations Reveal That Productivity When Processing Short Lumber Can Be High
Janice K. Wiedenbeck; Philip A. Araman
1995-01-01
Handling rates and costs associated with using short-length lumber (less than 8 ft. long) in furniture and cabinet industry rough mills have been assumed to be prohibitive. Discrete-event systems simulation models of both a crosscut-first and gang-rip-first rough mill were built to measure the effect of lumber length on equipment utilization and the volume and value of...
NASA Technical Reports Server (NTRS)
Burkhardt, Z.; Ramachandran, N.; Majumdar, A.
2017-01-01
Fluid transient analysis is important for the design of spacecraft propulsion systems to ensure structural stability of the system in the event of sudden closing or opening of a valve. The Generalized Fluid System Simulation Program (GFSSP), a general purpose flow network code developed at NASA/MSFC, is capable of simulating the pressure surge due to sudden opening or closing of a valve when thermodynamic properties of the real fluid are available for the entire range of simulation. Specifically, GFSSP needs an accurate representation of the pressure-density relationship in order to predict pressure surge during a fluid transient. Unfortunately, the available thermodynamic property programs such as REFPROP, GASP or GASPAK do not provide the thermodynamic properties of Monomethylhydrazine (MMH). This paper illustrates the process used to build a customized table of state-variable properties, derived from available property data and the speed of sound, that GFSSP requires for simulation. Good agreement was found between the simulations and measured data. This method can be adopted for modeling flow networks and systems with other fluids whose properties are not known in detail in order to obtain general technical insight. Rigorous code validation of this approach will be done and reported at a future date.
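The essence of the table-building step can be sketched from the thermodynamic relation dρ/dP ≈ 1/c² for a liquid: given a reference density and a speed of sound, a pressure-density table can be generated and then interpolated by the flow solver. The MMH-like numbers below are rough illustrative values, not the actual property table built for GFSSP.

```python
import numpy as np

# Sketch: build a pressure-density table for a liquid from a reference
# density and speed of sound via d(rho)/dP ~ 1/c^2. The numbers are
# rough, illustrative values, not the GFSSP property table.
RHO_REF = 874.0      # kg/m^3 at reference pressure (illustrative)
P_REF = 101_325.0    # Pa
C_SOUND = 1510.0     # m/s, assumed constant over the table range

pressures = np.linspace(P_REF, 5.0e6, 50)            # table grid, Pa
densities = RHO_REF + (pressures - P_REF) / C_SOUND**2

# A flow solver can then interpolate density during a surge transient:
def rho(p):
    return np.interp(p, pressures, densities)

print(f"rho at 3 MPa: {rho(3.0e6):.3f} kg/m^3")
```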
Analysis hierarchical model for discrete event systems
NASA Astrophysics Data System (ADS)
Ciortea, E. M.
2015-11-01
This paper presents a hierarchical discrete event model for robotic systems based on Petri nets. In the hierarchical approach, the Petri net is analysed both at the highest conceptual level and at the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed in this paper using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementation of the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, with each computer dedicated to the local Petri model of one subsystem of the global robotic system. Because Petri models can be executed on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets, making discrete event systems a pragmatic tool for modelling industrial systems. To capture auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. Simulation of the proposed robotic system using timed Petri nets offers the opportunity to view the timing of the robots; from transport and transmission times obtained by spot measurement, graphs showing the average time for transport activity are obtained for given parameter sets of finished products.
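A minimal Petri-net interpreter conveys the modelling idea: places hold tokens, and a transition fires when its input places are sufficiently marked, moving tokens to its outputs. The two-transition robot-cell net below is invented for illustration and is far simpler than the extended, timed nets used in the paper.

```python
# Minimal Petri net: places hold token counts; a transition fires when
# all its input places are marked, moving tokens to its output places.
marking = {"idle": 1, "working": 0, "parts": 3}
transitions = {
    "start_job": ({"idle": 1, "parts": 1}, {"working": 1}),
    "finish_job": ({"working": 1}, {"idle": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] = marking.get(p, 0) + n

# Run until no transition is enabled (all parts processed).
while any(enabled(t) for t in transitions):
    name = next(t for t in transitions if enabled(t))
    fire(name)
    print(f"fired {name:>10}: {marking}")
```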
Real-time monitoring of Lévy flights in a single quantum system
NASA Astrophysics Data System (ADS)
Issler, M.; Höller, J.; Imamoǧlu, A.
2016-02-01
Lévy flights are random walks where the dynamics is dominated by rare events. Even though they have been studied in vastly different physical systems, their observation in a single quantum system has remained elusive. Here we analyze a periodically driven open central spin system and demonstrate theoretically that the dynamics of the spin environment exhibits Lévy flights. For the particular realization in a single-electron charged quantum dot driven by periodic resonant laser pulses, we use Monte Carlo simulations to confirm that the long waiting times between successive nuclear spin-flip events are governed by a power-law distribution; the corresponding exponent η = -3/2 can be directly measured in real time by observing the waiting time distribution of successive photon emission events. Remarkably, the dominant intrinsic limitation of the scheme arising from nuclear quadrupole coupling can be minimized by adjusting the magnetic field or by implementing spin echo.
Use of the Hadoop structured storage tools for the ATLAS EventIndex event catalogue
NASA Astrophysics Data System (ADS)
Favareto, A.
2016-09-01
The ATLAS experiment at the LHC collects billions of events each data-taking year, and processes them to make them available for physics analysis in several different formats. An even larger number of events is additionally simulated according to physics and detector models and then reconstructed and analysed to be compared to real events. The EventIndex is a catalogue of all events in each production stage; it includes for each event a few identification parameters, some basic non-mutable information coming from the online system, and the references to the files that contain the event in each format (plus the internal pointers to the event within each file for quick retrieval). Each EventIndex record is logically simple but the system has to hold many tens of billions of records, all equally important. The Hadoop technology was selected at the start of the EventIndex project development in 2012 and proved to be robust and flexible enough to accommodate this kind of information; both the insertion and query response times are acceptable for the continuous and automatic operation that started in Spring 2015. This paper describes the EventIndex data input and organisation in Hadoop and explains the operational challenges that were overcome in order to achieve the expected performance.
NASA Astrophysics Data System (ADS)
Shaffer, Gary; Fernández Villanueva, Esteban; Rondanelli, Roberto; Olaf Pepke Pedersen, Jens; Malskær Olsen, Steffen; Huber, Matthew
2017-11-01
Geological records reveal a number of ancient, large and rapid negative excursions of the carbon-13 isotope. Such excursions can only be explained by massive injections of depleted carbon to the Earth system over a short duration. These injections may have forced strong global warming events, sometimes accompanied by mass extinctions such as the Triassic-Jurassic and end-Permian extinctions 201 and 252 million years ago, respectively. In many cases, evidence points to methane as the dominant form of injected carbon, whether as thermogenic methane formed by magma intrusions through overlying carbon-rich sediment or from warming-induced dissociation of methane hydrate, a solid compound of methane and water found in ocean sediments. As a consequence of the ubiquity and importance of methane in major Earth events, Earth system models for addressing such events should include a comprehensive treatment of methane cycling but such a treatment has often been lacking. Here we implement methane cycling in the Danish Center for Earth System Science (DCESS) model, a simplified but well-tested Earth system model of intermediate complexity. We use a generic methane input function that allows variation in input type, size, timescale and ocean-atmosphere partition. To be able to treat such massive inputs more correctly, we extend the model to deal with ocean suboxic/anoxic conditions and with radiative forcing and methane lifetimes appropriate for high atmospheric methane concentrations. With this new model version, we carried out an extensive set of simulations for methane inputs of various sizes, timescales and ocean-atmosphere partitions to probe model behavior. We find that larger methane inputs over shorter timescales with more methane dissolving in the ocean lead to ever-increasing ocean anoxia with consequences for ocean life and global carbon cycling. Greater methane input directly to the atmosphere leads to more warming and, for example, greater carbon dioxide release from land soils. Analysis of synthetic sediment cores from the simulations provides guidelines for the interpretation of real sediment cores spanning the warming events. With this improved DCESS model version and paleo-reconstructions, we are now better armed to gauge the amounts, types, timescales and locations of methane injections driving specific, observed deep-time, global warming events.
NASA Astrophysics Data System (ADS)
Jia, Chaoqing; Hu, Jun; Chen, Dongyan; Liu, Yurong; Alsaadi, Fuad E.
2018-07-01
In this paper, we discuss the event-triggered resilient filtering problem for a class of time-varying systems subject to stochastic uncertainties and successive packet dropouts. The event-triggered mechanism is employed in the hope of reducing the communication burden and saving network resources. The stochastic uncertainties are considered to describe the modelling errors, and the phenomenon of successive packet dropouts is characterized by a random variable obeying the Bernoulli distribution. The aim of the paper is to provide a resilient event-based filtering approach for the addressed time-varying systems such that, for all stochastic uncertainties, successive packet dropouts and filter gain perturbations, an optimized upper bound of the filtering error covariance is obtained by designing the filter gain. Finally, simulations are provided to demonstrate the effectiveness of the proposed robust optimal filtering strategy.
A software bus for thread objects
NASA Technical Reports Server (NTRS)
Callahan, John R.; Li, Dehuai
1995-01-01
The authors have implemented a software bus for lightweight threads in an object-oriented programming environment that allows for rapid reconfiguration and reuse of thread objects in discrete-event simulation experiments. While previous research in object-oriented, parallel programming environments has focused on direct communication between threads, our lightweight software bus, called the MiniBus, provides a means to isolate threads from their contexts of execution by restricting communications between threads to message-passing via their local ports only. The software bus maintains a topology of connections between these ports. It routes, queues, and delivers messages according to this topology. This approach allows for rapid reconfiguration and reuse of thread objects in other systems without making changes to the specifications or source code. A layered approach that provides the needed transparency to developers is presented. Examples of using the MiniBus are given, and the value of bus architectures in building and conducting simulations of discrete-event systems is discussed.
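The port-and-topology idea can be sketched with queues: threads interact only with named local ports, while the bus owns the connection topology and routes messages, so rewiring requires no changes to thread code. Class and port names here are invented for illustration; this is not the MiniBus API.

```python
import queue
import threading

class Bus:
    """Toy software bus: routes messages between named ports."""

    def __init__(self):
        self.ports = {}          # port name -> Queue
        self.topology = {}       # source port -> list of destination ports

    def port(self, name):
        return self.ports.setdefault(name, queue.Queue())

    def connect(self, src, dst):
        self.topology.setdefault(src, []).append(dst)

    def send(self, src, msg):
        for dst in self.topology.get(src, []):
            self.ports[dst].put(msg)

bus = Bus()
bus.port("gen.out"); bus.port("sink.in")
bus.connect("gen.out", "sink.in")      # reconfigurable without code changes

def generator():
    for i in range(3):
        bus.send("gen.out", f"event {i}")
    bus.send("gen.out", None)          # sentinel: end of stream

def sink():
    while (msg := bus.ports["sink.in"].get()) is not None:
        print("sink received:", msg)

threads = [threading.Thread(target=f) for f in (generator, sink)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```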
NASA Astrophysics Data System (ADS)
Yang, Jian; Sun, Shuaishuai; Tian, Tongfei; Li, Weihua; Du, Haiping; Alici, Gursel; Nakano, Masami
2016-03-01
Protecting civil engineering structures from uncontrollable events such as earthquakes while maintaining their structural integrity and serviceability is very important; this paper describes the performance of a stiffness softening magnetorheological elastomer (MRE) isolator in a scaled three storey building. In order to construct a closed-loop system, a scaled three storey building was designed and built according to the scaling laws, and then four MRE isolator prototypes were fabricated and utilised to isolate the building from the motion induced by a scaled El Centro earthquake. Fuzzy logic was used to output the current signals to the isolators, based on the real-time responses of the building floors, and then a simulation was used to evaluate the feasibility of this closed loop control system before carrying out an experimental test. The simulation and experimental results showed that the stiffness softening MRE isolator controlled by fuzzy logic could suppress structural vibration well.
ORION Environmental Control and Life Support Systems Suit Loop and Pressure Control Analysis
NASA Technical Reports Server (NTRS)
Eckhardt, Brad; Conger, Bruce; Stambaugh, Imelda C.
2015-01-01
Under NASA's ORION Multi-Purpose Crew Vehicle (MPCV) Environmental Control and Life Support System (ECLSS) Project at Johnson Space Center (JSC), the Crew and Thermal Systems Division has developed performance models of the air system using Thermal Desktop/FloCAD. The Thermal Desktop model includes an Air Revitalization System (ARS Loop), a Suit Loop, a Cabin Loop, and a Pressure Control System (PCS) for supplying make-up gas (N2 and O2) to the Cabin and Suit Loop. The ARS and PCS are designed to maintain air quality at acceptable O2, CO2 and humidity levels as well as internal pressures in the vehicle Cabin and during suited operations. This effort required development of a suite of Thermal Desktop Orion ECLSS models to address the need for various simulation capabilities regarding ECLSS performance. An initial highly detailed model of the ARS Loop was developed in order to simulate rapid pressure transients (water hammer effects) within the ARS Loop caused by events such as cycling of the Pressurized Swing Adsorption (PSA) Beds, and required high temporal resolution (small time steps) in the model during simulation. A second ECLSS model was developed to simulate events which occur over longer periods of time (over 30 minutes) where O2, CO2 and humidity levels, as well as internal pressures, needed to be monitored in the cabin and for suited operations. Stand-alone models of the PCS and the Negative Pressure Relief Valve (NPRV) were developed to study thermal effects within the PCS during emergency scenarios (Cabin Leak) and cabin pressurization during vehicle re-entry into Earth's atmosphere. Results from the Orion ECLSS models were used during the Orion Delta-PDR (July, 2014) to address Key Design Requirements (KDRs) for Suit Loop operations for multiple mission scenarios.
Evaluation of Probable Maximum Precipitation and Flood under Climate Change in the 21st Century
NASA Astrophysics Data System (ADS)
Gangrade, S.; Kao, S. C.; Rastogi, D.; Ashfaq, M.; Naz, B. S.; Kabela, E.; Anantharaj, V. G.; Singh, N.; Preston, B. L.; Mei, R.
2016-12-01
Critical infrastructures are potentially vulnerable to extreme hydro-climatic events. Under a warming environment, the magnitude and frequency of extreme precipitation and floods are likely to increase, enhancing the need to more accurately quantify the risks due to climate change. In this study, we utilized an integrated modeling framework that includes the Weather Research and Forecasting (WRF) model and a high-resolution distributed hydrology soil vegetation model (DHSVM) to simulate probable maximum precipitation (PMP) and flood (PMF) events over the Alabama-Coosa-Tallapoosa River Basin. A total of 120 storms were selected to simulate moisture-maximized PMP under different meteorological forcings, including historical storms driven by the Climate Forecast System Reanalysis (CFSR) and baseline (1981-2010), near-term future (2021-2050) and long-term future (2071-2100) storms driven by the Community Climate System Model version 4 (CCSM4) under the Representative Concentration Pathway 8.5 emission scenario. We also analyzed the sensitivity of PMF to various antecedent hydrologic conditions such as initial soil moisture conditions and tested different compulsive approaches. Overall, a statistically significant increase is projected for future PMP and PMF, mainly attributed to the increase of background air temperature. The ensemble of simulated PMP and PMF along with their sensitivity allows us to better quantify the potential risks associated with hydro-climatic extreme events on critical energy-water infrastructures such as major hydropower dams and nuclear power plants.
Executable Architecture Modeling and Simulation Based on fUML
2014-06-01
SoS behaviors. Wang et al. [9] use SysML sequence diagrams to model the behaviors and translate the models into Colored Petri Nets (CPN). Staines T.S...; Wang Renzhong and Dagli C. H., "An executable system architecture approach to discrete event system modeling using SysML in conjunction with colored Petri nets."
An efficient hybrid method for stochastic reaction-diffusion biochemical systems with delay
NASA Astrophysics Data System (ADS)
Sayyidmousavi, Alireza; Ilie, Silvana
2017-12-01
Many chemical reactions, such as gene transcription and translation in living cells, need a certain time to finish once they are initiated. Simulating stochastic models of reaction-diffusion systems with delay can be computationally expensive. In the present paper, a novel hybrid algorithm is proposed to accelerate the stochastic simulation of delayed reaction-diffusion systems. The delayed reactions may be of consuming or non-consuming delay type. The algorithm is designed for moderately stiff systems in which the events can be partitioned into slow and fast subsets according to their propensities. The proposed algorithm is applied to three benchmark problems and the results are compared with those of the delayed Inhomogeneous Stochastic Simulation Algorithm. The numerical results show that the new hybrid algorithm achieves considerable speed-up in the run time and very good accuracy.
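The bookkeeping for consuming delayed reactions can be sketched with a heap of pending completion times alongside an otherwise standard Gillespie loop, as below. The slow/fast partitioning that gives the paper's hybrid method its speed-up is omitted, and the gene-expression rates and delay are illustrative.

```python
import heapq
import math
import random

random.seed(5)

# Delay-aware Gillespie loop: transcription initiates immediately but the
# mRNA product appears only after DELAY (a consuming delayed reaction).
K_INIT, K_DEG, DELAY = 0.5, 0.05, 10.0     # illustrative rates
gene, mrna, t = 1, 0, 0.0
pending = []                               # completion times of delayed events

while t < 500.0:
    a_init, a_deg = K_INIT * gene, K_DEG * mrna
    a_total = a_init + a_deg
    tau = math.inf if a_total == 0 else random.expovariate(a_total)
    # If a delayed completion comes before the next reaction, apply it
    # first; re-drawing tau afterwards is valid by memorylessness.
    if pending and pending[0] <= t + tau:
        t = heapq.heappop(pending)
        mrna += 1                          # delayed product finally appears
        continue
    t += tau
    if random.random() < a_init / a_total:
        heapq.heappush(pending, t + DELAY)   # initiate; product is delayed
    else:
        mrna -= 1                            # degradation
print(f"mRNA copies at t=500: {mrna}")
```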
NASA Technical Reports Server (NTRS)
Huffaker, R. Milton; Targ, Russell
1988-01-01
Detailed computer simulations of the lidar wind-measuring process have been conducted to evaluate the use of pulsed coherent lidar for airborne windshear monitoring. NASA data fields for an actual microburst event were used in the simulation. Both CO2 and Ho:YAG laser lidar systems performed well in the microburst test case, and were able to measure wind shear in the severe weather of this wet microburst to ranges in excess of 1.4 km. The consequent warning time gained was about 15 sec.
A Queue Simulation Tool for a High Performance Scientific Computing Center
NASA Technical Reports Server (NTRS)
Spear, Carrie; McGalliard, James
2007-01-01
The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
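A toy version of such a model is sketched below: jobs arrive with random CPU requests and runtimes, and a first-come-first-served scheduler starts the head job whenever enough processors are free. The workload parameters are invented, not NCCS data, and real schedulers add priorities and backfill.

```python
import heapq
import random

random.seed(11)

# Toy discrete event model of a parallel batch queue with a strict FCFS
# scheduler. All workload parameters are illustrative.
TOTAL_CPUS, N_JOBS = 512, 200
t, events = 0.0, []
for _ in range(N_JOBS):
    t += random.expovariate(1 / 5.0)                  # arrival gap, hours
    cpus = random.choice([16, 64, 128, 256])
    runtime = random.uniform(1.0, 48.0)
    heapq.heappush(events, (t, "arrive", cpus, runtime))

free, waiting, total_wait = TOTAL_CPUS, [], 0.0
while events:
    now, kind, cpus, runtime = heapq.heappop(events)
    if kind == "arrive":
        waiting.append((now, cpus, runtime))
    else:                                             # a job finished
        free += cpus
    while waiting and waiting[0][1] <= free:          # start head job(s)
        t0, c, r = waiting.pop(0)
        free -= c
        total_wait += now - t0
        heapq.heappush(events, (now + r, "finish", c, r))

print(f"mean queue wait: {total_wait / N_JOBS:.2f} hours")
```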
Puig, Vannia A; Szpunar, Karl K
2017-08-01
Over the past decade, psychologists have devoted considerable attention to episodic simulation, the ability to imagine specific hypothetical events. Perhaps one of the most consistent patterns of data to emerge from this literature is that positive simulations of the future are rated as more detailed than negative simulations of the future, a pattern of results that is commonly interpreted as evidence for a positivity bias in future thinking. In the present article, we demonstrate across two experiments that negative future events are consistently simulated in more detail than positive future events when frequency of prior thinking is taken into account as a possible confounding variable and when the level of detail associated with simulated events is assessed using an objective scoring criterion. Our findings are interpreted in the context of the mobilization-minimization hypothesis of event cognition, which suggests that people are especially likely to devote cognitive resources to processing negative scenarios.
Perceptual evaluation of visual alerts in surveillance videos
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Topkara, Mercan; Pfeiffer, William; Hampapur, Arun
2015-03-01
Visual alerts are commonly used in video monitoring and surveillance systems to mark events, presumably making them more salient to human observers. Surprisingly, the effectiveness of computer-generated alerts in improving human performance has not been widely studied. To address this gap, we have developed a tool for simulating different alert parameters in a realistic visual monitoring situation, and have measured human detection performance under conditions that emulated different set-points in a surveillance algorithm. In the High-Sensitivity condition, the simulated alerts identified 100% of the events with many false alarms. In the Lower-Sensitivity condition, the simulated alerts correctly identified 70% of the targets, with fewer false alarms. In the control condition, no simulated alerts were provided. To explore the effects of learning, subjects performed these tasks in three sessions, on separate days, in a counterbalanced, within-subject design. We explore these results within the context of cognitive models of human attention and learning. We found that human observers were more likely to respond to events when marked by a visual alert. Learning played a major role in the two alert conditions. In the first session, observers generated almost twice as many False Alarms as in the No-Alert condition, as the observers responded pre-attentively to the computer-generated false alarms. However, this rate dropped equally dramatically in later sessions, as observers learned to discount the false cues. Highest observer Precision, Hits/(Hits + False Alarms), was achieved in the High-Sensitivity condition, but only after training. The successful evaluation of surveillance systems depends on understanding human attention and performance.
NASA Astrophysics Data System (ADS)
Alessandrini, Cinzia; Del Longo, Mauro; Pecora, Silvano; Puma, Francesco; Vezzani, Claudia
2013-04-01
In spite of a historical abundance of water, due to rainfall and to the huge storage capacity provided by alpine lakes, the Po river basin, the most important Italian water district, experienced five drought/water-scarcity events in the past ten years: in the summers of 2003, 2006, 2007 and 2012 and in the 2011-2012 winter season. The basic approach to these crises was observation and post-event evaluation; from 2007, an advanced numerical modelling system, called the Drought Early Warning System for the Po River (DEWS-Po), was developed, providing advanced tools to simulate the hydrological and anthropic processes that affect river flows and allowing events to be followed with real-time evaluations. In early 2012 the same system also enabled forecasts. The DEWS-Po system gives a real-time representation of water distribution across the basin, which is characterized by high anthropogenic pressure, and optimizes water allocation among competing uses with specific tools. The system represents an innovative approach to drought forecasting and water resource management in the Po basin, feeding deterministic and probabilistic meteorological forecasts into a chain of distributed numerical hydrological and hydraulic simulations. The system architecture is designed to receive as input observed and forecasted hydro-meteorological variables: deterministic meteorological forecasts with a fifteen-day lead time, withdrawal data for different uses, and storage and release data for natural and artificial reservoirs. The model detail is very fine, simulating also the interaction between the Adriatic Sea and the Po river in the delta area in terms of salt-intrusion forecasting. Calculation of return periods through the run method and of stochastic drought indicators is included to assess the characteristics of on-going and forecasted events. An inter-institutional Technical Board, constituted within the Po River Basin Authority since 2008, meets regularly during water crises to take decisions regarding water management in order to prevent major impacts. The Board is made up of experts from public administrations, with strong involvement of stakeholders representing different uses. The DEWS-Po was used intensively by the Technical Board as a decision support system during the summer 2012 event, providing tools to understand the on-going situation of water availability and use across the basin and helping to evaluate water management choices objectively, through what-if scenarios considering withdrawal reductions and increased releases from the regulated Alpine lakes. A description of the use of the DEWS-Po system within the Technical Board is given, focusing especially on those elements, candidate "good management indicators", which proved most useful in ensuring the success of the governance action. Strengths of the system and needs for improvement are then described.
Collapse of Experimental Colloidal Aging using Record Dynamics
NASA Astrophysics Data System (ADS)
Robe, Dominic; Boettcher, Stefan; Sibani, Paolo; Yunker, Peter
The theoretical framework of record dynamics (RD) posits that aging behavior in jammed systems is controlled by short, rare events involving the activation of only a few degrees of freedom. RD predicts that dynamics in an aging system progress with the logarithm of t/tw. This prediction has been verified through new analysis of experimental data on an aging 2D colloidal system: MSD and persistence curves spanning three orders of magnitude in waiting time are collapsed. These predictions have also been found consistent with a number of experiments and simulations, but verification of the specific assumptions that RD makes about the underlying statistics of these rare events has been elusive. Here the observation of individual particles allows for the first time the direct verification of the assumptions about event rates and sizes. This work is supported by NSF Grant DMR-1207431.
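A compact way to state the scaling used for the collapse (a sketch only; F and G denote unspecified scaling functions for the mean-squared displacement and the persistence, which are not named in the abstract):

```latex
% Record-dynamics scaling: observables at time t after the quench,
% for waiting time t_w, depend only on log(t/t_w).
\[
  \mathrm{MSD}(t, t_w) \simeq F\!\left(\ln \tfrac{t}{t_w}\right), \qquad
  P(t, t_w) \simeq G\!\left(\ln \tfrac{t}{t_w}\right)
\]
```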
Evolution of cooperative behavior in simulation agents
NASA Astrophysics Data System (ADS)
Stroud, Phillip D.
1998-03-01
A simulated automobile factory paint shop is used as a testbed for exploring the emulation of human decision-making behavior. A discrete-event simulation of the paint shop as a collection of interacting Java actors is described. An evolutionary cognitive architecture is under development for building software actors to emulate humans in simulations of human-dominated complex systems. In this paper, the cognitive architecture is extended by implementing a persistent population of trial behaviors with an incremental fitness valuation update strategy, and by allowing a group of cognitive actors to share information. A proof-of-principle demonstration is presented.
Heidari, Zahra; Roe, Daniel R; Galindo-Murillo, Rodrigo; Ghasemi, Jahan B; Cheatham, Thomas E
2016-07-25
Long time scale molecular dynamics (MD) simulations of biological systems are becoming increasingly commonplace due to the availability of both large-scale computational resources and significant advances in the underlying simulation methodologies. Therefore, it is useful to investigate and develop data mining and analysis techniques to quickly and efficiently extract the biologically relevant information from the incredible amount of generated data. Wavelet analysis (WA) is a technique that can quickly reveal significant motions during an MD simulation. Here, the application of WA on well-converged long time scale (tens of μs) simulations of a DNA helix is described. We show how WA combined with a simple clustering method can be used to identify both the physical and temporal locations of events with significant motion in MD trajectories. We also show that WA can not only distinguish and quantify the locations and time scales of significant motions, but by changing the maximum time scale of WA a more complete characterization of these motions can be obtained. This allows motions of different time scales to be identified or ignored as desired.
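As a rough illustration of the approach, the sketch below (assuming the PyWavelets package; the synthetic signal, wavelet choice, and scales are illustrative, not those of the study) uses a continuous wavelet transform to localize a brief high-frequency "event" in a time series in both time and scale:

```python
import numpy as np
import pywt

# Synthetic trace with a short burst of fast motion embedded in it
t = np.linspace(0, 10, 2000)
signal = np.sin(2 * np.pi * 1.0 * t)
signal[1000:1100] += np.sin(2 * np.pi * 8.0 * t[1000:1100])  # brief "event"

# Continuous wavelet transform over a range of scales (Morlet wavelet)
scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(signal, scales, "morl")
power = np.abs(coeffs) ** 2

# Large wavelet power localizes the event in both time and scale
scale_idx, time_idx = np.unravel_index(np.argmax(power), power.shape)
print("event near t =", t[time_idx], "at scale", scales[scale_idx])
```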
The distributed production system of the SuperB project: description and results
NASA Astrophysics Data System (ADS)
Brown, D.; Corvo, M.; Di Simone, A.; Fella, A.; Luppi, E.; Paoloni, E.; Stroili, R.; Tomassetti, L.
2011-12-01
The SuperB experiment needs large samples of MonteCarlo simulated events in order to finalize the detector design and to estimate the data analysis performances. The requirements are beyond the capabilities of a single computing farm, so a distributed production model capable of exploiting the existing HEP worldwide distributed computing infrastructure is needed. In this paper we describe the set of tools that have been developed to manage the production of the required simulated events. The production of events follows three main phases: distribution of input data files to the remote site Storage Elements (SE); job submission, via SuperB GANGA interface, to all available remote sites; output files transfer to CNAF repository. The job workflow includes procedures for consistency checking, monitoring, data handling and bookkeeping. A replication mechanism allows storing the job output on the local site SE. Results from 2010 official productions are reported.
Physics, chemistry and pulmonary sequelae of thermodegradation events in long-mission space flight
NASA Technical Reports Server (NTRS)
Todd, Paul; Sklar, Michael; Ramirez, W. Fred; Smith, Gerald J.; Morgenthaler, George W.; Oberdoerster, Guenter
1993-01-01
An event in which electronic insulation consisting of polytetrafluoroethylene undergoes thermodegradation on the Space Station Freedom is considered experimentally and theoretically, from the initial chemistry and convective transport through pulmonary deposition in humans. The low-gravity environment impacts various stages of event simulation. Vapor-phase and particulate thermodegradation products were considered as potential spacecraft contaminants. A potential pathway for the production of ultrafine particles was identified. Different approaches to the simulation and prediction of contaminant transport were studied and used to predict the distribution of generic vapor-phase products in a Space Station model. A lung transport model was used to assess the pulmonary distribution of inhaled particles, and, finally, the impact of adaptation to low gravity on the human response to this inhalation risk was explored on the basis of known physiological modifications of the immune, endocrine, musculoskeletal and pulmonary systems that accompany space flight.
Nymmik, R A
1999-10-01
A wide range of galactic cosmic ray and SEP event flux simulation problems for near-Earth satellite and manned spacecraft orbits and for interplanetary mission trajectories is discussed. The models of the galactic cosmic ray and SEP events in the Earth orbit beyond the Earth's magnetosphere are used as a basis. The particle fluxes in near-Earth orbits should be calculated using transmission functions. To calculate the functions, the dependences of the cutoff rigidities on the magnetic disturbance level and on magnetic local time have to be known. In the case of space flights towards the Sun and to the boundary of the solar system, particular attention is paid to the changes in SEP event occurrence frequency and size. The particle flux gradients are applied in this case to galactic cosmic ray fluxes.
Zhang, Zhi-Hui; Yang, Guang-Hong
2017-05-01
This paper provides a novel event-triggered fault detection (FD) scheme for discrete-time linear systems. First, an event-triggered interval observer is proposed to generate the upper and lower residuals by taking into account the influence of the disturbances and the event error. Second, the robustness of the residual interval against the disturbances and the fault sensitivity are improved by introducing l1 and H∞ performances. Third, dilated linear matrix inequalities are used to decouple the Lyapunov matrices from the system matrices. The nonnegative conditions for the estimation error variables are presented with the aid of the slack matrix variables. This technique allows considering a more general Lyapunov function. Furthermore, the FD decision scheme is proposed by monitoring whether the zero value belongs to the residual interval. It is shown that the information communication burden is reduced by designing the event-triggering mechanism, while the FD performance can still be guaranteed. Finally, simulation results demonstrate the effectiveness of the proposed method.
C(α) torsion angles as a flexible criterion to extract secrets from a molecular dynamics simulation.
Victor Paul Raj, Fredrick Robin Devadoss; Exner, Thomas E
2014-04-01
Given the increasing complexity of simulated molecular systems, and the fact that simulation times have now reached milliseconds to seconds, immense amounts of data (in the gigabyte to terabyte range) are produced in current molecular dynamics simulations. Manual analysis of these data is a very time-consuming task, and important events that lead from one intermediate structure to another can become occluded in the noise resulting from random thermal fluctuations. To overcome these problems and facilitate a semi-automated data analysis, we introduce in this work a measure based on C(α) torsion angles: torsion angles formed by four consecutive C(α) atoms. This measure describes changes in the backbones of large systems on a residual length scale (i.e., a small number of residues at a time). Cluster analysis of individual C(α) torsion angles and its fuzzification led to continuous time patches representing (meta)stable conformations and to the identification of events acting as transitions between these conformations. The importance of a change in torsion angle to structural integrity is assessed by comparing this change to the average fluctuations in the same torsion angle over the complete simulation. Using this novel measure in combination with other measures such as the root mean square deviation (RMSD) and time series of distance measures, we performed an in-depth analysis of a simulation of the open form of DNA polymerase I. The times at which major conformational changes occur and the most important parts of the molecule and their interrelations were pinpointed in this analysis. The simultaneous determination of the time points and localizations of major events is a significant advantage of the new bottom-up approach presented here, as compared to many other (top-down) approaches in which only the similarity of the complete structure is analyzed.
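For concreteness, the Cα torsion measure reduces to the standard four-point dihedral computation. Here is a minimal sketch (the coordinates are illustrative, and this is not the authors' code):

```python
import numpy as np

# Torsion (dihedral) angle defined by four consecutive C-alpha atoms,
# using the standard atan2 formulation.
def ca_torsion(p0, p1, p2, p3):
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # projections of b0 and b2 onto the plane perpendicular to b1
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

# Four illustrative C-alpha positions (angstroms)
ca = np.array([[0.0, 0.0, 0.0], [1.5, 1.5, 0.0],
               [3.0, 0.0, 0.0], [4.5, 1.5, 1.0]])
print(ca_torsion(*ca))  # one C-alpha torsion angle, in degrees
```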
NASA Astrophysics Data System (ADS)
Chern, J.; Tao, W.; Shen, B.
2011-12-01
The Madden-Julian oscillation (MJO) is the dominant component of intraseasonal variability in the tropics. It interacts with and influences a wide range of weather and climate phenomena across different temporal and spatial scales. Despite the important role the MJO plays in the weather and climate system, past multi-model MJO intercomparison studies have shown that current global general circulation models (GCMs) still have considerable shortcomings in representing and forecasting this phenomenon. To improve the representation of the MJO and tropical convective cloud systems in global models, a Multiscale Modeling Framework (MMF), in which a cloud-resolving model takes the place of the single-column cumulus parameterization used in conventional GCMs, has been successfully developed at NASA Goddard (Tao et al. 2009). To evaluate and improve the ability of this modeling system to represent and predict the MJO, several numerical hindcast experiments of selected MJO events during YOTC have been carried out. The ability of the model to simulate the MJO events is examined using diagnostic and skill metrics developed by the CLIVAR MJO Working Group Project, as well as comparisons with high-resolution global mesoscale model simulations, satellite observations, and analysis datasets. Several key variables associated with the MJO are investigated, including precipitation, outgoing longwave radiation, large-scale circulation, surface latent heat flux, low-level moisture convergence, the vertical structure of moisture and hydrometeors, and vertical diabatic heating profiles, to gain insight into the cloud processes associated with the MJO events.
NASA Astrophysics Data System (ADS)
Santillan, J. R.; Amora, A. M.; Makinano-Santillan, M.; Marqueso, J. T.; Cutamora, L. C.; Serviano, J. L.; Makinano, R. M.
2016-06-01
In this paper, we present a combined geospatial and two-dimensional (2D) flood modeling approach to assess the impacts of flooding due to extreme rainfall events. We developed and implemented this approach for the Tago River Basin in the province of Surigao del Sur in Mindanao, Philippines, an area which suffered great damage due to flooding caused by Tropical Storms Lingling and Jangmi in 2014. The geospatial component of the approach involves extraction of several layers of information, such as detailed topography/terrain and man-made features (buildings, roads, bridges), from 1-m spatial resolution LiDAR Digital Surface and Terrain Models (DSMs/DTMs), and recent land cover from Landsat 7 ETM+ and Landsat 8 OLI images. We then used these layers as inputs in developing a Hydrologic Engineering Center Hydrologic Modeling System (HEC-HMS)-based hydrologic model, and a hydraulic model based on the 2D module of the latest version of the HEC River Analysis System (HEC-RAS), to dynamically simulate and map the depth and extent of flooding due to extreme rainfall events. The extreme rainfall events used in the simulation represent six hypothetical rainfall events with return periods of 2, 5, 10, 25, 50, and 100 years. For each event, maximum flood depth maps were generated from the simulations, and these maps were further transformed into hazard maps by categorizing the flood depth into low, medium and high hazard levels (see the sketch below). Using both the flood hazard maps and the layers of information extracted from remotely sensed datasets in spatial overlay analysis, we were then able to estimate and assess the impacts of these flooding events on buildings, roads, bridges and land cover. Results of the assessments revealed increases in the number of buildings, roads and bridges, and in the area of land cover, exposed to various flood hazards as rainfall events become more extreme. The wealth of information generated from the flood impact assessment using this approach can be very useful to the local government units and the concerned communities within the Tago River Basin as an aid in determining in advance all those infrastructures (buildings, roads and bridges) and land cover that can be affected by different extreme rainfall event flood scenarios.
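A minimal sketch of the depth-to-hazard categorization step referenced above (the depth thresholds are illustrative assumptions, not the values used in the study):

```python
# Map a simulated maximum flood depth (meters) to a hazard class.
def hazard_class(depth_m):
    if depth_m <= 0.0:
        return "none"
    if depth_m < 0.5:
        return "low"
    if depth_m < 1.5:
        return "medium"
    return "high"

for d in [0.2, 0.9, 2.4]:
    print(d, "m ->", hazard_class(d))
```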
Symbolic discrete event system specification
NASA Technical Reports Server (NTRS)
Zeigler, Bernard P.; Chi, Sungdo
1992-01-01
Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given with concentration on fault model analysis.
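As a rough sketch of the consistency check described above (assuming SciPy; the constraint set is illustrative), the feasibility of accumulated linear constraints over symbolic event times can be tested with a zero-objective linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Symbolic event times t1, t2 are linear polynomials in unknowns x;
# accumulated constraints A x <= b are tested for consistency by asking
# whether any feasible point exists (zero objective => pure feasibility).
A = np.array([[1.0, -1.0],    # t1 - t2 <= 0   (event 1 not after event 2)
              [-1.0, 0.0]])   # -t1 <= -2      (t1 >= 2)
b = np.array([0.0, -2.0])

res = linprog(c=np.zeros(2), A_ub=A, b_ub=b,
              bounds=[(None, None)] * 2, method="highs")
print("constraints consistent:", res.status == 0)
```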
Repetition-Related Reductions in Neural Activity during Emotional Simulations of Future Events.
Szpunar, Karl K; Jing, Helen G; Benoit, Roland G; Schacter, Daniel L
2015-01-01
Simulations of future experiences are often emotionally arousing, and the tendency to repeatedly simulate negative future outcomes has been identified as a predictor of the onset of symptoms of anxiety. Nonetheless, next to nothing is known about how the healthy human brain processes repeated simulations of emotional future events. In this study, we present a paradigm that can be used to study repeated simulations of the emotional future in a manner that overcomes phenomenological confounds between positive and negative events. The results show that pulvinar nucleus and orbitofrontal cortex respectively demonstrate selective reductions in neural activity in response to frequently as compared to infrequently repeated simulations of negative and positive future events. Implications for research on repeated simulations of the emotional future in both non-clinical and clinical populations are discussed.
Weintraub, Ari Y; Deutsch, Ellen S; Hales, Roberta L; Buchanan, Newton A; Rock, Whitney L; Rehman, Mohamed A
2017-06-01
Learning to use a new electronic anesthesia information management system can be challenging. Documenting anesthetic events, medication administration, and airway management in an unfamiliar system while simultaneously caring for a patient with the vigilance required for safe anesthesia can be distracting and risky. This technical report describes a vendor-agnostic approach to training using a high-technology manikin in a simulated clinical scenario. Training was feasible and valued by participants but required a combination of electronic and manual components. Further exploration may reveal simulated patient care training that provides the greatest benefit to participants as well as feedback to inform electronic health record improvements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Ke; Euser, Bryan J.; Rougier, Esteban
2018-06-20
Sheared granular layers undergoing stick-slip behavior are broadly employed to study the physics and dynamics of earthquakes. In this paper, a two-dimensional implementation of the combined finite-discrete element method (FDEM), which merges the finite element method (FEM) and the discrete element method (DEM), is used to explicitly simulate a sheared granular fault system including both gouge and plate, and to investigate the influence of different normal loads on seismic moment, macroscopic friction coefficient, kinetic energy, gouge layer thickness, and recurrence time between slips. In the FDEM model, the deformation of plates and particles is simulated using the FEM formulation, while particle-particle and particle-plate interactions are modeled using DEM-derived techniques. The simulated seismic moment distributions are generally consistent with those obtained from the laboratory experiments. In addition, the simulation results demonstrate that with increasing normal load, (i) the kinetic energy of the granular fault system increases; (ii) the gouge layer thickness shows a decreasing trend; and (iii) the macroscopic friction coefficient does not experience much change. Analyses of the slip events reveal that, as the normal load increases, more slip events with large kinetic energy release and longer recurrence time occur, and the magnitude of the gouge layer thickness decrease also tends to be larger, while the macroscopic friction coefficient drop decreases. Finally, the simulations not only reveal the influence of normal loads on the dynamics of sheared granular fault gouge, but also demonstrate the capabilities of FDEM for studying the stick-slip dynamic behavior of granular fault systems.
Estimating ICU bed capacity using discrete event simulation.
Zhu, Zhecheng; Hen, Bee Hoon; Teow, Kiok Liang
2012-01-01
The intensive care unit (ICU) in a hospital caters for critically ill patients. The number of ICU beds has a direct impact on many aspects of hospital performance. A lack of ICU beds may cause ambulance diversion and surgery cancellation, while an excess of ICU beds may cause a waste of resources. This paper aims to develop a discrete event simulation (DES) model to help healthcare service providers determine the proper ICU bed capacity that strikes a balance between service level and cost effectiveness. The DES model is developed to reflect the complex patient flow of the ICU system. Actual operational data, including emergency arrivals, elective arrivals and length of stay, are fed directly into the DES model to capture the variations in the system. The DES model is validated by open-box and black-box tests. The validated model is used to test two what-if scenarios in which the healthcare service providers are interested: the proper number of ICU beds in service to meet a target rejection rate, and the extra ICU beds in service needed to meet demand growth. A 12-month period of actual operational data was collected from an ICU department with 13 ICU beds in service. Comparison between the simulation results and the actual situation shows that the DES model accurately captures the variations in the system and is flexible enough to simulate various what-if scenarios. DES helps healthcare service providers describe the current situation and simulate what-if scenarios for future planning.
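The what-if logic described above maps naturally onto a process-based DES library. A minimal sketch using Python's SimPy follows (the bed count of 13 is taken from the abstract; the arrival rate and mean length of stay are illustrative assumptions, and patients balk rather than queue when no bed is free):

```python
import random
import simpy

def patient(env, beds, los_mean, stats):
    # Admit only if a bed is free on arrival; otherwise count a rejection.
    if beds.count < beds.capacity:
        with beds.request() as req:
            yield req
            stats["admitted"] += 1
            yield env.timeout(random.expovariate(1.0 / los_mean))  # stay
    else:
        stats["rejected"] += 1

def arrivals(env, beds, rate_per_day, los_mean, stats):
    while True:
        yield env.timeout(random.expovariate(rate_per_day))  # inter-arrival
        env.process(patient(env, beds, los_mean, stats))

stats = {"admitted": 0, "rejected": 0}
env = simpy.Environment()
beds = simpy.Resource(env, capacity=13)      # 13 ICU beds, as in the study
env.process(arrivals(env, beds, rate_per_day=3.0, los_mean=4.0, stats=stats))
env.run(until=365)                           # simulate one year (days)
print(stats, "rejection rate:",
      stats["rejected"] / max(1, sum(stats.values())))
```

Re-running with different `capacity` values sweeps out the rejection-rate curve that the bed-capacity decision rests on.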
NASA Technical Reports Server (NTRS)
Norgard, John D.
2012-01-01
For future NASA Manned Space Exploration of the Moon and Mars, a blunt body capsule, called the Orion Crew Exploration Vehicle (CEV), composed of a Crew Module (CM) and a Service Module (SM), with a parachute descent assembly, is planned for reentry back to Earth. A Capsule Parachute Assembly System (CPAS) is being developed for preliminary parachute drop tests at the Yuma Proving Ground (YPG) to simulate high-speed reentry to Earth from beyond Low-Earth-Orbit (LEO) and to provide measurements of landing parameters and parachute loads. The avionics systems on CPAS also provide mission critical firing events to deploy, reef, and release the parachutes in three stages (extraction, drogues, mains) using mortars and pressure cartridge assemblies. In addition, a Mid-Air Delivery System (MDS) is used to separate the capsule from the sled that is used to eject the capsule from the back of the drop plane. Also, high-speed and high-definition cameras in a Video Camera System (VCS) are used to film the drop plane extraction and parachute landing events. To verify Electromagnetic Compatibility (EMC) of the CPAS system from unintentional radiation, Electromagnetic Interference (EMI) measurements are being made inside a semi-anechoic chamber at NASA/JSC at 1 m from the electronic components of the CPAS system. In addition, EMI measurements of the integrated CPAS system are being made inside a hangar at YPG. These near-field B-Dot probe measurements on the surface of a parachute simulator (DART) are being extrapolated outward to the 1 m standard distance for comparison to the MIL-STD radiated emissions limit.
A Simulation of Readiness-Based Sparing Policies
2017-06-01
A variant of a greedy heuristic algorithm is used to set stock levels and estimate overall WS availability. Our discrete event simulation is then used to test the ... available in the optimization tools. Subject terms: readiness-based sparing, discrete event simulation, optimization, multi-indenture.
Sloane, E B; Gelhot, V
2004-01-01
This research is motivated by the rapid pace of medical device and information system integration. Although the ability to interconnect many medical devices and information systems may help improve patient care, there is no way to detect if incompatibilities between one or more devices might cause critical events such as patient alarms to go unnoticed or cause one or more of the devices to become stuck in a disabled state. Petri net tools allow automated testing of all possible states and transitions between devices and/or systems to detect potential failure modes in advance. This paper describes an early research project to use Petri nets to simulate and validate a multi-modality central patient monitoring system. A free Petri net tool, HPSim, is used to simulate two wireless patient monitoring networks: one with 44 heart monitors and a central monitoring system and a second version that includes an additional 44 wireless pulse oximeters. In the latter Petri net simulation, a potentially dangerous heart arrhythmia and pulse oximetry alarms were detected.
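To make the state-and-transition testing concrete, here is a minimal token-firing sketch (the places and transition model one hypothetical alarm path; HPSim itself is a GUI tool, so this Python fragment only mirrors the idea):

```python
# Minimal Petri net: places hold token counts; a transition is enabled
# when all its input places hold enough tokens, and firing it moves
# tokens from input places to output places.
def enabled(marking, transition):
    return all(marking[p] >= n for p, n in transition["in"].items())

def fire(marking, transition):
    for p, n in transition["in"].items():
        marking[p] -= n
    for p, n in transition["out"].items():
        marking[p] = marking.get(p, 0) + n

# Hypothetical alarm path: a bedside monitor raises an alarm that the
# central station must consume and display.
marking = {"alarm_raised": 1, "central_idle": 1, "alarm_displayed": 0}
t_display = {"in": {"alarm_raised": 1, "central_idle": 1},
             "out": {"alarm_displayed": 1, "central_idle": 1}}

if enabled(marking, t_display):
    fire(marking, t_display)
print(marking)  # alarm consumed and displayed; central station idle again
```

Exhaustively exploring which markings are reachable (and which transitions can never fire again) is exactly the kind of dead-state analysis the abstract describes.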
Exploration Supply Chain Simulation
NASA Technical Reports Server (NTRS)
2008-01-01
The Exploration Supply Chain Simulation project was chartered by the NASA Exploration Systems Mission Directorate to develop a software tool, with proper data, to quantitatively analyze supply chains for future program planning. This tool is a discrete-event simulation that uses the basic supply chain concepts of planning, sourcing, making, delivering, and returning. This supply chain perspective is combined with other discrete or continuous simulation factors. Discrete resource events (such as launch or delivery reviews) are represented as organizational functional units. Continuous resources (such as civil service or contractor program functions) are defined as enabling functional units. Concepts of fixed and variable costs are included in the model to allow the discrete events to interact with cost calculations. The definition file is intrinsic to the model, but a blank start can be initiated at any time. The current definition file is an Orion Ares I crew launch vehicle. Parameters stretch from Kennedy Space Center across and into other program entities (Michoud Assembly Facility, Alliant Techsystems, Stennis Space Center, Johnson Space Center, etc.), though these will only gain detail as the file continues to evolve. The Orion Ares I file definition in the tool continues to evolve, and analysis from this tool is expected in 2008. This is the first application of such business-driven modeling to a NASA/government/aerospace-contractor endeavor.
Autonomous control of production networks using a pheromone approach
NASA Astrophysics Data System (ADS)
Armbruster, D.; de Beer, C.; Freitag, M.; Jagalski, T.; Ringhofer, C.
2006-04-01
The flow of parts through a production network is usually pre-planned by a central control system. Such central control fails in the presence of highly fluctuating demand and/or unforeseen disturbances. To manage such dynamic networks with low work-in-progress and short throughput times, an autonomous control approach is proposed. Autonomous control means a decentralized routing by the autonomous parts themselves. The parts' decisions are based on backward-propagated information about the throughput times of finished parts on different routes, so routes with shorter throughput times attract parts to use them again. This process can be compared to ants leaving pheromones on their way to communicate with following ants. The paper focuses on a mathematical description of such autonomously controlled production networks. A fluid model with limited service rates in a general network topology is derived and compared to a discrete-event simulation model. Whereas the discrete-event simulation of production networks is straightforward, the formulation of the addressed scenario in terms of a fluid model is challenging. Here it is shown how several problems in a fluid model formulation (e.g., discontinuities) can be handled mathematically. Finally, some simulation results for the pheromone-based control with both the discrete-event simulation model and the fluid model are presented for a time-dependent influx.
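A minimal sketch of the pheromone-style routing rule described above (the route names, evaporation constant, and update rule are illustrative assumptions, not the paper's equations):

```python
import random

# Remembered throughput time per route; shorter time = stronger attraction.
pheromone = {"route_A": 1.0, "route_B": 1.0}
EVAPORATION = 0.9  # older information gradually loses influence

def choose_route():
    # Attraction is inversely proportional to remembered throughput time;
    # routes are sampled in proportion to their attraction.
    weights = {r: 1.0 / pheromone[r] for r in pheromone}
    x = random.uniform(0, sum(weights.values()))
    for route, w in weights.items():
        x -= w
        if x <= 0:
            return route
    return route  # float-rounding fallback

def report_throughput(route, throughput_time):
    # Backward-propagated information from a finished part updates the table.
    pheromone[route] = (EVAPORATION * pheromone[route]
                        + (1 - EVAPORATION) * throughput_time)

report_throughput("route_A", 0.5)   # route_A got faster...
print(choose_route())               # ...so it now attracts more parts
```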
Surgeon Training in Telerobotic Surgery via a Hardware-in-the-Loop Simulator
Alemzadeh, Homa; Chen, Daniel; Kalbarczyk, Zbigniew; Iyer, Ravishankar K.; Kesavadas, Thenkurussi
2017-01-01
This work presents a software and hardware framework for a telerobotic surgery safety and motor skill training simulator. The aim is to provide trainees with a comprehensive simulator for acquiring the essential skills to perform telerobotic surgery. Existing commercial robotic surgery simulators lack features for safety training and optimal motion planning, which are critical factors in ensuring patient safety and efficiency in operation. In this work, we propose a hardware-in-the-loop simulator that directly introduces these two features. The proposed simulator is built upon the Raven-II™ open source surgical robot, integrated with a physics engine and a safety hazard injection engine. Also, a Fast Marching Tree-based motion planning algorithm is used to help trainees learn optimal instrument motion patterns. The main contributions of this work are (1) reproducing safety hazard events related to the da Vinci™ system, as reported to the FDA MAUDE database, with a novel haptic feedback strategy that alerts the operator when the underlying dynamics differ from the real robot's states, so that the operator is aware of and can mitigate the negative impact of safety-critical events, and (2) using the motion planner to generate semi-optimal paths in an interactive robotic surgery training environment.
Peng, Hai-Qin; Liu, Yan; Wang, Hong-Wu; Ma, Lu-Ming
2015-10-01
In recent years, due to global climate change and rapid urbanization, extreme weather events have affected cities with increasing frequency, and waterlogging from heavy rains is common. In such cases, the urban drainage system can no longer meet its original design requirements, resulting in traffic jams and even paralysis, and posing a threat to urban safety. Accurately assessing the capacity of the drainage system and correctly simulating the transport behavior of the drainage network and the carrying capacity of drainage facilities therefore provide a necessary foundation for urban drainage planning and design. This study adopts InfoWorks Integrated Catchment Management (ICM) to represent the two combined sewer drainage systems in Yangpu District, Shanghai (China). The model can assist the design of the drainage system. Model calibration is performed based on historical rainfall events. The calibrated model is used for the assessment of outlet drainage and pipe loads for storm scenarios that currently exist or may occur in the future. The study found that the simulation and analysis results of the drainage system model were reliable: they could fully reflect the service performance of the drainage system in the study area and provide decision-making support for regional flood control and transformation of the pipeline network.
Kinetic Monte Carlo modeling of chemical reactions coupled with heat transfer
NASA Astrophysics Data System (ADS)
Castonguay, Thomas C.; Wang, Feng
2008-03-01
In this paper, we describe two types of effective events for describing heat transfer in a kinetic Monte Carlo (KMC) simulation that may involve stochastic chemical reactions. Simulations employing these events are referred to as KMC-TBT and KMC-PHE. In KMC-TBT, heat transfer is modeled as the stochastic transfer of "thermal bits" between adjacent grid points. In KMC-PHE, heat transfer is modeled by integrating the Poisson heat equation for a short time. Either approach is capable of capturing the time dependent system behavior exactly. Both KMC-PHE and KMC-TBT are validated by simulating pure heat transfer in a rod and a square and modeling a heated desorption problem where exact numerical results are available. KMC-PHE is much faster than KMC-TBT and is used to study the endothermic desorption of a lattice gas. Interesting findings from this study are reported.
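A toy version of the KMC-TBT idea described above (an assumed minimal setup, not the authors' code: integer "thermal bits" on a 1D rod, one stochastic neighbor transfer per event) looks like this:

```python
import random

# One KMC-TBT-style event: pick a donor site weighted by its bit count,
# then move one thermal bit to a randomly chosen neighbor. Transfers off
# the rod ends are simply rejected (bit count is conserved).
def kmc_tbt_step(bits):
    total = sum(bits)
    r, acc, donor = random.uniform(0, total), 0.0, 0
    for i, b in enumerate(bits):
        acc += b
        if r <= acc:
            donor = i
            break
    neighbor = donor + random.choice([-1, 1])
    if 0 <= neighbor < len(bits):
        bits[donor] -= 1
        bits[neighbor] += 1

rod = [100] * 10 + [0] * 10      # hot half, cold half
for _ in range(20000):
    kmc_tbt_step(rod)
print(rod)                        # bits spread toward a uniform profile
```

The stochastic-transfer picture mimics diffusive heat flow event by event, which is why the far faster KMC-PHE variant (integrating the Poisson heat equation between reaction events) is preferred for production runs.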
Effects of electronic excitation on cascade dynamics in nickel–iron and nickel–palladium systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zarkadoula, Eva; Samolyuk, German; Weber, William J.
2017-06-10
Using molecular dynamics simulations and the two-temperature model, we provide in this paper a comparison of the surviving damage from single ion irradiation events in nickel-based alloys, for cascades with and without taking into account the effects of the electronic excitations. We find that including the electronic effects impacts the amount of the resulting damage and the production of isolated defects. Finally, irradiation of nickel–palladium systems results in larger numbers of defects compared to nickel–iron systems, with similar numbers of isolated defects. We additionally investigate the mass effect on the two-temperature model in molecular dynamics simulations of cascades.
NASA Technical Reports Server (NTRS)
Hurwitz, Margaret M.; Garfinkel, Chaim I.; Newman, Paul A.; Oman, Luke D.
2013-01-01
Warm pool El Nino (WPEN) events are characterized by positive sea surface temperature (SST) anomalies in the central equatorial Pacific. Under present-day climate conditions, WPEN events generate poleward propagating wavetrains and enhance midlatitude planetary wave activity, weakening the stratospheric polar vortices. The late 21st century extratropical atmospheric response to WPEN events is investigated using the Goddard Earth Observing System Chemistry-Climate Model (GEOSCCM), version 2. GEOSCCM simulations are forced by projected late 21st century concentrations of greenhouse gases (GHGs) and ozone-depleting substances (ODSs) and by SSTs and sea ice concentrations from an existing ocean-atmosphere simulation. Despite known ocean-atmosphere model biases, the prescribed SST fields represent a best estimate of the structure of late 21st century WPEN events. The future Arctic vortex response is qualitatively similar to that observed in recent decades but is weaker in late winter. This response reflects the weaker SST forcing in the Nino 3.4 region and subsequently weaker Northern Hemisphere tropospheric teleconnections. The Antarctic stratosphere does not respond to WPEN events in a future climate, reflecting a change in tropospheric teleconnections: The meridional wavetrain weakens while a more zonal wavetrain originates near Australia. Sensitivity simulations show that a strong poleward wavetrain response to WPEN requires a strengthening and southeastward extension of the South Pacific Convergence Zone; this feature is not captured by the late 21st century modeled SSTs. Expected future increases in GHGs and decreases in ODSs do not affect the polar stratospheric responses to WPEN.
NASA Technical Reports Server (NTRS)
Sarani, Sam
2010-01-01
The Cassini spacecraft, the largest and most complex interplanetary spacecraft ever built, continues to undertake unique scientific observations of the planet Saturn, Titan, Enceladus, and other moons of the ring world. In order to maintain a stable attitude during the course of its mission, this three-axis stabilized spacecraft uses two different control systems: the Reaction Control System (RCS) and the Reaction Wheel Assembly (RWA) control system. In the course of its mission, Cassini performs numerous reaction wheel momentum biases (or unloads) using its reaction control thrusters. The use of the RCS thrusters often imparts undesired velocity changes (delta-Vs) on the spacecraft, and it is crucial for the Cassini navigation and attitude control teams to be able to predict, quickly but accurately, the hydrazine usage and delta-V vector in Earth Mean Equatorial (J2000) inertial coordinates for reaction wheel bias events, without having to spend time and resources simulating the event in dynamic or hardware-in-the-loop simulation environments. The flight-calibrated methodology described in this paper, and the ground software developed from it, are designed to provide the RCS thruster on-times, with acceptable accuracy and without any form of dynamic simulation, for reaction wheel biases, along with the hydrazine usage and the delta-V in the EME-2000 inertial frame.
Performance Evaluation Modeling of Network Sensors
NASA Technical Reports Server (NTRS)
Clare, Loren P.; Jennings, Esther H.; Gao, Jay L.
2003-01-01
Substantial benefits are promised by operating many spatially separated sensors collectively. Such systems are envisioned to consist of sensor nodes that are connected by a communications network. A simulation tool is being developed to evaluate the performance of networked sensor systems, incorporating such metrics as target detection probabilities, false alarm rates, and classification confusion probabilities. The tool will be used to determine configuration impacts associated with such aspects as spatial laydown, mixture of different types of sensors (acoustic, seismic, imaging, magnetic, RF, etc.), and fusion architecture. The QualNet discrete-event simulation environment serves as the underlying basis for model development and execution. This platform is recognized for its capabilities in efficiently simulating networking among mobile entities that communicate via wireless media. We are extending QualNet's communications modeling constructs to capture the sensing aspects of multi-target sensing (analogous to multiple-access communications), unimodal multi-sensing (broadcast), and multi-modal sensing (multiple channels and correlated transmissions). Methods are also being developed for modeling the sensor signal sources (transmitters), signal propagation through the media, and sensors (receivers) that are consistent with the discrete event paradigm needed for performance determination of sensor network systems. This work is supported under the Microsensors Technical Area of the Army Research Laboratory (ARL) Advanced Sensors Collaborative Technology Alliance.
Quantifying radar-rainfall uncertainties in urban drainage flow modelling
NASA Astrophysics Data System (ADS)
Rico-Ramirez, M. A.; Liguori, S.; Schellart, A. N. A.
2015-09-01
This work presents the results of the implementation of a probabilistic system to model the uncertainty associated with radar rainfall (RR) estimates and the way this uncertainty propagates through the sewer system of an urban area located in the north of England. The spatial and temporal correlations of the RR errors, as well as the error covariance matrix, were computed to build an RR error model able to generate RR ensembles that reproduce the uncertainty associated with the measured rainfall. The RR ensembles provide important information about the uncertainty in the rainfall measurement that can be propagated through the urban sewer system, and the measured flow peaks and flow volumes are often bounded within the uncertainty band produced by the RR ensembles. In 55% of the simulated events, the uncertainties in RR measurements can explain the uncertainties observed in the simulated flow volumes. However, there are also some events where the RR uncertainty cannot explain the whole uncertainty observed in the simulated flow volumes, indicating that additional sources of uncertainty must be considered, such as the uncertainty in the urban drainage model structure, the uncertainty in the calibrated parameters of the urban drainage model, and the uncertainty in the measured sewer flows.
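One common way to generate such correlated ensembles, sketched below with illustrative numbers (the field size, covariance values, and radar estimates are assumptions, not the study's data), is to factor the error covariance matrix with a Cholesky decomposition and add the resulting correlated noise to the radar estimate:

```python
import numpy as np

rng = np.random.default_rng(42)
n_pixels, n_members = 5, 100

# Assumed spatial error covariance: unit variance, 0.3 cross-correlation
cov = np.full((n_pixels, n_pixels), 0.3)
np.fill_diagonal(cov, 1.0)

L = np.linalg.cholesky(cov)                       # cov = L @ L.T
errors = L @ rng.standard_normal((n_pixels, n_members))  # correlated errors

radar = np.array([2.0, 2.5, 1.8, 3.1, 2.2])       # radar estimate, mm/h
ensemble = radar[:, None] + errors                # one column per member
print(ensemble.shape)                             # (5, 100)
```

Each ensemble member is then run through the sewer model, turning rainfall uncertainty into a spread of simulated flow peaks and volumes.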
NASA Astrophysics Data System (ADS)
Plante, Ianik; Devroye, Luc
2015-09-01
Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for the partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for two-particle systems, is faster than conventional look-up tables, and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort to develop models to understand the role of chemical reactions in the radiation effects on cells and tissues, and may eventually be included in event-based models of space radiation risks. As many reactions in biological systems are of this type, this algorithm might also play a pivotal role in future simulation programs, not only in radiation chemistry but also in the simulation of biochemical networks in time and space.
NASA Astrophysics Data System (ADS)
Zapartas, E.; de Mink, S. E.; Izzard, R. G.; Yoon, S.-C.; Badenes, C.; Götberg, Y.; de Koter, A.; Neijssel, C. J.; Renzo, M.; Schootemeijer, A.; Shrotriya, T. S.
2017-05-01
Most massive stars, the progenitors of core-collapse supernovae, are in close binary systems and may interact with their companion through mass transfer or merging. We undertake a population synthesis study to compute the delay-time distribution of core-collapse supernovae, that is, the supernova rate versus time following a starburst, taking into account binary interactions. We test the systematic robustness of our results by running various simulations to account for the uncertainties in our standard assumptions. We find that a significant fraction, %, of core-collapse supernovae are "late", that is, they occur 50-200 Myr after birth, when all massive single stars have already exploded. These late events originate predominantly from binary systems with at least one, or, in most cases, with both stars initially being of intermediate mass (4-8 M⊙). The main evolutionary channels that contribute often involve either the merging of the initially more massive primary star with its companion or the engulfment of the remaining core of the primary by the expanding secondary that has accreted mass at an earlier evolutionary stage. Also, the total number of core-collapse supernovae increases by % because of binarity for the same initial stellar mass. The high rate implies that we should have already observed such late core-collapse supernovae, but have not recognized them as such. We argue that φ Persei is a likely progenitor and that eccentric neutron star - white dwarf systems are likely descendants. Late events can help explain the discrepancy in the delay-time distributions derived from supernova remnants in the Magellanic Clouds and extragalactic type Ia events, lowering the contribution of prompt Ia events. We discuss ways to test these predictions and speculate on the implications for supernova feedback in simulations of galaxy evolution.
NASA Technical Reports Server (NTRS)
Straube, Timothy Milton
1993-01-01
The design and implementation of a vertical-degree-of-freedom suspension system is described which provides a constant-force off-load condition to counter gravity over large displacements. By accommodating motions of up to one meter for structures weighing up to 100 pounds, the system is useful for experiments which simulate orbital construction events such as docking, multiple-component assembly, or structural deployment. A unique aspect of this device is the combination of a large-stroke passive off-load device augmented by electromotive-torque-actuated force feedback. The active force feedback has the effect of reducing break-away friction by a factor of twenty over the passive system alone. The thesis describes the development of the suspension hardware and the control algorithm. Experiments were performed to verify the suspension system's effectiveness in providing a gravity off-load and simulating the motion of a structure in orbit. Additionally, a three-dimensional system concept is presented as an extension of the implemented one-dimensional suspension system.
Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks
Naveros, Francisco; Garrido, Jesus A.; Carrillo, Richard R.; Ros, Eduardo; Luque, Niceto R.
2017-01-01
Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamics are evaluated: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity, together with better handling of synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity.
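To illustrate the bi-fixed-step idea in the simplest of the model families above, here is a minimal time-driven LIF sketch (the parameters, the threshold-proximity test, and the two step sizes are illustrative assumptions, not the authors' implementation):

```python
# Time-driven LIF with a coarse step far from threshold and a fine step
# near threshold, mimicking the bi-fixed-step integration idea.
TAU, V_REST, V_TH, V_RESET = 20.0, -65.0, -50.0, -65.0  # ms, mV

def lif_step(v, i_syn, dt):
    return v + dt * ((V_REST - v) + i_syn) / TAU

def simulate(i_syn, t_end, dt_coarse=1.0, dt_fine=0.1):
    t, v, spikes = 0.0, V_REST, []
    while t < t_end:
        # refine the step when the membrane is close to threshold
        dt = dt_fine if v > V_TH - 2.0 else dt_coarse
        v = lif_step(v, i_syn, dt)
        t += dt
        if v >= V_TH:
            spikes.append(t)
            v = V_RESET
    return spikes

print(simulate(i_syn=20.0, t_end=200.0))  # spike times (ms)
```

The fine step buys accuracy exactly where the dynamics are stiff (near the spike), while the coarse step keeps the rest of the integration cheap.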
Oliker, Nurit; Ostfeld, Avi
2014-03-15
This study describes a decision support system that alerts for contamination events in water distribution systems. The developed model comprises a weighted support vector machine (SVM) for the detection of outliers, followed by a sequence analysis for the classification of contamination events. The contribution of this study is an improvement in contamination event detection ability and a multi-dimensional analysis of the data, differing from the parallel one-dimensional analyses conducted so far. The multivariate analysis examines the relationships between water quality parameters and detects changes in their mutual patterns. The weights of the SVM model accomplish two goals: blurring the difference between the sizes of the two classes' data sets (as there are many more normal/regular than event time measurements), and incorporating the time factor through a time-decay coefficient, ascribing higher importance to recent observations when classifying a time step measurement. All model parameters were determined by data-driven optimization, so the calibration of the model was completely autonomous. The model was trained and tested on a real water distribution system (WDS) data set with randomly simulated events superimposed on the original measurements. The model is prominent in its ability to detect events that were only partly expressed in the data (i.e., affecting only some of the measured parameters). The model showed high accuracy and better detection ability compared to previous modeling attempts at contamination event detection.
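The two weighting ideas translate directly into common SVM library options. A minimal sketch using scikit-learn (the features, labels, and decay constant are synthetic illustrations; `class_weight="balanced"` is a stand-in for the paper's class-size weighting, not its exact scheme):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))              # water-quality parameters
y = (rng.random(500) < 0.05).astype(int)   # rare contamination events

# Time-decay coefficient: recent time steps get higher sample weight.
age = np.arange(500)[::-1]                 # 0 = most recent measurement
sample_weight = np.exp(-age / 100.0)

# class_weight blurs the class-size imbalance; sample_weight adds decay.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X, y, sample_weight=sample_weight)
print(clf.predict(X[:5]))
```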
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Wu, Di; Lau, K.- M.; Tao, Wei-Kuo
2016-01-01
The effects of large-scale forcing and land-atmosphere interactions on precipitation are investigated with NASA-Unified WRF (NU-WRF) simulations during the fast transitions of ENSO phases from spring to early summer of 2010 and 2011. The model is found to capture the major precipitation episodes in the 3-month simulations without resorting to nudging. However, the mean intensity of the simulated precipitation is underestimated by 46% and 57% compared with observations in the dry and wet regions of the southwestern and south-central United States, respectively. Sensitivity studies show that large-scale atmospheric forcing plays a major role in producing regional precipitation. A methodology to account for moisture contributions to individual precipitation events, as well as total precipitation, is presented under the same moisture budget framework. The analysis shows that the relative contributions of local evaporation and large-scale moisture convergence depend on the dry/wet regions and are a function of temporal and spatial scales. While the ratio of local to large-scale moisture contributions varies with domain size and weather system, evaporation provides a major moisture source in the dry region and during light rain events, which leads to greater sensitivity to soil moisture in the dry region and during light rain events. The feedback of land surface processes to large-scale forcing is well simulated, as indicated by changes in atmospheric circulation and moisture convergence. Overall, the results reveal an asymmetrical response of precipitation events to soil moisture, with higher sensitivity under dry than wet conditions. Drier soil tends to further suppress existing below-normal precipitation conditions via a positive soil moisture-land surface flux feedback that could worsen drought conditions in the southwestern United States.
NASA Astrophysics Data System (ADS)
Gavrilov, Nikolai M.; Koval, Andrey V.; Pogoreltsev, Alexander I.; Savenkova, Elena N.
2018-04-01
Parameterization schemes of atmospheric normal modes (NMs) and orographic gravity waves (OGWs) have been implemented into the mechanistic Middle and Upper Atmosphere Model (MUAM) simulating the atmospheric general circulation. Based on a 12-member ensemble of runs with the MUAM, a composite of stratospheric warming (SW) events has been constructed using UK Met Office data as the lower boundary conditions. The simulation results show that OGW amplitudes increase at altitudes above 30 km in the Northern Hemisphere after the SW event. At altitudes of about 50 km, OGWs have their largest amplitudes over the North American and European mountain systems before and during the composite SW, and over the Himalayas after the SW. Simulations demonstrate substantial (up to 50-70%) variations in the amplitudes of stationary planetary waves (PWs) during and after the SW in the mesosphere-lower thermosphere of the Northern Hemisphere. Westward travelling NMs have amplitude maxima not only in the Northern but also in the Southern Hemisphere, where these modes have waveguides in the middle and upper atmosphere. Simulated variations of PW and NM amplitudes correspond to changes in the mean zonal wind, EP-fluxes and wave refractive index at different phases of the composite SW events. Inclusion of the parameterization of OGW effects leads to decreases in the amplitudes (up to 15%) of almost all stationary PWs before and after the SW event, and to their increase (up to 40-60%) after the SW in the stratosphere and mesosphere at middle and high northern latitudes. It is suggested that the observed changes in NM amplitudes in the Southern Hemisphere during the SW could be caused by the divergence of the increased southward EP-flux. This EP-flux increases due to OGW drag before the SW and extends into the Southern Hemisphere.
Reliability computation using fault tree analysis
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
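A minimal sketch of the gate arithmetic such a method builds on, assuming independent basic events (the analytical handling of standby redundancy and of the same basic failure appearing in several fault paths via conditional probabilities, noted above, is beyond this fragment):

```python
# Evaluate a top-event probability from a fault tree of AND/OR gates:
# AND combines as prod(p); OR combines as 1 - prod(1 - p).
def prob(node, basic):
    if isinstance(node, str):              # leaf: a basic failure event
        return basic[node]
    gate, children = node
    ps = [prob(c, basic) for c in children]
    out = 1.0
    if gate == "AND":
        for p in ps:
            out *= p
        return out
    if gate == "OR":
        for p in ps:
            out *= (1.0 - p)
        return 1.0 - out
    raise ValueError("unknown gate: " + gate)

# Hypothetical tree: system fails if the sensor fails, or if both the
# pump and its backup fail.
basic = {"pump_fails": 0.01, "backup_fails": 0.05, "sensor_fails": 0.02}
tree = ("OR", [("AND", ["pump_fails", "backup_fails"]), "sensor_fails"])
print(prob(tree, basic))                   # top-event probability
```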
Design and development of JUNO event data model
NASA Astrophysics Data System (ADS)
Li, Teng; Xia, Xin; Huang, Xing-Tao; Zou, Jia-Heng; Li, Wei-Dong; Lin, Tao; Zhang, Kun; Deng, Zi-Yan
2017-06-01
The Jiangmen Underground Neutrino Observatory (JUNO) detector is designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters. The general purpose design also allows measurements of neutrinos from many terrestrial and non-terrestrial sources. The JUNO Event Data Model (EDM) plays a central role in the offline software system. It describes the event data entities through all processing stages for both simulated and collected data, and provides persistency via the input/output system. Also, the EDM is designed to enable flexible event handling such as event navigation, as well as the splitting of MC IBD signals and mixing of MC backgrounds. This paper describes the design, implementation and performance of the JUNO EDM. Supported by Joint Large-Scale Scientific Facility Funds of the NSFC and CAS (U1532258), the Program for New Century Excellent Talents in University (NCET-13-0342), the Shandong Natural Science Funds for Distinguished Young Scholar (JQ201402) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA10010900)
NASA Astrophysics Data System (ADS)
Coulibaly, S.; Clerc, M. G.; Selmi, F.; Barbay, S.
2017-02-01
The occurrence of extreme events in a spatially extended microcavity laser has been recently reported [Selmi et al., Phys. Rev. Lett. 116, 013901 (2016), 10.1103/PhysRevLett.116.013901] to be correlated to emergence of spatiotemporal chaos. In this dissipative system, the role of spatial coupling through diffraction is essential to observe the onset of spatiotemporal complexity. We investigate further the formation mechanism of extreme events by comparing the statistical and dynamical analyses. Experimental measurements together with numerical simulations allow us to assign the quasiperiodicity mechanism as the route to spatiotemporal chaos in this system. Moreover, by investigating the fine structure of the maximum Lyapunov exponent, of the Lyapunov spectrum, and of the Kaplan-Yorke dimension of the chaotic attractor, we are able to deduce that intermittency plays a key role in the proportion of extreme events measured. We assign the observed mechanism of generation of extreme events to quasiperiodic extended spatiotemporal intermittency.
Dam failure analysis for the Lago El Guineo Dam, Orocovis, Puerto Rico
Gómez-Fragoso, Julieta; Heriberto Torres-Sierra,
2016-08-09
The U.S. Geological Survey, in cooperation with the Puerto Rico Electric Power Authority, completed hydrologic and hydraulic analyses to assess the potential hazard to human life and property associated with the hypothetical failure of the Lago El Guineo Dam. The Lago El Guineo Dam is within the headwaters of the Río Grande de Manatí and impounds a drainage area of about 4.25 square kilometers. The hydrologic assessment was designed to determine the outflow hydrographs and peak discharges for Lago El Guineo and other subbasins in the Río Grande de Manatí hydrographic basin for three extreme rainfall events: (1) a 6-hour probable maximum precipitation event, (2) a 24-hour probable maximum precipitation event, and (3) a 24-hour, 100-year recurrence rainfall event. The hydraulic study simulated a failure of the Lago El Guineo Dam using flood hydrographs generated from the hydrologic study. The simulated dam failure generated a hydrograph that was routed downstream from Lago El Guineo Dam through the lower reaches of the Río Toro Negro and the Río Grande de Manatí to determine water-surface profiles developed from the event-based hydrologic scenarios and "sunny day" conditions. The Hydrologic Engineering Center's Hydrologic Modeling System (HEC–HMS) and Hydrologic Engineering Center's River Analysis System (HEC–RAS) computer programs, developed by the U.S. Army Corps of Engineers, were used for the hydrologic and hydraulic modeling, respectively. The flow routing in the hydraulic analyses was completed using the unsteady flow module available in the HEC–RAS model. Above the Lago El Guineo Dam, the simulated inflow peak discharges from HEC–HMS were about 550 and 414 cubic meters per second for the 6- and 24-hour probable maximum precipitation events, respectively. The 24-hour, 100-year recurrence storm simulation resulted in a peak discharge of about 216 cubic meters per second. For the hydrologic analysis, no dam failure conditions were considered within the model. The results of the hydrologic simulations indicated that for all hydrologic scenarios, the Lago El Guineo Dam would not experience overtopping. For the dam breach hydraulic analysis, failure by piping was the selected hypothetical failure mode for the Lago El Guineo Dam. Results from the simulated dam failure of the Lago El Guineo Dam using the HEC–RAS model for the 6- and 24-hour probable maximum precipitation events indicated peak discharges below the dam of 1,342.43 and 1,434.69 cubic meters per second, respectively. Dam failure during the 24-hour, 100-year recurrence rainfall event resulted in a peak discharge directly downstream from Lago El Guineo Dam of 1,183.12 cubic meters per second. Dam failure during sunny-day conditions (no precipitation) produced a peak discharge at Lago El Guineo Dam of 1,015.31 cubic meters per second, assuming the initial water-surface elevation was at the morning-glory spillway invert elevation. The results of the hydraulic analysis indicate that the flood resulting from the simulated failure of the Lago El Guineo Dam would extend to many inhabited areas along the stream banks from the dam to the mouth of the Río Grande de Manatí. Low-lying regions in the vicinity of Ciales, Manatí, and Barceloneta, Puerto Rico, are among the regions that would be most affected.
Effects of the flood control (levee) structure constructed in 2000 to provide protection to the low-lying populated areas of Barceloneta, Puerto Rico, were considered in the hydraulic analysis of dam failure. The results indicate that overtopping can be expected in the aforementioned levee during 6- and 24-hour probable maximum precipitation events. The levee was not overtopped during dam failure scenarios under the 24-hour, 100-year recurrence rainfall event or sunny-day conditions.
Computational Embryology and Predictive Toxicology of Cleft Palate
The capacity to model and simulate key events in developmental toxicity using computational systems biology and biological knowledge brings hazard identification across the vast landscape of untested environmental chemicals a step closer. In this context, we chose cleft palate as a model ...
NASA Astrophysics Data System (ADS)
Rimo, Tan Hauw Sen; Chai Tin, Ong
2017-12-01
Capacity utilization (CU) measurement is an important task in a manufacturing system, especially in a make-to-order (MTO) manufacturing system with product customization, in predicting capacity to meet future demand. A stochastic discrete-event simulation is developed using the ARENA software to determine CU and the capacity gap (CG) in a short-run production function. This study focused on machinery breakdown and product defective rate as the random variables in the simulation. The study found that the manufacturing system runs at 68.01% CU and 31.99% CG. It is revealed that machinery breakdown and product defective rate have a direct relationship with CU. Reducing the product defective rate to zero defects improves CU to 73.56% and decreases CG to 26.44%, while eliminating machinery breakdowns improves CU to 93.99% and decreases CG to 6.01%. This study helps the operations level to study CU using "what-if" analysis in order to meet future demand in a more practical and easier way through a simulation approach. Further study is recommended that includes other random variables affecting CU, to bring the simulation closer to the real-life situation for better decisions.
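The ARENA model itself is not reproduced in the abstract; as a rough illustration of the same idea, the following Python sketch runs a single-machine discrete-event simulation in which random breakdowns and defects erode effective capacity, then reports CU and CG. All parameters (cycle time, failure and repair times, defect rate) are hypothetical.

```python
import random

random.seed(1)

CYCLE = 1.0                  # hypothetical minutes per unit
MTBF, MTTR = 120.0, 15.0     # hypothetical mean time between failures / to repair
DEFECT_RATE = 0.05           # hypothetical fraction of defective units
HORIZON = 10_000.0           # simulated minutes

t, good = 0.0, 0
next_failure = random.expovariate(1 / MTBF)
while t < HORIZON:
    if t + CYCLE <= next_failure:
        # machine completes one unit; defective units do not count toward CU
        t += CYCLE
        if random.random() > DEFECT_RATE:
            good += 1
    else:
        # breakdown event: the unit in progress is lost, machine is repaired
        t = next_failure + random.expovariate(1 / MTTR)
        next_failure = t + random.expovariate(1 / MTBF)

cu = good / (HORIZON / CYCLE)   # good output vs. theoretical capacity
print(f"CU = {cu:.2%}, CG = {1 - cu:.2%}")
```

Setting DEFECT_RATE or the breakdown parameters to zero reproduces the kind of "what-if" comparisons the study describes.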
Synchronization of autonomous objects in discrete event simulation
NASA Technical Reports Server (NTRS)
Rogers, Ralph V.
1990-01-01
Autonomous objects in event-driven discrete event simulation offer the potential to combine the freedom of unrestricted movement and positional accuracy through Euclidean space of time-driven models with the computational efficiency of event-driven simulation. The principal challenge to autonomous object implementation is object synchronization. The concept of a spatial blackboard is offered as a potential methodology for synchronization. The issues facing implementation of a spatial blackboard are outlined and discussed.
A flooding induced station blackout analysis for a pressurized water reactor using the RISMC toolkit
Mandelli, Diego; Prescott, Steven; Smith, Curtis; ...
2015-05-17
In this paper we evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes that is responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., component/system activation) and to perform statistical analyses. In our case, the simulation of the flooding is performed by using an advanced smooth particle hydrodynamics code called NEUTRINO. The obtained results allow the user to investigate and quantify the impact of timing and sequencing of events on system safety. The impact of power uprate is determined in terms of both core damage probability and safety margins.
Mars Exploration Rover Terminal Descent Mission Modeling and Simulation
NASA Technical Reports Server (NTRS)
Raiszadeh, Behzad; Queen, Eric M.
2004-01-01
Because of NASA's added reliance on simulation for successful interplanetary missions, the MER mission has developed a detailed EDL trajectory modeling and simulation capability. This paper summarizes how the MER EDL sequence of events is modeled, the verification of the methods used, and the inputs. This simulation is built upon a multibody parachute trajectory simulation tool, developed in POST II, that accurately simulates the trajectory of multiple vehicles in flight with interacting forces. In this model the parachute and the suspended bodies are treated as six-degree-of-freedom (6 DOF) bodies. The terminal descent phase of the mission consists of several Entry, Descent, and Landing (EDL) events, such as parachute deployment, heatshield separation, deployment of the lander from the backshell, deployment of the airbags, RAD firings, TIRS firings, etc. For an accurate, reliable simulation these events need to be modeled seamlessly and robustly so that the simulations remain numerically stable during Monte Carlo runs. This paper also summarizes how the events have been modeled, the numerical issues, and the modeling challenges.
Single event test methodology for integrated optoelectronics
NASA Technical Reports Server (NTRS)
Label, Kenneth A.; Cooley, James A.; Stassinopoulos, E. G.; Marshall, Paul; Crabtree, Christina
1993-01-01
A single event upset (SEU), defined as a transient or glitch on the output of a device, and its applicability to integrated optoelectronics are discussed in the context of spacecraft design and the need for more than a bit error rate viewpoint for testing and analysis. A methodology for testing integrated optoelectronic receivers and transmitters for SEUs is presented, focusing on the actual test requirements and system schemes needed for integrated optoelectronic devices. Two main causes of single event effects in the space environment, including protons and galactic cosmic rays, are considered along with ground test facilities for simulating the space environment.
A State Event Detection Algorithm for Numerically Simulating Hybrid Systems with Model Singularities
2007-01-01
…the case of non-constant step sizes. Therefore the event dynamics after the predictor and corrector phases are, respectively, $g^{p}_{k+1} = g(x_k + h_{k+1}\,\ldots$ … the Extrapolation Polynomial: Using a Taylor series expansion of the predicted event function, eq. (6),

$$g^{p}_{k+1} = g_k + h_{k+1}\left.\frac{dg^{p}}{dt}\right|_{(x,t)=(x_k,t_k)} + \frac{h_{k+1}^{2}}{2!}\left.\frac{d^{2}g^{p}}{dt^{2}}\right|_{(x,t)=(x_k,t_k)} + \ldots, \qquad (8)$$

we can determine the value of $g^{p}_{k+1}$ as a function of the, yet undetermined, step size $h_{k+1}$. Recalling …
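The excerpt's step-size selection hinges on locating the zero of the event function g along the step. A minimal, self-contained illustration of the same idea is sketched below, using sign-change detection followed by bisection on the step size rather than the report's Taylor-based predictor; the dynamics and event function are hypothetical.

```python
import numpy as np

def rk4_step(f, x, t, h):
    """One classical Runge-Kutta step for x' = f(x, t)."""
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_to_event(f, g, x, t, h, t_end, tol=1e-10):
    """Advance the ODE until the event function g(x, t) changes sign,
    then bisect on the step size to localize the event."""
    while t < t_end:
        x_new = rk4_step(f, x, t, h)
        if g(x, t) * g(x_new, t + h) < 0:        # event inside this step
            lo, hi = 0.0, h
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if g(x, t) * g(rk4_step(f, x, t, mid), t + mid) < 0:
                    hi = mid                     # event in (0, mid)
                else:
                    lo = mid                     # event in (mid, h)
            return rk4_step(f, x, t, hi), t + hi  # state and time at the event
        x, t = x_new, t + h
    return x, t

# Hypothetical hybrid system: a falling body; the event is ground impact.
f = lambda x, t: np.array([x[1], -9.81])          # state = [height, velocity]
g = lambda x, t: x[0]                             # event function: height = 0
state, t_event = integrate_to_event(f, g, np.array([10.0, 0.0]), 0.0, 0.01, 5.0)
print(f"impact at t = {t_event:.4f} s (analytic: {(2 * 10.0 / 9.81) ** 0.5:.4f} s)")
```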
Quality improvement utilizing in-situ simulation for a dual-hospital pediatric code response team.
Yager, Phoebe; Collins, Corey; Blais, Carlene; O'Connor, Kathy; Donovan, Patricia; Martinez, Maureen; Cummings, Brian; Hartnick, Christopher; Noviski, Natan
2016-09-01
Given the rarity of in-hospital pediatric emergency events, identification of gaps and inefficiencies in the code response can be difficult. In-situ, simulation-based medical education programs can identify unrecognized systems-based challenges. We hypothesized that developing an in-situ, simulation-based pediatric emergency response program would identify latent inefficiencies in a complex, dual-hospital pediatric code response system and allow rapid intervention testing to improve performance before implementation at an institutional level. Pediatric leadership from two hospitals with a shared pediatric code response team employed the Institute for Healthcare Improvement's (IHI) Breakthrough Model for Collaborative Improvement to design a program consisting of Plan-Do-Study-Act cycles occurring in a simulated environment. The objectives of the program were to 1) identify inefficiencies in our pediatric code response; 2) correlate to current workflow; 3) employ an iterative process to test quality improvement interventions in a safe environment; and 4) measure performance before actual implementation at the institutional level. Twelve dual-hospital, in-situ, simulated, pediatric emergencies occurred over one year. The initial simulated event allowed identification of inefficiencies including delayed provider response, delayed initiation of cardiopulmonary resuscitation (CPR), and delayed vascular access. These gaps were linked to process issues including unreliable code pager activation, slow elevator response, and lack of responder familiarity with layout and contents of code cart. From first to last simulation with multiple simulated process improvements, code response time for secondary providers coming from the second hospital decreased from 29 to 7 min, time to CPR initiation decreased from 90 to 15 s, and vascular access obtainment decreased from 15 to 3 min. Some of these simulated process improvements were adopted into the institutional response while others continue to be trended over time for evidence that observed changes represent a true new state of control. Utilizing the IHI's Breakthrough Model, we developed a simulation-based program to 1) successfully identify gaps and inefficiencies in a complex, dual-hospital, pediatric code response system and 2) provide an environment in which to safely test quality improvement interventions before institutional dissemination. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warburton, Thomas Karl
2017-01-01
The Deep Underground Neutrino Experiment (DUNE) is a next-generation neutrino experiment which will be built at the Sanford Underground Research Facility (SURF), and will receive a wide-band neutrino beam from Fermilab, 1300 km away. At this baseline DUNE will be able to study many of the properties of neutrino mixing, including the neutrino mass hierarchy and the value of the CP-violating complex phase ($\delta_{CP}$). DUNE will utilise Liquid Argon (LAr) Time Projection Chamber (LArTPC) technology, and the Far Detector (FD) will consist of four modules, each containing 17.1 kt of LAr with a fiducial mass of around 10 kt. Each of these FD modules represents around an order of magnitude increase in size when compared to existing LArTPC experiments. The 35 ton detector is the first DUNE prototype for the single-phase LAr design of the FD. There were two running periods, one from November 2013 to February 2014, and a second from November 2015 to March 2016. During the second running period, a system of TPCs was installed, and cosmic-ray data were collected. A method of particle identification was developed using simulations, though this was not applied to the data due to the higher than expected noise level. A new method of determining the interaction time of a track, using the effects of longitudinal diffusion, was developed using the cosmic-ray data. A camera system was also installed in the detector for monitoring purposes, and to look for high voltage breakdowns. Simulations concerning the muon-induced background rate to nucleon decay are performed, following the incorporation of the MUon Simulations UNderground (MUSUN) generator into the DUNE software framework. A series of cuts based on Monte Carlo truth information is developed, designed to reject simulated background events whilst preserving simulated signal events in the $n \rightarrow K^{+} + e^{-}$ decay channel. No background events are seen to survive the application of these cuts in a sample of $2 \times 10^{9}$ muons, representing 401.6 years of detector live time. This corresponds to an annual background rate of $< 0.44$ events Mt$^{-1}$ year$^{-1}$ at 90% confidence, using a fiducial mass of 13.8 kt.
Abich, Julian; Reinerman-Jones, Lauren; Matthews, Gerald
2017-06-01
The present study investigated how three task demand factors influenced the performance, subjective workload and stress of novice intelligence, surveillance, and reconnaissance operators within a simulation of an unmanned ground vehicle. The manipulations were task type, dual-tasking and event rate. Participants were required to discriminate human targets within a street scene from a direct video feed (threat detection [TD] task) and detect changes in symbols presented in a map display (change detection [CD] task). Dual-tasking elevated workload and distress, and impaired performance on both tasks. However, with increasing event rate, CD performance deteriorated while TD performance improved. Thus, standard workload models provide a better guide to evaluating the demands of abstract symbols than of realistic human characters. Assessment of stress and workload may be especially important in the design and evaluation of systems in which critical signals involving human characters must be detected in video images. Practitioner Summary: This experiment assessed subjective workload and stress during threat detection and change detection tasks performed alone and in combination. Results indicated that an increase in event rate led to significant improvements in performance during TD but decrements during CD, yet both had associated increases in workload and engagement.
Hybrid stochastic simulations of intracellular reaction-diffusion systems.
Kalantzis, Georgios
2009-06-01
With the observation that stochasticity is important in biological systems, chemical kinetics have begun to receive wider interest. While the use of Monte Carlo discrete event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous time models are computationally efficient but they fail to capture any variability in the molecular species. In this study a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency using the exact stochastic algorithm of Gillespie. Therefore the stochastic behavior of cellular pathways is preserved while being able to apply it to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems. First, a model of intracellular viral kinetics with two steady states and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca+2 and NMDA receptors.
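The paper's adaptive partitioning is more involved than can be shown here, but the exact stochastic algorithm of Gillespie that it reserves for low-frequency processes is easy to sketch. Below is a minimal Python implementation of the Gillespie direct method for a hypothetical birth-death reaction system; in the hybrid scheme, the high-frequency channels would instead be advanced with deterministic rate equations.

```python
import random

def gillespie(x, rates, stoich, propensity, t_end):
    """Exact stochastic simulation (Gillespie direct method). In a hybrid
    scheme, only the low-frequency reaction channels would be handled this
    way; fast channels are advanced with deterministic rate equations."""
    t, traj = 0.0, [(0.0, list(x))]
    while t < t_end:
        a = [propensity(j, x, rates) for j in range(len(stoich))]
        a0 = sum(a)
        if a0 == 0:
            break
        t += random.expovariate(a0)       # exponential waiting time
        r, j, acc = random.random() * a0, 0, a[0]
        while acc < r:                    # choose which reaction fires
            j += 1
            acc += a[j]
        x = [xi + s for xi, s in zip(x, stoich[j])]
        traj.append((t, list(x)))
    return traj

# Hypothetical birth-death system: 0 -> A (rate k1), A -> 0 (rate k2 * A)
stoich = [(+1,), (-1,)]
prop = lambda j, x, k: k[0] if j == 0 else k[1] * x[0]
random.seed(2)
print(gillespie([10], (1.0, 0.1), stoich, prop, 50.0)[-1])
```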
A system performance throughput model applicable to advanced manned telescience systems
NASA Technical Reports Server (NTRS)
Haines, Richard F.
1990-01-01
As automated space systems become more complex, autonomous, and opaque to the flight crew, it becomes increasingly difficult to determine whether the total system is performing as it should. Some of the complex and interrelated human performance measurement issues related to total system validation are addressed. An evaluative throughput model is presented which can be used to generate a human-operator-related benchmark or figure of merit for a given system which involves humans at the input and output ends as well as other automated intelligent agents. The concept of sustained and accurate command/control data information transfer is introduced. The first two input parameters of the model involve nominal and off-nominal predicted events; the first of these calls for a detailed task analysis, while the second is for a contingency event assessment. The last two required input parameters involve actual (measured) events, namely human performance and continuous semi-automated system performance. An expression combining these four parameters was found using digital simulations and identical, representative, random data to yield the smallest variance.
NASA Astrophysics Data System (ADS)
Yamada, Y.; Gouda, N.; Yano, T.; Kobayashi, Y.; Suganuma, M.; Tsujimoto, T.; Sako, N.; Hatsutori, Y.; Tanaka, T.
2006-08-01
We explain the simulation tools in the JASMINE project (the JASMINE simulator). The JASMINE project stands at the stage where its basic design will be determined in a few years. It is therefore very important to simulate the data stream generated by astrometric fields at JASMINE in order to support investigations of error budgets, sampling strategy, data compression, data analysis, scientific performances, etc. Of course, component simulations are needed, but total simulations which include all components from observation target to satellite system are also very important. We find that new software technologies, such as Object Oriented (OO) methodologies, are ideal tools for the simulation system of JASMINE (the JASMINE simulator). The simulation system should include all objects in JASMINE, such as observation techniques, models of instruments and bus design, orbit, data transfer, data analysis, etc., in order to resolve all issues which can be expected beforehand and to make it easy to cope with unexpected problems which might occur during the mission of JASMINE. The JASMINE simulator is therefore designed to handle events such as photons from astronomical objects, control signals for devices, and disturbances of the satellite attitude, by instruments such as mirrors and detectors, successively. The simulator is also applied to the technical demonstration "Nano-JASMINE". The accuracy of an ordinary sensor is not enough for initial-phase attitude control, and the mission instruments may serve as a good sensor for this purpose. The problem of attitude control in the initial phase is a good example for this software, because the problem is closely related to both the mission instruments and the satellite bus systems.
"No-Go Considerations" for In Situ Simulation Safety.
Bajaj, Komal; Minors, Anjoinette; Walker, Katie; Meguerdichian, Michael; Patterson, Mary
2018-06-01
In situ simulation is the practice of simulation in the actual clinical environment and has demonstrated utility in the assessment of system processes, identification of latent safety threats, and improvement in teamwork and communication. Nonetheless, performing simulated events in a real patient care setting poses potential risks to patient and staff safety. One integral aspect of a comprehensive approach to ensure the safety of in situ simulation includes the identification and establishment of "no-go considerations," that is, key decision-making considerations under which in situ simulations should be canceled, postponed, moved to another area, or rescheduled. These considerations should be modified and adjusted to specific clinical units. This article provides a framework of key essentials in developing no-go considerations.
Collision-induced evaporation of water clusters and contribution of momentum transfer
NASA Astrophysics Data System (ADS)
Calvo, Florent; Berthias, Francis; Feketeová, Linda; Abdoul-Carime, Hassan; Farizon, Bernadette; Farizon, Michel
2017-05-01
The evaporation of water molecules from high-velocity argon atoms impinging on protonated water clusters has been computationally investigated using molecular dynamics simulations with the reactive OSS2 potential to model water clusters and the ZBL pair potential to represent their interaction with the projectile. Swarms of trajectories and an event-by-event analysis reveal the conditions under which a specific number of molecular evaporation events is found one nanosecond after impact, thereby excluding direct knockout events from the analysis. These simulations provide velocity distributions that exhibit two main features, with a major statistical component arising from a global redistribution of the collision energy into intermolecular degrees of freedom, and another minor but non-ergodic feature at high velocities. The latter feature is produced by direct impacts on the peripheral water molecules and reflects a more complete momentum transfer. These two components are consistent with recent experimental measurements and confirm that electronic processes are not explicitly needed to explain the observed non-ergodic behavior. Contribution to the Topical Issue "Dynamics of Systems at the Nanoscale", edited by Andrey Solov'yov and Andrei Korol.
Reducing uncertainty in Climate Response Time Scale by Bayesian Analysis of the 8.2 ka event
NASA Astrophysics Data System (ADS)
Lorenz, A.; Held, H.; Bauer, E.; Schneider von Deimling, T.
2009-04-01
We analyze the possibility of uncertainty reduction in the climate response time scale by utilizing Greenland ice-core data that contain the 8.2 ka event, within a Bayesian model-data intercomparison with the Earth system model of intermediate complexity CLIMBER-2.3. Within a stochastic version of the model it has been possible to mimic the 8.2 ka event within a plausible experimental setting and with relatively good accuracy considering the timing of the event in comparison to other modeling exercises [1]. The simulation of the centennial cold event is effectively determined by the oceanic cooling rate, which depends largely on the ocean diffusivity, described by diffusion coefficients with relatively wide uncertainty ranges. The idea is to discriminate between different values of the diffusivities according to their likelihood of correctly representing the duration of the 8.2 ka event, and thus to exploit the paleo data to constrain the uncertainty in the model parameters, in analogy to [2]. In implementing this inverse Bayesian analysis with this model, the technical difficulty arises of establishing the related likelihood numerically in addition to the uncertain model parameters: while mainstream uncertainty analyses can assume a quasi-Gaussian shape of the likelihood, with weather fluctuating around a long-term mean, the 8.2 ka event as a highly nonlinear effect precludes such an a priori assumption. As a result of this study [3], the Bayesian analysis showed a reduction of uncertainty in the vertical ocean diffusivity parameters by a factor of 2 compared to prior knowledge. This learning effect on the model parameters is propagated to other model outputs of interest, e.g. the inverse ocean heat capacity, which is important for the dominant time scale of the climate response to anthropogenic forcing and which, in combination with climate sensitivity, strongly influences the climate system's reaction in the near- and medium-term future. References: [1] E. Bauer, A. Ganopolski, M. Montoya: Simulation of the cold climate event 8200 years ago by meltwater outburst from Lake Agassiz. Paleoceanography 19:PA3014 (2004). [2] T. Schneider von Deimling, H. Held, A. Ganopolski, S. Rahmstorf: Climate sensitivity estimated from ensemble simulations of glacial climates. Climate Dynamics 27, 149-163, DOI 10.1007/s00382-006-0126-8 (2006). [3] A. Lorenz, Diploma Thesis, U Potsdam (2007).
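As a schematic of the inverse Bayesian step described, the sketch below updates a flat prior over a grid of candidate diffusivities with a likelihood that scores how well each candidate reproduces an observed event duration. The forward model, the Gaussian likelihood, and all numbers are placeholder assumptions; in the study the likelihood had to be established numerically from CLIMBER-2.3 ensembles precisely because a quasi-Gaussian shape could not be assumed.

```python
import numpy as np

# Hypothetical grid of vertical ocean diffusivity values (cm^2/s)
kappa = np.linspace(0.1, 1.0, 50)
prior = np.ones_like(kappa) / kappa.size       # flat prior over the range

def simulated_duration(k):
    """Placeholder forward model: event duration (years) as a function of
    diffusivity. The real study runs CLIMBER-2.3 for each candidate."""
    return 100.0 + 120.0 * k

obs_duration, sigma = 160.0, 15.0              # hypothetical proxy estimate
# The Gaussian likelihood is itself an assumption made for this sketch only.
lik = np.exp(-0.5 * ((simulated_duration(kappa) - obs_duration) / sigma) ** 2)
posterior = lik * prior
posterior /= posterior.sum()

mean = (kappa * posterior).sum()
sd = np.sqrt(((kappa - mean) ** 2 * posterior).sum())
print(f"posterior kappa = {mean:.2f} +/- {sd:.2f}")
```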
A search for model parsimony in a real time flood forecasting system
NASA Astrophysics Data System (ADS)
Grossi, G.; Balistrocchi, M.
2009-04-01
As regards the hydrological simulation of flood events, a physically based distributed approach is the most appealing one, especially in those areas where the spatial variability of the soil hydraulic properties as well as of the meteorological forcing cannot be left apart, such as in mountainous regions. On the other hand, dealing with real-time flood forecasting systems, less detailed models requiring a smaller number of parameters may be more convenient, reducing both the computational costs and the calibration uncertainty. In fact, in this case a precise quantification of the entire hydrograph pattern is not necessary, while the expected output of a real-time flood forecasting system is just an estimate of the peak discharge, the time to peak and, in some cases, the flood volume. In this perspective a parsimonious model has to be found in order to increase the efficiency of the system. A suitable case study was identified in the northern Apennines: the Taro river is a right tributary of the Po river and drains about 2000 km2 of mountains, hills and floodplain, equally distributed. The hydrometeorological monitoring of this medium-sized watershed is managed by ARPA Emilia Romagna through a dense network of up-to-date gauges (about 30 rain gauges and 10 hydrometers). Detailed maps of the surface elevation, land use and soil texture characteristics are also available. Five flood events were recorded by the new monitoring network in the years 2003-2007: during these events the peak discharge was higher than 1000 m3/s, which is actually quite a high value when compared to the mean discharge rate of about 30 m3/s. The rainfall spatial patterns of such storms were analyzed in previous works by means of geostatistical tools and a typical semivariogram was defined, with the aim of establishing a typical storm structure leading to flood events in the Taro river. The available information was implemented into a distributed flood event model with a spatial resolution of 90 m; then the hydrologic detail was reduced by progressively assuming a uniform rainfall field and constant soil properties. A semi-distributed model, obtained by subdividing the catchment into three sub-catchments, and a lumped model were also applied to simulate the selected flood events. Errors were quantified in terms of the peak discharge ratio, the flood volume and the time to peak by comparing the simulated hydrographs to the observed ones.
Dynamic processes in heavy-ion collisions at intermediate energies
NASA Astrophysics Data System (ADS)
Prendergast, E. P.
1999-03-01
This thesis describes the study of the reaction dynamics in heavy-ion collisions of small nuclear systems at intermediate energies. For this, experiments were performed on 24Mg+27Al at 45 and 95 AMeV. The experiments described in this thesis were performed at the GANIL accelerator facility in Caen (France) using the Huygens detectors in conjunction with the 'MUR'. The Huygens detectors consist of the CsI(Tl)-Wall (CIW) covering the backward hemisphere and, located at mid-rapidity, the central trigger detector (CTD), a gas chamber with microstrip read-out backed by 48 plastic scintillators. The forward region is covered by 16 of the plastic scintillators of the CTD and by the MUR, a time-of-flight wall consisting of 96 plastic scintillator sheets. In earlier experiments only fragments with atomic number, Z, greater than two could be identified in the CTD. Therefore, an investigation was done into the properties of different drift gases. The use of freon (CF4) in the drift chamber, combined with an increase of the gas pressure to 150 mbar, makes it possible to identify all particles with Z ≥ 2. Under these conditions particles with Z = 1 can only be identified up to approximately 25 AMeV. The Isospin Quantum Molecular Dynamics (IQMD) model has been used to interpret the measured data. This model gives a microscopical description of heavy-ion collisions and simulates collisions on an event-by-event basis. In IQMD all protons and neutrons are represented as individual Gaussian wave packets. After initialisation the path of each nucleon is calculated for 200 fm/c, after which the simulation is stopped. At this time, nucleons which are close in space are clustered into fragments. The events generated by IQMD can then be processed by a GEANT detector simulation. This calculation takes into account the effects of the detector on the incoming particles. By using the GEANT simulation it is possible to make a direct comparison between the results of IQMD and the experimental data. The impact-parameter selection procedure, based on the charged-particle multiplicity, was studied using IQMD events and the GEANT detector simulation. This showed that an impact-parameter selection can indeed be made with this method. However, the accuracy of this selection for these small systems is not very good; in particular the central-event selection is heavily polluted by mid-central events. Only mid-central events have been studied for 24Mg+27Al at 45 and 95 AMeV. In order to study the collective flow in heavy-ion collisions, first the event plane has to be reconstructed. Again IQMD events and the GEANT detector simulation were used to investigate the effectiveness of several different event-plane reconstruction methods. It was found that an event plane can be reconstructed; the azimuthal-correlation method gives marginally the best result. With this method to reconstruct the reaction plane, the directed in-plane flow was studied. The experimental data showed a strongly reduced flow at 95 AMeV compared to 45 AMeV, in accordance with a balancing energy of 114 ± 10 AMeV as derived from the literature. Finally, the reaction dynamics were studied using the azimuthal correlations and the polar-angle distributions of intermediate-mass fragments (IMFs) emitted at mid-rapidity, both of which do not require an event-plane reconstruction. The azimuthal correlations for the two energies are quite similar, whereas the directed in-plane flow is substantially higher at 45 AMeV than at 95 AMeV. This shows that the azimuthal correlations are insensitive to the magnitude of the directed in-plane flow. At both energies, the azimuthal-correlation functions for the various IMFs show absolute maxima at 180°, which cannot be explained by a mid-rapidity source emitting fragments independently. However, the distributions are described by IQMD. The maxima are caused either by target-projectile correlations (as in IQMD) or by momentum conservation. To describe the momentum-conservation scenario, a second model was introduced, which simulates the prompt multifragmentation of a small source. This model was fitted to the measured azimuthal-correlation functions, resulting in source sizes between 32 and 40 amu, depending on the mass of the emitted IMFs. Subsequently, the polar-angle distributions of the two models were compared to the experimental data. The distributions of the experimental data showed target- and projectile-like maxima, which cannot be described by a decaying source, but are described by IQMD. Therefore, it is concluded that the IMF production in these small systems is a dynamic process with no evidence of a mid-rapidity source.
Computer simulation of earthquakes
NASA Technical Reports Server (NTRS)
Cohen, S. C.
1976-01-01
Two computer simulation models of earthquakes were studied for the dependence of the pattern of events on the model assumptions and input parameters. Both models represent the seismically active region by mechanical blocks which are connected to one another and to a driving plate; the blocks slide on a friction surface. The first model employed elastic forces and time-independent friction to simulate main shock events. The size, length, and time and place of event occurrence were influenced strongly by the magnitude and degree of homogeneity in the elastic and friction parameters of the fault region. Periodically recurring similar events were frequently observed in simulations with near-homogeneous parameters along the fault, whereas seismic gaps were a common feature of simulations employing large variations in the fault parameters. The second model incorporated viscoelastic forces and time-dependent friction to account for aftershock sequences. The periods between aftershock events increased with time, and the aftershock region was confined to that which moved in the main event.
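The first model described is a spring-block system of the Burridge-Knopoff type; a minimal one-dimensional sketch is given below, in which blocks coupled to their neighbours and to a steadily moving driver plate slip whenever the net spring force exceeds a heterogeneous static friction threshold. All parameters are hypothetical, and the slip rule (relax each unstable block to its zero-force position) is a common simplification rather than the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
kc, kp, v = 1.0, 0.2, 1e-3          # coupling, plate spring, plate velocity
static = 1.0 + 0.1 * rng.random(N)  # heterogeneous static friction thresholds
x = np.zeros(N)                     # block displacements
events = []

for step in range(100_000):
    plate = v * step
    left, right = np.roll(x, 1), np.roll(x, -1)   # periodic (ring) boundaries
    force = kc * (left - 2 * x + right) + kp * (plate - x)
    unstable = np.abs(force) > static
    size = 0
    while unstable.any():           # cascade: slipping blocks load neighbours
        x[unstable] += force[unstable] / (2 * kc + kp)  # slip to zero force
        size += int(unstable.sum())
        left, right = np.roll(x, 1), np.roll(x, -1)
        force = kc * (left - 2 * x + right) + kp * (plate - x)
        unstable = np.abs(force) > static
    if size:
        events.append((step, size))

print(f"{len(events)} events; largest involved {max(s for _, s in events)} slips")
```

With near-homogeneous thresholds the model tends toward recurring similar events; widening the threshold spread produces the irregular patterns and gaps the abstract describes.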
NASA Technical Reports Server (NTRS)
Sarani, Siamak
2010-01-01
This paper describes a methodology for the accurate and flight-calibrated determination of the on-times of the Cassini spacecraft Reaction Control System (RCS) thrusters during reaction wheel biases, without any form of dynamic simulation. The hydrazine usage and the delta-V vector in the body frame are also computed from the respective thruster on-times. The Cassini spacecraft, the largest and most complex interplanetary spacecraft ever built, continues to undertake ambitious and unique scientific observations of the planet Saturn, Titan, Enceladus, and other moons of Saturn. In order to maintain a stable attitude during the course of its mission, this three-axis-stabilized spacecraft uses two different control systems: the RCS and the reaction wheel assembly control system. The RCS is used to execute a commanded spacecraft slew, to maintain three-axis attitude control, to control the spacecraft's attitude while performing science observations with coarse pointing requirements (e.g. during targeted low-altitude Titan and Enceladus flybys), to bias the momentum of the reaction wheels, and to perform RCS-based orbit trim maneuvers. The use of the RCS often imparts undesired delta-V on the spacecraft. The Cassini navigation team requires accurate predictions of the delta-V in spacecraft coordinates and the inertial frame resulting from slews using RCS thrusters and, more importantly, from reaction wheel bias events. It is crucial for the Cassini spacecraft attitude control and navigation teams to be able to predict, quickly but accurately, the hydrazine usage and delta-V for various reaction wheel bias events without actually having to spend time and resources simulating the event in flight-software-based dynamic simulation or hardware-in-the-loop simulation environments. The methodology described in this paper, and the ground software developed thereof, are designed to provide just that. This methodology assumes a priori knowledge of the thrust magnitudes and thruster pulse rise and tail-off time constants for the eight individual attitude control thrusters, the spacecraft's wet mass and its center-of-mass location, and a few other key parameters.
Assessment of an Automated Touchdown Detection Algorithm for the Orion Crew Module
NASA Technical Reports Server (NTRS)
Gay, Robert S.
2011-01-01
Orion Crew Module (CM) touchdown detection is critical to activating the post-landing sequence that safes the Reaction Control System (RCS) jets, ensures that the vehicle remains upright, and establishes communication with recovery forces. In order to accommodate safe landing of an unmanned vehicle or an incapacitated crew, an onboard automated detection system is required. An Orion-specific touchdown detection algorithm was developed and evaluated to differentiate landing events from in-flight events. The proposed method will be used to initiate post-landing cutting of the parachute riser lines, to prevent CM rollover, and to terminate RCS jet firing prior to submersion. The RCS jets continue to fire until touchdown to maintain proper CM orientation with respect to the flight path and to limit impact loads, but have potentially hazardous consequences if submerged while firing. The time available after impact to cut risers and initiate the CM Up-righting System (CMUS) is measured in minutes, whereas the time from touchdown to RCS jet submersion is a function of descent velocity and sea state conditions, and is often less than one second. Evaluation of the detection algorithms was performed for in-flight events (e.g. descent under chutes) using high-fidelity rigid body analyses in the Decelerator Systems Simulation (DSS), whereas water impacts were simulated using a rigid finite element model of the Orion CM in LS-DYNA. Two touchdown detection algorithms were evaluated with various thresholds: acceleration magnitude spike detection, and accumulated velocity change (over a given time window) spike detection. Data for both detection methods are acquired from an onboard Inertial Measurement Unit (IMU) sensor. The detection algorithms were tested with analytically generated in-flight and landing IMU data simulations. The acceleration spike detection proved to be faster while maintaining the desired safety margin. Time to RCS jet submersion was predicted analytically across a series of simulated Orion landing conditions. This paper details the touchdown detection method chosen and the analysis used to support the decision.
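The two candidate rules are simple enough to sketch. The following Python fragment applies both to a stream of simulated IMU acceleration samples: rule 1 triggers on an instantaneous acceleration-magnitude spike, rule 2 on the accumulated velocity change over a sliding window. The thresholds, sample rate, and synthetic trace are hypothetical, not the flight values.

```python
from collections import deque

ACCEL_THRESHOLD = 40.0   # m/s^2, hypothetical spike threshold
DV_THRESHOLD = 6.0       # m/s accumulated over the window, hypothetical
WINDOW = 0.2             # s, sliding window length
DT = 0.01                # s, i.e. 100 Hz IMU samples

def detect_touchdown(accel_samples):
    """Return (method, time) for whichever rule triggers first."""
    window, dv = deque(), 0.0
    for i, a in enumerate(accel_samples):
        t = i * DT
        if abs(a) > ACCEL_THRESHOLD:            # rule 1: acceleration spike
            return "accel_spike", t
        window.append(a * DT); dv += a * DT     # rule 2: delta-v over window
        if len(window) > int(WINDOW / DT):
            dv -= window.popleft()
        if abs(dv) > DV_THRESHOLD:
            return "delta_v", t
    return None, None

# Hypothetical trace: parachute-descent jitter, then a landing impulse.
descent = [1.5] * 300       # below both thresholds
impact = [60.0] * 5         # landing impulse at t = 3.0 s
print(detect_touchdown(descent + impact))   # -> ('accel_spike', 3.0)
```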
NASA Astrophysics Data System (ADS)
Nair, U. S.; Keiser, K.; Wu, Y.; Maskey, M.; Berendes, D.; Glass, P.; Dhakal, A.; Christopher, S. A.
2012-12-01
The Alabama Forestry Commission (AFC) is responsible for wildfire control and also prescribed burn management in the state of Alabama. Visibility and air quality degradation resulting from smoke are two pieces of information that are crucial for this activity. Currently the tools available to the AFC are the dispersion index available from the National Weather Service and surface smoke concentrations. The former provides broad guidance for prescribed burning activities but does not provide specific information regarding smoke transport, the areas affected, or quantification of air quality and visibility degradation. While the NOAA operational air quality guidance includes surface smoke concentrations from existing fire events, it does not account for contributions from background aerosols, which are important for the southeastern region, including Alabama. Also lacking is the quantification of visibility. The University of Alabama in Huntsville has developed a state-of-the-art integrated modeling system to address these concerns. This system is based on the Community Air Quality Modeling System (CMAQ); it ingests satellite-derived smoke emissions and assimilates NASA MODIS-derived aerosol optical thickness. In addition, this operational modeling system simulates the impact of potential prescribed burn events based on location information derived from the AFC prescribed burn permit database. A Lagrangian model is used to simulate smoke plumes for the prescribed burn requests. The combined air quality and visibility degradation resulting from these smoke plumes and background aerosols is computed, and the information is made available through a web-based decision support system utilizing open-source GIS components. This system provides information regarding intersections between highways and other critical facilities such as old age homes, hospitals and schools. The system also includes satellite-detected fire locations and other satellite-derived datasets relevant for fire and smoke management.
Extreme event statistics in a drifting Markov chain
NASA Astrophysics Data System (ADS)
Kindermann, Farina; Hohmann, Michael; Lausch, Tobias; Mayer, Daniel; Schmidt, Felix; Widera, Artur
2017-07-01
We analyze extreme event statistics of experimentally realized Markov chains with various drifts. Our Markov chains are individual trajectories of a single atom diffusing in a one-dimensional periodic potential. Based on more than 500 individual atomic traces we verify the applicability of the Sparre Andersen theorem to our system despite the presence of a drift. We present detailed analysis of four different rare-event statistics for our system: the distributions of extreme values, of record values, of extreme value occurrence in the chain, and of the number of records in the chain. We observe that, for our data, the shape of the extreme event distributions is dominated by the underlying exponential distance distribution extracted from the atomic traces. Furthermore, we find that even small drifts influence the statistics of extreme events and record values, which is supported by numerical simulations, and we identify cases in which the drift can be determined without information about the underlying random variable distributions. Our results facilitate the use of extreme event statistics as a signal for small drifts in correlated trajectories.
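As a rough illustration of the record statistics involved, the sketch below simulates random walks whose steps are signed exponential distances plus a constant drift, and counts record events (new maxima) per trajectory. The step distribution mirrors the exponential distance distribution mentioned in the abstract, but all parameters are assumptions; the zero-drift case is the reference to which the Sparre Andersen theorem applies.

```python
import random

def records_in_walk(n_steps, drift, rng):
    """Count record events (new maxima) in one random-walk trajectory whose
    steps are signed exponential distances plus a constant drift."""
    pos, best, records = 0.0, 0.0, 0
    for _ in range(n_steps):
        pos += rng.choice((-1, 1)) * rng.expovariate(1.0) + drift
        if pos > best:
            best, records = pos, records + 1
    return records

rng = random.Random(42)
for drift in (0.0, 0.05, -0.05):
    mean = sum(records_in_walk(500, drift, rng) for _ in range(2000)) / 2000
    print(f"drift {drift:+.2f}: mean records in 500 steps = {mean:.1f}")
```

Even the small drifts used here visibly shift the mean record count relative to the symmetric case, which is the qualitative effect the paper reports.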
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment failures, and operational failures. Thus, meeting the established reuse/discharge criteria requires an assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the reliability problem was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence; the mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
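Complementing the analytic cut-set calculation sketched earlier in this collection, the Monte Carlo side of such an analysis can be illustrated in a few lines: sample the basic events independently, evaluate the fault-tree logic, and estimate the top-event frequency. The tree structure, event names, and probabilities below are hypothetical stand-ins, not the Tehran West Town values.

```python
import random

# Hypothetical basic-event probabilities per operating period
p = {"operator_error": 0.02, "mechanical_damage": 0.01,
     "design_problem": 0.005, "sewer_overload": 0.015}

def top_event(e):
    """Hypothetical fault-tree logic for violating the effluent BOD limit:
    (operator error AND sewer overload) OR mechanical damage OR design problem."""
    return (e["operator_error"] and e["sewer_overload"]) \
        or e["mechanical_damage"] or e["design_problem"]

def monte_carlo(n, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        sample = {k: rng.random() < pk for k, pk in p.items()}
        hits += top_event(sample)
    return hits / n

print(monte_carlo(200_000))   # approaches ~0.0153 for these numbers
```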
Exchange Service Station Gasoline Pumping Operation Simulation.
1980-06-01
…an event step simulation model of the Naval operations. The model has been developed as a management tool and aid to decision making. The environment in which the system operates is discussed and the significant … of the variables such as arrival rates; while others are primarily controlled by managerial decision making, for example the number of pumps available.
Optimal Discrete Event Supervisory Control of Aircraft Gas Turbine Engines
NASA Technical Reports Server (NTRS)
Litt, Jonathan (Technical Monitor); Ray, Asok
2004-01-01
This report presents an application of the recently developed theory of optimal Discrete Event Supervisory (DES) control that is based on a signed real measure of regular languages. The DES control techniques are validated on an aircraft gas turbine engine simulation test bed. The test bed is implemented on a networked computer system in which two computers operate in the client-server mode. Several DES controllers have been tested for engine performance and reliability.
An Overview of Grain Growth Theories for Pure Single Phase Systems,
1986-10-01
the fundamental causes for these distributions. This Blanc and Mocellin (1979) and Carnal and Mocellin (1981) set out to do. 7.1 Monte-Carlo Simulations … (termed event B) (in 2-D) of 3-sided grains. (2) Neighbour-switching (termed event C). Blanc and Mocellin (1979) dealt with 2-D sections through … Kurtz and Carpay (1980a). 7.2 Analytical Method to Obtain fn. Carnal and Mocellin (1981) obtained the distribution of grain coordination numbers in
Shannon, Robin; Glowacki, David R
2018-02-15
The chemical master equation is a powerful theoretical tool for analyzing the kinetics of complex multiwell potential energy surfaces in a wide range of different domains of chemical kinetics spanning combustion, atmospheric chemistry, gas-surface chemistry, solution phase chemistry, and biochemistry. There are two well-established methodologies for solving the chemical master equation: a stochastic "kinetic Monte Carlo" approach and a matrix-based approach. In principle, the results yielded by both approaches are identical; the decision of which approach is better suited to a particular study depends on the details of the specific system under investigation. In this Article, we present a rigorous method for accelerating stochastic approaches by several orders of magnitude, along with a method for unbiasing the accelerated results to recover the "true" value. The approach we take in this paper is inspired by the so-called "boxed molecular dynamics" (BXD) method, which has previously only been applied to accelerate rare events in molecular dynamics simulations. Here we extend BXD to design a simple algorithmic strategy for accelerating rare events in stochastic kinetic simulations. Tests on a number of systems show that the results obtained using the BXD rare event strategy are in good agreement with unbiased results. To carry out these tests, we have implemented a kinetic Monte Carlo approach in MESMER, which is a cross-platform, open-source, and freely available master equation solver.
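The unbiasing machinery of the BXD rare-event strategy is beyond a short sketch, but the boxing idea itself can be illustrated on a toy problem. For a one-dimensional nearest-neighbour chain, the mean first-passage time from state 0 to state N decomposes exactly into a sum of one-step passage times T(i -> i+1); estimating each of these from short trajectories confined above a reflecting lower boundary (the "box") is the acceleration, and it is unbiased only to the extent that the walker equilibrates locally within the box. All parameters are hypothetical.

```python
import random

P_UP = 0.35          # hypothetical uphill step probability (climbing is rare)
N, BOX = 12, 2       # target state and box width below each start point

def step(pos, lo):
    """One nearest-neighbour move with a reflecting boundary at lo (the box)."""
    pos += 1 if random.random() < P_UP else -1
    return max(pos, lo)

def fpt_up(start, lo, trials=2000):
    """Mean first-passage time from start to start+1, confined above lo."""
    total = 0
    for _ in range(trials):
        pos, t = start, 0
        while pos <= start:
            pos, t = step(pos, lo), t + 1
        total += t
    return total / trials

# Chain the per-level passage times: exactly, T(0 -> N) = sum_i T(i -> i+1);
# confining each estimate to a small box below i is the BXD-style acceleration,
# exact only under local equilibration within the box.
random.seed(3)
mfpt = sum(fpt_up(i, max(0, i - BOX)) for i in range(N))
print(f"estimated mean first-passage time 0 -> {N}: {mfpt:.0f} steps")
```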
Toward sensor-based context aware systems.
Sakurai, Yoshitaka; Takada, Kouhei; Anisetti, Marco; Bellandi, Valerio; Ceravolo, Paolo; Damiani, Ernesto; Tsuruta, Setsuo
2012-01-01
This paper proposes a methodology for sensor data interpretation that can combine sensor outputs with contexts represented as sets of annotated business rules. Sensor readings are interpreted to generate events labeled with the appropriate type and level of uncertainty. Then, the appropriate context is selected. Reconciliation of different uncertainty types is achieved by a simple technique that moves uncertainty from events to business rules by generating combs of standard Boolean predicates. Finally, context rules are evaluated together with the events to take a decision. The feasibility of our idea is demonstrated via a case study where a context-reasoning engine has been connected to simulated heartbeat sensors using prerecorded experimental data. We use sensor outputs to identify the proper context of operation of a system and trigger decision-making based on context information.
Numerical simulations of significant orographic precipitation in Madeira island
NASA Astrophysics Data System (ADS)
Couto, Flavio Tiago; Ducrocq, Véronique; Salgado, Rui; Costa, Maria João
2016-03-01
High-resolution simulations of high precipitation events with the MESO-NH model are presented and used to verify that increasing the horizontal resolution in zones of complex orography, such as Madeira island, improves the simulation of the spatial distribution and total precipitation. The simulations succeeded in reproducing the general structure of the cloud systems over the ocean in the four periods of significant accumulated precipitation considered. The accumulated precipitation over Madeira was better represented at the 0.5 km horizontal resolution and occurred under four distinct synoptic situations. Different spatial patterns of the rainfall distribution over Madeira have been identified.
NASA Astrophysics Data System (ADS)
Chen, C. T.; Lo, S. H.; Wang, C. C.; Tsuboki, K.
2017-12-01
More than 2000 mm of rainfall fell over southern Taiwan when Category 1 Typhoon Morakot passed through Taiwan in early August 2009. Entire villages and hundreds of people were buried by massive mudslides induced by the record-breaking precipitation. Whether past anthropogenic warming played a significant role in such an extreme event remains very controversial. On one hand, people argue that it is nearly impossible to attribute an individual extreme event to global warming; on the other hand, the increase of heavy rainfall is consistent with the expected effects of climate change on tropical cyclones. To diagnose possible anthropogenic contributions to the odds of such heavy rainfall associated with Typhoon Morakot, we adapt an existing probabilistic event attribution framework to simulate a "world that was" and compare it with an alternative condition, a "world that might have been", that removes the historical anthropogenic drivers of climate. One limitation of applying such an approach to a high-impact weather system is that it requires models capable of capturing the essential processes that lead to the studied extremes. Using a cloud-system-resolving model that can properly simulate the complicated interactions between the tropical cyclone, the large-scale background, and topography, we first perform the ensemble "world that was" simulations using the high-resolution ECMWF YOTC analysis. We then re-simulate, having adjusted the analysis to "world that might have been" conditions by removing the regional atmospheric and oceanic forcing due to human influences, estimated from the CMIP5 ensemble-mean differences between all-forcing and natural-forcing-only historical runs. Our findings are thus highly conditional on the driving analysis and the adjustments therein, but the setup allows us to elucidate the possible contribution of anthropogenic forcing to changes in the likelihood of the heavy rainfall associated with Typhoon Morakot in early August 2009.
Simulation of Runoff Concentration on Arable Fields and the Impact of Adapted Tillage Practises
NASA Astrophysics Data System (ADS)
Winter, F.; Disse, M.
2012-04-01
Conservation tillage can reduce runoff on arable fields. Due to crop residues remaining on the fields, a seasonally constant ground cover is achieved. This additional soil cover not only decreases the drying of the topsoil but also reduces the mechanical impact of raindrops and the soil crust that may result. Further implications of the mulch layer can be observed during heavy precipitation events and the resulting surface runoff: the natural roughness of the ground surface is increased and thus the flow velocity is decreased, resulting in an enhanced ability of runoff to infiltrate into the soil (so-called run-on infiltration). The hydrological model system WaSiM-ETH hitherto simulates runoff concentration by a flow time grid in the catchment, which is derived from topographical features of the catchment during the preprocessing analysis. The retention of both surface runoff and interflow is modelled by a single reservoir in every discrete flow time zone until the outlet of a subcatchment is reached. For a more detailed analysis of the flow paths in catchments of the lower mesoscale (< 1 km2), the model was extended by a kinematic wave approach for the surface runoff concentration. This allows the simulation of small-scale variations in runoff generation and their temporal distribution in detail, so that the effect of adapted tillage systems can be assessed. On individual fields of the Scheyern research farm north-west of Munich it can be shown how different crops and tillage practices influence runoff generation and concentration during single heavy precipitation events. From the simulation of individual events in agricultural areas of the lower mesoscale, hydrologically susceptible areas can be identified and the positive impact of an adapted agricultural management on runoff generation and concentration can be quantified.
Simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scaleable simulation of an Intelligent Transportation System (ITS). The simulator is designed for running on parallel computers and distributed (networked) computer systems, but can run on standalone workstations for smaller simulations. The simulator currently models instrumented smart vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles), and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. Realistic modeling of variations of the posted driving speed is based on human-factors studies that take into consideration weather, road conditions, driver personality and behavior, and vehicle type. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on parallel computers, such as ANL's IBM SP-2, for large-scale problems. A novel feature of the approach is that vehicles are represented by autonomous computer processes which exchange messages with other processes. The vehicles have a behavior model which governs route selection and driving behavior, and can react to external traffic events much like real vehicles. With this approach, the simulation is scaleable to take advantage of emerging massively parallel processor (MPP) systems.
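In the actual framework each vehicle is an autonomous operating-system process exchanging messages over the network; the single-process Python sketch below mimics the same message pattern with an event queue, in which vehicles report positions to a TMC object and receive advisories that change their link travel times. The congestion rule and all parameters are hypothetical.

```python
import heapq, random

random.seed(7)

class TMC:
    """Traffic Management Center: tracks probe vehicles and issues advisories."""
    def __init__(self):
        self.positions = {}
    def receive(self, vid, link):
        self.positions[vid] = link          # probe-vehicle position report
    def advisory(self, link):
        # Hypothetical congestion rule: 3+ probes on the same link -> "slow"
        n = sum(1 for p in self.positions.values() if p == link)
        return "slow" if n >= 3 else "normal"

def run(n_vehicles=8, t_end=50.0):
    tmc, events = TMC(), []
    # Each queue entry is one vehicle's next link-entry event: (time, id, link)
    for vid in range(n_vehicles):
        heapq.heappush(events, (random.uniform(0.0, 5.0), vid, 0))
    while events and events[0][0] < t_end:
        t, vid, link = heapq.heappop(events)
        tmc.receive(vid, link)                           # message to the TMC
        speed = 0.5 if tmc.advisory(link) == "slow" else 1.0  # advisory back
        heapq.heappush(events, (t + random.expovariate(speed), vid, link + 1))
    return tmc.positions                                 # last reported links

print(run())
```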
NASA Astrophysics Data System (ADS)
Wang, Zhu; Shi, Peijun; Zhang, Zhao; Meng, Yongchang; Luan, Yibo; Wang, Jiwei
2017-09-01
Separating out the influences of climatic trend, fluctuations, and extreme events on crop yield is of paramount importance to climate change adaptation, resilience, and mitigation. Previous studies lack a systematic and explicit assessment of these three fundamental aspects of climate change on crop yield. This research attempts to separate out the impacts on rice yields of climatic trend (linear trend change related to the mean value), fluctuations (variability surpassing the "fluctuation threshold", defined as one standard deviation (1 SD) of the residual between the original data series and the linear-trend value for each climatic variable), and extreme events (identified by an absolute criterion for each kind of extreme event related to crop yield). The core of the method was to construct climate scenarios and combine them with a crop system simulation model. Comparable climate scenarios were designed to express the impact of each climate change component and were input to the crop system model (CERES-Rice), which calculated the corresponding simulated yield gaps to quantify the percentage impacts of climatic trend, fluctuations, and extreme events. Six Agro-Meteorological Stations (AMS) in Hunan province were selected to study quantitatively the impact of climatic trend, fluctuations, and extreme events involving three climatic variables (air temperature, precipitation, and sunshine duration) on early rice yield during 1981-2012. Extreme events were found to have the greatest impact on early rice yield (-2.59 to -15.89%), followed by climatic fluctuations (-2.60 to -4.46%) and the climatic trend (4.91-2.12%). Furthermore, the influence of climatic trend on early rice yield presented "trade-offs" among the various climate variables and AMS. Climatic trend and extreme events associated with air temperature showed larger effects on early rice yield than the other climatic variables, particularly high-temperature events (-2.11 to -12.99%). Finally, the methodology used to separate out the influences of climatic trend, fluctuations, and extreme events on crop yield proved feasible and robust. Designing different climate scenarios and feeding them into a crop system model is a viable way to evaluate the quantitative impact of each climate variable.
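The fluctuation threshold described above (1 SD of the residual about the linear trend) is straightforward to compute. A minimal sketch follows, with synthetic data standing in for the AMS records, which are not provided here.

```python
import numpy as np

def fluctuation_threshold(series, years):
    """Linear trend and 1-SD fluctuation threshold for one climatic
    variable, as defined in the abstract: the threshold is one standard
    deviation of the residual between the series and its linear trend."""
    slope, intercept = np.polyfit(years, series, 1)
    trend = slope * years + intercept
    residual = series - trend
    threshold = residual.std(ddof=1)            # 1 SD of the residuals
    fluct_years = np.abs(residual) > threshold  # years flagged as fluctuations
    return trend, threshold, fluct_years

# Example with synthetic annual mean temperatures, 1981-2012
years = np.arange(1981, 2013, dtype=float)
temps = 18.0 + 0.02 * (years - 1981) \
        + np.random.default_rng(0).normal(0.0, 0.4, years.size)
trend, thr, flagged = fluctuation_threshold(temps, years)
```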
NASA Astrophysics Data System (ADS)
Zhuang, J.; Vere-Jones, D.; Ogata, Y.; Christophersen, A.; Savage, M. K.; Jackson, D. D.
2008-12-01
In this study we investigate the foreshock probabilities calculated from earthquake catalogs from Japan, Southern California, and New Zealand. Unlike conventional studies on foreshocks, we use a probability-based declustering method to separate each catalog into stochastic versions of family trees, such that each event is classified as either having been triggered by a preceding event or being a spontaneous event. The probabilities are determined from parameters that provide the best fit to the real catalog using a space-time epidemic-type aftershock sequence (ETAS) model. The model assumes that background and triggered earthquakes have the same magnitude-dependent triggering capability. A foreshock here is defined as a spontaneous event that has one or more larger descendants, and a triggered foreshock is a triggered event that has one or more larger descendants. The proportion of foreshocks among the spontaneous events of each catalog is found to be lower than the proportion of triggered foreshocks among triggered events. One possibility is that this is due to different triggering productivity in spontaneous versus triggered events, i.e., a triggered event triggers more children than a spontaneous event of the same magnitude. To understand what causes these differences between spontaneous and triggered events, we apply the same procedures to several synthetic catalogs simulated using different models. The first simulation uses the ETAS model with the parameters and spontaneous rate fitted from the JMA catalog. The second synthetic catalog is simulated using an adjusted ETAS model that takes into account the triggering effect of events below the magnitude threshold: we simulate a catalog with a low magnitude threshold using the original ETAS model, and then remove the events smaller than a higher magnitude threshold. The third model for simulation assumes that spontaneous and triggered events have different triggering behaviors. We repeat the fitting and reconstruction procedures for all of these simulated catalogs. The reconstruction results for the first synthetic catalog show neither the difference between spontaneous and triggered events nor the differences in foreshock probabilities; results from the synthetic catalogs simulated with the second and third models, on the other hand, clearly reproduce such differences. In summary, our results imply that such differences may be caused by neglecting the triggering effect of events smaller than the cut-off magnitude, or by magnitude errors. For the purpose of forecasting seismicity, a clustering model in which spontaneous events trigger child events differently from triggered events can be used to avoid over-predicting earthquake risks from foreshocks. To understand the physical implications of this study, further careful work is needed to compare real seismicity with the adjusted ETAS model, which takes into account the triggering effect of events below the cut-off magnitude.
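For readers unfamiliar with ETAS simulation, a minimal temporal branching sketch is given below. All parameter values are hypothetical (not the fitted JMA values), the spatial component is omitted, and the parent index recorded for each event is what allows the spontaneous/triggered classification described above.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical, subcritical temporal-ETAS parameters (alpha < b)
mu, k, alpha, c, p, b, m0, T = 0.2, 0.05, 0.8, 0.01, 1.2, 1.0, 3.0, 1000.0

def gr_magnitude(n):
    # Gutenberg-Richter magnitudes above m0: exponential with rate b*ln(10)
    return m0 + rng.exponential(1.0 / (b * np.log(10)), n)

def omori_delay(n):
    # Omori-Utsu delays via inverse-CDF sampling, valid for p > 1
    u = rng.random(n)
    return c * ((1.0 - u) ** (1.0 / (1.0 - p)) - 1.0)

# Spontaneous (background) events: homogeneous Poisson process on [0, T]
n_bg = rng.poisson(mu * T)
events = [(t, m, None) for t, m in zip(rng.uniform(0, T, n_bg),
                                       gr_magnitude(n_bg))]
# Branching: every event triggers a Poisson number of offspring
todo = list(range(len(events)))
while todo:
    i = todo.pop()
    t, m, _ = events[i]
    n_off = rng.poisson(k * 10.0 ** (alpha * (m - m0)))
    for dt, mm in zip(omori_delay(n_off), gr_magnitude(n_off)):
        if t + dt < T:
            todo.append(len(events))
            events.append((t + dt, mm, i))   # parent index i => "triggered"
events.sort(key=lambda e: e[0])
```

Events with parent `None` are the spontaneous ones; a foreshock in the abstract's sense is then a spontaneous event with at least one larger descendant in its family tree.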
SURFACE FLOODS IN COIMBRA: simple and dual-drainage studies
NASA Astrophysics Data System (ADS)
Leitão, J. P.; Simões, N. E.; Pina, R.; Marques, A. Sá; Maksimović, Č.; Gonçalves, Gil
2009-09-01
Surface water flooding occurs due to extreme rainfall and the inability of the sewer system to drain all runoff. As a consequence, a considerable volume of water is carried over the surface through preferential flow paths and can eventually accumulate in natural (or man-made) ponds. This can cause minor material losses, but also major incidents with obvious consequences for economic activities and everyday life. Unfortunately, owing to predicted climate change and increasing urbanisation, urban flooding has been reported more often. The Portuguese city of Coimbra is a medium-sized city that suffered several river floods in the past; after the construction of hydraulic control structures, the number of fluvial flood events was greatly reduced. In the 1990s, however, two new problems emerged: houses started to be built on flood-plain areas, and some areas experienced a boom in urbanisation. This created flood problems of a different type, shifting the flooding from the traditional areas along the river to new areas with no previous history of flooding. The catchment studied has a total area of approximately 1.5 km2 and discharges into the Coselhas brook. The catchment can be divided into three regions with different characteristics: (i) the "Lower City", a low-lying area of 0.4 km2 with a combined sewer system; (ii) the "Upper City", a considerably hilly, highly urbanised area of approximately 0.2 km2; and (iii) the remaining area, also highly urbanised, of 0.9 km2, where the main flood problems are generated. The sewer system is 34.8 km long; 29 km are of the combined type and only 1.2 km is exclusively for storm water. The time of concentration of the catchment is estimated to be 45 min. On 9 June 2006, an extreme rainfall event caused severe flooding in the city. After the rainfall had stopped, water continued to flow along the roads towards the Praça 8 de Maio, the lowest point in the whole catchment and the place where water tends to accumulate. As presented in Table 1, the return periods calculated for durations shorter than 30 minutes are not high. This rainfall event is in fact characterised by an extremely heavy intensity over its total duration; it cannot be considered a short event with a high peak intensity. As its total duration is approximately the time of concentration of the catchment, the flooding was very significant: a 50-year return period was estimated for the event at the 45-minute duration.

Table 1: Return period interpretation of the 9 June 2006 rainfall event
Duration (min)                       5      10     15     30     45
Maximum rainfall intensity (mm/h)    122.4  76.8   72.4   61.6   47.6
Return period (year)                 10     8      20     >50    50

Comparing the simulation results and the actual flood locations, it is concluded that the main cause of flooding is not the capacity of the sewer system. Despite the high slopes and the high level of imperviousness of the catchment, the flooding appears to be caused mainly by the limited capacity of the sewer inlets. This suggests that a correct analysis of the hydraulic behaviour of the catchment drainage system should include the overland flow system, using either a one-dimensional (1D) or a two-dimensional (2D) approach. Hence, simulations of the 9 June 2006 event were also carried out with the 1D sewer model, a 1D/1D model and a 1D/2D model.
The methodology developed at Imperial College London to generate overland flow networks was used in the 1D/1D model. InfoWorks CS was used to carry out the hydraulic simulations of the 1D/1D and 1D/2D models. The results of the simulations taking the overland flow system into account will be presented in this paper, and local community reports and photos are used to validate the simulation results obtained. Acknowledgements: The authors would like to thank Águas de Coimbra, E.M. and Edinfor (Portugal) for providing the data used in this study. The provision by Wallingford Software of the software used to carry out the hydraulic simulations is also acknowledged. The first and second authors also acknowledge the financial support of the Fundação para a Ciência e Tecnologia, Portugal [SFRH/BD/21382/2005 and SFRH/BD/37797/2007].
A hierarchical model of the evolution of cooperation in cultural systems.
Savatsky, K; Reynolds, R G
1989-01-01
In this paper the following problem is addressed: under what conditions can a collection of individual organisms learn to cooperate, when cooperation appears outwardly to degrade individual performance at the outset? To attempt a theoretical solution to this problem, data from a real-world problem in anthropology are used. A distributed simulation model of this system was developed to assess its long-term behavior, using an approach suggested by Zeigler (Zeigler, B.P., 1984, Multifaceted Modelling and Discrete Event Simulation (Academic Press, London)). The results of the simulation are used to show that although cooperation degrades the performance potential of each individual, it enhances the persistence of the individual's partial solution to the problem in certain situations.
NASA Astrophysics Data System (ADS)
Reyhancan, Iskender Atilla; Ebrahimi, Alborz; Çolak, Üner; Erduran, M. Nizamettin; Angin, Nergis
2017-01-01
A new Monte-Carlo Library Least-Squares (MCLLS) approach for treating the non-linear radiation analysis problem in Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) was developed. 14 MeV neutrons were produced by a neutron generator via the 3H(2H,n)4He reaction. The prompt gamma-ray spectra from bulk samples of seven different materials were measured by a Bismuth Germanate (BGO) gamma detection system. Polyethylene was used as neutron moderator, with iron and lead as neutron and gamma-ray shielding, respectively. The gamma detection system was equipped with a list-mode data acquisition system which streams spectroscopy data directly to the computer, event by event. The GEANT4 simulation toolkit was used to generate single-element libraries of all the elements of interest. These libraries were then used in a Linear Library Least-Squares (LLLS) approach to fit an unknown experimental sample spectrum with the calculated elemental libraries. GEANT4 simulation results were also used for the selection of the neutron shielding material.
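The LLLS step amounts to expressing the measured spectrum as a nonnegative linear combination of the single-element library spectra. A minimal sketch follows, with random stand-in libraries (the real ones would come from GEANT4) and invented dimensions:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical library least-squares fit: the measured prompt-gamma
# spectrum is modeled as a nonnegative linear combination of
# single-element library spectra (one column per element, one row
# per energy channel). Data here are synthetic stand-ins.
rng = np.random.default_rng(0)
n_channels, n_elements = 1024, 7
libraries = rng.random((n_channels, n_elements))
true_weights = np.array([5.0, 0.0, 2.0, 0.0, 1.0, 0.0, 3.0])
measured = libraries @ true_weights + rng.normal(0.0, 0.05, n_channels)

# Nonnegative least squares recovers the elemental amounts
weights, residual_norm = nnls(libraries, measured)
```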
NASA Astrophysics Data System (ADS)
López-Coto, R.; Mazin, D.; Paoletti, R.; Blanch Bigas, O.; Cortina, J.
2016-04-01
Imaging atmospheric Cherenkov telescopes (IACTs) such as the Major Atmospheric Gamma-ray Imaging Cherenkov (MAGIC) telescopes endeavor to reach the lowest possible energy threshold, and the trigger system is a key element in doing so. Reducing the trigger threshold is hampered by the rapid increase of accidental triggers generated by ambient light (the so-called Night Sky Background, NSB). In this paper we present a topological trigger, dubbed Topo-trigger, which rejects events on the basis of their relative orientation in the telescope cameras. We have simulated and tested the trigger selection algorithm in the MAGIC telescopes. The algorithm was tested using Monte Carlo simulations and rejects 85% of the accidental stereo triggers while preserving 99% of the gamma rays. A full implementation of this trigger system would achieve an increase in collection area of between 10 and 20% at the energy threshold, and the analysis energy threshold of the instrument is expected to decrease by ~8%. The selection algorithm was also tested on real MAGIC data taken with the current trigger configuration, and no γ-like events were found to be lost.
Gao, Lin; Zhang, Tongsheng; Wang, Jue; Stephen, Julia
2014-01-01
When connectivity analysis is carried out for event-related EEG and MEG, the presence of strong spatial correlations from spontaneous background activity may mask the local neuronal evoked activity and lead to spurious connections. In this paper, we hypothesized that PCA decomposition could be used to diminish the background activity and thereby improve the performance of connectivity analysis in event-related experiments. The idea was tested using simulation, where we found that for the 306-channel Elekta Neuromag system the first 4 PCs represent the dominant background activity, and the source connectivity pattern after preprocessing is consistent with the true connectivity pattern designed in the simulation. Discarding the first few PCs improved the signal-to-noise ratio of the evoked responses and increased coherences at the major physiological frequency bands, while the evoked information was maintained after PCA preprocessing. In conclusion, it is demonstrated that the first few PCs represent background activity, and PCA decomposition can be employed to remove it and expose the evoked activity in the channels under investigation. PCA can therefore be applied as a preprocessing approach to improve neuronal connectivity analysis for event-related data. PMID:22918837
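A minimal sketch of the preprocessing idea, removing the first few spatial PCs from channels-by-samples data and reconstructing the remainder, is given below; the authors' exact pipeline is not specified in the abstract, and `n_remove=4` simply follows its 306-channel example.

```python
import numpy as np

def remove_leading_pcs(data, n_remove=4):
    """Remove the first n_remove principal components (the dominant
    background activity) from channels-by-samples MEG/EEG data and
    reconstruct the remainder. Sketch of the abstract's idea only."""
    mean = data.mean(axis=1, keepdims=True)
    centered = data - mean
    # SVD of the centered data: columns of u are spatial PCs
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    u[:, :n_remove] = 0.0                      # zero out the leading PCs
    return u @ np.diag(s) @ vt + mean          # reconstruct without them

# Example: 306 channels, 1000 samples of synthetic data
rng = np.random.default_rng(0)
meg = rng.normal(size=(306, 1000))
cleaned = remove_leading_pcs(meg, n_remove=4)
```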
Supervisory control of mobile sensor networks: math formulation, simulation, and implementation.
Giordano, Vincenzo; Ballal, Prasanna; Lewis, Frank; Turchiano, Biagio; Zhang, Jing Bing
2006-08-01
This paper uses a novel discrete-event controller (DEC) for the coordination of cooperating heterogeneous wireless sensor networks (WSNs) containing both unattended ground sensors (UGSs) and mobile sensor robots. The DEC sequences the most suitable tasks for each agent and assigns sensor resources according to the current perception of the environment. A matrix formulation makes this DEC particularly useful for WSNs, where missions change and sensor agents may be added or may fail. WSNs have peculiarities that complicate their supervisory control. This paper therefore introduces several new tools for DEC design and operation, including methods for generating the required supervisory matrices based on mission planning, methods for modifying the matrices in the event of failed nodes or of nodes entering the network, and a novel dynamic priority-assignment weighting approach for selecting the most appropriate and useful sensors for a given mission task. The resulting DEC represents a complete dynamical description of the WSN system, which allows fast programming of deployable WSNs, computer simulation analysis, and efficient implementation. The DEC has been implemented on an experimental wireless-sensor-network prototyping system. Both simulation and experimental results are presented to show the effectiveness and versatility of the developed control architecture.
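The flavor of a matrix formulation can be illustrated with a toy Boolean rule base: a rule (task-start event) fires when all of the completed-task and resource conditions flagged in its row hold. The matrices and the tiny mission below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical supervisory matrices for a two-rule mission:
# Fv rows flag required completed tasks, Fr rows flag required resources.
Fv = np.array([[1, 0, 0],        # rule 0 needs task 0 complete
               [0, 1, 1]], bool) # rule 1 needs tasks 1 and 2 complete
Fr = np.array([[1, 0],           # rule 0 needs resource 0 (e.g. a UGS)
               [0, 1]], bool)    # rule 1 needs resource 1 (a mobile robot)

def fired_rules(v, r):
    """Boolean and/or logic: a rule fires iff every condition flagged
    in its row is satisfied (no required-but-missing entry)."""
    ok_tasks = ~(Fv & ~v).any(axis=1)
    ok_resources = ~(Fr & ~r).any(axis=1)
    return ok_tasks & ok_resources

v = np.array([1, 0, 1], bool)    # currently completed tasks
r = np.array([1, 1], bool)       # currently available resources
print(fired_rules(v, r))         # -> [ True False ]
```

Modifying the mission (nodes failing or joining) then amounts to editing rows and columns of the matrices rather than rewriting controller code, which is the practical appeal of the matrix form.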
Three-dimensional numerical simulation of the 20 June 1991, Orlando microburst
NASA Technical Reports Server (NTRS)
Proctor, Fred H.
1992-01-01
On 20 June 1991, NASA's Boeing 737, equipped with in-situ and look-ahead wind-shear detection systems, made direct low-level penetrations (300-350 m AGL) through a microburst during several stages of its evolution. This microburst was located roughly 20 km northeast of Orlando International Airport and was monitored by a Terminal Doppler Weather Radar (TDWR) located about 10 km south of the airport. The first NASA encounter with this microburst (Event 142), at approximately 2041 UTC, was during its intensification phase; at flight level, in-situ measurements indicated a peak 1-km (averaged) F-factor of approximately 0.1. The second NASA encounter (Event 143) occurred at approximately 2046 UTC, about the time of peak microburst intensity. It was during this penetration that a peak 1-km F-factor of approximately 0.17 was encountered, the largest in-situ measurement of the 1991 summer deployment. By the third encounter (Event 144), at approximately 2051 UTC, the microburst had expanded into a macroburst; during this phase of evolution, an in-situ 1-km F-factor of 0.08 was measured. The focus of this paper is to examine this microburst via numerical simulation with an unsteady, three-dimensional meteorological cloud model. The simulated high-resolution fields of wind, temperature, radar reflectivity factor, and precipitation are closely examined so as to derive information not readily available from 'observations' and to enhance our understanding of the actual event. Characteristics of the simulated microburst evolution are compared with TDWR and in-situ measurements.
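For background (this definition is standard in the airborne wind-shear literature but is not stated in the abstract itself), the F-factor hazard index is commonly written as

```latex
F = \frac{\dot{W}_x}{g} - \frac{w_h}{V}
```

where \(\dot{W}_x\) is the rate of change of the horizontal wind along the flight path, \(w_h\) the vertical wind (positive upward), \(V\) the airspeed, and \(g\) the gravitational acceleration. Positive values denote performance-decreasing shear, which is why 1-km averaged values near 0.1, as quoted above, are operationally significant.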
Krishnan, Ranjani; Walton, Emily B; Van Vliet, Krystyn J
2009-11-01
As computational resources increase, molecular dynamics simulations of biomolecules are becoming an increasingly informative complement to experimental studies. In particular, it has now become feasible to use multiple initial molecular configurations to generate an ensemble of replicate production-run simulations that allows for more complete characterization of rare events such as ligand-receptor unbinding. However, there are currently no explicit guidelines for selecting an ensemble of initial configurations for replicate simulations. Here, we use clustering analysis and steered molecular dynamics simulations to demonstrate that the configurational changes accessible in molecular dynamics simulations of biomolecules do not necessarily correlate with observed rare-event properties. This informs selection of a representative set of initial configurations. We also employ statistical analysis to identify the minimum number of replicate simulations required to sufficiently sample a given biomolecular property distribution. Together, these results suggest a general procedure for generating an ensemble of replicate simulations that will maximize accurate characterization of rare-event property distributions in biomolecules.
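One plausible way to operationalize the minimum-number-of-replicates question is a bootstrap convergence criterion, sketched below. The paper's actual statistical test is not given in the abstract, so the criterion, tolerance, and the synthetic rupture-force data are all assumptions.

```python
import numpy as np

def min_replicates(samples, rel_tol=0.05, n_boot=2000, seed=0):
    """Smallest n such that the bootstrap standard error of the mean of
    the first n replicate property values (e.g. unbinding forces) falls
    below rel_tol * |mean|. A stand-in convergence criterion only."""
    rng = np.random.default_rng(seed)
    for n in range(5, len(samples) + 1):
        sub = samples[:n]
        boots = rng.choice(sub, size=(n_boot, n), replace=True).mean(axis=1)
        if boots.std(ddof=1) < rel_tol * abs(sub.mean()):
            return n
    return None  # tolerance not reached with the available replicates

# Example: synthetic rupture forces from 60 replicate steered-MD runs
forces = np.random.default_rng(1).normal(150.0, 20.0, 60)
print(min_replicates(forces))
```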
Development of the Code RITRACKS
NASA Technical Reports Server (NTRS)
Plante, Ianik; Cucinotta, Francis A.
2013-01-01
A document discusses the code RITRACKS (Relativistic Ion Tracks), which was developed to simulate heavy-ion track structure at the microscopic and nanoscopic scales. It is a Monte-Carlo code that simulates the production of radiolytic species in water, event by event, and may be used to simulate tracks and to calculate dose in targets and voxels of different sizes; the dose deposited by the radiation can be calculated in nanovolumes (voxels). RITRACKS allows the simulation of radiation tracks without the need for extensive knowledge of computer programming or Monte-Carlo methods, and is installed as a regular application on Windows systems. The main input parameters entered by the user are the type and energy of the ion, the length and size of the irradiated volume, the number of ions impacting the volume, and the number of histories. The simulation can be started after the input parameters are entered in the GUI. The number of each kind of interaction for each track is shown in the result-details window. The tracks can be visualized in 3D after the simulation is complete, and it is also possible to follow the time evolution of the tracks and zoom in on specific parts of them. The software RITRACKS can be very useful for radiation scientists investigating various problems in the fields of radiation physics, radiation chemistry, and radiation biology; for example, it can be used to simulate electron ejection experiments (radiation physics).