Reducing EnergyPlus Run Time For Code Compliance Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.
2014-09-12
Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code-baseline building models, together with mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used to determine the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
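A minimal sketch of the quarter-week idea under stated assumptions: each simulated week is taken to represent its 13-week quarter, and the compliance index is treated as a simple proposed-to-baseline energy ratio (the paper's actual index definition and weighting are not reproduced here). All numbers are illustrative.

```python
# Estimate annual energy use from four representative weeks (one per
# quarter), then form a compliance index against the baseline model.
WEEKS_PER_QUARTER = 13  # 52 weeks / 4 quarters

def annual_estimate_from_quarters(weekly_kwh):
    """weekly_kwh: energy totals for one simulated week per quarter."""
    return sum(kwh * WEEKS_PER_QUARTER for kwh in weekly_kwh)

def compliance_index(proposed_kwh, baseline_kwh):
    """Ratio of proposed to baseline energy use (lower is better)."""
    return proposed_kwh / baseline_kwh

# Hypothetical per-quarter weekly totals from the shortened runs (kWh):
proposed = annual_estimate_from_quarters([900, 1200, 1400, 950])
baseline = annual_estimate_from_quarters([1000, 1300, 1600, 1050])
print(f"compliance index ~= {compliance_index(proposed, baseline):.3f}")
```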
Scaling NS-3 DCE Experiments on Multi-Core Servers
2016-06-15
that work well together. 3.2 Simulation Server Details We ran the simulations on a Dell® PowerEdge M520 blade server[8] running Ubuntu Linux 14.04...To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server...MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components at sub-hourly time steps. However, EnergyPlus runs much slower than current-generation simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzes EnergyPlus run time from several perspectives to identify the key issues and challenges of speeding it up: studying the historical trends of EnergyPlus run time against the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations for improving EnergyPlus run time from the modeler's perspective, along with guidance on adequate computing platforms. Suggestions for software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.
Operating system for a real-time multiprocessor propulsion system simulator
NASA Technical Reports Server (NTRS)
Cole, G. L.
1984-01-01
The success of the Real Time Multiprocessor Operating System (RTMPOS) in the development and evaluation of experimental hardware and software systems for real-time interactive simulation of air-breathing propulsion systems was evaluated. RTMPOS provides the user with a versatile, interactive means for loading, running, debugging, and obtaining results from a multiprocessor-based simulator. A front-end processor (FEP) serves as the simulator controller and as the interface between the user and the simulator. These functions are facilitated by the RTMPOS, which resides on the FEP. The RTMPOS acts in conjunction with the FEP's manufacturer-supplied disk operating system, which provides typical utilities such as an assembler, linkage editor, text editor, and file handling services. Once a simulation is formulated, the RTMPOS provides engineering-level, run-time operations such as loading, modifying, and specifying the computation flow of programs, simulator mode control, data handling, and run-time monitoring. Run-time monitoring is a powerful feature of RTMPOS that allows the user to record all actions taken during a simulation session and to receive advisories from the simulator via the FEP. The RTMPOS is programmed mainly in PASCAL along with some assembly language routines. The RTMPOS software is easily modified to be applicable to hardware from different manufacturers.
SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation
NASA Technical Reports Server (NTRS)
Steinman, Jeff S.
1992-01-01
Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.
Kunkel, Susanne; Schenck, Wolfram
2017-01-01
NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
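The dry-run idea lends itself to a compact sketch: a single process performs every simulation step except real communication, substituting the traffic that absent ranks would have generated so that buffer growth stays realistic. This is a hypothetical illustration of the strategy, not NEST's internal API.

```python
# One process mimics a single rank of an n_vp-rank parallel simulation.
class DryRunSimulator:
    def __init__(self, n_virtual_procs, dry_run=True):
        self.n_vp = n_virtual_procs
        self.dry_run = dry_run
        self.delivered = 0

    def update_neurons(self, step):
        # Placeholder neuron update; returns this rank's spikes.
        return [(step, neuron_id) for neuron_id in range(3)]

    def communicate(self, local_spikes):
        if self.dry_run:
            # No MPI: replicate local traffic to stand in for the spikes
            # the absent ranks would have sent, keeping communication
            # buffers realistically sized for profiling.
            return local_spikes * self.n_vp
        raise NotImplementedError("real mode would use an MPI allgather")

    def simulate(self, n_steps):
        for step in range(n_steps):
            incoming = self.communicate(self.update_neurons(step))
            self.delivered += len(incoming)  # stand-in for event delivery

sim = DryRunSimulator(n_virtual_procs=1024)
sim.simulate(100)
print(sim.delivered)  # events "delivered" as if 1024 ranks had run
```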
Lytton, William W; Neymotin, Samuel A; Hines, Michael L
2008-06-30
In an effort to design a simulation environment closer to that of neurophysiology, we introduce a virtual slice setup in the NEURON simulator. The virtual slice setup runs continuously and permits parameter changes, including changes to synaptic weights and time course and to intrinsic cell properties. The virtual slice setup permits shocks to be applied at chosen locations and activity to be sampled intra- or extracellularly from chosen locations. By default, a summed population display is shown during a run to indicate the level of activity, and no states are saved. Simulations can run for hours of model time, so it is not practical to save all of the state variables. These, in any case, are primarily of interest at discrete times when experiments are being run: the simulation can be stopped momentarily at such times to save activity patterns. The virtual slice setup maintains an automated notebook recording shocks and parameter changes as well as user comments. We demonstrate how interaction with a continuously running simulation encourages experimental prototyping and can suggest additional dynamical features such as ligand wash-in and wash-out, alternatives to the typical instantaneous parameter change. The virtual slice setup currently uses event-driven cells and runs at approximately 2 min/h on a laptop.
Simulation Study of Evacuation Control Center Operations Analysis
2011-06-01
Excerpt (front-matter fragments): 4.3 Baseline Manning (Runs 1, 2, & 3); 4.3.1 Baseline Statistics Interpretation; Appendix B. Key Statistic Matrix: Runs 1-12; Appendix C. Blue Dart; Paired T result, Run 5 v. Run 6: ECC Completion Time; Key Statistics: Run 3 vs. Run 9
Simulation of a master-slave event set processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comfort, J.C.
1984-03-01
Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer in which all non-event-set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front-end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed and run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant system parameters (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulated run times for a one-processor system was used to assist in the validation of the simulation.
NASA Astrophysics Data System (ADS)
Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong
2016-11-01
To satisfy real-time and generality requirements, a laser target simulator for a semi-physical simulation system based on an RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper/lower-computer platform architecture used in most current real-time systems, this system has better maintainability and portability. The system runs on the Windows platform, using the Windows RTX real-time extension to guarantee real-time performance and a reflective memory network to carry out real-time tasks such as evaluating the simulation model, transmitting simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run in the RTSS process. A graphical interface built with LabWindows/CVI handles the non-real-time tasks of the simulation, such as man-machine interaction and the display and storage of simulation data, which run in a Win32 process. Through the design of RTX shared memory and a task scheduling algorithm, data interaction between the real-time RTSS process and the non-real-time Win32 process is accomplished. The experimental results show that the system has strong real-time performance, high stability, and high simulation accuracy, along with good human-computer interaction.
NASA Technical Reports Server (NTRS)
Chawner, David M.; Gomez, Ray J.
2010-01-01
In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. Many different tools are used for running these simulations, and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is open source, meaning that anyone can edit the source code and distribute the modifications to all other users in a future release. This is very useful, especially in this branch where many different tools are in use. File readers can be written to load any file format into a program, easing the bridge from one tool to another. Programming such a reader requires knowledge of the file format being read as well as the equations necessary to obtain derived values after loading. These CFD simulations load extremely large files and compute many values, and they usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) have traditionally handled graphics rendering; in recent years, however, GPUs have been used for more general applications because of their speed. Applications run on GPUs have been known to run up to forty times faster than they would on conventional central processing units (CPUs). If these CFD programs are extended to run on GPUs, they would require much less time to complete. This would allow more simulations to be run in the same amount of time and possibly permit more complex computations.
GUMICS4 Synthetic and Dynamic Simulations of the ECLAT Project
NASA Astrophysics Data System (ADS)
Facsko, G.; Palmroth, M. M.; Gordeev, E.; Hakkinen, L. V.; Honkonen, I. J.; Janhunen, P.; Sergeev, V. A.; Kauristie, K.; Milan, S. E.
2012-12-01
The European Commission funded the European Cluster Assimilation Techniques (ECLAT) project as a collaboration of five leading European universities and research institutes. A main contribution of the Finnish Meteorological Institute (FMI) is to provide a wide range of global MHD runs with the Grand Unified Magnetosphere Ionosphere Coupling simulation (GUMICS). The runs are divided into two categories: synthetic runs investigating the extent of solar wind drivers that can influence magnetospheric dynamics, and dynamic runs using measured solar wind data as input. Here we consider the first set of runs, with synthetic solar wind input. The solar wind density, velocity, and interplanetary magnetic field had different magnitudes and orientations; furthermore, two F10.7 flux values were selected for solar radiation minimum and maximum. The solar wind parameter values were held constant so that a stable solution was achieved. All configurations were run several times with three different tilt angles (-15°, 0°, +15°) in the GSE X-Z plane. The Cray XT supercomputer of the FMI provides a unique opportunity in global magnetohydrodynamic simulation: running GUMICS-4 on one year of real solar wind data. Solar wind magnetic field, density, temperature, and velocity data based on Advanced Composition Explorer (ACE) and WIND measurements are downloaded from the OMNIWeb open database, and a special input file is created for each Cluster orbit. All data gaps are replaced with linear interpolations between the last valid value before and the first valid value after each gap. A minimum variance transformation is applied to the interplanetary magnetic field data to clean it and keep the code from diverging. The Cluster orbits are divided into slices allowing parallel computation, and each slice has an average tilt angle value. The file timestamps start one hour before perigee to provide time for building up a magnetosphere in the simulation space. The real measurements were interpolated to one-minute intervals by the database, and the time steps of the simulation results are shifted by 20-30 minutes, calculated from the spacecraft position and the actual solar wind velocity. All simulation results are saved every 5 minutes (in simulation time). The results of the 162 simulations, the so-called "synthetic run library", were visualized and uploaded to the FMI homepage after validation, along with the saved outputs of the year-long run. Here we present details of these runs.
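The gap-filling step described above is easy to make concrete. A minimal sketch, assuming gaps are marked as None in a regularly sampled series; OMNIWeb-specific formats and handling are omitted.

```python
# Replace None gaps with linear interpolation between the last valid
# sample before the gap and the first valid sample after it.
def fill_gaps_linear(values):
    """values: list of floats with None marking data gaps."""
    filled = list(values)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            j = i
            while j < len(filled) and filled[j] is None:
                j += 1                       # j = first valid index after gap
            left = filled[i - 1] if i > 0 else filled[j]
            right = filled[j] if j < len(filled) else filled[i - 1]
            span = j - i + 1
            for k in range(i, j):
                frac = (k - i + 1) / span
                filled[k] = left + (right - left) * frac
            i = j
        else:
            i += 1
    return filled

print(fill_gaps_linear([1.0, None, None, 4.0]))  # [1.0, 2.0, 3.0, 4.0]
```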
NASA Astrophysics Data System (ADS)
Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.
2015-12-01
Climate model simulations are used to understand the evolution and variability of Earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete, and typically the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected, but most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed to complete the simulation constrains the productivity of climate scientists. Our implementation of near-real-time data visualization analytics allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible. The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software aims to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.
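A minimal sketch of the co-scheduling loop, assuming a hypothetical output path and a stand-in render_latest.py analytics script (Bellerophon's actual job-submission machinery is not shown): watch for new output files and launch an independent rendering job while the model keeps running.

```python
import glob
import subprocess
import time

OUTPUT_GLOB = "/scratch/climate_run/history/*.nc"   # hypothetical path
POLL_SECONDS = 300

seen = set()
while True:
    new_files = [f for f in sorted(glob.glob(OUTPUT_GLOB)) if f not in seen]
    if new_files:
        seen.update(new_files)
        # Launch an independent analytics job; the simulation is untouched.
        subprocess.Popen(["python", "render_latest.py", *new_files])
    time.sleep(POLL_SECONDS)
```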
Computational steering of GEM based detector simulations
NASA Astrophysics Data System (ADS)
Sheharyar, Ali; Bouhali, Othmane
2017-10-01
Gas-based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. Such long-running simulations usually run on high-performance computers in batch mode. If the results reveal unexpected behavior, the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This can result in inefficient resource utilization and an increase in the turnaround time of the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable exploration of the live data as it is produced by the simulation.
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time water situations, real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
Real-time visual simulation of APT system based on RTW and Vega
NASA Astrophysics Data System (ADS)
Xiong, Shuai; Fu, Chengyu; Tang, Tao
2012-10-01
The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that the simulation result of running the C code is the same as running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, the APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on a programmable GPU. By calling the C code, the scene simulation platform can adjust the system parameters online and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost, and good simulation effect.
NOTE: Implementation of angular response function modeling in SPECT simulations with GATE
NASA Astrophysics Data System (ADS)
Descourt, P.; Carlier, T.; Du, Y.; Song, X.; Buvat, I.; Frey, E. C.; Bardies, M.; Tsui, B. M. W.; Visvikis, D.
2010-05-01
Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.
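The ARF approach can be sketched as a table lookup that replaces photon tracking through the collimator. The table shape, angular range, and Gaussian-like contents below are assumptions for illustration, not the paper's fitted tables.

```python
import math
import random

N_BINS = 64
MAX_ANGLE = math.radians(5.0)  # hypothetical angular range of the table

# arf[i][j] ~ probability that a photon with incidence angles in bin
# (i, j) contributes to the projection; in practice this table would be
# built once from a full collimator/detector simulation.
arf = [[math.exp(-(i * i + j * j) / 400.0) for j in range(N_BINS)]
       for i in range(N_BINS)]

def detect(theta_x, theta_y):
    """Accept/reject a photon using the tabulated response instead of
    tracking it through the collimator geometry."""
    if abs(theta_x) >= MAX_ANGLE or abs(theta_y) >= MAX_ANGLE:
        return False
    i = int(abs(theta_x) / MAX_ANGLE * (N_BINS - 1))
    j = int(abs(theta_y) / MAX_ANGLE * (N_BINS - 1))
    return random.random() < arf[i][j]

hits = sum(detect(random.gauss(0, 0.02), random.gauss(0, 0.02))
           for _ in range(100_000))
print(hits, "photons accepted")
```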
NASA Technical Reports Server (NTRS)
Mcenulty, R. E.
1977-01-01
The G189A simulation of the Shuttle Orbiter ECLSS was upgraded. All simulation library versions and simulation models were converted from the EXEC2 to the EXEC8 computer system, and a new program, G189PL, was added to the combination master program library. The program permits post-plotting of up to 100 frames of plot data over any time interval of a G189 simulation run. The overlay structure of the G189A simulations was restructured to conserve computer core and minimize run time requirements.
Adaptive Integration of Nonsmooth Dynamical Systems
2017-10-11
controlled time stepping method to interactively design running robots. [1] John Shepherd, Samuel Zapolsky, and Evan M. Drumwright, "Fast multi-body..." Started working in simulation after attempting to use software like this to test software running on my robots. The libraries that produce these beautiful results have failed at simulating robotic manipulation. Postulate: It is easier to
Anhøj, Jacob; Olesen, Anne Vingaard
2014-01-01
A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
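The shift and crossings rules translate directly into code. A sketch under stated assumptions: points on the median are dropped, the shift limit follows the log2-based prediction limit described by the authors, and the crossings limit is taken from the lower 5% tail of a Binomial(n-1, 0.5) distribution; verify against the paper before real use.

```python
import math

def run_chart_signals(data):
    """Test a run chart for non-random variation (sketch; assumes at
    least ~10 useful data points)."""
    med = sorted(data)[len(data) // 2]           # simple median (odd-n exact)
    signs = [v > med for v in data if v != med]  # drop points on the median
    n = len(signs)
    longest = cur = 1
    crossings = 0
    for prev, nxt in zip(signs, signs[1:]):
        if prev == nxt:
            cur += 1
            longest = max(longest, cur)
        else:
            crossings += 1
            cur = 1
    # Shift rule: a run longer than the log2-based prediction limit.
    shift = longest > round(math.log2(n)) + 3
    # Crossings rule: fewer crossings than the lower 5% tail of
    # Binomial(n - 1, 0.5) predicts for a random process.
    m, cum, limit = n - 1, 0.0, 0
    for k in range(m + 1):
        cum += math.comb(m, k) / 2.0 ** m
        if cum >= 0.05:
            limit = k
            break
    return {"shift": shift, "crossings": crossings < limit}

print(run_chart_signals([3, 4, 2, 5, 6, 7, 8, 9, 11, 10, 12, 13]))
```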
NASA Technical Reports Server (NTRS)
Hueschen, Richard M.; Hankins, Walter W., III; Barker, L. Keith
2001-01-01
This report examines a rollout and turnoff (ROTO) system for reducing the runway occupancy time for transport aircraft in low-visibility weather. Simulator runs were made to evaluate the system that includes a head-up display (HUD) to show the pilot a graphical overlay of the runway along with guidance and steering information to a chosen exit. Fourteen pilots (airline, corporate jet, and research pilots) collectively flew a total of 560 rollout and turnoff runs using all eight runways at Hartsfield Atlanta International Airport. The runs consisted of 280 runs for each of two runway visual ranges (RVRs) (300 and 1200 ft). For each visual range, half the runs were conducted with the HUD information and half without. For the runs conducted with the HUD information, the runway occupancy times were lower and more consistent. The effect was more pronounced as visibility decreased. For the 1200-ft visibility, the runway occupancy times were 13% lower with HUD information (46.1 versus 52.8 sec). Similarly, for the 300-ft visibility, the times were 28% lower (45.4 versus 63.0 sec). Also, for the runs with HUD information, 78% (RVR 1200) and 75% (RVR 300) had runway occupancy times less than 50 sec, versus 41 and 20%, respectively, without HUD information.
New NASA 3D Animation Shows Seven Days of Simulated Earth Weather
2014-08-11
This visualization shows early test renderings of a global computational model of Earth's atmosphere based on data from NASA's Goddard Earth Observing System Model, Version 5 (GEOS-5). This particular run, called Nature Run 2, was run on a supercomputer, spanned 2 years of simulation time at 30-minute intervals, and produced petabytes of output. The visualization spans a little more than 7 days of simulation time, which is 354 time steps. The time period was chosen because a simulated category-4 typhoon developed off the coast of China. The 7-day period is repeated several times during the course of the visualization. Credit: NASA's Scientific Visualization Studio. Read more or download here: svs.gsfc.nasa.gov/goto?4180
NASA Astrophysics Data System (ADS)
Magaldi, Marcello G.; Haine, Thomas W. N.
2015-02-01
The cascade of dense waters of the Southeast Greenland shelf during summer 2003 is investigated with two very high-resolution (0.5-km) simulations. The first simulation is non-hydrostatic. The second simulation is hydrostatic and about 3.75 times less expensive. Both simulations are compared to a 2-km hydrostatic run, about 31 times less expensive than the 0.5 km non-hydrostatic case. Time-averaged volume transport values for deep waters are insensitive to the changes in horizontal resolution and vertical momentum dynamics. By this metric, both lateral stirring and vertical shear instabilities associated with the cascading process are accurately parameterized by the turbulent schemes used at 2-km horizontal resolution. All runs compare well with observations and confirm that the cascade is mainly driven by cyclones which are linked to dense overflow boluses at depth. The passage of the cyclones is also associated with the generation of internal gravity waves (IGWs) near the shelf. Surface fields and kinetic energy spectra do not differ significantly between the runs for horizontal scales L > 30 km. Complex structures emerge and the spectra flatten at scales L < 30 km in the 0.5-km runs. In the non-hydrostatic case, additional energy is found in the vertical kinetic energy spectra at depth in the 2 km < L < 10 km range and with frequencies around 7 times the inertial frequency. This enhancement is missing in both hydrostatic runs and is here argued to be due to the different IGW evolution and propagation offshore. The different IGW behavior in the non-hydrostatic case has strong implications for the energetics: compared to the 2-km case, the baroclinic conversion term and vertical kinetic energy are about 1.4 and at least 34 times larger, respectively. This indicates that the energy transfer from the geostrophic eddy field to IGWs and their propagation away from the continental slope is not properly represented in the hydrostatic runs.
An extension of the OpenModelica compiler for using Modelica models in a discrete event simulation
Nutaro, James
2014-11-03
In this article, a new back-end and run-time system is described for the OpenModelica compiler. This new back-end transforms a Modelica model into a module for the adevs discrete event simulation package, thereby extending adevs to encompass complex, hybrid dynamical systems. The new run-time system built within the adevs simulation package supports models with state-events and time-events, including differential-algebraic systems with high index. Finally, although the procedure for effecting this transformation is based on adevs and the Discrete Event System Specification, it can be adapted to any discrete event simulation package.
Effect of match-run frequencies on the number of transplants and waiting times in kidney exchange.
Ashlagi, Itai; Bingaman, Adam; Burq, Maximilien; Manshadi, Vahideh; Gamarnik, David; Murphey, Cathi; Roth, Alvin E; Melcher, Marc L; Rees, Michael A
2018-05-01
Numerous kidney exchange (kidney paired donation [KPD]) registries in the United States have gradually shifted to high-frequency match-runs, raising the question of whether this harms the number of transplants. We conducted simulations using clinical data from 2 KPD registries (the Alliance for Paired Donation, which runs multihospital exchanges, and Methodist San Antonio, which runs single-center exchanges) to study how the frequency of match-runs impacts the number of transplants and the average waiting times. We simulate the options facing each of the 2 registries by repeated resampling from their historical pools of patient-donor pairs and nondirected donors, with arrival and departure rates corresponding to the historical data. We find that longer intervals between match-runs do not increase the total number of transplants, and that prioritizing highly sensitized patients is more effective than waiting longer between match-runs for transplanting highly sensitized patients. While we do not find that frequent match-runs result in fewer transplanted pairs, we do find that increasing arrival rates of new pairs improves both the fraction of transplanted pairs and waiting times.
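A toy version of such a resampling simulation shows where the match-run interval enters. The arrival process, compatibility probability, and restriction to greedy two-way exchanges are simplifying assumptions, not the registries' actual matching algorithms.

```python
import random

def simulate(interval_days, horizon_days=730, arrivals_per_day=0.8,
             p_compatible=0.05, seed=1):
    rng = random.Random(seed)
    pool, transplants, next_id = [], 0, 0
    for day in range(horizon_days):
        if rng.random() < arrivals_per_day:      # crude arrival process
            pool.append(next_id)
            next_id += 1
        if day % interval_days == 0 and len(pool) >= 2:
            matched = set()
            for a in pool:                        # greedy 2-way exchanges
                if a in matched:
                    continue
                for b in pool:
                    if b == a or b in matched:
                        continue
                    if rng.random() < p_compatible:
                        matched.update((a, b))
                        transplants += 2
                        break
            pool = [p for p in pool if p not in matched]
    return transplants

for interval in (1, 7, 30):
    print(f"match-run every {interval:>2} days ->",
          simulate(interval), "transplants")
```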
Performance analysis of local area networks
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.; Hall, Mary Grace
1990-01-01
A simulation of the TCP/IP protocol running on a CSMA/CD data link layer is described. The simulation was implemented in Simula, an object-oriented discrete-event language. It allows the user to set the number of stations at run time, as well as some station parameters: the interrupt time and the DMA transfer rate for each station. In addition, the user may configure the network at run time with stations of differing characteristics. Two station types are available, and the parameters of both are read from input files at run time. The parameters include the DMA transfer rate, interrupt time, data rate, average message size, maximum frame size, and the average interarrival time of messages per station. The information collected for the network is the throughput and the mean delay per packet. For each station, the number of messages attempted and the number successfully transmitted are collected, in addition to per-station throughput and mean packet delay.
Limits to high-speed simulations of spiking neural networks using general-purpose computers.
Zenke, Friedemann; Gerstner, Wulfram
2014-01-01
To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
Changes in running pattern due to fatigue and cognitive load in orienteering.
Millet, Guillaume Y; Divert, Caroline; Banizette, Marion; Morin, Jean-Benoit
2010-01-01
The aim of this study was to examine the influence of fatigue on running biomechanics in normal running, in normal running with a cognitive task, and in running while map reading. Nineteen international and less experienced orienteers performed a fatiguing running exercise of duration and intensity similar to a classic distance orienteering race on an instrumented treadmill while performing mental arithmetic, an orienteering simulation, and control running at regular intervals. Two-way repeated-measures analysis of variance did not reveal any significant difference between mental arithmetic and control running for any of the kinematic and kinetic parameters analysed eight times over the fatiguing protocol. However, these parameters were systematically different between the orienteering simulation and the other two conditions (mental arithmetic and control running). The adaptations in orienteering simulation running were significantly more pronounced in the elite group when step frequency, peak vertical ground reaction force, vertical stiffness, and maximal downward displacement of the centre of mass during contact were considered. The effects of fatigue on running biomechanics depended on whether the orienteers read their map or ran normally. It is concluded that adding a cognitive load does not modify running patterns. Therefore, all changes in running pattern observed during the orienteering simulation, particularly in elite orienteers, are the result of adaptations to enable efficient map reading and/or potentially prevent injuries. Finally, running patterns are not affected to the same extent by fatigue when a map reading task is added.
Platform-Independence and Scheduling In a Multi-Threaded Real-Time Simulation
NASA Technical Reports Server (NTRS)
Sugden, Paul P.; Rau, Melissa A.; Kenney, P. Sean
2001-01-01
Aviation research often relies on real-time, pilot-in-the-loop flight simulation as a means to develop new flight software, flight hardware, or pilot procedures. Often these simulations become so complex that a single processor is incapable of performing the necessary computations within a fixed time-step. Threads are an elegant means to distribute the computational workload when running on a symmetric multi-processor machine. However, programming with threads often requires operating-system-specific calls that reduce code portability and maintainability. While a multi-threaded simulation allows a significant increase in simulation complexity, it also increases the workload of a simulation operator by requiring the operator to determine which models run on which thread. To address these concerns, an object-oriented design was implemented in the NASA Langley Standard Real-Time Simulation in C++ (LaSRS++) application framework. The design provides a portable and maintainable means to use threads and also provides a mechanism to automatically load balance the simulation models.
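One common way to automate the load balancing described above is greedy longest-processing-time assignment: give each model, in decreasing cost order, to the currently least-loaded thread. A sketch with hypothetical model names and costs; LaSRS++'s actual mechanism may differ.

```python
import heapq

def balance(models, n_threads):
    """models: list of (name, cost_us) pairs; returns per-thread lists."""
    heap = [(0.0, i) for i in range(n_threads)]   # (total load, thread id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_threads)]
    for name, cost in sorted(models, key=lambda m: -m[1]):
        load, tid = heapq.heappop(heap)           # least-loaded thread
        assignment[tid].append(name)
        heapq.heappush(heap, (load + cost, tid))
    return assignment

models = [("aero", 120), ("engines", 80), ("gear", 20),
          ("avionics", 60), ("visuals", 90)]
# Two lists of model names with roughly equal total cost:
print(balance(models, 2))
```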
Runtime visualization of the human arterial tree.
Insley, Joseph A; Papka, Michael E; Dong, Suchuan; Karniadakis, George; Karonis, Nicholas T
2007-01-01
Large-scale simulation codes typically execute for extended periods of time and often on distributed computational resources. Because these simulations can run for hours, or even days, scientists like to get feedback about the state of the computation and the validity of its results as it runs. It is also important that these capabilities be made available with little impact on the performance and stability of the simulation. Visualizing and exploring data in the early stages of the simulation can help scientists identify problems early, potentially avoiding a situation where a simulation runs for several days, only to discover that an error with an input parameter caused both time and resources to be wasted. We describe an application that aids in the monitoring and analysis of a simulation of the human arterial tree. The application provides researchers with high-level feedback about the state of the ongoing simulation and enables them to investigate particular areas of interest in greater detail. The application also offers monitoring information about the amount of data produced and data transfer performance among the various components of the application.
NASA Astrophysics Data System (ADS)
Mohaghegh, Shahab
2010-05-01
The Surrogate Reservoir Model (SRM) is a new solution for fast-track, comprehensive reservoir analysis (solving both direct and inverse problems) using existing reservoir simulation models. An SRM is defined as a replica of the full-field reservoir simulation model that runs and provides accurate results in real time (one simulation run takes only a fraction of a second). The SRM mimics the capabilities of a full-field model with high accuracy. Reservoir simulation is the industry standard for reservoir management and is used in all phases of field development in the oil and gas industry. The routine of simulation studies calls for integration of static and dynamic measurements into the reservoir model. Full-field reservoir simulation models have become the major source of information for analysis, prediction, and decision making. Large prolific fields usually go through several versions (updates) of their model, each usually a major improvement over the previous one, incorporating the latest available information along with adjustments that typically result from single-well or multi-well history matching. As the number of reservoir layers (thickness of the formations) increases, the number of cells representing the model approaches several millions. As reservoir models grow in size, so does the time required for each run. Schemes such as grid computing and parallel processing help to a certain degree but do not provide the speed required for tasks such as: field development strategies using comprehensive reservoir analysis, solving the inverse problem for injection/production optimization, quantifying uncertainties associated with the geological model, and real-time optimization and decision making. These types of analyses require hundreds or thousands of runs. Furthermore, with the new push for smart fields in the oil/gas industry, a natural outgrowth of smart completions and smart wells, the need for real-time reservoir modeling becomes more pronounced. The SRM is developed using the state of the art in neural computing and fuzzy pattern recognition to address the ever-growing need in the oil and gas industry for accurate but high-speed simulation and modeling. Unlike conventional geo-statistical approaches (response surfaces, proxy models, etc.) that require hundreds of simulation runs for development, an SRM is developed with only a few (10 to 30) simulation runs. An SRM can be rebuilt regularly (as new versions of the full-field model become available) off-line and can be put online for real-time processing to guide important decisions. The SRM has proven its value in the field. An SRM was developed for a giant oil field in the Middle East. The model included about one million grid blocks with more than 165 horizontal wells and took ten hours for a single run on 12 parallel CPUs. Using only 10 simulation runs, an SRM was developed that accurately mimicked the behavior of the reservoir simulation model. In a comprehensive reservoir analysis that included making millions of SRM runs, wells in the field were divided into five clusters. It was predicted that wells in clusters one and two are the best candidates for rate relaxation with minimal long-term water production, while wells in clusters four and five are susceptible to high water cuts. Two and a half years and 20 wells later, rate relaxation results from the field proved that all the predictions made by the SRM analysis were correct.
While incremental oil production increased in all wells (wells in cluster 1 produced the most, followed by wells in clusters 2, 3, and so on), the percent change in average monthly water cut for wells in each cluster clearly demonstrated the analytic power of the SRM. As correctly predicted, wells in clusters 1 and 2 actually experienced a reduction in water cut, while a substantial increase in water cut was observed in wells classified into clusters 4 and 5. Performing these analyses would have been impossible using the original full-field simulation model.
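The surrogate idea can be sketched with any cheap regressor trained on a handful of full-field runs; here an inverse-distance-weighted nearest-neighbour stand-in replaces the paper's neural/fuzzy machinery, and the input parameters and water-cut response are purely hypothetical.

```python
def make_surrogate(runs, k=3):
    """runs: list of (params_tuple, output) pairs from full simulations."""
    def predict(params):
        # k nearest training runs, weighted by inverse squared distance
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(params, p)), y)
            for p, y in runs
        )[:k]
        weights = [1.0 / (d + 1e-9) for d, _ in dists]
        return sum(w * y for w, (_, y) in zip(weights, dists)) / sum(weights)
    return predict

# A few "full-field runs": (injection rate, choke setting) -> water cut %
runs = [((q, c), 5 + 0.02 * q + 8 * c) for q in (100, 300, 500)
        for c in (0.2, 0.5, 0.8)]
srm = make_surrogate(runs)
print(round(srm((250, 0.4)), 2))   # instant estimate, no simulator run
```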
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach: carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment of Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation scales efficiently with the number of CPU cores on the AWS EC2 virtual cluster up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50%, and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup plateaus.
Australia's marine virtual laboratory
NASA Astrophysics Data System (ADS)
Proctor, Roger; Gillibrand, Philip; Oke, Peter; Rosebrock, Uwe
2014-05-01
In all modelling studies of realistic scenarios, a researcher has to go through a number of steps to set up a model in order to produce a model simulation of value. The steps are generally the same, independent of the modelling system chosen. These steps include determining the time and space scales and processes of the required simulation; obtaining data for the initial set up and for input during the simulation time; obtaining observation data for validation or data assimilation; implementing scripts to run the simulation(s); and running utilities or custom-built software to extract results. These steps are time consuming and resource hungry, and have to be done every time irrespective of the simulation: the more complex the processes, the more effort is required to set up the simulation. The Australian Marine Virtual Laboratory (MARVL) is a new development in modelling frameworks for researchers in Australia. MARVL uses the TRIKE framework, a Java-based control system developed by CSIRO that allows a non-specialist user to configure and run a model, to automate many of the modelling preparation steps needed to bring the researcher faster to the stage of simulation and analysis. The tool is seen as enhancing the efficiency of researchers and marine managers, and is being considered as an educational aid in teaching. In MARVL we are developing a web-based open source application which offers a number of model choices and provides search and recovery of relevant observations, allowing researchers to: a) efficiently configure a range of different community ocean and wave models for any region, for any historical time period, with model specifications of their choice, through a user-friendly web application, b) access data sets to force a model and nest a model into, c) discover and assemble ocean observations from the Australian Ocean Data Network (AODN, http://portal.aodn.org.au/webportal/) in a format that is suitable for model evaluation or data assimilation, and d) run the assembled configuration in a cloud computing environment, or download the assembled configuration and packaged data to run on any other system of the user's choice. MARVL is now being applied in a number of case studies around Australia ranging in scale from locally confined estuaries to the Tasman Sea between Australia and New Zealand. In time we expect the range of models offered will include biogeochemical models.
Just-in-time connectivity for large spiking networks.
Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L
2008-11-01
The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
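The space-for-time trade at the heart of JitCon can be sketched with a deterministic per-cell random stream: a cell's targets, delays, and weights are regenerated from its seed each time it fires, so nothing is stored between spikes. This is an illustration of the strategy, not NEURON code.

```python
import random

N_CELLS = 100_000
FANOUT = 1_000

def targets_of(pre_id):
    """Regenerate this cell's postsynaptic targets, delays and weights
    on demand; deterministic seeding keeps the network reproducible."""
    rng = random.Random(pre_id)          # per-cell seed
    for _ in range(FANOUT):
        post = rng.randrange(N_CELLS)
        delay_ms = 1.0 + 4.0 * rng.random()
        weight = rng.gauss(0.5, 0.1)
        yield post, delay_ms, weight

# When cell 42 spikes, its 1,000 connections are rebuilt in flight:
events = [(delay, post, w) for post, delay, w in targets_of(42)]
print(len(events), "events, shortest delay", round(min(events)[0], 3), "ms")
```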
VERSE - Virtual Equivalent Real-time Simulation
NASA Technical Reports Server (NTRS)
Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel
2005-01-01
Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher-fidelity modeling and more comprehensive debugging capabilities, combined with a limited amount of computational resources, calls for a non-real-time simulation environment that mimics the real-time environment. By creating a non-real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: the Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event-driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint, together with use of the same API, allows users to easily run the same application in both real-time and virtual-time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.
Simulated tsunami run-up amplification factors around Penang Island for preliminary risk assessment
NASA Astrophysics Data System (ADS)
Lim, Yong Hui; Kh'ng, Xin Yi; Teh, Su Yean; Koh, Hock Lye; Tan, Wai Kiat
2017-08-01
The mega-tsunami Andaman that struck Malaysia on 26 December 2004 affected 200 kilometers of northwest Peninsular Malaysia coastline from Perlis to Selangor. The tsunami scientific community anticipates that the next mega-tsunami could occur at any time. This rare catastrophic event has drawn the attention of the Malaysian government to appropriate risk reduction measures, including timely and orderly evacuation. To effectively evacuate citizens to safe ground or the nearest designated emergency shelter, a well-prepared evacuation route is essential, with the estimated tsunami run-up heights and inundation distances on land clearly indicated on the evacuation map. The run-up heights and inundation distances are simulated by the in-house 2-D model TUNA-RP, based upon credible scientific tsunami source scenarios derived from tectonic activity around the region. To provide a useful tool for estimating run-up heights along the entire coast of Penang Island, we compute tsunami run-up amplification factors based upon 2-D TUNA-RP model simulations in this paper. The inundation map and run-up amplification factors for six domains along the entire coastline of Penang Island are provided. The comparison between measured tsunami wave heights for the 2004 Andaman tsunami and TUNA-RP model simulated values demonstrates good agreement.
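A worked sketch of how such amplification factors would be applied: multiply the offshore wave height of a source scenario by the factor for the coastal segment of interest. Segment names and factor values below are invented for illustration; the paper's six domains and fitted factors are not reproduced.

```python
# Per-segment run-up amplification factors (hypothetical values).
amplification = {
    "north_coast": 2.1,
    "west_coast": 2.8,
    "george_town": 1.6,
}

def runup_estimate(offshore_height_m, segment):
    """Expected run-up height = offshore wave height x segment factor."""
    return offshore_height_m * amplification[segment]

# For a scenario producing a 1.5 m offshore wave:
for seg in amplification:
    print(seg, f"{runup_estimate(1.5, seg):.1f} m")
```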
Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.
2008-06-01
An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near-continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest, by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases, with significantly reduced run-times over single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges; the results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
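The reported 2.5-3.3x scatter reduction is consistent with the 1/sqrt(N) behavior of an N-member ensemble average (sqrt(10) is about 3.2). A toy Python illustration of DREAM-style ensemble averaging, with an invented smooth profile standing in for the DSMC flow field:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, np.pi, 50))

def noisy_field():
    # Stand-in for one unsteady sampling interval: a smooth flow
    # profile plus statistical scatter from the finite particle count.
    return truth + rng.normal(0.0, 0.1, truth.size)

# DREAM-style post-processing: ensemble-average repeated short runs
# restarted just before the sampling point of interest.
ensemble = np.mean([noisy_field() for _ in range(10)], axis=0)

scatter_single = np.std(noisy_field() - truth)
scatter_ens = np.std(ensemble - truth)
print(f"scatter reduced by ~{scatter_single / scatter_ens:.1f}x "
      f"(sqrt(10) ~ 3.2 expected)")
```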
Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2011-01-01
Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) the ability of VMs to take arbitrary leaps in virtual time, to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy. Experiments show that it is possible to get nearly perfect time-ordered execution, at a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1%, with almost the same run-time efficiency as that of the highly efficient non-simulation VM schedulers.
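A minimal sketch of the lowest-virtual-clock-first principle described above, assuming a fixed scheduling quantum; this illustrates the idea only and is not the authors' hypervisor implementation:

```python
import heapq

class VCore:
    def __init__(self, name):
        self.name = name
        self.vclock = 0.0  # a virtual clock realized on each virtual core

def schedule(vcores, quantum, t_end):
    # Always run the virtual core with the lowest virtual clock, one
    # quantum at a time, so execution stays simulation time-ordered.
    # `quantum` is the customizable granularity of the scheduling unit
    # on the simulation time axis.
    q = [(vc.vclock, i) for i, vc in enumerate(vcores)]
    heapq.heapify(q)
    while q:
        vt, i = heapq.heappop(q)
        if vt >= t_end:
            break
        vc = vcores[i]
        print(f"run {vc.name} from vt={vc.vclock:.1f}")
        vc.vclock += quantum  # an idling core could instead leap ahead
        heapq.heappush(q, (vc.vclock, i))

schedule([VCore("vm0.core0"), VCore("vm0.core1"), VCore("vm1.core0")],
         quantum=1.0, t_end=3.0)
```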
Simulation of a Real-Time Local Data Integration System over East-Central Florida
NASA Technical Reports Server (NTRS)
Case, Jonathan
1999-01-01
The Applied Meteorology Unit (AMU) simulated a real-time configuration of a Local Data Integration System (LDIS) using data from 15-28 February 1999. The objectives were to assess the utility of a simulated real-time LDIS, evaluate and extrapolate system performance to identify the hardware necessary to run a real-time LDIS, and determine the sensitivities of LDIS. The ultimate goal for running LDIS is to generate analysis products that enhance short-range (less than 6 h) weather forecasts issued in support of the 45th Weather Squadron, Spaceflight Meteorology Group, and Melbourne National Weather Service operational requirements. The simulation used the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) software on an IBM RS/6000 workstation with a 67-MHz processor. This configuration ran in real-time, but not sufficiently fast for operational requirements. Thus, the AMU recommends a workstation with a 200-MHz processor and 512 megabytes of memory to run the AMU's configuration of LDIS in real-time. This report presents results from two case studies and several data sensitivity experiments. ADAS demonstrates utility through its ability to depict high-resolution cloud and wind features in a variety of weather situations. The sensitivity experiments illustrate the influence of disparate data on the resulting ADAS analyses.
Simulation of ozone production in a complex circulation region using nested grids
NASA Astrophysics Data System (ADS)
Taghavi, M.; Cautenet, S.; Foret, G.
2004-06-01
During the ESCOMPTE precampaign (summer 2000, over Southern France), a 3-day period of intensive observation (IOP0), associated with ozone peaks, was simulated. The comprehensive RAMS model, version 4.3, coupled on-line with a chemical module including 29 species, is used to follow the chemistry of the polluted zone. This efficient but time-consuming method can be used because the code is installed on a parallel computer, the SGI 3800. Two runs are performed: run 1 with a single grid and run 2 with two nested grids. The simulated fields of ozone, carbon monoxide, nitrogen oxides and sulfur dioxide are compared with aircraft and surface station measurements. The 2-grid run performs substantially better than the single-grid run because the former takes pollutants from outside the inner domain into account. This on-line method helps to satisfactorily retrieve the redistribution of chemical species and to explain the impact of dynamics on this redistribution.
A Monte-Carlo maplet for the study of the optical properties of biological tissues
NASA Astrophysics Data System (ADS)
Yip, Man Ho; Carvalho, M. J.
2007-12-01
Monte-Carlo simulations are commonly used to study complex physical processes in various fields of physics. In this paper we present a Maple program intended for Monte-Carlo simulations of photon transport in biological tissues. The program has been designed so that the input data and output display can be handled by a maplet (an easy and user-friendly graphical interface), named the MonteCarloMaplet. A thorough explanation of the programming steps and how to use the maplet is given. Results obtained with the Maple program are compared with corresponding results available in the literature.
Program summary:
Program title: MonteCarloMaplet
Catalogue identifier: ADZU_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZU_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3251
No. of bytes in distributed program, including test data, etc.: 296 465
Distribution format: tar.gz
Programming language: Maple 10
Computer: Acer Aspire 5610 (any running Maple 10)
Operating system: Windows XP Professional (any running Maple 10)
Classification: 3.1, 5
Nature of problem: Simulate the transport of radiation in biological tissues.
Solution method: The Maple program follows the steps of the C program of L. Wang et al. [L. Wang, S.L. Jacques, L. Zheng, Computer Methods and Programs in Biomedicine 47 (1995) 131-146]; the Maple library routine for random number generation is used [Maple 10 User Manual, © Maplesoft, a division of Waterloo Maple Inc., 2005].
Restrictions: Running time increases rapidly with the number of photons used in the simulation.
Unusual features: A maplet (graphical user interface) has been programmed for data input and output. Note that the Monte-Carlo simulation was programmed with Maple 10. If attempting to run the simulation with an earlier version of Maple, appropriate modifications (regarding typesetting fonts) are required, after which the worksheet runs without problem; however, some of the windows of the maplet may still appear distorted.
Running time: Depends essentially on the number of photons used in the simulation. Elapsed times for particular runs are reported in the main text.
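For readers unfamiliar with the method, here is a toy Python analogue of Monte-Carlo photon transport, a 1-D absorb-or-scatter random walk; the real program follows the far more detailed 3-D algorithm of Wang et al., and all parameters below are invented:

```python
import random

def simulate_photon(rng, mu_a=0.1, mu_s=1.0, step=0.1, thickness=5.0):
    # Toy 1-D random walk of a photon packet through tissue: at each
    # step the packet is absorbed with probability mu_a/(mu_a+mu_s),
    # otherwise it scatters forward or backward at random.
    depth = 0.0
    while 0.0 <= depth < thickness:
        if rng.random() < mu_a / (mu_a + mu_s):
            return "absorbed"
        depth += step if rng.random() < 0.5 else -step
    return "reflected" if depth < 0 else "transmitted"

rng = random.Random(1)
fates = [simulate_photon(rng) for _ in range(10000)]
for fate in ("absorbed", "reflected", "transmitted"):
    print(fate, fates.count(fate))
```

As the abstract notes, running time grows directly with the number of photon packets launched.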
The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models
NASA Technical Reports Server (NTRS)
Penn, John M.
2016-01-01
The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.
Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units
NASA Astrophysics Data System (ADS)
Kemal, Jonathan Yashar
For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points, or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
Scientific Benefits of Space Science Models Archiving at Community Coordinated Modeling Center
NASA Technical Reports Server (NTRS)
Kuznetsova, Maria M.; Berrios, David; Chulaki, Anna; Hesse, Michael; MacNeice, Peter J.; Maddox, Marlo M.; Pulkkinen, Antti; Rastaetter, Lutz; Taktakishvili, Aleksandre
2009-01-01
The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. CCMC provides a web-based Run-on-Request system by which an interested scientist can request simulations for a broad range of space science problems. To allow the models to be driven by data relevant to particular events, CCMC developed a tool that automatically downloads data from data archives and transforms them into the required formats. CCMC also provides a tailored web-based visualization interface for the model output, as well as the capability to download the simulation output in a portable format. CCMC offers a variety of visualization and output analysis tools to aid scientists in the interpretation of simulation results. During the eight years since the Run-on-Request system became available, CCMC has archived the results of almost 3000 runs covering significant space weather events and time intervals of interest identified by the community. The simulation results archived at CCMC also include a library of general-purpose runs with modeled conditions that are used for education and research. Archiving the results of simulations performed in support of several Modeling Challenges helps to evaluate the progress in space weather modeling over time. We will highlight the scientific benefits of the CCMC space science model archive and discuss plans for further development of advanced methods to interact with simulation results.
WE-C-217BCD-08: Rapid Monte Carlo Simulations of DQE(f) of Scintillator-Based Detectors.
Star-Lack, J; Abel, E; Constantin, D; Fahrig, R; Sun, M
2012-06-01
Monte Carlo simulations of DQE(f) can greatly aid in the design of scintillator-based detectors by helping optimize key parameters including scintillator material and thickness, pixel size, surface finish, and septa reflectivity. However, the additional optical transport significantly increases simulation times, necessitating a large number of parallel processors to adequately explore the parameter space. To address this limitation, we have optimized the DQE(f) algorithm, reducing simulation times per design iteration to 10 minutes on a single CPU. DQE(f) is proportional to the ratio MTF(f)^2/NPS(f). The LSF-MTF simulation uses a slanted line source and is rapidly performed with relatively few gammas launched. However, the conventional NPS simulation for standard radiation exposure levels requires the acquisition of multiple flood fields (nRun), each requiring billions of input gamma photons (nGamma), many of which scintillate, producing thousands of optical photons (nOpt) per deposited MeV. The resulting execution time is proportional to the product nRun x nGamma x nOpt. In this investigation, we revisit the theoretical derivation of DQE(f) and reveal significant computation-time savings through the optimization of nRun, nGamma, and nOpt. Using GEANT4, we determine optimal values for these three variables for a GOS scintillator-amorphous silicon portal imager. Both isotropic and Mie optical scattering processes were modeled. Simulation results were validated against the literature. We found that, depending on the radiative and optical attenuation properties of the scintillator, the NPS can be accurately computed using values for nGamma below 1000 and values for nOpt below 500/MeV; nRun should remain above 200. Using these parameters, typical computation times for a complete NPS ranged from 2-10 minutes on a single CPU. The number of launched particles and corresponding execution times for a DQE simulation can thus be dramatically reduced, allowing for accurate computation with modest computer hardware. NIH RO1 CA138426. Several authors work for Varian Medical Systems. © 2012 American Association of Physicists in Medicine.
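The two relations the abstract leans on, as a small illustrative calculation; every number below is an invented placeholder, not one of the paper's GEANT4 results:

```python
import numpy as np

# Cost model from the abstract: execution time scales with the product
# nRun x nGamma x nOpt, so shrinking each factor multiplies the savings.
def relative_cost(n_run, n_gamma, n_opt):
    return n_run * n_gamma * n_opt

baseline = relative_cost(200, 1_000_000, 10_000)   # illustrative, far
optimized = relative_cost(200, 1_000, 500)         # smaller than "billions"
print(f"speedup ~ {baseline / optimized:.0f}x")

# DQE(f) is proportional to MTF(f)^2 / NPS(f); toy curves stand in for
# the simulated quantities.
f = np.linspace(0.01, 1.0, 50)
mtf = np.sinc(f)           # toy MTF
nps = 1.0 + 0.1 * f        # toy (normalized) NPS
dqe = mtf**2 / nps
print(f"relative DQE at low frequency ~ {dqe[0]:.2f}")
```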
Vulnerability Model. A Simulation System for Assessing Damage Resulting from Marine Spills
1975-06-01
used and the scenario simulated. The test runs were made on an IBM 360/65 computer. Running times were generally between 15 and 35 CPU seconds...fect filrthcr north. A petroleum tank-truck operation was located within 600 feet Of L:- stock pond on which the crude oil had dammred itp . At 5 A-M
Multi-GPGPU Tsunami simulation at Toyama-bay
NASA Astrophysics Data System (ADS)
Furuyama, Shoichi; Ueda, Yuki
2017-07-01
Accelerated multi-General Purpose Graphics Processing Unit (GPGPU) computation of tsunami run-up was achieved over a wide area (the whole of Toyama-bay in Japan) using a faster computation technique. Toyama-bay has active faults at the sea-bed, so there is a high possibility of earthquakes and, in the case of a huge earthquake, tsunami waves; predicting the area of tsunami run-up is therefore important for reducing the damage the disaster causes to residents. However, the simulation is a very hard task because of limited computer resources. A high-resolution calculation on the order of several meters is required for a run-up simulation because artificial structures on the ground, such as roads, buildings, and houses, are very small, while at the same time a huge simulation area is required. In the Toyama-bay case the area is 42 km x 15 km: when 5 m x 5 m computational cells are used, over 26,000,000 computational cells are generated, and a normal desktop CPU took about 10 hours for the calculation. Reducing this calculation time is an important problem for an immediate tsunami run-up prediction system, which would in turn help protect the many residents around the coastal region. This study reduced the calculation time by using a multi-GPGPU system equipped with six NVIDIA Tesla K20X devices, with InfiniBand connections between compute nodes via the MVAPICH library. As a result, the calculation ran 5.16 times faster on six GPUs than on one GPU, an 86% parallel efficiency relative to linear speedup.
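The quoted efficiency follows directly from the reported speedup:

```python
# Parallel efficiency check for the reported result: a 5.16x speedup on
# six GPUs corresponds to 5.16 / 6 = 0.86, i.e. the 86% quoted above.
def parallel_efficiency(speedup, n_devices):
    return speedup / n_devices

print(f"{parallel_efficiency(5.16, 6):.0%}")  # -> 86%
```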
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Zhenhuan; Boyuka, David; Zou, X
The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induce heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time, before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, lightweight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.
Web-HLA and Service-Enabled RTI in the Simulation Grid
NASA Astrophysics Data System (ADS)
Huang, Jijie; Li, Bo Hu; Chai, Xudong; Zhang, Lin
HLA-based simulation in a grid environment has become a main research hotspot in the M&S community, but the current HLA has many shortcomings when running in a grid environment. This paper analyzes the analogies between HLA and OGSA from the software-architecture point of view and points out that service-oriented methods should be introduced into the three components of HLA to overcome its shortcomings. The paper proposes an expanded running architecture that integrates HLA with OGSA and realizes a service-enabled RTI (SE-RTI). In addition, to handle the bottleneck of efficiently realizing the HLA time management mechanism, the paper proposes a centralized approach in which the CRC of the SE-RTI takes charge of time management and of dispatching the TSO events of each federate. Benchmark experiments indicate that the running speed of simulations over the Internet or a WAN is noticeably improved.
DualSPHysics: A numerical tool to simulate real breakwaters
NASA Astrophysics Data System (ADS)
Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho
2018-02-01
The open-source code DualSPHysics is used in this work to compute the wave run-up on an existing dike on the Chinese coast using realistic dimensions, bathymetry and wave conditions. The GPU computing power of DualSPHysics allows simulating real-engineering problems that involve complex geometries at high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data from a wave flume. Those experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, spurious reflections from the wavemaker are removed by using an active wave absorption technique.
Adaptive Grid Refinement for Atmospheric Boundary Layer Simulations
NASA Astrophysics Data System (ADS)
van Hooft, Antoon; van Heerwaarden, Chiel; Popinet, Stephane; van der linden, Steven; de Roode, Stephan; van de Wiel, Bas
2017-04-01
We validate and benchmark an adaptive mesh refinement (AMR) algorithm for numerical simulations of the atmospheric boundary layer (ABL). The AMR technique aims to distribute the computational resources efficiently over a domain by refining and coarsening the numerical grid locally and in time. This can be beneficial for studying cases in which length scales vary significantly in time and space. We present results for a case describing the growth and decay of a convective boundary layer. The AMR results are benchmarked against two runs using a fixed, finely meshed grid: first with the same numerical formulation as the AMR code, and second with a code dedicated to ABL studies. Compared to the fixed and isotropic grid runs, the AMR algorithm can coarsen and refine the grid such that accurate results are obtained while using only a fraction of the grid cells. Performance-wise, the AMR run was cheaper than the fixed and isotropic grid run with a similar numerical formulation. However, for this specific case, the dedicated code outperformed both aforementioned runs.
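A minimal sketch of the refine/coarsen decision at the heart of AMR, using a coarsen-then-interpolate error estimate; this illustrates the general principle only and is not the algorithm of the code used in the study:

```python
import numpy as np

def refine_flags(field, tol):
    # Generic wavelet-style error estimate: compare each cell with a
    # coarsened-then-interpolated copy of itself; refine where the
    # difference exceeds `tol`, coarsen where it is far below it.
    coarse = 0.5 * (field[0::2] + field[1::2])
    interp = np.repeat(coarse, 2)
    err = np.abs(field - interp)
    return err > tol, err < 0.1 * tol

x = np.linspace(0.0, 1.0, 64)
field = np.tanh((x - 0.5) / 0.02)          # a sharp, localized gradient
refine, coarsen = refine_flags(field, 0.01)
print(f"refine {refine.sum()} cells, coarsen {coarsen.sum()} cells")
```

Only the cells around the sharp gradient are flagged for refinement, which is how AMR concentrates resolution where the length scales demand it.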
NASA Technical Reports Server (NTRS)
Jefferson, David; Beckman, Brian
1986-01-01
This paper describes the concept of virtual time and its implementation in the Time Warp Operating System at the Jet Propulsion Laboratory. Virtual time is a distributed synchronization paradigm that is appropriate for distributed simulation, database concurrency control, real-time systems, and coordination of replicated processes. The Time Warp Operating System is targeted toward the distributed simulation application and runs on a 32-node JPL Mark II Hypercube.
Malataras, G; Kappas, C; Lovelock, D M; Mohan, R
1997-01-01
This article presents a comparison between two implementations of an EGS4 Monte Carlo simulation of a radiation therapy machine. The first implementation was run on a high performance RISC workstation, and the second was run on an inexpensive PC. The simulation was performed using the MCRAD user code. The photon energy spectra, as measured at a plane transverse to the beam direction and containing the isocenter, were compared. The photons were also binned radially in order to compare the variation of the spectra with radius. With 500,000 photons recorded in each of the two simulations, the running times were 48 h and 116 h for the workstation and the PC, respectively. No significant statistical differences between the two implementations were found.
Nonlinear relaxation algorithms for circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleh, R.A.
Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (IC). However, the standard techniques used in programs such as SPICE result in very long computer run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.
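A compact illustration of the waveform-relaxation idea these algorithms build on: each subcircuit is solved over the whole time window against the other's frozen waveform, then the process iterates. The two coupled RC nodes and all constants below are invented for the example:

```python
import numpy as np

# Gauss-Seidel waveform relaxation for dv1/dt = u + v2 - 2*v1 and
# dv2/dt = v1 - 2*v2, discretized with implicit Euler. Latent
# (inactive) subcircuits converge in very few sweeps, which is the
# inactivity these methods exploit.
t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]
u = np.ones_like(t)                     # input source on node 1
v1, v2 = np.zeros_like(t), np.zeros_like(t)

for sweep in range(10):
    v1_old = v1.copy()
    for k in range(1, len(t)):          # node 1 sees node 2's old waveform
        v1[k] = (v1[k-1] + dt * (u[k] + v2[k])) / (1 + 2 * dt)
    for k in range(1, len(t)):          # node 2 uses node 1's new waveform
        v2[k] = (v2[k-1] + dt * v1[k]) / (1 + 2 * dt)
    if np.max(np.abs(v1 - v1_old)) < 1e-6:
        break
print(f"stopped after {sweep + 1} sweeps; v1(T) = {v1[-1]:.3f}")
```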
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU, with hybrid parallelization (each of the multiple simulations runs simultaneously, and the computational tasks within each simulation are parallelized), is about 16 times faster than a sequential simulation on a CPU. We also improved the memory access and reduced the memory footprint in order to optimize the computations on the GPU, and implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various model sizes, we performed SSA simulations on different model sizes and compared the computation times to those of sequential simulations on a CPU. When used with the improved time course recording function, our method accelerated the SSA simulation by a factor of up to 130. PMID:25762936
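Since the record above centers on the stochastic simulation algorithm, a minimal direct-method (Gillespie) SSA for a toy birth-death process may help; the reaction system and rates are invented for illustration, and a GPU implementation would run many such realizations concurrently, one per thread:

```python
import math, random

def ssa(k_on=1.0, k_off=0.5, x0=10, t_end=20.0, seed=0):
    # Gillespie direct method for a birth-death process with
    # propensities a1 = k_on (birth) and a2 = k_off * x (death).
    # Each call is one stochastic realization.
    rng = random.Random(seed)
    t, x = 0.0, x0
    while t < t_end:
        a1, a2 = k_on, k_off * x
        a0 = a1 + a2
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        x += 1 if rng.random() * a0 < a1 else -1  # pick the next reaction
    return x

finals = [ssa(seed=s) for s in range(1000)]   # multiple runs for statistics
print(f"mean copy number ~ {sum(finals) / len(finals):.2f} "
      f"(stationary mean is k_on/k_off = 2)")
```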
GRODY - GAMMA RAY OBSERVATORY DYNAMICS SIMULATOR IN ADA
NASA Technical Reports Server (NTRS)
Stark, M.
1994-01-01
Analysts use a dynamics simulator to test the attitude control system algorithms used by a satellite. The simulator must simulate the hardware, dynamics, and environment of the particular spacecraft and provide user services which enable the analyst to conduct experiments. Researchers at Goddard's Flight Dynamics Division developed GRODY alongside GROSS (GSC-13147), a FORTRAN simulator which performs the same functions, in a case study to assess the feasibility and effectiveness of the Ada programming language for flight dynamics software development. They used popular object-oriented design techniques to link the simulator's design with its function. GRODY is designed for analysts familiar with spacecraft attitude analysis. The program supports maneuver planning as well as analytical testing and evaluation of the attitude determination and control system used on board the Gamma Ray Observatory (GRO) satellite. GRODY simulates the GRO on-board computer and Control Processor Electronics. The analyst/user sets up and controls the simulation. GRODY allows the analyst to check and update parameter values and ground commands, obtain simulation status displays, interrupt the simulation, analyze previous runs, and obtain printed output of simulation runs. The video terminal screen display allows visibility of command sequences, full-screen display and modification of parameters using input fields, and verification of all input data. Data input available for modification includes alignment and performance parameters for all attitude hardware, simulation control parameters which determine simulation scheduling and simulator output, initial conditions, and on-board computer commands. GRODY generates eight types of output: simulation results data set, analysis report, parameter report, simulation report, status display, plots, diagnostic output (which helps the user trace any problems that have occurred during a simulation), and a permanent log of all runs and errors. The analyst can send results output in graphical or tabular form to a terminal, disk, or hardcopy device, and can choose to have any or all items plotted against time or against each other. Goddard researchers developed GRODY on a VAX 8600 running VMS version 4.0. For near real time performance, GRODY requires a VAX at least as powerful as a model 8600 running VMS 4.0 or a later version. To use GRODY, the VAX needs an Ada Compilation System (ACS), Code Management System (CMS), and 1200K memory. GRODY is written in Ada and FORTRAN.
NASA Technical Reports Server (NTRS)
Springer, P.
1993-01-01
This paper discusses the method by which the Cascade-Correlation algorithm was parallelized so that it could be run using the Time Warp Operating System (TWOS). TWOS is a special-purpose operating system designed to run parallel discrete event simulations with maximum efficiency on parallel or distributed computers.
Validation of Supersonic Film Cooling Modeling for Liquid Rocket Engine Applications
NASA Technical Reports Server (NTRS)
Morris, Christopher I.; Ruf, Joseph H.
2010-01-01
Topics include: upper stage engine key requirements and design drivers; Calspan "stage 1" results, He slot injection into hypersonic flow (air); test articles for shock generator diagram, slot injector details, and instrumentation positions; test conditions; modeling approach; 2-d grid used for film cooling simulations of test article; heat flux profiles from 2-d flat plate simulations (run #4); heat flux profiles from 2-d backward facing step simulations (run #43); isometric sketch of single coolant nozzle, and x-z grid of half-nozzle domain; comparison of 2-d and 3-d simulations of coolant nozzles (run #45); flowfield properties along coolant nozzle centerline (run #45); comparison of 3-d CFD nozzle flow calculations with experimental data; nozzle exit plane reduced to linear profile for use in 2-d film-cooling simulations (run #45); synthetic Schlieren image of coolant injection region (run #45); axial velocity profiles from 2-d film-cooling simulation (run #45); coolant mass fraction profiles from 2-d film-cooling simulation (run #45); heat flux profiles from 2-d film cooling simulations (run #45); heat flux profiles from 2-d film cooling simulations (runs #47, #45, and #47); 3-d grid used for film cooling simulations of test article; heat flux contours from 3-d film-cooling simulation (run #45); and heat flux profiles from 3-d and 2-d film cooling simulations (runs #44, #46, and #47).
Rover Attitude and Pointing System Simulation Testbed
NASA Technical Reports Server (NTRS)
Vanelli, Charles A.; Grinblat, Jonathan F.; Sirlin, Samuel W.; Pfister, Sam
2009-01-01
The MER (Mars Exploration Rover) Attitude and Pointing System Simulation Testbed Environment (RAPSSTER) provides a simulation platform used for the development and test of GNC (guidance, navigation, and control) flight algorithm designs for the Mars rovers, which was specifically tailored to the MERs, but has since been used in the development of rover algorithms for the Mars Science Laboratory (MSL) as well. The software provides an integrated simulation and software testbed environment for the development of Mars rover attitude and pointing flight software. It provides an environment that is able to run the MER GNC flight software directly (as opposed to running an algorithmic model of the MER GNC flight code). This improves simulation fidelity and confidence in the results. Furthermore, the simulation environment allows the user to single-step through its execution, pausing and restarting at will. The system also provides for the introduction of simulated faults specific to Mars rover environments that cannot be replicated in other testbed platforms, to stress test the GNC flight algorithms under examination. The software provides facilities to do these stress tests in ways that cannot be done in the real-time flight system testbeds, such as time-jumping (both forwards and backwards), and introduction of simulated actuator faults that would be difficult, expensive, and/or destructive to implement in the real-time testbeds. Actual flight-quality codes can be incorporated back into the development-test suite of GNC developers, closing the loop between the GNC developers and the flight software developers. The software provides fully automated scripting, allowing multiple tests to be run with varying parameters, without human supervision.
Simulation of linear mechanical systems
NASA Technical Reports Server (NTRS)
Sirlin, S. W.
1993-01-01
A dynamics and controls analyst is typically presented with a structural dynamics model and must perform various input/output tests and design control laws. The required time/frequency simulations need to be done many times as models change and control designs evolve. This paper examines some simple ways that open and closed loop frequency and time domain simulations can be done using the special structure of the system equations usually available. Routines were developed to run under Pro-Matlab in a mixture of the Pro-Matlab interpreter and FORTRAN (using the .mex facility). These routines are often orders of magnitude faster than trying the typical 'brute force' approach of using built-in Pro-Matlab routines such as bode. This makes the analyst's job easier since not only does an individual run take less time, but much larger models can be attacked, often allowing the whole model reduction step to be eliminated.
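A small Python sketch of the kind of structure exploitation the paper describes (in Python rather than Pro-Matlab): diagonalizing the state matrix once reduces each frequency-response point to an O(n) evaluation instead of a dense solve per frequency. The matrices below are random placeholders, not a model from the paper:

```python
import numpy as np

# Frequency response H(jw) = C (jwI - A)^-1 B, evaluated cheaply by
# diagonalizing A = V diag(lam) V^-1 once and reusing the factorization
# at every frequency point.
n = 4
rng = np.random.default_rng(0)
A = np.diag([-1.0, -2.0, -5.0, -10.0]) + 0.1 * rng.standard_normal((n, n))
B = np.ones((n, 1))
C = np.ones((1, n))

lam, V = np.linalg.eig(A)       # one-time cost
Bm = np.linalg.solve(V, B)      # transform input/output matrices once
Cm = C @ V

for w in (0.1, 1.0, 10.0):
    H = (Cm * (1.0 / (1j * w - lam))) @ Bm   # diagonal resolvent, O(n)
    print(f"w={w:>5}: |H| = {abs(H[0, 0]):.4f}")
```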
Computer Simulation of the Neuronal Action Potential.
ERIC Educational Resources Information Center
Solomon, Paul R.; And Others
1988-01-01
A series of computer simulations of the neuronal resting and action potentials are described. Discusses the use of simulations to overcome the difficulties of traditional instruction, such as blackboard illustration, which can only illustrate these events at one point in time. Describes systems requirements necessary to run the simulations.…
ERIC Educational Resources Information Center
Nordmark, Staffan
1984-01-01
This report contains a theoretical model for describing the motion of a passenger car. The simulation program based on this model is used in conjunction with an advanced driving simulator and run in real time. The mathematical model is complete in the sense that the dynamics of the engine, transmission and steering system is described in some…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Benjamin S.; Hamilton, Steven P.; Jarrett, Michael G.
This report describes the performance improvements made to the VERA Core Simulator (VERA-CS) during FY2016. The development of the VERA Core Simulator has focused on the capability needed to deplete physical reactors and help solve various problems; this capability requires the accurate simulation of many operating cycles of a nuclear power plant. The first section of this report introduces two test problems used to assess the run-time performance of VERA-CS using a source dated February 2016. The next section provides a brief overview of the major modifications made to decrease the computational cost. Following the descriptions of the major improvements, the run-time for each improvement is shown. Conclusions on the work are presented, and further follow-on performance improvements are suggested.
Simulations of Eurasian winter temperature trends in coupled and uncoupled CFSv2
NASA Astrophysics Data System (ADS)
Collow, Thomas W.; Wang, Wanqiu; Kumar, Arun
2018-01-01
Conflicting results have been presented regarding the link between Arctic sea-ice loss and midlatitude cooling, particularly over Eurasia. This study analyzes uncoupled (atmosphere-only) and coupled (ocean-atmosphere) simulations by the Climate Forecast System, version 2 (CFSv2), to examine this linkage during the Northern Hemisphere winter, focusing on the simulation of the observed surface cooling trend over Eurasia during the last three decades. The uncoupled simulations are Atmospheric Model Intercomparison Project (AMIP) runs forced with mean seasonal cycles of sea surface temperature (SST) and sea ice, using combinations of SST and sea ice from different time periods to assess the role that each plays individually, and to assess the role of atmospheric internal variability. Coupled runs are used to further investigate the role of internal variability via the analysis of initialized predictions and the evolution of the forecast with lead time. The AMIP simulations show a mean warming response over Eurasia due to SST changes, but little response to changes in sea ice. Individual runs simulate cooler periods over Eurasia, and this is shown to be concurrent with a stronger Siberian high and warming over Greenland. No substantial differences in the variability of Eurasian surface temperatures are found between the different model configurations. In the coupled runs, the region of significant warming over Eurasia is small at short leads, but increases at longer leads. It is concluded that, although the models have some capability in highlighting the temperature variability over Eurasia, the observed cooling may still be a consequence of internal variability.
Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the bad scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature, in the sense that any fairness-based VM scheduler implementation would exhibit it with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in run time with the PDES-optimized scheduler relative to the regular VM scheduler, with over a 20-fold reduction in the run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.
Accelerating 3D Hall MHD Magnetosphere Simulations with Graphics Processing Units
NASA Astrophysics Data System (ADS)
Bard, C.; Dorelli, J.
2017-12-01
The resolution required to simulate planetary magnetospheres with Hall magnetohydrodynamics results in problem sizes approaching several hundred million grid cells. These would take years to run on a single computational core and require hundreds or thousands of computational cores to complete in a reasonable time. However, this requires access to the largest supercomputers. Graphics processing units (GPUs) provide a viable alternative: one GPU can do the work of roughly 100 cores, bringing Hall MHD simulations of Ganymede within reach of modest GPU clusters (~8 GPUs). We report our progress in developing a GPU-accelerated, three-dimensional Hall magnetohydrodynamic code and present Hall MHD simulation results for both Ganymede (run on 8 GPUs) and Mercury (56 GPUs). We benchmark our Ganymede simulation with previous results for the Galileo G8 flyby, namely that adding the Hall term to ideal MHD simulations changes the global convection pattern within the magnetosphere. Additionally, we present new results for the G1 flyby as well as initial results from Hall MHD simulations of Mercury, and compare them with the corresponding ideal MHD runs.
Just-in-time adaptive disturbance estimation for run-to-run control of photolithography overlay
NASA Astrophysics Data System (ADS)
Firth, Stacy K.; Campbell, W. J.; Edgar, Thomas F.
2002-07-01
One of the main challenges to implementations of traditional run-to-run control in the semiconductor industry is a high mix of products in a single factory. To address this challenge, Just-in-time Adaptive Disturbance Estimation (JADE) has been developed. JADE uses a recursive weighted least-squares parameter estimation technique to identify the contributions to variation that depend on the product, as well as on the tools on which the lot was processed. As applied to photolithography overlay, JADE assigns these sources of variation to contributions from the context items: tool, product, reference tool, and reference reticle. Simulations demonstrate that JADE effectively identifies disturbances in contributing context items when the variations are known to be additive. The superior performance of JADE over traditional EWMA is also shown in these simulations. The results of applying JADE to data from a high-mix production facility show that JADE still performs better than EWMA, even with the challenges of a real manufacturing environment.
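A toy recursive least-squares estimator of additive context contributions, in the spirit of the estimation JADE performs as described above; the context encoding, forgetting factor, and noise level are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Estimate additive tool + product offsets from lot-by-lot overlay
# measurements via recursive least squares with exponential forgetting.
n_tools, n_products = 3, 4
true = np.array([0.5, -0.2, 0.1, 0.3, 0.0, -0.4, 0.2])  # tool, then product
theta = np.zeros(7)            # running estimate of all contributions
P = np.eye(7) * 100.0          # parameter covariance
lam = 0.98                     # forgetting factor for drifting contexts
rng = np.random.default_rng(1)

for lot in range(500):
    tool, prod = rng.integers(n_tools), rng.integers(n_products)
    x = np.zeros(7); x[tool] = 1.0; x[n_tools + prod] = 1.0
    y = true @ x + rng.normal(0.0, 0.05)      # measured overlay error
    k = P @ x / (lam + x @ P @ x)             # RLS gain
    theta += k * (y - x @ theta)
    P = (P - np.outer(k, x @ P)) / lam

errs = [theta[t] + theta[n_tools + p] - (true[t] + true[n_tools + p])
        for t in range(n_tools) for p in range(n_products)]
print(f"RMS prediction error over contexts: "
      f"{np.sqrt(np.mean(np.square(errs))):.3f}")
```

Because every lot updates the shared context estimates, a rarely-run product still benefits from information gathered on other products processed on the same tool, which is the advantage over a per-thread EWMA.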
NASA Astrophysics Data System (ADS)
Schalge, Bernd; Rihani, Jehan; Haese, Barbara; Baroni, Gabriele; Erdal, Daniel; Haefliger, Vincent; Lange, Natascha; Neuweiler, Insa; Hendricks-Franssen, Harrie-Jan; Geppert, Gernot; Ament, Felix; Kollet, Stefan; Cirpka, Olaf; Saavedra, Pablo; Han, Xujun; Attinger, Sabine; Kunstmann, Harald; Vereecken, Harry; Simmer, Clemens
2017-04-01
Currently, an integrated approach to simulating the earth system is evolving in which several compartment models are coupled to achieve the most physically consistent representation possible. We used the model TerrSysMP, which fully couples subsurface, land surface and atmosphere, in a synthetic study that mimicked the Neckar catchment in Southern Germany. A virtual reality run was made at a high resolution of 400 m for the land surface and subsurface and 1.1 km for the atmosphere. Ensemble runs at a lower resolution (800 m for the land surface and subsurface) were also made. The ensemble was generated by systematically varying soil and vegetation parameters and the lateral atmospheric forcing among the ensemble members. It was found that for some variables and some time periods the ensemble runs deviated strongly from the virtual reality reference run (the reference run was not covered by the ensemble), which could be related to the different model resolutions. This was, for example, the case for river discharge in summer. We also analyzed the spread of model states as a function of time and found clear relations between the spread and the time of year and weather conditions. For example, the ensemble spread of latent heat flux related to uncertain soil parameters was larger under dry soil conditions than under wet soil conditions. Another example is that the ensemble spread of atmospheric states was more influenced by uncertain soil and vegetation parameters under conditions of low air pressure gradients (in summer) than under the larger air pressure gradients of winter. The analysis of this ensemble of fully coupled model simulations provided valuable insights into the dynamics of land-atmosphere feedbacks, which we will further highlight in the presentation.
Analyzing Spacecraft Telecommunication Systems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric
2004-01-01
Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
User's instructions for the cardiovascular Walters model
NASA Technical Reports Server (NTRS)
Croston, R. C.
1973-01-01
The model is a combined, steady-state cardiovascular and thermal model. It was originally developed for interactive use, but was converted to batch-mode simulation for the Sigma 3 computer. The purpose of the model is to compute steady-state circulatory and thermal variables in response to exercise work loads and environmental factors. During a computer simulation run, several selected variables are printed at each time step. End conditions are also printed at the completion of the run.
Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Tumeo, Antonino; Secchi, Simone
Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures, due to issues such as the size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, shared-memory multiprocessors (SMPs) with multi-core processors have become an attractive platform for simulating large-scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes contention into account. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% accuracy.
Designing Crop Simulation Web Service with Service Oriented Architecture Principle
NASA Astrophysics Data System (ADS)
Chinnachodteeranun, R.; Hung, N. D.; Honda, K.
2015-12-01
Crop simulation models are efficient tools for simulating crop growth processes and yield. Running crop models requires data from various sources as well as time-consuming data processing, such as data quality checking and data formatting, before those data can be input to the model. This limits the use of crop modeling to crop modelers. We aim to make running crop models convenient for a wider range of users so that the utilization of crop models will expand, directly improving agricultural applications. As a first step, we developed a prototype that runs DSSAT on the Web, called Tomorrow's Rice (v. 1). It predicts rice yields based on a planting date, rice variety and soil characteristics using the DSSAT crop model. A user only needs to select a planting location on the Web GUI; the system then queries historical weather data from available sources and returns the expected yield. Currently, we are working on weather data connection via the Sensor Observation Service (SOS) interface defined by the Open Geospatial Consortium (OGC), so that weather data can be automatically fed to a weather generator that produces weather scenarios for running the crop model. To expand these services further, we are designing a web service framework consisting of layers of web services to support composing and executing crop simulations. This framework allows a third-party application to call and cascade each service as needed for data preparation and for running the DSSAT model using a dynamic web service mechanism. The framework has a module to manage data format conversion, so users do not need to spend time curating the data inputs. Dynamic linking of data sources and services is implemented using the Service Component Architecture (SCA). This agriculture web service platform demonstrates interoperability of weather data through the SOS interface, convenient connections between weather data sources and a weather generator, and the chaining of various services to run crop models for decision support.
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
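The antithetic-variates device mentioned above pairs each uniform draw u with 1-u; for a monotone outcome model the paired paths are negatively correlated, which lowers the variance of the mean estimate. A self-contained sketch, where the outcome model is an invented stand-in for the UKPDS 68 equations:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(u):
    # Invented monotone outcome model mapping a uniform draw to a
    # QALY-like quantity; a stand-in for the real risk equations.
    return 10.0 * (1.0 - np.exp(-2.0 * u))

n = 20000
u = rng.random(n)
plain = model(rng.random(2 * n))             # 2n ordinary draws
anti = 0.5 * (model(u) + model(1.0 - u))     # n antithetic pairs

# Variance of the mean estimator at equal total model evaluations.
print(f"plain:      {plain.var() / (2 * n):.2e}")
print(f"antithetic: {anti.var() / n:.2e}")
```

With the same total number of model evaluations, the paired estimator's variance is roughly halved for a monotone model, which is consistent with the reported ~53% reduction in required replications.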
A Storm Surge and Inundation Model of the Back River Watershed at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Loftis, Jon Derek; Wang, Harry V.; DeYoung, Russell J.
2013-01-01
This report on a Virginia Institute of Marine Science project demonstrates that sub-grid modeling technology (now part of the Chesapeake Bay Inundation Prediction System, CIPS) can incorporate high-resolution Lidar measurements provided by NASA Langley Research Center into the sub-grid model framework to resolve detailed topographic features, for use as a hydrological transport model for run-off simulations within NASA Langley and Langley Air Force Base. Rainfall over land accumulates in the ditches and channels resolved by the model sub-grid; this capability was tested by simulating the run-off induced by heavy precipitation. Possessing capabilities for both storm surge and run-off simulation, the CIPS model was then applied to simulate real storm events, starting with Hurricane Isabel in 2003. It is shown that the model can generate highly accurate on-land inundation maps, as demonstrated by excellent comparison of the Langley tidal gauge time series data (CAPABLE.larc.nasa.gov) and the spatial patterns of real storm wrack line measurements with the model results simulated during Hurricanes Isabel (2003), Irene (2011), and a 2009 Nor'easter. With confidence built upon the model's performance, sea level rise scenarios from the ICCP (International Climate Change Partnership) were also included in the model scenario runs to simulate future inundation cases.
TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1
NASA Technical Reports Server (NTRS)
Bellenot, S. F.
1994-01-01
The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization: the user does not have to add special logic to aid in synchronization, give synchronization advice, or even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface-compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. It is designed to run on the Sun3/Sun4 series computers and the BBN "Butterfly" GP-1000 computer. The standard distribution medium for this package is a .25 inch tape cartridge in TAR format. TWOS was developed in 1989 and updated in 1991. This program is a copyrighted work with all copyright vested in NASA. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
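To make the rollback idea concrete, here is a toy, single-process sketch of a Time Warp logical process in Python: it checkpoints state after every event and, on receiving a straggler message, restores an earlier checkpoint and re-executes the undone events (anti-messages to other processes are omitted). This is an illustration of the mechanism, not TWOS code.

    class LogicalProcess:
        def __init__(self):
            self.lvt = 0.0              # local virtual time
            self.state = 0
            self.saved = [(0.0, 0)]     # (lvt, state) checkpoints
            self.pending = []           # messages not yet executed
            self.done = []              # messages already executed

        def receive(self, ts, value):
            if ts < self.lvt:
                # Straggler: restore the last checkpoint earlier than ts and
                # move the undone events back onto the pending queue.
                while len(self.saved) > 1 and self.saved[-1][0] >= ts:
                    self.saved.pop()
                self.lvt, self.state = self.saved[-1]
                redo = [m for m in self.done if m[0] >= ts]
                self.done = [m for m in self.done if m[0] < ts]
                self.pending.extend(redo)
            self.pending.append((ts, value))

        def run(self):
            while self.pending:
                self.pending.sort()
                ts, value = self.pending.pop(0)
                self.lvt = ts
                self.state += value     # the event handler
                self.done.append((ts, value))
                self.saved.append((self.lvt, self.state))

    lp = LogicalProcess()
    for msg in [(1.0, 5), (3.0, 2), (2.0, 7)]:  # the last message is a straggler
        lp.receive(*msg)
        lp.run()
    print(lp.lvt, lp.state)  # -> 3.0 14: all events applied in timestamp order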
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian W; Brunhart-Lupo, Nicholas J; Gruchalla, Kenny M
This brochure describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian W; Brunhart-Lupo, Nicholas J; Gruchalla, Kenny M
This presentation describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.
Mars Tumbleweed Simulation Using Singular Perturbation Theory
NASA Technical Reports Server (NTRS)
Raiszadeh, Behzad; Calhoun, Phillip
2005-01-01
The Mars Tumbleweed is a new surface rover concept that utilizes Martian winds as the primary source of mobility. Several designs have been proposed for the Mars Tumbleweed, all using aerodynamic drag to generate force for traveling about the surface. The Mars Tumbleweed, in its deployed configuration, must be large and lightweight to provide the ratio of drag force to rolling resistance necessary to initiate motion from the Martian surface. This paper discusses the dynamic simulation details of a candidate Tumbleweed design. The dynamic simulation model must properly evaluate and characterize the motion of the Tumbleweed rover to support proper selection of system design parameters. Several factors, such as model flexibility, simulation run time, and model accuracy, needed to be considered in the modeling assumptions. The simulation was required to address the flexibility of the rover and its interaction with the ground, and to properly evaluate its mobility. Assumptions needed to be made such that the simulated dynamic motion is accurate and realistic while not overly burdened by long simulation run times. This paper also shows results that provide reasonable correlation between the simulation and a drop/roll test of a Tumbleweed prototype.
The ISS as a platform for a fully simulated mars voyage
NASA Astrophysics Data System (ADS)
Narici, Livio; Reitz, Guenther
2016-07-01
The ISS can mimic the impact of microgravity, radiation, and the living and psychological conditions that astronauts will face during a deep space cruise, for example to Mars. This suggests the ISS as the most valuable "analogue" for deep space exploration. NASA has indeed suggested a 'full-up deep space simulation on last available ISS Mission: 6/7 crew for one year duration; full simulation of time delays & autonomous operations'. This idea should be pushed further. It is conceivable to use the ISS as the final "analogue", performing a real 'dry run' of a deep space mission (such as a mission to Mars), as close as reasonably possible to what the real voyage will be. This Mars ISS dry run (ISS4Mars) would last 500-800 days, mimicking most of the challenges that will be undertaken, such as length, isolation, food provision, decision making, time delays, health monitoring, and diagnostic and therapeutic actions: not a collection of "single experiments", but a complete exploration simulation where all the pieces come together for the first in-space simulated Mars voyage. Most of these challenges are the same as those that will be encountered during a Moon voyage, the most evident exceptions being the duration and the communication delay. By the time of the Mars ISS dry run, the science and technological challenges will have to be mostly solved by dedicated work. These solutions will be synergistically deployed in the dry run, which will simulate all the different aspects of the voyage: the trip to Mars, the stay on the planet, and the return to Earth. During the dry run: 1) there will be no arrivals/departures of spacecraft; 2) proper communication delays with the ground will be simulated; 3) decision processes will migrate from the ground to the ISS; 4) the stay on Mars will be simulated. The Mars ISS dry run will use just a portion of the ISS, totally isolated from the rest, leaving to the other portions the task of providing the operational support needed for ISS survival as well as support for emergency situations. Besides helping focus the attention of the many space and space-related programs on the quest for Mars, ISS4Mars will maintain a high level of attention from funding institutions and provide an important focus for the general public. This talk will present the many scientific issues still open to be addressed (see for example the disciplinary reports of the THESEUS project#), some examples of the challenging tests that could be performed, some of the operational challenges, and some of the issues not likely or possible to be simulated. # http://www.theseus-eu.org
Shoe cleat position during cycling and its effect on subsequent running performance in triathletes.
Viker, Tomas; Richardson, Matt X
2013-01-01
Research with cyclists suggests that placing the shoe cleat more posteriorly decreases the load on the lower limbs, which may benefit subsequent running in a triathlon. This study investigated the effect of shoe cleat position during cycling on subsequent running. Following bike-run training sessions with both aft and traditional cleat positions, 13 well-trained triathletes completed a 30 min simulated draft-legal triathlon cycling leg, followed by a maximal 5 km run, on two occasions: once with aft-placed and once with traditionally placed cleats. Oxygen consumption, breath frequency, heart rate, cadence, and power output were measured during cycling, while heart rate, contact time, 200 m lap time, and total time were measured during running. Cardiovascular measures did not differ between aft and traditional cleat placement during the cycling protocol. The 5 km run time was similar for aft and traditional cleat placement, at 1084 ± 80 s and 1072 ± 64 s, respectively, as were contact time during km 1 and 5, and heart rate and running speed for km 5. Running speed during km 1 was 2.1 ± 1.8% faster (P < 0.05) for the traditional cleat placement. There are no beneficial effects of an aft cleat position on subsequent running in a short distance triathlon.
Development and validation of the European Cluster Assimilation Techniques run libraries
NASA Astrophysics Data System (ADS)
Facskó, G.; Gordeev, E.; Palmroth, M.; Honkonen, I.; Janhunen, P.; Sergeev, V.; Kauristie, K.; Milan, S.
2012-04-01
The European Commission funded the European Cluster Assimilation Techniques (ECLAT) project as a collaboration of five leading European universities and research institutes. A main contribution of the Finnish Meteorological Institute (FMI) is to provide a wide range of global MHD runs with the Grand Unified Magnetosphere Ionosphere Coupling simulation (GUMICS). The runs are divided into two categories: synthetic runs investigating the range of solar wind drivers that can influence magnetospheric dynamics, and dynamic runs using measured solar wind data as input. Here we consider the first set of runs, with synthetic solar wind input. The solar wind density, velocity, and interplanetary magnetic field had different magnitudes and orientations; furthermore, two F10.7 flux values were selected for solar radiation minimum and maximum. The solar wind parameter values were held constant so that a stable solution was achieved. All configurations were run several times with three different tilt angles (-15°, 0°, +15°) in the GSE X-Z plane. The results of the 192 simulations, the so-called "synthetic run library", were visualized and uploaded to the FMI homepage after validation. Here we present details of these runs.
Development of the CELSS emulator at NASA. Johnson Space Center
NASA Technical Reports Server (NTRS)
Cullingford, Hatice S.
1990-01-01
The Closed Ecological Life Support System (CELSS) Emulator is under development. It will be used to investigate computer simulations of integrated CELSS operations involving humans, plants, and process machinery. Described here is Version 1.0 of the CELSS Emulator that was initiated in 1988 on the Johnson Space Center (JSC) Multi Purpose Applications Console Test Bed as the simulation framework. The run module of the simulation system now contains a CELSS model called BLSS. The CELSS Emulator makes it possible to generate model data sets, store libraries of results for further analysis, and display plots of model variables as a function of time. The progress of the project is presented with sample test runs and simulation display pages.
GPU Particle Tracking and MHD Simulations with Greatly Enhanced Computational Speed
NASA Astrophysics Data System (ADS)
Ziemba, T.; O'Donnell, D.; Carscadden, J.; Cash, M.; Winglee, R.; Harnett, E.
2008-12-01
GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude more computing speed than CPU-based systems, at less cost than a high-end workstation. Recent advancements in GPU technologies allow full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should shorten the time to discovery. 3-D particle tracking and MHD codes have been developed using NVIDIA's CUDA and have demonstrated speed-ups of nearly a factor of 20 over equivalent CPU versions of the codes. Such a speed-up enables new applications, including real-time running of radiation belt simulations and real-time running of global magnetospheric simulations, both of which could provide important space weather prediction tools.
Master of Puppets: Cooperative Multitasking for In Situ Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Lukic, Zarija
2016-01-01
Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. Here, we present a novel design for running multiple codes in situ: using coroutines and position-independent executables we enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. We present Henson, an implementation of our design, and illustrate its versatility by tackling analysis tasks with different computational requirements. This design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The techniques we present can also be integrated into other in situ frameworks.
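The cooperative-multitasking idea can be illustrated with ordinary Python generators: the simulation yields control (and its in-memory state) to the analysis after every step, so no time step ever touches disk. Henson itself achieves this with coroutines and position-independent executables in compiled codes; the sketch below is only an analogy.

    def simulation(n_steps):
        state = 0.0
        for step in range(n_steps):
            state += 1.0            # advance the model by one time step
            yield step, state       # hand control (and state) to analysis

    def analyze(stream):
        for step, state in stream:
            # The data is still in memory: no files are written between codes.
            print(f"step {step}: field sum = {state:.2f}")

    analyze(simulation(5))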
Modeling a maintenance simulation of the geosynchronous platform
NASA Technical Reports Server (NTRS)
Kleiner, A. F., Jr.
1980-01-01
A modeling technique used to conduct a simulation study comparing various maintenance routines for a space platform is discussed. A system model is described and illustrated, the basic concepts of a simulation pass are detailed, and sections on failures and maintenance are included. The operation of the system across time is best modeled by a discrete-event approach with two basic events: failure and maintenance of the system. Each overall simulation run consists of introducing a particular model of the physical system, together with a maintenance policy, demand function, and mission lifetime. The system is then run through many passes, each pass corresponding to one mission, and the model is re-initialized before each pass. Statistics are compiled at the end of each pass, and after the last pass a report is printed. Items of interest typically include the time to first maintenance, the total number of maintenance trips for each pass, the average capability of the system, etc.
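A minimal sketch of this pass structure in Python, with exponentially distributed failures and fixed-interval maintenance visits as illustrative assumptions (the paper's actual policies and distributions are not specified here):

    import random

    def one_pass(mission_lifetime, mtbf, visit_interval, rng):
        # One pass = one mission: advance from event to event until the
        # mission lifetime is exceeded, counting both event types.
        failures, visits = 0, 0
        next_failure = rng.expovariate(1.0 / mtbf)
        next_visit = visit_interval
        while min(next_failure, next_visit) < mission_lifetime:
            if next_failure < next_visit:          # next event: failure
                failures += 1
                next_failure += rng.expovariate(1.0 / mtbf)
            else:                                  # next event: maintenance
                visits += 1
                next_visit += visit_interval
        return failures, visits

    rng = random.Random(42)
    passes = [one_pass(8760.0, 1200.0, 2190.0, rng) for _ in range(1000)]
    print("mean failures per mission:",
          sum(f for f, _ in passes) / len(passes))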
Massively parallel algorithms for trace-driven cache simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.
1991-01-01
Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If x_t is not present in the cache at the t-th instant, it is said to be a miss and is loaded into the cache set, possibly forcing the replacement of some other memory line, and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy which, regardless of the set size C, runs in time O(log N) using N processors on the exclusive-read, exclusive-write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
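For reference, the sequential computation that these parallel algorithms accelerate is the straightforward scan sketched below in Python; the paper's contribution is performing this in O(log N) parallel time, which this sequential version does not attempt.

    from collections import OrderedDict

    def lru_misses(trace, C):
        cache = OrderedDict()          # keys kept in least- to most-recent order
        misses = 0
        for x in trace:
            if x in cache:
                cache.move_to_end(x)   # hit: refresh recency
            else:
                misses += 1            # miss: load x, evicting the LRU line
                if len(cache) == C:
                    cache.popitem(last=False)
                cache[x] = True
        return misses

    print(lru_misses([1, 2, 3, 1, 4, 1, 2], C=2))  # -> 6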
NASA Astrophysics Data System (ADS)
Tavakkol, Sasan; Lynett, Patrick
2017-08-01
In this paper, we introduce an interactive coastal wave simulation and visualization software package called Celeris. Celeris is open source software that needs minimal preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications, and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.
Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I
NASA Astrophysics Data System (ADS)
Gonthier, David L.; Veron, Harry
1998-04-01
A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application is capable of running in stealth mode or as a player that includes battlefield simulations, such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings, and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, thus providing high-quality, fast graphics. A complete PC system can be configured for under $2,500. The software runs under Windows 95 and Windows NT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay, with joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. RenderWare and the Righteous 3D graphics board from Orchid Technologies provide an advertised rendering rate of up to 2 million texture-mapped triangles per second. A low-cost PC DIS simulator that can partake in a real-time collaborative simulation with other platforms is thus achieved.
DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes
NASA Astrophysics Data System (ADS)
Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.
2008-12-01
A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output of unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest, thus building up a combination of time- and ensemble-averaged sampling data. The particle data are regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated, and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results from the interaction of a Mach 4 shock with a square cavity and from the interaction of a Mach 12 shock with a wedge in a channel.
CMacIonize: Monte Carlo photoionisation and moving-mesh radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Vandenbroucke, Bert; Wood, Kenneth
2018-02-01
CMacIonize simulates the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code and also as a moving-mesh code.
Visualization and Tracking of Parallel CFD Simulations
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kremenetsky, Mark
1995-01-01
We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation running AVS is handled by CM/AVS. Partitioning of the visualization task between the CM-5 and the workstation can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.
Cai, Jian-liang; Zhang, Yi; Sun, Guo-feng; Li, Ning-chen; Zhang, Xiang-hua; Na, Yan-qun
2012-12-01
To investigate the value of a laparoscopic virtual reality simulator in training trainees' laparoscopic suturing ability. After finishing virtual reality training in basic laparoscopic skills, 26 trainees were divided randomly into 2 groups: one group undertook advanced laparoscopic skill (suture technique) training with a laparoscopic virtual reality simulator (virtual group), the other used a laparoscopic box trainer (box group). Using our homemade simulation models, before grouping and after training, every trainee performed a nephropyeloureterostomy under laparoscopy; the running time, anastomosis quality, and proficiency were recorded and assessed. For the virtual group, the running time and the anastomosis quality and proficiency scores before grouping were (98 ± 11) minutes, 3.20 ± 0.41, and 3.47 ± 0.64, respectively; after training they were (53 ± 8) minutes, 6.87 ± 0.74, and 6.33 ± 0.82, respectively; all the differences were statistically significant (all P < 0.01). In the box group, the values before grouping were (98 ± 10) minutes, 3.17 ± 0.39, and 3.42 ± 0.67, respectively; after training they were (52 ± 9) minutes, 6.08 ± 0.90, and 6.33 ± 0.78, respectively; all the differences were also statistically significant (all P < 0.01). After training, the running time and proficiency scores of the virtual group were similar to those of the box group (all P > 0.05); however, the anastomosis quality scores in the virtual group were higher than in the box group (P = 0.02). The laparoscopic virtual reality simulator is better than the traditional box trainer for advanced laparoscopic suturing training.
ERIC Educational Resources Information Center
Glazier, Rebecca A.
2011-01-01
Despite the growing availability and popularity of simulations and other active teaching techniques, many instructors may be deterred from using simulations because of the potentially high costs involved. Instructors could spend a preponderance of their time and resources developing and executing simulations, but such an approach is not necessary.…
Red-light running violation prediction using observational and simulator data.
Jahangiri, Arash; Rakha, Hesham; Dingus, Thomas A
2016-11-01
In the United States, 683 people were killed and an estimated 133,000 were injured in crashes due to running red lights in 2012. To help prevent/mitigate crashes caused by running red lights, these violations need to be identified before they occur, so both the road users (i.e., drivers, pedestrians, etc.) in potential danger and the infrastructure can be notified and actions can be taken accordingly. Two different data sets were used to assess the feasibility of developing red-light running (RLR) violation prediction models: (1) observational data and (2) driver simulator data. Both data sets included common factors, such as time to intersection (TTI), distance to intersection (DTI), and velocity at the onset of the yellow indication. However, the observational data set provided additional factors that the simulator data set did not, and vice versa. The observational data included vehicle information (e.g., speed, acceleration, etc.) for several different time frames. For each vehicle approaching an intersection in the observational data set, required data were extracted from several time frames as the vehicle drew closer to the intersection. However, since the observational data were inherently anonymous, driver factors such as age and gender were unavailable in the observational data set. Conversely, the simulator data set contained age and gender. In addition, the simulator data included a secondary (non-driving) task factor and a treatment factor (i.e., incoming/outgoing calls while driving). The simulator data only included vehicle information for certain time frames (e.g., yellow onset); the data did not provide vehicle information for several different time frames while vehicles were approaching an intersection. In this study, the random forest (RF) machine-learning technique was adopted to develop RLR violation prediction models. Factor importance was obtained for different models and different data sets to show how differently the factors influence the performance of each model. A sensitivity analysis showed that the factor importance to identify RLR violations changed when data from different time frames were used to develop the prediction models. TTI, DTI, the required deceleration parameter (RDP), and velocity at the onset of a yellow indication were among the most important factors identified by both models constructed using observational data and simulator data. Furthermore, in addition to the factors obtained from a point in time (i.e., yellow onset), valuable information suitable for RLR violation prediction was obtained from defined monitoring periods. It was found that period lengths of 2-6m contributed to the best model performance.
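As an illustration of the modeling approach described (not the authors' code), the sketch below trains a random forest on synthetic vehicle-approach records with the factors named above; scikit-learn is assumed to be available, and all data, ranges, and thresholds are invented for demonstration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.uniform(1, 6, n),      # TTI (s)
        rng.uniform(10, 120, n),   # DTI (m)
        rng.uniform(0, 0.6, n),    # RDP
        rng.uniform(8, 25, n),     # velocity at yellow onset (m/s)
    ])
    # Synthetic labels: short TTI plus high speed makes a violation likelier.
    y = (X[:, 0] < 2.5) & (X[:, 3] > 15)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))
    print("factor importance:", clf.feature_importances_)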
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janjusic, Tommy; Kartsaklis, Christos
Application analysis is facilitated through a number of program profiling tools. The tools vary in their complexity, ease of deployment, design, and profiling detail. Specifically, understanding, analyzing, and optimizing are of particular importance for scientific applications, where minor changes in code paths and data-structure layout can have profound effects. Understanding how intricate data structures are accessed and how a given memory system responds is a complex task. In this paper we describe a trace profiling tool, Glprof, specifically aimed at lessening the burden on the programmer of pin-pointing heavily involved data structures during an application's run time and understanding data-structure run-time usage. Moreover, we showcase the tool's modularity using additional cache simulation components. We elaborate on the tool's design and features. Finally, we demonstrate the application of our tool in the context of Spec benchmarks using the Glprof profiler and two concurrently running cache simulators, PPC440 and AMD Interlagos.
Khowailed, Iman Akef; Petrofsky, Jerrold; Lohman, Everett; Daher, Noha
2015-01-01
Background The aim of this study was to examine the effects of a 6-week training program of simulated barefoot running (SBR) on running kinetics in habitually shod (shoe-wearing) female recreational runners. Material/Methods Twelve female runners aged 25.7±3.4 years gradually increased running distance in Vibram FiveFingers minimal shoes over a 6-week period. Kinetic analysis of treadmill running at 10 km/h was performed pre- and post-intervention in shod running, non-habituated SBR, and habituated SBR conditions. Spatiotemporal parameters, ground reaction force components, and electromyography (EMG) were measured in all conditions. Results Post-intervention data indicated a significant decrease across time in habituated SBR for EMG activity of the tibialis anterior (TA) in the pre-activation and absorptive phases of running (P<0.001). A significant increase was observed in the pre-activation amplitude of the gastrocnemius (GAS) between shod running, non-habituated SBR, and habituated SBR. Six weeks of SBR was associated with a significant decrease in loading rates and impact forces. Additionally, SBR significantly decreased stride length, step duration, and flight time, and stride frequency was significantly higher compared to shod running. Conclusions The findings of this study indicate that changes in motor patterns in previously habitually shod runners are possible and can be accomplished within 6 weeks. Non-habituated SBR did not show a significant neuromuscular adaptation in the EMG activity of TA and GAS as manifested after 6 weeks of habituated SBR. PMID:26166443
Quantitative Spectral Radiance Measurements in the HYMETS Arc Jet
NASA Technical Reports Server (NTRS)
Danehy, Paul M.; Hires, Drew V.; Johansen, Craig T.; Bathel, Brett F.; Jones, Stephen B.; Gragg, Jeffrey G.; Splinter, Scott C.
2012-01-01
Calibrated spectral radiance measurements of gaseous emission spectra have been obtained from the HYMETS (Hypersonic Materials Environmental Test System) 400 kW arc-heated wind tunnel at NASA Langley Research Center. A fiber-optic coupled spectrometer collected natural luminosity from the flow. Spectral radiance measurements are reported between 340 and 1000 nm. Both Silicon Carbide (SiC) and Phenolic Impregnated Carbon Ablator (PICA) samples were placed in the flow. Test gases studied included a mostly-N2 atmosphere (95% nitrogen, 5% argon), a simulated Earth Air atmosphere (75% nitrogen, 20% oxygen, 5% argon) and a simulated Martian atmosphere (71% carbon dioxide, 24% nitrogen, 5% argon). The bulk enthalpy of the flow was varied as was the location of the measurement. For the intermediate flow enthalpy tested (20 MJ/kg), emission from the Mars simulant gas was about 10 times higher than the Air flow and 15 times higher than the mostly-N2 atmosphere. Shock standoff distances were estimated from the spectral radiance measurements. Within-run, run-to-run and day-to-day repeatability of the emission were studied, with significant variations (15-100%) noted.
HAL/S-360 compiler system specification
NASA Technical Reports Server (NTRS)
Johnson, A. E.; Newbold, P. N.; Schulenberg, C. W.; Avakian, A. E.; Varga, S.; Helmers, P. H.; Helmers, C. T., Jr.; Hotz, R. L.
1974-01-01
A three-phase language compiler is described which produces IBM 360/370-compatible object modules and a set of simulation tables to aid in run-time verification. A link-edit step augments the standard OS linkage editor. A comprehensive run-time system and library provide the HAL/S operating environment, error handling, a pseudo-real-time executive, and an extensive set of mathematical, conversion, I/O, and diagnostic routines. The specifications of the information flow and content for this system are also considered.
High-speed GPU-based finite element simulations for NDT
NASA Astrophysics Data System (ADS)
Huthwaite, P.; Shi, F.; Van Pamel, A.; Lowe, M. J. S.
2015-03-01
The finite element method solved with explicit time increments is a general approach which can be applied to many ultrasound problems. It is widely used as a powerful tool within NDE for developing and testing inspection techniques, and can also be used in inversion processes. However, the solution technique is computationally intensive, requiring many calculations to be performed for each simulation, so traditionally speed has been an issue. For maximum speed, an implementation of the method, called Pogo [Huthwaite, J. Comp. Phys. 2014, doi: 10.1016/j.jcp.2013.10.017], has been developed to run on graphics cards, exploiting the highly parallelisable nature of the algorithm. Pogo typically demonstrates speed improvements of 60-90x over commercial CPU alternatives. Pogo is applied to three NDE examples, where the speed improvements are important: guided wave tomography, where a full 3D simulation must be run for each source transducer and every different defect size; scattering from rough cracks, where many simulations need to be run to build up a statistical model of the behaviour; and ultrasound propagation within coarse-grained materials where the mesh must be highly refined and many different cases run.
Electromagnetic Simulations for Aerospace Application Final Report CRADA No. TC-0376-92
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madsen, N.; Meredith, S.
Electromagnetic (EM) simulation tools play an important role in the design cycle, allowing optimization of a design before it is fabricated for testing. The purpose of this cooperative project was to provide Lockheed with state-of-the-art electromagnetic (EM) simulation software that will enable the optimal design of the next generation of low-observable (LO) military aircraft through the VHF regime. More particularly, the project was principally code development and validation, its goal to produce a 3-D, conforming-grid, time-domain (TD) EM simulation tool, consisting of a mesh generator, a DS13D-based simulation kernel, and an RCS postprocessor, which was useful in the optimization of LO aircraft, both for full-aircraft simulations run on a massively parallel computer and for small-scale problems run on a UNIX workstation.
Time and Space Partitioning the EagleEye Reference Mission
NASA Astrophysics Data System (ADS)
Bos, Victor; Mendham, Peter; Kauppinen, Panu; Holsti, Niklas; Crespo, Alfons; Masmano, Miguel; de la Puente, Juan A.; Zamorano, Juan
2013-08-01
We discuss experiences gained by porting a Software Validation Facility (SVF) and a satellite Central Software (CSW) to a platform with support for Time and Space Partitioning (TSP). The SVF and CSW are part of the EagleEye Reference mission of the European Space Agency (ESA). As a reference mission, EagleEye is a perfect candidate to evaluate practical aspects of developing satellite CSW for and on TSP platforms. The specific TSP platform we used consists of a simulated LEON3 CPU controlled by the XtratuM separation micro-kernel. On top of this, we run five separate partitions. Each partition runs its own real-time operating system or Ada run-time kernel, which in turn are running the application software of the CSW. We describe issues related to partitioning; inter-partition communication; scheduling; I/O; and fault-detection, isolation, and recovery (FDIR).
Real-time simulation of an automotive gas turbine using the hybrid computer
NASA Technical Reports Server (NTRS)
Costakis, W.; Merrill, W. C.
1984-01-01
A hybrid computer simulation of an Advanced Automotive Gas Turbine Powertrain System is reported. The system consists of a gas turbine engine, an automotive drivetrain with a four-speed automatic transmission, and a control system. Generally, dynamic performance is simulated on the analog portion of the hybrid computer, while most of the steady-state performance characteristics are calculated to run faster than real time, making this simulation a useful tool for a variety of analytical studies.
Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?
NASA Astrophysics Data System (ADS)
Lin, Guangxing; Wan, Hui; Zhang, Kai; Qian, Yun; Ghan, Steven J.
2016-09-01
Efficient simulation strategies are crucial for the development and evaluation of high-resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrating examples: the convection relaxation time scale (TAU), and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity of the constrained simulations depends on the detailed implementation of nudging and on the mechanism through which the perturbed parameter affects precipitation and cloud. The relative computational costs of nudged and free-running simulations are determined by the magnitude of internal variability in the physical quantities of interest, as well as the magnitude of the parameter perturbation. In the case of a strong perturbation in convection, temperature and/or wind nudging with a 6 h relaxation time scale leads to nonnegligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while 1-year free-running simulations can satisfactorily capture the annual mean precipitation and cloud forcing sensitivities. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1-year free-running simulations are strongly affected by natural noise, while nudging winds effectively reduces the noise and reasonably reproduces the sensitivities. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.
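For readers unfamiliar with nudging, the standard Newtonian-relaxation form (an assumption about the general technique, not a quotation of CAM5's implementation) adds a relaxation term to each nudged prognostic field X:

    \frac{\partial X}{\partial t} = F(X) + \frac{X_{\mathrm{ref}} - X}{\tau}

Here F(X) is the model's own tendency, X_ref is the reference field (e.g., analysis or a baseline run), and τ is the relaxation time scale, 6 h in the experiments above; a smaller τ constrains the simulation more tightly.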
Durham extremely large telescope adaptive optics simulation platform.
Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard
2007-03-01
Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. The simulation of adaptive optics systems is therefore necessary to categorize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control while the simulation is running. The results from the simulation of a ground layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.
Antonioletti, Mario; Biktashev, Vadim N; Jackson, Adrian; Kharche, Sanjay R; Stary, Tomas; Biktasheva, Irina V
2017-01-01
The BeatBox simulation environment combines a flexible script-language user interface with robust computational tools in order to set up cardiac electrophysiology in-silico experiments without low-level re-coding: cell excitation, tissue/anatomy models, and stimulation protocols may be included in a BeatBox script, and the simulation can be run either sequentially or in parallel (MPI) without re-compilation. BeatBox is free software written in C that runs on Unix-based platforms. It provides the whole spectrum of multi-scale tissue modelling, from 0-dimensional individual-cell simulation through 1-dimensional fibres, 2-dimensional sheets, and 3-dimensional slabs of tissue, up to anatomically realistic whole-heart simulations, with run-time measurements including cardiac re-entry tip/filament tracing, ECG, and local/global samples of any variables. The repositories of BeatBox solvers, cell models, and tissue/anatomy models are extended via robust and flexible interfaces, thus providing an open framework for new developments in the field. In this paper we give an overview of the current state of BeatBox, together with a description of the main computational methods and MPI parallelisation approaches.
Reliable results from stochastic simulation models
Donald L., Jr. Gochenour; Leonard R. Johnson
1973-01-01
Development of a computer simulation model is usually done without fully considering how long the model should run (e.g., computer time) before the results are reliable. However, construction of confidence intervals (CI) about critical output parameters from the simulation model makes it possible to determine the point where model results are reliable. If the results are...
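A minimal sketch of that idea, assuming a 95% normal-approximation interval and an illustrative stopping threshold (the article's own procedure is truncated above): replications are added until the CI half-width on the output mean falls below the target.

    import random
    import statistics

    def run_until_reliable(model, target_halfwidth, z=1.96, max_runs=100000):
        samples = [model(), model()]
        while True:
            mean = statistics.fmean(samples)
            half = z * statistics.stdev(samples) / len(samples) ** 0.5
            if half <= target_halfwidth or len(samples) >= max_runs:
                return mean, half, len(samples)
            samples.append(model())   # results not yet reliable: keep running

    rng = random.Random(7)
    mean, half, n = run_until_reliable(lambda: rng.gauss(100, 15), 1.0)
    print(f"mean = {mean:.1f} +/- {half:.1f} after {n} runs")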
Development of the CELSS Emulator at NASA JSC
NASA Technical Reports Server (NTRS)
Cullingford, Hatice S.
1989-01-01
The Controlled Ecological Life Support System (CELSS) Emulator is under development at the NASA Johnson Space Center (JSC) with the purpose of investigating computer simulations of integrated CELSS operations involving humans, plants, and process machinery. This paper describes Version 1.0 of the CELSS Emulator that was initiated in 1988 on the JSC Multi Purpose Applications Console Test Bed as the simulation framework. The run module of the simulation system now contains a CELSS model called BLSS. The CELSS Emulator makes it possible to generate model data sets, store libraries of results for further analysis, and also display plots of model variables as a function of time. The progress of the project is presented with sample test runs and simulation display pages.
BaHaMAS A Bash Handler to Monitor and Administrate Simulations
NASA Astrophysics Data System (ADS)
Sciarra, Alessandro
2018-03-01
Numerical QCD is often extremely resource-demanding, and it is not rare to run hundreds of simulations at the same time. Each of these can last for days or even months, and each typically requires a job-script file as well as an input file with the physical parameters for the application to be run. Moreover, some monitoring operations (e.g., copying, moving, deleting, or modifying files, and resuming crashed jobs) are often required to guarantee that the final statistics are correctly accumulated. Handling simulations manually is probably the most error-prone approach, and it is uncomfortable and inefficient. BaHaMAS was developed and has been successfully used in recent years as a tool to automatically monitor and administrate simulations.
12 weeks of simulated barefoot running changes foot-strike patterns in female runners.
McCarthy, C; Fleming, N; Donne, B; Blanksby, B
2014-05-01
To investigate the effect of a transition program of simulated barefoot running (SBR) on running kinematics and foot-strike patterns, female recreational athletes (n=9, age 29 ± 3 yrs) without SBR experience gradually increased running distance in Vibram FiveFingers SBR footwear over 12 weeks. Matched controls (n=10, age 30 ± 4 yrs) continued running in standard footwear. A 3-D motion analysis of treadmill running at 12 km/h was performed by both groups, barefoot and shod, pre- and post-intervention. Post-intervention data indicated a more forefoot strike pattern in the SBR group compared to controls, both running barefoot (P>0.05) and shod (P<0.001). When assessed barefoot, there were significant kinematic differences across time in the SBR group for ankle flexion angle at toe-off (P<0.01). When assessed shod, significant kinematic changes occurred across time for ankle flexion angles at foot-strike (P<0.001) and toe-off (P<0.01), and for range of motion (ROM) in the absorptive phase of stance (P<0.01). A knee effect was recorded in the SBR group for flexion ROM in the absorptive phase of stance (P<0.05). No significant changes occurred in controls. Therefore, a 12-week transition program in SBR could assist athletes seeking a more forefoot strike pattern and "barefoot" kinematics, regardless of preferred footwear.
Investigation on the Practicality of Developing Reduced Thermal Models
NASA Technical Reports Server (NTRS)
Lombardi, Giancarlo; Yang, Kan
2015-01-01
Throughout the spacecraft design and development process, detailed instrument thermal models are created to simulate their on-orbit behavior and to ensure that they do not exceed any thermal limits. These detailed models, while generating highly accurate predictions, can lead to long simulation run times, especially when integrated with a spacecraft observatory model. Therefore, reduced models containing less detail are typically produced in tandem with the detailed models so that results may be more readily available, albeit less accurate. In the current study, both reduced and detailed instrument models are integrated with their associated spacecraft bus models to examine the impact of instrument model reduction on run time and accuracy. Preexisting instrument/bus thermal model pairs from several projects were used to determine trends between detailed and reduced thermal models: namely, the Mirror Optical Bench (MOB) on the Gravity and Extreme Magnetism Small Explorer (GEMS) spacecraft, the Advanced Topography Laser Altimeter System (ATLAS) on the Ice, Cloud, and land Elevation Satellite 2 (ICESat-2), and the Neutral Mass Spectrometer (NMS) on the Lunar Atmosphere and Dust Environment Explorer (LADEE). Hot and cold cases were run for each model to capture the behavior of the models at both thermal extremes. It was found that, though decreasing the number of nodes from a detailed to a reduced model shortened the run time, the savings were not large, nor was the relationship between the percentage of nodes reduced and the time saved linear. However, significant losses in accuracy were observed with greater model reduction. While reduced models are useful in decreasing run time, there exists a threshold of reduction beyond which the loss in accuracy outweighs the benefit of reduced runtime.
Optimizing Scientist Time through In Situ Visualization and Analysis.
Patchett, John; Ahrens, James
2018-01-01
In situ processing produces reduced-size persistent representations of a simulation's state while the simulation is running. The need for in situ visualization and data analysis is usually described in terms of supercomputer size and performance relative to available storage size.
NASA Astrophysics Data System (ADS)
Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia
The Time Warp algorithm [3] offers a run-time recovery mechanism that deals with causality errors. This mechanism consists of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT, which is used in discrete-event simulation to reclaim memory, commit output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be dealt with efficiently by Samadi's algorithm [8], which works fine in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in GVT computation. Both algorithms give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly with the optimistic algorithm not only minimizes the GVT latency but also minimizes the processor idle time.
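To illustrate why a tree-structured barrier helps, the sketch below shows the combining pattern of a tree reduction applied to GVT estimation: each processor's local minimum (of its local virtual time and unacknowledged message timestamps) flows up a binary tree, and the global minimum is then broadcast back down. This is a toy serial rendering of the reduction pattern, not the paper's implementation.

    def gvt_tree_reduce(local_mins):
        level = list(local_mins)
        while len(level) > 1:
            # Pair neighbours; an odd element passes through to the next level.
            nxt = [min(level[i], level[i + 1])
                   for i in range(0, len(level) - 1, 2)]
            if len(level) % 2:
                nxt.append(level[-1])
            level = nxt
        return level[0]   # GVT estimate, broadcast back down in a real system

    print(gvt_tree_reduce([12.5, 9.0, 14.2, 10.1, 8.7]))  # -> 8.7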
Object oriented design (OOD) in real-time hardware-in-the-loop (HWIL) simulations
NASA Astrophysics Data System (ADS)
Morris, Joe; Richard, Henri; Lowman, Alan; Youngren, Rob
2006-05-01
Using Object Oriented Design (OOD) concepts in AMRDEC's Hardware-in-the-Loop (HWIL) real-time simulations allows the user to interchange parts of the simulation to meet test requirements. A large-scale three-spectral-band simulator, connected via a high-speed reflective memory ring for time-critical data transfers and to PC controllers by non-real-time Ethernet protocols, is used to separate software objects into logical entities close to their respective controlled hardware. Each standalone object does its own dynamic initialization, real-time processing, and end-of-run processing; therefore it can be easily maintained and updated. A Resource Allocation Program (RAP) is also utilized, along with a device table, to allocate, organize, and document the communication protocol between the software and hardware components. A GUI display program lists all allocations and deallocations of HWIL memory and hardware resources. This interactive program is also used to clean up defunct allocations of dead processes. Three examples are presented using the OOD and RAP concepts. The first is the control of an ACUTRONICS-built three-axis flight table using the same control software for calibration and real-time functions. The second is the transportability of a six-degree-of-freedom (6-DOF) simulation from an Onyx to a Linux PC. The third is the replacement of the 6-DOF simulation with a replay program that drives the facility with archived run data for demonstration or analysis purposes.
Visual Elements in Flight Simulation
1975-07-01
control. In consequence, current efforts to create appropriate visual simulations run the gamut from efforts toward almost complete replication of the...
Robotics On-Board Trainer (ROBoT)
NASA Technical Reports Server (NTRS)
Johnson, Genevieve; Alexander, Greg
2013-01-01
ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS 4.5 Linux operating system. The JEMRMS simulation software includes real-time, hardware-in-the-loop (HIL) dynamics, manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete-time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting-vehicle analysis and training. The scene generation software uses DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop under the CentOS 4.5 Linux operating system. DOUG is an OpenGL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides real-time views of various station and shuttle configurations.
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to calculate rapidly diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
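The core of single-run rescaling for absorption can be shown in a few lines: one baseline simulation records each detected photon's total path length, and the reflectance for any absorption coefficient mu_a is then a Beer-Lambert reweighting of the stored paths (rescaling for changes in scattering is more involved and omitted). The sketch below uses synthetic path lengths and plain NumPy, not the authors' MATLAB/CUDA stack, and assumes for simplicity that every launched photon is detected.

    import numpy as np

    # Synthetic detected-photon path lengths from one baseline
    # (absorption-free) simulation; a real run would record these.
    rng = np.random.default_rng(0)
    n_launched = 100000
    path_lengths_cm = rng.exponential(1.0, n_launched)

    def diffuse_reflectance(mu_a_per_cm):
        # Beer-Lambert reweighting: each photon contributes exp(-mu_a * L).
        return np.exp(-mu_a_per_cm * path_lengths_cm).sum() / n_launched

    for mu_a in (0.1, 0.5, 1.0):
        print(mu_a, diffuse_reflectance(mu_a))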
Accelerating sino-atrium computer simulations with graphic processing units.
Zhang, Hong; Xiao, Zheng; Lin, Shien-fong
2015-01-01
Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed in a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partition. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased 62% with respect to a serial program running on CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
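Operator splitting, mentioned above as the basis for parallelization, advances the stiff per-cell reaction ODEs separately from the diffusive coupling in each time step; the reaction substep is independent per cell, which is what maps well to GPU threads. A toy 1-D rendering with made-up kinetics (not the actual SANC model):

    import numpy as np

    def step(v, dt=0.01, D=0.5):
        # Reaction substep: independent per cell, hence GPU-friendly.
        v = v + dt * (v - v ** 3 / 3.0)        # toy cubic kinetics
        # Diffusion substep: 1-D Laplacian with no-flux boundaries.
        lap = np.empty_like(v)
        lap[1:-1] = v[2:] - 2.0 * v[1:-1] + v[:-2]
        lap[0], lap[-1] = v[1] - v[0], v[-2] - v[-1]
        return v + dt * D * lap

    v = np.zeros(500)
    v[:25] = 1.0                               # stimulate one end of the fibre
    for _ in range(2000):
        v = step(v)
    print("peak potential:", v.max())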
An algorithm for fast elastic wave simulation using a vectorized finite difference operator
NASA Astrophysics Data System (ADS)
Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna
2018-07-01
Modern geophysical imaging techniques exploit the full wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, a big derivative stencil and a huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we have introduced a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme, thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (Marmousi model), by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and nearly 100 for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step, which depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
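The gain comes from replacing a per-node loop with whole-array shifts. A minimal sketch in Python with NumPy of a fourth-order staggered-grid first derivative, updating all interior nodes in one vectorized expression (the stencil coefficients are the standard fourth-order staggered values, not taken from the paper):

    import numpy as np

    C1, C2 = 9.0 / 8.0, -1.0 / 24.0   # standard 4th-order staggered weights

    def dx_staggered(f, dx):
        """Derivative of f evaluated half a cell to the right, computed
        for all interior nodes at once instead of node by node."""
        d = np.zeros_like(f)
        d[1:-2] = (C1 * (f[2:-1] - f[1:-2]) + C2 * (f[3:] - f[:-3])) / dx
        return d

    x = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
    dx = x[1] - x[0]
    approx = dx_staggered(np.sin(x), dx)
    exact = np.cos(x + dx / 2.0)      # derivative at the shifted points
    print(np.abs(approx[1:-2] - exact[1:-2]).max())  # small, 4th-order error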
NASA Astrophysics Data System (ADS)
KIM, J.; Smith, M. B.; Koren, V.; Salas, F.; Cui, Z.; Johnson, D.
2017-12-01
The National Oceanic and Atmospheric Administration (NOAA)-National Weather Service (NWS) developed the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) framework as an initial step towards spatially distributed modeling at River Forecast Centers (RFCs). Recently, the NOAA/NWS worked with the National Center for Atmospheric Research (NCAR) to implement the National Water Model (NWM) for nationally consistent water resources prediction. The NWM is based on the WRF-Hydro framework and is run at a 1-km spatial resolution and 1-hour time step over the contiguous United States (CONUS) and contributing areas in Canada and Mexico. In this study, we compare streamflow simulations from HL-RDHM and WRF-Hydro to observations from 279 USGS stations. For streamflow simulations, HL-RDHM is run on 4-km grids at a temporal resolution of 1 hour for a 5-year period (Water Years 2008-2012), using a priori parameters provided by NOAA-NWS. The WRF-Hydro streamflow simulations for the same time period are extracted from NCAR's 23-year retrospective run of the NWM (version 1.0) over CONUS based on 1-km grids. We choose 279 USGS stations that are relatively less affected by dams or reservoirs, in the domains of six different RFCs. We use the daily average values of simulations and observations for the convenience of comparison. The main purpose of this research is to evaluate how HL-RDHM and WRF-Hydro perform at USGS gauge stations. We compare daily time series of observations and both simulations, and calculate the error values using a variety of error functions. Using these plots and error values, we evaluate the performances of the HL-RDHM and WRF-Hydro models. Our results show a mix of model performance across geographic regions.
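Typical error functions for this kind of daily streamflow comparison include the root-mean-square error and the Nash-Sutcliffe efficiency; the abstract does not list the exact functions used, so the sketch below (Python/NumPy, with synthetic stand-in flows) should be read as illustrative only:

    import numpy as np

    def rmse(sim, obs):
        return np.sqrt(((sim - obs) ** 2).mean())

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse
        than simply predicting the observed mean."""
        return 1.0 - ((sim - obs) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

    rng = np.random.default_rng(1)
    daily_obs = rng.gamma(2.0, 5.0, size=365)            # stand-in flows
    daily_sim = daily_obs * rng.normal(1.0, 0.15, 365)   # stand-in model
    print(rmse(daily_sim, daily_obs), nse(daily_sim, daily_obs))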
A Fast-Time Simulation Tool for Analysis of Airport Arrival Traffic
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Meyn, Larry A.; Neuman, Frank
2004-01-01
The basic objective of arrival sequencing in air traffic control automation is to match traffic demand and airport capacity while minimizing delays. The performance of an automated arrival scheduling system, such as the Traffic Management Advisor developed by NASA for the FAA, can be studied by a fast-time simulation that does not involve running expensive and time-consuming real-time simulations. The fast-time simulation models runway configurations, the characteristics of arrival traffic, deviations from predicted arrival times, as well as the arrival sequencing and scheduling algorithm. This report reviews the development of the fast-time simulation method used originally by NASA in the design of the sequencing and scheduling algorithm for the Traffic Management Advisor. The utility of this method of simulation is demonstrated by examining the effect on delays of altering arrival schedules at a hub airport.
NASA Astrophysics Data System (ADS)
Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.
2014-08-01
The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems, from the top of the atmosphere down to the control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase screen generation methods that tax the computational and memory capacities of the platforms on which they run. This places limits on parameters of a simulation, such as exposure time or resolution, thus compromising its utility. As aperture sizes and fields of view increase, the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computational resources and allows for variability in the power contained in the frozen-flow or stochastic components of the atmosphere. Users have the flexibility of generating atmosphere datacubes in advance of runs, where memory constraints allow, to save on computation time, or of computing the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.
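A common form of such an autoregressive update works per Fourier mode: frozen flow becomes a deterministic phase ramp, and a magnitude slightly below one injects a stochastic "boiling" component re-fed from the Kolmogorov spectrum. A rough Python/NumPy sketch along those lines (the scaling constants are illustrative, not the paper's calibration):

    import numpy as np

    n, dx = 64, 0.05              # screen size [pixels], pixel pitch [m]
    r0 = 0.15                     # Fried parameter [m]
    vx, vy, dt = 10.0, 0.0, 1e-3  # wind [m/s], time step [s]
    boil = 0.999                  # |alpha| < 1 adds stochastic boiling

    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    f2 = FX ** 2 + FY ** 2
    f2[0, 0] = np.inf                                       # drop piston mode
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f2 ** (-11.0 / 6.0)  # Kolmogorov PSD

    # Frozen flow is a per-mode phase ramp; boiling shrinks the magnitude
    # and re-injects power from the Kolmogorov spectrum.
    alpha = boil * np.exp(-2j * np.pi * (FX * vx + FY * vy) * dt)
    noise_scale = np.sqrt((1.0 - boil ** 2) * psd)

    rng = np.random.default_rng(0)
    cplx = lambda: rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    modes = np.sqrt(psd) * cplx()
    for _ in range(1000):         # evolve step by step; no datacube required
        modes = alpha * modes + noise_scale * cplx()
        phase = np.fft.ifft2(modes).real                    # current screen
    print(phase.std())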
Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios
Banta, Edward R.
2014-01-01
Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.
NASA Technical Reports Server (NTRS)
Harvey, Jason; Moore, Michael
2013-01-01
The General-Use Nodal Network Solver (GUNNS) is a modeling software package that combines nodal analysis and the hydraulic-electric analogy to simulate fluid, electrical, and thermal flow systems. GUNNS is developed by L-3 Communications under the TS21 (Training Systems for the 21st Century) project for NASA Johnson Space Center (JSC), primarily for use in space vehicle training simulators at JSC. It has sufficient compactness and fidelity to model the fluid, electrical, and thermal aspects of space vehicles in real-time simulations running on commodity workstations, for vehicle crew and flight controller training. It has a reusable and flexible component and system design, and a Graphical User Interface (GUI), providing capability for rapid GUI-based simulator development, ease of maintenance, and associated cost savings. GUNNS is optimized for NASA's Trick simulation environment, but can be run independently of Trick.
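The nodal-analysis core that the hydraulic-electric analogy rests on is a linear solve of a conductance matrix: the same assembly serves an electrical network (voltages) and a fluid network (pressures). A minimal Python/NumPy sketch of that textbook core with hypothetical link values (GUNNS's actual component models are far richer):

    import numpy as np

    # Links as (node_a, node_b, conductance); the analogy lets one assembly
    # serve electrical (voltage) and fluid (pressure) networks alike.
    links = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 0.5), (0, 3, 0.2)]
    n_free = 3                             # nodes 0..2 unknown; node 3 is "ground"
    sources = np.array([1.0, 0.0, 0.0])    # flow (or current) injected per node

    G = np.zeros((n_free, n_free))
    for na, nb, g in links:
        for i, j in ((na, nb), (nb, na)):
            if i < n_free:
                G[i, i] += g               # self-conductance on the diagonal
                if j < n_free:
                    G[i, j] -= g           # coupling; ground links add none
    potentials = np.linalg.solve(G, sources)
    print(potentials)                      # pressures or voltages at free nodes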
An effective online data monitoring and saving strategy for large-scale climate simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xian, Xiaochen; Archibald, Rick; Mayer, Benjamin
2018-01-22
Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for a couple of months to understand the changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with both high temporal and spatial resolution information; however, how to effectively monitor and record the climate changes based on these large-scale simulation results that are continuously produced in real time still remains to be resolved. Due to the slow process of writing data to disk, the current practice is to save a snapshot of the simulation results at a constant, slow rate although the data generation process runs at a very high speed. This study proposes an effective online data monitoring and saving strategy over the temporal and spatial domains with the consideration of practical storage and memory capacity constraints. Finally, our proposed method is able to intelligently select and record the most informative extreme values in the raw data generated from real-time simulations in the context of better monitoring climate changes.
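One simple building block for recording the most informative extremes under a fixed memory budget is a bounded min-heap that always holds the largest values seen so far. A Python sketch of that selection idea only (the paper's actual strategy additionally handles spatial structure and the constant disk-write rate):

    import heapq
    import random

    def stream_extremes(samples, budget):
        """Keep the `budget` largest values from a stream of
        (step, cell, value) records using O(budget) memory."""
        heap = []  # min-heap: the smallest retained extreme is heap[0]
        for step, cell, value in samples:
            if len(heap) < budget:
                heapq.heappush(heap, (value, step, cell))
            elif value > heap[0][0]:
                heapq.heapreplace(heap, (value, step, cell))
        return sorted(heap, reverse=True)

    # Stand-in stream: a toy field over 10,000 steps and 50 grid cells.
    random.seed(0)
    stream = ((t, c, random.gauss(15.0, 8.0))
              for t in range(10_000) for c in range(50))
    for value, step, cell in stream_extremes(stream, budget=5):
        print(f"value={value:6.2f}  step={step}  cell={cell}")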
Cognitive task analysis-based design and authoring software for simulation training.
Munro, Allen; Clark, Richard E
2013-10-01
The development of more effective medical simulators requires a collaborative team effort where three kinds of expertise are carefully coordinated: (1) exceptional medical expertise focused on providing complete and accurate information about the medical challenges (i.e., critical skills and knowledge) to be simulated; (2) instructional expertise focused on the design of simulation-based training and assessment methods that produce maximum learning and transfer to patient care; and (3) software development expertise that permits the efficient design and development of the software required to capture expertise, present it in an engaging way, and assess student interactions with the simulator. In this discussion, we describe a method of capturing more complete and accurate medical information for simulators and combine it with new instructional design strategies that emphasize the learning of complex knowledge. Finally, we describe three different types of software support (Development/Authoring, Run Time, and Post Run Time) required at different stages in the development of medical simulations and the instructional design elements of the software required at each stage. We describe the contributions expected of each kind of software and the different instructional control authoring support required.
Weak simulated extratropical responses to complete tropical deforestation
Findell, K.L.; Knutson, T.R.; Milly, P.C.D.
2006-01-01
The Geophysical Fluid Dynamics Laboratory atmosphere-land model version 2 (AM2/LM2) coupled to a 50-m-thick slab ocean model has been used to investigate remote responses to tropical deforestation. Magnitudes and significance of differences between a control run and a deforested run are assessed through comparisons of 50-yr time series, accounting for autocorrelation and field significance. Complete conversion of the broadleaf evergreen forests of South America, central Africa, and the islands of Oceania to grasslands leads to highly significant local responses. In addition, a broad but mild warming is seen throughout the tropical troposphere (<0.2 °C between 700 and 150 mb), significant in northern spring and summer. However, the simulation results show very little statistically significant response beyond the Tropics. There are no significant differences in any hydroclimatic variables (e.g., precipitation, soil moisture, evaporation) in either the northern or the southern extratropics. Small but statistically significant local differences in some geopotential height and wind fields are present in the southeastern Pacific Ocean. Use of the same statistical tests on two 50-yr segments of the control run show that the small but significant extratropical differences between the deforested run and the control run are similar in magnitude and area to the differences between nonoverlapping segments of the control run. These simulations suggest that extratropical responses to complete tropical deforestation are unlikely to be distinguishable from natural climate variability.
Weather model performance on extreme rainfall events simulation's over Western Iberian Peninsula
NASA Astrophysics Data System (ADS)
Pereira, S. C.; Carvalho, A. C.; Ferreira, J.; Nunes, J. P.; Kaiser, J. J.; Rocha, A.
2012-08-01
This study evaluates the performance of the WRF-ARW numerical weather model in simulating the spatial and temporal patterns of an extreme rainfall period over a complex orographic region in north-central Portugal. The analysis was performed for December 2009, during the Portuguese mainland rainy season. The heavy to extremely heavy rainfall periods were due to several low-surface-pressure systems associated with frontal surfaces. The total amount of precipitation for December exceeded, on average, the climatological mean for the 1971-2000 period by 89 mm, varying from 190 mm (southern part of the country) to 1175 mm (northern part of the country). Three model runs were conducted to assess possible improvements in model performance: (1) the WRF-ARW is forced with the initial fields from a global domain model (RunRef); (2) data assimilation for a specific location (RunObsN) is included; (3) nudging is used to adjust the analysis field (RunGridN). Model performance was evaluated against an observed hourly precipitation dataset of 15 rainfall stations using several statistical parameters. The WRF-ARW model reproduced the temporal rainfall patterns well but tended to overestimate precipitation amounts. The RunGridN simulation provided the best results, but model performance of the other two runs was good too, so the selected extreme rainfall episode was successfully reproduced.
NASA Astrophysics Data System (ADS)
Franzoni, G.; Norkus, A.; Pol, A. A.; Srimanobhas, N.; Walker, J.
2017-10-01
Physics analysis at the Compact Muon Solenoid requires both the production of simulated events and processing of the data collected by the experiment. Since the end of the LHC Run-I in 2012, CMS has produced over 20 billion simulated events, from 75 thousand processing requests organised in one hundred different campaigns. These campaigns emulate different configurations of collision events, the detector, and LHC running conditions. In the same time span, sixteen data processing campaigns have taken place to reconstruct different portions of the Run-I and Run-II data with ever improving algorithms and calibrations. The scale and complexity of the event simulation and processing, and the requirement that multiple campaigns must proceed in parallel, demand that a comprehensive, frequently updated and easily accessible monitoring be made available. The monitoring must serve both the analysts, who want to know which and when datasets will become available, and the central production teams in charge of submitting, prioritizing, and running the requests across the distributed computing infrastructure. The Production Monitoring Platform (pMp), a web-based service, was developed in 2015 to address those needs. It aggregates information from multiple services used to define, organize, and run the processing requests. Information is updated hourly using a dedicated elastic database, and the monitoring provides multiple configurable views to assess the status of single datasets as well as entire production campaigns. This contribution will describe the pMp development, the evolution of its functionalities, and one and a half years of operational experience.
Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu
2005-01-01
Geostatistical stochastic simulation is always combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models, a result of their complexity, it is generally infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
Hostetler, S.W.; Alder, J.R.; Allan, A.M.
2011-01-01
We have completed an array of high-resolution simulations of present and future climate over Western North America (WNA) and Eastern North America (ENA) by dynamically downscaling global climate simulations using a regional climate model, RegCM3. The simulations are intended to provide long time series of internally consistent surface and atmospheric variables for use in climate-related research. In addition to providing high-resolution weather and climate data for the past, present, and future, we have developed an integrated data flow and methodology for processing, summarizing, viewing, and delivering the climate datasets to a wide range of potential users. Our simulations were run over 50- and 15-kilometer model grids in an attempt to capture more of the climatic detail associated with processes such as topographic forcing than can be captured by general circulation models (GCMs). The simulations were run using output from four GCMs. All simulations span the present (for example, 1968-1999), common periods of the future (2040-2069), and two simulations continuously cover 2010-2099. The trace gas concentrations in our simulations were the same as those of the GCMs: the IPCC 20th century time series for 1968-1999 and the A2 time series for simulations of the future. We demonstrate that RegCM3 is capable of producing present day annual and seasonal climatologies of air temperature and precipitation that are in good agreement with observations. Important features of the high-resolution climatology of temperature, precipitation, snow water equivalent (SWE), and soil moisture are consistently reproduced in all model runs over WNA and ENA. The simulations provide a potential range of future climate change for selected decades and display common patterns of the direction and magnitude of changes. As expected, there are some model to model differences that limit interpretability and give rise to uncertainties. Here, we provide background information about the GCMs and the RegCM3, a basic evaluation of the model output and examples of simulated future climate. We also provide information needed to access the web applications for visualizing and downloading the data, and give complete metadata that describe the variables in the datasets.
NASA Astrophysics Data System (ADS)
Shen, Wenqiang; Tang, Jianping; Wang, Yuan; Wang, Shuyu; Niu, Xiaorui
2017-04-01
In this study, the characteristics of tropical cyclones (TCs) over the East Asia Coordinated Regional Downscaling Experiment domain are examined with the Weather Research and Forecasting (WRF) model. Eight 20-year (1989-2008) simulations are performed using the WRF model, with lateral boundary forcing from the ERA-Interim reanalysis, to test the sensitivity of TC simulation to interior spectral nudging (SN, including nudging time interval and nudging variables) and radiation schemes [Community Atmosphere Model (CAM), Rapid Radiative Transfer Model (RRTM)]. The simulated TCs are compared with observations from the Regional Specialized Meteorological Centers TC best tracks. It is found that all WRF runs can simulate the climatology of key TC features, such as the tracks and the location/frequency of genesis, reasonably well, and reproduce the inter-annual variations and seasonal cycle of TC counts. The SN runs produce enhanced TC activity compared to the runs without SN. The thermodynamic profile suggests that nudging with horizontal wind alone increases the instability of thermodynamic states in the tropics, which results in excessive TC genesis. The experiments that nudge both wind and temperature reduce the overestimation of TC counts and, in particular, suppress TC intensification by correcting the thermodynamic profile. A weak SN coefficient enhances TC activity significantly even when wind and temperature are both nudged. The analysis of TC counts and the large-scale circulation shows that the SN parameters adopted in our experiments do not appear to suppress TC formation. The excessive TC activity in the CAM runs relative to the RRTM runs is likewise due to enhanced atmospheric instability.
Comparison of the 1.5 Mile Run Times at 7,200 Feet and Simulated 850 Feet in a Hyperoxic Room
2012-03-01
run test was developed as an easy, inexpensive, and accurate way to estimate VO2 max in large groups of AF personnel. In 2004 the AF fitness program... The average VO2 max was 48.6 mL.kg-1.min-1. A 30.6-second, or 4.2%, significant difference (p<.001) was observed between the two runs. These... [Figure 2: Maximal Oxygen Uptake (VO2 max) Test]
Solving Equations of Multibody Dynamics
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Lim, Christopher
2007-01-01
Darts++ is a computer program for solving the equations of motion of a multibody system or of a multibody model of a dynamic system. It is intended especially for use in dynamical simulations performed in designing, analyzing, and developing control software for complex mechanical systems. Darts++ is based on the Spatial-Operator-Algebra formulation for multibody dynamics. This software reads a description of a multibody system from a model data file, then constructs and implements an efficient algorithm that solves the dynamical equations of the system. The efficiency and, hence, the computational speed is sufficient to make Darts++ suitable for use in real-time closed-loop simulations. Darts++ features an object-oriented software architecture that enables reconfiguration of system topology at run time; in contrast, in related prior software, system topology is fixed during initialization. Darts++ provides an interface to scripting languages, including Tcl and Python, that enables the user to configure and interact with simulation objects at run time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Nan; Dimitrovski, Aleksandar D; Simunovic, Srdjan
2016-01-01
The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time implementation of power system simulations. One approach uses the Parareal in time framework. The Parareal algorithm has shown promising theoretical simulation speedups by temporally decomposing a simulation run into a coarse simulation over the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potential of the Parareal algorithm. This paper studies a Parareal implementation using reduced generator models for the coarse solver and reports the testing results on the IEEE 39-bus system and a 327-generator 2383-bus Polish system model.
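In the Parareal correction sweep, the new value at each sub-interval boundary is a fresh coarse prediction plus the difference between the previous iterate's fine and coarse results; the expensive fine sweeps can run in parallel across sub-intervals. A minimal serial Python sketch on a toy ODE (in the paper's setting, the coarse propagator would integrate the reduced generator models and the fine one the full models):

    import numpy as np

    def parareal(f_fine, g_coarse, u0, t_grid, n_iter):
        """Parareal: U[n+1] = G(U[n], new) + F(U[n], old) - G(U[n], old).
        Each propagator advances a state from t_grid[n] to t_grid[n+1]."""
        N = len(t_grid) - 1
        U = [u0]
        for n in range(N):                       # initial coarse sweep
            U.append(g_coarse(U[n], t_grid[n], t_grid[n + 1]))
        for _ in range(n_iter):
            # The fine sweeps are independent and could run in parallel.
            F = [f_fine(U[n], t_grid[n], t_grid[n + 1]) for n in range(N)]
            G_old = [g_coarse(U[n], t_grid[n], t_grid[n + 1]) for n in range(N)]
            for n in range(N):                   # sequential correction sweep
                U[n + 1] = g_coarse(U[n], t_grid[n], t_grid[n + 1]) + F[n] - G_old[n]
        return U

    # Toy test problem du/dt = -u: coarse = 1 Euler step, fine = 100 steps.
    def euler(u, t0, t1, steps):
        dt = (t1 - t0) / steps
        for _ in range(steps):
            u = u + dt * (-u)
        return u

    t = np.linspace(0.0, 2.0, 9)
    U = parareal(lambda u, a, b: euler(u, a, b, 100),
                 lambda u, a, b: euler(u, a, b, 1), 1.0, t, n_iter=3)
    print(U[-1], np.exp(-2.0))   # converges toward the fine solution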
Martínez, F; Casermeiro, M A; Morales, D; Cuevas, G; Walter, Ingrid
2003-04-15
Biosolids and composted municipal solid wastes were surface-applied (0 and 80 Mg ha(-1)) to a degraded soil in a semi-arid environment to determine their effects on the quantity and quality of run-off water. Three and 4 years after application, a simulated rainfall was performed (intensity=942.5 ml min(-1) and kinetic energy=3.92 J m(-2)) on 0.078 m(2) plots using a portable rainfall simulator. The run-off from the different treatment plots was collected and analysed. The type of treatment was highly related to infiltration, run-off and sediment production. The biosolid-treated plots showed the minimum value of total run-off, maximum time to the beginning of run-off and maximum run-off ratio (the relationship between total rainfall and run-off). The MSW-treated plots showed values intermediate between biosolid-treated plots and control plots. Soil losses were also closely related to treatment type. Control plots showed the maximum sediment yield, MSW-treated plots showed intermediate values, and biosolid plots the minimum values for washout. The concentrations of NH(4)-N and PO(4)-P in the run-off water were significantly higher in the treated plots than in control plots. The highest PO(4)-P value, 0.73 mg l(-1), was obtained in the soil treated with biosolids; NO(3)-N concentration also increased significantly with respect to the control and MSW treatments. NH(4)-N concentrations of 15.6 and 15.0 mg l(-1) were recorded in the soils treated with biosolids and MSW, respectively, values approximately five times higher than those obtained in run-off water from untreated soil. However, the concentrations of all these constituents were lower than threshold limits cited in water quality standards for agricultural use. With the exception of Cu, all trace metals analysed in the run-off water were below detection limits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ansari, A.; Mohaghegh, S.; Shahnam, M.
To ensure the usefulness of simulation technologies in practice, their credibility needs to be established with Uncertainty Quantification (UQ) methods. In this project, a smart proxy is introduced to significantly reduce the computational cost of conducting the large number of multiphase CFD simulations typically required for non-intrusive UQ analysis. Smart proxies for CFD models are developed using the pattern recognition capabilities of Artificial Intelligence (AI) and Data Mining (DM) technologies. Several CFD simulation runs with different inlet air velocities for a rectangular fluidized bed are used to create a smart CFD proxy that is capable of replicating the CFD results for the entire geometry and inlet velocity range. The smart CFD proxy is validated with blind CFD runs (CFD runs that have not played any role during the development of the smart CFD proxy). The developed and validated smart CFD proxy generates its results in seconds with reasonable error (less than 10%). Upon completion of this project, UQ studies that rely on hundreds or thousands of smart CFD proxy runs can be accomplished in minutes. The following figure demonstrates a validation example (blind CFD run) showing the results from the MFiX simulation and the smart CFD proxy for pressure distribution across a fluidized bed at a given time-step (the layer number corresponds to the vertical location in the bed).
2007-01-01
Aid (IWEDA) we developed techniques that allowed significant improvement in weather effects and impacts for wargames. TAWS was run for numerous and... found that the wargame realism was increased without impacting the run time. While these techniques are applicable to wargames in general, we tested... them by incorporation into the Advanced Warfighting Simulation (AWARS) model. AWARS was modified to incorporate weather impacts upon sensor
NASA Astrophysics Data System (ADS)
Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.
2017-04-01
The marginal opportunity cost of water refers to benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management as it can be used for better water allocation or better system operation, and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet such models' reliance on optimization means they have difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding a small quantity of water to the model at the place and time where the opportunity cost should be computed, then running a simulation and comparing the difference in system benefits. The added system benefit per unit of added water then provides an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. Yet this method requires one simulation run per node and per time step, which is computationally demanding for large-scale systems and short time steps (e.g., a day or a week). Besides, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time. Both backward inductions require only linear operations, and the resulting algorithm tracks the maximal benefit that can be obtained by having an additional unit of water at any node in the network and at any date in time. Results 1) can be obtained from the results of a rule-based simulation using a single post-processing run, and 2) are exactly the (gross) benefit forgone by not allocating an additional unit of water to its most productive use. The proposed method is applied to London's water resource system to track the value of storage in the city's water supply reservoirs on the Thames River throughout a weekly 85-year simulation. Results, obtained in 0.4 seconds on a single processor, reflect the environmental cost of water shortage. This fast computation allows visualizing the seasonal variations of the opportunity cost depending on reservoir levels, demonstrating the potential of this approach for exploring water values and their variations using simulation models with multiple runs (e.g., of stochastically generated plausible future river inflows).
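The double backward induction can be pictured on a toy river chain: within each time step the value of an extra unit propagates from the outlet up to the headwaters, and across time steps a storable unit inherits the next step's value. A hedged Python sketch with hypothetical numbers (a real network would add branching, conveyance losses, and reservoir constraints):

    # Three-node chain: 0 = headwater, 1 = mid-basin, 2 = outlet.
    # benefit[t][i] is the marginal benefit of one extra unit of water
    # used at node i during time step t (all values hypothetical).
    downstream = {0: 1, 1: 2, 2: None}
    benefit = [
        [0.2, 1.0, 0.1],   # t = 0
        [0.5, 0.3, 0.9],   # t = 1
        [0.1, 0.2, 0.4],   # t = 2
    ]
    T = len(benefit)
    nodes = [2, 1, 0]        # sweep from outlet to headwaters
    carry = 1.0              # stored water keeps full value here

    value = [[0.0] * 3 for _ in range(T)]
    for t in reversed(range(T)):             # backwards in time
        for i in nodes:                      # outlet to headwaters
            best = benefit[t][i]             # use the unit locally now
            if downstream[i] is not None:    # or release it downstream
                best = max(best, value[t][downstream[i]])
            if t + 1 < T:                    # or store it for later
                best = max(best, carry * value[t + 1][i])
            value[t][i] = best
    print(value)  # value[t][i]: most productive use of an extra unit at (i, t)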
Computer-Aided System Engineering and Analysis (CASE/A) Programmer's Manual, Version 5.0
NASA Technical Reports Server (NTRS)
Knox, J. C.
1996-01-01
The Computer Aided System Engineering and Analysis (CASE/A) Version 5.0 Programmer's Manual provides the programmer and user with information regarding the internal structure of the CASE/A 5.0 software system. CASE/A 5.0 is a trade study tool that provides modeling/simulation capabilities for analyzing environmental control and life support systems and active thermal control systems. CASE/A has been successfully used in studies such as the evaluation of carbon dioxide removal in the space station. CASE/A modeling provides a graphical and command-driven interface for the user. This interface allows the user to construct a model by placing equipment components in a graphical layout of the system hardware, then connecting the components via flow streams and defining their operating parameters. Once the equipment is placed, the simulation time and other control parameters can be set to run the simulation based on the model constructed. After completion of the simulation, graphical plots or text files can be obtained for evaluation of the simulation results over time. Additionally, users have the capability to control the simulation and extract information at various times in the simulation (e.g., control equipment operating parameters over the simulation time or extract plot data) by using "User Operations (OPS) Code." This OPS code is written in FORTRAN with a canned set of utility subroutines for performing common tasks. CASE/A version 5.0 software runs under the VAX VMS(Trademark) environment. It utilizes the Tektronix 4014(Trademark) graphics display system and the VT100(Trademark) text manipulation/display system.
NASA Astrophysics Data System (ADS)
Abdul Ghani, B.
2005-09-01
"TEA CO2 Laser Simulator" has been designed to simulate the dynamic emission processes of the TEA CO2 laser based on the six-temperature model. The program predicts the behavior of the laser output pulse (power, energy, pulse duration, delay time, FWHM, etc.) depending on the physical and geometrical input parameters (pressure ratio of the gas mixture, reflecting area of the output mirror, media length, losses, filling and decay factors, etc.). Program summary. Title of program: TEA_CO2. Catalogue identifier: ADVW. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVW. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: P.IV DELL PC. Setup: Atomic Energy Commission of Syria, Scientific Services Department, Mathematics and Informatics Division. Operating system: MS-Windows 9x, 2000, XP. Programming language: Delphi 6.0. No. of lines in distributed program, including test data, etc.: 47 315. No. of bytes in distributed program, including test data, etc.: 7 681 109. Distribution format: tar.gz. Classification: 15 Laser Physics. Nature of the physical problem: "TEA CO2 Laser Simulator" is a program that predicts the behavior of the laser output pulse by studying the effect of the physical and geometrical input parameters on the characteristics of the output laser pulse. The laser active medium consists of a CO2-N2-He gas mixture. Method of solution: The six-temperature model for the dynamic emission of the TEA CO2 laser has been adapted in order to predict the parameters of laser output pulses. A simulation of the laser electrical pumping was carried out using two approaches: an empirical function (equation (8)) and a differential equation (equation (9)). Typical running time: The program's running time mainly depends on both the integration interval and the integration step; for a 4 μs period of time and a 0.001 μs integration step (the default values used in the program), the running time will be about 4 seconds. Restrictions on the complexity: Using a very small integration step might halt the program run, due to the huge number of calculation points and a small paging file size for the MS-Windows virtual memory. In such a case, it is recommended to enlarge the paging file to an appropriate size, or to use a larger integration step.
Parallel Multi-cycle LES of an Optical Pent-roof DISI Engine Under Motored Operating Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Dam, Noah; Sjöberg, Magnus; Zeng, Wei
The use of Large-eddy Simulations (LES) has increased due to their ability to resolve the turbulent fluctuations of engine flows and capture the resulting cycle-to-cycle variability. One drawback of LES, however, is the requirement to run multiple engine cycles to obtain the necessary cycle statistics for full validation. The standard method of obtaining the cycles, running a single simulation through many engine cycles sequentially, can take a long time to complete. Recently, a new strategy has been proposed by our research group to reduce the amount of time necessary to simulate the many engine cycles by running individual engine cycle simulations in parallel. With modern large computing systems this has the potential to reduce the amount of time necessary for a full set of simulated engine cycles to finish by up to an order of magnitude. In this paper, the Parallel Perturbation Methodology (PPM) is used to simulate up to 35 engine cycles of an optically accessible, pent-roof Direct-injection Spark-ignition (DISI) engine at two different motored engine operating conditions, one throttled and one un-throttled. Comparisons are made against corresponding sequential-cycle simulations to verify the similarity of results using either methodology. Mean results from the PPM approach are very similar to sequential-cycle results, with less than 0.5% difference in pressure and a magnitude structure index (MSI) of 0.95. Differences in cycle-to-cycle variability (CCV) predictions are larger, but close to the statistical uncertainty in the measurement for the number of cycles simulated. PPM LES results were also compared against experimental data. Mean quantities such as pressure or mean velocities were typically matched to within 5-10%. Pressure CCVs were under-predicted, mostly due to the lack of any perturbations in the pressure boundary conditions between cycles. Velocity CCVs for the simulations had the same average magnitude as experiments, but the experimental data showed greater spatial variation in the root-mean-square (RMS). Conversely, circular standard deviation results showed greater repeatability of the flow directionality and swirl vortex positioning than the simulations.
The Prodiguer Messaging Platform
NASA Astrophysics Data System (ADS)
Greenslade, Mark; Denvil, Sebastien; Raciazek, Jerome; Carenton, Nicolas; Levavasseur, Guillaume
2014-05-01
CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output (data and meta-data) are just some of the complexities that CONVERGENCE aims to resolve. The Institut Pierre Simon Laplace (IPSL) is responsible for running climate simulations upon a set of heterogeneous HPC environments within France. With heterogeneity comes added complexity in terms of simulation instrumentation and control. Obtaining a global perspective upon the state of all simulations running upon all HPC environments has hitherto been problematic. In this presentation we detail how, within the context of CONVERGENCE, the implementation of the Prodiguer messaging platform resolves complexity and permits the development of real-time applications such as: 1. a simulation monitoring dashboard; 2. a simulation metrics visualizer; 3. an automated simulation runtime notifier; 4. an automated output data & meta-data publishing pipeline. The Prodiguer messaging platform leverages a widely used open-source message broker called RabbitMQ. RabbitMQ itself implements the Advanced Message Queuing Protocol (AMQP). Hence it will be demonstrated that the Prodiguer messaging platform is built upon both open-source software and open standards.
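As an illustration of the AMQP publish pattern such a platform builds on, here is a minimal Python sketch using the pika RabbitMQ client; the exchange and routing-key names are hypothetical, not Prodiguer's actual message schema, and a broker is assumed to be running locally:

    import json
    import pika

    # Connect to a local RabbitMQ broker (assumed already running).
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.exchange_declare(exchange="simulation.status", exchange_type="topic")

    # Publish one simulation-state message; consumers such as a monitoring
    # dashboard bind queues to the exchange with wildcard routing keys.
    message = {"simulation": "ipsl-run-042", "state": "running", "step": 1800}
    channel.basic_publish(
        exchange="simulation.status",
        routing_key="hpc.machine1.monitoring",
        body=json.dumps(message),
    )
    connection.close()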
Dysrhythmias in Laypersons During Centrifuge-Simulated Suborbital Spaceflight.
Suresh, Rahul; Blue, Rebecca S; Mathers, Charles H; Castleberry, Tarah L; Vanderploeg, James M
2017-11-01
There are limited data on cardiac dysrhythmias in laypersons during hypergravity exposure. We report layperson electrocardiograph (ECG) findings and tolerance of dysrhythmias during centrifuge-simulated suborbital spaceflight. Volunteers participated in varied-length centrifuge training programs of 2-7 centrifuge runs over 0.5-2 d, culminating in two simulated suborbital spaceflights of combined +Gz and +Gx (peak +4.0 Gz, +6.0 Gx, duration 5 s). Monitors recorded pre- and post-run mean arterial blood pressure (MAP), 6-s average heart rate (HR) collected at prespecified points during exposures, documented dysrhythmias observed on continuous 3-lead ECG, self-reported symptoms, and objective signs of intolerance on real-time video monitoring. Participating in the study were 148 subjects (43 women). Documented dysrhythmias included sinus pause (N = 5), couplet premature ventricular contractions (N = 4), bigeminy (N = 3), accelerated idioventricular rhythm (N = 1), and relative bradycardia (RB, defined as a transient HR drop of >20 bpm; N = 63). None were associated with subjective symptoms or objective signs of acceleration intolerance. Episodes of RB occurred only during +Gx exposures. Subjects had a higher post-run vs. pre-run MAP after all exposures, but demonstrated no difference in pre- and post-run HR. RB was more common in men, younger individuals, and subjects experiencing more centrifuge runs. Dysrhythmias in laypersons undergoing simulated suborbital spaceflight were well tolerated, though RB was frequently noted during short-duration +Gx exposure. No subjects demonstrated associated symptoms or objective hemodynamic sequelae from these events. Even so, heightened caution remains warranted when monitoring dysrhythmias in laypersons with significant cardiopulmonary disease or taking medications that modulate cardiac conduction.
Scalable load balancing for massively parallel distributed Monte Carlo particle transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, M. J.; Brantley, P. S.; Joy, K. I.
2013-07-01
In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory.
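The pair-wise pattern can be pictured as a dimension exchange on a hypercube: in round d, each rank pairs with the rank differing in bit d, and the pair splits its combined work evenly, so log2(N) rounds equalize the whole machine. A serial Python sketch of that pattern on eight mock "processors" (the real algorithm exchanges particles between processors, not bare numbers):

    # Mock per-processor workloads for N = 8 "processors".
    work = [120.0, 3.0, 40.0, 77.0, 5.0, 250.0, 9.0, 16.0]
    n = len(work)

    rounds = n.bit_length() - 1        # log2(n) for power-of-two n
    for d in range(rounds):
        for rank in range(n):
            partner = rank ^ (1 << d)  # flip bit d to find the partner
            if partner > rank:         # each pair balances exactly once
                avg = (work[rank] + work[partner]) / 2.0
                work[rank] = work[partner] = avg
    print(work)                        # every entry equals the global mean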
Rapidly Re-Configurable Flight Simulator Tools for Crew Vehicle Integration Research and Design
NASA Technical Reports Server (NTRS)
Schutte, Paul C.; Trujillo, Anna; Pritchett, Amy R.
2000-01-01
While simulation is a valuable research and design tool, the time and difficulty required to create new simulations (or re-use existing simulations) often limits their application. This report describes the design of the software architecture for the Reconfigurable Flight Simulator (RFS), which provides a robust simulation framework that allows the simulator to fulfill multiple research and development goals. The core of the architecture provides the interface standards for simulation components, registers and initializes components, and handles the communication between simulation components. The simulation components are each a pre-compiled library 'plug-in' module. This modularity allows independent development and sharing of individual simulation components. Additional interfaces can be provided through the use of Object Data/Method Extensions (OD/ME). RFS provides a programmable run-time environment for real-time access and manipulation, and has networking capabilities using the High Level Architecture (HLA).
NASA Astrophysics Data System (ADS)
Alakent, Burak; Doruker, Pemra; Camurdan, Mehmet C.
2004-09-01
Time series analysis is applied to the collective coordinates obtained from principal component analysis of independent molecular dynamics simulations of α-amylase inhibitor tendamistat and immunity protein of colicin E7, based on the Cα coordinates history. Even though the principal component directions obtained for each run are considerably different, the dynamics information obtained from these runs is surprisingly similar in terms of time series models and parameters. There are two main differences in the dynamics of the two proteins: the higher density of low frequencies and the larger step sizes for the interminima motions of colicin E7 compared with those of α-amylase inhibitor, which may be attributed to the higher number of residues of colicin E7 and/or the structural differences between the two proteins. The cumulative density function of the low frequencies in each run conforms to the expectations from normal mode analysis. When different runs of α-amylase inhibitor are projected on the same set of eigenvectors, it is found that principal components obtained from a certain conformational region of a protein have moderate explanatory power in other conformational regions, and the local minima are similar to a certain extent, while the height of the energy barriers between the minima changes significantly. As a final remark, time series analysis tools are further exploited in this study with the motive of explaining the equilibrium fluctuations of proteins.
Equilibration and analysis of first-principles molecular dynamics simulations of water
NASA Astrophysics Data System (ADS)
Dawson, William; Gygi, François
2018-03-01
First-principles molecular dynamics (FPMD) simulations based on density functional theory are becoming increasingly popular for the description of liquids. In view of the high computational cost of these simulations, the choice of an appropriate equilibration protocol is critical. We assess two methods of estimation of equilibration times using a large dataset of first-principles molecular dynamics simulations of water. The Gelman-Rubin potential scale reduction factor [A. Gelman and D. B. Rubin, Stat. Sci. 7, 457 (1992)] and the marginal standard error rule heuristic proposed by White [Simulation 69, 323 (1997)] are evaluated on a set of 32 independent 64-molecule simulations of 58 ps each, amounting to a combined cumulative time of 1.85 ns. The availability of multiple independent simulations also allows for an estimation of the variance of averaged quantities, both within MD runs and between runs. We analyze atomic trajectories, focusing on correlations of the Kohn-Sham energy, pair correlation functions, number of hydrogen bonds, and diffusion coefficient. The observed variability across samples provides a measure of the uncertainty associated with these quantities, thus facilitating meaningful comparisons of different approximations used in the simulations. We find that the computed diffusion coefficient and average number of hydrogen bonds are affected by a significant uncertainty in spite of the large size of the dataset used. A comparison with classical simulations using the TIP4P/2005 model confirms that the variability of the diffusivity is also observed after long equilibration times. Complete atomic trajectories and simulation output files are available online for further analysis.
Grace: A cross-platform micromagnetic simulator on graphics processing units
NASA Astrophysics Data System (ADS)
Zhu, Ru
2015-12-01
A micromagnetic simulator running on graphics processing units (GPUs) is presented. Unlike the GPU implementations of other research groups, which predominantly run on NVidia's CUDA platform, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware-platform independent. It runs on GPUs from vendors including NVidia, AMD and Intel, and achieves a significant performance boost as compared to previous central processing unit (CPU) simulators, up to two orders of magnitude. The simulator paves the way for running large micromagnetic simulations on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards, and is freely available to download.
Principled Design of an Augmented Reality Trainer for Medics
2018-04-13
retake test is scheduled. In addition, extensive simulation capstone scenarios are run with a full-body manikin that includes airway management... platform so they could run with high-quality graphical resolution. We updated the underlying data models to reflect the training scenario parameters...
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J
2009-01-01
Orthogonal recursive bisection (ORB) can be used as a data decomposition strategy to distribute the large data set of a cardiac model across a distributed-memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of computational load of each element in the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set to optimize load balancing. The anatomical data set was given by both ventricles of the Visible Female data set at a 0.2 mm resolution. Fiber orientation was included. The data set was rotated by 90 degrees around the x, y and z axes, respectively. By either translating or simply taking the magnitude of the resulting negative coordinates, we were able to create 14 data sets of the same anatomy with different orientations and positions in the overall volume. Computational load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100, to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance using the different data sets and orientations. The simulations were carried out for load ratios 1:10, 1:25 and 1:38.85 on a 512-processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms. This yields a difference of about 28 h for a simulation of 10 s simulation time. However, given larger processor partitions, the difference in run time decreases and becomes less significant. Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
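ORB itself is compact: recursively split the element set along its widest axis at the load-weighted median, so each half carries roughly equal computational load. A Python/NumPy sketch with stand-in geometry and a 1:10 non-tissue:tissue load ratio echoing the ratios above (illustrative, not the paper's implementation):

    import numpy as np

    def orb(points, loads, depth):
        """Orthogonal recursive bisection: cut the widest axis at the
        load-weighted median, recursing depth times -> 2**depth parts."""
        if depth == 0:
            return [points]
        spans = points.max(axis=0) - points.min(axis=0)
        axis = int(np.argmax(spans))             # cut the longest axis
        order = np.argsort(points[:, axis])
        cum = np.cumsum(loads[order])
        cut = int(np.searchsorted(cum, cum[-1] / 2.0))  # equal load, not count
        left, right = order[:cut + 1], order[cut + 1:]
        return (orb(points[left], loads[left], depth - 1)
                + orb(points[right], loads[right], depth - 1))

    rng = np.random.default_rng(0)
    pts = rng.random((10_000, 3))
    tissue = pts[:, 0] < 0.3                     # stand-in "tissue" region
    loads = np.where(tissue, 10.0, 1.0)          # non-tissue:tissue = 1:10
    parts = orb(pts, loads, depth=3)             # 8 partitions
    print([len(p) for p in parts])               # unequal counts, equal load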
Rapid ISS Power Availability Simulator
NASA Technical Reports Server (NTRS)
Downing, Nicholas
2011-01-01
The ISS (International Space Station) Power Resource Officers (PROs) needed a tool to automate the calculation of thousands of ISS power availability simulations used to generate power constraint matrices. Each matrix contains 864 cells, and each cell represents a single power simulation that must be run. The tools available to the flight controllers were highly operator-intensive and not conducive to rapidly running the thousands of simulations necessary to generate the power constraint data. SOLAR is a Java-based tool that leverages commercial-off-the-shelf software (Satellite Toolkit) and an existing in-house ISS EPS model (SPEED) to rapidly perform thousands of power availability simulations. SOLAR has a very modular architecture and consists of a series of plug-ins that are loosely coupled. The modular architecture of the software allows for easy replacement of the ISS power system model simulator, re-use of the Satellite Toolkit integration code, and separation of the user interface from the core logic. Satellite Toolkit (STK) is used to generate ISS eclipse and insolation times, solar beta angle, position of the solar arrays over time, and the amount of shadowing on the solar arrays, which is then provided to SPEED to calculate power generation forecasts. The power planning turn-around time is reduced from three months to two weeks (an 83-percent decrease) using SOLAR, and the amount of PRO power planning support effort is reduced by an estimated 30 percent.
A Method for Generating Reduced-Order Linear Models of Multidimensional Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1998-01-01
Simulation of high speed propulsion systems may be divided into two categories, nonlinear and linear. The nonlinear simulations are usually based on multidimensional computational fluid dynamics (CFD) methodologies and tend to provide high resolution results that show the fine detail of the flow. Consequently, these simulations are large, numerically intensive, and run much slower than real-time. The linear simulations are usually based on large lumping techniques that are linearized about a steady-state operating condition. These simplistic models often run at or near real-time but do not always capture the detailed dynamics of the plant. Under a grant sponsored by the NASA Lewis Research Center, Cleveland, Ohio, a new method has been developed that can be used to generate improved linear models for control design from multidimensional steady-state CFD results. This CFD-based linear modeling technique provides a small perturbation model that can be used for control applications and real-time simulations. It is important to note the utility of the modeling procedure: all that is needed to obtain a linear model of the propulsion system is the geometry and steady-state operating conditions from a multidimensional CFD simulation or experiment. This research represents a beginning step in establishing a bridge between the controls discipline and the CFD discipline so that the control engineer is able to effectively use multidimensional CFD results in control system design and analysis.
Simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed for running on parallel computers and distributed (networked) computer systems, but can run on standalone workstations for smaller simulations. The simulator currently models instrumented smart vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. Realistic modeling of variations of the posted driving speed is based on human factors studies that take into consideration weather, road conditions, driver personality and behavior, and vehicle type. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on parallel computers, such as ANL's IBM SP-2, for large-scale problems. A novel feature of the approach is that vehicles are represented by autonomous computer processes which exchange messages with other processes. The vehicles have a behavior model which governs route selection and driving behavior, and can react to external traffic events much like real vehicles. With this approach, the simulation is scalable to take advantage of emerging massively parallel processor (MPP) systems.
Influence of hydrodynamic thrust bearings on the nonlinear oscillations of high-speed rotors
NASA Astrophysics Data System (ADS)
Chatzisavvas, Ioannis; Boyaci, Aydin; Koutsovasilis, Panagiotis; Schweizer, Bernhard
2016-10-01
This paper investigates the effect of hydrodynamic thrust bearings on the nonlinear vibrations and the bifurcations occurring in rotor/bearing systems. In order to examine the influence of thrust bearings, run-up simulations may be carried out. To be able to perform such run-up calculations, a computationally efficient thrust bearing model is mandatory. Direct discretization of the Reynolds equation for thrust bearings by means of a Finite Element or Finite Difference approach entails rather large simulation times, since in every time-integration step a discretized model of the Reynolds equation has to be solved simultaneously with the rotor model. Implementation of such a coupled rotor/bearing model may be accomplished by a co-simulation approach. Such an approach prevents, however, a thorough analysis of the rotor/bearing system based on extensive parameter studies. A major point of this work is the derivation of a very time-efficient but rather precise model for transient simulations of rotors with hydrodynamic thrust bearings. The presented model makes use of a global Galerkin approach, where the pressure field is approximated by global trial functions. For the considered problem, an analytical evaluation of the relevant integrals is possible. As a consequence, the system of equations of the discretized bearing model is obtained symbolically. In combination with a proper decomposition of the governing system matrix, a numerically efficient implementation can be achieved. Using run-up simulations with the proposed model, the effect of thrust bearings on the bifurcation points as well as on the amplitudes and frequencies of the subsynchronous rotor oscillations is investigated. In particular, the influence of the magnitude of the axial force, the geometry of the thrust bearing and the oil parameters is examined. It is shown that the thrust bearing exerts a large influence on the nonlinear rotor oscillations, especially those related to the conical mode of the rotor. A comparison between a full co-simulation approach and a reduced Galerkin implementation is carried out. It is shown that a speed-up of 10-15 times may be obtained with the Galerkin model compared to the co-simulation model at the same accuracy.
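For orientation, the field equation such a thrust-bearing model discretizes is the incompressible, isoviscous Reynolds equation (notation assumed here, not quoted from the paper), with film thickness h, viscosity μ, sliding speed U and pressure p:

\[
\frac{\partial}{\partial x}\!\left(\frac{h^{3}}{12\mu}\frac{\partial p}{\partial x}\right)
+ \frac{\partial}{\partial z}\!\left(\frac{h^{3}}{12\mu}\frac{\partial p}{\partial z}\right)
= \frac{U}{2}\frac{\partial h}{\partial x} + \frac{\partial h}{\partial t},
\qquad
p(x,z,t) \approx \sum_{k=1}^{N} q_k(t)\,\phi_k(x,z).
\]

The global Galerkin ansatz on the right replaces the full grid unknowns with a handful of coefficients q_k(t); projecting the residual onto the global trial functions φ_k, with the integrals evaluated analytically, yields the small symbolic system that makes the run-up simulations roughly an order of magnitude faster than co-simulation.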
NASA Astrophysics Data System (ADS)
Amran, M. A. M.; Idayu, N.; Faizal, K. M.; Sanusi, M.; Izamshah, R.; Shahir, M.
2016-11-01
In this study, the main objective is to determine the percentage difference in part weight between experimental and simulation work. The effect of process parameters on the weight of the plastic part is also investigated. The process parameters involved were mould temperature, melt temperature, injection time and cooling time. Autodesk Simulation Moldflow software was used to run the simulation of the plastic part. The Taguchi method was selected as the Design of Experiment approach to conduct the experiment. Then, the simulation result was validated against the experimental result. It was found that the minimum and maximum percentage differences in part weight between simulation and experimental work are 0.35% and 1.43%, respectively. In addition, the most significant parameter affecting part weight is the mould temperature, followed by melt temperature, injection time and cooling time.
NASA Technical Reports Server (NTRS)
Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.
2012-01-01
This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was written in FORTRAN 77 (The Portland Group, Lake Oswego, OR) to run in a command shell, similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation environment that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
Key algorithms used in GR02: A computer simulation model for predicting tree and stand growth
Garrett A. Hughes; Paul E. Sendak; Paul E. Sendak
1985-01-01
GR02 is an individual-tree, distance-independent simulation model for predicting tree and stand growth over time. It performs five major functions during each run: (1) updates diameter at breast height, (2) updates total height, (3) estimates mortality, (4) determines regeneration, and (5) updates crown class.
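A minimal sketch of that per-run update cycle follows; the growth rules are placeholders, not the actual GR02 equations:

    #include <vector>

    struct Tree { double dbh; double height; int crown_class; bool alive; };

    // One growth period of an individual-tree, distance-independent model,
    // applying the five GR02 functions in order (all rules are placeholders).
    void grow_stand(std::vector<Tree>& stand) {
        for (Tree& t : stand) {
            if (!t.alive) continue;
            t.dbh    += 0.3;                     // (1) update diameter at breast height
            t.height += 0.5;                     // (2) update total height
            if (t.dbh > 60.0) t.alive = false;   // (3) estimate mortality
        }
        stand.push_back({1.0, 1.5, 4, true});    // (4) determine regeneration
        for (Tree& t : stand)                    // (5) update crown class
            t.crown_class = t.height > 20.0 ? 1 : 2;
    }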
Designing Realistic Human Behavior into Multi-Agent Systems
2001-09-01
different results based on some sort of randomness built into it, a trend can be looked at over time and a success or failure rate can be...simulation remains in that state, very different results can be achieved each simulation run. An analyst can look at success and failure over a long
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that each step is cheaper and fewer steps are needed, enabling fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
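The splitting behind the SISM can be summarized in standard second-order splitting notation (assumed here, not quoted from the paper): the Hamiltonian is divided into a high-frequency harmonic part, propagated analytically, and a remainder, propagated numerically,

\[
H = H_{\mathrm{harm}} + H_{r},
\qquad
e^{\Delta t\,\hat L_{H}} \approx
e^{\frac{\Delta t}{2}\hat L_{H_{\mathrm{harm}}}}\,
e^{\Delta t\,\hat L_{H_{r}}}\,
e^{\frac{\Delta t}{2}\hat L_{H_{\mathrm{harm}}}},
\]

so the fast bond vibrations impose no stability limit and the step size Δt can be chosen for the slow, numerically integrated motion alone.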
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
Funnell, Mark P.; Dykes, Nick R.; Owen, Elliot J.; Mears, Stephen A.; Rollo, Ian; James, Lewis J.
2017-01-01
This study assessed the effect of carbohydrate intake on self-selected soccer-specific running performance. Sixteen male soccer players (age 23 ± 4 years; body mass 76.9 ± 7.2 kg; predicted VO2max = 54.2 ± 2.9 mL∙kg−1∙min−1; soccer experience 13 ± 4 years) completed a progressive multistage fitness test, familiarisation trial and two experimental trials, involving a modified version of the Loughborough Intermittent Shuttle Test (LIST) to simulate a soccer match in a fed state. Subjects completed six 15 min blocks (two halves of 45 min) of intermittent shuttle running, with a 15-min half-time. Blocks 3 and 6 allowed self-selection of running speeds, and sprint times were assessed throughout. Subjects consumed 250 mL of either a 12% carbohydrate solution (CHO) or a non-caloric taste-matched placebo (PLA) before and at half-time of the LIST. Sprint times were not different between trials (CHO 2.71 ± 0.15 s, PLA 2.70 ± 0.14 s; p = 0.202). Total distance covered in self-selected blocks (block 3: CHO 2.07 ± 0.06 km; PLA 2.09 ± 0.08 km; block 6: CHO 2.04 ± 0.09 km; PLA 2.06 ± 0.08 km; p = 0.122) was not different between trials. There was no difference between trials for distance covered (p ≥ 0.297) or mean speed (p ≥ 0.172) for jogging or cruising. Blood glucose concentration was greater (p < 0.001) at the end of half-time during the CHO trial. In conclusion, consumption of 250 mL of 12% CHO solution before and at half-time of a simulated soccer match does not affect self-selected running or sprint performance in a fed state. PMID:28067762
Funnell, Mark P; Dykes, Nick R; Owen, Elliot J; Mears, Stephen A; Rollo, Ian; James, Lewis J
2017-01-05
This study assessed the effect of carbohydrate intake on self-selected soccer-specific running performance. Sixteen male soccer players (age 23 ± 4 years; body mass 76.9 ± 7.2 kg; predicted VO2max = 54.2 ± 2.9 mL∙kg−1∙min−1; soccer experience 13 ± 4 years) completed a progressive multistage fitness test, familiarisation trial and two experimental trials, involving a modified version of the Loughborough Intermittent Shuttle Test (LIST) to simulate a soccer match in a fed state. Subjects completed six 15 min blocks (two halves of 45 min) of intermittent shuttle running, with a 15-min half-time. Blocks 3 and 6 allowed self-selection of running speeds, and sprint times were assessed throughout. Subjects consumed 250 mL of either a 12% carbohydrate solution (CHO) or a non-caloric taste-matched placebo (PLA) before and at half-time of the LIST. Sprint times were not different between trials (CHO 2.71 ± 0.15 s, PLA 2.70 ± 0.14 s; p = 0.202). Total distance covered in self-selected blocks (block 3: CHO 2.07 ± 0.06 km; PLA 2.09 ± 0.08 km; block 6: CHO 2.04 ± 0.09 km; PLA 2.06 ± 0.08 km; p = 0.122) was not different between trials. There was no difference between trials for distance covered (p ≥ 0.297) or mean speed (p ≥ 0.172) for jogging or cruising. Blood glucose concentration was greater (p < 0.001) at the end of half-time during the CHO trial. In conclusion, consumption of 250 mL of 12% CHO solution before and at half-time of a simulated soccer match does not affect self-selected running or sprint performance in a fed state.
The Trick Simulation Toolkit: A NASA/Open source Framework for Running Time Based Physics Models
NASA Technical Reports Server (NTRS)
Penn, John M.; Lin, Alexander S.
2016-01-01
This paper describes the design and use of the Trick Simulation Toolkit, a simulation development environment for creating high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes Trick's design goals and how the development environment attempts to achieve those goals. It describes how Trick is used in some of the many training and engineering simulations at NASA. Finally, it describes the Trick NASA/Open source project on GitHub.
The evolution of extreme precipitations in high resolution scenarios over France
NASA Astrophysics Data System (ADS)
Colin, J.; Déqué, M.; Somot, S.
2009-09-01
Over the past years, improving the modelling of extreme events and their variability at climatic time scales has become one of the challenging issues in the regional climate research field. This study shows the results of a high resolution (12 km) scenario run over France with the limited area model (LAM) ALADIN-Climat, regarding the representation of extreme precipitation. The runs were conducted in the framework of the ANR-SCAMPEI national project on high resolution scenarios over French mountains. As a first step, we attempt to quantify one of the uncertainties implied by the use of a LAM: the size of the area on which the model is run. In particular, we address the question of whether a relatively small domain allows the model to create its own small-scale processes. Indeed, high resolution scenarios cannot be run on large domains because of the computation time, so this preliminary question must be answered before producing and analyzing such scenarios. To do so, we worked in the framework of a "big brother" experiment. We performed a 23-year long global simulation in present-day climate (1979-2001) with the ARPEGE-Climat GCM, at a resolution of approximately 50 km over Europe (stretched grid). This first simulation, named ARP50, constitutes the "big brother" reference of our experiment. It has been validated against the CRU climatology. Then we filtered the short waves (up to 200 km) from ARP50 in order to obtain the equivalent of coarse resolution lateral boundary conditions (LBC). We carried out three ALADIN-Climat simulations at a 50 km resolution with these LBC, using different configurations of the model: FRA50, run over a small domain (2000 x 2000 km, centered over France); EUR50, run over a larger domain (5000 x 5000 km, also centered over France); and EUR50-SN, run over the large domain using spectral nudging. Considering that the ARPEGE-Climat and ALADIN-Climat models share the same physics and dynamics and that both regional and global simulations were run at the same resolution, ARP50 can be regarded as a reference with which FRA50, EUR50 and EUR50-SN should each be compared. After an analysis of the differences between the regional simulations and ARP50 in annual and seasonal means, we focus on the representation of rainfall extremes, comparing two-dimensional fields of various indices inspired from STARDEX and quantile-quantile plots. The results show a good agreement with the ARP50 reference for all three regional simulations, and few differences are found between them. This result indicates that the use of small domains is not significantly detrimental to the modelling of extreme precipitation events. It also shows that the spectral nudging technique has no detrimental effect on the extreme precipitation. Therefore, high resolution scenarios performed on a relatively small domain, such as the ones run for SCAMPEI, can be regarded as good tools to explore the possible evolution of precipitation extremes in the future climate. Preliminary results on the response of precipitation extremes over South-East France are given.
Development and testing of a fast conceptual river water quality model.
Keupers, Ingrid; Willems, Patrick
2017-04-15
Modern, model-based river quality management relies strongly on river water quality models to simulate the temporal and spatial evolution of pollutant concentrations in the water body. Such models are typically constructed by extending detailed hydrodynamic models with a component describing the advection-diffusion and water quality transformation processes in a detailed, physically based way. This approach is computationally too demanding, especially when simulating the long time periods needed for statistical analysis of the results, or when model sensitivity analysis, calibration and validation require a large number of model runs. To overcome this problem, a structure identification method to set up a conceptual river water quality model has been developed. Instead of calculating the water quality concentrations at each water level and discharge node, the river branch is divided into conceptual reservoirs based on user information such as locations of interest and boundary inputs. These reservoirs are modelled as Plug Flow Reactors (PFR) and Continuously Stirred Tank Reactors (CSTR) to describe advection and diffusion processes. The same water quality transformation processes as in the detailed models are considered, but with adjusted residence times based on the hydrodynamic simulation results, calibrated against the detailed water quality simulation results. The developed approach allows for a much faster calculation time (a factor of 10^5) without significant loss of accuracy, making it feasible to perform time-demanding scenario runs.
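A hedged sketch of the CSTR building block such a conceptual model chains together, assuming a simple first-order decay for the transformation term (parameter names are illustrative):

    #include <vector>

    // One explicit-Euler step for a chain of CSTRs: each reservoir is fully
    // mixed, fed by the reservoir upstream, and degrades the pollutant at
    // first-order rate k [1/s]; q_over_v is the flow-through rate Q/V [1/s].
    void cstr_chain_step(std::vector<double>& c, double c_in,
                         double q_over_v, double k, double dt) {
        double upstream = c_in;                        // boundary concentration
        for (double& ci : c) {
            double dcdt = q_over_v * (upstream - ci)   // advective exchange
                          - k * ci;                    // transformation (decay)
            upstream = ci;                             // old value feeds downstream
            ci += dt * dcdt;
        }
    }

Each call advances the whole reservoir chain by one time step; because the state is a handful of concentrations rather than a full hydrodynamic grid, very long scenario runs become cheap, which is the source of the reported speed-up.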
Deterministic Stress Modeling of Hot Gas Segregation in a Turbine
NASA Technical Reports Server (NTRS)
Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger
1998-01-01
Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.
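The lumped-deterministic-stress idea parallels Reynolds averaging (notation assumed, not quoted from the paper): decompose the flow into a time mean and a deterministic unsteady part, u_i = ū_i + u_i″, and time-average the momentum flux,

\[
\frac{\partial}{\partial x_j}\left(\bar\rho\,\bar u_i\,\bar u_j\right)
= -\frac{\partial \bar p}{\partial x_i}
- \frac{\partial}{\partial x_j}\left(\overline{\rho\,u_i''\,u_j''}\right) + \cdots
\]

The last term is the deterministic-stress source: extracted once from an unsteady (here, inviscid) calculation and inserted into the steady solver, it drives the steady solution toward the time average of the unsteady one at a fraction of the cost.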
Sailfish: A flexible multi-GPU implementation of the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Januszewski, M.; Kostur, M.
2014-09-01
We present Sailfish, an open source fluid simulation package implementing the lattice Boltzmann method (LBM) on modern Graphics Processing Units (GPUs) using CUDA/OpenCL. We take a novel approach to GPU code implementation and use run-time code generation techniques and a high level programming language (Python) to achieve state of the art performance, while allowing easy experimentation with different LBM models and tuning for various types of hardware. We discuss the general design principles of the code, scaling to multiple GPUs in a distributed environment, as well as the GPU implementation and optimization of many different LBM models, both single component (BGK, MRT, ELBM) and multicomponent (Shan-Chen, free energy). The paper also presents results of performance benchmarks spanning the last three NVIDIA GPU generations (Tesla, Fermi, Kepler), which we hope will be useful for researchers working with this type of hardware and similar codes. Catalogue identifier: AETA_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETA_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Lesser General Public License, version 3. No. of lines in distributed program, including test data, etc.: 225864. No. of bytes in distributed program, including test data, etc.: 46861049. Distribution format: tar.gz. Programming language: Python, CUDA C, OpenCL. Computer: Any with an OpenCL or CUDA-compliant GPU. Operating system: No limits (tested on Linux and Mac OS X). RAM: Hundreds of megabytes to tens of gigabytes for typical cases. Classification: 12, 6.5. External routines: PyCUDA/PyOpenCL, Numpy, Mako, ZeroMQ (for multi-GPU simulations), scipy, sympy. Nature of problem: GPU-accelerated simulation of single- and multi-component fluid flows. Solution method: A wide range of relaxation models (LBGK, MRT, regularized LB, ELBM, Shan-Chen, free energy, free surface) and boundary conditions within the lattice Boltzmann method framework. Simulations can be run in single or double precision using one or more GPUs. Restrictions: The lattice Boltzmann method works for low Mach number flows only. Unusual features: The actual numerical calculations run exclusively on GPUs. The numerical code is built dynamically at run-time in CUDA C or OpenCL, using templates and symbolic formulas. The high-level control of the simulation is maintained by a Python process. Running time: Problem-dependent, typically minutes (for small cases or short simulations) to hours (large cases or long simulations).
Simulating an Exploding Fission-Bomb Core
NASA Astrophysics Data System (ADS)
Reed, Cameron
2016-03-01
A time-dependent desktop-computer simulation of the core of an exploding fission bomb (nuclear weapon) has been developed. The simulation models a core comprising a mixture of two isotopes: a fissile one (such as U-235) and an inert one (such as U-238) that captures neutrons and removes them from circulation. The user sets the enrichment percentage and scattering and fission cross-sections of the fissile isotope, the capture cross-section of the inert isotope, the number of neutrons liberated per fission, the number of 'initiator' neutrons, the radius of the core, and the neutron-reflection efficiency of a surrounding tamper. The simulation, which is predicated on ordinary kinematics, follows the three-dimensional motions and fates of neutrons as they travel through the core. Limitations of time and computer memory render it impossible to model a real-life core, but results of numerous runs clearly demonstrate the existence of a critical mass for a given set of parameters and the dramatic effects of enrichment and tamper efficiency on the growth (or decay) of the neutron population. The logic of the simulation will be described and results of typical runs will be presented and discussed.
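A drastically simplified sketch of the kinematic tracking loop such a simulation runs is given below; the placeholder cross-sections, the bare spherical core, and the absence of a tamper are assumptions for illustration, not the actual program:

    #include <cmath>
    #include <random>
    #include <vector>

    struct Neutron { double x, y, z; };

    // Advance one neutron generation in a toy bare core of radius R [cm].
    // sig_f, sig_s, sig_c: macroscopic fission/scatter/capture cross-sections
    // [1/cm]; nu: neutrons released per fission.
    std::vector<Neutron> step_generation(const std::vector<Neutron>& gen,
                                         double sig_f, double sig_s, double sig_c,
                                         int nu, double R, std::mt19937& rng) {
        const double PI = 3.14159265358979323846;
        std::uniform_real_distribution<double> u(0.0, 1.0);
        const double sig_t = sig_f + sig_s + sig_c;
        std::vector<Neutron> next;
        for (Neutron n : gen) {
            for (;;) {
                double path = -std::log(u(rng)) / sig_t;   // sampled free path
                double mu = 2.0 * u(rng) - 1.0, phi = 2.0 * PI * u(rng);
                double s = std::sqrt(1.0 - mu * mu);
                n.x += path * s * std::cos(phi);           // isotropic direction
                n.y += path * s * std::sin(phi);
                n.z += path * mu;
                if (n.x*n.x + n.y*n.y + n.z*n.z > R*R) break;  // leaked
                double r = u(rng) * sig_t;                 // pick interaction
                if (r < sig_f) {                           // fission
                    for (int k = 0; k < nu; ++k) next.push_back(n);
                    break;
                } else if (r < sig_f + sig_c) {
                    break;                                 // captured
                }                                          // else: scattered
            }
        }
        return next;  // generation-to-generation growth signals supercriticality
    }

Repeatedly calling this and watching whether the population grows or decays reproduces, in miniature, the critical-mass behavior the classroom simulation demonstrates.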
Automated Run-Time Mission and Dialog Generation
2007-03-01
The effect of carbohydrate ingestion on performance during a simulated soccer match.
Goedecke, Julia H; White, Nicholas J; Chicktay, Waheed; Mahomed, Hafsa; Durandt, Justin; Lambert, Michael I
2013-12-16
This study investigated how performance was affected after soccer players, in a postprandial state, ingested a 7% carbohydrate (CHO) solution compared to a placebo (0% CHO) during a simulated soccer match. Using a double-blind placebo-controlled design, 22 trained male league soccer players (age: 24 ± 7 years, wt: 73.4 ± 12.0 kg, VO2max: 51.8 ± 4.3 mL O2/kg/min) completed two trials, separated by 7 days, during which they ingested, in random order, 700 mL of either a 7% CHO or placebo drink during a simulated soccer match. Ratings of perceived exertion (RPE), agility and timed runs to fatigue were measured during the trials. Change in agility times was not altered by CHO vs. placebo ingestion (0.57 ± 1.48 vs. 0.66 ± 1.00, p = 0.81). Timed runs to fatigue were 381 ± 267 s vs. 294 ± 159 s for the CHO and placebo drinks, respectively (p = 0.11). Body mass modified the relationship between time to fatigue and drink ingestion (p = 0.02 for drink × body mass), such that lower body mass was associated with increased time to fatigue when the players ingested CHO, but not placebo. RPE values for the final stage of the simulated soccer match were 8.5 ± 1.7 and 8.6 ± 1.5 for the CHO and placebo drinks respectively (p = 0.87). The group data showed that the 7% CHO solution (49 g CHO) did not significantly improve performance during a simulated soccer match in league soccer players who had normal pre-match nutrition. However, when adjusting for body mass, increasing CHO intake was associated with improved time to fatigue during the simulated soccer match.
The Effect of Carbohydrate Ingestion on Performance during a Simulated Soccer Match
Goedecke, Julia H.; White, Nicholas J.; Chicktay, Waheed; Mahomed, Hafsa; Durandt, Justin; Lambert, Michael I.
2013-01-01
Aim: This study investigated how performance was affected after soccer players, in a postprandial state, ingested a 7% carbohydrate (CHO) solution compared to a placebo (0% CHO) during a simulated soccer match. Methods: Using a double-blind placebo-controlled design, 22 trained male league soccer players (age: 24 ± 7 years, wt: 73.4 ± 12.0 kg, VO2max: 51.8 ± 4.3 mL O2/kg/min) completed two trials, separated by 7 days, during which they ingested, in random order, 700 mL of either a 7% CHO or placebo drink during a simulated soccer match. Ratings of perceived exertion (RPE), agility and timed runs to fatigue were measured during the trials. Results: Change in agility times was not altered by CHO vs. placebo ingestion (0.57 ± 1.48 vs. 0.66 ± 1.00, p = 0.81). Timed runs to fatigue were 381 ± 267 s vs. 294 ± 159 s for the CHO and placebo drinks, respectively (p = 0.11). Body mass modified the relationship between time to fatigue and drink ingestion (p = 0.02 for drink × body mass), such that lower body mass was associated with increased time to fatigue when the players ingested CHO, but not placebo. RPE values for the final stage of the simulated soccer match were 8.5 ± 1.7 and 8.6 ± 1.5 for the CHO and placebo drinks respectively (p = 0.87). Conclusions: The group data showed that the 7% CHO solution (49 g CHO) did not significantly improve performance during a simulated soccer match in league soccer players who had normal pre-match nutrition. However, when adjusting for body mass, increasing CHO intake was associated with improved time to fatigue during the simulated soccer match. PMID:24352094
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.
2013-10-15
We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. In parallel Monte Carlo, both domain-replicated and domain-decomposed simulations will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy, by rounding double precision numbers to fewer significant digits. This integer approach, and other extended and reduced precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
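A minimal sketch of the integer-tally idea cited from [1] (the resolution parameter and names are illustrative): quantize each contribution and sum 64-bit integers, which makes the total independent of summation order because integer addition, unlike floating-point addition, is associative:

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Order-independent tally: round each score to a fixed resolution and
    // accumulate as integers; acc * resolution recovers the estimate.
    int64_t tally(const std::vector<double>& scores, double resolution) {
        int64_t acc = 0;
        for (double s : scores)
            acc += static_cast<int64_t>(std::llround(s / resolution));
        return acc;
    }

The price is exactly the one described above: every contribution is rounded to the chosen resolution, trading accuracy for bit-for-bit reproducibility across particle orderings and domain decompositions.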
Understanding resonance graphs using Easy Java Simulations (EJS) and why we use EJS
NASA Astrophysics Data System (ADS)
Wee, Loo Kang; Lee, Tat Leong; Chew, Charles; Wong, Darren; Tan, Samuel
2015-03-01
This paper reports a computer model simulation created using Easy Java Simulation (EJS) for learners to visualize how the steady-state amplitude of a driven oscillating system varies with the frequency of the periodic driving force. The simulation shows N = 100 identical spring-mass systems being subjected to (1) a periodic driving force of equal amplitude but different driving frequencies, and (2) different amounts of damping. The simulation aims to create a visually intuitive way of understanding how the series of amplitude versus driving frequency graphs is obtained, by showing how the displacement of the system changes over time as it transits from the transient to the steady state. A suggested 'how to use' guide is added to help educators and students in their teaching and learning, in which we explain the time conditions, derived from the theoretical steady-state equation, at which the model begins recording maximum amplitudes so that they closely match the theoretical equation, as well as the steps to collect runs with different degrees of damping. We also discuss two of the design features of our computer model: displaying the instantaneous oscillation together with the achieved steady-state amplitudes, and explicitly overlaying the world view with the scientific representation for runs with different degrees of damping. Three advantages of using EJS include: (1) open source code and Creative Commons attribution licenses for scaling up interactively engaging educational practices; (2) the models made can run on almost any device, including Android and iOS; and (3) it allows the redefinition of physics educational practices through computer modeling.
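The steady-state curve the N = 100 systems trace out is the standard driven, damped oscillator amplitude (symbols assumed: driving amplitude F₀, mass m, natural frequency ω₀, damping constant γ = b/m):

\[
A(\omega) = \frac{F_0/m}{\sqrt{\left(\omega_0^{2}-\omega^{2}\right)^{2} + \left(\gamma\,\omega\right)^{2}}},
\]

so lighter damping produces the taller, narrower resonance peak that the overlaid damping runs make visible.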
LUXSim: A component-centric approach to low-background simulations
Akerib, D. S.; Bai, X.; Bedikian, S.; ...
2012-02-13
Geant4 has been used throughout the nuclear and high-energy physics community to simulate energy depositions in various detectors and materials. These simulations have mostly been run with a source beam outside the detector. In the case of low-background physics, however, a primary concern is the effect on the detector of radioactivity inherent in the detector parts themselves. From this standpoint, there is no single source or beam, but rather a collection of sources with potentially complicated spatial extent. LUXSim is a simulation framework used by the LUX collaboration that takes a component-centric approach to event generation and recording. A new set of classes allows for multiple radioactive sources to be set within any number of components at run time, with the entire collection of sources handled within a single simulation run. Various levels of information can also be recorded from the individual components, with these record levels also being set at run time. This flexibility in both source generation and information recording is possible without the need to recompile, reducing the complexity of code management and the proliferation of versions. Within the code itself, casting geometry objects within this new set of classes rather than as the default Geant4 classes automatically extends this flexibility to every individual component. No additional work is required on the part of the developer, reducing development time and increasing confidence in the results. Here, we describe the guiding principles behind LUXSim, detail some of its unique classes and methods, and give examples of usage.
Relaxation estimation of RMSD in molecular dynamics immunosimulations.
Schreiner, Wolfgang; Karch, Rudolf; Knapp, Bernhard; Ilieva, Nevena
2012-01-01
Molecular dynamics simulations have to be sufficiently long to draw reliable conclusions. However, no method exists to prove that a simulation has converged. We suggest the method of "lagged RMSD-analysis" as a tool to judge if an MD simulation has not yet run long enough. The analysis is based on RMSD values between pairs of configurations separated by variable time intervals Δt. Unless RMSD(Δt) has reached a stationary shape, the simulation has not yet converged.
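In symbols, the proposed diagnostic averages the RMSD over all pairs of configurations a fixed lag apart (notation assumed):

\[
\mathrm{RMSD}(\Delta t) \;=\; \Big\langle\, \mathrm{RMSD}\big(\mathbf{x}(t),\, \mathbf{x}(t+\Delta t)\big) \,\Big\rangle_{t},
\]

and only once this curve stops changing shape as the trajectory is extended can convergence be entertained; a curve still rising with Δt proves the run is too short.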
Simulating Microbial Community Patterning Using Biocellion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Seung-Hwa; Kahan, Simon H.; Momeni, Babak
2014-04-17
Mathematical modeling and computer simulation are important tools for understanding complex interactions between cells and their biotic and abiotic environment: similarities and differences between modeled and observed behavior provide the basis for hypothesis formation. Momeni et al. [5] investigated pattern formation in communities of yeast strains engaging in different types of ecological interactions, comparing the predictions of mathematical modeling and simulation to actual patterns observed in wet-lab experiments. However, simulations of millions of cells in a three-dimensional community are extremely time-consuming. One simulation run in MATLAB may take a week or longer, inhibiting exploration of the vast space of parameter combinations and assumptions. Improving the speed, scale, and accuracy of such simulations facilitates hypothesis formation and expedites discovery. Biocellion is a high performance software framework for accelerating discrete agent-based simulation of biological systems with millions to trillions of cells. Simulations of comparable scale and accuracy to those taking a week of computer time using MATLAB require just hours using Biocellion on a multicore workstation. Biocellion further accelerates large scale, high resolution simulations on cluster computers by partitioning the work to run on multiple compute nodes. Biocellion targets computational biologists who have mathematical modeling backgrounds and basic C++ programming skills. This chapter describes the necessary steps to adapt Momeni et al.'s original model to the Biocellion framework as a case study.
Pilot-in-the Loop CFD Method Development
2016-10-20
State University. All software supporting piloted simulations must run at real time speeds or faster. This requirement drives the number of...objects in the environment. In turn, this flowfield affects the local aerodynamics of the main rotor blade sections, affecting blade air loads, and...model, empirical models of ground effect and rotor / airframe interactions) are disabled when running in fully coupled mode, so as to not “double count
Dshell++: A Component Based, Reusable Space System Simulation Framework
NASA Technical Reports Server (NTRS)
Lim, Christopher S.; Jain, Abhinandan
2009-01-01
This paper describes the multi-mission Dshell++ simulation framework for high fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.
Crashworthiness simulations with DYNA3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schauer, D.A.; Hoover, C.G.; Kay, G.J.
1996-04-01
Current progress in parallel algorithm research and applications in vehicle crash simulation is described for the explicit finite element algorithms in DYNA3D. Problem partitioning methods and parallel algorithms for contact at material interfaces are the two challenging algorithm research problems that are addressed. Two prototype parallel contact algorithms have been developed for treating the cases of local and arbitrary contact. Demonstration problems for local contact are crashworthiness simulations with 222 locally defined contact surfaces and a vehicle/barrier collision modeled with arbitrary contact. A simulation of crash tests conducted for a vehicle impacting a U-channel small sign post embedded in soil has been run on both the serial and parallel versions of DYNA3D. A significant reduction in computational time has been observed when running these problems on the parallel version. However, to achieve maximum efficiency, complex problems must be appropriately partitioned, especially when contact dominates the computation.
Constructive Engineering of Simulations
NASA Technical Reports Server (NTRS)
Snyder, Daniel R.; Barsness, Brendan
2011-01-01
Joint experimentation that investigates sensor optimization, re-tasking and management has far-reaching implications for Department of Defense, Interagency and multinational partners. An adaptation of traditional human-in-the-loop (HITL) Modeling and Simulation (M&S) was one approach used to generate the findings necessary to derive and support these implications. Here, an entity-based simulation was re-engineered to run on USJFCOM's High Performance Computer (HPC). The HPC was used to support the vast number of constructive runs necessary to produce statistically significant data in a timely manner. Then, from the resulting sensitivity analysis, event designers blended the necessary visualization and decision making components into a synthetic environment for the HITL simulation trials. These trials focused on areas where human decision making had the greatest impact on the sensor investigations. Thus, this paper discusses how re-engineering existing M&S for constructive applications can positively influence the design of an associated HITL experiment.
Yan, Xuedong; Liu, Yang; Xu, Yongcun
2015-01-01
Drivers' incorrect decisions about crossing signalized intersections at the onset of the yellow change may lead to red light running (RLR), and RLR crashes result in substantial numbers of severe injuries and much property damage. In recent years, some Intelligent Transport System (ITS) concepts have focused on reducing RLR by alerting drivers that they are about to violate the signal. The objective of this study is to conduct an experimental investigation of the effectiveness of a red light violation warning system using a voice message. In this study, the prototype concept of the RLR audio warning system was modeled and tested in a high-fidelity driving simulator. According to the concept, when a vehicle is approaching an intersection at the onset of yellow and the time to the intersection is longer than the yellow interval, the in-vehicle warning system activates the following audio message: "The red light is impending. Please decelerate!" The intent of the warning design is to encourage drivers who cannot clear an intersection during the yellow change interval to stop at the intersection. The experimental results showed that the warning message could decrease red light running violations by 84.3 percent. Based on the logistic regression analyses, drivers without a warning were about 86 times more likely to make go decisions at the onset of yellow and about 15 times more likely to run red lights than those with a warning. Additionally, it was found that the audio warning message could significantly reduce RLR severity, because the red-entry times of RLR drivers without a warning were longer than those with a warning. This driving simulator study showed a promising effect of the audio in-vehicle warning message on reducing RLR violations and crashes. It is worthwhile to further develop the proposed technology in field applications.
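Stripped to its core, the triggering rule described in the concept is a time-to-intersection test at yellow onset; the following sketch is illustrative only (the handling in the real system is surely richer):

    // Trigger the in-vehicle audio warning at yellow onset if the vehicle
    // cannot clear the intersection before the red (time-to-intersection
    // exceeds the yellow interval).
    bool trigger_rlr_warning(double distance_m, double speed_mps, double yellow_s) {
        if (speed_mps <= 0.0) return false;       // already stopped: no warning
        return distance_m / speed_mps > yellow_s; // would enter on red -> warn
    }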
Evaluating Real-Time Platforms for Aircraft Prognostic Health Management Using Hardware-In-The-Loop
2008-08-01
obtained when using HIL and a simulated load. Initially, noticeable differences are seen when comparing the results from each real - time operating system . However...same model in native Simulink. These results show that each real - time operating system can be configured to accurately run transient Simulink
Fujii, Keisuke; Shinya, Masahiro; Yamashita, Daichi; Kouzaki, Motoki; Oda, Shingo
2014-01-01
We previously estimated the timing at which ball game defenders detect, through visual input, the information relevant for reacting to an attacker's running direction after a cutting manoeuvre, called the cue timing. The purpose of this study was to investigate what specific information is relevant for defenders, and how defenders process this information to decide on their opponents' running direction. In this study, we hypothesised that defenders extract information regarding the position and velocity of the attacker's centre of mass (CoM) and contact foot. We used a model which simulates the future trajectory of the opponent's CoM based upon an inverted pendulum movement. The hypothesis was tested by comparing the observed defenders' cue timing, the model-estimated cue timing using the inverted pendulum model (IPM cue timing) and the cue timing using only the current CoM position (CoM cue timing). The IPM cue timing was defined as the time at which the simulated pendulum falls leftward or rightward, given the initial values for position and velocity of the CoM and the contact foot at that time. The model-estimated IPM cue timing and the empirically observed defenders' cue timing were comparable in median value and were significantly correlated, whereas the CoM cue timing was significantly more delayed than the IPM and defenders' cue timings. Based on these results, we discuss the possibility that defenders may be able to anticipate the future direction of an attacker by forwardly simulating inverted pendulum movement.
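A hedged sketch of the forward simulation behind the IPM cue timing, using a linearized inverted pendulum (the linearization and names are assumptions, not the authors' implementation): for lateral CoM offset x from the contact foot and velocity v, the dynamics x'' = (g/L) x have the solution x(t) = A e^{ωt} + B e^{−ωt} with ω = √(g/L), and the sign of the growing mode A = (x + v/ω)/2 decides the eventual fall direction:

    #include <cmath>

    // Predict the fall direction of a linearized inverted pendulum pivoting
    // about the contact foot: +1 = falls rightward, -1 = falls leftward.
    int predicted_direction(double x, double v, double g = 9.81, double L = 1.0) {
        double omega = std::sqrt(g / L);
        double growing_mode = 0.5 * (x + v / omega);  // coefficient of e^{omega t}
        if (growing_mode == 0.0) return x >= 0.0 ? 1 : -1;  // knife-edge balance
        return growing_mode > 0.0 ? 1 : -1;
    }

The cue timing is then the earliest instant at which this prediction becomes unambiguous given the attacker's measured CoM and foot states, which is what the study compares against the defenders' observed reaction timing.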
Uptake and storage of anthropogenic CO2 in the pacific ocean estimated using two modeling approaches
NASA Astrophysics Data System (ADS)
Li, Yangchun; Xu, Yongfu
2012-07-01
A basin-wide ocean general circulation model (OGCM) of the Pacific Ocean is employed to estimate the uptake and storage of anthropogenic CO2 using two different simulation approaches. One simulation (named BIO) makes use of a carbon model with biological processes and full thermodynamic equations to calculate the surface water partial pressure of CO2, whereas the other simulation (named PTB) makes use of a perturbation approach to calculate the surface water partial pressure of anthropogenic CO2. The results from the two simulations agree well with observation-based estimates in the most important aspects of the vertical distribution as well as the total inventory of anthropogenic carbon. The storage of anthropogenic carbon from BIO is closer to the observation-based estimate than that from PTB. The Revelle factor in 1994 obtained in BIO is generally larger than that obtained in PTB in the whole Pacific, except for the subtropical South Pacific. This, to a large extent, leads to the difference in the surface anthropogenic CO2 concentration between the two runs. The relative difference in the annual uptake between the two runs is almost constant during the integration after 1850. This is probably not caused by dissolved inorganic carbon (DIC), but rather by a factor independent of time.
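For context, the Revelle factor referred to above is the standard ocean carbonate buffer factor (definition, not quoted from the paper):

\[
R \;=\; \frac{\Delta p\mathrm{CO}_2 / p\mathrm{CO}_2}{\Delta \mathrm{DIC} / \mathrm{DIC}},
\]

so a larger R means the surface water's pCO₂ rises proportionally more for a given fractional increase in DIC, i.e., the water takes up less additional CO₂; this is how the Revelle-factor difference between BIO and PTB translates into their different surface anthropogenic CO₂ concentrations.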
Future directions in flight simulation: A user perspective
NASA Technical Reports Server (NTRS)
Jackson, Bruce
1993-01-01
Langley Research Center was an early leader in simulation technology, including a special emphasis on space vehicle simulations such as the rendezvous and docking simulator for the Gemini program and the lunar landing simulator used before Apollo. In more recent times, Langley operated the first synergistic six-degree-of-freedom motion platform (the Visual Motion Simulator, or VMS) and developed the first dual-dome air combat simulator, the Differential Maneuvering Simulator (DMS). Each Langley simulator was developed more or less independently from the others, with different programming support. At the present time, the various simulation cockpits, while supported by the same host computer system, run dissimilar software. The majority of recent investments in Langley's simulation facilities have been hardware procurements: host processors, visual systems, and most recently, an improved motion system. Investments in software improvements, however, have not been of the same order.
Operating system for a real-time multiprocessor propulsion system simulator. User's manual
NASA Technical Reports Server (NTRS)
Cole, G. L.
1985-01-01
The NASA Lewis Research Center is developing and evaluating experimental hardware and software systems to help meet future needs for real-time, high-fidelity simulations of air-breathing propulsion systems. Specifically, the real-time multiprocessor simulator project focuses on the use of multiple microprocessors to achieve the required computing speed and accuracy at relatively low cost. Operating systems for such hardware configurations are generally not available. A real time multiprocessor operating system (RTMPOS) that supports a variety of multiprocessor configurations was developed at Lewis. With some modification, RTMPOS can also support various microprocessors. RTMPOS, by means of menus and prompts, provides the user with a versatile, user-friendly environment for interactively loading, running, and obtaining results from a multiprocessor-based simulator. The menu functions are described and an example simulation session is included to demonstrate the steps required to go from the simulation loading phase to the execution phase.
Economic optimization software applied to JFK airport heating and cooling plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gay, R.R.; McCoy, L.
This paper describes the on-line economic optimization routine developed by Enter Software, Inc. for application at the heating and cooling plant for the JFK International Airport near New York City. The objective of the economic optimization is to find the optimum plant configuration (which gas turbines to run, power levels of each gas turbine, duct firing levels, which auxiliary water heaters to run, which electric chillers to run, and which absorption chillers to run) that produces maximum net income at the plant as plant loads and prices vary. The routines also include a planner which runs a series of optimizations over multiple plant configurations to simulate the varying plant operating conditions, for the purpose of predicting the overall plant results over a period of time.
Hulme, Adam; Thompson, Jason; Nielsen, Rasmus Oestergaard; Read, Gemma J M; Salmon, Paul M
2018-06-18
There have been recent calls for the application of the complex systems approach in sports injury research. However, beyond theoretical description and static models of complexity, little progress has been made towards formalising this approach in a way that is practical for sports injury scientists and clinicians. Therefore, our objective was to use a computational modelling method and develop a dynamic simulation in sports injury research. Agent-based modelling (ABM) was used to model the occurrence of sports injury in a synthetic athlete population. The ABM was developed based on sports injury causal frameworks and was applied in the context of distance running-related injury (RRI). Using the acute:chronic workload ratio (ACWR), we simulated the dynamic relationship between changes in weekly running distance and RRI through the manipulation of various 'athlete management tools'. The findings confirmed that building weekly running distances over time, even within the reported ACWR 'sweet spot', will eventually result in RRI as athletes reach and surpass their individual physical workload limits. Introducing training-related error into the simulation and modelling a 'hard ceiling' dynamic resulted in a higher RRI incidence proportion across the population at higher absolute workloads. The presented simulation offers a practical starting point for applying more sophisticated computational models that can account for the complex nature of sports injury aetiology. Alongside traditional forms of scientific inquiry, the use of ABM and other simulation-based techniques could be considered as a complementary and alternative methodological approach in sports injury research.
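A minimal sketch of the ACWR computation that drives such a simulation, using the common 1-week acute / 4-week chronic convention (the band limits quoted in the comments are the oft-cited values, not thresholds from this paper):

    #include <numeric>
    #include <vector>

    // Acute:chronic workload ratio from weekly running distances:
    // acute = most recent week, chronic = mean of the last four weeks.
    double acwr(const std::vector<double>& weekly_km) {
        if (weekly_km.size() < 4) return 0.0;  // not enough training history yet
        double acute = weekly_km.back();
        double chronic =
            std::accumulate(weekly_km.end() - 4, weekly_km.end(), 0.0) / 4.0;
        return chronic > 0.0 ? acute / chronic : 0.0;
    }

    // The often-cited 'sweet spot' keeps this ratio within roughly [0.8, 1.3].
    // Note the ratio says nothing about absolute load, which is why simulated
    // athletes can stay in the band yet still exceed their individual workload
    // ceiling and sustain an RRI.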
Penetration of n-hexadecane and water into wood under conditions simulating catastrophic floods
Ganna Baglayeva; Wayne S. Seames; Charles R. Frihart; Jane O' Dell; Evguenii I. Kozliak
2017-01-01
To simulate fuel oil spills occurring during catastrophic floods, short-term absorption of two chemicals, n-hexadecane (representative of semivolatile organic compounds in fuel oil) and water, into southern yellow pine was gravimetrically monitored as a function of time at ambient conditions. Different scenarios were run on the basis of (1) the...
The viability of ADVANTG deterministic method for synthetic radiography generation
NASA Astrophysics Data System (ADS)
Bingham, Andrew; Lee, Hyoung K.
2018-07-01
Fast simulation techniques for generating synthetic radiographic images at high resolution are helpful when new radiation imaging systems are designed. However, the standard stochastic approach requires lengthy run times, with poorer statistics at higher resolution. The viability of a deterministic approach to synthetic radiography image generation was therefore explored, with the aim of quantifying the computational time decrease relative to the stochastic method. ADVANTG was compared to MCNP in multiple scenarios, including a small radiography system prototype, to simulate high resolution radiography images. Using the ADVANTG deterministic code to simulate radiography images, the computational time was found to decrease by a factor of 10 to 13 compared to the MCNP stochastic approach, while retaining image quality.
Comparing nonlinear MHD simulations of low-aspect-ratio RFPs to RELAX experiments
NASA Astrophysics Data System (ADS)
McCollam, K. J.; den Hartog, D. J.; Jacobson, C. M.; Sovinec, C. R.; Masamune, S.; Sanpei, A.
2016-10-01
Standard reversed-field pinch (RFP) plasmas provide a nonlinear dynamical system as a validation domain for numerical MHD simulation codes, with applications in general toroidal confinement scenarios including tokamaks. Using the NIMROD code, we simulate the nonlinear evolution of RFP plasmas similar to those in the RELAX experiment. The experiment's modest Lundquist numbers S (as low as a few times 10⁴) make closely matching MHD simulations tractable given present computing resources. Its low aspect ratio (~2) motivates a comparison study using cylindrical and toroidal geometries in NIMROD. We present initial results from nonlinear single-fluid runs at S = 10⁴ for both geometries and a range of equilibrium parameters, which preliminarily show that the magnetic fluctuations are roughly similar between the two geometries and between simulation and experiment, though there appear to be some qualitative differences in their temporal evolution. Runs at higher S are planned. This work is supported by the U.S. DOE and by the Japan Society for the Promotion of Science.
Computer simulation of multi-rigid body dynamics and control
NASA Technical Reports Server (NTRS)
Swaminadham, M.; Moon, Young I.; Venkayya, V. B.
1990-01-01
The objective is to set up and analyze benchmark problems on multibody dynamics and to verify the predictions of two multibody computer simulation codes. TREETOPS and DISCOS have been used to run three example problems - a one degree-of-freedom spring-mass-dashpot system, an inverted pendulum system, and a triple pendulum. To study the dynamics and control interaction, an inverted planar pendulum with an external body force and a torsional control spring was modeled as a hinge-connected two-rigid-body system. TREETOPS and DISCOS were used to carry out the time-history simulation of this problem. System state-space variables and their time derivatives from the two simulation codes were compared.
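The first benchmark, a single degree-of-freedom spring-mass-dashpot, is easy to reproduce as a reference case. A minimal sketch in Python (parameter values are illustrative, not taken from the paper):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Single degree-of-freedom system: m*x'' + c*x' + k*x = 0
    m, c, k = 1.0, 0.4, 25.0                      # illustrative values

    def rhs(t, y):
        x, v = y                                  # position and velocity
        return [v, -(c * v + k * x) / m]

    sol = solve_ivp(rhs, (0.0, 10.0), [0.1, 0.0], max_step=0.01)

    # Cross-check the time history against the analytical damped frequency.
    wd = np.sqrt(k / m - (c / (2 * m)) ** 2)
    print(f"damped natural frequency: {wd / (2 * np.pi):.3f} Hz")
    print(f"final displacement: {sol.y[0, -1]:.5f} m")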
Running Parallel Discrete Event Simulators on Sierra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, P. D.; Jefferson, D. R.
2015-12-03
In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John
Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equation on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature, using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds Average Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).
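The Lagrangian dispersion component is described only at a high level above; the generic ingredient of such models is a particle random walk superimposed on the mean wind. A minimal sketch under simplifying assumptions (uniform wind, constant eddy diffusivity, ground reflection), not Aeolus's actual scheme:

    import numpy as np

    rng = np.random.default_rng(0)
    n, dt, nsteps = 10_000, 1.0, 600              # particles, step [s], steps
    u = np.array([3.0, 0.5, 0.0])                 # assumed uniform wind [m/s]
    K = 5.0                                       # assumed eddy diffusivity [m^2/s]

    # Release all particles at the origin; each step advects by the mean wind
    # and adds a Gaussian random walk with std = sqrt(2*K*dt) per component.
    x = np.zeros((n, 3))
    for _ in range(nsteps):
        x += u * dt + rng.normal(0.0, np.sqrt(2.0 * K * dt), size=(n, 3))
        x[:, 2] = np.abs(x[:, 2])                 # reflect particles at the ground

    print("plume centroid [m]:", x.mean(axis=0))
    print("lateral spread [m]:", x[:, 1].std())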
Zimmerman, M I; Bowman, G R
2016-01-01
Molecular dynamics (MD) simulations are a powerful tool for understanding enzymes' structures and functions with full atomistic detail. These physics-based simulations model the dynamics of a protein in solution and store snapshots of its atomic coordinates at discrete time intervals. Analysis of the snapshots from these trajectories provides thermodynamic and kinetic properties such as conformational free energies, binding free energies, and transition times. Unfortunately, simulating biologically relevant timescales with brute force MD simulations requires enormous computing resources. In this chapter we detail a goal-oriented sampling algorithm, called fluctuation amplification of specific traits, that quickly generates pertinent thermodynamic and kinetic information by using an iterative series of short MD simulations to explore the vast depths of conformational space. © 2016 Elsevier Inc. All rights reserved.
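The abstract describes the FAST algorithm only in outline. The schematic below sketches the general shape of such a goal-oriented adaptive-sampling loop: rank discovered states by a reward balancing progress toward a structural target (exploitation) against visit counts (exploration), then seed new short simulations from the top-ranked states. The reward function and the stand-in 'MD' step are illustrative assumptions, not the authors' exact method:

    import numpy as np

    rng = np.random.default_rng(1)
    target = 10.0                                  # structural trait to amplify

    def run_short_md(seed_state):
        # Stand-in for a short MD run plus state assignment: a 1-D random walk.
        return list(seed_state + rng.normal(0.0, 1.0, size=5).cumsum())

    states = [0.0]
    counts = {0.0: 1}
    for generation in range(10):
        # Reward = progress toward the target (exploitation) plus a bonus
        # for rarely visited states (exploration); rank best-first.
        ranked = sorted(states,
                        key=lambda s: -abs(s - target) + 1.0 / counts[s],
                        reverse=True)
        for seed in ranked[:3]:                    # restart from the best seeds
            for s in run_short_md(seed):
                states.append(s)
                counts[s] = counts.get(s, 0) + 1

    print(f"closest approach to target: {min(abs(s - target) for s in states):.2f}")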
MDGRAPE-4: a special-purpose computer system for molecular dynamics simulations.
Ohmura, Itta; Morimoto, Gentaro; Ohno, Yousuke; Hasegawa, Aki; Taiji, Makoto
2014-08-06
We are developing the MDGRAPE-4, a special-purpose computer system for molecular dynamics (MD) simulations. MDGRAPE-4 is designed to achieve strong scalability for protein MD simulations through the integration of general-purpose cores, dedicated pipelines, memory banks and network interfaces (NIFs) to create a system on chip (SoC). Each SoC has 64 dedicated pipelines that are used for non-bonded force calculations and run at 0.8 GHz. Additionally, it has 65 Tensilica Xtensa LX cores with single-precision floating-point units that are used for other calculations and run at 0.6 GHz. At peak performance levels, each SoC can evaluate 51.2 G interactions per second. It also has 1.8 MB of embedded shared memory banks and six network units with a peak bandwidth of 7.2 GB s⁻¹ for the three-dimensional torus network. The system consists of 512 (8×8×8) SoCs in total, which are mounted on 64 node modules with eight SoCs each. Optical transmitters/receivers are used for internode communication. The expected maximum power consumption is 50 kW. While the MDGRAPE-4 software is still being improved, we plan to run MD simulations on MDGRAPE-4 in 2014. The MDGRAPE-4 system will enable long-time molecular dynamics simulations of small systems. It is also useful for multiscale molecular simulations, where the particle simulation parts often become bottlenecks.
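The quoted peak rate is consistent with each of the 64 pipelines retiring one pairwise interaction per clock cycle (an assumption; the abstract does not state the per-cycle throughput explicitly):

    64 pipelines × 0.8 × 10⁹ cycles s⁻¹ × 1 interaction per cycle = 51.2 × 10⁹ interactions s⁻¹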
Software for simulation of a computed tomography imaging spectrometer using optical design software
NASA Astrophysics Data System (ADS)
Spuhler, Peter T.; Willer, Mark R.; Volin, Curtis E.; Descour, Michael R.; Dereniak, Eustace L.
2000-11-01
Our imaging spectrometer simulation software, known as Eikon, should improve and speed up the design of a Computed Tomography Imaging Spectrometer (CTIS). Eikon uses existing raytracing software to simulate a virtual instrument. It enables designers to virtually run through the design, calibration and data acquisition of an instrument, saving significant cost and time. We anticipate that Eikon simulations will improve future designs of CTIS by allowing engineers to explore more instrument options.
Relaxation Estimation of RMSD in Molecular Dynamics Immunosimulations
Schreiner, Wolfgang; Karch, Rudolf; Knapp, Bernhard; Ilieva, Nevena
2012-01-01
Molecular dynamics simulations have to be sufficiently long to draw reliable conclusions. However, no method exists to prove that a simulation has converged. We suggest the method of “lagged RMSD-analysis” as a tool to judge if an MD simulation has not yet run long enough. The analysis is based on RMSD values between pairs of configurations separated by variable time intervals Δt. Unless RMSD(Δt) has reached a stationary shape, the simulation has not yet converged. PMID:23019425
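Given a trajectory whose frames are already superposed on a reference, RMSD(Δt) can be computed directly. A minimal sketch in Python (the rotational fitting step of a full analysis is assumed to have been done beforehand):

    import numpy as np

    def lagged_rmsd(traj, lag):
        # Mean RMSD over all frame pairs separated by `lag` frames.
        # traj: (n_frames, n_atoms, 3), assumed already superposed.
        d = traj[lag:] - traj[:-lag]
        return np.sqrt((d ** 2).sum(axis=2).mean(axis=1)).mean()

    rng = np.random.default_rng(0)
    # Toy 'trajectory': a slow random walk of 100 atoms over 500 frames.
    traj = 0.01 * rng.normal(size=(500, 100, 3)).cumsum(axis=0)
    for lag in (1, 10, 50, 200):
        print(f"RMSD(dt={lag:>3}) = {lagged_rmsd(traj, lag):.3f}")

For this toy random-walk trajectory the lagged RMSD keeps growing with Δt, which by the authors' criterion would flag the 'simulation' as not yet converged.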
Quantifying Uncertainty from Computational Factors in Simulations of a Model Ballistic System
2017-08-01
Comparison of runs 6–9 with the corresponding simulations from the stop time study (Tables 22 and 23) shows that the restart series produces...
Accelerating Molecular Dynamic Simulation on Graphics Processing Units
Friedrichs, Mark S.; Eastman, Peter; Vaidyanathan, Vishal; Houston, Mike; Legrand, Scott; Beberg, Adam L.; Ensign, Daniel L.; Bruns, Christopher M.; Pande, Vijay S.
2009-01-01
We describe a complete implementation of all-atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core. PMID:19191337
Numerical Simulation of Nonperiodic Rail Operation Diagram Characteristics
Qian, Yongsheng; Wang, Bingbing; Zeng, Junwei; Wang, Xin
2014-01-01
This paper uses a cellular automata (CA) model to simulate train operation under a four-aspect colour-light signalling system and to obtain the nonperiodic operation diagram of mixed passenger and freight tracks. The models capture how a train is prevented from colliding with the one ahead when stopping and restarting, track the real-time changes in train speed and displacement, and record the current state of each train at departure and arrival. Finally, the model produces train diagrams simulating operation at different passenger/freight train ratios and analyzes parameters of the running process such as time, speed, through capacity, departure interval, and number of departures. PMID:25435863
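The paper's CA rules for four-aspect signalling are not reproduced in the abstract; the generic ingredient of such models is a Nagel-Schreckenberg-style update in which each train accelerates, brakes to the gap ahead, and occasionally dawdles. A minimal single-track sketch in Python (signalling and station stops omitted):

    import numpy as np

    rng = np.random.default_rng(0)
    L, n_trains, v_max, p_slow = 200, 6, 5, 0.1   # cells, trains, max speed, dawdle prob.

    pos = np.sort(rng.choice(L, size=n_trains, replace=False))
    vel = np.zeros(n_trains, dtype=int)

    for step in range(100):
        gap = (np.roll(pos, -1) - pos - 1) % L    # free cells to the train ahead
        vel = np.minimum(vel + 1, v_max)          # accelerate toward max speed
        vel = np.minimum(vel, gap)                # brake: never enter occupied cells
        dawdle = rng.random(n_trains) < p_slow
        vel[dawdle] = np.maximum(vel[dawdle] - 1, 0)   # random slowdowns
        pos = (pos + vel) % L                     # advance around the ring

    print("mean speed after 100 steps:", vel.mean())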
Sensitivity study of a dynamic thermodynamic sea ice model
NASA Astrophysics Data System (ADS)
Holland, David M.; Mysak, Lawrence A.; Manak, Davinder K.; Oberhuber, Josef M.
1993-02-01
A numerical simulation of the seasonal sea ice cover in the Arctic Ocean and the Greenland, Iceland, and Norwegian seas is presented. The sea ice model is extracted from Oberhuber's (1990) coupled sea ice-mixed layer-isopycnal general circulation model and is written in spherical coordinates. The advantage of such a model over previous sea ice models is that it can be easily coupled to either global atmospheric or ocean general circulation models written in spherical coordinates. In this model, the thermodynamics are a modification of that of Parkinson and Washington (1979), while the dynamics use the full Hibler (1979) viscous-plastic rheology. Monthly thermodynamic and dynamic forcing fields for the atmosphere and ocean are specified. The simulations of the seasonal cycle of ice thickness, compactness, and velocity, for a control set of parameters, compare favorably with the known seasonal characteristics of these fields. A sensitivity study of the control simulation of the seasonal sea ice cover is presented. The sensitivity runs are carried out under three different themes, namely, numerical conditions, parameter values, and physical processes. This last theme refers to experiments in which physical processes are either newly added or completely removed from the model. Approximately 80 sensitivity runs have been performed in which a change from the control run environment has been implemented. Comparisons have been made between the control run and a particular sensitivity run based on time series of the seasonal cycle of the domain-averaged ice thickness, compactness, areal coverage, and kinetic energy. In addition, spatially varying fields of ice thickness, compactness, velocity, and surface temperature for each season are presented for selected experiments. A brief description and discussion of the more interesting experiments are presented. The simulation of the seasonal cycle of Arctic sea ice cover is shown to be robust.
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.
2016-01-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.
Whitelaw, Claire; Calvert, Katrina; Epee, Mathias
2018-02-01
Obstetric emergency simulation training is an evidence-based intervention for the reduction of perinatal and maternal morbidity. In Western Australia, obstetric emergency training has been run using the In Time course since 2006. The study aimed to determine whether the provision of In Time train-the-trainer courses to outer-metropolitan, rural and remote units in Western Australia had led to sustained ongoing training in those units. Ten years following the introduction of the course, we performed a survey to examine which units were continuing to run In Time, the perceived benefits in units still utilising In Time, and the barriers to training in units that had discontinued. A link to an online survey was sent to the units where In Time training had occurred. Telephone enquiries were additionally used to ensure a good response rate. The survey response rate was 100%. Six of the 11 units where training had been provided continue to run In Time. Units where training had discontinued had done so in order to take up alternatives, or as a result of trainers leaving. Of the units that had discontinued training, one wished to recommence In Time. Local in situ training in obstetric emergencies, as exemplified by the In Time course, remains a popular and valued training intervention across Western Australia. This training may be of particular benefit to small and remote units, but these are the areas in which training is hardest to sustain. © 2017 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
SIM_EXPLORE: Software for Directed Exploration of Complex Systems
NASA Technical Reports Server (NTRS)
Burl, Michael; Wang, Esther; Enke, Brian; Merline, William J.
2013-01-01
Physics-based numerical simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. While such codes may provide the highest-fidelity representation of system behavior, they are often so slow to run that insight into the system is limited. Trying to understand the effects of inputs on outputs by conducting an exhaustive grid-based sweep over the input parameter space is simply too time-consuming. An alternative approach called "directed exploration" has been developed to harvest information from numerical simulators more efficiently. The basic idea is to employ active learning and supervised machine learning to cleverly choose at each step which simulation trials to run next, based on the results of previous trials. SIM_EXPLORE is a new computer program that uses directed exploration to efficiently explore complex systems represented by numerical simulations. The software sequentially identifies and runs the simulation trials that it believes will be most informative given the results of previous trials. The results of new trials are incorporated into the software's model of the system behavior. The updated model is then used to pick the next round of new trials. This process, implemented as a closed-loop system wrapped around existing simulation code, provides a means to improve the speed and efficiency with which a set of simulations can yield scientifically useful results. The software focuses on the case in which the feedback from the simulation trials is binary-valued, i.e., the learner is only informed of the success or failure of the simulation trial to produce a desired output. The software offers a number of choices for the supervised learning algorithm (the method used to model the system behavior given the results so far) and a number of choices for the active learning strategy (the method used to choose which new simulation trials to run given the current behavior model). The software also makes use of the LEGION distributed computing framework to leverage the power of a set of compute nodes. The approach has been demonstrated on a planetary science application in which numerical simulations are used to study the formation of asteroid families.
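A minimal sketch of such a closed loop with binary feedback, using a random-forest surrogate and uncertainty sampling as one concrete (assumed) choice of learner and acquisition strategy, not SIM_EXPLORE's actual implementation:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def simulator(x):
        # Stand-in for an expensive simulation with a binary outcome:
        # 'success' if the input lies inside the unit disk.
        return int(x[0] ** 2 + x[1] ** 2 < 1.0)

    rng = np.random.default_rng(0)
    pool = rng.uniform(-2.0, 2.0, size=(2000, 2))       # candidate input settings
    X = pool[rng.choice(len(pool), size=10, replace=False)]
    y = np.array([simulator(x) for x in X])

    for round_ in range(20):
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        if len(set(y)) > 1:
            p = clf.predict_proba(pool)[:, 1]
        else:
            p = np.full(len(pool), 0.5)                 # no boundary learned yet
        pick = pool[np.argmin(np.abs(p - 0.5))]         # most uncertain candidate
        X = np.vstack([X, pick])
        y = np.append(y, simulator(pick))

    print(f"{len(y)} trials run, {y.sum()} successes; new trials cluster near the boundary")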
Real-Time Climate Simulations in the Interactive 3D Game Universe Sandbox ²
NASA Astrophysics Data System (ADS)
Goldenson, N. L.
2014-12-01
Exploration in an open-ended computer game is an engaging way to explore climate and climate change. Everyone can explore physical models with real-time visualization in the educational simulator Universe Sandbox ² (universesandbox.com/2), which includes basic climate simulations on planets. I have implemented a time-dependent, one-dimensional meridional heat transport energy balance model to run and be adjustable in real time in the midst of a larger simulated system. Universe Sandbox ² is based on the original game - at its core a gravity simulator - with other new physically-based content for stellar evolution, and handling collisions between bodies. Existing users are mostly science enthusiasts in informal settings. We believe that this is the first climate simulation to be implemented in a professionally developed computer game with modern 3D graphical output in real time. The type of simple climate model we've adopted helps us depict the seasonal cycle and the more drastic changes that come from changing the orbit or other external forcings. Users can alter the climate as the simulation is running by altering the star(s) in the simulation, dragging to change orbits and obliquity, adjusting the climate simulation parameters directly or changing other properties like CO2 concentration that affect the model parameters in representative ways. Ongoing visuals of the expansion and contraction of sea ice and snow-cover respond to the temperature calculations, and make it accessible to explore a variety of scenarios and intuitive to understand the output. Variables like temperature can also be graphed in real time. We balance computational constraints with the ability to capture the physical phenomena we wish to visualize, giving everyone access to a simple open-ended meridional energy balance climate simulation to explore and experiment with. The software lends itself to labs at a variety of levels about climate concepts including seasons, the Greenhouse effect, reservoirs and flows, albedo feedback, Snowball Earth, climate sensitivity, and model experiment design. Climate calculations are extended to Mars with some modifications to the Earth climate component, and could be used in lessons about the Mars atmosphere, and exploring scenarios of Mars climate history.
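A time-dependent 1-D meridional energy balance model of the kind described reduces to a diffusion equation in x = sin(latitude) with local radiative forcing. A minimal sketch with textbook (North-style) parameter values, which are assumptions rather than the values used in Universe Sandbox ²:

    import numpy as np

    n = 90
    x = np.linspace(-0.9999, 0.9999, n)           # x = sin(latitude)
    dx = x[1] - x[0]

    S0, A, B, D = 1361.0, 203.3, 2.09, 0.55       # solar constant; OLR = A + B*T; diffusion
    C = 4.0e7                                     # column heat capacity [J m^-2 K^-1]
    insol = (S0 / 4.0) * (1 - 0.241 * (3 * x ** 2 - 1))   # annual-mean insolation
    T = 15.0 - 30.0 * x ** 2                      # initial temperature guess [deg C]

    def albedo(T):
        return np.where(T < -10.0, 0.62, 0.30)    # crude ice-albedo step function

    dt = 3600.0                                   # 1 h step keeps the explicit scheme stable
    for _ in range(20 * 8760):                    # ~20 model years to equilibrium
        transport = D * np.gradient((1 - x ** 2) * np.gradient(T, dx), dx)
        T += (dt / C) * (insol * (1 - albedo(T)) - (A + B * T) + transport)

    print(f"area-weighted mean temperature: {T.mean():.1f} deg C")

Equal steps in x enclose equal surface area, so a plain mean over x is already area-weighted.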
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Kumar, Sujay V.; Kuligowski, Robert J.; Langston, Carrie
2013-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center in Huntsville, AL is running a real-time configuration of the NASA Land Information System (LIS) with the Noah land surface model (LSM). Output from the SPoRT-LIS run is used to initialize land surface variables for local modeling applications at select National Weather Service (NWS) partner offices, and can be displayed in decision support systems for situational awareness and drought monitoring. The SPoRT-LIS is run over a domain covering the southern and eastern United States, fully nested within the National Centers for Environmental Prediction Stage IV precipitation analysis grid, which provides precipitation forcing to the offline LIS-Noah runs. The SPoRT Center seeks to expand the real-time LIS domain to the entire Continental U.S. (CONUS); however, geographical limitations with the Stage IV analysis product have inhibited this expansion. Therefore, a goal of this study is to test alternative precipitation forcing datasets that can enable the LIS expansion by improving upon the current geographical limitations of the Stage IV product. The four precipitation forcing datasets that are inter-compared on a 4-km resolution CONUS domain include the Stage IV, an experimental GOES quantitative precipitation estimate (QPE) from NESDIS/STAR, the National Mosaic and QPE (NMQ) product from the National Severe Storms Laboratory, and the North American Land Data Assimilation System phase 2 (NLDAS-2) analyses. The NLDAS-2 dataset is used as the control run, with each of the other three datasets considered experimental runs compared against the control. The regional strengths, weaknesses, and biases of each precipitation analysis are identified relative to the NLDAS-2 control in terms of accumulated precipitation pattern and amount, and the impacts on the subsequent LSM spin-up simulations. The ultimate goal is to identify an alternative precipitation forcing dataset that can best support an expansion of the real-time SPoRT-LIS to a domain covering the entire CONUS.
CUDA Fortran acceleration for the finite-difference time-domain method
NASA Astrophysics Data System (ADS)
Hadi, Mohammed F.; Esmaeili, Seyed A.
2013-05-01
A detailed description of programming the three-dimensional finite-difference time-domain (FDTD) method to run on graphical processing units (GPUs) using CUDA Fortran is presented. Two FDTD-to-CUDA thread-block mapping designs are investigated and their performances compared. Comparative assessment of trade-offs between GPU's shared memory and L1 cache is also discussed. This presentation is for the benefit of FDTD programmers who work exclusively with Fortran and are reluctant to port their codes to C in order to utilize GPU computing. The derived CUDA Fortran code is compared with an optimized CPU version that runs on a workstation-class CPU to present a realistic GPU to CPU run time comparison and thus help in making better informed investment decisions on FDTD code redesigns and equipment upgrades. All analyses are mirrored with CUDA C simulations to put in perspective the present state of CUDA Fortran development.
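For readers who want a language-neutral reference point for the kernel being ported, the 1-D Yee update at the heart of FDTD fits in a few lines; the sketch below is in Python rather than CUDA Fortran, with normalized units, Courant number 0.5, and a soft Gaussian source (all illustrative choices):

    import numpy as np

    nz, nt = 400, 1000
    Ex = np.zeros(nz)                       # E-field at integer grid points
    Hy = np.zeros(nz - 1)                   # H-field staggered half a cell
    c = 0.5                                 # Courant number (normalized units)

    for n in range(nt):
        Hy += c * (Ex[1:] - Ex[:-1])        # update H from the curl of E
        Ex[1:-1] += c * (Hy[1:] - Hy[:-1])  # update E from the curl of H
        Ex[nz // 4] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source

    print("peak |Ex| after propagation:", np.abs(Ex).max())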
NASA Astrophysics Data System (ADS)
Dilmen, Derya I.; Titov, Vasily V.; Roe, Gerard H.
2015-12-01
On September 29, 2009, an Mw = 8.1 earthquake at 17:48 UTC in the Tonga Trench generated a tsunami that caused heavy damage across the Samoa, American Samoa, and Tonga islands. Tutuila island, located 250 km from the earthquake epicenter, experienced tsunami flooding and strong currents on its north and east coasts, causing 34 fatalities (out of 192 total deaths from this tsunami) and widespread structural and ecological damage. The surrounding coral reefs also suffered heavy damage, which was formally evaluated based on detailed surveys before and immediately after the tsunami. This setting thus provides a unique opportunity to evaluate the relationship between tsunami dynamics and coral damage. In this study, estimates of the maximum wave amplitudes and coastal inundation of the tsunami are obtained with the MOST model (Titov and Synolakis, J. Waterway Port Coast Ocean Eng: pp 171, 1998; Titov and Gonzalez, NOAA Tech. Memo. ERL PMEL 112:11, 1997), which is now the operational tsunami forecast tool used by the National Oceanic and Atmospheric Administration (NOAA). The earthquake source function was constrained using real-time deep-ocean tsunami data from three DART® (Deep-ocean Assessment and Reporting for Tsunamis) systems in the far field, and by tide-gauge observations in the near field. We compare the simulated run-up with observations to evaluate the simulation performance, and present an overall synthesis of the tide-gauge data, survey results of the run-up, inundation measurements, and the datasets of coral damage around the island. These data are used to assess the overall accuracy of the model run-up prediction for Tutuila, and to evaluate the model accuracy over the coral reef environment during the tsunami event. Our primary findings are that: (1) MOST-simulated run-up correlates well with observed run-up for this event (r = 0.8), though it tends to underestimate amplitudes over the coral reef environment around Tutuila (for 15 of 31 villages, run-up is underestimated by more than 10%; in only 5 was run-up overestimated by more than 10%); and (2) the locations where the model underestimates run-up also tend to have experienced heavy or very heavy coral damage (8 of the 15 villages), whereas well-estimated run-up locations characteristically experienced low or very low damage (7 of 11 villages). These findings imply that a numerical model may overestimate the energy loss of the tsunami waves during their interaction with the coral reef. We plan future studies to quantify this energy loss and to explore what improvements can be made in simulations of tsunami run-up when simulating coastal environments with fringing coral reefs.
A PICKSC Science Gateway for enabling the common plasma physicist to run kinetic software
NASA Astrophysics Data System (ADS)
Hu, Q.; Winjum, B. J.; Zonca, A.; Youn, C.; Tsung, F. S.; Mori, W. B.
2017-10-01
Computer simulations offer tremendous opportunities for studying plasmas, ranging from simulations for students that illuminate fundamental educational concepts to research-level simulations that advance scientific knowledge. Nevertheless, there is a significant hurdle to using simulation tools. Users must navigate codes and software libraries, determine how to wrangle output into meaningful plots, and oftentimes confront a significant cyberinfrastructure with powerful computational resources. Science gateways offer a Web-based environment to run simulations without needing to learn or manage the underlying software and computing cyberinfrastructure. We discuss our progress on creating a Science Gateway for the Particle-in-Cell and Kinetic Simulation Software Center that enables users to easily run and analyze kinetic simulations with our software. We envision that this technology could benefit a wide range of plasma physicists, both in the use of our simulation tools as well as in its adaptation for running other plasma simulation software. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.
Severe Nuclear Accident Program (SNAP) - a real time model for accidental releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saltbones, J.; Foss, A.; Bartnicki, J.
1996-12-31
The model: the Severe Nuclear Accident Program (SNAP) has been developed at the Norwegian Meteorological Institute (DNMI) in Oslo to provide decision makers and Government officials with a real-time tool for simulating large accidental releases of radioactivity from nuclear power plants or other sources. SNAP is developed in the Lagrangian framework, in which atmospheric transport of radioactive pollutants is simulated by emitting a large number of particles from the source. The main advantage of the Lagrangian approach is the possibility of precise parameterization of advection processes, especially close to the source. SNAP can be used to predict the transport and deposition of a radioactive cloud in the future (up to 48 hours, in the present version) or to analyze the behavior of the cloud in the past. It is also possible to run the model in a mixed mode (partly analysis and partly forecast). In the routine run we assume unit (1 g s⁻¹) emission in each of three classes. This assumption is very convenient for the main user of the model output in case of emergency, the Norwegian Radiation Protection Agency. Due to the linearity of the model equations, the user can test different emission scenarios as a post-processing task by assigning different weights to the concentration and deposition fields corresponding to each of the three emission classes. SNAP is fully operational and can be run by the meteorologist on duty at any time. The output from SNAP has two forms: first, on maps of Europe, or selected parts of Europe, individual particles are shown during the simulation period; second, immediately after the simulation, concentration/deposition fields can be shown for every three hours of the simulation period as isoline maps for each emission class. In addition, concentration and deposition maps, as well as some meteorological data, are stored on a publicly accessible disk for further processing by the model users.
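The post-processing trick enabled by linearity is simple to express: scale each unit-emission (1 g s⁻¹) field by the scenario's actual release rate and sum. A minimal sketch in Python (field shapes and class names are illustrative):

    import numpy as np

    # Stored concentration fields from the routine unit-emission (1 g/s) runs,
    # one per emission class (shapes and names here are illustrative).
    rng = np.random.default_rng(0)
    unit_fields = {c: rng.random((100, 100)) for c in ("class1", "class2", "class3")}

    def scenario_field(release_rates_g_per_s):
        # Linearity: scale each unit-emission field by the actual release
        # rate for its class and sum the contributions.
        return sum(rate * unit_fields[cls]
                   for cls, rate in release_rates_g_per_s.items())

    # Hypothetical accident: 5 kg/s in class 1, nothing in class 2, 200 g/s in class 3.
    conc = scenario_field({"class1": 5000.0, "class2": 0.0, "class3": 200.0})
    print("peak scaled concentration:", conc.max())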
Particle-in-cell simulations of Earth-like magnetosphere during a magnetic field reversal
NASA Astrophysics Data System (ADS)
Barbosa, M. V. G.; Alves, M. V.; Vieira, L. E. A.; Schmitz, R. G.
2017-12-01
The geologic record shows that hundreds of pole reversals have occurred throughout Earth's history. The mean interval between pole reversals is roughly 200 to 300 thousand years, and the last reversal occurred around 780 thousand years ago. A pole reversal is a slow process, during which the strength of the magnetic field decreases and the field becomes more complex, with the appearance of more than two poles for some time; the field strength then increases with the opposite polarity. Along the way, the magnetic field configuration changes, leaving an Earth-like planet vulnerable to the harmful effects of the Sun. Understanding what happens to the magnetosphere during these pole reversals is an open topic of investigation, and only recently have PIC codes been used to model magnetospheres. Here we use the particle code iPIC3D [Markidis et al, Mathematics and Computers in Simulation, 2010] to simulate an Earth-like magnetosphere at three different times along the pole reversal process. The code was modified so that the Earth-like magnetic field is generated using an expansion in spherical harmonics, with the Gauss coefficients given by an MHD simulation of the Earth's core [Glatzmaier et al, Nature, 1995; 1999; private communication to L.E.A.V.]. The simulations show the qualitative behavior of the magnetosphere, such as the current structures. Only the planet's magnetic field was changed between runs; the solar wind is the same for all runs. Preliminary results show the formation of the Chapman-Ferraro current in front of the magnetosphere in all cases. In the run for the middle of the reversal process, with its low-intensity and asymmetric field configuration, the current structure changes and the presence of multiple poles can be observed. In all simulations, a structure similar to the radiation belts was found. Simulations of more severe solar wind conditions are necessary to determine the real impact of the reversal on the magnetosphere.
NONMEMory: a run management tool for NONMEM.
Wilkins, Justin J
2005-06-01
NONMEM is an extremely powerful tool for nonlinear mixed-effect modelling and simulation of pharmacokinetic and pharmacodynamic data. However, it is a console-based application whose output does not lend itself to rapid interpretation or efficient management. NONMEMory has been created to be a comprehensive project manager for NONMEM, providing detailed summary, comparison and overview of the runs comprising a given project, including the display of output data, simple post-run processing, fast diagnostic plots and run output management, complementary to other available modelling aids. Analysis time ought not to be spent on trivial tasks, and NONMEMory's role is to eliminate these as far as possible by increasing the efficiency of the modelling process. NONMEMory is freely available from http://www.uct.ac.za/depts/pha/nonmemory.php.
NASA Astrophysics Data System (ADS)
Ivkin, N.; Liu, Z.; Yang, L. F.; Kumar, S. S.; Lemson, G.; Neyrinck, M.; Szalay, A. S.; Braverman, V.; Budavari, T.
2018-04-01
Cosmological N-body simulations play a vital role in studying models for the evolution of the Universe. To compare to observations and make scientific inferences, statistical analysis of large simulation datasets - e.g., finding halos or obtaining multi-point correlation functions - is crucial. However, traditional in-memory methods for these tasks do not scale to the datasets that are forbiddingly large in modern simulations. Our prior paper (Liu et al., 2015) proposes memory-efficient streaming algorithms that can find the largest halos in a simulation with up to 10⁹ particles on a small server or desktop. However, this approach fails when directly scaled to larger datasets. This paper presents a robust streaming tool that leverages state-of-the-art techniques in GPU boosting, sampling, and parallel I/O to significantly improve performance and scalability. Our rigorous analysis of the sketch parameters improves the previous results from finding the centers of the 10³ largest halos (Liu et al., 2015) to ∼10⁴-10⁵, and reveals the trade-offs between memory, running time and number of halos. Our experiments show that our tool can scale to datasets with up to ∼10¹² particles while using less than an hour of running time on a single Nvidia GTX 1080 GPU.
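Finding the densest cells in a particle stream is a heavy-hitters problem, and the count-min sketch is one standard streaming primitive for it. The sketch below is a minimal illustration of that primitive only; the paper's actual data structure, sampling and GPU machinery are more elaborate:

    import numpy as np

    class CountMin:
        # Minimal count-min sketch: `depth` hash rows of `width` counters.
        def __init__(self, width=2048, depth=4, seed=0):
            rng = np.random.default_rng(seed)
            self.salts = rng.integers(1, 2 ** 31 - 1, size=depth)
            self.rows = np.arange(depth)
            self.tables = np.zeros((depth, width), dtype=np.int64)
            self.width = width

        def add(self, key):
            self.tables[self.rows, (key * self.salts) % self.width] += 1

        def estimate(self, key):
            # Collisions only inflate counts, so the row minimum is an
            # overestimate-only approximation of the true count.
            return self.tables[self.rows, (key * self.salts) % self.width].min()

    rng = np.random.default_rng(1)
    stream = rng.integers(0, 10 ** 7, size=100_000)   # grid-cell IDs of 'particles'
    stream[::50] = 42                                  # plant one dense cell (a 'halo')
    cm = CountMin()
    for cell in stream:
        cm.add(int(cell))
    print("estimated count for cell 42:", cm.estimate(42))   # true count is ~2000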
NASA Astrophysics Data System (ADS)
Gomes, J. L.; Chou, S. C.; Yaguchi, S. M.
2012-04-01
Physics parameterizations and the model's vertical and horizontal resolutions can contribute significantly to the uncertainty in numerical weather predictions, especially in regions with complex topography. The objective of this study is to assess the influence of the model's precipitation production schemes and horizontal resolution on the diurnal cycle of precipitation in the Eta Model. The model was run in hydrostatic mode at 3- and 5-km grid sizes; the vertical resolution was set to 50 layers, and the time steps to 6 and 10 s, respectively. The initial and boundary conditions were taken from the ERA-Interim reanalysis. Over the sea, the 0.25-deg sea surface temperature from NOAA was used. The model was set up to run, for each resolution, over Angra dos Reis, located in the Southeast region of Brazil, for the rainy period between 18 December 2009 and 01 January 2010; the model simulation range was 48 hours. In one set of runs the cumulus parameterization was switched off, so that model precipitation was produced entirely by the cloud microphysics scheme; in the other set the model was run with weak cumulus convection. The results show that as the model horizontal resolution increases from 5 to 3 km, the spatial pattern of the precipitation hardly changes, although the maximum precipitation core increases in magnitude. Daily automatic station data were used to evaluate the runs and show that the diurnal cycles of temperature and precipitation were better simulated at 3 km when compared against observations. The configuration without cumulus convection shows a small contraction of the precipitating area and an increase in the simulated maximum values. The diurnal cycle of precipitation was better simulated with some activity of the cumulus convection scheme. The skill scores for the period and for different forecast ranges are higher at weak and moderate precipitation rates.
Pairwise velocities in the "Running FLRW" cosmological model
NASA Astrophysics Data System (ADS)
Bibiano, Antonio; Croton, Darren J.
2017-05-01
We present an analysis of the pairwise velocity statistics from a suite of cosmological N-body simulations describing the 'Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Λ cold dark matter (ΛCDM) with a time-evolving vacuum energy density, ρ_Λ. To enforce local conservation of matter, a time-evolving gravitational coupling is also included. Our results constitute the first study of velocities in the R-FLRW cosmology, and we also compare with other dark energy simulation suites, repeating the same analysis. We find a strong degeneracy between the pairwise velocity and σ8 at z = 0 for almost all scenarios considered, which remains even when we look back to epochs as early as z = 2. We also investigate various coupled dark energy models, some of which show minimal degeneracy, and reveal interesting deviations from ΛCDM that could be readily exploited by future cosmological observations to test and further constrain our understanding of dark energy.
Speeding up N-body simulations of modified gravity: chameleon screening models
NASA Astrophysics Data System (ADS)
Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo
2017-02-01
We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
FLAME: A platform for high performance computing of complex systems, applied for three case studies
Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...
2011-01-01
FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids enabling large number of agents to be simulated over short periods of time. Modellers are hindered by complexities of porting models on parallel platforms and time taken to run large simulations on a single machine, which FLAME overcomes. Three case studies from different disciplines were modelled using FLAME, and are presented along with their performance results on a grid.
NASA Astrophysics Data System (ADS)
Zhao, Jing-bo; Han, Bing-yuan; Bei, Shao-yi
2017-10-01
The range extender is the core component of an extended-range electric vehicle (E-REV); its start-stop control determines the vehicle's operating modes. Based on a specific E-REV, this paper investigates a constant-power control strategy for the range extender in extended-range mode. Taking target range as the constraint condition and considering different driving-cycle conditions, the range extender start-stop moment is corrected through the battery SOC, the range extender control strategy is optimized, and a simulation model of the vehicle and of range extender start-stop control is established. Simulation results for the NEDC and UDDS cycles show that, for a given target mileage, range extender running time is reduced by 37.2% and 28.2% under the NEDC condition, and by 40.6% and 33.5% under the UDDS condition, meeting the vehicle mileage requirement while reducing consumption and emissions.
Mathematical model simulation of a diesel spill in the Potomac River
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, S.S.; Nicolette, J.P.; Markarian, R.K.
1995-12-31
A mathematical modeling technique was used to simulate the transport and fate of approximately 400,000 gallons of spilled diesel fuel and its impact on the aquatic biota in the Potomac River and Sugarland Run. Sugarland Run is a tributary about 21 miles upstream from Washington, DC. The mass balance model predicted the dynamic (spatial and temporal) distribution of spilled oil. The distributions were presented in terms of surface oil slick and sheen, dissolved and undissolved total petroleum hydrocarbons (TPH) in the water surface, water column, river sediments, shoreline and atmosphere. The processes simulated included advective movement, dispersion, dissolution, evaporation, volatilization, sedimentation, shoreline deposition, biodegradation, and removal of oil from cleanup operations. The model predicted that the spill resulted in a water column dissolved TPH concentration range of 0.05 to 18.6 ppm in Sugarland Run. The spilled oil traveled 10 miles along Sugarland Run before it reached the Potomac River. At the Potomac River, the water column TPH concentration was predicted to have decreased to the range of 0.0 to 0.43 ppm. These levels were consistent with field samples. To assess biological injury, the model used 4, 8, 24, 48, and 96-hr LC values in computing the fish injury caused by the fuel oil. The model used the maximum running average of dissolved TPH and exposure time to predict levels of fish mortality in the range of 38 to 40% in Sugarland Run. This prediction was consistent with field fisheries surveys. The model also computed the amount of spilled oil that adsorbed and settled into the river sediments.
Simpson, Richard J; Graham, Scott M; Connaboy, Christopher; Clement, Richard; Pollonini, Luca; Florida-James, Geraint D
2017-01-01
We developed a standardized laboratory treadmill protocol for assessing physiological responses to a simulated backpack load-carriage task in trained soldiers, and assessed the efficacy of blood lactate thresholds (LTs) and economy in predicting future backpack running success over an 8-mile course in field conditions. LTs and corresponding physiological responses were determined in 17 elite British soldiers who completed an incremental treadmill walk/run protocol to exhaustion carrying a 20 kg backpack load. Treadmill velocity at the breakpoint (r = -0.85) and Δ1 mmol l⁻¹ (r = -0.80) LTs, and relative V̇O₂ at 4 mmol l⁻¹ (r = 0.76) and treadmill walk/run velocities of 6.4 (r = 0.76), 7.4 (r = 0.80), 11.4 (r = 0.66) and 12.4 (r = 0.65) km h⁻¹ were significantly associated with field test completion time. We report for the first time that LTs and backpack walk/run economy are major determinants of backpack load-carriage performance in trained soldiers. Copyright © 2016 Elsevier Ltd. All rights reserved.
2006-06-01
...models generated for this thesis were set to run for 60 minutes. To run the simulation for the set time, the analyst provides a random number seed to... The IMPRINT workload value of 60 has been used by a consensus of workload modeling SMEs to represent the 'high' threshold, while the...
James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael
2009-01-01
A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
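A 1-D analogue of the time-splitting is easy to write down: several CFL-limited explicit upwind advection substeps, then one implicit (backward Euler) dispersion step spanning the same interval. The sketch below uses simple finite differences rather than TaRSE's Godunov finite-volume and mixed finite-element discretizations:

    import numpy as np
    from scipy.linalg import solve_banded

    nx, dx = 200, 1.0
    u, D = 0.8, 0.05                          # velocity, dispersion coefficient
    dt_adv = 0.9 * dx / u                     # CFL-limited advective step
    n_sub = 4                                 # advective substeps per dispersive step
    dt_dif = n_sub * dt_adv                   # one large implicit dispersive step

    xgrid = np.arange(nx) * dx
    c = np.exp(-0.5 * ((xgrid - 30.0) / 3.0) ** 2)    # sharp initial solute front

    # Tridiagonal system for backward-Euler diffusion: (I - dt*D*L) c_new = c_old
    r = D * dt_dif / dx ** 2
    ab = np.zeros((3, nx))
    ab[0, 1:] = -r                            # superdiagonal
    ab[1, :] = 1.0 + 2.0 * r                  # diagonal
    ab[1, 0] = ab[1, -1] = 1.0 + r            # no-flux boundaries
    ab[2, :-1] = -r                           # subdiagonal

    for _ in range(50):
        for _ in range(n_sub):                # explicit first-order upwind advection
            c[1:] -= (u * dt_adv / dx) * (c[1:] - c[:-1])
        c = solve_banded((1, 1), ab, c)       # implicit dispersion step

    print(f"front peak after transport: {c.max():.3f}")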
Implicit Learning of a Finger Motor Sequence by Patients with Cerebral Palsy After Neurofeedback.
Alves-Pinto, Ana; Turova, Varvara; Blumenstein, Tobias; Hantuschke, Conny; Lampe, Renée
2017-03-01
Facilitation of implicit learning of a hand motor sequence after a single session of neurofeedback training of alpha power recorded from the motor cortex has been shown in healthy individuals (Ros et al., Biological Psychology 95:54-58, 2014). This facilitation effect could be potentially applied to improve the outcome of rehabilitation in patients with impaired hand motor function. In the current study a group of ten patients diagnosed with cerebral palsy trained reduction of alpha power derived from brain activity recorded from right and left motor areas. Training was distributed in three periods of 8 min each. In between, participants performed a serial reaction time task with their non-dominant hand, to a total of five runs. A similar procedure was repeated a week or more later but this time training was based on simulated brain activity. Reaction times pooled across participants decreased on each successive run faster after neurofeedback training than after the simulation training. Also recorded were two 3-min baseline conditions, once with the eyes open, another with the eyes closed, at the beginning and end of the experimental session. No significant changes in alpha power with neurofeedback or with simulation training were obtained and no correlation with the reductions in reaction time could be established. Contributions for this are discussed.
NASA Technical Reports Server (NTRS)
Manobianco, John; Zack, John W.; Taylor, Gregory E.
1996-01-01
This paper describes the capabilities and operational utility of a version of the Mesoscale Atmospheric Simulation System (MASS) that has been developed to support operational weather forecasting at the Kennedy Space Center (KSC) and Cape Canaveral Air Station (CCAS). The implementation of local, mesoscale modeling systems at KSC/CCAS is designed to provide detailed short-range (less than 24 h) forecasts of winds, clouds, and hazardous weather such as thunderstorms. Short-range forecasting is a challenge for daily operations, and manned and unmanned launches since KSC/CCAS is located in central Florida where the weather during the warm season is dominated by mesoscale circulations like the sea breeze. For this application, MASS has been modified to run on a Stardent 3000 workstation. Workstation-based, real-time numerical modeling requires a compromise between the requirement to run the system fast enough so that the output can be used before expiration balanced against the desire to improve the simulations by increasing resolution and using more detailed physical parameterizations. It is now feasible to run high-resolution mesoscale models such as MASS on local workstations to provide timely forecasts at a fraction of the cost required to run these models on mainframe supercomputers. MASS has been running in the Applied Meteorology Unit (AMU) at KSC/CCAS since January 1994 for the purpose of system evaluation. In March 1995, the AMU began sending real-time MASS output to the forecasters and meteorologists at CCAS, Spaceflight Meteorology Group (Johnson Space Center, Houston, Texas), and the National Weather Service (Melbourne, Florida). However, MASS is not yet an operational system. The final decision whether to transition MASS for operational use will depend on a combination of forecaster feedback, the AMU's final evaluation results, and the life-cycle costs of the operational system.
Jiao, Jiao; Li, Yi; Yao, Lei; Chen, Yajun; Guo, Yueping; Wong, Stephen H S; Ng, Frency S F; Hu, Junyan
2017-10-01
To investigate clothing-induced differences in human thermal response and running performance, eight male athletes participated in a repeated-measures study wearing three sets of clothing (CloA, CloB, and CloC). CloA and CloB were body-mapping-designed with 11% and 7% greater heat dissipation capacity, respectively, than CloC, the commonly used running clothing. The experiments were conducted using steady-state running followed by an all-out performance run in a controlled hot environment. Participants' thermal responses such as core temperature (Tc), mean skin temperature (Tsk), heat storage (S), and the performance running time were measured. CloA resulted in a shorter performance time than CloC (323.1 ± 10.4 s vs. 353.6 ± 13.2 s, p = 0.01), and induced the lowest Tsk, smallest ΔTc, and smallest S in the resting and running phases. This study indicated that clothing made with different heat dissipation capacities affects athlete thermal responses and running performance in a hot environment. Practitioner Summary: A protocol that simulated the real situation in running competitions was used to investigate the effects of body-mapping-designed clothing on athletes' thermal responses and running performance. The findings confirmed the effects of optimised clothing with body-mapping design and advanced fabrics, and ensured the practical advantage of the developed clothing on exercise performance.
Houghton, Laurence A; Dawson, Brian T; Rubenson, Jonas
2013-04-01
The aim of this study was to determine whether intermittent shuttle running times (during a prolonged, simulated cricket batting innings) and Achilles tendon properties were affected by 8 weeks of plyometric training (PLYO, n = 7) or normal preseason (control [CON], n = 8). Turn (5-0-5-m agility) and 5-m sprint times were assessed using timing gates. Achilles tendon properties were determined using dynamometry, ultrasonography, and musculoskeletal geometry. Countermovement and squat jump heights were also assessed before and after training. Mean 5-0-5-m turn time did not significantly change in PLYO or CON (pre vs. post: 2.25 ± 0.08 vs. 2.22 ± 0.07 and 2.26 ± 0.06 vs. 2.25 ± 0.08 seconds, respectively). Mean 5-m sprint time did not significantly change in PLYO or CON (pre vs. post: 0.85 ± 0.02 vs. 0.84 ± 0.02 and 0.85 ± 0.03 vs. 0.85 ± 0.02 seconds, respectively). However, inferences from the smallest worthwhile change suggested that PLYO had a 51-72% chance of positive effects but only a 6-15% chance of detrimental effects on shuttle running times. Jump heights only increased in PLYO (9.1-11.0%, p < 0.050). Achilles tendon mechanical properties (force, stiffness, elastic energy, strain, modulus) did not change in PLYO or CON. However, Achilles tendon cross-sectional area increased in PLYO (pre vs. post: 70 ± 7 vs. 79 ± 8 mm², p < 0.01) but not in CON (77 ± 4 vs. 77 ± 5 mm², p > 0.050). In conclusion, plyometric training had possible benefits on intermittent shuttle running times and improved jump performance. Plyometric training also increased tendon cross-sectional area, but further investigation is required to determine whether this translates to decreased injury risk.
Aftermath of early Hit-and-Run collisions in the Inner Solar System
NASA Astrophysics Data System (ADS)
Sarid, Gal; Stewart, Sarah T.; Leinhardt, Zoe M.
2015-08-01
The planet formation epoch in the terrestrial planet region and the asteroid belt was characterized by a vigorous dynamical environment that was conducive to giant impacts among planetary embryos and asteroidal parent bodies, leading to diverse outcomes. Among these, the greatest potential for producing diverse end-members lies in the erosive Hit-and-Run regime (small mass ratios, off-axis oblique impacts and non-negligible ejected mass), which is also more probable in terms of the early dynamical encounter configuration in the inner solar system. This collision regime has been invoked to explain outstanding issues such as planetary volatile loss records, the origin of the Moon, and mantle stripping from Mercury and some of the larger asteroids (Vesta, Psyche). We performed and analyzed a set of simulations of Hit-and-Run events, covering a large range of mass ratios (1-20), impact parameters (0.25-0.96, for near head-on to barely grazing) and impact velocities (~1.5-5 times the mutual escape velocity, depending on the mass ratio). We used an SPH code with tabulated EOS and a nominal simulated time >1 day to track the collisional shock processing and the provenance of the material components of the collision debris. Prior to the impact runs, all bodies were allowed to settle to negligible particle velocities in isolation, within ~20 simulated hours. The total number of particles involved in each of our collision simulations was 1-3 × 10⁵. Resulting configurations include stripped mantles, melting/vaporization of rock and/or iron cores, and strong variations of asteroid parent bodies from canonical chondritic composition. In the context of large planetary formation simulations, velocity and impact angle distributions are necessary to assess impact probabilities. The mass distribution and interaction within planetary embryo and asteroid swarms depend both on gravitational dynamics and the applied fragmentation mechanism. We will present results pertaining to general projectile remnant scaling relations, the constitution of ejected unbound material, and the composition of the varied collision remnants, which become available to seed the asteroid belt.
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real time, then system operators would have the situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is greater computational complexity involved in solving these large linear systems within a reasonable time. This project expands on current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power-system-specific methods that run in nearly-linear time. The work explores a new theoretical method based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low-stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion of how to further improve the method's speed and accuracy is included.
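To make the comparison concrete, here is a minimal sketch (not the author's code) contrasting a direct LU solve with preconditioned conjugate gradients on a symmetric, diagonally dominant system of the kind the work targets. SciPy has no low-stretch spanning-tree preconditioner, so a simple Jacobi preconditioner stands in for the chain of approximate systems:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 2000

# A Laplacian-like, symmetric, diagonally dominant test matrix
# (1-D grid graph with a random positive diagonal boost).
main = 2.0 + rng.random(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
b = rng.random(n)

# Direct solve via LU factorization (the power-flow baseline).
x_lu = spla.splu(A).solve(b)

# Iterative solve; the Jacobi preconditioner is a stand-in for the
# spanning-tree-based chain described in the abstract.
M = sp.diags(1.0 / A.diagonal())
x_cg, info = spla.cg(A, b, M=M, atol=1e-10)

print("LU residual:", np.linalg.norm(A @ x_lu - b))
print("CG residual:", np.linalg.norm(A @ x_cg - b), "converged:", info == 0)
```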
Evaluation of Tsunami Run-Up on Coastal Areas at Regional Scale
NASA Astrophysics Data System (ADS)
González, M.; Aniel-Quiroga, Í.; Gutiérrez, O.
2017-12-01
Tsunami hazard assessment is tackled by means of numerical simulations, which yield the areas flooded by the tsunami wave inland. These simulations require input data such as high-resolution topobathymetry of the study area and the earthquake focal mechanism parameters, and their computational cost is still excessive. An important restriction on the elaboration of large-scale maps at national or regional scale is the reconstruction of high-resolution topobathymetry in the coastal zone. An alternative, traditional method consists of applying empirical-analytical formulations to calculate run-up at several coastal profiles (e.g. Synolakis, 1987), combined with numerical simulations offshore that do not include coastal inundation. In this case, the numerical simulations are faster, but limitations are introduced because the coastal bathymetric profiles are very simply idealized. In this work, we present a complementary methodology based on a hybrid numerical model, formed by two models coupled ad hoc for this work: a non-linear shallow water equations model (NLSWE) for the offshore part of the propagation and a Volume of Fluid model (VOF) for the areas near the coast and inland, applying each numerical scheme where it better reproduces the tsunami wave. The run-up of a tsunami scenario is obtained by applying the coupled model to an ad-hoc numerical flume. To design this methodology, hundreds of worldwide topobathymetric profiles have been parameterized using 5 parameters (2 depths and 3 slopes). In addition, tsunami waves have also been parameterized, by their height and period. As an application of the numerical flume methodology, the parameterized coastal profiles and tsunami waves have been combined to build a populated database of run-up calculations, the combinations being computed by numerical simulations in the numerical flume. The result is a tsunami run-up database that considers real profile shapes, realistic tsunami waves, and optimized numerical simulations. This database allows the run-up of any new tsunami wave to be calculated in a short period of time by interpolation on the database, based on the tsunami wave characteristics provided as an output of the NLSWE model along the coast at a large-scale domain (regional or national scale).
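The database-lookup step can be pictured with a small sketch (assumptions: a two-parameter wave description by height H and period T, and a placeholder formula standing in for the coupled NLSWE/VOF flume results):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

H = np.linspace(0.5, 10.0, 20)      # tsunami wave height [m]
T = np.linspace(120.0, 1800.0, 15)  # tsunami wave period [s]

# Placeholder "database": in practice each entry would be the run-up from one
# coupled-model numerical flume simulation for a parameterized coastal profile.
HH, TT = np.meshgrid(H, T, indexing="ij")
runup_db = 1.8 * HH * (TT / 600.0) ** 0.25  # hypothetical surrogate values

lookup = RegularGridInterpolator((H, T), runup_db)

# Estimate run-up for a "new" wave coming out of the offshore NLSWE model,
# without running the expensive flume simulation again.
print("estimated run-up [m]:", lookup([[3.2, 900.0]])[0])
```

A full implementation would also interpolate over the five profile-shape parameters, making the lookup seven-dimensional.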
Colt: an experiment in wormhole run-time reconfiguration
NASA Astrophysics Data System (ADS)
Bittner, Ray; Athanas, Peter M.; Musgrove, Mark
1996-10-01
Wormhole run-time reconfiguration (RTR) is an attempt to create a refined computing paradigm for high performance computational tasks. By combining concepts from field programmable gate array (FPGA) technologies with data flow computing, the Colt/Stallion architecture achieves high utilization of hardware resources, and facilitates rapid run-time reconfiguration. Targeted mainly at DSP-type operations, the Colt integrated circuit -- a prototype wormhole RTR device -- compares favorably to contemporary DSP alternatives in terms of silicon area consumed per unit computation and in computing performance. Although emphasis has been placed on signal processing applications, general purpose computation has not been overlooked. Colt is a prototype that defines an architecture not only at the chip level but also in terms of an overall system design. As this system is realized, the concept of wormhole RTR will be applied to numerical computation and DSP applications including those common to image processing, communications systems, digital filters, acoustic processing, real-time control systems and simulation acceleration.
A Multiplicative Cascade Model for High-Resolution Space-Time Downscaling of Rainfall
NASA Astrophysics Data System (ADS)
Raut, Bhupendra A.; Seed, Alan W.; Reeder, Michael J.; Jakob, Christian
2018-02-01
Distributions of rainfall with time and space resolutions of minutes and kilometers, respectively, are often needed to drive the hydrological models used in a range of engineering, environmental, and urban design applications. The work described here is the first step in constructing a model capable of downscaling rainfall to scales of minutes and kilometers from time and space resolutions of several hours and a hundred kilometers. A multiplicative random cascade model known as the Short-Term Ensemble Prediction System is run with parameters from the radar observations at Melbourne (Australia). Orographic effects are added through a multiplicative correction factor after the model is run. In the first set of model calculations, 112 significant rain events over Melbourne are simulated 100 times. Because of the stochastic nature of the cascade model, the simulations represent 100 possible realizations of the same rain event. The cascade model produces realistic spatial and temporal patterns of rainfall at 6 min and 1 km resolution (the resolution of the radar data), the statistical properties of which are in close agreement with observations. In the second set of calculations, the cascade model is run continuously for all days from January 2008 to August 2015 and the rainfall accumulations are compared at 12 locations in the greater Melbourne area. The statistical properties of the observations lie within the envelope of the 100 ensemble members. The model successfully reproduces the frequency distribution of the 6 min rainfall intensities, storm durations, interarrival times, and the autocorrelation function.
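A minimal sketch of the core downscaling idea, a discrete multiplicative random cascade (the weight distribution and number of levels here are illustrative, not the paper's calibrated STEPS parameters):

```python
import numpy as np

rng = np.random.default_rng(42)

def cascade_step(field, sigma=0.4):
    """Split every cell into 2x2 children with lognormal weights,
    normalized so each family conserves the parent's mean rain rate."""
    ny, nx = field.shape
    w = rng.lognormal(mean=0.0, sigma=sigma, size=(ny, nx, 2, 2))
    w /= w.mean(axis=(2, 3), keepdims=True)
    out = field[:, :, None, None] * w
    return out.transpose(0, 2, 1, 3).reshape(2 * ny, 2 * nx)

coarse = rng.gamma(shape=0.8, scale=5.0, size=(4, 4))  # coarse rain field
fine = coarse
for _ in range(5):                                     # 4x4 -> 128x128 cells
    fine = cascade_step(fine)

print(coarse.mean(), fine.mean())  # domain means agree by construction
```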
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task, and many avenues can be explored, among them improved spatial representation, more robust parametrizations, better formulations of some processes, or modification of model structures by trial-and-error procedures. Several past works indicate that model parameters and structure can depend on the modelling time step, so there is some rationale in investigating how a model behaves across various modelling time steps to find solutions for improvement. Here we analyse the impact of the data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input to a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and the consistency of internal fluxes at different time steps provides guidance for identifying the model components that should be improved. Our analysis indicates that the baseline model structure must be modified at sub-daily time steps to ensure the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception component, whose output flux showed the strongest sensitivity to the modelling time step. The dependency of the optimal model complexity on the time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7
Toofanny, Rudesh D; Simms, Andrew M; Beck, David A C; Daggett, Valerie
2011-08-10
Molecular dynamics (MD) simulations offer the ability to observe the dynamics and interactions of both whole macromolecules and individual atoms as a function of time. Taken in context with experimental data, atomic interactions from simulation provide insight into the mechanics of protein folding, dynamics, and function. The calculation of atomic interactions or contacts from an MD trajectory is computationally demanding, and the work required grows rapidly with the size of the simulation system. We describe the implementation of a spatial indexing algorithm in our multi-terabyte MD simulation database that significantly reduces the run-time required for discovery of contacts. The approach is applied to the Dynameomics project data. Spatial indexing, also known as spatial hashing, is a method that divides the simulation space into regular-sized bins and attributes an index to each bin. Since the calculation of contacts is widely employed in the simulation field, we also use it as the basis for testing compression of data tables. We investigate the effects of compression of the trajectory coordinate tables with different options of data and index compression within MS SQL Server 2008. Our implementation of spatial indexing speeds up the calculation of contacts over a 1 nanosecond (ns) simulation window by between 14% and 90% (i.e., 1.2 and 10.3 times faster). For a 'full' simulation trajectory (51 ns), spatial indexing reduces the calculation run-time by between 31 and 81% (between 1.4 and 5.3 times faster). Compression reduced table sizes but made no significant difference to the total execution time for neighbour discovery. The greatest compression (~36%) was achieved using page-level compression on both the data and indexes. The spatial indexing scheme significantly decreases the time taken to calculate atomic contacts and could be applied to other multidimensional neighbour discovery problems. The speed-up enables on-the-fly calculation and visualization of contacts and rapid cross-simulation analysis for knowledge discovery. Using page compression for the atomic coordinate tables and indexes saves ~36% of disk space without any significant decrease in calculation time and should be considered for other non-transactional databases in MS SQL Server 2008.
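The spatial hashing idea itself is compact. A hedged in-memory sketch (the paper's version lives inside MS SQL Server; here a plain dict plays the role of the index): atoms are binned into cubic cells with sides equal to the contact cutoff, so candidates for each atom come only from its own cell and the 26 neighbouring cells rather than from all N atoms.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def find_contacts(coords, cutoff):
    """Return all pairs (i, j), i < j, whose distance is <= cutoff."""
    # Hash every atom into an integer cell of side `cutoff`.
    cells = defaultdict(list)
    for i, p in enumerate(coords):
        cells[tuple((p // cutoff).astype(int))].append(i)

    contacts = []
    for cell, members in cells.items():
        # Only the 3x3x3 block of cells around `cell` can hold contacts.
        for offset in product((-1, 0, 1), repeat=3):
            nb = tuple(c + o for c, o in zip(cell, offset))
            for i in members:
                for j in cells.get(nb, ()):
                    if i < j and np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                        contacts.append((i, j))
    return contacts

coords = np.random.default_rng(1).uniform(0.0, 50.0, size=(1000, 3))
print(len(find_contacts(coords, cutoff=4.5)), "contacts found")
```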
Towards real-time photon Monte Carlo dose calculation in the cloud
NASA Astrophysics Data System (ADS)
Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe
2017-06-01
Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in the case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers highly scalable, accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server-side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets, with absolute runtimes of 1.1 to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to the currently used GPU or cluster solutions.
Continuation of advanced crew procedures development techniques
NASA Technical Reports Server (NTRS)
Arbet, J. D.; Benbow, R. L.; Evans, M. E.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.; Tatum, I. C.
1976-01-01
An operational computer program, the Procedures and Performance Program (PPP), was developed; it operates in conjunction with the Phase I Shuttle Procedures Simulator to provide a procedures recording and crew/vehicle performance monitoring capability. A technical synopsis of each task leading to the development of the PPP is provided. Conclusions and recommendations for action leading to improvements in the production of crew procedures and in crew training support are included. The PPP provides real-time CRT displays and post-run hardcopy output of procedures, difference procedures, performance data, parametric analysis data, and training script/training status data. During post-run, the program is designed to support evaluation through the reconstruction of displays to any point in time. A permanent record of the simulation exercise can be obtained via hardcopy output of the display data and via transfer to the Generalized Documentation Processor (GDP). Reference procedures data may be transferred from the GDP to the PPP. An interface is provided with the all-digital trajectory program, the Space Vehicle Dynamics Simulator (SVDS), to support initial procedures timeline development.
A Lagrangian stochastic model for aerial spray transport above an oak forest
Wang, Yansen; Miller, David R.; Anderson, Dean E.; McManus, Michael L.
1995-01-01
A transport model for aerial spray droplets has been developed by applying recent advances in Lagrangian stochastic simulation of heavy particles. A two-dimensional Lagrangian stochastic model was adopted to simulate spray droplet dispersion in atmospheric turbulence by adjusting the Lagrangian integral time scale along the drop trajectory. The other major physical processes affecting the transport of spray droplets above a forest canopy, aircraft wingtip vortices and droplet evaporation, were also included in each time step of the droplets' transport. The model was evaluated using data from an aerial spray field experiment. In generally neutral stability conditions, the accuracy of the model predictions varied from run to run, as expected. The average root-mean-square error was 24.61 IU cm−2, and the average relative error was 15%. The model prediction was adequate in two-dimensional steady wind conditions, but was less accurate in variable wind conditions. The results indicated that the model can successfully simulate the ensemble-average transport of aerial spray droplets under neutral, steady atmospheric wind conditions.
Flame-Vortex Studies to Quantify Markstein Numbers Needed to Model Flame Extinction Limits
NASA Technical Reports Server (NTRS)
Driscoll, James F.; Feikema, Douglas A.
2003-01-01
This work has quantified a database of Markstein numbers for unsteady flames; future work will quantify a database of flame extinction limits for unsteady conditions. Unsteady extinction limits have not been documented previously; both a stretch rate and a residence time must be measured, since extinction requires that the stretch rate be sufficiently large for a sufficiently long residence time. The Markstein number (Ma) was measured for an inwardly-propagating flame (IPF) that is negatively stretched under microgravity conditions. Computations also were performed using RUN-1DL to explain the measurements. The Markstein number of an inwardly-propagating flame, for both the microgravity experiment and the computations, is significantly larger than that of an outwardly-propagating flame (OPF). The computed profiles of the various species within the flame suggest reasons: computed hydrogen concentrations build up ahead of the IPF but not the OPF. Understanding was gained by running the computations for both simplified and full-chemistry conditions. To explain the experimental findings, numerical simulations of both inwardly and outwardly propagating spherical flames (with complex chemistry) were generated using the RUN-1DL code, which includes 16 species and 46 reactions.
Interactions between hyporheic flow produced by stream meanders, bars, and dunes
Stonedahl, Susa H.; Harvey, Judson W.; Packman, Aaron I.
2013-01-01
Stream channel morphology from grain-scale roughness to large meanders drives hyporheic exchange flow. In practice, it is difficult to model hyporheic flow over the wide spectrum of topographic features typically found in rivers. As a result, many studies only characterize isolated exchange processes at a single spatial scale. In this work, we simulated hyporheic flows induced by a range of geomorphic features including meanders, bars and dunes in sand bed streams. Twenty cases were examined with 5 degrees of river meandering. Each meandering river model was run initially without any small topographic features. Models were run again after superimposing only bars and then only dunes, and then run a final time after including all scales of topographic features. This allowed us to investigate the relative importance and interactions between flows induced by different scales of topography. We found that dunes typically contributed more to hyporheic exchange than bars and meanders. Furthermore, our simulations show that the volume of water exchanged and the distributions of hyporheic residence times resulting from various scales of topographic features are close to, but not linearly additive. These findings can potentially be used to develop scaling laws for hyporheic flow that can be widely applied in streams and rivers.
GPU accelerated Monte-Carlo simulation of SEM images for metrology
NASA Astrophysics Data System (ADS)
Verduin, T.; Lokhorst, S. R.; Hagen, C. W.
2016-03-01
In this work we address the computation times of numerical studies in dimensional metrology. In particular, full Monte-Carlo simulation programs for scanning electron microscopy (SEM) image acquisition are known to be notoriously slow. Our quest to reduce the computation time of SEM image simulation has led us to investigate the use of graphics processing units (GPUs) for metrology. We have succeeded in creating a full Monte-Carlo simulation program for SEM images, which runs entirely on a GPU. The physical scattering models of this GPU simulator are identical to a previous CPU-based simulator, which includes the dielectric function model for inelastic scattering and also refinements for low-voltage SEM applications. As a case study for the performance, we considered the simulated exposure of a complex feature: an isolated silicon line with rough sidewalls located on a flat silicon substrate. The surface of the rough feature is decomposed into 408,012 triangles. We have used an exposure dose of 6 mC/cm², which corresponds to 6,553,600 primary electrons on average (Poisson distributed). We repeat the simulation for various primary electron energies: 300 eV, 500 eV, 800 eV, 1 keV, 3 keV and 5 keV. We first ran the simulation on a GeForce GTX480 from NVIDIA. The very same simulation was duplicated on our CPU-based program, for which we used an Intel Xeon X5650. Apart from statistics in the simulation, no difference is found between the CPU- and GPU-simulated results. The GTX480 generates the images (depending on the primary electron energy) 350 to 425 times faster than a single-threaded Intel X5650 CPU. Although this is a tremendous speedup, we have not actually reached the maximum throughput because of the limited amount of available memory on the GTX480. Nevertheless, the speedup enables the fast acquisition of simulated SEM images for metrology. We now have the potential to investigate case studies in CD-SEM metrology which would otherwise take unreasonable amounts of computation time.
Regan, R. Steve; Niswonger, Richard G.; Markstrom, Steven L.; Barlow, Paul M.
2015-10-02
The spin-up simulation should be run for a length of time sufficient to establish antecedent conditions throughout the model domain. Each GSFLOW application can require a different length of time for hydrologic stresses to propagate through a coupled groundwater and surface-water system. Typically, groundwater hydrologic processes require many years to come into equilibrium with dynamic climate and other forcing (or stress) data, such as precipitation and well pumping, whereas runoff-dominated surface-water processes respond relatively quickly. Use of a spin-up simulation can substantially reduce execution-time requirements for applications where the time period of interest is small compared to the time for hydrologic memory; thus, use of the restart option can be an efficient strategy for forecast and calibration simulations that require multiple simulations starting from the same day.
Software Comparison for Renewable Energy Deployment in a Distribution Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian
The main objective of this report is to evaluate different software options for performing robust distributed generation (DG) power system modeling. The features and capabilities of four simulation tools, OpenDSS, GridLAB-D, CYMDIST, and PowerWorld Simulator, are compared to analyze their effectiveness in analyzing distribution networks with DG. OpenDSS and GridLAB-D, two open-source packages, have the capability to simulate networks with fluctuating data values; they allow a simulation to be run at each time instant by iterating only the main script file. CYMDIST, a commercial package, allows for time-series simulation to study variations in network controls. PowerWorld Simulator, another commercial tool, has a batch-mode simulation function through the 'Time Step Simulation' tool, which obtains solutions for a list of specified time points. PowerWorld Simulator is intended for analysis of transmission-level systems, while the other three are designed for distribution systems. CYMDIST and PowerWorld Simulator feature easy-to-use graphical user interfaces (GUIs). OpenDSS and GridLAB-D, on the other hand, are based on command-line programs, which increases the time necessary to become familiar with the software packages.
Running into Trouble with the Time-Dependent Propagation of a Wavepacket
ERIC Educational Resources Information Center
Garriz, Abel E.; Sztrajman, Alejandro; Mitnik, Dario
2010-01-01
The propagation in time of a wavepacket is a conceptually rich problem suitable to be studied in any introductory quantum mechanics course. This subject is covered analytically in most of the standard textbooks. Computer simulations have become a widespread pedagogical tool, easily implemented in computer labs and in classroom demonstrations.…
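A worked example of the kind of classroom simulation the article discusses, under stated assumptions (1-D free particle, hbar = m = 1, split-step Fourier propagation, illustrative grid and packet parameters):

```python
import numpy as np

N, L, dt = 1024, 200.0, 0.05
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

k0, sigma = 2.0, 5.0  # initial momentum and width of the Gaussian packet
psi = np.exp(-x**2 / (2 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)  # normalize

# For a free particle the k-space propagator is exact at any time step.
kinetic = np.exp(-1j * 0.5 * k**2 * dt)
for _ in range(400):  # propagate to t = 20
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))

print("norm:", (np.abs(psi)**2).sum() * dx)      # conserved, stays ~1
print("<x> :", (x * np.abs(psi)**2).sum() * dx)  # drifts to ~k0*t = 40
```

Watching the packet's centre move at the group velocity while its width spreads is exactly the behaviour the analytic textbook treatment predicts.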
Non-linear structure formation in the `Running FLRW' cosmological model
NASA Astrophysics Data System (ADS)
Bibiano, Antonio; Croton, Darren J.
2016-07-01
We present a suite of cosmological N-body simulations describing the `Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Lambda cold dark matter (ΛCDM) with a time-evolving vacuum density, Λ(z), and a time-evolving Newton's gravitational coupling, G(z). In this paper, we review the model and introduce the analytical treatment needed to adapt a reference N-body code. Our resulting simulations represent the first realization of the full growth history of structure in the R-FLRW cosmology into the non-linear regime, and our normalization choice makes them fully consistent with the latest cosmic microwave background data. The post-processing data products also allow, for the first time, an analysis of the properties of the halo and sub-halo populations. We explore the degeneracies of many statistical observables and discuss the steps needed to break them. Furthermore, we provide a quantitative description of the deviations of R-FLRW from ΛCDM, which could be readily exploited by future cosmological observations to test and further constrain the model.
Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution
NASA Technical Reports Server (NTRS)
Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria
2009-01-01
The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.
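The abstract does not give the algorithm's internals, but the resolve-by-pattern idea can be sketched generically. In this hypothetical toy (pattern names, fitness terms, and parameters are all invented for illustration, not the Autonomous Operations Planner's), each individual is a (pattern, magnitude) pair and fitness penalizes residual conflict and deviation from the required time of arrival:

```python
import random

PATTERNS = ["lateral_offset", "speed_change", "altitude_step"]

def fitness(ind):
    """Toy score: big penalty for remaining conflict, small one for RTA slip."""
    pattern, size = ind
    conflict = max(0.0, 1.0 - 0.8 * size)  # larger maneuvers clear the conflict
    rta_cost = {"lateral_offset": 0.3, "speed_change": 0.6,
                "altitude_step": 0.1}[pattern] * size
    return -(10.0 * conflict + rta_cost)

def mutate(ind):
    pattern, size = ind
    if random.random() < 0.2:
        pattern = random.choice(PATTERNS)
    size = min(2.0, max(0.0, size + random.gauss(0.0, 0.2)))
    return (pattern, size)

pop = [(random.choice(PATTERNS), random.uniform(0.0, 2.0)) for _ in range(30)]
for _ in range(40):  # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(20)]

print("best maneuver:", max(pop, key=fitness))
```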
Initial Data Analysis Results for ATD-2 ISAS HITL Simulation
NASA Technical Reports Server (NTRS)
Lee, Hanbong
2017-01-01
To evaluate the operational procedures and information requirements for the core functional capabilities of the ATD-2 project, such as the tactical surface metering tool, the APREQ-CFR procedure, and data element exchanges between ramp and tower, human-in-the-loop (HITL) simulations were performed in March 2017. This presentation shows the initial data analysis results from the HITL simulations. Various airport performance metrics were analyzed and compared with respect to the different runway configurations and metering values in the tactical surface scheduler. These metrics include gate holding time, taxi-out time, runway throughput, queue size and wait time in queue, and TMI flight compliance. In addition to the metering value, other factors affecting airport performance in the HITL simulation, including run duration, runway changes, and TMI constraints, are also discussed.
Theory and Simulation of Real and Ideal Magnetohydrodynamic Turbulence
NASA Technical Reports Server (NTRS)
Shebalin, John V.
2004-01-01
Incompressible, homogeneous magnetohydrodynamic (MHD) turbulence consists of fluctuating vorticity and magnetic fields, which are represented in terms of their Fourier coefficients. Here, a set of five Fourier spectral transform method numerical simulations of two-dimensional (2-D) MHD turbulence on a 512² grid is described. Each simulation is a numerically realized dynamical system consisting of Fourier modes associated with wave vectors k, with integer components, such that k = |k| ≤ k_max. The simulation set consists of one ideal (non-dissipative) case and four real (dissipative) cases. All five runs had equivalent initial conditions. The dimensions of the dynamical systems associated with these cases are the numbers of independent real and imaginary parts of the Fourier modes. The ideal simulation has a dimension of 366104, while each real simulation has a dimension of 411712. The real runs vary in magnetic Prandtl number P_M, with P_M ∈ {0.1, 0.25, 1, 4}. In the results presented here, all runs have been taken to a simulation time of t = 25. Although ideal and real Fourier spectra are quite different at high k, they are similar at low values of k. Their low-k behavior indicates the existence of broken symmetry and coherent structure in real MHD turbulence, similar to what exists in ideal MHD turbulence. The value of P_M strongly affects the ratio of kinetic to magnetic energy and energy dissipation (which is mostly ohmic). The relevance of these results to 3-D Navier-Stokes and MHD turbulence is discussed.
Sams, J. I.; Witt, E. C.
1995-01-01
The Hydrological Simulation Program - Fortran (HSPF) was used to simulate streamflow and sediment transport in two surface-mined basins of Fayette County, Pa. Hydrologic data from the Stony Fork Basin (0.93 square miles) was used to calibrate HSPF parameters. The calibrated parameters were applied to an HSPF model of the Poplar Run Basin (8.83 square miles) to evaluate the transfer value of model parameters. The results of this investigation provide information to the Pennsylvania Department of Environmental Resources, Bureau of Mining and Reclamation, regarding the value of the simulated hydrologic data for use in cumulative hydrologic-impact assessments of surface-mined basins. The calibration period was October 1, 1985, through September 30, 1988 (water years 1986-88). The simulated data were representative of the observed data from the Stony Fork Basin. Mean simulated streamflow was 1.64 cubic feet per second compared to measured streamflow of 1.58 cubic feet per second for the 3-year period. The difference between the observed and simulated peak stormflow ranged from 4.0 to 59.7 percent for 12 storms. The simulated sediment load for the 1987 water year was 127.14 tons (0.21 ton per acre), which compares to a measured sediment load of 147.09 tons (0.25 ton per acre). The total simulated suspended-sediment load for the 3-year period was 538.2 tons (0.30 ton per acre per year), which compares to a measured sediment load of 467.61 tons (0.26 ton per acre per year). The model was verified by comparing observed and simulated data from October 1, 1988, through September 30, 1989. The results obtained were comparable to those from the calibration period. The simulated mean daily discharge was representative of the range of data observed from the basin and of the frequency with which specific discharges were equalled or exceeded. The calibrated and verified parameters from the Stony Fork model were applied to an HSPF model of the Poplar Run Basin. The two basins are in a similar physical setting. Data from October 1, 1987, through September 30, 1989, were used to evaluate the Poplar Run model. In general, the results from the Poplar Run model were comparable to those obtained from the Stony Fork model. The difference between observed and simulated total streamflow was 1.1 percent for the 2-year period. The mean annual streamflow simulated by the Poplar Run model was 18.3 cubic feet per second. This compares to an observed streamflow of 18.15 cubic feet per second. For the 2-year period, the simulated sediment load was 2,754 tons (0.24 ton per acre per year), which compares to a measured sediment load of 3,051.2 tons (0.27 ton per acre per year) for the Poplar Run Basin. Cumulative frequency-distribution curves of the observed and simulated streamflow compared well. The comparison between observed and simulated data improved as the time span increased. Simulated annual means and totals were more representative of the observed data than hourly data used in comparing storm events. The structure and organization of the HSPF model facilitated the simulation of a wide range of hydrologic processes. The simulation results from this investigation indicate that model parameters may be transferred to ungaged basins to generate representative hydrologic data through modeling techniques.
Optimizing Utilization of Detectors
2016-03-01
provide a quantifiable process to determine how much time should be allocated to each task sharing the same asset. This optimized expected time... allocation is calculated by numerical analysis and Monte Carlo simulation. Numerical analysis determines the expectation by involving an integral and... determines the optimum time allocation of the asset by repeatedly running experiments to approximate the expectation of the random variables. This
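The two approaches named in the snippet can be contrasted on a toy expected-value problem (the exponential detection-time model below is illustrative, not from the report):

```python
import numpy as np
from scipy.integrate import quad

rate = 0.5  # hypothetical detections per hour
pdf = lambda t: rate * np.exp(-rate * t)

# Numerical analysis: evaluate the defining integral E[T] = int t f(t) dt.
e_quad, _ = quad(lambda t: t * pdf(t), 0.0, np.inf)

# Monte Carlo: repeatedly "run the experiment" and average the outcomes.
rng = np.random.default_rng(0)
e_mc = rng.exponential(1.0 / rate, size=200_000).mean()

print(e_quad, e_mc)  # both approach 1/rate = 2.0 hours
```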
Statistical Emulator for Expensive Classification Simulators
NASA Technical Reports Server (NTRS)
Ross, Jerret; Samareh, Jamshid A.
2016-01-01
Expensive simulators prevent any kind of meaningful analysis from being performed on the phenomena they model. To get around this problem, the concept of using a statistical emulator as a surrogate representation of the simulator was introduced in the 1980s. Presently, simulators have become more and more complex, and as a result running a single example on these simulators is very expensive and can take days, weeks, or even months. Many new techniques, termed criteria, have been introduced that sequentially select the next best (most informative to the emulator) point to be run on the simulator. These criteria allow for the creation of an emulator with only a small number of simulator runs. We follow and extend this framework to expensive classification simulators.
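A hedged sketch of one such sequential criterion (the simulator, candidate pool, and uncertainty rule below are all illustrative): fit a cheap emulator, here a Gaussian-process classifier, to the runs so far, then spend the next expensive run on the candidate point the emulator is least sure about.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

def expensive_simulator(x):
    """Hypothetical stand-in for an expensive pass/fail simulation."""
    return int(np.sin(3.0 * x[0]) + x[1] > 1.0)

# Small deterministic initial design covering the domain.
X = np.array([[a, b] for a in (0.2, 1.0, 1.8) for b in (0.2, 1.0, 1.8)])
y = np.array([expensive_simulator(x) for x in X])

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 2.0, size=(500, 2))

for _ in range(10):  # only 10 additional simulator runs
    gpc = GaussianProcessClassifier().fit(X, y)
    p = gpc.predict_proba(candidates)[:, 1]
    pick = int(np.argmin(np.abs(p - 0.5)))  # most uncertain = most informative
    X = np.vstack([X, candidates[pick]])
    y = np.append(y, expensive_simulator(candidates[pick]))

print("points run on the simulator:", len(y))
```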
Software Framework for Advanced Power Plant Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
John Widmann; Sorin Munteanu; Aseem Jain
2010-08-01
This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.
HYDES: A generalized hybrid computer program for studying turbojet or turbofan engine dynamics
NASA Technical Reports Server (NTRS)
Szuch, J. R.
1974-01-01
This report describes HYDES, a hybrid computer program capable of simulating one-spool turbojet, two-spool turbojet, or two-spool turbofan engine dynamics. HYDES is also capable of simulating two- or three-stream turbofans with or without mixing of the exhaust streams. The program is intended to reduce the time required for implementing dynamic engine simulations. HYDES was developed for running on the Lewis Research Center's Electronic Associates (EAI) 690 Hybrid Computing System and satisfies the 16384-word core-size and hybrid-interface limits of that machine. The program could be modified for running on other computing systems. The use of HYDES to simulate a single-spool turbojet and a two-spool, two-stream turbofan engine is demonstrated. The form of the required input data is shown and samples of output listings (teletype) and transient plots (x-y plotter) are provided. HYDES is shown to be capable of performing both steady-state design and off-design analyses and transient analyses.
Parallelization and automatic data distribution for nuclear reactor simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebrock, L.M.
1997-07-01
Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine cannot run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed-of-light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel, with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.
Real-time electron dynamics for massively parallel excited-state simulations
NASA Astrophysics Data System (ADS)
Andrade, Xavier
The simulation of the real-time dynamics of electrons, based on time dependent density functional theory (TDDFT), is a powerful approach to study electronic excited states in molecular and crystalline systems. What makes the method attractive is its flexibility to simulate different kinds of phenomena beyond the linear-response regime, including strongly-perturbed electronic systems and non-adiabatic electron-ion dynamics. Electron-dynamics simulations are also attractive from a computational point of view. They can run efficiently on massively parallel architectures due to the low communication requirements. Our implementations of electron dynamics, based on the codes Octopus (real-space) and Qball (plane-waves), allow us to simulate systems composed of thousands of atoms and to obtain good parallel scaling up to 1.6 million processor cores. Due to the versatility of real-time electron dynamics and its parallel performance, we expect it to become the method of choice to apply the capabilities of exascale supercomputers for the simulation of electronic excited states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, Srikanta; Jin, Larry; He, Jincong
2015-06-30
Reduced-order models provide a means for greatly accelerating the detailed simulations that will be required to manage CO2 storage operations. In this work, we investigate the use of one such method, POD-TPWL, which has previously been shown to be effective in oil reservoir simulation problems. This method combines trajectory piecewise linearization (TPWL), in which the solution to a new (test) problem is represented through a linearization around the solution to a previously-simulated (training) problem, with proper orthogonal decomposition (POD), which enables solution states to be expressed in terms of a relatively small number of parameters. We describe the application of POD-TPWL for CO2-water systems simulated using a compositional procedure. Stanford's Automatic Differentiation-based General Purpose Research Simulator (AD-GPRS) performs the full-order training simulations and provides the output (derivative matrices and system states) required by the POD-TPWL method. A new POD-TPWL capability introduced in this work is the use of horizontal injection wells that operate under rate (rather than bottom-hole pressure) control. Simulation results are presented for CO2 injection into a synthetic aquifer and into a simplified model of the Mount Simon formation. Test cases involve the use of time-varying well controls that differ from those used in training runs. Results of reasonable accuracy are consistently achieved for relevant well quantities. Runtime speedups of around a factor of 370 relative to full-order AD-GPRS simulations are achieved, though the preprocessing needed for POD-TPWL model construction corresponds to the computational requirements of about 2.3 full-order simulation runs. A preliminary treatment for POD-TPWL modeling in which test cases differ from training runs in terms of geological parameters (rather than well controls) is also presented. Results in this case involve only small differences between training and test runs, though they do demonstrate that the approach is able to capture basic solution trends. The impact of some of the detailed numerical treatments within the POD-TPWL formulation is considered in an Appendix.
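The POD half of the method is easy to sketch. A minimal illustration (synthetic snapshot matrix; in the paper the snapshots come from full-order AD-GPRS training runs): collect states as columns, take the SVD, and keep the leading left singular vectors as the reduced basis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_snaps = 5000, 120

# Synthetic snapshots dominated by a few spatial modes plus small noise.
modes = rng.standard_normal((n_cells, 5))
weights = rng.standard_normal((5, n_snaps))
S = modes @ weights + 0.01 * rng.standard_normal((n_cells, n_snaps))

U, svals, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(svals**2) / np.sum(svals**2)
r = int(np.searchsorted(energy, 0.9999)) + 1  # basis capturing 99.99% energy
Phi = U[:, :r]

x = S[:, 42]   # one full-order state (n_cells unknowns)
z = Phi.T @ x  # its reduced representation (r unknowns)
err = np.linalg.norm(x - Phi @ z) / np.linalg.norm(x)
print(f"basis size {r}, relative reconstruction error {err:.2e}")
```

TPWL then linearizes the full-order residual around saved training states and solves only r-dimensional systems, which is where the large runtime speedups come from.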
Effects of simulated weightlessness on fish otolith growth: Clinostat versus Rotating-Wall Vessel
NASA Astrophysics Data System (ADS)
Brungs, Sonja; Hauslage, Jens; Hilbig, Reinhard; Hemmersbach, Ruth; Anken, Ralf
2011-09-01
Stimulus dependence is a general feature of developing sensory systems. It has been shown earlier that the growth of inner ear heavy stones (otoliths) of late-stage Cichlid fish (Oreochromis mossambicus) and Zebrafish (Danio rerio) is slowed down by hypergravity, whereas microgravity during space flight yields an opposite effect, i.e. larger than 1 g otoliths, in Swordtail (Xiphophorus helleri) and in Cichlid fish late-stage embryos. These and related studies proposed that otolith growth is actively adjusted via a feedback mechanism to produce a test mass of the appropriate physical capacity. Using ground-based techniques to apply simulated weightlessness, long-term clinorotation (CR; exposure on a fast-rotating clinostat with one axis of rotation) led to larger than 1 g otoliths in late-stage Cichlid fish. Larger than normal otoliths were also found in early-stage Zebrafish embryos after short-term Wall Vessel Rotation (WVR; also regarded as a method to simulate weightlessness). These results are basically in line with the results obtained on Swordtails from space flight. Thus, the growth of fish inner ear otoliths seems to be an appropriate parameter to assess the quality of "simulated weightlessness" provided by a particular simulation device. Since CR and WVR are in worldwide use to simulate weightlessness conditions on ground using small-sized specimens, we were prompted to directly compare the effects of CR and WVR on otolith growth using developing Cichlids as the model organism. Animals were simultaneously subjected to CR and WVR from a point in time when otolith primordia had begun to calcify both within the utricle (gravity perception) and the saccule (hearing); the respective otoliths are the lapilli and the sagittae. Three such runs were carried out, using three different batches of fish. The runs were discontinued when the animals began to hatch. In the course of all three runs performed, CR led to larger than normal lapilli, whereas WVR had no effect on the growth of these otoliths. Regarding sagittae, CR resulted in larger than normal stones in one of the three runs. The other CR runs and all WVR runs had no effect on sagittal growth. These results clearly indicate that CR, rather than WVR, can be regarded as a device to simulate weightlessness using the Cichlid as a model organism. Since WVR has earlier been shown to affect otolith growth in Zebrafish, the lifestyle of an animal (mouth-breeding versus egg-laying) seems to be of considerable importance. Further studies using a variety of simulation techniques (including, e.g., magnetic levitation and random positioning) and various species are needed in order to identify the most appropriate technique to simulate weightlessness for a particular model organism.
3RIP Evaluation of the Performance of the Search System Using a Realtime Simulation Technique.
ERIC Educational Resources Information Center
Lofstrom, Mats
This report describes a real-time simulation experiment to evaluate the performance of the search and editing system 3RIP, an interactive system written in the language BLISS on a DEC-10 computer. The test vehicle, preliminary test runs, and capacity test are detailed, and the following conclusions are reported: (1) 3RIP performs well up to the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, Sownak; Li, Baojiu; He, Jian-hua
We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
Run-up Variability due to Source Effects
NASA Astrophysics Data System (ADS)
Del Giudice, Tania; Zolezzi, Francesca; Traverso, Chiara; Valfrè, Giulio; Poggi, Pamela; Parker, Eric J.
2010-05-01
This paper investigates the variability of tsunami run-up at a specific location due to uncertainty in earthquake source parameters. It is important to quantify this 'inter-event' variability for probabilistic assessments of tsunami hazard. In principle, this aspect of variability could be studied by comparing field observations at a single location from a number of tsunamigenic events caused by the same source. As such an extensive dataset does not exist, we decided to study the inter-event variability through numerical modelling. We attempt to answer the question 'What is the potential variability of tsunami wave run-up at a specific site, for a given magnitude earthquake occurring at a known location?'. The uncertainty is expected to arise from the lack of knowledge regarding the specific details of the fault rupture 'source' parameters. The following steps were followed: the statistical distributions of the main earthquake source parameters affecting the tsunami height were established by studying fault plane solutions of known earthquakes; a case study based on a possible tsunami impact on the Egyptian coast was set up and simulated, varying the geometrical parameters of the source; simulation results were analyzed, deriving relationships between run-up height and source parameters; using the derived relationships, a Monte Carlo simulation was performed in order to create the dataset necessary to investigate the inter-event variability of the run-up height along the coast; and the inter-event variability of the run-up height along the coast was investigated. Given the distribution of source parameters and their variability, we studied how this variability propagates to the run-up height, using the Cornell 'Multi-grid coupled Tsunami Model' (COMCOT). The case study was based on the large thrust fault offshore the south-western Greek coast, thought to have been responsible for the infamous 1303 tsunami. Numerical modelling of the event was used to assess the impact on the North African coast. The effects of uncertainty in fault parameters were assessed by perturbing the base model and observing the variation in wave height along the coast. The tsunami wave run-up was computed at 4020 locations along the Egyptian coast between longitudes 28.7 E and 33.8 E. To assess the effects of fault parameter uncertainty, input model parameters were varied and the effects on run-up were analyzed. The simulations show that for a given point there are linear relationships between run-up and both fault dislocation and rupture length. A superposition analysis shows that a linear combination of the effects of the different source parameters leads to a good approximation of the simulated results. This relationship is then used as the basis for a Monte Carlo simulation. The Monte Carlo simulation was performed for 1600 scenarios at each of the 4020 points along the coast. The coefficient of variation (the ratio between the standard deviation of the results and the average of the run-up heights along the coast) ranges between 0.14 and 3.11, with an average value along the coast equal to 0.67. The coefficient of variation of normalized run-up has been compared with the standard deviation of the spectral acceleration attenuation laws used in probabilistic seismic hazard assessment studies. These values have a similar meaning, and the uncertainty in the two cases is similar. The 'rule of thumb' relationship between mean and sigma can be expressed as follows: μ + σ ≈ 2μ.
The implication is that the uncertainty in run-up estimation gives a range of values within approximately two times the average. This uncertainty should be considered in tsunami hazard analysis, including inundation and risk maps, evacuation plans, and other related steps.
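As a rough illustration of the Monte Carlo step described above (not the authors' code), the following Python sketch propagates assumed source-parameter distributions through an invented linear run-up relationship and reports the coefficient of variation; every coefficient and distribution here is a placeholder, not a value from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1600  # scenarios, matching the study's Monte Carlo size

# Hypothetical linear response coefficients for one coastal point,
# standing in for the regression fits derived from the COMCOT runs.
a_slip, a_len, r0 = 0.9, 0.004, 1.2   # m run-up per m slip, per km length, offset

# Assumed source-parameter distributions (illustrative only).
slip = rng.lognormal(mean=np.log(4.0), sigma=0.4, size=N)   # fault slip [m]
length = rng.normal(loc=150.0, scale=30.0, size=N)          # rupture length [km]

runup = r0 + a_slip * slip + a_len * length                 # linear superposition
cv = runup.std() / runup.mean()                             # coefficient of variation
print(f"mean run-up = {runup.mean():.2f} m, CV = {cv:.2f}")
```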
Electronics and Software Engineer for Robotics Project Intern
NASA Technical Reports Server (NTRS)
Teijeiro, Antonio
2017-01-01
I was assigned to mentor high school students for the 2017 FIRST Robotics Competition. Using a team-based approach, I worked with the students to program the robot and applied my electrical background to build the robot from start to finish. I worked with students who had an interest in electrical engineering to teach them about voltage, current, pulse width modulation, solenoids, electromagnets, relays, DC motors, DC motor controllers, crimping and soldering electrical components, Java programming, and robotic simulation. For the simulation, we worked together to generate graphics files, write simulator description format code, operate Linux, and operate SOLIDWORKS. Upon completion of the FRC season, I transitioned to providing full-time support for the LCS hardware team. During this phase of my internship I helped my co-intern write test steps for two networking hardware DVTs, as well as run cables and update cable running lists.
Running SW4 On New Commodity Technology Systems (CTS-1) Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, Arthur J.; Petersson, N. Anders; Pitarka, Arben
We have recently been running earthquake ground motion simulations with SW4 on the new capacity computing systems, called the Commodity Technology Systems - 1 (CTS-1), at Lawrence Livermore National Laboratory (LLNL). SW4 is a fourth order time domain finite difference code developed by LLNL and distributed by the Computational Infrastructure for Geodynamics (CIG). SW4 simulates seismic wave propagation in complex three-dimensional Earth models including anelasticity and surface topography. We are modeling near-fault earthquake strong ground motions for the purposes of evaluating the response of engineered structures, such as nuclear power plants and other critical infrastructure. Engineering analysis of structures requires the inclusion of high frequencies which can cause damage, but are often difficult to include in simulations because of the need for large memory to model fine grid spacing on large domains.
Multi-Model Validation in the Chesapeake Bay Region During Frontier Sentinel 2010
2012-09-28
which a 72-hr forecast took approximately 1 hr. Identical runs were performed on the DoD Supercomputing Resources Center (DSRC) host “DaVinci” at the...performance Navy DSRC host DaVinci. Products of water level and horizontal current maps as well as station time series, identical to those produced by the...forecast meteorological fields. The NCOM simulations were run daily on 128 CPUs at the Navy DSRC host DaVinci and required approximately 5 hrs of wall
2017-06-01
maintenance times from the fleet are randomly resampled when running the model to enhance model realism. The use of a simulation model to represent the...helicopter regiment. 2. Attack Helicopter UH TIGER The EC665, or Airbus Helicopter TIGER (Figure 3), is a four-bladed, twin-engine multi-role attack...migrated into the automated management system SAP Standard Product Family (SASPF), and the usage clock starts to run with the amount of the current
Methods and Measurements in Real-Time Air Traffic Control System Simulation
1983-04-01
...runs for each of 31 subjects under each of 6 sector geometry-traffic density combinations (cells). Initial analyses, involving correlations between the...two runs in each cell, indicated very low correlations between the replicates. It was decided that before going further it would be best to conduct a
Anhøj, Jacob
2015-01-01
Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which will turn up as non-random patterns in the distribution of data points around the median. Non-random variation may be identified by simple statistical tests including the presence of unusually long runs of data points on one side of the median or if the graph crosses the median unusually few times. However, there is no general agreement on what defines “unusually long” or “unusually few”. Other tests of questionable value are frequently used as well. Three sets of run chart rules (Anhoej, Perla, and Carey rules) have been published in peer reviewed healthcare journals, but these sets differ significantly in their sensitivity and specificity to non-random variation. In this study I investigate the diagnostic values expressed by likelihood ratios of three sets of run chart rules for detection of shifts in process performance using random data series. The study concludes that the Anhoej rules have good diagnostic properties and are superior to the Perla and the Carey rules. PMID:25799549
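The two signals this study focuses on, unusually long runs on one side of the median and unusually few median crossings, are simple to compute. The Python sketch below is a generic illustration rather than any of the published rule sets; the cut-off values are placeholders, whereas the published rules (for example the Anhoej rules) derive them from the number of data points.

```python
import numpy as np

def run_chart_signals(y, longest_run_limit, crossings_limit):
    """Flag non-random variation: an unusually long run on one side of
    the median, or unusually few crossings of the median."""
    y = np.asarray(y, dtype=float)
    median = np.median(y)
    side = np.sign(y - median)
    side = side[side != 0]          # points on the median neither break nor extend a run
    longest, current = 1, 1         # longest run of points on the same side
    for a, b in zip(side[:-1], side[1:]):
        current = current + 1 if a == b else 1
        longest = max(longest, current)
    crossings = int(np.sum(side[:-1] != side[1:]))
    return longest > longest_run_limit, crossings < crossings_limit

# Example: 20 points with an upward shift halfway through.
rng = np.random.default_rng(1)
data = np.r_[rng.normal(0, 1, 10), rng.normal(2, 1, 10)]
print(run_chart_signals(data, longest_run_limit=8, crossings_limit=6))
```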
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
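For context, the baseline the paper improves upon, a second-order trapezoidal step with a Newton solve for the resulting nonlinear system, can be sketched as follows. This is a generic textbook illustration on a stiff linear test problem, not the dislocation dynamics code.

```python
import numpy as np

def trapezoidal_step(f, jac, y, t, dt, tol=1e-10, max_iter=20):
    """One implicit trapezoidal step: solve g(z) = z - y - dt/2 (f(t,y) + f(t+dt,z)) = 0
    for z = y_{n+1} with Newton's method."""
    fy = f(t, y)
    z = y + dt * fy                                  # explicit Euler predictor
    for _ in range(max_iter):
        g = z - y - 0.5 * dt * (fy + f(t + dt, z))
        if np.linalg.norm(g) < tol:
            break
        J = np.eye(len(y)) - 0.5 * dt * jac(t + dt, z)
        z = z - np.linalg.solve(J, g)                # Newton update
    return z

# Stiff linear test problem y' = A y.
A = np.array([[-1000.0, 0.0], [1.0, -1.0]])
f = lambda t, y: A @ y
jac = lambda t, y: A
y = np.array([1.0, 1.0])
for n in range(10):
    y = trapezoidal_step(f, jac, y, n * 0.01, 0.01)
print(y)
```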
Full-Body Musculoskeletal Model for Muscle-Driven Simulation of Human Gait.
Rajagopal, Apoorva; Dembia, Christopher L; DeMers, Matthew S; Delp, Denny D; Hicks, Jennifer L; Delp, Scott L
2016-10-01
Musculoskeletal models provide a non-invasive means to study human movement and predict the effects of interventions on gait. Our goal was to create an open-source 3-D musculoskeletal model with high-fidelity representations of the lower limb musculature of healthy young individuals that can be used to generate accurate simulations of gait. Our model includes bony geometry for the full body, 37 degrees of freedom to define joint kinematics, Hill-type models of 80 muscle-tendon units actuating the lower limbs, and 17 ideal torque actuators driving the upper body. The model's musculotendon parameters are derived from previous anatomical measurements of 21 cadaver specimens and magnetic resonance images of 24 young healthy subjects. We tested the model by evaluating its computational time and accuracy of simulations of healthy walking and running. Generating muscle-driven simulations of normal walking and running took approximately 10 minutes on a typical desktop computer. The differences between our muscle-generated and inverse dynamics joint moments were within 3% (RMSE) of the peak inverse dynamics joint moments in both walking and running, and our simulated muscle activity showed qualitative agreement with salient features from experimental electromyography data. These results suggest that our model is suitable for generating muscle-driven simulations of healthy gait. We encourage other researchers to further validate and apply the model to study other motions of the lower extremity. The model is implemented in the open-source software platform OpenSim. The model and data used to create and test the simulations are freely available at https://simtk.org/home/full_body/, allowing others to reproduce these results and create their own simulations.
Characterization of the Body-to-Body Propagation Channel for Subjects during Sports Activities.
Mohamed, Marshed; Cheffena, Michael; Moldsvor, Arild
2018-02-18
Body-to-body wireless networks (BBWNs) have great potential to find applications in team sports activities, among others. However, successful design of such systems requires a thorough understanding of the communication channel, as the movement of the body components causes time-varying shadowing and fading effects. In this study, we present results of a measurement campaign of BBWNs during running and cycling activities. Among others, the results indicated the presence of good and bad states, with each state following a specific distribution for the considered propagation scenarios. This motivated the development of a two-state semi-Markov model for simulation of the communication channels. The simulation model was validated against the available measurement data in terms of first and second order statistics and showed good agreement. The first order statistics obtained from the simulation model as well as the measured results were then used to analyze the performance of BBWN channels under running and cycling activities in terms of capacity and outage probability. Cycling channels showed better performance than running channels, having higher channel capacity and lower outage probability, regardless of the speed of the subjects involved in the measurement campaign.
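A minimal sketch of the two-state idea, assuming lognormal sojourn times and Gaussian signal levels in each state; the paper's fitted distributions and parameters are not reproduced here, so all numbers below are placeholders. The outage estimate at the end mirrors the kind of first-order statistic the authors derive from their model.

```python
import numpy as np

rng = np.random.default_rng(0)

def semi_markov_channel(n_steps, good_params, bad_params):
    """Two-state semi-Markov channel: sojourn times in each state are drawn
    from state-specific (here lognormal, as an assumption) distributions,
    and the signal level follows the current state."""
    gains, state = [], "good"
    while len(gains) < n_steps:
        if state == "good":
            dwell = max(1, int(rng.lognormal(*good_params)))
            gains += list(rng.normal(0.0, 1.0, dwell))    # mild fading [dB]
            state = "bad"
        else:
            dwell = max(1, int(rng.lognormal(*bad_params)))
            gains += list(rng.normal(-10.0, 3.0, dwell))  # deep shadowing [dB]
            state = "good"
    return np.array(gains[:n_steps])

trace = semi_markov_channel(10_000, good_params=(3.0, 0.5), bad_params=(2.0, 0.5))
outage = np.mean(trace < -8.0)   # outage probability below a -8 dB threshold
print(f"outage probability = {outage:.3f}")
```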
Tribological measurements on a Charnley-type artificial hip joint
NASA Technical Reports Server (NTRS)
Jones, W. R., Jr.
1983-01-01
A total hip simulator was used to determine the friction and wear properties of Charnley-type (316L stainless steel balls and sterile ultrahigh molecular weight polyethylene cups) hip prostheses. Three different sets of specimens were tested to 395,000, 101,500 and 233,000 walking cycles, respectively. All tests were run unlubricated, at ambient conditions (22 to 26 C, 30 to 50 percent relative humidity), at 30 walking cycles per minute, under a dynamic load simulating walking. Polyethylene cup wear rates ranged from 1.4 to 3.9 × 10⁻¹⁰ cu m, which corresponds to dimensional losses of 4.0 to 11 microns per year. Although these wear rates are lower than those obtained from other hip simulators and from in vivo X-ray measurements, they are comparable when taking run-in and plastic deformation into account. Maximum tangential friction forces ranged from 93 to 129 N under variable load (267 to 3090 N range) and from 93 to 143 N under a static load of 3090 N. A portion of one test (250,000 walking cycles) run under dry air (less than 1 percent relative humidity) yielded a wear rate almost 6 times greater than that obtained under wet air (greater than 70 percent relative humidity) conditions.
NASA Astrophysics Data System (ADS)
Rusgiyarto, Ferry; Sjafruddin, Ade; Frazila, Russ Bona; Suprayogi
2017-06-01
Increasing container traffic and land-acquisition problems for terminal expansion lead to the use of an external yard in a port buffer area. This condition influences terminal performance because the road connecting the terminal and the external yard is also used by non-container traffic. A location choice problem was considered to address this condition, but previous research has not yet taken into account the stochastic nature of container arrival rates and service times. A bi-level programming framework was used to find the optimum location configuration. In the lower level, it is difficult to construct an equation that couples terminal operation and road traffic, because the two equilibrate on different time cycles: containers move from the quay to the terminal gate on a daily time unit, while they move from the terminal gate to the external yard along the road on a per-minute time unit. If the equation is formulated as an hourly equilibrium, it cannot capture the container movement characteristics in the terminal; if it is formulated as a daily equilibrium, it cannot capture the traffic characteristics on the road. This problem can be addressed with a simulation model. A discrete event simulation model was used to simulate import container flows in the container terminal and external yard. The optimum location configuration in the upper level is a combinatorial problem, which was solved by a full enumeration approach. The objective function of the external yard location model was to minimize user transport cost (or time) and to maximize operator benefit. A numerical experiment was run for a scenario assuming two container handling ways, three external yards, and a thirty-day simulation period. Jakarta International Container Terminal (JICT) container characteristics data were used for the simulation. Based on five runs of 5, 10, 15, 20, and 30 repetitions, operating one of the three available external yards (external yard 3) was the optimum result. The model thus confirmed the hypothesis that there is an optimum configuration of the external yard. Nevertheless, the model needs further elaboration of the objective function and the optimization constraints, and it requires detailed validation, in terms of service time values, distribution patterns, and arrival rates for each modeled server, in the next step of the research. The model gave unique and relatively consistent values on each run, indicating that the method is a promising way to address the research problem.
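A minimal discrete-event sketch of an import-container gate in Python using the SimPy library, to illustrate the modeling style described above; the exponential rates, two handling ways, and thirty-day horizon are scenario assumptions for illustration, not JICT data.

```python
import random
import simpy

RNG = random.Random(7)

def container(env, gate, log):
    arrive = env.now
    with gate.request() as req:                       # wait for a free handling way
        yield req
        yield env.timeout(RNG.expovariate(1 / 4.0))   # service time, assumed exp(mean 4 min)
    log.append(env.now - arrive)                      # time in system [min]

def arrivals(env, gate, log):
    while True:
        yield env.timeout(RNG.expovariate(1 / 3.0))   # inter-arrival, assumed exp(mean 3 min)
        env.process(container(env, gate, log))

env = simpy.Environment()
gate = simpy.Resource(env, capacity=2)                # two container handling ways
times = []
env.process(arrivals(env, gate, times))
env.run(until=30 * 24 * 60)                           # thirty simulated days, in minutes
print(f"{len(times)} containers served, mean time in system {sum(times) / len(times):.1f} min")
```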
Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures
NASA Technical Reports Server (NTRS)
Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.
1990-01-01
A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis), and injects transient errors in run-time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.
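A toy sketch of the dual-channel comparison idea (not the mixed-mode simulator itself): inject a single transient bit flip into one channel of a duplicated computation and count how often the disagreement is detected. The 16-bit adder below stands in for the real controller logic.

```python
import random

def run_adder(a, b, fault_bit=None):
    """Toy 'channel': a 16-bit add whose result can suffer a single
    transient bit flip, standing in for an injected fault."""
    s = (a + b) & 0xFFFF
    if fault_bit is not None:
        s ^= (1 << fault_bit)            # inject the transient
    return s

rng = random.Random(0)
trials, detected = 10_000, 0
for _ in range(trials):
    a, b = rng.getrandbits(16), rng.getrandbits(16)
    bit = rng.randrange(16)
    # Dual-channel comparison: a fault in one channel is detected
    # whenever the two channels disagree.
    if run_adder(a, b, fault_bit=bit) != run_adder(a, b):
        detected += 1
print(f"coverage of single transients: {detected / trials:.3f}")
```

With a single flipped bit the channels always disagree, which is the intuition behind the 100 percent single-transient coverage reported above.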
Memory interface simulator: A computer design aid
NASA Technical Reports Server (NTRS)
Taylor, D. S.; Williams, T.; Weatherbee, J. E.
1972-01-01
Results are presented of a study conducted with a digital simulation model being used in the design of the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. The model simulates the activity involved as instructions are fetched from random access memory for execution in one of the system central processing units. A series of model runs measured instruction execution time under various assumptions pertaining to the CPUs and the interface between the CPUs and RAM. Design tradeoffs are presented in the following areas: bus widths, CPU microprogram read-only memory cycle time, multiple instruction fetch, and instruction mix.
Monitoring Object Library Usage and Changes
NASA Technical Reports Server (NTRS)
Owen, R. K.; Craw, James M. (Technical Monitor)
1995-01-01
The NASA Ames Numerical Aerodynamic Simulation program Aeronautics Consolidated Supercomputing Facility (NAS/ACSF) supercomputing center services over 1600 users, and has numerous analysts with root access. Several tools have been developed to monitor object library usage and changes. Some of the tools do "noninvasive" monitoring and other tools implement run-time logging even for object-only libraries. The run-time logging identifies who, when, and what is being used. The benefits are that real usage can be measured, unused libraries can be discontinued, training and optimization efforts can be focused at those numerical methods that are actually used. An overview of the tools will be given and the results will be discussed.
Parallel 3D Multi-Stage Simulation of a Turbofan Engine
NASA Technical Reports Server (NTRS)
Turner, Mark G.; Topp, David A.
1998-01-01
A 3D multistage simulation of each component of a modern GE Turbofan engine has been made. An axisymmetric view of this engine is presented in the document. This includes a fan, booster rig, high pressure compressor rig, high pressure turbine rig and a low pressure turbine rig. In the near future, all components will be run in a single calculation for a solution of 49 blade rows. The simulation exploits the use of parallel computations by using two levels of parallelism. Each blade row is run in parallel and each blade row grid is decomposed into several domains and run in parallel. 20 processors are used for the 4 blade row analysis. The average passage approach developed by John Adamczyk at NASA Lewis Research Center has been further developed and parallelized. This is APNASA Version A. It is a Navier-Stokes solver using a 4-stage explicit Runge-Kutta time marching scheme with variable time steps and residual smoothing for convergence acceleration. It has an implicit k-ε turbulence model which uses an ADI solver to factor the matrix. Between 50 and 100 explicit time steps are solved before a blade row body force is calculated and exchanged with the other blade rows. This outer iteration has been coined a "flip." Efforts have been made to make the solver linearly scalable with the number of blade rows. Enough flips are run (between 50 and 200) so the solution in the entire machine is not changing. The k-ε equations are generally solved every other explicit time step. One of the key requirements in the development of the parallel code was to make the parallel solution exactly (bit for bit) match the serial solution. This has helped isolate many small parallel bugs and guarantee the parallelization was done correctly. The domain decomposition is done only in the axial direction since the number of points axially is much larger than the other two directions. This code uses MPI for message passing. The parallel speed up of the solver portion (no I/O or body force calculation) for a grid which has 227 points axially.
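A generic sketch of the axial-only domain decomposition with MPI message passing, using the mpi4py bindings; the slab size, periodic neighbours, and smoothing update are invented stand-ins for illustration, not APNASA's scheme.

```python
# Run with, e.g.: mpiexec -n 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Axial-only 1-D decomposition: each rank owns a slab of cells plus one
# ghost cell at each end (periodic neighbours keep the sketch simple).
n_local = 8
u = np.full(n_local + 2, float(rank))
left, right = (rank - 1) % size, (rank + 1) % size

for step in range(50):
    # Exchange ghost cells with both axial neighbours.
    u[-1] = comm.sendrecv(u[1], dest=left, source=right)
    u[0] = comm.sendrecv(u[-2], dest=right, source=left)
    # A cheap smoothing update standing in for an explicit RK stage.
    u[1:-1] = 0.5 * u[1:-1] + 0.25 * (u[:-2] + u[2:])

print(rank, float(u[1:-1].mean()))
```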
Distributed dynamic simulations of networked control and building performance applications.
Yahiaoui, Azzedine
2018-02-01
The use of computer-based automation and control systems for smart sustainable buildings, often called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the minimum possible energy consumption; such systems are generally referred to as a Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and improve the functions of the BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment with the capability of representing the BACS architecture in simulation by run-time coupling two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design in this paper.
A new climate modeling framework for convection-resolving simulation at continental scale
NASA Astrophysics Data System (ADS)
Charpilloz, Christophe; di Girolamo, Salvatore; Arteaga, Andrea; Fuhrer, Oliver; Hoefler, Torsten; Schulthess, Thomas; Schär, Christoph
2017-04-01
Major uncertainties remain in our understanding of the processes that govern the water cycle in a changing climate and their representation in weather and climate models. Of particular concern are heavy precipitation events of convective origin (thunderstorms and rain showers). The aim of the crCLIM project [1] is to propose a new climate modeling framework that alleviates the I/O bottleneck in large-scale, convection-resolving climate simulations and thus enables new analysis techniques for climate scientists. Due to their large computational costs, convection-resolving simulations are currently restricted to small computational domains or very short time scales, unless the largest available supercomputer systems, such as hybrid CPU-GPU architectures, are used [3]. Hence, the COSMO model has been adapted to run on these architectures for research and production purposes [2]. However, the amount of generated data also increases, and storing this data becomes infeasible, making the analysis of simulation results impractical. To circumvent this problem and enable high-resolution models in climate science, we propose a data-virtualization layer (DVL) that re-runs simulations on demand and transparently manages the data for the analysis; that is, we trade off computational effort (time) for storage (space). This approach also requires a bit-reproducible version of the COSMO model that produces identical results on different architectures (CPUs and GPUs) [4], which will be coupled with a performance model to enable optimal re-runs depending on the requirements of the re-run and the available resources. In this contribution, we discuss the strategy to develop the DVL, a first performance model, the challenge of bit-reproducibility, and the first results of the crCLIM project. [1] http://www.c2sm.ethz.ch/research/crCLIM.html [2] O. Fuhrer, C. Osuna, X. Lapillonne, T. Gysi, M. Bianco, and T. Schulthess. "Towards GPU-accelerated operational weather forecasting." In The GPU Technology Conference, GTC. 2013. [3] D. Leutwyler, O. Fuhrer, X. Lapillonne, D. Lüthi, and C. Schär. "Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19." Geoscientific Model Development 9, no. 9 (2016): 3393. [4] A. Arteaga, O. Fuhrer, and T. Hoefler. "Designing bit-reproducible portable high-performance applications." In Parallel and Distributed Processing Symposium, 2014 IEEE 28th International, pp. 1235-1244. IEEE, 2014.
NASA Technical Reports Server (NTRS)
Benavente, Javier E.; Luce, Norris R.
1989-01-01
Demands for nonlinear time history simulations of large, flexible multibody dynamic systems have created a need for efficient interfaces between finite-element modeling programs and time-history simulations. One such interface, TREEFLX, an interface between NASTRAN and TREETOPS (a nonlinear dynamics and controls time history simulation for multibody structures), is presented and demonstrated via an example using the proposed Space Station Mobile Remote Manipulator System (MRMS). The ability to run all three programs (NASTRAN, TREEFLX and TREETOPS), in addition to other programs used for controller design and model reduction (such as DMATLAB and TREESEL, both described), under a UNIX workstation environment demonstrates the flexibility engineers now have in designing, developing and testing control systems for dynamically complex systems.
An Obstacle Alerting System for Agricultural Application
NASA Technical Reports Server (NTRS)
DeMaio, Joe
2003-01-01
Wire strikes are a significant cause of helicopter accidents. The aircraft most at risk are aerial applicators. The present study examines the effectiveness of a wire alert delivered by way of the lightbar, a GPS-based guidance system for aerial application. The alert lead-time needed to avoid an invisible wire is compared with that to avoid a visible wire. A flight simulator was configured to simulate an agricultural application helicopter. Two pilots flew simulated spray runs in fields with visible wires, invisible wires, and no wires. The wire alert was effective in reducing wire strikes. A lead-time of 3.5 sec was required for the alert to be effective. The lead-time required was the same whether the pilot could see the wire or not.
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
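The scaling step described above can be sketched in a few lines; in the sketch below the zero-absorption curve, layer time fractions, and optical coefficients are invented placeholders standing in for the stored MC output and the path-integral estimates.

```python
import numpy as np

c = 3e10          # speed of light in vacuum [cm/s]
n_tissue = 1.4
v = c / n_tissue  # photon speed in tissue [cm/s]

# Zero-absorption, time-resolved reflectance from a single stored MC run
# (here a made-up decaying curve standing in for the simulation output).
t = np.linspace(0.05e-9, 3e-9, 300)     # time [s]
R0 = t**-1.5 * np.exp(-0.5e-9 / t)      # placeholder shape only

def scaled_reflectance(mu_a, f):
    """Scale the zero-absorption curve by a weighted Beer-Lambert factor:
    layer i contributes absorption over the fraction f[i] of the photon's
    time in tissue (f estimated from the average classical path)."""
    mu_eff = np.dot(mu_a, f)            # path-weighted absorption [1/cm]
    return R0 * np.exp(-mu_eff * v * t)

R = scaled_reflectance(mu_a=np.array([0.1, 0.02]), f=np.array([0.7, 0.3]))
print(R[:3])
```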
Influences of chemical sympathectomy and simulated weightlessness on male and female rats
NASA Technical Reports Server (NTRS)
Woodman, Christopher R.; Stump, Craig S.; Stump, Jane A.; Sebastian, Lisa A.; Rahman, Z.; Tipton, Charles M.
1991-01-01
Consideration is given to a study aimed at determining whether the sympathetic nervous system is associated with the changes in maximum oxygen consumption (VO2max), run time, and mechanical efficiency observed during simulated weightlessness in male and female rats. Female and male rats were compared for food consumption, body mass, and body composition in conditions of simulated weightlessness to provide an insight into how these parameters may influence aerobic capacity and exercise performance. It is concluded that chemical sympathectomy and/or a weight-bearing stimulus will attenuate the loss in VO2max associated with simulated weightlessness in rats despite similar changes in body mass and composition. It is noted that the mechanisms remain unclear at this time.
NASA Technical Reports Server (NTRS)
Veres, Joseph
2001-01-01
This report outlines the detailed simulation of an aircraft turbofan engine. The objectives were to develop a detailed flow model of a full turbofan engine that runs on parallel workstation clusters overnight and to develop an integrated system of codes for combustor design and analysis to enable significant reductions in design time and cost. The model will initially simulate the 3-D flow in the primary flow path, including the flow and chemistry in the combustor, and ultimately result in a multidisciplinary model of the engine. The overnight 3-D simulation capability of the primary flow path in a complete engine will enable significant reductions in the design and development time of gas turbine engines. In addition, the NPSS (Numerical Propulsion System Simulation) multidisciplinary integration and analysis are discussed.
Reduced-Order Models Based on POD-TPWL for Compositional Subsurface Flow Simulation
NASA Astrophysics Data System (ADS)
Durlofsky, L. J.; He, J.; Jin, L. Z.
2014-12-01
A reduced-order modeling procedure applicable for compositional subsurface flow simulation will be described and applied. The technique combines trajectory piecewise linearization (TPWL) and proper orthogonal decomposition (POD) to provide highly efficient surrogate models. The method is based on a molar formulation (which uses pressure and overall component mole fractions as the primary variables) and is applicable for two-phase, multicomponent systems. The POD-TPWL procedure expresses new solutions in terms of linearizations around solution states generated and saved during previously simulated 'training' runs. High-dimensional states are projected into a low-dimensional subspace using POD. Thus, at each time step, only a low-dimensional linear system needs to be solved. Results will be presented for heterogeneous three-dimensional simulation models involving CO2 injection. Both enhanced oil recovery and carbon storage applications (with horizontal CO2 injectors) will be considered. Reasonably close agreement between full-order reference solutions and compositional POD-TPWL simulations will be demonstrated for 'test' runs in which the well controls differ from those used for training. Construction of the POD-TPWL model requires preprocessing overhead computations equivalent to about 3-4 full-order runs. Runtime speedups using POD-TPWL are, however, very significant - typically O(100-1000). The use of POD-TPWL for well control optimization will also be illustrated. For this application, some amount of retraining during the course of the optimization is required, which leads to smaller, but still significant, speedup factors.
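A minimal sketch of the two ingredients, a POD basis extracted from snapshots and a linearized reduced step, assuming a made-up snapshot matrix and Jacobian; the real method linearizes around saved compositional states from the training runs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Snapshot matrix from saved 'training' runs: columns are full-order states.
n_full, n_snap = 500, 60
X = rng.standard_normal((n_full, 8)) @ rng.standard_normal((8, n_snap))

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
Phi = U[:, :r]                        # n_full x r projection basis

# TPWL-style step around a saved state x_i with an (invented) Jacobian A_i:
#   z_new = z_i + (Phi^T A_i Phi)(z - z_i), solved in r dimensions only.
A_i = -0.1 * np.eye(n_full)           # placeholder linearization
x_i = X[:, 0]
z_i = Phi.T @ x_i
A_r = Phi.T @ (A_i @ Phi)             # small r x r operator used at run time

z = Phi.T @ (x_i + 0.01 * rng.standard_normal(n_full))
x_new = Phi @ (z_i + A_r @ (z - z_i)) # lift the reduced update back
print(f"reduced dimension r = {r} of {n_full}")
```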
NASA Astrophysics Data System (ADS)
Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.
2011-10-01
A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models are now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
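As a loose illustration of the load-balancing idea only (ignoring the channel-network adjacency and boundary exchanges the real partitioning must respect), the sketch below makes a greedy longest-processing-time assignment of invented sub-basin costs to processors.

```python
import heapq

def balance(subbasin_costs, n_procs):
    """Greedy longest-processing-time assignment of sub-basin workloads to
    processors; a simple stand-in for the channel-network-based partitioning
    described above (costs are invented)."""
    heap = [(0.0, p) for p in range(n_procs)]   # (current load, processor)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_procs)}
    for basin, cost in sorted(subbasin_costs.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)           # least-loaded processor
        assignment[p].append(basin)
        heapq.heappush(heap, (load + cost, p))
    return assignment

costs = {"B1": 9.0, "B2": 7.5, "B3": 4.0, "B4": 3.5, "B5": 2.0, "B6": 1.0}
print(balance(costs, n_procs=3))
```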
Flight code validation simulator
NASA Astrophysics Data System (ADS)
Sims, Brent A.
1996-05-01
An End-To-End Simulation capability for software development and validation of missile flight software on the actual embedded computer has been developed utilizing a 486 PC, i860 DSP coprocessor, embedded flight computer and custom dual port memory interface hardware. This system allows real-time interrupt driven embedded flight software development and checkout. The flight software runs in a Sandia Digital Airborne Computer and reads and writes actual hardware sensor locations in which Inertial Measurement Unit data resides. The simulator provides six degree of freedom real-time dynamic simulation, accurate real-time discrete sensor data and acts on commands and discretes from the flight computer. This system was utilized in the development and validation of the successful premier flight of the Digital Miniature Attitude Reference System in January of 1995 at the White Sands Missile Range on a two stage attitude controlled sounding rocket.
Ensemble Bayesian forecasting system Part I: Theory and algorithms
NASA Astrophysics Data System (ADS)
Herr, Henry D.; Krzysztofowicz, Roman
2015-05-01
The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
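A heavily simplified sketch of the randomization idea: each deterministic model run is expanded into many predictand members by the uncertainty processor, so the expensive model is run far fewer times than the ensemble size. The toy model and the Gaussian error term below merely stand in for a real hydrologic model and the meta-Gaussian HUP.

```python
import numpy as np

rng = np.random.default_rng(11)

def hydrologic_model(precip):
    """Stand-in deterministic model mapping a precipitation series to a
    simulated river stage (any hydrologic model could be substituted)."""
    return 1.0 + 0.8 * np.convolve(precip, [0.5, 0.3, 0.2], mode="same")

def hup_members(s, k, sigma=0.3):
    """Auxiliary randomization: draw k predictand members around one model
    output; a crude Gaussian stand-in for the meta-Gaussian HUP."""
    return s[None, :] + rng.normal(0.0, sigma, (k, s.size))

n_runs, k_per_run = 50, 20           # 50 model runs -> 1000 ensemble members
precip_ens = rng.gamma(2.0, 2.0, (n_runs, 24))
members = np.vstack([hup_members(hydrologic_model(p), k_per_run) for p in precip_ens])
print(members.shape)                 # (1000, 24): the Bayesian ensemble of stages
```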
Mastin, Mark
2012-01-01
A previous collaborative effort between the U.S. Geological Survey and the Bureau of Reclamation resulted in a watershed model for four watersheds that discharge into Potholes Reservoir, Washington. Since the model was constructed, two new meteorological sites have been established that provide more reliable real-time information. The Bureau of Reclamation was interested in incorporating this new information into the existing watershed model developed in 2009, and adding measured snowpack information to update simulated results and to improve forecasts of runoff. This report includes descriptions of procedures to aid a user in making model runs, including a description of the Object User Interface for the watershed model with details on specific keystrokes to generate model runs for the contributing basins. A new real-time, data-gathering computer program automates the creation of the model input files and includes the new meteorological sites. The 2009 watershed model was updated with the new sites and validated by comparing simulated results to measured data. As in the previous study, the updated model (2012 model) does a poor job of simulating individual storms, but a reasonably good job of simulating seasonal runoff volumes. At three streamflow-gaging stations, the January 1 to June 30 retrospective forecasts of runoff volume for years 2010 and 2011 were within 40 percent of the measured runoff volume for five of the six comparisons, ranging from -39.4 to 60.3 percent difference. A procedure for collecting measured snowpack data and using the data in the watershed model for forecast model runs, based on the Ensemble Streamflow Prediction method, is described, with an example that uses 2004 snow-survey data.
Simulation of LHC events on a million threads
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2015-12-01
Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
A new paradigm for reproducing and analyzing N-body simulations of planetary systems
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2017-05-01
The reproducibility of experiments is one of the main principles of the scientific method. However, numerical N-body experiments, especially those of planetary systems, are currently not reproducible. In the most optimistic scenario, they can only be replicated in an approximate or statistical sense. Even if authors share their full source code and initial conditions, differences in compilers, libraries, operating systems or hardware often lead to qualitatively different results. We provide a new set of easy-to-use, open-source tools that address the above issues, allowing for exact (bit-by-bit) reproducibility of N-body experiments. In addition to generating completely reproducible integrations, we show that our framework also offers novel and innovative ways to analyse these simulations. As an example, we present a high-accuracy integration of the Solar system spanning 10 Gyr, requiring several weeks to run on a modern CPU. In our framework, we can not only easily access simulation data at predefined intervals for which we save snapshots, but at any time during the integration. We achieve this by integrating an on-demand reconstructed simulation forward in time from the nearest snapshot. This allows us to extract arbitrary quantities at any point in the saved simulation exactly (bit-by-bit), and within seconds rather than weeks. We believe that the tools we present in this paper offer a new paradigm for how N-body simulations are run, analysed and shared across the community.
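The reconstruct-on-demand idea is independent of the specific integrator. Below is a minimal sketch with a toy leapfrog integrator (not the authors' tools): save sparse snapshots during the forward run, then recover the state at an arbitrary step by re-integrating from the nearest snapshot, which is bit-identical to the original run when the integrator is deterministic.

```python
import numpy as np

def leapfrog(x, v, dt, n, acc):
    for _ in range(n):
        v += 0.5 * dt * acc(x)
        x += dt * v
        v += 0.5 * dt * acc(x)
    return x, v

acc = lambda x: -x                     # toy oscillator in place of N-body forces
dt, snap_every = 0.01, 1000            # snapshot interval in steps

# Forward run: save sparse snapshots only.
snapshots = {}
x, v = np.array([1.0]), np.array([0.0])
for k in range(10):
    snapshots[k * snap_every] = (x.copy(), v.copy())
    x, v = leapfrog(x, v, dt, snap_every, acc)

def state_at(step):
    """Reconstruct the state at any step: load the nearest earlier snapshot
    and re-integrate forward. With a deterministic integrator this
    reproduces the original trajectory bit by bit."""
    k0 = (step // snap_every) * snap_every
    x0, v0 = (s.copy() for s in snapshots[k0])
    return leapfrog(x0, v0, dt, step - k0, acc)

print(state_at(4321))                  # seconds of work, not a full re-run
```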
A fundamental study of suction for Laminar Flow Control (LFC)
NASA Astrophysics Data System (ADS)
Watmuff, Jonathan H.
1992-10-01
This report covers the period forming the first year of the project. The aim is to experimentally investigate the effects of suction as a technique for Laminar Flow Control. Experiments are to be performed which require substantial modifications to be made to the experimental facility. Considerable effort has been spent developing new high performance constant temperature hot-wire anemometers for general purpose use in the Fluid Mechanics Laboratory. Twenty instruments have been delivered. An important feature of the facility is that it is totally automated under computer control. Unprecedentedly large quantities of data can be acquired and the results examined using the visualization tools developed specifically for studying the results of numerical simulations on graphics workstations. The experiment must be run for periods of up to a month at a time since the data is collected on a point-by-point basis. Several techniques were implemented to reduce the experimental run-time by a significant factor. Extra probes have been constructed and modifications have been made to the traverse hardware and to the real-time experimental code to enable multiple probes to be used. This will reduce the experimental run-time by the appropriate factor. Hot-wire calibration drift has been a frustrating problem owing to the large range of ambient temperatures experienced in the laboratory. The solution has been to repeat the calibrations at frequent intervals. However, the calibration process has consumed up to 40 percent of the run-time. A new method of correcting the drift is very nearly finalized and when implemented it will also lead to a significant reduction in the experimental run-time.
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
Program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 53 420
No. of bytes in distributed program, including test data, etc.: 566 495
Distribution format: tar.gz
Programming language: Fortran
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 4.14, 6.5, 20
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
References:
[1] The CADNA library, http://www.lip6.fr/cadna.
[2] J.-M. Chesneaux, L'arithmétique Stochastique et le Logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995.
[3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261.
[4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
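CADNA itself is a Fortran library, but the underlying idea, running the computation several times under randomized rounding and comparing the results, can be crudely emulated. The Python sketch below perturbs inputs by one ulp as a stand-in for a true random rounding mode and exposes a catastrophic cancellation; it is an illustration of the principle, not CADNA's implementation.

```python
import math
import random
import statistics

def random_rounding(x, rng):
    """Crude stand-in for CADNA's random rounding mode: nudge x one ulp
    up or down at random (CADNA itself perturbs every operation)."""
    return math.nextafter(x, math.inf if rng.random() < 0.5 else -math.inf)

def cancellation_prone(rng):
    a = random_rounding(1e16, rng)
    b = random_rounding(1.0, rng)
    return (a + b) - a                 # exactly 1.0 in real arithmetic

rng = random.Random(123)
samples = [cancellation_prone(rng) for _ in range(3)]   # CADNA classically uses N = 3
mean, spread = statistics.fmean(samples), statistics.pstdev(samples)
if spread > 0:
    digits = math.log10(abs(mean) / spread)
    print(samples, f"-> about {digits:.1f} exact significant digits")
else:
    print(samples, "-> runs agree; no instability detected at this sample size")
```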
The effect of gas dynamics on semi-analytic modelling of cluster galaxies
NASA Astrophysics Data System (ADS)
Saro, A.; De Lucia, G.; Dolag, K.; Borgani, S.
2008-12-01
We study the degree to which non-radiative gas dynamics affect the merger histories of haloes along with subsequent predictions from a semi-analytic model (SAM) of galaxy formation. To this aim, we use a sample of dark matter only and non-radiative smooth particle hydrodynamics (SPH) simulations of four massive clusters. The presence of gas-dynamical processes (e.g. ram pressure from the hot intra-cluster atmosphere) makes haloes more fragile in the runs which include gas. This results in a 25 per cent decrease in the total number of subhaloes at z = 0. The impact on the galaxy population predicted by SAMs is complicated by the presence of `orphan' galaxies, i.e. galaxies whose parent substructures are reduced below the resolution limit of the simulation. In the model employed in our study, these galaxies survive (unaffected by the tidal stripping process) for a residual merging time that is computed using a variation of the Chandrasekhar formula. Due to ram-pressure stripping, haloes in gas simulations tend to be less massive than their counterparts in the dark matter simulations. The resulting merging times for satellite galaxies are then longer in these simulations. On the other hand, the presence of gas influences the orbits of haloes making them on average more circular and therefore reducing the estimated merging times with respect to the dark matter only simulation. This effect is particularly significant for the most massive satellites and is (at least in part) responsible for the fact that brightest cluster galaxies in runs with gas have stellar masses which are about 25 per cent larger than those obtained from dark matter only simulations. Our results show that gas dynamics has only a marginal impact on the statistical properties of the galaxy population, but that its impact on the orbits and merging times of haloes strongly influences the assembly of the most massive galaxies.
Large-eddy simulation of dust-uplift by a haboob density current
NASA Astrophysics Data System (ADS)
Huang, Qian; Marsham, John H.; Tian, Wenshou; Parker, Douglas J.; Garcia-Carreras, Luis
2018-04-01
Cold pool outflows have been shown from both observations and convection-permitting models to be a dominant source of dust emissions ("haboobs") in the summertime Sahel and Sahara, and to cause dust uplift over deserts across the world. In this paper, Met Office Large Eddy Model (LEM) simulations, which resolve the turbulence within the cold pools much better than previous studies of haboobs with convection-permitting models, are used to investigate the winds that uplift dust in cold pools, and the resultant dust transport. In order to simulate the cold pool outflow, an idealized cooling is added in the model during the first 2 h of the 5.7 h run time. Given the short duration of the runs, dust is treated as a passive tracer. Dust uplift largely occurs in the "head" of the density current, consistent with the few existing observations. In the modeled density current, dust is largely restricted to the lowest, coldest and well mixed layers of the cold pool outflow (below around 400 m), except above the "head" of the cold pool, where some dust reaches 2.5 km. This rapid transport to above 2 km will contribute to long atmospheric lifetimes of large dust particles from haboobs. Decreasing the model horizontal grid-spacing from 1.0 km to 100 m resolves more turbulence, locally increasing winds, increasing mixing and reducing the propagation speed of the density current. Total accumulated dust uplift is approximately twice as large in the 1.0 km runs compared with the 100 m runs, suggesting that for studying haboobs in convection-permitting runs the representation of turbulence and mixing is significant. Simulations with surface sensible heat fluxes representative of those from a desert region during daytime show that increasing surface fluxes slows the density current due to increased mixing, but increases dust uplift rates, due to increased downward transport of momentum to the surface.
SiMon: Simulation Monitor for Computational Astrophysics
NASA Astrophysics Data System (ADS)
Xuran Qian, Penny; Cai, Maxwell Xu; Portegies Zwart, Simon; Zhu, Ming
2017-09-01
Scientific discovery via numerical simulations is important in modern astrophysics. This relatively new branch of astrophysics has become possible due to the development of reliable numerical algorithms and the high performance of modern computing technologies. These enable the analysis of large collections of observational data and the acquisition of new data via simulations at unprecedented accuracy and resolution. Ideally, simulations run until they reach some pre-determined termination condition, but often other factors cause extensive numerical approaches to break down at an earlier stage. Processes tend to be interrupted due to unexpected events in the software or the hardware, in which case the scientist handles the interrupt manually, which is time-consuming and prone to errors. We present the Simulation Monitor (SiMon) to automate the farming of large and extensive simulation processes. Our method is light-weight, fully automates the entire workflow management, operates concurrently across multiple platforms, and can be installed in user space. Inspired by the process of crop farming, we perceive each simulation as a crop in the field, and running a simulation becomes analogous to growing crops. With the development of SiMon we relax the technical aspects of simulation management. The initial package was developed for extensive parameter searches in numerical simulations, but it turns out to work equally well for automating the computational processing and reduction of observational data.
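A minimal watchdog in the spirit of SiMon, though not its actual API: start the simulation process, poll it, and restart it after a crash until it finishes cleanly or a restart budget is spent. The command line below is hypothetical and assumes the simulation resumes from its own checkpoint.

```python
import subprocess
import time

def farm(cmd, max_restarts=5, poll_s=10):
    """Start a simulation process, poll it, and restart it after a crash
    until it exits cleanly or the restart budget is exhausted."""
    for attempt in range(1, max_restarts + 1):
        proc = subprocess.Popen(cmd)
        while proc.poll() is None:       # still running
            time.sleep(poll_s)
        if proc.returncode == 0:
            print("simulation finished cleanly")
            return
        print(f"crashed (rc={proc.returncode}), restart {attempt}/{max_restarts}")
    print("restart budget exhausted; needs manual attention")

# Hypothetical usage; run_simulation.py is assumed to resume from a checkpoint.
farm(["python", "run_simulation.py", "--resume"])
```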
Automatic Fitting of Spiking Neuron Models to Electrophysiological Recordings
Rossant, Cyrille; Goodman, Dan F. M.; Platkiewicz, Jonathan; Brette, Romain
2010-01-01
Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models. PMID:20224819
Computer simulation results of attitude estimation of earth orbiting satellites
NASA Technical Reports Server (NTRS)
Kou, S. R.
1976-01-01
Computer simulation results of attitude estimation of Earth-orbiting satellites (including Space Telescope) subjected to environmental disturbances and noise are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in BASIC and run on HP 9830A and HP 9866A computers. Simulation results show that the decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher-order systems, this filter has computational advantages (i.e., smaller integration and roundoff errors) over a Kalman filter.
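For reference, the recursive update at the heart of a Kalman-type filter, reduced to a scalar toy problem (the study's filters operated on full satellite attitude dynamics, not this simplified model):

```python
import numpy as np

# Minimal scalar Kalman filter for a noisy attitude-like measurement.
# Illustrative only: a random-walk state model with invented noise levels.
def kalman_1d(z, q=1e-5, r=0.01):
    """Filter measurements z of a slowly varying state."""
    x, p = z[0], 1.0                  # initial state estimate and variance
    estimates = []
    for zk in z:
        p = p + q                     # predict: variance grows by process noise
        k = p / (p + r)               # Kalman gain
        x = x + k * (zk - x)          # update with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
truth = 0.1                           # constant attitude angle, radians
z = truth + 0.1 * rng.standard_normal(200)
print(kalman_1d(z)[-1])               # converges toward 0.1
```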
Daily hydro- and morphodynamic simulations at Duck, NC, USA using Delft3D
NASA Astrophysics Data System (ADS)
Penko, Allison; Veeramony, Jay; Palmsten, Margaret; Bak, Spicer; Brodie, Katherine; Hesser, Tyler
2017-04-01
Operational forecasting of the coastal nearshore has wide-ranging societal and humanitarian benefits, specifically for the prediction of natural hazards due to extreme storm events. However, understanding the model limitations and uncertainty is as important as the predictions themselves. By comparing and contrasting the predictions of multiple high-resolution models in a location with near real-time collection of observations, we are able to perform a rigorous analysis of the model results in order to achieve more robust and certain predictions. In collaboration with the U.S. Army Corps of Engineers Field Research Facility (USACE FRF) as part of the Coastal Model Test Bed (CMTB) project, we have set up Delft3D at Duck, NC, USA to run in near-real time, driven by measured wave data at the boundary. The CMTB at the USACE FRF allows for the unique integration of operational wave, circulation, and morphology models with real-time observations. The FRF has an extensive array of in-situ and remotely sensed oceanographic, bathymetric, and meteorological data that is broadcast in near-real time onto a publicly accessible server. Wave, current, and bed elevation instruments are permanently installed across the model domain, including 2 waverider buoys in 17-m and 26-m water depths, 3.5 km and 17 km offshore, respectively, that record directional wave data every 30 min. Here, we present the workflow and output of the Delft3D hydro- and morphodynamic simulations at Duck, and show the tactical benefits and operational potential of such a system. A nested Delft3D simulation runs a parent grid that extends 12 km in the alongshore and 3.5 km in the cross-shore with 50-m resolution and a maximum depth of approximately 17 m. The bathymetry for the parent grid was obtained from a regional digital elevation model (DEM) generated by the Federal Emergency Management Agency (FEMA). The inner nested grid extends 1.8 km in the alongshore and 1 km in the cross-shore with 5-m resolution and a maximum depth of approximately 8 m. The inner nested grid's initial model bathymetry is set to either the predicted bathymetry from the previous day's simulation or a survey, whichever is more recent (see the sketch below). Delft3D-WAVE runs in the parent grid and is driven with the real-time spectral wave measurements from the waverider buoy in 17-m depth. The spectral output from Delft3D-WAVE in the parent grid is then used as the boundary condition for the inner nested high-resolution grid, in which the coupled Delft3D wave-flow-morphology model is run. The model results are then compared to the wave, current, and bathymetry observations collected at the FRF as well as to other models that are run in the CMTB.
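A sketch of the stated bathymetry-selection rule, with hypothetical file paths:

```python
import os

# Sketch of the stated nested-grid initialization rule: use the survey
# bathymetry or the previous day's predicted bathymetry, whichever is more
# recent. File paths are hypothetical.
def pick_bathymetry(survey_path, predicted_path):
    """Return the more recently modified of the two bathymetry files."""
    candidates = [p for p in (survey_path, predicted_path) if os.path.exists(p)]
    if not candidates:
        raise FileNotFoundError("no bathymetry file available")
    return max(candidates, key=os.path.getmtime)

bathy = pick_bathymetry("bathy/survey_latest.dep", "bathy/predicted_yesterday.dep")
```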
An interactive drilling simulator for teaching and research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, G.A.; Cooper, A.G.; Bihn, G.
1995-12-31
An interactive program has been constructed that allows a student or engineer to simulate the drilling of an oil well, and to optimize the drilling process by comparing different drilling plans. The program operates in a very user-friendly way, with emphasis on menu and button-driven commands. The simulator may be run either as a training program, with exercises that illustrate various features of the drilling process; as a game, in which a student is set a challenge to drill a well with minimum cost or time under constraints set by an instructor; or as a simulator of a real situation to investigate the merit of different drilling strategies. It has three main parts, a Lithology Editor, a Settings Editor and the simulation program itself. The Lithology Editor allows the student, instructor or engineer to build a real or imaginary sequence of rock layers, each characterized by its mineralogy, drilling and log responses. The Settings Editor allows the definition of all the operational parameters, ranging from the drilling and wear rates of particular bits in specified rocks to the costs of different procedures. The simulator itself contains an algorithm that determines the rate of penetration and the rate of wear of the bit as drilling continues. It also determines whether the well kicks or fractures, and assigns various other "accident" conditions. During operation, a depth vs. time curve is displayed, together with a "mud log" showing the rock layers penetrated. If desired, the well may be "logged," casings may be set and pore and fracture pressure gradients may be displayed. During drilling, the total time and cost are shown, together with the cost per foot in total and for the current bit run.
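The penetration/wear loop at the simulator's core might look schematically like this (the rate and wear laws below are invented placeholders, not the program's calibrated models):

```python
# Illustrative penetration/wear loop of the kind the simulator performs;
# the rate and wear laws here are invented placeholders, not the program's
# calibrated models.
def drill(layers, dt=1.0):
    """Advance the bit through rock layers, tracking time, depth and wear."""
    depth, t, wear = 0.0, 0.0, 0.0
    log = []
    for top, bottom, drillability in layers:        # one entry per rock layer
        while depth < bottom and wear < 1.0:
            rop = drillability * (1.0 - wear)       # worn bits drill slower
            depth += rop * dt
            wear += dt / (2000.0 * drillability)    # placeholder: hard rock wears faster
            t += dt
            log.append((t, depth))
        if wear >= 1.0:
            break                                   # bit fully worn: trip for a new one
    return log

layers = [(0, 500, 2.0), (500, 1200, 1.0), (1200, 2000, 0.5)]  # depths in ft
history = drill(layers)
print(history[-1])   # (elapsed time, final depth) reached on this bit run
```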
Java Simulations of Embedded Control Systems
Farias, Gonzalo; Cervin, Anton; Årzén, Karl-Erik; Dormido, Sebastián; Esquembre, Francisco
2010-01-01
This paper introduces a new open-source Java library suited for the simulation of embedded control systems. The library is based on the ideas and architecture of TrueTime, a Matlab toolbox devoted to this topic, and allows Java programmers to simulate the performance of control processes which run in a real-time environment. Such simulations can considerably improve the learning and design of multitasking real-time systems. The choice of Java considerably increases the usability of our library, because many educators already program in this language, and because the library can easily be used by Easy Java Simulations (EJS), a popular modeling and authoring tool that is increasingly used in the field of Control Education. EJS allows instructors, students, and researchers with less programming experience to create advanced interactive simulations in Java. The paper describes the ideas, implementation, and sample use of the new library both for pure Java programmers and for EJS users. The JTT library and some examples are available online at http://lab.dia.uned.es/jtt. PMID:22163674
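The kind of effect such a library lets students explore, reduced to a toy sampled-data loop (TrueTime/JTT model much richer kernels, networks and schedulers; the plant, gains and timings here are invented):

```python
# Toy version of what TrueTime-style kernels capture: a periodic control
# task whose computation time delays actuation and degrades performance.
# The plant, control law and timing values are invented for illustration.
def simulate(comp_time, h=0.01, t_end=2.0, dt=1e-4):
    """Return the integrated squared error of a delayed sampled-data loop."""
    x, u, u_pending = 1.0, 0.0, None
    next_sample, actuate_at = 0.0, None
    t, cost = 0.0, 0.0
    while t < t_end:
        if actuate_at is not None and t >= actuate_at:
            u, u_pending, actuate_at = u_pending, None, None  # job done: output released
        if t >= next_sample:
            u_pending = -5.0 * x             # control computed on this sample...
            actuate_at = t + comp_time       # ...but held until the job finishes
            next_sample += h
        x += (x + u) * dt                    # unstable first-order plant x' = x + u
        cost += x * x * dt
        t += dt
    return cost

print(simulate(comp_time=0.0005))  # fast task: lower cost
print(simulate(comp_time=0.0090))  # slow task: actuation delayed, higher cost
```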
TIERRAS: A package to simulate high energy cosmic ray showers underground, underwater and under-ice
NASA Astrophysics Data System (ADS)
Tueros, Matías; Sciutto, Sergio
2010-02-01
In this paper we present TIERRAS, a Monte Carlo simulation program based on the well-known AIRES air shower simulation system, that enables the propagation of particle cascades underground, providing a tool to study particles arriving underground from a primary cosmic ray in the atmosphere or to initiate cascades directly underground and propagate them, exiting into the atmosphere if necessary. We show several cross-checks of its results against CORSIKA, FLUKA, GEANT and ZHS simulations and we make some considerations regarding its possible uses and limitations. The first results of full underground shower simulations are presented as an example of the package's capabilities.
Program summary
Program title: TIERRAS for AIRES
Catalogue identifier: AEFO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 36 489
No. of bytes in distributed program, including test data, etc.: 3 261 669
Distribution format: tar.gz
Programming language: Fortran 77 and C
Computer: PC, Alpha, IBM, HP, Silicon Graphics and Sun workstations
Operating system: Linux, DEC Unix, AIX, SunOS, Unix System V
RAM: 22 Mbytes
Classification: 1.1
External routines: TIERRAS requires AIRES 2.8.4 to be installed on the system. AIRES 2.8.4 can be downloaded from http://www.fisica.unlp.edu.ar/auger/aires/eg_AiresDownload.html.
Nature of problem: Simulation of high and ultra-high energy underground particle showers.
Solution method: Modification of the AIRES 2.8.4 code to accommodate underground conditions.
Restrictions: Some processes that are not statistically significant in the atmosphere are not simulated in AIRES. In particular, it does not include muon photonuclear processes. This imposes a limitation on the application of this package to a depth of 1 km of standard rock (or 2.5 km of water equivalent). Neutrinos are not tracked in the simulation, but their energy is taken into account in decays.
Running time: A TIERRAS for AIRES run of a 10 eV shower with statistical sampling (thinning) below 10 eV and a 0.2 weight factor (see [1]) uses approximately 1 h of CPU time on an Intel Core 2 Quad Q6600 at 2.4 GHz. It uses only one core, so 4 simultaneous simulations can be run on this computer. AIRES includes a spooling system to run several simultaneous jobs of any type.
References: S. Sciutto, AIRES 2.6 User Manual, http://www.fisica.unlp.edu.ar/auger/aires/.
Pilot-in-the-Loop CFD Method Development
2017-02-01
Penn State University. All software supporting piloted simulations must run at real-time speeds or faster. This requirement drives the number of...dynamics of interacting blade tip vortices with a ground plane," American Helicopter Society 64th Annual Forum Proceedings, 2008. [2] Johnson, W
NASA Astrophysics Data System (ADS)
Arendt, V.; Shalchi, A.
2018-06-01
We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
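The running diffusion coefficient itself is a simple diagnostic, d_xx(t) = ⟨(Δx)²⟩/(2t); a sketch with an unbiased random walk standing in for the full charged-particle integration:

```python
import numpy as np

# Sketch of the "running diffusion coefficient" diagnostic used in
# test-particle studies: d_xx(t) = <(x - x0)^2> / (2 t). The unbiased
# random walk here stands in for the full test-particle integration.
rng = np.random.default_rng(1)
n_particles, n_steps, dt = 5000, 2000, 1.0

steps = rng.standard_normal((n_steps, n_particles))      # unit-variance kicks
x = np.cumsum(steps, axis=0)                              # trajectories x(t)
t = dt * np.arange(1, n_steps + 1)
d_run = np.mean(x**2, axis=1) / (2.0 * t)                 # running coefficient

print(d_run[-1])   # converges to 0.5 for this walk (step variance / (2 dt))
```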
The Behavior of TCP and Its Extensions in Space
NASA Technical Reports Server (NTRS)
Wang, Ruhai; Horan, Stephen
2001-01-01
The performance of Transmission Control Protocol (TCP) in space has been examined from the observations of simulation and experimental tests for several years at the National Aeronautics and Space Administration (NASA), the Department of Defense (DoD) and universities. At New Mexico State University (NMSU), we have been concentrating on studying the performance of two protocol suites: the file transfer protocol (ftp) running over the Transmission Control Protocol/Internet Protocol (TCP/IP) stack and the file protocol (fp) running over the Space Communications Protocol Standards (SCPS)-Transport Protocol (TP) developed under the Consultative Committee for Space Data Systems (CCSDS) standards process. SCPS-TP is considered to be TCP's extension for space communications. This dissertation experimentally studies the behavior of TCP and SCPS-TP by running the protocol suites over both the Space-to-Ground Link Simulator (SGLS) test-bed and a realistic satellite link. The study concentrates on comparing protocol behavior by plotting the averaged file transfer times for different experimental configurations and analyzing them using Statistical Analysis System (SAS) based procedures. The effects of different link delays and various Bit-Error Rates (BERs) on each protocol's performance are also studied, and linear regression models are built for experiments over the SGLS test-bed to reflect the relationships between the file transfer time and various transmission conditions.
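A sketch of the regression step, with synthetic placeholder data in place of the SGLS measurements:

```python
import numpy as np

# Sketch of the kind of linear regression described: model file transfer
# time as a linear function of link delay and bit-error rate. The data
# points below are synthetic placeholders, not the SGLS measurements.
delay = np.array([0.0, 0.25, 0.5, 1.0, 2.0])          # one-way delay, s
ber = np.array([1e-7, 1e-6, 1e-7, 1e-6, 1e-5])
t_xfer = np.array([12.0, 18.0, 25.0, 41.0, 90.0])     # mean transfer time, s

X = np.column_stack([np.ones_like(delay), delay, np.log10(ber)])
coef, *_ = np.linalg.lstsq(X, t_xfer, rcond=None)
print(coef)   # intercept and sensitivities to delay and log10(BER)
```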
Sensor-scheduling simulation of disparate sensors for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Hobson, T.; Clarkson, I.
2011-09-01
The art and science of space situational awareness (SSA) has been practised and developed since the time of Sputnik. However, recent developments, such as the accelerating pace of satellite launches, the proliferation of launch-capable agencies, both commercial and sovereign, and recent well-publicised collisions involving man-made space objects, have further magnified the importance of timely and accurate SSA. The United States Strategic Command (USSTRATCOM) operates the Space Surveillance Network (SSN), a global network of sensors tasked with maintaining SSA. The rapidly increasing number of resident space objects will require commensurate improvements in the SSN. Sensors are scarce resources that must be scheduled judiciously to obtain measurements of maximum utility. Improvements in sensor scheduling and fusion can serve to reduce the number of additional sensors that may be required. Recently, Hill et al. [1] have proposed and developed a simulation environment named TASMAN (Tasking Autonomous Sensors in a Multiple Application Network) to enable testing of alternative scheduling strategies within a simulated multi-sensor, multi-target environment. TASMAN simulates a high-fidelity, hardware-in-the-loop system by running multiple machines with different roles in parallel. At present, TASMAN is limited to simulations involving electro-optic sensors. Its high fidelity is at once a feature and a limitation, since supercomputing is required to run simulations of appreciable scale. In this paper, we describe an alternative, modular and scalable SSA simulation system that can extend the work of Hill et al. with reduced complexity, albeit also with reduced fidelity. The tool has been developed in MATLAB and therefore can be run on a very wide range of computing platforms. It can also make use of MATLAB's parallel processing capabilities to obtain considerable speed-up. The speed and flexibility so obtained can be used to quickly test scheduling algorithms even with a relatively large number of space objects. We further describe an application of the tool by exploring how the relative mixture of electro-optical and radar sensors can impact the scheduling, fusion and achievable accuracy of an SSA system. By varying the mixture of sensor types, we are able to characterise the main advantages and disadvantages of each configuration.
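One of the simplest scheduling strategies such a test bed can evaluate is a greedy assignment; a sketch with invented visibility and utility models:

```python
# A minimal greedy tasking pass of the kind such simulators let you test:
# each sensor takes the visible object whose measurement utility is highest.
# Visibility and utility models are invented placeholders.
def greedy_schedule(sensors, objects, visible, utility):
    """Assign each sensor at most one object for this time step."""
    assigned, taken = {}, set()
    for s in sensors:
        candidates = [o for o in objects if visible(s, o) and o not in taken]
        if candidates:
            best = max(candidates, key=lambda o: utility(s, o))
            assigned[s] = best
            taken.add(best)
    return assigned

# Toy example: utility = time since the object was last observed.
last_seen = {"obj1": 0.0, "obj2": 4.0, "obj3": 9.0}
now = 10.0
schedule = greedy_schedule(
    sensors=["radar1", "optical1"],
    objects=list(last_seen),
    visible=lambda s, o: not (s.startswith("optical") and o == "obj1"),
    utility=lambda s, o: now - last_seen[o],
)
print(schedule)  # {'radar1': 'obj1', 'optical1': 'obj2'}
```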
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Santos, Pablo; Lazarus, Steven M.; Splitt, Michael E.; Haines, Stephanie L.; Dembek, Scott R.; Lapenta, William M.
2008-01-01
Studies at the Short-term Prediction Research and Transition (SPoRT) Center have suggested that the use of Moderate Resolution Imaging Spectroradiometer (MODIS) sea-surface temperature (SST) composites in regional weather forecast models can have a significant positive impact on short-term numerical weather prediction in coastal regions. Recent work by LaCasse et al. (2007, Monthly Weather Review) highlights lower-atmospheric differences in regional numerical simulations over the Florida offshore waters using 2-km SST composites derived from the MODIS instrument aboard the polar-orbiting Aqua and Terra Earth Observing System satellites. To help quantify the value of this impact on NWS Weather Forecast Offices (WFOs), the SPoRT Center and the NWS WFO at Miami, FL (MIA) are collaborating on a project to investigate the impact of using the high-resolution MODIS SST fields within the Weather Research and Forecasting (WRF) prediction system. The project's goal is to determine whether more accurate specification of the lower-boundary forcing within WRF will result in improved land/sea fluxes and hence, more accurate evolution of coastal mesoscale circulations and the associated sensible weather elements. The NWS MIA is currently running WRF in real time to support daily forecast operations, using the National Centers for Environmental Prediction Nonhydrostatic Mesoscale Model dynamical core within the NWS Science and Training Resource Center's Environmental Modeling System (EMS) software. Twenty-seven-hour forecasts are run daily, initialized at 0300, 0900, 1500, and 2100 UTC, on a domain with 4-km grid spacing covering the southern half of Florida and adjacent waters of the Gulf of Mexico and Atlantic Ocean. Each model run is initialized using the Local Analysis and Prediction System (LAPS) analyses available in AWIPS. The SSTs are initialized with the NCEP Real-Time Global (RTG) analyses at 1/12° resolution (approximately 9 km); however, the RTG product does not exhibit fine-scale details consistent with its grid resolution. SPoRT is conducting parallel WRF EMS runs identical to the operational runs at NWS MIA except for the use of MODIS SST composites in place of the RTG product as the initial and boundary conditions over water. The MODIS SST composites for initializing the SPoRT WRF runs are generated on a 2-km grid four times daily at 0400, 0700, 1600, and 1900 UTC, based on the times of the overhead passes of the Aqua and Terra satellites. The incorporation of the MODIS SST data into the SPoRT WRF runs is staggered such that SSTs are updated with a new composite every six hours in each of the WRF runs. From mid-February to July 2007, over 500 parallel WRF simulations have been collected for analysis and verification. This paper will present verification results comparing the NWS MIA operational WRF runs to the SPoRT experimental runs, and highlight any substantial differences noted in the predicted mesoscale phenomena for specific cases.
NASA Technical Reports Server (NTRS)
Schubert, Siegfried; Kang, In-Sik; Reale, Oreste
2009-01-01
This talk gives an update on the progress and further plans for a coordinated project to carry out and analyze high-resolution simulations of tropical storm activity with a number of state-of-the-art global climate models. Issues addressed include the mechanisms by which SSTs control tropical storm activity on interannual and longer time scales, the modulation of that activity by the Madden-Julian Oscillation on sub-seasonal time scales, as well as the sensitivity of the results to model formulation. The project also encourages companion coarser-resolution runs to help assess resolution dependence, and the ability of the models to capture the large-scale and long-term changes in the parameters important for hurricane development. Addressing the above science questions is critical to understanding the nature of the variability of the Asian-Australian monsoon and its regional impacts, and thus CLIVAR RAMP fully endorses the proposed tropical storm simulation activity. The project is open to all interested organizations and investigators, and the results from the runs will be shared among the participants, as well as made available to the broader scientific community for analysis.
Forecasting the Relative and Cumulative Effects of Multiple Stressors on At-risk Populations
2011-08-01
Vitals (observed vital rates), Movement, Ranges, Barriers (barrier interactions), Stochasticity (a time series of stochasticity indices...Simulation Viewer are themselves stochastic. They can change each time it is run. Analysis: If multiple Census events are present in the life...30-year period. A monthly time series was generated for the 20th century using monthly anomalies for temperature, precipitation, and percent
Computation and Validation of the Dynamic Response Index (DRI)
2013-08-06
matplotlib plotting library. • Executed from command line. • Allows several optional arguments. • Runs on Windows, Linux, UNIX, and Mac OS X. ... vs. Time: triangular pulse input data with given time duration and peak acceleration. EARTH Code: Motivation • Error Assessment of...public release • ARC provided electrothermal battery model example: • Test vs. simulation data for terminal voltage. • EARTH input parameters
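The DRI itself comes from a single-degree-of-freedom lumped-parameter model driven by the input acceleration; a sketch using the commonly quoted natural frequency (52.9 rad/s) and damping ratio (0.224), which should be verified against the report, and an invented triangular pulse:

```python
import numpy as np

# Single-DOF lumped-parameter DRI model: x'' + 2*zeta*w*x' + w^2*x = a(t),
# with DRI = w^2 * max(x) / g. The coefficients are the commonly quoted
# values (w = 52.9 rad/s, zeta = 0.224); verify against the actual code.
W, ZETA, G = 52.9, 0.224, 9.81

def dri(accel, dt):
    """Integrate the model through an input acceleration trace a(t) [m/s^2]."""
    x, v, x_max = 0.0, 0.0, 0.0
    for a in accel:
        v += (a - 2.0 * ZETA * W * v - W * W * x) * dt   # semi-implicit Euler
        x += v * dt
        x_max = max(x_max, x)
    return W * W * x_max / G

# Hypothetical triangular pulse (peak and duration invented for illustration).
dt, dur, peak = 1e-5, 0.10, 10.0 * G
t = np.arange(0.0, dur, dt)
pulse = peak * (1.0 - np.abs(2.0 * t / dur - 1.0))       # ramp up, ramp down
print(dri(np.concatenate([pulse, np.zeros_like(pulse)]), dt))  # free decay tail
```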
Effects of Real-Time NASA Vegetation Data on Model Forecasts of Severe Weather
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Bell, Jordan R.; LaFontaine, Frank J.; Peters-Lidard, Christa D.
2012-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed a Greenness Vegetation Fraction (GVF) dataset, which is updated daily using swaths of Normalized Difference Vegetation Index data from the Moderate Resolution Imaging Spectroradiometer (MODIS) data aboard the NASA-EOS Aqua and Terra satellites. NASA SPoRT started generating daily real-time GVF composites at 1-km resolution over the Continental United States beginning 1 June 2010. A companion poster presentation (Bell et al.) primarily focuses on impact results in an offline configuration of the Noah land surface model (LSM) for the 2010 warm season, comparing the SPoRT/MODIS GVF dataset to the current operational monthly climatology GVF available within the National Centers for Environmental Prediction (NCEP) and Weather Research and Forecasting (WRF) models. This paper/presentation primarily focuses on individual case studies of severe weather events to determine the impacts and possible improvements of using the real-time, high-resolution SPoRT-MODIS GVFs in place of the coarser-resolution NCEP climatological GVFs in model simulations. The NASA-Unified WRF (NU-WRF) modeling system is employed to conduct the sensitivity simulations of individual events. The NU-WRF is an integrated modeling system based on the Advanced Research WRF dynamical core that is designed to represent aerosol, cloud, precipitation, and land processes at satellite-resolved scales in a coupled simulation environment. For this experiment, the coupling between the NASA Land Information System (LIS) and the WRF model is utilized to measure the impacts of the daily SPoRT/MODIS versus the monthly NCEP climatology GVFs. First, a spin-up run of the LIS is integrated for two years using the Noah LSM to ensure that the land surface fields reach an equilibrium state on the 4-km grid mesh used. Next, the spin-up LIS is run in two separate modes beginning on 1 June 2010, one continuing with the climatology GVFs while the other uses the daily SPoRT/MODIS GVFs. Finally, snapshots of the LIS land surface fields are used to initialize two different simulations of the NU-WRF, one running with climatology LIS and GVFs, and the other running with experimental LIS and NASA/SPoRT GVFs. In this paper/presentation, case study results will be highlighted in regions with significant differences in GVF between the NCEP climatology and the SPoRT product during severe weather episodes.
NASA Technical Reports Server (NTRS)
Fortenbaugh, R. L.
1980-01-01
Instructions are described for using the Vertical Attitude Takeoff and Landing Aircraft Simulation (VATLAS), the digital simulation program for vertical attitude takeoff and landing (VATOL) aircraft developed for installation on the NASA Ames CDC 7600 computer system. The framework for VATLAS is the Off-Line Simulation (OLSIM) routine. The OLSIM routine provides a flexible framework and standardized modules which facilitate the development of off-line aircraft simulations. OLSIM runs under the control of VTOLTH, the main program, which calls the proper modules for executing user-specified options. These options include trim, stability derivative calculation, time history generation, and various input-output options.
Performance Analysis of and Tool Support for Transactional Memory on BG/Q
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schindewolf, M
2011-12-08
Martin Schindewolf worked during his internship at the Lawrence Livermore National Laboratory (LLNL) under the guidance of Martin Schulz at the Computer Science Group of the Center for Applied Scientific Computing. We studied the performance of the TM subsystem of BG/Q as well as researched the possibilities for tool support for TM. To study the performance, we ran CLOMP-TM, a benchmark designed to quantify the overhead of OpenMP and to compare different synchronization primitives. To advance CLOMP-TM, we added Message Passing Interface (MPI) routines for hybrid parallelization. This makes it possible to run multiple MPI tasks, each running OpenMP, on one node. With these enhancements, a beneficial MPI task to OpenMP thread ratio is determined. Further, the synchronization primitives are ranked as a function of the application characteristics. To demonstrate the usefulness of these results, we investigated a real Monte Carlo simulation called the Monte Carlo Benchmark (MCB). Applying the lessons learned yields the best task to thread ratio, and we were able to tune the synchronization by transactifying the MCB. We also developed tools that capture the performance of the TM run time system and present it to the application's developer. The performance of the TM run time system relies on the built-in statistics. These tools use the Blue Gene Performance Monitoring (BGPM) interface to correlate the statistics from the TM run time system with performance counter values. This combination provides detailed insight into the run time behavior of the application and makes it possible to track down the causes of degraded performance. In addition, one tool has been implemented that separates the performance counters into three categories: Successful Speculation, Unsuccessful Speculation and No Speculation. All of the tools are crafted around IBM's xlc compiler for C and C++ and have been run and tested on a Q32 early access system.
OSCAR a Matlab based optical FFT code
NASA Astrophysics Data System (ADS)
Degallaix, Jérôme
2010-05-01
Optical simulation software is an essential tool for designing and commissioning laser interferometers. This article introduces OSCAR, a Matlab-based FFT code, to the experimentalist community. OSCAR (Optical Simulation Containing Ansys Results) is used to simulate the steady-state electric fields in optical cavities with realistic mirrors. The main advantage of OSCAR over other similar packages is the simplicity of its code, which requires only a short time to master. As a result, even for a beginner, it is relatively easy to modify OSCAR to suit other specific purposes. OSCAR includes an extensive manual and numerous detailed examples, such as simulating thermal aberration, calculating cavity eigenmodes and diffraction loss, simulating flat-beam cavities and three-mirror ring cavities. An example is also provided of how to run OSCAR on the GPU of a modern graphics card instead of the CPU, making the simulation up to 20 times faster.
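The core operation of any FFT-based optical code is angular-spectrum propagation of the field between surfaces; a generic sketch (not OSCAR's actual implementation):

```python
import numpy as np

# Core of an FFT optical code: propagate a sampled field a distance L by
# multiplying its angular spectrum by the Fresnel transfer function.
# Generic sketch, not OSCAR's actual implementation.
def propagate(field, dx, wavelength, L):
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                       # spatial frequencies
    f2 = fx[:, None] ** 2 + fx[None, :] ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * L) \
        * np.exp(-1j * np.pi * wavelength * L * f2)    # paraxial (Fresnel) kernel
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: a 0.3 mm waist Gaussian beam propagated 500 m at 1064 nm.
n, dx = 256, 5e-5
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
beam = np.exp(-(X**2 + Y**2) / 0.3e-3**2)
out = propagate(beam, dx, 1064e-9, 500.0)
print(np.abs(out).max())   # < 1: the beam has spread, so peak amplitude drops
```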
System analysis for the Huntsville Operational Support Center distributed computer system
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mauldin, J.
1984-01-01
The Huntsville Operations Support Center (HOSC) is a distributed computer system used to provide real-time data acquisition, analysis and display during NASA space missions and to perform simulation and study activities during non-mission times. The primary purpose is to provide a HOSC system simulation model that is used to investigate the effects of various HOSC system configurations. Such a model would be valuable in planning the future growth of HOSC and in ascertaining the effects of data rate variations, update table broadcasting and smart display terminal data requirements on the HOSC HYPERchannel network system. A simulation model was developed in PASCAL, and results of the simulation model for various system configurations were obtained. A tutorial of the model is given and the results of simulation runs are presented. Some very high data rate situations were simulated to observe the effects of the HYPERchannel switchover from contention to priority mode under high channel loading.
A Newton-Krylov solver for fast spin-up of online ocean tracers
NASA Astrophysics Data System (ADS)
Lindsay, Keith
2017-01-01
We present a Newton-Krylov based solver to efficiently spin up tracers in an online ocean model. We demonstrate that the solver converges, that tracer simulations initialized with the solution from the solver have small drift, and that the solver takes orders of magnitude less computational time than the brute force spin-up approach. To demonstrate the application of the solver, we use it to efficiently spin up the tracer ideal age with respect to the circulation from different time intervals in a long physics run. We then evaluate how the spun-up ideal age tracer depends on the duration of the physics run, i.e., on how equilibrated the circulation is.
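The idea reduces to finding a fixed point of the one-cycle propagator: solve Φ(x) − x = 0 with a Jacobian-free Newton-Krylov method instead of iterating Φ to convergence. A toy analogue with an invented linear "annual cycle":

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy analogue of tracer spin-up: one "annual cycle" maps tracer state x to
# phi(x); equilibrium means phi(x) = x. Newton-Krylov solves phi(x) - x = 0
# instead of iterating the cycle to convergence (the brute-force approach).
# The linear "circulation" A and source s are invented stand-ins.
rng = np.random.default_rng(2)
n = 50
A = rng.standard_normal((n, n))
A *= 0.9 / np.abs(np.linalg.eigvals(A)).max()   # contraction: a unique fixed point
s = rng.standard_normal(n)

def one_cycle(x):
    return A @ x + s        # stands in for integrating the model one year

x_eq = newton_krylov(lambda x: one_cycle(x) - x, np.zeros(n))
print(np.linalg.norm(one_cycle(x_eq) - x_eq))   # ~0: no drift under the cycle
```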
NASA Astrophysics Data System (ADS)
Nolan, D. S.; Klotz, B.
2016-12-01
Obtaining the best estimate of tropical cyclone (TC) intensity is vital for operational forecasting centers to produce accurate forecasts and to issue appropriate warnings. Aircraft data traditionally provide the most reliable information about the TC inner core and surrounding environment, but sampling strategies and observing platforms associated with reconnaissance aircraft have inherent deficiencies that contribute to the uncertainty of the intensity estimate. One such instrument, the stepped frequency microwave radiometer (SFMR) on the NOAA WP-3D aircraft, provides surface wind speeds along the aircraft flight track. However, the standard "figure-4" flight pattern substantially limits the azimuthal coverage of the eyewall, such that the chance of observing the true peak wind speeds is actually quite small. By simulating flights through a high-resolution simulation of Hurricane Isabel (2003), a previous study found that the 1-minute mean (maximum) SFMR winds underestimate a 6-hour running mean maximum wind (i.e. best track) by 7.5-10%. This project applies the same methodology to a suite of hurricane simulations with even higher resolution and more sophisticated physical parameterizations. These include the hurricane nature run of Nolan et al. (2013), the second hurricane nature run, a simulation of Hurricane Bill (2009), and additional idealized simulations. For the nature run cases, we find that the mean underestimate of the best-track estimate is 12-15%, considerably higher than determined from the Isabel simulation, while the other cases are similar to the previous result. Comparisons of the various cases indicate that the primary factors that lead to greater undersampling rates are storm size and storm asymmetry. Minimum surface pressure is also frequently estimated from pressures reported by dropsondes released into the eye, with a standard correction of 1 hPa per 10 knots of wind at the time of "splash." Statistics from thousands of simulated splash points show that this rule is quite good for large wind speeds, but for low wind speeds there is still a positive bias in the pressure estimate, because the chance of hitting the true pressure minimum is quite small.
Tethered satellite system dynamics and control review panel and related activities, phase 3
NASA Technical Reports Server (NTRS)
1991-01-01
Two major tests of the Tethered Satellite System (TSS) engineering and flight units were conducted to demonstrate the functionality of the hardware and software. Deficiencies in the hardware/software integration tests (HSIT) led to a recommendation for more testing to be performed. Selected problem areas of tether dynamics were analyzed, including verification of the severity of skip rope oscillations, verification or comparison runs to explore dynamic phenomena observed in other simulations, and data generation runs to explore the performance of the time domain and frequency domain skip rope observers.
Improvements in Routing for Packet-Switched Networks
1975-02-18
PROGRAM FOR COMPUTER SIMULATION . . 90 B.1 Flow Diagram of Adaptive Routine 90 B.2 Program ARPSIM 93 B.3 Explanation of Variables...equa. 90 APPENDIX B ADAPTIVE ROUTING PROGRAM FOR COMPUTER SIMULATION The computer simulation for adaptive routing was initially run on a DDP-24 small...TRANSMIT OVER AVAILABLE LINKS MESSAGES IN QUEUE COMPUTE Ni NUMBER OF ARRIVALS AT EACH NODE i AT TIME T Fig. B1a - Flow Diagram of Program Routine 92
A Generic Inner-Loop Control Law Structure for Six-Degree-of-Freedom Conceptual Aircraft Design
NASA Technical Reports Server (NTRS)
Cox, Timothy H.; Cotting, M. Christopher
2005-01-01
A generic control system framework for both real-time and batch six-degree-of-freedom simulations is presented. This framework uses a simplified dynamic inversion technique to allow for stabilization and control of any type of aircraft at the pilot interface level. The simulation, designed primarily for the real-time simulation environment, also can be run in a batch mode through a simple guidance interface. Direct vehicle-state acceleration feedback is required with the simplified dynamic inversion technique. The estimation of surface effectiveness within real-time simulation timing constraints also is required. The generic framework provides easily modifiable control variables, allowing flexibility in the variables that the pilot commands. A direct control allocation scheme is used to command aircraft effectors. Primary uses for this system include conceptual and preliminary design of aircraft, when vehicle models are rapidly changing and knowledge of vehicle six-degree-of-freedom performance is required. A simulated airbreathing hypersonic vehicle and simulated high-performance fighter aircraft are used to demonstrate the flexibility and utility of the control system.
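A compact sketch of the acceleration-feedback inversion idea with a least-squares allocation stand-in (matrices and gains invented; the paper's control variables and allocation scheme differ in detail):

```python
import numpy as np

# Sketch of the simplified dynamic inversion idea described above: measured
# state accelerations stand in for the unknown plant model, and an estimated
# control-surface effectiveness B_hat maps the acceleration error to an
# effector command. Gains and matrices are invented for illustration.
def inner_loop(accel_cmd, accel_meas, u_prev, B_hat):
    """One update of an acceleration-feedback inversion law."""
    err = accel_cmd - accel_meas                  # acceleration error
    # Least-squares allocation of the required extra acceleration onto the
    # effectors (a direct control allocation stand-in).
    du = np.linalg.pinv(B_hat) @ err
    return u_prev + du

B_hat = np.array([[2.0, 0.5],                     # d(accel)/d(effector), estimated
                  [0.1, 1.5]])
u = np.zeros(2)
accel_cmd = np.array([1.0, 0.0])                  # pilot commands pitch accel
accel_meas = np.array([0.2, 0.1])                 # from vehicle-state feedback
u = inner_loop(accel_cmd, accel_meas, u, B_hat)
print(u)   # effector increments that drive the acceleration error toward zero
```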
Evaluation of subgrid-scale turbulence models using a fully simulated turbulent flow
NASA Technical Reports Server (NTRS)
Clark, R. A.; Ferziger, J. H.; Reynolds, W. C.
1977-01-01
An exact turbulent flow field was calculated on a three-dimensional grid with 64 points on a side. The flow simulates grid-generated turbulence from wind tunnel experiments. In this simulation, the grid spacing is small enough to include essentially all of the viscous energy dissipation, and the box is large enough to contain the largest eddy in the flow. The method is limited to low turbulence Reynolds numbers, in our case R_λ = 36.6. To complete the calculation in a reasonable amount of computer time with reasonable accuracy, a third-order time-integration scheme was developed which runs at about the same speed as a simple first-order scheme. It obtains this accuracy by saving the velocity field and its first time derivative at each time step. Fourth-order accurate space differencing is used.
Modeling of Hall Thruster Lifetime and Erosion Mechanisms (Preprint)
2007-09-01
Hall thruster plasma discharge has been upgraded to simulate the erosion of the thruster acceleration channel, the degradation of which is the main life-limiting factor of the propulsion system. Evolution of the thruster geometry as a result of material removal due to sputtering is modeled by calculating wall erosion rates, stepping the grid boundary by a chosen time step and altering the computational mesh between simulation runs. The code is first tuned to predict the nose cone erosion of a 200 W Busek Hall thruster, the BHT-200. Simulated erosion
NASA Astrophysics Data System (ADS)
Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.
2015-12-01
For many environmental systems, model run times have remained very long as more capable computers have been used to add more processes and more time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs, divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and the surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical of (2) compare with that from the globally averaged methods typical of (3) for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and surface and groundwater modeling.
CFD transient simulation of an isolator shock train in a scramjet engine
NASA Astrophysics Data System (ADS)
Hoeger, Troy Christopher
For hypersonic flight, the scramjet engine uses an isolator to contain the pre-combustion shock train formed by the pressure difference between the inlet and the combustion chamber. If this shock train were to reach the inlet, it would cause an engine unstart, disrupting the flow through the engine and leading to a loss of thrust and potentially the loss of the vehicle. Prior to this work, a Computational Fluid Dynamics (CFD) simulation of the isolator was needed to simulate and characterize the isolator flow and to find the relationship between back pressure and changes in the location of the leading edge of the shock train. In this work, the VULCAN code was employed, with back pressure as an input, to obtain the time history of the shock train's leading-edge location. Results were obtained for both transient and steady-state conditions. The simulation showed a relationship between back-to-inlet pressure ratios and the final locations of the shock train. For the 2-D runs, locations were within one isolator duct height of experimental results, while for 3-D runs, the results were within two isolator duct heights.
Simple Queueing Model Applied to the City of Portland
NASA Astrophysics Data System (ADS)
Simon, Patrice M.; Esser, Jörg; Nagel, Kai
We use a simple traffic micro-simulation model based on queueing dynamics as introduced by Gawron [IJMPC, 9(3):393, 1998] in order to simulate traffic in Portland/Oregon. Links have a flow capacity, that is, they do not release more vehicles per second than is possible according to their capacity. This leads to queue build-up if demand exceeds capacity. Links also have a storage capacity, which means that once a link is full, vehicles that want to enter the link need to wait. This leads to queue spill-back through the network. The model is compatible with route-plan-based approaches such as TRANSIMS, where each vehicle attempts to follow its pre-computed path. Yet, both the data requirements and the computational requirements are considerably lower than for the full TRANSIMS microsimulation. Indeed, the model uses standard emme/2 network data, and runs about eight times faster than real time with more than 100 000 vehicles simultaneously in the simulation on a single Pentium-type CPU. We derive the model's fundamental diagrams and explain them. The simulation is used to simulate traffic on the emme/2 network of the Portland (Oregon) metropolitan region (20 000 links). Demand is generated by a simplified home-to-work destination assignment which generates about half a million trips for the morning peak. Route assignment is done by iterative feedback between micro-simulation and router. An iterative solution of the route assignment for the above problem can be achieved within about half a day of computing time on a desktop workstation. We compare results with field data and with results of traditional assignment runs by the Portland Metropolitan Planning Organization. Thus, with a model such as this one, it is possible to use a dynamic, activities-based approach to transportation simulation (such as in TRANSIMS) with affordable data and hardware. This should enable systematic research about the coupling of demand generation, route assignment, and micro-simulation output.
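The queueing dynamics are simple enough to sketch directly (parameters invented; the real model adds routing, calibrated capacities and network data):

```python
from collections import deque

# Minimal version of the queueing dynamics described above: each link
# releases at most `flow_cap` vehicles per step and accepts new ones only
# while it holds fewer than `storage_cap`. Parameters are illustrative.
class Link:
    def __init__(self, flow_cap, storage_cap, travel_steps):
        self.flow_cap, self.storage_cap = flow_cap, storage_cap
        self.travel_steps = travel_steps
        self.queue = deque()              # (vehicle, earliest exit step)

    def has_space(self):
        return len(self.queue) < self.storage_cap

    def enter(self, vehicle, now):
        self.queue.append((vehicle, now + self.travel_steps))

    def release(self, now, next_link):
        """Move up to flow_cap ready vehicles onto the next link."""
        moved = 0
        while (self.queue and moved < self.flow_cap
               and self.queue[0][1] <= now and next_link.has_space()):
            vehicle, _ = self.queue.popleft()
            next_link.enter(vehicle, now)
            moved += 1
        return moved

a = Link(flow_cap=2, storage_cap=10, travel_steps=3)
b = Link(flow_cap=1, storage_cap=2, travel_steps=3)   # bottleneck downstream
for v in range(8):
    a.enter(v, now=0)
for step in range(1, 20):
    a.release(step, b)        # once b fills, vehicles spill back onto a
print(len(a.queue), len(b.queue))   # 6 vehicles stuck upstream, 2 on b
```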
Comparison of three large-eddy simulations of shock-induced turbulent separation bubbles
NASA Astrophysics Data System (ADS)
Touber, Emile; Sandham, Neil D.
2009-12-01
Three different large-eddy simulation investigations of the interaction between an impinging oblique shock and a supersonic turbulent boundary layer are presented. All simulations made use of the same inflow technique, specifically aimed at avoiding possible low-frequency interferences with the shock/boundary-layer interaction system. All simulations were run on relatively wide computational domains and integrated over times greater than twenty-five times the period of the most commonly reported low-frequency shock oscillation, making comparisons possible at both the time-averaged and low-frequency dynamic levels. The results confirm previous experimental results which suggested a simple linear relation between the interaction length and the oblique-shock strength if scaled using the boundary-layer thickness and wall-shear stress. All the tested cases show evidence of significant low-frequency shock motions. At the wall, energetic low-frequency pressure fluctuations are observed, mainly in the initial part of the interaction.
Multiple shooting shadowing for sensitivity analysis of chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Blonigan, Patrick J.; Wang, Qiqi
2018-02-01
Sensitivity analysis methods are important tools for research and design with simulations. Many important simulations exhibit chaotic dynamics, including scale-resolving turbulent fluid flow simulations. Unfortunately, conventional sensitivity analysis methods are unable to compute useful gradient information for long-time-averaged quantities in chaotic dynamical systems. Sensitivity analysis with least squares shadowing (LSS) can compute useful gradient information for a number of chaotic systems, including simulations of chaotic vortex shedding and homogeneous isotropic turbulence. However, this gradient information comes at a very high computational cost. This paper presents multiple shooting shadowing (MSS), a more computationally efficient shadowing approach than the original LSS approach. Through an analysis of the convergence rate of MSS, it is shown that MSS can have lower memory usage and run time than LSS.
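Why conventional methods fail is easy to demonstrate: a finite-difference estimate of a long-time-averaged quantity in a chaotic system does not converge as the averaging window grows. A small demonstration on the Lorenz system (the true sensitivity d⟨z⟩/dρ is known from the shadowing literature to be close to 1):

```python
import numpy as np

# Why conventional sensitivity analysis fails for chaotic systems: a finite
# difference of the time-averaged Lorenz z with respect to rho does not
# converge as the averaging window grows, because nearby trajectories
# diverge exponentially. (Shadowing methods such as LSS/MSS address this.)
def mean_z(rho, t_end, dt=0.002):
    x, y, z = 1.0, 1.0, 28.0
    total = 0.0
    for _ in range(int(t_end / dt)):
        dx = 10.0 * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - (8.0 / 3.0) * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        total += z * dt
    return total / t_end

for T in (10.0, 100.0, 1000.0):
    grad = (mean_z(28.0 + 1e-4, T) - mean_z(28.0, T)) / 1e-4
    print(T, grad)   # noisy, window-dependent estimates around the true ~1
```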
Moving target, distributed, real-time simulation using Ada
NASA Technical Reports Server (NTRS)
Collins, W. R.; Feyock, S.; King, L. A.; Morell, L. J.
1985-01-01
A precompiler solution is described for the moving-target compiler problem encountered when trying to run parallel simulation algorithms on several microcomputers. The precompiler is under development at NASA-Lewis for simulating jet engines. Since the behavior of any component of a jet engine, e.g., the fan inlet, rear duct, or forward sensor, depends on the previous behaviors, not the current behaviors, of other components, the behaviors can be modeled on different processors, provided the outputs of the processors reach other processors within appropriate time intervals. The simulator works in compute and transfer modes. The Ada procedure sets for the behaviors of different components are divided up and routed by the precompiler, which essentially receives a multitasking program. The subroutines are synchronized after each computation cycle.
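The compute/transfer pattern generalizes beyond Ada; a toy Python sketch in which every component reads only the previous cycle's outputs, so all components can be evaluated in parallel (component models invented):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the compute/transfer scheme described above: every component
# updates from the *previous* cycle's outputs, so all components can be
# evaluated concurrently, then outputs are exchanged. The component
# functions here are invented toys, not the jet-engine models.
def fan(prev):    return 0.9 * prev["duct"] + 1.0
def duct(prev):   return 0.5 * prev["fan"]
def sensor(prev): return prev["duct"]

components = {"fan": fan, "duct": duct, "sensor": sensor}
state = {"fan": 0.0, "duct": 0.0, "sensor": 0.0}

with ThreadPoolExecutor() as pool:
    for cycle in range(20):
        # Compute mode: all components run concurrently on the old state.
        futures = {n: pool.submit(f, state) for n, f in components.items()}
        # Transfer mode: exchange outputs before the next cycle begins.
        state = {n: fut.result() for n, fut in futures.items()}
print(state)   # settles toward the fixed point of the coupled maps
```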
Storm Water Management Model User’s Manual Version 5.1 - manual
SWMM 5 provides an integrated environment for editing study area input data, running hydrologic, hydraulic and water quality simulations, and viewing the results in a variety of formats. These include color-coded drainage area and conveyance system maps, time series graphs and ta...
Mars Science Laboratory Workstation Test Set
NASA Technical Reports Server (NTRS)
Henriquez, David A.; Canham, Timothy K.; Chang, Johnny T.; Villaume, Nathaniel
2009-01-01
The Mars Science Laboratory-developed Workstation Test Set (WSTS) is a computer program that enables flight software development on virtual MSL avionics. The WSTS is a non-real-time flight avionics simulator that is designed to be completely software-based and to run on a workstation-class Linux PC.
Tropical pacing of Antarctic sea ice increase
NASA Astrophysics Data System (ADS)
Schneider, D. P.
2015-12-01
One reason why coupled climate model simulations generally do not reproduce the observed increase in Antarctic sea ice extent may be that their internally generated climate variability does not sync with the observed phases of phenomena like the Pacific Decadal Oscillation (PDO) and ENSO. For example, it is unlikely for a free-running coupled model simulation to capture the shift of the PDO from its positive to negative phase during 1998, and the subsequent ~15 year duration of the negative PDO phase. In previously presented work based on atmospheric models forced by observed tropical SSTs and stratospheric ozone, we demonstrated that tropical variability is key to explaining the wind trends over the Southern Ocean during the past ~35 years, particularly in the Ross, Amundsen and Bellingshausen Seas, the regions of the largest trends in sea ice extent and ice season duration. Here, we extend this idea to coupled model simulations with the Community Earth System Model (CESM) in which the evolution of SST anomalies in the central and eastern tropical Pacific is constrained to match the observations. This ensemble of 10 "tropical pacemaker" simulations shows a more realistic evolution of Antarctic sea ice anomalies than does its unconstrained counterpart, the CESM Large Ensemble (both sets of runs include stratospheric ozone depletion and other time-dependent radiative forcings). In particular, the pacemaker runs show that increased sea ice in the eastern Ross Sea is associated with a deeper Amundsen Sea Low (ASL) and stronger westerlies over the south Pacific. These circulation patterns in turn are linked with the negative phase of the PDO, characterized by negative SST anomalies in the central and eastern Pacific. The timing of tropical decadal variability with respect to ozone depletion further suggests a strong role for tropical variability in the recent acceleration of the Antarctic sea ice trend, as ozone depletion stabilized by the late 1990s, prior to the most recent major shift in tropical climate. In the pacemaker runs, the positive sea ice trend in the eastern Ross Sea is stronger during the most recent period (~2000-2014) than it is during the period of rapid ozone depletion (~1980-1996).
Sustained Accelerated Idioventricular Rhythm in a Centrifuge-Simulated Suborbital Spaceflight.
Suresh, Rahul; Blue, Rebecca S; Mathers, Charles; Castleberry, Tarah L; Vanderploeg, James M
2017-08-01
Hypergravitational exposures during human centrifugation are known to provoke dysrhythmias, including sinus dysrhythmias/tachycardias, premature atrial/ventricular contractions, and even atrial fibrillation or flutter patterns. However, events are generally short-lived and resolve rapidly after cessation of acceleration. This case report describes a prolonged ectopic ventricular rhythm in response to high-G exposure. A previously healthy 30-yr-old man voluntarily participated in centrifuge trials as a part of a larger study, experiencing a total of 7 centrifuge runs over 48 h. Day 1 consisted of two +Gz runs (peak +3.5 Gz, run 2) and two +Gx runs (peak +6.0 Gx, run 4). Day 2 consisted of three runs approximating suborbital spaceflight profiles (combined +Gx and +Gz). Hemodynamic data collected included blood pressure, heart rate, and continuous three-lead electrocardiogram. Following the final acceleration exposure of the last Day 2 run (peak +4.5 Gx and +4.0 Gz combined, resultant +6.0 G), during a period of idle resting centrifuge activity (resultant vector +1.4 G), the subject demonstrated a marked change in his three-lead electrocardiogram from normal sinus rhythm to a wide-complex ectopic ventricular rhythm at a rate of 91-95 bpm, consistent with an accelerated idioventricular rhythm (AIVR). This rhythm was sustained for 2 min 24 s before reverting to normal sinus rhythm. The subject reported no adverse symptoms during this time. While prolonged, the dysrhythmia was asymptomatic and self-limited. AIVR is likely a physiological response to acceleration and can be managed conservatively. Vigilance is needed to ensure that AIVR is correctly distinguished from other, malignant rhythms to avoid inappropriate treatment and negative operational impacts. Suresh R, Blue RS, Mathers C, Castleberry TL, Vanderploeg JM. Sustained accelerated idioventricular rhythm in a centrifuge-simulated suborbital spaceflight. Aerosp Med Hum Perform. 2017; 88(8):789-793.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert
2017-07-10
Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An advantage and attractive aspect of a reduced-order model is that it is computationally inexpensive to evaluate when compared to running a high-fidelity numerical simulation. A reduced-order model takes a couple of seconds to run on a laptop while a high-fidelity simulation may take a couple of hours to run on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified range of model parameters. Key sensitive parameters are then identified from these simulations: fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are developed by minimizing a non-linear least-squares misfit function using the Levenberg–Marquardt algorithm. The misfit function is based on the difference between the numerical simulation data and the reduced-order model. ROM-1 is constructed from polynomials up to fourth order. ROM-1 is able to accurately reproduce the power output of numerical simulations for low values of permeability and certain features of the field-scale data. ROM-2 is a model with more analytical functions, consisting of polynomials up to order eight, exponential functions and smooth approximations of Heaviside functions, and accurately describes the field data. At higher permeabilities, ROM-2 reproduces numerical results better than ROM-1; however, there is a considerable deviation from numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten, and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data. ROM-3, on the other hand, provides a middle ground for model parsimony. Based on R² values for the training, validation, and prediction data sets we found that ROM-3 is a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10⁻¹⁵ m²) are desired, ROM-2 and ROM-3 outperform ROM-1. In terms of computational time, all the ROMs are 10⁴ times faster than running a high-fidelity numerical simulation. In conclusion, this makes the proposed regression-based ROMs attractive for real-time EGS applications because they are fast and provide reasonably good predictions for thermal power output.
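The regression step itself is standard; a sketch with synthetic stand-in data and a fourth-order-style polynomial form, fit by Levenberg–Marquardt as the paper describes (the actual ROM terms and training data differ):

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of the regression-based ROM construction: fit polynomial
# coefficients in (log-permeability, time) to simulated power output by
# Levenberg-Marquardt least squares. The synthetic "simulation data" and
# the polynomial form are illustrative stand-ins for the paper's ROM-1.
rng = np.random.default_rng(3)
logk = rng.uniform(-16, -14, 200)          # log10 fracture-zone permeability
t = rng.uniform(0, 30, 200)                # years of production
power = 30 + 2 * logk - 0.5 * t + 0.01 * t**2 + 0.1 * rng.standard_normal(200)

def rom(c, logk, t):
    return c[0] + c[1] * logk + c[2] * t + c[3] * t**2 + c[4] * logk * t

def misfit(c):
    return rom(c, logk, t) - power          # residuals vs "simulation" data

fit = least_squares(misfit, x0=np.zeros(5), method="lm")
ss_res = np.sum(fit.fun**2)
ss_tot = np.sum((power - power.mean())**2)
print(1 - ss_res / ss_tot)                  # R^2 of the trained ROM, ~1 here
```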
Ade, C J; Broxterman, R M; Craig, J C; Schlup, S J; Wilcox, S L; Barstow, T J
2014-11-01
The purpose was to evaluate the relationships between tests of fitness and two activities that simulate components of Lunar- and Martian-based extravehicular activities (EVA). Seventy-one subjects completed two field tests: a physical abilities test and a 10 km Walkback test. The relationships between test times and the following parameters were determined: running V˙O2max, gas exchange threshold (GET), speed at V˙O2max (s-V˙O2max), highest sustainable rate of aerobic metabolism [critical speed (CS)], and the finite distance that could be covered above CS (D'); and arm-cranking V˙O2peak, GET, critical power (CP), and the finite work that can be performed above CP (W'). CS, running V˙O2max, s-V˙O2max, and arm-cranking V˙O2peak had the highest correlations with the physical abilities field test (r=0.66-0.82, P<0.001). For the 10 km Walkback, CS, s-V˙O2max, and running V˙O2max were significant predictors (r=0.64-0.85, P<0.001). CS and, to a lesser extent, V˙O2max are most strongly associated with tasks that simulate aspects of EVA performance, highlighting CS as a method for evaluating astronaut physical capacity.
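The critical speed and D' parameters referred to above are commonly estimated with the standard two-parameter model d = CS·t + D' fit to exhaustive-run data; a minimal sketch under that assumption, with hypothetical times and distances (not this study's data):

```python
import numpy as np

t_lim = np.array([120.0, 300.0, 600.0, 900.0])     # times to exhaustion [s]
d_lim = np.array([650.0, 1350.0, 2450.0, 3500.0])  # distances covered [m]

# Linear fit: slope = critical speed CS [m/s], intercept = D' [m].
CS, D_prime = np.polyfit(t_lim, d_lim, 1)
print(f"critical speed CS = {CS:.2f} m/s, D' = {D_prime:.0f} m")
```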
Optimum Vehicle Component Integration with InVeST (Integrated Vehicle Simulation Testbed)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, W; Paddack, E; Aceves, S
2001-12-27
We have developed an Integrated Vehicle Simulation Testbed (InVeST). InVeST is based on the concept of co-simulation, and it allows the development of virtual vehicles that can be analyzed and optimized as an overall integrated system. The virtual vehicle is defined by selecting different vehicle components from a component library. Vehicle component models can be written in multiple programming languages running on different computer platforms. At the same time, InVeST provides full protection for proprietary models. Co-simulation is a cost-effective alternative to competing methodologies, such as developing a translator or selecting a single programming language for all vehicle components. InVeST has recently been demonstrated using a transmission model and a transmission controller model. The transmission model was written in SABER and ran on a Sun/Solaris workstation, while the transmission controller was written in MATRIXx and ran on a PC running Windows NT. The demonstration was successfully performed. Future plans include applying co-simulation and InVeST to the analysis and optimization of multiple complex systems, including those of Intelligent Transportation Systems.
RAY-RAMSES: a code for ray tracing on the fly in N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barreira, Alexandre; Llinares, Claudio; Bose, Sownak
2016-05-01
We present a ray tracing code to compute integrated cosmological observables on the fly in AMR N-body simulations. Unlike conventional ray tracing techniques, our code takes full advantage of the time and spatial resolution attained by the N-body simulation by computing the integrals along the line of sight on a cell-by-cell basis through the AMR simulation grid. Moreover, since it runs on the fly in the N-body run, our code can produce maps of the desired observables without storing large (or any) amounts of data for post-processing. We implemented our routines in the RAMSES N-body code and tested the implementation using an example weak lensing simulation. We analyse basic statistics of lensing convergence maps and find good agreement with semi-analytical methods. The ray tracing methodology presented here can be used in several cosmological analyses, such as Sunyaev-Zel'dovich and integrated Sachs-Wolfe effect studies, as well as in modified gravity studies. Our code can also be used in cross-checks of the more conventional methods, which can be important in tests of theory systematics in preparation for upcoming large scale structure surveys.
Komarov, Ivan; D'Souza, Roshan M
2012-01-01
The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques for simulating reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required to execute a single run and, due to the stochastic nature of the simulation, the need for multiple runs in parameter sweep exercises. Even very efficient variants of the GSSA make parameter sweeps prohibitively expensive to compute. Here we present a novel variant of the exact GSSA that is amenable to acceleration using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
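For reference, the exact (serial) direct-method GSSA that the paper accelerates can be sketched in a few lines; the two-reaction model below is a made-up example, not one of the paper's benchmarks:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([100, 0])                  # molecule counts [A, B]
stoich = np.array([[-1, +1],            # reaction 0: A -> B
                   [+1, -1]])           # reaction 1: B -> A
rates = np.array([1.0, 0.5])            # rate constants
t, t_end = 0.0, 10.0

while t < t_end:
    a = rates * x                       # propensity of each reaction channel
    a0 = a.sum()
    if a0 == 0.0:
        break                           # no reaction can fire
    t += rng.exponential(1.0 / a0)      # exponentially distributed waiting time
    j = rng.choice(len(a), p=a / a0)    # which reaction fires
    x += stoich[j]

print("final state:", x, "at t =", round(t, 3))
```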
Fully 3D modeling of tokamak vertical displacement events with realistic parameters
NASA Astrophysics Data System (ADS)
Pfefferle, David; Ferraro, Nathaniel; Jardin, Stephen; Bhattacharjee, Amitava
2016-10-01
In this work, we model the complex multi-domain and highly non-linear physics of Vertical Displacement Events (VDEs), one of the most damaging off-normal events in tokamaks, with the implicit 3D extended MHD code M3D-C1. The code has recently acquired the capability to include finite-thickness conducting structures within the computational domain. By exploiting the possibility of running a linear 3D calculation on top of a non-linear 2D simulation, we monitor the non-axisymmetric stability and assess the eigen-structure of kink modes as the simulation proceeds. Once a stability boundary is crossed, a fully 3D non-linear calculation is launched for the remainder of the simulation, starting from an earlier time of the 2D run. This procedure, along with adaptive zoning, greatly increases the efficiency of the calculation and makes it possible to perform VDE simulations with realistic parameters and high resolution. Simulations are being validated with NSTX data, where both axisymmetric (toroidally averaged) and non-axisymmetric induced and conductive (halo) currents have been measured. This work is supported by US DOE Grant DE-AC02-09CH11466.
Jobs masonry in LHCb with elastic Grid Jobs
NASA Astrophysics Data System (ADS)
Stagni, F.; Charpentier, Ph
2015-12-01
In any distributed computing infrastructure, a job is normally forbidden to run for an indefinite amount of time. This limitation is implemented using different technologies, the most common one being the CPU time limit imposed by batch queues. It is therefore important to have a good estimate of how much CPU work a job will require: otherwise, it might be killed by the batch system, or by whatever system is controlling the jobs' execution. In many modern interwares, the jobs are actually executed by pilot jobs, which can use the whole available time to run multiple consecutive jobs. If at some point the time remaining in a pilot is too short for the execution of any job, the pilot has to be released, even though the remaining time could have been used efficiently by a shorter job. Within LHCbDIRAC, the LHCb extension of the DIRAC interware, we developed a simple way to fully exploit the computing capabilities available to a pilot, even on resources with limited time capabilities, by adding elasticity to production Monte Carlo (MC) simulation jobs. With our approach, independently of the time available, LHCbDIRAC will always have the possibility to execute an MC job whose length is adapted to the available amount of time: the same job, running on different computing resources with different time limits, will therefore produce different numbers of events. The decision on the number of events to be produced is made just in time at the start of the job, when the capabilities of the resource are known. In order to know how many events an MC job should be instructed to produce, LHCbDIRAC simply requires three values: the CPU work per event for that type of job, the power of the machine it is running on, and the time left for the job before being killed. Knowing these values, we can estimate the number of events the job will be able to simulate with the available CPU time. This paper demonstrates that, using this simple but effective solution, LHCb manages to make more efficient use of the available resources, and that it can easily use new types of resources. One example is represented by resources provided by batch queues, where low-priority MC jobs can be used as "masonry" jobs in multi-job pilots. A second example is represented by opportunistic resources with limited available time.
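The just-in-time decision described above reduces to simple arithmetic on the three quoted values; a sketch with illustrative names and units (not LHCbDIRAC's actual API):

```python
def events_to_produce(cpu_work_per_event, machine_power, seconds_left,
                      safety_margin=0.9):
    """Number of MC events that fit in the time left before the job is killed.

    cpu_work_per_event : CPU work needed per event (e.g. HS06.s per event)
    machine_power      : power of this worker node (e.g. HS06 per core)
    seconds_left       : wall-clock seconds remaining for the pilot
    safety_margin      : keep some headroom so the job finishes in time
    """
    budget = machine_power * seconds_left * safety_margin  # total CPU work available
    return max(0, int(budget // cpu_work_per_event))

print(events_to_produce(cpu_work_per_event=250.0, machine_power=10.0,
                        seconds_left=36_000))  # -> 1296 events
```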
NASA Astrophysics Data System (ADS)
Kiely, Thomas G.; Freericks, J. K.
2018-02-01
In a large transverse field, there is an energy cost associated with flipping spins along the axis of the field. This penalty can be employed to relate the transverse-field Ising model in a large field to the XY model in no field (when measurements are performed at the proper stroboscopic times). We describe the details of how this relationship works and, in particular, we also show under what circumstances it fails. We examine wave-function overlap between the two models and observables, such as spin-spin Green's functions. In general, the mapping is quite robust at short times, but will ultimately fail if the run time becomes too long. There is also a trade-off, between how long a simulation can be run and the time jitter of the stroboscopic measurements, that must be balanced when planning to employ this mapping.
Modular use of human body models of varying levels of complexity: Validation of head kinematics.
Decker, William; Koya, Bharath; Davis, Matthew L; Gayzik, F Scott
2017-05-29
The significant computational resources required to execute detailed human body finite-element models have motivated the development of faster-running, simplified models (e.g., GHBMC M50-OS). Previous studies have demonstrated the ability to modularly incorporate the validated GHBMC M50-O brain model into the simplified model (GHBMC M50-OS+B), which allows for localized analysis of the brain in a fraction of the computation time required for the detailed model. The objective of this study is to validate the head and neck kinematics of the GHBMC M50-O and M50-OS (detailed and simplified versions of the same model) against human volunteer test data in frontal and lateral loading, and to quantify the effect of modular insertion of the detailed brain model into the M50-OS. Data from the Navy Biodynamics Laboratory (NBDL) human volunteer studies, including a 15g frontal, 8g frontal, and 7g lateral impact, were reconstructed and simulated using LS-DYNA. A five-point restraint system was used for all simulations, and initial positions of the models were matched with volunteer data using settling and positioning techniques. Both the frontal and lateral simulations were run with the M50-O, M50-OS, and M50-OS+B with active musculature, for a total of nine runs. Normalized run times for the models used in this study were 8.4 min/ms for the M50-O, 0.26 min/ms for the M50-OS, and 0.97 min/ms for the M50-OS+B, a 32- and 9-fold reduction in run time, respectively. Corridors were reanalyzed for head and T1 kinematics from the NBDL studies. Qualitative evaluation of head rotational accelerations and linear resultant acceleration, as well as linear resultant T1 acceleration, showed reasonable agreement between all models and the experimental data. Objective evaluation of the results for head center of gravity (CG) accelerations was completed via ISO TS 18571 and indicated scores of 0.673 (M50-O), 0.638 (M50-OS), and 0.656 (M50-OS+B) for the 15g frontal impact. Scores at lower g levels yielded similar results: 0.667 (M50-O), 0.675 (M50-OS), and 0.710 (M50-OS+B) for the 8g frontal impact. The 7g lateral simulations also compared fairly well, with average ISO scores of 0.565 for the M50-O, 0.634 for the M50-OS, and 0.606 for the M50-OS+B. The three HBMs experienced similar head and neck motion in the frontal simulations, but the M50-O predicted significantly greater head rotation in the lateral simulation. The greatest departure from the detailed occupant model was noted in lateral flexion, potentially indicating the need for further study. Precise modeling of the belt system, however, was limited by available data. A sensitivity study of these parameters in the frontal condition showed that belt slack and muscle activation have a modest effect on the ISO score. The reduction in computation time of the M50-OS+B reduces the burden of high computational requirements when handling detailed HBMs. Future work will focus on harmonizing the lateral head response of the models and studying localized injury criteria within the brain from the M50-O and M50-OS+B.
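A quick arithmetic check of the stated 32- and 9-fold reductions, using only the normalized run times quoted in the abstract:

```python
# Normalized run times from the abstract, in minutes per millisecond simulated.
runtimes = {"M50-O": 8.4, "M50-OS": 0.26, "M50-OS+B": 0.97}
for name, rt in runtimes.items():
    speedup = runtimes["M50-O"] / rt
    print(f"{name}: {rt} min/ms, speedup vs M50-O = {speedup:.1f}x")
# -> M50-OS is ~32x faster, M50-OS+B ~9x faster, matching the text.
```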
A Collection of Nonlinear Aircraft Simulations in MATLAB
NASA Technical Reports Server (NTRS)
Garza, Frederico R.; Morelli, Eugene A.
2003-01-01
Nonlinear six degree-of-freedom simulations for a variety of aircraft were created using MATLAB. Data for aircraft geometry, aerodynamic characteristics, mass / inertia properties, and engine characteristics were obtained from open literature publications documenting wind tunnel experiments and flight tests. Each nonlinear simulation was implemented within a common framework in MATLAB, and includes an interface with another commercially-available program to read pilot inputs and produce a three-dimensional (3-D) display of the simulated airplane motion. Aircraft simulations include the General Dynamics F-16 Fighting Falcon, Convair F-106B Delta Dart, Grumman F-14 Tomcat, McDonnell Douglas F-4 Phantom, NASA Langley Free-Flying Aircraft for Sub-scale Experimental Research (FASER), NASA HL-20 Lifting Body, NASA / DARPA X-31 Enhanced Fighter Maneuverability Demonstrator, and the Vought A-7 Corsair II. All nonlinear simulations and 3-D displays run in real time in response to pilot inputs, using contemporary desktop personal computer hardware. The simulations can also be run in batch mode. Each nonlinear simulation includes the full nonlinear dynamics of the bare airframe, with a scaled direct connection from pilot inputs to control surface deflections to provide adequate pilot control. Since all the nonlinear simulations are implemented entirely in MATLAB, user-defined control laws can be added in a straightforward fashion, and the simulations are portable across various computing platforms. Routines for trim, linearization, and numerical integration are included. The general nonlinear simulation framework and the specifics for each particular aircraft are documented.
NASA Astrophysics Data System (ADS)
Amalia, E.; Moelyadi, M. A.; Ihsan, M.
2018-04-01
The flow of air past a circular cylinder at a Reynolds number of 250,000 exhibits the von Karman vortex street phenomenon, which can only be captured well with an appropriate turbulence model. In this study, several turbulence models available in ANSYS Fluent 16.0 were tested for their ability to simulate the von Karman vortex street, namely k-epsilon, SST k-omega, Reynolds stress, detached eddy simulation (DES), and large eddy simulation (LES). In addition, the effect of time step size on the accuracy of the CFD simulation was examined. The simulations were carried out using two-dimensional and three-dimensional models and then compared with experimental data. For the two-dimensional model, the von Karman vortex street phenomenon was captured successfully using the SST k-omega turbulence model. For the three-dimensional model, it was captured using the Reynolds stress turbulence model. The time step size affects the smoothness of the drag coefficient curves over time, as well as the running time of the simulation: the smaller the time step size, the smoother the resulting drag coefficient curves. Smaller time step sizes also gave faster computation times.
Simulation-Based Learning: The Learning-Forgetting-Relearning Process and Impact of Learning History
ERIC Educational Resources Information Center
Davidovitch, Lior; Parush, Avi; Shtub, Avy
2008-01-01
The results of empirical experiments evaluating the effectiveness and efficiency of the learning-forgetting-relearning process in a dynamic project management simulation environment are reported. Sixty-six graduate engineering students performed repetitive simulation runs with a break period of several weeks between the runs. The students used a…
Low dose tomographic fluoroscopy: 4D intervention guidance with running prior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, Barbara; Kuntz, Jan; Brehm, Marcus
Purpose: Today's standard imaging technique in interventional radiology is single- or biplane x-ray fluoroscopy, which delivers 2D projection images as a function of time (2D+T). This state-of-the-art technology, however, suffers from its projective nature and is limited by the superposition of the patient's anatomy. Temporally resolved tomographic volumes (3D+T) would significantly improve the visualization of complex structures. A continuous tomographic data acquisition, if carried out with today's technology, would yield an excessive patient dose. Recently the authors proposed a method that enables tomographic fluoroscopy at the same dose level as projective fluoroscopy, meaning that if the scanning time of an intervention guided by projective fluoroscopy is the same as that of an intervention guided by tomographic fluoroscopy, almost the same dose is administered to the patient. The purpose of this work is to extend the authors' previous work and allow for patient motion during the intervention. Methods: The authors propose the running prior technique for adaptation of a prior image. This adaptation is realized by a combination of registration and projection replacement. In a first step the prior is deformed to the current position via affine and deformable registration. Then the information from outdated projections is replaced by newly acquired projections using forward and backprojection steps. The volume adapted in this way is the running prior. The proposed method is validated with simulated as well as measured data. To investigate motion during an intervention, a moving head phantom was simulated. Real in vivo data of a pig were acquired with a prototype CT system consisting of a flat detector and a continuously rotating clinical gantry. Results: With the running prior technique it is possible to correct for motion without additional dose. For an application in intervention guidance, both steps of the running prior technique, registration and replacement, are necessary. Reconstructed volumes based on the running prior show high image quality without introducing new artifacts, and the interventional materials are displayed at the correct position. Conclusions: The running prior improves the robustness of low dose 3D+T intervention guidance toward intended or unintended patient motion.
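The running prior loop can be sketched structurally as follows; the registration and projector functions below are toy placeholders (identity / simple numpy operations), not the authors' algorithms, and the fake projection data are purely illustrative:

```python
import numpy as np

def register_prior(prior, estimate):
    # Placeholder for the affine + deformable registration step.
    return prior  # assume already aligned in this toy sketch

def forward_project(volume, angle):
    # Placeholder forward projector (sums along one axis; ignores the angle).
    return volume.sum(axis=0)

def backproject(projection, angle, shape):
    # Placeholder backprojector (smears the projection back over the volume).
    return np.broadcast_to(projection / shape[0], shape).copy()

shape = (32, 32, 32)
running_prior = np.zeros(shape)
rng = np.random.default_rng(0)
angles = np.linspace(0.0, np.pi, 8, endpoint=False)
acquired = [(a, rng.random(shape[1:])) for a in angles]   # fake new projections

for angle, new_proj in acquired:
    prior = register_prior(running_prior, running_prior)  # 1) deform prior to current position
    stale = forward_project(prior, angle)                 # re-project the (old) prior
    # 2) replace outdated projection information with the new acquisition.
    running_prior = prior + backproject(new_proj - stale, angle, shape)
```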
Transient Turbine Engine Modeling with Hardware-in-the-Loop Power Extraction (PREPRINT)
2008-07-01
Furthermore, it must be compatible with a real-time operating system that is capable of running the simulation. For some models, especially those that use... problem of interfacing the engine/control model to a real-time operating system and associated lab hardware becomes a problem of interfacing these... model in real-time. This requires the use of a real-time operating system and a compatible I/O (input/output) board. Figure 1 illustrates the HIL
Simulation and Analysis of EXPRESS Run Frequency
2013-12-01
indicator, Customer Wait Time (CWT), is a measure of total wait time for a customer from the time they submit a need until it is fulfilled... Department of Defense 2000). MICAP hours is a special subset of CWT reserved for requirements that represent a mission capability need (i.e. an aircraft is... performance is tracked by total CWT and MICAP days, which are convertible to hours by multiplying by 24. CWT is tracked by measuring the total time
Incompressible SPH (ISPH) with fast Poisson solver on a GPU
NASA Astrophysics Data System (ADS)
Chow, Alex D.; Rogers, Benedict D.; Lind, Steven J.; Stansby, Peter K.
2018-05-01
This paper presents a fast incompressible SPH (ISPH) solver implemented to run entirely on a graphics processing unit (GPU), capable of simulating several million particles in three dimensions on a single GPU. The ISPH algorithm is implemented by converting the highly optimised open-source weakly-compressible SPH (WCSPH) code DualSPHysics to run ISPH on the GPU, combining it with the open-source linear algebra library ViennaCL for fast solutions of the pressure Poisson equation (PPE). Several challenges are addressed by this research: constructing a PPE matrix every timestep on the GPU for moving particles, optimising the limited GPU memory, and exploiting fast matrix solvers. The ISPH pressure projection algorithm is implemented as 4 separate stages, each with a particle sweep, including an algorithm for the population of the PPE matrix suitable for the GPU, and mixed precision storage methods. An accurate and robust ISPH boundary condition ideal for parallel processing is also established by adapting an existing WCSPH boundary condition for ISPH. A variety of validation cases are presented: an impulsively started plate, incompressible flow around a moving square in a box, and dambreaks (2-D and 3-D), which demonstrate the accuracy, flexibility, and speed of the methodology. Fragmentation of the free surface is shown to influence the performance of matrix preconditioners and therefore the PPE matrix solution time. The Jacobi preconditioner demonstrates robustness and reliability in the presence of fragmented flows. For a dambreak simulation, GPU speed-ups of up to 10-18 times and 1.1-4.5 times are demonstrated relative to single-threaded and 16-threaded CPU run times, respectively.
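The role of the Jacobi preconditioner can be illustrated with a minimal sketch; a 1-D toy Laplacian stands in for the PPE matrix here, and this is scipy, not DualSPHysics/ViennaCL code:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
# Toy 1-D Laplacian standing in for a pressure-Poisson system A p = b.
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner: apply the inverse of A's diagonal.
M = sp.diags(1.0 / A.diagonal())
p, info = spla.cg(A, b, M=M)   # preconditioned conjugate gradient
print("CG converged:", info == 0)
```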
Bridging the scales in atmospheric composition simulations using a nudging technique
NASA Astrophysics Data System (ADS)
D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco
2010-05-01
Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where we are interested in studying the impact of these sources. Describing all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial though of high interest. In fact, uncertainties in large scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse and fine scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain, run A) and fine (0.1°, Central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as boundary condition for the fine one. Then another coarse resolution run (run C) is performed, in which the high resolution fields, remapped onto the coarse grid, are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas are computed for O3 and PM. Although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general, mean values of run C lie between those of run A and run B. A propagation of the signal outside the nudging region is observed and is evaluated in terms of differences between the coarse resolution runs (with and without nudging) and the fine resolution simulation.
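The nudging step amounts to Newtonian relaxation of the coarse-grid field toward the remapped fine-grid field inside the nudging region; a minimal sketch with an illustrative relaxation timescale and toy fields (not BOLCHEM's implementation):

```python
import numpy as np

def nudge(coarse, target, dt, tau, mask):
    """Relax the coarse-grid field toward the remapped fine-grid field
    inside the nudging region (mask == True)."""
    out = coarse.copy()
    out[mask] += (dt / tau) * (target[mask] - coarse[mask])
    return out

ny = nx = 20
coarse = np.full((ny, nx), 40.0)   # e.g. ozone [ppb] from the run A physics
target = np.full((ny, nx), 55.0)   # run B field remapped to the coarse grid
mask = np.zeros((ny, nx), dtype=bool)
mask[8:12, 8:14] = True            # nudging region (e.g. the Po Valley)

coarse = nudge(coarse, target, dt=600.0, tau=3600.0, mask=mask)
print(coarse[10, 10])              # -> 42.5, relaxed toward the target of 55
```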
Graphical User Interface for Simulink Integrated Performance Analysis Model
NASA Technical Reports Server (NTRS)
Durham, R. Caitlyn
2009-01-01
The J-2X engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for the liquid hydrogen and liquid oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that, in all reasonable conditions, the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, to save the maximum amount of time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow input values to be used as parameters in the Simulink model without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink model, and get the output from the Simulink model. The GUI was built using MATLAB and runs the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI constructs a new Microsoft Excel file, as well as a MATLAB matrix file, from the output values of each test of the simulation so that they may be graphed and compared to other values.
SLUDGE BATCH 6/TANK 40 SIMULANT CHEMICAL PROCESS CELL SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koopman, David
2010-04-28
Phase III simulant flowsheet testing was completed using the latest composition estimates for SB6/Tank 40 feed to DWPF. The goals of the testing were to determine reasonable operating conditions and assumptions for the startup of SB6 processing in the DWPF. Testing covered the region from 102-159% of the current DWPF stoichiometric acid equation. Nitrite ion concentration was reduced to 90 mg/kg in the SRAT product of the lowest acid run. The 159% acid run reached 60% of the DWPF Sludge Receipt and Adjustment Tank (SRAT) limit of 0.65 lb H2/hr, and then sporadically exceeded the DWPF Slurry Mix Evaporator (SME) limit of 0.223 lb H2/hr. Hydrogen generation rates peaked at 112% of the SME limit, although higher-than-targeted wt% total solids levels may have been partially responsible for the rates seen. A stoichiometric factor of 120% met both objectives. Based on the simulant results, a processing window for SB6 exists from 102% to something close to 159%. An initial recommendation for SB6 processing is 115-120% of the current DWPF stoichiometric acid equation. The addition of simulated Actinide Removal Process (ARP) and Modular Caustic Side Solvent Extraction Unit (MCU) streams to the SRAT cycle had no apparent impact on the preferred stoichiometric factor. Hydrogen generation occurred continuously after acid addition in three of the four tests. The three runs at 120%, 118.4% with ARP/MCU, and 159% stoichiometry were all still producing around 0.1 lb hydrogen/hr at DWPF scale after 36 hours of boiling in the SRAT. The 120% acid run reached 23% of the SRAT limit and 37% of the SME limit. Conversely, nitrous oxide generation was subdued compared to previous sludge batches, staying below 29 lb/hr in all four tests, about a fourth as much as in comparable SB4 testing. Two processing issues, identified during SB6 Phase II flowsheet testing and qualification simulant testing, were monitored during Phase III. Mercury material balance closure was impacted by acid stoichiometry, and significant mercury was not accounted for in the highest acid run. Coalescence of elemental mercury droplets in the mercury water wash tank (MWWT) appeared to degrade with increasing stoichiometry. Mercury scale formation was observed in the SRAT condenser and MWWT. A tacky mercury amalgam with Rh, Pd, and Cu, plus some Ru and Ca, formed on the impeller at 159% acid. It contained a significant fraction of the available Pd, Cu, and Rh, as well as about 25% of the total mercury charged. Free (elemental) mercury was found in all of the SME products. Ammonia scrubbers were used during the tests to capture off-gas ammonia for material balance purposes. Significant ammonium ion formation was again observed during the SRAT cycle, and ammonia gas entered the off-gas as the pH rose during boiling. Ammonium ion production was lower than in the SB6 Phase II and the qualification simulant testing. Similar ammonium ion formation was seen in the ARP/MCU simulation as in the 120% flowsheet run; a slightly higher pH caused most of the ammonium to vaporize and collect in the ammonia scrubber reflux solution. Two periods of foaminess were noted, neither of which required additional antifoam to control the foam growth. A steady foam layer formed during reflux in the 120% acid run. It was about an inch thick, with 2-3 times the volume of bubbles typically seen during reflux. A similar foam layer was also seen during caustic boiling of the simulant during the ARP addition.
While frequently seen with the radioactive sludge, foaminess during caustic boiling with simulants has been relatively rare. Two further flowsheet tests were performed and will be documented separately. One test was to evaluate the impact of process conditions that match current DWPF operation (lower rates). The second test was to evaluate the impact of SRAT/SME processing on the rheology of a modified Phase III simulant that had been made five times more viscous using ultrasonication.
NASA Astrophysics Data System (ADS)
Durand-Gasselin, Benoit; Dailliez, Thibault; Mössner-Beigel, Monika; Knorr, Stephanie; Rauh, Jochen
2010-12-01
This paper presents experiences using Michelin's thermo-mechanical TaMeTirE tyre model for real-time handling applications in the field of advanced passenger car simulation. Passenger car handling simulations were performed using the tyre model in a full-vehicle real-time environment in order to assess TaMeTirE's level of consistency with real on-track handling behaviour. To achieve this goal, a first offline comparison with a state-of-the-art handling tyre model was carried out on three handling manoeuvres. Then, online real-time simulations of steering wheel steps and slaloms in a straight line were run on Daimler's driving simulator by skilled and unskilled drivers. Two analytical tyre temperature effects and two inflation pressure effects were examined in order to assess their impact on the handling behaviour of the vehicle. This paper underlines the realism of the handling simulation results obtained with TaMeTirE, and shows the significant impact of a pressure or temperature effect on the handling behaviour of a car.
Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.
2017-06-19
The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(N⁻¹). In this work, we consider how the true variance can be minimized, given a total amount of work available, as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced, either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of discarded generations through a source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with a very large generation size.
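The effect of intergenerational correlation on tally variance can be illustrated with a toy model; here an AR(1) sequence stands in for correlated generation means (an assumption for illustration only, not the paper's reactor model), showing how the naive independent-generation variance estimate underestimates the true variance by roughly (1 + ρ)/(1 - ρ):

```python
import numpy as np

rng = np.random.default_rng(2)
rho, n_gens, n_reps = 0.8, 1000, 2000

means = []
for _ in range(n_reps):
    x = rng.normal(size=n_gens)
    for i in range(1, n_gens):            # impose AR(1) generation-to-generation correlation
        x[i] = rho * x[i - 1] + np.sqrt(1.0 - rho**2) * x[i]
    means.append(x.mean())

true_var = np.var(means)                  # actual variance of the generation-averaged tally
naive_var = 1.0 / n_gens                  # sigma^2 / N, assuming independent generations
print(true_var / naive_var, (1 + rho) / (1 - rho))  # both close to 9 for rho = 0.8
```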
The Prodiguer Messaging Platform
NASA Astrophysics Data System (ADS)
Denvil, S.; Greenslade, M. A.; Carenton, N.; Levavasseur, G.; Raciazek, J.
2015-12-01
CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French global climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output are some of the complexities that CONVERGENCE aims to resolve. At any one moment in time, researchers affiliated with the Institut Pierre Simon Laplace (IPSL) climate modeling group are running hundreds of global climate simulations. These simulations execute on a heterogeneous set of French High Performance Computing (HPC) environments. The IPSL's simulation execution runtime libIGCM (library for IPSL Global Climate Modeling group) has recently been enhanced so as to support hitherto impossible real-time use cases such as simulation monitoring, data publication, metrics collection, simulation control, and visualizations. At the core of this enhancement is Prodiguer: an AMQP (Advanced Message Queue Protocol) based, event-driven, asynchronous distributed messaging platform. libIGCM now dispatches copious amounts of information, in the form of messages, to the platform for remote processing by Prodiguer software agents at IPSL servers in Paris. Such processing takes several forms: persisting message content to databases; launching rollback jobs upon simulation failure; notifying downstream applications; and automating visualization pipelines. We will describe and/or demonstrate the platform's technical implementation, its inherent ease of scalability, its inherent adaptiveness in respect of supervising simulations, and a web portal receiving simulation notifications in real time.
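Event publication over AMQP can be sketched with the pika client library; the broker address, exchange name, and message fields below are hypothetical stand-ins, not Prodiguer's actual schema, and a RabbitMQ broker on localhost is assumed:

```python
import json
import pika

# Connect to a (hypothetical) local AMQP broker and declare a topic exchange.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="simulation-events", exchange_type="topic")

# An illustrative monitoring message a simulation runtime might emit.
message = {"simulation_id": "ipsl-demo-0001",
           "event": "timestep_complete",
           "model_step": 42}
channel.basic_publish(exchange="simulation-events",
                      routing_key="simulation.monitoring",
                      body=json.dumps(message))
connection.close()
```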
Blum, Yvonne; Vejdani, Hamid R; Birn-Jeffery, Aleksandra V; Hubicki, Christian M; Hurst, Jonathan W; Daley, Monica A
2014-01-01
To achieve robust and stable legged locomotion in uneven terrain, animals must effectively coordinate limb swing and stance phases, which involve distinct yet coupled dynamics. Recent theoretical studies have highlighted the critical influence of swing-leg trajectory on stability, disturbance rejection, leg loading and economy of walking and running. Yet, simulations suggest that not all these factors can be simultaneously optimized. A potential trade-off arises between the optimal swing-leg trajectory for disturbance rejection (to maintain steady gait) versus regulation of leg loading (for injury avoidance and economy). Here we investigate how running guinea fowl manage this potential trade-off by comparing experimental data to predictions of hypothesis-based simulations of running over a terrain drop perturbation. We use a simple model to predict swing-leg trajectory and running dynamics. In simulations, we generate optimized swing-leg trajectories based upon specific hypotheses for task-level control priorities. We optimized swing trajectories to achieve i) constant peak force, ii) constant axial impulse, or iii) perfect disturbance rejection (steady gait) in the stance following a terrain drop. We compare simulation predictions to experimental data on guinea fowl running over a visible step down. Swing and stance dynamics of running guinea fowl closely match simulations optimized to regulate leg loading (priorities i and ii), and do not match the simulations optimized for disturbance rejection (priority iii). The simulations reinforce previous findings that swing-leg trajectory targeting disturbance rejection demands large increases in stance leg force following a terrain drop. Guinea fowl negotiate a downward step using unsteady dynamics with forward acceleration, and recover to steady gait in subsequent steps. Our results suggest that guinea fowl use swing-leg trajectory consistent with priority for load regulation, and not for steadiness of gait. Swing-leg trajectory optimized for load regulation may facilitate economy and injury avoidance in uneven terrain.
NASA Astrophysics Data System (ADS)
Nabil, Mahdi; Rattner, Alexander S.
The volume-of-fluid (VOF) approach is a mature technique for simulating two-phase flows. However, VOF simulation of phase-change heat transfer is still in its infancy. Multiple closure formulations have been proposed in the literature, each suited to different applications. While these have enabled significant research advances, few implementations are publicly available, actively maintained, or inter-operable. Here, a VOF solver is presented (interThermalPhaseChangeFoam), which incorporates an extensible framework for phase-change heat transfer modeling, enabling simulation of diverse phenomena in a single environment. The solver employs object oriented OpenFOAM library features, including Run-Time-Type-Identification to enable rapid implementation and run-time selection of phase change and surface tension force models. The solver is packaged with multiple phase change and surface tension closure models, adapted and refined from earlier studies. This code has previously been applied to study wavy film condensation, Taylor flow evaporation, nucleate boiling, and dropwise condensation. Tutorial cases are provided for simulation of horizontal film condensation, smooth and wavy falling film condensation, nucleate boiling, and bubble condensation. Validation and grid sensitivity studies, interfacial transport models, effects of spurious currents from surface tension models, effects of artificial heat transfer due to numerical factors, and parallel scaling performance are described in detail in the Supplemental Material (see Appendix A). By incorporating the framework and demonstration cases into a single environment, users can rapidly apply the solver to study phase-change processes of interest.
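The run-time model selection the abstract attributes to OpenFOAM's Run-Time-Type-Identification machinery can be illustrated, in spirit only, with a registry of closure models; the model names and toy closures below are hypothetical, not the solver's actual C++ classes:

```python
from typing import Callable, Dict

PHASE_CHANGE_MODELS: Dict[str, Callable[..., float]] = {}

def register(name: str):
    """Decorator adding a closure model to the run-time selection table."""
    def wrap(fn):
        PHASE_CHANGE_MODELS[name] = fn
        return fn
    return wrap

@register("interfaceResistance")
def interface_resistance(T_interface: float, T_sat: float, R_int: float) -> float:
    # Toy closure: mass flux proportional to interfacial superheat.
    return (T_interface - T_sat) / R_int

@register("energyJump")
def energy_jump(q_liquid: float, q_vapor: float, h_lv: float) -> float:
    # Toy closure: mass flux from the jump in normal heat flux at the interface.
    return (q_liquid - q_vapor) / h_lv

model_name = "energyJump"   # in practice this would come from a case dictionary
m_dot = PHASE_CHANGE_MODELS[model_name](q_liquid=5e4, q_vapor=1e4, h_lv=2.26e6)
print(f"{model_name}: m_dot = {m_dot:.5f} kg/m^2/s")
```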
NASA Astrophysics Data System (ADS)
Görbil, Gökçe; Gelenbe, Erol
The simulation of critical infrastructures (CI) can involve the use of diverse domain specific simulators that run on geographically distant sites. These diverse simulators must then be coordinated to run concurrently in order to evaluate the performance of critical infrastructures which influence each other, especially in emergency or resource-critical situations. We therefore describe the design of an adaptive communication middleware that provides reliable and real-time one-to-one and group communications for federations of CI simulators over a wide-area network (WAN). The proposed middleware is composed of mobile agent-based peer-to-peer (P2P) overlays, called virtual networks (VNets), to enable resilient, adaptive and real-time communications over unreliable and dynamic physical networks (PNets). The autonomous software agents comprising the communication middleware monitor their performance and the underlying PNet, and dynamically adapt the P2P overlay and migrate over the PNet in order to optimize communications according to the requirements of the federation and the current conditions of the PNet. Reliable communications is provided via redundancy within the communication middleware and intelligent migration of agents over the PNet. The proposed middleware integrates security methods in order to protect the communication infrastructure against attacks and provide privacy and anonymity to the participants of the federation. Experiments with an initial version of the communication middleware over a real-life networking testbed show that promising improvements can be obtained for unicast and group communications via the agent migration capability of our middleware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrnstein, Aaron R.
An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model, with some components incorporated from other well known ocean models where appropriate. Spatial components are discretized using finite differences on a staggered grid, where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution, due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested, despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No dramatic or persistent signs of error growth in the passive tracer outgassing or the ocean circulation are observed to result from AMR.
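Leapfrog time integration, used here for horizontal motion, advances the solution as u^{n+1} = u^{n-1} + 2Δt f(u^n); a minimal sketch on a toy oscillator (not the ocean model's equations):

```python
import numpy as np

def f(u):
    # Toy right-hand side: harmonic oscillator du/dt = (v, -u).
    return np.array([u[1], -u[0]])

dt, n_steps = 0.01, 1000
u_prev = np.array([1.0, 0.0])
u_curr = u_prev + dt * f(u_prev)            # Euler start-up step

for _ in range(n_steps):
    u_next = u_prev + 2.0 * dt * f(u_curr)  # leapfrog step
    u_prev, u_curr = u_curr, u_next

print("amplitude after integration:", np.hypot(*u_curr))  # stays near 1
```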
Quantitative measurement of pass-by noise radiated by vehicles running at high speeds
NASA Astrophysics Data System (ADS)
Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin
2011-03-01
Accurately locating and quantifying the pass-by noise radiated by running vehicles has been a challenge in the past. A system based on a microphone array is developed in this work to address it. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency and is able to quantitatively measure the sound pressure at the sound source and identify the location of the main sound source. The method is validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise and wind noise of a vehicle running at different speeds are successfully identified by this method.
E Pluribus Analysis: Applying a Superforecasting Methodology to the Detection of Homegrown Violence
2018-03-01
actor violence and a set of predefined decision-making protocols. This research included running four simulations using the Monte Carlo technique, which... PREDICTING RANDOMNESS: 1. Using a "Runs Test" to Determine a Temporal Pattern in Lone
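The "Runs Test" referred to is presumably the Wald-Wolfowitz runs test for temporal patterning in a binary sequence; a minimal sketch with a hypothetical incident/no-incident sequence:

```python
import math

def runs_test(seq):
    """Wald-Wolfowitz runs test for a two-valued sequence.
    Returns (z, n_runs); |z| > 1.96 suggests non-randomness at the 5% level."""
    cats = sorted(set(seq))
    assert len(cats) == 2, "runs test needs a two-valued sequence"
    n1 = sum(1 for s in seq if s == cats[0])
    n2 = len(seq) - n1
    n_runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mean = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    var = (2.0 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    return (n_runs - mean) / math.sqrt(var), n_runs

weeks = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1]  # toy event weeks
z, r = runs_test(weeks)
print(f"runs = {r}, z = {z:.2f}")
```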
Computational simulation of the creep-rupture process in filamentary composite materials
NASA Technical Reports Server (NTRS)
Slattery, Kerry T.; Hackett, Robert M.
1991-01-01
A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
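A toy analogue of the simulation described above is the classic fiber-bundle model with randomly distributed strengths and a power-law creep damage rule; the load-sharing rule and parameters below are illustrative stand-ins, not the paper's finite-element formulation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_fibers, rho, dt = 200, 10.0, 0.01
nominal = 0.5                             # applied stress per fiber if all survive
strength = rng.weibull(5.0, n_fibers)     # randomly distributed flaw strengths
alive = np.ones(n_fibers, dtype=bool)

t = 0.0
for _ in range(500_000):                  # step cap in case the bundle is very strong
    if not alive.any():
        break                             # total rupture of the bundle
    stress = nominal * n_fibers / alive.sum()    # equal load sharing among survivors
    hazard = dt * (stress / strength) ** rho     # power-law creep damage rate
    alive &= rng.random(n_fibers) > hazard       # stochastic fiber failures
    t += dt

print(f"simulated time to creep rupture: t = {t:.2f}")
```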
A Computer for Low Context-Switch Time
1990-03-01
Results: To find out how an implementation performs, we use a set of programs that make up a simulation system. These programs compile C language programs... have worse relative context-switch performance: the time needed to switch contexts has not decreased as much as the time to run programs. Much of... this study is: How seriously is throughput performance impaired by this approach to computer architecture? Reasonable estimates are possible only
Simulation of ozone production in a complex circulation region using nested grids
NASA Astrophysics Data System (ADS)
Taghavi, M.; Cautenet, S.; Foret, G.
2003-07-01
During the ESCOMPTE pre-campaign (15 June to 10 July 2000), three days of intensive pollution (IOP0) were observed and simulated. The comprehensive RAMS model, version 4.3, coupled online with a chemical module including 29 species, was used to follow the chemistry of the polluted zone over southern France. This online method can be used because the code is parallelized and the SGI 3800 computer is very powerful. Two runs were performed: run 1 with one grid and run 2 with two nested grids. The redistribution of simulated chemical species (ozone, carbon monoxide, sulphur dioxide and nitrogen oxides) was compared with aircraft measurements and surface stations. The two-grid run gave substantially better results than the one-grid run simply because the former takes the outer pollutants into account. This online method helps to explain the dynamics and to retrieve the redistribution of chemical species with good agreement.
A hybrid gyrokinetic ion and isothermal electron fluid code for astrophysical plasma
NASA Astrophysics Data System (ADS)
Kawazura, Y.; Barnes, M.
2018-05-01
This paper describes a new code for simulating astrophysical plasmas that solves a hybrid model composed of gyrokinetic ions (GKI) and an isothermal electron fluid (ITEF) (Schekochihin et al. (2009) [9]). This model captures ion kinetic effects that are important near the ion gyro-radius scale, while electron kinetic effects are ordered out by an electron-ion mass-ratio expansion. The code is developed by incorporating the ITEF approximation into AstroGK, an Eulerian δf gyrokinetics code specialized to a slab geometry (Numata et al. (2010) [41]). The new code treats the linear terms in the ITEF equations implicitly, while the nonlinear terms are treated explicitly. We show linear and nonlinear benchmark tests to prove the validity and applicability of the simulation code. Since the fast electron timescale is eliminated by the mass-ratio expansion, the Courant-Friedrichs-Lewy condition is much less restrictive than in full gyrokinetic codes; the present hybrid code runs approximately 2√(mi/me) ≈ 100 times faster than AstroGK with a single ion species and kinetic electrons, where mi/me is the ion-electron mass ratio. The improvement in computational time makes it feasible to execute ion-scale gyrokinetic simulations with high velocity-space resolution and to run multiple simulations to determine the dependence of turbulent dynamics on parameters such as the electron-ion temperature ratio and the plasma beta.
Effects of simulated weightlessness and sympathectomy on maximum VO2 of male rats
NASA Technical Reports Server (NTRS)
Woodman, C. R.; Stump, C. S.; Beaulieu, S. M.; Rahman, Z.; Sebastian, L. A.
1989-01-01
The effects of simulated weightlessness (hind-limb suspension) and chemical sympathectomy (by repeated injections with guanethidine sulfate) on the maximum oxygen consumption (VO2 max) of female rats were investigated in rats assigned for 14 days to one of three groups: a head-down hind-limb suspension, a horizontal suspension with hind limbs weight bearing, or the caged control. The VO2 max values were assessed by having rats run on a treadmill enclosed in an airtight chamber. The hind-limb-suspended sympathectomized rats were found to exhibit shorter run times and lower mechanical efficiencies, compared to their presuspension values or the values from saline-injected suspended controls. On the other hand, the suspended sympathectomized rats did not demonstrate a decrease in the VO2 max values that was observed in saline-injected controls.
Hybrid stochastic simulation of reaction-diffusion systems with slow and fast dynamics.
Strehl, Robert; Ilie, Silvana
2015-12-21
In this paper, we present a novel hybrid method to simulate discrete stochastic reaction-diffusion models arising in biochemical signaling pathways. We study moderately stiff systems, for which we can partition each reaction or diffusion channel into either a slow or fast subset, based on its propensity. Numerical approaches missing this distinction are often limited with respect to computational run time or approximation quality. We design an approximate scheme that remedies these pitfalls by using a new blending strategy of the well-established inhomogeneous stochastic simulation algorithm and the tau-leaping simulation method. The advantages of our hybrid simulation algorithm are demonstrated on three benchmarking systems, with special focus on approximation accuracy and efficiency.
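The slow/fast partitioning step described above can be sketched simply: classify each reaction or diffusion channel by comparing its propensity to a threshold, then route slow channels to the exact SSA and fast channels to tau-leaping (the threshold and toy propensities here are illustrative, not the paper's criterion):

```python
import numpy as np

def partition_channels(propensities, threshold):
    """Split channel indices into (slow, fast) subsets by propensity."""
    prop = np.asarray(propensities)
    fast = np.flatnonzero(prop >= threshold)
    slow = np.flatnonzero(prop < threshold)
    return slow, fast

a = np.array([0.02, 8.5, 0.4, 120.0, 3.1])   # toy channel propensities [1/s]
slow, fast = partition_channels(a, threshold=1.0)
print("slow channels:", slow, "-> exact SSA")
print("fast channels:", fast, "-> tau-leaping")
```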
Prototype software model for designing intruder detection systems with simulation
NASA Astrophysics Data System (ADS)
Smith, Jeffrey S.; Peters, Brett A.; Curry, James C.; Gupta, Dinesh
1998-08-01
This article explores using discrete-event simulation for the design and control of defence-oriented, fixed-sensor-based detection systems in a facility housing items of significant interest to enemy forces. The key issues discussed include software development, simulation-based optimization within a modeling framework, and the expansion of the framework to create real-time control tools and training simulations. The software discussed in this article is a flexible simulation environment in which the data for the simulation are stored in an external database and the simulation logic is implemented using a commercial simulation package. The simulation assesses the overall security level of a building against various intruder scenarios. A series of simulation runs with different inputs can determine the change in security level with changes in the sensor configuration, building layout, and intruder/guard strategies. In addition, the simulation model developed for the design stage of the project can be modified to produce a control tool for the testing, training, and real-time control of systems with humans and sensor hardware in the loop.
Key technology research of HILS based on real-time operating system
NASA Astrophysics Data System (ADS)
Wang, Fankai; Lu, Huiming; Liu, Che
2018-03-01
To address the long development cycle of traditional simulation and the lack of real-time behavior in purely digital simulation, this paper presents a HILS (Hardware In the Loop Simulation) system based on the real-time operating platform xPC. The system solves the communication between the HMI and Simulink models through the MATLAB engine interface, and provides functions for system setup, offline simulation, model compiling and downloading, etc. The xPC application interface, together with the integrated TeeChart ActiveX chart component, implements monitoring of the real-time target application. Each functional block in the system is encapsulated as a DLL, and data interaction between modules is handled by MySQL database technology. When the HILS system runs, it finds the address of the online xPC target by means of the Ping command, in order to establish TCP/IP communication between the two machines. The technical effectiveness of the developed system is verified on a typical power station control system.
An Open Simulation System Model for Scientific Applications
NASA Technical Reports Server (NTRS)
Williams, Anthony D.
1995-01-01
A model for a generic and open environment for running multi-code or multi-application simulations, called the Open Simulation System Model (OSSM), is proposed and defined. This model attempts to meet the requirements of complex systems like the Numerical Propulsion Simulator System (NPSS). OSSM places no restrictions on the types of applications that can be integrated at any stage of its evolution. This includes applications of different disciplines, fidelities, etc. An implementation strategy is proposed that starts with a basic prototype and evolves over time to accommodate an increasing number of applications. Potential (standard) software is also identified that may aid in the design and implementation of the system.
Rapid scatter estimation for CBCT using the Boltzmann transport equation
NASA Astrophysics Data System (ADS)
Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh
2014-03-01
Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in a dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved visualization of low contrast objects. In total, 24 projections were simulated with an AcurosCTS execution time of 22 sec/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.
A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints.
Sundharam, Sakthivel Manikandan; Navet, Nicolas; Altmeyer, Sebastian; Havet, Lionel
2018-02-20
Model-Driven Engineering (MDE) is widely applied in the industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he neglects, or gives less weight to, the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on timing tolerance contracts to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification by monitoring the execution of the models on the target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model-interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system.
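The paper's schedulability analysis is its own contribution; as a baseline illustration of what such an analysis checks, here is the classical fixed-priority response-time iteration R_i = C_i + sum_j ceil(R_i/T_j) * C_j over higher-priority tasks j, with an invented task set:

```python
import math

# Hypothetical task set (WCET C, period T), listed in descending priority.
TASKS = [(1.0, 5.0), (2.0, 12.0), (4.0, 30.0)]

def response_time(i, tasks):
    """Fixed-point iteration R = C_i + sum_j ceil(R/T_j)*C_j over higher-priority j."""
    c_i, t_i = tasks[i]
    r = c_i
    while True:
        r_next = c_i + sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
        if r_next > t_i:
            return None          # deadline (taken equal to the period) missed
        if r_next == r:
            return r             # converged: worst-case response time
        r = r_next

for i in range(len(TASKS)):
    print(f"task {i}: worst-case response time = {response_time(i, TASKS)}")
```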
Tutorial: Parallel Computing of Simulation Models for Risk Analysis.
Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D
2016-10-01
Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
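The article's worked examples are in MATLAB and R; the following is a comparable embarrassingly parallel pattern sketched in Python with the standard library, where each replication is independent and mapped onto worker processes:

```python
import multiprocessing as mp
import random

def one_replication(seed):
    """One independent simulation run; here a toy Monte Carlo estimate of pi."""
    rng = random.Random(seed)            # per-task RNG avoids shared state
    hits = sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(100_000))
    return 4.0 * hits / 100_000

if __name__ == "__main__":
    seeds = range(32)                    # 32 independent replications
    with mp.Pool() as pool:              # defaults to one worker per core
        estimates = pool.map(one_replication, seeds)
    mean = sum(estimates) / len(estimates)
    print(f"pi estimate over {len(estimates)} replications: {mean:.4f}")
```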
Particle-In-Cell simulations of high pressure plasmas using graphics processing units
NASA Astrophysics Data System (ADS)
Gebhardt, Markus; Atteln, Frank; Brinkmann, Ralf Peter; Mussenbrock, Thomas; Mertmann, Philipp; Awakowicz, Peter
2009-10-01
Particle-In-Cell (PIC) simulations are widely used to understand the fundamental phenomena in low-temperature plasmas. Particularly plasmas at very low gas pressures are studied using PIC methods. The inherent drawback of these methods is that they are very time consuming, because certain stability conditions have to be satisfied. This holds even more for the PIC simulation of high pressure plasmas due to the very high collision rates. The simulations take a very long time to run on standard computers and require the help of computer clusters or supercomputers. Recent advances in the field of graphics processing units (GPUs) provide every personal computer with a highly parallel multiprocessor architecture for very little money. This architecture is freely programmable and can be used to implement a wide class of problems. In this paper we present the concepts of a fully parallel PIC simulation of high pressure plasmas using the benefits of GPU programming.
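Not the authors' code, but a small numpy sketch of why the PIC loop is a good fit for GPUs: the gather and push steps apply identical, independent arithmetic to every particle. The frozen field and all parameters are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
L, NG, NP, DT = 1.0, 64, 100_000, 1e-3      # domain, grid cells, particles, time step
x = rng.uniform(0, L, NP)                   # particle positions
v = rng.normal(0.0, 1.0, NP)                # particle velocities
E = np.sin(2 * np.pi * np.arange(NG) / NG)  # frozen toy field on the grid

def push(x, v):
    # Gather: linear (cloud-in-cell) interpolation of E to each particle.
    s = x / L * NG
    i = s.astype(int) % NG
    w = s - np.floor(s)
    e_p = (1 - w) * E[i] + w * E[(i + 1) % NG]
    # Push: the same arithmetic on every particle -> one GPU thread each.
    v = v + e_p * DT
    x = (x + v * DT) % L
    return x, v

for _ in range(100):
    x, v = push(x, v)
print("mean kinetic energy:", 0.5 * np.mean(v**2))
```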
NASA Astrophysics Data System (ADS)
Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.
2009-10-01
We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary "Press" tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose "GRAPE" hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the Gnu General Public License.
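As a minimal sketch of the simplest mode the abstract describes (a global time step, softened gravity, no tree or GRAPE acceleration), here is a direct-sum kick-drift-kick leapfrog; VINE itself is Fortran 95 and far more capable:

```python
import numpy as np

def accelerations(pos, mass, eps=1e-2):
    """Direct-sum softened gravity, O(N^2); tree codes accelerate exactly this step."""
    d = pos[None, :, :] - pos[:, None, :]          # pairwise displacements
    r2 = (d**2).sum(-1) + eps**2
    np.fill_diagonal(r2, np.inf)                   # no self-force
    return (d * (mass[None, :, None] * r2[..., None]**-1.5)).sum(axis=1)

def leapfrog(pos, vel, mass, dt, steps):
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc                      # kick
        pos += dt * vel                            # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc                      # kick
    return pos, vel

rng = np.random.default_rng(1)
n = 256
pos, vel = rng.normal(size=(n, 3)), np.zeros((n, 3))
mass = np.full(n, 1.0 / n)
pos, vel = leapfrog(pos, vel, mass, dt=1e-3, steps=100)
print("centre-of-mass drift:", np.abs((mass[:, None] * pos).sum(0)).max())
```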
Toward transient finite element simulation of thermal deformation of machine tools in real-time
NASA Astrophysics Data System (ADS)
Naumann, Andreas; Ruprecht, Daniel; Wensch, Joerg
2018-01-01
Finite element models without simplifying assumptions can accurately describe the spatial and temporal distribution of heat in machine tools as well as the resulting deformation. In principle, this makes it possible to correct for displacements of the Tool Centre Point and enables high precision manufacturing. However, the computational cost of FE models and the restriction to generic algorithms in commercial tools like ANSYS prevent their operational use, since the simulations would have to run faster than real time. For the case where heat diffusion is slow compared to machine movement, we introduce a tailored implicit-explicit multi-rate time stepping method of higher order based on spectral deferred corrections. Using the open-source FEM library DUNE, we show that fully coupled simulations of the temperature field are possible in real-time for a machine consisting of a stock sliding up and down on rails attached to a stand.
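A first-order illustration of the implicit-explicit splitting idea (the paper builds higher-order multi-rate methods from spectral deferred corrections): treat the stiff diffusion operator implicitly and a slow moving heat source explicitly. The 1D heat equation and all parameters below are stand-ins:

```python
import numpy as np

N, dt, nu = 100, 1e-2, 1.0
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

# Stiff operator: 1D diffusion with (approximate) homogeneous Dirichlet boundaries.
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) * nu / dx**2
M = np.eye(N) - dt * A              # implicit Euler system matrix

def slow_source(t):
    """Non-stiff forcing: a heat source moving with the (slow) machine axis."""
    return 5.0 * np.exp(-((x - 0.5 - 0.3 * np.sin(t)) / 0.05) ** 2)

u, t = np.zeros(N), 0.0
for _ in range(500):
    # Implicit in A (unconditionally stable), explicit in the slow source.
    u = np.linalg.solve(M, u + dt * slow_source(t))
    u[0] = u[-1] = 0.0
    t += dt
print("peak temperature:", u.max())
```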
NASA Astrophysics Data System (ADS)
Dal Bianco, N.; Lot, R.; Matthys, K.
2018-01-01
This work concerns the design of an electric motorcycle for the annual Isle of Man TT Zero Challenge. Optimal control theory was used to perform lap time simulation and design optimisation. A bespoke model was developed, featuring 3D road topology, vehicle dynamics, and an electric power train composed of a lithium battery pack, brushed DC motors and a motor controller. The model runs simulations over the entire Snaefell Mountain Course. The work is validated using experimental data from the BX chassis of the Brunel Racing team, which ran during the 2009 to 2015 TT Zero races. Optimal control is used to improve drive train and power train configurations. Findings demonstrate computational efficiency, good lap time prediction and design optimisation potential, achieving a two-minute reduction of the reference lap time through changes in final drive gear ratio, battery pack size and motor configuration.
Suppressing correlations in massively parallel simulations of lattice models
NASA Astrophysics Data System (ADS)
Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle
2017-11-01
For lattice Monte Carlo simulations parallelization is crucial to make studies of large systems and long simulation time feasible, while sequential simulations remain the gold-standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30× over a parallel CPU implementation on a single socket and at least 180× with respect to the sequential reference.
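A checkerboard update for the 2D Ising model illustrates the domain-decomposition idea in miniature: all sites of one sublattice can be updated simultaneously because their neighbours lie entirely in the other sublattice. This is a generic sketch, not the paper's octahedron-model scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta = 64, 0.4
s = rng.choice([-1, 1], size=(L, L))
# Checkerboard mask: True on one sublattice, False on the other.
mask = (np.add.outer(np.arange(L), np.arange(L)) % 2).astype(bool)

def sweep(s):
    for sub in (mask, ~mask):
        nb = (np.roll(s, 1, 0) + np.roll(s, -1, 0)
              + np.roll(s, 1, 1) + np.roll(s, -1, 1))
        dE = 2.0 * s * nb                          # energy cost of flipping each site
        # Metropolis: accept with prob min(1, exp(-beta*dE)), one colour at a time.
        flip = (rng.random((L, L)) < np.exp(-beta * dE)) & sub
        s = np.where(flip, -s, s)
    return s

for _ in range(200):
    s = sweep(s)
print("magnetization per site:", s.mean())
```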
Insights into the paleoclimate of the PETM from an ensemble of EMIC simulations
NASA Astrophysics Data System (ADS)
Keery, John; Holden, Philip; Edwards, Neil; Monteiro, Fanny; Ridgwell, Andy
2016-04-01
The Eocene epoch, and in particular, the Paleocene-Eocene Thermal Maximum (PETM) of 55.8 Ma, exhibit several features of particular interest for probing our understanding of the Earth system and carbon cycle. CO2 levels have not yet been definitively established, but are known to have varied considerably, peaking at up to several times modern values. Temperatures were several degrees higher than in the modern era, and there were periods of relatively rapid warming, with substantial variability in carbon cycle processes. The Eocene is therefore highly relevant for our understanding of the climate of the 21st century. Earth system models of intermediate complexity (EMICs), with less detailed simulation of the dynamics of the atmosphere and oceans than general circulation models (GCMs), are sufficiently fast to allow climate modelling over long periods of geological time in comparatively short periods of computer run-time. This speed advantage of EMICs over GCMs permits an "ensemble" of model simulations to be run, allowing statistical analysis of results to be carried out, and allowing the uncertainties in model predictions to be estimated. Here we apply the EMICs PLASIM-GENIE and GENIE-1, with an Eocene paleogeography that incorporates the major continental configurations and ocean connections, including a shallow strait linking the Arctic to the Tethys, but with neither the Tasman Gateway nor the Drake Passage yet open. Our two-model strategy benefits from the detailed simulation of ocean biogeochemistry in GENIE-1, and the 3D spectral atmospheric dynamics in PLASIM-GENIE, which also provides boundary conditions for the GENIE-1 simulations. Using a 50-member ensemble of 1000-year quasi-equilibrium simulations with PLASIM-GENIE, we investigate the relative contributions of orbital and CO2 variability on climate and equator-pole temperature gradients. Results from PLASIM-GENIE are used to configure a harmonised ensemble of GENIE-1 simulations, which will be compared with newly obtained geochemical data on ocean oxygenation through the Eocene from the UK NERC RESPIRE project.
ORNL Cray X1 evaluation status report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, P.K.; Alexander, R.A.; Apra, E.
2004-05-01
On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. "This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership," said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, and software environment, and to predict the expected sustained performance on key DOE applications codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture:
- Large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors.
- Best performance of the parallel ocean program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster.
- A fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3. The increased performance allowed simulations to fully resolve questions raised by a prior study.
- The transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors.
- Molecular dynamics simulations related to the phenomenon of photon echo run 8 times faster than previously achieved.
Even at 256 processors, the Cray X1 system is already outperforming other supercomputers with thousands of processors for a certain class of applications such as climate modeling and some fusion applications. This evaluation is the outcome of a number of meetings with both high-performance computing (HPC) system vendors and application experts over the past 9 months and has received broad-based support from the scientific community and other agencies.
NASA Technical Reports Server (NTRS)
Caldwell, E. C.; Cowley, M. S.; Scott-Pandorf, M. M.
2010-01-01
Develop a model that simulates a human running in 0 G using the European Space Agency's (ESA) Subject Loading System (SLS). The model provides ground reaction forces (GRF) based on speed and pull-down forces (PDF). DESIGN: The theoretical basis for the Running Model was a simple spring-mass model. The dynamic properties of the spring-mass model express theoretical vertical GRF (GRFv) and shear GRF in the posterior-anterior direction (GRFsh) during running gait. ADAMS/View software was used to build the model, which has a pelvis, thigh segment, shank segment, and a spring foot (see Figure 1). The model's movement simulates the joint kinematics of a human running at Earth gravity with the aim of generating GRF data. DEVELOPMENT & VERIFICATION: ESA provided parabolic flight data of subjects running while using the SLS, for further characterization of the model's GRF. Peak GRF data were fit to a linear regression line dependent on PDF and speed. Interpolation and extrapolation of the regression equation provided a theoretical data matrix, which is used to drive the model's motion equations. Verification of the model was conducted by running the model at 4 different speeds, with each speed accounting for 3 different PDF. The model's GRF data fell within a 1-standard-deviation boundary derived from the empirical ESA data. CONCLUSION: The Running Model aids in conducting various simulations (potential scenarios include a fatigued runner or a powerful runner generating high loads at a fast cadence) to determine limitations for the T2 vibration isolation system (VIS) aboard the International Space Station. This model can predict how running with the ESA SLS affects the T2 VIS and may be used for other exercise analyses in the future.
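For intuition, here is a minimal vertical spring-mass bounce of the kind underlying the Running Model, with the pull-down force standing in for gravity during 0 g running; the mass, stiffness, and PDF values are invented placeholders, not ESA SLS data:

```python
M, K, L0 = 70.0, 20_000.0, 1.0   # runner mass (kg), leg stiffness (N/m), leg length (m)
PDF = 600.0                      # pull-down force replacing body weight in 0 g (N)
DT = 1e-4

y, vy, t = 1.05, 0.0, 0.0        # start just above touchdown
grf = []
while t < 2.0:
    compression = max(L0 - y, 0.0)   # the leg spring only pushes while compressed
    f_leg = K * compression          # vertical GRF during stance, zero in flight
    ay = (f_leg - PDF) / M           # PDF pulls the runner back to the treadmill
    vy += ay * DT
    y += vy * DT
    t += DT
    grf.append(f_leg)

print("peak GRFv (N):", max(grf), "| peak GRFv / PDF:", max(grf) / PDF)
```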
COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics
NASA Astrophysics Data System (ADS)
Barletta, Paolo
2012-02-01
Cool is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually along its trajectory; consequently, properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions, or the possible presence of more than two species in the trap. New version program summary: Program title: COOL Catalogue identifier: AEHJ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1 097 733 No. of bytes in distributed program, including test data, etc.: 18 425 722 Distribution format: tar.gz Programming language: C++ Computer: Desktop Operating system: Linux RAM: 500 Mbytes Classification: 16.7, 23 Catalogue identifier of previous version: AEHJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388 Does the new version supersede the previous version?: Yes Nature of problem: Simulation of the sympathetic process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are considered with an acceptance/rejection mechanism, that is, by comparing a random number to the collisional probability defined in terms of the inter-particle cross section and centre-of-mass energy. All particles in the trap are individually simulated so that at each time step a number of useful quantities, such as the spatial densities or the energy distributions, can be readily evaluated. Reasons for new version: A number of issues made the old version very difficult to port to different architectures, and impossible to compile on Windows. Furthermore, the test run results could only be replicated poorly, as a consequence of the simulations being very sensitive to machine background noise. In practice, as the particles are simulated for billions and billions of steps, a small difference in the initial conditions due to the finiteness of double-precision reals can have macroscopic effects on the output. This is not a problem in its own right, but a feature of such simulations. However, for the sake of completeness, we have introduced a quadruple precision version of the code, which yields the same results independently of the software used to compile it or the hardware architecture on which the code is run. Summary of revisions: A number of bugs in the dynamic memory allocation have been detected and removed, mostly in the cool.cpp file. All files have been renamed with a .cpp ending, rather than .c++, to make them compatible with Windows. The Random Number Generator routine, which is the computational core of the algorithm, has been rewritten in C++, so cross FORTRAN-C++ compilation is no longer needed.
A quadruple precision version of the code is provided alongside the original double precision one. The makefile allows the user to choose which one to compile by setting the switch PRECISION to either double or quad. The source code and header files have been organised into directories to make the code file system look neater. Restrictions: The in-trap motion of the particles is treated classically. Running time: The running time is relatively short, 1-2 hours. However it is convenient to replicate each simulation several times with different initialisations of the random sequence.
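The acceptance/rejection step described in the solution method can be sketched as follows; the constant cross section, candidate-pair count, and units are illustrative assumptions, not COOL's actual collision model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, SIGMA, DT, VOL = 2000, 1e-4, 1e-3, 1.0   # particles, cross section, step, cell volume
v = rng.normal(0.0, 1.0, (N, 3))            # velocities of the trapped particles

def collision_step(v, n_candidates=500):
    i = rng.integers(0, N, n_candidates)
    j = rng.integers(0, N, n_candidates)
    g = np.linalg.norm(v[i] - v[j], axis=1)        # relative speeds of candidate pairs
    p = SIGMA * g * DT * (N / VOL)                 # collision probability per pair
    accept = (rng.random(n_candidates) < p) & (i != j)   # the accept/reject test
    for a, b in zip(i[accept], j[accept]):
        # Elastic, isotropic scattering about the centre of mass (equal masses).
        gcm = 0.5 * (v[a] + v[b])
        gmag = np.linalg.norm(v[a] - v[b])
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        v[a], v[b] = gcm + 0.5 * gmag * d, gcm - 0.5 * gmag * d
    return v

for _ in range(100):
    v = collision_step(v)
print("kinetic energy per particle:", 0.5 * np.mean((v**2).sum(1)))
```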
Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J
2012-01-01
The next generation of scalable network simulators employ virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations could be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear if, and to what extent, their untamed execution affects the results in simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual time scheduling of VMs in network simulators, based on actual prototyped implementations. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad-hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically report the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and also clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.
Automatic Train Operation Using Autonomic Prediction of Train Runs
NASA Astrophysics Data System (ADS)
Asuka, Masashi; Kataoka, Kenji; Komaya, Kiyotoshi; Nishida, Syogo
In this paper, we present an automatic train control method adaptable to disturbed train traffic conditions. The proposed method assumes that the detected time of home-track clearance is transmitted to trains approaching the station by employing Digital ATC (Automatic Train Control) equipment. Using this information, each train controls its acceleration with a method that consists of two approaches. First, by setting a designated restricted speed, the train controls its running time so as to arrive at the next station in accordance with the predicted delay. Second, the train predicts the time at which it will reach the current braking pattern generated by Digital ATC, along with the time when that braking pattern will move ahead. By comparing the two, the train chooses the coasting drive mode in advance to avoid deceleration due to the current braking pattern. We evaluated the effectiveness of the proposed method by simulation, in terms of driving conditions, energy consumption, and delay reduction.
An Evaluation of the Predictability of Austral Summer Season Precipitation over South America.
NASA Astrophysics Data System (ADS)
Misra, Vasubandhu
2004-03-01
In this study, the predictability of austral summer seasonal precipitation over South America is investigated using a 12-yr set of 3.5-month (seasonal) and a 17-yr set of continuous multiannual five-member ensemble integrations of the Center for Ocean-Land-Atmosphere Studies (COLA) atmospheric general circulation model (AGCM). These integrations were performed with prescribed observed sea surface temperature (SST); the skill attained therefore represents an estimate of the upper bound of the skill achievable by the COLA AGCM with predicted SST. The seasonal runs outperform the multiannual model integrations in both deterministic and probabilistic skill. The simulation of the January-February-March (JFM) seasonal climatology of precipitation is vastly superior in the seasonal runs except over the Nordeste region, where the multiannual runs show a marginal improvement. The teleconnection of the ensemble mean JFM precipitation over tropical South America with global contemporaneous observed sea surface temperature in the seasonal runs conforms more closely to observations than in the multiannual runs. Both sets of runs clearly beat persistence in predicting the interannual precipitation anomalies over the Amazon River basin, Nordeste, South Atlantic convergence zone, and subtropical South America. However, both types of runs display poorer simulations over the subtropical regions than over the tropical areas of South America. The examination of the probabilistic skill of precipitation supports the conclusion from the deterministic skill analysis that the seasonal runs yield superior simulations to the multiannual-type runs.
Simulating three dimensional wave run-up over breakwaters covered by antifer units
NASA Astrophysics Data System (ADS)
Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader
2014-06-01
The paper presents the numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units, using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble mound breakwaters. The results showed that the placement pattern of antifer units had a great impact on wave run-up: changing the placement pattern from regular to double-pyramid reduced the wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer, and reduced wave run-up due to inflow into the armour and stone layers.
ms2: A molecular simulation tool for thermodynamic properties
NASA Astrophysics Data System (ADS)
Deublein, Stephan; Eckl, Bernhard; Stoll, Jürgen; Lishchuk, Sergey V.; Guevara-Carrion, Gabriela; Glass, Colin W.; Merker, Thorsten; Bernreuther, Martin; Hasse, Hans; Vrabec, Jadran
2011-11-01
This work presents the molecular simulation program ms2 that is designed for the calculation of thermodynamic properties of bulk fluids in equilibrium consisting of small electro-neutral molecules. ms2 features the two main molecular simulation techniques, molecular dynamics (MD) and Monte-Carlo. It supports the calculation of vapor-liquid equilibria of pure fluids and multi-component mixtures described by rigid molecular models on the basis of the grand equilibrium method. Furthermore, it is capable of sampling various classical ensembles and yields numerous thermodynamic properties. To evaluate the chemical potential, Widom's test molecule method and gradual insertion are implemented. Transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism. ms2 is designed to meet the requirements of academia and industry, particularly achieving short response times and straightforward handling. It is written in Fortran90 and optimized for a fast execution on a broad range of computer architectures, spanning from single processor PCs over PC-clusters and vector computers to high-end parallel machines. The standard Message Passing Interface (MPI) is used for parallelization and ms2 is therefore easily portable to different computing platforms. Feature tools facilitate the interaction with the code and the interpretation of input and output files. The accuracy and reliability of ms2 has been shown for a large variety of fluids in preceding work. Program summary: Program title: ms2 Catalogue identifier: AEJF_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Special Licence supplied by the authors No. of lines in distributed program, including test data, etc.: 82 794 No. of bytes in distributed program, including test data, etc.: 793 705 Distribution format: tar.gz Programming language: Fortran90 Computer: The simulation tool ms2 is usable on a wide variety of platforms, from single processor machines over PC-clusters and vector computers to vector-parallel architectures. (Tested with Fortran compilers: gfortran, Intel, PathScale, Portland Group and Sun Studio.) Operating system: Unix/Linux, Windows Has the code been vectorized or parallelized?: Yes, using the Message Passing Interface (MPI) protocol. Scalability: excellent scalability up to 16 processors for molecular dynamics and >512 processors for Monte-Carlo simulations. RAM: ms2 runs on single processors with 512 MB RAM. The memory demand rises with increasing number of processors used per node and increasing number of molecules. Classification: 7.7, 7.9, 12 External routines: Message Passing Interface (MPI) Nature of problem: Calculation of application oriented thermodynamic properties for rigid electro-neutral molecules: vapor-liquid equilibria, thermal and caloric data as well as transport properties of pure fluids and multi-component mixtures. Solution method: Molecular dynamics, Monte-Carlo, various classical ensembles, grand equilibrium method, Green-Kubo formalism. Restrictions: No. The system size is user-defined. Typical problems addressed by ms2 can be solved by simulating systems containing typically 2000 molecules or less. Unusual features: Feature tools are available for creating input files, analyzing simulation results and visualizing molecular trajectories. Additional comments: Sample makefiles for multiple operation platforms are provided.
Documentation is provided with the installation package and is available at http://www.ms-2.de. Running time: The running time of ms2 depends on the problem set, the system size and the number of processes used in the simulation. Running four processes on a "Nehalem" processor, simulations calculating VLE data take between two and twelve hours, calculating transport properties between six and 24 hours.
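For the Green-Kubo route mentioned above, here is a self-diffusion sketch: D = (1/3) times the time integral of the velocity autocorrelation function. A synthetic Ornstein-Uhlenbeck velocity series stands in for real MD output so the result can be checked against a known answer:

```python
import numpy as np

rng = np.random.default_rng(0)
DT, NSTEP = 0.01, 20_000
# Synthetic stand-in for an MD velocity trajectory: an Ornstein-Uhlenbeck process,
# whose VACF is exponential, so D = kT/(m*gamma) is known analytically.
gamma, kT_over_m = 1.0, 1.0
v = np.zeros((NSTEP, 3))
for n in range(1, NSTEP):
    v[n] = (v[n-1] - gamma * v[n-1] * DT
            + np.sqrt(2 * gamma * kT_over_m * DT) * rng.normal(size=3))

def vacf(v, max_lag):
    """<v(0).v(t)> averaged over time origins."""
    return np.array([(v[:len(v)-lag] * v[lag:]).sum(axis=1).mean()
                     for lag in range(max_lag)])

c = vacf(v, max_lag=800)
D = DT * (0.5 * c[0] + c[1:-1].sum() + 0.5 * c[-1]) / 3.0   # trapezoidal Green-Kubo integral
print(f"D = {D:.3f}   (theory: kT/(m*gamma) = {kT_over_m / gamma:.3f})")
```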
NASA Astrophysics Data System (ADS)
Liu, Fei; Zhao, Jiuwei; Fu, Xiouhua; Huang, Gang
2018-02-01
By conducting idealized experiments in a general circulation model (GCM) and in a toy theoretical model, we test the hypothesis that shallow convection (SC) explains why the boreal summer intraseasonal oscillation (BSISO) prefers propagating northward. Two simulations are performed using ECHAM4, with the control run using a standard detrainment rate of SC and the sensitivity run turning off the detrainment rate of SC. These two simulations display dramatically different BSISO characteristics. The control run simulates the realistic northward propagation (NP) of the BSISO, while the sensitivity run with little SC only simulates stationary signals. In the sensitivity run, the meridional asymmetries of the vorticity and humidity fields are simulated under the monsoon vertical wind shear (VWS); thus, frictional convergence can be excited to the north of the BSISO. However, the lack of SC makes the lower and middle troposphere very dry, which prohibits further development of deeper convection. A theoretical BSISO model is also constructed, and the result shows that SC is key to conveying the asymmetric vorticity effect that induces the BSISO to move northward. Thus, both the GCM and theoretical model results demonstrate the importance of SC in promoting the NP of the BSISO.
2014-10-07
Per the TDTC, a test bridge with longitudinal and/or lateral symmetry under non-eccentric loading can be considered as 1, 2, or 4... [The remainder of this excerpt is a flattened table of test runs listing run numbers, MLC70T (tracked) and MLC96W (wheeled) load classes, crossing directions (AB/BA), and bank condition (side slope, even); strain channels high. No further prose is recoverable.]
Modeling disease transmission near eradication: An equation free approach
NASA Astrophysics Data System (ADS)
Williams, Matthew O.; Proctor, Joshua L.; Kutz, J. Nathan
2015-01-01
Although disease transmission in the near eradication regime is inherently stochastic, deterministic quantities such as the probability of eradication are of interest to policy makers and researchers. Rather than running large ensembles of discrete stochastic simulations over long intervals in time to compute these deterministic quantities, we create a data-driven and deterministic "coarse" model for them using the Equation Free (EF) framework. In lieu of deriving an explicit coarse model, the EF framework approximates any needed information, such as coarse time derivatives, by running short computational experiments. However, the choice of the coarse variables (i.e., the state of the coarse system) is critical if the resulting model is to be accurate. In this manuscript, we propose a set of coarse variables that result in an accurate model in the endemic and near eradication regimes, and demonstrate this on a compartmental model representing the spread of Poliomyelitis. When combined with adaptive time-stepping coarse projective integrators, this approach can yield over a factor of two speedup compared to direct simulation, and due to its lower dimensionality, could be beneficial when conducting systems level tasks such as designing eradication or monitoring campaigns.
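The EF pattern in miniature (with an illustrative SIR model and parameters, not the paper's poliomyelitis model): short bursts of a stochastic ensemble estimate the coarse time derivative, which is then projected forward over a large step:

```python
import numpy as np

rng = np.random.default_rng(0)
N, BETA, GAMMA, DT = 10_000, 0.3, 0.1, 0.1

def micro_step(S, I):
    """One stochastic (binomial chain) SIR step for an ensemble of copies."""
    new_inf = rng.binomial(S, 1 - np.exp(-BETA * I / N * DT))
    new_rec = rng.binomial(I, 1 - np.exp(-GAMMA * DT))
    return S - new_inf, I + new_inf - new_rec

def projective_step(S, I, burst=20, leap=5.0):
    # Burst of fine steps to relax and estimate the coarse derivative...
    s0, i0 = S.mean(), I.mean()
    for _ in range(burst):
        S, I = micro_step(S, I)
    s1, i1 = S.mean(), I.mean()
    ds, di = (s1 - s0) / (burst * DT), (i1 - i0) / (burst * DT)
    # ...then project the coarse variables forward and re-lift (crudely) to an ensemble.
    S = np.full_like(S, int(max(s1 + leap * ds, 0)))
    I = np.full_like(I, int(max(i1 + leap * di, 0)))
    return S, I

S = np.full(200, N - 50)       # ensemble of 200 stochastic copies
I = np.full(200, 50)
t = 0.0
while t < 30.0:
    S, I = projective_step(S, I)
    t += 20 * DT + 5.0         # each projective step skips 5 time units of fine stepping
print("infected fraction at t~30:", I.mean() / N)
```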
Martin W. Ritchie; Robert F. Powers
1993-01-01
SYSTUM-1 is an individual-tree/distance-independent simulator developed for use in young plantations in California and southern Oregon. The program was developed to run under the DOS operating system and requires DOS 3.0 or higher running on an 8086 or higher processor. The simulator is designed to provide a link with existing PC-based simulators (CACTOS and ORGANON)...
Modeling Subsurface Reactive Flows Using Leadership-Class Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, Richard T; Hammond, Glenn; Lichtner, Peter
2009-01-01
We describe our experiences running PFLOTRAN - a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media - on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.
2005-05-01
...simulated test scenario to obtain the transmission loss and reverberation diagrams for 18 elements (one source, one towed array and 16 buoys)...were recorded using a 1.5 GHz Pentium 4 processor. The test results indicate that the Bellhop program runs fast enough to provide the required acoustic...was determined that the Bellhop program will be fast enough for these clients. Future Plans: It is intended to integrate further enhancements that
AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics
NASA Astrophysics Data System (ADS)
Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.
2017-05-01
We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.
Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.
2014-11-23
This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department Of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis.
Monte Carlo errors with less errors
NASA Astrophysics Data System (ADS)
Wolff, Ulli; Alpha Collaboration
2004-01-01
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help better exploit expensive simulations. An effective integrated autocorrelation time is computed which is suitable for benchmarking the efficiency of simulation algorithms with regard to specific observables of interest. A Matlab code is offered for download that implements the method. It can also combine independent runs (replica), allowing one to judge their consistency.
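The quantity at the heart of the method can be sketched as follows, using a fixed summation window instead of the paper's automatic windowing; the AR(1) test series has a known integrated autocorrelation time for comparison:

```python
import numpy as np

def tau_int(a, window):
    """Integrated autocorrelation time of series a with a fixed window."""
    a = np.asarray(a, dtype=float)
    a -= a.mean()
    var = np.mean(a * a)
    rho = [np.mean(a[:len(a) - t] * a[t:]) / var for t in range(1, window + 1)]
    return 0.5 + sum(rho)

# AR(1) test series with known tau_int = 0.5*(1+phi)/(1-phi).
rng = np.random.default_rng(0)
phi, n = 0.9, 200_000
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

t = tau_int(x, window=200)
print(f"tau_int = {t:.2f}   (theory: {0.5 * (1 + phi) / (1 - phi):.2f})")
# The error bar on the mean then scales as sqrt(2*tau_int/N) times the naive std.
```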
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed in generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing those same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers for potential computational enhancement through parallel processing on the computer clusters. This study also re-implements the trajectory optimization algorithm to further reduce computational time through algorithm modifications, and integrates it with FACET to facilitate the use of the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies and are based on the potential application of optimized trajectories. The paper shows that in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
Information Foraging Theory in Software Maintenance
2012-09-30
Hardware-in-the-Loop Power Extraction Using Different Real-Time Platforms (PREPRINT)
2008-07-01
engine controller (FADEC). Incorporating various transient subsystem-level models into a complex modeling tool can be a challenging process when each...used can also be modified or replaced as appropriate. In its current configuration, the generic turbine engine model's FADEC runs primarily on a...simulation in real-time, two platforms were tested: dSPACE and National Instruments' (NI) LabVIEW Real-Time. For both dSPACE and NI, the engine and FADEC
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Pitman, Michael C; Rice, John J
2011-06-01
We present the orthogonal recursive bisection algorithm that hierarchically segments the anatomical model structure into subvolumes that are distributed to cores. The anatomy is derived from the Visible Human Project, with electrophysiology based on the FitzHugh-Nagumo (FHN) and ten Tusscher (TT04) models with monodomain diffusion. Benchmark simulations with up to 16,384 and 32,768 cores on IBM Blue Gene/P and L supercomputers for both the FHN and TT04 models show good load balancing, with almost perfect speedup factors that are close to linear in the number of cores. Hence, strong scaling is demonstrated. With 32,768 cores, a 1000 ms simulation of a full heart beat requires about 6.5 min of wall clock time for a simulation of the FHN model. For the largest machine partitions, the simulations execute at a rate of 0.548 s (BG/P) and 0.394 s (BG/L) of wall clock time per 1 ms of simulation time. To our knowledge, these simulations show strong scaling to substantially higher numbers of cores than reported previously for organ-level simulation of the heart, thus significantly reducing run times. The ability to reduce run times could play a critical role in enabling wider use of cardiac models in research and clinical applications.
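The partitioning idea in isolation (ignoring the per-element work weights a production code would also use): recursively bisect the element coordinates along the widest axis at the median, yielding 2^levels balanced subvolumes:

```python
import numpy as np

def orb(points, ids, levels):
    """Orthogonal recursive bisection: returns a list of 2**levels index sets."""
    if levels == 0:
        return [ids]
    axis = np.ptp(points, axis=0).argmax()        # axis of widest spatial extent
    order = np.argsort(points[:, axis])
    half = len(ids) // 2                          # median split balances the load
    lo, hi = order[:half], order[half:]
    return (orb(points[lo], ids[lo], levels - 1)
            + orb(points[hi], ids[hi], levels - 1))

rng = np.random.default_rng(0)
pts = rng.normal(size=(100_000, 3))               # stand-in for tissue element coordinates
parts = orb(pts, np.arange(len(pts)), levels=5)   # 32 "cores"
sizes = [len(p) for p in parts]
print("partitions:", len(parts), "| min/max size:", min(sizes), max(sizes))
```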
NASA Astrophysics Data System (ADS)
Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin
2012-08-01
Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise an HPS platform. This research is driven by the issues of large-scale simulation resource deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources, and highly reliable simulation with fault tolerance. A framework of a virtualisation-based simulation platform (VSIM) is first proposed. Then the article investigates and discusses key approaches in VSIM, including simulation resource modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction, and (3) achieve fault-tolerant simulation.
The Effects of a Duathlon Simulation on Ventilatory Threshold and Running Economy
Berry, Nathaniel T.; Wideman, Laurie; Shields, Edgar W.; Battaglini, Claudio L.
2016-01-01
Multisport events continue to grow in popularity among recreational, amateur, and professional athletes around the world. This study aimed to determine the compounding effects of the initial run and cycling legs of an International Triathlon Union (ITU) Duathlon simulation on maximal oxygen uptake (VO2max), ventilatory threshold (VT) and running economy (RE) within a thermoneutral, laboratory controlled setting. Seven highly trained multisport athletes completed three trials: Trial-1 consisted of a speed-only VO2max treadmill protocol (SOVO2max) to determine VO2max, VT, and RE during a single-bout run; Trial-2 consisted of a 10 km run at 98% of VT followed by an incremental VO2max test on the cycle ergometer; Trial-3 consisted of a 10 km run and 30 km cycling bout at 98% of VT followed by a speed-only treadmill test to determine the compounding effects of the initial legs of a duathlon on VO2max, VT, and RE. A repeated measures ANOVA was performed to determine differences between variables across trials. No difference in VO2max, VT (%VO2max), maximal HR, or maximal RPE was observed across trials. Oxygen consumption at VT was significantly lower during Trial-3 compared to Trial-1 (p = 0.01). This decrease was coupled with a significant reduction in running speed at VT (p = 0.015). A significant interaction between trial and running speed indicates that RE was significantly altered during Trial-3 compared to Trial-1 (p < 0.001). The first two legs of a laboratory-based duathlon simulation negatively impact VT and RE. Our findings may provide a useful method to evaluate multisport athletes, since a single-bout incremental treadmill test fails to reveal important alterations in physiological thresholds. Key points:
- Decrease in relative oxygen uptake at VT (ml·kg-1·min-1) during the final leg of a duathlon simulation, compared to a single-bout maximal run.
- We observed a decrease in running speed at VT during the final leg of a duathlon simulation, resulting in an increase of more than 2 minutes to complete a 5 km run.
- During our study, highly trained athletes were unable to complete the final 5 km run at the same intensity at which they completed the initial 10 km run (in a laboratory setting).
- A better understanding, and determination, of training loads during multisport training may help to better periodize training programs; additional research is required.
PMID: 27274661
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Tae-Hyuk; Sandu, Adrian; Watson, Layne T.
2015-08-01
Ensembles of simulations are employed to estimate the statistics of possible future states of a system, and are widely used in important applications such as climate change and biological modeling. Ensembles of runs can naturally be executed in parallel. However, when the CPU times of individual simulations vary considerably, a simple strategy of assigning an equal number of tasks per processor can lead to serious work imbalances and low parallel efficiency. This paper presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms for ensembles of simulations where many tasks are mapped onto each processor, and where the individual compute times vary considerably among tasks. Four load balancing strategies are discussed: most-dividing, all-redistribution, random-polling, and neighbor-redistribution. Simulation results with a stochastic budding yeast cell cycle model are consistent with the theoretical analysis. Especially significant, given scalability concerns for the global rebalancing algorithms, is the provable global decrease in load imbalance for the local rebalancing algorithms. The overall simulation time is reduced by up to 25%, and the total processor idle time by 85%.
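A toy makespan comparison of static equal assignment against a dynamic, queue-like policy under highly variable task times; the log-normal run-time distribution is an invented stand-in, and the greedy policy below only approximates the paper's strategies:

```python
import numpy as np

rng = np.random.default_rng(0)
P, T = 16, 512                                       # processors, ensemble members
times = rng.lognormal(mean=0.0, sigma=1.0, size=T)   # highly variable run times

# Static: an equal number of tasks per processor, assigned up front.
static = times.reshape(P, T // P).sum(axis=1).max()

# Dynamic: each idle processor pulls the next task from a shared queue.
loads = np.zeros(P)
for t in times:
    loads[loads.argmin()] += t                       # next task goes to the first-free core
dynamic = loads.max()

ideal = times.sum() / P                              # perfect balance
print(f"makespan  static: {static:.1f}  dynamic: {dynamic:.1f}  ideal: {ideal:.1f}")
```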
Contextual classification on a CDC Flexible Processor system. [for photomapped remote sensing data
NASA Technical Reports Server (NTRS)
Smith, B. W.; Siegel, H. J.; Swain, P. H.
1981-01-01
A potential hardware organization for the Flexible Processor Array is presented. An algorithm that implements a contextual classifier for remote sensing data analysis is given, along with uniprocessor classification algorithms. The Flexible Processor algorithm is provided, as are simulated timings for contextual classifiers run on the Flexible Processor Array and another system. The timings are analyzed for context neighborhoods of sizes three and nine.
Gray: a ray tracing-based Monte Carlo simulator for PET
NASA Astrophysics Data System (ADS)
Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.
2018-05-01
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
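As a sketch of how a photon history might choose among the modeled interactions: draw an exponential free path from the total attenuation, then select photoelectric, Compton, or Rayleigh in proportion to their coefficients. The numeric values below are placeholders, not Gray's physics tables:

```python
import math
import random

# Placeholder linear attenuation coefficients at 511 keV (1/cm), not real data.
MU = {"photoelectric": 0.0002, "compton": 0.0950, "rayleigh": 0.0010}
MU_TOTAL = sum(MU.values())

def next_interaction(rng):
    path = -math.log(1.0 - rng.random()) / MU_TOTAL   # exponential free path (cm)
    r = rng.random() * MU_TOTAL                       # channel chosen by its share of mu
    acc = 0.0
    for name, mu in MU.items():
        acc += mu
        if r < acc:
            return path, name
    return path, name                                 # guard against float round-off

rng = random.Random(0)
counts = {}
for _ in range(100_000):
    _, kind = next_interaction(rng)
    counts[kind] = counts.get(kind, 0) + 1
print(counts)
```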
Signal treatments to reduce heavy vehicle crash-risk at metropolitan highway intersections.
Archer, Jeffery; Young, William
2009-05-01
Heavy vehicle red-light running at intersections is a common safety problem that has severe consequences. This paper investigates alternative signal treatments that address this issue. A micro-simulation analysis approach was adopted as a precursor to a field trial. The simulation model emulated traffic conditions at a known problem intersection and provided a baseline measure to compare the effects of: an extension of amber time; an extension of green for heavy vehicles detected in the dilemma zone at the onset of amber; an extension of the all-red safety-clearance time based on the detection of vehicles considered likely to run the red light at two detector locations during amber; an extension of the all-red safety-clearance time based on the detection of potential red-light runners during amber or red; and a combination of the second and fourth alternatives. Results suggested safety improvements for all treatments. An extension of amber provided the best safety effect but is known to be prone to behavioural adaptation effects and wastes traffic movement time unnecessarily. A green extension for heavy vehicles detected in the dilemma zone and an all-red extension for potential red-light runners were deemed to provide a sustainable safety improvement and operational efficiency.
NASA One-Dimensional Combustor Simulation--User Manual for S1D_ML
NASA Technical Reports Server (NTRS)
Stueber, Thomas J.; Paxson, Daniel E.
2014-01-01
The work presented in this paper promotes research leading to a closed-loop control system to actively suppress thermo-acoustic instabilities. To serve as a model for such a closed-loop control system, a one-dimensional combustor simulation composed using MATLAB software tools has been written. This MATLAB-based process is similar to a precursor one-dimensional combustor simulation that was formatted as FORTRAN 77 source code. The previous simulation process requires modification of the FORTRAN 77 source code, compiling, and linking when creating a new combustor simulation executable file. The MATLAB-based simulation does not require making changes to the source code, recompiling, or linking. Furthermore, the MATLAB-based simulation can be run from script files within the MATLAB environment or with a compiled copy of the executable file running in the Command Prompt window, without requiring a licensed copy of MATLAB. This report presents a general simulation overview. Details regarding how to set up and initiate a simulation are also presented. Finally, the post-processing section describes the two types of files created while running the simulation and includes simulation results for a default simulation included with the source code.
The Variation of Hydrocarbon Abundances with Latitude and Season in Saturn's Stratosphere
NASA Technical Reports Server (NTRS)
Moses, J. I.; Greathouse, T. K.
2005-01-01
We have developed a realistic, time-variable, one-dimensional, seasonal model for stratospheric photochemistry on Saturn using the Caltech/JPL KINETICS code [1,2,3]. The model accounts for variations in ultraviolet flux due to orbital position, solar-cycle variations, and ring-shadowing effects. The results for two Saturnian years, starting at Ls = 0 in 1950 and running until the upcoming northern vernal equinox in 2009, are presented for numerous latitudes. The same two model years are run over and over again until the model converges, to make sure that high-altitude effects have had a chance to propagate down through the atmosphere. We use the SOLAR2000 model [4,5], in combination with the spectra presented in [6], to predict the ultraviolet flux at any wavelength and any point in time during the simulation. Saturn's orbital position during the simulation was taken from the ephemeris calculator at http://ssd.jpl.nasa.gov/horizons.html [7]. The photochemical model is derived from "Model C" of [8] and uses a hydrocarbon reaction list that has been extensively updated from that presented in [3].
VPython: Writing Real-time 3D Physics Programs
NASA Astrophysics Data System (ADS)
Chabay, Ruth
2001-06-01
VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive real-time 3D graphical display. In a program, 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
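As an illustration of the programming style described above, here is a minimal bouncing-ball sketch written against the classic Visual API of that era (sphere, box, vector, and rate are the module's own names; the physics constants are arbitrary). The loop below is purely computational; the Visual module renders the scene in a separate thread.

```python
from visual import sphere, box, vector, rate  # classic VPython import

floor = box(pos=vector(0, 0, 0), length=4, height=0.1, width=4)
ball = sphere(pos=vector(0, 4, 0), radius=0.5)
ball.velocity = vector(0, -1, 0)   # attach a velocity attribute to the object

dt = 0.01
while True:
    rate(100)                       # cap the loop at ~100 iterations/second
    ball.pos = ball.pos + ball.velocity * dt
    if ball.pos.y < ball.radius:    # bounce off the floor
        ball.velocity.y = abs(ball.velocity.y)
    else:
        ball.velocity.y -= 9.8 * dt  # gravity
```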
NASA Astrophysics Data System (ADS)
Řidký, V.; Šidlof, P.; Vlček, V.
2013-04-01
The work is devoted to comparing measured data with the results of numerical simulations. The mathematical model used was a laminar (turbulence-free) model for incompressible flow. The experiment observed the behavior of a designed NACA0015 airfoil in airflow. The numerical solution used the OpenFOAM computational package, an open-source software based on the finite volume method. In the numerical solution, the displacement of the airfoil is prescribed to correspond to the experiment. The velocity at a point close to the airfoil surface is compared with the experimental data obtained from interferographic measurements of the velocity field. The numerical solution is computed on a 3D mesh composed of about 1 million orthogonal hexahedral elements. The time step is limited by the Courant number. Parallel computations are run on supercomputers of the CIV at the Technical University in Prague (HAL and FOX) and on a computer cluster of the Faculty of Mechatronics in Liberec (HYDRA). Run time is fixed at five periods; the results from the fifth period and the average over all periods are then compared with experiment.
Streaming data analytics via message passing with application to graph algorithms
Plimpton, Steven J.; Shead, Tim
2014-05-06
The need to process streaming data, which arrives continuously at high-volume in real-time, arises in a variety of contexts including data produced by experiments, collections of environmental or network sensors, and running simulations. Streaming data can also be formulated as queries or transactions which operate on a large dynamic data store, e.g. a distributed database. We describe a lightweight, portable framework named PHISH which enables a set of independent processes to compute on a stream of data in a distributed-memory parallel manner. Datums are routed between processes in patterns defined by the application. PHISH can run on top of either message-passing via MPI or sockets via ZMQ. The former means streaming computations can be run on any parallel machine which supports MPI; the latter allows them to run on a heterogeneous, geographically dispersed network of machines. We illustrate how PHISH can support streaming MapReduce operations, and describe streaming versions of three algorithms for large, sparse graph analytics: triangle enumeration, subgraph isomorphism matching, and connected component finding. Lastly, we also provide benchmark timings for MPI versus socket performance of several kernel operations useful in streaming algorithms.
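As a generic illustration of this dataflow style (not the PHISH API itself), the sketch below chains independent processes computing on a stream of datums, with Python's multiprocessing queues standing in for MPI or ZMQ transport.

```python
# Generic stream-processing pipeline: source -> worker -> sink, with datums
# routed through queues and a None marker signalling end-of-stream.
from multiprocessing import Process, Queue

def source(out_q, n):
    for i in range(n):
        out_q.put(i)          # emit a stream of datums
    out_q.put(None)           # end-of-stream marker

def worker(in_q, out_q):
    while (datum := in_q.get()) is not None:
        out_q.put(datum * datum)   # per-datum computation
    out_q.put(None)

def sink(in_q):
    total = 0
    while (datum := in_q.get()) is not None:
        total += datum
    print("sum of squares:", total)

if __name__ == "__main__":
    q1, q2 = Queue(), Queue()
    procs = [Process(target=source, args=(q1, 100)),
             Process(target=worker, args=(q1, q2)),
             Process(target=sink, args=(q2,))]
    for p in procs: p.start()
    for p in procs: p.join()
```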
Shee, Kevin; Ghali, Fady M; Hyams, Elias S
Robotic surgical skill development is central to training in urology as well as in other surgical disciplines. Here, we describe a pilot study assessing the relationships between robotic surgery simulator performance and 3 categories of activities, namely, videogames, musical instruments, and athletics. A questionnaire was administered to preclinical medical students for general demographic information and prior experiences in surgery, videogames, musical instruments, and athletics. For follow-up performance studies, we used the Matchboard Level 1 and 2 modules on the da Vinci Skills Simulator, and recorded overall score, time to complete, economy of motion, workspace range, instrument collisions, instruments out of view, and drops. Task 1 was run once, whereas task 2 was run 3 times. All performance studies on the da Vinci Surgical Skills Simulator took place in the Simulation Center at Dartmouth-Hitchcock Medical Center. All participants were medical students at the Geisel School of Medicine. After excluding students with prior hands-on experience in surgery, a total of 30 students completed the study. We found a significant correlation between athletic skill level and performance for both task 1 (p = 0.0002) and task 2 (p = 0.0009). No significant correlations were found for videogame or musical instrument skill level. Students with experience in certain athletics (e.g., volleyball, tennis, and baseball) tended to perform better than students with experience in other athletics (e.g., track and field). For task 2, which was run 3 times, this association did not persist after the third repetition due to significant improvements in students with low-level athletic skill (levels 0-2). Our study suggests that prior experience in high-level athletics, but not videogames or musical instruments, significantly influences surgical proficiency in robot-naive students. Furthermore, our study suggests that practice through task repetition can overcome initial differences that may be related to a background in athletics. These novel relationships may have broader implications for the future recruitment and training of robotic surgeons and may warrant further investigation. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
A Comparison of Speed Profiles During Training and Competition in Elite Wheelchair Rugby Players.
Rhodes, James M; Mason, Barry S; Paulson, Thomas A W; Goosey-Tolfrey, Victoria L
2017-07-01
To investigate the speed profiles of individual training modes in comparison with wheelchair rugby (WCR) competition across player classifications. Speed profiles of 15 international WCR players were determined using a radio-frequency-based indoor tracking system. Mean and peak speed (m/s), work:rest ratios, and the relative time spent in (%) and number of high-speed activities performed were measured across training sessions (n = 464) and international competition (n = 34). Training was classified into 1 of 4 modes: conditioning (n = 71), skill-based (n = 133), game-related (n = 151), and game-simulation drills (n = 109). Game-simulation drills were further categorized by the structured duration, which were 3-min game clock (n = 44), 8-min game clock (n = 39), and 10-min running clock (n = 26). Players were grouped by their International Wheelchair Rugby Federation classification as either low-point (≤1.5; n = 8) or high-point players (≥2.0; n = 7). Conditioning drills were shown to exceed the demands of competition, irrespective of classification (P ≤ .005; effect size [ES] = 0.6-2.0). Skill-based and game-related drills underrepresented the speed profiles of competition (P ≤ .005; ES = 0.5-1.1). Mean speed and work:rest ratios were significantly lower during 3- and 8-min game-simulation drills in relation to competition (P ≤ .039; ES = 0.5-0.7). However, no significant differences were identified between the 10-min running clock and competition. Although game-simulation drills provided the closest representation of competition, the structured duration appeared important since the 10-min running clock increased training specificity. Coaches can therefore modify the desired training response by making subtle changes to the format of game-simulation drills.
NASA Astrophysics Data System (ADS)
Kemp, E. M.; Putman, W. M.; Gurganus, J.; Burns, R. W.; Damon, M. R.; McConaughy, G. R.; Seablom, M. S.; Wojcik, G. S.
2009-12-01
We present a regional downscaling system (RDS) suitable for high-resolution weather and climate simulations in multiple supercomputing environments. The RDS is built on the NASA Workflow Tool, a software framework for configuring, running, and managing computer models on multiple platforms with a graphical user interface. The Workflow Tool is used to run the NASA Goddard Earth Observing System Model Version 5 (GEOS-5), a global atmospheric-ocean model for weather and climate simulations down to 1/4 degree resolution; the NASA Land Information System Version 6 (LIS-6), a land surface modeling system that can simulate soil temperature and moisture profiles; and the Weather Research and Forecasting (WRF) community model, a limited-area atmospheric model for weather and climate simulations down to 1-km resolution. The Workflow Tool allows users to customize model settings to user needs; saves and organizes simulation experiments; distributes model runs across different computer clusters (e.g., the DISCOVER cluster at Goddard Space Flight Center, the Cray CX-1 Desktop Supercomputer, etc.); and handles all file transfers and network communications (e.g., scp connections). Together, the RDS is intended to aid researchers by making simulations as easy as possible to generate on the computer resources available. Initial conditions for LIS-6 and GEOS-5 are provided by Modern Era Retrospective-Analysis for Research and Applications (MERRA) reanalysis data stored on DISCOVER. The LIS-6 is first run for 2-4 years forced by MERRA atmospheric analyses, generating initial conditions for the WRF soil physics. GEOS-5 is then initialized from MERRA data and run for the period of interest. Large-scale atmospheric data, sea-surface temperatures, and sea ice coverage from GEOS-5 are used as boundary conditions for WRF, which is run for the same period of interest. Multiply nested grids are used for both LIS-6 and WRF, with the innermost grid run at a resolution sufficient for typical local weather features (terrain, convection, etc.). All model runs, restarts, and file transfers are coordinated by the Workflow Tool. Two use cases are being pursued. First, the RDS generates regional climate simulations down to 4 km for the Chesapeake Bay region, with WRF output provided as input to more specialized models (e.g., ocean/lake, hydrological, marine biology, and air pollution). This will allow assessment of climate impacts on local interests (e.g., changes in Bay water levels and temperatures, inundation, fish kills, etc.). Second, the RDS generates high-resolution hurricane simulations in the tropical North Atlantic. This use case will support Observing System Simulation Experiments (OSSEs) of dynamically-targeted lidar observations as part of the NASA Sensor Web Simulator project. Sample results will be presented at the AGU Fall Meeting.
An atomistic simulation scheme for modeling crystal formation from solution.
Kawska, Agnieszka; Brickmann, Jürgen; Kniep, Rüdiger; Hochrein, Oliver; Zahn, Dirk
2006-01-14
We present an atomistic simulation scheme for investigating crystal growth from solution. Molecular-dynamics simulation studies of such processes typically suffer from considerable limitations concerning both system size and simulation times. In our method, this time/length-scale problem is circumvented by an iterative scheme which combines a Monte Carlo-type approach for the identification of ion adsorption sites and, after each growth step, structural optimization of the ion cluster and the solvent by means of molecular-dynamics simulation runs. An important approximation of our method is based on assuming full structural relaxation of the aggregates between each of the growth steps. This concept only holds for compounds of low solubility. To illustrate our method, we studied CaF2 aggregate growth from aqueous solution, which may be taken as a prototype for compounds of very low solubility. The limitations of our simulation scheme are illustrated by the example of NaCl aggregation from aqueous solution, which corresponds to a solute/solvent combination of very high salt solubility.
High performance hybrid functional Petri net simulations of biological pathway models on CUDA.
Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Hybrid functional Petri nets are a widespread tool for representing and simulating biological models. Due to their potential for providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance accelerations by efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) enabled the scientific community to port a variety of compute-intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU-accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating the cell boundary formation by Delta-Notch signaling on a CUDA-enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
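For readers unfamiliar with the model class, a toy sketch of a hybrid functional Petri net step follows (rates and thresholds invented for illustration; this is not the paper's CUDA mapping). Continuous places are integrated with an Euler step whose rate is a function of the current marking, while a discrete transition fires when its condition is met.

```python
# Toy hybrid functional Petri net: continuous places A and B, one discrete
# entity (a gene switch), and a marking-dependent ("functional") rate.
marks = {"A": 10.0, "B": 0.0}    # continuous places
gene_on = True                   # discrete entity

def step(dt):
    global gene_on
    rate = 0.3 * marks["A"] if gene_on else 0.0   # rate depends on the marking
    flow = min(rate * dt, marks["A"])             # continuous transition A -> B
    marks["A"] -= flow
    marks["B"] += flow
    if marks["B"] > 2.0:          # discrete transition: switch the gene off
        gene_on = False

for _ in range(100):
    step(0.1)
print(marks, gene_on)
```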
Wester, Anne E; Verster, Joris C; Volkerts, Edmund R; Böcker, Koen B E; Kenemans, J Leon
2010-09-01
Driving is a complex task and is susceptible to inattention and distraction. Moreover, alcohol has a detrimental effect on driving performance, possibly due to alcohol-induced attention deficits. The aim of the present study was to assess the effects of alcohol on simulated driving performance and attention orienting and allocation, as assessed by event-related potentials (ERPs). Thirty-two participants completed two test runs in the Divided Attention Steering Simulator (DASS) with blood alcohol concentrations (BACs) of 0.00%, 0.02%, 0.05%, 0.08% and 0.10%. Sixteen participants performed the second DASS test run with a passive auditory oddball to assess alcohol effects on involuntary attention shifting. Sixteen other participants performed the second DASS test run with an active auditory oddball to assess alcohol effects on dual-task performance and active attention allocation. Dose-dependent impairments were found for reaction times, the number of misses and steering error, even more so in dual-task conditions, especially in the active oddball group. ERP amplitudes to novel irrelevant events were also attenuated in a dose-dependent manner. The P3b amplitude to deviant target stimuli decreased with blood alcohol concentration only in the dual-task condition. It is concluded that alcohol increases distractibility and interference from secondary task stimuli, as well as reduces attentional capacity and dual-task integrality.
Structural safety of trams in case of misguidance in a switch
NASA Astrophysics Data System (ADS)
Schindler, Christian; Schwickert, Martin; Simonis, Andreas
2010-08-01
Tram vehicles mainly operate on street tracks where sometimes misguidance in switches occurs due to unfavourable conditions. Generally, in this situation, the first running gear of the vehicle follows the bend track while the next running gears continue straight ahead. This leads to a constraint that can only be solved if the vehicle's articulation is damaged or the wheel derails. The last-mentioned situation is less critical in terms of safety and costs. Five different tram types, one of them high floor, the rest low floor, were examined analytically. Numerical simulation was used to determine which wheel would be the first to derail and what level of force is needed in the articulation area between two carbodies to make a tram derail. It was shown that with pure analytical simulation, only an idea of which tram type behaves better or worse in such a situation can be gained, while a three-dimensional computational simulation gives more realistic values for the forces that arise. Three of the four low-floor tram types need much higher articulation forces to make a wheel derail in a switch misguidance situation. One particular three-car type with two single-axle running gears underneath the centre car must be designed to withstand nearly three times higher articulation forces than a conventional high-floor articulated tram. Tram designers must be aware of that and should design the carbody accordingly.
Czaplewski, Cezary; Kalinowski, Sebastian; Liwo, Adam; Scheraga, Harold A
2009-03-10
The replica exchange (RE) method is increasingly used to improve sampling in molecular dynamics (MD) simulations of biomolecular systems. Recently, we implemented the united-residue UNRES force field for mesoscopic MD. Initial results from UNRES MD simulations show that we are able to simulate folding events that take place in a microsecond or even a millisecond time scale. To speed up the search further, we applied the multiplexing replica exchange molecular dynamics (MREMD) method. The multiplexed variant (MREMD) of the RE method, developed by Rhee and Pande, differs from the original RE method in that several trajectories are run at a given temperature. Each set of trajectories run at a different temperature constitutes a layer. Exchanges are attempted not only within a single layer but also between layers. The code has been parallelized and scales up to 4000 processors. We present a comparison of canonical MD, REMD, and MREMD simulations of protein folding with the UNRES force-field. We demonstrate that the multiplexed procedure increases the power of replica exchange MD considerably and convergence of the thermodynamic quantities is achieved much faster.
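The temperature-swap step at the heart of replica exchange uses the standard Metropolis criterion; the sketch below is a generic illustration (not the UNRES/MREMD code itself), with arbitrary example energies and temperatures.

```python
# Generic replica exchange swap: accept an i<->j temperature exchange with
# probability min(1, exp[(beta_i - beta_j) * (E_i - E_j)]).
import math, random

def attempt_swap(E_i, T_i, E_j, T_j, kB=1.0):
    beta_i, beta_j = 1.0 / (kB * T_i), 1.0 / (kB * T_j)
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or random.random() < math.exp(delta)

# In MREMD, several replicas run at each temperature (a "layer"), and swaps
# are attempted both within a layer and between neighboring layers.
print(attempt_swap(E_i=-120.0, T_i=300.0, E_j=-100.0, T_j=350.0))
```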
Simulated Raman Spectral Analysis of Organic Molecules
NASA Astrophysics Data System (ADS)
Lu, Lu
The advent of laser technology in the 1960s solved the main difficulty of Raman spectroscopy, resulting in simplified Raman spectroscopy instruments and also boosting the sensitivity of the technique. Raman spectroscopy is now commonly used in chemistry and biology. As vibrational information is specific to the chemical bonds, Raman spectroscopy provides fingerprints to identify the type of molecules in the sample. In this thesis, we simulate the Raman spectra of organic and inorganic materials with the General Atomic and Molecular Electronic Structure System (GAMESS) and Gaussian, two computational codes that perform several general chemistry calculations. We run these codes on our CPU-based high-performance cluster (HPC). Through the Message Passing Interface (MPI), a standardized and portable message-passing system that allows the codes to run in parallel, we are able to decrease computation time and increase the sizes and capacities of the systems simulated by the codes. From our simulations, we will set up a database that allows a search algorithm to quickly identify N-H and O-H bonds in different materials. Our ultimate goal is to analyze and identify the spectra of organic matter compositions from meteorites and compare these spectra with terrestrial biologically-produced amino acids and residues.
NASA Astrophysics Data System (ADS)
Ng, C. S.; Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Bhattacharjee, A.
2009-04-01
A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] is applied to simulate the problem of the MHD island coalescence instability (ICI) in two dimensions. The ICI is a fundamental MHD process that can produce sharp current layers and subsequent reconnection and heating in a high-Lundquist-number plasma such as the solar corona [Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Due to the formation of thin current layers, it is highly desirable to use adaptively or statically refined grids to resolve them while maintaining accuracy. The output of the spectral-element static adaptive refinement simulations is compared with simulations using a finite difference method on the same refinement grids, and both methods are compared to pseudo-spectral simulations with uniform grids as baselines. It is shown that with the statically refined grids roughly scaling linearly with effective resolution, spectral element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.
Mean Line Pump Flow Model in Rocket Engine System Simulation
NASA Technical Reports Server (NTRS)
Veres, Joseph P.; Lavelle, Thomas M.
2000-01-01
A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and short computer run times, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.
NASA Astrophysics Data System (ADS)
Liemohn, M. W.; Welling, D. T.; De Zeeuw, D.; Kuznetsova, M. M.; Rastaetter, L.; Ganushkina, N. Y.; Ilie, R.; Toth, G.; Gombosi, T. I.; van der Holst, B.
2016-12-01
The ground-based magnetometer index Dst is a decent measure of the near-Earth current systems, in particular those in the storm-time inner magnetosphere. The ability of a large-scale, physics-based model to reproduce, or even predict, this index is therefore a tangible measure of the overall validity of the code for space weather research and space weather operational usage. Experimental real-time simulations of the Space Weather Modeling Framework (SWMF) are conducted at the Community Coordinated Modeling Center (CCMC), with results available there (http://ccmc.gsfc.nasa.gov/realtime.php), through the CCMC Integrated Space Weather Analysis (iSWA) site (http://iswa.ccmc.gsfc.nasa.gov/IswaSystemWebApp/), and the Michigan SWMF site (http://csem.engin.umich.edu/realtime). Presently, two configurations of the SWMF are running in real time at CCMC, both focusing on the geospace modules, using the BATS-R-US magnetohydrodynamic model, the Ridley Ionosphere Model, and with and without the Rice Convection Model for inner magnetospheric drift physics. While both have been running for several years, nearly continuous results are available since July 2015. Dst from the model output is compared against the Kyoto real-time Dst. Various quantitative measures are presented to assess the goodness of fit between the models and observations. In particular, correlation coefficients, RMSE and prediction efficiency are calculated and discussed. In addition, contingency tables are presented, demonstrating the ability of the model to predict "disturbed times" as defined by Dst values below some critical threshold. It is shown that the SWMF run with the inner magnetosphere model is significantly better at reproducing storm-time values, with prediction efficiencies above 0.25 and Heidke skill scores above 0.5. This work was funded by NASA and NSF grants, and the European Union's Horizon 2020 research and innovation programme under grant agreement 637302 PROGRESS.
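The validation metrics named above have standard definitions; the following sketch computes prediction efficiency and the Heidke skill score on invented numbers (not CCMC output).

```python
# Standard forecast-verification metrics for a continuous series (Dst) and
# a 2x2 contingency table of "disturbed" vs "quiet" classifications.
import numpy as np

def prediction_efficiency(obs, model):
    """PE = 1 - MSE/var(obs); 1 is perfect, 0 is no better than the mean."""
    obs, model = np.asarray(obs), np.asarray(model)
    return 1.0 - np.mean((model - obs) ** 2) / np.var(obs)

def heidke(hits, false_alarms, misses, correct_negatives):
    """Heidke skill score for a 2x2 contingency table."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    return 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

obs = np.array([-12., -35., -80., -40., -15.])    # hypothetical Dst values, nT
model = np.array([-10., -30., -65., -45., -20.])
print(prediction_efficiency(obs, model), heidke(20, 5, 8, 120))
```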
Realtime Space Weather Forecasts Via Android Phone App
NASA Astrophysics Data System (ADS)
Crowley, G.; Haacke, B.; Reynolds, A.
2010-12-01
For the past several years, ASTRA has run a first-principles global 3-D fully coupled thermosphere-ionosphere model in real-time for space weather applications. The model is the Thermosphere-Ionosphere Mesosphere Electrodynamics General Circulation Model (TIMEGCM). ASTRA also runs the Assimilative Mapping of Ionospheric Electrodynamics (AMIE) in real-time. Using AMIE to drive the high latitude inputs to the TIMEGCM produces high fidelity simulations of the global thermosphere and ionosphere. These simulations can be viewed on the Android Phone App developed by ASTRA. The SpaceWeather app for the Android operating system is free and can be downloaded from the Google Marketplace. We present the current status of real-time thermosphere-ionosphere space-weather forecasting and discuss the way forward. We explore some of the issues in maintaining real-time simulations with assimilative data feeds in a quasi-operational setting. We also discuss some of the challenges of presenting large amounts of data on a smartphone. The ASTRA SpaceWeather app includes the broadest and most unique range of space weather data yet to be found on a single smartphone app. This is a one-stop-shop for space weather and the only app where you can get access to ASTRA's real-time predictions of the global thermosphere and ionosphere, high latitude convection and geomagnetic activity. Because of the phone's GPS capability, users can obtain location-specific vertical profiles of electron density, temperature, and time-histories of various parameters from the models. The SpaceWeather app has over 9000 downloads, 30 reviews, and a following of active users. It is clear that real-time space weather on smartphones is here to stay, and must be included in planning for any transition to operational space-weather use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
Solar radiation can be computed using radiative transfer models, such as the Rapid Radiation Transfer Model (RRTM) and its general circulation model applications, and used for various energy applications. Due to the complexity of computing radiation fields in aerosol and cloudy atmospheres, simulating solar radiation can be extremely time-consuming, but many approximations--e.g., the two-stream approach and the delta-M truncation scheme--can be utilized. To provide a new fast option for computing solar radiation, we developed the Fast All-sky Radiation Model for Solar applications (FARMS) by parameterizing the simulated diffuse horizontal irradiance and direct normal irradiance for cloudy conditions from the RRTM runs using a 16-stream discrete ordinates radiative transfer method. The solar irradiance at the surface was simulated by combining the cloud irradiance parameterizations with a fast clear-sky model, REST2. To understand the accuracy and efficiency of the newly developed fast model, we analyzed FARMS runs using cloud optical and microphysical properties retrieved using GOES data from 2009-2012. The global horizontal irradiance for cloudy conditions was simulated using FARMS and RRTM for global circulation modeling with a two-stream approximation and compared to measurements taken from the U.S. Department of Energy's Atmospheric Radiation Measurement Climate Research Facility Southern Great Plains site. Our results indicate that the accuracy of FARMS is comparable to or better than the two-stream approach; however, FARMS is approximately 400 times more efficient because it does not explicitly solve the radiative transfer equation for each individual cloud condition. Radiative transfer model runs are computationally expensive, but this model is promising for broad applications in solar resource assessment and forecasting. It is currently being used in the National Solar Radiation Database, which is publicly available from the National Renewable Energy Laboratory at http://nsrdb.nrel.gov.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Allan Ray
1987-05-01
Increases in high speed hardware have mandated studies in software techniques to exploit the parallel capabilities. This thesis examines the effects a run-time scheduler has on a multiprocessor. The model consists of directed, acyclic graphs, generated from serial FORTRAN benchmark programs by the parallel compiler Parafrase. A multitasked, multiprogrammed environment is created. Dependencies are generated by the compiler. Tasks are bidimensional, i.e., they may specify both time and processor requests. Processor requests may be folded into execution time by the scheduler. The graphs may arrive at arbitrary time intervals. The general case is NP-hard, thus, a variety of heuristics are examined by a simulator. Multiprogramming demonstrates a greater need for a run-time scheduler than does monoprogramming for a variety of reasons, e.g., greater stress on the processors, a larger number of independent control paths, more variety in the task parameters, etc. The dynamic critical path series of algorithms perform well. Dynamic critical volume did not add much. Unfortunately, dynamic critical path maximizes turnaround time as well as throughput. Two schedulers are presented which balance throughput and turnaround time. The first requires classification of jobs by type; the second requires selection of a ratio value which is dependent upon system parameters. 45 refs., 19 figs., 20 tabs.
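As a hedged illustration of critical-path-driven list scheduling (a toy in the spirit of the heuristics discussed, not the thesis's simulator), the sketch below prioritizes ready tasks of a small invented DAG by their longest path to an exit node.

```python
# Toy critical-path list scheduler: priority = longest path to an exit node;
# ready tasks are placed on the earliest-free processor, highest priority first.
from functools import lru_cache

# Hypothetical DAG: task -> (run time, successors)
dag = {"a": (2, ["c"]), "b": (3, ["c", "d"]),
       "c": (2, ["e"]), "d": (4, ["e"]), "e": (1, [])}

@lru_cache(maxsize=None)
def cp(task):                       # critical-path length from task to exit
    t, succs = dag[task]
    return t + max((cp(s) for s in succs), default=0)

n_procs = 2
preds = {t: 0 for t in dag}         # in-degree count per task
for t, (_, succs) in dag.items():
    for s in succs:
        preds[s] += 1

free_at = [0.0] * n_procs           # when each processor becomes free
finish = {}
ready = [t for t in dag if preds[t] == 0]
while ready:
    task = max(ready, key=cp)       # highest critical-path priority first
    ready.remove(task)
    p = min(range(n_procs), key=lambda i: free_at[i])
    # A task may start only after its processor and all predecessors finish.
    start = max(free_at[p], max((finish[q] for q, (_, ss) in dag.items()
                                 if task in ss), default=0.0))
    finish[task] = start + dag[task][0]
    free_at[p] = finish[task]
    for s in dag[task][1]:
        preds[s] -= 1
        if preds[s] == 0:
            ready.append(s)
print("makespan:", max(finish.values()))
```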
Unsteady, Cooled Turbine Simulation Using a PC-Linux Analysis System
NASA Technical Reports Server (NTRS)
List, Michael G.; Turner, Mark G.; Chen, Jen-Ping; Remotigue, Michael G.; Veres, Joseph P.
2004-01-01
The first stage of the high-pressure turbine (HPT) of the GE90 engine was simulated with a three-dimensional unsteady Navier-Stokes solver, MSU Turbo, which uses source terms to simulate the cooling flows. In addition to the solver, its pre-processor, GUMBO, and a post-processing and visualization tool, Turbomachinery Visual3 (TV3), were run in a Linux environment to carry out the simulation and analysis. The solver was run both with and without cooling. The introduction of cooling flow on the blade surfaces, case, and hub, and its effects on both rotor-vane interaction as well as on the blades themselves, were the principal motivations for this study. The studies of the cooling flow show the large amount of unsteadiness in the turbine and the corresponding hot streak migration phenomenon. This research on the GE90 turbomachinery has also led to a procedure for running unsteady, cooled turbine analysis on commodity PCs running the Linux operating system.
Case Studies of Forecasting Ionospheric Total Electron Content
NASA Astrophysics Data System (ADS)
Mannucci, A. J.; Meng, X.; Verkhoglyadova, O. P.; Tsurutani, B.; McGranaghan, R. M.
2017-12-01
We report on medium-range forecast-mode runs of ionosphere-thermosphere coupled models that calculate ionospheric total electron content (TEC), focusing on low-latitude daytime conditions. A medium-range forecast-mode run refers to simulations that are driven by inputs that can be predicted 2-3 days in advance, for example based on simulations of the solar wind. We will present results from a weak geomagnetic storm caused by a high-speed solar wind stream on June 29, 2012. Simulations based on the Global Ionosphere Thermosphere Model (GITM) and the Thermosphere Ionosphere Electrodynamic General Circulation Model (TIEGCM) significantly overestimate TEC in certain low latitude daytime regions, compared to TEC maps based on observations. We will present the results from a more intense coronal mass ejection (CME) driven storm where the simulations are closer to observations. We compare high latitude data sets to model inputs, such as auroral boundary and convection patterns, to assess the degree to which poorly estimated high latitude drivers may be the largest cause of discrepancy between simulations and observations. Our results reveal many factors that can affect the accuracy of forecasts, including the fidelity of empirical models used to estimate high latitude precipitation patterns, or observation proxies for solar EUV spectra, such as the F10.7 index. Implications for forecasts with few-day lead times are discussed.
Automatic mathematical modeling for real time simulation system
NASA Technical Reports Server (NTRS)
Wang, Caroline; Purinton, Steve
1988-01-01
A methodology for automatic mathematical modeling and generating simulation models is described. The models are verified by running in a test environment using standard profiles, with the results compared against known results. The major objective is to create a user-friendly environment for engineers to design, maintain, and verify their models, and also to automatically convert the mathematical model into conventional code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation. It is written in LISP and MACSYMA and runs on a Symbolics 3670 Lisp machine. The program provides a very friendly and well-organized environment for engineers to build a knowledge base of base equations and general information. It contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. The system is then able to automatically generate the model and FORTRAN code. A future goal, currently under development, is to download the FORTRAN code to a VAX/VMS system for conventional computation. The SSME mathematical model will be verified in a test environment and the solution compared with the real data profile. The use of artificial intelligence techniques has shown that the process of simulation modeling can be simplified.
Effect of 29 days of simulated microgravity on maximal oxygen consumption and fat-free mass of rats
NASA Technical Reports Server (NTRS)
Woodman, Christopher R.; Stump, Craig S.; Stump, Jane A.; Rahman, Zia; Tipton, Charles M.
1991-01-01
Effects of a 29-day exposure to simulated microgravity on the values of maximal oxygen consumption and fat-free mass (FFM) and on the mechanical efficiency of running were investigated in rats randomly assigned to one of three regimens: head-down suspension (HDS) at 45 deg, horizontal suspension (HS), or cage control (CC). Before suspension and on days 7, 14, 21, and 28, five exercise performance tests were carried out, with measurements related to maximal oxygen consumption, treadmill run time, and mechanical efficiency. It was found that maximal oxygen consumption of both the HDS and HS groups decreased significantly at day 7, after which the HDS values remained depressed while the HS rats returned to presuspension values. Apparent mechanical efficiency in the HDS and HS groups decreased by 22-35 percent during the experimental period, and FFM decreased significantly.
NASA Astrophysics Data System (ADS)
Jayanthi, Aditya; Coker, Christopher
2016-11-01
In the last decade, CFD simulations have transitioned from being used only to validate final designs to driving the mainstream development of products. However, there are still niche application areas, such as oiling simulations, where traditional CFD simulation times are prohibitive for product development, forcing reliance on expensive experimental methods. In this paper a unique example of a sprocket-chain simulation will be presented using nanoFluidX, a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The gridless nature of the SPH method has inherent advantages in application areas with complex geometry, which pose severe challenges to classical finite volume CFD methods due to complex moving geometries, moving meshes, and high resolution requirements leading to long simulation times. Simulation times using nanoFluidX can be reduced from weeks to days, allowing the flexibility to run more simulations so that they can be used in mainstream product development. The example problem under consideration is a classical multiphysics problem, and a sequentially coupled solution of MotionSolve and nanoFluidX will be presented.
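The gridless property comes from the SPH density estimate itself; the sketch below shows the textbook kernel-weighted density summation (generic SPH, not the nanoFluidX API; the particle data are random).

```python
# SPH density estimate: rho_i = sum_j m_j W(|r_i - r_j|, h), brute force
# over all pairs, with the standard 3D cubic-spline smoothing kernel.
import numpy as np

def w_cubic_spline(r, h):
    q = r / h
    sigma = 1.0 / (np.pi * h ** 3)   # 3D normalization, support radius 2h
    return np.where(q < 1, sigma * (1 - 1.5 * q**2 + 0.75 * q**3),
           np.where(q < 2, sigma * 0.25 * (2 - q) ** 3, 0.0))

def density(positions, masses, h):
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * w_cubic_spline(r, h)).sum(axis=1)

pts = np.random.default_rng(0).uniform(0, 1, size=(200, 3))
print(density(pts, np.full(200, 1.0 / 200), h=0.15)[:5])
```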
Hybrid stochastic simulation of reaction-diffusion systems with slow and fast dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strehl, Robert; Ilie, Silvana, E-mail: silvana@ryerson.ca
2015-12-21
In this paper, we present a novel hybrid method to simulate discrete stochastic reaction-diffusion models arising in biochemical signaling pathways. We study moderately stiff systems, for which we can partition each reaction or diffusion channel into either a slow or fast subset, based on its propensity. Numerical approaches missing this distinction are often limited with respect to computational run time or approximation quality. We design an approximate scheme that remedies these pitfalls by using a new blending strategy of the well-established inhomogeneous stochastic simulation algorithm and the tau-leaping simulation method. The advantages of our hybrid simulation algorithm are demonstrated on three benchmarking systems, with special focus on approximation accuracy and efficiency.
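Two of the building blocks named above are standard; the sketch below shows a single Gillespie SSA step and a propensity-based slow/fast partition (a generic illustration with an invented two-species system, not the paper's blended scheme).

```python
# One Gillespie SSA step plus a propensity-based slow/fast channel split.
import math, random

def ssa_step(x, propensities, stoich):
    """Sample the exponential waiting time and the firing reaction."""
    a = [f(x) for f in propensities]
    a0 = sum(a)
    if a0 == 0:
        return x, float("inf")
    tau = -math.log(random.random()) / a0       # exponential waiting time
    r = random.random() * a0                    # pick reaction j with prob a_j/a0
    j, acc = 0, a[0]
    while acc < r:
        j += 1
        acc += a[j]
    return [xi + s for xi, s in zip(x, stoich[j])], tau

def partition(x, propensities, threshold=10.0):
    """Channels above the propensity threshold are 'fast' (tau-leaping),
    the rest 'slow' (exact SSA)."""
    a = [f(x) for f in propensities]
    return ([j for j, aj in enumerate(a) if aj >= threshold],
            [j for j, aj in enumerate(a) if aj < threshold])

x = [100, 0]                                        # species counts: A, B
props = [lambda s: 0.5 * s[0], lambda s: 0.1 * s[1]]  # A -> B, B -> A
stoich = [(-1, +1), (+1, -1)]
x, tau = ssa_step(x, props, stoich)
print(x, tau, partition(x, props))
```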
Molecular dynamics simulations and applications in computational toxicology and nanotoxicology.
Selvaraj, Chandrabose; Sakkiah, Sugunadevi; Tong, Weida; Hong, Huixiao
2018-02-01
Nanotoxicology studies the toxicity of nanomaterials and has been widely applied in biomedical research to explore toxicity in various biological systems. Investigating biological systems through in vivo and in vitro methods is expensive and time-consuming. Therefore, computational toxicology, a multi-discipline field that utilizes computational power and algorithms to examine the toxicology of biological systems, has attracted scientists' attention. Molecular dynamics (MD) simulations of biomolecules such as proteins and DNA are popular for understanding interactions between biological systems and chemicals in computational toxicology. In this paper, we review MD simulation methods, protocols for running MD simulations, and their applications in studies of toxicity and nanotechnology. We also briefly summarize some popular software tools for the execution of MD simulations. Published by Elsevier Ltd.
Integration of MATLAB Simulink(Registered Trademark) Models with the Vertical Motion Simulator
NASA Technical Reports Server (NTRS)
Lewis, Emily K.; Vuong, Nghia D.
2012-01-01
This paper describes the integration of MATLAB Simulink(Registered Trademark) models into the Vertical Motion Simulator (VMS) at NASA Ames Research Center. The VMS is a high-fidelity, large motion flight simulator that is capable of simulating a variety of aerospace vehicles. The integration of MATLAB Simulink models into the VMS needed to retain the development flexibility of the MATLAB environment and allow rapid deployment of model changes. The process developed at the VMS was used successfully in a number of recent simulation experiments. This accomplishment demonstrated that the model integrity was preserved while working within the hard real-time run environment of the VMS architecture, and that the unique flexibility of the VMS to meet diverse research requirements was maintained.
Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dali; Yuan, Fengming; Hernandez, Benjamin
Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability and in-situ data analytics for Earth system model simulation, as well as model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.
Performance of a reconfigured atmospheric general circulation model at low resolution
NASA Astrophysics Data System (ADS)
Wen, Xinyu; Zhou, Tianjun; Wang, Shaowu; Wang, Bin; Wan, Hui; Li, Jian
2007-07-01
Paleoclimate simulations usually require model runs over a very long time. For this purpose, a fast integration version of a state-of-the-art general circulation model (GCM), which shares the same physical and dynamical processes but with reduced horizontal resolution and an increased time step, is usually developed. In this study, we configure a fast version of an atmospheric GCM (AGCM), the Grid Atmospheric Model of IAP/LASG (Institute of Atmospheric Physics/State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics), at low resolution (GAMIL-L, hereafter), and compare the simulation results with the NCEP/NCAR reanalysis and other data to examine its performance. GAMIL-L, which is derived from the original GAMIL, is a finite difference AGCM with 72×40 grids in longitude and latitude and 26 vertical levels. To validate the simulated climatology and variability, two runs were performed. One was a 60-year control run with fixed climatological monthly sea surface temperature (SST) forcing, and the other was a 50-yr (1950-2000) integration with observational time-varying monthly SST forcing. Comparisons between these two cases and the reanalysis, including intra-seasonal and inter-annual variability, are also presented. In addition, the differences between GAMIL-L and the original version of GAMIL are investigated. The results show that GAMIL-L can capture most of the large-scale dynamical features of the atmosphere, especially in the tropics and mid latitudes, although a few deficiencies exist, such as the underestimated Hadley cell and thereby the weak Asian summer monsoon. However, the simulated mean states over high latitudes, especially over the polar regions, are not acceptable. Apart from dynamics, the thermodynamic features mainly depend upon the physical parameterization schemes. Since the physical package of GAMIL-L is exactly the same as that of the original high-resolution version of GAMIL, in which the NCAR Community Atmosphere Model (CAM2) physical package was used, there are only small differences between them in the precipitation and temperature fields. Because our goal is to develop a fast-running AGCM and employ it in the coupled climate system model of IAP/LASG for paleoclimate studies such as ENSO and the Australia-Asia monsoon, particular attention has been paid to the model performance in the tropics. Further model validations, such as those run for the Southern Oscillation and the South Asian monsoon, indicate that GAMIL-L is reasonably competent and valuable in this regard.
Vizualization Challenges of a Subduction Simulation Using One Billion Markers
NASA Astrophysics Data System (ADS)
Rudolph, M. L.; Gerya, T. V.; Yuen, D. A.
2004-12-01
Recent advances in supercomputing technology have permitted us to study the multiscale, multicomponent fluid dynamics of subduction zones at unprecedented resolutions, down to about the length of a football field. We have performed numerical simulations using one billion tracers over a grid of about 80 thousand points in two dimensions. These runs were performed using a thermal-chemical simulation that accounts for hydration and partial melting in the thermal, mechanical, petrological, and rheological domains. From these runs, we have observed several geophysically interesting phenomena, including the development of plumes with unmixed mantle composition as well as plumes with mixed mantle/crust components. Unmixed plumes form at depths greater than 100 km (5-10 km above the upper interface of the subducting slab) and consist of partially molten wet peridotite. Mixed plumes form at shallower depths directly from the subducting slab and contain partially molten hydrated oceanic crust and sediments. These high resolution simulations have also spurred the development of new visualization methods. We have created a new web-based interface to data from our subduction simulation and other high-resolution 2D data that uses a hierarchical data format to achieve response times of less than one second when accessing data files on the order of 3 GB. This interface, WEB-IS4, uses a Javascript and HTML frontend coupled with a C and PHP backend, and allows the user to perform region-of-interest zooming and real-time colormap selection, and to retrieve relevant statistics for the data in the region of interest.
Developing a Learning Algorithm-Generated Empirical Relaxer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Wayne; Kallman, Josh; Toreja, Allen
2016-03-30
One of the main difficulties when running Arbitrary Lagrangian-Eulerian (ALE) simulations is determining how much to relax the mesh during the Eulerian step. This determination is currently made by the user on a simulation-by-simulation basis. We present a Learning Algorithm-Generated Empirical Relaxer (LAGER), which uses a regression random forest algorithm to automate this decision process. We also demonstrate that LAGER successfully relaxes a variety of test problems, maintains simulation accuracy, and has the potential to significantly decrease both the person-hours and computational hours needed to run a successful ALE simulation.
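A hypothetical sketch of the idea follows (the feature names, training labels, and data are all invented for illustration, and scikit-learn is assumed to be available): a regression random forest maps cheap per-zone mesh-quality features to a relaxation amount, replacing the user's per-simulation judgment call.

```python
# Hypothetical LAGER-style relaxer: learn relaxation amounts from
# mesh-quality features using a regression random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# Invented features: zone aspect ratio, skewness, local velocity gradient.
X = rng.uniform(0, 1, size=(500, 3))
# Invented label: how much an expert would have relaxed each zone.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * X[:, 2]

relaxer = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
new_zones = rng.uniform(0, 1, size=(4, 3))
print(relaxer.predict(new_zones))   # per-zone relaxation amounts in [0, 1]
```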
Gravitational Instability of Small Particles in Stratified Dusty Disks
NASA Astrophysics Data System (ADS)
Shi, J.; Chiang, E.
2012-12-01
Self-gravity is an attractive means of forming the building blocks of planets, a.k.a. the first-generation planetesimals. For ensembles of dust particles to aggregate into self-gravitating, bound structures, they must first collect into regions of extraordinarily high density in circumstellar gas disks. We have modified the ATHENA code to simulate dusty, compressible, self-gravitating flows in a 3D shearing box configuration, working in the limit that dust particles are small enough to be perfectly entrained in gas. We have used our code to determine the critical density thresholds required for disk gas to undergo gravitational collapse. In the strict limit that the stopping times of particles in gas are infinitesimally small, our numerical simulations and analytic calculations reveal that the critical density threshold for gravitational collapse is orders of magnitude above what has been commonly assumed. We discuss how finite but still short stopping times under realistic conditions can lower the threshold to a level that may be attainable. [Figure 1 caption: Nonlinear development of gravitational instability in a stratified dusty disk. Shown are volume renderings of dust density for the bottom half of a disk at t = 0, 6, 8, and 9 Omega^{-1}. The initial disk first develops shearing density waves; these waves then steepen and form long filaments extending along the azimuth, which eventually break up into very dense dust clumps.] [Figure 2 caption: Time evolution of the maximum dust density within the simulation box. Run std32 is a standard run with an averaged Toomre Q = 0.5; Q >~ 1.0 for the remaining runs (Z1 has twice the metallicity of the standard run; Q1 has twice Q_g, the Toomre Q of the gas disk alone; M1 has twice the standard midplane dust-to-gas ratio; R1 is constructed so that the midplane density exceeds the Roche criterion while the Toomre Q stays above unity).]
Tone-assisted time delay interferometry on GRACE Follow-On
NASA Astrophysics Data System (ADS)
Francis, Samuel P.; Shaddock, Daniel A.; Sutton, Andrew J.; de Vine, Glenn; Ware, Brent; Spero, Robert E.; Klipstein, William M.; McKenzie, Kirk
2015-07-01
We have demonstrated the viability of using the Laser Ranging Interferometer on the Gravity Recovery and Climate Experiment Follow-On (GRACE-FO) space mission to test key aspects of the interspacecraft interferometry proposed for detecting gravitational waves. The Laser Ranging Interferometer on GRACE-FO will be the first demonstration of interspacecraft interferometry. GRACE-FO shares many similarities with proposed space-based gravitational wave detectors based on the Laser Interferometer Space Antenna (LISA) concept. Given these similarities, GRACE-FO provides a unique opportunity to test novel interspacecraft interferometry techniques that a LISA-like mission will use. The LISA Experience from GRACE-FO Optical Payload (LEGOP) is a project developing tests of arm locking and time delay interferometry (TDI), two frequency stabilization techniques, that could be performed on GRACE-FO. In the proposed LEGOP TDI demonstration one GRACE-FO spacecraft will have a free-running laser while the laser on the other spacecraft will be locked to a cavity. It is proposed that two one-way interspacecraft phase measurements will be combined with an appropriate delay in order to produce a round-trip, dual one-way ranging (DOWR) measurement independent of the frequency noise of the free-running laser. This paper describes simulated and experimental tests of a tone-assisted TDI ranging (TDIR) technique that uses a least-squares fitting algorithm and fractional-delay interpolation to find and implement the delays needed to form the DOWR TDI combination. The simulation verifies tone-assisted TDIR works under GRACE-FO conditions. Using simulated GRACE-FO signals the tone-assisted TDIR algorithm estimates the time-varying interspacecraft range with a rms error of ±0.2 m , suppressing the free-running laser frequency noise by 8 orders of magnitude. The experimental results demonstrate the practicability of the technique, measuring the delay at the 6 ns level in the presence of a significant displacement signal.
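The delay search itself can be illustrated generically (toy signals, not the LEGOP flight algorithm): slide one phase record against the other with fractional-sample interpolation and keep the delay that minimizes the least-squares residual of the DOWR-style combination.

```python
# Toy least-squares delay search with fractional-delay interpolation.
import numpy as np

fs = 100.0                                  # sample rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
true_delay = 0.0537                         # seconds
# Drifting random walk standing in for free-running laser frequency noise.
noise = np.cumsum(np.random.default_rng(3).normal(size=t.size))
one_way_ab = noise                                     # phase measured at B
one_way_ba = np.interp(t - true_delay, t, noise)       # same noise, delayed

def residual(delay):
    shifted = np.interp(t - delay, t, one_way_ab)      # fractional-delay shift
    r = one_way_ba - shifted
    return np.sum(r[100:-100] ** 2)                    # trim interpolation edges

candidates = np.linspace(0.04, 0.06, 2001)
best = candidates[np.argmin([residual(d) for d in candidates])]
print(f"estimated delay: {best:.4f} s (true {true_delay} s)")
```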
The joy of interactive modeling
NASA Astrophysics Data System (ADS)
Donchyts, Gennadii; Baart, Fedor; van Dam, Arthur; Jagers, Bert
2013-04-01
The conventional way of working with hydrodynamical models usually consists of the following steps: 1) define a schematization (e.g., in a graphical user interface, or by editing input files); 2) run the model from start to end; 3) visualize the results; 4) repeat any of the previous steps. This cycle commonly takes from hours to several days. What if we could make this happen instantly? As most of the research done using numerical models is in fact qualitative and exploratory (Oreskes et al., 1994), why not use these models as such? How can we adapt models so that we can edit model input, run, and visualize results at the same time? More and more, interactive models become available as online apps, mainly for demonstration and educational purposes. These models often simplify the physics behind flows and run on simplified model geometries, particularly when compared with state-of-the-art scientific simulation packages. Here we show how the aforementioned conventional standalone models ("static, run once") can be transformed into interactive models. The basic concepts behind turning existing (conventional) model engines into interactive engines are the following. The engine does not run the model from start to end, but is always available in memory, and can be fed new boundary conditions or state changes at any time. The model can be run continuously, per step, or up to a specified time. The Hollywood principle dictates how the model engine is instructed from 'outside', instead of the model engine taking all necessary actions on its own initiative. The underlying techniques that facilitate these concepts are introspection of the computation engine, which exposes its state variables, and control functions, e.g. for time stepping, via a standardized interface such as BMI (Peckham et al., 2012). In this work we have used the shallow water flow model engine D-Flow Flexible Mesh. The model was converted from an executable to a library, and coupled to the graphical modelling environment Delta Shell. Both the engine and the environment are open source tools under active development at Deltares. The combination provides direct interactive control over the time loop and model state, and offers live 3D visualization of the running model using the VTK library.
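To make the "always in memory, stepped and poked from outside" idea concrete, here is a minimal sketch of a BMI-style engine. The class, variable names, and toy dynamics are hypothetical, but the method names mirror the BMI control functions referenced above (initialize, update, update_until, get_value, set_value).

```python
class ToyFlowModel:
    """Minimal BMI-like engine: the model never 'runs to completion';
    it stays resident and advances only when told to."""
    def initialize(self, dt=0.25):
        self.t, self.dt = 0.0, dt
        self.state = {"water_level": 0.0, "inflow": 1.0}
    def update(self):                       # advance one time step
        self.state["water_level"] += self.state["inflow"] * self.dt
        self.t += self.dt
    def update_until(self, t_end):          # advance to a specified time
        while self.t < t_end:
            self.update()
    def get_value(self, name):              # introspection of state variables
        return self.state[name]
    def set_value(self, name, value):       # feed new boundary conditions
        self.state[name] = value

model = ToyFlowModel()
model.initialize()
model.update_until(1.0)
model.set_value("inflow", 5.0)   # user edits a boundary condition mid-run
model.update_until(2.0)
print(model.t, model.get_value("water_level"))
```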
Rinehart, Joseph; Chung, Elena; Canales, Cecilia; Cannesson, Maxime
2012-10-01
The authors compared the performance of a group of anesthesia providers to closed-loop (Learning Intravenous Resuscitator [LIR]) management in a simulated hemorrhage scenario using cardiac output monitoring. A prospective cohort study. In silico simulation. University hospital anesthesiologists and the LIR closed-loop fluid administration system. Using a patient simulator, a 90-minute simulated hemorrhage protocol was run, which included a 1,200-mL blood loss over 30 minutes. Twenty practicing anesthesiology providers were asked to manage this scenario by providing fluids and vasopressor medication at their discretion. The simulation program was also run 20 times with the LIR closed-loop algorithm managing fluids and an additional 20 times with no intervention. Simulated patient weight, height, heart rate, mean arterial pressure, and cardiac output (CO) were similar at baseline. The mean stroke volume, the mean arterial pressure, CO, and the final CO were higher in the closed-loop group than in the practitioners group, and the coefficient of variation was lower. The closed-loop group received slightly more fluid (2.1 v 1.9 L, p < 0.05) than the anesthesiologist group. Despite the roughly similar volumes of fluid given, the closed-loop system maintained more stable hemodynamics than the practitioners, primarily because the fluid was given earlier in the protocol and CO was optimized before the hemorrhage began, whereas practitioners tended to resuscitate well but only after significant hemodynamic change indicated the need. Overall, these data support the potential usefulness of this closed-loop algorithm in clinical settings in which dynamic predictors are not available or applicable. Published by Elsevier Inc.
Next-Generation Climate Modeling Science Challenges for Simulation, Workflow and Analysis Systems
NASA Astrophysics Data System (ADS)
Koch, D. M.; Anantharaj, V. G.; Bader, D. C.; Krishnan, H.; Leung, L. R.; Ringler, T.; Taylor, M.; Wehner, M. F.; Williams, D. N.
2016-12-01
We will present two examples of current and future high-resolution climate-modeling research that are challenging existing simulation run-time I/O, model-data movement, storage and publishing, and analysis. In each case, we will consider lessons learned as current workflow systems are broken by these large-data science challenges, as well as strategies to repair or rebuild the systems. First we consider the science and workflow challenges posed by the CMIP6 multi-model HighResMIP, involving around a dozen modeling groups performing quarter-degree simulations, in 3-member ensembles for 100 years, with high-frequency (1-6 hourly) diagnostics, which is expected to generate over 4PB of data. An example of science derived from these experiments will be to study how resolution affects the ability of models to capture extreme events such as hurricanes or atmospheric rivers. Expected methods to transfer (using parallel Globus) and analyze (using parallel "TECA" software tools) HighResMIP data for such feature-tracking by the DOE CASCADE project will be presented. A second example will be from the Accelerated Climate Modeling for Energy (ACME) project, which is currently addressing challenges involving multiple century-scale coupled high-resolution (quarter-degree) climate simulations on DOE Leadership Class computers. ACME is anticipating production of over 5PB of data during the next 2 years of simulations, in order to investigate the drivers of water cycle changes, sea-level rise, and carbon cycle evolution. The ACME workflow, from simulation to data transfer, storage, analysis, and publication will be presented. Current and planned methods to accelerate the workflow, including implementing run-time diagnostics and server-side analysis to avoid moving large datasets, will be presented.
The effect of foot strike pattern on Achilles tendon load during running.
Almonroeder, Thomas; Willson, John D; Kernozek, Thomas W
2013-08-01
In this study we compared Achilles tendon loading parameters during barefoot running among females with different foot strike patterns, using open-source computer muscle modeling software to provide dynamic simulations of running. Muscle forces of the gastrocnemius and soleus were estimated from experimental data collected in a motion capture laboratory during barefoot running for 11 runners utilizing a rearfoot strike (RFS) and 8 runners utilizing a non-RFS (NRFS) pattern. Our results show that peak Achilles tendon force occurred earlier in the stance phase (p = 0.007), which contributed to a 15% increase in average Achilles tendon loading rate among participants adopting a NRFS pattern (p = 0.06). Stance time, step length, and the estimated number of steps per mile were similar between groups. However, runners with a NRFS pattern experienced 11% greater Achilles tendon impulse each step (p = 0.05) and a nearly significant increase in Achilles tendon impulse per mile run (p = 0.06). This difference equates to an additional 47.7 body weights for each mile run with a NRFS pattern. Runners considering a NRFS pattern may want to account for these novel stressors and adapt training programs accordingly.
Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy
2006-01-01
This viewgraph presentation reviews the use of Adaptive Mesh Refinement (AMR) in simulating the crustal dynamics of Earth's surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating/running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's surface is spurred by proposed future NASA missions, such as InSAR, for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computational fluid dynamics for predictive simulation of complex flows around complex structures.
Numerical simulation of MPD thruster flows with anomalous transport
NASA Technical Reports Server (NTRS)
Caldo, Giuliano; Choueiri, Edgar Y.; Kelly, Arnold J.; Jahn, Robert G.
1992-01-01
Anomalous transport effects in an Ar self-field coaxial MPD thruster are presently studied by means of a fully 2D two-fluid numerical code; its calculations are extended to a range of typical operating conditions. An effort is made to compare the spatial distribution of the steady state flow and field properties and thruster power-dissipation values for simulation runs with and without anomalous transport. A conductivity law based on the nonlinear saturation of lower hybrid current-driven instability is used for the calculations. Anomalous-transport simulation runs have indicated that the resistivity in specific areas of the discharge is significantly higher than that calculated in classical runs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, Qiang
At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future-proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.
CASL VMA FY16 Milestone Report (L3:VMA.VUQ.P13.07) Westinghouse Mixing with COBRA-TF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, Natalie
2016-09-30
COBRA-TF (CTF) is a low-resolution code currently maintained as CASL's subchannel analysis tool. CTF operates as a two-phase, compressible code over a mesh composed of subchannels and axially discretized nodes. In part because CTF is a low-resolution code, simulation run time is not computationally expensive, only on the order of minutes. High-resolution codes such as STAR-CCM+ can be used to train lower-fidelity codes such as CTF. Unlike STAR-CCM+, CTF has no turbulence model, only a two-phase turbulent mixing coefficient, β. β can be set to a constant value or calculated in terms of Reynolds number using an empirical correlation. Results from STAR-CCM+ can be used to inform the appropriate value of β. Once β is calibrated, CTF runs can be an inexpensive alternative to costly STAR-CCM+ runs for scoping analyses. Based on the results of CTF runs, STAR-CCM+ can then be run for specific parameters of interest. CASL areas of application are CIPS for single-phase analysis and DNB-CTF for two-phase analysis.
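A sketch of the calibration step described above, with synthetic numbers standing in for STAR-CCM+ results: the mixing coefficient β is fit to an assumed power-law correlation in Reynolds number by least squares in log-log space. The correlation form and all data values are assumptions for illustration.

```python
import numpy as np

# Assumed (synthetic) calibration data: effective two-phase turbulent
# mixing coefficients inferred from high-resolution CFD at several
# subchannel Reynolds numbers.
re = np.array([2e4, 5e4, 1e5, 2e5, 5e5])
beta_cfd = np.array([0.011, 0.0085, 0.0070, 0.0058, 0.0045])

# Fit an empirical correlation of the common form beta = a * Re**b
# by linear least squares on the logarithms.
b, log_a = np.polyfit(np.log(re), np.log(beta_cfd), 1)
a = np.exp(log_a)
print(f"beta(Re) ~= {a:.4f} * Re^{b:.3f}")

# A subchannel input deck could then use either this correlation or a
# constant beta evaluated at the operating Reynolds number:
print("beta at Re = 3e5:", a * 3e5**b)
```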
A high-order language for a system of closely coupled processing elements
NASA Technical Reports Server (NTRS)
Feyock, S.; Collins, W. R.
1986-01-01
The research reported in this paper was occasioned by the requirements of the Real-Time Digital Simulator (RTDS) project under way at NASA Lewis Research Center. The RTDS simulation scheme employs a network of CPUs running lock-step cycles in the parallel computation of jet airplane simulations. The need for a high order language (HOL) that would allow non-experts to write simulation applications, and that could be implemented on a possibly varying network, can best be fulfilled by using the programming language Ada. We describe how the simulation problems can be modeled in Ada, how to map a single, multi-processing Ada program into code for individual processors regardless of network reconfiguration, and why some Ada language features are particularly well-suited to network simulations.
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated at every time step. However, Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since there exist data dependencies from the end of one iteration to the beginning of the next, and furthermore data input and data output are required in every sampling period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and assigned to processors by optimal static scheduling at compile time, in order to reduce the large run time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to extract the advantageous features of static scheduling algorithms to the maximum extent.
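The sketch below illustrates compile-time static scheduling of near-fine-grain tasks with a simple list-scheduling heuristic (each task goes to the processor that can start it earliest). OSCAR's actual scheduling algorithms are more sophisticated; the task graph and costs here are invented.

```python
def list_schedule(tasks, deps, cost, n_proc):
    """Greedy list scheduling of a precedence-ordered task list."""
    finish = {}                       # task -> finish time
    proc_free = [0.0] * n_proc        # next free time per processor
    schedule = {}
    for task in tasks:                # assumed topologically sorted
        ready = max((finish[d] for d in deps.get(task, [])), default=0.0)
        p = min(range(n_proc), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[p], ready)
        finish[task] = start + cost[task]
        proc_free[p] = finish[task]
        schedule[task] = (p, start)
    return schedule, max(finish.values())

tasks = ["a", "b", "c", "d"]               # e.g. floating point operations
deps = {"c": ["a", "b"], "d": ["c"]}       # data dependencies
cost = {"a": 1.0, "b": 1.0, "c": 2.0, "d": 1.0}
sched, makespan = list_schedule(tasks, deps, cost, n_proc=2)
print(sched, makespan)
```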
HYDRA : High-speed simulation architecture for precision spacecraft formation simulation
NASA Technical Reports Server (NTRS)
Martin, Bryan J.; Sohl, Garett.
2003-01-01
HYDRA (the Hierarchical Distributed Reconfigurable Architecture) is a scalable simulation architecture that provides flexibility and ease of use while taking advantage of modern computation and communication hardware. It also provides the ability to implement distributed (workstation-based) simulations and high-fidelity real-time simulation from a common core. Originally designed to serve as a research platform for examining fundamental challenges in formation flying simulation for future space missions, it is also finding use in other missions and applications, all of which can take advantage of the underlying object-oriented structure to easily produce distributed simulations. Hydra automates the process of connecting disparate simulation components (Hydra Clients) through a client-server architecture that uses high-level descriptions of the data associated with each client to find and forge desirable connections (Hydra Services) at run time. Services communicate through the use of Connectors, which abstract messaging to provide single-interface access to any desired communication protocol, ranging from shared-memory message passing to TCP/IP, ACE, and CORBA. Hydra shares many features with the HLA, while providing more flexibility in connectivity services and behavior overriding.
Feedback control design for non-inductively sustained scenarios in NSTX-U using TRANSP
NASA Astrophysics Data System (ADS)
Boyer, M. D.; Andre, R. G.; Gates, D. A.; Gerhardt, S. P.; Menard, J. E.; Poli, F. M.
2017-06-01
This paper examines a method for real-time control of non-inductively sustained scenarios in NSTX-U by using TRANSP, a time-dependent integrated modeling code for prediction and interpretive analysis of tokamak experimental data, as a simulator. The actuators considered for control in this work are the six neutral beam sources and the plasma boundary shape. To understand the response of the plasma current, stored energy, and central safety factor to these actuators and to enable systematic design of control algorithms, simulations were run in which the actuators were modulated and a linearized dynamic response model was generated. A multi-variable model-based control scheme that accounts for the coupling and slow dynamics of the system while mitigating the effect of actuator limitations was designed and simulated. Simulations show that modest changes in the outer gap and heating power can improve the response time of the system, reject perturbations, and track target values of the controlled values.
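A minimal sketch of the modulate-then-linearize step described above: an actuator is modulated, a first-order discrete-time response model is fit by least squares, and the fitted model yields a steady-state actuation estimate for a target output. All signals and coefficients below are synthetic stand-ins for TRANSP predictions, not results from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
u = (rng.random(n) > 0.5).astype(float)      # modulated actuator waveform
y = np.zeros(n)
for k in range(n - 1):                        # "true" plant: a=0.95, b=0.1
    y[k + 1] = 0.95 * y[k] + 0.1 * u[k] + 0.002 * rng.standard_normal()

# Fit y[k+1] = a*y[k] + b*u[k] by linear least squares.
X = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(f"identified a = {a_hat:.3f}, b = {b_hat:.3f}")

# The fitted (a, b) can feed a model-based design; e.g. the steady-state
# actuation needed to hold a target output: u_ss = (1 - a) * y_target / b.
y_target = 1.5
print("steady-state actuation estimate:", (1 - a_hat) * y_target / b_hat)
```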
Integrating Multibody Simulation and CFD: toward Complex Multidisciplinary Design Optimization
NASA Astrophysics Data System (ADS)
Pieri, Stefano; Poloni, Carlo; Mühlmeier, Martin
This paper describes the use of integrated multidisciplinary analysis and optimization of a race car model on a predefined circuit. The objective is the definition of the most efficient geometric configuration that can guarantee the lowest lap time. In order to carry out this study it has been necessary to interface the design optimization software modeFRONTIER with the following software packages: CATIA v5, a three-dimensional CAD package, used for the definition of the parametric geometry; A.D.A.M.S./Motorsport, a multi-body dynamics simulation package; IcemCFD, a mesh generator, for the automatic generation of the CFD grid; and CFX, a Navier-Stokes code, for the prediction of fluid-dynamic forces. The process integration makes it possible to compute, for each geometrical configuration, a set of aerodynamic coefficients that are then used in the multibody simulation for the computation of the lap time. Finally, an automatic optimization procedure is started and the lap time is minimized. The whole process is executed on a Linux cluster running the CFD simulations in parallel.
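The sketch below mimics the integration pattern described above with toy stand-ins: each function represents an external tool in the chain (CFD for aerodynamic coefficients, multibody simulation for lap time), and a simple random-search driver plays the role of modeFRONTIER. All models and constants are invented for illustration.

```python
import random

def cfd_aero_coeffs(wing_angle):
    """Pretend CFD run: returns (downforce, drag) for a wing angle [deg];
    in the real flow this would invoke IcemCFD + CFX on a cluster node."""
    cl = 0.08 * wing_angle
    cd = 0.010 + 0.001 * wing_angle**1.5
    return cl, cd

def lap_time(cl, cd):
    """Pretend multibody simulation: downforce helps cornering,
    drag hurts the straights."""
    return 90.0 - 4.0 * cl + 120.0 * cd

def evaluate(wing_angle):
    return lap_time(*cfd_aero_coeffs(wing_angle))

# Random-search driver standing in for the optimizer; each sample is an
# independent "CFD run" that could be dispatched in parallel.
best = min((random.uniform(0.0, 10.0) for _ in range(200)), key=evaluate)
print(f"best wing angle {best:.2f} deg, lap time {evaluate(best):.2f} s")
```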
NASA Astrophysics Data System (ADS)
Vogel, H.; Förstner, J.; Vogel, B.; Hanisch, T.; Mühr, B.; Schättler, U.; Schad, T.
2014-08-01
An extended version of the German operational weather forecast model was used to simulate the ash dispersion during the eruption of Eyjafjallajökull. As an operational forecast was launched every 6 hours, a time-lagged ensemble was obtained. Sensitivity runs show the ability of the model to simulate thin ash layers when an increased vertical resolution is used. Calibration of the model results with measured data allows for a quantitative forecast of the ash concentration. After this calibration, an independent comparison of the simulated number concentration of 3 μm particles and observations at Hohenpeißenberg gives a correlation coefficient of 0.79. However, this agreement could only be reached after additional modifications of the emissions. Based on the time-lagged ensemble, the conditional probability of violating a certain threshold is calculated. With improvements to the ensemble technique used in our study, such probabilities could become valuable information for the forecasters advising the organizations responsible for closing the airspace.
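Once the time-lagged ensemble is assembled (one member per 6-hourly forecast launch), the conditional exceedance probability described above reduces to a member count. A minimal sketch, with an assumed threshold and synthetic member values:

```python
import numpy as np

threshold = 2.0  # mg/m^3; an assumed limit, not the paper's value

# members: shape (n_members, n_times); synthetic stand-in concentrations
members = np.array([[0.5, 1.2, 2.8, 3.1],
                    [0.7, 1.9, 2.2, 2.5],
                    [0.4, 0.9, 1.7, 2.6],
                    [0.6, 1.4, 2.5, 1.9]])

# Probability of exceeding the threshold = fraction of members above it.
p_exceed = (members > threshold).mean(axis=0)
print("P(concentration > threshold) per forecast time:", p_exceed)
```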
Pasta nucleosynthesis: Molecular dynamics simulations of nuclear statistical equilibrium
NASA Astrophysics Data System (ADS)
Caplan, M. E.; Schneider, A. S.; Horowitz, C. J.; Berry, D. K.
2015-06-01
Background: Exotic nonspherical nuclear pasta shapes are expected in nuclear matter just below saturation density because of competition between short-range nuclear attraction and long-range Coulomb repulsion. Purpose: We explore the impact nuclear pasta may have on nucleosynthesis during neutron star mergers, when cold dense nuclear matter is ejected and decompressed. Methods: We use a hybrid CPU/GPU molecular dynamics (MD) code to perform decompression simulations of cold dense matter with 51,200 and 409,600 nucleons, from 0.080 fm-3 down to 0.00125 fm-3. Simulations are run for proton fractions YP = 0.05, 0.10, 0.20, 0.30, and 0.40 at temperatures T = 0.5, 0.75, and 1.0 MeV. The final composition of each simulation is obtained using a cluster algorithm and compared to a constant density run. Results: The sizes of nuclei in the final state of the decompression runs are in good agreement with nuclear statistical equilibrium (NSE) models at temperatures of 1 MeV, while constant density runs produce nuclei smaller than the ones obtained with NSE. Our MD simulations produce unphysical results, with large rod-like nuclei in the final state of the T = 0.5 MeV runs. Conclusions: Our MD model is valid at higher densities than simple nuclear statistical equilibrium models and may help determine the initial temperatures and proton fractions of matter ejected in mergers.
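A sketch of the kind of cluster algorithm mentioned above: nucleons within a cutoff distance are linked (friends-of-friends with periodic boundaries) and connected components are reported as nuclei. The cutoff radius and positions are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def find_cluster_sizes(positions, box, r_cut):
    """Return cluster sizes (mass numbers) via friends-of-friends."""
    tree = cKDTree(positions, boxsize=box)     # periodic boundaries
    pairs = tree.query_pairs(r_cut)
    parent = list(range(len(positions)))       # union-find over links
    def root(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i
    for i, j in pairs:
        parent[root(i)] = root(j)
    labels = [root(i) for i in range(len(positions))]
    _, sizes = np.unique(labels, return_counts=True)
    return sizes

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 100.0, size=(5000, 3))  # nucleon positions [fm]
sizes = find_cluster_sizes(pos, box=100.0, r_cut=2.5)
print("largest cluster (mass number):", sizes.max())
```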
A static data flow simulation study at Ames Research Center
NASA Technical Reports Server (NTRS)
Barszcz, Eric; Howard, Lauri S.
1987-01-01
Demands for computational power, particularly in the area of computational fluid dynamics (CFD), led NASA Ames Research Center to study advanced computer architectures. One architecture being studied is the static data flow architecture based on research done by Jack B. Dennis at MIT. To improve understanding of this architecture, a static data flow simulator, written in Pascal, has been implemented for use on a Cray X-MP/48. A matrix multiply and a two-dimensional fast Fourier transform (FFT), two algorithms used in CFD work at Ames, have been run on the simulator. Execution times can vary by a factor of more than 2 depending on the partitioning method used to assign instructions to processing elements. Service time for matching tokens has proved to be a major bottleneck. Loop control and array address calculation overhead can double the execution time. The best sustained MFLOPS rates were less than 50% of the maximum capability of the machine.
Parallel network simulations with NEURON.
Migliore, M; Cannia, C; Lytton, W W; Markram, Henry; Hines, M L
2006-10-01
The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2,000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored.
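A serial toy sketch of the parallel scheme described above: each subnet advances independently over one minimum-connection-delay interval, after which the spikes generated in that interval are exchanged. On a real cluster the exchange is an MPI collective and the subnets hold actual model cells; here random spike generators stand in.

```python
import random

MIN_DELAY = 2.0   # ms; minimum presynaptic-to-postsynaptic connection delay
T_STOP = 20.0

class Subnet:
    def __init__(self, rank):
        self.rank, self.pending = rank, []
    def integrate(self, t0, t1):
        """Advance local cells from t0 to t1; return spikes (gid, time)."""
        return [(self.rank, t0 + random.uniform(0.0, t1 - t0))
                for _ in range(random.randint(0, 2))]
    def deliver(self, spikes):
        self.pending.extend(spikes)   # queue events for later delivery

subnets = [Subnet(r) for r in range(4)]
t = 0.0
while t < T_STOP:
    all_spikes = []
    for s in subnets:                  # runs in parallel on a real cluster
        all_spikes += s.integrate(t, t + MIN_DELAY)
    for s in subnets:                  # the allgather-style spike exchange
        s.deliver(all_spikes)
    t += MIN_DELAY
print("events exchanged:", sum(len(s.pending) for s in subnets))
```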
CBP Toolbox Version 3.0 “Beta Testing” Performance Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, III, F. G.
2016-07-29
One function of the Cementitious Barriers Partnership (CBP) is to assess available models of cement degradation and to assemble suitable models into a “Toolbox” that would be made available to members of the partnership, as well as the DOE Complex. To this end, SRNL and Vanderbilt University collaborated to develop an interface, using the GoldSim software, to the STADIUM code developed by SIMCO Technologies, Inc. and LeachXS/ORCHESTRA developed by the Energy research Centre of the Netherlands (ECN). Release of Version 3.0 of the CBP Toolbox is planned in the near future. As a part of this release, an increased level of quality assurance for the partner codes and the GoldSim interface has been developed. This report documents results from evaluation testing of the ability of CBP Toolbox 3.0 to perform simulations of concrete degradation applicable to performance assessment of waste disposal facilities. Simulations of the behavior of Savannah River Saltstone Vault 2 and Vault 1/4 concrete subject to sulfate attack and carbonation over a 500- to 1000-year time period were run using a new and upgraded version of the STADIUM code and the version of LeachXS/ORCHESTRA released in Version 2.0 of the CBP Toolbox. Running both codes allowed comparison of results from two models which take very different approaches to simulating cement degradation. In addition, simulations of chloride attack on the two concretes were made using the STADIUM code. The evaluation sought to demonstrate that: 1) the codes are capable of running extended realistic simulations in a reasonable amount of time; 2) the codes produce “reasonable” results (the code developers have provided validation test results as part of their code QA documentation); and 3) the two codes produce results that are consistent with one another. Results of the evaluation testing showed that the three criteria listed above were met by the CBP partner codes. Therefore, it is concluded that the codes can be used to support performance assessment. This conclusion takes into account the QA documentation produced for the partner codes and for the CBP Toolbox.
CFD simulation research on residential indoor air quality.
Yang, Li; Ye, Miao; He, Bao-Jie
2014-02-15
Nowadays people depend heavily on air conditioning to create a comfortable indoor environment, but it can cause health problems in the long run. In this paper, the wind velocity field, temperature field, and air age field in a bedroom with a wall-hanging air conditioner running in summer are analyzed by CFD numerical simulation. The results show that a wall-hanging air conditioning system can handle the indoor heat load and provide good indoor thermal comfort. In terms of wind velocity, the air speed in the activity area where people sit and stand is moderate; most occupants cannot feel the air flow, and it meets the summer indoor wind comfort requirement. For air quality, however, there are local areas without ventilation where toxic gases are not discharged in time, so it is necessary to take effective measures to improve air quality. Compared with traditional measurement methods, CFD software has many advantages in simulating indoor environments, and it holds promise for creating more comfortable and healthy living environments in the future. Copyright © 2013 Elsevier B.V. All rights reserved.
Anchoring quartet-based phylogenetic distances and applications to species tree reconstruction.
Sayyari, Erfan; Mirarab, Siavash
2016-11-11
Inferring species trees from gene trees using coalescent-based summary methods has been the subject of much attention, yet new scalable and accurate methods are needed. We introduce DISTIQUE, a new statistically consistent summary method for inferring species trees from gene trees under the coalescent model. We generalize our results to arbitrary phylogenetic inference problems; we show that two arbitrarily chosen leaves, called anchors, can be used to estimate relative distances between all other pairs of leaves by inferring relevant quartet trees. This results in a family of distance-based tree inference methods, with running times ranging from quadratic to quartic in the number of leaves. We show in simulation studies that DISTIQUE has accuracy comparable to leading coalescent-based summary methods and reduced running times.
Low-cost optical data acquisition system for blade vibration measurement
NASA Technical Reports Server (NTRS)
Posta, Stephen J.
1988-01-01
A low cost optical data acquisition system was designed to measure deflection of vibrating rotor blade tips. The basic principle of the new design is to record raw data, which is a set of blade arrival times, in memory and to perform all processing by software following a run. This approach yields a simple and inexpensive system with the least possible hardware. Functional elements of the system were breadboarded and operated satisfactorily during rotor simulations on the bench, and during a data collection run with a two-bladed rotor in the Lewis Research Center Spin Rig. Software was written to demonstrate the sorting and processing of data stored in the system control computer, after retrieval from the data acquisition system. The demonstration produced an accurate graphical display of deflection versus time.
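A sketch of the post-run processing idea described above, under assumed rotor parameters: each blade's deflection follows from the difference between its measured arrival time at the optical probe and the arrival time expected for a rigid rotor, multiplied by the tip speed. All numbers are invented for illustration.

```python
import math

rpm = 3000.0
radius = 0.25                                # rotor tip radius [m] (assumed)
omega = rpm * 2.0 * math.pi / 60.0           # angular speed [rad/s]
tip_speed = omega * radius                   # blade tip speed [m/s]

n_blades = 2
rev_period = 2.0 * math.pi / omega
# Expected arrival times for a rigid rotor, evenly spaced per revolution.
expected = [i * rev_period / n_blades for i in range(n_blades)]

measured = [0.0000031, 0.0100057]            # raw arrival times [s] (assumed)
# Deflection = (measured - expected arrival time) * tip speed.
deflection = [(tm - te) * tip_speed for tm, te in zip(measured, expected)]
print(["%.3f mm" % (d * 1e3) for d in deflection])
```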
Implementing Parquet equations using HPX
NASA Astrophysics Data System (ADS)
Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark
A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevancy of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from stability of the solutions, necessitate running on large distributed systems. As the computational resources trend towards the exascale and the limitations arising from computational resources vanish efficiency of large scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations which require the transfer of large amounts of data should benefit from HPX implementations. Supported by the the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Schlicher, Bob G
Vulnerability in the security of an information system is quantitatively predicted. The information system may receive malicious actions against its security and may receive corrective actions for restoring the security. A game-oriented agent-based model (ABM) is constructed in a simulator application. The game ABM represents security activity in the information system. It has two opposing participants, an attacker and a defender, probabilistic game rules, and allowable game states. A specified number of simulations are run, and a probabilistic subset of the allowable game states is reached in each simulation run. The probability of reaching a specified game state is unknown prior to running each simulation. Data generated during the game states is collected to determine the probability of one or more aspects of security in the information system.
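A minimal sketch of the game-oriented ABM Monte Carlo described above, with invented rules and probabilities: the attacker and defender act probabilistically each round, many games are simulated, and the probability of reaching the "compromised" state is estimated from the outcome frequencies.

```python
import random

STATES = ["secure", "degraded", "compromised"]

def play_game(p_attack_success=0.3, p_defend_success=0.5, rounds=20):
    """One simulated game; all probabilities are assumptions."""
    state = 0                                   # index into STATES
    for _ in range(rounds):
        if random.random() < p_attack_success:  # malicious action
            state = min(state + 1, 2)
        if random.random() < p_defend_success:  # corrective action
            state = max(state - 1, 0)
        if state == 2:
            return "compromised"
    return STATES[state]

runs = 10000
outcomes = [play_game() for _ in range(runs)]
print("P(compromised) ~=", outcomes.count("compromised") / runs)
```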
40 CFR 86.134-96 - Running loss test.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Running loss test. 86.134-96 Section ... Heavy-Duty Vehicles; Test Procedures § 86.134-96 Running loss test. (a) Overview. Gasoline- and methanol-fueled vehicles are to be tested for running loss emissions during simulated high-temperature urban...
NASA Astrophysics Data System (ADS)
Warrier, M.; Bhardwaj, U.; Hemani, H.; Schneider, R.; Mutzke, A.; Valsakumar, M. C.
2015-12-01
We report on molecular dynamics (MD) simulations carried out in fcc Cu and bcc W using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code to study (i) the statistical variations in the number of interstitials and vacancies produced by energetic primary knock-on atoms (PKA) (0.1-5 keV) directed in random directions and (ii) the in-cascade cluster size distributions. It is seen that around 60-80 random directions have to be explored for the average number of displaced atoms to become steady in the case of fcc Cu, whereas for bcc W around 50-60 random directions need to be explored. The number of Frenkel pairs produced in the MD simulations is compared with that from the Binary Collision Approximation Monte Carlo (BCA-MC) code SDTRIM-SP and with results from the NRT model. It is seen that a proper choice of the damage energy, i.e. the energy required to create a stable interstitial, is essential for the BCA-MC results to match the MD results. On the computational front, it is seen that in-situ processing removes the need for input/output (I/O) of several terabytes of atomic position data when exploring a large number of random directions, with no difference in run time because the extra run time spent processing data is offset by the time saved in I/O.
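A sketch of the convergence test implied above, with synthetic per-cascade defect counts standing in for MD results: cascades in random PKA directions are accumulated until the running mean of the defect count stabilizes. The distribution, tolerance, and window are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def defect_count():
    # Assumed spread of displaced-atom counts per cascade direction.
    return rng.poisson(30)

counts, means = [], []
for k in range(1, 201):
    counts.append(defect_count())
    means.append(np.mean(counts))
    # Declare "steady" when the running mean moved < 0.25 over 10 draws.
    if k >= 20 and abs(means[-1] - means[-11]) < 0.25:
        break
print(f"stopped after {k} random directions; running mean {means[-1]:.1f}")
```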
Predictable turn-around time for post tape-out flow
NASA Astrophysics Data System (ADS)
Endo, Toshikazu; Park, Minyoung; Ghosh, Pradiptya
2012-03-01
A typical post-tape-out data path at an IC fabrication facility has the following major software-processing components: Boolean operations before the application of resolution enhancement techniques (RET) and optical proximity correction (OPC); the RET and OPC step [etch retargeting, sub-resolution assist feature (SRAF) insertion, and OPC]; post-OPC/RET Boolean operations; and sometimes, in the same flow, simulation-based verification. There are two objectives that an IC fabrication tapeout flow manager wants to achieve with the flow - predictable completion time and the fastest turn-around time (TAT) - and at times they may be competing. There have been studies in the literature modeling the turnaround time from historical data for runs with the same recipe and later using that model to derive the resource allocation for subsequent runs [3]. This approach is more feasible in predominantly simulation-dominated tools, but for an edge-operation-dominated flow it may not be possible, especially if processing acceleration methods like pattern matching or hierarchical processing are involved. In this paper, we suggest an alternative method of providing a target turnaround time and managing the priority of jobs without any upfront resource modeling and resource planning. The methodology then either systematically meets the turnaround time target or lets the user know as soon as possible that it will not. This builds on top of the Calibre Cluster Management (CalCM) resource management work previously published [1][2]. The paper describes the initial demonstration of the concept.
Simulating the storage part of an application with SimGrid
NASA Astrophysics Data System (ADS)
Wang, Cong
2017-10-01
We describe the design of a file system simulation and visualization system that uses the SimGrid API and visualization techniques to help users understand and improve the file system portion of their applications. The core of the simulator is the API provided by SimGrid; cluefs traces and captures the application's I/O operations. Running the simulator on this trace generates an output visualization file, which shows the proportion of I/O actions and their time series. Users can change parameters of the storage system in the configuration file, such as read and write bandwidth, and can also adjust the storage strategy and test its performance, making it much easier to obtain a reference point for optimizing the storage system. We have tested all aspects of the simulator, and the results suggest that its predictions are credible.
Innovative Techniques to Predict Atmospheric Effects on Sensor Performance
2009-10-15
since acquiring the MRO data, extensive tabulation of all of the data from all visible satellites (generally, non-resolved) was also accomplished... efficient code has been written to run multiple OSC simulations in less time. Data from many passes of the same satellite is useful for SOI, whether it is... the data analyzed. Questions about the data were resolved using OSC to determine solar phase angle (SPA), range, time of penumbra entrance/exit and
NASA Astrophysics Data System (ADS)
Rohmer, Jeremy; Idier, Deborah; Bulteau, Thomas; Paris, François
2016-04-01
From a risk management perspective, it can be of high interest to identify the critical set of offshore conditions that lead to inundation of key assets for the studied territory (e.g., assembly points, evacuation routes, hospitals, etc.). This inverse approach to risk assessment (Idier et al., NHESS, 2013) can be of primary importance either for the estimation of the coastal flood hazard return period or for constraining early warning networks based on hydro-meteorological forecasts or observations. However, full process-based models for coastal flooding simulation have very large computational cost (typically several hours per run), which often limits the analysis to a few scenarios. Recently, it has been shown that meta-modelling approaches can efficiently handle this difficulty (e.g., Rohmer & Idier, NHESS, 2012). Yet, the full process-based models are expected to present strong non-linearities (non-regularities) or shocks (discontinuities), i.e. dynamics controlled by thresholds. For instance, in the case of coastal defenses, the dynamics is characterized first by a linear behavior of the waterline position (increasing with increasing offshore conditions), as long as there is no overtopping, and then by a very strong increase (as soon as the offshore conditions are energetic enough to lead to wave overtopping, and then overflow). Such behavior might make the training phase of the meta-model very tedious. In the present study, we explore the feasibility of active learning techniques, aka semi-supervised machine learning, to track the set of critical conditions with a reduced number of long-running simulations. The basic idea relies on identifying the simulation scenarios that should both reduce the meta-model error and improve the prediction of the critical contour of interest. To overcome the aforementioned difficulty related to non-regularity, we rely on Support Vector Machines, which have shown very high performance for structural reliability assessment. The developments are done on a cross-shore case, using the process-based SWASH model; the computational time is 10 hours for a single run. The dynamic forcing conditions are parametrized by several factors (storm surge S, significant wave height Hs, dephasing between tide and surge, etc.). In particular, we validated the approach against a reference set of 400 long-running simulations in the (S, Hs) domain. Our tests showed that the critical contour can be tracked with a reasonable number of long-running simulations, on the order of a few tens.
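A sketch of the active-learning loop described above, using scikit-learn's SVC: a cheap analytic flood criterion stands in for a 10-hour SWASH run, and each new "simulation" is spent at the pool point closest to the current SVM decision boundary. Everything here (criterion, domain, sample counts) is an assumption for illustration, not the study's setup.

```python
import numpy as np
from sklearn.svm import SVC

def run_simulation(x):
    # Hypothetical flood criterion in the (surge S, wave height Hs) plane,
    # standing in for the expensive process-based model.
    return 1 if x[0] + 0.4 * x[1] > 2.0 else 0

rng = np.random.default_rng(1)
pool = rng.uniform([0, 0], [3, 6], size=(2000, 2))   # candidate scenarios

idx = [int(i) for i in rng.choice(len(pool), 10, replace=False)]
y = [run_simulation(pool[i]) for i in idx]
while len(set(y)) < 2:                               # ensure both classes
    j = int(rng.integers(len(pool)))
    idx.append(j)
    y.append(run_simulation(pool[j]))

for _ in range(30):                                  # 30 more "model runs"
    clf = SVC(kernel="rbf", C=100.0).fit(pool[idx], y)
    margin = np.abs(clf.decision_function(pool))
    margin[idx] = np.inf                             # don't resample
    j = int(np.argmin(margin))                       # most ambiguous point
    idx.append(j)
    y.append(run_simulation(pool[j]))

clf = SVC(kernel="rbf", C=100.0).fit(pool[idx], y)   # final refit
acc = (clf.predict(pool) == [run_simulation(x) for x in pool]).mean()
print(f"boundary recovered with {len(idx)} runs; pool accuracy {acc:.2%}")
```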
The application of connectionism to query planning/scheduling in intelligent user interfaces
NASA Technical Reports Server (NTRS)
Short, Nicholas, Jr.; Shastri, Lokendra
1990-01-01
In the mid-nineties, the Earth Observing System (EOS) will generate an estimated 10 terabytes of data per day. This enormous amount of data will require the use of sophisticated technologies from real-time distributed Artificial Intelligence (AI) and data management. Setting aside the overall problems in distributed AI, efficient models were developed for query planning and/or scheduling in intelligent user interfaces that reside in a network environment. Before intelligent query planning can be done, a model for real-time AI planning and/or scheduling must be developed. As Connectionist Models (CM) have shown promise in improving run times, a connectionist approach to AI planning and/or scheduling is proposed. The solution involves merging a CM rule-based system with a general spreading activation model for the generation and selection of plans. The system was implemented in the Rochester Connectionist Simulator and runs on a Sun 3/260.
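A minimal sketch of spreading activation over a toy plan graph, the mechanism named above: activation injected at a goal node propagates along weighted edges for a few cycles, and the most activated candidate plan is selected. The graph, weights, and decay are invented for illustration.

```python
def spread(graph, activation, decay=0.8, cycles=5):
    """Propagate activation along weighted edges with per-cycle decay."""
    for _ in range(cycles):
        incoming = {n: 0.0 for n in graph}
        for src, edges in graph.items():
            for dst, w in edges:
                incoming[dst] += activation[src] * w
        activation = {n: decay * activation[n] + incoming[n] for n in graph}
    return activation

graph = {   # nodes: a query goal and two candidate plans (hypothetical)
    "goal:find_scene": [("plan:spatial_index", 0.9), ("plan:scan_all", 0.3)],
    "plan:spatial_index": [],
    "plan:scan_all": [],
}
act = spread(graph, {"goal:find_scene": 1.0,
                     "plan:spatial_index": 0.0, "plan:scan_all": 0.0})
best = max(("plan:spatial_index", "plan:scan_all"), key=act.get)
print(best, act)
```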
A new synoptic scale resolving global climate simulation using the Community Earth System Model
NASA Astrophysics Data System (ADS)
Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana
2014-12-01
High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was at 0.25° grid spacing, and ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El-Niño Southern Oscillation variability were well simulated compared to standard resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and Tropical Cyclones. Associated single component runs and standard resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, costing 250 thousand processor-hours per simulated year and made about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."
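The quoted cost figures can be cross-checked with a few lines of arithmetic: 250,000 processor-hours per simulated year spread across 23,404 cores implies roughly 10.7 wall-clock hours per simulated year, consistent with the reported throughput of about two simulated years per day.

```python
cores = 23404
proc_hours_per_sim_year = 250_000

wall_hours_per_sim_year = proc_hours_per_sim_year / cores   # ~10.7 h
sim_years_per_day = 24.0 / wall_hours_per_sim_year          # ~2.2
print(f"{wall_hours_per_sim_year:.1f} wall-clock hours per simulated year")
print(f"{sim_years_per_day:.2f} simulated years per day")
```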
How well do the GCMs replicate the historical precipitation variability in the Colorado River Basin?
NASA Astrophysics Data System (ADS)
Guentchev, G.; Barsugli, J. J.; Eischeid, J.; Raff, D. A.; Brekke, L.
2009-12-01
Observed precipitation variability measures are compared to measures obtained using the World Climate Research Programme (WCRP) Coupled Model Intercomparison Project (CMIP3) General Circulation Model (GCM) data from 36 model projections downscaled by Brekke et al. (2007) and 30 model projections downscaled by Jon Eischeid. Three groups of variability measures are considered in this historical period (1951-1999) comparison: a) basic variability measures, such as the standard deviation and the interdecadal standard deviation; b) exceedance probability values, i.e., the 10% (extreme wet years) and 90% (extreme dry years) exceedance probability values of series of n-year running mean annual amounts, where n = 1, ..., 12, and the 10% exceedance probability values of annual maximum monthly precipitation (extreme wet months); and c) runs variability measures, e.g., the frequency of negative and positive runs of annual precipitation amounts and the total number of negative and positive runs. Two gridded precipitation data sets produced from observations are used: the Maurer et al. (2002) data set and the Daly et al. (1994) Precipitation Regression on Independent Slopes Method (PRISM) data set. The data consist of monthly grid-point precipitation averaged on a United States Geological Survey (USGS) hydrological sub-region scale. The statistical significance of the obtained model-minus-observed measure differences is assessed using a block bootstrapping approach. The analyses were performed on annual, seasonal, and monthly scales. The results indicate that the interdecadal standard deviation is generally underestimated on all time scales by the downscaled model data. The differences are statistically significant at the 0.05 significance level for several Lower Colorado Basin sub-regions on the annual and seasonal scales, and for several sub-regions located mostly in the Upper Colorado River Basin for the months of March, June, July, and November. Although the models simulate drier extreme wet years, wetter extreme dry years, and drier extreme wet months for the Upper Colorado basin, the differences are mostly not significant. Exceptions are the results for the extreme wet years for n=3 for sub-region White-Yampa, for n=6, 7, and 8 for sub-region Upper Colorado-Dolores, and for the extreme dry years for n=11 for sub-region Great Divide-Upper Green. None of the results for the sub-regions in the Lower Colorado Basin were significant. For most of the Upper Colorado sub-regions the models simulate a significantly lower frequency of negative and positive 4-6 year runs, while for several sub-regions a significantly higher frequency of 2-year negative runs is evident in the model versus Maurer data comparisons. The model projections versus PRISM data comparison reveals similar results for the negative runs, while for the positive runs the results indicate that the models simulate a higher frequency of the 2-6 year runs. The results for the Lower Colorado basin sub-regions are similar, in general, to those for the Upper Colorado sub-regions.
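A sketch of the exceedance-probability measures described above, on synthetic annual precipitation: n-year running means are formed and their empirical 10% (extreme wet) and 90% (extreme dry) exceedance values are taken. The data generator and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
annual = rng.gamma(shape=8.0, scale=50.0, size=49)   # 1951-1999 [mm], synthetic

def running_mean(x, n):
    return np.convolve(x, np.ones(n) / n, mode="valid")

for n in (1, 5, 12):
    rm = running_mean(annual, n)
    wet = np.quantile(rm, 0.90)   # value exceeded 10% of the time (extreme wet)
    dry = np.quantile(rm, 0.10)   # value exceeded 90% of the time (extreme dry)
    print(f"n={n:2d}: 10% exceedance {wet:6.1f} mm, 90% exceedance {dry:6.1f} mm")
```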
Challenges in Visual Analysis of Ensembles
Crossno, Patricia
2018-04-12
Modeling physical phenomena through computational simulation increasingly relies on generating a collection of related runs, known as an ensemble. In this paper, we explore the challenges we face in developing analysis and visualization systems for large and complex ensemble data sets, which we seek to understand without having to view the results of every simulation run. Implementing approaches and ideas developed in response to this goal, we demonstrate the analysis of a 15,000-run material fracturing study using Slycat, our ensemble analysis system.