Validation of Supersonic Film Cooling Modeling for Liquid Rocket Engine Applications
NASA Technical Reports Server (NTRS)
Morris, Christopher I.; Ruf, Joseph H.
2010-01-01
Topics include: upper stage engine key requirements and design drivers; Calspan "stage 1" results, He slot injection into hypersonic flow (air); test articles for shock generator diagram, slot injector details, and instrumentation positions; test conditions; modeling approach; 2-d grid used for film cooling simulations of test article; heat flux profiles from 2-d flat plate simulations (run #4); heat flux profiles from 2-d backward facing step simulations (run #43); isometric sketch of single coolant nozzle, and x-z grid of half-nozzle domain; comparison of 2-d and 3-d simulations of coolant nozzles (run #45); flowfield properties along coolant nozzle centerline (run #45); comparison of 3-d CFD nozzle flow calculations with experimental data; nozzle exit plane reduced to linear profile for use in 2-d film-cooling simulations (run #45); synthetic Schlieren image of coolant injection region (run #45); axial velocity profiles from 2-d film-cooling simulation (run #45); coolant mass fraction profiles from 2-d film-cooling simulation (run #45); heat flux profiles from 2-d film cooling simulations (run #45); heat flux profiles from 2-d film cooling simulations (runs #47, #45, and #47); 3-d grid used for film cooling simulations of test article; heat flux contours from 3-d film-cooling simulation (run #45); and heat flux profiles from 3-d and 2-d film cooling simulations (runs #44, #46, and #47).
GEANT4 distributed computing for compact clusters
NASA Astrophysics Data System (ADS)
Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.
2014-11-01
A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User-designed 'work tickets' are distributed to clients using a client-server workflow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 for large discrete data sets such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions or simply speeding the throughput of a single model.
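The client-server "work ticket" pattern described above can be illustrated outside of GEANT4 with a few lines of Python; the ticket fields, queue layout, and run_simulation stub below are hypothetical stand-ins for illustration, not the g4DistributedRunManager API, which is a C++ class integrated with the toolkit.

    # Minimal sketch of a client-server "work ticket" distribution scheme,
    # analogous in spirit to (but not the API of) g4DistributedRunManager.
    # Ticket fields and the run_simulation() stub are hypothetical.
    import multiprocessing as mp

    def run_simulation(ticket):
        # Stand-in for one simulation run configured by the ticket parameters.
        seed, n_events = ticket["seed"], ticket["n_events"]
        return {"seed": seed, "n_events": n_events, "status": "done"}

    def worker(ticket_queue, result_queue):
        while True:
            ticket = ticket_queue.get()
            if ticket is None:          # sentinel: no more work
                break
            result_queue.put(run_simulation(ticket))

    if __name__ == "__main__":
        tickets = [{"seed": s, "n_events": 10000} for s in range(8)]
        ticket_q, result_q = mp.Queue(), mp.Queue()
        workers = [mp.Process(target=worker, args=(ticket_q, result_q)) for _ in range(4)]
        for w in workers:
            w.start()
        for t in tickets:
            ticket_q.put(t)
        for _ in workers:               # one sentinel per worker
            ticket_q.put(None)
        results = [result_q.get() for _ in tickets]
        for w in workers:
            w.join()
        print(len(results), "runs completed")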
Statistical Emulator for Expensive Classification Simulators
NASA Technical Reports Server (NTRS)
Ross, Jerret; Samareh, Jamshid A.
2016-01-01
Expensive simulators prevent any kind of meaningful analysis from being performed on the phenomena they model. To get around this problem, the concept of using a statistical emulator as a surrogate representation of the simulator was introduced in the 1980s. Presently, simulators have become more and more complex, and as a result running a single example on these simulators is very expensive and can take days, weeks, or even months. Many new techniques, termed criteria, have been introduced which sequentially select the next best (most informative to the emulator) point that should be run on the simulator. These criteria methods allow for the creation of an emulator with only a small number of simulator runs. We follow and extend this framework to expensive classification simulators.
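A minimal sketch of this sequential-design idea follows, assuming a Gaussian-process classifier from scikit-learn as the emulator and predictive entropy as the selection criterion; the toy simulator and all parameter choices are illustrative assumptions, not the authors' method.

    # Sequential design for an emulator of an expensive *classification* simulator:
    # fit a probabilistic classifier on the runs so far, then query the input whose
    # predicted class is most uncertain (maximum predictive entropy).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier

    def expensive_simulator(x):        # hypothetical stand-in (e.g., fail/succeed)
        return int(x[0] ** 2 + x[1] ** 2 < 1.0)

    rng = np.random.default_rng(0)
    candidates = rng.uniform(-2, 2, size=(2000, 2))            # cheap candidate inputs
    X = np.array([[0.0, 0.0], [0.5, 0.0], [1.5, 1.5], [-1.5, 1.0]])  # both classes present
    y = np.array([expensive_simulator(x) for x in X])

    for _ in range(20):                                        # 20 additional simulator calls
        gpc = GaussianProcessClassifier().fit(X, y)
        p = gpc.predict_proba(candidates)[:, 1]
        entropy = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
        best = np.argmax(entropy)                              # most informative candidate
        X = np.vstack([X, candidates[best]])
        y = np.append(y, expensive_simulator(candidates[best]))

    print("emulator trained with", len(y), "simulator runs")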
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.
Kunkel, Susanne; Schenck, Wolfram
2017-01-01
NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code
Kunkel, Susanne; Schenck, Wolfram
2017-01-01
NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946
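The essence of a dry run, a single process doing the per-rank work of a much larger parallel job while skipping communication, can be sketched in a few lines of Python; the partitioning and memory-measurement details here are illustrative assumptions, not NEST's implementation.

    # Sketch of a "dry run": pretend to be rank 0 of n_virtual ranks, build only the
    # data this rank would own, skip all communication, and report memory/runtime.
    import time
    import tracemalloc

    def build_local_network(n_neurons_total, n_virtual_ranks, rank=0):
        # Round-robin ownership: this rank stores only its share of the neurons.
        return [{"gid": gid, "state": [0.0] * 4}
                for gid in range(rank, n_neurons_total, n_virtual_ranks)]

    def dry_run(n_neurons_total, n_virtual_ranks, n_steps=100):
        tracemalloc.start()
        t0 = time.perf_counter()
        neurons = build_local_network(n_neurons_total, n_virtual_ranks)
        for _ in range(n_steps):
            for nrn in neurons:                 # local update step
                nrn["state"][0] += 0.1
            # communication with other ranks would happen here; a dry run skips it
        runtime = time.perf_counter() - t0
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        return runtime, peak / 1e6              # seconds, MB

    rt, mem = dry_run(n_neurons_total=100000, n_virtual_ranks=1024)
    print(f"per-rank estimate: {rt:.2f} s, {mem:.1f} MB")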
Comparison of SPHC Hydrocode Results with Penetration Equations and Results of Other Codes
NASA Technical Reports Server (NTRS)
Evans, Steven W.; Stallworth, Roderick; Stellingwerf, Robert F.
2004-01-01
The SPHC hydrodynamic code was used to simulate impacts of spherical aluminum projectiles on a single-wall aluminum plate and on a generic Whipple shield. Simulations were carried out in two and three dimensions. Projectile speeds ranged from 2 kilometers per second to 10 kilometers per second for the single-wall runs, and from 3 kilometers per second to 40 kilometers per second for the Whipple shield runs. Spallation limit results of the single-wall simulations are compared with predictions from five standard penetration equations, and are shown to fall comfortably within the envelope of these analytical relations. Ballistic limit results of the Whipple shield simulations are compared with results from the AUTODYN-2D and PAM-SHOCK-3D codes presented in a paper at the Hypervelocity Impact Symposium 2000 and the Christiansen formulation of 2003.
Parvin, C A
1993-03-01
The error detection characteristics of quality-control (QC) rules that use control observations within a single analytical run are investigated. Unlike the evaluation of QC rules that span multiple analytical runs, most of the fundamental results regarding the performance of QC rules applied within a single analytical run can be obtained from statistical theory, without the need for simulation studies. The case of two control observations per run is investigated for ease of graphical display, but the conclusions can be extended to more than two control observations per run. Results are summarized in a graphical format that offers many interesting insights into the relations among the various QC rules. The graphs provide heuristic support to the theoretical conclusions that no QC rule is best under all error conditions, but the multirule that combines the mean rule and a within-run standard deviation rule offers an attractive compromise.
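The kind of closed-form power calculation described above can be reproduced directly with scipy; the two rules and the 2.5 SD control limit below are illustrative choices, not necessarily the exact rules evaluated in the paper.

    # Probability of rejecting a run, with two control observations per run and a
    # systematic error of size delta (in within-run SD units), from normal theory
    # (no simulation required). Control limits are illustrative.
    from scipy.stats import norm

    def p_reject_mean_rule(delta, c=2.5, n=2):
        # Mean rule: reject if |mean of n controls| > c (c in SDs of one observation).
        rt_n = n ** 0.5
        return (1 - norm.cdf((c - delta) * rt_n)) + norm.cdf((-c - delta) * rt_n)

    def p_reject_single_rule(delta, c=2.5, n=2):
        # "1_c" rule: reject if any single control exceeds +/- c SDs.
        p_in = norm.cdf(c - delta) - norm.cdf(-c - delta)
        return 1 - p_in ** n

    for delta in (0.0, 1.0, 2.0, 3.0):
        print(f"delta={delta:.1f}  mean rule: {p_reject_mean_rule(delta):.3f}  "
              f"single rule: {p_reject_single_rule(delta):.3f}")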
The Effects of a Duathlon Simulation on Ventilatory Threshold and Running Economy
Berry, Nathaniel T.; Wideman, Laurie; Shields, Edgar W.; Battaglini, Claudio L.
2016-01-01
Multisport events continue to grow in popularity among recreational, amateur, and professional athletes around the world. This study aimed to determine the compounding effects of the initial run and cycling legs of an International Triathlon Union (ITU) Duathlon simulation on maximal oxygen uptake (VO2max), ventilatory threshold (VT) and running economy (RE) within a thermoneutral, laboratory controlled setting. Seven highly trained multisport athletes completed three trials; Trial-1 consisted of a speed only VO2max treadmill protocol (SOVO2max) to determine VO2max, VT, and RE during a single-bout run; Trial-2 consisted of a 10 km run at 98% of VT followed by an incremental VO2max test on the cycle ergometer; Trial-3 consisted of a 10 km run and 30 km cycling bout at 98% of VT followed by a speed only treadmill test to determine the compounding effects of the initial legs of a duathlon on VO2max, VT, and RE. A repeated measures ANOVA was performed to determine differences between variables across trials. No difference in VO2max, VT (%VO2max), maximal HR, or maximal RPE was observed across trials. Oxygen consumption at VT was significantly lower during Trial-3 compared to Trial-1 (p = 0.01). This decrease was coupled with a significant reduction in running speed at VT (p = 0.015). A significant interaction between trial and running speed indicates that RE was significantly altered during Trial-3 compared to Trial-1 (p < 0.001). The first two legs of a laboratory based duathlon simulation negatively impact VT and RE. Our findings may provide a useful method to evaluate multisport athletes since a single-bout incremental treadmill test fails to reveal important alterations in physiological thresholds. Key points: (1) relative oxygen uptake at VT (ml·kg-1·min-1) decreased during the final leg of a duathlon simulation, compared to a single-bout maximal run; (2) we observed a decrease in running speed at VT during the final leg of a duathlon simulation, resulting in an increase of more than 2 minutes to complete a 5 km run; (3) highly trained athletes were unable to complete the final 5 km run at the same intensity at which they completed the initial 10 km run (in a laboratory setting); (4) a better understanding and determination of training loads during multisport training may help to better periodize training programs; additional research is required. PMID:27274661
Simulation of ozone production in a complex circulation region using nested grids
NASA Astrophysics Data System (ADS)
Taghavi, M.; Cautenet, S.; Foret, G.
2004-06-01
During the ESCOMPTE precampaign (summer 2000, over Southern France), a 3-day period of intensive observation (IOP0), associated with ozone peaks, has been simulated. The comprehensive RAMS model, version 4.3, coupled on-line with a chemical module including 29 species, is used to follow the chemistry of the polluted zone. This efficient but time consuming method can be used because the code is installed on a parallel computer, the SGI 3800. Two runs are performed: run 1 with a single grid and run 2 with two nested grids. The simulated fields of ozone, carbon monoxide, nitrogen oxides and sulfur dioxide are compared with aircraft and surface station measurements. The 2-grid run looks substantially better than the run with one grid because the former takes the outer pollutants into account. This on-line method helps to satisfactorily retrieve the chemical species redistribution and to explain the impact of dynamics on this redistribution.
NASA Astrophysics Data System (ADS)
Jin, Zhe-Yan; Dong, Qiao-Tian; Yang, Zhi-Gang
2015-02-01
The present study experimentally investigated the effect of a simulated single-horn glaze ice accreted on rotor blades on the vortex structures in the wake of a horizontal axis wind turbine by using the stereoscopic particle image velocimetry (Stereo-PIV) technique. During the experiments, four horizontal axis wind turbine models were tested, and both "free-run" and "phase-locked" Stereo-PIV measurements were carried out. Based on the "free-run" measurements, it was found that because of the simulated single-horn glaze ice, the shape, vorticity, and trajectory of tip vortices were changed significantly, and less kinetic energy of the airflow could be harvested by the wind turbine. In addition, the "phase-locked" results indicated that the presence of simulated single-horn glaze ice resulted in a dramatic reduction of the vorticity peak of the tip vortices. Moreover, as the length of the glaze ice increased, both root and tip vortex gaps were found to increase accordingly.
Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST
NASA Astrophysics Data System (ADS)
Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan
2018-04-01
We describe the algorithm of employing multi-GPU power on the basis of Message Passing Interface (MPI) domain decomposition in a molecular dynamics code, GALAMOST, which is designed for the coarse-grained simulation of soft matter. The multi-GPU version of the code is developed from our previous single-GPU version. In multi-GPU runs, one GPU takes charge of one domain and runs the single-GPU code path. The communication between neighbouring domains follows an algorithm similar to that of the CPU-based code LAMMPS, but is optimised specifically for GPUs. We employ a memory-saving design which can enlarge the maximum system size on the same device. An optimisation algorithm is employed to prolong the update period of the neighbour list. We demonstrate good performance of multi-GPU runs on the simulation of Lennard-Jones liquid, dissipative particle dynamics liquid, polymer and nanoparticle composite, and two-patch particles on a workstation. Good scaling across many cluster nodes is presented for two-patch particles.
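The neighbour-domain communication pattern can be sketched with mpi4py; the following is a generic one-dimensional halo (ghost-data) exchange under assumed array shapes, intended only to convey the idea, not GALAMOST's actual GPU-optimised implementation.

    # Generic 1-D domain-decomposition halo exchange with MPI: each rank owns one
    # slab of particles and swaps boundary data with its left/right neighbours.
    # Run with e.g.:  mpiexec -n 4 python halo_exchange.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    left = (rank - 1) % size                 # periodic neighbours
    right = (rank + 1) % size

    # Hypothetical per-domain data: boundary-zone particle coordinates.
    send_right = np.full(128, float(rank), dtype=np.float64)
    send_left = np.full(128, float(rank), dtype=np.float64)
    recv_from_left = np.empty(128, dtype=np.float64)
    recv_from_right = np.empty(128, dtype=np.float64)

    # Exchange ghost data in both directions (Sendrecv avoids deadlock).
    comm.Sendrecv(send_right, dest=right, recvbuf=recv_from_left, source=left)
    comm.Sendrecv(send_left, dest=left, recvbuf=recv_from_right, source=right)

    print(f"rank {rank}: ghosts from {int(recv_from_left[0])} and {int(recv_from_right[0])}")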
CMB constraints on running non-Gaussianity
NASA Astrophysics Data System (ADS)
Oppizzi, F.; Liguori, M.; Renzi, A.; Arroja, F.; Bartolo, N.
2018-05-01
We develop a complete set of tools for CMB forecasting, simulation and estimation of primordial running bispectra, arising from a variety of curvaton and single-field (DBI) models of Inflation. We validate our pipeline using mock CMB running non-Gaussianity realizations and test it on real data by obtaining experimental constraints on the fNL running spectral index, nNG, using WMAP 9-year data. Our final bounds (68% C.L.) read -0.6 < nNG < 1.4, -0.3 < nNG < 1.2, -1.1
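For reference, a commonly adopted parametrization of running non-Gaussianity, stated here as an assumption about the convention rather than the paper's exact definition, promotes fNL to a scale-dependent amplitude,

    f_{\mathrm{NL}}(k) = f_{\mathrm{NL}}(k_{\mathrm{piv}})\left(\frac{k}{k_{\mathrm{piv}}}\right)^{n_{\mathrm{NG}}},

so that nNG = 0 recovers the usual scale-independent fNL and the quoted bounds constrain how strongly the bispectrum amplitude may vary with scale.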
Volume 2: Compendium of Abstracts
2017-06-01
simulation work using a standard running model for legged systems, the Spring Loaded Inverted Pendulum (SLIP) Model. In this model, the dynamics of a single...bar SLIP model is analyzed using a basin-of-attraction analysis to determine the optimal configuration for running at different velocities and...acquisition, and the automatic target acquisition were then compared to each other. After running trials with the current system, it will be
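For context, the stance-phase dynamics of the standard SLIP model named above are usually written in polar coordinates about the foot contact point (one common formulation, with m the mass, k the leg stiffness, l0 the rest leg length, r the instantaneous leg length, and theta the leg angle from vertical; this is the generic model, not necessarily the exact variant analyzed in the report):

    m\ddot{r} = m r \dot{\theta}^{2} + k\,(l_0 - r) - m g \cos\theta
    m r \ddot{\theta} = -2 m \dot{r}\dot{\theta} + m g \sin\theta

During flight the centre of mass follows a ballistic trajectory, and a running gait is periodic when the apex state repeats from stride to stride, which is what a basin-of-attraction analysis probes.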
Komarov, Ivan; D'Souza, Roshan M
2012-01-01
The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of GSSA are prohibitively expensive to compute and perform parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data-structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
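For readers unfamiliar with the underlying algorithm, the serial direct-method GSSA that the GPU variant parallelizes can be written in a few lines; the two-reaction birth-death model below is illustrative, and the warp-level data structures of the paper are not reproduced here.

    # Reference (serial) Gillespie direct method for a toy birth-death model:
    #   reaction 1:  0 -> X   at rate k1
    #   reaction 2:  X -> 0   at rate k2 * X
    import numpy as np

    def gillespie_direct(x0=0, k1=10.0, k2=0.1, t_end=100.0, seed=0):
        rng = np.random.default_rng(seed)
        t, x = 0.0, x0
        times, states = [t], [x]
        while t < t_end:
            a = np.array([k1, k2 * x])        # propensities
            a0 = a.sum()
            if a0 == 0.0:
                break
            t += rng.exponential(1.0 / a0)    # time to next reaction
            r = rng.choice(len(a), p=a / a0)  # which reaction fires
            x += 1 if r == 0 else -1
            times.append(t)
            states.append(x)
        return np.array(times), np.array(states)

    times, states = gillespie_direct()
    print("final copy number:", states[-1], "after", len(times) - 1, "reactions")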
Accelerating 3D Hall MHD Magnetosphere Simulations with Graphics Processing Units
NASA Astrophysics Data System (ADS)
Bard, C.; Dorelli, J.
2017-12-01
The resolution required to simulate planetary magnetospheres with Hall magnetohydrodynamics results in program sizes approaching several hundred million grid cells. These would take years to run on a single computational core and require hundreds or thousands of computational cores to complete in a reasonable time. However, this requires access to the largest supercomputers. Graphics processing units (GPUs) provide a viable alternative: one GPU can do the work of roughly 100 cores, bringing Hall MHD simulations of Ganymede within reach of modest GPU clusters (~8 GPUs). We report our progress in developing a GPU-accelerated, three-dimensional Hall magnetohydrodynamic code and present Hall MHD simulation results for both Ganymede (run on 8 GPUs) and Mercury (56 GPUs). We benchmark our Ganymede simulation with previous results for the Galileo G8 flyby, namely that adding the Hall term to ideal MHD simulations changes the global convection pattern within the magnetosphere. Additionally, we present new results for the G1 flyby as well as initial results from Hall MHD simulations of Mercury and compare them with the corresponding ideal MHD runs.
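For reference, the Hall term mentioned above enters through the generalized Ohm's law; neglecting electron inertia and, for brevity, the electron pressure gradient, a standard form is

    \mathbf{E} = -\mathbf{v}\times\mathbf{B} + \eta\,\mathbf{J} + \frac{1}{n e}\,\mathbf{J}\times\mathbf{B},

so the induction equation acquires a curl of J x B/(ne) contribution that decouples ion and electron motion at scales below the ion inertial length, which is what alters the global convection pattern relative to ideal MHD.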
SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies
ERIC Educational Resources Information Center
Yurdugul, Halil
2009-01-01
This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of its confidence intervals. SIMREL runs on two alternatives. In the first one, if SIMREL is run for a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…
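Coefficient alpha itself is straightforward to compute from an item-score matrix; a minimal Python version is sketched below. This reproduces the standard formula, not SIMREL's code, and the simulated data are arbitrary.

    # Cronbach's coefficient alpha for an (n_persons x k_items) score matrix:
    #   alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    import numpy as np

    def cronbach_alpha(scores):
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

    rng = np.random.default_rng(1)
    ability = rng.normal(size=(500, 1))                     # common factor
    items = ability + rng.normal(scale=1.0, size=(500, 8))  # 8 noisy items
    print(f"alpha = {cronbach_alpha(items):.3f}")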
A new synoptic scale resolving global climate simulation using the Community Earth System Model
NASA Astrophysics Data System (ADS)
Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana
2014-12-01
High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was at 0.25° grid spacing, and ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El-Niño Southern Oscillation variability were well simulated compared to standard resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and Tropical Cyclones. Associated single component runs and standard resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, costing 250 thousand processor-hours per simulated year and made about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."
Simulated single molecule microscopy with SMeagol.
Lindén, Martin; Ćurić, Vladimir; Boucharin, Alexis; Fange, David; Elf, Johan
2016-08-01
SMeagol is a software tool to simulate highly realistic microscopy data based on spatial systems biology models, in order to facilitate development, validation and optimization of advanced analysis methods for live cell single molecule microscopy data. SMeagol runs on Matlab R2014 and later, and uses compiled binaries in C for reaction-diffusion simulations. Documentation, source code and binaries for Mac OS, Windows and Ubuntu Linux can be downloaded from http://smeagol.sourceforge.net. Contact: johan.elf@icm.uu.se. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Automated Knowledge Discovery From Simulators
NASA Technical Reports Server (NTRS)
Burl, Michael; DeCoste, Dennis; Mazzoni, Dominic; Scharenbroich, Lucas; Enke, Brian; Merline, William
2007-01-01
A computational method, SimLearn, has been devised to facilitate efficient knowledge discovery from simulators. Simulators are complex computer programs used in science and engineering to model diverse phenomena such as fluid flow, gravitational interactions, coupled mechanical systems, and nuclear, chemical, and biological processes. SimLearn uses active-learning techniques to efficiently address the "landscape characterization problem." In particular, SimLearn tries to determine which regions in "input space" lead to a given output from the simulator, where "input space" refers to an abstraction of all the variables going into the simulator, e.g., initial conditions, parameters, and interaction equations. Landscape characterization can be viewed as an attempt to invert the forward mapping of the simulator and recover the inputs that produce a particular output. Given that a single simulation run can take days or weeks to complete even on a large computing cluster, SimLearn attempts to reduce costs by reducing the number of simulations needed to effect discoveries. Unlike conventional data-mining methods that are applied to static predefined datasets, SimLearn involves an iterative process in which a most informative dataset is constructed dynamically by using the simulator as an oracle. On each iteration, the algorithm models the knowledge it has gained through previous simulation trials and then chooses which simulation trials to run next. Running these trials through the simulator produces new data in the form of input-output pairs. The overall process is embodied in an algorithm that combines support vector machines (SVMs) with active learning. SVMs use learning from examples (the examples are the input-output pairs generated by running the simulator) and a principle called maximum margin to derive predictors that generalize well to new inputs. In SimLearn, the SVM plays the role of modeling the knowledge that has been gained through previous simulation trials. Active learning is used to determine which new input points would be most informative if their output were known. The selected input points are run through the simulator to generate new information that can be used to refine the SVM. The process is then repeated. SimLearn carefully balances exploration (semi-randomly searching around the input space) versus exploitation (using the current state of knowledge to conduct a tightly focused search). During each iteration, SimLearn uses not one, but an ensemble of SVMs. Each SVM in the ensemble is characterized by different hyper-parameters that control various aspects of the learned predictor - for example, whether the predictor is constrained to be very smooth (nearby points in input space lead to similar output predictions) or whether the predictor is allowed to be "bumpy." The various SVMs will have different preferences about which input points they would like to run through the simulator next. SimLearn includes a formal mechanism for balancing the ensemble SVM preferences so that a single choice can be made for the next set of trials.
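A stripped-down sketch of the exploration/exploitation loop described above follows, using an ensemble of scikit-learn SVMs whose disagreement selects the next simulator trials; the toy simulator, hyper-parameter grid, and batch size are assumptions for illustration, not SimLearn's actual settings.

    # Query-by-committee active learning with an ensemble of SVM classifiers:
    # points where committee members disagree most are run through the simulator next.
    import numpy as np
    from sklearn.svm import SVC

    def simulator(x):                      # hypothetical "landscape" oracle
        return int(np.sin(3 * x[0]) + x[1] > 0.5)

    rng = np.random.default_rng(2)
    pool = rng.uniform(-2, 2, size=(3000, 2))                     # candidate inputs
    X = np.array([[0.4, 1.5], [0.4, -1.5], [-0.4, 1.5], [-0.4, -1.5]])  # both classes
    y = np.array([simulator(x) for x in X])

    hyperparams = [{"C": c, "gamma": g} for c in (0.1, 1, 10) for g in (0.1, 1)]
    for _ in range(15):                                           # 15 batches of new trials
        committee = [SVC(**hp).fit(X, y) for hp in hyperparams]
        votes = np.stack([m.predict(pool) for m in committee])    # (models, points)
        disagreement = votes.var(axis=0)                          # high variance = disagreement
        batch = np.argsort(disagreement)[-4:]                     # 4 most contested points
        X = np.vstack([X, pool[batch]])
        y = np.append(y, [simulator(x) for x in pool[batch]])

    print("trained on", len(y), "simulator runs")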
Effect of match-run frequencies on the number of transplants and waiting times in kidney exchange.
Ashlagi, Itai; Bingaman, Adam; Burq, Maximilien; Manshadi, Vahideh; Gamarnik, David; Murphey, Cathi; Roth, Alvin E; Melcher, Marc L; Rees, Michael A
2018-05-01
Numerous kidney exchange (kidney paired donation [KPD]) registries in the United States have gradually shifted to high-frequency match-runs, raising the question of whether this harms the number of transplants. We conducted simulations using clinical data from 2 KPD registries-the Alliance for Paired Donation, which runs multihospital exchanges, and Methodist San Antonio, which runs single-center exchanges-to study how the frequency of match-runs impacts the number of transplants and the average waiting times. We simulate the options facing each of the 2 registries by repeated resampling from their historical pools of patient-donor pairs and nondirected donors, with arrival and departure rates corresponding to the historical data. We find that longer intervals between match-runs do not increase the total number of transplants, and that prioritizing highly sensitized patients is more effective than waiting longer between match-runs for transplanting highly sensitized patients. While we do not find that frequent match-runs result in fewer transplanted pairs, we do find that increasing arrival rates of new pairs improves both the fraction of transplanted pairs and waiting times. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-12-17
Grizzly is a simulation tool for assessing the effects of age-related degradation on systems, structures, and components of nuclear power plants. Grizzly is built on the MOOSE framework, and uses a Jacobian-free Newton Krylov method to obtain solutions to tightly coupled thermo-mechanical simulations. Grizzly runs on a wide range of hardware, from a single processor to massively parallel machines.
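For context, the key step in a Jacobian-free Newton Krylov method is approximating Jacobian-vector products by finite differences of the residual, so the Jacobian is never formed explicitly:

    J(u)\,v \approx \frac{F(u + \epsilon v) - F(u)}{\epsilon},

with F the coupled thermo-mechanical residual, u the current solution iterate, and epsilon a small perturbation; the Krylov linear solver only ever needs these matrix-vector products.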
WE-C-217BCD-08: Rapid Monte Carlo Simulations of DQE(f) of Scintillator-Based Detectors.
Star-Lack, J; Abel, E; Constantin, D; Fahrig, R; Sun, M
2012-06-01
Monte Carlo simulations of DQE(f) can greatly aid in the design of scintillator-based detectors by helping optimize key parameters including scintillator material and thickness, pixel size, surface finish, and septa reflectivity. However, the additional optical transport significantly increases simulation times, necessitating a large number of parallel processors to adequately explore the parameter space. To address this limitation, we have optimized the DQE(f) algorithm, reducing simulation times per design iteration to 10 minutes on a single CPU. DQE(f) is proportional to the ratio MTF(f)^2/NPS(f). The LSF-MTF simulation uses a slanted line source and is rapidly performed with relatively few gammas launched. However, the conventional NPS simulation for standard radiation exposure levels requires the acquisition of multiple flood fields (nRun), each requiring billions of input gamma photons (nGamma), many of which will scintillate, thereby producing thousands of optical photons (nOpt) per deposited MeV. The resulting execution time is proportional to the product nRun x nGamma x nOpt. In this investigation, we revisit the theoretical derivation of DQE(f), and reveal significant computation time savings through the optimization of nRun, nGamma, and nOpt. Using GEANT4, we determine optimal values for these three variables for a GOS scintillator-amorphous silicon portal imager. Both isotropic and Mie optical scattering processes were modeled. Simulation results were validated against the literature. We found that, depending on the radiative and optical attenuation properties of the scintillator, the NPS can be accurately computed using values for nGamma below 1000, and values for nOpt below 500/MeV. nRun should remain above 200. Using these parameters, typical computation times for a complete NPS ranged from 2-10 minutes on a single CPU. The number of launched particles and corresponding execution times for a DQE simulation can be dramatically reduced allowing for accurate computation with modest computer hardware. NIH RO1 CA138426. Several authors work for Varian Medical Systems. © 2012 American Association of Physicists in Medicine.
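For reference, the frequency-dependent detective quantum efficiency referred to above is commonly estimated as (a standard textbook form, stated here as an assumption and not necessarily the exact normalization used by the authors)

    \mathrm{DQE}(f) = \frac{\bar{d}^{\,2}\,\mathrm{MTF}^{2}(f)}{\bar{q}\;\mathrm{NPS}(f)},

where d-bar is the mean large-area detector signal and q-bar the incident photon fluence, which is why the simulation problem reduces to estimating MTF(f) and NPS(f) efficiently.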
Platform-Independence and Scheduling In a Multi-Threaded Real-Time Simulation
NASA Technical Reports Server (NTRS)
Sugden, Paul P.; Rau, Melissa A.; Kenney, P. Sean
2001-01-01
Aviation research often relies on real-time, pilot-in-the-loop flight simulation as a means to develop new flight software, flight hardware, or pilot procedures. Often these simulations become so complex that a single processor is incapable of performing the necessary computations within a fixed time-step. Threads are an elegant means to distribute the computational work-load when running on a symmetric multi-processor machine. However, programming with threads often requires operating system specific calls that reduce code portability and maintainability. While a multi-threaded simulation allows a significant increase in the simulation complexity, it also increases the workload of a simulation operator by requiring that the operator determine which models run on which thread. To address these concerns an object-oriented design was implemented in the NASA Langley Standard Real-Time Simulation in C++ (LaSRS++) application framework. The design provides a portable and maintainable means to use threads and also provides a mechanism to automatically load balance the simulation models.
Resource Contention Management in Parallel Systems
1989-04-01
technical competence include communications, command and control, battle management, information processing, surveillance sensors, intelligence data ...two-simulation approach since they require only a single simulation run. More importantly, since they involve only observed data, they may also be...we use the original, unobservable RAC of Section 2 and handle unobservable transitions by generating artificial events, when required, using a random
NASA Astrophysics Data System (ADS)
Mohaghegh, Shahab
2010-05-01
Surrogate Reservoir Model (SRM) is a new solution for fast track, comprehensive reservoir analysis (solving both direct and inverse problems) using existing reservoir simulation models. SRM is defined as a replica of the full field reservoir simulation model that runs and provides accurate results in real-time (one simulation run takes only a fraction of a second). SRM mimics the capabilities of a full field model with high accuracy. Reservoir simulation is the industry standard for reservoir management. It is used in all phases of field development in the oil and gas industry. The routine of simulation studies calls for integration of static and dynamic measurements into the reservoir model. Full field reservoir simulation models have become the major source of information for analysis, prediction and decision making. Large prolific fields usually go through several versions (updates) of their model. Each new version usually is a major improvement over the previous version. The updated model includes the latest available information incorporated along with adjustments that usually are the result of single-well or multi-well history matching. As the number of reservoir layers (thickness of the formations) increases, the number of cells representing the model approaches several millions. As the reservoir models grow in size, so does the time that is required for each run. Schemes such as grid computing and parallel processing help to a certain degree but do not provide the required speed for tasks such as: field development strategies using comprehensive reservoir analysis, solving the inverse problem for injection/production optimization, quantifying uncertainties associated with the geological model and real-time optimization and decision making. These types of analyses require hundreds or thousands of runs. Furthermore, with the new push for smart fields in the oil/gas industry, which is a natural outgrowth of smart completions and smart wells, the need for real time reservoir modeling becomes more pronounced. SRM is developed using the state of the art in neural computing and fuzzy pattern recognition to address the ever-growing need in the oil and gas industry to perform accurate, but high speed simulation and modeling. Unlike conventional geo-statistical approaches (response surfaces, proxy models …) that require hundreds of simulation runs for development, SRM is developed with only a few (from 10 to 30) simulation runs. SRM can be developed regularly (as new versions of the full field model become available) off-line and can be put online for real-time processing to guide important decisions. SRM has proven its value in the field. An SRM was developed for a giant oil field in the Middle East. The model included about one million grid blocks with more than 165 horizontal wells and took ten hours for a single run on 12 parallel CPUs. Using only 10 simulation runs, an SRM was developed that was able to accurately mimic the behavior of the reservoir simulation model. Performing a comprehensive reservoir analysis that included making millions of SRM runs, wells in the field were divided into five clusters. It was predicted that wells in clusters one and two are the best candidates for rate relaxation with minimal, long term water production, while wells in clusters four and five are susceptible to high water cuts. Two and a half years and 20 wells later, rate relaxation results from the field proved that all the predictions made by the SRM analysis were correct.
While incremental oil production increased in all wells (wells in clusters 1 produced the most followed by wells in cluster 2, 3 …) the percent change in average monthly water cut for wells in each cluster clearly demonstrated the analytic power of SRM. As it was correctly predicted, wells in clusters 1 and 2 actually experience a reduction in water cut while a substantial increase in water cut was observed in wells classified into clusters 4 and 5. Performing these analyses would have been impossible using the original full field simulation model.
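The core idea of a surrogate trained on only a handful of full-field runs can be illustrated with a small neural-network regressor; the feature names, sample counts, and network size below are assumptions for illustration, not the actual SRM workflow or its neuro-fuzzy machinery.

    # Train a fast surrogate on a few expensive simulator runs, then use it for
    # many cheap evaluations (e.g., scenario screening or uncertainty sweeps).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    def full_field_simulator(x):
        # Hypothetical stand-in for a ten-hour reservoir simulation run:
        # inputs could be (permeability multiplier, rate, skin), output a water cut.
        perm, rate, skin = x
        return 0.3 * np.tanh(rate / 50.0) + 0.1 * perm - 0.02 * skin

    rng = np.random.default_rng(3)
    X_train = rng.uniform([0.5, 10.0, 0.0], [2.0, 200.0, 10.0], size=(20, 3))  # ~20 runs
    y_train = np.array([full_field_simulator(x) for x in X_train])

    surrogate = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(32, 32),
                                           max_iter=5000, random_state=0))
    surrogate.fit(X_train, y_train)

    # Very large numbers of surrogate evaluations are now affordable in seconds.
    X_query = rng.uniform([0.5, 10.0, 0.0], [2.0, 200.0, 10.0], size=(100000, 3))
    predictions = surrogate.predict(X_query)
    print("surrogate evaluated at", len(predictions), "points")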
SOWFA + Super Controller User's Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleming, P.; Gebraad, P.; Churchfield, M.
2013-08-01
SOWFA + Super Controller is a modification of NREL's SOWFA tool which allows a user to apply multiturbine or centralized wind plant control algorithms within the high-fidelity SOWFA simulation environment. The tool is currently a branch of the main SOWFA program, but will one day be merged into a single version. This manual introduces the tool and provides examples such that a user can implement their own super controller and set up and run simulations. The manual only discusses enough about SOWFA itself to allow for the customization of controllers and running of simulations; details of SOWFA itself are reported elsewhere (Churchfield and Lee, 2013; Churchfield et al., 2012). SOWFA + Super Controller, and this manual, are in alpha mode.
Sailfish: A flexible multi-GPU implementation of the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Januszewski, M.; Kostur, M.
2014-09-01
We present Sailfish, an open source fluid simulation package implementing the lattice Boltzmann method (LBM) on modern Graphics Processing Units (GPUs) using CUDA/OpenCL. We take a novel approach to GPU code implementation and use run-time code generation techniques and a high level programming language (Python) to achieve state of the art performance, while allowing easy experimentation with different LBM models and tuning for various types of hardware. We discuss the general design principles of the code, scaling to multiple GPUs in a distributed environment, as well as the GPU implementation and optimization of many different LBM models, both single component (BGK, MRT, ELBM) and multicomponent (Shan-Chen, free energy). The paper also presents results of performance benchmarks spanning the last three NVIDIA GPU generations (Tesla, Fermi, Kepler), which we hope will be useful for researchers working with this type of hardware and similar codes. Catalogue identifier: AETA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public License, version 3 No. of lines in distributed program, including test data, etc.: 225864 No. of bytes in distributed program, including test data, etc.: 46861049 Distribution format: tar.gz Programming language: Python, CUDA C, OpenCL. Computer: Any with an OpenCL or CUDA-compliant GPU. Operating system: No limits (tested on Linux and Mac OS X). RAM: Hundreds of megabytes to tens of gigabytes for typical cases. Classification: 12, 6.5. External routines: PyCUDA/PyOpenCL, Numpy, Mako, ZeroMQ (for multi-GPU simulations), scipy, sympy Nature of problem: GPU-accelerated simulation of single- and multi-component fluid flows. Solution method: A wide range of relaxation models (LBGK, MRT, regularized LB, ELBM, Shan-Chen, free energy, free surface) and boundary conditions within the lattice Boltzmann method framework. Simulations can be run in single or double precision using one or more GPUs. Restrictions: The lattice Boltzmann method works for low Mach number flows only. Unusual features: The actual numerical calculations run exclusively on GPUs. The numerical code is built dynamically at run-time in CUDA C or OpenCL, using templates and symbolic formulas. The high-level control of the simulation is maintained by a Python process. Additional comments: !!!!! The distribution file for this program is over 45 Mbytes and therefore is not delivered directly when Download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. !!!!! Running time: Problem-dependent, typically minutes (for small cases or short simulations) to hours (large cases or long simulations).
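The run-time code-generation approach described above, building GPU kernel source from templates and symbolic formulas in Python, can be illustrated with the Mako templating engine that Sailfish lists among its dependencies; the kernel body and parameter names here are a simplified stand-in, not Sailfish's actual templates.

    # Generate CUDA-C-like kernel source at run time from a Python-level template,
    # so model parameters (relaxation time, lattice size) are baked in as constants.
    from mako.template import Template

    kernel_template = Template("""
    __global__ void relax(float *f_in, float *f_out)
    {
        const float tau = ${tau}f;          /* relaxation time fixed at codegen time */
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx < ${n_nodes}) {
            f_out[idx] = f_in[idx] - (f_in[idx] - ${f_eq}f) / tau;
        }
    }
    """)

    source = kernel_template.render(tau=0.6, n_nodes=1024 * 1024, f_eq=1.0)
    print(source)   # in Sailfish this string would be handed to PyCUDA/PyOpenCL to compile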
LUXSim: A component-centric approach to low-background simulations
Akerib, D. S.; Bai, X.; Bedikian, S.; ...
2012-02-13
Geant4 has been used throughout the nuclear and high-energy physics community to simulate energy depositions in various detectors and materials. These simulations have mostly been run with a source beam outside the detector. In the case of low-background physics, however, a primary concern is the effect on the detector from radioactivity inherent in the detector parts themselves. From this standpoint, there is no single source or beam, but rather a collection of sources with potentially complicated spatial extent. LUXSim is a simulation framework used by the LUX collaboration that takes a component-centric approach to event generation and recording. A new set of classes allows for multiple radioactive sources to be set within any number of components at run time, with the entire collection of sources handled within a single simulation run. Various levels of information can also be recorded from the individual components, with these record levels also being set at runtime. This flexibility in both source generation and information recording is possible without the need to recompile, reducing the complexity of code management and the proliferation of versions. Within the code itself, casting geometry objects within this new set of classes rather than as the default Geant4 classes automatically extends this flexibility to every individual component. No additional work is required on the part of the developer, reducing development time and increasing confidence in the results. Here, we describe the guiding principles behind LUXSim, detail some of its unique classes and methods, and give examples of usage.
PHREEQCI; a graphical user interface for the geochemical computer program PHREEQC
Charlton, Scott R.; Macklin, Clifford L.; Parkhurst, David L.
1997-01-01
PhreeqcI is a Windows-based graphical user interface for the geochemical computer program PHREEQC. PhreeqcI provides the capability to generate and edit input data files, run simulations, and view text files containing simulation results, all within the framework of a single interface. PHREEQC is a multipurpose geochemical program that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. Interactive access to all of the capabilities of PHREEQC is available with PhreeqcI. The interface is written in Visual Basic and will run on personal computers under the Windows 3.1, Windows 95, and Windows NT operating systems.
Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures
NASA Technical Reports Server (NTRS)
Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.
1990-01-01
A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis), and injects transient errors in run-time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.
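The run-time fault-injection idea, flip a bit, re-run, and check whether redundancy masks it, can be sketched at a much higher level of abstraction than the mixed-mode electrical/logic simulation used in the paper; everything below (the toy computation, injection site, and comparator) is illustrative.

    # Toy transient fault injection into one channel of a dual-channel computation,
    # with a channel comparator acting as the error-detection mechanism.
    import random
    import struct

    def flip_random_bit(value):
        # Flip one random bit in the IEEE-754 representation of a float.
        bits = struct.unpack("<Q", struct.pack("<d", value))[0]
        bits ^= 1 << random.randrange(64)
        return struct.unpack("<d", struct.pack("<Q", bits))[0]

    def control_law(sensor):
        return 2.0 * sensor + 1.0          # stand-in for the controller computation

    def run_trial():
        sensor = random.uniform(0.0, 10.0)
        ch_a = control_law(sensor)
        ch_b = control_law(sensor)
        ch_a = flip_random_bit(ch_a)       # transient upset in channel A only
        return ch_a != ch_b                # dual-channel comparison detects mismatch

    random.seed(4)
    trials = 10000
    coverage = sum(run_trial() for _ in range(trials)) / trials
    print(f"detected {coverage:.1%} of injected single-channel transients")

In this idealized toy, any single-channel upset produces a channel mismatch, mirroring the 100 percent single-transient coverage reported above; the interesting cases in the paper are the multiple transients that hit both channels.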
Accelerating Molecular Dynamic Simulation on Graphics Processing Units
Friedrichs, Mark S.; Eastman, Peter; Vaidyanathan, Vishal; Houston, Mike; Legrand, Scott; Beberg, Adam L.; Ensign, Daniel L.; Bruns, Christopher M.; Pande, Vijay S.
2009-01-01
We describe a complete implementation of all-atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core. PMID:19191337
NASA Astrophysics Data System (ADS)
Franzoni, G.; Norkus, A.; Pol, A. A.; Srimanobhas, N.; Walker, J.
2017-10-01
Physics analysis at the Compact Muon Solenoid requires both the production of simulated events and processing of the data collected by the experiment. Since the end of the LHC Run-I in 2012, CMS has produced over 20 billion simulated events, from 75 thousand processing requests organised in one hundred different campaigns. These campaigns emulate different configurations of collision events, the detector, and LHC running conditions. In the same time span, sixteen data processing campaigns have taken place to reconstruct different portions of the Run-I and Run-II data with ever improving algorithms and calibrations. The scale and complexity of the event simulation and processing, and the requirement that multiple campaigns must proceed in parallel, demand that a comprehensive, frequently updated and easily accessible monitoring be made available. The monitoring must serve both the analysts, who want to know which datasets will become available and when, and the central production teams in charge of submitting, prioritizing, and running the requests across the distributed computing infrastructure. The Production Monitoring Platform (pMp) web-based service was developed in 2015 to address those needs. It aggregates information from multiple services used to define, organize, and run the processing requests. Information is updated hourly using a dedicated elastic database, and the monitoring provides multiple configurable views to assess the status of single datasets as well as entire production campaigns. This contribution describes the pMp development, the evolution of its functionalities, and one and a half years of operational experience.
NASA Astrophysics Data System (ADS)
Wang, Zi-Qing; Wang, Guo-Dong; Shen, Wei-Bo
2010-10-01
Multimotor transport is studied by Monte Carlo simulation with consideration of motor detachment from the filament. Our work shows that, in the case of low load, the velocity of the multi-motor system can decrease or increase with increasing motor number, depending on the single-motor force-velocity curve. The stall force and run length are greatly reduced compared to other models. Especially at low ATP concentrations, the stall force of multi-motor transport is even smaller than the single motor's stall force.
Just-in-time adaptive disturbance estimation for run-to-run control of photolithography overlay
NASA Astrophysics Data System (ADS)
Firth, Stacy K.; Campbell, W. J.; Edgar, Thomas F.
2002-07-01
One of the main challenges to implementations of traditional run-to-run control in the semiconductor industry is a high mix of products in a single factory. To address this challenge, Just-in-time Adaptive Disturbance Estimation (JADE) has been developed. JADE uses a recursive weighted least-squares parameter estimation technique to identify the contributions to variation that are dependent on the product, as well as the tools on which the lot was processed. As applied to photolithography overlay, JADE assigns these sources of variation to contributions from the context items: tool, product, reference tool, and reference reticle. Simulations demonstrate that JADE effectively identifies disturbances in contributing context items when the variations are known to be additive. The superior performance of JADE over traditional EWMA is also shown in these simulations. The results of applying JADE to data from a high mix production facility show that JADE still performs better than EWMA, even with the challenges of a real manufacturing environment.
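A bare-bones version of the recursive weighted least-squares update underlying this kind of context-based disturbance estimation is sketched below; the context encoding (one-hot tool and product indicators) and the forgetting factor are assumptions for illustration, not the specific JADE formulation.

    # Recursive (exponentially weighted) least squares estimating additive
    # context contributions: y = x^T theta + noise, where x one-hot encodes
    # which tool and which product produced the lot.
    import numpy as np

    class RecursiveLeastSquares:
        def __init__(self, n_params, forgetting=0.98):
            self.theta = np.zeros(n_params)            # estimated contributions
            self.P = np.eye(n_params) * 1000.0         # parameter covariance
            self.lam = forgetting

        def update(self, x, y):
            x = np.asarray(x, dtype=float)
            Px = self.P @ x
            k = Px / (self.lam + x @ Px)               # gain vector
            self.theta += k * (y - x @ self.theta)     # correct with the residual
            self.P = (self.P - np.outer(k, Px)) / self.lam

    n_tools, n_products = 3, 5
    rls = RecursiveLeastSquares(n_tools + n_products)
    true_tool = np.array([0.5, -0.2, 0.1])
    true_prod = np.array([1.0, 0.3, -0.4, 0.0, 0.7])

    rng = np.random.default_rng(5)
    for _ in range(2000):                              # stream of measured lots
        t, p = rng.integers(n_tools), rng.integers(n_products)
        x = np.zeros(n_tools + n_products)
        x[t] = 1.0
        x[n_tools + p] = 1.0
        y = true_tool[t] + true_prod[p] + rng.normal(scale=0.05)
        rls.update(x, y)

    # Note: tool and product offsets are identifiable only up to a common constant.
    print("estimated tool offsets:", np.round(rls.theta[:n_tools], 2))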
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
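One widely used way to rescale a single Monte Carlo run for different absorption values, record each detected photon's total path length in a baseline absorption-free run and then reweight, is sketched below; this conveys the general rescaling idea only and is not the authors' GPU implementation or their specific scaling relations.

    # "White Monte Carlo" style rescaling: given path lengths L_i (cm) of photons
    # detected in a single baseline run with zero absorption, the diffuse
    # reflectance for any absorption coefficient mu_a follows from Beer-Lambert
    # reweighting, with no new photon transport required.
    import numpy as np

    rng = np.random.default_rng(6)
    # Stand-in for the stored output of one baseline Monte Carlo run:
    path_lengths_cm = rng.gamma(shape=3.0, scale=0.4, size=1_000_000)
    launch_count = 5_000_000                      # photons launched in that run

    def diffuse_reflectance(mu_a_per_cm):
        weights = np.exp(-mu_a_per_cm * path_lengths_cm)
        return weights.sum() / launch_count

    for mu_a in (0.01, 0.1, 1.0):                 # 1/cm
        print(f"mu_a = {mu_a:5.2f} /cm  ->  Rd = {diffuse_reflectance(mu_a):.4f}")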
NASA Astrophysics Data System (ADS)
Ueda, Yoshikatsu; Omura, Yoshiharu; Kojima, Hiro
Spacecraft observation is essentially a "one-point measurement", while numerical simulation can reproduce a whole system of physical processes on a computer. By performing particle simulations of plasma wave instabilities and calculating the correlation of waves and particles observed at a single point, we examine how well we can infer the characteristics of the whole system from a one-point measurement. We perform various simulation runs with different plasma parameters using a one-dimensional electromagnetic particle code (KEMPO1) and calculate 'E dot v' or other moments at a single point. We find good correlation between the measurement and the macroscopic fluctuations of the total simulation region. We make use of the results of the computer experiments in the system design of a new instrument, the 'One-chip Wave Particle Interaction Analyzer (OWPIA)'.
Road simulation for four-wheel vehicle whole input power spectral density
NASA Astrophysics Data System (ADS)
Wang, Jiangbo; Qiang, Baomin
2017-05-01
The vibration of a running vehicle mainly comes from the road and influences ride performance, so road roughness power spectral density simulation is of great significance for analyzing automobile suspension vibration system parameters and evaluating ride comfort. Firstly, based on the mathematical model of road roughness power spectral density, this paper established the integrated white noise method for generating random road profiles. Then, in the MATLAB/Simulink environment, following the usual progression in automobile suspension research from a simple two-degree-of-freedom single-wheel vehicle model to a complex multiple-degree-of-freedom vehicle model, a simple single-excitation input simulation model was built. Finally, the spectrum matrix was used to build a whole-vehicle excitation input simulation model. This simulation method is based on reliable and accurate mathematical theory and can be applied to random road simulation of any specified spectrum, providing a pavement excitation model and a foundation for vehicle ride performance research and vibration simulation.
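A common way to realize this kind of white-noise-based road input in code is a first-order shaping filter driven by white noise; the specific filter below (cutoff spatial frequency n00, reference spatial frequency n0, roughness coefficient Gq0, vehicle speed u) is a standard textbook form stated as an assumption, not necessarily the exact model used in the paper.

    # Filtered-white-noise road roughness model, integrated with Euler-Maruyama:
    #   dq/dt = -2*pi*n00*u*q + 2*pi*n0*sqrt(Gq0*u) * w(t),  w(t) unit white noise
    import numpy as np

    def road_profile(u=20.0, Gq0=64e-6, n0=0.1, n00=0.011, dt=0.001, T=10.0, seed=7):
        """u: speed [m/s]; Gq0: roughness coefficient [m^3]; n0, n00: [1/m]."""
        rng = np.random.default_rng(seed)
        n_steps = int(T / dt)
        q = np.zeros(n_steps)
        for i in range(1, n_steps):
            w = rng.normal() / np.sqrt(dt)        # discrete-time unit white noise
            dq = -2 * np.pi * n00 * u * q[i - 1] + 2 * np.pi * n0 * np.sqrt(Gq0 * u) * w
            q[i] = q[i - 1] + dq * dt
        return q                                   # road elevation under the wheel [m]

    z = road_profile()
    print(f"RMS road elevation: {1000 * z.std():.2f} mm")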
Comparing nonlinear MHD simulations of low-aspect-ratio RFPs to RELAX experiments
NASA Astrophysics Data System (ADS)
McCollam, K. J.; den Hartog, D. J.; Jacobson, C. M.; Sovinec, C. R.; Masamune, S.; Sanpei, A.
2016-10-01
Standard reversed-field pinch (RFP) plasmas provide a nonlinear dynamical system as a validation domain for numerical MHD simulation codes, with applications in general toroidal confinement scenarios including tokamaks. Using the NIMROD code, we simulate the nonlinear evolution of RFP plasmas similar to those in the RELAX experiment. The experiment's modest Lundquist numbers S (as low as a few times 10^4) make closely matching MHD simulations tractable given present computing resources. Its low aspect ratio (~2) motivates a comparison study using cylindrical and toroidal geometries in NIMROD. We present initial results from nonlinear single-fluid runs at S = 10^4 for both geometries and a range of equilibrium parameters, which preliminarily show that the magnetic fluctuations are roughly similar between the two geometries and between simulation and experiment, though there appear to be some qualitative differences in their temporal evolution. Runs at higher S are planned. This work is supported by the U.S. DOE and by the Japan Society for the Promotion of Science.
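For reference, the Lundquist number quoted above is the ratio of the resistive diffusion time to the Alfvén transit time,

    S = \frac{\tau_R}{\tau_A} = \frac{\mu_0 \, a \, v_A}{\eta},

with a the minor radius, v_A the Alfvén speed, and eta the resistivity, so S of order 10^4 corresponds to a comparatively resistive, and therefore computationally tractable, plasma.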
Comparative Implementation of High Performance Computing for Power System Dynamic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng
Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.
Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.
2008-06-01
An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near equilibrium flows (DREAM-I) or output instantaneous particle data obtained by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
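The reported scatter reduction is consistent with simple ensemble statistics: averaging N statistically independent runs reduces the standard error by roughly

    \sigma_{\mathrm{ensemble}} \approx \frac{\sigma_{\mathrm{single}}}{\sqrt{N}},

so N = 10 gives an expected factor of sqrt(10) ≈ 3.2, bracketing the observed 2.5-3.3 times reduction.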
The Effects of Training on Anxiety and Task Performance in Simulated Suborbital Spaceflight.
Blue, Rebecca S; Bonato, Frederick; Seaton, Kimberly; Bubka, Andrea; Vardiman, Johnené L; Mathers, Charles; Castleberry, Tarah L; Vanderploeg, James M
2017-07-01
In commercial spaceflight, anxiety could become mission-impacting, causing negative experiences or endangering the flight itself. We studied layperson response to four varied-length training programs (ranging from 1 h-2 d of preparation) prior to centrifuge simulation of launch and re-entry acceleration profiles expected during suborbital spaceflight. We examined subject task execution, evaluating performance in high-stress conditions. We sought to identify any trends in demographics, hemodynamics, or similar factors in subjects with the highest anxiety or poorest tolerance of the experience. Volunteers participated in one of four centrifuge training programs of varied complexity and duration, culminating in two simulated suborbital spaceflights. At most, subjects underwent seven centrifuge runs over 2 d, including two +Gz runs (peak +3.5 Gz, Run 2) and two +Gx runs (peak +6.0 Gx, Run 4) followed by three runs approximating suborbital spaceflight profiles (combined +Gx and +Gz, peak +6.0 Gx and +4.0 Gz). Two cohorts also received dedicated anxiety-mitigation training. Subjects were evaluated on their performance on various tasks, including a simulated emergency. Participating in 2-7 centrifuge exposures were 148 subjects (105 men, 43 women, age range 19-72 yr, mean 39.4 ± 13.2 yr, body mass index range 17.3-38.1, mean 25.1 ± 3.7). There were 10 subjects who withdrew or limited their G exposure; a history of motion sickness was associated with opting out. Shorter training programs were associated with elevated hemodynamic responses. Single-directional G training did not significantly improve tolerance. Training programs appear to work best when they are high fidelity, and sequential exposures may improve tolerance of physical/psychological flight stressors. The studied variables did not predict anxiety-related responses to these centrifuge profiles. Blue RS, Bonato F, Seaton K, Bubka A, Vardiman JL, Mathers C, Castleberry TL, Vanderploeg JM. The effects of training on anxiety and task performance in simulated suborbital spaceflight. Aerosp Med Hum Perform. 2017; 88(7):641-650.
Automated Carrier Landing of an Unmanned Combat Aerial Vehicle Using Dynamic Inversion
2007-06-01
[Nomenclature and table fragments: CN, normal force coefficient; CA, axial force coefficient; Ixz_b = 0 slug·ft^2.] The aircraft has a single engine inlet for a single, centerline-mounted turbofan engine. For purposes of this research, the ... assumed to remain constant for each simulation run and were based on an assumed 10% fuel load with full weapons [2]. The rest of these values were ...
NASA Technical Reports Server (NTRS)
hoelzer, H. D.; Fourroux, K. A.; Rickman, D. L.; Schrader, C. M.
2011-01-01
Figures of Merit (FoMs) and the FoM software provide a method for quantitatively evaluating the quality of a regolith simulant by comparing the simulant to a reference material. FoMs may be used for comparing a simulant to actual regolith material, for specification (by stating the values a simulant's FoMs must attain to be suitable for a given application), and for comparing simulants from different vendors or production runs. FoMs may even be used to compare different simulants to each other. A single FoM is conceptually an algorithm that computes a single number quantifying the similarity or difference between one characteristic of a simulant material and a reference material, providing a clear measure of how well the simulant and reference material match. FoMs are constructed to lie between zero and 1, with zero indicating a poor match or no match and 1 indicating a perfect match. FoMs are defined for modal composition, particle size distribution, particle shape distribution (aspect ratio and angularity), and density. This TM covers the mathematics, use, installation, and licensing of the existing FoM code in detail.
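The TM defines the actual FoM algorithms; purely as an illustration, the sketch below shows one plausible way to score a single characteristic (here a binned particle size distribution) on a 0-to-1 scale where 1 is a perfect match. The L1-distance normalization is an assumption for this example, not the documented FoM formula.

```python
import numpy as np

def figure_of_merit(simulant_fractions, reference_fractions):
    """Score similarity of one characteristic (e.g. mass fraction per size bin)
    on a 0..1 scale: 1 = perfect match, 0 = no overlap. Illustrative only."""
    s = np.asarray(simulant_fractions, dtype=float)
    r = np.asarray(reference_fractions, dtype=float)
    s, r = s / s.sum(), r / r.sum()            # compare distribution shapes, not totals
    return 1.0 - 0.5 * np.abs(s - r).sum()     # 1 minus half the L1 distance

# Example: a simulant close to, but not exactly matching, the reference
print(figure_of_merit([0.2, 0.5, 0.3], [0.25, 0.45, 0.30]))  # ~0.95
```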
MDGRAPE-4: a special-purpose computer system for molecular dynamics simulations.
Ohmura, Itta; Morimoto, Gentaro; Ohno, Yousuke; Hasegawa, Aki; Taiji, Makoto
2014-08-06
We are developing the MDGRAPE-4, a special-purpose computer system for molecular dynamics (MD) simulations. MDGRAPE-4 is designed to achieve strong scalability for protein MD simulations through the integration of general-purpose cores, dedicated pipelines, memory banks and network interfaces (NIFs) to create a system on chip (SoC). Each SoC has 64 dedicated pipelines that are used for non-bonded force calculations and run at 0.8 GHz. Additionally, it has 65 Tensilica Xtensa LX cores with single-precision floating-point units that are used for other calculations and run at 0.6 GHz. At peak performance levels, each SoC can evaluate 51.2 G interactions per second. It also has 1.8 MB of embedded shared memory banks and six network units with a peak bandwidth of 7.2 GB/s for the three-dimensional torus network. The system consists of 512 (8×8×8) SoCs in total, mounted on 64 node modules of eight SoCs each. Optical transmitters/receivers are used for internode communication. The expected maximum power consumption is 50 kW. While the MDGRAPE-4 software is still being improved, we plan to run MD simulations on MDGRAPE-4 in 2014. The MDGRAPE-4 system will enable long-time molecular dynamics simulations of small systems. It is also useful for multiscale molecular simulations where the particle simulation parts often become bottlenecks.
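The quoted per-SoC peak follows directly from the pipeline count and clock rate, assuming one pairwise interaction per pipeline per cycle; the system-level figure below is a simple extrapolation from the stated numbers, not a value reported by the authors.

```latex
\text{per SoC: } 64\ \text{pipelines} \times 0.8\,\text{GHz} = 51.2\times 10^{9}\ \text{interactions/s}
\qquad
\text{system: } 512\ \text{SoCs} \times 51.2\times 10^{9}\ \text{s}^{-1} \approx 2.6\times 10^{13}\ \text{interactions/s}
```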
Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units
NASA Astrophysics Data System (ADS)
Kemal, Jonathan Yashar
For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033 × 1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
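Taken at face value, the quoted figures imply the following consistency check; the parallel efficiency is an inference from the stated ratios, not a number reported separately in the abstract.

```latex
13 \times 1033 \times 1033 \approx 13.87\times 10^{6}\ \text{grid points}, \qquad
\frac{T_{1\,\mathrm{CPU}}}{T_{8\,\mathrm{CPU}}} \approx \frac{39.5}{6.0} \approx 6.6
\;\Rightarrow\; \text{8-core CPU parallel efficiency} \approx 6.6/8 \approx 82\%.
```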
HYDES: A generalized hybrid computer program for studying turbojet or turbofan engine dynamics
NASA Technical Reports Server (NTRS)
Szuch, J. R.
1974-01-01
This report describes HYDES, a hybrid computer program capable of simulating one-spool turbojet, two-spool turbojet, or two-spool turbofan engine dynamics. HYDES is also capable of simulating two- or three-stream turbofans with or without mixing of the exhaust streams. The program is intended to reduce the time required for implementing dynamic engine simulations. HYDES was developed for running on the Lewis Research Center's Electronic Associates (EAI) 690 Hybrid Computing System and satisfies the 16384-word core-size and hybrid-interface limits of that machine. The program could be modified for running on other computing systems. The use of HYDES to simulate a single-spool turbojet and a two-spool, two-stream turbofan engine is demonstrated. The form of the required input data is shown and samples of output listings (teletype) and transient plots (x-y plotter) are provided. HYDES is shown to be capable of performing both steady-state design and off-design analyses and transient analyses.
for the game. Subsequent duels, flown with single armed escorts, calculated reduction in losses and damage states. For the study, hybrid computer ... 6) a duel between a ground weapon, armed escort, and formation of lift aircraft. (Author)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, William Michael; Plimpton, Steven James; Wang, Peng
2010-03-01
LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.
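As a language-agnostic illustration of the spatial-decomposition idea (written in Python rather than LAMMPS input syntax), each MPI rank would own one rectangular subdomain of the simulation box and only the particles inside it; the mapping below is a conceptual sketch, not LAMMPS code.

```python
import numpy as np

def assign_to_subdomains(positions, box_length, ranks_per_dim):
    """Map particle positions to the rank owning each rectangular subdomain of a
    cubic box split ranks_per_dim times along x, y and z (conceptual sketch)."""
    cell = box_length / ranks_per_dim
    ijk = np.clip((positions // cell).astype(int), 0, ranks_per_dim - 1)
    # Flatten the (i, j, k) subdomain index into a single rank id
    return ijk[:, 0] * ranks_per_dim**2 + ijk[:, 1] * ranks_per_dim + ijk[:, 2]

positions = np.random.default_rng(1).uniform(0.0, 10.0, size=(1000, 3))
owners = assign_to_subdomains(positions, box_length=10.0, ranks_per_dim=2)  # 8 ranks
print(np.bincount(owners))  # roughly equal particle counts per subdomain
```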
Rover Attitude and Pointing System Simulation Testbed
NASA Technical Reports Server (NTRS)
Vanelli, Charles A.; Grinblat, Jonathan F.; Sirlin, Samuel W.; Pfister, Sam
2009-01-01
The MER (Mars Exploration Rover) Attitude and Pointing System Simulation Testbed Environment (RAPSSTER) provides a simulation platform for the development and test of GNC (guidance, navigation, and control) flight algorithm designs for the Mars rovers. Although the platform was specifically tailored to the MERs, it has since been used in the development of rover algorithms for the Mars Science Laboratory (MSL) as well. The software provides an integrated simulation and software testbed environment for the development of Mars rover attitude and pointing flight software. It provides an environment that is able to run the MER GNC flight software directly (as opposed to running an algorithmic model of the MER GNC flight code). This improves simulation fidelity and confidence in the results. Furthermore, the simulation environment allows the user to single-step through its execution, pausing and restarting at will. The system also provides for the introduction of simulated faults specific to Mars rover environments that cannot be replicated in other testbed platforms, to stress test the GNC flight algorithms under examination. The software provides facilities to do these stress tests in ways that cannot be done in the real-time flight system testbeds, such as time-jumping (both forwards and backwards), and introduction of simulated actuator faults that would be difficult, expensive, and/or destructive to implement in the real-time testbeds. Actual flight-quality codes can be incorporated back into the development-test suite of GNC developers, closing the loop between the GNC developers and the flight software developers. The software provides fully automated scripting, allowing multiple tests to be run with varying parameters, without human supervision.
NASA Astrophysics Data System (ADS)
Nabil, Mahdi; Rattner, Alexander S.
The volume-of-fluid (VOF) approach is a mature technique for simulating two-phase flows. However, VOF simulation of phase-change heat transfer is still in its infancy. Multiple closure formulations have been proposed in the literature, each suited to different applications. While these have enabled significant research advances, few implementations are publicly available, actively maintained, or inter-operable. Here, a VOF solver is presented (interThermalPhaseChangeFoam), which incorporates an extensible framework for phase-change heat transfer modeling, enabling simulation of diverse phenomena in a single environment. The solver employs object oriented OpenFOAM library features, including Run-Time-Type-Identification to enable rapid implementation and run-time selection of phase change and surface tension force models. The solver is packaged with multiple phase change and surface tension closure models, adapted and refined from earlier studies. This code has previously been applied to study wavy film condensation, Taylor flow evaporation, nucleate boiling, and dropwise condensation. Tutorial cases are provided for simulation of horizontal film condensation, smooth and wavy falling film condensation, nucleate boiling, and bubble condensation. Validation and grid sensitivity studies, interfacial transport models, effects of spurious currents from surface tension models, effects of artificial heat transfer due to numerical factors, and parallel scaling performance are described in detail in the Supplemental Material (see Appendix A). By incorporating the framework and demonstration cases into a single environment, users can rapidly apply the solver to study phase-change processes of interest.
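The solver's run-time selection of closures uses OpenFOAM's C++ mechanisms; the sketch below mimics the same registry pattern in Python to show how a phase-change closure could be chosen by name at run time. The model names and closure expressions are hypothetical placeholders, not models shipped with interThermalPhaseChangeFoam.

```python
# Registry mimicking run-time selection of a phase-change closure by name.
PHASE_CHANGE_MODELS = {}

def register(name):
    def wrap(cls):
        PHASE_CHANGE_MODELS[name] = cls
        return cls
    return wrap

@register("interfaceResistance")
class InterfaceResistanceModel:
    def mass_source(self, T, T_sat):
        return 1.0e-3 * (T - T_sat)          # placeholder closure, not a real correlation

@register("thermallyLimited")
class ThermallyLimitedModel:
    def mass_source(self, T, T_sat):
        return 5.0e-4 * max(T - T_sat, 0.0)  # placeholder closure

def make_model(name):
    return PHASE_CHANGE_MODELS[name]()       # analogous to naming the model in a case dictionary

model = make_model("interfaceResistance")
print(model.mass_source(T=374.0, T_sat=373.15))
```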
Reducing EnergyPlus Run Time For Code Compliance Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.
2014-09-12
Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code baseline building models, together with mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of using this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
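A sketch of the underlying idea: simulate only four representative weeks (one per quarter) and scale each week's result to stand in for its quarter. The week choices and the factor-of-13 scaling below are assumptions for illustration; the study's actual week selection and post-processing may differ.

```python
import numpy as np

def quarterly_week_estimate(hourly_energy, week_starts=(2, 15, 28, 41)):
    """Estimate annual energy from 4 one-week simulations (one per quarter),
    scaling each simulated week by 13 to represent its quarter (sketch only)."""
    hours_per_week = 168
    total = 0.0
    for w in week_starts:                        # assumed representative weeks
        start = w * hours_per_week
        total += hourly_energy[start:start + hours_per_week].sum() * 13.0
    return total

rng = np.random.default_rng(0)
annual = rng.uniform(5.0, 20.0, size=8760)       # stand-in hourly energy use, kWh
print(quarterly_week_estimate(annual), annual.sum())
```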
Simulation of LHC events on a million threads
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2015-12-01
Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics
NASA Astrophysics Data System (ADS)
Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.
2017-05-01
We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.
OpenKnowledge for peer-to-peer experimentation in protein identification by MS/MS
2011-01-01
Background Traditional scientific workflow platforms usually run individual experiments with little evaluation and analysis of performance, as required by automated experimentation in which scientists are allowed to access numerous applicable workflows rather than being committed to a single one. Experimental protocols and data under a peer-to-peer environment could potentially be shared freely without any single point of authority to dictate how experiments should be run. In such an environment it is necessary to have mechanisms by which each individual scientist (peer) can assess, locally, how he or she wants to be involved with others in experiments. This study aims to implement and demonstrate simple peer ranking under the OpenKnowledge peer-to-peer infrastructure through both simulated and real-world bioinformatics experiments involving multi-agent interactions. Methods A simulated experiment environment with a peer ranking capability was specified by the Lightweight Coordination Calculus (LCC) and automatically executed under the OpenKnowledge infrastructure. Peers such as MS/MS protein identification services (including web-enabled and independent programs) were made accessible as OpenKnowledge Components (OKCs) for automated execution as peers in the experiments. The performance of the peers in these automated experiments was monitored and evaluated by simple peer ranking algorithms. Results Peer ranking experiments with simulated peers exhibited characteristic behaviours, e.g., a power-law effect (a few peers dominate), similar to that observed in the traditional Web. Real-world experiments were run using an interaction model in LCC involving two different types of MS/MS protein identification peers, viz., peptide fragment fingerprinting (PFF) and de novo sequencing, with another peer ranking algorithm based simply on counting successful and failed runs. This study demonstrated a novel integration and useful evaluation of specific proteomic peers and found MASCOT to be a dominant peer as judged by peer ranking. Conclusion The simulated and real-world experiments in the present study demonstrated that the OpenKnowledge infrastructure with peer ranking capability can serve as an evaluative environment for automated experimentation. PMID:22192521
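The second ranking algorithm described, based simply on counting successful and failed runs, can be illustrated as below; ranking peers by success rate is one plausible reading of that description, not the exact OpenKnowledge implementation.

```python
from collections import defaultdict

def rank_peers(run_log):
    """Rank peers by success rate over observed runs.
    run_log: iterable of (peer_name, succeeded) tuples."""
    counts = defaultdict(lambda: [0, 0])                # peer -> [successes, failures]
    for peer, ok in run_log:
        counts[peer][0 if ok else 1] += 1
    scores = {p: s / (s + f) for p, (s, f) in counts.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

log = [("MASCOT", True), ("MASCOT", True), ("PFF-peer", True),
       ("PFF-peer", False), ("deNovo-peer", False)]
print(rank_peers(log))   # MASCOT ranks first in this toy log
```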
Organization and use of a Software/Hardware Avionics Research Program (SHARP)
NASA Technical Reports Server (NTRS)
Karmarkar, J. S.; Kareemi, M. N.
1975-01-01
The organization and use of the software/hardware avionics research program (SHARP), developed to duplicate the automatic portion of the STOLAND simulator system on a general-purpose computer system (i.e., an IBM 360), are described. The program's uses are: (1) to conduct comparative evaluation studies of current and proposed airborne and ground system concepts via single-run or Monte Carlo simulation techniques, and (2) to provide a software tool for efficient algorithm evaluation and development for the STOLAND avionics computer.
FLAME: A platform for high performance computing of complex systems, applied for three case studies
Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...
2011-01-01
FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. Modellers are typically hindered by the complexity of porting models to parallel platforms and by the time taken to run large simulations on a single machine; FLAME overcomes both obstacles. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.
TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1
NASA Technical Reports Server (NTRS)
Bellenot, S. F.
1994-01-01
The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization. The user does not have to add any more special logic to aid in synchronization, nor give any synchronization advice, nor even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. It is designed to run on the Sun3/Sun4 series computers and the BBN "Butterfly" GP-1000 computer. The standard distribution medium for this package is a .25 inch tape cartridge in TAR format. TWOS was developed in 1989 and updated in 1991. This program is a copyrighted work with all copyright vested in NASA. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
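A highly simplified sketch of the rollback idea underlying the Time Warp mechanism: a logical process saves its state after each event and, on receiving a straggler message whose timestamp precedes its local virtual time, rolls back to the latest saved state before that timestamp. Anti-messages, GVT computation, and re-execution of rolled-back events are all omitted; this is an illustration, not TWOS code.

```python
import bisect

class LogicalProcess:
    def __init__(self, initial_state):
        self.lvt = 0.0                               # local virtual time
        self.state = dict(initial_state)
        self.saved = [(0.0, dict(initial_state))]    # (timestamp, state snapshot)

    def handle(self, timestamp, event):
        if timestamp < self.lvt:                     # straggler message: roll back
            i = bisect.bisect_right([t for t, _ in self.saved], timestamp) - 1
            self.lvt, self.state = self.saved[i][0], dict(self.saved[i][1])
            del self.saved[i + 1:]                   # discard now-invalid snapshots
            # (a real Time Warp system would also re-execute the rolled-back events
            #  and cancel sent messages via anti-messages; omitted here)
        self.state[event] = self.state.get(event, 0) + 1   # toy event effect
        self.lvt = timestamp
        self.saved.append((timestamp, dict(self.state)))

lp = LogicalProcess({})
for t, ev in [(1.0, "a"), (3.0, "b"), (2.0, "c")]:   # the 2.0 message arrives late
    lp.handle(t, ev)
print(lp.lvt, lp.state)   # rolled back past 3.0, then processed the straggler
```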
Battery Storage Evaluation Tool, version 1.x
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-02
The battery storage evaluation tool developed at Pacific Northwest National Laboratory is used to run a one-year simulation to evaluate the benefits of battery storage for multiple grid applications, including energy arbitrage, balancing service, capacity value, distribution system equipment deferral, and outage mitigation. This tool is based on optimal control strategies that capture multiple services from a single energy storage device. In this control strategy, at each hour a look-ahead optimization is first formulated and solved to determine the battery's base operating point; a minute-by-minute simulation is then performed to represent the actual battery operation.
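The control strategy described (hourly look-ahead optimization for a base operating point, followed by minute-by-minute simulation) has the general rolling-horizon structure sketched below; the horizon length, objective, and function names are placeholders, not the PNNL tool's interfaces.

```python
def run_year(battery, price_forecast, solve_lookahead, simulate_minute, horizon_h=24):
    """Rolling-horizon control sketch: each hour, optimize a base operating point
    over the look-ahead horizon, then simulate actual operation minute by minute."""
    for hour in range(8760):
        forecast = price_forecast[hour:hour + horizon_h]
        base_power_kw = solve_lookahead(battery.state_of_charge, forecast)   # hourly set-point
        for minute in range(60):
            battery = simulate_minute(battery, base_power_kw, hour, minute)  # actual dispatch
    return battery
```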
Suppressing correlations in massively parallel simulations of lattice models
NASA Astrophysics Data System (ADS)
Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle
2017-11-01
For lattice Monte Carlo simulations parallelization is crucial to make studies of large systems and long simulation time feasible, while sequential simulations remain the gold-standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30 × over a parallel CPU implementation on a single socket and at least 180 × with respect to the sequential reference.
ms2: A molecular simulation tool for thermodynamic properties
NASA Astrophysics Data System (ADS)
Deublein, Stephan; Eckl, Bernhard; Stoll, Jürgen; Lishchuk, Sergey V.; Guevara-Carrion, Gabriela; Glass, Colin W.; Merker, Thorsten; Bernreuther, Martin; Hasse, Hans; Vrabec, Jadran
2011-11-01
This work presents the molecular simulation program ms2 that is designed for the calculation of thermodynamic properties of bulk fluids in equilibrium consisting of small electro-neutral molecules. ms2 features the two main molecular simulation techniques, molecular dynamics (MD) and Monte-Carlo. It supports the calculation of vapor-liquid equilibria of pure fluids and multi-component mixtures described by rigid molecular models on the basis of the grand equilibrium method. Furthermore, it is capable of sampling various classical ensembles and yields numerous thermodynamic properties. To evaluate the chemical potential, Widom's test molecule method and gradual insertion are implemented. Transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism. ms2 is designed to meet the requirements of academia and industry, particularly achieving short response times and straightforward handling. It is written in Fortran90 and optimized for a fast execution on a broad range of computer architectures, spanning from single processor PCs over PC-clusters and vector computers to high-end parallel machines. The standard Message Passing Interface (MPI) is used for parallelization and ms2 is therefore easily portable to different computing platforms. Feature tools facilitate the interaction with the code and the interpretation of input and output files. The accuracy and reliability of ms2 has been shown for a large variety of fluids in preceding work.
Program summary
Program title: ms2
Catalogue identifier: AEJF_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJF_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Special Licence supplied by the authors
No. of lines in distributed program, including test data, etc.: 82 794
No. of bytes in distributed program, including test data, etc.: 793 705
Distribution format: tar.gz
Programming language: Fortran90
Computer: The simulation tool ms2 is usable on a wide variety of platforms, from single processor machines over PC-clusters and vector computers to vector-parallel architectures. (Tested with Fortran compilers: gfortran, Intel, PathScale, Portland Group and Sun Studio.)
Operating system: Unix/Linux, Windows
Has the code been vectorized or parallelized?: Yes, Message Passing Interface (MPI) protocol.
Scalability: Excellent scalability up to 16 processors for molecular dynamics and >512 processors for Monte-Carlo simulations.
RAM: ms2 runs on single processors with 512 MB RAM. The memory demand rises with increasing number of processors used per node and increasing number of molecules.
Classification: 7.7, 7.9, 12
External routines: Message Passing Interface (MPI)
Nature of problem: Calculation of application-oriented thermodynamic properties for rigid electro-neutral molecules: vapor-liquid equilibria, thermal and caloric data as well as transport properties of pure fluids and multi-component mixtures.
Solution method: Molecular dynamics, Monte-Carlo, various classical ensembles, grand equilibrium method, Green-Kubo formalism.
Restrictions: No. The system size is user-defined. Typical problems addressed by ms2 can be solved by simulating systems containing typically 2000 molecules or less.
Unusual features: Feature tools are available for creating input files, analyzing simulation results and visualizing molecular trajectories.
Additional comments: Sample makefiles for multiple operation platforms are provided. Documentation is provided with the installation package and is available at http://www.ms-2.de.
Running time: The running time of ms2 depends on the problem set, the system size and the number of processes used in the simulation. Running four processes on a "Nehalem" processor, simulations calculating VLE data take between two and twelve hours, calculating transport properties between six and 24 hours.
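As one example of the Green-Kubo formalism that ms2 uses for transport properties, the self-diffusion coefficient follows from the velocity autocorrelation function; this is the standard textbook form, quoted here for context rather than taken from the ms2 documentation.

```latex
D = \frac{1}{3N}\int_{0}^{\infty}\left\langle \sum_{i=1}^{N}\mathbf{v}_i(t)\cdot\mathbf{v}_i(0)\right\rangle dt
```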
Performance of a small compression ignition engine fuelled by liquified petroleum gas
NASA Astrophysics Data System (ADS)
Ambarita, Himsar; Yohanes Setyawan, Eko; Ginting, Sibuk; Naibaho, Waldemar
2017-09-01
In this work, a small air-cooled, single-cylinder diesel engine with a rated power of 2.5 kW at 3000 rpm is tested in two different modes. In the first mode, the CI engine runs on diesel fuel. In the second mode, the CI engine runs on liquefied petroleum gas (LPG). In order to simulate the load, a generator is employed. The load is fixed at 800 W and the engine speed is varied from 2400 rpm to 3400 rpm. The output power, specific fuel consumption, and brake thermal efficiency of the engine in both modes are compared. The results show that the output power of the CI engine run on LPG fuel is comparable with that of the engine run on diesel fuel. However, the specific fuel consumption of the CI engine with LPG fuel is on average 17.53% higher than that of the CI engine run on diesel fuel, and its efficiency is on average 21.43% lower.
Optimum Vehicle Component Integration with InVeST (Integrated Vehicle Simulation Testbed)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, W; Paddack, E; Aceves, S
2001-12-27
We have developed an Integrated Vehicle Simulation Testbed (InVeST). InVeST is based on the concept of co-simulation, and it allows the development of virtual vehicles that can be analyzed and optimized as an overall integrated system. The virtual vehicle is defined by selecting different vehicle components from a component library. Vehicle component models can be written in multiple programming languages running on different computer platforms. At the same time, InVeST provides full protection for proprietary models. Co-simulation is a cost-effective alternative to competing methodologies, such as developing a translator or selecting a single programming language for all vehicle components. InVeST has recently been demonstrated using a transmission model and a transmission controller model. The transmission model was written in SABER and ran on a Sun/Solaris workstation, while the transmission controller was written in MATRIXx and ran on a PC running Windows NT. The demonstration was successfully performed. Future plans include applying co-simulation and InVeST to the analysis and optimization of multiple complex systems, including those of Intelligent Transportation Systems.
Visual Elements in Flight Simulation
1975-07-01
control. In consequence, current efforts to create appropriate visual simulations run the gamut from efforts toward almost complete replication of the ...
The ISS as a platform for a fully simulated mars voyage
NASA Astrophysics Data System (ADS)
Narici, Livio; Reitz, Guenther
2016-07-01
The ISS can mimic the impact of microgravity, radiation, and the living and psychological conditions that astronauts will face during a deep space cruise, for example to Mars. This suggests the ISS as the most valuable "analogue" for deep space exploration. NASA has indeed suggested a 'full-up deep space simulation on last available ISS Mission: 6/7 crew for one year duration; full simulation of time delays & autonomous operations'. This idea should be pushed further. It is indeed conceivable to use the ISS as the final "analogue", performing a real 'dry run' of a deep space mission (such as a mission to Mars), as close as reasonably possible to what will be the real voyage. This Mars ISS dry run (ISS4Mars) would last 500-800 days, mimicking most of the challenges that will be faced, such as mission length, isolation, food provision, decision making, time delays, and health monitoring with diagnostic and therapeutic actions: not a collection of "single experiments", but a complete exploration simulation where all the pieces will come together for the first in-space simulated Mars voyage. Most of these challenges are the same as those that will be encountered during a Moon voyage, with the most evident exceptions being the duration and the communication delay. By the time of the Mars ISS dry run, the science and technological challenges will have to be mostly solved by dedicated work. These solutions will be synergistically deployed in the dry run, which will simulate all the different aspects of the voyage: the trip to Mars, the stay on the planet, and the return to Earth. During the dry run (1) there will be no arrivals or departures of spacecraft; (2) a proper communication delay with the ground will be simulated; (3) decision processes will migrate from the ground to the ISS; and (4) the stay on Mars will be simulated. The Mars ISS dry run will use just a portion of the ISS, totally isolated from the rest of the station, leaving to the other portions the task of providing the operational support needed for ISS survival as well as support in emergency situations. Besides helping to focus the attention of the many space and space-related programs on the quest for Mars, ISS4Mars will maintain a high level of attention from the funding institutions and provide an important focus for the general public. This talk will present the many scientific issues still open to be addressed (see for example the disciplinary reports of the THESEUS project#), some examples of the challenging tests that could be performed, some of the operational challenges, as well as a list of some of the issues not likely or possible to be simulated. # http://www.theseus-eu.org
CO2 Push-Pull Single Fault Injection Simulations
Borgia, Andrea; Oldenburg, Curtis (ORCID:0000000201326016); Zhang, Rui; Pan, Lehua; Daley, Thomas M.; Finsterle, Stefan; Ramakrishnan, T.S.; Doughty, Christine; Jung, Yoojin; Lee, Kyung Jae; Altundas, Bilgin; Chugunov, Nikita
2017-09-21
ASCII text files containing grid-block name, X-Y-Z location, and multiple parameters from TOUGH2 simulation output of CO2 injection into an idealized single fault representing a dipping normal fault at the Desert Peak geothermal field (readable by GMS). The fault is composed of a damage zone, a fault gouge and a slip plane. The runs are described in detail in the following: Borgia A., Oldenburg C.M., Zhang R., Jung Y., Lee K.J., Doughty C., Daley T.M., Chugunov N., Altundas B., Ramakrishnan T.S., 2017. Carbon Dioxide Injection for Enhanced Characterization of Faults and Fractures in Geothermal Systems. Proceedings of the 42nd Workshop on Geothermal Reservoir Engineering, Stanford University, Stanford, California, February 13-17.
Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors
NASA Technical Reports Server (NTRS)
Boussalis, Dhemetrios; Bayard, David S.
2013-01-01
G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, masscons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to run on any engineer's desktop computer.
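"Propagating the statistics" refers to covariance propagation. In a generic linearized Kalman filter step (standard form, quoted for context and not necessarily G-CAT's exact formulation), the state covariance is propagated and updated as:

```latex
P_{k+1}^{-} = F_k P_k^{+} F_k^{\top} + Q_k, \qquad
K_k = P_k^{-} H_k^{\top}\left(H_k P_k^{-} H_k^{\top} + R_k\right)^{-1}, \qquad
P_k^{+} = \left(I - K_k H_k\right) P_k^{-}
```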
Nett, Michael; Avelar, Rui; Sheehan, Michael; Cushner, Fred
2011-03-01
Standard medial parapatellar arthrotomies of 10 cadaveric knees were closed with either conventional interrupted absorbable sutures (control group, mean of 19.4 sutures) or a single running knotless bidirectional barbed absorbable suture (experimental group). Water-tightness of the arthrotomy closure was compared by simulating a tense hemarthrosis and measuring arthrotomy leakage over 3 minutes. Mean total leakage was 356 mL and 89 mL in the control and experimental groups, respectively (p = 0.027). Using 8 of the 10 knees (4 closed with control sutures, 4 closed with an experimental suture), a tense hemarthrosis was again created, and iatrogenic suture rupture was performed: a proximal suture was cut at 1 minute; a distal suture was cut at 2 minutes. The impact of suture rupture was compared by measuring total arthrotomy leakage over 3 minutes. Mean total leakage was 601 mL and 174 mL in the control and experimental groups, respectively (p = 0.3). In summary, using a cadaveric model, arthrotomies closed with a single bidirectional barbed running suture were statistically significantly more water-tight than those closed using a standard interrupted technique. The sample size was insufficient to determine whether the two closure techniques differed in leakage volume after suture rupture.
DOT National Transportation Integrated Search
2000-08-01
The National Highway Traffic Safety Administration (NHTSA) has developed its Light Vehicle Antilock Brake Systems (ABS) Research Program in an effort to determine the cause(s) of the apparent increase in single-vehicle run-off-road crashes and decrea...
Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.
2017-06-19
The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(1/N). In this work, we consider how the true variance can be minimized given a total amount of work available as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with very large generation size.
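The statement that the variance of the mean converges more slowly than O(1/N) reflects the standard result for autocorrelated tallies, quoted here as background rather than as the paper's derivation: positive intergenerational correlation ρ_k inflates the variance relative to the uncorrelated value σ²/N.

```latex
\operatorname{Var}(\bar{x}) \;=\; \frac{\sigma^{2}}{N}\left(1 + 2\sum_{k=1}^{N-1}\left(1-\frac{k}{N}\right)\rho_{k}\right)
```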
A high-order language for a system of closely coupled processing elements
NASA Technical Reports Server (NTRS)
Feyock, S.; Collins, W. R.
1986-01-01
The research reported in this paper was occasioned by the requirements of the Real-Time Digital Simulator (RTDS) project under way at NASA Lewis Research Center. The RTDS simulation scheme employs a network of CPUs running lock-step cycles in the parallel computations of jet airplane simulations. The project's need for a high order language (HOL) that would allow non-experts to write simulation applications and that could be implemented on a possibly varying network can best be fulfilled by using the programming language Ada. We describe how the simulation problems can be modeled in Ada, how to map a single, multi-processing Ada program into code for individual processors, regardless of network reconfiguration, and why some Ada language features are particularly well-suited to network simulations.
Scaling NS-3 DCE Experiments on Multi-Core Servers
2016-06-15
that work well together. 3.2 Simulation Server Details: We ran the simulations on a Dell PowerEdge M520 blade server [8] running Ubuntu Linux 14.04 ... To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server ... MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on ...
NO PLIF imaging in the CUBRC 48-inch shock tunnel
NASA Astrophysics Data System (ADS)
Jiang, N.; Bruzzese, J.; Patton, R.; Sutton, J.; Yentsch, R.; Gaitonde, D. V.; Lempert, W. R.; Miller, J. D.; Meyer, T. R.; Parker, R.; Wadham, T.; Holden, M.; Danehy, P. M.
2012-12-01
Nitric oxide planar laser-induced fluorescence (NO PLIF) imaging is demonstrated at a 10-kHz repetition rate in the Calspan University at Buffalo Research Center's (CUBRC) 48-inch Mach 9 hypervelocity shock tunnel using a pulse burst laser-based high frame rate imaging system. Sequences of up to ten images are obtained internal to a supersonic combustor model, located within the shock tunnel, during a single ~10-millisecond duration run of the ground test facility. Comparison with a CFD simulation shows good overall qualitative agreement in the jet penetration and spreading observed with an average of forty individual PLIF images obtained during several facility runs.
NASA Astrophysics Data System (ADS)
Benjamin, J.; Rosser, N. J.; Dunning, S.; Hardy, R. J.; Karim, K.; Szczucinski, W.; Norman, E. C.; Strzelecki, M.; Drewniak, M.
2014-12-01
Risk assessments of the threat posed by rock avalanches rely upon numerical modelling of potential run-out and spreading, and are contingent upon a thorough understanding of the flow dynamics inferred from deposits left by previous events. Few records exist of multiple rock avalanches with boundary conditions sufficiently consistent to develop a set of more generalised rules for behaviour across events. A unique cluster of 20 large (3 × 10^6 to 94 × 10^6 m^3) rock avalanche deposits along the Vaigat Strait, West Greenland, offers a rare opportunity to model a large sample of adjacent events sourced from a stretch of coastal mountains of relatively uniform geology and structure. Our simulations of these events were performed using VolcFlow, a geophysical mass flow code developed to simulate volcanic debris avalanches. Rheological calibration of the model was performed using a well-constrained event at Paatuut (AD 2000). The best-fit simulation assumes a constant retarding stress with a collisional stress coefficient (T0 = 250 kPa, ξ = 0.01), and simulates run-out to within ±0.3% of that observed. Despite being widely used to simulate rock avalanche propagation, other models that assume either a Coulomb frictional or a Voellmy rheology failed to reproduce the observed event characteristics and deposit distribution at Paatuut. We applied this calibration to 19 other events, simulating rock avalanche motion across 3D terrain of varying levels of complexity. Our findings illustrate both the utility of the approach and the sensitivity of satisfactorily modelling a single rock avalanche to the choice of rheology, alongside the validity of applying the same parameters elsewhere, even within similar boundary conditions. VolcFlow can plausibly account for the observed morphology of a series of deposits emplaced by events of different types, although its performance is sensitive to a range of topographic and geometric factors. These exercises show encouraging results in the model's ability to simulate a series of events using a single set of parameters obtained by back-analysis of the Paatuut event alone. The results also hold important implications for our process understanding of rock avalanches in confined fjord settings, where correctly modelling material flux at the point of entry into the water is critical in tsunami generation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Zhenhuan; Boyuka, David; Zou, X
The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induces heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run time, before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while also limiting the performance impact on running applications to a reasonable level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacagnina, Carlo; Hasekamp, Otto P.; Bian, Huisheng
2015-09-27
The aerosol Single Scattering Albedo (SSA) over the global oceans is evaluated based on polarimetric measurements by the PARASOL satellite. The retrieved values for SSA and Aerosol Optical Depth (AOD) agree well with the ground-based measurements of the AErosol RObotic NETwork (AERONET). The global coverage provided by the PARASOL observations represents a unique opportunity to evaluate SSA and AOD simulated by atmospheric transport model runs, as performed in the AeroCom framework. The SSA estimate provided by the AeroCom models is generally higher than the SSA retrieved from both PARASOL and AERONET. On the other hand, the mean simulated AOD is about right or slightly underestimated compared with observations. An overestimate of the SSA by the models would suggest that these simulate an overly strong aerosol radiative cooling at the top of the atmosphere (TOA) and underestimate it at the surface. This implies that aerosols have a potentially stronger impact within the atmosphere than currently simulated.
Run-up Variability due to Source Effects
NASA Astrophysics Data System (ADS)
Del Giudice, Tania; Zolezzi, Francesca; Traverso, Chiara; Valfrè, Giulio; Poggi, Pamela; Parker, Eric J.
2010-05-01
This paper investigates the variability of tsunami run-up at a specific location due to uncertainty in earthquake source parameters. It is important to quantify this 'inter-event' variability for probabilistic assessments of tsunami hazard. In principle, this aspect of variability could be studied by comparing field observations at a single location from a number of tsunamigenic events caused by the same source. As such an extensive dataset does not exist, we decided to study the inter-event variability through numerical modelling. We attempt to answer the question 'What is the potential variability of tsunami wave run-up at a specific site, for a given magnitude earthquake occurring at a known location?'. The uncertainty is expected to arise from the lack of knowledge regarding the specific details of the fault rupture 'source' parameters. The following steps were followed: the statistical distributions of the main earthquake source parameters affecting the tsunami height were established by studying fault plane solutions of known earthquakes; a case study based on a possible tsunami impact on the Egyptian coast was set up and simulated, varying the geometrical parameters of the source; the simulation results were analyzed, deriving relationships between run-up height and source parameters; using the derived relationships, a Monte Carlo simulation was performed in order to create the dataset necessary to investigate the inter-event variability of the run-up height along the coast; and the inter-event variability of the run-up height along the coast was investigated. Given the distribution of source parameters and their variability, we studied how this variability propagates to the run-up height, using the Cornell 'Multi-grid coupled Tsunami Model' (COMCOT). The case study was based on the large thrust faulting offshore the south-western Greek coast, thought to have been responsible for the infamous 1303 tsunami. Numerical modelling of the event was used to assess the impact on the North African coast. The effects of uncertainty in fault parameters were assessed by perturbing the base model and observing the variation in wave height along the coast. The tsunami wave run-up was computed at 4020 locations along the Egyptian coast between longitudes 28.7 E and 33.8 E. To assess the effects of fault parameter uncertainty, input model parameters were varied and the effects on run-up were analyzed. The simulations show that for a given point there are linear relationships between run-up and both fault dislocation and rupture length. A superposition analysis shows that a linear combination of the effects of the different source parameters (evaluated results) leads to a good approximation of the simulated results. This relationship is then used as the basis for a Monte Carlo simulation. The Monte Carlo simulation was performed for 1600 scenarios at each of the 4020 points along the coast. The coefficient of variation (the ratio between the standard deviation of the results and the average of the run-up heights along the coast) ranges between 0.14 and 3.11, with an average value along the coast equal to 0.67. The coefficient of variation of normalized run-up has been compared with the standard deviation of spectral acceleration attenuation laws used for probabilistic seismic hazard assessment studies. These values have a similar meaning, and the uncertainty in the two cases is similar. The 'rule of thumb' relationship between mean and sigma can be expressed as follows: μ + σ ≈ 2μ.
The implication is that the uncertainty in run-up estimation should give a range of values within approximately two times the average. This uncertainty should be considered in tsunami hazard analysis, such as inundation and risk maps, evacuation plans and the other related steps.
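For clarity, the coefficient of variation and the quoted rule of thumb are related as follows; this is a restatement of the abstract's own definitions, with symbols added here.

```latex
\mathrm{CV} = \frac{\sigma}{\mu}, \qquad
\mu + \sigma \approx 2\mu \;\Longleftrightarrow\; \sigma \approx \mu \;\Longleftrightarrow\; \mathrm{CV} \approx 1
```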
NASA Astrophysics Data System (ADS)
Remillard, J.
2015-12-01
Two low-cloud periods from the CAP-MBL deployment of the ARM Mobile Facility at the Azores are selected through a cluster analysis of ISCCP cloud property matrices, so as to represent two low-cloud weather states that the GISS GCM severely underpredicts not only in that region but also globally. The two cases represent (1) shallow cumulus clouds occurring in a cold-air outbreak behind a cold front, and (2) stratocumulus clouds occurring when the region was dominated by a high-pressure system. Observations and MERRA reanalysis are used to derive specifications used for large-eddy simulations (LES) and single-column model (SCM) simulations. The LES captures the major differences in horizontal structure between the two low-cloud fields, but there are unconstrained uncertainties in cloud microphysics and challenges in reproducing W-band Doppler radar moments. The SCM run on the vertical grid used for CMIP-5 runs of the GCM does a poor job of representing the shallow cumulus case and is unable to maintain an overcast deck in the stratocumulus case, providing some clues regarding problems with low-cloud representation in the GCM. SCM sensitivity tests with a finer vertical grid in the boundary layer show substantial improvement in the representation of cloud amount for both cases. GCM simulations with CMIP-5 versus finer vertical gridding in the boundary layer are compared with observations. The adoption of a two-moment cloud microphysics scheme in the GCM is also tested in this framework. The methodology followed in this study, with the process-based examination of different time and space scales in both models and observations, represents a prototype for GCM cloud parameterization improvements.
Czaplewski, Cezary; Kalinowski, Sebastian; Liwo, Adam; Scheraga, Harold A
2009-03-10
The replica exchange (RE) method is increasingly used to improve sampling in molecular dynamics (MD) simulations of biomolecular systems. Recently, we implemented the united-residue UNRES force field for mesoscopic MD. Initial results from UNRES MD simulations show that we are able to simulate folding events that take place in a microsecond or even a millisecond time scale. To speed up the search further, we applied the multiplexing replica exchange molecular dynamics (MREMD) method. The multiplexed variant (MREMD) of the RE method, developed by Rhee and Pande, differs from the original RE method in that several trajectories are run at a given temperature. Each set of trajectories run at a different temperature constitutes a layer. Exchanges are attempted not only within a single layer but also between layers. The code has been parallelized and scales up to 4000 processors. We present a comparison of canonical MD, REMD, and MREMD simulations of protein folding with the UNRES force-field. We demonstrate that the multiplexed procedure increases the power of replica exchange MD considerably and convergence of the thermodynamic quantities is achieved much faster.
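For context, exchanges in replica exchange MD (whether within a temperature layer or between layers in MREMD) are typically accepted with the standard Metropolis criterion for swapping configurations held at inverse temperatures β_i and β_j with energies E_i and E_j; this is the general form, not anything specific to UNRES.

```latex
P_{\mathrm{acc}} = \min\!\left\{1,\ \exp\!\big[(\beta_i - \beta_j)(E_i - E_j)\big]\right\}
```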
Leduc, Renee Y M; Rauw, Gail; Baker, Glen B; McDermid, Heather E
2017-01-01
Environmental enrichment items such as running wheels can promote the wellbeing of laboratory mice. Growing evidence suggests that wheel running simulates exercise effects in many mouse models of human conditions, but this activity also might change other aspects of mouse behavior. In this case study, we show that the presence of running wheels leads to pronounced and permanent circling behavior with route-tracing in a proportion of the male mice of a genetically distinct cohort. The genetic background of this cohort includes a mutation in Arhgap19, but genetic crosses showed that an unknown second-site mutation likely caused the induced circling behavior. Behavioral tests for inner-ear function indicated a normal sense of gravity in the circling mice. However, the levels of dopamine, serotonin, and some dopamine metabolites were lower in the brains of circling male mice than in mice of the same genetic background that were weaned without wheels. Circling was seen in both singly and socially housed male mice. The additional stress of fighting may have exacerbated the predisposition to circling in the socially housed animals. Singly and socially housed male mice without wheels did not circle. Our current findings highlight the importance and possibly confounding nature of the environmental and genetic background in mouse behavioral studies, given that the circling behavior and alterations in dopamine and serotonin levels in this mouse cohort occurred only when the male mice were housed with running wheels. PMID:28315651
Absolute comparison of simulated and experimental protein-folding dynamics
NASA Astrophysics Data System (ADS)
Snow, Christopher D.; Nguyen, Houbi; Pande, Vijay S.; Gruebele, Martin
2002-11-01
Protein folding is difficult to simulate with classical molecular dynamics. Secondary structure motifs such as α-helices and β-hairpins can form in 0.1-10µs (ref. 1), whereas small proteins have been shown to fold completely in tens of microseconds. The longest folding simulation to date is a single 1-µs simulation of the villin headpiece; however, such single runs may miss many features of the folding process as it is a heterogeneous reaction involving an ensemble of transition states. Here, we have used a distributed computing implementation to produce tens of thousands of 5-20-ns trajectories (700µs) to simulate mutants of the designed mini-protein BBA5. The fast relaxation dynamics these predict were compared with the results of laser temperature-jump experiments. Our computational predictions are in excellent agreement with the experimentally determined mean folding times and equilibrium constants. The rapid folding of BBA5 is due to the swift formation of secondary structure. The convergence of experimentally and computationally accessible timescales will allow the comparison of absolute quantities characterizing in vitro and in silico (computed) protein folding.
CASL VMA FY16 Milestone Report (L3:VMA.VUQ.P13.07) Westinghouse Mixing with COBRA-TF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, Natalie
2016-09-30
COBRA-TF (CTF) is a low-resolution code currently maintained as CASL's subchannel analysis tool. CTF operates as a two-phase, compressible code over a mesh composed of subchannels and axially discretized nodes. In part because CTF is a low-resolution code, simulation run time is not computationally expensive, only on the order of minutes. High-resolution codes such as STAR-CCM+ can be used to train lower-fidelity codes such as CTF. Unlike STAR-CCM+, CTF has no turbulence model, only a two-phase turbulent mixing coefficient, β. β can be set to a constant value or calculated in terms of Reynolds number using an empirical correlation. Results from STAR-CCM+ can be used to inform the appropriate value of β. Once β is calibrated, CTF runs can be an inexpensive alternative to costly STAR-CCM+ runs for scoping analyses. Based on the results of CTF runs, STAR-CCM+ can be run for specific parameters of interest. CASL areas of application are CIPS for single-phase analysis and DNB-CTF for two-phase analysis.
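The workflow described above amounts to a one-parameter calibration: choose β so that CTF reproduces a mixing-sensitive quantity predicted by STAR-CCM+. A minimal sketch of that idea follows; `run_ctf` is a hypothetical stand-in for launching a CTF case (not an actual CTF interface), and the placeholder model, targets, and bounds are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def run_ctf(beta, reynolds):
    """Hypothetical stand-in for a CTF run returning a mixing-sensitive quantity
    (e.g., a normalized exit-temperature spread) for one case."""
    return 1.0 / (1.0 + beta * np.sqrt(reynolds) / 100.0)   # placeholder model

def calibrate_beta(reynolds_cases, starccm_targets):
    """Pick a constant beta minimizing the misfit to STAR-CCM+ results."""
    def misfit(beta):
        pred = np.array([run_ctf(beta, re) for re in reynolds_cases])
        return float(np.sum((pred - starccm_targets) ** 2))
    return minimize_scalar(misfit, bounds=(0.0, 0.5), method="bounded").x

re_cases = np.array([1e4, 3e4, 1e5])
targets = np.array([0.83, 0.74, 0.61])       # pretend STAR-CCM+ results
print(calibrate_beta(re_cases, targets))     # ~0.2 with these made-up numbers
```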
Acceleration of discrete stochastic biochemical simulation using GPGPU
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936
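The core algorithm being accelerated is Gillespie's direct-method SSA, which the GPU implementation executes for many independent realizations at once. A minimal single-reaction sketch, with ensemble statistics gathered over repeated runs, is shown below; the reaction, rate constant, and run count are illustrative only, and the loop over realizations is exactly what the GPU version parallelizes.

```python
import numpy as np

def ssa_decay(x0=100, k=0.1, t_end=10.0, rng=None):
    """Direct-method SSA for a single decay reaction X -> 0 (illustrative only)."""
    rng = rng or np.random.default_rng()
    t, x = 0.0, x0
    times, counts = [0.0], [x0]
    while t < t_end and x > 0:
        a = k * x                       # propensity of the only reaction
        t += rng.exponential(1.0 / a)   # time to the next reaction event
        x -= 1
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

# Multiple independent realizations -> ensemble statistics; this is the work the
# GPU implementation runs concurrently, one realization per parallel unit.
runs = [ssa_decay() for _ in range(1000)]
final_counts = [counts[-1] for _, counts in runs]
print(np.mean(final_counts), np.std(final_counts))
```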
NASA Astrophysics Data System (ADS)
Ivkin, N.; Liu, Z.; Yang, L. F.; Kumar, S. S.; Lemson, G.; Neyrinck, M.; Szalay, A. S.; Braverman, V.; Budavari, T.
2018-04-01
Cosmological N-body simulations play a vital role in studying models for the evolution of the Universe. To compare to observations and make a scientific inference, statistical analysis of large simulation datasets, e.g., finding halos, obtaining multi-point correlation functions, is crucial. However, traditional in-memory methods for these tasks do not scale to the prohibitively large datasets of modern simulations. Our prior paper (Liu et al., 2015) proposes memory-efficient streaming algorithms that can find the largest halos in a simulation with up to 10⁹ particles on a small server or desktop. However, this approach fails when scaled directly to larger datasets. This paper presents a robust streaming tool that leverages state-of-the-art techniques in GPU boosting, sampling, and parallel I/O to significantly improve performance and scalability. Our rigorous analysis of the sketch parameters improves the previous results from finding the centers of the 10³ largest halos (Liu et al., 2015) to ∼10⁴-10⁵, and reveals the trade-offs between memory, running time, and number of halos. Our experiments show that our tool can scale to datasets with up to ∼10¹² particles while using less than an hour of running time on a single Nvidia GTX 1080 GPU.
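The sketch-based streaming idea is to estimate per-cell particle counts approximately, in sublinear memory, as particles arrive in a stream, while keeping a running shortlist of the densest cells as halo-center candidates. The toy implementation below uses a Count-Min sketch for the counts; it illustrates the general technique only, is not the authors' tool, and the width/depth/pruning parameters are arbitrary.

```python
import heapq
import random

class CountMinSketch:
    """Minimal Count-Min sketch for approximate per-cell particle counts."""
    def __init__(self, width=2048, depth=4, seed=0):
        rnd = random.Random(seed)
        self.width = width
        self.salts = [rnd.getrandbits(32) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def add(self, key):
        for row, salt in enumerate(self.salts):
            self.table[row][hash((salt, key)) % self.width] += 1

    def estimate(self, key):
        return min(self.table[row][hash((salt, key)) % self.width]
                   for row, salt in enumerate(self.salts))

def heavy_cells(cell_stream, k=10):
    """Track the k approximately densest grid cells seen in a particle stream."""
    cms, top = CountMinSketch(), {}
    for cell in cell_stream:
        cms.add(cell)
        top[cell] = cms.estimate(cell)
        if len(top) > 4 * k:   # prune the candidate list occasionally
            top = dict(heapq.nlargest(2 * k, top.items(), key=lambda kv: kv[1]))
    return heapq.nlargest(k, top.items(), key=lambda kv: kv[1])
```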
Phast4Windows: A 3D graphical user interface for the reactive-transport simulator PHAST
Charlton, Scott R.; Parkhurst, David L.
2013-01-01
Phast4Windows is a Windows® program for developing and running groundwater-flow and reactive-transport models with the PHAST simulator. This graphical user interface allows definition of grid-independent spatial distributions of model properties—the porous media properties, the initial head and chemistry conditions, boundary conditions, and locations of wells, rivers, drains, and accounting zones—and other parameters necessary for a simulation. Spatial data can be defined without reference to a grid by drawing, by point-by-point definitions, or by importing files, including ArcInfo® shape and raster files. All definitions can be inspected, edited, deleted, moved, copied, and switched from hidden to visible through the data tree of the interface. Model features are visualized in the main panel of the interface, so that it is possible to zoom, pan, and rotate features in three dimensions (3D). PHAST simulates single phase, constant density, saturated groundwater flow under confined or unconfined conditions. Reactions among multiple solutes include mineral equilibria, cation exchange, surface complexation, solid solutions, and general kinetic reactions. The interface can be used to develop and run simple or complex models, and is ideal for use in the classroom, for analysis of laboratory column experiments, and for development of field-scale simulations of geochemical processes and contaminant transport.
Grace: A cross-platform micromagnetic simulator on graphics processing units
NASA Astrophysics Data System (ADS)
Zhu, Ru
2015-12-01
A micromagnetic simulator running on graphics processing units (GPUs) is presented. Different from GPU implementations of other research groups which are predominantly running on NVidia's CUDA platform, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware platform independent. It runs on GPUs from venders including NVidia, AMD and Intel, and achieves significant performance boost as compared to previous central processing unit (CPU) simulators, up to two orders of magnitude. The simulator paved the way for running large size micromagnetic simulations on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards, and is freely available to download.
MHSS: a material handling system simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pomernacki, L.; Hollstien, R.B.
1976-04-07
A Material Handling System Simulator (MHSS) program is described that provides specialized functional blocks for modeling and simulation of nuclear material handling systems. Models of nuclear fuel fabrication plants may be built using functional blocks that simulate material receiving, storage, transport, inventory, processing, and shipping operations as well as the control and reporting tasks of operators or on-line computers. Blocks are also provided that allow the user to observe and gather statistical information on the dynamic behavior of simulated plants over single or replicated runs. Although it is currently being developed for the nuclear materials handling application, MHSS can be adapted to other industries in which material accountability is important. In this paper, emphasis is on the simulation methodology of the MHSS program with application to the nuclear material safeguards problem. (auth)
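The block-oriented, replicated-run approach can be illustrated with a tiny discrete-event model of a receive-process-ship line, with statistics gathered over repeated runs. The sketch below is a generic event-queue simulation, not MHSS itself; the block structure, service-time distributions, and replication count are illustrative.

```python
import heapq
import random
import statistics

def run_once(n_batches=50, seed=0):
    """One replication of a receive -> process -> ship line (illustrative)."""
    rng = random.Random(seed)
    events, shipped_at = [], []
    t_arrival = 0.0
    for i in range(n_batches):                      # material receiving block
        t_arrival += rng.expovariate(1.0)           # inter-arrival time
        heapq.heappush(events, (t_arrival, 'arrive', i))
    busy_until = 0.0
    while events:
        t, kind, i = heapq.heappop(events)
        if kind == 'arrive':                        # processing block (one server)
            start = max(t, busy_until)
            busy_until = start + rng.uniform(0.5, 1.5)
            heapq.heappush(events, (busy_until, 'ship', i))
        else:                                       # shipping block
            shipped_at.append(t)
    return max(shipped_at)

# Replicated runs give statistics on dynamic behaviour, as MHSS does per block.
makespans = [run_once(seed=s) for s in range(100)]
print(statistics.mean(makespans), statistics.stdev(makespans))
```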
Structural safety of trams in case of misguidance in a switch
NASA Astrophysics Data System (ADS)
Schindler, Christian; Schwickert, Martin; Simonis, Andreas
2010-08-01
Tram vehicles mainly operate on street tracks where sometimes misguidance in switches occurs due to unfavourable conditions. Generally, in this situation, the first running gear of the vehicle follows the bend track while the next running gears continue straight ahead. This leads to a constraint that can only be solved if the vehicle's articulation is damaged or the wheel derails. The last-mentioned situation is less critical in terms of safety and costs. Five different tram types, one of them high floor, the rest low floor, were examined analytically. Numerical simulation was used to determine which wheel would be the first to derail and what level of force is needed in the articulation area between two carbodies to make a tram derail. It was shown that with pure analytical simulation, only an idea of which tram type behaves better or worse in such a situation can be gained, while a three-dimensional computational simulation gives more realistic values for the forces that arise. Three of the four low-floor tram types need much higher articulation forces to make a wheel derail in a switch misguidance situation. One particular three-car type with two single-axle running gears underneath the centre car must be designed to withstand nearly three times higher articulation forces than a conventional high-floor articulated tram. Tram designers must be aware of that and should design the carbody accordingly.
NASA Astrophysics Data System (ADS)
Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.
2017-04-01
The marginal opportunity cost of water refers to benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management as it can be used for better water allocation or better system operation, and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet, such models' reliance on optimization means they have difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding in the model a small quantity of water at the place and time where the opportunity cost should be computed, then running a simulation and comparing the difference in system benefits. The added system benefits per unit of water added to the system then provide an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. Yet, this method requires one simulation run per node and per time step, which is demanding computationally for large-scale systems and short time steps (e.g., a day or a week). Moreover, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time. Both backward inductions only require linear operations, and the resulting algorithm tracks the maximal benefit that can be obtained by having an additional unit of water at any node in the network and at any date in time. Results (1) can be obtained from a rule-based simulation using a single post-processing run, and (2) are exactly the (gross) benefit forgone by not allocating an additional unit of water to its most productive use. The proposed method is applied to London's water resource system to track the value of storage in the city's water supply reservoirs on the Thames River throughout a weekly 85-year simulation. Results, obtained in 0.4 seconds on a single processor, reflect the environmental cost of water shortage. This fast computation allows visualizing the seasonal variations of the opportunity cost depending on reservoir levels, demonstrating the potential of this approach for exploring water values and their variations using simulation models with multiple runs (e.g. of stochastically generated plausible future river inflows).
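A schematic of the double backward induction is sketched below under strong simplifying assumptions: a four-node tree network, a single storage node, no routing or losses, and made-up marginal benefit values. The topology, node names, and benefit values are all illustrative (not the London system), and the sketch only shows the recursion pattern: downstream-first within each time step, then backwards in time.

```python
import numpy as np

# Toy network: two headwaters -> confluence (with storage) -> outlet.
downstream = {'head_A': 'conf', 'head_B': 'conf', 'conf': 'outlet', 'outlet': None}
storage_nodes = {'conf'}
order = ['outlet', 'conf', 'head_A', 'head_B']   # outlet first, then upstream

T = 52                                           # weekly steps over one year
rng = np.random.default_rng(0)
marginal_benefit = {n: rng.uniform(0.0, 1.0, T) for n in downstream}  # $/unit

value = {n: np.zeros(T + 1) for n in downstream}   # terminal values are zero
for t in range(T - 1, -1, -1):                     # backward in time
    for n in order:                                # backward in space
        best = marginal_benefit[n][t]              # use the extra unit locally
        if downstream[n] is not None:
            best = max(best, value[downstream[n]][t])   # or pass it downstream
        if n in storage_nodes:
            best = max(best, value[n][t + 1])           # or carry it to next week
        value[n][t] = best

print(value['head_A'][0], value['conf'][0])
```

At each node and date, the marginal value of one extra unit of water is the best of using it locally, releasing it downstream, or (at storage nodes) carrying it forward in time.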
Interactions between hyporheic flow produced by stream meanders, bars, and dunes
Stonedahl, Susa H.; Harvey, Judson W.; Packman, Aaron I.
2013-01-01
Stream channel morphology from grain-scale roughness to large meanders drives hyporheic exchange flow. In practice, it is difficult to model hyporheic flow over the wide spectrum of topographic features typically found in rivers. As a result, many studies only characterize isolated exchange processes at a single spatial scale. In this work, we simulated hyporheic flows induced by a range of geomorphic features including meanders, bars and dunes in sand bed streams. Twenty cases were examined with 5 degrees of river meandering. Each meandering river model was run initially without any small topographic features. Models were run again after superimposing only bars and then only dunes, and then run a final time after including all scales of topographic features. This allowed us to investigate the relative importance and interactions between flows induced by different scales of topography. We found that dunes typically contributed more to hyporheic exchange than bars and meanders. Furthermore, our simulations show that the volume of water exchanged and the distributions of hyporheic residence times resulting from various scales of topographic features are close to, but not linearly additive. These findings can potentially be used to develop scaling laws for hyporheic flow that can be widely applied in streams and rivers.
NASA Technical Reports Server (NTRS)
Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.
2012-01-01
This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded using the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
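The synchronization pattern described above, two processes advancing in lockstep with periodic data exchanges, can be sketched with a simple pipe-based protocol. The sketch below is not LAPIN's or Simulink's actual interface: the `./legacy_inlet` executable, the whitespace-delimited message format, and `compute_boundary_conditions` are all hypothetical placeholders.

```python
import subprocess

def compute_boundary_conditions(t, state):
    """Stand-in for the graphical model's contribution at time t."""
    return [t, 0.0 if state is None else state[0]]

def cosimulate(steps=100, dt=0.001):
    """Hypothetical lockstep co-simulation: each step, write boundary conditions
    to the legacy process's stdin and read its state back from stdout."""
    legacy = subprocess.Popen(
        ["./legacy_inlet"],                 # placeholder for the legacy executable
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True, bufsize=1)
    state = None
    for k in range(steps):
        bc = compute_boundary_conditions(k * dt, state)
        legacy.stdin.write(" ".join(f"{v:.6e}" for v in bc) + "\n")
        legacy.stdin.flush()
        state = [float(x) for x in legacy.stdout.readline().split()]
    legacy.stdin.close()
    legacy.wait()
    return state
```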
Incompressible SPH (ISPH) with fast Poisson solver on a GPU
NASA Astrophysics Data System (ADS)
Chow, Alex D.; Rogers, Benedict D.; Lind, Steven J.; Stansby, Peter K.
2018-05-01
This paper presents a fast incompressible SPH (ISPH) solver implemented to run entirely on a graphics processing unit (GPU), capable of simulating several million particles in three dimensions on a single GPU. The ISPH algorithm is implemented by converting the highly optimised open-source weakly-compressible SPH (WCSPH) code DualSPHysics to run ISPH on the GPU, combining it with the open-source linear algebra library ViennaCL for fast solutions of the pressure Poisson equation (PPE). Several challenges are addressed with this research: constructing a PPE matrix every timestep on the GPU for moving particles, optimising the limited GPU memory, and exploiting fast matrix solvers. The ISPH pressure projection algorithm is implemented as four separate stages, each with a particle sweep, including an algorithm for the population of the PPE matrix suitable for the GPU, and mixed precision storage methods. An accurate and robust ISPH boundary condition ideal for parallel processing is also established by adapting an existing WCSPH boundary condition for ISPH. A variety of validation cases are presented: an impulsively started plate, incompressible flow around a moving square in a box, and dambreaks (2-D and 3-D) which demonstrate the accuracy, flexibility, and speed of the methodology. Fragmentation of the free surface is shown to influence the performance of matrix preconditioners and therefore the PPE matrix solution time. The Jacobi preconditioner demonstrates robustness and reliability in the presence of fragmented flows. For a dambreak simulation, GPU speed-ups of up to 10-18 times and 1.1-4.5 times are demonstrated compared with single-threaded and 16-threaded CPU run times, respectively.
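The role of the PPE solve and the Jacobi preconditioner can be illustrated on a small stand-in system: a 1-D sparse Poisson matrix solved with diagonally preconditioned conjugate gradients. This is a generic illustration of the linear-algebra step, not the DualSPHysics/ViennaCL implementation; the grid size and right-hand side are arbitrary.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 1000
h = 1.0 / (n + 1)
# 1-D Poisson matrix as a stand-in for the pressure Poisson equation (PPE).
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2
b = np.ones(n)                          # stand-in for the divergence source term

# Jacobi preconditioner: scale residuals by the inverse diagonal of A.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)

p, info = cg(A, b, M=M)
print("converged" if info == 0 else f"cg returned {info}")
```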
Optimal setups for forced-choice staircases with fixed step sizes.
García-Pérez, M A
2000-01-01
Forced-choice staircases with fixed step sizes are used in a variety of formats whose relative merits have never been studied. This paper presents a comparative study aimed at determining their optimal format. Factors included in the study were the up/down rule, the length (number of reversals), and the size of the steps. The study also addressed the issue of whether a protocol involving three staircases running for N reversals each (with a subsequent average of the estimates provided by each individual staircase) has better statistical properties than an alternative protocol involving a single staircase running for 3N reversals. In all cases the size of a step up was different from that of a step down, in the appropriate ratio determined by García-Pérez (Vision Research, 1998, 38, 1861-1881). The results of a simulation study indicate that a) there are no conditions in which the 1-down/1-up rule is advisable; b) different combinations of up/down rule and number of reversals appear equivalent in terms of precision and cost; c) using a single long staircase with 3N reversals is more efficient than running three staircases with N reversals each; d) to avoid bias and attain sufficient accuracy, threshold estimates should be based on at least 30 reversals; and e) to avoid excessive cost and imprecision, the size of the step up should be between 2/3 and 3/3 the (known or presumed) spread of the psychometric function. An empirical study with human subjects confirmed the major characteristics revealed by the simulations.
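This kind of simulation study is straightforward to reproduce in outline: run a fixed-step k-down/1-up staircase against an assumed psychometric function and estimate the threshold from the reversal levels. The sketch below is a generic illustration with made-up parameter values (threshold, spread, step sizes, reversal count); it is not the authors' simulation code and does not implement their specific step-size ratio.

```python
import math
import random

def psychometric(x, threshold=0.0, spread=1.0, gamma=0.5):
    """2AFC logistic psychometric function with guess rate 0.5 (illustrative)."""
    return gamma + (1.0 - gamma) / (1.0 + math.exp(-(x - threshold) / (spread / 4.0)))

def staircase(rule_down=3, n_reversals=30, step_up=0.6, step_down=0.2, start=2.0):
    """k-down/1-up staircase with fixed, asymmetric steps; returns reversal mean."""
    x, correct_run, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        correct = random.random() < psychometric(x)
        if correct:
            correct_run += 1
            if correct_run == rule_down:          # step down after k correct
                correct_run, step = 0, -step_down
            else:
                continue                          # no step yet
        else:
            correct_run, step = 0, +step_up       # step up after any error
        if direction and (step > 0) != (direction > 0):
            reversals.append(x)                   # direction change = reversal
        direction = step
        x += step
    return sum(reversals) / len(reversals)

estimates = [staircase() for _ in range(200)]
print(sum(estimates) / len(estimates))            # mean threshold estimate
```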
Rapid ISS Power Availability Simulator
NASA Technical Reports Server (NTRS)
Downing, Nicholas
2011-01-01
The ISS (International Space Station) Power Resource Officers (PROs) needed a tool to automate the calculation of thousands of ISS power availability simulations used to generate power constraint matrices. Each matrix contains 864 cells, and each cell represents a single power simulation that must be run. The tools available to the flight controllers were very operator intensive and not conducive to rapidly running the thousands of simulations necessary to generate the power constraint data. SOLAR is a Java-based tool that leverages commercial-off-the-shelf software (Satellite Toolkit) and an existing in-house ISS EPS model (SPEED) to rapidly perform thousands of power availability simulations. SOLAR has a very modular architecture and consists of a series of plug-ins that are loosely coupled. The modular architecture of the software allows for the easy replacement of the ISS power system model simulator, re-use of the Satellite Toolkit integration code, and separation of the user interface from the core logic. Satellite Toolkit (STK) is used to generate ISS eclipse and insolation times, solar beta angle, position of the solar arrays over time, and the amount of shadowing on the solar arrays, which is then provided to SPEED to calculate power generation forecasts. The power planning turn-around time is reduced from three months to two weeks (83-percent decrease) using SOLAR, and the amount of PRO power planning support effort is reduced by an estimated 30 percent.
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that less time is required and fewer steps are needed and so enables fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
Securing Sensitive Flight and Engine Simulation Data Using Smart Card Technology
NASA Technical Reports Server (NTRS)
Blaser, Tammy M.
2003-01-01
NASA Glenn Research Center has developed a smart card prototype capable of encrypting and decrypting disk files required to run a distributed aerospace propulsion simulation. Triple Data Encryption Standard (3DES) encryption is used to secure the sensitive intellectual property on disk before, during, and after simulation execution. The prototype operates as a secure system and maintains its authorized state by safely storing and permanently retaining the encryption keys only on the smart card. The prototype is capable of authenticating a single smart card user and includes pre-simulation and post-simulation tools for analysis and training purposes. The prototype's design is highly generic and can be used to protect any sensitive disk files, with growth capability to run multiple simulations. The NASA computer engineer developed the prototype on an interoperable programming environment to enable porting to other Numerical Propulsion System Simulation (NPSS) capable operating system environments.
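The encrypt-on-disk idea can be sketched with the PyCryptodome library's 3DES primitives, as below. This is an assumption-laden illustration, not the prototype's code: the prototype keeps its keys on the smart card rather than in a local variable and was not written in Python, and the key handling, file layout (IV prepended), and padding choices here are placeholders.

```python
from Crypto.Cipher import DES3
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

# Stand-in for a card-held key; parity bits adjusted as 3DES requires.
key = DES3.adjust_key_parity(get_random_bytes(24))

def encrypt_file(path_in, path_out, key):
    iv = get_random_bytes(8)
    cipher = DES3.new(key, DES3.MODE_CBC, iv)
    with open(path_in, "rb") as f:
        data = f.read()
    with open(path_out, "wb") as f:
        f.write(iv + cipher.encrypt(pad(data, DES3.block_size)))

def decrypt_file(path_in, path_out, key):
    with open(path_in, "rb") as f:
        iv, blob = f.read(8), f.read()
    cipher = DES3.new(key, DES3.MODE_CBC, iv)
    with open(path_out, "wb") as f:
        f.write(unpad(cipher.decrypt(blob), DES3.block_size))
```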
Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Chen, Yousu; Wu, Di
2015-12-09
Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor-based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
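A distributed-memory implementation of this kind partitions the equations across processes and exchanges boundary values each step. The mpi4py skeleton below is a schematic of that pattern only, not the paper's code; the local right-hand side, ring topology, and step counts are invented for illustration.

```python
# Schematic distributed-memory time stepping with mpi4py (illustrative only).
# Run with at least two ranks, e.g.: mpirun -n 4 python dynsim_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1000                       # equations owned by this rank
state = np.zeros(n_local)

def local_rhs(state, halo_left, halo_right):
    """Stand-in for the locally owned differential/algebraic equations."""
    return -0.1 * state + 0.01 * (halo_left + halo_right)

dt = 0.01
left, right = (rank - 1) % size, (rank + 1) % size
for step in range(100):
    # Exchange boundary (halo) values with neighbouring ranks.
    halo_left = comm.sendrecv(state[-1], dest=right, source=left)
    halo_right = comm.sendrecv(state[0], dest=left, source=right)
    state += dt * local_rhs(state, halo_left, halo_right)

total = comm.reduce(float(np.sum(state)), op=MPI.SUM, root=0)
if rank == 0:
    print("aggregate state:", total)
```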
NASA TileWorld manual (system version 2.2)
NASA Technical Reports Server (NTRS)
Philips, Andrew B.; Bresina, John L.
1991-01-01
This manual documents the commands of the NASA TileWorld simulator and provides information about how to run and extend it. The simulator, implemented in Common Lisp with Common Windows, encodes a particular range in a spectrum of domains for controllable research experiments. TileWorld consists of a two-dimensional grid of cells, a set of polygonal tiles, and a single agent which can grasp and move tiles. In addition to agent-executable actions, there is an external event over which the agent has no control; this event corresponds to a 'gust of wind'.
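For readers unfamiliar with the domain, the sketch below is a toy Python re-implementation of the TileWorld idea (the actual simulator is written in Common Lisp): a grid of cells, tiles, an agent that can move, grasp, and drop, and an uncontrollable 'gust of wind' event. The grid size, tile count, and method names are illustrative.

```python
import random

class ToyTileWorld:
    """Toy grid world: one agent, tiles on cells, plus an uncontrollable gust."""
    def __init__(self, width=8, height=8, n_tiles=5, seed=0):
        self.rng = random.Random(seed)
        self.width, self.height = width, height
        self.agent, self.held = (0, 0), False
        self.tiles = {(self.rng.randrange(width), self.rng.randrange(height))
                      for _ in range(n_tiles)}

    def move(self, dx, dy):                      # agent-executable action
        x, y = self.agent
        self.agent = (min(max(x + dx, 0), self.width - 1),
                      min(max(y + dy, 0), self.height - 1))

    def grasp(self):                             # pick up a tile under the agent
        if not self.held and self.agent in self.tiles:
            self.tiles.remove(self.agent)
            self.held = True

    def drop(self):
        if self.held and self.agent not in self.tiles:
            self.tiles.add(self.agent)
            self.held = False

    def gust_of_wind(self):                      # external event, not agent-controlled
        if self.tiles:
            tile = self.rng.choice(sorted(self.tiles))
            self.tiles.remove(tile)
            self.tiles.add((self.rng.randrange(self.width),
                            self.rng.randrange(self.height)))
```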
Ignoring the Innocent: Non-combatants in Urban Operations and in Military Models and Simulations
2006-01-01
such a model yields is a sufficiency theorem, a single run does not provide any information on the robustness of such theorems. That is, given that... often formally resolvable via inspection, simple differentiation, the implicit function theorem, comparative statistics, and so on. The only way to... Pythagoras, and Bactowars. For each, Grieger discusses model parameters, data collection, terrain, and other features. Grieger also discusses
Agent-Based Framework for Discrete Entity Simulations
2006-11-01
Postgres database server for environment queries of neighbors and continuum data. As expected for raw database queries (no database optimizations in... form. Eventually the code was ported to GNU C++ on the same single Intel Pentium 4 CPU running RedHat Linux 9.0 and Postgres database server... Again Postgres was used for environmental queries, and the tool remained relatively slow because of the immense number of queries necessary to assess
Parallel Multi-cycle LES of an Optical Pent-roof DISI Engine Under Motored Operating Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Dam, Noah; Sjöberg, Magnus; Zeng, Wei
The use of Large-eddy Simulations (LES) has increased due to their ability to resolve the turbulent fluctuations of engine flows and capture the resulting cycle-to-cycle variability. One drawback of LES, however, is the requirement to run multiple engine cycles to obtain the necessary cycle statistics for full validation. The standard method to obtain the cycles by running a single simulation through many engine cycles sequentially can take a long time to complete. Recently, a new strategy has been proposed by our research group to reduce the amount of time necessary to simulate the many engine cycles by running individual engine cycle simulations in parallel. With modern large computing systems this has the potential to reduce the amount of time necessary for a full set of simulated engine cycles to finish by up to an order of magnitude. In this paper, the Parallel Perturbation Methodology (PPM) is used to simulate up to 35 engine cycles of an optically accessible, pent-roof direct-injection spark-ignition (DISI) engine at two different motored engine operating conditions, one throttled and one un-throttled. Comparisons are made against corresponding sequential-cycle simulations to verify the similarity of results using either methodology. Mean results from the PPM approach are very similar to sequential-cycle results with less than 0.5% difference in pressure and a magnitude structure index (MSI) of 0.95. Differences in cycle-to-cycle variability (CCV) predictions are larger, but close to the statistical uncertainty in the measurement for the number of cycles simulated. PPM LES results were also compared against experimental data. Mean quantities such as pressure or mean velocities were typically matched to within 5-10%. Pressure CCVs were under-predicted, mostly due to the lack of any perturbations in the pressure boundary conditions between cycles. Velocity CCVs for the simulations had the same average magnitude as experiments, but the experimental data showed greater spatial variation in the root-mean-square (RMS). Conversely, circular standard deviation results showed greater repeatability of the flow directionality and swirl vortex positioning than the simulations.
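The parallel-cycle idea replaces one long sequential run with many independently perturbed single-cycle runs executed concurrently and then pooled for statistics. The sketch below mimics that workflow with Python's multiprocessing and a placeholder 'cycle' function; the perturbation seeds, the peak-pressure stand-in, and the cycle count are illustrative, not the LES solver.

```python
import multiprocessing as mp
import random
import statistics

def simulate_cycle(args):
    """Stand-in for one LES engine-cycle run with a perturbed initial flow field."""
    cycle_id, seed = args
    rng = random.Random(seed)
    peak_pressure = 30.0 + rng.gauss(0.0, 1.5)    # placeholder cycle outcome (bar)
    return cycle_id, peak_pressure

if __name__ == "__main__":
    n_cycles = 35
    jobs = [(i, 1000 + i) for i in range(n_cycles)]   # unique perturbation seeds
    with mp.Pool() as pool:
        results = pool.map(simulate_cycle, jobs)       # cycles run concurrently
    pressures = [p for _, p in results]
    print("mean:", statistics.mean(pressures),
          "CCV (stdev):", statistics.stdev(pressures))
```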
Analysis of WakeVAS Benefits Using ACES Build 3.2.1
NASA Technical Reports Server (NTRS)
Smith, Jeremy C.
2005-01-01
The FAA and NASA are currently engaged in a Wake Turbulence Research Program to revise wake turbulence separation standards, procedures, and criteria to increase airport capacity while maintaining or increasing safety. The research program is divided into three phases: Phase I, near-term procedural enhancements; Phase II, wind-dependent Wake Vortex Advisory System (WakeVAS) Concepts of Operations (ConOps); and Phase III, farther-term ConOps based on wake prediction and sensing. This report contains an analysis that evaluates the benefits of a closely spaced parallel runway (CSPR) Phase I ConOps, a single-runway and CSPR Phase II ConOps, and a single-runway Phase III ConOps. A series of simulation runs were performed using the Airspace Concepts Evaluation System (ACES) Build 3.2.1 air traffic simulator to provide an initial assessment of the reduction in delay and cost savings obtained by the use of a WakeVAS at selected U.S. airports. The ACES simulator is being developed by NASA Ames Research Center as part of the Virtual Airspace Modeling and Simulation (VAMS) program.
Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios
Banta, Edward R.
2014-01-01
Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.
GRAPE-6A: A Single-Card GRAPE-6 for Parallel PC-GRAPE Cluster Systems
NASA Astrophysics Data System (ADS)
Fukushige, Toshiyuki; Makino, Junichiro; Kawai, Atsushi
2005-12-01
In this paper, we describe the design and performance of GRAPE-6A, a special-purpose computer for gravitational many-body simulations. It was designed to be used with a PC cluster, in which each node has one GRAPE-6A. Such a configuration is particularly cost-effective in running parallel tree algorithms. Though the use of parallel tree algorithms was possible with the original GRAPE-6 hardware, it was not very cost-effective since a single GRAPE-6 board was still too fast and too expensive. Therefore, we designed GRAPE-6A as a single PCI card to minimize the reproduction cost and to optimize the computing speed. The peak performance is 130 Gflops for one GRAPE-6A board and 3.1 Tflops for our 24 node cluster. We describe the implementation of the tree, TreePM and individual timestep algorithms on both a single GRAPE-6A system and GRAPE-6A cluster. Using the tree algorithm on our 16-node GRAPE-6A system, we can complete a collisionless simulation with 100 million particles (8000 steps) within 10 days.
Improving the Amazonian Hydrologic Cycle in a Coupled Land-Atmosphere, Single Column Model
NASA Astrophysics Data System (ADS)
Harper, A. B.; Denning, S.; Baker, I.; Prihodko, L.; Branson, M.
2006-12-01
We have coupled a land-surface model, the Simple Biosphere Model (SiB3), to a single column of the Colorado State University General Circulation Model (CSU-GCM) in the Amazon River Basin. This is a preliminary step in the broader goal of improved simulation of Basin-wide hydrology. A previous version of the coupled model (SiB2) showed drought and catastrophic dieback of the Amazon rain forest. SiB3 includes updated soil hydrology and root physiology. Our test area for the coupled single column model is near Santarem, Brazil, where measurements from the km 83 flux tower in the Tapajos National Forest can be used to evaluate model output. The model was run for 2001 using NCEP2 Reanalysis as driver data. Preliminary results show that the updated biosphere model coupled to the GCM produces improved simulations of the seasonal cycle of surface water balance and precipitation. Comparisons of the diurnal and seasonal cycles of surface fluxes are also being made.
Changes in running pattern due to fatigue and cognitive load in orienteering.
Millet, Guillaume Y; Divert, Caroline; Banizette, Marion; Morin, Jean-Benoit
2010-01-01
The aim of this study was to examine the influence of fatigue on running biomechanics in normal running, in normal running with a cognitive task, and in running while map reading. Nineteen international and less experienced orienteers performed a fatiguing running exercise of duration and intensity similar to a classic distance orienteering race on an instrumented treadmill while performing mental arithmetic, an orienteering simulation, and control running at regular intervals. Two-way repeated-measures analysis of variance did not reveal any significant difference between mental arithmetic and control running for any of the kinematic and kinetic parameters analysed eight times over the fatiguing protocol. However, these parameters were systematically different between the orienteering simulation and the other two conditions (mental arithmetic and control running). The adaptations in orienteering simulation running were significantly more pronounced in the elite group when step frequency, peak vertical ground reaction force, vertical stiffness, and maximal downward displacement of the centre of mass during contact were considered. The effects of fatigue on running biomechanics depended on whether the orienteers read their map or ran normally. It is concluded that adding a cognitive load does not modify running patterns. Therefore, all changes in running pattern observed during the orienteering simulation, particularly in elite orienteers, are the result of adaptations to enable efficient map reading and/or potentially prevent injuries. Finally, running patterns are not affected to the same extent by fatigue when a map reading task is added.
Running Parallel Discrete Event Simulators on Sierra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, P. D.; Jefferson, D. R.
2015-12-03
In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.
Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.
Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang
2017-01-01
Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.
Long-range interactions and parallel scalability in molecular simulations
NASA Astrophysics Data System (ADS)
Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko
2007-01-01
Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single processor and parallel performance up to 8 nodes—we have also tested the scalability on four different networks, namely Infiniband, GigaBit Ethernet, Fast Ethernet, and nearly uniform memory architecture, i.e. communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of sizes 128, 512 and 2048 lipid molecules were used as the test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.
NASA Astrophysics Data System (ADS)
Wichmann, Volker
2017-09-01
The Gravitational Process Path (GPP) model can be used to simulate the process path and run-out area of gravitational processes based on a digital terrain model (DTM). The conceptual model combines several components (process path, run-out length, sink filling and material deposition) to simulate the movement of a mass point from an initiation site to the deposition area. For each component several modeling approaches are provided, which makes the tool configurable for different processes such as rockfall, debris flows or snow avalanches. The tool can be applied to regional-scale studies such as natural hazard susceptibility mapping but also contains components for scenario-based modeling of single events. Both the modeling approaches and precursor implementations of the tool have proven their applicability in numerous studies, also including geomorphological research questions such as the delineation of sediment cascades or the study of process connectivity. This is the first open-source implementation, completely re-written, extended and improved in many ways. The tool has been committed to the main repository of the System for Automated Geoscientific Analyses (SAGA) and thus will be available with every SAGA release.
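The process-path component can be pictured as a steepest-descent walk over the DTM combined with a run-out criterion; the toy sketch below uses a simple geometric travel-angle cutoff. It illustrates the general approach only, is not the SAGA GPP implementation, and the cell size, travel angle, stand-in terrain, and stopping rules are placeholder values.

```python
import numpy as np

def process_path(dtm, start, cellsize=10.0, travel_angle_deg=32.0):
    """Steepest-descent path with a geometric run-out stop criterion (toy sketch)."""
    path = [start]
    r, c = start
    z_start = dtm[r, c]
    tan_alpha = np.tan(np.radians(travel_angle_deg))
    while True:
        neighbours = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)
                      and 0 <= r + dr < dtm.shape[0] and 0 <= c + dc < dtm.shape[1]]
        drop, dr, dc = max((dtm[r, c] - dtm[r + dr, c + dc], dr, dc)
                           for dr, dc in neighbours)
        if drop <= 0:                              # local sink: stop here
            break
        r, c = r + dr, c + dc
        path.append((r, c))
        dist = cellsize * sum(np.hypot(a[0] - b[0], a[1] - b[1])
                              for a, b in zip(path[:-1], path[1:]))
        if (z_start - dtm[r, c]) / dist < tan_alpha:   # run-out length reached
            break
    return path

# Example: a slope that flattens downhill, with a little noise as a stand-in DTM.
rng = np.random.default_rng(0)
dtm = 1000.0 * np.exp(-np.arange(200) / 40.0)[:, None] + rng.normal(0, 0.1, (200, 200))
print(len(process_path(dtm, start=(5, 100))))
```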
Kasahara, Kota; Ma, Benson; Goto, Kota; Dasgupta, Bhaskar; Higo, Junichi; Fukuda, Ikuo; Mashimo, Tadaaki; Akiyama, Yutaka; Nakamura, Haruki
2016-01-01
Molecular dynamics (MD) is a promising computational approach to investigate dynamical behavior of molecular systems at the atomic level. Here, we present a new MD simulation engine named "myPresto/omegagene" that is tailored for enhanced conformational sampling methods with a non-Ewald electrostatic potential scheme. Our enhanced conformational sampling methods, e.g., the virtual-system-coupled multi-canonical MD (V-McMD) method, replace a multi-process parallelized run with multiple independent runs to avoid inter-node communication overhead. In addition, adopting the non-Ewald-based zero-multipole summation method (ZMM) makes it possible to eliminate the Fourier space calculations altogether. The combination of these state-of-the-art techniques realizes efficient and accurate calculations of the conformational ensemble at an equilibrium state. Taking advantage of these features, myPresto/omegagene is specialized for single-process execution on a graphics processing unit (GPU). We performed benchmark simulations for the 20-mer peptide, Trp-cage, with explicit solvent. One of the most thermodynamically stable conformations generated by the V-McMD simulation is very similar to an experimentally solved native conformation. Furthermore, the computation speed is four times faster than that of our previous simulation engine, myPresto/psygene-G. The new simulator, myPresto/omegagene, is freely available at the following URLs: http://www.protein.osaka-u.ac.jp/rcsfp/pi/omegagene/ and http://presto.protein.osaka-u.ac.jp/myPresto4/.
A PICKSC Science Gateway for enabling the common plasma physicist to run kinetic software
NASA Astrophysics Data System (ADS)
Hu, Q.; Winjum, B. J.; Zonca, A.; Youn, C.; Tsung, F. S.; Mori, W. B.
2017-10-01
Computer simulations offer tremendous opportunities for studying plasmas, ranging from simulations for students that illuminate fundamental educational concepts to research-level simulations that advance scientific knowledge. Nevertheless, there is a significant hurdle to using simulation tools. Users must navigate codes and software libraries, determine how to wrangle output into meaningful plots, and oftentimes confront a significant cyberinfrastructure with powerful computational resources. Science gateways offer a Web-based environment to run simulations without needing to learn or manage the underlying software and computing cyberinfrastructure. We discuss our progress on creating a Science Gateway for the Particle-in-Cell and Kinetic Simulation Software Center that enables users to easily run and analyze kinetic simulations with our software. We envision that this technology could benefit a wide range of plasma physicists, both in the use of our simulation tools as well as in its adaptation for running other plasma simulation software. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.
NASA Astrophysics Data System (ADS)
Kunz, Robert; Haworth, Daniel; Dogan, Gulkiz; Kriete, Andres
2006-11-01
Three-dimensional, unsteady simulations of multiphase flow, gas exchange, and particle/aerosol deposition in the human lung are reported. Surface data for human tracheo-bronchial trees are derived from CT scans, and are used to generate three- dimensional CFD meshes for the first several generations of branching. One-dimensional meshes for the remaining generations down to the respiratory units are generated using branching algorithms based on those that have been proposed in the literature, and a zero-dimensional respiratory unit (pulmonary acinus) model is attached at the end of each terminal bronchiole. The process is automated to facilitate rapid model generation. The model is exercised through multiple breathing cycles to compute the spatial and temporal variations in flow, gas exchange, and particle/aerosol deposition. The depth of the 3D/1D transition (at branching generation n) is a key parameter, and can be varied. High-fidelity models (large n) are run on massively parallel distributed-memory clusters, and are used to generate physical insight and to calibrate/validate the 1D and 0D models. Suitably validated lower-order models (small n) can be run on single-processor PC’s with run times that allow model-based clinical intervention for individual patients.
NASA Astrophysics Data System (ADS)
Escobar Gómez, J. D.; Torres-Verdín, C.
2018-03-01
Single-well pressure-diffusion simulators enable improved quantitative understanding of hydraulic-testing measurements in the presence of arbitrary spatial variations of rock properties. Simulators of this type implement robust numerical algorithms which are often computationally expensive, thereby making the solution of the forward modeling problem onerous and inefficient. We introduce a time-domain perturbation theory for anisotropic permeable media to efficiently and accurately approximate the transient pressure response of spatially complex aquifers. Although theoretically valid for any spatially dependent rock/fluid property, our single-phase flow study emphasizes arbitrary spatial variations of permeability and anisotropy, which constitute key objectives of hydraulic-testing operations. Contrary to time-honored techniques, the perturbation method invokes pressure-flow deconvolution to compute the background medium's permeability sensitivity function (PSF) with a single numerical simulation run. Subsequently, the first-order term of the perturbed solution is obtained by solving an integral equation that weighs the spatial variations of permeability with the spatial-dependent and time-dependent PSF. Finally, discrete convolution transforms the constant-flow approximation to arbitrary multirate conditions. Multidimensional numerical simulation studies for a wide range of single-well field conditions indicate that perturbed solutions can be computed in less than a few CPU seconds with relative errors in pressure of <5%, corresponding to perturbations in background permeability of up to two orders of magnitude. Our work confirms that the proposed joint perturbation-convolution (JPC) method is an efficient alternative to analytical and numerical solutions for accurate modeling of pressure-diffusion phenomena induced by Neumann or Dirichlet boundary conditions.
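The last step of the JPC workflow, converting the constant-flow approximation to arbitrary multirate conditions, is ordinary superposition in time. The sketch below shows that discrete convolution for a piecewise-constant rate history; the unit-rate response function and rate schedule are made up for illustration and stand in for the output of a single simulation run.

```python
import numpy as np

def unit_rate_drawdown(t):
    """Placeholder constant-rate pressure response (e.g., from one simulation run)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    positive = t > 0.0
    out[positive] = np.log1p(t[positive])
    return out

def multirate_response(times, rate_times, rates):
    """Superpose the unit-rate response for a piecewise-constant rate history."""
    dp = np.zeros_like(times, dtype=float)
    prev_q = 0.0
    for t_j, q_j in zip(rate_times, rates):
        dp += (q_j - prev_q) * unit_rate_drawdown(times - t_j)
        prev_q = q_j
    return dp

times = np.linspace(0.0, 100.0, 201)
dp = multirate_response(times, rate_times=[0.0, 40.0, 70.0], rates=[10.0, 25.0, 0.0])
print(dp[-1])
```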
Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 1. Generalized Born
2012-01-01
We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models on the outcome of implicit solvent MD simulations. We show results for a range of tests including the accuracy of single point force evaluations and energy conservation as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors which can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers. PMID:22582031
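The practical difference between the precision models comes down to where rounding error accumulates. The few lines of NumPy below give a generic illustration of single- versus double-precision accumulation of single-precision force contributions; this is not AMBER code, and the array size and chunking are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
# Pairwise force contributions computed in single precision (as on the GPU).
contributions = rng.standard_normal(10_000_000).astype(np.float32)

spsp = np.float32(0.0)
for chunk in np.array_split(contributions, 100):
    spsp += chunk.sum(dtype=np.float32)          # accumulate in single precision

spdp = contributions.sum(dtype=np.float64)       # accumulate in double precision

print(float(spsp), float(spdp), abs(float(spsp) - float(spdp)))
```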
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed in generating wind-optimal routes for air traffic at the national or global level. They are: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing those same algorithms into NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers for potential computational enhancement through parallel processing on the computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates that with FACET to facilitate the use of the new features which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of trajectory optimization algorithms use MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies and based on the potential application of optimized trajectories. The paper shows that in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
Volume sharing of reservoir water
NASA Astrophysics Data System (ADS)
Dudley, Norman J.
1988-05-01
Previous models optimize short-, intermediate-, and long-run irrigation decision making in a simplified river valley system characterized by highly variable water supplies and demands for a single decision maker controlling both reservoir releases and farm water use. A major problem in relaxing the assumption of one decision maker is communicating the stochastic nature of supplies and demands between reservoir and farm managers. In this paper, an optimizing model is used to develop release rules for reservoir management when all users share equally in releases, and computer simulation is used to generate an historical time sequence of announced releases. These announced releases become a state variable in a farm management model which optimizes farm area-to-irrigate decisions through time. Such modeling envisages the use of growing area climatic data by the reservoir authority to gauge water demand and the transfer of water supply data from reservoir to farm managers via computer data files. Alternative model forms, including allocating water on a priority basis, are discussed briefly. Results show lower mean aggregate farm income and lower variance of aggregate farm income than in the single decision-maker case. This short-run economic efficiency loss coupled with likely long-run economic efficiency losses due to the attenuated nature of property rights indicates the need for quite different ways of integrating reservoir and farm management.
NASA Astrophysics Data System (ADS)
Dilmen, Derya I.; Roe, Gerard H.; Wei, Yong; Titov, Vasily V.
2018-04-01
On September 29, 2009 at 17:48 UTC, an Mw = 8.1 earthquake in the Tonga Trench generated a tsunami that caused heavy damage across Samoa, American Samoa, and Tonga. Among the hardest hit was the volcanic island of Tutuila in American Samoa. Tutuila has a typical tropical island bathymetry setting influenced by coral reefs, and so the event provided an opportunity to evaluate the relationship between tsunami dynamics and the bathymetry in that typical island environment. Previous work has come to differing conclusions regarding how coral reefs affect tsunami dynamics through their influence on bathymetry and dissipation. This study presents numerical simulations of this event with a focus on two main issues: first, how roughness variations affect tsunami run-up and whether different values of Manning's roughness parameter, n, improve the simulated run-up compared to observations; and second, how depth variations in the shelf bathymetry with coral reefs control run-up and inundation on the island coastlines they shield. We find that no single value of n provides a uniformly good match to all observations; and we find substantial bay-to-bay variations in the impact of varying n. The results suggest that there are aspects of tsunami wave dissipation which are not captured by the simplified drag formulation used in shallow-water wave models. The study also suggests that the primary impact of removing the near-shore bathymetry in a coral reef environment is to reduce run-up, from which we conclude that, at least in this setting, the impact of the near-shore bathymetry is to increase run-up and inundation.
NASA Astrophysics Data System (ADS)
Zhou, Hua; Su, Yang; Wang, Rong; Zhu, Yong; Shen, Huiping; Pu, Tao; Wu, Chuanxin; Zhao, Jiyong; Zhang, Baofu; Xu, Zhiyong
2017-10-01
Online reconstruction of a time-variant quantum state from the encoding/decoding results of quantum communication is addressed by developing a method of evolution reconstruction from a single measurement record with random time intervals. A time-variant two-dimensional state is reconstructed on the basis of recovering its expectation value functions of three nonorthogonal projectors from a random single measurement record, which is composed of the discarded qubits of the six-state protocol. Simulation results show that our method is robust over typical metropolitan quantum channels. Our work extends the Fourier-based method of evolution reconstruction from the version for a regular single measurement record with equal time intervals to a unified one, which can be applied to arbitrary single measurement records. The proposed protocol of evolution reconstruction runs concurrently with the quantum communication protocol itself, which can facilitate online quantum tomography.
Progress of the NASAUSGS Lunar Regolith Simulant Project
NASA Technical Reports Server (NTRS)
Rickman, Douglas; McLemore, C.; Stoeser, D.; Schrader, C.; Fikes, J.; Street, K.
2009-01-01
Beginning in 2004, personnel at MSFC began serious efforts to develop a new generation of lunar simulants. The first two products were a replication of the previous JSC-1 simulant under a contract to Orbitec and a major workshop in 2005 on future simulant development. It was recognized in early 2006 that there were serious limitations with the standard approach of simply taking a single terrestrial rock and grinding it. To a geologist, even a cursory examination of the Lunar Sourcebook shows that matching lunar heterogeneity, crystal size, relative mineral abundances, lack of H2O, plagioclase chemistry and glass abundance simply cannot be done with any simple combination of terrestrial rocks. Thus the project refocused its efforts and approached simulant development in a new and more comprehensive manner, examining new approaches in simulant development and ways to more accurately compare simulants to actual lunar materials. This led to a multi-year effort with five major tasks running in parallel. The five tasks are Requirements, Lunar Analysis, Process Development, Feed Stocks, and Standards.
cellGPU: Massively parallel simulations of dynamic vertex models
NASA Astrophysics Data System (ADS)
Sussman, Daniel M.
2017-10-01
Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected when running cellGPU entirely on the CPU with its performance on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations. Program files doi: http://dx.doi.org/10.17632/6j2cj29t3r.1. Licensing provisions: MIT. Programming language: CUDA/C++. Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate. Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU. Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation
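For orientation, a commonly used form of the 2D vertex-model energy penalizes deviations of each cell's area and perimeter from target values, E = sum_i [K_A (A_i - A0)^2 + K_P (P_i - P0)^2]; the sketch below evaluates it for a single polygonal cell (parameter values are placeholders, and cellGPU's actual force kernels are CUDA/C++, not Python).

```python
import numpy as np

def cell_energy(vertices, K_A=1.0, A0=1.0, K_P=1.0, P0=3.8):
    """Vertex-model energy of one polygonal cell, vertices given in counter-clockwise order."""
    x, y = vertices[:, 0], vertices[:, 1]
    # shoelace formula for the polygon area
    area = 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # perimeter as the sum of edge lengths
    perim = np.sum(np.linalg.norm(vertices - np.roll(vertices, -1, axis=0), axis=1))
    return K_A * (area - A0) ** 2 + K_P * (perim - P0) ** 2

hexagon = np.array([[np.cos(t), np.sin(t)] for t in np.linspace(0.0, 2.0 * np.pi, 7)[:-1]])
print(cell_energy(hexagon))
```

The GPU implementation parallelizes exactly this kind of per-cell geometry work, plus the topological bookkeeping that a toy snippet like this ignores.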
Current Methods for Evaluation of Physical Security System Effectiveness.
1981-05-01
It also helps the user modify a data set before further processing. (c) Safeguards Engineering and Analysis Data Base (SEAD) -- To complete SAFE's... graphic display software in addition to a Fortran compiler, and up to about 35,000 words of storage. For a fairly complex problem, a single run through... operational software.
Sodium Binding Sites and Permeation Mechanism in the NaChBac Channel: A Molecular Dynamics Study.
Guardiani, Carlo; Rodger, P Mark; Fedorenko, Olena A; Roberts, Stephen K; Khovanov, Igor A
2017-03-14
NaChBac was the first discovered bacterial sodium voltage-dependent channel, yet computational studies are still limited due to the lack of a crystal structure. In this work, a pore-only construct built using the NavMs template was investigated using unbiased molecular dynamics and metadynamics. The potential of mean force (PMF) from the unbiased run features four minima, three of which correspond to sites IN, CEN, and HFS discovered in NavAb. During the run, the selectivity filter (SF) is spontaneously occupied by two ions, and frequent access of a third one is often observed. In the innermost sites IN and CEN, Na+ is fully hydrated by six water molecules and occupies an on-axis position. In site HFS sodium interacts with a glutamate and a serine from the same subunit and is forced to adopt an off-axis placement. Metadynamics simulations biasing one and two ions show an energy barrier in the SF that prevents single-ion permeation. An analysis of the permeation mechanism was performed both by computing minimum energy paths in the axial-axial PMF and through a combination of Markov state modeling and transition path theory. Both approaches reveal a knock-on mechanism involving at least two but possibly three ions. The currents predicted from the unbiased simulation using linear response theory are in excellent agreement with single-channel patch-clamp recordings.
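The paper's exact PMF workflow is not reproduced here; as a minimal illustration, an axial PMF can be estimated from an unbiased run by Boltzmann inversion of the ion density along the pore axis (hypothetical bin range and toy data, energies in units of kT).

```python
import numpy as np

def pmf_from_positions(z_samples, bins=100, z_range=(-20.0, 20.0)):
    """Boltzmann-invert an axial density histogram into a PMF in units of kT."""
    hist, edges = np.histogram(z_samples, bins=bins, range=z_range, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    hist = np.where(hist > 0, hist, np.nan)  # leave empty bins undefined instead of log(0)
    pmf = -np.log(hist)
    return centers, pmf - np.nanmin(pmf)     # shift the deepest minimum to zero

# toy data: axial positions clustered around two binding sites along the pore
rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(-5.0, 1.5, 50_000), rng.normal(5.0, 1.5, 50_000)])
centers, pmf = pmf_from_positions(z)
print(centers[np.nanargmin(pmf)])  # location of the deepest well in this toy density
```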
NASA Technical Reports Server (NTRS)
Chawner, David M.; Gomez, Ray J.
2010-01-01
In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. There are many different tools that are used for running these simulations, and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is called open source, meaning that anyone can edit the source code to make modifications and distribute it to all other users in a future release. This is very useful, especially in this branch where many different tools are being used. File readers can be written to load any file format into a program, to ease the bridging from one tool to another. Programming such a reader requires knowledge of the file format that is being read as well as the equations necessary to obtain the derived values after loading. When running these CFD simulations, extremely large files are loaded and derived values are calculated. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are usually used to render graphics for computers; however, in recent years, GPUs have been used for more general applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, the amount of time they would require to complete would be much less. This would allow more simulations to be run in the same amount of time and possibly perform more complex computations.
Spatial heterogeneity of leaf area index across scales from simulation and remote sensing
NASA Astrophysics Data System (ADS)
Reichenau, Tim G.; Korres, Wolfgang; Montzka, Carsten; Schneider, Karl
2016-04-01
Leaf area index (LAI, single sided leaf area per ground area) influences mass and energy exchange of vegetated surfaces. Therefore, LAI is an input variable for many land surface schemes of coupled large scale models, which do not simulate LAI. Since these models typically run on rather coarse resolution grids, LAI is often inferred from coarse resolution remote sensing. However, especially in agriculturally used areas, a grid cell of these products often covers more than a single land-use. In that case, the given LAI does not apply to any single land-use. Therefore, the overall spatial heterogeneity in these datasets differs from that on resolutions high enough to distinguish areas with differing land-use. Detailed process-based plant growth models simulate LAI for separate plant functional types or specific species. However, limited availability of observations causes reduced spatial heterogeneity of model input data (soil, weather, land-use). Since LAI is strongly heterogeneous in space and time and since processes depend on LAI in a nonlinear way, a correct representation of LAI spatial heterogeneity is also desirable at coarse resolutions. The current study assesses this issue by comparing the spatial heterogeneity of LAI from remote sensing (RapidEye) and process-based simulations (DANUBIA simulation system) across scales. Spatial heterogeneity is assessed by analyzing LAI frequency distributions (spatial variability) and semivariograms (spatial structure). The test case is the arable land in the fertile loess plain of the Rur catchment near the Germany-Netherlands border.
MOIL-opt: Energy-Conserving Molecular Dynamics on a GPU/CPU system
Ruymgaart, A. Peter; Cardenas, Alfredo E.; Elber, Ron
2011-01-01
We report an optimized version of the molecular dynamics program MOIL that runs on a shared memory system with OpenMP and exploits the power of a Graphics Processing Unit (GPU). The model is a heterogeneous computing system on a single node, with several cores sharing the same memory and a GPU. This is a typical laboratory tool, which provides excellent performance at minimal cost. Besides performance, emphasis is placed on accuracy and stability of the algorithm, probed by energy conservation for explicit-solvent, atomically detailed models. Energy conservation is especially critical for long simulations due to the phenomenon known as “energy drift”, in which energy errors accumulate linearly as a function of simulation time. To achieve long time dynamics with acceptable accuracy, the drift must be particularly small. We identify several means of controlling long-time numerical accuracy while maintaining excellent speedup. To maintain a high level of energy conservation, SHAKE and the Ewald reciprocal summation are run in double precision. Double precision summation of real-space non-bonded interactions improves energy conservation. In our best option, the energy drift, using a 1 fs time step while constraining the distances of all bonds, is undetectable in a 10 ns simulation of solvated DHFR (dihydrofolate reductase). Faster options, shaking only bonds with hydrogen atoms, are also very well behaved and have drifts of less than 1 kcal/mol per nanosecond of the same system. CPU/GPU implementations require changes in programming models. We consider the use of a list of neighbors and quadratic versus linear interpolation in lookup tables of different sizes. Quadratic interpolation with a smaller number of grid points is faster than linear lookup tables (with finer representation) without loss of accuracy. Atomic neighbor lists were found most efficient. Typical speedups are about a factor of 10 compared to a single-core single-precision code. PMID:22328867
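The lookup-table trade-off mentioned above can be sketched with a toy tabulated potential (this is an illustration, not the MOIL tables): a Lennard-Jones stand-in is reconstructed either by linear interpolation on a fine grid or by three-point (quadratic) Lagrange interpolation on a coarser grid, and the maximum reconstruction errors are compared.

```python
import numpy as np

def linear_lookup(table_x, table_y, x):
    return np.interp(x, table_x, table_y)

def quadratic_lookup(table_x, table_y, x):
    """Three-point Lagrange interpolation on a uniform grid."""
    h = table_x[1] - table_x[0]
    i = np.clip(((x - table_x[0]) / h).astype(int), 1, len(table_x) - 2)
    x0, x1, x2 = table_x[i - 1], table_x[i], table_x[i + 1]
    y0, y1, y2 = table_y[i - 1], table_y[i], table_y[i + 1]
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

lj = lambda r: 4.0 * (r**-12 - r**-6)  # Lennard-Jones as a stand-in tabulated potential
r = np.linspace(0.9, 3.0, 10_000)

coarse = np.linspace(0.9, 3.0, 200)
fine = np.linspace(0.9, 3.0, 800)
err_quad = np.max(np.abs(quadratic_lookup(coarse, lj(coarse), r) - lj(r)))
err_lin = np.max(np.abs(linear_lookup(fine, lj(fine), r) - lj(r)))
print(f"quadratic on 200 points: {err_quad:.2e}, linear on 800 points: {err_lin:.2e}")
```

The point of quadratic interpolation is that a coarser table, and hence a smaller memory footprint, can deliver accuracy comparable to a much finer linear table.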
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
Modeling and simulation of a counter-rotating turbine system for underwater vehicles
NASA Astrophysics Data System (ADS)
Wang, Xinping; Dang, Jianjun
2016-12-01
The structure of a counter-rotating turbine for an underwater vehicle is designed by adding a counter-rotating second-stage turbine disk after the conventional single-stage turbine. The available kinetic energy and the absorption power of the auxiliary system are calculated at different working conditions, and the results show that the power of the main engine and that of the auxiliary system in the counter-rotating turbine system are well matched. Experimental simulations of the lubricating oil loop, fuel loop, and seawater loop are completed before the technology scheme of the counter-rotating turbine system is proposed. The simulation results indicate that the hydraulic transmission system can satisfy the requirements of an underwater vehicle running at steady sailing or variable working conditions.
Documentation of the Benson Diesel Engine Simulation Program
NASA Technical Reports Server (NTRS)
Vangerpen, Jon
1988-01-01
This report documents the Benson Diesel Engine Simulation Program and explains how it can be used to predict the performance of diesel engines. The program was obtained from the Garrett Turbine Engine Company but has been extensively modified since. The program is a thermodynamic simulation of the diesel engine cycle which uses a single zone combustion model. It can be used to predict the effect of changes in engine design and operating parameters such as valve timing, speed and boost pressure. The most significant change made to this program is the addition of a more detailed heat transfer model to predict metal part temperatures. This report contains a description of the sub-models used in the Benson program, a description of the input parameters and sample program runs.
Discretely Integrated Condition Event (DICE) Simulation for Pharmacoeconomics.
Caro, J Jaime
2016-07-01
Several decision-analytic modeling techniques are in use for pharmacoeconomic analyses. Discretely integrated condition event (DICE) simulation is proposed as a unifying approach that has been deliberately designed to meet the modeling requirements in a straightforward transparent way, without forcing assumptions (e.g., only one transition per cycle) or unnecessary complexity. At the core of DICE are conditions that represent aspects that persist over time. They have levels that can change and many may coexist. Events reflect instantaneous occurrences that may modify some conditions or the timing of other events. The conditions are discretely integrated with events by updating their levels at those times. Profiles of determinant values allow for differences among patients in the predictors of the disease course. Any number of valuations (e.g., utility, cost, willingness-to-pay) of conditions and events can be applied concurrently in a single run. A DICE model is conveniently specified in a series of tables that follow a consistent format and the simulation can be implemented fully in MS Excel, facilitating review and validation. DICE incorporates both state-transition (Markov) models and non-resource-constrained discrete event simulation in a single formulation; it can be executed as a cohort or a microsimulation; and deterministically or stochastically.
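A minimal, hypothetical sketch of the DICE idea (not Caro's spreadsheet implementation, and with made-up clinical values): conditions persist as levels, events fire at scheduled times and may update conditions or schedule further events, and cost and utility valuations are integrated discretely at each event time.

```python
import heapq

def run_dice(horizon=10.0):
    """Toy DICE-style simulation: one 'disease severity' condition, two event types."""
    conditions = {"severity": 1.0, "alive": True}
    totals = {"cost": 0.0, "qaly": 0.0}
    last_t = 0.0
    events = [(1.0, "progression"), (3.5, "treatment")]  # (time, event) pairs
    heapq.heapify(events)

    while events and conditions["alive"]:
        t, name = heapq.heappop(events)
        if t > horizon:
            break
        # integrate the condition-dependent valuations over the interval since the last event
        dt = t - last_t
        totals["qaly"] += dt * max(0.0, 1.0 - 0.2 * conditions["severity"])
        totals["cost"] += dt * 1000.0 * conditions["severity"]
        last_t = t
        # the event updates conditions and may schedule further events
        if name == "progression":
            conditions["severity"] += 1.0
            heapq.heappush(events, (t + 2.0, "progression"))
        elif name == "treatment":
            conditions["severity"] = max(0.0, conditions["severity"] - 1.5)
            totals["cost"] += 5000.0
    return totals

print(run_dice())
```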
Sakhteman, Amirhossein; Zare, Bijan
2016-01-01
An interactive application, Modelface, is presented for the Modeller software, based on the Windows platform. The application is able to run all steps of homology modeling including PDB-to-FASTA generation, running Clustal, model building and loop refinement. Other modules of Modeller, including energy calculation, energy minimization and the ability to make single point mutations in the PDB structures, are also implemented inside Modelface. The application is a simple batch-based tool with minimal memory occupation and is free of charge for academic use. The application is also able to repair missing atom types in the PDB structures, making it suitable for many molecular modeling studies such as docking and molecular dynamics simulation. Some successful instances of modeling studies using Modelface are also reported. PMID:28243276
NO PLIF Imaging in the CUBRC 48 Inch Shock Tunnel
NASA Technical Reports Server (NTRS)
Jiang, N.; Bruzzese, J.; Patton, R.; Sutton J.; Lempert W.; Miller, J. D.; Meyer, T. R.; Parker, R.; Wadham, T.; Holden, M.;
2011-01-01
Nitric Oxide Planar Laser-Induced Fluorescence (NO PLIF) imaging is demonstrated at a 10 kHz repetition rate in the Calspan-University at Buffalo Research Center's (CUBRC) 48-inch Mach 9 hypervelocity shock tunnel using a pulse burst laser-based high frame rate imaging system. Sequences of up to ten images are obtained internal to a supersonic combustor model, located within the shock tunnel, during a single, approximately 10-millisecond-duration run of the ground test facility. This represents over an order of magnitude improvement in data rate over previous PLIF-based diagnostic approaches. Comparison with a preliminary CFD simulation shows good overall qualitative agreement between the prediction of the mean NO density field and the observed PLIF image intensity, averaged over forty individual images obtained during several facility runs.
A hybrid gyrokinetic ion and isothermal electron fluid code for astrophysical plasma
NASA Astrophysics Data System (ADS)
Kawazura, Y.; Barnes, M.
2018-05-01
This paper describes a new code for simulating astrophysical plasmas that solves a hybrid model composed of gyrokinetic ions (GKI) and an isothermal electron fluid (ITEF) (Schekochihin et al., 2009 [9]). This model captures ion kinetic effects that are important near the ion gyro-radius scale while electron kinetic effects are ordered out by an electron-ion mass ratio expansion. The code is developed by incorporating the ITEF approximation into AstroGK, an Eulerian δf gyrokinetics code specialized to a slab geometry (Numata et al., 2010 [41]). The new code treats the linear terms in the ITEF equations implicitly while the nonlinear terms are treated explicitly. We show linear and nonlinear benchmark tests to prove the validity and applicability of the simulation code. Since the fast electron timescale is eliminated by the mass ratio expansion, the Courant-Friedrichs-Lewy condition is much less restrictive than in full gyrokinetic codes; the present hybrid code runs ∼2√(mi/me) ∼ 100 times faster than AstroGK with a single ion species and kinetic electrons, where mi/me is the ion-electron mass ratio. The improvement of the computational time makes it feasible to execute ion scale gyrokinetic simulations with a high velocity space resolution and to run multiple simulations to determine the dependence of turbulent dynamics on parameters such as electron-ion temperature ratio and plasma beta.
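As a rough check on the quoted speedup factor, assuming a hydrogen plasma so that the ion-electron mass ratio is about 1836,

```latex
2\sqrt{\frac{m_i}{m_e}} \approx 2\sqrt{1836} \approx 86,
```

which is consistent with the order-of-magnitude figure of ∼100 given above; heavier ion species push the factor higher.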
GEO2D - Two-Dimensional Computer Model of a Ground Source Heat Pump System
James Menart
2013-06-07
This file contains a zipped file that contains many files required to run GEO2D. GEO2D is a computer code for simulating ground source heat pump (GSHP) systems in two dimensions. GEO2D performs a detailed finite difference simulation of the heat transfer occurring within the working fluid, the tube wall, the grout, and the ground. Both horizontal and vertical wells can be simulated with this program, but it should be noted that the vertical well is modeled as a single tube. This program also models the heat pump in conjunction with the heat transfer occurring. GEO2D simulates the heat pump and ground loop as a system. Many results are produced by GEO2D as a function of time and position, such as heat transfer rates, temperatures and heat pump performance. On top of this, GEO2D provides an economic comparison between the simulated geothermal system and a comparable air heat pump system, or comparable gas, oil or propane heating systems with a vapor compression air conditioner. The version of GEO2D in the attached file has been coupled to the DOE heating and cooling load software EnergyPlus. This is a great convenience for the user because heating and cooling loads are an input to GEO2D. GEO2D is a user-friendly program that uses a graphical user interface for inputs and outputs, which makes entering data simple and produces many plotted results that are easy to understand. In order to run GEO2D, access to MATLAB is required. If MATLAB is not available on your computer, you can download the program MCRInstaller.exe (the 64-bit version) from the MATLAB website or from this geothermal repository. This is a free download that will enable you to run GEO2D.
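As a minimal sketch of the kind of explicit finite-difference conduction update such a ground model is built on (hypothetical grid, time step, and soil properties; not GEO2D's actual discretization or boundary conditions):

```python
import numpy as np

def step_heat_2d(T, alpha, dx, dt):
    """One explicit finite-difference step of the 2-D heat equation dT/dt = alpha * laplacian(T)."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    T_new = T + alpha * dt * lap
    # hold the outer boundary (far-field ground) at its previous temperature
    T_new[0, :], T_new[-1, :], T_new[:, 0], T_new[:, -1] = T[0, :], T[-1, :], T[:, 0], T[:, -1]
    return T_new

alpha, dx = 1.0e-6, 0.05          # assumed soil thermal diffusivity (m^2/s) and grid spacing (m)
dt = 0.2 * dx**2 / alpha          # respect the explicit stability limit dt <= dx^2 / (4 * alpha)
T = np.full((100, 100), 283.0)    # undisturbed ground at 10 C
T[45:55, 45:55] = 278.0           # cooled region around a heat-extracting borehole
for _ in range(1000):
    T = step_heat_2d(T, alpha, dx, dt)
print(T[50, 50])
```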
WMT: The CSDMS Web Modeling Tool
NASA Astrophysics Data System (ADS)
Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.
2015-12-01
The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can: design a model from a set of components; edit component parameters; save models to a web-accessible server; share saved models with the community; submit runs to an HPC system; and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db (database of component, model, and simulation metadata and output); wmt-api (configure and connect components); wmt-exe (launch simulations on remote execution servers). The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged and uploaded to a data server where it is stored and from which a user can download it as a single compressed archive file.
Parametric analysis of plastic strain and force distribution in single pass metal spinning
NASA Astrophysics Data System (ADS)
Choudhary, Shashank; Tejesh, Chiruvolu Mohan; Regalla, Srinivasa Prakash; Suresh, Kurra
2013-12-01
Metal spinning, also known as spin forming, is one of the sheet metal working processes by which an axis-symmetric part can be formed from a flat sheet metal blank. Parts are produced by pressing a blunt edged tool or roller on to the blank, which in turn is mounted on a rotating mandrel. This paper discusses the setup of a 3-D finite element simulation of single pass metal spinning in LS-Dyna. Four parameters were considered, namely blank thickness, roller nose radius, feed ratio and mandrel speed, and the variation in forces and plastic strain was analysed using the full-factorial design of experiments (DOE) method of simulation experiments. For some of these DOE runs, physical experiments on extra deep drawing (EDD) sheet metal were carried out using an En31 tool on a lathe machine. Simulation results are able to predict the zone of unsafe thinning in the sheet and the high forming forces, which hint at the need for less-expensive and semi-automated machine tools to help the household and small-scale spinning workers widely prevalent in India.
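A two-level full factorial over the four parameters would give 2^4 = 16 runs; the sketch below enumerates such a design with hypothetical level values (the paper's actual number of levels and settings are not given here).

```python
from itertools import product

# hypothetical low/high levels for each spinning parameter
levels = {
    "blank_thickness_mm": [1.0, 2.0],
    "roller_nose_radius_mm": [4.0, 8.0],
    "feed_ratio_mm_per_rev": [0.5, 1.0],
    "mandrel_speed_rpm": [200, 400],
}

names = list(levels)
runs = [dict(zip(names, combo)) for combo in product(*levels.values())]
print(len(runs))   # 16 runs for a two-level full factorial in four factors
for run in runs[:3]:
    print(run)
```

Each dictionary then defines the parameter set for one finite element simulation run.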
Theoretical analysis of a novel ultrasound generator on an optical fiber tip
NASA Astrophysics Data System (ADS)
Wu, Nan; Wang, Wenhui; Tian, Ye; Guthy, Charles; Wang, Xingwei
2010-04-01
A novel ultrasound generator consisting of a single mode optical fiber with a layer of gold nanoparticles on its tip has been designed. The generator utilizes the optical and photo-acoustic properties of gold nanoparticles. When heated by laser pulses, a thin absorption layer made up of these nanoparticles at the cleaved surface of a single mode fiber generates a mechanical shock wave caused by thermal expansion. Mie's theory was applied in a MATLAB simulation to determine the relationship between the absorption efficiency and the optical resonance wavelengths of a layer of gold nanospheres. Results showed that the absorption efficiency and related resonance wavelengths of gold nanospheres varied with the size of the nanosphere particles. In order to obtain the bandwidths associated with ultrasound, another MATLAB simulation was run to study the relationship between the power of the laser being used, the size of the gold nanosphere, and the energy decay time. The results of this and the previous simulation showed that the energy decay time is on the order of picoseconds.
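To indicate how absorption efficiency and resonance wavelength relate to particle size and optical constants, the sketch below uses the small-particle (dipole) limit of Mie theory with a hypothetical Drude-like dielectric function standing in for gold, so the predicted resonance position is only indicative.

```python
import numpy as np

def q_abs_dipole(radius_nm, wavelength_nm, eps_particle, eps_medium=1.77):
    """Absorption efficiency of a small sphere in the dipole (Rayleigh) limit of Mie theory."""
    a = radius_nm * 1e-9
    lam = wavelength_nm * 1e-9
    k = 2.0 * np.pi * np.sqrt(eps_medium) / lam
    alpha = 4.0 * np.pi * a**3 * (eps_particle - eps_medium) / (eps_particle + 2.0 * eps_medium)
    return k * np.imag(alpha) / (np.pi * a**2)

def eps_drude(wavelength_nm, eps_inf=9.8, hbar_wp_eV=9.0, hbar_gamma_eV=0.07):
    """Hypothetical Drude-like dielectric function standing in for gold."""
    energy_eV = 1239.84 / wavelength_nm
    return eps_inf - hbar_wp_eV**2 / (energy_eV**2 + 1j * hbar_gamma_eV * energy_eV)

wavelengths = np.arange(450, 701, 10)
q = [q_abs_dipole(20.0, lam, eps_drude(lam)) for lam in wavelengths]
print(wavelengths[int(np.argmax(q))])  # wavelength of peak absorption for a 20 nm radius sphere
```

A full Mie calculation with measured gold optical constants would shift and broaden this resonance, which is the size dependence the MATLAB study quantified.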
Gray: a ray tracing-based Monte Carlo simulator for PET
NASA Astrophysics Data System (ADS)
Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.
2018-05-01
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
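Gray's geometry kernels are not shown in the abstract; as an illustration of the kind of analytic ray-surface test that ray-tracing-based photon transport relies on, a minimal ray-sphere intersection routine looks like this (a toy detector geometry, not Gray's).

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the smallest positive distance along a unit-direction ray to a sphere, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c          # quadratic discriminant (a = 1 for a unit direction)
    if disc < 0.0:
        return None                 # the ray misses the sphere
    sqrt_disc = math.sqrt(disc)
    for t in ((-b - sqrt_disc) / 2.0, (-b + sqrt_disc) / 2.0):
        if t > 1e-9:                # first hit in front of the ray origin
            return t
    return None

# a 511 keV annihilation photon travelling along +x toward a spherical volume of radius 5
print(ray_sphere_intersect((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (20.0, 0.0, 0.0), 5.0))  # 15.0
```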
Automatic Fitting of Spiking Neuron Models to Electrophysiological Recordings
Rossant, Cyrille; Goodman, Dan F. M.; Platkiewicz, Jonathan; Brette, Romain
2010-01-01
Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models. PMID:20224819
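Fitness criteria for this kind of fitting are typically built from counts of near-coincident spikes between the model and the recorded train; the sketch below implements a simplified coincidence fraction (not the exact gamma-factor normalization used by the library or the competition), with a hypothetical 4 ms coincidence window.

```python
import numpy as np

def coincidence_fraction(model_spikes, data_spikes, delta=0.004):
    """Fraction of recorded spikes matched by a model spike within +/- delta seconds."""
    model_spikes = np.sort(np.asarray(model_spikes))
    matched = 0
    for t in data_spikes:
        i = np.searchsorted(model_spikes, t)
        neighbours = model_spikes[max(0, i - 1):i + 1]   # closest model spikes on either side
        if neighbours.size and np.min(np.abs(neighbours - t)) <= delta:
            matched += 1
    return matched / len(data_spikes) if len(data_spikes) else 0.0

data = [0.012, 0.055, 0.101, 0.180]
model = [0.010, 0.058, 0.130, 0.178, 0.240]
print(coincidence_fraction(model, data))  # 3 of 4 recorded spikes matched -> 0.75
```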
General purpose molecular dynamics simulations fully implemented on graphics processing units
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Lorenz, Chris D.; Travesset, A.
2008-05-01
Graphics processing units (GPUs), originally developed for rendering real-time effects in computer games, now provide unprecedented computational power for scientific applications. In this paper, we develop a general purpose molecular dynamics code that runs entirely on a single GPU. It is shown that our GPU implementation provides a performance equivalent to that of a fast 30-processor-core distributed memory cluster. Our results show that GPUs already provide an inexpensive alternative to such clusters, and we discuss implications for the future.
Memory access in shared virtual memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berrendorf, R.
1992-01-01
Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.
Simulation and video animation of canal flushing created by a tide gate
Schoellhamer, David H.
1988-01-01
A tide-gate algorithm was added to a one-dimensional unsteady flow model that was calibrated, verified, and used to determine the locations of as many as five tide gates that would maximize flushing in two canal systems. Results from the flow model were used to run a branched Lagrangian transport model to simulate the flushing of a conservative constituent from the canal systems both with and without tide gates. A tide gate produces a part-time riverine flow through the canal system that improves flushing along the flow path created by the tide gate. Flushing with no tide gates and with a single optimally located tide gate are shown with a video animation.
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2017-01-01
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
ms 2: A molecular simulation tool for thermodynamic properties, release 3.0
NASA Astrophysics Data System (ADS)
Rutkai, Gábor; Köster, Andreas; Guevara-Carrion, Gabriela; Janzen, Tatjana; Schappals, Michael; Glass, Colin W.; Bernreuther, Martin; Wafai, Amer; Stephan, Simon; Kohns, Maximilian; Reiser, Steffen; Deublein, Stephan; Horsch, Martin; Hasse, Hans; Vrabec, Jadran
2017-12-01
A new version release (3.0) of the molecular simulation tool ms 2 (Deublein et al., 2011; Glass et al. 2014) is presented. Version 3.0 of ms 2 features two additional ensembles, i.e. microcanonical (NVE) and isobaric-isoenthalpic (NpH), various Helmholtz energy derivatives in the NVE ensemble, thermodynamic integration as a method for calculating the chemical potential, the osmotic pressure for calculating the activity of solvents, the six Maxwell-Stefan diffusion coefficients of quaternary mixtures, statistics for sampling hydrogen bonds, smooth-particle mesh Ewald summation as well as the ability to carry out molecular dynamics runs for an arbitrary number of state points in a single program execution.
Voltage instability in a simulated fuel cell stack correlated to cathode water accumulation
NASA Astrophysics Data System (ADS)
Owejan, J. P.; Trabold, T. A.; Gagliardo, J. J.; Jacobson, D. L.; Carter, R. N.; Hussey, D. S.; Arif, M.
Single fuel cells running independently are often used for fundamental studies of water transport. It is also necessary to assess the dynamic behavior of fuel cell stacks comprised of multiple cells arranged in series, thus providing many paths for flow of reactant hydrogen on the anode and air (or pure oxygen) on the cathode. In the current work, the flow behavior of a fuel cell stack is simulated by using a single-cell test fixture coupled with a bypass flow loop for the cathode flow. This bypass simulates the presence of additional cells in a stack and provides an alternate path for airflow, thus avoiding forced convective purging of cathode flow channels. Liquid water accumulation in the cathode is shown to occur in two modes; initially nearly all the product water is retained in the gas diffusion layer until a critical saturation fraction is reached and then water accumulation in the flow channels begins. Flow redistribution and fuel cell performance loss result from channel slug formation. The application of in-situ neutron radiography affords a transient correlation of performance loss to liquid water accumulation. The current results identify a mechanism whereby depleted cathode flow on a single cell leads to performance loss, which can ultimately cause an operating proton exchange membrane fuel cell stack to fail.
CMS Readiness for Multi-Core Workload Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.
In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
Schürch, Roger; Couvillon, Margaret J; Burns, Dominic D R; Tasman, Kiah; Waxman, David; Ratnieks, Francis L W
2013-12-01
Honey bees communicate to nestmates locations of resources, including food, water, tree resin and nest sites, by making waggle dances. Dances are composed of repeated waggle runs, which encode the distance and direction vector from the hive or swarm to the resource. Distance is encoded in the duration of the waggle run, and direction is encoded in the angle of the dancer's body relative to vertical. Glass-walled observation hives enable researchers to observe or video waggle runs and decode them. However, variation in these signals makes it impossible to determine exact locations advertised. We present a Bayesian duration-to-distance calibration curve using Markov Chain Monte Carlo simulations that allows us to quantify how accurately distance to a food resource can be predicted from waggle run durations within a single dance. An angular calibration shows that angular precision does not change over distance, resulting in spatial scatter proportional to distance. We demonstrate how to combine distance and direction to produce a spatial probability distribution of the resource location advertised by the dance. Finally, we show how to map honey bee foraging and discuss how our approach can be integrated with Geographic Information Systems to better understand honey bee foraging ecology.
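A hedged sketch of the idea: within one dance, observed waggle-run durations are converted to distance through a linear calibration with residual scatter, and resampling propagates both sources of variation into a distance distribution. The intercept, slope, and scatter below are hypothetical placeholders, not the posterior estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical linear calibration: distance (m) = intercept + slope * duration (s), plus scatter
INTERCEPT, SLOPE, SIGMA = 100.0, 1200.0, 150.0

def distance_samples(waggle_run_durations, n_draws=10_000):
    """Monte Carlo distribution of advertised distance from the waggle runs of a single dance."""
    durations = np.asarray(waggle_run_durations)
    draws = rng.choice(durations, size=n_draws, replace=True)           # resample observed runs
    return INTERCEPT + SLOPE * draws + rng.normal(0.0, SIGMA, n_draws)  # add calibration scatter

dance = [0.81, 0.78, 0.85, 0.80, 0.83]          # waggle-run durations (s) decoded from one dance
samples = distance_samples(dance)
print(np.percentile(samples, [2.5, 50, 97.5]))  # credible interval for the advertised distance
```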
CMS readiness for multi-core workload scheduling
NASA Astrophysics Data System (ADS)
Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.
2017-10-01
In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
Regression with Small Data Sets: A Case Study using Code Surrogates in Additive Manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamath, C.; Fan, Y. J.
There has been an increasing interest in recent years in the mining of massive data sets whose sizes are measured in terabytes. While it is easy to collect such large data sets in some application domains, there are others where collecting even a single data point can be very expensive, so the resulting data sets have only tens or hundreds of samples. For example, when complex computer simulations are used to understand a scientific phenomenon, we want to run the simulation for many different values of the input parameters and analyze the resulting output. The data set relating the simulation inputs and outputs is typically quite small, especially when each run of the simulation is expensive. However, regression techniques can still be used on such data sets to build an inexpensive "surrogate" that could provide an approximate output for a given set of inputs. A good surrogate can be very useful in sensitivity analysis, uncertainty analysis, and in designing experiments. In this paper, we compare different regression techniques to determine how well they predict melt-pool characteristics in the problem domain of additive manufacturing. Our analysis indicates that some of the commonly used regression methods do perform quite well even on small data sets.
Litho hotspots fixing using model based algorithm
NASA Astrophysics Data System (ADS)
Zhang, Meili; Yu, Shirui; Mao, Zhibiao; Shafee, Marwa; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe; Hu, Xinyi; Wan, Qijian; Du, Chunshan
2017-04-01
As technology advances, IC designs are getting more sophisticated, thus it becomes more critical and challenging to fix printability issues in the design flow. Running lithography checks before tapeout is now mandatory for designers, which creates a need for more advanced and easy-to-use techniques for fixing hotspots found after lithographic simulation without creating a new design rule checking (DRC) violation or generating a new hotspot. This paper presents a new methodology for fixing hotspots on layouts while using the same engine currently used to detect the hotspots. The fix is achieved by applying minimum movement of edges causing the hotspot, with consideration of DRC constraints. The fix is internally simulated by the lithographic simulation engine to verify that the hotspot is eliminated and that no new hotspot is generated by the new edge locations. Hotspot fix checking is enhanced by adding DRC checks to the litho-friendly design (LFD) rule file to guarantee that any fix options that violate DRC checks are removed from the output hint file. This extra checking eliminates the need to re-run both DRC and LFD checks to ensure the change successfully fixed the hotspot, which saves time and simplifies the designer's workflow. This methodology is demonstrated on industrial designs, where the fixing rate of single and dual layer hotspots is reported.
Topographic filtering simulation model for sediment source apportionment
NASA Astrophysics Data System (ADS)
Cho, Se Jong; Wilcock, Peter; Hobbs, Benjamin
2018-05-01
We propose a Topographic Filtering simulation model (Topofilter) that can be used to identify those locations that are likely to contribute most of the sediment load delivered from a watershed. The reduced complexity model links spatially distributed estimates of annual soil erosion, high-resolution topography, and observed sediment loading to determine the distribution of sediment delivery ratio across a watershed. The model uses two simple two-parameter topographic transfer functions based on the distance and change in elevation from upland sources to the nearest stream channel and then down the stream network. The approach does not attempt to find a single best-calibrated solution of sediment delivery, but uses a model conditioning approach to develop a large number of possible solutions. For each model run, locations that contribute to 90% of the sediment loading are identified and those locations that appear in this set in most of the 10,000 model runs are identified as the sources that are most likely to contribute to most of the sediment delivered to the watershed outlet. Because the underlying model is quite simple and strongly anchored by reliable information on soil erosion, topography, and sediment load, we believe that the ensemble of simulation outputs provides a useful basis for identifying the dominant sediment sources in the watershed.
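A minimal sketch of a two-parameter, distance-decay style transfer function of the kind described (the exponential form, parameter values, and toy cell data are illustrative assumptions, not Topofilter's calibrated functions):

```python
import numpy as np

def delivery_ratio(dist_to_channel_m, elev_drop_m, k_d=0.002, k_z=0.05):
    """Fraction of eroded soil reaching the channel: decays with travel distance, rises with relief."""
    return np.exp(-k_d * dist_to_channel_m) * (1.0 - np.exp(-k_z * elev_drop_m))

# gross erosion (t/yr) for a few upland cells plus their routing attributes
erosion = np.array([12.0, 40.0, 5.0, 22.0])
dist = np.array([150.0, 900.0, 60.0, 400.0])
drop = np.array([8.0, 30.0, 2.0, 15.0])

delivered = erosion * delivery_ratio(dist, drop)
share = delivered / delivered.sum()
print(np.round(share, 3))   # which cells dominate the delivered load under this parameter pair
```

Repeating this with many sampled parameter pairs, and keeping only those consistent with the observed sediment load, is the conditioning step that yields the ensemble of likely source areas.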
Implicit Learning of a Finger Motor Sequence by Patients with Cerebral Palsy After Neurofeedback.
Alves-Pinto, Ana; Turova, Varvara; Blumenstein, Tobias; Hantuschke, Conny; Lampe, Renée
2017-03-01
Facilitation of implicit learning of a hand motor sequence after a single session of neurofeedback training of alpha power recorded from the motor cortex has been shown in healthy individuals (Ros et al., Biological Psychology 95:54-58, 2014). This facilitation effect could be potentially applied to improve the outcome of rehabilitation in patients with impaired hand motor function. In the current study a group of ten patients diagnosed with cerebral palsy trained reduction of alpha power derived from brain activity recorded from right and left motor areas. Training was distributed in three periods of 8 min each. In between, participants performed a serial reaction time task with their non-dominant hand, to a total of five runs. A similar procedure was repeated a week or more later but this time training was based on simulated brain activity. Reaction times pooled across participants decreased on each successive run faster after neurofeedback training than after the simulation training. Also recorded were two 3-min baseline conditions, once with the eyes open, another with the eyes closed, at the beginning and end of the experimental session. No significant changes in alpha power with neurofeedback or with simulation training were obtained and no correlation with the reductions in reaction time could be established. Contributions for this are discussed.
HYDRA : High-speed simulation architecture for precision spacecraft formation simulation
NASA Technical Reports Server (NTRS)
Martin, Bryan J.; Sohl, Garett.
2003-01-01
Hydra, the Hierarchical Distributed Reconfigurable Architecture, is a scalable simulation architecture that provides flexibility and ease of use while taking advantage of modern computation and communication hardware. It also provides the ability to implement distributed (workstation-based) simulations and high-fidelity real-time simulation from a common core. Originally designed to serve as a research platform for examining fundamental challenges in formation flying simulation for future space missions, it is also finding use in other missions and applications, all of which can take advantage of the underlying object-oriented structure to easily produce distributed simulations. Hydra automates the process of connecting disparate simulation components (Hydra Clients) through a client-server architecture that uses high-level descriptions of the data associated with each client to find and forge desirable connections (Hydra Services) at run time. Services communicate through the use of Connectors, which abstract messaging to provide single-interface access to any desired communication protocol, from shared-memory message passing to TCP/IP to ACE and CORBA. Hydra shares many features with the HLA, although it provides more flexibility in connectivity services and behavior overriding.
Computational steering of GEM based detector simulations
NASA Astrophysics Data System (ADS)
Sheharyar, Ali; Bouhali, Othmane
2017-10-01
Gas-based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. These long-running simulations usually run on high-performance computers in batch mode. If the results lead to unexpected behavior, then the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This may result in inefficient resource utilization and an increase in the turnaround time for the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (i.e., live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable the exploration of the live data as it is produced by the simulation.
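A generic, hypothetical sketch of a steering hook (VisIt's actual in-situ coupling interface is not reproduced here): every N steps the simulation publishes a small status snapshot and polls a control file, so a user or a visualization front end can adjust a parameter or stop a long-running batch job without resubmitting it.

```python
import json
import os

def run_simulation(n_steps=100_000, check_every=1_000,
                   control_file="steer.json", status_file="status.json"):
    params = {"voltage": 400.0, "stop": False}
    state = 0.0
    for step in range(n_steps):
        state += 1e-4 * params["voltage"]   # stand-in for one simulation step
        if step % check_every == 0:
            # publish a lightweight status snapshot for live inspection
            with open(status_file, "w") as f:
                json.dump({"step": step, "state": state, "voltage": params["voltage"]}, f)
            # poll for steering commands written by the user or a front end
            if os.path.exists(control_file):
                with open(control_file) as f:
                    params.update(json.load(f))
                os.remove(control_file)
            if params.get("stop"):
                break
    return state

print(run_simulation(n_steps=5_000))
```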
Low dose tomographic fluoroscopy: 4D intervention guidance with running prior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, Barbara; Kuntz, Jan; Brehm, Marcus
Purpose: Today's standard imaging technique in interventional radiology is single- or biplane x-ray fluoroscopy, which delivers 2D projection images as a function of time (2D+T). This state-of-the-art technology, however, suffers from its projective nature and is limited by the superposition of the patient's anatomy. Temporally resolved tomographic volumes (3D+T) would significantly improve the visualization of complex structures. A continuous tomographic data acquisition, if carried out with today's technology, would yield an excessive patient dose. Recently the authors proposed a method that enables tomographic fluoroscopy at the same dose level as projective fluoroscopy, which means that if the scanning time of an intervention guided by projective fluoroscopy is the same as that of an intervention guided by tomographic fluoroscopy, almost the same dose is administered to the patient. The purpose of this work is to extend the authors' previous work and allow for patient motion during the intervention. Methods: The authors propose the running prior technique for adaptation of a prior image. This adaptation is realized by a combination of registration and projection replacement. In a first step the prior is deformed to the current position via affine and deformable registration. Then the information from outdated projections is replaced by newly acquired projections using forward and backprojection steps. The thus adapted volume is the running prior. The proposed method is validated by simulated as well as measured data. To investigate motion during intervention a moving head phantom was simulated. Real in vivo data of a pig are acquired by a prototype CT system consisting of a flat detector and a continuously rotating clinical gantry. Results: With the running prior technique it is possible to correct for motion without additional dose. For an application in intervention guidance both steps of the running prior technique, registration and replacement, are necessary. Reconstructed volumes based on the running prior show high image quality without introducing new artifacts, and the interventional materials are displayed at the correct position. Conclusions: The running prior improves the robustness of low dose 3D+T intervention guidance to intended or unintended patient motion.
Lytton, William W; Neymotin, Samuel A; Hines, Michael L
2008-06-30
In an effort to design a simulation environment that is more similar to that of neurophysiology, we introduce a virtual slice setup in the NEURON simulator. The virtual slice setup runs continuously and permits parameter changes, including changes to synaptic weights and time course and to intrinsic cell properties. The virtual slice setup permits shocks to be applied at chosen locations and activity to be sampled intra- or extracellularly from chosen locations. By default, a summed population display is shown during a run to indicate the level of activity and no states are saved. Simulations can run for hours of model time, so it is not practical to save all of the state variables. These, in any case, are primarily of interest at discrete times when experiments are being run: the simulation can be stopped momentarily at such times to save activity patterns. The virtual slice setup maintains an automated notebook showing shocks and parameter changes as well as user comments. We demonstrate how interaction with a continuously running simulation encourages experimental prototyping and can suggest additional dynamical features such as ligand wash-in and wash-out, as alternatives to typical instantaneous parameter change. The virtual slice setup currently uses event-driven cells and runs at approximately 2 min/h on a laptop.
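A continuously running simulation with mid-run parameter changes looks roughly like the following in NEURON's Python interface (requires the `neuron` package). This is a generic single-compartment sketch, not the virtual slice code itself; the cell, stimulus values, and run lengths are arbitrary.

```python
# Minimal NEURON sketch of "keep running, change a parameter, keep running".
from neuron import h
h.load_file("stdrun.hoc")          # provides continuerun()

soma = h.Section(name="soma")
soma.L = soma.diam = 20.0
soma.insert("hh")                  # Hodgkin-Huxley channels

stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 5.0, 1e9, 0.05   # effectively continuous drive

t_vec = h.Vector(); t_vec.record(h._ref_t)
v_vec = h.Vector(); v_vec.record(soma(0.5)._ref_v)

h.finitialize(-65.0)
h.continuerun(200.0)               # run for a while ...
stim.amp = 0.15                    # ... change a parameter without restarting ...
h.continuerun(400.0)               # ... and keep running from where we stopped
print("samples recorded:", int(t_vec.size()))
```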
Fast neural net simulation with a DSP processor array.
Muller, U A; Gunzinger, A; Guggenbuhl, W
1995-01-01
This paper describes the implementation of a fast neural net simulator on a novel parallel distributed-memory computer. A 60-processor system, named MUSIC (multiprocessor system with intelligent communication), is operational and runs the backpropagation algorithm at a speed of 330 million connection updates per second (continuous weight update) using 32-bit floating-point precision. This is equal to 1.4 Gflops sustained performance. The complete system, with 3.8 Gflops peak performance, consumes less than 800 W of electrical power and fits into a 19-inch rack. While reaching the speed of modern supercomputers, MUSIC can still be used as a personal desktop computer at a researcher's own disposal. In neural net simulation, this gives a single user a level of computing performance that was previously unthinkable. The system's real-time interfaces make it especially useful for embedded applications.
Challenge toward the prediction of typhoon behaviour and downpour
NASA Astrophysics Data System (ADS)
Takahashi, K.; Onishi, R.; Baba, Y.; Kida, S.; Matsuda, K.; Goto, K.; Fuchigami, H.
2013-08-01
Mechanisms of interaction among phenomena at different scales play important roles in the forecasting of weather and climate. The Multi-Scale Simulator for the Geoenvironment (MSSG), which deals with multi-scale multi-physics phenomena, is a coupled non-hydrostatic atmosphere-ocean model designed to run efficiently on the Earth Simulator. We present simulation results with the world's highest horizontal resolution of 1.9 km for the entire globe, simulations of regional heavy rain with 1 km horizontal resolution, and urban-area simulations with 5 m horizontal/vertical resolution. To gain high performance by exploiting the system capabilities, we build on performance evaluation metrics introduced in previous studies that incorporate the effects of the data caching mechanism between CPU and memory. With a code optimization guideline based on such metrics, we demonstrate that MSSG can achieve an excellent peak performance ratio of 32.2% on the Earth Simulator, with single-core performance found to be a key to reduced time-to-solution.
GUMICS4 Synthetic and Dynamic Simulations of the ECLAT Project
NASA Astrophysics Data System (ADS)
Facsko, G.; Palmroth, M. M.; Gordeev, E.; Hakkinen, L. V.; Honkonen, I. J.; Janhunen, P.; Sergeev, V. A.; Kauristie, K.; Milan, S. E.
2012-12-01
The European Commission funded the European Cluster Assimilation Techniques (ECLAT) project as a collaboration of five leading European universities and research institutes. A main contribution of the Finnish Meteorological Institute (FMI) is to provide a wide range of global MHD runs with the Grand Unified Magnetosphere Ionosphere Coupling simulation (GUMICS). The runs are divided into two categories: synthetic runs investigating the extent to which solar wind drivers can influence magnetospheric dynamics, and dynamic runs using measured solar wind data as input. Here we consider the first set of runs, with synthetic solar wind input. The solar wind density, velocity and interplanetary magnetic field had different magnitudes and orientations; furthermore, two F10.7 flux values were selected for solar radiation minimum and maximum. The solar wind parameter values were held constant so that a stable solution was achieved. All configurations were run several times with three different tilt angles (-15°, 0°, +15°) in the GSE X-Z plane. The Cray XT supercomputer of the FMI provides a unique opportunity in global magnetohydrodynamic simulation: running GUMICS-4 on one year of real solar wind data. Solar wind magnetic field, density, temperature and velocity data based on Advanced Composition Explorer (ACE) and WIND measurements are downloaded from the OMNIWeb open database and a special input file is created for each Cluster orbit. All data gaps are replaced with linear interpolations between the last valid value before and the first valid value after the gap. A minimum variance transformation is applied to the interplanetary magnetic field data to clean it and to avoid divergence problems in the code. The Cluster orbits are divided into slices allowing parallel computation, and each slice has an average tilt angle value. The file timestamps start one hour before perigee to provide time for building up a magnetosphere in the simulation space. The real measurements were extrapolated to one-minute intervals by the database, and the time steps of the simulation results are shifted by 20-30 minutes, calculated from the spacecraft position and the actual solar wind velocity. All simulation results are saved every 5 minutes (in simulation time). The results of the 162 simulations, forming a so-called "synthetic run library", were visualized and uploaded to the FMI web pages after validation, as were the outputs of the year-long run. Here we present details of these runs.
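The gap handling mentioned above (linear interpolation between the last valid sample before and the first valid sample after a gap) can be sketched as follows; the gap marker and the treatment of gaps at the series edges are assumptions for the example.

```python
import numpy as np

def fill_gaps_linear(t, values, gap_value=np.nan):
    """Replace data gaps with a linear interpolation between the last valid
    sample before the gap and the first valid sample after it. Edge gaps are
    simply held at the nearest valid value here."""
    values = np.asarray(values, dtype=float)
    bad = np.isnan(values) if np.isnan(gap_value) else (values == gap_value)
    if bad.all():
        raise ValueError("no valid samples to interpolate from")
    good = ~bad
    return np.interp(t, np.asarray(t)[good], values[good])

if __name__ == "__main__":
    t = np.arange(10.0)
    v = np.array([1.0, 2.0, np.nan, np.nan, 5.0, 6.0, np.nan, 8.0, 9.0, 10.0])
    print(fill_gaps_linear(t, v))
```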
Bremner, P D; Blacklock, C J; Paganga, G; Mullen, W; Rice-Evans, C A; Crozier, A
2000-06-01
After minimal sample preparation, two different HPLC methodologies, one based on a single gradient reversed-phase HPLC step, the other on multiple HPLC runs each optimised for specific components, were used to investigate the composition of flavonoids and phenolic acids in apple and tomato juices. The principal components in apple juice were identified as chlorogenic acid, phloridzin, caffeic acid and p-coumaric acid. Tomato juice was found to contain chlorogenic acid, caffeic acid, p-coumaric acid, naringenin and rutin. The quantitative estimates of the levels of these compounds, obtained with the two HPLC procedures, were very similar, demonstrating that either method can be used to analyse accurately the phenolic components of apple and tomato juices. Chlorogenic acid in tomato juice was the only component not fully resolved in the single run study and the multiple run analysis prior to enzyme treatment. The single run system of analysis is recommended for the initial investigation of plant phenolics and the multiple run approach for analyses where chromatographic resolution requires improvement.
International Oil Supplies and Demands
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-04-01
The eleventh Energy Modeling Forum (EMF) working group met four times over the 1989-1990 period to compare alternative perspectives on international oil supplies and demands through 2010 and to discuss how alternative supply and demand trends influence the world's dependence upon Middle Eastern oil. Proprietors of eleven economic models of the world oil market used their respective models to simulate a dozen scenarios using standardized assumptions. From its inception, the study was not designed to focus on the short-run impacts of disruptions on oil markets. Nor did the working group attempt to provide a forecast or just a single view of the likely future path for oil prices. The model results guided the group's thinking about many important longer-run market relationships and helped to identify differences of opinion about future oil supplies, demands, and dependence.
International Oil Supplies and Demands
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-09-01
The eleventh Energy Modeling Forum (EMF) working group met four times over the 1989-90 period to compare alternative perspectives on international oil supplies and demands through 2010 and to discuss how alternative supply and demand trends influence the world's dependence upon Middle Eastern oil. Proprietors of eleven economic models of the world oil market used their respective models to simulate a dozen scenarios using standardized assumptions. From its inception, the study was not designed to focus on the short-run impacts of disruptions on oil markets. Nor did the working group attempt to provide a forecast or just a single view of the likely future path for oil prices. The model results guided the group's thinking about many important longer-run market relationships and helped to identify differences of opinion about future oil supplies, demands, and dependence.
Numerical integration of detector response functions via Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
Numerical integration of detector response functions via Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.
2017-09-01
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼ 1000 × faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
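The speed-up comes from replacing repeated Monte Carlo runs with a linear fold of a source spectrum through a precomputed response matrix. The sketch below illustrates only that idea; the band-shaped stand-in response and the binning are invented for the example and are unrelated to the Chi-Nu detectors.

```python
import numpy as np

def fold_with_response(response, source_spectrum):
    """Once a response matrix R[i, j] (probability that an event emitted in
    source bin j is detected in output bin i) has been built from Monte Carlo,
    an output spectrum for any trial source spectrum is a single matrix-vector
    product rather than a fresh simulation."""
    return response @ source_spectrum

if __name__ == "__main__":
    n_out, n_src = 200, 50
    # Stand-in response: each source bin smeared into a band of output bins.
    response = np.zeros((n_out, n_src))
    for j in range(n_src):
        centre = int(j * n_out / n_src)
        lo, hi = max(0, centre - 3), min(n_out, centre + 4)
        response[lo:hi, j] = 1.0 / (hi - lo)
    rng = np.random.default_rng(0)
    trial_spectra = rng.random((1000, n_src))   # e.g. 1000 model variations
    folded = trial_spectra @ response.T         # all output spectra at once
    print(folded.shape)                         # (1000, 200)
```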
Numerical integration of detector response functions via Monte Carlo simulations
Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.; ...
2017-06-13
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
NASA Astrophysics Data System (ADS)
Choi, Y.; Li, X.; Czader, B.
2014-12-01
Three WRF simulations for the DISCOVER-AQ 2013 Texas campaign period (30 days in September) are performed to characterize uncertainties in the simulated meteorological and chemical conditions. These simulations differ in domain setup and in whether observation nudging is performed in the WRF runs. There is a gain of around 7% in the index of agreement (IOA) for temperature and a 9-12% boost for U-WIND and V-WIND when observational nudging is employed in the simulation. Further performance gain from nested domains over a single domain is marginal. The CMAQ simulations based on the above WRF setups showed that the ozone performance slightly improved in the simulation for which objective analysis (OA) is carried out. Further IOA gain, though quite limited, is achieved with nested domains. This study shows that the high ozone episodes during the analyzed time periods were associated with the uncertainties of the simulated cold front passage, the chemical boundary condition, and small-scale temporal wind fields. All runs missed the observed high ozone values, which reached above 150 ppb in La Porte on September 25, the only day with hourly ozone over 120 ppb. The failure is likely due to the model's inability to catch small-scale wind shifts in the industrial zone, despite better wind directions in the simulations with nudging and nested domains. This study also shows that overestimated background ozone from the southerly chemical boundary is a critical source of the model's general overpredictions of the ozone concentrations from CMAQ during September 2013. The results of this study shed light on the necessity of (1) capturing small-scale winds such as the onsets of bay breezes or sea breezes and (2) implementing more accurate chemical boundary conditions to reduce the simulated high-biased ozone concentrations. One promising remedy for (1) is implementing hourly observation nudging instead of the standard nudging done every three hours.
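The IOA quoted above is presumably Willmott's index of agreement; the abstract does not spell out the formula, so the function below assumes the standard form and uses made-up temperature values purely to show the call.

```python
import numpy as np

def index_of_agreement(pred, obs):
    """Willmott's index of agreement:
        d = 1 - sum((P - O)^2) / sum((|P - mean(O)| + |O - mean(O)|)^2)
    Assumed here to be the IOA metric quoted in the abstract."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    obar = obs.mean()
    denom = np.sum((np.abs(pred - obar) + np.abs(obs - obar)) ** 2)
    return 1.0 - np.sum((pred - obs) ** 2) / denom

if __name__ == "__main__":
    obs = np.array([290.0, 292.5, 295.0, 297.0, 296.0])        # illustrative only
    base = obs + np.array([2.0, -1.5, 1.0, 2.5, -2.0])          # un-nudged run
    nudged = obs + np.array([0.5, -0.5, 0.3, 1.0, -0.7])        # nudged run
    print(index_of_agreement(base, obs), index_of_agreement(nudged, obs))
```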
Analysis of physics-based preconditioning for single-phase subchannel equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansel, J. E.; Ragusa, J. C.; Allu, S.
2013-07-01
The (single-phase) subchannel approximations are used throughout nuclear engineering to provide efficient flow simulation, because the computational burden is much smaller than for computational fluid dynamics (CFD) simulations and empirical relations have been developed and validated to provide accurate solutions in appropriate flow regimes. Here, the subchannel equations have been recast in a residual form suitable for a multi-physics framework. The eigenvalue spectrum of the Jacobian matrix, along with several potential physics-based preconditioning approaches, is evaluated, and the potential for improved convergence from preconditioning is assessed. The physics-based preconditioner options include several forms of reduced equations that decouple the subchannels by neglecting crossflow, conduction, and/or both turbulent momentum and energy exchange between subchannels. Eigenvalue analysis shows that preconditioning moves clusters of eigenvalues away from zero and toward one. A test problem is run with and without preconditioning. Without preconditioning, the solution failed to converge using GMRES, but application of any of the preconditioners allowed the solution to converge. (authors)
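The effect of a decoupling preconditioner on Krylov convergence can be illustrated on a toy analogue of the coupled subchannel system; the matrix below is not the paper's Jacobian. The "physics-based" preconditioner here simply drops the inter-channel coupling and solves the block-diagonal problem exactly, then hands that operator to SciPy's GMRES.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stand-in for coupled subchannel equations: block-tridiagonal system where
# the off-diagonal blocks play the role of crossflow/turbulent exchange coupling.
n_chan, n_cell = 8, 50
block = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n_cell, n_cell))
A = sp.kron(sp.eye(n_chan), block) + 0.2 * sp.kron(
    sp.diags([1.0, 1.0], [-1, 1], shape=(n_chan, n_chan)), sp.eye(n_cell))
A = A.tocsc()
b = np.ones(A.shape[0])

# "Physics-based" preconditioner sketch: neglect inter-channel coupling and
# solve the decoupled (block-diagonal) problem exactly for each channel.
block_lu = spla.splu(block.tocsc())
def apply_decoupled(v):
    return np.concatenate([block_lu.solve(v[i * n_cell:(i + 1) * n_cell])
                           for i in range(n_chan)])
M = spla.LinearOperator(A.shape, matvec=apply_decoupled)

def solve(precond):
    residuals = []
    x, info = spla.gmres(A, b, M=precond,
                         callback=residuals.append, callback_type="pr_norm")
    return info, len(residuals)          # (0 = converged, inner iterations)

print("no preconditioner:", solve(None))
print("decoupled blocks: ", solve(M))
```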
Simulation of the National Aerospace System for Safety Analysis
NASA Technical Reports Server (NTRS)
Pritchett, Amy; Goldsman, Dave; Statler, Irv (Technical Monitor)
2002-01-01
Work started on this project on January 1, 1999, the first year of the grant. Following the outline of the grant proposal, a simulator architecture has been established which can incorporate the variety of types of models needed to accurately simulate national airspace dynamics. For the sake of efficiency, this architecture was based on an established single-aircraft flight simulator, the Reconfigurable Flight Simulator (RFS), already developed at Georgia Tech. Likewise, in the first year substantive changes and additions were made to the RFS to convert it into a simulation of the National Airspace System, with the flexibility to incorporate many types of models: aircraft models; controller models; airspace configuration generators; discrete event generators; embedded statistical functions; and display and data outputs. The architecture has been developed with the capability to accept any models of these types; due to its object-oriented structure, individual simulator components can be added and removed during run-time, and can be compiled separately. Simulation objects from other projects should be easy to convert to meet architecture requirements, with the intent that both this project may now be able to incorporate established simulation components from other projects, and that other projects may easily use this simulation without significant time investment.
Influence of Number of Contact Efforts on Running Performance During Game-Based Activities.
Johnston, Rich D; Gabbett, Tim J; Jenkins, David G
2015-09-01
To determine the influence the number of contact efforts during a single bout has on running intensity during game-based activities and assess relationships between physical qualities and distances covered in each game. Eighteen semiprofessional rugby league players (age 23.6 ± 2.8 y) competed in 3 off-side small-sided games (2 × 10-min halves) with a contact bout performed every 2 min. The rules of each game were identical except for the number of contact efforts performed in each bout. Players performed 1, 2, or 3 × 5-s wrestles in the single-, double-, and triple-contact game, respectively. The movement demands (including distance covered and intensity of exercise) in each game were monitored using global positioning system units. Bench-press and back-squat 1-repetition maximum and the 30-15 Intermittent Fitness Test (30-15IFT) assessed muscle strength and high-intensity-running ability, respectively. There was little change in distance covered during the single-contact game (ES = -0.16 to -0.61), whereas there were larger reductions in the double- (ES = -0.52 to -0.81) and triple-contact (ES = -0.50 to -1.15) games. Significant relationships (P < .05) were observed between 30-15IFT and high-speed running during the single- (r = .72) and double- (r = .75), but not triple-contact (r = .20) game. There is little change in running intensity when only single contacts are performed each bout; however, when multiple contacts are performed, greater reductions in running intensity result. In addition, high-intensity-running ability is only associated with running performance when contact demands are low.
Operating system for a real-time multiprocessor propulsion system simulator
NASA Technical Reports Server (NTRS)
Cole, G. L.
1984-01-01
The success of the Real Time Multiprocessor Operating System (RTMPOS) in the development and evaluation of experimental hardware and software systems for real time interactive simulation of air breathing propulsion systems was evaluated. The Real Time Multiprocessor Operating System (RTMPOS) provides the user with a versatile, interactive means for loading, running, debugging and obtaining results from a multiprocessor based simulator. A front end processor (FEP) serves as the simulator controller and interface between the user and the simulator. These functions are facilitated by the RTMPOS which resides on the FEP. The RTMPOS acts in conjunction with the FEP's manufacturer supplied disk operating system that provides typical utilities like an assembler, linkage editor, text editor, file handling services, etc. Once a simulation is formulated, the RTMPOS provides for engineering level, run time operations such as loading, modifying and specifying computation flow of programs, simulator mode control, data handling and run time monitoring. Run time monitoring is a powerful feature of RTMPOS that allows the user to record all actions taken during a simulation session and to receive advisories from the simulator via the FEP. The RTMPOS is programmed mainly in PASCAL along with some assembly language routines. The RTMPOS software is easily modified to be applicable to hardware from different manufacturers.
Simulation-Based Learning: The Learning-Forgetting-Relearning Process and Impact of Learning History
ERIC Educational Resources Information Center
Davidovitch, Lior; Parush, Avi; Shtub, Avy
2008-01-01
The results of empirical experiments evaluating the effectiveness and efficiency of the learning-forgetting-relearning process in a dynamic project management simulation environment are reported. Sixty-six graduate engineering students performed repetitive simulation-runs with a break period of several weeks between the runs. The students used a…
SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation
NASA Technical Reports Server (NTRS)
Steinman, Jeff S.
1992-01-01
Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.
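The event-horizon idea behind Breathing Time Buckets can be sketched on a single process: events are processed until one would cross the earliest time stamp of any newly generated event, and only then is the bucket committed. This is a schematic of the concept, not SPEEDES code, and it omits the global horizon exchange and rollback machinery of the parallel case.

```python
import heapq

def breathing_time_buckets(initial_events, handler, t_end):
    """Single-process schematic of the Breathing Time Buckets cycle: process
    events optimistically, but commit nothing beyond the 'event horizon', the
    earliest time stamp of any event generated in the current cycle."""
    pending = list(initial_events)          # (time, payload) tuples
    heapq.heapify(pending)
    committed = []
    while pending and pending[0][0] < t_end:
        horizon = float("inf")
        processed = []
        # Optimistic phase: stop before an event at or beyond the horizon.
        while pending and pending[0][0] < horizon:
            t, payload = heapq.heappop(pending)
            new_events = handler(t, payload)        # may schedule new events
            processed.append((t, payload))
            for ev in new_events:
                horizon = min(horizon, ev[0])
                heapq.heappush(pending, ev)
        committed.extend(processed)                 # commit this "bucket"
    return committed

if __name__ == "__main__":
    def handler(t, payload):
        # Each event schedules a follow-up a little later (toy model).
        return [(t + 1.5, payload)] if t + 1.5 < 10.0 else []
    done = breathing_time_buckets([(0.0, "a"), (0.4, "b")], handler, t_end=10.0)
    print(len(done), done[:4])
```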
Toward Interactive Scenario Analysis and Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gayle, Thomas R.; Summers, Kenneth Lee; Jungels, John
2015-01-01
As Modeling and Simulation (M&S) tools have matured, their applicability and importance have increased across many national security challenges. In particular, they provide a way to test how something may behave without the need to do real world testing. However, current and future changes across several factors including capabilities, policy, and funding are driving a need for rapid response or evaluation in ways that many M&S tools cannot address. Issues around large data, computational requirements, delivery mechanisms, and analyst involvement already exist and pose significant challenges. Furthermore, rising expectations, rising input complexity, and increasing depth of analysis will only increase the difficulty of these challenges. In this study we examine whether innovations in M&S software coupled with advances in "cloud" computing and "big-data" methodologies can overcome many of these challenges. In particular, we propose a simple, horizontally-scalable distributed computing environment that could provide the foundation (i.e. "cloud") for next-generation M&S-based applications based on the notion of "parallel multi-simulation". In our context, the goal of parallel multi-simulation is to consider as many simultaneous paths of execution as possible. Therefore, with sufficient resources, the complexity is dominated by the cost of single scenario runs as opposed to the number of runs required. We show the feasibility of this architecture through a stable prototype implementation coupled with the Umbra Simulation Framework [6]. Finally, we highlight the utility through multiple novel analysis tools and by showing the performance improvement compared to existing tools.
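Parallel multi-simulation in its simplest form is just many independent scenario runs dispatched at once, so that wall-clock time is dominated by one run rather than by the number of runs. The sketch below uses a local process pool and a stand-in `run_scenario` function; the real architecture distributes Umbra runs across a cluster.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def run_scenario(params):
    """Stand-in for a single scenario run; in the architecture sketched above
    this would be one Umbra (or other M&S) execution with its own inputs."""
    rng = random.Random(params["seed"])
    outcome = sum(rng.random() * params["severity"] for _ in range(10_000))
    return params["seed"], outcome

if __name__ == "__main__":
    scenarios = [{"seed": s, "severity": 0.5 + 0.1 * (s % 5)} for s in range(32)]
    # Many simultaneous paths of execution: the pool runs scenarios in parallel.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_scenario, scenarios))
    print(len(results), results[0])
```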
NASA Astrophysics Data System (ADS)
Taxak, A. K.; Ojha, C. S. P.
2017-12-01
Land use and land cover (LULC) changes within a watershed are recognised as an important factor affecting hydrological processes and water resources. LULC changes continuously, not only over the long term but also at the inter-annual and seasonal level. Changes in LULC affect interception, storage and moisture. A widely used approach in rainfall-runoff modelling through land surface models (LSM)/hydrological models is to keep the LULC the same throughout the model run period. In long-term simulations where land use change takes place during the run period, using a single LULC does not represent a true picture of ground conditions and can result in stationarity of model responses. The present work presents a case study in which changes in LULC are incorporated by using multiple LULC layers. LULC maps for the study period were created using imagery from the Landsat series, Sentinel, and EO-1 ALI. The distributed, physically based Variable Infiltration Capacity (VIC) model was modified to allow inclusion of LULC as a time-varying variable, just like climate. The Narayani basin was simulated with LULC, leaf area index (LAI), albedo and climate data for 1992-2015. The results showed that the model simulation with the varied parametrization approach has a large improvement over the conventional fixed parametrization approach in terms of long-term water balance. The proposed modelling approach could improve hydrological modelling for applications such as land cover change studies, water budget studies, etc.
Bridging the scales in atmospheric composition simulations using a nudging technique
NASA Astrophysics Data System (ADS)
D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco
2010-05-01
Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from local, where anthropogenic emissions are concentrated, to global, where we are interested in studying the impact of these sources. Describing all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scale is highly non-trivial, though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain, run A) and fine (0.1°, central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as boundary condition for the fine one. Then another coarse-resolution run (run C) is performed, in which the high-resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas are computed for O3 and PM. Although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general, mean values of run C lie between those of run A and run B. A propagation of the signal outside the nudging region is observed and is evaluated in terms of differences between the coarse-resolution runs (with and without nudging) and the fine-resolution simulation.
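The nudging of coarse-grid concentrations toward the remapped fine-grid fields can be pictured as a Newtonian relaxation over the nudging region. The relaxation form and the 6-hour time scale below are illustrative assumptions; the abstract does not give the exact expression used in BOLCHEM.

```python
import numpy as np

def nudge(coarse, fine_remapped, mask, dt, tau):
    """Relax the coarse-grid field toward the fine-grid field remapped onto the
    coarse grid, but only inside the nudging region defined by `mask`."""
    return coarse + mask * (dt / tau) * (fine_remapped - coarse)

if __name__ == "__main__":
    coarse = np.full((60, 80), 40.0)            # e.g. ozone, ppb (made-up values)
    fine_remapped = np.full((60, 80), 40.0)
    fine_remapped[20:35, 30:55] = 55.0          # hot spot resolved by the fine run
    mask = np.zeros_like(coarse); mask[20:35, 30:55] = 1.0
    for _ in range(24):                          # one day of hourly steps
        coarse = nudge(coarse, fine_remapped, mask, dt=3600.0, tau=6 * 3600.0)
    print(coarse[25, 40], coarse[5, 5])          # nudged cell vs. untouched cell
```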
Adaptive control of servo system based on LuGre model
NASA Astrophysics Data System (ADS)
Jin, Wang; Niancong, Liu; Jianlong, Chen; Weitao, Geng
2018-03-01
This paper establishes a mechanical model of a feed system based on the LuGre friction model. To counter the influence of nonlinear factors on the running stability of the system, a nonlinear observer is designed to estimate the internal state z of the LuGre model and an adaptive friction compensation controller is designed. Simulink simulation results show that the control method can effectively suppress the adverse effects of friction and external disturbances. The simulations show that the adaptive parameter kz lies between 0.11 and 0.13, and the value of gamma1 between 1.9 and 2.1. The position tracking error reaches the 10^-3 level and is stabilized near zero within 0.3 seconds; the compensation method has better tracking accuracy and robustness.
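For reference, the standard LuGre friction model referred to above can be written down and integrated in a few lines. The sketch below uses the common textbook form; the parameter values are illustrative and are not those of the paper's feed system or its adaptive compensator.

```python
import numpy as np

def lugre_friction(v, z, dt, sigma0=1e5, sigma1=300.0, sigma2=0.4,
                   Fc=1.0, Fs=1.5, vs=0.01):
    """One explicit Euler step of the standard LuGre friction model:
        g(v)  = Fc + (Fs - Fc) * exp(-(v / vs)^2)
        dz/dt = v - sigma0 * |v| * z / g(v)
        F     = sigma0 * z + sigma1 * dz/dt + sigma2 * v
    Parameter values are illustrative only."""
    g = Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)
    zdot = v - sigma0 * abs(v) * z / g
    z_new = z + dt * zdot
    F = sigma0 * z_new + sigma1 * zdot + sigma2 * v
    return F, z_new

if __name__ == "__main__":
    # Integrate the friction force for a slow sinusoidal velocity; a compensator
    # would subtract an estimate of F (built from an observer state z_hat).
    z, dt = 0.0, 1e-4
    for k in range(5000):
        t = k * dt
        v = 0.02 * np.sin(2 * np.pi * t)        # slow velocity reversal
        F, z = lugre_friction(v, z, dt)
        if k % 1000 == 0:
            print(f"t={t:.2f} s  v={v:+.4f} m/s  F={F:+.3f} N")
```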
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; ...
2016-09-29
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. Finally, this paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. Finally, this paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
Simulation Study of Evacuation Control Center Operations Analysis
2011-06-01
[Only front-matter fragments of this report are available: 4.3 Baseline Manning (Runs 1, 2, & 3); 4.3.1 Baseline Statistics Interpretation; Appendix B, Key Statistic Matrix: Runs 1-12; Appendix C, Blue Dart; Paired T result, Run 5 vs. Run 6: ECC Completion Time; Key Statistics: Run 3 vs. Run 9.]
Blum, Yvonne; Vejdani, Hamid R; Birn-Jeffery, Aleksandra V; Hubicki, Christian M; Hurst, Jonathan W; Daley, Monica A
2014-01-01
To achieve robust and stable legged locomotion in uneven terrain, animals must effectively coordinate limb swing and stance phases, which involve distinct yet coupled dynamics. Recent theoretical studies have highlighted the critical influence of swing-leg trajectory on stability, disturbance rejection, leg loading and economy of walking and running. Yet, simulations suggest that not all these factors can be simultaneously optimized. A potential trade-off arises between the optimal swing-leg trajectory for disturbance rejection (to maintain steady gait) versus regulation of leg loading (for injury avoidance and economy). Here we investigate how running guinea fowl manage this potential trade-off by comparing experimental data to predictions of hypothesis-based simulations of running over a terrain drop perturbation. We use a simple model to predict swing-leg trajectory and running dynamics. In simulations, we generate optimized swing-leg trajectories based upon specific hypotheses for task-level control priorities. We optimized swing trajectories to achieve i) constant peak force, ii) constant axial impulse, or iii) perfect disturbance rejection (steady gait) in the stance following a terrain drop. We compare simulation predictions to experimental data on guinea fowl running over a visible step down. Swing and stance dynamics of running guinea fowl closely match simulations optimized to regulate leg loading (priorities i and ii), and do not match the simulations optimized for disturbance rejection (priority iii). The simulations reinforce previous findings that swing-leg trajectory targeting disturbance rejection demands large increases in stance leg force following a terrain drop. Guinea fowl negotiate a downward step using unsteady dynamics with forward acceleration, and recover to steady gait in subsequent steps. Our results suggest that guinea fowl use swing-leg trajectory consistent with priority for load regulation, and not for steadiness of gait. Swing-leg trajectory optimized for load regulation may facilitate economy and injury avoidance in uneven terrain.
Blum, Yvonne; Vejdani, Hamid R.; Birn-Jeffery, Aleksandra V.; Hubicki, Christian M.; Hurst, Jonathan W.; Daley, Monica A.
2014-01-01
To achieve robust and stable legged locomotion in uneven terrain, animals must effectively coordinate limb swing and stance phases, which involve distinct yet coupled dynamics. Recent theoretical studies have highlighted the critical influence of swing-leg trajectory on stability, disturbance rejection, leg loading and economy of walking and running. Yet, simulations suggest that not all these factors can be simultaneously optimized. A potential trade-off arises between the optimal swing-leg trajectory for disturbance rejection (to maintain steady gait) versus regulation of leg loading (for injury avoidance and economy). Here we investigate how running guinea fowl manage this potential trade-off by comparing experimental data to predictions of hypothesis-based simulations of running over a terrain drop perturbation. We use a simple model to predict swing-leg trajectory and running dynamics. In simulations, we generate optimized swing-leg trajectories based upon specific hypotheses for task-level control priorities. We optimized swing trajectories to achieve i) constant peak force, ii) constant axial impulse, or iii) perfect disturbance rejection (steady gait) in the stance following a terrain drop. We compare simulation predictions to experimental data on guinea fowl running over a visible step down. Swing and stance dynamics of running guinea fowl closely match simulations optimized to regulate leg loading (priorities i and ii), and do not match the simulations optimized for disturbance rejection (priority iii). The simulations reinforce previous findings that swing-leg trajectory targeting disturbance rejection demands large increases in stance leg force following a terrain drop. Guinea fowl negotiate a downward step using unsteady dynamics with forward acceleration, and recover to steady gait in subsequent steps. Our results suggest that guinea fowl use swing-leg trajectory consistent with priority for load regulation, and not for steadiness of gait. Swing-leg trajectory optimized for load regulation may facilitate economy and injury avoidance in uneven terrain. PMID:24979750
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node. • Load Balancing: keeps the workload per processor as even as possible so the calculation runs efficiently. • Global Particle Find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain. • Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
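The "global particle find" step boils down to mapping a particle's coordinates to the rank that owns that part of the background geometry. The sketch below assumes a uniform Cartesian decomposition with a lexicographic rank ordering, which is a simplification of the load-balanced decompositions discussed above.

```python
import numpy as np

def owning_rank(coords, lo, hi, ranks_per_dim):
    """Map particle coordinates to the MPI rank owning that spatial subdomain,
    assuming a uniform Cartesian domain decomposition of the box [lo, hi]."""
    coords = np.atleast_2d(coords)
    frac = (coords - lo) / (hi - lo)                       # normalise to [0, 1)
    idx = np.clip((frac * ranks_per_dim).astype(int),
                  0, np.array(ranks_per_dim) - 1)          # subdomain index per axis
    nx, ny, nz = ranks_per_dim
    return idx[:, 0] + nx * (idx[:, 1] + ny * idx[:, 2])   # lexicographic rank id

if __name__ == "__main__":
    lo, hi = np.zeros(3), np.array([10.0, 10.0, 10.0])
    pts = np.array([[0.5, 0.5, 0.5], [9.9, 9.9, 9.9], [5.0, 2.0, 7.5]])
    print(owning_rank(pts, lo, hi, ranks_per_dim=(4, 4, 4)))
```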
E Pluribus Analysis: Applying a Superforecasting Methodology to the Detection of Homegrown Violence
2018-03-01
[Only abstract and front-matter fragments of this thesis are available: "…actor violence and a set of predefined decision-making protocols. This research included running four simulations using the Monte Carlo technique…"; a front-matter entry reads "Predicting Randomness: 1. Using a 'Runs Test' to Determine a Temporal Pattern in Lone…".]
Adaptive Integration of Nonsmooth Dynamical Systems
2017-10-11
[Only presentation fragments of this report are available: "…controlled time stepping method to interactively design running robots"; reference [1] John Shepherd, Samuel Zapolsky, and Evan M. Drumwright, "Fast multi-body…"; "Started working in simulation after attempting to use software like this to test software running on my robots. The libraries that produce these beautiful results have failed at simulating robotic manipulation. Postulate: It is easier to…".]
Gray: a ray tracing-based Monte Carlo simulator for PET.
Freese, David L; Olcott, Peter D; Buss, Samuel R; Levin, Craig S
2018-05-21
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a [Formula: see text] speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within [Formula: see text]% when accounting for differences in peak NECR. We also estimate the peak NECR to be [Formula: see text] kcps, or within [Formula: see text]% of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
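The ray-tracing primitives that such a simulator leans on are analytic intersection tests. Below is a standard ray-sphere intersection in that spirit; it is a generic illustration, not code from Gray, and the "detector 40 cm away with 5 cm radius" stand-in is invented.

```python
import numpy as np

def ray_sphere_intersect(origin, direction, centre, radius):
    """Nearest positive intersection distance of a ray with a sphere, or None."""
    d = direction / np.linalg.norm(direction)
    oc = origin - centre
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None                      # ray misses the sphere
    sqrt_disc = np.sqrt(disc)
    for t in (-b - sqrt_disc, -b + sqrt_disc):
        if t > 1e-9:                     # first hit in front of the origin
            return t
    return None

if __name__ == "__main__":
    # A photon leaving the origin along +x; an idealised spherical "crystal"
    # of radius 5 cm centred 40 cm away (values invented for the example).
    print(ray_sphere_intersect(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                               np.array([40.0, 0.0, 0.0]), 5.0))
```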
Simulation of ozone production in a complex circulation region using nested grids
NASA Astrophysics Data System (ADS)
Taghavi, M.; Cautenet, S.; Foret, G.
2003-07-01
During the ESCOMPTE pre-campaign (15 June to 10 July 2000), three days of intensive pollution (IOP0) were observed and simulated. The comprehensive RAMS model, version 4.3, coupled online with a chemical module including 29 species, has been used to follow the chemistry of the polluted zone over southern France. This online method can be used because the code is parallelized and the SGI 3800 computer is very powerful. Two runs have been performed: run 1 with one grid and run 2 with two nested grids. The redistribution of simulated chemical species (ozone, carbon monoxide, sulphur dioxide and nitrogen oxides) was compared with aircraft measurements and surface stations. The two-grid run gave substantially better results than the one-grid run, only because the former takes the outer pollutants into account. This online method helps to explain the dynamics and to retrieve the redistribution of chemical species with good agreement.
Evolving hard problems: Generating human genetics datasets with a complex etiology.
Himmelstein, Daniel S; Greene, Casey S; Moore, Jason H
2011-07-07
A goal of human genetics is to discover genetic factors that influence individuals' susceptibility to common diseases. Most common diseases are thought to result from the joint failure of two or more interacting components instead of single component failures. This greatly complicates both the task of selecting informative genetic variants and the task of modeling interactions between them. We and others have previously developed algorithms to detect and model the relationships between these genetic factors and disease. Previously these methods have been evaluated with datasets simulated according to pre-defined genetic models. Here we develop and evaluate a model-free evolution strategy to generate datasets which display a complex relationship between individual genotype and disease susceptibility. We show that this model-free approach is capable of generating a diverse array of datasets with distinct gene-disease relationships for an arbitrary interaction order and sample size. We specifically generate 800 Pareto fronts, one for each independent run of our algorithm. In each run the predictiveness of single genetic variants and pairs of genetic variants has been minimized, while the predictiveness of third-, fourth-, or fifth-order combinations is maximized. Two hundred runs of the algorithm are further dedicated to creating datasets with predictive fourth- or fifth-order interactions and minimized lower-level effects. This method and the resulting datasets will allow the capabilities of novel methods to be tested without pre-specified genetic models. This allows researchers to evaluate which methods will succeed on human genetics problems where the model is not known in advance. We further make freely available to the community the entire Pareto-optimal front of datasets from each run so that novel methods may be rigorously evaluated. These 76,600 datasets are available from http://discovery.dartmouth.edu/model_free_data/.
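Selecting the non-dominated datasets from one run is a standard Pareto-front filter over the two competing objectives. The sketch below assumes both objectives are expressed so that smaller is better (for example, low-order predictiveness and the negative of high-order predictiveness); it illustrates only the filtering, not the authors' evolution strategy.

```python
def pareto_front(points):
    """Return the non-dominated subset of (f1, f2) objective pairs, where both
    objectives are to be minimised."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

if __name__ == "__main__":
    # Made-up objective pairs: (low-order predictiveness, -high-order predictiveness)
    candidates = [(0.52, -0.71), (0.55, -0.80), (0.60, -0.80),
                  (0.50, -0.60), (0.58, -0.85), (0.52, -0.75)]
    print(sorted(pareto_front(candidates)))
```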
Ballistic Limit Equation for Single Wall Titanium
NASA Technical Reports Server (NTRS)
Ratliff, J. M.; Christiansen, Eric L.; Bryant, C.
2009-01-01
Hypervelocity impact tests and hydrocode simulations were used to determine the ballistic limit equation (BLE) for perforation of a titanium wall, as a function of wall thickness. Two titanium alloys were considered, and separate BLEs were derived for each. Tested wall thicknesses ranged from 0.5mm to 2.0mm. The single-wall damage equation of Cour-Palais [ref. 1] was used to analyze the Ti wall's shielding effectiveness. It was concluded that the Cour-Palais single-wall equation produced a non-conservative prediction of the ballistic limit for the Ti shield. The inaccurate prediction was not a particularly surprising result; the Cour-Palais single-wall BLE contains shield material properties as parameters, but it was formulated only from tests of different aluminum alloys. Single-wall Ti shield tests were run (thicknesses of 2.0 mm, 1.5 mm, 1.0 mm, and 0.5 mm) on Ti 15-3-3-3 material custom cut from rod stock. Hypervelocity impact (HVI) tests were used to establish the failure threshold empirically, using the additional constraint that the damage scales with impact energy, as was indicated by hydrocode simulations. The criterion for shield failure was defined as no detached spall from the shield back surface during HVI. Based on the test results, which confirmed an approximately energy-dependent shield effectiveness, the Cour-Palais equation was modified.
Supercomputers ready for use as discovery machines for neuroscience.
Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus
2012-01-01
NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience.
Supercomputers Ready for Use as Discovery Machines for Neuroscience
Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus
2012-01-01
NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience. PMID:23129998
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetzstein, M.; Nelson, Andrew F.; Naab, T.
2009-10-01
We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary 'Press' tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose 'GRAPE' hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the Gnu General Public License.
NASA Astrophysics Data System (ADS)
Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.
2009-10-01
We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary "Press" tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose "GRAPE" hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the Gnu General Public License.
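The fixed-global-time-step Leapfrog option mentioned above corresponds to the familiar kick-drift-kick update. The sketch below is a generic version with a two-body orbit as a sanity check; it is not VINE code, which also supports individual per-particle time steps and a Runge-Kutta-Fehlberg integrator.

```python
import numpy as np

def leapfrog_kdk(pos, vel, accel_fn, dt, n_steps):
    """Kick-drift-kick leapfrog integration of particle trajectories with a
    single, global time step (illustrative only)."""
    acc = accel_fn(pos)
    for _ in range(n_steps):
        vel += 0.5 * dt * acc          # kick (half step)
        pos += dt * vel                # drift (full step)
        acc = accel_fn(pos)
        vel += 0.5 * dt * acc          # kick (half step)
    return pos, vel

if __name__ == "__main__":
    # Circular orbit around a central point mass (G*M = 1, r = 1) as a check.
    pos = np.array([[1.0, 0.0]])
    vel = np.array([[0.0, 1.0]])
    accel = lambda p: -p / np.linalg.norm(p, axis=1, keepdims=True) ** 3
    pos, vel = leapfrog_kdk(pos, vel, accel, dt=0.01, n_steps=int(2 * np.pi / 0.01))
    print(pos)   # should be close to the starting point after one orbit
```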
NASA Technical Reports Server (NTRS)
Zettle, Eugene V; Bolz, Ray E; Dittrich, R T
1947-01-01
As part of a study of the effects of fuel composition on the combustor performance of a turbojet engine, an investigation was made in a single I-16 combustor, with the standard I-16 injection nozzle supplied by the engine manufacturer, at simulated altitude conditions. The 10 fuels investigated included hydrocarbons of the paraffin, olefin, naphthene, and aromatic classes, with a boiling range from 113 degrees to 655 degrees F. They were hot-acid octane, diisobutylene, methylcyclohexane, benzene, xylene, 62-octane gasoline, kerosene, solvent 2, and Diesel fuel oil. The fuels were tested at combustor conditions simulating I-16 turbojet operation at an altitude of 45,000 feet and a rotor speed of 12,200 rpm. At these conditions the combustor-inlet air temperature, static pressure, and velocity were 60 degrees F, 12.3 inches of mercury absolute, and 112 feet per second, respectively, and were held approximately constant for the investigation. The reproducibility of the data is shown by check runs taken each day during the investigation. The combustion in the exhaust elbow was visually observed for each fuel investigated.
Validation of Extended MHD Models using MST RFP Plasmas
NASA Astrophysics Data System (ADS)
Jacobson, C. M.; Chapman, B. E.; Craig, D.; McCollam, K. J.; Sovinec, C. R.
2016-10-01
Significant effort has been devoted to improvement of computational models used in fusion energy sciences. Rigorous validation of these models is necessary in order to increase confidence in their ability to predict the performance of future devices. MST is a well diagnosed reversed-field pinch (RFP) capable of operation over a wide range of parameters. In particular, the Lundquist number S, a key parameter in resistive magnetohydrodynamics (MHD), can be varied over a wide range and provide substantial overlap with MHD RFP simulations. MST RFP plasmas are simulated using both DEBS, a nonlinear single-fluid visco-resistive MHD code, and NIMROD, a nonlinear extended MHD code, with S ranging from 10^4 to 5×10^4 for single-fluid runs, with the magnetic Prandtl number Pm = 1. Experiments with plasma current IP ranging from 60 kA to 500 kA result in S from 4×10^4 to 8×10^6. Validation metric comparisons are presented, focusing on how magnetic fluctuations b scale with S. Single-fluid NIMROD results give b ~ S^-0.21, and experiments give b ~ S^-0.28 for the dominant m = 1, n = 6 mode. Preliminary two-fluid NIMROD results are also presented. Work supported by US DOE.
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor); Tso, Kam S. (Inventor)
1993-01-01
This invention relates to an operator interface for controlling a telerobot to perform tasks in a poorly modeled environment and/or within unplanned scenarios. The telerobot control system includes a remote robot manipulator linked to an operator interface. The operator interface includes a setup terminal, simulation terminal, and execution terminal for the control of the graphics simulator and local robot actuator as well as the remote robot actuator. These terminals may be combined in a single terminal. Complex tasks are developed from sequential combinations of parameterized task primitives and recorded teleoperations, and are tested by execution on a graphics simulator and/or local robot actuator, together with adjustable time delays. The novel features of this invention include the shared and supervisory control of the remote robot manipulator via operator interface by pretested complex tasks sequences based on sequences of parameterized task primitives combined with further teleoperation and run-time binding of parameters based on task context.
Di Marino, Daniele; Oteri, Francesco; della Rocca, Blasco Morozzo; D'Annessa, Ilda; Falconi, Mattia
2012-06-01
The mitochondrial adenosine diphosphate/adenosine triphosphate (ADP/ATP) carrier (AAC) was crystallized in complex with its specific inhibitor carboxyatractyloside (CATR). The protein consists of a six-transmembrane helix bundle that defines the nucleotide translocation pathway, which is closed towards the matrix side due to sharp kinks in the odd-numbered helices. In this paper, we describe the interaction between the matrix side of the AAC transporter and the ATP(4-) molecule using carrier structures obtained through classical molecular dynamics simulation (MD) and a protein-ligand docking procedure. Fifteen structures were extracted from a previously published MD trajectory through clustering analysis, and 50 docking runs were carried out for each carrier conformation, for a total of 750 runs ("MD docking"). The results were compared to those from 750 docking runs performed on the X-ray structure ("X docking"). The docking procedure indicated the presence of a single interaction site in the X-ray structure that was conserved in the structures extracted from the MD trajectory. MD docking showed the presence of a second binding site that was not found in the X docking. The interaction strategy between the AAC transporter and the ATP(4-) molecule was analyzed by investigating the composition and 3D arrangement of the interaction pockets, together with the orientations of the substrate inside them. A relationship between sequence repeats and the ATP(4-) binding sites in the AAC carrier structure is proposed.
Accuracy of the lattice-Boltzmann method using the Cell processor
NASA Astrophysics Data System (ADS)
Harvey, M. J.; de Fabritiis, G.; Giupponi, G.
2008-11-01
Accelerator processors like the new Cell processor are extending the traditional platforms for scientific computation, allowing orders of magnitude more floating-point operations per second (flops) compared to standard central processing units. However, they currently lack double-precision support and support for some IEEE 754 capabilities. In this work, we develop a lattice-Boltzmann (LB) code to run on the Cell processor and test the accuracy of this lattice method on this platform. We run tests for different flow topologies, boundary conditions, and Reynolds numbers in the range Re = 6-350. In one case, simulation results show a reduced mass and momentum conservation compared to an equivalent double-precision LB implementation. All other cases demonstrate the utility of the Cell processor for fluid dynamics simulations. Benchmarks on two Cell-based platforms are performed, the Sony Playstation3 and the QS20/QS21 IBM blade, obtaining a speed-up factor of 7 and 21, respectively, compared to the original PC version of the code, and a conservative sustained performance of 28 gigaflops per single Cell processor. Our results suggest that choice of IEEE 754 rounding mode is possibly as important as double-precision support for this specific scientific application.
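The single- versus double-precision conservation issue raised above can be reproduced in miniature with a periodic D2Q9 BGK lattice-Boltzmann relaxation and a check of total mass drift. This is a toy NumPy illustration of the method, not the authors' Cell implementation; the grid size, relaxation time, and initial shear wave are arbitrary choices.

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = [(0,0),(1,0),(0,1),(-1,0),(0,-1),(1,1),(-1,1),(-1,-1),(1,-1)]
w = [4/9] + [1/9]*4 + [1/36]*4

def equilibrium(rho, ux, uy):
    """BGK equilibrium distributions for the D2Q9 lattice."""
    usq = ux*ux + uy*uy
    return np.stack([w[i]*rho*(1 + 3*(c[i][0]*ux + c[i][1]*uy)
                               + 4.5*(c[i][0]*ux + c[i][1]*uy)**2 - 1.5*usq)
                     for i in range(9)])

def mass_drift(dtype, nx=64, ny=64, tau=0.8, steps=2000):
    """Relax a small shear wave on a periodic grid; return the relative drift of total mass."""
    x = np.arange(nx, dtype=dtype)
    ux = np.zeros((nx, ny), dtype=dtype)
    uy = np.repeat((0.05*np.sin(2*np.pi*x/nx))[:, None], ny, axis=1).astype(dtype)
    rho = np.ones((nx, ny), dtype=dtype)
    f = equilibrium(rho, ux, uy).astype(dtype)
    mass0 = f.sum()
    for _ in range(steps):
        for i in range(9):                                 # streaming (periodic boundaries)
            f[i] = np.roll(np.roll(f[i], c[i][0], axis=0), c[i][1], axis=1)
        rho = f.sum(axis=0)                                # macroscopic moments
        ux = sum(f[i]*c[i][0] for i in range(9)) / rho
        uy = sum(f[i]*c[i][1] for i in range(9)) / rho
        f += (equilibrium(rho, ux, uy).astype(dtype) - f) / dtype(tau)   # BGK collision
    return abs(f.sum() - mass0) / mass0

print("float32 relative mass drift:", mass_drift(np.float32))
print("float64 relative mass drift:", mass_drift(np.float64))
```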
NASA Technical Reports Server (NTRS)
Caldwell, E. C.; Cowley, M. S.; Scott-Pandorf, M. M.
2010-01-01
Develop a model that simulates a human running in 0 G using the European Space Agency's (ESA) Subject Loading System (SLS). The model provides ground reaction forces (GRF) based on speed and pull-down forces (PDF). DESIGN: The theoretical basis for the Running Model was a simple spring-mass model. The dynamic properties of the spring-mass model express theoretical vertical GRF (GRFv) and shear GRF in the posterior-anterior direction (GRFsh) during running gait. ADAMS View software was used to build the model, which has a pelvis, thigh segment, shank segment, and a spring foot (see Figure 1). The model's movement simulates the joint kinematics of a human running at Earth gravity with the aim of generating GRF data. DEVELOPMENT & VERIFICATION: ESA provided parabolic flight data of subjects running while using the SLS, for further characterization of the model's GRF. Peak GRF data were fit to a linear regression line dependent on PDF and speed. Interpolation and extrapolation of the regression equation provided a theoretical data matrix, which is used to drive the model's motion equations. Verification of the model was conducted by running the model at 4 different speeds, with each speed accounting for 3 different PDF. The model's GRF data fell within a 1-standard-deviation boundary derived from the empirical ESA data. CONCLUSION: The Running Model aids in conducting various simulations (potential scenarios include a fatigued runner or a powerful runner generating high loads at a fast cadence) to determine limitations for the T2 vibration isolation system (VIS) aboard the International Space Station. This model can predict how running with the ESA SLS affects the T2 VIS and may be used for other exercise analyses in the future.
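The spring-mass basis described under DESIGN can be sketched directly: the fragment below integrates a single stance phase of a planar spring-mass runner and reports peak vertical and shear GRF. The mass, stiffness, touchdown angle, and the reduced effective gravity standing in for the pull-down force are all illustrative assumptions, not values from the ESA data or the ADAMS model.

```python
import numpy as np

def stance_grf(m=70.0, k=20000.0, L0=1.0, g_eff=3.0, v0=(3.0, -0.5),
               theta0=np.deg2rad(68), dt=1e-4):
    """One stance phase of a planar spring-mass runner; returns time, vertical GRF, shear GRF.
    g_eff stands in for the pull-down force divided by body mass (an assumption, not ESA data)."""
    x, y = -L0*np.cos(theta0), L0*np.sin(theta0)   # CoM position relative to the foot contact point
    vx, vy = v0
    t, ts, grfv, grfsh = 0.0, [], [], []
    while True:
        L = np.hypot(x, y)
        if L >= L0 and t > 0.0:                    # leg back at its rest length: toe-off
            break
        F = k * (L0 - L)                           # spring force along the leg
        fx, fy = F * x / L, F * y / L              # shear (x) and vertical (y) GRF components
        ts.append(t); grfv.append(fy); grfsh.append(fx)
        ax, ay = fx / m, fy / m - g_eff            # CoM equations of motion
        vx += ax * dt; vy += ay * dt               # explicit Euler step
        x += vx * dt;  y += vy * dt
        t += dt
    return np.array(ts), np.array(grfv), np.array(grfsh)

t, grfv, grfsh = stance_grf()
print(f"peak GRFv = {grfv.max():.0f} N, peak |GRFsh| = {np.abs(grfsh).max():.0f} N, "
      f"contact time = {1000*t[-1]:.0f} ms")
```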
Fast and efficient compression of floating-point data.
Lindstrom, Peter; Isenburg, Martin
2006-01-01
Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
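As a toy illustration of the predict-then-encode-residuals idea (not the authors' coder, and with no entropy-coding back end), the sketch below predicts each double from its predecessor, XORs the IEEE 754 bit patterns, and counts leading zero bytes in the residuals; for smooth simulation-like data most high-order residual bytes are zero, which is what an entropy coder would exploit. A little-endian host is assumed.

```python
import numpy as np

def xor_residuals(values):
    """Delta-predict each double from its predecessor and XOR the 64-bit patterns."""
    bits = np.ascontiguousarray(values, dtype=np.float64).view(np.uint64)
    pred = np.concatenate(([np.uint64(0)], bits[:-1]))    # predictor: the previous value (0 for the first)
    return bits ^ pred

def leading_zero_bytes(residuals):
    """Count leading zero bytes per 8-byte residual (a proxy for compressibility)."""
    b = residuals.view(np.uint8).reshape(-1, 8)[:, ::-1]  # most-significant byte first (little-endian host assumed)
    nz = b != 0
    return np.where(nz.any(axis=1), nz.argmax(axis=1), 8)

x = np.cumsum(np.random.default_rng(1).normal(0.0, 1e-3, 10000))  # smooth, simulation-like series
lz = leading_zero_bytes(xor_residuals(x))
print(f"average leading zero bytes per value: {lz.mean():.2f} of 8")
```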
Hard choices in assessing survival past dams — a comparison of single- and paired-release strategies
Zydlewski, Joseph D.; Stich, Daniel S.; Sigourney, Douglas B.
2017-01-01
Mark–recapture models are widely used to estimate survival of salmon smolts migrating past dams. Paired releases have been used to improve estimate accuracy by removing components of mortality not attributable to the dam. This method is accompanied by reduced precision because (i) sample size is reduced relative to a single, large release; and (ii) variance calculations inflate error. We modeled an idealized system with a single dam to assess trade-offs between accuracy and precision and compared methods using root mean squared error (RMSE). Simulations were run under predefined conditions (dam mortality, background mortality, detection probability, and sample size) to determine scenarios when the paired release was preferable to a single release. We demonstrate that a paired-release design provides a theoretical advantage over a single-release design only at large sample sizes and high probabilities of detection. At release numbers typical of many survival studies, paired release can result in overestimation of dam survival. Failures to meet model assumptions of a paired release may result in further overestimation of dam-related survival. Under most conditions, a single-release strategy was preferable.
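The accuracy-versus-precision trade-off can be reproduced in a highly idealized Monte Carlo, with detection folded into survival and no reach structure, so this is only the skeleton of such a comparison rather than the authors' mark-recapture formulation; the survival values are made up for illustration.

```python
import numpy as np

def compare_designs(n, s_dam=0.90, s_bg=0.98, reps=20000, seed=2):
    """RMSE of single- vs paired-release estimators of dam passage survival for release size n."""
    rng = np.random.default_rng(seed)
    treat = rng.binomial(n, s_bg * s_dam, reps) / n      # treatment fish: background + dam mortality
    ctrl = rng.binomial(n, s_bg, reps) / n               # control fish: background mortality only
    single = treat                                       # single release confounds the two sources
    paired = treat / np.where(ctrl > 0, ctrl, np.nan)    # paired release: ratio removes the background term
    rmse = lambda est: np.sqrt(np.nanmean((est - s_dam) ** 2))
    return rmse(single), rmse(paired)

for n in (50, 200, 1000, 5000):
    s, p = compare_designs(n)
    print(f"release size {n:5d}:  RMSE single = {s:.3f}   RMSE paired = {p:.3f}")
```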
An Evaluation of the Predictability of Austral Summer Season Precipitation over South America.
NASA Astrophysics Data System (ADS)
Misra, Vasubandhu
2004-03-01
In this study, the predictability of austral summer seasonal precipitation over South America is investigated using a 12-yr set of 3.5-month range (seasonal) and a 17-yr range (continuous multiannual) five-member ensemble integrations of the Center for Ocean Land Atmosphere Studies (COLA) atmospheric general circulation model (AGCM). These integrations were performed with prescribed observed sea surface temperature (SST); therefore, the skill attained represents an estimate of the upper bound of the skill achievable by the COLA AGCM with predicted SST. The seasonal runs outperform the multiannual model integrations in both deterministic and probabilistic skill. The simulation of the January-February-March (JFM) seasonal climatology of precipitation is vastly superior in the seasonal runs except over the Nordeste region, where the multiannual runs show a marginal improvement. The teleconnection of the ensemble mean JFM precipitation over tropical South America with global contemporaneous observed sea surface temperature in the seasonal runs conforms more closely to observations than in the multiannual runs. Both sets of runs clearly beat persistence in predicting the interannual precipitation anomalies over the Amazon River basin, Nordeste, South Atlantic convergence zone, and subtropical South America. However, both types of runs display poorer simulations over subtropical regions than over the tropical areas of South America. The examination of the probabilistic skill of precipitation supports the conclusion from the deterministic skill analysis that the seasonal runs yield superior simulations to the multiannual-type runs.
A Process for the Creation of T-MATS Propulsion System Models from NPSS data
NASA Technical Reports Server (NTRS)
Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.; Guo, Ten-Huei
2014-01-01
A modular thermodynamic simulation package called the Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS) has been developed for the creation of dynamic simulations. The T-MATS software is designed as a plug-in for Simulink (Math Works, Inc.) and allows a developer to create system simulations of thermodynamic plants (such as gas turbines) and controllers in a single tool. Creation of such simulations can be accomplished by matching data from actual systems, or by matching data from steady state models and inserting appropriate dynamics, such as the rotor and actuator dynamics for an aircraft engine. This paper summarizes the process for creating T-MATS turbo-machinery simulations using data and input files obtained from a steady state model created in the Numerical Propulsion System Simulation (NPSS). The NPSS is a thermodynamic simulation environment that is commonly used for steady state gas turbine performance analysis. Completion of all the steps involved in the process results in a good match between T-MATS and NPSS at several steady state operating points. Additionally, the T-MATS model extended to run dynamically provides the possibility of simulating and evaluating closed loop responses.
Simulating three dimensional wave run-up over breakwaters covered by antifer units
NASA Astrophysics Data System (ADS)
Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader
2014-06-01
The paper presents a numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units, using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three-dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble-mound breakwaters. The results also showed that the placement pattern of the antifer units had a great impact on wave run-up values; changing the placement pattern from regular to double pyramid reduced the wave run-up by approximately 30%. Analysis was carried out to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer, and the reduction of wave run-up due to inflow into the armour and stone layers.
VizieR Online Data Catalog: Horizon MareNostrum cosmological run (Gay+, 2010)
NASA Astrophysics Data System (ADS)
Gay, C.; Pichon, C.; Le Borgne, D.; Teyssier, R.; Sousbie, T.; Devriendt, J.
2010-11-01
The correlation between the large-scale distribution of galaxies and their spectroscopic properties at z=1.5 is investigated using the Horizon MareNostrum cosmological run. We have extracted a large sample of 10^5 galaxies from this large hydrodynamical simulation featuring standard galaxy formation physics. Spectral synthesis is applied to these single stellar populations to generate spectra and colours for all galaxies. We use the skeleton as a tracer of the cosmic web and study how our galaxy catalogue depends on the distance to the skeleton. We show that galaxies closer to the skeleton tend to be redder, but that the effect is mostly due to the proximity of large haloes at the nodes of the skeleton, rather than the filaments themselves. The virtual catalogues (spectroscopic properties of the MareNostrum galaxies at various redshifts) are available online at http://www.iap.fr/users/pichon/MareNostrum/catalogues. (7 data files).
Interactive computer graphics applications for compressible aerodynamics
NASA Technical Reports Server (NTRS)
Benson, Thomas J.
1994-01-01
Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
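The first application's core, isentropic and normal-shock relations for a perfect gas, is standard textbook material; a minimal sketch of the normal-shock part (not the original GL-based workstation code):

```python
def normal_shock(M1, gamma=1.4):
    """Perfect-gas normal shock relations: downstream Mach number and static ratios."""
    if M1 <= 1.0:
        raise ValueError("normal shock requires a supersonic upstream Mach number")
    M2 = ((1 + 0.5*(gamma - 1)*M1**2) / (gamma*M1**2 - 0.5*(gamma - 1))) ** 0.5
    p2_p1 = 1 + 2*gamma/(gamma + 1) * (M1**2 - 1)
    rho2_rho1 = (gamma + 1)*M1**2 / ((gamma - 1)*M1**2 + 2)
    T2_T1 = p2_p1 / rho2_rho1
    return M2, p2_p1, rho2_rho1, T2_T1

M2, p, r, T = normal_shock(2.0)
print(f"M2={M2:.3f}  p2/p1={p:.3f}  rho2/rho1={r:.3f}  T2/T1={T:.3f}")
```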
International Oil Supplies and Demands. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-09-01
The eleventh Energy Modeling Forum (EMF) working group met four times over the 1989-90 period to compare alternative perspectives on international oil supplies and demands through 2010 and to discuss how alternative supply and demand trends influence the world's dependence upon Middle Eastern oil. Proprietors of eleven economic models of the world oil market used their respective models to simulate a dozen scenarios using standardized assumptions. From its inception, the study was not designed to focus on the short-run impacts of disruptions on oil markets. Nor did the working group attempt to provide a forecast or just a single view of the likely future path for oil prices. The model results guided the group's thinking about many important longer-run market relationships and helped to identify differences of opinion about future oil supplies, demands, and dependence.
International Oil Supplies and Demands. Volume 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-04-01
The eleventh Energy Modeling Forum (EMF) working group met four times over the 1989-1990 period to compare alternative perspectives on international oil supplies and demands through 2010 and to discuss how alternative supply and demand trends influence the world's dependence upon Middle Eastern oil. Proprietors of eleven economic models of the world oil market used their respective models to simulate a dozen scenarios using standardized assumptions. From its inception, the study was not designed to focus on the short-run impacts of disruptions on oil markets. Nor did the working group attempt to provide a forecast or just a single view of the likely future path for oil prices. The model results guided the group's thinking about many important longer-run market relationships and helped to identify differences of opinion about future oil supplies, demands, and dependence.
GPU accelerated Monte-Carlo simulation of SEM images for metrology
NASA Astrophysics Data System (ADS)
Verduin, T.; Lokhorst, S. R.; Hagen, C. W.
2016-03-01
In this work we address the computation times of numerical studies in dimensional metrology. In particular, full Monte-Carlo simulation programs for scanning electron microscopy (SEM) image acquisition are known to be notoriously slow. Our quest in reducing the computation time of SEM image simulation has led us to investigate the use of graphics processing units (GPUs) for metrology. We have succeeded in creating a full Monte-Carlo simulation program for SEM images, which runs entirely on a GPU. The physical scattering models of this GPU simulator are identical to a previous CPU-based simulator, which includes the dielectric function model for inelastic scattering and also refinements for low-voltage SEM applications. As a case study for the performance, we considered the simulated exposure of a complex feature: an isolated silicon line with rough sidewalls located on a flat silicon substrate. The surface of the rough feature is decomposed into 408 012 triangles. We have used an exposure dose of 6 mC/cm2, which corresponds to 6 553 600 primary electrons on average (Poisson distributed). We repeat the simulation for various primary electron energies: 300 eV, 500 eV, 800 eV, 1 keV, 3 keV and 5 keV. At first we run the simulation on a GeForce GTX480 from NVIDIA. The very same simulation is duplicated on our CPU-based program, for which we have used an Intel Xeon X5650. Apart from statistical noise in the simulation, no difference is found between the CPU and GPU simulated results. The GTX480 generates the images (depending on the primary electron energy) 350 to 425 times faster than a single-threaded Intel X5650 CPU. Although this is a tremendous speedup, we actually have not reached the maximum throughput because of the limited amount of available memory on the GTX480. Nevertheless, the speedup enables the fast acquisition of simulated SEM images for metrology. We now have the potential to investigate case studies in CD-SEM metrology, which otherwise would take unreasonable amounts of computation time.
NASA Astrophysics Data System (ADS)
Liu, Fei; Zhao, Jiuwei; Fu, Xiouhua; Huang, Gang
2018-02-01
By conducting idealized experiments in a general circulation model (GCM) and in a toy theoretical model, we test the hypothesis that shallow convection (SC) is responsible for explaining why the boreal summer intraseasonal oscillation (BSISO) prefers propagating northward. Two simulations are performed using ECHAM4, with the control run using a standard detrainment rate of SC and the sensitivity run turning off the detrainment rate of SC. These two simulations display dramatically different BSISO characteristics. The control run simulates the realistic northward propagation (NP) of the BSISO, while the sensitivity run with little SC only simulates stationary signals. In the sensitivity run, the meridional asymmetries of the vorticity and humidity fields are simulated under the monsoon vertical wind shear (VWS); thus, frictional convergence can be excited to the north of the BSISO. However, the lack of SC makes the lower and middle troposphere very dry, which prohibits further development of deeper convection. A theoretical BSISO model is also constructed, and the result shows that SC is key to conveying the asymmetric vorticity effect that induces the BSISO to move northward. Thus, both the GCM and theoretical model results demonstrate the importance of SC in promoting the NP of the BSISO.
Electron Thermalization in the Solar Wind and Planetary Plasma Boundaries
NASA Technical Reports Server (NTRS)
Krauss-Varban, Dietmar
1998-01-01
The work carried out under this contract attempts a better understanding of whistler wave generation and associated scattering of electrons in the solar wind. This task is accomplished through simulations using a particle-in-cell code and a Vlasov code. In addition, the work is supported by the utilization of a linear kinetic dispersion solver. Previously, we concentrated on gaining a better understanding of the linear mode properties, and tested the simulation codes within a known parameter regime. We are now in a new phase in which we implement, execute, and analyze production simulations. This phase is projected to last over several reporting periods, with this being the second cycle. In addition, we have begun to investigate to what extent the evolution of the pertinent instabilities is two-dimensional. We are also continuing our work on the visualization aspects of the simulation results, and on a code version that runs on single-user Alpha-processor based workstations.
Simulating Sources of Superstorm Plasmas
NASA Technical Reports Server (NTRS)
Fok, Mei-Ching
2008-01-01
We evaluated the contributions to magnetospheric pressure (ring current) of the solar wind, polar wind, auroral wind, and plasmaspheric wind, with the surprising result that the main phase pressure is dominated by plasmaspheric protons. We used global simulation fields from the LFM single fluid ideal MHD model. We embedded the Comprehensive Ring Current Model within it, driven by the LFM transpolar potential, and supplied with plasmas at its boundary including solar wind protons, polar wind protons, auroral wind O+, and plasmaspheric protons. We included auroral outflows and acceleration driven by the LFM ionospheric boundary condition, including parallel ion acceleration driven by upward currents. Our plasmasphere model runs within the CRCM and is driven by it. Ionospheric sources were treated using our Global Ion Kinetics code based on full equations of motion. This treatment neglects inertial loading and pressure exerted by the ionospheric plasmas, and will be superseded by multifluid simulations that include those effects. However, these simulations provide new insights into the respective roles of ionospheric sources in storm-time magnetospheric dynamics.
NASA Astrophysics Data System (ADS)
Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang; Ho, Kai-Ming; Travesset, Alex
2018-04-01
We present an implementation of the EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software package designed to perform classical molecular dynamics simulations using GPU accelerations. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating point operations efficiently implemented on GPUs can produce sufficient accuracy when compared against double-precision codes, as demonstrated in test calculations of the glass-transition temperature of Cu64.5Zr35.5 and of the pair correlation function g(r) of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with another popular package, LAMMPS, running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. The source code can be accessed through the HOOMD-blue web page for free by any interested user.
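The pair correlation function used above as an accuracy check can be computed from particle coordinates with a simple histogram; the sketch below is plain NumPy for a cubic periodic box (an illustration of the quantity, not the HOOMD-blue or LAMMPS implementation).

```python
import numpy as np

def pair_correlation(pos, box, r_max, nbins=100):
    """g(r) for particles in a cubic periodic box of side `box` (minimum-image convention)."""
    n = len(pos)
    dr = pos[:, None, :] - pos[None, :, :]
    dr -= box * np.round(dr / box)                          # minimum image
    dist = np.sqrt((dr**2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(dist, bins=nbins, range=(0.0, r_max))
    rho = n / box**3
    shell = 4/3 * np.pi * (edges[1:]**3 - edges[:-1]**3)    # shell volumes
    g = hist / (0.5 * n * rho * shell)                      # normalize by ~n(n-1)/2 ideal-gas pairs per shell
    r = 0.5 * (edges[1:] + edges[:-1])
    return r, g

# Example: an ideal (random) configuration should give g(r) close to 1
rng = np.random.default_rng(3)
box = 10.0
pos = rng.uniform(0, box, size=(500, 3))
r, g = pair_correlation(pos, box, r_max=box/2)
print(g[10:20].round(2))
```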
2014-10-07
[Report fragment: per the TDTC, a test bridge with longitudinal and/or lateral symmetry under non-eccentric loading can be counted as 1, 2, or 4 crossings. Table fragment listing test runs by run number, load level, vehicle class (MLC70T tracked, MLC96W wheeled), and crossing direction (AB/BA); bank condition: side slope, even; strain channels: high.]
Implementation of an open-scenario, long-term space debris simulation approach
NASA Astrophysics Data System (ADS)
Stupl, J.; Nelson, B.; Faber, N.; Perez, A.; Carlino, R.; Yang, F.; Henze, C.; Karacalioglu, A.; O'Toole, C.; Swenson, J.
This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris-remediation, including the LightForce space debris collision avoidance scheme. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps in the order of several (5-15) days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance schemes or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions, space object parameters and orbital parameters of the conjunctions and take place in much smaller timeframes than 5-15 days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision), but it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods in regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in LEO, propagates all objects with high precision, and advances with variable-sized time-steps as small as one second. It allows the assessment of the (potential) impact of changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time-frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space-track object catalog in LEO. We then use a high precision propagator to propagate all objects over the entire simulation duration. If collisions are detected, the appropriate number of debris objects are created and inserted into the simulation framework. Depending on the scenario, further objects, e.g. due to new launches, can be added. At the end of the simulation, the total number of objects above a cut-off size and the number of detected collisions provide benchmark parameters for the comparison between scenarios. The simulation approach is computationally intensive as it involves ten thousands of objects; hence we use a highly parallel approach employing up to a thousand cores on the NASA Pleiades supercomputer for a single run. This paper describes our simulation approach, the status of its implementation, the approach in developing scenarios and examples of first test runs.
Implementation of an Open-Scenario, Long-Term Space Debris Simulation Approach
NASA Technical Reports Server (NTRS)
Nelson, Bron; Yang Yang, Fan; Carlino, Roberto; Dono Perez, Andres; Faber, Nicolas; Henze, Chris; Karacalioglu, Arif Goktug; O'Toole, Conor; Swenson, Jason; Stupl, Jan
2015-01-01
This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris-remediation, including the LightForce space debris collision avoidance concept that diverts objects using photon pressure [9]. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps on the order of several days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance schemes or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions and orbital and physical parameters of the objects involved in close encounters (conjunctions). Furthermore, maneuvers take place on timescales much smaller than days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision), but it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods in regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in Low Earth Orbit (LEO) and propagates all objects with high precision and variable time-steps as small as one second. It allows the assessment of the (potential) impact of physical or orbital changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time-frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space-track object catalog in LEO. We then use a high precision propagator to propagate all objects over the entire simulation duration. If collisions are detected, the appropriate number of debris objects are created and inserted into the simulation framework. Depending on the scenario, further objects, e.g. due to new launches, can be added. At the end of the simulation, the total number of objects above a cut-off size and the number of detected collisions provide benchmark parameters for the comparison between scenarios. The simulation approach is computationally intensive as it involves tens of thousands of objects; hence we use a highly parallel approach employing up to a thousand cores on the NASA Pleiades supercomputer for a single run. This paper describes our simulation approach, the status of its implementation, the approach to developing scenarios and examples of first test runs.
Martin W. Ritchie; Robert F. Powers
1993-01-01
SYSTUM-1 is an individual-tree/distance-independent simulator developed for use in young plantations in California and southern Oregon. The program was developed to run under the DOS operating system and requires DOS 3.0 or higher running on an 8086 or higher processor. The simulator is designed to provide a link with existing PC-based simulators (CACTOS and ORGANON)...
Measurement and Modeling of Fugitive Dust from Off Road DoD Activities
2017-12-08
[Report fragment: caption text noting each soil and vehicle type (see Table 2), that no tracked vehicles were run at YTC, that CT is the curve track sampling location and CR the curve ridge, and that the soil is SL (sandy loam); list-of-figures entries for Figure 35, Single-event Wind Erosion Evaluation Program (SWEEP) Run example results, and Figure 36, SWEEP Threshold Run example results screen.]
Simple Queueing Model Applied to the City of Portland
NASA Astrophysics Data System (ADS)
Simon, Patrice M.; Esser, Jörg; Nagel, Kai
We use a simple traffic micro-simulation model based on queueing dynamics as introduced by Gawron [IJMPC, 9(3):393, 1998] in order to simulate traffic in Portland/Oregon. Links have a flow capacity, that is, they do not release more vehicles per second than is possible according to their capacity. This leads to queue build-up if demand exceeds capacity. Links also have a storage capacity, which means that once a link is full, vehicles that want to enter the link need to wait. This leads to queue spill-back through the network. The model is compatible with route-plan-based approaches such as TRANSIMS, where each vehicle attempts to follow its pre-computed path. Yet, both the data requirements and the computational requirements are considerably lower than for the full TRANSIMS microsimulation. Indeed, the model uses standard emme/2 network data, and runs about eight times faster than real time with more than 100 000 vehicles simultaneously in the simulation on a single Pentium-type CPU. We derive the model's fundamental diagrams and explain them. The model is used to simulate traffic on the emme/2 network of the Portland (Oregon) metropolitan region (20 000 links). Demand is generated by a simplified home-to-work destination assignment which generates about half a million trips for the morning peak. Route assignment is done by iterative feedback between micro-simulation and router. An iterative solution of the route assignment for the above problem can be achieved within about half a day of computing time on a desktop workstation. We compare results with field data and with results of traditional assignment runs by the Portland Metropolitan Planning Organization. Thus, with a model such as this one, it is possible to use a dynamic, activities-based approach to transportation simulation (such as in TRANSIMS) with affordable data and hardware. This should enable systematic research about the coupling of demand generation, route assignment, and micro-simulation output.
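The queue dynamics described (flow capacity limiting releases, storage capacity causing spill-back) can be captured in a few lines; the following time-stepped sketch of a two-link corridor illustrates the mechanism and is not the emme/2 or TRANSIMS implementation; the capacities and demand are arbitrary example values.

```python
from collections import deque

class Link:
    def __init__(self, flow_cap, storage_cap):
        self.flow_cap = flow_cap          # max vehicles released per time step
        self.storage_cap = storage_cap    # max vehicles the link can hold
        self.queue = deque()              # vehicles on the link (FIFO)

    def space(self):
        return self.storage_cap - len(self.queue)

    def accept(self, vehicles):
        self.queue.extend(vehicles)

    def release(self, downstream_space):
        """Move up to flow_cap vehicles, but never more than downstream can store (spill-back)."""
        n = min(self.flow_cap, downstream_space, len(self.queue))
        return [self.queue.popleft() for _ in range(n)]

# Two links in series; 8 vehicles/step of demand enter link A, link B is the bottleneck
a = Link(flow_cap=10, storage_cap=50)
b = Link(flow_cap=4, storage_cap=20)
for t in range(30):
    b.release(downstream_space=10**9)                 # vehicles leaving the network are unconstrained
    b.accept(a.release(downstream_space=b.space()))   # A -> B transfer, limited by B's remaining storage
    a.accept([f"veh{t}_{i}" for i in range(min(8, a.space()))])   # new demand, limited by A's storage
    print(t, "queue on A:", len(a.queue), "queue on B:", len(b.queue))
```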
Kaneda, Shohei; Ono, Koichi; Fukuba, Tatsuhiro; Nojima, Takahiko; Yamamoto, Takatoki; Fujii, Teruo
2011-01-01
In this paper, a rapid and simple method to determine the optimal temperature conditions for denaturant electrophoresis using a temperature-controlled on-chip capillary electrophoresis (CE) device is presented. Since on-chip CE operations including sample loading, injection and separation are carried out just by switching the electric field, we can repeat consecutive run-to-run CE operations on a single on-chip CE device by programming the voltage sequences. By utilizing the high-speed separation and the repeatability of the on-chip CE, a series of electrophoretic operations with different running temperatures can be implemented. Using separations of reaction products of single-stranded DNA (ssDNA) with a peptide nucleic acid (PNA) oligomer, the effectiveness of the presented method to determine the optimal temperature conditions required to discriminate a single-base substitution (SBS) between two different ssDNAs is demonstrated. It is shown that a single run for one temperature condition can be executed within 4 min, and the optimal temperature to discriminate the SBS could be successfully found using the present method. PMID:21845077
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Gao, M
Purpose: Monte Carlo simulation plays an important role for the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4 based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bits, Amazon EC2). Single spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of the StarCluster software developed at MIT, a Linux cluster with 2-100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm^2, 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created with an on-demand instance and worker nodes were created with spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy to maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.
NASA One-Dimensional Combustor Simulation--User Manual for S1D_ML
NASA Technical Reports Server (NTRS)
Stueber, Thomas J.; Paxson, Daniel E.
2014-01-01
The work presented in this paper promotes research leading to a closed-loop control system to actively suppress thermo-acoustic instabilities. To serve as a model for such a closed-loop control system, a one-dimensional combustor simulation has been written using MATLAB software tools. This MATLAB based process is similar to a precursor one-dimensional combustor simulation that was formatted as FORTRAN 77 source code. The previous simulation process requires modification of the FORTRAN 77 source code, compiling, and linking when creating a new combustor simulation executable file. The MATLAB based simulation does not require making changes to the source code, recompiling, or linking. Furthermore, the MATLAB based simulation can be run from script files within the MATLAB environment or with a compiled copy of the executable file running in the Command Prompt window without requiring a licensed copy of MATLAB. This report presents a general simulation overview. Details regarding how to set up and initiate a simulation are also presented. Finally, the post-processing section describes the two types of files created while running the simulation, and it also includes simulation results for a default simulation included with the source code.
Single Common Powertrain Lubricant Development
2012-01-01
[Report front-matter fragment: section 2.2, Engine Durability Testing; Figure 1, General Engine Products 6.5L(T) Test Cell Installation. Table fragment, engine oil consumption [lb/hr]: Repeatability Run 1 = 0.061, Repeatability Run 2 = 0.082, Repeatability Run 3 = 0.086, 3-run average = 0.076.]
Simulation of a master-slave event set processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comfort, J.C.
1984-03-01
Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer, in which all non-event set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed, and was run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant parameters to the system (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulation run times for a one-processor system was used to assist in the validation of the simulation. 7 references.
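The event-set abstraction at the heart of this design (insert future events, repeatedly remove the most imminent one) is a priority queue; the serial sketch below shows the operations that the master and slave processors would service, using Python's heapq and hypothetical event names rather than the paper's trace-driven workload.

```python
import heapq
import itertools

class EventSet:
    """Pending-event set: the structure whose insert/remove traffic is offloaded in the paper."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()    # tie-breaker so equal-time events stay FIFO

    def schedule(self, time, event):
        heapq.heappush(self._heap, (time, next(self._counter), event))

    def next_event(self):
        time, _, event = heapq.heappop(self._heap)
        return time, event

    def __len__(self):
        return len(self._heap)

# Drive a toy simulation loop from the event set
es = EventSet()
es.schedule(0.0, "arrival")
clock = 0.0
while es and clock < 10.0:
    clock, ev = es.next_event()
    if ev == "arrival":                       # each arrival schedules a departure and the next arrival
        es.schedule(clock + 1.3, "departure")
        es.schedule(clock + 0.9, "arrival")
    print(f"t={clock:4.1f}  {ev}")
```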
NASA Astrophysics Data System (ADS)
Kemp, E. M.; Putman, W. M.; Gurganus, J.; Burns, R. W.; Damon, M. R.; McConaughy, G. R.; Seablom, M. S.; Wojcik, G. S.
2009-12-01
We present a regional downscaling system (RDS) suitable for high-resolution weather and climate simulations in multiple supercomputing environments. The RDS is built on the NASA Workflow Tool, a software framework for configuring, running, and managing computer models on multiple platforms with a graphical user interface. The Workflow Tool is used to run the NASA Goddard Earth Observing System Model Version 5 (GEOS-5), a global atmospheric-ocean model for weather and climate simulations down to 1/4 degree resolution; the NASA Land Information System Version 6 (LIS-6), a land surface modeling system that can simulate soil temperature and moisture profiles; and the Weather Research and Forecasting (WRF) community model, a limited-area atmospheric model for weather and climate simulations down to 1-km resolution. The Workflow Tool allows users to customize model settings to user needs; saves and organizes simulation experiments; distributes model runs across different computer clusters (e.g., the DISCOVER cluster at Goddard Space Flight Center, the Cray CX-1 Desktop Supercomputer, etc.); and handles all file transfers and network communications (e.g., scp connections). Together, the RDS is intended to aid researchers by making simulations as easy as possible to generate on the computer resources available. Initial conditions for LIS-6 and GEOS-5 are provided by Modern Era Retrospective-Analysis for Research and Applications (MERRA) reanalysis data stored on DISCOVER. The LIS-6 is first run for 2-4 years forced by MERRA atmospheric analyses, generating initial conditions for the WRF soil physics. GEOS-5 is then initialized from MERRA data and run for the period of interest. Large-scale atmospheric data, sea-surface temperatures, and sea ice coverage from GEOS-5 are used as boundary conditions for WRF, which is run for the same period of interest. Multiply nested grids are used for both LIS-6 and WRF, with the innermost grid run at a resolution sufficient for typical local weather features (terrain, convection, etc.). All model runs, restarts, and file transfers are coordinated by the Workflow Tool. Two use cases are being pursued. First, the RDS generates regional climate simulations down to 4 km for the Chesapeake Bay region, with WRF output provided as input to more specialized models (e.g., ocean/lake, hydrological, marine biology, and air pollution). This will allow assessment of climate impact on local interests (e.g., changes in Bay water levels and temperatures, inundation, fish kills, etc.). Second, the RDS generates high-resolution hurricane simulations in the tropical North Atlantic. This use case will support Observing System Simulation Experiments (OSSEs) of dynamically-targeted lidar observations as part of the NASA Sensor Web Simulator project. Sample results will be presented at the AGU Fall Meeting.
NASA Astrophysics Data System (ADS)
Rohmer, Jeremy; Idier, Deborah; Bulteau, Thomas; Paris, François
2016-04-01
From a risk management perspective, it can be of high interest to identify the critical set of offshore conditions that lead to inundation of key assets for the studied territory (e.g., assembly points, evacuation routes, hospitals, etc.). This inverse approach to risk assessment (Idier et al., NHESS, 2013) can be of primary importance either for the estimation of the coastal flood hazard return period or for constraining early warning networks based on hydro-meteorological forecasts or observations. However, full process-based models for coastal flooding simulation have a very large computational cost (typically several hours per run), which often limits the analysis to a few scenarios. Recently, it has been shown that meta-modelling approaches can efficiently handle this difficulty (e.g., Rohmer & Idier, NHESS, 2012). Yet, the full process-based models are expected to present strong non-linearities (non-regularities) or shocks (discontinuities), i.e. dynamics controlled by thresholds. For instance, in the case of a coastal defense, the dynamics is characterized first by a linear behavior of the waterline position (increasing with increasing offshore conditions) as long as there is no overtopping, and then by a very strong increase (as soon as the offshore conditions are energetic enough to lead to wave overtopping, and then overflow). Such behavior might make the training phase of the meta-model very tedious. In the present study, we explore the feasibility of active learning techniques, also known as semi-supervised machine learning, to track the set of critical conditions with a reduced number of long-running simulations. The basic idea relies on identifying the simulation scenarios which should both reduce the meta-model error and improve the prediction of the critical contour of interest. To overcome the afore-described difficulty related to non-regularity, we rely on Support Vector Machines, which have shown very high performance for structural reliability assessment. The developments are done on a cross-shore case, using the process-based SWASH model. The related computational time is 10 hours for a single run. The dynamic forcing conditions are parametrized by several factors (storm surge S, significant wave height Hs, dephasing between tide and surge, etc.). In particular, we validated the approach with respect to a reference set of 400 long-running simulations in the (S; Hs) domain. Our tests showed that tracking of the critical contour can be achieved with a reasonable number of long-running simulations, on the order of a few tens.
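A skeletal version of the idea with scikit-learn is shown below, where the hours-long SWASH run is replaced by a cheap stand-in threshold function of (S, Hs); the sampling rule (query the candidate closest to the current SVM decision boundary) is one common active-learning heuristic and not necessarily the exact criterion used in the study, and the simulator, ranges, and budget are all assumptions made for the example.

```python
import numpy as np
from sklearn.svm import SVC

def expensive_simulator(s, hs):
    """Stand-in for a 10-hour SWASH run: returns 1 if the offshore condition floods the site."""
    return int(hs + 0.8 * s + 0.3 * np.sin(2 * s) > 4.0)

rng = np.random.default_rng(0)
candidates = np.column_stack([rng.uniform(0, 3, 2000),    # storm surge S (m)
                              rng.uniform(0, 5, 2000)])   # significant wave height Hs (m)

# Initial design: a handful of space-filling "long" runs
idx = rng.choice(len(candidates), 20, replace=False)
X = candidates[idx]
y = np.array([expensive_simulator(s, hs) for s, hs in X])

svm = SVC(kernel="rbf", C=100.0, gamma="scale")
for it in range(30):                                      # 30 additional "long" runs in total
    svm.fit(X, y)
    margin = np.abs(svm.decision_function(candidates))    # distance to the current critical contour
    new = candidates[np.argmin(margin)]                   # query the most ambiguous offshore condition
    X = np.vstack([X, new])
    y = np.append(y, expensive_simulator(*new))
svm.fit(X, y)

truth = np.array([expensive_simulator(s, hs) for s, hs in candidates])
print("contour classification accuracy:", (svm.predict(candidates) == truth).mean())
```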
Particle-in-cell modeling for MJ scale dense plasma focus with varied anode shape
DOE Office of Scientific and Technical Information (OSTI.GOV)
Link, A., E-mail: link6@llnl.gov; Halvorson, C., E-mail: link6@llnl.gov; Schmidt, A.
2014-12-15
Megajoule scale dense plasma focus (DPF) Z-pinches with deuterium gas fill are compact devices capable of producing 10^12 neutrons per shot, but past predictive models of large-scale DPF have not included kinetic effects such as ion beam formation or anomalous resistivity. We report on progress in developing a predictive DPF model by extending our 2D axisymmetric collisional kinetic particle-in-cell (PIC) simulations from the 4 kJ, 200 kA LLNL DPF to the 1 MJ, 2 MA Gemini DPF using the PIC code LSP. These new simulations incorporate electrodes, an external pulsed-power driver circuit, and model the plasma from insulator lift-off through the pinch phase. To accommodate the vast range of relevant spatial and temporal scales involved in the Gemini DPF within the available computational resources, the simulations were performed using a new hybrid fluid-to-kinetic model. This new approach allows single simulations to begin in an electron/ion fluid mode from insulator lift-off through the 5-6 μs run-down of the 50+ cm anode, then transition to a fully kinetic PIC description during the run-in phase, when the current sheath is 2-3 mm from the central axis of the anode. Simulations are advanced through the final pinch phase using an adaptive variable time-step to capture the fs and sub-mm scales of the kinetic instabilities involved in the ion beam formation and neutron production. Validation assessments are being performed using a variety of different anode shapes, comparing against experimental measurements of neutron yield, neutron anisotropy and ion beam production.
Unsteady, Cooled Turbine Simulation Using a PC-Linux Analysis System
NASA Technical Reports Server (NTRS)
List, Michael G.; Turner, Mark G.; Chen, Jen-Pimg; Remotigue, Michael G.; Veres, Joseph P.
2004-01-01
The first stage of the high-pressure turbine (HPT) of the GE90 engine was simulated with a three-dimensional unsteady Navier-Stokes solver, MSU Turbo, which uses source terms to simulate the cooling flows. In addition to the solver, its pre-processor, GUMBO, and a post-processing and visualization tool, Turbomachinery Visual3 (TV3), were run in a Linux environment to carry out the simulation and analysis. The solver was run both with and without cooling. The introduction of cooling flow on the blade surfaces, case, and hub and its effects on both rotor-vane interaction as well as the effects on the blades themselves were the principal motivations for this study. The studies of the cooling flow show the large amount of unsteadiness in the turbine and the corresponding hot streak migration phenomenon. This research on the GE90 turbomachinery has also led to a procedure for running unsteady, cooled turbine analysis on commodity PCs running the Linux operating system.
Monte Carlo Solution to Find Input Parameters in Systems Design Problems
NASA Astrophysics Data System (ADS)
Arsham, Hossein
2013-06-01
Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, what-if and goal-seeking problems, are explained and defined in an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
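The goal-seeking problem described here, finding the controllable input so that a noisy simulation output hits a target value, is the classic Robbins-Monro stochastic approximation setting; the sketch below uses an analytic stand-in for the simulation model rather than the paper's polynomial response-surface construction, and the gain sequence is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(d):
    """Noisy single-run estimate of the performance measure J(d); a stand-in for a simulation model."""
    return 2.0 * np.sqrt(d) + rng.normal(0.0, 0.05)

def robbins_monro(target, d0=1.0, a=1.5, iters=300):
    """Iterate d <- d - (a/k) * (J(d) - target); assumes J is increasing in the design parameter d."""
    d = d0
    for k in range(1, iters + 1):
        d -= (a / k) * (simulate(d) - target)
        d = max(d, 1e-6)                 # keep the design parameter in its feasible range
    return d

target = 3.0                             # desired performance value, e.g. a target MTTF
d_hat = robbins_monro(target)
print(f"estimated input: {d_hat:.3f}   (exact answer for this stand-in model: {(target / 2) ** 2:.3f})")
```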
NASA Astrophysics Data System (ADS)
He, Xiulan; Sonnenborg, Torben O.; Jørgensen, Flemming; Jensen, Karsten H.
2017-03-01
Stationarity has traditionally been a requirement of geostatistical simulations. A common way to deal with non-stationarity is to divide the system into stationary sub-regions and subsequently merge the realizations for each region. Recently, the so-called partition approach that has the flexibility to model non-stationary systems directly was developed for multiple-point statistics simulation (MPS). The objective of this study is to apply the MPS partition method with conventional borehole logs and high-resolution airborne electromagnetic (AEM) data, for simulation of a real-world non-stationary geological system characterized by a network of connected buried valleys that incise deeply into layered Miocene sediments (case study in Denmark). The results show that, based on fragmented information of the formation boundaries, the MPS partition method is able to simulate a non-stationary system including valley structures embedded in a layered Miocene sequence in a single run. Besides, statistical information retrieved from the AEM data improved the simulation of the geology significantly, especially for the deep-seated buried valley sediments where borehole information is sparse.
Soleimani, Hamid; Drakakis, Emmanuel M
2017-06-01
Recent studies have demonstrated that calcium is a widespread intracellular ion that controls a wide range of temporal dynamics in the mammalian body. The simulation and validation of such studies using experimental data would benefit from a fast large scale simulation and modelling tool. This paper presents a compact and fully reconfigurable cellular calcium model capable of mimicking Hopf bifurcation phenomenon and various nonlinear responses of the biological calcium dynamics. The proposed cellular model is synthesized on a digital platform for a single unit and a network model. Hardware synthesis, physical implementation on FPGA, and theoretical analysis confirm that the proposed cellular model can mimic the biological calcium behaviors with considerably low hardware overhead. The approach has the potential to speed up large-scale simulations of slow intracellular dynamics by sharing more cellular units in real-time. To this end, various networks constructed by pipelining 10 k to 40 k cellular calcium units are compared with an equivalent simulation run on a standard PC workstation. Results show that the cellular hardware model is, on average, 83 times faster than the CPU version.
NASA Astrophysics Data System (ADS)
Brodeck, M.; Alvarez, F.; Arbe, A.; Juranyi, F.; Unruh, T.; Holderer, O.; Colmenero, J.; Richter, D.
2009-03-01
We performed quasielastic neutron scattering experiments and atomistic molecular dynamics simulations on a poly(ethylene oxide) (PEO) homopolymer system above the melting point. The excellent agreement found between both sets of data, together with a successful comparison with literature diffraction results, validates the condensed-phase optimized molecular potentials for atomistic simulation studies (COMPASS) force field used to produce our dynamic runs and gives support to their further analysis. This provided direct information on magnitudes which are not accessible from experiments such as the radial probability distribution functions of specific atoms at different times and their moments. The results of our simulations on the H-motions and different experiments indicate that in the high-temperature range investigated the dynamics is Rouse-like for Q-values below ≈0.6 Å-1. We then addressed the single chain dynamic structure factor with the simulations. A mode analysis, not possible directly experimentally, reveals the limits of applicability of the Rouse model to PEO. We discuss the possible origins for the observed deviations.
Simplified and advanced modelling of traction control systems of heavy-haul locomotives
NASA Astrophysics Data System (ADS)
Spiryagin, Maksym; Wolfs, Peter; Szanto, Frank; Cole, Colin
2015-05-01
Improving tractive effort is a very complex task in locomotive design. It requires the development of not only mechanical systems but also power systems, traction machines and traction algorithms. At the initial design stage, traction algorithms can be verified by means of a simulation approach. A simple single-wheelset simulation approach is not sufficient because it does not fully take the locomotive dynamics into consideration. Given that many traction control strategies exist, the best solution is to use more advanced approaches for such studies. This paper describes the modelling of a locomotive with a bogie traction control strategy based on a co-simulation approach in order to deliver more accurate results. The simplified and advanced modelling approaches of a locomotive electric power system are compared in this paper in order to answer a fundamental question: what level of modelling complexity is necessary for the investigation of the dynamic behaviours of a heavy-haul locomotive running under traction? The simulation results obtained provide some recommendations on simulation processes and the further implementation of advanced and simplified modelling approaches.
High-throughput full-length single-cell mRNA-seq of rare cells.
Ooi, Chin Chun; Mantalas, Gary L; Koh, Winston; Neff, Norma F; Fuchigami, Teruaki; Wong, Dawson J; Wilson, Robert J; Park, Seung-Min; Gambhir, Sanjiv S; Quake, Stephen R; Wang, Shan X
2017-01-01
Single-cell characterization techniques, such as mRNA-seq, have been applied to a diverse range of applications in cancer biology, yielding great insight into mechanisms leading to therapy resistance and tumor clonality. While single-cell techniques can yield a wealth of information, a common bottleneck is the lack of throughput, with many current processing methods being limited to the analysis of small volumes of single cell suspensions with cell densities on the order of 10^7 per mL. In this work, we present a high-throughput full-length mRNA-seq protocol incorporating a magnetic sifter and magnetic nanoparticle-antibody conjugates for rare cell enrichment, and Smart-seq2 chemistry for sequencing. We evaluate the efficiency and quality of this protocol with a simulated circulating tumor cell system, whereby non-small-cell lung cancer cell lines (NCI-H1650 and NCI-H1975) are spiked into whole blood, before being enriched for single-cell mRNA-seq by EpCAM-functionalized magnetic nanoparticles and the magnetic sifter. We obtain high efficiency (> 90%) capture and release of these simulated rare cells via the magnetic sifter, with reproducible transcriptome data. In addition, while mRNA-seq data is typically only used for gene expression analysis of transcriptomic data, we demonstrate the use of full-length mRNA-seq chemistries like Smart-seq2 to facilitate variant analysis of expressed genes. This enables the use of mRNA-seq data for differentiating cells in a heterogeneous population by both their phenotypic and variant profile. In a simulated heterogeneous mixture of circulating tumor cells in whole blood, we utilize this high-throughput protocol to differentiate these heterogeneous cells by both their phenotype (lung cancer versus white blood cells), and mutational profile (H1650 versus H1975 cells), in a single sequencing run. This high-throughput method can help facilitate single-cell analysis of rare cell populations, such as circulating tumor or endothelial cells, with demonstrably high-quality transcriptomic data.
Active local control of propeller-aircraft run-up noise.
Hodgson, Murray; Guo, Jingnan; Germain, Pierre
2003-12-01
Engine run-ups are part of the regular maintenance schedule at Vancouver International Airport. The noise generated by the run-ups propagates into neighboring communities, disturbing the residents. Active noise control is a potentially cost-effective alternative to passive methods, such as enclosures. Propeller aircraft generate low-frequency tonal noise that is highly compatible with active control. This paper presents a preliminary investigation of the feasibility and effectiveness of controlling run-up noise from propeller aircraft using local active control. Computer simulations for different configurations of multi-channel active-noise-control systems, aimed at reducing run-up noise in adjacent residential areas using a local-control strategy, were performed. These were based on an optimal configuration of a single-channel control system studied previously. The variations of the attenuation and amplification zones with the number of control channels, and with source/control-system geometry, were studied. Here, the aircraft was modeled using one or two sources, with monopole or multipole radiation patterns. Both free-field and half-space conditions were considered: for the configurations studied, results were similar in the two cases. In both cases, large triangular quiet zones, with local attenuations of 10 dB or more, were obtained when nine or more control channels were used. Increases of noise were predicted outside of these areas, but these were minimized as more control channels were employed. By combining predicted attenuations with measured noise spectra, noise levels after implementation of an active control system were estimated.
Parallel 3D Multi-Stage Simulation of a Turbofan Engine
NASA Technical Reports Server (NTRS)
Turner, Mark G.; Topp, David A.
1998-01-01
A 3D multistage simulation of each component of a modern GE Turbofan engine has been made. An axisymmetric view of this engine is presented in the document. This includes a fan, booster rig, high pressure compressor rig, high pressure turbine rig and a low pressure turbine rig. In the near future, all components will be run in a single calculation for a solution of 49 blade rows. The simulation exploits the use of parallel computations by using two levels of parallelism. Each blade row is run in parallel and each blade row grid is decomposed into several domains and run in parallel. 20 processors are used for the 4 blade row analysis. The average passage approach developed by John Adamczyk at NASA Lewis Research Center has been further developed and parallelized. This is APNASA Version A. It is a Navier-Stokes solver using a 4-stage explicit Runge-Kutta time marching scheme with variable time steps and residual smoothing for convergence acceleration. It has an implicit k-epsilon turbulence model which uses an ADI solver to factor the matrix. Between 50 and 100 explicit time steps are solved before a blade row body force is calculated and exchanged with the other blade rows. This outer iteration has been coined a "flip." Efforts have been made to make the solver linearly scaleable with the number of blade rows. Enough flips are run (between 50 and 200) so that the solution in the entire machine is no longer changing. The k-epsilon equations are generally solved every other explicit time step. One of the key requirements in the development of the parallel code was to make the parallel solution exactly (bit for bit) match the serial solution. This has helped isolate many small parallel bugs and guarantee the parallelization was done correctly. The domain decomposition is done only in the axial direction since the number of points axially is much larger than in the other two directions. This code uses MPI for message passing. The parallel speed-up of the solver portion (excluding I/O and the body force calculation) was measured for a grid with 227 points axially.
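The axial-only decomposition described above can be illustrated with a minimal halo-exchange pattern. The sketch below uses mpi4py purely for illustration (an assumption; APNASA itself is not a Python code): it splits the axial index range across ranks and exchanges one ghost plane with each neighbour per iteration.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx_global = 227                      # axial points, as in the grid mentioned above
counts = [nx_global // size + (1 if r < nx_global % size else 0) for r in range(size)]
nx_local = counts[rank]

# Local slab with one ghost plane on each axial side (radial/tangential directions
# collapsed to a single value to keep the sketch short).
u = np.full(nx_local + 2, float(rank))

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for it in range(10):
    # Exchange ghost planes with axial neighbours.
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # Placeholder "explicit update" using neighbour information.
    u[1:-1] = 0.5 * (u[:-2] + u[2:])

print(f"rank {rank}: owns {nx_local} axial planes")
```

Run with, e.g., `mpiexec -n 4 python halo.py`; the real solver adds the second level of parallelism by assigning whole blade rows to groups of ranks.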
Developing a Learning Algorithm-Generated Empirical Relaxer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Wayne; Kallman, Josh; Toreja, Allen
2016-03-30
One of the main difficulties when running Arbitrary Lagrangian-Eulerian (ALE) simulations is determining how much to relax the mesh during the Eulerian step. This determination is currently made by the user on a simulation-by-simulation basis. We present a Learning Algorithm-Generated Empirical Relaxer (LAGER) which uses a regressive random forest algorithm to automate this decision process. We also demonstrate that LAGER successfully relaxes a variety of test problems, maintains simulation accuracy, and has the potential to significantly decrease both the person-hours and computational hours needed to run a successful ALE simulation.
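The decision LAGER automates can be cast as a regression problem: given per-zone mesh features, predict how much relaxation to apply. The snippet below uses scikit-learn's RandomForestRegressor on synthetic, made-up features and targets; it illustrates the general approach only, not LAGER's actual feature set, training data, or model settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic training set: hypothetical per-zone features (aspect ratio, skewness,
# local velocity gradient) and a made-up "expert" relaxation weight in [0, 1].
X = rng.uniform(size=(5000, 3))
y = np.clip(0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * X[:, 2]
            + rng.normal(0, 0.05, 5000), 0, 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# At run time the trained model would supply a relaxation weight per zone
# instead of a user-chosen, simulation-wide setting.
new_zone_features = np.array([[0.6, 0.2, 0.05]])
print("suggested relaxation weight:", model.predict(new_zone_features)[0])
```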
Drift and Behavior of E. coli Cells
NASA Astrophysics Data System (ADS)
Micali, Gabriele; Colin, Rémy; Sourjik, Victor; Endres, Robert G.
2017-12-01
Chemotaxis of the bacterium Escherichia coli is well understood in shallow chemical gradients, but its swimming behavior remains difficult to interpret in steep gradients. By focusing on single-cell trajectories from simulations, we investigated the dependence of the chemotactic drift velocity on attractant concentration in an exponential gradient. While maxima of the average drift velocity can be interpreted within analytical linear-response theory of chemotaxis in shallow gradients, limits in drift due to steep gradients and finite number of receptor-methylation sites for adaptation go beyond perturbation theory. For instance, we found a surprising pinning of the cells to the concentration in the gradient at which cells run out of methylation sites. To validate the positions of maximal drift, we recorded single-cell trajectories in carefully designed chemical gradients using microfluidics.
Numerical simulation of MPD thruster flows with anomalous transport
NASA Technical Reports Server (NTRS)
Caldo, Giuliano; Choueiri, Edgar Y.; Kelly, Arnold J.; Jahn, Robert G.
1992-01-01
Anomalous transport effects in an Ar self-field coaxial MPD thruster are presently studied by means of a fully 2D two-fluid numerical code; its calculations are extended to a range of typical operating conditions. An effort is made to compare the spatial distribution of the steady state flow and field properties and thruster power-dissipation values for simulation runs with and without anomalous transport. A conductivity law based on the nonlinear saturation of lower hybrid current-driven instability is used for the calculations. Anomalous-transport simulation runs have indicated that the resistivity in specific areas of the discharge is significantly higher than that calculated in classical runs.
Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.
Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A
2014-01-01
Multiple software programs are available for designing and running large-scale systems-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
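The key design point, compiling a model into a self-contained executable whose specifics are all exposed as input parameters, can be illustrated with a thin wrapper such as the sketch below. The executable name, command-line flags, and file formats are hypothetical stand-ins; ViSP's actual interfaces are not reproduced here.

```python
import json
import subprocess
from pathlib import Path

def run_model(exe_path, params, workdir):
    """Run one simulation of a self-contained model executable.

    All model specifics are passed in as parameters, so the same wrapper can
    drive executables produced from different modeling tools.
    """
    workdir = Path(workdir)
    workdir.mkdir(parents=True, exist_ok=True)
    param_file = workdir / "params.json"
    param_file.write_text(json.dumps(params))
    # Hypothetical command-line convention: <exe> --params <file> --out <file>
    subprocess.run(
        [str(exe_path), "--params", str(param_file),
         "--out", str(workdir / "result.csv")],
        capture_output=True, text=True, check=True,
    )
    return workdir / "result.csv"

# Example (hypothetical executable and parameter names):
# for dose in (250, 500, 1000):
#     run_model("./t2dm_model", {"metformin_dose_mg": dose, "patient_id": 42},
#               f"runs/dose_{dose}")
```

A web front-end and database layer, as described above, would then manage the queue of such runs and archive their results.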
Pasta nucleosynthesis: Molecular dynamics simulations of nuclear statistical equilibrium
NASA Astrophysics Data System (ADS)
Caplan, M. E.; Schneider, A. S.; Horowitz, C. J.; Berry, D. K.
2015-06-01
Background: Exotic nonspherical nuclear pasta shapes are expected in nuclear matter just below saturation density because of competition between short-range nuclear attraction and long-range Coulomb repulsion. Purpose: We explore the impact nuclear pasta may have on nucleosynthesis during neutron star mergers, when cold dense nuclear matter is ejected and decompressed. Methods: We use a hybrid CPU/GPU molecular dynamics (MD) code to perform decompression simulations of cold dense matter with 51,200 and 409,600 nucleons, from 0.080 fm^-3 down to 0.00125 fm^-3. Simulations are run for proton fractions YP = 0.05, 0.10, 0.20, 0.30, and 0.40 at temperatures T = 0.5, 0.75, and 1.0 MeV. The final composition of each simulation is obtained using a cluster algorithm and compared to a constant-density run. Results: The sizes of nuclei in the final state of the decompression runs are in good agreement with nuclear statistical equilibrium (NSE) models for temperatures of 1 MeV, while the constant-density runs produce nuclei smaller than the ones obtained with NSE. Our MD simulations produce unphysical results, with large rod-like nuclei in the final state, for the T = 0.5 MeV runs. Conclusions: Our MD model is valid at higher densities than simple nuclear statistical equilibrium models and may help determine the initial temperatures and proton fractions of matter ejected in mergers.
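The cluster algorithm used to extract the final composition can be illustrated with a simple friends-of-friends grouping: nucleons closer than a cutoff separation are assigned to the same cluster. The sketch below is an O(N^2) toy version for a small particle count; the cutoff value, the neglect of periodic boundaries, and the random positions are all simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def friends_of_friends(positions, cutoff):
    """Group particles into clusters: two particles are 'friends' if their
    separation is below the cutoff; clusters are the connected components."""
    n = len(positions)
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack, labels[i] = [i], current
        while stack:                              # flood fill over the friend graph
            j = stack.pop()
            d = np.linalg.norm(positions - positions[j], axis=1)
            for k in np.where((d < cutoff) & (labels == -1))[0]:
                labels[k] = current
                stack.append(k)
        current += 1
    return labels

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 50.0, size=(500, 3))       # toy nucleon positions in fm
labels = friends_of_friends(pos, cutoff=3.0)
sizes = np.bincount(labels)
print("clusters:", len(sizes), "largest 'nucleus':", sizes.max(), "nucleons")
```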
GRODY - GAMMA RAY OBSERVATORY DYNAMICS SIMULATOR IN ADA
NASA Technical Reports Server (NTRS)
Stark, M.
1994-01-01
Analysts use a dynamics simulator to test the attitude control system algorithms used by a satellite. The simulator must simulate the hardware, dynamics, and environment of the particular spacecraft and provide user services which enable the analyst to conduct experiments. Researchers at Goddard's Flight Dynamics Division developed GRODY alongside GROSS (GSC-13147), a FORTRAN simulator which performs the same functions, in a case study to assess the feasibility and effectiveness of the Ada programming language for flight dynamics software development. They used popular object-oriented design techniques to link the simulator's design with its function. GRODY is designed for analysts familiar with spacecraft attitude analysis. The program supports maneuver planning as well as analytical testing and evaluation of the attitude determination and control system used on board the Gamma Ray Observatory (GRO) satellite. GRODY simulates the GRO on-board computer and Control Processor Electronics. The analyst/user sets up and controls the simulation. GRODY allows the analyst to check and update parameter values and ground commands, obtain simulation status displays, interrupt the simulation, analyze previous runs, and obtain printed output of simulation runs. The video terminal screen display allows visibility of command sequences, full-screen display and modification of parameters using input fields, and verification of all input data. Data input available for modification includes alignment and performance parameters for all attitude hardware, simulation control parameters which determine simulation scheduling and simulator output, initial conditions, and on-board computer commands. GRODY generates eight types of output: simulation results data set, analysis report, parameter report, simulation report, status display, plots, diagnostic output (which helps the user trace any problems that have occurred during a simulation), and a permanent log of all runs and errors. The analyst can send results output in graphical or tabular form to a terminal, disk, or hardcopy device, and can choose to have any or all items plotted against time or against each other. Goddard researchers developed GRODY on a VAX 8600 running VMS version 4.0. For near real time performance, GRODY requires a VAX at least as powerful as a model 8600 running VMS 4.0 or a later version. To use GRODY, the VAX needs an Ada Compilation System (ACS), Code Management System (CMS), and 1200K memory. GRODY is written in Ada and FORTRAN.
Validation of Mission Plans Through Simulation
NASA Astrophysics Data System (ADS)
St-Pierre, J.; Melanson, P.; Brunet, C.; Crabtree, D.
2002-01-01
The purpose of a spacecraft mission planning system is to automatically generate safe and optimized mission plans for a single spacecraft, or for several functioning in unison. The system verifies user input syntax, conformance to commanding constraints, absence of duty cycle violations, timing conflicts, state conflicts, etc. Present-day constraint-based systems with state-based predictive models use verification rules derived from expert knowledge. A familiar solution found in Mission Operations Centers is to complement the planning system with a high-fidelity spacecraft simulator. Often a dedicated workstation, the simulator is frequently used for operator training and procedure validation, and may be interfaced to actual control stations with command and telemetry links. While there are distinct advantages to having a planning system offer realistic operator training using the actual flight control console, physical verification of data transfer across layers, and procedure validation, experience has revealed some drawbacks and inefficiencies in ground segment operations. With these considerations in mind, two simulation-based mission plan validation projects are under way at the Canadian Space Agency (CSA): RVMP and ViSION. The tools proposed in these projects will automatically run scenarios and provide execution reports to operations planning personnel, prior to actual command upload. This can provide an important safeguard against system or human errors that can only be detected with high-fidelity, interdependent spacecraft models running concurrently. The core element common to these projects is a spacecraft simulator, built with off-the-shelf components such as CAE's Real-Time Object-Based Simulation Environment (ROSE) technology, MathWork's MATLAB/Simulink, and Analytical Graphics' Satellite Tool Kit (STK). To complement these tools, additional components were developed, such as an emulated Spacecraft Test and Operations Language (STOL) interpreter and CCSDS TM/TC encoders and decoders. This paper discusses the use of simulation in the context of space mission planning, describes the projects under way and proposes additional venues of investigation and development.
Implementing Parquet equations using HPX
NASA Astrophysics Data System (ADS)
Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark
A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations arising from those resources vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Schlicher, Bob G
Vulnerability in security of an information system is quantitatively predicted. The information system may receive malicious actions against its security and may receive corrective actions for restoring the security. A game-oriented agent-based model is constructed in a simulator application. The game ABM model represents security activity in the information system. The game ABM model has two opposing participants including an attacker and a defender, probabilistic game rules and allowable game states. A specified number of simulations are run and a probabilistic number of the plurality of allowable game states are reached in each simulation run. The probability of reaching a specified game state is unknown prior to running each simulation. Data generated during the game states is collected to determine a probability of one or more aspects of security in the information system.
Malataras, G; Kappas, C; Lovelock, D M; Mohan, R
1997-01-01
This article presents a comparison between two implementations of an EGS4 Monte Carlo simulation of a radiation therapy machine. The first implementation was run on a high performance RISC workstation, and the second was run on an inexpensive PC. The simulation was performed using the MCRAD user code. The photon energy spectra, as measured at a plane transverse to the beam direction and containing the isocenter, were compared. The photons were also binned radially in order to compare the variation of the spectra with radius. With 500,000 photons recorded in each of the two simulations, the running times were 48 h and 116 h for the workstation and the PC, respectively. No significant statistical differences between the two implementations were found.
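A hedged sketch of the kind of bin-by-bin comparison implied here: two binned photon energy spectra with Poisson counting errors are compared through their normalized residuals. The binning and counts below are synthetic; the original study's binning and statistical test are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for the two recorded spectra (counts per energy bin).
bins = np.linspace(0.0, 6.0, 31)                            # MeV, made-up binning
centers = bins[:-1] + np.diff(bins) / 2
true_shape = np.exp(-0.5 * centers)
counts_ws = rng.poisson(500000 * true_shape / true_shape.sum())  # "workstation" run
counts_pc = rng.poisson(500000 * true_shape / true_shape.sum())  # "PC" run

# Normalized residuals; |z| mostly below ~2 indicates statistical agreement.
sigma = np.sqrt(counts_ws + counts_pc)
z = (counts_ws - counts_pc) / np.where(sigma > 0, sigma, 1)
chi2 = np.sum(z**2)
print(f"chi^2 / ndf = {chi2:.1f} / {len(z)}")
```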
40 CFR 86.134-96 - Running loss test.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Running loss test. 86.134-96 Section... Heavy-Duty Vehicles; Test Procedures § 86.134-96 Running loss test. (a) Overview. Gasoline- and methanol-fueled vehicles are to be tested for running loss emissions during simulated high-temperature urban...
40 CFR 86.134-96 - Running loss test.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Running loss test. 86.134-96 Section... Heavy-Duty Vehicles; Test Procedures § 86.134-96 Running loss test. (a) Overview. Gasoline- and methanol-fueled vehicles are to be tested for running loss emissions during simulated high-temperature urban...
40 CFR 86.134-96 - Running loss test.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Running loss test. 86.134-96 Section... Heavy-Duty Vehicles; Test Procedures § 86.134-96 Running loss test. (a) Overview. Gasoline- and methanol-fueled vehicles are to be tested for running loss emissions during simulated high-temperature urban...
40 CFR 86.134-96 - Running loss test.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Running loss test. 86.134-96 Section... Heavy-Duty Vehicles; Test Procedures § 86.134-96 Running loss test. (a) Overview. Gasoline- and methanol-fueled vehicles are to be tested for running loss emissions during simulated high-temperature urban...
Evaluation of a grid based molecular dynamics approach for polypeptide simulations.
Merelli, Ivan; Morra, Giulia; Milanesi, Luciano
2007-09-01
Molecular dynamics is very important for biomedical research because it makes it possible to simulate the behavior of a biological macromolecule in silico. However, molecular dynamics is computationally rather expensive: the simulation of a few nanoseconds of dynamics for a large macromolecule such as a protein takes a very long time, due to the high number of operations needed to solve Newton's equations for a system of thousands of atoms. In order to obtain biologically significant data, it is desirable to use high-performance computation resources to perform these simulations. Recently, a distributed computing approach based on replacing a single long simulation with many independent short trajectories has been introduced, which in many cases provides valuable results. This study concerns the development of an infrastructure to run molecular dynamics simulations on a grid platform in a distributed way. The implemented software allows the parallel submission of different simulations that are individually short but together provide important biological information. Moreover, each simulation is divided into a chain of jobs to avoid data loss in case of system failure and to contain the size of each data transfer from the grid. The results confirm that the distributed approach on grid computing is particularly suitable for molecular dynamics simulations thanks to its high scalability.
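The chaining of each trajectory into a sequence of short jobs with restart files can be sketched as below. The MD engine name and its command-line flags are hypothetical, and a real deployment would submit each segment through the grid middleware rather than a local subprocess call.

```python
import subprocess
from pathlib import Path

def run_chained_trajectory(traj_id, n_segments, steps_per_segment, workdir="md_runs"):
    """Split one trajectory into a chain of short jobs, each restarting from the
    previous segment's checkpoint, so a failure only loses one segment."""
    base = Path(workdir) / f"traj_{traj_id:03d}"
    base.mkdir(parents=True, exist_ok=True)
    restart = None
    for seg in range(n_segments):
        out = base / f"segment_{seg:03d}"
        cmd = ["md_engine", "--steps", str(steps_per_segment), "--output", str(out)]
        if restart is not None:
            cmd += ["--restart", str(restart)]       # hypothetical flags
        subprocess.run(cmd, check=True)
        restart = out.with_suffix(".chk")            # checkpoint for the next segment
    return restart

# Many short, independent trajectories in place of one long run, e.g.:
# for i in range(64):
#     run_chained_trajectory(i, n_segments=10, steps_per_segment=500000)
```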
NASA Technical Reports Server (NTRS)
Thompson, David S.; Soni, Bharat K.
2000-01-01
An integrated software package, ICEG2D, was developed to automate computational fluid dynamics (CFD) simulations for single-element airfoils with ice accretion. ICEG2D is designed to automatically perform three primary functions: (1) generating a grid-ready surface definition based on the geometrical characteristics of the iced airfoil surface, (2) generating a high-quality grid using the generated surface point distribution, and (3) generating the input and restart files needed to run the general-purpose CFD solver NPARC. ICEG2D can be executed in batch mode using a script file or in an interactive mode by entering directives from a command line. This report summarizes activities completed in the first year of a three-year research and development program to address issues related to CFD simulations for aircraft components with ice accretion. Specifically, this document describes the technology employed in the software, the installation procedure, and the operation of the software package. Validation of the geometry and grid generation modules of ICEG2D is also discussed.
A Storm Surge and Inundation Model of the Back River Watershed at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Loftis, Jon Derek; Wang, Harry V.; DeYoung, Russell J.
2013-01-01
This report on a Virginia Institute of Marine Science project demonstrates that the sub-grid modeling technology (now part of the Chesapeake Bay Inundation Prediction System, CIPS) can incorporate high-resolution Lidar measurements provided by NASA Langley Research Center into the sub-grid model framework to resolve detailed topographic features for use as a hydrological transport model for run-off simulations within NASA Langley and Langley Air Force Base. The capability whereby rainfall over land accumulates in the ditches and channels resolved by the model sub-grid was tested by simulating the run-off induced by heavy precipitation. Possessing capabilities for both storm surge and run-off simulations, the CIPS model was then applied to simulate real storm events, starting with Hurricane Isabel in 2003. It will be shown that the model can generate highly accurate on-land inundation maps, as demonstrated by excellent comparison of the Langley tidal gauge time series data (CAPABLE.larc.nasa.gov) and spatial patterns of real storm wrack line measurements with the model results simulated during Hurricanes Isabel (2003), Irene (2011), and a 2009 Nor'easter. With confidence built upon the model's performance, sea level rise scenarios from the ICCP (International Climate Change Partnership) were also included in the model scenario runs to simulate future inundation cases.
NASA Astrophysics Data System (ADS)
Yoshida, Takashi
A combined-levitation-and-propulsion single-sided linear induction motor (SLIM) vehicle can be levitated without any additional levitation system. When the vehicle runs, the attractive-normal force varies depending on the phase of the primary current because of the short primary end effect. The ripple of the attractive-normal force causes vertical vibration of the vehicle. In this paper, the instantaneous attractive-normal force is analyzed using the space harmonic analysis method. Based on this analysis, a vertical vibration control is proposed. The validity of the proposed control method is verified by numerical simulation.
Aging in the three-dimensional random-field Ising model
NASA Astrophysics Data System (ADS)
von Ohr, Sebastian; Manssen, Markus; Hartmann, Alexander K.
2017-07-01
We studied the nonequilibrium aging behavior of the random-field Ising model in three dimensions for various values of the disorder strength. This allowed us to investigate how the aging behavior changes across the ferromagnetic-paramagnetic phase transition. We investigated a large system size of N = 256^3 spins and up to 10^8 Monte Carlo sweeps. To reach these necessary long simulation times, we employed an implementation running on Intel Xeon Phi coprocessors, reaching single-spin-flip times as short as 6 ps. We measured typical correlation functions in space and time to extract a growing length scale and corresponding exponents.
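A minimal single-spin-flip Metropolis sketch for the 3-D random-field Ising model is shown below. It reproduces the update rule at toy scale only, with none of the vectorization, checkerboard ordering, or Xeon Phi specific optimizations needed to approach picosecond flip times; the lattice size, temperature, and disorder strength are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
L, T, h_strength = 16, 2.0, 1.0                     # toy lattice, temperature, disorder
spins = rng.choice([-1, 1], size=(L, L, L))
fields = rng.normal(0.0, h_strength, size=(L, L, L))   # quenched random fields

def sweep(spins, fields, T):
    """One Monte Carlo sweep of single-spin-flip Metropolis updates (J = 1)."""
    for _ in range(spins.size):
        x, y, z = rng.integers(0, L, size=3)
        nn = (spins[(x + 1) % L, y, z] + spins[(x - 1) % L, y, z] +
              spins[x, (y + 1) % L, z] + spins[x, (y - 1) % L, z] +
              spins[x, y, (z + 1) % L] + spins[x, y, (z - 1) % L])
        dE = 2.0 * spins[x, y, z] * (nn + fields[x, y, z])   # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[x, y, z] *= -1

for _ in range(50):
    sweep(spins, fields, T)
print("magnetization per spin:", spins.mean())
```

Aging studies repeat such sweeps from a quenched initial state and measure two-time correlation functions, which this toy loop omits.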
Challenges in Visual Analysis of Ensembles
Crossno, Patricia
2018-04-12
Modeling physical phenomena through computational simulation increasingly relies on generating a collection of related runs, known as an ensemble. In this paper, we explore the challenges we face in developing analysis and visualization systems for large and complex ensemble data sets, which we seek to understand without having to view the results of every simulation run. Implementing approaches and ideas developed in response to this goal, we demonstrate the analysis of a 15K run material fracturing study using Slycat, our ensemble analysis system.
Shadow: Running Tor in a Box for Accurate and Efficient Experimentation
2011-09-23
Modeling the speed of a target CPU is done by running an OpenSSL [31] speed test on a real CPU of that type. This provides us with the raw CPU processing...rate, but we are also interested in the processing speed of an application. By running application benchmarks on the same CPU as the OpenSSL speed test...simulation, saving CPU cycles on our simulation host machine. Shadow removes cryptographic processing by preloading the main OpenSSL [31] functions used
High-resolution dynamical downscaling of the future Alpine climate
NASA Astrophysics Data System (ADS)
Bozhinova, Denica; José Gómez-Navarro, Juan; Raible, Christoph
2017-04-01
The Alpine region and Switzerland is a challenging area for simulating and analysing Global Climate Model (GCM) results. This is mostly due to the combination of a very complex topography and the still rather coarse horizontal resolution of current GCMs, in which not all of the many-scale processes that drive the local weather and climate can be resolved. In our study, the Weather Research and Forecasting (WRF) model is used to dynamically downscale a GCM simulation to a resolution as high as 2 km x 2 km. WRF is driven by initial and boundary conditions produced with the Community Earth System Model (CESM) for the recent past (control run) and until 2100 using the RCP8.5 climate scenario (future run). The control run downscaled with WRF covers the period 1976-2005, while the future run investigates a 20-year slice covering 2080-2099. We compare the control WRF-CESM simulation to an observational product provided by MeteoSwiss and to an additional WRF simulation driven by the ERA-Interim reanalysis, to estimate the bias that is introduced by the extra modelling step of our framework. Several bias-correction methods are evaluated, including a quantile mapping technique, to ameliorate the bias in the control WRF-CESM simulation. In the next step of our study these corrections are applied to our future WRF-CESM run. The resulting downscaled and bias-corrected data are analysed for the properties of precipitation and wind speed in the future climate. Our special interest focuses on the absolute quantities simulated for these meteorological variables, as these are used to identify extreme events such as wind storms and situations that can lead to floods.
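One of the bias-correction methods mentioned, quantile mapping, can be sketched in a few lines: each model value is mapped through the empirical quantiles of the control simulation onto the corresponding quantile of the observations. This is a bare-bones version on synthetic data; the study's actual implementation (seasonal stratification, tail treatment, variable-specific choices) is not reproduced here.

```python
import numpy as np

def quantile_map(x_future, x_control, x_obs):
    """Empirical quantile mapping: place each future value at its quantile in the
    control run, then read off the observed value at that same quantile."""
    quantiles = np.linspace(0.0, 1.0, 101)
    control_q = np.quantile(x_control, quantiles)
    obs_q = np.quantile(x_obs, quantiles)
    levels = np.interp(x_future, control_q, quantiles)   # value -> quantile level
    return np.interp(levels, quantiles, obs_q)            # level -> corrected value

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 3.0, size=10000)             # synthetic daily precipitation
control = rng.gamma(2.0, 4.0, size=10000)         # biased control simulation
future = rng.gamma(2.2, 4.5, size=10000)          # future simulation to correct
corrected = quantile_map(future, control, obs)
print("raw future mean:", future.mean(), "corrected mean:", corrected.mean())
```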
Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm
NASA Astrophysics Data System (ADS)
Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.
2014-06-01
With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. Openstack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.
Numerical simulations of Hurricane Katrina (2005) in the turbulent gray zone
NASA Astrophysics Data System (ADS)
Green, Benjamin W.; Zhang, Fuqing
2015-03-01
Current numerical simulations of tropical cyclones (TCs) use a horizontal grid spacing as small as Δx = 10^3 m, with all boundary layer (BL) turbulence parameterized. Eventually, TC simulations can be conducted at Large Eddy Simulation (LES) resolution, which requires Δx to fall in the inertial subrange (often <10^2 m) to adequately resolve the large, energy-containing eddies. Between the two lies the so-called "terra incognita" because some of the assumptions used by mesoscale models and LES to treat BL turbulence are invalid. This study performs several 4-6 h simulations of Hurricane Katrina (2005) without a BL parameterization at extremely fine Δx [333, 200, and 111 m, hereafter "Large Eddy Permitting (LEP) runs"] and compares with mesoscale simulations with BL parameterizations (Δx = 3 km, 1 km, and 333 m, hereafter "PBL runs"). There are profound differences in the hurricane BL structure between the PBL and LEP runs: the former have a deeper inflow layer and secondary eyewall formation, whereas the latter have a shallow inflow layer without a secondary eyewall. Among the LEP runs, decreased Δx yields weaker subgrid-scale vertical momentum fluxes, but the sum of subgrid-scale and "grid-scale" fluxes remains similar. There is also evidence that the size of the prevalent BL eddies depends upon Δx, suggesting that convergence to true LES has not yet been reached. Nevertheless, the similarities in the storm-scale BL structure among the LEP runs indicate that the net effect of the BL on the rest of the hurricane may be somewhat independent of Δx.
NASA Astrophysics Data System (ADS)
Brewer, Jeffrey David
The National Aeronautics and Space Administration is planning for long-duration manned missions to the Moon and Mars. For feasible long-duration space travel, improvements in exercise countermeasures are necessary to maintain cardiovascular fitness, bone mass throughout the body and the ability to perform coordinated movements in a constant gravitational environment that is six orders of magnitude higher than the "near weightlessness" condition experienced during transit to and/or orbit of the Moon, Mars, and Earth. In such gravitational transitions feedback and feedforward postural control strategies must be recalibrated to ensure optimal locomotion performance. In order to investigate methods of improving postural control adaptation during these gravitational transitions, a treadmill based precision stepping task was developed to reveal changes in neuromuscular control of locomotion following both simulated partial gravity exposure and post-simulation exercise countermeasures designed to speed lower extremity impedance adjustment mechanisms. The exercise countermeasures included a short period of running with or without backpack loads immediately after partial gravity running. A novel suspension type partial gravity simulator incorporating spring balancers and a motor-driven treadmill was developed to facilitate body weight off loading and various gait patterns in both simulated partial and full gravitational environments. Studies have provided evidence that suggests: the environmental simulator constructed for this thesis effort does induce locomotor adaptations following partial gravity running; the precision stepping task may be a helpful test for illuminating these adaptations; and musculoskeletal loading in the form of running with or without backpack loads may improve the locomotor adaptation process.
NASA Astrophysics Data System (ADS)
Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.
2015-12-01
Climate model simulations are used to understand the evolution and variability of Earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected. However, most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near real-time data visualization analytics capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible. The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software aims to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.
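The co-scheduling pattern described, watch for new simulation output and launch a rendering job as soon as it appears, can be sketched with a simple polling loop. The file naming, plotting command, and local subprocess launch below are hypothetical stand-ins for the co-scheduled batch jobs and data transfers on the actual systems.

```python
import subprocess
import time
from pathlib import Path

def watch_and_render(output_dir, poll_seconds=60):
    """Poll a simulation output directory; render each new file exactly once."""
    seen = set()
    output_dir = Path(output_dir)
    while True:
        for f in sorted(output_dir.glob("history_*.nc")):   # hypothetical naming
            if f not in seen:
                seen.add(f)
                # In production this would be a co-scheduled batch job; here it
                # is a placeholder plotting command launched locally.
                subprocess.Popen(["python", "render_frame.py", str(f)])
        time.sleep(poll_seconds)

# watch_and_render("/scratch/climate_run/output")
```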
NASA Technical Reports Server (NTRS)
Thompson David S.; Soni, Bharat K.
2001-01-01
An integrated geometry/grid/simulation software package, ICEG2D, is being developed to automate computational fluid dynamics (CFD) simulations for single- and multi-element airfoils with ice accretions. The current version, ICEG2D (v2.0), was designed to automatically perform four primary functions: (1) generate a grid-ready surface definition based on the geometrical characteristics of the iced airfoil surface, (2) generate high-quality structured and generalized grids starting from a defined surface definition, (3) generate the input and restart files needed to run the structured-grid CFD solver NPARC or the generalized-grid CFD solver HYBFL2D, and (4) using the flow solutions, generate solution-adaptive grids. ICEG2D (v2.0) can be operated either in a batch mode using a script file or in an interactive mode by entering directives from a command line within a Unix shell. This report summarizes activities completed in the first two years of a three-year research and development program to address automation issues related to CFD simulations for airfoils with ice accretions. As well as describing the technology employed in the software, this document serves as a user's manual providing installation and operating instructions. An evaluation of the software is also presented.
NASA Technical Reports Server (NTRS)
Mcenulty, R. E.
1977-01-01
The G189A simulation of the Shuttle Orbiter ECLSS was upgraded. All simulation library versions and simulation models were converted from the EXEC2 to the EXEC8 computer system and a new program, G189PL, was added to the combination master program library. The program permits the post-plotting of up to 100 frames of plot data over any time interval of a G189 simulation run. The overlay structure of the G189A simulations were restructured for the purpose of conserving computer core requirements and minimizing run time requirements.
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of millions of computers on the Internet, and use them to run large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications, and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation across thousands of nodes in small spatial and computational chunks. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
Simulated tsunami run-up amplification factors around Penang Island for preliminary risk assessment
NASA Astrophysics Data System (ADS)
Lim, Yong Hui; Kh'ng, Xin Yi; Teh, Su Yean; Koh, Hock Lye; Tan, Wai Kiat
2017-08-01
The Andaman mega-tsunami that struck Malaysia on 26 December 2004 affected 200 kilometers of the northwest Peninsular Malaysia coastline from Perlis to Selangor. The tsunami scientific community anticipates that the next mega-tsunami could occur at any time. This rare catastrophic event has awakened the attention of the Malaysian government to take appropriate risk reduction measures, including timely and orderly evacuation. To effectively evacuate ordinary citizens to safe ground or the nearest designated emergency shelter, a well-prepared evacuation route is essential, with the estimated tsunami run-up heights and inundation distances on land clearly indicated on the evacuation map. The run-up heights and inundation distances are simulated by the in-house model 2-D TUNA-RP, based upon credible scientific tsunami source scenarios derived from tectonic activity around the region. To provide a useful tool for estimating the run-up heights along the entire coast of Penang Island, we compute tsunami run-up amplification factors based upon 2-D TUNA-RP model simulations in this paper. The inundation map and run-up amplification factors for six domains along the entire coastline of Penang Island are provided. The comparison between measured tsunami wave heights for the 2004 Andaman tsunami and TUNA-RP model simulated values demonstrates good agreement.
Runtime visualization of the human arterial tree.
Insley, Joseph A; Papka, Michael E; Dong, Suchuan; Karniadakis, George; Karonis, Nicholas T
2007-01-01
Large-scale simulation codes typically execute for extended periods of time and often on distributed computational resources. Because these simulations can run for hours, or even days, scientists like to get feedback about the state of the computation and the validity of its results as it runs. It is also important that these capabilities be made available with little impact on the performance and stability of the simulation. Visualizing and exploring data in the early stages of the simulation can help scientists identify problems early, potentially avoiding a situation where a simulation runs for several days, only to discover that an error with an input parameter caused both time and resources to be wasted. We describe an application that aids in the monitoring and analysis of a simulation of the human arterial tree. The application provides researchers with high-level feedback about the state of the ongoing simulation and enables them to investigate particular areas of interest in greater detail. The application also offers monitoring information about the amount of data produced and data transfer performance among the various components of the application.
ALEGRA -- A massively parallel h-adaptive code for solid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, R.M.; Wong, M.K.; Boucheron, E.A.
1997-12-31
ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high-precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.
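H-adaptivity as described, subdividing elements where the numerical error is largest, is shown below in a deliberately simplified 1-D form: cells whose error indicator exceeds a threshold are split in two. ALEGRA's parallel, 3-D, unstructured refinement is far more involved; this sketch, with a made-up error indicator, only illustrates the refine-where-needed idea.

```python
import numpy as np

def refine(edges, error_indicator, threshold):
    """Split every 1-D cell whose error indicator exceeds the threshold."""
    new_edges = [edges[0]]
    for i in range(len(edges) - 1):
        left, right = edges[i], edges[i + 1]
        if error_indicator(0.5 * (left + right)) > threshold:
            new_edges.append(0.5 * (left + right))   # h-refinement: halve the cell
        new_edges.append(right)
    return np.array(new_edges)

# Toy problem: refine around a steep front at x = 0.5.
edges = np.linspace(0.0, 1.0, 11)
indicator = lambda x: np.exp(-((x - 0.5) / 0.05) ** 2)   # stand-in error estimate
for _ in range(3):                                        # a few refinement passes
    edges = refine(edges, indicator, threshold=0.1)
print(len(edges) - 1, "cells after refinement")
```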
Validation of MHD Models using MST RFP Plasmas
NASA Astrophysics Data System (ADS)
Jacobson, C. M.; Chapman, B. E.; den Hartog, D. J.; McCollam, K. J.; Sarff, J. S.; Sovinec, C. R.
2017-10-01
Rigorous validation of computational models used in fusion energy sciences over a large parameter space and across multiple magnetic configurations can increase confidence in their ability to predict the performance of future devices. MST is a well diagnosed reversed-field pinch (RFP) capable of operation with plasma current ranging from 60 kA to 500 kA. The resulting Lundquist number S, a key parameter in resistive magnetohydrodynamics (MHD), ranges from 4×10^4 to 8×10^6 for standard RFP plasmas and provides substantial overlap with MHD RFP simulations. MST RFP plasmas are simulated using both DEBS, a nonlinear single-fluid visco-resistive MHD code, and NIMROD, a nonlinear extended MHD code, with S ranging from 10^4 to 10^5 for single-fluid runs, and the magnetic Prandtl number Pm = 1. Validation metric comparisons are presented, focusing on how normalized magnetic fluctuations at the edge, b, scale with S. Preliminary results for the dominant n = 6 mode are b ~ S^(-0.20 +/- 0.02) for single-fluid NIMROD, b ~ S^(-0.25 +/- 0.05) for DEBS, and b ~ S^(-0.20 +/- 0.02) for experimental measurements; however, there is a significant discrepancy in mode amplitudes. Preliminary two-fluid NIMROD results are also presented. Work supported by US DOE.
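The scaling comparisons quoted above amount to fitting a power law b ~ S^alpha to fluctuation amplitudes measured over a range of Lundquist numbers. A minimal sketch of such a log-log fit on synthetic data is shown below; the numbers are made up, and the real analysis involves measurement uncertainties and mode decompositions not treated here.

```python
import numpy as np

# Synthetic (S, b) pairs roughly mimicking a b ~ S^-0.2 trend with scatter.
rng = np.random.default_rng(5)
S = np.logspace(4, 6, 12)
b = 0.05 * S**-0.2 * np.exp(rng.normal(0.0, 0.05, S.size))

# Least-squares fit of log b = alpha * log S + const gives the scaling exponent.
alpha, intercept = np.polyfit(np.log(S), np.log(b), 1)
resid = np.log(b) - (alpha * np.log(S) + intercept)
alpha_err = np.sqrt(resid.var(ddof=2) / np.sum((np.log(S) - np.log(S).mean()) ** 2))
print(f"fitted exponent: {alpha:.3f} +/- {alpha_err:.3f}")
```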
Modeling a Full Coronal Loop Observed with Hinode EIS and SDO AIA
NASA Technical Reports Server (NTRS)
Alexander, Caroline; Winebarger, Amy R.
2015-01-01
Physical parameters measured from an observation of a coronal loop by Gupta et al. (2015) using Hinode/EIS and SDO/AIA were used as input for the hydrodynamic, impulsively heated NRLSOFM 1-d loop model. The model was run at eight different energy inputs and used the measured quantities of temperature (0.73 MK), density (10^8.5 cm^-3), and minimum loop lifetime to evaluate the success of the model at recreating the observations. The loop was measured by us to have an unprojected length of 236 Mm and was assumed to be almost perpendicular to the solar surface (tilt of 3.5 degrees) and to have a dipolar geometry. Our results show that two of our simulation runs (with input energies of 0.01 and 0.02 erg cm^-3 s^-1) closely match the temperature/density combination exhibited by the loop observation. However, our simulated loops only remain in the temperature-sensitive region of the Mg 278.4 Angstrom filter for 500 and 800 seconds, respectively, which is less than the 1200 seconds that the loop is observed for with EIS in order to make the temperature/density measurements over the loop's entire length. This leads us to conclude that impulsive heating of a single loop is not complex enough to explain this observation. Additional steady heating or a collection of additional strands along the line of sight would help to align the simulation with the observation.
2007-06-01
particle accelerators cannot run unless enough network bandwidth is available to absorb their data streams. DOE scientists running simulations routinely...send tuples to TelegraphCQ. To simulate a less-powerful machine, I increased the playback rate of the trace by a factor of 10 and reduced the query...III CPUs and 1.5 GB of main memory. To simulate using a less powerful embedded CPU, I wrote a program that would "play back" the trace at a multiple
Spatial application of WEPS for estimating wind erosion in the Pacific Northwest
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on croplands and was originally designed to run field scale simulations. This research is an extension of the WEPS model to run on multiple fields (grids) covering a larger region. We modified the WEPS source code to allow it...
Spatial application of WEPS for estimating wind erosion in the Pacific Northwest
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on cropland and was originally designed to run simulations on a field-scale size. This study extended WEPS to run on multiple fields (grids) independently to cover a large region and to conduct an initial investigation to ass...
THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS
Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel
2010-01-01
Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618
SQUEEZE-E: The Optimal Solution for Molecular Simulations with Periodic Boundary Conditions.
Wassenaar, Tsjerk A; de Vries, Sjoerd; Bonvin, Alexandre M J J; Bekker, Henk
2012-10-09
In molecular simulations of macromolecules, it is desirable to limit the amount of solvent in the system to avoid spending computational resources on uninteresting solvent-solvent interactions. As a consequence, periodic boundary conditions are commonly used, with a simulation box chosen as small as possible, for a given minimal distance between images. Here, we describe how such a simulation cell can be set up for ensembles, taking into account a priori available or estimable information regarding conformational flexibility. Doing so ensures that any conformation present in the input ensemble will satisfy the distance criterion during the simulation. This helps avoid periodicity artifacts due to conformational changes. The method introduces three new approaches in computational geometry: (1) The first is the derivation of an optimal packing of ensembles, for which the mathematical framework is described. (2) A new method for approximating the α-hull and the contact body for single bodies and ensembles is presented, which is orders of magnitude faster than existing routines, allowing the calculation of packings of large ensembles and/or large bodies. (3) A routine is described for searching a combination of three vectors on a discretized contact body forming a reduced base for a lattice with minimal cell volume. The new algorithms reduce the time required to calculate packings of single bodies from minutes or hours to seconds. The use and efficacy of the method is demonstrated for ensembles obtained from NMR, MD simulations, and elastic network modeling. An implementation of the method is available online at http://haddock.chem.uu.nl/services/SQUEEZE/ and as an option for running simulations through the weNMR GRID MD server at http://haddock.science.uu.nl/enmr/services/GROMACS/main.php.
morphforge: a toolbox for simulating small networks of biologically detailed neurons in Python
Hull, Michael J.; Willshaw, David J.
2014-01-01
The broad structure of a modeling study can often be explained over a cup of coffee, but converting this high-level conceptual idea into graphs of the final simulation results may require many weeks of sitting at a computer. Although models themselves can be complex, often many mental resources are wasted working around complexities of the software ecosystem such as fighting to manage files, interfacing between tools and data formats, finding mistakes in code or working out the units of variables. morphforge is a high-level, Python toolbox for building and managing simulations of small populations of multicompartmental biophysical model neurons. An entire in silico experiment, including the definition of neuronal morphologies, channel descriptions, stimuli, visualization and analysis of results can be written within a single short Python script using high-level objects. Multiple independent simulations can be created and run from a single script, allowing parameter spaces to be investigated. Consideration has been given to the reuse of both algorithmic and parameterizable components to allow both specific and stochastic parameter variations. Some other features of the toolbox include: the automatic generation of human-readable documentation (e.g., PDF files) about a simulation; the transparent handling of different biophysical units; a novel mechanism for plotting simulation results based on a system of tags; and an architecture that supports both the use of established formats for defining channels and synapses (e.g., MODL files), and the possibility to support other libraries and standards easily. We hope that this toolbox will allow scientists to quickly build simulations of multicompartmental model neurons for research and serve as a platform for further tool development. PMID:24478690
Accelerating Wright–Fisher Forward Simulations on the Graphics Processing Unit
Lawrie, David S.
2017-01-01
Forward Wright–Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processor Unit (CPU), thus limiting their usefulness. However, the single-locus Wright–Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called “embarrassingly parallel,” consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and programming languages designed to leverage the inherent parallel nature of these processors have allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency. The presented GPU Optimized Wright–Fisher simulation, or “GO Fish” for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. With simulations so accelerated, one can not only do quick parametric bootstrapping of previously estimated parameters, but also use simulated results to calculate the likelihoods and summary statistics of demographic and selection models against real polymorphism data, all without restricting the demographic and selection scenarios that can be modeled or requiring approximations to the single-locus forward algorithm for efficiency. Further, as many of the parallel programming techniques used in this simulation can be applied to other computationally intensive algorithms important in population genetics, GO Fish serves as an exciting template for future research into accelerating computation in evolution. GO Fish is part of the Parallel PopGen Package available at: http://dl42.github.io/ParallelPopGen/. PMID:28768689
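The single-locus forward algorithm referred to above reduces, per site and per generation, to a deterministic selection update followed by binomial resampling. The sketch below is a minimal serial NumPy version of that loop under assumed parameter values; it is illustrative only and is not GO Fish's CUDA code or API.

```python
# Minimal serial sketch of a single-locus Wright-Fisher forward simulation with
# selection and drift (haploid model).  GO Fish runs this kind of loop,
# vectorized over very many independent sites, on the GPU; the parameter names
# and values here are illustrative assumptions.
import numpy as np

def wright_fisher(N=10_000, s=0.01, p0=0.05, generations=1_000, seed=0):
    """Track the frequency of a selected allele forward in time."""
    rng = np.random.default_rng(seed)
    p, trajectory = p0, [p0]
    for _ in range(generations):
        w_bar = p * (1 + s) + (1 - p)        # mean fitness
        p_sel = p * (1 + s) / w_bar          # deterministic selection step
        p = rng.binomial(N, p_sel) / N       # drift: resample N gametes
        trajectory.append(p)
        if p in (0.0, 1.0):                  # fixation or loss
            break
    return np.array(trajectory)

traj = wright_fisher()
print(f"final allele frequency after {len(traj) - 1} generations: {traj[-1]:.3f}")
```

Because every site's trajectory is independent, the same loop can be replicated across millions of sites in parallel, which is the "embarrassingly parallel" structure the GPU implementation exploits.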
Analyzing Spacecraft Telecommunication Systems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric
2004-01-01
Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
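The kind of link analysis described above ultimately evaluates a link budget at each time step. Purely as an illustration of the arithmetic involved (and not of MMTAT's actual models or API), a free-space received power can be computed from the Friis relation:

```python
# Generic free-space link-budget arithmetic of the kind a telecom analysis tool
# evaluates at each time step (illustrative only; not MMTAT's models or API).
import math

def received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, range_km, freq_ghz, misc_loss_db=0.0):
    """Received power in dBm from the Friis free-space relation."""
    fspl_db = 92.45 + 20.0 * math.log10(range_km) + 20.0 * math.log10(freq_ghz)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db - misc_loss_db

# Hypothetical X-band pass: 10 W transmitter (40 dBm), medium-gain spacecraft
# antenna, large ground antenna; all numbers are assumptions.
p_rx = received_power_dbm(40.0, 40.0, 68.0, range_km=2_000_000.0, freq_ghz=8.4)
print(f"received power: {p_rx:.1f} dBm")
```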
DWPF Simulant CPC Studies For SB8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newell, J. D.
2013-09-25
Prior to processing a Sludge Batch (SB) in the Defense Waste Processing Facility (DWPF), flowsheet studies using simulants are performed. Typically, the flowsheet studies are conducted based on projected composition(s). The results from the flowsheet testing are used to 1) guide decisions during sludge batch preparation, 2) serve as a preliminary evaluation of potential processing issues, and 3) provide a basis to support the Shielded Cells qualification runs performed at the Savannah River National Laboratory (SRNL). SB8 was initially projected to be a combination of the Tank 40 heel (Sludge Batch 7b), Tank 13, Tank 12, and the Tank 51 heel. In order to accelerate preparation of SB8, the decision was made to delay the oxalate-rich material from Tank 12 to a future sludge batch. SB8 simulant studies without Tank 12 were reported in a separate report.1 The data presented in this report will be useful when processing future sludge batches containing Tank 12. The wash endpoint target for SB8 was set at a significantly higher sodium concentration to allow acceptable glass compositions at the targeted waste loading. Four non-coupled tests were conducted using simulant representing Tank 40 at 110-146% of the Koopman Minimum Acid requirement. Hydrogen was generated during high acid stoichiometry (146% acid) SRAT testing up to 31% of the DWPF hydrogen limit. SME hydrogen generation reached 48% of the DWPF limit for the high acid run. Two non-coupled tests were conducted using simulant representing Tank 51 at 110-146% of the Koopman Minimum Acid requirement. Hydrogen was generated during high acid stoichiometry SRAT testing up to 16% of the DWPF limit. SME hydrogen generation reached 49% of the DWPF SME limit for the high acid run. Simulant processing was successful using the previously established antifoam addition strategy. Foaming during formic acid addition was not observed in any of the runs. Nitrite was destroyed in all runs and no N2O was detected during SME processing. Mercury behavior was consistent with that seen in previous SRAT runs. Mercury was stripped below the DWPF limit of 0.8 wt% for all runs. Rheology yield stress fell within or below the design basis of 1-5 Pa. The low acid Tank 40 run (106% acid stoichiometry) had the highest yield stress at 3.78 Pa.
The structure of a market containing boundedly rational firms
NASA Astrophysics Data System (ADS)
Ibrahim, Adyda; Zura, Nerda; Saaban, Azizan
2017-11-01
The structure of a market is determined by the number of active firms in it. Over time, this number is affected by the exit of existing firms, called incumbents, and the entry of new firms, called entrants. In this paper, we considered a market governed by the Cobb-Douglas utility function such that the demand function is isoelastic. Each firm is assumed to produce a single homogenous product under a constant unit cost. Furthermore, firms are assumed to be boundedly rational in adjusting their outputs at each period. A firm is considered to exit the market if its output is negative. In this paper, the market is assumed to have zero barrier-to-entry. Therefore, an exiting firm can re-enter the market if its output becomes positive again, and new firms can enter the market easily. Based on these assumptions and rules, a mathematical model was developed and numerical simulations were run using Matlab. By setting certain values for the parameters in the model, initial numerical simulations showed that in the long run, the number of firms that manage to survive the market varies between zero and 30. This initial result is consistent with the idea that a zero barrier-to-entry may produce a perfectly competitive market.
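The paper's exact model is not reproduced in the abstract. Purely as an illustration of boundedly rational (gradient) output adjustment under isoelastic demand p = 1/Q with constant unit costs, the toy dynamics below iterate each firm's output in the direction of its marginal profit, treating a non-positive output as exit (re-entry is omitted for brevity). All parameter values are assumptions, not the paper's.

```python
# Toy boundedly rational (gradient) output adjustment under isoelastic demand
# p = 1/Q with constant unit costs; non-positive output is treated as exit and
# re-entry is omitted for brevity.  Parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, T, alpha = 30, 5000, 0.1                       # firms, periods, adjustment speed
c = rng.uniform(0.1, 0.6, n)                      # constant unit costs
q = rng.uniform(0.05, 0.2, n)                     # initial outputs

for _ in range(T):
    q_active = q.clip(min=0.0)                    # exited firms supply nothing
    Q = q_active.sum()
    if Q <= 0.0:
        break
    # marginal profit of firm i for p = 1/Q:  d/dq_i [q_i/Q - c_i q_i] = (Q - q_i)/Q**2 - c_i
    grad = (Q - q_active) / Q**2 - c
    q = q_active + alpha * q_active * grad        # move output toward higher profit
print("firms surviving in the long run:", int((q > 0).sum()))
```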
NASA Astrophysics Data System (ADS)
Wang, Aiming; Cheng, Xiaohan; Meng, Guoying; Xia, Yun; Wo, Lei; Wang, Ziyi
2017-03-01
Identification of rotor unbalance is critical for normal operation of rotating machinery. The single-disc and single-span rotor, as the most fundamental rotor-bearing system, has attracted research attention over a long time. In this paper, the continuous single-disc and single-span rotor is modeled as a homogeneous and elastic Euler-Bernoulli beam, and the forces applied by bearings and disc on the shaft are considered as point forces. A fourth-order non-homogeneous partial differential equation set with homogeneous boundary condition is solved for analytical solution, which expresses the unbalance response as a function of position, rotor unbalance and the stiffness and damping coefficients of bearings. Based on this analytical method, a novel Measurement Point Vector Method (MPVM) is proposed to identify rotor unbalance while operating. Only a measured unbalance response registered for four selected cross-sections of the rotor-shaft under steady-state operating conditions is needed when using the method. Numerical simulation shows that the detection error of the proposed method is very small when measurement error is negligible. The proposed method provides an efficient way for rotor balancing without test runs and external excitations.
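For orientation, the governing equation described above (a homogeneous Euler-Bernoulli shaft with the bearing and disc loads applied as point forces) can be written as below; the notation is generic and not necessarily the authors'.

```latex
% Euler-Bernoulli shaft with bearing reactions at x_1, x_2 and the disc
% (including the rotating unbalance force) at x_d; generic notation.
\[
  EI\,\frac{\partial^4 y(x,t)}{\partial x^4}
  + \rho A\,\frac{\partial^2 y(x,t)}{\partial t^2}
  = \sum_{j=1}^{2} F_{b,j}(t)\,\delta(x - x_j)
  + \Bigl[F_d(t) + m_u e\,\Omega^2 \cos(\Omega t + \varphi)\Bigr]\,\delta(x - x_d),
\]
where $EI$ is the bending stiffness, $\rho A$ the mass per unit length,
$F_{b,j}$ the bearing reaction forces (functions of the bearing stiffness and
damping coefficients), $F_d$ the disc load, and $m_u e$ the unbalance
(mass--eccentricity product) rotating at speed $\Omega$ with phase $\varphi$.
```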
Development and validation of the European Cluster Assimilation Techniques run libraries
NASA Astrophysics Data System (ADS)
Facskó, G.; Gordeev, E.; Palmroth, M.; Honkonen, I.; Janhunen, P.; Sergeev, V.; Kauristie, K.; Milan, S.
2012-04-01
The European Commission funded the European Cluster Assimilation Techniques (ECLAT) project as a collaboration of five leading European universities and research institutes. A main contribution of the Finnish Meteorological Institute (FMI) is to provide a wide range of global MHD runs with the Grand Unified Magnetosphere Ionosphere Coupling simulation (GUMICS). The runs are divided into two categories: synthetic runs investigating the range of solar wind drivers that can influence magnetospheric dynamics, and dynamic runs using measured solar wind data as input. Here we consider the first set of runs, with synthetic solar wind input. The solar wind density, velocity and the interplanetary magnetic field had different magnitudes and orientations; furthermore, two F10.7 flux values were selected for solar radiation minimum and maximum. The solar wind parameter values were held constant such that a stable, steady solution was achieved. All configurations were run several times with three different (-15°, 0°, +15°) tilt angles in the GSE X-Z plane. The results of the 192 simulations, forming the so-called "synthetic run library", were visualized and uploaded to the FMI homepage after validation. Here we present details of these runs.
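A run library of this kind is simply the Cartesian product of the chosen driver levels. The sketch below enumerates such a matrix; the specific levels are assumptions picked only so that the product reproduces the 192 runs mentioned above (32 solar wind/IMF combinations × 2 F10.7 values × 3 tilt angles), not the actual GUMICS inputs.

```python
# Enumerating a synthetic run library as the Cartesian product of driver levels.
# The levels below are assumptions chosen only so that the product reproduces
# the 192 runs mentioned in the abstract; they are not the actual GUMICS values.
from itertools import product

sw_density   = [2.0, 10.0]                   # cm^-3
sw_speed     = [400.0, 700.0]                # km/s
imf_strength = [2.0, 10.0]                   # nT
imf_clock    = [0.0, 90.0, 180.0, 270.0]     # deg, IMF orientation
f107         = [70.0, 200.0]                 # solar minimum / maximum
tilt_deg     = [-15.0, 0.0, 15.0]            # dipole tilt in the GSE X-Z plane

run_library = [
    {"n_sw": n, "v_sw": v, "b_imf": b, "clock": a, "f107": f, "tilt": t}
    for n, v, b, a, f, t in product(sw_density, sw_speed, imf_strength,
                                    imf_clock, f107, tilt_deg)
]
print(len(run_library), "runs")              # -> 192
```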
DOT National Transportation Integrated Search
2012-09-01
The Center for Health and Safety Culture conducted research for the Idaho Transportation Department to develop media messages and tools to reduce fatalities and serious injuries related to Run-Off-the-Road, single-vehicle crashes in Idaho using the P...
Comprehensive model of a hermetic reciprocating compressor
NASA Astrophysics Data System (ADS)
Yang, B.; Ziviani, D.; Groll, E. A.
2017-08-01
A comprehensive simulation model is presented to predict the performance of a hermetic reciprocating compressor and to reveal the underlying mechanisms when the compressor is running. The presented model is composed of sub-models simulating the in-cylinder compression process, piston ring/journal bearing frictional power loss, single phase induction motor and the overall compressor energy balance among different compressor components. The valve model, leakage through piston ring model and in-cylinder heat transfer model are also incorporated into the in-cylinder compression process model. A numerical algorithm solving the model is introduced. The predicted results of the compressor mass flow rate and input power consumption are compared to the published compressor map values. Future work will focus on detailed experimental validation of the model and parametric studies investigating the effects of structural parameters, including the stroke-to-bore ratio, on the compressor performance.
NOTE: Implementation of angular response function modeling in SPECT simulations with GATE
NASA Astrophysics Data System (ADS)
Descourt, P.; Carlier, T.; Du, Y.; Song, X.; Buvat, I.; Frey, E. C.; Bardies, M.; Tsui, B. M. W.; Visvikis, D.
2010-05-01
Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.
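Conceptually, an ARF replaces explicit tracking through the collimator septa with a table lookup: when a photon reaches the collimator plane, a precomputed detection probability indexed by incidence angle (and energy) is applied, and the photon's weight is scored at the crossing point. The sketch below illustrates that idea only; the table contents, shape and ranges are placeholder assumptions, and this is not GATE's implementation.

```python
# Conceptual ARF detector step: replace collimator tracking with a lookup of a
# precomputed detection probability indexed by incidence angle and energy, and
# score the photon weight where it crosses the detector plane.  The table below
# is a uniform placeholder; a real ARF table would come from full simulations.
import numpy as np

n_theta, n_energy = 256, 64
arf_table = np.full((n_theta, n_energy), 1e-4)       # placeholder response values
theta_max = np.deg2rad(5.0)                          # tabulated angular acceptance (assumed)
e_min, e_max = 50.0, 400.0                           # tabulated energy range, keV (assumed)

def arf_score(position, direction, energy_kev, weight, projection, pixel_mm):
    """Score one photon arriving at the collimator plane (z = 0, normal = +z)."""
    theta = np.arccos(min(abs(direction[2]), 1.0))   # incidence angle w.r.t. the normal
    if theta >= theta_max or not (e_min <= energy_kev <= e_max):
        return                                       # outside the tabulated response
    i = int(theta / theta_max * (n_theta - 1))
    j = int((energy_kev - e_min) / (e_max - e_min) * (n_energy - 1))
    p_detect = arf_table[i, j]
    ix = int(position[0] / pixel_mm) + projection.shape[0] // 2
    iy = int(position[1] / pixel_mm) + projection.shape[1] // 2
    if 0 <= ix < projection.shape[0] and 0 <= iy < projection.shape[1]:
        projection[ix, iy] += weight * p_detect

projection = np.zeros((128, 128))
arf_score(np.array([1.0, -2.0, 0.0]), np.array([0.03, 0.0, -0.999]),
          energy_kev=140.0, weight=1.0, projection=projection, pixel_mm=4.0)
print("scored weight:", projection.sum())
```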
NASA Astrophysics Data System (ADS)
Smith, L. A.
2001-05-01
Many sources of uncertainty come into play when modelling geophysical systems by simulation. These include uncertainty in the initial condition, uncertainty in model parameter values (and the parameterisations themselves) and error in the model class from which the model(s) was selected. In recent decades, climate simulations have focused resources on reducing the last of these by including more and more details in the model. One can question when this ``kitchen sink'' approach should be complemented with realistic estimates of the impact from the other uncertainties noted above. Indeed, while the impact of model error can never be fully quantified, as all simulation experiments are interpreted under the rosy scenario which assumes a priori that nothing crucial is missing, the impact of the other uncertainties can be quantified at only the cost of computational power; as illustrated, for example, in ensemble climate modelling experiments like Casino-21. This talk illustrates the interplay of uncertainties in the context of a trivial nonlinear system and an ensemble of models. The simple systems considered in this small scale experiment, Keno-21, are meant to illustrate issues of experimental design; they are not intended to provide true climate simulations. The use of simulation models with huge numbers of parameters given limited data is usually justified by an appeal to the Laws of Physics: the number of free degrees-of-freedom is much smaller than the number of variables; variables, parameterisations, and parameter values are constrained by ``the physics'' and the resulting simulation yields a realistic reproduction of the entire planet's climate system to within reasonable bounds. But what bounds, exactly? In a single model run under a transient forcing scenario, there are good statistical grounds for considering only large space and time averages; most of these reasons vanish if an ensemble of runs is made. Ensemble runs can quantify the (in)ability of a model to provide insight on regional changes: if a model cannot capture regional variations in the data on which the model was constructed (that is, in-sample), claims that out-of-sample predictions of those same regional averages should be used in policy making are vacuous. While motivated by climate modelling and illustrated on a trivial nonlinear system, these issues have implications across the range of geophysical modelling. These include implications for appropriate resource allocation, for the making of science policy, and for the public understanding of science and the role of uncertainty in decision making.
Spirou, Spiridon V; Papadimitroulas, Panagiotis; Liakou, Paraskevi; Georgoulias, Panagiotis; Loudos, George
2015-09-01
To present and evaluate a new methodology to investigate the effect of attenuation correction (AC) in single-photon emission computed tomography (SPECT) using textural features analysis, Monte Carlo techniques, and a computational anthropomorphic model. The GATE Monte Carlo toolkit was used to simulate SPECT experiments using the XCAT computational anthropomorphic model, filled with a realistic biodistribution of (99m)Tc-N-DBODC. The simulated gamma camera was the Siemens ECAM Dual-Head, equipped with a parallel hole lead collimator, with an image resolution of 3.54 × 3.54 mm(2). Thirty-six equispaced camera positions, spanning a full 360° arc, were simulated. Projections were calculated after applying a ± 20% energy window or after eliminating all scattered photons. The activity of the radioisotope was reconstructed using the MLEM algorithm. Photon attenuation was accounted for by calculating the radiological pathlength in a perpendicular line from the center of each voxel to the gamma camera. Twenty-two textural features were calculated on each slice, with and without AC, using 16 and 64 gray levels. A mask was used to identify only those pixels that belonged to each organ. Twelve of the 22 features showed almost no dependence on AC, irrespective of the organ involved. In both the heart and the liver, the mean and SD were the features most affected by AC. In the liver, six features were affected by AC only on some slices. Depending on the slice, skewness decreased by 22-34% with AC, kurtosis by 35-50%, long-run emphasis mean by 71-91%, and long-run emphasis range by 62-95%. In contrast, gray-level non-uniformity mean increased by 78-218% compared with the value without AC and run percentage mean by 51-159%. These results were not affected by the number of gray levels (16 vs. 64) or the data used for reconstruction: with the energy window or without scattered photons. The mean and SD were the main features affected by AC. In the heart, no other feature was affected. In the liver, other features were affected, but the effect was slice dependent. The number of gray levels did not affect the results.
Breadth-First Search-Based Single-Phase Algorithms for Bridge Detection in Wireless Sensor Networks
Akram, Vahid Khalilpour; Dagdeviren, Orhan
2013-01-01
Wireless sensor networks (WSNs) are promising technologies for exploring harsh environments, such as oceans, wild forests, volcanic regions and outer space. Since sensor nodes may have limited transmission range, application packets may be transmitted by multi-hop communication. Thus, connectivity is a very important issue. A bridge is a critical edge whose removal breaks the connectivity of the network. Hence, it is crucial to detect bridges and take preventive measures. Since sensor nodes are battery-powered, services running on nodes should consume little energy. In this paper, we propose energy-efficient and distributed bridge detection algorithms for WSNs. Our algorithms run in a single phase and are integrated with the Breadth-First Search (BFS) algorithm, which is a popular routing algorithm. Our first algorithm is an extended version of Milic's algorithm, which is designed to reduce the message length. Our second algorithm is novel and uses ancestral knowledge to detect bridges. We explain the operation of the algorithms, provide proofs of correctness, and analyze their message, time, space and computational complexities. To evaluate practical importance, we provide testbed experiments and extensive simulations. We show that our proposed algorithms provide lower resource consumption, with energy savings of up to 5.5-fold. PMID:23845930
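For reference, the quantity these distributed algorithms compute in-network is the classical notion of a bridge. The sketch below is the standard centralized DFS-based (lowpoint) bridge-finding routine, shown only to make the definition concrete; it is not the paper's single-phase, BFS-integrated algorithm.

```python
# Classical centralized bridge detection (lowpoint method, DFS-based), shown as
# a reference for what the distributed BFS-based algorithms compute in-network.
def find_bridges(adj):
    """adj: dict node -> iterable of neighbour nodes (simple undirected graph)."""
    disc, low, bridges = {}, {}, []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                       # back edge
                low[u] = min(low[u], disc[v])
            else:                               # tree edge
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:            # v's subtree cannot reach above u
                    bridges.append((u, v))

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return bridges

# Example: a 4-cycle with a pendant node attached by a single edge (the bridge).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0, 4], 4: [3]}
print(find_bridges(adj))   # -> [(3, 4)]
```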
A Monte-Carlo maplet for the study of the optical properties of biological tissues
NASA Astrophysics Data System (ADS)
Yip, Man Ho; Carvalho, M. J.
2007-12-01
Monte-Carlo simulations are commonly used to study complex physical processes in various fields of physics. In this paper we present a Maple program intended for Monte-Carlo simulations of photon transport in biological tissues. The program has been designed so that the input data and output display can be handled by a maplet (an easy and user-friendly graphical interface), named the MonteCarloMaplet. A thorough explanation of the programming steps and how to use the maplet is given. Results obtained with the Maple program are compared with corresponding results available in the literature. Program summary Program title: MonteCarloMaplet Catalogue identifier: ADZU_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3251 No. of bytes in distributed program, including test data, etc.: 296 465 Distribution format: tar.gz Programming language: Maple 10 Computer: Acer Aspire 5610 (any running Maple 10) Operating system: Windows XP professional (any running Maple 10) Classification: 3.1, 5 Nature of problem: Simulate the transport of radiation in biological tissues. Solution method: The Maple program follows the steps of the C program of L. Wang et al. [L. Wang, S.L. Jacques, L. Zheng, Computer Methods and Programs in Biomedicine 47 (1995) 131-146]; the Maple library routine for random number generation is used [Maple 10 User Manual, © Maplesoft, a division of Waterloo Maple Inc., 2005]. Restrictions: Running time increases rapidly with the number of photons used in the simulation. Unusual features: A maplet (graphical user interface) has been programmed for data input and output. Note that the Monte-Carlo simulation was programmed with Maple 10. If attempting to run the simulation with an earlier version of Maple, appropriate modifications (regarding typesetting fonts) are required, and once effected the worksheet runs without problem. However some of the windows of the maplet may still appear distorted. Running time: Depends essentially on the number of photons used in the simulation. Elapsed times for particular runs are reported in the main text.
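The photon random walk that the maplet follows (after Wang et al.) alternates free-path sampling, weighted absorption and Henyey-Greenstein scattering. A condensed Python rendering of one photon history in a single semi-infinite homogeneous medium is sketched below; boundaries, Russian roulette and fluence scoring are omitted, and this is not the Maple code.

```python
# Condensed sketch of one photon history in an MCML-style random walk
# (free-path sampling, weighted absorption, Henyey-Greenstein scattering) in a
# single semi-infinite homogeneous medium; boundaries, Russian roulette and
# fluence scoring are omitted.
import numpy as np

def one_photon(mu_a=0.1, mu_s=10.0, g=0.9, rng=np.random.default_rng(0)):
    mu_t = mu_a + mu_s
    pos, u, w = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
    abs_depth = 0.0
    while w > 1e-4:
        s = -np.log(rng.random()) / mu_t                    # free path length
        pos = pos + s * u
        dw = w * mu_a / mu_t                                # weight absorbed at this site
        w -= dw
        abs_depth += dw * pos[2]
        # Henyey-Greenstein sampling of the scattering angle
        if g == 0.0:
            cos_t = 2.0 * rng.random() - 1.0
        else:
            tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
            cos_t = (1.0 + g * g - tmp * tmp) / (2.0 * g)
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        phi = 2.0 * np.pi * rng.random()
        ux, uy, uz = u
        if abs(uz) > 0.99999:                               # nearly vertical direction
            u = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), np.sign(uz) * cos_t])
        else:                                               # standard MCML direction update
            den = np.sqrt(1.0 - uz * uz)
            u = np.array([
                sin_t * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / den + ux * cos_t,
                sin_t * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / den + uy * cos_t,
                -sin_t * np.cos(phi) * den + uz * cos_t,
            ])
    return abs_depth                                        # absorption-weighted depth tally

print(f"absorption-weighted depth for one photon: {one_photon():.3f}")
```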
NASA Astrophysics Data System (ADS)
von Trentini, F.; Schmid, F. J.; Braun, M.; Brisette, F.; Frigon, A.; Leduc, M.; Martel, J. L.; Willkofer, F.; Wood, R. R.; Ludwig, R.
2017-12-01
Meteorological extreme events seem to be becoming more frequent at present and in the future, and a separation of natural climate variability from a clear climate change effect on these extreme events is gaining more and more interest. Since there is only one realisation of historical events, observation data cannot provide the very long time series needed for a robust statistical analysis of natural variability. A new single model large ensemble (SMLE), developed for the ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec), is intended to overcome this lack of data by downscaling 50 members of the CanESM2 (RCP 8.5) with the Canadian CRCM5 regional model (using the EURO-CORDEX grid specifications) for time series of 1950-2099 each, resulting in 7500 years of simulated climate. This allows for a better probabilistic analysis of rare and extreme events than any preceding dataset. Besides seasonal sums, several extreme indicators like R95pTOT, RX5day and others are calculated for the ClimEx ensemble and several EURO-CORDEX runs. This enables us to investigate the interaction between natural variability (as it appears in the CanESM2-CRCM5 members) and the climate change signal of those members for past, present and future conditions. Adding the EURO-CORDEX results to this, we can also assess the role of internal model variability (or natural variability) in climate change simulations. A first comparison shows similar magnitudes of variability of climate change signals between the ClimEx large ensemble and the CORDEX runs for some indicators, while for most indicators the spread of the SMLE is smaller than the spread of the different CORDEX models.
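For concreteness, two of the indicators named above have simple definitions on a daily precipitation series (ETCCDI-style). The sketch below computes them on synthetic data; it is not the ClimEx post-processing code.

```python
# Sketch of two ETCCDI-style extreme indicators used in the comparison
# (standard definitions; not the ClimEx post-processing code).
import numpy as np

def rx5day(pr_daily):
    """Maximum consecutive 5-day precipitation total (mm)."""
    window = np.convolve(pr_daily, np.ones(5), mode="valid")
    return window.max()

def r95ptot(pr_daily, pr_base):
    """Precipitation falling on very wet days (above the 95th percentile of
    wet days, >= 1 mm, in a base period), summed over the evaluation period (mm)."""
    wet_base = pr_base[pr_base >= 1.0]
    q95 = np.percentile(wet_base, 95)
    return pr_daily[pr_daily > q95].sum()

rng = np.random.default_rng(0)
pr = rng.gamma(shape=0.4, scale=6.0, size=365)   # synthetic daily precipitation (mm)
print(round(rx5day(pr), 1), "mm;", round(r95ptot(pr, pr), 1), "mm")
```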
NASA Astrophysics Data System (ADS)
Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Christensen, Hannah M.; Juricke, Stephan; Subramanian, Aneesh; Watson, Peter A. G.; Weisheimer, Antje; Palmer, Tim N.
2017-03-01
The Climate SPHINX (Stochastic Physics HIgh resolutioN eXperiments) project is a comprehensive set of ensemble simulations aimed at evaluating the sensitivity of present and future climate to model resolution and stochastic parameterisation. The EC-Earth Earth system model is used to explore the impact of stochastic physics in a large ensemble of 30-year climate integrations at five different atmospheric horizontal resolutions (from 125 up to 16 km). The project includes more than 120 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), together with coupled transient runs (1850-2100). A total of 20.4 million core hours have been used, made available from a single year grant from PRACE (the Partnership for Advanced Computing in Europe), and close to 1.5 PB of output data have been produced on SuperMUC IBM Petascale System at the Leibniz Supercomputing Centre (LRZ) in Garching, Germany. About 140 TB of post-processed data are stored on the CINECA supercomputing centre archives and are freely accessible to the community thanks to an EUDAT data pilot project. This paper presents the technical and scientific set-up of the experiments, including the details on the forcing used for the simulations performed, defining the SPHINX v1.0 protocol. In addition, an overview of preliminary results is given. An improvement in the simulation of Euro-Atlantic atmospheric blocking following resolution increase is observed. It is also shown that including stochastic parameterisation in the low-resolution runs helps to improve some aspects of the tropical climate - specifically the Madden-Julian Oscillation and the tropical rainfall variability. These findings show the importance of representing the impact of small-scale processes on the large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).
Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang; ...
2018-01-12
We present an implementation of EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software package designed to perform classical molecular dynamics simulations using GPU acceleration. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating point operations efficiently implemented on GPUs can produce sufficient accuracy when compared against double-precision codes, as demonstrated in test simulations of the glass-transition temperature of Cu64.5Zr35.5 and the pair correlation function of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with another popular software package, LAMMPS, running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. The source code can be accessed through the HOOMD-blue web page for free by any interested user.
NASA Astrophysics Data System (ADS)
Constantine, P. G.; Emory, M.; Larsson, J.; Iaccarino, G.
2015-12-01
We present a computational analysis of the reactive flow in a hypersonic scramjet engine with focus on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace identifies one-dimensional structure in the map from simulation inputs to quantity of interest that allows us to reparameterize the operating conditions; instead of seven physical parameters, we can use a single derived active variable. This dimension reduction enables otherwise infeasible uncertainty quantification, considering the simulation cost of roughly 9500 CPU-hours per run. For two values of the fuel injection rate, we use a total of 68 simulations to (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) estimate upper and lower bounds on the quantity of interest, (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest, and (iv) estimate a cumulative distribution function for the quantity of interest.
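The active-subspace construction referred to above is, in its standard form, an eigendecomposition of the average outer product of sampled gradients; projecting the normalized inputs onto the leading eigenvector yields the single active variable. The sketch below uses a toy response with one-dimensional ridge structure in place of the scramjet model; the dimensions mirror those quoted above, but everything else is an assumption.

```python
# Standard active-subspace estimate from sampled gradients: build the matrix
# C = E[grad f grad f^T], eigendecompose it, and use the leading eigenvector as
# the single "active variable".  A toy ridge function stands in for the scramjet model.
import numpy as np

rng = np.random.default_rng(0)
m, n = 7, 68                                  # 7 operating-condition inputs, 68 samples
a = rng.normal(size=m)
f = lambda x: np.sin(x @ a)                   # toy response with 1-d ridge structure
grad_f = lambda x: np.cos(x @ a) * a          # its analytic gradient

X = rng.uniform(-1, 1, size=(n, m))           # normalized operating conditions
G = np.array([grad_f(x) for x in X])          # sampled gradients, shape (n, m)
C = G.T @ G / n                               # Monte Carlo estimate of the outer-product matrix
eigval, eigvec = np.linalg.eigh(C)
w1 = eigvec[:, -1]                            # dominant eigenvector
active_variable = X @ w1                      # 1-d reparameterization of the inputs
print("eigenvalue gap:", eigval[-1] / max(eigval[-2], 1e-16))
```

A large gap between the first and second eigenvalues is what justifies collapsing the seven physical inputs into one derived variable before the uncertainty analysis.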
NASA Technical Reports Server (NTRS)
Lindsey, Tony; Pecheur, Charles
2004-01-01
Livingstone PathFinder (LPF) is a simulation-based computer program for verifying autonomous diagnostic software. LPF is designed especially to be applied to NASA's Livingstone computer program, which implements a qualitative-model-based algorithm that diagnoses faults in a complex automated system (e.g., an exploratory robot, spacecraft, or aircraft). LPF forms a software test bed containing a Livingstone diagnosis engine, embedded in a simulated operating environment consisting of a simulator of the system to be diagnosed by Livingstone and a driver program that issues commands and faults according to a nondeterministic scenario provided by the user. LPF runs the test bed through all executions allowed by the scenario, checking for various selectable error conditions after each step. All components of the test bed are instrumented, so that execution can be single-stepped both backward and forward. The architecture of LPF is modular and includes generic interfaces to facilitate substitution of alternative versions of its different parts. Altogether, LPF provides a flexible, extensible framework for simulation-based analysis of diagnostic software; these characteristics also render it amenable to application to diagnostic programs other than Livingstone.
TOUGH2_MP: A parallel version of TOUGH2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Wu, Yu-Shu; Ding, Chris
2003-04-09
TOUGH2_MP is a massively parallel version of TOUGH2. It was developed for running on distributed-memory parallel computers to simulate large problems that may not be solved by the standard, single-CPU TOUGH2 code. The new code implements an efficient massively parallel scheme, while preserving the full capability and flexibility of the original TOUGH2 code. The new software uses the METIS software package for grid partitioning and the AZTEC software package for linear-equation solving. The standard Message Passing Interface (MPI) is adopted for communication among processors. Numerical performance of the current version of the code has been tested on CRAY-T3E and IBM RS/6000 SP platforms. In addition, the parallel code has been successfully applied to real field problems of multi-million-cell simulations for three-dimensional multiphase and multicomponent fluid and heat flow, as well as solute transport. In this paper, we review the development of TOUGH2_MP, and discuss its basic features, modules, and their applications.
Programs for Testing Processor-in-Memory Computing Systems
NASA Technical Reports Server (NTRS)
Katz, Daniel S.
2006-01-01
The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.
2008-01-01
The structural analyses described in the present report were performed in support of the NASA Engineering and Safety Center (NESC) Critical Initial Flaw Size (CIFS) assessment for the ARES I-X Upper Stage Simulator (USS) common shell segment. The structural analysis effort for the NESC assessment had three thrusts: shell buckling analyses; detailed stress analyses of the single-bolt joint test; and stress analyses of two-segment, 10-degree-wedge models for the peak axial tensile running load. Elasto-plastic, large-deformation simulations were performed. Stress analysis results indicated that the stress levels were well below the material yield stress for the bounding axial tensile design load. This report also summarizes the analyses and results from parametric studies on modeling the shell-to-gusset weld, flange-surface mismatch, bolt preload, and washer-bearing-surface modeling. These analysis models were used to generate the stress levels specified for the fatigue crack growth assessment using the design load with a factor of safety.
Scientific Benefits of Space Science Models Archiving at Community Coordinated Modeling Center
NASA Technical Reports Server (NTRS)
Kuznetsova, Maria M.; Berrios, David; Chulaki, Anna; Hesse, Michael; MacNeice, Peter J.; Maddox, Marlo M.; Pulkkinen, Antti; Rastaetter, Lutz; Taktakishvili, Aleksandre
2009-01-01
The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. CCMC provides a web-based Run-on-Request system, by which an interested scientist can request simulations for a broad range of space science problems. To allow the models to be driven by data relevant to particular events, CCMC developed a tool that automatically downloads data from data archives and transforms them into the required formats. CCMC also provides a tailored web-based visualization interface for the model output, as well as the capability to download the simulation output in a portable format. CCMC offers a variety of visualization and output analysis tools to aid scientists in the interpretation of simulation results. In the eight years since the Run-on-Request system became available, the CCMC has archived the results of almost 3000 runs covering significant space weather events and time intervals of interest identified by the community. The simulation results archived at CCMC also include a library of general-purpose runs with modeled conditions that are used for education and research. Archiving the results of simulations performed in support of several Modeling Challenges helps to evaluate the progress in space weather modeling over time. We will highlight the scientific benefits of the CCMC space science model archive and discuss plans for further development of advanced methods to interact with simulation results.
Frequency domain phase noise analysis of dual injection-locked optoelectronic oscillators.
Jahanbakht, Sajad
2016-10-01
Dual injection-locked optoelectronic oscillators (DIL-OEOs) have been introduced as a means to achieve very low-noise microwave oscillations while avoiding the large spurious peaks that occur in the phase noise of the conventional single-loop OEOs. In these systems, two OEOs are inter-injection locked to each other. The OEO with the longer optical fiber delay line is called the master OEO, and the other is called the slave OEO. Here, a frequency domain approach for simulating the phase noise spectrum of each of the OEOs in a DIL-OEO system and based on the conversion matrix approach is presented. The validity of the new approach is verified by comparing its results with previously published data in the literature. In the new approach, first, in each of the master or slave OEOs, the power spectral densities (PSDs) of two white and 1/f noise sources are optimized such that the resulting simulated phase noise of any of the master or slave OEOs in the free-running state matches the measured phase noise of that OEO. After that, the proposed approach is able to simulate the phase noise PSD of both OEOs at the injection-locked state. Because of the short run-time requirements, especially compared to previously proposed time domain approaches, the new approach is suitable for optimizing the power injection ratios (PIRs), and potentially other circuit parameters, in order to achieve good performance regarding the phase noise in each of the OEOs. Through various numerical simulations, the optimum PIRs for achieving good phase noise performance are presented and discussed; they are in agreement with the previously published results. This further verifies the applicability of the new approach. Moreover, some other interesting results regarding the spur levels are also presented.
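The calibration step described above, in which the PSDs of a white and a flicker (1/f) noise source are tuned so that the simulated free-running phase noise matches measurement, can be illustrated by a simple two-term least-squares fit. The sketch below uses synthetic data and assumed units; it is not the conversion-matrix simulation itself.

```python
# Minimal sketch of the noise-source calibration described in the abstract:
# fit the levels of a white + flicker (1/f) model to a measured free-running
# phase-noise curve.  Synthetic data; not the conversion-matrix simulation.
import numpy as np

f = np.logspace(2, 6, 50)                        # offset frequencies, Hz
measured = 1e-11 + 3e-9 / f                      # synthetic "measured" PSD (rad^2/Hz)
measured *= np.exp(0.1 * np.random.default_rng(0).normal(size=f.size))

# Linear least squares for S(f) = h0 + h_m1 / f
A = np.column_stack([np.ones_like(f), 1.0 / f])
h0, h_m1 = np.linalg.lstsq(A, measured, rcond=None)[0]
print(f"white level h0 = {h0:.2e} rad^2/Hz, flicker level h-1 = {h_m1:.2e} rad^2")
```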
Computer Simulation of Great Lakes-St. Lawrence Seaway Icebreaker Requirements.
1980-01-01
(Front-matter list of tables: results of Runs No. 1-3 for the Taconite and Oil Can task commands, and the predicted icebreaker fleet by home port and period, Tables 6.22c-6.24c.)
Climate change impact modelling needs to include cross-sectoral interactions
NASA Astrophysics Data System (ADS)
Harrison, Paula A.; Dunford, Robert W.; Holman, Ian P.; Rounsevell, Mark D. A.
2016-09-01
Climate change impact assessments often apply models of individual sectors such as agriculture, forestry and water use without considering interactions between these sectors. This is likely to lead to misrepresentation of impacts, and consequently to poor decisions about climate adaptation. However, no published research assesses the differences between impacts simulated by single-sector and integrated models. Here we compare 14 indicators derived from a set of impact models run within single-sector and integrated frameworks across a range of climate and socio-economic scenarios in Europe. We show that single-sector studies misrepresent the spatial pattern, direction and magnitude of most impacts because they omit the complex interdependencies within human and environmental systems. The discrepancies are particularly pronounced for indicators such as food production and water exploitation, which are highly influenced by other sectors through changes in demand, land suitability and resource competition. Furthermore, the discrepancies are greater under different socio-economic scenarios than different climate scenarios, and at the sub-regional rather than Europe-wide scale.
Measurement of transient gas flow parameters by diode laser absorption spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolshov, M A; Kuritsyn, Yu A; Liger, V V
2015-04-30
An absorption spectrometer based on diode lasers is developed for measuring two-dimensional maps of temperature and water vapour concentration distributions in the combustion zones of two mixing supersonic flows of fuel and oxidiser in the single-run regime. The method of measuring parameters of hot combustion zones is based on detection of transient spectra of water vapour absorption. The design of the spectrometer considerably reduces the influence of water vapour absorption along the path of the sensing laser beam outside the burning chamber. The optical scheme is developed, capable of matching measurement results in different runs of mixture burning. A new algorithm is suggested for obtaining information about the mixture temperature by constructing the correlation functions of the experimental spectrum with those simulated from databases. A two-dimensional map of temperature distribution in a test chamber is obtained for the first time under the conditions of plasma-induced combustion of the ethylene-air mixture. (laser applications and other topics in quantum electronics)
Autonomous proximity operations using machine vision for trajectory control and pose estimation
NASA Technical Reports Server (NTRS)
Cleghorn, Timothy F.; Sternberg, Stanley R.
1991-01-01
A machine vision algorithm was developed which permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running upon an 80386-based personal computer, using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple that, following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single camera images of the target vehicle, upon which radial transforms were performed. Selected points of the resulting radial signatures are fed through a decision tree, to determine whether the signature matches that of the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated, and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.
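As an illustration of the kind of feature the decision tree consumes, a radial signature can be computed by measuring, from the silhouette centroid, the distance to the outermost foreground pixel along equally spaced radial directions. The sketch below is generic and is not the original implementation.

```python
# Generic radial-signature computation for a binary silhouette: the distance
# from the centroid to the outermost foreground pixel along equally spaced
# radial directions.  Illustrative only; not the original implementation.
import numpy as np

def radial_signature(mask, n_angles=64):
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    r_max = int(np.hypot(*mask.shape))
    signature = np.zeros(n_angles)
    for k, theta in enumerate(np.linspace(0, 2 * np.pi, n_angles, endpoint=False)):
        dy, dx = np.sin(theta), np.cos(theta)
        for r in range(r_max):
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]):
                break
            if mask[y, x]:
                signature[k] = r                 # keep the outermost hit
    return signature / max(signature.max(), 1)   # normalize for scale invariance

# Example: a filled square silhouette gives a four-lobed signature.
img = np.zeros((64, 64), dtype=bool)
img[20:44, 20:44] = True
print(radial_signature(img, 8).round(2))
```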
NASA Technical Reports Server (NTRS)
Meng, J. C. S.; Thomson, J. A. L.
1975-01-01
A data analysis program constructed to assess LDV system performance, to validate the simulation model, and to test various vortex location algorithms is presented. Real or simulated Doppler spectra versus range and elevation are used, and the spatial distributions of various spectral moments or other spectral characteristics are calculated and displayed. Each of the real or simulated scans can be processed by one of three different procedures: simple frequency or wavenumber filtering, matched filtering, and deconvolution filtering. The final output is displayed as contour plots in an x-y coordinate system, as well as in the form of vortex tracks deduced from the maxima of the processed data. A detailed analysis of run number 1023 and run number 2023 is presented to demonstrate the data analysis procedure. Vortex tracks and system range resolutions are compared with theoretical predictions.
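Of the three procedures, matched filtering is the simplest to illustrate: correlate the measured spectrum with a template of the expected signature and take the peak of the correlation. The one-dimensional sketch below uses a synthetic spectrum and an assumed Gaussian template; it is not the original processing code.

```python
# One-dimensional matched-filtering sketch: correlate a noisy spectrum with a
# template of the expected signature and take the peak location.
# Synthetic data; template shape and noise level are assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(512, dtype=float)
template = np.exp(-0.5 * ((x[:32] - 16) / 4.0) ** 2)      # assumed line shape
spectrum = rng.normal(0, 0.5, x.size)                      # noise floor
spectrum[300:332] += template                              # signature buried near bin 316

score = np.correlate(spectrum, template, mode="same")
print("estimated signature location:", int(score.argmax()))   # close to bin 316
```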
History of Satellite Orbit Determination at NSWCDD
2018-01-31
run. Segment 40 did pass editing and its use was optional after Segment 20. Segment 30 needed to be run before Segment 80. Segment 70 was run as...control cards required to run the program. These included a CHARGE card related to usage charges and various REQUEST, ATTACH, and CATALOG cards...each) could be done in a single run after the long-arc solution had converged. These short arcs used the pass matrices from the long-arc run in their
NVIDIA OptiX ray-tracing engine as a new tool for modelling medical imaging systems
NASA Astrophysics Data System (ADS)
Pietrzak, Jakub; Kacperski, Krzysztof; Cieślar, Marek
2015-03-01
The most accurate technique to model the X- and gamma radiation path through a numerically defined object is the Monte Carlo simulation, which follows single photons according to their interaction probabilities. A simplified and much faster approach, which just integrates total interaction probabilities along selected paths, is known as ray tracing. Both techniques are used in medical imaging for simulating real imaging systems and as projectors required in iterative tomographic reconstruction algorithms. These approaches are well suited to massively parallel implementation, e.g. on Graphics Processing Units (GPUs), which can greatly accelerate the computation time at a relatively low cost. In this paper we describe the application of the NVIDIA OptiX ray-tracing engine, popular in professional graphics and rendering applications, as a new powerful tool for X- and gamma ray-tracing in medical imaging. It allows the implementation of a variety of physical interactions of rays with pixel-, mesh- or NURBS-based objects, and recording any required quantities, like path integrals, interaction sites, deposited energies, and others. Using the OptiX engine we have implemented a code for rapid Monte Carlo simulations of Single Photon Emission Computed Tomography (SPECT) imaging, as well as a ray-tracing projector, which can be used in reconstruction algorithms. The engine generates efficient, scalable and optimized GPU code, ready to run on multi-GPU heterogeneous systems. We have compared the results of our simulations with the GATE package. With the OptiX engine the computation time of a Monte Carlo simulation can be reduced from days to minutes.
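The ray-tracing projector amounts to a line integral of attenuation (or activity) values through a voxel volume, with each ray mapped to one GPU thread in the OptiX implementation. The CPU sketch below approximates such a path integral by uniform sampling along the ray; it is not the authors' code and ignores exact voxel-boundary traversal.

```python
# Minimal CPU version of a ray-tracing projector: integrate voxel values along
# a ray by uniform sampling.  Exact voxel traversal (e.g. Siddon) is skipped;
# this is a sketch, not the authors' GPU code.
import numpy as np

def path_integral(volume, origin, direction, step=0.25, voxel_size=1.0):
    """Approximate integral of volume values along a ray (value * length units)."""
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    p = np.asarray(origin, float)
    total = 0.0
    n_max = int(np.linalg.norm(volume.shape) / step) + 1   # enough steps to cross the volume
    for _ in range(n_max):
        idx = np.floor(p / voxel_size).astype(int)
        if np.any(idx < 0) or np.any(idx >= volume.shape):
            break                                          # ray has left the volume
        total += volume[tuple(idx)] * step
        p = p + step * direction
    return total

mu = np.full((64, 64, 64), 0.02)            # uniform attenuation map (1/mm, assumed)
mu[24:40, 24:40, 24:40] = 0.2               # denser insert
print(round(path_integral(mu, origin=(0.5, 32.0, 32.0), direction=(1, 0, 0)), 2))
```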
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2017-08-05
Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data set from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmosphere Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
Design of Flight Control Panel Layout using Graphical User Interface in MATLAB
NASA Astrophysics Data System (ADS)
Wirawan, A.; Indriyanto, T.
2018-04-01
This paper introduces the design of a Flight Control Panel (FCP) layout using the Graphical User Interface tools in MATLAB. The FCP is the interface used to give commands to the simulation and to monitor model variables while the simulation is running. The commands accommodated by the FCP are the altitude command, angle-of-sideslip command, heading command, and a setting command for the turbulence model. The FCP was also designed to monitor flight parameters while the simulation is running.
The Q continuum simulation: Harnessing the power of GPU accelerated supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Frontiere, Nicholas; Sewell, Chris
2015-08-01
Modeling large-scale sky survey observations is a key driver for the continuing development of high-resolution, large-volume, cosmological simulations. We report the first results from the "Q Continuum" cosmological N-body simulation run carried out on the GPU-accelerated supercomputer Titan. The simulation encompasses a volume of (1300 Mpc)^3 and evolves more than half a trillion particles, leading to a particle mass resolution of m_p ≃ 1.5 × 10^8 M_⊙. At this mass resolution, the Q Continuum run is currently the largest cosmology simulation available. It enables the construction of detailed synthetic sky catalogs, encompassing different modeling methodologies, including semi-analytic modeling and sub-halo abundance matching, in a large cosmological volume. Here we describe the simulation and outputs in detail and present first results for a range of cosmological statistics, such as mass power spectra, halo mass functions, and halo mass-concentration relations for different epochs. We also provide details on challenges connected to running a simulation on almost 90% of Titan, one of the fastest supercomputers in the world, including our usage of Titan's GPU accelerators.
NASA Astrophysics Data System (ADS)
Shen, Wenqiang; Tang, Jianping; Wang, Yuan; Wang, Shuyu; Niu, Xiaorui
2017-04-01
In this study, the characteristics of tropical cyclones (TCs) over the East Asia Coordinated Regional Downscaling Experiment domain are examined with the Weather Research and Forecasting (WRF) model. Eight 20-year (1989-2008) simulations are performed using the WRF model, with lateral boundary forcing from the ERA-Interim reanalysis, to test the sensitivity of TC simulation to interior spectral nudging (SN, including nudging time interval and nudging variables) and radiation schemes [Community Atmosphere Model (CAM), Rapid Radiative Transfer Model (RRTM)]. The simulated TCs are compared with observations from the Regional Specialized Meteorological Centers TC best tracks. It is found that all WRF runs can simulate the climatology of key TC features, such as the tracks and the location/frequency of genesis, reasonably well, and reproduce the inter-annual variations and seasonal cycle of TC counts. The SN runs produce enhanced TC activity compared to the runs without SN. The thermodynamic profile suggests that nudging with horizontal wind alone increases the instability of the thermodynamic state in the tropics, which results in excessive TC genesis. The experiments with both wind and temperature nudging reduce the overestimation of TC numbers, and in particular suppress excessive TC intensification by correcting the thermodynamic profile. A weak SN coefficient enhances TC activity significantly even with wind and temperature nudging. The analysis of TC numbers and the large-scale circulation shows that the SN parameters adopted in our experiments do not appear to suppress the formation of TCs. The excessive TC activity in the CAM runs relative to the RRTM runs is also due to enhanced atmospheric instability.
Anhøj, Jacob; Olesen, Anne Vingaard
2014-01-01
A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
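As a concrete reading of the rules, the shift rule looks at the longest run of consecutive points on one side of the median, and the crossings rule at how often the series crosses the median, each compared with limits that depend on the number of useful (non-median) observations. The sketch below uses the commonly cited limits (longest run greater than round(log2(n)) + 3; crossings below the 5th percentile of a binomial(n - 1, 0.5) distribution); treat these exact thresholds as assumptions rather than a restatement of the paper.

```python
# Sketch of the shift and crossings rules for a run chart; the exact limits
# used here are the commonly cited ones and should be treated as assumptions.
import numpy as np
from scipy.stats import binom

def run_chart_signals(y):
    y = np.asarray(y, float)
    median = np.median(y)
    useful = y[y != median]                     # points on the median are ignored
    n = useful.size
    above = useful > median

    # longest run of consecutive points on the same side of the median
    longest, current = 1, 1
    for prev, cur in zip(above[:-1], above[1:]):
        current = current + 1 if cur == prev else 1
        longest = max(longest, current)
    crossings = int(np.sum(above[:-1] != above[1:]))

    shift_signal = longest > round(np.log2(n)) + 3
    crossings_signal = crossings < binom.ppf(0.05, n - 1, 0.5)
    return {"n useful": n, "longest run": longest, "crossings": crossings,
            "shift signal": bool(shift_signal), "crossings signal": bool(crossings_signal)}

rng = np.random.default_rng(2)
process = np.concatenate([rng.normal(0, 1, 12), rng.normal(1.5, 1, 12)])  # shift after point 12
print(run_chart_signals(process))
```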
Vulnerability Model. A Simulation System for Assessing Damage Resulting from Marine Spills
1975-06-01
used and the scenario simulated. The test runs were made on an IBM 360/65 computer. Running times were generally between 15 and 35 CPU seconds...fect further north. A petroleum tank-truck operation was located within 600 feet of a stock pond on which the crude oil had dammed up. At 5 A.M.
2006-06-01
result of changes that run the gamut from space and staff levels to changes in training requirements to the unit composition on a particular...required of them, as well as a simulation tool that can identify the potential impacts on training as a result of changes that run the gamut from space
High-speed GPU-based finite element simulations for NDT
NASA Astrophysics Data System (ADS)
Huthwaite, P.; Shi, F.; Van Pamel, A.; Lowe, M. J. S.
2015-03-01
The finite element method solved with explicit time increments is a general approach which can be applied to many ultrasound problems. It is widely used as a powerful tool within NDE for developing and testing inspection techniques, and can also be used in inversion processes. However, the solution technique is computationally intensive, requiring many calculations to be performed for each simulation, so traditionally speed has been an issue. For maximum speed, an implementation of the method, called Pogo [Huthwaite, J. Comp. Phys. 2014, doi: 10.1016/j.jcp.2013.10.017], has been developed to run on graphics cards, exploiting the highly parallelisable nature of the algorithm. Pogo typically demonstrates speed improvements of 60-90x over commercial CPU alternatives. Pogo is applied to three NDE examples, where the speed improvements are important: guided wave tomography, where a full 3D simulation must be run for each source transducer and every different defect size; scattering from rough cracks, where many simulations need to be run to build up a statistical model of the behaviour; and ultrasound propagation within coarse-grained materials where the mesh must be highly refined and many different cases run.
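The core of an explicit solver of this kind is a central-difference update with a lumped (diagonal) mass matrix, which makes each nodal update independent and hence well suited to GPU threads. The sketch below is a generic illustration under those assumptions, not code from Pogo.

```python
import numpy as np

def explicit_march(m_lumped, K, f, u0, v0, dt, n_steps):
    """Central-difference time marching for M u'' + K u = f (lumped mass).

    m_lumped : 1-D array of diagonal nodal masses
    K        : stiffness matrix (dense here only for brevity)
    f        : constant external force vector
    """
    u_prev = u0 - dt * v0                 # simple start-up step at t = -dt
    u = u0.copy()
    inv_m = 1.0 / m_lumped
    for _ in range(n_steps):
        accel = inv_m * (f - K @ u)       # element-wise because the mass is lumped
        u_next = 2.0 * u - u_prev + dt ** 2 * accel
        u_prev, u = u, u_next
    return u
```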
Kang, Xianbiao; Zhang, Rong-Hua; Gao, Chuan; Zhu, Jieshun
2017-12-07
The El Niño-Southern Oscillation (ENSO) simulated in the Community Earth System Model of the National Center for Atmospheric Research (NCAR CESM) is much stronger than in reality. Here, satellite data are used to derive a statistical relationship between interannual variations in oceanic chlorophyll (CHL) and sea surface temperature (SST), which is then incorporated into the CESM to represent oceanic chlorophyll-induced climate feedback in the tropical Pacific. Numerical runs with and without the feedback (referred to as feedback and non-feedback runs) are performed and compared with each other. The ENSO amplitude simulated in the feedback run is more accurate than that in the non-feedback run; quantitatively, the Niño3 SST index is reduced by 35% when the feedback is included. The underlying processes are analyzed and the results show that interannual CHL anomalies exert a systematic modulating effect on the solar radiation penetrating into the subsurface layers, which induces differential heating in the upper ocean that affects vertical mixing and thus SST. The statistical modeling approach proposed in this work offers an effective and economical way for improving climate simulations.
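The statistical-feedback idea can be illustrated schematically: regress satellite chlorophyll anomalies on SST anomalies, then let the regressed chlorophyll modulate how deeply shortwave radiation penetrates the upper ocean. The regression form, coefficients, and attenuation relation below are placeholders for illustration, not those used in the CESM implementation.

```python
import numpy as np

def chl_sst_slope(sst_anom, chl_anom):
    """Least-squares slope of chlorophyll anomaly against SST anomaly."""
    return np.sum(sst_anom * chl_anom) / np.sum(sst_anom ** 2)

def penetration_depth(chl, h_clear=23.0, k_chl=8.0):
    """Toy relation: clearer (low-chlorophyll) water lets shortwave penetrate deeper."""
    return h_clear / (1.0 + k_chl * np.maximum(chl, 0.0))

# Illustrative anomalies with the negative CHL-SST correlation typical of the tropical Pacific
slope = chl_sst_slope(np.array([-1.0, 0.5, 1.5]), np.array([0.12, -0.05, -0.20]))
chl_during_el_nino = 0.15 + slope * 1.0    # mean chlorophyll plus anomaly for a +1 K SST anomaly
print(slope, penetration_depth(chl_during_el_nino))
```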
Validation of Dissolution Testing with Biorelevant Media: An OrBiTo Study.
Mann, James; Dressman, Jennifer; Rosenblatt, Karin; Ashworth, Lee; Muenster, Uwe; Frank, Kerstin; Hutchins, Paul; Williams, James; Klumpp, Lukas; Wielockx, Kristina; Berben, Philippe; Augustijns, Patrick; Holm, Rene; Hofmann, Michael; Patel, Sanjaykumar; Beato, Stefania; Ojala, Krista; Tomaszewska, Irena; Bruel, Jean-Luc; Butler, James
2017-12-04
Dissolution testing with biorelevant media has become widespread in the pharmaceutical industry as a means of better understanding how drugs and formulations behave in the gastrointestinal tract. Until now, however, there have been few attempts to gauge the reproducibility of results obtained with these methods. The aim of this study was to determine the interlaboratory reproducibility of biorelevant dissolution testing, using the paddle apparatus (USP 2). Thirteen industrial and three academic laboratories participated in this study. All laboratories were provided with standard protocols for running the tests: dissolution in FaSSGF to simulate release in the stomach, dissolution in a single intestinal medium, FaSSIF, to simulate release in the small intestine, and a "transfer" (two-stage) protocol to simulate the concentration profile when conditions are changed from the gastric to the intestinal environment. The test products chosen were commercially available ibuprofen tablets and zafirlukast tablets. The biorelevant dissolution tests showed a high degree of reproducibility among the participating laboratories, even though several different batches of the commercially available medium preparation powder were used. Likewise, results were almost identical between the commercial biorelevant media and those produced in-house. Comparing results to previous ring studies, including those performed with USP calibrator tablets or commercially available pharmaceutical products in a single medium, the results for the biorelevant studies were highly reproducible on an interlaboratory basis. Interlaboratory reproducibility with the two-stage test was also acceptable, although the variability was somewhat greater than with the single medium tests. Biorelevant dissolution testing is highly reproducible among laboratories and can be relied upon for cross-laboratory comparisons.
A numerical method for shock driven multiphase flow with evaporating particles
NASA Astrophysics Data System (ADS)
Dahal, Jeevan; McFarland, Jacob A.
2017-09-01
A numerical method for predicting the interaction of active, phase-changing particles in a shock driven flow is presented in this paper. The Particle-in-Cell (PIC) technique was used to couple particles in a Lagrangian coordinate system with a fluid in an Eulerian coordinate system. The Piecewise Parabolic Method (PPM) hydrodynamics solver was used for solving the conservation equations and was modified with mass, momentum, and energy source terms from the particle phase. The method was implemented in the open source hydrodynamics software FLASH, developed at the University of Chicago. A simple validation of the methods is accomplished by comparing velocity and temperature histories from a single particle simulation with the analytical solution. Furthermore, simple single particle parcel simulations were run at two different sizes to study the effect of particle size on vorticity deposition in a shock-driven multiphase instability. Large particles were found to have lower enstrophy production at early times and higher enstrophy dissipation at late times due to the advection of the particle vorticity source term through the carrier gas. A 2D shock-driven instability of a circular perturbation is studied in simulations and compared to previous experimental data as further validation of the numerical methods. The effect of the particle size distribution and particle evaporation is examined further for this case. The results show that larger particles reduce the vorticity deposition, while particle evaporation increases it. It is also shown that for a distribution of particle sizes the vorticity deposition is decreased compared to the single-particle-size case.
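The single-particle validation described above amounts to checking numerical drag and heat-transfer relaxation against the exponential analytical solution. A minimal sketch follows, with illustrative response times rather than the paper's values.

```python
import numpy as np

u_gas, T_gas = 100.0, 400.0     # post-shock gas velocity [m/s] and temperature [K] (illustrative)
tau_v, tau_T = 1.0e-4, 2.0e-4   # velocity and thermal response times [s] (illustrative)
v, T = 0.0, 300.0               # initial particle velocity and temperature
dt, n = 1.0e-6, 1000

t = np.arange(n + 1) * dt
v_num = np.empty(n + 1); v_num[0] = v
T_num = np.empty(n + 1); T_num[0] = T
for i in range(n):
    v += dt * (u_gas - v) / tau_v            # linear (Stokes-type) drag relaxation
    T += dt * (T_gas - T) / tau_T            # convective heating relaxation
    v_num[i + 1], T_num[i + 1] = v, T

v_exact = u_gas + (v_num[0] - u_gas) * np.exp(-t / tau_v)
T_exact = T_gas + (T_num[0] - T_gas) * np.exp(-t / tau_T)
print(np.abs(v_num - v_exact).max(), np.abs(T_num - T_exact).max())
```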
Enhancing physical performance in elite junior tennis players with a caffeinated energy drink.
Gallo-Salazar, César; Areces, Francisco; Abián-Vicén, Javier; Lara, Beatriz; Salinero, Juan José; Gonzalez-Millán, Cristina; Portillo, Javier; Muñoz, Victor; Juarez, Daniel; Del Coso, Juan
2015-04-01
The aim of this study was to investigate the effectiveness of a caffeinated energy drink to enhance physical performance in elite junior tennis players. In 2 different sessions separated by 1 wk, 14 young (16 ± 1 y) elite-level tennis players ingested 3 mg caffeine per kg body mass in the form of an energy drink or the same drink without caffeine (placebo). After 60 min, participants performed a handgrip-strength test, a maximal-velocity serving test, and an 8 × 15-m sprint test and then played a simulated singles match (best of 3 sets). Instantaneous running speed during the matches was assessed using global positioning (GPS) devices. Furthermore, the matches were videotaped and notated afterward. In comparison with the placebo drink, the ingestion of the caffeinated energy drink increased handgrip force by ~4.2% ± 7.2% (P = .03) in both hands, the running pace at high intensity (46.7 ± 28.5 vs 63.3 ± 27.7 m/h, P = .02), and the number of sprints (12.1 ± 1.7 vs 13.2 ± 1.7, P = .05) during the simulated match. There was a tendency for increased maximal running velocity during the sprint test (22.3 ± 2.0 vs 22.9 ± 2.1 km/h, P = .07) and higher percentage of points won on service with the caffeinated energy drink (49.7% ± 9.8% vs 56.4% ± 10.0%, P = .07) in comparison with the placebo drink. The energy drink did not improve ball velocity during the serving test (42.6 ± 4.8 vs 42.7 ± 5.0 m/s, P = .49). The preexercise ingestion of caffeinated energy drinks was effective to enhance some aspects of physical performance of elite junior tennis players.
flexCloud: Deployment of the FLEXPART Atmospheric Transport Model as a Cloud SaaS Environment
NASA Astrophysics Data System (ADS)
Morton, Don; Arnold, Dèlia
2014-05-01
FLEXPART (FLEXible PARTicle dispersion model) is a Lagrangian transport and dispersion model used by a growing international community. We have used it to simulate and forecast the atmospheric transport of wildfire smoke, volcanic ash and radionuclides. Additionally, FLEXPART may be run in backwards mode to provide information for the determination of emission sources such as nuclear emissions and greenhouse gases. This open source software is distributed in source code form, and has several compiler and library dependencies that users need to address. Although well-documented, getting it compiled, set up, running, and post-processed is often tedious, making it difficult for the inexperienced user. Our interest is in moving scientific modeling and simulation activities from site-specific clusters and supercomputers to a cloud model as a service paradigm. Choosing FLEXPART for our prototyping, our vision is to construct customised IaaS images containing fully-compiled and configured FLEXPART codes, including pre-processing, execution and postprocessing components. In addition, with the inclusion of a small web server in the image, we introduce a web-accessible graphical user interface that drives the system. A further initiative being pursued is the deployment of multiple, simultaneous FLEXPART ensembles in the cloud. A single front-end web interface is used to define the ensemble members, and separate cloud instances are launched, on-demand, to run the individual models and to conglomerate the outputs into a unified display. The outcome of this work is a Software as a Service (SaaS) deployment whereby the details of the underlying modeling systems are hidden, allowing modelers to perform their science activities without the burden of considering implementation details.
Karamanidis, Kiros; Arampatzis, Adamantios
2007-01-01
The goals of this study were to investigate whether the lower muscle-tendon unit (MTU) capacities of older adults affect their ability to recover balance with a single step after a fall, and to examine whether running experience enhances and protects this motor skill in young and old adults. The investigation was conducted on 30 older and 19 younger adults, divided into two subgroups: runners versus non-active. In previous studies we documented that the older adults had lower leg extensor muscle strength and tendon stiffness, while running had no effect on MTU capacities. The current study examined the recovery mechanics of the same individuals after an induced forward fall. Younger adults were better able to recover balance with a single step than older adults (P < 0.001); this ability was associated with a more effective body configuration at touchdown (a more posterior COM position relative to the recovery foot, P < 0.001). MTU capacities classified 88.6% of the subjects into single- or multiple-steppers. Runners showed a superior ability to recover balance with a single step (P < 0.001) compared to non-active subjects due to a more effective mechanical response during the stance phase (greater knee joint flexion, P < 0.05). We concluded that the age-related degeneration of the MTUs significantly diminished the older adults' ability to restore balance with a single step. Running seems to enhance and protect this motor skill. We suggest that runners, due to their running experience, can update the internal representation of the mechanisms responsible for the control of dynamic stability during a forward fall and were thus able to restore balance more often with a single step than the non-active subjects.
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations on a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment by Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on the AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
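The scaling behaviour reported above is conveniently summarized as speedup and parallel efficiency relative to the smallest core count. The wall-clock numbers in the sketch are hypothetical, chosen only to mimic the qualitative behaviour described (a more than 50% reduction from 16 to 64 cores and little gain beyond 64).

```python
# Hypothetical wall-clock hours per simulated year, keyed by core count.
wallclock = {16: 10.0, 32: 6.0, 64: 4.5, 128: 4.4}

base_cores = min(wallclock)
t_base = wallclock[base_cores]
for cores in sorted(wallclock):
    speedup = t_base / wallclock[cores]
    efficiency = speedup / (cores / base_cores)
    print(f"{cores:4d} cores: speedup {speedup:4.2f}, parallel efficiency {efficiency:4.2f}")
```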
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crossno, Patricia J.; Gittinger, Jaxon; Hunt, Warren L.
Slycat™ is a web-based system for performing data analysis and visualization of potentially large quantities of remote, high-dimensional data. Slycat™ specializes in working with ensemble data. An ensemble is a group of related data sets, which typically consists of a set of simulation runs exploring the same problem space. An ensemble can be thought of as a set of samples within a multi-variate domain, where each sample is a vector whose value defines a point in high-dimensional space. To understand and describe the underlying problem being modeled in the simulations, ensemble analysis looks for shared behaviors and common features across the group of runs. Additionally, ensemble analysis tries to quantify differences found in any members that deviate from the rest of the group. The Slycat™ system integrates data management, scalable analysis, and visualization. Results are viewed remotely on a user’s desktop via commodity web clients using a multi-tiered hierarchy of computation and data storage, as shown in Figure 1. Our goal is to operate on data as close to the source as possible, thereby reducing time and storage costs associated with data movement. Consequently, we are working to develop parallel analysis capabilities that operate on High Performance Computing (HPC) platforms, to explore approaches for reducing data size, and to implement strategies for staging computation across the Slycat™ hierarchy. Within Slycat™, data and visual analysis are organized around projects, which are shared by a project team. Project members are explicitly added, each with a designated set of permissions. Although users sign in to access Slycat™, individual accounts are not maintained. Instead, authentication is used to determine project access. Within projects, Slycat™ models capture analysis results and enable data exploration through various visual representations. Although for scientists each simulation run is a model of real-world phenomena given certain conditions, we use the term model to refer to our modeling of the ensemble data, not the physics. Different model types often provide complementary perspectives on data features when analyzing the same data set. Each model visualizes data at several levels of abstraction, allowing the user to range from viewing the ensemble holistically to accessing numeric parameter values for a single run. Bookmarks provide a mechanism for sharing results, enabling interesting model states to be labeled and saved.
NASA Astrophysics Data System (ADS)
Hannon, E.; Boyd, P. W.; Silvoso, M.; Lancelot, C.
The impact of a mesoscale in situ iron-enrichment experiment (SOIREE) on the planktonic ecosystem and biological pump in the Australasian-Pacific sector of the Southern Ocean was investigated through model simulations over a period of 60 d following an initial iron infusion. For this purpose we used a revised version of the biogeochemical SWAMCO model (Lancelot et al., 2000), which describes the cycling of C, N, P, Si, Fe through aggregated chemical and biological components of the planktonic ecosystem in the high nitrate low chlorophyll (HNLC) waters of the Southern Ocean. Model runs were conducted for both the iron-fertilized waters and the surrounding HNLC waters, using in situ meteorological forcing. Validation was performed by comparing model predictions with observations recorded during the 13-d site occupation of SOIREE. Considerable agreement was found for the magnitude and temporal trends in most chemical and biological variables (the microbial food web excepted). Comparison of simulations run for 13 and 60 d showed that the effects of iron fertilization on the biota were incomplete over the 13-d monitoring of the SOIREE bloom. The model results indicate that after the vessel departed the SOIREE site there were further iron-mediated increases in properties such as phytoplankton biomass, production, export production, and uptake of atmospheric CO2, which peaked 20-30 days after the initial iron infusion. Based on model simulations, the increase in net carbon production at the scale of the fertilized patch (assuming an area of 150 km²) was estimated at 9725 t C by day 60. Much of this production accumulated in the upper ocean, so that the predicted downward export of particulate organic carbon (POC) only represented 22% of the accumulated C in the upper ocean. Further model runs that implemented improved parameterization of diatom sedimentation (i.e. including iron-mediated diatom sinking rate, diatom chain-forming and aggregation) suggested that the downward POC flux predicted by the standard run might have been underestimated by a factor of up to 3. Finally, a sensitivity analysis of the biological response to iron-enrichment at locales with different initial oceanographic conditions (such as mixed-layer depth) or using different iron fertilization strategies (single vs. pulsed additions) was conducted. The outcomes of this analysis offer insights into the design and location of future in situ iron-enrichments.
Simulation Framework for Intelligent Transportation Systems
DOT National Transportation Integrated Search
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scaleable simulation of an Intelligent Transportation System. The simulator is designed for running on parellel computers and distributed (networked) computer systems, but ca...
Neural network-based run-to-run controller using exposure and resist thickness adjustment
NASA Astrophysics Data System (ADS)
Geary, Shane; Barry, Ronan
2003-06-01
This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm - the exponentially weighted moving average (EWMA) and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
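For context, the EWMA baseline referred to above can be written as a simple recipe-offset update. The sketch below is the textbook EWMA run-to-run controller, not the paper's neural network; the process gain and EWMA weight are illustrative.

```python
def ewma_run_to_run(measured_cd, target_cd, gain=1.0, lam=0.3):
    """Textbook EWMA run-to-run controller (offline replay of lot data).

    measured_cd : critical dimensions measured lot by lot
    target_cd   : desired critical dimension
    gain        : assumed linear process gain (CD change per unit exposure change)
    lam         : EWMA weight on the newest observation
    """
    offset = 0.0                              # running estimate of the process offset
    exposures = []
    for cd in measured_cd:
        u = (target_cd - offset) / gain       # recipe used for this lot
        exposures.append(u)
        offset = lam * (cd - gain * u) + (1.0 - lam) * offset
    return exposures

print(ewma_run_to_run([50.2, 50.4, 49.9, 50.6], target_cd=50.0))
```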
Evolution of CMS workload management towards multicore job support
NASA Astrophysics Data System (ADS)
Pérez-Calero Yzquierdo, A.; Hernández, J. M.; Khan, F. A.; Letts, J.; Majewski, K.; Rodrigues, A. M.; McCrea, A.; Vaandering, E.
2015-12-01
The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of the LHC Run 2. High-pileup complex-collision events represent a challenge for the traditional sequential programming in terms of memory and processing time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible during 2015 of the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.
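The essence of a multicore pilot with dynamic partitioning is filling a fixed pool of cores with payloads of different widths. The toy sketch below shows a greedy fill; it is not the actual CMS/glideinWMS logic.

```python
def schedule_into_pilot(total_cores, job_queue):
    """Greedy fill of a multicore pilot with single- and multi-core payloads.

    job_queue : list of (job_id, cores_requested); order is the queue priority.
    Returns (scheduled, leftover) lists.  Toy sketch of dynamic partitioning.
    """
    free = total_cores
    scheduled, leftover = [], []
    for job_id, cores in job_queue:
        if cores <= free:
            scheduled.append((job_id, cores))
            free -= cores
        else:
            leftover.append((job_id, cores))
    return scheduled, leftover

# Example: an 8-core pilot running one 4-core reconstruction payload and
# backfilling the remaining slots with single-core analysis payloads.
jobs = [("reco-1", 4), ("ana-1", 1), ("ana-2", 1), ("reco-2", 4), ("ana-3", 1)]
print(schedule_into_pilot(8, jobs))
```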
Evolution of CMS Workload Management Towards Multicore Job Support
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Calero Yzquierdo, A.; Hernández, J. M.; Khan, F. A.
The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of the LHC Run 2. High-pileup complex-collision events represent a challenge for the traditional sequential programming in terms of memory and processing time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible during 2015 of the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasenkamp, Daren; Sim, Alexander; Wehner, Michael
Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure; as long as a single VM is running it can make progress, whereas as soon as one MPI node fails the whole analysis job fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.
Weak simulated extratropical responses to complete tropical deforestation
Findell, K.L.; Knutson, T.R.; Milly, P.C.D.
2006-01-01
The Geophysical Fluid Dynamics Laboratory atmosphere-land model version 2 (AM2/LM2) coupled to a 50-m-thick slab ocean model has been used to investigate remote responses to tropical deforestation. Magnitudes and significance of differences between a control run and a deforested run are assessed through comparisons of 50-yr time series, accounting for autocorrelation and field significance. Complete conversion of the broadleaf evergreen forests of South America, central Africa, and the islands of Oceania to grasslands leads to highly significant local responses. In addition, a broad but mild warming is seen throughout the tropical troposphere (<0.2°C between 700 and 150 mb), significant in northern spring and summer. However, the simulation results show very little statistically significant response beyond the Tropics. There are no significant differences in any hydroclimatic variables (e.g., precipitation, soil moisture, evaporation) in either the northern or the southern extratropics. Small but statistically significant local differences in some geopotential height and wind fields are present in the southeastern Pacific Ocean. Use of the same statistical tests on two 50-yr segments of the control run show that the small but significant extratropical differences between the deforested run and the control run are similar in magnitude and area to the differences between nonoverlapping segments of the control run. These simulations suggest that extratropical responses to complete tropical deforestation are unlikely to be distinguishable from natural climate variability.
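A common way to account for autocorrelation when comparing two climate-model time series is to shrink the sample size by the lag-1 autocorrelation before applying a t test. The sketch below uses that convention; it is not necessarily the exact procedure of the paper.

```python
import numpy as np
from scipy import stats

def lag1_autocorr(x):
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.sum(x[:-1] * x[1:]) / np.sum(x * x)

def autocorr_adjusted_ttest(ctrl, expt):
    """Two-sample t test with effective sample sizes reduced for lag-1
    autocorrelation (a common convention, assumed here for illustration)."""
    n1_eff = len(ctrl) * (1 - lag1_autocorr(ctrl)) / (1 + lag1_autocorr(ctrl))
    n2_eff = len(expt) * (1 - lag1_autocorr(expt)) / (1 + lag1_autocorr(expt))
    s1, s2 = np.var(ctrl, ddof=1), np.var(expt, ddof=1)
    t = (np.mean(expt) - np.mean(ctrl)) / np.sqrt(s1 / n1_eff + s2 / n2_eff)
    dof = min(n1_eff, n2_eff) - 1                     # conservative choice of dof
    p = 2 * stats.t.sf(abs(t), dof)
    return t, p
```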
Weather model performance on extreme rainfall events simulation's over Western Iberian Peninsula
NASA Astrophysics Data System (ADS)
Pereira, S. C.; Carvalho, A. C.; Ferreira, J.; Nunes, J. P.; Kaiser, J. J.; Rocha, A.
2012-08-01
This study evaluates the performance of the WRF-ARW numerical weather model in simulating the spatial and temporal patterns of an extreme rainfall period over a complex orographic region in north-central Portugal. The analysis was performed for December 2009, during the Portugal Mainland rainy season. The heavy to extreme rainfall periods were due to several low surface-pressure systems associated with frontal surfaces. The total amount of precipitation for December exceeded, on average, the climatological mean for the 1971-2000 period by 89 mm, varying from 190 mm (southern part of the country) to 1175 mm (northern part of the country). Three model runs were conducted to assess possible improvements in model performance: (1) the WRF-ARW is forced with the initial fields from a global domain model (RunRef); (2) data assimilation for a specific location (RunObsN) is included; (3) nudging is used to adjust the analysis field (RunGridN). Model performance was evaluated against an observed hourly precipitation dataset of 15 rainfall stations using several statistical parameters. The WRF-ARW model reproduced the temporal rainfall patterns well but tended to overestimate precipitation amounts. The RunGridN simulation provided the best results, but the model performance of the other two runs was also good, so that the selected extreme rainfall episode was successfully reproduced.
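The abstract does not list the statistical parameters used; a minimal evaluation of hourly precipitation at one station might compute bias, RMSE, and correlation as in the sketch below.

```python
import numpy as np

def evaluation_scores(obs, sim):
    """Bias, RMSE and correlation for one station's hourly precipitation (sketch)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return {"bias": float(np.mean(sim - obs)),
            "rmse": float(np.sqrt(np.mean((sim - obs) ** 2))),
            "corr": float(np.corrcoef(obs, sim)[0, 1])}

print(evaluation_scores([0.0, 1.2, 3.4, 0.5], [0.1, 1.5, 4.0, 0.2]))
```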
Trusted computing strengthens cloud authentication.
Ghazizadeh, Eghbal; Zamani, Mazdak; Ab Manan, Jamalul-lail; Alizadeh, Mojtaba
2014-01-01
Cloud computing is a new generation of technology which is designed to provide the commercial necessities, solve the IT management issues, and run the appropriate applications. Another entry on the list of cloud functions which has been handled internally is Identity Access Management (IAM). Companies encounter IAM security challenges as they adopt more technologies. Trusted multi-tenancy and trusted computing based on a Trusted Platform Module (TPM) are great technologies for solving the trust and security concerns in the cloud identity environment. Single sign-on (SSO) and OpenID have been released to solve security and privacy problems for cloud identity. This paper proposes the use of trusted computing, Federated Identity Management, and OpenID Web SSO to solve identity theft in the cloud. Besides, this proposed model has been simulated in a .Net environment. Security analysis, simulation, and the BLP confidentiality model are the three ways used to evaluate and analyze our proposed model.
Trusted Computing Strengthens Cloud Authentication
2014-01-01
Cloud computing is a new generation of technology which is designed to provide the commercial necessities, solve the IT management issues, and run the appropriate applications. Another entry on the list of cloud functions which has been handled internally is Identity Access Management (IAM). Companies encounter IAM security challenges as they adopt more technologies. Trusted multi-tenancy and trusted computing based on a Trusted Platform Module (TPM) are great technologies for solving the trust and security concerns in the cloud identity environment. Single sign-on (SSO) and OpenID have been released to solve security and privacy problems for cloud identity. This paper proposes the use of trusted computing, Federated Identity Management, and OpenID Web SSO to solve identity theft in the cloud. Besides, this proposed model has been simulated in a .Net environment. Security analysis, simulation, and the BLP confidentiality model are the three ways used to evaluate and analyze our proposed model. PMID:24701149
Improved Temperature Diagnostic for Non-Neutral Plasmas with Single-Electron Resolution
NASA Astrophysics Data System (ADS)
Shanman, Sabrina; Evans, Lenny; Fajans, Joel; Hunter, Eric; Nelson, Cheyenne; Sierra, Carlos; Wurtele, Jonathan
2016-10-01
Plasma temperature diagnostics in a Penning-Malmberg trap are essential for reliably obtaining cold, non-neutral plasmas. We have developed a setup for detecting the initial electrons that escape from a trapped pure electron plasma as the confining electrode potential is slowly reduced. The setup minimizes external noise by using a silicon photomultiplier to capture light emitted from an MCP-amplified phosphor screen. To take advantage of this enhanced resolution, we have developed a new plasma temperature diagnostic analysis procedure which takes discrete electron arrival times as input. We have run extensive simulations comparing this new discrete algorithm to our existing exponential fitting algorithm. These simulations are used to explore the behavior of these two temperature diagnostic procedures at low N and at high electronic noise. This work was supported by the DOE DE-FG02-06ER54904, and the NSF 1500538-PHY.
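The usual temperature diagnostic fits the exponential onset of escaping charge as the confining barrier is lowered; with single-electron resolution the same fit can be applied directly to discrete arrivals. The sketch below assumes the early escape scales as exp(-eV/kT) with barrier voltage V; it illustrates the idea and is not the group's analysis code.

```python
import numpy as np

def temperature_from_arrivals(barrier_volts_at_arrival):
    """Parallel temperature (in eV) from the barrier voltages at which the
    first escaping electrons are detected, assuming N(V) ~ exp(-e V / kT)."""
    V = np.sort(np.asarray(barrier_volts_at_arrival, float))[::-1]  # highest barrier first
    N = np.arange(1, len(V) + 1)                                    # cumulative escaped count
    slope, _ = np.polyfit(V, np.log(N), 1)                          # d ln(N) / dV  [1/V]
    return -1.0 / slope                                             # kT in eV

# Synthetic arrival voltages for a ~1 eV plasma (exponential tail above a 5 V base)
rng = np.random.default_rng(2)
volts = 5.0 + rng.exponential(scale=1.0, size=200)
print(temperature_from_arrivals(volts))
```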
Calculation of open and closed system elastic coefficients for multicomponent solids
NASA Astrophysics Data System (ADS)
Mishin, Y.
2015-06-01
Thermodynamic equilibrium in multicomponent solids subject to mechanical stresses is a complex nonlinear problem whose exact solution requires extensive computations. A few decades ago, Larché and Cahn proposed a linearized solution of the mechanochemical equilibrium problem by introducing the concept of open system elastic coefficients [Acta Metall. 21, 1051 (1973), 10.1016/0001-6160(73)90021-7]. Using the Ni-Al solid solution as a model system, we demonstrate that open system elastic coefficients can be readily computed by semigrand canonical Monte Carlo simulations in conjunction with the shape fluctuation approach. Such coefficients can be derived from a single simulation run, together with other thermodynamic properties needed for prediction of compositional fields in solid solutions containing defects. The proposed calculation approach enables streamlined solutions of mechanochemical equilibrium problems in complex alloys. Second order corrections to the linear theory are extended to multicomponent systems.
NASA Technical Reports Server (NTRS)
Narayanan, R.; Zimmerman, W. F.; Poon, P. T. Y.
1981-01-01
Test results on a modular simulation of the thermal transport and heat storage characteristics of a heat pipe solar receiver (HPSR) with thermal energy storage (TES) are presented. The HPSR features a 15-25 kWe Stirling engine power conversion system at the focal point of a parabolic dish concentrator operating at 827°C. The system collects and retrieves solar heat with sodium pipes and stores the heat in NaF-MgF2 latent heat storage material. The trials were run with a single full scale heat pipe, three full scale TES containers, and an air-cooled heat extraction coil to replace the Stirling engine heat exchanger. Charging and discharging, constant temperature operation, mixed mode operation, thermal inertia, etc., were studied. The heat pipe performance was verified, as were the thermal energy storage and discharge rates and isothermal discharges.
Study of a two-bed silica gel-water adsorption chiller: performance analysis
NASA Astrophysics Data System (ADS)
Sah, Ramesh P.; Choudhury, Biplab; Das, Ranadip K.
2018-01-01
In this study, a lumped parameter simulation model has been developed for analysis of the thermal performance of a single-stage two-bed adsorption chiller. Since silica gel has a low regeneration temperature and water has a high latent heat of vaporisation, the silica gel-water pair has been chosen as the working pair of the adsorption chiller. Low-grade waste heat or solar heat at around 70-80°C can be used to run this adsorption chiller. In this model, the effects of operating parameters on the performance of the chiller have been studied. The simulated results show that the cooling capacity of the chiller has an optimum value of 5.95 kW for a cycle time of 1600 s with the hot, cooling, and chilled water inlet temperatures at 85°C, 25°C, and 14°C, respectively. The present model can be utilised to investigate and optimise adsorption chillers.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
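To make the role of the probabilistic constraints concrete, the sketch below estimates P[g(X) <= 0] at a few supporting design points with plain Monte Carlo. Plain Monte Carlo stands in here for the generalized subset simulation of the paper, and the limit state and design points are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_probability(limit_state, design_point, n_samples=100_000, std=1.0):
    """Crude Monte Carlo estimate of P[g(X) <= 0] around one supporting point."""
    x = rng.normal(loc=design_point, scale=std, size=(n_samples, len(design_point)))
    return float(np.mean(limit_state(x) <= 0.0))

# Toy limit state: failure when the sum of the two design variables exceeds 7.
g = lambda x: 7.0 - x[:, 0] - x[:, 1]
supporting_points = [np.array([2.0, 2.0]), np.array([3.0, 3.0]), np.array([3.5, 3.0])]
print([failure_probability(g, p) for p in supporting_points])
```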
SWIFT: SPH With Inter-dependent Fine-grained Tasking
NASA Astrophysics Data System (ADS)
Schaller, Matthieu; Gonnet, Pedro; Chalk, Aidan B. G.; Draper, Peter W.
2018-05-01
SWIFT runs cosmological simulations on peta-scale machines, solving gravity and SPH. It uses the Fast Multipole Method (FMM) to calculate gravitational forces between nearby particles, combining these with long-range forces provided by a mesh that captures both the periodic nature of the calculation and the expansion of the simulated universe. SWIFT currently uses a single softening length for all the particles, fixed across particles but variable in time. Many useful external potentials are also available, such as galaxy haloes or stratified boxes that are used in idealised problems. SWIFT implements a standard LCDM cosmology background expansion and solves the equations in a comoving frame; the dark-energy equation of state evolves with the scale factor. The structure of the code allows modified-gravity solvers or self-interacting dark matter schemes to be implemented. Many hydrodynamics schemes are implemented in SWIFT and the software allows users to add their own.
Rapid Parallel Calculation of shell Element Based On GPU
NASA Astrophysics Data System (ADS)
Wang, Jian Hua; Li, Guang Yao; Li, Sheng
2010-06-01
Long computing times have bottlenecked the application of the finite element method. In this paper, an effective method to speed up FEM calculations using a modern graphics processing unit (GPU) and a programmable rendering pipeline is put forward. The method represents element information in a form suited to the GPU, converts the element calculations into rendering passes, carries out the internal-force calculation for all elements, and overcomes the low level of parallelism previously achieved when running on a single computer. Studies show that this method greatly improves efficiency and shortens computing time. Results of simulations of an elasticity problem with a large number of shell elements in sheet metal show that the GPU-based parallel calculation is faster than the CPU-based one. This approach is a useful and efficient way to solve practical engineering problems.
Multiresolution modeling with a JMASS-JWARS HLA Federation
NASA Astrophysics Data System (ADS)
Prince, John D.; Painter, Ron D.; Pendell, Brian; Richert, Walt; Wolcott, Christopher
2002-07-01
CACI, Inc.-Federal has built, tested, and demonstrated the use of a JMASS-JWARS HLA Federation that supports multi-resolution modeling of a weapon system and its subsystems in a JMASS engineering and engagement model environment, while providing a realistic JWARS theater campaign-level synthetic battle space and operational context to assess the weapon system's value added and deployment/employment supportability in a multi-day, combined force-on-force scenario. Traditionally, acquisition analyses require a hierarchical suite of simulation models to address engineering, engagement, mission and theater/campaign measures of performance, measures of effectiveness and measures of merit. Configuring and running this suite of simulations and transferring the appropriate data between each model is both time consuming and error prone. The ideal solution would be a single simulation with the requisite resolution and fidelity to perform all four levels of acquisition analysis. However, current computer hardware technologies cannot deliver the runtime performance necessary to support the resulting extremely large simulation. One viable alternative is to integrate the current hierarchical suite of simulation models using the DoD's High Level Architecture in order to support multi-resolution modeling. An HLA integration eliminates the extremely large model problem, provides a well-defined and manageable mixed resolution simulation and minimizes VV&A issues.
NASA Astrophysics Data System (ADS)
Magaldi, Marcello G.; Haine, Thomas W. N.
2015-02-01
The cascade of dense waters of the Southeast Greenland shelf during summer 2003 is investigated with two very high-resolution (0.5-km) simulations. The first simulation is non-hydrostatic. The second simulation is hydrostatic and about 3.75 times less expensive. Both simulations are compared to a 2-km hydrostatic run, about 31 times less expensive than the 0.5 km non-hydrostatic case. Time-averaged volume transport values for deep waters are insensitive to the changes in horizontal resolution and vertical momentum dynamics. By this metric, both lateral stirring and vertical shear instabilities associated with the cascading process are accurately parameterized by the turbulent schemes used at 2-km horizontal resolution. All runs compare well with observations and confirm that the cascade is mainly driven by cyclones which are linked to dense overflow boluses at depth. The passage of the cyclones is also associated with the generation of internal gravity waves (IGWs) near the shelf. Surface fields and kinetic energy spectra do not differ significantly between the runs for horizontal scales L > 30 km. Complex structures emerge and the spectra flatten at scales L < 30 km in the 0.5-km runs. In the non-hydrostatic case, additional energy is found in the vertical kinetic energy spectra at depth in the 2 km < L < 10 km range and with frequencies around 7 times the inertial frequency. This enhancement is missing in both hydrostatic runs and is here argued to be due to the different IGW evolution and propagation offshore. The different IGW behavior in the non-hydrostatic case has strong implications for the energetics: compared to the 2-km case, the baroclinic conversion term and vertical kinetic energy are about 1.4 and at least 34 times larger, respectively. This indicates that the energy transfer from the geostrophic eddy field to IGWs and their propagation away from the continental slope is not properly represented in the hydrostatic runs.
Testing and Validating Gadget2 for GPUs
NASA Astrophysics Data System (ADS)
Wibking, Benjamin; Holley-Bockelmann, K.; Berlind, A. A.
2013-01-01
We are currently upgrading a version of Gadget2 (Springel et al., 2005) that is optimized for NVIDIA's CUDA GPU architecture (Frigaard, unpublished) to work with the latest libraries and graphics cards. Preliminary tests of its performance indicate a ~40x speedup in the particle force tree approximation calculation, with overall speedup of 5-10x for cosmological simulations run with GPUs compared to running on the same CPU cores without GPU acceleration. We believe this speedup can be reasonably increased by an additional factor of two with further optimization, including overlap of computation on CPU and GPU. Tests of single-precision GPU numerical fidelity currently indicate accuracy of the mass function and the spectral power density to within a few percent of extended-precision CPU results with the unmodified form of Gadget. Additionally, we plan to test and optimize the GPU code for Millennium-scale "grand challenge" simulations of >10^9 particles, a scale that has been previously untested with this code, with the aid of the NSF XSEDE flagship GPU-based supercomputing cluster codenamed "Keeneland." Current work involves additional validation of numerical results, extending the numerical precision of the GPU calculations to double precision, and evaluating performance/accuracy tradeoffs. We believe that this project, if successful, will yield substantial computational performance benefits to the N-body research community as the next generation of GPU supercomputing resources becomes available, both increasing the electrical power efficiency of ever-larger computations (making simulations possible a decade from now at scales and resolutions unavailable today) and accelerating the pace of research in the field.
High Performance Computing for Modeling Wind Farms and Their Impact
NASA Astrophysics Data System (ADS)
Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.
2016-12-01
As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental-scale flows down to the flow over a wind turbine blade, including the blade boundary layer, spanning fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the smaller-scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development, as well as improvements in wind plant performance and enhancements to the transmission infrastructure, will also be discussed.
Evaluation of Convective Transport in the GEOS-5 Chemistry and Climate Model
NASA Technical Reports Server (NTRS)
Pickering, Kenneth E.; Ott, Lesley E.; Shi, Jainn J.; Tao, Wei-Kuo; Mari, Celine; Schlager, Hans
2011-01-01
The NASA Goddard Earth Observing System (GEOS-5) Chemistry and Climate Model (CCM) consists of a global atmospheric general circulation model and the combined stratospheric and tropospheric chemistry package from the NASA Global Modeling Initiative (GMI) chemical transport model. The subgrid process of convective tracer transport is represented through the Relaxed Arakawa-Schubert parameterization in the GEOS-5 CCM. However, substantial uncertainty for tracer transport is associated with this parameterization, as is the case with all global and regional models. We have designed a project to comprehensively evaluate this parameterization from the point of view of tracer transport, and determine the most appropriate improvements that can be made to the GEOS-5 convection algorithm, allowing improvement in our understanding of the role of convective processes in determining atmospheric composition. We first simulate tracer transport in individual observed convective events with a cloud-resolving model (WRF). Initial condition tracer profiles (CO, CO2, O3) are constructed from aircraft data collected in undisturbed air, and the simulations are evaluated using aircraft data taken in the convective anvils. A single-column (SCM) version of the GEOS-5 GCM with online tracers is then run for the same convective events. SCM output is evaluated based on averaged tracer fields from the cloud-resolving model. Sensitivity simulations with adjusted parameters will be run in the SCM to determine improvements in the representation of convective transport. The focus of the work to date is on tropical continental convective events from the African Monsoon Multidisciplinary Analyses (AMMA) field mission in August 2006 that were extensively sampled by multiple research aircraft.
Simulating maize yield and biomass with spatial variability of soil field capacity
Ma, Liwang; Ahuja, Lajpat; Trout, Thomas; Nolan, Bernard T.; Malone, Robert W.
2015-01-01
Spatial variability in field soil properties is a challenge for system modelers who use single representative values, such as means, for model inputs, rather than their distributions. In this study, the root zone water quality model (RZWQM2) was first calibrated for 4 yr of maize (Zea mays L.) data at six irrigation levels in northern Colorado and then used to study spatial variability of soil field capacity (FC) estimated in 96 plots on maize yield and biomass. The best results were obtained when the crop parameters were fitted along with FCs, with a root mean squared error (RMSE) of 354 kg ha–1 for yield and 1202 kg ha–1 for biomass. When running the model using each of the 96 sets of field-estimated FC values, instead of calibrating FCs, the average simulated yield and biomass from the 96 runs were close to measured values with a RMSE of 376 kg ha–1 for yield and 1504 kg ha–1 for biomass. When an average of the 96 FC values for each soil layer was used, simulated yield and biomass were also acceptable with a RMSE of 438 kg ha–1 for yield and 1627 kg ha–1 for biomass. Therefore, when there are large numbers of FC measurements, an average value might be sufficient for model inputs. However, when the ranges of FC measurements were known for each soil layer, a sampled distribution of FCs using the Latin hypercube sampling (LHS) might be used for model inputs.
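The final suggestion, sampling a distribution of field-capacity values when only layer ranges are known, can be done with an off-the-shelf Latin hypercube sampler. The per-layer ranges below are placeholders, not the measured Colorado values.

```python
import numpy as np
from scipy.stats import qmc

# Placeholder per-layer FC ranges (volumetric water content): (lower, upper) bounds.
fc_ranges = {"0-30 cm": (0.20, 0.30),
             "30-60 cm": (0.22, 0.32),
             "60-90 cm": (0.24, 0.34)}
lower = np.array([lo for lo, hi in fc_ranges.values()])
upper = np.array([hi for lo, hi in fc_ranges.values()])

sampler = qmc.LatinHypercube(d=len(fc_ranges), seed=1)
unit_samples = sampler.random(n=96)                  # 96 profiles in [0, 1)^d
fc_samples = qmc.scale(unit_samples, lower, upper)   # map to the per-layer ranges
print(fc_samples.shape)                              # (96, 3), one FC set per model run
```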
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data set during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
Aquilina, Peter; Parr, William C.H.; Chamoli, Uphar; Wroe, Stephen; Clausen, Philip
2014-01-01
The most stable pattern of internal fixation for mandibular condyle fractures is an area of ongoing discussion. This study investigates the stability of three patterns of plate fixation using readily available, commercially pure titanium implants. Finite element models of a simulated mandibular condyle fracture were constructed. The completed models were heterogeneous in bone material properties, contained approximately 1.2 million elements, and incorporated simulated jaw adducting musculature. Models were run assuming linear elasticity and isotropic material properties for bone. No human subjects were involved in this investigation. The stability of the simulated condylar fracture reduced with the different implant configurations, and the von Mises stresses of a 1.5-mm X-shaped plate, a 1.5-mm rectangular plate, and a 1.5-mm square plate (all Synthes; Synthes GmbH, Zuchwil, Switzerland), were compared. The 1.5-mm X plate was the most stable of the three 1.5-mm profile plate configurations examined and had comparable mechanical performance to a single 2.0-mm straight four-hole plate. This study does not support the use of rectangular or square plate patterns in the open reduction and internal fixation of mandibular condyle fractures. It does provide some support for the use of a 1.5-mm X plate to reduce condylar fractures in selected clinical cases. PMID:25136411
Preventing Pirates from Boarding Commercial Vessels - A Systems Approach
2014-09-01
was developed in MATLAB to run simulations designed to estimate the relative effectiveness of each assessed countermeasure. A cost analysis was...project indicated that the P-Trap countermeasure, designed to entangle the pirate’s propellers with thin lines, is both effective and economically viable...vessels. A model of the operational environment was developed in MATLAB to run simulations designed to estimate the relative effectiveness of each
The UPSCALE project: a large simulation campaign
NASA Astrophysics Data System (ADS)
Mizielinski, Matthew; Roberts, Malcolm; Vidale, Pier Luigi; Schiemann, Reinhard; Demory, Marie-Estelle; Strachan, Jane
2014-05-01
The development of a traceable hierarchy of HadGEM3 global climate models, based upon the Met Office Unified Model, at resolutions from 135 km to 25 km, now allows the impact of resolution on the mean state, variability and extremes of climate to be studied in a robust fashion. In 2011 we successfully obtained a single-year grant of 144 million core hours of supercomputing time from the PRACE organization to run ensembles of 27-year atmosphere-only (HadGEM3-A GA3.0) climate simulations at 25 km resolution, as used in present global weather forecasting, on HERMIT at HLRS. Through 2012 the UPSCALE project (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) ran over 650 years of simulation at resolutions of 25 km (N512), 60 km (N216) and 135 km (N96) to look at the value of high resolution climate models in the study of both present climate and a potential future climate scenario based on RCP8.5. Over 400 TB of data was produced using HERMIT, with additional simulations run on HECToR (UK supercomputer) and MONSooN (Met Office NERC Supercomputing Node). The data generated was transferred to the JASMIN super-data cluster, hosted by STFC CEDA in the UK, where analysis facilities are allowing rapid scientific exploitation of the data set. Many groups across the UK and Europe are already taking advantage of these facilities and we welcome approaches from other interested scientists. This presentation will briefly cover the following points: the purpose and requirements of the UPSCALE project and the facilities used; technical implementation and hurdles (model porting and optimisation, automation, numerical failures, data transfer); ensemble specification; and current analysis projects and access to the data set. A full description of UPSCALE and the data set generated has been submitted to Geoscientific Model Development, with overview information available from http://proj.badc.rl.ac.uk/upscale.
Argonne Simulation Framework for Intelligent Transportation Systems
DOT National Transportation Integrated Search
1996-01-01
A simulation framework has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed to run on parallel computers and distribu...
Sams, J. I.; Witt, E. C.
1995-01-01
The Hydrological Simulation Program - Fortran (HSPF) was used to simulate streamflow and sediment transport in two surface-mined basins of Fayette County, Pa. Hydrologic data from the Stony Fork Basin (0.93 square miles) was used to calibrate HSPF parameters. The calibrated parameters were applied to an HSPF model of the Poplar Run Basin (8.83 square miles) to evaluate the transfer value of model parameters. The results of this investigation provide information to the Pennsylvania Department of Environmental Resources, Bureau of Mining and Reclamation, regarding the value of the simulated hydrologic data for use in cumulative hydrologic-impact assessments of surface-mined basins. The calibration period was October 1, 1985, through September 30, 1988 (water years 1986-88). The simulated data were representative of the observed data from the Stony Fork Basin. Mean simulated streamflow was 1.64 cubic feet per second compared to measured streamflow of 1.58 cubic feet per second for the 3-year period. The difference between the observed and simulated peak stormflow ranged from 4.0 to 59.7 percent for 12 storms. The simulated sediment load for the 1987 water year was 127.14 tons (0.21 ton per acre), which compares to a measured sediment load of 147.09 tons (0.25 ton per acre). The total simulated suspended-sediment load for the 3-year period was 538.2 tons (0.30 ton per acre per year), which compares to a measured sediment load of 467.61 tons (0.26 ton per acre per year). The model was verified by comparing observed and simulated data from October 1, 1988, through September 30, 1989. The results obtained were comparable to those from the calibration period. The simulated mean daily discharge was representative of the range of data observed from the basin and of the frequency with which specific discharges were equalled or exceeded. The calibrated and verified parameters from the Stony Fork model were applied to an HSPF model of the Poplar Run Basin. The two basins are in a similar physical setting. Data from October 1, 1987, through September 30, 1989, were used to evaluate the Poplar Run model. In general, the results from the Poplar Run model were comparable to those obtained from the Stony Fork model. The difference between observed and simulated total streamflow was 1.1 percent for the 2-year period. The mean annual streamflow simulated by the Poplar Run model was 18.3 cubic feet per second. This compares to an observed streamflow of 18.15 cubic feet per second. For the 2-year period, the simulated sediment load was 2,754 tons (0.24 ton per acre per year), which compares to a measured sediment load of 3,051.2 tons (0.27 ton per acre per year) for the Poplar Run Basin. Cumulative frequency-distribution curves of the observed and simulated streamflow compared well. The comparison between observed and simulated data improved as the time span increased. Simulated annual means and totals were more representative of the observed data than hourly data used in comparing storm events. The structure and organization of the HSPF model facilitated the simulation of a wide range of hydrologic processes. The simulation results from this investigation indicate that model parameters may be transferred to ungaged basins to generate representative hydrologic data through modeling techniques.
Dysrhythmias in Laypersons During Centrifuge-Simulated Suborbital Spaceflight.
Suresh, Rahul; Blue, Rebecca S; Mathers, Charles H; Castleberry, Tarah L; Vanderploeg, James M
2017-11-01
There are limited data on cardiac dysrhythmias in laypersons during hypergravity exposure. We report layperson electrocardiograph (ECG) findings and tolerance of dysrhythmias during centrifuge-simulated suborbital spaceflight. Volunteers participated in varied-length centrifuge training programs of 2-7 centrifuge runs over 0.5-2 d, culminating in two simulated suborbital spaceflights of combined +Gz and +Gx (peak +4.0 Gz, +6.0 Gx, duration 5 s). Monitors recorded pre- and post-run mean arterial blood pressure (MAP), 6-s average heart rate (HR) collected at prespecified points during exposures, documented dysrhythmias observed on continuous 3-lead ECG, self-reported symptoms, and objective signs of intolerance on real-time video monitoring. Participating in the study were 148 subjects (43 women). Documented dysrhythmias included sinus pause (N = 5), couplet premature ventricular contractions (N = 4), bigeminy (N = 3), accelerated idioventricular rhythm (N = 1), and relative bradycardia (RB, defined as a transient HR drop of >20 bpm; N = 63). None were associated with subjective symptoms or objective signs of acceleration intolerance. Episodes of RB occurred only during +Gx exposures. Subjects had a higher post-run vs. pre-run MAP after all exposures, but demonstrated no difference in pre- and post-run HR. RB was more common in men, younger individuals, and subjects experiencing more centrifuge runs. Dysrhythmias in laypersons undergoing simulated suborbital spaceflight were well tolerated, though RB was frequently noted during short-duration +Gx exposure. No subjects demonstrated associated symptoms or objective hemodynamic sequelae from these events. Even so, heightened caution remains warranted when monitoring dysrhythmias in laypersons with significant cardiopulmonary disease or taking medications that modulate cardiac conduction. Suresh R, Blue RS, Mathers CH, Castleberry TL, Vanderploeg JM. Dysrhythmias in laypersons during centrifuge-simulated suborbital spaceflight. Aerosp Med Hum Perform. 2017; 88(11):1008-1015.
Predictive simulation of gait at low gravity reveals skipping as the preferred locomotion strategy
Ackermann, Marko; van den Bogert, Antonie J.
2012-01-01
The investigation of gait strategies at low gravity environments gained momentum recently as manned missions to the Moon and to Mars are reconsidered. Although reports by astronauts of the Apollo missions indicate alternative gait strategies might be favored on the Moon, computational simulations and experimental investigations have been almost exclusively limited to the study of either walking or running, the locomotion modes preferred under Earth's gravity. In order to investigate the gait strategies likely to be favored at low gravity a series of predictive, computational simulations of gait are performed using a physiological model of the musculoskeletal system, without assuming any particular type of gait. A computationally efficient optimization strategy is utilized allowing for multiple simulations. The results reveal skipping as more efficient and less fatiguing than walking or running and suggest the existence of a walk-skip rather than a walk-run transition at low gravity. The results are expected to serve as a background to the design of experimental investigations of gait under simulated low gravity. PMID:22365845
Predictive simulation of gait at low gravity reveals skipping as the preferred locomotion strategy.
Ackermann, Marko; van den Bogert, Antonie J
2012-04-30
The investigation of gait strategies at low gravity environments gained momentum recently as manned missions to the Moon and to Mars are reconsidered. Although reports by astronauts of the Apollo missions indicate alternative gait strategies might be favored on the Moon, computational simulations and experimental investigations have been almost exclusively limited to the study of either walking or running, the locomotion modes preferred under Earth's gravity. In order to investigate the gait strategies likely to be favored at low gravity a series of predictive, computational simulations of gait are performed using a physiological model of the musculoskeletal system, without assuming any particular type of gait. A computationally efficient optimization strategy is utilized allowing for multiple simulations. The results reveal skipping as more efficient and less fatiguing than walking or running and suggest the existence of a walk-skip rather than a walk-run transition at low gravity. The results are expected to serve as a background to the design of experimental investigations of gait under simulated low gravity. Copyright © 2012 Elsevier Ltd. All rights reserved.
Antonioletti, Mario; Biktashev, Vadim N; Jackson, Adrian; Kharche, Sanjay R; Stary, Tomas; Biktasheva, Irina V
2017-01-01
The BeatBox simulation environment combines a flexible script-language user interface with robust computational tools so that cardiac electrophysiology in-silico experiments can be set up without low-level re-coding: cell excitation, tissue/anatomy models, and stimulation protocols are specified in a BeatBox script, and the simulation is run either sequentially or in parallel (MPI) without re-compilation. BeatBox is free software written in C that runs on Unix-based platforms. It provides the whole spectrum of multi-scale tissue modelling, from 0-dimensional individual-cell simulation through 1-dimensional fibres, 2-dimensional sheets and 3-dimensional slabs of tissue, up to anatomically realistic whole-heart simulations, with run-time measurements including cardiac re-entry tip/filament tracing, ECG, and local/global samples of any variables. The BeatBox solver, cell, and tissue/anatomy model repositories are extended via robust and flexible interfaces, providing an open framework for new developments in the field. In this paper we give an overview of the current state of BeatBox, together with a description of the main computational methods and MPI parallelisation approaches.
NASA Astrophysics Data System (ADS)
Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong
2016-11-01
To satisfy real-time and generality requirements, a laser target simulator for a semi-physical simulation system based on an RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper/lower-computer simulation platform architecture used in most current real-time systems, this system has better maintainability and portability. The system runs on Windows, using the Windows RTX real-time extension subsystem together with a reflective memory network to guarantee real-time performance for tasks such as evaluating the simulation model, transmitting the simulation data, and maintaining real-time communication; these real-time tasks run under the RTSS process. At the same time, LabWindows/CVI is used to build a graphical interface and to handle non-real-time tasks such as man-machine interaction and the display and storage of simulation data, which run under a Win32 process. Through the design of RTX shared memory and a task-scheduling algorithm, data exchange between the real-time RTSS process and the non-real-time Win32 process is accomplished. The experimental results show that this system has strong real-time performance, high stability, and high simulation accuracy, as well as good human-computer interaction.
Effect of monthly areal rainfall uncertainty on streamflow simulation
NASA Astrophysics Data System (ADS)
Ndiritu, J. G.; Mkhize, N.
2017-08-01
Areal rainfall is mostly obtained from point rainfall measurements that are sparsely located, and several studies have shown that this results in large areal rainfall uncertainties at the daily time step. However, water resources assessment is often carried out at a monthly time step, and streamflow simulation is usually an essential component of this assessment. This study set out to quantify monthly areal rainfall uncertainties and assess their effect on streamflow simulation. This was achieved by: (i) quantifying areal rainfall uncertainties and using these to generate stochastic monthly areal rainfalls, and (ii) finding out how the quality of monthly streamflow simulation and streamflow variability change if stochastic areal rainfalls are used instead of historic areal rainfalls. Tests on monthly rainfall uncertainty were carried out using data from two South African catchments, while streamflow simulation was confined to one of them. A non-parametric model that had been applied at a daily time step was used for stochastic areal rainfall generation, and the Pitman catchment model calibrated using the SCE-UA optimizer was used for streamflow simulation. 100 randomly-initialised calibration-validation runs using 100 stochastic areal rainfalls were compared with 100 runs obtained using the single historic areal rainfall series. By using 4 rain gauges alternately to obtain areal rainfall, the resulting differences in areal rainfall averaged 20% of the mean monthly areal rainfall, and rainfall uncertainty was therefore highly significant. Pitman model simulations obtained coefficients of efficiency averaging 0.66 and 0.64 in calibration and validation using historic rainfalls, while the respective values using stochastic areal rainfalls were 0.59 and 0.57. Average bias was less than 5% in all cases. The streamflow ranges using historic rainfalls averaged 29% of the mean naturalised flow in calibration and validation, and the respective average ranges using stochastic monthly rainfalls were 86 and 90% of the mean naturalised streamflow. In calibration, 33% of the naturalised flows fell within the simulated streamflow ranges when historic rainfall was used, and using stochastic rainfalls increased this to 66%. In validation, the respective percentages of naturalised flows falling within the simulated streamflow ranges were 32 and 72%. The analysis reveals that monthly areal rainfall uncertainty is significant and that incorporating it into streamflow simulation would add validity to the results.
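The "coefficient of efficiency" reported above is presumably the Nash-Sutcliffe measure commonly used with the Pitman model. The sketch below, on synthetic monthly flows (all numbers invented for illustration, not taken from the study), shows how such an ensemble comparison between a historic-rainfall run and stochastic-rainfall runs can be scored, including the coverage of the ensemble range:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Coefficient of efficiency: 1 - SSE / variance of the observations."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

rng = np.random.default_rng(1)
obs = 50 + 20 * rng.random(120)                # 10 years of monthly "naturalised" flow
hist_sim = obs + rng.normal(0, 8, obs.size)    # run driven by the historic areal rainfall
stoch_sims = obs + rng.normal(0, 10, (100, obs.size))   # 100 stochastic-rainfall runs (wider spread)

print("CE, historic rainfall:", round(nash_sutcliffe(obs, hist_sim), 2))
print("CE, stochastic mean  :", round(np.mean([nash_sutcliffe(obs, s) for s in stoch_sims]), 2))
# fraction of observations falling inside the stochastic ensemble range (coverage statistic)
inside = (obs >= stoch_sims.min(axis=0)) & (obs <= stoch_sims.max(axis=0))
print("obs within ensemble range:", round(inside.mean() * 100), "%")
```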
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given complex real-time water conditions, real-time simulation of large-scale floods is very important for flood-prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow-water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
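As a rough illustration of the ingredients named in this abstract, the following minimal sketch advances the one-dimensional shallow-water equations with a Godunov-type (Rusanov) flux and a simple wet/dry depth threshold. It is not the paper's two-dimensional unstructured scheme; the dam-break setup, threshold value and CFL number are all assumptions made for the example.

```python
import numpy as np

g, h_dry = 9.81, 1e-6                      # gravity; wet/dry depth threshold (assumed value)

def rusanov_step(h, hu, dx, cfl=0.4):
    """One explicit Godunov-type (Rusanov flux) update of the 1-D shallow-water equations."""
    hu = np.where(h > h_dry, hu, 0.0)      # drop residual momentum in dry cells
    u = np.where(h > h_dry, hu / np.maximum(h, h_dry), 0.0)
    c = np.sqrt(g * np.maximum(h, 0.0))
    dt = cfl * dx / max(np.max(np.abs(u) + c), 1e-12)

    U = np.vstack((h, hu))
    F = np.vstack((hu, hu * u + 0.5 * g * h ** 2))
    a = np.maximum(np.abs(u[:-1]) + c[:-1], np.abs(u[1:]) + c[1:])   # interface wave speed
    Fstar = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])

    Unew = U.copy()
    Unew[:, 1:-1] -= dt / dx * (Fstar[:, 1:] - Fstar[:, :-1])        # update interior cells
    Unew[0] = np.maximum(Unew[0], 0.0)                               # keep depths non-negative
    return Unew[0], Unew[1], dt

# Dam break onto a dry bed: 2 m of water in the left half of a 10 km reach, 5 m cells
dx, n = 5.0, 2000
h = np.where(np.arange(n) < n // 2, 2.0, 0.0)
hu = np.zeros(n)
t = 0.0
while t < 60.0:                            # simulate one minute of flooding
    h, hu, dt = rusanov_step(h, hu, dx)
    t += dt
front = np.flatnonzero(h > 1e-3).max()     # index of the last wet cell
print(f"wet front at x = {front * dx:.0f} m after {t:.1f} s")
```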
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ansari, A.; Mohaghegh, S.; Shahnam, M.
To ensure the usefulness of simulation technologies in practice, their credibility needs to be established with Uncertainty Quantification (UQ) methods. In this project, a smart proxy is introduced to significantly reduce the computational cost of conducting the large number of multiphase CFD simulations typically required for non-intrusive UQ analysis. Smart proxies for CFD models are developed using the pattern recognition capabilities of Artificial Intelligence (AI) and Data Mining (DM) technologies. Several CFD simulation runs with different inlet air velocities for a rectangular fluidized bed are used to create a smart CFD proxy that is capable of replicating the CFD results for the entire geometry and inlet velocity range. The smart CFD proxy is validated with blind CFD runs (CFD runs that have not played any role during the development of the smart CFD proxy). The developed and validated smart CFD proxy generates its results in seconds with reasonable error (less than 10%). Upon completion of this project, UQ studies that rely on hundreds or thousands of smart CFD proxy runs can be accomplished in minutes. The following figure demonstrates a validation example (blind CFD run) showing the results from the MFiX simulation and the smart CFD proxy for pressure distribution across a fluidized bed at a given time-step (the layer number corresponds to the vertical location in the bed).
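A minimal sketch of the smart-proxy idea follows: a regression model is trained on a handful of "simulation runs" and then checked against a blind run at an inlet velocity it never saw. The cheap analytic stand-in for the MFiX solver and all parameter values are invented for illustration; the project's actual proxy is built from real CFD output with AI/data-mining pattern recognition.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def cheap_cfd_stand_in(velocity, height):
    """Stand-in for a CFD run: pressure drop across a fluidized bed (synthetic, for illustration)."""
    return 1200.0 * np.exp(-2.5 * height) * (1.0 + 0.3 * np.sin(4.0 * velocity)) + rng.normal(0, 5.0, np.shape(height))

# "Simulation runs" at a handful of inlet velocities, sampled over bed height
train_v = np.array([0.6, 0.8, 1.0, 1.2, 1.4])
heights = np.linspace(0.0, 1.0, 40)
X = np.array([(v, z) for v in train_v for z in heights])
y = cheap_cfd_stand_in(X[:, 0], X[:, 1])

proxy = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Blind run: an inlet velocity never used during training
blind_v = 0.9
X_blind = np.column_stack((np.full_like(heights, blind_v), heights))
pred = proxy.predict(X_blind)
truth = cheap_cfd_stand_in(X_blind[:, 0], X_blind[:, 1])
rel_err = np.abs(pred - truth) / np.abs(truth)
print("mean relative error on blind run: {:.1%}".format(rel_err.mean()))
```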
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S. Y.
The Au beam at the RHIC ramp in run 2014 is reviewed together with runs 2011 and 2012. Observed bunch length and longitudinal emittance are compared with IBS simulations. The IBS growth rate of the longitudinal emittance in run 2014 is similar to run 2011, and both are larger than run 2012. This is explained by the large transverse emittance at high intensity observed in run 2012, but not in run 2014. The big improvement of the AGS ramping in run 2014 might be related to this change. The importance of the injector intensity improvement in run 2014 is emphasized, which gives rise to the initial luminosity improvement of 50% in run 2014, compared with the previous Au-Au run 2011. In addition, a modified IBS model, calibrated using the RHIC Au runs from 9.8 GeV/n to 100 GeV/n, is presented and used in the study.
Designing Crop Simulation Web Service with Service Oriented Architecture Principle
NASA Astrophysics Data System (ADS)
Chinnachodteeranun, R.; Hung, N. D.; Honda, K.
2015-12-01
Crop simulation models are efficient tools for simulating crop growth processes and yield. Running crop models requires data from various sources as well as time-consuming data processing, such as data quality checking and data formatting, before those data can be input to the model. This has limited the use of crop modeling largely to crop modelers. We aim to make running crop models convenient for a wide range of users so that the utilization of crop models will expand, directly improving agricultural applications. As a first step, we developed a prototype that runs DSSAT on the Web, called Tomorrow's Rice (v. 1). It predicts rice yield based on a planting date, rice variety and soil characteristics using the DSSAT crop model. A user only needs to select a planting location on the Web GUI; the system then queries historical weather data from available sources and returns the expected yield. Currently, we are working on weather data connection via the Sensor Observation Service (SOS) interface defined by the Open Geospatial Consortium (OGC). Weather data can be automatically connected to a weather generator for generating weather scenarios for running the crop model. In order to expand these services further, we are designing a web service framework consisting of layers of web services to support composition and execution of crop simulations. This framework allows a third-party application to call and cascade each service as needed for data preparation and for running the DSSAT model using a dynamic web service mechanism. The framework has a module to manage data format conversion, which means users do not need to spend their time curating the data inputs. Dynamic linking of data sources and services is implemented using the Service Component Architecture (SCA). This agriculture web service platform demonstrates interoperability of weather data using the SOS interface, convenient connections between weather data sources and a weather generator, and the chaining of various services for running crop models for decision support.
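A minimal sketch of the service idea appears below, assuming a hypothetical HTTP endpoint and a placeholder in place of the real data retrieval and DSSAT execution; none of the names, parameters or defaults come from the Tomorrow's Rice implementation.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def run_crop_model(lat, lon, planting_date, variety):
    """Placeholder for the real pipeline (weather retrieval, formatting, DSSAT run)."""
    return {"expected_yield_kg_ha": 4200, "planting_date": planting_date, "variety": variety}

class YieldService(BaseHTTPRequestHandler):
    def do_GET(self):
        q = parse_qs(urlparse(self.path).query)
        result = run_crop_model(float(q.get("lat", ["14.0"])[0]),
                                float(q.get("lon", ["100.6"])[0]),
                                q.get("date", ["2015-06-15"])[0],
                                q.get("variety", ["IR64"])[0])
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g. GET http://localhost:8000/yield?lat=14.0&lon=100.6&date=2015-06-15
    HTTPServer(("localhost", 8000), YieldService).serve_forever()
```

A third-party application would only need to issue such a request; data formatting and model execution stay behind the service boundary, which is the point of the layered design described above.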
NASA Astrophysics Data System (ADS)
Dilmen, Derya I.; Titov, Vasily V.; Roe, Gerard H.
2015-12-01
On September 29, 2009, an Mw = 8.1 earthquake at 17:48 UTC in Tonga Trench generated a tsunami that caused heavy damage across Samoa, American Samoa, and Tonga islands. Tutuila island, which is located 250 km from the earthquake epicenter, experienced tsunami flooding and strong currents on the north and east coasts, causing 34 fatalities (out of 192 total deaths from this tsunami) and widespread structural and ecological damage. The surrounding coral reefs also suffered heavy damage. The damage was formally evaluated based on detailed surveys before and immediately after the tsunami. This setting thus provides a unique opportunity to evaluate the relationship between tsunami dynamics and coral damage. In this study, estimates of the maximum wave amplitudes and coastal inundation of the tsunami are obtained with the MOST model (Titov and Synolakis, J. Waterway Port Coast Ocean Eng: pp 171, 1998; Titov and Gonzalez, NOAA Tech. Memo. ERL PMEL 112:11, 1997), which is now the operational tsunami forecast tool used by the National Oceanic and Atmospheric Administration (NOAA). The earthquake source function was constrained using the real-time deep-ocean tsunami data from three DART® (Deep-ocean Assessment and Reporting for Tsunamis) systems in the far field, and by tide-gauge observations in the near field. We compare the simulated run-up with observations to evaluate the simulation performance. We present an overall synthesis of the tide-gauge data, survey results of the run-up, inundation measurements, and the datasets of coral damage around the island. These data are used to assess the overall accuracy of the model run-up prediction for Tutuila, and to evaluate the model accuracy over the coral reef environment during the tsunami event. Our primary findings are that: (1) MOST-simulated run-up correlates well with observed run-up for this event (r = 0.8), but it tends to underestimate amplitudes over the coral reef environment around Tutuila (for 15 of 31 villages, run-up is underestimated by more than 10%; in only 5 was run-up overestimated by more than 10%), and (2) the locations where the model underestimates run-up also tend to have experienced heavy or very heavy coral damage (8 of the 15 villages), whereas well-estimated run-up locations characteristically experience low or very low damage (7 of 11 villages). These findings imply that a numerical model may overestimate the energy loss of the tsunami waves during their interaction with the coral reef. We plan future studies to quantify this energy loss and to explore what improvements can be made in simulations of tsunami run-up when simulating coastal environments with fringing coral reefs.
Determining dark matter properties with a XENONnT/LZ signal and LHC Run 3 monojet searches
NASA Astrophysics Data System (ADS)
Baum, Sebastian; Catena, Riccardo; Conrad, Jan; Freese, Katherine; Krauss, Martin B.
2018-04-01
We develop a method to forecast the outcome of the LHC Run 3 based on the hypothetical detection of O(100) signal events at XENONnT. Our method relies on a systematic classification of renormalizable single-mediator models for dark matter-quark interactions and is valid for dark matter candidates of spin less than or equal to one. Applying our method to simulated data, we find that at the end of the LHC Run 3 only two mutually exclusive scenarios would be compatible with the detection of O(100) signal events at XENONnT. In the first scenario, the energy distribution of the signal events is featureless, as for canonical spin-independent interactions. In this case, if a monojet signal is detected at the LHC, dark matter must have spin 1/2 and interact with nucleons through a unique velocity-dependent operator. If a monojet signal is not detected, dark matter interacts with nucleons through canonical spin-independent interactions. In a second scenario, the spectral distribution of the signal events exhibits a bump at nonzero recoil energies. In this second case, a monojet signal can be detected at the LHC Run 3; dark matter must have spin 1/2 and interact with nucleons through a unique momentum-dependent operator. We therefore conclude that the observation of O(100) signal events at XENONnT combined with the detection, or the lack of detection, of a monojet signal at the LHC Run 3 would significantly narrow the range of possible dark matter-nucleon interactions. As we argued above, it can also provide key information on the dark matter particle spin.
Arnold, Edith M.; Hamner, Samuel R.; Seth, Ajay; Millard, Matthew; Delp, Scott L.
2013-01-01
The lengths and velocities of muscle fibers have a dramatic effect on muscle force generation. It is unknown, however, whether the lengths and velocities of lower limb muscle fibers substantially affect the ability of muscles to generate force during walking and running. We examined this issue by developing simulations of muscle–tendon dynamics to calculate the lengths and velocities of muscle fibers from electromyographic recordings of 11 lower limb muscles and kinematic measurements of the hip, knee and ankle made as five subjects walked at speeds of 1.0–1.75 m s−1 and ran at speeds of 2.0–5.0 m s−1. We analyzed the simulated fiber lengths, fiber velocities and forces to evaluate the influence of force–length and force–velocity properties on force generation at different walking and running speeds. The simulations revealed that force generation ability (i.e. the force generated per unit of activation) of eight of the 11 muscles was significantly affected by walking or running speed. Soleus force generation ability decreased with increasing walking speed, but the transition from walking to running increased the force generation ability by reducing fiber velocities. Our results demonstrate the influence of soleus muscle architecture on the walk-to-run transition and the effects of muscle–tendon compliance on the plantarflexors' ability to generate ankle moment and power. The study presents data that permit lower limb muscles to be studied in unprecedented detail by relating muscle fiber dynamics and force generation to the mechanical demands of walking and running. PMID:23470656
Massively parallel quantum computer simulator
NASA Astrophysics Data System (ADS)
De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.
2007-01-01
We describe portable software to simulate universal quantum computers on massive parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
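Direct state-vector simulation of this kind stores all 2^n complex amplitudes and applies each gate as a small tensor contraction, which is why 36 qubits require on the order of 1 TB of memory. The single-node sketch below illustrates the storage and gate-application pattern; it is a generic illustration, not the paper's MPI-distributed code.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector (length 2**n)."""
    psi = state.reshape([2] * n_qubits)             # one tensor axis per qubit
    psi = np.moveaxis(psi, target, 0)               # bring the target qubit to the front
    psi = np.tensordot(gate, psi, axes=([1], [0]))  # contract the gate with that axis
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard gate

n = 20                                              # 20 qubits -> 2**20 complex amplitudes (~16 MB)
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                      # |00...0>
for q in range(n):                                  # put every qubit into superposition
    state = apply_single_qubit_gate(state, H, q, n)

print("amplitudes:", state.size, "memory (MB):", round(state.nbytes / 2 ** 20, 1))
print("all amplitudes equal 2**(-n/2):", np.allclose(state, 2 ** (-n / 2)))
```

Distributing the state vector over thousands of processors, as the paper does, mainly adds communication whenever a gate acts on a qubit whose amplitude pairs live on different nodes.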
Advanced overlay: sampling and modeling for optimized run-to-run control
NASA Astrophysics Data System (ADS)
Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.
2016-03-01
In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges by new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer to wafer and within wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field by field extrapolated modeling algorithm helps to maximize model stability and minimize on product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process or metrology induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, that is unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to estimate stability and ultimately high volume manufacturing tests to monitor OPO by densely measured OVL data.
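The bias-variance trade-off mentioned above can be made concrete with a toy one-dimensional overlay signature: low-order models miss the shape (bias), while high-order models chase wafer-to-wafer noise (variance). The signature, noise level and model orders below are invented for illustration and are not taken from production data.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 25)                     # measurement sites across a wafer (1-D stand-in)
true_signature = 0.3 * x - 0.8 * x ** 3        # "true" overlay signature

def fitted_models(order, n_wafers=200, noise=0.05):
    """Fit a polynomial of the given order to many noisy wafers; return all fitted curves."""
    fits = []
    for _ in range(n_wafers):
        y = true_signature + rng.normal(0, noise, x.size)   # metrology / process noise
        fits.append(np.polyval(np.polyfit(x, y, order), x))
    return np.array(fits)

for order in (1, 3, 9):
    fits = fitted_models(order)
    bias2 = np.mean((fits.mean(axis=0) - true_signature) ** 2)   # systematic model error
    var = np.mean(fits.var(axis=0))                              # wafer-to-wafer model variation
    print(f"order {order}: bias^2 = {bias2:.5f}, variance = {var:.5f}")
```

Running this shows the familiar pattern: the linear model carries most of its error as bias, the ninth-order model as variance, and an intermediate order balances the two, which is the trade-off the sampling and modeling scheme above is designed to manage.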
CMacIonize: Monte Carlo photoionisation and moving-mesh radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Vandenbroucke, Bert; Wood, Kenneth
2018-02-01
CMacIonize simulates the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code and also as a moving-mesh code.
Visualization and Tracking of Parallel CFD Simulations
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kremenetsky, Mark
1995-01-01
We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation running AVS is handled by CM/AVS. Partitioning of the visualization task between the CM-5 and the workstation can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate, store, visualize' post-processing approach.
An experimental investigation of flow around a vehicle passing through a tornado
NASA Astrophysics Data System (ADS)
Suzuki, Masahiro; Obara, Kouhei; Okura, Nobuyuki
2016-03-01
Flow around a vehicle running through a tornado was investigated experimentally. A tornado simulator was developed to generate a tornado-like swirl flow. A PIV study confirmed that the simulator generates the two-celled vortices observed in natural tornadoes. A moving test rig was developed to run a 1/40-scale train-shaped model vehicle under the tornado simulator. The car contained pressure sensors and a data logger with an AD converter to measure unsteady surface pressures during its run through the swirling flow. Aerodynamic forces acting on the vehicle were estimated from the pressure data. The results show that the aerodynamic forces change their magnitude and direction depending on the position of the car in the swirling flow. The asymmetry of the forces about the vortex centre suggests the vehicle itself may deform the flow field.
Coupling vibration research on Vehicle-bridge system
NASA Astrophysics Data System (ADS)
Zhou, Jiguo; Wang, Guihua
2018-01-01
A vehicle-bridge coupling system forms when a vehicle runs on a bridge, and large vehicle vibrations have a considerable influence on driving comfort and driving safety. In this paper, a three-dimensional vehicle-bridge system with a biaxial, seven-degree-of-freedom vehicle is established based on finite element numerical simulation. Finite element transient analysis is adopted to realize the numerical simulation of vehicle-bridge coupled vibration. The dynamic responses of the vehicle and the bridge are then analyzed for different numbers of vehicles running on the bridge, yielding the variation of the vertical vibration of the car body and the bridge, and of the contact force between the wheels and the bridge deck. The results provide a reference for analyzing vehicles running on long-span cable-supported bridges.
DWPF SIMULANT CPC STUDIES FOR SB7B
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koopman, D.
2011-11-01
Lab-scale DWPF simulations of Sludge Batch 7b (SB7b) processing were performed. Testing was performed at the Savannah River National Laboratory - Aiken County Technology Laboratory (SRNL-ACTL). The primary goal of the simulations was to define a likely operating window for acid stoichiometry for the DWPF Sludge Receipt and Adjustment Tank (SRAT). In addition, the testing established conditions for the SRNL Shielded Cells qualification simulation of SB7b-Tank 40 blend, supported validation of the current glass redox model, and validated the coupled process flowsheet at the nominal acid stoichiometry. An acid window of 105-140% by the Koopman minimum acid (KMA) equation (107-142% DWPF Hsu equation) worked for the sludge-only flowsheet. Nitrite was present in the SRAT product for the 105% KMA run at 366 mg/kg, while SME cycle hydrogen reached 94% of the DWPF Slurry Mix Evaporator (SME) cycle limit in the 140% KMA run. The window was determined for sludge with added caustic (0.28M additional base, or roughly 12,000 gallons 50% NaOH to 820,000 gallons waste slurry). A suitable processing window appears to be 107-130% DWPF acid equation for sludge-only processing allowing some conservatism for the mapping of lab-scale simulant data to full-scale real waste processing including potentially non-conservative noble metal and mercury concentrations. This window should be usable with or without the addition of up to 7,000 gallons of caustic to the batch. The window could potentially be wider if caustic is not added to SB7b. It is recommended that DWPF begin processing SB7b at 115% stoichiometry using the current DWPF equation. The factor could be increased if necessary, but changes should be made with caution and in small increments. DWPF should not concentrate past 48 wt.% total solids in the SME cycle if moderate hydrogen generation is occurring simultaneously. The coupled flowsheet simulation made more hydrogen in the SRAT and SME cycles than the sludge-only run with the same acid stoichiometric factor. The slow acid addition in MCU seemed to alter the reactions that consumed the small excess acid present such that hydrogen generation was promoted relative to sludge-only processing. The coupled test reached higher wt.% total solids, and this likely contributed to the SME cycle hydrogen limit being exceeded at 110% KMA. It is clear from the trends in the SME processing GC data, however, that the frit slurry formic acid contributed to driving the hydrogen generation rate above the SME cycle limit. Hydrogen generation rates after the second frit addition generally exceeded those after the first frit addition. SRAT formate loss increased with increasing acid stoichiometry (15% to 35%). A substantial nitrate gain which was observed to have occurred after acid addition (and nitrite destruction) was reversed to a net nitrate loss in runs with higher acid stoichiometry (nitrate in SRAT product less than sum of sludge nitrate and added nitric acid). Increased ammonium ion formation was also indicated in the runs with nitrate loss. Oxalate loss on the order of 20% was indicated in three of the four acid stoichiometry runs and in the coupled flowsheet run. The minimum acid stoichiometry run had no indicated loss. The losses were of the same order as the official analytical uncertainty of the oxalate concentration measurement, but were not randomly distributed about zero loss, so some actual loss was likely occurring.
Based on the entire set of SB7b test data, it is recommended that DWPF avoid concentrating additional sludge solids in single SRAT batches to limit the concentrations of noble metals to SB7a processing levels (on a grams noble metal per SRAT batch basis). It is also recommended that DWPF drop the formic acid addition that accompanies the process frit 418 additions, since SME cycle data showed considerable catalytic activity for hydrogen generation from this additional acid (about 5% increase in stoichiometry occurred from the frit formic acid). Frit 418 also does not appear to need formic acid addition to prevent gel formation in the frit slurry. Simulant processing was successful using 100 ppm of 747 antifoam added prior to nitric acid instead of 200 ppm. This is a potential area for DWPF to cut antifoam usage in any future test program. An additional 100 ppm was added before formic acid addition. Foaming during formic acid addition was not observed. No build-up of oily or waxy material was observed in the off-gas equipment. Lab-scale mercury stripping behavior was similar to SB6 and SB7a. More mercury was unaccounted for as the acid stoichiometry increased.
The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models
NASA Technical Reports Server (NTRS)
Penn, John M.
2016-01-01
The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.
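To illustrate the pattern the abstract describes (user-supplied models plus automatic job scheduling, integration and data recording), here is a deliberately tiny sketch in Python; the class and method names are illustrative only and are not Trick's C/C++ API.

```python
class MiniSim:
    """Toy fixed-step scheduler and integrator loosely in the spirit of a Trick-style run loop
    (names are illustrative, not Trick's API)."""

    def __init__(self, dt):
        self.dt, self.t, self.jobs, self.log = dt, 0.0, [], []

    def add_job(self, period, func):
        self.jobs.append({"period": period, "next": 0.0, "func": func})

    def run(self, t_end):
        while self.t < t_end:
            for job in self.jobs:                      # call each scheduled job when it is due
                if self.t + 1e-12 >= job["next"]:
                    job["func"](self.t)
                    job["next"] += job["period"]
            self.integrate()                           # advance the continuous states
            self.t += self.dt

    def integrate(self):
        # simple Euler step of a bouncing-ball model (standing in for a user-supplied model)
        self.v -= 9.81 * self.dt
        self.x += self.v * self.dt
        if self.x < 0.0:
            self.x, self.v = 0.0, -0.8 * self.v

sim = MiniSim(dt=0.001)
sim.x, sim.v = 10.0, 0.0                               # ball dropped from 10 m
sim.add_job(0.1, lambda t: sim.log.append((round(t, 3), round(sim.x, 3))))   # 10 Hz data recording
sim.run(3.0)
print(sim.log[::5])
```

Trick itself adds far more (checkpointing, a run-time interpreter, variable manipulation), but the separation between the generated run loop and the user's model code is the same idea.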
29 CFR 1910.305 - Wiring methods, components, and equipment for general use.
Code of Federal Regulations, 2010 CFR
2010-07-01
... distribution center. (B) Conductors shall be run as multiconductor cord or cable assemblies. However, if... persons, feeders may be run as single insulated conductors. (v) The following requirements apply to branch... shall be multiconductor cord or cable assemblies or open conductors. If run as open conductors, they...
29 CFR 1910.305 - Wiring methods, components, and equipment for general use.
Code of Federal Regulations, 2011 CFR
2011-07-01
... distribution center. (B) Conductors shall be run as multiconductor cord or cable assemblies. However, if... persons, feeders may be run as single insulated conductors. (v) The following requirements apply to branch... shall be multiconductor cord or cable assemblies or open conductors. If run as open conductors, they...
29 CFR 1910.305 - Wiring methods, components, and equipment for general use.
Code of Federal Regulations, 2013 CFR
2013-07-01
... distribution center. (B) Conductors shall be run as multiconductor cord or cable assemblies. However, if... persons, feeders may be run as single insulated conductors. (v) The following requirements apply to branch... shall be multiconductor cord or cable assemblies or open conductors. If run as open conductors, they...
29 CFR 1910.305 - Wiring methods, components, and equipment for general use.
Code of Federal Regulations, 2014 CFR
2014-07-01
... distribution center. (B) Conductors shall be run as multiconductor cord or cable assemblies. However, if... persons, feeders may be run as single insulated conductors. (v) The following requirements apply to branch... shall be multiconductor cord or cable assemblies or open conductors. If run as open conductors, they...
29 CFR 1910.305 - Wiring methods, components, and equipment for general use.
Code of Federal Regulations, 2012 CFR
2012-07-01
... distribution center. (B) Conductors shall be run as multiconductor cord or cable assemblies. However, if... persons, feeders may be run as single insulated conductors. (v) The following requirements apply to branch... shall be multiconductor cord or cable assemblies or open conductors. If run as open conductors, they...
Hollander, Karsten; Argubi-Wollesen, Andreas; Reer, Rüdiger; Zech, Astrid
2015-01-01
Possible benefits of barefoot running have been widely discussed in recent years. Uncertainty exists about which footwear strategy adequately simulates barefoot running kinematics. The objective of this study was to investigate the effects of athletic footwear with different minimalist strategies on running kinematics. Thirty-five distance runners (22 males, 13 females, 27.9 ± 6.2 years, 179.2 ± 8.4 cm, 73.4 ± 12.1 kg, 24.9 ± 10.9 km.week-1) performed a treadmill protocol at three running velocities (2.22, 2.78 and 3.33 m.s-1) using four footwear conditions: barefoot, uncushioned minimalist shoes, cushioned minimalist shoes, and standard running shoes. 3D kinematic analysis was performed to determine ankle and knee angles at initial foot-ground contact, rate of rear-foot strikes, stride frequency and step length. Ankle angle at foot strike, step length and stride frequency were significantly influenced by footwear conditions (p<0.001) at all running velocities. Posthoc pairwise comparisons showed significant differences (p<0.001) between running barefoot and all shod situations as well as between the uncushioned minimalistic shoe and both cushioned shoe conditions. The rate of rear-foot strikes was lowest during barefoot running (58.6% at 3.33 m.s-1), followed by running with uncushioned minimalist shoes (62.9%), cushioned minimalist (88.6%) and standard shoes (94.3%). Aside from showing the influence of shod conditions on running kinematics, this study helps to elucidate differences between footwear marked as minimalist shoes and their ability to mimic barefoot running adequately. These findings have implications on the use of footwear applied in future research debating the topic of barefoot or minimalist shoe running. PMID:26011042
Hollander, Karsten; Argubi-Wollesen, Andreas; Reer, Rüdiger; Zech, Astrid
2015-01-01
Possible benefits of barefoot running have been widely discussed in recent years. Uncertainty exists about which footwear strategy adequately simulates barefoot running kinematics. The objective of this study was to investigate the effects of athletic footwear with different minimalist strategies on running kinematics. Thirty-five distance runners (22 males, 13 females, 27.9 ± 6.2 years, 179.2 ± 8.4 cm, 73.4 ± 12.1 kg, 24.9 ± 10.9 km x week(-1)) performed a treadmill protocol at three running velocities (2.22, 2.78 and 3.33 m x s(-1)) using four footwear conditions: barefoot, uncushioned minimalist shoes, cushioned minimalist shoes, and standard running shoes. 3D kinematic analysis was performed to determine ankle and knee angles at initial foot-ground contact, rate of rear-foot strikes, stride frequency and step length. Ankle angle at foot strike, step length and stride frequency were significantly influenced by footwear conditions (p<0.001) at all running velocities. Posthoc pairwise comparisons showed significant differences (p<0.001) between running barefoot and all shod situations as well as between the uncushioned minimalistic shoe and both cushioned shoe conditions. The rate of rear-foot strikes was lowest during barefoot running (58.6% at 3.33 m x s(-1)), followed by running with uncushioned minimalist shoes (62.9%), cushioned minimalist (88.6%) and standard shoes (94.3%). Aside from showing the influence of shod conditions on running kinematics, this study helps to elucidate differences between footwear marked as minimalist shoes and their ability to mimic barefoot running adequately. These findings have implications on the use of footwear applied in future research debating the topic of barefoot or minimalist shoe running.
Durham extremely large telescope adaptive optics simulation platform.
Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard
2007-03-01
Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. The simulation of adaptive optics systems is therefore necessary to categorize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control while the simulation is running. The results from the simulation of a ground layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.
Development of the CELSS emulator at NASA. Johnson Space Center
NASA Technical Reports Server (NTRS)
Cullingford, Hatice S.
1990-01-01
The Closed Ecological Life Support System (CELSS) Emulator is under development. It will be used to investigate computer simulations of integrated CELSS operations involving humans, plants, and process machinery. Described here is Version 1.0 of the CELSS Emulator that was initiated in 1988 on the Johnson Space Center (JSC) Multi Purpose Applications Console Test Bed as the simulation framework. The run model of the simulation system now contains a CELSS model called BLSS. The CELSS simulator empowers us to generate model data sets, store libraries of results for further analysis, and also display plots of model variables as a function of time. The progress of the project is presented with sample test runs and simulation display pages.
Multi-GPGPU Tsunami simulation at Toyama-bay
NASA Astrophysics Data System (ADS)
Furuyama, Shoichi; Ueda, Yuki
2017-07-01
Accelerated multi-General-Purpose-Graphics-Processing-Unit (GPGPU) calculation of tsunami run-up was achieved over a wide area (the whole of Toyama Bay, Japan) using a faster computation technique. Toyama Bay has active faults on the seabed, so a huge earthquake could generate earthquakes and tsunami waves; predicting the tsunami run-up area is therefore important for reducing damage to residents. However, the simulation is a hard task because of limited computer resources. A resolution on the order of several meters is required for run-up simulation because artificial structures on the ground, such as roads, buildings, and houses, are very small, while at the same time a huge simulation area is required; in the Toyama Bay case the area is 42 km × 15 km. With 5 m × 5 m computational cells, over 26,000,000 cells are generated, and a normal desktop CPU took about 10 hours for the calculation. Reducing the calculation time is an important problem for an immediate tsunami run-up prediction system, which in turn helps protect residents of coastal regions. This study reduced the calculation time by using a multi-GPGPU system equipped with six NVIDIA TESLA K20X cards, with InfiniBand connections between compute nodes via the MVAPICH library. As a result, the calculation was 5.16 times faster on six GPUs than on one GPU, corresponding to 86% parallel efficiency relative to linear speedup.
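The reported parallel efficiency follows directly from the measured speedup and the GPU count (efficiency = achieved speedup divided by the ideal linear speedup):

```python
n_gpus, speedup = 6, 5.16
efficiency = speedup / n_gpus                       # fraction of ideal linear speedup achieved
print(f"parallel efficiency = {efficiency:.0%}")    # prints 86%, matching the reported value
```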
The Role of Sea Ice in 2 x CO2 Climate Model Sensitivity. Part 2; Hemispheric Dependencies
NASA Technical Reports Server (NTRS)
Rind, D.; Healy, R.; Parkinson, C.; Martinson, D.
1997-01-01
How sensitive are doubled CO2 simulations to GCM control-run sea ice thickness and extent? This issue is examined in a series of 10 control-run simulations with different sea ice and corresponding doubled CO2 simulations. Results show that with increased control-run sea ice coverage in the Southern Hemisphere, temperature sensitivity with climate change is enhanced, while there is little effect on temperature sensitivity of (reasonable) variations in control-run sea ice thickness. In the Northern Hemisphere the situation is reversed: sea ice thickness is the key parameter, while (reasonable) variations in control-run sea ice coverage are of less importance. In both cases, the quantity of sea ice that can be removed in the warmer climate is the determining factor. Overall, the Southern Hemisphere sea ice coverage change had a larger impact on global temperature, because Northern Hemisphere sea ice was sufficiently thick to limit its response to doubled CO2, and sea ice changes generally occurred at higher latitudes, reducing the sea ice-albedo feedback. In both these experiments and earlier ones in which sea ice was not allowed to change, the model displayed a sensitivity of -0.02 C global warming per percent change in Southern Hemisphere sea ice coverage.
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis
NASA Technical Reports Server (NTRS)
Hanson, J. M.; Beard, B. B.
2010-01-01
This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
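One common rule of thumb behind "how many runs are necessary for verification of requirements" is the zero-failure binomial bound: if n runs all succeed, the requirement "success probability at least R" is demonstrated at confidence C once R^n <= 1 - C. The sketch below applies that bound; it is a generic illustration, not a reproduction of the tables and consumer-risk derivations in the TP's appendices.

```python
import math

def runs_needed(reliability, confidence):
    """Smallest n such that observing zero failures in n Monte Carlo runs demonstrates
    'success probability >= reliability' at the given confidence level."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

for R, C in [(0.99, 0.90), (0.997, 0.90), (0.9973, 0.95)]:
    print(f"reliability {R}, confidence {C}: {runs_needed(R, C)} runs with no failures")
```

Allowing a small number of observed failures, or demanding results for rare events, pushes the required run counts higher, which is exactly why the TP devotes separate sections to those cases.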
Limits to high-speed simulations of spiking neural networks using general-purpose computers.
Zenke, Friedemann; Gerstner, Wulfram
2014-01-01
To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
Simulations of Eurasian winter temperature trends in coupled and uncoupled CFSv2
NASA Astrophysics Data System (ADS)
Collow, Thomas W.; Wang, Wanqiu; Kumar, Arun
2018-01-01
Conflicting results have been presented regarding the link between Arctic sea-ice loss and midlatitude cooling, particularly over Eurasia. This study analyzes uncoupled (atmosphere-only) and coupled (ocean-atmosphere) simulations by the Climate Forecast System, version 2 (CFSv2), to examine this linkage during the Northern Hemisphere winter, focusing on the simulation of the observed surface cooling trend over Eurasia during the last three decades. The uncoupled simulations are Atmospheric Model Intercomparison Project (AMIP) runs forced with mean seasonal cycles of sea surface temperature (SST) and sea ice, using combinations of SST and sea ice from different time periods to assess the role that each plays individually, and to assess the role of atmospheric internal variability. Coupled runs are used to further investigate the role of internal variability via the analysis of initialized predictions and the evolution of the forecast with lead time. The AMIP simulations show a mean warming response over Eurasia due to SST changes, but little response to changes in sea ice. Individual runs simulate cooler periods over Eurasia, and this is shown to be concurrent with a stronger Siberian high and warming over Greenland. No substantial differences in the variability of Eurasian surface temperatures are found between the different model configurations. In the coupled runs, the region of significant warming over Eurasia is small at short leads, but increases at longer leads. It is concluded that, although the models have some capability in highlighting the temperature variability over Eurasia, the observed cooling may still be a consequence of internal variability.
Do downscaled general circulation models reliably simulate historical climatic conditions?
Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight
2018-01-01
The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test, with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
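The two-sample KS comparison used in the study can be reproduced in a few lines. The gamma-distributed monthly precipitation samples below are synthetic stand-ins for the gridded station data and one downscaled GCM at a single grid cell, and the 0.05 significance level matches the one quoted above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Stand-ins for one grid cell's monthly precipitation (mm): gridded station data vs. one SD GCM
gsd_ppt = rng.gamma(shape=2.0, scale=30.0, size=672)    # 1950-2005, monthly
sd_ppt = rng.gamma(shape=2.2, scale=28.0, size=672)     # downscaled simulation

stat, p_value = stats.ks_2samp(gsd_ppt, sd_ppt)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
print("distributions judged consistent at the 0.05 level" if p_value >= 0.05
      else "downscaled GCM fails to reproduce the observed distribution")
```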
Arifin, S M Niaz; Madey, Gregory R; Collins, Frank H
2013-08-21
Agent-based models (ABMs) have been used to estimate the effects of malaria-control interventions. Early studies have shown the efficacy of larval source management (LSM) and insecticide-treated nets (ITNs) as vector-control interventions, applied both in isolation and in combination. However, the robustness of results can be affected by several important modelling assumptions, including the type of boundary used for landscapes, and the number of replicated simulation runs reported in results. Selection of the ITN coverage definition may also affect the predictive findings. Hence, by replication, independent verification of prior findings of published models bears special importance. A spatially-explicit entomological ABM of Anopheles gambiae is used to simulate the resource-seeking process of mosquitoes in grid-based landscapes. To explore LSM and replicate results of an earlier LSM study, the original landscapes and scenarios are replicated by using a landscape generator tool, and 1,800 replicated simulations are run using absorbing and non-absorbing boundaries. To explore ITNs and evaluate the relative impacts of the different ITN coverage schemes, the settings of an earlier ITN study are replicated, the coverage schemes are defined and simulated, and 9,000 replicated simulations for three ITN parameters (coverage, repellence and mortality) are run. To evaluate LSM and ITNs in combination, landscapes with varying densities of houses and human populations are generated, and 12,000 simulations are run. General agreement with an earlier LSM study is observed when an absorbing boundary is used. However, using a non-absorbing boundary produces significantly different results, which may be attributed to the unrealistic killing effect of an absorbing boundary. Abundance cannot be completely suppressed by removing aquatic habitats within 300 m of houses. Also, with density-dependent oviposition, removal of insufficient number of aquatic habitats may prove counter-productive. The importance of performing large number of simulation runs is also demonstrated. For ITNs, the choice of coverage scheme has important implications, and too high repellence yields detrimental effects. When LSM and ITNs are applied in combination, ITNs' mortality can play more important roles with higher densities of houses. With partial mortality, increasing ITN coverage is more effective than increasing LSM coverage, and integrating both interventions yields more synergy as the densities of houses increase. Using a non-absorbing boundary and reporting average results from sufficiently large number of simulation runs are strongly recommended for malaria ABMs. Several guidelines (code and data sharing, relevant documentation, and standardized models) for future modellers are also recommended.
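The sensitivity to the landscape boundary reported above can be illustrated with a toy random-walk dispersal model: an absorbing edge silently removes agents that wander off the grid, whereas a non-absorbing (here reflecting) edge keeps them in play. The grid size, step counts and agent numbers are arbitrary, and the walkers are not a faithful Anopheles gambiae model.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(n_agents, n_steps, size=100, boundary="reflecting"):
    """Random-walk stand-in for mosquito dispersal on a square grid.
    'absorbing' removes agents that step off the grid; 'reflecting' bounces them back."""
    pos = rng.integers(0, size, (n_agents, 2)).astype(float)
    alive = np.ones(n_agents, dtype=bool)
    for _ in range(n_steps):
        pos[alive] += rng.integers(-1, 2, (alive.sum(), 2))
        if boundary == "absorbing":
            alive &= np.all((pos >= 0) & (pos < size), axis=1)
        else:                                   # reflecting (non-absorbing)
            pos = np.clip(pos, 0, size - 1)
    return alive.sum()

for boundary in ("absorbing", "reflecting"):
    survivors = simulate(n_agents=5000, n_steps=2000, boundary=boundary)
    print(f"{boundary:10s} boundary: {survivors} of 5000 agents remain after 2000 steps")
```

The absorbing case loses a large fraction of its population at the edges for reasons unrelated to any intervention, which is the unrealistic killing effect the study cautions against; averaging such runs over many replications is also why a large number of simulation runs is recommended.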
Zhang, Di; Li, Ruiqi; Batchelor, William D; Ju, Hui; Li, Yanming
2018-01-01
The North China Plain is one of the most important grain production regions in China, but is facing serious water shortages. To achieve a balance between water use and the need for food self-sufficiency, new water-efficient irrigation strategies need to be developed that balance water use with farmer net return. The Crop Environment Resource Synthesis Wheat (CERES-Wheat) model was calibrated and evaluated with two years of data consisting of 3-4 irrigation treatments, and the model was used to investigate long-term winter wheat productivity and water use under different irrigation management in the North China Plain. The calibrated model accurately simulated above-ground biomass, grain yield and evapotranspiration of winter wheat in response to irrigation management. The calibrated model was then run using weather data from 1994-2016 in order to evaluate different irrigation strategies. The simulated results using historical weather data showed that grain yield and water use were sensitive to different irrigation strategies, including the amounts and dates of irrigation applications. The model simulated the highest yield when irrigation was applied at jointing (T9) in normal and dry rainfall years, and gave the highest simulated yields for irrigation at double ridge (T8) in wet years. A single simulated irrigation at jointing (T9) produced yields that were 88% of those obtained with a double irrigation treatment at T1 and T9 in wet years, 86% in normal years, and 91% in dry years. A single irrigation at jointing or double ridge produced higher water use efficiency because it obtained higher evapotranspiration. The simulated farmer irrigation practices produced the highest yield and net income. When the cost of water was taken into account, limited irrigation was found to be more profitable based on assumptions about future water costs. In order to increase farmer income, a subsidy will likely be needed to compensate farmers for yield reductions due to water savings. These results showed that there is a cost to the farmer for water conservation, but limiting irrigation to a single application at jointing would minimize the impact on farmer net return in the North China Plain.
Adaptive Grid Refinement for Atmospheric Boundary Layer Simulations
NASA Astrophysics Data System (ADS)
van Hooft, Antoon; van Heerwaarden, Chiel; Popinet, Stephane; van der linden, Steven; de Roode, Stephan; van de Wiel, Bas
2017-04-01
We validate and benchmark an adaptive mesh refinement (AMR) algorithm for numerical simulations of the atmospheric boundary layer (ABL). The AMR technique aims to distribute the computational resources efficiently over a domain by refining and coarsening the numerical grid locally and in time. This can be beneficial for studying cases in which length scales vary significantly in time and space. We present the results for a case describing the growth and decay of a convective boundary layer. The AMR results are benchmarked against two runs using a fixed, finely meshed grid: first with the same numerical formulation as the AMR code, and second with a code dedicated to ABL studies. Compared to the fixed, isotropic grid runs, the AMR algorithm can coarsen and refine the grid such that accurate results are obtained whilst using only a fraction of the grid cells. In terms of performance, the AMR run was cheaper than the fixed, isotropic grid run with the same numerical formulation. However, for this specific case, the dedicated code outperformed both aforementioned runs.
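A schematic of the refine/coarsen idea in one dimension, using a simple curvature-like error indicator; this is only an illustration of local grid adaptation and is not the AMR algorithm or refinement criterion used in the study:

```python
# Illustrative sketch of local grid adaptation driven by an error indicator.
import numpy as np

def adapt(x, f, refine_tol=0.05, coarsen_tol=0.01):
    """Return new cell-centre coordinates after one refine/coarsen pass."""
    new_x = [x[0]]
    for i in range(1, len(x) - 1):
        err = abs(f[i + 1] - 2.0 * f[i] + f[i - 1])   # curvature-like indicator
        if err > refine_tol:
            # refine: insert midpoints on both sides of cell i
            new_x.extend([0.5 * (x[i - 1] + x[i]), x[i], 0.5 * (x[i] + x[i + 1])])
        elif err < coarsen_tol and i % 2 == 0:
            continue                                   # coarsen: drop every other smooth cell
        else:
            new_x.append(x[i])
    new_x.append(x[-1])
    return np.unique(np.array(new_x))                  # sort and remove duplicates

x = np.linspace(0.0, 1.0, 65)
f = np.tanh((x - 0.5) / 0.02)                          # sharp internal layer
x_adapted = adapt(x, f)
print(len(x), "->", len(x_adapted), "cells after adaptation")
```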
Pre-Stall Behavior of a Transonic Axial Compressor Stage via Time-Accurate Numerical Simulation
NASA Technical Reports Server (NTRS)
Chen, Jen-Ping; Hathaway, Michael D.; Herrick, Gregory P.
2008-01-01
CFD calculations using high-performance parallel computing were conducted to simulate the pre-stall flow of a transonic compressor stage, NASA compressor Stage 35. The simulations were run with a full-annulus grid that models the 3D, viscous, unsteady blade row interaction without the need for an artificial inlet distortion to induce stall. The simulation demonstrates the development of rotating stall from the growth of instabilities. Pressure-rise performance and pressure traces are compared with published experimental data before the flow evolution leading up to rotating stall is studied. Spatial FFT analysis of the flow indicates a rotating long-length-scale disturbance spanning one rotor circumference, which is followed by a spike-type breakdown. The analysis also links the long-length-scale disturbance with the initiation of spike inception. The spike instabilities occur when the trajectory of the tip clearance flow becomes perpendicular to the axial direction. As stall is approached, the passage shock changes from a single oblique shock to a dual-shock structure, which distorts the perpendicular trajectory of the tip clearance vortex but shows no evidence of flow separation that may contribute to stall.
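As a rough illustration of the kind of spatial (circumferential) Fourier decomposition used to identify a long-length-scale rotating disturbance, the following sketch decomposes a synthetic pressure trace around the annulus into spatial modes; the probe count, signal shape and values are invented for illustration:

```python
# Decompose a circumferential pressure trace into spatial Fourier modes.
import numpy as np

n_probes = 64                                   # samples around the annulus
theta = np.linspace(0.0, 2.0 * np.pi, n_probes, endpoint=False)

# Synthetic pressure field: a weak 1-per-rev modal wave plus a sharp local spike.
p = 0.05 * np.cos(theta - 0.3) + 0.5 * np.exp(-((theta - 2.0) / 0.05) ** 2)

modes = np.fft.rfft(p) / n_probes               # complex amplitude of mode m
for m in range(1, 6):
    amp = 2.0 * np.abs(modes[m])
    phase = np.angle(modes[m])
    print(f"mode {m}: amplitude {amp:.4f}, phase {phase:+.2f} rad")
# Tracking the phase of mode 1 over successive rotor revolutions gives the
# rotation rate of the long-length-scale disturbance.
```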
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated at every time step. However, Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since there exist data dependencies from the end of an iteration to the beginning of the next iteration, and furthermore data input and data output are required at every sampling period. Therefore, the parallelism inside the calculation required for a single time step, namely a large basic block consisting of arithmetic assignment statements, must be exploited. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and are assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead that near-fine-grain tasks would otherwise incur. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantages of static scheduling algorithms to the maximum extent.
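The static scheduling idea can be illustrated with a simple list scheduler; the sketch below is a generic highest-level-first heuristic on a toy task graph and is not the OSCAR compiler's actual scheduling algorithm:

```python
# Generic static list scheduling of small tasks onto processors.
from collections import defaultdict
from functools import lru_cache

tasks = {"a": 2, "b": 3, "c": 2, "d": 4, "e": 1}           # task -> cost
deps = {"c": ["a"], "d": ["a", "b"], "e": ["c", "d"]}       # task -> predecessors
succs = defaultdict(list)
for t, ps in deps.items():
    for p in ps:
        succs[p].append(t)

@lru_cache(maxsize=None)
def level(t):
    """Length of the longest path from t to an exit task (including t)."""
    return tasks[t] + max((level(s) for s in succs[t]), default=0)

order = sorted(tasks, key=level, reverse=True)              # highest level first
n_proc = 2
proc_free = [0.0] * n_proc
finish = {}
for t in order:
    ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
    proc = min(range(n_proc), key=lambda i: max(proc_free[i], ready))
    start = max(proc_free[proc], ready)
    finish[t] = start + tasks[t]
    proc_free[proc] = finish[t]
    print(f"task {t} -> P{proc}, start {start}, finish {finish[t]}")
```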
NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.
Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C
2011-09-14
An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics
Lessons learned from comparing molecular dynamics engines on the SAMPL5 dataset.
Shirts, Michael R; Klein, Christoph; Swails, Jason M; Yin, Jian; Gilson, Michael K; Mobley, David L; Case, David A; Zhong, Ellen D
2017-01-01
We describe our efforts to prepare common starting structures and models for the SAMPL5 blind prediction challenge. We generated the starting input files and single configuration potential energies for the host-guest in the SAMPL5 blind prediction challenge for the GROMACS, AMBER, LAMMPS, DESMOND and CHARMM molecular simulation programs. All conversions were fully automated from the originally prepared AMBER input files using a combination of the ParmEd and InterMol conversion programs. We find that the energy calculations for all molecular dynamics engines for this molecular set agree to better than 0.1 % relative absolute energy for all energy components, and in most cases an order of magnitude better, when reasonable choices are made for different cutoff parameters. However, there are some surprising sources of statistically significant differences. Most importantly, different choices of Coulomb's constant between programs are one of the largest sources of discrepancies in energies. We discuss the measures required to get good agreement in the energies for equivalent starting configurations between the simulation programs, and the energy differences that occur when simulations are run with program-specific default simulation parameter values. Finally, we discuss what was required to automate this conversion and comparison.
Microcanonical ensemble simulation method applied to discrete potential fluids
NASA Astrophysics Data System (ADS)
Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro
2015-09-01
In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed that measures the transition probabilities between macroscopic states; its advantage with respect to conventional Monte Carlo NVT (MC-NVT) simulations is that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of the application of the Hüller-Pleimling method to discrete-potential systems, which are based on generalizations of the SW and square-shoulder fluids.
BioFVM: an efficient, parallelized diffusive transport solver for 3-D biological simulations
Ghaffarizadeh, Ahmadreza; Friedman, Samuel H.; Macklin, Paul
2016-01-01
Motivation: Computational models of multicellular systems require solving systems of PDEs for release, uptake, decay and diffusion of multiple substrates in 3D, particularly when incorporating the impact of drugs, growth substrates and signaling factors on cell receptors and subcellular systems biology. Results: We introduce BioFVM, a diffusive transport solver tailored to biological problems. BioFVM can simulate release and uptake of many substrates by cell and bulk sources, diffusion and decay in large 3D domains. It has been parallelized with OpenMP, allowing efficient simulations on desktop workstations or single supercomputer nodes. The code is stable even for large time steps, with linear computational cost scalings. Solutions are first-order accurate in time and second-order accurate in space. The code can be run by itself or as part of a larger simulator. Availability and implementation: BioFVM is written in C++ with parallelization in OpenMP. It is maintained and available for download at http://BioFVM.MathCancer.org and http://BioFVM.sf.net under the Apache License (v2.0). Contact: paul.macklin@usc.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26656933
Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S
2014-12-01
We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.
Real-time visual simulation of APT system based on RTW and Vega
NASA Astrophysics Data System (ADS)
Xiong, Shuai; Fu, Chengyu; Tang, Tao
2012-10-01
The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that the simulation result of running the C code is the same as that of running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, an APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on a programmable GPU. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation fidelity.
NASA Astrophysics Data System (ADS)
Konya, Andrew; Santangelo, Christian; Selinger, Robin
2014-03-01
When the underlying microstructure of an actuatable material varies in space, simple sheets can transform into complex shapes. Using nonlinear finite element elastodynamic simulations, we explore the design space of two such materials: liquid crystal elastomers and swelling polymer gels. Liquid crystal elastomers (LCE) undergo shape transformations induced by stimuli such as heating/cooling or illumination; complex deformations may be programmed by ``blueprinting'' a non-uniform director field in the sample when the polymer is cross-linked. Similarly, swellable gels can undergo shape change when they are swollen anisotropically as programmed by recently developed halftone gel lithography techniques. For each of these materials we design and test programmable motifs which give rise to complex deformation trajectories including folded structures, soft swimmers, apertures that open and close, bas relief patterns, and other shape transformations inspired by art and nature. In order to accommodate the large computational needs required to model these materials, our 3-d nonlinear finite element elastodynamics simulation algorithm is implemented in CUDA, running on a single GPU-enabled workstation.
Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F
2002-02-01
This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
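A hedged sketch of the photon-partitioning pattern (not the SIMIND code) using mpi4py, with per-rank seeding standing in for the SPRNG uncorrelated streams described above; the "transport" routine is a toy placeholder:

```python
# Partition Monte Carlo photon histories across MPI ranks and reduce the tallies.
# Run with e.g. `mpirun -n 8 python photon_mpi.py` (requires mpi4py and NumPy).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_TOTAL = 10_000_000
n_local = N_TOTAL // size + (1 if rank < N_TOTAL % size else 0)

# Seeding each rank differently stands in for an uncorrelated stream generator.
rng = np.random.default_rng(12345 + rank)

def transport_batch(n, rng):
    """Toy 'transport': count photons whose sampled path length exceeds 1 cm."""
    path = rng.exponential(scale=0.7, size=n)
    return int(np.count_nonzero(path > 1.0))

local_tally = transport_batch(n_local, rng)
total_tally = comm.reduce(local_tally, op=MPI.SUM, root=0)

if rank == 0:
    print(f"detected fraction: {total_tally / N_TOTAL:.5f}")
```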
Research on three-phase traffic flow modeling based on interaction range
NASA Astrophysics Data System (ADS)
Zeng, Jun-Wei; Yang, Xu-Gang; Qian, Yong-Sheng; Wei, Xu-Ting
2017-12-01
On the basis of the multiple velocity difference effect (MVDE) model and under a short-range interaction, a new three-phase traffic flow model (S-MVDE) is proposed through careful consideration of how the relationship between the speeds of two adjacent cars influences the running state of the rear car. The random slowing rule in the MVDE model is modified to emphasize the influence of the interaction between two vehicles on the probability of deceleration. A single-lane model without a bottleneck structure is simulated under periodic boundary conditions, and it is shown that the traffic flow simulated by the S-MVDE model generates the synchronous flow of three-phase traffic theory. Under open boundary conditions, the model is extended by adding an on-ramp, and the congestion pattern caused by the bottleneck is simulated at different main-road and on-ramp flow rates. The results are compared with the traffic congestion patterns observed by Kerner et al. and are found to be consistent with the congestion characteristics of three-phase traffic flow theory.
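For readers unfamiliar with this class of models, the following sketch shows a generic single-lane cellular-automaton update with a random slowing rule and periodic boundary (Nagel–Schreckenberg style); it only illustrates the kind of rules such models build on and is not the S-MVDE model itself:

```python
# Generic single-lane cellular-automaton traffic model with periodic boundary.
import random

L, N, V_MAX, P_SLOW, STEPS = 200, 40, 5, 0.3, 500
random.seed(0)
pos = sorted(random.sample(range(L), N))   # cell index of each car
vel = [0] * N

for _ in range(STEPS):
    for i in range(N):
        gap = (pos[(i + 1) % N] - pos[i] - 1) % L      # distance to car ahead
        vel[i] = min(vel[i] + 1, V_MAX, gap)           # accelerate, avoid collision
        if vel[i] > 0 and random.random() < P_SLOW:    # random slowing rule
            vel[i] -= 1
    pos = [(pos[i] + vel[i]) % L for i in range(N)]    # periodic boundary

flow = sum(vel) / L
print(f"mean density {N / L:.2f}, mean flow {flow:.3f} vehicles per site per step")
```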
Simulating Ideal Assistive Devices to Reduce the Metabolic Cost of Running
Uchida, Thomas K.; Seth, Ajay; Pouya, Soha; Dembia, Christopher L.; Hicks, Jennifer L.; Delp, Scott L.
2016-01-01
Tools have been used for millions of years to augment the capabilities of the human body, allowing us to accomplish tasks that would otherwise be difficult or impossible. Powered exoskeletons and other assistive devices are sophisticated modern tools that have restored bipedal locomotion in individuals with paraplegia and have endowed unimpaired individuals with superhuman strength. Despite these successes, designing assistive devices that reduce energy consumption during running remains a substantial challenge, in part because these devices disrupt the dynamics of a complex, finely tuned biological system. Furthermore, designers have hitherto relied primarily on experiments, which cannot report muscle-level energy consumption and are fraught with practical challenges. In this study, we use OpenSim to generate muscle-driven simulations of 10 human subjects running at 2 and 5 m/s. We then add ideal, massless assistive devices to our simulations and examine the predicted changes in muscle recruitment patterns and metabolic power consumption. Our simulations suggest that an assistive device should not necessarily apply the net joint moment generated by muscles during unassisted running, and an assistive device can reduce the activity of muscles that do not cross the assisted joint. Our results corroborate and suggest biomechanical explanations for similar effects observed by experimentalists, and can be used to form hypotheses for future experimental studies. The models, simulations, and software used in this study are freely available at simtk.org and can provide insight into assistive device design that complements experimental approaches. PMID:27656901
Advanced ETC/LSS computerized analytical models, CO2 concentration. Volume 1: Summary document
NASA Technical Reports Server (NTRS)
Taylor, B. N.; Loscutoff, A. V.
1972-01-01
Computer simulations have been prepared for CO2 concentration concepts that have the potential to maintain a CO2 partial pressure of 3.0 mmHg, or less, in a spacecraft environment. The simulations were performed using the G-189A Generalized Environmental Control computer program. In preparing the simulations, new subroutines to model the principal functional components for each concept were prepared and integrated into the existing program. Sample problems were run to demonstrate the methods of simulation and performance characteristics of the individual concepts. Comparison runs for each concept can be made for parametric values of cabin pressure, crew size, cabin air dry and wet bulb temperatures, and mission duration.
Runway Incursion Prevention System Simulation Evaluation
NASA Technical Reports Server (NTRS)
Jones, Denise R.
2002-01-01
A Runway Incursion Prevention System (RIPS) was evaluated in a full mission simulation study at the NASA Langley Research Center in March 2002. RIPS integrates airborne and ground-based technologies to provide (1) enhanced surface situational awareness to avoid blunders and (2) alerts of runway conflicts in order to prevent runway incidents while also improving operational capability. A series of test runs was conducted in a high-fidelity simulator. The purpose of the study was to evaluate the RIPS airborne incursion detection algorithms and associated alerting and airport surface display concepts. Eight commercial airline crews participated as test subjects, completing 467 test runs. This paper gives an overview of the RIPS, the simulation study, and the test results.
An extension of the OpenModelica compiler for using Modelica models in a discrete event simulation
Nutaro, James
2014-11-03
In this article, a new back-end and run-time system is described for the OpenModelica compiler. This new back-end transforms a Modelica model into a module for the adevs discrete event simulation package, thereby extending adevs to encompass complex, hybrid dynamical systems. The new run-time system, built within the adevs simulation package, supports models with state events and time events that comprise high-index differential-algebraic systems. Finally, although the procedure for effecting this transformation is based on adevs and the Discrete Event System Specification, it can be adapted to any discrete event simulation package.
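The state-event handling that such a run-time system must provide can be illustrated with a minimal, self-contained example (not adevs or the OpenModelica back-end): integrate with fixed steps, detect a zero crossing, localise the event within the step, and apply a discrete reset:

```python
# Minimal sketch of state-event detection and reset for a hybrid model.
def simulate_bouncing_ball(t_end=3.0, dt=1e-3, g=9.81, restitution=0.8):
    t, h, v = 0.0, 1.0, 0.0
    while t < t_end:
        h_new, v_new = h + v * dt, v - g * dt      # explicit Euler step
        if h_new < 0.0:                            # state event: height crosses zero
            frac = h / (h - h_new)                 # locate the event within the step
            t += frac * dt
            v_at_impact = v - g * frac * dt
            v = -restitution * v_at_impact         # discrete reset (bounce)
            h = 0.0
        else:
            t, h, v = t + dt, h_new, v_new
    return t, h, v

print(simulate_bouncing_ball())
```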
Defensive Swarm: An Agent Based Modeling Analysis
2017-12-01
[Recoverable excerpts] Initial algorithm (single-run) testing; Patrol Algorithm (Passive). Scalability is therefore quite important to modeling in this highly variable domain. One can force the software to run the gamut of options to see changes in operating constructs or procedures. Additionally, modelers can run thousands of iterations, testing the model under different circumstances.
Simulations of isoprene: Ozone reactions for a general circulation/chemical transport model
NASA Technical Reports Server (NTRS)
Makar, P. A.; Mcconnell, J. C.
1994-01-01
A parameterized reaction mechanism has been created to examine the interactions between isoprene and other tropospheric gas-phase chemicals. Tests of the parameterization have shown that its results match those of a more complex reaction set to a high degree of accuracy. Comparisons between test runs have shown that the presence of isoprene at the start of a six-day interval can enhance later ozone concentrations by as much as twenty-nine percent. The test cases used no input fluxes beyond the initial time, implying that a single input of a biogenic hydrocarbon to an airmass can alter its ozone chemistry over a time scale on the order of a week.
SolarPILOT | Concentrating Solar Power | NREL
Unlike exclusively ray-tracing tools, SolarPILOT runs an analytical simulation engine alongside a ray-tracing core (the SolTrace simulation engine) for more detailed simulations.
Electromagnetic Simulations for Aerospace Application Final Report CRADA No. TC-0376-92
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madsen, N.; Meredith, S.
Electromagnetic (EM) simulation tools play an important role in the design cycle, allowing optimization of a design before it is fabricated for testing. The purpose of this cooperative project was to provide Lockheed with state-of-the-art electromagnetic (EM) simulation software that will enable the optimal design of the next generation of low-observable (LO) military aircraft through the VHF regime. More particularly, the project was principally code development and validation, its goal to produce a 3-D, conforming grid, time-domain (TD) EM simulation tool, consisting of a mesh generator, a DS13D-based simulation kernel, and an RCS postprocessor, which was useful in the optimization of LO aircraft, both for full-aircraft simulations run on a massively parallel computer and for small-scale problems run on a UNIX workstation.
Xyce parallel electronic simulator : reference guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.
2011-05-01
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. It is targeted specifically to run on large-scale parallel computing platforms but also runs well on a variety of architectures including single-processor workstations. It also aims to support a variety of devices and models specific to Sandia needs. This document is intended to complement the Xyce Users Guide. It contains comprehensive, detailed information about a number of topics pertinent to the usage of Xyce. Included in this document is a netlist reference for the input-file commands and elements supported within Xyce; a command line reference, which describes the available command line arguments for Xyce; and quick-references for users of other circuit codes, such as Orcad's PSpice and Sandia's ChileSPICE.
Numerical simulation support to the ESA/THOR mission
NASA Astrophysics Data System (ADS)
Valentini, F.; Servidio, S.; Perri, S.; Perrone, D.; De Marco, R.; Marcucci, M. F.; Daniele, B.; Bruno, R.; Camporeale, E.
2016-12-01
THOR is a spacecraft concept currently undergoing a study phase as a candidate for the next ESA medium-size mission M4. THOR has been designed to solve the longstanding physical problems of particle heating and energization in turbulent plasmas. It will provide high-resolution measurements of electromagnetic fields and particle distribution functions with unprecedented resolution, with the aim of exploring the so-called kinetic scales. We present the numerical simulation framework which is supporting the THOR mission during the study phase. The THOR team includes many scientists developing and running different simulation codes (Eulerian-Vlasov, Particle-In-Cell, Gyrokinetics, Two-fluid, MHD, etc.), addressing the physics of plasma turbulence, shocks, magnetic reconnection and so on. These numerical codes are being used during the study phase, mainly with the aim of addressing the following points: (i) to simulate the response of real particle instruments on board THOR, by employing an electrostatic analyser simulator which mimics the response of the CSW, IMS and TEA instruments to the particle velocity distributions of protons, alpha particles and electrons, as obtained from kinetic numerical simulations of plasma turbulence; (ii) to compare multi-spacecraft with single-spacecraft configurations in measuring current density, by making use of both numerical models of synthetic turbulence and real data from the MMS spacecraft; (iii) to investigate the validity of the Taylor hypothesis in different configurations of plasma turbulence.
Heat transfer analysis of underground U-type heat exchanger of ground source heat pump system.
Pei, Guihong; Zhang, Liyin
2016-01-01
The ground source heat pump (GSHP) is a building energy conservation technology. The underground buried-pipe heat exchange system of a GSHP is the basis for the normal operation of the entire heat pump system. Computational fluid dynamics (CFD) simulations were performed with ANSYS FLUENT 17.0 for continuous and intermittent operation over 7 days of a GSHP with a single-well, single-U and double-U heat exchanger, examining the impact of single-U and double-U buried pipes on the surrounding rock-soil temperature field and the impact of intermittent versus continuous operation on the outlet water temperature. The influence on the rock-soil temperature is approximately 13 % higher for the double-U heat exchanger than for the single-U heat exchanger. Over the course of 7 days, the energy extracted in intermittent operation is 36.44 kW·h higher than in continuous operation, even though the running time is shorter. The thermal interference loss and the quantity of heat exchanged per unit well depth at steady state are detailed for side-tube spacings of 2.5 De, 3 De, 4 De, 4.5 De, 5 De, 5.5 De and 6 De. The simulation results of the seven working conditions are compared. It is recommended that the side-tube spacing of double-U underground pipes be greater than or equal to five times the outer diameter (borehole diameter: 180 mm).
Dynamic Analysis of Heavy Vehicle Medium Duty Drive Shaft Using Conventional and Composite Material
NASA Astrophysics Data System (ADS)
Kumar, Ashwani; Jain, Rajat; Patil, Pravin P.
2016-09-01
The main focus of this study is the structural and modal analysis of a single-piece drive shaft for material selection. The drive shaft carries torque from the vehicle transmission to the rear-wheel differential. A heavy-vehicle medium-duty transmission drive shaft was selected as the research object. Conventional materials (steel SM45C, stainless steel) and composite materials (HS carbon epoxy, E-glass polyester resin composite) were selected for the analysis. A single-piece composite drive shaft has advantages over a conventional two-piece steel drive shaft: higher specific strength, longer life, lower weight, higher critical speed and higher torque-carrying capacity. The main criteria for drive shaft failure are strength and weight. The maximum modal frequency obtained is 919 Hz. Various harmful vibration modes (lateral and torsional) were identified and the region of maximum deflection was specified. For a single-piece drive shaft the natural bending frequency should be higher because the shaft is subjected to torsion and shear stress. The single-piece drive shaft was modelled using Solid Edge and Pro-E. Finite element analysis was used for structural and modal analysis with actual running boundary conditions such as frictional support, torque and moment. The FEA simulation results were validated against experimental results from the literature.
Conducting Simulation Studies in the R Programming Environment.
Hallgren, Kevin A
2013-10-12
Simulation studies allow researchers to answer specific questions about data analysis, statistical power, and best-practices for obtaining accurate results in empirical research. Despite the benefits that simulation research can provide, many researchers are unfamiliar with available tools for conducting their own simulation studies. The use of simulation studies need not be restricted to researchers with advanced skills in statistics and computer programming, and such methods can be implemented by researchers with a variety of abilities and interests. The present paper provides an introduction to methods used for running simulation studies using the R statistical programming environment and is written for individuals with minimal experience running simulation studies or using R. The paper describes the rationale and benefits of using simulations and introduces R functions relevant for many simulation studies. Three examples illustrate different applications for simulation studies, including (a) the use of simulations to answer a novel question about statistical analysis, (b) the use of simulations to estimate statistical power, and (c) the use of simulations to obtain confidence intervals of parameter estimates through bootstrapping. Results and fully annotated syntax from these examples are provided.
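The paper's examples are written in R; purely as a language-neutral illustration of one of the use cases it describes (estimating statistical power by Monte Carlo), here is an equivalent sketch in Python:

```python
# Estimate the power of a two-sample t-test by simulating many datasets
# under an assumed effect size.
import numpy as np
from scipy import stats

def estimate_power(effect_size=0.5, n_per_group=50, alpha=0.05,
                   n_sims=5000, seed=42):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_size, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        rejections += p < alpha
    return rejections / n_sims

print(f"estimated power: {estimate_power():.3f}")
```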
Bressel, Eadric; Louder, Talin J; Hoover, James P; Roberts, Luke C; Dolny, Dennis G
2017-11-01
The aim of this study was to determine if selected kinematic measures (foot strike index [SI], knee contact angle and overstride angle) were different between aquatic treadmill (ATM) and land treadmill (LTM) running, and to determine if these measures were altered during LTM running as a result of 6 weeks of ATM training. Acute effects were tested using 15 competitive distance runners who completed 1 session of running on each treadmill type at 5 different running speeds. Subsequently, three recreational runners completed 6 weeks of ATM training following a single-subject baseline, intervention and withdrawal experiment. Kinematic measures were quantified from digitisation of video. Regardless of speed, SI values during ATM running (61.3 ± 17%) were significantly greater (P = 0.002) than LTM running (42.7 ± 23%). Training on the ATM did not change (pre/post) the SI (26 ± 3.2/27 ± 3.1), knee contact angle (165 ± 0.3/164 ± 0.8) or overstride angle (89 ± 0.4/89 ± 0.1) during LTM running. Although SI values were different between acute ATM and LTM running, 6 weeks of ATM training did not appear to alter LTM running kinematics as evidenced by no change in kinematic values from baseline to post intervention assessments.
Web-HLA and Service-Enabled RTI in the Simulation Grid
NASA Astrophysics Data System (ADS)
Huang, Jijie; Li, Bo Hu; Chai, Xudong; Zhang, Lin
HLA-based simulations in a grid environment have now become a main research hotspot in the M&S community, but the current HLA has many shortcomings when running in a grid environment. This paper analyzes the analogies between HLA and OGSA from a software architecture point of view, and points out that the service-oriented method should be introduced into the three components of HLA to overcome these shortcomings. It proposes an expanded running architecture that integrates HLA with OGSA and realizes a service-enabled RTI (SE-RTI). In addition, to handle the bottleneck of efficiently realizing the HLA time management mechanism, the paper proposes a centralized approach in which the CRC of the SE-RTI takes charge of time management and the dispatching of each federate's TSO events. Benchmark experiments indicate that the running speed of simulations over the Internet or a WAN is improved.
DualSPHysics: A numerical tool to simulate real breakwaters
NASA Astrophysics Data System (ADS)
Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho
2018-02-01
The open-source code DualSPHysics is used in this work to compute the wave run-up in an existing dike in the Chinese coast using realistic dimensions, bathymetry and wave conditions. The GPU computing power of the DualSPHysics allows simulating real-engineering problems that involve complex geometries with a high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data in a wave flume. Those experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, the spurious reflections from the wavemaker are removed by using an active wave absorption technique.
Effects of simulated weightlessness on fish otolith growth: Clinostat versus Rotating-Wall Vessel
NASA Astrophysics Data System (ADS)
Brungs, Sonja; Hauslage, Jens; Hilbig, Reinhard; Hemmersbach, Ruth; Anken, Ralf
2011-09-01
Stimulus dependence is a general feature of developing sensory systems. It has been shown earlier that the growth of inner ear heavy stones (otoliths) of late-stage Cichlid fish ( Oreochromis mossambicus) and Zebrafish ( Danio rerio) is slowed down by hypergravity, whereas microgravity during space flight yields an opposite effect, i.e. larger than 1 g otoliths, in Swordtail ( Xiphophorus helleri) and in Cichlid fish late-stage embryos. These and related studies proposed that otolith growth is actively adjusted via a feedback mechanism to produce a test mass of the appropriate physical capacity. Using ground-based techniques to apply simulated weightlessness, long-term clinorotation (CR; exposure on a fast-rotating Clinostat with one axis of rotation) led to larger than 1 g otoliths in late-stage Cichlid fish. Larger than normal otoliths were also found in early-staged Zebrafish embryos after short-term Wall Vessel Rotation (WVR; also regarded as a method to simulate weightlessness). These results are basically in line with the results obtained on Swordtails from space flight. Thus, the growth of fish inner ear otoliths seems to be an appropriate parameter to assess the quality of "simulated weightlessness" provided by a particular simulation device. Since CR and WVR are in worldwide use to simulate weightlessness conditions on ground using small-sized specimens, we were prompted to directly compare the effects of CR and WVR on otolith growth using developing Cichlids as model organism. Animals were simultaneously subjected to CR and WVR from a point of time when otolith primordia had begun to calcify both within the utricle (gravity perception) and the saccule (hearing); the respective otoliths are the lapilli and the sagittae. Three such runs were subsequently carried out, using three different batches of fish. The runs were discontinued when the animals began to hatch. In the course of all three runs performed, CR led to larger than normal lapilli, whereas WVR had no effect on the growth of these otoliths. Regarding sagittae, CR resulted in larger than normal stones in one of the three runs. The other CR runs and all WVR runs had no effect on sagittal growth. These results clearly indicate that CR rather than WVR can be regarded as a device to simulate weightlessness using the Cichlid as model organism. Since WVR has earlier been shown to affect otolith growth in Zebrafish, the lifestyle of an animal (mouth-breeding versus egg-laying) seems to be of considerable importance. Further studies using a variety of simulation techniques (including, e.g. magnetic levitation and random positioning) and various species are needed in order to identify the most appropriate technique to simulate weightlessness regarding a particular model organism.
NASA Astrophysics Data System (ADS)
Yang, Sheng-Chun; Lu, Zhong-Yuan; Qian, Hu-Jun; Wang, Yong-Lei; Han, Jie-Ping
2017-11-01
In this work, we upgraded the electrostatic interaction method of CU-ENUF (Yang et al., 2016), which first applied CUNFFT (nonequispaced Fourier transforms based on CUDA) to the reciprocal-space electrostatic computation so that the electrostatic interaction is computed entirely on the GPU. The upgraded edition of CU-ENUF runs in a hybrid parallel way that first parallelizes the computation over multiple computer nodes and then further over the GPU installed in each computer. With this parallel strategy, the size of the simulation system is no longer restricted by the throughput of a single CPU or GPU. The most critical technical problem is how to parallelize CUNFFT within this strategy, which is solved effectively through a careful study of its basic principles and some algorithmic techniques. Furthermore, the upgraded method is capable of computing electrostatic interactions for both atomistic molecular dynamics (MD) and dissipative particle dynamics (DPD). Finally, the benchmarks conducted for validation and performance indicate that the upgraded method not only delivers good precision when suitable parameters are set, but also provides an efficient way to compute electrostatic interactions for very large simulation systems. Program Files doi: http://dx.doi.org/10.17632/zncf24fhpv.1 Licensing provisions: GNU General Public License 3 (GPL) Programming language: C, C++, and CUDA C Supplementary material: The program is designed for efficient electrostatic interactions of large-scale simulation systems and runs on computers equipped with NVIDIA GPUs. It has been tested on (a) a single computer node with an Intel(R) Core(TM) i7-3770 @ 3.40 GHz (CPU) and a GTX 980 Ti (GPU), and (b) MPI parallel computer nodes with the same configuration. Nature of problem: For molecular dynamics simulation, the electrostatic interaction is the most time-consuming computation because of its long-range character and slow convergence in simulation space, and it takes up most of the total simulation time. Although the GPU-based parallel method CU-ENUF (Yang et al., 2016) achieved a qualitative leap over previous methods in computing electrostatic interactions, its capability is limited by the throughput of a single GPU for super-scale simulation systems. Therefore, an effective method is needed to handle the calculation of electrostatic interactions efficiently for simulation systems of super-scale size. Solution method: We constructed a hybrid parallel architecture in which CPU and GPU are combined to accelerate the electrostatic computation effectively. First, the simulation system is divided into many subtasks via a domain-decomposition method. MPI (Message Passing Interface) is then used to implement the CPU-level parallel computation, with each computer node handling a particular subtask, and each subtask is in turn executed efficiently in parallel on the GPU of that node. In this hybrid parallel method, the most critical technical problem is how to parallelize CUNFFT (nonequispaced fast Fourier transform based on CUDA) within the parallel strategy, which is solved effectively through a careful study of its basic principles and some algorithmic techniques. Restrictions: HP-ENUF is mainly oriented to super-scale system simulations, for which its performance advantage is shown most clearly.
However, for a small simulation system containing fewer than 10^6 particles, the multiple-computer-node mode has no apparent efficiency advantage over the single-node mode, and may even be less efficient because of the network delay among computer nodes. References: (1) S.-C. Yang, H.-J. Qian, Z.-Y. Lu, Appl. Comput. Harmon. Anal. 2016, http://dx.doi.org/10.1016/j.acha.2016.04.009. (2) S.-C. Yang, Y.-L. Wang, G.-S. Jiao, H.-J. Qian, Z.-Y. Lu, J. Comput. Chem. 37 (2016) 378. (3) S.-C. Yang, Y.-L. Zhu, H.-J. Qian, Z.-Y. Lu, Appl. Chem. Res. Chin. Univ., 2017, http://dx.doi.org/10.1007/s40242-016-6354-5. (4) Y.-L. Zhu, H. Liu, Z.-W. Li, H.-J. Qian, G. Milano, Z.-Y. Lu, J. Comput. Chem. 34 (2013) 2197.
SIM_EXPLORE: Software for Directed Exploration of Complex Systems
NASA Technical Reports Server (NTRS)
Burl, Michael; Wang, Esther; Enke, Brian; Merline, William J.
2013-01-01
Physics-based numerical simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. While such codes may provide the highest- fidelity representation of system behavior, they are often so slow to run that insight into the system is limited. Trying to understand the effects of inputs on outputs by conducting an exhaustive grid-based sweep over the input parameter space is simply too time-consuming. An alternative approach called "directed exploration" has been developed to harvest information from numerical simulators more efficiently. The basic idea is to employ active learning and supervised machine learning to choose cleverly at each step which simulation trials to run next based on the results of previous trials. SIM_EXPLORE is a new computer program that uses directed exploration to explore efficiently complex systems represented by numerical simulations. The software sequentially identifies and runs simulation trials that it believes will be most informative given the results of previous trials. The results of new trials are incorporated into the software's model of the system behavior. The updated model is then used to pick the next round of new trials. This process, implemented as a closed-loop system wrapped around existing simulation code, provides a means to improve the speed and efficiency with which a set of simulations can yield scientifically useful results. The software focuses on the case in which the feedback from the simulation trials is binary-valued, i.e., the learner is only informed of the success or failure of the simulation trial to produce a desired output. The software offers a number of choices for the supervised learning algorithm (the method used to model the system behavior given the results so far) and a number of choices for the active learning strategy (the method used to choose which new simulation trials to run given the current behavior model). The software also makes use of the LEGION distributed computing framework to leverage the power of a set of compute nodes. The approach has been demonstrated on a planetary science application in which numerical simulations are used to study the formation of asteroid families.
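A hedged sketch of the directed-exploration loop described above (not the SIM_EXPLORE code): fit a classifier to the binary outcomes observed so far, then run next the candidate whose predicted outcome is most uncertain. The simulator, candidate pool, and model choice below are placeholders:

```python
# Active-learning loop over an expensive binary-outcome simulator (toy version).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def run_simulation(x):
    """Stand-in for an expensive simulator with a binary success criterion."""
    return int(x[0] ** 2 + x[1] ** 2 < 1.0)

# Candidate input settings; the first two guarantee one success and one failure.
candidates = np.vstack([[0.0, 0.0], [1.9, 1.9], rng.uniform(-2.0, 2.0, size=(2000, 2))])
tried_idx = [0, 1] + list(rng.choice(np.arange(2, len(candidates)), 18, replace=False))
labels = [run_simulation(candidates[i]) for i in tried_idx]

for _ in range(50):                                     # directed-exploration loop
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(candidates[tried_idx], labels)
    probs = model.predict_proba(candidates)[:, 1]
    uncertainty = np.abs(probs - 0.5)                   # 0 = most uncertain
    uncertainty[tried_idx] = np.inf                     # do not rerun old trials
    nxt = int(np.argmin(uncertainty))                   # most informative trial
    tried_idx.append(nxt)
    labels.append(run_simulation(candidates[nxt]))

print(f"ran {len(tried_idx)} trials, {sum(labels)} successes")
```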
NASA Astrophysics Data System (ADS)
Rodrigue Mbombo, Brice
New high-time-resolution measurements of the evolution of the electron temperature profile through a sawtooth event in high-current reversed-field pinch (RFP) discharges in the Madison Symmetric Torus (MST) have been made using the enhanced capabilities of the multipoint, multi-pulse Thomson scattering system. Using this and other data, the electron thermal diffusivity χ_e was determined and is found to vary by orders of magnitude over the course of the sawtooth cycle. This experimental data is compared directly to simulations run at experimentally relevant parameters. This includes zero-beta, single-fluid, nonlinear, resistive magnetohydrodynamic (MHD) simulations run with the aspect ratio, resistivity profile, and Lundquist number (S ≈ 4 × 10^6) of high-current RFP discharges in MST. These simulations display MHD activity and sawtooth-like behavior similar to that observed in MST, including both the sawtooth period and the duration of the sawtooth crash. The radial shapes of the magnetic mode amplitudes, scaled to match edge measurements made in MST, are then used to compute the expected level of thermal diffusion due to parallel losses along diffusing magnetic field lines, χ_MD = v_∥ D_mag. The evolution of the D_mag profile was determined for over 20 sawteeth so that the ensemble-averaged evolution could be compared to the sawtooth-ensembled data from MST. The resulting comparison to the measured χ_e shows that χ_MD is larger than χ_e at most times. However, if electrons are trapped in a magnetic well, they cannot carry energy along the diffusing magnetic field lines, reducing the thermal transport. Accounting for trapped particles brings χ_MD to within uncertainty of χ_e in the mid radius at most times throughout the sawtooth cycle. In the core, the measured χ_e is greater than χ_MD leading up to and including the sawtooth crash, suggesting other transport mechanisms are important at these times. Additionally, in a simulation including pressure evolution, a striking agreement is found between the temperature fluctuations seen in the simulation and those previously measured in MST. This work was supported by the US DOE and NSF.
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not do a full exploration of the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from published literature, and where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts of NEP, C storage, ET and runoff, and thereby identify a highly important source of DGVM uncertainty.
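A minimal sketch of simulated annealing over a small parameter vector, assuming placeholder parameter names, bounds, and an invented misfit function rather than BIOMAP's actual objective:

```python
# Simulated annealing over a bounded parameter vector (toy objective).
import math
import random

bounds = {"sla": (5.0, 45.0), "stomatal_slope": (4.0, 12.0), "root_depth": (0.5, 3.0)}

def objective(params):
    """Placeholder misfit between simulated and benchmark vegetation maps."""
    target = {"sla": 22.0, "stomatal_slope": 9.0, "root_depth": 1.6}
    return sum((params[k] - target[k]) ** 2 for k in params)

def anneal(n_iter=5000, t0=10.0, cooling=0.999, seed=7):
    random.seed(seed)
    current = {k: random.uniform(*b) for k, b in bounds.items()}
    cost = objective(current)
    best, best_cost = dict(current), cost
    temp = t0
    for _ in range(n_iter):
        cand = dict(current)
        k = random.choice(list(bounds))                        # perturb one parameter
        lo, hi = bounds[k]
        cand[k] = min(hi, max(lo, cand[k] + random.gauss(0.0, 0.1 * (hi - lo))))
        c = objective(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if c < cost or random.random() < math.exp(-(c - cost) / temp):
            current, cost = cand, c
            if c < best_cost:
                best, best_cost = dict(cand), c
        temp *= cooling                                        # geometric cooling
    return best, best_cost

print(anneal())
```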
Chaste: A test-driven approach to software development for biological modelling
NASA Astrophysics Data System (ADS)
Pitt-Francis, Joe; Pathmanathan, Pras; Bernabeu, Miguel O.; Bordas, Rafel; Cooper, Jonathan; Fletcher, Alexander G.; Mirams, Gary R.; Murray, Philip; Osborne, James M.; Walter, Alex; Chapman, S. Jon; Garny, Alan; van Leeuwen, Ingeborg M. M.; Maini, Philip K.; Rodríguez, Blanca; Waters, Sarah L.; Whiteley, Jonathan P.; Byrne, Helen M.; Gavaghan, David J.
2009-12-01
Chaste ('Cancer, heart and soft-tissue environment') is a software library and a set of test suites for computational simulations in the domain of biology. Current functionality has arisen from modelling in the fields of cancer, cardiac physiology and soft-tissue mechanics. It is released under the LGPL 2.1 licence. Chaste has been developed using agile programming methods. The project began in 2005 when it was reasoned that the modelling of a variety of physiological phenomena required both a generic mathematical modelling framework, and a generic computational/simulation framework. The Chaste project evolved from the Integrative Biology (IB) e-Science Project, an inter-institutional project aimed at developing a suitable IT infrastructure to support physiome-level computational modelling, with a primary focus on cardiac and cancer modelling. Program summaryProgram title: Chaste Catalogue identifier: AEFD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: LGPL 2.1 No. of lines in distributed program, including test data, etc.: 5 407 321 No. of bytes in distributed program, including test data, etc.: 42 004 554 Distribution format: tar.gz Programming language: C++ Operating system: Unix Has the code been vectorised or parallelized?: Yes. Parallelized using MPI. RAM:<90 Megabytes for two of the scenarios described in Section 6 of the manuscript (Monodomain re-entry on a slab or Cylindrical crypt simulation). Up to 16 Gigabytes (distributed across processors) for full resolution bidomain cardiac simulation. Classification: 3. External routines: Boost, CodeSynthesis XSD, CxxTest, HDF5, METIS, MPI, PETSc, Triangle, Xerces Nature of problem: Chaste may be used for solving coupled ODE and PDE systems arising from modelling biological systems. Use of Chaste in two application areas are described in this paper: cardiac electrophysiology and intestinal crypt dynamics. Solution method: Coupled multi-physics with PDE, ODE and discrete mechanics simulation. Running time: The largest cardiac simulation described in the manuscript takes about 6 hours to run on a single 3 GHz core. See results section (Section 6) of the manuscript for discussion on parallel scaling.
Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration
NASA Astrophysics Data System (ADS)
Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.
2017-06-01
Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which is used to assess an explosive material's initiation behavior. Such data can be utilized to calibrate reactive flow models by running hydrocode simulations and successively tweaking model parameters until a match with experiment is achieved. Such simulations are typically performed in 1D and use a flyer impact to achieve the prescribed shock loading pressure. In this effort, a wedge experiment performed at the Army Research Lab (ARL) was modeled using CTH (an SNL hydrocode) in 1D, 2D, and 3D space in order to determine whether there was any justification for using simplified models. A simulation was also performed using the BCAT code (a CTH companion tool) that assumes a plate-impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D space and with the BCAT tool. The difference in shock profile is shown to affect numerical predictions only for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.
The Prodiguer Messaging Platform
NASA Astrophysics Data System (ADS)
Greenslade, Mark; Denvil, Sebastien; Raciazek, Jerome; Carenton, Nicolas; Levavasseur, Guillame
2014-05-01
CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output (data and meta-data) are just some of the complexities that CONVERGENCE aims to resolve. The Institut Pierre Simon Laplace (IPSL) is responsible for running climate simulations on a set of heterogeneous HPC environments within France. With heterogeneity comes added complexity in terms of simulation instrumentation and control, and obtaining a global perspective on the state of all simulations running on all HPC environments has hitherto been problematic. In this presentation we detail how, within the context of CONVERGENCE, the implementation of the Prodiguer messaging platform resolves this complexity and permits the development of real-time applications such as: 1. a simulation monitoring dashboard; 2. a simulation metrics visualizer; 3. an automated simulation runtime notifier; 4. an automated output data & meta-data publishing pipeline. The Prodiguer messaging platform leverages the widely used open source message broker RabbitMQ, which implements the Advanced Message Queuing Protocol (AMQP). Hence it will be demonstrated that the Prodiguer messaging platform is built upon both open source software and open standards.
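As an illustration of the AMQP pattern the platform is built on, the sketch below publishes a hypothetical simulation-status message to a RabbitMQ broker using the Python pika client; the exchange name, routing key, and message fields are invented for the example and are not the actual Prodiguer conventions.

    import json
    import pika

    # Connect to a (hypothetical) RabbitMQ broker and declare a topic exchange.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.exchange_declare(exchange="simulation.events", exchange_type="topic", durable=True)

    # Publish a simulation heartbeat; consumers (dashboard, notifier, publishing
    # pipeline) bind their own queues to the exchange with matching routing keys.
    message = {"simulation_id": "ipsl-cm-0001", "state": "running", "progress": 0.42}
    channel.basic_publish(
        exchange="simulation.events",
        routing_key="monitoring.heartbeat",
        body=json.dumps(message),
        properties=pika.BasicProperties(content_type="application/json", delivery_mode=2),
    )
    connection.close()

Because each consumer binds its own queue with a routing-key pattern, new real-time applications can be added without modifying the producers.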
The ATLAS Level-1 Topological Trigger performance in Run 2
NASA Astrophysics Data System (ADS)
Riu, Imma; ATLAS Collaboration
2017-10-01
The Level-1 trigger is the first event-rate-reducing step in the ATLAS detector trigger system, with an output rate of up to 100 kHz and a decision latency smaller than 2.5 μs. During the LHC shutdown after Run 1, the Level-1 trigger system was upgraded at the hardware, firmware and software levels. In particular, a new electronics sub-system was introduced in the real-time data processing path: the Level-1 Topological trigger system. It consists of a single electronics shelf equipped with two Level-1 Topological processor blades. They receive real-time information from the Level-1 calorimeter and muon triggers, which is processed to measure angles between trigger objects, invariant masses or other kinematic variables. Complementary to other requirements, these measurements are taken into account in the final Level-1 trigger decision. The system was installed, and commissioning started in 2015 and continued during 2016. As part of the commissioning, the decisions from individual algorithms were simulated and compared with the hardware response. An overview of the Level-1 Topological trigger system design and commissioning process is given, and its impact on several event selections is illustrated.
Adaptive mesh fluid simulations on GPU
NASA Astrophysics Data System (ADS)
Wang, Peng; Abel, Tom; Kaehler, Ralf
2010-10-01
We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally onto this architecture. Using the method of lines approach with the second order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve an overall speedup of approximately 10 times faster execution on one graphics card as compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. This is shown directly by an implementation of a magneto-hydrodynamic solver and by comparing its performance to the pure hydrodynamic case. Finally, we also combined our CUDA parallel scheme with MPI to make the code run on GPU clusters. Close to ideal speedup is observed on up to four GPUs.
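The numerical building blocks named above (method of lines, piecewise-linear reconstruction with a slope limiter, second-order TVD Runge-Kutta) can be illustrated on a 1-D scalar advection problem. The NumPy sketch below is a serial, single-grid toy for exposition only; it does not reflect the paper's CUDA/AMR implementation and uses a simple upwind flux rather than an HLL Riemann solver for the Euler equations.

    import numpy as np

    def minmod(a, b):
        """Minmod slope limiter used for piecewise-linear (MUSCL) reconstruction."""
        return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def rhs(u, dx, speed=1.0):
        """Semi-discrete RHS of u_t + speed * u_x = 0 with upwind fluxes (speed > 0)."""
        slopes = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
        u_face = u + 0.5 * slopes          # reconstructed left state at face i+1/2
        flux = speed * u_face              # upwind flux for positive advection speed
        return -(flux - np.roll(flux, 1)) / dx

    def tvd_rk2_step(u, dt, dx):
        """Second-order total-variation-diminishing Runge-Kutta (Heun) step."""
        u1 = u + dt * rhs(u, dx)
        return 0.5 * (u + u1 + dt * rhs(u1, dx))

    # Advect a square pulse on a periodic grid.
    n = 200
    dx = 1.0 / n
    x = np.arange(n) * dx
    u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
    dt = 0.4 * dx
    for _ in range(250):
        u = tvd_rk2_step(u, dt, dx)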
NASA Astrophysics Data System (ADS)
Perez Montes, Diego A.; Añel Cabanelas, Juan A.; Wallom, David C. H.; Arribas, Alberto; Uhe, Peter; Caderno, Pablo V.; Pena, Tomas F.
2017-04-01
Cloud Computing is a technological option that offers great possibilities for modelling in the geosciences. We have studied how two different climate models, HadAM3P-HadRM3P and CESM-WACCM, can be adapted in two different ways to run on Cloud Computing environments from three different vendors: Amazon, Google and Microsoft. We have also evaluated qualitatively how the use of Cloud Computing can affect the allocation of resources by funding bodies, as well as issues related to computing security, including scientific reproducibility. Our first experiments were developed using the well-known ClimatePrediction.net (CPDN), which uses BOINC, over the infrastructure of two cloud providers, namely Microsoft Azure and Amazon Web Services (hereafter AWS). For this comparison we ran a set of thirteen-month climate simulations for CPDN in Azure and AWS using a range of different virtual machines (VMs) for HadRM3P (50 km resolution over the South America CORDEX region) nested in the global atmosphere-only model HadAM3P. These simulations were run on a single processor and took between 3 and 5 days to compute depending on the VM type. The last part of our simulation experiments consisted of running WACCM on different VMs on the Google Compute Engine (GCE) and comparing the results with the supercomputer (SC) Finisterrae1 from the Centro de Supercomputacion de Galicia. It was shown that GCE gives better performance than the SC for smaller numbers of cores/MPI tasks, but the model throughput clearly shows that the SC performance is better above approximately 100 cores (related to differences in network speed and latency). From a cost point of view, Cloud Computing shifts the constraint on experiments from the available hardware resources to monetary resources (how much computing can be afforded). As there is an increasing movement towards, and recommendation for, budgeting HPC projects on this technology (budgets can be calculated in a more realistic way), we could see a shift in the coming years that consolidates the Cloud as the preferred solution.
Impact of CYGNSS Data on Tropical Cyclone Analyses and Forecasts in a Regional OSSE Framework
NASA Astrophysics Data System (ADS)
Annane, B.; McNoldy, B. D.; Leidner, S. M.; Atlas, R. M.; Hoffman, R.; Majumdar, S.
2016-12-01
The Cyclone Global Navigation Satellite System, or CYGNSS, is a planned constellation of micro-satellites that will utilize reflected Global Positioning System (GPS) satellite signals to retrieve ocean surface wind speed along the satellites' ground tracks. The orbits are designed so that there is excellent coverage of the tropics and subtropics, resulting in more thorough spatial sampling and improved sampling intervals over tropical cyclones than is possible with current spaceborne scatterometer and passive microwave sensor platforms. Furthermore, CYGNSS will be able to retrieve winds under all precipitating conditions and over a large range of wind speeds. A regional Observing System Simulation Experiment (OSSE) framework was developed at NOAA/AOML and the University of Miami that features a high-resolution regional nature run (27-km regional domain with 9/3/1 km storm-following nests; Nolan et al., 2013) embedded within a lower-resolution global nature run. Simulated observations are generated by sampling from the nature run and are provided to a data assimilation scheme, which produces analyses for a high-resolution regional forecast model, the 2014 operational Hurricane-WRF model. For data assimilation, NOAA's GSI and EnKF systems are used. Analyses are performed on the parent domain at 9-km resolution. The forecast model uses a single storm-following 3-km resolution nest. Synthetic CYGNSS wind speed data have also been created, and the impacts of the assimilation of these data on forecasts of tropical cyclone track and intensity will be discussed. In addition to the choice of assimilation scheme, we have also examined a number of other factors/parameters that affect the impact of simulated CYGNSS observations, including the frequency of data assimilation cycling (e.g., hourly, 3-hourly and 6-hourly) and the assimilation of scalar versus vector synthetic CYGNSS winds. We have found sensitivity to all of the factors tested and will summarize the methods used for testing as well as the results. Generally, we have found that more frequent cycling is better than less, and that flow-dependent background error covariances (e.g., EnKF) are better than static or climatological assumptions about the background error covariance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, S; Lo, P; Hoffman, J
Purpose: To evaluate the robustness of CAD or Quantitative Imaging methods, they should be tested on a variety of cases and under a variety of image acquisition and reconstruction conditions that represent the heterogeneity encountered in clinical practice. The purpose of this work was to develop a fully-automated pipeline for generating CT images that represent a wide range of dose and reconstruction conditions. Methods: The pipeline consists of three main modules: reduced-dose simulation, image reconstruction, and quantitative analysis. The first two modules of the pipeline can be operated in a completely automated fashion, using configuration files and running the modules in a batch queue. The input to the pipeline is raw projection CT data; this data is used to simulate different levels of dose reduction using a previously-published algorithm. Filtered-backprojection reconstructions are then performed using FreeCT-wFBP, a freely-available reconstruction software for helical CT. We also added support for an in-house, model-based iterative reconstruction algorithm using iterative coordinate-descent optimization, which may be run in tandem with the more conventional recon methods. The reduced-dose simulations and image reconstructions are controlled automatically by a single script, and they can be run in parallel on our research cluster. The pipeline was tested on phantom and lung screening datasets from a clinical scanner (Definition AS, Siemens Healthcare). Results: The images generated from our test datasets appeared to represent a realistic range of acquisition and reconstruction conditions that we would expect to find clinically. The time to generate images was approximately 30 minutes per dose/reconstruction combination on a hybrid CPU/GPU architecture. Conclusion: The automated research pipeline promises to be a useful tool for either training or evaluating performance of quantitative imaging software such as classifiers and CAD algorithms across the range of acquisition and reconstruction parameters present in the clinical environment. Funding support: NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.
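A batch driver for this kind of dose/reconstruction sweep might look like the sketch below. The command names, flags, and file naming are placeholders for illustration (the actual pipeline is driven by configuration files and a cluster batch queue, not these invented command-line tools).

    import itertools
    import subprocess
    from pathlib import Path

    # Hypothetical sweep over dose levels and reconstruction kernels for one case.
    raw_projection = Path("case001_raw.ptr")
    dose_fractions = [1.0, 0.5, 0.25, 0.1]          # fraction of the original dose
    recon_kernels = ["smooth", "medium", "sharp"]

    for dose, kernel in itertools.product(dose_fractions, recon_kernels):
        reduced = raw_projection.with_name(f"case001_dose{int(dose * 100):03d}.ptr")
        recon = reduced.with_name(reduced.stem + f"_{kernel}.img")
        # 1) simulate reduced-dose raw data, 2) reconstruct with an FBP-style CLI.
        subprocess.run(["simulate_reduced_dose", str(raw_projection),
                        "--dose-fraction", str(dose), "-o", str(reduced)], check=True)
        subprocess.run(["reconstruct_wfbp", str(reduced), "--kernel", kernel,
                        "-o", str(recon)], check=True)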
Performance Evaluation of an Actuator Dust Seal for Lunar Operation
NASA Technical Reports Server (NTRS)
Delgado, Irebert R.; Gaier, James R.; Handschuh, Michael; Panko, Scott; Sechkar, Ed
2013-01-01
Exploration of extraterrestrial surfaces (e.g. Moon, Mars, asteroids) will require durable space mechanisms that will survive potentially dusty surface conditions in addition to the hard vacuum and extreme temperatures of space. Baseline tests with lunar simulant were recently completed at NASA GRC on a new Low-Temperature Mechanism (LTM) dust seal for space actuator application. Following are the top-level findings of the tests completed to date in vacuum using NU-LHT-2M lunar-highlands simulant; a complete set of findings is given in the conclusions section. Tests were run at approximately 10^-7 torr with a unidirectional rotational speed of 39 RPM. Initial break-in runs were performed at atmospheric conditions with no simulant. During the break-in runs, the maximum torque observed was 16.7 lbf-in., while the maximum seal outer-diameter temperature was 103 °F. Only 0.4 milligrams of NU-LHT-2M simulant passed through the seal-shaft interface in the first 511,000 cycles under vacuum, despite a chip on the secondary sealing surface. Approximately 650,000 of a planned 1,000,000 cycles were completed in vacuum with NU-LHT-2M simulant. Upon test disassembly, NU-LHT-2M was found on the secondary sealing surface.
NASA Technical Reports Server (NTRS)
Hueschen, Richard M.; Hankins, Walter W., III; Barker, L. Keith
2001-01-01
This report examines a rollout and turnoff (ROTO) system for reducing the runway occupancy time for transport aircraft in low-visibility weather. Simulator runs were made to evaluate the system that includes a head-up display (HUD) to show the pilot a graphical overlay of the runway along with guidance and steering information to a chosen exit. Fourteen pilots (airline, corporate jet, and research pilots) collectively flew a total of 560 rollout and turnoff runs using all eight runways at Hartsfield Atlanta International Airport. The runs consisted of 280 runs for each of two runway visual ranges (RVRs) (300 and 1200 ft). For each visual range, half the runs were conducted with the HUD information and half without. For the runs conducted with the HUD information, the runway occupancy times were lower and more consistent. The effect was more pronounced as visibility decreased. For the 1200-ft visibility, the runway occupancy times were 13% lower with HUD information (46.1 versus 52.8 sec). Similarly, for the 300-ft visibility, the times were 28% lower (45.4 versus 63.0 sec). Also, for the runs with HUD information, 78% (RVR 1200) and 75% (RVR 300) had runway occupancy times less than 50 sec, versus 41 and 20%, respectively, without HUD information.
NASA Astrophysics Data System (ADS)
Buxton, Carly S.
In most of Washington and Oregon, USA, mountain snowpack stores water which will be available through spring and early summer, when water demand in the region is at its highest. Therefore, understanding the numerous factors that influence winter precipitation variability is a key component in water resource planning. This project examines the effects of the Pacific Decadal Oscillation (PDO) on winter precipitation in the Pacific Northwest U.S. using the WRF-ARW regional climate model. A significant component of this work was evaluating the many options that WRF-ARW provides for representing sub-grid scale cloud microphysical processes. Because the "best" choice of microphysics parameterization can vary depending on the application, this project also seeks to determine which option leads to the most accurate simulation of winter precipitation (when compared to observations) in the complex terrain of the Pacific Northwest. A series of test runs were performed with eight different combinations of physics parameterizations, and the results of these test runs were used to narrow the number of physics options down to three for the final runs. Mean total precipitation and coefficient of variation of the final model runs were compared against observational data. As RCMs tend to do, WRF over-predicts mean total precipitation compared to observations, but the double-moment microphysics schemes, Thompson and Morrison, over-predict to a lesser extent than the single-moment scheme. Two WRF microphysics schemes, Thompson and Lin, were more likely to have a coefficient of variation within the range of observations. Overall, the Thompson scheme produced the most accurate simulation as compared to observations. To focus on the effects of the PDO, WRF simulations were performed for two ten-member ensembles, one for positive PDO Decembers, and one for negative PDO Decembers. WRF output of total precipitation was compared to both station and gridded observational data. During positive PDO conditions, there is a strong latitudinal signal at low elevations, while during negative PDO conditions, there is a strong latitudinal signal at high elevations. This shift in where the PDO signal is most visible is due to changes in mid-level westerly winds and upper-level circulation and temperature advection. Under positive PDO conditions, wind direction and moisture transport are the most important factors, and frequent warm, moist southwesterly winds cause a PDO signal at low elevations. Under negative PDO conditions, differences in westerly wind speed, and therefore orographic precipitation enhancement, lead to a latitudinal PDO signal at high elevations. This PDO signal is robust, appearing in both the WRF simulations and observational data, and the differences due to PDO phase exceed the differences due to choice of microphysics scheme, WRF internal variability, and observational data uncertainty.
The State of Simulations: Soft-Skill Simulations Emerge as a Powerful New Form of E-Learning.
ERIC Educational Resources Information Center
Aldrich, Clark
2001-01-01
Presents responses of leaders from six simulation companies about challenges and opportunities of soft-skills simulations in e-learning. Discussion includes: evaluation metrics; role of subject matter experts in developing simulations; video versus computer graphics; technology needed to run simulations; technology breakthroughs; pricing;…
A Study of the Unstable Modes in High Mach Number Gaseous Jets and Shear Layers
NASA Astrophysics Data System (ADS)
Bassett, Gene Marcel
1993-01-01
Instabilities affecting the propagation of supersonic gaseous jets have been studied using high resolution computer simulations with the Piecewise-Parabolic-Method (PPM). These results are discussed in relation to jets from galactic nuclei. These studies involve a detailed treatment of a single section of a very long jet, approximating the dynamics by using periodic boundary conditions. Shear layer simulations have explored the effects of shear layers on the growth of nonlinear instabilities. Convergence of the numerical approximations has been tested by comparing jet simulations with different grid resolutions. The effects of initial conditions and geometry on the dominant disruptive instabilities have also been explored. Simulations of shear layers with a variety of thicknesses, Mach numbers and densities perturbed by incident sound waves imply that the time for the excited kink modes to grow large in amplitude and disrupt the shear layer is tau_g = (546 +/- 24) (M/4)^{1.7} (A_pert/0.02)^{-0.4} delta/c, where M is the jet Mach number, delta is the half-width of the shear layer, and A_pert is the perturbation amplitude. For simulations of periodic jets, the initial velocity perturbations set up zig-zag shock patterns inside the jet. In each case a single zig-zag shock pattern (an odd mode) or a double zig-zag shock pattern (an even mode) grows to dominate the flow. The dominant kink instability responsible for these shock patterns moves approximately at the linear resonance velocity, v_mode = c_ext v_relative / (c_jet + c_ext). For high resolution simulations (those with 150 or more computational zones across the jet width), the even mode dominates if the even perturbation is initially higher in amplitude than the odd perturbation. For low resolution simulations, the odd mode dominates even for a stronger even-mode perturbation. In high resolution simulations the jet boundary rolls up and large amounts of external gas are entrained into the jet. In low resolution simulations this entrainment process is impeded by numerical viscosity. The three-dimensional jet simulations behave similarly to two-dimensional jet runs with the same grid resolutions.
Hall, Benjamin A; Halim, Khairul Abd; Buyan, Amanda; Emmanouil, Beatrice; Sansom, Mark S P
2016-01-01
The interactions of transmembrane (TM) α-helices with the phospholipid membrane and with one another are central to understanding the structure and stability of integral membrane proteins. These interactions may be analysed via coarse-grained molecular dynamics (CGMD) simulations. To obtain statistically meaningful analysis of TM helix interactions, large (N ca. 100) ensembles of CGMD simulations are needed. To facilitate the running and analysis of such ensembles of simulations we have developed Sidekick, an automated pipeline software for performing high throughput CGMD simulations of α-helical peptides in lipid bilayer membranes. Through an end-to-end approach, which takes as input a helix sequence and outputs analytical metrics derived from CGMD simulations, we are able to predict the orientation and likelihood of insertion into a lipid bilayer of a given helix of a family of helix sequences. We illustrate this software via analysis of insertion into a membrane of short hydrophobic TM helices containing a single cationic arginine residue positioned at different positions along the length of the helix. From analysis of these ensembles of simulations we estimate apparent energy barriers to insertion which are comparable to experimentally determined values. In a second application we use CGMD simulations to examine self-assembly of dimers of TM helices from the ErbB1 receptor tyrosine kinase, and analyse the numbers of simulation repeats necessary to obtain convergence of simple descriptors of the mode of packing of the two helices within a dimer. Our approach offers a proof-of-principle platform for the further employment of automation in large ensemble CGMD simulations of membrane proteins. PMID:26580541
Hall, Benjamin A; Halim, Khairul Bariyyah Abd; Buyan, Amanda; Emmanouil, Beatrice; Sansom, Mark S P
2014-05-13
The interactions of transmembrane (TM) α-helices with the phospholipid membrane and with one another are central to understanding the structure and stability of integral membrane proteins. These interactions may be analyzed via coarse grained molecular dynamics (CGMD) simulations. To obtain statistically meaningful analysis of TM helix interactions, large (N ca. 100) ensembles of CGMD simulations are needed. To facilitate the running and analysis of such ensembles of simulations, we have developed Sidekick, an automated pipeline software for performing high throughput CGMD simulations of α-helical peptides in lipid bilayer membranes. Through an end-to-end approach, which takes as input a helix sequence and outputs analytical metrics derived from CGMD simulations, we are able to predict the orientation and likelihood of insertion into a lipid bilayer of a given helix of a family of helix sequences. We illustrate this software via analyses of insertion into a membrane of short hydrophobic TM helices containing a single cationic arginine residue positioned at different positions along the length of the helix. From analyses of these ensembles of simulations, we estimate apparent energy barriers to insertion which are comparable to experimentally determined values. In a second application, we use CGMD simulations to examine the self-assembly of dimers of TM helices from the ErbB1 receptor tyrosine kinase and analyze the numbers of simulation repeats necessary to obtain convergence of simple descriptors of the mode of packing of the two helices within a dimer. Our approach offers a proof-of-principle platform for the further employment of automation in large ensemble CGMD simulations of membrane proteins.
Simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scaleable simulation of an Intelligent Transportation System (ITS). The simulator is designed for running on parallel computers and distributed (networked) computer systems, but can run on standalone workstations for smaller simulations. The simulator currently models instrumented smart vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. Realistic modeling of variations of the posted driving speed is based on human factors studies that take into consideration weather, road conditions, driver personality and behavior, and vehicle type. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on parallel computers, such as ANL's IBM SP-2, for large-scale problems. A novel feature of the approach is that vehicles are represented by autonomous computer processes which exchange messages with other processes. The vehicles have a behavior model which governs route selection and driving behavior, and can react to external traffic events much like real vehicles. With this approach, the simulation is scaleable to take advantage of emerging massively parallel processor (MPP) systems.
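The "vehicles as autonomous processes exchanging messages" idea can be caricatured with Python's multiprocessing module. The toy below sketches the pattern only (a probe-report queue into a TMC process and per-vehicle advisory queues back); the driving and congestion rules are made up and bear no relation to the framework's behavior models.

    import multiprocessing as mp

    def vehicle(vehicle_id, to_tmc, from_tmc):
        """A vehicle as an autonomous process: report position, react to advisories."""
        position = 0.0
        for _ in range(5):
            position += 1.0                       # trivial driving model
            to_tmc.put((vehicle_id, position))    # probe-vehicle report
            if not from_tmc.empty():
                advisory = from_tmc.get()
                position += advisory              # e.g. slow-down effect

    def tmc(to_tmc, channels, n_messages):
        """Traffic Management Center: track probes and broadcast advisories."""
        for _ in range(n_messages):
            vid, pos = to_tmc.get()
            if pos > 3.0:                         # toy congestion rule
                channels[vid].put(-0.5)

    if __name__ == "__main__":
        to_tmc = mp.Queue()
        channels = {i: mp.Queue() for i in range(3)}
        cars = [mp.Process(target=vehicle, args=(i, to_tmc, channels[i])) for i in channels]
        centre = mp.Process(target=tmc, args=(to_tmc, channels, 15))
        for p in cars + [centre]:
            p.start()
        for p in cars + [centre]:
            p.join()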
Assessment of SFR Wire Wrap Simulation Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delchini, Marc-Olivier G.; Popov, Emilian L.; Pointer, William David
Predictive modeling and simulation of nuclear reactor performance and fuel are challenging due to the large number of coupled physical phenomena that must be addressed. Models that will be used for design or operational decisions must be analyzed for uncertainty to ascertain impacts to safety or performance. Rigorous, structured uncertainty analyses are performed by characterizing the model's input uncertainties and then propagating the uncertainties through the model to estimate output uncertainty. This project is part of the ongoing effort to assess modeling uncertainty in Nek5000 simulations of flow configurations relevant to the advanced reactor applications of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. Three geometries are under investigation in these preliminary assessments: a 3-D pipe, a 3-D 7-pin bundle, and a single pin from the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility. Initial efforts have focused on gaining an understanding of Nek5000 modeling options and integrating Nek5000 with Dakota. These tasks are being accomplished by demonstrating the use of Dakota to assess parametric uncertainties in a simple pipe flow problem. This problem is used to optimize performance of the uncertainty quantification strategy and to estimate computational requirements for assessments of complex geometries. A sensitivity analysis with respect to three turbulence models was conducted for turbulent flow in a single wire-wrapped pin (THORS) geometry. Section 2 briefly describes the software tools used in this study and provides appropriate references. Section 3 presents the coupling interface between Dakota and a computational fluid dynamics (CFD) code (Nek5000 or STARCCM+), with details on the workflow, the scripts used for setting up the run, and the scripts used for post-processing the output files. In Section 4, the meshing methods used to generate the THORS and 7-pin bundle meshes are explained. Sections 5, 6 and 7 present numerical results for the 3-D pipe, the single pin THORS mesh, and the 7-pin bundle mesh, respectively.
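The Dakota-to-CFD coupling described in Section 3 typically takes the form of an analysis driver that Dakota forks for each sample: read the parameters file Dakota writes, build the solver input, run the case, and write the response back. The sketch below shows that shape in Python with simplified, invented file formats and tool names; it is not the project's actual interface scripts.

    import subprocess
    import sys

    # Hypothetical Dakota 'fork interface' analysis driver.
    def read_parameters(path):
        """Parse a simplified 'value name' parameters file (placeholder format)."""
        values = {}
        for line in open(path):
            parts = line.split()
            if len(parts) == 2:
                value, name = parts
                try:
                    values[name] = float(value)
                except ValueError:
                    pass
        return values

    def main(params_file, results_file):
        params = read_parameters(params_file)
        # 1) render the solver input from a template, 2) run the solver,
        # 3) extract the quantity of interest and report it back to Dakota.
        with open("case.inp", "w") as f:
            f.write(f"turbulence_model_constant = {params.get('c_mu', 0.09)}\n")
        subprocess.run(["run_cfd_case", "case.inp"], check=True)   # placeholder CLI
        qoi = float(open("qoi.dat").read().strip())                # placeholder output
        with open(results_file, "w") as f:
            f.write(f"{qoi} peak_temperature\n")

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])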
New operator assistance features in the CMS Run Control System
NASA Astrophysics Data System (ADS)
Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. R.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Vougioukas, M.; Zejdl, P.
2017-10-01
During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.
New Operator Assistance Features in the CMS Run Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andre, J.M.; et al.
During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.
Evaluation of Tsunami Run-Up on Coastal Areas at Regional Scale
NASA Astrophysics Data System (ADS)
González, M.; Aniel-Quiroga, Í.; Gutiérrez, O.
2017-12-01
Tsunami hazard assessment is tackled by means of numerical simulations, giving as a result the areas flooded inland by the tsunami wave. To get this, some input data are required, e.g., the high-resolution topobathymetry of the study area and the earthquake focal mechanism parameters. The computational cost of these kinds of simulations is still excessive. An important restriction for the elaboration of large-scale maps at national or regional scale is the reconstruction of high-resolution topobathymetry in the coastal zone. An alternative and traditional method consists of the application of empirical-analytical formulations to calculate run-up at several coastal profiles (e.g., Synolakis, 1987), combined with offshore numerical simulations that do not include coastal inundation. In this case, the numerical simulations are faster, but some limitations are added as the coastal bathymetric profiles are very simply idealized. In this work, we present a complementary methodology based on a hybrid numerical model formed by 2 models that were coupled ad hoc for this work: a non-linear shallow water equations model (NLSWE) for the offshore part of the propagation and a Volume of Fluid model (VOF) for the areas near the coast and inland, applying each numerical scheme where it better reproduces the tsunami wave. The run-up of a tsunami scenario is obtained by applying the coupled model to an ad-hoc numerical flume. To design this methodology, hundreds of worldwide topobathymetric profiles have been parameterized using 5 parameters (2 depths and 3 slopes). In addition, tsunami waves have also been parameterized by their height and period. As an application of the numerical flume methodology, the coastal parameterized profiles and tsunami waves have been combined to build a populated database of run-up calculations; the combinations were computed by means of numerical simulations in the numerical flume. The result is a tsunami run-up database that considers real profile shapes, realistic tsunami waves, and optimized numerical simulations. This database allows the calculation of the run-up of any new tsunami wave, in a short period of time, by interpolation on the database, based on the tsunami wave characteristics provided as an output of the NLSWE model along the coast of a large-scale domain (regional or national scale).
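Querying such a precomputed run-up database amounts to interpolating in the 7-parameter space (5 profile parameters plus wave height and period). The sketch below shows one hedged way to do this in Python with SciPy; the parameter values are random placeholders standing in for the simulated cases, and the normalization and interpolation scheme are illustrative choices only.

    import numpy as np
    from scipy.interpolate import NearestNDInterpolator

    # Hypothetical precomputed database: each row is
    # [depth1, depth2, slope1, slope2, slope3, wave_height, wave_period] -> run-up (m)
    params = np.random.rand(5000, 7)            # placeholder for the simulated cases
    runup = np.random.rand(5000)                # placeholder for the computed run-up values

    # Nearest-neighbour interpolation keeps the query robust in 7 dimensions;
    # a smoother scheme (e.g. RBF) could be substituted where data density allows.
    estimate_runup = NearestNDInterpolator(params, runup)

    # Query: profile parameters at an NLSWE output point along the coast,
    # plus the incident tsunami wave height and period at that point.
    query = np.array([[0.2, 0.5, 0.1, 0.05, 0.02, 0.7, 0.6]])
    print(estimate_runup(query))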
Global and local waveform simulations using the VERCE platform
NASA Astrophysics Data System (ADS)
Garth, Thomas; Saleh, Rafiq; Spinuso, Alessandro; Gemund, Andre; Casarotti, Emanuele; Magnoni, Federica; Krischner, Lion; Igel, Heiner; Schlichtweg, Horst; Frank, Anton; Michelini, Alberto; Vilotte, Jean-Pierre; Rietbrock, Andreas
2017-04-01
In recent years the potential to increase the resolution of seismic imaging by full waveform inversion has been demonstrated on a range of scales from basin to continental. These techniques rely on harnessing the computational power of large supercomputers and running large parallel codes to simulate the seismic wave field in a three-dimensional geological setting. The VERCE platform is designed to make these full waveform techniques accessible to a far wider spectrum of the seismological community. The platform supports the two widely used spectral element simulation programs SPECFEM3D Cartesian and SPECFEM3D globe, allowing users to run a wide range of simulations. In the SPECFEM3D Cartesian implementation the user can run waveform simulations on a range of pre-loaded meshes and velocity models for specific areas, or upload their own velocity model and mesh. In the new SPECFEM3D globe implementation, the user will be able to select from a number of continent-scale model regions, or perform waveform simulations for the whole Earth. Earthquake focal mechanisms can be downloaded within the platform, for example from the GCMT catalogue, or users can upload their own focal mechanism catalogue through the platform. The simulations can be run on a range of European supercomputers in the PRACE network. Once a job has been submitted and run through the platform, the simulated waveforms can be manipulated or downloaded for further analysis. The misfit between the simulated and recorded waveforms can then be calculated within the platform through three interoperable workflows, for raw-data access (FDSN) and caching, pre-processing, and finally misfit calculation. The last workflow makes use of the Pyflex analysis software. In addition, the VERCE platform can be used to produce animations of waveform propagation through the velocity model, and synthetic shakemaps. All these data products are made discoverable and re-usable thanks to the VERCE data and metadata management layer. We demonstrate the functionality of the VERCE platform with two use cases, one using the pre-loaded velocity model and mesh for the Maule area of Chile with the SPECFEM3D Cartesian workflow, and one showing the output of a global simulation using the SPECFEM3D globe workflow. It is envisioned that this tool will allow a much greater range of seismologists to access these full waveform inversion tools, and will aid full waveform tomographic and source inversion, synthetic shakemap production and other full waveform applications in a wide range of tectonic settings.
Understanding casing flow in Pelton turbines by numerical simulation
NASA Astrophysics Data System (ADS)
Rentschler, M.; Neuhauser, M.; Marongiu, J. C.; Parkinson, E.
2016-11-01
For rehabilitation projects of Pelton turbines, the flow in the casing may have an important influence on the overall performance of the machine. Water sheets returning onto the jets or onto the runner significantly reduce efficiency, and the run-away speed depends on the flow in the casing. CFD simulations can provide a detailed insight into this type of flow, but these simulations are computationally intensive. As the volume of water in a Pelton turbine is in general small compared to the complete volume of the turbine housing, a single-phase simulation greatly reduces the complexity of the simulation. In the present work a numerical tool based on the SPH-ALE meshless method is used to simulate the casing flow in a Pelton turbine. Using improved-order schemes reduces the numerical viscosity. This is necessary to resolve the flow in the jet and on the casing wall, where the velocity differs by two orders of magnitude. The results are compared to flow visualizations and measurements in a hydraulic laboratory. Several rehabilitation projects have proved the added value of understanding the flow in the Pelton casing. The flow simulation helps in designing casing inserts, not only to see their influence on the flow, but also to calculate the stresses in the inserts. In some projects, the casing simulation leads to the understanding of unexpected behavior of the flow. One such example is presented, in which the backsplash of a deflector hits the runner, creating a reversed rotation of the runner.
Connectionist agent-based learning in bank-run decision making
NASA Astrophysics Data System (ADS)
Huang, Weihong; Huang, Qiao
2018-05-01
It is of utter importance for policy makers, bankers, and investors to thoroughly understand the probability of a bank run (PBR), which was often neglected in the classical models. A bank run is not merely due to miscoordination (Diamond and Dybvig, 1983) or deterioration of bank assets (Allen and Gale, 1998) but to various factors. This paper presents the simulation results of the nonlinear dynamic probabilities of bank runs based on the global games approach, with the distinct assumption that heterogeneous agents hold highly correlated but non-identical beliefs about the true payoffs. The specific technique used in the simulation is to let agents have an integrated cognitive-affective network. It is observed that, even when the economy is good, agents are significantly affected by the cognitive-affective network when reacting to bad news, which might lead to a bank run. Both a rise in the late payoff, R, and in the early payoff, r, will decrease the effect of the affective process. Increased risk sharing might or might not increase the PBR, and an increase in the late payoff is beneficial for preventing a bank run. This paper is one of the pioneering works linking agent-based computational economics and behavioral economics.
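A stripped-down Monte-Carlo version of the correlated-beliefs setup can be sketched as follows; the payoff mapping, panic threshold, and run condition are invented placeholders and do not reproduce the paper's integrated cognitive-affective network.

    import numpy as np

    def bank_run_probability(n_agents=100, n_runs=2000, r_early=1.0, r_late=1.5,
                             signal_noise=0.1, panic_threshold=0.25, seed=0):
        """Monte-Carlo estimate of the probability of a bank run (PBR).

        Agents observe a common fundamental through noisy private signals
        (highly correlated but non-identical beliefs) and withdraw early when
        the perceived advantage of waiting falls below a panic threshold.
        """
        rng = np.random.default_rng(seed)
        n_bank_runs = 0
        for _ in range(n_runs):
            fundamental = rng.normal(0.0, 1.0)                  # state of bank assets
            signals = fundamental + rng.normal(0.0, signal_noise, n_agents)
            # Perceived gain from waiting shrinks when the private signal looks bad.
            perceived_gain = (r_late - r_early) / (1.0 + np.exp(-signals))
            withdraw = perceived_gain < panic_threshold
            # Placeholder run condition: more than half the depositors withdraw early.
            if withdraw.mean() > 0.5:
                n_bank_runs += 1
        return n_bank_runs / n_runs

    print(bank_run_probability())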
Realtime Space Weather Forecasts Via Android Phone App
NASA Astrophysics Data System (ADS)
Crowley, G.; Haacke, B.; Reynolds, A.
2010-12-01
For the past several years, ASTRA has run a first-principles global 3-D fully coupled thermosphere-ionosphere model in real time for space weather applications. The model is the Thermosphere-Ionosphere Mesosphere Electrodynamics General Circulation Model (TIMEGCM). ASTRA also runs the Assimilative Mapping of Ionospheric Electrodynamics (AMIE) in real time. Using AMIE to drive the high-latitude inputs to the TIMEGCM produces high-fidelity simulations of the global thermosphere and ionosphere. These simulations can be viewed on the Android phone app developed by ASTRA. The SpaceWeather app for the Android operating system is free and can be downloaded from the Google Marketplace. We present the current status of real-time thermosphere-ionosphere space-weather forecasting and discuss the way forward. We explore some of the issues in maintaining real-time simulations with assimilative data feeds in a quasi-operational setting. We also discuss some of the challenges of presenting large amounts of data on a smartphone. The ASTRA SpaceWeather app includes the broadest and most unique range of space weather data yet to be found on a single smartphone app. It is a one-stop shop for space weather and the only app providing access to ASTRA's real-time predictions of the global thermosphere and ionosphere, high-latitude convection and geomagnetic activity. Because of the phone's GPS capability, users can obtain location-specific vertical profiles of electron density and temperature, and time histories of various parameters from the models. The SpaceWeather app has over 9000 downloads, 30 reviews, and a following of active users. It is clear that real-time space weather on smartphones is here to stay, and must be included in planning for any transition to operational space-weather use.
Flexible Environments for Grand-Challenge Simulation in Climate Science
NASA Astrophysics Data System (ADS)
Pierrehumbert, R.; Tobis, M.; Lin, J.; Dieterich, C.; Caballero, R.
2004-12-01
Current climate models are monolithic codes, generally in Fortran, aimed at high-performance simulation of the modern climate. Though they adequately serve their designated purpose, they present major barriers to application in other problems. Tailoring them to paleoclimate or planetary simulations, for instance, takes months of work. Theoretical studies, where one may want to remove selected processes or break feedback loops, are similarly hindered. Further, current climate models are of little value in education, since the implementation of textbook concepts and equations in the code is obscured by technical detail. The Climate Systems Center at the University of Chicago seeks to overcome these limitations by bringing modern object-oriented design into the business of climate modeling. Our ultimate goal is to produce an end-to-end modeling environment capable of configuring anything from a simple single-column radiative-convective model to a full 3-D coupled climate model using a uniform, flexible interface. Technically, the modeling environment is implemented as a Python-based software component toolkit: key number-crunching procedures are implemented as discrete, compiled-language components 'glued' together and co-ordinated by Python, combining the high performance of compiled languages with the flexibility and extensibility of Python. We are incrementally working towards this final objective along a series of distinct, complementary lines. We will present an overview of these activities, including PyOM, a Python-based finite-difference ocean model allowing run-time selection of different Arakawa grids and physical parameterizations; CliMT, an atmospheric modeling toolkit providing a library of 'legacy' radiative, convective and dynamical modules which can be knitted into dynamical models; and PyCCSM, a version of NCAR's Community Climate System Model in which the coupler and run-control architecture are re-implemented in Python, augmenting its flexibility and adaptability.
Unleashing spatially distributed ecohydrology modeling using Big Data tools
NASA Astrophysics Data System (ADS)
Miles, B.; Idaszak, R.
2015-12-01
Physically based spatially distributed ecohydrology models are useful for answering science and management questions related to the hydrology and biogeochemistry of prairie, savanna, forested, as well as urbanized ecosystems. However, these models can produce hundreds of gigabytes of spatial output for a single model run over decadal time scales when run at regional spatial scales and moderate spatial resolutions (~100-km2+ at 30-m spatial resolution) or when run for small watersheds at high spatial resolutions (~1-km2 at 3-m spatial resolution). Numerical data formats such as HDF5 can store arbitrarily large datasets. However even in HPC environments, there are practical limits on the size of single files that can be stored and reliably backed up. Even when such large datasets can be stored, querying and analyzing these data can suffer from poor performance due to memory limitations and I/O bottlenecks, for example on single workstations where memory and bandwidth are limited, or in HPC environments where data are stored separately from computational nodes. The difficulty of storing and analyzing spatial data from ecohydrology models limits our ability to harness these powerful tools. Big Data tools such as distributed databases have the potential to surmount the data storage and analysis challenges inherent to large spatial datasets. Distributed databases solve these problems by storing data close to computational nodes while enabling horizontal scalability and fault tolerance. Here we present the architecture of and preliminary results from PatchDB, a distributed datastore for managing spatial output from the Regional Hydro-Ecological Simulation System (RHESSys). The initial version of PatchDB uses message queueing to asynchronously write RHESSys model output to an Apache Cassandra cluster. Once stored in the cluster, these data can be efficiently queried to quickly produce both spatial visualizations for a particular variable (e.g. maps and animations), as well as point time series of arbitrary variables at arbitrary points in space within a watershed or river basin. By treating ecohydrology modeling as a Big Data problem, we hope to provide a platform for answering transformative science and management questions related to water quantity and quality in a world of non-stationary climate.
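The write path described above (model output arriving as messages and being persisted to Cassandra) could look roughly like the sketch below using the DataStax cassandra-driver; the keyspace, table, and message fields are hypothetical and are not PatchDB's actual schema.

    import json
    from cassandra.cluster import Cluster

    # Hypothetical schema: one row per (patch, variable, timestep); names invented.
    cluster = Cluster(["cassandra-node-1"])
    session = cluster.connect("rhessys_output")
    insert = session.prepare(
        "INSERT INTO patch_values (patch_id, variable, ts, value) VALUES (?, ?, ?, ?)"
    )

    def handle_message(body):
        """Consume one model-output message (e.g. from a message queue) and persist it."""
        record = json.loads(body)
        session.execute(insert, (record["patch_id"], record["variable"],
                                 record["timestep"], record["value"]))

Because writes land on the cluster asynchronously via the queue, the simulation itself never blocks on storage, and spatial or time-series queries can later be served directly from the nodes that hold the data.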
Implications of random variation in the Stand Prognosis Model
David A. Hamilton
1991-01-01
Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...
Dual Optical Comb LWIR Source and Sensor
2017-10-12
[Excerpt from the report's list of figures: Figure 39 — the locking loop controls only one parameter, whereas there are two free-running parameters to control; a 12-point running average (black), equivalent to a 4 cm^-1 resolution, is plotted against optical frequency. Figure 65 — optical frequency combs (OFCs) fabricated and processed on a single epitaxial substrate; each OFC is electrically driven and free-running, requiring no optical locking mechanisms.]
Hulme, Adam; Thompson, Jason; Nielsen, Rasmus Oestergaard; Read, Gemma J M; Salmon, Paul M
2018-06-18
There have been recent calls for the application of the complex systems approach in sports injury research. However, beyond theoretical description and static models of complexity, little progress has been made towards formalising this approach in a way that is practical for sports injury scientists and clinicians. Therefore, our objective was to use a computational modelling method to develop a dynamic simulation in sports injury research. Agent-based modelling (ABM) was used to model the occurrence of sports injury in a synthetic athlete population. The ABM was developed based on sports injury causal frameworks and was applied in the context of distance running-related injury (RRI). Using the acute:chronic workload ratio (ACWR), we simulated the dynamic relationship between changes in weekly running distance and RRI through the manipulation of various 'athlete management tools'. The findings confirmed that building weekly running distances over time, even within the reported ACWR 'sweet spot', will eventually result in RRI as athletes reach and surpass their individual physical workload limits. Introducing training-related error into the simulation and modelling a 'hard ceiling' dynamic resulted in a higher RRI incidence proportion across the population at higher absolute workloads. The presented simulation offers a practical starting point from which to apply more sophisticated computational models that can account for the complex nature of sports injury aetiology. Alongside traditional forms of scientific inquiry, the use of ABM and other simulation-based techniques could be considered as a complementary and alternative methodological approach in sports injury research. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
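For reference, the workload metric driving the simulation is straightforward to compute. The sketch below uses the common coupled convention (the acute week is included in the 4-week chronic average) and an invented per-athlete 'hard ceiling'; it is an illustration of the quantities involved, not the published ABM.

    def acwr(weekly_distances):
        """Acute:chronic workload ratio: last week's load over the 4-week rolling mean."""
        if len(weekly_distances) < 4:
            return None
        acute = weekly_distances[-1]
        chronic = sum(weekly_distances[-4:]) / 4.0
        return acute / chronic if chronic > 0 else None

    def injured(weekly_distances, workload_limit):
        """Toy 'hard ceiling': injury once absolute weekly load exceeds the athlete's limit."""
        return weekly_distances[-1] > workload_limit

    # Example: build weekly distance by 8% per week, which stays inside the
    # reported ACWR 'sweet spot' yet still eventually exceeds the ceiling.
    distances, limit = [20.0], 65.0
    while not injured(distances, limit):
        distances.append(distances[-1] * 1.08)
    print(len(distances), "weeks until the ceiling is exceeded;",
          "final ACWR:", round(acwr(distances), 2))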
Australia's marine virtual laboratory
NASA Astrophysics Data System (ADS)
Proctor, Roger; Gillibrand, Philip; Oke, Peter; Rosebrock, Uwe
2014-05-01
In all modelling studies of realistic scenarios, a researcher has to go through a number of steps to set up a model in order to produce a model simulation of value. The steps are generally the same, independent of the modelling system chosen. These steps include determining the time and space scales and processes of the required simulation; obtaining data for the initial set-up and for input during the simulation time; obtaining observation data for validation or data assimilation; implementing scripts to run the simulation(s); and running utilities or custom-built software to extract results. These steps are time consuming and resource hungry, and have to be done every time irrespective of the simulation - the more complex the processes, the more effort is required to set up the simulation. The Australian Marine Virtual Laboratory (MARVL) is a new development in modelling frameworks for researchers in Australia. MARVL uses the TRIKE framework, a Java-based control system developed by CSIRO that allows a non-specialist user to configure and run a model, to automate many of the modelling preparation steps needed to bring the researcher faster to the stage of simulation and analysis. The tool is seen as enhancing the efficiency of researchers and marine managers, and is being considered as an educational aid in teaching. In MARVL we are developing a web-based open source application which provides a number of model choices and provides search and recovery of relevant observations, allowing researchers to: a) efficiently configure a range of different community ocean and wave models for any region, for any historical time period, with model specifications of their choice, through a user-friendly web application, b) access data sets to force a model and nest a model into, c) discover and assemble ocean observations from the Australian Ocean Data Network (AODN, http://portal.aodn.org.au/webportal/) in a format that is suitable for model evaluation or data assimilation, and d) run the assembled configuration in a cloud computing environment, or download the assembled configuration and packaged data to run on any other system of the user's choice. MARVL is now being applied in a number of case studies around Australia, ranging in scale from locally confined estuaries to the Tasman Sea between Australia and New Zealand. In time we expect the range of models offered will include biogeochemical models.
Building Simulation Modelers are we big-data ready?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanyal, Jibonananda; New, Joshua Ryan
Recent advances in computing and sensor technologies have pushed the amount of data we collect or generate to limits previously unheard of. Sub-minute resolution data from dozens of channels is becoming increasingly common and is expected to increase with the prevalence of non-intrusive load monitoring. Experts are running larger building simulation experiments and are faced with an increasingly complex data set to analyze and derive meaningful insight. This paper focuses on the data management challenges that building modeling experts may face in data collected from a large array of sensors, or generated from running a large number of building energy/performance simulations. The paper highlights the technical difficulties that were encountered and overcome in order to run 3.5 million EnergyPlus simulations on supercomputers and generate over 200 TB of simulation output. This extreme case involved development of technologies and insights that will be beneficial to modelers in the immediate future. The paper discusses different database technologies (including relational databases, columnar storage, and schema-less Hadoop) in order to contrast the advantages and disadvantages of employing each for storage of EnergyPlus output. Scalability, analysis requirements, and the adaptability of these database technologies are discussed. Additionally, unique attributes of EnergyPlus output are highlighted which make data-entry non-trivial for multiple simulations. Practical experience regarding cost-effective strategies for big-data storage is provided. The paper also discusses network performance issues when transferring large amounts of data across a network to different computing devices. Practical issues involving lag, bandwidth, and methods for synchronizing or transferring logical portions of the data are presented. A cornerstone of big-data is its use for analytics; data is useless unless information can be meaningfully derived from it. In addition to technical aspects of managing big data, the paper details design of experiments in anticipation of large volumes of data. The cost of re-reading output into an analysis program is elaborated and analysis techniques that perform analysis in-situ with the simulations as they are run are discussed. The paper concludes with an example and elaboration of the tipping point where it becomes more expensive to store the output than re-running a set of simulations.
Validation of nonlinear gyrokinetic simulations of L- and I-mode plasmas on Alcator C-Mod
DOE Office of Scientific and Technical Information (OSTI.GOV)
Creely, A. J.; Howard, N. T.; Rodriguez-Fernandez, P.
New validation of global, nonlinear, ion-scale gyrokinetic simulations (GYRO) is carried out for L- and I-mode plasmas on Alcator C-Mod, utilizing heat fluxes, profile stiffness, and temperature fluctuations. Previous work at C-Mod found that ITG/TEM-scale GYRO simulations can match both electron and ion heat fluxes within error bars in I-mode [White PoP 2015], suggesting that multi-scale (cross-scale coupling) effects [Howard PoP 2016] may be less important in I-mode than in L-mode. New results presented here, however, show that global, nonlinear, ion-scale GYRO simulations are able to match the experimental ion heat flux, but underpredict electron heat flux (at most radii), electron temperature fluctuations, and perturbative thermal diffusivity in both L- and I-mode. Linear addition of electron heat flux from electron-scale runs does not resolve this discrepancy. These results indicate that single-scale simulations do not sufficiently describe the I-mode core transport, and that multi-scale (coupled electron- and ion-scale) transport models are needed. In conclusion, a preliminary investigation with multi-scale TGLF was unable to resolve the discrepancy between ion-scale GYRO and the experimental electron heat fluxes and perturbative diffusivity, motivating further work with multi-scale GYRO simulations and a more comprehensive study with multi-scale TGLF.
Validation of nonlinear gyrokinetic simulations of L- and I-mode plasmas on Alcator C-Mod
Creely, A. J.; Howard, N. T.; Rodriguez-Fernandez, P.; ...
2017-03-02
New validation of global, nonlinear, ion-scale gyrokinetic simulations (GYRO) is carried out for L- and I-mode plasmas on Alcator C-Mod, utilizing heat fluxes, profile stiffness, and temperature fluctuations. Previous work at C-Mod found that ITG/TEM-scale GYRO simulations can match both electron and ion heat fluxes within error bars in I-mode [White PoP 2015], suggesting that multi-scale (cross-scale coupling) effects [Howard PoP 2016] may be less important in I-mode than in L-mode. New results presented here, however, show that global, nonlinear, ion-scale GYRO simulations are able to match the experimental ion heat flux, but underpredict electron heat flux (at most radii), electron temperature fluctuations, and perturbative thermal diffusivity in both L- and I-mode. Linear addition of electron heat flux from electron-scale runs does not resolve this discrepancy. These results indicate that single-scale simulations do not sufficiently describe the I-mode core transport, and that multi-scale (coupled electron- and ion-scale) transport models are needed. A preliminary investigation with multi-scale TGLF, however, was unable to resolve the discrepancy between ion-scale GYRO and experimental electron heat fluxes and perturbative diffusivity, motivating further work with multi-scale GYRO simulations and a more comprehensive study with multi-scale TGLF.
Effect of suspension kinematic on 14 DOF vehicle model
NASA Astrophysics Data System (ADS)
Wongpattananukul, T.; Chantharasenawong, C.
2017-12-01
Computer simulations play a major role in modern science and engineering because they reduce the time and resources consumed by new studies and designs. Vehicle simulations have been studied extensively to obtain models for minimum lap time solutions, and the accuracy of the results depends on how well these models represent the real phenomena. Vehicle models with 7 degrees of freedom (DOF), 10 DOF and 14 DOF are normally used in optimal control to solve for minimum lap time; however, suspension kinematics, defined as wheel movement with respect to the vehicle body, is usually neglected in these models. Tire forces are expressed as a function of wheel slip and wheel position. Therefore, the suspension kinematic relation is appended to the 14 DOF vehicle model to investigate its effect on the accuracy of the simulated trajectory. The classical 14 DOF vehicle model is chosen as the baseline. Experimental data collected from test runs of a Formula Student style car serve as the reference for simulation and for comparison between the baseline model and the model with suspension kinematics. Results show that in a single long turn the baseline model accumulates a trajectory error relative to the model with suspension kinematics, whereas in short alternating turns the trajectory error is much smaller. These results show that suspension kinematics affects the simulated trajectory, so an optimal control scheme based on the baseline model will be correspondingly inaccurate.
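As an illustration of how a suspension kinematic relation can be appended to a tire force calculation, the toy sketch below interpolates wheel camber from a hypothetical kinematic table and feeds it into a simple linear tire model. The table values, stiffnesses, and function names are illustrative assumptions, not data from the paper's 14 DOF model.

```python
# Toy illustration: wheel orientation is interpolated from a kinematic table
# as a function of suspension travel, then fed into a simple linear tire model.
import numpy as np

# Hypothetical kinematic table: suspension travel [m] vs. camber angle [deg]
travel_pts = np.array([-0.05, 0.0, 0.05])
camber_pts = np.array([1.5, 0.0, -2.0])

def wheel_camber(suspension_travel):
    """Camber angle [deg] from the kinematic table (linear interpolation)."""
    return np.interp(suspension_travel, travel_pts, camber_pts)

def lateral_tire_force(slip_angle_rad, camber_deg,
                       cornering_stiffness=80e3, camber_stiffness=1.5e3):
    """Linear tire model: lateral force from slip angle plus a camber term."""
    return (cornering_stiffness * slip_angle_rad
            + camber_stiffness * np.deg2rad(camber_deg))

# Example: 20 mm of bump travel changes the camber and hence the lateral force.
for z in (0.0, 0.02):
    Fy = lateral_tire_force(slip_angle_rad=0.03, camber_deg=wheel_camber(z))
    print(f"travel={z*1000:4.0f} mm  camber={wheel_camber(z):+.2f} deg  Fy={Fy:8.1f} N")
```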
FPGA in-the-loop simulations of cardiac excitation model under voltage clamp conditions
NASA Astrophysics Data System (ADS)
Othman, Norliza; Adon, Nur Atiqah; Mahmud, Farhanahani
2017-01-01
The voltage clamp technique allows the detection of single-channel currents in biological membranes, which helps identify a variety of electrophysiological problems at the cellular level. In this paper, a simulation study of the voltage clamp technique is presented to analyse the current-voltage (I-V) characteristics of ion currents based on the Luo-Rudy Phase-I (LR-I) cardiac model by using a Field Programmable Gate Array (FPGA). Cardiac models are becoming increasingly complex, so simulations can take a vast amount of time to run. Thus, a real-time hardware implementation using an FPGA could be one of the best solutions for high-performance real-time systems, as it provides high configurability and performance and is able to execute operations in parallel. To shorten development time while retaining high-confidence results, FPGA-based rapid prototyping with the MATLAB HDL Coder was used to construct the algorithm for the simulation system. The HDL Coder converts the designed MATLAB Simulink blocks into hardware description language (HDL) code for the FPGA implementation. As a result, the fixed-point voltage-clamp design of the LR-I model was successfully implemented in MATLAB Simulink, and the simulation of the I-V characteristics of the ionic currents was verified on a Xilinx Virtex-6 XC6VLX240T FPGA development board through FPGA-in-the-loop (FIL) simulation.
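To make the voltage-clamp procedure concrete, the sketch below steps a Hodgkin-Huxley-style potassium current through a series of clamp voltages and records the end-of-step current, which is how an I-V characteristic is assembled. It is a deliberately simplified stand-in, not the LR-I model and not the Simulink/HDL Coder implementation described above.

```python
# Minimal voltage-clamp sketch using a Hodgkin-Huxley-style potassium current
# (illustrative only; not the Luo-Rudy Phase-I ionic model).
import math

g_K, E_K = 36.0, -77.0          # mS/cm^2, mV (standard HH values)

def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def clamp_current(v_test, t_clamp=50.0, dt=0.01, v_hold=-65.0):
    """Clamp to v_test for t_clamp ms and return the end-of-step K+ current."""
    n = alpha_n(v_hold) / (alpha_n(v_hold) + beta_n(v_hold))  # steady state at holding V
    for _ in range(int(t_clamp / dt)):
        n += dt * (alpha_n(v_test) * (1.0 - n) - beta_n(v_test) * n)
    return g_K * n**4 * (v_test - E_K)   # uA/cm^2

# I-V characteristic assembled from a series of voltage steps
for v in range(-80, 61, 20):
    print(f"V = {v:4d} mV   I_K = {clamp_current(v):8.2f} uA/cm^2")
```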
Wu, Jiawei; Yan, Xuedong; Radwan, Essam
2016-06-01
Due to comfort, convenience, and flexibility, taxis have become increasingly prevalent in China, especially in large cities. However, many frequently occurring violations and road crashes were related to taxi drivers. This study aimed to investigate differences in driving performance between taxi drivers and non-professional drivers from the perspectives of red-light running violation and potential crash involvement based on a driving simulation experiment. Two typical scenarios were established in a driving simulator: the red-light running violation scenario and the crash avoidance scenario. A total of 49 participants, including 23 taxi drivers (14 males and 9 females) and 26 non-professional drivers (13 males and 13 females), were recruited for this experiment. The driving simulation experiment results indicated that non-professional drivers paid more attention to red-light running violations in comparison to taxi drivers, who had a higher probability of red-light running violation. Furthermore, it was found that taxi drivers were more inclined to turn the steering wheel in an attempt to avoid a potential collision, whereas non-professional drivers had more abrupt deceleration behaviors when facing a potential crash. Moreover, the experiment results showed that taxi drivers had a lower crash rate than non-professional drivers and performed better in terms of crash avoidance at the intersection. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Shen, B.-W.; Atlas, R.; Reale, O.; Lin, S.-J.; Chern, J.-D.; Chang, J.; Henze, C.
2006-01-01
Hurricane Katrina was the sixth most intense hurricane in the Atlantic. Forecasting Katrina posed major challenges, the most important of which was its rapid intensification. Hurricane intensity forecasting with General Circulation Models (GCMs) is difficult because of their coarse resolution. In this article, six 5-day simulations with the ultra-high resolution finite-volume GCM are conducted on the NASA Columbia supercomputer to show the effects of increased resolution on the intensity predictions of Katrina. It is found that the 0.125 degree runs give tracks comparable to those of the 0.25 degree runs, but provide better intensity forecasts, bringing the center pressure much closer to observations with differences of only plus or minus 12 hPa. In the runs initialized at 1200 UTC 25 AUG, the 0.125 degree run simulates a more realistic intensification rate and better near-eye wind distributions. Moreover, the first global 0.125 degree simulation without convection parameterization (CP) produces even better intensity evolution and near-eye winds than the control run with CP.
New NASA 3D Animation Shows Seven Days of Simulated Earth Weather
2014-08-11
This visualization shows early test renderings of a global computational model of Earth's atmosphere based on data from NASA's Goddard Earth Observing System Model, Version 5 (GEOS-5). This particular run, called Nature Run 2, was run on a supercomputer, spanned 2 years of simulation time at 30 minute intervals, and produced petabytes of output. The visualization spans a little more than 7 days of simulation time, which is 354 time steps. The time period was chosen because a simulated category-4 typhoon developed off the coast of China. The 7 day period is repeated several times during the course of the visualization. Credit: NASA's Scientific Visualization Studio. Read more or download here: svs.gsfc.nasa.gov/goto?4180
Design of a neural network simulator on a transputer array
NASA Technical Reports Server (NTRS)
Mcintire, Gary; Villarreal, James; Baffes, Paul; Rua, Monica
1987-01-01
A brief summary of neural networks is presented which concentrates on the design constraints imposed. Major design issues are discussed together with analysis methods and the chosen solutions. Although the system will be capable of running on most transputer architectures, it currently is being implemented on a 40-transputer system connected to a toroidal architecture. Predictions show a performance level equivalent to that of a highly optimized simulator running on the SX-2 supercomputer.
Terascale Cluster for Advanced Turbulent Combustion Simulations
2008-07-25
the system We have given the name CATS (for Combustion And Turbulence Simulator) to the terascale system that was obtained through this grant. CATS ... InfiniBand interconnect. CATS includes an interactive login node and a file server, each holding in excess of 1 terabyte of file storage. The 35 active ... compute nodes of CATS enable us to run up to 140-core parallel MPI batch jobs; one node is reserved to run the scheduler. CATS is operated and
User's instructions for the cardiovascular Walters model
NASA Technical Reports Server (NTRS)
Croston, R. C.
1973-01-01
The model is a combined, steady-state cardiovascular and thermal model. It was originally developed for interactive use, but was converted to batch-mode simulation for the Sigma 3 computer. The purpose of the model is to compute steady-state circulatory and thermal variables in response to exercise work loads and environmental factors. During a computer simulation run, several selected variables are printed at each time step. End conditions are also printed at the completion of the run.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.
2013-10-15
We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo, in both domain-replicated and domain-decomposed simulations, will run its particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended and reduced precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
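The non-associativity at the heart of this reproducibility problem is easy to demonstrate. The sketch below sums the same values in two different orders using plain double-precision accumulation and then using an exactly rounded accumulation (Python's math.fsum, standing in here for the extended-precision tallies discussed above, not for the paper's integer-tally scheme).

```python
# Double-precision summation is not associative, so a different "parallel"
# ordering of the same values gives a slightly different result; an exactly
# rounded accumulation is order independent.
import math, random

random.seed(1)
values = [random.uniform(-1.0, 1.0) * 10.0**random.randint(-8, 8)
          for _ in range(100_000)]

shuffled = values[:]
random.shuffle(shuffled)            # simulate a different summation order

naive_a, naive_b = sum(values), sum(shuffled)
exact_a, exact_b = math.fsum(values), math.fsum(shuffled)

print("naive sums differ by:", abs(naive_a - naive_b))   # typically non-zero
print("fsum  sums differ by:", abs(exact_a - exact_b))   # exactly zero
```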
NASA Astrophysics Data System (ADS)
Bruntz, R.; Lopez, R. E.; Bhattarai, S. K.; Pham, K. H.; Deng, Y.; Huang, Y.; Wiltberger, M.; Lyon, J. G.
2012-07-01
The Whole Heliosphere Interval (WHI), comprising March 20-April 16, 2008 (DOY 80-107), is a single Carrington Rotation (2068) designated for intense study through observations and simulations. We used solar wind data from the WHI to run the Coupled Magnetosphere-Ionosphere-Thermosphere (CMIT) and stand-alone Lyon-Fedder-Mobarry (LFM) models. The LFM model was also run with the WHI solar wind plasma parameters but with zero interplanetary magnetic field (IMF). With no IMF, we expect that the cross-polar cap potential (CPCP) is due entirely to the viscous interaction. Comparing the LFM runs with and without the IMF, we found that during strong driving with southward IMF Bz, the viscous potential could be a significant fraction of the total CPCP. During times of northward IMF Bz, the CPCP was generally lower than the CPCP value from the IMF=0 run. LFM tends to produce high polar cap potentials, but by using the Bruntz et al. (2012) viscous potential formula (Φ_V = μ n^0.439 V^1.33, where μ = 0.00431) and the IMF=0 LFM run, we calculated a scaling factor γ = 1.54, which can be used to scale the LFM CPCP during the WHI down to realistic values. The Newell et al. (2008) viscous merging term can similarly be used to predict the viscous potential using the formula Φ_V = ν n^1/2 V^2, where the value ν = 6.39×10^-5 was also found using the zero-IMF run. Both formulas were found to perform better when V (solar wind) = Vx rather than Vtotal, yielding similar, accurate predictions of the LFM viscous potential, with R^2 > 0.91 for both formulas. The γ factor was also used to scale down the LFM CPCP from the full solar wind run, with most of the resultant values matching the CPCP from the Weimer05 model well, even though γ was derived independent of the Weimer05 model or the full LFM data. We interpret this to be an indication that the conductivity model in LFM is producing values that are too low, thus elevating the CPCP values.
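For readers who want to evaluate the two viscous-potential expressions quoted above, the sketch below implements them directly. The unit convention (density n in cm^-3, speed V in km/s, potential in kV) and the example input values, including the LFM CPCP figure, are assumptions for illustration rather than values from the abstract.

```python
# Direct evaluation of the two viscous-potential estimates quoted above.
def phi_viscous_bruntz(n, v, mu=0.00431):
    """Bruntz et al. (2012) form: Phi_V = mu * n**0.439 * V**1.33."""
    return mu * n**0.439 * v**1.33

def phi_viscous_newell(n, v, nu=6.39e-5):
    """Newell et al. (2008) form: Phi_V = nu * n**0.5 * V**2."""
    return nu * n**0.5 * v**2

gamma = 1.54          # LFM CPCP scaling factor reported in the abstract
n, vx = 5.0, 400.0    # hypothetical solar wind density [cm^-3] and Vx [km/s]
lfm_cpcp = 120.0      # hypothetical LFM cross-polar cap potential [kV]

print("Bruntz viscous potential :", round(phi_viscous_bruntz(n, vx), 2))
print("Newell viscous potential :", round(phi_viscous_newell(n, vx), 2))
print("LFM CPCP scaled by gamma :", round(lfm_cpcp / gamma, 2))
```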
Liu, Yi; Li, Yuefen; Harris, Paul; Cardenas, Laura M; Dunn, Robert M; Sint, Hadewij; Murray, Phil J; Lee, Michael R F; Wu, Lianhai
2018-04-01
In this study, we evaluated the ability of the SPACSYS model to simulate water run-off, soil moisture, N2O fluxes and grass growth using data generated from a field of the North Wyke Farm Platform. The field-scale model is adapted via a linked and grid-based approach (grid-to-grid) to account for not only temporal dynamics but also the within-field spatial variation in these key ecosystem indicators. Spatial variability in nutrient and water presence at the field-scale is a key source of uncertainty when quantifying nutrient cycling and water movement in an agricultural system. Results demonstrated that the new spatially distributed version of SPACSYS provided a worthy improvement in accuracy over the standard (single-point) version for biomass productivity. No difference in model prediction performance was observed for water run-off, reflecting the closed-system nature of this variable. Similarly, no difference in model prediction performance was found for N2O fluxes, but here the N2O predictions were noticeably poor in both cases. Further developmental work, informed by this study's findings, is proposed to improve model predictions for N2O. Soil moisture results with the spatially distributed version appeared promising but this promise could not be objectively verified.
Zhu, Q.; Jiang, H.; Liu, J.; Wei, X.; Peng, C.; Fang, X.; Liu, S.; Zhou, G.; Yu, S.; Ju, W.
2010-01-01
The Integrated Biosphere Simulator is used to evaluate the spatial and temporal patterns of the crucial hydrological variables [run-off and actual evapotranspiration (AET)] of the water balance across China for the period 1951–2006 including a precipitation analysis. Results suggest three major findings. First, simulated run-off captured 85% of the spatial variability and 80% of the temporal variability for 85 hydrological gauges across China. The mean relative errors were within 20% for 66% of the studied stations and within 30% for 86% of the stations. The Nash–Sutcliffe coefficients indicated that the quantity pattern of run-off was also captured acceptably except for some watersheds in southwestern and northwestern China. The possible reasons for underestimation of run-off in the Tibetan plateau include underestimation of precipitation and uncertainties in other meteorological data due to complex topography, and simplified representations of the soil depth attribute and snow processes in the model. Second, simulated AET matched reasonably with estimated values calculated as the residual of precipitation and run-off for watersheds controlled by the hydrological gauges. Finally, trend analysis based on the Mann–Kendall method indicated that significant increasing and decreasing patterns in precipitation appeared in the northwest part of China and the Yellow River region, respectively. Significant increasing and decreasing trends in AET were detected in the Southwest region and the Yangtze River region, respectively. In addition, the Southwest region, northern China (including the Heilongjiang, Liaohe, and Haihe Basins), and the Yellow River Basin showed significant decreasing trends in run-off, and the Zhemin hydrological region showed a significant increasing trend.
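The evaluation statistics cited above, the mean relative error and the Nash-Sutcliffe coefficient, are compact enough to show directly. The sketch below computes both for a made-up pair of observed and simulated run-off series; the numbers are purely illustrative.

```python
# Minimal implementations of the two run-off evaluation statistics mentioned
# above, applied to hypothetical gauge data.
def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def mean_relative_error(obs, sim):
    """Average of |sim - obs| / obs over all observations."""
    return sum(abs(s - o) / o for o, s in zip(obs, sim)) / len(obs)

obs = [120.0, 95.0, 210.0, 330.0, 150.0]   # hypothetical monthly run-off
sim = [110.0, 90.0, 230.0, 310.0, 160.0]
print("NSE :", round(nash_sutcliffe(obs, sim), 3))
print("MRE :", round(mean_relative_error(obs, sim), 3))
```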
NASA Technical Reports Server (NTRS)
Harvey, Jason; Moore, Michael
2013-01-01
The General-Use Nodal Network Solver (GUNNS) is a modeling software package that combines nodal analysis and the hydraulic-electric analogy to simulate fluid, electrical, and thermal flow systems. GUNNS is developed by L-3 Communications under the TS21 (Training Systems for the 21st Century) project for NASA Johnson Space Center (JSC), primarily for use in space vehicle training simulators at JSC. It has sufficient compactness and fidelity to model the fluid, electrical, and thermal aspects of space vehicles in real-time simulations running on commodity workstations, for vehicle crew and flight controller training. It has a reusable and flexible component and system design, and a Graphical User Interface (GUI), providing capability for rapid GUI-based simulator development, ease of maintenance, and associated cost savings. GUNNS is optimized for NASA's Trick simulation environment, but can be run independently of Trick.
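The nodal-analysis idea behind this kind of solver can be sketched in a few lines: each link contributes a conductance (or its hydraulic/thermal analogue) to a system matrix, and the node potentials follow from a linear solve. The sketch below is a generic textbook formulation with hypothetical inputs, not the GUNNS API.

```python
# Generic nodal analysis: assemble link conductances into a matrix and solve
# for node potentials given injected flows (hydraulic-electric analogy).
import numpy as np

def solve_network(n_nodes, links, sources):
    """links: (node_a, node_b, conductance); node index -1 means ground.
    sources: {node: injected flow}. Returns the node potentials."""
    G = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    for a, c, g in links:
        for i in (a, c):
            if i >= 0:
                G[i, i] += g          # self conductance
        if a >= 0 and c >= 0:
            G[a, c] -= g              # mutual conductance
            G[c, a] -= g
    for node, flow in sources.items():
        b[node] += flow
    return np.linalg.solve(G, b)

# Two nodes, three links (two to ground), 1.0 unit of flow injected at node 0.
potentials = solve_network(
    n_nodes=2,
    links=[(0, -1, 0.5), (1, -1, 0.25), (0, 1, 1.0)],
    sources={0: 1.0},
)
print("node potentials:", potentials)
```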
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jalan Ganesha 10, Bandung 40132
Monte Carlo (MC) is one of the powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for simulating radiographic images is MC-GPU, a code developed by Andreu Badal. This study investigated the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run serially on a CPU and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with 10^8 or more photon histories and energies from 60 keV to 90 keV. Analyzed with a statistical approach, the quality of the GPU and CPU images is relatively the same.
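The "each core calculates one photon" statement reflects the fact that photon histories are independent and therefore embarrassingly parallel. The sketch below distributes toy photon histories through a made-up slab attenuation model (not MC-GPU's physics) across worker processes and sums the tallies afterwards.

```python
# Toy embarrassingly parallel Monte Carlo: independent photon histories are
# farmed out to worker processes and the transmission tally is summed.
import math, random
from multiprocessing import Pool

MU, THICKNESS, P_ABSORB = 0.2, 5.0, 0.3   # illustrative slab parameters

def photon_history(seed):
    """Return 1 if the photon is transmitted through the slab, else 0."""
    rng = random.Random(seed)
    depth = 0.0
    while True:
        depth += -math.log(1.0 - rng.random()) / MU   # sampled free path [cm]
        if depth >= THICKNESS:
            return 1                                   # transmitted
        if rng.random() < P_ABSORB:
            return 0                                   # absorbed
        # otherwise: forward scatter, keep going (toy physics)

if __name__ == "__main__":
    n_photons = 100_000
    with Pool() as pool:                               # one history per task
        transmitted = sum(pool.map(photon_history, range(n_photons), 1000))
    print("transmission fraction:", transmitted / n_photons)
```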
The Scylla Multi-Code Comparison Project
NASA Astrophysics Data System (ADS)
Maller, Ariyeh; Stewart, Kyle; Bullock, James; Oñorbe, Jose; Scylla Team
2016-01-01
Cosmological hydrodynamical simulations are one of the main techniques used to understand galaxy formation and evolution. However, it is far from clear to what extent different numerical techniques and different implementations of feedback yield different results. The Scylla Multi-Code Comparison Project seeks to address this issue by running identical initial condition simulations with different popular hydrodynamic galaxy formation codes. Here we compare simulations of a Milky Way mass halo using the codes enzo, ramses, art, arepo and gizmo-psph. The different runs produce galaxies with a variety of properties. There are many differences, but also many similarities. For example we find that in all runs cold flow disks exist; extended gas structures, far beyond the galactic disk, that show signs of rotation. Also, the angular momentum of warm gas in the halo is much larger than the angular momentum of the dark matter. We also find notable differences between runs. The temperature and density distribution of hot gas can differ by over an order of magnitude between codes and the stellar mass to halo mass relation also varies widely. These results suggest that observations of galaxy gas halos and the stellar mass to halo mass relation can be used to constrain the correct model of feedback.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galassi, Mark C.
Diorama is written as a collection of modules that can run in separate threads or in separate processes. This defines a clear interface between the modules and also allows concurrent processing of different parts of the pipeline. The pipeline is determined by a description in a scenario file [Norman and Tornga, 2012, Tornga and Norman, 2014]. The scenario manager parses the XML scenario and sets up the sequence of modules which will generate an event, propagate the signal to a set of sensors, and then run processing modules on the results provided by those sensor simulations. During a run, a variety of “observer” and “processor” modules can be invoked to do interim analysis of results. Observers do not modify the simulation results, while processors may affect the final result. At the end of a run, results are collated and final reports are put out. A detailed description of the scenario file and how it puts together a simulation is given in [Tornga and Norman, 2014]. The processing pipeline and how to program it with the Diorama API is described in Tornga et al. [2015] and Tornga and Wakeford [2015]. In this report I describe the communications infrastructure that is used.
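A minimal sketch of the observer/processor pattern described above is given below; the class and function names are hypothetical and do not reflect the Diorama API.

```python
# Sketch of a pipeline where sensor simulations produce results, observers
# inspect interim results without modifying them, and processors may change
# the final result before it is collated into a report.
class Observer:
    def observe(self, results):          # read-only look at interim results
        print("observer saw", len(results), "sensor records")

class ThresholdProcessor:
    def __init__(self, threshold):
        self.threshold = threshold
    def process(self, results):          # may modify the final result
        return [r for r in results if r["counts"] >= self.threshold]

def run_pipeline(sensor_sims, observers, processors):
    results = [sim() for sim in sensor_sims]     # "propagate signal to sensors"
    for obs in observers:
        obs.observe(results)
    for proc in processors:
        results = proc.process(results)
    return results                               # collated final report

# Hypothetical sensor simulations returning dummy count records
sims = [lambda i=i: {"sensor": i, "counts": 10 * i} for i in range(5)]
final = run_pipeline(sims, [Observer()], [ThresholdProcessor(threshold=20)])
print("final report:", final)
```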
NASA Astrophysics Data System (ADS)
von Trentini, F.; Schmid, F. J.; Braun, M.; Frigon, A.; Leduc, M.; Martel, J. L.; Willkofer, F.; Wood, R. R.; Ludwig, R.
2017-12-01
Meteorological extreme events seem to be becoming more frequent in the present and future climate, and separating natural climate variability from a clear climate change effect on these extreme events is of growing interest. Since there is only one realisation of historical events, observation data cannot provide the very long time series needed for a robust statistical analysis of natural variability. A new single model large ensemble (SMLE), developed for the ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec), is designed to overcome this lack of data by downscaling 50 members of the CanESM2 (RCP 8.5) with the Canadian CRCM5 regional model (using the EURO-CORDEX grid specifications) for time series of 1950-2099 each, resulting in 7500 years of simulated climate. This allows for a better probabilistic analysis of rare and extreme events than any preceding dataset. Besides seasonal sums, several indicators concerning heatwave frequency, duration and mean temperature, as well as the number and maximum length of dry periods (consecutive days < 1 mm), are calculated for the ClimEx ensemble and several EURO-CORDEX runs. This enables us to investigate the interaction between natural variability (as it appears in the CanESM2-CRCM5 members) and the climate change signal of those members for past, present and future conditions. Adding the EURO-CORDEX results to this, we can also assess the role of internal model variability (or natural variability) in climate change simulations. A first comparison shows similar magnitudes of variability of climate change signals between the ClimEx large ensemble and the CORDEX runs for some indicators, while for most indicators the spread of the SMLE is smaller than the spread of different CORDEX models.
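As an example of the kind of indicator mentioned above, the sketch below computes the maximum length of a dry period (consecutive days with precipitation below 1 mm) from a daily series; the example values are made up.

```python
# Maximum dry-spell length: longest run of consecutive days below a
# precipitation threshold (default 1 mm), computed from a daily series.
def max_dry_spell(daily_precip_mm, threshold=1.0):
    longest = current = 0
    for p in daily_precip_mm:
        current = current + 1 if p < threshold else 0
        longest = max(longest, current)
    return longest

series = [0.0, 0.2, 5.1, 0.0, 0.0, 0.4, 0.9, 3.2, 0.0]   # hypothetical days
print("maximum dry spell length:", max_dry_spell(series), "days")
```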
Development of the CELSS Emulator at NASA JSC
NASA Technical Reports Server (NTRS)
Cullingford, Hatice S.
1989-01-01
The Controlled Ecological Life Support System (CELSS) Emulator is under development at the NASA Johnson Space Center (JSC) to investigate computer simulations of integrated CELSS operations involving humans, plants, and process machinery. This paper describes Version 1.0 of the CELSS Emulator, which was initiated in 1988 on the JSC Multi Purpose Applications Console Test Bed as the simulation framework. The run module of the simulation system now contains a CELSS model called BLSS. The CELSS Emulator makes it possible to generate model data sets, store libraries of results for further analysis, and also display plots of model variables as a function of time. The progress of the project is presented with sample test runs and simulation display pages.
BaHaMAS A Bash Handler to Monitor and Administrate Simulations
NASA Astrophysics Data System (ADS)
Sciarra, Alessandro
2018-03-01
Numerical QCD is often extremely resource demanding, and it is not rare to run hundreds of simulations at the same time. Each of these can last for days or even months, and each typically requires a job-script file as well as an input file with the physical parameters for the application to be run. Moreover, some monitoring operations (e.g. copying, moving, deleting or modifying files, resuming crashed jobs) are often required to guarantee that the final statistics are correctly accumulated. Handling simulations manually is probably the most error-prone way to proceed, as well as uncomfortable and inefficient. BaHaMAS was developed and has been successfully used in recent years as a tool to automatically monitor and administrate simulations.
Implementation of the force decomposition machine for molecular dynamics simulations.
Borštnik, Urban; Miller, Benjamin T; Brooks, Bernard R; Janežič, Dušanka
2012-09-01
We present the design and implementation of the force decomposition machine (FDM), a cluster of personal computers (PCs) that is tailored to running molecular dynamics (MD) simulations using the distributed diagonal force decomposition (DDFD) parallelization method. The cluster interconnect architecture is optimized for the communication pattern of the DDFD method. Our implementation of the FDM relies on standard commodity components, even for networking. Although the cluster is meant for DDFD MD simulations, it remains general enough for other parallel computations. An analysis of several MD simulation runs on both the FDM and a standard PC cluster demonstrates that the FDM's interconnect architecture provides greater performance than a more general cluster interconnect. Copyright © 2012 Elsevier Inc. All rights reserved.
Bulk Chemical and Hf/W Isotopic Consequences of Lossy Accretion
NASA Astrophysics Data System (ADS)
Dwyer, C. A.; Nimmo, F.; Chambers, J.
2013-12-01
The late stages of planetary accretion involve stochastic, large collisions [1]. Many of these collisions likely resulted in hit-and-run events [2] or erosion of existing bodies' crusts [3] or mantles [4]. Here we present a preliminary investigation into the effects of lossy late-stage accretion on the bulk chemistry and isotopic characteristics of the resulting planets. Our model is composed of two parts: (1) an N-body accretion code [5] tracks the orbital and collisional evolution of the terrestrial bodies, including hit-and-run and fragmentation events; (2) post-processing evolves the chemistry in light of radioactive decay and impact-related mixing and partial equilibration. Sixteen runs were performed using the MERCURY N-body code [5]; each run contained Jupiter and Saturn in their current orbits as well as approx 150 initial bodies. Different collisional outcomes including fragmentation are possible depending on the velocity, angle, mass ratio, and total mass of the impact (modified from [6, 7]). The masses of the core and mantle of each body are tracked throughout the simulation. All bodies are assigned an initial mantle mass fraction, y, of 0.7. We track the Hf and W evolution of these bodies. Radioactive decay occurs between impacts. We calculate the effect of an impact by assuming an idealized model of mixing and partial equilibration [8]. The core equilibration factor is a free parameter; we use 0.4. Partition coefficients are assumed constant. Diversity increases as final mass decreases. The range in final y changes from 0.66-0.72 for approx Earth-mass planets to 0.41-1 for the smallest bodies in the simulation. The scatter in tungsten anomaly increases from 0.79-4.0 for approx Earth-mass planets to 0.11-18 for the smallest masses. This behavior is similar to that observed in our solar system in terms of both bulk and isotopic chemistry. There is no single impact event which defines the final state of the body; therefore, talking about a single, specific age of formation does not make sense. Instead, it must be recognized that terrestrial planet formation occurs over a range of time spanning many tens to perhaps hundreds of millions of years. We are currently performing sensitivity analyses to determine the effect on the tungsten isotopic anomalies of the final bodies. [1] Agnor et al. (1999), Icarus 142, 219-237. [2] Asphaug et al. (2006), Nature 439, 155-160. [3] O'Neill & Palme (2008), Phil Trans R Soc Lond A 366, 4205-4238. [4] Benz et al. (2007), Sp SciRev 132, 189-202. [5] Chambers (2013), Icarus, 224, 43-56. [6] Genda et al. (2012), ApJ 744, 137. [7] Leinhardt & Stewart (2012), ApJ 745, 79. [8] Nimmo et al. (2010), EPSL 292, 363-370.
Kitanaka, Nobue; Kitanaka, Junichi; Hall, F. Scott; Uhl, George R.; Watabe, Kaname; Kubo, Hitoshi; Takahashi, Hitoshi; Tatsuta, Tomohiro; Morita, Yoshio; Takemura, Motohiko
2014-01-01
Repeated intermittent administration of amphetamines acutely increases appetitive and consummatory aspects of motivated behaviors as well as general activity and exploratory behavior, including voluntary running wheel activity. Subsequently, if the drug is withdrawn, the frequency of these behaviors decrease, which is thought to be indicative of dysphoric symptoms associated with amphetamine withdrawal. Such decreases may be observed after chronic treatment or even after single drug administrations. In the present study, the effect of acute methamphetamine (METH) on running wheel activity, horizontal locomotion, appetitive behavior (food access), and consummatory behavior (food and water intake) was investigated in mice. A multi-configuration behavior apparatus designed to monitor the five behaviors was developed, where combined measures were recorded simultaneously. In the first experiment, naïve male ICR mice showed gradually increasing running wheel activity over three consecutive days after exposure to a running wheel, while mice without a running wheel showed gradually decreasing horizontal locomotion, consistent with running wheel activity being a positively motivated form of natural motor activity. In experiment 2, increased horizontal locomotion and food access, and decreased food intake, were observed for the initial 3 h after acute METH challenge. Subsequently, during the dark phase period decreased running wheel activity and horizontal locomotion were observed. The reductions in running wheel activity and horizontal locomotion may be indicative of reduced dopaminergic function, although it remains to be seen if these changes may be more pronounced after more prolonged METH treatments. PMID:22079320
First Vlasiator results on foreshock ULF wave activity
NASA Astrophysics Data System (ADS)
Palmroth, M.; Eastwood, J. P.; Pokhotelov, D.; Hietala, H.; Kempf, Y.; Hoilijoki, S.; von Alfthan, S.; Vainio, R. O.
2013-12-01
For decades, a certain type of ultra low frequency waves with a period of about 30 seconds have been observed in the Earth's quasi-parallel foreshock. These waves, with a wavelength of about an Earth radius, are compressive and propagate obliquely with respect to the interplanetary magnetic field (IMF). The latter property has caused trouble to scientists as the growth rate for the instability causing the waves is maximized along the magnetic field. So far, these waves have been characterized by single or multi-spacecraft methods and 2-dimensional hybrid-PIC simulations, which have not fully reproduced the wave properties. Vlasiator is a newly developed, global hybrid-Vlasov simulation, which solves ions in the six-dimensional phase space using the Vlasov equation and electrons using magnetohydrodynamics (MHD). The outcome of the simulation is a global reproduction of ion-scale physics in a holistic manner where the generation of physical features can be followed in time and their consequences can be quantitatively characterized. Vlasiator produces the ion distribution functions and the related kinetic physics in unprecedented detail, in the global magnetospheric scale presently with a resolution of 0.13 RE in the ordinary space and 20 km/s in the velocity space. We run two simulations, where we use both a typical Parker-spiral and a radial IMF as an input to the code. The runs are carried out in the ecliptic 2-dimensional plane in the ordinary space, and with three dimensions in the velocity space. We observe the generation of the 30-second ULF waves, and characterize their evolution and physical properties in time, comparing to observations by Cluster spacecraft. We find that Vlasiator reproduces these waves in all reported observational aspects, i.e., they are of the observed size in wavelength and period, they are compressive and propagate obliquely to the IMF. In particular, we investigate the oblique propagation and discuss the issues related to the long-standing question of oblique propagation.
The evolution of extreme precipitations in high resolution scenarios over France
NASA Astrophysics Data System (ADS)
Colin, J.; Déqué, M.; Somot, S.
2009-09-01
Over the past years, improving the modelling of extreme events and their variability at climatic time scales has become one of the challenging issues raised in the regional climate research field. This study shows the results of a high resolution (12 km) scenario run over France with the limited area model (LAM) ALADIN-Climat, regarding the representation of extreme precipitation. The runs were conducted in the framework of the ANR-SCAMPEI national project on high resolution scenarios over French mountains. As a first step, we attempt to quantify one of the uncertainties implied by the use of a LAM: the size of the area on which the model is run. In particular, we address the issue of whether a relatively small domain allows the model to create its own small-scale processes. Indeed, high resolution scenarios cannot be run on large domains because of the computation time. Therefore one needs to answer this preliminary question before producing and analyzing such scenarios. To do so, we worked in the framework of a « big brother » experiment. We performed a 23-year long global simulation in present-day climate (1979-2001) with the ARPEGE-Climat GCM, at a resolution of approximately 50 km over Europe (stretched grid). This first simulation, named ARP50, constitutes the « big brother » reference of our experiment. It has been validated in comparison with the CRU climatology. Then we filtered the short waves (up to 200 km) from ARP50 in order to obtain the equivalent of coarse resolution lateral boundary conditions (LBC). We have carried out three ALADIN-Climat simulations at a 50 km resolution with these LBC, using different configurations of the model: * FRA50, run over a small domain (2000 x 2000 km, centered over France), * EUR50, run over a larger domain (5000 x 5000 km, centered over France as well), * EUR50-SN, run over the large domain (using spectral nudging). Considering that the ARPEGE-Climat and ALADIN-Climat models share the same physics and dynamics and that both regional and global simulations were run at the same resolution, ARP50 can be regarded as a reference with which FRA50, EUR50 and EUR50-SN should each be compared. After an analysis of the differences between the regional simulations and ARP50 in annual and seasonal mean, we focus on the representation of rainfall extremes, comparing two-dimensional fields of various indices inspired by STARDEX, and quantile-quantile plots. The results show a good agreement with the ARP50 reference for all three regional simulations, and few differences are found between them. This result indicates that the use of small domains is not significantly detrimental to the modelling of extreme precipitation events. It also shows that the spectral nudging technique has no detrimental effect on the extreme precipitation. Therefore, high resolution scenarios performed on a relatively small domain, such as the ones run for SCAMPEI, can be regarded as good tools to explore the possible evolution of precipitation extremes in the future climate. Preliminary results on the response of precipitation extremes over South-East France are given.
Stochastic Individual-Based Modeling of Bacterial Growth and Division Using Flow Cytometry.
García, Míriam R; Vázquez, José A; Teixeira, Isabel G; Alonso, Antonio A
2017-01-01
A realistic description of the variability in bacterial growth and division is critical to produce reliable predictions of safety risks along the food chain. Individual-based modeling of bacteria provides the theoretical framework to deal with this variability, but it requires information about the individual behavior of bacteria inside populations. In this work, we overcome this problem by estimating the individual behavior of bacteria from population statistics obtained with flow cytometry. For this objective, a stochastic individual-based modeling framework is defined based on standard assumptions during division and exponential growth. The unknown single-cell parameters required for running the individual-based modeling simulations, such as the cell size growth rate, are estimated from the flow cytometry data. Instead of using the individual-based model directly, we make use of a modified Fokker-Planck equation. This single equation simulates the population statistics as a function of the unknown single-cell parameters. We test the validity of the approach by modeling the growth and division of Pediococcus acidilactici within the exponential phase. The estimations reveal the statistics of cell growth and division using only data from flow cytometry at a given time. From the relationship between the mother and daughter volumes, we also predict that P. acidilactici divides into two successive parallel planes.
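A toy version of the stochastic individual-based framework described above can be written in a few lines: each cell grows exponentially with its own noisy rate and splits in two when it doubles in size. The parameter values below are illustrative assumptions, not estimates from the flow cytometry data.

```python
# Toy stochastic individual-based simulation of exponential growth and binary
# division; rates and sizes are illustrative only.
import random

random.seed(0)

def simulate(n_cells=200, t_end=3.0, dt=0.01, mean_rate=1.0, rate_sd=0.2):
    # each cell is a tuple: (size, growth rate, size at which it divides)
    cells = [(1.0, random.gauss(mean_rate, rate_sd), 2.0) for _ in range(n_cells)]
    t = 0.0
    while t < t_end:
        new_cells = []
        for size, rate, div_size in cells:
            size *= 1.0 + rate * dt                   # exponential size growth
            if size >= div_size:                      # binary fission
                for _ in range(2):
                    new_cells.append((size / 2.0,
                                      random.gauss(mean_rate, rate_sd), 2.0))
            else:
                new_cells.append((size, rate, div_size))
        cells = new_cells
        t += dt
    return cells

population = simulate()
sizes = [c[0] for c in population]
print("final population:", len(population))
print("mean cell size  :", round(sum(sizes) / len(sizes), 3))
```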
2013-01-01
Background Agent-based models (ABMs) have been used to estimate the effects of malaria-control interventions. Early studies have shown the efficacy of larval source management (LSM) and insecticide-treated nets (ITNs) as vector-control interventions, applied both in isolation and in combination. However, the robustness of results can be affected by several important modelling assumptions, including the type of boundary used for landscapes, and the number of replicated simulation runs reported in results. Selection of the ITN coverage definition may also affect the predictive findings. Hence, independent verification of prior findings of published models through replication is especially important. Methods A spatially-explicit entomological ABM of Anopheles gambiae is used to simulate the resource-seeking process of mosquitoes in grid-based landscapes. To explore LSM and replicate results of an earlier LSM study, the original landscapes and scenarios are replicated by using a landscape generator tool, and 1,800 replicated simulations are run using absorbing and non-absorbing boundaries. To explore ITNs and evaluate the relative impacts of the different ITN coverage schemes, the settings of an earlier ITN study are replicated, the coverage schemes are defined and simulated, and 9,000 replicated simulations for three ITN parameters (coverage, repellence and mortality) are run. To evaluate LSM and ITNs in combination, landscapes with varying densities of houses and human populations are generated, and 12,000 simulations are run. Results General agreement with an earlier LSM study is observed when an absorbing boundary is used. However, using a non-absorbing boundary produces significantly different results, which may be attributed to the unrealistic killing effect of an absorbing boundary. Abundance cannot be completely suppressed by removing aquatic habitats within 300 m of houses. Also, with density-dependent oviposition, removal of an insufficient number of aquatic habitats may prove counter-productive. The importance of performing a large number of simulation runs is also demonstrated. For ITNs, the choice of coverage scheme has important implications, and excessively high repellence yields detrimental effects. When LSM and ITNs are applied in combination, ITN mortality plays a more important role at higher densities of houses. With partial mortality, increasing ITN coverage is more effective than increasing LSM coverage, and integrating both interventions yields more synergy as the densities of houses increase. Conclusions Using a non-absorbing boundary and reporting average results from a sufficiently large number of simulation runs are strongly recommended for malaria ABMs. Several guidelines (code and data sharing, relevant documentation, and standardized models) for future modellers are also recommended. PMID:23965136
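The boundary-condition point made in the conclusions can be illustrated with a generic grid random walk: with an absorbing boundary a walker that steps off the grid is removed (an artificial kill), whereas a non-absorbing (here, reflecting) boundary keeps it in the domain. The sketch below is a schematic illustration, not the published ABM.

```python
# Compare survival of random walkers on a grid under absorbing and
# non-absorbing (reflecting) boundaries; purely illustrative.
import random

def survival_fraction(boundary, n_walkers=5000, n_steps=200, size=50, seed=42):
    rng = random.Random(seed)
    alive = 0
    for _ in range(n_walkers):
        x = y = size // 2
        dead = False
        for _ in range(n_steps):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
            if not (0 <= x < size and 0 <= y < size):
                if boundary == "absorbing":
                    dead = True          # walker removed at the edge
                    break
                x = min(max(x, 0), size - 1)   # reflect back onto the grid
                y = min(max(y, 0), size - 1)
        if not dead:
            alive += 1
    return alive / n_walkers

for b in ("absorbing", "non-absorbing"):
    print(b, "boundary survival:", survival_fraction(b))
```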
NASA Astrophysics Data System (ADS)
Gomes, J. L.; Chou, S. C.; Yaguchi, S. M.
2012-04-01
Physics parameterizations and the model vertical and horizontal resolutions, for example, can significantly contribute to the uncertainty in numerical weather predictions, especially in regions with complex topography. The objective of this study is to assess the influence of the model precipitation production schemes and horizontal resolution on the diurnal cycle of precipitation in the Eta Model. The model was run in hydrostatic mode at 3- and 5-km grid sizes, the vertical resolution was set to 50 layers, and the time steps to 6 and 10 s, respectively. The initial and boundary conditions were taken from ERA-Interim reanalysis. Over the sea, the 0.25-deg sea surface temperature from NOAA was used. The model was set up to run at each resolution over Angra dos Reis, located in the Southeast region of Brazil, for the rainy period between 18 December 2009 and 01 January 2010; the model simulation range was 48 hours. In one set of runs the cumulus parameterization was switched off, in which case the model precipitation was fully simulated by the cloud microphysics scheme, and in the other set the model was run with weak cumulus convection. The results show that as the model horizontal resolution increases from 5 to 3 km, the spatial pattern of the precipitation hardly changes, although the maximum precipitation core increases in magnitude. Daily data from automatic stations were used to evaluate the runs and show that the diurnal cycle of temperature and precipitation was better simulated at 3 km when compared against observations. The model configuration without cumulus convection shows a small contraction of the precipitating area and an increase in the simulated maximum values. The diurnal cycle of precipitation was better simulated with some activity of the cumulus convection scheme. The skill scores for the period and for different forecast ranges are higher at weak and moderate precipitation rates.
Modeling and Simulation: PowerBoosting Productivity with Simulation.
ERIC Educational Resources Information Center
Riley, Suzanne
Minnesota high school students and teachers are learning the technology of simulation and integrating it into business and industrial technology courses. Modeling and simulation is the science of using software to construct a system within an organization and then running simulations of proposed changes to assess results before funds are spent. In…
Simulations of the Neutron Gas in the Inner Crust of Neutron Stars
NASA Astrophysics Data System (ADS)
Vandegriff, Elizabeth; Horowitz, Charles; Caplan, Matthew
2017-09-01
Inside neutron stars, the structures known as `nuclear pasta' are found in the crust. This pasta forms near nuclear density as nucleons arrange in spaghetti- or lasagna-like structures to minimize their energy. We run classical molecular dynamics simulations to visualize the geometry of this pasta and study the distribution of nucleons. In the simulations, we observe that the pasta is embedded in a gas of neutrons, which we call the `sauce'. In this work, we developed two methods for determining the density of neutrons in the gas, one which is accurate at low temperatures and a second which justifies an extrapolation at high temperatures. Running simulations with no Coulomb interactions, we find that the neutron density increases linearly with temperature for every proton fraction we simulated. NSF REU Grant PHY-1460882 at Indiana University.
Sensitivity study of a dynamic thermodynamic sea ice model
NASA Astrophysics Data System (ADS)
Holland, David M.; Mysak, Lawrence A.; Manak, Davinder K.; Oberhuber, Josef M.
1993-02-01
A numerical simulation of the seasonal sea ice cover in the Arctic Ocean and the Greenland, Iceland, and Norwegian seas is presented. The sea ice model is extracted from Oberhuber's (1990) coupled sea ice-mixed layer-isopycnal general circulation model and is written in spherical coordinates. The advantage of such a model over previous sea ice models is that it can be easily coupled to either global atmospheric or ocean general circulation models written in spherical coordinates. In this model, the thermodynamics are a modification of that of Parkinson and Washington (1979), while the dynamics use the full Hibler (1979) viscous-plastic rheology. Monthly thermodynamic and dynamic forcing fields for the atmosphere and ocean are specified. The simulations of the seasonal cycle of ice thickness, compactness, and velocity, for a control set of parameters, compare favorably with the known seasonal characteristics of these fields. A sensitivity study of the control simulation of the seasonal sea ice cover is presented. The sensitivity runs are carried out under three different themes, namely, numerical conditions, parameter values, and physical processes. This last theme refers to experiments in which physical processes are either newly added or completely removed from the model. Approximately 80 sensitivity runs have been performed in which a change from the control run environment has been implemented. Comparisons have been made between the control run and a particular sensitivity run based on time series of the seasonal cycle of the domain-averaged ice thickness, compactness, areal coverage, and kinetic energy. In addition, spatially varying fields of ice thickness, compactness, velocity, and surface temperature for each season are presented for selected experiments. A brief description and discussion of the more interesting experiments are presented. The simulation of the seasonal cycle of Arctic sea ice cover is shown to be robust.
Performance of a supercharged direct-injection stratified-charge rotary combustion engine
NASA Technical Reports Server (NTRS)
Bartrand, Timothy A.; Willis, Edward A.
1990-01-01
A zero-dimensional thermodynamic performance computer model for direct-injection stratified-charge rotary combustion engines was modified and run for a single rotor supercharged engine. Operating conditions for the computer runs were a single boost pressure and a matrix of speeds, loads and engine materials. A representative engine map is presented showing the predicted range of efficient operation. After discussion of the engine map, a number of engine features are analyzed individually. These features are: heat transfer and the influence insulating materials have on engine performance and exhaust energy; intake manifold pressure oscillations and interactions with the combustion chamber; and performance losses and seal friction. Finally, code running times and convergence data are presented.