GneimoSim: A Modular Internal Coordinates Molecular Dynamics Simulation Package
Larsen, Adrien B.; Wagner, Jeffrey R.; Kandel, Saugat; Salomon-Ferrer, Romelia; Vaidehi, Nagarajan; Jain, Abhinandan
2014-01-01
The Generalized Newton-Euler Inverse Mass Operator (GNEIMO) method is an advanced method for internal coordinates molecular dynamics (ICMD). GNEIMO includes several theoretical and algorithmic advancements that address longstanding challenges with ICMD simulations. In this paper we describe the GneimoSim ICMD software package that implements the GNEIMO method. We believe that GneimoSim is the first software package to include advanced features such as the equipartition principle derived for internal coordinates, and a method for including the Fixman potential to eliminate systematic statistical biases introduced by the use of hard constraints. Moreover, by design, GneimoSim is extensible and can be easily interfaced with third party force field packages for ICMD simulations. Currently, GneimoSim includes interfaces to the LAMMPS, OpenMM, and Rosetta force field calculation packages. The availability of a comprehensive Python interface to the underlying C++ classes and their methods provides a powerful and versatile mechanism for users to develop simulation scripts to configure the simulation and control the simulation flow. GneimoSim has been used extensively for studying the dynamics of protein structures, refinement of protein homology models, and for simulating large scale protein conformational changes with enhanced sampling methods. GneimoSim is not limited to proteins and can also be used for the simulation of polymeric materials. PMID:25263538
2D Quantum Simulation of MOSFET Using the Non Equilibrium Green's Function Method
NASA Technical Reports Server (NTRS)
Svizhenko, Alexel; Anantram, M. P.; Govindan, T. R.; Yan, Jerry (Technical Monitor)
2000-01-01
The objectives summarized in this viewgraph presentation include: (1) development of a quantum mechanical simulator for ultra-short channel MOSFET simulation, including theory, physical approximations, and computer code; (2) exploration of physics that is not accessible by semiclassical methods; (3) benchmarking of semiclassical and classical methods; and (4) study of other two-dimensional devices and molecular structures, from discretized Hamiltonians to tight-binding Hamiltonians.
Bio-threat microparticle simulants
Farquar, George Roy; Leif, Roald N
2012-10-23
A bio-threat simulant that includes a carrier and DNA encapsulated in the carrier. Also a method of making a simulant including the steps of providing a carrier and encapsulating DNA in the carrier to produce the bio-threat simulant.
Bio-threat microparticle simulants
Farquar, George Roy; Leif, Roald
2014-09-16
A bio-threat simulant that includes a carrier and DNA encapsulated in the carrier. Also a method of making a simulant including the steps of providing a carrier and encapsulating DNA in the carrier to produce the bio-threat simulant.
Hybrid Particle-Element Simulation of Impact on Composite Orbital Debris Shields
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
2004-01-01
This report describes the development of new numerical methods and new constitutive models for the simulation of hypervelocity impact effects on spacecraft. The research has included parallel implementation of the numerical methods and material models developed under the project. Validation work has included both one dimensional simulations, for comparison with exact solutions, and three dimensional simulations of published hypervelocity impact experiments. The validated formulations have been applied to simulate impact effects in a velocity and kinetic energy regime outside the capabilities of current experimental methods. The research results presented here allow for the expanded use of numerical simulation, as a complement to experimental work, in future design of spacecraft for hypervelocity impact effects.
Method and system for fault accommodation of machines
NASA Technical Reports Server (NTRS)
Goebel, Kai Frank (Inventor); Subbu, Rajesh Venkat (Inventor); Rausch, Randal Thomas (Inventor); Frederick, Dean Kimball (Inventor)
2011-01-01
A method for multi-objective fault accommodation using predictive modeling is disclosed. The method includes using a simulated machine that simulates a faulted actual machine, and using a simulated controller that simulates an actual controller. A multi-objective optimization process is performed, based on specified control settings for the simulated controller and specified operational scenarios for the simulated machine controlled by the simulated controller, to generate a Pareto frontier-based solution space relating performance of the simulated machine to settings of the simulated controller, including adjustment to the operational scenarios to represent a fault condition of the simulated machine. Control settings of the actual controller are adjusted, represented by the simulated controller, for controlling the actual machine, represented by the simulated machine, in response to a fault condition of the actual machine, based on the Pareto frontier-based solution space, to maximize desirable operational conditions and minimize undesirable operational conditions while operating the actual machine in a region of the solution space defined by the Pareto frontier.
Farquar, George Roy; Leif, Roald N; Wheeler, Elizabeth
2015-05-05
A simulant that includes a carrier and DNA encapsulated in the carrier. Also a method of making a simulant including the steps of providing a carrier and encapsulating DNA in the carrier to produce the simulant.
Simulation verification techniques study
NASA Technical Reports Server (NTRS)
Schoonmaker, P. B.; Wenglinski, T. H.
1975-01-01
Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.
Methods for Analysis and Simulation of Ballistic Impact
2017-04-01
ARL-RP-0597, Apr 2017, US Army Research Laboratory: report by John D. Clayton, Weapons and Materials Research Directorate, ARL, covering analytical and numerical methods of ballistics research; similar lengthy references dealing with pertinent aspects include [8, 9].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shu, Dewu; Xie, Xiaorong; Jiang, Qirong
With the steady increase of power electronic devices and nonlinear dynamic loads in large-scale AC/DC systems, the traditional hybrid simulation method, which incorporates these components into a single EMT subsystem, causes great difficulty in network partitioning and significant deterioration in simulation efficiency. To resolve these issues, a novel distributed hybrid simulation method is proposed in this paper. The key to realizing this method is a distinct interfacing technique, which includes: i) a new approach based on the two-level Schur complement to update the interfaces by taking full consideration of the couplings between different EMT subsystems; and ii) a combined interaction protocol to further improve the efficiency while guaranteeing the simulation accuracy. The advantages of the proposed method in terms of both efficiency and accuracy have been verified by using it for the simulation study of an AC/DC hybrid system including a two-terminal VSC-HVDC and nonlinear dynamic loads.
Determination of the transmission coefficients for quantum structures using FDTD method.
Peng, Yangyang; Wang, Xiaoying; Sui, Wenquan
2011-12-01
The purpose of this work is to develop a simple method to incorporate quantum effects into traditional finite-difference time-domain (FDTD) simulators, which could make it possible to co-simulate systems that include quantum structures and traditional components. In this paper, the tunneling transmission coefficient is calculated by solving the time-domain Schrödinger equation with a developed FDTD technique, called the FDTD-S method. To validate the feasibility of the method, a simple resonant tunneling diode (RTD) structure model has been simulated using the proposed method. The good agreement between the numerical and analytical results proves its accuracy. The effectiveness and accuracy of this approach make it a potential method for the analysis and design of hybrid systems that include quantum structures and traditional components.
Simulation Learning: PC-Screen Based (PCSB) versus High Fidelity Simulation (HFS)
2012-08-01
Compares methods for the use of simulation for teaching clinical skills to military and civilian clinicians; high-fidelity simulation is an expensive method. [Figure: C-Collar simulation algorithm, Pathway A, Scenario A - Spinal stabilization: sub-processes.]
High-speed extended-term time-domain simulation for online cascading analysis of power system
NASA Astrophysics Data System (ADS)
Fu, Chuan
A high-speed extended-term (HSET) time domain simulator (TDS), intended to become a part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) ability to simulate both fast and slow dynamics for 1-3 hours in advance, (iii) inclusion of rigorous protection-system modeling, (iv) intelligence for corrective action ID, storage, and fast retrieval, and (v) high-speed execution. Very fast on-line computational capability is the most desired attribute of this simulator. Based on the process of solving the algebraic differential equations describing the dynamics of power systems, HSET-TDS seeks to develop computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable step size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time domain simulation for on-line purposes, this thesis presents principles for designing numerical solvers of differential algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU). We have implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the 13029-bus PJM system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time domain simulation software for supercomputers. The stiffness-decoupling method is able to combine the advantages of implicit methods (A-stability) and explicit methods (less computation). With the new stiffness detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task via scale with the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task via the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events is designed to partition the whole simulation via the time axis through a simulated sequence of cascading events. Of all the strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent and therefore minimum communication time is needed.
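For reference, HH4 is commonly identified with the two-stage Gauss-Legendre implicit Runge-Kutta scheme introduced by Hammer and Hollingsworth; under that assumption (not verified against the thesis), its Butcher tableau and update step are:

\[
\begin{array}{c|cc}
\frac{1}{2}-\frac{\sqrt{3}}{6} & \frac{1}{4} & \frac{1}{4}-\frac{\sqrt{3}}{6} \\
\frac{1}{2}+\frac{\sqrt{3}}{6} & \frac{1}{4}+\frac{\sqrt{3}}{6} & \frac{1}{4} \\
\hline
 & \frac{1}{2} & \frac{1}{2}
\end{array}
\qquad
y_{n+1} = y_n + \frac{h}{2}\,(k_1 + k_2),
\]

where \(k_i = f\big(t_n + c_i h,\; y_n + h\sum_j a_{ij} k_j\big)\). The two implicit stages must be solved simultaneously, which is consistent with the thesis's emphasis on nonlinear and linear solver efficiency; the scheme is A-stable with global error \(O(h^4)\).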
A scalable parallel black oil simulator on distributed memory parallel computers
NASA Astrophysics Data System (ADS)
Wang, Kun; Liu, Hui; Chen, Zhangxin
2015-11-01
This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
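Among the techniques listed above, the inexact Newton method is simple to illustrate in isolation: each outer Newton step solves the linear correction only approximately, here with a truncated GMRES. This is a generic sketch under that definition, not code from the authors' in-house platform.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def inexact_newton(F, J, x0, tol=1e-8, max_outer=50, inner_iters=5):
    """Newton iteration for F(x) = 0 where each linear correction
    J(x) dx = -F(x) is solved only approximately (truncated GMRES)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        # truncated inner Krylov solve: this is what makes the method "inexact"
        dx, _ = gmres(J(x), -r, maxiter=inner_iters)
        x = x + dx
    return x

# usage on a tiny nonlinear system: x0^2 + x1 = 3, x0 + x1^2 = 5 (root at (1, 2))
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
root = inexact_newton(F, J, [1.0, 1.0])
```

In a reservoir-scale code the payoff comes from stopping an expensive preconditioned Krylov solve early while the outer Newton loop is still far from converged.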
NASA Technical Reports Server (NTRS)
Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.
1975-01-01
Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real time acquisition and formatting of data from an all up operational simulator, and methods and criteria for comparison and evaluation of simulation data are included. Vehicle subsystems modules, module integration, special test requirements, and reference data formats are also described.
Improved Density Functional Tight Binding Potentials for Metalloid Aluminum Clusters
2016-06-01
Simulations of the oxidation of Al4Cp*4 show reasonable comparison with a DFT-based Car-Parrinello method, including correct prediction of hydride transfers from Cp* to the metal centers during the ab initio molecular dynamics of the oxidation of Al4Cp*4.
The Development and Comparison of Molecular Dynamics Simulation and Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Chen, Jundong
2018-03-01
Molecular dynamics is an integrated technology that combines physics, mathematics, and chemistry. The molecular dynamics method is a computer simulation method and a powerful tool for studying condensed matter systems. This technique not only yields the trajectories of the atoms but also reveals the microscopic details of atomic motion. By studying the numerical integration algorithms used in molecular dynamics simulation, we can analyze the microstructure and the motion of particles and relate them to macroscopic material properties, and we can also study the relationship between microscopic interactions and macroscopic properties more conveniently. The Monte Carlo simulation, similar to molecular dynamics, is a tool for studying the nature of molecules and particles at the microscopic scale. In this paper, the theoretical background of computer numerical simulation is introduced, and the specific methods of numerical integration are summarized, including the Verlet, leap-frog, and velocity Verlet methods. At the same time, the method and principle of Monte Carlo simulation are introduced. Finally, similarities and differences between Monte Carlo simulation and molecular dynamics simulation are discussed.
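As a concrete illustration of one integrator named above, here is a minimal velocity Verlet loop for a 1D harmonic oscillator; it is a generic sketch with a placeholder force law, not code from the paper.

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Integrate Newton's equations with the velocity Verlet scheme.

    x, v  : initial position and velocity
    force : callable returning the force at a position
    """
    f = force(x)
    traj = [(x, v)]
    for _ in range(n_steps):
        # half-step velocity update, then full-step position update
        v_half = v + 0.5 * dt * f / mass
        x = x + dt * v_half
        # recompute the force at the new position, then finish the velocity step
        f = force(x)
        v = v_half + 0.5 * dt * f / mass
        traj.append((x, v))
    return traj

# usage: harmonic oscillator with spring constant k = 1
k = 1.0
trajectory = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x,
                             mass=1.0, dt=0.01, n_steps=1000)
```

The same two half-kicks around one drift are what make the scheme time-reversible and give it its good long-term energy behavior.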
NASA Astrophysics Data System (ADS)
Shirley, Rachel Elizabeth
Nuclear power plant (NPP) simulators are proliferating in academic research institutions and national laboratories in response to the availability of affordable, digital simulator platforms. Accompanying the new research facilities is a renewed interest in using data collected in NPP simulators for Human Reliability Analysis (HRA) research. An experiment conducted in The Ohio State University (OSU) NPP Simulator Facility develops data collection methods and analytical tools to improve use of simulator data in HRA. In the pilot experiment, student operators respond to design basis accidents in the OSU NPP Simulator Facility. Thirty-three undergraduate and graduate engineering students participated in the research. Following each accident scenario, student operators completed a survey about perceived simulator biases and watched a video of the scenario. During the video, they periodically recorded their perceived strength of significant Performance Shaping Factors (PSFs) such as Stress. This dissertation reviews three aspects of simulator-based research using the data collected in the OSU NPP Simulator Facility: First, a qualitative comparison of student operator performance to computer simulations of expected operator performance generated by the Information Decision Action Crew (IDAC) HRA method. Areas of comparison include procedure steps, timing of operator actions, and PSFs. Second, development of a quantitative model of the simulator bias introduced by the simulator environment. Two types of bias are defined: Environmental Bias and Motivational Bias. This research examines Motivational Bias--that is, the effect of the simulator environment on an operator's motivations, goals, and priorities. A bias causal map is introduced to model motivational bias interactions in the OSU experiment. Data collected in the OSU NPP Simulator Facility are analyzed using Structural Equation Modeling (SEM). Data include crew characteristics, operator surveys, and time to recognize and diagnose the accident in the scenario. These models estimate how the effects of the scenario conditions are mediated by simulator bias, and demonstrate how to quantify the strength of the simulator bias. Third, development of a quantitative model of subjective PSFs based on objective data (plant parameters, alarms, etc.) and PSF values reported by student operators. The objective PSF model is based on the PSF network in the IDAC HRA method. The final model is a mixed effects Bayesian hierarchical linear regression model. The subjective PSF model includes three factors: The Environmental PSF, the simulator Bias, and the Context. The Environmental Bias is mediated by an operator sensitivity coefficient that captures the variation in operator reactions to plant conditions. The data collected in the pilot experiments are not expected to reflect professional NPP operator performance, because the students are still novice operators. However, the models used in this research and the methods developed to analyze them demonstrate how to consider simulator bias in experiment design and how to use simulator data to enhance the technical basis of a complex HRA method. The contributions of the research include a framework for discussing simulator bias, a quantitative method for estimating simulator bias, a method for obtaining operator-reported PSF values, and a quantitative method for incorporating the variability in operator perception into PSF models. 
The research demonstrates applications of Structural Equation Modeling and hierarchical Bayesian linear regression models in HRA. Finally, the research demonstrates the benefits of using student operators as a test platform for HRA research.
Deployment Simulation Methods for Ultra-Lightweight Inflatable Structures
NASA Technical Reports Server (NTRS)
Wang, John T.; Johnson, Arthur R.
2003-01-01
Two dynamic inflation simulation methods are employed for modeling the deployment of folded thin-membrane tubes. The simulations are necessary because ground tests include gravity effects and may poorly represent deployment in space. The two simulation methods are referred to as the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method. They are available in the LS-DYNA nonlinear dynamic finite element code. Both methods are suitable for modeling the interactions between the inflation gas and the thin-membrane tube structures. The CV method only considers the pressure induced by the inflation gas in the simulation, while the ALE method models the actual flow of the inflation gas. Thus, the transient fluid properties at any location within the tube can be predicted by the ALE method. Deployment simulations of three packaged tube models, namely coiled, Z-folded, and telescopically folded configurations, are performed. Results predicted by both methods for the telescopically folded configuration are correlated, and computational efficiency issues are discussed.
NASA Technical Reports Server (NTRS)
Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.
1984-01-01
A generic computer simulation for manipulator systems (ROBSIM) was implemented and the specific technologies necessary to increase the role of automation in various missions were developed. The specific items developed were: (1) Capability for definition of a manipulator system consisting of multiple arms, load objects, and an environment; (2) Capability for kinematic analysis, requirements analysis, and response simulation of manipulator motion; (3) Postprocessing options such as graphic replay of simulated motion and manipulator parameter plotting; (4) Investigation and simulation of various control methods including manual force/torque and active compliance control; (5) Evaluation and implementation of three obstacle avoidance methods; (6) Video simulation and edge detection; and (7) Software simulation validation. This appendix is the user's guide and includes examples of program runs and outputs as well as instructions for program use.
KU-Band rendezvous radar performance computer simulation model
NASA Technical Reports Server (NTRS)
Griffin, J. W.
1980-01-01
The preparation of a real time computer simulation model of the KU band rendezvous radar to be integrated into the shuttle mission simulator (SMS), the shuttle engineering simulator (SES), and the shuttle avionics integration laboratory (SAIL) simulator is described. To meet crew training requirements, a radar tracking performance model and a target modeling method were developed. The parent simulation/radar simulation interface requirements and the method selected to model target scattering properties, including an application of this method to the SPAS spacecraft, are described. The radar search and acquisition mode performance model and the radar track mode signal processor model are examined and analyzed. The angle, angle rate, range, and range rate tracking loops are also discussed.
Boore, David M.
2000-01-01
A simple and powerful method for simulating ground motions is based on the assumption that the amplitude of ground motion at a site can be specified in a deterministic way, with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers, and it is widely used to predict ground motions for regions of the world in which recordings of motion from damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms that can be used to predict ground motions. SMSIM is a set of programs for simulating ground motions based on the stochastic method. This Open-File Report is a revision of an earlier report (Boore, 1996) describing a set of programs for simulating ground motions from earthquakes. The programs are based on modifications I have made to the stochastic method first introduced by Hanks and McGuire (1981). The report contains source codes, written in Fortran, and executables that can be used on a PC. Programs are included both for time-domain and for random vibration simulations. In addition, programs are included to produce Fourier amplitude spectra for the models used in the simulations and to convert shear velocity vs. depth into frequency-dependent amplification. The revision to the previous report is needed because the input and output files have changed significantly, and a number of new programs have been included in the set.
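A toy sketch of the core idea of the stochastic method follows: a deterministic target amplitude spectrum is imposed on a random phase spectrum, with the motion spread over a prescribed duration. The shaping window and the omega-squared-like spectrum below are placeholders, not SMSIM's actual source, path, or site models.

```python
import numpy as np

def stochastic_motion(target_amp, duration, dt, rng=np.random.default_rng()):
    """One realization of motion with a prescribed Fourier amplitude
    spectrum and a random phase spectrum.

    target_amp : callable giving |A(f)| as a function of frequency (Hz)
    duration   : motion duration in seconds (magnitude/distance dependent)
    """
    n = int(duration / dt)
    noise = rng.standard_normal(n)          # white Gaussian noise
    window = np.hanning(n)                  # placeholder shaping window
    spec = np.fft.rfft(noise * window)
    freqs = np.fft.rfftfreq(n, dt)
    # keep the random phase, impose the deterministic amplitude
    shaped = target_amp(freqs) * np.exp(1j * np.angle(spec))
    return np.fft.irfft(shaped, n)

# usage with a crude omega-squared-style spectrum (placeholder corner frequency)
fc = 1.0
accel = stochastic_motion(lambda f: f**2 / (1 + (f / fc)**2),
                          duration=20.0, dt=0.01)
```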
Development of a New 47-Group Library for the CASL Neutronics Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Williams, Mark L; Wiarda, Dorothea
The CASL core simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized light water reactors. The key characteristics of the MPACT code include a subgroup method for resonance self-shielding and a whole-core solver with a 1D/2D synthesis method. The ORNL AMPX/SCALE code packages have been significantly improved to support various intermediate resonance self-shielding approximations such as the subgroup and embedded self-shielding methods. New 47-group AMPX and MPACT libraries based on ENDF/B-VII.0, whose group structure comes from the HELIOS library, have been generated for the CASL core simulator MPACT. The new 47-group MPACT library includes all nuclear data required for static and transient core simulations. This study discusses a detailed procedure to generate the 47-group AMPX and MPACT libraries and benchmark results for the VERA progression problems.
Hybrid statistics-simulations based method for atom-counting from ADF STEM images.
De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra
2017-06-01
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.
Galaxy two-point covariance matrix estimation for next generation surveys
NASA Astrophysics Data System (ADS)
Howlett, Cullan; Percival, Will J.
2017-12-01
We perform a detailed analysis of the covariance matrix of the spherically averaged galaxy power spectrum and present a new, practical method for estimating this within an arbitrary survey without the need for running mock galaxy simulations that cover the full survey volume. The method uses theoretical arguments to modify the covariance matrix measured from a set of small-volume cubic galaxy simulations, which are computationally cheap to produce compared to larger simulations and match the measured small-scale galaxy clustering more accurately than is possible using theoretical modelling. We include prescriptions to analytically account for the window function of the survey, which convolves the measured covariance matrix in a non-trivial way. We also present a new method to include the effects of super-sample covariance and modes outside the small simulation volume which requires no additional simulations and still allows us to scale the covariance matrix. As validation, we compare the covariance matrix estimated using our new method to that from a brute-force calculation using 500 simulations originally created for analysis of the Sloan Digital Sky Survey Main Galaxy Sample. We find excellent agreement on all scales of interest for large-scale structure analysis, including those dominated by the effects of the survey window, and on scales where theoretical models of the clustering normally break down, but the new method produces a covariance matrix with significantly better signal-to-noise ratio. Although only formally correct in real space, we also discuss how our method can be extended to incorporate the effects of redshift space distortions.
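For context, the brute-force reference against which the new method is validated is simply the sample covariance of the binned power spectrum over the mock ensemble; a minimal generic version, not the authors' pipeline:

```python
import numpy as np

def sample_covariance(pk_mocks):
    """Estimate the covariance matrix of a binned power spectrum.

    pk_mocks : array of shape (n_mocks, n_bins), one measured P(k) per mock
    """
    n_mocks = pk_mocks.shape[0]
    diff = pk_mocks - pk_mocks.mean(axis=0)
    # unbiased sample covariance over the mock ensemble
    return diff.T @ diff / (n_mocks - 1)

# usage: 500 mocks, 30 k-bins of fake measurements
cov = sample_covariance(np.random.default_rng(0).normal(size=(500, 30)))
```

The signal-to-noise of this estimate scales with the number of mocks, which is exactly why avoiding full-survey-volume simulations matters.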
Freud: a software suite for high-throughput simulation analysis
NASA Astrophysics Data System (ADS)
Harper, Eric; Spellings, Matthew; Anderson, Joshua; Glotzer, Sharon
Computer simulation is an indispensable tool for the study of a wide variety of systems. As simulations scale to fill petascale and exascale supercomputing clusters, so too does the size of the data produced, as well as the difficulty in analyzing these data. We present Freud, an analysis software suite for efficient analysis of simulation data. Freud makes no assumptions about the system being analyzed, allowing for general analysis methods to be applied to nearly any type of simulation. Freud includes standard analysis methods such as the radial distribution function, as well as new methods including the potential of mean force and torque and local crystal environment analysis. Freud combines a Python interface with fast, parallel C++ analysis routines to run efficiently on laptops, workstations, and supercomputing clusters. Data analysis on clusters reduces data transfer requirements, a prohibitive cost for petascale computing. Used in conjunction with simulation software, Freud allows for smart simulations that adapt to the current state of the system, enabling the study of phenomena such as nucleation and growth, intelligent investigation of phases and phase transitions, and determination of effective pair potentials.
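As background for the analysis methods listed, here is a textbook brute-force radial distribution function in Python; this is the O(N^2) computation that parallel C++ routines like Freud's accelerate, not Freud's actual API.

```python
import numpy as np

def radial_distribution(points, box_length, r_max, n_bins):
    """Brute-force g(r) for points in a cubic periodic box."""
    n = len(points)
    rho = n / box_length**3
    # minimum-image pairwise separations
    delta = points[:, None, :] - points[None, :, :]
    delta -= box_length * np.round(delta / box_length)
    r = np.sqrt((delta**2).sum(axis=-1))[np.triu_indices(n, k=1)]
    counts, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    # normalize pair counts by the ideal-gas expectation
    g = counts / (shell_vol * rho * n / 2.0)
    return g, 0.5 * (edges[1:] + edges[:-1])

# usage: 500 random points in a box of side 10
g, r_mid = radial_distribution(
    np.random.default_rng(1).uniform(0, 10, (500, 3)),
    box_length=10.0, r_max=5.0, n_bins=50)
```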
Using Simulations to Investigate Decision Making in Airline Operations
NASA Technical Reports Server (NTRS)
Bruce, Peter J.; Gray, Judy H.
2003-01-01
This paper examines a range of methods to collect data for the investigation of decision-making in airline Operations Control Centres (OCCs). A study was conducted of 52 controllers in five OCCs of both domestic and international airlines in the Asia-Pacific region. A range of methods was used, including surveys, interviews, observations, simulations, and think-aloud protocol. The paper compares and evaluates the suitability of these techniques for gathering data and provides recommendations on the application of simulations. Keywords: Data Collection, Decision-Making, Research Methods, Simulation, Think-Aloud Protocol.
Computational simulation of progressive fracture in fiber composites
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1986-01-01
Computational methods for simulating and predicting progressive fracture in fiber composite structures are presented. These methods are integrated into a computer code of modular form. The modules include composite mechanics, finite element analysis, and fracture criteria. The code is used to computationally simulate progressive fracture in composite laminates with and without defects. The simulation tracks the fracture progression in terms of modes initiating fracture, damage growth, and imminent global (catastrophic) laminate fracture.
Heinz, Hendrik; Ramezani-Dakhel, Hadi
2016-01-21
Natural and man-made materials often rely on functional interfaces between inorganic and organic compounds. Examples include skeletal tissues and biominerals, drug delivery systems, catalysts, sensors, separation media, energy conversion devices, and polymer nanocomposites. Current laboratory techniques for monitoring and manipulating assembly on the 1 to 100 nm scale are limited, time-consuming, and costly. Computational methods have become increasingly reliable for understanding materials assembly and performance. This review explores the merit of simulations in comparison to experiment at the 1 to 100 nm scale, including connections to smaller length scales of quantum mechanics and larger length scales of coarse-grain models. First, current simulation methods, advances in the understanding of chemical bonding, in the development of force fields, and in the development of chemically realistic models are described. Then, the recognition mechanisms of biomolecules on nanostructured metals, semimetals, oxides, phosphates, carbonates, sulfides, and other inorganic materials are explained, including extensive comparisons between modeling and laboratory measurements. Depending on the substrate, the role of soft epitaxial binding mechanisms, ion pairing, hydrogen bonds, hydrophobic interactions, and conformation effects is described. Applications of the knowledge from simulation to predict binding of ligands and drug molecules to inorganic surfaces, crystal growth and shape development, catalyst performance, as well as electrical properties at interfaces are examined. The quality of estimates from molecular dynamics and Monte Carlo simulations is validated in comparison to measurements, and design rules are described where available. The review further describes applications of simulation methods to polymer composite materials, surface modification of nanofillers, and interfacial interactions in building materials. The complexity of functional multiphase materials creates opportunities to further develop accurate force fields, including reactive force fields, and chemically realistic surface models, to enable materials discovery at a million times lower computational cost compared to quantum mechanical methods. The impact of modeling and simulation could further be increased by the advancement of a uniform simulation platform for organic and inorganic compounds across the periodic table and new simulation methods to evaluate system performance in silico.
Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang
Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating the absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low dose region). Simulations with quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors' study. The single-thread computation time of the deterministic simulation with quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method in routine clinical CT dose estimation will improve its accuracy and speed.
Human swallowing simulation based on videofluorography images using Hamiltonian MPS method
NASA Astrophysics Data System (ADS)
Kikuchi, Takahiro; Michiwaki, Yukihiro; Kamiya, Tetsu; Toyama, Yoshio; Tamai, Tasuku; Koshizuka, Seiichi
2015-09-01
In developed nations, swallowing disorders and aspiration pneumonia have become serious problems. We developed a method to simulate the behavior of the organs involved in swallowing to clarify the mechanisms of swallowing and aspiration. The shape model is based on anatomically realistic geometry, and the motion model utilizes forced displacements based on realistic dynamic images to reflect the mechanisms of human swallowing. The soft tissue organs are modeled as nonlinear elastic material using the Hamiltonian MPS method. This method allows for stable simulation of the complex swallowing movement. A penalty method using metaballs is employed to simulate contact between organ walls and smooth sliding along the walls. We performed four numerical simulations under different analysis conditions to represent four cases of swallowing, including a healthy volunteer and a patient with a swallowing disorder. The simulation results were compared to examine the epiglottic downfolding mechanism, which strongly influences the risk of aspiration.
Web-based emergency response exercise management systems and methods thereof
Goforth, John W.; Mercer, Michael B.; Heath, Zach; Yang, Lynn I.
2014-09-09
According to one embodiment, a method for simulating portions of an emergency response exercise includes generating situational awareness outputs associated with a simulated emergency and sending the situational awareness outputs to a plurality of output devices. Also, the method includes outputting to a user device a plurality of decisions associated with the situational awareness outputs at a decision point, receiving a selection of one of the decisions from the user device, generating new situational awareness outputs based on the selected decision, and repeating the sending, outputting and receiving steps based on the new situational awareness outputs. Other methods, systems, and computer program products are included according to other embodiments of the invention.
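A minimal driver loop matching the claimed sequence (generate situational outputs, present decisions, receive a selection, regenerate) might look as follows; all names and the scenario table are hypothetical illustrations, not from the patent.

```python
def run_exercise(scenario, present, choose):
    """Drive a branching emergency-response exercise.

    scenario : maps a state to (situational outputs, {decision: next state})
    present  : callable that delivers outputs to the output devices
    choose   : callable that shows decisions on a user device and returns one
    """
    state = "start"
    while True:
        outputs, decisions = scenario(state)
        present(outputs)                  # situational awareness outputs
        if not decisions:                 # terminal state: exercise over
            return state
        choice = choose(list(decisions))  # decision point for the trainee
        state = decisions[choice]         # new situation from the decision

# hypothetical scenario: one decision point, then terminal states
def scenario(state):
    table = {
        "start": (["Alarm: chlorine release at plant"],
                  {"evacuate": "evacuated", "shelter": "sheltered"}),
        "evacuated": (["Zone cleared"], {}),
        "sheltered": (["Population sheltered in place"], {}),
    }
    return table[state]

final = run_exercise(scenario, present=print, choose=lambda opts: opts[0])
```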
NASA Astrophysics Data System (ADS)
Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto
2017-10-01
This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalance force is used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
SimBA: simulation algorithm to fit extant-population distributions.
Parida, Laxmi; Haiminen, Niina
2015-03-14
Simulation of populations with specified characteristics such as allele frequencies, linkage disequilibrium, etc., is an integral component of many studies, including in-silico breeding optimization. Since the accuracy and sensitivity of population simulation are critical to the quality of the output of the applications that use it, accurate algorithms are required to provide a strong foundation to the methods in these studies. In this paper we present SimBA (Simulation using Best-fit Algorithm), a non-generative approach based on a combination of stochastic techniques and discrete methods. We optimize a hill climbing algorithm and extend the framework to include multiple subpopulation structures. Additionally, we show that SimBA is very sensitive to the input specifications, i.e., very similar but distinct input characteristics result in distinct outputs with high fidelity to the specified distributions. This property of the simulation is not explicitly modeled or studied by previous methods. We show that SimBA outperforms the existing population simulation methods, both in terms of accuracy as well as time-efficiency. Not only does it construct populations that meet the input specifications more stringently than other published methods, SimBA is also easy to use. It does not require explicit parameter adaptations or calibrations. Also, it can work with input specified as distributions, without an exemplar matrix or population as required by some methods. SimBA is available at http://researcher.ibm.com/project/5669 .
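Since SimBA's name expands to "Simulation using Best-fit Algorithm" and the abstract mentions an optimized hill climbing algorithm, a generic best-fit hill climb over a genotype matrix is sketched below; the representation and fitness function are assumptions for illustration, not SimBA's implementation.

```python
import numpy as np

def hill_climb(pop, target_freq, n_iters=10000, rng=np.random.default_rng(2)):
    """Flip single genotype entries, keeping flips that move the population's
    allele frequencies closer to the target frequencies.

    pop         : 0/1 genotype matrix, shape (n_individuals, n_loci)
    target_freq : desired allele frequency per locus, shape (n_loci,)
    """
    def score(p):
        return np.abs(p.mean(axis=0) - target_freq).sum()

    best = score(pop)
    for _ in range(n_iters):
        i = rng.integers(pop.shape[0])
        j = rng.integers(pop.shape[1])
        pop[i, j] ^= 1            # propose: flip one allele
        new = score(pop)
        if new <= best:
            best = new            # accept the improvement
        else:
            pop[i, j] ^= 1        # reject: undo the flip
    return pop

# usage: fit 200 individuals at 50 loci to a uniform target frequency of 0.3
pop = hill_climb(np.random.default_rng(0).integers(0, 2, (200, 50)),
                 np.full(50, 0.3))
```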
AESS: Accelerated Exact Stochastic Simulation
NASA Astrophysics Data System (ADS)
Jenkins, David D.; Peterson, Gregory D.
2011-12-01
The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results.
Program summary:
Program title: AESS
Catalogue identifier: AEJW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: University of Tennessee copyright agreement
No. of lines in distributed program, including test data, etc.: 10 861
No. of bytes in distributed program, including test data, etc.: 394 631
Distribution format: tar.gz
Programming language: C for processors, CUDA for NVIDIA GPUs
Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators.
Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS
Classification: 3, 16.12
Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME.
Solution method: The Accelerated Exact Stochastic Simulation (AESS) tool provides implementations of a wide variety of popular variations on the Gillespie method. Users can select the specific algorithm considered most appropriate. Comparisons between the methods and with other available implementations indicate that AESS provides the fastest known implementation of Gillespie's method for a variety of test models. Users may wish to execute ensembles of simulations to sweep parameters or to obtain better statistical results, so AESS supports acceleration of ensembles of simulation using parallel processing with MPI, SSE vector units on x86 processors, and/or using NVIDIA GPUs with CUDA.
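For readers unfamiliar with the underlying algorithm, here is a compact single-trajectory implementation of Gillespie's direct method; it is a generic sketch, not AESS's optimized C/CUDA code.

```python
import numpy as np

def gillespie(x0, stoich, rates, t_max, rng=np.random.default_rng()):
    """Gillespie direct-method SSA.

    x0     : initial species counts, shape (n_species,)
    stoich : state-change vectors, shape (n_reactions, n_species)
    rates  : callable state -> propensities, shape (n_reactions,)
    """
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = rates(x)
        a0 = a.sum()
        if a0 <= 0:                        # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)     # exponential time to next reaction
        j = rng.choice(len(a), p=a / a0)   # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# usage: isomerization A -> B with rate constant c = 0.5
times, states = gillespie([100, 0], np.array([[-1, 1]]),
                          lambda x: np.array([0.5 * x[0]]), t_max=20.0)
```

The accelerated variants the package offers mainly restructure the two sampling steps (time increment and reaction selection), which dominate the cost of this loop.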
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.; Shivarama, Ravishankar
2004-01-01
The hybrid particle-finite element method of Fahrenthold and Horban, developed for the simulation of hypervelocity impact problems, has been extended to include new formulations of the particle-element kinematics, additional constitutive models, and an improved numerical implementation. The extended formulation has been validated in three dimensional simulations of published impact experiments. The test cases demonstrate good agreement with experiment, good parallel speedup, and numerical convergence of the simulation results.
Comparison of Analysis, Simulation, and Measurement of Wire-to-Wire Crosstalk. Part 2
NASA Technical Reports Server (NTRS)
Bradley, Arthur T.; Yavoich, Brian James; Hodson, Shane M.; Godley, Franklin
2010-01-01
In this investigation, we compare crosstalk analysis, simulation, and measurement results for electrically short configurations. Methods include hand calculations, PSPICE simulations, Microstripes transient field solver, and empirical measurement. In total, four representative physical configurations are examined, including a single wire over a ground plane, a twisted pair over a ground plane, generator plus receptor wires inside a cylindrical conduit, and a single receptor wire inside a cylindrical conduit. Part 1 addresses the first two cases, and Part 2 addresses the final two. Agreement between the analysis methods and test data is shown to be very good.
Icing simulation: A survey of computer models and experimental facilities
NASA Technical Reports Server (NTRS)
Potapczuk, M. G.; Reinmann, J. J.
1991-01-01
A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include a computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focussed on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.
Simulation of the optical coating deposition
NASA Astrophysics Data System (ADS)
Grigoriev, Fedor; Sulimov, Vladimir; Tikhonravov, Alexander
2018-04-01
A brief review of the mathematical methods of thin-film growth simulation and results of their applications is presented. Both full-atomistic and multi-scale approaches that were used in the studies of thin-film deposition are considered. The results of the structural parameter simulation including density profiles, roughness, porosity, point defect concentration, and others are discussed. The application of the quantum level methods to the simulation of the thin-film electronic and optical properties is considered. Special attention is paid to the simulation of the silicon dioxide thin films.
Real time flight simulation methodology
NASA Technical Reports Server (NTRS)
Parrish, E. A.; Cook, G.; Mcvey, E. S.
1977-01-01
Substitutional methods for digitization, input signal-dependent integrator approximations, and digital autopilot design were developed. The software framework of a simulator design package is described. Included are subroutines for iterative designs of simulation models and a rudimentary graphics package.
A Review of Computational Methods in Materials Science: Examples from Shock-Wave and Polymer Physics
Steinhauser, Martin O.; Hiermaier, Stefan
2009-01-01
This review discusses several computational methods used on different length and time scales for the simulation of material behavior. First, the importance of physical modeling and its relation to computer simulation on multiscales is discussed. Then, computational methods used on different scales are shortly reviewed, before we focus on the molecular dynamics (MD) method. Here we survey in a tutorial-like fashion some key issues including several MD optimization techniques. Thereafter, computational examples for the capabilities of numerical simulations in materials research are discussed. We focus on recent results of shock wave simulations of a solid which are based on two different modeling approaches and we discuss their respective assets and drawbacks with a view to their application on multiscales. Then, the prospects of computer simulations on the molecular length scale using coarse-grained MD methods are covered by means of examples pertaining to complex topological polymer structures including star-polymers, biomacromolecules such as polyelectrolytes and polymers with intrinsic stiffness. This review ends by highlighting new emerging interdisciplinary applications of computational methods in the field of medical engineering where the application of concepts of polymer physics and of shock waves to biological systems holds a lot of promise for improving medical applications such as extracorporeal shock wave lithotripsy or tumor treatment. PMID:20054467
Simulations of 6-DOF Motion with a Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)
2003-01-01
Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographic-derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.
A fast simulation method for radiation maps using interpolation in a virtual environment.
Li, Meng-Kun; Liu, Yong-Kuo; Peng, Min-Jun; Xie, Chun-Li; Yang, Li-Qun
2018-05-10
In nuclear decommissioning, virtual simulation technology is a useful tool to achieve an effective work process by using virtual environments to represent the physical and logical scheme of a real decommissioning project. This technology is cost-saving and time-saving, with the capacity to develop various decommissioning scenarios and reduce the risk of retrofitting. The method utilises a radiation map in a virtual simulation as the basis for the assessment of exposure to a virtual human. In this paper, we propose a fast simulation method using a known radiation source. The method has a unique advantage over point kernel and Monte Carlo methods because it generates the radiation map using interpolation in a virtual environment. The simulation of the radiation map, including the calculation and the visualisation, was realised using UNITY and MATLAB. The feasibility of the proposed method was tested on a hypothetical case and the results obtained are discussed in this paper.
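The interpolation step described above can be prototyped in a few lines; this sketch interpolates sparse dose-rate samples onto a 2D grid with SciPy. The sample values are invented for illustration, and the whole sketch is an assumption of how such a map might be built, not the authors' UNITY/MATLAB implementation.

```python
import numpy as np
from scipy.interpolate import griddata

# sparse dose-rate samples (x, y in meters; dose rate in uSv/h), illustrative values
sample_xy = np.array([[1.0, 1.0], [4.0, 1.5], [2.5, 4.0], [0.5, 3.0]])
sample_dose = np.array([120.0, 15.0, 40.0, 60.0])

# dense grid covering the virtual room
gx, gy = np.meshgrid(np.linspace(0, 5, 101), np.linspace(0, 5, 101))

# linear interpolation inside the convex hull, nearest-neighbor fill outside
dose_map = griddata(sample_xy, sample_dose, (gx, gy), method="linear")
fill = griddata(sample_xy, sample_dose, (gx, gy), method="nearest")
dose_map = np.where(np.isnan(dose_map), fill, dose_map)
```

A virtual human's accumulated exposure can then be estimated by sampling `dose_map` along its path and integrating over time.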
Analysis of Waves in Space Plasma (WISP) near field simulation and experiment
NASA Technical Reports Server (NTRS)
Richie, James E.
1992-01-01
The WISP payload, scheduled for a 1995 space transportation system (shuttle) flight, will include a high-power transmitter on board operating over a wide range of frequencies. The levels of electromagnetic interference/electromagnetic compatibility (EMI/EMC) must be addressed to ensure the safety of the shuttle crew. This report is concerned with the simulation and experimental verification of EMI/EMC for the WISP payload in the shuttle cargo bay. The simulations have been carried out using the method of moments for both thin wires and patches to simulate closed solids. Data obtained from simulation are compared with experimental results. An investigation of the accuracy of the modeling approach is also included. The report begins with a description of the WISP experiment. A description of the model used to simulate the cargo bay follows. The results of the simulation are compared to experimental data on the input impedance of the WISP antenna with the cargo bay present. A discussion of the methods used to verify the accuracy of the model is shown to illustrate appropriate methods for obtaining this information. Finally, suggestions for future work are provided.
A method for simulating a flux-locked DC SQUID
NASA Technical Reports Server (NTRS)
Gutt, G. M.; Kasdin, N. J.; Condron, M. R., II; Muhlfelder, B.; Lockhart, J. M.; Cromar, M. W.
1993-01-01
The authors describe a computationally efficient and accurate method for simulating a dc SQUID's V-Phi (voltage-flux) and I-V characteristics which has proven valuable in evaluating and improving various SQUID readout methods. The simulation of the SQUID is based on fitting of previously acquired data from either a real or a modeled device using the Fourier transform of the V-Phi curve. This method does not predict SQUID behavior, but rather is a way of replicating a known behavior efficiently with portability into various simulation programs such as SPICE. The authors discuss the methods used to simulate the SQUID and the flux-locking control electronics, and present specific examples of this approach. Results include an estimate of the slew rate and linearity of a simple flux-locked loop using a characterized dc SQUID.
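As a rough sketch of the replication idea described above (reconstructing a V-Phi curve from its Fourier representation rather than solving the junction dynamics), the fragment below evaluates an invented Fourier-series fit. The harmonic coefficients and mean voltage are placeholders that would, in practice, be fitted to measured or modeled data, one set per bias current.

```python
import numpy as np

# Hypothetical Fourier-series replica of a dc SQUID V-Phi curve. The
# coefficients below are invented stand-ins for fitted values.
PHI0 = 1.0                               # flux quantum, normalized units
coeffs = {1: 10.0, 2: -2.5, 3: 0.8}      # harmonic number -> amplitude (uV)

def v_of_phi(phi, v_mean=20.0):
    """Reconstruct the SQUID output voltage at applied flux phi (in Phi0)."""
    v = v_mean
    for n, a_n in coeffs.items():
        v += a_n * np.cos(2.0 * np.pi * n * phi / PHI0)
    return v

phi = np.linspace(-1.0, 1.0, 401)
v = v_of_phi(phi)
print(f"V-Phi modulation depth: {v.max() - v.min():.2f} uV")
```

Because the curve is stored as a handful of coefficients, the replica is cheap to re-evaluate inside a flux-locked-loop simulation and straightforward to port into circuit simulators such as SPICE, which is the portability the abstract emphasizes.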
Mixture and method for simulating soiling and weathering of surfaces
Sleiman, Mohamad; Kirchstetter, Thomas; Destaillats, Hugo; Levinson, Ronnen; Berdahl, Paul; Akbari, Hashem
2018-01-02
This disclosure provides systems, methods, and apparatus related to simulated soiling and weathering of materials. In one aspect, a soiling mixture may include an aqueous suspension of various amounts of salt, soot, dust, and humic acid. In another aspect, a method may include weathering a sample of material in a first exposure of the sample to ultraviolet light, water vapor, and elevated temperatures, depositing a soiling mixture on the sample, and weathering the sample in a second exposure of the sample to ultraviolet light, water vapor, and elevated temperatures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortoleva, Peter J.
Illustrative embodiments of systems and methods for the deductive multiscale simulation of macromolecules are disclosed. In one illustrative embodiment, a deductive multiscale simulation method may include (i) constructing a set of order parameters that model one or more structural characteristics of a macromolecule, (ii) simulating an ensemble of atomistic configurations for the macromolecule using instantaneous values of the set of order parameters, (iii) simulating thermal-average forces and diffusivities for the ensemble of atomistic configurations, and (iv) evolving the set of order parameters via Langevin dynamics using the thermal-average forces and diffusivities.
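A minimal sketch of step (iv) follows, assuming an overdamped Langevin update for the order parameters; the harmonic stand-in for the thermal-average forces and all parameter values are invented for illustration.

```python
import numpy as np

# Minimal sketch: overdamped Langevin evolution of order parameters phi
# using thermal-average forces and diffusivities (steps iii-iv above).
rng = np.random.default_rng(1)
kT = 1.0
phi = np.array([1.0, -0.5, 0.25])       # order parameters
D = np.array([0.5, 0.2, 0.1])           # diffusivities (from step iii)
dt = 1e-3

def thermal_average_force(phi):
    """Stand-in for step (iii): harmonic restoring force toward phi = 0."""
    return -2.0 * phi

for _ in range(10000):
    f = thermal_average_force(phi)
    phi += (D / kT) * f * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(3)

print(phi)   # fluctuates around the free-energy minimum
```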
High Order Accurate Algorithms for Shocks, Rapidly Changing Solutions and Multiscale Problems
2014-11-13
for front propagation with obstacles, and homotopy method for steady states. Applications include high order simulations for 3D gaseous detonations and sound generation study via detonation waves (Combustion and Flame, 02 2013, doi: 10.1016/j.combustflame.2012.10.002).
Requirements and Techniques for Developing and Measuring Simulant Materials
NASA Technical Reports Server (NTRS)
Rickman, Doug; Owens, Charles; Howard, Rick
2006-01-01
The 1989 workshop report entitled Workshop on Production and Uses of Simulated Lunar Materials and the Lunar Regolith Simulant Materials: Recommendations for Standardization, Production, and Usage, a NASA Technical Publication, identified and reinforced a need for a set of standards and requirements for the production and usage of lunar simulant materials. As NASA prepares to return to the moon, a set of requirements has been developed for simulant materials, and methods to produce and measure those simulants have been defined. Addressed in the requirements document are: 1) a method for evaluating the quality of any simulant of a regolith, 2) the minimum characteristics for simulants of lunar regolith, and 3) a method to produce the lunar regolith simulants needed for NASA's exploration mission. A method to evaluate new and current simulants has also been rigorously defined through the mathematics of Figures of Merit (FoM), a concept new to simulant development. A single FoM is conceptually an algorithm defining a single characteristic of a simulant and provides a clear comparison of that characteristic for both the simulant and a reference material. Included as an intrinsic part of the algorithm is a minimum acceptable performance for the characteristic of interest. The algorithms for the FoM for Standard Lunar Regolith Simulants are also explicitly keyed to a recommended method to make lunar simulants.
Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo
2015-07-01
Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
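The simulator itself is not described at code level in the abstract; the toy loop below only illustrates the scheduling idea of mixing event-driven updates (a spike queue for a low-activity population) with fixed-step time-driven updates (for a high-activity population). All names and values are hypothetical.

```python
import heapq

# Toy illustration of hybrid event-and-time-driven scheduling. Neuron
# models are omitted; ids, times, and step size are invented.
DT, T_END = 0.1, 5.0                            # step and end time (ms)
event_queue = [(1.2, "low_A"), (3.7, "low_B")]  # (spike time, neuron id)
heapq.heapify(event_queue)

t = 0.0
while t < T_END:
    t_next = t + DT
    # time-driven part: a real simulator would integrate the membrane
    # equations of every high-activity neuron here (possibly on a GPU)
    # event-driven part: process only the spikes inside this step
    while event_queue and event_queue[0][0] <= t_next:
        t_spike, nid = heapq.heappop(event_queue)
        print(f"event-driven spike from {nid} at t = {t_spike:.1f} ms")
    t = t_next
```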
Experimental analysis of computer system dependability
NASA Technical Reports Server (NTRS)
Iyer, Ravishankar K.; Tang, Dong
1993-01-01
This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
NASA Technical Reports Server (NTRS)
Heinrichs, J. A.; Fee, J. J.
1972-01-01
Space station and solar array data are presented, along with the analyses performed in support of the integrated dynamic analysis study. The analysis methods and the formulated digital simulation were developed. Control systems for space station attitude control and solar array orientation control include generic-type control systems. These systems have been digitally coded and included in the simulation.
FRAGMENTATION AND EVOLUTION OF MOLECULAR CLOUDS. II. THE EFFECT OF DUST HEATING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urban, Andrea; Evans, Neal J.; Martel, Hugo
2010-02-20
We investigate the effect of heating by luminosity sources in a simulation of clustered star formation. Our heating method involves a simplified continuum radiative transfer method that calculates the dust temperature. The gas temperature is set by the dust temperature. We present the results of four simulations; two simulations assume an isothermal equation of state and the two other simulations include dust heating. We investigate two mass regimes, i.e., 84 M_sun and 671 M_sun, using these two different energetics algorithms. The mass functions for the isothermal simulations and simulations that include dust heating are drastically different. In the isothermal simulation, we do not form any objects with masses above 1 M_sun. However, the simulation with dust heating, while missing some of the low-mass objects, forms high-mass objects (~20 M_sun) which have a distribution similar to the Salpeter initial mass function. The envelope density profiles around the stars formed in our simulation match observed values around isolated, low-mass star-forming cores. We find the accretion rates to be highly variable and, on average, increasing with final stellar mass. By including radiative feedback from stars in a cluster-scale simulation, we have determined that it is a very important effect which drastically affects the mass function and yields important insights into the formation of massive stars.
ERIC Educational Resources Information Center
Barbian, Jeff
2001-01-01
Explains how low-tech experiential methods thrive in companies interested in fostering the human touch. Examples include NASA's paper airplane simulation, total immersion simulation, and fantasy multisensory environments. (JOW)
A Review of High-Order and Optimized Finite-Difference Methods for Simulating Linear Wave Phenomena
NASA Technical Reports Server (NTRS)
Zingg, David W.
1996-01-01
This paper presents a review of high-order and optimized finite-difference methods for numerically simulating the propagation and scattering of linear waves, such as electromagnetic, acoustic, or elastic waves. The spatial operators reviewed include compact schemes, non-compact schemes, schemes on staggered grids, and schemes which are optimized to produce specific characteristics. The time-marching methods discussed include Runge-Kutta methods, Adams-Bashforth methods, and the leapfrog method. In addition, the following fourth-order fully-discrete finite-difference methods are considered: a one-step implicit scheme with a three-point spatial stencil, a one-step explicit scheme with a five-point spatial stencil, and a two-step explicit scheme with a five-point spatial stencil. For each method studied, the number of grid points per wavelength required for accurate simulation of wave propagation over large distances is presented. Recommendations are made with respect to the suitability of the methods for specific problems and practical aspects of their use, such as appropriate Courant numbers and grid densities. Avenues for future research are suggested.
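As a concrete instance of the scheme families reviewed above, here is a minimal second-order finite-difference simulation of the 1D wave equation with leapfrog time marching; the Courant number, grid density, and initial pulse are illustrative choices, not the review's recommendations.

```python
import numpy as np

# Second-order central differences in space, leapfrog in time, for
# u_tt = c^2 u_xx on a periodic domain. All values are illustrative.
c, L, nx = 1.0, 1.0, 200
dx = L / nx
courant = 0.5                            # CFL number c*dt/dx
dt = courant * dx / c
x = np.arange(nx) * dx

u_prev = np.exp(-200.0 * (x - 0.5)**2)   # initial Gaussian pulse
u = u_prev.copy()                        # zero initial velocity start

for _ in range(400):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    u_next = 2.0 * u - u_prev + (c * dt)**2 * lap
    u_prev, u = u, u_next

print(f"max|u| after propagation: {np.abs(u).max():.3f}")
```

The review's central question, grid points per wavelength, can be probed directly with such a script by coarsening nx and measuring the phase error of the propagated pulse.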
simulation methods for materials physics and chemistry, with particular expertise in post-DFT, high-accuracy methods such as the GW approximation for electronic structure and random phase approximation (RPA) total energies; state of the art in computational methods, including efficient methods for including the effects of substrates
Veijola, Timo; Råback, Peter
2007-01-01
We present a straightforward method to solve gas damping problems for perforated structures in two dimensions (2D) utilising a Perforation Profile Reynolds (PPR) solver. The PPR equation is an extended Reynolds equation that includes additional terms modelling the leakage flow through the perforations, and variable diffusivity and compressibility profiles. The solution method consists of two phases: 1) determination of the specific admittance profile and relative diffusivity (and relative compressibility) profiles due to the perforation, and 2) solution of the PPR equation with a FEM solver in 2D. Rarefied gas corrections in the slip-flow region are also included. Analytic profiles for circular and square holes with slip conditions are presented in the paper. To verify the method, square perforated dampers with 16–64 holes were simulated with a three-dimensional (3D) Navier-Stokes solver, a homogenised extended Reynolds solver, and a 2D PPR solver. Cases for both translational (in normal to the surfaces) and torsional motion were simulated. The presented method extends the region of accurate simulation of perforated structures to cases where the homogenisation method is inaccurate and the full 3D Navier-Stokes simulation is too time-consuming.
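The paper solves the PPR equation with a FEM solver; purely to illustrate the structure of a Reynolds-type equation extended with a leakage term, the sketch below solves laplacian(p) - Y*p = q by finite differences and Jacobi iteration. The admittance Y, forcing q, boundary conditions, and units are toy assumptions, not the paper's perforation profiles.

```python
import numpy as np

# Finite-difference sketch of a Reynolds-type equation with a leakage
# (perforation admittance) term on the unit square, p = 0 at the edges.
n = 64
h = 1.0 / (n - 1)
Y = 50.0                     # specific admittance of the perforation (toy)
q = -1.0                     # forcing from plate motion (toy)
p = np.zeros((n, n))

for it in range(5000):       # Jacobi iteration
    p_new = p.copy()
    nb = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    p_new[1:-1, 1:-1] = (nb - h**2 * q) / (4.0 + h**2 * Y)
    if np.max(np.abs(p_new - p)) < 1e-10:
        break
    p = p_new

print(f"peak pressure: {p.max():.4e} after {it} iterations")
```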
NASA Technical Reports Server (NTRS)
Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.
1984-01-01
A generic computer simulation for manipulator systems (ROBSIM) was implemented, and the specific technologies necessary to increase the role of automation in various missions were developed. The specific items developed are: (1) capability for definition of a manipulator system consisting of multiple arms, load objects, and an environment; (2) capability for kinematic analysis, requirements analysis, and response simulation of manipulator motion; (3) postprocessing options such as graphic replay of simulated motion and manipulator parameter plotting; (4) investigation and simulation of various control methods including manual force/torque control and active compliance control; (5) evaluation and implementation of three obstacle avoidance methods; (6) video simulation and edge detection; and (7) software simulation validation.
NASA Astrophysics Data System (ADS)
Simon-Liedtke, Joschua T.; Farup, Ivar; Laeng, Bruno
2015-01-01
Color deficient people might be confronted with minor difficulties when navigating through daily life, for example when reading websites or media, navigating with maps, retrieving information from public transport schedules, and others. Color deficiency simulation and daltonization methods have been proposed to better understand the problems of color deficient individuals and to improve color displays for their use. However, it remains unclear whether these "color prosthetic" methods really work and how well they improve the performance of color deficient individuals. We introduce here two methods to evaluate color deficiency simulation and daltonization methods based on behavioral experiments that are widely used in the field of psychology. Firstly, we propose a Sample-to-Match Simulation Evaluation Method (SaMSEM); secondly, we propose a Visual Search Daltonization Evaluation Method (ViSDEM). Both methods can be used to validate and allow the generalization of the simulation and daltonization methods related to color deficiency. We showed that both the response times (RT) and the accuracy of SaMSEM can be used as indicators of the success of color deficiency simulation methods and that performance in the ViSDEM can be used as an indicator of the efficacy of color deficiency daltonization methods. In future work, we will include comparison and analysis of different color deficiency simulation and daltonization methods with the help of SaMSEM and ViSDEM.
Franc, Jeffrey Michael; Ingrassia, Pier Luigi; Verde, Manuela; Colombo, Davide; Della Corte, Francesco
2015-02-01
Surge capacity, or the ability to manage an extraordinary volume of patients, is fundamental for hospital management of mass-casualty incidents. However, quantification of surge capacity is difficult and no universal standard for its measurement has emerged, nor has a standardized statistical method been advocated. As mass-casualty incidents are rare, simulation may represent a viable alternative to measure surge capacity. Hypothesis/Problem: The objective of the current study was to develop a statistical method for the quantification of surge capacity using a combination of computer simulation and simple process-control statistical tools. Length-of-stay (LOS) and patient volume (PV) were used as metrics. The use of this method was then demonstrated on a subsequent computer simulation of an emergency department (ED) response to a mass-casualty incident. In the derivation phase, 357 participants in five countries performed 62 computer simulations of an ED response to a mass-casualty incident. Benchmarks for ED response were derived from these simulations, including LOS and PV metrics for triage, bed assignment, physician assessment, and disposition. In the application phase, 13 students of the European Master in Disaster Medicine (EMDM) program completed the same simulation scenario, and the results were compared to the standards obtained in the derivation phase. Patient-volume metrics included number of patients to be triaged, assigned to rooms, assessed by a physician, and disposed. Length-of-stay metrics included median time to triage, room assignment, physician assessment, and disposition. Simple graphical methods were used to compare the application phase group to the derived benchmarks using process-control statistical tools. The group in the application phase failed to meet the indicated standard for LOS from admission to disposition decision. This study demonstrates how simulation software can be used to derive values for objective benchmarks of ED surge capacity using PV and LOS metrics. These objective metrics can then be applied to other simulation groups using simple graphical process-control tools to provide a numeric measure of surge capacity. Repeated use in simulations of actual EDs may represent a potential means of objectively quantifying disaster management surge capacity. It is hoped that the described statistical method, which is simple and reusable, will be useful for investigators in this field to apply to their own research.
A multiscale quantum mechanics/electromagnetics method for device simulations.
Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua
2015-04-07
Multiscale modeling has become a popular tool for research in different areas, including materials science, microelectronics, biology, and chemistry. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.
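A conceptual sketch of the self-consistent coupling loop follows; the scalar stand-ins below merely mimic the exchange of boundary potentials and interface currents between the two solvers, and are not the actual Maxwell/drift-diffusion or quantum transport equations.

```python
# Toy self-consistent QM/EM iteration: the EM region supplies a boundary
# potential V to the QM region, which returns an interface current I used
# to update the EM solution, until both stop changing.
def em_solve(I, V_bias=1.0, R_series=2.0):
    """EM/drift-diffusion stand-in: boundary potential after a series drop."""
    return V_bias - R_series * I

def qm_solve(V, G=0.3):
    """QM transport stand-in: interface current for boundary potential V."""
    return G * V

V, I = 1.0, 0.0
for it in range(100):
    I_new = qm_solve(V)
    V_new = em_solve(I_new)
    if abs(I_new - I) < 1e-12 and abs(V_new - V) < 1e-12:
        break
    I, V = I_new, V_new

print(f"converged after {it} iterations: V = {V:.4f}, I = {I:.4f}")
```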
Coincidental match of numerical simulation and physics
NASA Astrophysics Data System (ADS)
Pierre, B.; Gudmundsson, J. S.
2010-08-01
Consequences of rapid pressure transients in pipelines range from increased fatigue to leakages and complete ruptures of the pipeline. Therefore, accurate predictions of rapid pressure transients in pipelines using numerical simulations are critical. State-of-the-art modelling of pressure transients in general, and water hammer in particular, includes unsteady friction in addition to the steady frictional pressure drop, and numerical simulations rely on the method of characteristics. Comparison of rapid pressure transient calculations by the method of characteristics and a selected high-resolution finite volume method highlights issues related to modelling of pressure waves and illustrates that matches between numerical simulations and physics are purely coincidental.
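For readers unfamiliar with the method of characteristics referenced above, here is a minimal single-pipe water hammer sketch with steady friction only (no unsteady friction model); the geometry, boundary conditions, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Method-of-characteristics sketch: reservoir upstream, instantaneous
# valve closure downstream, steady Darcy-Weisbach friction only.
a, g, D, f, L, n = 1000.0, 9.81, 0.1, 0.02, 100.0, 51
A = np.pi * D**2 / 4.0
dx = L / (n - 1)
dt = dx / a                                  # MOC Courant condition
B = a / (g * A)
R = f * dx / (2.0 * g * D * A**2)

H = np.full(n, 50.0)                         # head (m)
Q = np.full(n, 0.01)                         # flow (m^3/s)

for step in range(200):
    Hn, Qn = H.copy(), Q.copy()
    for i in range(1, n - 1):
        cp = H[i-1] + B*Q[i-1] - R*Q[i-1]*abs(Q[i-1])   # C+ characteristic
        cm = H[i+1] - B*Q[i+1] + R*Q[i+1]*abs(Q[i+1])   # C- characteristic
        Hn[i] = 0.5 * (cp + cm)
        Qn[i] = (cp - cm) / (2.0 * B)
    Hn[0] = 50.0                                        # reservoir head
    Qn[0] = (Hn[0] - (H[1] - B*Q[1] + R*Q[1]*abs(Q[1]))) / B
    Qn[-1] = 0.0                                        # closed valve
    Hn[-1] = H[-2] + B*Q[-2] - R*Q[-2]*abs(Q[-2])
    H, Q = Hn, Qn

print(f"head at the valve after 200 steps: {H[-1]:.1f} m")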
NASA Technical Reports Server (NTRS)
Radespiel, Rolf; Hemsch, Michael J.
2007-01-01
The complexity of modern military systems, as well as the cost and difficulty associated with experimentally verifying system and subsystem design, makes the use of high-fidelity, physics-based simulation a future alternative for design and development. The predictive ability of such simulations, such as computational fluid dynamics (CFD) and computational structural mechanics (CSM), has matured significantly. However, for numerical simulations to be used with confidence in design and development, quantitative measures of uncertainty must be available. The AVT 147 Symposium has been established to compile state-of-the-art methods of assessing computational uncertainty, to identify future research and development needs associated with these methods, and to present examples of how these needs are being addressed and how the methods are being applied. Papers were solicited that address uncertainty estimation associated with high-fidelity, physics-based simulations. The solicitation included papers that identify sources of error and uncertainty in numerical simulation from either the industry perspective or from the disciplinary or cross-disciplinary research perspective. Examples of the industry perspective were to include how computational uncertainty methods are used to reduce system risk in various stages of design or development.
Deep data fusion method for missile-borne inertial/celestial system
NASA Astrophysics Data System (ADS)
Zhang, Chunxi; Chen, Xiaofei; Lu, Jiazhen; Zhang, Hao
2018-05-01
Strap-down inertial-celestial integrated navigation systems have the advantages of autonomy and high precision and are very useful for ballistic missiles. The star sensor installation error and inertial measurement error have a great influence on system performance. Based on deep data fusion, this paper establishes measurement equations that include the star sensor installation error and proposes a deep fusion filter method. Simulations including misalignment error, star sensor installation error, and IMU error are analyzed. Simulation results indicate that the deep fusion method can estimate the star sensor installation error and IMU error. Meanwhile, the method can restrain the misalignment errors caused by instrument errors.
Prytkova, Vera; Heyden, Matthias; Khago, Domarin; Freites, J Alfredo; Butts, Carter T; Martin, Rachel W; Tobias, Douglas J
2016-08-25
We present a novel multi-conformation Monte Carlo simulation method that enables the modeling of protein-protein interactions and aggregation in crowded protein solutions. This approach is relevant to a molecular-scale description of realistic biological environments, including the cytoplasm and the extracellular matrix, which are characterized by high concentrations of biomolecular solutes (e.g., 300-400 mg/mL for proteins and nucleic acids in the cytoplasm of Escherichia coli). Simulation of such environments necessitates the inclusion of a large number of protein molecules. Therefore, computationally inexpensive methods, such as rigid-body Brownian dynamics (BD) or Monte Carlo simulations, can be particularly useful. However, as we demonstrate herein, the rigid-body representation typically employed in simulations of many-protein systems gives rise to certain artifacts in protein-protein interactions. Our approach allows us to incorporate molecular flexibility in Monte Carlo simulations at low computational cost, thereby eliminating ambiguities arising from structure selection in rigid-body simulations. We benchmark and validate the methodology using simulations of hen egg white lysozyme in solution, a well-studied system for which extensive experimental data, including osmotic second virial coefficients, small-angle scattering structure factors, and multiple structures determined by X-ray and neutron crystallography and solution NMR, as well as rigid-body BD simulation results, are available for comparison.
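A toy sketch of the multi-conformation move set idea: alongside rigid-body translations, a Monte Carlo move may swap a molecule among precomputed conformations, with a standard Metropolis test. The energy function and the (trivial) conformation dependence below are placeholders, not the paper's force field.

```python
import numpy as np

# Multi-conformation Monte Carlo sketch: translations plus conformer swaps.
rng = np.random.default_rng(2)
kT = 1.0
n_mol, n_conf = 50, 5
pos = rng.uniform(0.0, 30.0, size=(n_mol, 3))
conf = rng.integers(0, n_conf, size=n_mol)

def energy(i, pos, conf):
    """Placeholder: LJ-like pair energy plus a tiny conformation term."""
    d = np.linalg.norm(pos - pos[i], axis=1)
    d = d[d > 1e-9]                              # drop self-distance
    return np.sum(4.0 * (d**-12 - d**-6)) + 0.1 * conf[i]

for step in range(20000):
    i = rng.integers(n_mol)
    e_old = energy(i, pos, conf)
    old_pos, old_conf = pos[i].copy(), conf[i]
    if rng.random() < 0.5:
        pos[i] += rng.normal(0.0, 0.3, size=3)   # rigid-body translation
    else:
        conf[i] = rng.integers(n_conf)           # conformation swap
    dE = energy(i, pos, conf) - e_old
    if dE > 0.0 and rng.random() >= np.exp(-dE / kT):
        pos[i], conf[i] = old_pos, old_conf      # Metropolis rejection

print(np.bincount(conf, minlength=n_conf))       # conformer populations
```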
Wendell, David C.; Samyn, Margaret M.; Cava, Joseph R.; Ellwein, Laura M.; Krolikowski, Mary M.; Gandy, Kimberly L.; Pelech, Andrew N.; Shadden, Shawn C.; LaDisa, John F.
2012-01-01
Computational fluid dynamics (CFD) simulations quantifying thoracic aortic flow patterns have not included disturbances from the aortic valve (AoV). 80% of patients with aortic coarctation (CoA) have a bicuspid aortic valve (BAV) which may cause adverse flow patterns contributing to morbidity. Our objectives were to develop a method to account for the AoV in CFD simulations, and quantify its impact on local hemodynamics. The method developed facilitates segmentation of the AoV, spatiotemporal interpolation of segments, and anatomic positioning of segments at the CFD model inlet. The AoV was included in CFD model examples of a normal (tricuspid AoV) and a post-surgical CoA patient (BAV). Velocity, turbulent kinetic energy (TKE), time-averaged wall shear stress (TAWSS), and oscillatory shear index (OSI) results were compared to equivalent simulations using a plug inlet profile. The plug inlet greatly underestimated TKE for both examples. TAWSS differences extended throughout the thoracic aorta for the CoA BAV, but were limited to the arch for the normal example. OSI differences existed mainly in the ascending aorta for both cases. The impact of AoV can now be included with CFD simulations to identify regions of deleterious hemodynamics thereby advancing simulations of the thoracic aorta one step closer to reality. PMID:22917990
Simulated Data for High Temperature Composite Design
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2006-01-01
The paper describes an effective formal method that can be used to simulate design properties for composites, inclusive of all the effects that influence those properties. This simulation method integrates computer codes that include composite micromechanics, composite macromechanics, laminate theory, structural analysis, and a multi-factor interaction model. Demonstration of the method includes sample examples of static, thermal, and fracture reliability for a unidirectional metal matrix composite, as well as rupture strength and fatigue strength for a high temperature superalloy. Typical results obtained for a unidirectional composite show that the thermal properties are more sensitive to internal local damage, the longitudinal properties degrade slowly with temperature, and the transverse and shear properties degrade rapidly with temperature, as do rupture strength and fatigue strength for superalloys.
10 CFR 430.24 - Units to be tested.
Code of Federal Regulations, 2010 CFR
2010-01-01
... the method includes an ARM/simulation adjustment factor(s), determine the value(s) of the factors(s... process. (v) If request for approval is for an updated ARM, manufacturers must identify modifications made to the ARM since the last submittal, including any ARM/simulation adjustment factor(s) added since...
Networking Labs in the Online Environment: Indicators for Success
ERIC Educational Resources Information Center
Lahoud, Hilmi A.; Krichen, Jack P.
2010-01-01
Several techniques have been used to provide hands-on educational experiences to online learners, including remote labs, simulation software, and virtual labs, which offer a more structured environment, including simulations and scheduled asynchronous access to physical resources. This exploratory study investigated how these methods can be used…
Multiple shooting shadowing for sensitivity analysis of chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Blonigan, Patrick J.; Wang, Qiqi
2018-02-01
Sensitivity analysis methods are important tools for research and design with simulations. Many important simulations exhibit chaotic dynamics, including scale-resolving turbulent fluid flow simulations. Unfortunately, conventional sensitivity analysis methods are unable to compute useful gradient information for long-time-averaged quantities in chaotic dynamical systems. Sensitivity analysis with least squares shadowing (LSS) can compute useful gradient information for a number of chaotic systems, including simulations of chaotic vortex shedding and homogeneous isotropic turbulence. However, this gradient information comes at a very high computational cost. This paper presents multiple shooting shadowing (MSS), a more computationally efficient shadowing approach than the original LSS approach. Through an analysis of the convergence rate of MSS, it is shown that MSS can have lower memory usage and run time than LSS.
Intercomparison of 3D pore-scale flow and solute transport simulation methods
Mehmani, Yashar; Schoenherr, Martin; Pasquali, Andrea; ...
2015-09-28
Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This paper provides support for confidence in a variety of pore-scale modeling methods and motivates further development and application of pore-scale simulation methods.
Intercomparison of 3D pore-scale flow and solute transport simulation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.
2016-09-01
Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support for confidence in a variety of pore-scale modeling methods and motivates further development and application of pore-scale simulation methods.
Communication: Multiple atomistic force fields in a single enhanced sampling simulation
NASA Astrophysics Data System (ADS)
Hoang Viet, Man; Derreumaux, Philippe; Nguyen, Phuong H.
2015-07-01
The main concerns of biomolecular dynamics simulations are the convergence of the conformational sampling and the dependence of the results on the force fields. While the first issue can be addressed by employing enhanced sampling techniques such as simulated tempering or replica exchange molecular dynamics, repeating these simulations with different force fields is very time consuming. Here, we propose an automatic method that includes different force fields in a single advanced sampling simulation. Conformational sampling using three all-atom force fields is enhanced by simulated tempering; by formulating the weight parameters of the simulated tempering method in terms of the energy fluctuations, the system is able to perform a random walk in both temperature and force-field spaces. The method is first demonstrated on a 1D system and then validated by the folding of the 10-residue chignolin peptide in explicit water.
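The following sketch illustrates the label-swap mechanics under stated assumptions: coordinates are kept fixed while the force-field label changes via a Metropolis test on beta*(E_new - E_old) - (w_new - w_old). The toy potentials and fixed weights stand in for the paper's fluctuation-derived weight parameters.

```python
import numpy as np

# Random-walk moves in "force field space": ordinary configuration moves
# under the current force field, plus label moves that switch force field
# at fixed coordinates. Energies and weights are toy stand-ins.
rng = np.random.default_rng(3)
beta = 1.0
weights = [0.0, 0.8, 1.6]                 # w_m, one per force field

def energy(m, x):
    """Toy per-force-field potential with shifted minima."""
    return (x - [0.0, 0.5, 1.0][m])**2

x, m = 0.0, 0
visits = [0, 0, 0]
for step in range(60000):
    # configuration move under the current force field
    x_new = x + rng.normal(0.0, 0.3)
    dE = energy(m, x_new) - energy(m, x)
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        x = x_new
    # label move: attempt to switch force field at fixed coordinates
    m_new = rng.integers(0, 3)
    dA = beta * (energy(m_new, x) - energy(m, x)) - (weights[m_new] - weights[m])
    if dA <= 0.0 or rng.random() < np.exp(-dA):
        m = m_new
    visits[m] += 1

print(visits)   # roughly uniform occupation if the weights are well tuned
```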
A regularized vortex-particle mesh method for large eddy simulation
NASA Astrophysics Data System (ADS)
Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.
2017-11-01
We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for large eddy simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pruess, K.; Oldenburg, C.; Moridis, G.
1997-12-31
This paper summarizes recent advances in methods for simulating water and tracer injection, and presents illustrative applications to liquid- and vapor-dominated geothermal reservoirs. High-resolution simulations of water injection into heterogeneous, vertical fractures in superheated vapor zones were performed. Injected water was found to move in dendritic patterns, and to experience stronger lateral flow effects than predicted from homogeneous medium models. Higher-order differencing methods were applied to modeling water and tracer injection into liquid-dominated systems. Conventional upstream weighting techniques were shown to be adequate for predicting the migration of thermal fronts, while higher-order methods give far better accuracy for tracer transport. A new fluid property module for the TOUGH2 simulator is described which allows a more accurate description of geofluids, and includes mineral dissolution and precipitation effects with associated porosity and permeability change. Comparisons between numerical simulation predictions and data for laboratory and field injection experiments are summarized. Enhanced simulation capabilities include a new linear solver package for TOUGH2, and inverse modeling techniques for automatic history matching and optimization.
Dynamics Modeling and Simulation of Large Transport Airplanes in Upset Conditions
NASA Technical Reports Server (NTRS)
Foster, John V.; Cunningham, Kevin; Fremaux, Charles M.; Shah, Gautam H.; Stewart, Eric C.; Rivers, Robert A.; Wilborn, James E.; Gato, William
2005-01-01
As part of NASA's Aviation Safety and Security Program, research has been in progress to develop aerodynamic modeling methods for simulations that accurately predict the flight dynamics characteristics of large transport airplanes in upset conditions. The motivation for this research stems from the recognition that simulation is a vital tool for addressing loss-of-control accidents, including applications to pilot training, accident reconstruction, and advanced control system analysis. The ultimate goal of this effort is to contribute to the reduction of the fatal accident rate due to loss-of-control. Research activities have involved accident analyses, wind tunnel testing, and piloted simulation. Results have shown that significant improvements in simulation fidelity for upset conditions, compared to current training simulations, can be achieved using state-of-the-art wind tunnel testing and aerodynamic modeling methods. This paper provides a summary of research completed to date and includes discussion on key technical results, lessons learned, and future research needs.
SUPG Finite Element Simulations of Compressible Flows
NASA Technical Reports Server (NTRS)
Kirk, Benjamin S.
2006-01-01
Streamline-Upwind Petrov-Galerkin (SUPG) finite element simulation of compressible flows is presented. The topics include: 1) Introduction; 2) SUPG Galerkin Finite Element Methods; 3) Applications; and 4) Bibliography.
Comparison of AGE and Spectral Methods for the Simulation of Far-Wakes
NASA Technical Reports Server (NTRS)
Bisset, D. K.; Rogers, M. M.; Kega, Dennis (Technical Monitor)
1999-01-01
Turbulent flow simulation methods based on finite differences are attractive for their simplicity, flexibility and efficiency, but not always for accuracy or stability. This report demonstrates that a good compromise is possible with the Advected Grid Explicit (AGE) method. AGE has proven to be both efficient and accurate for simulating turbulent free-shear flows, including planar mixing layers and planar jets. Its efficiency results from its localized fully explicit finite difference formulation (Bisset 1998a,b) that is very straightforward to compute, outweighing the need for a fairly small timestep. Also, most of the successful simulations were slightly under-resolved, and therefore they were, in effect, large-eddy simulations (LES) without a sub-grid-scale (SGS) model, rather than direct numerical simulations (DNS). The principle is that the role of the smallest scales of turbulent motion (when the Reynolds number is not too low) is to dissipate turbulent energy, and therefore they do not have to be simulated when the numerical method is inherently dissipative at its resolution limits. Such simulations are termed 'auto-LES' (LES with automatic SGS modeling) in this report.
NASA Astrophysics Data System (ADS)
Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.
2013-03-01
The finite-difference time-domain (FDTD) method allows electromagnetic field distribution analysis as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and longer processing times. In this work, we propose a specific implementation of the FDTD method, including several add-ons, for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence, and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement of using the new NVIDIA Fermi GPU architecture versus a highly tuned multi-core CPU as a function of the simulation size. In particular, the optimized CPU implementation takes advantage of the arithmetic and data-transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code, and also of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both the FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wide range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
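A minimal 1D Yee-scheme update loop is shown below to make the FDTD field-update structure concrete; the paper's 2D grating geometry, absorbing boundaries, and SSE/GPU optimizations are not reproduced, and all values are illustrative (the fixed end points act as reflecting walls here).

```python
import numpy as np

# Minimal 1D FDTD (Yee) sketch in vacuum with a soft Gaussian source.
nx, nt = 400, 1000
Ez = np.zeros(nx)           # electric field on integer grid points
Hy = np.zeros(nx - 1)       # magnetic field on half-integer points
S = 0.5                     # normalized Courant factor

for n in range(nt):
    Hy += S * np.diff(Ez)                       # H update (half step)
    Ez[1:-1] += S * np.diff(Hy)                 # E update
    Ez[50] += np.exp(-((n - 60) / 20.0)**2)     # soft source injection

print(f"field energy proxy: {np.sum(Ez**2) + np.sum(Hy**2):.3f}")
```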
A Simulation Study of Methods for Selecting Subgroup-Specific Doses in Phase I Trials
Morita, Satoshi; Thall, Peter F.; Takeda, Kentaro
2016-01-01
Summary: Patient heterogeneity may complicate dose-finding in phase I clinical trials if the dose-toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method (O’Quigley et al., 1990) based on a hierarchical Bayesian dose-toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup-specific dose selection and safety rules. A simulation study is presented that includes a comparison of this method to three alternative approaches, based on non-hierarchical models, that make different types of assumptions about within-subgroup dose-toxicity curves. The simulations show that the hierarchical model-based method is recommended in settings where the dose-toxicity curves are exchangeable between subgroups. We present practical guidelines for application, and provide computer programs for trial simulation and conduct. PMID:28111916
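For orientation, a bare-bones, non-hierarchical one-parameter CRM update (power model p_k = skeleton_k**exp(a), posterior on a grid) is sketched below; the skeleton, target, and prior width are illustrative choices, and the paper's hierarchical borrowing across subgroups is not implemented here.

```python
import numpy as np

# One-parameter CRM sketch with a grid posterior on the model parameter a.
skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])   # prior toxicity guesses
target = 0.25
a_grid = np.linspace(-3.0, 3.0, 601)
prior = np.exp(-0.5 * (a_grid / 1.34)**2)              # N(0, 1.34^2) on a

def next_dose(doses, tox):
    """Update the posterior on a; pick the dose closest to the target."""
    post = prior.copy()
    for d, y in zip(doses, tox):
        p = skeleton[d] ** np.exp(a_grid)
        post *= p**y * (1.0 - p)**(1 - y)
    post /= post.sum()
    p_mean = np.array([(skeleton[k] ** np.exp(a_grid) * post).sum()
                       for k in range(skeleton.size)])
    return int(np.argmin(np.abs(p_mean - target)))

# three patients so far: two at dose 0 (no toxicity), one at dose 1 (toxicity)
print(next_dose(doses=[0, 0, 1], tox=[0, 0, 1]))
```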
SU-F-T-242: A Method for Collision Avoidance in External Beam Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buzurovic, I; Cormack, R
2016-06-15
Purpose: We proposed a method for collision avoidance (CA) in external beam radiation therapy (EBRT). The method encompasses the analysis of all positions of the moving components of the beam delivery system, such as the treatment table and gantry, including patient-specific information obtained from the CT images. This method eliminates the need for time-consuming dry-runs prior to the actual treatments. Methods: The QA procedure for EBRT requires that collision be checked prior to treatment. We developed a system capable of a rigorous computer simulation of all moving components, including positions of the couch and gantry during the delivery, position of the patient, and imaging equipment. By running this treatment simulation it is possible to quantify and graphically represent all positions and corresponding trajectories of all points of the moving parts during the treatment delivery. The development of the workflow for implementation of the CA includes several steps: a) derivation of the combined dynamic equations of motion of the EBRT delivery system, b) development of a simulation model capable of drawing the motion trajectories of specific points, and c) development of the interface between the model and the treatment plan parameters, such as couch and gantry parameters for each field. Results: The patient CT images were registered to the treatment couch so the patient dimensions were included in the simulation. The treatment field parameters were structured in an xml-file which was used as the input to the dynamic equations. The trajectories of the moving components were plotted on the same graph using the dynamic equations. If the trajectories intersect, a collision exists. Conclusion: This CA method proved to be effective in the simulation of treatment delivery. Proper implementation of this system can potentially improve the QA program and increase efficacy in the clinical setup.
Plane-Wave DFT Methods for Chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bylaska, Eric J.
Modern plane-wave DFT methods and software (contained in the NWChem package) are described in detail; these allow for both geometry optimization and ab initio molecular dynamics simulations. Significant emphasis is placed on aspects of these methods that are of interest to computational chemists and useful for simulating chemistry, including techniques for calculating charged systems, exact exchange (i.e., hybrid DFT methods), and highly efficient AIMD/MM methods. Sample applications on the structure of the goethite+water interface and the hydrolysis of nitroaromatic molecules are described.
A Method for Functional Task Alignment Analysis of an Arthrocentesis Simulator.
Adams, Reid A; Gilbert, Gregory E; Buckley, Lisa A; Nino Fong, Rodolfo; Fuentealba, I Carmen; Little, Erika L
2018-05-16
During simulation-based education, simulators are subjected to procedures composed of a variety of tasks and processes. Simulators should functionally represent a patient in response to the physical actions of these tasks. The aim of this work was to describe a method for determining whether a simulator has sufficient functional task alignment (FTA) to be used in a simulation. Potential performance checklist items were gathered from published arthrocentesis guidelines and aggregated into a performance checklist using Lawshe's method. An expert panel used this performance checklist and an FTA analysis questionnaire to evaluate a simulator's ability to respond to the physical actions required by the performance checklist. Thirteen items, from a pool of 39, were included on the performance checklist. Experts had mixed reviews of the simulator's FTA and its suitability for use in simulation. Unexpectedly, some positive FTA was found for several tasks where the simulator lacked functionality. By developing a detailed list of specific tasks required to complete a clinical procedure, and surveying experts on the simulator's response to those actions, educators can gain insight into the simulator's clinical accuracy and suitability. Unexpected positive FTA ratings for functional deficits suggest that further revision of the survey method is required.
Fung, Lillia; Boet, Sylvain; Bould, M Dylan; Qosa, Haytham; Perrier, Laure; Tricco, Andrea; Tavares, Walter; Reeves, Scott
2015-01-01
Crisis resource management (CRM) abilities are important for different healthcare providers to effectively manage critical clinical events. This study aims to review the effectiveness of simulation-based CRM training for interprofessional and interdisciplinary teams compared to other instructional methods (e.g., didactics). Interprofessional teams are composed of several professions (e.g., nurse, physician, midwife) while interdisciplinary teams are composed of several disciplines from the same profession (e.g., cardiologist, anaesthesiologist, orthopaedist). Medline, EMBASE, CINAHL, Cochrane Central Register of Controlled Trials, and ERIC were searched using terms related to CRM, crisis management, crew resource management, teamwork, and simulation. Trials comparing simulation-based CRM team training versus any other methods of education were included. The educational interventions involved interprofessional or interdisciplinary healthcare teams. The initial search identified 7456 publications; 12 studies were included. Simulation-based CRM team training was associated with significant improvements in CRM skill acquisition in all but two studies when compared to didactic case-based CRM training or simulation without CRM training. Of the 12 included studies, one showed significant improvements in team behaviours in the workplace, while two studies demonstrated sustained reductions in adverse patient outcomes after a single simulation-based CRM team intervention. In conclusion, CRM simulation-based training for interprofessional and interdisciplinary teams show promise in teaching CRM in the simulator when compared to didactic case-based CRM education or simulation without CRM teaching. More research, however, is required to demonstrate transfer of learning to workplaces and potential impact on patient outcomes.
Bürger, Raimund; Diehl, Stefan; Mejías, Camilo
2016-01-01
The main purpose of the recently introduced Bürger-Diehl simulation model for secondary settling tanks was to resolve spatial discretization problems when both hindered settling and the phenomena of compression and dispersion are included. Straightforward time integration unfortunately means long computational times. The next step in the development is to introduce and investigate time-integration methods for more efficient simulations, but where other aspects such as implementation complexity and robustness are equally considered. This is done for batch settling simulations. The key contributions are a new time-discretization method and its comparison with other specially tailored and standard methods. Several advantages and disadvantages of each method are given. One conclusion is that the new linearly implicit method is easier to implement than another one (a semi-implicit method), but less efficient based on two types of batch sedimentation tests.
Agent-based modeling: Methods and techniques for simulating human systems
Bonabeau, Eric
2002-01-01
Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems. After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed by using real-world applications: flow simulation, organizational simulation, market simulation, and diffusion simulation. For each category, one or several business applications are described and analyzed. PMID:12011407
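A tiny example in the spirit of the diffusion-simulation category discussed above: each agent adopts a product with a probability that grows with the adoption fraction among its neighbors (a Bass-like rule). The random graph and all parameters are invented for illustration.

```python
import numpy as np

# Minimal agent-based diffusion model on a random neighbor graph.
rng = np.random.default_rng(4)
n = 400
adopted = np.zeros(n, dtype=bool)
adopted[rng.choice(n, 5, replace=False)] = True          # seed adopters
neighbors = [rng.choice(n, 8, replace=False) for _ in range(n)]

p_innovate, p_imitate = 0.01, 0.3
for t in range(30):
    frac = np.array([adopted[nb].mean() for nb in neighbors])
    adopted |= rng.random(n) < (p_innovate + p_imitate * frac)

print(f"adopters after 30 steps: {int(adopted.sum())} of {n}")
```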
Global linear gyrokinetic simulations for LHD including collisions
NASA Astrophysics Data System (ADS)
Kauffmann, K.; Kleiber, R.; Hatzky, R.; Borchardt, M.
2010-11-01
The code EUTERPE uses a Particle-In-Cell (PIC) method to solve the gyrokinetic equation globally (full radius, full flux surface) for three-dimensional equilibria calculated with VMEC. Recently this code has been extended to include multiple kinetic species and electromagnetic effects. Additionally, a pitch-angle scattering operator has been implemented in order to include collisional effects in the simulation of instabilities and to be able to simulate neoclassical transport. As a first application of this extended code we study the effects of collisions on electrostatic ion-temperature-gradient (ITG) instabilities in LHD.
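The abstract does not give the operator's discrete form; the sketch below uses a standard Monte Carlo pitch-angle scattering update of the Lorentz type (cf. Boozer and Kuo-Petravic), which damps the pitch and adds a random kick while preserving particle speed. The collision frequency and time step are illustrative values, not EUTERPE parameters.

```python
import numpy as np

# Monte Carlo pitch-angle scattering: xi = v_par/v is damped and kicked
# with a random sign each step; speed is preserved by construction.
rng = np.random.default_rng(5)
nu, dt, nsteps = 0.1, 0.01, 20000
xi = np.full(10000, 0.9)                  # initially field-aligned particles

for _ in range(nsteps):
    sign = rng.choice([-1.0, 1.0], size=xi.size)
    xi = xi * (1.0 - nu * dt) + sign * np.sqrt((1.0 - xi**2) * nu * dt)
    xi = np.clip(xi, -1.0, 1.0)

print(f"mean pitch: {xi.mean():+.3f} (relaxes toward isotropy, i.e. 0)")
```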
An experimental method for the assessment of color simulation tools.
Lillo, Julio; Alvaro, Leticia; Moreira, Humberto
2014-07-22
The Simulcheck method for evaluating the accuracy of color simulation tools in relation to dichromats is described and used to test three color simulation tools: Variantor, Coblis, and Vischeck. A total of 10 dichromats (five protanopes, five deuteranopes) and 10 normal trichromats participated in the current study. Simulcheck includes two psychophysical tasks: the Pseudoachromatic Stimuli Identification task and the Minimum Achromatic Contrast task. The Pseudoachromatic Stimuli Identification task allows determination of the two chromatic angles (h_uv values) that generate a minimum response in the yellow-blue opponent mechanism and, consequently, pseudoachromatic stimuli (greens or reds). The Minimum Achromatic Contrast task requires the selection of the gray background that produces minimum contrast (near zero change in the achromatic mechanism) for each pseudoachromatic stimulus selected in the previous task (L_R values). Results showed important differences in the colorimetric transformations performed by the three evaluated simulation tools and their accuracy levels. Vischeck simulation accurately implemented the algorithm of Brettel, Viénot, and Mollon (1997). Only Vischeck appeared accurate (similarity in h_uv and L_R values between real and simulated dichromats) and, consequently, could render reliable color selections. It is concluded that Simulcheck is a consistent method because it provided an equivalent pattern of results for h_uv and L_R values irrespective of the stimulus set used to evaluate a simulation tool. Simulcheck was also considered valid because real dichromats provided the expected h_uv and L_R values when performing the two psychophysical tasks included in this method. © 2014 ARVO.
Simulation of Mirror Electron Microscopy Caustic Images in Three-Dimensions
NASA Astrophysics Data System (ADS)
Kennedy, S. M.; Zheng, C. X.; Jesson, D. E.
A full, three-dimensional (3D) ray tracing approach is developed to simulate the caustics visible in mirror electron microscopy (MEM). The method reproduces MEM image contrast resulting from 3D surface relief. To illustrate the potential of the simulation methods, we study the evolution of crater contrast associated with a movie of GaAs structures generated by the droplet epitaxy technique. Specifically, we simulate the image contrast resulting from both a precursor stage and the final crater morphology, which is consistent with an inverted pyramid consisting of (111) facet walls. The method therefore facilitates the study of how self-assembled quantum structures evolve with time and, in particular, the development of anisotropic features including faceting.
Extension of a hybrid particle-continuum method for a mixture of chemical species
NASA Astrophysics Data System (ADS)
Verhoff, Ashley M.; Boyd, Iain D.
2012-11-01
Due to the physical accuracy and numerical efficiency achieved by analyzing transitional, hypersonic flow fields with hybrid particle-continuum methods, this paper describes a Modular Particle-Continuum (MPC) method and its extension to include multiple chemical species. Considerations that are specific to a hybrid approach for simulating gas mixtures are addressed, including a discussion of the Chapman-Enskog velocity distribution function (VDF) for near-equilibrium flows, and consistent viscosity models for the individual CFD and DSMC modules of the MPC method. Representative results for a hypersonic blunt-body flow are then presented, where the flow field properties, surface properties, and computational performance are compared for simulations employing full CFD, full DSMC, and the MPC method.
Chen, Mohan; Vella, Joseph R.; Panagiotopoulos, Athanassios Z.; ...
2015-04-08
The structure and dynamics of liquid lithium are studied using two simulation methods: orbital-free (OF) first-principles molecular dynamics (MD), which employs OF density functional theory (DFT), and classical MD utilizing a second nearest-neighbor embedded-atom method potential. The properties we studied include the dynamic structure factor, the self-diffusion coefficient, the dispersion relation, the viscosity, and the bond angle distribution function. Our simulation results were compared to available experimental data when possible. Each method has distinct advantages and disadvantages. For example, OFDFT gives better agreement with experimental dynamic structure factors, yet is more computationally demanding than classical simulations. Classical simulations can access a broader temperature range and longer time scales. The combination of first-principles and classical simulations is a powerful tool for studying properties of liquid lithium.
Process to Produce Iron Nanoparticle Lunar Dust Simulant Composite
NASA Technical Reports Server (NTRS)
Hung, Ching-cheh; McNatt, Jeremiah
2010-01-01
A document discusses a method for producing nanophase iron lunar dust composite simulant by heating a mixture of carbon black and current lunar simulant types (mixed oxide including iron oxide) at a high temperature to reduce ionic iron into elemental iron. The product is a chemically modified lunar simulant that can be attracted by a magnet, and has a surface layer with an iron concentration that is increased during the reaction. The iron was found to be α-iron and Fe3O4 nanoparticles. The simulant produced with this method contains iron nanoparticles not available previously, and they are stable in ambient air. These nanoparticles can be mass-produced simply.
Simulation of financial market via nonlinear Ising model
NASA Astrophysics Data System (ADS)
Ko, Bonggyun; Song, Jae Wook; Chang, Woojin
2016-09-01
In this research, we propose a practical method for simulating financial return series whose distribution has a specific tail heaviness. We employ the Ising model to generate financial return series analogous to real series. The similarity between real financial return series and the simulated ones is statistically verified based on their stylized facts, including the power-law behavior of the tail distribution. We also suggest a scheme for setting the parameters in order to simulate financial return series with a specific tail behavior. The simulation method introduced in this paper is expected to be applicable to other financial products whose price return distribution is fat-tailed.
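As a concrete illustration of the mechanism described above, here is a minimal sketch of a Metropolis Ising lattice whose per-sweep magnetization change is read off as a proxy for a return; the lattice size, inverse temperature, and return definition are assumptions for illustration, not the authors' calibrated model.

    # Minimal sketch: Ising dynamics near criticality as a toy return generator.
    import numpy as np

    rng = np.random.default_rng(0)
    N, beta, sweeps = 24, 0.43, 500    # lattice size, inverse temperature, sweeps
    s = rng.choice([-1, 1], size=(N, N))

    def sweep(s):
        # One Metropolis sweep with periodic boundaries.
        for _ in range(N * N):
            i, j = rng.integers(N, size=2)
            nb = s[(i+1) % N, j] + s[(i-1) % N, j] + s[i, (j+1) % N] + s[i, (j-1) % N]
            dE = 2 * s[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] *= -1

    returns, m_prev = [], s.mean()
    for _ in range(sweeps):
        sweep(s)
        m = s.mean()
        returns.append(m - m_prev)     # magnetization change read as a "return"
        m_prev = m

    r = np.array(returns)
    print("std:", r.std(), "kurtosis proxy:", (r**4).mean() / (r**2).mean()**2)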
Analysis and application of Fourier transform spectroscopy in atmospheric remote sensing
NASA Technical Reports Server (NTRS)
Park, J. H.
1984-01-01
An analysis method for Fourier transform spectroscopy is summarized with applications to various types of distortion in atmospheric absorption spectra. This analysis method includes the fast Fourier transform method for simulating the interferometric spectrum and the nonlinear least-squares method for retrieving the information from a measured spectrum. It is shown that spectral distortions can be simulated quite well and that the correct information can be retrieved from a distorted spectrum by this analysis technique.
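The two ingredients named above, FFT-based simulation of the interferometric spectrum and nonlinear least-squares retrieval, can be sketched as follows; the Lorentzian line shape, the toy apodization distortion, and all parameter values are assumptions for illustration, not Park's implementation.

    # Minimal sketch: simulate an interferogram, distort it, FFT back to a
    # spectrum, then retrieve line parameters by nonlinear least squares.
    import numpy as np
    from scipy.optimize import curve_fit

    n, dx = 4096, 1e-4                   # samples and path-difference step (cm)
    x = np.arange(n) * dx
    nu = np.fft.rfftfreq(n, dx)          # wavenumber axis (cm^-1)

    def spectrum(nu, nu0, width, depth):
        # Absorption line on a flat continuum (Lorentzian shape assumed).
        return 1.0 - depth * width**2 / ((nu - nu0)**2 + width**2)

    true = (1200.0, 3.0, 0.5)
    interferogram = np.fft.irfft(spectrum(nu, *true))
    interferogram *= np.exp(-x / 0.3)    # toy distortion: self-apodization

    measured = np.abs(np.fft.rfft(interferogram))
    measured /= measured[np.argmin(np.abs(nu - 1100.0))]  # continuum normalization

    popt, _ = curve_fit(spectrum, nu, measured, p0=(1195.0, 2.0, 0.3))
    print("retrieved nu0, width, depth:", popt)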
Characterization and Simulation of Thermoacoustic Instability in a Low Emissions Combustor Prototype
NASA Technical Reports Server (NTRS)
DeLaat, John C.; Paxson, Daniel E.
2008-01-01
Extensive research is being done toward the development of ultra-low-emissions combustors for aircraft gas turbine engines. However, these combustors have an increased susceptibility to thermoacoustic instabilities. This type of instability was recently observed in an advanced, low emissions combustor prototype installed in a NASA Glenn Research Center test stand. The instability produces pressure oscillations that grow with increasing fuel/air ratio, preventing full power operation. The instability behavior makes the combustor a potentially useful test bed for research into active control methods for combustion instability suppression. The instability behavior was characterized by operating the combustor at various pressures, temperatures, and fuel and air flows representative of operation within an aircraft gas turbine engine. Trends in instability behavior vs. operating condition have been identified and documented. A simulation developed at NASA Glenn captures the observed instability behavior. The physics-based simulation includes the relevant physical features of the combustor and test rig, employs a Sectored 1-D approach, includes simplified reaction equations, and provides time-accurate results. A computationally efficient method is used for area transitions, which decreases run times and allows the simulation to be used for parametric studies, including control method investigations. Simulation results show that the simulation exhibits a self-starting, self-sustained combustion instability and also replicates the experimentally observed instability trends vs. operating condition. Future plans are to use the simulation to investigate active control strategies to suppress combustion instabilities and then to experimentally demonstrate active instability suppression with the low emissions combustor prototype, enabling full power, stable operation.
A coupling method for a cardiovascular simulation model which includes the Kalman filter.
Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya
2012-01-01
Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate different phenomena. However, coupling methods require a significant amount of computation, since a system of non-linear equations must be solved at each timestep. Therefore, we propose a coupling method that decreases the amount of computation by using the Kalman filter. In our method, the Kalman filter calculates approximations of the solution to the system of non-linear equations at each timestep. The approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing spline predictor, the proposed method required 49.4% fewer iterations.
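The core idea, using a cheap predictor to seed the nonlinear solve at each timestep, can be sketched as below. A simple linear extrapolation stands in for the Kalman filter, and the scalar coupling condition is hypothetical; the point is only that a better initial guess reduces solver work.

    # Minimal sketch: predictor-seeded nonlinear solves along a time loop.
    import numpy as np
    from scipy.optimize import fsolve

    def residual(p, t):
        # Toy coupling condition between two submodels (hypothetical).
        return p**3 + p - (2.0 + np.sin(t))

    ts = np.linspace(0.0, 10.0, 200)

    def run(with_predictor):
        history, nfev = [1.0, 1.0], 0
        for t in ts:
            # Predictor: linear extrapolation from the two previous solutions.
            guess = 2 * history[-1] - history[-2] if with_predictor else history[-1]
            sol, info, _, _ = fsolve(residual, guess, args=(t,), full_output=True)
            nfev += info["nfev"]
            history.append(float(sol[0]))
        return nfev

    print("function evaluations, no predictor:  ", run(False))
    print("function evaluations, with predictor:", run(True))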
Rapid Harmonic Analysis of Piezoelectric MEMS Resonators.
Puder, Jonathan M; Pulskamp, Jeffrey S; Rudy, Ryan Q; Cassella, Cristian; Rinaldi, Matteo; Chen, Guofeng; Bhave, Sunil A; Polcawich, Ronald G
2018-06-01
This paper reports on a novel simulation method combining the speed of analytical evaluation with the accuracy of finite-element analysis (FEA). This method is known as the rapid analytical-FEA technique (RAFT). The ability of the RAFT to accurately predict frequency response orders of magnitude faster than conventional simulation methods while providing deeper insights into device design not possible with other types of analysis is detailed. Simulation results from the RAFT across wide bandwidths are compared to measured results of resonators fabricated with various materials, frequencies, and topologies with good agreement. These include resonators targeting beam extension, disk flexure, and Lamé beam modes. An example scaling analysis is presented and other applications enabled are discussed as well. The supplemental material includes example code for implementation in ANSYS, although any commonly employed FEA package may be used.
NASA Astrophysics Data System (ADS)
Wang, Jinting; Lu, Liqiao; Zhu, Fei
2018-01-01
The finite element (FE) method is a powerful tool that investigators have applied to real-time hybrid simulations (RTHSs). This study focuses on the computational performance, including the computation time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computation time of the FE numerical substructure; the task execution time (TET) decreases, allowing the scale of the numerical substructure model to increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method, and the Gui-λ method, are comprehensively compared to evaluate their computation time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes more pronounced as the mass ratio increases, and delay compensation methods can reduce the relative error of the displacement peak value to less than 5% even with a large time step and a large time delay.
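For reference, the central difference method (CDM) that the comparison favors for diagonal damping can be sketched in a few lines; the two-degree-of-freedom system, loading, and step size below are assumed values chosen to satisfy the explicit stability limit.

    # Minimal sketch of the central difference method for M*a + C*v + K*u = f(t);
    # with diagonal M and C, each step reduces to a trivial diagonal solve.
    import numpy as np

    M = np.diag([2.0, 1.0])
    K = np.array([[600.0, -200.0], [-200.0, 200.0]])
    C = np.diag([0.5, 0.3])                      # diagonal damping (assumed)
    f = lambda t: np.array([0.0, 10.0 * np.sin(5 * t)])

    dt, nsteps = 1e-3, 5000                      # well below the stability limit
    u_prev = np.zeros(2)
    u = np.zeros(2)
    Mhat = M / dt**2 + C / (2 * dt)              # diagonal, trivially invertible

    for i in range(nsteps):
        t = i * dt
        rhs = (f(t) - K @ u + (2 * M / dt**2) @ u
               - (M / dt**2 - C / (2 * dt)) @ u_prev)
        u_next = np.linalg.solve(Mhat, rhs)
        u_prev, u = u, u_next
    print("displacement at final step:", u)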
Models and Methods for Adaptive Management of Individual and Team-Based Training Using a Simulator
NASA Astrophysics Data System (ADS)
Lisitsyna, L. S.; Smetyuh, N. P.; Golikov, S. P.
2017-05-01
Research on adaptive individual and team-based training was analyzed, showing that both in Russia and abroad, individual and team-based training and retraining of AASTM operators usually includes: production training; training in general computer and office equipment skills; and simulator training, including virtual simulators that use computers to reproduce real-world manufacturing situations. As a rule, the evaluation of AASTM operators' knowledge is determined by the completeness and adequacy of their actions under the simulated conditions. Such an approach to the training and retraining of AASTM operators provides only technical training of operators and tests their knowledge by assessing their actions in a simulated environment.
Research in Distance Education: A System Modeling Approach.
ERIC Educational Resources Information Center
Saba, Farhad; Twitchell, David
1988-01-01
Describes how a computer simulation research method can be used for studying distance education systems. Topics discussed include systems research in distance education; a technique of model development using the System Dynamics approach and DYNAMO simulation language; and a computer simulation of a prototype model. (18 references) (LRW)
AGR-1 Thermocouple Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Einerson
2012-05-01
This report documents an effort to analyze measured and simulated data obtained in the Advanced Gas Reactor (AGR) fuel irradiation test program conducted in the INL's Advanced Test Reactor (ATR) to support the Next Generation Nuclear Plant (NGNP) R&D program. The work follows up on a previous study (Pham and Einerson, 2010), in which statistical analysis methods were applied for AGR-1 thermocouple data qualification. The present work exercises the idea that, while recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, results of the numerical simulations can be used in combination with the statistical analysis methods to further improve qualification of measured data. Additionally, the combined analysis of measured and simulation data can generate insights about simulation model uncertainty that can be useful for model improvement. This report also describes an experimental control procedure to maintain fuel target temperature in future AGR tests using regression relationships that include simulation results. The report is organized into four chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program, the AGR-1 test configuration and test procedure, an overview of AGR-1 measured data, and an overview of the physics and thermal simulation, including modeling assumptions and uncertainties. A brief summary of the statistical analysis methods developed in (Pham and Einerson, 2010) for AGR-1 measured data qualification within the NGNP Data Management and Analysis System (NDMAS) is also included for completeness. Chapters 2-3 describe and discuss cases in which the combined use of experimental and simulation data is realized. A set of issues associated with measurement and modeling uncertainties resulting from the combined analysis is identified. This includes demonstration that such a combined analysis led to important insights for reducing uncertainty in the presentation of AGR-1 measured data (Chapter 2) and in the interpretation of simulation results (Chapter 3). The statistics-based, simulation-aided experimental control procedure for the future AGR tests is developed and demonstrated in Chapter 4. The procedure for controlling the target fuel temperature (capsule peak or average) is based on regression functions of thermocouple readings and other relevant parameters, accounting for possible changes in both physical and thermal conditions and in instrument performance.
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method that uses Bayesian analysis to classify time series data from an agent-based simulation of the international emissions trading market, and compares it with a discrete Fourier transform analysis. The purpose is to demonstrate analytical methods for mapping time series data such as market prices. These analytical methods revealed the following: (1) the classification methods express the time series data as distances in the mapped space, which are easier to understand and reason about than the raw series; (2) the methods can analyze uncertain time series data, including both stationary and non-stationary processes, using distances obtained via agent-based simulation; and (3) the Bayesian method can resolve a 1% difference in the agents' emission reduction targets.
Boundary point corrections for variable radius plots - simulation results
Margaret Penner; Sam Otukol
2000-01-01
The boundary plot problem is encountered when a forest inventory plot includes two or more forest conditions. Depending on the correction method used, the resulting estimates can be biased. The various correction alternatives are reviewed. No correction, area correction, half sweep, and toss-back methods are evaluated using simulation on an actual data set. Based on...
Automatic insertion of simulated microcalcification clusters in a software breast phantom
NASA Astrophysics Data System (ADS)
Shankla, Varsha; Pokrajac, David D.; Weinstein, Susan P.; DeLeo, Michael; Tuite, Catherine; Roth, Robyn; Conant, Emily F.; Maidment, Andrew D.; Bakic, Predrag R.
2014-03-01
An automated method has been developed to insert realistic clusters of simulated microcalcifications (MCs) into computer models of breast anatomy. This algorithm has been developed as part of a virtual clinical trial (VCT) software pipeline, which includes the simulation of breast anatomy, mechanical compression, image acquisition, image processing, display and interpretation. An automated insertion method has value in VCTs involving large numbers of images. The insertion method was designed to support various insertion placement strategies, governed by probability distribution functions (pdf). The pdf can be predicated on histological or biological models of tumor growth, or estimated from the locations of actual calcification clusters. To validate the automated insertion method, a 2-AFC observer study was designed to compare two placement strategies, undirected and directed. The undirected strategy could place a MC cluster anywhere within the phantom volume. The directed strategy placed MC clusters within fibroglandular tissue on the assumption that calcifications originate from epithelial breast tissue. Three radiologists were asked to select between two simulated phantom images, one from each placement strategy. Furthermore, questions were posed to probe the rationale behind the observer's selection. The radiologists found the resulting cluster placement to be realistic in 92% of cases, validating the automated insertion method. There was a significant preference for the cluster to be positioned on a background of adipose or mixed adipose/fibroglandular tissues. Based upon these results, this automated lesion placement method will be included in our VCT simulation pipeline.
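A minimal sketch of the pdf-governed placement idea follows: build a probability distribution over a labeled phantom volume (here uniform over fibroglandular voxels, i.e. the "directed" strategy) and draw an insertion center from it. The tissue labels and stand-in volume are assumptions, not the authors' phantom format.

    # Minimal sketch: sample a cluster insertion site from a placement pdf.
    import numpy as np

    rng = np.random.default_rng(1)
    FIBROGLANDULAR = 2                                # hypothetical tissue label

    phantom = rng.integers(0, 4, size=(64, 64, 64))   # stand-in labeled volume

    # Placement pdf: uniform over fibroglandular voxels, zero elsewhere.
    pdf = (phantom == FIBROGLANDULAR).astype(float)
    pdf /= pdf.sum()

    # Draw one insertion center according to the pdf.
    flat_index = rng.choice(pdf.size, p=pdf.ravel())
    center = np.unravel_index(flat_index, pdf.shape)
    print("cluster center voxel:", center)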
Ground simulation of wide frequency band angular vibration for Lander's optic sensors
NASA Astrophysics Data System (ADS)
Xing, Zhigang; Xiang, Jianwei; Zheng, Gangtie
2017-11-01
To guide a lander of a Moon or Mars exploration spacecraft during its descent onto a desired site, optic sensors have been chosen for the task, including optic cameras and laser distance meters. However, such optic sensors are sensitive to vibrations, especially angular vibrations, from the lander. To reduce the risk of abnormal function and ensure the performance of the optic sensors, ground simulations are necessary. More importantly, the simulations can be used as a method for examining sensor performance and finding possible improvements in the sensor design. In the present paper, we propose a method for simulating angular vibration during landing. This simulation method has been realized as a product and applied to optic sensor tests for the Moon lander. The simulator can generate random angular vibration in a frequency range from 0 to 2000 Hz, the control precision is ±1 dB, and the linear translational speed can be set to the required descent speed. The operation and data processing methods of the developed simulator are the same as for a normal shake table. The analysis and design methods are studied in the present paper, and test results are also provided.
Development of automation and robotics for space via computer graphic simulation methods
NASA Technical Reports Server (NTRS)
Fernandez, Ken
1988-01-01
A robot simulation system has been developed to perform automation and robotics system design studies. The system uses a procedure-oriented solid modeling language to produce a model of the robotic mechanism. The simulator generates the kinematics, inverse kinematics, dynamics, control, and real-time graphic simulations needed to evaluate the performance of the model. Simulation examples are presented, including simulation of the Space Station and the design of telerobotics for the Orbital Maneuvering Vehicle.
Numerical simulation for the air entrainment of aerated flow with an improved multiphase SPH model
NASA Astrophysics Data System (ADS)
Wan, Hang; Li, Ran; Pu, Xunchi; Zhang, Hongwei; Feng, Jingjie
2017-11-01
Aerated flow is a complex hydraulic phenomenon that exists widely in the field of environmental hydraulics. It is generally characterised by large deformation and violent fragmentation of the free surface. Compared to Eulerian methods (the volume-of-fluid (VOF) method or the rigid-lid hypothesis method), the existing single-phase Smoothed Particle Hydrodynamics (SPH) method performs well in solving particle motion. A lack of research on interphase interaction and air concentration, however, has limited the application of the SPH model. In our study, an improved multiphase SPH model is presented to simulate aerated flows. A drag force is included in the momentum equation to ensure the accuracy of the air particle slip velocity. Furthermore, a calculation method for air concentration is developed to analyse the air entrainment characteristics. Two case studies were used to simulate the hydraulic and air entrainment characteristics, and the simulation results agree well with the experimental results.
Salis, Howard; Kaznessis, Yiannis N
2005-12-01
Stochastic chemical kinetics more accurately describes the dynamics of "small" chemical systems, such as biological cells. Many real systems contain dynamical stiffness, which causes the exact stochastic simulation algorithm or other kinetic Monte Carlo methods to spend the majority of their time executing frequently occurring reaction events. Previous methods have successfully applied a type of probabilistic steady-state approximation by deriving an evolution equation, such as the chemical master equation, for the relaxed fast dynamics and using the solution of that equation to determine the slow dynamics. However, because the solution of the chemical master equation is limited to small, carefully selected, or linear reaction networks, an alternate equation-free method would be highly useful. We present a probabilistic steady-state approximation that separates the time scales of an arbitrary reaction network, detects the convergence of a marginal distribution to a quasi-steady-state, directly samples the underlying distribution, and uses those samples to accurately predict the state of the system, including the effects of the slow dynamics, at future times. The numerical method produces an accurate solution of both the fast and slow reaction dynamics while, for stiff systems, reducing the computational time by orders of magnitude. The developed theory makes no approximations on the shape or form of the underlying steady-state distribution and only assumes that it is ergodic. We demonstrate the accuracy and efficiency of the method using multiple interesting examples, including a highly nonlinear protein-protein interaction network. The developed theory may be applied to any type of kinetic Monte Carlo simulation to more efficiently simulate dynamically stiff systems, including existing exact, approximate, or hybrid stochastic simulation techniques.
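For context, the exact stochastic simulation algorithm whose cost the approximation reduces is Gillespie's direct method; a minimal sketch for an assumed two-reaction network is shown below. Every frequently firing fast reaction costs one pass through this loop, which is what motivates the steady-state approximation described above.

    # Minimal sketch of Gillespie's direct method for 2A <-> B (rates assumed).
    import numpy as np

    rng = np.random.default_rng(2)

    c1, c2 = 0.01, 1.0                     # forward/backward rate constants
    x = np.array([100, 0])                 # counts of [A, B]
    t, t_end = 0.0, 10.0

    while t < t_end:
        a = np.array([c1 * x[0] * (x[0] - 1) / 2, c2 * x[1]])  # propensities
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)     # time to next reaction event
        if rng.random() * a0 < a[0]:       # choose which reaction fires
            x += np.array([-2, 1])         # 2A -> B
        else:
            x += np.array([2, -1])         # B -> 2A
    print("state at t =", round(t, 2), ":", x)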
NASA Astrophysics Data System (ADS)
Jaschke, Daniel; Wall, Michael L.; Carr, Lincoln D.
2018-04-01
Numerical simulations are a powerful tool to study quantum systems beyond exactly solvable models, for which no analytic expression is available. For one-dimensional entangled quantum systems, tensor network methods, amongst them Matrix Product States (MPSs), have attracted interest from different fields of quantum physics ranging from solid state systems to quantum simulators and quantum computing. Our open source MPS code provides the community with a toolset to analyze the statics and dynamics of one-dimensional quantum systems. Here, we present our open source library, Open Source Matrix Product States (OSMPS), of MPS methods implemented in Python and Fortran2003. The library includes tools for ground state calculation and excited states via the variational ansatz. We also support ground states for infinite systems with translational invariance. Dynamics are simulated with different algorithms, including three algorithms with support for long-range interactions. Convenient features include built-in support for fermionic systems and number conservation with rotational U(1) and discrete Z2 symmetries for finite systems, as well as data parallelism with MPI. We explain the principles and techniques used in this library along with examples of how to efficiently use the general interfaces to analyze the Ising and Bose-Hubbard models. This description includes the preparation of simulations as well as dispatching and post-processing of them.
Gargallo, Raimundo; Hünenberger, Philippe H.; Avilés, Francesc X.; Oliva, Baldomero
2003-01-01
Molecular dynamics (MD) simulations of the activation domain of porcine procarboxypeptidase B (ADBp) were performed to examine the effect of using the particle-particle particle-mesh (P3M) or the reaction field (RF) method for calculating electrostatic interactions in simulations of highly charged proteins. Several structural, thermodynamic, and dynamic observables were derived from the MD trajectories, including estimated entropies and solvation free energies and essential dynamics (ED). The P3M method leads to slightly higher atomic positional fluctuations and deviations from the crystallographic structure, along with somewhat lower values of the total energy and solvation free energy. However, the ED analysis of the system leads to nearly identical results for both simulations. Because of the strong similarity between the results, both methods appear well suited for the simulation of highly charged globular proteins in explicit solvent. However, the lower computational demand of the RF method in the present implementation represents a clear advantage over the P3M method. PMID:14500874
NASA Astrophysics Data System (ADS)
Jiang, Wang-Qiang; Zhang, Min; Nie, Ding; Jiao, Yong-Chang
2018-04-01
To simulate the multiple scattering effect of a target in a synthetic aperture radar (SAR) image, the hybrid GO/PO method, which combines geometrical optics (GO) and physical optics (PO), is employed to simulate the scattering field of the target. Because ray tracing is time-consuming, the Open Graphics Library (OpenGL) is usually employed to accelerate the ray tracing process. Furthermore, the GO/PO method is improved for simulation in low-pixel situations. In the improved GO/PO method, the pixels are arranged in one-to-one correspondence with rectangular wave beams, and the GO/PO result is the sum of the contributions of all the rectangular wave beams. To obtain a high-resolution SAR image, a wideband echo signal is simulated, which includes information from many electromagnetic (EM) waves at different frequencies. Finally, the improved GO/PO method is used to simulate the SAR image of targets above a rough surface, and the effects of reflected rays and the size of the pixel matrix on the SAR image are also discussed.
Analogs of microgravity: head-down tilt and water immersion.
Watenpaugh, Donald E
2016-04-15
This article briefly reviews the fidelity of ground-based methods used to simulate human existence in weightlessness (spaceflight). These methods include horizontal bed rest (BR), head-down tilt bed rest (HDT), head-out water immersion (WI), and head-out dry immersion (DI; immersion with an impermeable elastic cloth barrier between subject and water). Among these, HDT has become by far the most commonly used method, especially for longer studies. DI is less common but well accepted for long-duration studies. Very few studies exist that attempt to validate a specific simulation mode against actual microgravity. Many fundamental physical, and thus physiological, differences exist between microgravity and our methods to simulate it, and between the different methods. Also, although weightlessness is the salient feature of spaceflight, several ancillary factors of space travel complicate Earth-based simulation. In spite of these discrepancies and complications, the analogs duplicate many responses to 0 G reasonably well. As we learn more about responses to microgravity and spaceflight, investigators will continue to fine-tune simulation methods to optimize accuracy and applicability. Copyright © 2016 the American Physiological Society.
Intercomparison of 3D pore-scale flow and solute transport simulation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.
2016-09-01
Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include methods that 1) explicitly model the three-dimensional geometry of pore spaces and 2) those that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of class 1, based on direct numerical simulation using computational fluid dynamics (CFD) codes, against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of class 1 based on the immersed-boundary method (IMB), the lattice Boltzmann method (LBM), and smoothed particle hydrodynamics (SPH), as well as a model of class 2 (a pore-network model or PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries in the manner of PNMs on solute transport has not been fully determined. We apply all four approaches (CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and nonreactive solute transport, and intercompare the model results with previously reported experimental observations. Experimental observations are limited to measured pore-scale velocities, so solute transport comparisons are made only among the various models. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations).
Spatial Evaluation and Verification of Earthquake Simulators
NASA Astrophysics Data System (ADS)
Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.
2017-06-01
In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
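The second, power-law smoothing approach can be sketched simply: each simulated event spreads unit rate over the whole grid with a kernel that decays with epicentral distance. The grid, exponent, core distance, and magnitude weighting below are assumptions for illustration, not the calibrated ETAS parameters.

    # Minimal sketch: power-law smoothing of simulated epicenters onto a rate map.
    import numpy as np

    nx, ny = 100, 100
    rate = np.zeros((nx, ny))
    events = [(20.0, 30.0, 6.1), (70.0, 55.0, 6.5)]   # (x, y, magnitude), assumed

    X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    q, d0 = 1.5, 2.0            # power-law exponent and core distance (assumed)

    for x0, y0, mag in events:
        r = np.hypot(X - x0, Y - y0)
        kernel = (r + d0) ** (-2 * q)       # decays with epicentral distance
        kernel /= kernel.sum()              # each event spreads unit total rate
        rate += 10 ** (mag - 6.0) * kernel  # heavier weight for larger events

    print("total smoothed rate:", rate.sum())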
State-and-transition simulation models: a framework for forecasting landscape change
Daniel, Colin; Frid, Leonardo; Sleeter, Benjamin M.; Fortin, Marie-Josée
2016-01-01
A wide range of spatially explicit simulation models have been developed to forecast landscape dynamics, including models for projecting changes in both vegetation and land use. While these models have generally been developed as separate applications, each with a separate purpose and audience, they share many common features. We present a general framework, called a state-and-transition simulation model (STSM), which captures a number of these common features, accompanied by a software product, called ST-Sim, to build and run such models. The STSM method divides a landscape into a set of discrete spatial units and simulates the discrete state of each cell forward as a discrete-time inhomogeneous stochastic process. The method differs from a spatially interacting Markov chain in several important ways, including the ability to add discrete counters such as age and time-since-transition as state variables, to specify one-step transition rates as either probabilities or target areas, and to represent multiple types of transitions between pairs of states. We demonstrate the STSM method using a model of land-use/land-cover (LULC) change for the state of Hawai'i, USA. Processes represented in this example include expansion/contraction of agricultural lands, urbanization, wildfire, shrub encroachment into grassland and harvest of tree plantations; the model also projects shifts in moisture zones due to climate change. Key model output includes projections of the future spatial and temporal distribution of LULC classes and moisture zones across the landscape over the next 50 years. State-and-transition simulation models can be applied to a wide range of landscapes, including questions of both land-use change and vegetation dynamics. Because the method is inherently stochastic, it is well suited for characterizing uncertainty in model projections. When combined with the ST-Sim software, STSMs offer a simple yet powerful means for developing a wide range of models of landscape dynamics.
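A minimal sketch of the STSM mechanics described above follows: each cell holds a discrete state advanced by per-state transition probabilities, with an age-style counter tracking time since the last transition. The three states and probability matrix are invented for illustration and are unrelated to the Hawai'i application.

    # Minimal sketch: cellwise state transitions with a time-since-transition counter.
    import numpy as np

    rng = np.random.default_rng(3)
    GRASS, SHRUB, FOREST = 0, 1, 2
    # Annual transition probabilities (rows: from-state, cols: to-state), assumed.
    P = np.array([[0.90, 0.08, 0.02],
                  [0.05, 0.85, 0.10],
                  [0.01, 0.04, 0.95]])

    state = rng.integers(0, 3, size=(50, 50))
    age = np.zeros_like(state)              # time since last transition

    for year in range(50):
        new_state = state.copy()
        for s in (GRASS, SHRUB, FOREST):
            mask = state == s
            new_state[mask] = rng.choice(3, size=mask.sum(), p=P[s])
        changed = new_state != state
        age = np.where(changed, 0, age + 1)  # reset counter on transition
        state = new_state

    print("final state areas:", np.bincount(state.ravel(), minlength=3))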
Implementation and Testing of Turbulence Models for the F18-HARV Simulation
NASA Technical Reports Server (NTRS)
Yeager, Jessie C.
1998-01-01
This report presents three methods of implementing the Dryden power spectral density model for atmospheric turbulence. Included are the equations which define the three methods and computer source code written in Advanced Continuous Simulation Language to implement the equations. Time-history plots and sample statistics of simulated turbulence results from executing the code in a test program are also presented. Power spectral densities were computed for sample sequences of turbulence and are plotted for comparison with the Dryden spectra. The three model implementations were installed in a nonlinear six-degree-of-freedom simulation of the High Alpha Research Vehicle airplane. Aircraft simulation responses to turbulence generated with the three implementations are presented as plots.
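One common way to realize a Dryden component in discrete time, shaping white noise with a first-order filter, can be sketched as below; this is a standard simplified form with assumed parameters, not necessarily any of the report's three implementations.

    # Minimal sketch: longitudinal Dryden gust as a first-order filter driven
    # by white noise; airspeed, scale length, and rms intensity are assumed.
    import numpy as np

    rng = np.random.default_rng(4)
    V, L, sigma = 100.0, 500.0, 1.5   # airspeed (m/s), scale length (m), rms (m/s)
    dt, n = 0.01, 20000
    tau = L / V                       # filter time constant

    u = np.zeros(n)
    for k in range(n - 1):
        w = rng.standard_normal()
        u[k + 1] = (1 - dt / tau) * u[k] + sigma * np.sqrt(2 * dt / tau) * w

    print("sample rms gust velocity:", u.std())   # should approach sigma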
Dunbar-Reid, Kylie; Sinclair, Peter M; Hudson, Denis
2015-06-01
Simulation is a well-established and proven teaching method, yet its use in renal education is not widely reported. Criticisms of simulation-based teaching include limited realism and a lack of authentic patient interaction. This paper discusses the benefits and challenges of high-fidelity simulation and suggests hybrid simulation as a complementary model to existing simulation programmes. Through the use of a simulated patient, hybrid simulation can improve the authenticity of renal simulation-based education while simultaneously teaching and assessing technologically enframed caring. © 2015 European Dialysis and Transplant Nurses Association/European Renal Care Association.
Ultrasonic NDE Simulation for Composite Manufacturing Defects
NASA Technical Reports Server (NTRS)
Leckey, Cara A. C.; Juarez, Peter D.
2016-01-01
The increased use of composites in aerospace components is expected to continue into the future. The large-scale use of composites in aerospace necessitates the development of composite-appropriate nondestructive evaluation (NDE) methods to quantitatively characterize defects in as-manufactured parts and damage incurred during or after manufacturing. Ultrasonic techniques are one of the most common approaches for defect/damage detection in composite materials. One key technical challenge area included in NASA's Advanced Composites Project is to develop optimized rapid inspection methods for composite materials. Common manufacturing defects in carbon fiber reinforced polymer (CFRP) composites include fiber waviness (in-plane and out-of-plane), porosity, and disbonds, among others. This paper is an overview of ongoing work to develop ultrasonic wavefield based methods for characterizing manufacturing waviness defects. The paper describes the development and implementation of a custom ultrasound simulation tool that is used to model ultrasonic wave interaction with in-plane fiber waviness (also known as marcelling). Wavefield data processing methods are applied to the simulation data to explore possible routes for quantitative defect characterization.
Comparison of Analysis, Simulation, and Measurement of Wire-to-Wire Crosstalk. Part 1
NASA Technical Reports Server (NTRS)
Bradley, Arthur T.; Yavoich, Brian James; Hodson, Shane M.; Godley, Richard Franklin
2010-01-01
In this investigation, we compare crosstalk analysis, simulation, and measurement results for electrically short configurations. Methods include hand calculations, PSPICE simulations, Microstripes transient field solver, and empirical measurement. In total, four representative physical configurations are examined, including a single wire over a ground plane, a twisted pair over a ground plane, generator plus receptor wires inside a cylindrical conduit, and a single receptor wire inside a cylindrical conduit. Part 1 addresses the first two cases, and Part 2 addresses the final two. Agreement between the analysis, simulation, and test data is shown to be very good.
NASA Astrophysics Data System (ADS)
Trautmann, L.; Petrausch, S.; Bauer, M.
2005-09-01
The functional transformation method (FTM) is an established mathematical method for accurate simulation of multidimensional physical systems from various fields of science, including optics, heat and mass transfer, electrical engineering, and acoustics. It is a frequency-domain method based on the decomposition into eigenvectors and eigenfrequencies of the underlying physical problem. In this article, the FTM is applied to real-time simulations of vibrating strings that are ideally fixed at one end while the fixing at the other end is modeled by a frequency-dependent input impedance. Thus, boundary conditions of the third kind are applied to the model at the end fixed with the input impedance. It is shown that accurate and stable simulations are achieved with nearly the same computational cost as with strings ideally fixed at both ends.
Hydrogeologic Unit Flow Characterization Using Transition Probability Geostatistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, N L; Walker, J R; Carle, S F
2003-11-21
This paper describes a technique for applying the transition probability geostatistics method for stochastic simulation to a MODFLOW model. Transition probability geostatistics has several advantages over traditional indicator kriging methods, including a simpler and more intuitive framework for interpreting geologic relationships and the ability to simulate juxtapositional tendencies such as fining upwards sequences. The indicator arrays generated by the transition probability simulation are converted to layer elevation and thickness arrays for use with the new Hydrogeologic Unit Flow (HUF) package in MODFLOW 2000. This makes it possible to preserve complex heterogeneity while using reasonably sized grids. An application of the technique involving probabilistic capture zone delineation for the Aberjona Aquifer in Woburn, MA, is included.
Mounts, W M; Liebman, M N
1997-07-01
We have developed a method for representing biological pathways and simulating their behavior based on the use of stochastic activity networks (SANs). SANs, an extension of the original Petri net, have been used traditionally to model flow systems including data-communications networks and manufacturing processes. We apply the methodology to the blood coagulation cascade, a biological flow system, and present the representation method as well as results of simulation studies based on published experimental data. In addition to describing the dynamic model, we also present the results of its utilization to perform simulations of clinical states including hemophilias A and B, as well as sensitivity analysis of individual factors and their impact on thrombin production.
OpenMM 7: Rapid development of high performance algorithms for molecular dynamics
Swails, Jason; Zhao, Yutong; Beauchamp, Kyle A.; Wang, Lee-Ping; Stern, Chaya D.; Brooks, Bernard R.; Pande, Vijay S.
2017-01-01
OpenMM is a molecular dynamics simulation toolkit with a unique focus on extensibility. It allows users to easily add new features, including forces with novel functional forms, new integration algorithms, and new simulation protocols. Those features automatically work on all supported hardware types (including both CPUs and GPUs) and perform well on all of them. In many cases they require minimal coding, just a mathematical description of the desired function. They also require no modification to OpenMM itself and can be distributed independently of OpenMM. This makes it an ideal tool for researchers developing new simulation methods, and also allows those new methods to be immediately available to the larger community. PMID:28746339
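As a flavor of the extensibility emphasized above, the sketch below builds a two-particle system with a custom pair force defined by an energy expression string; it assumes the OpenMM 7-era simtk namespace, and the Gaussian potential with its parameters is an arbitrary illustrative choice, not an OpenMM-recommended force field.

    # Minimal sketch: a custom nonbonded force from a mathematical expression.
    from simtk import openmm, unit

    system = openmm.System()
    for _ in range(2):
        system.addParticle(39.9 * unit.amu)           # two argon-like particles

    # User-defined pair potential given as an energy expression string.
    force = openmm.CustomNonbondedForce("A*exp(-r^2/(2*w^2))")
    force.addGlobalParameter("A", 1.0)                # kJ/mol (assumed)
    force.addGlobalParameter("w", 0.3)                # nm (assumed)
    force.addParticle([])                             # no per-particle parameters
    force.addParticle([])
    system.addForce(force)

    integrator = openmm.VerletIntegrator(0.002 * unit.picoseconds)
    context = openmm.Context(system, integrator)
    context.setPositions([[0.0, 0.0, 0.0], [0.4, 0.0, 0.0]] * unit.nanometers)
    integrator.step(100)
    print(context.getState(getEnergy=True).getPotentialEnergy())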
Nagasaka, Masanari; Kondoh, Hiroshi; Nakai, Ikuyo; Ohta, Toshiaki
2007-01-28
The dynamics of adsorbate structures during CO oxidation on Pt(111) surfaces and its effects on the reaction were studied by the dynamic Monte Carlo method including lateral interactions of adsorbates. The lateral interaction energies between adsorbed species were calculated by the density functional theory method. Dynamic Monte Carlo simulations were performed for the oxidation reaction over a mesoscopic scale, where the experimentally determined activation energies of elementary paths were altered by the calculated lateral interaction energies. The simulated results reproduced the characteristics of the microscopic and mesoscopic scale adsorbate structures formed during the reaction, and revealed that the complicated reaction kinetics is comprehensively explained by a single reaction path affected by the surrounding adsorbates. We also propose from the simulations that weakly adsorbed CO molecules at domain boundaries promote the island-periphery specific reaction.
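A minimal sketch of one kinetic Monte Carlo step with lateral interactions is given below: the activation energy of an event is shifted by the number of occupied neighbor sites before the rates are formed. The lattice, energies, and single event type are assumptions for illustration, not the paper's DFT-derived parameter set.

    # Minimal sketch: one KMC step where neighbors shift activation energies.
    import numpy as np

    rng = np.random.default_rng(5)
    kT = 0.05                 # eV, assumed temperature scale
    E0 = 0.6                  # eV, base activation energy (assumed)
    eps = 0.05                # eV per occupied neighbor (assumed lateral term)

    N = 32
    occ = rng.random((N, N)) < 0.3      # adsorbate occupancy

    def neighbors(i, j):
        return [((i+1) % N, j), ((i-1) % N, j), (i, (j+1) % N), (i, (j-1) % N)]

    # Rates for one event type (e.g., reaction/desorption) at occupied sites.
    rates = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if occ[i, j]:
                n_occ = sum(occ[a, b] for a, b in neighbors(i, j))
                Ea = E0 + eps * n_occ          # lateral interactions raise barrier
                rates[i, j] = np.exp(-Ea / kT)

    total = rates.sum()
    dt = rng.exponential(1.0 / total)          # KMC time increment
    site = np.unravel_index(rng.choice(N * N, p=(rates / total).ravel()), (N, N))
    occ[site] = False                          # event: species leaves the site
    print("event at site", site, "after dt =", dt)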
Material point method modeling in oil and gas reservoirs
Vanderheyden, William Brian; Zhang, Duan
2016-06-28
A computer system and method of simulating the behavior of an oil and gas reservoir, including changes in the margins of frangible solids. A system of equations, including state equations such as momentum and conservation laws such as mass conservation and volume fraction continuity, is defined and discretized for at least two phases in a modeled volume, one of which corresponds to the frangible material. A material point method technique numerically solves the system of discretized equations to derive the fluid flow at each of a plurality of mesh nodes in the modeled volume, and the velocity at each of a plurality of particles representing the frangible material in the modeled volume. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger volume scale simulations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and after every deviation. Accuracy audit methods include comparisons of sensor values with electronic signal simulations or...
Code of Federal Regulations, 2014 CFR
2014-07-01
... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and after every deviation. Accuracy audit methods include comparisons of sensor values with electronic signal simulations or...
Code of Federal Regulations, 2012 CFR
2012-07-01
... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and after every deviation. Accuracy audit methods include comparisons of sensor values with electronic signal simulations or...
Development of a Standalone Thermal Wellbore Simulator
NASA Astrophysics Data System (ADS)
Xiong, Wanqiang
With the continuous development of various sophisticated wells in the petroleum industry, wellbore modeling and simulation have received increasing attention; in unconventional oil and gas recovery processes especially, there is a growing demand for more accurate wellbore modeling. Despite notable advancements in wellbore modeling, none of the existing wellbore simulators has been as successful as reservoir simulators such as Eclipse and CMG's, and further research on issues such as accurate heat-loss modeling and multi-tubing wellbore modeling is necessary. A series of mathematical equations, including the main governing equations, auxiliary equations, PVT equations, thermodynamic equations, drift-flux model equations, and wellbore heat-loss calculation equations, was collected and screened from publications. Based on these modeling equations, workflows for wellbore simulation and software development are proposed. Research was conducted on the key steps of developing a wellbore simulator: discretization, the grid system, the solution method, the linear equation solver, and the computer language. A standalone thermal wellbore simulator was developed in standard C++. This wellbore simulator can simulate single-phase injection and production, two-phase steam injection, and two-phase oil and water production. By implementing a multi-part scheme that divides a wellbore with a sophisticated configuration into several relatively simple simulation units, the simulator can handle complex wellbores: wellbores with multistage casings, horizontal wells, multilateral wells, and double tubing. To improve the accuracy of heat-loss calculations to the surrounding formations, a semi-numerical method is proposed and a series of FLUENT simulations was conducted in this study. This semi-numerical method extends the 2D formation heat transfer simulation to include the casing wall and cement and adopts new correlations regressed in this study. Meanwhile, a correlation for handling heat transfer in a double-tubing annulus was regressed; this work initiates research on heat transfer in double-tubing wellbore systems. A series of validation and test cases was performed for hot water injection, steam injection, real field data, a horizontal well, and a double-tubing well, including comparison with the Ramey method. The program matches measured field data well and also performs well in simulating horizontal wells and double-tubing wells.
NASA Astrophysics Data System (ADS)
Hardie, Russell C.; Power, Jonathan D.; LeMaster, Daniel A.; Droege, Douglas R.; Gladysz, Szymon; Bose-Pillai, Santasri
2017-07-01
We present a numerical wave propagation method for simulating imaging of an extended scene under anisoplanatic conditions. While isoplanatic simulation is relatively common, few tools are specifically designed for simulating the imaging of extended scenes under anisoplanatic conditions. We provide a complete description of the proposed simulation tool, including the wave propagation method used. Our approach computes an array of point spread functions (PSFs) for a two-dimensional grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. The degradation includes spatially varying warping and blurring. To produce the PSF array, we generate a series of extended phase screens. Simulated point sources are numerically propagated from an array of positions on the object plane, through the phase screens, and ultimately to the focal plane of the simulated camera. Note that the optical path for each PSF will be different, and thus passes through a different portion of the extended phase screens. These different paths give rise to a spatially varying PSF that produces anisoplanatic effects. We use a method for defining the individual phase screen statistics that we have not seen used in previous anisoplanatic simulations. We also present a validation analysis. In particular, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and a derived differential tilt variance statistic. This is in addition to comparing the long- and short-exposure PSFs and the isoplanatic angle. We believe this analysis represents the most thorough validation of an anisoplanatic simulation to date. The current work is also unique in that we simulate and validate both constant and varying Cn2(z) profiles. Furthermore, we simulate sequences with both temporally independent and temporally correlated turbulence effects. Temporal correlation is introduced by generating even larger extended phase screens and translating this block of screens in front of the propagation area. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. Thus, we think this tool can be used effectively to study optical anisoplanatic turbulence and to aid in the development of image restoration methods.
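The composition step at the heart of the method, a spatially varying weighted sum of per-region PSF blurs, can be sketched as follows; the Gaussian PSFs, the 2x2 PSF grid, and the bilinear weights are simplifying assumptions standing in for the propagated PSF array.

    # Minimal sketch: spatially varying blur as a weighted sum of PSF convolutions.
    import numpy as np
    from scipy.ndimage import convolve

    def gaussian_psf(sigma, size=9):
        ax = np.arange(size) - size // 2
        g = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
        return g / g.sum()

    img = np.random.default_rng(6).random((128, 128))   # stand-in ideal image

    # A 2x2 grid of PSFs with different widths, mimicking anisoplanatism.
    psfs = [gaussian_psf(s) for s in (1.0, 1.5, 2.0, 2.5)]
    blurred = [convolve(img, p, mode="reflect") for p in psfs]

    # Bilinear weights tying each pixel to the four PSF grid points.
    y = np.linspace(0, 1, img.shape[0])[:, None]
    x = np.linspace(0, 1, img.shape[1])[None, :]
    w = [(1 - y) * (1 - x), (1 - y) * x, y * (1 - x), y * x]

    out = sum(wi * bi for wi, bi in zip(w, blurred))
    print(out.shape, out.mean())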
An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.
2017-01-01
The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is well suited to overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating, namely bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which was developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
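The surrogate-plus-NRMSE loop can be sketched with stand-ins: scikit-learn's BaggingRegressor over regression trees substitutes for Bagging MARS, and a cheap analytic function substitutes for the MODFLOW run. All names, data, and the random-search step below are illustrative assumptions.

    # Minimal sketch: bagged surrogate trained on "expensive" runs, then a
    # crude search minimizing an NRMSE-style objective on the surrogate.
    import numpy as np
    from sklearn.ensemble import BaggingRegressor
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(7)

    def physical_model(theta):
        # Stand-in for an expensive MODFLOW run: parameters -> simulated heads.
        return np.array([np.sin(theta[0]) + theta[1]**2,
                         theta[0] * theta[1],
                         np.cos(theta[1])])

    thetas = rng.uniform(-1, 1, size=(200, 2))        # "expensive" training runs
    heads = np.array([physical_model(t) for t in thetas])

    surrogate = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50,
                                 random_state=0).fit(thetas, heads[:, 0])

    observed = physical_model(np.array([0.3, -0.5]))  # synthetic "measured" heads
    span = observed.max() - observed.min()

    cands = rng.uniform(-1, 1, size=(5000, 2))        # crude random search
    scores = np.abs(surrogate.predict(cands) - observed[0]) / span
    best = cands[np.argmin(scores)]
    print("best parameters:", best, "surrogate NRMSE:", scores.min())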
Wang, Candice; Huang, Chin-Chou; Lin, Shing-Jong; Chen, Jaw-Wen
2016-01-01
Objectives The goal of our study was to shed light on educational methods to strengthen medical students' cardiopulmonary resuscitation (CPR) leadership and team skills in order to optimise CPR understanding and success using didactic videos and high-fidelity simulations. Design An observational study. Setting A tertiary medical centre in Northern Taiwan. Participants A total of 104 5–7th year medical students, including 72 men and 32 women. Interventions We provided the medical students with a 2-hour training session on advanced CPR. During each class, we divided the students into 1–2 groups; each group consisted of 4–6 team members. Medical student teams were trained by using either method A or B. Method A started with an instructional CPR video followed by a first CPR simulation. Method B started with a first CPR simulation followed by an instructional CPR video. All students then participated in a second CPR simulation. Outcome measures Student teams were assessed with checklist rating scores in leadership, teamwork and team member skills, global rating scores by an attending physician and video-recording evaluation by 2 independent individuals. Results The 104 medical students were divided into 22 teams. We trained 11 teams using method A and 11 using method B. Total second CPR simulation scores were significantly higher than first CPR simulation scores in leadership (p<0.001), teamwork (p<0.001) and team member skills (p<0.001). For methods A and B students' first CPR simulation scores were similar, but method A students' second CPR simulation scores were significantly higher than those of method B in leadership skills (p=0.034), specifically in the support subcategory (p=0.049). Conclusions Although both teaching strategies improved leadership, teamwork and team member performance, video exposure followed by CPR simulation further increased students' leadership skills compared with CPR simulation followed by video exposure. PMID:27678539
Raemer, Daniel B
2014-06-01
The story of Ignaz Semmelweis suggests a lesson: beware of unintended consequences, especially with in situ simulation. In situ simulation offers many important advantages over center-based simulation, such as learning about the real setting, putting participants at ease, saving travel time, minimizing space requirements, and involving patients and families. Some substantial disadvantages include frequent distractions, lack of privacy, logistics of setup, availability of technology, and supply costs. Importantly, in situ simulation amplifies some of the safety hazards of simulation itself, including maintaining control of simulated medications and equipment, limiting the use of valuable hospital resources, preventing incorrect learning from simulation shortcuts, and profoundly upsetting patients and their families. Mitigating these hazards by labeling effectively, publishing policies and procedures, securing simulation supplies and equipment, educating simulation staff, and informing participants of the risks are all methods that may lessen the potential for an accident. Each requires a serious effort of analysis, design, and implementation.
Global Flowfield About the V-22 Tiltrotor Aircraft
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
1996-01-01
This final report includes five publications that resulted from the studies of the global flowfield about the V-22 Tiltrotor Aircraft. The first of the five is 'The Chimera Method of Simulation for Unsteady Three-Dimensional Viscous Flow', as presented in 'Computational Fluid Dynamics Review 1995.' The remaining papers, all presented at AIAA conferences, are 'Unsteady Simulation of the Viscous Flow About a V-22 Rotor and Wing in Hover', 'An Efficient Means of Adaptive Refinement Within Systems of Overset Grids', 'On the Spatial and Temporal Accuracy of Overset Grid Methods for Moving Body Problems', and 'Moving Body Overset Grid Methods for Complete Aircraft Tiltrotor Simulations.'
Fast Learning for Immersive Engagement in Energy Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M
The fast computation that is critical for immersive engagement with, and learning from, energy simulations would be furthered by developing a general method for creating rapidly computed, simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost, with response times (typically less than one minute of wall-clock time) suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations, but instead supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.
Advances in Integrated Computational Materials Engineering "ICME"
NASA Astrophysics Data System (ADS)
Hirsch, Jürgen
The methods of Integrated Computational Materials Engineering (ICME) that were developed and successfully applied for aluminium have been constantly improved. The main aspects and recent advances of integrated material and process modelling are simulations of material properties, such as strength and forming properties, and of the specific microstructure evolution during processing (rolling, extrusion, annealing) under the influence of material constitution and process variations, through the production process down to the final application. Examples are discussed for the through-process simulation of microstructures and related properties of aluminium sheet, including DC ingot casting, pre-heating and homogenization, hot and cold rolling, and final annealing. New results are included on the simulation of solution annealing and age hardening of 6xxx alloys for automotive applications. Physically based quantitative descriptions and computer-assisted evaluation methods are new ICME approaches for integrating new simulation tools, also for customer applications such as heat-affected zones in the welding of age-hardening alloys. The aspects of estimating the effect of specific elements due to growing recycling volumes, requested also for high-end aluminium products, are also discussed, being of special interest in the aluminium-producing industries.
Subresolution Displacements in Finite Difference Simulations of Ultrasound Propagation and Imaging.
Pinton, Gianmarco F
2017-03-01
Time domain finite difference simulations are used extensively to simulate wave propagation. They approximate the wave field on a discrete domain with a grid spacing that is typically on the order of a tenth of a wavelength. The smallest displacements that can be modeled by this type of simulation are thus limited to discrete values that are integer multiples of the grid spacing. This paper presents a method to represent continuous and subresolution displacements by varying the impedance of individual elements in a multielement scatterer. It is demonstrated that this method removes the limitations imposed by the discrete grid spacing by generating a continuum of displacements as measured by the backscattered signal. The method is first validated on an ideal perfect correlation case with a single scatterer. It is subsequently applied to a more complex case with a field of scatterers that model an acoustic radiation force-induced displacement used in ultrasound elasticity imaging. A custom finite difference simulation tool is used to simulate propagation from ultrasound imaging pulses in the scatterer field. These simulated transmit-receive events are then beamformed into images, which are tracked with a correlation-based algorithm to determine the displacement. A linear predictive model is developed to analytically describe the relationship between element impedance and backscattered phase shift. The error between model and simulation is λ/1364, where λ is the acoustical wavelength. An iterative method is also presented that reduces the simulation error to λ/5556 over one iteration. The proposed technique therefore offers a computationally efficient method to model continuous subresolution displacements of a scattering medium in ultrasound imaging. This method has applications that include ultrasound elastography, blood flow, and motion tracking. This method also extends generally to finite difference simulations of wave propagation, such as electromagnetic or seismic waves.
NASA Astrophysics Data System (ADS)
Edwards, T.
2015-12-01
Modelling Antarctic marine ice sheet instability (MISI) - the potential for sustained grounding line retreat along downsloping bedrock - is very challenging because high resolution at the grounding line is required for reliable simulation. Assessing modelling uncertainties is even more difficult, because such models are very computationally expensive, restricting the number of simulations that can be performed. Quantifying uncertainty in future Antarctic instability has therefore so far been limited. There are several ways to tackle this problem, including: (1) simulating a small domain, to reduce expense and allow the use of ensemble methods; (2) parameterising the response of the grounding line to the onset of MISI, for the same reasons; (3) emulating the simulator with a statistical model, to explore the impacts of uncertainties more thoroughly; and (4) substituting physical models with expert-elicited statistical distributions. Methods 2-4 require rigorous testing against observations and high resolution models to have confidence in their results. We use all four to examine the dependence of MISI in the Amundsen Sea Embayment (ASE) on uncertain model inputs, including bedrock topography, ice viscosity, basal friction, model structure (sliding law and treatment of grounding line migration) and MISI triggers (including basal melting and risk of ice shelf collapse). We compare simulations from a 3000 member ensemble with GRISLI (methods 2, 4) with a 284 member ensemble from BISICLES (method 1) and also use emulation (method 3). Results from the two ensembles show similarities, despite very different model structures and ensemble designs. Basal friction and topography have a large effect on the extent of grounding line retreat, and the sliding law strongly modifies sea level contributions through changes in the rate and extent of grounding line retreat and the rate of ice thinning. Over 50 years, MISI in the ASE gives up to 1.1 mm/year (95% quantile) sea level equivalent (SLE) in GRISLI (calibrated with ASE mass losses in a Bayesian framework), and up to 1.2 mm/year SLE (95% quantile) in the 270 completed BISICLES simulations (no calibration). We will show preliminary results emulating the models, calibrating with observations, and comparing them to assess structural uncertainty. We use these to improve MISI projections for the whole continent.
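Method 3 (emulation) is often implemented with a Gaussian process fitted to ensemble inputs and outputs. The abstract does not name the emulator, so the following is only a hedged sketch using scikit-learn, with toy placeholder data standing in for real ensemble members:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# X_train: ensemble member inputs (e.g. basal friction and viscosity
# scalings); y_train: simulated sea level contribution -- placeholders here.
rng = np.random.default_rng(1)
X_train = rng.uniform(0.0, 1.0, size=(100, 3))
y_train = np.sin(3 * X_train[:, 0]) + X_train[:, 1] * X_train[:, 2]  # toy response

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.3, 0.3, 0.3])
emu = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# Emulator predictions with uncertainty at untried inputs -- cheap compared
# with re-running the ice sheet model itself.
X_new = rng.uniform(0.0, 1.0, size=(5, 3))
mean, std = emu.predict(X_new, return_std=True)
```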
Nonlinear vs. linear biasing in Trp-cage folding simulations
NASA Astrophysics Data System (ADS)
Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka
2015-03-01
Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
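Both embeddings compared in this study are available off the shelf. Below is a hedged sketch of extracting two-dimensional linear (PCA) and nonlinear (Isomap) collective variables from trajectory frames, with random data standing in for real coordinates (the metadynamics bias itself would be applied in the MD engine and is not shown):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# Toy stand-in for trajectory frames: n_frames x n_features
# (e.g. flattened Cartesian coordinates of backbone atoms).
rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 60))

# Linear embedding (PCA) vs nonlinear embedding (Isomap) down to two
# collective variables, as compared in the study.
cv_linear = PCA(n_components=2).fit_transform(frames)
cv_nonlinear = Isomap(n_neighbors=12, n_components=2).fit_transform(frames)
```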
Nonlinear vs. linear biasing in Trp-cage folding simulations.
Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka
2015-03-21
Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
Patel, Ravi G.; Desjardins, Olivier; Kong, Bo; ...
2017-09-01
Here, we present a verification study of three simulation techniques for fluid–particle flows, including an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature-based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model (TFM). We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, Ravi G.; Desjardins, Olivier; Kong, Bo
Here, we present a verification study of three simulation techniques for fluid–particle flows, including an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature-based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model (TFM). We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.
Determination of component volumes of lipid bilayers from simulations.
Petrache, H I; Feller, S E; Nagle, J F
1997-01-01
An efficient method for extracting volumetric data from simulations is developed. The method is illustrated using a recent atomic-level molecular dynamics simulation of an L-alpha phase 1,2-dipalmitoyl-sn-glycero-3-phosphocholine bilayer. Results from this simulation are obtained for the volumes of water (VW), lipid (VL), chain methylenes (V2), chain terminal methyls (V3), and lipid headgroups (VH), including separate volumes for carboxyl (Vcoo), glyceryl (Vgl), phosphoryl (VPO4), and choline (Vchol) groups. The method assumes only that each group has the same average volume regardless of its location in the bilayer, and this assumption is then tested with the current simulation. The volumes obtained agree well with the values VW and VL that have been obtained directly from experiment, as well as with the volumes VH, V2, and V3 that require certain assumptions in addition to the experimental data. This method should help to support and refine some assumptions that are necessary when interpreting experimental data. PMID:9129826
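The method's central assumption, that each group's average volume is independent of its location in the bilayer, turns group occupancy counts into an overdetermined linear system for the component volumes. A minimal sketch with synthetic counts (all numbers illustrative):

```python
import numpy as np

# counts[z, g]: average number of groups of type g found in spatial slice z
# of the simulation box -- synthetic placeholder data here.
rng = np.random.default_rng(0)
n_slices, n_groups = 50, 6
true_V = np.array([30.0, 28.0, 54.0, 70.0, 60.0, 120.0])  # toy group volumes
counts = rng.poisson(5.0, size=(n_slices, n_groups)).astype(float)
slice_volume = counts @ true_V  # constructed so every slice is exactly filled

# If each group has the same average volume wherever it sits, then
# counts @ V = slice_volume is an overdetermined linear system for V.
V_fit, *_ = np.linalg.lstsq(counts, slice_volume, rcond=None)
```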
Elcock, Adrian H.
2013-01-01
Inclusion of hydrodynamic interactions (HIs) is essential in simulations of biological macromolecules that treat the solvent implicitly if the macromolecules are to exhibit correct translational and rotational diffusion. The present work describes the development and testing of a simple approach aimed at allowing more rapid computation of HIs in coarse-grained Brownian dynamics simulations of systems that contain large numbers of flexible macromolecules. The method combines a complete treatment of intramolecular HIs with an approximate treatment of the intermolecular HIs which assumes that the molecules are effectively spherical; all of the HIs are calculated at the Rotne-Prager-Yamakawa level of theory. When combined with Fixman's Chebyshev polynomial method for calculating correlated random displacements, the proposed method provides an approach that is simple to program but sufficiently fast to make it computationally viable to include HIs in large-scale simulations. Test calculations performed on very coarse-grained models of the pyruvate dehydrogenase (PDH) E2 complex and on oligomers of ParM (ranging in size from 1 to 20 monomers) indicate that the method reproduces the translational diffusion behavior seen in more complete HI simulations surprisingly well; the method performs less well at capturing rotational diffusion, but its discrepancies diminish with increasing size of the simulated assembly. Simulations of residue-level models of two tetrameric proteins demonstrate that the method also works well when more structurally detailed models are used in the simulations. Finally, test simulations of systems containing up to 1024 coarse-grained PDH molecules indicate that the proposed method rapidly becomes more efficient than the conventional BD approach, in which correlated random displacements are obtained via a Cholesky decomposition of the complete diffusion tensor. PMID:23914146
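For context, here is a hedged sketch of the conventional reference approach mentioned at the end: assembling the Rotne-Prager-Yamakawa diffusion tensor for non-overlapping beads and drawing correlated random displacements via Cholesky decomposition, the O(N^3) step that Fixman's Chebyshev method avoids (parameters and units are illustrative):

```python
import numpy as np

def rpy_diffusion_tensor(x, a, kT_over_eta):
    """Rotne-Prager-Yamakawa diffusion tensor for N beads of radius a at
    positions x (N x 3); minimal sketch, r < 2a overlap correction omitted."""
    N = len(x)
    D = np.zeros((3 * N, 3 * N))
    for i in range(N):
        D[3*i:3*i+3, 3*i:3*i+3] = kT_over_eta / (6.0 * np.pi * a) * np.eye(3)
        for j in range(i + 1, N):
            rij = x[j] - x[i]
            r = np.linalg.norm(rij)
            rhat_rhat = np.outer(rij, rij) / r**2
            c = kT_over_eta / (8.0 * np.pi * r)
            block = c * ((1.0 + 2.0 * a * a / (3.0 * r * r)) * np.eye(3)
                         + (1.0 - 2.0 * a * a / (r * r)) * rhat_rhat)
            D[3*i:3*i+3, 3*j:3*j+3] = block
            D[3*j:3*j+3, 3*i:3*i+3] = block
    return D

# Beads on a line, spaced so that none overlap (r > 2a everywhere).
x = np.array([[6.0 * i, 0.0, 0.0] for i in range(20)])
D = rpy_diffusion_tensor(x, a=2.0, kT_over_eta=1.0)

# Correlated random displacements: dx = sqrt(2 dt) * L xi, with D = L L^T.
rng = np.random.default_rng(0)
dt = 1e-3
L = np.linalg.cholesky(D)
dx = np.sqrt(2.0 * dt) * (L @ rng.standard_normal(3 * len(x)))
```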
Simulation of one-sided heating of boiler unit membrane-type water walls
NASA Astrophysics Data System (ADS)
Kurepin, M. P.; Serbinovskiy, M. Yu.
2017-03-01
This study describes the results of simulation of the temperature field and the stress-strain state of membrane-type gastight water walls of boiler units using the finite element method. The methods of analytical and standard calculation of one-sided heating of fin-tube water walls by a radiative heat flux are analyzed. Methods and software for input data calculation in the finite-element simulation, including thermoelastic moments in welded panels that result from their one-sided heating, are proposed. The method and software modules are used for water wall simulation using ANSYS. The results of simulation of the temperature field, stress field, deformations, and displacement of the membrane-type panel for the boiler furnace water wall using the finite-element method, as well as the results of calculation of the panel tube temperature, stresses, and deformations using the known methods, are presented. A comparison of the known experimental results on heating and bending by given moments of membrane-type water walls with the numerical simulations is performed. It is demonstrated that the numerical results agree closely with the experimental data. The relative temperature difference does not exceed 1%. The relative difference between the experimentally measured mutual turning angle of the fins, caused by one-sided heating by a radiative heat flux, and the results obtained in the finite-element simulation does not exceed 8.5% for nondisplaced fins and 7% for fins with displacement. The same difference between the theoretical results and the finite-element simulation does not exceed 3% and 7.1%, respectively. The proposed method and software modules for simulation of the temperature field and stress-strain state of the water walls are verified, and the feasibility of their application in practical design is proven.
Development of the V4.2m5 and V5.0m0 Multigroup Cross Section Libraries for MPACT for PWR and BWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Clarno, Kevin T.; Gentry, Cole
2017-03-01
The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole core transport code being developed for the CASL toolset, the Virtual Environment for Reactor Analysis (VERA). Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole-core transport solver with a 2-D/1-D synthesis method. The MPACT code requires a cross section library to support all of its core simulation capabilities; this library is the component with the greatest influence on simulation accuracy.
The SCEC Broadband Platform: Open-Source Software for Strong Ground Motion Simulation and Validation
NASA Astrophysics Data System (ADS)
Silva, F.; Goulet, C. A.; Maechling, P. J.; Callaghan, S.; Jordan, T. H.
2016-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform (BBP) is a carefully integrated collection of open-source scientific software programs that can simulate broadband (0-100 Hz) ground motions for earthquakes at regional scales. The BBP can run earthquake rupture and wave propagation modeling software to simulate ground motions for well-observed historical earthquakes and to quantify how well the simulated broadband seismograms match the observed seismograms. The BBP can also run simulations for hypothetical earthquakes. In this case, users input an earthquake location and magnitude description, a list of station locations, and a 1D velocity model for the region of interest, and the BBP software then calculates ground motions for the specified stations. The BBP scientific software modules implement kinematic rupture generation, low- and high-frequency seismogram synthesis using wave propagation through 1D layered velocity structures, several ground motion intensity measure calculations, and various ground motion goodness-of-fit tools. These modules are integrated into a software system that provides user-defined, repeatable, calculation of ground-motion seismograms, using multiple alternative ground motion simulation methods, and software utilities to generate tables, plots, and maps. The BBP has been developed over the last five years in a collaborative project involving geoscientists, earthquake engineers, graduate students, and SCEC scientific software developers. The SCEC BBP software released in 2016 can be compiled and run on recent Linux and Mac OS X systems with GNU compilers. It includes five simulation methods, seven simulation regions covering California, Japan, and Eastern North America, and the ability to compare simulation results against empirical ground motion models (aka GMPEs). The latest version includes updated ground motion simulation methods, a suite of new validation metrics and a simplified command line user interface.
3D Simulation of Multiple Simultaneous Hydraulic Fractures with Different Initial Lengths in Rock
NASA Astrophysics Data System (ADS)
Tang, X.; Rayudu, N. M.; Singh, G.
2017-12-01
Hydraulic fracturing is a widely used technique for extracting shale gas. During this process, fractures with various initial lengths are induced in the rock mass by hydraulic pressure. Understanding the mechanism of propagation and interaction of these induced hydraulic cracks is critical for optimizing the fracking process. In this work, numerical results are presented for investigating the effect of in-situ parameters and fluid properties on the growth and interaction of multiple simultaneous hydraulic fractures. A fully coupled 3D fracture simulator, TOUGH-GFEM, is used to simulate the effect of different vital parameters, including in-situ stress, initial fracture length, fracture spacing, fluid viscosity, and flow rate, on induced hydraulic fracture growth. The TOUGH-GFEM simulator is based on the 3D finite volume method (FVM) and the partition of unity element method (PUM). The displacement correlation method (DCM) is used for calculating multi-mode (Mode I, II, III) stress intensity factors. The maximum principal stress criterion is used for crack propagation. Key words: hydraulic fracturing, TOUGH, partition of unity element method, displacement correlation method, 3D fracturing simulator
Simulation-based training for prostate surgery.
Khan, Raheej; Aydin, Abdullatif; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran
2015-10-01
To identify and review the currently available simulators for prostate surgery and to explore the evidence supporting their validity for training purposes. A review of the literature between 1999 and 2014 was performed. The search terms included a combination of urology, prostate surgery, robotic prostatectomy, laparoscopic prostatectomy, transurethral resection of the prostate (TURP), simulation, virtual reality, animal model, human cadavers, training, assessment, technical skills, validation and learning curves. Furthermore, relevant abstracts from the American Urological Association, European Association of Urology, British Association of Urological Surgeons and World Congress of Endourology meetings, between 1999 and 2013, were included. Only studies related to prostate surgery simulators were included; studies regarding other urological simulators were excluded. A total of 22 studies that carried out a validation study were identified. Five validated models and/or simulators were identified for TURP, one for photoselective vaporisation of the prostate, two for holmium enucleation of the prostate, three for laparoscopic radical prostatectomy (LRP) and four for robot-assisted surgery. Of the TURP simulators, all five have demonstrated content validity, three face validity and four construct validity. The GreenLight laser simulator has demonstrated face, content and construct validities. The Kansai HoLEP Simulator has demonstrated face and content validity whilst the UroSim HoLEP Simulator has demonstrated face, content and construct validity. All three animal models for LRP have been shown to have construct validity whilst the chicken skin model was also content valid. Only two robotic simulators were identified with relevance to robot-assisted laparoscopic prostatectomy, both of which demonstrated construct validity. A wide range of different simulators are available for prostate surgery, including synthetic bench models, virtual-reality platforms, animal models, human cadavers, distributed simulation and advanced training programmes and modules. The currently validated simulators can be used by healthcare organisations to provide supplementary training sessions for trainee surgeons. Further research should be conducted to validate simulated environments, to determine which simulators have greater efficacy than others and to assess the cost-effectiveness of the simulators and the transferability of skills learnt. With surgeons investigating new possibilities for easily reproducible and valid methods of training, simulation offers great scope for implementation alongside traditional methods of training. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.
An Investigation of High-Order Shock-Capturing Methods for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Casper, Jay; Baysal, Oktay
1997-01-01
Topics covered include: Low-dispersion scheme for nonlinear acoustic waves in nonuniform flow; Computation of acoustic scattering by a low-dispersion scheme; Algorithmic extension of low-dispersion scheme and modeling effects for acoustic wave simulation; The accuracy of shock capturing in two spatial dimensions; Using high-order methods on lower-order geometries; and Computational considerations for the simulation of discontinuous flows.
New method for qualitative simulations of water resources systems. 2. Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antunes, M.P.; Seixas, M.J.; Camara, A.S.
1987-11-01
SLIN (Simulacao Linguistica) is a new method for qualitative dynamic simulation. As was presented previously, SLIN relies upon a categorical representation of variables which are manipulated by logical rules. Two applications to water resources systems are included to illustrate SLIN's potential usefulness: the environmental impact evaluation of a hydropower plant and the assessment of oil dispersion in the sea after a tanker wreck.
Direct Harmonic Linear Navier-Stokes Methods for Efficient Simulation of Wave Packets
NASA Technical Reports Server (NTRS)
Streett, C. L.
1998-01-01
Wave packets produced by localized disturbances play an important role in transition in three-dimensional boundary layers, such as that on a swept wing. Starting with the receptivity process, we show the effects of wave-space energy distribution on the development of packets and other three-dimensional disturbance patterns. Nonlinearity in the receptivity process is specifically addressed, including demonstration of an effect which can enhance receptivity of traveling crossflow disturbances. An efficient spatial numerical simulation method is allowing most of the simulations presented to be carried out on a workstation.
Laboratory simulation of the astrophysical burst processes in non-uniform magnetised media
NASA Astrophysics Data System (ADS)
Antonov, V. M.; Zakharov, Yu. P.; Orishich, A. M.; Ponomarenko, A. G.; Posukh, V. G.; Snytnikov, V. N.; Stoyanovsky, V. O.
1990-08-01
Under various astrophysical conditions, the dynamics of nonstationary burst processes with mass and energy release may be defined by the inhomogeneity of the surrounding medium. In the presence of an external magnetic field, such a problem in the general case becomes three-dimensional and very complicated from both the observational and theoretical points of view (including computer simulation methods). The application of laboratory simulation methods to such kinds of problems therefore seems rather promising and is demonstrated, mainly using the example of a peculiar supernova.
Simulation of Hypervelocity Impact on Aluminum-Nextel-Kevlar Orbital Debris Shields
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
2000-01-01
An improved hybrid particle-finite element method has been developed for hypervelocity impact simulation. The method combines the general contact-impact capabilities of particle codes with the true Lagrangian kinematics of large strain finite element formulations. Unlike some alternative schemes which couple Lagrangian finite element models with smooth particle hydrodynamics, the present formulation makes no use of slidelines or penalty forces. The method has been implemented in a parallel, three-dimensional computer code. Simulations of three-dimensional orbital debris impact problems using this parallel hybrid particle-finite element code show good agreement with experiment and good speedup in parallel computation. The simulations included single and multi-plate shields as well as aluminum and composite shielding materials, at an impact velocity of eleven kilometers per second.
Computational Methods Development at Ames
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Smith, Charles A. (Technical Monitor)
1998-01-01
This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and the study of flow physics. The presentation gives historical precedents for the above research and speculates on its future course.
Simulating Free Surface Flows with SPH
NASA Astrophysics Data System (ADS)
Monaghan, J. J.
1994-02-01
The SPH (smoothed particle hydrodynamics) method is extended to deal with free surface incompressible flows. The method is easy to use, and examples will be given of its application to a breaking dam, a bore, the simulation of a wave maker, and the propagation of waves towards a beach. Arbitrary moving boundaries can be included by modelling the boundaries by particles which repel the fluid particles. The method is explicit, and the time steps are therefore much shorter than required by other less flexible methods, but it is robust and easy to program.
An adaptive approach to the dynamic allocation of buffer storage. M.S. Thesis
NASA Technical Reports Server (NTRS)
Crooke, S. C.
1970-01-01
Several strategies for the dynamic allocation of buffer storage are simulated and compared. The basic algorithms investigated, using actual statistics observed in the Univac 1108 EXEC 8 System, include the buddy method and the first-fit method. Modifications are made to the basic methods in an effort to improve and to measure allocation performance. A simulation model of an adaptive strategy is developed which permits interchanging the two methods, the buddy and first-fit methods, with some modifications. Using an adaptive strategy, each method may be employed in the statistical environment in which its performance is superior to the other method.
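For concreteness, here is a minimal first-fit allocator sketch in Python (the buddy method, which rounds requests to powers of two and splits/merges paired blocks, is not shown; this free-list representation is illustrative, not the thesis's implementation):

```python
class FirstFitAllocator:
    """First-fit buffer allocation: keep an offset-sorted free list of
    (offset, size) holes and grab the first hole large enough."""
    def __init__(self, total):
        self.free = [(0, total)]  # (offset, size), sorted by offset

    def allocate(self, size):
        for k, (off, sz) in enumerate(self.free):
            if sz >= size:
                if sz == size:
                    self.free.pop(k)
                else:
                    self.free[k] = (off + size, sz - size)
                return off
        return None  # no hole big enough

    def release(self, off, size):
        # Insert the hole back and coalesce adjacent holes.
        self.free.append((off, size))
        self.free.sort()
        merged = [self.free[0]]
        for o, s in self.free[1:]:
            po, ps = merged[-1]
            if po + ps == o:
                merged[-1] = (po, ps + s)
            else:
                merged.append((o, s))
        self.free = merged

# Usage: allocate two buffers, free the first, watch first-fit reuse the hole.
heap = FirstFitAllocator(1024)
a = heap.allocate(100)   # -> 0
b = heap.allocate(200)   # -> 100
heap.release(a, 100)
c = heap.allocate(64)    # -> 0 (first fit reuses the leading hole)
```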
NASA Astrophysics Data System (ADS)
Cao, Huijun; Cao, Yong; Chu, Yuchuan; He, Xiaoming; Lin, Tao
2018-06-01
Surface evolution is an unavoidable issue in engineering plasma applications. In this article an iterative method for modeling plasma-surface interactions with moving interface is proposed and validated. In this method, the plasma dynamics is simulated by an immersed finite element particle-in-cell (IFE-PIC) method, and the surface evolution is modeled by the Huygens wavelet method which is coupled with the iteration of the IFE-PIC method. Numerical experiments, including prototypical engineering applications, such as the erosion of Hall thruster channel wall, are presented to demonstrate features of this Huygens IFE-PIC method for simulating the dynamic plasma-surface interactions.
A method to reproduce alpha-particle spectra measured with semiconductor detectors.
Timón, A Fernández; Vargas, M Jurado; Sánchez, A Martín
2010-01-01
A method is proposed to reproduce alpha-particle spectra measured with silicon detectors, combining analytical and computer simulation techniques. The procedure includes the use of the Monte Carlo method to simulate the tracks of alpha-particles within the source and in the detector entrance window. The alpha-particle spectrum is finally obtained by the convolution of this simulated distribution and the theoretical distributions representing the contributions of the alpha-particle spectrometer to the spectrum. Experimental spectra from (233)U and (241)Am sources were compared with the predictions given by the proposed procedure, showing good agreement. The proposed method can be an important aid for the analysis and deconvolution of complex alpha-particle spectra. Copyright 2009 Elsevier Ltd. All rights reserved.
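A minimal sketch of the final convolution step, assuming a Monte Carlo energy-loss histogram and a single Gaussian spectrometer response (the paper combines several theoretical contributions; the response shape and all numbers here are illustrative):

```python
import numpy as np

# Energy axis (keV) and a toy Monte Carlo energy-loss distribution for
# alpha particles leaving the source and detector window (placeholder data).
E = np.arange(5350.0, 5500.0, 0.5)
rng = np.random.default_rng(0)
deposits = 5485.6 - rng.exponential(8.0, size=200_000)  # tailed energy losses
mc_hist, _ = np.histogram(deposits, bins=np.append(E, E[-1] + 0.5))

# Toy spectrometer response: a Gaussian of width sigma standing in for the
# theoretical distributions that the method convolves with the simulation.
sigma = 6.0
kernel_x = np.arange(-5 * sigma, 5 * sigma + 0.5, 0.5)
gauss = np.exp(-0.5 * (kernel_x / sigma) ** 2)
gauss /= gauss.sum()

spectrum = np.convolve(mc_hist.astype(float), gauss, mode="same")
```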
Ni, Haochen; Rui, Yikang; Wang, Jiechen; Cheng, Liang
2014-09-05
The chemical industry poses a potential security risk to factory personnel and neighboring residents. In order to mitigate prospective damage, a synthetic method must be developed for emergency response. With the development of environmental numeric simulation models, model integration methods, and modern information technology, many Decision Support Systems (DSSs) have been established. However, existing systems still have limitations in terms of synthetic simulation and network interoperation. In order to resolve these limitations, a mature simulation model for chemical accidents was integrated into a WEB Geographic Information System (WEBGIS) platform. The complete workflow of the emergency response, including raw data (meteorology information and accident information) management, numeric simulation of different kinds of accidents, environmental impact assessment, and representation of the simulation results, was achieved. This allowed comprehensive and real-time simulation of acute accidents in the chemical industry. The main contributions of this paper are an organizational mechanism for the model set, based on accident type and pollutant substance; a scheduling mechanism for the parallel processing of multiple accident types, accident substances, and simulation models; and a presentation method for scalar and vector data in the web browser, built on the integration of the WEBGIS platform. The outcomes demonstrated that this method could provide effective support for deciding emergency responses to acute chemical accidents.
Ni, Haochen; Rui, Yikang; Wang, Jiechen; Cheng, Liang
2014-01-01
The chemical industry poses a potential security risk to factory personnel and neighboring residents. In order to mitigate prospective damage, a synthetic method must be developed for emergency response. With the development of environmental numeric simulation models, model integration methods, and modern information technology, many Decision Support Systems (DSSs) have been established. However, existing systems still have limitations in terms of synthetic simulation and network interoperation. In order to resolve these limitations, a mature simulation model for chemical accidents was integrated into a WEB Geographic Information System (WEBGIS) platform. The complete workflow of the emergency response, including raw data (meteorology information and accident information) management, numeric simulation of different kinds of accidents, environmental impact assessment, and representation of the simulation results, was achieved. This allowed comprehensive and real-time simulation of acute accidents in the chemical industry. The main contributions of this paper are an organizational mechanism for the model set, based on accident type and pollutant substance; a scheduling mechanism for the parallel processing of multiple accident types, accident substances, and simulation models; and a presentation method for scalar and vector data in the web browser, built on the integration of the WEBGIS platform. The outcomes demonstrated that this method could provide effective support for deciding emergency responses to acute chemical accidents. PMID:25198686
Multiscale Hy3S: hybrid stochastic simulation for supercomputers.
Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N
2006-02-24
Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translational elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of the NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
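For orientation, here is a minimal sketch of the exact stochastic simulation algorithm (Gillespie SSA) whose cost motivates hybrid methods like those in Hy3S; this is a generic textbook SSA in Python, not Hy3S code:

```python
import numpy as np

def gillespie_ssa(x0, stoich, rates, t_end, seed=0):
    """Exact stochastic simulation of a well-mixed reaction network, the
    jump-Markov regime that hybrid methods approximate for fast reactions.
    `stoich` has shape (reactions, species)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = rates(x)                     # propensity of each reaction channel
        a0 = a.sum()
        if a0 <= 0.0:
            break                        # no reaction can fire
        t += rng.exponential(1.0 / a0)   # time to the next reaction
        r = rng.choice(len(a), p=a / a0) # which reaction fires
        x += stoich[r]
        times.append(t); states.append(x.copy())
    return np.array(times), np.array(states)

# Toy birth-death gene expression: 0 -> mRNA (k = 2.0), mRNA -> 0 (g = 0.1).
stoich = np.array([[+1], [-1]])
rates = lambda x: np.array([2.0, 0.1 * x[0]])
t, traj = gillespie_ssa([0], stoich, rates, t_end=100.0)
```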
NASA Technical Reports Server (NTRS)
Woo, Myeung-Jouh; Greber, Isaac
1995-01-01
Molecular dynamics simulation is used to study the piston-driven shock wave at Mach 1.5, 3, and 10. A shock tube, shaped as a circular cylinder, is filled with hard-sphere molecules having a Maxwellian thermal velocity distribution and zero mean velocity. The piston moves and a shock wave is generated. All collisions are specular, including those between the molecules and the computational boundaries, so that the shock development is entirely causal, with no imposed statistics. The structure of the generated shock is examined in detail, and the wave speed; profiles of density, velocity, and temperature; and shock thickness are determined. The results are compared with published results of other methods, especially the direct simulation Monte Carlo method. Property profiles are similar to those generated by the direct simulation Monte Carlo method. The shock wave thicknesses are smaller than the direct simulation Monte Carlo results, but larger than those of the other methods. Simulation of a shock wave, which is one-dimensional, is a severe test of the molecular dynamics method, which is always three-dimensional. A major challenge of the thesis is to examine the capability of the molecular dynamics method by choosing a difficult task.
Simulation in teaching regional anesthesia: current perspectives.
Udani, Ankeet D; Kim, T Edward; Howard, Steven K; Mariano, Edward R
2015-01-01
The emerging subspecialty of regional anesthesiology and acute pain medicine represents an opportunity to critically evaluate the current methods of teaching regional anesthesia techniques and the practice of acute pain medicine. To date, there have been a wide variety of simulation applications in this field, and efficacy has largely been assumed. However, a thorough review of the literature reveals that effective teaching strategies, including simulation, in regional anesthesiology and acute pain medicine are not yet completely established. Future research should be directed toward the comparative effectiveness of simulation versus other accepted teaching methods, exploring the combination of procedural training with realistic clinical scenarios, and the application of simulation-based teaching curricula to a wider range of learners, from the student to the practicing physician.
Simulation in teaching regional anesthesia: current perspectives
Udani, Ankeet D; Kim, T Edward; Howard, Steven K; Mariano, Edward R
2015-01-01
The emerging subspecialty of regional anesthesiology and acute pain medicine represents an opportunity to critically evaluate the current methods of teaching regional anesthesia techniques and the practice of acute pain medicine. To date, there have been a wide variety of simulation applications in this field, and efficacy has largely been assumed. However, a thorough review of the literature reveals that effective teaching strategies, including simulation, in regional anesthesiology and acute pain medicine are not yet completely established. Future research should be directed toward the comparative effectiveness of simulation versus other accepted teaching methods, exploring the combination of procedural training with realistic clinical scenarios, and the application of simulation-based teaching curricula to a wider range of learners, from the student to the practicing physician. PMID:26316812
Code Samples Used for Complexity and Control
NASA Astrophysics Data System (ADS)
Ivancevic, Vladimir G.; Reid, Darryn J.
2015-11-01
The following sections are included:
* Mathematica® Code
* Generic Chaotic Simulator
* Vector Differential Operators
* NLS Explorer
* C++ Code
* C++ Lambda Functions for Real Calculus
* Accelerometer Data Processor
* Simple Predictor-Corrector Integrator
* Solving the BVP with the Shooting Method
* Linear Hyperbolic PDE Solver
* Linear Elliptic PDE Solver
* Method of Lines for a Set of the NLS Equations
* C# Code
* Iterative Equation Solver
* Simulated Annealing: A Function Minimum
* Simple Nonlinear Dynamics
* Nonlinear Pendulum Simulator
* Lagrangian Dynamics Simulator
* Complex-Valued Crowd Attractor Dynamics
* Freeform Fortran Code
* Lorenz Attractor Simulator
* Complex Lorenz Attractor
* Simple SGE Soliton
* Complex Signal Presentation
* Gaussian Wave Packet
* Hermitian Matrices
* Euclidean L2-Norm
* Vector/Matrix Operations
* Plain C-Code: Levenberg-Marquardt Optimizer
* Free Basic Code: 2D Crowd Dynamics with 3000 Agents
Geochemical Reaction Mechanism Discovery from Molecular Simulation
Stack, Andrew G.; Kent, Paul R. C.
2014-11-10
Methods to explore reactions using computer simulation are becoming increasingly quantitative, versatile, and robust. In this review, a rationale for how molecular simulation can help build better geochemical kinetics models is first given. We summarize some common methods that geochemists use to simulate reaction mechanisms, specifically classical molecular dynamics and quantum chemical methods, and discuss their strengths and weaknesses. Useful tools that enable one to explore reactions, such as umbrella sampling and metadynamics, are discussed. Several case studies wherein geochemists have used these tools to understand reaction mechanisms are presented, including water exchange and sorption on aqueous species and mineral surfaces, surface charging, crystal growth and dissolution, and electron transfer. The impact that molecular simulation has had on our understanding of geochemical reactivity is highlighted in each case. In the future, it is anticipated that molecular simulation of geochemical reaction mechanisms will become more commonplace as a tool to validate and interpret experimental data, and to provide a check on the plausibility of geochemical kinetic models.
DOT National Transportation Integrated Search
2003-01-01
This study evaluated existing traffic signal optimization programs, including Synchro, TRANSYT-7F, and genetic algorithm optimization, using real-world data collected in Virginia. As a first step, a microscopic simulation model, VISSIM, was extensively ...
Experimental and numerical characterization of expanded glass granules
NASA Astrophysics Data System (ADS)
Chaudry, Mohsin Ali; Woitzik, Christian; Düster, Alexander; Wriggers, Peter
2018-07-01
In this paper, the material response of expanded glass granules at different scales and under different boundary conditions is investigated. At grain scale, single particle tests can be used to determine properties like Young's modulus or crushing strength. With experiments like triaxial and oedometer tests, it is possible to examine the bulk mechanical behaviour of the granular material. Our experimental investigation is complemented by a numerical simulation where the discrete element method is used to compute the mechanical behaviour of such materials. In order to improve the simulation quality, effects such as rolling resistance, inelastic behaviour, damage, and crushing are also included in the discrete element method. Furthermore, the variation of the material properties of granules is modelled by a statistical distribution and included in our numerical simulation.
Conducting Simulation Studies in the R Programming Environment.
Hallgren, Kevin A
2013-10-12
Simulation studies allow researchers to answer specific questions about data analysis, statistical power, and best-practices for obtaining accurate results in empirical research. Despite the benefits that simulation research can provide, many researchers are unfamiliar with available tools for conducting their own simulation studies. The use of simulation studies need not be restricted to researchers with advanced skills in statistics and computer programming, and such methods can be implemented by researchers with a variety of abilities and interests. The present paper provides an introduction to methods used for running simulation studies using the R statistical programming environment and is written for individuals with minimal experience running simulation studies or using R. The paper describes the rationale and benefits of using simulations and introduces R functions relevant for many simulation studies. Three examples illustrate different applications for simulation studies, including (a) the use of simulations to answer a novel question about statistical analysis, (b) the use of simulations to estimate statistical power, and (c) the use of simulations to obtain confidence intervals of parameter estimates through bootstrapping. Results and fully annotated syntax from these examples are provided.
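As an illustration of application (b), power can be estimated by simulating many datasets under the alternative hypothesis and counting rejections. The paper's examples use R; the following is an analogous sketch in Python (all numbers illustrative):

```python
import numpy as np
from scipy import stats

def power_two_sample_t(n, effect, n_sims=5000, alpha=0.05, seed=0):
    """Monte Carlo estimate of two-sample t-test power: simulate many
    datasets under the alternative and count how often p < alpha."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        _, p = stats.ttest_ind(a, b)
        hits += p < alpha
    return hits / n_sims

print(power_two_sample_t(n=64, effect=0.5))  # ~0.80 for d = 0.5, n = 64/group
```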
Analysis of drift correction in different simulated weighing schemes
NASA Astrophysics Data System (ADS)
Beatrici, A.; Rebelo, A.; Quintão, D.; Cacais, F. L.; Loayza, V. M.
2015-10-01
In the calibration of high-accuracy mass standards, some weighing schemes are used to reduce or eliminate zero-drift effects in mass comparators. There are different sources of drift and different methods for its treatment. Using numerical methods, drift functions were simulated and a random term was included in each function. A comparison between the results obtained from ABABAB and ABBA weighing series was carried out. The results show a better efficacy of the ABABAB method for drift with smooth variation and small randomness.
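A hedged numerical sketch of such a comparison: simulate a smooth zero drift plus random noise, then compare an ABBA estimator (whose symmetric grouping cancels linear drift) with an ABABAB-style estimator that interpolates each B reading between its neighbouring A readings. The patterns, estimators, and numbers below are illustrative, not the paper's exact schemes:

```python
import numpy as np

def abba_estimate(y):
    # Pattern A B B A at equally spaced times: the symmetric grouping
    # cancels any linear drift exactly.
    return (y[0] + y[3]) / 2.0 - (y[1] + y[2]) / 2.0

def ababab_estimate(y):
    # Pattern A B A B A B A: compare each B with the mean of its two
    # neighbouring A readings (local interpolation of the drift).
    return float(np.mean([(y[k - 1] + y[k + 1]) / 2.0 - y[k] for k in (1, 3, 5)]))

rng = np.random.default_rng(0)
true_diff = 0.020                                # true A - B difference, mg
drift = lambda t: 0.004 * t + 0.0008 * t ** 2    # smooth zero drift, mg
noise = 0.001                                    # reading repeatability, mg

def trial(pattern, estimator):
    level = {"A": true_diff, "B": 0.0}
    y = np.array([level[p] + drift(t) + rng.normal(0.0, noise)
                  for t, p in enumerate(pattern)])
    return estimator(y)

rmse = lambda errs: float(np.sqrt(np.mean(np.square(errs))))
e_abba = rmse([trial("ABBA", abba_estimate) - true_diff for _ in range(2000)])
e_abab = rmse([trial("ABABABA", ababab_estimate) - true_diff for _ in range(2000)])
print(e_abba, e_abab)  # the quadratic drift term biases ABBA more here
```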
A data-driven dynamics simulation framework for railway vehicles
NASA Astrophysics Data System (ADS)
Nie, Yinyu; Tang, Zhao; Liu, Fengjia; Chang, Jian; Zhang, Jianjun
2018-03-01
The finite element (FE) method is essential for simulating vehicle dynamics with fine details, especially for train crash simulations. However, factors such as the complexity of meshes and the distortion involved in large deformations undermine its calculation efficiency. An alternative method, multi-body (MB) dynamics simulation, provides satisfying time efficiency but limited accuracy when highly nonlinear dynamic processes are involved. To maintain the advantages of both methods, this paper proposes a data-driven simulation framework for the dynamics simulation of railway vehicles. This framework uses machine learning techniques to extract nonlinear features from training data generated by FE simulations, so that specific mesh structures can be represented by a surrogate element (or surrogate elements) replacing the original mechanical elements, and the dynamics simulation can be implemented by co-simulation with the surrogate element(s) embedded into an MB model. This framework consists of a series of techniques including data collection, feature extraction, training data sampling, surrogate element building, and model evaluation and selection. To verify the feasibility of this framework, we present two case studies, a vertical dynamics simulation and a longitudinal dynamics simulation, based on co-simulation with MATLAB/Simulink and Simpack, and a further comparison with a popular data-driven model (the Kriging model) is provided. The simulation results show that using the Legendre polynomial regression model to build surrogate elements can largely cut down the simulation time without sacrificing accuracy.
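A minimal sketch of the surrogate-fitting step using Legendre polynomial regression, as named in the paper, with toy data standing in for FE-generated training samples (the embedding into a Simpack/Simulink co-simulation is not shown):

```python
import numpy as np
from numpy.polynomial import legendre

# Toy stand-in for FE training data: input u (e.g. a suspension deflection,
# scaled to [-1, 1]) against a nonlinear restoring force from the FE model.
rng = np.random.default_rng(0)
u = np.sort(rng.uniform(-1.0, 1.0, 200))
force = np.tanh(3.0 * u) + 0.05 * rng.normal(size=u.size)  # noisy FE samples

# Fit a degree-7 Legendre expansion as the surrogate element, then evaluate
# it inside the simulation loop instead of calling the FE solver.
coef = legendre.legfit(u, force, deg=7)
surrogate = lambda q: legendre.legval(q, coef)

print(surrogate(0.25))  # fast approximation of the FE response at u = 0.25
```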
Ryan, Patrick B; Schuemie, Martijn J
2013-10-01
There has been only limited evaluation of statistical methods for identifying safety risks of drug exposure in observational healthcare data. Simulations can support empirical evaluation, but have not been shown to adequately model the real-world phenomena that challenge observational analyses. Our objective was to design and evaluate a probabilistic framework (OSIM2) for generating simulated observational healthcare data, and to use these data for evaluating the performance of methods in identifying associations between drug exposure and health outcomes of interest. Seven observational designs, including case-control, cohort, self-controlled case series, and self-controlled cohort designs, were applied to 399 drug-outcome scenarios in 6 simulated datasets with no effect and injected relative risks of 1.25, 1.5, 2, 4, and 10, respectively. Longitudinal data for 10 million simulated patients were generated using a model derived from an administrative claims database, with associated demographics, periods of drug exposure derived from pharmacy dispensings, and medical conditions derived from diagnoses on medical claims. Simulation validation was performed through descriptive comparison with the real source data. Method performance was evaluated using the area under the ROC curve (AUC), bias, and mean squared error. OSIM2 replicates the prevalence and types of confounding observed in real claims data. When simulated data are injected with relative risks (RR) ≥ 2, all designs have good predictive accuracy (AUC > 0.90), but when RR < 2, no method achieves perfect predictions. Each method exhibits a different bias profile, which changes with the effect size. OSIM2 can support methodological research. Results from simulation suggest method operating characteristics are far from nominal properties.
A Survey of Model Evaluation Approaches with a Tutorial on Hierarchical Bayesian Methods
ERIC Educational Resources Information Center
Shiffrin, Richard M.; Lee, Michael D.; Kim, Woojae; Wagenmakers, Eric-Jan
2008-01-01
This article reviews current methods for evaluating models in the cognitive sciences, including theoretically based approaches, such as Bayes factors and minimum description length measures; simulation approaches, including model mimicry evaluations; and practical approaches, such as validation and generalization measures. This article argues…
Canestrari, Niccolo; Chubar, Oleg; Reininger, Ruben
2014-09-01
X-ray beamlines in modern synchrotron radiation sources make extensive use of grazing-incidence reflective optics, in particular Kirkpatrick-Baez elliptical mirror systems. These systems can focus the incoming X-rays down to nanometer-scale spot sizes while maintaining relatively large acceptance apertures and high flux in the focused radiation spots. In low-emittance storage rings and in free-electron lasers such systems are used with partially or even nearly fully coherent X-ray beams and often target diffraction-limited resolution. Therefore, their accurate simulation and modeling has to be performed within the framework of wave optics. Here the implementation and benchmarking of a wave-optics method for the simulation of grazing-incidence mirrors, based on the local stationary-phase approximation or, in other words, the local propagation of the radiation electric field along geometrical rays, are described. The proposed method is CPU-efficient and fully compatible with the numerical methods of Fourier optics. It has been implemented in the Synchrotron Radiation Workshop (SRW) computer code and extensively tested against the geometrical ray-tracing code SHADOW. The test simulations have been performed for cases without and with diffraction at mirror apertures, including cases where the grazing-incidence mirrors can hardly be approximated by ideal lenses. Good agreement between the SRW and SHADOW simulation results is observed in the cases without diffraction. The differences between the simulation results obtained by the two codes in diffraction-dominated cases for illumination with fully or partially coherent radiation are analyzed and interpreted. The application of the new method to the simulation of wavefront propagation through a high-resolution X-ray microspectroscopy beamline at the National Synchrotron Light Source II (Brookhaven National Laboratory, USA) is demonstrated.
Front panel engineering with CAD simulation tool
NASA Astrophysics Data System (ADS)
Delacour, Jacques; Ungar, Serge; Mathieu, Gilles; Hasna, Guenther; Martinez, Pascal; Roche, Jean-Christophe
1999-04-01
The progress made recently in display technology covers many fields of application. The specification of radiance, colorimetry, and lighting efficiency creates new challenges for designers. Photometric design is limited by the capability of correctly predicting the result of a lighting system, to save on the costs and time taken to build multiple prototypes or breadboard benches. The second step of the research carried out by the company OPTIS is to propose an optimization method to be applied to the lighting system, developed in the software SPEOS. The main features required of the tool include the CAD interface, to enable fast and efficient transfer between mechanical and light design software; the source modelling; the light transfer model; and an optimization tool. The CAD interface is mainly a prototype of transfer, which is not the subject here. Photometric simulation is efficiently achieved by using measured source encoding and simulation by the Monte Carlo method. Today, the advantages and the limitations of the Monte Carlo method are well known. Noise reduction requires a long calculation time, which increases with the complexity of the display panel. A successful optimization is difficult to achieve, due to the long calculation time required for each optimization pass including a Monte Carlo simulation. The problem was initially defined as an engineering method of study. Experience shows that good understanding and mastery of the phenomenon of light transfer is limited by the complexity of non-sequential propagation. The engineer must call for the help of a simulation and optimization tool. The main point needed to perform an efficient optimization is a quick method for simulating light transfer. Much work has been done in this area and some interesting results can be observed. It must be said that the Monte Carlo method wastes time calculating results and information that are not required for the needs of the simulation, and low-efficiency transfer systems cost a lot of time. More generally, light transfer can be simulated efficiently when the integrated result is composed of elementary sub-results that include quickly calculated analytical intersections. Two axes of research thus appear: quick integration, and the quick calculation of geometric intersections. The first axis brings some general solutions also valid for multi-reflection systems. The second axis requires deep thinking on the intersection calculation. An interesting approach is the subdivision of space into voxels, an adapted method of 3D division of space according to the objects and their locations. Experimental software has been developed to provide a validation of the method. The gain is particularly high in complex systems. An important reduction in the calculation time has been achieved.
Design of a digital phantom population for myocardial perfusion SPECT imaging research.
Ghaly, Michael; Du, Yong; Fung, George S K; Tsui, Benjamin M W; Links, Jonathan M; Frey, Eric
2014-06-21
Digital phantoms and Monte Carlo (MC) simulations have become important tools for optimizing and evaluating instrumentation, acquisition and processing methods for myocardial perfusion SPECT (MPS). In this work, we designed a new adult digital phantom population and generated corresponding Tc-99m and Tl-201 projections for use in MPS research. The population is based on the three-dimensional XCAT phantom with organ parameters sampled from the Emory PET Torso Model Database. Phantoms included three variations each in body size, heart size, and subcutaneous adipose tissue level, for a total of 27 phantoms of each gender. The SimSET MC code and angular response functions were used to model interactions in the body and the collimator-detector system, respectively. We divided each phantom into seven organs, each simulated separately, allowing use of post-simulation summing to efficiently model uptake variations. Also, we adapted and used a criterion based on the relative Poisson effective count level to determine the required number of simulated photons for each simulated organ. This technique provided a quantitative estimate of the true noise in the simulated projection data, including residual MC simulation noise. Projections were generated in 1 keV wide energy windows from 48-184 keV assuming perfect energy resolution to permit study of the effects of window width, energy resolution, and crosstalk in the context of dual isotope MPS. We have developed a comprehensive method for efficiently simulating realistic projections for a realistic population of phantoms in the context of MPS imaging. The new phantom population and realistic database of simulated projections will be useful in performing mathematical and human observer studies to evaluate various acquisition and processing methods such as optimizing the energy window width, investigating the effect of energy resolution on image quality and evaluating compensation methods for degrading factors such as crosstalk in the context of single and dual isotope MPS.
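A minimal sketch of the post-simulation organ-summing step described above, assuming numpy; the function and array names are illustrative, not those of the SimSET-based pipeline. Each organ's noise-free projection is simulated once, then scaled by its uptake and summed before Poisson counting noise is applied.

```python
import numpy as np

def assemble_projection(organ_projections, uptakes, total_counts, rng=None):
    """Combine per-organ noise-free projections into one noisy MPS projection.

    organ_projections: dict organ -> 2D array of counts per unit activity
    uptakes:           dict organ -> relative activity concentration
    """
    rng = np.random.default_rng() if rng is None else rng
    noise_free = sum(uptakes[o] * p for o, p in organ_projections.items())
    # Scale to the desired count level, then model counting statistics.
    noise_free = noise_free * (total_counts / noise_free.sum())
    return rng.poisson(noise_free)
```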
spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains
NASA Astrophysics Data System (ADS)
Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo
2016-09-01
The paper presents the spatial Markov chains (spMC) R-package and a case study of subsoil simulation/prediction at a plain site in Northeastern Italy. spMC is a fairly complete collection of advanced methods for data inspection; in addition, it implements Markov chain models to estimate experimental transition probabilities of categorical lithological data. Simulation methods based on well-known prediction methods (such as indicator kriging and cokriging) are also implemented in the spMC package, and further, more advanced methods are available for simulation, e.g., path methods and Bayesian procedures that exploit the maximum entropy. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computation via OpenMP constructs. A final analysis of computational efficiency compares the simulation/prediction algorithms using different numbers of CPU cores, considering the example data set of the case study included in the package.
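spMC itself is an R package; purely as an illustration of the underlying idea, the following hedged Python sketch draws a 1D categorical lithology sequence from a continuous-lag Markov chain, with the transition probabilities over a lag h obtained as the matrix exponential of a transition rate matrix (numpy and scipy assumed; names illustrative).

```python
import numpy as np
from scipy.linalg import expm

def simulate_lithology(R, categories, n_steps, h, start=0, seed=0):
    """Draw a 1D categorical sequence from a continuous-lag Markov chain.

    R is a transition rate matrix (rows sum to zero), so the transition
    probabilities over a spatial lag h are P(h) = expm(R * h).
    """
    rng = np.random.default_rng(seed)
    P = expm(R * h)
    seq = [start]
    for _ in range(n_steps - 1):
        p = np.clip(P[seq[-1]], 0.0, None)   # guard against round-off
        seq.append(rng.choice(len(categories), p=p / p.sum()))
    return [categories[i] for i in seq]
```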
SimZones: An Organizational Innovation for Simulation Programs and Centers.
Roussin, Christopher J; Weinstock, Peter
2017-08-01
The complexity and volume of simulation-based learning programs have increased dramatically over the last decade, presenting several major challenges for those who lead and manage simulation programs and centers. The authors present five major issues affecting the organization of simulation programs: (1) supporting both single- and double-loop learning experiences; (2) managing the training of simulation teaching faculty; (3) optimizing the participant mix, including individuals, professional groups, teams, and other role-players, to ensure learning; (4) balancing in situ, node-based, and center-based simulation delivery; and (5) organizing simulation research and measuring value. They then introduce the SimZones innovation, a system of organization for simulation-based learning, and explain how it can alleviate the problems associated with these five issues. Simulations are divided into four zones (Zones 0-3). Zone 0 simulations include autofeedback exercises typically practiced by solitary learners, often using virtual simulation technology. Zone 1 simulations include hands-on instruction of foundational clinical skills. Zone 2 simulations include acute situational instruction, such as clinical mock codes. Zone 3 simulations involve authentic, native teams of participants and facilitate team and system development. The authors also discuss the translation of debriefing methods from Zone 3 simulations to real patient care settings (Zone 4), and they illustrate how the SimZones approach can enable the development of longitudinal learning systems in both teaching and nonteaching hospitals. The SimZones approach was initially developed in the context of the Boston Children's Hospital Simulator Program, which the authors use to illustrate this innovation in action.
VERA Core Simulator Methodology for PWR Cycle Depletion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kochunas, Brendan; Collins, Benjamin S; Jabaay, Daniel
2015-01-01
This paper describes the methodology developed and implemented in MPACT for performing high-fidelity pressurized water reactor (PWR) multi-cycle core physics calculations. MPACT is being developed primarily for application within the Consortium for the Advanced Simulation of Light Water Reactors (CASL) as one of the main components of the VERA Core Simulator, the others being COBRA-TF and ORIGEN. The methods summarized in this paper include a methodology for performing resonance self-shielding and computing macroscopic cross sections, 2-D/1-D transport, nuclide depletion, thermal-hydraulic feedback, and other supporting methods. These methods represent a minimal set needed to simulate high-fidelity models of a realistic nuclear reactor. Results demonstrating this are presented from the simulation of a realistic model of the first cycle of Watts Bar Unit 1. The simulation, which approximates the cycle operation, is observed to be within 50 ppm boron (ppmB) reactivity for all simulated points in the cycle and approximately 15 ppmB for a consistent statepoint. The verification and validation of the PWR cycle depletion capability in MPACT is the focus of two companion papers.
Numerical integration of detector response functions via Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.
2017-09-01
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼ 1000 × faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
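The quoted factor-of-~1000 speedup comes from folding trial source spectra through a response tabulated once, instead of rerunning the Monte Carlo. A hedged numpy sketch of that folding step (names illustrative, not the Chi-Nu analysis code):

```python
import numpy as np

def fold_spectrum(response, source_spectrum, rng=None):
    """Generate a simulated detector spectrum from a precomputed response.

    response[i, j]: mean counts in detected bin i per source particle in
    input bin j, tabulated once from Monte Carlo runs.
    """
    rng = np.random.default_rng() if rng is None else rng
    expected = response @ source_spectrum   # mean detected spectrum
    return rng.poisson(expected)            # add counting statistics
```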
The Formation of a Milky Way-sized Disk Galaxy. I. A Comparison of Numerical Methods
NASA Astrophysics Data System (ADS)
Zhu, Qirong; Li, Yuexing
2016-11-01
The long-standing challenge of creating a Milky Way- (MW-) like disk galaxy from cosmological simulations has motivated significant developments in both numerical methods and physical models. We investigate these two fundamental aspects in a new comparison project using a set of cosmological hydrodynamic simulations of an MW-sized galaxy. In this study, we focus on the comparison of two particle-based hydrodynamics methods: an improved smoothed particle hydrodynamics (SPH) code Gadget, and a Lagrangian Meshless Finite-Mass (MFM) code Gizmo. All the simulations in this paper use the same initial conditions and physical models, which include star formation, “energy-driven” outflows, metal-dependent cooling, stellar evolution, and metal enrichment. We find that both numerical schemes produce a late-type galaxy with extended gaseous and stellar disks. However, notable differences are present in a wide range of galaxy properties and their evolution, including star-formation history, gas content, disk structure, and kinematics. Compared to Gizmo, the Gadget simulation produced a larger fraction of cold, dense gas at high redshift which fuels rapid star formation and results in a higher stellar mass by 20% and a lower gas fraction by 10% at z = 0, and the resulting gas disk is smoother and more coherent in rotation due to damping of turbulent motion by the numerical viscosity in SPH, in contrast to the Gizmo simulation, which shows a more prominent spiral structure. Given its better convergence properties and lower computational cost, we argue that the MFM method is a promising alternative to SPH in cosmological hydrodynamic simulations.
Analog Design for Digital Deployment of a Serious Leadership Game
NASA Technical Reports Server (NTRS)
Maxwell, Nicholas; Lang, Tristan; Herman, Jeffrey L.; Phares, Richard
2012-01-01
This paper presents the design, development, and user testing of a leadership development simulation. The authors share lessons learned from using a design process for a board game to allow for quick and inexpensive revision cycles during the development of a serious leadership development game. The goal of this leadership simulation is to accelerate the development of leadership capacity in high-potential mid-level managers (GS-15 level) in a federal government agency. Simulation design included a mixed-method needs analysis, using both quantitative and qualitative approaches to determine organizational leadership needs. Eight design iterations were conducted, including three user testing phases. Three re-design iterations followed initial development, enabling game testing as part of comprehensive instructional events. Subsequent design, development and testing processes targeted digital application to a computer- and tablet-based environment. Recommendations include pros and cons of development and learner testing of an initial analog simulation prior to full digital simulation development.
NASA Astrophysics Data System (ADS)
Zapp, Kai; Orús, Román
2017-06-01
The simulation of lattice gauge theories with tensor network (TN) methods is becoming increasingly fruitful. The vision is that such methods will, eventually, be used to simulate theories in (3+1) dimensions in regimes difficult for other methods. So far, however, TN methods have mostly simulated lattice gauge theories in (1+1) dimensions. The aim of this paper is to explore the simulation of quantum electrodynamics (QED) on infinite lattices with TNs, i.e., fermionic matter fields coupled to a U(1) gauge field, directly in the thermodynamic limit. With this idea in mind we first consider a gauge-invariant infinite density matrix renormalization group simulation of the Schwinger model, i.e., QED in (1+1)d. After giving a precise description of the numerical method, we benchmark our simulations by computing the subtracted chiral condensate in the continuum, in good agreement with other approaches. Our simulations of the Schwinger model allow us to build intuition about how a simulation should proceed in (2+1) dimensions. Based on this, we propose a variational ansatz using infinite projected entangled pair states (PEPS) to describe the ground state of (2+1)d QED. The ansatz includes U(1) gauge symmetry at the level of the tensors, as well as fermionic (matter) and bosonic (gauge) degrees of freedom both at the physical and virtual levels. We argue that all the necessary ingredients for the simulation of (2+1)d QED are, a priori, already in place, paving the way for future upcoming results.
Lund, Bodil; Fors, Uno; Sejersen, Ronny; Sallnäs, Eva-Lotta; Rosén, Annika
2011-10-12
Yearly surveys among the undergraduate students in oral and maxillofacial surgery at Karolinska Institutet have conveyed a wish for increased clinical training, in particular in the surgical removal of mandibular third molars. Due to lack of resources, this kind of clinical supervision has so far not been possible to implement. One possible solution to this problem might be to introduce simulation into the curriculum. The purpose of this study was to investigate undergraduate students' perception of two different simulation methods for practicing clinical reasoning skills and technical skills in oral and maxillofacial surgery. Forty-seven students participating in the oral and maxillofacial surgery course at Karolinska Institutet during their final year were included. Three different oral surgery patient cases were created in a Virtual Patient (VP) Simulation system (Web-SP) and used for training clinical reasoning. A mandibular third molar surgery simulator with tactile feedback, providing hands-on training in bone removal and tooth sectioning, was also tested. A seminar combining these two simulators was held, and students' perception of the two simulation methods was assessed by means of a questionnaire. The response rate was 91.5% (43/47). The students were positive toward the VP cases, although they rated their possible improvement of clinical reasoning skills as moderate. The students' perception of improved technical skills after training in the mandibular third molar surgery simulator was rated high. The majority of the students agreed that both simulation techniques should be included in the curriculum and strongly agreed that it was a good idea to use the two simulators in concert. The importance of feedback from senior experts during simulator training was emphasised. The two tested simulation methods were well accepted, and most students agreed that the future curriculum would benefit from permanent inclusion of these exercises, especially when used in combination. The results also stress the importance of teaching technical skills and clinical reasoning in concert.
Curuksu, Jeremy; Zacharias, Martin
2009-03-14
Although molecular dynamics (MD) simulations have been applied frequently to study flexible molecules, the sampling of conformational states separated by barriers is limited due to currently possible simulation time scales. Replica-exchange (Rex)MD simulations that allow for exchanges between simulations performed at different temperatures (T-RexMD) can achieve improved conformational sampling. However, in the case of T-RexMD the computational demand grows rapidly with system size. A Hamiltonian RexMD method that specifically enhances coupled dihedral angle transitions has been developed. The method employs added biasing potentials as replica parameters that destabilize available dihedral substates and was applied to study coupled dihedral transitions in nucleic acid molecules. The biasing potentials can be either fixed at the beginning of the simulation or optimized during an equilibration phase. The method was extensively tested and compared to conventional MD simulations and T-RexMD simulations on an adenine dinucleotide system and on a DNA abasic site. The biasing potential RexMD method showed improved sampling of conformational substates compared to conventional MD simulations similar to T-RexMD simulations but at a fraction of the computational demand. It is well suited to study systematically the fine structure and dynamics of large nucleic acids under realistic conditions including explicit solvent and ions and can be easily extended to other types of molecules.
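A minimal sketch of the replica-swap test underlying such a Hamiltonian RexMD scheme, assuming all replicas share one temperature and differ only in their biasing potentials; function and variable names are illustrative.

```python
import numpy as np

def hrex_swap(beta, u_ii, u_jj, u_ij, u_ji, rng):
    """Metropolis test for swapping configurations between replicas i and j
    that share a temperature 1/beta but differ in biasing potential.

    u_ab is the potential of replica a's Hamiltonian evaluated on the
    configuration currently held by replica b.
    """
    delta = beta * ((u_ij + u_ji) - (u_ii + u_jj))
    if delta <= 0.0:
        return True                      # downhill swaps always accepted
    return rng.random() < np.exp(-delta)
```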
Framework for Architecture Trade Study Using MBSE and Performance Simulation
NASA Technical Reports Server (NTRS)
Ryan, Jessica; Sarkani, Shahram; Mazzuchim, Thomas
2012-01-01
Increasing complexity in modern systems, as well as cost and schedule constraints, requires a new paradigm of system engineering to fulfill stakeholder needs. Challenges facing efficient trade studies include poor tool interoperability, lack of simulation coordination (design parameters), and requirements flowdown. A recent trend toward Model Based System Engineering (MBSE) includes flexible architecture definition, program documentation, requirements traceability, and system engineering reuse. As a new domain, MBSE still lacks governing standards and commonly accepted frameworks. This paper proposes a framework for efficient architecture definition using MBSE in conjunction with domain-specific simulation to evaluate trade studies. A general framework is presented, followed by a specific example that includes a method for designing a trade study, defining candidate architectures, planning simulations to fulfill requirements, and finally performing a weighted decision analysis to optimize system objectives.
A Selection of Composites Simulation Practices at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Ratcliffe, James G.
2007-01-01
One of the major areas of study at NASA Langley Research Center is the development of technologies that support the use of advanced composite materials in aerospace applications. Amongst the supporting technologies are analysis tools used to simulate the behavior of these materials. This presentation will discuss a number of examples of analysis tools and simulation practices conducted at NASA Langley. The presentation will include examples of damage tolerance analyses for both interlaminar and intralaminar failure modes. Tools for modeling interlaminar failure modes include fracture mechanics and cohesive methods, whilst tools for modeling intralaminar failure involve the development of various progressive failure analyses. Other examples of analyses developed at NASA Langley include a thermo-mechanical model of an orthotropic material and the simulation of delamination growth in z-pin reinforced laminates.
The consideration of atmospheric stability within wind farm AEP calculations
NASA Astrophysics Data System (ADS)
Schmidt, Jonas; Chang, Chi-Yao; Dörenkämper, Martin; Salimi, Milad; Teichmann, Tim; Stoevesandt, Bernhard
2016-09-01
The annual energy production of an existing wind farm, including thermal stratification, is calculated with two different methods and compared to the average of three years of SCADA data. The first method is based on steady-state computational fluid dynamics simulations and the assumption of Reynolds similarity at hub height. The second method is a wake-modelling calculation, in which a new stratification transformation model was imposed on the Jensen and Ainslie wake models. The inflow states for both approaches were obtained from one year of WRF simulation data for the site. Although all models underestimate the mean wind speed and wake effects, the results from the phenomenological wake transformation are compatible with the high-fidelity simulation results.
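Of the two wake models named above, the Jensen (Park) model is simple enough to state in a few lines; a hedged Python sketch with illustrative parameter values:

```python
import numpy as np

def jensen_deficit(x, ct, r0, k=0.075):
    """Fractional axial velocity deficit a distance x downstream of a rotor
    of radius r0 in the Jensen (Park) model with linear wake expansion.

    ct: thrust coefficient; k: wake decay constant (~0.075 over land).
    """
    a = 0.5 * (1.0 - np.sqrt(1.0 - ct))   # axial induction factor
    return 2.0 * a * (r0 / (r0 + k * x)) ** 2

# Example: waked hub-height wind speed 500 m behind a 40 m radius rotor
u_waked = 10.0 * (1.0 - jensen_deficit(500.0, ct=0.8, r0=40.0))
```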
NASA Astrophysics Data System (ADS)
García, M. F.; Restrepo-Parra, E.; Riaño-Rojas, J. C.
2015-05-01
This work develops a model that mimics the growth of diatomic, polycrystalline thin films by artificially splitting the growth into deposition and relaxation processes, in two stages: (1) deposition is simulated with a grain-based stochastic method (grain orientations chosen randomly) using the kinetic Monte Carlo method in a non-standard version known as constant time stepping; the adsorption of adatoms is accepted or rejected depending on the neighborhood conditions, and the desorption process is not included in the simulation; and (2) diffusion is simulated with the Monte Carlo method combined with the Metropolis algorithm. The model was developed by accounting for parameters that determine the morphology of the film, such as the growth temperature, the interacting atomic species, the binding energy, and the material crystal structure. The modeled samples exhibited an FCC structure with grains oriented in the <111>, <200> and <220> family planes. The grain size and film roughness were analyzed. By construction, the grain size decreased, and the roughness increased, as the growth temperature increased. Although in the growth of real materials deposition and relaxation occur simultaneously, this method may nevertheless be valid for building realistic polycrystalline samples.
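A minimal sketch of the Metropolis acceptance rule used in the relaxation stage (2), assuming the neighborhood (bond-counting) energetics supply the energy change dE of a trial adatom hop; names are illustrative.

```python
import numpy as np

def accept_hop(dE, kT, rng):
    """Metropolis rule for a trial adatom hop in the relaxation stage:
    downhill moves are always accepted, uphill moves with probability
    exp(-dE/kT)."""
    return dE <= 0.0 or rng.random() < np.exp(-dE / kT)
```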
The Impact of Content Area Focus on the Effectiveness of a Web-Based Simulation
ERIC Educational Resources Information Center
Adcock, Amy B.; Duggan, Molly H.; Watson, Ginger S.; Belfore, Lee A.
2010-01-01
This paper describes an assessment of a web-based interview simulation designed to teach empathetic helping skills. The system includes an animated character acting as a client and responses designed to recreate a simulated role-play, a common assessment method used for teaching these skills. The purpose of this study was to determine whether…
USDA-ARS?s Scientific Manuscript database
Computer simulation is a useful tool for benchmarking the electrical and fuel energy consumption and water use in a fluid milk plant. In this study, a computer simulation model of the fluid milk process based on high temperature short time (HTST) pasteurization was extended to include models for pr...
Simulation modeling of forest landscape disturbances: Where do we go from here?
Ajith H. Perera; Brian R. Sturtevant; Lisa J. Buse
2015-01-01
It was nearly a quarter-century ago when Turner and Gardner (1991) drew attention to methods of quantifying landscape patterns and processes, including simulation modeling. The many authors who contributed to that seminal text collectively signaled the emergence of a new field: spatially explicit simulation modeling of broad-scale ecosystem dynamics. Of particular note...
NASA Astrophysics Data System (ADS)
Kreis, Karsten; Kremer, Kurt; Potestio, Raffaello; Tuckerman, Mark E.
2017-12-01
Path integral-based methodologies play a crucial role for the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, including path-integral molecular dynamics, which allows for the calculation of quantum statistical properties, and ring-polymer and centroid molecular dynamics, which allow the calculation of approximate quantum dynamical properties. To this end, we derive a new integration algorithm that also makes use of multiple time-stepping. The scheme is validated via adaptive classical-path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.
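The multiple time-stepping mentioned above can be illustrated in its generic r-RESPA form (this is the standard integrator pattern, not the authors' adaptive-resolution algorithm): fast forces, such as ring-polymer spring forces, are integrated on an inner loop, while slow physical forces act on the outer step.

```python
def respa_step(q, p, m, f_fast, f_slow, dt, n_inner):
    """One reversible multiple-time-step (r-RESPA) update: slow forces on
    the outer step dt, fast forces (e.g., ring-polymer springs) on an
    inner loop of n_inner smaller velocity-Verlet steps."""
    p = p + 0.5 * dt * f_slow(q)
    h = dt / n_inner
    for _ in range(n_inner):
        p = p + 0.5 * h * f_fast(q)
        q = q + h * p / m
        p = p + 0.5 * h * f_fast(q)
    p = p + 0.5 * dt * f_slow(q)
    return q, p
```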
Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang
2016-04-12
Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on the number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.
RuleMonkey: software for stochastic simulation of rule-based models
2010-01-01
Background: The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Results: Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. Conclusions: RuleMonkey enables the simulation of rule-based models for which the underlying reaction networks are large. It is typically faster than DYNSTOC for benchmark problems that we have examined. RuleMonkey is freely available as a stand-alone application http://public.tgen.org/rulemonkey. It is also available as a simulation engine within GetBonNie, a web-based environment for building, analyzing and sharing rule-based models. PMID:20673321
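For context, here is a sketch of the textbook direct-method stochastic simulation algorithm that RuleMonkey's rejection-free, network-free procedure generalizes; in this sketch the reaction network is explicit, whereas network-free methods evaluate rule applications on the fly (numpy assumed; names illustrative).

```python
import numpy as np

def gillespie_direct(x0, stoich, propensity, t_end, seed=0):
    """Gillespie's direct method for an explicit reaction network.

    x0:         initial copy numbers, shape (n_species,)
    stoich:     state-change vectors, shape (n_reactions, n_species)
    propensity: function mapping state x to per-reaction rates
    """
    rng = np.random.default_rng(seed)
    t, x, traj = 0.0, np.asarray(x0, dtype=float), []
    while t < t_end:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0.0:
            break                              # no reaction can fire
        t += rng.exponential(1.0 / a0)         # waiting time to next event
        j = rng.choice(len(a), p=a / a0)       # which reaction fires
        x = x + stoich[j]
        traj.append((t, x.copy()))
    return traj
```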
A Cartesian cut cell method for rarefied flow simulations around moving obstacles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dechristé, G., E-mail: Guillaume.Dechriste@math.u-bordeaux1.fr; CNRS, IMB, UMR 5251, F-33400 Talence; Mieussens, L., E-mail: Luc.Mieussens@math.u-bordeaux1.fr
2016-06-01
For accurate simulations of rarefied gas flows around moving obstacles, we propose a cut cell method on Cartesian grids: it allows exact conservation and accurate treatment of boundary conditions. Our approach is designed to treat Cartesian cells and various kinds of cut cells by the same algorithm, with no need to identify the specific shape of each cut cell. This makes the implementation quite simple, and allows a direct extension to 3D problems. Such simulations are also made possible by using an adaptive mesh refinement technique and a hybrid parallel implementation. This is illustrated by several test cases, including a 3D unsteady simulation of the Crookes radiometer.
A Comparative Study of High and Low Fidelity Fan Models for Turbofan Engine System Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1991-01-01
In this paper, a heterogeneous propulsion system simulation method is presented. The method is based on the formulation of a cycle model of a gas turbine engine. The model includes the nonlinear characteristics of the engine components via use of empirical data. The potential to simulate the entire engine operation on a computer without the aid of empirical data is demonstrated by numerically generating "performance maps" for a fan component using two flow models of varying fidelity. The suitability of the fan models was evaluated by comparing the computed performance with experimental data. A discussion of the potential benefits and/or difficulties in connecting simulation solutions of differing fidelity is given.
System and Method for Finite Element Simulation of Helicopter Turbulence
NASA Technical Reports Server (NTRS)
McFarland, R. E. (Inventor); Dulsenberg, Ken (Inventor)
1999-01-01
The present invention provides a turbulence model that has been developed for blade-element helicopter simulation. This model uses an innovative temporal and geometrical distribution algorithm that preserves the statistical characteristics of the turbulence spectra over the rotor disc, while providing velocity components in real time to each of five blade-element stations along each of four blades, for a total of twenty blade-element stations. The simulator system includes a software implementation of flight dynamics that adheres to the guidelines for turbulence set forth in military specifications. One of the features of the present simulator system is that it applies simulated turbulence to the rotor blades of the helicopter, rather than to its center of gravity. The simulator system accurately models the rotor penetration into a gust field. It includes time correlation between the front and rear of the main rotor, as well as between the side forces felt at the center of gravity and at the tail rotor. It also includes features for added realism, such as patchy turbulence and vertical gusts into which the rotor disc penetrates. These features are realized by a unique real-time implementation of the turbulence filters. The new simulator system uses two arrays, one on either side of the main rotor, to record the turbulence field and to produce time correlation from the front to the rear of the rotor disc. The use of Gaussian interpolation between the two arrays maintains the statistical properties of the turbulence across the rotor disc. The present simulator system and method may be used in future and existing real-time helicopter simulations with minimal increase in computational workload.
NASA Handbook for Models and Simulations: An Implementation Guide for NASA-STD-7009
NASA Technical Reports Server (NTRS)
Steele, Martin J.
2013-01-01
The purpose of this Handbook is to provide technical information, clarification, examples, processes, and techniques to help institute good modeling and simulation practices in the National Aeronautics and Space Administration (NASA). As a companion guide to NASA-STD-7009, Standard for Models and Simulations, this Handbook provides a broader scope of information than may be included in a Standard and promotes good practices in the production, use, and consumption of NASA modeling and simulation products. NASA-STD-7009 specifies what a modeling and simulation activity shall or should do (in the requirements) but does not prescribe how the requirements are to be met, which varies with the specific engineering discipline, or who is responsible for complying with the requirements, which depends on the size and type of project. A guidance document, which is not constrained by the requirements of a Standard, is better suited to address these additional aspects and provide necessary clarification. This Handbook stems from the Space Shuttle Columbia Accident Investigation (2003), which called for Agency-wide improvements in the "development, documentation, and operation of models and simulations" and subsequently elicited additional guidance from the NASA Office of the Chief Engineer to include "a standard method to assess the credibility of the models and simulations." General methods applicable across the broad spectrum of model and simulation (M&S) disciplines were sought to help guide the modeling and simulation processes within NASA and to provide for consistent reporting of M&S activities and analysis results. From this, the standardized process for the M&S activity was developed. The major contents of this Handbook are the implementation details of the general M&S requirements of NASA-STD-7009, including explanations, examples, and suggestions for improving the credibility assessment of an M&S-based analysis.
NASA Astrophysics Data System (ADS)
Hadgu, T.; Kalinina, E.; Klise, K. A.; Wang, Y.
2016-12-01
Disposal of high-level radioactive waste in a deep geological repository in crystalline host rock is one of the potential options for long-term isolation. Characterization of the natural barrier system is an important component of the disposal option. In this study we present numerical modeling of flow and transport in fractured crystalline rock using an updated fracture continuum model (FCM). The FCM is a stochastic method that maps the permeability of discrete fractures onto a regular grid. The original method by McKenna and Reeves (2005) has been updated to provide capabilities that enhance representation of fractured rock. As reported in Hadgu et al. (2015), the method was first modified to include fully three-dimensional representations of anisotropic permeability, multiple independent fracture sets, arbitrary fracture dips and orientations, and spatial correlation. More recently the FCM has been extended to include three different methods. (1) The Sequential Gaussian Simulation (SGSIM) method uses spatial correlation to generate fractures and define their properties for the FCM. (2) The ELLIPSIM method randomly generates a specified number of ellipses with properties defined by probability distributions; each ellipse represents a single fracture. (3) Direct conversion of discrete fracture network (DFN) output. Test simulations of flow and transport were conducted using the ELLIPSIM and direct DFN conversion methods. The simulations used a 1 km x 1 km x 1 km model domain and a structured grid with blocks of size 10 m x 10 m x 10 m, resulting in a total of 10^6 grid blocks. Distributions of fracture parameters were used to generate a selected number of realizations. For each realization, the different methods were applied to generate representative permeability fields. The PFLOTRAN (Hammond et al., 2014) code was used to simulate flow and transport in the domain. Simulation results and analysis are presented. The results indicate that the FCM approach is a viable method for modeling fractured crystalline rocks, and a computationally efficient way to generate realistic representations of complex fracture systems. This approach is of interest for nuclear waste disposal models applied over large domains.
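A hedged 2D toy version of the ELLIPSIM idea described above: randomly generated ellipses standing in for fractures are stamped onto a regular permeability grid (numpy assumed; parameter ranges are illustrative, and the actual FCM assigns anisotropic permeability tensors rather than a scalar value).

```python
import numpy as np

def stamp_fractures(n_frac, n, cell, k_matrix, k_frac, seed=0):
    """Stamp randomly oriented elliptical 'fractures' onto an n x n grid (2D).

    Cells whose centers fall inside any ellipse take the fracture
    permeability; all others keep the matrix value.
    """
    rng = np.random.default_rng(seed)
    xc = (np.arange(n) + 0.5) * cell
    X, Y = np.meshgrid(xc, xc)
    K = np.full((n, n), k_matrix)
    for _ in range(n_frac):
        cx, cy = rng.uniform(0.0, n * cell, size=2)   # fracture center
        a = rng.uniform(50.0, 200.0)                  # semi-major axis [m]
        b = 0.05 * a                                  # thin ellipse
        theta = rng.uniform(0.0, np.pi)               # orientation
        u = (X - cx) * np.cos(theta) + (Y - cy) * np.sin(theta)
        v = -(X - cx) * np.sin(theta) + (Y - cy) * np.cos(theta)
        K[(u / a) ** 2 + (v / b) ** 2 <= 1.0] = k_frac
    return K
```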
NASA Astrophysics Data System (ADS)
He, Yue-Jing; Hung, Wei-Chih; Syu, Cheng-Jyun
2017-12-01
The finite-element method (FEM) and eigenmode expansion method (EEM) were adopted to analyze the guided modes and spectra of phase-shift fiber Bragg gratings at five phase-shift values (zero, 1/4π, 1/2π, 3/4π, and π). Previous studies of optical fiber gratings have relied on conventional coupled-mode theory, which involves abstruse physics and complex computational processes and is therefore challenging for users. Here, a numerical simulation method was coupled with a simple and rigorous design procedure to help beginners and users overcome the difficulty of entering the field, and graphical simulation results are presented. To reduce the difference between the simulated context and the actual context, a perfectly matched layer and a perfectly reflecting boundary were added to the FEM and the EEM. When the FEM was used for grid cutting, the object meshing method and the boundary meshing method proposed in this study were used to effectively enhance computational accuracy and substantially reduce the time required for simulation. In summary, users can use the simulation results of this study to easily and rapidly design optical fiber communication systems and optical sensors with the desired spectral characteristics.
[COMPARATIVE CHARACTERISTIC OF VARIOUS METHODS OF SIMULATION OF BILIARY PERITONITIS IN EXPERIMENT].
Nichitaylo, M Yu; Furmanov, Yu O; Gutsulyak, A I; Savytska, I M; Zagriychuk, M S; Goman, A V
2016-02-01
In an experiment on rabbits, a comparative analysis of various methods of simulating biliary peritonitis was conducted. In 6 animals biliary peritonitis was simulated by perforation of the gallbladder; local serous-fibrinous peritonitis occurred in 50% of them. In 7 animals biliary peritonitis was simulated by intraabdominal injection of sterile medical bile in a volume of 5-40 ml; diffuse peritonitis with exudate and fibrin layering did not develop. The most effective method proved to be intraabdominal injection of bile together with an E. coli culture at 0.33 McFarland units (1.0 x 10^8 CFU/ml) per 1 kg of animal body mass. Diffuse biliary peritonitis occurred in all 23 animals, including serous-fibrinous peritonitis in 17 (76%) and purulent-fibrinous peritonitis in 6 (24%).
CAE "FOCUS" for modelling and simulating electron optics systems: development and application
NASA Astrophysics Data System (ADS)
Trubitsyn, Andrey; Grachev, Evgeny; Gurov, Victor; Bochkov, Ilya; Bochkov, Victor
2017-02-01
Electron optics is the theoretical basis of scientific instrument engineering, and mathematical simulation of the underlying processes is the basis of contemporary design of complicated electron-optics devices. Problems of numerical mathematical simulation are effectively solved by means of CAE systems. CAE "FOCUS", developed by the authors, includes fast and accurate methods: the boundary element method (BEM) for electric field calculation, the Runge-Kutta-Fehlberg method for charged-particle trajectory computation with control of calculation accuracy, and original methods for finding the conditions of angular and time-of-flight focusing. CAE "FOCUS" is organized as a collection of modules, each of which solves an independent (sub)task. A range of physical and analytical devices, in particular a high-power microfocus X-ray tube, has been developed using this software.
Modeling and Simulation of Nanoindentation
NASA Astrophysics Data System (ADS)
Huang, Sixie; Zhou, Caizhi
2017-11-01
Nanoindentation is a hardness test method applied to small volumes of material which can provide some unique effects and spark many related research activities. To fully understand the phenomena observed during nanoindentation tests, modeling and simulation methods have been developed to predict the mechanical response of materials during nanoindentation. However, challenges remain with those computational approaches, because of their length scale, predictive capability, and accuracy. This article reviews recent progress and challenges for modeling and simulation of nanoindentation, including an overview of molecular dynamics, the quasicontinuum method, discrete dislocation dynamics, and the crystal plasticity finite element method, and discusses how to integrate multiscale modeling approaches seamlessly with experimental studies to understand the length-scale effects and microstructure evolution during nanoindentation tests, creating a unique opportunity to establish new calibration procedures for the nanoindentation technique.
NASA Astrophysics Data System (ADS)
Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei
2016-03-01
In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.
NASA Technical Reports Server (NTRS)
Stern, Boris E.; Svensson, Roland; Begelman, Mitchell C.; Sikora, Marek
1995-01-01
High-energy radiation processes in compact cosmic objects are often expected to have a strongly non-linear behavior. Such behavior is shown, for example, by electron-positron pair cascades and the time evolution of relativistic proton distributions in dense radiation fields. Three independent techniques have been developed to simulate these non-linear problems: the kinetic equation approach; the phase-space density (PSD) Monte Carlo method; and the large-particle (LP) Monte Carlo method. In this paper, we present the latest version of the LP method and compare it with the other methods. The efficiency of the method in treating geometrically complex problems is illustrated by showing results of simulations of 1D, 2D and 3D systems. The method is shown to be powerful enough to treat non-spherical geometries, including such effects as bulk motion of the background plasma, reflection of radiation from cold matter, and anisotropic distributions of radiating particles. It can therefore be applied to simulate high-energy processes in such astrophysical systems as accretion discs with coronae, relativistic jets, pulsar magnetospheres and gamma-ray bursts.
Mspire-Simulator: LC-MS shotgun proteomic simulator for creating realistic gold standard data.
Noyce, Andrew B; Smith, Rob; Dalgleish, James; Taylor, Ryan M; Erb, K C; Okuda, Nozomu; Prince, John T
2013-12-06
The most important step in any quantitative proteomic pipeline is feature detection (aka peak picking). However, generating quality hand-annotated data sets to validate the algorithms, especially for lower abundance peaks, is nearly impossible. An alternative for creating gold standard data is to simulate it with features closely mimicking real data. We present Mspire-Simulator, a free, open-source shotgun proteomic simulator that goes beyond previous simulation attempts by generating LC-MS features with realistic m/z and intensity variance along with other noise components. It also includes machine-learned models for retention time and peak intensity prediction and a genetic algorithm to custom fit model parameters for experimental data sets. We show that these methods are applicable to data from three different mass spectrometers, including two fundamentally different types, and show visually and analytically that simulated peaks are nearly indistinguishable from actual data. Researchers can use simulated data to rigorously test quantitation software, and proteomic researchers may benefit from overlaying simulated data on actual data sets.
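A toy sketch of the core idea, assuming numpy: one LC-MS feature rendered as an isotopic envelope with a Gaussian elution profile, lognormal intensity noise, and m/z jitter. All parameter values are illustrative, not Mspire-Simulator's machine-learned models.

```python
import numpy as np

def simulate_feature(mz0, rt0, rt_sigma, iso_heights, scan_times, seed=0):
    """Emit centroided (rt, m/z, intensity) points for one charge-1 feature:
    an isotopic envelope (~1.00335 Da spacing) with a Gaussian elution
    profile, lognormal intensity noise, and small m/z jitter."""
    rng = np.random.default_rng(seed)
    points = []
    for rt in scan_times:
        elution = np.exp(-0.5 * ((rt - rt0) / rt_sigma) ** 2)
        for i, h in enumerate(iso_heights):
            intensity = h * elution * rng.lognormal(0.0, 0.1)
            mz = mz0 + i * 1.00335 + rng.normal(0.0, 3e-4)
            points.append((rt, mz, intensity))
    return points
```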
Cart3D Simulations for the First AIAA Sonic Boom Prediction Workshop
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2014-01-01
Simulation results for the First AIAA Sonic Boom Prediction Workshop (LBW1) are presented using an inviscid, embedded-boundary Cartesian mesh method. The method employs adjoint-based error estimation and adaptive meshing to automatically determine resolution requirements of the computational domain. Results are presented for both mandatory and optional test cases. These include an axisymmetric body of revolution, a 69° delta wing model and a complete model of the Lockheed N+2 supersonic tri-jet with V-tail and flow-through nacelles. In addition to formal mesh refinement studies and examination of the adjoint-based error estimates, mesh convergence is assessed by presenting simulation results for meshes at several resolutions which are comparable in size to the unstructured grids distributed by the workshop organizers. The data provided include both the pressure signals required by the workshop and information on code performance in both memory and processing time. Various enhanced techniques offering improved simulation efficiency will be demonstrated and discussed.
Schwenke, Michael; Georgii, Joachim; Preusser, Tobias
2017-07-01
Focused ultrasound (FUS) is rapidly gaining clinical acceptance for several target tissues in the human body. Yet, treating liver targets is not clinically applied due to a high complexity of the procedure (noninvasiveness, target motion, complex anatomy, blood cooling effects, shielding by ribs, and limited image-based monitoring). To reduce the complexity, numerical FUS simulations can be utilized for both treatment planning and execution. These use-cases demand highly accurate and computationally efficient simulations. We propose a numerical method for the simulation of abdominal FUS treatments during respiratory motion of the organs and target. Especially, a novel approach is proposed to simulate the heating during motion by solving Pennes' bioheat equation in a computational reference space, i.e., the equation is mathematically transformed to the reference. The approach allows for motion discontinuities, e.g., the sliding of the liver along the abdominal wall. Implementing the solver completely on the graphics processing unit and combining it with an atlas-based ultrasound simulation approach yields a simulation performance faster than real time (less than 50 s of computing time for 100 s of treatment time) on a modern off-the-shelf laptop. The simulation method is incorporated into a treatment planning demonstration application that allows to simulate real patient cases including respiratory motion. The high performance of the presented simulation method opens the door to clinical applications. The methods bear the potential to enable the application of FUS for moving organs.
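For orientation, a minimal explicit finite-difference step of Pennes' bioheat equation on a fixed 3D grid (numpy; periodic boundaries via np.roll, and the tissue constants are illustrative assumptions). The paper's actual contribution, solving this equation in a motion-compensated reference space on the GPU, is beyond this sketch.

```python
import numpy as np

def pennes_step(T, q_us, dt, dx, rho=1050.0, c=3600.0, k=0.5,
                w_b=2.0, c_b=3620.0, T_a=37.0):
    """One explicit finite-difference step of Pennes' bioheat equation,
    rho*c*dT/dt = k*lap(T) - w_b*c_b*(T - T_a) + q_us, on a 3D grid with
    periodic boundaries (np.roll). w_b: blood perfusion [kg/m^3/s];
    q_us: absorbed ultrasound power density [W/m^3]."""
    lap = (-6.0 * T
           + np.roll(T, 1, 0) + np.roll(T, -1, 0)
           + np.roll(T, 1, 1) + np.roll(T, -1, 1)
           + np.roll(T, 1, 2) + np.roll(T, -1, 2)) / dx**2
    return T + dt * (k * lap - w_b * c_b * (T - T_a) + q_us) / (rho * c)
```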
Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.
Salis, Howard; Kaznessis, Yiannis
2005-02-01
The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
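The fast-subset approximation named above is a chemical Langevin equation; a hedged sketch of one Euler-Maruyama step for that subset follows (numpy assumed; the slow-reaction bookkeeping of the Next Reaction variant is not shown).

```python
import numpy as np

def cle_step(x, stoich, propensity, dt, rng):
    """One Euler-Maruyama step of the chemical Langevin equation for the
    fast-reaction subset: dx = S^T a dt + S^T sqrt(a) dW. No positivity
    safeguard is applied in this sketch."""
    a = propensity(x)                                 # fast propensities
    dW = rng.normal(0.0, np.sqrt(dt), size=a.shape)   # Wiener increments
    return x + stoich.T @ (a * dt + np.sqrt(a) * dW)
```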
Hydrogeologic unit flow characterization using transition probability geostatistics.
Jones, Norman L; Walker, Justin R; Carle, Steven F
2005-01-01
This paper describes a technique for applying the transition probability geostatistics method for stochastic simulation to a MODFLOW model. Transition probability geostatistics has some advantages over traditional indicator kriging methods including a simpler and more intuitive framework for interpreting geologic relationships and the ability to simulate juxtapositional tendencies such as fining upward sequences. The indicator arrays generated by the transition probability simulation are converted to layer elevation and thickness arrays for use with the new Hydrogeologic Unit Flow package in MODFLOW 2000. This makes it possible to preserve complex heterogeneity while using reasonably sized grids and/or grids with nonuniform cell thicknesses.
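A minimal sketch of the central object in this approach, assuming a hypothetical three-facies system: a Markov-chain transition-rate matrix R whose matrix exponential gives the facies transition probabilities at any lag, which is what makes juxtapositional tendencies such as fining upward straightforward to encode.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical vertical transition-rate matrix for three hydrofacies.
# Rows sum to zero; each diagonal entry is -1/(mean facies thickness),
# and off-diagonal entries encode juxtapositional tendencies.
R = np.array([[-0.50,  0.40,  0.10],   # sand, mean thickness 2 m
              [ 0.30, -1.00,  0.70],   # silt, mean thickness 1 m
              [ 0.05,  0.15, -0.20]])  # clay, mean thickness 5 m

def transition_probability(lag_m):
    """T(h) = expm(R*h); entry (i, j) is P(facies j at z+h | facies i at z)."""
    return expm(R * lag_m)

print(transition_probability(0.5))  # transition probabilities at a 0.5 m lag
```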
Systems Biology in Immunology – A Computational Modeling Perspective
Germain, Ronald N.; Meier-Schellersheim, Martin; Nita-Lazar, Aleksandra; Fraser, Iain D. C.
2011-01-01
Systems biology is an emerging discipline that combines high-content, multiplexed measurements with informatic and computational modeling methods to better understand biological function at various scales. Here we present a detailed review of the methods used to create computational models and conduct simulations of immune function. We provide descriptions of the key data gathering techniques employed to generate the quantitative and qualitative data required for such modeling and simulation and summarize the progress to date in applying these tools and techniques to questions of immunological interest, including infectious disease. We include comments on what insights modeling can provide that complement information obtained from the more familiar experimental discovery methods used by most investigators and why quantitative methods are needed to eventually produce a better understanding of immune system operation in health and disease. PMID:21219182
Methods of Helium Injection and Removal for Heat Transfer Augmentation
NASA Technical Reports Server (NTRS)
Haight, Harlan; Kegley, Jeff; Bourdreaux, Meghan
2008-01-01
While augmentation of heat transfer from a test article by helium gas at low pressures is well known, the method is rarely employed during space simulation testing because the test objectives usually involve simulation of an orbital thermal environment. Test objectives of cryogenic optical testing at Marshall Space Flight Center's X-ray Cryogenic Facility (XRCF) have typically not been constrained by orbital environment parameters. As a result, several methods of helium injection have been utilized at the XRCF since 1999 to decrease thermal transition times. A brief synopsis of these injection (and removal) methods will be presented.
Methods of Helium Injection and Removal for Heat Transfer Augmentation
NASA Technical Reports Server (NTRS)
Kegley, Jeffrey
2008-01-01
While augmentation of heat transfer from a test article by helium gas at low pressures is well known, the method is rarely employed during space simulation testing because the test objectives are to simulate an orbital thermal environment. Test objectives of cryogenic optical testing at Marshall Space Flight Center's X-ray Calibration Facility (XRCF) have typically not been constrained by orbital environment parameters. As a result, several methods of helium injection have been utilized at the XRCF since 1999 to decrease thermal transition times. A brief synopsis of these injection (and removal) methods will be presented.
NASA Astrophysics Data System (ADS)
Aarseth, S. J.
2008-05-01
We describe efforts over the last six years to implement regularization methods suitable for studying one or more interacting black holes by direct N-body simulations. Three different methods have been adapted to large-N systems: (i) Time-Transformed Leapfrog, (ii) Wheel-Spoke, and (iii) Algorithmic Regularization. These methods have been tried out with some success on GRAPE-type computers. Special emphasis has also been devoted to including post-Newtonian terms, with application to moderately massive black holes in stellar clusters. Some examples of simulations leading to coalescence by gravitational radiation will be presented to illustrate the practical usefulness of such methods.
Transonic Flow Computations Using Nonlinear Potential Methods
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Kwak, Dochan (Technical Monitor)
2000-01-01
This presentation describes the state of transonic flow simulation using nonlinear potential methods for external aerodynamic applications. The presentation begins with a review of the various potential equation forms (with emphasis on the full potential equation) and includes a discussion of pertinent mathematical characteristics and all derivation assumptions. Impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, is discussed. Key characteristics of all numerical algorithm types used for solving nonlinear potential equations, including steady, unsteady, space marching, and design methods, are described. Both spatial discretization and iteration scheme characteristics are examined. Numerical results for various aerodynamic applications are included throughout the presentation to highlight key discussion points. The presentation ends with concluding remarks and recommendations for future work. Overall, nonlinear potential solvers are efficient, highly developed and routinely used in the aerodynamic design environment for cruise conditions. Published by Elsevier Science Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lee, Taewoong; Lee, Hyounggun; Lee, Wonho
2015-10-01
This study evaluated the use of Compton imaging technology to monitor prompt gamma rays emitted by 10B in boron neutron capture therapy (BNCT) applied to a computerized human phantom. The Monte Carlo method, including particle-tracking techniques, was used for simulation. The distribution of prompt gamma rays emitted by the phantom during irradiation with neutron beams is closely associated with the distribution of the boron in the phantom. The maximum likelihood expectation maximization (MLEM) method was applied to the information obtained from the detected prompt gamma rays to reconstruct the distribution of the tumor, including the boron uptake regions (BURs). The reconstructed Compton images of the prompt gamma rays were combined with the cross-sectional images of the human phantom. Quantitative analysis of the intensity curves showed that all combined images matched the predetermined conditions of the simulation. The tumors including the BURs were distinguishable if they were more than 2 cm apart.
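A generic sketch of the MLEM update used in such reconstructions (illustrative only, not the authors' implementation; in this application the system matrix would come from the particle-tracking simulation of the Compton camera):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM for emission tomography: y ~ Poisson(A @ lam), where
    A[i, j] is the probability that a photon emitted in voxel j is
    recorded in detector bin i."""
    lam = np.ones(A.shape[1])                # flat initial image
    sens = np.maximum(A.sum(axis=0), 1e-12)  # per-voxel sensitivity
    for _ in range(n_iter):
        proj = A @ lam                       # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        lam *= (A.T @ ratio) / sens          # multiplicative update
    return lam
```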
NASA Astrophysics Data System (ADS)
Rognlien, Thomas; Rensink, Marvin
2016-10-01
Transport simulations for the edge plasma of tokamaks and other magnetic fusion devices require the coupling of plasma and recycled or injected neutral gas. Various neutral models are used for this purpose, e.g., atomic fluid models, Monte Carlo particle models, transition/escape probability methods, and semi-analytic models. While the Monte Carlo method is generally viewed as the most accurate, it is time consuming, and it becomes even more demanding for simulations of devices with the high densities and sizes typical of fusion power plants, because the neutral collisional mean free path becomes very small. Here we examine the behavior of an extended fluid neutral model for hydrogen that includes both atoms and molecules and easily incorporates nonlinear neutral-neutral collision effects. In addition to the strong charge exchange between hydrogen atoms and ions, elastic scattering is included among all species. Comparisons are made with the DEGAS 2 Monte Carlo code. Work performed for U.S. DoE by LLNL under Contract DE-AC52-07NA27344.
Turbulence simulation mechanization for Space Shuttle Orbiter dynamics and control studies
NASA Technical Reports Server (NTRS)
Tatom, F. B.; King, R. L.
1977-01-01
The current version of the NASA turbulent simulation model in the form of a digital computer program, TBMOD, is described. The logic of the program is discussed and all inputs and outputs are defined. An alternate method of shear simulation suitable for incorporation into the model is presented. The simulation is based on a von Karman spectrum and the assumption of isotropy. The resulting spectral density functions for the shear model are included.
Nonlinear vs. linear biasing in Trp-cage folding simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spiwok, Vojtěch, E-mail: spiwokv@vscht.cz; Oborský, Pavel; Králová, Blanka
2015-03-21
Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the mini-protein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
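For illustration, the two classes of low-dimensional embeddings compared in the study can be obtained as sketched below (the trajectory array is a stand-in, and coupling the resulting collective variables back into metadynamics requires the MD engine):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# Stand-in for an (n_frames, 3*n_atoms) array of aligned Cartesian
# coordinates taken from an unbiased trajectory.
traj = np.random.default_rng(0).normal(size=(1000, 60))

linear_cvs = PCA(n_components=2).fit_transform(traj)        # linear embedding
nonlinear_cvs = Isomap(n_components=2,
                       n_neighbors=20).fit_transform(traj)  # nonlinear embedding
```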
NASA Astrophysics Data System (ADS)
Lokotko, A. V.
2016-10-01
Modeling the massflow-traction characteristics of the power unit (PU) is of interest both in the study of the aerodynamic characteristics (ADC) of aircraft models with full dynamic similarity and in the study of PU interference effects. These studies require the use of a number of methods. These include: 1) a method for delivering high-pressure gas to the model engine jets on the sensitive part of the aerodynamic balance; 2) a method for estimating the accuracy and reliability of the measurement of thrust generated by the jet device; 3) a method for implementing the PU simulator while modeling the external contours of the nacelle and the conditions at the inlet and outlet; 4) a method for determining the thrust of the PU simulator; 5) a method for determining the interference effect of the operating power unit on the ADC of the model; and 6) a method for producing the hot jets of jet engines. The paper examines the methodology implemented at ITAM as applied to testing in the T-313 supersonic wind tunnel.
Perspectives on the simulation of protein–surface interactions using empirical force field methods
Latour, Robert A.
2014-01-01
Protein–surface interactions are of fundamental importance for a broad range of applications in the fields of biomaterials and biotechnology. Present experimental methods are limited in their ability to provide a comprehensive depiction of these interactions at the atomistic level. In contrast, empirical force field based simulation methods inherently provide the ability to predict and visualize protein–surface interactions with full atomistic detail. These methods, however, must be carefully developed, validated, and properly applied before confidence can be placed in results from the simulations. In this perspectives paper, I provide an overview of the critical aspects that I consider to be of greatest importance for the development of these methods, with a focus on the research that my combined experimental and molecular simulation groups have conducted over the past decade to address these issues. These critical issues include the tuning of interfacial force field parameters to accurately represent the thermodynamics of interfacial behavior, adequate sampling of these types of complex molecular systems to generate results that can be compared with experimental data, and the generation of experimental data that can be used for simulation results evaluation and validation. PMID:25028242
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
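The population-based algorithms discussed are parallel across particles or chains, which is what maps naturally onto GPU threads; a CPU-side sketch of the same pattern, vectorizing the particle dimension of one sequential Monte Carlo step for a hypothetical Gaussian target, is:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                                       # one "thread" per particle
particles = rng.normal(size=N)
proposals = particles + 0.5 * rng.normal(size=N)  # all particles move at once
log_w = -0.5 * proposals**2                       # log-weights under the target
w = np.exp(log_w - log_w.max())                   # numerically stabilized weights
idx = rng.choice(N, size=N, p=w / w.sum())        # multinomial resampling
particles = proposals[idx]
```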
A new method for qualitative simulation of water resources systems: 2. Applications
NASA Astrophysics Data System (ADS)
Antunes, M. P.; Seixas, M. J.; Camara, A. S.; Pinheiro, M.
1987-11-01
SLIN (Simulação Linguistica) is a new method for qualitative dynamic simulation. As was presented previously (Camara et al., this issue), SLIN relies upon a categorical representation of variables which are manipulated by logical rules. Two applications to water resources systems are included to illustrate SLIN's potential usefulness: the environmental impact evaluation of a hydropower plant and the assessment of oil dispersion in the sea after a tanker wreck.
Digital multishaker modal testing
NASA Technical Reports Server (NTRS)
Blair, M.; Craig, R. R., Jr.
1983-01-01
A review of several modal testing techniques is made, along with brief discussions of their advantages and limitations. A new technique is presented which overcomes many of the previous limitations. Several simulated experiments are included to verify the validity and accuracy of the new method. Conclusions are drawn from the simulation studies and recommendations for further work are presented. The complete computer code configured for the simulation study is presented.
Magee, Maclain J; Farkouh-Karoleski, Christiana; Rosen, Tove S
2018-04-01
Simulation training is an effective method to teach neonatal resuscitation (NR), yet many pediatrics residents do not feel comfortable with NR. Rapid cycle deliberate practice (RCDP) allows the facilitator to provide debriefing throughout the session. In RCDP, participants work through the scenario multiple times, eventually reaching more complex tasks once basic elements have been mastered. We determined whether pediatrics residents have improved observed abilities, confidence level, and recall in NR after receiving RCDP training compared to the traditional simulation debriefing method. Thirty-eight pediatrics interns from a large academic training program were randomized to a teaching simulation session using RCDP or simulation debriefing methods. The primary outcome was the intern's cumulative score on the initial Megacode Assessment Form (MCAF). Secondary outcome measures included surveys of confidence level, recall MCAF scores at 4 months, and time to perform critical interventions. Thirty-four interns were included in the analysis. Interns in the RCDP group had higher initial MCAF scores (89% versus 84%, P < .026), initiated positive pressure ventilation within 1 minute (100% versus 71%, P < .05), and administered epinephrine earlier (152 s versus 180 s, P < .039). Recall MCAF scores were not different between the 2 groups. Immediately following RCDP, interns had improved observed abilities and decreased time to perform critical interventions in NR simulation as compared to those trained with the simulation debriefing method. RCDP was not superior in improving confidence level or retention.
Apparatus and method for interaction phenomena with world modules in data-flow-based simulation
Xavier, Patrick G [Albuquerque, NM]; Gottlieb, Eric J [Corrales, NM]; McDonald, Michael J [Albuquerque, NM]; Oppel, III, Fred J.
2006-08-01
A method and apparatus accommodate interaction phenomena in a data-flow-based simulation of a system of elements, by establishing meta-modules to simulate system elements and by establishing world modules associated with interaction phenomena. World modules are associated with proxy modules from a group of meta-modules associated with one of the interaction phenomena. The world modules include a communication world, a sensor world, a mobility world, and a contact world. World modules can be further associated with other world modules if necessary. Interaction phenomena are simulated in corresponding world modules by accessing member functions in the associated group of proxy modules. Proxy modules can be dynamically allocated at a desired point in the simulation to accommodate the addition of elements in the system of elements, such as a system of robots, a system of communication terminals, or a system of vehicles, being simulated.
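A minimal sketch of the pattern described, with hypothetical class names (the patent's actual interfaces are not reproduced): a world module simulates one interaction phenomenon by calling member functions of its registered proxy modules, and proxies can be registered dynamically as elements are added.

```python
class ProxyModule:
    """Stand-in proxy exposed by a meta-module (e.g., one robot)."""
    def __init__(self, name, position):
        self.name, self.position = name, position
    def on_contact(self, other):
        print(f"{self.name} contacts {other.name}")

class ContactWorld:
    """World module for the 'contact' interaction phenomenon."""
    def __init__(self, radius):
        self.radius, self.proxies = radius, []
    def register(self, proxy):
        self.proxies.append(proxy)   # dynamic allocation mid-simulation
    def step(self):
        for a in self.proxies:       # simulate the phenomenon by accessing
            for b in self.proxies:   # member functions of the proxy group
                if a is not b and abs(a.position - b.position) < self.radius:
                    a.on_contact(b)

world = ContactWorld(radius=1.0)
world.register(ProxyModule("robot-1", 0.0))
world.register(ProxyModule("robot-2", 0.5))
world.step()
```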
Wayne—A Simulator for HST WFC3 IR Grism Spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varley, R.; Tsiaras, A.; Karpouzas, K., E-mail: r.varley@ucl.ac.uk
Wayne is an algorithm that simulates Hubble Space Telescope Wide Field Camera 3 (WFC3) grism spectroscopic frames, including sources of noise and systematics. It can simulate both staring and spatial scan modes, and observations such as the transit and the eclipse of an exoplanet. Unlike many other instrument simulators, the focus of Wayne is on creating frames with realistic systematics in order to test the effectiveness of different data analysis methods in a variety of different scenarios. This approach is critical for method validation and optimizing observing strategies. In this paper we describe the implementation of Wayne for WFC3 in the near-infrared channel with the G102 and G141 grisms. We compare the simulations to real data obtained for the exoplanet HD 209458b, to verify the accuracy of the simulation. The software is now available as open source at https://github.com/ucl-exoplanets/wayne.
Simulator certification methods and the vertical motion simulator
NASA Technical Reports Server (NTRS)
Showalter, T. W.
1981-01-01
The vertical motion simulator (VMS) is designed to simulate a variety of experimental helicopter and STOL/VTOL aircraft as well as other kinds of aircraft with special pitch and Z-axis characteristics. The VMS includes a large motion base with extensive vertical and lateral travel capabilities, a computer generated image visual system, and a high speed CDC 7600 computer system, which performs aero model calculations. Guidelines on how to measure and evaluate VMS performance were developed. A survey of simulation users was conducted to ascertain how they evaluated and certified simulators for use. The results are presented.
Three axis electronic flight motion simulator real time control system design and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Zhiyuan; Miao, Zhonghua, E-mail: zhonghua-miao@163.com; Wang, Xiaohua
2014-12-15
A three axis electronic flight motion simulator is reported in this paper, including the modelling, the controller design, and the hardware implementation. This flight motion simulator could be used for inertial navigation tests and high precision inertial navigation systems with good dynamic and static performance. A real time control system is designed, and several control system implementation problems were solved, including time unification with a parallel port interrupt, a high speed zero-finding method for the rotary inductosyn, and zero-crossing management with continuous rotation. Tests were carried out to show the effectiveness of the proposed real time control system.
Three axis electronic flight motion simulator real time control system design and implementation.
Gao, Zhiyuan; Miao, Zhonghua; Wang, Xuyong; Wang, Xiaohua
2014-12-01
A three axis electronic flight motion simulator is reported in this paper, including the modelling, the controller design, and the hardware implementation. This flight motion simulator could be used for inertial navigation tests and high precision inertial navigation systems with good dynamic and static performance. A real time control system is designed, and several control system implementation problems were solved, including time unification with a parallel port interrupt, a high speed zero-finding method for the rotary inductosyn, and zero-crossing management with continuous rotation. Tests were carried out to show the effectiveness of the proposed real time control system.
A Homogenization Approach for Design and Simulation of Blast Resistant Composites
NASA Astrophysics Data System (ADS)
Sheyka, Michael
Structural composites have been used in aerospace and structural engineering due to their high strength-to-weight ratio. Composite laminates have been successfully and extensively used in blast mitigation. This dissertation examines the use of the homogenization approach to design and simulate blast resistant composites. Three case studies are performed to examine the usefulness of different methods that may be used in designing and optimizing composite plates for blast resistance. The first case study utilizes a single-degree-of-freedom system to simulate the blast and a reliability-based approach. The first case study examines homogeneous plates, and the optimal stacking sequence and plate thicknesses are determined. The second and third case studies use the homogenization method to calculate the properties of a composite unit cell made of two different materials. The methods are integrated with dynamic simulation environments and advanced optimization algorithms. The second case study is 2-D and uses an implicit blast simulation, while the third case study is 3-D and simulates blast using the explicit blast method. Both case studies 2 and 3 rely on multi-objective genetic algorithms for the optimization process. Pareto optimal solutions are determined in case studies 2 and 3. Case study 3 is an integrative method for determining optimal stacking sequence, microstructure and plate thicknesses. The validity of the different methods such as homogenization, reliability, explicit blast modeling and multi-objective genetic algorithms is discussed. Possible extension of the methods to include strain rate effects and parallel computation is also examined.
Molecular dynamics simulations and novel drug discovery.
Liu, Xuewei; Shi, Danfeng; Zhou, Shuangyan; Liu, Hongli; Liu, Huanxiang; Yao, Xiaojun
2018-01-01
Molecular dynamics (MD) simulations can provide not only plentiful dynamical structural information on biomacromolecules but also a wealth of energetic information about protein and ligand interactions. Such information is very important to understanding the structure-function relationship of the target and the essence of protein-ligand interactions and to guiding the drug discovery and design process. Thus, MD simulations have been applied widely and successfully in each step of modern drug discovery. Areas covered: In this review, the authors review the applications of MD simulations in novel drug discovery, including the pathogenic mechanisms of amyloidosis diseases, virtual screening and the interaction mechanisms between drugs and targets. Expert opinion: MD simulations have been used widely in investigating the pathogenic mechanisms of diseases caused by protein misfolding, in virtual screening, and in investigating drug resistance mechanisms caused by mutations of the target. These issues are very difficult to solve by experimental methods alone. Thus, in the future, MD simulations will have wider application with the further improvement of computational capacity and the development of better sampling methods and more accurate force fields together with more efficient analysis methods.
An object-oriented simulator for 3D digital breast tomosynthesis imaging system.
Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa
2013-01-01
Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating the performances of the methods using mean structural similarity (MSSIM) values.
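A minimal sketch of the ART (Kaczmarz) update such simulators implement; the TV-regularized variant (ART+TV) typically alternates sweeps like this with a total-variation minimization step. The relaxation factor and system matrix here are illustrative.

```python
import numpy as np

def art(A, b, n_sweeps=10, relax=0.2):
    """Algebraic reconstruction technique: sequentially project the image
    estimate onto the hyperplane defined by each projection ray.
    A[i] holds ray i's voxel weights; b[i] is the measured projection."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```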
An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System
Cengiz, Kubra
2013-01-01
Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating the performances of the methods using mean structural similarity (MSSIM) values. PMID:24371468
Wet cooling towers: rule-of-thumb design and simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leeper, Stephen A.
1981-07-01
A survey of wet cooling tower literature was performed to develop a simplified method of cooling tower design and simulation for use in power plant cycle optimization. The theory of heat exchange in wet cooling towers is briefly summarized. The Merkel equation (the fundamental equation of heat transfer in wet cooling towers) is presented and discussed. The cooling tower fill constant (Ka) is defined and values derived. A rule-of-thumb method for the optimized design of cooling towers is presented. The rule-of-thumb design method provides information useful in power plant cycle optimization, including tower dimensions, water consumption rate, exit air temperature, power requirements and construction cost. In addition, a method for simulation of cooling tower performance at various operating conditions is presented. This information is also useful in power plant cycle evaluation. Using the information presented, it will be possible to incorporate wet cooling tower design and simulation into a procedure to evaluate and optimize power plant cycles.
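For reference, the Merkel equation in its standard textbook form is

$$\frac{K a V}{L} = \int_{T_{\mathrm{cold}}}^{T_{\mathrm{hot}}} \frac{c_w \, dT}{h_s(T) - h_a},$$

where Ka is the fill constant (volumetric mass-transfer coefficient), V the fill volume, L the water mass flow rate, c_w the specific heat of water, h_s(T) the enthalpy of saturated air at the local water temperature, and h_a the enthalpy of the bulk air. In practice the integral is commonly evaluated numerically, e.g., with four-point Chebyshev quadrature.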
CABS-flex 2.0: a web server for fast simulations of flexibility of protein structures.
Kuriata, Aleksander; Gierut, Aleksandra Maria; Oleniecki, Tymoteusz; Ciemny, Maciej Pawel; Kolinski, Andrzej; Kurcinski, Mateusz; Kmiecik, Sebastian
2018-05-14
Classical simulations of protein flexibility remain computationally expensive, especially for large proteins. A few years ago, we developed a fast method for predicting protein structure fluctuations that uses a single protein model as the input. The method has been made available as the CABS-flex web server and applied in numerous studies of protein structure-function relationships. Here, we present a major update of the CABS-flex web server to version 2.0. The new features include: extension of the method to significantly larger and multimeric proteins, customizable distance restraints and simulation parameters, contact maps and a new, enhanced web server interface. CABS-flex 2.0 is freely available at http://biocomp.chem.uw.edu.pl/CABSflex2.
Wang, Y; He, S; Guo, Y; Wang, S; Chen, S
2013-08-01
To evaluate the accuracy of volumetric measurement of simulated root resorption cavities based on cone beam computed tomography (CBCT), in comparison with that of micro-computed tomography (Micro-CT), which served as the reference. The State Key Laboratory of Oral Diseases at Sichuan University. Thirty-two bovine teeth were included for standardized CBCT scanning and Micro-CT scanning before and after the simulation of different degrees of root resorption. The teeth were divided into three groups according to the depths of the root resorption cavity (group 1: 0.15, 0.2, 0.3 mm; group 2: 0.6, 1.0 mm; group 3: 1.5, 2.0, 3.0 mm). Each depth included four specimens. Differences in tooth volume before and after simulated root resorption were then calculated from CBCT and Micro-CT scans, respectively. The overall between-method agreement of the measurements was evaluated using the concordance correlation coefficient (CCC). For the first group, the average volume of the resorption cavity was 1.07 mm³, and the between-method agreement of measurement for the volume changes was low (CCC = 0.098). For the second and third groups, the average volumes of the resorption cavities were 3.47 and 6.73 mm³, respectively, and the between-method agreements were good (CCC = 0.828 and 0.895, respectively). The accuracy of 3-D quantitative volumetric measurement of simulated root resorption based on CBCT was fairly good in detecting simulated resorption cavities larger than 3.47 mm³, while it was not sufficient for measuring resorption cavities smaller than 1.07 mm³. This method could be applied in future studies of root resorption, although further studies are required to improve its accuracy. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
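A minimal sketch of Lin's concordance correlation coefficient used to quantify between-method agreement (the paired volume changes below are hypothetical):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired
    measurements from two methods (e.g., CBCT vs. Micro-CT)."""
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()  # population covariance
    return 2 * sxy / (x.var() + y.var() + (mx - my) ** 2)

cbct  = np.array([3.1, 3.6, 6.5, 7.0])  # hypothetical volume changes, mm^3
micro = np.array([3.4, 3.5, 6.8, 6.7])
print(ccc(cbct, micro))                 # values near 1 mean close agreement
```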
Murphy, Margaret; Curtis, Kate; McCloughen, Andrea
2016-02-01
In-hospital emergencies require a structured team approach to facilitate simultaneous input into immediate resuscitation, stabilisation and prioritisation of care. Efforts to improve teamwork in the health care context include multidisciplinary simulation-based resuscitation team training, yet there is limited evidence demonstrating the value of these programmes.(1) We aimed to determine the current state of knowledge about the key components and impacts of multidisciplinary simulation-based resuscitation team training by conducting an integrative review of the literature. A systematic search using electronic (three databases) and hand searching methods for primary research published between 1980 and 2014 was undertaken; followed by a rigorous screening and quality appraisal process. The included articles were assessed for similarities and differences; the content was grouped and synthesised to form three main categories of findings. Eleven primary research articles representing a variety of simulation-based resuscitation team training were included. Five studies involved trauma teams; two described resuscitation teams in the context of intensive care and operating theatres and one focused on the anaesthetic team. Simulation is an effective method to train resuscitation teams in the management of crisis scenarios and has the potential to improve team performance in the areas of communication, teamwork and leadership. Team training improves the performance of the resuscitation team in simulated emergency scenarios. However, the transferability of educational outcomes to the clinical setting needs to be more clearly demonstrated. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Realistic finite temperature simulations of magnetic systems using quantum statistics
NASA Astrophysics Data System (ADS)
Bergqvist, Lars; Bergman, Anders
2018-01-01
We have performed realistic atomistic simulations at finite temperatures using Monte Carlo and atomistic spin dynamics simulations incorporating quantum (Bose-Einstein) statistics. The description is much improved at low temperatures compared to the classical (Boltzmann) statistics normally used in these kinds of simulations, while at higher temperatures the classical statistics are recovered. This corrected low-temperature description is reflected in both the magnetization and the magnetic specific heat, the latter allowing for improved modeling of the magnetic contribution to free energies. A central property in the method is the magnon density of states at finite temperatures, and we have compared several different implementations for obtaining it. The method has no restrictions regarding the chemical and magnetic order of the considered materials. This is demonstrated by applying the method to elemental ferromagnetic systems, including Fe and Ni, as well as to Fe-Co random alloys and the ferrimagnetic system GdFe3.
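The essence of such a correction is to replace the classical equipartition occupation of each magnon mode of frequency ω with the Bose-Einstein occupation:

$$n_{\mathrm{BE}}(\omega, T) = \frac{1}{e^{\hbar\omega/k_B T} - 1}, \qquad n_{\mathrm{cl}}(\omega, T) = \frac{k_B T}{\hbar\omega},$$

and since n_BE approaches n_cl when k_B T ≫ ħω, the classical statistics are recovered at high temperature, consistent with the behavior described above.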
Mixed-RKDG Finite Element Methods for the 2-D Hydrodynamic Model for Semiconductor Device Simulation
Chen, Zhangxin; Cockburn, Bernardo; Jerome, Joseph W.; ...
1995-01-01
In this paper we introduce a new method for numerically solving the equations of the hydrodynamic model for semiconductor devices in two space dimensions. The method combines a standard mixed finite element method, used to obtain directly an approximation to the electric field, with the so-called Runge-Kutta Discontinuous Galerkin (RKDG) method, originally devised for numerically solving multi-dimensional hyperbolic systems of conservation laws, which is applied here to the convective part of the equations. Numerical simulations showing the performance of the new method are displayed, and the results compared with those obtained by using Essentially Nonoscillatory (ENO) finite difference schemes. From the perspective of device modeling, these methods are robust, since they are capable of encompassing broad parameter ranges, including those for which shock formation is possible. The simulations presented here are for Gallium Arsenide at room temperature, but we have tested them much more generally with considerable success.
Analysis of mixed model in gear transmission based on ADAMS
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2012-09-01
The traditional methods of simulating mechanical gear drives include the gear pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower result accuracy; the latter usually obtains higher precision, but the calculation process is complex and does not converge easily. Most current research focuses on the description of geometric models and the definition of boundary conditions, but neither approach solves these problems fundamentally. To improve simulation efficiency while ensuring high accuracy, a mixed model method is presented, which uses gear tooth profiles in place of the solid gear to simulate gear movement. In the modeling process, the solid models of the mechanism are first built in SolidWorks; the point coordinates of the gear outline curves are then collected using the SolidWorks API, and fit curves are created in Adams from these coordinates; next, the fitted curves are positioned according to the location of the contact area; finally, the loading conditions, boundary conditions and simulation parameters are defined. The method provides gear shape information through the tooth profile curves, simulates the meshing process through curve-to-curve contact, and obtains mass and inertia data from the solid gear models. The simulation process combines the two models to complete the gear driving analysis. To verify the validity of the method, both theoretical derivation and numerical simulation of a runaway escapement are conducted. The results show that the computational efficiency of the mixed model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results are closer to theoretical calculations. Consequently, the mixed model method has high application value for studying the dynamics of gear mechanisms.
Special nuclear material simulation device
Leckey, John H.; DeMint, Amy; Gooch, Jack; Hawk, Todd; Pickett, Chris A.; Blessinger, Chris; York, Robbie L.
2014-08-12
An apparatus for simulating special nuclear material is provided. The apparatus typically contains a small quantity of special nuclear material (SNM) in a configuration that simulates a much larger quantity of SNM. Generally the apparatus includes a spherical shell that is formed from an alloy containing a small quantity of highly enriched uranium. Also typically provided is a core of depleted uranium. A spacer, typically aluminum, may be used to separate the depleted uranium from the shell of uranium alloy. A cladding, typically made of titanium, is provided to seal the source. Methods are provided to simulate SNM for testing radiation monitoring portals. Typically the methods use at least one primary SNM spectral line and exclude at least one secondary SNM spectral line.
Treeby, Bradley E; Tumen, Mustafa; Cox, B T
2011-01-01
A k-space pseudospectral model is developed for the fast full-wave simulation of nonlinear ultrasound propagation through heterogeneous media. The model uses a novel equation of state to account for nonlinearity in addition to power law absorption. The spectral calculation of the spatial gradients enables a significant reduction in the number of required grid nodes compared to finite difference methods. The model is parallelized using a graphical processing unit (GPU) which allows the simulation of individual ultrasound scan lines using a 256 x 256 x 128 voxel grid in less than five minutes. Several numerical examples are given, including the simulation of harmonic ultrasound images and beam patterns using a linear phased array transducer.
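A minimal sketch of the FFT-based spatial gradient at the core of pseudospectral schemes (the k-space method adds a temporal correction factor not shown here); its spectral accuracy is what permits far fewer grid nodes per wavelength than finite differences:

```python
import numpy as np

def spectral_gradient_1d(f, dx):
    """FFT-based spatial derivative: exact for band-limited fields
    sampled at ~2 points per wavelength."""
    k = 2 * np.pi * np.fft.fftfreq(f.size, d=dx)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
err = np.abs(spectral_gradient_1d(np.sin(x), x[1] - x[0]) - np.cos(x)).max()
print(err)  # ~1e-13: machine-precision derivative on a coarse grid
```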
Winter Simulation Conference, Miami Beach, Fla., December 4-6, 1978, Proceedings. Volumes 1 & 2
NASA Technical Reports Server (NTRS)
Highland, H. J. (Editor); Nielsen, N. R.; Hull, L. G.
1978-01-01
The papers report on the various aspects of simulation such as random variate generation, simulation optimization, ranking and selection of alternatives, model management, documentation, data bases, and instructional methods. Simulation studies in a wide variety of fields are described, including system design and scheduling, government and social systems, agriculture, computer systems, the military, transportation, corporate planning, ecosystems, health care, manufacturing and industrial systems, computer networks, education, energy, production planning and control, financial models, behavioral models, information systems, and inventory control.
Wakefield Simulation of CLIC PETS Structure Using Parallel 3D Finite Element Time-Domain Solver T3P
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, A.; Kabel, A.; Lee, L.
In recent years, SLAC's Advanced Computations Department (ACD) has developed the parallel 3D Finite Element electromagnetic time-domain code T3P. Higher-order Finite Element methods on conformal unstructured meshes and massively parallel processing allow unprecedented simulation accuracy for wakefield computations and simulations of transient effects in realistic accelerator structures. Applications include simulation of wakefield damping in the Compact Linear Collider (CLIC) power extraction and transfer structure (PETS).
Cheng, Ji; Pullenayegum, Eleanor; Marshall, John K; Thabane, Lehana
2016-01-01
Objectives There is no consensus on whether studies with no observed events in the treatment and control arms, the so-called both-armed zero-event studies, should be included in a meta-analysis of randomised controlled trials (RCTs). Current analytic approaches handled them differently depending on the choice of effect measures and authors' discretion. Our objective is to evaluate the impact of including or excluding both-armed zero-event (BA0E) studies in meta-analysis of RCTs with rare outcome events through a simulation study. Method We simulated 2500 data sets for different scenarios varying the parameters of baseline event rate, treatment effect and number of patients in each trial, and between-study variance. We evaluated the performance of commonly used pooling methods in classical meta-analysis—namely, Peto, Mantel-Haenszel with fixed-effects and random-effects models, and inverse variance method with fixed-effects and random-effects models—using bias, root mean square error, length of 95% CI and coverage. Results The overall performance of the approaches of including or excluding BA0E studies in meta-analysis varied according to the magnitude of true treatment effect. Including BA0E studies introduced very little bias, decreased mean square error, narrowed the 95% CI and increased the coverage when no true treatment effect existed. However, when a true treatment effect existed, the estimates from the approach of excluding BA0E studies led to smaller bias than including them. Among all evaluated methods, the Peto method excluding BA0E studies gave the least biased results when a true treatment effect existed. Conclusions We recommend including BA0E studies when treatment effects are unlikely, but excluding them when there is a decisive treatment effect. Providing results of including and excluding BA0E studies to assess the robustness of the pooled estimated effect is a sensible way to communicate the results of a meta-analysis when the treatment effects are unclear. PMID:27531725
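A minimal sketch of the Peto one-step pooled odds ratio evaluated in the study (illustrative, not the authors' simulation code); in this estimator a both-armed zero-event trial contributes O − E = 0 and V = 0 to the sums:

```python
import numpy as np

def peto_pooled_or(events_t, n_t, events_c, n_c):
    """Peto one-step method: per trial, O = observed treatment-arm events,
    E = expectation under the null, V = hypergeometric variance;
    pooled ln(OR) = sum(O - E) / sum(V). Returns (OR, 95% CI low, high)."""
    O = np.asarray(events_t, float)
    n_t, n_c = np.asarray(n_t, float), np.asarray(n_c, float)
    tot = O + np.asarray(events_c, float)
    N = n_t + n_c
    E = n_t * tot / N
    V = E * (n_c / N) * (N - tot) / np.maximum(N - 1, 1.0)
    keep = V > 0                       # both-armed zero-event trials drop out
    ln_or = (O - E)[keep].sum() / V[keep].sum()
    se = 1.0 / np.sqrt(V[keep].sum())
    return np.exp(ln_or), np.exp(ln_or - 1.96 * se), np.exp(ln_or + 1.96 * se)

# hypothetical rare-event trials: (events_t, n_t, events_c, n_c)
print(peto_pooled_or([1, 0, 2], [100, 80, 120], [3, 0, 5], [100, 80, 120]))
```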
Infrared imagery acquisition process supporting simulation and real image training
NASA Astrophysics Data System (ADS)
O'Connor, John
2012-05-01
The increasing use of infrared sensors requires development of advanced infrared training and simulation tools to meet current Warfighter needs. In order to prepare the force, a challenge exists for training and simulation images to be both realistic and consistent with each other to be effective and avoid negative training. The US Army Night Vision and Electronic Sensors Directorate has corrected this deficiency by developing and implementing infrared image collection methods that meet the needs of both real image trainers and real-time simulations. The author presents innovative methods for collection of high-fidelity digital infrared images and the associated equipment and environmental standards. The collected images are the foundation for US Army, and USMC Recognition of Combat Vehicles (ROC-V) real image combat ID training and also support simulations including the Night Vision Image Generator and Synthetic Environment Core. The characteristics, consistency, and quality of these images have contributed to the success of these and other programs. To date, this method has been employed to generate signature sets for over 350 vehicles. The needs of future physics-based simulations will also be met by this data. NVESD's ROC-V image database will support the development of training and simulation capabilities as Warfighter needs evolve.
Cold dark matter. 1: The formation of dark halos
NASA Technical Reports Server (NTRS)
Gelb, James M.; Bertschinger, Edmund
1994-01-01
We use numerical simulations of critically closed cold dark matter (CDM) models to study the effects of numerical resolution on observable quantities. We study simulations with up to 256^3 particles using the particle-mesh (PM) method and with up to 144^3 particles using the adaptive particle-particle/particle-mesh (P3M) method. Comparisons of galaxy halo distributions are made among the various simulations. We also compare distributions with observations, and we explore methods for identifying halos, including a new algorithm that finds all particles within closed contours of the smoothed density field surrounding a peak. The simulated halos show more substructure than predicted by the Press-Schechter theory. We are able to rule out all Omega = 1 CDM models for linear amplitude sigma_8 ≳ 0.5 because the simulations produce too many massive halos compared with the observations. The simulations also produce too many low-mass halos. The distribution of halos characterized by their circular velocities for the P3M simulations is in reasonable agreement with the observations for 150 km/s ≤ V_circ ≤ 350 km/s.
Recent developments in structural proteomics for protein structure determination.
Liu, Hsuan-Liang; Hsu, Jyh-Ping
2005-05-01
The major challenges in structural proteomics include identifying all the proteins on the genome-wide scale, determining their structure-function relationships, and outlining the precise three-dimensional structures of the proteins. Protein structures are typically determined by experimental approaches such as X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. However, the knowledge of three-dimensional space by these techniques is still limited. Thus, computational methods such as comparative and de novo approaches and molecular dynamic simulations are intensively used as alternative tools to predict the three-dimensional structures and dynamic behavior of proteins. This review summarizes recent developments in structural proteomics for protein structure determination; including instrumental methods such as X-ray crystallography and NMR spectroscopy, and computational methods such as comparative and de novo structure prediction and molecular dynamics simulations.
Evaluation of methodology for detecting/predicting migration of forest species
Dale S. Solomon; William B. Leak
1996-01-01
Available methods for analyzing migration of forest species are evaluated, including simulation models, remeasured plots, resurveys, pollen/vegetation analysis, and age/distance trends. Simulation models have provided some of the most drastic estimates of species changes due to predicted changes in global climate. However, these models require additional testing...
NASA Technical Reports Server (NTRS)
Nusinov, M. D.; Kochnev, V. A.; Chernyak, Y. B.; Kuznetsov, A. V.; Kosolapov, A. I.; Yakovlev, O. I.
1974-01-01
Study of evaporation, condensation and sputtering on the moon can provide information on the same processes on other planets, and reveal details of the formation of the lunar regolith. Simulation methods include vacuum evaporation, laser evaporation, and bubbling gas through melts.
Modeling a Hall Thruster from Anode to Plume Far Field
2008-12-31
Two-dimensional axisymmetric simulations of the xenon plasma plume flow field from a D55 anode layer Hall thruster are performed. A hybrid particle-fluid method is used for the simulations. The magnetic field surrounding the Hall thruster exit is included in the calculation. The plasma properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.
2014-11-01
Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions to result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built using the platform of MCNPX, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scans, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing range from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms including a rectangular homogeneous water equivalent phantom, an elliptical shaped phantom with three sections (where each section was a homogeneous, but different material), and a heterogeneous, complex geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain dose estimates. This allowed direct comparisons between measured and simulated dose values under each condition of phantom, location, and scan to be made. Results: For FTC scans, the percent root mean square (RMS) difference between measurements and simulations was within 5% across all phantoms. For TCM scans, the percent RMS of the difference between measured and simulated values when using detailed TCM and z-axis-only TCM simulations was 4.5% and 13.2%, respectively. For the anthropomorphic phantom, the difference between TCM measurements and detailed TCM and z-axis-only TCM simulations was 1.2% and 8.9%, respectively. For FTC measurements and simulations, the percent RMS of the difference was 5.0%. Conclusions: This work demonstrated that the Monte Carlo model developed provided good agreement between measured and simulated values under both simple and complex geometries including an anthropomorphic phantom. This work also showed the increased dose differences for z-axis-only TCM simulations, where considerable modulation in the x–y plane was present due to the shape of the rectangular water phantom. Results from this investigation highlight details that need to be included in Monte Carlo simulations of TCM CT scans in order to yield accurate, clinically viable assessments of patient dosimetry.
High-fidelity large eddy simulation for supersonic jet noise prediction
NASA Astrophysics Data System (ADS)
Aikens, Kurt M.
The problem of intense sound radiation from supersonic jets is a concern for both civil and military applications. As a result, many experimental and computational efforts are focused on evaluating possible noise suppression techniques. Large-eddy simulation (LES) is utilized in many computational studies to simulate the turbulent jet flowfield. Integral methods such as the Ffowcs Williams-Hawkings (FWH) method are then used for propagation of the sound waves to the farfield. Improving the accuracy of this two-step methodology and evaluating beveled converging-diverging nozzles for noise suppression are the main tasks of this work. First, a series of numerical experiments are undertaken to ensure adequate numerical accuracy of the FWH methodology. This includes an analysis of different treatments for the downstream integration surface: with or without including an end-cap, averaging over multiple end-caps, and including an approximate surface integral correction term. Secondly, shock-capturing methods based on characteristic filtering and adaptive spatial filtering are used to extend a highly parallelizable multiblock subsonic LES code to enable simulations of supersonic jets. The code is based on high-order numerical methods for accurate prediction of the acoustic sources and propagation of the sound waves. Furthermore, this new code is more efficient than the legacy version, allows cylindrical multiblock topologies, and is capable of simulating nozzles with resolved turbulent boundary layers when coupled with an approximate turbulent inflow boundary condition. Even though such wall-resolved simulations are more physically accurate, their expense is often prohibitive. To make simulations more economical, a wall model is developed and implemented. The wall modeling methodology is validated for turbulent quasi-incompressible and compressible zero pressure gradient flat plate boundary layers, and for subsonic and supersonic jets. The supersonic code additions and the wall model treatment are then utilized to simulate military-style nozzles with and without beveling of the nozzle exit plane. Experiments of beveled converging-diverging nozzles have found reduced noise levels for some observer locations. Predicting the noise for these geometries provides a good initial test of the overall methodology for a more complex nozzle. The jet flowfield and acoustic data are analyzed and compared to similar experiments, and excellent agreement is found. Potential areas of improvement are discussed for future research.
Upgrades for the CMS simulation
Lange, D. J.; Hildreth, M.; Ivantchenko, V. N.; ...
2015-05-22
Over the past several years, the CMS experiment has made significant changes to its detector simulation application. The geometry has been generalized to include modifications being made to the CMS detector for 2015 operations, as well as model improvements to the simulation geometry of the current CMS detector and the implementation of a number of approved and possible future detector configurations. These include both completely new tracker and calorimetry systems. We have completed the transition to Geant4 version 10, and we have made significant progress in reducing the CPU resources required to run our Geant4 simulation. These gains have been achieved through both technical improvements and numerical techniques. Substantial speed improvements have been achieved without changing the physics validation benchmarks that the experiment uses to validate our simulation application for use in production. We discuss the methods that we implemented and the corresponding demonstrated performance improvements deployed for our 2015 simulation application.
NASA Astrophysics Data System (ADS)
Yan, Jiawei; Ke, Youqi
In realistic nanoelectronics, disordered impurities/defects are inevitable and play important roles in electron transport. However, due to the lack of effective quantum transport methods, the important effects of disorder remain poorly understood. Here, we report a generalized non-equilibrium vertex correction (NVC) method with the coherent potential approximation to treat disorder effects in quantum transport simulation. With this generalized NVC method, any averaged product of two single-particle Green's functions can be obtained by solving a set of simple linear equations. As a result, the averaged non-equilibrium density matrix and various important transport properties, including the averaged current, the disorder-induced current fluctuation, and the averaged shot noise, can all be computed efficiently in a unified scheme. Moreover, a generalized form of the conditionally averaged non-equilibrium Green's function is derived and incorporated with density functional theory to enable first-principles simulation. We prove that the non-equilibrium coherent potential equals the non-equilibrium vertex correction. Our approach provides a unified, efficient and self-consistent method for simulating non-equilibrium quantum transport through disordered nanoelectronics. Shanghaitech start-up fund.
Numerical simulation of the wave-induced non-linear bending moment of ships
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, J.; Wang, Z.; Gu, X.
1995-12-31
Ships traveling in moderate or rough seas may experience non-linear bending moments due to flare effects and slamming loads. The numerical simulation of the total wave-induced bending moment, contributed from both the wave frequency component induced by wave forces and the high frequency whipping component induced by slamming actions, is very important in predicting the responses and ensuring the safety of the ship in rough seas. The time simulation is also useful for the reliability analysis of ship girder strength. The present paper discusses four different methods for the numerical simulation of the wave-induced non-linear vertical bending moment of ships recently developed at CSSRC, including the hydroelastic integral-differential method (HID), the hydroelastic differential analysis method (HDA), the combined seakeeping and structural forced vibration method (CSFV), and the modified CSFV method (MCSFV). Numerical predictions are compared with the experimental results obtained from the elastic ship model test of the S-175 container ship in regular and irregular waves presented by Watanabe, Ueno and Sawada (1989).
NASA Astrophysics Data System (ADS)
Tian, C.; Weng, J.; Liu, Y.
2017-11-01
The convection heat transfer coefficient is one of the evaluation indexes of brake disc performance. A fluid-solid coupling simulation is used in this paper to calculate the convection heat transfer coefficient, because estimates obtained from empirical formulas differ widely. The model, including a brake disc, a car body, a bogie and the flow field, was built, meshed and simulated in the software FLUENT. The calculations used the standard k-epsilon turbulence model and the energy model, and the working condition of the brake disc was considered. The coefficient of each part of the disc can be obtained with this method. The simulation results show that, at a speed of 160 km/h, the radiating ribs have the maximum convection heat transfer coefficient, 129.6 W/(m²·K), while the average coefficient of the whole disc is 100.4 W/(m²·K); the windward side of the ribs is a positive-pressure area and the leeward side a negative-pressure area, with a maximum pressure of 2663.53 Pa.
NASA Astrophysics Data System (ADS)
Ceperley, Daniel Peter
This thesis presents a Finite-Difference Time-Domain (FDTD) simulation framework as well as both scientific observations and quantitative design data for emerging optical devices. These emerging applications required the development of simulation capabilities to carefully control numerical experimental conditions, isolate and quantify specific scattering processes, and overcome memory and run-time limitations on large device structures. The framework consists of a new version 7 of TEMPEST and auxiliary tools implemented as Matlab scripts. In upgrading the geometry representation and absorbing boundary conditions in TEMPEST from v6, accuracy has been sustained, and key improvements have yielded application-specific gains in speed and accuracy. These extensions include pulsed methods, PML for plasmon termination, and plasmon and scattered-field sources. The auxiliary tools include application-specific methods such as signal flow graphs of plasmon couplers, Bloch mode expansions of sub-wavelength grating waves, and back-propagation methods to characterize edge scattering in diffraction masks. Each application posed different numerical hurdles and physical questions for the simulation framework. The Terrestrial Planet Finder Coronagraph required accurate modeling of diffraction mask structures too large for FDTD analysis alone. This analysis was achieved through a combination of targeted TEMPEST simulations and a full system simulator, based on thin-mask scalar diffraction models, built by Ball Aerospace for JPL. TEMPEST simulation showed that vertical sidewalls were the strongest scatterers, adding nearly 2λ of light per mask edge, which could be reduced by 20° undercuts. TEMPEST assessment of coupling in rapid thermal annealing was complicated by extremely sub-wavelength features and fine meshes. Near-100% coupling and low variability was confirmed even in the presence of unidirectional dense metal gates. Accurate analysis of surface plasmon coupling efficiency by small surface features required capabilities to isolate these features and cleanly illuminate them with plasmons and plane waves. These features were shown to have coupling cross-sections up to and slightly exceeding their physical size. Long run-times for TEMPEST simulations of finite-length gratings were overcome with a signal flow graph method. With these methods a plasmon coupler with a 100% capture length of over 10λ was demonstrated. Simulation of 3D nano-particle arrays utilized TEMPEST v7's pulsed methods to minimize the number of multi-day simulations. These simulations led to the discovery that interstitial plasmons were responsible for resonant absorption and transmission but not reflection. Simulation of a sub-wavelength grating mirror using pulsed sources to map resonant spectra showed that neither coupled guided waves nor coupled isolated resonators accurately described the operation. However, a new model based on vertical propagation of lateral Bloch modes with zero phase progression efficiently characterized the device and provided principles for designing similar devices at other wavelengths.
Simulating propagation of coherent light in random media using the Fredholm type integral equation
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Pluciński, Jerzy
2017-06-01
Studying the propagation of light in random scattering materials is important for both basic and applied research. Such studies often require numerical methods for simulating the behavior of light beams in random media. However, if such simulations must account for the coherence properties of light, they become complex numerical problems. There are well-established methods for simulating multiple scattering of light (e.g., Radiative Transfer Theory and Monte Carlo methods), but they do not treat the coherence properties of light directly. Some variations of these methods allow prediction of the behavior of coherent light, but only for an averaged realization of the scattering medium. This limits their application in studying many physical phenomena connected to a specific distribution of scattering particles (e.g., laser speckle). In general, numerical simulation of coherent light propagation in a specific realization of a random medium is a time- and memory-consuming problem. The goal of the presented research was to develop a new, efficient method for solving this problem. The method, presented in our earlier works, is based on solving a Fredholm type integral equation that describes the multiple light scattering process. This equation can be discretized and solved numerically using various algorithms, e.g., by directly solving the corresponding linear system, or by using iterative or Monte Carlo solvers. Here we present recent development of this method, including its comparison with well-known analytical results and finite-difference type simulations. We also present an extension of the method to problems of multiple scattering of polarized light by large spherical particles, which joins the presented mathematical formalism with Mie theory.
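As a minimal sketch of the discretize-and-solve route described above, assuming a generic 1-D Fredholm equation of the second kind, ψ(x) = ψ0(x) + ∫ K(x, x′) ψ(x′) dx′, with a placeholder kernel (not the paper's actual scattering kernel):

```python
import numpy as np

# Nystrom discretization: (I - K W) psi = psi0 on n quadrature nodes.
n = 200
x, h = np.linspace(0.0, 1.0, n, retstep=True)   # uniform nodes, step h

def kernel(xi, xj):
    # Placeholder smooth kernel standing in for the scattering operator.
    return 0.5 * np.exp(-10.0 * (xi - xj) ** 2)

K = kernel(x[:, None], x[None, :]) * h          # quadrature-weighted kernel
psi0 = np.exp(-(x - 0.5) ** 2 / 0.02)           # incident field (toy)

psi = np.linalg.solve(np.eye(n) - K, psi0)      # direct linear solve
# An iterative (Neumann-series) solver would instead repeat:
#   psi <- psi0 + K @ psi
```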
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.
NASA Technical Reports Server (NTRS)
Reeves, P. M.; Campbell, G. S.; Ganzer, V. M.; Joppa, R. G.
1974-01-01
A method is described for generating time histories which model the frequency content and certain non-Gaussian probability characteristics of atmospheric turbulence, including the large gusts and patchy nature of turbulence. Methods for generating the time histories using either analog or digital computation are described. A STOL airplane was programmed into a 6-degree-of-freedom flight simulator, and turbulence time histories from several atmospheric turbulence models were introduced. The pilots' reactions are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rick Demmer; John Drake; Ryan James, PhD
Over the last 50 years, the study of radiological contamination and decontamination has expanded significantly. This paper addresses the mechanisms of radiological contamination that have been reported and then discusses which methods have recently been used during performance testing of several different decontamination technologies. About twenty years ago the Idaho Nuclear Technology Engineering Center (INTEC) at the INL began a search for decontamination processes which could minimize secondary waste. In order to test the effectiveness of these decontamination technologies, a new simulated contamination, termed SIMCON, was developed. SIMCON was designed to replicate the types of contamination found on stainless steel spent fuel processing equipment. Ten years later, the INL began research into methods for simulating urban contamination resulting from a radiological dispersal device (RDD). This work was sponsored by the Defense Advanced Research Projects Agency (DARPA) and included the initial development of an aqueous application of contaminant to substrate. Since 2007, research sponsored by the US Environmental Protection Agency (EPA) has advanced that effort and led to the development of a contamination method that simulates particulate fallout from an Improvised Nuclear Device (IND). The IND method diverges from previous efforts to create tenacious contamination by simulating a reproducible “loose” contamination. Examining these different types of contamination (and subsequent decontamination processes), which have included several different radionuclides and substrates, sheds light on contamination processes that occur throughout the nuclear industry and in the urban environment.
Tian, Yuxi; Schuemie, Martijn J; Suchard, Marc A
2018-06-22
Propensity score adjustment is a popular approach for confounding control in observational studies. Reliable frameworks are needed to determine relative propensity score performance in large-scale studies, and to establish optimal propensity score model selection methods. We detail a propensity score evaluation framework that includes synthetic and real-world data experiments. Our synthetic experimental design extends the 'plasmode' framework and simulates survival data under known effect sizes, and our real-world experiments use a set of negative control outcomes with presumed null effect sizes. In reproductions of two published cohort studies, we compare two propensity score estimation methods that contrast in their model selection approach: L1-regularized regression, which conducts a penalized likelihood regression, and the 'high-dimensional propensity score' (hdPS), which employs a univariate covariate screen. We evaluate the methods on a range of outcome-dependent and outcome-independent metrics. L1-regularized propensity score methods achieve superior model fit, covariate balance and negative control bias reduction compared with the hdPS. Simulation results are mixed and fluctuate with simulation parameters, revealing a limitation of simulation under the proportional hazards framework. Including regularization with the hdPS reduces commonly reported non-convergence issues but has little effect on propensity score performance. L1-regularization incorporates all covariates simultaneously into the propensity score model and offers propensity score performance superior to the hdPS marginal screen.
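A minimal sketch of an L1-regularized propensity score model of the kind compared above, using scikit-learn (the covariates, penalty strength, and data are illustrative assumptions, not the study's configuration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 200))                # high-dimensional covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0] + X[:, 1])))

# The L1 penalty performs covariate selection inside the propensity model,
# in contrast to the hdPS univariate screen described above.
ps_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
ps_model.fit(X, treat)
propensity = ps_model.predict_proba(X)[:, 1]    # estimated P(treated | X)
print("covariates retained:", np.sum(ps_model.coef_ != 0))
```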
Radio frequency tank eigenmode sensor for propellant quantity gauging
NASA Technical Reports Server (NTRS)
Zimmerli, Gregory A. (Inventor)
2013-01-01
A method for measuring the quantity of fluid in a tank may include the steps of selecting a match between a measured set of electromagnetic eigenfrequencies and a simulated plurality of sets of electromagnetic eigenfrequencies using a matching algorithm, wherein the match is one simulated set of electromagnetic eigenfrequencies from the simulated plurality of sets of electromagnetic eigenfrequencies, and determining the fill level of the tank based upon the match.
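A sketch of the matching step described in this patent abstract, under the simple assumption that the best match minimizes the sum of squared differences between the measured and simulated eigenfrequency sets (fill levels and frequencies are made-up placeholders):

```python
import numpy as np

def match_fill_level(measured, simulated_sets, fill_levels):
    """Pick the simulated eigenfrequency set closest (least squares)
    to the measured set and return the corresponding fill level."""
    errors = [np.sum((measured - s) ** 2) for s in simulated_sets]
    return fill_levels[int(np.argmin(errors))]

# Hypothetical library of simulated eigenfrequency sets, one per fill level.
fill_levels = np.linspace(0.0, 1.0, 11)
simulated_sets = [np.array([1.0, 1.6, 2.2]) * (1 + 0.05 * f)
                  for f in fill_levels]          # GHz, toy tank model
measured = np.array([1.02, 1.63, 2.25])
print("estimated fill level:",
      match_fill_level(measured, simulated_sets, fill_levels))
```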
Real-time inextensible surgical thread simulation.
Xu, Lang; Liu, Qian
2018-03-27
This paper discusses a real-time simulation method for inextensible surgical thread based on the Cosserat rod theory using position-based dynamics (PBD). The method realizes stable twining and knotting of surgical thread while including inextensibility, bending, twisting and coupling effects. The Cosserat rod theory is used to model the nonlinear elastic behavior of surgical thread. The surgical thread model is solved with PBD to achieve a real-time, extremely stable simulation. Owing to the one-dimensional linear structure of surgical thread, a direct solution of the distance constraint based on the tridiagonal matrix algorithm is used to enhance stretching resistance in every constraint projection iteration. In addition, continuous collision detection and collision response guarantee a large time step and high performance. Furthermore, friction is integrated into the constraint projection process to stabilize the twining of multiple threads and complex contact situations. Through comparisons with existing methods, the surgical thread maintains constant length under large deformation after applying the direct distance constraint in our method, and the twining and knotting of multiple threads remain stable under contact and friction forces. A surgical suture scene is also modeled to demonstrate the practicality and simplicity of our method. Our method achieves stable and fast simulation of inextensible surgical thread. Benefiting from the unified particle framework, rigid bodies, elastic rods, and soft bodies can be simulated simultaneously. The method is appropriate for applications in virtual surgery that require multiple dynamic bodies.
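The core distance-constraint projection in PBD can be sketched as follows (a generic Gauss-Seidel projection for a particle chain; the paper's direct tridiagonal solver and the Cosserat bending/twisting terms are not reproduced here):

```python
import numpy as np

def project_distance_constraints(pos, inv_mass, rest_len, iterations=20):
    """Gauss-Seidel PBD projection enforcing |p[i+1] - p[i]| = rest_len
    along a thread discretized as a particle chain."""
    for _ in range(iterations):
        for i in range(len(pos) - 1):
            d = pos[i + 1] - pos[i]
            dist = np.linalg.norm(d)
            if dist < 1e-12:
                continue
            wsum = inv_mass[i] + inv_mass[i + 1]
            if wsum == 0.0:
                continue                      # both endpoints pinned
            corr = (dist - rest_len) * d / (dist * wsum)
            pos[i] += inv_mass[i] * corr      # move both particles toward
            pos[i + 1] -= inv_mass[i + 1] * corr  # constraint satisfaction
    return pos

pos = np.cumsum(np.random.rand(50, 3) * 0.1, axis=0)  # jumbled thread
inv_mass = np.ones(50)
inv_mass[0] = 0.0                                     # pin first particle
pos = project_distance_constraints(pos, inv_mass, rest_len=0.1)
```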
QM/MM free energy simulations: recent progress and challenges
Lu, Xiya; Fang, Dong; Ito, Shingo; Okamoto, Yuko; Ovchinnikov, Victor
2016-01-01
Due to their higher computational cost relative to pure molecular mechanical (MM) simulations, hybrid quantum mechanical/molecular mechanical (QM/MM) free energy simulations particularly require a careful balancing of computational cost and accuracy. Here we review several recent developments in free energy methods most relevant to QM/MM simulations and discuss several topics motivated by these developments, using simple but informative examples that involve processes in water. For chemical reactions, we highlight the value of invoking enhanced sampling techniques (e.g., replica-exchange) in umbrella sampling calculations and the value of including collective environmental variables (e.g., hydration level) in metadynamics simulations; we also illustrate the sensitivity of string calculations, especially of the free energy along the path, to various parameters in the computation. Alchemical free energy simulations with a specific thermodynamic cycle are used to probe the effect of including the first solvation shell in the QM region when computing solvation free energies. For cases where high-level QM/MM potential functions are needed, we analyze two different approaches: the QM/MM-MFEP method of Yang and co-workers and perturbative corrections to low-level QM/MM free energy results. For the examples analyzed here, both approaches seem productive, although care needs to be exercised when analyzing the perturbative corrections. PMID:27563170
Detecting Unsteady Blade Row Interaction in a Francis Turbine using a Phase-Lag Boundary Condition
NASA Astrophysics Data System (ADS)
Wouden, Alex; Cimbala, John; Lewis, Bryan
2013-11-01
For CFD simulations in turbomachinery, methods are typically used to reduce the computational cost. For example, the standard periodic assumption reduces the underlying mesh to a single blade passage in axisymmetric applications. If the simulation includes only a single array of blades with a uniform inlet condition, this assumption is adequate. However, to compute the interaction between successive blade rows of differing periodicity in an unsteady simulation, the periodic assumption breaks down and may produce inaccurate results. As a viable alternative, the phase-lag boundary condition assumes that the periodicity includes a temporal component which, if considered, allows a single passage to be modeled per blade row irrespective of differing periodicity. Prominently used in compressible CFD codes for the analysis of gas turbines/compressors, the phase-lag boundary condition is adapted here to analyze the interaction between the guide vanes and rotor blades in an incompressible simulation of the 1989 GAMM Workshop Francis turbine using OpenFOAM. The implementation is based on the "direct-storage" method proposed in 1977 by Erdos and Alzner. The phase-lag simulation is compared with available data from the GAMM workshop as well as a full-wheel simulation. Funding provided by DOE Award number: DE-EE0002667.
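A toy illustration of the direct-storage idea, assuming a boundary whose signal equals that of the neighboring passage shifted by a fixed inter-blade phase lag (a deliberately simplified stand-in, not the OpenFOAM implementation):

```python
import numpy as np

class PhaseLagBoundary:
    """Direct-storage phase-lag BC (Erdos & Alzner style, simplified):
    store the boundary time history over one period and replay it with
    a fixed phase lag when the neighboring passage's value is needed."""
    def __init__(self, steps_per_period, lag_steps):
        self.history = np.zeros(steps_per_period)
        self.lag = lag_steps

    def store(self, step, value):
        self.history[step % self.history.size] = value

    def lagged(self, step):
        return self.history[(step - self.lag) % self.history.size]

bc = PhaseLagBoundary(steps_per_period=360, lag_steps=45)
for n in range(1000):                          # march in time
    u_boundary = np.sin(2 * np.pi * n / 360)   # toy boundary signal
    bc.store(n, u_boundary)
    u_neighbor = bc.lagged(n)    # neighbor passage value, phase-lagged
```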
A sequential coalescent algorithm for chromosomal inversions
Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M
2013-01-01
Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent, an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894
Singh, Gurpreet; Ravi, Koustuban; Wang, Qian; Ho, Seng-Tiong
2012-06-15
A complex-envelope (CE) alternating-direction-implicit (ADI) finite-difference time-domain (FDTD) approach to treat light-matter interaction self-consistently with electromagnetic field evolution for efficient simulations of active photonic devices is presented for the first time (to our best knowledge). The active medium (AM) is modeled using an efficient multilevel system of carrier rate equations to yield the correct carrier distributions, suitable for modeling semiconductor/solid-state media accurately. To include the AM in the CE-ADI-FDTD method, a first-order differential system involving CE fields in the AM is first set up. The system matrix that includes AM parameters is then split into two time-dependent submatrices that are then used in an efficient ADI splitting formula. The proposed CE-ADI-FDTD approach with AM takes 22% of the time as the approach of the corresponding explicit FDTD, as validated by semiconductor microdisk laser simulations.
Urbanowicz, Ryan J; Kiralis, Jeff; Sinnott-Armstrong, Nicholas A; Heberling, Tamra; Fisher, Jonathan M; Moore, Jason H
2012-10-01
Geneticists who look beyond single locus disease associations require additional strategies for the detection of complex multi-locus effects. Epistasis, a multi-locus masking effect, presents a particular challenge, and has been the target of bioinformatic development. Thorough evaluation of new algorithms calls for simulation studies in which known disease models are sought. To date, the best methods for generating simulated multi-locus epistatic models rely on genetic algorithms. However, such methods are computationally expensive, difficult to adapt to multiple objectives, and unlikely to yield models with a precise form of epistasis which we refer to as pure and strict. Purely and strictly epistatic models constitute the worst case in terms of detecting disease associations, since such associations may only be observed if all n loci are included in the disease model. This makes them an attractive gold standard for simulation studies considering complex multi-locus effects. We introduce GAMETES, a user-friendly software package and algorithm which generates complex biallelic single nucleotide polymorphism (SNP) disease models for simulation studies. GAMETES rapidly and precisely generates random, pure, strict n-locus models with specified genetic constraints. These constraints include heritability, minor allele frequencies of the SNPs, and population prevalence. GAMETES also includes a simple dataset simulation strategy which may be utilized to rapidly generate an archive of simulated datasets for given genetic models. We highlight the utility and limitations of GAMETES with an example simulation study using MDR, an algorithm designed to detect epistasis. GAMETES is a fast, flexible, and precise tool for generating complex n-locus models with random architectures. While GAMETES has a limited ability to generate models with higher heritabilities, it is proficient at generating the lower heritability models typically used in simulation studies evaluating new algorithms. In addition, the GAMETES modeling strategy may be flexibly combined with any dataset simulation strategy. Beyond dataset simulation, GAMETES could be employed to pursue theoretical characterization of genetic models and epistasis.
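To make the pure-and-strict notion concrete: a two-locus penetrance table is pure and strict when neither SNP shows a marginal (single-locus) effect, so disease risk depends only on the joint genotype. The sketch below generates such a table by removing marginal effects from a random one; it is a simplified stand-in for GAMETES' constrained random-architecture generation, not its actual algorithm.

```python
import numpy as np

def genotype_freqs(maf):
    # Hardy-Weinberg genotype frequencies (AA, Aa, aa).
    p, q = 1.0 - maf, maf
    return np.array([p * p, 2 * p * q, q * q])

def pure_strict_table(maf1, maf2, rng=None):
    """Random 3x3 two-locus penetrance table with exactly constant
    marginal penetrances (no single-locus effect)."""
    rng = rng or np.random.default_rng()
    f1, f2 = genotype_freqs(maf1), genotype_freqs(maf2)
    pen = rng.random((3, 3))
    r = pen @ f2                      # SNP1 marginal penetrances
    c = f1 @ pen                      # SNP2 marginal penetrances
    m = f1 @ pen @ f2                 # prevalence of the raw table
    pen = pen - r[:, None] - c[None, :] + 2 * m  # remove marginal effects
    # Affine maps preserve constant marginals; rescale into [0, 1].
    return (pen - pen.min()) / (pen.max() - pen.min())

pen = pure_strict_table(0.3, 0.2, rng=np.random.default_rng(1))
f1, f2 = genotype_freqs(0.3), genotype_freqs(0.2)
print("marginals:", pen @ f2, f1 @ pen)   # each is (near-)constant
```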
Comparison of optimization algorithms in intensity-modulated radiation therapy planning
NASA Astrophysics Data System (ADS)
Kendrick, Rachel
Intensity-modulated radiation therapy is used to better conform the radiation dose to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. The algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but its avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
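A minimal sketch of the quadratic-programming formulation such planners optimize, assuming a toy dose-influence matrix A mapping beamlet weights x to voxel doses and a prescription d (nothing here reflects CERR's actual code):

```python
import numpy as np
from scipy.optimize import nnls

# Toy dose-influence matrix: dose = A @ x, with nonnegative beamlet weights.
rng = np.random.default_rng(0)
A = rng.random((300, 50))             # 300 voxels, 50 beamlets
d = np.ones(300)                      # prescribed dose per voxel (toy)

# The quadratic objective ||A x - d||^2 with x >= 0 is a nonnegative
# least-squares problem, one simple form of the QP used in planning.
x, residual = nnls(A, d)
print("active beamlets:", np.sum(x > 0), "residual:", residual)
```

Real planners weight target and organ-at-risk voxels differently in the objective; that amounts to row-scaling A and d before the solve.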
Flanders, W Dana; Strickland, Matthew J; Klein, Mitchel
2017-05-15
Methods exist to detect residual confounding in epidemiologic studies. One requires a negative control exposure with 2 key properties: 1) conditional independence of the negative control and the outcome (given modeled variables) absent confounding and other model misspecification, and 2) associations of the negative control with uncontrolled confounders and the outcome. We present a new method to partially correct for residual confounding: when confounding is present and our assumptions hold, we argue that estimators from models that include a negative control exposure with these 2 properties tend to be less biased than those from models without it. Using regression theory, we provide theoretical arguments that support our claims. We then empirically evaluated the approach in simulations based on a time-series study of ozone effects on asthma emergency department visits. In these simulations, effect estimators from models that included the negative control exposure (ozone concentrations 1 day after the emergency department visit) had slightly or modestly less residual confounding than those from models without it. Theory and simulations show that including the negative control can reduce residual confounding, if our assumptions hold. Our method differs from available methods because it uses a regression approach involving an exposure-based indicator rather than a negative control outcome to partially correct for confounding. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
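A sketch of the regression idea, assuming a Poisson time-series model of daily asthma visits in which next-day ozone serves as the negative control exposure (all variable names, parameters, and data are illustrative, not the study's):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
# Autocorrelated unmeasured confounder (e.g., weather) driving both ozone
# and visits; it is deliberately left out of the fitted models.
weather = np.zeros(n)
for t in range(1, n):
    weather[t] = 0.8 * weather[t - 1] + rng.normal()
ozone = 40 + 10 * weather + rng.normal(0, 5, n)
visits = rng.poisson(np.exp(1.0 + 0.15 * weather))  # true ozone effect: zero
ozone_lead = np.roll(ozone, -1)   # ozone 1 day AFTER the visit:
                                  # the negative control exposure

X1 = sm.add_constant(ozone)
X2 = sm.add_constant(np.column_stack([ozone, ozone_lead]))
fit1 = sm.GLM(visits, X1, family=sm.families.Poisson()).fit()
fit2 = sm.GLM(visits, X2, family=sm.families.Poisson()).fit()
print("ozone coef, no control: ", fit1.params[1])   # confounded upward
print("ozone coef, with control:", fit2.params[1])  # typically less biased
```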
ERIC Educational Resources Information Center
Wankat, Phillip C.
1984-01-01
Discusses a simple method for following the movement of a solute in an adsorption or ion exchange system. This movement is used to study a variety of operational methods, including continuous flow and pulsed flow counter-current operations and simulated counter-current systems. Effect of changing thermodynamic variables is also considered. (JM)
Quantum Fragment Based ab Initio Molecular Dynamics for Proteins.
Liu, Jinfeng; Zhu, Tong; Wang, Xianwei; He, Xiao; Zhang, John Z H
2015-12-08
Developing ab initio molecular dynamics (AIMD) methods for practical application in protein dynamics is of significant interest. Due to the large size of biomolecules, applying standard quantum chemical methods to compute energies for dynamic simulation is computationally prohibitive. In this work, a fragment-based ab initio molecular dynamics approach is presented for practical application in protein dynamics studies. In this approach, the energy and forces of the protein are calculated by a recently developed electrostatically embedded generalized molecular fractionation with conjugate caps (EE-GMFCC) method. For simulation in explicit solvent, mechanical embedding is introduced to treat protein interaction with explicit water molecules. This AIMD approach has been applied to MD simulations of a small benchmark protein, Trp-cage (with 20 residues and 304 atoms), in both the gas phase and in solution. Comparison to the simulation result using the AMBER force field shows that AIMD gives a more stable protein structure in the simulation, indicating that the quantum chemical energy is more reliable. Importantly, the present fragment-based AIMD simulation captures quantum effects, including electrostatic polarization and charge transfer, that are missing in standard classical MD simulations. The current approach is linear-scaling, trivially parallel, and applicable to performing AIMD simulations of proteins of large size.
SSAGES: Software Suite for Advanced General Ensemble Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sidky, Hythem; Colón, Yamil J.; Helfferich, Julian
Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods, and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulations packages. SSAGES allows facile application of a variety of enhanced sampling techniques—including adaptive biasing force, string methods, and forward flux sampling—that extract meaningful free energy and transition path data from all-atom and coarse grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite.
SSAGES: Software Suite for Advanced General Ensemble Simulations.
Sidky, Hythem; Colón, Yamil J; Helfferich, Julian; Sikora, Benjamin J; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S; Reid, Daniel R; Sevgen, Emre; Thapar, Vikram; Webb, Michael A; Whitmer, Jonathan K; de Pablo, Juan J
2018-01-28
Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulations packages. SSAGES allows facile application of a variety of enhanced sampling techniques-including adaptive biasing force, string methods, and forward flux sampling-that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.
SSAGES: Software Suite for Advanced General Ensemble Simulations
NASA Astrophysics Data System (ADS)
Sidky, Hythem; Colón, Yamil J.; Helfferich, Julian; Sikora, Benjamin J.; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z.; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J.; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S.; Reid, Daniel R.; Sevgen, Emre; Thapar, Vikram; Webb, Michael A.; Whitmer, Jonathan K.; de Pablo, Juan J.
2018-01-01
Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulations packages. SSAGES allows facile application of a variety of enhanced sampling techniques—including adaptive biasing force, string methods, and forward flux sampling—that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.
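As a generic illustration of one technique in the SSAGES catalog (not SSAGES' own API or input format): an adaptive biasing force method accumulates the running mean force along a collective variable in bins and applies its negative as a bias, progressively flattening the free energy landscape. A self-contained toy version on a double-well potential:

```python
import numpy as np

rng = np.random.default_rng(0)

# Double-well potential F(x) = (x^2 - 1)^2; force = -dF/dx (toy CV dynamics).
force = lambda x: -4.0 * x * (x * x - 1.0)

nbins, lo, hi = 50, -2.0, 2.0
sum_f = np.zeros(nbins)               # accumulated instantaneous force
count = np.zeros(nbins)               # samples per bin

x, dt, kT = -1.0, 1e-3, 1.0
for step in range(100_000):           # overdamped Langevin dynamics
    b = min(nbins - 1, max(0, int((x - lo) / (hi - lo) * nbins)))
    f = force(x)
    sum_f[b] += f
    count[b] += 1
    bias = -sum_f[b] / count[b]       # ABF: oppose the running mean force
    x += (f + bias) * dt + np.sqrt(2 * kT * dt) * rng.normal()

# The accumulated mean force estimates dF/dx; integrate for the free energy.
mean_force = np.where(count > 0, sum_f / count, 0.0)
free_energy = -np.cumsum(mean_force) * (hi - lo) / nbins
```

In SSAGES itself, the equivalent choice of method, collective variable, and grid is made through its input configuration rather than user code like this.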
Nonholonomic Hamiltonian Method for Meso-macroscale Simulations of Reacting Shocks
NASA Astrophysics Data System (ADS)
Fahrenthold, Eric; Lee, Sangyup
2015-06-01
The seamless integration of macroscale, mesoscale, and molecular scale models of reacting shock physics has been hindered by dramatic differences in the model formulation techniques normally used at different scales. In recent research the authors have developed the first unified discrete Hamiltonian approach to multiscale simulation of reacting shock physics. Unlike previous work, the formulation employs reacting thermomechanical Hamiltonian formulations at all scales, including the continuum, and it employs a nonholonomic modeling approach to systematically couple the models developed at the different scales. Example applications of the method show meso-macroscale shock-to-detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.
Probabilistic simulation of uncertainties in composite uniaxial strengths
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Stock, T. A.
1990-01-01
Probabilistic composite micromechanics methods are developed that simulate uncertainties in unidirectional fiber composite strengths. These methods take the form of computational procedures using composite mechanics with Monte Carlo simulation. The variables for which uncertainties are accounted include the constituent strengths and their respective scatter. A graphite/epoxy unidirectional composite (ply) is studied to illustrate the procedure and its effectiveness in formally estimating the probable scatter in the composite uniaxial strengths. The results show that the ply longitudinal tensile and compressive, transverse compressive and intralaminar shear strengths are not sensitive to single fiber anomalies (breaks, interfacial disbonds, matrix microcracks); however, the ply transverse tensile strength is.
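A minimal sketch of the Monte Carlo procedure described above, assuming a simple rule-of-mixtures strength model and Weibull-distributed constituent strengths (placeholder parameters, not the paper's graphite/epoxy data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                  # Monte Carlo samples

# Weibull-distributed constituent strengths (shape/scale are assumptions).
fiber = 3500.0 * rng.weibull(10.0, n)        # fiber strength, MPa
matrix = 80.0 * rng.weibull(8.0, n)          # matrix strength, MPa
vf = 0.6                                     # fiber volume fraction

# Simple rule-of-mixtures longitudinal tensile strength of the ply.
ply_strength = vf * fiber + (1.0 - vf) * matrix

print("mean: %.0f MPa, CoV: %.1f%%" %
      (ply_strength.mean(), 100 * ply_strength.std() / ply_strength.mean()))
```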
Simulated moving bed system for CO₂ separation, and method of same
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, Jeannine Elizabeth; Copeland, Robert James; Lind, Jeff
A system and method for separating and/or purifying CO₂ gas from a CO₂ feed stream is described. The system and method include a plurality of fixed sorbent beds, adsorption zones and desorption zones, where the sorbent beds are connected via valves and lines to create a simulated moving bed system, in which the sorbent beds move from one adsorption position to another adsorption position, then from one regeneration position to another regeneration position, and optionally back to an adsorption position. The system and method operate by concentration swing adsorption/desorption and by adsorptive/desorptive displacement.
Modeling Electrokinetic Flows by the Smoothed Profile Method
Luo, Xian; Beskok, Ali; Karniadakis, George Em
2010-01-01
We propose an efficient modeling method for electrokinetic flows based on the Smoothed Profile Method (SPM) [1–4] and spectral element discretizations. The new method allows for arbitrary differences in the electrical conductivities between the charged surfaces and the surrounding electrolyte solution. The electrokinetic forces are included in the flow equations so that the Poisson-Boltzmann and electric charge continuity equations are cast into forms suitable for SPM. The method is validated on benchmark problems of electroosmotic flow in straight channels and electrophoresis of charged cylinders. We also present simulation results for electrophoresis of charged microtubules, and show that the simulated electrophoretic mobility and anisotropy agree with the experimental values. PMID:20352076
NASA Astrophysics Data System (ADS)
Abdi, Mohamad; Hajihasani, Mojtaba; Gharibzadeh, Shahriar; Tavakkoli, Jahan
2012-12-01
Ultrasound waves have been widely used in diagnostic and therapeutic medical applications. Accurate and effective simulation of ultrasound beam propagation and its interaction with tissue has proved to be important. The nonlinear nature of ultrasound beam propagation, especially in the therapeutic regime, plays an important role in the mechanisms of interaction with tissue. There are three main approaches among current computational fluid dynamics (CFD) methods to model and simulate nonlinear ultrasound beams: macroscopic, mesoscopic and microscopic approaches. In this work, a mesoscopic CFD method based on the lattice-Boltzmann model (LBM) was investigated. In the developed method, the Boltzmann equation is evolved to simulate the flow of a Newtonian fluid with the collision model, instead of solving the Navier-Stokes, continuity and state equations used in conventional CFD methods. The LBM has some prominent advantages over conventional CFD methods, including: (1) its parallel computational nature; (2) taking microscopic boundaries into account; and (3) the capability of simulating in porous and inhomogeneous media. In our proposed method, the propagating medium is discretized with a square grid in 2 dimensions with 9 velocity vectors for each node. Using the developed model, the nonlinear distortion and shock front development of a finite-amplitude diffractive ultrasonic beam in a dissipative fluid medium was computed and validated against published data. The results confirm that the LBM is an accurate and effective approach to model and simulate nonlinearity in finite-amplitude ultrasound beams with Mach numbers of up to 0.01, which falls within the range of the therapeutic ultrasound regime, including high intensity focused ultrasound (HIFU) beams. A comparison between HIFU nonlinear beam simulations using the proposed model and pseudospectral methods in a 2D geometry is presented.
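The 2-D, 9-velocity lattice the abstract describes is the standard D2Q9 arrangement; a compact stream-and-collide loop with the BGK collision model might look like the following (isothermal and without the paper's acoustic source terms, purely a structural sketch):

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
nx, ny, tau = 128, 128, 0.6                     # grid size, relaxation time

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny))
rho[60:68, 60:68] = 1.05                        # small density pulse (toy)
f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))

for step in range(500):
    rho = f.sum(axis=0)                          # macroscopic moments
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau   # BGK collision
    for i in range(9):                           # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
```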
NASA Astrophysics Data System (ADS)
Chen, Tzikang J.; Shiao, Michael
2016-04-01
This paper verifies a generic and efficient assessment concept for probabilistic fatigue life management. The concept is developed based on an integration of damage tolerance methodology, simulation methods [1, 2], and a probabilistic algorithm, RPI (recursive probability integration) [3-9], considering maintenance for damage tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subject to various uncertainties, such as the variability in material properties including crack growth rate, initial flaw size, repair quality, random process modeling of flight loads for failure analysis, and inspection reliability represented by probability of detection (POD). In addition, unlike traditional Monte Carlo simulation (MCS), which requires a rerun of the MCS when the maintenance plan is changed, RPI can repeatedly use a small set of baseline random crack growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. In order to fully appreciate the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion trials were conducted for various flight conditions, material properties, inspection scheduling, POD and repair/replacement strategies. Since MC simulations are time-consuming, the simulations were run in parallel on DoD High Performance Computing (HPC) systems using a specialized random number generator for parallel computing. The study has shown that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulation.
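The key efficiency idea, reusing one set of baseline crack-growth histories across maintenance plans, can be sketched as follows (toy crack model and POD curve; nothing here reproduces the RPI formulation itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n, horizon = 20_000, 20_000                  # histories, flight hours
a_crit = 25.0                                # critical crack size, mm (toy)

# One baseline Monte Carlo set: exponential crack growth with random
# initial flaw size and growth rate (placeholder variability models).
a0 = rng.lognormal(-2.0, 0.5, n)
rate = rng.lognormal(-8.8, 0.3, n)
t = np.arange(0, horizon, 100.0)
a = a0[:, None] * np.exp(rate[:, None] * t[None, :])  # crack histories

def failure_prob(inspections, pod50=5.0):
    """Evaluate a maintenance plan on the SAME baseline histories:
    a crack is repaired if detected at an inspection (logistic POD);
    failure occurs if an undetected crack reaches a_crit."""
    undetected = np.ones(n, dtype=bool)
    for t_insp in inspections:
        k = int(t_insp // 100)
        pod = 1.0 / (1.0 + np.exp(-(a[:, k] - pod50)))  # toy POD curve
        undetected &= rng.random(n) >= pod              # detected -> repaired
    return np.mean(undetected & (a[:, -1] >= a_crit))

for plan in ([], [10_000], [7_000, 14_000]):
    print(plan, failure_prob(plan))   # compare plans without re-simulating
```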
Bland, Andrew J; Tobbell, Jane
2015-11-01
Simulation has become an established feature of undergraduate nurse education and as such requires extensive investigation. Research limited to pre-constructed categories imposed by some questionnaire and interview methods may provide only partial understanding. This is problematic for understanding the mechanisms of learning in simulation-based education, as contemporary distributed theories of learning posit that learning can be understood as the interaction of individual identity with context. This paper details a method of data collection and analysis that captures the interaction of individuals within the simulation experience and can be analysed through multiple lenses, including context, and through the lens of both researcher and learner. The study utilised a grounded theory approach involving 31 undergraduate third-year student nurses. Data were collected and analysed through non-participant observation, digital recordings of simulation activity and focus group deconstruction of the recorded simulations by the participants and researcher. Focus group interviews enabled further clarification. The method revealed multiple levels of dynamic data, leading to the conclusion that, in order to better understand how students learn in social and active learning strategies, dynamic data are required that enable researchers and participants to unpack what is happening as it unfolds in action. Copyright © 2015 Elsevier Ltd. All rights reserved.
Effect of Turbulence Modeling on an Excited Jet
NASA Technical Reports Server (NTRS)
Brown, Clifford A.; Hixon, Ray
2010-01-01
The flow dynamics in a high-speed jet are dominated by unsteady turbulent flow structures in the plume. Jet excitation seeks to control these flow structures through the natural instabilities present in the initial shear layer of the jet. Understanding and optimizing the excitation input, for jet noise reduction or plume mixing enhancement, requires many trials that may be done experimentally or, at a significant cost savings, computationally. Numerical simulations, which model various parts of the unsteady dynamics to reduce the computational expense of the simulation, must adequately capture the unsteady flow dynamics in the excited jet if the results are to be used. Four CFD methods are considered for use in an excited jet problem, including two turbulence models with an Unsteady Reynolds-Averaged Navier-Stokes (URANS) solver, one Large Eddy Simulation (LES) solver, and one URANS/LES hybrid method. Each method is used to simulate a simplified excited jet, and the results are evaluated based on the flow data, computation time, and numerical stability. The knowledge gained about the effect of turbulence modeling and CFD methods from these basic simulations will guide and assist future three-dimensional (3-D) simulations aimed at understanding and optimizing a realistic excited jet for a particular application.
Wimberley, Catriona J; Fischer, Kristina; Reilhac, Anthonin; Pichler, Bernd J; Gregoire, Marie Claude
2014-10-01
The partial saturation approach (PSA) is a simple, single-injection experimental protocol that estimates both B(avail) and appK(D) without the use of blood sampling. This makes it ideal for use in longitudinal studies of neurodegenerative diseases in the rodent. The aim of this study was to increase the range and applicability of the PSA by developing a data-driven strategy for determining reliable regional estimates of receptor density (B(avail)) and in vivo affinity (1/appK(D)), and to validate the strategy using a simulation model. The data-driven method uses a time window guided by the dynamic equilibrium state of the system, as opposed to a static time window. To test the method, simulations of partial saturation experiments were generated and validated against experimental data. The experimental conditions simulated included a range of receptor occupancy levels and three different B(avail) and appK(D) values to mimic disease states. The effect of using a reference region and typical PET noise on the stability and accuracy of the estimates was also investigated. The investigations showed that the parameter estimates in a simulated healthy mouse, using the data-driven method, were within 10±30% of the simulated input for the range of occupancy levels simulated. Throughout all experimental conditions simulated, the accuracy and robustness of the estimates using the data-driven method were much improved over the typical method of using a static time window, especially at low receptor occupancy levels. Introducing a reference region caused a bias of approximately 10% over the range of occupancy levels. Based on extensive simulated experimental conditions, it was shown that the data-driven method provides accurate and precise estimates of B(avail) and appK(D) for a broader range of conditions compared to the original method. Copyright © 2014 Elsevier Inc. All rights reserved.
Neurosurgery simulation using non-linear finite element modeling and haptic interaction
NASA Astrophysics Data System (ADS)
Lee, Huai-Ping; Audette, Michel; Joldes, Grand R.; Enquobahrie, Andinet
2012-02-01
Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems, and coarse volumetric meshes. However, these systems are not clinically realistic. We present here ongoing work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element operations. We employ a virtual coupling method for separating deformable body simulation and collision detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation. The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the material properties of the tissue and the speed of colliding objects. Hence, additional efforts, including dynamic relaxation, are required to improve the stability of the system.
Fang, Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S; Badano, Aldo
2012-01-01
The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport, taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to the external field and Coulombic interactions, and diffusion due to Brownian motion. Pulse-height spectra (PHS) have been simulated under different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown: one uses the entire PHS distribution, and the other uses the photopeak only, ignoring contributions from Compton scattering and K-fluorescence. Experimental measurements and simulations differ by approximately 2%. The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. Trapping and recombination models are all shown to affect the Swank factor.
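The Swank factor itself is computed from moments of the pulse-height spectrum, I = M1²/(M0·M2), where Mn is the n-th moment of the pulse-height distribution. A short sketch of the two calculation routes mentioned above (full PHS versus photopeak-only), on a synthetic spectrum:

```python
import numpy as np

def swank_factor(heights, counts):
    """Swank factor I = M1^2 / (M0 * M2) from a pulse-height spectrum."""
    m0 = np.sum(counts)
    m1 = np.sum(counts * heights)
    m2 = np.sum(counts * heights**2)
    return m1**2 / (m0 * m2)

# Toy PHS: Gaussian photopeak plus a low-energy tail (standing in for
# Compton scattering and K-fluorescence contributions).
h = np.linspace(0, 100, 1000)
phs = np.exp(-0.5 * ((h - 80) / 4) ** 2) + 0.1 * np.exp(-h / 30)

print("full PHS:      ", swank_factor(h, phs))
print("photopeak only:", swank_factor(h[h > 60], phs[h > 60]))
```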
On Characterizing Particle Shape
NASA Technical Reports Server (NTRS)
Ennis, Bryan J.; Rickman, Douglas; Rollins, A. Brent; Ennis, Brandon
2014-01-01
It is well known that particle shape affects the flow characteristics of granular materials, as well as a variety of other solids processing issues such as compaction, rheology, filtration and other two-phase flow problems. The impact of shape crosses many diverse and commercially important applications, including pharmaceuticals, civil engineering, metallurgy, health, and food processing. Two applications studied here are the dry solids flow of lunar simulants (e.g., JSC-1, NU-LHT-2M, OB-1) and the flow properties of wet concrete, including final compressive strength. A generalized, multi-dimensional engineering method to quantitatively characterize particle shape has been developed, applicable to both single-particle orientation and multi-particle assemblies. The two-dimension to three-dimension inversion problem is also treated, and the application of these methods to DEM model particles is discussed. In the case of lunar simulants, the flow properties of six simulants have been measured, and the impact of particle shape on flowability, as characterized by the shape method developed here, is discussed, especially in the context of three simulants of similar size range. In the context of concrete processing, concrete construction is a major contributor to greenhouse gas production, the major contributor being the cement binder loading. Any optimization of concrete rheology and packing that can reduce cement loading and improve strength can also reduce currently required construction safety factors. The characterization approach here is also demonstrated for the impact of rock aggregate shape on concrete slump rheology and dry compressive strength.
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
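The effect is easy to reproduce on a toy model. Below is a sketch contrasting first-order explicit fixed-step integration with an adaptive-step, error-controlled solver for a single linear reservoir, dS/dt = p(t) − kS (an illustration of the numerical issue only, not the paper's hydrologic model):

```python
import numpy as np
from scipy.integrate import solve_ivp

k, T = 2.0, 10.0
precip = lambda t: 5.0 * (np.sin(0.5 * t) > 0.8)   # intermittent rain pulses
dSdt = lambda t, S: precip(t) - k * S

# First-order explicit Euler with a fixed step. With k * dt = 2 this
# scheme sits on its stability boundary and oscillates artificially.
dt, S = 1.0, 0.0
for t in np.arange(0.0, T, dt):
    S += dt * dSdt(t, S)

# Adaptive-step solver with error control for comparison.
sol = solve_ivp(dSdt, (0.0, T), [0.0], rtol=1e-8, atol=1e-10)
print("fixed-step Euler S(T):", S, "  adaptive S(T):", sol.y[0, -1])
```

In an MCMC setting, such integration artifacts vary with the parameter values being sampled, which is how they distort the posterior rather than merely the single simulation.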
Sim3C: simulation of Hi-C and Meta3C proximity ligation sequencing technologies.
DeMaere, Matthew Z; Darling, Aaron E
2018-02-01
Chromosome conformation capture (3C) and Hi-C DNA sequencing methods have rapidly advanced our understanding of the spatial organization of genomes and metagenomes. Many variants of these protocols have been developed, each with their own strengths. Currently there is no systematic means for simulating sequence data from this family of sequencing protocols, potentially hindering the advancement of algorithms to exploit this new datatype. We describe a computational simulator that, given simple parameters and reference genome sequences, will simulate Hi-C sequencing on those sequences. The simulator models the basic spatial structure in genomes that is commonly observed in Hi-C and 3C datasets, including the distance-decay relationship in proximity ligation, differences in the frequency of interaction within and across chromosomes, and the structure imposed by cells. A means to model the 3D structure of randomly generated topologically associating domains is provided. The simulator considers several sources of error common to 3C and Hi-C library preparation and sequencing methods, including spurious proximity ligation events and sequencing error. We have introduced the first comprehensive simulator for 3C and Hi-C sequencing protocols. We expect the simulator to have use in testing of Hi-C data analysis algorithms, as well as more general value for experimental design, where questions such as the required depth of sequencing, enzyme choice, and other decisions can be made in advance in order to ensure adequate statistical power with respect to experimental hypothesis testing.
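A sketch of how a distance-decay relationship can be modeled when drawing intra-chromosomal read pairs (a truncated power-law separation distribution is one common empirical choice; the parameters here are placeholders, not Sim3C's defaults):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pair(genome_len, alpha=1.2, min_sep=1000):
    """Draw one intra-chromosomal contact: a first position uniform on
    the chromosome, and a separation from a truncated power law
    P(s) ~ s^-alpha (the distance-decay relationship)."""
    x1 = int(rng.integers(0, genome_len))
    u = rng.random()
    # Inverse-CDF sampling of a Pareto-like separation.
    sep = int(min_sep * (1 - u) ** (-1.0 / (alpha - 1.0)))
    x2 = min(genome_len - 1, x1 + sep)
    return x1, x2

pairs = [sample_pair(5_000_000) for _ in range(100_000)]
seps = np.array([abs(b - a) for a, b in pairs])
print("median separation:", np.median(seps))  # short-range contacts dominate
```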
Chen, P P; Tsui, N Tk; Fung, A Sw; Chiu, A Hf; Wong, W Cw; Leong, H T; Lee, P Sf; Lau, J Yw
2017-08-01
The implementation of a new clinical service is associated with anxiety and challenges that may prevent smooth and safe execution of the service. Unexpected issues may not be apparent until the actual clinical service commences. We present a novel approach to testing a new clinical setting before actual implementation of our endovascular aortic repair service. In-situ simulation at the new clinical location would enable identification of potential process and system issues prior to implementation of the service. After preliminary planning, a simulation test utilising a case scenario, with actual simulation of the entire care process, was carried out to identify any logistic, equipment, setting or clinical workflow issues, and to trial a contingency plan for a surgical complication. All patient care, including anaesthetic, surgical, and nursing procedures and processes, was simulated and tested. Overall, 17 vital process and system issues were identified during the simulation as potential clinical concerns. They included difficult patient positioning, draping pattern, unsatisfactory equipment setup, inadequate critical surgical instruments, blood product logistics, and inadequate nursing support during a crisis. In-situ simulation provides an innovative method to identify critical deficiencies and unexpected issues before implementation of a new clinical service. Life-threatening and serious practical issues can be identified and corrected before the formal service commences. This article describes our experience with the use of simulation in pre-implementation testing of a clinical process or service. We found the method useful and would recommend it to others.
Wan, Xiaohua; Katchalski, Tsvi; Churas, Christopher; Ghosh, Sreya; Phan, Sebastien; Lawrence, Albert; Hao, Yu; Zhou, Ziying; Chen, Ruijuan; Chen, Yu; Zhang, Fa; Ellisman, Mark H
2017-05-01
Because of the significance of electron microscope tomography in the investigation of biological structure at nanometer scales, improvement efforts have been continuous over recent years. This is particularly true in the case of software developments. Nevertheless, verification of improvements delivered by new algorithms and software remains difficult. Current analysis tools do not provide adaptable and consistent methods for quality assessment. This is particularly true with images of biological samples, due to image complexity, variability, low contrast and noise. We report an electron tomography (ET) simulator with accurate ray optics modeling of image formation that includes curvilinear trajectories through the sample, warping of the sample and noise. As a demonstration of the utility of our approach, we have concentrated on providing verification of the class of reconstruction methods applicable to wide field images of stained plastic-embedded samples. Accordingly, we have also constructed digital phantoms derived from serial block face scanning electron microscope images. These phantoms are also easily modified to include alignment features to test alignment algorithms. The combination of more realistic phantoms with more faithful simulations facilitates objective comparison of acquisition parameters, alignment and reconstruction algorithms and their range of applicability. With proper phantoms, this approach can also be modified to include more complex optical models, including distance-dependent blurring and phase contrast functions, such as may occur in cryotomography. Copyright © 2017 Elsevier Inc. All rights reserved.
Carolan-Olah, Mary; Kruger, Gina; Brown, Vera; Lawton, Felicity; Mazzarino, Melissa; Vasilevski, Vidanka
2018-03-01
Midwifery students feel unprepared to deal with commonly encountered emergencies, such as neonatal resuscitation. Clinical simulation of emergencies may provide a safe forum for students to develop necessary skills. A simulation exercise for neonatal resuscitation was developed and evaluated using qualitative methods. Pre- and post-simulation questions focussed on student confidence and knowledge of resuscitation. Data were analysed using a thematic analysis approach. Pre-simulation questions revealed that most students considered themselves not very confident/unsure about their level of confidence in undertaking neonatal resuscitation. Most correctly identified features of the neonate requiring resuscitation. Post-simulation, students indicated that their confidence and knowledge of neonatal resuscitation had improved. Themes included: gaining confidence; understanding when to call for help; understanding the principles of resuscitation; and tailoring simulation/education approaches to student needs. Student benefits included improved knowledge, confidence and skills. Participants unanimously suggested a program of simulation exercises, over a longer period of time, to reinforce knowledge and confidence gains. Ideally, students would like to actively participate in the simulation, rather than observe. Copyright © 2017. Published by Elsevier Ltd.
Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease
NASA Astrophysics Data System (ADS)
Marsden, Alison
2009-11-01
Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures, and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and layer these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.
Hybrid Method for Power Control Simulation of a Single Fluid Plasma Thruster
NASA Astrophysics Data System (ADS)
Jaisankar, S.; Sheshadri, T. S.
2018-05-01
Propulsive plasma flow through a cylindrical-conical diverging thruster is simulated by a power-controlled hybrid method to obtain the basic flow, thermodynamic, and electromagnetic variables. The simulation is based on a single-fluid model in which the electromagnetics is described by the potential Poisson equation, Maxwell's equations, and Ohm's law, while the compressible fluid dynamics is described by the Navier-Stokes equations in cylindrical form. The proposed method solves the electromagnetics and the fluid dynamics separately, both to segregate the two disparate time scales for efficient computation and to deliver voltage-controlled rated power. The magnetic transport is solved for steady state, while the fluid dynamics is allowed to evolve in time along with an electromagnetic source, using schemes based on generalized finite difference discretization. The multistep methodology with power control is employed to simulate fully ionized propulsive flow of argon plasma through the thruster. The numerical solution shows convergence of every part of the solver, including grid stability, so that the multistep hybrid method converges at the rated power delivery. Simulation results are in reasonable agreement with the reported physics of plasma flow in the thruster, indicating the potential utility of this hybrid computational framework, especially when a single-fluid approximation of the plasma is relevant.
Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, P.; Frankel, S. H.; Adumitroaie, V.; Sabini, G.; Madnia, C. K.
1993-01-01
The primary objective of this research is to extend current capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. Our efforts in the first two years of this research have been concentrated on a priori investigations of single-point Probability Density Function (PDF) methods for providing subgrid closures in reacting turbulent flows. In the efforts initiated in the third year, our primary focus has been on performing actual LES by means of PDF methods. The approach is based on assumed PDF methods, and we have performed extensive analysis of turbulent reacting flows by means of LES. This includes simulations of both three-dimensional (3D) isotropic compressible flows and two-dimensional reacting planar mixing layers. In addition to these LES analyses, some work is in progress to assess the extent of validity of our assumed PDF methods. This assessment is done by making detailed comparisons with recent laboratory data in predicting the rate of reactant conversion in parallel reacting shear flows. This report provides a summary of our achievements for the first six months of the third year of this program.
Saurin, Tarcisio Abreu; Wachs, Priscila; Righi, Angela Weber; Henriqson, Eder
2014-07-01
Although scenario-based training (SBT) can be an effective means to help workers develop resilience skills, it has not yet been analyzed from the resilience engineering (RE) perspective. This study introduces a five-stage method for designing SBT from the RE view: (a) identification of resilience skills, work constraints and actions for re-designing the socio-technical system; (b) design of template scenarios, allowing the simulation of the work constraints and the use of resilience skills; (c) design of the simulation protocol, which includes briefing, simulation and debriefing; (d) implementation of both scenarios and simulation protocol; and (e) evaluation of the scenarios and simulation protocol. It is reported how the method was applied in an electricity distribution company, in order to train grid electricians. The study was framed as an application of design science research, and five research outputs are discussed: method, constructs, model of the relationships among constructs, instantiations of the method, and theory building. Concerning the last output, the operationalization of the RE perspective on three elements of SBT is presented: identification of training objectives; scenario design; and debriefing. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.
2017-06-01
A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy until the simulated and acquired images match. A numerical image model is presented consisting of three parts: an edge model, an object and background brightness distribution model, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally efficient procedure for the merit function calculation, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point location. The method is capable of compensating lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method, using a digital micromirror device to physically simulate an object with known edge geometry, is shown. Experimental results for various high-temperature materials within the temperature range of 1000°C to 2400°C are presented.
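A one-dimensional analogue of this fit-by-simulation idea can be sketched as follows, with a blurred-step edge model and SciPy's conjugate-gradient optimizer standing in for the authors' 2D Fourier-domain simulation; all model parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

x = np.arange(64.0)                      # pixel coordinates along one row

def model(p):
    """Blurred step edge: subpixel position x0, blur width sigma,
    background level b, and object brightness a."""
    x0, sigma, b, a = p
    return b + 0.5 * a * (1.0 + erf((x - x0) / (np.sqrt(2.0) * sigma)))

# synthetic "acquired" row with noise; the true edge sits at x = 31.37
rng = np.random.default_rng(0)
acquired = model([31.37, 1.8, 10.0, 100.0]) + rng.normal(0.0, 1.0, x.size)

merit = lambda p: np.sum((model(p) - acquired) ** 2)   # L2 distance
fit = minimize(merit, x0=[30.0, 2.0, 0.0, 90.0], method='CG')
print(fit.x[0])   # recovered edge position, typically within ~0.05 px
```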
Well-balanced compressible cut-cell simulation of atmospheric flow.
Klein, R; Bates, K R; Nikiforakis, N
2009-11-28
Cut-cell meshes present an attractive alternative to terrain-following coordinates for the representation of topography within atmospheric flow simulations, particularly in regions of steep topographic gradients. In this paper, we present an explicit two-dimensional method for the numerical solution on such meshes of atmospheric flow equations including gravitational sources. This method is fully conservative and allows for time steps determined by the regular grid spacing, avoiding potential stability issues due to arbitrarily small boundary cells. We believe that the scheme is unique in that it is developed within a dimensionally split framework, in which each coordinate direction in the flow is solved independently at each time step. Other notable features of the scheme are: (i) its conceptual and practical simplicity, (ii) its flexibility with regard to the one-dimensional flux approximation scheme employed, and (iii) the well-balancing of the gravitational sources allowing for stable simulation of near-hydrostatic flows. The presented method is applied to a selection of test problems including buoyant bubble rise interacting with geometry and lee-wave generation due to topography.
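The dimensionally split structure mentioned here, where each coordinate direction is solved independently within a time step, can be sketched in its plainest (Godunov-splitting) form; the periodic boundaries and the upwind example flux below are illustrative assumptions and omit the paper's cut-cell and well-balancing machinery.

```python
import numpy as np

def step_dimensionally_split(U, dt, flux_x, flux_y, dx, dy):
    """One dimensionally split finite-volume step: the x-direction update
    is applied over the whole grid, then the y-direction update is applied
    to the intermediate result (periodic boundaries via np.roll)."""
    Fx = flux_x(U)                                   # x-interface fluxes
    U1 = U - dt / dx * (Fx - np.roll(Fx, 1, axis=0))
    Fy = flux_y(U1)                                  # y-interface fluxes
    return U1 - dt / dy * (Fy - np.roll(Fy, 1, axis=1))

# example: linear advection with positive speeds, first-order upwind flux
ax_, ay_ = 1.0, 0.5
xs = np.linspace(0, 1, 64)
U = np.exp(-((xs[:, None] - 0.5) ** 2 + (xs[None, :] - 0.5) ** 2) / 0.01)
U = step_dimensionally_split(U, 0.005, lambda u: ax_ * u, lambda u: ay_ * u,
                             dx=1 / 64, dy=1 / 64)
```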
Hart, Carl R; Reznicek, Nathan J; Wilson, D Keith; Pettit, Chris L; Nykaza, Edward T
2016-05-01
Many outdoor sound propagation models exist, ranging from highly complex physics-based simulations to simplified engineering calculations, and more recently, highly flexible statistical learning methods. Several engineering and statistical learning models are evaluated by using a particular physics-based model, namely, a Crank-Nicholson parabolic equation (CNPE), as a benchmark. Narrowband transmission loss values predicted with the CNPE, based upon a simulated data set of meteorological, boundary, and source conditions, act as simulated observations. In the simulated data set sound propagation conditions span from downward refracting to upward refracting, for acoustically hard and soft boundaries, and low frequencies. Engineering models used in the comparisons include the ISO 9613-2 method, Harmonoise, and Nord2000 propagation models. Statistical learning methods used in the comparisons include bagged decision tree regression, random forest regression, boosting regression, and artificial neural network models. Computed skill scores are relative to sound propagation in a homogeneous atmosphere over a rigid ground. Overall skill scores for the engineering noise models are 0.6%, -7.1%, and 83.8% for the ISO 9613-2, Harmonoise, and Nord2000 models, respectively. Overall skill scores for the statistical learning models are 99.5%, 99.5%, 99.6%, and 99.6% for bagged decision tree, random forest, boosting, and artificial neural network regression models, respectively.
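The skill scores can be read as relative error reductions against the stated baseline. A minimal sketch, assuming the common mean-squared-error definition of skill relative to the homogeneous-atmosphere/rigid-ground reference (the paper may use a different error measure):

```python
import numpy as np

def skill_score(pred, obs, baseline):
    """Skill relative to a reference prediction, in percent:
    100% = perfect, 0% = no better than the baseline, negative = worse."""
    mse = np.mean((pred - obs) ** 2)
    mse_ref = np.mean((baseline - obs) ** 2)
    return 100.0 * (1.0 - mse / mse_ref)
```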
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gehin, Jess C; Godfrey, Andrew T; Evans, Thomas M
The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications, including a core simulation capability called VERA-CS. A key milestone for this endeavor is to validate VERA against measurements from operating nuclear power reactors. The first step in validation against plant data is to determine the ability of VERA to accurately simulate the initial startup physics tests for Watts Bar Nuclear Power Station, Unit 1 (WBN1) cycle 1. VERA-CS calculations were performed with the Insilico code developed at ORNL using cross section processing from the SCALE system and the transport capabilities within the Denovo transport code using the SPN method. The calculations were performed with ENDF/B-VII.0 cross sections in 252 groups (collapsed to 23 groups for the 3D transport solution). The key results of the comparison of calculations with measurements include initial criticality, critical configurations, control rod worth, differential boron worth, and isothermal temperature reactivity coefficient (ITC). The VERA results for these parameters show good agreement with measurements, with the exception of the ITC, which requires additional investigation. Results are also compared to those obtained with Monte Carlo methods and a current industry core simulator.
Discrete Optimization of Electronic Hyperpolarizabilities in a Chemical Subspace
2009-05-01
… molecular design. Methods for optimization in discrete spaces have been studied extensively and recently reviewed (5). Optimization methods include integer programming, as in branch-and-bound techniques (including dead-end elimination [6]), simulated annealing (7), and genetic algorithms (8). These algorithms have found renewed interest and application in molecular and materials design (9-12). Recently, new approaches have been …
Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud
2017-01-01
In this paper, we present a real-time approach that allows tracking of deformable structures in 3D ultrasound sequences. Our method consists in obtaining the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation in the presence of different ultrasound shortcomings, including speckle noise, large shadows, and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.
Crack propagation of brittle rock under high geostress
NASA Astrophysics Data System (ADS)
Liu, Ning; Chu, Weijiang; Chen, Pingzhi
2018-03-01
Based on fracture mechanics and numerical methods, the characteristics and failure criteria of wall-rock cracks, including initiation, propagation, and coalescence, are analyzed systematically under different conditions. To consider the interaction among cracks, a sliding model of multiple cracks is adopted to simulate the splitting failure of rock under axial compression. Reinforcement of the rock mass by bolt and shotcrete support can control crack propagation well; both theoretical analysis and simulation are used to study the mechanism of this control, and the optimal fixing angle of the bolts is calculated. ANSYS is then used to simulate the crack-arrest effect of bolts, and the influence of different factors on the stress intensity factor is analyzed. The method offers a more scientific and rational criterion for evaluating the splitting failure of underground engineering under high geostress.
NASA Astrophysics Data System (ADS)
Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.
2017-11-01
We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution for only generic nonprecessing binaries, drawing inferences away from the set of NR simulations used, via interpolation of a single scalar quantity (the marginalized log likelihood, ln L) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤ 2 as well as l ≤ 3 harmonic modes. Using the l ≤ 3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.
Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M
2013-02-01
The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals.
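A minimal sketch of the reporting the authors recommend, cross-validated accuracy (correlation between predictions and true breeding values) and bias (regression slope), with ridge regression standing in for genomic BLUP on simulated genotypes; all sizes and the regularization constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, n_qtl = 500, 2000, 50
X = rng.binomial(2, 0.5, (n, m)).astype(float)          # SNP genotypes
beta = np.zeros(m)
beta[rng.choice(m, n_qtl, False)] = rng.normal(0, 1, n_qtl)
g = X @ beta                                             # true breeding values
y = g + rng.normal(0, g.std(), n)                        # phenotypes, h2 = 0.5

def ridge_predict(Xtr, ytr, Xte, lam=1000.0):
    """Ridge regression, a stand-in for genomic BLUP (equivalent for a
    suitably chosen lambda)."""
    B = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(m),
                        Xtr.T @ (ytr - ytr.mean()))
    return Xte @ B

test = rng.choice(n, n // 5, False)                      # one CV fold
train = np.setdiff1d(np.arange(n), test)
pred = ridge_predict(X[train], y[train], X[test])
acc = np.corrcoef(pred, g[test])[0, 1]                   # accuracy: r(pred, TBV)
bias = np.polyfit(pred, g[test], 1)[0]                   # slope; 1.0 = unbiased
```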
The History of Simulation and Its Impact on the Future.
Aebersold, Michelle
2016-02-01
Simulation has had a long and varied history in many different fields, including aviation and the military. A look into the past to briefly touch on some of the major historical aspects of simulation in aviation, military, and health care will give readers a broader understanding of simulation's historical roots and the relationship to patient safety. This review may also help predict what the future may hold for simulation in nursing. Health care, like aviation, is driven by safety, more specifically patient safety. As the link between simulation and patient safety becomes increasingly apparent, simulation will be adopted as the education and training method of choice for such critical behaviors as communication and teamwork skills.
Multinuclear NMR of CaSiO(3) glass: simulation from first-principles.
Pedone, Alfonso; Charpentier, Thibault; Menziani, Maria Cristina
2010-06-21
An integrated computational method which couples classical molecular dynamics simulations with density functional theory calculations is used to simulate the solid-state NMR spectra of amorphous CaSiO(3). Two CaSiO(3) glass models are obtained by shell-model molecular dynamics simulations, successively relaxed at the GGA-PBE level of theory. The calculation of the NMR parameters (chemical shielding and quadrupolar parameters), which are then used to simulate solid-state 1D and 2D-NMR spectra of silicon-29, oxygen-17 and calcium-43, is achieved by the gauge including projector augmented-wave (GIPAW) and the projector augmented-wave (PAW) methods. It is shown that the limitations due to the finite size of the MD models can be overcome using a Kernel Density Estimation (KDE) approach to simulate the spectra, since it better accounts for the disorder effects on the NMR parameter distribution. KDE allows reconstructing a smoothed NMR parameter distribution from the MD/GIPAW data. Simulated NMR spectra calculated with the present approach are found to be in excellent agreement with the experimental data. This further validates the CaSiO(3) structural model obtained by MD simulations, allowing the inference of relationships between structural data and NMR response. The methods used to simulate 1D and 2D-NMR spectra from MD/GIPAW data have been integrated into a package (called fpNMR) freely available on request.
Statistical power calculations for mixed pharmacokinetic study designs using a population approach.
Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel
2014-09-01
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
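The flavor of such a simulation-based power calculation can be sketched with a deliberately simplified stand-in: a likelihood ratio test for a binary covariate effect on log-clearance, repeated over many simulated studies. This is not the Monte Carlo Mapped Power workflow or NONMEM; every parameter value below is illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def lrt_power(n, effect=0.3, n_sim=500, alpha=0.05):
    """Fraction of simulated studies in which a likelihood ratio test
    detects a binary covariate effect on log-clearance."""
    crit = stats.chi2.ppf(1 - alpha, df=1)
    hits = 0
    for _ in range(n_sim):
        cov = rng.integers(0, 2, n)                  # binary covariate
        log_cl = 1.0 + effect * cov + rng.normal(0, 0.4, n)
        # full model: separate group means; reduced model: one common mean
        rss_full = sum(((log_cl[cov == k] - log_cl[cov == k].mean()) ** 2).sum()
                       for k in (0, 1))
        rss_red = ((log_cl - log_cl.mean()) ** 2).sum()
        lrt = n * np.log(rss_red / rss_full)         # -2*(lnL_red - lnL_full)
        hits += lrt > crit
    return hits / n_sim

print(lrt_power(40))   # e.g. estimated power with 40 subjects
```

Scanning this over the number of subjects gives the power-versus-sample-size curve from which a design meeting the >80% criterion can be chosen.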
General-relativistic Simulations of Four States of Accretion onto Millisecond Pulsars
NASA Astrophysics Data System (ADS)
Parfrey, Kyle; Tchekhovskoy, Alexander
2017-12-01
Accreting neutron stars can power a wide range of astrophysical phenomena including short- and long-duration gamma-ray bursts, ultra-luminous X-ray sources, and X-ray binaries. Numerical simulations are a valuable tool for studying the accretion-disk–magnetosphere interaction that is central to these problems, most clearly for the recently discovered transitional millisecond pulsars. However, magnetohydrodynamic (MHD) methods, widely used for simulating accretion, have difficulty in highly magnetized stellar magnetospheres, while force-free methods, suitable for such regions, cannot include the accreting gas. We present an MHD method that can stably evolve essentially force-free, highly magnetized regions, and describe the first time-dependent relativistic simulations of magnetized accretion onto millisecond pulsars. Our axisymmetric general-relativistic MHD simulations for the first time demonstrate how the interaction of a turbulent accretion flow with a pulsar’s electromagnetic wind can lead to the transition of an isolated pulsar to the accreting state. This transition naturally leads to the formation of relativistic jets, whose power can greatly exceed the power of the isolated pulsar’s wind. If the accretion rate is below a critical value, the pulsar instead expels the accretion stream. More generally, our simulations produce for the first time the four possible accretion regimes, in order of decreasing mass accretion rate: (a) crushed magnetosphere and direct accretion; (b) magnetically channeled accretion onto the stellar poles; (c) the propeller state, where material enters through the light cylinder but is prevented from accreting by the centrifugal barrier; (d) almost perfect exclusion of the accretion flow from the light cylinder by the pulsar wind.
Design and realization of retina-like three-dimensional imaging based on a MOEMS mirror
NASA Astrophysics Data System (ADS)
Cao, Jie; Hao, Qun; Xia, Wenze; Peng, Yuxin; Cheng, Yang; Mu, Jiaxing; Wang, Peng
2016-07-01
To balance the conflicting requirements of high resolution, large field of view, and real-time imaging, a retina-like imaging method based on time-of-flight (TOF) is proposed. Mathematical models of 3D imaging based on a MOEMS mirror are developed. Based on this method, we perform simulations of retina-like scanning properties, including compression of redundant information and rotation and scaling invariance. To validate the theory, we develop a prototype and conduct relevant experiments. The preliminary results agree well with the simulations.
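A retina-like (log-polar) scan pattern of the kind simulated here can be generated in a few lines; the ring growth factor and point counts below are illustrative assumptions.

```python
import numpy as np

def retina_grid(n_rings=32, n_spokes=64, r0=1.0, growth=1.12):
    """Retina-like (log-polar) scan pattern: ring radii grow geometrically,
    so sampling is dense at the fovea and sparse in the periphery."""
    r = r0 * growth ** np.arange(n_rings)            # ring radii
    phi = 2 * np.pi * np.arange(n_spokes) / n_spokes
    R, PHI = np.meshgrid(r, phi)
    return R * np.cos(PHI), R * np.sin(PHI)          # mirror deflection targets

x, y = retina_grid()   # 64 spokes x 32 rings of scan points for the mirror
```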
NASA Technical Reports Server (NTRS)
Kowalski, Marc Edward
2009-01-01
A method for the prediction of time-domain signatures of chafed coaxial cables is presented. The method is quasi-static in nature, and is thus efficient enough to be included in inference and inversion routines. Unlike previous models proposed, no restriction on the geometry or size of the chafe is required in the present approach. The model is validated and its speed is illustrated via comparison to simulations from a commercial, three-dimensional electromagnetic simulator.
Chapter 8: Simulating mortality from forest insects and diseases
Alan A. Ager; Jane L. Hayes; Craig L. Schmitt
2004-01-01
We describe methods for incorporating the effects of insects and diseases on coniferous forests into forest simulation models and discuss options for including this capability in the modeling work of the Interior Northwest Landscape Analysis System (INLAS) project. Insects and diseases are major disturbance agents in forested ecosystems in the Western United States,...
Classroom Simulation to Prepare Teachers to Use Evidence-Based Comprehension Practices
ERIC Educational Resources Information Center
Ely, Emily; Alves, Kat D.; Dolenc, Nathan R.; Sebolt, Stephanie; Walton, Emily A.
2018-01-01
Reading comprehension is an area of weakness for many students, including those with disabilities. Innovative technology methods may play a role in improving teacher readiness to use evidence-based comprehension practices for all students. In this experimental study, researchers examined a classroom simulation (TLE TeachLivE™) to improve…
Estimation and simulation of multi-beam sonar noise.
Holmin, Arne Johannes; Korneliussen, Rolf J; Tjøstheim, Dag
2016-02-01
Methods for the estimation and modeling of noise present in multi-beam sonar data, including the magnitude, probability distribution, and spatial correlation of the noise, are developed. The methods consider individual acoustic samples and facilitate compensation of highly localized noise as well as subtraction of noise estimates averaged over time. The modeled noise is included in an existing multi-beam sonar simulation model [Holmin, Handegard, Korneliussen, and Tjøstheim, J. Acoust. Soc. Am. 132, 3720-3734 (2012)], resulting in an improved model that can be used to strengthen interpretation of data collected in situ at any signal to noise ratio. Two experiments, from the former study in which multi-beam sonar data of herring schools were simulated, are repeated with inclusion of noise. These experiments demonstrate (1) the potentially large effect of changes in fish orientation on the backscatter from a school, and (2) the estimation of behavioral characteristics such as the polarization and packing density of fish schools. The latter is achieved by comparing real data with simulated data for different polarizations and packing densities.
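One common element of such noise handling, subtraction of a time-averaged noise estimate, can be sketched as follows; the percentile-based floor estimate is an illustrative assumption, not the authors' estimator.

```python
import numpy as np

def subtract_noise_floor(sv_db, q=10, ping_axis=0):
    """Estimate a per-beam, per-range noise floor as a low percentile of the
    linear-domain backscatter over pings, subtract it in the linear domain,
    and return the result in dB. The percentile choice is illustrative."""
    linear = 10.0 ** (sv_db / 10.0)                  # dB -> linear
    floor = np.percentile(linear, q, axis=ping_axis, keepdims=True)
    cleaned = np.clip(linear - floor, 1e-12, None)   # avoid log of zero
    return 10.0 * np.log10(cleaned)                  # back to dB
```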
Higher-level simulations of turbulent flows
NASA Technical Reports Server (NTRS)
Ferziger, J. H.
1981-01-01
The fundamentals of large eddy simulation are considered and the approaches to it are compared. Subgrid scale models and the development of models for the Reynolds-averaged equations are discussed as well as the use of full simulation in testing these models. Numerical methods used in simulating large eddies, the simulation of homogeneous flows, and results from full and large scale eddy simulations of such flows are examined. Free shear flows are considered with emphasis on the mixing layer and wake simulation. Wall-bounded flow (channel flow) and recent work on the boundary layer are also discussed. Applications of large eddy simulation and full simulation in meteorological and environmental contexts are included along with a look at the direction in which work is proceeding and what can be expected from higher-level simulation in the future.
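As one concrete example of the subgrid-scale models discussed, the classic Smagorinsky eddy-viscosity closure can be sketched as follows (2D slice, uniform grid; the constant Cs is a typical but illustrative value).

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, Cs=0.17):
    """Smagorinsky subgrid-scale eddy viscosity on a 2D grid:
    nu_t = (Cs * dx)**2 * |S|, with |S| = sqrt(2 S_ij S_ij)."""
    dudx, dudy = np.gradient(u, dx)
    dvdx, dvdy = np.gradient(v, dx)
    S11, S22 = dudx, dvdy
    S12 = 0.5 * (dudy + dvdx)                 # symmetric strain-rate tensor
    S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))
    return (Cs * dx) ** 2 * S_mag

# example on a toy vortex field
N = 64; dx = 1.0 / N
xs = np.linspace(0, 1, N, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing='ij')
nu_t = smagorinsky_nu_t(-np.sin(np.pi * Y), np.sin(np.pi * X), dx)
```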
Vandyk, Amanda D; Lalonde, Michelle; Merali, Sabrina; Wright, Erica; Bajnok, Irmajean; Davies, Barbara
2018-04-01
Evidence on the use of simulation to teach psychiatry and mental health (including addiction) content is emerging, yet no summary of the implementation processes or associated outcomes exists. The aim of this study was to systematically search and review empirical literature on the use of psychiatry-focused simulation in undergraduate nursing education. Objectives were to (i) assess the methodological quality of existing evidence on the use of simulation to teach mental health content to undergraduate nursing students, (ii) describe the operationalization of the simulations, and (iii) summarize the associated quantitative and qualitative outcomes. We conducted online database (MEDLINE, Embase, ERIC, CINAHL, PsycINFO from January 2004 to October 2015) and grey literature searches. Thirty-two simulation studies were identified describing and evaluating six types of simulations (standardized patients, audio simulations, high-fidelity simulators, virtual world, multimodal, and tabletop). Overall, 2724 participants were included in the studies. Studies reflected a limited number of intervention designs, and outcomes were evaluated with qualitative and quantitative methods incorporating a variety of tools. Results indicated that simulation was effective in reducing student anxiety and improving their knowledge, empathy, communication, and confidence. The summarized qualitative findings all supported the benefit of simulation; however, more research is needed to assess the comparative effectiveness of the types of simulations. Recommendations from the findings include the development of guidelines for educators to deliver each simulation component (briefing, active simulation, debriefing). Finally, consensus around appropriate training of facilitators is needed, as is consistent and agreed upon simulation terminology. © 2017 Australian College of Mental Health Nurses Inc.
NASA Astrophysics Data System (ADS)
Lynch, Cheryl L.; Graham, Geoff M.; Popovic, Milos R.
2011-08-01
Functional electrical stimulation (FES) applications are frequently evaluated in simulation prior to testing in human subjects. Such simulations are usually based on the typical muscle responses to electrical stimulation, which may result in an overly optimistic assessment of likely real-world performance. We propose a novel method for simulating FES applications that includes non-ideal muscle behaviour during electrical stimulation resulting from muscle fatigue, spasms and tremors. A 'non-idealities' block that can be incorporated into existing FES simulations and provides a realistic estimate of real-world performance is described. An implementation example is included, showing how the non-idealities block can be incorporated into a simulation of electrically stimulated knee extension against gravity for both a proportional-integral-derivative controller and a sliding mode controller. The results presented in this paper illustrate that the real-world performance of a FES system may be vastly different from the performance obtained in simulation using nominal muscle models. We believe that our non-idealities block should be included in future simulations that involve muscle response to FES, as this tool will provide neural engineers with a realistic simulation of the real-world performance of FES systems. This simulation strategy will help engineers and organizations save time and money by preventing premature human testing. The non-idealities block will become available free of charge at www.toronto-fes.ca in late 2011.
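A minimal stand-in for such a non-idealities block is sketched below: the nominal stimulated torque is scaled by an exponential fatigue decay, modulated by a tremor oscillation, and occasionally perturbed by a spasm burst. This is not the authors' published block; every functional form and parameter value here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def nonideal_torque(torque_nominal, t, tau_fatigue=60.0,
                    tremor_amp=0.05, tremor_hz=6.0,
                    spasm_prob=0.002, spasm_gain=0.5):
    """Illustrative 'non-idealities' wrapper: exponential fatigue decay,
    sinusoidal tremor modulation, and random spasm bursts applied to the
    nominal muscle torque. All parameters are assumed, not published values."""
    fatigue = np.exp(-t / tau_fatigue)                       # slow decline
    tremor = 1.0 + tremor_amp * np.sin(2 * np.pi * tremor_hz * t)
    spasm = spasm_gain * torque_nominal if rng.random() < spasm_prob else 0.0
    return torque_nominal * fatigue * tremor + spasm
```

Inserted between the nominal muscle model and the plant, a wrapper like this lets a controller (PID, sliding mode, etc.) be stress-tested against degraded muscle behaviour before human trials.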
2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation
Warren, Michael S.
2014-01-01
We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k (2^18) processors. We present error analysis and scientific application results from a series of more than ten 69-billion-particle (4096^3) cosmological simulations, accounting for 4×10^20 floating point operations. These results include the first simulations using the new constraints on the standard model of cosmology from the Planck satellite. Our simulations set a new standard for accuracy and scientific throughput, while meeting or exceeding the computational efficiency of the latest generation of hybrid TreePM N-body methods.
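The core of any treecode is the multipole acceptance criterion: a distant cell is treated as a single pseudo-particle when its size-to-distance ratio falls below an opening angle theta. A compact 2D quadtree sketch of this idea follows; the actual HOT code is a parallel, hashed oct-tree in 3D, and the class layout, theta = 0.5, and softening value here are illustrative assumptions.

```python
import numpy as np

THETA = 0.5     # opening angle for the acceptance criterion
EPS = 1e-3      # gravitational softening length

class Node:
    def __init__(self, center, half):
        self.center, self.half = center, half   # square cell: center, half-width
        self.mass, self.com = 0.0, np.zeros(2)  # monopole: mass, center of mass
        self.children, self.body = None, None   # leaf stores at most one body

    def insert(self, pos, m):
        if self.children is None and self.body is None and self.mass == 0.0:
            self.body, self.mass, self.com = (pos, m), m, pos.copy()
            return
        if self.children is None:
            self._split()
        self.com = (self.com * self.mass + pos * m) / (self.mass + m)
        self.mass += m
        self._child_for(pos).insert(pos, m)

    def _split(self):
        h = self.half / 2.0
        self.children = [Node(self.center + np.array([sx, sy]) * h, h)
                         for sx in (-1, 1) for sy in (-1, 1)]
        old_pos, old_m = self.body
        self.body = None
        self._child_for(old_pos).insert(old_pos, old_m)

    def _child_for(self, pos):
        return self.children[2 * (pos[0] > self.center[0])
                             + (pos[1] > self.center[1])]

    def accel(self, pos):
        if self.mass == 0.0:
            return np.zeros(2)
        d = self.com - pos
        r = np.sqrt(d @ d + EPS**2)
        # multipole acceptance criterion: cell size / distance < theta
        if self.children is None or (2 * self.half) / r < THETA:
            if self.body is not None and np.allclose(self.body[0], pos):
                return np.zeros(2)               # skip self-interaction
            return self.mass * d / r**3          # monopole approximation
        return sum((c.accel(pos) for c in self.children), np.zeros(2))

bodies = np.random.rand(1000, 2)
root = Node(center=np.array([0.5, 0.5]), half=0.5)
for p in bodies:
    root.insert(p, 1.0 / len(bodies))
print(root.accel(bodies[0]))
```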
Ross, Alastair J; Anderson, Janet E; Kodate, Naonori; Thomas, Libby; Thompson, Kellie; Thomas, Beth; Key, Suzie; Jensen, Heidi; Schiff, Rebekah; Jaye, Peter
2013-06-01
This paper describes the evaluation of a 2-day simulation training programme for staff designed to improve teamwork and inpatient care and compassion in an older persons' unit. The programme was designed to improve inpatient care for older people by using mixed modality simulation exercises to enhance teamwork and empathetic and compassionate care. Healthcare professionals took part in: (a) a 1-day human patient simulation course with six scenarios and (b) a 1-day ward-based simulation course involving five 1-h exercises with integrated debriefing. A mixed methods evaluation included observations of the programme, precourse and postcourse confidence rating scales and follow-up interviews with staff at 7-9 weeks post-training. Observations showed enjoyment of the course but some anxiety and apprehension about the simulation environment. Staff self-confidence improved after human patient simulation (t=9; df=56; p<0.001) and ward-based exercises (t=9.3; df=76; p<0.001). Thematic analysis of interview data showed learning in teamwork and patient care. Participants thought that simulation had been beneficial for team practices such as calling for help and verbalising concerns and for improved interaction with patients. Areas to address in future include widening participation across multi-disciplinary teams, enhancing post-training support and exploring further which aspects of the programme enhance compassion and care of older persons. The study demonstrated that simulation is an effective method for encouraging dignified care and compassion for older persons by teaching team skills and empathetic and sensitive communication with patients and relatives.
NASA Astrophysics Data System (ADS)
Liu, J. X.; Deng, S. C.; Liang, N. G.
2008-02-01
Concrete is heterogeneous and usually described as a three-phase material, where matrix, aggregate and interface are distinguished. To take this heterogeneity into consideration, the Generalized Beam (GB) lattice model is adopted. The GB lattice model is much more computationally efficient than the beam lattice model. Numerical procedures of both the quasi-static method and the dynamic method are developed to simulate fracture processes in uniaxial tensile tests conducted on a concrete panel. Cases of different loading rates are compared with the quasi-static case. It is found that the inertia effect due to the increasing load becomes less important and can be ignored as the loading rate decreases, but the inertia effect due to unstable crack propagation remains considerable no matter how low the loading rate is. Therefore, an unrealistic result will be obtained if a fracture process including unstable cracking is simulated by the quasi-static procedure.
Computational plasticity algorithm for particle dynamics simulations
NASA Astrophysics Data System (ADS)
Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.
2018-01-01
The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.
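The "explicit special case" described here corresponds to the familiar DEM update: evaluate penalty-type contact forces from the current overlaps, then advance with an explicit integrator. A minimal sketch for discs with a linear spring-dashpot normal contact follows; stiffness, damping, and step size are illustrative assumptions.

```python
import numpy as np

def dem_step(x, v, m, radius, k_n=1e5, c_n=50.0, dt=1e-5, g=-9.81):
    """One explicit DEM time step for discs: linear spring-dashpot normal
    contact forces from current overlaps, then symplectic Euler integration."""
    n = len(m)
    f = np.zeros_like(x)
    f[:, 1] += np.asarray(m) * g                      # gravity
    for i in range(n):
        for j in range(i + 1, n):
            d = x[j] - x[i]
            dist = np.linalg.norm(d)
            overlap = radius[i] + radius[j] - dist
            if overlap > 0:                           # particles in contact
                normal = d / dist
                v_rel = (v[j] - v[i]) @ normal        # normal relative speed
                fn = (k_n * overlap - c_n * v_rel) * normal
                f[i] -= fn
                f[j] += fn
    v = v + dt * f / np.asarray(m)[:, None]
    x = x + dt * v
    return x, v

# two overlapping discs settling under gravity (illustrative)
x = np.array([[0.0, 0.0], [0.0, 0.019]])
v = np.zeros((2, 2))
for _ in range(1000):
    x, v = dem_step(x, v, m=[1.0, 1.0], radius=[0.01, 0.01])
```

Replacing the explicit force evaluation with an implicit solve over the contact set is what turns this scheme into the contact-dynamics family the paper unifies with it.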
NASA Astrophysics Data System (ADS)
Nakashima, Hiroshi; Takatsu, Yuzuru
The goal of this study is to develop a practical and fast simulation tool for soil-tire interaction analysis, where finite element method (FEM) and discrete element method (DEM) are coupled together, and which can be realized on a desktop PC. We have extended our formerly proposed dynamic FE-DE method (FE-DEM) to include practical soil-tire system interaction, where not only the vertical sinkage of a tire, but also the travel of a driven tire was considered. Numerical simulation by FE-DEM is stable, and the relationships between variables, such as load-sinkage and sinkage-travel distance, and the gross tractive effort and running resistance characteristics, are obtained. Moreover, the simulation result is accurate enough to predict the maximum drawbar pull for a given tire, once the appropriate parameter values are provided. Therefore, the developed FE-DEM program can be applied with sufficient accuracy to interaction problems in soil-tire systems.
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-07-01
Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
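The standard device for enforcing the exclusion principle in such MC schemes is a rejection step in which a proposed final state is accepted with probability 1 - f(k'). A minimal sketch follows; the occupancy lookup and loop structure are assumptions for illustration, not the authors' modified algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

def pauli_accept(f_final):
    """Pauli-blocking rejection: a proposed scattering event into a final
    state with occupancy f_final (between 0 and 1) is accepted with
    probability 1 - f_final, suppressing transitions into full states."""
    return rng.random() < (1.0 - f_final)

# inside the MC loop (hypothetical names): propose an e-e event, look up
# the occupancies of both candidate final states, then
# if pauli_accept(f[k1_new]) and pauli_accept(f[k2_new]):
#     perform_scattering(k1_new, k2_new)
```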
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
Residents’ perceptions of simulation as a clinical learning approach
Walsh, Catharine M.; Garg, Ankit; Ng, Stella L.; Goyal, Fenny; Grover, Samir C.
2017-01-01
Background Simulation is increasingly being integrated into medical education; however, there is little research into trainees’ perceptions of this learning modality. We elicited trainees’ perceptions of simulation-based learning, to inform how simulation is developed and applied to support training. Methods We conducted an instrumental qualitative case study entailing 36 semi-structured one-hour interviews with 12 residents enrolled in an introductory simulation-based course. Trainees were interviewed at three time points: pre-course, post-course, and 4–6 weeks later. Interview transcripts were analyzed using a qualitative descriptive analytic approach. Results Residents’ perceptions of simulation included: 1) simulation serves pragmatic purposes; 2) simulation provides a safe space; 3) simulation presents perils and pitfalls; and 4) optimal design for simulation: integration and tension. Key findings included residents’ markedly narrow perception of simulation’s capacity to support non-technical skills development or its use beyond introductory learning. Conclusion Trainees’ learning expectations of simulation were restricted. Educators should critically attend to the way they present simulation to learners as, based on theories of problem-framing, trainees’ a priori perceptions may delimit the focus of their learning experiences. If they view simulation as merely a replica of real cases for the purpose of practicing basic skills, they may fail to benefit from the full scope of learning opportunities afforded by simulation. PMID:28344719
Development of new methodologies for evaluating the energy performance of new commercial buildings
NASA Astrophysics Data System (ADS)
Song, Suwon
The concept of Measurement and Verification (M&V) of a new building continues to become more important because efficient design alone is often not sufficient to deliver an efficient building. Simulation models that are calibrated to measured data can be used to evaluate the energy performance of new buildings if they are compared to energy baselines such as similar buildings, energy codes, and design standards. Unfortunately, there is a lack of detailed M&V methods and analysis methods to measure energy savings from new buildings that would have hypothetical energy baselines. Therefore, this study developed and demonstrated several new methodologies for evaluating the energy performance of new commercial buildings using a case-study building in Austin, Texas. First, three new M&V methods were developed to enhance the previous generic M&V framework for new buildings, including: (1) The development of a method to synthesize weather-normalized cooling energy use from a correlation of Motor Control Center (MCC) electricity use when chilled water use is unavailable, (2) The development of an improved method to analyze measured solar transmittance against incidence angle for sample glazing using different solar sensor types, including Eppley PSP and Li-Cor sensors, and (3) The development of an improved method to analyze chiller efficiency and operation at part-load conditions. Second, three new calibration methods were developed and analyzed, including: (1) A new percentile analysis added to the previous signature method for use with a DOE-2 calibration, (2) A new analysis to account for undocumented exhaust air in DOE-2 calibration, and (3) An analysis of the impact of synthesized direct normal solar radiation using the Erbs correlation on DOE-2 simulation. Third, an analysis of the actual energy savings compared to three different energy baselines was performed, including: (1) Energy Use Index (EUI) comparisons with sub-metered data, (2) New comparisons against Standards 90.1-1989 and 90.1-2001, and (3) A new evaluation of the performance of selected Energy Conservation Design Measures (ECDMs). Finally, potential energy savings were also simulated from selected improvements, including: minimum supply air flow, undocumented exhaust air, and daylighting.
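The first of these M&V methods, synthesizing cooling energy from MCC electricity use, amounts to a regression over a period when both measurements exist; a minimal sketch with illustrative numbers (not the case-study building's data):

```python
import numpy as np

# daily MCC electricity use (kWh) and metered chilled-water cooling energy
# during a period when both were available; all values are illustrative
mcc = np.array([410.0, 455.0, 500.0, 560.0, 610.0, 640.0])
cooling = np.array([52.0, 61.0, 70.0, 83.0, 95.0, 101.0])

slope, intercept = np.polyfit(mcc, cooling, 1)     # fit the correlation

def synthesized_cooling(mcc_kwh):
    """Estimate cooling energy for days when chilled-water data is missing."""
    return slope * mcc_kwh + intercept

print(synthesized_cooling(530.0))
```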
NASA Astrophysics Data System (ADS)
Zsolt Torma, Csaba; Giorgi, Filippo
2014-05-01
A set of regional climate model (RCM) simulations applying dynamical downscaling of global climate model (GCM) simulations over the Mediterranean domain specified by the international initiative Coordinated Regional Downscaling Experiment (CORDEX) was completed with the Regional Climate Model RegCM, version RegCM4.3. Two GCMs were selected from the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble to provide the driving fields for the RegCM: HadGEM2-ES (HadGEM) and MPI-ESM-MR (MPI). The simulations consist of an ensemble including multiple physics configurations and different Representative Concentration Pathways (RCP4.5 and RCP8.5). In total, 15 simulations were carried out with 7 model physics configurations with varying convection and land surface schemes. The horizontal grid spacing of the RCM simulations is 50 km and the simulated period in all cases is 1970-2100 (1970-2099 in the case of HadGEM-driven simulations). This ensemble includes a combination of experiments in which different model components are changed individually and in combination, and thus lends itself optimally to the application of the Factor Separation (FS) method. This study applies the FS method to investigate the contributions of different factors, along with their synergy, to a set of regional climate model (RCM) projections for the Mediterranean region. The FS method is applied to 6 projections for the period 1970-2100 performed with the regional model RegCM4.3 over the Med-CORDEX domain. Two different sets of factors are intercompared, namely the driving global climate model (HadGEM and MPI) boundary conditions against two model physics settings (convection scheme and irrigation). We find that both the GCM driving conditions and the model physics provide important contributions, depending on the variable analyzed (surface air temperature and precipitation), season (winter vs. summer) and time horizon into the future, while the synergy term mostly tends to counterbalance the contributions of the individual factors. We demonstrate the usefulness of the FS method to assess different sources of uncertainty in RCM-based regional climate projections.
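For two factors, the Stein-Alpert factor separation bookkeeping is a four-run difference scheme; a minimal sketch (the numbers in the example are illustrative, not results from this study):

```python
def factor_separation(f0, f1, f2, f12):
    """Two-factor Stein-Alpert factor separation: f0 = neither factor on,
    f1/f2 = one factor switched on, f12 = both on. Returns the two pure
    contributions and their synergy."""
    pure1 = f1 - f0
    pure2 = f2 - f0
    synergy = f12 - (f1 + f2) + f0
    return pure1, pure2, synergy

# e.g. a summer temperature change (K) from four RCM runs toggling the
# convection scheme and irrigation (illustrative numbers)
print(factor_separation(1.8, 2.3, 2.0, 2.4))   # -> (0.5, 0.2, -0.1)
```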
Methods Data Qualification Interim Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Sam Alessi; Tami Grimmett; Leng Vang
The overall goal of the Next Generation Nuclear Plant (NGNP) Data Management and Analysis System (NDMAS) is to maintain data provenance for all NGNP data, including the Methods component of NGNP data. Multiple means are available to access data stored in NDMAS. A web portal environment allows users to access data, view the results of qualification tests, and view graphs and charts of various attributes of the data. NDMAS also has methods for the management of the data output from VHTR simulation models and data generated from experiments designed to verify and validate the simulation codes. These simulation models represent the outcome of mathematical representations of VHTR components and systems. The methods data management approaches described herein will handle data that arise from experiment, simulation, and external sources for the main purpose of facilitating parameter estimation and model verification and validation (V&V). A model integration environment entitled ModelCenter is used to automate the storing of data from simulation model runs to the NDMAS repository. This approach does not adversely change the way computational scientists conduct their work. The method is to be used mainly to store the results of model runs that need to be preserved for auditing purposes or for display to the NDMAS web portal. This interim report describes the current development of NDMAS for Methods data and discusses the data and its qualification that are currently part of NDMAS.
Parallel computing method for simulating hydrological processesof large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the most widely recognized global environmental problems. It has altered the temporal and spatial distribution of watershed hydrological processes, especially in large rivers. Watershed hydrological simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of computation, especially for large rivers, and thus requires substantial computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. Current parallel methods mostly parallelize over the space and time dimensions: they compute the natural features of a distributed hydrological model grid by grid (or unit by unit, basin by basin) from upstream to downstream. This work proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and high parallel efficiency. It combines the temporal and spatial runoff characteristics of distributed hydrological models with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method is highly adaptable and extensible: it can make full use of the available computing and storage resources under the condition of limited computing resources, and its computing efficiency improves linearly as computing resources increase. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
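The level-by-level parallelism described here, where all sub-basins at the same topological level run concurrently and feed their outflows downstream, can be sketched with a process pool; the toy routing network and the sub-basin kernel below are illustrative assumptions.

```python
import numpy as np
from multiprocessing import Pool

def simulate_subbasin(args):
    """Stand-in for one sub-basin's runoff computation (illustrative)."""
    basin_id, inflow = args
    local_runoff = np.random.default_rng(basin_id).random(24).sum()
    return basin_id, inflow + local_runoff           # outflow to downstream

def run_levels(levels, downstream):
    """Run sub-basins level by level: basins within one topological level
    are independent, so they are computed in parallel; their outflows are
    routed to the next level downstream."""
    inflow = {b: 0.0 for lvl in levels for b in lvl}
    with Pool() as pool:
        for lvl in levels:
            for b, out in pool.map(simulate_subbasin,
                                   [(b, inflow[b]) for b in lvl]):
                if downstream.get(b) is not None:
                    inflow[downstream[b]] += out
    return inflow

if __name__ == '__main__':
    # three headwater basins (level 0) draining into basin 3 (level 1)
    print(run_levels([[0, 1, 2], [3]], {0: 3, 1: 3, 2: 3}))
```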
Niswonger, Richard G.; Prudic, David E.
2005-01-01
Many streams in the United States, especially those in semiarid regions, have reaches that are hydraulically disconnected from underlying aquifers. Ground-water withdrawals have decreased water levels in valley aquifers beneath streams, increasing the occurrence of disconnected streams and aquifers. The U.S. Geological Survey modular ground-water model (MODFLOW-2000) can be used to model these interactions using the Streamflow-Routing (SFR1) Package. However, the approach does not consider unsaturated flow between streams and aquifers and may not give realistic results in areas with significantly deep unsaturated zones. This documentation describes a method for extending the capabilities of MODFLOW-2000 by incorporating the ability to simulate unsaturated flow beneath streams. A kinematic-wave approximation to Richards' equation was solved by the method of characteristics to simulate unsaturated flow beneath streams in SFR1. This new package, called SFR2, includes all the capabilities of SFR1 and is designed to be used with MODFLOW-2000. Unlike SFR1, seepage loss from the stream may be restricted by the hydraulic conductivity of the unsaturated zone. Unsaturated flow is simulated independently of saturated flow within each model cell corresponding to a stream reach whenever the water table (head in MODFLOW) is below the elevation of the streambed. The relation between unsaturated hydraulic conductivity and water content is defined by the Brooks-Corey function. Unsaturated flow variables specified in SFR2 include saturated and initial water contents; saturated vertical hydraulic conductivity; and the Brooks-Corey exponent. These variables are defined independently for each stream reach. Unsaturated flow in SFR2 was compared to the U.S. Geological Survey's Variably Saturated Two-Dimensional Flow and Transport (VS2DT) Model for two test simulations. For both test simulations, results of the two models were in good agreement with respect to the magnitude and downward progression of a wetting front through an unsaturated column. A third hypothetical simulation is presented that includes interaction between a stream and aquifer separated by an unsaturated zone. This simulation is included to demonstrate the utility of unsaturated flow in SFR2 with MODFLOW-2000. This report includes a description of the data input requirements for simulating unsaturated flow in SFR2.
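As an illustration of the constitutive relation used in SFR2, a minimal sketch of a Brooks-Corey-type conductivity function; the residual water content here is an assumed input, and SFR2's exact parameterization may differ:

```python
def brooks_corey_k(theta, theta_s, theta_r, k_sat, epsilon):
    """Unsaturated hydraulic conductivity from a Brooks-Corey function.

    theta    : volumetric water content
    theta_s  : saturated water content
    theta_r  : residual water content (an assumed input in this sketch)
    k_sat    : saturated vertical hydraulic conductivity
    epsilon  : Brooks-Corey exponent
    """
    se = (theta - theta_r) / (theta_s - theta_r)  # effective saturation
    se = min(max(se, 0.0), 1.0)                   # clamp to [0, 1]
    return k_sat * se ** epsilon

# e.g., K at theta = 0.2 for a sandy material (illustrative values only):
k = brooks_corey_k(theta=0.2, theta_s=0.35, theta_r=0.05,
                   k_sat=1e-4, epsilon=3.5)
```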
Study of Hydrokinetic Turbine Arrays with Large Eddy Simulation
NASA Astrophysics Data System (ADS)
Sale, Danny; Aliseda, Alberto
2014-11-01
Marine renewable energy is advancing towards commercialization, including electrical power generation from ocean, river, and tidal currents. The focus of this work is to develop numerical simulations capable of predicting the power generation potential of hydrokinetic turbine arrays; this includes analysis of unsteady and averaged flow fields, turbulence statistics, and unsteady loadings on turbine rotors and support structures due to interaction with rotor wakes and ambient turbulence. The governing equations of large-eddy simulation (LES) are solved using a finite-volume method, and the presence of turbine blades is approximated by the actuator-line method, in which hydrodynamic forces are projected onto the flow field as a body force. The actuator-line approach captures helical wake formation, including vortex shedding from individual blades, and the effects of drag and vorticity generation from the rough seabed surface are accounted for by wall-models. This LES framework was used to replicate a previous flume experiment consisting of three hydrokinetic turbines tested under various operating conditions and array layouts. Predictions of the power generation, velocity deficit, and turbulence statistics in the wakes are compared between the LES and experimental datasets.
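A minimal sketch of the actuator-line force projection described above, assuming the commonly used isotropic Gaussian regularization kernel (the study's exact kernel and smoothing width are not specified in the abstract):

```python
import numpy as np

def actuator_line_body_force(grid_pts, line_pts, line_forces, eps):
    """Spread hydrodynamic forces from actuator-line points onto grid
    points with the Gaussian kernel
    eta(d) = exp(-(d/eps)^2) / (eps^3 * pi^(3/2)).

    grid_pts    : (Ng, 3) array of cell-center coordinates
    line_pts    : (Nl, 3) array of actuator-line point coordinates
    line_forces : (Nl, 3) array of force per line point (force * segment width)
    eps         : smoothing width, typically tied to the local grid spacing
    """
    body_force = np.zeros_like(grid_pts)
    norm = 1.0 / (eps**3 * np.pi**1.5)
    for xl, fl in zip(line_pts, line_forces):
        d2 = np.sum((grid_pts - xl) ** 2, axis=1)      # squared distances
        body_force += np.outer(norm * np.exp(-d2 / eps**2), fl)
    return body_force
```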
Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine
NASA Technical Reports Server (NTRS)
Schwabacher, Mark A.; Aguilar, Robert; Figueroa, Fernando F.
2009-01-01
The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. It was decided to use decision trees, since they tend to be easier to interpret than other data-driven methods. The decision tree algorithm automatically "learns" a decision tree by performing a search through the space of possible decision trees to find one that fits the training data. The particular decision tree algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne and known as the Detailed Real-Time Model (DRTM) was used to "train" and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise, and included a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations, and tested using the remaining 45 simulations. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it "learned" a decision tree that can classify unseen data as having no leak or having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two locations, perhaps because a large leak at one of these two locations can look very similar to a small leak at the other location.
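As an illustration of the training workflow, a minimal sketch using scikit-learn; note that scikit-learn's DecisionTreeClassifier implements the related CART algorithm rather than C4.5, and the feature layout and labels below are hypothetical:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# X: sensor readings per time step (e.g., pressures, temperatures, speeds)
# y: class labels (0 = nominal, 1..5 = one of five leak locations)
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 8))        # placeholder training data
y_train = rng.integers(0, 6, size=500)

# "Learn" a decision tree from the labeled examples, then classify
# unseen data as nominal or as a leak in one of the five locations.
clf = DecisionTreeClassifier().fit(X_train, y_train)
X_test = rng.normal(size=(100, 8))
predictions = clf.predict(X_test)
```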
RRAWFLOW: Rainfall-Response Aquifer and Watershed Flow Model (v1.11)
NASA Astrophysics Data System (ADS)
Long, A. J.
2014-09-01
The Rainfall-Response Aquifer and Watershed Flow Model (RRAWFLOW) is a lumped-parameter model that simulates streamflow, springflow, groundwater level, solute transport, or cave drip for a measurement point in response to a system input of precipitation, recharge, or solute injection. The RRAWFLOW open-source code is written in the R language and is included in the Supplement to this article along with an example model of springflow. RRAWFLOW includes a time-series process to estimate recharge from precipitation and simulates the response to recharge by convolution; i.e., the unit hydrograph approach. Gamma functions are used for estimation of parametric impulse-response functions (IRFs); a combination of two gamma functions results in a double-peaked IRF. A spline fit to a set of control points is introduced as a new method for estimation of nonparametric IRFs. Other options include the use of user-defined IRFs and different methods to simulate time-variant systems. For many applications, lumped models simulate the system response with equal accuracy to that of distributed models, but moreover, the ease of model construction and calibration of lumped models makes them a good choice for many applications. RRAWFLOW provides professional hydrologists and students with an accessible and versatile tool for lumped-parameter modeling.
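A minimal sketch of the parametric gamma-IRF convolution step described above; RRAWFLOW itself is written in R, so this Python version with made-up parameter values is only illustrative:

```python
import numpy as np
from scipy.stats import gamma

dt = 1.0                                  # time step (days)
t = np.arange(0, 200, dt)
irf = gamma.pdf(t, a=3.0, scale=10.0)     # gamma-shaped impulse response
irf /= irf.sum() * dt                     # normalize to unit area

recharge = np.zeros_like(t)
recharge[10] = 5.0                        # a single recharge pulse

# Unit-hydrograph approach: system response is recharge convolved with IRF.
springflow = np.convolve(recharge, irf)[: t.size] * dt
```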
Psikuta, Agnes; Koelblen, Barbara; Mert, Emel; Fontana, Piero; Annaheim, Simon
2017-12-07
Following the growing interest in the further development of manikins to simulate human thermal behaviour more adequately, thermo-physiological human simulators have been developed by coupling a thermal sweating manikin with a thermo-physiology model. Despite their availability and obvious advantages, the number of studies involving these devices is only marginal, which plausibly results from the high complexity of the development and evaluation process and need of multi-disciplinary expertise. The aim of this paper is to present an integrated approach to develop, validate and operate such devices including technical challenges and limitations of thermo-physiological human simulators, their application and measurement protocol, strategy for setting test scenarios, and the comparison to standard methods and human studies including details which have not been published so far. A physical manikin controlled by a human thermoregulation model overcame the limitations of mathematical clothing models and provided a complementary method to investigate thermal interactions between the human body, protective clothing, and its environment. The opportunities of these devices include not only realistic assessment of protective clothing assemblies and equipment but also potential application in many research fields ranging from biometeorology, automotive industry, environmental engineering, and urban climate to clinical and safety applications.
Determining procedures for simulation-based training in radiology: a nationwide needs assessment.
Nayahangan, Leizl Joy; Nielsen, Kristina Rue; Albrecht-Beste, Elisabeth; Bachmann Nielsen, Michael; Paltved, Charlotte; Lindorff-Larsen, Karen Gilboe; Nielsen, Bjørn Ulrik; Konge, Lars
2018-06-01
New training modalities such as simulation are widely accepted in radiology; however, development of effective simulation-based training programs is challenging. They are often unstructured and based on convenience or coincidence. The study objective was to perform a nationwide needs assessment to identify and prioritize technical procedures that should be included in a simulation-based curriculum. A needs assessment using the Delphi method was completed among 91 key leaders in radiology. Round 1 identified technical procedures that radiologists should learn. Round 2 explored frequency of procedure, number of radiologists performing the procedure, risk and/or discomfort for patients, and feasibility for simulation. Round 3 was elimination and prioritization of procedures. Response rates were 67%, 70% and 66%, respectively. In Round 1, 22 technical procedures were included. Round 2 resulted in pre-prioritization of procedures. In Round 3, 13 procedures were included in the final prioritized list. The three most highly prioritized procedures were ultrasound-guided (US) histological biopsy and fine-needle aspiration, US-guided needle puncture and catheter drainage, and basic abdominal ultrasound. A needs assessment identified and prioritized 13 technical procedures to include in a simulation-based curriculum. The list may be used as a guide for development of training programs. • Simulation-based training can supplement training on patients in radiology. • Development of simulation-based training should follow a structured approach. • The CAMES Needs Assessment Formula explores needs for simulation training. • A national Delphi study identified and prioritized procedures suitable for simulation training. • The prioritized list serves as a guide for development of courses in radiology.
Ion Move Brownian Dynamics (IMBD)--simulations of ion transport.
Kurczynska, Monika; Kotulska, Malgorzata
2014-01-01
Comparison of the computed characteristics and physiological measurements of ion transport through transmembrane proteins could be a useful method to assess the quality of protein structures. Simulations of ion transport should be detailed but also time-efficient. The most accurate method would be Molecular Dynamics (MD), which is very time-consuming and hence is not used for this purpose. The model which includes ion-ion interactions and reduces the simulation time by excluding water, protein and lipid molecules is Brownian Dynamics (BD). In this paper a new computer program for BD simulation of ion transport is presented. We evaluate two methods for calculating the pore accessibility (round and irregular shape) and two representations of ion sizes (van der Waals diameter and one voxel). Ion Move Brownian Dynamics (IMBD) was tested with two nanopores: alpha-hemolysin and the potassium channel KcsA. In both cases an ion passed through the pore in less than 32 ns of simulation. Although two types of ions were in solution (potassium and chloride), only ions which agreed with the selectivity properties of the channels passed through the pores. IMBD is a new tool for ion transport modelling, which can be used in simulations of wide and narrow pores.
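As an illustration, a generic overdamped Brownian-dynamics position update of the Ermak-McCammon type; IMBD's exact update rule is not given in the abstract, so this is only the textbook scheme:

```python
import numpy as np

def bd_step(x, force, diff_coeff, kT, dt, rng):
    """One overdamped Brownian-dynamics position update for an ion:
    deterministic drift from the systematic (ion-ion, channel) force
    plus a Gaussian random displacement.
    """
    drift = diff_coeff * force / kT * dt
    noise = np.sqrt(2.0 * diff_coeff * dt) * rng.normal(size=x.shape)
    return x + drift + noise
```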
Accelerating functional verification of an integrated circuit
Deindl, Michael; Ruedinger, Jeffrey Joseph; Zoellin, Christian G.
2015-10-27
Illustrative embodiments include a method, system, and computer program product for accelerating functional verification in simulation testing of an integrated circuit (IC). Using a processor and a memory, a serial operation is replaced with a direct register access operation, wherein the serial operation is configured to perform bit shifting operation using a register in a simulation of the IC. The serial operation is blocked from manipulating the register in the simulation of the IC. Using the register in the simulation of the IC, the direct register access operation is performed in place of the serial operation.
Aviation Safety Program Atmospheric Environment Safety Technologies (AEST) Project
NASA Technical Reports Server (NTRS)
Colantonio, Ron
2011-01-01
Engine Icing Characterization and Simulation Capability: Develop knowledge bases, analysis methods, and simulation tools needed to address the problem of engine icing; in particular, ice-crystal icing.
Airframe Icing Simulation and Engineering Tool Capability: Develop and demonstrate 3-D capability to simulate and model airframe ice accretion and related aerodynamic performance degradation for current and future aircraft configurations in an expanded icing environment that includes freezing drizzle/rain.
Atmospheric Hazard Sensing and Mitigation Technology Capability: Improve and expand remote sensing and mitigation of hazardous atmospheric environments and phenomena.
Modeling and Simulation of High Resolution Optical Remote Sensing Satellite Geometric Chain
NASA Astrophysics Data System (ADS)
Xia, Z.; Cheng, S.; Huang, Q.; Tian, G.
2018-04-01
High resolution satellites with longer focal lengths and larger apertures have been widely used in recent years for georeferencing observed scenes. A consistent end-to-end model of the high resolution remote sensing satellite geometric chain is presented, which consists of the scene, the three-line-array camera, the platform including attitude and position information, the time system, and the processing algorithm. The integrated design of the camera and the star tracker is considered, and a simulation method for geolocation accuracy is put forward by introducing a new index: the angle between the camera and the star tracker. The model is rigorously validated by a geolocation accuracy simulation following the test method used for ZY-3 satellite imagery. The simulation results show that the geolocation accuracy is within 25 m, which is highly consistent with the test results. The geolocation accuracy can be improved by about 7 m through the integrated design. The model, combined with the simulation method, is applicable to estimating geolocation accuracy before satellite launch.
Shegog, Ross; Bartholomew, L Kay; Gold, Robert S; Pierrel, Elaine; Parcel, Guy S; Sockrider, Marianna M; Czyzewski, Danita I; Fernandez, Maria E; Berlin, Nina J; Abramson, Stuart
2006-01-01
Translating behavioral theories, models, and strategies to guide the development and structure of computer-based health applications is well recognized, although a continued challenge for program developers. A stepped approach to translate behavioral theory in the design of simulations to teach chronic disease management to children is described. This includes the translation steps to: 1) define target behaviors and their determinants, 2) identify theoretical methods to optimize behavioral change, and 3) choose educational strategies to effectively apply these methods and combine these into a cohesive computer-based simulation for health education. Asthma is used to exemplify a chronic health management problem and a computer-based asthma management simulation (Watch, Discover, Think and Act) that has been evaluated and shown to effect asthma self-management in children is used to exemplify the application of theory to practice. Impact and outcome evaluation studies have indicated the effectiveness of these steps in providing increased rigor and accountability, suggesting their utility for educators and developers seeking to apply simulations to enhance self-management behaviors in patients.
Real-time electron dynamics for massively parallel excited-state simulations
NASA Astrophysics Data System (ADS)
Andrade, Xavier
The simulation of the real-time dynamics of electrons, based on time dependent density functional theory (TDDFT), is a powerful approach to study electronic excited states in molecular and crystalline systems. What makes the method attractive is its flexibility to simulate different kinds of phenomena beyond the linear-response regime, including strongly-perturbed electronic systems and non-adiabatic electron-ion dynamics. Electron-dynamics simulations are also attractive from a computational point of view. They can run efficiently on massively parallel architectures due to the low communication requirements. Our implementations of electron dynamics, based on the codes Octopus (real-space) and Qball (plane-waves), allow us to simulate systems composed of thousands of atoms and to obtain good parallel scaling up to 1.6 million processor cores. Due to the versatility of real-time electron dynamics and its parallel performance, we expect it to become the method of choice to apply the capabilities of exascale supercomputers for the simulation of electronic excited states.
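As a toy illustration of real-time propagation, a Crank-Nicolson step for a wavefunction under a model Hamiltonian; production codes such as Octopus and Qball use far more sophisticated propagators and rebuild the Hamiltonian self-consistently from the time-dependent density:

```python
import numpy as np

def crank_nicolson_step(psi, H, dt):
    """Propagate psi by one step of i d(psi)/dt = H psi (atomic units,
    hbar = 1) using the unitary Crank-Nicolson approximation
    psi(t+dt) = (I + i H dt/2)^-1 (I - i H dt/2) psi(t).
    H here is a fixed model matrix, not a self-consistent TDDFT Hamiltonian.
    """
    n = H.shape[0]
    eye = np.eye(n, dtype=complex)
    lhs = eye + 0.5j * dt * H
    rhs = (eye - 0.5j * dt * H) @ psi
    return np.linalg.solve(lhs, rhs)
```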
Generating Neuron Geometries for Detailed Three-Dimensional Simulations Using AnaMorph.
Mörschel, Konstantin; Breit, Markus; Queisser, Gillian
2017-07-01
Generating realistic and complex computational domains for numerical simulations is often a challenging task. In neuroscientific research, more and more one-dimensional morphology data is becoming publicly available through databases. This data, however, only contains point and diameter information not suitable for detailed three-dimensional simulations. In this paper, we present a novel framework, AnaMorph, that automatically generates water-tight surface meshes from one-dimensional point-diameter files. These surface triangulations can be used to simulate the electrical and biochemical behavior of the underlying cell. In addition to morphology generation, AnaMorph also performs quality control of the semi-automatically reconstructed cells coming from anatomical reconstructions. This toolset allows an extension from the classical dimension-reduced modeling and simulation of cellular processes to a full three-dimensional and morphology-including method, leading to novel structure-function interplay studies in the medical field. The developed numerical methods can further be employed in other areas where complex geometries are an essential component of numerical simulations.
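As an illustration of the input data, a minimal reader for the widely used SWC point-diameter format (columns: id, type, x, y, z, radius, parent); whether AnaMorph consumes SWC specifically is an assumption of this sketch:

```python
def read_swc(path):
    """Parse a one-dimensional point-diameter morphology file (SWC)."""
    points = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                      # skip comments and blanks
            pid, ptype, x, y, z, radius, parent = line.split()[:7]
            points[int(pid)] = {
                "type": int(ptype),
                "xyz": (float(x), float(y), float(z)),
                "diameter": 2.0 * float(radius),
                "parent": int(parent),        # -1 marks the root point
            }
    return points
```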
Development and application of incrementally complex tools for wind turbine aerodynamics
NASA Astrophysics Data System (ADS)
Gundling, Christopher H.
Advances and availability of computational resources have made wind farm design using simulation tools a reality. Wind farms are battling two issues, affecting the cost of energy, that will make or break many future investments in wind energy. The most significant issue is the power reduction of downstream turbines operating in the wake of upstream turbines. The loss of energy from wind turbine wakes is difficult to predict and the underestimation of energy losses due to wakes has been a common problem throughout the industry. The second issue is a shorter lifetime of blades and past failures of gearboxes due to increased fluctuations in the unsteady loading of waked turbines. The overall goal of this research is to address these problems by developing a platform for a multi-fidelity wind turbine aerodynamic performance and wake prediction tool. Full-scale experiments in the field have dramatically helped researchers understand the unique issues inside a large wind farm, but experimental methods can only be used to a limited extent due to the cost of such field studies and the size of wind farms. The uncertainty of the inflow is another inherent drawback of field experiments. Therefore, computational fluid dynamics (CFD) predictions, strategically validated using carefully performed wind farm field campaigns, are becoming a more standard design practice. The developed CFD models include a blade element model (BEM) code with a free-vortex wake, an actuator disk or line based method with large eddy simulations (LES) and a fully resolved rotor based method with detached eddy simulations (DES) and adaptive mesh refinement (AMR). To create more realistic simulations, performance of a one-way coupling between different mesoscale atmospheric boundary layer (ABL) models and the three microscale CFD solvers is tested. These methods are validated using data from incrementally complex test cases that include the NREL Phase VI wind tunnel test, the Sexbierum wind farm and the Lillgrund offshore wind farm. By cross-comparing the lowest complexity free-vortex method with the higher complexity methods, a fast and accurate simulation tool has been generated that can perform wind farm simulations in a few hours.
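As an illustration of the lowest-complexity model in the hierarchy, a minimal fixed-point sketch of the classical blade element momentum (BEM) iteration at one blade section; the thesis code also adds a free-vortex wake and standard tip-loss and high-induction corrections not shown here:

```python
import numpy as np

def bem_section(V, omega, r, sigma, lift_drag, pitch, n_iter=100):
    """Iterate the axial (a) and tangential (ap) induction factors.

    V         : freestream wind speed
    omega, r  : rotor speed and local radius
    sigma     : local solidity
    lift_drag : callable returning (cl, cd) at a local angle of attack
    pitch     : local twist + pitch angle
    """
    a, ap = 0.0, 0.0
    for _ in range(n_iter):
        phi = np.arctan2((1 - a) * V, (1 + ap) * omega * r)  # inflow angle
        cl, cd = lift_drag(phi - pitch)
        cn = cl * np.cos(phi) + cd * np.sin(phi)   # normal force coeff.
        ct = cl * np.sin(phi) - cd * np.cos(phi)   # tangential force coeff.
        a = 1.0 / (4.0 * np.sin(phi) ** 2 / (sigma * cn) + 1.0)
        ap = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1.0)
    return a, ap, phi
```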
Direct simulation of groundwater age
Goode, Daniel J.
1996-01-01
A new method is proposed to simulate groundwater age directly, by use of an advection-dispersion transport equation with a distributed zero-order source of unit (1) strength, corresponding to the rate of aging. The dependent variable in the governing equation is the mean age, a mass-weighted average age. The governing equation is derived from residence-time-distribution concepts for the case of steady flow. For the more general case of transient flow, a transient governing equation for age is derived from mass-conservation principles applied to conceptual “age mass.” The age mass is the product of the water mass and its age, and age mass is assumed to be conserved during mixing. Boundary conditions include zero age mass flux across all no-flow and inflow boundaries and no age mass dispersive flux across outflow boundaries. For transient-flow conditions, the initial distribution of age must be known. The solution of the governing transport equation yields the spatial distribution of the mean groundwater age and includes diffusion, dispersion, mixing, and exchange processes that typically are considered only through tracer-specific solute transport simulation. Traditional methods have relied on advective transport to predict point values of groundwater travel time and age. The proposed method retains the simplicity and tracer-independence of advection-only models, but incorporates the effects of dispersion and mixing on volume-averaged age. Example simulations of age in two idealized regional aquifer systems, one homogeneous and the other layered, demonstrate the agreement between the proposed method and traditional particle-tracking approaches and illustrate use of the proposed method to determine the effects of diffusion, dispersion, and mixing on groundwater age.
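A hedged sketch of the governing equation implied by the abstract: the standard advection-dispersion operator applied to the mean age A, with a distributed zero-order source of unit strength representing the rate of aging:

```latex
% Schematic form for the mean age A (steady velocity field v,
% dispersion tensor D); the source term 1 is the rate of aging.
\frac{\partial A}{\partial t} + \mathbf{v}\cdot\nabla A
  - \nabla\cdot\left(\mathbf{D}\,\nabla A\right) = 1
```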
NASA Astrophysics Data System (ADS)
Pickl, Kristina; Pande, Jayant; Köstler, Harald; Rüde, Ulrich; Smith, Ana-Sunčana
2017-03-01
Propulsion at low Reynolds numbers is often studied by defining artificial microswimmers which exhibit a particular stroke. The disadvantage of such an approach is that the stroke does not adjust to the environment, in particular the fluid flow, which can diminish the effect of hydrodynamic interactions. To overcome this limitation, we simulate a microswimmer consisting of three beads connected by springs and dampers, using the self-developed waLBerla and pe framework based on the lattice Boltzmann method and the discrete element method. In our approach, the swimming stroke of a swimmer emerges as a balance of the drag, the driving and the elastic internal forces. We validate the simulations by comparing the obtained swimming velocity to the velocity found analytically using a perturbative method where the bead oscillations are taken to be small. Including higher-order terms in the hydrodynamic interactions between the beads improves the agreement to the simulations in parts of the parameter space. Encouraged by the agreement between the theory and the simulations and aided by the massively parallel capabilities of the waLBerla-pe framework, we simulate more than ten thousand such swimmers together, thus presenting the first fully resolved simulations of large swarms with active responsive components.
Modelling and scale-up of chemical flooding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Lake, L.W.; Sepehrnoori, K.
1990-03-01
The objective of this research is to develop, validate, and apply a comprehensive chemical flooding simulator for chemical recovery processes involving surfactants, polymers, and alkaline chemicals in various combinations. This integrated program includes components of laboratory experiments, physical property modelling, scale-up theory, and numerical analysis as necessary and integral components of the simulation activity. We have continued to develop, test, and apply our chemical flooding simulator (UTCHEM) to a wide variety of laboratory and reservoir problems involving tracers, polymers, polymer gels, surfactants, and alkaline agents. Part I is an update on the Application of Higher-Order Methods in Chemical Flooding Simulation. This update focuses on the comparison of grid orientation effects for four different numerical methods implemented in UTCHEM. Part II is on Simulation Design Studies and is a continuation of Saad's Big Muddy surfactant pilot simulation study reported last year. Part III reports on the Simulation of Gravity Effects under conditions similar to those of some of the oil reservoirs in the North Sea. Part IV is on Determining Oil Saturation from Interwell Tracers, in which UTCHEM is used for large-scale interwell tracer tests. A systematic procedure for estimating oil saturation from interwell tracer data is developed and a specific example based on actual field data provided by Sun E P Co. is given. Part V reports on the Application of Vectorization and Microtasking for Reservoir Simulation. Part VI reports on Alkaline Simulation. The alkaline/surfactant/polymer flood compositional simulator (UTCHEM) reported last year is further extended to include reactions involving chemical species containing magnesium, aluminium and silicon as constituent elements. Part VII reports on permeability and trapping of microemulsion.
Extension of a coarse grained particle method to simulate heat transfer in fluidized beds
Lu, Liqiang; Morris, Aaron; Li, Tingwen; ...
2017-04-18
The heat transfer in a gas-solids fluidized bed is simulated with the computational fluid dynamics-discrete element method (CFD-DEM) and the coarse grained particle method (CGPM). In CGPM, fewer numerical particles and their collisions are tracked by lumping several real particles into a computational parcel. Here, the assumption is that the real particles inside a coarse grained particle (CGP) are made from the same species and share identical physical properties including density, diameter and temperature. The parcel-fluid convection term in CGPM is calculated using the same method as in DEM. For all other heat transfer mechanisms, we derive in this study mathematical expressions that relate the new heat transfer terms for CGPM to those traditionally derived in DEM. This newly derived CGPM model is verified and validated by comparing the results with CFD-DEM simulation results and experimental data. The numerical results compare well with experimental data for both hydrodynamics and temperature profiles. Finally, the proposed CGPM model can be used for fast and accurate simulations of heat transfer in large scale gas-solids fluidized beds.
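As an illustration of the parcel bookkeeping, a schematic convective energy balance for one coarse-grained parcel; this is not the paper's full model, which also derives expressions for conduction and the other heat transfer mechanisms:

```python
def parcel_convection_update(T_p, T_g, w, m_p, c_p, h, A_p, dt):
    """Convective heating of a parcel holding w real particles that share
    one temperature T_p: each real particle exchanges heat with the gas
    (temperature T_g) as in DEM, so the parcel totals scale with w.
    A schematic explicit energy balance, with per-particle mass m_p,
    specific heat c_p, heat transfer coefficient h, and surface area A_p.
    """
    q_conv = w * h * A_p * (T_g - T_p)          # total convective heat flow
    return T_p + q_conv * dt / (w * m_p * c_p)  # w cancels for this term alone
```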
MRXCAT: Realistic numerical phantoms for cardiovascular magnetic resonance
2014-01-01
Background Computer simulations are important for validating novel image acquisition and reconstruction strategies. In cardiovascular magnetic resonance (CMR), numerical simulations need to combine anatomical information and the effects of cardiac and/or respiratory motion. To this end, a framework for realistic CMR simulations is proposed and its use for image reconstruction from undersampled data is demonstrated. Methods The extended Cardiac-Torso (XCAT) anatomical phantom framework with various motion options was used as a basis for the numerical phantoms. Different tissue, dynamic contrast and signal models, multiple receiver coils and noise are simulated. Arbitrary trajectories and undersampled acquisition can be selected. The utility of the framework is demonstrated for accelerated cine and first-pass myocardial perfusion imaging using k-t PCA and k-t SPARSE. Results MRXCAT phantoms allow for realistic simulation of CMR including optional cardiac and respiratory motion. Example reconstructions from simulated undersampled k-t parallel imaging demonstrate the feasibility of simulated acquisition and reconstruction using the presented framework. Myocardial blood flow assessment from simulated myocardial perfusion images highlights the suitability of MRXCAT for quantitative post-processing simulation. Conclusion The proposed MRXCAT phantom framework enables versatile and realistic simulations of CMR including breathhold and free-breathing acquisitions. PMID:25204441
Quantitative validation of carbon-fiber laminate low velocity impact simulations
English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.
2015-09-26
Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
Sensing Methods for Detecting Analog Television Signals
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Song, Chunyi; Harada, Hiroshi
This paper introduces a unified method of spectrum sensing for all existing analog television (TV) signals including NTSC, PAL and SECAM. We propose a correlation based method (CBM) with a single reference signal for sensing any analog TV signals. In addition we also propose an improved energy detection method. The CBM approach has been implemented in a hardware prototype specially designed for participating in Singapore TV white space (WS) test trial conducted by Infocomm Development Authority (IDA) of the Singapore government. Analytical and simulation results of the CBM method will be presented in the paper, as well as hardware testing results for sensing various analog TV signals. Both AWGN and fading channels will be considered. It is shown that the theoretical results closely match with those from simulations. Sensing performance of the hardware prototype will also be presented in fading environment by using a fading simulator. We present performance of the proposed techniques in terms of probability of false alarm, probability of detection, sensing time etc. We also present a comparative study of the various techniques.
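As an illustration, minimal sketches of the two detector families discussed: a generic energy detector and a correlation test against a single reference signal; the paper's improved variants necessarily differ in detail:

```python
import numpy as np

def energy_detector(samples, threshold):
    """Classic energy detection: compare average received power to a
    threshold chosen for a target false-alarm probability."""
    test_statistic = np.mean(np.abs(samples) ** 2)
    return test_statistic > threshold   # True => TV signal declared present

def correlation_statistic(samples, reference):
    """Correlation against a single reference signal, in the spirit of the
    paper's CBM approach (exact statistic assumed for illustration)."""
    return np.abs(np.vdot(reference, samples)) / len(samples)
```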
A compressible multiphase framework for simulating supersonic atomization
NASA Astrophysics Data System (ADS)
Regele, Jonathan D.; Garrick, Daniel P.; Hosseinzadeh-Nik, Zahra; Aslani, Mohamad; Owkes, Mark
2016-11-01
The study of atomization in supersonic combustors is critical in designing efficient and high performance scramjets. Numerical methods incorporating surface tension effects have largely focused on the incompressible regime as most atomization applications occur at low Mach numbers. Simulating surface tension effects in high speed compressible flow requires robust numerical methods that can handle discontinuities caused by both material interfaces and shocks. A shock capturing/diffused interface method is developed to simulate high-speed compressible gas-liquid flows with surface tension effects using the five-equation model. This includes developments that account for the interfacial pressure jump that occurs in the presence of surface tension. A simple and efficient method for computing local interface curvature is developed and an acoustic non-dimensional scaling for the surface tension force is proposed. The method successfully captures a variety of droplet breakup modes over a range of Weber numbers and demonstrates the impact of surface tension in countering droplet deformation in both subsonic and supersonic cross flows.
A Method for Large Eddy Simulation of Acoustic Combustion Instabilities
NASA Astrophysics Data System (ADS)
Wall, Clifton; Moin, Parviz
2003-11-01
A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustics combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Additionally, new boundary conditions based on the work of Poinsot and Lele have been developed to model the acoustic effect of a long channel upstream of the computational inlet, thus avoiding the need to include such a channel in the computational domain. The turbulent combustion model used is the Level Set model of Duchamp de Lageneste and Pitsch for premixed combustion. Comparison of LES results to the reacting experiments of Besson et al. will be presented.
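A schematic of the key step: with implicit time advancement of the compressible equations, the incompressible pressure-Poisson operator acquires an undifferentiated term and becomes a Helmholtz operator. The exact coefficients depend on the discretization, so the form below is only indicative:

```latex
% Indicative semi-discrete pressure equation (sound speed c, time step
% \Delta t); as c -> infinity the standard Poisson equation is recovered.
\nabla^{2} p^{\,n+1} - \frac{1}{c^{2}\,\Delta t^{2}}\, p^{\,n+1}
  = \mathrm{RHS}\!\left(\rho^{n}, \mathbf{u}^{n}\right)
```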
Immersed boundary methods for simulating fluid-structure interaction
NASA Astrophysics Data System (ADS)
Sotiropoulos, Fotis; Yang, Xiaolei
2014-02-01
Fluid-structure interaction (FSI) problems commonly encountered in engineering and biological applications involve geometrically complex flexible or rigid bodies undergoing large deformations. Immersed boundary (IB) methods have emerged as a powerful simulation tool for tackling such flows due to their inherent ability to handle arbitrarily complex bodies without the need for expensive and cumbersome dynamic re-meshing strategies. Depending on the approach such methods adopt to satisfy boundary conditions on solid surfaces they can be broadly classified as diffused and sharp interface methods. In this review, we present an overview of the fundamentals of both classes of methods with emphasis on solution algorithms for simulating FSI problems. We summarize and juxtapose different IB approaches for imposing boundary conditions, efficient iterative algorithms for solving the incompressible Navier-Stokes equations in the presence of dynamic immersed boundaries, and strong and loose coupling FSI strategies. We also present recent results from the application of such methods to study a wide range of problems, including vortex-induced vibrations, aquatic swimming, insect flying, human walking and renewable energy. Limitations of such methods and the need for future research to mitigate them are also discussed.
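As an illustration of the diffused-interface approach, a minimal 1-D force-spreading step using a Peskin-style smoothed delta function (the specific kernel choice varies between methods):

```python
import numpy as np

def spread_forces(lagrangian_x, lagrangian_f, grid_x, h, ds):
    """Spread each Lagrangian marker force to nearby Eulerian grid nodes
    through a cosine-type discrete delta function with support 4h."""
    def delta_h(r):
        r = np.abs(r) / h
        return np.where(r < 2.0,
                        0.25 * (1.0 + np.cos(np.pi * r / 2.0)) / h,
                        0.0)

    f_grid = np.zeros_like(grid_x)
    for X, F in zip(lagrangian_x, lagrangian_f):
        f_grid += F * delta_h(grid_x - X) * ds   # ds: marker segment length
    return f_grid
```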
FFT multislice method--the silver anniversary.
Ishizuka, Kazuo
2004-02-01
The first paper on the FFT multislice method was published in 1977, a quarter of a century ago. The formula was extended in 1982 to include a large tilt of an incident beam relative to the specimen surface. Since then, with advances of computing power, the FFT multislice method has been successfully applied to coherent CBED and HAADF-STEM simulations. However, because the multislice formula is built on some physical approximations and approximations in numerical procedure, there seem to be controversial conclusions in the literature on the multislice method. In this report, the physical implication of the multislice method is reviewed based on the formula for the tilted illumination. Then, some results on the coherent CBED and the HAADF-STEM simulations are presented.
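As an illustration, one step of the standard (zero-tilt) FFT multislice recursion, alternating transmission through a thin slice with Fresnel propagation to the next slice; the 1982 tilted-illumination formula adds extra phase terms not shown here:

```python
import numpy as np

def multislice_step(psi, trans, wavelength, dz, kx, ky):
    """psi   : complex wavefunction on the slice entrance plane
    trans : complex transmission function of the slice
    kx, ky: spatial-frequency axes (e.g., from np.fft.fftfreq)
    Returns the wavefunction at the next slice entrance plane.
    """
    k2 = kx[:, None] ** 2 + ky[None, :] ** 2
    propagator = np.exp(-1j * np.pi * wavelength * dz * k2)  # Fresnel
    return np.fft.ifft2(propagator * np.fft.fft2(trans * psi))
```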
The viability of ADVANTG deterministic method for synthetic radiography generation
NASA Astrophysics Data System (ADS)
Bingham, Andrew; Lee, Hyoung K.
2018-07-01
Fast simulation techniques for generating synthetic radiographic images at high resolution are helpful when new radiation imaging systems are designed. However, the standard stochastic approach requires lengthy run times, with poorer statistics at higher resolution. The viability of a deterministic approach to synthetic radiography image generation was explored, with the aim of quantifying the computational time decrease relative to the stochastic method. ADVANTG was compared to MCNP in multiple scenarios, including a small radiography system prototype, to simulate high resolution radiography images. By using the ADVANTG deterministic code to simulate radiography images, the computational time was found to decrease by a factor of 10 to 13 compared to the MCNP stochastic approach while retaining image quality.
A Novel Actuator for Simulation of Epidural Anesthesia and Other Needle Insertion Procedures
Magill, John C.; Byl, Marten F.; Hinds, Michael F.; Agassounon, William; Pratt, Stephen D.; Hess, Philip E.
2010-01-01
Introduction When navigating a needle from skin to epidural space, a skilled clinician maintains a mental model of the anatomy and uses the various forms of haptic and visual feedback to track the location of the needle tip. Simulating the procedure requires an actuator that can produce the feel of tissue layers even as the needle direction changes from the ideal path. Methods A new actuator and algorithm architecture simulate forces associated with passing a needle through varying tissue layers. The actuator uses a set of cables to suspend a needle holder. The cables are wound onto spools controlled by brushless motors. An electromagnetic tracker is used to monitor the position of the needle tip. Results Novice and expert clinicians simulated epidural insertion with the simulator. Preliminary depth-time curves show that the user responds to changes in tissue properties as the needle is advanced. Some discrepancy in clinician response indicates that the feel of the simulator is sensitive to technique, thus perfect tissue property simulation has not been achieved. Conclusions The new simulator is able to approximately reproduce properties of complex multilayer tissue structures, including fine-scale texture. Methods for improving fidelity of the simulation are identified. PMID:20651481
Discrete Molecular Dynamics Approach to the Study of Disordered and Aggregating Proteins.
Emperador, Agustí; Orozco, Modesto
2017-03-14
We present a refinement of the Coarse Grained PACSAB force field for Discrete Molecular Dynamics (DMD) simulations of proteins in aqueous conditions. Like the original version, the refined method provides a good representation of the structure and dynamics of folded proteins, but it provides much better representations of a variety of unfolded proteins, including some very large ones that are impossible to analyze with atomistic simulation methods. The PACSAB/DMD method also reproduces aggregation properties accurately, providing good pictures of the structural ensembles of proteins with a folded core and an intrinsically disordered region. The combination of accuracy and speed makes the method presented here a good alternative for the exploration of unstructured protein systems.
N-S/DSMC hybrid simulation of hypersonic flow over blunt body including wakes
NASA Astrophysics Data System (ADS)
Li, Zhonghua; Li, Zhihui; Li, Haiyan; Yang, Yanguang; Jiang, Xinyu
2014-12-01
A hybrid N-S/DSMC method is presented and applied to solve three-dimensional hypersonic transitional flows by employing the MPC (Modular Particle-Continuum) technique based on the N-S and DSMC methods. A sub-relaxation technique is adopted to deal with information transfer between the N-S and DSMC solvers. The hypersonic flows over a 70-deg spherically blunted cone under different Knudsen numbers are simulated using CFD, DSMC and the hybrid N-S/DSMC method. The present computations are found to be in good agreement with DSMC and experimental results. The present method provides an efficient way to predict hypersonic aerodynamics in the near-continuum transitional flow regime.
NASA Astrophysics Data System (ADS)
Saide, P. E.; Steinhoff, D.; Kosovic, B.; Weil, J.; Smith, N.; Blewitt, D.; Delle Monache, L.
2017-12-01
There are a wide variety of methods that have been proposed and used to estimate methane emissions from oil and gas production by using air composition and meteorology observations in conjunction with dispersion models. Although there has been some verification of these methodologies using controlled releases and concurrent atmospheric measurements, it is difficult to assess the accuracy of these methods for more realistic scenarios considering factors such as terrain, emissions from multiple components within a well pad, and time-varying emissions representative of typical operations. In this work we use a large-eddy simulation (LES) to generate controlled but realistic synthetic observations, which can be used to test multiple source term estimation methods, also known as an Observing System Simulation Experiment (OSSE). The LES is based on idealized simulations of the Weather Research & Forecasting (WRF) model at 10 m horizontal grid-spacing covering an 8 km by 7 km domain with terrain representative of a region located in the Barnett shale. Well pads are setup in the domain following a realistic distribution and emissions are prescribed every second for the components of each well pad (e.g., chemical injection pump, pneumatics, compressor, tanks, and dehydrator) using a simulator driven by oil and gas production volume, composition and realistic operational conditions. The system is setup to allow assessments under different scenarios such as normal operations, during liquids unloading events, or during other prescribed operational upset events. Methane and meteorology model output are sampled following the specifications of the emission estimation methodologies and considering typical instrument uncertainties, resulting in realistic observations (see Figure 1). We will show the evaluation of several emission estimation methods including the EPA Other Test Method 33A and estimates using the EPA AERMOD regulatory model. We will also show source estimation results from advanced methods such as variational inverse modeling, and Bayesian inference and stochastic sampling techniques. Future directions including other types of observations, other hydrocarbons being considered, and assessment of additional emission estimation methods will be discussed.
Recent Survey and Application of the simSUNDT Software
NASA Astrophysics Data System (ADS)
Persson, G.; Wirdelius, H.
2010-02-01
The simSUNDT software is based on a previously developed program (SUNDT). The latest version has been customized to generate realistic synthetic data (including a grain noise model), compatible with a number of off-line analysis software packages. The software consists of a Windows®-based preprocessor and postprocessor together with a mathematical kernel (UTDefect) dealing with the actual mathematical modeling. The model employs various integral transforms and integral equations and enables simulation of the entire ultrasonic testing situation. The model is completely three-dimensional, though the simulated component is two-dimensional, bounded by the scanning surface and, as an option, a planar back surface. It is of great importance that the inspection methods applied are properly validated and that their capability to detect cracks and defects is quantified. To achieve this, statistical methods such as Probability of Detection (POD) are often applied, with the ambition to estimate detectability as a function of defect size. The proposed procedure using test pieces is not only very expensive; it also tends to introduce a number of possible misalignments between the actual NDT situation to be performed and the proposed experimental simulation. The presentation will describe the developed model, which enables simulation of a phased array NDT inspection, and the ambition to use this simulation software to generate POD information. The paper also includes the most recent developments of the model, including some initial experimental validation of the phased array probe model.
Pryor, Alan; Ophus, Colin; Miao, Jianwei
2017-10-25
Simulation of atomic-resolution image formation in scanning transmission electron microscopy can require significant computation times using traditional methods. A recently developed method, termed plane-wave reciprocal-space interpolated scattering matrix (PRISM), demonstrates potential for significant acceleration of such simulations with negligible loss of accuracy. In this paper, we present a software package called Prismatic for parallelized simulation of image formation in scanning transmission electron microscopy (STEM) using both the PRISM and multislice methods. By distributing the workload between multiple CUDA-enabled GPUs and multicore processors, accelerations as high as 1000 × for PRISM and 15 × for multislice are achieved relative to traditional multislice implementations using a single 4-GPU machine. We demonstrate a potentially important application of Prismatic, using it to compute images for atomic electron tomography at sufficient speeds to include in the reconstruction pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic.
Profile Optimization Method for Robust Airfoil Shape Optimization in Viscous Flow
NASA Technical Reports Server (NTRS)
Li, Wu
2003-01-01
Simulation results obtained by using FUN2D for robust airfoil shape optimization in transonic viscous flow are included to show the potential of the profile optimization method for generating fairly smooth optimal airfoils with no off-design performance degradation.
Teaching Materials and Methods.
ERIC Educational Resources Information Center
Physiologist, 1982
1982-01-01
Twelve abstracts of papers presented at the 33rd Annual Fall Meeting of the American Physiological Society are listed, focusing on teaching materials/methods. Topics, among others, include trends in physiology laboratory programs, cardiovascular system model, cardiovascular computer simulation with didactic feedback, and computer generated figures…
An Improved Method for Demonstrating Visual Selection by Wild Birds.
ERIC Educational Resources Information Center
Allen, J. A.; And Others
1990-01-01
An activity simulating natural selection in which wild birds are predators, green and brown pastry "baits" are prey, and trays containing colored stones as the backgrounds is presented. Two different methods of measuring selection are used to describe the results. The materials and methods, results, and discussion are included. (KR)
NASA Astrophysics Data System (ADS)
Bonek, Mirosław; Śliwa, Agata; Mikuła, Jarosław
2016-12-01
Investigations include a Finite Element Method (FEM) simulation model of remelting of the PMHSS6-5-3 high-speed steel surface layer using a high power diode laser (HPDL). The Finite Element Method computations were performed using ANSYS software. The scope of the FEM simulation was the determination of the temperature distribution during the laser alloying process at various process configurations regarding the laser beam power and the method of powder deposition, as pre-coated paste or as a surface with machined grooves. The FEM simulation was performed on five different three-dimensional models. The models assumed nonlinear changes of thermal conductivity, specific heat and density that depend on temperature. The heating process was realized as a heat flux corresponding to laser beam powers of 1.4, 1.7 and 2.1 kW. Latent heat effects are considered during solidification. The molten pool is composed of the same material as the substrate and there is no chemical reaction. The absorptivity of laser energy depended on the simulated material properties and surface condition. The FEM simulation allows specifying the heat affected zone and the temperature distribution in the sample as a function of time, and thus allows estimation of the structural changes taking place during the laser remelting process. The simulation was applied to determine the shape of the molten pool and the penetration depth of the remelted surface. The simulated penetration depth and molten pool profile match the experimental results well; the depth values obtained in simulation are very close to the experimental data, while small differences were noted in the shape of the molten pool. The heat flux input considered in the simulation is only part of the heating mechanism; thus, the final shape of the solidified molten pool will depend on more variables.
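As a schematic analogue of the thermal problem solved in ANSYS, a 1-D explicit finite-difference conduction model driven by a surface heat flux standing in for the laser beam power; material values are illustrative only, and the real model's temperature-dependent properties are omitted:

```python
import numpy as np

def heat_1d(n=100, dx=1e-4, dt=1e-4, steps=2000, q_surf=1.4e7,
            k=30.0, rho=7800.0, cp=500.0, T0=300.0):
    """Explicit transient conduction in a bar heated by a surface flux
    q_surf [W/m^2]; far boundary held at ambient temperature T0 [K]."""
    T = np.full(n, T0)
    alpha = k / (rho * cp)                   # thermal diffusivity [m^2/s]
    for _ in range(steps):
        Tn = T.copy()
        T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (
            Tn[2:] - 2.0 * Tn[1:-1] + Tn[:-2])
        T[0] = T[1] + q_surf * dx / k        # imposed surface heat flux
        T[-1] = T0                           # far boundary at ambient
    return T
```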
Mohiuddin, Syed; Busby, John; Savović, Jelena; Richards, Alison; Northstone, Kate; Hollingworth, William; Donovan, Jenny L; Vasilakis, Christos
2017-01-01
Objectives Overcrowding in the emergency department (ED) is common in the UK as in other countries worldwide. Computer simulation is one approach used for understanding the causes of ED overcrowding and assessing the likely impact of changes to the delivery of emergency care. However, little is known about the usefulness of computer simulation for analysis of ED patient flow. We undertook a systematic review to investigate the different computer simulation methods and their contribution for analysis of patient flow within EDs in the UK. Methods We searched eight bibliographic databases (MEDLINE, EMBASE, COCHRANE, WEB OF SCIENCE, CINAHL, INSPEC, MATHSCINET and ACM DIGITAL LIBRARY) from date of inception until 31 March 2016. Studies were included if they used a computer simulation method to capture patient progression within the ED of an established UK National Health Service hospital. Studies were summarised in terms of simulation method, key assumptions, input and output data, conclusions drawn and implementation of results. Results Twenty-one studies met the inclusion criteria. Of these, 19 used discrete event simulation and 2 used system dynamics models. The purpose of many of these studies (n=16; 76%) centred on service redesign. Seven studies (33%) provided no details about the ED being investigated. Most studies (n=18; 86%) used specific hospital models of ED patient flow. Overall, the reporting of underlying modelling assumptions was poor. Nineteen studies (90%) considered patient waiting or throughput times as the key outcome measure. Twelve studies (57%) reported some involvement of stakeholders in the simulation study. However, only three studies (14%) reported on the implementation of changes supported by the simulation. Conclusions We found that computer simulation can provide a means to pretest changes to ED care delivery before implementation in a safe and efficient manner. However, the evidence base is small and poorly developed. There are some methodological, data, stakeholder, implementation and reporting issues, which must be addressed by future studies. PMID:28487459
User News. Volume 17, Number 1 -- Spring 1996
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This is a newsletter for users of the DOE-2, PowerDOE, SPARK, and BLAST building energy simulation programs. The topics for the Spring 1996 issue include the SPARK simulation environment, DOE-2 validation, listing of free fenestration software from LBNL, Web sites for building energy efficiency, the heat balance method of calculating building heating and cooling loads.
An optimal control approach to the design of moving flight simulators
NASA Technical Reports Server (NTRS)
Sivan, R.; Ish-Shalom, J.; Huang, J.-K.
1982-01-01
An abstract flight simulator design problem is formulated in the form of an optimal control problem, which is solved for the linear-quadratic-Gaussian special case using a mathematical model of the vestibular organs. The optimization criterion used is the mean-square difference between the physiological outputs of the vestibular organs of the pilot in the aircraft and the pilot in the simulator. The dynamical equations are linearized, and the output signal is modeled as a random process with rational power spectral density. The method described yields the optimal structure of the simulator's motion generator, or 'washout filter'. A two-degree-of-freedom flight simulator design, including single output simulations, is presented.
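The optimization criterion can be written compactly. A hedged LQG sketch of the cost implied by the abstract, with notation assumed here: y_a and y_s are the vestibular outputs of the pilot in the aircraft and in the simulator, u is the simulator motion command, and the R-term actuator penalty is a standard addition not stated in the abstract.

```latex
\[
J \;=\; \lim_{T\to\infty} \frac{1}{T}\,
  \mathbb{E}\!\int_0^T \Big[\big(y_a(t)-y_s(t)\big)^{\top} Q\,\big(y_a(t)-y_s(t)\big)
  \;+\; u(t)^{\top} R\, u(t)\Big]\,dt .
\]
```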
NASA Technical Reports Server (NTRS)
Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je
2010-01-01
The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operators, who can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanism using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.
Borycki, Elizabeth; Kushniruk, Andre; Carvalho, Christopher
2013-01-01
Internationally, health information systems (HIS) safety has emerged as a significant concern for governments. Recently, research has emerged that has documented the ability of HIS to be implicated in the harm and death of patients. Researchers have attempted to develop methods that can be used to prevent or reduce technology-induced errors. Some researchers are developing methods that can be employed prior to systems release. These methods include the development of safety heuristics and clinical simulations. In this paper, we outline our methodology for developing safety heuristics specific to identifying the features or functions of a HIS user interface design that may lead to technology-induced errors. We follow this with a description of a methodological approach to validate these heuristics using clinical simulations. PMID:23606902
DNS of Flow in a Low-Pressure Turbine Cascade Using a Discontinuous-Galerkin Spectral-Element Method
NASA Technical Reports Server (NTRS)
Garai, Anirban; Diosady, Laslo Tibor; Murman, Scott; Madavan, Nateri
2015-01-01
A new computational capability under development for accurate and efficient high-fidelity direct numerical simulation (DNS) and large eddy simulation (LES) of turbomachinery is described. This capability is based on an entropy-stable Discontinuous-Galerkin spectral-element approach that extends to arbitrarily high orders of spatial and temporal accuracy and is implemented in a computationally efficient manner on a modern high performance computer architecture. A validation study using this method to perform DNS of flow in a low-pressure turbine airfoil cascade is presented. Preliminary results indicate that the method captures the main features of the flow. Discrepancies between the predicted results and the experiments are likely due to the effects of freestream turbulence not being included in the simulation and will be addressed in the final paper.
Fluid Simulation in the Movies: Navier and Stokes Must Be Circulating in Their Graves
NASA Astrophysics Data System (ADS)
Tessendorf, Jerry
2010-11-01
Fluid simulations based on the Incompressible Navier-Stokes equations are commonplace computer graphics tools in the visual effects industry. These simulations mostly come from custom C++ code written by the visual effects companies. Their significant impact in films was recognized in 2008 with Academy Awards to four visual effects companies for their technical achievement. However, artists are not fluid dynamicists, and fluid dynamics simulations are expensive to use in a deadline-driven production environment. As a result, the simulation algorithms are modified to limit the computational resources, adapt them to production workflow, and to respect the client's vision of the film plot. Eulerian solvers on fixed rectangular grids use a mix of momentum solvers, including Semi-Lagrangian, FLIP, and QUICK. Incompressibility is enforced with FFT, Conjugate Gradient, and Multigrid methods. For liquids, a levelset field tracks the free surface. Smooth Particle Hydrodynamics is also used, and is part of a hybrid Eulerian-SPH liquid simulator. Artists use all of them in a mix-and-match fashion to control the appearance of the simulation. Specially designed forces and boundary conditions control the flow. The simulation can be an input to artistically driven procedural particle simulations that enhance the flow with more detail and drama. Post-simulation processing increases the visual detail beyond the grid resolution. Ultimately, iterative simulation methods that fit naturally in the production workflow are extremely desirable but not yet successful. Results from some efforts for iterative methods are shown, and other approaches motivated by the history of production are proposed.
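Of the momentum solvers named above, the semi-Lagrangian step is the simplest to sketch: trace each grid point backwards through the velocity field and interpolate. A minimal generic 2-D version with periodic boundaries (Stable-Fluids style, not any studio's code):

```python
# Minimal semi-Lagrangian advection step: backtrace departure points through
# the velocity field, then bilinearly interpolate. Periodic boundaries.
import numpy as np

def advect(q, u, v, dt, dx):
    """Advect scalar field q (ny, nx) by velocity components u, v (same shape)."""
    ny, nx = q.shape
    jj, ii = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    x = ii - dt * u / dx                      # departure points, in grid units
    y = jj - dt * v / dx
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = x - x0, y - y0
    def at(j, i):                             # periodic lookup
        return q[j % ny, i % nx]
    return ((1 - fx) * (1 - fy) * at(y0, x0) + fx * (1 - fy) * at(y0, x0 + 1)
            + (1 - fx) * fy * at(y0 + 1, x0) + fx * fy * at(y0 + 1, x0 + 1))

# Example: move a square blob half a cell to the right per step.
q0 = np.zeros((64, 64)); q0[28:36, 28:36] = 1.0
q1 = advect(q0, u=np.full((64, 64), 0.5), v=np.zeros((64, 64)), dt=1.0, dx=1.0)
```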
NASA Astrophysics Data System (ADS)
Gao, Xiang; Schlosser, C. Adam
2018-04-01
Regional climate models (RCMs) can simulate heavy precipitation more accurately than general circulation models (GCMs) through more realistic representation of topography and mesoscale processes. Analogue methods of downscaling, which identify the large-scale atmospheric conditions associated with heavy precipitation, can also produce more accurate and precise heavy precipitation frequency in GCMs than the simulated precipitation. In this study, we examine the performances of the analogue method versus direct simulation, when applied to RCM and GCM simulations, in detecting present-day and future changes in summer (JJA) heavy precipitation over the Midwestern United States. We find analogue methods are comparable to MERRA-2 and its bias-corrected precipitation in characterizing the occurrence and interannual variations of observed heavy precipitation events, all significantly improving upon MERRA precipitation. For the late twentieth-century heavy precipitation frequency, RCM precipitation improves upon the corresponding driving GCM with greater accuracy yet comparable inter-model discrepancies, while both RCM- and GCM-based analogue results outperform their model-simulated precipitation counterparts in terms of accuracy and model consensus. For the projected trends in heavy precipitation frequency through the mid twenty-first century, analogue method also manifests its superiority to direct simulation with reduced intermodel disparities, while the RCM-based analogue and simulated precipitation do not demonstrate a salient improvement (in model consensus) over the GCM-based assessment. However, a number of caveats preclude any overall judgement, and further work—over any region of interest—should include a larger sample of GCMs and RCMs as well as ensemble simulations to comprehensively account for internal variability.
PARALLEL HOP: A SCALABLE HALO FINDER FOR MASSIVE COSMOLOGICAL DATA SETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skory, Stephen; Turk, Matthew J.; Norman, Michael L.
2010-11-15
Modern N-body cosmological simulations contain billions (10^9) of dark matter particles. These simulations require hundreds to thousands of gigabytes of memory and employ hundreds to tens of thousands of processing cores on many compute nodes. In order to study the distribution of dark matter in a cosmological simulation, the dark matter halos must be identified using a halo finder, which establishes the halo membership of every particle in the simulation. The resources required for halo finding are similar to the requirements for the simulation itself. In particular, simulations have become too extensive for commonly employed halo finders, such that the computational requirements to identify halos must now be spread across multiple nodes and cores. Here, we present a scalable parallel halo finding method called Parallel HOP for large-scale cosmological simulation data. Based on the halo finder HOP, it utilizes the message passing interface and domain decomposition to distribute the halo finding workload across multiple compute nodes, enabling analysis of much larger data sets than is possible with the strictly serial or previous parallel implementations of HOP. We provide a reference implementation of this method as a part of the toolkit yt, an analysis toolkit for adaptive mesh refinement data that includes complementary analysis modules. Additionally, we discuss a suite of benchmarks that demonstrate that this method scales well up to several hundred tasks and data sets in excess of 2000^3 particles. The Parallel HOP method and our implementation can be readily applied to any kind of N-body simulation data and is therefore widely applicable.
Adaptive hybrid simulations for multiscale stochastic reaction networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods includes hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
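For reference, the exact discrete process that such hybrid methods approximate is simulated by Gillespie's SSA. A minimal sketch on a toy birth-death network (the network is illustrative, not from the paper):

```python
# Gillespie's Stochastic Simulation Algorithm on a toy birth-death network:
# 0 -> X at rate k1; X -> 0 at rate k2*X.
import numpy as np
rng = np.random.default_rng(0)

k1, k2 = 10.0, 0.1
x, t, t_end = 0, 0.0, 100.0
while t < t_end:
    a = np.array([k1, k2 * x])                # reaction propensities
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)            # exponential waiting time
    if rng.random() < a[0] / a0:              # pick which reaction fires
        x += 1                                # birth
    else:
        x -= 1                                # death
print(x)                                      # fluctuates around k1/k2 = 100
```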
NASA Astrophysics Data System (ADS)
Oh, Seok-Geun; Suh, Myoung-Seok
2017-07-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal-weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best member of each category. However, their projection skills were significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
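The abstract names the weighting ingredients (RMSE, correlation, Taylor score) without giving formulas. A hedged sketch of the generic idea behind WEA_RAC-style weighting, with an assumed skill score; the paper's exact weighting formulas (and the Taylor-score variant) differ:

```python
# Skill-weighted ensemble averaging sketch: weights grow with correlation and
# shrink with RMSE over a training period. The skill score is an assumption.
import numpy as np

def weighted_ensemble(members, truth_train):
    """members: (m, t) member simulations; truth_train: (t,) training truth."""
    rmse = np.sqrt(((members - truth_train) ** 2).mean(axis=1))
    corr = np.array([np.corrcoef(m, truth_train)[0, 1] for m in members])
    skill = np.clip(corr, 0.0, None) / rmse   # assumed skill score
    w = skill / skill.sum()
    return w @ members                        # weighted ensemble average

# In projection mode, the same weights learned on the training period would be
# applied to the members' future simulations.
```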
Zhu, Hao; Sun, Yan; Rajagopal, Gunaretnam; Mondry, Adrian; Dhar, Pawan
2004-01-01
Background Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at organ level. Methods We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results We demonstrate that electrical activities at channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods described. PMID:15339335
NASA Astrophysics Data System (ADS)
Bylaska, E. J.; Kowalski, K.; Apra, E.; Govind, N.; Valiev, M.
2017-12-01
Methods of directly simulating the behavior of complex strongly interacting atomic systems (molecular dynamics, Monte Carlo) have provided important insight into the behavior of nanoparticles, biogeochemical systems, mineral/fluid systems, actinide systems, and geofluids. The limitation of these methods to even wider applications is the difficulty of developing accurate potential interactions in these systems at the molecular level that capture their complex chemistry. The well-developed tools of quantum chemistry and physics have been shown to approach the accuracy required. However, despite the continuous effort being put into improving their accuracy and efficiency, these tools will be of little value to condensed matter problems without continued improvements in techniques to traverse and sample the high-dimensional phase space needed to span the ~10^12 time scale difference between molecular simulation and chemical events. In recent years, we have made considerable progress in developing electronic structure and AIMD methods tailored to treat biochemical and geochemical problems, including very efficient implementations of many-body methods, fast exact exchange methods, electron-transfer methods, excited state methods, QM/MM, and new parallel algorithms that scale to more than 100,000 cores. The poster will focus on the fundamentals of these methods and the realities in terms of system size, computational requirements, and simulation times that are required for their application to complex biogeochemical systems.
A Simple Method for Simulating Wind Profiles in the Boundary Layer of Tropical Cyclones
NASA Astrophysics Data System (ADS)
Bryan, George H.; Worsnop, Rochelle P.; Lundquist, Julie K.; Zhang, Jun A.
2017-03-01
A method to simulate characteristics of wind speed in the boundary layer of tropical cyclones in an idealized manner is developed and evaluated. The method can be used in a single-column modelling set-up with a planetary boundary-layer parametrization, or within large-eddy simulations (LES). The key step is to include terms in the horizontal velocity equations representing advection and centrifugal acceleration in tropical cyclones that occurs on scales larger than the domain size. Compared to other recently developed methods, which require two input parameters (a reference wind speed, and radius from the centre of a tropical cyclone) this new method also requires a third input parameter: the radial gradient of reference wind speed. With the new method, simulated wind profiles are similar to composite profiles from dropsonde observations; in contrast, a classic Ekman-type method tends to overpredict inflow-layer depth and magnitude, and two recently developed methods for tropical cyclone environments tend to overpredict near-surface wind speed. When used in LES, the new technique produces vertical profiles of total turbulent stress and estimated eddy viscosity that are similar to values determined from low-level aircraft flights in tropical cyclones. Temporal spectra from LES produce an inertial subrange for frequencies ≳ 0.1 Hz, but only when the horizontal grid spacing ≲ 20 m.
Dynamical Modeling of NGC 6397: Simulated HST Imaging
NASA Astrophysics Data System (ADS)
Dull, J. D.; Cohn, H. N.; Lugger, P. M.; Slavin, S. D.; Murphy, B. W.
1994-12-01
The proximity of NGC 6397 (2.2 kpc) provides an ideal opportunity to test current dynamical models for globular clusters with the HST Wide-Field/Planetary Camera (WFPC2). We have used a Monte Carlo algorithm to generate ensembles of simulated Planetary Camera (PC) U-band images of NGC 6397 from evolving, multi-mass Fokker-Planck models. These images, which are based on the post-repair HST-PC point-spread function, are used to develop and test analysis methods for recovering structural information from actual HST imaging. We have considered a range of exposure times up to 2.4×10^4 s, based on our proposed HST Cycle 5 observations. Our Fokker-Planck models include energy input from dynamically formed binaries. We have adopted a 20-group mass spectrum extending from 0.16 to 1.4 M⊙. We use theoretical luminosity functions for red giants and main sequence stars. Horizontal branch stars, blue stragglers, white dwarfs, and cataclysmic variables are also included. Simulated images are generated for cluster models at both maximal core collapse and at a post-collapse bounce. We are carrying out stellar photometry on these images using "DAOPHOT-assisted aperture photometry" software that we have developed. We are testing several techniques for analyzing the resulting star counts, to determine the underlying cluster structure, including parametric model fits and nonparametric density estimation methods. Our simulated images also allow us to investigate the accuracy and completeness of methods for carrying out stellar photometry in HST Planetary Camera images of dense cluster cores.
NASA Astrophysics Data System (ADS)
Thoma, Jean Ulrich
The fundamental principles and applications of the bond graph method, in which a system is represented on paper by letter elements and their interconnections (bonds), are presented in an introduction for engineering students. Chapters are devoted to simulation and graphical system models; bond graphs as networks for power and signal exchange; the simulation and design of mechanical engineering systems; the simulation of fluid power systems and hydrostatic devices; electrical circuits, drives, and components; practical procedures and problems of bond-graph-based numerical simulation; and applications to thermodynamics, chemistry, and biology. Also included are worked examples of applications to robotics, shocks and collisions, ac circuits, hydraulics, and a hydropneumatic fatigue-testing machine.
Multi-disciplinary coupling effects for integrated design of propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Singhal, S. N.
1993-01-01
Effective computational simulation procedures are described for modeling the inherent multi-disciplinary interactions which govern the accurate response of propulsion systems. Results are presented for propulsion system responses including multi-disciplinary coupling effects using coupled multi-discipline thermal, structural, and acoustic tailoring; an integrated system of multi-disciplinary simulators; coupled material behavior/fabrication process tailoring; sensitivities using a probabilistic simulator; and coupled materials, structures, fracture, and probabilistic behavior simulator. The results demonstrate that superior designs can be achieved if the analysis/tailoring methods account for the multi-disciplinary coupling effects. The coupling across disciplines can be used to develop an integrated coupled multi-discipline numerical propulsion system simulator.
Multi-disciplinary coupling for integrated design of propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Singhal, S. N.
1993-01-01
Effective computational simulation procedures are described for modeling the inherent multi-disciplinary interactions for determining the true response of propulsion systems. Results are presented for propulsion system responses including multi-discipline coupling effects via (1) coupled multi-discipline tailoring, (2) an integrated system of multidisciplinary simulators, (3) coupled material-behavior/fabrication-process tailoring, (4) sensitivities using a probabilistic simulator, and (5) coupled materials/structures/fracture/probabilistic behavior simulator. The results show that the best designs can be determined if the analysis/tailoring methods account for the multi-disciplinary coupling effects. The coupling across disciplines can be used to develop an integrated interactive multi-discipline numerical propulsion system simulator.
Modeling and Simulation for Mission Operations Work System Design
NASA Technical Reports Server (NTRS)
Sierhuis, Maarten; Clancey, William J.; Seah, Chin; Trimble, Jay P.; Sims, Michael H.
2003-01-01
Work System analysis and design is complex and non-deterministic. In this paper we describe Brahms, a multiagent modeling and simulation environment for designing complex interactions in human-machine systems. Brahms was originally conceived as a business process design tool that simulates work practices, including social systems of work. We describe our modeling and simulation method for mission operations work systems design, based on a research case study in which we used Brahms to design mission operations for a proposed discovery mission to the Moon. We then describe the results of an actual method application project: the Brahms Mars Exploration Rover. Space mission operations are similar to operations of traditional organizations; we show that the application of Brahms for space mission operations design is relevant and transferable to other types of business processes in organizations.
Composite Load Spectra for Select Space Propulsion Structural Components
NASA Technical Reports Server (NTRS)
Ho, Hing W.; Newell, James F.
1994-01-01
Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. These generic (coupled) models combine deterministic models for the simulation of composite loads (dynamic, acoustic, high-pressure, high rotational speed, etc.) using statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods, with and without strategically selected experimental data. The entire simulation process is included in a CLS computer code. Applications of the computer code to various components in conjunction with the PSAM (Probabilistic Structural Analysis Method) to perform probabilistic load evaluation and life prediction evaluations are also described to illustrate the effectiveness of the coupled model approach.
A Semi-implicit Method for Resolution of Acoustic Waves in Low Mach Number Flows
NASA Astrophysics Data System (ADS)
Wall, Clifton; Pierce, Charles D.; Moin, Parviz
2002-09-01
A semi-implicit numerical method for time accurate simulation of compressible flow is presented. By extending the low Mach number pressure correction method, a Helmholtz equation for pressure is obtained in the case of compressible flow. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity, resulting in significant efficiency gains. Use of a discretization that is centered in both time and space results in zero artificial damping of acoustic waves. The method is attractive for problems in which Mach numbers are low, and the acoustic waves of most interest are those having low frequency, such as acoustic combustion instabilities. Both of these characteristics suggest the use of time steps larger than those allowable by an acoustic CFL limitation. In some cases it may be desirable to include a small amount of numerical dissipation to eliminate oscillations due to small-wavelength, high-frequency, acoustic modes, which are not of interest; therefore, a provision for doing this in a controlled manner is included in the method. Results of the method for several model problems are presented, and the performance of the method in a large eddy simulation is examined.
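The structural point of the abstract can be written compactly. A hedged sketch: treating the acoustic terms implicitly turns the low Mach number Poisson equation for pressure into a (screened) Helmholtz equation, removing the acoustic CFL limit. The exact coefficients and discretization are those of the paper and are not reproduced here.

```latex
\[
\nabla^{2} p^{\,n+1} \;-\; \frac{1}{c^{2}\,\Delta t^{2}}\, p^{\,n+1}
\;=\; \mathrm{RHS}\big(\rho^{\,n}, \mathbf{u}^{\,n}, p^{\,n}\big).
\]
```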
Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu
2018-05-01
In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10,000 simulations with a calculation step length of 0.02 led to a satisfactory design space. In general, the overlapping method is easy to understand and can be carried out with several kinds of commercial software without writing programs, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is computationally more involved, but it quantifies the probability that the process indexes reach the standard, so that operation within the design space meets the acceptable probability threshold. In addition, the probability-based method produces no abrupt change in probability at the edge of the design space. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
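A hedged sketch of the probability-based design-space calculation described above: on a grid of process parameters, add simulated experimental error to a model prediction and estimate the probability of meeting the specification. The process model, error level, and limits below are hypothetical placeholders; only the simulation count (10,000) and step length (0.02) come from the abstract.

```python
# Probability-based design space sketch with a hypothetical process model.
import numpy as np
rng = np.random.default_rng(0)

def extract_yield(time_h, ratio):             # hypothetical process model
    return 60 + 8 * time_h + 5 * ratio - 1.5 * time_h * ratio

n_sim, step = 10_000, 0.02                    # values reported in the abstract
spec, p_accept = 75.0, 0.90                   # assumed spec and threshold
times = np.arange(1.0, 2.0 + step, step)
ratios = np.arange(8.0, 10.0 + step, step)
prob = np.empty((times.size, ratios.size))
for i, t in enumerate(times):
    for j, r in enumerate(ratios):
        noisy = extract_yield(t, r) + rng.normal(0.0, 2.0, n_sim)  # assumed error sd
        prob[i, j] = (noisy >= spec).mean()   # probability of reaching standard
design_space = prob >= p_accept               # region passing the threshold
```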
Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.
Bishara, Anthony J; Li, Jiexiang; Nash, Thomas
2018-02-01
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the (Vale & Maurelli, 1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
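For context, this is the default Fisher z' interval that the paper's methods adjust; a minimal sketch. The skewness/kurtosis variance adjustment itself follows the paper and is not reproduced here.

```python
# Default Fisher z' confidence interval for a Pearson correlation.
import numpy as np
from scipy import stats

def fisher_ci(r, n, conf=0.95):
    z = np.arctanh(r)                         # Fisher z' transform
    se = 1.0 / np.sqrt(n - 3)                 # valid under bivariate normality
    half = stats.norm.ppf(0.5 + conf / 2) * se
    return np.tanh(z - half), np.tanh(z + half)

print(fisher_ci(0.45, 100))                   # approx. (0.278, 0.594)
```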
Ovchinnikov, Victor; Nam, Kwangho; Karplus, Martin
2016-08-25
A method is developed to obtain simultaneously free energy profiles and diffusion constants from restrained molecular simulations in diffusive systems. The method is based on low-order expansions of the free energy and diffusivity as functions of the reaction coordinate. These expansions lead to simple analytical relationships between simulation statistics and model parameters. The method is tested on 1D and 2D model systems; its accuracy is found to be comparable to or better than that of the existing alternatives, which are briefly discussed. An important aspect of the method is that the free energy is constructed by integrating its derivatives, which can be computed without need for overlapping sampling windows. The implementation of the method in any molecular simulation program that supports external umbrella potentials (e.g., CHARMM) requires modification of only a few lines of code. As a demonstration of its applicability to realistic biomolecular systems, the method is applied to model the α-helix ↔ β-sheet transition in a 16-residue peptide in implicit solvent, with the reaction coordinate provided by the string method. Possible modifications of the method are briefly discussed; they include generalization to multidimensional reaction coordinates [in the spirit of the model of Ermak and McCammon (Ermak, D. L.; McCammon, J. A. J. Chem. Phys. 1978, 69, 1352-1360)], a higher-order expansion of the free energy surface, applicability in nonequilibrium systems, and a simple test for Markovianity. In view of the small overhead of the method relative to standard umbrella sampling, we suggest its routine application in the cases where umbrella potential simulations are appropriate.
Practical Entanglement Estimation for Spin-System Quantum Simulators.
Marty, O; Cramer, M; Plenio, M B
2016-03-11
We present practical methods to measure entanglement for quantum simulators that can be realized with trapped ions, cold atoms, and superconducting qubits. Focusing on long- and short-range Ising-type Hamiltonians, we introduce schemes that are applicable under realistic experimental conditions including mixedness due to, e.g., noise or temperature. In particular, we identify a single observable whose expectation value serves as a lower bound to entanglement and that may be obtained by a simple quantum circuit. As such circuits are not (yet) available for every platform, we investigate the performance of routinely measured observables as quantitative entanglement witnesses. Possible applications include experimental studies of entanglement scaling in critical systems and the reliable benchmarking of quantum simulators.
Brownian dynamics simulation of rigid particles of arbitrary shape in external fields.
Fernandes, Miguel X; de la Torre, José García
2002-12-01
We have developed a Brownian dynamics simulation algorithm to generate Brownian trajectories of an isolated, rigid particle of arbitrary shape in the presence of electric fields or any other external agents. Starting from the generalized diffusion tensor, which can be calculated with the existing HYDRO software, the new program BROWNRIG (including a case-specific subprogram for the external agent) carries out a simulation that is analyzed later to extract the observable dynamic properties. We provide a variety of examples of utilization of this method, which serve as tests of its performance, and also illustrate its applicability. Examples include free diffusion, transport in an electric field, and diffusion in a restricting environment.
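A reduced sketch of the kind of step such a simulation takes: a plain Euler (Ermak-McCammon) Brownian dynamics update with a diagonal translational diffusion tensor and a uniform external force. BROWNRIG itself uses the full rigid-body 6x6 tensor from HYDRO, including rotation and translation-rotation coupling; all numbers here are placeholders.

```python
# Euler (Ermak-McCammon) Brownian dynamics step: drift D*F/kT*dt plus a random
# displacement with covariance 2*D*dt.
import numpy as np
rng = np.random.default_rng(0)

kT = 4.11e-21                                 # J, room temperature
D = np.diag([1e-11, 1e-11, 1e-11])            # m^2/s, translational diffusion
F = np.array([1e-14, 0.0, 0.0])               # N, e.g. an electric-field force

dt, r = 1e-9, np.zeros(3)
L = np.linalg.cholesky(2.0 * D * dt)          # sets random-step covariance
for _ in range(10_000):
    r = r + (D @ F) / kT * dt + L @ rng.standard_normal(3)
print(r)                                      # drift along x plus diffusion
```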
Daniel, Colin J.; Sleeter, Benjamin M.; Frid, Leonardo; Fortin, Marie-Josée
2018-01-01
State-and-transition simulation models (STSMs) provide a general framework for forecasting landscape dynamics, including projections of both vegetation and land-use/land-cover (LULC) change. The STSM method divides a landscape into spatially-referenced cells and then simulates the state of each cell forward in time, as a discrete-time stochastic process using a Monte Carlo approach, in response to any number of possible transitions. A current limitation of the STSM method, however, is that all of the state variables must be discrete. Here we present a new approach for extending a STSM, in order to account for continuous state variables, called a state-and-transition simulation model with stocks and flows (STSM-SF). The STSM-SF method allows for any number of continuous stocks to be defined for every spatial cell in the STSM, along with a suite of continuous flows specifying the rates at which stock levels change over time. The change in the level of each stock is then simulated forward in time, for each spatial cell, as a discrete-time stochastic process. The method differs from the traditional systems dynamics approach to stock-flow modelling in that the stocks and flows can be spatially-explicit, and the flows can be expressed as a function of the STSM states and transitions. We demonstrate the STSM-SF method by integrating a spatially-explicit carbon (C) budget model with a STSM of LULC change for the state of Hawai'i, USA. In this example, continuous stocks are pools of terrestrial C, while the flows are the possible fluxes of C between these pools. Importantly, several of these C fluxes are triggered by corresponding LULC transitions in the STSM. Model outputs include changes in the spatial and temporal distribution of C pools and fluxes across the landscape in response to projected future changes in LULC over the next 50 years. The new STSM-SF method allows both discrete and continuous state variables to be integrated into a STSM, including interactions between them. With the addition of stocks and flows, STSMs provide a conceptually simple yet powerful approach for characterizing uncertainties in projections of a wide range of questions regarding landscape change.
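A toy sketch of the STSM-SF idea as described above: each cell carries a discrete LULC state plus continuous carbon stocks, and a stochastic transition triggers a flux between pools. States, rates, and fluxes are invented purely for illustration; they are not the Hawai'i model's values.

```python
# Toy state-and-transition simulation with stocks and flows.
import numpy as np
rng = np.random.default_rng(0)

FOREST, AG = 0, 1
n_cells, years, p_clear = 1000, 50, 0.01
state = np.full(n_cells, FOREST)
biomass_c = np.full(n_cells, 120.0)           # tC/ha, living biomass pool
soil_c = np.full(n_cells, 80.0)               # tC/ha, soil pool

for _ in range(years):
    clear = (state == FOREST) & (rng.random(n_cells) < p_clear)
    state[clear] = AG                         # discrete state transition...
    soil_c[clear] += 0.2 * biomass_c[clear]   # ...triggers a stock flux
    biomass_c[clear] = 5.0                    # crops hold little biomass C
    biomass_c[state == FOREST] += 2.0         # growth flow on forest cells
    soil_c[state == AG] *= 0.99               # slow soil C loss under agriculture
print(biomass_c.mean(), soil_c.mean())
```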
Calculation of out-of-field dose distribution in carbon-ion radiotherapy by Monte Carlo simulation.
Yonai, Shunsuke; Matsufuji, Naruhiro; Namba, Masao
2012-08-01
Recent radiotherapy technologies including carbon-ion radiotherapy can improve the dose concentration in the target volume, thereby not only reducing side effects in organs at risk but also the secondary cancer risk within or near the irradiation field. However, secondary cancer risk in the low-dose region is considered to be non-negligible, especially for younger patients. To achieve a dose estimation of the whole body of each patient receiving carbon-ion radiotherapy, which is essential for risk assessment and epidemiological studies, Monte Carlo simulation plays an important role because the treatment planning system can provide dose distribution only in/near the irradiation field and the measured data are limited. However, validation of Monte Carlo simulations is necessary. The primary purpose of this study was to establish a calculation method using the Monte Carlo code to estimate the dose and quality factor in the body and to validate the proposed method by comparison with experimental data. Furthermore, we show the distributions of dose equivalent in a phantom and identify the partial contribution of each radiation type. We proposed a calculation method based on a Monte Carlo simulation using the PHITS code to estimate absorbed dose, dose equivalent, and dose-averaged quality factor by using the Q(L)-L relationship based on the ICRP 60 recommendation. The values obtained by this method in modeling the passive beam line at the Heavy-Ion Medical Accelerator in Chiba were compared with our previously measured data. It was shown that our calculation model can estimate the measured value within a factor of 2, which included not only the uncertainty of this calculation method but also those regarding the assumptions of the geometrical modeling and the PHITS code. Also, we showed the differences in the doses and the partial contributions of each radiation type between passive and active carbon-ion beams using this calculation method. These results indicated that it is essentially important to include the dose by secondary neutrons in the assessment of the secondary cancer risk of patients receiving carbon-ion radiotherapy with active as well as passive beams. We established a calculation method with a Monte Carlo simulation to estimate the distribution of dose equivalent in the body as a first step toward routine risk assessment and an epidemiological study of carbon-ion radiotherapy at NIRS. This method has the advantage of being verifiable by the measurement.
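For reference, the standard definitions behind the Q(L)-L approach: the dose equivalent H is the LET spectrum of absorbed dose weighted by the ICRP 60 quality factor.

```latex
\[
H = \int Q(L)\,\frac{\mathrm{d}D}{\mathrm{d}L}\,\mathrm{d}L,
\qquad
Q(L) = \begin{cases}
1 & L < 10\\
0.32\,L - 2.2 & 10 \le L \le 100\\
300/\sqrt{L} & L > 100
\end{cases}
\quad (L \text{ in keV}/\mu\mathrm{m}).
\]
```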
Educational aspects of molecular simulation
NASA Astrophysics Data System (ADS)
Allen, Michael P.
This article addresses some aspects of teaching simulation methods to undergraduates and graduate students. Simulation is increasingly a cross-disciplinary activity, which means that the students who need to learn about simulation methods may have widely differing backgrounds. Also, they may have a wide range of views on what constitutes an interesting application of simulation methods. Almost always, a successful simulation course includes an element of practical, hands-on activity: a balance always needs to be struck between treating the simulation software as a 'black box', and becoming bogged down in programming issues. With notebook computers becoming widely available, students often wish to take away the programs to run themselves, and access to raw computer power is not the limiting factor that it once was; on the other hand, the software should be portable and, if possible, free. Examples will be drawn from the author's experience in three different contexts. (1) An annual simulation summer school for graduate students, run by the UK CCP5 organization, in which practical sessions are combined with an intensive programme of lectures describing the methodology. (2) A molecular modelling module, given as part of a doctoral training centre in the Life Sciences at Warwick, for students who might not have a first degree in the physical sciences. (3) An undergraduate module in Physics at Warwick, also taken by students from other disciplines, teaching high performance computing, visualization, and scripting in the context of a physical application such as Monte Carlo simulation.
Status of simulation in health care education: an international survey.
Qayumi, Karim; Pachev, George; Zheng, Bin; Ziv, Amitai; Koval, Valentyna; Badiei, Sadia; Cheng, Adam
2014-01-01
Simulation is rapidly penetrating the terrain of health care education and has gained growing acceptance as an educational method and patient safety tool. Despite this, the state of simulation in health care education has not yet been evaluated on a global scale. In this project, we studied the global status of simulation in health care education by determining the degree of financial support, infrastructure, manpower, information technology capabilities, engagement of groups of learners, and research and scholarly activities, as well as the barriers, strengths, opportunities for growth, and other aspects of simulation in health care education. We utilized a two-stage process, including an online survey and a site visit that included interviews and debriefings. Forty-two simulation centers worldwide participated in this study, the results of which show that despite enormous interest and enthusiasm in the health care community, use of simulation in health care education is limited to specific areas and is not a budgeted item in many institutions. Absence of a sustainable business model, as well as sufficient financial support in terms of budget, infrastructure, manpower, research, and scholarly activities, slows down the movement of simulation. Specific recommendations are made based on current findings to support simulation in the next developmental stages.
NASA Technical Reports Server (NTRS)
Deacetis, Louis A.
1991-01-01
The need to reduce the costs of Space Station Freedom has resulted in a major redesign and downsizing of the Station in general, and its Communications and Tracking (C&T) components in particular. Earlier models and simulations of the C&T Space-to-Ground Subsystem (SGS) in particular are no longer valid. There thus exists a general need for updated, high fidelity simulations of C&T subsystems. This project explored simulation techniques and methods that might be used in developing new simulations of C&T subsystems, including the SGS. Three requirements were placed on the simulations to be developed: (1) they run on IBM PC/XT/AT compatible computers; (2) they be written in Ada as much as possible; and (3) since control and monitoring of the C&T subsystems will involve communication via a MIL-STD-1553B serial bus, that the possibility of commanding the simulator and monitoring its sensors via that bus be included in the design of the simulator. The result of the project is a prototype of a simulation of the Assembly/Contingency Transponder of the SGS, written in Ada, which can be controlled from another PC via a MIL-STD-1553B bus.
Treeby, Bradley E; Jaros, Jiri; Rendell, Alistair P; Cox, B T
2012-06-01
The simulation of nonlinear ultrasound propagation through tissue-realistic media has a wide range of practical applications. However, this is a computationally difficult problem due to the large size of the computational domain compared to the acoustic wavelength. Here, the k-space pseudospectral method is used to reduce the number of grid points required per wavelength for accurate simulations. The model is based on coupled first-order acoustic equations valid for nonlinear wave propagation in heterogeneous media with power law absorption. These are derived from the equations of fluid mechanics and include a pressure-density relation that incorporates the effects of nonlinearity, power law absorption, and medium heterogeneities. The additional terms accounting for convective nonlinearity and power law absorption are expressed as spatial gradients, making them efficient to numerically encode. The governing equations are then discretized using a k-space pseudospectral technique in which the spatial gradients are computed using the Fourier-collocation method. This increases the accuracy of the gradient calculation and thus relaxes the requirement for dense computational grids compared to conventional finite difference methods. The accuracy and utility of the developed model is demonstrated via several numerical experiments, including the 3D simulation of the beam pattern from a clinical ultrasound probe.
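A sketch of the k-space pseudospectral idea for a single spatial derivative: a Fourier-collocation gradient with a k-space correction factor of the form sinc(c_ref·k·Δt/2), as used in k-Wave-style solvers. 1-D for brevity; a generic illustration, not the paper's full solver.

```python
# Fourier-collocation gradient with a k-space correction factor.
import numpy as np

def kspace_gradient(p, dx, dt, c_ref):
    n = p.size
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)          # angular wavenumbers
    kappa = np.sinc(c_ref * kx * dt / (2 * np.pi))    # np.sinc(x) = sin(pi x)/(pi x)
    return np.real(np.fft.ifft(1j * kx * kappa * np.fft.fft(p)))

# Example: derivative of a Gaussian pulse on a coarse grid.
x = np.arange(128) * 1e-4                             # 0.1 mm spacing
p = np.exp(-((x - 6.4e-3) / 1e-3) ** 2)
dpdx = kspace_gradient(p, dx=1e-4, dt=2e-8, c_ref=1500.0)
```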
ERIC Educational Resources Information Center
Estes, Charles R.
1994-01-01
Discusses theoretical versus applied science and the use of the scientific method for analysis of social issues. Topics addressed include the use of simulation and modeling; the growth in computer power, including nanotechnology; distributed computing; self-evolving programs; spiritual matters; human engineering, i.e., molding individuals;…
Numerical Simulations Of Flagellated Micro-Swimmers
NASA Astrophysics Data System (ADS)
Rorai, Cecilia; Markesteijn, Anton; Zaitstev, Mihail; Karabasov, Sergey
2017-11-01
We study flagellated microswimmer locomotion by representing the entire swimmer body. We discuss and contrast the accuracy and computational cost of different numerical approaches, including the Resistive Force Theory, the Regularized Stokeslet Method, and the Finite Element Method. We focus on how the accuracy of the methods in reproducing the swimming trajectories, velocities, and flow field compares to the sensitivity of these quantities to certain physical parameters, such as the body shape and the location of the center of mass. We discuss the opportunity and physical relevance of retaining inertia in our models. Finally, we present some preliminary results toward collective motion simulations. Marie Skłodowska-Curie Individual Fellowship.
NASA Astrophysics Data System (ADS)
John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.
2016-04-01
We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Jingfeng; Yuan, Chengxun, E-mail: yuancx@hit.edu.cn, E-mail: zhouzx@hit.edu.cn; Gao, Ruilin
2016-08-15
This study focuses on the transmission of normal-incidence electromagnetic waves in one-dimensional plasma photonic crystals. Using Maxwell's equations in a medium, a method based on the concept of impedance is employed to perform the simulation. The accuracy of the method was evaluated by simulating a one-layer plasma and a conventional photonic crystal. In the frequency domain, the transmission and reflection coefficients in the unmagnetized plasma photonic crystal were calculated, and the factors influencing plasma photonic crystals were studied, including the dielectric constant of the dielectric, the spatial period, the filling factor, the plasma frequency, and the collision frequency.
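A hedged sketch of normal-incidence transmission through a 1-D plasma photonic crystal via the standard characteristic-matrix (impedance) formalism; the paper's impedance recursion is related but not reproduced here. Collisional unmagnetized plasma layers enter through the refractive index n = sqrt(1 - wp²/(w(w + iν))) (e^{-iωt} convention), and all layer parameters below are placeholders.

```python
# Characteristic (transfer) matrix transmission for a 1-D multilayer stack.
import numpy as np

def transmission(freqs, layers):
    """layers: list of (n_of_omega, thickness). Vacuum on both sides."""
    c = 3e8
    out = np.empty(freqs.size)
    for idx, w in enumerate(2 * np.pi * freqs):
        M = np.eye(2, dtype=complex)
        for n_fn, d in layers:
            n = n_fn(w)
            delta = n * w * d / c                    # complex phase thickness
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        t = 2.0 / (M[0, 0] + M[0, 1] + M[1, 0] + M[1, 1])
        out[idx] = abs(t) ** 2                       # power transmission
    return out

def plasma_n(wp, nu):                                # cold unmagnetized plasma
    return lambda w: np.sqrt(1 - wp**2 / (w * (w + 1j * nu)))

freqs = np.linspace(5e9, 40e9, 400)
cell = [(lambda w: 3.4 + 0j, 2e-3), (plasma_n(2 * np.pi * 10e9, 1e9), 4e-3)]
print(transmission(freqs, 4 * cell)[:5])             # 4-period crystal
```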
Effect of simulation on the ability of first year nursing students to learn vital signs.
Eyikara, Evrim; Baykara, Zehra Göçmen
2018-01-01
The acquisition of cognitive, affective, and psychomotor knowledge and skills is required in nursing, made possible via an interactive teaching method such as simulation. This study was conducted to identify the impact of simulation on first-year nursing students' ability to learn vital signs. A convenience sample of 90 first-year nursing students enrolled at a university in Ankara in 2014-2015 was used. Ninety students enrolled in lessons on the "Fundamentals of Nursing" were identified using a simple random sampling method. The students were taught vital signs theory via traditional methods. They were divided into experimental group 1, experimental group 2, and a control group of 30 students each. Students in experimental group 1 attended sessions on simulation, and those in experimental group 2 attended sessions on laboratory work followed by simulation. The control group was taught via traditional methods and only attended the laboratory work sessions. The students' cognitive knowledge acquisition was evaluated using a knowledge test before and after the lessons. The ability to measure vital signs in adults (healthy ones and patients) was evaluated using a skill control list. A statistically significant difference was not observed between the groups in terms of the average pre-test scores on knowledge (p>0.050). Groups exposed to simulation obtained statistically significantly higher scores than the control group in post-test knowledge (p<0.050). The groups exposed to simulation were also statistically significantly more successful than the control group at measuring vital signs in healthy adults and patients (p<0.050). Simulation had a positive effect on the ability of nursing students to measure vital signs. Thus, simulation should be included in the mainstream curriculum in order to effectively impart nursing knowledge and skills. Copyright © 2017 Elsevier Ltd. All rights reserved.
Xiao, Li; Luo, Ray
2017-12-07
We explored a multi-scale algorithm for the Poisson-Boltzmann continuum solvent model for more robust simulations of biomolecules. In this method, the continuum solvent/solute interface is explicitly simulated with a numerical fluid dynamics procedure, which is tightly coupled to the solute molecular dynamics simulation. There are multiple benefits to adopting such a strategy, as presented below. At this stage of the development, only nonelectrostatic interactions, i.e., van der Waals and hydrophobic interactions, are included in the algorithm to assess the quality of the solvent-solute interface generated by the new method. Nevertheless, numerical challenges exist in accurately interpolating the highly nonlinear van der Waals term when solving the finite-difference fluid dynamics equations. We were able to bypass the challenge rigorously by merging the van der Waals potential and pressure together when solving the fluid dynamics equations and by considering its contribution in the free-boundary condition analytically. The multi-scale simulation method was first validated by reproducing the solute-solvent interface of a single atom with an analytical solution. Next, we performed the relaxation simulation of a restrained symmetrical monomer and observed a symmetrical solvent interface at equilibrium with detailed surface features resembling those found on the solvent excluded surface. Four typical small molecular complexes were then tested, with both volume and force balancing analyses showing that these simple complexes can reach equilibrium within the simulation time window. Finally, we studied the quality of the multi-scale solute-solvent interfaces for the four tested dimer complexes and found that they agree well with the boundaries as sampled in the explicit water simulations.
Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment.
O'Brien, Katie M; Upson, Kristen; Cook, Nancy R; Weinberg, Clarice R
2016-02-01
Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. We compared adjustment methods, including novel approaches, using simulated case-control data. Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and possessed 95% confidence interval coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals.
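The covariate-adjusted standardization described above is straightforward to prototype. The sketch below is a toy illustration of the idea, not the authors' code: all variable names, effect sizes, and the linear (rather than case-control logistic) outcome model are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical covariates that drive urinary creatinine (age, hydration).
age = rng.normal(50, 10, n)
hydration = rng.normal(0, 1, n)
log_creat = 0.02 * age - 0.30 * hydration + rng.normal(0, 0.2, n)
creat = np.exp(log_creat)

# True exposure and a dilution-dependent measured urine concentration.
true_exposure = rng.lognormal(0, 0.5, n)
measured = true_exposure * creat * np.exp(rng.normal(0, 0.1, n))

# Step 1: covariate-adjusted standardization -- predict creatinine from
# its determinants and rescale each measurement by observed/predicted.
X = np.column_stack([np.ones(n), age, hydration])
beta, *_ = np.linalg.lstsq(X, log_creat, rcond=None)
creat_pred = np.exp(X @ beta)
standardized = measured / (creat / creat_pred)

# Step 2: also include creatinine (and its determinants) as covariates in
# the outcome regression; a continuous outcome stands in for case status.
outcome = 0.5 * np.log(true_exposure) + rng.normal(0, 1, n)
Z = np.column_stack([np.ones(n), np.log(standardized), age, hydration, log_creat])
coef, *_ = np.linalg.lstsq(Z, outcome, rcond=None)
print("estimated exposure effect (true 0.5):", coef[1])
```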
Computational Methods for Structural Mechanics and Dynamics
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.
A Structural Modeling Approach to a Multilevel Random Coefficients Model.
ERIC Educational Resources Information Center
Rovine, Michael J.; Molenaar, Peter C. M.
2000-01-01
Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)
Using the BBC Microcomputer to Teach the Electrocardiogram to Biology Students.
ERIC Educational Resources Information Center
Dewhurst, D. G.; And Others
1990-01-01
Described are two methods which use microcomputers to illustrate the use of the electrocardiogram and the function of the heart. Included are a simulation and a method of collecting live electrocardiograms. Hardware, software, and the use of these systems are discussed. (CW)
A generalized vortex lattice method for subsonic and supersonic flow applications
NASA Technical Reports Server (NTRS)
Miranda, L. R.; Elliot, R. D.; Baker, W. M.
1977-01-01
If the discrete vortex lattice is considered as an approximation to the surface-distributed vorticity, then the concept of the generalized principal part of an integral yields a residual term to the vorticity-induced velocity field. The proper incorporation of this term into the velocity field generated by the discrete vortex lines renders the present vortex lattice method valid for supersonic flow. Special techniques for simulating nonzero thickness lifting surfaces and fusiform bodies with vortex lattice elements are included. Thickness effects of wing-like components are simulated by a double (biplanar) vortex lattice layer, and fusiform bodies are represented by a vortex grid arranged on a series of concentric cylindrical surfaces. The analysis of sideslip effects by the subject method is described. Numerical considerations peculiar to the application of these techniques are also discussed. The method has been implemented in a digital computer code. A user's manual is included along with a complete FORTRAN compilation, an executed case, and conversion programs for transforming input for the NASA wave drag program.
Thermal helium clusters at 3.2 Kelvin in classical and semiclassical simulations
NASA Astrophysics Data System (ADS)
Schulte, J.
1993-03-01
The thermodynamic stability of ⁴He₄₋₁₃ clusters at 3.2 K is investigated with the classical Monte Carlo method, with the semiclassical path-integral Monte Carlo (PIMC) method, and with the semiclassical all-order many-body method. In the all-order many-body simulation the dipole-dipole approximation including a short-range correction is used. The resulting stability plots are discussed and related to recent TOF experiments by Stephens and King. As expected, the classical Monte Carlo method cannot resolve the characteristics of the measured mass spectrum. With PIMC, switching on more and more quantum mechanics by raising the number of virtual time steps results in more structure in the stability plot, but this did not lead to sufficient agreement with the TOF experiment. Only the all-order many-body method resolved the characteristic structures of the measured mass spectrum, including magic numbers. The result shows the influence of quantum statistics and quantum mechanics on the stability of small neutral helium clusters.
COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. R. MARTIN; F. B. BROWN
2001-03-01
Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an ''exact'' method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.
Stochastic Time Models of Syllable Structure
Shaw, Jason A.; Gafos, Adamantios I.
2015-01-01
Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure proving resilient to multiple sources of variability in experimental data including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153
Garretson, Justin R [Albuquerque, NM; Parker, Eric P [Albuquerque, NM; Gladwell, T Scott [Albuquerque, NM; Rigdon, J Brian [Edgewood, NM; Oppel, III, Fred J.
2012-05-29
Apparatus and methods for modifying the operation of a robotic vehicle in a real environment to emulate the operation of the robotic vehicle in a mixed reality environment include a vehicle sensing system having a communications module attached to the robotic vehicle for communicating operating parameters related to the robotic vehicle in a real environment to a simulation controller for simulating the operation of the robotic vehicle in a mixed (live, virtual and constructive) environment, wherein the effects of virtual and constructive entities on the operation of the robotic vehicle (and vice versa) are simulated. These effects are communicated to the vehicle sensing system, which generates a modified control command for the robotic vehicle including the effects of virtual and constructive entities, causing the robot in the real environment to behave as if the virtual and constructive entities existed in the real environment.
Methodical aspects of text testing in a driving simulator.
Sundin, A; Patten, C J D; Bergmark, M; Hedberg, A; Iraeus, I-M; Pettersson, I
2012-01-01
A test with 30 test persons was conducted in a driving simulator. The test was a concept exploration and comparison of existing user interaction technologies for text message handling, with a focus on traffic safety and experience (technology familiarity and learning effects). Emphasis was placed on methodological aspects of how to measure and how to analyze the data. Results show difficulties with the eye tracking system itself (calibration, etc.) as well as with the subsequent raw data preparation. The physical setup in the car was found to be important for the test completion.
Structure identification methods for atomistic simulations of crystalline materials
Stukowski, Alexander
2012-05-28
Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis and Voronoi analysis. In addition we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.
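Of the techniques compared above, the centrosymmetry parameter is among the simplest to implement. The following is a minimal sketch assuming the common greedy pairing of nearest-neighbor vectors (the original formulation minimizes over all pairings); the FCC test data are illustrative.

```python
import numpy as np

def centrosymmetry(neighbor_vectors):
    """Centrosymmetry parameter from the N nearest-neighbor vectors of an
    atom (N = 12 for FCC). Near zero for a perfect centrosymmetric lattice,
    large near defects. Greedily pairs the most nearly opposite neighbors."""
    vecs = list(neighbor_vectors)
    csp = 0.0
    while len(vecs) > 1:
        v = vecs.pop(0)
        # Find the remaining neighbor that is most nearly opposite to v.
        sums = [np.sum((v + w) ** 2) for w in vecs]
        j = int(np.argmin(sums))
        csp += sums[j]
        vecs.pop(j)
    return csp

# Perfect FCC: the 12 neighbors form 6 exactly opposite pairs -> CSP ~ 0.
fcc = [np.array(p) for p in
       [(0.5, 0.5, 0), (-0.5, -0.5, 0), (0.5, -0.5, 0), (-0.5, 0.5, 0),
        (0.5, 0, 0.5), (-0.5, 0, -0.5), (0.5, 0, -0.5), (-0.5, 0, 0.5),
        (0, 0.5, 0.5), (0, -0.5, -0.5), (0, 0.5, -0.5), (0, -0.5, 0.5)]]
print(centrosymmetry(fcc))                       # ~0.0
print(centrosymmetry([v + 0.05 for v in fcc]))   # distorted -> > 0
```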
A particle finite element method for machining simulations
NASA Astrophysics Data System (ADS)
Sabel, Matthias; Sator, Christian; Müller, Ralf
2014-07-01
The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM-algorithm is given, which is followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested by a simple example. Also the kinematics and a suitable finite element formulation are introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results of process parameters, such as the cutting force.
Simulation of Physical Experiments in Immersive Virtual Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Wasfy, Tamer M.
2001-01-01
An object-oriented event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.
Simulating galactic dust grain evolution on a moving mesh
NASA Astrophysics Data System (ADS)
McKinnon, Ryan; Vogelsberger, Mark; Torrey, Paul; Marinacci, Federico; Kannan, Rahul
2018-05-01
Interstellar dust is an important component of the galactic ecosystem, playing a key role in multiple galaxy formation processes. We present a novel numerical framework for the dynamics and size evolution of dust grains implemented in the moving-mesh hydrodynamics code AREPO suited for cosmological galaxy formation simulations. We employ a particle-based method for dust subject to dynamical forces including drag and gravity. The drag force is implemented using a second-order semi-implicit integrator and validated using several dust-hydrodynamical test problems. Each dust particle has a grain size distribution, describing the local abundance of grains of different sizes. The grain size distribution is discretised with a second-order piecewise linear method and evolves in time according to various dust physical processes, including accretion, sputtering, shattering, and coagulation. We present a novel scheme for stochastically forming dust during stellar evolution and new methods for sub-cycling of dust physics time-steps. Using this model, we simulate an isolated disc galaxy to study the impact of dust physical processes that shape the interstellar grain size distribution. We demonstrate, for example, how dust shattering shifts the grain size distribution to smaller sizes resulting in a significant rise of radiation extinction from optical to near-ultraviolet wavelengths. Our framework for simulating dust and gas mixtures can readily be extended to account for other dynamical processes relevant in galaxy formation, like magnetohydrodynamics, radiation pressure, and thermo-chemical processes.
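The stiff dust-gas coupling is the reason a semi-implicit drag integrator is needed. As a minimal illustration (a first-order backward-Euler update, not the paper's second-order scheme), consider:

```python
def drag_update_semi_implicit(v_dust, v_gas, t_stop, dt):
    """Backward-Euler update for the drag equation dv/dt = -(v - v_gas)/t_stop.
    Treating the drag term implicitly keeps the update stable even when the
    stopping time t_stop is much smaller than the time-step dt."""
    return (v_dust + (dt / t_stop) * v_gas) / (1.0 + dt / t_stop)

# Stiff test: t_stop << dt. An explicit update would overshoot and oscillate;
# the implicit one relaxes monotonically toward the gas velocity.
v_dust, v_gas, t_stop, dt = 0.0, 1.0, 1e-3, 0.1
for _ in range(5):
    v_dust = drag_update_semi_implicit(v_dust, v_gas, t_stop, dt)
    print(v_dust)
```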
Aeroacoustic and Performance Simulations of a Test Scale Open Rotor
NASA Technical Reports Server (NTRS)
Claus, Russell W.
2013-01-01
This paper explores a comparison between experimental data and numerical simulations of the historical baseline F31/A31 open rotor geometry. The experimental data were obtained at the NASA Glenn Research Center's Aeroacoustic facility and include performance and noise information for a variety of flow speeds (matching take-off and cruise). The numerical simulations provide both performance and aeroacoustic results using NUMECA's Fine-Turbo analysis code. A non-linear harmonic method is used to capture the rotor/rotor interaction.
Simulation of wear in overhead current collection systems
NASA Astrophysics Data System (ADS)
Klapas, D.; Benson, F. A.; Hackam, R.
1985-09-01
Apparatus have been designed to simulate the wear from conductors in a railway current collection system. The main features of the wear machine include a continuous monitoring of the strip wear, strip traversing, and dwell-time test facilities for the investigation of oxidational wear on a copper disk, simulating the contact wire. Disk wear is measured in situ by the spherical indentations method. Typical results of the specific wear rate are also presented to demonstrate the capability of the apparatus.
Lattice Boltzmann simulations of heat transfer in fully developed periodic incompressible flows
NASA Astrophysics Data System (ADS)
Wang, Zimeng; Shang, Helen; Zhang, Junfeng
2017-06-01
Flow and heat transfer in periodic structures are of great interest for many applications. In this paper, we carefully examine the periodic features of fully developed periodic incompressible thermal flows, and incorporate them in the lattice Boltzmann method (LBM) for flow and heat transfer simulations. Two numerical approaches, the distribution modification (DM) approach and the source term (ST) approach, are proposed; and they can both be used for periodic thermal flows with constant wall temperature (CWT) and surface heat flux boundary conditions. However, the DM approach might be more efficient, especially for CWT systems, since the ST approach requires calculations of the streamwise temperature gradient at all lattice nodes. Several example simulations are conducted, including flows through flat and wavy channels and flows through a square array with circular cylinders. Results are compared to analytical solutions, previous studies, and our own LBM calculations using different simulation techniques (i.e., the one-module simulation vs. the two-module simulation, and the DM approach vs. the ST approach) with good agreement. These simple but representative simulations demonstrate the accuracy and usefulness of our proposed LBM methods for future thermal periodic flow simulations.
Simulation of HLNC and NCC measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ming-Shih; Teichmann, T.; De Ridder, P.
1994-03-01
This report discusses an automatic method of simulating the results of High Level Neutron Coincidence Counting (HLNC) and Neutron Collar Coincidence Counting (NCC) measurements to facilitate safeguards inspectors' understanding and use of these instruments under realistic conditions. This would otherwise be expensive and time-consuming, except at sites designed to handle radioactive materials and having the necessary variety of fuel elements and other samples. The simulation must thus include the behavior of the instruments for variably constituted and composed fuel elements (including poison rods and Gd loading), and must display the changes in the count rates as a function of these characteristics, as well as of various instrumental parameters. Such a simulation is an efficient way of accomplishing the required familiarization and training of the inspectors by providing a realistic reproduction of the results of such measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, Elise; Wolf, Rachel; Sako, Masao
2016-11-09
Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the 'Tripp' and 'Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of ~1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying Ωm, w0, α, β and a magnitude offset parameter, with no systematics we obtain Δ(w0) = w0(true) - w0(best fit) = -0.036 ± 0.109 (a ~11% 1σ uncertainty) using the Tripp metric and Δ(w0) = -0.055 ± 0.068 (a ~7% 1σ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain Δ(w0) = -0.062 ± 0.132 (a ~14% 1σ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on w0 with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
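The core ABC logic, stripped of the supernova-specific forward model and metrics, fits in a few lines. The sketch below is a toy rejection sampler; the forward model, distance metric, prior range, and tolerance are all stand-ins, not the superABC implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(w0, n=200):
    """Toy stand-in for a light-curve simulation: data whose distribution
    depends on the parameter of interest."""
    return rng.normal(loc=w0, scale=0.5, size=n)

def distance(sim, obs):
    """Toy metric comparing simulated and observed data sets (the paper
    uses the 'Tripp' and 'Light Curve' metrics instead)."""
    return abs(sim.mean() - obs.mean())

observed = forward_model(w0=-1.0)           # pretend these are the data
accepted, epsilon = [], 0.05
while len(accepted) < 500:
    w0_trial = rng.uniform(-2.0, 0.0)       # draw from the prior
    if distance(forward_model(w0_trial), observed) < epsilon:
        accepted.append(w0_trial)           # keep parameters whose
                                            # simulations resemble the data
post = np.array(accepted)
print(f"w0 ~ {post.mean():.3f} +/- {post.std():.3f}")
```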
2011-01-01
Background Molecular marker information is a common source to draw inferences about the relationship between genetic and phenotypic variation. Genetic effects are often modelled as additively acting marker allele effects. The true mode of biological action can, of course, be different from this plain assumption. One possibility to better understand the genetic architecture of complex traits is to include intra-locus (dominance) and inter-locus (epistasis) interaction of alleles as well as the additive genetic effects when fitting a model to a trait. Several Bayesian MCMC approaches exist for the genome-wide estimation of genetic effects with high accuracy of genetic value prediction. Including pairwise interaction for thousands of loci would probably go beyond the scope of such a sampling algorithm because then millions of effects are to be estimated simultaneously leading to months of computation time. Alternative solving strategies are required when epistasis is studied. Methods We extended a fast Bayesian method (fBayesB), which was previously proposed for a purely additive model, to include non-additive effects. The fBayesB approach was used to estimate genetic effects on the basis of simulated datasets. Different scenarios were simulated to study the loss of accuracy of prediction, if epistatic effects were not simulated but modelled and vice versa. Results If 23 QTL were simulated to cause additive and dominance effects, both fBayesB and a conventional MCMC sampler BayesB yielded similar results in terms of accuracy of genetic value prediction and bias of variance component estimation based on a model including additive and dominance effects. Applying fBayesB to data with epistasis, accuracy could be improved by 5% when all pairwise interactions were modelled as well. The accuracy decreased more than 20% if genetic variation was spread over 230 QTL. In this scenario, accuracy based on modelling only additive and dominance effects was generally superior to that of the complex model including epistatic effects. Conclusions This simulation study showed that the fBayesB approach is convenient for genetic value prediction. Jointly estimating additive and non-additive effects (especially dominance) has reasonable impact on the accuracy of prediction and the proportion of genetic variation assigned to the additive genetic source. PMID:21867519
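To make the additive/dominance distinction concrete, here is a toy sketch (not fBayesB, which is a Bayesian marker-effect sampler): it simulates genotypes with additive and dominance QTL effects and compares the prediction accuracy of an additive-only versus an additive-plus-dominance ridge model. All sizes and effect distributions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ind, n_snp, n_qtl = 500, 1000, 23

# Genotypes coded 0/1/2; the dominance indicator is 1 for heterozygotes.
G = rng.binomial(2, 0.5, size=(n_ind, n_snp)).astype(float)
D = (G == 1).astype(float)

qtl = rng.choice(n_snp, n_qtl, replace=False)
a = rng.normal(0, 1, n_qtl)           # additive QTL effects
d = rng.normal(0, 0.5, n_qtl)         # dominance QTL effects
g_true = G[:, qtl] @ a + D[:, qtl] @ d
y = g_true + rng.normal(0, g_true.std(), n_ind)   # heritability ~ 0.5

def ridge_accuracy(X, lam=100.0):
    """Fit ridge on the first 400 individuals, report accuracy (correlation
    of predicted with true genetic value) on the remaining 100."""
    train, test = slice(0, 400), slice(400, None)
    XtX = X[train].T @ X[train] + lam * np.eye(X.shape[1])
    beta = np.linalg.solve(XtX, X[train].T @ y[train])
    return np.corrcoef(X[test] @ beta, g_true[test])[0, 1]

print("additive only       :", ridge_accuracy(G))
print("additive + dominance:", ridge_accuracy(np.hstack([G, D])))
```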
Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando
2017-01-01
Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialities. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16), (2) training facility design using Distributed Simulation, (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally. This corresponded with the qualitative component, which augmented the quantitative findings; trainees' user feedback was used to perform iterative refinements to the simulation design and components (collaborative design), resulting in higher mean evaluation scores leading up to the implementation phase. Through application of innovative Distributed Simulation techniques, collaborative design, and consistent evaluation techniques from conceptual, development, and implementation stages, fully immersive simulation techniques for cardiovascular specialities are achievable and have the potential to be implemented more broadly.
SeaWiFS technical report series. Volume 15: The simulated SeaWiFS data set, version 2
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Gregg, Watson W.; Patt, Frederick S.; Woodward, Robert H.
1994-01-01
This document describes the second version of the simulated SeaWiFS data set. A realistic simulated data set is essential for mission readiness preparations and can potentially assist in all phases of ground support for a future mission. The second version improves on the first version primarily through additional realism and complexity. This version incorporates a representation of virtually every aspect of the flight mission. Thus, it provides a high-fidelity data set for testing several aspects of the ground system, including data acquisition, data processing, data transfers, calibration and validation, quality control, and mission operations. The data set is constructed for a seven-day period, 25-31 March 1994. Specific features of the data set include Global Area Coverage (GAC), recorded Local Area Coverage (LAC), and realtime High Resolution Picture Transmission (HRPT) data for the seven-day period. A realistic orbit, which is propagated using a Brouwer-Lyddane model with drag, is used to simulate orbit positions. The simulated data corresponds to the command schedule based on the orbit for this seven-day period. It includes total (at-satellite) radiances not only for ocean, but also for land, clouds, and ice. The simulation also utilizes a high-resolution land-sea mask. It includes the April 1993 SeaWiFS spectral responses and sensor saturation responses. The simulation is formatted according to July 1993 onboard data structures, which include corresponding telemetry (instrument and spacecraft) data. The methods are described and some examples of the output are given. The instrument response functions made available in April 1993 have been used to produce the Version 2 simulated data. These response functions will change as part of the sensor improvements initiated in July-August 1993.
NASA Astrophysics Data System (ADS)
Akai, Hisazumi; Tsuneyuki, Shinji
2009-02-01
This special issue of Journal of Physics: Condensed Matter comprises selected papers from the proceedings of the 2nd International Conference on Quantum Simulators and Design (QSD2008) held in Tokyo, Japan, between 31 May and 3 June 2008. This conference was organized under the auspices of the Development of New Quantum Simulators and Quantum Design Grant-in-Aid for Scientific Research on Priority Areas, Ministry of Education, Culture, Sports, Science and Technology of Japan (MEXT). The conference focused on the development of first principles electronic structure calculations and their applications. The aim was to provide an opportunity for discussion on the progress in computational materials design and, in particular, the development of quantum simulators and quantum design. Computational materials design is a computational approach to the development of new materials. The essential ingredient is the use of quantum simulators to design a material that meets a given specification of properties and functionalities. For this to be successful, the quantum simulator should be very reliable and be applicable to systems of realistic size. During the conference, new methods of quantum simulation and quantum design were discussed including methods beyond the local density approximation of density functional theory, order-N methods, methods dealing with excitations and reactions, and the application of these methods to the design of novel materials, devices and systems. The conference provided an international forum for experimental and theoretical researchers to exchange ideas. A total of 220 delegates from eight countries participated in the conference. There were 13 invited talks, ten oral presentations and 120 posters. The 3rd International Conference on Quantum Simulators and Design will be held in Germany in the autumn of 2011.
High Performance Parallel Computational Nanotechnology
NASA Technical Reports Server (NTRS)
Saini, Subhash; Craw, James M. (Technical Monitor)
1995-01-01
At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; scalable numerical algorithms for reliability, verifications and testability. There appears no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.
Kontis, A.L.
2001-01-01
The Variable-Recharge Package is a computerized method designed for use with the U.S. Geological Survey three-dimensional finite-difference ground-water flow model (MODFLOW-88) to simulate areal recharge to an aquifer. It is suitable for simulations of aquifers in which the relation between ground-water levels and land surface can affect the amount and distribution of recharge. The method is based on the premise that recharge to an aquifer cannot occur where the water level is at or above land surface. Consequently, recharge will vary spatially in simulations in which the Variable-Recharge Package is applied, if the water levels are sufficiently high. The input data required by the program for each model cell that can potentially receive recharge include the average land-surface elevation and a quantity termed "water available for recharge," which is equal to precipitation minus evapotranspiration. The Variable-Recharge Package also can be used to simulate recharge to a valley-fill aquifer in which the valley fill and the adjoining uplands are explicitly simulated. Valley-fill aquifers, which are the most common type of aquifer in the glaciated northeastern United States, receive much of their recharge from upland sources as channeled and/or unchanneled surface runoff and as lateral ground-water flow. Surface runoff in the uplands is generated in the model when the applied water available for recharge is rejected because simulated water levels are at or above land surface. The surface runoff can be distributed to other parts of the model by (1) applying the amount of the surface runoff that flows to upland streams (channeled runoff) to explicitly simulated streams that flow onto the valley floor, and/or (2) applying the amount that flows downslope toward the valley-fill aquifer (unchanneled runoff) to specified model cells, typically those near the valley wall. An example model of an idealized valley-fill aquifer is presented to demonstrate application of the method and the type of information that can be derived from its use. Documentation of the Variable-Recharge Package is provided in the appendixes and includes listings of model code and of program variables. Comment statements in the program listings provide a narrative of the code. Input-data instructions and printed model output for the package are included.
Reliability computation using fault tree analysis
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
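For the special case where every basic event is independent and appears in only one fault path, the top-event probability follows directly from the gate structure; the paper's use of conditional probabilities handles the harder case of repeated basic events. A minimal sketch of the independent case (the tree and probabilities are hypothetical):

```python
from math import prod

# A fault tree as nested gates over independent basic-event probabilities.
# ('AND', [children]): all children fail; ('OR', [children]): at least one.
# This simple product form assumes each basic event appears only once.
def top_event_probability(node, p_basic):
    kind = node[0]
    if kind == 'basic':
        return p_basic[node[1]]
    probs = [top_event_probability(child, p_basic) for child in node[1]]
    if kind == 'AND':
        return prod(probs)
    if kind == 'OR':
        return 1.0 - prod(1.0 - q for q in probs)
    raise ValueError(kind)

p = {'pump_a': 0.01, 'pump_b': 0.01, 'valve': 0.005}
tree = ('OR', [('AND', [('basic', 'pump_a'), ('basic', 'pump_b')]),
               ('basic', 'valve')])
print(top_event_probability(tree, p))  # ~0.0051
```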
Life Span as the Measure of Performance and Learning in a Business Gaming Simulation
ERIC Educational Resources Information Center
Thavikulwat, Precha
2012-01-01
This study applies the learning curve method of measuring learning to participants of a computer-assisted business gaming simulation that includes a multiple-life-cycle feature. The study involved 249 participants. It verified the workability of the feature and estimated the participants' rate of learning at 17.4% for every doubling of experience.…
NASA Astrophysics Data System (ADS)
Akai, Hisazumi; Oguchi, Tamio
2007-09-01
This special issue of Journal of Physics: Condensed Matter comprises selected papers from the 1st International Conference on Quantum Simulators and Design (QSD2006) held in Hiroshima, Japan, 3-6 December 2006. This conference was organized under the auspices of the Development of New Quantum Simulators and Quantum Design Grant-in-Aid for Scientific Research on Priority Areas, Ministry of Education, Culture, Sports, Science and Technology of Japan (MEXT), and Hiroshima University. Quantum design is a computational approach to the development of new materials with specified properties and functionalities. The basic ingredient is the use of quantum simulations to design a material that meets a given specification of properties and functionalities. For this to be successful, the quantum simulation should be highly reliable and be applicable to systems of realistic size. A central interest is, therefore, the development of new methods of quantum simulation and quantum design. This includes methods beyond the local density approximation of density functional theory (LDA), order-N methods, methods dealing with excitations and reactions, and so on, as well as the application of these methods to the design of new materials and devices. The field of quantum design has developed rapidly in the past few years and this conference provided an international forum for experimental and theoretical researchers to exchange ideas. A total of 183 delegates from 8 countries participated in the conference. There were 18 invited talks, 16 oral presentations and 100 posters. There were many new ideas and we foresee dramatic progress in the coming years. The 2nd International Conference on Quantum Simulators and Design will be held in Tokyo, Japan, 31 May-3 June 2008.
Multiple well-shutdown tests and site-scale flow simulation in fractured rocks
Tiedeman, Claire; Lacombe, Pierre J.; Goode, Daniel J.
2010-01-01
A new method was developed for conducting aquifer tests in fractured-rock flow systems that have a pump-and-treat (P&T) operation for containing and removing groundwater contaminants. The method involves temporary shutdown of individual pumps in wells of the P&T system. Conducting aquifer tests in this manner has several advantages, including (1) no additional contaminated water is withdrawn, and (2) hydraulic containment of contaminants remains largely intact because pumping continues at most wells. The well-shutdown test method was applied at the former Naval Air Warfare Center (NAWC), West Trenton, New Jersey, where a P&T operation is designed to contain and remove trichloroethene and its daughter products in the dipping fractured sedimentary rocks underlying the site. The detailed site-scale subsurface geologic stratigraphy, a three-dimensional MODFLOW model, and inverse methods in UCODE_2005 were used to analyze the shutdown tests. In the model, a deterministic method was used for representing the highly heterogeneous hydraulic conductivity distribution and simulations were conducted using an equivalent porous media method. This approach was very successful for simulating the shutdown tests, contrary to a common perception that flow in fractured rocks must be simulated using a stochastic or discrete fracture representation of heterogeneity. Use of inverse methods to simultaneously calibrate the model to the multiple shutdown tests was integral to the effectiveness of the approach.
Projected Pupil Plane Pattern: an alternative LGS wavefront sensing technique
NASA Astrophysics Data System (ADS)
Yang, Huizhe; Bharmal, Nazim A.; Myers, Richard M.
2018-07-01
We have analysed and simulated a novel alternative Laser Guide Star (LGS) configuration termed Projected Pupil Plane Pattern (PPPP), including wavefront sensing and the reconstruction method. A key advantage of this method is that a collimated beam is launched through the telescope primary mirror, therefore the wavefront measurements do not suffer from the effects of focal anisoplanatism. A detailed simulation including the upward wave optics propagation, return path imaging, and linearized wavefront reconstruction has been presented. The conclusions that we draw from the simulation include the optimum pixel number across the pupil N = 32, the optimum number of Zernike modes (which is 78), propagation altitudes h1 = 10 km and h2 = 20 km for Rayleigh scattered returns, and the choice for the laser beam modulation (Gaussian beam). We also investigate the effects of turbulence profiles with multiple layers and find that it does not reduce PPPP performance as long as the turbulence layers are below h1. A signal-to-noise ratio analysis has been given when photon and read noise are introduced. Finally, we compare the PPPP performance with a conventional Shack-Hartmann Wavefront Sensor in an open loop, using Rayleigh LGS or sodium LGS, for 4-m and 10-m telescopes, respectively. For this purpose, we use a full Monte Carlo end-to-end AO simulation tool, Soapy. From these results, we confirm that PPPP does not suffer from focus anisoplanatism.
Projected Pupil Plane Pattern: an alternative LGS wavefront sensing technique
NASA Astrophysics Data System (ADS)
Yang, Huizhe; Bharmal, Nazim A.; Myers, Richard M.
2018-04-01
We have analyzed and simulated a novel alternative LGS configuration termed Projected Pupil Plane Pattern (PPPP), including wavefront sensing and the reconstruction method. A key advantage of this method is that a collimated beam is launched through the telescope primary mirror, therefore the wavefront measurements do not suffer from the effects of focal anisoplanatism. A detailed simulation including the upward wave optics propagation, return path imaging and linearized wavefront reconstruction has been presented. The conclusions that we draw from the simulation include the optimum pixel number across the pupil N=32, the optimum number of Zernike modes (which is 78), propagation altitudes h1 = 10 km and h2 = 20 km for Rayleigh scattered returns, and the choice for the laser beam modulation (Gaussian beam). We also investigate the effects of turbulence profiles with multiple layers and find that it does not reduce PPPP performance as long as the turbulence layers are below h1. A signal-to-noise ratio (SNR) analysis has been given when photon and read noise are introduced. Finally, we compare the PPPP performance with a conventional Shack-Hartmann Wavefront Sensor (WFS) in open loop, using Rayleigh LGS or sodium LGS, for 4-m and 10-m telescopes respectively. For this purpose we use a full Monte-Carlo end-to-end AO simulation tool, Soapy. From these results we confirm that PPPP does not suffer from focus anisoplanatism.
New Cogging Torque Reduction Methods for Permanent Magnet Machine
NASA Astrophysics Data System (ADS)
Bahrim, F. S.; Sulaiman, E.; Kumar, R.; Jusoh, L. I.
2017-08-01
Permanent magnet type motors (PMs), especially the permanent magnet synchronous motor (PMSM), are expanding into industrial application systems and are widely used in various applications. The key features of this machine include high power and torque density, an extended speed range, high efficiency, better dynamic performance and good flux-weakening capability. Nevertheless, high cogging torque, which may cause noise and vibration, is one of the threats to machine performance. Therefore, with the aid of 3-D finite element analysis (FEA) and simulation using JMAG Designer, this paper proposes new methods for cogging torque reduction. Based on the simulation, combining skewing with the radial pole pairing method and combining skewing with the axial pole pairing method reduce the cogging torque effect by up to 71.86% and 65.69%, respectively.
Vertebral derotation in adolescent idiopathic scoliosis causes hypokyphosis of the thoracic spine
2012-01-01
Background The purpose of this study was to test the hypothesis that direct vertebral derotation by pedicle screws (PS) causes hypokyphosis of the thoracic spine in adolescent idiopathic scoliosis (AIS) patients, using computer simulation. Methods Twenty AIS patients with Lenke type 1 or 2 who underwent posterior correction surgeries using PS were included in this study. Simulated corrections of each patient’s scoliosis, as determined by the preoperative CT scan data, were performed on segmented 3D models of the whole spine. Two types of simulated extreme correction were performed: 1) complete coronal correction only (C method) and 2) complete coronal correction with complete derotation of vertebral bodies (C + D method). The kyphosis angle (T5-T12) and vertebral rotation angle at the apex were measured before and after the simulated corrections. Results The mean kyphosis angle after the C + D method was significantly smaller than that after the C method (2.7 ± 10.0° vs. 15.0 ± 7.1°, p < 0.01). The mean preoperative apical rotation angle of 15.2 ± 5.5° was completely corrected after the C + D method (0°) and was unchanged after the C method (17.6 ± 4.2°). Conclusions In the 3D simulation study, kyphosis was reduced after complete correction of the coronal and rotational deformity, but it was maintained after the coronal-only correction. These results proved the hypothesis that the vertebral derotation obtained by PS causes hypokyphosis of the thoracic spine. PMID:22691717
NASA Astrophysics Data System (ADS)
Wu, Xiongwu; Brooks, Bernard R.
2011-11-01
The self-guided Langevin dynamics (SGLD) is a method to accelerate conformational searching. This method is unique in the way that it selectively enhances and suppresses molecular motions based on their frequency to accelerate conformational searching without modifying energy surfaces or raising temperatures. It has been applied to studies of many long time scale events, such as protein folding. Recent progress in the understanding of the conformational distribution in SGLD simulations makes SGLD also an accurate method for quantitative studies. The SGLD partition function provides a way to convert the SGLD conformational distribution to the canonical ensemble distribution and to calculate ensemble average properties through reweighting. Based on the SGLD partition function, this work presents a force-momentum-based self-guided Langevin dynamics (SGLDfp) simulation method to directly sample the canonical ensemble. This method includes interaction forces in its guiding force to compensate for the perturbation caused by the momentum-based guiding force so that it can approximately sample the canonical ensemble. Using several example systems, we demonstrate that SGLDfp simulations can approximately maintain the canonical ensemble distribution and significantly accelerate conformational searching. With optimal parameters, SGLDfp and SGLD simulations can cross energy barriers of more than 15 kT and 20 kT, respectively, at rates similar to those at which LD simulations cross energy barriers of 10 kT. The SGLDfp method is size extensive and works well for large systems. For studies where preserving accessible conformational space is critical, such as free energy calculations and protein folding studies, SGLDfp is an efficient approach to search and sample the conformational space.
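The guiding-force idea can be illustrated with a one-dimensional toy integrator. The sketch below uses a running average of the momentum as the low-frequency guiding term on a double-well potential; it is a schematic of the SGLD concept only, not the SGLDfp formulation (which adds force terms to preserve the canonical ensemble), and all parameters, including unit mass, are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def force(x):                        # double-well potential U(x) = (x^2 - 1)^2
    return -4.0 * x * (x * x - 1.0)

dt, gamma, kT, lam, tau_avg = 0.005, 1.0, 0.2, 1.0, 0.5
alpha = dt / tau_avg                 # weight of the running momentum average

def run(guided):
    x, p, p_avg, crossings = -1.0, 0.0, 0.0, 0
    for _ in range(200000):
        # Guiding force proportional to the slowly varying momentum average:
        # it boosts persistent (low-frequency) motion and so accelerates
        # barrier crossing without changing the potential itself.
        g = lam * gamma * p_avg if guided else 0.0
        noise = np.sqrt(2.0 * gamma * kT * dt) * rng.standard_normal()
        p += dt * (force(x) - gamma * p + g) + noise   # unit mass assumed
        x_new = x + dt * p
        if x * x_new < 0.0:          # count crossings of the barrier at x = 0
            crossings += 1
        x = x_new
        p_avg = (1.0 - alpha) * p_avg + alpha * p
    return crossings

print("crossings, plain LD :", run(guided=False))
print("crossings, guided LD:", run(guided=True))   # typically larger
```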
Sopori, Bhushan L.
1995-01-01
A method and apparatus for improving the accuracy of the simulation of sunlight reaching the earth's surface includes a relatively small heated chamber having an optical inlet and an optical outlet, the chamber having a cavity that can be filled with a heated stream of CO2 and water vapor. A simulated beam comprising infrared and near infrared light can be directed through the chamber cavity containing the CO2 and water vapor, whereby the spectral characteristics of the beam are altered so that the output beam from the chamber contains wavelength bands that accurately replicate atmospheric absorption of solar energy due to atmospheric CO2 and moisture.
A Step Towards CO2-Neutral Aviation
NASA Technical Reports Server (NTRS)
Brankovic, Andreja; Ryder, Robert C.; Hendricks, Robert C.; Huber, Marcia L.
2008-01-01
An approximation method for evaluation of the caloric equations used in combustion chemistry simulations is described. The method is applied to generate the equations of specific heat, static enthalpy, and Gibbs free energy for fuel mixtures of interest to gas turbine engine manufacturers. Liquid-phase fuel properties are also derived. The fuels investigated include JP-8, synthetic fuel, and two blends of JP-8 and synthetic fuel. The complete set of fuel property equations for both phases are implemented into a computational fluid dynamics (CFD) flow solver database, and multiphase, reacting flow simulations of a well-tested liquid-fueled combustor are performed. The simulations are a first step in understanding combustion system performance and operational issues when using alternate fuels at practical engine operating conditions.
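Caloric properties of this kind are commonly represented as polynomial fits in temperature. As an illustration, the sketch below evaluates cp, h, and g from the widely used NASA 7-coefficient ideal-gas form; the paper does not specify its exact fit form, and the coefficients here are placeholders, not fits to JP-8 or the synthetic fuel:

```python
# NASA 7-coefficient polynomial form for ideal-gas caloric properties:
#   cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4
#   h/RT = a1 + a2*T/2 + a3*T^2/3 + a4*T^3/4 + a5*T^4/5 + a6/T
#   s/R  = a1*ln(T) + a2*T + a3*T^2/2 + a4*T^3/3 + a5*T^4/4 + a7
from math import log

R = 8.314462618  # J/(mol K)

def cp(T, a):
    return R * (a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4)

def enthalpy(T, a):
    return R * T * (a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4
                    + a[4]*T**4/5) + R * a[5]

def gibbs(T, a):
    s = R * (a[0]*log(T) + a[1]*T + a[2]*T**2/2 + a[3]*T**3/3
             + a[4]*T**4/4 + a[6])
    return enthalpy(T, a) - T * s

# Illustrative coefficients only (not a fit to any real fuel).
a_demo = [3.5, 1.0e-3, -2.0e-7, 0.0, 0.0, -1.0e3, 4.0]
for T in (300.0, 800.0, 1500.0):
    print(T, cp(T, a_demo), enthalpy(T, a_demo), gibbs(T, a_demo))
```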
Sopori, B.L.
1995-06-20
A method and apparatus for improving the accuracy of the simulation of sunlight reaching the earth's surface includes a relatively small heated chamber having an optical inlet and an optical outlet, the chamber having a cavity that can be filled with a heated stream of CO2 and water vapor. A simulated beam comprising infrared and near infrared light can be directed through the chamber cavity containing the CO2 and water vapor, whereby the spectral characteristics of the beam are altered so that the output beam from the chamber contains wavelength bands that accurately replicate atmospheric absorption of solar energy due to atmospheric CO2 and moisture. 8 figs.
Computational structural mechanics for engine structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1989-01-01
The computational structural mechanics (CSM) program at Lewis encompasses: (1) fundamental aspects of formulating and solving structural mechanics problems, and (2) development of integrated software systems to computationally simulate the performance, durability, and life of engine structures. It is structured mainly to supplement, complement, and whenever possible replace costly experimental efforts, which are unavoidable during engineering research and development programs. Specific objectives include: (1) investigating the unique advantages of parallel and multiprocessor computing for reformulating and solving structural mechanics problems and for formulating and solving multidisciplinary mechanics problems; and (2) developing integrated structural system computational simulators for predicting structural performance, evaluating newly developed methods, and identifying and prioritizing improved or missing methods. Herein the CSM program is summarized with emphasis on the Engine Structures Computational Simulator (ESCS). Typical results obtained using ESCS are described to illustrate its versatility.
NASA Astrophysics Data System (ADS)
Li, Jun-jun; Yang, Xiao-jun; Xiao, Ying-jie; Xu, Bo-wei; Wu, Hua-feng
2018-03-01
The immersed tunnel is an important part of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project. In immersed tunnel floating, translation, which includes straight and transverse movements, is the main working mode. To decide the magnitude and direction of the towing force for each tug, a particle swarm-based translation control method is presented for a non-powered immersed tunnel element. A linear weighted logarithmic function is exploited to avoid weak subgoals. In simulation, the particle swarm-based control method is evaluated and compared with the traditional empirical method in the case of the HZMB project. Simulation results show that the presented method delivers a performance improvement in terms of enhanced surplus towing force.
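A minimal particle swarm optimizer for this kind of force-allocation problem is sketched below. The tug bearings, force limits, target net force, and the log-weighted objective (a guess at the paper's "linear weighted logarithmic function") are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: 4 tugs at fixed bearings must produce a target net
# force on the element while keeping individual efforts balanced.
bearings = np.deg2rad([30.0, 150.0, 210.0, 330.0])
target = np.array([500.0, 0.0])          # desired net force (kN)

def cost(forces):
    net = np.array([np.sum(forces * np.cos(bearings)),
                    np.sum(forces * np.sin(bearings))])
    err = np.sum((net - target) ** 2)
    spread = np.var(forces)
    # Log-weighted sum so neither subgoal drowns out the other.
    return 0.7 * np.log1p(err) + 0.3 * np.log1p(spread)

n_particles, n_dim, iters = 30, 4, 300
x = rng.uniform(0.0, 300.0, (n_particles, n_dim))   # force magnitudes (kN)
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_dim))
    # Standard PSO velocity update: inertia + cognitive + social terms.
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 300.0)
    c = np.array([cost(p) for p in x])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], c[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("tug forces (kN):", np.round(gbest, 1), "cost:", cost(gbest))
```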
NASA Astrophysics Data System (ADS)
Li, Dachao; Xu, Qingmei; Liu, Yu; Wang, Ridong; Xu, Kexin; Yu, Haixia
2017-11-01
A high-accuracy microdialysis method that can provide reference values of the glucose concentration in interstitial fluid for the accurate evaluation of non-invasive and minimally invasive continuous glucose monitoring is reported in this study. The parameters of the microdialysis process were first optimized by testing and analyzing three main factors that impact microdialysis recovery: the perfusion rate, the temperature, and the glucose concentration in the area surrounding the microdialysis probe. The precision of the optimized microdialysis method was then determined in a simulation system that was designed and established in this study to simulate variations in continuous glucose concentration in the human body. Finally, the microdialysis method was tested for in vivo interstitial glucose concentration measurement.
Integrated control and health management. Orbit transfer rocket engine technology program
NASA Technical Reports Server (NTRS)
Holzmann, Wilfried A.; Hayden, Warren R.
1988-01-01
To ensure controllability of the baseline design for a 7500 pound thrust, 10:1 throttleable, dual expander cycle, hydrogen-oxygen, orbit transfer rocket engine, an Integrated Controls and Health Monitoring concept was developed. This included: (1) dynamic engine simulations using a TUTSIM-derived computer code; (2) analysis of various control methods; (3) failure modes analysis to identify critical sensors; (4) a survey of applicable sensor technology; and (5) a study of Health Monitoring philosophies. The engine design was found to be controllable over the full throttling range by using 13 valves, including an oxygen turbine bypass valve to control mixture ratio, and a hydrogen turbine bypass valve, used in conjunction with the oxygen bypass, to control thrust. Classic feedback control methods are proposed along with specific requirements for valves, sensors, and the controller. Expanding on the control system, a Health Monitoring system is proposed, including suggested computing methods and the following recommended sensors: (1) fiber optic and silicon bearing deflectometers; (2) capacitive shaft displacement sensors; and (3) hot spot thermocouple arrays. Further work is needed to refine and verify the dynamic simulations and control algorithms, to advance sensor capabilities, and to develop the Health Monitoring computational methods.
Preparation of a Frozen Regolith Simulant Bed for ISRU Component Testing in a Vacuum Chamber
NASA Technical Reports Server (NTRS)
Klenhenz, Julie; Linne, Diane
2013-01-01
In-Situ Resource Utilization (ISRU) systems and components have undergone extensive laboratory and field tests to expose hardware to relevant soil environments. The next step is to combine these soil environments with relevant pressure and temperature conditions. Previous testing has demonstrated how to incorporate large bins of unconsolidated lunar regolith into sufficiently sized vacuum chambers. In order to create the appropriate depth-dependent soil characteristics needed to test drilling operations for the lunar surface, the regolith simulant bed must be properly compacted and frozen. While small cryogenic simulant beds have been created for laboratory tests, this larger-scale effort will allow testing of a full 1 m drill, which has been developed for a potential lunar prospector mission. Compacted bulk densities were measured at various moisture contents for GRC-3 and Chenobi regolith simulants. Vibrational compaction methods were compared with the previously used hammer compaction, or "Proctor", method. All testing was done per ASTM standard methods. A full 6.13 m3 simulant bed with 6 percent moisture by weight was prepared, compacted in layers, and frozen in a commercial freezer. Temperature and desiccation data were collected to determine logistics for preparation and transport of the simulant bed for thermal vacuum testing. Once in the vacuum facility, the simulant bed will be cryogenically frozen with liquid nitrogen. These cryogenic vacuum tests are underway, but results will not be included in this manuscript.
NASA Astrophysics Data System (ADS)
Haworth, Daniel
2013-11-01
The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.
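The closure advantage that motivates PDF methods can be shown in a few lines: in a particle representation each notional particle carries its own composition, so a nonlinear reaction source needs no closure model, while mixing still requires one (here the IEM model, one standard choice). The scalar, rate law, and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Notional particles carrying a reactive scalar c. The reaction is
# nonlinear, yet its mean effect is computed exactly by averaging over
# particles -- the closure advantage of transported PDF methods.
n, dt, tau_mix, k = 10000, 1e-3, 0.05, 50.0
c = rng.choice([0.0, 1.0], size=n)        # bimodal initial PDF

def reaction_rate(c):
    return -k * c**2 * (1.0 - c)          # an arbitrary nonlinear rate

for step in range(500):
    c_mean = c.mean()
    # IEM mixing model: relax each particle toward the mean composition.
    c += -dt * (c - c_mean) / (2.0 * tau_mix)
    # Chemistry integrated per particle: no closure assumption needed.
    c += dt * reaction_rate(c)

print("mean:", c.mean(), "variance:", c.var())
```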
Towards the simulation of molecular collisions with a superconducting quantum computer
NASA Astrophysics Data System (ADS)
Geller, Michael
2013-05-01
I will discuss the prospects for the use of large-scale, error-corrected quantum computers to simulate complex quantum dynamics such as molecular collisions. This will likely require millions of qubits. I will also discuss an alternative approach [M. R. Geller et al., arXiv:1210.5260] that is ideally suited for today's superconducting circuits, which uses the single-excitation subspace (SES) of a system of n tunably coupled qubits. The SES method allows many operations in the unitary group SU(n) to be implemented in a single step, bypassing the need for elementary gates, thereby making large computations possible without error correction. The method enables universal quantum simulation, including simulation of the time-dependent Schrödinger equation, and we argue that a 1000-qubit SES processor should be capable of achieving quantum speedup relative to a petaflop supercomputer. We speculate on the utility and practicality of such a simulator for atomic and molecular collision physics. Work supported by the US National Science Foundation CDI program.
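A minimal numerical sketch of the single-excitation-subspace idea (Python; the qubit count, couplings, and time step are invented for illustration): within the SES, the 2^n-dimensional register reduces to an n-dimensional amplitude vector, so propagating the time-dependent Schrödinger equation costs one n x n matrix exponential per step rather than an exponentially large computation.

    import numpy as np
    from scipy.linalg import expm

    n = 8                                            # number of qubits (toy example)
    rng = np.random.default_rng(0)
    g = rng.normal(size=(n, n)); g = (g + g.T) / 2   # tunable qubit-qubit couplings
    eps = rng.normal(size=n)                         # on-site qubit energies
    H = np.diag(eps) + g - np.diag(np.diag(g))       # n x n Hamiltonian restricted to the SES

    psi = np.zeros(n, dtype=complex); psi[0] = 1.0   # excitation initially on qubit 0
    U = expm(-1j * H * 0.05)                         # one-step propagator, dt = 0.05
    for _ in range(100):
        psi = U @ psi
    print(np.abs(psi)**2)                            # excitation probability per qubit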
An Object-Oriented Finite Element Framework for Multiphysics Phase Field Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael R Tonks; Derek R Gaston; Paul C Millett
2012-01-01
The phase field approach is a powerful and popular method for modeling microstructure evolution. In this work, advanced numerical tools are used to create a phase field framework that facilitates rapid model development. This framework, called MARMOT, is based on Idaho National Laboratory's finite element Multiphysics Object-Oriented Simulation Environment. In MARMOT, the system of phase field partial differential equations (PDEs) is solved simultaneously with PDEs describing additional physics, such as solid mechanics and heat conduction, using the Jacobian-free Newton-Krylov (JFNK) method. An object-oriented architecture is created by taking advantage of commonalities in phase field models to facilitate development of new models with very little written code. In addition, MARMOT provides access to mesh and time step adaptivity, reducing the cost of performing simulations with large disparities in both spatial and temporal scales. In this work, phase separation simulations are used to show the numerical performance of MARMOT. Deformation-induced grain growth and void growth simulations are included to demonstrate the multiphysics capability.
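The JFNK idea can be sketched compactly: the Krylov solver never needs the Jacobian matrix itself, only Jacobian-vector products, which can be approximated by finite-differencing the residual. The sketch below (Python with SciPy) shows the pattern; the 1D diffusion-reaction residual is a stand-in for MARMOT's coupled physics, not its actual equations.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def residual(u):
        # Hypothetical residual: discrete 1D diffusion plus a cubic reaction term,
        # standing in for coupled phase-field / mechanics / heat-conduction residuals.
        r = np.empty_like(u)
        r[0], r[-1] = u[0], u[-1]                                 # Dirichlet boundaries
        r[1:-1] = -(u[2:] - 2*u[1:-1] + u[:-2]) + u[1:-1]**3 - u[1:-1]
        return r

    def newton_jfnk(u, tol=1e-8, eps=1e-7):
        for _ in range(50):
            r = residual(u)
            if np.linalg.norm(r) < tol:
                break
            # Jacobian-free: J @ v is approximated by a finite difference of the residual
            J = LinearOperator((u.size, u.size), dtype=float,
                               matvec=lambda v: (residual(u + eps*v) - r) / eps)
            du, _ = gmres(J, -r)
            u = u + du
        return u

    u = newton_jfnk(np.linspace(-1.0, 1.0, 101))
    print("final residual norm:", np.linalg.norm(residual(u)))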
Comparative study of signalling methods for high-speed backplane transceiver
NASA Astrophysics Data System (ADS)
Wu, Kejun
2017-11-01
A combined analysis of transient simulation and statistical methods is proposed for the comparative study of signalling methods applied to high-speed backplane transceivers. This method enables fast and accurate signal-to-noise ratio and symbol error rate estimation of a serial link over a four-dimensional design space comprising channel characteristics, noise scenarios, equalisation schemes, and signalling methods. The proposed combined analysis method chooses an efficient sampling size for performance evaluation. A comparative study of non-return-to-zero (NRZ), PAM-4, and four-phase shifted sinusoid symbol (PSS-4) signalling using parameterised behaviour-level simulation shows that PAM-4 and PSS-4 have substantial advantages over conventional NRZ in most cases. A comparison between PAM-4 and PSS-4 shows that PAM-4 suffers significant bit error rate degradation as the noise level increases.
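The NRZ/PAM-4 noise trade the abstract alludes to can be seen already in the textbook symbol-error-rate formula for equally spaced M-PAM in additive white Gaussian noise, a far simpler stand-in for the paper's combined transient/statistical link analysis. The sketch below (Python) evaluates it at a few illustrative SNR points.

    import numpy as np
    from scipy.special import erfc

    def q_func(x):
        # Gaussian tail probability Q(x)
        return 0.5 * erfc(x / np.sqrt(2.0))

    def pam_ser(snr_db, M):
        # Textbook SER of equally spaced M-PAM in AWGN,
        # with SNR defined on average symbol power.
        snr = 10.0 ** (snr_db / 10.0)
        return 2.0 * (1.0 - 1.0/M) * q_func(np.sqrt(3.0 * snr / (M*M - 1.0)))

    for snr_db in (10, 14, 18):
        print(f"{snr_db} dB  NRZ: {pam_ser(snr_db, 2):.3e}  PAM-4: {pam_ser(snr_db, 4):.3e}")

PAM-4 halves the symbol rate for a given bit rate but pays a per-symbol SNR penalty, which is consistent with its degradation at higher noise levels.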
Adaptively biased molecular dynamics: An umbrella sampling method with a time-dependent potential
NASA Astrophysics Data System (ADS)
Babin, Volodymyr; Karpusenka, Vadzim; Moradi, Mahmoud; Roland, Christopher; Sagui, Celeste
We discuss an adaptively biased molecular dynamics (ABMD) method for the computation of a free energy surface for a set of reaction coordinates. The ABMD method belongs to the general category of umbrella sampling methods with an evolving biasing potential. It is characterized by a small number of control parameters and an O(t) numerical cost with simulation time t. The method naturally allows for extensions based on multiple walkers and a replica-exchange mechanism. The workings of the method are illustrated with a number of examples, including sugar puckering and free energy landscapes for polymethionine and polyproline peptides and for a short β-turn peptide. ABMD has been implemented into the latest version (Case et al., AMBER 10; University of California: San Francisco, 2008) of the AMBER software package and is freely available to the simulation community.
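The flavor of an evolving umbrella bias can be conveyed in a few lines. The sketch below (Python) deposits Gaussian kernels along the trajectory of a single reaction coordinate so that the negative of the accumulated bias approximates the free energy surface at long times. The grid, deposition rate, kernel width, and the toy double-well dynamics are all invented here; this kernel-deposition scheme illustrates the general idea and does not reproduce the actual ABMD update rule in AMBER.

    import numpy as np

    rng = np.random.default_rng(0)
    grid = np.linspace(-2.5, 2.5, 251)           # reaction-coordinate grid
    bias = np.zeros_like(grid)
    kT, dt, rate, width = 0.6, 0.01, 1e-3, 0.15

    def bias_force(xi):                          # -dU/dxi interpolated from the grid
        return -np.interp(xi, grid, np.gradient(bias, grid))

    xi = 0.0
    for _ in range(20000):                       # overdamped dynamics on a double well
        f = -4.0 * xi * (xi**2 - 1.0) + bias_force(xi)
        xi += dt * f + np.sqrt(2.0 * kT * dt) * rng.normal()
        bias += rate * kT * np.exp(-0.5 * ((grid - xi) / width)**2)  # grow the bias

    free_energy_estimate = -bias                 # flooded-basin estimate of F(xi)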
Nivala, Michael; de Lange, Enno; Rovetti, Robert; Qu, Zhilin
2012-01-01
Intracellular calcium (Ca) cycling dynamics in cardiac myocytes is regulated by a complex network of spatially distributed organelles, such as the sarcoplasmic reticulum (SR), mitochondria, and myofibrils. In this study, we present a mathematical model of intracellular Ca cycling together with numerical and computational methods for computer simulations. The model consists of a coupled Ca release unit (CRU) network, which includes an SR domain and a myoplasm domain. Each CRU contains 10 L-type Ca channels and 100 ryanodine receptor channels, with individual channels simulated stochastically using a variant of Gillespie’s method, modified here to handle time-dependent transition rates. Both the SR domain and the myoplasm domain in each CRU are modeled by 5 × 5 × 5 voxels to maintain proper Ca diffusion. Advanced numerical algorithms implemented on graphical processing units were used for fast computational simulations. For a myocyte containing 100 × 20 × 10 CRUs, simulating 1 s of heart time takes about 10 min of machine time on a single NVIDIA Tesla C2050. Examples of simulated Ca cycling dynamics, such as Ca sparks, Ca waves, and Ca alternans, are shown. PMID:22586402
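One standard way to adapt Gillespie-type simulation to time-dependent rates is thinning: draw candidate event times from an upper-bound rate, then accept each candidate with probability equal to the true instantaneous rate over the bound. The paper's exact variant is not spelled out in the abstract; the sketch below (Python) applies thinning to a single hypothetical two-state channel with an invented time-varying opening rate.

    import numpy as np

    rng = np.random.default_rng(1)

    def k_open(t):                        # hypothetical time-varying opening rate (1/ms)
        return 0.5 + 0.4 * np.sin(2.0 * np.pi * t / 500.0)

    k_close = 1.0                         # constant closing rate (1/ms)
    k_max = 1.0                           # upper bound on k_open(t), needed for thinning

    t, state, T, events = 0.0, 0, 1000.0, []   # state 0 = closed, 1 = open
    while t < T:
        bound = k_max if state == 0 else k_close
        t += rng.exponential(1.0 / bound)               # candidate event time
        if state == 1 or rng.random() < k_open(t) / k_max:
            state = 1 - state                           # accept the state switch
            events.append((t, state))
    print(len(events), "transitions in", T, "ms")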
Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.
Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O
2015-10-01
Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results obtained by re-sampling from a population pool and by using multivariate distributions to simulate patient covariates. COPD was selected as the paradigm disease for the purposes of our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to those in the real population. Moreover, it allows simulation of patient characteristics beyond the limits of the inclusion and exclusion criteria in historical protocols. Both methods, discrete re-sampling and multivariate distribution sampling, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios, since it is not necessarily bound to the covariate combinations present in the available clinical data sets.
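The two covariate-generation strategies are easy to contrast in code. In this sketch (Python; the covariates, their distributions, and correlations are fabricated purely for illustration, not COPD data), discrete re-sampling can only reproduce observed patients, while a multivariate normal fitted to the pool preserves the correlation structure yet generates new, unobserved combinations.

    import numpy as np

    rng = np.random.default_rng(42)
    # Hypothetical observed pool of 200 patients: age (y), weight (kg), FEV1 (L).
    age = rng.normal(65, 8, 200)
    wt = rng.normal(75, 12, 200) - 0.3 * (age - 65)
    fev1 = np.clip(2.2 - 0.03 * (age - 65) + rng.normal(0, 0.3, 200), 0.5, None)
    obs = np.column_stack([age, wt, fev1])

    # Method 1: discrete re-sampling -- draw whole patients with replacement,
    # so only covariate combinations present in the pool can occur.
    virtual_resampled = obs[rng.integers(0, len(obs), 1000)]

    # Method 2: multivariate normal fitted to the pool -- preserves correlations
    # while allowing combinations beyond the observed patients.
    mu, cov = obs.mean(axis=0), np.cov(obs, rowvar=False)
    virtual_mvn = rng.multivariate_normal(mu, cov, 1000)

    print("age-FEV1 correlation, observed vs simulated:",
          np.corrcoef(obs, rowvar=False)[0, 2],
          np.corrcoef(virtual_mvn, rowvar=False)[0, 2])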
A new clocking method for a charge coupled device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umezu, Rika; Kitamoto, Shunji, E-mail: kitamoto@rikkyo.ac.jp; Murakami, Hiroshi
2014-07-15
We propose and demonstrate a new clocking method for a charge-coupled device (CCD). When a CCD is used as a photon-counting detector of X-rays, its weak point is the limited counting rate, because a high counting rate produces non-negligible photon pile-up. In astronomical usage, this pile-up is especially severe for observations of a bright point-like object. One typical way to reduce the pile-up is a parallel-sum (P-sum) mode. This mode, however, completely loses one-dimensional spatial information. Our new clocking method, the panning mode, provides properties complementary to those of the normal mode and the P-sum mode. We performed a simple simulation in order to investigate the pile-up probability and compared the simulated result with actually obtained event rates. Using this simulation and the experimental results, we compared the pile-up tolerance of various clocking modes, including our new method, and also compared their other characteristics.
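The pile-up argument can be made quantitative with a simple Poisson estimate: if photon arrivals in a pixel during one effective exposure are Poisson distributed with mean lam, the fraction of detected events containing two or more photons follows directly. The exposure times below are illustrative guesses, not the paper's measured frame times.

    import numpy as np

    def pileup_fraction(rate_per_pixel, t_exp):
        # Fraction of detected events that are piled up (>= 2 photons),
        # given Poisson arrivals with mean lam = rate * exposure.
        lam = rate_per_pixel * t_exp
        p_ge1 = 1.0 - np.exp(-lam)
        p_ge2 = p_ge1 - lam * np.exp(-lam)
        return p_ge2 / p_ge1

    rate = 50.0                                   # photons / s in the source pixel
    for mode, t in [("normal (4 s frame)", 4.0),
                    ("panning (0.1 s effective)", 0.1),
                    ("P-sum (1 ms row)", 1e-3)]:
        print(mode, pileup_fraction(rate, t))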
A method to investigate the diffusion properties of nuclear calcium.
Queisser, Gillian; Wittum, Gabriel
2011-10-01
Modeling biophysical processes generally requires knowledge of the underlying biological parameters. The quality of simulation results is strongly influenced by the accuracy of these parameters; hence, identifying the parameter values that enter the model is a major part of simulating biophysical processes. In many cases, secondary data can be gathered in experimental setups, which can be exploited by mathematical inverse-modeling techniques. Here we describe a method for identifying the diffusion properties of calcium in the nuclei of rat hippocampal neurons. The method is based on a Gauss-Newton scheme for solving a least-squares minimization problem and was formulated in such a way that it is readily implementable in the simulation platform uG. Making use of independently published space- and time-dependent calcium imaging data, generated from laser-assisted calcium uncaging experiments, we could identify the diffusion properties of nuclear calcium and were able to validate a previously published model that describes nuclear calcium dynamics as a diffusion process.
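A one-parameter caricature of such an inverse problem shows the Gauss-Newton structure: residuals between model and imaging data, a sensitivity (Jacobian) with respect to the diffusion coefficient, and the normal-equation update. The 1D free-diffusion kernel and all numbers below are stand-ins; the actual study solves the 3D problem in realistic nuclear geometries within the uG platform.

    import numpy as np

    def model(D, x, t):
        # 1D free-space diffusion kernel for an instantaneous point release
        return np.exp(-x**2 / (4*D*t)) / np.sqrt(4*np.pi*D*t)

    x, t, D_true = np.linspace(-10, 10, 41), 2.0, 3.0
    data = model(D_true, x, t) + 0.002 * np.random.default_rng(0).normal(size=x.size)

    D, h = 1.0, 1e-6                         # initial guess, finite-difference step
    for _ in range(20):
        r = data - model(D, x, t)                        # residual vector
        J = (model(D + h, x, t) - model(D, x, t)) / h    # sensitivity dm/dD
        D += (J @ r) / (J @ J)                           # Gauss-Newton update (scalar case)
    print("identified D:", D)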
Inclusion of Structural Flexibility in Design Load Analysis for Wave Energy Converters: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yi; Yu, Yi-Hsiang; van Rij, Jennifer A
2017-08-14
Hydroelastic interactions, caused by ocean wave loading on wave energy devices with deformable structures, are studied in the time domain. A midfidelity, hybrid modeling approach of rigid-body and flexible-body dynamics is developed and implemented in an open-source simulation tool for wave energy converters (WEC-Sim) to simulate the dynamic responses of wave energy converter component structural deformations under wave loading. A generalized coordinate system, including degrees of freedom associated with rigid bodies, structural modes, and constraints connecting multiple bodies, is utilized. A simplified method of calculating stress loads and sectional bending moments is implemented, with the purpose of sizing and designing wave energy converters. Results calculated using the method presented are verified with those of high-fidelity fluid-structure interaction simulations, as well as low-fidelity, frequency-domain, boundary element method analysis.
Cockpit weather radar display demonstrator and ground-to-air sferics telemetry system
NASA Technical Reports Server (NTRS)
Nickum, J. D.; Mccall, D. L.
1982-01-01
The results of two methods of obtaining timely and accurate severe weather presentations in the cockpit are detailed. The first method described is a course up display of uplinked weather radar data. This involves the construction of a demonstrator that will show the feasibility of producing a course up display in the cockpit of the NASA simulator at Langley. A set of software algorithms was designed that could easily be implemented, along with data tapes generated to provide the cockpit simulation. The second method described involves the uplinking of sferic data from a ground based 3M-Ryan Stormscope. The technique involves transfer of the data on the CRT of the Stormscope to a remote CRT. This sferic uplink and display could also be included in an implementation on the NASA cockpit simulator, allowing evaluation of pilot responses based on real Stormscope data.
Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation
NASA Technical Reports Server (NTRS)
Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.
2015-01-01
A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments relies upon the global momentum conservation of the fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. Numerical examples illustrate the method's application to predicting bulk fluid motion including lateral propellant slosh in low-g conditions.
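The basic stream-and-collide structure underlying any lattice Boltzmann solver looks as follows. This Python sketch is a single-phase D2Q9 BGK model with periodic boundaries; the paper's solver instead uses a multiple-relaxation-time collision operator and a non-ideal equation of state to obtain two-phase cryogenic-propellant behavior, so this is the scaffold, not the method itself.

    import numpy as np

    # D2Q9 lattice: 9 velocities and their weights
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)
    nx, ny, tau = 64, 64, 0.8                      # grid size and BGK relaxation time

    def equilibrium(rho, ux, uy):
        cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
        usq = ux**2 + uy**2
        return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    rho = np.ones((nx, ny)); rho[nx//2, ny//2] = 1.1   # small density pulse
    ux = np.zeros((nx, ny)); uy = np.zeros((nx, ny))
    f = equilibrium(rho, ux, uy)
    for step in range(100):
        rho = f.sum(axis=0)                            # macroscopic moments
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        f += -(f - equilibrium(rho, ux, uy)) / tau     # BGK collision step
        for i in range(9):                             # periodic streaming step
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)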
NASA Astrophysics Data System (ADS)
Fernandez, Pablo; Nguyen, Ngoc-Cuong; Peraire, Jaime
2017-11-01
Over the past few years, high-order discontinuous Galerkin (DG) methods for Large-Eddy Simulation (LES) have emerged as a promising approach to solve complex turbulent flows. Despite the significant research investment, the relation between the discretization scheme, the Riemann flux, the subgrid-scale (SGS) model and the accuracy of the resulting LES solver remains unclear. In this talk, we investigate the role of the Riemann solver and the SGS model in the ability to predict a variety of flow regimes, including transition to turbulence, wall-free turbulence, wall-bounded turbulence, and turbulence decay. The Taylor-Green vortex problem and the turbulent channel flow at various Reynolds numbers are considered. Numerical results show that DG methods implicitly introduce numerical dissipation in under-resolved turbulence simulations and, even in the high Reynolds number limit, this implicit dissipation provides a more accurate representation of the actual subgrid-scale dissipation than that by explicit models.
An RF phased array applicator designed for hyperthermia breast cancer treatments
Wu, Liyong; McGough, Robert J; Arabe, Omar Ali; Samulski, Thaddeus V
2007-01-01
An RF phased array applicator has been constructed for hyperthermia treatments in the intact breast. This RF phased array consists of four antennas mounted on a Lexan water tank, and geometric focusing is employed so that each antenna points in the direction of the intended target. The operating frequency for this phased array is 140 MHz. The RF array has been characterized both by electric field measurements in a water tank and by electric field simulations using the finite-element method. The finite-element simulations are performed with HFSS software, where the mesh defined for finite-element calculations includes the geometry of the tank enclosure and four end-loaded dipole antennas. The material properties of the water tank enclosure and the antennas are also included in each simulation. The results of the finite-element simulations are compared to the measured values for this configuration, and the results, which include the effects of amplitude shading and phase shifting, show that the electric field predicted by finite-element simulations is similar to the measured field. Simulations also show that the contributions from standing waves are significant, which is consistent with measurement results. Simulated electric field and bio-heat transfer results are also computed within a simple 3D breast model. Temperature simulations show that, although peak temperatures are generated outside the simulated tumour target, this RF phased array applicator is an effective device for regional hyperthermia in the intact breast. PMID:16357427
Modeling and simulation of dust behaviors behind a moving vehicle
NASA Astrophysics Data System (ADS)
Wang, Jingfang
Simulation of physically realistic complex dust behaviors is a difficult and attractive problem in computer graphics. A fast, interactive and visually convincing model of dust behaviors behind moving vehicles is very useful in computer simulation, training, education, art, advertising, and entertainment. In my dissertation, an experimental interactive system has been implemented for the simulation of dust behaviors behind moving vehicles. The system includes physically-based models, particle systems, rendering engines and a graphical user interface (GUI). I have employed several vehicle models, including tanks, cars, and jeeps, to test and simulate in different scenarios and conditions. Calm weather, windy conditions, vehicle turning left or right, and vehicle simulation controlled by users from the GUI are all included. I have also tested the factors which play against the physical behaviors and graphics appearances of the dust particles through the GUI or off-line scripts. The simulations are done on a Silicon Graphics Octane station. The animation of dust behaviors is achieved by physically-based modeling and simulation. The flow around a moving vehicle is modeled using computational fluid dynamics (CFD) techniques. I implement a primitive-variable, pressure-correction approach to solve the three-dimensional incompressible Navier-Stokes equations in a volume covering the moving vehicle. An alternating-direction implicit (ADI) method is used for the solution of the momentum equations, with a successive over-relaxation (SOR) method for the solution of the Poisson pressure equation. Boundary conditions are defined and simplified according to their dynamic properties. The dust particle dynamics is modeled using particle systems, statistics, and procedural modeling techniques. Graphics and real-time simulation techniques, such as dynamics synchronization, motion blur, blending, and clipping, have been employed in the rendering to achieve realistic-appearing dust behaviors. In addition, I introduce a temporal smoothing technique to eliminate the jagged effect caused by large simulation time steps. Several algorithms are used to speed up the simulation. For example, pre-calculated tables and display lists are created to replace some of the most commonly used functions, scripts and processes. The performance study shows that both time and space costs of the algorithms are linear in the number of particles in the system. On a Silicon Graphics Octane, three vehicles with 20,000 particles run at 6-8 frames per second on average. This speed does not include the extra calculation of the numerical integration for the fluid dynamics, which usually takes about 4-5 minutes to converge to steady state.
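As a small concrete piece of the pressure-correction machinery described above, here is a 2D successive over-relaxation (SOR) iteration for a Poisson equation with homogeneous Dirichlet boundaries (Python; the grid size, relaxation factor, and right-hand side are illustrative choices, and the dissertation solves the 3D case inside an ADI time-stepping loop).

    import numpy as np

    def sor_poisson(b, h, omega=1.7, tol=1e-6, max_iter=10000):
        # Solve laplacian(p) = b on a uniform grid with p = 0 on the boundary.
        p = np.zeros_like(b)
        for it in range(max_iter):
            max_dp = 0.0
            for i in range(1, p.shape[0] - 1):
                for j in range(1, p.shape[1] - 1):
                    p_new = (1 - omega) * p[i, j] + omega * 0.25 * (
                        p[i+1, j] + p[i-1, j] + p[i, j+1] + p[i, j-1] - h*h*b[i, j])
                    max_dp = max(max_dp, abs(p_new - p[i, j]))
                    p[i, j] = p_new                    # Gauss-Seidel style in-place update
            if max_dp < tol:
                break
        return p

    rhs = np.zeros((32, 32)); rhs[16, 16] = 1.0        # point source right-hand side
    pressure = sor_poisson(rhs, h=1.0/31)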
NASA Astrophysics Data System (ADS)
Mertens, Christopher; Moyers, Michael; Walker, Steven; Tweed, John
Recent developments in NASA's High Charge and Energy Transport (HZETRN) code have included lateral broadening of primary ion beams due to small-angle multiple Coulomb scattering, and coupling of the ion-nuclear scattering interactions with energy loss and straggling. The new version of HZETRN based on Green function methods, GRNTRN, is suitable for modeling transport with both space environment and laboratory boundary conditions. Multiple scattering processes are a necessary extension to GRNTRN in order to accurately model ion beam experiments, to simulate the physical and biologically effective radiation dose, and to develop new methods and strategies for light ion radiation therapy. In this paper we compare GRNTRN simulations of proton lateral scattering distributions with beam measurements taken at Loma Linda University Medical Center. The simulated and measured lateral proton distributions are compared for a 250 MeV proton beam on aluminum, polyethylene, polystyrene, bone, iron, and lead target materials.
Flipped Learning With Simulation in Undergraduate Nursing Education.
Kim, HeaRan; Jang, YounKyoung
2017-06-01
Flipped learning has proliferated in various educational environments. This study aimed to verify the effects of flipped learning on the academic achievement, teamwork skills, and satisfaction levels of undergraduate nursing students. For the flipped learning group, simulation-based education via the flipped learning method was provided, whereas traditional simulation-based education was provided for the control group. After completion of the program, academic achievement, teamwork skills, and satisfaction levels were assessed and analyzed. The flipped learning group received higher scores on academic achievement, teamwork skills, and satisfaction levels than the control group, including in the areas of content knowledge and clinical nursing practice competency. In addition, the difference between the two groups gradually increased throughout the trial. The results of this study demonstrate the positive, statistically significant effects of the flipped learning method on simulation-based nursing education. [J Nurs Educ. 2017;56(6):329-336.]. Copyright 2017, SLACK Incorporated.
The changing face of surgical education: simulation as the new paradigm.
Scott, Daniel J; Cendan, Juan C; Pugh, Carla M; Minter, Rebecca M; Dunnington, Gary L; Kozar, Rosemary A
2008-06-15
Surgical simulation has evolved considerably over the past two decades and now plays a major role in training efforts designed to foster the acquisition of new skills and knowledge outside of the clinical environment. Numerous driving forces have fueled this fundamental change in educational methods, including concerns over patient safety and the need to maximize efficiency within the context of limited work hours and clinical exposure. The importance of simulation has been recognized by the major stakeholders in surgical education, and the Residency Review Committee has mandated that all programs implement skills training curricula in 2008. Numerous issues now face educators who must use these novel training methods. It is important that these individuals have a solid understanding of content, development, research, and implementation aspects regarding simulation. This paper highlights presentations about these topics from a panel of experts convened at the 2008 Academic Surgical Congress.
NASA Astrophysics Data System (ADS)
Fan, Ching-Lin; Lai, Hui-Lung; Chang, Jyu-Yu
2010-05-01
In this paper, we propose a novel pixel design and driving method for active-matrix organic light-emitting diode (AM-OLED) displays using low-temperature polycrystalline silicon thin-film transistors (LTPS-TFTs). The proposed threshold-voltage compensation circuit, which comprises five transistors and two capacitors, has been verified by simulations using the automatic integrated circuit modeling simulation program with integrated circuit emphasis (AIM-SPICE) to supply a uniform output current. The driving scheme of this voltage-programming method includes four periods: precharging, compensation, data input, and emission. The simulated results demonstrate excellent properties, such as a low error rate of the OLED anode voltage variation (<1%) and a high output current. The proposed pixel circuit shows high immunity to the threshold-voltage deviation characteristics of both the driving poly-Si TFT and the OLED.
Particle kinetic simulation of high altitude hypervelocity flight
NASA Technical Reports Server (NTRS)
Boyd, Iain; Haas, Brian L.
1994-01-01
Rarefied flows about hypersonic vehicles entering the upper atmosphere or through nozzles expanding into a near vacuum may only be simulated accurately with a direct simulation Monte Carlo (DSMC) method. Under this grant, researchers enhanced the models employed in the DSMC method and performed simulations in support of existing NASA projects or missions. DSMC models were developed and validated for simulating rotational, vibrational, and chemical relaxation in high-temperature flows, including effects of quantized anharmonic oscillators and temperature-dependent relaxation rates. State-of-the-art advancements were made in simulating coupled vibration-dissociation recombination for post-shock flows. Models were also developed to compute vehicle surface temperatures directly in the code rather than requiring isothermal estimates. These codes were instrumental in simulating aerobraking of NASA's Magellan spacecraft during orbital maneuvers to assess heat transfer and aerodynamic properties of the delicate satellite. NASA also depended upon simulations of entry of the Galileo probe into the atmosphere of Jupiter to provide drag and flow field information essential for accurate interpretation of an onboard experiment. Finally, the codes have been used extensively to simulate expanding nozzle flows in low-power thrusters in support of propulsion activities at NASA-Lewis. Detailed comparisons between continuum calculations and DSMC results helped to quantify the limitations of continuum CFD codes in rarefied applications.
NASA Astrophysics Data System (ADS)
Aigrain, S.; Llama, J.; Ceillier, T.; Chagas, M. L. das; Davenport, J. R. A.; García, R. A.; Hay, K. L.; Lanza, A. F.; McQuillan, A.; Mazeh, T.; de Medeiros, J. R.; Nielsen, M. B.; Reinhold, T.
2015-07-01
We present the results of a blind exercise to test the recoverability of stellar rotation and differential rotation in Kepler light curves. The simulated light curves lasted 1000 d and included activity cycles, Sun-like butterfly patterns, differential rotation and spot evolution. The range of rotation periods, activity levels and spot lifetimes was chosen to be representative of the Kepler data of solar-like stars. Of the 1000 simulated light curves, 770 were injected into actual quiescent Kepler light curves to simulate Kepler noise. The test also included five 1000-d segments of the Sun's total irradiance variations at different points in the Sun's activity cycle. Five teams took part in the blind exercise, plus two teams who participated after the content of the light curves had been released. The methods used included Lomb-Scargle periodograms and variants thereof, autocorrelation function and wavelet-based analyses, plus spot modelling to search for differential rotation. The results show that the 'overall' period is well recovered for stars exhibiting low and moderate activity levels. Most teams reported values within 10 per cent of the true value in 70 per cent of the cases. There was, however, little correlation between the reported and simulated values of the differential rotation shear, suggesting that differential rotation studies based on full-disc light curves alone need to be treated with caution, at least for solar-type stars. The simulated light curves and associated parameters are available online for the community to test their own methods.
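The simplest of the period-search methods used by the teams, the Lomb-Scargle periodogram, can be demonstrated on a synthetic spot-modulated light curve (Python with SciPy; the injected period, amplitude, and noise level are invented and far simpler than the exercise's simulated spot evolution and activity cycles).

    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(3)
    t = np.sort(rng.uniform(0, 1000, 4000))           # irregular sampling times (days)
    p_rot = 23.4                                      # injected rotation period (days)
    flux = 1.0 + 0.01*np.sin(2*np.pi*t/p_rot) + 0.002*rng.normal(size=t.size)

    periods = np.linspace(1.0, 100.0, 5000)
    power = lombscargle(t, flux - flux.mean(), 2*np.pi/periods)  # angular frequencies
    print("recovered period:", periods[np.argmax(power)], "days")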
NASA Astrophysics Data System (ADS)
Akushevich, I.; Filoti, O. F.; Ilyichev, A.; Shumeiko, N.
2012-07-01
The structure and algorithms of the Monte Carlo generator ELRADGEN 2.0 designed to simulate radiative events in polarized ep-scattering are presented. The full set of analytical expressions for the QED radiative corrections is presented and discussed in detail. Algorithmic improvements implemented to provide faster simulation of hard real photon events are described. Numerical tests show high quality of generation of photonic variables and radiatively corrected cross section. The comparison of the elastic radiative tail simulated within the kinematical conditions of the BLAST experiment at MIT BATES shows a good agreement with experimental data.
Catalogue identifier: AELO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1299
No. of bytes in distributed program, including test data, etc.: 11,348
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: All
Operating system: Any
RAM: 1 MB
Classification: 11.2, 11.4
Nature of problem: Simulation of radiative events in polarized ep-scattering.
Solution method: Monte Carlo simulation according to the distributions of the real photon kinematic variables, which are calculated by the covariant method of QED radiative correction estimation. The approach provides rather fast and accurate generation.
Running time: The simulation of 10^8 radiative events for itest:=1 takes up to 52 seconds on a Pentium(R) Dual-Core 2.00 GHz processor.
Polymorphic phase transitions: Macroscopic theory and molecular simulation.
Anwar, Jamshed; Zahn, Dirk
2017-08-01
Transformations in the solid state are of considerable interest, both for fundamental reasons and because they underpin important technological applications. The interest spans a wide spectrum of disciplines and application domains. For pharmaceuticals, a common issue is unexpected polymorphic transformation of the drug or excipient during processing or on storage, which can result in product failure. A more ambitious goal is that of exploiting the advantages of metastable polymorphs (e.g. higher solubility and dissolution rate) while ensuring their stability with respect to solid state transformation. To address these issues and to advance technology, there is an urgent need for significant insights that can only come from a detailed molecular level understanding of the involved processes. Whilst experimental approaches at best yield time- and space-averaged structural information, molecular simulation offers unprecedented, time-resolved molecular-level resolution of the processes taking place. This review aims to provide a comprehensive and critical account of state-of-the-art methods for modelling polymorph stability and transitions between solid phases. This is flanked by revisiting the associated macroscopic theoretical framework for phase transitions, including their classification, proposed molecular mechanisms, and kinetics. The simulation methods are presented in tutorial form, focusing on their application to phase transition phenomena. We describe molecular simulation studies for crystal structure prediction and polymorph screening, phase coexistence and phase diagrams, simulations of crystal-crystal transitions of various types (displacive/martensitic, reconstructive and diffusive), effects of defects, and phase stability and transitions at the nanoscale. Our selection of literature is intended to illustrate significant insights, concepts and understanding, as well as the current scope of using molecular simulations for understanding polymorphic transitions in an accessible way, rather than claiming completeness. With exciting prospects in both simulation methods development and enhancements in computer hardware, we are on the verge of accessing an unprecedented capability for designing and developing dosage forms and drug delivery systems in silico, including tackling challenges in polymorph control on a rational basis. Copyright © 2017 Elsevier B.V. All rights reserved.
Modelling and study of active vibration control for off-road vehicle
NASA Astrophysics Data System (ADS)
Zhang, Junwei; Chen, Sizhong
2014-05-01
Owing to their special working characteristics and structure, engineering machines typically lack a conventional suspension system. Consequently, operators have to endure severe vibrations which are detrimental both to their health and to the productivity of the loader. Based on displacement control, an active damping method is developed for a skid-steer loader. In this paper, the whole hydraulic system for the active damping method is modelled, including the swash plate dynamics, proportional valve, piston accumulator, pilot-operated check valve, relief valve, pump loss, and cylinder models. A new road excitation model is developed specifically for the skid-steer loader. The response of chassis vibration acceleration to road excitation is verified through simulation. The simulation result for passive accumulator damping is compared with measurements, and the comparison shows close agreement. Building on this, a parallel PID controller and a tracking PID controller with acceleration feedback are introduced into the simulation model, and the simulation results are compared with passive accumulator damping. The comparison shows that the active damping methods with PID controllers are better at reducing chassis vibration acceleration and pitch movement. Finally, experimental testing of the active damping method is proposed as future work.
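A toy version of the control idea is sketched below (Python): a PID loop on measured chassis acceleration commands a corrective actuator force on a one-degree-of-freedom chassis model. The plant parameters, gains, and road input are invented for illustration and are much simpler than the paper's full hydraulic model.

    import numpy as np

    # 1-DOF chassis (mass-spring-damper) excited by road displacement; a PID loop
    # on chassis acceleration commands a corrective actuator force u.
    m, k, c = 3000.0, 2.0e5, 4.0e3            # mass (kg), stiffness (N/m), damping (Ns/m)
    kp, ki, kd = 800.0, 50.0, 10.0            # illustrative PID gains
    dt = 1e-3
    z = zdot = u = integ = prev_err = 0.0
    acc_log = []

    for n in range(20000):
        t = n * dt
        road = 0.02 * np.sin(2.0 * np.pi * 1.5 * t)      # road displacement input (m)
        acc = (-c*zdot - k*(z - road) + u) / m           # chassis acceleration
        err = -acc                                       # regulate acceleration to zero
        integ += err * dt
        u = kp*err + ki*integ + kd*(err - prev_err)/dt   # actuator force for next step
        prev_err = err
        zdot += acc * dt                                 # semi-implicit Euler integration
        z += zdot * dt
        acc_log.append(acc)

    print("rms chassis acceleration:", np.sqrt(np.mean(np.square(acc_log))))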
Stochastic simulation by image quilting of process-based geological models
NASA Astrophysics Data System (ADS)
Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef
2017-09-01
Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time conditioning them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.
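A drastically simplified, 2D, unconditional version of image quilting conveys the core step: candidate patches from the training image are ranked by the mismatch of their overlap with what has already been simulated, and the best match is pasted. This Python sketch uses a random stand-in training image and omits the minimum-error boundary cut, 3D templates, and data conditioning that the actual method adds.

    import numpy as np

    rng = np.random.default_rng(2)
    train = rng.random((100, 100))                   # stand-in training image
    B, OV, nbx = 16, 4, 6                            # block size, overlap width, blocks per row
    out = np.zeros((B, nbx*(B - OV) + OV))

    # Pre-extract candidate patches from random training-image locations
    ys = rng.integers(0, train.shape[0] - B, 500)
    xs = rng.integers(0, train.shape[1] - B, 500)
    patches = np.stack([train[y:y+B, x:x+B] for y, x in zip(ys, xs)])

    out[:, :B] = patches[0]                          # seed with an arbitrary first patch
    for bx in range(1, nbx):
        x0 = bx * (B - OV)
        target = out[:, x0:x0+OV]                    # already-simulated overlap region
        err = ((patches[:, :, :OV] - target)**2).sum(axis=(1, 2))
        out[:, x0:x0+B] = patches[np.argmin(err)]    # paste best match (no seam cut here)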
NASA Astrophysics Data System (ADS)
Troldborg, M.; Nowak, W.; Binning, P. J.; Bjerg, P. L.
2012-12-01
Estimates of mass discharge (mass/time) are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Mass discharge estimates are, however, prone to rather large uncertainties as they integrate uncertain spatial distributions of both concentration and groundwater flow velocities. For risk assessments or any other decisions that are based on mass discharge estimates, it is essential to address these uncertainties. We present a novel Bayesian geostatistical approach for quantifying the uncertainty of the mass discharge across a multilevel control plane. The method decouples the flow and transport simulation and has the advantage of avoiding the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion. It may therefore be of practical relevance to practitioners compared to existing methods that are either too simple or computationally demanding. The method is based on conditional geostatistical simulation and accounts for i) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics (including the uncertainty in covariance functions), ii) measurement uncertainty, and iii) uncertain source zone geometry and transport parameters. The method generates multiple equally likely realizations of the spatial flow and concentration distribution, which all honour the measured data at the control plane. The flow realizations are generated by analytical co-simulation of the hydraulic conductivity and the hydraulic gradient across the control plane. These realizations are made consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed deviations from this mean solution. By combining the flow and concentration realizations, a mass discharge probability distribution is obtained. Tests show that the decoupled approach is both efficient and able to provide accurate uncertainty estimates. The method is demonstrated on a Danish field site contaminated with chlorinated ethenes. For this site, we show that including a physically meaningful concentration trend and the co-simulation of hydraulic conductivity and hydraulic gradient across the transect helps constrain the mass discharge uncertainty. The number of sampling points required for accurate mass discharge estimation and the relative influence of different data types on mass discharge uncertainty are discussed.
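The final Monte Carlo step of such a workflow, combining joint flow and concentration realizations over the control-plane cells into a mass discharge distribution, can be sketched in a few lines. In this Python sketch the lognormal conductivity, gradient, and clipped-normal concentration generators are crude stand-ins for the conditional geostatistical simulations described above, and all numbers are invented.

    import numpy as np

    rng = np.random.default_rng(7)
    n_real, n_cells, cell_area = 5000, 200, 0.5                  # cells of 0.5 m^2 each

    lnK = rng.normal(np.log(1e-4), 0.8, (n_real, n_cells))       # hydraulic conductivity (m/s)
    grad = rng.normal(0.005, 0.001, (n_real, 1))                 # hydraulic gradient (-)
    conc = np.maximum(rng.normal(20.0, 15.0, (n_real, n_cells)), 0.0)  # concentration (g/m^3)

    q = np.exp(lnK) * grad                                       # Darcy flux per cell (m/s)
    md = (q * conc * cell_area).sum(axis=1)                      # mass discharge (g/s)
    md_kg_per_year = md * 3600 * 24 * 365 / 1000.0
    print("10/50/90 percentiles (kg/yr):",
          np.percentile(md_kg_per_year, [10, 50, 90]))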
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narasimha S.
2012-01-01
In this paper, a modeling method based on data reduction is investigated, which uses pre-analyzed MERRA atmospheric fields for quantitative estimates of the uncertainties introduced in integrated-path differential absorption methods for the sensing of various molecules, including CO2. This approach extends our previously developed lidar modeling framework and allows effective on- and offline wavelength optimizations and weighting-function analysis to minimize interference effects such as those due to temperature sensitivity and water vapor absorption. The new simulation methodology differs from the previous implementation in that it allows analysis of atmospheric effects over annual spans and the entire Earth coverage, which is achieved through the data reduction methods employed. The effectiveness of the proposed simulation approach is demonstrated with application to mixing ratio retrievals for the future ASCENDS mission. Independent analysis of multiple accuracy-limiting factors, including temperature and water vapor interferences and selected system parameters, is further used to identify favorable spectral regions as well as wavelength combinations facilitating the reduction of total errors in the retrieved XCO2 values.
User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Coleman, Kayla; Hooper, Russell
2016-11-01
Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. This manual offers Consortium for Advanced Simulation of Light Water Reactors (CASL) partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem.
Role of Boundary Conditions in Monte Carlo Simulation of MEMS Devices
NASA Technical Reports Server (NTRS)
Nance, Robert P.; Hash, David B.; Hassan, H. A.
1997-01-01
A study is made of the issues surrounding prediction of microchannel flows using the direct simulation Monte Carlo method. This investigation includes the introduction and use of new inflow and outflow boundary conditions suitable for subsonic flows. A series of test simulations for a moderate-size microchannel indicates that a high degree of grid under-resolution in the streamwise direction may be tolerated without loss of accuracy. In addition, the results demonstrate the importance of physically correct boundary conditions, as well as possibilities for reducing the time associated with the transient phase of a simulation. These results imply that simulations of longer ducts may be more feasible than previously envisioned.
Solernou, Albert; Hanson, Benjamin S; Richardson, Robin A; Welch, Robert; Read, Daniel J; Harlen, Oliver G; Harris, Sarah A
2018-03-01
Fluctuating Finite Element Analysis (FFEA) is a software package designed to perform continuum mechanics simulations of proteins and other globular macromolecules. It combines conventional finite element methods with stochastic thermal noise, and is appropriate for simulations of large proteins and protein complexes at the mesoscale (length-scales in the range of 5 nm to 1 μm), where there is currently a paucity of modelling tools. It requires 3D volumetric information as input, which can be low resolution structural information such as cryo-electron tomography (cryo-ET) maps or much higher resolution atomistic co-ordinates from which volumetric information can be extracted. In this article we introduce our open source software package for performing FFEA simulations which we have released under a GPLv3 license. The software package includes a C++ implementation of FFEA, together with tools to assist the user to set up the system from Electron Microscopy Data Bank (EMDB) or Protein Data Bank (PDB) data files. We also provide a PyMOL plugin to perform basic visualisation and additional Python tools for the analysis of FFEA simulation trajectories. This manuscript provides a basic background to the FFEA method, describing the implementation of the core mechanical model and how intermolecular interactions and the solvent environment are included within this framework. We provide prospective FFEA users with a practical overview of how to set up an FFEA simulation with reference to our publicly available online tutorials and manuals that accompany this first release of the package.
NASA Astrophysics Data System (ADS)
Stellmach, Stephan; Hansen, Ulrich
2008-05-01
Numerical simulations of the process of convection and magnetic field generation in planetary cores still fail to reach geophysically realistic control parameter values. Future progress in this field depends crucially on efficient numerical algorithms which are able to take advantage of the newest generation of parallel computers. Desirable features of simulation algorithms include (1) spectral accuracy, (2) an operation count per time step that is small and roughly proportional to the number of grid points, (3) memory requirements that scale linear with resolution, (4) an implicit treatment of all linear terms including the Coriolis force, (5) the ability to treat all kinds of common boundary conditions, and (6) reasonable efficiency on massively parallel machines with tens of thousands of processors. So far, algorithms for fully self-consistent dynamo simulations in spherical shells do not achieve all these criteria simultaneously, resulting in strong restrictions on the possible resolutions. In this paper, we demonstrate that local dynamo models in which the process of convection and magnetic field generation is only simulated for a small part of a planetary core in Cartesian geometry can achieve the above goal. We propose an algorithm that fulfills the first five of the above criteria and demonstrate that a model implementation of our method on an IBM Blue Gene/L system scales impressively well for up to O(10^4) processors. This allows for numerical simulations at rather extreme parameter values.
Using Empirical Orthogonal Teleconnections to Analyze Interannual Precipitation Variability in China
NASA Astrophysics Data System (ADS)
Stephan, C.; Klingaman, N. P.; Vidale, P. L.; Turner, A. G.; Demory, M. E.; Guo, L.
2017-12-01
Interannual rainfall variability in China affects agriculture, infrastructure and water resource management. A consistent and objective method, Empirical Orthogonal Teleconnection (EOT) analysis, is applied to precipitation observations over China in all seasons. Instead of maximizing the explained space-time variance, the method identifies regions in China that best explain the temporal variability in domain-averaged rainfall. It reproduces known teleconnections, including high positive correlations with ENSO in eastern China in winter, along the Yangtze River in summer, and in southeast China during spring. New findings include that variability along the southeast coast in winter, in the Yangtze valley in spring, and in eastern China in autumn is associated with extratropical Rossby wave trains. The same analysis is applied to six climate simulations of the Met Office Unified Model with and without air-sea coupling and at horizontal resolutions of 40, 90 and 200 km. All simulations reproduce the observed patterns of interannual rainfall variability in winter, spring and autumn; the leading pattern in summer is present in all but one simulation. However, only in two simulations are all patterns associated with the observed physical mechanism. Coupled simulations capture more observed patterns of variability and associate more of them with the correct physical mechanism, compared to atmosphere-only simulations at the same resolution. Finer resolution does not improve the fidelity of these patterns or their associated mechanisms. Evaluating climate models only by the geographical distribution of mean precipitation and its interannual variance is insufficient; attention must be paid to the associated mechanisms.
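The EOT iteration itself is compact: find the grid point whose time series best explains the domain-average series, regress its signal out of every grid point, and repeat for the next mode. The sketch below (Python) runs on random stand-in data rather than the China precipitation observations, and the simple correlation criterion is one common EOT variant rather than a reproduction of the study's exact implementation.

    import numpy as np

    rng = np.random.default_rng(5)
    nt, npts = 60, 500                       # years x grid points (stand-in data)
    X = rng.normal(size=(nt, npts))

    def eot_modes(X, n_modes=3):
        X = X - X.mean(axis=0)               # remove the climatology
        modes = []
        for _ in range(n_modes):
            dbar = X.mean(axis=1)            # domain-average time series
            r2 = np.array([np.corrcoef(dbar, X[:, j])[0, 1]**2
                           for j in range(X.shape[1])])
            j0 = int(np.argmax(r2))          # base point of this EOT mode
            base = X[:, j0]
            beta = (X.T @ base) / (base @ base)   # regression of each point on base
            X = X - np.outer(base, beta)          # remove the explained signal
            modes.append((j0, base))
        return modes

    for j0, series in eot_modes(X):
        print("base point index:", j0)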
Shadrina, Maria S; English, Ann M; Peslherbe, Gilles H
2012-07-11
The diffusion of small gases to special binding sites within polypeptide matrices pivotally defines the biochemical specificity and reactivity of proteins. We investigate here explicit O2 diffusion in adult human hemoglobin (HbA) as a case study, employing the recently developed temperature-controlled locally enhanced sampling (TLES) method, and vary the parameters to greatly increase the simulation efficiency. The method is carefully validated against standard molecular dynamics (MD) simulations and available experimental structural and kinetic data on ligand diffusion in T-state deoxyHbA. The methodology provides a viable alternative to traditional MD simulations and/or potential of mean force calculations for: (i) characterizing kinetically accessible diffusion tunnels and escape routes for light ligands in porous proteins; (ii) very large systems, when realistic simulations require the inclusion of multiple subunits of a protein; and (iii) proteins that access short-lived conformations relative to the simulation time. In the case of T-state deoxyHbA, we find distinct ligand diffusion tunnels consistent with the experimentally observed disparate Xe cavities in the α- and β-subunits. We identify two distal barriers, including the distal histidine (E7), that control access to the heme. The multiple escape routes uncovered by our simulations call for a review of the current popular hypothesis on ligand escape from hemoglobin. Larger deviations from the crystal structure during simulated diffusion in isolated α- and β-subunits highlight the dampening effects of subunit interactions and the importance of including all subunits of multisubunit proteins to map realistic kinetically accessible diffusion tunnels and escape routes.