Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas
2011-12-15
The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined.
2011-01-01
Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined. PMID:22172142
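As an illustration of the kind of information a SED-ML document carries (models, modifications, simulation procedures, outputs), the following Python sketch assembles a minimal time-course description with the standard library's xml.etree. It is a hedged illustration only: the element and attribute names follow the SED-ML Level 1 specification as recalled here, while the namespace string, model source file, XPath-free report, and KiSAO algorithm identifier are placeholders that should be checked against the official schema before use.

```python
# Minimal sketch of a SED-ML-style Level 1 Version 1 time-course description.
# Element/attribute names follow the SED-ML spec as recalled; the namespace,
# model source, and KiSAO id below are illustrative placeholders.
import xml.etree.ElementTree as ET

SEDML_NS = "http://sed-ml.org/"  # L1V1 namespace as recalled; verify against the spec

root = ET.Element("sedML", xmlns=SEDML_NS, level="1", version="1")

models = ET.SubElement(root, "listOfModels")
ET.SubElement(models, "model", id="model1",
              language="urn:sedml:language:sbml",
              source="oscillator.xml")                      # placeholder model file

sims = ET.SubElement(root, "listOfSimulations")
tc = ET.SubElement(sims, "uniformTimeCourse", id="sim1",
                   initialTime="0", outputStartTime="0",
                   outputEndTime="100", numberOfPoints="1000")
ET.SubElement(tc, "algorithm", kisaoID="KISAO:0000019")     # placeholder algorithm id

tasks = ET.SubElement(root, "listOfTasks")
ET.SubElement(tasks, "task", id="task1",
              modelReference="model1", simulationReference="sim1")

outputs = ET.SubElement(root, "listOfOutputs")
ET.SubElement(outputs, "report", id="report1")              # data generators omitted for brevity

print(ET.tostring(root, encoding="unicode"))
```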
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).
Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar
2018-03-19
The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. SED-ML is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by adding means to describe which datasets, and subsets thereof, to use within a simulation experiment.
NASA Technical Reports Server (NTRS)
1974-01-01
The present form of this cardiovascular model simulates both 1-g and zero-g LBNP (lower body negative pressure) experiments and tilt experiments. In addition, the model simulates LBNP experiments at any body angle. The model is currently accessible on the Univac 1110 Time-Shared System in an interactive operational mode. Model output may be in tabular form and/or graphic form. The graphic capabilities are programmed for the Tektronix 4010 graphics terminal and the Univac 1110.
NASA Astrophysics Data System (ADS)
Colombant, Denis; Manheimer, Wallace
2008-11-01
The Krook model described in the previous talk has been incorporated into a fluid simulation. These fluid simulations are then compared with Fokker-Planck simulations and also with a recent NRL Nike experiment. We also examine several other models for electron energy transport that have been used in laser fusion research. As regards the comparison with Fokker-Planck simulation, the Krook model gives better agreement than the other models, especially in the time-asymptotic limit. As regards the NRL experiment, all models except one give reasonable agreement.
A simple analytical infiltration model for short-duration rainfall
NASA Astrophysics Data System (ADS)
Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming
2017-12-01
Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by these complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process model (SHIP model). The infiltration simulated by five models (SHIP (high), SHIP (middle), SHIP (low), Philip, and Parlange) was compared based on numerical experiments and soil column experiments. In the numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions: the absolute values of percent bias were less than 12% and the Nash-Sutcliffe efficiency values were greater than 0.83. Additionally, in the soil column experiments, the infiltration rate fluctuated within a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models simulated an infiltration range that successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and its coverage of the fluctuation range of infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
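The comparison metrics named above (percent bias and Nash-Sutcliffe efficiency) and the classical Philip model used as one of the benchmarks can be sketched in a few lines. The sketch below is a generic illustration, not the SHIP model itself; the sorptivity S, hydraulic conductivity K, time window, and noise level are arbitrary placeholder values.

```python
import numpy as np

def philip_rate(t, S, K):
    """Philip two-term infiltration rate: i(t) = 0.5*S*t**-0.5 + K."""
    return 0.5 * S / np.sqrt(t) + K

def percent_bias(sim, obs):
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def nash_sutcliffe(sim, obs):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Illustrative short-duration rainfall window (hours); S and K are placeholders.
t = np.linspace(0.05, 1.0, 20)                         # avoid the t = 0 singularity
obs = philip_rate(t, S=2.0, K=0.5) * (1 + 0.05 * np.random.randn(t.size))
sim = philip_rate(t, S=1.9, K=0.55)

print(f"PBIAS = {percent_bias(sim, obs):.1f} %, NSE = {nash_sutcliffe(sim, obs):.3f}")
```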
Development of an Implantable WBAN Path-Loss Model for Capsule Endoscopy
NASA Astrophysics Data System (ADS)
Aoyagi, Takahiro; Takizawa, Kenichi; Kobayashi, Takehiko; Takada, Jun-Ichi; Hamaguchi, Kiyoshi; Kohno, Ryuji
An implantable WBAN path-loss model for capsule endoscopy, which is used for examining digestive organs, is developed by conducting simulations and experiments. First, we performed FDTD simulations of implant WBAN propagation using a numerical human model. Second, we performed FDTD simulations of a vessel that represents the human body. Third, we performed experiments using a vessel of the same dimensions as that used in the simulations. On the basis of the results of these simulations and experiments, we propose gradient and intercept parameters for a simple in-body path-loss propagation model.
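The "gradient and intercept" form referred to above is the standard log-distance path-loss model, PL(d) = PL0 + 10·n·log10(d/d0), with PL0 the intercept at a reference distance d0 and n the gradient. The sketch below fits those two parameters by least squares; the distance and loss samples are invented placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical in-body path-loss samples: distance (mm) vs. loss (dB), placeholders.
d = np.array([20, 40, 60, 80, 100, 120], dtype=float)
pl = np.array([52.0, 61.5, 67.0, 71.0, 74.5, 77.0])

d0 = 20.0                                               # reference distance (mm)
X = np.column_stack([np.ones_like(d), 10.0 * np.log10(d / d0)])
(intercept, gradient), *_ = np.linalg.lstsq(X, pl, rcond=None)

print(f"PL(d) = {intercept:.1f} dB + {gradient:.2f} * 10*log10(d/d0)")
```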
A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O. S.; Callahan, D. A.; Cerjan, C. J.
A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.
A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments
Jones, O. S.; Callahan, D. A.; Cerjan, C. J.; ...
2012-05-29
A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.
Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ
Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow-focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others are based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published nonlinear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.
Predictions of Cockpit Simulator Experimental Outcome Using System Models
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Goka, T.
1984-01-01
This study involved predicting the outcome of a cockpit simulator experiment in which pilots used cockpit displays of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. The experiments were run on the NASA Ames Research Center multicab cockpit simulator facility. Prior to the experiments, a mathematical model of the pilot/aircraft/CDTI flight system was developed which included relative in-trail and vertical dynamics between aircraft in the approach string. This model was used to construct a digital simulation of the string dynamics, including response to initial position errors. The model was then used to predict the outcome of the in-trail following cockpit simulator experiments, including performance and sensitivity to different separation criteria. The experimental results were then used to evaluate the model and its prediction accuracy. Lessons learned in this modeling and prediction study are noted.
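To make the "string dynamics" idea concrete, the toy simulation below propagates an initial spacing error down a string of followers, each closing its spacing error with a simple proportional speed command. It is a generic illustration only; the speeds, separations, and gain are arbitrary assumptions, and the paper's pilot/aircraft/CDTI model is considerably richer.

```python
import numpy as np

n_aircraft = 5           # lead plus four followers
dt, t_end = 0.5, 300.0   # time step and horizon (s)
tau = 20.0               # spacing-correction time constant (s), placeholder
v_lead = 70.0            # lead ground speed (m/s), placeholder
spacing_ref = 3000.0     # desired in-trail separation (m), placeholder

x = np.array([0.0] + [-spacing_ref * i for i in range(1, n_aircraft)])
x[1] -= 500.0            # initial position error on the first follower
v = np.full(n_aircraft, v_lead)

for _ in np.arange(0.0, t_end, dt):
    x[0] += v_lead * dt                        # lead flies at constant speed
    for i in range(1, n_aircraft):
        err = (x[i - 1] - x[i]) - spacing_ref  # spacing error to the aircraft ahead
        v[i] = v_lead + err / tau              # simple proportional speed command
        x[i] += v[i] * dt

print("final spacing errors (m):", np.round((x[:-1] - x[1:]) - spacing_ref, 1))
```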
Comparisons of CTH simulations with measured wave profiles for simple flyer plate experiments
Thomas, S. A.; Veeser, L. R.; Turley, W. D.; ...
2016-06-13
We conducted detailed 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly used to design and analyze simple shock compression experiments. Such simple shock experiments also contain data where dynamic properties of materials are integrated together. We wished to assess how well the chosen computer hydrodynamic code could do at capturing both the simple parts of the experiments and the integral parts. We began with very simple shock experiments, in which we examined the effects of the equation of state and the compressional and tensile strength models. We increased complexity to include spallation in copper and iron and a solid-solid phase transformation in iron to assess the quality of the damage and phase transformation simulations. For experiments with a window, the response of both the sample and the window are integrated together, providing a good test of the material models. While CTH physics models are not perfect and do not reproduce all experimental details well, we find the models are useful; the simulations are adequate for understanding much of the dynamic process and for planning experiments. However, higher complexity in the simulations, such as adding in spall, led to greater differences between simulation and experiment. Lastly, this comparison of simulation to experiment may help guide future development of hydrodynamics codes so that they better capture the underlying physics.
Modelling and Simulation as a Recognizing Method in Education
ERIC Educational Resources Information Center
Stoffa, Veronika
2004-01-01
Computer animation-simulation models of complex processes and events, used as a method of instruction, can be an effective didactic device. Gaining deeper knowledge about the objects modelled helps to plan simulation experiments oriented on the processes and events researched. Animation experiments realized on multimedia computers can aid easier…
Comparison of simulator fidelity model predictions with in-simulator evaluation data
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.
1983-01-01
A full-factorial, in-simulator experiment on a single-axis, multiloop, compensatory pitch tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed-loop model of a real-time digital simulation facility. The results of the experiment, encompassing various simulation fidelity factors such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than was predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern is the large sensitivity difference for one control loader condition, as well as the model/in-simulator mismatch in the magnitude of the plant states when the other states match.
Pore-scale and continuum simulations of solute transport micromodel benchmark experiments
Oostrom, M.; Mehmani, Y.; Romero-Gomez, P.; ...
2014-06-18
Four sets of nonreactive solute transport experiments were conducted with micromodels. Each set comprised three experiments varying a single parameter: flow velocity, grain diameter, pore-aspect ratio, or flow-focusing heterogeneity. The data sets were offered to pore-scale modeling groups to test their numerical simulators. Each set consisted of two learning experiments, for which our results were made available, and one challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the transverse dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others are based on a lattice Boltzmann (LB) approach, and one used a computational fluid dynamics (CFD) technique. The PN models used the learning experiments to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used the learning experiments to appropriately discretize the spatial grid representations. For the continuum modeling, the required dispersivity input values were estimated based on published nonlinear relations between transverse dispersion coefficients and Peclet number. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in reduced dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models, which account for the micromodel geometry and underlying flow and transport physics, needed up to several days on supercomputers to resolve the more complex problems.
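The continuum-modeling step described above, deriving a dispersivity input from a published nonlinear relation between the transverse dispersion coefficient and the Peclet number, might look like the hedged sketch below. The power-law form and all coefficients are illustrative placeholders, not the specific correlation or parameter values used in the benchmark study.

```python
import numpy as np

def transverse_dispersion(Pe, Dm, a=0.05, b=0.85):
    """Illustrative power-law relation D_T/D_m = a * Pe**b (coefficients are placeholders)."""
    return Dm * a * Pe**b

Dm = 1.0e-9                                  # molecular diffusion coefficient (m^2/s), placeholder
grain_d = 300e-6                             # grain diameter (m), placeholder
velocity = np.array([1e-5, 1e-4, 1e-3])      # pore velocities (m/s), placeholders

Pe = velocity * grain_d / Dm                 # grain Peclet number
DT = transverse_dispersion(Pe, Dm)
alpha_T = DT / velocity                      # dispersivity input for the continuum model (m)

for v, p, a in zip(velocity, Pe, alpha_T):
    print(f"v = {v:.0e} m/s, Pe = {p:.0f}, alpha_T = {a:.2e} m")
```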
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.
Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar
2015-09-04
The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.
Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar
2015-06-01
The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
NASA Astrophysics Data System (ADS)
Popke, Dagmar; Bony, Sandrine; Mauritsen, Thorsten; Stevens, Bjorn
2015-04-01
Model simulations with state-of-the-art general circulation models reveal a strong disagreement concerning the simulated regional precipitation patterns and their changes with warming. The deviating precipitation response persists even when reducing the model experiment complexity to aquaplanet simulations with forced sea surface temperatures (Stevens and Bony, 2013). To assess feedbacks between clouds and radiation on precipitation responses, we analyze data from 5 models performing the aquaplanet simulations of the Clouds On Off Klima Intercomparison Experiment (COOKIE), in which the interaction of clouds and radiation is inhibited. Although cloud radiative effects are then disabled, the precipitation patterns among models are as diverse as with cloud radiative effects switched on. Disentangling differing model responses in such simplified experiments thus appears to be key to better understanding the simulated regional precipitation in more standard configurations. By analyzing the local moisture and moist static energy budgets in the COOKIE experiments we investigate likely causes for the disagreement among models. Reference: Stevens, B. and S. Bony (2013): What Are Climate Models Missing? Science, 340, 1053-1054.
Towards an Integrated Model of the NIC Layered Implosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O S; Callahan, D A; Cerjan, C J
A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-45% of the calculated yields.
Analysis of a DNA simulation model through hairpin melting experiments.
Linak, Margaret C; Dorfman, Kevin D
2010-09-28
We compare the predictions of a two-bead Brownian dynamics simulation model to melting experiments of DNA hairpins with complementary AT or GC stems and noninteracting loops in buffer A. This system emphasizes the role of stacking and hydrogen bonding energies, which are characteristics of DNA, rather than backbone bending, stiffness, and excluded volume interactions, which are generic characteristics of semiflexible polymers. By comparing high throughput data on the open-close transition of various DNA hairpins to the corresponding simulation data, we (1) establish a suitable metric to compare the simulations to experiments, (2) find a conversion between the simulation and experimental temperatures, and (3) point out several limitations of the model, including the lack of G-quartets and cross stacking effects. Our approach and experimental data can be used to validate similar coarse-grained simulation models.
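One common way to put simulated and measured hairpin open-close data on the same footing is a two-state (van't Hoff) melting curve, from which a melting temperature can be read off and used to relate simulation temperature to experimental temperature. The sketch below is that generic two-state form only, not the two-bead Brownian dynamics model; the enthalpy and melting temperature values are placeholders.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def fraction_open(T, dH, Tm):
    """Two-state van't Hoff open fraction; dH in J/mol, temperatures in K."""
    K_eq = np.exp(-(dH / R) * (1.0 / T - 1.0 / Tm))   # equals 1 at T = Tm
    return K_eq / (1.0 + K_eq)

T = np.linspace(290.0, 370.0, 9)
f = fraction_open(T, dH=200e3, Tm=330.0)              # placeholder enthalpy and Tm
for Ti, fi in zip(T, f):
    print(f"T = {Ti:5.1f} K   open fraction = {fi:.3f}")
```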
Capsule modeling of high foot implosion experiments on the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, D. S.; Kritcher, A. L.; Milovich, J. L.
This study summarizes the results of detailed, capsule-only simulations of a set of high foot implosion experiments conducted on the National Ignition Facility (NIF). These experiments span a range of ablator thicknesses, laser powers, and laser energies, and modeling these experiments as a set is important to assess whether the simulation model can reproduce the trends seen experimentally as the implosion parameters were varied. Two-dimensional (2D) simulations have been run including a number of effects, both nominal and off-nominal, such as hohlraum radiation asymmetries, surface roughness, the capsule support tent, and hot electron pre-heat. Selected three-dimensional simulations have also been run to assess the validity of the 2D axisymmetric approximation. As a composite, these simulations represent the current state of understanding of NIF high foot implosion performance using the best and most detailed computational model available. While the most detailed simulations show approximate agreement with the experimental data, it is evident that the model remains incomplete and further refinements are needed. Nevertheless, avenues for improved performance are clearly indicated.
Capsule modeling of high foot implosion experiments on the National Ignition Facility
Clark, D. S.; Kritcher, A. L.; Milovich, J. L.; ...
2017-03-21
This study summarizes the results of detailed, capsule-only simulations of a set of high foot implosion experiments conducted on the National Ignition Facility (NIF). These experiments span a range of ablator thicknesses, laser powers, and laser energies, and modeling these experiments as a set is important to assess whether the simulation model can reproduce the trends seen experimentally as the implosion parameters were varied. Two-dimensional (2D) simulations have been run including a number of effects, both nominal and off-nominal, such as hohlraum radiation asymmetries, surface roughness, the capsule support tent, and hot electron pre-heat. Selected three-dimensional simulations have also been run to assess the validity of the 2D axisymmetric approximation. As a composite, these simulations represent the current state of understanding of NIF high foot implosion performance using the best and most detailed computational model available. While the most detailed simulations show approximate agreement with the experimental data, it is evident that the model remains incomplete and further refinements are needed. Nevertheless, avenues for improved performance are clearly indicated.
Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.
2018-01-01
The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
Pasma, Jantsje H; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C
2018-01-01
The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control.
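The system-identification step described above, estimating Frequency Response Functions from the pseudorandom support-surface rotation to the body-sway response, can be sketched with standard spectral estimators. The sketch below assumes scipy is available and uses synthetic signals in place of measured platform rotation and sway; it illustrates the H1 FRF estimate only, not the IC model fit itself.

```python
import numpy as np
from scipy import signal

fs = 100.0                                   # sample rate (Hz), placeholder
t = np.arange(0.0, 200.0, 1.0 / fs)

rng = np.random.default_rng(0)
stim = rng.standard_normal(t.size)           # stand-in for a pseudorandom platform rotation
# Synthetic "body sway": stimulus passed through a 2nd-order low-pass plant plus noise.
b, a = signal.butter(2, 1.0, btype="low", fs=fs)
sway = signal.lfilter(b, a, stim) + 0.1 * rng.standard_normal(t.size)

f, P_ss = signal.welch(stim, fs=fs, nperseg=2048)        # input auto-spectrum
_, P_sy = signal.csd(stim, sway, fs=fs, nperseg=2048)    # input-output cross-spectrum
frf = P_sy / P_ss                                        # H1 estimate of the FRF

print("gain at ~0.5 Hz:", np.abs(frf[np.argmin(np.abs(f - 0.5))]).round(3))
```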
Research on numerical simulation technology about regional important pollutant diffusion of haze
NASA Astrophysics Data System (ADS)
Du, Boying; Ma, Yunfeng; Li, Qiangqiang; Wang, Qi; Hu, Qiongqiong; Bian, Yushan
2018-02-01
In order to analyze the formation of haze in Shenyang and the factors that affect the diffusion of pollutants, this paper adopts a simulation experiment based on the coupled WRF/CALPUFF numerical model. The experiment targeted PM10 in Shenyang during the period from March 1 to 8, simulating PM10 during a regionally important haze episode. More than 120 enterprises were surveyed as the point emission sources for this experiment. The simulation results were compared against data from 11 air quality monitoring points. The contribution rate of each typical enterprise to air quality was analyzed, the correctness of the simulation results was verified, and the model was then used to establish a prediction model.
Experiment and simulation for CSI: What are the missing links?
NASA Technical Reports Server (NTRS)
Belvin, W. Keith; Park, K. C.
1989-01-01
Viewgraphs on experiment and simulation for control structure interaction (CSI) are presented. Topics covered include: control structure interaction; typical control/structure interaction system; CSI problem classification; actuator/sensor models; modeling uncertainty; noise models; real-time computations; and discrete versus continuous.
NASA Astrophysics Data System (ADS)
Colombant, Denis; Manheimer, Wallace
2008-08-01
This paper incorporates the Krook model for nonlocal transport into a fluid simulation. It uses these fluid simulations to compare with Fokker-Planck simulations and also with a recent NRL NIKE [S. P. Obenschain et al., Phys. Plasmas 3, 2098 (1996)] experiment. The paper also examines several other models for electron energy transport that have been used in laser fusion research. With regard to the comparison with Fokker-Planck simulation, the Krook model gives better agreement, especially in the time-asymptotic limit. With regard to the NRL experiment, all models except one give reasonable agreement.
Systematic approach to verification and validation: High explosive burn models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph; Scovel, Christina A.
2012-04-16
Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implemented their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code, run a simulation, and generate a comparison plot showing simulated and experimental velocity gauge data. These scripts are then applied to several series of experiments and to several HE burn models. The same systematic approach is applicable to other types of material models; for example, equations of state models and material strength models.
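The automation idea described above, a data file whose header carries key experimental parameters as metadata from which a simulation input and a comparison plot are generated, might look like the hedged sketch below. The file layout, metadata keys, file names, and input-deck syntax are all invented for illustration; they are not the actual HED format or any specific hydro code's input language.

```python
# Hedged sketch: parse a hypothetical "key = value" metadata header from a gauge-data
# file, emit a toy input deck, and overlay experimental vs. simulated velocity traces.
import numpy as np
import matplotlib.pyplot as plt

def read_hed_file(path):
    meta, rows = {}, []
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):                     # header lines: "# key = value"
                key, _, value = line[1:].partition("=")
                meta[key.strip()] = value.strip()
            elif line.strip():
                rows.append([float(v) for v in line.split()])
    return meta, np.array(rows)                          # columns: time, particle velocity

def write_input_deck(meta, path="run.in"):
    with open(path, "w") as fh:                          # toy input syntax, not a real code's
        fh.write(f"explosive  {meta.get('explosive', 'UNKNOWN')}\n")
        fh.write(f"flyer_vel  {meta.get('flyer_velocity_km_s', '0.0')}\n")
        fh.write(f"gauge_pos  {meta.get('gauge_position_mm', '0.0')}\n")

meta, data = read_hed_file("shot_1234.dat")              # hypothetical file name
write_input_deck(meta)
sim = np.loadtxt("run_out.dat")                          # hypothetical simulation output
plt.plot(data[:, 0], data[:, 1], label="experiment")
plt.plot(sim[:, 0], sim[:, 1], "--", label="simulation")
plt.xlabel("time (us)"); plt.ylabel("particle velocity (km/s)"); plt.legend()
plt.savefig("comparison.png")
```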
Simulations of the modified gap experiment
NASA Astrophysics Data System (ADS)
Sutherland, Gerrit T.; Benjamin, Richard; Kooker, Douglas
2017-01-01
Modified gap experiment (test) hydrocode simulations predict the trends seen in experimental excess free surface velocity versus input pressure curves for explosives with both large and modest failure diameters. Simulations were conducted for explosive "A", an explosive with a large failure diameter, and for cast TNT, which has a modest failure diameter. Using the best available reactive rate models, the simulations predicted sustained ignition thresholds similar to experiment. This is a threshold where detonation is likely given a long enough run distance. For input pressures greater than the sustained ignition threshold pressure, the simulations predicted too little velocity for explosive "A" and too much velocity for TNT. It was found that a better comparison of experiment and simulation requires additional experimental data for both explosives. It was observed that the choice of reactive rate model for cast TNT can lead to large differences in the predicted modified gap experiment result. The cause of the difference is that the same data was not used to parameterize both models; one set of data was more shock reactive than the other.
Computer Based Simulation of Laboratory Experiments.
ERIC Educational Resources Information Center
Edward, Norrie S.
1997-01-01
Examines computer based simulations of practical laboratory experiments in engineering. Discusses the aims and achievements of lab work (cognitive, process, psychomotor, and affective); types of simulations (model building and behavioral); and the strengths and weaknesses of simulations. Describes the development of a centrifugal pump simulation,…
2017-06-01
A designed experiment to model and explore a ship-to-shore logistics process supporting dispersed units via three types of ULSs, which vary primarily in... Subject terms: systems, simulation, discrete event simulation, design of experiments, data analysis, simplekit, nearly orthogonal and balanced designs.
Ma, Jun; Liu, Lei; Ge, Sai; Xue, Qiang; Li, Jiangshan; Wan, Yong; Hui, Xinminnan
2018-03-01
A quantitative description of aerobic waste degradation is important in evaluating landfill waste stability and economic management. This research aimed to develop a coupling model to predict the degree of aerobic waste degradation. On the basis of the first-order kinetic equation and the law of conservation of mass, we first developed a coupling model of aerobic waste degradation that considers temperature, initial moisture content, and air injection volume to simulate and predict the chemical oxygen demand in the leachate. Three different laboratory experiments on aerobic waste degradation were simulated to test the model's applicability. Parameter sensitivity analyses were conducted to evaluate the reliability of the parameters. The coupling model can simulate aerobic waste degradation, and the simulations agreed with the corresponding experimental results. Comparison of experiment and simulation demonstrated that the coupling model is a new approach to predict aerobic waste degradation and can serve as a basis for selecting an economical air injection volume and appropriate management strategies in the future.
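The first-order kinetic core of such a coupling model can be sketched as below: chemical oxygen demand decays exponentially with a rate constant modified by temperature (and, in the full model, by moisture content and air injection volume). The temperature-correction form and every coefficient here are illustrative assumptions, not the calibrated model from the paper.

```python
import numpy as np

def cod_first_order(t_days, cod0, k_ref, T, T_ref=35.0, theta=1.05):
    """First-order COD decay with a modified-Arrhenius temperature correction (illustrative)."""
    k = k_ref * theta ** (T - T_ref)     # rate constant adjusted for reactor temperature
    return cod0 * np.exp(-k * t_days)

t = np.arange(0, 61, 10)                 # elapsed time (days)
for T in (25.0, 35.0, 45.0):             # reactor temperatures (deg C), placeholders
    cod = cod_first_order(t, cod0=20000.0, k_ref=0.05, T=T)
    print(f"T = {T:4.1f} C :", np.round(cod).astype(int))
```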
Numerical simulation of experiments in the Giant Planet Facility
NASA Technical Reports Server (NTRS)
Green, M. J.; Davy, W. C.
1979-01-01
Utilizing a series of existing computer codes, ablation experiments in the Giant Planet Facility are numerically simulated. Of primary importance is the simulation of the low Mach number shock layer that envelops the test model. The RASLE shock-layer code, used in the Jupiter entry probe heat-shield design, is adapted to the experimental conditions. RASLE predictions for radiative and convective heat fluxes are in good agreement with calorimeter measurements. In simulating carbonaceous ablation experiments, the RASLE code is coupled directly with the CMA material response code. For the graphite models, predicted and measured recessions agree very well. Predicted recession for the carbon phenolic models is 50% higher than that measured. This is the first time codes used for the Jupiter probe design have been compared with experiments.
TREAT Modeling and Simulation Strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeHart, Mark David
2015-09-01
This report describes a four-phase strategy for developing modeling and simulation software for the Transient Reactor Test Facility. The four phases of this research and development task are (1) full core transient calculations with feedback, (2) experiment modeling, (3) full core plus experiment simulation, and (4) quality assurance. The document describes the four phases, the relationship between these research phases, and anticipated needs within each phase.
Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration
NASA Astrophysics Data System (ADS)
Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.
2017-06-01
Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which is used to assess an explosive material's initiation behavior. Such data can be utilized to calibrate reactive flow models by running hydrocode simulations and successively adjusting model parameters until a match with experiment is achieved. Such simulations are typically performed in 1D and use a flyer impact to achieve the prescribed shock loading pressure. In this effort, a wedge experiment performed at the Army Research Lab (ARL) was modeled using CTH (SNL hydrocode) in 1D, 2D, and 3D space in order to determine if there was any justification in using simplified models. A simulation was also performed using the BCAT code (CTH companion tool) that assumes a plate impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D space and with the BCAT tool. The difference in shock profile is shown to only affect numerical predictions for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.
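Pop-plot data are conventionally fit as a straight line in log-log space, log10(x*) = a + b·log10(P), relating run-to-detonation distance x* to input shock pressure P. The sketch below fits that form to made-up points; the pressures, run distances, and resulting coefficients are placeholders, not calibrated reactive-flow parameters.

```python
import numpy as np

# Hypothetical pop-plot points: input pressure (GPa) vs. run-to-detonation distance (mm).
P = np.array([3.0, 5.0, 8.0, 12.0])
x_run = np.array([12.0, 6.5, 3.2, 1.8])

A = np.column_stack([np.ones_like(P), np.log10(P)])          # design matrix for log-log fit
(a, b), *_ = np.linalg.lstsq(A, np.log10(x_run), rcond=None)

print(f"log10(x*) = {a:.2f} + {b:.2f} * log10(P)")
print("predicted run distance at 10 GPa:", round(10 ** (a + b * np.log10(10.0)), 2), "mm")
```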
George, Stephanie M; Domire, Zachary J
2017-07-01
As the reliance on computational models to inform experiments and evaluate medical devices grows, the demand for students with modeling experience will grow. In this paper, we report on the 3-yr experience of a National Science Foundation (NSF) funded Research Experiences for Undergraduates (REU) based on the theme simulations, imaging, and modeling in biomechanics. While directly applicable to REU sites, our findings also apply to those creating other types of summer undergraduate research programs. The objective of the paper is to examine if a theme of simulations, imaging, and modeling will improve students' understanding of the important topic of modeling, provide an overall positive research experience, and provide an interdisciplinary experience. The structure of the program and the evaluation plan are described. We report on the results from 25 students over three summers from 2014 to 2016. Overall, students reported significant gains in the knowledge of modeling, research process, and graduate school based on self-reported mastery levels and open-ended qualitative responses. This theme provides students with a skill set that is adaptable to other applications illustrating the interdisciplinary nature of modeling in biomechanics. Another advantage is that students may also be able to continue working on their project following the summer experience through network connections. In conclusion, we have described the successful implementation of the theme simulation, imaging, and modeling for an REU site and the overall positive response of the student participants.
NASA Astrophysics Data System (ADS)
Solman, Silvina A.; Pessacg, Natalia L.
2012-01-01
In this study the capability of the MM5 model in simulating the main mode of intraseasonal variability during the warm season over South America is evaluated through a series of sensitivity experiments. Several 3-month simulations nested into ERA40 reanalysis were carried out using different cumulus schemes and planetary boundary layer schemes in an attempt to define the optimal combination of physical parameterizations for simulating alternating wet and dry conditions over La Plata Basin (LPB) and the South Atlantic Convergence Zone regions, respectively. The results were compared with different observational datasets and model evaluation was performed taking into account the spatial distribution of monthly precipitation and daily statistics of precipitation over the target regions. Though every experiment was able to capture the contrasting behavior of the precipitation during the simulated period, precipitation was largely underestimated, particularly over the LPB region, mainly due to a misrepresentation of the moisture flux convergence. Experiments using grid nudging of the winds above the planetary boundary layer showed a better performance compared with those in which no constraints were imposed on the regional circulation within the model domain. Overall, no single experiment was found to perform best over the entire domain and during the two contrasting months. Which experiment outperforms the others depends on the area of interest, with the simulation using the Grell (Kain-Fritsch) cumulus scheme in combination with the MRF planetary boundary layer scheme being more adequate for subtropical (tropical) latitudes. The ensemble of the sensitivity experiments showed a better performance compared with any individual experiment.
ERIC Educational Resources Information Center
Anderson, G. Ernest, Jr.
The mission of the simulation team of the Model Elementary Teacher Education Project, 1968-71, was to develop simulation tools and conduct appropriate studies of the anticipated operation of that project. The team focused on the experiences of individual students and on the resources necessary for these experiences to be reasonable. This report…
Kim, K B; Shanyfelt, L M; Hahn, D W
2006-01-01
Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed that makes use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations is reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantify dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, and animal studies, are necessary.
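The two-color normalization idea can be illustrated with a simple single-scattering estimate: for scatterers small compared with the wavelength, scattered intensity varies roughly as the inverse fourth power of wavelength, so the ratio of responses at two wavelengths cancels overall collection efficiency; in this single-scattering limit the ratio is also independent of concentration, which is precisely why a multiple-scattering Mie-based Monte Carlo model is needed to relate the ratio to turbidity in dense media. The sketch below uses only that Rayleigh approximation, with placeholder wavelengths, densities, and efficiencies, and is not the paper's MC model.

```python
import numpy as np

def rayleigh_signal(wavelength_nm, number_density, collection_eff):
    """Single-scattering Rayleigh estimate: signal ~ N * eta * lambda**-4 (arbitrary units)."""
    return number_density * collection_eff * wavelength_nm ** -4.0

lam1, lam2 = 488.0, 633.0                      # two probe wavelengths (nm), placeholders
for density in (1e9, 5e9, 1e10):               # scatterer number densities (a.u.), placeholders
    for eta in (0.8, 1.0):                     # collection efficiency drifting between measurements
        s1 = rayleigh_signal(lam1, density, eta)
        s2 = rayleigh_signal(lam2, density, eta)
        print(f"N={density:.0e}, eta={eta}: two-color ratio = {s1 / s2:.3f}")
```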
Simulation Exploration Experience 2018 Overview
NASA Technical Reports Server (NTRS)
Paglialonga, Stephen; Elfrey, Priscilla; Crues, Edwin Z.
2018-01-01
The Simulation Exploration Experience (SEE) joins students, industry, professional associations, and faculty together for an annual modeling and simulation (M&S) challenge. SEE champions collaborative collegiate-level modeling and simulation by providing a venue for students to work in highly dispersed inter-university teams to design, develop, test, and execute simulated missions associated with space exploration. Participating teams gain valuable knowledge, skills, and increased employability by working closely with industry professionals, NASA, and faculty advisors. This presentation gives an overview of the SEE and the upcoming 2018 SEE event.
System Simulation Modeling: A Case Study Illustration of the Model Development Life Cycle
Janice K. Wiedenbeck; D. Earl Kline
1994-01-01
Systems simulation modeling techniques offer a method of representing the individual elements of a manufacturing system and their interactions. By developing and experimenting with simulation models, one can obtain a better understanding of the overall physical system. Forest products industries are beginning to understand the importance of simulation modeling to help...
NASA Astrophysics Data System (ADS)
Yao, Zhixiong; Tang, Youmin; Chen, Dake; Zhou, Lei; Li, Xiaojing; Lian, Tao; Ul Islam, Siraj
2016-12-01
This study examines the possible impacts of coupling processes on simulations of the Indian Ocean Dipole (IOD). Emphasis is placed on the atmospheric model resolution and physics. Five experiments were conducted for this purpose, including one control run of the ocean-only model and four coupled experiments using two different versions of the Community Atmosphere Model (CAM4 and CAM5) and two different resolutions. The results show that the control run could effectively simulate various features of the IOD. The coupled experiments run at the higher resolution yielded a more realistic IOD period and intensity than their counterparts at the low resolution. The coupled experiments using CAM5 generally showed better simulation skill for the tropical Indian Ocean SST climatology and phase-locking than those using CAM4, but the wind anomalies were stronger and the IOD period was longer in the former experiments than in the latter. In all coupled experiments, the IOD intensity was much stronger than the observed intensity, which is attributable to the wind-thermocline depth feedback and the thermocline depth-subsurface temperature feedback. The CAM5 physics seems beneficial for the simulation of summer rainfall over the eastern equatorial Indian Ocean and the CAM4 physics tends to produce smaller biases over the western equatorial Indian Ocean, whereas the higher resolution tends to generate unrealistically strong meridional winds. The IOD-ENSO relationship was captured reasonably well in the coupled experiments, with improvements in CAM5 relative to CAM4. However, the teleconnections of the IOD-Indian summer monsoon and ENSO-Indian summer monsoon were not realistically simulated in any experiment.
NASA Astrophysics Data System (ADS)
Attada, Raju; Kumar, Prashant; Dasari, Hari Prasad
2018-04-01
Assessment of land surface models (LSMs) for monsoon studies over the Indian summer monsoon (ISM) region is essential. In this study, we evaluate the skill of LSMs at 10 km spatial resolution in simulating the 2010 monsoon season. The thermal diffusion scheme (TDS), rapid update cycle (RUC), Noah, and Noah with multi-parameterization (Noah-MP) LSMs are chosen based on their degree of complexity, ranging from a simple slab model to multi-parameterization options, coupled with the Weather Research and Forecasting (WRF) model. Model results are compared with the available in situ observations and reanalysis fields. The sensitivity of monsoon elements, surface characteristics, and vertical structures to the different LSMs is discussed. Our results reveal that the monsoon features are reproduced by the WRF model with all LSMs, but with some regional discrepancies. The model simulations with the selected LSMs are able to reproduce the broad rainfall patterns, orography-induced rainfall over the Himalayan region, and the dry zone over the southern tip of India. An unrealistic precipitation pattern over the equatorial western Indian Ocean is simulated in the WRF-LSM experiments. The spatial and temporal distributions of top 2-m soil characteristics (soil temperature and soil moisture) are well represented in the RUC and Noah-MP LSM-based experiments during the ISM. Results show that the WRF simulations with RUC, Noah, and Noah-MP LSMs significantly improved the skill for 2-m temperature and moisture compared to the TDS (chosen as a baseline) LSM-based experiments. Furthermore, the simulations with Noah, RUC, and Noah-MP LSMs exhibit minimum error in thermodynamic fields. In the case of surface wind speed, the TDS LSM performed better than the other LSM experiments. A significant improvement in simulating rainfall is noticeable for the WRF model with Noah, RUC, and Noah-MP LSMs over the TDS LSM. Thus, this study emphasizes the importance of choosing and improving LSMs for simulating ISM phenomena in a regional model.
NASA Technical Reports Server (NTRS)
Shih, Hsin-Yi; Tien, James S.; Ferkul, Paul (Technical Monitor)
2001-01-01
The recently developed numerical model of concurrent-flow flame spread over thin solids has been used as a simulation tool to support the design of a space experiment. The two-dimensional and three-dimensional, steady forms of the compressible Navier-Stokes equations with chemical reactions are solved. With the coupled multi-dimensional solver for radiative heat transfer, the model is capable of answering a number of questions regarding the experiment concept and the hardware designs. In this paper, the capabilities of the numerical model are demonstrated by providing guidance on several experiment design issues. The test matrix and operating conditions of the experiment are estimated through the modeling results. Three-dimensional calculations are made to simulate the flame-spreading experiment with a realistic hardware configuration. The computed detailed flame structures provide insight for the data collection. In addition, the heating load and the requirements for product exhaust cleanup in the flow tunnel are estimated with the model. We anticipate that using this simulation tool will enable a more efficient and successful space experiment to be conducted.
Using Voice Recognition Equipment to Run the Warfare Environmental Simulator (WES),
1981-03-01
simulations and models are often used. War games are a type of simulation frequently used by the military to evaluate C3 effectiveness. Through the use of a ... to 162 words or short phrases (Appendix B). ... For the experiment a Threshold Model T600 discrete ... The Model T600 terminal used in this experiment consists of an analog speech preprocessor, microcomputer, CRT/keyboard unit, and magnetic tape cartridge unit.
NASA Technical Reports Server (NTRS)
Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga P.; Stephens, Philip; Wilson, Brian D.; Akopian, Vardan; Komjathy, Attila; Iijima, Byron A.
2013-01-01
ISOGAME is designed and developed to assess quantitatively the impact of new observation systems on the capability of imaging and modeling the ionosphere. With ISOGAME, one can perform observation system simulation experiments (OSSEs). A typical OSSE using ISOGAME would involve: (1) simulating various ionospheric conditions on global scales; (2) simulating ionospheric measurements made from a constellation of low-Earth-orbiters (LEOs), particularly Global Navigation Satellite System (GNSS) radio occultation data, and from ground-based global GNSS networks; (3) conducting ionospheric data assimilation experiments with the Global Assimilative Ionospheric Model (GAIM); and (4) analyzing modeling results with visualization tools. ISOGAME can provide a quantitative assessment of the accuracy of assimilative modeling with the observation system of interest. Observation systems other than those based on GNSS can also be analyzed. The system is composed of a suite of software that combines the GAIM, including a 4D first-principles ionospheric model and data assimilation modules, the International Reference Ionosphere (IRI) model developed by the international ionospheric research community, an observation simulator, visualization software, and orbit design, simulation, and optimization software. The core GAIM model used in ISOGAME is based on the GAIM++ code (written in C++) that includes a new high-fidelity geomagnetic field representation (multi-dipole). New visualization tools and analysis algorithms for the OSSEs are now part of ISOGAME.
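The four-step OSSE workflow outlined above can be reduced to a generic loop: generate a "truth" state, synthesize noisy observations from it, assimilate them into a biased background, and compare analysis and background errors. The following minimal Python sketch illustrates that loop; all function names, the toy optimal-interpolation update, and the sampling pattern are hypothetical placeholders, not the ISOGAME or GAIM interfaces.

```python
# Minimal sketch of an observation system simulation experiment (OSSE) loop.
# All names are hypothetical placeholders, not the ISOGAME/GAIM API.
import numpy as np

def nature_run(n_grid, seed=0):
    """Stand-in for a 'truth' ionospheric state (e.g., an electron density field)."""
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=0.0, sigma=0.3, size=n_grid)

def simulate_observations(truth, obs_idx, noise_std, seed=1):
    """Sample the truth at observation locations and add instrument noise."""
    rng = np.random.default_rng(seed)
    return truth[obs_idx] + rng.normal(0.0, noise_std, size=len(obs_idx))

def assimilate(background, obs, obs_idx, obs_std, bg_std):
    """Toy scalar optimal-interpolation update at observed grid points only."""
    analysis = background.copy()
    gain = bg_std**2 / (bg_std**2 + obs_std**2)
    analysis[obs_idx] += gain * (obs - background[obs_idx])
    return analysis

truth = nature_run(n_grid=500)
background = truth * (1.0 + 0.2 * np.random.default_rng(2).standard_normal(500))
obs_idx = np.arange(0, 500, 10)          # hypothetical GNSS-like sampling pattern
obs = simulate_observations(truth, obs_idx, noise_std=0.05)
analysis = assimilate(background, obs, obs_idx, obs_std=0.05, bg_std=0.2)

print("background RMSE:", np.sqrt(np.mean((background - truth)**2)))
print("analysis RMSE:  ", np.sqrt(np.mean((analysis - truth)**2)))
```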
A Comparison of Two Balance Calibration Model Building Methods
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Ulbrich, Norbert
2007-01-01
Simulated strain-gage balance calibration data are used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis to construct a model for the analysis. Four balance calibration data sets were simulated to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced into the simulated calibration data sets to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, the predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
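As an illustration of the kind of model-building comparison described above, the following Python sketch fits candidate regression models to simulated calibration data containing random and systematic errors and selects one by an information criterion. It is a hedged sketch only; the factor names and the exhaustive AIC search are stand-ins, not the candidate-model search or stepwise-regression algorithms evaluated in the paper.

```python
# Toy illustration of selecting a calibration math model from simulated
# strain-gage balance data by comparing candidate regressions with an AIC
# penalty.  Not the algorithms compared in the paper; a stand-in only.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 200
load = rng.uniform(-1.0, 1.0, size=(n, 2))                   # two applied load components
response = (1.0 + 2.0*load[:, 0] - 0.5*load[:, 1]
            + 0.3*load[:, 0]*load[:, 1]
            + rng.normal(0.0, 0.02, size=n)                   # random error
            + 0.05*np.sign(load[:, 0]))                       # crude systematic error

terms = {
    "1":     np.ones(n),
    "F1":    load[:, 0],
    "F2":    load[:, 1],
    "F1*F2": load[:, 0]*load[:, 1],
    "F1^2":  load[:, 0]**2,
    "F2^2":  load[:, 1]**2,
}

def aic(selected):
    """Akaike information criterion of an ordinary least-squares fit."""
    X = np.column_stack([terms[t] for t in selected])
    coef, *_ = np.linalg.lstsq(X, response, rcond=None)
    rss = np.sum((response - X @ coef)**2)
    return n*np.log(rss/n) + 2*X.shape[1]

candidates = [("1",) + c
              for r in range(1, 6)
              for c in itertools.combinations(["F1", "F2", "F1*F2", "F1^2", "F2^2"], r)]
best = min(candidates, key=aic)
print("selected terms:", best, " AIC:", round(aic(best), 1))
```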
NASA Astrophysics Data System (ADS)
Shirley, Rachel Elizabeth
Nuclear power plant (NPP) simulators are proliferating in academic research institutions and national laboratories in response to the availability of affordable, digital simulator platforms. Accompanying the new research facilities is a renewed interest in using data collected in NPP simulators for Human Reliability Analysis (HRA) research. An experiment conducted in The Ohio State University (OSU) NPP Simulator Facility develops data collection methods and analytical tools to improve use of simulator data in HRA. In the pilot experiment, student operators respond to design basis accidents in the OSU NPP Simulator Facility. Thirty-three undergraduate and graduate engineering students participated in the research. Following each accident scenario, student operators completed a survey about perceived simulator biases and watched a video of the scenario. During the video, they periodically recorded their perceived strength of significant Performance Shaping Factors (PSFs) such as Stress. This dissertation reviews three aspects of simulator-based research using the data collected in the OSU NPP Simulator Facility: First, a qualitative comparison of student operator performance to computer simulations of expected operator performance generated by the Information Decision Action Crew (IDAC) HRA method. Areas of comparison include procedure steps, timing of operator actions, and PSFs. Second, development of a quantitative model of the simulator bias introduced by the simulator environment. Two types of bias are defined: Environmental Bias and Motivational Bias. This research examines Motivational Bias--that is, the effect of the simulator environment on an operator's motivations, goals, and priorities. A bias causal map is introduced to model motivational bias interactions in the OSU experiment. Data collected in the OSU NPP Simulator Facility are analyzed using Structural Equation Modeling (SEM). Data include crew characteristics, operator surveys, and time to recognize and diagnose the accident in the scenario. These models estimate how the effects of the scenario conditions are mediated by simulator bias, and demonstrate how to quantify the strength of the simulator bias. Third, development of a quantitative model of subjective PSFs based on objective data (plant parameters, alarms, etc.) and PSF values reported by student operators. The objective PSF model is based on the PSF network in the IDAC HRA method. The final model is a mixed effects Bayesian hierarchical linear regression model. The subjective PSF model includes three factors: The Environmental PSF, the simulator Bias, and the Context. The Environmental Bias is mediated by an operator sensitivity coefficient that captures the variation in operator reactions to plant conditions. The data collected in the pilot experiments are not expected to reflect professional NPP operator performance, because the students are still novice operators. However, the models used in this research and the methods developed to analyze them demonstrate how to consider simulator bias in experiment design and how to use simulator data to enhance the technical basis of a complex HRA method. The contributions of the research include a framework for discussing simulator bias, a quantitative method for estimating simulator bias, a method for obtaining operator-reported PSF values, and a quantitative method for incorporating the variability in operator perception into PSF models. 
The research demonstrates applications of Structural Equation Modeling and hierarchical Bayesian linear regression models in HRA. Finally, the research demonstrates the benefits of using student operators as a test platform for HRA research.
Fatigue Damage of Collagenous Tissues: Experiment, Modeling and Simulation Studies
Martin, Caitlin; Sun, Wei
2017-01-01
Mechanical fatigue damage is a critical issue for soft tissues and tissue-derived materials, particularly for musculoskeletal and cardiovascular applications; yet, our understanding of the fatigue damage process is incomplete. Soft tissue fatigue experiments are often difficult and time-consuming to perform, which has hindered progress in this area. However, the recent development of soft-tissue fatigue-damage constitutive models has enabled simulation-based fatigue analyses of tissues under various conditions. Computational simulations facilitate highly controlled and quantitative analyses to study the distinct effects of various loading conditions and design features on tissue durability; thus, they are advantageous over complex fatigue experiments. Although significant work to calibrate the constitutive models from fatigue experiments and to validate predictability remains, further development in these areas will add to our knowledge of soft-tissue fatigue damage and will facilitate the design of durable treatments and devices. In this review, the experimental, modeling, and simulation efforts to study collagenous tissue fatigue damage are summarized and critically assessed. PMID:25955007
Modeling of turbulent separated flows for aerodynamic applications
NASA Technical Reports Server (NTRS)
Marvin, J. G.
1983-01-01
Numerical simulations of steady, high-speed, compressible separated flows, obtained from solutions of the mass-averaged Navier-Stokes equations, are reviewed. Emphasis is placed on benchmark flows that represent simplified (but realistic) aerodynamic phenomena. These include impinging shock waves, compression corners, glancing shock waves, trailing edge regions, and supersonic high angle of attack flows. A critical assessment of modeling capabilities is provided by comparing the numerical simulations with experiment. The importance of combining experiment, numerical algorithm, grid, and turbulence model to effectively develop this potentially powerful simulation technique is stressed.
Parallel discrete event simulation using shared memory
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1988-01-01
With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments, using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues, is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
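The core rule of Chandy-Misra conservative synchronization is that a logical process (LP) may only consume the event with the smallest timestamp across its input channels. The sketch below illustrates that rule on a feedforward two-source, one-queue topology, so null messages for deadlock avoidance are unnecessary; it is a simplified sequential emulation rather than a shared-memory implementation.

```python
# Minimal sketch of the conservative (Chandy-Misra style) synchronization rule:
# an LP may only consume the event with the smallest timestamp across all of
# its input channels.  Feedforward topology, sequential emulation only.
import random

def make_arrivals(rate, n, seed):
    rng = random.Random(seed)
    t, out = 0.0, []
    for _ in range(n):
        t += rng.expovariate(rate)
        out.append(t)
    return out                                    # an input channel: sorted timestamps

chan_a = make_arrivals(rate=0.6, n=500, seed=2)   # source LP A
chan_b = make_arrivals(rate=0.4, n=500, seed=3)   # source LP B

# Queue LP: repeatedly consume the channel whose head has the smaller timestamp.
random.seed(1)
ia = ib = 0
server_free, served, clock = 0.0, 0, 0.0
while ia < len(chan_a) and ib < len(chan_b):      # block if any channel is empty
    if chan_a[ia] <= chan_b[ib]:
        ts, ia = chan_a[ia], ia + 1
    else:
        ts, ib = chan_b[ib], ib + 1
    assert ts >= clock                            # causality is never violated
    clock = ts
    start = max(ts, server_free)
    server_free = start + random.expovariate(1.5) # exponential service, rate 1.5
    served += 1

print(f"served {served} jobs; queue LP logical clock = {clock:.1f}")
```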
NASA Technical Reports Server (NTRS)
Bremmer, D. A.
1986-01-01
The feasibility of some off-the-shelf microprocessors and state-of-the-art software is assessed (1) as a development system for the principal investigator (PI) in the design of the experiment model, (2) as an example of available technology application for future PIs' experiments, (3) as a system capable of being interactive in the PCTC's simulation of the dedicated experiment processor (DEP), preferably by bringing the PI's DEP software directly into the simulation model, (4) as a system having bus compatibility with host VAX simulation computers, (5) as a system readily interfaced with mock-up panels and information displays, and (6) as a functional system for post-mission data analysis.
Simulation System Fidelity Assessment at the Vertical Motion Simulator
NASA Technical Reports Server (NTRS)
Beard, Steven D.; Reardon, Scott E.; Tobias, Eric L.; Aponso, Bimal L.
2013-01-01
Fidelity is a word that is often used but rarely understood when talking about ground-based simulation. Assessing the cueing fidelity of a ground-based flight simulator requires a comparison to actual flight data, either directly or indirectly. Two experiments were conducted at the Vertical Motion Simulator using the GenHel UH-60A Black Hawk helicopter math model, which was directly compared to flight data. Prior to the experiments, the simulator's motion and visual system frequency responses were measured, the aircraft math model was adjusted to account for the simulator motion system delays, and the motion system gains and washouts were tuned for the individual tasks. The tuned motion system fidelity was then assessed against the modified Sinacori criteria. The first experiment showed handling qualities ratings (HQRs) similar to actual flight for bob-up and sidestep maneuvers. The second experiment showed equivalent HQRs between flight and simulation for the ADS-33 slalom maneuver for the two pilot participants. The ADS-33 vertical maneuver HQRs were mixed, with one pilot rating the flight and simulation the same while the second pilot rated the simulation worse. In addition to recording HQRs in the second experiment, an experimental Simulation Fidelity Rating (SFR) scale developed by the University of Liverpool was tested for applicability to engineering simulators. A discussion of the SFR scale for use on the Vertical Motion Simulator is included in this paper.
NASA Astrophysics Data System (ADS)
Zhang, Huqiang; Zhao, Y.; Moise, A.; Ye, H.; Colman, R.; Roff, G.; Zhao, M.
2018-02-01
Significant uncertainty exists in regional climate change projections, particularly for rainfall and other hydro-climate variables. In this study, we conduct a series of Atmospheric General Circulation Model (AGCM) experiments with different future sea surface temperature (SST) warming patterns simulated by a range of coupled climate models. They allow us to assess the extent to which uncertainty in current coupled climate model rainfall projections can be attributed to their simulated SST warming. Nine CMIP5 model-simulated global SST warming anomalies have been superimposed onto the current SSTs simulated by the Australian climate model ACCESS1.3. The ACCESS1.3 SST-forced experiments closely reproduce the rainfall means and interannual variations of its own fully coupled experiments. Although different global SST warming intensities explain well the inter-model difference in global mean precipitation changes, at regional scales the SST influence varies significantly. SST warming explains about 20-25% of the patterns of precipitation changes over the oceans in the Indo-Pacific domain in four to five of the models, but there are also a couple of models in which the different SST warming explains little of their precipitation pattern changes. The influence is weaker again for rainfall changes over land. Roughly similar levels of contribution can be attributed to the different atmospheric responses to SST warming in these models. The weak SST influence in our study could be due to the experimental setup applied: superimposing different SST warming anomalies onto the same SSTs simulated for the current climate by ACCESS1.3, rather than directly using model-simulated past and future SSTs. Similar modelling and analysis from other modelling groups, with more carefully designed experiments, are needed to tease out uncertainties caused by different SST warming patterns, different SST mean biases, and different model physical/dynamical responses to the same underlying SST forcing.
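The experimental setup described above, superimposing each CMIP5 model's SST warming anomaly onto the ACCESS1.3 present-day SSTs, amounts to simple field arithmetic. A minimal sketch, with a purely illustrative grid and placeholder arrays:

```python
# Sketch of the SST forcing construction described above: the warming anomaly
# of each CMIP5 model is added to the present-day SST field simulated by
# ACCESS1.3.  Grid size and fields are hypothetical placeholders.
import numpy as np

nlat, nlon = 145, 192                                   # illustrative grid only
sst_access_present = 300.0 + np.zeros((nlat, nlon))     # stand-in field [K]

def build_forcing_sst(sst_future_model, sst_hist_model, sst_base=sst_access_present):
    """SST boundary condition = ACCESS1.3 present-day SST + model warming anomaly."""
    anomaly = sst_future_model - sst_hist_model
    return sst_base + anomaly

# In practice this would be looped over the nine CMIP5 models' historical and
# future SST climatologies, month by month.
```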
NRL 1989 Beam Propagation Studies in Support of the ATA Multi-Pulse Propagation Experiment
1990-08-31
papers presented here were all written prior to the completion of the experiment. The first of these papers presents simulation results which modeled ... beam stability and channel evolution for an entire five-pulse burst. The second paper describes a new air chemistry model used in the SARLAC ... Experiment: A new air chemistry model for use in the propagation codes simulating the MPPE was developed by making analytic fits to benchmark runs with ...
NASA Astrophysics Data System (ADS)
Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi
2016-08-01
The simulation performance over complex building clusters of a wind simulation model (Wind Information Field Fast Analysis model, WIFFA) within a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data, including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model can reproduce the vortices triggered by urban buildings well, and the flow patterns in urban street canyons and building clusters can also be represented. Owing to the complex shapes of the buildings and their distributions, the simulation deviations from the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields for a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min when run on a personal computer.
Landry, Guillaume; Reniers, Brigitte; Granton, Patrick Vincent; van Rooijen, Bart; Beaulieu, Luc; Wildberger, Joachim E; Verhaegen, Frank
2011-09-01
Dual energy CT (DECT) imaging can provide both the electron density ρ(e) and the effective atomic number Z(eff), thus facilitating tissue type identification. This paper investigates the accuracy of a dual source DECT scanner by means of measurements and simulations. Previous simulation work suggested improved Monte Carlo dose calculation accuracy compared to single energy CT for low energy photon brachytherapy, but lacked validation. As such, we aim to validate our DECT simulation model in this work. A cylindrical phantom containing tissue mimicking inserts was scanned with a second generation dual source scanner (SOMATOM Definition FLASH) to obtain Z(eff) and ρ(e). A model of the scanner was designed in ImaSim, a CT simulation program, and was used to simulate the experiment. The accuracy of the measured Z(eff) (labelled Z) was found to vary from -10% to 10% from low- to high-Z tissue substitutes, while the accuracy of ρ(e) from DECT was about 2.5%. Our simulation reproduced the experiments within ±5% for both Z and ρ(e). A clinical DECT scanner was able to extract Z and ρ(e) of tissue substitutes. Our simulation tool replicates the experiments within a reasonable accuracy.
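For context, the quantities extracted in such studies are commonly defined through standard mixture relations: the electron density from elemental mass fractions, and the effective atomic number from a power-law, electron-fraction-weighted form. The sketch below uses these textbook formulas with water as a check case; the exponent and the scanner's actual DECT extraction method may differ from those used in the study.

```python
# Standard textbook relations for the electron density and the power-law
# effective atomic number of a mixture, often used for tissue substitutes.
# Illustrative only; not necessarily the calibration used by the scanner or ImaSim.
import numpy as np

AVOGADRO = 6.02214076e23  # 1/mol

def electron_density(rho, weights, Z, A):
    """rho [g/cm^3], elemental mass fractions, atomic numbers, atomic masses."""
    w, Z, A = map(np.asarray, (weights, Z, A))
    return rho * AVOGADRO * np.sum(w * Z / A)          # electrons per cm^3

def effective_Z(weights, Z, A, m=3.1):
    """Power-law effective atomic number using electron-fraction weighting."""
    w, Z, A = map(np.asarray, (weights, Z, A))
    electron_frac = (w * Z / A) / np.sum(w * Z / A)
    return np.sum(electron_frac * Z**m) ** (1.0 / m)

# Water (H2O) as a check case: mass fractions of H and O.
w = [0.1119, 0.8881]
Z = [1, 8]
A = [1.008, 15.999]
print("rho_e(water) =", electron_density(1.0, w, Z, A), "e/cm^3")   # ~3.34e23
print("Z_eff(water) =", round(effective_Z(w, Z, A), 2))             # ~7.4-7.5
```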
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romander, C. M.; Cagliostro, D. J.
Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-sec hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, an upper internals structure (UIS), and, in the more complex models SM 4 and SM 5, a Ni 200 thermal liner and core support structure. Water simulated the liquid sodium coolant and a low-density explosive simulated the HCDA loads.
Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.
2017-01-01
Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.
General purpose simulation system of the data management system for Space Shuttle mission 18
NASA Technical Reports Server (NTRS)
Bengtson, N. M.; Mellichamp, J. M.; Smith, O. C.
1976-01-01
A simulation program for the flow of data through the Data Management System of Spacelab and Space Shuttle is presented. Science, engineering, command, and guidance, navigation and control (GN&C) data are included. The programming language used was General Purpose Simulation System V (OS). The science and engineering data flow was modeled from its origin at the experiments and subsystems to transmission from the Space Shuttle. Command data flow was modeled from the point of reception onboard and from the CDMS Control Panel to the experiments and subsystems. The GN&C data flow model handled data between the General Purpose Computer and the experiments and subsystems. Mission 18 was the particular flight chosen for simulation. The general structure of the program is presented, followed by a user's manual. Input data required to make runs are discussed, followed by identification of the output statistics. The appendices contain a detailed model configuration, program listing, and results.
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
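The Monte Carlo kernel underlying such stochastic simulation workflows is the Gillespie stochastic simulation algorithm (SSA). The following minimal, well-mixed birth-death sketch illustrates it; it is not the PyURDME or MOLNs API, which target spatial reaction-diffusion problems on unstructured meshes and distributed execution.

```python
# Minimal well-mixed Gillespie SSA sketch for a birth-death process, to
# illustrate the Monte Carlo kernel behind stochastic simulation workflows.
# NOT the PyURDME/MOLNs API; those tools handle spatial reaction-diffusion.
import random

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0, seed=0):
    rng = random.Random(seed)
    t, x, trajectory = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a1, a2 = k_birth, k_death * x         # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)              # waiting time to the next reaction
        if rng.random() * a0 < a1:            # choose which reaction fires
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory

traj = ssa_birth_death()
print("final copy number:", traj[-1][1], " (steady-state mean ~ k_birth/k_death = 100)")
```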
Models and Measurements Intercomparison 2
NASA Technical Reports Server (NTRS)
Park, Jae H. (Editor); Ko, Malcolm K. W. (Editor); Jackman, Charles H. (Editor); Plumb, R. Alan (Editor); Kaye, Jack A. (Editor); Sage, Karen H. (Editor)
1999-01-01
Models and Measurements Intercomparison II (MM II) summarizes the intercomparison of results from model simulations and observations of stratospheric species. Representatives from twenty-three modeling groups using twenty-nine models participated in these MM II exercises between 1996 and 1999. Twelve of the models were two-dimensional zonal-mean models, while seventeen were three-dimensional models. This was an international effort, as seven were from outside the United States. Six transport experiments and five chemistry experiments were designed for the various models. Models participating in the transport experiments performed simulations of chemically inert tracers, providing diagnostics for transport. The chemistry experiments involved simulating the distributions of chemically active trace gases including ozone. The model run conditions for dynamics and chemistry were prescribed in order to minimize the factors that caused differences among the models. The report includes a critical review of the results by the participants and a discussion of the causes of differences between modeled and measured results as well as between results from different models. A sizable effort went into preparation of the database of observations. This included a new climatology for ozone. The report should help in evaluating the results from various predictive models for assessing humankind's perturbations of the stratosphere.
NASA Astrophysics Data System (ADS)
Huang, Shih-Chieh Douglas
In this dissertation, I investigate the effects of a grounded learning experience on college students' mental models of physics systems. The grounded learning experience consisted of a priming stage and an instruction stage, and within each stage, one of two different types of visuo-haptic representation was applied: visuo-gestural simulation (visual modality and gestures) and visuo-haptic simulation (visual modality, gestures, and somatosensory information). A pilot study involving N = 23 college students examined how using different types of visuo-haptic representation in instruction affected people's mental model construction for physics systems. Participants' abilities to construct mental models were operationalized through their pretest-to-posttest gain scores for a basic physics system and their performance on a transfer task involving an advanced physics system. Findings from this pilot study revealed that, while both simulations significantly improved participants' mental model construction for physics systems, visuo-haptic simulation was significantly better than visuo-gestural simulation. In addition, clinical interviews suggested that participants' mental model construction for physics systems benefited from receiving visuo-haptic simulation in a tutorial prior to the instruction stage. A dissertation study involving N = 96 college students examined how types of visuo-haptic representation in different applications support participants' mental model construction for physics systems. Participants' abilities to construct mental models were again operationalized through their pretest-to-posttest gain scores for a basic physics system and their performance on a transfer task involving an advanced physics system. Participants' physics misconceptions were also measured before and after the grounded learning experience. Findings from this dissertation study revealed not only that visuo-haptic simulation was significantly more effective than visuo-gestural simulation in promoting mental model construction and remedying participants' physics misconceptions, but also that visuo-haptic simulation was more effective during the priming stage than during the instruction stage. Interestingly, the effects of visuo-haptic simulation in priming and visuo-haptic simulation in instruction on participants' pretest-to-posttest gain scores for a basic physics system appeared additive. These results suggested that visuo-haptic simulation is effective in physics learning, especially when it is used during the priming stage.
Parallel discrete event simulation: A shared memory approach
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1987-01-01
With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
Revisiting the horizontal redistribution of water in soils: Experiments and numerical modeling.
Zhuang, L; Hassanizadeh, S M; Kleingeld, P J; van Genuchten, M Th
2017-09-01
A series of experiments and related numerical simulations were carried out to study one-dimensional water redistribution processes in an unsaturated soil. A long horizontal Plexiglas box was packed as homogeneously as possible with sand. The sandbox was divided into two sections using a very thin metal plate, with one section initially fully saturated and the other section only partially saturated. The initial saturation in the dry section was set to 0.2, 0.4, or 0.6 in three different experiments. Redistribution between the wet and dry sections started as soon as the metal plate was removed. Changes in water saturation at various locations along the sandbox were measured as a function of time using a dual-energy gamma system. Air and water pressures were also measured as a function of time using two different kinds of tensiometers at various locations. The saturation discontinuity was found to persist throughout the experiments, while the observed water pressures became continuous immediately after the experiments started. Two models, the standard Richards equation and an interfacial area model, were used to simulate the experiments. Both models showed some deviations between the simulated water pressures and the measured data at early times during redistribution. The standard model could only simulate the observed saturation distributions reasonably well for the experiment with the lowest initial water saturation in the dry section. The interfacial area model could reproduce the observed saturation distributions of all three experiments, albeit by fitting one of the parameters in the surface area production term.
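For reference, the "standard model" referred to above is the Richards equation, which for horizontal (gravity-free) redistribution reduces to the form below; the interfacial area model adds further balance equations for the specific interfacial area that are not shown.

```latex
% Horizontal (gravity-free) form of the Richards equation used as the standard
% model; theta is the volumetric water content, h the water pressure head, and
% K(h) the unsaturated hydraulic conductivity.
\frac{\partial \theta}{\partial t}
  = \frac{\partial}{\partial x}\!\left[ K(h)\,\frac{\partial h}{\partial x} \right],
\qquad \theta = \theta(h)
```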
The Design and the Formative Evaluation of a Web-Based Course for Simulation Analysis Experiences
ERIC Educational Resources Information Center
Tao, Yu-Hui; Guo, Shin-Ming; Lu, Ya-Hui
2006-01-01
Simulation output analysis has received little attention compared to modeling and programming in real-world simulation applications. This is further evidenced by our observation that students and beginners acquire neither adequate details of knowledge nor relevant experience of simulation output analysis in traditional classroom learning. With…
NASA Astrophysics Data System (ADS)
Watanabe, S.; Kim, H.; Utsumi, N.
2017-12-01
This study aims to develop a new approach for projecting hydrology under climate change using super-ensemble experiments. The use of multiple ensembles is essential for estimating extremes, which is a major issue in the impact assessment of climate change; hence, super-ensemble experiments have recently been conducted by several research programs. While it is necessary to use multiple ensembles, running a hydrological simulation for each output of the ensemble simulations entails considerable computational cost. To use the super-ensemble experiments effectively, we adopt a strategy of directly using the runoff projected by the climate models. The general approach to hydrological projection is to run hydrological model simulations, including land-surface and river routing processes, using atmospheric boundary conditions projected by climate models as inputs. This study, on the other hand, runs only the river routing model, using runoff projected by climate models. In general, climate model output is systematically biased, so preprocessing that corrects such bias is necessary for impact assessments. Various bias correction methods have been proposed, but, to the best of our knowledge, no method has been proposed for variables other than surface meteorology. Here, we newly propose a method for utilizing the projected future runoff directly. The developed method estimates and corrects the bias based on pseudo-observations obtained from a retrospective offline simulation. We show an application of this approach to the super-ensemble experiments conducted under the program of Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI). More than 400 ensemble experiments from multiple climate models are available. The results of the validation using historical simulations by HAPPI indicate that the output of this approach can effectively reproduce retrospective runoff variability. Likewise, the bias of runoff from super-ensemble climate projections is corrected, and the impact of climate change on hydrologic extremes is assessed in a cost-efficient way.
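As an illustration of correcting simulated runoff against a pseudo-observation baseline, the sketch below applies simple empirical quantile mapping. This is a generic bias-correction technique shown for orientation only; the method actually applied to the HAPPI ensembles may differ in detail.

```python
# Illustrative empirical quantile mapping of simulated runoff against a
# "pseudo-observation" baseline (e.g., a retrospective offline simulation).
# A generic sketch, not necessarily the bias-correction method of the study.
import numpy as np

def quantile_map(sim_hist, pseudo_obs, sim_future, n_quantiles=100):
    """Map future simulated values through the historical sim->obs quantile relation."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    sim_q = np.quantile(sim_hist, q)
    obs_q = np.quantile(pseudo_obs, q)
    ranks = np.interp(sim_future, sim_q, q)   # rank within the historical simulation
    return np.interp(ranks, q, obs_q)         # remap onto the observed distribution

rng = np.random.default_rng(0)
pseudo_obs = rng.gamma(shape=2.0, scale=1.0, size=5000)   # baseline runoff
sim_hist   = rng.gamma(shape=2.0, scale=1.3, size=5000)   # biased (too wet) hindcast
sim_future = rng.gamma(shape=2.0, scale=1.5, size=5000)   # projected runoff

corrected = quantile_map(sim_hist, pseudo_obs, sim_future)
print("raw future mean:", sim_future.mean().round(2),
      " corrected mean:", corrected.mean().round(2))
```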
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Haomin; Solberg, Jerome; Merzari, Elia
This paper describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP, developed at Argonne National Laboratory in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 by using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces is sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used, which means that pressure loads from the fluid simulation are transferred to the structural simulation but the resulting structural displacements are not fed back to the fluid simulation.
Yuan, Haomin; Solberg, Jerome; Merzari, Elia; ...
2017-08-01
This study describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP, developed at Argonne National Laboratory in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 by using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces is sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used, which means that pressure loads from the fluid simulation are transferred to the structural simulation but the resulting structural displacements are not fed back to the fluid simulation.
Kravitz, Benjamin S.; Robock, Alan; Tilmes, S.; ...
2015-10-27
We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more long wave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. In conclusion, this is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.
Kinetic modeling of x-ray laser-driven solid Al plasmas via particle-in-cell simulation
NASA Astrophysics Data System (ADS)
Royle, R.; Sentoku, Y.; Mancini, R. C.; Paraschiv, I.; Johzaki, T.
2017-06-01
Solid-density plasmas driven by intense x-ray free-electron laser (XFEL) radiation are seeded by sources of nonthermal photoelectrons and Auger electrons that ionize and heat the target via collisions. Simulation codes that are commonly used to model such plasmas, such as collisional-radiative (CR) codes, typically assume a Maxwellian distribution and thus instantaneous thermalization of the source electrons. In this study, we present a detailed description and initial applications of a collisional particle-in-cell code, picls, that has been extended with a self-consistent radiation transport model and Monte Carlo models for photoionization and KLL Auger ionization, enabling the fully kinetic simulation of XFEL-driven plasmas. The code is used to simulate two experiments previously performed at the Linac Coherent Light Source investigating XFEL-driven solid-density Al plasmas. It is shown that picls-simulated pulse transmissions using the Ecker-Kröll continuum-lowering model agree much better with measurements than do simulations using the Stewart-Pyatt model. Good quantitative agreement is also found between the time-dependent picls results and those of analogous simulations by the CR code scfly, which was used in the analysis of the experiments to accurately reproduce the observed Kα emissions and pulse transmissions. Finally, it is shown that the effects of the nonthermal electrons are negligible for the conditions of the particular experiments under investigation.
NASA Astrophysics Data System (ADS)
Cho, G. S.
2017-09-01
For performance optimization of refrigerated warehouses, design parameters are selected based on physical parameters, such as the number of equipment units and aisles and the forklift speeds, for ease of modification. This paper provides a comprehensive framework approach for the system design of refrigerated warehouses. We propose a modeling approach that aims at simulation optimization to meet the required design specifications using Design of Experiments (DOE), and we analyze the simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of refrigerated warehouse operations.
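A full-factorial Design of Experiments over a handful of warehouse parameters, with a toy performance model standing in for a simulation run, might look like the sketch below. Factor names, levels, and the throughput formula are hypothetical placeholders, not those of the study's simulation model.

```python
# Sketch of a full-factorial DOE over illustrative refrigerated-warehouse
# parameters.  The factors, levels, and the toy throughput model are
# hypothetical stand-ins for actual simulation runs.
import itertools

factors = {
    "n_forklifts":    [2, 4, 6],
    "n_aisles":       [6, 10],
    "forklift_speed": [1.0, 1.5],   # m/s
}

def simulated_throughput(n_forklifts, n_aisles, forklift_speed):
    """Toy stand-in for one simulation run (pallets per hour)."""
    congestion = 1.0 / (1.0 + 0.05 * n_forklifts * 10.0 / n_aisles)
    return 30.0 * n_forklifts * forklift_speed * congestion

design = list(itertools.product(*factors.values()))
results = {combo: simulated_throughput(*combo) for combo in design}
best = max(results, key=results.get)
print("full-factorial runs:", len(design))
print("best configuration:", dict(zip(factors, best)),
      "->", round(results[best], 1), "pallets/h")
```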
Fast Realistic MRI Simulations Based on Generalized Multi-Pool Exchange Tissue Model.
Liu, Fang; Velikina, Julia V; Block, Walter F; Kijowski, Richard; Samsonov, Alexey A
2017-02-01
We present MRiLab, a new comprehensive simulator for large-scale realistic MRI simulations on a regular PC equipped with a modern graphical processing unit (GPU). MRiLab combines realistic tissue modeling with numerical virtualization of an MRI system and scanning experiment to enable assessment of a broad range of MRI approaches, including advanced quantitative MRI methods inferring microstructure on a sub-voxel level. A flexible representation of tissue microstructure is achieved in MRiLab by employing a generalized tissue model with multiple exchanging water and macromolecular proton pools, rather than the system of independent proton isochromats typically used in previous simulators. The computational power needed for simulation of the biologically relevant tissue models in large 3D objects is gained using parallelized execution on the GPU. Three simulated MRI experiments and one actual MRI experiment were performed to demonstrate the ability of the new simulator to accommodate a wide variety of voxel composition scenarios and to demonstrate the detrimental effects of the simplified treatment of tissue micro-organization adopted in previous simulators. GPU execution allowed an approximately 200× improvement in computational speed over a standard CPU. As a cross-platform, open-source, extensible environment for customizing virtual MRI experiments, MRiLab streamlines the development of new MRI methods, especially those aiming to infer tissue composition and microstructure quantitatively.
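The multi-pool exchange tissue model mentioned above generalizes the Bloch equations with first-order exchange between proton pools. For two pools, the longitudinal components take the familiar Bloch-McConnell form shown below, given here only to illustrate the idea; the generalized model also evolves the transverse components and admits additional water and macromolecular pools.

```latex
% Longitudinal Bloch-McConnell equations for two exchanging proton pools a and b,
% with first-order exchange rates k_{ab} and k_{ba}; shown for illustration of
% the multi-pool exchange idea only.
\begin{aligned}
\frac{dM_z^{a}}{dt} &= \frac{M_0^{a} - M_z^{a}}{T_1^{a}} - k_{ab} M_z^{a} + k_{ba} M_z^{b},\\
\frac{dM_z^{b}}{dt} &= \frac{M_0^{b} - M_z^{b}}{T_1^{b}} + k_{ab} M_z^{a} - k_{ba} M_z^{b}.
\end{aligned}
```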
Modelling of deformation and recrystallisation microstructures in rocks and ice
NASA Astrophysics Data System (ADS)
Bons, Paul D.; Evans, Lynn A.; Gomez-Rivas, Enrique; Griera, Albert; Jessell, Mark W.; Lebensohn, Ricardo; Llorens, Maria-Gema; Peternell, Mark; Piazolo, Sandra; Weikusat, Ilka; Wilson, Chris J. L.
2015-04-01
Microstructures both record the deformation history of a rock and strongly control its mechanical properties. As microstructures in natural rocks only show the final "post-mortem" state, geologists have attempted to simulate the development of microstructures with experiments and, later, numerical models. In-situ experiments in particular have given enormous insight, as time-lapse movies can reveal the full history of a microstructure. Numerical modelling is an alternative approach to simulate and follow the change in microstructure with time, unconstrained by experimental limitations. Numerical models have been applied to a range of microstructural processes, such as grain growth, dynamic recrystallisation, porphyroblast rotation, vein growth, and the formation of mylonitic fabrics. The numerical platform "Elle" (www.elle.ws) in particular has brought progress in the simulation of microstructural development, as it is specifically designed to include the competition between simultaneously operating processes. Three developments significantly improve our capability to simulate microstructural evolution: (1) model input from the mapping of crystallographic orientation with EBSD or the automatic fabric analyser, (2) measurement of grain size and crystallographic preferred orientation evolution using neutron diffraction experiments, and (3) the implementation of the full-field Fast Fourier Transform (FFT) solver for modelling anisotropic crystal-plastic deformation. The latter enables detailed modelling of stress and strain as a function of local crystallographic orientation, which has a strong effect on strain localisation, for example in the formation of shear bands. These models can now be compared with the temporal evolution of crystallographic orientation distributions in in-situ experiments. In the last decade, the possibility to combine experiments with numerical simulations has not only allowed verification and refinement of the numerical simulation technique but has also significantly increased the ability to predict and/or interpret natural microstructures. This contribution will present the most recent developments in in-situ and numerical modelling of deformation and recrystallisation microstructures in rocks and in ice.
A high-resolution integrated model of the National Ignition Campaign cryogenic layered experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O. S.; Cerjan, C. J.; Marinak, M. M.
A detailed simulation-based model of the June 2011 National Ignition Campaign cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. Although by design the model is able to reproduce the 1D in-flight implosion parameters and low-mode asymmetries, it is not able to accurately predict the measured and inferred stagnation properties and levels of mix. In particular, the measured yields were 15%-40% of the calculated yields, and the inferred stagnation pressure is about 3 times lower than simulated.
Simulator design for advanced ISDN satellite design and experiments
NASA Technical Reports Server (NTRS)
Pepin, Gerald R.
1992-01-01
This simulation design task completion report documents the simulation techniques associated with the network models of both the Interim Service ISDN (integrated services digital network) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures. The ISIS network model design represents satellite systems like the Advanced Communication Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) program, moves all control and switching functions on board the next-generation ISDN communication satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from the simulation of ISIS and FSIS engineering software models for their major subsystems. Discrete event simulation experiments will be performed with these models using various traffic scenarios, design parameters, and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.
Reactive transport of metal contaminants in alluvium - Model comparison and column simulation
Brown, J.G.; Bassett, R.L.; Glynn, P.D.
2000-01-01
A comparative assessment of two reactive-transport models, PHREEQC and HYDROGEOCHEM (HGC), was done to determine the suitability of each for simulating the movement of acidic contamination in alluvium. For simulations that accounted for aqueous complexation, precipitation and dissolution, the breakthrough and rinseout curves generated by each model were similar. The differences in simulated equilibrium concentrations between models were minor and were related to (1) different units in model output, (2) different activity coefficients, and (3) ionic-strength calculations. When adsorption processes were added to the models, the rinseout pH simulated by PHREEQC using the diffuse double-layer adsorption model rose to a pH of 6 after pore volume 15, about 1 pore volume later than the pH simulated by HGC using the constant-capacitance model. In PHREEQC simulation of a laboratory column experiment, the inability of the model to match measured outflow concentrations of selected constituents was related to the evident lack of local geochemical equilibrium in the column. The difference in timing and size of measured and simulated breakthrough of selected constituents indicated that the redox and adsorption reactions in the column occurred slowly when compared with the modeled reactions. MINTEQA2 and PHREEQC simulations of the column experiment indicated that the number of surface sites that took part in adsorption reactions was less than that estimated from the measured concentration of Fe hydroxide in the alluvium.
Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver
NASA Technical Reports Server (NTRS)
Hess, R. A.; Malsbury, T.; Atencio, A., Jr.
1992-01-01
A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to previously developed model-based assessment criteria allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to analytically assess the effects of modifying simulator characteristics on simulator fidelity.
Virtual geotechnical laboratory experiments using a simulator
NASA Astrophysics Data System (ADS)
Penumadu, Dayakar; Zhao, Rongda; Frost, David
2000-04-01
The details of a test simulator that provides a realistic environment for performing virtual laboratory experiments in soil mechanics are presented. A computer program, Geo-Sim, that can be used to perform virtual experiments and allows for real-time observation of material response is presented. The results of experiments, for a given set of input parameters, are obtained with the test simulator using well-trained artificial neural-network-based soil models for different soil types and stress paths. Multimedia capabilities are integrated into Geo-Sim, using software that links and controls a laser disc player with real-time parallel processing ability. During the simulation of a virtual experiment, relevant portions of the video image of a previously recorded test on an actual soil specimen are displayed along with the graphical presentation of the response from the feedforward ANN model predictions. The pilot simulator developed to date includes all aspects related to performing a triaxial test on cohesionless soil under undrained and drained conditions. The benefits of the test simulator are also presented.
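A trained feedforward network acting as a surrogate soil model, returning a stress increment for a given state and strain increment, is the core mechanism behind such ANN-driven virtual tests. The Python sketch below illustrates the idea with random placeholder weights and made-up variable names; the actual Geo-Sim networks were trained on laboratory data for specific soils and stress paths.

```python
# Minimal sketch of a feedforward neural network used as a surrogate soil model,
# mapping a current state and strain increment to a stress increment.  Weights
# here are random placeholders, not the trained Geo-Sim networks.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)) * 0.3, np.zeros(8)   # hidden layer (8 units)
W2, b2 = rng.normal(size=(1, 8)) * 0.3, np.zeros(1)   # output layer

def ann_soil_model(axial_strain, confining_pressure, strain_increment):
    """Return a predicted deviatoric stress increment (illustrative only)."""
    x = np.array([axial_strain, confining_pressure, strain_increment])
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h + b2)

# Drive a virtual strain-controlled test: accumulate predicted stress increments.
stress, strain = 0.0, 0.0
for _ in range(100):
    d_eps = 0.0002
    stress += ann_soil_model(strain, 100.0, d_eps)
    strain += d_eps
print(f"axial strain = {strain:.3f}, predicted stress = {stress:.3f} (arbitrary units)")
```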
Simulate what is measured: next steps towards predictive simulations (Conference Presentation)
NASA Astrophysics Data System (ADS)
Bussmann, Michael; Kluge, Thomas; Debus, Alexander; Hübl, Axel; Garten, Marco; Zacharias, Malte; Vorberger, Jan; Pausch, Richard; Widera, René; Schramm, Ulrich; Cowan, Thomas E.; Irman, Arie; Zeil, Karl; Kraus, Dominik
2017-05-01
Simulations of laser-matter interaction at extreme intensities with predictive power are nowadays within reach when considering codes that make optimum use of high-performance computing architectures. Nevertheless, this is mostly true for very specific settings where the model parameters are very well known from experiment and the underlying plasma dynamics is governed by Maxwell's equations alone. When including atomic effects, prepulse influences, radiation reaction, and other physical phenomena, things look different. Not only is it harder to evaluate the sensitivity of the simulation result to variations of the various model parameters, but the numerical models are less well tested and their combination can lead to subtle side effects that influence the simulation outcome. We propose to make optimum use of future compute hardware to compute statistical and systematic errors rather than just finding the optimum set of parameters that fits an experiment. This requires including experimental uncertainties, which is a challenge for current state-of-the-art techniques. Moreover, it demands better comparison to experiments, as simulating the response of the diagnostics becomes important. We strongly advocate the use of open standards for achieving interoperability between codes in comparison studies, and for building complete tool chains for simulating laser-matter experiments from start to end.
Virtual reality simulators: valuable surgical skills trainers or video games?
Willis, Ross E; Gomez, Pedro Pablo; Ivatury, Srinivas J; Mitra, Hari S; Van Sickle, Kent R
2014-01-01
Virtual reality (VR) and physical model (PM) simulators differ in terms of whether the trainee is manipulating actual 3-dimensional objects (PM) or computer-generated 3-dimensional objects (VR). Much like video games (VG), VR simulators utilize computer-generated graphics. These differences may have profound effects on the utility of VR and PM training platforms. In this study, we aimed to determine whether a relationship exists between VR, PM, and VG platforms. VR and PM simulators for laparoscopic camera navigation ([LCN], experiment 1) and flexible endoscopy ([FE] experiment 2) were used in this study. In experiment 1, 20 laparoscopic novices played VG and performed 0° and 30° LCN exercises on VR and PM simulators. In experiment 2, 20 FE novices played VG and performed colonoscopy exercises on VR and PM simulators. In both experiments, VG performance was correlated with VR performance but not with PM performance. Performance on VR simulators did not correlate with performance on respective PM models. VR environments may be more like VG than previously thought. © 2013 Published by Association of Program Directors in Surgery on behalf of Association of Program Directors in Surgery.
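The core statistical comparison in this study is a set of pairwise correlations between platform scores. The sketch below reproduces that kind of analysis on made-up per-trainee scores; the variable names and data are illustrative assumptions, not the study's dataset.

```python
# Sketch of the correlation analysis described above, assuming hypothetical
# per-trainee performance scores for the video game (VG), virtual reality (VR),
# and physical model (PM) platforms. Names and data are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20                                            # 20 novices per experiment, as in the study

vg = rng.normal(50, 10, n)                        # hypothetical VG scores
vr = 0.7 * vg + rng.normal(0, 8, n)               # VR correlated with VG (illustrative)
pm = rng.normal(50, 10, n)                        # PM independent of VG (illustrative)

for name, scores in [("VR", vr), ("PM", pm)]:
    r, p = stats.pearsonr(vg, scores)
    print(f"VG vs {name}: r = {r:.2f}, p = {p:.3f}")
```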
Host Model Uncertainty in Aerosol Radiative Forcing Estimates - The AeroCom Prescribed Experiment
NASA Astrophysics Data System (ADS)
Stier, P.; Kinne, S.; Bellouin, N.; Myhre, G.; Takemura, T.; Yu, H.; Randles, C.; Chung, C. E.
2012-04-01
Anthropogenic and natural aerosol radiative effects are recognized to affect global and regional climate. However, even for the case of identical aerosol emissions, the simulated direct aerosol radiative forcings show significant diversity among the AeroCom models (Schulz et al., 2006). Our analysis of aerosol absorption in the AeroCom models indicates a larger diversity in the translation from given aerosol radiative properties (absorption optical depth) to actual atmospheric absorption than in the translation of a given atmospheric burden of black carbon to the radiative properties (absorption optical depth). The large diversity is caused by differences in the simulated cloud fields, radiative transfer, the relative vertical distribution of aerosols and clouds, and the effective surface albedo. This indicates that differences in host model (the GCM or CTM hosting the aerosol module) parameterizations contribute significantly to the simulated diversity of aerosol radiative forcing. The magnitude of these host model effects in global aerosol model and satellite-retrieved aerosol radiative forcing estimates cannot be estimated from the diagnostics of the "standard" AeroCom forcing experiments. To quantify the contribution of differences in the host models to the simulated aerosol radiative forcing and absorption, we conduct the AeroCom Prescribed experiment, a simple aerosol model and satellite retrieval intercomparison with prescribed, highly idealised aerosol fields. Quality checks, such as diagnostic output of the 3D aerosol fields as implemented in each model, ensure the comparability of the aerosol implementation in the participating models. The simulated forcing variability among the models and retrievals is a direct measure of the contribution of host model assumptions to the uncertainty in the assessment of the aerosol radiative effects. We will present the results from the AeroCom Prescribed experiment with a focus on the attribution of the simulated variability to parametric and structural model uncertainties. This work will help to prioritise areas for future model improvements and ultimately lead to uncertainty reduction.
Experimental Study and CFD Simulation of a 2D Circulating Fluidized Bed
NASA Astrophysics Data System (ADS)
Kallio, S.; Guldén, M.; Hermanson, A.
Computational fluid dynamics (CFD) is gaining popularity in fluidized bed modeling. For model validation, there is a need for detailed measurements under well-defined conditions. In the present study, experiments were carried out in a 40 cm wide and 3 m high 2D circulating fluidized bed. Two experiments were simulated by means of the Eulerian multiphase models of the Fluent CFD software. The vertical pressure and solids volume fraction profiles and the solids circulation rate obtained from the simulation were compared to the experimental results. In addition, lateral volume fraction profiles could be compared. The simulated CFB flow patterns and the profiles obtained from simulations were in general in good agreement with the experimental results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veneziani, Carmela
Two sets of simulations were performed within this allocation: 1) a 12-year fully-coupled experiment in preindustrial conditions, using the CICE4 version of the sea-ice model; 2) a set of multi-decadal ocean-ice-only experiments, forced with CORE-I atmospheric fields and using the CICE5 version of the sea-ice model. Results from simulation 1) are presented in Figures 1-3, and specific results from a simulation in 2) with tracer releases are presented in Figure 4.
ARES Modeling of High-foot Implosions (NNSA Milestone #5466)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurricane, O. A.
ARES “capsule only” simulations demonstrated results of applying an ASC code to a suite of high-foot ICF implosion experiments. While a capability to apply an asymmetric FDS drive to the capsule-only model using add-on Python routines exists, it was not exercised here. The ARES simulation results resemble the results from HYDRA simulations documented in A. Kritcher, et al., Phys. Plasmas, 23, 052709 (2016); namely, 1D simulation and data are in reasonable agreement for the lowest velocity experiments, but diverge from each other at higher velocities.
Simulation-Based Training for Colonoscopy
Preisler, Louise; Svendsen, Morten Bo Søndergaard; Nerup, Nikolaj; Svendsen, Lars Bo; Konge, Lars
2015-01-01
The aim of this study was to create simulation-based tests with credible pass/fail standards for 2 different fidelities of colonoscopy models. Only competent practitioners should perform colonoscopy. Reliable and valid simulation-based tests could be used to establish basic competency in colonoscopy before practicing on patients. Twenty-five physicians (10 consultants with endoscopic experience and 15 fellows with very little endoscopic experience) were tested on 2 different simulator models: a virtual-reality simulator and a physical model. Tests were repeated twice on each simulator model. Metrics with discriminatory ability were identified for both modalities and reliability was determined. The contrasting-groups method was used to create pass/fail standards and the consequences of these were explored. The consultants performed significantly faster and scored higher than the fellows on both models (P < 0.001). Reliability analysis showed Cronbach α = 0.80 and 0.87 for the virtual-reality and the physical model, respectively. The established pass/fail standards failed one of the consultants (virtual-reality simulator) and allowed one fellow to pass (physical model). The 2 tested simulation-based modalities provided reliable and valid assessments of competence in colonoscopy, and credible pass/fail standards were established for both tests. We propose to use these standards in simulation-based training programs before proceeding to supervised training on patients. PMID:25634177
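Two of the statistics quoted above can be illustrated with a short sketch: Cronbach's alpha computed from per-item scores, and a contrasting-groups pass/fail cutoff taken as the point where normal densities fitted to the two groups' totals intersect. This is a minimal, assumed reading of the methods; the data and the normality assumption are illustrative, not the authors' analysis.

```python
# Minimal sketch (not the authors' code) of Cronbach's alpha and a
# contrasting-groups pass/fail cutoff; all data below are made up.
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_subjects, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def contrasting_groups_cutoff(competent, novice):
    """Intersection of two normal densities fitted to the group scores."""
    m1, s1 = np.mean(competent), np.std(competent, ddof=1)
    m2, s2 = np.mean(novice), np.std(novice, ddof=1)
    # Equating the two normal pdfs gives a quadratic in the score x.
    a = 1 / s2**2 - 1 / s1**2
    b = 2 * (m1 / s1**2 - m2 / s2**2)
    c = m2**2 / s2**2 - m1**2 / s1**2 + 2 * np.log(s2 / s1)
    roots = np.roots([a, b, c])
    # Keep the real root lying between the two group means.
    return next(r.real for r in roots
                if abs(r.imag) < 1e-9 and min(m1, m2) <= r.real <= max(m1, m2))

rng = np.random.default_rng(1)
ability = rng.normal(0, 1, (25, 1))
item_scores = 3 + ability + rng.normal(0, 0.7, (25, 10))  # hypothetical correlated item scores
consultants = rng.normal(80, 8, 10)                       # hypothetical total scores
fellows = rng.normal(55, 12, 15)
print("alpha =", round(cronbach_alpha(item_scores), 2))
print("pass/fail cutoff =", round(contrasting_groups_cutoff(consultants, fellows), 1))
```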
LeBlanc, Fabien; Champagne, Bradley J; Augestad, Knut M; Neary, Paul C; Senagore, Anthony J; Ellis, Clyde N; Delaney, Conor P
2010-08-01
The aim of this study was to compare the human cadaver model with an augmented reality simulator for straight laparoscopic colorectal skills acquisition. Thirty-five sigmoid colectomies were performed on a cadaver (n = 7) or an augmented reality simulator (n = 28) during a laparoscopic training course. Prior laparoscopic colorectal experience was assessed. Objective structured technical skills assessment forms were completed by trainers and trainees independently. Groups were compared according to technical skills scores, events scores, and satisfaction with the training model. Prior laparoscopic experience was similar in both groups. For trainers and trainees, technical skills scores were considerably better on the simulator than on the cadaver. For trainers, the generic events score was also considerably better on the simulator than on the cadaver. The main generic event occurring on both models was errors in the use of retraction. The main specific event occurring on both models was bowel perforation. Global satisfaction was better for the cadaver than for the simulator model (p < 0.001). The human cadaver model was more difficult but better appreciated than the simulator for laparoscopic sigmoid colectomy training. Simulator training followed by cadaver training can appropriately integrate simulators into the learning curve and maintain the benefits of both training methodologies. Published by Elsevier Inc.
Mock Data Challenge for the MPD/NICA Experiment on the HybriLIT Cluster
NASA Astrophysics Data System (ADS)
Gertsenberger, Konstantin; Rogachevsky, Oleg
2018-02-01
Simulation of data processing before receiving the first experimental data is an important issue in high-energy physics experiments. This article presents the current Event Data Model and the Mock Data Challenge for the MPD experiment at the NICA accelerator complex, which uses ongoing simulation studies to stress-test the distributed computing infrastructure and the experiment software in the full production environment, from simulated data through to physics analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohno, N.; Abdel-Karim, M.
2000-01-01
Uniaxial ratchetting experiments of 316FR steel at room temperature reported in Part 1 are simulated using a new kinematic hardening model which has two kinds of dynamic recovery terms. The model, which features the capability of simulating slight opening of stress-strain hysteresis loops robustly, is formulated by furnishing the Armstrong and Frederick model with the critical state of dynamic recovery introduced by Ohno and Wang (1993). The model is then combined with a viscoplastic equation, and the resulting constitutive model is applied successfully to simulating the experiments. It is shown that for ratchetting under stress cycling with negative stress ratio, viscoplasticity and slight opening of hysteresis loops are effective mainly in early and subsequent cycles, respectively, whereas for ratchetting under zero-to-tension only viscoplasticity is effective.
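For readers unfamiliar with the baseline rule being extended here, the following sketch integrates the classical Armstrong-Frederick backstress evolution in its uniaxial, plastic-strain-driven form. It is not the authors' two-recovery-term model, and the hardening parameters are made-up values chosen only to show the saturation behaviour.

```python
# Illustrative sketch of the classical Armstrong-Frederick kinematic hardening
# rule (uniaxial form); parameters C and gamma are assumed, not calibrated.
import numpy as np

def af_backstress(eps_p, C=60e3, gamma=300.0):
    """Explicitly integrate d(alpha) = C*d(eps_p) - gamma*alpha*|d(eps_p)|."""
    alpha = np.zeros_like(eps_p)
    for i in range(1, len(eps_p)):
        dep = eps_p[i] - eps_p[i - 1]
        alpha[i] = alpha[i - 1] + C * dep - gamma * alpha[i - 1] * abs(dep)
    return alpha

# Symmetric plastic strain cycling: the backstress saturates toward +/- C/gamma.
t = np.linspace(0, 4 * np.pi, 2000)
eps_p = 0.01 * np.sin(t)
alpha = af_backstress(eps_p)
print("saturation estimate C/gamma =", 60e3 / 300.0, "MPa; max backstress =", alpha.max().round(1))
```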
Monthly mean simulation experiments with a coarse-mesh global atmospheric model
NASA Technical Reports Server (NTRS)
Spar, J.; Klugman, R.; Lutz, R. J.; Notario, J. J.
1978-01-01
Substitution of observed monthly mean sea-surface temperatures (SSTs) as lower boundary conditions, in place of climatological SSTs, failed to improve the model simulations. While the impact of SST anomalies on the model output is greater at sea level than at upper levels, the impact on the monthly mean simulations is not beneficial at any level. Shifts of one and two days in initialization time produced small, but non-trivial, changes in the model-generated monthly mean synoptic fields. No improvements in the mean simulations resulted from the use of either time-averaged initial data or re-initialization with time-averaged early model output. The noise level of the model, as determined from a multiple initial state perturbation experiment, was found to be generally low, but with a noisier response to initial state errors in high latitudes than in the tropics.
Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines
NASA Astrophysics Data System (ADS)
Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.
2016-12-01
Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model, while satisfying all the necessities of flexibility, accuracy and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.
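As an illustration of the adaptive-spline surrogate idea (not the authors' Bayesian implementation), the sketch below fits a greedy, non-Bayesian hinge-spline regression, in the spirit of MARS, to a cheap toy function standing in for an expensive dispersion model. The function names, the toy simulator and all settings are assumptions for demonstration only.

```python
# Greedy forward selection of hinge-spline basis functions as a stand-in for
# the Bayesian adaptive spline surrogate described above (illustrative only).
import numpy as np

def hinge(x, knot, sign):
    return np.maximum(0.0, sign * (x - knot))

def fit_adaptive_splines(X, y, n_basis=8):
    """Greedily add the single hinge basis function that most reduces RSS."""
    n, d = X.shape
    basis = [np.ones(n)]          # intercept
    terms = []                    # (input dimension, knot, sign)
    for _ in range(n_basis):
        best = None
        for j in range(d):
            for knot in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
                for sign in (+1.0, -1.0):
                    B = np.column_stack(basis + [hinge(X[:, j], knot, sign)])
                    coef = np.linalg.lstsq(B, y, rcond=None)[0]
                    rss = np.sum((y - B @ coef) ** 2)
                    if best is None or rss < best[0]:
                        best = (rss, j, knot, sign)
        _, j, knot, sign = best
        basis.append(hinge(X[:, j], knot, sign))
        terms.append((j, knot, sign))
    coef = np.linalg.lstsq(np.column_stack(basis), y, rcond=None)[0]
    return terms, coef

def predict(X, terms, coef):
    B = np.column_stack([np.ones(len(X))] + [hinge(X[:, j], k, s) for j, k, s in terms])
    return B @ coef

# Toy "simulator": output depends nonlinearly on two of three inputs.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (200, 3))
y = np.sin(2 * np.pi * X[:, 0]) + 2 * (X[:, 1] - 0.5) ** 2 + 0.05 * rng.normal(size=200)

terms, coef = fit_adaptive_splines(X, y)
rmse = np.sqrt(np.mean((predict(X, terms, coef) - y) ** 2))
print(f"surrogate in-sample RMSE: {rmse:.3f}")
```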
Pagan, Darren C.; Miller, Matthew P.
2014-01-01
A forward modeling diffraction framework is introduced and employed to identify slip system activity in high-energy diffraction microscopy (HEDM) experiments. In the framework, diffraction simulations are conducted on virtual mosaic crystals with orientation gradients consistent with Nye’s model of heterogeneous single slip. Simulated diffraction peaks are then compared against experimental measurements to identify slip system activity. Simulation results compared against diffraction data measured in situ from a silicon single-crystal specimen plastically deformed under single-slip conditions indicate that slip system activity can be identified during HEDM experiments. PMID:24904242
The Oceanographic Multipurpose Software Environment (OMUSE v1.0)
NASA Astrophysics Data System (ADS)
Pelupessy, Inti; van Werkhoven, Ben; van Elteren, Arjen; Viebahn, Jan; Candy, Adam; Portegies Zwart, Simon; Dijkstra, Henk
2017-08-01
In this paper we present the Oceanographic Multipurpose Software Environment (OMUSE). OMUSE aims to provide a homogeneous environment for existing or newly developed numerical ocean simulation codes, simplifying their use and deployment. In this way, numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales can be easily designed. Rapid development of simulation models is made possible through the creation of simple high-level scripts. The low-level core of the abstraction in OMUSE is designed to deploy these simulations efficiently on heterogeneous high-performance computing resources. Cross-verification of simulation models with different codes and numerical methods is facilitated by the unified interface that OMUSE provides. Reproducibility in numerical experiments is fostered by allowing complex numerical experiments to be expressed in portable scripts that conform to a common OMUSE interface. Here, we present the design of OMUSE as well as the modules and model components currently included, which range from a simple conceptual quasi-geostrophic solver to the global circulation model POP (Parallel Ocean Program). The uniform access to the codes' simulation state and the extensive automation of data transfer and conversion operations aids the implementation of model couplings. We discuss the types of couplings that can be implemented using OMUSE. We also present example applications that demonstrate the straightforward model initialization and the concurrent use of data analysis tools on a running model. We give examples of multiscale and multiphysics simulations by embedding a regional ocean model into a global ocean model and by coupling a surface wave propagation model with a coastal circulation model.
Coleman, Kevin; Muhammed, Shibu E; Milne, Alice E; Todman, Lindsay C; Dailey, A Gordon; Glendining, Margaret J; Whitmore, Andrew P
2017-12-31
We describe a model framework that simulates spatial and temporal interactions in agricultural landscapes and that can be used to explore trade-offs between production and environment, so helping to determine solutions to the problems of sustainable food production. Here we focus on models of agricultural production, water movement and nutrient flow in a landscape. We validate these models against data from two long-term experiments (the first a continuous wheat experiment and the other a permanent grassland experiment) and an experiment where water and nutrient flow are measured from isolated catchments. The model simulated wheat yield (RMSE 20.3-28.6%), grain N (RMSE 21.3-42.5%) and P (RMSE 20.2-29%, excluding the nil N plots), and total soil organic carbon particularly well (RMSE 3.1-13.8%); the simulations of water flow were also reasonable (RMSE 180.36 and 226.02%). We illustrate the use of our model framework to explore trade-offs between production and nutrient losses. Copyright © 2017 Rothamsted Research. Published by Elsevier B.V. All rights reserved.
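The RMSE figures quoted above are most naturally read as root-mean-square errors expressed as a percentage of the observed mean; that reading is an assumption. A minimal sketch of such a relative-RMSE check, on made-up yield data, is given below.

```python
# Sketch of a relative-RMSE statistic (RMSE as a percentage of the observed
# mean), with hypothetical observed/simulated wheat yields.
import numpy as np

def relative_rmse(observed, simulated):
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return 100.0 * rmse / observed.mean()

obs_yield = np.array([6.2, 7.1, 5.8, 6.9, 7.4])   # t/ha, hypothetical
sim_yield = np.array([5.5, 7.8, 6.3, 6.1, 8.0])   # t/ha, hypothetical
print(f"wheat yield RMSE = {relative_rmse(obs_yield, sim_yield):.1f}% of the observed mean")
```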
Effects of Learning Support in Simulation-Based Physics Learning
ERIC Educational Resources Information Center
Chang, Kuo-En; Chen, Yu-Lung; Lin, He-Yan; Sung, Yao-Ting
2008-01-01
This paper describes the effects of learning support on simulation-based learning in three learning models: experiment prompting, a hypothesis menu, and step guidance. A simulation learning system was implemented based on these three models, and the differences between simulation-based learning and traditional laboratory learning were explored in…
Educating the delivery of bad news in medicine: Preceptorship versus simulation
Jacques, Andrew P; Adkins, Eric J; Knepel, Sheri; Boulger, Creagh; Miller, Jessica; Bahner, David P
2011-01-01
Simulation experiences have begun to replace traditional education models of teaching the skill of bad news delivery in medical education. The tiered apprenticeship model of medical education emphasizes experiential learning. Studies have described a lack of support in bad news delivery and inadequacy of training in this important clinical skill as well as poor familial comprehension and dissatisfaction on the part of physicians in training regarding the resident delivery of bad news. Many residency training programs lacked a formalized training curriculum in the delivery of bad news. Simulation teaching experiences may address these noted clinical deficits in the delivery of bad news to patients and their families. Unique experiences can be role-played with this educational technique to simulate perceived learner deficits. A variety of scenarios can be constructed within the framework of the simulation training method to address specific cultural and religious responses to bad news in the medical setting. Even potentially explosive and violent scenarios can be role-played in order to prepare physicians for these rare and difficult situations. While simulation experiences cannot supplant the model of positive, real-life clinical teaching in the delivery of bad news, simulation of clinical scenarios with scripting, self-reflection, and peer-to-peer feedback can be powerful educational tools. Simulation training can help to develop the skills needed to effectively and empathetically deliver bad news to patients and families in medical practice. PMID:22229135
Optimising electron microscopy experiment through electron optics simulation.
Kubo, Y; Gatel, C; Snoeck, E; Houdellier, F
2017-04-01
We developed a new type of electron trajectory simulation inside a complete model of a modern transmission electron microscope (TEM). Our model incorporates the precise and real design of each element constituting a TEM, i.e. the field emission (FE) cathode, the extraction optics and acceleration stages of a 300 kV cold field emission gun, the illumination lenses, the objective lens, and the intermediate and projection lenses. Full trajectories can be computed using magnetically saturated or non-saturated round lenses, magnetic deflectors and even non-cylindrically symmetric elements like the electrostatic biprism. This multi-scale model gathers nanometer-size components (the FE tip) with parts of meter length (the illumination and projection systems). We demonstrate that non-trivial TEM experiments requiring specific and complex optical configurations can be simulated and optimized prior to any experiment using such a model. We show that all the currents set in all optical elements of the simulated column can be implemented in the real column (I2TEM in CEMES) and used as the starting alignment for the requested experiment. We argue that the combination of such complete electron trajectory simulations in the whole TEM column with automatic optimization of the microscope parameters for optimal experimental data (images, diffraction, spectra) drastically simplifies the implementation of complex experiments in TEM and will facilitate the development of advanced uses of the electron microscope in the near future. Copyright © 2017 Elsevier B.V. All rights reserved.
Hurricanes and Climate: The U.S. CLIVAR Working Group on Hurricanes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Kevin J. E.; Camargo, Suzana J.; Vecchi, Gabriel A.
While a quantitative climate theory of tropical cyclone formation remains elusive, considerable progress has been made recently in our ability to simulate tropical cyclone climatologies and to understand the relationship between climate and tropical cyclone formation. Climate models are now able to simulate a realistic rate of global tropical cyclone formation, although simulation of the Atlantic tropical cyclone climatology remains challenging unless horizontal resolutions finer than 50 km are employed. This article summarizes published research from the idealized experiments of the Hurricane Working Group of U.S. Climate and Ocean: Variability, Predictability and Change (CLIVAR). This work, combined with results from other model simulations, has strengthened relationships between tropical cyclone formation rates and climate variables such as midtropospheric vertical velocity, with decreased climatological vertical velocities leading to decreased tropical cyclone formation. Systematic differences are shown between experiments in which only sea surface temperature is increased compared with experiments where only atmospheric carbon dioxide is increased. Experiments where only carbon dioxide is increased are more likely to demonstrate a decrease in tropical cyclone numbers, similar to the decreases simulated by many climate models for a future, warmer climate. Experiments where the two effects are combined also show decreases in numbers, but these tend to be less for models that demonstrate a strong tropical cyclone response to increased sea surface temperatures. Lastly, further experiments are proposed that may improve our understanding of the relationship between climate and tropical cyclone formation, including experiments with two-way interaction between the ocean and the atmosphere and variations in atmospheric aerosols.
Students' Design of Experiments: An Inquiry Module on the Conduction of Heat
ERIC Educational Resources Information Center
Hatzikraniotis, E.; Kallery, M.; Molohidis, A.; Psillos, D.
2010-01-01
This article examines secondary students' design of experiments after engagement in an innovative and inquiry-oriented module on heat transfer. The module consists of an integration of hands-on experiments, simulated experiments and microscopic model simulations, includes a structured series of guided investigative tasks and was implemented for a…
In-flight simulation investigation of rotorcraft pitch-roll cross coupling
NASA Technical Reports Server (NTRS)
Watson, Douglas C.; Hindson, William S.
1988-01-01
An in-flight simulation experiment investigating the handling qualities effects of the pitch-roll cross-coupling characteristic of single-main-rotor helicopters is described. The experiment was conducted using the NASA/Army CH-47B variable stability helicopter with an explicit-model-following control system. The research is an extension of an earlier ground-based investigation conducted on the NASA Ames Research Center's Vertical Motion Simulator. The model developed for the experiment is for an unaugmented helicopter with cross-coupling implemented using physical rotor parameters. The details of converting the model from the simulation to use in flight are described. A frequency-domain comparison of the model and actual aircraft responses showing the fidelity of the in-flight simulation is described. The evaluation task was representative of nap-of-the-Earth maneuvering flight. The results indicate that task demands are important in determining allowable levels of coupling. In addition, on-axis damping characteristics influence the frequency-dependent characteristics of coupling and affect the handling qualities. Pilot technique, in terms of learned control crossfeeds, can improve performance and lower workload for particular types of coupling. The results obtained in flight corroborated the simulation results.
NASA Astrophysics Data System (ADS)
Jungclaus, Johann H.; Bard, Edouard; Baroni, Mélanie; Braconnot, Pascale; Cao, Jian; Chini, Louise P.; Egorova, Tania; Evans, Michael; Fidel González-Rouco, J.; Goosse, Hugues; Hurtt, George C.; Joos, Fortunat; Kaplan, Jed O.; Khodri, Myriam; Klein Goldewijk, Kees; Krivova, Natalie; LeGrande, Allegra N.; Lorenz, Stephan J.; Luterbacher, Jürg; Man, Wenmin; Maycock, Amanda C.; Meinshausen, Malte; Moberg, Anders; Muscheler, Raimund; Nehrbass-Ahles, Christoph; Otto-Bliesner, Bette I.; Phipps, Steven J.; Pongratz, Julia; Rozanov, Eugene; Schmidt, Gavin A.; Schmidt, Hauke; Schmutz, Werner; Schurer, Andrew; Shapiro, Alexander I.; Sigl, Michael; Smerdon, Jason E.; Solanki, Sami K.; Timmreck, Claudia; Toohey, Matthew; Usoskin, Ilya G.; Wagner, Sebastian; Wu, Chi-Ju; Leng Yeo, Kok; Zanchettin, Davide; Zhang, Qiong; Zorita, Eduardo
2017-11-01
The pre-industrial millennium is among the periods selected by the Paleoclimate Model Intercomparison Project (PMIP) for experiments contributing to the sixth phase of the Coupled Model Intercomparison Project (CMIP6) and the fourth phase of the PMIP (PMIP4). The past1000 transient simulations serve to investigate the response to (mainly) natural forcing under background conditions not too different from today, and to discriminate between forced and internally generated variability on interannual to centennial timescales. This paper describes the motivation and the experimental set-ups for the PMIP4-CMIP6 past1000 simulations, and discusses the forcing agents orbital, solar, volcanic, and land use/land cover changes, and variations in greenhouse gas concentrations. The past1000 simulations covering the pre-industrial millennium from 850 Common Era (CE) to 1849 CE have to be complemented by historical simulations (1850 to 2014 CE) following the CMIP6 protocol. The external forcings for the past1000 experiments have been adapted to provide a seamless transition across these time periods. Protocols for the past1000 simulations have been divided into three tiers. A default forcing data set has been defined for the Tier 1 (the CMIP6 past1000) experiment. However, the PMIP community has maintained the flexibility to conduct coordinated sensitivity experiments to explore uncertainty in forcing reconstructions as well as parameter uncertainty in dedicated Tier 2 simulations. Additional experiments (Tier 3) are defined to foster collaborative model experiments focusing on the early instrumental period and to extend the temporal range and the scope of the simulations. This paper outlines current and future research foci and common analyses for collaborative work between the PMIP and the observational communities (reconstructions, instrumental data).
NASA Astrophysics Data System (ADS)
Mota, F. L.; Song, Y.; Pereda, J.; Billia, B.; Tourret, D.; Debierre, J.-M.; Trivedi, R.; Karma, A.; Bergeon, N.
2017-08-01
To study the dynamical formation and evolution of cellular and dendritic arrays under diffusive growth conditions, three-dimensional (3D) directional solidification experiments were conducted in microgravity on a model transparent alloy onboard the International Space Station using the Directional Solidification Insert in the DEvice for the study of Critical LIquids and Crystallization. Selected experiments were repeated on Earth under gravity-driven fluid flow to evidence convection effects. Both radial and axial macrosegregation resulting from convection are observed in ground experiments, and primary spacings measured in Earth and microgravity experiments are noticeably different. The microgravity experiments provide unique benchmark data for numerical simulations of spatially extended pattern formation under diffusive growth conditions. The results of 3D phase-field simulations highlight the importance of accurately modeling the thermal conditions that strongly influence the front recoil of the interface and the selection of the primary spacing. The modeling predictions are in good quantitative agreement with the microgravity experiments.
Stone, John E.; Hynninen, Antti-Pekka; Phillips, James C.; Schulten, Klaus
2017-01-01
All-atom molecular dynamics simulations of biomolecules provide a powerful tool for exploring the structure and dynamics of large protein complexes within realistic cellular environments. Unfortunately, such simulations are extremely demanding in terms of their computational requirements, and they present many challenges in terms of preparation, simulation methodology, and analysis and visualization of results. We describe our early experiences porting the popular molecular dynamics simulation program NAMD and the simulation preparation, analysis, and visualization tool VMD to GPU-accelerated OpenPOWER hardware platforms. We report our experiences with compiler-provided autovectorization and compare with hand-coded vector intrinsics for the POWER8 CPU. We explore the performance benefits obtained from unique POWER8 architectural features such as 8-way SMT and its value for particular molecular modeling tasks. Finally, we evaluate the performance of several GPU-accelerated molecular modeling kernels and relate them to other hardware platforms. PMID:29202130
Experimental verification of dynamic simulation
NASA Technical Reports Server (NTRS)
Yae, K. Harold; Hwang, Howyoung; Chern, Su-Tai
1989-01-01
The dynamics model here is a backhoe, which is a four-degree-of-freedom manipulator from the dynamics standpoint. Two types of experiment are chosen that can also be simulated by a multibody dynamics simulation program. In the experiment, the configuration and force histories were recorded in the time domain; that is, velocity and position, together with the force output and differential pressure change from the hydraulic cylinder. When the experimental force history is used as the driving force in the simulation model, the forward dynamics simulation produces a corresponding configuration history. Then, the experimental configuration history is used in the inverse dynamics analysis to generate a corresponding force history. Therefore, two sets of configuration and force histories--one set from experiment, and the other from the simulation that is driven forward and backward with the experimental data--are compared in the time domain. More comparisons are made in regard to the effects of initial conditions, friction, and viscous damping.
NASA Astrophysics Data System (ADS)
Danáčová, Michaela; Valent, Peter; Výleta, Roman
2017-12-01
Nowadays, rainfall simulators are being used by many researchers in field or laboratory experiments. The main objective of most of these experiments is to better understand the underlying runoff generation processes, and to use the results in the calibration and validation of hydrological models. Many research groups have assembled their own rainfall simulators, which comply with their understanding of rainfall processes and the requirements of their experiments. Most often, the existing rainfall simulators differ mainly in the size of the irrigated area and the way they generate rain drops. They can be characterized by the accuracy with which they produce a rainfall of a given intensity, the size of the irrigated area, and the rain drop generating mechanism. Rainfall simulation experiments can provide valuable information about the genesis of surface runoff, the infiltration of water into soil and rainfall erodibility. Apart from the impact of the physical properties of the soil, its moisture and compaction on the generation of surface runoff and the amount of eroded particles, some studies also investigate the impact of the vegetation cover of the area of interest. In this study, the rainfall simulator was used to simulate the impact of the slope gradient of the irrigated area on the amount of generated runoff and sediment yield. In order to eliminate the impact of external factors and to improve the reproducibility of the initial conditions, the experiments were conducted in laboratory conditions. The laboratory experiments were carried out using a commercial rainfall simulator, which was connected to an external peristaltic pump. The pump maintained a constant and adjustable inflow of water, which made it possible to exceed the maximum volume of simulated precipitation of 2.3 l imposed by the construction of the rainfall simulator, while maintaining constant characteristics of the simulated precipitation. In this study a 12-minute rainfall with a constant intensity of 5 mm/min was used to irrigate a disturbed soil sample. The experiment was undertaken for several different slopes, under the condition of no vegetation cover. The results of the rainfall simulation experiment confirmed the expected strong relationship between the slope gradient and the amount of surface runoff generated. The experiments with higher slope gradients were characterised by larger volumes of surface runoff generated, and by shorter times after which it occurred. Experiments with rainfall simulators in both laboratory and field conditions play an important role in a better understanding of runoff generation processes. The results of such small-scale experiments could be used to estimate some of the parameters of complex hydrological models, which are used to model rainfall-runoff and erosion processes at the catchment scale.
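A minimal sketch of the slope-runoff relationship analysis is given below, assuming hypothetical runoff totals for each slope gradient and an ordinary linear fit; the paper itself does not specify the regression form.

```python
# Sketch of the slope-runoff relationship, with hypothetical measurements of
# total surface runoff for each slope gradient tested (illustrative values).
import numpy as np

slope_deg = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # hypothetical slope gradients
runoff_l = np.array([0.6, 0.9, 1.3, 1.6, 2.0])        # hypothetical runoff volumes (litres)

b, a = np.polyfit(slope_deg, runoff_l, 1)             # runoff ~ a + b * slope
r = np.corrcoef(slope_deg, runoff_l)[0, 1]
print(f"runoff = {a:.2f} + {b:.3f} * slope (r = {r:.2f})")
```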
Simulation of Oil Palm Shell Pyrolysis to Produce Bio-Oil with Self-Pyrolysis Reactor
NASA Astrophysics Data System (ADS)
Fika, R.; Nelwan, L. O.; Yulianto, M.
2018-05-01
A new self-pyrolysis reactor was designed to reduce the use of an electric heater and thus save energy in the production of bio-oil from oil palm shell. The yield of the bio-oil was then evaluated with the mathematical model developed by Sharma [1] using the characteristics of oil palm shell [2]. During the simulation, the temperature in the combustion chamber at the release of the bio-oil was used to determine the volatile composition from the combustion of the oil palm shell as fuel. The mass flow was assumed constant for the three experiments. The model resulted in a significant difference between the simulated bio-oil and the experiments. The bio-oil yields from the simulation were 22.01, 16.36, and 21.89% (d.b.), whereas the experimental yields were 10.23, 9.82, and 8.41% (d.b.). The char yield varied from 30.7% (d.b.) in the simulation to 40.9% (d.b.) in the experiment. This discrepancy was due to the development of the process temperature over time, which was not considered as an influential factor in producing volatile matter in the simulation model, whereas the real experiments relied strongly on the process conditions (reactor type, temperature over time, gas flow). There was also the possibility of gasification occurring inside the reactor, which caused the liquid yield to be lower than simulated. Further research on the simulation model for the bio-oil yield will be needed to predict the optimum conditions and temperature development in the new self-pyrolysis reactor.
The NASA Ames Hypersonic Combustor-Model Inlet CFD Simulations and Experimental Comparisons
NASA Technical Reports Server (NTRS)
Venkatapathy, E.; Tokarcik-Polsky, S.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)
1995-01-01
Computations have been performed on a three-dimensional inlet associated with the NASA Ames combustor model for the hypersonic propulsion experiment in the 16-inch shock tunnel. The 3-dimensional inlet was designed so that the combustor inlet flow is nearly two-dimensional and has the mass flow necessary for combustion. The 16-inch shock tunnel experiment is a short-duration test with a test time of the order of milliseconds. The flow through the inlet is in chemical non-equilibrium. Two test entries have been completed and limited experimental results for the inlet region of the combustor model are available. A number of CFD simulations, with various levels of simplification such as 2-D simulations, 3-D simulations with and without chemical reactions, simulations with and without turbulence, etc., have been performed. These simulations have helped determine the model inlet flow characteristics, the important factors that affect the combustor inlet flow, and the sensitivity of the flow field to these simplifications. In the proposed paper, CFD modeling of the hypersonic inlet, results from the simulations and comparisons with available experimental results will be presented.
Multi-injector modeling of transverse combustion instability experiments
NASA Astrophysics Data System (ADS)
Shipley, Kevin J.
Concurrent simulations and experiments are used to study combustion instabilities in a multiple injector element combustion chamber. The experiments employ a linear array of seven coaxial injector elements positioned atop a rectangular chamber. Different levels of instability are driven in the combustor by varying the operating and geometry parameters of the outer driving injector elements located near the chamber end-walls. The objectives of the study are to apply a reduced three-injector model to generate a computational test bed for the evaluation of injector response to transverse instability, to apply a full seven-injector model to investigate the inter-element coupling between injectors in response to transverse instability, and to further develop this integrated approach as a key element in a predictive methodology that relies heavily on subscale test and simulation. To measure the effects of the transverse wave on a central study injector element, two opposing windows are placed in the chamber to allow optical access. The chamber is extensively instrumented with high-frequency pressure transducers. High-fidelity computational fluid dynamics simulations are used to model the experiment; specifically, three-dimensional detached eddy simulations (DES) are used. Two computational approaches are investigated. The first approach models the combustor with the three center injectors and forces transverse waves in the chamber with a wall velocity function at the chamber side walls. Different levels of pressure oscillation amplitude are possible by varying the amplitude of the forcing function. The purpose of this method is to focus on the combustion response of the study element. In the second approach, all seven injectors are modeled and self-excited combustion instability is achieved. This realistic model of the chamber allows the study of inter-element flow dynamics, e.g., how the resonant motions in the injector tubes are coupled through the transverse pressure waves in the chamber. The computational results are analyzed and compared with experimental results in the time, frequency and modal domains. Results from the three-injector model show how applying different velocity forcing amplitudes changes the amplitude and spatial location of heat release from the center injector. The instability amplitudes in the simulation can be tuned to the experiments and produce similar modal combustion responses of the center injector. The reaction model applied was found to play an important role in the spatial and temporal heat release response; only when the model was calibrated to ignition delay measurements did the heat release response reflect measurements in the experiment. While insightful, the simulations are not truly predictive, because the driving frequency and forcing function amplitude are inputs to the simulation. However, the use of this approach as a tool to investigate combustion response is demonstrated. Results from the seven-injector simulations provide an insightful look at the mechanisms driving the instability in the combustor. The instability was studied over a range of pressure fluctuations, up to 70% of mean chamber pressure, produced in the self-excited simulation. At low amplitudes the transverse instability was found to be supported by both flame impingement on the side wall and vortex shedding at the primary acoustic frequency.
As the instability level grew, the primary supporting mechanism shifted to vortex impingement on the side walls, and the greatest growth was seen as additional vortices began impinging between injector elements at the primary acoustic frequency. This research reveals the advantages and limitations of applying these two modeling techniques to simulate multiple injector experiments. The advantage of the three-injector model is a simplified geometry, which results in faster model development and the ability to more rapidly study the injector response under varying velocity amplitudes. The potentially faster run time is offset, though, by the need to run multiple cases to calibrate the model to the experiment. The model is also limited to studying the central injector effect and lacks the heat release sources from the outer injectors and the additional vortex interactions shown in the seven-injector simulation. The advantage of the seven-injector model is that the whole domain can be explored to provide a better understanding of the influential processes, but it requires longer development and run times due to the extensive gridding requirement. Both simulations have proven useful in exploring transverse combustion instability and show the need to further develop subscale experiments and companion simulations in developing a full-scale combustion instability prediction capability.
Phase Distribution Phenomena for Simulated Microgravity Conditions: Experimental Work
NASA Technical Reports Server (NTRS)
Singhal, Maneesh; Bonetto, Fabian J.; Lahey, R. T., Jr.
1996-01-01
This report summarizes the work accomplished at Rensselaer to study phase distribution phenomenon under simulated microgravity conditions. Our group at Rensselaer has been able to develop sophisticated analytical models to predict phase distribution in two-phase flows under a variety of conditions. These models are based on physics and data obtained from carefully controlled experiments that are being conducted here. These experiments also serve to verify the models developed.
Experiments in Error Propagation within Hierarchal Combat Models
2015-09-01
Using a "ground up" approach, this work first develops a mission-level model for one-on-one submarine combat in Map Aware Non-uniform Automata (MANA), an agent-based simulation that can model the different postures of submarines. It then feeds the results from MANA into stochastic...
Design, construction, and evaluation of a 1:8 scale model binaural manikin.
Robinson, Philip; Xiang, Ning
2013-03-01
Many experiments in architectural acoustics require presenting listeners with simulations of different rooms to compare. Acoustic scale modeling is a feasible means to create accurate simulations of many rooms at reasonable cost. A critical component in a scale model room simulation is a receiver that properly emulates a human receiver. For this purpose, a scale model artificial head has been constructed and tested. This paper presents the design and construction methods used, proper equalization procedures, and measurements of its response. A headphone listening experiment examining sound externalization with various reflection conditions is presented that demonstrates its use for psycho-acoustic testing.
Scattering Models and Basic Experiments in the Microwave Regime
NASA Technical Reports Server (NTRS)
Fung, A. K.; Blanchard, A. J. (Principal Investigator)
1985-01-01
The objectives of research over the next three years are: (1) to develop a randomly rough surface scattering model which is applicable over the entire frequency band; (2) to develop a computer simulation method and algorithm to simulate scattering from known randomly rough surfaces, Z(x,y); (3) to design and perform laboratory experiments to study geometric and physical target parameters of an inhomogeneous layer; (4) to develop scattering models for an inhomogeneous layer which accounts for near field interaction and multiple scattering in both the coherent and the incoherent scattering components; and (5) a comparison between theoretical models and measurements or numerical simulation.
Kulhánek, Tomáš; Ježek, Filip; Mateják, Marek; Šilar, Jan; Kofránek, Jiří
2015-08-01
This work presents experiences from teaching modeling and simulation to graduate students in the field of biomedical engineering. We emphasize the acausal and object-oriented modeling technique, and we have moved from teaching the block-oriented tool MATLAB Simulink to the acausal, object-oriented Modelica language, which can express the structure of the system rather than the process of computation. However, a block-oriented approach is also possible in Modelica, and students have a tendency to express the process of computation. Using exemplar acausal domains and approaches allows students to understand the modeled problems much more deeply. The causality of the computation is derived automatically by the simulation tool.
NASA Technical Reports Server (NTRS)
Kurzeja, R. J.; Haggard, K. V.; Grose, W. L.
1981-01-01
Three experiments have been performed using a three-dimensional, spectral quasi-geostrophic model in order to investigate the sensitivity of ozone transport to tropospheric orographic and thermal effects and to the zonal wind distribution. In the first experiment, the ozone distribution averaged over the last 30 days of a 60 day transport simulation was determined; in the second experiment, the transport simulation was repeated, but nonzonal orographic and thermal forcing was omitted; and in the final experiment, the simulation was conducted with the intensity and position of the stratospheric jets altered by addition of a Newtonian cooling term to the zonal-mean diabatic heating rate. Results of the three experiments are summarized by comparing the zonal-mean ozone distribution, the amplitude of eddy geopotential height, the zonal winds, and zonal-mean diabatic heating.
ERIC Educational Resources Information Center
Whitman, David L.; Terry, Ronald E.
1985-01-01
Demonstrating petroleum engineering concepts in undergraduate laboratories often requires expensive and time-consuming experiments. To eliminate these problems, a graphical simulation technique was developed for junior-level laboratories which illustrate vapor-liquid equilibrium and the use of mathematical modeling. A description of this…
A molecular dynamics simulation study of chloroform
NASA Astrophysics Data System (ADS)
Tironi, Ilario G.; van Gunsteren, Wilfred F.
Three different chloroform models have been investigated using molecular dynamics computer simulation. The thermodynamic, structural and dynamic properties of the various models were investigated in detail. In particular, the potential energies, diffusion coefficients and rotational correlation times obtained for each model are compared with experiment. It is found that the theory of rotational Brownian motion fails in describing the rotational diffusion of chloroform. The force field of Dietz and Heinzinger was found to give good overall agreement with experiment. An extended investigation of this chloroform model has been performed. Values are reported for the isothermal compressibility, the thermal expansion coefficient and the constant volume heat capacity. The values agree well with experiment. The static and frequency dependent dielectric permittivity were computed from a 1·2 ns simulation conducted under reaction field boundary conditions. Considering the fact that the model is rigid with fixed partial charges, the static dielectric constant and Debye relaxation time compare well with experiment. From the same simulation the shear viscosity was computed using the off-diagonal elements of the pressure tensor, both via an Einstein type relation and via a Green-Kubo equation. The calculated viscosities show good agreement with experimental values. The excess Helmholtz energy is calculated using the thermodynamic integration technique and simulations of 50 and 80 ps. The value obtained for the excess Helmholtz energy matches the theoretical value within a few per cent.
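The Green-Kubo route to shear viscosity mentioned above integrates the autocorrelation of an off-diagonal pressure-tensor component, eta = V/(kB*T) * integral of <P_xy(0) P_xy(t)> dt. The sketch below evaluates that integral on a synthetic signal; the timestep, volume and temperature are illustrative values, and a white-noise signal is used only to keep the example self-contained.

```python
# Sketch of a Green-Kubo shear viscosity estimate from an off-diagonal
# pressure-tensor time series; all numbers are illustrative, not from the paper.
import numpy as np

kB = 1.380649e-23  # J/K

def green_kubo_viscosity(p_xy, dt, volume, temperature, t_max):
    """p_xy: off-diagonal pressure component (Pa); dt in s; volume in m^3."""
    n = len(p_xy)
    n_lag = int(t_max / dt)
    acf = np.array([np.mean(p_xy[: n - k] * p_xy[k:]) for k in range(n_lag)])
    integral = np.trapz(acf, dx=dt)               # time integral of the ACF
    return volume / (kB * temperature) * integral

rng = np.random.default_rng(3)
dt = 2e-15                                        # 2 fs timestep (illustrative)
p_xy = rng.normal(0.0, 5e6, 200_000)              # synthetic, uncorrelated "pressure" signal
eta = green_kubo_viscosity(p_xy, dt, volume=3.0e-26, temperature=298.0, t_max=2e-12)
print(f"eta ~ {eta:.2e} Pa s (meaningless for white noise; a real P_xy signal is correlated)")
```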
A Bone Marrow Aspirate and Trephine Simulator.
Yap, Eng Soo; Koh, Pei Lin; Ng, Chin Hin; de Mel, Sanjay; Chee, Yen Lin
2015-08-01
Bone marrow aspirate and trephine (BMAT) biopsy is a commonly performed procedure in hematology-oncology practice. Although complications are uncommon, they can cause significant morbidity and mortality. Simulation models are an excellent tool to teach novice doctors basic procedural skills before performing the actual procedure on patients to improve patient safety and well-being. There are no commercial BMAT simulators, and this technical report describes the rationale, technical specifications, and construction of a low-cost, easily constructed, reusable BMAT simulator that reproduced the tactile properties of tissue layers for use as a teaching tool in our resident BMAT simulation course. Preliminary data of learner responses to the simulator were also collected. From April 2013 to November 2013, 32 internal medicine residents underwent the BMAT simulation course. Eighteen (56%) completed the online survey, 11 residents with previous experience doing BMAT and 7 without experience. Despite the difference in operative experience, both experienced and novice residents all agreed or strongly agreed that the model aided their understanding of the BMAT procedure. All agreed or strongly agreed that this enhanced their knowledge of anatomy and 16 residents (89%) agreed or strongly agreed that this model was a realistic simulator. We present a novel, low-cost, easily constructed, realistic BMAT simulator for training novice doctors to perform BMAT.
ERIC Educational Resources Information Center
Wee, Loo Kang
2012-01-01
We develop an Easy Java Simulation (EJS) model for students to experience the physics of idealized one-dimensional collision carts. The physics model is described and simulated by both continuous dynamics and discrete transition during collision. In designing the simulations, we discuss briefly three pedagogical considerations namely (1) a…
Simulation-based modeling of building complexes construction management
NASA Astrophysics Data System (ADS)
Shepelev, Aleksandr; Severova, Galina; Potashova, Irina
2018-03-01
The study reported here examines the experience in the development and implementation of business simulation games based on network planning and management of high-rise construction. Appropriate network models of different types and levels of detail have been developed; a simulation model including 51 blocks (11 stages combined in 4 units) is proposed.
NASA Technical Reports Server (NTRS)
Kiley, C. M.; Fuelberg, Henry E.; Palmer, P. I.; Allen, D. J.; Carmichael, G. R.; Jacob, D. J.; Mari, C.; Pierce, R. B.; Pickering, K. E.; Tang, Y.
2002-01-01
Four global scale and three regional scale chemical transport models are intercompared and evaluated during NASA's TRACE-P experiment. Model-simulated and measured CO are statistically analyzed along aircraft flight tracks. Results for the combination of eleven flights show an overall negative bias in simulated CO. Biases are most pronounced during large CO events. Statistical agreements vary greatly among the individual flights. Those flights with the greatest range of CO values tend to be the worst simulated. However, for each given flight, the models generally provide similar relative results. The models exhibit difficulties simulating intense CO plumes. CO error is found to be greatest in the lower troposphere. Convective mass flux is shown to be very important, particularly near emissions source regions. Occasionally, meteorological lift associated with excessive model-calculated mass fluxes leads to an overestimation of mid- and upper-tropospheric mixing ratios. Planetary boundary layer (PBL) depth is found to play an important role in simulating intense CO plumes. PBL depth is shown to cap plumes, confining heavy pollution to the very lowest levels.
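The along-track comparison implied above boils down to simple bias and correlation statistics between simulated and measured CO. A sketch on made-up mixing ratios is shown below; the >200 ppbv threshold used to pick out "large CO events" is an illustrative assumption.

```python
# Sketch of along-track comparison statistics (mean bias and correlation of
# simulated vs. measured CO), using hypothetical mixing ratios in ppbv.
import numpy as np

obs_co = np.array([95., 110., 250., 140., 480., 120., 100.])   # measured, hypothetical
sim_co = np.array([90., 105., 180., 135., 300., 115., 102.])   # simulated, hypothetical

bias = np.mean(sim_co - obs_co)                  # negative => underestimation
r = np.corrcoef(obs_co, sim_co)[0, 1]
events = obs_co > 200.0                          # assumed threshold for "large CO events"
bias_events = np.mean(sim_co[events] - obs_co[events])
print(f"overall bias = {bias:.1f} ppbv, r = {r:.2f}, bias during CO events = {bias_events:.1f} ppbv")
```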
Experiment Analysis and Modelling of Compaction Behaviour of Ag60Cu30Sn10 Mixed Metal Powders
NASA Astrophysics Data System (ADS)
Zhou, Mengcheng; Huang, Shangyu; Liu, Wei; Lei, Yu; Yan, Shiwei
2018-03-01
A novel process method combining powder compaction and sintering was employed to fabricate thin sheets of cadmium-free silver-based filler metals, and the compaction densification behaviour of Ag60Cu30Sn10 mixed metal powders was investigated experimentally. Based on the equivalent density method, the density-dependent Drucker-Prager Cap (DPC) model was introduced to model the powder compaction behaviour. Various experimental procedures were completed to determine the model parameters. The friction coefficients in lubricated and unlubricated dies were experimentally determined. The determined material parameters were validated by experiments and numerical simulation of the powder compaction process using a user subroutine (USDFLD) in ABAQUS/Standard. The good agreement between the simulated and experimental results indicates that the determined model parameters are able to describe the compaction behaviour of the multicomponent mixed metal powders, and they can be further used for process optimization simulations.
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2017-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
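The computational cost referred to above comes from running many independent realizations of a stochastic simulation algorithm. The sketch below is a generic Gillespie direct-method kernel for a toy birth-death process written in plain Python; it is not the PyURDME or MOLNs API, and the rate constants are illustrative. It only shows the kind of Monte Carlo workload whose ensembles such cloud appliances distribute.

```python
import random
import math

# Generic Gillespie direct-method sketch for a toy birth/death process
# (0 -> X at rate k_birth; X -> 0 at rate k_death * X). This is NOT the
# PyURDME/MOLNs API; it only illustrates the Monte Carlo kernel whose many
# independent realizations motivate a distributed computing appliance.

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0, seed=None):
    rng = random.Random(seed)
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        a1 = k_birth            # propensity of birth
        a2 = k_death * x        # propensity of death
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -math.log(1.0 - rng.random()) / a0   # time to next reaction
        if rng.random() * a0 < a1:
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory

# Ensemble of independent realizations: the embarrassingly parallel workload
# that cloud appliances of this kind distribute across workers.
ensemble = [ssa_birth_death(seed=i)[-1][1] for i in range(100)]
print(sum(ensemble) / len(ensemble))   # fluctuates around k_birth/k_death = 100
```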
NASA Astrophysics Data System (ADS)
Prime, M. B.; Vaughan, D. E.; Preston, D. L.; Buttler, W. T.; Chen, S. R.; Oró, D. M.; Pack, C.
2014-05-01
Experiments applying a supported shock through mating surfaces (Atwood number = 1) with geometrical perturbations have been proposed for studying strength at strain rates up to 10^7/s using Richtmyer-Meshkov (RM) instabilities. Buttler et al. recently reported experimental results for RM instability growth in copper but with an unsupported shock applied by high explosives and the geometrical perturbations on the opposite free surface (Atwood number = -1). This novel configuration allowed detailed experimental observation of the instability growth and arrest. We present results and interpretation from numerical simulations of the Buttler RM instability experiments. Highly-resolved, two-dimensional simulations were performed using a Lagrangian hydrocode and the Preston-Tonks-Wallace (PTW) strength model. The model predictions show good agreement with the data. The numerical simulations are used to examine various assumptions previously made in an analytical model and to estimate the sensitivity of such experiments to material strength.
Sim, Adelene Y L
2016-06-01
Nucleic acids are biopolymers that carry genetic information and are also involved in various gene regulation functions such as gene silencing and protein translation. Because of their negatively charged backbones, nucleic acids are polyelectrolytes. To adequately understand nucleic acid folding and function, we need to properly describe their i) polymer/polyelectrolyte properties and ii) associated ion atmosphere. While various theories and simulation models have been developed to describe nucleic acids and the ions around them, many of these theories/simulations have not been well evaluated due to complexities in comparison with experiment. In this review, I discuss some recent experiments that have been strategically designed for straightforward comparison with theories and simulation models. Such data serve as excellent benchmarks to identify limitations in prevailing theories and simulation parameters. Copyright © 2015 Elsevier B.V. All rights reserved.
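One of the simplest ion-atmosphere quantities that enters such theory-experiment comparisons is the Debye screening length. The snippet below computes it for a monovalent salt from textbook electrostatics; it is generic background rather than a model taken from the review, and the solvent parameters are assumed values.

```python
import math

# Debye screening length of a 1:1 salt solution - one of the simplest
# ion-atmosphere quantities used when comparing polyelectrolyte theories
# with experiment. Generic electrostatics, not a model from the review.

E_CHARGE = 1.602176634e-19      # C
K_B      = 1.380649e-23         # J/K
EPS_0    = 8.8541878128e-12     # F/m
N_A      = 6.02214076e23        # 1/mol

def debye_length_nm(conc_molar, temperature_k=298.15, eps_r=78.4):
    """Debye length (nm) for a monovalent (1:1) salt of molar concentration."""
    n = conc_molar * 1e3 * N_A                 # ions of each sign per m^3
    ionic_term = 2.0 * n                       # sum over both ion species (z^2 = 1)
    kappa_sq = E_CHARGE**2 * ionic_term / (eps_r * EPS_0 * K_B * temperature_k)
    return 1e9 / math.sqrt(kappa_sq)

print(round(debye_length_nm(0.10), 2))   # ~0.96 nm for 100 mM monovalent salt
```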
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Anthony P; Hanson, Paul J; DeKauwe, Martin G
2014-01-01
Free Air CO2 Enrichment (FACE) experiments provide a remarkable wealth of data to test the sensitivities of terrestrial ecosystem models (TEMs). In this study, a broad set of 11 TEMs were compared to 22 years of data from two contrasting FACE experiments in temperate forests of the south-eastern US: the evergreen Duke Forest and the deciduous Oak Ridge forest. We evaluated the models' ability to reproduce observed net primary productivity (NPP), transpiration and leaf area index (LAI) in ambient CO2 treatments. Encouragingly, many models simulated annual NPP and transpiration within observed uncertainty. Daily transpiration model errors were often related to errors in leaf area phenology and peak LAI. Our analysis demonstrates that the simulation of LAI often drives the simulation of transpiration, and hence there is a need to adopt the most appropriate hypothesis-driven methods to simulate and predict LAI. Of the three competing hypotheses determining peak LAI, namely (1) optimisation to maximise carbon export, (2) increasing SLA with canopy depth and (3) the pipe model, the pipe model produced LAI closest to the observations. Modelled phenology was either prescribed or based on broader empirical calibrations to climate. In some cases, simulation accuracy was achieved through compensating biases in component variables. For example, NPP accuracy was sometimes achieved with counter-balancing biases in nitrogen use efficiency and nitrogen uptake. Combined analysis of parallel measurements aids the identification of offsetting biases; without it, over-confidence in model abilities to predict ecosystem function may emerge, potentially leading to erroneous predictions of change under future climates.
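A model-data benchmark of this kind ultimately reduces to a handful of summary statistics per model and variable. The sketch below shows one plausible minimal form (bias, RMSE and the fraction of years falling within observational uncertainty); all numbers are invented for illustration and are not the FACE data or the metrics used by the authors.

```python
import numpy as np

# Minimal sketch of the kind of model-data benchmarking used in such
# intercomparisons: annual NPP bias, RMSE, and the fraction of years in which
# the model falls inside the observational uncertainty. All numbers below are
# invented for illustration; they are not the FACE observations.

obs_npp   = np.array([800., 850., 790., 910., 870.])   # gC m-2 yr-1 (illustrative)
obs_unc   = np.array([ 60.,  55.,  70.,  65.,  60.])   # +/- uncertainty
model_npp = np.array([760., 900., 820., 950., 830.])   # one TEM's output (illustrative)

bias   = np.mean(model_npp - obs_npp)
rmse   = np.sqrt(np.mean((model_npp - obs_npp) ** 2))
within = np.mean(np.abs(model_npp - obs_npp) <= obs_unc)

print(f"bias = {bias:+.1f}, RMSE = {rmse:.1f}, within-uncertainty fraction = {within:.2f}")
```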
NASA Astrophysics Data System (ADS)
Clark, D. S.; Hinkel, D. E.; Eder, D. C.; Jones, O. S.; Haan, S. W.; Hammel, B. A.; Marinak, M. M.; Milovich, J. L.; Robey, H. F.; Suter, L. J.; Town, R. P. J.
2013-05-01
More than two dozen inertial confinement fusion ignition experiments with cryogenic deuterium-tritium layers have now been performed on the National Ignition Facility (NIF) [G. H. Miller et al., Opt. Eng. 443, 2841 (2004)]. Each of these yields a wealth of data including neutron yield, neutron down-scatter fraction, burn-averaged ion temperature, x-ray image shape and size, primary and down-scattered neutron image shape and size, etc. Compared to 2-D radiation-hydrodynamics simulations modeling both the hohlraum and the capsule implosion, however, the measured capsule yield is usually lower by a factor of 5 to 10, and the ion temperature varies from simulations, while most other observables are well matched between experiment and simulation. In an effort to understand this discrepancy, we perform detailed post-shot simulations of a subset of NIF implosion experiments. Using two-dimensional HYDRA simulations [M. M. Marinak, et al., Phys. Plasmas 8, 2275 (2001).] of the capsule only, these simulations represent as accurately as possible the conditions of a given experiment, including the as-shot capsule metrology, capsule surface roughness, and ice layer defects as seeds for the growth of hydrodynamic instabilities. The radiation drive used in these capsule-only simulations can be tuned to reproduce quite well the measured implosion timing, kinematics, and low-mode asymmetry. In order to simulate the experiments as accurately as possible, a limited number of fully three-dimensional implosion simulations are also being performed. Despite detailed efforts to incorporate all of the effects known and believed to be important in determining implosion performance, substantial yield discrepancies remain between experiment and simulation. Some possible alternate scenarios and effects that could resolve this discrepancy are discussed.
Modeling to predict pilot performance during CDTI-based in-trail following experiments
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Goka, T.
1984-01-01
A mathematical model was developed of the flight system with the pilot using a cockpit display of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. Both in-trail and vertical dynamics were included. The nominal spacing was based on one of three criteria (Constant Time Predictor; Constant Time Delay; or Acceleration Cue). This model was used to simulate digitally the dynamics of a string of multiple following aircraft, including response to initial position errors. The simulation was used to predict the outcome of a series of in-trail following experiments, including pilot performance in maintaining correct longitudinal spacing and vertical position. The experiments were run in the NASA Ames Research Center multi-cab cockpit simulator facility. The experimental results were then used to evaluate the model and its prediction accuracy. Model parameters were adjusted, so that modeled performance matched experimental results. Lessons learned in this modeling and prediction study are summarized.
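Of the three spacing criteria listed, the constant-time-delay rule is the simplest to sketch: the follower aims for the position the lead aircraft occupied a fixed time earlier. The fragment below is an illustrative toy, not the study's pilot/flight-system model; the delay, gain and speeds are assumed values.

```python
# Sketch of the constant-time-delay (CTD) in-trail criterion named in the
# abstract: the follower tries to occupy the position the lead aircraft held
# TAU seconds earlier. The delay, gain and flight profile are illustrative
# assumptions, not the study's parameters.

TAU = 60.0   # s, nominal time delay (assumed)
K_P = 0.05   # 1/s, proportional speed-command gain (assumed)
DT  = 1.0    # s, integration step

def lead_position(t, v_lead=80.0):
    """Lead aircraft ground track: constant 80 m/s approach speed (assumed)."""
    return v_lead * t

def simulate_follower(t_end=600.0, x0=-5200.0, v_nom=80.0):
    """Follower chases the lead's TAU-delayed position with a P-law speed command."""
    t, x = 0.0, x0
    while t < t_end:
        target = lead_position(max(t - TAU, 0.0))   # where the lead was TAU s ago
        v_cmd = v_nom + K_P * (target - x)          # feedforward plus proportional correction
        x += v_cmd * DT
        t += DT
    return lead_position(t - TAU) - x               # remaining spacing error

print(f"final spacing error: {simulate_follower():+.1f} m")
```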
Propulsion simulation for magnetically suspended wind tunnel models
NASA Technical Reports Server (NTRS)
Joshi, Prakash B.; Beerman, Henry P.; Chen, James; Krech, Robert H.; Lintz, Andrew L.; Rosen, David I.
1990-01-01
The feasibility of simulating propulsion-induced aerodynamic effects on scaled aircraft models in wind tunnels employing Magnetic Suspension and Balance Systems (MSBS) was investigated. The investigation concerned itself with techniques of generating exhaust jets of appropriate characteristics. The objectives were to: (1) define thrust and mass flow requirements of jets; (2) evaluate techniques for generating propulsive gas within volume limitations imposed by magnetically-suspended models; (3) conduct simple diagnostic experiments for techniques involving new concepts; and (4) recommend experiments for demonstration of propulsion simulation techniques. Various techniques of generating exhaust jets of appropriate characteristics were evaluated on scaled aircraft models in wind tunnels with MSBS. Four concepts of remotely-operated propulsion simulators were examined. Three conceptual designs involving innovative adaptation of convenient technologies (compressed gas cylinders, liquid, and solid propellants) were developed. The fourth innovative concept, namely, the laser-assisted thruster, which can potentially simulate both inlet and exhaust flows, was found to require very high power levels for small thrust levels.
NASA Technical Reports Server (NTRS)
Pepin, Gerard R.
1992-01-01
The simulation development associated with the network models of both the Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures is documented. The ISIS Network Model design represents satellite systems like the Advanced Communications Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) Program, moves all control and switching functions on-board the next generation ISDN communications satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from the simulation of ISIS and FSIS engineering software models for their major subsystems. Discrete event simulation experiments will be performed with these models using various traffic scenarios, design parameters, and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.
Establishing a Novel Modeling Tool: A Python-Based Interface for a Neuromorphic Hardware System
Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz
2008-01-01
Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated. PMID:19562085
Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system.
Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz
2009-01-01
Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.
NASA Astrophysics Data System (ADS)
Kodama, C.; Noda, A. T.; Satoh, M.
2012-06-01
This study presents an assessment of three-dimensional structures of hydrometeors simulated by the NICAM, global nonhydrostatic atmospheric model without cumulus parameterization, using multiple satellite data sets. A satellite simulator package (COSP: the CFMIP Observation Simulator Package) is employed to consistently compare model output with ISCCP, CALIPSO, and CloudSat satellite observations. Special focus is placed on high thin clouds, which are not observable in the conventional ISCCP data set, but can be detected by the CALIPSO observations. For the control run, the NICAM simulation qualitatively captures the geographical distributions of the high, middle, and low clouds, even though the horizontal mesh spacing is as coarse as 14 km. The simulated low cloud is very close to that of the CALIPSO low cloud. Both the CloudSat observations and NICAM simulation show a boomerang-type pattern in the radar reflectivity-height histogram, suggesting that NICAM realistically simulates the deep cloud development process. A striking difference was found in the comparisons of high thin cirrus, showing overestimated cloud and higher cloud top in the model simulation. Several model sensitivity experiments are conducted with different cloud microphysical parameters to reduce the model-observation discrepancies in high thin cirrus. In addition, relationships among clouds, Hadley circulation, outgoing longwave radiation and precipitation are discussed through the sensitivity experiments.
NASA Technical Reports Server (NTRS)
daSilva, Arlinda
2012-01-01
A model-based Observing System Simulation Experiment (OSSE) is a framework for numerical experimentation in which observables are simulated from fields generated by an earth system model, including a parameterized description of observational error characteristics. Simulated observations can be used for sampling studies, for quantifying errors in analysis or retrieval algorithms, and ultimately as a planning tool for designing new observing missions. While this framework has traditionally been used to assess the impact of observations on numerical weather prediction, it has a much broader applicability, in particular to aerosols and chemical constituents. In this talk we will give a general overview of Observing System Simulation Experiment (OSSE) activities at NASA's Global Modeling and Assimilation Office, with focus on its emerging atmospheric composition component.
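The core OSSE step, sampling a nature-run field at observation locations and adding a parameterized observation error, can be sketched in a few lines. The example below uses a synthetic field and a simple additive-plus-multiplicative Gaussian error model; it illustrates the concept only and is not the GMAO system.

```python
import numpy as np

# Core OSSE step sketched: sample a "nature run" field at instrument locations
# and add a parameterized observation error. The field, sampling pattern and
# error model are illustrative assumptions.

rng = np.random.default_rng(0)

# Synthetic "nature run": aerosol optical depth on a coarse lat-lon grid.
lat = np.linspace(-90, 90, 91)
lon = np.linspace(-180, 180, 181)
truth = 0.15 + 0.1 * np.cos(np.deg2rad(lat))[:, None] * np.ones(lon.size)[None, :]

def simulate_observations(n_obs=1000, sigma_abs=0.02, sigma_rel=0.05):
    """Draw random observation locations and perturb the sampled truth with
    additive and multiplicative Gaussian error components."""
    i = rng.integers(0, lat.size, n_obs)
    j = rng.integers(0, lon.size, n_obs)
    sampled = truth[i, j]
    obs = sampled * (1 + sigma_rel * rng.standard_normal(n_obs)) \
          + sigma_abs * rng.standard_normal(n_obs)
    return lat[i], lon[j], obs

la, lo, y = simulate_observations()
print(y.mean(), y.std())
```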
The photon identification loophole in EPRB experiments: computer models with single-wing selection
NASA Astrophysics Data System (ADS)
De Raedt, Hans; Michielsen, Kristel; Hess, Karl
2017-11-01
Recent Einstein-Podolsky-Rosen-Bohm experiments [M. Giustina et al. Phys. Rev. Lett. 115, 250401 (2015); L. K. Shalm et al. Phys. Rev. Lett. 115, 250402 (2015)] that claim to be loophole free are scrutinized. The combination of a digital computer and discrete-event simulation is used to construct a minimal but faithful model of the most perfected realization of these laboratory experiments. In contrast to prior simulations, all photon selections are strictly made, as they are in the actual experiments, at the local station and no other "post-selection" is involved. The simulation results demonstrate that a manifestly non-quantum model that identifies photons in the same local manner as in these experiments can produce correlations that are in excellent agreement with those of the quantum theoretical description of the corresponding thought experiment, in conflict with Bell's theorem which states that this is impossible. The failure of Bell's theorem is possible because of our recognition of the photon identification loophole. Such identification measurement-procedures are necessarily included in all actual experiments but are not included in the theory of Bell and his followers.
Numerical simulations of a nonequilibrium argon plasma in a shock-tube experiment
NASA Technical Reports Server (NTRS)
Cambier, Jean-Luc
1991-01-01
A code developed for the numerical modeling of nonequilibrium radiative plasmas is applied to the simulation of the propagation of strong ionizing shock waves in argon gas. The simulations attempt to reproduce a series of shock-tube experiments which will be used to validate the numerical models and procedures. The ability to perform unsteady simulations makes it possible to observe some fluctuations in the shock propagation, coupled to the kinetic processes. A coupling mechanism by pressure waves, reminiscent of oscillation mechanisms observed in detonation waves, is described. The effect of upper atomic levels is also briefly discussed.
NASA Astrophysics Data System (ADS)
Athreya, C. N.; Mukilventhan, A.; Suwas, Satyam; Vedantam, Srikanth; Subramanya Sarma, V.
2018-04-01
The influence of the mode of deformation on recrystallisation behaviour of Ti was studied by experiments and modelling. Ti samples were deformed through torsion and rolling to the same equivalent strain of 0.5. The deformed samples were annealed at different temperatures for different time durations and the recrystallisation kinetics were compared. Recrystallisation is found to be faster in the rolled samples compared to the torsion deformed samples. This is attributed to the differences in stored energy and number of nuclei per unit area in the two modes of deformation. Considering decay in stored energy during recrystallisation, the grain boundary mobility was estimated through a mean field model. The activation energy for recrystallisation obtained from experiments matched with the activation energy for grain boundary migration obtained from mobility calculation. A multi-phase field model (with mobility estimated from the mean field model as a constitutive input) was used to simulate the kinetics, microstructure and texture evolution. The recrystallisation kinetics and grain size distributions obtained from experiments matched reasonably well with the phase field simulations. The recrystallisation texture predicted through phase field simulations compares well with experiments, though a few additional texture components are present in the simulations. This is attributed to the anisotropy in grain boundary mobility, which is not accounted for in the present study.
Mazilu, I; Mazilu, D A; Melkerson, R E; Hall-Mejia, E; Beck, G J; Nshimyumukiza, S; da Fonseca, Carlos M
2016-03-01
We present exact and approximate results for a class of cooperative sequential adsorption models using matrix theory, mean-field theory, and computer simulations. We validate our models with two customized experiments using ionically self-assembled nanoparticles on glass slides. We also address the limitations of our models and their range of applicability. The exact results obtained using matrix theory can be applied to a variety of two-state systems with cooperative effects.
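A generic kinetic Monte Carlo version of cooperative sequential adsorption on a one-dimensional lattice is easy to write down and is sketched below: an empty site adsorbs at a base rate that is enhanced for each occupied nearest neighbour. The lattice size, cooperativity factor and observation time are illustrative assumptions and need not match the specific model class or the nanoparticle experiments of the paper.

```python
import math
import random

# Generic kinetic Monte Carlo sketch of cooperative sequential adsorption on a
# 1D periodic lattice: an empty site adsorbs at base rate 1, multiplied by a
# factor `coop` for each occupied nearest neighbour. Parameters are
# illustrative, not those of the paper.

def csa_coverage(n_sites=500, coop=3.0, t_end=0.5, seed=1):
    rng = random.Random(seed)
    lattice = [0] * n_sites
    t = 0.0
    while t < t_end:
        empties = [i for i, s in enumerate(lattice) if s == 0]
        if not empties:
            break
        rates = [coop ** (lattice[(i - 1) % n_sites] + lattice[(i + 1) % n_sites])
                 for i in empties]
        total = sum(rates)
        t += -math.log(1.0 - rng.random()) / total      # Gillespie time step
        # choose an empty site with probability proportional to its rate
        r, acc = rng.random() * total, 0.0
        for i, w in zip(empties, rates):
            acc += w
            if acc >= r:
                lattice[i] = 1
                break
    return sum(lattice) / n_sites

# Cooperativity speeds up filling relative to the non-cooperative case.
print(csa_coverage(coop=1.0), csa_coverage(coop=3.0))
```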
A Simple Classification Model for Debriefing Simulation Games
ERIC Educational Resources Information Center
Peters, Vincent A. M.; Vissers, Geert A. N.
2004-01-01
Debriefing is an important phase in using simulation games. Participants are invited to make a connection between experiences gained from playing the game and experiences in real-life situations. Thus, debriefing is the phase meant to encourage learning from the simulation game. Although design and practice of debriefing sessions should be aligned…
Modeling the effects of high-G stress on pilots in a tracking task
NASA Technical Reports Server (NTRS)
Korn, J.; Kleinman, D. L.
1978-01-01
Air-to-air tracking experiments were conducted at the Aerospace Medical Research Laboratories using both fixed and moving base dynamic environment simulators. The obtained data, which includes longitudinal error of a simulated air-to-air tracking task as well as other auxiliary variables, was analyzed using an ensemble averaging method. In conjunction with these experiments, the optimal control model is applied to model a human operator under high-G stress.
High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6
NASA Astrophysics Data System (ADS)
Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; Senior, Catherine A.; Bellucci, Alessio; Bao, Qing; Chang, Ping; Corti, Susanna; Fučkar, Neven S.; Guemas, Virginie; von Hardenberg, Jost; Hazeleger, Wilco; Kodama, Chihiro; Koenigk, Torben; Leung, L. Ruby; Lu, Jian; Luo, Jing-Jia; Mao, Jiafu; Mizielinski, Matthew S.; Mizuta, Ryo; Nobre, Paulo; Satoh, Masaki; Scoccimarro, Enrico; Semmler, Tido; Small, Justin; von Storch, Jin-Song
2016-11-01
Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950-2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, "what are the origins and consequences of systematic model biases?", but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.
Design and Analysis of a Static Aeroelastic Experiment
NASA Astrophysics Data System (ADS)
Hou, Ying-Yu; Yuan, Kai-Hua; Lv, Ji-Nan; Liu, Zi-Qiang
2016-06-01
Static aeroelastic experiments are very common in the United States and Russia. Their objective is to investigate the deformation and loads of an elastic structure in a flow field. Generally speaking, a prerequisite of such experiments is that the stiffness distribution of the structure is known. This paper describes a method for designing experimental models in the case where the stiffness distribution and boundary conditions of a real aircraft are both uncertain. The stiffness distribution of the structure can be calculated via finite element modeling and simulation, and F141 steel and rigid foam are used to make the elastic model. The design and manufacturing process of static aeroelastic models is presented: a model was designed to simulate the stiffness of the designed wings, and a set of experiments was designed to check the results. The test results show that the experimental method can effectively complete the design of the elastic model. The whole process of the static aeroelastic experiment is introduced and the experimental results are analyzed. A static aeroelasticity experiment technique was developed and an experiment model was established targeting the swept wing of a certain kind of large-aspect-ratio aircraft.
Nicoulaud-Gouin, V; Garcia-Sanchez, L; Giacalone, M; Attard, J C; Martin-Garin, A; Bois, F Y
2016-10-01
This paper addresses the methodological conditions (particularly experimental design and statistical inference) ensuring the identifiability of sorption parameters from breakthrough curves measured during stirred flow-through reactor experiments, also known as continuous flow stirred-tank reactor (CSTR) experiments. The equilibrium-kinetic (EK) sorption model was selected as a nonequilibrium parameterization embedding the K_d approach. Parameter identifiability was studied formally on the equations governing outlet concentrations. It was also studied numerically on 6 simulated CSTR experiments on a soil with known equilibrium-kinetic sorption parameters. EK sorption parameters cannot be identified from a single breakthrough curve of a CSTR experiment, because K_d,1 and k_- were diagnosed collinear. For pairs of CSTR experiments, Bayesian inference allowed the correct models of sorption and error to be selected among sorption alternatives. Bayesian inference was conducted with the SAMCAT software (Sensitivity Analysis and Markov Chain simulations Applied to Transfer models), which launched the simulations through the embedded simulation engine GNU-MCSim and automated their configuration and post-processing. Experimental designs consisting in varying flow rates between experiments reaching equilibrium at the contamination stage were found optimal, because they simultaneously gave accurate sorption parameters and predictions. Bayesian results were comparable to the maximum likelihood method, but they avoided convergence problems, the marginal likelihood allowed all models to be compared, and the credible interval gave directly the uncertainty of the sorption parameters θ. Although these findings are limited to the specific conditions studied here, in particular the considered sorption model, the chosen parameter values and error structure, they help in the conception and analysis of future CSTR experiments with radionuclides whose kinetic behaviour is suspected. Copyright © 2016 Elsevier Ltd. All rights reserved.
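A minimal version of the two-site CSTR mass balance implied by the EK parameterization (an equilibrium site with distribution coefficient Kd,1 plus a kinetic site with rates k+ and k-) can be written as a small ODE system. The sketch below, using SciPy's ODE integrator, is a simplified assumption about the model structure with invented parameter values; it is not the authors' exact equations or the SAMCAT/GNU-MCSim machinery.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simplified sketch of a stirred flow-through reactor (CSTR) breakthrough curve
# with a two-site structure: an equilibrium site (distribution coefficient KD1)
# and a kinetic site (rates K_PLUS, K_MINUS). Mass balance and parameter values
# are illustrative assumptions, not the authors' equations or soil parameters.

Q, V, M = 0.05, 0.25, 0.01        # flow rate (L/h), liquid volume (L), soil mass (kg)
KD1     = 5.0                     # equilibrium distribution coefficient (L/kg)
K_PLUS  = 2.0                     # kinetic sorption rate (L/kg/h)
K_MINUS = 0.5                     # kinetic desorption rate (1/h)
C_IN    = 1.0                     # inlet concentration during contamination (mg/L)

def rhs(t, y):
    c, s2 = y                                   # liquid conc., kinetically sorbed conc.
    sorption_kinetic = K_PLUS * c - K_MINUS * s2
    # equilibrium site follows c instantaneously, so its mass appears as (V + M*KD1)
    dc = (Q * (C_IN - c) - M * sorption_kinetic) / (V + M * KD1)
    ds2 = sorption_kinetic
    return [dc, ds2]

sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.0], dense_output=True)
t = np.linspace(0, 100, 11)
print(np.round(sol.sol(t)[0], 3))   # outlet concentration rising toward C_IN
```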
Russell, David A.; D'Ippolito, Daniel A.; Myra, James R.; ...
2015-09-01
The effect of lithium (Li) wall coatings on scrape-off-layer (SOL) turbulence in the National Spherical Torus Experiment (NSTX) is modeled with the Lodestar SOLT (“SOL Turbulence”) code. Specifically, the implications for the SOL heat flux width of experimentally observed, Li-induced changes in the pedestal profiles are considered. The SOLT code used in the modeling has been expanded recently to include ion temperature evolution and ion diamagnetic drift effects. This work focuses on two NSTX discharges occurring pre- and with-Li deposition. The simulation density and temperature profiles are constrained, inside the last closed flux surface only, to match those measured in the two experiments, and the resulting drift-interchange-driven turbulence is explored. The effect of Li enters the simulation only through the pedestal profile constraint: Li modifies the experimental density and temperature profiles in the pedestal, and these profiles affect the simulated SOL turbulence. The power entering the SOL measured in the experiments is matched in the simulations by adjusting “free” dissipation parameters (e.g., diffusion coefficients) that are not measured directly in the experiments. With power-matching, (a) the heat flux SOL width is smaller, as observed experimentally by infra-red thermography, and (b) the simulated density fluctuation amplitudes are reduced with Li, as inferred for the experiments as well from reflectometry analysis. The instabilities and saturation mechanisms that underlie the SOLT model equilibria are also discussed.
NASA Astrophysics Data System (ADS)
Rosland, R.; Strand, Ø.; Alunno-Bruscia, M.; Bacher, C.; Strohmeier, T.
2009-08-01
A Dynamic Energy Budget (DEB) model for simulation of growth and bioenergetics of blue mussels ( Mytilus edulis) has been tested in three low seston sites in southern Norway. The observations comprise four datasets from laboratory experiments (physiological and biometrical mussel data) and three datasets from in situ growth experiments (biometrical mussel data). Additional in situ data from commercial farms in southern Norway were used for estimation of biometrical relationships in the mussels. Three DEB parameters (shape coefficient, half saturation coefficient, and somatic maintenance rate coefficient) were estimated from experimental data, and the estimated parameters were complemented with parameter values from literature to establish a basic parameter set. Model simulations based on the basic parameter set and site specific environmental forcing matched fairly well with observations, but the model was not successful in simulating growth at the extreme low seston regimes in the laboratory experiments in which the long period of negative growth caused negative reproductive mass. Sensitivity analysis indicated that the model was moderately sensitive to changes in the parameter and initial conditions. The results show the robust properties of the DEB model as it manages to simulate mussel growth in several independent datasets from a common basic parameter set. However, the results also demonstrate limitations of Chl a as a food proxy for blue mussels and limitations of the DEB model to simulate long term starvation. Future work should aim at establishing better food proxies and improving the model formulations of the processes involved in food ingestion and assimilation. The current DEB model should also be elaborated to allow shrinking in the structural tissue in order to produce more realistic growth simulations during long periods of starvation.
An IBM PC-based math model for space station solar array simulation
NASA Technical Reports Server (NTRS)
Emanuel, E. M.
1986-01-01
This report discusses and documents the design, development, and verification of a microcomputer-based solar cell math model for simulating the Space Station's solar array Initial Operational Capability (IOC) reference configuration. The array model is developed utilizing a linear solar cell dc math model requiring only five input parameters: short circuit current, open circuit voltage, maximum power voltage, maximum power current, and orbit inclination. The accuracy of this model is investigated using actual solar array on orbit electrical data derived from the Solar Array Flight Experiment/Dynamic Augmentation Experiment (SAFE/DAE), conducted during the STS-41D mission. This simulator provides real-time simulated performance data during the steady state portion of the Space Station orbit (i.e., array fully exposed to sunlight). Eclipse to sunlight transients and shadowing effects are not included in the analysis, but are discussed briefly. Integrating the Solar Array Simulator (SAS) into the Power Management and Distribution (PMAD) subsystem is also discussed.
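One plausible reading of a "linear solar cell dc math model" built from those inputs is a piecewise-linear I-V curve through (0, Isc), (Vmp, Imp) and (Voc, 0). The sketch below follows that interpretation; both the interpretation and the cell/array numbers are assumptions for illustration, not the documented model.

```python
# Sketch of a piecewise-linear solar-cell I-V characteristic built from four of
# the five inputs named in the abstract (short-circuit current, open-circuit
# voltage, maximum-power voltage and current). The piecewise-linear reading of
# the "linear dc math model" and the sample numbers are assumptions only.

def cell_current(v, i_sc=2.5, v_oc=0.6, i_mp=2.3, v_mp=0.5):
    """Current (A) at terminal voltage v (V) for a single cell."""
    if v <= 0.0:
        return i_sc
    if v < v_mp:                       # segment from (0, Isc) to (Vmp, Imp)
        return i_sc + (i_mp - i_sc) * v / v_mp
    if v < v_oc:                       # segment from (Vmp, Imp) to (Voc, 0)
        return i_mp * (v_oc - v) / (v_oc - v_mp)
    return 0.0

def array_power(v_bus, n_series=300, n_parallel=20):
    """Power (W) of an array of identical cells at a given bus voltage (assumed sizing)."""
    v_cell = v_bus / n_series
    return v_bus * n_parallel * cell_current(v_cell)

for v in (120.0, 150.0, 170.0):
    print(v, round(array_power(v), 1))
```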
Thermal modeling with solid/liquid phase change of the thermal energy storage experiment
NASA Technical Reports Server (NTRS)
Skarda, J. Raymond Lee
1991-01-01
A thermal model which simulates combined conduction and phase change characteristics of thermal energy storage (TES) materials is presented. Both the model and results are presented for the purpose of benchmarking the conduction and phase change capabilities of recently developed and unvalidated microgravity TES computer programs. Specifically, operation of TES-1 is simulated. A two-dimensional SINDA85 model of the TES experiment in cylindrical coordinates was constructed. The phase change model accounts for latent heat stored in, or released from, a node undergoing melting and freezing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Babic, Miroslav; Kljenak, Ivo; Mavko, Borut
2006-07-01
The CFD code CFX4.4 was used to simulate an experiment in the ThAI facility, which was designed for investigation of thermal-hydraulic processes during a severe accident inside a Light Water Reactor containment. In the considered experiment, air was initially present in the vessel, and helium and steam were injected during different phases of the experiment at various mass flow rates and at different locations. The main purpose of the proposed work was to assess the capabilities of the CFD code to reproduce the atmosphere structure with a three-dimensional model, coupled with condensation models proposed by the authors. A three-dimensional model of the ThAI vessel for the CFX4.4 code was developed. The flow in the simulation domain was modeled as single-phase. Steam condensation on vessel walls was modeled as a sink of mass and energy using a correlation that was originally developed for an integral approach. A simple model of bulk phase change was also included. Calculated time-dependent variables together with temperature and volume fraction distributions at the end of different experiment phases are compared to experimental results. (authors)
Shuttle operations simulation model programmers'/users' manual
NASA Technical Reports Server (NTRS)
Porter, D. G.
1972-01-01
The prospective user of the shuttle operations simulation (SOS) model is given sufficient information to enable him to perform simulation studies of the space shuttle launch-to-launch operations cycle. The procedures used for modifying the SOS model to meet user requirements are described. The various control card sequences required to execute the SOS model are given. The report is written for users with varying computer simulation experience. A description of the components of the SOS model is included that presents both an explanation of the logic involved in the simulation of the shuttle operations cycle and a description of the routines used to support the actual simulation.
A mathematical simulation model of the CH-47B helicopter, volume 1
NASA Technical Reports Server (NTRS)
Weber, J. M.; Liu, T. Y.; Chung, W.
1984-01-01
A nonlinear simulation model of the CH-47B helicopter was adapted for use in the NASA Ames Research Center (ARC) simulation facility. The model represents the specific configuration of the ARC variable stability CH-47B helicopter and will be used in ground simulation research and to expedite and verify flight experiment design. Modeling of the helicopter uses a total force approach in six rigid body degrees of freedom. Rotor dynamics are simulated using the Wheatley-Bailey equations, including steady-state flapping dynamics. Also included in the model is the option of simulating the external-suspension slung-load equations of motion.
Euler-Lagrange Simulations of Shock Wave-Particle Cloud Interaction
NASA Astrophysics Data System (ADS)
Koneru, Rahul; Rollin, Bertrand; Ouellet, Frederick; Park, Chanyoung; Balachandar, S.
2017-11-01
Numerical experiments of a shock interacting with evolving and fixed clouds of particles are performed. In these simulations we use an Eulerian-Lagrangian approach along with state-of-the-art point-particle force and heat transfer models. As validation, we use the Sandia Multiphase Shock Tube experiments and particle-resolved simulations. The particle curtain, upon interaction with the shock wave, is expected to experience Kelvin-Helmholtz (KH) and Richtmyer-Meshkov (RM) instabilities. In the simulations evolving the particle cloud, the initial volume fraction profile matches that of the Sandia Multiphase Shock Tube experiments, and the shock Mach number is limited to M = 1.66. Measurements of particle dispersion are made at different initial volume fractions. A detailed analysis of the influence of initial conditions on the evolution of the particle cloud is presented. The early time behavior of the models is studied in the fixed bed simulations at varying volume fractions and shock Mach numbers. The mean gas quantities are measured in the context of 1-way and 2-way coupled simulations. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, Contract No. DE-NA0002378.
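Point-particle force closures of the kind mentioned here normally include at least a quasi-steady drag contribution. As a concrete textbook example, the fragment below evaluates the Schiller-Naumann drag correlation for a single sphere; the actual study uses more elaborate state-of-the-art force and heat-transfer models, and the gas and particle values are illustrative only.

```python
import math

# Example of a quasi-steady point-particle drag closure of the kind used in
# Euler-Lagrange simulations: the Schiller-Naumann correction to Stokes drag.
# A standard textbook correlation shown for illustration; the study itself
# uses more elaborate models.

def drag_force(d_p, rho_g, mu_g, u_rel):
    """Quasi-steady drag force (N) on a sphere of diameter d_p in a gas with
    density rho_g, viscosity mu_g and relative speed u_rel."""
    re = rho_g * abs(u_rel) * d_p / mu_g
    cd = 24.0 / re * (1.0 + 0.15 * re**0.687) if re > 0 else 0.0
    area = math.pi * d_p**2 / 4.0
    return 0.5 * rho_g * cd * area * u_rel * abs(u_rel)

# 100-micron particle in post-shock gas moving 50 m/s relative to the particle
# (illustrative values).
print(drag_force(d_p=100e-6, rho_g=2.0, mu_g=1.8e-5, u_rel=50.0))
```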
Seasonal changes in the atmospheric heat balance simulated by the GISS general circulation model
NASA Technical Reports Server (NTRS)
Stone, P. H.; Chow, S.; Helfand, H. M.; Quirk, W. J.; Somerville, R. C. J.
1975-01-01
Tests of the ability of numerical general circulation models to simulate the atmosphere have so far focused on simulations of the January climatology. These models generally prescribe boundary conditions such as sea surface temperature, but this does not prevent testing their ability to simulate seasonal changes in atmospheric processes that accompany prescribed seasonal changes in boundary conditions. Experiments to simulate changes in the zonally averaged heat balance are discussed, since many simplified models of climatic processes are based solely on this balance.
USDA-ARS?s Scientific Manuscript database
The data set reported here includes part of the Hot Serial Cereal (HSC) experiment recently used in the AgMIP-Wheat project to analyze the uncertainty of 30 wheat models and quantify their response to temperature. The HSC experiment was conducted in an open field in a semiarid environme...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romander, C M; Cagliostro, D J
Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-s hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, and an upper internals structure (UIS).
Impact of detector simulation in particle physics collider experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elvira, V. Daniel
Through the last three decades, precise simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the accuracy of the physics results and publication turnaround, from data-taking to submission. It also presents the economic impact and cost of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data, taxing heavily the performance of simulation and reconstruction software for increasingly complex detectors. Consequently, it becomes urgent to find solutions to speed up simulation software in order to cope with the increased demand in a time of flat budgets. The study ends with a short discussion on the potential solutions that are being explored, by leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering of HEP code for concurrency and parallel computing.
Impact of detector simulation in particle physics collider experiments
Elvira, V. Daniel
2017-06-01
Through the last three decades, precise simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the accuracy of the physics results and publication turnaround, from data-taking to submission. It also presents the economic impact and cost of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data, taxing heavily the performance of simulation and reconstruction software for increasingly complex detectors. Consequently, it becomes urgent to find solutions to speed up simulation software in order to cope with the increased demand in a time of flat budgets. The study ends with a short discussion on the potential solutions that are being explored, by leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering of HEP code for concurrency and parallel computing.
Impact of detector simulation in particle physics collider experiments
NASA Astrophysics Data System (ADS)
Daniel Elvira, V.
2017-06-01
Through the last three decades, accurate simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics (HEP) experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the precision of the physics results and publication turnaround, from data-taking to submission. It also presents estimates of the cost and economic impact of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data with increasingly complex detectors, taxing heavily the performance of simulation and reconstruction software. Consequently, exploring solutions to speed up simulation and reconstruction software to satisfy the growing demand of computing resources in a time of flat budgets is a matter that deserves immediate attention. The article ends with a short discussion on the potential solutions that are being considered, based on leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering HEP code for concurrency and parallel computing.
Spin glass model for dynamics of cell reprogramming
NASA Astrophysics Data System (ADS)
Pusuluri, Sai Teja; Lang, Alex H.; Mehta, Pankaj; Castillo, Horacio E.
2015-03-01
Recent experiments show that differentiated cells can be reprogrammed to become pluripotent stem cells. The possible cell fates can be modeled as attractors in a dynamical system, the "epigenetic landscape". Both cellular differentiation and reprogramming can be described in the landscape picture as motion from one attractor to another attractor. We perform Monte Carlo simulations in a simple model of the landscape. This model is based on spin glass theory and it can be used to construct a simulated epigenetic landscape starting from the experimental genomic data. We re-analyse data from several cell reprogramming experiments and compare with our simulation results. We find that the model can reproduce some of the main features of the dynamics of cell reprogramming.
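A minimal way to see the attractor picture is a Hopfield-style spin model: cell fates stored as patterns through Hebbian couplings, with reprogramming corresponding to kicking the state out of one basin. The sketch below is that generic toy, not the paper's landscape constructed from genomic data; network size, number of patterns and the perturbation level are arbitrary illustrative choices.

```python
import numpy as np

# Minimal Hopfield-style sketch of the attractor picture: cell fates are stored
# as spin patterns, Hebbian couplings define the landscape, and asynchronous
# updates relax a perturbed state back into the nearest basin. Generic toy,
# not the paper's landscape built from genomic data.

rng = np.random.default_rng(0)
N, P = 200, 3                                   # genes (spins) and cell types (patterns)
patterns = rng.choice([-1, 1], size=(P, N))     # idealized cell-fate expression patterns
J = (patterns.T @ patterns) / N                 # Hebbian couplings
np.fill_diagonal(J, 0.0)

def relax(state, sweeps=20):
    """Zero-temperature asynchronous dynamics: flip spins that lower the energy."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

def overlap(s, mu):
    return float(patterns[mu] @ s) / N

# "Reprogramming" toy: start near fate 0, flip 30% of the genes, and relax.
start = patterns[0].copy()
flip = rng.choice(N, size=int(0.3 * N), replace=False)
start[flip] *= -1
final = relax(start)
print([round(overlap(final, mu), 2) for mu in range(P)])
```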
SIGMA--A Graphical Approach to Teaching Simulation.
ERIC Educational Resources Information Center
Schruben, Lee W.
1992-01-01
SIGMA (Simulation Graphical Modeling and Analysis) is a computer graphics environment for building, testing, and experimenting with discrete event simulation models on personal computers. It uses symbolic representations (computer animation) to depict the logic of large, complex discrete event systems for easier understanding and has proven itself…
Planetary Boundary Layer Simulation Using TASS
NASA Technical Reports Server (NTRS)
Schowalter, David G.; DeCroix, David S.; Lin, Yuh-Lang; Arya, S. Pal; Kaplan, Michael
1996-01-01
Boundary conditions to an existing large-eddy simulation model have been changed in order to simulate turbulence in the atmospheric boundary layer. Several options are now available, including the use of a surface energy balance. In addition, we compare convective boundary layer simulations with the Wangara and Minnesota field experiments as well as with other model results. We find excellent agreement of modelled mean profiles of wind and temperature with observations and good agreement for velocity variances. Neutral boundary layer simulation results are compared with theory and with previously used models. Agreement with theory is reasonable, while agreement with previous models is excellent.
NASA Astrophysics Data System (ADS)
Juhui, Chen; Yanjia, Tang; Dan, Li; Pengfei, Xu; Huilin, Lu
2013-07-01
Flow behavior of gas and particles is predicted by the large eddy simulation of gas-second order moment of solid (LES-SOM) model in the simulation of flow in a circulating fluidized bed (CFB). This study shows that the simulated solid volume fractions along the height using a two-dimensional model are in agreement with experiments. The velocity, volume fraction and second-order moments of particles are computed. The second-order moments of clusters are calculated. The solid volume fraction, velocity and second-order moments are compared for three different model constants.
A New Approach for Coupled GCM Sensitivity Studies
NASA Astrophysics Data System (ADS)
Kirtman, B. P.; Duane, G. S.
2011-12-01
A new multi-model approach for coupled GCM sensitivity studies is presented. The purpose of the sensitivity experiments is to understand why two different coupled models have such large differences in their respective climate simulations. In the application presented here, the differences between the coupled models using the Center for Ocean-Land-Atmosphere Studies (COLA) and the National Center for Atmospheric Research (NCAR) atmospheric general circulation models (AGCMs) are examined. The intent is to isolate which component of the air-sea fluxes is most responsible for the differences between the coupled models and for the errors in their respective coupled simulations. The procedure is to simultaneously couple the two different atmospheric component models to a single ocean general circulation model (OGCM), in this case the Modular Ocean Model (MOM) developed at the Geophysical Fluid Dynamics Laboratory (GFDL). Each atmospheric component model experiences the same SST produced by the OGCM, but the OGCM is simultaneously coupled to both AGCMs using a cross coupling strategy. In the first experiment, the OGCM is coupled to the heat and fresh water flux from the NCAR AGCM (Community Atmospheric Model; CAM) and the momentum flux from the COLA AGCM. Both AGCMs feel the same SST. In the second experiment, the OGCM is coupled to the heat and fresh water flux from the COLA AGCM and the momentum flux from the CAM AGCM. Again, both atmospheric component models experience the same SST. By comparing these two experimental simulations with control simulations where only one AGCM is used, it is possible to argue which of the flux components are most responsible for the differences in the simulations and their respective errors. Based on these sensitivity experiments we conclude that the tropical ocean warm bias in the COLA coupled model is due to errors in the heat flux, and that the erroneous westward shift in the tropical Pacific cold tongue minimum in the NCAR model is due to errors in the momentum flux. All the coupled simulations presented here have warm biases along the eastern boundary of the tropical oceans suggesting that the problem is common to both AGCMs. In terms of interannual variability in the tropical Pacific, the CAM momentum flux is responsible for the erroneous westward extension of the sea surface temperature anomalies (SSTA) and errors in the COLA momentum flux cause the erroneous eastward migration of the El Niño-Southern Oscillation (ENSO) events. These conclusions depend on assuming that the error due to the OGCM can be neglected.
NASA Astrophysics Data System (ADS)
Fen, Cao; XuHai, Yang; ZhiGang, Li; ChuGang, Feng
2016-08-01
The normal consecutive observing model in the Chinese Area Positioning System (CAPS) can only supply observations of one GEO satellite per day from one station. However, this cannot satisfy the project's need to observe many GEO satellites in one day. In order to obtain observations of several GEO satellites in one day, as in GPS/GLONASS/Galileo/BeiDou, a time-sharing observing model for GEO satellites in CAPS needs to be investigated. The principle of the time-sharing observing model is illustrated, followed by Precise Orbit Determination (POD) experiments using simulated time-sharing observations in 2005 and real time-sharing observations in 2015. In the time-sharing simulation experiments before 2014, observing 6 GEO satellites every 2 h gives nearly the same orbit precision as the consecutive observing model. In the POD experiments using the real time-sharing observations, the POD precision for ZX12# and Yatai7# is about 3.234 m and 2.570 m, respectively, which indicates that the time-sharing observing model is appropriate for the CBTR system and can realize observing many GEO satellites in one day.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Babic, Miroslav; Kljenak, Ivo; Mavko, Borut
2006-07-01
The CFD code CFX4.4 was used to simulate an experiment in the ThAI facility, which was designed for investigation of thermal-hydraulic processes during a severe accident inside a Light Water Reactor containment. In the considered experiment, air was initially present in the vessel, and helium and steam were injected during different phases of the experiment at various mass flow rates and at different locations. The main purpose of the simulation was to reproduce the non-homogeneous temperature and species concentration distributions in the ThAI experimental facility. A three-dimensional model of the ThAI vessel for the CFX4.4 code was developed. The flow in the simulation domain was modeled as single-phase. Steam condensation on vessel walls was modeled as a sink of mass and energy using a correlation that was originally developed for an integral approach. A simple model of bulk phase change was also introduced. The calculated time-dependent variables together with temperature and concentration distributions at the end of experiment phases are compared to experimental results. (authors)
Model simulation and experiments of flow and mass transport through a nano-material gas filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofan; Zheng, Zhongquan C.; Winecki, Slawomir
2013-11-01
A computational model for evaluating the performance of nano-material packed-bed filters was developed. The porous effects of the momentum and mass transport within the filter bed were simulated. For the momentum transport, an extended Ergun-type model was employed and the energy loss (pressure drop) along the packed bed was simulated and compared with measurement. For the mass transport, a bulk adsorption model was developed to study the adsorption process (breakthrough behavior). Various types of porous materials and gas flows were tested in the filter system, where the mathematical models used in the porous substrate were implemented and validated by comparing with experimental data and analytical solutions under similar conditions. Good agreements were obtained between experiments and model predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, Paolo; Theiler, C.; Fasoli, A.
A methodology for plasma turbulence code validation is discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The present work extends the analysis carried out in a previous paper [P. Ricci et al., Phys. Plasmas 16, 055703 (2009)] where the validation observables were introduced. Here, it is discussed how to quantify the agreement between experiments and simulations with respect to each observable, how to define a metric to evaluate this agreement globally, and - finally - how to assess the quality of a validation procedure. The methodology is then applied to the simulation of the basic plasma physics experiment TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulation models.
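A composite metric of the general kind described, per-observable discrepancies normalized by combined uncertainties and averaged with weights, can be sketched compactly. The functional form and the sample numbers below are illustrative assumptions and are not the exact metric defined in the paper.

```python
import math

# Sketch of a composite validation metric: each observable contributes a
# discrepancy normalized by the combined experimental and simulation
# uncertainties, and the contributions are averaged with weights (e.g.,
# reflecting how directly an observable is measured). The functional form
# and numbers are illustrative, not the paper's exact metric.

def level_of_agreement(sim, exp, sigma_sim, sigma_exp):
    """Normalized discrepancy for one observable (0 = perfect agreement)."""
    return abs(sim - exp) / math.sqrt(sigma_sim**2 + sigma_exp**2)

def composite_metric(observables):
    """Weighted average of per-observable discrepancies.
    observables: list of (sim, exp, sigma_sim, sigma_exp, weight)."""
    num = sum(w * level_of_agreement(s, e, ss, se) for s, e, ss, se, w in observables)
    den = sum(w for *_, w in observables)
    return num / den

# Illustrative entries: (simulated value, measured value, sim. sigma, exp. sigma, weight)
obs = [
    (1.8, 2.1, 0.2, 0.3, 1.0),    # e.g., relative fluctuation amplitude
    (12.0, 10.5, 1.5, 1.0, 0.7),  # e.g., radial profile scale length (mm)
    (0.45, 0.40, 0.05, 0.08, 0.5),
]
print(round(composite_metric(obs), 2))
```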
A Program for Simulated Thermodynamic Experiments.
ERIC Educational Resources Information Center
Olds, Dan W.
A time-sharing FORTRAN program is described. It was created to allow a student to design and perform classical thermodynamic experiments on three models of a working substance. One goal was to develop a simulation which gave the student maximum freedom and responsibility in the design of the experiment and provided only the primary experimental…
Measurement and simulation of deformation and stresses in steel casting
NASA Astrophysics Data System (ADS)
Galles, D.; Monroe, C. A.; Beckermann, C.
2012-07-01
Experiments are conducted to measure displacements and forces during casting of a steel bar in a sand mold. In some experiments the bar is allowed to contract freely, while in others the bar is manually strained using embedded rods connected to a frame. Solidification and cooling of the experimental castings are simulated using a commercial code, and good agreement between measured and predicted temperatures is obtained. The deformations and stresses in the experiments are simulated using an elasto-viscoplastic finite-element model. The high temperature mechanical properties are estimated from data available in the literature. The mush is modeled using porous metal plasticity theory, where the coherency and coalescence solid fraction are taken into account. Good agreement is obtained between measured and predicted displacements and forces. The results shed considerable light on the modeling of stresses in steel casting and help in developing more accurate models for predicting hot tears and casting distortions.
Hurricanes and Climate: the U.S. CLIVAR Working Group on Hurricanes
NASA Technical Reports Server (NTRS)
Walsh, Kevin; Camargo, Suzana J.; Vecchi, Gabriel A.; Daloz, Anne Sophie; Elsner, James; Emanuel, Kerry; Horn, Michael; Lim, Young-Kwon; Roberts, Malcolm; Patricola, Christina;
2015-01-01
While a quantitative climate theory of tropical cyclone formation remains elusive, considerable progress has been made recently in our ability to simulate tropical cyclone climatologies and understand the relationship between climate and tropical cyclone formation. Climate models are now able to simulate a realistic rate of global tropical cyclone formation, although simulation of the Atlantic tropical cyclone climatology remains challenging unless horizontal resolutions finer than 50 km are employed. The idealized experiments of the Hurricane Working Group of U.S. CLIVAR, combined with results from other model simulations, have suggested relationships between tropical cyclone formation rates and climate variables such as mid-tropospheric vertical velocity. Systematic differences are shown between experiments in which only sea surface temperature is increased versus experiments where only atmospheric carbon dioxide is increased, with the carbon dioxide experiments more likely to demonstrate a decrease in numbers. Further experiments are proposed that may improve our understanding of the relationship between climate and tropical cyclone formation, including experiments with two-way interaction between the ocean and the atmosphere and variations in atmospheric aerosols.
Pathak, Shriram M; Ruff, Aaron; Kostewicz, Edmund S; Patel, Nikunjkumar; Turner, David B; Jamei, Masoud
2017-12-04
Mechanistic modeling of in vitro data generated from metabolic enzyme systems (viz., liver microsomes, hepatocytes, rCYP enzymes, etc.) facilitates in vitro-in vivo extrapolation (IVIV_E) of metabolic clearance which plays a key role in the successful prediction of clearance in vivo within physiologically-based pharmacokinetic (PBPK) modeling. A similar concept can be applied to solubility and dissolution experiments whereby mechanistic modeling can be used to estimate intrinsic parameters required for mechanistic oral absorption simulation in vivo. However, this approach has not widely been applied within an integrated workflow. We present a stepwise modeling approach where relevant biopharmaceutics parameters for ketoconazole (KTZ) are determined and/or confirmed from the modeling of in vitro experiments before being directly used within a PBPK model. Modeling was applied to various in vitro experiments, namely: (a) aqueous solubility profiles to determine intrinsic solubility, salt limiting solubility factors and to verify pKa; (b) biorelevant solubility measurements to estimate bile-micelle partition coefficients; (c) fasted state simulated gastric fluid (FaSSGF) dissolution for formulation disintegration profiling; and (d) transfer experiments to estimate supersaturation and precipitation parameters. These parameters were then used within a PBPK model to predict the dissolved and total (i.e., including the precipitated fraction) concentrations of KTZ in the duodenum of a virtual population and compared against observed clinical data. The developed model well characterized the intraluminal dissolution, supersaturation, and precipitation behavior of KTZ. The mean simulated AUC0-t of the total and dissolved concentrations of KTZ were comparable to (within 2-fold of) the corresponding observed profile. Moreover, the developed PBPK model of KTZ successfully described the impact of supersaturation and precipitation on the systemic plasma concentration profiles of KTZ for 200, 300, and 400 mg doses. These results demonstrate that IVIV_E applied to biopharmaceutical experiments can be used to understand and build confidence in the quality of the input parameters and mechanistic models used for mechanistic oral absorption simulations in vivo, thereby improving the prediction performance of PBPK models. Moreover, this approach can inform the selection and design of in vitro experiments, potentially eliminating redundant experiments and thus helping to reduce the cost and time of drug product development.
Simulation of the GEM detector for BM@N experiment
NASA Astrophysics Data System (ADS)
Baranov, Dmitriy; Rogachevsky, Oleg
2017-03-01
The Gas Electron Multiplier (GEM) detector is one of the basic parts of the BM@N experiment included in the NICA project. The simulation model that takes into account features of signal generation process in an ionization GEM chamber is presented in this article. Proper parameters for the simulation were extracted from data retrieved with the help of Garfield++ (a toolkit for the detailed simulation of particle detectors). Due to this, we are able to generate clusters in layers of the micro-strip readout that correspond to clusters retrieved from a real physics experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vance, B.; Mendillo, M.
1981-04-30
A three-dimensional model of the ionosphere was developed including chemical reactions and neutral and plasma transport. The model uses Finite Element Simulation to simulate ionospheric modification rather than solving a set of differential equations. The initial conditions of the Los Alamos Scientific Laboratory experiments, Lagopedo Uno and Dos, were input to the model, and these events were simulated. Simulation results were compared to ground and rocketborne electron-content measurements. A simulation of the transport of released SF6 was also made.
Process for Design Optimization of Honeycomb Core Sandwich Panels for Blast Load Mitigation
2012-12-01
experiments. Numerical simulation using a single ‘Y’ cross-sectional unit cell model predicted the crush behavior quite well compared to experiments with...of foil glued together by an adhesive. LS-DYNA is used to carry out the virtual simulation. The foil is modeled by quadrilateral Belytschko-Tsay...aluminum alloy with bilinear isotropic-hardening elastoplastic material model is used for the foil. Since the yield and ultimate strength of the AL5052
Determining erosion relevant soil characteristics with a small-scale rainfall simulator
NASA Astrophysics Data System (ADS)
Schindewolf, M.; Schmidt, J.
2009-04-01
The use of soil erosion models is of great importance in soil and water conservation. Routine application of these models on the regional scale is limited not least by their high parameter demands. Although the EROSION 3D simulation model operates with a comparably low number of parameters, some of the model input variables can only be determined by rainfall simulation experiments. The existing database of EROSION 3D was created in the mid 90s based on large-scale rainfall simulation experiments on 22x2 m experimental plots. Up to now this database does not cover all soil and field conditions adequately. Therefore, a new campaign of experiments is essential to produce additional information, especially with respect to the effects of new soil management practices (e.g. long-term conservation tillage, no tillage). The rainfall simulator used in the current campaign consists of 30 identical modules, which are equipped with oscillating rainfall nozzles. Veejet 80/100 nozzles (Spraying Systems Co., Wheaton, IL) are used in order to ensure the best possible comparability with natural rainfall with respect to raindrop size distribution and momentum transfer. The central objectives for the small-scale rainfall simulator are efficient application and the provision of results comparable to large-scale rainfall simulation experiments. A crucial problem in using the small-scale simulator is its restriction to rather small volume rates of surface runoff. Under these conditions soil detachment is governed by raindrop impact, so the impact of surface runoff on particle detachment cannot be reproduced adequately by a small-scale rainfall simulator. With this problem in mind, this paper presents an enhanced small-scale simulator which allows a virtual multiplication of the plot length by feeding additional sediment-loaded water onto the plot from upstream. It is thus possible to overcome the plot length limitation of 3 m while reproducing nearly the same flow conditions as in rainfall experiments on standard plots. The simulator has been extensively applied to plots of different soil types, crop types and management systems. The comparison with existing data sets obtained by large-scale rainfall simulations shows that results can adequately be reproduced by the applied combination of small-scale rainfall simulator and sediment-loaded water influx.
Drewes, Rich; Zou, Quan; Goodman, Philip H
2009-01-01
Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading "glue" tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS.
Drewes, Rich; Zou, Quan; Goodman, Philip H.
2008-01-01
Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading “glue” tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS. PMID:19506707
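Since the Brainlab abstracts above describe Python as a "glue" layer for coordinating many model variants, runs, and analyses, the following is a minimal sketch of that kind of workflow: building model descriptions, sweeping parameters, and recording run artifacts. The functions are stand-ins for illustration only and are not the actual Brainlab/NCS API.

```python
import itertools, json, pathlib, tempfile

def build_model(n_cells, synapse_weight):
    """Return a model description as a plain dictionary (placeholder for a real model builder)."""
    return {"cells": n_cells, "weight": synapse_weight, "stimulus": "poisson_10Hz"}

def run_simulation(model, seconds=1.0):
    """Write the model description to a config file; a real workflow would now launch the simulator."""
    cfg = pathlib.Path(tempfile.mkdtemp()) / "model.json"
    cfg.write_text(json.dumps(model))
    return {"config": str(cfg), "duration_s": seconds}

# Parameter sweep over network size and synaptic weight: the repetitive bookkeeping
# that a scripting layer automates.
results = [run_simulation(build_model(n, w))
           for n, w in itertools.product([100, 200], [0.5, 1.0, 2.0])]
print(len(results), "runs prepared")
```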
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias
2016-08-11
This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
English, Shawn Allen; Nelson, Stacy Michelle; Briggs, Timothy
Presented is a model verification and validation effort using low-velocity impact (LVI) of carbon fiber reinforced polymer laminate experiments. A flat cylindrical indenter impacts the laminate with enough energy to produce delamination, matrix cracks and fiber breaks. Included in the experimental efforts are ultrasonic scans of the damage for qualitative validation of the models. However, the primary quantitative metrics of validation are the force time history measured through the instrumented indenter and the initial and final velocities. The simulations, which are run on Sandia's Sierra finite element codes, consist of all physics and material parameters of importance as determined by a sensitivity analysis conducted on the LVI simulation. A novel orthotropic damage and failure constitutive model that is capable of predicting progressive composite damage and failure is described in detail, and material properties are measured, estimated from micromechanics or optimized through calibration. A thorough verification and calibration to the accompanying experiments are presented. Special emphasis is given to the four-point bend experiment. For all simulations of interest, the mesh and material behavior is verified through extensive convergence studies. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution which is then compared to experimental output. The result is a quantifiable confidence in material characterization and model physics when simulating this phenomenon in structures of interest.
Locating Anomalies in Complex Data Sets Using Visualization and Simulation
NASA Technical Reports Server (NTRS)
Panetta, Karen
2001-01-01
The research goals are to create a simulation framework that can accept any combination of models written at the gate or behavioral level. The framework provides the ability to fault simulate and create scenarios of experiments using concurrent simulation. In order to meet these goals we have had to fulfill the following requirements: the ability to accept models written in VHDL, Verilog or C; the ability to propagate faults through any model type; the ability to create experiment scenarios efficiently without generating every possible combination of variables; and the ability to accept a diversity of fault models beyond the single stuck-at model. Major development has been done on a parser that can accept models written in various languages. This work has generated considerable attention from other universities and industry for its flexibility and usefulness. The parser uses LEXX and YACC to parse Verilog and C. We have also utilized our industrial partnership with Alternative System's Inc. to import VHDL into our simulator. For multilevel simulation, we needed to modify the simulator architecture to accept models that contained multiple outputs. This enabled us to accept behavioral components. The next major accomplishment was the addition of "functional fault models". Functional fault models change the behavior of a gate or model. For example, a bridging fault can make an OR gate behave like an AND gate. This has applications beyond fault simulation. This modeling flexibility will make the simulator more useful for doing verification and model comparison. For instance, two or more versions of an ALU can be comparatively simulated in a single execution. The results will show where and how the models differed so that the performance and correctness of the models may be evaluated. A considerable amount of time has been dedicated to validating the simulator performance on larger models provided by industry and other universities.
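To illustrate the "functional fault model" idea described above (a bridging fault turning an OR gate into an AND gate), here is a minimal gate-level simulation sketch in which a fault is injected by swapping a gate's function. The netlist format and names are illustrative, not the framework's actual input language.

```python
GOOD = {"AND": lambda ins: all(ins), "OR": lambda ins: any(ins), "NOT": lambda ins: not ins[0]}

def simulate(netlist, inputs, faults=None):
    """netlist: list of (output_name, gate_type, [input_names]); faults: {output_name: gate_type}."""
    faults = faults or {}
    values = dict(inputs)
    for out, gate, ins in netlist:
        gate = faults.get(out, gate)          # apply the functional fault, if any
        values[out] = GOOD[gate]([values[i] for i in ins])
    return values

netlist = [("n1", "OR", ["a", "b"]), ("y", "AND", ["n1", "c"])]
good = simulate(netlist, {"a": 1, "b": 0, "c": 1})
faulty = simulate(netlist, {"a": 1, "b": 0, "c": 1}, faults={"n1": "AND"})
print(good["y"], faulty["y"])   # differing outputs expose the fault for this test vector
```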
Fate and transport of phenol in a packed bed reactor containing simulated solid waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saquing, Jovita M., E-mail: jmsaquing@gmail.com; Knappe, Detlef R.U., E-mail: knappe@ncsu.edu; Barlaz, Morton A., E-mail: barlaz@ncsu.edu
Highlights: • Anaerobic column experiments were conducted at 37 °C using a simulated waste mixture. • Sorption and biodegradation model parameters were determined from batch tests. • HYDRUS simulated well the fate and transport of phenol in a fully saturated waste column. • The batch biodegradation rate and the rate obtained by inverse modeling differed by a factor of ~2. • Tracer tests showed the importance of hydrodynamic parameters to improve model estimates. - Abstract: An assessment of the risk to human health and the environment associated with the presence of organic contaminants (OCs) in landfills necessitates reliable predictive models. The overall objectives of this study were to (1) conduct column experiments to measure the fate and transport of an OC in a simulated solid waste mixture, (2) compare the results of column experiments to model predictions using HYDRUS-1D (version 4.13), a contaminant fate and transport model that can be parameterized to simulate the laboratory experimental system, and (3) determine model input parameters from independently conducted batch experiments. Experiments were conducted in which sorption only and sorption plus biodegradation influenced OC transport. HYDRUS-1D can reasonably simulate the fate and transport of phenol in an anaerobic and fully saturated waste column in which biodegradation and sorption are the prevailing fate processes. The agreement between model predictions and column data was imperfect (i.e., within a factor of two) for the sorption plus biodegradation test and the error almost certainly lies in the difficulty of measuring a biodegradation rate that is applicable to the column conditions. Nevertheless, a biodegradation rate estimate that is within a factor of two or even five may be adequate in the context of a landfill, given the extended retention time and the fact that leachate release will be controlled by the infiltration rate, which can be minimized by engineering controls.
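As a rough illustration of the sorption-plus-biodegradation parameters discussed above, the sketch below computes a linear-sorption retardation factor and the fraction of a solute pulse remaining after traveling a saturated column, assuming first-order decay acts on the total (sorbed plus dissolved) mass. It is a simplification of what HYDRUS-1D solves, and the numbers are illustrative, not the study's values.

```python
import math

def retardation_factor(bulk_density, Kd, porosity):
    """R = 1 + rho_b*Kd/theta for linear equilibrium sorption."""
    return 1.0 + bulk_density * Kd / porosity

def remaining_fraction(length, velocity, R, k_decay):
    """Fraction of a solute pulse remaining after moving through a column of given length (m)
    at pore-water velocity (m/d), retarded by R, decaying at first-order rate k_decay (1/d)."""
    travel_time = length * R / velocity
    return math.exp(-k_decay * travel_time)

R = retardation_factor(bulk_density=0.4, Kd=2.0, porosity=0.5)   # illustrative values
print(R, remaining_fraction(length=0.5, velocity=0.1, R=R, k_decay=0.05))
```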
Performance Impact of Deflagration to Detonation Transition Enhancing Obstacles
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Schauer, Frederick; Hopper, David
2012-01-01
A sub-model is developed to account for the drag and heat transfer enhancement resulting from deflagration-to-detonation (DDT) inducing obstacles commonly used in pulse detonation engines (PDE). The sub-model is incorporated as a source term in a time-accurate, quasi-one-dimensional, CFD-based PDE simulation. The simulation and sub-model are then validated through comparison with a particular experiment in which limited DDT obstacle parameters were varied. The simulation is then used to examine the relative contributions from drag and heat transfer to the reduced thrust which is observed. It is found that heat transfer is far more significant than aerodynamic drag in this particular experiment.
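For orientation, the following is a hedged sketch of the kind of per-unit-volume drag and heat-transfer source terms such a sub-model adds to a quasi-one-dimensional solver. The drag coefficient, wetted-area density and heat-transfer coefficient below are placeholders, not the paper's calibrated values.

```python
def obstacle_sources(rho, u, T, T_wall, Cd=1.0, area_per_volume=50.0, h=500.0):
    """Return (momentum_sink, energy_sink) per unit volume.

    rho : gas density (kg/m^3)      u      : gas velocity (m/s)
    T   : gas temperature (K)       T_wall : obstacle wall temperature (K)
    Cd  : drag coefficient (-)      area_per_volume : wetted area density (1/m)
    h   : convective heat-transfer coefficient (W/m^2/K)
    """
    momentum_sink = 0.5 * rho * abs(u) * u * Cd * area_per_volume   # N/m^3
    energy_sink = h * area_per_volume * (T - T_wall)                # W/m^3
    return momentum_sink, energy_sink

print(obstacle_sources(rho=1.2, u=300.0, T=1500.0, T_wall=400.0))
```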
Rodman Linn; Kerry Anderson; Judith Winterkamp; Alyssa Broos; Michael Wotton; Jean-Luc Dupuy; Francois Pimont; Carleton Edminster
2012-01-01
Field experiments are one way to develop or validate wildland fire-behavior models. It is important to consider the implications of assumptions relating to the locality of measurements with respect to the fire, the temporal frequency of the measured data, and the changes to local winds that might be caused by the experimental configuration. Twenty FIRETEC simulations...
NASA Astrophysics Data System (ADS)
Maljaars, E.; Felici, F.; Blanken, T. C.; Galperti, C.; Sauter, O.; de Baar, M. R.; Carpanese, F.; Goodman, T. P.; Kim, D.; Kim, S. H.; Kong, M.; Mavkov, B.; Merle, A.; Moret, J. M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A. A.; Vu, N. M. T.; The EUROfusion MST1-team; The TCV-team
2017-12-01
The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety factor profile (q-profile) and kinetic plasma parameters such as the plasma beta. This demands the establishment of reliable profile control routines in presently operational tokamaks. We present a model predictive profile controller that controls the q-profile and plasma beta using power requests to two clusters of gyrotrons and the plasma current request. The performance of the controller is analyzed in both simulation and TCV L-mode discharges where successful tracking of the estimated inverse q-profile as well as plasma beta is demonstrated under uncertain plasma conditions and the presence of disturbances. The controller exploits the knowledge of the time-varying actuator limits in the actuator input calculation itself such that fast transitions between targets are achieved without overshoot. A software environment is employed to prepare and test this and three other profile controllers in parallel in simulations and experiments on TCV. This set of tools includes the rapid plasma transport simulator RAPTOR and various algorithms to reconstruct the plasma equilibrium and plasma profiles by merging the available measurements with model-based predictions. In this work the estimated q-profile is merely based on RAPTOR model predictions due to the absence of internal current density measurements in TCV. These results encourage the further exploitation of model predictive profile control in experiments on TCV and other (future) tokamaks.
Oceanic response to tropical cyclone `Phailin' in the Bay of Bengal
NASA Astrophysics Data System (ADS)
Pant, V.; Prakash, K. R.
2016-02-01
Vertical mixing largely explains surface cooling induced by Tropical Cyclones (TCs). However, TC-induced upwelling of deeper waters plays an important role as it partly balances the warming of subsurface waters induced by vertical mixing. Below 100 m, vertical advection results in cooling that persists for a few days after the storm. The present study investigates the integrated ocean response to tropical cyclone `Phailin' (10-14 October 2013) in the Bay of Bengal (BoB) through both coupled and stand-alone ocean-atmosphere models. Two numerical experiments with different coupling configurations between the Regional Ocean Modelling System (ROMS) and the Weather Research and Forecasting (WRF) model were performed to investigate the impact of cyclone Phailin on the surface and sub-surface oceanic parameters. In the first experiment, the ocean circulation model ROMS receives surface wind forcing from a mesoscale atmospheric model (WRF with a nested domain setup), while the remaining forcing parameters are supplied to ROMS from NCEP data. In the second experiment, all surface forcing data for ROMS come directly from WRF. The modeling components and the data fields exchanged between the atmospheric and oceanic models are described. The coupled modeling system is used to identify model sensitivity by exchanging prognostic variable fields between the two model components during simulation of cyclone Phailin (10-14 October 2013) in the BoB. In general, the simulated Phailin cyclone track and intensities agree well with observations in the WRF simulations. Further, the inter-comparison between stand-alone and coupled model simulations, validated against observations, highlights the better performance of the coupled modeling system in simulating the oceanic conditions during the Phailin cyclone event.
Evaluation of the flame propagation within an SI engine using flame imaging and LES
NASA Astrophysics Data System (ADS)
He, Chao; Kuenne, Guido; Yildar, Esra; van Oijen, Jeroen; di Mare, Francesca; Sadiki, Amsini; Ding, Carl-Philipp; Baum, Elias; Peterson, Brian; Böhm, Benjamin; Janicka, Johannes
2017-11-01
This work shows experiments and simulations of the fired operation of a spark ignition engine with port-fuelled injection. The test rig considered is an optically accessible single cylinder engine specifically designed at TU Darmstadt for the detailed investigation of in-cylinder processes and model validation. The engine was operated under lean conditions using iso-octane as a substitute for gasoline. Experiments have been conducted to provide a sound database of the combustion process. A planar flame imaging technique has been applied within the swirl- and tumble-planes to provide statistical information on the combustion process to complement a pressure-based comparison between simulation and experiments. This data is then analysed and used to assess the large eddy simulation performed within this work. For the simulation, the engine code KIVA has been extended by the dynamically thickened flame model combined with chemistry reduction by means of pressure dependent tabulation. Sixty cycles have been simulated to perform a statistical evaluation. Based on a detailed comparison with the experimental data, a systematic study has been conducted to obtain insight into the most crucial modelling uncertainties.
Simulating Bioremediation of Chloroethenes in a Fractured Rock Aquifer.
NASA Astrophysics Data System (ADS)
Curtis, G. P.
2016-12-01
Reactive transport simulations are being conducted to synthesize the results of a field experiment on the enhanced bioremediation of chloroethenes in a heterogeneous fractured-rock aquifer near West Trenton, NJ. The aquifer consists of a sequence of dipping mudstone beds, with water-conducting bedding-plane fractures separated by low-permeability rock where transport is diffusion-limited. The enhanced bioremediation experiment was conducted by injecting emulsified vegetable oil as an electron donor (EOS™) and a microbial consortium (KB1™) that contained Dehalococcoides ethenogenes into a fracture zone that had maximum trichloroethene (TCE) concentrations of 84 µM. TCE was significantly biodegraded to dichloroethene, chloroethene and ethene or CO2 at the injection well and at a downgradient well. The results also show the concomitant reduction of Fe(III) and S(6) and the production of methane. The results were used to calibrate transport models for quantifying the dominant mass-removal mechanisms. A nonreactive transport model was developed to simulate advection, dispersion and matrix diffusion of bromide and deuterium tracers present in the injection solution. This calibrated model matched tracer concentrations at the injection well and a downgradient observation well and demonstrated that matrix diffusion was a dominant control on tracer transport. A reactive transport model was developed to extend the nonreactive transport model to simulate the microbially mediated sequential dechlorination reactions, reduction of Fe(III) and S(6), and methanogenesis. The reactive transport model was calibrated to concentrations of chloride, chloroethenes, pH, alkalinity, redox-sensitive species and major ions, to estimate key biogeochemical kinetic parameters. The simulation results generally match the diverse set of observations at the injection and observation wells throughout the three-year experiment. In addition, the observations and model simulations indicate that a significant pool of TCE that was initially sorbed to either the fracture surfaces or in the matrix was degraded during the field experiment. The calibrated reactive transport model will be used to quantify the extent of chloroethene mass removal from a range of hypothetical aquifers.
The 3D model of debriefing: defusing, discovering, and deepening.
Zigmont, Jason J; Kappus, Liana J; Sudikoff, Stephanie N
2011-04-01
The experiential learning process involves participation in key experiences and analysis of those experiences. In health care, these experiences can occur through high-fidelity simulation or in the actual clinical setting. The most important component of this process is the postexperience analysis or debriefing. During the debriefing, individuals must reflect upon the experience, identify the mental models that led to behaviors or cognitive processes, and then build or enhance new mental models to be used in future experiences. On the basis of adult learning theory, the Kolb Experiential Learning Cycle, and the Learning Outcomes Model, we structured a framework for facilitators of debriefings entitled "the 3D Model of Debriefing: Defusing, Discovering, and Deepening." It incorporates common phases prevalent in the debriefing literature, including description of and reactions to the experience, analysis of behaviors, and application or synthesis of new knowledge into clinical practice. It can be used to enhance learning after real or simulated events. Copyright © 2011 Elsevier Inc. All rights reserved.
Physician Utilization of a Hospital Information System: A Computer Simulation Model
Anderson, James G.; Jay, Stephen J.; Clevenger, Stephen J.; Kassing, David R.; Perry, Jane; Anderson, Marilyn M.
1988-01-01
The purpose of this research was to develop a computer simulation model that represents the process through which physicians enter orders into a hospital information system (HIS). Computer simulation experiments were performed to estimate the effects of two methods of order entry on outcome variables. The results of the computer simulation experiments were used to perform a cost-benefit analysis to compare the two different means of entering medical orders into the HIS. The results indicate that the use of personal order sets to enter orders into the HIS will result in a significant reduction in manpower, salaries and fringe benefits, and errors in order entry.
Sabouri, Sepideh; Matene, Elhacene; Vinet, Alain; Richer, Louis-Philippe; Cardinal, René; Armour, J Andrew; Pagé, Pierre; Kus, Teresa; Jacquemet, Vincent
2014-01-01
Epicardial high-density electrical mapping is a well-established experimental instrument to monitor in vivo the activity of the atria in response to modulations of the autonomic nervous system in sinus rhythm. In regions that are not accessible by epicardial mapping, noncontact endocardial mapping performed through a balloon catheter may provide a more comprehensive description of atrial activity. We developed a computer model of the canine right atrium to compare epicardial and noncontact endocardial mapping. The model was derived from an experiment in which electroanatomical reconstruction, epicardial mapping (103 electrodes), noncontact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon catheter), and direct-contact endocardial catheter recordings were simultaneously performed in a dog. The recording system was simulated in the computer model. For simulations and experiments (after atrio-ventricular node suppression), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of repolarization gradients. Results showed epicardial-endocardial correlation coefficients of 0.80 and 0.63 (two dog experiments) and 0.96 (simulation) between activation times, and correlation coefficients of 0.57 and 0.46 (two dog experiments) and 0.92 (simulation) between ATa values. Despite distance (balloon-atrial wall) and dimension reduction (64 electrodes), some information about atrial repolarization remained present in noncontact signals.
Sabouri, Sepideh; Matene, Elhacene; Vinet, Alain; Richer, Louis-Philippe; Cardinal, René; Armour, J. Andrew; Pagé, Pierre; Kus, Teresa; Jacquemet, Vincent
2014-01-01
Epicardial high-density electrical mapping is a well-established experimental instrument to monitor in vivo the activity of the atria in response to modulations of the autonomic nervous system in sinus rhythm. In regions that are not accessible by epicardial mapping, noncontact endocardial mapping performed through a balloon catheter may provide a more comprehensive description of atrial activity. We developed a computer model of the canine right atrium to compare epicardial and noncontact endocardial mapping. The model was derived from an experiment in which electroanatomical reconstruction, epicardial mapping (103 electrodes), noncontact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon catheter), and direct-contact endocardial catheter recordings were simultaneously performed in a dog. The recording system was simulated in the computer model. For simulations and experiments (after atrio-ventricular node suppression), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of repolarization gradients. Results showed epicardial-endocardial correlation coefficients of 0.80 and 0.63 (two dog experiments) and 0.96 (simulation) between activation times, and correlation coefficients of 0.57 and 0.46 (two dog experiments) and 0.92 (simulation) between ATa values. Despite distance (balloon-atrial wall) and dimension reduction (64 electrodes), some information about atrial repolarization remained present in noncontact signals. PMID:24598778
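The two comparison metrics used in the abstracts above (correlation of activation times and the area under the atrial T wave) are straightforward to compute; the sketch below shows one hedged way to do so with synthetic placeholder data, not the study's recordings.

```python
import numpy as np

def activation_correlation(epi_times, endo_times):
    """Pearson correlation between epicardial and endocardial activation times."""
    return float(np.corrcoef(epi_times, endo_times)[0, 1])

def atrial_t_wave_area(electrogram, t_start, t_end, fs):
    """Integrate the electrogram between t_start and t_end (s), sampled at fs (Hz)."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    return float(np.trapz(electrogram[i0:i1], dx=1.0 / fs))

epi = np.array([10.0, 14.0, 22.0, 31.0])    # activation times in ms, synthetic
endo = np.array([11.0, 15.0, 20.0, 33.0])
print(activation_correlation(epi, endo))
```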
Local rules simulation of the kinetics of virus capsid self-assembly.
Schwartz, R; Shor, P W; Prevelige, P E; Berger, B
1998-12-01
A computer model is described for studying the kinetics of the self-assembly of icosahedral viral capsids. Solution of this problem is crucial to an understanding of the viral life cycle, which currently cannot be adequately addressed through laboratory techniques. The abstract simulation model employed to address this is based on the local rules theory (Proc. Natl. Acad. Sci. USA, 91:7732-7736). It is shown that the principle of local rules, generalized with a model of kinetics and other extensions, can be used to simulate complicated problems in self-assembly. This approach allows for a computationally tractable molecular dynamics-like simulation of coat protein interactions while retaining many relevant features of capsid self-assembly. Three simple simulation experiments are presented to illustrate the use of this model. These show the dependence of growth and malformation rates on the energetics of binding interactions, the tolerance of errors in binding positions, and the concentration of subunits in the examples. These experiments demonstrate a tradeoff within the model between growth rate and fidelity of assembly for the three parameters. A detailed discussion of the computational model is also provided.
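To give a flavor of the growth-rate versus fidelity trade-off described above, here is a toy stochastic growth sketch in which subunits attach one at a time with a binding probability and a small chance of attaching at a poorly tolerated position. It is only an illustration under these simplified assumptions, not the authors' local-rules simulator.

```python
import random

def grow_capsid(target_size, p_bind, p_error, rng):
    """Grow a toy capsid to target_size subunits; return (size, malformed?, steps taken)."""
    size, malformed, steps = 1, False, 0
    while size < target_size and steps < 10_000:
        steps += 1
        if rng.random() < p_bind:          # a subunit attaches
            size += 1
            if rng.random() < p_error:     # it attaches at a badly tolerated position
                malformed = True
    return size, malformed, steps

rng = random.Random(0)
for p_bind, p_error in [(0.8, 0.01), (0.8, 0.05), (0.2, 0.01)]:
    runs = [grow_capsid(60, p_bind, p_error, rng) for _ in range(500)]
    mean_steps = sum(s for _, _, s in runs) / len(runs)
    malformed_frac = sum(m for _, m, _ in runs) / len(runs)
    print(f"p_bind={p_bind}, p_error={p_error}: mean steps={mean_steps:.0f}, malformed={malformed_frac:.2f}")
```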
NASA Astrophysics Data System (ADS)
Huang, Shiquan; Yi, Youping; Li, Pengchuan
2011-05-01
In recent years, multi-scale simulation techniques for metal forming have gained significant attention for predicting the whole deformation process and the microstructure evolution of the product. The advances of numerical simulation at the macro-scale level in metal forming are remarkable, and commercial FEM software such as Deform2D/3D has found wide application in the field of metal forming. However, multi-scale simulation methods have found little application due to the non-linearity of microstructure evolution during forming and the difficulty of modeling at the micro-scale level. This work deals with the modeling of microstructure evolution and a new method of multi-scale simulation of the forging process. The aviation material 7050 aluminum alloy is used as an example for the modeling of microstructure evolution. The corresponding thermal simulation experiments were performed on a Gleeble 1500 machine. The tested specimens were analyzed for the modeling of dislocation density and of the nucleation and growth of recrystallization (DRX). A source program using the cellular automaton (CA) method was developed to simulate grain nucleation and growth, in which the change of grain topology caused by the metal deformation was considered. The physical fields at the macro-scale level, such as the temperature field and the stress and strain fields, which can be obtained with the commercial software Deform 3D, are coupled with the deformation storage energy at the micro-scale level through a dislocation model to realize the multi-scale simulation. This method is demonstrated by forging process simulation of an aircraft wheel hub forging. By coupling the Deform 3D results with the CA results, the forging deformation process and the microstructure evolution at any point of the forging could be simulated. To verify the efficiency of the simulation, experiments on the aircraft wheel hub forging were carried out in the laboratory, and the comparison of simulation and experimental results is discussed in detail.
Deconvolution of acoustic emissions for source localization using time reverse modeling
NASA Astrophysics Data System (ADS)
Kocur, Georg Karl
2017-01-01
Impact experiments on small-scale slabs made of concrete and aluminum were carried out. Wave motion radiated from the epicenter of the impact was recorded as voltage signals by resonant piezoelectric transducers. Numerical simulations of the elastic wave propagation are performed to simulate the physical experiments. The Hertz theory of contact is applied to estimate the force impulse, which is subsequently used for the numerical simulation. Displacements at the transducer positions are calculated numerically. A deconvolution function is obtained by comparing the physical (voltage signal) and the numerical (calculated displacement) experiments. Acoustic emission signals due to pencil-lead breaks are recorded, deconvolved and applied for localization using time reverse modeling.
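The abstract above mentions using Hertz contact theory to estimate the force impulse driving the numerical simulations. The sketch below gives a back-of-the-envelope Hertzian estimate of the peak contact force of an elastic sphere impacting a flat surface; the material values are illustrative, not those used in the study.

```python
def hertz_peak_force(radius, mass, velocity, E1, nu1, E2, nu2):
    """Peak Hertzian contact force (N) for a sphere of given radius and mass hitting a flat surface."""
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)      # effective modulus (Pa)
    k = (4.0 / 3.0) * E_star * radius**0.5                      # F = k * delta^(3/2)
    delta_max = (5.0 * mass * velocity**2 / (4.0 * k)) ** 0.4   # from kinetic-energy balance
    return k * delta_max**1.5

# Illustrative case: a 10 mm steel ball dropped from 0.5 m onto concrete
v = (2 * 9.81 * 0.5) ** 0.5
print(hertz_peak_force(radius=0.005, mass=0.004, velocity=v,
                       E1=210e9, nu1=0.3, E2=30e9, nu2=0.2))
```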
Monte-Carlo Geant4 numerical simulation of experiments at 247-MeV proton microscope
NASA Astrophysics Data System (ADS)
Kantsyrev, A. V.; Skoblyakov, A. V.; Bogdanov, A. V.; Golubev, A. A.; Shilkin, N. S.; Yuriev, D. S.; Mintsev, V. B.
2018-01-01
A radiographic facility for the investigation of fast dynamic processes in targets with areal density up to 5 g/cm2 is under development on the basis of a high-current proton linear accelerator at the Institute for Nuclear Research (Troitsk, Russia). A virtual model of the proton microscope developed in the Geant4 software toolkit is presented in this article. Full-scale Monte-Carlo numerical simulation of static radiographic experiments at a proton beam energy of 247 MeV was performed. The results of simulation of proton radiography experiments with a static model of shock-compressed xenon are presented. The results of visualization of static targets consisting of copper and polymethyl methacrylate step wedges are also described.
Medlyn, Belinda E; De Kauwe, Martin G; Zaehle, Sönke; Walker, Anthony P; Duursma, Remko A; Luus, Kristina; Mishurov, Mikhail; Pak, Bernard; Smith, Benjamin; Wang, Ying-Ping; Yang, Xiaojuan; Crous, Kristine Y; Drake, John E; Gimeno, Teresa E; Macdonald, Catriona A; Norby, Richard J; Power, Sally A; Tjoelker, Mark G; Ellsworth, David S
2016-08-01
The response of terrestrial ecosystems to rising atmospheric CO2 concentration (Ca ), particularly under nutrient-limited conditions, is a major uncertainty in Earth System models. The Eucalyptus Free-Air CO2 Enrichment (EucFACE) experiment, recently established in a nutrient- and water-limited woodland presents a unique opportunity to address this uncertainty, but can best do so if key model uncertainties have been identified in advance. We applied seven vegetation models, which have previously been comprehensively assessed against earlier forest FACE experiments, to simulate a priori possible outcomes from EucFACE. Our goals were to provide quantitative projections against which to evaluate data as they are collected, and to identify key measurements that should be made in the experiment to allow discrimination among alternative model assumptions in a postexperiment model intercomparison. Simulated responses of annual net primary productivity (NPP) to elevated Ca ranged from 0.5 to 25% across models. The simulated reduction of NPP during a low-rainfall year also varied widely, from 24 to 70%. Key processes where assumptions caused disagreement among models included nutrient limitations to growth; feedbacks to nutrient uptake; autotrophic respiration; and the impact of low soil moisture availability on plant processes. Knowledge of the causes of variation among models is now guiding data collection in the experiment, with the expectation that the experimental data can optimally inform future model improvements. © 2016 John Wiley & Sons Ltd.
Medlyn, Belinda E.; De Kauwe, Martin G.; Zaehle, Sönke; ...
2016-05-09
One major uncertainty in Earth System models is the response of terrestrial ecosystems to rising atmospheric CO2 concentration (Ca), particularly under nutrient-limited conditions. The Eucalyptus Free-Air CO2 Enrichment (EucFACE) experiment, recently established in a nutrient- and water-limited woodland, presents a unique opportunity to address this uncertainty, but can best do so if key model uncertainties have been identified in advance. We applied seven vegetation models, which have previously been comprehensively assessed against earlier forest FACE experiments, to simulate a priori possible outcomes from EucFACE. Our goals were to provide quantitative projections against which to evaluate data as they are collected, and to identify key measurements that should be made in the experiment to allow discrimination among alternative model assumptions in a postexperiment model intercomparison. Simulated responses of annual net primary productivity (NPP) to elevated Ca ranged from 0.5 to 25% across models. The simulated reduction of NPP during a low-rainfall year also varied widely, from 24 to 70%. Key processes where assumptions caused disagreement among models included nutrient limitations to growth; feedbacks to nutrient uptake; autotrophic respiration; and the impact of low soil moisture availability on plant processes. Knowledge of the causes of variation among models is now guiding data collection in the experiment, with the expectation that the experimental data can optimally inform future model improvements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medlyn, Belinda E.; De Kauwe, Martin G.; Zaehle, Sönke
One major uncertainty in Earth System models is the response of terrestrial ecosystems to rising atmospheric CO2 concentration (Ca), particularly under nutrient-limited conditions. The Eucalyptus Free-Air CO2 Enrichment (EucFACE) experiment, recently established in a nutrient- and water-limited woodland, presents a unique opportunity to address this uncertainty, but can best do so if key model uncertainties have been identified in advance. We applied seven vegetation models, which have previously been comprehensively assessed against earlier forest FACE experiments, to simulate a priori possible outcomes from EucFACE. Our goals were to provide quantitative projections against which to evaluate data as they are collected, and to identify key measurements that should be made in the experiment to allow discrimination among alternative model assumptions in a postexperiment model intercomparison. Simulated responses of annual net primary productivity (NPP) to elevated Ca ranged from 0.5 to 25% across models. The simulated reduction of NPP during a low-rainfall year also varied widely, from 24 to 70%. Key processes where assumptions caused disagreement among models included nutrient limitations to growth; feedbacks to nutrient uptake; autotrophic respiration; and the impact of low soil moisture availability on plant processes. Knowledge of the causes of variation among models is now guiding data collection in the experiment, with the expectation that the experimental data can optimally inform future model improvements.
A Rutherford Scattering Simulation with Microcomputer Graphics.
ERIC Educational Resources Information Center
Calle, Carlos I.; Wright, Lavonia F.
1989-01-01
Lists a program for a simulation of Rutherford's gold foil experiment in BASIC for both Apple II and IBM compatible computers. Compares Rutherford's model of the atom with Thomson's plum pudding model of the atom. (MVL)
NASA Technical Reports Server (NTRS)
Carr, Peter C.; Mckissick, Burnell T.
1988-01-01
A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.
EURODELTA-Trends, a multi-model experiment of air quality hindcast in Europe over 1990-2010
NASA Astrophysics Data System (ADS)
Colette, Augustin; Andersson, Camilla; Manders, Astrid; Mar, Kathleen; Mircea, Mihaela; Pay, Maria-Teresa; Raffort, Valentin; Tsyro, Svetlana; Cuvelier, Cornelius; Adani, Mario; Bessagnet, Bertrand; Bergström, Robert; Briganti, Gino; Butler, Tim; Cappelletti, Andrea; Couvidat, Florian; D'Isidoro, Massimo; Doumbia, Thierno; Fagerli, Hilde; Granier, Claire; Heyes, Chris; Klimont, Zig; Ojha, Narendra; Otero, Noelia; Schaap, Martijn; Sindelarova, Katarina; Stegehuis, Annemiek I.; Roustan, Yelva; Vautard, Robert; van Meijgaard, Erik; Garcia Vivanco, Marta; Wind, Peter
2017-09-01
The EURODELTA-Trends multi-model chemistry-transport experiment has been designed to facilitate a better understanding of the evolution of air pollution and its drivers for the period 1990-2010 in Europe. The main objective of the experiment is to assess the efficiency of air pollutant emissions mitigation measures in improving regional-scale air quality. The present paper formulates the main scientific questions and policy issues being addressed by the EURODELTA-Trends modelling experiment with an emphasis on how the design and technical features of the modelling experiment answer these questions. The experiment is designed in three tiers, with increasing degrees of computational demand in order to facilitate the participation of as many modelling teams as possible. The basic experiment consists of simulations for the years 1990, 2000, and 2010. Sensitivity analysis for the same three years using various combinations of (i) anthropogenic emissions, (ii) chemical boundary conditions, and (iii) meteorology complements it. The most demanding tier consists of two complete time series from 1990 to 2010, simulated using either time-varying emissions for corresponding years or constant emissions. Eight chemistry-transport models have contributed with calculation results to at least one experiment tier, and five models have - to date - completed the full set of simulations (and 21-year trend calculations have been performed by four models). The modelling results are publicly available for further use by the scientific community. The main expected outcomes are (i) an evaluation of the models' performances for the three reference years, (ii) an evaluation of the skill of the models in capturing observed air pollution trends for the 1990-2010 time period, (iii) attribution analyses of the respective role of driving factors (e.g. emissions, boundary conditions, meteorology), (iv) a dataset based on a multi-model approach, to provide more robust model results for use in impact studies related to human health, ecosystem, and radiative forcing.
Driving-forces model on individual behavior in scenarios considering moving threat agents
NASA Astrophysics Data System (ADS)
Li, Shuying; Zhuang, Jun; Shen, Shifei; Wang, Jia
2017-09-01
Individual behavior models are a contributory factor in improving the accuracy of agent-based simulations of different scenarios. However, few studies have considered moving threat agents, which often occur in terrorist attacks carried out with close-range weapons (e.g., sword, stick). At the same time, many existing behavior models lack validation from cases or experiments. This paper builds a new individual behavior model based on seven behavioral hypotheses. The driving-forces model is an extension of the classical social force model to scenarios including moving threat agents. An experiment was conducted to validate the key components of the model. The model is then compared with an advanced Elliptical Specification II social force model by calculating the fitting errors between the simulated and experimental trajectories, and by applying it to simulate a specific circumstance. Our results show that the driving-forces model reduced the fitting error by an average of 33.9% and the standard deviation by an average of 44.5%, which indicates the accuracy and stability of the model in the studied situation. The new driving-forces model could be used to simulate individual behavior when analyzing the risk of specific scenarios with agent-based simulation methods, such as risk analysis of close-range terrorist attacks in public places.
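To illustrate the general idea of extending a social-force-style update with a repulsion term from a moving threat agent, here is a minimal sketch. The force terms, constants and integration scheme are illustrative assumptions in the spirit of the described approach, not its published parameterization.

```python
import numpy as np

def acceleration(pos, vel, goal, threat_pos, v_desired=1.5, tau=0.5, A=5.0, B=1.0):
    """Driving force toward the goal plus an exponential repulsion from the threat agent."""
    e_goal = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    driving = (v_desired * e_goal - vel) / tau                 # relaxation toward desired velocity
    d = pos - threat_pos
    dist = np.linalg.norm(d) + 1e-9
    repulsion = A * np.exp(-dist / B) * (d / dist)             # repulsion away from the threat
    return driving + repulsion

pos, vel = np.array([0.0, 0.0]), np.array([0.0, 0.0])
goal, threat = np.array([10.0, 0.0]), np.array([2.0, 1.0])
dt = 0.1
for _ in range(50):                                            # simple explicit Euler integration
    vel = vel + dt * acceleration(pos, vel, goal, threat)
    pos = pos + dt * vel
    threat = threat + dt * np.array([0.5, 0.0])                # the threat agent also moves
print(pos)
```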
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, R. V.; Cabot, W. H.; Greenough, J. A.
Experiments and large eddy simulation (LES) were performed to study the development of the Rayleigh–Taylor instability into the saturated, nonlinear regime, produced between two gases accelerated by a rarefaction wave. Single-mode two-dimensional and single-mode three-dimensional initial perturbations were introduced on the diffuse interface between the two gases prior to acceleration. The rarefaction wave imparts a non-constant acceleration and a time-decreasing Atwood number, $A=(\rho_{2}-\rho_{1})/(\rho_{2}+\rho_{1})$, where $\rho_{2}$ and $\rho_{1}$ are the densities of the heavy and light gas, respectively. Experiments and simulations are presented for initial Atwood numbers of $A=0.49$, $A=0.63$, $A=0.82$ and $A=0.94$. Nominally two-dimensional (2-D) experiments (initiated with nearly 2-D perturbations) and 2-D simulations are observed to approach an intermediate-time velocity plateau that is in disagreement with the late-time velocity obtained from the incompressible model of Goncharov (Phys. Rev. Lett., vol. 88, 2002, 134502). Reacceleration from an intermediate velocity is observed for 2-D bubbles in large-wavenumber, $k=2\pi/\lambda=0.247~\text{mm}^{-1}$, experiments and simulations, where $\lambda$ is the wavelength of the initial perturbation. At moderate Atwood numbers, the bubble and spike velocities approach larger values than those predicted by Goncharov's model. These late-time velocity trends are predicted well by numerical simulations using the LLNL Miranda code, and by the 2009 model of Mikaelian (Phys. Fluids, vol. 21, 2009, 024103) that extends Layzer-type models to variable acceleration and density. Large Atwood number experiments show a delayed roll up and exhibit a free-fall-like behaviour. Finally, experiments initiated with three-dimensional perturbations tend to agree better with models and with a simulation using the LLNL Ares code initiated with an axisymmetric rather than Cartesian symmetry.
Morgan, R. V.; Cabot, W. H.; Greenough, J. A.; ...
2018-01-12
Experiments and large eddy simulation (LES) were performed to study the development of the Rayleigh–Taylor instability into the saturated, nonlinear regime, produced between two gases accelerated by a rarefaction wave. Single-mode two-dimensional and single-mode three-dimensional initial perturbations were introduced on the diffuse interface between the two gases prior to acceleration. The rarefaction wave imparts a non-constant acceleration and a time-decreasing Atwood number, $A=(\rho_{2}-\rho_{1})/(\rho_{2}+\rho_{1})$, where $\rho_{2}$ and $\rho_{1}$ are the densities of the heavy and light gas, respectively. Experiments and simulations are presented for initial Atwood numbers of $A=0.49$, $A=0.63$, $A=0.82$ and $A=0.94$. Nominally two-dimensional (2-D) experiments (initiated with nearly 2-D perturbations) and 2-D simulations are observed to approach an intermediate-time velocity plateau that is in disagreement with the late-time velocity obtained from the incompressible model of Goncharov (Phys. Rev. Lett., vol. 88, 2002, 134502). Reacceleration from an intermediate velocity is observed for 2-D bubbles in large-wavenumber, $k=2\pi/\lambda=0.247~\text{mm}^{-1}$, experiments and simulations, where $\lambda$ is the wavelength of the initial perturbation. At moderate Atwood numbers, the bubble and spike velocities approach larger values than those predicted by Goncharov's model. These late-time velocity trends are predicted well by numerical simulations using the LLNL Miranda code, and by the 2009 model of Mikaelian (Phys. Fluids, vol. 21, 2009, 024103) that extends Layzer-type models to variable acceleration and density. Large Atwood number experiments show a delayed roll up and exhibit a free-fall-like behaviour. Finally, experiments initiated with three-dimensional perturbations tend to agree better with models and with a simulation using the LLNL Ares code initiated with an axisymmetric rather than Cartesian symmetry.
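As a worked example of the quantities appearing in these abstracts, the sketch below computes the Atwood number and a Goncharov-type terminal bubble velocity, $U=\sqrt{2Ag/(C(1+A)k)}$. The geometry constant $C$ is quoted from memory (often 3 for 2-D and 1 for 3-D bubbles), the acceleration is treated as constant for simplicity, and the numbers are illustrative, not the experimental conditions above.

```python
import math

def atwood(rho_heavy, rho_light):
    """Atwood number A = (rho2 - rho1) / (rho2 + rho1)."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def terminal_bubble_velocity(A, g, wavelength, C=3.0):
    """Goncharov-type terminal bubble velocity for perturbation wavelength (m) under acceleration g."""
    k = 2.0 * math.pi / wavelength
    return math.sqrt(2.0 * A * g / (C * (1.0 + A) * k))

A = atwood(rho_heavy=1.2, rho_light=0.4)   # illustrative densities, giving A = 0.5
wavelength = 2.0 * math.pi / 0.247 * 1e-3  # the k = 0.247 1/mm case, converted to metres
print(A, terminal_bubble_velocity(A, g=9.81, wavelength=wavelength))
```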
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reedlunn, Benjamin
Room D was an in-situ, isothermal, underground experiment conducted at the Waste Isolation Pilot Plant between 1984 and 1991. The room was carefully instrumented to measure the horizontal and vertical closure immediately upon excavation and for several years thereafter. Early finite element simulations of salt creep around Room D under-predicted the vertical closure by 4.5×, causing investigators to explore a series of changes to the way Room D was modeled. Discrepancies between simulations and measurements were resolved through a series of adjustments to model parameters, which were openly acknowledged in published reports. Interest in Room D has been rekindled recently by the U.S./German Joint Project III and Project WEIMOS, which seek to improve the predictions of rock salt constitutive models. Joint Project participants calibrate their models solely against laboratory tests, and benchmark the models against underground experiments, such as Room D. This report describes updating legacy Room D simulations to today’s computational standards by rectifying several numerical issues. Subsequently, the constitutive model used in previous modeling is recalibrated two different ways against a suite of new laboratory creep experiments on salt extracted from the repository horizon of the Waste Isolation Pilot Plant. Simulations with the new, laboratory-based, calibrations under-predict Room D vertical closure by 3.1×. A list of potential improvements is discussed.
NASA Astrophysics Data System (ADS)
Pereira, A. S. N.; de Streel, G.; Planes, N.; Haond, M.; Giacomini, R.; Flandre, D.; Kilchytska, V.
2017-02-01
The Drain Induced Barrier Lowering (DIBL) behavior in Ultra-Thin Body and Buried oxide (UTBB) transistors is investigated in detail in the temperature range up to 150 °C, for the first time to the best of our knowledge. The analysis is based on experimental data, physical device simulation, compact model (SPICE) simulation and previously published models. Contrary to the MASTAR prediction, experiments reveal a DIBL increase with temperature. Physical device simulations of different thin-film fully-depleted (FD) devices outline the generality of this behavior. SPICE simulations with the UTSOI DK2.4 model only partially follow the experimental trends. Several analytic models available in the literature are assessed for predicting DIBL vs. temperature. Although it is the closest to experiments, Fasarakis' model overestimates the DIBL(T) dependence for the shortest devices and underestimates it for the larger gate lengths frequently used in ultra-low-voltage (ULV) applications. This model is improved in our work by introducing a temperature-dependent inversion charge at threshold. The improved model shows very good agreement with experimental data, with a considerable gain in precision for the gate lengths under test.
NASA Astrophysics Data System (ADS)
Agaoglu, Berken; Scheytt, Traugott; Copty, Nadim K.
2012-10-01
This study examines the mechanistic processes governing multiphase flow of a water-cosolvent-NAPL system in saturated porous media. Laboratory batch and column flushing experiments were conducted to determine the equilibrium properties of pure NAPL and synthetically prepared NAPL mixtures, as well as NAPL recovery mechanisms for different water-ethanol contents. The effect of contact time was investigated by considering different steady and intermittent flow velocities. A modified version of the multiphase flow simulator UTCHEM was used to compare the multiphase model simulations with the column experiment results. The effect of employing different grid geometries (1D, 2D, 3D), heterogeneity and different initial NAPL saturation configurations was also examined in the model. It is shown that the change in velocity affects the mass transfer rate between phases as well as the ultimate NAPL recovery percentage. The experiments with low flow rate flushing of pure NAPL and the 3D UTCHEM simulations gave similar effluent concentrations and NAPL cumulative recoveries. Model simulations over-estimated NAPL recovery for high specific discharges and rate-limited mass transfer, suggesting that a constant mass transfer coefficient for the entire flushing experiment may not be valid. When multi-component NAPLs are present, the dissolution rate of individual organic compounds (namely, toluene and benzene) into the ethanol-water flushing solution is found not to correlate with their equilibrium solubility values.
NASA Astrophysics Data System (ADS)
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain
2015-08-01
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. We focus on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
NASA Astrophysics Data System (ADS)
Shi, Ao; Lu, Bo; Yang, Dangguo; Wang, Xiansheng; Wu, Junqiang; Zhou, Fangqi
2018-05-01
Coupling between aero-acoustic noise and structural vibration under high-speed open-cavity flow-induced oscillation can produce severe random vibration of the structure and even lead to fatigue failure, which threatens flight safety. Vibro-acoustic experiments on scaled-down models are an effective means to clarify the effects of high-intensity cavity noise on structural vibration. Therefore, for vibro-acoustic experiments on cavities in a wind tunnel, and taking a typical elastic cavity as the research object, dimensional analysis and the finite element method were adopted to establish similitude relations for the structural inherent characteristics and dynamics of a distorted model, and the proposed similitude relations were verified by means of experiments and numerical simulation. The research shows that, based on analysis of the scaled-down model, the established similitude relations can accurately reproduce the structural dynamic characteristics of the full-scale model, which provides theoretical guidance for structural design and vibro-acoustic experiments on scaled-down elastic cavity models.
NASA Astrophysics Data System (ADS)
Prasad, K.; Thorpe, A. K.; Duren, R. M.; Thompson, D. R.; Whetstone, J. R.
2016-12-01
The National Institute of Standards and Technology (NIST) has supported the development and demonstration of a measurement capability to accurately locate greenhouse gas sources and measure their flux to the atmosphere over urban domains. However, uncertainties in transport models, which form the basis of all top-down approaches, can significantly affect our capability to attribute sources and predict their flux to the atmosphere. Reducing uncertainties between bottom-up and top-down models will require high resolution transport models as well as validation and verification of dispersion models over an urban domain. Tracer experiments involving the release of Perfluorocarbon Tracers (PFTs) at known flow rates offer the best approach for validating dispersion/transport models. However, tracer experiments are limited by cost, the ability to make continuous measurements, and environmental concerns. Natural tracer experiments, such as the leak from the Aliso Canyon underground storage facility, offer a unique opportunity to improve and validate high resolution transport models, test leak hypotheses, and estimate the amount of methane released. High spatial resolution (10 m) Large Eddy Simulations (LES) coupled with WRF atmospheric transport models were performed to simulate the dynamics of the Aliso Canyon methane plume and to quantify the source. High resolution forward simulation results were combined with aircraft and tower based in-situ measurements as well as data from NASA airborne imaging spectrometers. Comparison of simulation results with measurement data demonstrates the capability of the LES models to accurately model transport and dispersion of methane plumes over urban domains.
NASA Technical Reports Server (NTRS)
Chang, Chia-Bo
1994-01-01
This study is intended to examine the impact of synthetic relative humidity on the model simulation of a mesoscale convective storm environment. The synthetic relative humidity is derived from National Weather Service surface observations and non-conventional sources including aircraft, radar, and satellite observations. The latter sources provide mesoscale data of very high spatial and temporal resolution. The synthetic humidity data are used to complement the National Weather Service rawinsonde observations. It is believed that a realistic representation of the initial moisture field in a mesoscale model is critical for the model simulation of thunderstorm development and the formation of non-convective clouds, as well as their effects on the surface energy budget. The impact will be investigated based on a real-data case study using the mesoscale atmospheric simulation system developed by Mesoscale Environmental Simulations Operations, Inc. The mesoscale atmospheric simulation system consists of objective analysis and initialization codes and coarse-mesh and fine-mesh dynamic prediction models. Both models are three-dimensional, primitive-equation models containing the essential moist physics for simulating and forecasting mesoscale convective processes in the atmosphere. The modeling system is currently implemented at the Applied Meteorology Unit, Kennedy Space Center. Two procedures involving the synthetic relative humidity to define the model initial moisture fields are considered. It is proposed to perform several short-range (approximately 6 hour) comparative coarse-mesh simulation experiments with and without the synthetic data. These experiments are aimed at revealing model sensitivities, which should allow us both to refine the specification of the observational requirements and to develop more accurate and efficient objective analysis schemes. The goal is to advance the MASS (Mesoscale Atmospheric Simulation System) modeling expertise so that the model output can provide reliable guidance for thunderstorm forecasting.
Cloud computing and validation of expandable in silico livers.
Ropella, Glen E P; Hunt, C Anthony
2010-12-03
In Silico Livers (ISLs) are works in progress. They are used to challenge multilevel, multi-attribute, mechanistic hypotheses about the hepatic disposition of xenobiotics coupled with hepatic responses. To enhance ISL-to-liver mappings, we added discrete-time metabolism, biliary elimination, and bolus dosing features to a previously validated ISL and initiated re-validation experiments that required scaling to more simulated lobules than previously, more than could be achieved using the local cluster technology. Rather than dramatically increasing the size of our local cluster, we undertook the re-validation experiments using the Amazon EC2 cloud platform. Doing so required demonstrating the efficacy of scaling a simulation to use more cluster nodes and assessing the scientific equivalence of local cluster validation experiments with those executed using the cloud platform. The local cluster technology was duplicated in the Amazon EC2 cloud platform. Synthetic modeling protocols were followed to identify a successful parameterization. Experiment sample sizes (number of simulated lobules) on both platforms were 49, 70, 84, and 152 (cloud only). Experimental indistinguishability was demonstrated for ISL outflow profiles of diltiazem using both platforms for experiments consisting of 84 or more samples. The process was analogous to demonstrating results equivalency from two different wet-labs. The results provide additional evidence that disposition simulations using ISLs can cover the behavior space of liver experiments in distinct experimental contexts (there is in silico-to-wet-lab phenotype similarity). The scientific value of experimenting with multiscale biomedical models has been limited to research groups with access to computer clusters. The availability of cloud technology coupled with the evidence of scientific equivalency has lowered the barrier and will greatly facilitate model sharing as well as provide straightforward tools for scaling simulations to encompass greater detail with no extra investment in hardware.
NASA Astrophysics Data System (ADS)
Clark, D. S.; Weber, C. R.; Eder, D. C.; Haan, S. W.; Hammel, B. A.; Hinkel, D. E.; Jones, O. S.; Kritcher, A. L.; Marinak, M. M.; Milovich, J. L.; Patel, P. K.; Robey, H. F.; Salmonson, J. D.; Sepke, S. M.
2016-05-01
Several dozen high convergence inertial confinement fusion ignition experiments have now been completed on the National Ignition Facility (NIF). These include both “low foot” experiments from the National Ignition Campaign (NIC) and more recent “high foot” experiments. At the time of the NIC, there were large discrepancies between simulated implosion performance and experimental data. In particular, simulations over-predicted neutron yields by up to an order of magnitude, and some experiments showed clear evidence of mixing of ablator material deep into the hot spot that could not be explained at the time. While the agreement between data and simulation improved for high foot implosion experiments, discrepancies nevertheless remain. This paper describes the state of detailed modelling of both low foot and high foot implosions using 1-D, 2-D, and 3-D radiation hydrodynamics simulations with HYDRA. The simulations include a range of effects, in particular, the impact of the plastic membrane used to support the capsule in the hohlraum, as well as low-mode radiation asymmetries tuned to match radiography measurements. The same simulation methodology is applied to low foot NIC implosion experiments and high foot implosions, and shows a qualitatively similar level of agreement for both types of implosions. While comparison with the experimental data remains imperfect, a reasonable level of agreement is emerging and shows a growing understanding of the high-convergence implosions being performed on NIF.
Rising temperatures reduce global wheat production
NASA Astrophysics Data System (ADS)
Asseng, S.; Ewert, F.; Martre, P.; Rötter, R. P.; Lobell, D. B.; Cammarano, D.; Kimball, B. A.; Ottman, M. J.; Wall, G. W.; White, J. W.; Reynolds, M. P.; Alderman, P. D.; Prasad, P. V. V.; Aggarwal, P. K.; Anothai, J.; Basso, B.; Biernath, C.; Challinor, A. J.; de Sanctis, G.; Doltra, J.; Fereres, E.; Garcia-Vila, M.; Gayler, S.; Hoogenboom, G.; Hunt, L. A.; Izaurralde, R. C.; Jabloun, M.; Jones, C. D.; Kersebaum, K. C.; Koehler, A.-K.; Müller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.; Olesen, J. E.; Palosuo, T.; Priesack, E.; Eyshi Rezaei, E.; Ruane, A. C.; Semenov, M. A.; Shcherbak, I.; Stöckle, C.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Thorburn, P. J.; Waha, K.; Wang, E.; Wallach, D.; Wolf, J.; Zhao, Z.; Zhu, Y.
2015-02-01
Crop models are essential tools for assessing the threat of climate change to local and global food production. Present models used to predict wheat grain yield are highly uncertain when simulating how crops respond to temperature. Here we systematically tested 30 different wheat crop models of the Agricultural Model Intercomparison and Improvement Project against field experiments in which growing season mean temperatures ranged from 15 °C to 32 °C, including experiments with artificial heating. Many models simulated yields well, but were less accurate at higher temperatures. The model ensemble median was consistently more accurate in simulating the crop temperature response than any single model, regardless of the input information used. Extrapolating the model ensemble temperature response indicates that warming is already slowing yield gains at a majority of wheat-growing locations. Global wheat production is estimated to fall by 6% for each °C of further temperature increase and become more variable over space and time.
NASA Astrophysics Data System (ADS)
Kubo, Yu'suke; Syvitski, James P. M.; Hutton, Eric W. H.; Paola, Chris
2005-07-01
The stratigraphic simulation model 2D-SedFlux is further developed and applied to a turbidite experiment in a subsiding minibasin. The new module dynamically simulates evolving hyperpycnal flows and their interaction with the basin bed. Comparison between the numerical results and the experimental results verifies the ability of 2D-SedFlux to predict the distribution of the sediments and the possible feedback from subsidence. The model was subsequently applied to geological-scale minibasins such as those located in the Gulf of Mexico. Distance from the sediment source is determined to be more influential than sediment entrapment in the upstream minibasin. The results suggest that the efficiency of sediment entrapment by a basin was not influenced by the distance from the sediment source.
Marwan, Wolfgang; Sujatha, Arumugam; Starostzik, Christine
2005-10-21
We reconstruct the regulatory network controlling commitment and sporulation of Physarum polycephalum from experimental results using a hierarchical Petri Net-based modelling and simulation framework. The stochastic Petri Net consistently describes the structure and simulates the dynamics of the molecular network as analysed by genetic, biochemical and physiological experiments within a single coherent model. The Petri Net then is extended to simulate time-resolved somatic complementation experiments performed by mixing the cytoplasms of mutants altered in the sporulation response, to systematically explore the network structure and to probe its dynamics. This reverse engineering approach presumably can be employed to explore other molecular or genetic signalling systems where the activity of genes or their products can be experimentally controlled in a time-resolved manner.
Computer modeling and simulators as part of university training for NPP operating personnel
NASA Astrophysics Data System (ADS)
Volman, M.
2017-01-01
This paper considers aspects of a program for training future nuclear power plant personnel developed by the NPP Department of Ivanovo State Power Engineering University. Computer modeling is used for numerical experiments on the kinetics of nuclear reactors in Mathcad. Simulation modeling is carried out on computer-based and full-scale simulators of a water-cooled power reactor to simulate neutron-physics reactor measurements and the start-up and shutdown processes.
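As a flavour of the reactor-kinetics numerical experiments mentioned above, the sketch below integrates the standard one-delayed-group point-kinetics equations for a small positive reactivity step. It is a generic textbook example in Python, not the course's Mathcad worksheets, and every parameter value is an assumption chosen only for demonstration.

# One-delayed-group point kinetics, explicit Euler integration (illustrative only).
beta, lam, Lambda = 0.0065, 0.08, 1.0e-4   # assumed delayed fraction, decay constant (1/s), generation time (s)
rho = 0.001                                # assumed reactivity step, well below prompt critical
n, C = 1.0, beta / (lam * Lambda)          # start from the equilibrium precursor concentration
dt = 1.0e-5
for step in range(int(1.0 / dt)):          # integrate for 1 second
    dn = ((rho - beta) / Lambda) * n + lam * C
    dC = (beta / Lambda) * n - lam * C
    n, C = n + dt * dn, C + dt * dC
print("relative neutron density after 1 s:", n)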
Simulation studies of chemical erosion on carbon based materials at elevated temperatures
NASA Astrophysics Data System (ADS)
Kenmotsu, T.; Kawamura, T.; Li, Zhijie; Ono, T.; Yamamura, Y.
1999-06-01
We simulated the fluence dependence of the methane reaction yield in carbon under hydrogen bombardment using the ACAT-DIFFUSE code. The ACAT-DIFFUSE code is a simulation code based on a Monte Carlo method with a binary collision approximation and on solving diffusion equations. Chemical reaction models for carbon have been studied by Roth and other researchers. Roth's model is suitable for the steady-state methane reaction, but it cannot estimate the fluence dependence of the methane reaction. We therefore derived an empirical formula for the methane reaction based on Roth's model. In this empirical formula, we assumed a reaction region where chemical sputtering due to methane formation takes place; this region corresponds to the peak of the incident hydrogen range distribution in the target material. We incorporated this empirical formula into the ACAT-DIFFUSE code. The simulation results show a fluence dependence similar to that observed experimentally, but the fluence required to reach the steady state differs between the experimental and simulation results.
Validated simulator for space debris removal with nets and other flexible tethers applications
NASA Astrophysics Data System (ADS)
Gołębiowski, Wojciech; Michalczyk, Rafał; Dyrek, Michał; Battista, Umberto; Wormnes, Kjetil
2016-12-01
In the context of active debris removal technologies and preparation activities for the e.Deorbit mission, a simulator for the dynamics of net-shaped elastic bodies and their interactions with rigid bodies has been developed. Its main application is to aid net design and to test scenarios for space debris deorbitation. The simulator can model all the phases of the debris capturing process: net launch, flight and wrapping around the target. It handles coupled simulation of rigid and flexible body dynamics. Flexible bodies were implemented using the Cosserat rod model, which allows simulation of flexible threads or wires with elasticity and damping for stretching, bending and torsion. Threads may be combined into structures of any topology, so the software is able to simulate nets, pure tethers, tether bundles, cages, trusses, etc. Full contact dynamics was implemented. Programmatic interaction with the simulation is possible, e.g. for control implementation. The underlying model has been experimentally validated; owing to the significant influence of gravity, the experiment had to be performed in microgravity conditions. The validation experiment, performed in parabolic flight, was a downscaled Envisat capture process. The prepacked net was launched towards the satellite model, expanded, hit the model and wrapped around it. The whole process was recorded with two fast stereographic camera sets for full 3D trajectory reconstruction. The trajectories were used to compare the net dynamics to the respective simulations and thus to validate the simulation tool. The experiments were performed on board a Falcon-20 aircraft operated by the National Research Council in Ottawa, Canada. The validation results show that the model reflects the physics of the phenomenon accurately enough that it may be used for scenario evaluation and mission design purposes. The functionalities of the simulator are described in detail in the paper, as well as its underlying model, sample cases and the methodology behind the validation. Results are presented and typical use cases are discussed, showing that the software may be used to design throw-nets for space debris capturing, but also to simulate the deorbitation process, the chaser control system or general interactions between rigid and elastic bodies, all in a convenient and efficient way. The presented work was led by SKA Polska under an ESA contract, within the CleanSpace initiative.
Khadivzadeh, Talat; Erfanian, Fatemeh
2012-10-01
Midwifery students experience high levels of stress during their initial clinical practice. Addressing the learner's sources of anxiety and discomfort can ease the learning experience and lead to better outcomes. The aim of this study was to determine the effect of a simulation-based course, using simulated patients and simulated gynecologic models, on student anxiety and comfort while practicing to provide intrauterine device (IUD) services. Fifty-six eligible midwifery students were randomly allocated into simulation-based and traditional training groups. They participated in a 12-hour workshop on providing IUD services. The simulation group was trained through an educational program including simulated gynecologic models and simulated patients. The students in both groups then practiced IUD consultation and insertion with real patients in the clinic. The students' anxiety in IUD insertion was assessed using the Spielberger anxiety test and the "comfort in providing IUD services" questionnaire. There were significant differences between the simulation and traditional groups in both aspects of anxiety, state (P < 0.001) and trait (P = 0.024), and in the level of comfort (P = 0.000) in providing IUD services. "Fear of uterine perforation during insertion" was the most important cause of students' anxiety in providing IUD services, reported by 74.34% of students. Simulated patients and simulated gynecologic models are effective in optimizing students' anxiety levels when practicing to deliver IUD services. Therefore, it is recommended that simulated patients and simulated gynecologic models be used before engaging students in real clinical practice.
A Computational Approach for Modeling Neutron Scattering Data from Lipid Bilayers
Carrillo, Jan-Michael Y.; Katsaras, John; Sumpter, Bobby G.; ...
2017-01-12
Biological cell membranes are responsible for a range of structural and dynamical phenomena crucial to a cell's well-being and its associated functions. Due to the complexity of cell membranes, lipid bilayer systems are often used as biomimetic models. These systems have led to significant insights into vital membrane phenomena such as domain formation, passive permeation and protein insertion. Experimental observations of membrane structure and dynamics are, however, limited in resolution, both spatially and temporally. Importantly, computer simulations are starting to play a more prominent role in interpreting experimental results, enabling a molecular understanding of lipid membranes. In particular, the synergy between scattering experiments and simulations offers opportunities for new discoveries in membrane physics, as the length and time scales probed by molecular dynamics (MD) simulations parallel those of experiments. We also describe a coarse-grained MD simulation approach that mimics neutron scattering data from large unilamellar lipid vesicles over a range of bilayer rigidity. Specifically, we simulate vesicle form factors and membrane thickness fluctuations determined from small angle neutron scattering (SANS) and neutron spin echo (NSE) experiments, respectively. Our simulations accurately reproduce trends from experiments and lay the groundwork for investigations of more complex membrane systems.
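One concrete way simulations and small-angle scattering are linked, as described above, is that a simulated particle configuration can be converted to an orientationally averaged scattering intensity via the Debye formula, I(q) proportional to the double sum of sin(q r_ij)/(q r_ij). The Python sketch below applies that formula to a random toy configuration; it is a generic illustration, not the coarse-grained vesicle analysis reported in the paper.

import numpy as np

def debye_intensity(coords, q_values):
    """Orientationally averaged intensity of identical point scatterers (Debye formula)."""
    diffs = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diffs ** 2).sum(-1))
    intensities = []
    for q in q_values:
        qr = q * r
        sinc = np.where(qr > 0, np.sin(qr) / np.where(qr > 0, qr, 1.0), 1.0)  # sin(qr)/qr, equal to 1 at r = 0
        intensities.append(sinc.sum())
    return np.array(intensities) / len(coords) ** 2   # normalized so I(0) = 1

rng = np.random.default_rng(1)
coords = rng.normal(scale=2.0, size=(200, 3))   # toy particle configuration, arbitrary units
q = np.linspace(0.05, 2.0, 20)
print(debye_intensity(coords, q))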
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chandler, David; Betzler, Ben; Hirtz, Gregory John
2016-09-01
The purpose of this report is to document a high-fidelity VESTA/MCNP High Flux Isotope Reactor (HFIR) core model that features a new, representative experiment loading. This model, which represents the current, high-enriched uranium fuel core, will serve as a reference for low-enriched uranium conversion studies, safety-basis calculations, and other research activities. A new experiment loading model was developed to better represent current, typical experiment loadings, in comparison to the experiment loading included in the model for Cycle 400 (operated in 2004). The new experiment loading model for the flux trap target region includes full length 252Cf production targets, 75Se production capsules, 63Ni production capsules, a 188W production capsule, and various materials irradiation targets. Fully loaded 238Pu production targets are modeled in eleven vertical experiment facilities located in the beryllium reflector. Other changes compared to the Cycle 400 model are the high-fidelity modeling of the fuel element side plates and the material composition of the control elements. Results obtained from the depletion simulations with the new model are presented, with a focus on time-dependent isotopic composition of irradiated fuel and single cycle isotope production metrics.
NASA Astrophysics Data System (ADS)
Chirskaia, Natalia; Novikov, Lev; Voronina, Ekaterina
2016-07-01
Atomic oxygen (AO) in the upper atmosphere is one of the most important space factors that can cause degradation of spacecraft surfaces. In our previous mathematical model, the Monte Carlo method and the "large particles" approximation were used for simulating processes of polymer etching under the influence of AO [1]. The interaction of enlarged AO particles with the polymer was described in terms of probabilities of reactions such as etching of the polymer and specular and diffuse scattering of the AO particles on the polymer. The effects of atomic oxygen on protected polymers and microfiller-containing composites were simulated. The simulation results were in quite good agreement with the results of laboratory experiments on the magnetoplasmadynamic oxygen plasma accelerator of SINP MSU [2]. In this paper we present a new model that describes the reactions of AO interactions with polymeric materials in more detail. Reactions of formation and subsequent emission of chemical compounds such as CO, CO2, H2O, etc. modify the chemical composition of the polymer and change the probabilities of its subsequent interaction with AO. The simulation results are compared with the results of the previous simulation and with the results of laboratory experiments. The reasons for the differences between the results of natural experiments on spacecraft, laboratory experiments and simulations are discussed. [1] N. Chirskaya, M. Samokhina, Computer modeling of polymer structures degradation under the atomic oxygen exposure, WDS'12 Proceedings of Contributed Papers: Part III - Physics, Matfyzpress Prague, 2012, pp. 30-35. [2] E. Voronina, L. Novikov, V. Chernik, N. Chirskaya, K. Vernigorov, G. Bondarenko, and A. Gaidar, Mathematical and experimental simulation of impact of atomic oxygen of the earth's upper atmosphere on nanostructures and polymer composites, Inorganic Materials: Applied Research, 2012, vol. 3, no. 2, pp. 95-101.
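The "large particles" Monte Carlo picture described above assigns each incident AO particle a probability of etching the polymer or of being scattered specularly or diffusely. The Python sketch below is a heavily simplified, hypothetical illustration of that bookkeeping on a flat 1-D surface with made-up probabilities; it is not the authors' code.

import random

# Assumed, illustrative event probabilities for an incident AO "large particle".
P_ETCH, P_SPECULAR = 0.1, 0.3           # the remainder is diffuse scattering
cells = [0] * 50                        # etch depth of each surface cell along a 1-D surface

random.seed(0)
for _ in range(10000):                  # incident AO particles
    i = random.randrange(len(cells))
    r = random.random()
    if r < P_ETCH:
        cells[i] += 1                   # polymer removed; a volatile product leaves the surface
    elif r < P_ETCH + P_SPECULAR:
        pass                            # specular scattering: particle reflects away
    else:
        pass                            # diffuse scattering: could re-impact in a fuller model

print("mean etch depth (cells):", sum(cells) / len(cells))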
NASA Technical Reports Server (NTRS)
Breisacher, Kevin; Moder, Jeffrey
2015-01-01
The results of CFD simulations of microgravity tank pressure control experiments performed on the Space Shuttle are presented. A 13.7 liter acrylic model tank was used in these experiments. The tank was filled to an 83 percent fill fraction with Freon refrigerant to simulate cryogenic propellants stored in space. In the experiments, a single liquid jet near the bottom of the tank was used for mixing the tank. Simulations at a range of jet Weber numbers were performed. Qualitative comparisons of the liquid and gas interface dynamics observed and recorded in the experiments and those computed are shown and discussed. The simulations were able to correctly capture jet penetration of the ullage, qualitatively reproduce ullage shapes and dynamics, as well as the final equilibrium position of the ullage.
Predictive Finite Rate Model for Oxygen-Carbon Interactions at High Temperature
NASA Astrophysics Data System (ADS)
Poovathingal, Savio
An oxidation model for carbon surfaces is developed to predict ablation rates for carbon heat shields used in hypersonic vehicles. Unlike existing empirical models, the approach used here was to probe gas-surface interactions individually and then based on an understanding of the relevant fundamental processes, build a predictive model that would be accurate over a wide range of pressures and temperatures, and even microstructures. Initially, molecular dynamics was used to understand the oxidation processes on the surface. The molecular dynamics simulations were compared to molecular beam experiments and good qualitative agreement was observed. The simulations reproduced cylindrical pitting observed in the experiments where oxidation was rapid and primarily occurred around a defect. However, the studies were limited to small systems at low temperatures and could simulate time scales only of the order of nanoseconds. Molecular beam experiments at high surface temperature indicated that a majority of surface reaction products were produced through thermal mechanisms. Since the reactions were thermal, they occurred over long time scales which were computationally prohibitive for molecular dynamics to simulate. The experiments provided detailed dynamical data on the scattering of O, O2, CO, and CO2 and it was found that the data from molecular beam experiments could be used directly to build a model. The data was initially used to deduce surface reaction probabilities at 800 K. The reaction probabilities were then incorporated into the direct simulation Monte Carlo (DSMC) method. Simulations were performed where the microstructure was resolved and dissociated oxygen convected and diffused towards it. For a gas-surface temperature of 800 K, it was found that despite CO being the dominant surface reaction product, a gas-phase reaction forms significant CO2 within the microstructure region. It was also found that surface area did not play any role in concentration of reaction products because the reaction probabilities were in the diffusion dominant regime. The molecular beam data at different surface temperatures was then used to build a finite rate model. Each reaction mechanism and all rate parameters of the new model were determined individually based on the molecular beam data. Despite the experiments being performed at near vacuum conditions, the finite rate model developed using the data could be used at pressures and temperatures relevant to hypersonic conditions. The new model was implemented in a computational fluid dynamics (CFD) solver and flow over a hypersonic vehicle was simulated. The new model predicted similar overall mass loss rates compared to existing models, however, the individual species production rates were completely different. The most notable difference was that the new model (based on molecular beam data) predicts CO as the oxidation reaction product with virtually no CO2 production, whereas existing models predict the exact opposite trend. CO being the dominant oxidation product is consistent with recent high enthalpy wind tunnel experiments. The discovery that measurements taken in molecular beam facilities are able to determine individual reaction mechanisms, including dependence on surface coverage, opens up an entirely new way of constructing ablation models.
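In a DSMC setting, surface reaction probabilities of the kind discussed above are applied impact by impact: each O atom striking the carbon surface is assigned a reaction channel by sampling from the prescribed probabilities. The Python sketch below shows only that sampling step, with invented probability values; it illustrates the mechanism rather than the finite-rate model developed in the thesis.

import random

def sample_surface_event(probs):
    """Pick a reaction channel for one O-atom impact from a dict of channel probabilities."""
    r, acc = random.random(), 0.0
    for channel, p in probs.items():
        acc += p
        if r < acc:
            return channel
    return "scatter"                      # no reaction: the atom scatters back into the gas

# Assumed, illustrative probabilities at one surface temperature (they must sum to <= 1).
probs_800K = {"CO": 0.15, "CO2": 0.01, "O_recombination": 0.05}

random.seed(1)
events = [sample_surface_event(probs_800K) for _ in range(50000)]
print({c: events.count(c) for c in set(events)})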
Moving Base Simulation of an ASTOVL Lift-Fan Aircraft
DOT National Transportation Integrated Search
1995-08-01
Using a generalized simulation model, a moving-base simulation of a lift-fan short takeoff/vertical landing fighter aircraft was conducted on the Vertical Motion Simulator at Ames Research Center. Objectives of the experiment were to: (1) assess ...
Analysis and Simulation of Far-Field Seismic Data from the Source Physics Experiment
2012-09-01
Pitarka, Arben; Mellors, Robert J.; Rodgers, Arthur J.; ...
The Source Physics Experiment at the Nevada National Security Site (NNSS) provides new data for investigating the excitation and propagation of seismic waves generated by buried explosions. A particular ... seismic model. The 3D seismic model includes surface topography. It is based on regional geological data, with material properties constrained by shallow ...
NASA Astrophysics Data System (ADS)
Prime, Michael; Vaughan, Diane; Preston, Dean; Oro, David; Buttler, William
2013-06-01
Rayleigh-Taylor instabilities have been widely used to study the deviatoric (flow) strength of solids at high strain rates. More recently, experiments applying a supported shock through mating surfaces (Atwood number = 1) with geometrical perturbations have been proposed for studying strength at strain rates up to 10^7/sec using Richtmyer-Meshkov (RM) instabilities. Buttler et al. [J. Fluid Mech., 2012] recently reported experimental results for RM instability growth but with an unsupported shock applied by high explosives and the geometrical perturbations on the opposite free surface (Atwood number = -1). This novel configuration allowed detailed experimental observation of the instability growth and arrest. We present results and detailed interpretation from numerical simulations of the Buttler experiments on copper. Highly-resolved, two-dimensional simulations were performed using a Lagrangian hydrocode and the Preston-Tonks-Wallace (PTW) strength model. The model predictions show good agreement with the data in spite of the PTW model being calibrated on lower strain rate data. The numerical simulations are used to 1) examine various assumptions previously made in an analytical model, 2) estimate the sensitivity of such experiments to material strength and 3) explore the possibility of extracting meaningful strength information in the face of complicated spatial and temporal variations of stress, pressure, and temperature during the experiments.
A Single Column Model Ensemble Approach Applied to the TWP-ICE Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, Laura; Jakob, Christian; Cheung, K.
2013-06-27
Single column models (SCM) are useful testbeds for investigating the parameterisation schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale data prescribed. One method to address this uncertainty is to perform ensemble simulations of the SCM. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCM and 2 cloud-resolving models (CRM). Best-estimate simulations are also performed. All models show that moisture related variables are close to observations and there are limited differences between the best-estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the moisture budget between the SCM and CRM. Systematic differences are also apparent in the ensemble mean vertical structure of cloud variables. The ensemble is further used to investigate relations between cloud variables and precipitation, identifying large differences between CRM and SCM. This study highlights that additional information can be gained by performing ensemble simulations, enhancing the information derived from models using the more traditional single best-estimate simulation.
Sensitivity of southern hemisphere westerly wind to boundary conditions for the last glacial maximum
NASA Astrophysics Data System (ADS)
Jun, S. Y.; Kim, S. J.; Kim, B. M.
2017-12-01
To examine the change in the SH westerly wind in the LGM, we performed an LGM simulation with sensitivity experiments by specifying the LGM sea ice in the Southern Ocean (SO), the ice sheet over Antarctica, and the tropical Pacific sea surface temperature in the CAM5 atmospheric general circulation model (GCM). The SH westerly response to LGM boundary conditions in CAM5 was compared with those from CMIP5 LGM simulations. In the CAM5 LGM simulation, the SH westerly wind substantially increases between 40°S and 65°S, while the zonal-mean zonal wind decreases at latitudes higher than 65°S. The position of the SH maximum westerly wind moves poleward by about 8° in the LGM simulation. Sensitivity experiments suggest that the increase in SH westerly winds is mainly due to the increase in sea ice in the SO, which accounts for 60% of the total wind change. In the CMIP5-PMIP3 LGM experiments, most of the models show a slight increase and poleward shift of the SH westerly wind, as in the CAM5 experiment. The increased and poleward-shifted westerly wind in the LGM obtained in the current model result is consistent with previous model results and some lines of proxy evidence, though opposite model responses and proxy evidence also exist for the SH westerly wind change.
High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6
Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; ...
2016-11-22
Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950–2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. Lastly, HighResMIP thereby focuses on one of the CMIP6 broad questions, “what are the origins and consequences of systematic model biases?”, but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.
NASA Astrophysics Data System (ADS)
Smith, L. A.
2001-05-01
Many sources of uncertainty come into play when modelling geophysical systems by simulation. These include uncertainty in the initial condition, uncertainty in model parameter values (and the parameterisations themselves) and error in the model class from which the model(s) was selected. In recent decades, climate simulations have focused resources on reducing the last of these by including more and more details in the model. One can question when this "kitchen sink" approach should be complemented with realistic estimates of the impact of the other uncertainties noted above. Indeed, while the impact of model error can never be fully quantified, as all simulation experiments are interpreted under the rosy scenario which assumes a priori that nothing crucial is missing, the impact of the other uncertainties can be quantified at only the cost of computational power; as illustrated, for example, in ensemble climate modelling experiments like Casino-21. This talk illustrates the interplay of uncertainties in the context of a trivial nonlinear system and an ensemble of models. The simple systems considered in this small scale experiment, Keno-21, are meant to illustrate issues of experimental design; they are not intended to provide true climate simulations. The use of simulation models with huge numbers of parameters given limited data is usually justified by an appeal to the Laws of Physics: the number of free degrees-of-freedom is much smaller than the number of variables; the variables, parameterisations, and parameter values are constrained by "the physics" and the resulting simulation yields a realistic reproduction of the entire planet's climate system to within reasonable bounds. But what bounds, exactly? In a single model run under a transient forcing scenario, there are good statistical grounds for considering only large space and time averages; most of these reasons vanish if an ensemble of runs is made. Ensemble runs can quantify the (in)ability of a model to provide insight on regional changes: if a model cannot capture regional variations in the data on which the model was constructed (that is, in-sample), claims that out-of-sample predictions of those same regional averages should be used in policy making are vacuous. While motivated by climate modelling and illustrated on a trivial nonlinear system, these issues have implications across the range of geophysical modelling. These include implications for appropriate resource allocation, for the making of science policy, and for the public understanding of science and the role of uncertainty in decision making.
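To make the interplay of initial-condition and parameter uncertainty concrete, the Python sketch below runs an ensemble of a deliberately trivial nonlinear system (the logistic map) with small perturbations to both the initial condition and the parameter, and reports the resulting spread. It is offered only as a cartoon of the kind of small-scale experiment described above, not a reconstruction of Keno-21.

import random

def logistic_run(x0, r, n=50):
    """Iterate the logistic map x -> r*x*(1-x) for n steps and return the final state."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

random.seed(42)
# Assumed nominal values; the small Gaussian perturbations stand in for the two uncertainty sources.
finals = [logistic_run(0.2 + random.gauss(0, 1e-3),        # initial-condition uncertainty
                       3.7 + random.gauss(0, 1e-3))        # parameter uncertainty
          for _ in range(200)]
mean = sum(finals) / len(finals)
spread = max(finals) - min(finals)
print(f"ensemble mean {mean:.3f}, spread {spread:.3f}")    # spread is O(1): tiny uncertainties matter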
New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations
NASA Technical Reports Server (NTRS)
Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.
2012-01-01
In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation minus analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project that quantified the impact of uncertainty in satellite constrained CO2 flux estimates on atmospheric mixing ratios to assess the major factors governing uncertainty in global and regional trace gas distributions.
Observing System Simulation Experiments for Fun and Profit
NASA Technical Reports Server (NTRS)
Prive, Nikki C.
2015-01-01
Observing System Simulation Experiments can be powerful tools for evaluating and exploring both the behavior of data assimilation systems and the potential impacts of future observing systems. With great power comes great responsibility: given a pure modeling framework, how can we be sure our results are meaningful? The challenges and pitfalls of OSSE calibration and validation will be addressed, as well as issues of incestuousness, selection of appropriate metrics, and experiment design. The use of idealized observational networks to investigate theoretical ideas in a fully complex modeling framework will also be discussed.
Advanced ISDN satellite designs and experiments
NASA Technical Reports Server (NTRS)
Pepin, Gerard R.
1992-01-01
The research performed by GTE Government Systems and the University of Colorado in support of the NASA Satellite Communications Applications Research (SCAR) Program is summarized. Two levels of research were undertaken. The first dealt with providing interim service Integrated Services Digital Network (ISDN) satellite (ISIS) capabilities that accented basic-rate ISDN with ground control similar to that of the Advanced Communications Technology Satellite (ACTS). The ISIS network model development represents satellite systems like the ACTS orbiting switch. The ultimate aim is to move these ACTS ground control functions on board the next generation of ISDN communications satellites to provide full-service ISDN satellite (FSIS) capabilities. The technical and operational parameters for the advanced ISDN communications satellite design are obtainable from simulation of the ISIS and FSIS engineering software models of the major subsystems of the ISDN communications satellite architecture. Discrete event simulation experiments would generate data for analysis against NASA SCAR performance measures and the data obtained from the ISDN satellite terminal adapter (ISTA) hardware experiments, also developed in the program. The Basic and Option 1 phases of the program are also described and include the following: literature search, traffic model, network model, scenario specifications, performance measure definitions, hardware experiment design, hardware experiment development, simulator design, and simulator development.
Simulation of Coast Guard Vessel Traffic Service Operations by Model and Experiment
DOT National Transportation Integrated Search
1980-09-01
A technique for computer simulation of the operations of U.S. Coast Guard Vessel Traffic Services is described and verified with data obtained in four field studies. Uses of the technique are discussed and illustrated. A field experiment is described in ...
NASA Astrophysics Data System (ADS)
Wee, Loo Kang
2012-05-01
We develop an Easy Java Simulation (EJS) model for students to experience the physics of idealized one-dimensional collision carts. The physics model is described and simulated by both continuous dynamics and discrete transition during collision. In designing the simulations, we discuss briefly three pedagogical considerations namely (1) a consistent simulation world view with a pen and paper representation, (2) a data table, scientific graphs and symbolic mathematical representations for ease of data collection and multiple representational visualizations and (3) a game for simple concept testing that can further support learning. We also suggest using a physical world setup augmented by simulation by highlighting three advantages of real collision carts equipment such as a tacit 3D experience, random errors in measurement and the conceptual significance of conservation of momentum applied to just before and after collision. General feedback from the students has been relatively positive, and we hope teachers will find the simulation useful in their own classes.
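The continuous-dynamics-plus-discrete-collision approach described above reduces, for two carts on a frictionless track, to applying conservation of momentum (and of kinetic energy in the elastic case) just before and after impact. The Python sketch below computes post-collision velocities for that textbook case; the masses and velocities are arbitrary illustrative values, and the function is not part of the EJS model itself.

def collide_1d(m1, u1, m2, u2, restitution=1.0):
    """Post-collision velocities of two carts; restitution=1 is elastic, 0 is perfectly inelastic."""
    # Momentum is conserved; the relative speed is scaled by the coefficient of restitution.
    v1 = (m1 * u1 + m2 * u2 + m2 * restitution * (u2 - u1)) / (m1 + m2)
    v2 = (m1 * u1 + m2 * u2 + m1 * restitution * (u1 - u2)) / (m1 + m2)
    return v1, v2

# Arbitrary example: a 2 kg cart at 1 m/s hits a 1 kg cart at rest.
print(collide_1d(2.0, 1.0, 1.0, 0.0))                  # elastic: (1/3, 4/3) m/s
print(collide_1d(2.0, 1.0, 1.0, 0.0, restitution=0))   # perfectly inelastic: both at 2/3 m/s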
Using deep neural networks to augment NIF post-shot analysis
NASA Astrophysics Data System (ADS)
Humbird, Kelli; Peterson, Luc; McClarren, Ryan; Field, John; Gaffney, Jim; Kruse, Michael; Nora, Ryan; Spears, Brian
2017-10-01
Post-shot analysis of National Ignition Facility (NIF) experiments is the process of determining which simulation inputs yield results consistent with experimental observations. This analysis is typically accomplished by running suites of manually adjusted simulations, or by Monte Carlo sampling of surrogate models that approximate the response surfaces of the physics code. These approaches are expensive and often find simulations that match only a small subset of observables simultaneously. We demonstrate an alternative method for performing post-shot analysis using inverse models, which map directly from experimental observables to simulation inputs with quantified uncertainties. The models are created using a novel machine learning algorithm which automates the construction and initialization of deep neural networks to optimize predictive accuracy. We show how these neural networks, trained on large databases of post-shot simulations, can rigorously quantify the agreement between simulation and experiment. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
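An inverse model of the kind described above is, at its core, a regression from observables back onto simulation inputs. The sketch below trains such a mapping on synthetic data using scikit-learn's MLPRegressor; the toy forward model, network size and data are all assumptions for illustration and are unrelated to the NIF post-shot databases or the authors' automated architecture-search algorithm.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy "forward model": simulation inputs x -> observables y (a stand-in for a physics code).
def forward(x):
    return np.column_stack([x[:, 0] * np.exp(-x[:, 1]), x[:, 0] + x[:, 1] ** 2])

x_train = rng.uniform(0.5, 1.5, size=(5000, 2))        # sampled simulation inputs
y_train = forward(x_train)                             # corresponding synthetic observables

# Inverse model: observables -> inputs.
inverse = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
inverse.fit(y_train, x_train)

x_true = np.array([[1.1, 0.8]])
x_rec = inverse.predict(forward(x_true))
print("true inputs:", x_true, "recovered:", x_rec)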
Telescope performance and image simulations of the balloon-borne coded-mask protoMIRAX experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penacchioni, A. V., E-mail: ana.penacchioni@inpe.br; Braga, J., E-mail: joao.braga@inpe.br; Castro, M. A., E-mail: manuel.castro@inpe.br
2015-12-17
In this work we present the results of imaging simulations performed with the help of the GEANT4 package for the protoMIRAX hard X-ray balloon experiment. The instrumental background was simulated taking into account the various radiation components and their angular dependence, as well as a detailed mass model of the experiment. We modelled the meridian transits of the Crab Nebula and the Galactic Centre (GC) region during balloon flights in Brazil (∼ −23° of latitude and an altitude of ∼40 km) and introduced the corresponding spectra as inputs to the imaging simulations. We present images of the Crab and of three sources in the GC: 1E 1740.7-2942, GRS 1758-258 and GX 1+4. The results show that the protoMIRAX experiment is capable of making spectral and timing observations of bright hard X-ray sources as well as important imaging demonstrations that will contribute to the design of the MIRAX satellite mission.
The influence of atmospheric grid resolution in a climate model-forced ice sheet simulation
NASA Astrophysics Data System (ADS)
Lofverstrom, Marcus; Liakka, Johan
2018-04-01
Coupled climate-ice sheet simulations have been growing in popularity in recent years. Experiments of this type are however challenging as ice sheets evolve over multi-millennial timescales, which is beyond the practical integration limit of most Earth system models. A common method to increase model throughput is to trade resolution for computational efficiency (compromise accuracy for speed). Here we analyze how the resolution of an atmospheric general circulation model (AGCM) influences the simulation quality in a stand-alone ice sheet model. Four identical AGCM simulations of the Last Glacial Maximum (LGM) were run at different horizontal resolutions: T85 (1.4°), T42 (2.8°), T31 (3.8°), and T21 (5.6°). These simulations were subsequently used as forcing of an ice sheet model. While the T85 climate forcing reproduces the LGM ice sheets to a high accuracy, the intermediate resolution cases (T42 and T31) fail to build the Eurasian ice sheet. The T21 case fails in both Eurasia and North America. Sensitivity experiments using different surface mass balance parameterizations improve the simulations of the Eurasian ice sheet in the T42 case, but the compromise is a substantial ice buildup in Siberia. The T31 and T21 cases do not improve in the same way in Eurasia, though the latter simulates the continent-wide Laurentide ice sheet in North America. The difficulty to reproduce the LGM ice sheets in the T21 case is in broad agreement with previous studies using low-resolution atmospheric models, and is caused by a substantial deterioration of the model climate between the T31 and T21 resolutions. It is speculated that this deficiency may demonstrate a fundamental problem with using low-resolution atmospheric models in these types of experiments.
NASA Astrophysics Data System (ADS)
Farley, Richard D.
1987-07-01
This paper reports on simulations of a multicellular hailstorm case observed during the 1983 Alberta Hail Project. The field operations on that day concentrated on two successive feeder cells which were subjected to controlled seeding experiments. The first of these cells received the placebo treatment and the second was seeded with dry ice. The principal tool of this study is a modified version of the two-dimensional, time-dependent hail category model described in Part I of this series of papers. It is with this model that hail growth processes are investigated, including the simulated effects of cloud seeding techniques as practiced in Alberta. The model simulation of the natural case produces a very good replication of the observed storm, particularly the placebo feeder cell. This is evidenced, in particular, by the high degree of fidelity of the observed and modeled radar reflectivity in terms of magnitudes, structure, and evolution. The character of the hailfall at the surface and the scale of the storm are captured nicely by the model, although cloud-top heights are generally too high, particularly for the mature storm system. Seeding experiments similar to those conducted in the field have also been simulated. These involve seeding the feeder cell early in its active development phase with dry ice (CO2) or silver iodide (AgI) introduced near cloud top. The model simulations of these seeded cases capture some of the observed seeding signatures detected by radar and aircraft. In these model experiments, CO2 seeding produced a stronger response than AgI seeding relative to inhibiting hail formation. For both seeded cases, production of precipitating ice was initially enhanced by the seeding, but retarded slightly in the later stages, the net result being modest increases in surface rainfall, with hail reduced slightly. In general, the model simulations support several subhypotheses of the operational strategy of the Alberta Research Council regarding the earlier formation of ice, snow, and graupel due to seeding.
The hybrid RANS/LES of partially premixed supersonic combustion using G/Z flamelet model
NASA Astrophysics Data System (ADS)
Wu, Jinshui; Wang, Zhenguo; Bai, Xuesong; Sun, Mingbo; Wang, Hongbo
2016-10-01
In order to describe partially premixed supersonic combustion numerically, a G/Z flamelet model is developed and compared with a finite-rate model in hybrid RANS/LES simulations of the strut-injection supersonic combustion flow field designed by the German Aerospace Center. A new temperature calculation method based on a time-splitting treatment of the total energy is introduced in the G/Z flamelet model. Simulation results show that temperature predictions in the partially premixed zone by the G/Z flamelet model are more consistent with experiment than those of the finite-rate model. It is worth mentioning that the low-temperature reaction zone behind the strut is well reproduced. Other quantities such as average velocity and average velocity fluctuation obtained by the developed G/Z flamelet model are also in good agreement with experiment. In addition, the G/Z flamelet simulations reveal the mechanism of partially premixed supersonic combustion through analyses of the interaction between turbulent burning velocity and the flow field.
Wang, Ping; Zhou, Ye; MacLaren, Stephan A.; ...
2015-11-06
Three- and two-dimensional numerical studies have been carried out to simulate recent counter-propagating shear flow experiments on the National Ignition Facility. A multi-physics three-dimensional, time-dependent radiation hydrodynamics simulation code is used. Using a Reynolds-averaged Navier-Stokes model, we show that the evolution of the mixing layer width obtained from the simulations agrees well with that measured from the experiments. A sensitivity study is conducted to illustrate a 3D geometrical effect that could confuse the measurement at late times if the energy drives from the two ends of the shock tube are asymmetric. Implications for future experiments are discussed.
Advances in modelling of condensation phenomena
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W.S.; Zaltsgendler, E.; Hanna, B.
1997-07-01
The physical parameters in the modelling of condensation phenomena in the CANDU reactor system codes are discussed. The experimental programs used for thermal-hydraulic code validation in the Canadian nuclear industry are briefly described. The modelling of vapour generation and in particular condensation plays a key role in modelling of postulated reactor transients. The condensation models adopted in the current state-of-the-art two-fluid CANDU reactor thermal-hydraulic system codes (CATHENA and TUF) are described. As examples of the modelling challenges faced, the simulation of a cold water injection experiment by CATHENA and the simulation of a condensation-induced water hammer experiment by TUF are described.
Force and torque modelling of drilling simulation for orthopaedic surgery.
MacAvelia, Troy; Ghasempoor, Ahmad; Janabi-Sharifi, Farrokh
2014-01-01
The advent of haptic simulation systems for orthopaedic surgery procedures has provided surgeons with an excellent tool for training and preoperative planning purposes. This is especially true for procedures involving the drilling of bone, which require a great amount of adroitness and experience due to difficulties arising from vibration and drill bit breakage. One of the potential difficulties with the drilling of bone is the lack of consistent material evacuation from the drill's flutes as the material tends to clog. This clogging leads to significant increases in force and torque experienced by the surgeon. Clogging was observed for feed rates greater than 0.5 mm/s and spindle speeds less than 2500 rpm. The drilling simulation systems that have been created to date do not address the issue of drill flute clogging. This paper presents force and torque prediction models that account for this phenomenon. The two coefficients of friction required by these models were determined via a set of calibration experiments. The accuracy of both models was evaluated by an additional set of validation experiments resulting in average R² regression correlation values of 0.9546 and 0.9209 for the force and torque prediction models, respectively. The resulting models can be adopted by haptic simulation systems to provide a more realistic tactile output.
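The abstract reports model quality as R² values against validation experiments but does not give the force and torque model equations; the sketch below shows only that validation step, scoring a hypothetical force predictor against synthetic "measured" drilling data. The signal shape and model form are placeholders, not the published models.

```python
# Sketch of the validation metric only: compare a hypothetical force model
# against synthetic measured drilling thrust data using R^2.
import numpy as np

def r_squared(measured, predicted):
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - np.mean(measured)) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic stand-ins for one validation run at a fixed feed rate and speed.
t = np.linspace(0.0, 5.0, 200)                                        # s
measured_force = 40.0 + 8.0 * t + np.random.normal(0.0, 2.0, t.size)  # N
predicted_force = 40.0 + 8.0 * t                                      # toy model output

print("R^2 =", round(r_squared(measured_force, predicted_force), 4))
```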
A Comparison of Three Approaches to Model Human Behavior
NASA Astrophysics Data System (ADS)
Palmius, Joel; Persson-Slumpi, Thomas
2010-11-01
One way of studying social processes is through the use of simulations. The use of simulations for this purpose has been established as its own field, social simulations, and has been used for studying a variety of phenomena. A simulation of a social setting can serve as an aid for thinking about that social setting, and for experimenting with different parameters and studying the outcomes caused by them. When using the simulation as an aid for thinking and experimenting, the chosen simulation approach will implicitly steer the simulationist towards thinking in a certain fashion in order to fit the model. To study the implications of model choice on the understanding of a setting where human anticipation comes into play, a simulation scenario of a coffee room was constructed using three different simulation approaches: Cellular Automata, Systems Dynamics and Agent-based modeling. The practical implementations of the models were done in three different simulation packages: Stella for Systems Dynamics, CaFun for Cellular Automata and SesAM for Agent-based modeling. The models were evaluated both using Randers' criteria for model evaluation, and through introspection where the authors reflected upon how their understanding of the scenario was steered through the model choice. Further, the software used for implementing the simulation models was evaluated, and practical considerations for the choice of software package are listed. It is concluded that the models have very different strengths. The Agent-based modeling approach offers the most intuitive support for thinking about and modeling a social setting where the behavior of the individual is in focus. The Systems Dynamics model would be preferable in situations where populations and large groups would be studied as wholes, but where individual behavior is of less concern. The Cellular Automata models would be preferable where processes need to be studied from the basis of a small set of very simple rules. It is further concluded that in most social simulation settings the Agent-based modeling approach would be the probable choice, since the other models do not offer much in the way of supporting the modeling of the anticipatory behavior of humans acting in an organization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ngirmang, Gregory K., E-mail: ngirmang.1@osu.edu; Orban, Chris; Feister, Scott
We present 3D Particle-in-Cell (PIC) modeling of an ultra-intense laser experiment by the Extreme Light group at the Air Force Research Laboratory using the Large Scale Plasma (LSP) PIC code. This is the first time PIC simulations have been performed in 3D for this experiment, which involves an ultra-intense, short-pulse (30 fs) laser interacting with a water jet target at normal incidence. The laser-energy-to-ejected-electron-energy conversion efficiency observed in 2D(3v) simulations was comparable to the conversion efficiencies seen in the 3D simulations, but the angular distribution of ejected electrons in the 2D(3v) simulations displayed interesting differences from the 3D simulations' angular distribution; the observed differences between the 2D(3v) and 3D simulations were more noticeable for the simulations with higher intensity laser pulses. An analytic plane-wave model is discussed which provides some explanation for the angular distribution and energies of ejected electrons in the 2D(3v) simulations. We also performed a 3D simulation with circularly polarized light and found a significantly higher conversion efficiency and peak electron energy, which is promising for future experiments.
Numerical Investigation of Plasma Detachment in Magnetic Nozzle Experiments
NASA Technical Reports Server (NTRS)
Sankaran, Kamesh; Polzin, Kurt A.
2008-01-01
At present there exists no generally accepted theoretical model that provides a consistent physical explanation of plasma detachment from an externally-imposed magnetic nozzle. To make progress towards that end, simulation of plasma flow in the magnetic nozzle of an arcjet experiment is performed using a multidimensional numerical simulation tool that includes theoretical models of the various dispersive and dissipative processes present in the plasma. This is an extension of the simulation tool employed in previous work by Sankaran et al. The aim is to compare the computational results with various proposed magnetic nozzle detachment theories to develop an understanding of the physical mechanisms that cause detachment. An applied magnetic field topology is obtained using a magnetostatic field solver (see Fig. I), and this field is superimposed on the time-dependent magnetic field induced in the plasma to provide a self-consistent field description. The applied magnetic field and model geometry match those found in experiments by Kuriki and Okada. This geometry is modeled because there is a substantial amount of experimental data that can be compared to the computational results, allowing for validation of the model. In addition, comparison of the simulation results with the experimentally obtained plasma parameters will provide insight into the mechanisms that lead to plasma detachment, revealing how they scale with different input parameters. Further studies will focus on modeling literature experiments both for the purpose of additional code validation and to extract physical insight regarding the mechanisms driving detachment.
NASA Astrophysics Data System (ADS)
Pittman, E. R.; Gustavsen, R. L.; Hagelberg, C. R.; Schmidt, J. H.
2017-06-01
The focus of this set of experiments is the development of data on the Hugoniot for the overdriven products equation of state (EOS) of PBX 9501 (95 weight % HMX, 5 weight % plastic binder) and to extend the data from which current computational EOS models draw. This series of shots was conducted using the two-stage gas guns at Los Alamos and aimed to gather data in the 30 to 120 GPa pressure regime. Experiments were simulated using FLAG, a Lagrangian multiphysics code, with a one-dimensional setup that employs the Wescott Stewart Davis (WSD) reactive burn model. Prior to this study, data did not extend above 90 GPa, so the new data allowed the model to be re-evaluated. A comparison of the simulations with the experimental data shows that the model fits well below 80 GPa. However, the model did not fall within the error bars of the data for higher pressures. This is an indication that the PBX 9501 overdriven EOS products model could be modified to better match the data.
NASA Technical Reports Server (NTRS)
Lutz, R. J.; Spar, J.
1978-01-01
The Hansen atmospheric model was used to compute five monthly forecasts (October 1976 through February 1977). The comparison is based on an energetics analysis, meridional and vertical profiles, error statistics, and prognostic and observed mean maps. The monthly mean model simulations suffer from several defects. There is, in general, no skill in the simulation of the monthly mean sea-level pressure field, and only marginal skill is indicated for the 850 mb temperatures and 500 mb heights. The coarse-mesh model appears to generate a less satisfactory monthly mean simulation than the finer mesh GISS model.
Design of virtual simulation experiment based on key events
NASA Astrophysics Data System (ADS)
Zhong, Zheng; Zhou, Dongbo; Song, Lingxiu
2018-06-01
Considering the complex content of, and lack of guidance in, virtual simulation experiments, key event technology from VR narrative theory was introduced into virtual simulation experiments to enhance the fidelity and vividness of the process. Based on VR narrative technology, an event transition structure was designed to meet the needs of the experimental operation process, and an interactive event processing model was used to generate key events in the interactive scene. The experiment "margin value of bees foraging", based on biological morphology, was taken as an example, and many objects, behaviors and other contents were reorganized. The result shows that this method can enhance the user's experience and ensure that the experimental process is complete and effective.
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.; Shivarama, Ravishankar
2004-01-01
The hybrid particle-finite element method of Fahrenthold and Horban, developed for the simulation of hypervelocity impact problems, has been extended to include new formulations of the particle-element kinematics, additional constitutive models, and an improved numerical implementation. The extended formulation has been validated in three-dimensional simulations of published impact experiments. The test cases demonstrate good agreement with experiment, good parallel speedup, and numerical convergence of the simulation results.
Blast Load Simulator Experiments for Computational Model Validation Report 3
2017-07-01
establish confidence in the results produced by the simulations. This report describes a set of replicate experiments in which a small, non-responding steel...designed to simulate blast waveforms for explosive yields up to 20,000 lb of TNT equivalent at a peak reflected pressure up to 80 psi and a peak...the pressure loading on a non-responding box-type structure at varying obliquities located in the flow of the BLS simulated blast environment for
Simulations and experiments of ejecta generation in twice-shocked metals
NASA Astrophysics Data System (ADS)
Karkhanis, Varad; Ramaprabhu, Praveen; Buttler, William; Hammerberg, James; Cherne, Frank; Andrews, Malcolm
2016-11-01
Using continuum hydrodynamics embedded in the FLASH code, we model ejecta generation in recent target experiments, where a metallic surface was loaded by two successive shock waves. The experimental data were obtained from a two-shockwave, high-explosive tool at Los Alamos National Laboratory, capable of generating ejecta from a shocked tin surface into a vacuum. In both simulations and experiment, linear growth is observed following the first shock event, while the second shock strikes a finite-amplitude interface leading to nonlinear growth. The timing of the second incident shock was varied systematically in our simulations to realize a finite-amplitude re-initialization of the RM instability driving the ejecta. We find that the shape of the interface at the time of the second shock is critical in determining the amount of ejecta, and thus must be used as an initial condition to evaluate subsequent ejected mass using a source model. In particular, the agreement between simulations, experiments and the mass model is improved when shape effects associated with the interface at the second shock are incorporated. This work was supported in part by the (U.S.) Department of Energy (DOE) under Contract No. DE-AC52-06NA2-5396.
Comparing Simulated and Experimental Data from UCN τ
NASA Astrophysics Data System (ADS)
Howard, Dezrick; Holley, Adam
2017-09-01
The UCN τ experiment is designed to measure the average lifetime of a free neutron (τn) by trapping ultracold neutrons (UCN) in a magneto-gravitational trap and allowing them to β-decay, with the ultimate goal of minimizing the uncertainty to approximately 0.01% (0.1 s). Understanding the systematics of the experiment at the level necessary to reach this high precision may help to better understand the disparity between measurements from cold neutron beam and UCN bottle experiments (τn ≈ 888 s and τn ≈ 878 s, respectively). To assist in evaluating systematics that might conceivably contribute at this level, a neutron spin-tracking Monte Carlo simulation, which models a UCN population's behavior throughout a run, is currently under development. The simulation will utilize an empirical map of the magnetic field in the trap (see poster by K. Hoffman) by interpolating the field between measured points (see poster by J. Felkins) in order to model the depolarization mechanism with high fidelity. As a preliminary step, I have checked that the Monte Carlo model can reasonably reproduce the observed behavior of the experiment. In particular, I will present a comparison between simulated data and data acquired from the 2016-2017 UCN τ run cycle.
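A sketch of the field-map interpolation step mentioned above, under stated assumptions (the actual UCNτ simulation code and map format are not described in the abstract): a gridded |B| map is wrapped in a trilinear interpolator that a spin-tracking Monte Carlo could query along neutron trajectories.

```python
# Sketch only: interpolate a measured magnetic field map onto arbitrary
# neutron positions so a Monte Carlo can evaluate B between surveyed points.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical survey grid of |B| values (T) over part of the trap volume.
x = np.linspace(-0.5, 0.5, 21)
y = np.linspace(-0.2, 0.2, 11)
z = np.linspace(0.0, 0.5, 26)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
B_mag = 1.0 + 0.3 * Z + 0.05 * (X**2 + Y**2)   # stand-in field values

interp_B = RegularGridInterpolator((x, y, z), B_mag)

# Evaluate the field along a simulated UCN trajectory (positions in meters).
positions = np.array([[0.1, 0.05, 0.2], [-0.3, 0.0, 0.45]])
print(interp_B(positions))   # |B| at each position, in tesla
```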
High Resolution Integrated Hohlraum-Capsule Simulations for Virtual NIF Ignition Campaign
NASA Astrophysics Data System (ADS)
Jones, O. S.; Marinak, M. M.; Cerjan, C. J.; Clark, D. S.; Edwards, M. J.; Haan, S. W.; Langer, S. H.; Salmonson, J. D.
2009-11-01
We have undertaken a virtual campaign to assess the viability of the sequence of NIF experiments planned for 2010 that will experimentally tune the shock timing, symmetry, and ablator thickness of a cryogenic ignition capsule prior to the first ignition attempt. The virtual campaign consists of two teams. The "red team" creates realistic simulated diagnostic data for a given experiment from the output of a detailed radiation hydrodynamics calculation that has physics models that have been altered in a way that is consistent with probable physics uncertainties. The "blue team" executes a series of virtual experiments and interprets the simulated diagnostic data from those virtual experiments. To support this effort we have developed a capability to do very high spatial resolution integrated hohlraum-capsule simulations using the Hydra code. Surface perturbations for all ablator layer surfaces and the DT ice layer are calculated explicitly through mode 30. The effects of the fill tube, cracks in the ice layer, and defects in the ablator are included in models extracted from higher resolution calculations. Very high wave number mix is included through a mix model. We will show results from these calculations in the context of the ongoing virtual campaign.
Synchronizing Two AGCMs via Ocean-Atmosphere Coupling (Invited)
NASA Astrophysics Data System (ADS)
Kirtman, B. P.
2009-12-01
A new approach for fusing or synchronizing two very different Atmospheric General Circulation Models (AGCMs) is described. The approach is also well suited for understanding why two different coupled models have such large differences in their respective climate simulations. In the application presented here, the differences between the coupled models using the Center for Ocean-Land-Atmosphere Studies (COLA) and the National Center for Atmospheric Research (NCAR) atmospheric general circulation models (AGCMs) are examined. The intent is to isolate which component of the air-sea fluxes is most responsible for the differences between the coupled models and for the errors in their respective coupled simulations. The procedure is to simultaneously couple the two different atmospheric component models to a single ocean general circulation model (OGCM), in this case the Modular Ocean Model (MOM) developed at the Geophysical Fluid Dynamics Laboratory (GFDL). Each atmospheric component model experiences the same SST produced by the OGCM, but the OGCM is simultaneously coupled to both AGCMs using a cross coupling strategy. In the first experiment, the OGCM is coupled to the heat and fresh water flux from the NCAR AGCM (Community Atmospheric Model; CAM) and the momentum flux from the COLA AGCM. Both AGCMs feel the same SST. In the second experiment, the OGCM is coupled to the heat and fresh water flux from the COLA AGCM and the momentum flux from the CAM AGCM. Again, both atmospheric component models experience the same SST. By comparing these two experimental simulations with control simulations where only one AGCM is used, it is possible to argue which of the flux components are most responsible for the differences in the simulations and their respective errors. Based on these sensitivity experiments we conclude that the tropical ocean warm bias in the COLA coupled model is due to errors in the heat flux, and that the erroneous westward shift in the tropical Pacific cold tongue minimum in the NCAR model is due to errors in the momentum flux. All the coupled simulations presented here have warm biases along the eastern boundary of the tropical oceans suggesting that the problem is common to both AGCMs. In terms of interannual variability in the tropical Pacific, the CAM momentum flux is responsible for the erroneous westward extension of the sea surface temperature anomalies (SSTA) and errors in the COLA momentum flux cause the erroneous eastward migration of the El Niño-Southern Oscillation (ENSO) events. These conclusions depend on assuming that the error due to the OGCM can be neglected.
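A schematic of the cross-coupling strategy described above, written as toy Python classes rather than the actual COLA/NCAR/MOM coupling code: the ocean is forced with heat and freshwater fluxes from one AGCM and the momentum flux from the other, while both AGCMs receive the same SST each coupling interval. All classes and flux formulas below are placeholders.

```python
# Schematic sketch of the cross-coupling experiment, not a real coupler.
class ToyAGCM:
    def __init__(self, heat_bias, tau_bias):
        self.heat_bias, self.tau_bias = heat_bias, tau_bias
    def step(self, sst):
        # Placeholder fluxes; a real AGCM returns gridded fields.
        return {"heat": -10.0 + self.heat_bias - 0.5 * (sst - 27.0),
                "freshwater": 0.0,
                "momentum": 0.05 + self.tau_bias}

class ToyOcean:
    def __init__(self, sst=27.0):
        self.sst = sst
    def step(self, forcing):
        # Crude slab response: SST follows the heat flux it receives.
        self.sst += 0.01 * forcing["heat"]
        return self.sst

agcm_cola, agcm_cam, ocean = ToyAGCM(2.0, 0.01), ToyAGCM(-1.0, -0.02), ToyOcean()

sst = ocean.sst
for _ in range(12):                       # one year of monthly coupling steps
    f_cola, f_cam = agcm_cola.step(sst), agcm_cam.step(sst)
    # Experiment 1: heat/freshwater from CAM, momentum from COLA.
    forcing = {"heat": f_cam["heat"], "freshwater": f_cam["freshwater"],
               "momentum": f_cola["momentum"]}
    sst = ocean.step(forcing)             # the same SST is seen by both AGCMs
print("final SST:", round(sst, 2))
```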
A 3D Joint Simulation Platform for Multiband: A Case Study in the Huailai Soybean and Maize Field
NASA Astrophysics Data System (ADS)
Zhang, Y.; Qinhuo, L.; Du, Y.; Huang, H.
2016-12-01
The canopy radiation and scattering signal contains abundant vegetation information. One can quantitatively retrieve biophysical parameters by building canopy radiation and scattering models and inverting them. Joint simulation of 3D models across different spectral (frequency) domains can produce complementary advantages and improve precision. However, most current models are based on one or two spectral bands (e.g. visible and thermal infrared bands, or visible and microwave bands). This manuscript establishes a 3D radiation and scattering simulation system that can simulate the BRDF, DBT, and backscattering coefficient from the same structural description. The system couples the radiosity graphic model, the thermal RGM model and the coherent microwave model by Yang Du for VIS/NIR, TIR, and MW, respectively. Models for the leaf spectral characteristics, component temperatures and dielectric properties were also coupled into the joint simulation system to convert the various parameters into fewer, more unified parameters. As a demonstration, we applied the established system to simulate a mixed field of soybeans and maize based on the Huailai experiment data from August 2014. With the help of the Xfrog software, we remodeled soybean and maize in ".obj" and ".mtl" format, and extracted their structural information by statistics of the ".obj" files. We performed simulations in the red, NIR, TIR, C and L bands, and validated the results against the multi-angular observation data of the Huailai experiment. The spatial distribution (horizontal and vertical), leaf area index (LAI), leaf angle distribution (LAD), vegetation water content (VWC) and the incident and observation geometry were also analyzed in detail. Validated against the experiment data, the multiband simulations performed quite well. Because the crops were planted in regular rows and the maize and soybeans differed in height, LAI, LAD and VWC, we performed a sensitivity analysis by varying one of these parameters while fixing the others. The analysis showed that the parameters influence the radiation and scattering signals of the different spectral (frequency) domains to varying degrees.
A permeation theory for single-file ion channels: one- and two-step models.
Nelson, Peter Hugo
2011-04-28
How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no quantitative comparison has yet been made. The A/D model makes a network of predictions for how the elementary steps and the channel occupancy vary with both concentration and voltage. In addition, the proposed theoretical framework suggests a new way of plotting the energetics of the simulated system using a one-dimensional permeation coordinate that uses electric potential energy as a metric for the net fractional progress through the permeation mechanism. This approach has the potential to provide a quantitative connection between atomistic simulations and permeation experiments for the first time.
A Single-column Model Ensemble Approach Applied to the TWP-ICE Experiment
NASA Technical Reports Server (NTRS)
Davies, L.; Jakob, C.; Cheung, K.; DelGenio, A.; Hill, A.; Hume, T.; Keane, R. J.; Komori, T.; Larson, V. E.; Lin, Y.;
2013-01-01
Single-column models (SCM) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best estimate large-scale observations prescribed. Errors estimating the observations will result in uncertainty in modeled simulations. One method to address the modeled uncertainty is to simulate an ensemble where the ensemble members span observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best estimate product. These data are then used to carry out simulations with 11 SCM and two cloud-resolving models (CRM). Best estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCM and CRM. Differences are also apparent between the models in the ensemble mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to forcing. The ensemble is further used to investigate cloud variables and precipitation and identifies differences between CRM and SCM, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations and hence enables a more complete model investigation compared to using the more traditional single best estimate simulation only.
Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S
2016-05-01
The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated to be implemented in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and the average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6% respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures. Copyright © 2016 Elsevier B.V. All rights reserved.
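As an illustration of how through-transmission attenuation coefficients like those compared above are commonly obtained, the sketch below applies the standard substitution method to synthetic reference and sample waveforms; the authors' exact signal processing is not specified in the abstract, so this is an assumed pipeline with placeholder signals.

```python
# Assumed processing sketch: substitution method for through-transmission
# attenuation, alpha(f) [dB/cm] = (20 / d) * log10(|A_ref(f)| / |A_sam(f)|).
import numpy as np

fs = 50e6                     # sampling rate, Hz
t = np.arange(2048) / fs
d = 1.0                       # sample thickness, cm

def pulse(delay, amp, f0=1.0e6):
    # Gaussian-modulated tone burst as a stand-in received waveform.
    return amp * np.exp(-((t - delay) * 4e6) ** 2) * np.sin(2 * np.pi * f0 * (t - delay))

ref = pulse(10e-6, 1.0)               # water-path reference signal
sam = pulse(11e-6, 0.4)               # signal transmitted through the sample

freqs = np.fft.rfftfreq(t.size, 1 / fs)
A_ref, A_sam = np.abs(np.fft.rfft(ref)), np.abs(np.fft.rfft(sam))

band = (freqs > 0.6e6) & (freqs < 1.4e6)          # analysis band from the paper
alpha = (20.0 / d) * np.log10(A_ref[band] / A_sam[band])
print(alpha.mean(), "dB/cm average over 0.6-1.4 MHz")
```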
Elastic plastic self-consistent (EPSC) modeling of plastic deformation in fayalite olivine
Burnley, Pamela C
2015-07-01
Elastic plastic self-consistent (EPSC) simulations are used to model synchrotron X-ray diffraction observations from deformation experiments on fayalite olivine using the deformation DIA apparatus. Consistent with results from other in situ diffraction studies of monomineralic polycrystals, the results show substantial variations in stress levels among grain populations. Rather than averaging the lattice reflection stresses or choosing a single reflection to determine the macroscopic stress supported by the specimen, an EPSC simulation is used to forward model diffraction data and determine a macroscopic stress that is consistent with lattice strains of all measured diffraction lines. The EPSC simulation presented here includes kink band formation among the plastic deformation mechanisms in the simulation. The inclusion of kink band formation is critical to the success of the models. This study demonstrates the importance of kink band formation as an accommodation mechanism during plastic deformation of olivine as well as the utility of using EPSC models to interpret diffraction from in situ deformation experiments.
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; ...
2015-05-27
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
Modeling Ullage Dynamics of Tank Pressure Control Experiment during Jet Mixing in Microgravity
NASA Technical Reports Server (NTRS)
Kartuzova, O.; Kassemi, M.
2016-01-01
A CFD model for simulating the fluid dynamics of the jet-induced mixing process is utilized in this paper to model the pressure control portion of the Tank Pressure Control Experiment (TPCE) in microgravity. The Volume of Fluid (VOF) method is used for modeling the dynamics of the interface during mixing. The simulations were performed at a range of jet Weber numbers from non-penetrating to fully penetrating. Two different initial ullage positions were considered. The computational results for the jet-ullage interaction are compared with still images from the video of the experiment. A qualitative comparison shows that the CFD model was able to capture the main features of the interfacial dynamics, as well as the jet penetration of the ullage.
Model Errors in Simulating Precipitation and Radiation fields in the NARCCAP Hindcast Experiment
NASA Astrophysics Data System (ADS)
Kim, J.; Waliser, D. E.; Mearns, L. O.; Mattmann, C. A.; McGinnis, S. A.; Goodale, C. E.; Hart, A. F.; Crichton, D. J.
2012-12-01
The relationship between the model errors in simulating precipitation and radiation fields, including the surface insolation and OLR, is examined from the multi-RCM NARCCAP hindcast experiment for the conterminous U.S. region. Findings in this study suggest that the RCM biases in simulating precipitation are related to those in simulating radiation fields. For a majority of the RCMs that participated in the NARCCAP hindcast experiment, as well as their ensemble, the spatial pattern of the insolation bias is negatively correlated with that of the precipitation bias, suggesting that the biases in precipitation and surface insolation are systematically related, most likely via the cloud fields. The relationship also varies with season, with a stronger relationship between the simulated precipitation and surface insolation during winter. This again suggests that the RCM biases in precipitation and radiation are related via cloud fields. Additional analysis of the RCM errors in OLR is underway to examine this relationship in more detail.
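A minimal sketch of the kind of diagnostic described above (an assumed illustration, not the NARCCAP analysis code): an area-weighted spatial pattern correlation between a precipitation-bias map and a surface-insolation-bias map, which would come out negative when the two biases are systematically related through cloud errors.

```python
# Sketch: area-weighted pattern correlation between two gridded bias maps.
import numpy as np

rng = np.random.default_rng(1)
lat = np.linspace(25.0, 50.0, 60)
shared = rng.standard_normal((60, 120))                 # common "cloud" signal

precip_bias = 0.8 * shared + 0.2 * rng.standard_normal((60, 120))
insol_bias = -0.7 * shared + 0.3 * rng.standard_normal((60, 120))

# Area weighting by cos(latitude), broadcast across longitudes.
w = np.cos(np.deg2rad(lat))[:, None] * np.ones((1, 120))

def weighted_pattern_corr(a, b, w):
    am = a - np.average(a, weights=w)
    bm = b - np.average(b, weights=w)
    cov = np.average(am * bm, weights=w)
    return cov / np.sqrt(np.average(am**2, weights=w) * np.average(bm**2, weights=w))

print("pattern correlation:", round(weighted_pattern_corr(precip_bias, insol_bias, w), 2))
```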
NASA Astrophysics Data System (ADS)
Mukherjee, A.; Shankar, D.; Chatterjee, Abhisek; Vinayachandran, P. N.
2018-06-01
We simulate the East India Coastal Current (EICC) using two numerical models (resolution 0.1° × 0.1°), an oceanic general circulation model (OGCM) called Modular Ocean Model and a simpler, linear, continuously stratified (LCS) model, and compare the simulated current with observations from moorings equipped with acoustic Doppler current profilers deployed on the continental slope in the western Bay of Bengal (BoB). We also carry out numerical experiments to analyse the processes. Both models simulate well the annual cycle of the EICC, but the performance degrades for the intra-annual and intraseasonal components. In a model-resolution experiment, both models (run at a coarser resolution of 0.25° × 0.25°) simulate well the currents in the equatorial Indian Ocean (EIO), but the performance of the high-resolution LCS model as well as the coarse-resolution OGCM, which is good in the EICC regime, degrades in the eastern and northern BoB. An experiment on forcing mechanisms shows that the annual EICC is largely forced by the local alongshore winds in the western BoB and remote forcing due to Ekman pumping over the BoB, but forcing from the EIO has a strong impact on the intra-annual EICC. At intraseasonal periods, local (equatorial) forcing dominates in the south (north) because the Kelvin wave propagates equatorward in the western BoB. A stratification experiment with the LCS model shows that changing the background stratification from EIO to BoB leads to a stronger surface EICC owing to strong coupling of higher order vertical modes with wind forcing for the BoB profiles. These high-order modes, which lead to energy propagating down into the ocean in the form of beams, are important only for the current and do not contribute significantly to the sea level.
Finke, John M; Cheung, Margaret S; Onuchic, José N
2004-09-01
Modeling the structure of natively disordered peptides has proved difficult due to the lack of structural information on these peptides. In this work, we use a novel application of the host-guest method, combining folding theory with experiments, to model the structure of natively disordered polyglutamine peptides. Initially, a minimalist molecular model (Cα/Cβ) of CI2 is developed with a structurally based potential and captures many of the folding properties of CI2 determined from experiments. Next, polyglutamine "guest" inserts of increasing length are introduced into the CI2 "host" model and the polyglutamine is modeled to match the resultant change in CI2 thermodynamic stability between simulations and experiments. The polyglutamine model that best mimics the experimental changes in CI2 thermodynamic stability has (1) a β-strand dihedral preference and (2) an attractive energy between polyglutamine atoms 0.75 times the attractive energy between the CI2 host Gō contacts. When free-energy differences in the CI2 host-guest system are correctly modeled at varying lengths of polyglutamine guest inserts, the kinetic folding rates and structural perturbation of these CI2 insert mutants are also correctly captured in simulations without any additional parameter adjustment. In agreement with experiments, the residues showing structural perturbation are located in the immediate vicinity of the loop insert. The simulated polyglutamine loop insert predominantly adopts extended random coil conformations, a structural model consistent with low resolution experimental methods. The agreement between simulation and experimental CI2 folding rates, CI2 structural perturbation, and polyglutamine insert structure show that this host-guest method can select a physically realistic model for inserted polyglutamine. If other amyloid peptides can be inserted into stable protein hosts and the stabilities of these host-guest mutants determined, this novel host-guest method may prove useful to determine structural preferences of these intractable but biologically relevant protein fragments.
Maritime Continent seasonal climate biases in AMIP experiments of the CMIP5 multimodel ensemble
NASA Astrophysics Data System (ADS)
Toh, Ying Ying; Turner, Andrew G.; Johnson, Stephanie J.; Holloway, Christopher E.
2018-02-01
The fidelity of 28 Coupled Model Intercomparison Project phase 5 (CMIP5) models in simulating mean climate over the Maritime Continent in the Atmospheric Model Intercomparison Project (AMIP) experiment is evaluated in this study. The performance of AMIP models varies greatly in reproducing seasonal mean climate and the seasonal cycle. The multi-model mean has better skill at reproducing the observed mean climate than the individual models. The spatial pattern of 850 hPa wind is better simulated than the precipitation in all four seasons. We found that model horizontal resolution is not a good indicator of model performance. Instead, a model's local Maritime Continent biases are somewhat related to its biases in the local Hadley circulation and global monsoon. The comparison with coupled models in CMIP5 shows that AMIP models generally performed better than coupled models in the simulation of the global monsoon and local Hadley circulation but less well at simulating the Maritime Continent annual cycle of precipitation. To characterize model systematic biases in the AMIP runs, we performed cluster analysis on Maritime Continent annual cycle precipitation. Our analysis resulted in two distinct clusters. Cluster I models are able to capture both the winter monsoon and summer monsoon shift, but they overestimate the precipitation; especially during the JJA and SON seasons. Cluster II models simulate weaker seasonal migration than observed, and the maximum rainfall position stays closer to the equator throughout the year. The tropics-wide properties of these clusters suggest a connection between the skill of simulating global properties of the monsoon circulation and the skill of simulating the regional scale of Maritime Continent precipitation.
Simulations of Ground and Space-Based Oxygen Atom Experiments
NASA Technical Reports Server (NTRS)
Finchum, A. (Technical Monitor); Cline, J. A.; Minton, T. K.; Braunstein, M.
2003-01-01
A low-earth orbit (LEO) materials erosion scenario and the ground-based experiment designed to simulate it are compared using the direct-simulation Monte Carlo (DSMC) method. The DSMC model provides a detailed description of the interactions between the hyperthermal gas flow and a normally oriented flat plate for each case. We find that while the general characteristics of the LEO exposure are represented in the ground-based experiment, multi-collision effects can potentially alter the impact energy and directionality of the impinging molecules in the ground-based experiment. Multi-collision phenomena also affect downstream flux measurements.
Astronauts Young and Duke begin simulated lunar surface traverse at KSC
NASA Technical Reports Server (NTRS)
1972-01-01
Astronauts John W. Young, right, Apollo 16 commander, and Charles M. Duke Jr., lunar module pilot, prepare to begin a simulated traverse in a training area at the Kennedy Space Center (KSC). Among the experiments to fly on Apollo 16 is the soil mechanics (S-200) experiment, or self-recording penetrometer, a model of which is held here by Duke. A training model of the Lunar Roving Vehicle (LRV) is parked between the two crewmen (30694); Young and Duke maneuver a training version of the LRV about a field at KSC simulated to represent the lunar surface (30695).
Verification technology of remote sensing camera satellite imaging simulation based on ray tracing
NASA Astrophysics Data System (ADS)
Gu, Qiongqiong; Chen, Xiaomei; Yang, Deyun
2017-08-01
Remote sensing satellite camera imaging simulation technology is broadly used to evaluate satellite imaging quality and to test data application systems, but the simulation precision is hard to examine. In this paper, we propose an experimental simulation verification method based on comparing results under test parameter variations. For the ray-tracing-based simulation model, the experiment verifies the model precision by changing the types of devices, which correspond to the parameters of the model. The experimental results show that the similarity between the imaging model based on ray tracing and the experimental image is 91.4%, indicating that the model can simulate the remote sensing satellite imaging system very well.
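The abstract does not state which metric produces the 91.4% similarity figure; as one plausible stand-in, the sketch below scores a simulated image against an experimental image with a zero-normalized cross-correlation expressed as a percentage. The image arrays are synthetic placeholders.

```python
# Sketch: one possible image-similarity metric (zero-normalized cross-correlation).
import numpy as np

def similarity_percent(img_sim, img_exp):
    a = img_sim - img_sim.mean()
    b = img_exp - img_exp.mean()
    ncc = (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum())
    return 100.0 * ncc

rng = np.random.default_rng(2)
experimental = rng.uniform(0.0, 1.0, size=(256, 256))
simulated = 0.9 * experimental + 0.1 * rng.uniform(0.0, 1.0, size=(256, 256))

print(f"similarity: {similarity_percent(simulated, experimental):.1f}%")
```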
Evaluation of simulation training in cardiothoracic surgery: the Senior Tour perspective.
Fann, James I; Feins, Richard H; Hicks, George L; Nesbitt, Jonathan C; Hammon, John W; Crawford, Fred A
2012-02-01
The study objective was to introduce senior surgeons, referred to as members of the "Senior Tour," to simulation-based learning and evaluate ongoing simulation efforts in cardiothoracic surgery. Thirteen senior cardiothoracic surgeons participated in a 2½-day Senior Tour Meeting. Of 12 simulators, each participant focused on 6 cardiac (small vessel anastomosis, aortic cannulation, cardiopulmonary bypass, aortic valve replacement, mitral valve repair, and aortic root replacement) or 6 thoracic surgical simulators (hilar dissection, esophageal anastomosis, rigid bronchoscopy, video-assisted thoracoscopic surgery lobectomy, tracheal resection, and sleeve resection). The participants provided critical feedback regarding the realism and utility of the simulators, which served as the basis for a composite assessment of the simulators. All participants acknowledged that simulation may not provide a wholly immersive experience. For small vessel anastomosis, the portable chest model is less realistic compared with the porcine model, but is valuable in teaching anastomosis mechanics. The aortic cannulation model allows multiple cannulations and can serve as a thoracic aortic surgery model. The cardiopulmonary bypass simulator provides crisis management experience. The porcine aortic valve replacement, mitral valve annuloplasty, and aortic root models are realistic and permit standardized training. The hilar dissection model is subject to variability of porcine anatomy and fragility of the vascular structures. The realistic esophageal anastomosis simulator presents various approaches to esophageal anastomosis. The exercise associated with the rigid bronchoscopy model is brief, and adding additional procedures should be considered. The tracheal resection, sleeve resection, and video-assisted thoracoscopic surgery lobectomy models are highly realistic and simulate advanced maneuvers. By providing the necessary tools, such as task trainers and assessment instruments, the Senior Tour may be one means to enhance simulation-based learning in cardiothoracic surgery. The Senior Tour members can provide regular programmatic evaluation and critical analyses to ensure that proposed simulators are of educational value. Published by Mosby, Inc.
Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation
2016-08-02
PREDICTION OF VEHICLE MOBILITY ON LARGE-SCALE SOFT-SOIL TERRAIN MAPS USING PHYSICS-BASED SIMULATION. Tamer M. Wasfy, Paramsothy Jayakumar, Dave... Briefing outline: NRMM; Objectives; Soft Soils; Review of Physics-Based Soil Models; MBD/DEM Modeling Formulation (Joint & Contact Constraints, DEM Cohesive... Soil Model); Cone Penetrometer Experiment; Vehicle-Soil Model; Vehicle Mobility DOE Procedure; Simulation Results; Concluding Remarks.
Numerical modelling of orthogonal cutting: application to woodworking with a bench plane.
Nairn, John A
2016-06-06
A numerical model for orthogonal cutting using the material point method was applied to woodcutting using a bench plane. The cutting process was modelled by accounting for surface energy associated with wood fracture toughness for crack growth parallel to the grain. By using damping to deal with dynamic crack propagation and modelling all contact between wood and the plane, simulations could initiate chip formation and proceed into steady-state chip propagation including chip curling. Once steady-state conditions were achieved, the cutting forces became constant and could be determined as a function of various simulation variables. The modelling details included a cutting tool, the tool's rake and grinding angles, a chip breaker, a base plate and a mouth opening between the base plate and the tool. The wood was modelled as an anisotropic elastic-plastic material. The simulations were verified by comparison to an analytical model and then used to conduct virtual experiments on wood planing. The virtual experiments showed interactions between depth of cut, chip breaker location and mouth opening. Additional simulations investigated the role of tool grinding angle, tool sharpness and friction.
De Paris, Renata; Frantz, Fábio A.; Norberto de Souza, Osmar; Ruiz, Duncan D. A.
2013-01-01
Molecular docking simulations of fully flexible protein receptor (FFR) models are coming of age. In our studies, an FFR model is represented by a series of different conformations derived from a molecular dynamic simulation trajectory of the receptor. For each conformation in the FFR model, a docking simulation is executed and analyzed. An important challenge is to perform virtual screening of millions of ligands using an FFR model in a sequential mode, since it can become computationally very demanding. In this paper, we propose a cloud-based web environment, called web Flexible Receptor Docking Workflow (wFReDoW), which reduces the CPU time in the molecular docking simulations of FFR models to small molecules. It is based on the new workflow data pattern called self-adaptive multiple instances (P-SaMI) and on a middleware built on Amazon EC2 instances. P-SaMI reduces the number of molecular docking simulations while the middleware speeds up the docking experiments using a High Performance Computing (HPC) environment on the cloud. The experimental results show a reduction in the total elapsed time of docking experiments and demonstrate the quality of the new reduced receptor models produced by discarding the non-promising conformations from an FFR model ruled by the P-SaMI data pattern. PMID:23691504
Impact of spectral nudging on regional climate simulation over CORDEX East Asia using WRF
NASA Astrophysics Data System (ADS)
Tang, Jianping; Wang, Shuyu; Niu, Xiaorui; Hui, Pinhong; Zong, Peishu; Wang, Xueyuan
2017-04-01
In this study, the impact of the spectral nudging method on regional climate simulation over the Coordinated Regional Climate Downscaling Experiment East Asia (CORDEX-EA) region is investigated using the Weather Research and Forecasting model (WRF). Driven by the ERA-Interim reanalysis, five continuous simulations covering 1989-2007 are conducted by the WRF model, in which four runs adopt interior spectral nudging with different wavenumbers, nudging variables and nudging coefficients. Model validation shows that WRF has the ability to simulate spatial distributions and temporal variations of the surface climate (air temperature and precipitation) over the CORDEX-EA domain. By comparison, the spectral nudging technique is effective in improving the model's skill in the following aspects: (1) the simulated biases and root mean square errors of annual mean temperature and precipitation are obviously reduced. The SN3-UVT (spectral nudging with wavenumber 3 in both zonal and meridional directions applied to U, V and T) and SN6 (spectral nudging with wavenumber 6 in both zonal and meridional directions applied to U and V) experiments give the best simulations for temperature and precipitation, respectively. The inter-annual and seasonal variances produced by the SN experiments are also closer to the ERA-Interim observation. (2) The application of spectral nudging in WRF is helpful for simulating extreme temperature and precipitation, and the SN3-UVT simulation shows a clear advantage over the other simulations in depicting both the spatial distributions and inter-annual variances of temperature and precipitation extremes. With spectral nudging, WRF is able to preserve the variability in the large-scale climate information, and therefore adjust the temperature and precipitation variabilities toward the observation.
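A schematic of interior spectral nudging itself (independent of the WRF implementation and the namelist settings used in the study): the large scales of a model field, defined by a cutoff wavenumber, are relaxed toward the corresponding large scales of the driving data while smaller scales evolve freely. The grid size, cutoff, and nudging coefficient below are illustrative assumptions.

```python
# Schematic of spectral nudging: relax only the large scales toward driving data.
import numpy as np

def lowpass(field, nmax):
    """Retain 2-D Fourier modes with wavenumber index <= nmax in x and y."""
    spec = np.fft.fft2(field)
    ny, nx = field.shape
    ky = np.minimum(np.arange(ny), ny - np.arange(ny))[:, None]
    kx = np.minimum(np.arange(nx), nx - np.arange(nx))[None, :]
    spec[(ky > nmax) | (kx > nmax)] = 0.0
    return np.real(np.fft.ifft2(spec))

def spectral_nudge(model, driving, nmax=3, coeff=3e-4, dt=600.0):
    """One nudging step: pull the model's large scales toward the driving data."""
    return model + coeff * dt * (lowpass(driving, nmax) - lowpass(model, nmax))

rng = np.random.default_rng(3)
u_model = rng.standard_normal((90, 120))      # e.g. zonal wind on the RCM grid
u_driving = rng.standard_normal((90, 120))    # reanalysis interpolated to the grid
u_model = spectral_nudge(u_model, u_driving)  # applied every model time step
```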
Direct drive: Simulations and results from the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radha, P. B., E-mail: rbah@lle.rochester.edu; Hohenberger, M.; Edgell, D. H.
Direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.
Matsunaga, Yasuhiro; Sugita, Yuji
2018-05-03
Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. © 2018, Matsunaga et al.
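A minimal sketch of the MSM-construction step referred to above (the FRET-based refinement via hidden Markov modeling is not reproduced here): a row-normalized transition matrix is estimated from state-discretized trajectories at a chosen lag time, and implied timescales are read off its eigenvalues. The state count, lag, and trajectories below are toy assumptions, not the WW-domain data.

```python
# Sketch: estimate a Markov state model transition matrix from discrete trajectories.
import numpy as np

def transition_matrix(dtrajs, n_states, lag):
    """Count transitions at a given lag time and row-normalize."""
    counts = np.zeros((n_states, n_states))
    for traj in dtrajs:
        for i in range(len(traj) - lag):
            counts[traj[i], traj[i + lag]] += 1
    counts += 1e-8                      # avoid empty rows
    return counts / counts.sum(axis=1, keepdims=True)

# Toy discretized trajectories over 3 hypothetical conformational states.
rng = np.random.default_rng(4)
dtrajs = [rng.integers(0, 3, size=5000) for _ in range(10)]

T = transition_matrix(dtrajs, n_states=3, lag=10)
eigvals = np.sort(np.linalg.eigvals(T).real)[::-1]
timescales = -10 / np.log(np.clip(eigvals[1:], 1e-12, None))  # in lag-time units
print(T.round(3), timescales.round(1))
```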
NASA Astrophysics Data System (ADS)
Ushakov, K. V.; Ibrayev, R. A.
2017-11-01
In this paper, we present the first results of a simulation of the mean World Ocean thermohaline characteristics obtained with the INMIO ocean general circulation model configured at 0.1 degree resolution in a 5-year numerical experiment following the CORE-II protocol. The horizontal and zonal mean distributions of the solution bias against the WOA09 data are analyzed. The seasonal cycle of heat content at a specified site in the North Atlantic is also discussed. The simulation results demonstrate a clear improvement in the quality of representation of the upper ocean compared to the results of experiments with 0.5 and 0.25 degree model configurations. Some remaining biases of the model solution and possible ways of overcoming them are highlighted.
A Numerical Simulation and Statistical Modeling of High Intensity Radiated Fields Experiment Data
NASA Technical Reports Server (NTRS)
Smith, Laura J.
2004-01-01
Tests are conducted on a quad-redundant fault-tolerant flight control computer to establish upset characteristics of an avionics system in an electromagnetic field. A numerical simulation and a statistical model are described in this work to analyze the open-loop experiment data collected in the reverberation chamber at NASA LaRC as part of an effort to examine the effects of electromagnetic interference on fly-by-wire aircraft control systems. By comparing thousands of simulation and model outputs, the models that best describe the data are first identified, and a systematic statistical analysis is then performed on the data. These efforts culminate in an extrapolation of values that in turn support previous efforts to evaluate the data.
NASA Astrophysics Data System (ADS)
Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter
2016-10-01
This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.
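For reference, the two governing equations named above can be written, in one common form (generic notation, not necessarily that of the cited model), as:

```latex
% Richards equation (variably saturated flow)
\frac{\partial \theta(\psi)}{\partial t}
  = \nabla \cdot \left[ K(\psi)\, \nabla (\psi + z) \right] + q_w ,
% Advection--dispersion equation (solute transport)
\frac{\partial (\theta c)}{\partial t}
  = \nabla \cdot \left( \theta \mathbf{D}\, \nabla c \right)
    - \nabla \cdot \left( \mathbf{q}\, c \right) + q_c ,
```

where θ is the volumetric water content, ψ the pressure head, K(ψ) the unsaturated hydraulic conductivity, z the elevation head, c the solute concentration, D the dispersion tensor, q the Darcy flux, and q_w, q_c are source/sink terms.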
An analysis of intergroup rivalry using Ising model and reinforcement learning
NASA Astrophysics Data System (ADS)
Zhao, Feng-Fei; Qin, Zheng; Shao, Zhuo
2014-01-01
Modeling of intergroup rivalry can help us better understand economic competitions, political elections and other similar activities. The result of intergroup rivalry depends on the co-evolution of individual behavior within one group and the impact from the rival group. In this paper, we model the rivalry behavior using the Ising model. Different from other simulation studies using the Ising model, the evolution rules of each individual in our model are not static, but have the ability to learn from historical experience using a reinforcement learning technique, which makes the simulation closer to real human behavior. We studied the phase transition in intergroup rivalry and focused on the impact of the degree of social freedom, the personality of group members and the social experience of individuals. The results of computer simulation show that a society with a low degree of social freedom and highly educated, experienced individuals is more likely to be one-sided in intergroup rivalry.
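As a concrete, minimal illustration of the Ising part of such a model, the sketch below runs a Metropolis simulation of an L x L lattice of individuals whose spin (+1/-1) marks their allegiance; the temperature plays the role of the degree of social freedom and a small field stands in for pressure from the rival group. The coupling, field, and temperature values are arbitrary assumptions, and the paper's reinforcement-learning update of individual rules is not reproduced here.

```python
import numpy as np

# Minimal Metropolis sketch of an Ising-type opinion model with two rival groups.
# Spin +1/-1 is an individual's allegiance; J couples neighbours within a group,
# h stands in for pressure from the rival group, and T plays the role of the
# "degree of social freedom". All values are illustrative assumptions.

rng = np.random.default_rng(1)
L = 32                      # lattice of L x L individuals
J, h, T = 1.0, 0.1, 2.0     # coupling, rival-group pressure, social freedom
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins):
    """One Metropolis sweep: L*L random single-spin update attempts."""
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        neighbours = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * (J * neighbours + h)   # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

for _ in range(200):
    spins = sweep(spins)
print("mean allegiance:", spins.mean())   # near 0: contested; near +/-1: one-sided
```

Lowering T (less social freedom) in such a sketch drives the lattice toward a one-sided state, which is the qualitative trend reported above.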
Comparing the Degree of Land-Atmosphere Interaction in Four Atmospheric General Circulation Models
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Dirmeyer, Paul A.; Hahmann, Andrea N.; Ijpelaar, Ruben; Tyahla, Lori; Cox, Peter; Suarez, Max J.; Houser, Paul R. (Technical Monitor)
2001-01-01
Land-atmosphere feedback, by which (for example) precipitation-induced moisture anomalies at the land surface affect the overlying atmosphere and thereby the subsequent generation of precipitation, has been examined and quantified with many atmospheric general circulation models (AGCMs). Generally missing from such studies, however, is an indication of the extent to which the simulated feedback strength is model dependent. Four modeling groups have recently performed a highly controlled numerical experiment that allows an objective inter-model comparison of land-atmosphere feedback strength. The experiment essentially consists of an ensemble of simulations in which each member simulation artificially maintains the same time series of surface prognostic variables. Differences in atmospheric behavior between the ensemble members then indicate the degree to which the state of the land surface controls atmospheric processes in that model. A comparison of the four sets of experimental results shows that feedback strength does indeed vary significantly between the AGCMs.
MHD simulation of plasma compression experiments
NASA Astrophysics Data System (ADS)
Reynolds, Meritt; Barsky, Sandra; de Vietien, Peter
2017-10-01
General Fusion (GF) is working to build a magnetized target fusion (MTF) power plant based on compression of magnetically-confined plasma by liquid metal. GF is testing this compression concept by collapsing solid aluminum liners onto plasmas formed by coaxial helicity injection in a series of experiments called PCS (Plasma Compression, Small). We simulate the PCS experiments using the finite-volume MHD code VAC. The single-fluid plasma model includes temperature-dependent resistivity and anisotropic heat transport. The time-dependent curvilinear mesh for MHD simulation is derived from LS-DYNA simulations of actual field tests of liner implosion. We will discuss how 3D simulations reproduced instability observed in the PCS13 experiment and correctly predicted stabilization of PCS14 by ramping the shaft current during compression. We will also present a comparison of simulated Mirnov and x-ray diagnostics with experimental measurements indicating that PCS14 compressed well to a linear compression ratio of 2.5:1.
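For context, the induction equation evolved by a single-fluid resistive MHD model of this kind can be written in generic form, with a Spitzer-like temperature dependence shown only to illustrate the "temperature-dependent resistivity" mentioned above (the specific closure used in VAC is not reproduced here):

```latex
\frac{\partial \mathbf{B}}{\partial t}
  = \nabla \times \left( \mathbf{v} \times \mathbf{B} \right)
    - \nabla \times \left( \eta \, \nabla \times \mathbf{B} \right),
\qquad \eta \propto T_e^{-3/2}.
```

The resistive term is what allows flux decay and reconnection during compression, while the temperature dependence makes the hot plasma core much closer to ideal than its cooler edge.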
Nuclear Power Plant Mechanical Component Flooding Fragility Experiments Status
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, C. L.; Savage, B.; Johnson, B.
This report describes progress on Nuclear Power Plant mechanical component flooding fragility experiments and supporting research. The progress includes execution of full scale fragility experiments using hollow-core doors, design of improvements to the Portal Evaluation Tank, equipment procurement and initial installation of PET improvements, designation of experiments exploiting the improved PET capabilities, fragility mathematical model development, Smoothed Particle Hydrodynamic simulations, wave impact simulation device research, and pipe rupture mechanics research.
Agaoglu, Berken; Scheytt, Traugott; Copty, Nadim K
2012-10-01
This study examines the mechanistic processes governing multiphase flow of a water-cosolvent-NAPL system in saturated porous media. Laboratory batch and column flushing experiments were conducted to determine the equilibrium properties of pure NAPL and synthetically prepared NAPL mixtures as well as NAPL recovery mechanisms for different water-ethanol contents. The effect of contact time was investigated by considering different steady and intermittent flow velocities. A modified version of the multiphase flow simulator UTCHEM was used to compare the multiphase model simulations with the column experiment results. The effect of employing different grid geometries (1D, 2D, 3D), heterogeneity and different initial NAPL saturation configurations was also examined in the model. It is shown that the change in velocity affects the mass transfer rate between phases as well as the ultimate NAPL recovery percentage. The experiments with low flow rate flushing of pure NAPL and the 3D UTCHEM simulations gave similar effluent concentrations and NAPL cumulative recoveries. Model simulations over-estimated NAPL recovery for high specific discharges and rate-limited mass transfer, suggesting that a constant mass transfer coefficient for the entire flushing experiment may not be valid. When multi-component NAPLs are present, the dissolution rate of individual organic compounds (namely, toluene and benzene) into the ethanol-water flushing solution is found not to correlate with their equilibrium solubility values. Copyright © 2012 Elsevier B.V. All rights reserved.
Experiments and FEM simulations of fracture behaviors for ADC12 aluminum alloy under impact load
NASA Astrophysics Data System (ADS)
Hu, Yumei; Xiao, Yue; Jin, Xiaoqing; Zheng, Haoran; Zhou, Yinge; Shao, Jinhua
2016-11-01
Using a combination of experiment and simulation, the fracture behavior of the brittle ADC12 aluminum alloy was studied. Five typical experiments were carried out on this material, with corresponding data collected for different stress states and dynamic strain rates. Fractographs revealed that the morphologies of specimens fractured at different rates differed, indicating that the fracture was predominantly brittle in nature. Simulations of the fracture processes of those specimens were conducted with the finite element method, and consistency was observed between simulations and experiments. In the simulations, the Johnson-Cook model was chosen to describe the damage development and to predict failure, using parameters determined from the experimental data. Subsequently, an ADC12 engine mount bracket crash simulation was conducted, and the results indicated good agreement with the experiments. This agreement shows that our approach can accurately describe the deformation and fracture processes of the studied alloy.
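For reference, the Johnson-Cook damage model named above accumulates damage against a failure strain that depends on stress state, strain rate, and temperature; in its standard form (material parameters D1 through D5 are calibrated from tests and are not reproduced here):

```latex
\varepsilon_f
  = \left[ D_1 + D_2 \exp\!\left( D_3\, \sigma^{*} \right) \right]
    \left[ 1 + D_4 \ln \dot{\varepsilon}^{*} \right]
    \left[ 1 + D_5\, T^{*} \right],
\qquad
D = \sum \frac{\Delta \varepsilon_p}{\varepsilon_f},
```

where σ* is the stress triaxiality, ε̇* the normalized plastic strain rate, and T* the homologous temperature; an element is considered failed when the accumulated damage D reaches 1.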
Modelling the EDLC-based Power Supply Module for a Maneuvering System of a Nanosatellite
NASA Astrophysics Data System (ADS)
Kumarin, A. A.; Kudryavtsev, I. A.
2018-01-01
The development of the model of the power supply module of a maneuvering system of a nanosatellite is described. The module is based on an EDLC battery as an energy buffer. The choice of the EDLC is described. Experiments were conducted to provide data for the model. The power supply module is simulated for the battery charging and discharging processes. The difference between simulation and experiment does not exceed 0.5% for charging and 10% for discharging. The developed model can be used in early design and to adjust charger and load parameters. The model can be expanded to represent the entire power system.
Self-charging of identical grains in the absence of an external field.
Yoshimatsu, R; Araújo, N A M; Wurm, G; Herrmann, H J; Shinbrot, T
2017-01-06
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study.
Self-charging of identical grains in the absence of an external field
NASA Astrophysics Data System (ADS)
Yoshimatsu, R.; Araújo, N. A. M.; Wurm, G.; Herrmann, H. J.; Shinbrot, T.
2017-01-01
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study.
Something from nothing: self-charging of identical grains
NASA Astrophysics Data System (ADS)
Shinbrot, Troy; Yoshimatsu, Ryuta; Araújo, Nuno; Wurm, Gerhard; Herrmann, Hans
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study. I acknowledge support from NSF/DMR, award 1404792.
Self-charging of identical grains in the absence of an external field
Yoshimatsu, R.; Araújo, N. A. M.; Wurm, G.; Herrmann, H. J.; Shinbrot, T.
2017-01-01
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study. PMID:28059124
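Schematically, the reported exponential amplification of infinitesimal initial charges corresponds to a linear-instability picture of the form (generic notation, not the paper's Turing model):

```latex
\frac{\mathrm{d} q_i}{\mathrm{d} t} = \sum_j A_{ij}\, q_j
\quad \Longrightarrow \quad
q(t) \sim q(0)\, e^{\lambda_{\max} t},
```

where the matrix A encodes charge exchange between contacting grains and λ_max is its largest eigenvalue; exponential growth occurs whenever Re λ_max > 0, which is the kind of condition a Turing-type analysis makes explicit.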
ERIC Educational Resources Information Center
Hitchen, Trevor; Metcalfe, Judith
1987-01-01
Describes a simulation of the results of real experiments which use different strains of Escherichia coli. Provides an inexpensive practical problem-solving exercise to aid the teaching and understanding of the Jacob and Monod model of gene regulation. (Author/CW)
The importance of wind-flux feedbacks during the November CINDY-DYNAMO MJO event
NASA Astrophysics Data System (ADS)
Riley Dellaripa, Emily; Maloney, Eric; van den Heever, Susan
2015-04-01
High-resolution, large-domain cloud resolving model (CRM) simulations probing the importance of wind-flux feedbacks to Madden-Julian Oscillation (MJO) convection are performed for the November 2011 CINDY-DYNAMO MJO event. The work is motivated by observational analysis from RAMA buoys in the Indian Ocean and TRMM precipitation retrievals that show a positive correlation between MJO precipitation and wind-induced surface fluxes, especially latent heat fluxes, during and beyond the CINDY-DYNAMO time period. Simulations are done using Colorado State University's Regional Atmospheric Modeling System (RAMS). The domain setup is oceanic and spans 1000 km x 1000 km with 1.5 km horizontal resolution and 65 stretched vertical levels centered on the location of Gan Island - one of the major CINDY-DYNAMO observation points. The model is initialized with ECMWF reanalysis and Aqua MODIS sea surface temperatures. Nudging from ECMWF reanalysis is applied at the domain periphery to encourage realistic evolution of MJO convection. The control experiment is run for the entire month of November so that both suppressed and active, as well as transitional, phases of the MJO are modeled. In the control experiment, wind-induced surface fluxes are activated through the surface bulk aerodynamic formula and allowed to evolve organically. Sensitivity experiments are done by restarting the control run one week into the simulation and controlling the wind-induced flux feedbacks. In one sensitivity experiment, wind-induced surface flux feedbacks are completely denied, while in another experiment the winds are kept constant at the control simulation's mean surface wind speed. The evolution of convection, especially on the mesoscale, is compared between the control and sensitivity simulations.
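The wind-induced surface fluxes referred to above follow from bulk aerodynamic formulae of the standard form (exchange coefficients are model-dependent):

```latex
LH = \rho_a\, L_v\, C_E\, \lvert \mathbf{U} \rvert \left( q_s - q_a \right),
\qquad
SH = \rho_a\, c_p\, C_H\, \lvert \mathbf{U} \rvert \left( T_s - T_a \right),
```

so that holding the wind speed |U| fixed, as in the second sensitivity experiment, removes the wind-driven part of the flux anomaly while retaining the thermodynamic disequilibrium terms.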
A Novel Temporal Bone Simulation Model Using 3D Printing Techniques.
Mowry, Sarah E; Jammal, Hachem; Myer, Charles; Solares, Clementino Arturo; Weinberger, Paul
2015-09-01
An inexpensive temporal bone model for use in a temporal bone dissection laboratory setting can be made using a commercially available, consumer-grade 3D printer. Several models for a simulated temporal bone have been described but use commercial-grade printers and materials to produce these models. The goal of this project was to produce a plastic simulated temporal bone on an inexpensive 3D printer that recreates the visual and haptic experience associated with drilling a human temporal bone. Images from a high-resolution CT of a normal temporal bone were converted into stereolithography files via commercially available software, with image conversion and print settings adjusted to achieve optimal print quality. The temporal bone model was printed using acrylonitrile butadiene styrene (ABS) plastic filament on a MakerBot 2x 3D printer. Simulated temporal bones were drilled by seven expert temporal bone surgeons, assessing the fidelity of the model as compared with a human cadaveric temporal bone. Using a four-point scale, the simulated bones were assessed for haptic experience and recreation of the temporal bone anatomy. The created model was felt to be an accurate representation of a human temporal bone. All raters felt strongly this would be a good training model for junior residents or to simulate difficult surgical anatomy. Material cost for each model was $1.92. A realistic, inexpensive, and easily reproducible temporal bone model can be created on a consumer-grade desktop 3D printer.
Simulation of the Effect of Realistic Space Vehicle Environments on Binary Metal Alloys
NASA Technical Reports Server (NTRS)
Westra, Douglas G.; Poirier, D. R.; Heinrich, J. C.; Sung, P. K.; Felicelli, S. D.; Phelps, Lisa (Technical Monitor)
2001-01-01
Simulations that assess the effect of space vehicle acceleration environments on the solidification of Pb-Sb alloys are reported. Space microgravity missions are designed to provide a near zero-g acceleration environment for various types of scientific experiments. Realistically, these space missions cannot provide a perfect environment. Vibrations caused by crew activity, on-board experiments, support systems (pumps, fans, etc.), periodic orbital maneuvers, and water dumps can all cause perturbations to the microgravity environment. In addition, the drag on the space vehicle is a source of acceleration. Therefore, it is necessary to predict the impact of these vibration perturbations and the steady-state drag acceleration on the experiments. These predictions can be used to design mission timelines, so that the experiment is run during times when the impact of the acceleration environment is acceptable for the experiment of interest. The simulations reported herein were conducted using a finite element model that includes mass, species, momentum, and energy conservation. This model predicts the existence of "channels" within the processing mushy zone and subsequently "freckles" within the fully processed solid, which are the effects of thermosolutal convection. It is necessary to mitigate thermosolutal convection during space experiments of metal alloys, in order to study and characterize diffusion-controlled transport phenomena (microsegregation) that are normally coupled with macrosegregation. The model allows simulation of steady-state and transient acceleration values ranging from no acceleration (0 g), to microgravity conditions (10^-6 to 10^-3 g), to terrestrial gravity conditions (1 g). The transient acceleration environments simulated were from the STS-89 SpaceHAB mission and from the STS-94 SpaceLAB mission, with on-orbit accelerometer data during different mission periods used as inputs for the simulation model. Periods of crew exercise, quiet (no crew activity), and nominal conditions from STS-89 were used as simulation inputs, as were periods of nominal, overboard water-dump, and free-drift (no orbit maneuvering operations) conditions from STS-94. Steady-state acceleration environments of 0.0 and 10^-6 to 10^-1 g were also simulated, to serve as a comparison to the transient data and to assess an acceptable magnitude for the steady-state vehicle drag.
NASA Astrophysics Data System (ADS)
Butchart, Neal; Anstey, James A.; Hamilton, Kevin; Osprey, Scott; McLandress, Charles; Bushell, Andrew C.; Kawatani, Yoshio; Kim, Young-Ha; Lott, Francois; Scinocca, John; Stockdale, Timothy N.; Andrews, Martin; Bellprat, Omar; Braesicke, Peter; Cagnazzo, Chiara; Chen, Chih-Chieh; Chun, Hye-Yeong; Dobrynin, Mikhail; Garcia, Rolando R.; Garcia-Serrano, Javier; Gray, Lesley J.; Holt, Laura; Kerzenmacher, Tobias; Naoe, Hiroaki; Pohlmann, Holger; Richter, Jadwiga H.; Scaife, Adam A.; Schenzinger, Verena; Serva, Federico; Versick, Stefan; Watanabe, Shingo; Yoshida, Kohei; Yukimoto, Seiji
2018-03-01
The Stratosphere-troposphere Processes And their Role in Climate (SPARC) Quasi-Biennial Oscillation initiative (QBOi) aims to improve the fidelity of tropical stratospheric variability in general circulation and Earth system models by conducting coordinated numerical experiments and analysis. In the equatorial stratosphere, the QBO is the most conspicuous mode of variability. Five coordinated experiments have therefore been designed to (i) evaluate and compare the verisimilitude of modelled QBOs under present-day conditions, (ii) identify robustness (or alternatively the spread and uncertainty) in the simulated QBO response to commonly imposed changes in model climate forcings (e.g. a doubling of CO2 amounts), and (iii) examine model dependence of QBO predictability. This paper documents these experiments and the recommended output diagnostics. The rationale behind the experimental design and choice of diagnostics is presented. To facilitate scientific interpretation of the results in other planned QBOi studies, consistent descriptions of the models performing each experiment set are given, with those aspects particularly relevant for simulating the QBO tabulated for easy comparison.
Finite element simulation of a novel composite light-weight microporous cladding panel
NASA Astrophysics Data System (ADS)
Tian, Lida; Wang, Dongyan
2018-04-01
A novel composite light-weight microporous cladding panel with matched connection detailing is developed. Numerical simulation of the experiment is conducted with ABAQUS. The accuracy and rationality of the finite element model are verified by comparison between the simulation and the experiment results. It is also indicated that the novel composite cladding panel exhibits desirable bearing capacity, stiffness, and deformability under out-of-plane load.
Computer Simulation of Human Service Program Evaluations.
ERIC Educational Resources Information Center
Trochim, William M. K.; Davis, James E.
1985-01-01
Describes uses of computer simulations for the context of human service program evaluation. Presents simple mathematical models for most commonly used human service outcome evaluation designs (pretest-posttest randomized experiment, pretest-posttest nonequivalent groups design, and regression-discontinuity design). Translates models into single…
NASA Technical Reports Server (NTRS)
Maiorano, Andrea; Martre, Pierre; Asseng, Senthold; Ewert, Frank; Mueller, Christoph; Roetter, Reimund P.; Ruane, Alex C.; Semenov, Mikhail A.; Wallach, Daniel; Wang, Enli
2016-01-01
To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of models needed in an MME. Herein, 15 wheat growth models of a larger MME were improved through re-parameterization and/or incorporating or modifying heat stress effects on phenology, leaf growth and senescence, biomass growth, and grain number and size using detailed field experimental data from the USDA Hot Serial Cereal experiment (calibration data set). Simulation results from before and after model improvement were then evaluated with independent field experiments from a CIMMYT worldwide field trial network (evaluation data set). Model improvements decreased the variation (10th to 90th model ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures greater than 24 °C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in MME uncertainty range by 27% increased MME prediction skills by 47%. Results suggest that the mean level of variation observed in field experiments and used as a benchmark can be reached with half the number of models in the MME. Improving crop models is therefore important to increase the certainty of model-based impact assessments and allow more practical, i.e. smaller, MMEs to be used effectively.
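A minimal sketch of the two ensemble metrics quoted above (the 10th to 90th percentile range across models and the mean squared error in simulating grain yield) is given below; the arrays are synthetic placeholders, not data from the study, and the MSE is computed here for the ensemble mean, which is only one plausible reading of the metric.

```python
import numpy as np

# Minimal sketch of two multi-model ensemble (MME) metrics: the 10th-90th
# percentile range of simulated grain yields across models, and the squared
# error of the ensemble mean against observations. The arrays below are
# synthetic placeholders, not data from the study.

rng = np.random.default_rng(2)
n_models, n_sites = 15, 30
yields = rng.normal(4.0, 1.0, size=(n_models, n_sites))   # simulated yields (t/ha)
observed = rng.normal(4.0, 0.5, size=n_sites)              # observed yields (t/ha)

# Spread of the ensemble at each site (10th to 90th percentile across models)
spread = np.percentile(yields, 90, axis=0) - np.percentile(yields, 10, axis=0)

# Mean squared error of the ensemble mean against observations
mme_mean = yields.mean(axis=0)
mse = np.mean((mme_mean - observed) ** 2)

print("mean 10th-90th percentile range:", spread.mean())
print("MME mean squared error:", mse)
```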
Multi-scale modelling of supercapacitors: From molecular simulations to a transmission line model
NASA Astrophysics Data System (ADS)
Pean, C.; Rotenberg, B.; Simon, P.; Salanne, M.
2016-09-01
We perform molecular dynamics simulations of a typical nanoporous-carbon based supercapacitor. The organic electrolyte consists of 1-ethyl-3-methylimidazolium and hexafluorophosphate ions dissolved in acetonitrile. We simulate systems at equilibrium, for various applied voltages. This allows us to determine the relevant thermodynamic (capacitance) and transport (in-pore resistivities) properties. These quantities are then injected into a transmission line model for testing its ability to predict the charging properties of the device. The results from this macroscopic model are in good agreement with non-equilibrium molecular dynamics simulations, which validates its use for interpreting electrochemical impedance experiments.
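A minimal numerical sketch of the transmission-line picture is given below: a ladder of in-pore resistances and interfacial capacitances charged through an external series resistance by a voltage step. All component values, and the use of a simple explicit time-stepping scheme, are illustrative assumptions rather than the capacitances and in-pore resistivities extracted from the molecular simulations.

```python
import numpy as np

# Minimal sketch of a transmission-line (RC ladder) model of a porous
# electrode: N pore segments, each with an in-pore resistance R and an
# interfacial capacitance C, charged through a series resistance R_ext
# by a step voltage V. All values are illustrative assumptions.

N = 50            # number of pore segments
R = 0.1           # in-pore resistance per segment (ohm)
C = 1e-3          # interfacial capacitance per segment (F)
R_ext = 1.0       # external series resistance (ohm)
V = 1.0           # applied voltage step (V)
dt = 2e-5         # time step (s), small enough for explicit stability
steps = 50_000    # total simulated time: 1 s

v = np.zeros(N)   # node (capacitor) voltages along the pore

for _ in range(steps):
    # current flowing into segment k from segment k-1 (segment "-1" is the contact)
    v_left = np.concatenate(([V], v[:-1]))
    r_left = np.concatenate(([R_ext], np.full(N - 1, R)))
    i_in = (v_left - v) / r_left
    # current flowing out of segment k toward segment k+1 (none past the last one)
    i_out = np.concatenate(((v[:-1] - v[1:]) / R, [0.0]))
    v += dt * (i_in - i_out) / C     # charge conservation on each capacitor

print("total stored charge (C):", float(np.sum(C * v)))
```

The deeper segments charge later than the pore mouth, which is the qualitative behaviour such a model is meant to capture when interpreting impedance or charging experiments.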
Tang, Yuye; Chen, Xi; Yoo, Jejoong; Yethiraj, Arun; Cui, Qiang
2010-01-01
A hierarchical simulation framework that integrates information from all-atom simulations into a finite element model at the continuum level is established to study the mechanical response of a mechanosensitive channel of large conductance (MscL) in the bacterium Escherichia coli (E. coli) embedded in a vesicle formed by the dipalmitoylphosphatidylcholine (DPPC) lipid bilayer. Sufficient structural details of the protein are built into the continuum model, with key parameters and material properties derived from molecular mechanics simulations. The multi-scale framework is used to analyze the gating of MscL when the lipid vesicle is subjected to nanoindentation and patch clamp experiments, and the detailed structural transitions of the protein are obtained explicitly as a function of external load; it is currently impossible to derive such information based solely on all-atom simulations. The gating pathways of E. coli MscL qualitatively agree with results from previous patch clamp experiments. The gating mechanisms under complex indentation-induced deformation are also predicted. This versatile hierarchical multi-scale framework may be further extended to study the mechanical behaviors of cells and biomolecules, as well as to guide and stimulate biomechanics experiments. PMID:21874098
LES/RANS Modeling of Aero-Optical Effects in a Supersonic Cavity Flow
2016-06-13
the wind tunnel is not modeled in the cavity simulation, a separate turbulent boundary layer simulation with identical free-stream conditions was ... the wind tunnel experiments were provided by Dr. Donald J. Wittich and the testbed geometries were modeled by Mr. Jeremy Stanford. Dr. Maziar Hemati ... and an auxiliary flat plate simulation is performed to replicate the effects of the wind-tunnel boundary layer on the computed optical path
DSMC simulations of the Shuttle Plume Impingement Flight EXperiment (SPIFEX)
NASA Technical Reports Server (NTRS)
Stewart, Benedicte; Lumpkin, Forrest
2017-01-01
During orbital maneuvers and proximity operations, a spacecraft fires its thrusters, inducing plume impingement loads, heating and contamination on itself and on any other nearby spacecraft. These thruster firings are generally modeled using a combination of Computational Fluid Dynamics (CFD) and DSMC simulations. The Shuttle Plume Impingement Flight EXperiment (SPIFEX) produced data that can be compared to a high fidelity simulation. Due to the size of the Shuttle thrusters, this problem was too resource-intensive to be solved with DSMC when the experiment flew in 1994.
Nursing simulation: a community experience.
Gunowa, Neesha Oozageer; Elliott, Karen; McBride, Michelle
2018-04-02
The education sector faces major challenges in providing learning experiences so that newly qualified nurses feel adequately prepared to work in a community setting. With this in mind, higher education institutions need to develop more innovative ways to deliver the community-nurse experience to student nurses. This paper presents and explores how simulation provides an opportunity for educators to support and evaluate student performance in an environment that models a complete patient encounter in the community. Following the simulation, evaluative data were collated and the answers analysed to identify key recommendations.
NASA Technical Reports Server (NTRS)
Martre, Pierre; Reynolds, Matthew P.; Asseng, Senthold; Ewert, Frank; Alderman, Phillip D.; Cammarano, Davide; Maiorano, Andrea; Ruane, Alexander C.; Aggarwal, Pramod K.; Anothai, Jakarat;
2017-01-01
The data set contains a portion of the International Heat Stress Genotype Experiment (IHSGE) data used in the AgMIP-Wheat project to analyze the uncertainty of 30 wheat crop models and quantify the impact of heat on global wheat yield productivity. It includes two spring wheat cultivars grown during two consecutive winter cropping cycles at hot, irrigated, and low latitude sites in Mexico (Ciudad Obregon and Tlaltizapan), Egypt (Aswan), India (Dharwar), the Sudan (Wad Medani), and Bangladesh (Dinajpur). Experiments in Mexico included normal (November-December) and late (January-March) sowing dates. Data include local daily weather data, soil characteristics and initial soil conditions, crop measurements (anthesis and maturity dates, anthesis and final total above ground biomass, final grain yields and yields components), and cultivar information. Simulations include both daily in-season and end-of-season results from 30 wheat models.
Report on results of current and future metal casting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unal, Cetin; Carlson, Neil N.
2015-09-28
New modeling capabilities needed to simulate the casting of metallic fuels have been added to the Truchas code. In this report we summarize improvements made in FY2015 in three areas: (1) analysis of new casting experiments conducted with the BCS and EFL designs, (2) simulation of INL's U-Zr casting experiments with the Flow3D computer program, and (3) implementation of a surface tension model in Truchas for the unstructured meshes required to run U-Zr casting simulations.
Dynamic Fracture Simulations of Explosively Loaded Cylinders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arthur, Carly W.; Goto, D. M.
2015-11-30
This report documents the modeling results of high explosive experiments investigating dynamic fracture of steel (AerMet® 100 alloy) cylinders. The experiments were conducted at Lawrence Livermore National Laboratory (LLNL) during 2007 to 2008 [10]. A principal objective of this study was to gain an understanding of dynamic material failure through the analysis of hydrodynamic computer code simulations. Two-dimensional and three-dimensional computational cylinder models were analyzed using the ALE3D multi-physics computer code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun
This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pruess, K.; Oldenburg, C.; Moridis, G.
1997-12-31
This paper summarizes recent advances in methods for simulating water and tracer injection, and presents illustrative applications to liquid- and vapor-dominated geothermal reservoirs. High-resolution simulations of water injection into heterogeneous, vertical fractures in superheated vapor zones were performed. Injected water was found to move in dendritic patterns, and to experience stronger lateral flow effects than predicted from homogeneous medium models. Higher-order differencing methods were applied to modeling water and tracer injection into liquid-dominated systems. Conventional upstream weighting techniques were shown to be adequate for predicting the migration of thermal fronts, while higher-order methods give far better accuracy for tracer transport. A new fluid property module for the TOUGH2 simulator is described which allows a more accurate description of geofluids, and includes mineral dissolution and precipitation effects with associated porosity and permeability change. Comparisons between numerical simulation predictions and data for laboratory and field injection experiments are summarized. Enhanced simulation capabilities include a new linear solver package for TOUGH2, and inverse modeling techniques for automatic history matching and optimization.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Hou, A.; Lau, W. K.; Shie, C.; Tao, W.; Lin, X.; Chou, M.; Olson, W. S.; Grecu, M.
2006-05-01
The cloud and precipitation statistics simulated by the 3D Goddard Cumulus Ensemble (GCE) model during the South China Sea Monsoon Experiment (SCSMEX) are compared with Tropical Rainfall Measuring Mission (TRMM) TMI and PR rainfall measurements and the Clouds and the Earth's Radiant Energy System (CERES) single scanner footprint (SSF) radiation and cloud retrievals. It is found that GCE is capable of simulating major convective system development and reproducing the total surface rainfall amount as compared with rainfall estimated from the soundings. Mesoscale organization is adequately simulated except when environmental wind shear is very weak. The partitions between convective and stratiform rain are also close to the TMI and PR classifications. However, the model-simulated rain spectrum is quite different from either TMI or PR measurements. The model produces more heavy rains and light rains (less than 0.1 mm/hr) than the observations. The model also produces heavier vertical hydrometeor profiles of rain and graupel when compared with TMI retrievals and PR radar reflectivity. Comparing GCE-simulated OLR and cloud properties with CERES measurements shows that the model has a much larger domain-averaged OLR, due to a smaller total cloud fraction and a more skewed distribution of OLR and cloud top than in the CERES observations, indicating that the model's cloud field is not widespread, consistent with the model's precipitation activity. These results will be used as guidance for improving the model's microphysics.
Modeling of rock friction 2. Simulation of preseismic slip
Dieterich, J.H.
1979-01-01
The constitutive relations developed in the companion paper are used to model detailed observations of preseismic slip and the onset of unstable slip in biaxial laboratory experiments. The simulations employ a deterministic plane strain finite element model to represent the interactions both within the sliding blocks and between the blocks and the loading apparatus. Both experiments and simulations show that preseismic slip is controlled by initial inhomogeneity of shear stress along the sliding surface relative to the frictional strength. As a consequence of the inhomogeneity, stable slip begins at a point on the surface and the area of slip slowly expands as the external loading increases. A previously proposed correlation between accelerating rates of stable slip and growth of the area of slip is supported by the simulations. In the simulations and in the experiments, unstable slip occurs shortly after a propagating slip event traverses the sliding surface and breaks out at the ends of the sample. In the model the breakout of stable slip causes a sudden acceleration of slip rates. Because of the velocity dependence of the constitutive relationship for friction, the rapid acceleration of slip causes a decrease in frictional strength. Instability occurs when the frictional strength decreases with displacement at a rate that exceeds the intrinsic unloading characteristics of the sample and test machine. A simple slider-spring model that does not consider preseismic slip appears to adequately approximate the transition from stable sliding to unstable slip as a function of normal stress, machine stiffness, and surface roughness for small samples. However, for large samples and for natural faults the simulations suggest that the simple model may be inaccurate because it does not take into account potentially large preseismic displacements that will alter the friction parameters prior to instability. Copyright © 1979 by the American Geophysical Union.
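The instability condition described in the closing sentences can be stated compactly: slip becomes unstable when the frictional strength τ_f weakens with slip δ faster than the elastic system (sample plus testing machine, with effective stiffness k per unit fault area) can unload, i.e.

```latex
-\,\frac{\partial \tau_f}{\partial \delta} \;>\; k .
```

Because the friction law is velocity dependent, rapid preseismic slip lowers τ_f and can push the system across this threshold, which is the transition from stable sliding to instability that the simulations reproduce.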
Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore
2014-04-01
Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model supports completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype by environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear or quadratic pattern in time, possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore
2014-01-01
Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model supports completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype by environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear or quadratic pattern in time, possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided. PMID:24834325
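A minimal sketch of the kind of data generator described above is given below: zero-inflated negative binomial counts for a randomized complete block trial comparing a GM variety with its comparator. The treatment means, dispersion, excess-zero probability and block-effect variance are illustrative assumptions, not values from the paper or its Supplementary Material, and the repeated-measures and multi-trial features are omitted.

```python
import numpy as np

# Minimal sketch: zero-inflated negative binomial counts for a randomized
# complete block field trial with two treatments (GM vs. comparator).
# All parameter values below are illustrative assumptions.

rng = np.random.default_rng(3)

n_blocks = 8
mean_count = {"GM": 12.0, "comparator": 10.0}   # expected counts per plot
dispersion = 2.0                                 # negative binomial size parameter
p_zero = 0.2                                     # excess-zero probability
block_sd = 0.3                                   # block effect on the log scale

block_effect = rng.normal(0.0, block_sd, size=n_blocks)

rows = []
for b in range(n_blocks):
    for variety, mu0 in mean_count.items():
        mu = mu0 * np.exp(block_effect[b])       # block-adjusted mean
        p = dispersion / (dispersion + mu)       # numpy's NB parameterization
        count = rng.negative_binomial(dispersion, p)
        if rng.random() < p_zero:                # structural (excess) zero
            count = 0
        rows.append((b, variety, count))

for row in rows[:6]:                             # show a few simulated plots
    print(row)
```

Repeating such a generator many times and applying the intended difference or equivalence test to each replicate is the essence of a prospective power analysis.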
NASA Astrophysics Data System (ADS)
Rakesh, V.; Kantharao, B.
2017-03-01
Data assimilation is considered one of the effective tools for improving the forecast skill of mesoscale models. However, for optimum utilization and effective assimilation of observations, many factors need to be taken into account while designing the data assimilation methodology. One of the critical components that determines the amount and propagation of observation information into the analysis is the model background error statistics (BES). The objective of this study is to quantify how the BES used in data assimilation impact the simulation of heavy rainfall events over a southern state in India, Karnataka. Simulations of 40 heavy rainfall events were carried out using the Weather Research and Forecasting Model with and without data assimilation. The assimilation experiments were conducted using global and regional BES, while the experiment with no assimilation was used as the baseline for assessing the impact of data assimilation. The simulated rainfall is verified against high-resolution rain-gage observations over Karnataka. Statistical evaluation using several accuracy and skill measures shows that data assimilation has improved the heavy rainfall simulation. Our results showed that the experiment using regional BES outperformed the one which used global BES. Critical thermodynamic variables conducive for heavy rainfall, such as convective available potential energy, simulated using regional BES are more realistic compared to global BES. It is pointed out that these results have important practical implications for the design of forecast platforms and for decision-making during extreme weather events.
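The role played by the background error statistics is most easily seen in the variational cost function minimized by the assimilation, written here in generic 3D-Var form (not necessarily the exact formulation used in the WRF assimilation system):

```latex
J(\mathbf{x})
  = \tfrac{1}{2} \left( \mathbf{x} - \mathbf{x}_b \right)^{\mathsf T}
    \mathbf{B}^{-1} \left( \mathbf{x} - \mathbf{x}_b \right)
  + \tfrac{1}{2} \left( \mathbf{y} - H(\mathbf{x}) \right)^{\mathsf T}
    \mathbf{R}^{-1} \left( \mathbf{y} - H(\mathbf{x}) \right),
```

where x_b is the background state, y the observations, H the observation operator, and R the observation error covariance; B determines how far, and in what structures, observation increments spread into the analysis, which is why replacing global with regional B changes the simulated rainfall.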
USDA-ARS?s Scientific Manuscript database
The data set contains a portion of the International Heat Stress Genotype Experiment (IHSGE) data used in the AgMIP-Wheat project to analyze the uncertainty of 30 wheat crop models and quantify the impact of heat on global wheat yield productivity. It includes two spring wheat cultivars grown during...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asay-Davis, Xylar S.; Cornford, Stephen L.; Durand, Gaël
Coupled ice sheet-ocean models capable of simulating moving grounding lines are just becoming available. Such models have a broad range of potential applications in studying the dynamics of marine ice sheets and tidewater glaciers, from process studies to future projections of ice mass loss and sea level rise. The Marine Ice Sheet-Ocean Model Intercomparison Project (MISOMIP) is a community effort aimed at designing and coordinating a series of model intercomparison projects (MIPs) for model evaluation in idealized setups, model verification based on observations, and future projections for key regions of the West Antarctic Ice Sheet (WAIS). Here we describe computational experiments constituting three interrelated MIPs for marine ice sheet models and regional ocean circulation models incorporating ice shelf cavities. These consist of ice sheet experiments under the Marine Ice Sheet MIP third phase (MISMIP+), ocean experiments under the Ice Shelf-Ocean MIP second phase (ISOMIP+) and coupled ice sheet-ocean experiments under the MISOMIP first phase (MISOMIP1). All three MIPs use a shared domain with idealized bedrock topography and forcing, allowing the coupled simulations (MISOMIP1) to be compared directly to the individual component simulations (MISMIP+ and ISOMIP+). The experiments, which have qualitative similarities to Pine Island Glacier Ice Shelf and the adjacent region of the Amundsen Sea, are designed to explore the effects of changes in ocean conditions, specifically the temperature at depth, on basal melting and ice dynamics. In future work, differences between model results will form the basis for the evaluation of the participating models.
Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger
2016-05-01
A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107], which was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking depending on the tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and German Matrix sentence test reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and obtain objective thresholds with fewer assumptions compared to traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence predict, empirical data from the literature. Across-frequency processing was found to be crucial for accurately modeling the lower speech reception threshold in modulated noise conditions compared with stationary noise conditions.
Assessment of the viscoelastic mechanical properties of polycarbonate urethane for medical devices.
Beckmann, Agnes; Heider, Yousef; Stoffel, Marcus; Markert, Bernd
2018-06-01
The underlying research work introduces a study of the mechanical properties of polycarbonate urethane (PCU), used in the construction of various medical devices. This comprises the discussion of a suitable material model, the application of elemental experiments to identify the related parameters and the numerical simulation of the applied experiments in order to calibrate and validate the mathematical model. In particular, the model of choice for the simulation of PCU response is the non-linear viscoelastic Bergström-Boyce material model, applied in the finite-element (FE) package Abaqus®. For the parameter identification, uniaxial tension and unconfined compression tests under in-laboratory physiological conditions were carried out. The geometry of the samples, together with the applied loadings, was simulated in Abaqus® to ensure the suitability of the modelling approach. With the obtained parameters, the numerical and experimental results show very good agreement. Copyright © 2018 Elsevier Ltd. All rights reserved.
Aerodynamic Simulation of the MARINTEK Braceless Semisubmersible Wave Tank Tests
NASA Astrophysics Data System (ADS)
Stewart, Gordon; Muskulus, Michael
2016-09-01
Model-scale experiments of floating offshore wind turbines are important both for platform design for the industry and for numerical model validation for the research community. An important consideration in the wave tank testing of offshore wind turbines is scaling effects, especially the tension between accurate scaling of hydrodynamic and aerodynamic forces. The recent MARINTEK braceless semisubmersible wave tank experiment utilizes a novel aerodynamic force actuator to decouple the scaling of the aerodynamic forces. This actuator consists of an array of motors that pull on cables to provide aerodynamic forces that are calculated by a blade-element momentum code in real time as the experiment is conducted. This type of system has the advantage of supplying realistically scaled aerodynamic forces that include dynamic forces from platform motion, but does not provide the insights into the accuracy of the aerodynamic models that an actual model-scale rotor could provide. The modeling of this system presents an interesting challenge, as there are two ways to simulate the aerodynamics: either by using the turbulent wind fields as inputs to the aerodynamic model of the design code, or by bypassing the aerodynamic model and using the forces applied to the experimental turbine as direct inputs to the simulation. This paper investigates the best practices of modeling this type of novel aerodynamic actuator using a modified wind turbine simulation tool, and demonstrates that bypassing the dynamic aerodynamics solver of design codes can lead to erroneous results.
ERIC Educational Resources Information Center
Carey, Cayelan C.; Gougis, Rebekka Darner
2017-01-01
Ecosystem modeling is a critically important tool for environmental scientists, yet is rarely taught in undergraduate and graduate classrooms. To address this gap, we developed a teaching module that exposes students to a suite of modeling skills and tools (including computer programming, numerical simulation modeling, and distributed computing)…
Piloted Evaluation of a UH-60 Mixer Equivalent Turbulence Simulation Model
NASA Technical Reports Server (NTRS)
Lusardi, Jeff A.; Blanken, Chris L.; Tischeler, Mark B.
2002-01-01
A simulation study of a recently developed hover/low speed Mixer Equivalent Turbulence Simulation (METS) model for the UH-60 Black Hawk helicopter was conducted in the NASA Ames Research Center Vertical Motion Simulator (VMS). The experiment was a continuation of previous work to develop a simple, but validated, turbulence model for hovering rotorcraft. To validate the METS model, two experienced test pilots replicated precision hover tasks that had been conducted in an instrumented UH-60 helicopter in turbulence. Objective simulation data were collected for comparison with flight test data, and subjective data were collected that included handling qualities ratings and pilot comments for increasing levels of turbulence. Analyses of the simulation results show good analytic agreement between the METS model and flight test data, with favorable pilot perception of the simulated turbulence. Precision hover tasks were also repeated using the more complex rotating-frame SORBET (Simulation Of Rotor Blade Element Turbulence) model to generate turbulence. Comparisons of the empirically derived METS model with the theoretical SORBET model show good agreement providing validation of the more complex blade element method of simulating turbulence.
NASA Astrophysics Data System (ADS)
Ballarotta, M.; Brodeau, L.; Brandefelt, J.; Lundberg, P.; Döös, K.
2013-01-01
Most state-of-the-art climate models include a coarsely resolved oceanic component, which has difficulties in capturing detailed dynamics, and therefore eddy-permitting/eddy-resolving simulations have been developed to reproduce the observed World Ocean. In this study, an eddy-permitting numerical experiment is conducted to simulate the global ocean state for a period of the Last Glacial Maximum (LGM, ~ 26 500 to 19 000 yr ago) and to investigate the improvements due to taking into account these higher spatial scales. The ocean general circulation model is forced by a 49-yr sample of LGM atmospheric fields constructed from a quasi-equilibrated climate-model simulation. The initial state and the bottom boundary condition conform to the Paleoclimate Modelling Intercomparison Project (PMIP) recommendations. Before evaluating the model efficiency in representing the paleo-proxy reconstruction of the surface state, the LGM experiment is, in this first part of the investigation, compared with a present-day eddy-permitting hindcast simulation as well as with the available PMIP results. It is shown that the LGM eddy-permitting simulation is consistent with the quasi-equilibrated climate-model simulation, but large discrepancies are found with the PMIP model analyses, probably due to the different equilibration states. The strongest meridional gradients of the sea-surface temperature are located near 40° N and S, owing to the particularly large North-Atlantic and Southern-Ocean sea-ice covers. These also modify the locations of the convection sites (where deep water forms) and most of the LGM Conveyor Belt circulation consequently takes place in a thinner layer than today. Despite some discrepancies with other LGM simulations, a glacial state is captured and the eddy-permitting simulation undertaken here yielded a useful set of data for comparisons with paleo-proxy reconstructions.
Leblanc, Fabien; Senagore, Anthony J; Ellis, Clyde N; Champagne, Bradley J; Augestad, Knut M; Neary, Paul C; Delaney, Conor P
2010-01-01
The aim of this study was to compare a simulator with the human cadaver model for hand-assisted laparoscopic colorectal skills acquisition training. An observational prospective comparative study was conducted to compare the laparoscopic surgery training models. The study took place during the laparoscopic colectomy training course performed at the annual scientific meeting of the American Society of Colon and Rectal Surgeons. Thirty-four practicing surgeons performed hand-assisted laparoscopic sigmoid colectomy on human cadavers (n = 7) and on an augmented reality simulator (n = 27). Prior laparoscopic colorectal experience was assessed. Trainers and trainees completed independently objective structured assessment forms. Training models were compared by trainees' technical skills scores, events scores, and satisfaction. Prior laparoscopic experience was similar in both surgeon groups. Generic and specific skills scores were similar on both training models. Generic events scores were significantly better on the cadaver model. The 2 most frequent generic events occurring on the simulator were poor hand-eye coordination and inefficient use of retraction. Specific events were scored better on the simulator and reached the significance limit (p = 0.051) for trainers. The specific events occurring on the cadaver were intestinal perforation and left ureter identification difficulties. Overall satisfaction was better for the cadaver than for the simulator model (p = 0.009). With regard to skills scores, the augmented reality simulator had adequate qualities for hand-assisted laparoscopic colectomy training. Nevertheless, events scores highlighted weaknesses of the anatomical replication on the simulator. Although improvements likely will be required to incorporate the simulator more routinely into colorectal training, it may be useful in its current form for more junior trainees or those early on their learning curve. Copyright 2010 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
User's guide for the IEBT application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartoletti, T
INFOSEC Experience-Based Training (IEBT) is a simulation and modeling approach to education in the arena of information security issues and its application to system-specific operations. The IEBT philosophy is that "Experience is the Best Teacher". This approach to computer-based training aims to bridge the gap between unappealing "read the text, answer the questions" types of training (largely a test of short-term memory), and the far more costly, time-consuming and inconvenient "real hardware" laboratory experience. Simulation and modeling supports this bridge by allowing the critical or salient features to be exercised while avoiding those aspects of a real world experience unrelated to the training goal.
NASA Astrophysics Data System (ADS)
Basirat, Farzad; Perroud, Hervé; Lofi, Johanna; Denchik, Nataliya; Lods, Gérard; Fagerlund, Fritjof; Sharma, Prabhakar; Pezard, Philippe; Niemi, Auli
2015-04-01
In this study, the TOUGH2/EOS7CA model is used to simulate the shallow injection-monitoring experiment carried out at Maguelone, France, during 2012 and 2013. The possibility of CO2 leakage from a storage reservoir to upper layers is one of the issues that need to be addressed in CCS projects. Developing reliable monitoring techniques to detect and characterize CO2 leakage is necessary for the safety of CO2 storage in reservoir formations. To test and cross-validate different monitoring techniques, a series of shallow gas injection-monitoring experiments (SIMEx) has been carried out at the Maguelone site. The experimental site is documented in Lofi et al. [2013]. At the site, a series of nitrogen injection experiments and one CO2 injection experiment have been carried out during 2012-2013 and different monitoring techniques have been applied. The purpose of the modelling is to acquire understanding of the system performance as well as to further develop and validate modelling approaches for gas transport in the shallow subsurface against the well-controlled data sets. A preliminary simulation of the experiment, including the simulation of the nitrogen injection test in 2012, was presented in Basirat et al. [2013]. In this work, the simulations reproduce the gaseous CO2 distribution and dissolved CO2 within the range obtained by the monitoring approaches. Multiphase modelling in combination with geophysical monitoring can be used for process understanding of gas-phase migration and mass-transfer processes resulting from gaseous CO2 injection. Basirat, F., A. Niemi, H. Perroud, J. Lofi, N. Denchik, G. Lods, P. Pezard, P. Sharma, and F. Fagerlund (2013), Modeling Gas Transport in the Shallow Subsurface in Maguelone Field Experiment, Energy Procedia, 40, 337-345. Lofi, J., P. Pezard, F. Bouchette, O. Raynal, P. Sabatier, N. Denchik, A. Levannier, L. Dezileau, and R. Certain (2013), Integrated Onshore-Offshore Investigation of a Mediterranean Layered Coastal Aquifer, Groundwater, 51(4), 550-561.
Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization
Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.
2014-01-01
Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
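To make the two best-fitting forms mentioned above concrete, the following minimal Python sketch evaluates the Prelec-2 and Linear in Log Odds weighting functions on a probability grid; the parameter values are illustrative placeholders, not the estimates reported in the study.

```python
# Hedged sketch: two candidate probability weighting functions (Prelec-2 and
# Linear in Log Odds) evaluated on a probability grid with illustrative parameters.
import numpy as np

def prelec_2(p, gamma, delta):
    """Two-parameter Prelec weighting: w(p) = exp(-delta * (-ln p)**gamma)."""
    return np.exp(-delta * (-np.log(p)) ** gamma)

def linear_in_log_odds(p, gamma, delta):
    """Linear-in-log-odds weighting: w(p) = delta*p**gamma / (delta*p**gamma + (1-p)**gamma)."""
    return delta * p ** gamma / (delta * p ** gamma + (1.0 - p) ** gamma)

p = np.linspace(0.01, 0.99, 99)
w1 = prelec_2(p, gamma=0.7, delta=1.0)
w2 = linear_in_log_odds(p, gamma=0.6, delta=0.8)
print(float(np.max(np.abs(w1 - w2))))   # largest pointwise difference between the two forms
```

Because both curves are inverse S-shaped, their predictions differ only for carefully chosen gambles, which is precisely the gap that adaptive design optimization exploits.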
Modeling Simple Driving Tasks with a One-Boundary Diffusion Model
Ratcliff, Roger; Strayer, David
2014-01-01
A one-boundary diffusion model was applied to the data from two experiments in which subjects were performing a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks which suggests common component processes were being tapped in the three tasks. The model was also fit to a distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
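A minimal sketch of how a one-boundary diffusion process generates response-time distributions is given below; the drift, boundary, noise, and non-decision-time values are illustrative, not the parameters fitted to the driving or PVT data.

```python
# Hedged sketch of a one-boundary diffusion model: evidence accumulates with
# drift v and noise s until it reaches boundary a; RT = first-passage time + Ter.
import numpy as np

def simulate_rts(v=1.2, a=1.0, ter=0.3, s=1.0, dt=0.001, n=5000, t_max=5.0, seed=0):
    rng = np.random.default_rng(seed)
    rts = []
    for _ in range(n):
        x, t = 0.0, 0.0
        while x < a and t < t_max:
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        if x >= a:                       # only trials that reach the boundary produce an RT
            rts.append(t + ter)
    return np.asarray(rts)

rts = simulate_rts()
print(rts.mean(), np.percentile(rts, [10, 50, 90]))   # right-skewed RT distribution
```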
Miller Neilan, Rachael; Rose, Kenneth
2014-02-21
Individuals are commonly exposed to fluctuating levels of stressors, while most laboratory experiments focus on constant exposures. We develop and test a mathematical model for predicting the effects of low dissolved oxygen (hypoxia) on growth, reproduction, and survival using laboratory experiments on fish and shrimp. The exposure-effects model simulates the hourly reductions in growth and survival, and the reduction in reproduction (fecundity) at times of spawning, of an individual as it is exposed to constant or hourly fluctuating dissolved oxygen (DO) concentrations. The model was applied to seven experiments involving fish and shrimp that included constant and fluctuating DO exposures, with constant exposures used for parameter estimation and the model then used to simulate the growth, reproduction, and survival in the fluctuating treatments. Cumulative effects on growth, reproduction, and survival were predicted well by the model, but the model did not replay the observed episodic low survival days. Further investigation should involve the role of acclimation, possible inclusion of repair effects in reproduction and survival, and the sensitivity of model predictions to the shape of the immediate effects function. Additional testing of the model with other taxa, different patterns of fluctuating exposures, and different stressors is needed to determine the model's generality and robustness. © 2013 Elsevier Ltd. All rights reserved.
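The hourly bookkeeping described above can be illustrated with a toy sketch; the dose-response functions and parameter values below are invented placeholders standing in for the fitted exposure-effects relationships of the paper.

```python
# Hedged sketch: hourly DO below a threshold reduces growth and survival
# multiplicatively; the effect functions are illustrative placeholders only.
import numpy as np

def immediate_effect(do_mgL, do_crit=2.0):
    """Fractional reduction (0-1) applied for an hour at the given DO."""
    return float(np.clip((do_crit - do_mgL) / do_crit, 0.0, 1.0))

def run_exposure(do_series, growth_rate=0.02, hourly_mortality=0.001):
    growth, survival = 0.0, 1.0
    for do in do_series:                       # one DO value per hour
        e = immediate_effect(do)
        growth += growth_rate * (1.0 - e) / 24.0
        survival *= 1.0 - hourly_mortality * (1.0 + 10.0 * e)
    return growth, survival

# Fluctuating exposure: DO cycling between 1.5 and 6 mg/L over a 10-day test.
hours = np.arange(240)
do = 3.75 + 2.25 * np.sin(2 * np.pi * hours / 24.0)
print(run_exposure(do))
```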
Modeling and Simulation of Shuttle Launch and Range Operations
NASA Technical Reports Server (NTRS)
Bardina, Jorge; Thirumalainambi, Rajkumar
2004-01-01
The simulation and modeling test bed is based on a mockup of space flight operations control suitable for experimenting with physical, procedural, software, hardware and psychological aspects of space flight operations. The test bed consists of a weather expert system to advise on the effect of weather on the launch operations. It also includes a toxic gas dispersion model, a human health risk impact model, and a debris dispersion model with 3D visualization. Since all modeling and simulation is internet-based, it could reduce the cost of launch and range safety operations by enabling extensive research before a particular launch. Each model has an independent decision making module to derive the best decision for launch.
Integration of MATLAB Simulink(Registered Trademark) Models with the Vertical Motion Simulator
NASA Technical Reports Server (NTRS)
Lewis, Emily K.; Vuong, Nghia D.
2012-01-01
This paper describes the integration of MATLAB Simulink(Registered TradeMark) models into the Vertical Motion Simulator (VMS) at NASA Ames Research Center. The VMS is a high-fidelity, large motion flight simulator that is capable of simulating a variety of aerospace vehicles. Integrating MATLAB Simulink models into the VMS needed to retain the development flexibility of the MATLAB environment and allow rapid deployment of model changes. The process developed at the VMS was used successfully in a number of recent simulation experiments. This accomplishment demonstrated that the model integrity was preserved, while working within the hard real-time run environment of the VMS architecture, and maintaining the unique flexibility of the VMS to meet diverse research requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tonks, M. R.; Biner, S. B.; Mille, P. C.
2013-07-01
In this work, we used the phase field method to simulate the post-irradiation annealing of UO2 described in the experimental work by Kashibe et al., 1993 [1]. The simulations were carried out in 2D and 3D using the MARMOT FEM-based phase-field modeling framework. The 2D results compared fairly well with the experiments, in spite of the assumptions made in the model. The 3D results compare even more favorably to experiments, indicating that diffusion in all three directions must be considered to accurately represent the bubble growth. (authors)
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Chao, Winston C.; Walker, G. K.
1992-01-01
The influence of a cumulus convection scheme on the simulated atmospheric circulation and hydrologic cycle is investigated by means of a coarse version of the GCM. Two sets of integrations, each containing an ensemble of three summer simulations, were produced. The ensemble sets of control and experiment simulations are compared and differentially analyzed to determine the influence of a cumulus convection scheme on the simulated circulation and hydrologic cycle. The results show that cumulus parameterization has a very significant influence on the simulated circulation and precipitation. The upper-level condensation heating over the ITCZ is much smaller for the experiment simulations as compared to the control simulations; correspondingly, the Hadley and Walker cells for the control simulations are also weaker and are accompanied by a weaker Ferrel cell in the Southern Hemisphere. Overall, the difference fields show that experiment simulations (without cumulus convection) produce a cooler and less energetic atmosphere.
Espinosa, G; Rodríguez, R; Gil, J M; Suzuki-Vidal, F; Lebedev, S V; Ciardi, A; Rubiano, J G; Martel, P
2017-03-01
Numerical simulations of laboratory astrophysics experiments on plasma flows require plasma microscopic properties that are obtained by means of an atomic kinetic model. This fact implies a careful choice of the most suitable model for the experiment under analysis. Otherwise, the calculations could lead to inaccurate results and inappropriate conclusions. First, a study of the validity of the local thermodynamic equilibrium in the calculation of the average ionization, mean radiative properties, and cooling times of argon plasmas in a range of plasma conditions of interest in laboratory astrophysics experiments on radiative shocks is performed in this work. In the second part, we have made an analysis of the influence of the atomic kinetic model used to calculate plasma microscopic properties of experiments carried out on magpie on radiative bow shocks propagating in argon. The models considered were developed assuming both local and nonlocal thermodynamic equilibrium and, for the latter situation, we have considered in the kinetic model different effects such as external radiation field and plasma mixture. The microscopic properties studied were the average ionization, the charge state distributions, the monochromatic opacities and emissivities, the Planck mean opacity, and the radiative power loss. The microscopic study was made as a postprocess of a radiative-hydrodynamic simulation of the experiment. We have also performed a theoretical analysis of the influence of these atomic kinetic models in the criteria for the onset possibility of thermal instabilities due to radiative cooling in those experiments in which small structures were experimentally observed in the bow shock that could be due to this kind of instability.
Analysis of Waves in Space Plasma (WISP) near field simulation and experiment
NASA Technical Reports Server (NTRS)
Richie, James E.
1992-01-01
The WISP payload, scheduled for a 1995 space transportation system (shuttle) flight, will include a large power transmitter on board operating at a wide range of frequencies. The levels of electromagnetic interference/electromagnetic compatibility (EMI/EMC) must be addressed to ensure the safety of the shuttle crew. This report is concerned with the simulation and experimental verification of EMI/EMC for the WISP payload in the shuttle cargo bay. The simulations have been carried out using the method of moments for both thin wires and patches to simulate closed solids. Data obtained from simulation are compared with experimental results. An investigation of the accuracy of the modeling approach is also included. The report begins with a description of the WISP experiment. A description of the model used to simulate the cargo bay follows. The results of the simulation are compared to experimental data on the input impedance of the WISP antenna with the cargo bay present. A discussion of the methods used to verify the accuracy of the model is included to illustrate appropriate methods for obtaining this information. Finally, suggestions for future work are provided.
Wake meandering of a model wind turbine operating in two different regimes
NASA Astrophysics Data System (ADS)
Foti, Daniel; Yang, Xiaolei; Campagnolo, Filippo; Maniaci, David; Sotiropoulos, Fotis
2018-05-01
The flow behind a model wind turbine under two different turbine operating regimes (region 2 for turbine operating at optimal condition with the maximum power coefficient and 1.4-deg pitch angle and region 3 for turbine operating at suboptimal condition with a lower power coefficient and 7-deg pitch angle) is investigated using wind tunnel experiments and numerical experiments using large-eddy simulation (LES) with actuator surface models for turbine blades and nacelle. Measurements from the model wind turbine experiment reveal that the power coefficient and turbine wake are affected by the operating regime. Simulations with and without a nacelle model are carried out for each operating condition to study the influence of the operating regime and nacelle on the formation of the hub vortex and wake meandering. Statistics and energy spectra of the simulated wakes are in good agreement with the measurements. For simulations with a nacelle model, the mean flow field is composed of an outer wake, caused by energy extraction by turbine blades, and an inner wake directly behind the nacelle, while for the simulations without a nacelle model, the central region of the wake is occupied by a jet. The simulations with the nacelle model reveal an unstable helical hub vortex expanding outward toward the outer wake, while the simulations without a nacelle model show a stable and columnar hub vortex. Because of the different interactions of the inner region of the wake with the outer region of the wake, a region with higher turbulence intensity is observed in the tip shear layer for the simulation with a nacelle model. The hub vortex for the turbine operating in region 3 remains in a tight helical spiral and intercepts the outer wake a few diameters further downstream than for the turbine operating in region 2. Wake meandering, a low-frequency large-scale motion of the wake, commences in the region of high turbulence intensity for all simulations with and without a nacelle model, indicating that neither a nacelle model nor an unstable hub vortex is a necessary requirement for the existence of wake meandering. However, further analysis of the wake meandering and instantaneous flow field using a filtering technique and dynamic mode decomposition show that the unstable hub vortex energizes the wake meandering. The turbine operating regime affects the shape and expansion of the hub vortex, altering the location of the onset of the wake meandering and wake meander oscillating intensity. Most important, the unstable hub vortex promotes a high-amplitude energetic meandering which cannot be predicted without a nacelle model.
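Dynamic mode decomposition, one of the analysis tools mentioned above, can be sketched in a few lines of Python; the snapshot matrix here is synthetic travelling-wave data standing in for the LES wake fields.

```python
# Hedged sketch of exact dynamic mode decomposition (DMD) applied to synthetic
# snapshot data sampled at uniform time intervals.
import numpy as np

def dmd(snapshots, rank):
    """snapshots: (n_points, n_times) array sampled at uniform intervals."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T
    A_tilde = U.conj().T @ Y @ V / s          # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V / s @ W                     # exact DMD modes
    return eigvals, modes

t = np.linspace(0, 10, 201)                   # uniform sampling, dt = 0.05
x = np.linspace(0, 1, 64)[:, None]
data = np.sin(2 * np.pi * x - 3.0 * t) + 0.3 * np.cos(5 * np.pi * x + 7.0 * t)
eigvals, modes = dmd(data, rank=4)
print(np.abs(eigvals))                        # close to 1: neutrally stable oscillatory modes
print(np.angle(eigvals) / 0.05)               # recovers the two frequencies, ~±3 and ~±7 rad/s
```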
Quantifying the effect of varying GHG's concentration in Regional Climate Models
NASA Astrophysics Data System (ADS)
López-Romero, Jose Maria; Jerez, Sonia; Palacios-Peña, Laura; José Gómez-Navarro, Juan; Jiménez-Guerrero, Pedro; Montavez, Juan Pedro
2017-04-01
Regional Climate Models (RCMs) are driven at the boundaries by Global Circulation Models (GCMs), and in the particular case of climate change projections, such simulations are forced by varying greenhouse gas (GHG) concentrations. In hindcast simulations driven by reanalysis products, the climate change signal is usually introduced in the assimilation process as well. An interesting question arising in this context is whether GHG concentrations have to be varied within the RCM itself, or rather should be kept constant. Some groups keep the GHG concentrations constant under the assumption that information about the climate change signal is provided through the boundaries; sometimes certain radiation parameterization schemes do not permit such changes. Other approaches vary these concentrations, arguing that this preserves the physical coherence with respect to the driving conditions for the RCM. This work aims to shed light on this topic. For this task, various regional climate simulations with the WRF model for the 1954-2004 period have been carried out using a Euro-CORDEX-compliant domain. A series of simulations with constant and variable GHGs have been performed using both GCM (ECHAM6-OM) and reanalysis (ERA-20C) data. Results indicate that there are noticeable differences when introducing varying GHG concentrations within the RCM domain. The differences in 2-m temperature series between the experiments with varying or constant GHG concentrations strongly depend on the atmospheric conditions, with a strong interannual variability appearing. This suggests that short-term experiments are not recommended if the aim is to assess the role of varying GHGs. In addition, and consistently in both GCM- and reanalysis-driven experiments, the magnitude of temperature trends, as well as the spatial pattern represented by the varying-GHGs experiment, are closer to the driving dataset than in experiments keeping the GHG concentration constant. These results point towards the need to include varying GHG concentrations within the RCM itself when dynamically downscaling global datasets, in both GCM and hindcast simulations.
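The trend comparison described above amounts to fitting linear trends to annual-mean 2-m temperature series from the varying-GHG and constant-GHG experiments; a minimal sketch with synthetic stand-in series (not the WRF output) is given below.

```python
# Hedged sketch: least-squares linear trends for two synthetic annual-mean
# 2-m temperature series standing in for the 1954-2004 simulations.
import numpy as np

def linear_trend(years, series):
    """Return the slope (units per decade) of an ordinary least-squares fit."""
    slope, _ = np.polyfit(years, series, 1)
    return 10.0 * slope

years = np.arange(1954, 2005)
rng = np.random.default_rng(1)
t2m_varying = 287.0 + 0.015 * (years - 1954) + rng.normal(0, 0.25, years.size)
t2m_constant = 287.0 + 0.004 * (years - 1954) + rng.normal(0, 0.25, years.size)

print("varying GHG trend  (K/decade):", round(linear_trend(years, t2m_varying), 3))
print("constant GHG trend (K/decade):", round(linear_trend(years, t2m_constant), 3))
```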
A Microcomputer Program that Simulates the Baumol-Tobin Transactions Demand for Money.
ERIC Educational Resources Information Center
Beckman, Steven
1987-01-01
This article describes an economic model dealing with the demand for money and a microcomputer program which enables students to experiment with cash management techniques. By simulating personal experiences, the program teaches how changes in income, interest rates, and charges for exchanging bonds and cash affect money demand. (Author/JDH)
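The cash-management trade-off such a program simulates follows the classical Baumol-Tobin square-root rule; a minimal sketch with illustrative numbers (not taken from the article) is given below.

```python
# Hedged sketch of the Baumol-Tobin square-root rule: average money holdings
# M* = sqrt(b*Y / (2*i)), where Y is income spent evenly over the period,
# i the interest rate on bonds, and b the fixed cost of each bond-cash exchange.
from math import sqrt

def money_demand(income, interest_rate, exchange_cost):
    return sqrt(exchange_cost * income / (2.0 * interest_rate))

base = money_demand(income=24000, interest_rate=0.05, exchange_cost=2.0)
higher_i = money_demand(income=24000, interest_rate=0.10, exchange_cost=2.0)
print(base, higher_i)   # doubling the interest rate lowers average cash holdings
```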
NASA Technical Reports Server (NTRS)
Borner, A.; Swaminathan-Gopalan, K.; Stephani, Kelly; Poovathingal, S.; Murray, V. J.; Minton, T. K.; Panerai, F.; Mansour, N. N.
2017-01-01
A collaborative effort between the University of Illinois at Urbana-Champaign (UIUC), NASA Ames Research Center (ARC) and Montana State University (MSU) succeeded at developing a new finite-rate carbon oxidation model from molecular beam scattering experiments on vitreous carbon (VC). We now aim to use the direct simulation Monte Carlo (DSMC) code SPARTA to apply the model to each fiber of the porous fibrous Thermal Protection Systems (TPS) material FiberForm (FF). The detailed micro-structure of FF was obtained from X-ray micro-tomography and then used in DSMC. Both experiments and simulations show that the CO/O products ratio increased at all temperatures from VC to FF. We postulate this is due to the larger number of collisions an O atom encounters inside the porous FF material compared to the flat surface of VC. For the simulations, we particularly focused on the lowest and highest temperatures studied experimentally, 1023 K and 1823 K, and found good agreement between the finite-rate DSMC simulations and experiments.
NASA Astrophysics Data System (ADS)
Moran, Michael D.; Pielke, Roger A.
1996-03-01
The Colorado State University mesoscale atmospheric dispersion (MAD) numerical modeling system, which consists of a prognostic mesoscale meteorological model coupled to a mesoscale Lagrangian particle dispersion model, has been used to simulate the transport and diffusion of a perfluorocarbon tracer-gas cloud for one afternoon surface release during the July 1980 Great Plains mesoscale tracer field experiment. Ground-level concentration (GLC) measurements taken along arcs of samplers 100 and 600 km downwind of the release site at Norman, Oklahoma, up to three days after the tracer release were available for comparison. Quantitative measures of a number of significant dispersion characteristics obtained from analysis of the observed tracer cloud's moving GLC 'footprint' have been used to evaluate the modeling system's skill in simulating this MAD case. MAD is more dependent upon the spatial and temporal structure of the transport wind field than is short-range atmospheric dispersion. For the Great Plains mesoscale tracer experiment, the observations suggest that the Great Plains nocturnal low-level jet played an important role in transporting and deforming the tracer cloud. A suite of ten two- and three-dimensional numerical meteorological experiments was devised to investigate the relative contributions of topography, other surface inhomogeneities, atmospheric baroclinicity, synoptic-scale flow evolution, and meteorological model initialization time to the structure and evolution of the low-level mesoscale flow field and thus to MAD. Results from the ten mesoscale meteorological simulations are compared in this part of the paper. The predicted wind fields display significant differences, which give rise in turn to significant differences in predicted low-level transport. The presence of an oscillatory ageostrophic component in the observed synoptic low-level winds for this case is shown to complicate initialization of the meteorological model considerably and is the likely cause of directional errors in the predicted mean tracer transport. A companion paper describes the results from the associated dispersion simulations.
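The core update of a Lagrangian particle dispersion model of the kind described above can be sketched as follows; the wind field and turbulence amplitude are synthetic placeholders, not the CSU modeling-system fields.

```python
# Hedged sketch: each particle is advected by a resolved wind and perturbed by a
# random-walk displacement whose standard deviation grows as sqrt(dt).
import numpy as np

def advect(particles, wind, sigma_turb, dt, rng):
    """particles: (n, 2) horizontal positions in metres; wind(x, y) -> (u, v)."""
    u, v = wind(particles[:, 0], particles[:, 1])
    turb = sigma_turb * np.sqrt(dt) * rng.standard_normal(particles.shape)
    particles[:, 0] += u * dt + turb[:, 0]
    particles[:, 1] += v * dt + turb[:, 1]
    return particles

rng = np.random.default_rng(42)
particles = np.zeros((1000, 2))                 # release all particles at the origin
wind = lambda x, y: (np.full_like(x, 5.0), np.full_like(y, 2.0))   # steady 5, 2 m/s
for _ in range(24 * 6):                         # 24 h with 10-min steps
    particles = advect(particles, wind, sigma_turb=1.0, dt=600.0, rng=rng)
print(particles.mean(axis=0), particles.std(axis=0))
```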
Garitte, B.; Nguyen, T. S.; Barnichon, J. D.; ...
2017-05-09
Coupled thermal–hydrological–mechanical (THM) processes in the near field of deep geological repositories can influence several safety features of the engineered and geological barriers. Among those features are: the possibility of damage in the host rock, the time for re-saturation of the bentonite, and the perturbations in the hydraulic regime in both the rock and engineered seals. Within the international cooperative code-validation project DECOVALEX-2015, eight research teams developed models to simulate an in situ heater experiment, called HE-D, in Opalinus Clay at the Mont Terri Underground Research Laboratory in Switzerland. The models were developed from the theory of poroelasticity in order to simulate the coupled THM processes that prevailed during the experiment and thereby to characterize the in situ THM properties of Opalinus Clay. The modelling results for the evolution of temperature, pore water pressure, and deformation at different points are consistent among the research teams and compare favourably with the experimental data in terms of trends and absolute values. The models were able to reproduce the main physical processes of the experiment. In particular, most teams simulated temperature and thermally induced pore water pressure well, including spatial variations caused by inherent anisotropy due to bedding.
Cloud computing and validation of expandable in silico livers
2010-01-01
Background In Silico Livers (ISLs) are works in progress. They are used to challenge multilevel, multi-attribute, mechanistic hypotheses about the hepatic disposition of xenobiotics coupled with hepatic responses. To enhance ISL-to-liver mappings, we added discrete time metabolism, biliary elimination, and bolus dosing features to a previously validated ISL and initiated re-validation experiments that required scaling to use more simulated lobules than previously, more than could be achieved using the local cluster technology. Rather than dramatically increasing the size of our local cluster we undertook the re-validation experiments using the Amazon EC2 cloud platform. Doing so required demonstrating the efficacy of scaling a simulation to use more cluster nodes and assessing the scientific equivalence of local cluster validation experiments with those executed using the cloud platform. Results The local cluster technology was duplicated in the Amazon EC2 cloud platform. Synthetic modeling protocols were followed to identify a successful parameterization. Experiment sample sizes (number of simulated lobules) on both platforms were 49, 70, 84, and 152 (cloud only). Experimental indistinguishability was demonstrated for ISL outflow profiles of diltiazem using both platforms for experiments consisting of 84 or more samples. The process was analogous to demonstrating results equivalency from two different wet-labs. Conclusions The results provide additional evidence that disposition simulations using ISLs can cover the behavior space of liver experiments in distinct experimental contexts (there is in silico-to-wet-lab phenotype similarity). The scientific value of experimenting with multiscale biomedical models has been limited to research groups with access to computer clusters. The availability of cloud technology coupled with the evidence of scientific equivalency has lowered the barrier and will greatly facilitate model sharing as well as provide straightforward tools for scaling simulations to encompass greater detail with no extra investment in hardware. PMID:21129207
Research on the water-entry attitude of a submersible aircraft.
Xu, BaoWei; Li, YongLi; Feng, JinFu; Hu, JunHua; Qi, Duo; Yang, Jian
2016-01-01
The water entry of a submersible aircraft, which is transient, highly coupled, and nonlinear, is complicated. After analyzing the mechanics of this process, the change rate of every variable is considered. A dynamic model is built and employed to study vehicle attitude and the overturn phenomenon during water entry. Experiments are carried out and a method to organize the experimental data is proposed. The accuracy of the method is confirmed by comparing the results of the dynamic-model simulation and the experiment under the same conditions. Based on the analysis of the experiment and simulation, the initial attack angle and angular velocity largely influence the water entry of the vehicle. Simulations of water entry with different initial velocities and angular velocities are completed, followed by an analysis, and the motion law of the vehicle is obtained. To solve the problem of vehicle stability and control during water entry, an approach is proposed in which the initial angular velocity is controlled so that the vehicle sails with a zero attack angle after entering the water. With the dynamic model and an optimization algorithm, calculations are performed and the optimal initial water-entry angular velocity is obtained. The simulation results confirm the effectiveness of the proposed approach of controlling the initial water-entry angular velocity.
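The optimisation idea in the abstract can be illustrated with a toy sketch: integrate a crude pitch-dynamics stand-in through the entry phase and search for the initial angular velocity that leaves the vehicle at near-zero attack angle; the dynamics and coefficients below are invented for illustration only.

```python
# Hedged sketch: grid search for the initial angular velocity that minimises the
# attack angle after a fixed entry time, using a toy damped-restoring pitch model.
import numpy as np

def attack_angle_after_entry(omega0, alpha0=np.radians(8.0), damping=1.5,
                             restoring=4.0, dt=0.001, t_entry=0.8):
    alpha, omega = alpha0, omega0
    for _ in range(int(t_entry / dt)):
        domega = -damping * omega - restoring * alpha     # damped restoring moment
        alpha += omega * dt
        omega += domega * dt
    return alpha

candidates = np.linspace(-2.0, 0.5, 251)                   # candidate rates, rad/s
errors = [abs(attack_angle_after_entry(w)) for w in candidates]
best = candidates[int(np.argmin(errors))]
print("initial angular velocity giving ~zero attack angle:", round(best, 3), "rad/s")
```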
NASA Astrophysics Data System (ADS)
Perron, Aurelien; Roehling, John D.; Turchi, Patrice E. A.; Fattebert, Jean-Luc; McKeown, Joseph T.
2018-01-01
A combination of dynamic transmission electron microscopy (DTEM) experiments and CALPHAD-informed phase-field simulations was used to study rapid solidification in Cu-Ni thin-film alloys. Experiments—conducted in the DTEM—consisted of in situ laser melting and determination of the solidification kinetics by monitoring the solid-liquid interface and the overall microstructure evolution (time-resolved measurements) during the solidification process. Modelling of the Cu-Ni alloy microstructure evolution was based on a phase-field model that included realistic Gibbs energies and diffusion coefficients from the CALPHAD framework (thermodynamic and mobility databases). DTEM and post mortem experiments highlighted the formation of microsegregation-free columnar grains with interface velocities varying from ˜0.1 to ˜0.6 m s-1. After an ‘incubation’ time, the velocity of the planar solid-liquid interface accelerated until solidification was complete. In addition, a decrease of the temperature gradient induced a decrease in the interface velocity. The modelling strategy permitted the simulation (in 1D and 2D) of the solidification process from the initially diffusion-controlled to the nearly partitionless regimes. Finally, results of DTEM experiments and phase-field simulations (grain morphology, solute distribution, and solid-liquid interface velocity) were consistent at similar time (μs) and spatial scales (μm).
NASA Astrophysics Data System (ADS)
Colette, A.; Ciarelli, G.; Otero, N.; Theobald, M.; Solberg, S.; Andersson, C.; Couvidat, F.; Manders-Groot, A.; Mar, K. A.; Mircea, M.; Pay, M. T.; Raffort, V.; Tsyro, S.; Cuvelier, K.; Adani, M.; Bessagnet, B.; Bergstrom, R.; Briganti, G.; Cappelletti, A.; D'isidoro, M.; Fagerli, H.; Ojha, N.; Roustan, Y.; Vivanco, M. G.
2017-12-01
The Eurodelta-Trends multi-model chemistry-transport experiment has been designed to better understand the evolution of air pollution and its drivers for the period 1990-2010 in Europe. The main objective of the experiment is to assess the efficiency of air pollutant emission mitigation measures in improving regional-scale air quality. The experiment is designed in three tiers with increasing degrees of computational demand in order to facilitate the participation of as many modelling teams as possible. The basic experiment consists of simulations for the years 1990, 2000 and 2010. Sensitivity analyses for the same three years using various combinations of (i) anthropogenic emissions, (ii) chemical boundary conditions and (iii) meteorology complement it. The most demanding tier consists of two complete time series from 1990 to 2010, simulated using either time-varying emissions for the corresponding years or constant emissions. Eight chemistry-transport models have contributed calculation results to at least one experiment tier, and six models have completed the 21-year trend simulations. The modelling results are publicly available for further use by the scientific community. We assess the skill of the models in capturing observed air pollution trends for the 1990-2010 time period. The average particulate matter relative trends are well captured by the models, even if they display the usual low bias in reproducing absolute levels. Ozone trends are also well reproduced, yet slightly overestimated in the 1990s. The attribution study emphasizes the efficiency of mitigation measures in reducing air pollution over Europe, although a strong impact of long-range transport is pointed out for ozone trends. Meteorological variability is also an important factor in some regions of Europe. The results of the first health and ecosystem impact studies building upon a regional-scale multi-model ensemble over a 20-year time period will also be presented.
Dicke-model simulation via cavity-assisted Raman transitions
NASA Astrophysics Data System (ADS)
Zhang, Zhiqiang; Lee, Chern Hui; Kumar, Ravi; Arnold, K. J.; Masson, Stuart J.; Grimsmo, A. L.; Parkins, A. S.; Barrett, M. D.
2018-04-01
The Dicke model is of fundamental importance in quantum mechanics for understanding the collective behavior of atoms coupled to a single electromagnetic mode. Here, we demonstrate a Dicke-model simulation via cavity-assisted Raman transitions in a configuration using counterpropagating laser beams. The observations indicate that motional effects should be included to fully account for the results. These results are contrary to experiments using single-beam and copropagating configurations. We give a theoretical description that accounts for the beam geometries used in the experiments and indicates the potential role of motional effects. In particular, a model is given that highlights the influence of Doppler broadening on the observed phase-transition thresholds.
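For reference, the Hamiltonian being simulated can be built explicitly for a small collective spin coupled to a truncated bosonic mode; the sketch below uses illustrative parameters (hbar = 1) and standard angular-momentum and Fock-space matrix representations.

```python
# Hedged sketch: Dicke Hamiltonian for N atoms (collective spin j = N/2) coupled
# to a bosonic mode truncated at n_max photons; parameters are illustrative.
import numpy as np

def dicke_hamiltonian(N=4, n_max=10, omega=1.0, omega0=1.0, lam=0.3):
    j = N / 2.0
    m = np.arange(j, -j - 1, -1)                     # Jz eigenvalues j .. -j
    Jz = np.diag(m)
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), 1)   # raising operator J+
    Jx = 0.5 * (jp + jp.T)
    a = np.diag(np.sqrt(np.arange(1, n_max + 1)), 1)              # truncated annihilation op
    n_op = a.T @ a
    I_f, I_s = np.eye(n_max + 1), np.eye(m.size)
    H = (omega * np.kron(n_op, I_s)
         + omega0 * np.kron(I_f, Jz)
         + (2.0 * lam / np.sqrt(N)) * np.kron(a + a.T, Jx))
    return H

H = dicke_hamiltonian()
print("ground-state energy:", np.linalg.eigvalsh(H)[0])
# Mean-field theory places the normal/superradiant transition at lam_c = sqrt(omega*omega0)/2.
```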
NASA Technical Reports Server (NTRS)
Dum, C. T.
1990-01-01
Particle simulation experiments were used to study the basic physical ingredients needed for building a global model of foreshock wave phenomena. In particular, the generation of Langmuir waves by a gentle bump-on-tail electron distribution is analyzed. It is shown that, with appropriately designed simulation experiments, quasi-linear theory can be quantitatively verified for parameters corresponding to the electron foreshock.
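The quasi-linear comparison described above rests on the textbook bump-on-tail growth-rate estimate, in which the Langmuir-wave growth rate is proportional to the slope of the electron distribution at the wave phase velocity; a normalised sketch (w_pe = 1, thermal speed = 1, illustrative beam parameters) is given below.

```python
# Hedged sketch: gamma(k) ~ (pi/2) * (w_pe**3 / k**2) * f'(v_phi) / n, with f
# normalised to unit density and evaluated at the phase velocity v_phi = w_pe/k.
import numpy as np

def f_bump_on_tail(v, n_beam=0.01, v_beam=5.0, vt_beam=0.5):
    core = (1.0 - n_beam) * np.exp(-0.5 * v**2) / np.sqrt(2 * np.pi)
    beam = n_beam * np.exp(-0.5 * ((v - v_beam) / vt_beam) ** 2) / (vt_beam * np.sqrt(2 * np.pi))
    return core + beam                                   # integrates to 1

def growth_rate(k, w_pe=1.0, dv=1e-4):
    v_phi = w_pe / k
    dfdv = (f_bump_on_tail(v_phi + dv) - f_bump_on_tail(v_phi - dv)) / (2 * dv)
    return 0.5 * np.pi * w_pe**3 / k**2 * dfdv           # positive on the beam's rising edge

for k in (0.18, 0.22, 0.26):
    print(k, growth_rate(k))
```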
NASA Astrophysics Data System (ADS)
Aurisano, A.; Backhouse, C.; Hatcher, R.; Mayer, N.; Musser, J.; Patterson, R.; Schroeter, R.; Sousa, A.
2015-12-01
The NOνA experiment is a two-detector, long-baseline neutrino experiment operating in the recently upgraded NuMI muon neutrino beam. Simulating neutrino interactions and backgrounds requires many steps including: the simulation of the neutrino beam flux using FLUKA and the FLUGG interface; cosmic ray generation using CRY; neutrino interaction modeling using GENIE; and a simulation of the energy deposited in the detector using GEANT4. To shorten generation time, the modeling of detector-specific aspects, such as photon transport, detector and electronics noise, and readout electronics, employs custom, parameterized simulation applications. We will describe the NOνA simulation chain, and present details on the techniques used in modeling photon transport near the ends of cells, and in developing a novel data-driven noise simulation. Due to the high intensity of the NuMI beam, the Near Detector samples a high rate of muons originating in the surrounding rock. In addition, due to its location on the surface at Ash River, MN, the Far Detector collects a large rate (˜ 140 kHz) of cosmic muons. We will discuss the methods used in NOνA for overlaying rock muons and cosmic ray muons with simulated neutrino interactions and show how realistically the final simulation reproduces the preliminary NOνA data.
The Trouble with Thinking like Arena: Learning to Use Simulation Software
ERIC Educational Resources Information Center
Rodgers, Diane M.; Moraga, Reinaldo J.
2011-01-01
Simulation software used for modeling has become as ubiquitous as computers themselves. Despite growing reliance on simulation in educational and workplace settings, users encounter frustration in using simulation software programs. The authors conducted a study with 26 engineering students and interviewed them about their experience learning the…
Three-dimensional computer model for the atmospheric general circulation experiment
NASA Technical Reports Server (NTRS)
Roberts, G. O.
1984-01-01
An efficient, flexible, three-dimensional, hydrodynamic, computer code has been developed for a spherical cap geometry. The code will be used to simulate NASA's Atmospheric General Circulation Experiment (AGCE). The AGCE is a spherical, baroclinic experiment which will model the large-scale dynamics of our atmosphere; it has been proposed to NASA for future Spacelab flights. In the AGCE a radial dielectric body force will simulate gravity, with hot fluid tending to move outwards. In order that this force be dominant, the AGCE must be operated in a low gravity environment such as Spacelab. The full potential of the AGCE will only be realized by working in conjunction with an accurate computer model. Proposed experimental parameter settings will be checked first using model runs. Then actual experimental results will be compared with the model predictions. This interaction between experiment and theory will be very valuable in determining the nature of the AGCE flows and hence their relationship to analytical theories and actual atmospheric dynamics.
NASA Technical Reports Server (NTRS)
Wey, Thomas
2017-01-01
This paper summarizes the reacting-flow results of simulating a bluff-body-stabilized flame experiment on the Volvo Validation Rig using a releasable edition of the National Combustion Code (NCC). The turbulence models selected to investigate the configuration are the subgrid-scale kinetic-energy-coupled large-eddy simulation (K-LES) and the time-filtered Navier-Stokes (TFNS) simulation. The turbulence-chemistry interaction model used is linear eddy mixing (LEM).
Magnetosphere Modeling: From Cartoons to Simulations
NASA Astrophysics Data System (ADS)
Gombosi, T. I.
2017-12-01
Over the last half century, physics-based global computer simulations became a bridge between experiment and basic theory, and now they represent the "third pillar" of geospace research. Today, many of our scientific publications utilize large-scale simulations to interpret observations, test new ideas, plan campaigns, or design new instruments. Realistic simulations of the complex Sun-Earth system have been made possible by the dramatically increased power of both computing hardware and numerical algorithms. Early magnetosphere models were based on simple E&M concepts (like the Chapman-Ferraro cavity) and hydrodynamic analogies (bow shock). At the beginning of the space age, current-system models were developed, culminating in the sophisticated Tsyganenko-type description of the magnetic configuration. The first 3D MHD simulations of the magnetosphere were published in the early 1980s. A decade later there were several competing global models that were able to reproduce many fundamental properties of the magnetosphere. The leading models included the impact of the ionosphere by using a height-integrated electric potential description. Dynamic coupling of global and regional models started in the early 2000s by integrating a ring current and a global magnetosphere model. It has been recognized for quite some time that plasma kinetic effects play an important role. Presently, global hybrid simulations of the dynamic magnetosphere are expected to be possible on exascale supercomputers, while fully kinetic simulations with realistic mass ratios are still decades away. In the 2010s several groups started to experiment with PIC simulations embedded in large-scale 3D MHD models. Presently this integrated MHD-PIC approach is at the forefront of magnetosphere simulations, and this technique is expected to lead to some important advances in our understanding of magnetospheric physics. This talk will review the evolution of magnetosphere modeling from cartoons to current systems, to global MHD, to MHD-PIC, and discuss the role of state-of-the-art models in forecasting space weather.
NASA Astrophysics Data System (ADS)
Otto-Bliesner, Bette L.; Braconnot, Pascale; Harrison, Sandy P.; Lunt, Daniel J.; Abe-Ouchi, Ayako; Albani, Samuel; Bartlein, Patrick J.; Capron, Emilie; Carlson, Anders E.; Dutton, Andrea; Fischer, Hubertus; Goelzer, Heiko; Govin, Aline; Haywood, Alan; Joos, Fortunat; LeGrande, Allegra N.; Lipscomb, William H.; Lohmann, Gerrit; Mahowald, Natalie; Nehrbass-Ahles, Christoph; Pausata, Francesco S. R.; Peterschmitt, Jean-Yves; Phipps, Steven J.; Renssen, Hans; Zhang, Qiong
2017-11-01
Two interglacial epochs are included in the suite of Paleoclimate Modeling Intercomparison Project (PMIP4) simulations in the Coupled Model Intercomparison Project (CMIP6). The experimental protocols for simulations of the mid-Holocene (midHolocene, 6000 years before present) and the Last Interglacial (lig127k, 127 000 years before present) are described here. These equilibrium simulations are designed to examine the impact of changes in orbital forcing at times when atmospheric greenhouse gas levels were similar to those of the preindustrial period and the continental configurations were almost identical to modern ones. These simulations test our understanding of the interplay between radiative forcing and atmospheric circulation, and the connections among large-scale and regional climate changes giving rise to phenomena such as land-sea contrast and high-latitude amplification in temperature changes, and responses of the monsoons, as compared to today. They also provide an opportunity, through carefully designed additional sensitivity experiments, to quantify the strength of atmosphere, ocean, cryosphere, and land-surface feedbacks. Sensitivity experiments are proposed to investigate the role of freshwater forcing in triggering abrupt climate changes within interglacial epochs. These feedback experiments naturally lead to a focus on climate evolution during interglacial periods, which will be examined through transient experiments. Analyses of the sensitivity simulations will also focus on interactions between extratropical and tropical circulation, and the relationship between changes in mean climate state and climate variability on annual to multi-decadal timescales. The comparative abundance of paleoenvironmental data and of quantitative climate reconstructions for the Holocene and Last Interglacial make these two epochs ideal candidates for systematic evaluation of model performance, and such comparisons will shed new light on the importance of external feedbacks (e.g., vegetation, dust) and the ability of state-of-the-art models to simulate climate changes realistically.
Seth, Ajay; Sherman, Michael; Reinbolt, Jeffrey A; Delp, Scott L
Movement science is driven by observation, but observation alone cannot elucidate principles of human and animal movement. Biomechanical modeling and computer simulation complement observations and inform experimental design. Biological models are complex and specialized software is required for building, validating, and studying them. Furthermore, common access is needed so that investigators can contribute models to a broader community and leverage past work. We are developing OpenSim, a freely available musculoskeletal modeling and simulation application and libraries specialized for these purposes, by providing: musculoskeletal modeling elements, such as biomechanical joints, muscle actuators, ligament forces, compliant contact, and controllers; and tools for fitting generic models to subject-specific data, performing inverse kinematics and forward dynamic simulations. OpenSim performs an array of physics-based analyses to delve into the behavior of musculoskeletal models by employing Simbody, an efficient and accurate multibody system dynamics code. Models are publicly available and are often reused for multiple investigations because they provide a rich set of behaviors that enables different lines of inquiry. This report will discuss one model developed to study walking and applied to gain deeper insights into muscle function in pathological gait and during running. We then illustrate how simulations can test fundamental hypotheses and focus the aims of in vivo experiments, with a postural stability platform and human model that provide a research environment for performing human posture experiments in silico. We encourage wide adoption of OpenSim for community exchange of biomechanical models and methods and welcome new contributors.
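A minimal sketch of loading a model and running a short forward dynamic simulation with the OpenSim Python bindings is shown below; the model file name is a placeholder, and the API names follow the OpenSim 4.x documentation and should be checked against the installed release.

```python
# Hedged sketch: forward dynamic simulation with the OpenSim Python bindings.
# "gait_model.osim" is a hypothetical subject-scaled model file.
import opensim as osim

model = osim.Model("gait_model.osim")        # load the musculoskeletal model
state = model.initSystem()                   # build the underlying Simbody system

manager = osim.Manager(model)
manager.initialize(state)
state = manager.integrate(0.5)               # integrate forward 0.5 s

print("time reached:", state.getTime())
```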
ERIC Educational Resources Information Center
Psycharis, Sarantos
2016-01-01
The computational experiment approach considers models as the fundamental instructional units of Inquiry-Based Science and Mathematics Education (IBSE) and STEM Education, where the model takes the place of the "classical" experimental set-up and simulation replaces the experiment. Argumentation in IBSE and STEM education is related to the…
Simulated Group Counseling: An Experiential Training Model for Group Work.
ERIC Educational Resources Information Center
Romano, John L.
1998-01-01
Describes an experiential group training model designed for prepracticum-level counseling graduate students. Simulated Group Counseling (SGC) offers students an opportunity to experience being group members; facilitating a group; and processing the group with peers, an advanced graduate student observer, and the instructor. SGC reduces…
APEX Model Simulation for Row Crop Watersheds with Agroforestry and Grass Buffers
USDA-ARS?s Scientific Manuscript database
Watershed model simulation has become an important tool in studying ways and means to reduce transport of agricultural pollutants. Conducting field experiments to assess buffer influences on water quality is constrained by the large-scale nature of watersheds, high experimental costs, private owner...
Numerical Modeling Studies of Wake Vortices: Real Case Simulations
NASA Technical Reports Server (NTRS)
Shen, Shao-Hua; Ding, Feng; Han, Jongil; Lin, Yuh-Lang; Arya, S. Pal; Proctor, Fred H.
1999-01-01
A three-dimensional large-eddy simulation model, TASS, is used to simulate the behavior of aircraft wake vortices in a real atmosphere. The purpose for this study is to validate the use of TASS for simulating the decay and transport of wake vortices. Three simulations are performed and the results are compared with the observed data from the 1994-1995 Memphis field experiments. The selected cases have an atmospheric environment of weak turbulence and stable stratification. The model simulations are initialized with appropriate meteorological conditions and a post roll-up vortex system. The behavior of wake vortices as they descend within the atmospheric boundary layer and interact with the ground is discussed.
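Alongside such LES studies, simple point-vortex estimates are often used for context; the sketch below computes an initial circulation from the lift balance and the mutual-induction descent speed of the vortex pair, with illustrative aircraft numbers rather than those of the Memphis cases.

```python
# Hedged sketch of classical wake-vortex estimates: initial circulation from the
# lift balance and the mutual-induction descent speed of the vortex pair.
from math import pi

def wake_vortex_descent(weight_N, airspeed, span, rho=1.2):
    b0 = pi * span / 4.0                        # elliptic-loading vortex spacing, m
    gamma0 = weight_N / (rho * airspeed * b0)   # initial circulation, m^2/s
    w_descent = gamma0 / (2.0 * pi * b0)        # mutual-induction descent speed, m/s
    return gamma0, w_descent

gamma0, w = wake_vortex_descent(weight_N=600e3, airspeed=70.0, span=34.0)
print(f"Gamma0 = {gamma0:.0f} m^2/s, descent speed = {w:.2f} m/s")
```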
Polar-Drive--Implosion Physics on OMEGA and the NIF
NASA Astrophysics Data System (ADS)
Radha, P. B.
2012-10-01
Polar drive (PD) permits the execution of direct-drive ignition experiments on facilities that are configured for x-ray drive, such as the National Ignition Facility (NIF) and Laser Mégajoule. Experiments on the OMEGA laser are used to develop and validate models of PD implosions. Results from OMEGA PD shock-timing and warm implosions are presented. Experiments are simulated with the 2-D hydrodynamic code DRACO including full 3-D ray trace to model oblique beams. Excellent agreement is obtained in shock velocity and catch-up in PD geometry in warm, plastic shells. Predicted areal densities are measured in PD implosion experiments. Good agreement between simulation and experiments is obtained in the overall shape of the compressing shell when observed through x-ray backlighting. Simulated images of the hot core, including the effect of magnetic fields, are compared with experiments. Comparisons of simulated and observed scattered light and bang time in PD geometry are presented. Several techniques to increase implosion velocity are presented including beam profile variations and different ablator materials. Results from shimmed-target PD experiments will also be presented. Designs for future PD OMEGA experiments at ignition-relevant intensities will be presented. The implication of these results for NIF-scale plasmas is discussed. Experiments for the NIF in its current configuration, with indirect-drive phase plates, are proposed to study implosion energetics and shell asymmetries. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC52-08NA28302.
Direct Numerical Simulation of an Airfoil with Sand Grain Roughness on the Leading Edge
NASA Technical Reports Server (NTRS)
Ribeiro, Andre F. P.; Casalino, Damiano; Fares, Ehab; Choudhari, Meelan
2016-01-01
As part of a computational study of acoustic radiation due to the passage of turbulent boundary layer eddies over the trailing edge of an airfoil, the Lattice-Boltzmann method is used to perform direct numerical simulations of compressible, low Mach number flow past an NACA 0012 airfoil at zero degrees angle of attack. The chord Reynolds number of approximately 0.657 million models one of the test conditions from a previous experiment by Brooks, Pope, and Marcolini at NASA Langley Research Center. A unique feature of these simulations involves direct modeling of the sand grain roughness on the leading edge, which was used in the abovementioned experiment to trip the boundary layer to fully turbulent flow. This report documents the findings of preliminary, proof-of-concept simulations based on a narrow spanwise domain and a limited time interval. The inclusion of fully-resolved leading edge roughness in this simulation leads to significantly earlier transition than that in the absence of any roughness. The simulation data is used in conjunction with both the Ffowcs Williams-Hawkings acoustic analogy and a semi-analytical model by Roger and Moreau to predict the farfield noise. The encouraging agreement between the computed noise spectrum and that measured in the experiment indicates the potential payoff from a full-fledged numerical investigation based on the current approach. Analysis of the computed data is used to identify the required improvements to the preliminary simulations described herein.
NASA Technical Reports Server (NTRS)
Douglass, Anne R.; Stolarski, Richard S.; Steenrod, Steven; Pawson, Steven
2003-01-01
One key application of atmospheric chemistry and transport models is prediction of the response of ozone and other constituents to various natural and anthropogenic perturbations. These include changes in composition, such as the previous rise and recent decline in emission of man-made chlorofluorocarbons, changes in aerosol loading due to volcanic eruption, and changes in solar forcing. Comparisons of hindcast model results for the past few decades with observations are a key element of model evaluation and provide a sense of the reliability of model predictions. The 25 year data set from Total Ozone Mapping Spectrometers is a cornerstone of such model evaluation. Here we report evaluation of three-dimensional multi-decadal simulation of stratospheric composition. Meteorological fields for this off-line calculation are taken from a 50 year simulation of a general circulation model. Model fields are compared with observations from TOMS and also with observations from the Stratospheric Aerosol and Gas Experiment (SAGE), Microwave Limb Sounder (MLS), Cryogenic Limb Array Etalon Spectrometer (CLAES), and the Halogen Occultation Experiment (HALOE). This overall evaluation will emphasize the spatial, seasonal, and interannual variability of the simulation compared with observed atmospheric variability.
NASA Technical Reports Server (NTRS)
Cotton, W. R.; Tripoli, G. J.
1982-01-01
Observational requirements for predicting convective storm development and intensity, as suggested by recent numerical experiments, are examined. Recent 3D numerical experiments are interpreted with regard to the relationship between overshooting tops and surface wind gusts. The development of software for emulating satellite-inferred cloud properties using 3D cloud model predicted data and the simulation of the Heymsfield (1981) Northern Illinois storm are described, as well as the development of a conceptual/semi-quantitative model of eastward propagating, mesoscale convective complexes forming to the lee of the Rocky Mountains.
Simulation of pump-turbine prototype fast mode transition for grid stability support
NASA Astrophysics Data System (ADS)
Nicolet, C.; Braun, O.; Ruchonnet, N.; Hell, J.; Béguin, A.; Avellan, F.
2017-04-01
The paper explores the additional services that the Full Size Frequency Converter, FSFC, solution can provide for the case of an existing pumped storage power plant of 2x210 MW, for which conversion from fixed speed to variable speed is investigated with a focus on fast mode transition. First, reduced-scale model experiments on fast transitions of a Francis pump-turbine, performed at the ANDRITZ HYDRO Hydraulic Laboratory in Linz, Austria, are presented. The tests consist of linear speed transitions from pump to turbine and vice versa performed with constant guide vane opening. Then the existing pumped storage power plant, whose pump-turbine is quasi-homologous to the reduced-scale model, is modelled using the simulation software SIMSEN, considering the reservoirs, penstocks, the two Francis pump-turbines, the two downstream surge tanks, and the tailrace tunnel. For the electrical part, an FSFC configuration is considered with a detailed electrical model. The transitions from turbine to pump and vice versa are simulated, and similarities between prototype simulation results and reduced-scale model experiments are highlighted.
Dynamic Simulation of a Periodic 10 K Sorption Cryocooler
NASA Technical Reports Server (NTRS)
Bhandari, P.; Rodriguez, J.; Bard, S.; Wade, L.
1994-01-01
A transient thermal simulation model has been developed to simulate the dynamic performance of a multiple-stage 10 K sorption cryocooler for spacecraft sensor cooling applications that require periodic quick-cooldown (under 2 minutes), negligible vibration, low power consumption, and long life (5 to 10 years). The model was specifically designed to represent the Brilliant Eyes Ten-Kelvin Sorption Cryocooler Experiment (BETSCE), but it can be adapted to represent other sorption cryocooler systems as well. The model simulates the heat transfer, mass transfer, and thermodynamic processes in the cryostat and the sorbent beds for the entire refrigeration cycle, and includes the transient effects of variable hydrogen supply pressures due to expansion and overflow of hydrogen during the cooldown operation. The paper describes model limitations and simplifying assumptions, with estimates of errors induced by them, and presents comparisons of performance predictions with ground experiments. An important benefit of the model is its ability to predict performance sensitivities to variations of key design and operational parameters. The insights thus obtained are expected to lead to higher efficiencies and lower weights for future designs.
NASA Astrophysics Data System (ADS)
Yue, Yingchao; Fan, Wenhui; Xiao, Tianyuan; Ma, Cheng
2013-07-01
High level architecture (HLA) is the open standard in the collaborative simulation field. Scholars have been paying close attention to theoretical research on and engineering applications of collaborative simulation based on HLA/RTI, which extends HLA in various aspects like functionality and efficiency. However, related study on the load balancing problem of HLA collaborative simulation is insufficient. Without load balancing, collaborative simulation under HLA/RTI may encounter performance reduction or even fatal errors. In this paper, load balancing is further divided into static problems and dynamic problems. A multi-objective model is established and the randomness of model parameters is taken into consideration for static load balancing, which makes the model more credible. A Monte Carlo based optimization algorithm (MCOA) is devised to achieve static load balance. For dynamic load balancing, a new type of dynamic load balancing problem is put forward with regard to variable-structured collaborative simulation under HLA/RTI. In order to minimize the influence on the running collaborative simulation, an ordinal optimization based algorithm (OOA) is devised to shorten the optimization time. Furthermore, the two algorithms are adopted in simulation experiments of different scenarios, which demonstrate their effectiveness and efficiency. An engineering experiment on collaborative simulation under HLA/RTI of high speed electric multiple units (EMU) is also conducted to identify the credibility of the proposed models and the supportive utility of MCOA and OOA for practical engineering systems. The proposed research ensures compatibility with traditional HLA, enhances the ability to assign simulation loads onto computing units both statically and dynamically, improves the performance of the collaborative simulation system and makes full use of the hardware resources.
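The abstract does not spell out the MCOA formulation; purely as an illustration of the Monte Carlo idea behind static load balancing with uncertain loads, the Python sketch below (federate loads, node counts and cost function are all hypothetical) samples random load realizations to score candidate assignments and keeps the best one. An MCOA as described in the paper would add the multi-objective treatment and a more structured search; this sketch shows only the sampling-based evaluation step.

```python
import random

def expected_makespan(assignment, load_means, load_stdevs, n_nodes, n_samples=200):
    """Estimate the expected worst-case node load for one assignment
    by Monte Carlo sampling of the uncertain federate loads."""
    total = 0.0
    for _ in range(n_samples):
        node_load = [0.0] * n_nodes
        for fed, node in enumerate(assignment):
            node_load[node] += random.gauss(load_means[fed], load_stdevs[fed])
        total += max(node_load)
    return total / n_samples

def monte_carlo_balance(load_means, load_stdevs, n_nodes, n_iter=2000):
    """Randomized search over federate-to-node assignments (illustrative only)."""
    n_fed = len(load_means)
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        candidate = [random.randrange(n_nodes) for _ in range(n_fed)]
        cost = expected_makespan(candidate, load_means, load_stdevs, n_nodes)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

# Hypothetical example: 8 federates, 3 computing nodes
means = [4, 7, 2, 5, 6, 3, 8, 1]
stdevs = [1, 2, 0.5, 1, 1.5, 0.5, 2, 0.2]
assignment, cost = monte_carlo_balance(means, stdevs, 3)
print(assignment, round(cost, 2))
```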
Discrete Particle Method for Simulating Hypervelocity Impact Phenomena.
Watson, Erkai; Steinhauser, Martin O
2017-04-02
In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulating of impact events for velocities beyond 5 km s−1. We present here the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity speed. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength.
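The specific interaction potentials used in the paper are not given in the abstract; the sketch below is a generic DEM-style force loop (linear-spring repulsion between overlapping spheres, velocity-Verlet integration, scalar mass and radius assumed) intended only to illustrate the kind of particle interaction such a model builds on, not the authors' implementation.

```python
import numpy as np

def pairwise_forces(pos, radius, k=1.0e6):
    """Simple repulsive contact force between overlapping spheres
    (a stand-in for the unspecified interaction potentials)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[j] - pos[i]
            dist = np.linalg.norm(rij)
            overlap = 2 * radius - dist
            if overlap > 0 and dist > 0:
                f = k * overlap * (rij / dist)   # linear-spring repulsion
                forces[i] -= f
                forces[j] += f
    return forces

def step(pos, vel, mass, radius, dt):
    """One velocity-Verlet step for the particle ensemble (mass is a scalar)."""
    f0 = pairwise_forces(pos, radius)
    pos_new = pos + vel * dt + 0.5 * (f0 / mass) * dt**2
    f1 = pairwise_forces(pos_new, radius)
    vel_new = vel + 0.5 * ((f0 + f1) / mass) * dt
    return pos_new, vel_new
```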
Mannava, Sandeep; Plate, Johannes F; Tuohy, Christopher J; Seyler, Thorsten M; Whitlock, Patrick W; Curl, Walton W; Smith, Thomas L; Saul, Katherine R
2013-07-01
The purpose of this article is to review basic science studies using various animal models for rotator cuff research and to describe structural, biomechanical, and functional changes to muscle following rotator cuff tears. The use of computational simulations to translate the findings from animal models to human scale is further detailed. A comprehensive review was performed of the basic science literature describing the use of animal models and simulation analysis to examine muscle function following rotator cuff injury and repair in the ageing population. The findings from various studies of rotator cuff pathology emphasize the importance of preventing permanent muscular changes with detrimental results. In vivo muscle function, electromyography, and passive muscle-tendon unit properties were studied before and after supraspinatus tenotomy in a rodent rotator cuff injury model (acute vs chronic). Then, a series of simulation experiments were conducted using a validated computational human musculoskeletal shoulder model to assess both passive and active tension of rotator cuff repairs based on surgical positioning. Outcomes of rotator cuff repair may be improved by earlier surgical intervention, with lower surgical repair tensions and fewer electromyographic neuromuscular changes. An integrated approach of animal experiments, computer simulation analyses, and clinical studies may allow us to gain a fundamental understanding of the underlying pathology and interpret the results for clinical translation.
NASA Astrophysics Data System (ADS)
Bjerg, Poul L.; Ammentorp, Hans C.; Christensen, Thomas H.
1993-04-01
A large-scale and long-term field experiment on cation exchange in a sandy aquifer has been modelled by a three-dimensional geochemical transport model. The geochemical model includes cation-exchange processes using a Gaines-Thomas expression, the closed carbonate system and the effects of ionic strength. Information on geology, hydrogeology and the transient conservative solute transport behaviour was obtained from a dispersion study in the same aquifer. The geochemical input parameters were carefully examined. CEC and selectivity coefficients were determined on the actual aquifer material by batch experiments and by the composition of the cations on the exchange complex. Potassium showed a non-ideal exchange behaviour, with K-Ca selectivity coefficients indicating dependency on the equivalent fraction and the K+ concentration in the aqueous phase. The model simulations over a distance of 35 m and a period of 250 days described accurately the observed attenuation of Na and the expelled amounts of Ca and Mg. Also, model predictions of plateau zones, formed by interaction with the background groundwater, in general agreed satisfactorily with the observations. Transport of K was simulated over a period of 800 days owing to substantial attenuation in the aquifer. The observed and the predicted breakthrough curves showed reasonable accordance, taking the duration of the experiment into account. However, some discrepancies were observed, probably caused by the revealed non-ideal exchange behaviour of K+.
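For orientation, the Gaines-Thomas convention mentioned above expresses heterovalent exchange selectivity in terms of equivalent fractions on the exchanger and aqueous-phase activities; one common form for Na+/Ca2+ exchange is shown below (the exact convention used in the paper may differ).

```latex
% Gaines-Thomas selectivity coefficient for Na+/Ca2+ exchange:
% E = equivalent fraction on the exchanger, a = aqueous activity
K_{\mathrm{Na/Ca}} =
  \frac{E_{\mathrm{Na}}\,\left(a_{\mathrm{Ca}^{2+}}\right)^{1/2}}
       {\left(E_{\mathrm{Ca}}\right)^{1/2}\,a_{\mathrm{Na}^{+}}}
```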
DSMC Simulation and Experimental Validation of Shock Interaction in Hypersonic Low Density Flow
2014-01-01
Direct simulation Monte Carlo (DSMC) of shock interaction in hypersonic low density flow is developed. Three molecular collision models, including hard sphere (HS), variable hard sphere (VHS), and variable soft sphere (VSS), are employed in the DSMC study. Simulations of double-cone and Edney's type IV hypersonic shock interactions in low density flow are performed, and comparisons between DSMC and experimental data are conducted. Investigation of the double-cone hypersonic flow shows that all three collision models can predict the trend of the pressure coefficient and the Stanton number. The HS model shows the best agreement between DSMC simulation and experiment among the three collision models. The agreement between DSMC and experiment is also generally good for the HS and VHS models in Edney's type IV shock interaction; however, it fails for the VSS model. Both the double-cone and Edney's type IV shock interaction simulations show that the DSMC errors depend on the Knudsen number and the models employed for intermolecular interaction. As the Knudsen number increases, the DSMC error decreases. The error is smallest for HS compared with the VHS and VSS models. When the Knudsen number is on the order of 10−4, the DSMC errors for the pressure coefficient, the Stanton number, and the scale of the interaction region are kept within 10%. PMID:24672360
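The HS, VHS and VSS models referenced above differ mainly in how the collision cross section depends on relative speed. As a rough sketch (conventions follow the standard Bird-type DSMC formulation; the parameter values shown are hypothetical), the VHS cross section can be written as a power law that reduces to the constant hard-sphere value for omega = 0.5:

```python
import math

def vhs_cross_section(c_r, d_ref, c_r_ref, omega):
    """Variable-hard-sphere (VHS) total collision cross section.
    The effective diameter shrinks with relative speed c_r as a power law;
    omega = 0.5 recovers the constant hard-sphere (HS) cross section.
    d_ref and c_r_ref are the reference diameter and relative speed."""
    d = d_ref * (c_r_ref / c_r) ** (omega - 0.5)
    return math.pi * d * d

# Hypothetical nitrogen-like values
print(vhs_cross_section(1000.0, 4.17e-10, 300.0, 0.74))
```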
Effect of the centrifugal force on domain chaos in Rayleigh-Bénard convection.
Becker, Nathan; Scheel, J D; Cross, M C; Ahlers, Guenter
2006-06-01
Experiments and simulations from a variety of sample sizes indicated that the centrifugal force significantly affects the domain-chaos state observed in rotating Rayleigh-Bénard convection patterns. In a large-aspect-ratio sample, we observed a hybrid state consisting of domain chaos close to the sample center, surrounded by an annulus of nearly stationary, nearly radial rolls populated by occasional defects reminiscent of undulation chaos. Although the Coriolis force is responsible for domain chaos, by comparing experiment and simulation we show that the centrifugal force is responsible for the radial rolls. Furthermore, simulations of the Boussinesq equations for smaller aspect ratios neglecting the centrifugal force yielded a domain precession frequency f ~ ε^μ with μ ≈ 1, as predicted by the amplitude-equation model for domain chaos but contradicted by previous experiment. Additionally, the simulations gave a domain size that was larger than in the experiment. When the centrifugal force was included in the simulation, μ and the domain size were consistent with experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passamai, V.; Saravia, L.
1997-05-01
In part one, a simple drying model of red pepper related to water evaporation was developed. In this second part, the drying model is applied by means of related experiments. Both laboratory and open air drying experiments were carried out to validate the model, and simulation results are presented.
Micromechanics of failure waves in glass. 2: Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Espinosa, H.D.; Xu, Y.; Brar, N.S.
1997-08-01
In an attempt to elucidate the failure mechanism responsible for the so-called failure waves in glass, numerical simulations of plate and rod impact experiments, with a multiple-plane model, have been performed. These simulations show that the failure wave phenomenon can be modeled by the nucleation and growth of penny-shaped shear defects from the specimen surface to its interior. Lateral stress increase, reduction of spall strength, and progressive attenuation of axial stress behind the failure front are properly predicted by the multiple-plane model. Numerical simulations of high-strain-rate pressure-shear experiments indicate that the model predicts reasonably well the shear resistance of the material at strain rates as high as 1 × 10^6/s. The agreement is believed to be the result of the model's capability in simulating damage-induced anisotropy. By examining the kinetics of the failure process in plate experiments, the authors show that the progressive glass spallation in the vicinity of the failure front and the rate of increase in lateral stress are more consistent with a representation of inelasticity based on shear-activated flow surfaces, inhomogeneous flow, and microcracking, rather than pure microcracking. In the former mechanism, microcracks are likely formed at a later time at the intersection of flow surfaces. In the case of rod-on-rod impact, stress and radial velocity histories predicted by the microcracking model are in agreement with the experimental measurements. Stress attenuation, pulse duration, and release structure are properly simulated. It is shown that failure wave speeds in excess of 3,600 m/s are required for adequate prediction of rod radial expansion.
Complex molecular assemblies at hand via interactive simulations.
Delalande, Olivier; Férey, Nicolas; Grasseau, Gilles; Baaden, Marc
2009-11-30
Studying complex molecular assemblies interactively is becoming an increasingly appealing approach to molecular modeling. Here we focus on interactive molecular dynamics (IMD) as a textbook example for interactive simulation methods. Such simulations can be useful in exploring and generating hypotheses about the structural and mechanical aspects of biomolecular interactions. For the first time, we carry out low-resolution coarse-grain IMD simulations. Such simplified modeling methods currently appear to be more suitable for interactive experiments and represent a well-balanced compromise between an important gain in computational speed versus a moderate loss in modeling accuracy compared to higher resolution all-atom simulations. This is particularly useful for initial exploration and hypothesis development for rare molecular interaction events. We evaluate which applications are currently feasible using molecular assemblies from 1900 to over 300,000 particles. Three biochemical systems are discussed: the guanylate kinase (GK) enzyme, the outer membrane protease T and the soluble N-ethylmaleimide-sensitive factor attachment protein receptors complex involved in membrane fusion. We induce large conformational changes, carry out interactive docking experiments, probe lipid-protein interactions and are able to sense the mechanical properties of a molecular model. Furthermore, such interactive simulations facilitate exploration of modeling parameters for method improvement. For the purpose of these simulations, we have developed a freely available software library called MDDriver. It uses the IMD protocol from NAMD and facilitates the implementation and application of interactive simulations. With MDDriver it becomes very easy to render any particle-based molecular simulation engine interactive. Here we use its implementation in the Gromacs software as an example. Copyright 2009 Wiley Periodicals, Inc.
Bubbling in vibrated granular films.
Zamankhan, Piroz
2011-02-01
With the help of experiments, computer simulations, and a theoretical investigation, a general model is developed of the flow dynamics of dense granular media immersed in air in an intermediate regime where both collisional and frictional interactions may affect the flow behavior. The model is tested using the example of a system in which bubbles and solid structures are produced in granular films shaken vertically. Both experiments and large-scale, three-dimensional simulations of this system are performed. The experimental results are compared with the results of the simulation to verify the validity of the model. The data indicate evidence of formation of bubbles when peak acceleration relative to gravity exceeds a critical value Γ(b). The air-grain interfaces of bubblelike structures are found to exhibit fractal structure with dimension D=1.7±0.05.
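The quoted interface fractal dimension D = 1.7 ± 0.05 is the kind of quantity typically obtained by box counting; the generic sketch below (illustrative only, not the author's analysis code) estimates D as the slope of log N(ε) versus log(1/ε) for a set of interface points.

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the fractal dimension of a 2-D point set (e.g. interface
    pixels) by box counting: slope of log N(eps) vs log(1/eps)."""
    points = np.asarray(points, dtype=float)
    counts = []
    for eps in box_sizes:
        boxes = set(map(tuple, np.floor(points / eps).astype(int)))
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)),
                          np.log(counts), 1)
    return slope
```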
Yoo, Jejoong; Wilson, James; Aksimentiev, Aleksei
2016-10-01
Calcium ions (Ca(2+)) play key roles in various fundamental biological processes such as cell signaling and brain function. Molecular dynamics (MD) simulations have been used to study such interactions; however, the accuracy of the Ca(2+) models provided by the standard MD force fields has not been rigorously tested. Here, we assess the performance of the Ca(2+) models from the most popular classical force fields AMBER and CHARMM by computing the osmotic pressure of model compounds and the free energy of DNA-DNA interactions. In the simulations performed using the two standard models, Ca(2+) ions are seen to form artificial clusters with chloride, acetate, and phosphate species; the osmotic pressure of CaAc2 and CaCl2 solutions is a small fraction of the experimental values for both force fields. Using the standard parameterization of Ca(2+) ions in the simulations of Ca(2+)-mediated DNA-DNA interactions leads to qualitatively wrong outcomes: both AMBER and CHARMM simulations suggest strong inter-DNA attraction whereas, in experiment, DNA molecules repel one another. The artificial attraction of Ca(2+) to DNA phosphate is strong enough to affect the direction of the electric field-driven translocation of DNA through a solid-state nanopore. To address these shortcomings of the standard Ca(2+) model, we introduce a custom model of a hydrated Ca(2+) ion and show that using our model brings the results of the above MD simulations in quantitative agreement with experiment. Our improved model of Ca(2+) can be readily applied to MD simulations of various biomolecular systems, including nucleic acids, proteins and lipid bilayer membranes. © 2016 Wiley Periodicals, Inc. Biopolymers 105: 752-763, 2016.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ainsworth, Nathan; Hariri, Ali; Prabakar, Kumaraguru
Power hardware-in-the-loop (PHIL) simulation, where actual hardware under test is coupled with a real-time digital model in closed loop, is a powerful tool for analyzing new methods of control for emerging distributed power systems. However, without careful design and compensation of the interface between the simulated and actual systems, PHIL simulations may exhibit instability and modeling inaccuracies. This paper addresses issues that arise in the PHIL simulation of a hardware battery inverter interfaced with a simulated distribution feeder. Both the stability and accuracy issues are modeled and characterized, and a methodology for design of PHIL interface compensation to ensure stability and accuracy is presented. The stability and accuracy of the resulting compensated PHIL simulation is then shown by experiment.
Higashi, Hidenori; Tokumi, Takuya; Hogan, Christopher J; Suda, Hiroshi; Seto, Takafumi; Otani, Yoshio
2015-06-28
We use a combination of tandem ion mobility spectrometry (IMS-IMS, with differential mobility analyzers), molecular dynamics (MD) simulations, and analytical models to examine both neutral solvent (H2O) and ion (solvated Na(+)) evaporation from aqueous sodium chloride nanodrops. For experiments, nanodrops were produced via electrospray ionization (ESI) of an aqueous sodium chloride solution. Two nanodrops were examined in MD simulations: (1) a 2500 water molecule nanodrop with 68 Na(+) and 60 Cl(-) ions (an initial net charge of z = +8), and (2) a 1000 water molecule nanodrop with 65 Na(+) and 60 Cl(-) ions (an initial net charge of z = +5). Specifically, we used MD simulations to examine the validity of a model for the neutral evaporation rate incorporating both the Kelvin (surface curvature) and Thomson (electrostatic) influences, while both MD simulations and experimental measurements were compared to predictions of the ion evaporation rate equation of Labowsky et al. [Anal. Chim. Acta, 2000, 406, 105-118]. With a single fit parameter, we find excellent agreement between simulated and modeled neutral evaporation rates for nanodrops with solute volume fractions below 0.30. Similarly, MD simulation-inferred ion evaporation rates are in excellent agreement with predictions based on the Labowsky et al. equation. Measurements of the sizes and charge states of ESI generated NaCl clusters suggest that the charge states of these clusters are governed by ion evaporation; however, ion evaporation appears to have occurred with lower activation energies in experiments than was anticipated based on analytical calculations as well as MD simulations. Several possible reasons for this discrepancy are discussed.
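The "Kelvin (surface curvature) and Thomson (electrostatic) influences" on neutral evaporation are commonly combined in a Kelvin-Thomson expression for the equilibrium vapor pressure over a charged drop; one common form is reproduced below for orientation only (the exact model tested in the paper, and the Labowsky et al. ion-evaporation rate equation, should be taken from the cited sources).

```latex
% Kelvin-Thomson form for the vapor pressure over a charged droplet of
% radius r, surface tension \gamma, molecular volume v_l, charge q and
% relative permittivity \varepsilon_r (one common convention):
\ln\frac{p_r}{p_\infty} = \frac{v_l}{k_B T}
   \left[\frac{2\gamma}{r}
   - \frac{q^{2}}{32\pi^{2}\varepsilon_0 r^{4}}
     \left(1-\frac{1}{\varepsilon_r}\right)\right]
```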
Inter-Disciplinary Collaboration in Support of the Post-Standby TREAT Mission
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeHart, Mark; Baker, Benjamin; Ortensi, Javier
Although analysis methods have advanced significantly in the last two decades, high fidelity multi-physics methods for reactor systems have been under development for only a few years and are not presently mature or deployed. Furthermore, very few methods provide the ability to simulate rapid transients in three dimensions. Data for validation of advanced time-dependent multi-physics is sparse; at TREAT, historical data were not collected for the purpose of validating three-dimensional methods, let alone multi-physics simulations. Existing data continues to be collected to attempt to simulate the behavior of experiments and calibration transients, but it will be insufficient for the complete validation of analysis methods used for TREAT transient simulations. Hence, a 2018 restart will most likely occur without the direct application of advanced modeling and simulation methods. At present, the current INL modeling and simulation team plans to work with TREAT operations staff in performing reactor simulations with MAMMOTH, in parallel with the software packages currently being used in preparation for core restart (e.g., MCNP5, RELAP5, ABAQUS). The TREAT team has also requested specific measurements to be performed during startup testing, currently scheduled to run from February to August of 2018. These startup measurements will be crucial in validating the new analysis methods in preparation for ultimate application to TREAT operations and experiment design. This document describes the collaboration between modeling and simulation staff and the restart, operations, instrumentation and experiment development teams to be able to effectively interact and achieve successful validation work during restart testing.
The Impact of Different Absolute Solar Irradiance Values on Current Climate Model Simulations
NASA Technical Reports Server (NTRS)
Rind, David H.; Lean, Judith L.; Jonas, Jeffrey
2014-01-01
Simulations of the preindustrial and doubled CO2 climates are made with the GISS Global Climate Middle Atmosphere Model 3 using two different estimates of the absolute solar irradiance value: a higher value measured by solar radiometers in the 1990s and a lower value measured recently by the Solar Radiation and Climate Experiment. Each of the model simulations is adjusted to achieve global energy balance; without this adjustment the difference in irradiance produces a global temperature change of 0.48°C, comparable to the cooling estimated for the Maunder Minimum. The results indicate that by altering cloud cover the model properly compensates for the different absolute solar irradiance values on a global level when simulating both preindustrial and doubled CO2 climates. On a regional level, the preindustrial climate simulations and the patterns of change with doubled CO2 concentrations are again remarkably similar, but there are some differences. Using a higher absolute solar irradiance value and the requisite cloud cover affects the model's depictions of high-latitude surface air temperature, sea level pressure, and stratospheric ozone, as well as tropical precipitation. In the climate change experiments it leads to an underestimation of North Atlantic warming, reduced precipitation in the tropical western Pacific, and smaller total ozone growth at high northern latitudes. Although significant, these differences are typically modest compared with the magnitude of the regional changes expected for doubled greenhouse gas concentrations. Nevertheless, the model simulations demonstrate that achieving the highest possible fidelity when simulating regional climate change requires that climate models use as input the most accurate (lower) solar irradiance value.
Direct drive: Simulations and results from the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radha, P. B.; Hohenberger, M.; Edgell, D. H.
Here, the direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.
NASA Astrophysics Data System (ADS)
Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo
2014-01-01
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows one to predict the PDFs of scalar in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.
NASA Astrophysics Data System (ADS)
Crowther, Ashley R.; Singh, Rajendra; Zhang, Nong; Chapman, Chris
2007-10-01
Impulsive responses in geared systems with multiple clearances are studied when the mean torque excitation and system load change abruptly, with application to a vehicle driveline with an automatic transmission. First, torsional lumped-mass models of the planetary and differential gear sets are formulated using matrix elements. The model is then reduced to address tractable nonlinear problems while successfully retaining the main modes of interest. Second, numerical simulations of the nonlinear model are performed for transient conditions, and a typical driving situation that induces impulsive behaviour is simulated. However, initial conditions and excitation and load profiles have to be carefully defined before the model can be numerically solved. It is shown that impacts within the planetary or differential gears may occur under combinations of engine, braking and vehicle load transients. Our analysis shows that the shaping of the engine transient by the torque converter before reaching the clearance locations is more critical. Third, a free vibration experiment is developed for an analogous driveline with multiple clearances, and three experiments that excite different response regimes have been carried out. Good correlations validate the proposed methodology.
Comparison of hydrodynamic simulations with two-shockwave drive target experiments
NASA Astrophysics Data System (ADS)
Karkhanis, Varad; Ramaprabhu, Praveen; Buttler, William
2015-11-01
We consider hydrodynamic continuum simulations to mimic ejecta generation in two-shockwave target experiments, where a metallic surface is loaded by two successive shock waves. The time of the second shock in the simulations is determined to match experimental amplitudes at the arrival of the second shock. The negative Atwood number (A → −1) of the ejecta simulations leads to two successive phase inversions of the interface, corresponding to the passage of the shocks from heavy to light media in each instance. The metallic phase of the ejecta (solid or liquid) depends on the shock loading pressure in the experiment, and we find that hydrodynamic simulations quantify the liquid-phase ejecta physics with a fair degree of accuracy, where the RM instability is not suppressed by the strength effect. In particular, we find that our results for free surface velocity, maximum ejecta velocity, and maximum ejecta areal density are in excellent agreement with their experimental counterparts, as well as ejecta models. We also comment on the parametric space within which hydrodynamic simulations can be used for comparison with the target experiments.
Visual acuity with simulated and real astigmatic defocus.
Ohlendorf, Arne; Tabernero, Juan; Schaeffel, Frank
2011-05-01
To compare the effects of "simulated" and "real" spherical and astigmatic defocus on visual acuity (VA). VA was determined with letter charts that were blurred by calculated spherical or astigmatic defocus (simulated defocus) or were seen through spherical or astigmatic trial lenses (real defocus). Defocus was simulated using ZEMAX and the Liou-Brennan eye model. Nine subjects participated [mean age, 27.2 ± 1.8 years; logarithm of the minimum angle of resolution (logMAR), -0.1]. Three different experiments were conducted in which VA was reduced by 20% (logMAR 0.0), 50% (logMAR 0.2), or 75% (logMAR 0.5) by (1) imposing positive spherical defocus, (2) imposing positive and negative astigmatic defocus in three axes (0, 45, and 90°), and (3) imposing cross-cylinder defocus in the same three axes as in (2). Experiment (1): there were only minor differences in VA with simulated and real positive spherical defocus. Experiment (2): simulated astigmatic defocus reduced VA twice as much as real astigmatic defocus in all tested axes (p < 0.01 in all cases). Experiment (3): simulated cross-cylinder defocus reduced VA much more than real cross-cylinder defocus (p < 0.01 in all cases), similarly for all three tested axes. The visual system appears more tolerant of "real" spherical, astigmatic, and cross-cylinder defocus than of "simulated" blur. Possible reasons could be (1) limitations in the modeling procedures used to simulate defocus, (2) higher ocular aberrations, and (3) fluctuations of accommodation. However, the two optical explanations (2) and (3) cannot account for the magnitude of the effect, and (1) was carefully analyzed. It is proposed that something may be special about the visual processing of real astigmatic and cross-cylinder defocus, because they have less effect on VA than simulations predict.
Cardiovascular system simulation in biomedical engineering education.
NASA Technical Reports Server (NTRS)
Rideout, V. C.
1972-01-01
The use of complex cardiovascular system models, in conjunction with a large hybrid computer, in biomedical engineering courses is described. A cardiovascular blood pressure-flow model, driving a compartment model for the study of dye transport, was set up on the computer for use as a laboratory exercise by students who did not have the computer experience or skill to easily set up such a simulation, involving some 27 differential equations running at 'real time' rate. The students were given detailed instructions regarding the model, and were then able to study effects such as those due to septal and valve defects upon the pressure, flow, and dye dilution curves. The success of this experiment in the use of involved models in engineering courses suggests that this type of laboratory exercise might also be considered for use in physiology courses as an adjunct to animal experiments.
NASA Astrophysics Data System (ADS)
Colarco, P. R.; Gasso, S.; Jethva, H. T.; Buchard, V.; Ahn, C.; Torres, O.; daSilva, A.
2016-12-01
Output from the NASA Goddard Earth Observing System, version 5 (GEOS-5) Earth system model is used to simulate the top-of-atmosphere 354 and 388 nm radiances observed by the Ozone Monitoring Instrument (OMI) onboard the Aura spacecraft. The principal purpose of developing this simulator tool is to compute from the modeled fields the so-called OMI Aerosol Index (AI), which is a more fundamental retrieval product than higher level products such as the aerosol optical depth (AOD) or absorbing aerosol optical depth (AAOD). This lays the groundwork for eventually developing a capability to assimilate either the OMI AI or its radiances, which would provide a further constraint on aerosol loading and absorption properties for global models. We extend the use of the simulator capability to understand the nature of the OMI aerosol retrieval algorithms themselves in an Observing System Simulation Experiment (OSSE). The simulated radiances are used to calculate the AI from the modeled fields. These radiances are also provided to the OMI aerosol algorithms, which return their own retrievals of the AI, AOD, and AAOD. Our assessment reveals that the OMI-retrieved AI can be mostly harmonized with the model-derived AI given the same radiances, provided a common surface pressure field is assumed. This is important because the operational OMI algorithms presently assume a fixed pressure field, while the contribution of molecular scattering to the actual OMI signal in fact responds to the actual atmospheric pressure profile, which is accounted for in our OSSE by using GEOS-5 produced atmospheric reanalyses. Other differences between the model and OMI AI are discussed, and we present a preliminary assessment of the OMI AOD and AAOD products with respect to the known inputs from the GEOS-5 simulation.
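For reference, the UV Aerosol Index that the simulator computes from modeled radiances is a residue between measured and Rayleigh-calculated spectral contrast at the two OMI wavelengths; a minimal sketch of this commonly quoted form follows (the operational definition, sign convention and wavelength pair should be checked against the OMI algorithm documentation).

```python
import math

def uv_aerosol_index(i354_meas, i388_meas, i354_calc, i388_calc):
    """UV Aerosol Index in the usual residue form: difference between the
    measured and Rayleigh-calculated spectral contrast at 354 vs 388 nm.
    Conventions follow the common TOMS/OMI definition; treat as a sketch."""
    return -100.0 * (math.log10(i354_meas / i388_meas)
                     - math.log10(i354_calc / i388_calc))
```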
Virtual hydrology observatory: an immersive visualization of hydrology modeling
NASA Astrophysics Data System (ADS)
Su, Simon; Cruz-Neira, Carolina; Habib, Emad; Gerndt, Andreas
2009-02-01
The Virtual Hydrology Observatory will provide students with the ability to observe the integrated hydrology simulation with an instructional interface by using a desktop-based or immersive virtual reality setup. It is the goal of the virtual hydrology observatory application to facilitate the introduction of field experience and observational skills into hydrology courses through innovative virtual techniques that mimic activities during actual field visits. The simulation part of the application is developed from the integrated atmospheric forecast model, Weather Research and Forecasting (WRF), and the hydrology model, Gridded Surface/Subsurface Hydrologic Analysis (GSSHA). The output from the WRF and GSSHA models is then used to generate the final visualization components of the Virtual Hydrology Observatory. The visualization data processing techniques provided by VTK are 2D Delaunay triangulation and data optimization. Once all the visualization components are generated, they are integrated into the simulation data using the VRFlowVis and VR Juggler software toolkits. VR Juggler is used primarily to provide the Virtual Hydrology Observatory application with a fully immersive and real-time 3D interaction experience, while VRFlowVis provides the integration framework for the hydrologic simulation data, graphical objects and user interaction. A six-sided CAVE™-like system is used to run the Virtual Hydrology Observatory to provide the students with a fully immersive experience.
NASA Technical Reports Server (NTRS)
Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.; Martin, D. J., Jr.
1980-01-01
NASA's Langley Research Center conducted a simulation experiment to ascertain the comparative effects of motion cues (combinations of platform motion and g-seat normal acceleration cues) on compensatory tracking performance. In the experiment, a full six-degree-of-freedom YF-16 model was used as the simulated pursuit aircraft. The Langley Visual Motion Simulator (with in-house developed wash-out), and a Langley developed g-seat were principal components of the simulation. The results of the experiment were examined utilizing univariate and multivariate techniques. The statistical analyses demonstrate that the platform motion and g-seat cues provide additional information to the pilot that allows substantial reduction of lateral tracking error. Also, the analyses show that the g-seat cue helps reduce vertical error.
Factors affecting species distribution predictions: A simulation modeling experiment
Gordon C. Reese; Kenneth R. Wilson; Jennifer A. Hoeting; Curtis H. Flather
2005-01-01
Geospatial species sample data (e.g., records with location information from natural history museums or annual surveys) are rarely collected optimally, yet are increasingly used for decisions concerning our biological heritage. Using computer simulations, we examined factors that could affect the performance of autologistic regression (ALR) models that predict species...
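For context, the autologistic regression (ALR) mentioned above augments an ordinary logistic model of occurrence probability with a neighborhood autocovariate; a common form (notation here is illustrative, not taken from the study) is:

```latex
% Autologistic regression: presence probability at site i given its
% neighbors N(i); x_i are environmental covariates, y_j neighbor
% presences, w_ij neighborhood weights, beta and gamma coefficients.
\operatorname{logit} P\!\left(y_i = 1 \mid y_{N(i)}\right)
   = \mathbf{x}_i^{\top}\boldsymbol{\beta}
   + \gamma \sum_{j \in N(i)} w_{ij}\, y_j
```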
On the Nature of SEM Estimates of ARMA Parameters.
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2002-01-01
Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanov, Gennady; /Fermilab
CST Particle Studio combines electromagnetic field simulation, multi-particle tracking, adequate post-processing and an advanced probabilistic emission model, which is the most important new capability in multipactor simulation. The emission model includes in the simulation the stochastic properties of emission and adds primary electron elastic and inelastic reflection from the surfaces. Simulations of multipactor in coaxial waveguides have been performed to study the effects of these innovations on the multipactor threshold and the range over which multipactor can occur. The results, compared with available previous experiments and simulations, as well as the technique of MP simulation with CST PS, are presented and discussed.
NASA Astrophysics Data System (ADS)
Ghimire, S.; Choudhary, A.; Dimri, A. P.
2018-04-01
An analysis of regional climate simulations has been carried out to evaluate the ability of 11 Coordinated Regional Climate Downscaling Experiment South Asia (CORDEX-South Asia) experiments, along with their ensemble, to reproduce June-September (JJAS) precipitation over the Himalayan region. This suite of 11 combinations comes from 6 regional climate models (RCMs) driven by 10 sets of initial and boundary conditions from different global climate models, and is collectively referred to here as the 11 CORDEX South Asia experiments. All the RCMs use a similar domain and have a similar spatial resolution of 0.44° (about 50 km). The set of experiments is used to study the precipitation sensitivity associated with the Indian summer monsoon (ISM) over the study region. This effort is made because the ISM plays a vital role in summertime precipitation over the Himalayan region, which in turn sustains habitats, populations, crops, glaciers and hydrology. In addition, the summer monsoon precipitation climatology over the Himalayan region has so far not been studied with CORDEX data. This study is therefore initiated to evaluate the ability of the experiments and their ensemble to reproduce the characteristics of summer monsoon precipitation over the Himalayan region for the present climate (1970-2005). The precipitation climatology, annual precipitation cycles and interannual variabilities from each simulation have been assessed against the gridded observational dataset Asian Precipitation-Highly Resolved Observational Data Integration Towards the Evaluation of Water Resources (APHRODITE) for the given time period. Further, after selection of the better performing experiment, the frequency distribution of precipitation was also studied. An approach has also been made to study the degree of agreement among individual experiments as a way to quantify the uncertainty among them. Although the experiments show wide variation among themselves, and individually over time and space, in simulating the precipitation distribution over the study region, all simulations show a dry precipitation bias along the foothills of the Himalayas relative to the observations. In addition, towards higher elevation regions these experiments in general show a wet bias. The experiment driven by the EC-EARTH global climate model and downscaled using the Rossby Centre regional Atmospheric model version 4 developed by the Swedish Meteorological and Hydrological Institute (SMHI-RCA4) simulates precipitation in close correspondence with the observations. The ensemble outperforms the results of the individual experiments. Correspondingly, different kinds of statistical analysis, such as spatial and temporal correlation, Taylor diagrams, frequency distributions and scatter plots, have been performed to compare the model output with observations and to characterize the associated resemblance, robustness and dynamics statistically. Through the bias and ensemble spread analysis, an estimation of the uncertainty of the model fields and the degree of agreement among them has also been carried out. Overall, the study suggests that these experiments capture the precipitation evolution and structure over the Himalayan region with a certain degree of uncertainty.
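Several of the comparison metrics mentioned above (spatial/temporal correlation, Taylor diagram) reduce to a small set of statistics; the Python sketch below (illustrative only, array names hypothetical) computes the correlation, standard-deviation ratio and centered RMS error that a Taylor diagram summarizes.

```python
import numpy as np

def taylor_stats(model, obs):
    """Statistics underlying a Taylor diagram: correlation, standard-
    deviation ratio and centered RMS error between a model field and
    observations, both flattened over the same grid points / times."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    corr = np.corrcoef(model, obs)[0, 1]
    sd_ratio = model.std() / obs.std()
    crmse = np.sqrt(np.mean(((model - model.mean())
                             - (obs - obs.mean())) ** 2))
    return corr, sd_ratio, crmse
```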
On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method
Roux, Benoît; Weare, Jonathan
2013-01-01
An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140
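The maximum entropy result referred to above has a compact general form: the biased distribution is the unbiased one reweighted by exponential factors whose Lagrange multipliers are tuned to reproduce the experimental averages. A generic statement (not specific to the paper's derivation) is:

```latex
% Maximum-entropy biased ensemble: p_0(x) is the unbiased distribution,
% f_k(x) the observables with experimental targets f_k^exp, and the
% lambda_k are Lagrange multipliers determined self-consistently.
p(x) = \frac{1}{Z}\, p_0(x)\,
   \exp\!\Big(-\sum_k \lambda_k f_k(x)\Big),
\qquad
\langle f_k \rangle_{p} = f_k^{\mathrm{exp}}
```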
The western arctic linkage experiment (WALE): overview and synthesis
A.D. McGuire; J. Walsh; J.S. Kimball; J.S. Clein; S.E. Euskirdhen; S. Drobot; U.C. Herzfeld; J. Maslanik; R.B. Lammers; M.A. Rawlins; C.J. Vorosmarty; T.S. Rupp; W. Wu; M. Calef
2008-01-01
The primary goal of the Western Arctic Linkage Experiment (WALE) was to better understand uncertainties of simulated hydrologic and ecosystem dynamics of the western Arctic in the context of 1) uncertainties in the data available to drive the models and 2) different approaches to simulating regional hydrology and ecosystem dynamics. Analyses of datasets on climate...
Real-time simulation of three-dimensional shoulder girdle and arm dynamics.
Chadwick, Edward K; Blana, Dimitra; Kirsch, Robert F; van den Bogert, Antonie J
2014-07-01
Electrical stimulation is a promising technology for the restoration of arm function in paralyzed individuals. Control of the paralyzed arm under electrical stimulation, however, is a challenging problem that requires advanced controllers and command interfaces for the user. A real-time model describing the complex dynamics of the arm would allow user-in-the-loop type experiments where the command interface and controller could be assessed. Real-time models of the arm previously described have not included the ability to model the independently controlled scapula and clavicle, limiting their utility for clinical applications of this nature. The goal of this study therefore was to evaluate the performance and mechanical behavior of a real-time, dynamic model of the arm and shoulder girdle. The model comprises seven segments linked by eleven degrees of freedom and actuated by 138 muscle elements. Polynomials were generated to describe the muscle lines of action to reduce computation time, and an implicit, first-order Rosenbrock formulation of the equations of motion was used to increase simulation step-size. The model simulated flexion of the arm faster than real time, simulation time being 92% of actual movement time on standard desktop hardware. Modeled maximum isometric torque values agreed well with values from the literature, showing that the model simulates the moment-generating behavior of a real human arm. The speed of the model enables experiments where the user controls the virtual arm and receives visual feedback in real time. The ability to optimize potential solutions in simulation greatly reduces the burden on the user during development.
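The "implicit, first-order Rosenbrock formulation" mentioned above amounts to a linearly implicit Euler step, which is what allows large step sizes for stiff musculoskeletal dynamics; a minimal generic sketch follows (not the authors' implementation, and the example system is hypothetical).

```python
import numpy as np

def rosenbrock1_step(f, jac, x, h):
    """One first-order Rosenbrock (linearly implicit Euler) step for
    dx/dt = f(x): solve (I - h*J) dx = h*f(x), then x <- x + dx.
    f returns the state derivative, jac its Jacobian at x."""
    n = len(x)
    A = np.eye(n) - h * jac(x)
    dx = np.linalg.solve(A, h * f(x))
    return x + dx

# Hypothetical example: lightly damped oscillator x'' = -x - 0.1 x'
f = lambda s: np.array([s[1], -s[0] - 0.1 * s[1]])
jac = lambda s: np.array([[0.0, 1.0], [-1.0, -0.1]])
state = np.array([1.0, 0.0])
for _ in range(100):
    state = rosenbrock1_step(f, jac, state, 0.05)
print(state)
```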
NASA Astrophysics Data System (ADS)
Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.
2014-12-01
Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) has been developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been successfully implemented in the MMF, and two MMF experiments were carried out with this new scheme and the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments, with and without nudging large-scale forcings to those of the ERA-Interim reanalysis, were carried out to study the impacts of large-scale forcings. The model-simulated mean and variability of surface precipitation, cloud types, and cloud properties (such as cloud amount, hydrometeor vertical profiles, and cloud water content) in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and/or deficiencies of the MMF simulations and provide guidance on how to improve the MMF and its microphysics.
3D visualization of ultra-fine ICON climate simulation data
NASA Astrophysics Data System (ADS)
Röber, Niklas; Spickermann, Dela; Böttinger, Michael
2016-04-01
Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, ranging from global coupled climate simulations down to very detailed and high resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well for high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high resolution data. The ICON model has been used for eddy resolving (<10 km) ocean simulations, as well as for ultra-fine cloud resolving (120 m) atmospheric simulations. This results in very large 3D time-dependent multi-variate data that need to be displayed and analyzed. We have developed specific plugins for the freely available visualization software ParaView and Vapor, which allow us to read and handle such large data volumes. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore we discuss our current efforts to improve our visualization capabilities, thereby exploring the potential of regular in-situ visualization, as well as of in-situ compression / post visualization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.
2009-08-07
This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have: (1) developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2]; (2) updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas; (3) updated LSP to support the use of Prism’s multi-frequency opacity tables; (4) generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies; (5) developed and implemented parallel processing techniques for the radiation physics algorithms in LSP; (6) benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations; (7) performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments; (8) performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments; (9) updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output; (10) updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP; (11) updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables); and (12) developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments. A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.
NASA Astrophysics Data System (ADS)
Sugawara, D.; Imai, K.; Mitobe, Y.; Takahashi, T.
2016-12-01
Coastal lakes are one of the promising environments for identifying deposits of past tsunamis, and such deposits have been an important key to understanding the recurrence of tsunami events. In contrast to tsunami deposits on coastal plains, however, the relationship between deposit geometry and tsunami hydrodynamic character in coastal lakes has been poorly understood. Flume experiments and numerical modeling are important means to clarify this relationship. In this study, data from a series of flume experiments were compared with simulations by an existing tsunami sediment transport model to examine the applicability of the numerical model for tsunami-induced morphological change in a coastal lake. A coastal lake with a non-erodible beach ridge was modeled as the target geomorphology. The ridge separates the lake from the offshore part of the flume, and the lake bottom was filled with sand. A tsunami bore was generated by a dam-break flow, which is capable of generating a maximum near-bed flow speed of 2.5 m/s. Test runs with varying magnitude of the bore demonstrated that the duration of tsunami overflow controls the scouring depth of the lake bottom behind the ridge. The maximum scouring depth reached up to 7 cm, and sand deposition occurred mainly in the seaward half of the lake. A conventional depth-averaged tsunami hydrodynamic model coupled with the sediment transport model was used to compare the simulation and experimental results. In the simulation, the scouring depth behind the ridge reached up to 6 cm. In addition, the width of the scouring was consistent between the simulation and the experiment. However, sand deposition occurred mainly in a zone much farther from the ridge, showing a considerable deviation from the experimental results. This may be associated with the lack of model capability to resolve some important physics, such as vortex generation behind the ridge and shoreward migration of the hydraulic jump. In this presentation, the results from the flume experiment and the numerical modeling will be compared in detail, including the temporal evolution of the morphological change. In addition, model applicability and future improvements will be discussed.
Equatorial waves simulated by the NCAR community climate model
NASA Technical Reports Server (NTRS)
Cheng, Xinhua; Chen, Tsing-Chang
1988-01-01
The equatorial planetary waves simulated by the NCAR CCM1 general circulation model were investigated in terms of space-time spectral analysis (Kao, 1968; Hayashi, 1971, 1973) and energetic analysis (Hayashi, 1980). These analyses are particularly applied to grid-point data on latitude circles. In order to test some physical factors which may affect the generation of tropical transient planetary waves, three different model simulations with the CCM1 (the control, the no-mountain, and the no-cloud experiments) were analyzed.
2013-09-01
fraction of SRB could be active in O2 respiration, fermentation of organics, and even NO3- respiration. Therefore, the metabolic diversity of SRB... the case with PRB, which are able to reduce NO3- and ClO4-. To evaluate the model, we simulated effluent H2, UAP, and BAP concentrations, along with... (Figure 36: model-simulated concentrations of H2, UAP, and BAP in the effluent.)
NASA Astrophysics Data System (ADS)
Wang, Pinya; Tang, Jianping; Sun, Xuguang; Liu, Jianyong; Juan, Fang
2018-03-01
Using the Weather Research and Forecasting (WRF) model, this paper analyzes the spatiotemporal features of heat waves in 20-year regional climate simulations over East Asia, and investigates the capability of WRF to reproduce observational heat waves in China. Within the framework of the Coordinated Regional Climate Downscaling Experiment (CORDEX), the WRF model is driven by the ERA-Interim (ERAIN) reanalysis, and five continuous simulations are conducted from 1989 to 2008. Of these, four runs apply the interior spectral nudging (SN) technique with different wavenumbers, nudging variables and nudging coefficients. Model validations show that WRF can reasonably reproduce the spatiotemporal features of heat waves in China. Compared with the experiment without SN, the application of SN is effective in improving the skill of the model in simulating both the spatial distributions and temporal variations of heat waves of different intensities. The WRF model shows advantages in reproducing the synoptic circulations with SN and therefore yields better representations of heat wave events. Besides, the SN method is able to preserve the variability of large-scale circulations quite well, which in turn adjusts the extreme temperature variability towards the observation. Among the four SN experiments, those with stronger nudging coefficients perform better in modulating both the spatial and temporal features of heat waves. In contrast, smaller nudging coefficients weaken the effects of SN on improving WRF's performance.
Pai, Hsiang-Chu
2016-10-01
The use of clinical simulation in undergraduate nursing programs in Taiwan has gradually increased over the past 5 years. Previous research has shown that students' experience of anxiety during simulated laboratory sessions influences their self-reflection and learning effectiveness. Thus, further study that tracks what influences students' clinical performance in actual clinical sites is vital. The aim of the study is to develop an integrated model that considers the associations among anxiety, self-reflection, and learning effectiveness and to understand how this model applies to student nurses' clinical performance while on clinical placement. This study used a correlational and longitudinal study design. The 80 nursing students, who ranged in age from 19 to 21 (mean=20.38, SD=0.56), were recruited from a nursing school in southern Taiwan. Data were collected during three phases of implementation using four questionnaires. During the first phase, the State-Trait Anxiety Inventory (STAI), Simulation Learning Effectiveness Scale (SLES), and Self-Reflection and Insight Scale (SRIS) were used after students completed the simulation course in the school simulation laboratory. Nursing students also completed the Holistic Nursing Competence Scale at 2 months (Phase 2) and 4 months (Phase 3) after clinical practice experience. In Phase 3, students again completed the STAI and SRIS. Partial least squares (PLS), a structural equation modeling (SEM) procedure, was used to test the research model. The findings showed that: (1) at the start of the simulation laboratory, anxiety had a significant negative effect on students' simulation learning effectiveness (SLE; β=-0.14, p<0.05) and on self-reflection with insight (SRI; β=-0.52, p<0.01). Self-reflection also had a significant positive effect on simulation learning effectiveness (β=0.37, p<0.01). Anxiety had a significant negative effect on students' nursing competence during the first 2 months of practice in a clinical nursing site (β=-0.20, p<0.01). Simulation learning effectiveness and self-reflection and insight also had a significant positive effect on nursing competence during the first 2 months of practice in a clinical site (β=0.13; β=0.16, p<0.05), respectively; and (2) when students practice in a clinical setting, their previous experience of nursing competence during the first 2 months of clinical care and their self-reflection and insight have a significant positive effect on their 4-month nursing competence (β=0.58; β=0.27, p<0.01). Anxiety, however, had a negative effect on 4-month nursing competence but not significantly. Overall, 41% of the variance in clinical nursing performance was accounted for by the variables in the integrated model. This study highlights that self-reflection with insight and clinical experience may help students to deflect anxiety that may influence the development of clinical competence. Of note is that real-life clinical experience has a stronger effect on enhancing clinical performance than does a simulation experience. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mesoscale Simulation Data for Initializing Fast-Time Wake Transport and Decay Models
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.; Vanvalkenburg, Randal L.; Pruis, Mathew J.; LimonDuparcmeur, Fanny M.
2012-01-01
The fast-time wake transport and decay models require vertical profiles of crosswinds, potential temperature and the eddy dissipation rate as initial conditions. These inputs are normally obtained from various field sensors. In case of data-denied scenarios or operational use, these initial conditions can be provided by mesoscale model simulations. In this study, the vertical profiles of potential temperature from a mesoscale model were used as initial conditions for the fast-time wake models. The mesoscale model simulations were compared against available observations and the wake model predictions were compared with the Lidar measurements from three wake vortex field experiments.
NASA Technical Reports Server (NTRS)
Yasunari, Teppei J.; Colarco, Peter R.; Lau, William K. M.; Osada, Kazuo; Kido, Mizuka; Mahanama, Sarith P. P.; Kim, Kyu-Myong; Da Silva, Arlindo M.
2015-01-01
We compared the observed total dust deposition fluxes during precipitation (TDP), mainly at Toyama in Japan during January-April 2009, with results from four NASA GEOS-5 global model experiments. The modeled results were obtained from three previous experiments and one experiment carried out for this study, all driven by assimilated meteorology and simulating aerosol distributions for the time period. We focus mainly on the observations of two distinct TDP events at Toyama, Japan, reported in Osada et al. (2011), in February (Event B) and March 2009 (Event C). Although all of our GEOS-5 simulations captured aspects of the observed TDP, we found that our low-horizontal-resolution control experiment generally performed the worst. The other three experiments were run at a higher spatial resolution, with the first differing only in that respect from the control, the second additionally imposing a prescribed, corrected precipitation product, and the final experiment additionally assimilating aerosol optical depth based on MODIS observations. During Event C, the increased horizontal resolution increased TDP along with an increase in precipitation. There was no significant improvement, however, from imposing the corrected precipitation product. The simulation that incorporated aerosol data assimilation performed by far the best for this event, but even so could reproduce less than half of the observed TDP despite significantly increased atmospheric dust mass concentrations. All three high-resolution experiments had higher simulated precipitation at Toyama than was observed and than in the lower-resolution control run. During Event B, the aerosol data assimilation run did not perform appreciably better than the other higher-resolution simulations, suggesting that upstream conditions (i.e., upstream cloudiness), or vertical or horizontal misplacement of the dust plume, did not allow for significant improvement in the simulated aerosol distributions. Furthermore, a detailed comparison of observed hourly precipitation and surface particulate mass concentration data suggests that the observed TDP during Event B was highly dependent on short periods of weak precipitation correlated with elevated dust surface concentrations, important details possibly not captured well in a current global model.
Vibration modelling and verifications for whole aero-engine
NASA Astrophysics Data System (ADS)
Chen, G.
2015-08-01
In this study, a new rotor-ball-bearing-casing coupling dynamic model for a practical aero-engine is established. In the coupling system, the rotor and casing systems are modelled using the finite element method, support systems are modelled as lumped parameter models, nonlinear factors of ball bearings and faults are included, and four types of support and connection models are defined to model the complex rotor-support-casing coupling system of the aero-engine. A new numerical integral method that combines the Newmark-β method and the improved Newmark-β method (Zhai method) is used to obtain the system responses. Finally, the new model is verified in three ways: (1) a modal experiment based on a rotor-ball-bearing rig, (2) a modal experiment based on a rotor-ball-bearing-casing rig, and (3) fault simulations of the vibration of a certain type of missile turbofan aero-engine. The results show that the proposed model can not only simulate the natural vibration characteristics of the whole aero-engine but also effectively perform nonlinear dynamic simulations of a whole aero-engine with faults.
Extended MHD Effects in High Energy Density Experiments
NASA Astrophysics Data System (ADS)
Seyler, Charles
2016-10-01
The MHD model is the workhorse for computational modeling of HEDP experiments. Plasma models are inherently limited in scope, but MHD is expected to be a very good model for studying plasmas at the high densities attained in HEDP experiments. There are, however, important ways in which MHD fails to adequately describe the results, most notably the omission of the Hall term in Ohm's law (whose inclusion yields a form of extended MHD, or XMHD). This talk will discuss these failings by directly comparing simulations of MHD and XMHD for particularly relevant cases. The methodology is to simulate HEDP experiments using a Hall-MHD (HMHD) code based on a highly accurate and robust Discontinuous Galerkin method, and by comparison of HMHD to MHD draw conclusions about the impact of the Hall term. We focus on simulating two experimental pulsed power machines under various scenarios. We examine the MagLIF experiment on the Z-machine at Sandia National Laboratories and liner experiments on the COBRA machine at Cornell. For the MagLIF experiment we find that power flow in the feed leads to low density plasma ablation into the region surrounding the liner. The inflow of this plasma compresses axial magnetic flux onto the liner. In MHD this axial flux tends to resistively decay, whereas in HMHD a force-free current layer sustains the axial flux on the liner, leading to a larger ratio of axial to azimuthal flux. During the liner compression the magneto-Rayleigh-Taylor instability leads to helical perturbations due to minimization of field line bending. Simulations of a cylindrical liner using the COBRA machine parameters can under certain conditions exhibit amplification of an axial field due to a force-free low-density current layer separated by some distance from the liner. This results in a configuration in which there is predominantly axial field on the liner inside the current layer and azimuthal field outside the layer. We are currently attempting to experimentally verify the simulation results. Collaborator: Nathaniel D. Hamlin, School of Electrical and Computer Engineering, Cornell University, Ithaca, New York.
NASA Astrophysics Data System (ADS)
Shegog, Ross; Lazarus, Melanie M.; Murray, Nancy G.; Diamond, Pamela M.; Sessions, Nathalie; Zsigmond, Eva
2012-10-01
The transgenic mouse model is useful for studying the causes and potential cures for human genetic diseases. Exposing high school biology students to laboratory experience in developing transgenic animal models is logistically prohibitive. Computer-based simulation, however, offers this potential in addition to advantages of fidelity and reach. This study describes and evaluates a computer-based simulation that trains advanced placement high school science students in the laboratory protocols by which a transgenic mouse model is produced. A simulation module on preparing a gene construct in the molecular biology lab was evaluated using a randomized clinical control design with advanced placement high school biology students in Mercedes, Texas (n = 44). Pre-post tests assessed procedural and declarative knowledge, time on task, and attitudes toward computers for learning and towards science careers. Students who used the simulation increased their procedural and declarative knowledge regarding molecular biology compared to those in the control condition (both p < 0.005). Significant increases continued to occur with additional use of the simulation (p < 0.001). Students in the treatment group became more positive toward using computers for learning (p < 0.001). The simulation did not significantly affect attitudes toward science in general. Computer simulations of complex transgenic protocols have the potential to provide a "virtual" laboratory experience as an adjunct to conventional educational approaches.
Meso-macro simulation of the woven fabric local deformation in draping
NASA Astrophysics Data System (ADS)
Iwata, Akira; Inoue, Takuya; Naouar, Naim; Boisse, Philippe; Lomov, Stepan V.
2018-05-01
The paper reports results of combined meso-macro modelling for a plain-weave carbon fabric with spread yarns. The boundary conditions for a local meso-model are taken from the macro draping simulation. The fabric geometry is modelled with WiseTex and transferred to the finite element package. A hyperelastic constitutive model for the yarns (Charmetant - Boisse) is used in the meso-modelling; the model parameters are identified and validated in independent tension, shear, compaction and bending tests of the yarn and the fabric. The simulation reproduces local yarn slippage and buckling, for example the yarn distortion on the 3D mould corner. The simulations are compared with the local fabric distortions observed during draping experiments.
NASA Astrophysics Data System (ADS)
Madhulatha, A.; Rajeevan, M.; Bhowmik, S. K. Roy; Das, A. K.
2018-01-01
The primary goal of the present study is to investigate the impact of assimilation of conventional and satellite radiance observations in simulating the mesoscale convective system (MCS) formed over southeast India. An assimilation methodology based on the Weather Research and Forecasting model three-dimensional variational data assimilation is considered. A few numerical experiments are carried out to examine the individual and combined impact of conventional and non-conventional (satellite radiance) observations. After the successful inclusion of additional observations, strong analysis increments of temperature and moisture fields are noticed, contributing to significant improvement in the model's initial fields. The resulting model simulations are able to successfully reproduce the prominent synoptic features responsible for the initiation of the MCS. Among all the experiments, the final experiment, in which both conventional and satellite radiance observations are assimilated, showed considerable impact on the prediction of the MCS. The location, genesis, intensity, propagation and development of rain bands associated with the MCS are simulated reasonably well. The biases of simulated temperature, moisture and wind fields at the surface and different pressure levels are reduced. The thermodynamic, dynamic and vertical structure of convective cells associated with the passage of the MCS are well captured. The spatial distribution of rainfall is reasonably reproduced and comparable to TRMM observations. It is demonstrated that incorporation of conventional and satellite radiance observations improved the local and synoptic representation of temperature and moisture fields from the surface to different levels of the atmosphere. This study highlights the importance of assimilation of conventional and satellite radiances in improving the model's initial conditions and the simulation of the MCS.
Coarse-grained mechanics of viral shells
NASA Astrophysics Data System (ADS)
Klug, William S.; Gibbons, Melissa M.
2008-03-01
We present an approach for creating three-dimensional finite element models of viral capsids from atomic-level structural data (X-ray or cryo-EM). The models capture heterogeneous geometric features and are used in conjunction with three-dimensional nonlinear continuum elasticity to simulate nanoindentation experiments as performed using atomic force microscopy. The method is extremely flexible, able to capture varying levels of detail in the three-dimensional structure. Nanoindentation simulations are presented for several viruses: Hepatitis B, CCMV, HK97, and φ29. In addition to purely continuum elastic models, a multiscale technique is developed that combines finite-element kinematics with MD energetics such that large-scale deformations are facilitated by a reduction in degrees of freedom. Simulations of these capsid deformation experiments provide a testing ground for the techniques, as well as insight into the strength-determining mechanisms of capsid deformation. These methods can be extended as a framework for modeling other proteins and macromolecular structures in cell biology.
Deflagration to Detonation Transition (DDT) Simulations of HMX Powder Using the HERMES Model
NASA Astrophysics Data System (ADS)
White, Bradley; Reaugh, John; Tringe, Joseph
2017-06-01
We performed computer simulations of DDT experiments with Class I HMX powder using the HERMES model (High Explosive Response to MEchanical Stimulus) in ALE3D. Parameters for the model were fitted to the limited available mechanical property data of the low-density powder, and to the Shock to Detonation Transition (SDT) test results. The DDT tests were carried out in steel-capped polycarbonate tubes. This arrangement permits direct observation of the event using both flash X-ray radiography and high speed camera imaging, and provides a stringent test of the model. We found the calculated detonation transition to be qualitatively similar to experiment. Through simulation we also explored the effects of confinement strength, the HMX particle size distribution and porosity on the computed detonation transition location. This work was performed under the auspices of the US DOE by LLNL under Contract DE-AC52-07NA27344.
Use of a computer model in the understanding of erythropoietic control mechanisms
NASA Technical Reports Server (NTRS)
Dunn, C. D. R.
1978-01-01
During an eight-week visit, approximately 200 simulations using the computer model for the regulation of erythropoiesis were carried out in four general areas, including simulation of hypoxia and dehydration with the human model and evaluation of the simulation of dehydration using the mouse model. The experiments led to two considerations for the models: firstly, a direct relationship between erythropoietin concentration and bone marrow sensitivity to the hormone and, secondly, a partial correction of tissue hypoxia prior to compensation by an increased hematocrit. This latter change in particular produced a better simulation of the effects of hypoxia on plasma erythropoietin concentrations.
Test code for the assessment and improvement of Reynolds stress models
NASA Technical Reports Server (NTRS)
Rubesin, M. W.; Viegas, J. R.; Vandromme, D.; Minh, H. HA
1987-01-01
An existing two-dimensional, compressible flow, Navier-Stokes computer code, containing a full Reynolds stress turbulence model, was adapted for use as a test bed for assessing and improving turbulence models based on turbulence simulation experiments. To date, the results of using the code in comparison with simulated channel flow and with flow over an oscillating flat plate have shown that the turbulence model used in the code needs improvement for these flows. It is also shown that direct simulations of turbulent flows over a range of Reynolds numbers are needed to guide subsequent improvement of turbulence models.
ERIC Educational Resources Information Center
Dieckmann, Peter; Friis, Susanne Molin; Lippert, Anne; Ostergaard, Doris
2012-01-01
Introduction: This study describes (a) process goals, (b) success factors, and (c) barriers for optimizing simulation-based learning environments within the simulation setting model developed by Dieckmann. Methods: Seven simulation educators of different experience levels were interviewed using the Critical Incident Technique. Results: (a) The…
ERIC Educational Resources Information Center
Bautista, Nazan Uludag
2011-01-01
This study investigated the effectiveness of an Early Childhood Education science methods course that focused exclusively on providing various mastery (i.e., enactive, cognitive content, and cognitive pedagogical) and vicarious experiences (i.e., cognitive self-modeling, symbolic modeling, and simulated modeling) in increasing preservice…
Yin, J.; Haggerty, R.; Stoliker, D.L.; Kent, D.B.; Istok, J.D.; Greskowiak, J.; Zachara, J.M.
2011-01-01
In the 300 Area of a U(VI)-contaminated aquifer at Hanford, Washington, USA, inorganic carbon and major cations, which have large impacts on U(VI) transport, change on an hourly and seasonal basis near the Columbia River. Batch and column experiments were conducted to investigate the factors controlling U(VI) adsorption/desorption by changing chemical conditions over time. Low alkalinity and low Ca concentrations (Columbia River water) enhanced adsorption and reduced aqueous concentrations. Conversely, high alkalinity and high Ca concentrations (Hanford groundwater) reduced adsorption and increased aqueous concentrations of U(VI). An equilibrium surface complexation model calibrated using laboratory batch experiments accounted for the decrease in U(VI) adsorption observed with increasing (bi)carbonate concentrations and other aqueous chemical conditions. In the column experiment, alternating pulses of river and groundwater caused swings in aqueous U(VI) concentration. A multispecies multirate surface complexation reactive transport model simulated most of the major U(VI) changes in two column experiments. The modeling results also indicated that U(VI) transport in the studied sediment could be simulated by using a single kinetic rate without loss of accuracy in the simulations. Moreover, the capability of the model to predict U(VI) transport in Hanford groundwater under transient chemical conditions depends significantly on knowledge of the real-time changes in local groundwater chemistry. Copyright 2011 by the American Geophysical Union.
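As an illustration of the single-kinetic-rate simplification referred to above, the following minimal sketch (not the authors' code; all rate and partitioning values are hypothetical) integrates first-order kinetic U(VI) sorption in a well-mixed batch system:

    # Minimal sketch: first-order kinetic sorption toward local equilibrium.
    # Parameter values are hypothetical placeholders, not the Hanford calibration.
    import numpy as np
    from scipy.integrate import solve_ivp

    alpha = 0.05   # 1/h, mass-transfer rate coefficient (hypothetical)
    Kd = 2.0       # L/g, equilibrium distribution coefficient (hypothetical)
    m_V = 0.25     # g/L, sediment mass per volume of solution (hypothetical)

    def rates(t, y):
        C, S = y                      # C: aqueous U(VI) (ug/L), S: sorbed U(VI) (ug/g)
        dS = alpha * (Kd * C - S)     # relaxation toward S_eq = Kd * C
        dC = -m_V * dS                # mass balance: what sorbs leaves solution
        return [dC, dS]

    sol = solve_ivp(rates, (0.0, 200.0), [100.0, 0.0], t_eval=np.linspace(0.0, 200.0, 201))
    print(f"aqueous U(VI) after 200 h: {sol.y[0, -1]:.1f} ug/L")

A reactive transport code would embed the same rate expression in each grid cell of an advection-dispersion solver; the batch form above only illustrates the kinetic closure.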
Reshocks, rarefactions, and the generalized Layzer model for hydrodynamic instabilities
NASA Astrophysics Data System (ADS)
Mikaelian, Karnig O.
2009-02-01
We report numerical simulations and analytic modeling of shock tube experiments on Rayleigh-Taylor and Richtmyer-Meshkov instabilities. We examine single interfaces of the type A/B where the incident shock is initiated in A and the transmitted shock proceeds into B. Examples are He/air and air/He. In addition, we study finite-thickness or double-interface A/B/A configurations such as air/SF6/air gas-curtain experiments. We first consider conventional shock tubes that have a "fixed" boundary: a solid endwall which reflects the transmitted shock and reshocks the interface(s). Then we focus on new experiments with a "free" boundary: a membrane disrupted mechanically or by the transmitted shock, sending back a rarefaction toward the interface(s). Complex acceleration histories are achieved, relevant for inertial confinement fusion implosions. We compare our simulation results with a generalized Layzer model for two fluids with time-dependent densities and derive a new freeze-out condition whereby accelerating and compressive forces cancel each other out. Except for the recently reported failures of the Layzer model, the generalized Layzer model and hydrocode simulations for reshocks and rarefactions agree well with each other and remain to be verified experimentally.
Effects of Humidity Swings on Adsorption Columns for Air Revitalization: Modeling and Experiments
NASA Technical Reports Server (NTRS)
LeVan, M. Douglas; Finn, John E.
1997-01-01
The goal of this research was to develop a dynamic model which can predict the effect of humidity swings on activated carbon adsorption beds used to remove trace contaminants from the atmosphere in spacecraft. Specifically, the model was to be incorporated into a computer simulation to predict contaminant concentrations exiting the bed as a function of time after a humidity swing occurs. Predicted breakthrough curves were to be compared to experimentally measured results. In all respects the research was successful. The two major aspects of this research were the mathematical model and the experiments. Experiments were conducted by Mr. Appel using a fixed-bed apparatus at NASA-Ames Research Center during the summers of 1994 and 1995 and during the first 8 months of 1996. Mr. Appel conducted most of his mathematical modeling work at the University of Virginia. The simulation code was used to predict breakthrough curves using adsorption equilibrium correlations developed previously by M. D. LeVan's research group at the University of Virginia. These predictions were compared with the experimental measurements, and this led to improvements in both the simulation code and the apparatus.
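To make the idea of such a breakthrough prediction concrete, here is a minimal one-dimensional sketch with a linear-driving-force uptake term, in which the humidity swing is mimicked simply as a step reduction of the equilibrium capacity; it is a generic illustration under assumed parameter values, not the model developed in this work:

    # Minimal sketch: explicit upwind advection plus linear-driving-force (LDF)
    # uptake in a fixed bed; the humidity swing at t_swing lowers the capacity K.
    # All parameter values are hypothetical and dimensionally simplified.
    import numpy as np

    nz, L = 100, 0.1              # grid cells, bed length (m)
    dz = L / nz
    v, k_ldf = 0.05, 1e-2         # interstitial velocity (m/s), LDF coefficient (1/s)
    rho_b = 400.0                 # bulk sorbent density factor (kg/m^3)
    K_dry, K_humid = 1.0, 0.25    # capacity per unit gas concentration (m^3/kg)
    c_in, t_swing = 1.0, 600.0    # inlet concentration (arbitrary units), swing time (s)

    c, q = np.zeros(nz), np.zeros(nz)
    dt = 0.5 * dz / v             # CFL-limited time step
    outlet = []
    for step in range(int(2000.0 / dt)):
        t = step * dt
        K = K_dry if t < t_swing else K_humid          # humidity swing lowers capacity
        dq = k_ldf * (K * c - q)                       # uptake (or desorption) rate
        adv = -v * np.diff(np.concatenate(([c_in], c))) / dz
        c += dt * (adv - rho_b * dq)                   # gas-phase mass balance
        q += dt * dq                                   # solid-phase loading
        outlet.append(c[-1])
    print(f"outlet/inlet concentration at t = 2000 s: {outlet[-1] / c_in:.2f}")

After the swing the bed holds more contaminant than the new, lower capacity allows, so the outlet concentration can temporarily exceed the inlet value, which is the roll-up behavior such breakthrough experiments look for.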
Methods Data Qualification Interim Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Sam Alessi; Tami Grimmett; Leng Vang
The overall goal of the Next Generation Nuclear Plant (NGNP) Data Management and Analysis System (NDMAS) is to maintain data provenance for all NGNP data including the Methods component of NGNP data. Multiple means are available to access data stored in NDMAS. A web portal environment allows users to access data, view the results of qualification tests and view graphs and charts of various attributes of the data. NDMAS also has methods for the management of the data output from VHTR simulation models and data generated from experiments designed to verify and validate the simulation codes. These simulation models represent the outcome of mathematical representation of VHTR components and systems. The methods data management approaches described herein will handle data that arise from experiment, simulation, and external sources for the main purpose of facilitating parameter estimation and model verification and validation (V&V). A model integration environment entitled ModelCenter is used to automate the storing of data from simulation model runs to the NDMAS repository. This approach does not adversely change the way computational scientists conduct their work. The method is to be used mainly to store the results of model runs that need to be preserved for auditing purposes or for display to the NDMAS web portal. This interim report describes the current development of NDMAS for Methods data and discusses the data and data qualification that are currently part of NDMAS.
Interactive, graphical processing unit-based evaluation of evacuation scenarios at the state scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B
2011-01-01
In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementation on graphical processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.
Self-adaptive Fault-Tolerance of HLA-Based Simulations in the Grid Environment
NASA Astrophysics Data System (ADS)
Huang, Jijie; Chai, Xudong; Zhang, Lin; Li, Bo Hu
The objects of an HLA-based simulation can access model services to update their attributes. However, the grid server may be overloaded and refuse to let the model service handle object accesses. Because these objects accessed this model service during the last simulation loop and their intermediate state is stored on this server, such a refusal may terminate the simulation. A fault-tolerance mechanism must therefore be introduced into simulations. But traditional fault-tolerance methods cannot meet the above needs because the transmission latency between a federate and the RTI in a grid environment varies from several hundred milliseconds to several seconds. By adding model service URLs to the OMT and expanding the HLA services and model services with some interfaces, this paper proposes a self-adaptive fault-tolerance mechanism for simulations based on the characteristics of federates' access to model services. Benchmark experiments indicate that the expanded HLA/RTI can make simulations run self-adaptively in the grid environment.
Antoniotti, M; Park, F; Policriti, A; Ugel, N; Mishra, B
2003-01-01
The analysis of large amounts of data, produced as (numerical) traces of in vivo, in vitro and in silico experiments, has become a central activity for many biologists and biochemists. Recent advances in the mathematical modeling and computation of biochemical systems have moreover increased the prominence of in silico experiments; such experiments typically involve the simulation of sets of Differential Algebraic Equations (DAE), e.g., Generalized Mass Action systems (GMA) and S-systems. In this paper we reason about the necessary theoretical and pragmatic foundations for a query and simulation system capable of analyzing large amounts of such trace data. To this end, we propose to combine in a novel way several well-known tools from numerical analysis (approximation theory), temporal logic and verification, and visualization. The result is a preliminary prototype system: simpathica/xssys. When dealing with simulation data, simpathica/xssys exploits the special structure of the underlying DAE and reduces the search space in an efficient way so as to facilitate queries about the traces. The proposed system is designed to give the user the possibility to systematically analyze and simultaneously query different possible timed evolutions of the modeled system.
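As a toy illustration of this style of analysis (not simpathica/xssys itself; the trace and predicates below are invented), temporal-logic-like operators can be evaluated directly over a sampled simulation trace:

    # Minimal sketch: "always" and "eventually" queries over a numerical trace,
    # in the spirit of querying simulated time courses of a biochemical species.
    import numpy as np

    t = np.linspace(0.0, 10.0, 501)
    x = 2.0 * np.exp(-0.5 * t) + 0.1 * np.sin(3.0 * t)   # made-up species trace

    def always(pred, series):
        """True if the state predicate holds at every sampled time point."""
        return all(pred(v) for v in series)

    def eventually(pred, series):
        """True if the state predicate holds at some sampled time point."""
        return any(pred(v) for v in series)

    print(always(lambda v: v >= 0.0, x))       # does the concentration stay non-negative?
    print(eventually(lambda v: v < 0.2, x))    # does it ever drop below 0.2?

A full system adds richer operators, comparison across alternative timed evolutions, and indexing so that such queries remain fast on large trace collections.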
Sepkoski, David
2016-08-01
In a famous thought experiment, Stephen Jay Gould asked whether, if one could somehow rewind the history of life back to its initial starting point, the same results would obtain when the "tape" was run forward again. This hypothetical experiment is generally understood as a metaphor supporting Gould's philosophy of evolutionary contingency, which he developed and promoted from the late 1980s until his death in 2002. However, there was a very literal, non-metaphorical inspiration for Gould's thought experiment: since the early 1970s, Gould, along with a group of other paleontologists, was actively engaged in attempts to model and reconstruct the history of life using computer simulations and database analysis. These simulation projects not only demonstrate the impact that computers had on data analysis in paleontology, but also shed light on the close relationship between models and empirical data in data-oriented science. In a sense, I will argue, the models developed by paleontologists through simulation and quantitative analysis of the empirical fossil record in the 1970s and beyond were literal attempts to "replay life's tape" by reconstructing the history of life as data. Copyright © 2015 Elsevier Ltd. All rights reserved.
SmartEye and Polhemus data for vestibulo-ocular reflex and optokinetic reflex model.
Le, Anh Son; Aoki, Hirofumi
2018-06-01
This data article presents raw head and eye movement data collected with Polhemus (Polhemus Inc.) and SmartEye (Smart Eye AB) equipment. Subjects holding a driver's license participated in this experiment. The experiment was conducted with a driving simulator whose vehicle motion was controlled by CarSim (Mechanical Simulation Co., Ann Arbor, MI). The data set contains not only eye and head movement but also eye gaze, pupil diameter, saccades, and so on. It can be used for parameter identification of the vestibulo-ocular reflex (VOR) model, simulation of eye movement, as well as other analyses related to eye movement.
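A minimal sketch of the kind of VOR parameter identification such a dataset supports is shown below; the sampling rate, the synthetic signals and the two-step gain/latency estimate are illustrative assumptions, not the authors' processing pipeline:

    # Minimal sketch: estimate an ideal-VOR gain and latency from head-velocity
    # (e.g., Polhemus) and eye-velocity (e.g., SmartEye) series. Synthetic data.
    import numpy as np

    fs = 60.0                                        # Hz, assumed common sampling rate
    t = np.arange(0.0, 20.0, 1.0 / fs)
    head_vel = 30.0 * np.sin(2 * np.pi * 0.4 * t)    # deg/s, synthetic head yaw velocity
    rng = np.random.default_rng(1)
    eye_vel = -0.9 * np.interp(t - 0.05, t, head_vel) + rng.normal(0.0, 1.0, t.size)

    # Latency: lag (in samples) that maximizes correlation of eye with inverted head
    lags = np.arange(0, 31)
    xc = [np.corrcoef(eye_vel[k:k + 1000], -head_vel[:1000])[0, 1] for k in lags]
    delay_est = lags[int(np.argmax(xc))] / fs

    # Gain: least-squares scale factor after shifting head velocity by the latency
    head_shift = np.interp(t - delay_est, t, head_vel)
    gain_est = -np.dot(eye_vel, head_shift) / np.dot(head_shift, head_shift)
    print(f"estimated VOR gain ~ {gain_est:.2f}, latency ~ {delay_est * 1000:.0f} ms")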
Development of Ku-band rendezvous radar tracking and acquisition simulation programs
NASA Technical Reports Server (NTRS)
1986-01-01
The fidelity of the Space Shuttle Radar tracking simulation model was improved. The data from the Shuttle Orbiter Radar Test and Evaluation (SORTE) program experiments performed at the White Sands Missile Range (WSMR) were reviewed and analyzed. The selected flight rendezvous radar data was evaluated. Problems with the Inertial Line-of-Sight (ILOS) angle rate tracker were evaluated using the improved fidelity angle rate tracker simulation model.
Corrias, A.; Jie, X.; Romero, L.; Bishop, M. J.; Bernabeu, M.; Pueyo, E.; Rodriguez, B.
2010-01-01
In this paper, we illustrate how advanced computational modelling and simulation can be used to investigate drug-induced effects on cardiac electrophysiology and on specific biomarkers of pro-arrhythmic risk. To do so, we first perform a thorough literature review of proposed arrhythmic risk biomarkers from the ionic to the electrocardiogram levels. The review highlights the variety of proposed biomarkers, the complexity of the mechanisms of drug-induced pro-arrhythmia and the existence of significant animal species differences in drug-induced effects on cardiac electrophysiology. Predicting drug-induced pro-arrhythmic risk solely using experiments is challenging both preclinically and clinically, as attested by the rise in the cost of releasing new compounds to the market. Computational modelling and simulation has significantly contributed to the understanding of cardiac electrophysiology and arrhythmias over the last 40 years. In the second part of this paper, we illustrate how state-of-the-art open source computational modelling and simulation tools can be used to simulate multi-scale effects of drug-induced ion channel block in ventricular electrophysiology at the cellular, tissue and whole ventricular levels for different animal species. We believe that the use of computational modelling and simulation in combination with experimental techniques could be a powerful tool for the assessment of drug safety pharmacology. PMID:20478918
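One concrete and deliberately simple example of how a drug enters such simulations is conductance scaling by an IC50-based unblocked fraction (the standard simple pore-block assumption); the sketch below uses illustrative numbers and is not tied to any particular tool's API:

    # Minimal sketch: simple pore-block scaling of an ionic conductance.
    def blocked_conductance(g_max, drug_conc, ic50, hill=1.0):
        """Conductance remaining after block: g_max / (1 + ([D]/IC50)^h)."""
        return g_max / (1.0 + (drug_conc / ic50) ** hill)

    g_Kr = 0.153                              # mS/uF, illustrative baseline conductance
    for conc in (0.0, 10.0, 30.0, 100.0):     # nM, hypothetical drug concentrations
        print(conc, round(blocked_conductance(g_Kr, conc, ic50=30.0), 4))

The scaled conductance is then substituted into the cellular, tissue or whole-ventricle model and the pro-arrhythmic risk biomarkers are recomputed for the drugged case.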
NASA Astrophysics Data System (ADS)
Dobson, Patrick F.; Kneafsey, Timothy J.; Sonnenthal, Eric L.; Spycher, Nicolas; Apps, John A.
2003-05-01
Plugging of flow paths caused by mineral precipitation in fractures above the potential repository at Yucca Mountain, Nevada could reduce the probability of water seeping into the repository. As part of an ongoing effort to evaluate thermal-hydrological-chemical (THC) effects on flow in fractured media, we performed a laboratory experiment and numerical simulations to investigate mineral dissolution and precipitation under anticipated temperature and pressure conditions in the repository. To replicate mineral dissolution by vapor condensate in fractured tuff, water was flowed through crushed Yucca Mountain tuff at 94 °C. The resulting steady-state fluid composition had a total dissolved solids content of about 140 mg/l; silica was the dominant dissolved constituent. A portion of the steady-state mineralized water was flowed into a vertically oriented planar fracture in a block of welded Topopah Spring Tuff that was maintained at 80 °C at the top and 130 °C at the bottom. The fracture began to seal with amorphous silica within 5 days. A 1-D plug-flow numerical model was used to simulate mineral dissolution, and a similar model was developed to simulate the flow of mineralized water through a planar fracture, where boiling conditions led to mineral precipitation. Predicted concentrations of the major dissolved constituents for the tuff dissolution were within a factor of 2 of the measured average steady-state compositions. The mineral precipitation simulations predicted the precipitation of amorphous silica at the base of the boiling front, leading to a greater than 50-fold decrease in fracture permeability in 5 days, consistent with the laboratory experiment. These results help validate the use of a numerical model to simulate THC processes at Yucca Mountain. The experiment and simulations indicated that boiling and concomitant precipitation of amorphous silica could cause significant reductions in fracture porosity and permeability on a local scale. However, differences in fluid flow rates and thermal gradients between the experimental setup and anticipated conditions at Yucca Mountain need to be factored into scaling the results of the dissolution/precipitation experiments and associated simulations to THC models for the potential Yucca Mountain repository.
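The dissolution and precipitation kinetics in THC simulations of this kind are commonly written as a transition-state-theory rate law; in generic notation (not necessarily the exact formulation used in this study),

    r_m = k_m \, A_m \left( 1 - \frac{Q_m}{K_m} \right),

where r_m is the net dissolution rate of mineral m (negative for precipitation), k_m the rate constant, A_m the reactive surface area, and Q_m/K_m the saturation ratio. Boiling concentrates dissolved silica so that Q/K for amorphous silica exceeds unity, driving the precipitation that plugs the fracture in both the experiment and the simulation.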
NASA Astrophysics Data System (ADS)
Henneberg, Olga; Ament, Felix; Grützun, Verena
2018-05-01
Soil moisture amount and distribution control evapotranspiration and thus impact the occurrence of convective precipitation. Many recent model studies demonstrate that changes in initial soil moisture content result in modified convective precipitation. However, to quantify the resulting precipitation changes, the chaotic behavior of the atmospheric system needs to be considered. Slight changes in the simulation setup, such as the chosen model domain, also result in modifications to the simulated precipitation field. This causes an uncertainty due to stochastic variability, which can be large compared to effects caused by soil moisture variations. By shifting the model domain, we estimate the uncertainty of the model results. Our novel uncertainty estimate includes 10 simulations with shifted model boundaries and is compared to the effects on precipitation caused by variations in soil moisture amount and local distribution. With this approach, the influence of soil moisture amount and distribution on convective precipitation is quantified. Deviations in simulated precipitation can only be attributed to soil moisture impacts if the systematic effects of soil moisture modifications are larger than the inherent simulation uncertainty at the convection-resolving scale. We performed seven experiments with modified soil moisture amount or distribution to address the effect of soil moisture on precipitation. Each of the experiments consists of 10 ensemble members using the deep convection-resolving COSMO model with a grid spacing of 2.8 km. Only in experiments with very strong modification in soil moisture do precipitation changes exceed the model spread in amplitude, location or structure. These changes are caused by a 50 % soil moisture increase in either the whole or part of the model domain or by drying the whole model domain. Both increasing and decreasing soil moisture predominantly result in reduced precipitation rates. Replacing the soil moisture with realistic fields from different days has an insignificant influence on precipitation. The findings of this study underline the need for uncertainty estimates in soil moisture studies based on convection-resolving models.
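The attribution criterion used here, namely that the soil-moisture signal must exceed the domain-shift ensemble spread, can be written in a few lines; the precipitation numbers in the sketch below are made up purely to illustrate the comparison:

    # Minimal sketch: compare a soil-moisture-induced change in domain-mean
    # precipitation against the spread of a 10-member shifted-domain ensemble.
    import numpy as np

    rng = np.random.default_rng(0)
    control = 10.0 + rng.normal(0.0, 1.5, size=10)     # mm/day, shifted-domain control runs
    perturbed = 8.5 + rng.normal(0.0, 1.5, size=10)    # mm/day, e.g. +50% soil moisture runs

    effect = perturbed.mean() - control.mean()
    spread = control.std(ddof=1)
    print(f"effect = {effect:.2f} mm/day, ensemble spread = {spread:.2f} mm/day")
    print("attributable to soil moisture" if abs(effect) > spread else "within model spread")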
NASA Astrophysics Data System (ADS)
Venkataraman, Ajey; Shade, Paul A.; Adebisi, R.; Sathish, S.; Pilchak, Adam L.; Viswanathan, G. Babu; Brandes, Matt C.; Mills, Michael J.; Sangid, Michael D.
2017-05-01
Ti-7Al is a good model material for mimicking the α phase response of near-α and α+β phases of many widely used titanium-based engineering alloys, including Ti-6Al-4V. In this study, three model structures of Ti-7Al are investigated using atomistic simulations by varying the Ti and Al atom positions within the crystalline lattice. These atomic arrangements are based on transmission electron microscopy observations of short-range order. The elastic constants of the three model structures considered are calculated using molecular dynamics simulations. Resonant ultrasound spectroscopy experiments are conducted to obtain the elastic constants at room temperature and a good agreement is found between the simulation and experimental results, providing confidence that the model structures are reasonable. Additionally, energy barriers for crystalline slip are established for these structures by means of calculating the γ-surfaces for different slip systems. Finally, the positions of Al atoms with regard to solid solution strengthening are studied using density functional theory simulations, which demonstrate a higher energy barrier for slip when the Al solute atom is closer to (or at) the fault plane. These results provide quantitative insights into the deformation mechanisms of this alloy.
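For orientation, an effective elastic constant is usually extracted from such atomistic calculations by fitting stress against a set of small applied strains; the sketch below shows this generic step with synthetic stress values, not the Ti-7Al data of the study:

    # Minimal sketch: slope of stress versus small uniaxial strain gives an
    # effective elastic constant. Stress values are synthetic placeholders.
    import numpy as np

    strain = np.array([-0.004, -0.002, 0.0, 0.002, 0.004])   # applied e_xx
    stress = np.array([-0.68, -0.34, 0.0, 0.35, 0.69])       # sigma_xx in GPa (synthetic)

    C_eff = np.polyfit(strain, stress, 1)[0]                  # GPa
    print(f"effective elastic constant ~ {C_eff:.0f} GPa")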
Evaluation of weather-based rice yield models in India.
Sudharsan, D; Adinarayana, J; Reddy, D Raji; Sreenivas, G; Ninomiya, S; Hirafuji, M; Kiura, T; Tanaka, K; Desai, U B; Merchant, S N
2013-01-01
The objective of this study was to compare two different rice simulation models--standalone (Decision Support System for Agrotechnology Transfer [DSSAT]) and web-based (SImulation Model for RIce-Weather relations [SIMRIW])--with agrometeorological data and agronomic parameters for estimation of rice crop production in the southern semi-arid tropics of India. Studies were carried out on the BPT5204 rice variety to evaluate the two crop simulation models. Long-term experiments were conducted in a research farm of Acharya N G Ranga Agricultural University (ANGRAU), Hyderabad, India. Initially, the results were obtained using 4 years (1994-1997) of data with weather parameters from a local weather station to compare DSSAT-simulated results with observed values. Linear regression models used for the purpose showed a close relationship between DSSAT-simulated and observed yields. Subsequently, yield comparisons were also carried out with SIMRIW and DSSAT, and validated with actual observed values. Since the correlation coefficients of the SIMRIW simulations were within acceptable limits, further rice experiments in the monsoon (Kharif) and post-monsoon (Rabi) agricultural seasons (2009, 2010 and 2011) were carried out with a location-specific distributed sensor network system. These proximal systems help to simulate dry weight, leaf area index and potential yield with the Java-based SIMRIW on a daily/weekly/monthly/seasonal basis. These dynamic parameters are useful to the farming community for necessary decision making in a ubiquitous manner. However, SIMRIW requires fine tuning for better results/decision making.
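The model-versus-observation comparison described here amounts to a linear regression of simulated against observed yields; a minimal sketch with hypothetical yield values (not the ANGRAU trial data) is:

    # Minimal sketch: agreement statistics between simulated and observed yields.
    import numpy as np
    from scipy import stats

    observed = np.array([5.1, 4.6, 5.8, 5.3, 4.9, 6.0])    # t/ha, hypothetical
    simulated = np.array([5.3, 4.4, 5.6, 5.5, 5.0, 6.3])   # t/ha, hypothetical

    fit = stats.linregress(simulated, observed)
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    print(f"r = {fit.rvalue:.2f}, slope = {fit.slope:.2f}, RMSE = {rmse:.2f} t/ha")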
NASA Astrophysics Data System (ADS)
Chen, B.; Xu, X. Q.; Xia, T. Y.; Li, N. M.; Porkolab, M.; Edlund, E.; LaBombard, B.; Terry, J.; Hughes, J. W.; Ye, M. Y.; Wan, Y. X.
2018-05-01
The heat flux distributions on divertor targets in H-mode plasmas are serious concerns for future devices. We seek to simulate the tokamak boundary plasma turbulence and heat transport in the edge localized mode-suppressed regimes. The improved BOUT++ model shows that not only Ip but also the radial electric field Er plays an important role in the turbulence behavior and sets the heat flux width. Instead of calculating Er from the pressure gradient term (diamagnetic Er), it is calculated from the plasma transport equations with the sheath potential in the scrape-off layer and the plasma density and temperature profiles inside the separatrix from the experiment. The simulation results with the new Er model have better agreement with the experiment than those using the diamagnetic Er model: (1) The electromagnetic turbulence in enhanced Dα H-mode shows the characteristics of quasi-coherent modes (QCMs) and broadband turbulence. The mode spectra are in agreement with the phase contrast imaging data and show almost no change in comparison to the cases which use the diamagnetic Er model; (2) the self-consistent boundary Er is needed for the turbulence simulations to obtain a heat flux width consistent with the experiment; (3) the frequencies of the QCMs are proportional to Er, while the divertor heat flux widths are inversely proportional to Er; and (4) the BOUT++ turbulence simulations yield a heat flux width similar to the experimental Eich scaling law and the prediction from the Goldston heuristic drift model.
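For reference, the diamagnetic Er mentioned above is commonly taken from the ion radial force balance with the flow terms neglected (a standard expression, not necessarily the exact BOUT++ implementation):

    E_r \simeq \frac{1}{Z_i e \, n_i} \frac{\partial p_i}{\partial r},

with ion density n_i, ion pressure p_i and charge Z_i e; the alternative model used here instead solves for the potential from the plasma transport equations, with the sheath potential imposed in the scrape-off layer and measured density and temperature profiles inside the separatrix.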
NASA Astrophysics Data System (ADS)
Shen, Wenqiang; Tang, Jianping; Wang, Yuan; Wang, Shuyu; Niu, Xiaorui
2017-04-01
In this study, the characteristics of tropical cyclones (TCs) over the East Asia Coordinated Regional Downscaling Experiment domain are examined with the Weather Research and Forecasting (WRF) model. Eight 20-year (1989-2008) simulations are performed using the WRF model, with lateral boundary forcing from the ERA-Interim reanalysis, to test the sensitivity of TC simulation to interior spectral nudging (SN, including nudging time interval and nudging variables) and radiation schemes [Community Atmosphere Model (CAM), Rapid Radiative Transfer Model (RRTM)]. The simulated TCs are compared with the observations from the Regional Specialized Meteorological Centers TC best tracks. It is found that all WRF runs can simulate the climatology of key TC features such as the tracks and location/frequency of genesis reasonably well, and reproduce the inter-annual variations and seasonal cycle of TC counts. The SN runs produce enhanced TC activity compared to the runs without SN. The thermodynamic profile suggests that nudging with horizontal wind increases the instability of thermodynamic states in the tropics, which results in excessive TC genesis. The experiments with wind and temperature nudging reduce the overestimation of TC numbers, and in particular suppress TC intensification by correcting the thermodynamic profile. A weak SN coefficient enhances TC activity significantly even with wind and temperature nudging. The analysis of TC numbers and large-scale circulation shows that the SN parameters adopted in our experiments do not appear to suppress the formation of TCs. The excessive TC activity in the CAM runs relative to the RRTM runs is also due to the enhanced atmospheric instability.
NASA Technical Reports Server (NTRS)
Johnson, Daniel E.; Tao, W.-K.; Simpson, J.; Sui, C.-H.; Einaudi, Franco (Technical Monitor)
2001-01-01
Interactions between deep tropical clouds over the western Pacific warm pool and the larger-scale environment are key to understanding climate change. Cloud models are an extremely useful tool in simulating and providing statistical information on heat and moisture transfer processes between cloud systems and the environment, and can therefore be utilized to substantially improve cloud parameterizations in climate models. In this paper, the Goddard Cumulus Ensemble (GCE) cloud-resolving model is used in multi-day simulations of deep tropical convective activity over the Tropical Ocean-Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE). Large-scale temperature and moisture advective tendencies, and horizontal momentum from the TOGA-COARE Intensive Flux Array (IFA) region, are applied to the GCE version which incorporates cyclical boundary conditions. Sensitivity experiments show that grid domain size produces the largest response in domain-mean temperature and moisture deviations, as well as cloudiness, compared with grid horizontal or vertical resolution and advection scheme. It is found that a minimum grid-domain size of 500 km is needed to adequately resolve the convective cloud features. The control experiment shows that the atmospheric heating and moistening are primarily a response to cloud latent processes of condensation/evaporation and deposition/sublimation, and to a lesser extent, melting of ice particles. Air-sea exchange of heat and moisture is found to be significant, but of secondary importance, while the radiational response is small. The simulated rainfall and atmospheric heating and moistening agree well with observations and compare favorably with other models simulating this case.
FACE computer simulation [Flexible Arm Controls Experiment]
NASA Technical Reports Server (NTRS)
Sadeh, Willy Z.; Szmyd, Jeffrey A.
1990-01-01
A computer simulation of the FACE (Flexible Arm Controls Experiment) was conducted to assess its design for use in the Space Shuttle. The FACE is intended to be a 14-ft long articulated structure with 4 degrees of freedom, consisting of shoulder pitch and yaw, elbow pitch, and wrist pitch. The kinematics of the FACE were simulated to obtain data on arm operation, function, workspace, and interaction. Payload capture ability was modeled. The simulation indicates the capability for detailed kinematic simulation and payload capture ability analysis, and the feasibility of real-time simulation was determined. In addition, the potential for interactive real-time training through integration of the simulation with various interface controllers was revealed. At this stage, the flexibility of the arm was not yet considered.
Pazos, Valérie; Mongrain, Rosaire; Tardif, Jean-Claude
2010-06-01
Clinical studies on lipid-lowering therapy have shown that changing the composition of lipid pools significantly reduced the risk of cardiac events associated with plaque rupture. It has also been shown that changing the composition of the lipid pool affects its mechanical properties. However, knowledge about the mechanical properties of human atherosclerotic lesions remains limited due to the difficulty of the experiments. This paper aims to assess the feasibility of characterizing a lipid pool embedded in the wall of a pressurized vessel using finite-element simulations and an optimization algorithm. Finite-element simulations of inflation experiments were used together with a nonlinear least-squares algorithm to estimate the material model parameters of the wall and of the inclusion. An optimal fit of the simulated experiment and the real experiment was sought with the parameter estimation algorithm. The method was first tested on a single-layer polyvinyl alcohol (PVA) cryogel stenotic vessel, and then applied to a double-layered PVA cryogel stenotic vessel with a lipid inclusion.
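The parameter-estimation loop described above can be sketched as follows, with the finite-element inflation simulation replaced by a hypothetical analytic forward model so the example stays self-contained. The parameter names, mixing rule, pressures, and data are illustrative assumptions, not the authors' model or code.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, pressures):
    """Hypothetical stand-in for the finite-element inflation simulation:
    predicts outer-wall radial displacement for each inflation pressure."""
    stiffness_wall, stiffness_inclusion = params
    effective = stiffness_wall + 0.3 * stiffness_inclusion   # assumed mixing rule
    return pressures / effective                              # toy linear response

# "measured" displacements from the real inflation experiment (synthetic here)
pressures = np.linspace(5.0, 25.0, 10)          # kPa, illustrative
true_params = np.array([80.0, 20.0])
measured = forward_model(true_params, pressures) + 0.002 * np.random.randn(10)

# nonlinear least squares: minimize the misfit between simulated and measured response
residuals = lambda p: forward_model(p, pressures) - measured
fit = least_squares(residuals, x0=np.array([50.0, 50.0]), bounds=(1.0, 500.0))
print(fit.x)   # estimated wall and inclusion stiffness parameters
```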
NASA Astrophysics Data System (ADS)
Smith, S. L.; Chen, B.; Vallina, S. M.
2017-12-01
Biodiversity-Ecosystem Function (BEF) relationships, which are most commonly quantified in terms of productivity or total biomass yield, are known to depend on the timescale of the experiment or field study, both for terrestrial plants and phytoplankton, which have each been widely studied as model ecosystems. Although many BEF relationships are positive (i.e., increasing biodiversity enhances function), in some cases there is an optimal intermediate diversity level (i.e., a uni-modal relationship), and in other cases productivity decreases with certain measures of biodiversity. These differences in BEF relationships cannot be reconciled merely by differences in the timescale of experiments. We will present results from simulation experiments applying recently developed trait-based models of phytoplankton communities and ecosystems, using the `adaptive dynamics' framework to represent continuous distributions of size and other key functional traits. Controlled simulation experiments were conducted with different levels of phytoplankton size-diversity, which through trait-size correlations implicitly represents functional-diversity. One recent study applied a theoretical box model for idealized simulations at different frequencies of disturbance. This revealed how the shapes of BEF relationships depend systematically on the frequency of disturbance and associated nutrient supply. We will also present more recent results obtained using a trait-based plankton ecosystem model embedded in a three-dimensional ocean model applied to the North Pacific. This reveals essentially the same pattern in a spatially explicit model with more realistic environmental forcing. In the relatively more variable subarctic, productivity tends to increase with the size (and hence functional) diversity of phytoplankton, whereas productivity tends to decrease slightly with increasing size-diversity in the relatively calm subtropics. Continuous trait-based models can capture essential features of BEF relationships, while requiring far fewer calculations compared to typical plankton diversity models that explicitly simulate a great many idealized species.
Interim Service ISDN Satellite (ISIS) network model for advanced satellite designs and experiments
NASA Technical Reports Server (NTRS)
Pepin, Gerard R.; Hager, E. Paul
1991-01-01
The Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) Network Model for Advanced Satellite Designs and Experiments describes a model suitable for discrete event simulations. A top-down model design uses the Advanced Communications Technology Satellite (ACTS) as its basis. The ISDN modeling abstractions are added to permit the determination of performance for the NASA Satellite Communications Research (SCAR) Program.
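As a generic illustration of the discrete event simulation setting mentioned above, the sketch below shows a minimal event-queue kernel of the kind such a network model would run on. The event types, arrival rate, and service time are illustrative assumptions unrelated to the ACTS/ISIS model itself.

```python
import heapq, random

def run_des(arrival_rate=2.0, service_time=0.3, horizon=100.0):
    """Tiny discrete-event loop: packets arrive, queue, and are served in order."""
    events = [(random.expovariate(arrival_rate), "arrival")]
    queue, busy_until, served = 0, 0.0, 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            queue += 1
            heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
        if queue and busy_until <= t:            # start service if the server is idle
            queue -= 1
            busy_until = t + service_time
            served += 1
            heapq.heappush(events, (busy_until, "departure"))
    return served

print(run_des())
```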
Ontological and Epistemological Issues Regarding Climate Models and Computer Experiments
NASA Astrophysics Data System (ADS)
Vezer, M. A.
2010-12-01
Recent philosophical discussions (Parker 2009; Frigg and Reiss 2009; Winsberg 2009; Morgan 2002, 2003, 2005; Guala 2002) about the ontology of computer simulation experiments and the epistemology of inferences drawn from them are of particular relevance to climate science as computer modeling and analysis are instrumental in understanding climatic systems. How do computer simulation experiments compare with traditional experiments? Is there an ontological difference between these two methods of inquiry? Are there epistemological considerations that result in one type of inference being more reliable than the other? What are the implications of these questions with respect to climate studies that rely on computer simulation analysis? In this paper, I examine these philosophical questions within the context of climate science, instantiating concerns in the philosophical literature with examples found in analysis of global climate change. I concentrate on Wendy Parker’s (2009) account of computer simulation studies, which offers a treatment of these and other questions relevant to investigations of climate change involving such modelling. Two theses at the center of Parker’s account will be the focus of this paper. The first is that computer simulation experiments ought to be regarded as straightforward material experiments; which is to say, there is no significant ontological difference between computer and traditional experimentation. Parker’s second thesis is that some of the emphasis on the epistemological importance of materiality has been misplaced. I examine both of these claims. First, I inquire as to whether viewing computer and traditional experiments as ontologically similar in the way she does implies that there is no proper distinction between abstract experiments (such as ‘thought experiments’ as well as computer experiments) and traditional ‘concrete’ ones. Second, I examine the notion of materiality (i.e., the material commonality between object and target systems) and some arguments for the claim that materiality entails some inferential advantage to traditional experimentation. I maintain that Parker’s account of the ontology of computer simulations has some interesting though potentially problematic implications regarding conventional distinctions between abstract and concrete methods of inquiry. With respect to her account of materiality, I outline and defend an alternative account, posited by Mary Morgan (2002, 2003, 2005), which holds that ontological similarity between target and object systems confers some epistemological advantage to traditional forms of experimental inquiry.
NASA Astrophysics Data System (ADS)
Kim, Kunhwi; Rutqvist, Jonny; Nakagawa, Seiji; Birkholzer, Jens
2017-11-01
This paper presents coupled hydro-mechanical modeling of hydraulic fracturing processes in complex fractured media using a discrete fracture network (DFN) approach. The individual physical processes in the fracture propagation are represented by separate program modules: the TOUGH2 code for multiphase flow and mass transport based on the finite volume approach; and the rigid-body-spring network (RBSN) model for mechanical and fracture-damage behavior, which are coupled with each other. Fractures are modeled as discrete features, of which the hydrological properties are evaluated from the fracture deformation and aperture change. The verification of the TOUGH-RBSN code is performed against a 2D analytical model for single hydraulic fracture propagation. Subsequently, modeling capabilities for hydraulic fracturing are demonstrated through simulations of laboratory experiments conducted on rock-analogue (soda-lime glass) samples containing a designed network of pre-existing fractures. Sensitivity analyses are also conducted by changing the modeling parameters, such as viscosity of injected fluid, strength of pre-existing fractures, and confining stress conditions. The hydraulic fracturing characteristics attributed to the modeling parameters are investigated through comparisons of the simulation results.
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Choi, Sang H.; Chrisman, Dan A., Jr.; Samms, Richard W.
1987-01-01
Dynamic models and computer simulations were developed for the radiometric sensors utilized in the Earth Radiation Budget Experiment (ERBE). The models were developed to understand performance, improve measurement accuracy by updating model parameters and provide the constants needed for the count conversion algorithms. Model simulations were compared with the sensor's actual responses demonstrated in the ground and inflight calibrations. The models consider thermal and radiative exchange effects, surface specularity, spectral dependence of a filter, radiative interactions among an enclosure's nodes, partial specular and diffuse enclosure surface characteristics and steady-state and transient sensor responses. Relatively few sensor nodes were chosen for the models since there is an accuracy tradeoff between increasing the number of nodes and approximating parameters such as the sensor's size, material properties, geometry, and enclosure surface characteristics. Given that the temperature gradients within a node and between nodes are small enough, approximating with only a few nodes does not jeopardize the accuracy required to perform the parameter estimates and error analyses.
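A toy lumped-parameter version of the few-node sensor modeling idea described above: each node has a heat capacity and exchanges energy conductively with its neighbor and radiatively with an enclosure. The node count, capacities, conductance, emissivity-area products, and loads are illustrative assumptions, not ERBE values.

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA = 5.670e-8                        # Stefan-Boltzmann constant [W m-2 K-4]
C = np.array([20.0, 50.0])              # node heat capacities [J/K] (assumed)
G = 0.5                                 # conductive coupling between nodes [W/K] (assumed)
eps_area = np.array([1e-4, 2e-4])       # emissivity * radiating area [m2] (assumed)
T_enc = 250.0                           # enclosure temperature [K] (assumed)
absorbed = np.array([0.05, 0.0])        # incident radiative load on node 0 [W] (assumed)

def dTdt(t, T):
    cond = G * (T[::-1] - T)                        # conduction between the two nodes
    rad = eps_area * SIGMA * (T_enc**4 - T**4)      # radiative exchange with the enclosure
    return (absorbed + cond + rad) / C

sol = solve_ivp(dTdt, (0.0, 2000.0), y0=[300.0, 290.0], max_step=5.0)
print(sol.y[:, -1])        # node temperatures after the transient has decayed
```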
Upscaling pore pressure-dependent gas permeability in shales
NASA Astrophysics Data System (ADS)
Ghanbarian, Behzad; Javadpour, Farzam
2017-04-01
Upscaling pore pressure dependence of shale gas permeability is of great importance and interest in the investigation of gas production in unconventional reservoirs. In this study, we apply the Effective Medium Approximation, an upscaling technique from statistical physics, and modify the Doyen model for unconventional rocks. We develop an upscaling model to estimate the pore pressure-dependent gas permeability from pore throat size distribution, pore connectivity, tortuosity, porosity, and gas characteristics. We compare our adapted model with six data sets: three experiments, one pore-network model, and two lattice-Boltzmann simulations. Results showed that the proposed model estimated the gas permeability within a factor of 3 of the measurements/simulations in all data sets except the Eagle Ford experiment for which we discuss plausible sources of discrepancies.
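A generic sketch of the effective-medium step invoked above (a Kirkpatrick-style EMA for a network of pore-throat conductances), included only to illustrate the upscaling calculation. The coordination number, conductance distribution, and pressure correction are illustrative assumptions and not the authors' modified Doyen model.

```python
import numpy as np
from scipy.optimize import brentq

def ema_effective(g, z=6):
    """Solve sum_i (g_i - g_m) / (g_i + (z/2 - 1) g_m) = 0 for the effective
    conductance g_m of a random network with coordination number z."""
    f = lambda gm: np.sum((g - gm) / (g + (z / 2 - 1) * gm))
    return brentq(f, g.min() * 1e-6, g.max())

# pore-throat conductances drawn from a lognormal distribution (assumed),
# each scaled by a toy slip-flow-like pressure correction
rng = np.random.default_rng(0)
radii = rng.lognormal(mean=np.log(50e-9), sigma=0.5, size=5000)   # throat radii [m]
for p in [1e6, 5e6, 2e7]:                                         # pore pressures [Pa]
    slip = 1.0 + 1e-2 / (p * radii)     # illustrative pressure-dependent correction
    g = radii**4 * slip                 # Poiseuille-like conductance, arbitrary units
    print(p, ema_effective(g))
```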
Spin glass model for cell reprogramming
NASA Astrophysics Data System (ADS)
Pusuluri, Sai Teja; Castillo, Horacio E.
2014-03-01
Recent experiments show that differentiated cells can be reprogrammed to become pluripotent stem cells. The possible cell fates can be modeled as attractors in a dynamical system, the ``epigenetic landscape.'' Both cellular differentiation and reprogramming can be described in the landscape picture as motion from one attractor state to another attractor state. We use a simple model based on spin glass theory that can construct a simulated epigenetic landscape starting from the experimental genomic data. We modify the model to incorporate experimental reprogramming protocols. Our simulations successfully reproduce several reprogramming experiments. We probe the robustness of the results against random changes in the model, explore the importance of asymmetric interactions between transcription factors and study the importance of histone modification errors in reprogramming.
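A minimal Hopfield-style sketch of the spin-glass landscape idea described above: couplings are built from binary "cell type" patterns and asynchronous dynamics relaxes a perturbed state into the nearest attractor. The patterns, network size, and update rule are illustrative stand-ins, not the genome-derived landscape used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_types = 200, 3
xi = rng.choice([-1, 1], size=(n_types, N))        # stand-in "cell type" gene states
J = (xi.T @ xi) / N                                # Hebbian couplings from the patterns
np.fill_diagonal(J, 0.0)

state = xi[0].copy()
state[rng.choice(N, size=60, replace=False)] *= -1  # partially reprogrammed/perturbed cell

for _ in range(10):                                # asynchronous relaxation to an attractor
    for i in rng.permutation(N):
        state[i] = 1 if J[i] @ state >= 0 else -1

overlaps = xi @ state / N                          # which cell-fate attractor was reached?
print(overlaps)
```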
Is Water at the Graphite Interface Vapor-like or Ice-like?
Qiu, Yuqing; Lupi, Laura; Molinero, Valeria
2018-04-05
Graphitic surfaces are the main component of soot, a major constituent of atmospheric aerosols. Experiments indicate that soots of different origins display a wide range of abilities to heterogeneously nucleate ice. The ability of pure graphite to nucleate ice in experiments, however, seems to be almost negligible. Nevertheless, molecular simulations with the monatomic water model mW with water-carbon interactions parameterized to reproduce the experimental contact angle of water on graphite predict that pure graphite nucleates ice. According to classical nucleation theory, the ability of a surface to nucleate ice is controlled by the binding free energy between ice immersed in liquid water and the surface. To establish whether the discrepancy in freezing efficiencies of graphite in mW simulations and experiments arises from the coarse resolution of the model or can be fixed by reparameterization, it is important to elucidate the contributions of the water-graphite, water-ice, and ice-water interfaces to the free energy, enthalpy, and entropy of binding for both water and the model. Here we use thermodynamic analysis and free energy calculations to determine these interfacial properties. We demonstrate that liquid water at the graphite interface is not ice-like or vapor-like: it has similar free energy, entropy, and enthalpy as water in the bulk. The thermodynamics of the water-graphite interface is well reproduced by the mW model. We find that the entropy of binding between graphite and ice is positive and dominated, in both experiments and simulations, by the favorable entropy of reducing the ice-water interface. Our analysis indicates that the discrepancy in freezing efficiencies of graphite in experiments and the simulations with mW arises from the inability of the model to simultaneously reproduce the contact angle of liquid water on graphite and the free energy of the ice-graphite interface. This transferability issue is intrinsic to the resolution of the model, and arises from its lack of rotational degrees of freedom.
A simulation framework for the CMS Track Trigger electronics
NASA Astrophysics Data System (ADS)
Amstutz, C.; Magazzù, G.; Weber, M.; Palla, F.
2015-03-01
A simulation framework has been developed to test and characterize algorithms, architectures and hardware implementations of the vastly complex CMS Track Trigger for the high luminosity upgrade of the CMS experiment at the Large Hadron Collider in Geneva. High-level SystemC models of all system components have been developed to simulate a portion of the track trigger. The simulation of the system components together with input data from physics simulations allows evaluating figures of merit, like delays or bandwidths, under realistic conditions. The use of SystemC for high-level modelling allows co-simulation with models developed in Hardware Description Languages, e.g. VHDL or Verilog. Therefore, the simulation framework can also be used as a test bench for digital modules developed for the final system.
Millennial Climatic Fluctuations Are Key to the Structure of Last Glacial Ecosystems
Huntley, Brian; Allen, Judy R. M.; Collingham, Yvonne C.; Hickler, Thomas; Lister, Adrian M.; Singarayer, Joy; Stuart, Anthony J.; Sykes, Martin T.; Valdes, Paul J.
2013-01-01
Whereas fossil evidence indicates extensive treeless vegetation and diverse grazing megafauna in Europe and northern Asia during the last glacial, experiments combining vegetation models and climate models have to-date simulated widespread persistence of trees. Resolving this conflict is key to understanding both last glacial ecosystems and extinction of most of the mega-herbivores. Using a dynamic vegetation model (DVM) we explored the implications of the differing climatic conditions generated by a general circulation model (GCM) in “normal” and “hosing” experiments. Whilst the former approximate interstadial conditions, the latter, designed to mimic Heinrich Events, approximate stadial conditions. The “hosing” experiments gave simulated European vegetation much closer in composition to that inferred from fossil evidence than did the “normal” experiments. Given the short duration of interstadials, and the rate at which forest cover expanded during the late-glacial and early Holocene, our results demonstrate the importance of millennial variability in determining the character of last glacial ecosystems. PMID:23613985
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaRow, Timothy
The SSTs used in our study come from the Community Climate System Model version 4 (CCSM4) (Gent et al. 2011) and from the Canadian Centre for Climate Modeling and Analysis (CanESM2) (Chylek et al. 2011) climate models from the fifth Coupled Model Intercomparison Project (CMIP5) (Taylor et al. 2012). We have examined the tropical cyclones using both the historical simulation, which employs volcanic and aerosol forcing, and the representative concentration pathway 4.5 (RCP4.5). In addition, we have compared the present-day North Atlantic tropical cyclone metrics from a previous study (LaRow, 2013) to these climate change experiments. The experimental setup is shown in Table 1. We considered the CMIP5 experiment number '3.2 historical' (Taylor et al. 2011), which provides simulations of the recent past (1850-2005). The second set of CMIP5 SSTs is the RCP4.5 experiment, where the radiative forcing stabilizes at 4.5 W m-2 after 2100 (experiment number 4.1 in Taylor et al. 2011).
NASA Astrophysics Data System (ADS)
Javernick, Luke; Redolfi, Marco; Bertoldi, Walter
2018-05-01
New data collection techniques offer numerical modelers the ability to gather and utilize high quality data sets with high spatial and temporal resolution. Such data sets are currently needed for calibration, verification, and to fuel future model development, particularly morphological simulations. This study explores the use of high quality spatial and temporal data sets of observed bed load transport in braided river flume experiments to evaluate the ability of a two-dimensional model, Delft3D, to predict bed load transport. This study uses a fixed bed model configuration and examines the model's shear stress calculations, which are the foundation for predicting the sediment fluxes necessary for morphological simulations. The evaluation is conducted for three flow rates, and the model setup used highly accurate Structure-from-Motion (SfM) topography and discharge boundary conditions. The model was hydraulically calibrated using bed roughness, and performance was evaluated based on depth and inundation agreement. Model bed load performance was evaluated in terms of critical shear stress exceedance area compared to maps of observed bed mobility in a flume. Following the standard hydraulic calibration, bed load performance was tested for sensitivity to horizontal eddy viscosity parameterization and bed morphology updating. Simulations produced depth errors equal to the SfM inherent errors, inundation agreement of 77-85%, and critical shear stress exceedance in agreement with 49-68% of the observed active area. This study provides insight into the ability of physically based, two-dimensional simulations to accurately predict bed load, as well as the effects of horizontal eddy viscosity and bed updating. Further, this study highlights how using high spatial and temporal resolution data to capture the physical processes at work during flume experiments can help to improve morphological modeling.
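A sketch of the bed-load evaluation metric described above: the fraction of the observed active area in which the modeled bed shear stress exceeds a critical value. The arrays and the critical shear stress below are placeholders standing in for the Delft3D output and the flume mobility maps.

```python
import numpy as np

def exceedance_agreement(tau_model, observed_active, tau_crit=0.06):
    """Percent of the observed active area where modeled shear stress > tau_crit."""
    exceed = tau_model > tau_crit
    hits = np.logical_and(exceed, observed_active).sum()
    return 100.0 * hits / observed_active.sum()

# placeholder fields standing in for a modeled shear-stress map and an observed mobility map
rng = np.random.default_rng(0)
tau = rng.gamma(shape=2.0, scale=0.03, size=(200, 300))      # N m-2, illustrative
active = rng.random((200, 300)) < 0.3                        # observed mobile cells
print(exceedance_agreement(tau, active))
```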
Humidity Bias and Effect on Simulated Aerosol Optical Properties during the Ganges Valley Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Yan; Cadeddu, M.; Kotamarthi, V. R.
2016-07-10
The radiosonde humidity profiles available during the Ganges Valley Experiment were compared to those simulated from the regional Weather Research and Forecasting (WRF) model coupled with a chemistry module (WRF-Chem) and the global reanalysis datasets. Large biases were revealed. On a monthly mean basis at Nainital, located in northern India, the WRF-Chem model simulates a large moist bias in the free troposphere (up to +20%) as well as a large dry bias in the boundary layer (up to -30%). While the overall pattern of the biases is similar, the magnitude of the biases varies from time to time and from one location to another. At Thiruvananthapuram, the magnitude of the dry bias is smaller, and in contrast to Nainital, the higher-resolution regional WRF-Chem model generates larger moist biases in the upper troposphere than the global reanalysis data. Furthermore, the humidity biases in the upper troposphere, while significant, have little impact on the model estimation of column aerosol optical depth (AOD). The frequent occurrences of the dry boundary-layer bias simulated by the large-scale models tend to lead to the underestimation of AOD. It is thus important to quantify the humidity vertical profiles for aerosol simulations over South Asia.
Designing free energy surfaces that match experimental data with metadynamics.
White, Andrew D; Dama, James F; Voth, Gregory A
2015-06-09
Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. We previously introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. In this work, we introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. The example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.
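A toy illustration of the minimal-bias idea behind EDS (not the published algorithm or its metadynamics extension): a linear bias on a collective variable is adapted on the fly until the running average of a simulation matches a target "experimental" value. The potential, target value, and update rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
target = 0.8          # "experimental" average of the collective variable x (assumed)
alpha, lr, dt, kT = 0.0, 0.05, 1e-3, 1.0

def force(x, alpha):
    # double-well potential U(x) = (x^2 - 1)^2 plus linear bias alpha * x
    return -4.0 * x * (x**2 - 1.0) - alpha

x, xs = -1.0, []
for step in range(200000):            # overdamped Langevin dynamics
    x += force(x, alpha) * dt + np.sqrt(2 * kT * dt) * rng.standard_normal()
    xs.append(x)
    if step % 1000 == 999:            # periodically adapt the bias strength
        running_mean = np.mean(xs[-1000:])
        alpha += lr * (running_mean - target)

print(np.mean(xs[-20000:]), alpha)    # the biased average should approach the target
```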
Experiments and Simulations of ITER-like Plasmas in Alcator C-Mod
DOE Office of Scientific and Technical Information (OSTI.GOV)
.R. Wilson, C.E. Kessel, S. Wolfe, I.H. Hutchinson, P. Bonoli, C. Fiore, A.E. Hubbard, J. Hughes, Y. Lin, Y. Ma, D. Mikkelsen, M. Reinke, S. Scott, A.C.C. Sips, S. Wukitch and the C-Mod Team
Alcator C-Mod is performing ITER-like experiments to benchmark and verify projections to 15 MA ELMy H-mode Inductive ITER discharges. The main focus has been on the transient ramp phases. The plasma current in C-Mod is 1.3 MA and the toroidal field is 5.4 T. Both Ohmic and ion cyclotron (ICRF) heated discharges are examined. Plasma current rampup experiments have demonstrated that (ICRF and LH) heating in the rise phase can save volt-seconds (V-s), as was predicted for ITER by simulations, but showed that the ICRF had no effect on the current profile versus Ohmic discharges. Rampdown experiments show an overcurrent in the Ohmic coil (OH) at the H to L transition, which can be mitigated by remaining in H-mode into the rampdown. Experiments have shown that when the EDA H-mode is preserved well into the rampdown phase, the density and temperature pedestal heights decrease during the plasma current rampdown. Simulations of the full C-Mod discharges have been done with the Tokamak Simulation Code (TSC), and the Coppi-Tang energy transport model is used with modified settings to provide the best fit to the experimental electron temperature profile. Other transport models have been examined also.
NASA Astrophysics Data System (ADS)
Jackson, Thomas; Jost, A. M.; Zhang, Ju; Sridharan, P.; Amadio, G.
2017-06-01
In this work we present three-dimensional mesoscale simulations of detonation initiation in energetic materials. We solve the reactive Euler equations, with the energy equation augmented by a power deposition term. The reaction rate at the mesoscale is modelled using a density-based kinetics scheme, adapted from standard Ignition and Growth models. The deposition term is based on previous results of simulations of pore collapse at the microscale, modelled at the mesoscale as hot-spots. We carry out three-dimensional mesoscale simulations of random packs of HMX crystals in a binder, and show that the transition between no-detonation and detonation depends on the number density of the hot-spots, the initial radius of the hot-spot, the post-shock pressure of an imposed shock, and the amplitude of the power deposition term. The trend observed in experiments, of transition at lower imposed-shock pressures for larger pore number densities, is reproduced. Initial attempts to improve the agreement between the simulations and experiments through calibration of various parameters are also presented.
Model-based surgical planning and simulation of cranial base surgery.
Abe, M; Tabuchi, K; Goto, M; Uchino, A
1998-11-01
Plastic skull models of seven individual patients were fabricated by stereolithography from three-dimensional data based on computed tomography bone images. Skull models were utilized for neurosurgical planning and simulation in the seven patients with cranial base lesions that were difficult to remove. Surgical approaches and areas of craniotomy were evaluated using the fabricated skull models. In preoperative simulations, hand-made models of the tumors, major vessels and nerves were placed in the skull models. Step-by-step simulation of surgical procedures was performed using actual surgical tools. The advantages of using skull models to plan and simulate cranial base surgery include a better understanding of anatomic relationships, preoperative evaluation of the proposed procedure, increased understanding by the patient and family, and improved educational experiences for residents and other medical staff. The disadvantages of using skull models include the time and cost of making the models. The skull models provide a more realistic tool that is easier to handle than computer-graphic images. Surgical simulation using models facilitates difficult cranial base surgery and may help reduce surgical complications.
Seth, Ajay; Sherman, Michael; Reinbolt, Jeffrey A.; Delp, Scott L.
2015-01-01
Movement science is driven by observation, but observation alone cannot elucidate principles of human and animal movement. Biomechanical modeling and computer simulation complement observations and inform experimental design. Biological models are complex and specialized software is required for building, validating, and studying them. Furthermore, common access is needed so that investigators can contribute models to a broader community and leverage past work. We are developing OpenSim, a freely available musculoskeletal modeling and simulation application and libraries specialized for these purposes, by providing: musculoskeletal modeling elements, such as biomechanical joints, muscle actuators, ligament forces, compliant contact, and controllers; and tools for fitting generic models to subject-specific data, performing inverse kinematics and forward dynamic simulations. OpenSim performs an array of physics-based analyses to delve into the behavior of musculoskeletal models by employing Simbody, an efficient and accurate multibody system dynamics code. Models are publicly available and are often reused for multiple investigations because they provide a rich set of behaviors that enables different lines of inquiry. This report will discuss one model developed to study walking and applied to gain deeper insights into muscle function in pathological gait and during running. We then illustrate how simulations can test fundamental hypotheses and focus the aims of in vivo experiments, with a postural stability platform and human model that provide a research environment for performing human posture experiments in silico. We encourage wide adoption of OpenSim for community exchange of biomechanical models and methods and welcome new contributors. PMID:25893160
Chow, Alexander K; Sherer, Benjamin A; Yura, Emily; Kielb, Stephanie; Kocjancic, Ervin; Eggener, Scott; Turk, Thomas; Park, Sangtae; Psutka, Sarah; Abern, Michael; Latchamsetty, Kalyan C; Coogan, Christopher L
2017-11-01
To evaluate the Urological resident's attitude and experience with surgical simulation in residency education using a multi-institutional, multi-modality model. Residents from 6 area urology training programs rotated through simulation stations in 4 consecutive sessions from 2014 to 2017. Workshops included GreenLight photovaporization of the prostate, ureteroscopic stone extraction, laparoscopic peg transfer, 3-dimensional laparoscopy rope pass, transobturator sling placement, intravesical injection, high definition video system trainer, vasectomy, and Urolift. Faculty members provided teaching assistance, objective scoring, and verbal feedback. Participants completed a nonvalidated questionnaire evaluating utility of the workshop and soliciting suggestions for improvement. Sixty-three of 75 participants (84%) (postgraduate years 1-6) completed the exit questionnaire. Median rating of exercise usefulness on a scale of 1-10 ranged from 7.5 to 9. On a scale of 0-10, cumulative median scores of the course remained high over 4 years: time limit per station (9; interquartile range [IQR] 2), faculty instruction (9, IQR 2), ease of use (9, IQR 2), face validity (8, IQR 3), and overall course (9, IQR 2). On multivariate analysis, there was no difference in rating of domains between postgraduate years. Sixty-seven percent (42/63) believe that simulation training should be a requirement of Urology residency. Ninety-seven percent (63/65) viewed the laboratory as beneficial to their education. This workshop model is a valuable training experience for residents. Most participants believe that surgical simulation is beneficial and should be a requirement for Urology residency. High ratings of usefulness for each exercise demonstrated excellent face validity provided by the course.
Chaibub Neto, Elias; Bare, J. Christopher; Margolin, Adam A.
2014-01-01
New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed to systematically and objectively evaluate competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Often times, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where “omics” features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting and our simulations corroborate well established results concerning the conditions under which each one of these methods is expected to perform best while providing several novel insights. PMID:25289666
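A condensed example of the kind of simulation study discussed above, comparing ridge, lasso, and elastic net on synthetic data with many more features than samples. The dimensions, sparsity, noise level, and regularization settings are illustrative choices, not the paper's factorial design.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n, p, k = 200, 2000, 20                      # samples, features, true nonzero effects
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:k] = rng.normal(0, 1, k)
y = X @ beta + rng.normal(0, 1.0, n)         # sparse linear signal plus noise
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

models = {
    "ridge": Ridge(alpha=10.0),
    "lasso": Lasso(alpha=0.1),
    "elastic net": ElasticNet(alpha=0.1, l1_ratio=0.5),
}
for name, m in models.items():               # out-of-sample predictive performance
    m.fit(X_tr, y_tr)
    print(name, round(r2_score(y_te, m.predict(X_te)), 3))
```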
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atamturktur, Sez; Unal, Cetin; Hemez, Francois
The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide the decision makers in the allocation of Nuclear Energy’s resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a core reactor cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework developed takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the source of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. Within this framework, the project team has focused on optimizing resource allocation for improving numerical models through further code development and experimentation. Related to further code development, we have developed a code prioritization index (CPI) for coupled numerical models. CPI is implemented to effectively improve the predictive capability of the coupled model by increasing the sophistication of constituent codes. In relation to designing new experiments, we investigated the information gained by the addition of each new experiment used for calibration and bias correction of a simulation model. Additionally, the variability of ‘information gain’ through the design domain has been investigated in order to identify the experiment settings where maximum information gain occurs and thus guide the experimenters in the selection of the experiment settings. This idea was extended to evaluate how the information gain from each experiment can be improved by intelligently selecting the experiments, leading to the development of the Batch Sequential Design (BSD) technique. Additionally, we evaluated the importance of sufficiently exploring the domain of applicability in experiment-based validation of high-consequence modeling and simulation by developing a new metric to quantify coverage. This metric has also been incorporated into the design of new experiments. Finally, we have proposed a data-aware calibration approach for the calibration of numerical models.
This new method considers the complexity of a numerical model (the number of parameters to be calibrated, parameter uncertainty, and form of the model) and seeks to identify the number of experiments necessary to calibrate the model based on the level of sophistication of the physics. The final component in the project team’s work to improve model calibration and validation methods is the incorporation of robustness to non-probabilistic uncertainty in the input parameters. This is an improvement to model validation and uncertainty quantification extending beyond the originally proposed scope of the project. We have introduced a new metric for incorporating the concept of robustness into experiment-based validation of numerical models. This project has accounted for the graduation of two Ph.D. students (Kendra Van Buren and Josh Hegenderfer) and two M.S. students (Matthew Egeberg and Parker Shields). One of the doctoral students is now working in the nuclear engineering field and the other one is a post-doctoral fellow at the Los Alamos National Laboratory. Additionally, two more Ph.D. students (Garrison Stevens and Tunc Kulaksiz) who are working towards graduation have been supported by this project.
NASA Astrophysics Data System (ADS)
Armand J, K. M.
2017-12-01
In this study, version 4 of the regional climate model (RegCM4) is used to perform a 6-year simulation, including one year of spin-up (from January 2001 to December 2006), over Central Africa using four convective schemes: the Emanuel scheme (MIT), the Grell scheme with the Arakawa-Schubert closure assumption (GAS), the Grell scheme with the Fritsch-Chappell closure assumption (GFC), and the Anthes-Kuo scheme (Kuo). We have investigated the ability of the model to simulate precipitation, surface temperature, wind, and aerosol optical depth. Emphasis in the model results was placed on the December-January-February (DJF) and July-August-September (JAS) periods. Two subregions have been identified for more specific analysis, namely: zone 1, which corresponds to the Sahel region, mainly classified as desert and steppe, and zone 2, which is a region spanning the tropical rain forest and is characterised by a bimodal rain regime. We found that regardless of the period or simulated parameter, the MIT scheme generally has a tendency to overestimate. The GAS scheme is more suitable for simulating the aforementioned parameters, as well as the diurnal cycle of precipitation, everywhere over the study domain irrespective of the season. In JAS, the model results are similar in the representation of the regional wind circulation. Apart from the MIT scheme, all the convective schemes give the same trends in aerosol optical depth simulations. An additional experiment reveals that the use of BATS instead of the Zeng scheme to calculate ocean fluxes appears to improve the quality of the model simulations.
Li, Tingting; Cheng, Zhengguo; Zhang, Le
2017-01-01
Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for ABM to obtain an appropriate estimation of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation by integrating the ABM and regression method under the framework of history matching is developed. A novel parameter estimation method that incorporates the experimental data for the simulator ABM during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, the dimension-reduced type generalized additive model (GAM) is employed to train a statistical regression model using the input and output data of the ABM, and it plays the role of an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimation of the model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy but also favorable computational efficiency. PMID:29194393
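A compact sketch of the history-matching loop described above, with a Gaussian-process regressor standing in for the GAM emulator and a one-parameter toy function standing in for the ABM. The simulator, implausibility cutoff, and parameter range are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

observed, obs_var = 2.5, 0.05**2            # "experimental" summary and its variance

def simulator(theta):
    """Toy stand-in for the agent-based model: maps a parameter to a summary output."""
    return np.sin(3.0 * theta) + theta**2

# 1) run the (expensive) simulator at a small design of parameter values
design = np.linspace(0.0, 2.0, 15).reshape(-1, 1)
runs = np.array([simulator(t[0]) for t in design])

# 2) train a cheap emulator on the simulator runs (GP here as a stand-in for the GAM)
emulator = GaussianProcessRegressor(normalize_y=True).fit(design, runs)

# 3) implausibility: discard parameter values the emulator says cannot match the data
candidates = np.linspace(0.0, 2.0, 2000).reshape(-1, 1)
mean, std = emulator.predict(candidates, return_std=True)
implausibility = np.abs(mean - observed) / np.sqrt(std**2 + obs_var)
non_implausible = candidates[implausibility < 3.0]
print(non_implausible.min(), non_implausible.max())   # surviving parameter range
```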
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan
2015-02-16
CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ), and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA Input into an XML file that is used as input to different VERA codes.
MHD Simulations of Plasma Dynamics with Non-Axisymmetric Boundaries
NASA Astrophysics Data System (ADS)
Hansen, Chris; Levesque, Jeffrey; Morgan, Kyle; Jarboe, Thomas
2015-11-01
The arbitrary geometry, 3D extended MHD code PSI-TET is applied to linear and non-linear simulations of MCF plasmas with non-axisymmetric boundaries. Progress and results from simulations on two experiments will be presented: 1) Detailed validation studies of the HIT-SI experiment with self-consistent modeling of plasma dynamics in the helicity injectors. Results will be compared to experimental data and NIMROD simulations that model the effect of the helicity injectors through boundary conditions on an axisymmetric domain. 2) Linear studies of HBT-EP with different wall configurations focusing on toroidal asymmetries in the adjustable conducting wall. HBT-EP studies the effect of active/passive stabilization with an adjustable ferritic wall. Results from linear verification and benchmark studies of ideal mode growth with and without toroidal asymmetries will be presented and compared to DCON predictions. Simulations of detailed experimental geometries are enabled by use of the PSI-TET code, which employs a high order finite element method on unstructured tetrahedral grids that are generated directly from CAD models. Further development of PSI-TET will also be presented including work to support resistive wall regions within extended MHD simulations. Work supported by DoE.
ERT to aid in WSN based early warning system for landslides
NASA Astrophysics Data System (ADS)
T, H.
2017-12-01
Amrita University's landslide monitoring and early warning system using Wireless Sensor Networks (WSN) consists of heterogeneous sensors such as rain gauges, moisture sensors, piezometers, geophones, inclinometers, tilt meters, etc. The information from these sensors is accurate but limited to the point of measurement. In order to monitor a large area, electrical resistivity tomography (ERT) can be used in conjunction with WSN technology. To assess the feasibility of ERT in landslide early warning along with WSN technology, we have conducted experiments in Amrita's landslide laboratory setup. The experiment was aimed at simulating a landslide and monitoring the changes occurring in the soil using a moisture sensor and ERT. Estimating moisture values from resistivity measurements with greater accuracy can help in landslide monitoring over large areas. To accomplish this, we have adopted two mathematical approaches: 1) regression analysis between resistivity measurements and actual moisture values from the moisture sensor, and 2) using the Waxman-Smits model to estimate moisture values from resistivity measurements. The moisture values simulated from the Waxman-Smits model are compared with the actual moisture values, and the Mean Square Error (MSE) is found to be 46.33. A regression curve is drawn for resistivity versus the moisture values simulated from the Waxman-Smits model and compared with the regression curve of the actual model, as shown in figure-1. From figure-1, it is clear that the regression curve from actual moisture values and the regression curve from simulated moisture values follow a similar pattern, with only a small difference between them. Moisture values can be estimated with greater accuracy using the actual regression equation, but the limitation is that regression curves will differ for different sites and different soils. The regression equation from actual moisture values can be used if we have conducted a laboratory experiment for a particular soil sample; otherwise, with knowledge of the soil properties, the Waxman-Smits model can be used to estimate moisture values. The promising results assure that, when ERT measurements are used in conjunction with the WSN technique, vital parameters triggering landslides, such as moisture, can be estimated over a large area, which will help in providing early warning for large areas.
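A sketch of the first (regression) approach described above: fit a resistivity-moisture relationship from co-located sensor data, use it to predict moisture from new resistivity readings, and report the mean square error. The synthetic data and the power-law form of the fit are illustrative assumptions, not the field calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

# co-located training data: soil resistivity [ohm m] and volumetric moisture [%]
resistivity = np.array([220., 160., 120., 95., 70., 55., 42., 35.])   # illustrative
moisture = np.array([8., 12., 16., 20., 25., 30., 36., 40.])          # illustrative

# empirical power-law regression theta = a * rho^b (assumed functional form)
power_law = lambda rho, a, b: a * rho**b
(a, b), _ = curve_fit(power_law, resistivity, moisture, p0=(100.0, -0.5))

new_rho = np.array([180., 80., 50.])            # fresh ERT-derived resistivities
print(power_law(new_rho, a, b))                 # predicted moisture values

# mean square error on the training data, analogous to the comparison in the text
mse = np.mean((power_law(resistivity, a, b) - moisture)**2)
print(mse)
```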
NASA Astrophysics Data System (ADS)
Frenkel, Daan
2007-03-01
During the past decade there has been a unique synergy between theory, experiment and simulation in Soft Matter Physics. In colloid science, computer simulations that started out as studies of highly simplified model systems, have acquired direct experimental relevance because experimental realizations of these simple models can now be synthesized. Whilst many numerical predictions concerning the phase behavior of colloidal systems have been vindicated by experiments, the jury is still out on others. In my talk I will discuss some of the recent technical developments, new findings and open questions in computational soft-matter science.
Simulations in support of the T4B experiment
NASA Astrophysics Data System (ADS)
Qerushi, Artan; Ross, Patrick; Lohff, Chriss; Raymond, Anthony; Montecalvo, Niccolo
2017-10-01
Simulations in support of the T4B experiment are presented. These include a Grad-Shafranov equilibrium solver and equilibrium reconstruction from flux-loop measurements, collision radiative models for plasma spectroscopy (determination of electron density and temperature from line ratios) and fast ion test particle codes for neutral beam - plasma coupling. ©2017 Lockheed Martin Corporation. All Rights Reserved.
Sample Analysis at Mars Instrument Simulator
NASA Technical Reports Server (NTRS)
Benna, Mehdi; Nolan, Tom
2013-01-01
The Sample Analysis at Mars Instrument Simulator (SAMSIM) is a numerical model dedicated to plan and validate operations of the Sample Analysis at Mars (SAM) instrument on the surface of Mars. The SAM instrument suite, currently operating on the Mars Science Laboratory (MSL), is an analytical laboratory designed to investigate the chemical and isotopic composition of the atmosphere and volatiles extracted from solid samples. SAMSIM was developed using Matlab and Simulink libraries of MathWorks Inc. to provide MSL mission planners with accurate predictions of the instrument electrical, thermal, mechanical, and fluid responses to scripted commands. This tool is a first example of a multi-purpose, full-scale numerical modeling of a flight instrument with the purpose of supplementing or even eliminating entirely the need for a hardware engineer model during instrument development and operation. SAMSIM simulates the complex interactions that occur between the instrument Command and Data Handling unit (C&DH) and all subsystems during the execution of experiment sequences. A typical SAM experiment takes many hours to complete and involves hundreds of components. During the simulation, the electrical, mechanical, thermal, and gas dynamics states of each hardware component are accurately modeled and propagated within the simulation environment at faster than real time. This allows the simulation, in just a few minutes, of experiment sequences that takes many hours to execute on the real instrument. The SAMSIM model is divided into five distinct but interacting modules: software, mechanical, thermal, gas flow, and electrical modules. The software module simulates the instrument C&DH by executing a customized version of the instrument flight software in a Matlab environment. The inputs and outputs to this synthetic C&DH are mapped to virtual sensors and command lines that mimic in their structure and connectivity the layout of the instrument harnesses. This module executes, and thus validates, complex command scripts prior to their up-linking to the SAM instrument. As an output, this module generates synthetic data and message logs at a rate that is similar to the actual instrument.
NASA Astrophysics Data System (ADS)
Mondal, P.; Krol, M.; Sleep, B. E.
2015-12-01
A wide variety of groundwater contaminants can be treated with nano-scale zero valent iron (nZVI). However, delivery of nZVI in the subsurface to the treatment zones is challenging as the bare nZVI particles have a higher tendency to agglomerate. The subsurface mobility of nZVI can be enhanced by stabilizing nZVI with polymer, such as carboxymethyl cellulose (CMC). In this study, numerical simulations were conducted to evaluate CMC stabilized nZVI transport behavior in porous media. The numerical simulations were based on a set of laboratory-scale transport experiments that were conducted in a two-dimensional water-saturated glass-walled sandbox (length - 55 cm; height - 45 cm; width - 1.4 cm), uniformly packed with silica sand. In the transport experiments: CMC stabilized nZVI and a non-reactive dye tracer Lissamine Green B (LGB) were used; water specific discharge and CMC concentration were varied; movements of LGB, and CMC-nZVI in the sandbox were tracked using a camera, a light source and a dark box. The concentrations of LGB, CMC, and CMC-nZVI at the sandbox outlet were analyzed. A 2D multiphase flow and transport model was applied to simulate experimental results. The images from LGB dye transport experiments were used to determine the pore water velocities and media permeabilities in various layers in the sand box. These permeability values were used in the subsequent simulations of CMC-nZVI transport. The 2D compositional simulator, modified to include colloid filtration theory (CFT), treated CMC as a solute and nZVI as a colloid. The simulator included composition dependent viscosity to account for CMC injection and mixing, and attachment efficiency as a fitting parameter for nZVI transport modeling. In the experiments, LGB and CMC recoveries were greater than 95%; however, CMC residence time was significantly higher than the LGB residence time and the higher CMC concentration caused higher pressure drops in the sandbox. The nZVI recovery was lower than 40% in all experiments. The simulation results were found to be in good agreement with the experimental results, implying that the compositional simulator including CFT-modified transport equations could be utilized for the estimation of CMC-stabilized nZVI transport in porous media and design of field scale implementations of CMC-nZVI for remediation.
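A short sketch of the colloid filtration theory (CFT) ingredient mentioned above: a first-order attachment rate built from the single-collector contact efficiency and an attachment efficiency treated as a fitting parameter. The parameter values below are illustrative, not those fitted in the study.

```python
import numpy as np

def cft_attachment_rate(porosity, grain_diameter, velocity, eta0, alpha):
    """Classical CFT first-order attachment rate constant k_att [1/s]:
    k_att = 3 (1 - n) v alpha eta0 / (2 d_c), with v the pore-water velocity."""
    return 3.0 * (1.0 - porosity) * velocity * alpha * eta0 / (2.0 * grain_diameter)

# illustrative silica-sand and injection conditions (assumed values)
k_att = cft_attachment_rate(porosity=0.38, grain_diameter=5e-4,
                            velocity=1e-4, eta0=0.01, alpha=0.1)

# resulting normalized breakthrough C/C0 after a given travel time through the box
t = np.linspace(0, 3600.0, 5)
print(k_att, np.exp(-k_att * t))
```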
Experiments in sensing transient rotational acceleration cues on a flight simulator
NASA Technical Reports Server (NTRS)
Parrish, R. V.
1979-01-01
Results are presented for two transient motion sensing experiments which were motivated by the identification of an anomalous roll cue (a 'jerk' attributed to an acceleration spike) in a prior investigation of realistic fighter motion simulation. The experimental results suggest the consideration of several issues for motion washout and challenge current sensory system modeling efforts. Although no sensory modeling effort is made, it is argued that such models must incorporate the ability to handle transient inputs of short duration (some of which are less than the accepted latency times for sensing), and must represent separate channels for rotational acceleration and velocity sensing.
Analysis of Road Network Pattern Considering Population Distribution and Central Business District
Zhao, Fangxia; Sun, Huijun; Wu, Jianjun; Gao, Ziyou; Liu, Ronghui
2016-01-01
This paper proposes a road network growing model that considers population distribution and central business district (CBD) attraction. In the model, the relative neighborhood graph (RNG) is introduced as the connection mechanism to capture the characteristics of road network topology. A simulation experiment is set up to illustrate the effects of population distribution and CBD attraction on the characteristics of the road network. Moreover, several topological attributes of the road network are evaluated using coverage, circuitness, treeness, and total length in the experiment. Finally, the suggested model is verified in simulations of the China and Beijing highway networks. PMID:26981857
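The relative neighborhood graph used as the connection mechanism has a compact definition: two sites are connected unless some third site is strictly closer to both of them than they are to each other. A brute-force sketch is given below; the coordinates are hypothetical and stand in for candidate settlement locations.

```python
import math
from itertools import combinations

def rng_edges(points):
    """Relative neighborhood graph: connect p and q unless some third point r
    is strictly closer to both p and q than they are to each other (O(n^3))."""
    d = math.dist
    edges = []
    for i, j in combinations(range(len(points)), 2):
        dij = d(points[i], points[j])
        blocked = any(max(d(points[i], points[k]), d(points[j], points[k])) < dij
                      for k in range(len(points)) if k not in (i, j))
        if not blocked:
            edges.append((i, j))
    return edges

# Hypothetical site coordinates (e.g., sampled from a population density map):
sites = [(0, 0), (2, 1), (4, 0), (1, 3), (3, 3)]
print(rng_edges(sites))
```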
An experimental method to verify soil conservation by check dams on the Loess Plateau, China.
Xu, X Z; Zhang, H W; Wang, G Q; Chen, S C; Dang, W Q
2009-12-01
A successful experiment with a physical model requires the necessary conditions of similarity. This study presents an experimental method with a semi-scale physical model. The model is used to monitor and verify soil conservation by check dams in a small watershed on the Loess Plateau of China. During experiments, the model-prototype ratio of geomorphic variables was kept constant under each rainfall event. Consequently, experimental data are available for verifying soil erosion processes in the field and for predicting soil loss in a model watershed with check dams; thus, the method can predict the amount of soil loss in a catchment. This study also presents four criteria: similarities of watershed geometry, grain size and bare land, Froude number (Fr) for rainfall events, and soil erosion in downscaled models. The efficacy of the proposed method was confirmed using these criteria in two different downscaled model experiments. The B-Model, a large-scale model, simulates the watershed prototype. The two small-scale models, D(a) and D(b), have different erosion rates but are the same size. These two models simulate the hydraulic processes in the B-Model. Experimental results show that when soil loss in the small-scale models was converted by multiplying by the soil-loss scale number, it was very close to that of the B-Model. Thus, with a semi-scale physical model, experiments can be used to verify and predict soil loss in a small watershed with a check dam system on the Loess Plateau, China.
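The Froude-number similarity criterion mentioned above fixes how velocities, times, and discharges scale when the watershed geometry is scaled down. A short sketch of the standard Froude relations is shown below; the numbers are illustrative only and are not taken from the study.

```python
import math

def froude(velocity_m_s, depth_m, g=9.81):
    """Froude number Fr = v / sqrt(g * h)."""
    return velocity_m_s / math.sqrt(g * depth_m)

def froude_scale_ratios(length_ratio):
    """Prototype-to-model ratios implied by Froude similarity
    for a geometric length ratio L_prototype / L_model."""
    return {
        "velocity": math.sqrt(length_ratio),
        "time": math.sqrt(length_ratio),
        "discharge": length_ratio ** 2.5,
    }

# Illustrative: a 1:10 downscaled model watershed
print(froude(0.8, 0.05), froude_scale_ratios(10.0))
```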
Numerical Simulation of the Perrin-Like Experiments
ERIC Educational Resources Information Center
Mazur, Zygmunt; Grech, Dariusz
2008-01-01
A simple model of the random Brownian walk of a spherical mesoscopic particle in viscous liquids is proposed. The model can be solved analytically and simulated numerically. The analytic solution gives the known Einstein-Smoluchowski diffusion law r² = 2Dt, where the diffusion constant D is expressed by the mass and geometry of a…
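The diffusion law quoted above can be checked with a few lines of stochastic simulation: for a one-dimensional random walk with Gaussian steps, the mean squared displacement grows as 2Dt, with D given by the Stokes-Einstein relation D = kT/(6*pi*eta*a). The parameter values below are illustrative (a roughly micron-sized sphere in water), not those of the original article.

```python
import numpy as np

kB = 1.380649e-23                    # Boltzmann constant, J/K
T, eta, a = 293.0, 1.0e-3, 0.5e-6    # temperature (K), water viscosity (Pa s), radius (m)
D = kB * T / (6 * np.pi * eta * a)   # Stokes-Einstein diffusion constant

dt, n_steps, n_particles = 0.01, 1000, 5000
rng = np.random.default_rng(0)
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_particles, n_steps))
x = np.cumsum(steps, axis=1)         # 1D trajectories
msd = (x[:, -1] ** 2).mean()         # mean squared displacement at t = n_steps * dt

print(f"D = {D:.3e} m^2/s,  <x^2> = {msd:.3e},  2Dt = {2 * D * n_steps * dt:.3e}")
```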
ERIC Educational Resources Information Center
Shegog, Ross; Lazarus, Melanie M.; Murray, Nancy G.; Diamond, Pamela M.; Sessions, Nathalie; Zsigmond, Eva
2012-01-01
The transgenic mouse model is useful for studying the causes and potential cures for human genetic diseases. Exposing high school biology students to laboratory experience in developing transgenic animal models is logistically prohibitive. Computer-based simulation, however, offers this potential in addition to advantages of fidelity and reach.…
Using STELLA Simulation Models to Teach Natural Resource Economics
ERIC Educational Resources Information Center
Dissanayake, Sahan T. M.
2016-01-01
In this article, the author discusses how graphical simulation models created using STELLA software can be used to present natural resource systems in an intuitive way in undergraduate natural resource economics classes based on his experiences at a leading research university, a state university, and a leading liberal arts college in the United…
A Theory for the Neural Basis of Language Part 2: Simulation Studies of the Model
ERIC Educational Resources Information Center
Baron, R. J.
1974-01-01
Computer simulation studies of the proposed model are presented. Processes demonstrated are (1) verbally directed recall of visual experience; (2) understanding of verbal information; (3) aspects of learning and forgetting; (4) the dependence of recognition and understanding on context; and (5) elementary concepts of sentence production. (Author)
Experimental and theoretical simulations of Titan's VUV photochemistry
NASA Astrophysics Data System (ADS)
Peng, Z.; Carrasco, N.; Pernot, P.
2013-12-01
A new reactor, named APSIS (Atmospheric Photochemistry SImulated by Synchrotron), has been designed to simulate planetary atmospheric photochemistry [Peng et al. JGR-E. 2013, 118, 778]. We report here a study focusing on Titan's upper atmosphere. A nitrogen-methane gas flow was irradiated by a continuous 60-350 nm VUV beam provided by the DISCO line at the SOLEIL synchrotron radiation facility. The production of C2-C4 hydrocarbons as well as several nitriles (HCN, CH3CN and C2N2) was detected by in situ mass spectrometry, in agreement with Cassini's INMS observations at Titan, and by ex situ GC-MS of a cryogenic experiment. We compared the mass spectra with those obtained by a plasma experiment [Carrasco et al. Icarus. 2012, 219, 230], with another synchrotron-based experiment [Imanaka and Smith. PNAS. 2010, 107, 12423], and with the in situ measurements of the INMS instrument onboard Cassini probing the neutral content of Titan's upper atmosphere. In spite of lower photochemical production efficiency and different environmental conditions, the APSIS reactor appears to simulate Titan's neutral composition rather well. To interpret these experimental data, we developed a fully coupled ion-neutral photochemical model of the reactor, with uncertainty management, based on the neutral model of Hébrard et al. [J. Photochem. Photobiol. A. 2006, 7, 211], the ion chemistry model of Plessis et al. [J. Chem. Phys. 2010, 133, 134110], and a new representation of photolysis cross-sections and branching ratios [Gans et al. Icarus. 2013, 223, 330]. The predicted production in Cn blocks is in good agreement with the measurements. Ion chemistry and the full dissociative recombination scheme were shown to be important features of the model, and sensitivity analysis confirmed that photolysis is globally influential. We observed the importance of the addition of small (C1 or C2) units in molecular growth, as well as three growth families, promoted by C2H2, C2H4 and C2H5/C2H6, respectively. Among the three, the C2H2 family, in which the growth pathways of unsaturated species via ion chemistry are the most efficient, is clearly prominent. Our model was also used to interpret the INMS data and Imanaka and Smith's experiments. Through variants of the reference model of the APSIS experiments, we showed that low pressure and low temperature favor the growth of unsaturated species; these conditions are fulfilled in Titan's ionosphere. The INMS neutral spectrum, dominated by the signal of unsaturated species, is well reproduced by our simulated MS. Compared to the experimental MS of the APSIS and Imanaka and Smith experiments, the simulated MS systematically underestimate the intensities of the saturated part of each band. After accounting for recombination catalyzed by the reactor walls, the simulated MS improved significantly. This suggests the existence of wall effects in laboratory simulations of atmospheric chemistry, leading to an overestimation of saturated products compared to Titan's chemical products.
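The photolysis treatment and the uncertainty management mentioned above can be illustrated schematically: a photolysis rate is the wavelength integral of cross-section times quantum yield times photon flux, and parameter uncertainty can be propagated by Monte Carlo sampling. The spectra below are made-up placeholders, not the actual N2/CH4 data or the DISCO beam spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)
wavelength_nm = np.linspace(60.0, 140.0, 81)

# Hypothetical placeholder spectra (NOT real cross-section or flux data):
sigma_cm2 = 1e-17 * np.exp(-((wavelength_nm - 95.0) / 20.0) ** 2)  # absorption cross-section
quantum_yield = 0.9 * np.ones_like(wavelength_nm)                  # dissociation quantum yield
flux = 1e10 * np.ones_like(wavelength_nm)                          # photons cm^-2 s^-1 nm^-1

def photolysis_rate(sigma, q, f, lam):
    """J = integral over wavelength of sigma(l) * q(l) * F(l) dl  [s^-1]."""
    return np.trapz(sigma * q * f, lam)

# Monte Carlo propagation of a factor-of-2 (log-normal) uncertainty on the cross-section:
samples = [photolysis_rate(sigma_cm2 * rng.lognormal(0.0, np.log(2.0)),
                           quantum_yield, flux, wavelength_nm)
           for _ in range(2000)]
print(np.percentile(samples, [16, 50, 84]))
```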
Li, Chen; Nagasaki, Masao; Ueno, Kazuko; Miyano, Satoru
2009-04-27
Model checking approaches were first applied to biological pathway validation around 2003. Recently, Fisher et al. demonstrated the importance of the model checking approach by inferring new regulation of signaling crosstalk in C. elegans and confirming the regulation with biological experiments. They took a discrete, state-based approach to explore all possible states of the system underlying vulval precursor cell (VPC) fate specification for desired properties. However, since both discrete and continuous features appear to be indispensable parts of biological processes, it is more appropriate to use quantitative models to capture the dynamics of biological systems. The key motivation of this paper is to establish a quantitative methodology to model and analyze in silico models incorporating a model checking approach. A novel method of modeling and simulating biological systems with a model checking approach is proposed, based on hybrid functional Petri net with extension (HFPNe) as the framework for dealing with both discrete and continuous events. First, we construct a quantitative VPC fate model with 1761 components using HFPNe. Second, we apply two major biological fate determination rules, Rule I and Rule II, to the VPC fate model. We then conduct 10,000 simulations for each of 48 sets of different genotypes, investigate variations of cell fate patterns under each genotype, and validate the two rules by comparing three simulation targets consisting of fate patterns obtained from in silico and in vivo experiments. In particular, an evaluation was successfully performed by using our VPC fate model to investigate one target derived from biological experiments involving hybrid lineage observations. Such hybrid lineages are hard to interpret with a discrete model because they occur when the system comes close to certain thresholds, as discussed by Sternberg and Horvitz in 1986. Our simulation results suggest that Rule I, which cannot be applied with qualitative model checking, is more reasonable than Rule II owing to its higher coverage of predicted fate patterns (except for the genotype of lin-15ko; lin-12ko double mutants). Additional insights are also suggested. The quantitative, simulation-based model checking approach is a useful means to provide valuable biological insights and better understanding of biological systems and observation data that may be hard to capture with a qualitative approach.
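The quantitative, simulation-based model checking strategy can be sketched as follows: run many stochastic simulations per genotype, classify each simulated fate pattern, and report how well a rule's predicted patterns cover the simulated outcomes. Everything below (fate labels, probabilities, patterns) is a hypothetical stand-in for the HFPNe model, purely to show the structure of the check.

```python
import random
from collections import Counter

random.seed(42)

def simulate_fate_pattern():
    """Hypothetical stochastic stand-in for one quantitative model run:
    returns the fates of six precursor cells."""
    return tuple(random.choices(["1st", "2nd", "3rd"],
                                weights=[0.2, 0.3, 0.5], k=6))

def coverage(rule_predicted_patterns, n_runs=10_000):
    """Fraction of simulated runs whose fate pattern is allowed by the rule
    (a simple simulation-based check of a reachability-style property)."""
    counts = Counter(simulate_fate_pattern() for _ in range(n_runs))
    covered = sum(n for pattern, n in counts.items()
                  if pattern in rule_predicted_patterns)
    return covered / n_runs

rule_I = {("3rd",) * 6, ("3rd", "3rd", "2nd", "1st", "2nd", "3rd")}  # hypothetical
print(coverage(rule_I))
```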
NASA Astrophysics Data System (ADS)
Bautista, Nazan Uludag
2011-06-01
This study investigated the effectiveness of an Early Childhood Education science methods course that focused exclusively on providing various mastery (i.e., enactive, cognitive content, and cognitive pedagogical) and vicarious experiences (i.e., cognitive self-modeling, symbolic modeling, and simulated modeling) in increasing preservice elementary teachers' self-efficacy beliefs. Forty-four preservice elementary teachers participated in the study. Analysis of the quantitative (STEBI-b) and qualitative (informal surveys) data revealed that personal science teaching efficacy and science teaching outcome expectancy beliefs increased significantly over the semester. Enactive mastery, cognitive pedagogical mastery, symbolic modeling, and cognitive self-modeling were the major sources of self-efficacy. This list was followed by cognitive content mastery and simulated modeling. This study has implications for science teacher educators.
Estimation and simulation of multi-beam sonar noise.
Holmin, Arne Johannes; Korneliussen, Rolf J; Tjøstheim, Dag
2016-02-01
Methods for the estimation and modeling of noise present in multi-beam sonar data, including the magnitude, probability distribution, and spatial correlation of the noise, are developed. The methods consider individual acoustic samples and facilitate compensation of highly localized noise as well as subtraction of noise estimates averaged over time. The modeled noise is included in an existing multi-beam sonar simulation model [Holmin, Handegard, Korneliussen, and Tjøstheim, J. Acoust. Soc. Am. 132, 3720-3734 (2012)], resulting in an improved model that can be used to strengthen interpretation of data collected in situ at any signal to noise ratio. Two experiments, from the former study in which multi-beam sonar data of herring schools were simulated, are repeated with inclusion of noise. These experiments demonstrate (1) the potentially large effect of changes in fish orientation on the backscatter from a school, and (2) the estimation of behavioral characteristics such as the polarization and packing density of fish schools. The latter is achieved by comparing real data with simulated data for different polarizations and packing densities.
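The noise handling described above, estimating a time-averaged noise level per beam and range sample and subtracting it from the recorded backscatter, can be sketched in a few lines. The array shapes and values below are illustrative placeholders and do not reflect the actual sonar data format or the published estimation method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical backscatter volume: (pings, beams, range samples), linear intensity units.
signal = np.zeros((200, 64, 300))
signal[:, 20:30, 100:140] = 5.0                        # a "school" of targets
noise_scale = rng.exponential(0.5, size=(64, 300))     # beam/range-dependent noise floor
data = signal + rng.exponential(noise_scale, size=(200, 64, 300))

# Estimate the noise as a robust average over pings (time) and subtract it:
noise_est = np.percentile(data, 20, axis=0)            # low percentile suppresses targets
cleaned = np.clip(data - noise_est[None, :, :], 0.0, None)

print(data.mean(), cleaned.mean())
```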
Charge transfer in model peptides: obtaining Marcus parameters from molecular simulation.
Heck, Alexander; Woiczikowski, P Benjamin; Kubař, Tomáš; Giese, Bernd; Elstner, Marcus; Steinbrecher, Thomas B
2012-02-23
Charge transfer within and between biomolecules remains a highly active field of biophysics. Due to the complexities of real systems, model compounds are a useful alternative for studying the mechanistic fundamentals of charge transfer. In recent years, such model experiments have been underpinned by molecular simulation methods as well. In this work, we study electron hole transfer in helical model peptides by means of molecular dynamics simulations. A theoretical framework to extract Marcus parameters of charge transfer from simulations is presented. We find that the peptides form stable helical structures with sequence-dependent small deviations from ideal PPII helices. We identify direct exposure of charged side chains to solvent as a cause of high reorganization energies, significantly larger than typical for electron transfer in proteins. This, together with small direct couplings, makes long-range superexchange electron transport in this system very slow. In good agreement with experiment, direct transfer between the terminal amino acid side chains can be discounted in favor of a two-step hopping process if appropriate bridging groups exist.
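The Marcus picture invoked here relates the transfer rate to the reorganization energy lambda, the driving force ΔG, and the electronic coupling H_AB through the standard non-adiabatic (high-temperature) Marcus expression. The sketch below evaluates that expression; the parameter values are illustrative, not the ones extracted in the study.

```python
import math

HBAR_EV_S = 6.582119569e-16   # reduced Planck constant, eV*s
KB_EV_K   = 8.617333262e-5    # Boltzmann constant, eV/K

def marcus_rate(coupling_ev, lambda_ev, dg_ev, temp_k=300.0):
    """Non-adiabatic Marcus rate:
    k = (2*pi/hbar) * |H_AB|^2 * (4*pi*lambda*kB*T)^(-1/2)
        * exp(-(dG + lambda)^2 / (4*lambda*kB*T))   [s^-1]"""
    kt = KB_EV_K * temp_k
    prefactor = (2.0 * math.pi / HBAR_EV_S) * coupling_ev ** 2
    prefactor /= math.sqrt(4.0 * math.pi * lambda_ev * kt)
    return prefactor * math.exp(-(dg_ev + lambda_ev) ** 2 / (4.0 * lambda_ev * kt))

# Illustrative values: weak coupling and a large reorganization energy give slow transfer
print(f"{marcus_rate(coupling_ev=1e-4, lambda_ev=1.2, dg_ev=0.0):.3e} s^-1")
```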
A model for the kinetics of a solar-pumped long path laser experiment
NASA Technical Reports Server (NTRS)
Stock, L. V.; Wilson, J. W.; Deyoung, R. J.
1986-01-01
A kinetic model for a solar-simulator-pumped iodine laser system is developed and compared to an experiment in which the solar simulator output is dispersed over a large active volume (150 cu cm) at low simulator light intensity (approx. 200 solar constants). A trace foreign gas which quenches the upper level is introduced into the model. Furthermore, a constant representing optical absorption of the stimulated emission is introduced, in addition to a constant representing the scattering at each of the mirrors, via the optical cavity time constant. The non-uniform heating of the gas is treated, as well as the pressure change as a function of time within the cavity. With these new phenomena introduced into the kinetic model, the best reasonable fit to the experimental data is found by numerically adjusting the reaction rate coefficients within their range of known uncertainty, giving a new bound within this range. The experimental parameters modeled are the lasing time, laser pulse energy, and time to laser threshold.
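The kind of kinetic model described, pumping into an upper laser level, quenching by a trace gas, and stimulated emission in a cavity whose photon lifetime is set by mirror and absorption losses, can be sketched with a minimal two-equation rate model. The coefficients below are arbitrary placeholders and do not represent the iodine photodissociation chemistry of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rate coefficients (placeholders, not the iodine laser kinetics):
PUMP    = 5.0e20   # upper-level production rate (cm^-3 s^-1), from the solar simulator
QUENCH  = 1.0e3    # upper-level quenching rate (s^-1), e.g. by a trace foreign gas
SIGMA_C = 1.0e-10  # stimulated-emission rate coefficient (cm^3 s^-1)
TAU_CAV = 1.0e-7   # cavity photon lifetime (s), set by mirror/absorption losses

def rate_eqs(t, y):
    n_upper, n_phot = y
    stim = SIGMA_C * n_upper * n_phot
    dn_upper = PUMP - QUENCH * n_upper - stim
    dn_phot  = stim - n_phot / TAU_CAV + 1.0   # tiny seed term to start lasing
    return [dn_upper, dn_phot]

sol = solve_ivp(rate_eqs, (0.0, 2e-3), [0.0, 0.0], method="LSODA", max_step=1e-6)
threshold_idx = np.argmax(sol.y[1] > 1e10)     # crude "time to laser threshold"
print(sol.t[threshold_idx], sol.y[1].max())
```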
Hens, Bart; Pathak, Shriram M; Mitra, Amitava; Patel, Nikunjkumar; Liu, Bo; Patel, Sanjaykumar; Jamei, Masoud; Brouwers, Joachim; Augustijns, Patrick; Turner, David B
2017-12-04
The aim of this study was to evaluate gastrointestinal (GI) dissolution, supersaturation, and precipitation of posaconazole, formulated as an acidified (pH 1.6) and a neutral (pH 7.1) suspension. A physiologically based pharmacokinetic (PBPK) modeling and simulation tool was applied to simulate GI and systemic concentration-time profiles of posaconazole, which were directly compared with intraluminal and systemic data measured in humans. The Advanced Dissolution Absorption and Metabolism (ADAM) model of the Simcyp Simulator correctly simulated incomplete gastric dissolution and saturated concentrations of posaconazole in the duodenal fluids following administration of the neutral suspension. In contrast, gastric dissolution was approximately 2-fold higher after administration of the acidified suspension, which resulted in supersaturated concentrations of posaconazole upon transfer to the upper small intestine. The precipitation kinetics of posaconazole were described by two precipitation rate constants, extracted by semimechanistic modeling of a two-stage medium-change in vitro dissolution test. The 2-fold difference in exposure in the duodenal compartment between the two formulations corresponded with a 2-fold difference in systemic exposure. This study demonstrated for the first time predictive in silico simulations of GI dissolution, supersaturation, and precipitation for a weakly basic compound, in part informed by modeling of in vitro dissolution experiments and validated via clinical measurements in both GI fluids and plasma. Sensitivity analysis with the PBPK model indicated that the critical supersaturation ratio (CSR) and the second precipitation rate constant (sPRC) are important parameters of the model. Due to the limitations of the two-stage medium-change experiment, the CSR was extracted directly from the clinical data. However, in vitro experiments with the BioGIT transfer system, performed after completion of the in silico modeling, provided an almost identical CSR to the clinical value; this had no significant impact on the PBPK model predictions.
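One way to picture the role of the CSR and the two precipitation rate constants is as first-order decay of the supersaturated concentration toward solubility, with a faster rate constant above the critical supersaturation ratio and a slower one below it. This is only a schematic simplification for illustration; the ADAM model's mechanistic treatment is considerably richer, and all numbers below are placeholders.

```python
from scipy.integrate import solve_ivp

C_SAT = 1.0                    # saturation solubility in intestinal fluid (arbitrary units)
CSR = 3.0                      # critical supersaturation ratio (placeholder)
K_FAST, K_SLOW = 0.10, 0.005   # first and second precipitation rate constants (min^-1)

def precipitation(t, y):
    """Schematic two-regime first-order precipitation of a supersaturated drug."""
    c = y[0]
    if c <= C_SAT:
        return [0.0]
    k = K_FAST if c / C_SAT >= CSR else K_SLOW
    return [-k * (c - C_SAT)]

# Start supersaturated (e.g. after transfer of the acidified suspension to the intestine):
sol = solve_ivp(precipitation, (0.0, 240.0), [5.0 * C_SAT], max_step=1.0)
print(sol.y[0][::60])
```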
How well do force fields capture the strength of salt bridges in proteins?
Ahmed, Mustapha Carab; Papaleo, Elena
2018-01-01
Salt bridges form between pairs of ionisable residues in close proximity and are important interactions in proteins. While salt bridges are known to be important both for protein stability, recognition and regulation, we still do not have fully accurate predictive models to assess the energetic contributions of salt bridges. Molecular dynamics simulation is one technique that may be used study the complex relationship between structure, solvation and energetics of salt bridges, but the accuracy of such simulations depends on the force field used. We have used NMR data on the B1 domain of protein G (GB1) to benchmark molecular dynamics simulations. Using enhanced sampling simulations, we calculated the free energy of forming a salt bridge for three possible lysine-carboxylate ionic interactions in GB1. The NMR experiments showed that these interactions are either not formed, or only very weakly formed, in solution. In contrast, we show that the stability of the salt bridges is overestimated, to different extents, in simulations of GB1 using seven out of eight commonly used combinations of fixed charge force fields and water models. We also find that the Amber ff15ipq force field gives rise to weaker salt bridges in good agreement with the NMR experiments. We conclude that many force fields appear to overstabilize these ionic interactions, and that further work may be needed to refine our ability to model quantitatively the stability of salt bridges through simulations. We also suggest that comparisons between NMR experiments and simulations will play a crucial role in furthering our understanding of this important interaction.