Jason Forthofer; Bret Butler
2007-01-01
A computational fluid dynamics (CFD) model and a mass-consistent model were used to simulate winds, and their effect on simulated fire spread, over a simple, low hill. The results suggest that the CFD wind field could significantly change simulated fire spread compared to traditional uniform winds. The CFD fire spread case may match reality better because the winds used in the fire...
An integrated modeling approach to predict flooding on urban basin.
Dey, Ashis Kumar; Kamioka, Seiji
2007-01-01
Correct prediction of flood extents in urban catchments has become a challenging issue. Traditional urban drainage models that consider only the sewerage network can simulate the drainage system correctly as long as there is no overflow from the network inlets or manholes. When such overflows occur because of insufficient drainage capacity in downstream pipes or channels, it becomes difficult to reproduce the actual flood extents using these traditional one-phase simulation techniques. On the other hand, traditional 2D models that simulate surface flooding resulting from rainfall and/or levee breaks do not consider the sewerage network. As a result, the correct flooding situation is rarely captured by the available traditional 1D and 2D models. This paper presents an integrated model that simultaneously simulates the sewerage network, the river network and a 2D mesh network to obtain correct flood extents. The model has been successfully applied to the Tenpaku basin (Nagoya, Japan), which experienced severe flooding with a maximum flood depth of more than 1.5 m on September 11, 2000, when heavy rainfall, 580 mm in 28 hrs (return period > 100 yr), fell over the catchment. Close agreement between the simulated flood depths and observed data shows that the present integrated modeling approach can reproduce urban flooding accurately, something that can rarely be achieved with traditional 1D and 2D modeling approaches.
Quality assurance paradigms for artificial intelligence in modelling and simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oren, T.I.
1987-04-01
New classes of quality assurance concepts and techniques are required for advanced knowledge-processing paradigms (such as artificial intelligence, expert systems, or knowledge-based systems) and for the complex problems that only simulative systems can cope with. A systematization of quality assurance problems is given, together with examples of traditional and cognizant quality assurance techniques in traditional and cognizant modelling and simulation.
Comparing Traditional versus Alternative Sequencing of Instruction When Using Simulation Modeling
ERIC Educational Resources Information Center
Bowen, Bradley; DeLuca, William
2015-01-01
Many engineering and technology education classrooms incorporate simulation modeling as part of curricula to teach engineering and STEM-based concepts. In the traditional learning sequence, students first learn the content from the classroom teacher and then may have the opportunity to apply the learned content through simulation…
Switching performance of OBS network model under prefetched real traffic
NASA Astrophysics Data System (ADS)
Huang, Zhenhua; Xu, Du; Lei, Wen
2005-11-01
Optical Burst Switching (OBS) [1] is now widely considered an efficient switching technique for building the next-generation optical Internet, so it is very important to evaluate the performance of the OBS network model precisely. The performance of the OBS network model varies with conditions, but the most important question is how it behaves under real traffic load. In traditional simulation models, uniform traffic is usually generated by simulation software to imitate the data source of the edge node in the OBS network model, and the performance of the OBS network is evaluated with this synthetic load. Unfortunately, without being driven by real traffic, the traditional simulation models have several problems and their results are questionable. To deal with this problem, we present a new simulation model for analysis and performance evaluation of the OBS network, which uses prefetched IP traffic as the data source of the OBS network model. The prefetched IP traffic can be considered a real IP source for the OBS edge node, and the OBS network model has the same clock rate as a real OBS system, so this model is closer to the real OBS system than the traditional ones. The simulation results also indicate that this model evaluates the performance of the OBS network system more accurately and that its results are closer to the actual situation.
Data-driven train set crash dynamics simulation
NASA Astrophysics Data System (ADS)
Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun
2017-02-01
Traditional finite element (FE) methods are computationally expensive for simulating train crashes. This high computational cost limits their direct application to investigating the dynamic behaviour of an entire train set for crashworthiness design and structural optimisation. Multi-body modelling, by contrast, is widely used because of its low computational cost, with a trade-off in accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of a multi-body dynamics simulation of a train set crash without increasing the computational burden. This is achieved by a parallel random forest algorithm, a machine learning approach that extracts useful patterns from force-displacement curves and predicts the force-displacement relation for a given collision condition from a collection of offline FE simulation data covering various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods; the results show that our data-driven method improves accuracy over traditional multi-body models in train crash simulation while running at the same level of efficiency.
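A minimal sketch of the surrogate idea described above, assuming hypothetical arrays of FE-simulated crash velocities, displacements and interface forces; the data, function names and numbers are illustrative placeholders, not taken from the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical offline FE results: each sample is (crash velocity [m/s],
# displacement [m]) -> interface force [kN]. Real data would come from a
# batch of FE crash simulations at several velocities.
rng = np.random.default_rng(0)
velocity = rng.uniform(5, 25, size=2000)
displacement = rng.uniform(0.0, 0.8, size=2000)
force = 300 * np.tanh(5 * displacement) * (velocity / 15) + rng.normal(0, 5, 2000)

X = np.column_stack([velocity, displacement])
model = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
model.fit(X, force)  # n_jobs=-1 trains the trees in parallel

# At run time, the multi-body solver queries the surrogate instead of a
# full FE contact model for a given collision condition.
query = np.array([[18.0, 0.35]])          # 18 m/s crash, 0.35 m crush
print(model.predict(query))               # predicted interface force [kN]
```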
Lemaster, Margaret; Flores, Joyce M; Blacketer, Margaret S
2016-02-01
This study explored the effectiveness of simulated mouth models for improving the identification and recording of dental restorations compared with traditional didactic instruction combined with 2-dimensional images. Simulation has been adopted into medical and dental education curricula to improve both student learning and patient safety outcomes. A 2-sample, independent t-test analysis was conducted to compare graded dental recordings of dental hygiene students using simulated mouth models with those of dental hygiene students using 2-dimensional photographs. Evaluations from graded dental charts were analyzed and compared between the group of students using the simulated mouth models, which contained randomly placed custom preventive and restorative materials, and the group using traditional 2-dimensional representations of didactically described conditions. Results demonstrated a statistically significant (p≤0.0001) difference: in the experimental group, students using the simulated mouth models to identify and record dental conditions had a mean score of 86.73 and a variance of 33.84, whereas in the control group, students using traditional 2-dimensional images had a mean graded dental chart score of 74.43 and a variance of 14.25. Using modified simulation technology for dental charting identification may increase the level of dental charting skill competency in first-year dental hygiene students. Copyright © 2016 The American Dental Hygienists' Association.
Carolan-Rees, G; Ray, A F
2015-05-01
The aim of this study was to produce an economic cost model comparing the use of the Medaphor ScanTrainer virtual reality training simulator with the traditional training method for achieving basic competence in obstetrics and gynaecology ultrasound. A literature search and a survey of expert opinion were used to identify the resources used in training. An executable model was produced in Excel. The model showed a cost saving of £7114 per annum for a clinic using the ScanTrainer. The uncertainties of the model were explored and it was found to be robust. Threshold values for the key drivers of the model were identified. Using the ScanTrainer is cost saving for clinics with at least two trainees per year to train, if it would take at least six lists to train them using the traditional training method and if a traditional training list has at least two fewer patients than a standard list.
Effects of Learning Support in Simulation-Based Physics Learning
ERIC Educational Resources Information Center
Chang, Kuo-En; Chen, Yu-Lung; Lin, He-Yan; Sung, Yao-Ting
2008-01-01
This paper describes the effects of learning support on simulation-based learning in three learning models: experiment prompting, a hypothesis menu, and step guidance. A simulation learning system was implemented based on these three models, and the differences between simulation-based learning and traditional laboratory learning were explored in…
Validation of a 2.5D CFD model for cylindrical gas–solids fluidized beds
Li, Tingwen
2015-09-25
The 2.5D model recently proposed by Li et al. (Li, T., Benyahia, S., Dietiker, J., Musser, J., and Sun, X., 2015. A 2.5D computational method to simulate cylindrical fluidized beds. Chemical Engineering Science. 123, 236-246.) was validated for two cylindrical gas-solids bubbling fluidized bed systems. Different types of particles tested under various flow conditions were simulated using the traditional 2D model and the 2.5D model. Detailed comparisons against the experimental measurements of solids concentration and velocity were conducted. Compared to the traditional Cartesian 2D flow simulation, the 2.5D model yielded better agreement with the experimental data, especially for the solids velocity prediction in the column wall region.
A DYNAMIC MODEL OF AN ESTUARINE INVASION BY A NON-NATIVE SEAGRASS
Mathematical and simulation models provide an excellent tool for examining and predicting biological invasions in time and space; however, traditional models do not incorporate dynamic rates of population growth, which limits their realism. We developed a spatially explicit simul...
Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.
Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun
2018-01-01
Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgery. It is well known that while the traditional Finite Element Method (FEM) promises accurate modeling of soft tissue deformation, it suffers from a slow computational process. This paper presents a Kalman filter finite element method (KF-FEM) to model soft tissue deformation in real time without sacrificing the traditional FEM accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computational time of KF-FEM is approximately 10 times shorter than that of the traditional FEM while remaining just as accurate. The normalized root-mean-square error of the proposed KF-FEM with reference to the traditional FEM is computed as 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing FEM accuracy. The proposed method also filters noise in the system state and measurement data.
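As a rough illustration of the filtering idea (not the authors' exact formulation), the sketch below runs a generic linear Kalman filter over a discretized state equation with noisy displacement measurements; the matrices and noise levels are placeholder assumptions:

```python
import numpy as np

def kalman_step(x, P, A, Q, H, R, z):
    # Predict with the discretized (e.g., Newmark-based) state model.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the real-time measurement z of nodal displacements.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 2-DOF example with placeholder matrices.
A = np.array([[1.0, 0.01], [0.0, 0.98]])
H = np.array([[1.0, 0.0]])                 # only displacement is measured
Q, R = 1e-6 * np.eye(2), np.array([[1e-4]])
x, P = np.zeros(2), np.eye(2)
for z in [0.01, 0.019, 0.028]:             # simulated noisy measurements
    x, P = kalman_step(x, P, A, Q, H, R, np.array([z]))
print(x)                                   # filtered state estimate
```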
Power of Models in Longitudinal Study: Findings from a Full-Crossed Simulation Design
ERIC Educational Resources Information Center
Fang, Hua; Brooks, Gordon P.; Rizzo, Maria L.; Espy, Kimberly Andrews; Barcikowski, Robert S.
2009-01-01
Because the power properties of traditional repeated measures and hierarchical multivariate linear models have not been clearly determined in the balanced design for longitudinal studies in the literature, the authors present a power comparison study of traditional repeated measures and hierarchical multivariate linear models under 3…
Wang, Chunfei; Zhang, Guang; Wu, Taihu; Zhan, Ningbo; Wang, Yaling
2016-03-01
High-quality cardiopulmonary resuscitation contributes to cardiac arrest survival. The traditional chest compression (CC) standard, which neglects individual differences, uses unified values for compression depth and compression rate in practice. In this study, an effective and personalized CC method for automatic mechanical compression devices is provided. We rebuild Charles F. Babbs' human circulation model with a coronary perfusion pressure (CPP) simulation module and propose a closed-loop controller based on a fuzzy control algorithm for CCs, which adjusts the CC depth according to the CPP. The performance of the fuzzy controller is evaluated against a traditional proportional-integral-derivative (PID) controller in computer simulation studies. The simulation results demonstrate that the fuzzy closed-loop controller yields shorter regulation time, fewer oscillations and smaller overshoot than traditional PID controllers and outperforms the traditional PID controller for CPP regulation and maintenance.
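A highly simplified sketch of how a fuzzy closed-loop rule could map CPP error to a depth adjustment, assuming triangular memberships, singleton consequents and a 60 mmHg target; all memberships, rule consequents and numbers are illustrative assumptions, not the controller from the paper:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_depth_increment(cpp_error_mmHg):
    """Map CPP error (target - measured, mmHg) to a CC depth change (cm)."""
    e = np.clip(cpp_error_mmHg, -20, 20)
    w_neg = tri(e, -40, -20, 0)    # CPP above target -> reduce depth
    w_zero = tri(e, -10, 0, 10)    # near target      -> hold depth
    w_pos = tri(e, 0, 20, 40)      # CPP below target -> increase depth
    outputs = np.array([-0.3, 0.0, 0.3])      # singleton consequents (cm)
    weights = np.array([w_neg, w_zero, w_pos])
    return float(weights @ outputs / (weights.sum() + 1e-12))  # centroid

depth = 5.0                                   # cm, starting compression depth
for cpp in [35, 42, 55, 62]:                  # simulated CPP readings, mmHg
    depth += fuzzy_depth_increment(60 - cpp)  # assumed target CPP: 60 mmHg
print(round(depth, 2))
```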
Charge-dependent many-body exchange and dispersion interactions in combined QM/MM simulations
NASA Astrophysics Data System (ADS)
Kuechler, Erich R.; Giese, Timothy J.; York, Darrin M.
2015-12-01
Accurate modeling of the molecular environment is critical in condensed phase simulations of chemical reactions. Conventional quantum mechanical/molecular mechanical (QM/MM) simulations traditionally model non-electrostatic non-bonded interactions through an empirical Lennard-Jones (LJ) potential which, in violation of intuitive chemical principles, is bereft of any explicit coupling to an atom's local electronic structure. This oversight results in a model whereby short-ranged exchange-repulsion and long-ranged dispersion interactions are invariant to changes in the local atomic charge, leading to accuracy limitations for chemical reactions where significant atomic charge transfer can occur along the reaction coordinate. The present work presents a variational, charge-dependent exchange-repulsion and dispersion model, referred to as the charge-dependent exchange and dispersion (QXD) model, for hybrid QM/MM simulations. Analytic expressions for the energy and gradients are provided, as well as a description of the integration of the model into existing QM/MM frameworks, allowing QXD to replace traditional LJ interactions in simulations of reactive condensed phase systems. After initial validation against QM data, the method is demonstrated by capturing the solvation free energies of a series of small, chlorine-containing compounds that have varying charge on the chlorine atom. The model is further tested on the SN2 attack of a chloride anion on methylchloride. Results suggest that the QXD model, unlike the traditional LJ model, is able to simultaneously obtain accurate solvation free energies for a range of compounds while at the same time closely reproducing the experimental reaction free energy barrier. The QXD interaction model allows explicit coupling of atomic charge with many-body exchange and dispersion interactions that are related to atomic size and provides a more accurate and robust representation of non-electrostatic non-bonded QM/MM interactions.
A Framework for the Optimization of Discrete-Event Simulation Models
NASA Technical Reports Server (NTRS)
Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.
1996-01-01
With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed when optimizing via stochastic simulation models: the optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for the optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
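As context, a generic statement of the chance-constraint formulation mentioned above (stated in standard textbook notation, not the paper's exact formulation) is

\[
\min_{x \in X} \; \mathbb{E}\left[f(x,\xi)\right]
\quad \text{subject to} \quad
\Pr\left[g_j(x,\xi) \le b_j\right] \ge 1-\alpha_j, \quad j = 1,\dots,m,
\]

where $\xi$ represents the randomness in the stochastic simulation output, and the expectations and probabilities are estimated from replicated terminating runs using standard statistical techniques.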
Multi-scale Modeling of Arctic Clouds
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Roesler, E. L.; Dexheimer, D.
2017-12-01
The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows more explicit simulation of small-scale processes while also allowing interaction between the small- and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.
The Osseus platform: a prototype for advanced web-based distributed simulation
NASA Astrophysics Data System (ADS)
Franceschini, Derrick; Riecken, Mark
2016-05-01
Recent technological advances in web-based distributed computing and database technology have made possible a deeper and more transparent integration of some modeling and simulation applications. Despite these advances towards true integration of capabilities, disparate systems, architectures, and protocols will remain in the inventory for some time to come. These disparities present interoperability challenges for distributed modeling and simulation whether the application is training, experimentation, or analysis. Traditional approaches call for building gateways to bridge between disparate protocols and retaining interoperability specialists. Challenges in reconciling data models also persist. These challenges and their traditional mitigation approaches directly contribute to higher costs, schedule delays, and frustration for the end users. Osseus is a prototype software platform originally funded as a research project by the Defense Modeling & Simulation Coordination Office (DMSCO) to examine interoperability alternatives using modern, web-based technology and taking inspiration from the commercial sector. Osseus provides tools and services for nonexpert users to connect simulations, targeting the time and skillset needed to successfully connect disparate systems. The Osseus platform presents a web services interface to allow simulation applications to exchange data using modern techniques efficiently over Local or Wide Area Networks. Further, it provides Service Oriented Architecture capabilities such that finer granularity components such as individual models can contribute to simulation with minimal effort.
ERIC Educational Resources Information Center
Reid, Maurice; Brown, Steve; Tabibzadeh, Kambiz
2012-01-01
For the past decade teaching models have been changing, reflecting the dynamics, complexities, and uncertainties of today's organizations. The traditional and the more current active models of learning have disadvantages. Simulation provides a platform to combine the best aspects of both types of teaching practices. This research explores the…
Simulation of human behavior in exposure modeling is a complex task. Traditionally, inter-individual variation in human activity has been modeled by drawing from a pool of single day time-activity diaries such as the US EPA Consolidated Human Activity Database (CHAD). Here, an ag...
Khadivzadeh, Talat; Erfanian, Fatemeh
2012-10-01
Midwifery students experience high levels of stress during their initial clinical practice. Addressing the learners' sources of anxiety and discomfort can ease the learning experience and lead to better outcomes. The aim of this study was to determine the effect of a simulation-based course, using simulated patients and simulated gynecologic models, on student anxiety and comfort while practicing to provide intrauterine device (IUD) services. Fifty-six eligible midwifery students were randomly allocated to simulation-based and traditional training groups. They participated in a 12-hour workshop on providing IUD services. The simulation group was trained through an educational program including simulated gynecologic models and simulated patients. The students in both groups then practiced IUD consultation and insertion with real patients in the clinic. The students' anxiety about IUD insertion was assessed using the "Spielberger anxiety test" and the "comfort in providing IUD services" questionnaire. There were significant differences between the simulation and traditional groups in two aspects of anxiety, state (P < 0.001) and trait (P = 0.024), and in the level of comfort (P < 0.001) in providing IUD services. "Fear of uterine perforation during insertion" was the most important cause of students' anxiety in providing IUD services and was reported by 74.34% of students. Simulated patients and simulated gynecologic models are effective in optimizing students' anxiety levels when practicing to deliver IUD services. It is therefore recommended that simulated patients and simulated gynecologic models be used before engaging students in real clinical practice.
Modeling and simulating industrial land-use evolution in Shanghai, China
NASA Astrophysics Data System (ADS)
Qiu, Rongxu; Xu, Wei; Zhang, John; Staenz, Karl
2018-01-01
This study proposes a cellular automata-based Industrial and Residential Land Use Competition Model to simulate the dynamic spatial transformation of industrial land use in Shanghai, China. In the proposed model, land development activities in a city are delineated as competitions among different land-use types. The Hedonic Land Pricing Model is adopted to implement the competition framework. To improve simulation results, a Land Price Agglomeration Model was devised to simulate and adjust classic land price theory. A new evolutionary algorithm-based parameter estimation method was devised in place of traditional methods. Simulation results show that the proposed model closely resembles actual land transformation patterns and can simulate not only land development but also redevelopment processes in metropolitan areas.
Capturing atmospheric effects on 3D millimeter wave radar propagation patterns
NASA Astrophysics Data System (ADS)
Cook, Richard D.; Fiorino, Steven T.; Keefer, Kevin J.; Stringer, Jeremy
2016-05-01
Traditional radar propagation modeling is done using a path transmittance with little to no input for weather and atmospheric conditions. As radar advances into the millimeter wave (MMW) regime, atmospheric effects such as attenuation and refraction become more pronounced than at traditional radar wavelengths. The DoD High Energy Laser Joint Technology Office's High Energy Laser End-to-End Operational Simulation (HELEEOS), in combination with the Laser Environmental Effects Definition and Reference (LEEDR) code, has shown great promise in simulating atmospheric effects on laser propagation. Indeed, the LEEDR radiative transfer code has been validated from the UV through the RF. Our research attempts to apply these models to characterize the far-field radar pattern in three dimensions as a signal propagates from an antenna towards a point in space. Furthermore, we do so using realistic three-dimensional atmospheric profiles. The results from these simulations are compared to those from traditional radar propagation software packages. In summary, a fast-running method has been investigated which can be incorporated into computational models to enhance understanding and prediction of MMW propagation through various atmospheric and weather conditions.
Tolerance analysis through computational imaging simulations
NASA Astrophysics Data System (ADS)
Birch, Gabriel C.; LaCasse, Charles F.; Stubbs, Jaclynn J.; Dagel, Amber L.; Bradley, Jon
2017-11-01
The modeling and simulation of non-traditional imaging systems require holistic consideration of the end-to-end system. We demonstrate this approach through a tolerance analysis of a random scattering lensless imaging system.
NASA Technical Reports Server (NTRS)
Queen, Eric M.; Omara, Thomas M.
1990-01-01
A realization of a stochastic atmosphere model for use in simulations is presented. The model provides pressure, density, temperature, and wind velocity as a function of latitude, longitude, and altitude, and is implemented in a three degree of freedom simulation package. This implementation is used in the Monte Carlo simulation of an aeroassisted orbital transfer maneuver and results are compared to those of a more traditional approach.
Simulation as a surgical teaching model.
Ruiz-Gómez, José Luis; Martín-Parra, José Ignacio; González-Noriega, Mónica; Redondo-Figuero, Carlos Godofredo; Manuel-Palazuelos, José Carlos
2018-01-01
Teaching of surgery has been affected by many factors over recent years, such as the reduction of working hours, the optimization of operating room use and patient safety. Traditional teaching methodology fails to reduce the impact of these factors on surgeons' training. Simulation as a teaching model minimizes such impact and is more effective than traditional teaching methods for integrating knowledge and clinical-surgical skills. Simulation complements clinical assistance with training, creating a safe learning environment where patient safety is not affected and ethical or legal conflicts are avoided. Simulation uses learning methodologies that allow teaching to be individualized, adapting it to the learning needs of each student. It also allows training of all kinds of technical, cognitive or behavioural skills. Copyright © 2017 AEC. Published by Elsevier España, S.L.U. All rights reserved.
Determination of the transmission coefficients for quantum structures using FDTD method.
Peng, Yangyang; Wang, Xiaoying; Sui, Wenquan
2011-12-01
The purpose of this work is to develop a simple method to incorporate quantum effects into traditional finite-difference time-domain (FDTD) simulators, which could make it possible to co-simulate systems that include quantum structures and traditional components. In this paper, the tunneling transmission coefficient is calculated by solving the time-domain Schrödinger equation with a developed FDTD technique, called the FDTD-S method. To validate the feasibility of the method, a simple resonant tunneling diode (RTD) structure model has been simulated using the proposed method. The good agreement between the numerical and analytical results demonstrates its accuracy. The effectiveness and accuracy of this approach make it a potential method for the analysis and design of hybrid systems that include quantum structures and traditional components.
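For orientation, one common explicit scheme for the time-domain Schrödinger equation, which FDTD-style approaches build on, splits the wavefunction into real and imaginary parts updated in a leapfrog fashion (shown here in a standard textbook form, not necessarily the authors' exact discretization):

\[
i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \psi}{\partial x^2} + V(x)\,\psi,
\qquad \psi = \psi_R + i\,\psi_I,
\]
\[
\psi_R^{\,n+1}(x) = \psi_R^{\,n}(x) + \frac{\Delta t}{\hbar}\,\hat{H}\,\psi_I^{\,n+1/2}(x),
\qquad
\psi_I^{\,n+3/2}(x) = \psi_I^{\,n+1/2}(x) - \frac{\Delta t}{\hbar}\,\hat{H}\,\psi_R^{\,n+1}(x),
\]

with $\hat{H}$ discretized by central differences in space; the transmission coefficient is then obtained from the transmitted probability (or probability flux) on the far side of the barrier once the wave packet has interacted with it.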
Tire-rim interface pressure of a commercial vehicle wheel under radial loads: theory and experiment
NASA Astrophysics Data System (ADS)
Wan, Xiaofei; Shan, Yingchun; Liu, Xiandong; He, Tian; Wang, Jiegong
2017-11-01
The simulation of the radial fatigue test of a wheel has been a necessary tool to improve the design of the wheel and calculate its fatigue life. The simulation model, including the strong nonlinearity of the tire structure and material, may produce accurate results, but often leads to a divergence in calculation. Thus, a simplified simulation model in which the complicated tire model is replaced with a tire-wheel contact pressure model is used extensively in the industry. In this paper, a simplified tire-rim interface pressure model of a wheel under a radial load is established, and the pressure of the wheel under different radial loads is tested. The tire-rim contact behavior affected by the radial load is studied and analyzed according to the test result, and the tire-rim interface pressure extracted from the test result is used to evaluate the simplified pressure model and the traditional cosine function model. The results show that the proposed model may provide a more accurate prediction of the wheel radial fatigue life than the traditional cosine function model.
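For reference, one commonly quoted form of the traditional cosine-function model mentioned above assumes the bead-seat pressure varies over a loading half-angle $\theta_0$ (often taken near 40°) as

\[
p(\theta) = p_{\max}\cos\!\left(\frac{\pi\theta}{2\theta_0}\right), \quad |\theta|\le\theta_0,
\qquad
W = b\, r_b \int_{-\theta_0}^{\theta_0} p(\theta)\cos\theta \,\mathrm{d}\theta,
\]

where $W$ is the applied radial load, $b$ the bead-seat width and $r_b$ its radius; the second relation fixes $p_{\max}$ for a given load. This is an illustrative form only; several variants of the cosine model appear in the wheel-fatigue literature.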
The Design and the Formative Evaluation of a Web-Based Course for Simulation Analysis Experiences
ERIC Educational Resources Information Center
Tao, Yu-Hui; Guo, Shin-Ming; Lu, Ya-Hui
2006-01-01
Simulation output analysis has received little attention compared to modeling and programming in real-world simulation applications. This is further evidenced by our observation that students and beginners acquire neither adequate knowledge nor relevant experience of simulation output analysis in traditional classroom learning. With…
Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter
2015-01-01
Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227
Paddock, Michael T; Bailitz, John; Horowitz, Russ; Khishfe, Basem; Cosby, Karen; Sergel, Michelle J
2015-03-01
Pre-hospital focused assessment with sonography in trauma (FAST) has been effectively used to improve patient care in multiple mass casualty events throughout the world. Although requisite FAST knowledge may now be learned remotely by disaster response team members, traditional live instructor and model hands-on FAST skills training remains logistically challenging. The objective of this pilot study was to compare the effectiveness of a novel portable ultrasound (US) simulator with traditional FAST skills training for a deployed mixed provider disaster response team. We randomized participants into one of three training groups stratified by provider role: Group A. Traditional Skills Training, Group B. US Simulator Skills Training, and Group C. Traditional Skills Training Plus US Simulator Skills Training. After skills training, we measured participants' FAST image acquisition and interpretation skills using a standardized direct observation tool (SDOT) with healthy models and review of FAST patient images. Pre- and post-course US and FAST knowledge were also assessed using a previously validated multiple-choice evaluation. We used the ANOVA procedure to determine the statistical significance of differences between the means of each group's skills scores. Paired sample t-tests were used to determine the statistical significance of pre- and post-course mean knowledge scores within groups. We enrolled 36 participants, 12 randomized to each training group. Randomization resulted in similar distribution of participants between training groups with respect to provider role, age, sex, and prior US training. For the FAST SDOT image acquisition and interpretation mean skills scores, there was no statistically significant difference between training groups. For US and FAST mean knowledge scores, there was a statistically significant improvement between pre- and post-course scores within each group, but again there was not a statistically significant difference between training groups. This pilot study of a deployed mixed-provider disaster response team suggests that a novel portable US simulator may provide equivalent skills training in comparison to traditional live instructor and model training. Further studies with a larger sample size and other measures of short- and long-term clinical performance are warranted.
A Simple Memristor Model for Circuit Simulations
NASA Astrophysics Data System (ADS)
Fullerton, Farrah-Amoy; Joe, Aaleyah; Gergel-Hackett, Nadine; Department of Chemistry; Physics Team
This work describes the development of a model for the memristor, a novel nanoelectronic technology. The model was designed to replicate the real-world electrical characteristics of previously fabricated memristor devices, but was constructed from basic circuit elements using a free, widely available circuit simulator, LTspice. The modeled memristors were then used to construct a circuit that performs material implication. Material implication is a digital logic operation that can be used to perform all of the same basic functions as traditional CMOS gates, but with fewer nanoelectronic devices. This memristor-based digital logic could enable memristors' use in new paradigms of computer architecture with advantages in size, speed, and power over traditional computing circuits. Additionally, the ability to model the real-world electrical characteristics of memristors in a free circuit simulator using its standard library of elements could enable not only the development of memristor material implication, but also the development of a virtually unlimited array of other memristor-based circuits.
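For readers unfamiliar with material implication (IMP), the logic referred to above, it is the Boolean operation

\[
p \;\mathrm{IMP}\; q \;\equiv\; \lnot p \lor q,
\]

and together with a FALSE (clear) operation it is functionally complete, e.g. $p\;\mathrm{NAND}\;q = p\;\mathrm{IMP}\;(q\;\mathrm{IMP}\;\mathrm{FALSE})$, which is why small memristor circuits implementing IMP can in principle realize any Boolean function.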
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo methods for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
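For context, the Gaussian-optics rule that such a photon-launching scheme must reproduce is the standard beam-width relation (stated here in its usual form, not as the authors' specific implementation):

\[
w(z) = w_0\sqrt{1+\left(\frac{z}{z_R}\right)^2},
\qquad
z_R = \frac{\pi w_0^2 n}{\lambda},
\]

so photon positions and directions are sampled such that the ensemble converges to a waist of radius $w_0$ at the focal plane rather than to a geometric point.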
NASA Astrophysics Data System (ADS)
Abisset-Chavanne, Emmanuelle; Duval, Jean Louis; Cueto, Elias; Chinesta, Francisco
2018-05-01
Traditionally, Simulation-Based Engineering Sciences (SBES) have relied on the use of static data inputs (model parameters, initial or boundary conditions, … obtained from adequate experiments) to perform simulations. A new paradigm in the field of Applied Sciences and Engineering has emerged in the last decade. Dynamic Data-Driven Application Systems [9, 10, 11, 12, 22] allow the linkage of simulation tools with measurement devices for real-time control of simulations and applications, entailing the ability to dynamically incorporate additional data into an executing application and, in reverse, the ability of an application to dynamically steer the measurement process. It is in this context that traditional "digital twins" are giving rise to a new generation of goal-oriented data-driven application systems, also known as "hybrid twins", embracing models based on physics and models based exclusively on data adequately collected and assimilated to fill the gap between usual model predictions and measurements. Within this framework, new methodologies based on model learners, machine learning and kinetic goal-oriented design are defining a new paradigm in materials, processes and systems engineering.
ERIC Educational Resources Information Center
Tural, Güner; Tarakçi, Demet
2017-01-01
Background: One of the topics students have difficulty understanding is electromagnetic induction. Active learning methods, rather than the traditional learning method, may help students understand such topics more effectively. Purpose: The study investigated the effectiveness of physical models and simulations on students'…
An environmental cost-benefit analysis of alternative green roofing strategies
NASA Astrophysics Data System (ADS)
Richardson, M.; William, R. K.; Goodwell, A. E.; Le, P. V.; Kumar, P.; Stillwell, A. S.
2016-12-01
Green roofs and cool roofs are alternative roofing strategies that mitigate urban heat island effects and improve building energy performance. Green roofs consist of soil and vegetation layers that provide runoff reduction, thermal insulation, and potential natural habitat, but can require regular maintenance. Cool roofs involve a reflective layer that reflects more sunlight than traditional roofing materials, but require additional insulation during winter months. This study evaluates several roofing strategies in terms of energy performance, urban heat island mitigation, water consumption, and economic cost. We use MLCan, a multi-layer canopy model, to simulate irrigated and non-irrigated green roof cases with shallow and deep soil depths during the spring and early summer of 2012, a drought period in central Illinois. Because of the dry conditions studied, periodic irrigation is implemented in the model to evaluate its effect on evapotranspiration. We simulate traditional and cool roof scenarios by altering surface albedo and omitting vegetation and soil layers. We find that both green roofs and cool roofs significantly reduce surface temperature compared to the traditional roof simulation. Cool roof temperatures always remain below air temperature and, like traditional roofs, require low maintenance. Green roof temperatures remain close to air temperature, and green roofs also provide thermal insulation, runoff reduction, and carbon uptake, but might require irrigation during dry periods. Because of the longer lifetime of a green roof compared to cool and traditional roofs, we find that green roofs realize the highest long-term cost savings under simulated conditions. However, using longer-life traditional roof materials (which have a higher upfront cost) can help decrease this price differential, making cool roofs the most affordable option given the higher maintenance costs associated with green roofs.
Ontological and Epistemological Issues Regarding Climate Models and Computer Experiments
NASA Astrophysics Data System (ADS)
Vezer, M. A.
2010-12-01
Recent philosophical discussions (Parker 2009; Frigg and Reiss 2009; Winsberg 2009; Morgan 2002, 2003, 2005; Guala 2002) about the ontology of computer simulation experiments and the epistemology of inferences drawn from them are of particular relevance to climate science, as computer modeling and analysis are instrumental in understanding climatic systems. How do computer simulation experiments compare with traditional experiments? Is there an ontological difference between these two methods of inquiry? Are there epistemological considerations that make one type of inference more reliable than the other? What are the implications of these questions for climate studies that rely on computer simulation analysis? In this paper, I examine these philosophical questions within the context of climate science, instantiating concerns in the philosophical literature with examples from the analysis of global climate change. I concentrate on Wendy Parker's (2009) account of computer simulation studies, which offers a treatment of these and other questions relevant to investigations of climate change involving such modelling. Two theses at the center of Parker's account are the focus of this paper. The first is that computer simulation experiments ought to be regarded as straightforward material experiments; which is to say, there is no significant ontological difference between computer and traditional experimentation. Parker's second thesis is that some of the emphasis on the epistemological importance of materiality has been misplaced. I examine both of these claims. First, I ask whether viewing computer and traditional experiments as ontologically similar in the way she does implies that there is no proper distinction between abstract experiments (such as 'thought experiments' as well as computer experiments) and traditional 'concrete' ones. Second, I examine the notion of materiality (i.e., the material commonality between object and target systems) and some arguments for the claim that materiality entails an inferential advantage for traditional experimentation. I maintain that Parker's account of the ontology of computer simulations has some interesting though potentially problematic implications regarding conventional distinctions between abstract and concrete methods of inquiry. With respect to her account of materiality, I outline and defend an alternative account, posited by Mary Morgan (2002, 2003, 2005), which holds that ontological similarity between target and object systems confers an epistemological advantage on traditional forms of experimental inquiry.
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris
2018-03-01
Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
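The law-of-total-probability step described above corresponds, in generic Bayesian model averaging form (not necessarily the authors' exact notation), to

\[
\mathbb{E}\left[Q_t \mid D\right] = \sum_{k} \Pr\left(M_k \mid D\right)\,\mathbb{E}\left[Q_t \mid M_k, D\right],
\]

where each $M_k$ is a precipitation-product/hydrological-model combination and the posterior weights $\Pr(M_k \mid D)$ are, in e-Bay, allowed to vary with the magnitude and timing of the simulated discharge $Q_t$.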
Model-Based Verification and Validation of the SMAP Uplink Processes
NASA Technical Reports Server (NTRS)
Khan, M. Omair; Dubos, Gregory F.; Tirona, Joseph; Standley, Shaun
2013-01-01
This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes allow by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based V&V development efforts.
Identifying model error in metabolic flux analysis - a generalized least squares approach.
Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G
2016-09-13
The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
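In generic form (not the paper's exact notation), the GLS estimate and per-flux t-statistic referred to above are

\[
\hat{v} = \left(R^{\top}\Sigma^{-1}R\right)^{-1}R^{\top}\Sigma^{-1}m,
\qquad
t_i = \frac{\hat{v}_i}{\sqrt{\left[\left(R^{\top}\Sigma^{-1}R\right)^{-1}\right]_{ii}}},
\]

where $m$ is the measurement vector with covariance $\Sigma$, $R$ maps the fluxes $v$ to the measured quantities, and each $t_i$ is compared against a Student's t distribution to judge whether the estimated flux is significant given both measurement error and model fit.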
NASA Technical Reports Server (NTRS)
Brandon, Jay M.; Foster, John V.
1998-01-01
As airplane designs have trended toward the expansion of flight envelopes into the high angle-of-attack and high angular-rate regimes, concerns have arisen regarding how to model the complex unsteady aerodynamics for simulation. Most current modeling methods still rely on traditional body-axis damping coefficients that are measured using techniques intended for relatively benign flight conditions. This paper presents recent wind tunnel results obtained during large-amplitude pitch, roll and yaw testing of several fighter airplane configurations. A review of the similitude requirements for applying sub-scale test results to full-scale conditions is presented. The data are then shown to be a strong function of Strouhal number - not only the traditional damping terms, but also the associated static stability terms. Additionally, large effects of sideslip are seen in the damping parameter that should be included in simulation math models. Finally, an example of the inclusion of frequency effects on the data in a simulation is shown.
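For reference, the nondimensional frequency referred to above is conventionally defined as

\[
St = \frac{f\,L}{V}
\qquad\text{or, for pitch oscillations, the reduced frequency}\qquad
k = \frac{\omega \bar{c}}{2V},
\]

and matching it between sub-scale forced-oscillation tests and full-scale flight is the similitude requirement discussed in the review (the exact nondimensionalization used in the paper may differ).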
Simulating Exposure Concentrations of Engineered Nanomaterials in Surface Water Systems: WASP8
The unique properties of engineered nanomaterials led to their increased production and potential release into the environment. Currently available environmental fate models developed for traditional contaminants are limited in their ability to simulate nanomaterials’ envir...
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
A review of the use of simulation in dental education.
Perry, Suzanne; Bridges, Susan Margaret; Burrow, Michael Francis
2015-02-01
In line with advances in technology and communication, medical simulations are being developed to support the acquisition of requisite psychomotor skills before real-life clinical application. This review article aims to give a general overview of simulation in a cognate field, clinical dental education. Simulations in dentistry are not a new phenomenon; however, recent developments in virtual-reality technology using computer-generated simulations of 3-dimensional images or environments are providing more optimal practice conditions to smooth the transition from the traditional model-based simulation laboratory to the clinic. Evidence for the positive aspects of virtual reality includes increased effectiveness in comparison with traditional simulation teaching techniques, more efficient learning, objective and reproducible feedback, unlimited training hours, and enhanced cost-effectiveness for teaching establishments. Reported negative aspects include initial setup costs, faculty training requirements, and the limited variety of content in current educational simulation programs.
Simulation in Surgical Education
de Montbrun, Sandra L.; MacRae, Helen
2012-01-01
The pedagogical approach to surgical training has changed significantly over the past few decades. No longer are surgical skills solely acquired through a traditional apprenticeship model of training. The acquisition of many technical and nontechnical skills is moving from the operating room to the surgical skills laboratory through the use of simulation. Many platforms exist for the learning and assessment of surgical skills. In this article, the authors provide a broad overview of some of the currently available surgical simulation modalities including bench-top models, laparoscopic simulators, simulation for new surgical technologies, and simulation for nontechnical surgical skills. PMID:23997671
Best opening face system for sweepy, eccentric logs : a user’s guide
David W. Lewis
1985-01-01
Log breakdown simulation models have gained rapid acceptance within the sawmill industry in the last 15 years. Although they have many advantages over traditional decision making tools, the existing models do not calculate yield correctly when used to simulate the breakdown of eccentric, sweepy logs in North American sawmills producing softwood dimension lumber. In an...
High-resolution numerical models for smoke transport in plumes from wildland fires
Philip Cunningham; Scott Goodrick
2013-01-01
A high-resolution large-eddy simulation (LES) model is employed to examine the fundamental structure and dynamics of buoyant plumes arising from heat sources representative of wildland fires. Herein we describe several aspects of the mean properties of the simulated plumes. Mean plume trajectories are apparently well described by the traditional two-thirds law for...
Evaluation of Teaching the IS-LM Model through a Simulation Program
ERIC Educational Resources Information Center
Pablo-Romero, Maria del Populo; Pozo-Barajas, Rafael; Gomez-Calero, Maria de la Palma
2012-01-01
The IS-LM model is a basic tool used in the teaching of short-term macroeconomics. Teaching is essentially done through the use of graphs. However, the way these graphs are traditionally taught does not allow the learner to easily visualise changes in the curves. The IS-LM simulation program overcomes difficulties encountered in understanding the…
Optimization of GM(1,1) power model
NASA Astrophysics Data System (ADS)
Luo, Dang; Sun, Yu-ling; Song, Bo
2013-10-01
The GM(1,1) power model is an extension of the traditional GM(1,1) model and the Grey Verhulst model. Compared with the traditional models, the GM(1,1) power model has the advantage that the power exponent which best matches the actual data can be identified, so the model can reflect nonlinear features of the data and simulate and forecast with high accuracy. Determining the best power exponent is therefore a key step in the modeling process. In this paper, noting that the whitening equation of the GM(1,1) power model is a Bernoulli equation, we use a variable substitution to turn it into the linear whitening-equation form of the GM(1,1) model, construct the grey differential equation appropriately to establish the GM(1,1) power model, and solve for its parameters with a pattern search method. Finally, we illustrate the effectiveness of the new method with the example of simulating and forecasting the promotion rates from senior secondary schools to higher education in China.
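The whitening-equation manipulation described above can be written as the standard Bernoulli reduction (notation is generic):

\[
\frac{\mathrm{d}x^{(1)}}{\mathrm{d}t} + a\,x^{(1)} = b\left(x^{(1)}\right)^{\gamma},
\qquad
y = \left(x^{(1)}\right)^{1-\gamma}
\;\Longrightarrow\;
\frac{\mathrm{d}y}{\mathrm{d}t} + (1-\gamma)\,a\,y = (1-\gamma)\,b,
\]

which is the linear whitening-equation form of the ordinary GM(1,1) model; the pattern search then selects the power exponent $\gamma$ (together with $a$ and $b$) that minimizes the simulation error.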
Simulated Long-term Effects of the MOFEP Cutting Treatments
David R. Larsen
1997-01-01
Changes in average basal area and volume per acre were simulated for a 35-year period using the treatments designated for sites 4, 5, and 6 of the Missouri Ozark Forest Ecosystem Project. A traditional growth and yield model (Central States TWIGS variant of the Forest Vegetation Simulator) was used with Landscape Management System Software to simulate and display...
Model-as-a-service (MaaS) using the cloud service innovation platform (CSIP)
USDA-ARS?s Scientific Manuscript database
Cloud infrastructures for modelling activities such as data processing, performing environmental simulations, or conducting model calibrations/optimizations provide a cost effective alternative to traditional high performance computing approaches. Cloud-based modelling examples emerged into the more...
Parallel discrete event simulation using shared memory
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1988-01-01
With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
Snip, L J P; Flores-Alsina, X; Aymerich, I; Rodríguez-Mozaz, S; Barceló, D; Plósz, B G; Corominas, Ll; Rodriguez-Roda, I; Jeppsson, U; Gernaey, K V
2016-11-01
The use of process models to simulate the fate of micropollutants in wastewater treatment plants is constantly growing. However, due to the high workload and cost of measuring campaigns, many simulation studies lack sufficiently long time series representing realistic wastewater influent dynamics. In this paper, the feasibility of the Benchmark Simulation Model No. 2 (BSM2) influent generator is tested to create realistic dynamic influent (micro)pollutant disturbance scenarios. The presented set of models is adjusted to describe the occurrence of three pharmaceutical compounds and one of each of its metabolites with samples taken every 2-4h: the anti-inflammatory drug ibuprofen (IBU), the antibiotic sulfamethoxazole (SMX) and the psychoactive carbamazepine (CMZ). Information about type of excretion and total consumption rates forms the basis for creating the data-defined profiles used to generate the dynamic time series. In addition, the traditional influent characteristics such as flow rate, ammonium, particulate chemical oxygen demand and temperature are also modelled using the same framework with high frequency data. The calibration is performed semi-automatically with two different methods depending on data availability. The 'traditional' variables are calibrated with the Bootstrap method while the pharmaceutical loads are estimated with a least squares approach. The simulation results demonstrate that the BSM2 influent generator can describe the dynamics of both traditional variables and pharmaceuticals. Lastly, the study is complemented with: 1) the generation of longer time series for IBU following the same catchment principles; 2) the study of the impact of in-sewer SMX biotransformation when estimating the average daily load; and, 3) a critical discussion of the results, and the future opportunities of the presented approach balancing model structure/calibration procedure complexity versus predictive capabilities. Copyright © 2016. Published by Elsevier B.V.
Application of global kinetic models to HMX beta-delta transition and cookoff processes.
Wemhoff, Aaron P; Burnham, Alan K; Nichols, Albert L
2007-03-08
The reduction of the number of reactions in kinetic models for both the HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) beta-delta phase transition and thermal cookoff provides an attractive alternative to traditional multi-stage kinetic models due to reduced calibration effort requirements. In this study, we use the LLNL code ALE3D to provide calibrated kinetic parameters for a two-reaction bidirectional beta-delta HMX phase transition model based on Sandia instrumented thermal ignition (SITI) and scaled thermal explosion (STEX) temperature history curves, and a Prout-Tompkins cookoff model based on one-dimensional time to explosion (ODTX) data. Results show that the two-reaction bidirectional beta-delta transition model presented here agrees as well with STEX and SITI temperature history curves as a reversible four-reaction Arrhenius model yet requires an order of magnitude less computational effort. In addition, a single-reaction Prout-Tompkins model calibrated to ODTX data provides better agreement with ODTX data than a traditional multistep Arrhenius model and can contain up to 90% fewer chemistry-limited time steps for low-temperature ODTX simulations. Manual calibration methods for the Prout-Tompkins kinetics provide much better agreement with ODTX experimental data than parameters derived from differential scanning calorimetry (DSC) measurements at atmospheric pressure. The predicted surface temperature at explosion for STEX cookoff simulations is a weak function of the cookoff model used, and a reduction of up to 15% of chemistry-limited time steps can be achieved by neglecting the beta-delta transition for this type of simulation. Finally, the inclusion of the beta-delta transition model in the overall kinetics model can affect the predicted time to explosion by 1% for the traditional multistep Arrhenius approach, and up to 11% using a Prout-Tompkins cookoff model.
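As background for the cookoff kinetics discussed above, the classic Prout-Tompkins autocatalytic rate law with an Arrhenius rate constant can be written as follows; this is only the textbook form, not the calibrated extended model used in the study:

```latex
\frac{d\alpha}{dt} = k(T)\,\alpha\,(1-\alpha), \qquad k(T) = A\,\exp\!\left(-\frac{E_a}{R\,T}\right)
```

Here α is the reacted fraction, A the pre-exponential factor, E_a the activation energy, R the gas constant and T the temperature; calibration against ODTX or DSC data then amounts to fitting A, E_a and any additional exponents in the chosen variant.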
RECENT ADVANCES IN THE MODELING OF AIRBORNE SUBSTANCES
Since the 1950's, the primary mission of the Atmospheric Modeling Division has been to develop and evaluate air quality simulation models. While the Division has traditionally focused the research on the meteorological aspects of these models, this focus has expanded in recent...
NASA Astrophysics Data System (ADS)
Radev, Dimitar; Lokshina, Izabella
2010-11-01
The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequentional generators and fixed-length sequence generators, and efficient algorithms that are used to simulate self-similar behavior of IP network traffic data are developed and applied. Numerical examples are provided; and simulation results are obtained and analyzed.
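As a concrete, hedged illustration of the contrast with Poisson-based models, the sketch below aggregates heavy-tailed ON/OFF sources (Pareto sojourn times), a standard way to obtain asymptotically self-similar traffic; it is not the specific generator developed in the paper, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy generator of (asymptotically) self-similar traffic by aggregating
# heavy-tailed ON/OFF sources with Pareto sojourn times, a standard contrast
# to Poisson-based models.  Not the paper's generators; parameters invented.
def pareto(alpha, rng, xm=1.0):
    return xm * (1.0 - rng.random()) ** (-1.0 / alpha)

def onoff_source(n_slots, alpha, rng):
    load = np.zeros(n_slots)
    t, on = 0, True
    while t < n_slots:
        dur = int(np.ceil(pareto(alpha, rng)))
        if on:
            load[t:t + dur] = 1.0
        t += dur
        on = not on
    return load

n_slots, n_sources, alpha = 50_000, 50, 1.5   # 1 < alpha < 2 -> long-range dependence
traffic = sum(onoff_source(n_slots, alpha, rng) for _ in range(n_sources))

# For self-similar traffic the variance of the aggregated series decays much
# more slowly than the 1/m expected of Poisson-like (short-range) models.
for m in (1, 10, 100, 1000):
    agg = traffic[: n_slots // m * m].reshape(-1, m).mean(axis=1)
    print(f"aggregation level m={m:5d}: variance = {agg.var():.3f}")
```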
The effects of seed dispersal on the simulation of long-term forest landscape change
Hong S. He; David J. Mladenoff
1999-01-01
The study of forest landscape change requires an understanding of the complex interactions of both spatial and temporal factors. Traditionally, forest gap models have been used to simulate change on small and independent plots. While gap models are useful in examining forest ecological dynamics across temporal scales, large, spatial processes, such as seed dispersal,...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wemhoff, A P; Burnham, A K; Nichols III, A L
The reduction of the number of reactions in kinetic models for both the HMX beta-delta phase transition and thermal cookoff provides an attractive alternative to traditional multi-stage kinetic models due to reduced calibration effort requirements. In this study, we use the LLNL code ALE3D to provide calibrated kinetic parameters for a two-reaction bidirectional beta-delta HMX phase transition model based on Sandia Instrumented Thermal Ignition (SITI) and Scaled Thermal Explosion (STEX) temperature history curves, and a Prout-Tompkins cookoff model based on One-Dimensional Time to Explosion (ODTX) data. Results show that the two-reaction bidirectional beta-delta transition model presented here agrees as well with STEX and SITI temperature history curves as a reversible four-reaction Arrhenius model, yet requires an order of magnitude less computational effort. In addition, a single-reaction Prout-Tompkins model calibrated to ODTX data provides better agreement with ODTX data than a traditional multi-step Arrhenius model, and can contain up to 90% fewer chemistry-limited time steps for low-temperature ODTX simulations. Manual calibration methods for the Prout-Tompkins kinetics provide much better agreement with ODTX experimental data than parameters derived from Differential Scanning Calorimetry (DSC) measurements at atmospheric pressure. The predicted surface temperature at explosion for STEX cookoff simulations is a weak function of the cookoff model used, and a reduction of up to 15% of chemistry-limited time steps can be achieved by neglecting the beta-delta transition for this type of simulation. Finally, the inclusion of the beta-delta transition model in the overall kinetics model can affect the predicted time to explosion by 1% for the traditional multi-step Arrhenius approach, while up to 11% using a Prout-Tompkins cookoff model.
Towards a unified Global Weather-Climate Prediction System
NASA Astrophysics Data System (ADS)
Lin, S. J.
2016-12-01
The Geophysical Fluid Dynamics Laboratory has been developing a unified regional-global modeling system with variable resolution capabilities that can be used for severe weather predictions and kilometer scale regional climate simulations within a unified global modeling system. The foundation of this flexible modeling system is the nonhydrostatic Finite-Volume Dynamical Core on the Cubed-Sphere (FV3). A unique aspect of FV3 is that it is "vertically Lagrangian" (Lin 2004), essentially reducing the equation sets to two dimensions, and is the single most important reason why FV3 outperforms other non-hydrostatic cores. Owing to its accuracy, adaptability, and computational efficiency, the FV3 has been selected as the "engine" for NOAA's Next Generation Global Prediction System (NGGPS). We have built into the modeling system a stretched grid, a two-way regional-global nested grid, and an optimal combination of the stretched and two-way nests capability, making kilometer-scale regional simulations within a global modeling system feasible. Our main scientific goal is to enable simulations of high impact weather phenomena (such as tornadoes, thunderstorms, category-5 hurricanes) within an IPCC-class climate modeling system previously regarded as impossible. In this presentation I will demonstrate that, with the FV3, it is computationally feasible to simulate not only super-cell thunderstorms, but also the subsequent genesis of tornado-like vortices using a global model that was originally designed for climate simulations. The development and tuning strategies for traditional weather and climate models are fundamentally different due to different metrics. We were able to adapt and use traditional "climate" metrics or standards, such as angular momentum conservation, energy conservation, and flux balance at top of the atmosphere, and gain insight into problems of traditional weather prediction models for medium-range weather prediction, and vice versa. Therefore, the unification in weather and climate models can happen not just at the algorithm or parameterization level, but also in the metric and tuning strategy used for both applications, and ultimately, with benefits to both weather and climate applications.
Coarse-graining to the meso and continuum scales with molecular-dynamics-like models
NASA Astrophysics Data System (ADS)
Plimpton, Steve
Many engineering-scale problems that industry or the national labs try to address with particle-based simulations occur at length and time scales well beyond the most optimistic hopes of traditional coarse-graining methods for molecular dynamics (MD), which typically start at the atomic scale and build upward. However classical MD can be viewed as an engine for simulating particles at literally any length or time scale, depending on the models used for individual particles and their interactions. To illustrate I'll highlight several coarse-grained (CG) materials models, some of which are likely familiar to molecular-scale modelers, but others probably not. These include models for water droplet freezing on surfaces, dissipative particle dynamics (DPD) models of explosives where particles have internal state, CG models of nano or colloidal particles in solution, models for aspherical particles, Peridynamics models for fracture, and models of granular materials at the scale of industrial processing. All of these can be implemented as MD-style models for either soft or hard materials; in fact they are all part of our LAMMPS MD package, added either by our group or contributed by collaborators. Unlike most all-atom MD simulations, CG simulations at these scales often involve highly non-uniform particle densities. So I'll also discuss a load-balancing method we've implemented for these kinds of models, which can improve parallel efficiencies. From the physics point-of-view, these models may be viewed as non-traditional or ad hoc. But because they are MD-style simulations, there's an opportunity for physicists to add statistical mechanics rigor to individual models. Or, in keeping with a theme of this session, to devise methods that more accurately bridge models from one scale to the next.
Analysis of mixed model in gear transmission based on ADAMS
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2012-09-01
Traditional methods for simulating mechanical gear drives include the gear-pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower accuracy of results; the latter usually obtains higher precision, but the calculation process is complex and does not converge easily. Most current research focuses on the description of geometric models and the definition of boundary conditions, but none of it solves these problems fundamentally. To improve simulation efficiency while ensuring high accuracy, a mixed-model method is presented in which gear tooth profiles take the place of the solid gear to simulate gear movement. In the modeling process, the solid models of the mechanism are first built in SolidWorks; the point coordinates of the gear outline curves are then collected using the SolidWorks API and fitted as curves in Adams; next, the positions of the fitted curves are adjusted according to the position of the contact area; finally, the loading conditions, boundary conditions and simulation parameters are defined. The method provides gear shape information through the tooth profile curves, simulates the meshing process through curve-to-curve contact of the tooth profiles, and supplies mass and inertia data via the solid gear models. The simulation combines the two models to complete the gear drive analysis. To verify the validity of the presented method, both a theoretical derivation and a numerical simulation of a runaway escapement are conducted. The results show that the computational efficiency of the mixed-model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results are closer to the theoretical calculations. Consequently, the mixed-model method has high application value for the study of gear mechanism dynamics.
Simulation Exercises for an Undergraduate Digital Process Control Course.
ERIC Educational Resources Information Center
Reeves, Deborah E.; Schork, F. Joseph
1988-01-01
Presents six problems from an alternative approach to homework traditionally given to follow-up lectures. Stresses the advantage of longer term exercises which allow for creativity and independence on the part of the student. Problems include: "System Model," "Open-Loop Simulation," "PID Control," "Dahlin…
A Modal Model to Simulate Typical Structural Dynamic Nonlinearity [PowerPoint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayes, Randall L.; Pacini, Benjamin Robert; Roettgen, Dan
2016-01-01
Some initial investigations have been published which simulate nonlinear response with almost traditional modal models: instead of connecting the modal mass to ground through the traditional spring and damper, a nonlinear Iwan element was added. This assumes that the mode shapes do not change with amplitude and there are no interactions between modal degrees of freedom. This work expands on these previous studies. An impact experiment is performed on a structure which exhibits typical structural dynamic nonlinear response, i.e. weak frequency dependence and strong damping dependence on the amplitude of vibration. Use of low level modal test results in combination with high level impacts are processed using various combinations of modal filtering, the Hilbert Transform and band-pass filtering to develop response data that are then fit with various nonlinear elements to create a nonlinear pseudo-modal model. Simulations of forced response are compared with high level experimental data for various nonlinear element assumptions.
A Modal Model to Simulate Typical Structural Dynamic Nonlinearity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pacini, Benjamin Robert; Mayes, Randall L.; Roettgen, Daniel R
2015-10-01
Some initial investigations have been published which simulate nonlinear response with almost traditional modal models: instead of connecting the modal mass to ground through the traditional spring and damper, a nonlinear Iwan element was added. This assumes that the mode shapes do not change with amplitude and there are no interactions between modal degrees of freedom. This work expands on these previous studies. An impact experiment is performed on a structure which exhibits typical structural dynamic nonlinear response, i.e. weak frequency dependence and strong damping dependence on the amplitude of vibration. Use of low level modal test results in combination with high level impacts are processed using various combinations of modal filtering, the Hilbert Transform and band-pass filtering to develop response data that are then fit with various nonlinear elements to create a nonlinear pseudo-modal model. Simulations of forced response are compared with high level experimental data for various nonlinear element assumptions.
Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion
NASA Astrophysics Data System (ADS)
Li, Z.; Ghaith, M.
2017-12-01
Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Sequentially, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
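The core mechanics of such an expansion can be sketched in a few lines; the toy model, parameter values and sample sizes below are invented for illustration and are not those of the study:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

# Toy "hydrological model": output as a nonlinear function of one uncertain
# parameter theta ~ N(mu, sigma^2).  Purely illustrative, not from the study.
def model(theta):
    return 10.0 * np.exp(-0.3 * theta) + 0.5 * theta ** 2

mu, sigma, order = 2.0, 0.4, 4

# Probabilistic collocation: evaluate the model at samples of the standard
# normal variable xi, where theta = mu + sigma * xi.
rng = np.random.default_rng(0)
xi = rng.standard_normal(200)
y = model(mu + sigma * xi)

# Least-squares fit of probabilists' Hermite (He_n) coefficients -> the PCE.
V = hermevander(xi, order)              # columns He_0(xi) ... He_order(xi)
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

# Mean is the He_0 coefficient; orthogonality (E[He_m He_n] = n! delta_mn)
# gives the variance directly from the remaining coefficients.
pce_mean = coef[0]
pce_var = sum(factorial(n) * coef[n] ** 2 for n in range(1, order + 1))

# Plain Monte Carlo reference.
mc = model(mu + sigma * rng.standard_normal(100_000))
print(f"PCE mean={pce_mean:.4f}  var={pce_var:.4f}")
print(f"MC  mean={mc.mean():.4f}  var={mc.var():.4f}")
```

In a fuller workflow the fitted coefficients, rather than the model itself, would then be regressed against meteorological inputs to produce forecasts, as the abstract describes.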
NASA Astrophysics Data System (ADS)
Li, Mingchao; Han, Shuai; Zhou, Sibao; Zhang, Ye
2018-06-01
Based on a 3D model of a discrete fracture network (DFN) in a rock mass, an improved projective method for computing the 3D mechanical connectivity rate was proposed. The Monte Carlo simulation method, 2D Poisson process and 3D geological modeling technique were integrated into a polyhedral DFN modeling approach, and the simulation results were verified by numerical tests and graphical inspection. Next, the traditional projective approach for calculating the rock mass connectivity rate was improved using the 3D DFN models by (1) using the polyhedral model to replace the Baecher disk model; (2) taking the real cross section of the rock mass, rather than a part of the cross section, as the test plane; and (3) dynamically searching the joint connectivity rates using different dip directions and dip angles at different elevations to calculate the maximum, minimum and average values of the joint connectivity at each elevation. In a case study, the improved method and traditional method were used to compute the mechanical connectivity rate of the slope of a dam abutment. The results of the two methods were further used to compute the cohesive force of the rock masses. Finally, a comparison showed that the cohesive force derived from the traditional method had a higher error, whereas the cohesive force derived from the improved method was consistent with the suggested values. According to the comparison, the effectivity and validity of the improved method were verified indirectly.
Parallel discrete event simulation: A shared memory approach
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1987-01-01
With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
Don C. Bragg; Jeffrey L. Kershner
2004-01-01
Riparian large woody debris (LWD) recruitment simulations have traditionally applied a random angle of tree fall from two well-forested stream banks. We used a riparian LWD recruitment model (CWD, version 1.4) to test the validity of these assumptions. Both the number of contributing forest banks and predominant tree fall direction significantly influenced simulated...
Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation
NASA Technical Reports Server (NTRS)
Stocker, John C.; Golomb, Andrew M.
2011-01-01
Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
Under the Toxic Substances Control Act (TSCA), the Environmental Protection Agency (EPA) is required to perform new chemical reviews of nanomaterials identified in premanufacture notices. However, environmental fate models developed for traditional contaminants are limited in the...
Self-adaptive Fault-Tolerance of HLA-Based Simulations in the Grid Environment
NASA Astrophysics Data System (ADS)
Huang, Jijie; Chai, Xudong; Zhang, Lin; Li, Bo Hu
The objects of an HLA-based simulation can access model services to update their attributes. However, the grid server may become overloaded and refuse to let the model service handle object accesses. Because these objects accessed this model service during the last simulation loop and their intermediate states are stored on this server, such a refusal may terminate the simulation. A fault-tolerance mechanism must therefore be introduced into simulations. However, traditional fault-tolerance methods cannot meet this need because the transmission latency between a federate and the RTI in a grid environment varies from several hundred milliseconds to several seconds. By adding model service URLs to the OMT and expanding the HLA services and model services with additional interfaces, this paper proposes a self-adaptive fault-tolerance mechanism for simulations based on the characteristics of federates accessing model services. Benchmark experiments indicate that the expanded HLA/RTI allows simulations to run self-adaptively in the grid environment.
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
1997-01-01
The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.
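The abstract does not spell out the reduction algorithm; as one hedged illustration of the general idea (projecting a large linear state-space model onto a few dominant modes), here is a POD/Galerkin-style sketch on a toy random system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "large" stable linear model  dx/dt = A x + B u,  y = C x,  standing in
# for a CFD-derived state-space model (sizes and entries are invented).
n, n_r = 200, 10
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))   # comfortably stable
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Collect impulse-response snapshots x(t_k) with explicit time stepping.
dt, steps = 0.01, 400
x = B[:, 0].copy()
snapshots = []
for _ in range(steps):
    x = x + dt * (A @ x)
    snapshots.append(x.copy())
X = np.column_stack(snapshots)

# POD: dominant left singular vectors of the snapshot matrix span the reduced
# subspace; Galerkin projection yields the reduced operators.
U, s, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :n_r]
A_r, B_r, C_r = Phi.T @ A @ Phi, Phi.T @ B, C @ Phi

energy_captured = float(np.sum(s[:n_r] ** 2) / np.sum(s ** 2))
print(f"reduced order: {A_r.shape[0]}  snapshot energy captured: {energy_captured:.4f}")
```

The reduced triple (A_r, B_r, C_r) plays the role of the low-order linear model handed to control design; the specific reduction and conditioning fixes mentioned in the abstract for 2-D and 3-D CFD matrices are not reproduced here.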
ERIC Educational Resources Information Center
Pant, Mohan Dev
2011-01-01
The Burr families (Type III and Type XII) of distributions are traditionally used in the context of statistical modeling and for simulating non-normal distributions with moment-based parameters (e.g., Skew and Kurtosis). In educational and psychological studies, the Burr families of distributions can be used to simulate extremely asymmetrical and…
Real-time simulation of an F110/STOVL turbofan engine
NASA Technical Reports Server (NTRS)
Drummond, Colin K.; Ouzts, Peter J.
1989-01-01
A traditional F110-type turbofan engine model was extended to include a ventral nozzle and two thrust-augmenting ejectors for Short Take-Off Vertical Landing (STOVL) aircraft applications. Development of the real-time F110/STOVL simulation required special attention to the modeling approach to component performance maps, the low pressure turbine exit mixing region, and the tailpipe dynamic approximation. Simulation validation derives by comparing output from the ADSIM simulation with the output for a validated F110/STOVL General Electric Aircraft Engines FORTRAN deck. General Electric substantiated basic engine component characteristics through factory testing and full scale ejector data.
Yang, Guowei; You, Shengzui; Bi, Meihua; Fan, Bing; Lu, Yang; Zhou, Xuefang; Li, Jing; Geng, Hujun; Wang, Tianshu
2017-09-10
Free-space optical (FSO) communication utilizing a modulating retro-reflector (MRR) is an innovative way to convey information between the traditional optical transceiver and the semi-passive MRR unit that reflects optical signals. The reflected signals experience turbulence-induced fading in the double-pass channel, which is very different from that in the traditional single-pass FSO channel. In this paper, we consider the corner cube reflector (CCR) as the retro-reflective device in the MRR. A general geometrical model of the CCR is established based on the ray tracing method to describe the ray trajectory inside the CCR. This ray tracing model could treat the general case that the optical beam is obliquely incident on the hypotenuse surface of the CCR with the dihedral angle error and surface nonflatness. Then, we integrate this general CCR model into the wave-optics (WO) simulation to construct the double-pass beam propagation simulation. This double-pass simulation contains the forward propagation from the transceiver to the MRR through the atmosphere, the retro-reflection of the CCR, and the backward propagation from the MRR to the transceiver, which can be realized by a single-pass WO simulation, the ray tracing CCR model, and another single-pass WO simulation, respectively. To verify the proposed CCR model and double-pass WO simulation, the effective reflection area, the incremental phase, and the reflected beam spot on the transceiver plane of the CCR are analyzed, and the numerical results are in agreement with the previously published results. Finally, we use the double-pass WO simulation to investigate the double-pass channel in the MRR FSO systems. The histograms of the turbulence-induced fading in the forward and backward channels are obtained from the simulation data and are fitted by gamma-gamma (ΓΓ) distributions. As the two opposite channels are highly correlated, we model the double-pass channel fading by the product of two correlated ΓΓ random variables (RVs).
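To make the ΓΓ fading statistics concrete, the sketch below draws unit-mean gamma-gamma samples and forms a double-pass sample as the product of a forward and a backward draw; for simplicity the two passes are treated as independent here, whereas the paper models their correlation explicitly, and the turbulence parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

def gamma_gamma(alpha, beta, size, rng):
    """Unit-mean Gamma-Gamma fading samples: product of two independent
    unit-mean gamma variates (large-scale shape alpha, small-scale shape beta)."""
    large = rng.gamma(shape=alpha, scale=1.0 / alpha, size=size)
    small = rng.gamma(shape=beta, scale=1.0 / beta, size=size)
    return large * small

# Illustrative turbulence parameters, not fitted values from the paper.
alpha, beta, n = 4.0, 2.0, 200_000

forward = gamma_gamma(alpha, beta, n, rng)    # transceiver -> MRR pass
backward = gamma_gamma(alpha, beta, n, rng)   # MRR -> transceiver pass
double_pass = forward * backward              # independence assumed here

def scintillation_index(I):
    # sigma_I^2 = Var[I] / E[I]^2
    return I.var() / I.mean() ** 2

print(f"single-pass scintillation index: {scintillation_index(forward):.3f}")
print(f"double-pass scintillation index: {scintillation_index(double_pass):.3f}")
```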
The WRF-CMAQ Integrated On-Line Modeling System: Development, Testing, and Initial Applications
Traditionally, atmospheric chemistry-transport and meteorology models have been applied in an off-line paradigm, in which archived output on the dynamical state of the atmosphere simulated using the meteorology model is used to drive transport and chemistry calculations of atmos...
ERIC Educational Resources Information Center
Martinez, Guadalupe; Naranjo, Francisco L.; Perez, Angel L.; Suero, Maria Isabel; Pardo, Pedro J.
2011-01-01
This study compared the educational effects of computer simulations developed in a hyper-realistic virtual environment with the educational effects of either traditional schematic simulations or a traditional optics laboratory. The virtual environment was constructed on the basis of Java applets complemented with a photorealistic visual output.…
Linking MODFLOW with an agent-based land-use model to support decision making
Reeves, H.W.; Zellner, M.L.
2010-01-01
The U.S. Geological Survey numerical groundwater flow model, MODFLOW, was integrated with an agent-based land-use model to yield a simulator for environmental planning studies. Ultimately, this integrated simulator will be used as a means to organize information, illustrate potential system responses, and facilitate communication within a participatory modeling framework. Initial results show the potential system response to different zoning policy scenarios in terms of the spatial patterns of development, which is referred to as urban form, and consequent impacts on groundwater levels. These results illustrate how the integrated simulator is capable of representing the complexity of the system. From a groundwater modeling perspective, the most important aspect of the integration is that the simulator generates stresses on the groundwater system within the simulation in contrast to the traditional approach that requires the user to specify the stresses through time. Copyright © 2010 The Author(s). Journal compilation © 2010 National Ground Water Association.
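The key structural point, stresses generated inside the simulation by the land-use agents rather than prescribed in advance, can be sketched with a toy coupling loop; the groundwater response below is a trivial linear stand-in, not MODFLOW, and every number is invented:

```python
import numpy as np

rng = np.random.default_rng(3)

n_cells, years = 50, 20
developed = np.zeros(n_cells, dtype=bool)   # agent-chosen land use per cell
head = np.full(n_cells, 100.0)              # groundwater head per cell (m)

PUMP_DRAWDOWN = 0.8     # head loss per developed cell per year (toy response)
RECHARGE = 0.3          # natural recovery per year (toy response)
HEAD_LIMIT = 90.0       # zoning rule: no new development where head is low

for year in range(years):
    # Agent step: each year a few agents try to develop an undeveloped cell,
    # subject to a simple zoning rule based on the current groundwater head.
    for _ in range(3):
        cell = rng.integers(n_cells)
        if not developed[cell] and head[cell] > HEAD_LIMIT:
            developed[cell] = True

    # "Groundwater" step: pumping stresses come from the agents' choices made
    # inside the loop, not from a user-specified stress schedule.
    head -= PUMP_DRAWDOWN * developed
    head += RECHARGE
    head = np.minimum(head, 100.0)

print(f"developed cells after {years} years: {developed.sum()}")
print(f"mean head: {head.mean():.1f} m, min head: {head.min():.1f} m")
```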
Modelling and simulating reaction-diffusion systems using coloured Petri nets.
Liu, Fei; Blätke, Mary-Ann; Heiner, Monika; Yang, Ming
2014-10-01
Reaction-diffusion systems often play an important role in systems biology when developmental processes are involved. Traditional methods of modelling and simulating such systems require substantial prior knowledge of mathematics and/or simulation algorithms. Such skills may impose a challenge for biologists, when they are not equally well-trained in mathematics and computer science. Coloured Petri nets as a high-level and graphical language offer an attractive alternative, which is easily approachable. In this paper, we investigate a coloured Petri net framework integrating deterministic, stochastic and hybrid modelling formalisms and corresponding simulation algorithms for the modelling and simulation of reaction-diffusion processes that may be closely coupled with signalling pathways, metabolic reactions and/or gene expression. Such systems often manifest multiscaleness in time, space and/or concentration. We introduce our approach by means of some basic diffusion scenarios, and test it against an established case study, the Brusselator model. Copyright © 2014 Elsevier Ltd. All rights reserved.
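Since the Brusselator is named as the case study, a plain finite-difference integration of the Brusselator reaction-diffusion equations in one dimension (not a coloured Petri net implementation) makes the modelled dynamics concrete; parameters are standard textbook values chosen for illustration:

```python
import numpy as np

# 1-D Brusselator reaction-diffusion system, explicit finite differences:
#   du/dt = Du * u_xx + A - (B + 1) * u + u^2 * v
#   dv/dt = Dv * v_xx + B * u        - u^2 * v
A, B = 1.0, 3.0
Du, Dv = 0.02, 0.1
nx, dx, dt, steps = 200, 0.1, 0.001, 20_000

rng = np.random.default_rng(0)
u = A + 0.1 * rng.standard_normal(nx)        # perturb the homogeneous state
v = B / A + 0.1 * rng.standard_normal(nx)

def laplacian(f):
    # Second difference with periodic boundary conditions.
    return (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx ** 2

for _ in range(steps):
    react = u * u * v
    du_dt = Du * laplacian(u) + A - (B + 1.0) * u + react
    dv_dt = Dv * laplacian(v) + B * u - react
    u, v = u + dt * du_dt, v + dt * dv_dt

print("u range:", float(u.min()), "to", float(u.max()))
print("v range:", float(v.min()), "to", float(v.max()))
```

A coloured Petri net encoding of the same system would represent the grid cells as coloured places and diffusion/reaction as transitions, with the simulation algorithm (deterministic, stochastic or hybrid) chosen separately, as the abstract describes.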
Multi-Site λ-dynamics for simulated Structure-Activity Relationship studies
Knight, Jennifer L.; Brooks, Charles L.
2011-01-01
Multi-Site λ-dynamics (MSλD) is a new free energy simulation method that is based on λ-dynamics. It has been developed to enable multiple substituents at multiple sites on a common ligand core to be modeled simultaneously and their free energies assessed. The efficacy of MSλD for estimating relative hydration free energies and relative binding affinities is demonstrated using three test systems. Model compounds representing multiple identical benzene, dihydroxybenzene and dimethoxybenzene molecules show total combined MSλD trajectory lengths of ~1.5 ns are sufficient to reliably achieve relative hydration free energy estimates within 0.2 kcal/mol and are less sensitive to the number of trajectories that are used to generate these estimates for hybrid ligands that contain up to ten substituents modeled at a single site or five substituents modeled at each of two sites. Relative hydration free energies among six benzene derivatives calculated from MSλD simulations are in very good agreement with those from alchemical free energy simulations (with average unsigned differences of 0.23 kcal/mol and R2=0.991) and experiment (with average unsigned errors of 1.8 kcal/mol and R2=0.959). Estimates of the relative binding affinities among 14 inhibitors of HIV-1 reverse transcriptase obtained from MSλD simulations are in reasonable agreement with those from traditional free energy simulations and experiment (average unsigned errors of 0.9 kcal/mol and R2=0.402). For the same level of accuracy and precision, MSλD simulations are ~20–50 times faster than traditional free energy simulations and thus with reliable force field parameters can be used effectively to screen tens to hundreds of compounds in structure-based drug design applications. PMID:22125476
The Role of Simulation in Microsurgical Training.
Evgeniou, Evgenios; Walker, Harriet; Gujral, Sameer
Simulation has been established as an integral part of microsurgical training. The aim of this study was to assess and categorize the various simulation models in relation to the complexity of the microsurgical skill being taught and analyze the assessment methods commonly employed in microsurgical simulation training. Numerous courses have been established using simulation models. These models can be categorized, according to the level of complexity of the skill being taught, into basic, intermediate, and advanced. Microsurgical simulation training should be assessed using validated assessment methods. Assessment methods vary significantly from subjective expert opinions to self-assessment questionnaires and validated global rating scales. The appropriate assessment method should carefully be chosen based on the simulation modality. Simulation models should be validated, and a model with appropriate fidelity should be chosen according to the microsurgical skill being taught. Assessment should move from traditional simple subjective evaluations of trainee performance to validated tools. Future studies should assess the transferability of skills gained during simulation training to the real-life setting. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Hui; Rasch, Philip J.; Zhang, Kai
2014-09-08
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
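A toy statistical analogue of the strategy, using an AR(1) process as a stand-in for a fast model diagnostic, shows the essential trade: one long serial integration versus an embarrassingly parallel ensemble of short runs with spin-up discarded (all values invented):

```python
import numpy as np

rng = np.random.default_rng(42)
phi, sigma = 0.9, 1.0      # AR(1) stand-in for a fast model diagnostic

def ar1_run(n_steps, rng, x0=0.0):
    noise = sigma * rng.standard_normal(n_steps)
    x = np.empty(n_steps)
    prev = x0
    for t in range(n_steps):
        prev = phi * prev + noise[t]
        x[t] = prev
    return x

# One traditional long serial integration.
long_run = ar1_run(100_000, rng)

# An ensemble of many short, independent runs (could run simultaneously);
# a short spin-up is discarded from each member before averaging.
members = [ar1_run(1_000, rng)[200:] for _ in range(100)]
ensemble = np.concatenate(members)

print(f"long-run mean diagnostic:       {long_run.mean():+.4f}")
print(f"short-ensemble mean diagnostic: {ensemble.mean():+.4f}")
```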
The role of simulation in neurosurgery.
Rehder, Roberta; Abd-El-Barr, Muhammad; Hooten, Kristopher; Weinstock, Peter; Madsen, Joseph R; Cohen, Alan R
2016-01-01
In an era of residency duty-hour restrictions, there has been a recent effort to implement simulation-based training methods in neurosurgery teaching institutions. Several surgical simulators have been developed, ranging from physical models to sophisticated virtual reality systems. To date, there is a paucity of information describing the clinical benefits of existing simulators and the assessment strategies to help implement them into neurosurgical curricula. Here, we present a systematic review of the current models of simulation and discuss the state-of-the-art and future directions for simulation in neurosurgery. Retrospective literature review. Multiple simulators have been developed for neurosurgical training, including those for minimally invasive procedures, vascular, skull base, pediatric, tumor resection, functional neurosurgery, and spine surgery. The pros and cons of existing systems are reviewed. Advances in imaging and computer technology have led to the development of different simulation models to complement traditional surgical training. Sophisticated virtual reality (VR) simulators with haptic feedback and impressive imaging technology have provided novel options for training in neurosurgery. Breakthrough training simulation using 3D printing technology holds promise for future simulation practice, providing high-fidelity patient-specific models to complement residency surgical learning.
Design of Accelerator Online Simulator Server Using Structured Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Guobao; /Brookhaven; Chu, Chungming
2012-07-06
Model based control plays an important role for a modern accelerator during beam commissioning, beam study, and even daily operation. With a realistic model, beam behaviour can be predicted and therefore effectively controlled. The approach used by most current high level application environments is to use a built-in simulation engine and feed a realistic model into that simulation engine. Instead of this traditional monolithic structure, a new approach using a client-server architecture is under development. An on-line simulator server is accessed via network accessible structured data. With this approach, a user can easily access multiple simulation codes. This paper describes the design, implementation, and current status of PVData, which defines the structured data, and PVAccess, which provides network access to the structured data.
Modeling NIF experimental designs with adaptive mesh refinement and Lagrangian hydrodynamics
NASA Astrophysics Data System (ADS)
Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T. N.; Becker, R.; Eder, D. C.; MacGowan, B. J.; Schneider, M. B.
2006-06-01
Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. This newly developed modeling tool, as well as traditional ALE simulations in two and three dimensions, is applied to NIF early-light target designs.
Expedited Systems Engineering for Rapid Capability and Urgent Needs
2012-12-31
Rapid organizations start to differ from traditional ones, and there is a shift in energy, commitment, and knowledge. These findings are motivated by an analysis of effective...
Zhang, Xiaoshuai; Yang, Xiaowei; Yuan, Zhongshang; Liu, Yanxun; Li, Fangyu; Peng, Bin; Zhu, Dianwen; Zhao, Jinghua; Xue, Fuzhong
2013-01-01
For genome-wide association data analysis, two genes in any pathway, two SNPs in the two linked gene regions respectively or in the two linked exons respectively within one gene are often correlated with each other. We therefore proposed the concept of gene-gene co-association, which refers to the effects not only due to the traditional interaction under nearly independent condition but the correlation between two genes. Furthermore, we constructed a novel statistic for detecting gene-gene co-association based on Partial Least Squares Path Modeling (PLSPM). Through simulation, the relationship between traditional interaction and co-association was highlighted under three different types of co-association. Both simulation and real data analysis demonstrated that the proposed PLSPM-based statistic has better performance than single SNP-based logistic model, PCA-based logistic model, and other gene-based methods. PMID:23620809
The role of conviction and narrative in decision-making under radical uncertainty.
Tuckett, David; Nikolic, Milena
2017-08-01
We propose conviction narrative theory (CNT) to broaden decision-making theory in order to better understand and analyse how subjectively means-end rational actors cope in contexts in which the traditional assumptions in decision-making models fail to hold. Conviction narratives enable actors to draw on their beliefs, causal models, and rules of thumb to identify opportunities worth acting on, to simulate the future outcome of their actions, and to feel sufficiently convinced to act. The framework focuses on how narrative and emotion combine to allow actors to deliberate and to select actions that they think will produce the outcomes they desire. It specifies connections between particular emotions and deliberative thought, hypothesising that approach and avoidance emotions evoked during narrative simulation play a crucial role. Two mental states, Divided and Integrated, in which narratives can be formed or updated, are introduced and used to explain some familiar problems that traditional models cannot.
Zhang, Xiaoshuai; Yang, Xiaowei; Yuan, Zhongshang; Liu, Yanxun; Li, Fangyu; Peng, Bin; Zhu, Dianwen; Zhao, Jinghua; Xue, Fuzhong
2013-01-01
For genome-wide association data analysis, two genes in any pathway, two SNPs in the two linked gene regions respectively or in the two linked exons respectively within one gene are often correlated with each other. We therefore proposed the concept of gene-gene co-association, which refers to the effects not only due to the traditional interaction under nearly independent condition but the correlation between two genes. Furthermore, we constructed a novel statistic for detecting gene-gene co-association based on Partial Least Squares Path Modeling (PLSPM). Through simulation, the relationship between traditional interaction and co-association was highlighted under three different types of co-association. Both simulation and real data analysis demonstrated that the proposed PLSPM-based statistic has better performance than single SNP-based logistic model, PCA-based logistic model, and other gene-based methods.
The role of conviction and narrative in decision-making under radical uncertainty
Tuckett, David; Nikolic, Milena
2017-01-01
We propose conviction narrative theory (CNT) to broaden decision-making theory in order to better understand and analyse how subjectively means–end rational actors cope in contexts in which the traditional assumptions in decision-making models fail to hold. Conviction narratives enable actors to draw on their beliefs, causal models, and rules of thumb to identify opportunities worth acting on, to simulate the future outcome of their actions, and to feel sufficiently convinced to act. The framework focuses on how narrative and emotion combine to allow actors to deliberate and to select actions that they think will produce the outcomes they desire. It specifies connections between particular emotions and deliberative thought, hypothesising that approach and avoidance emotions evoked during narrative simulation play a crucial role. Two mental states, Divided and Integrated, in which narratives can be formed or updated, are introduced and used to explain some familiar problems that traditional models cannot. PMID:28804217
Simulation-based MDP verification for leading-edge masks
NASA Astrophysics Data System (ADS)
Su, Bo; Syrel, Oleg; Pomerantsev, Michael; Hagiwara, Kazuyuki; Pearman, Ryan; Pang, Leo; Fujimara, Aki
2017-07-01
For IC design starts below the 20nm technology node, the assist features on photomasks shrink well below 60nm and the printed patterns of those features on masks written by VSB eBeam writers start to show a large deviation from the mask designs. Traditional geometry-based fracturing starts to show large errors for those small features. As a result, other mask data preparation (MDP) methods have become available and adopted, such as rule-based Mask Process Correction (MPC), model-based MPC and eventually model-based MDP. The new MDP methods may place shot edges slightly differently from target to compensate for mask process effects, so that the final patterns on a mask are much closer to the design (which can be viewed as the ideal mask), especially for those assist features. Such an alteration generally produces better masks that are closer to the intended mask design. Traditional XOR-based MDP verification cannot detect problems caused by eBeam effects. Much like model-based OPC verification which became a necessity for OPC a decade ago, we see the same trend in MDP today. Simulation-based MDP verification solution requires a GPU-accelerated computational geometry engine with simulation capabilities. To have a meaningful simulation-based mask check, a good mask process model is needed. The TrueModel® system is a field tested physical mask model developed by D2S. The GPU-accelerated D2S Computational Design Platform (CDP) is used to run simulation-based mask check, as well as model-based MDP. In addition to simulation-based checks such as mask EPE or dose margin, geometry-based rules are also available to detect quality issues such as slivers or CD splits. Dose margin related hotspots can also be detected by setting a correct detection threshold. In this paper, we will demonstrate GPU-acceleration for geometry processing, and give examples of mask check results and performance data. GPU-acceleration is necessary to make simulation-based mask MDP verification acceptable.
NASA Technical Reports Server (NTRS)
Oluwole, Oluwayemisi O.; Wong, Hsi-Wu; Green, William
2012-01-01
AdapChem software enables high efficiency, low computational cost, and enhanced accuracy in computational fluid dynamics (CFD) numerical simulations used for combustion studies. The software dynamically allocates smaller, reduced chemical models instead of the larger, full chemistry models to evolve the calculation while ensuring that the same accuracy is obtained for steady-state CFD reacting flow simulations. The software enables detailed chemical kinetic modeling in combustion CFD simulations. AdapChem adapts the reaction mechanism used in the CFD to the local reaction conditions. Instead of a single, comprehensive reaction mechanism throughout the computation, a dynamic distribution of smaller, reduced models is used to capture accurately the chemical kinetics at a fraction of the cost of the traditional single-mechanism approach.
SIVEH: numerical computing simulation of wireless energy-harvesting sensor nodes.
Sanchez, Antonio; Blanc, Sara; Climent, Salvador; Yuste, Pedro; Ors, Rafael
2013-09-04
The paper presents a numerical energy harvesting model for sensor nodes, SIVEH (Simulator I-V for EH), based on I-V hardware tracking. I-V tracking is demonstrated to be more accurate than traditional energy modeling techniques when some of the components present different power dissipation at either different operating voltages or drawn currents. SIVEH numerical computing allows fast simulation of long periods of time-days, weeks, months or years-using real solar radiation curves. Moreover, SIVEH modeling has been enhanced with sleep time rate dynamic adjustment, while seeking energy-neutral operation. This paper presents the model description, a functional verification and a critical comparison with the classic energy approach.
SIVEH: Numerical Computing Simulation of Wireless Energy-Harvesting Sensor Nodes
Sanchez, Antonio; Blanc, Sara; Climent, Salvador; Yuste, Pedro; Ors, Rafael
2013-01-01
The paper presents a numerical energy harvesting model for sensor nodes, SIVEH (Simulator I–V for EH), based on I–V hardware tracking. I–V tracking is demonstrated to be more accurate than traditional energy modeling techniques when some of the components present different power dissipation at either different operating voltages or drawn currents. SIVEH numerical computing allows fast simulation of long periods of time—days, weeks, months or years—using real solar radiation curves. Moreover, SIVEH modeling has been enhanced with sleep time rate dynamic adjustment, while seeking energy-neutral operation. This paper presents the model description, a functional verification and a critical comparison with the classic energy approach. PMID:24008287
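The general bookkeeping such a simulator performs can be sketched as a simple power-balance loop with a daily sleep-rate adjustment toward energy-neutral operation; this is not the SIVEH I-V tracking model itself, and every parameter below is invented:

```python
import numpy as np

# Toy energy-neutral duty-cycle simulation for a harvesting sensor node.
# NOT the SIVEH I-V tracking model; every parameter below is invented.
P_ACTIVE, P_SLEEP = 60e-3, 0.1e-3      # node power draw (W) when active / asleep
PANEL_PEAK = 80e-3                     # peak harvested power at solar noon (W)
E_MAX = 3.7 * 2.0 * 3600               # storage capacity (J), roughly a 2 Ah cell
energy = 0.5 * E_MAX                   # start half charged
duty = 0.05                            # fraction of time the node is active

dt = 60.0                              # one-minute time steps
for day in range(7):
    for minute in range(24 * 60):
        hour = minute / 60.0
        # Very rough clear-sky shape: daylight between 06:00 and 18:00.
        sun = max(0.0, np.sin(np.pi * (hour - 6.0) / 12.0))
        p_in = PANEL_PEAK * sun
        p_out = duty * P_ACTIVE + (1.0 - duty) * P_SLEEP
        energy = float(np.clip(energy + (p_in - p_out) * dt, 0.0, E_MAX))

    # Daily sleep-rate adjustment toward energy-neutral operation:
    # be more active when storage is high, back off when it is low.
    if energy > 0.6 * E_MAX:
        duty = min(0.5, duty * 1.2)
    elif energy < 0.4 * E_MAX:
        duty = max(0.01, duty * 0.8)
    print(f"day {day}: duty={duty:.3f}, stored={energy / E_MAX:.1%}")
```

SIVEH's contribution, per the abstract, is to drive this kind of loop from tracked I-V curves of the real hardware and measured solar radiation rather than from constant power figures like those assumed here.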
NASA Technical Reports Server (NTRS)
daSilva, Arlinda
2012-01-01
A model-based Observing System Simulation Experiment (OSSE) is a framework for numerical experimentation in which observables are simulated from fields generated by an earth system model, including a parameterized description of observational error characteristics. Simulated observations can be used for sampling studies, quantifying errors in analysis or retrieval algorithms, and ultimately serving as a planning tool for designing new observing missions. While this framework has traditionally been used to assess the impact of observations on numerical weather prediction, it has a much broader applicability, in particular to aerosols and chemical constituents. In this talk we will give a general overview of Observing System Simulation Experiments (OSSE) activities at NASA's Global Modeling and Assimilation Office, with focus on its emerging atmospheric composition component.
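A minimal sketch of the OSSE pattern described above: a synthetic "nature run" field, a simple observation operator, and parameterized observation error (field, operator and error values are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "nature run": a 2-D aerosol optical depth (AOD) field standing in
# for output of an earth-system model (all values invented for illustration).
ny, nx = 90, 180
lat = np.linspace(-90.0, 90.0, ny)[:, None]
truth = 0.15 + 0.10 * np.cos(np.deg2rad(lat)) + 0.02 * rng.standard_normal((ny, nx))

def observe(field, row_stride=3, col_stride=5):
    # Simple observation operator H: satellite-like subsampling of the grid.
    return field[::row_stride, ::col_stride]

# Parameterized observation error: a multiplicative calibration bias plus
# additive random noise, applied to the "perfect" observations.
bias, noise_sd = 1.02, 0.01
y_perfect = observe(truth)
y_obs = bias * y_perfect + noise_sd * rng.standard_normal(y_perfect.shape)

print("simulated observations:", y_obs.shape, "mean AOD:", round(float(y_obs.mean()), 3))
```

The simulated observations y_obs (rather than real data) would then be fed to the assimilation or retrieval system under evaluation, which is what lets an OSSE quantify the value of a proposed instrument before it exists.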
Hybrid Simulation in Teaching Clinical Breast Examination to Medical Students.
Nassif, Joseph; Sleiman, Abdul-Karim; Nassar, Anwar H; Naamani, Sima; Sharara-Chami, Rana
2017-10-10
Clinical breast examination (CBE) is traditionally taught to third-year medical students using a lecture and a tabletop breast model. The opportunity to clinically practice CBE depends on patient availability and willingness to be examined by students, especially in culturally sensitive environments. We propose the use of a hybrid simulation model consisting of a standardized patient (SP) wearing a silicone breast simulator jacket and hypothesize that this, compared to traditional teaching methods, would result in improved learning. Consenting third-year medical students (N = 82) at a university-affiliated tertiary care center were cluster-randomized into two groups: hybrid simulation (breast jacket + SP) and control (tabletop breast model). Students received the standard lecture by instructors blinded to the randomization, followed by randomization group-based learning and practice sessions. Two weeks later, participants were assessed in an Objective Structured Clinical Examination (OSCE), which included three stations with SPs blinded to the intervention. The SPs graded the students on CBE completeness, and students completed a self-assessment of their performance and confidence during the examination. CBE completeness scores did not differ between the two groups (p = 0.889). Hybrid simulation improved lesion identification grades (p < 0.001) without increasing false positives. Hybrid simulation relieved the fear of missing a lesion on CBE (p = 0.043) and increased satisfaction with the teaching method among students (p = 0.002). As a novel educational tool, hybrid simulation improves the sensitivity of CBE performed by medical students without affecting its specificity. Hybrid simulation may play a role in increasing the confidence of medical students during CBE.
Hydrological Simulation Program - FORTRAN (HSPF) Data Formatting Tool (HDFT)
The HSPF data formatting and unit conversion tool has two separate applications: a web-based application and a desktop application. The tool was developed to aid users in formatting data for HSPF stormwater modeling applications. Unlike traditional HSPF modeling applications, sto...
NASA Technical Reports Server (NTRS)
Robinson, Daryl C.
2002-01-01
While the results of this paper are similar to those of [1], the technical difficulties present in [1] are eliminated here, producing better results and enabling one to more readily see the benefits of Prioritized CSMA (PCSMA). A new analysis section also helps to generalize this research so that it is not limited to exploration of the new concept of PCSMA. Simulations using commercially available network simulation software (OPNET version 7.0) are presented, involving an important application of the Aeronautical Telecommunications Network (ATN): Controller Pilot Data Link Communications (CPDLC) over the Very High Frequency Data Link Mode 2 (VDL-2). Communication is modeled for essentially all incoming and outgoing nonstop air traffic for just three United States cities: Cleveland, Cincinnati, and Detroit. The simulation involves 111 Air Traffic Control (ATC) ground stations; 32 airports distributed throughout the U.S., which are either sources or destinations for the air traffic landing at or departing from the three cities; and 1,235 equally equipped aircraft taking off, flying realistic free-flight trajectories, and landing in a 24-hr period. Collision-less PCSMA is successfully tested and compared with the traditional CSMA typically associated with VDL-2. The performance measures include latency, throughput, and packet loss. As expected, PCSMA is much quicker and more efficient than traditional CSMA. These simulation results show the potency of PCSMA for implementing low-latency, high-throughput, and efficient connectivity. Moreover, since PCSMA outperforms traditional CSMA, by simulating with it we can determine the limits of performance beyond which traditional CSMA may not pass. We therefore have the tools to determine the traffic-loading conditions under which traditional CSMA will fail, and we are testing a new and better data link that could replace it with relative ease. Work is currently being done to drastically expand the number of flights to make the simulation more representative of the National Aerospace System.
A Simple and Accurate Rate-Driven Infiltration Model
NASA Astrophysics Data System (ADS)
Cui, G.; Zhu, J.
2017-12-01
In this study, we develop a novel Rate-Driven Infiltration Model (RDIMOD) for simulating infiltration into soils. Unlike traditional methods, RDIMOD avoids numerically solving the highly non-linear Richards equation or simply modeling with empirical parameters. RDIMOD employs the infiltration rate as model input to simulate the one-dimensional infiltration process by solving an ordinary differential equation. The model can simulate the evolution of the wetting front, infiltration rate, and cumulative infiltration on any surface slope, including the vertical and horizontal directions. Compared to the results from the Richards equation for both vertical and horizontal infiltration, RDIMOD simply and accurately predicts infiltration processes for any type of soil and soil hydraulic model without numerical difficulty. Given its accuracy, capability, and computational efficiency and stability, RDIMOD can be used in large-scale hydrologic and land-atmosphere modeling.
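For context, a minimal sketch of a rate-driven infiltration balance of the kind the abstract describes; the sharp-front mass balance and all soil parameters below are illustrative assumptions, not the actual RDIMOD equations.

```python
# Illustrative rate-driven infiltration sketch (assumed form, not RDIMOD):
# with the infiltration rate i(t) prescribed as input, a sharp wetting front
# advances according to a simple mass balance
#     dL/dt = i(t) / (theta_s - theta_i),
# and cumulative infiltration is the time integral of i(t).
import numpy as np
from scipy.integrate import solve_ivp

theta_s, theta_i = 0.45, 0.10                    # saturated/initial water content (assumed)
i_rate = lambda t: 5e-6 * np.exp(-t / 3600.0)    # prescribed infiltration rate, m/s

def rhs(t, y):
    L, I = y                   # wetting-front depth, cumulative infiltration
    i = i_rate(t)
    return [i / (theta_s - theta_i), i]

sol = solve_ivp(rhs, (0.0, 7200.0), [0.0, 0.0], max_step=60.0)
print(f"front depth after 2 h: {sol.y[0, -1]:.3f} m, "
      f"cumulative infiltration: {sol.y[1, -1]*1000:.1f} mm")
```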
Ho, Cheng-Ting; Lin, Hsiu-Hsia; Liou, Eric J. W.; Lo, Lun-Jou
2017-01-01
Traditional planning methods for orthognathic surgery have limitations in cephalometric analysis, especially for patients with asymmetry. The aim of this study was to assess surgical plan modification after 3-dimensional (3D) simulation. The procedures were to perform traditional surgical planning, construction of a 3D model for the initial surgical plan (P1), a 3D model of the altered surgical plan after simulation (P2), comparison between the P1 and P2 models, surgical execution, and postoperative validation using superimposition and the root-mean-square difference (RMSD) between the postoperative 3D image and the P2 simulation model. The surgical plan was modified after 3D simulation in 93% of the cases. Absolute linear changes of landmarks in the mediolateral direction (x-axis) were significant, ranging from 1.11 to 1.62 mm. The pitch, yaw, and roll rotations as well as the ramus inclination correction also showed significant changes after the 3D planning. Yaw rotation of the maxillomandibular complex (1.88 ± 0.32°) and change of ramus inclination (3.37 ± 3.21°) were most frequently performed for correction of the facial asymmetry. Errors between the postsurgical image and the 3D simulation were acceptable, with RMSD of 0.63 ± 0.25 mm for the maxilla and 0.85 ± 0.41 mm for the mandible. The information from this study could be used to augment clinical planning and surgical execution when a conventional approach is applied. PMID:28071714
Physical Uncertainty Bounds (PUB)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaughan, Diane Elizabeth; Preston, Dean L.
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, P.; Hummon, M.
2013-02-01
Concentrating solar power (CSP) deployed with thermal energy storage (TES) provides a dispatchable source of renewable energy. The value of CSP with TES, as with other potential generation resources, needs to be established using traditional utility planning tools. Production cost models, which simulate the operation of the grid, are often used to estimate the operational value of different generation mixes. CSP with TES has historically had limited analysis in commercial production simulations. This document describes the implementation of CSP with TES in a commercial production cost model. It also describes the simulation of grid operations with CSP in a test system consisting of two balancing areas located primarily in Colorado.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, P.; Hummon, M.
2012-11-01
Concentrating solar power (CSP) deployed with thermal energy storage (TES) provides a dispatchable source of renewable energy. The value of CSP with TES, as with other potential generation resources, needs to be established using traditional utility planning tools. Production cost models, which simulate the operation of the grid, are often used to estimate the operational value of different generation mixes. CSP with TES has historically had limited analysis in commercial production simulations. This document describes the implementation of CSP with TES in a commercial production cost model. It also describes the simulation of grid operations with CSP in a test system consisting of two balancing areas located primarily in Colorado.
Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition
NASA Astrophysics Data System (ADS)
Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.
2005-12-01
Traditionally, forecasting and characterization of hydrologic systems have been performed using many techniques. Stochastic linear methods such as AR and ARIMA, and nonlinear ones such as tools based on statistical learning theory, have been extensively used. The difficulty common to all methods is the determination of sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series, combining the wavelet transform with a nonlinear model. The present model combines the merits of the wavelet transform and of nonlinear time series models. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are then simulated by the nonlinear model. The hybrid methodology is formulated so as to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand. Prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results show that a wavelet-based time series model can simulate and forecast hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.
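To make the decompose-model-recombine idea concrete, a hedged sketch follows; it substitutes a plain autoregressive model for the authors' nonlinear model, and the synthetic series, wavelet choice, and lag order are assumptions.

```python
# A minimal sketch of the wavelet + time-series hybrid idea (illustrative, not
# the authors' exact formulation): decompose a series into multiresolution
# components, fit a simple AR model to each component, and sum the forecasts.
import numpy as np
import pywt
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
t = np.arange(512)
flow = 10 + 3*np.sin(2*np.pi*t/64) + rng.normal(0, 0.5, t.size)   # synthetic series

wavelet, level, horizon = "db4", 3, 12
coeffs = pywt.wavedec(flow, wavelet, level=level)

# Reconstruct one component signal per decomposition level (zero the others).
components = []
for k in range(len(coeffs)):
    kept = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
    components.append(pywt.waverec(kept, wavelet)[: flow.size])

# Forecast each component separately and add the component forecasts together.
forecast = np.zeros(horizon)
for comp in components:
    fit = AutoReg(comp, lags=8).fit()
    forecast += fit.predict(start=comp.size, end=comp.size + horizon - 1)

print("12-step-ahead hybrid forecast:", np.round(forecast, 2))
```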
Dual Purpose Simulation: New Data Link Test and Comparison With VDL-2
NASA Technical Reports Server (NTRS)
Robinson, Daryl C.
2005-01-01
While the results of this paper are similar to those of previous research, the technical difficulties present there are eliminated here, producing better results and enabling one to more readily see the benefits of Prioritized CSMA (PCSMA). A new analysis section also helps to generalize this research so that it is not limited to exploration of the new concept of PCSMA. Simulations using commercially available network simulation software (OPNET version 7.0) are presented, involving an important application of the Aeronautical Telecommunications Network (ATN): Controller Pilot Data Link Communications (CPDLC) over the Very High Frequency Data Link Mode 2 (VDL-2). Communication is modeled for essentially all incoming and outgoing nonstop air traffic for just three United States cities: Cleveland, Cincinnati, and Detroit. The simulation involves 111 Air Traffic Control (ATC) ground stations; 32 airports distributed throughout the U.S., which are either sources or destinations for the air traffic landing at or departing from the three cities; and 1,235 equally equipped aircraft taking off, flying realistic free-flight trajectories, and landing in a 24-hr period. Collision-less PCSMA is successfully tested and compared with the traditional CSMA typically associated with VDL-2. The performance measures include latency, throughput, and packet loss. As expected, PCSMA is much quicker and more efficient than traditional CSMA. These simulation results show the potency of PCSMA for implementing low-latency, high-throughput, and efficient connectivity. Moreover, since PCSMA outperforms traditional CSMA, by simulating with it we can determine the limits of performance beyond which traditional CSMA may not pass. We are testing a new and better data link that could replace CSMA with relative ease. Work is underway to drastically expand the number of flights to make the simulation more representative of the National Aerospace System.
Dual Purpose Simulation: New Data Link Test and Comparison with VDL-2
NASA Technical Reports Server (NTRS)
Robinson, Daryl C.
2002-01-01
While the results of this paper are similar to those of previous research, the technical difficulties present there are eliminated here, producing better results and enabling one to more readily see the benefits of Prioritized CSMA (PCSMA). A new analysis section also helps to generalize this research so that it is not limited to exploration of the new concept of PCSMA. Simulations using commercially available network simulation software (OPNET version 7.0) are presented, involving an important application of the Aeronautical Telecommunications Network (ATN): Controller Pilot Data Link Communications (CPDLC) over the Very High Frequency Data Link Mode 2 (VDL-2). Communication is modeled for essentially all incoming and outgoing nonstop air traffic for just three United States cities: Cleveland, Cincinnati, and Detroit. The simulation involves 111 Air Traffic Control (ATC) ground stations; 32 airports distributed throughout the U.S., which are either sources or destinations for the air traffic landing at or departing from the three cities; and 1,235 equally equipped aircraft taking off, flying realistic free-flight trajectories, and landing in a 24-hr period. Collision-less PCSMA is successfully tested and compared with the traditional CSMA typically associated with VDL-2. The performance measures include latency, throughput, and packet loss. As expected, PCSMA is much quicker and more efficient than traditional CSMA. These simulation results show the potency of PCSMA for implementing low-latency, high-throughput, and efficient connectivity. Moreover, since PCSMA outperforms traditional CSMA, by simulating with it we can determine the limits of performance beyond which traditional CSMA may not pass. We are testing a new and better data link that could replace CSMA with relative ease. Work is underway to drastically expand the number of flights to make the simulation more representative of the National Aerospace System.
SDG and qualitative trend based model multiple scale validation
NASA Astrophysics Data System (ADS)
Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike
2017-09-01
Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, are carried out at a single scale, and depend heavily on human experience. A multiple-scale validation method based on SDG (Signed Directed Graph) models and qualitative trends is proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference, and the multiple-scale validation is carried out by comparing these testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the approach is demonstrated by carrying out validation for a reactor model.
Modelling Social Learning in Monkeys
ERIC Educational Resources Information Center
Kendal, Jeremy R.
2008-01-01
The application of modelling to social learning in monkey populations has been a neglected topic. Recently, however, a number of statistical, simulation and analytical approaches have been developed to help examine social learning processes, putative traditions, the use of social learning strategies and the diffusion dynamics of socially…
Hybrid LES/RANS simulation of a turbulent boundary layer over a rectangular cavity
NASA Astrophysics Data System (ADS)
Zhang, Qi; Haering, Sigfried; Oliver, Todd; Moser, Robert
2016-11-01
We report numerical investigations of a turbulent boundary layer over a rectangular cavity using a new hybrid RANS/LES model and the traditional Detached Eddy Simulation (DES). Our new hybrid method aims to address many of the shortcomings of traditional DES. In the new method, RANS/LES blending is controlled by a parameter that measures the ratio of the modeled subgrid kinetic energy to an estimate of the subgrid energy based on the resolved scales. The result is a hybrid method that automatically resolves as much turbulence as can be supported by the grid and transitions appropriately from RANS to LES without the need for the ad hoc delaying functions that are often required for DES. Further, the new model is designed to improve upon DES by accounting for the effects of grid anisotropy and inhomogeneity in the LES region. We present comparisons of the flow features inside the cavity and the pressure time history and spectra as computed using the new hybrid model and DES.
Building aggregate timber supply models from individual harvest choice
Maksym Polyakov; David N. Wear; Robert Huggett
2009-01-01
Timber supply has traditionally been modelled using aggregate data. In this paper, we build aggregate supply models for four roundwood products for the US state of North Carolina from a stand-level harvest choice model applied to detailed forest inventory. The simulated elasticities of pulpwood supply are much lower than reported by previous studies. Cross price...
MP-Pic simulation of CFB riser with EMMS-based drag model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, F.; Song, F.; Benyahia, S.
2012-01-01
The MP-PIC (multi-phase particle-in-cell) method combined with the EMMS (energy minimization multi-scale) drag force model was implemented in the open-source program MFIX to simulate gas–solid flows in CFB (circulating fluidized bed) risers. The solid flux calculated with the EMMS drag agrees well with the experimental value, while the traditional homogeneous drag over-predicts it. The EMMS drag force model can also predict the macro- and meso-scale structures. Quantitative comparison of the results of the EMMS drag force model with experimental measurements shows the high accuracy of the model. The effects of the number of particles per parcel and of wall conditions on the simulation results have also been investigated. This work shows that MP-PIC combined with the EMMS drag model can successfully simulate fluidized flows in CFB risers, and it serves as a candidate for real-time simulation of industrial processes in the future.
NASA Technical Reports Server (NTRS)
Slafer, Loren I.
1989-01-01
Real-time simulation and hardware-in-the-loop testing are being used extensively in all phases of the design, development, and testing of the attitude control system (ACS) for the new Hughes HS601 satellite bus. Real-time, hardware-in-the-loop simulation, integrated with traditional analysis and pure simulation activities, is shown to provide a highly efficient and productive overall development program. Implementation of high-fidelity simulations of the satellite dynamics and control system algorithms, capable of real-time execution (using Applied Dynamics International's System 100), provides a tool that can be integrated with the critical flight microprocessor to create a mixed simulation test (MST). The MST creates a highly accurate, detailed simulated on-orbit test environment, capable of open- and closed-loop ACS testing, in which the ACS design can be validated. The MST is shown to provide a valuable extension of traditional test methods. A description of the MST configuration is presented, including the spacecraft dynamics simulation model, sensor and actuator emulators, and the test support system. Overall system performance parameters are presented. MST applications are discussed: supporting ACS design, developing on-orbit system performance predictions, flight software development and qualification testing (augmenting the traditional software-based testing), mission planning, and a cost-effective subsystem-level acceptance test. The MST is shown to provide an ideal tool with which the ACS designer can fly the spacecraft on the ground.
On the Subgrid-Scale Modeling of Compressible Turbulence
NASA Technical Reports Server (NTRS)
Squires, Kyle; Zeman, Otto
1990-01-01
A new sub-grid scale model is presented for the large-eddy simulation of compressible turbulence. In the proposed model, compressibility contributions have been incorporated in the sub-grid scale eddy viscosity, which, in the incompressible limit, reduces to a form originally proposed by Smagorinsky (1963). The model has been tested against a simple extension of the traditional Smagorinsky eddy viscosity model using simulations of decaying, compressible homogeneous turbulence. Simulation results show that the proposed model provides greater dissipation of the compressive modes of the resolved-scale velocity field than does the Smagorinsky eddy viscosity model. For an initial r.m.s. turbulence Mach number of 1.0, simulations performed using the Smagorinsky model become physically unrealizable (i.e., negative energies) because of the inability of the model to sufficiently dissipate fluctuations due to resolved-scale velocity dilations. The proposed model is able to provide the necessary dissipation of this energy and maintain the realizability of the flow. Following Zeman (1990), turbulent shocklets are considered to dissipate energy independently of the Kolmogorov energy cascade. A possible parameterization of dissipation by turbulent shocklets for Large-Eddy Simulation is also presented.
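For reference, the incompressible-limit form cited in the abstract is the classical Smagorinsky eddy viscosity, written out below; the compressible corrections of the proposed model are not given in the abstract and are not reproduced here.

```latex
% Classical Smagorinsky (1963) subgrid-scale eddy viscosity, the incompressible
% limit referred to in the abstract; \Delta is the filter width and C_s the
% Smagorinsky constant.
\[
  \nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad
  |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
  \bar{S}_{ij} = \tfrac{1}{2}\!\left(\frac{\partial \bar{u}_i}{\partial x_j}
                 + \frac{\partial \bar{u}_j}{\partial x_i}\right).
\]
```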
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Roekel, Luke
We have conducted a suite of Large Eddy Simulations (LES) to form the basis of a multi-model comparison. The results have led to proposed model improvements. We have verified that Eulerian-Lagrangian effective diffusivity estimates of mesoscale mixing are consistent with traditional particle-statistics metrics. LES and Lagrangian particles will be utilized to better represent the movement of water into and out of the mixed layer.
Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Yuan, Zibing; Russell, Armistead G; Ou, Jiamin; Zhong, Zhuangmin
2017-04-04
The traditional reduced-form model (RFM), based on the high-order decoupled direct method (HDDM), is an efficient uncertainty analysis approach for air quality models, but it has large biases in uncertainty propagation due to the limitation of the HDDM in predicting nonlinear responses to large perturbations of model inputs. To overcome this limitation, a new stepwise RFM method that combines several sets of local sensitivity coefficients under different conditions is proposed. Evaluations reveal that the new RFM improves the prediction of nonlinear responses. The new method is applied to quantify uncertainties in simulated PM2.5 concentrations in the Pearl River Delta (PRD) region of China as a case study. Results show that the average uncertainty range of hourly PM2.5 concentrations is -28% to 57%, which can cover approximately 70% of the observed PM2.5 concentrations, while the traditional RFM underestimates the upper bound of the uncertainty range by 1-6%. Using a variance-based method, the PM2.5 boundary conditions and primary PM2.5 emissions are found to be the two major uncertainty sources in PM2.5 simulations. The new RFM better quantifies the uncertainty range in model simulations and can be applied to improve applications that rely on uncertainty information.
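For context, a generic HDDM-based reduced-form response surface takes the Taylor-expansion form below; the stepwise variant in the abstract pieces together such local expansions over sub-ranges of the perturbation. This is an illustrative form, not the paper's exact equations.

```latex
% Generic HDDM-based reduced-form model (illustrative): the concentration
% response to a fractional perturbation \Delta\varepsilon of an input (e.g.,
% an emission category) is expanded about the base case using first- and
% second-order sensitivity coefficients.
\[
  C(\Delta\varepsilon) \;\approx\; C_0
  \;+\; S^{(1)}\,\Delta\varepsilon
  \;+\; \tfrac{1}{2}\,S^{(2)}\,\Delta\varepsilon^{2},
  \qquad
  S^{(1)} = \left.\frac{\partial C}{\partial \varepsilon}\right|_{\varepsilon_0},
  \quad
  S^{(2)} = \left.\frac{\partial^{2} C}{\partial \varepsilon^{2}}\right|_{\varepsilon_0}.
\]
```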
Torsional Vibration Analysis of Reciprocating Compressor Trains driven by Induction Motors
NASA Astrophysics Data System (ADS)
Brunelli, M.; Fusi, A.; Grasso, F.; Pasteur, F.; Ussi, A.
2015-08-01
Dynamic studies of electric-motor-driven compressors for Oil&Gas (O&G) applications are traditionally performed in two steps, separating the mechanical and the electrical systems. The packager conducts a Torsional Vibration Analysis (TVA), modeling the mechanical system with a lumped-parameter scheme without taking into account the electrical part. The electric motor supplier later performs a source current pulsation analysis on the electric motor system, based on the TVA results. The mechanical and electrical systems are actually linked by the electromagnetic effect. The effect of the motor air-gap on TVA has only recently been taken into account, by adding a spring and a damper between motor and ground in the model. This model is more accurate than the traditional one, but is applicable only to the steady-state condition and still fails to consider the reciprocal effects between the two parts of the system. In this paper the torsional natural frequencies calculated using both the traditional and the new model have been compared. Furthermore, simulation of the complete system has been achieved through the use of LMS AMESim, multi-physics, one-dimensional simulation software that simultaneously solves the shaft rotation and electric motor voltage equations. Finally, the transient phenomena that occur during start-up have been studied.
USDA-ARS?s Scientific Manuscript database
Traditional dryland crop management includes fallow and intensive tillage, which have reduced soil organic carbon (SOC) over the past century raising concerns regarding soil health and sustainability. The objectives of this study were to: 1) use CQESTR, a process-based C model, to simulate SOC dynam...
Modeling and simulation for space medicine operations: preliminary requirements considered
NASA Technical Reports Server (NTRS)
Dawson, D. L.; Billica, R. D.; McDonald, P. V.
2001-01-01
The NASA Space Medicine program is now developing plans for more extensive use of high-fidelity medical simulation systems. The use of simulation is seen as a means to more effectively use the limited time available for astronaut medical training. Training systems should be adaptable for use in a variety of training environments, including classrooms or laboratories, space vehicle mockups, analog environments, and in microgravity. Modeling and simulation can also provide the space medicine development program a mechanism for evaluation of other medical technologies under operationally realistic conditions. Systems and procedures need preflight verification with ground-based testing. Traditionally, component testing has been accomplished, but practical means for "human in the loop" verification of patient care systems have been lacking. Medical modeling and simulation technology offer potential means to accomplish such validation work. Initial considerations in the development of functional requirements and design standards for simulation systems for space medicine are discussed.
Requirements for Modeling and Simulation for Space Medicine Operations: Preliminary Considerations
NASA Technical Reports Server (NTRS)
Dawson, David L.; Billica, Roger D.; Logan, James; McDonald, P. Vernon
2001-01-01
The NASA Space Medicine program is now developing plans for more extensive use of high-fidelity medical simulation systems. The use of simulation is seen as a means to more effectively use the limited time available for astronaut medical training. Training systems should be adaptable for use in a variety of training environments, including classrooms or laboratories, space vehicle mockups, analog environments, and in microgravity. Modeling and simulation can also provide the space medicine development program a mechanism for evaluation of other medical technologies under operationally realistic conditions. Systems and procedures need preflight verification with ground-based testing. Traditionally, component testing has been accomplished, but practical means for "human in the loop" verification of patient care systems have been lacking. Medical modeling and simulation technology offer potential means to accomplish such validation work. Initial considerations in the development of functional requirements and design standards for simulation systems for space medicine are discussed.
Laparoscopic skills acquisition: a study of simulation and traditional training.
Marlow, Nicholas; Altree, Meryl; Babidge, Wendy; Field, John; Hewett, Peter; Maddern, Guy J
2014-12-01
Training in basic laparoscopic skills can be undertaken using traditional methods, where trainees are educated by experienced surgeons through a process of graduated responsibility, or by simulation-based training. This study aimed to assess whether simulation-trained individuals reach the same level of proficiency in basic laparoscopic skills as traditionally trained participants when assessed in a simulated environment. A prospective study was undertaken. Participants were allocated to one of two cohorts according to surgical experience. Participants from the inexperienced cohort were randomized to receive training in basic laparoscopic skills on either a box trainer or a virtual reality simulator. They were then assessed on the simulator on which they did not receive training. Participants from the experienced cohort, considered to have received traditional training in basic laparoscopic skills, did not receive simulation training and were randomized to either the box trainer or the virtual reality simulator for skills assessment. The assessment scores from the different cohorts on either simulator were then compared. A total of 138 participants completed the assessment session, 101 in the inexperienced simulation-trained cohort and 37 in the experienced, traditionally trained cohort. There was no statistically significant difference between the training outcomes of simulation-trained and traditionally trained participants, irrespective of the simulator type used. The results demonstrated that participants trained on either a box trainer or virtual reality simulator achieved a level of basic laparoscopic skills, assessed in a simulated environment, that was not significantly different from participants who had been traditionally trained in basic laparoscopic skills. © 2013 Royal Australasian College of Surgeons.
Evolution of egoism on semi-directed and undirected Barabási-Albert networks
NASA Astrophysics Data System (ADS)
Lima, F. W. S.
2015-05-01
Through Monte Carlo simulations, we study the evolution of four strategies: ethnocentric, altruistic, egoistic, and cosmopolitan, in one community of individuals. Interactions and reproduction among computational agents are simulated on undirected and semi-directed Barabási-Albert (BA) networks. We study the Hammond-Axelrod (HA) model on undirected and semi-directed BA networks for the asexual reproduction case. With a small modification to the traditional HA model, our simulations show that egoism wins, in contrast to other results in the literature, where the ethnocentric strategy is common. Here, mechanisms such as reciprocity are absent.
A simulation technique for predicting thickness of thermal sprayed coatings
NASA Technical Reports Server (NTRS)
Goedjen, John G.; Miller, Robert A.; Brindley, William J.; Leissler, George W.
1995-01-01
The complexity of many of the components being coated today using the thermal spray process makes the trial and error approach traditionally followed in depositing a uniform coating inadequate, thereby necessitating a more analytical approach to developing robotic trajectories. A two dimensional finite difference simulation model has been developed to predict the thickness of coatings deposited using the thermal spray process. The model couples robotic and component trajectories and thermal spraying parameters to predict coating thickness. Simulations and experimental verification were performed on a rotating disk to evaluate the predictive capabilities of the approach.
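For illustration only, a one-dimensional sketch of the thickness-accumulation idea behind such simulations; the Gaussian deposition footprint, rates, and traverse parameters below are assumptions, not the authors' finite-difference model.

```python
# Illustrative sketch (assumed, not the paper's model): coating thickness
# accumulated over a 1-D surface, where each time step the torch deposits a
# Gaussian footprint centered at the current robot position.
import numpy as np

x = np.linspace(0.0, 0.3, 301)        # surface coordinate, m
thickness = np.zeros_like(x)

dep_rate = 2e-6      # peak deposition per time step, m (assumed)
sigma = 0.01         # deposition footprint width, m (assumed)
dt = 0.05            # time step, s
speed = 0.05         # torch traverse speed, m/s

for step in range(int(0.3 / speed / dt)):
    torch_x = step * dt * speed
    thickness += dep_rate * np.exp(-((x - torch_x) ** 2) / (2.0 * sigma ** 2))

print(f"max thickness: {thickness.max()*1e6:.1f} um, "
      f"mid-plate thickness: {thickness[150]*1e6:.1f} um")
```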
Integrating interactive computational modeling in biology curricula.
Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A
2015-03-01
While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.
Power-based Shift Schedule for Pure Electric Vehicle with a Two-speed Automatic Transmission
NASA Astrophysics Data System (ADS)
Wang, Jiaqi; Liu, Yanfang; Liu, Qiang; Xu, Xiangyang
2016-11-01
This paper introduces a comprehensive shift schedule for a two-speed automatic transmission of a pure electric vehicle. Considering the driving ability and efficiency of electric vehicles, a power-based shift schedule is proposed based on three principles. This comprehensive shift schedule takes the current vehicle speed and motor load power as input parameters to satisfy the vehicle's driving power demand with the lowest energy consumption. A simulation model has been established to verify the dynamic and economic performance of the comprehensive shift schedule. Simulation results indicate that the power-based shift schedule is superior to traditional dynamic and economic shift schedules.
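A hedged sketch of a power-based shift decision follows; the gear ratios, torque limit, and toy efficiency map are assumptions, and the paper's three principles and calibration data are not reproduced.

```python
# Illustrative power-based shift logic (assumed, not the paper's schedule):
# pick the gear that can deliver the demanded power while drawing the least
# electrical power from the battery.
import math

GEAR_RATIOS = {1: 8.5, 2: 4.8}     # assumed two-speed ratios
WHEEL_RADIUS = 0.3                  # m, assumed
MOTOR_MAX_TORQUE = 200.0            # Nm, assumed

def motor_efficiency(speed_rpm, torque_nm):
    """Toy efficiency map standing in for measured motor data."""
    load = min(abs(torque_nm) / MOTOR_MAX_TORQUE, 1.0)
    return 0.80 + 0.15 * load - 0.05 * (speed_rpm / 10000.0)

def select_gear(vehicle_speed_mps, demand_power_w):
    best_gear, best_input_power = None, float("inf")
    for gear, ratio in GEAR_RATIOS.items():
        wheel_rpm = vehicle_speed_mps / WHEEL_RADIUS * 60.0 / (2 * math.pi)
        motor_rpm = wheel_rpm * ratio
        motor_torque = demand_power_w / max(motor_rpm * 2 * math.pi / 60.0, 1e-3)
        if motor_torque > MOTOR_MAX_TORQUE:
            continue                 # this gear cannot meet the power demand
        input_power = demand_power_w / motor_efficiency(motor_rpm, motor_torque)
        if input_power < best_input_power:
            best_gear, best_input_power = gear, input_power
    return best_gear

print(select_gear(5.0, 25e3))   # high power at low speed -> 1st gear (2nd lacks torque)
print(select_gear(25.0, 20e3))  # cruising at higher speed -> 2nd gear (lower losses)
```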
Simulation: Moving from Technology Challenge to Human Factors Success
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gould, Derek A., E-mail: dgould@liv.ac.uk; Chalmers, Nicholas; Johnson, Sheena J.
2012-06-15
Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable performance assessments. Human factors research is central to simulator model development that is relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which it would be used.
Animated-simulation modeling facilitates clinical-process costing.
Zelman, W N; Glick, N D; Blackmore, C C
2001-09-01
Traditionally, the finance department has assumed responsibility for assessing process costs in healthcare organizations. To enhance process-improvement efforts, however, many healthcare providers need to include clinical staff in process cost analysis. Although clinical staff often use electronic spreadsheets to model the cost of specific processes, PC-based animated-simulation tools offer two major advantages over spreadsheets: they allow clinicians to interact more easily with the costing model so that it more closely represents the process being modeled, and they represent cost output as a cost range rather than as a single cost estimate, thereby providing more useful information for decision making.
Leveraging Modeling Approaches: Reaction Networks and Rules
Blinov, Michael L.; Moraru, Ion I.
2012-01-01
We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high resolution and/or high throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatio-temporal distribution of molecules and complexes, their interaction kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks – the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks. PMID:22161349
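To make the combinatorial argument concrete, a small illustrative count follows; the n-site molecule is an assumed example, not one taken from the paper.

```python
# Illustrative count (assumed example): a protein with n independent
# phosphorylation sites has 2**n distinct chemical species, so an explicit
# reaction-network specification grows exponentially, whereas a rule-based
# specification needs only one phosphorylation rule per site.
from itertools import product

def explicit_species(n_sites):
    """Enumerate every global state of an n-site molecule (explicit modeling)."""
    return [''.join(state) for state in product('0P', repeat=n_sites)]

for n in (2, 4, 8, 12):
    print(f"{n:2d} sites -> {len(explicit_species(n)):5d} species, "
          f"but only {n} phosphorylation rules")
```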
Leveraging modeling approaches: reaction networks and rules.
Blinov, Michael L; Moraru, Ion I
2012-01-01
We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high-resolution and/or high-throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatiotemporal distribution of molecules and complexes, their interaction kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks - the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks.
Development of NASA's Models and Simulations Standard
NASA Technical Reports Server (NTRS)
Bertch, William J.; Zang, Thomas A.; Steele, Martin J.
2008-01-01
From the Space Shuttle Columbia Accident Investigation, there were several NASA-wide actions that were initiated. One of these actions was to develop a standard for development, documentation, and operation of Models and Simulations. Over the course of two-and-a-half years, a team of NASA engineers, representing nine of the ten NASA Centers developed a Models and Simulation Standard to address this action. The standard consists of two parts. The first is the traditional requirements section addressing programmatics, development, documentation, verification, validation, and the reporting of results from both the M&S analysis and the examination of compliance with this standard. The second part is a scale for evaluating the credibility of model and simulation results using levels of merit associated with 8 key factors. This paper provides an historical account of the challenges faced by and the processes used in this committee-based development effort. This account provides insights into how other agencies might approach similar developments. Furthermore, we discuss some specific applications of models and simulations used to assess the impact of this standard on future model and simulation activities.
Simulated full-waveform lidar compared to Riegl VZ-400 terrestrial laser scans
NASA Astrophysics Data System (ADS)
Kim, Angela M.; Olsen, Richard C.; Béland, Martin
2016-05-01
A 3-D Monte Carlo ray-tracing simulation of LiDAR propagation models the reflection, transmission and absorption interactions of laser energy with materials in a simulated scene. In this presentation, a model scene consisting of a single Victorian Boxwood (Pittosporum undulatum) tree is generated by the high-fidelity tree voxel model VoxLAD using high-spatial resolution point cloud data from a Riegl VZ-400 terrestrial laser scanner. The VoxLAD model uses terrestrial LiDAR scanner data to determine Leaf Area Density (LAD) measurements for small volume voxels (20 cm sides) of a single tree canopy. VoxLAD is also used in a non-traditional fashion in this case to generate a voxel model of wood density. Information from the VoxLAD model is used within the LiDAR simulation to determine the probability of LiDAR energy interacting with materials at a given voxel location. The LiDAR simulation is defined to replicate the scanning arrangement of the Riegl VZ-400; the resulting simulated full-waveform LiDAR signals compare favorably to those obtained with the Riegl VZ-400 terrestrial laser scanner.
Hybrid ODE/SSA methods and the cell cycle model
NASA Astrophysics Data System (ADS)
Wang, S.; Chen, M.; Cao, Y.
2017-07-01
Stochastic effects in cellular systems have been an important topic in systems biology, and stochastic modeling and simulation methods are important tools to study them. Given the low efficiency of stochastic simulation algorithms, the hybrid method, which combines an ordinary differential equation (ODE) system with a stochastic chemically reacting system, shows unique advantages in the modeling and simulation of biochemical systems. The efficiency of the hybrid method is usually limited by reactions in the stochastic subsystem, which are modeled and simulated using Gillespie's framework and frequently interrupt the integration of the ODE subsystem. In this paper we develop an efficient implementation approach for the hybrid method coupled with traditional ODE solvers. We also compare the efficiency of the hybrid method when coupled with three widely used ODE solvers: RADAU5, DASSL, and DLSODAR. Numerical experiments with three biochemical models are presented, along with a detailed discussion of the performance of the three ODE solvers.
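For illustration, a minimal hybrid ODE/SSA step is sketched below; it uses a forward-Euler stand-in for the stiff solvers named above, and the two-state gene model and all rate constants are assumptions rather than the paper's test problems.

```python
# Minimal hybrid ODE/SSA sketch (illustrative, simpler than the paper's
# coupling): a two-state gene switches on/off stochastically (SSA subsystem)
# while the protein concentration follows an ODE,
#     dP/dt = k_prod * G - k_deg * P.
# The next stochastic firing occurs when the integrated propensity reaches an
# exponentially distributed threshold.
import math
import random

k_on, k_off = 0.02, 0.05       # gene switching propensities (1/s), assumed
k_prod, k_deg = 2.0, 0.1       # protein production/degradation rates, assumed

def hybrid_simulate(t_end=500.0, dt=0.01, seed=1):
    random.seed(seed)
    t, gene, protein = 0.0, 0, 0.0
    threshold = -math.log(random.random())   # next SSA firing threshold
    accumulated = 0.0                         # integral of the propensity
    while t < t_end:
        # Forward-Euler step for the ODE subsystem (stand-in for a stiff solver).
        protein += dt * (k_prod * gene - k_deg * protein)
        # Accumulate the state-dependent propensity of the stochastic subsystem.
        propensity = k_off if gene == 1 else k_on
        accumulated += propensity * dt
        if accumulated >= threshold:          # the stochastic reaction fires
            gene = 1 - gene
            accumulated = 0.0
            threshold = -math.log(random.random())
        t += dt
    return t, gene, protein

t, gene, protein = hybrid_simulate()
print(f"t={t:.0f}s  gene={gene}  protein={protein:.1f}")
```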
A parabolic model of drag coefficient for storm surge simulation in the South China Sea
Peng, Shiqiu; Li, Yineng
2015-01-01
Drag coefficient (Cd) is an essential metric in the calculation of momentum exchange over the air-sea interface and thus has large impacts on the simulation or forecast of the upper ocean state associated with sea surface winds such as storm surges. Generally, Cd is a function of wind speed. However, the exact relationship between Cd and wind speed is still in dispute, and the widely-used formula that is a linear function of wind speed in an ocean model could lead to large bias at high wind speed. Here we establish a parabolic model of Cd based on storm surge observations and simulation in the South China Sea (SCS) through a number of tropical cyclone cases. Simulation of storm surges for independent tropical cyclone (TC) cases indicates that the new parabolic model of Cd outperforms traditional linear models. PMID:26499262
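To make the contrast concrete, the two functional forms at issue can be written as below; the coefficients are placeholders, not the values fitted in the paper.

```latex
% Linear versus parabolic drag-coefficient laws (coefficients a, b, c_0, c_1,
% c_2 are illustrative placeholders; the paper's fitted values are not
% reproduced here).
\[
  \text{linear:}\quad C_d \times 10^{3} = a + b\,U_{10},
  \qquad
  \text{parabolic:}\quad C_d \times 10^{3} = c_0 + c_1\,U_{10} + c_2\,U_{10}^{2},
\]
% with c_2 < 0, so that C_d levels off and then decreases at high wind speeds
% instead of growing without bound as in the linear law.
```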
A parabolic model of drag coefficient for storm surge simulation in the South China Sea.
Peng, Shiqiu; Li, Yineng
2015-10-26
Drag coefficient (Cd) is an essential metric in the calculation of momentum exchange over the air-sea interface and thus has large impacts on the simulation or forecast of the upper ocean state associated with sea surface winds such as storm surges. Generally, Cd is a function of wind speed. However, the exact relationship between Cd and wind speed is still in dispute, and the widely-used formula that is a linear function of wind speed in an ocean model could lead to large bias at high wind speed. Here we establish a parabolic model of Cd based on storm surge observations and simulation in the South China Sea (SCS) through a number of tropical cyclone cases. Simulation of storm surges for independent tropical cyclone (TC) cases indicates that the new parabolic model of Cd outperforms traditional linear models.
A parabolic model of drag coefficient for storm surge simulation in the South China Sea
NASA Astrophysics Data System (ADS)
Peng, Shiqiu; Li, Yineng
2015-10-01
Drag coefficient (Cd) is an essential metric in the calculation of momentum exchange over the air-sea interface and thus has large impacts on the simulation or forecast of the upper ocean state associated with sea surface winds such as storm surges. Generally, Cd is a function of wind speed. However, the exact relationship between Cd and wind speed is still in dispute, and the widely-used formula that is a linear function of wind speed in an ocean model could lead to large bias at high wind speed. Here we establish a parabolic model of Cd based on storm surge observations and simulation in the South China Sea (SCS) through a number of tropical cyclone cases. Simulation of storm surges for independent tropical cyclone (TC) cases indicates that the new parabolic model of Cd outperforms traditional linear models.
Transducer model produces facilitation from opposite-sign flanks
NASA Technical Reports Server (NTRS)
Solomon, J. A.; Watson, A. B.; Morgan, M. J.
1999-01-01
Small spots, lines and Gabor patterns can be easier to detect when they are superimposed upon similar spots, lines and Gabor patterns. Traditionally, such facilitation has been understood to be a consequence of nonlinear contrast transduction. Facilitation has also been reported to arise from non-overlapping patterns with opposite sign. We point out that this result does not preclude the traditional explanation for superimposed targets. Moreover, we find that facilitation from opposite-sign flanks is weaker than facilitation from same-sign flanks. Simulations with a transducer model produce opposite-sign facilitation.
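For context, a commonly used nonlinear contrast transducer of the kind invoked here is the Legge-Foley form shown below; the exact parameterization used in this paper is not given in the abstract, so this is only a representative example.

```latex
% A standard nonlinear contrast transducer (Legge-Foley type); p > q and z > 0
% are free parameters. Detection of an increment \Delta c on a pedestal c
% requires R(c+\Delta c) - R(c) \ge \delta, and the accelerating region of R
% at low contrast yields facilitation by near-threshold pedestals (the
% classic "dipper" effect).
\[
  R(c) = \frac{c^{\,p}}{z + c^{\,q}}, \qquad p > q .
\]
```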
NASA Technical Reports Server (NTRS)
Wang, Gang
2003-01-01
A multigrid solution procedure for the numerical simulation of turbulent flows in complex geometries has been developed. A Full Multigrid-Full Approximation Scheme (FMG-FAS) is incorporated into the continuity and momentum equations, while the scalars are decoupled from the multigrid V-cycle. A standard k-epsilon turbulence model with wall functions has been used to close the governing equations. The numerical solution is accomplished by solving for the Cartesian velocity components either with a traditional grid staggering arrangement or with a multiple-velocity grid staggering arrangement. The two solution methodologies are evaluated for relative computational efficiency. The solution procedure with the traditional staggering arrangement is subsequently applied to calculate the flow and temperature fields around a model Short Take-off and Vertical Landing (STOVL) aircraft hovering in ground proximity.
NASA Astrophysics Data System (ADS)
Hu, S. X.; Collins, L. A.; Boehly, T. R.; Ding, Y. H.; Radha, P. B.; Goncharov, V. N.; Karasiev, V. V.; Collins, G. W.; Regan, S. P.; Campbell, E. M.
2018-05-01
Polystyrene (CH), commonly known as "plastic," has been one of the widely used ablator materials for capsule designs in inertial confinement fusion (ICF). Knowing its precise properties under high-energy-density conditions is crucial to understanding and designing ICF implosions through radiation-hydrodynamic simulations. For this purpose, systematic ab initio studies on the static, transport, and optical properties of CH, in a wide range of density and temperature conditions (ρ = 0.1 to 100 g/cm³ and T = 10³ to 4 × 10⁶ K), have been conducted using quantum molecular dynamics (QMD) simulations based on the density functional theory. We have built several wide-ranging, self-consistent material-properties tables for CH, such as the first-principles equation of state, the QMD-based thermal conductivity (κQMD) and ionization, and the first-principles opacity table. This paper is devoted to providing a review on (1) what results were obtained from these systematic ab initio studies; (2) how these self-consistent results were compared with both traditional plasma-physics models and available experiments; and (3) how these first-principles-based properties of polystyrene affect the predictions of ICF target performance, through both 1-D and 2-D radiation-hydrodynamic simulations. In the warm dense regime, our ab initio results, which can significantly differ from predictions of traditional plasma-physics models, compared favorably with experiments. When incorporated into hydrocodes for ICF simulations, these first-principles material properties of CH have produced significant differences over traditional models in predicting 1-D/2-D target performance of ICF implosions on OMEGA and direct-drive-ignition designs for the National Ignition Facility. Finally, we will discuss the implications of these studies on the current small-margin ICF target designs using a CH ablator.
OpenWorm: an open-science approach to modeling Caenorhabditis elegans.
Szigeti, Balázs; Gleeson, Padraig; Vella, Michael; Khayrulin, Sergey; Palyanov, Andrey; Hokanson, Jim; Currie, Michael; Cantarelli, Matteo; Idili, Giovanni; Larson, Stephen
2014-01-01
OpenWorm is an international collaboration with the aim of understanding how the behavior of Caenorhabditis elegans (C. elegans) emerges from its underlying physiological processes. The project has developed a modular simulation engine to create computational models of the worm. The modularity of the engine makes it possible to easily modify the model, incorporate new experimental data and test hypotheses. The modeling framework incorporates both biophysical neuronal simulations and a novel fluid-dynamics-based soft-tissue simulation for physical environment-body interactions. The project's open-science approach is aimed at overcoming the difficulties of integrative modeling within a traditional academic environment. In this article the rationale is presented for creating the OpenWorm collaboration, the tools and resources developed thus far are outlined and the unique challenges associated with the project are discussed.
Lumped-parameters equivalent circuit for condenser microphones modeling.
Esteves, Josué; Rufer, Libor; Ekeom, Didace; Basrour, Skandar
2017-10-01
This work presents a lumped-parameter equivalent model of a condenser microphone based on analogies between the acoustic, mechanical, fluidic, and electrical domains. Parameters of the model were determined mainly through analytical relations and/or finite element method (FEM) simulations. Special attention was paid to the air-gap modeling and to the use of proper boundary conditions. The corresponding lumped parameters were obtained as results of FEM simulations. Because of its simplicity, the model allows fast simulation and is readily usable for microphone design. This work shows the validation of the equivalent circuit on three real cases of capacitive microphones, including both traditional and Micro-Electro-Mechanical Systems structures. In all cases, it has been demonstrated that the sensitivity and other related data obtained from the equivalent circuit are in very good agreement with available measurement data.
Design, modelling and simulation aspects of an ankle rehabilitation device
NASA Astrophysics Data System (ADS)
Racu, C. M.; Doroftei, I.
2016-08-01
Ankle injuries are amongst the most common injuries of the lower limb. Besides initial treatment, rehabilitation of patients plays a crucial role for future activities and proper functionality of the foot. Traditionally, ankle injuries are rehabilitated via physiotherapy, using simple equipment like elastic bands and rollers, requiring intensive efforts from therapists and patients. Thus, the need for robotic devices emerges. In this paper, the design concept and some modelling and simulation aspects of a novel ankle rehabilitation device are presented.
Effect of Differential Item Functioning on Test Equating
ERIC Educational Resources Information Center
Kabasakal, Kübra Atalay; Kelecioglu, Hülya
2015-01-01
This study examines the effect of differential item functioning (DIF) items on test equating through multilevel item response models (MIRMs) and traditional IRMs. The performances of three different equating models were investigated under 24 different simulation conditions, and the variables whose effects were examined included sample size, test…
Mathematical Formulation of Multivariate Euclidean Models for Discrimination Methods.
ERIC Educational Resources Information Center
Mullen, Kenneth; Ennis, Daniel M.
1987-01-01
Multivariate models for the triangular and duo-trio methods are described, and theoretical methods are compared to a Monte Carlo simulation. Implications are discussed for a new theory of multidimensional scaling which challenges the traditional assumption that proximity measures and perceptual distances are monotonically related. (Author/GDC)
USDA-ARS?s Scientific Manuscript database
For more than three decades, researchers have utilized the Snowmelt Runoff Model (SRM) to test the impacts of climate change on streamflow of snow-fed systems. In this study, the hydrological effects of climate change are modeled over three sequential years using SRM with both typical and recommende...
Joseph K. O. Amoah; Devendra M. Amatya; Soronnadi Nnaji
2012-01-01
Hydrologic models often require correct estimates of surface macro-depressional storage to accurately simulate rainfall-runoff processes. Traditionally, depression storage is determined through model calibration or lumped with soil storage components or on an ad hoc basis. This paper investigates a holistic approach for estimating surface depressional storage capacity...
Ralph L. Amateis; Harold E. Burkhart
2012-01-01
Demand for traditional wood products from southern forests continues to increase even as demand for woody biomass for uses such as biofuels is on the rise. How to manage the plantation resource to meet demand for multiple products from a shrinking land base is of critical importance. Nontraditional plantation systems comprised of two populations planted on the same...
ERIC Educational Resources Information Center
Grover, Anita; Lam, Tai Ning; Hunt, C. Anthony
2008-01-01
We present a simulation tool to aid the study of basic pharmacology principles. By taking advantage of the properties of agent-based modeling, the tool facilitates taking a mechanistic approach to learning basic concepts, in contrast to the traditional empirical methods. Pharmacodynamics is a particular aspect of pharmacology that can benefit from…
NASA Astrophysics Data System (ADS)
Skamarock, W. C.
2015-12-01
One of the major problems in atmospheric model applications is the representation of deep convection within the models; explicit simulation of deep convection on fine meshes performs much better than sub-grid parameterized deep convection on coarse meshes. Unfortunately, the high cost of explicit convective simulation has meant it has only been used to down-scale global simulations in weather prediction and regional climate applications, typically using traditional one-way interactive nesting technology. We have been performing real-time weather forecast tests using a global non-hydrostatic atmospheric model (the Model for Prediction Across Scales, MPAS) that employs a variable-resolution unstructured Voronoi horizontal mesh (nominally hexagons) to span hydrostatic to nonhydrostatic scales. The smoothly varying Voronoi mesh eliminates many downscaling problems encountered using traditional one- or two-way grid nesting. Our test weather forecasts cover two periods - the 2015 Spring Forecast Experiment conducted at the NOAA Storm Prediction Center during the month of May in which we used a 50-3 km mesh, and the PECAN field program examining nocturnal convection over the US during the months of June and July in which we used a 15-3 km mesh. An important aspect of this modeling system is that the model physics be scale-aware, particularly the deep convection parameterization. These MPAS simulations employ the Grell-Freitas scale-aware convection scheme. Our test forecasts show that the scheme produces a gradual transition in the deep convection, from the deep unstable convection being handled entirely by the convection scheme on the coarse mesh regions (dx > 15 km), to the deep convection being almost entirely explicit on the 3 km NA region of the meshes. We will present results illustrating the performance of critical aspects of the MPAS model in these tests.
Simulation in shoulder surgery.
Colaço, Henry B; Tennent, Duncan
2016-10-01
Simulation is a rapidly developing field in medical education. There is a growing need for trainee surgeons to acquire surgical skills in a cost-effective learning environment to improve patient safety and compensate for a reduction in training time and operative experience. Although simulation is not a replacement for traditional models of surgical training, and robust assessment metrics need to be validated before widespread use for accreditation, it is a useful adjunct that may ultimately lead to improving surgical outcomes for our patients.
Energy model for rumor propagation on social networks
NASA Astrophysics Data System (ADS)
Han, Shuo; Zhuang, Fuzhen; He, Qing; Shi, Zhongzhi; Ao, Xiang
2014-01-01
With the development of social networks, the impact of rumor propagation on human lives is more and more significant. Due to the change of propagation mode, traditional rumor propagation models designed for word-of-mouth process may not be suitable for describing the rumor spreading on social networks. To overcome this shortcoming, we carefully analyze the mechanisms of rumor propagation and the topological properties of large-scale social networks, then propose a novel model based on the physical theory. In this model, heat energy calculation formula and Metropolis rule are introduced to formalize this problem and the amount of heat energy is used to measure a rumor’s impact on a network. Finally, we conduct track experiments to show the evolution of rumor propagation, make comparison experiments to contrast the proposed model with the traditional models, and perform simulation experiments to study the dynamics of rumor spreading. The experiments show that (1) the rumor propagation simulated by our model goes through three stages: rapid growth, fluctuant persistence and slow decline; (2) individuals could spread a rumor repeatedly, which leads to the rumor’s resurgence; (3) rumor propagation is greatly influenced by a rumor’s attraction, the initial rumormonger and the sending probability.
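The abstract does not spell out the energy function, so the following is only a minimal Python sketch of the general pattern it describes: a rumor spreads over an adjacency-list network and each reception is accepted or rejected with a Metropolis rule. The energy change, attraction parameter, and sending probability used here are illustrative assumptions, not the authors' formulas.

```python
import math
import random

def metropolis_accept(delta_e, temperature=1.0):
    """Metropolis rule: always accept favorable moves, otherwise accept
    with probability exp(-delta_e / T)."""
    if delta_e <= 0:
        return True
    return random.random() < math.exp(-delta_e / temperature)

def spread_step(graph, heat, rumor_attraction=0.8, send_prob=0.3):
    """One synchronous propagation step over an adjacency-list graph.
    `heat` maps node -> accumulated heat energy (the rumor's impact)."""
    new_heat = dict(heat)
    for node, energy in heat.items():
        if energy <= 0:
            continue  # this node has not received the rumor yet
        for neighbor in graph[node]:
            if random.random() > send_prob:
                continue
            # hypothetical energy change: receiving is "cheaper" for
            # attractive rumors, so delta_e shrinks with attraction
            delta_e = (1.0 - rumor_attraction) * random.random()
            if metropolis_accept(delta_e):
                new_heat[neighbor] = new_heat.get(neighbor, 0.0) + delta_e + 0.1
    return new_heat

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
heat = {n: 0.0 for n in graph}
heat[0] = 1.0  # initial rumormonger
for _ in range(10):
    heat = spread_step(graph, heat)
print(heat)
```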
Hands-on Simulation versus Traditional Video-learning in Teaching Microsurgery Technique
SAKAMOTO, Yusuke; OKAMOTO, Sho; SHIMIZU, Kenzo; ARAKI, Yoshio; HIRAKAWA, Akihiro; WAKABAYASHI, Toshihiko
2017-01-01
Bench model hands-on learning may be more effective than traditional didactic practice in some surgical fields. However, this has not been reported for microsurgery. Our study objective was to demonstrate the efficacy of bench model hands-on learning in acquiring microsuturing skills. The secondary objective was to evaluate the aptitude for microsurgery based on personality assessment. Eighty-six medical students comprising 62 men and 24 women were randomly assigned to either 20 min of hands-on learning with a bench model simulator or 20 min of video-learning using an instructional video. They then practiced microsuturing for 40 min. Each student then made three knots, and the time to complete the task was recorded. The final products were scored by two independent graders in a blind fashion. All participants then took a personality test, and their microsuture test scores and the time to complete the task were compared. The time to complete the task was significantly shorter in the simulator group than in the video-learning group. The final product scores tended to be higher with simulator-learning than with video-learning, but the difference was not significant. Students with high “extraversion” scores on the personality inventory took a shorter time to complete the suturing test. Simulator-learning was more effective for microsurgery training than video instruction, especially in understanding the procedure. There was a weak association between personality traits and microsurgery skill. PMID:28381653
Zhang, Weihong; Howell, Steven C; Wright, David W; Heindel, Andrew; Qiu, Xiangyun; Chen, Jianhan; Curtis, Joseph E
2017-05-01
We describe a general method to use Monte Carlo simulation followed by torsion-angle molecular dynamics simulations to create ensembles of structures to model a wide variety of soft-matter biological systems. Our particular emphasis is focused on modeling low-resolution small-angle scattering and reflectivity structural data. We provide examples of this method applied to HIV-1 Gag protein and derived fragment proteins, TraI protein, linear B-DNA, a nucleosome core particle, and a glycosylated monoclonal antibody. This procedure will enable a large community of researchers to model low-resolution experimental data with greater accuracy by using robust physics based simulation and sampling methods which are a significant improvement over traditional methods used to interpret such data. Published by Elsevier Inc.
Aqil, Muhammad; Kita, Ichiro; Yano, Akira; Nishiyama, Soichi
2007-10-01
Traditionally, the multiple linear regression technique has been one of the most widely used models in simulating hydrological time series. However, when the nonlinear phenomenon is significant, multiple linear regression will fail to develop an appropriate predictive model. Recently, neuro-fuzzy systems have gained much popularity for calibrating nonlinear relationships. This study evaluated the potential of a neuro-fuzzy system as an alternative to the traditional statistical regression technique for the purpose of predicting flow from a local source in a river basin. The effectiveness of the proposed identification technique was demonstrated through a simulation study of the river flow time series of the Citarum River in Indonesia. Furthermore, in order to provide the uncertainty associated with the estimation of river flow, a Monte Carlo simulation was performed. As a comparison, a multiple linear regression analysis that was being used by the Citarum River Authority was also examined using various statistical indices. The simulation results using 95% confidence intervals indicated that the neuro-fuzzy model consistently underestimated the magnitude of high flow while the low and medium flow magnitudes were estimated closer to the observed data. The comparison of the prediction accuracy of the neuro-fuzzy and linear regression methods indicated that the neuro-fuzzy approach was more accurate in predicting river flow dynamics. The neuro-fuzzy model was able to improve the root mean square error (RMSE) and mean absolute percentage error (MAPE) values of the multiple linear regression forecasts by about 13.52% and 10.73%, respectively. Considering its simplicity and efficiency, the neuro-fuzzy model is recommended as an alternative tool for modeling of flow dynamics in the study area.
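For readers unfamiliar with the two error measures quoted above, the short Python sketch below shows how RMSE and MAPE comparisons of this kind are typically computed; the flow values are invented for illustration and are not the Citarum River data.

```python
import numpy as np

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))

def mape(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.mean(np.abs((obs - sim) / obs))

# illustrative flows only (m^3/s), not the study data
observed   = np.array([120.0, 95.0, 60.0, 210.0, 180.0, 75.0])
regression = np.array([140.0, 90.0, 70.0, 180.0, 160.0, 85.0])
neurofuzzy = np.array([125.0, 93.0, 62.0, 190.0, 172.0, 78.0])

for name, sim in [("regression", regression), ("neuro-fuzzy", neurofuzzy)]:
    print(f"{name:12s} RMSE={rmse(observed, sim):6.2f}  MAPE={mape(observed, sim):5.2f}%")
```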
Experiences in integrating auto-translated state-chart designs for model checking
NASA Technical Reports Server (NTRS)
Pingree, P. J.; Benowitz, E. G.
2003-01-01
In the complex environment of JPL's flight missions with increasing dependency on advanced software designs, traditional software validation methods of simulation and testing are being stretched to adequately cover the needs of software development.
2017-06-01
This research expands the modeling and simulation (M and S) body of knowledge through the development of an Implicit Model Development Process (IMDP)... When augmented to traditional Model Development Processes (MDP), the IMDP enables the development of models that can address a broader array of...where a broader, more holistic approach to defining a model's referent is achieved. Next, the IMDP codifies the process for implementing the improved model
NASA Astrophysics Data System (ADS)
Klimczak, Marcin; Bojarski, Jacek; Ziembicki, Piotr; Kęskiewicz, Piotr
2017-11-01
The requirements concerning energy performance of buildings and their internal installations, particularly HVAC systems, have been growing continuously in Poland and all over the world. The existing, traditional calculation methods following from the static heat exchange model are frequently not sufficient for a reasonable heating design of a building. Both in Poland and elsewhere in the world, methods and software are employed which allow a detailed simulation of the heating and moisture conditions in a building, and also an analysis of the performance of HVAC systems within a building. However, these tools are usually complex and difficult to use. In addition, developing a simulation model that adequately represents the real building demands considerable time from the designer and is laborious. Simplifying the simulation model of a building makes it possible to reduce the cost of computer simulations. The paper analyses in detail the effect of a number of different variants of the simulation model, developed in Design Builder, on the quality of the final results. The objective of this analysis is to find simplifications that yield simulation results with an acceptable level of deviation from the detailed model, thus enabling a quick energy performance analysis of a given building.
Programming PHREEQC calculations with C++ and Python a comparative study
Charlton, Scott R.; Parkhurst, David L.; Muller, Mike
2011-01-01
The new IPhreeqc module provides an application programming interface (API) to facilitate coupling of other codes with the U.S. Geological Survey geochemical model PHREEQC. Traditionally, loose coupling of PHREEQC with other applications required methods to create PHREEQC input files, start external PHREEQC processes, and process PHREEQC output files. IPhreeqc eliminates most of this effort by providing direct access to PHREEQC capabilities through a component object model (COM), a library, or a dynamically linked library (DLL). Input and calculations can be specified through internally programmed strings, and all data exchange between an application and the module can occur in computer memory. This study compares simulations programmed in C++ and Python that are tightly coupled with IPhreeqc modules to the traditional simulations that are loosely coupled to PHREEQC. The study compares performance, quantifies effort, and evaluates lines of code and the complexity of the design. The comparisons show that IPhreeqc offers a more powerful and simpler approach for incorporating PHREEQC calculations into transport models and other applications that need to perform PHREEQC calculations. The IPhreeqc module facilitates the design of coupled applications and significantly reduces run times. Even a moderate knowledge of one of the supported programming languages allows more efficient use of PHREEQC than the traditional loosely coupled approach.
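A minimal sketch of the tightly coupled pattern described above, assuming the phreeqpy Python wrapper for IPhreeqc and its documented load_database, run_string, and get_selected_output_array calls; the database path and the PHREEQC input string are placeholders to adapt to an actual installation, and the import path should be verified against the installed phreeqpy version.

```python
# Sketch of tight coupling through the IPhreeqc module via the phreeqpy
# wrapper; method names are taken from its documentation and should be
# treated as assumptions to verify locally.
from phreeqpy.iphreeqc.phreeqc_dll import IPhreeqc

phreeqc = IPhreeqc()
phreeqc.load_database("phreeqc.dat")  # path to a PHREEQC database file

input_string = """
SOLUTION 1
    temp  25
    pH    7.0
    Na    1.0
    Cl    1.0
SELECTED_OUTPUT
    -pH true
    -totals Na Cl
END
"""

# All data exchange happens in memory: no input/output files are written,
# which is the main advantage over the traditional loose coupling.
phreeqc.run_string(input_string)
results = phreeqc.get_selected_output_array()
for row in results:
    print(row)
```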
Research on fuzzy PID control to electronic speed regulator
NASA Astrophysics Data System (ADS)
Xu, Xiao-gang; Chen, Xue-hui; Zheng, Sheng-guo
2007-12-01
As an important part of a diesel engine, the speed regulator plays an important role in stabilizing speed and improving the engine's performance. Because traditional PID control involves many diesel-engine model parameters, and these parameters exhibit non-linear characteristics, adjusting engine speed with a traditional PID controller is not the best approach, especially for a diesel-engine generator set. In this paper, a fuzzy PID control strategy is proposed and some problems concerning its use in an electronic speed regulator are discussed. A mathematical model of the electronic control system for a diesel-engine generator set is established, and the way the PID parameters in the model affect system behavior is analyzed. It is then shown that the derivative term must be included in the control design to reduce the dynamic deviation and adjustment time of the system. Based on control theory, a scheme combining fuzzy logic with the PID calculation is implemented to tune the fuzzy PID parameters. A simulation experiment of the electronic speed regulator system was also conducted using Matlab/Simulink and the Fuzzy Toolbox. Compared with the traditional PID algorithm, the simulation results show obvious improvements in the instantaneous and steady-state speed governing rates of the diesel-engine generator set when the fuzzy logic control strategy is used.
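The paper's fuzzy rule base is not reproduced here; the sketch below only illustrates the general idea of scheduling PID gains from the magnitude of the speed error, with a toy first-order plant standing in for the diesel-engine generator model. All gains, thresholds, and plant constants are assumed values.

```python
def fuzzy_gains(error, base_kp=1.2, base_ki=0.4, base_kd=0.05):
    """Very coarse stand-in for a fuzzy rule base: scale the PID gains
    according to the magnitude of the speed error."""
    e = abs(error)
    if e > 50.0:      # large error: act aggressively, little integral
        return 1.5 * base_kp, 0.5 * base_ki, 1.2 * base_kd
    elif e > 10.0:    # medium error: nominal gains
        return base_kp, base_ki, base_kd
    else:             # small error: emphasise integral action
        return 0.6 * base_kp, 1.5 * base_ki, 0.8 * base_kd

def simulate(setpoint=1500.0, dt=0.01, steps=2000):
    speed, integral, prev_error = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = setpoint - speed
        kp, ki, kd = fuzzy_gains(error)
        integral += error * dt
        derivative = (error - prev_error) / dt
        fuel = kp * error + ki * integral + kd * derivative
        prev_error = error
        # toy first-order plant response, not a diesel-engine model
        speed += dt * (-0.5 * speed + 2.0 * fuel)
    return speed

print(f"final speed: {simulate():.1f} rpm")
```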
Simulation of longitudinal dynamics of long freight trains in positioning operations
NASA Astrophysics Data System (ADS)
Qi, Zhaohui; Huang, Zhihao; Kong, Xianchao
2012-09-01
Positioning operations are performed in a railway goods yard, in which the freight train is pulled precisely to a specific point by a positioner. The positioner moves strictly according to the predesigned speed and provides all the traction and braking forces, which are highly dependent on the longitudinal dynamic response. In order to improve the efficiency and protect the wagons from damage during positioning operations, the design speed of the positioner has to be optimised based on the simulation of longitudinal train dynamics. However, traditional models of longitudinal train dynamics are not accurate enough in some aspects. In this study, we make some changes in the traditional theory to make it suitable for the study of long freight trains in positioning operations. In the proposed method, instead of the traction force on the train, the motion of the positioner is assumed to be known; more importantly, the traditional draft gear model with nonlinear spring and linear damping is replaced by a more detailed model based on results from contact and impact mechanics; the switching effects of the resistance and the coupler slack are also taken into consideration. Numerical examples that deal with positioning operations on straight, sloped and curved track are given.
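As a rough illustration of the kind of draft-gear law the abstract contrasts with the nonlinear-spring/linear-damping model, the sketch below uses a Hunt-Crossley style contact force with a slack region; the stiffness, exponent, damping, and slack values are purely illustrative assumptions and not the authors' model.

```python
def coupler_force(relative_disp, relative_vel, slack=0.04, k=5.0e7, n=1.5, c=1.0e6):
    """Hedged sketch of a draft-gear/coupler force between adjacent wagons.
    relative_disp > 0 means the coupling is stretched; within +/- slack/2 the
    coupler transmits no force. Contact force follows a Hunt-Crossley style
    law F ~ k*delta^n + c*delta^n*delta_dot (parameters are illustrative)."""
    half_slack = 0.5 * slack
    if relative_disp > half_slack:            # draft (tension) contact
        delta = relative_disp - half_slack
        return -(k * delta ** n + c * delta ** n * relative_vel)
    if relative_disp < -half_slack:           # buff (compression) contact
        delta = -half_slack - relative_disp
        return k * delta ** n + c * delta ** n * (-relative_vel)
    return 0.0                                # slack region: free movement

# stretched past the slack -> restoring (negative) force; compressed -> positive
print(coupler_force(0.05, 0.2), coupler_force(-0.05, -0.2), coupler_force(0.0, 0.5))
```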
Extending simulation modeling to activity-based costing for clinical procedures.
Glick, N D; Blackmore, C C; Zelman, W N
2000-04-01
A simulation model was developed to measure costs in an Emergency Department setting for patients presenting with possible cervical-spine injury who needed radiological imaging. Simulation, a tool widely used to account for process variability but typically focused on utilization and throughput analysis, is being introduced here as a realistic means to perform an activity-based-costing (ABC) analysis, because traditional ABC methods have difficulty coping with process variation in healthcare. Though the study model has a very specific application, it can be generalized to other settings simply by changing the input parameters. In essence, simulation was found to be an accurate and viable means to conduct an ABC analysis; in fact, the output provides more complete information than could be achieved through other conventional analyses, which gives management more leverage with which to negotiate contractual reimbursements.
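A minimal sketch of how a simulation can drive an activity-based-costing estimate: activity durations are sampled for each simulated patient and multiplied by cost rates, so process variability carries through to the cost distribution. The activities, mean durations, and rates are hypothetical and are not taken from the study.

```python
import random

# hypothetical activities for an imaging pathway: (name, mean minutes, cost per minute)
ACTIVITIES = [
    ("triage",         10.0, 2.5),
    ("physician exam", 15.0, 6.0),
    ("radiography",    20.0, 4.0),
    ("interpretation", 12.0, 5.5),
]

def simulate_patient():
    """Sample activity durations (exponential, for illustration only) and
    accumulate the activity-based cost for one patient."""
    total = 0.0
    for _, mean_minutes, rate in ACTIVITIES:
        duration = random.expovariate(1.0 / mean_minutes)
        total += duration * rate
    return total

costs = [simulate_patient() for _ in range(10_000)]
print(f"mean cost per patient: ${sum(costs) / len(costs):.2f}")
```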
Modeling and Simulation for Mission Operations Work System Design
NASA Technical Reports Server (NTRS)
Sierhuis, Maarten; Clancey, William J.; Seah, Chin; Trimble, Jay P.; Sims, Michael H.
2003-01-01
Work System analysis and design is complex and non-deterministic. In this paper we describe Brahms, a multiagent modeling and simulation environment for designing complex interactions in human-machine systems. Brahms was originally conceived as a business process design tool that simulates work practices, including social systems of work. We describe our modeling and simulation method for mission operations work systems design, based on a research case study in which we used Brahms to design mission operations for a proposed discovery mission to the Moon. We then describe the results of an actual method application project: the Brahms Mars Exploration Rover. Space mission operations are similar to operations of traditional organizations; we show that the application of Brahms for space mission operations design is relevant and transferable to other types of business processes in organizations.
High performance cellular level agent-based simulation with FLAME for the GPU.
Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela
2010-05-01
Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
Particle-in-cell numerical simulations of a cylindrical Hall thruster with permanent magnets
NASA Astrophysics Data System (ADS)
Miranda, Rodrigo A.; Martins, Alexandre A.; Ferreira, José L.
2017-10-01
The cylindrical Hall thruster (CHT) is a propulsion device that offers high propellant utilization and performance at smaller dimensions and lower power levels than traditional Hall thrusters. In this paper we present first results of a numerical model of a CHT. This model solves particle and field dynamics self-consistently using a particle-in-cell approach. We describe a number of techniques applied to reduce the execution time of the numerical simulations. The specific impulse and thrust computed from our simulations are in agreement with laboratory experiments. This simplified model will allow for a detailed analysis of different thruster operational parameters and obtain an optimal configuration to be implemented at the Plasma Physics Laboratory at the University of Brasília.
Mounts, W M; Liebman, M N
1997-07-01
We have developed a method for representing biological pathways and simulating their behavior based on the use of stochastic activity networks (SANs). SANs, an extension of the original Petri net, have been used traditionally to model flow systems including data-communications networks and manufacturing processes. We apply the methodology to the blood coagulation cascade, a biological flow system, and present the representation method as well as results of simulation studies based on published experimental data. In addition to describing the dynamic model, we also present the results of its utilization to perform simulations of clinical states including hemophilias A and B as well as sensitivity analysis of individual factors and their impact on thrombin production.
A Mathematical Tumor Model with Immune Resistance and Drug Therapy: An Optimal Control Approach
De Pillis, L. G.; Radunskaya, A.
2001-01-01
We present a competition model of cancer tumor growth that includes both the immune system response and drug therapy. This is a four-population model that includes tumor cells, host cells, immune cells, and drug interaction. We analyze the stability of the drug-free equilibria with respect to the immune response in order to look for target basins of attraction. One of our goals was to simulate qualitatively the asynchronous tumor-drug interaction known as “Jeff's phenomenon.” The model we develop is successful in generating this asynchronous response behavior. Our other goal was to identify treatment protocols that could improve standard pulsed chemotherapy regimens. Using optimal control theory with constraints and numerical simulations, we obtain new therapy protocols that we then compare with traditional pulsed periodic treatment. The therapies generated by optimal control produce larger oscillations in the tumor population over time. However, by the end of the treatment period, total tumor size is smaller than that achieved through traditional pulsed therapy, and the normal cell population suffers nearly no oscillations.
NASA Astrophysics Data System (ADS)
Yin, Yanshu; Feng, Wenjie
2017-12-01
In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.
Results of including geometric nonlinearities in an aeroelastic model of an F/A-18
NASA Technical Reports Server (NTRS)
Buttrill, Carey S.
1989-01-01
An integrated, nonlinear simulation model suitable for aeroelastic modeling of fixed-wing aircraft has been developed. While the author realizes that the subject of modeling rotating, elastic structures is not closed, it is believed that the equations of motion developed and applied herein are correct to second order and are suitable for use with typical aircraft structures. The equations are not suitable for large elastic deformation. In addition, the modeling framework generalizes both the methods and terminology of non-linear rigid-body airplane simulation and traditional linear aeroelastic modeling. Concerning the importance of angular/elastic inertial coupling in the dynamic analysis of fixed-wing aircraft, the following may be said. The rigorous inclusion of said coupling is not without peril and must be approached with care. In keeping with the same engineering judgment that guided the development of the traditional aeroelastic equations, the effect of non-linear inertial effects for most airplane applications is expected to be small. A parameter does not tell the whole story, however, and modes flagged by the parameter as significant also need to be checked to see if the coupling is not a one-way path, i.e., the inertially affected modes can influence other modes.
Manufacturing data analytics using a virtual factory representation.
Jain, Sanjay; Shao, Guodong; Shin, Seung-Jun
2017-01-01
Large manufacturers have been using simulation to support decision-making for design and production. However, with the advancement of technologies and the emergence of big data, simulation can be utilised to perform and support data analytics for associated performance gains. This requires not only significant model development expertise, but also huge data collection and analysis efforts. This paper presents an approach within the frameworks of Design Science Research Methodology and prototyping to address the challenge of increasing the use of modelling, simulation and data analytics in manufacturing via reduction of the development effort. The use of manufacturing simulation models is presented as data analytics applications themselves and for supporting other data analytics applications by serving as data generators and as a tool for validation. The virtual factory concept is presented as the vehicle for manufacturing modelling and simulation. Virtual factory goes beyond traditional simulation models of factories to include multi-resolution modelling capabilities and thus allowing analysis at varying levels of detail. A path is proposed for implementation of the virtual factory concept that builds on developments in technologies and standards. A virtual machine prototype is provided as a demonstration of the use of a virtual representation for manufacturing data analytics.
Maljovec, D.; Liu, S.; Wang, B.; ...
2015-07-14
Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.
a Discrete Mathematical Model to Simulate Malware Spreading
NASA Astrophysics Data System (ADS)
Del Rey, A. Martin; Sánchez, G. Rodriguez
2012-10-01
With the advent and worldwide development of the Internet, the study and control of malware spreading has become very important. In this sense, some mathematical models to simulate malware propagation have been proposed in the scientific literature, and usually they are based on differential equations exploiting the similarities with mathematical epidemiology. The great majority of these models study the behavior of a particular type of malware called computer worms; indeed, to the best of our knowledge, no model has been proposed to simulate the spreading of a computer virus (the traditional type of malware which differs from computer worms in several aspects). In this sense, the purpose of this work is to introduce a new mathematical model not based on continuous mathematics tools but on discrete ones, to analyze and study the epidemic behavior of computer viruses. Specifically, cellular automata are used in order to design such a model.
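A minimal cellular-automaton sketch in Python of the kind of discrete epidemic update the abstract refers to; the neighborhood, infection probability, and recovery probability are illustrative assumptions rather than the authors' transition rules.

```python
import random

SUSCEPTIBLE, INFECTED, RECOVERED = 0, 1, 2

def step(grid, p_infect=0.3, p_recover=0.1):
    """One synchronous update of a 2-D cellular automaton: a susceptible
    cell becomes infected with probability p_infect per infected neighbor."""
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(m):
            if grid[i][j] == SUSCEPTIBLE:
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = (i + di) % n, (j + dj) % m   # periodic boundaries
                    if grid[ni][nj] == INFECTED and random.random() < p_infect:
                        new[i][j] = INFECTED
                        break
            elif grid[i][j] == INFECTED and random.random() < p_recover:
                new[i][j] = RECOVERED
    return new

grid = [[SUSCEPTIBLE] * 50 for _ in range(50)]
grid[25][25] = INFECTED          # single initially infected host
for _ in range(100):
    grid = step(grid)
print(sum(cell == INFECTED for row in grid for cell in row), "infected cells")
```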
Rafkin, Scot C R; Sta Maria, Magdalena R V; Michaels, Timothy I
2002-10-17
Mesoscale (<100 km) atmospheric phenomena are ubiquitous on Mars, as revealed by Mars Orbiter Camera images. Numerical models provide an important means of investigating martian atmospheric dynamics, for which data availability is limited. But the resolution of general circulation models, which are traditionally used for such research, is not sufficient to resolve mesoscale phenomena. To provide better understanding of these relatively small-scale phenomena, mesoscale models have recently been introduced. Here we simulate the mesoscale spiral dust cloud observed over the caldera of the volcano Arsia Mons by using the Mars Regional Atmospheric Modelling System. Our simulation uses a hierarchy of nested models with grid sizes ranging from 240 km to 3 km, and reveals that the dust cloud is an indicator of a greater but optically thin thermal circulation that reaches heights of up to 30 km, and transports dust horizontally over thousands of kilometres.
Simulations and Evaluation of Mesoscale Convective Systems in a Multi-scale Modeling Framework (MMF)
NASA Astrophysics Data System (ADS)
Chern, J. D.; Tao, W. K.
2017-12-01
It is well known that mesoscale convective systems (MCSs) produce more than 50% of rainfall in most tropical regions and play important roles in regional and global water cycles. Simulation of MCSs in global and climate models is a very challenging problem. Typical MCSs have a horizontal scale of a few hundred kilometers. Models with a domain of several hundred kilometers and fine enough resolution to properly simulate individual clouds are required to realistically simulate MCSs. The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has shown some capabilities of simulating organized MCS-like storm signals and propagations. However, its embedded CRMs typically have a small domain (less than 128 km) and coarse resolution (~4 km) that cannot realistically simulate MCSs and individual clouds. In this study, a series of simulations were performed using the Goddard MMF. The impacts of the domain size and model grid resolution of the embedded CRMs on simulating MCSs are examined. The changes of cloud structure, occurrence, and properties such as cloud types, updraft and downdraft, latent heating profile, and cold pool strength in the embedded CRMs are examined in detail. The simulated MCS characteristics are evaluated against satellite measurements using the Goddard Satellite Data Simulator Unit. The results indicate that embedded CRMs with a large domain and fine resolution tend to produce better simulations compared to those with a typical MMF configuration (128 km domain size and 4 km model grid spacing).
Modeling and simulation of continuous wave velocity radar based on third-order DPLL
NASA Astrophysics Data System (ADS)
Di, Yan; Zhu, Chen; Hong, Ma
2015-02-01
Second-order digital phase-locked loops (DPLLs) are widely used in traditional continuous wave (CW) velocity radar but perform poorly in highly dynamic conditions. Using a third-order DPLL can improve the performance. Firstly, the echo signal model of CW radar is given. Secondly, theoretical derivations of the tracking performance under different velocity conditions are given. Finally, a simulation model of CW radar is established based on the Simulink tool. The tracking performance of the two kinds of DPLL under different acceleration and jerk conditions is studied with this model. The results show that the third-order DPLL has better performance in high dynamic conditions. This model provides a platform for further research on CW radar.
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Broecker, Wallace S.; Jouzel, Jean; Suozzo, Robert J.; Russell, Gary L.; Rind, David
1989-01-01
Observational evidence suggests that of the tritium produced during nuclear bomb tests that has already reached the ocean, more than twice as much arrived through vapor impact as through precipitation. In the present study, the Goddard Institute for Space Studies 8 x 10 deg atmospheric general circulation model is used to simulate tritium transport from the upper atmosphere to the ocean. The simulation indicates that tritium delivery to the ocean via vapor impact is about equal to that via precipitation. The model result is relatively insensitive to several imposed changes in tritium source location, in model parameterizations, and in model resolution. Possible reasons for the discrepancy are explored.
Electric potential calculation in molecular simulation of electric double layer capacitors
NASA Astrophysics Data System (ADS)
Wang, Zhenxing; Olmsted, David L.; Asta, Mark; Laird, Brian B.
2016-11-01
For the molecular simulation of electric double layer capacitors (EDLCs), a number of methods have been proposed and implemented to determine the one-dimensional electric potential profile between the two electrodes at a fixed potential difference. In this work, we compare several of these methods for a model LiClO4-acetonitrile/graphite EDLC simulated using both the traditional fixed-charge method (FCM), in which a fixed charge is assigned a priori to the electrode atoms, and the recently developed constant potential method (CPM) (2007 J. Chem. Phys. 126 084704), where the electrode charges are allowed to fluctuate to keep the potential fixed. Based on an analysis of the full three-dimensional electric potential field, we suggest a method for determining the averaged one-dimensional electric potential profile that can be applied to both the FCM and CPM simulations. Compared to traditional methods based on numerically solving the one-dimensional Poisson’s equation, this method yields better accuracy and requires no supplemental assumptions.
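Independent of the FCM/CPM comparison, the traditional route mentioned above obtains the laterally averaged one-dimensional profile by integrating Poisson's equation twice over the binned charge density. A minimal numerical sketch of that route is given below with an invented charge-density profile; the boundary values and densities are placeholders.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def potential_profile(z, rho, phi0=0.0, efield0=0.0):
    """Integrate the 1-D Poisson equation d^2(phi)/dz^2 = -rho/eps0 twice
    with cumulative trapezoids; rho is the laterally averaged charge
    density (C/m^3) on the grid z (m)."""
    dz = np.diff(z)
    # first integration: E(z) = E0 + (1/eps0) * integral of rho dz'
    e_field = efield0 + np.concatenate(
        ([0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * dz))) / EPS0
    # second integration: phi(z) = phi0 - integral of E dz'
    phi = phi0 - np.concatenate(
        ([0.0], np.cumsum(0.5 * (e_field[1:] + e_field[:-1]) * dz)))
    return phi

# toy antisymmetric double-layer charge density between two electrodes
z = np.linspace(0.0, 5e-9, 501)
rho = 1.0e8 * (np.exp(-z / 5e-10) - np.exp(-(z[-1] - z) / 5e-10))
print(potential_profile(z, rho)[::100])
```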
CSMA Versus Prioritized CSMA for Air-Traffic-Control Improvement
NASA Technical Reports Server (NTRS)
Robinson, Daryl C.
2001-01-01
OPNET version 7.0 simulations are presented involving an important application of the Aeronautical Telecommunications Network (ATN), Controller Pilot Data Link Communications (CPDLC) over the Very High Frequency Data Link, Mode 2 (VDL-2). Communication is modeled for essentially all incoming and outgoing nonstop air-traffic for just three United States cities: Cleveland, Cincinnati, and Detroit. There are 32 airports in the simulation, 29 of which are either sources or destinations for the air-traffic of the aforementioned three airports. The simulation involves 111 Air Traffic Control (ATC) ground stations and 1,235 equally equipped aircraft, taking off, flying realistic free-flight trajectories, and landing in a 24-hr period. Collisionless, Prioritized Carrier Sense Multiple Access (CSMA) is successfully tested and compared with the traditional CSMA typically associated with VDL-2. The performance measures include latency, throughput, and packet loss. As expected, Prioritized CSMA is much quicker and more efficient than traditional CSMA. These simulation results show the potency of Prioritized CSMA for implementing low latency, high throughput, and efficient connectivity.
NASA Astrophysics Data System (ADS)
Mao, Chao; Chen, Shou
2017-01-01
Because the traditional entropy value method still has low accuracy when evaluating the performance of mining projects, a performance evaluation model for mineral projects based on an improved entropy method is proposed. First, a new weight-assignment model is established based on compatibility-matrix analysis from the analytic hierarchy process (AHP) together with the entropy value method; once the compatibility-matrix analysis meets the consistency requirement, any difference between the subjective and objective weights is handled by moderately adjusting their respective proportions. On this basis, a fuzzy evaluation matrix is then used for the performance evaluation. The simulation experiments show that, compared with the traditional entropy and compatibility-matrix analysis methods, the proposed performance evaluation model of mining projects based on the improved entropy value method has higher assessment accuracy.
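A minimal sketch of the standard entropy-weight calculation and of blending it with subjective AHP weights; the score matrix, AHP weights, and blending proportion are assumed values, and the compatibility-matrix consistency check from the paper is not reproduced.

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from the entropy value method.
    X: (n alternatives) x (m criteria) matrix of positive scores."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                          # normalise each criterion
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(n)        # entropy per criterion
    d = 1.0 - e                                    # degree of divergence
    return d / d.sum()

def combined_weights(subjective, objective, alpha=0.5):
    """Blend AHP (subjective) and entropy (objective) weights; alpha is the
    adjustable proportion mentioned in the abstract (value assumed here)."""
    w = alpha * np.asarray(subjective) + (1.0 - alpha) * objective
    return w / w.sum()

scores = np.array([[0.7, 0.4, 0.9],
                   [0.5, 0.8, 0.6],
                   [0.9, 0.6, 0.3]])
ahp_w = np.array([0.5, 0.3, 0.2])                  # hypothetical AHP weights
w = combined_weights(ahp_w, entropy_weights(scores))
print("criterion weights:", np.round(w, 3))
print("alternative scores:", np.round(scores @ w, 3))
```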
Non-linear modelling and control of semi-active suspensions with variable damping
NASA Astrophysics Data System (ADS)
Chen, Huang; Long, Chen; Yuan, Chao-Chun; Jiang, Hao-Bin
2013-10-01
Electro-hydraulic dampers can provide variable damping force that is modulated by varying the command current; furthermore, they offer advantages such as lower power, rapid response, lower cost, and simple hardware. However, accurate characterisation of non-linear f-v properties in pre-yield and force saturation in post-yield is still required. Meanwhile, traditional linear or quarter vehicle models contain various non-linearities. The development of a multi-body dynamics model is very complex, and therefore, SIMPACK was used with suitable improvements for model development and numerical simulations. A semi-active suspension was built based on a belief-desire-intention (BDI)-agent model framework. Vehicle handling dynamics were analysed, and a co-simulation analysis was conducted in SIMPACK and MATLAB to evaluate the BDI-agent controller. The design effectively improved ride comfort, handling stability, and driving safety. A rapid control prototype was built based on dSPACE to conduct a real vehicle test. The test and simulation results were consistent, which verified the simulation.
Octree-based Global Earthquake Simulations
NASA Astrophysics Data System (ADS)
Ramirez-Guzman, L.; Juarez, A.; Bielak, J.; Salazar Monroy, E. F.
2017-12-01
Seismological research has motivated recent efforts to construct more accurate three-dimensional (3D) velocity models of the Earth, perform global simulations of wave propagation to validate models, and also to study the interaction of seismic fields with 3D structures. However, seismogram computation at global scales is limited by computational resources, relying primarily on traditional methods such as normal mode summation or two-dimensional numerical methods. We present an octree-based mesh finite element implementation to perform global earthquake simulations with 3D models using topography and bathymetry with a staircase approximation, as modeled by the Carnegie Mellon Finite Element Toolchain Hercules (Tu et al., 2006). To verify the implementation, we compared the synthetic seismograms computed in a spherical earth against waveforms calculated using normal mode summation for the Preliminary Reference Earth Model (PREM) for a point source representation of the 2014 Mw 7.3 Papanoa, Mexico earthquake. We considered a 3 km-thick ocean layer for stations with predominantly oceanic paths. Eigenfrequencies and eigenfunctions were computed for toroidal, radial, and spherical oscillations in the first 20 branches. Simulations are valid at frequencies up to 0.05 Hz. Matching among the waveforms computed by both approaches, especially for long period surface waves, is excellent. Additionally, we modeled the Mw 9.0 Tohoku-Oki earthquake using the USGS finite fault inversion. Topography and bathymetry from ETOPO1 are included in a mesh with more than 3 billion elements, constrained by the available computational resources. We compared estimated velocity and GPS synthetics against observations at regional and teleseismic stations of the Global Seismological Network and discuss the differences among observations and synthetics, revealing that heterogeneity, particularly in the crust, needs to be considered.
NASA Astrophysics Data System (ADS)
Santos, Léonard; Thirel, Guillaume; Perrin, Charles
2017-04-01
Errors made by hydrological models may come from problems in parameter estimation, uncertainty in observed measurements, numerical problems, and from the model conceptualization that simplifies reality. Here we focus on this last issue of hydrological modeling. One of the solutions to reduce structural uncertainty is to use a multimodel method, taking advantage of the great number and the variability of existing hydrological models. In particular, because different models are not similarly good in all situations, using multimodel approaches can improve the robustness of modeled outputs. Traditionally, in hydrology, multimodel methods are based on the output of the model (the simulated flow series). The aim of this poster is to introduce a different approach based on the internal variables of the models. The method is inspired by the SUper MOdel (SUMO, van den Berge et al., 2011) developed for climatology. The idea of the SUMO method is to correct the internal variables of a model taking into account the values of the internal variables of (an)other model(s). This correction is made bilaterally between the different models. The ensemble of the different models constitutes a super model in which all the models exchange information on their internal variables with each other at each time step. Due to this continuity in the exchanges, this multimodel algorithm is more dynamic than traditional multimodel methods. The method will first be tested using two GR4J models (in a state-space representation) with different parameterizations. The results will be presented and compared to traditional multimodel methods that will serve as benchmarks. In the future, other rainfall-runoff models will be used in the super model. References: van den Berge, L. A., Selten, F. M., Wiegerinck, W., and Duane, G. S. (2011). A multi-model ensemble method that combines imperfect models through learning. Earth System Dynamics, 2(1):161-177.
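A toy sketch of the state-exchange idea using two linear reservoirs in place of GR4J: the models share and nudge their internal storages at every time step instead of only combining their simulated flows afterwards. The nudging coefficient, store parameters, and forcing are assumptions for illustration only.

```python
def step_store(storage, precip, k):
    """One step of a toy linear-reservoir rainfall-runoff model:
    outflow is proportional to storage."""
    outflow = k * storage
    storage = storage + precip - outflow
    return storage, outflow

def supermodel_run(precip_series, k1=0.2, k2=0.35, nudge=0.1):
    """Two differently parameterised stores exchange information on their
    internal state (storage) every time step (SUMO-like coupling)."""
    s1 = s2 = 10.0
    flows = []
    for p in precip_series:
        s1, q1 = step_store(s1, p, k1)
        s2, q2 = step_store(s2, p, k2)
        # bilateral correction of the internal variables
        delta = nudge * (s2 - s1)
        s1, s2 = s1 + delta, s2 - delta
        flows.append(0.5 * (q1 + q2))
    return flows

print(supermodel_run([5.0, 0.0, 12.0, 3.0, 0.0, 0.0]))
```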
Multiresolution modeling with a JMASS-JWARS HLA Federation
NASA Astrophysics Data System (ADS)
Prince, John D.; Painter, Ron D.; Pendell, Brian; Richert, Walt; Wolcott, Christopher
2002-07-01
CACI, Inc.-Federal has built, tested, and demonstrated the use of a JMASS-JWARS HLA Federation that supports multi- resolution modeling of a weapon system and its subsystems in a JMASS engineering and engagement model environment, while providing a realistic JWARS theater campaign-level synthetic battle space and operational context to assess the weapon system's value added and deployment/employment supportability in a multi-day, combined force-on-force scenario. Traditionally, acquisition analyses require a hierarchical suite of simulation models to address engineering, engagement, mission and theater/campaign measures of performance, measures of effectiveness and measures of merit. Configuring and running this suite of simulations and transferring the appropriate data between each model is both time consuming and error prone. The ideal solution would be a single simulation with the requisite resolution and fidelity to perform all four levels of acquisition analysis. However, current computer hardware technologies cannot deliver the runtime performance necessary to support the resulting extremely large simulation. One viable alternative is to integrate the current hierarchical suite of simulation models using the DoD's High Level Architecture in order to support multi- resolution modeling. An HLA integration eliminates the extremely large model problem, provides a well-defined and manageable mixed resolution simulation and minimizes VV&A issues.
Statistically Modeling I-V Characteristics of CNT-FET with LASSO
NASA Astrophysics Data System (ADS)
Ma, Dongsheng; Ye, Zuochang; Wang, Yan
2017-08-01
With the advent of the Internet of Things (IoT), the need to study new materials and devices for various applications is increasing. Traditionally, we build compact models for transistors on the basis of physics, but physical models are expensive to develop and need a very long time to adjust for non-ideal effects. As the vision for the application of many novel devices is not certain, or the manufacturing process is not mature, deriving generalized, accurate physical models for such devices is very strenuous, whereas statistical modeling is becoming a viable alternative because of its data-oriented nature and fast implementation. In this paper, one classical statistical regression method, LASSO, is used to model the I-V characteristics of a CNT-FET, and a pseudo-PMOS inverter simulation based on the trained model is implemented in Cadence. The normalized relative mean square prediction error of the trained model against experimental sample data and the simulation results show that the model is acceptable for digital circuit static simulation, and such a modeling methodology can extend to general devices.
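A minimal sketch of the LASSO workflow described above using scikit-learn, with polynomial features of the gate and drain biases as regressors; the synthetic I-V data and the hyperparameters are placeholders, not the CNT-FET measurements used in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic bias sweep and drain currents (in microamps); real work would use
# measured CNT-FET sample data instead.
rng = np.random.default_rng(0)
vgs, vds = np.meshgrid(np.linspace(0.0, 1.0, 21), np.linspace(0.0, 1.0, 21))
X = np.column_stack([vgs.ravel(), vds.ravel()])
i_d = X[:, 0] ** 2 * np.tanh(5.0 * X[:, 1]) + 0.01 * rng.standard_normal(len(X))

# LASSO on polynomial features: the L1 penalty drives most coefficients to
# zero, leaving a sparse, cheap-to-evaluate I-V model.
model = make_pipeline(PolynomialFeatures(degree=4),
                      Lasso(alpha=1e-4, max_iter=100_000))
model.fit(X, i_d)

pred = model.predict(X)
nrmse = np.sqrt(np.mean((pred - i_d) ** 2)) / (i_d.max() - i_d.min())
print(f"normalized RMS prediction error: {nrmse:.2%}")
```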
NASA Astrophysics Data System (ADS)
Huang, W. D.; Fan, H. G.; Chen, N. X.
2012-11-01
To study the interaction between the transient flow in pipe and the unsteady turbulent flow in turbine, a coupled model of the transient flow in the pipe and three-dimensional unsteady flow in the turbine is developed based on the method of characteristics and the fluid governing equation in the accelerated rotational relative coordinate. The load-rejection process under the closing of guide vanes of the hydraulic power plant is simulated by the coupled method, the traditional transient simulation method and traditional three-dimensional unsteady flow calculation method respectively and the results are compared. The pressure, unit flux and rotation speed calculated by three methods show a similar change trend. However, because the elastic water hammer in the pipe and the pressure fluctuation in the turbine have been considered in the coupled method, the increase of pressure at spiral inlet is higher and the pressure fluctuation in turbine is stronger.
Predicted performance of an integrated modular engine system
NASA Technical Reports Server (NTRS)
Binder, Michael; Felder, James L.
1993-01-01
Space vehicle propulsion systems are traditionally composed of a cluster of discrete engines, each with its own set of turbopumps, valves, and a thrust chamber. The Integrated Modular Engine (IME) concept proposes a vehicle propulsion system composed of multiple turbopumps, valves, and thrust chambers which are all interconnected. The IME concept has potential advantages in fault-tolerance, weight, and operational efficiency compared with the traditional clustered engine configuration. The purpose of this study is to examine the steady-state performance of an IME system with various components removed to simulate fault conditions. An IME configuration for a hydrogen/oxygen expander cycle propulsion system with four sets of turbopumps and eight thrust chambers has been modeled using the Rocket Engine Transient Simulator (ROCETS) program. The nominal steady-state performance is simulated, as well as turbopump, thrust chamber, and duct failures. The impact of component failures on system performance is discussed in the context of the system's fault tolerant capabilities.
Rotational Brownian Dynamics simulations of clathrin cage formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilie, Ioana M.; Briels, Wim J.; MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede
2014-08-14
The self-assembly of nearly rigid proteins into ordered aggregates is well suited for modeling by the patchy particle approach. Patchy particles are traditionally simulated using Monte Carlo methods, to study the phase diagram, while Brownian Dynamics simulations would reveal insights into the assembly dynamics. However, Brownian Dynamics of rotating anisotropic particles gives rise to a number of complications not encountered in translational Brownian Dynamics. We thoroughly test the Rotational Brownian Dynamics scheme proposed by Naess and Elsgaeter [Macromol. Theory Simul. 13, 419 (2004); Naess and Elsgaeter Macromol. Theory Simul. 14, 300 (2005)], confirming its validity. We then apply the algorithmmore » to simulate a patchy particle model of clathrin, a three-legged protein involved in vesicle production from lipid membranes during endocytosis. Using this algorithm we recover time scales for cage assembly comparable to those from experiments. We also briefly discuss the undulatory dynamics of the polyhedral cage.« less
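The Naess-Elsgaeter scheme itself is not reproduced here; the sketch below only illustrates isotropic rotational Brownian dynamics of a single orientation vector, which is the simplest setting in which such an algorithm can be sanity-checked against the expected exp(-2*D_r*t) decay of the orientation autocorrelation. The rotational diffusion coefficient and time step are assumed values, and anisotropic friction (needed for real clathrin legs) is ignored.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rbd_trajectory(d_rot, dt, steps, seed=1):
    """Isotropic rotational Brownian dynamics of a body-fixed unit vector:
    each step applies a small random rotation with variance 2*D_r*dt per axis."""
    rng = np.random.default_rng(seed)
    u = np.array([0.0, 0.0, 1.0])
    traj = [u.copy()]
    sigma = np.sqrt(2.0 * d_rot * dt)
    for _ in range(steps):
        rotvec = rng.normal(0.0, sigma, size=3)   # random rotation vector
        u = Rotation.from_rotvec(rotvec).apply(u)
        traj.append(u.copy())
    return np.array(traj)

traj = rbd_trajectory(d_rot=1.0e6, dt=1.0e-9, steps=10_000)
# <u(0).u(t)> should be ~1 at early times and decay towards 0 at late times
print(np.mean(traj[:100] @ traj[0]), np.mean(traj[-100:] @ traj[0]))
```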
The Emergence of Agent-Based Technology as an Architectural Component of Serious Games
NASA Technical Reports Server (NTRS)
Phillips, Mark; Scolaro, Jackie; Scolaro, Daniel
2010-01-01
The evolution of games as an alternative to traditional simulations in the military context has been gathering momentum over the past five years, even though the exploration of their use in the serious sense has been ongoing since the mid-nineties. Much of the focus has been on the aesthetics of the visuals provided by the core game engine as well as the artistry provided by talented development teams to produce not only breathtaking artwork, but highly immersive game play. Consideration of game technology is now so much a part of the modeling and simulation landscape that it is becoming difficult to distinguish traditional simulation solutions from game-based approaches. But games have yet to provide the much needed interactive free play that has been the domain of semi-autonomous forces (SAF). The component-based middleware architecture that game engines provide promises a great deal in terms of options for the integration of agent solutions to support the development of non-player characters that engage the human player without the deterministic nature of scripted behaviors. However, there are a number of hard-learned lessons on the modeling and simulation side of the equation that game developers have yet to learn, such as: correlation of heterogeneous systems, scalability of both terrain and numbers of non-player entities, and the bi-directional nature of simulation to game interaction provided by Distributed Interactive Simulation (DIS) and High Level Architecture (HLA).
On simulation of local fluxes in molecular junctions
NASA Astrophysics Data System (ADS)
Cabra, Gabriel; Jensen, Anders; Galperin, Michael
2018-05-01
We present a pedagogical review of the current density simulation in molecular junction models indicating its advantages and deficiencies in analysis of local junction transport characteristics. In particular, we argue that current density is a universal tool which provides more information than traditionally simulated bond currents, especially when discussing inelastic processes. However, current density simulations are sensitive to the choice of basis and electronic structure method. We note that while discussing the local current conservation in junctions, one has to account for the source term caused by the open character of the system and intra-molecular interactions. Our considerations are illustrated with numerical simulations of a benzenedithiol molecular junction.
Abiotic/biotic coupling in the rhizosphere: a reactive transport modeling analysis
Lawrence, Corey R.; Steefel, Carl; Maher, Kate
2014-01-01
A new generation of models is needed to adequately simulate patterns of soil biogeochemical cycling in response to changing global environmental drivers. For example, predicting the influence of climate change on soil organic matter storage and stability requires models capable of addressing complex biotic/abiotic interactions between rhizosphere and weathering processes. Reactive transport modeling provides a powerful framework for simulating these interactions and the resulting influence on soil physical and chemical characteristics. Incorporation of organic reactions in an existing reactive transport model framework has yielded novel insights into soil weathering and development, but much more work is required to adequately capture root and microbial dynamics in the rhizosphere. This endeavor provides many advantages over traditional soil biogeochemical models but also many challenges.
Kerr, Brendan; Hawkins, Trisha Lee-Ann; Herman, Robert; Barnes, Sue; Kaufmann, Stephanie; Fraser, Kristin; Ma, Irene W Y
2013-07-18
Although simulation-based training is increasingly used for medical education, its benefits in continuing medical education (CME) are less established. This study seeks to evaluate the feasibility of incorporating simulation-based training into a CME conference and compare its effectiveness with the traditional workshop in improving knowledge and self-reported confidence. Participants (N=27) were group randomized to either a simulation-based workshop or a traditional case-based workshop. Post-training, knowledge assessment scores neither increased significantly in the traditional group (d=0.13; p=0.76) nor decreased significantly in the simulation group (d=-0.44; p=0.19). Self-reported comfort in patient assessment parameters increased in both groups (p<0.05 in all). However, only the simulation group reported an increase in comfort in patient management (d=1.1, p=0.051 for the traditional group and d=1.3, p=0.0003 for the simulation group). At 1 month, comfort measures in the traditional group increased consistently over time while these measures in the simulation group increased post-workshop but decreased by 1 month, suggesting that some of the effects of training with simulation may be short lived. The use of simulation-based training was not associated with benefits in knowledge acquisition, knowledge retention, or comfort in patient assessment. It was associated with superior outcomes in comfort in patient management, but this benefit may be short-lived. Further studies are required to better define the conditions under which simulation-based training is beneficial.
Simulation of Mercury's magnetosheath with a combined hybrid-paraboloid model
NASA Astrophysics Data System (ADS)
Parunakian, David; Dyadechkin, Sergey; Alexeev, Igor; Belenkaya, Elena; Khodachenko, Maxim; Kallio, Esa; Alho, Markku
2017-08-01
In this paper we introduce a novel approach for modeling planetary magnetospheres that involves a combination of the hybrid model and the paraboloid magnetosphere model (PMM); we further refer to it as the combined hybrid model. While both of these individual models have been successfully applied in the past, their combination enables us both to overcome the traditional difficulty of hybrid models in developing a self-consistent magnetic field and to compensate for the lack of plasma simulation in the PMM. We then use this combined model to simulate Mercury's magnetosphere and investigate the geometry and configuration of Mercury's magnetosheath controlled by various conditions in the interplanetary medium. The developed approach provides a unique comprehensive view of Mercury's magnetospheric environment for the first time. Using this setup, we compare the locations of the bow shock and the magnetopause as determined by simulations with the locations predicted by stand-alone PMM runs, and we also verify the magnetic and dynamic pressure balance at the magnetopause. We also compare the results produced by these simulations with observational data obtained by the magnetometer on board the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft along a dusk-dawn orbit and discuss the signatures of the magnetospheric features that appear in these simulations. Overall, our analysis suggests that combining the semiempirical PMM with a self-consistent global kinetic model creates new modeling possibilities which the individual models cannot provide on their own.
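As a rough illustration of the pressure-balance check mentioned above, the sketch below (Python, with assumed solar-wind values near Mercury rather than numbers from the combined hybrid-PMM runs) compares the solar-wind dynamic pressure with the magnetic field strength needed to balance it at a magnetopause.

```python
# Minimal sketch of a magnetopause pressure-balance check: solar-wind dynamic
# pressure vs. the magnetospheric magnetic pressure that would balance it.
# The density and speed below are illustrative assumptions, not simulation output.
import numpy as np

mu0 = 4e-7 * np.pi          # vacuum permeability (T m / A)
m_p = 1.6726e-27            # proton mass (kg)

n_sw = 40e6                 # assumed solar-wind number density (m^-3)
v_sw = 400e3                # assumed solar-wind speed (m/s)

p_dyn = n_sw * m_p * v_sw**2             # dynamic pressure (Pa)
B_balance = np.sqrt(2 * mu0 * p_dyn)     # field strength balancing it (T)

print(f"dynamic pressure ~ {p_dyn * 1e9:.1f} nPa, "
      f"balancing field ~ {B_balance * 1e9:.0f} nT")
```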
Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines
NASA Astrophysics Data System (ADS)
Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.
2016-12-01
Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model while remaining flexible, accurate, and computationally feasible. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.
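The surrogate-model idea itself can be illustrated with a minimal sketch: run an expensive simulator at a handful of design points, fit a cheap spline approximation, and propagate input uncertainty through the approximation. This is not the authors' Bayesian adaptive spline method; the stand-in simulator, design points, and spline settings below are placeholders.

```python
# Minimal illustration of the surrogate-model workflow (not the Bayesian
# adaptive spline method of the paper): fit a cheap spline to a small number
# of expensive simulator runs, then reuse it for uncertainty propagation.
import numpy as np
from scipy.interpolate import UnivariateSpline

def expensive_simulator(x):                  # hypothetical stand-in model
    return np.exp(-x) * np.sin(4 * x)

x_train = np.linspace(0.0, 2.0, 15)          # design points (simulator runs)
y_train = expensive_simulator(x_train)

surrogate = UnivariateSpline(x_train, y_train, k=3, s=0.0)  # cubic spline fit

# Cheap Monte Carlo uncertainty propagation through the surrogate
x_samples = np.random.default_rng(0).uniform(0.0, 2.0, 100_000)
y_samples = surrogate(x_samples)
print(y_samples.mean(), y_samples.std())
```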
Numerical simulation of failure behavior of granular debris flows based on flume model tests.
Zhou, Jian; Li, Ye-xun; Jia, Min-cai; Li, Cui-na
2013-01-01
In this study, the failure behaviors of debris flows were studied by flume model tests with artificial rainfall and by numerical simulations (PFC(3D)). Model tests revealed that grain size distribution had profound effects on the failure mode: failure of the medium sand slope started with cracks at the crest and took the form of retrogressive toe sliding. With an increasing fraction of fine particles in the soil, the failure mode of the slopes changed to fluidized flow. The discrete element method PFC(3D) avoids the continuum assumption of traditional mechanics and accounts for the characteristics of individual particles. A numerical simulation model using a coupled fluid-solid method was therefore developed to simulate the debris flow. Compared with the experimental results, the numerical simulations indicated that the failure mode of the medium sand slope was retrogressive toe sliding, and that of the fine sand slope was fluidized sliding. The simulation results are consistent with the model tests and theoretical analysis, and grain size distribution caused the different failure behaviors of the granular debris flows. This research should serve as a guide for exploring the theory of debris flows and for improving their prevention and mitigation.
Application of GIS to modified models of vehicle emission dispersion
NASA Astrophysics Data System (ADS)
Jin, Taosheng; Fu, Lixin
This paper reports on a preliminary study of the forecast and evaluation of transport-related air pollution dispersion in urban areas. Some modifications of the traditional Gauss dispersion models are provided, and especially a crossroad model is built, which considers the great variation of vehicle emission attributed to different driving patterns at the crossroad. The above models are combined with a self-developed geographic information system (GIS) platform, and a simulative system with graphical interfaces is built. The system aims at visually describing the influences on the urban environment by urban traffic characteristics and therefore gives a reference to the improvement of urban air quality. Due to the introduction of a self-developed GIS platform and a creative crossroad model, the system is more effective, flexible and accurate. Finally, a comparison of the simulated (predicted) and observed hourly concentration is given, which indicates a good simulation.
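The classical Gaussian plume equation is the usual starting point for such dispersion models; a minimal sketch is given below. Treating σy and σz as direct inputs (in practice they depend on downwind distance and atmospheric stability class) is a simplification, and this is not the modified crossroad model of the paper.

```python
# Classical Gaussian plume concentration with ground reflection; a baseline
# sketch of the kind of dispersion formula such GIS-coupled models build on.
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Concentration (g/m^3) at crosswind offset y and height z.
    Q: emission rate (g/s); u: wind speed (m/s); H: effective release height (m);
    sigma_y, sigma_z: dispersion parameters (m), normally functions of downwind
    distance and stability class, passed directly here for simplicity."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example with illustrative values for a near-ground traffic source
print(gaussian_plume(Q=10.0, u=3.0, y=0.0, z=1.5, H=0.5, sigma_y=8.0, sigma_z=4.0))
```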
Modeling the pressure-dilatation correlation
NASA Technical Reports Server (NTRS)
Sarkar, S.
1991-01-01
It is generally accepted that pressure dilatation, which is an additional compressibility term in turbulence transport equations, may be important for high speed flows. Recent direct simulations of homogeneous shear turbulence have given concrete evidence that the pressure dilatation is important insofar as it contributes to the reduced growth of turbulent kinetic energy due to compressibility effects. The problem of modeling the pressure dilatation is addressed. A component of the pressure dilatation that exhibits temporal oscillations is isolated and, using direct numerical simulations of homogeneous shear turbulence and isotropic turbulence, is shown to make a negligible contribution to the evolution of turbulent kinetic energy. An analysis for the case of homogeneous turbulence is then performed to obtain a model for the nonoscillatory pressure dilatation. This model algebraically relates the pressure dilatation to quantities traditionally obtained in incompressible turbulence closures. The model is validated by direct comparison with the pressure dilatation data obtained from the simulations.
NASA Technical Reports Server (NTRS)
Richardson, Brian; Kenny, Jeremy
2015-01-01
Injector design is a critical part of the development of a rocket Thrust Chamber Assembly (TCA). Proper detailed injector design can maximize propulsion efficiency while minimizing the potential for failures in the combustion chamber. Traditional design and analysis methods for hydrocarbon-fuel injector elements are based heavily on empirical data and models developed from heritage hardware tests. Using this limited set of data produces challenges when trying to design a new propulsion system where the operating conditions may greatly differ from heritage applications. Time-accurate, Three-Dimensional (3-D) Computational Fluid Dynamics (CFD) modeling of combusting flows inside of injectors has long been a goal of the fluid analysis group at Marshall Space Flight Center (MSFC) and the larger CFD modeling community. CFD simulation can provide insight into the design and function of an injector that cannot be obtained easily through testing or empirical comparisons to existing hardware. However, the traditional finite-rate chemistry modeling approach utilized to simulate combusting flows for complex fuels, such as Rocket Propellant-2 (RP-2), is prohibitively expensive and time consuming even with a large amount of computational resources. MSFC has been working, in partnership with Streamline Numerics, Inc., to develop a computationally efficient, flamelet-based approach for modeling complex combusting flow applications. In this work, a flamelet modeling approach is used to simulate time-accurate, 3-D, combusting flow inside a single Gas Centered Swirl Coaxial (GCSC) injector using the flow solver, Loci-STREAM. CFD simulations were performed for several different injector geometries. Results of the CFD analysis helped guide the design of the injector from an initial concept to a tested prototype. The results of the CFD analysis are compared to data gathered from several hot-fire, single element injector tests performed in the Air Force Research Lab EC-1 test facility located at Edwards Air Force Base.
LEGEND, a LEO-to-GEO Environment Debris Model
NASA Technical Reports Server (NTRS)
Liou, Jer Chyi; Hall, Doyle T.
2013-01-01
LEGEND (LEO-to-GEO Environment Debris model) is a three-dimensional orbital debris evolutionary model that is capable of simulating the historical and future debris populations in the near-Earth environment. The historical component in LEGEND adopts a deterministic approach to mimic the known historical populations. Launched rocket bodies, spacecraft, and mission-related debris (rings, bolts, etc.) are added to the simulated environment. Known historical breakup events are reproduced, and fragments down to 1 mm in size are created. The LEGEND future projection component adopts a Monte Carlo approach and uses an innovative pair-wise collision probability evaluation algorithm to simulate the future breakups and the growth of the debris populations. This algorithm is based on a new "random sampling in time" approach that preserves characteristics of the traditional approach and captures the rapidly changing nature of the orbital debris environment. LEGEND is a Fortran 90-based numerical simulation program. It operates in a UNIX/Linux environment.
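The "random sampling in time" idea can be caricatured with a toy Monte Carlo loop: each object pair has a mean collision rate, and at every time step a uniform draw decides whether a collision occurs. The rates, step size, and data below are purely illustrative and are not LEGEND's actual algorithm or populations.

```python
# Toy Monte Carlo sketch of pair-wise collision sampling over a projection
# period; a stand-in for the "random sampling in time" idea only.
import random

def simulate_collisions(pair_rates, dt, n_steps, seed=1):
    """pair_rates: dict mapping an (object_i, object_j) pair to an assumed mean
    collision rate (collisions per year); dt: time step in years."""
    rng = random.Random(seed)
    events = []
    for step in range(n_steps):
        t = step * dt
        for pair, rate in pair_rates.items():
            if rng.random() < rate * dt:     # probability of a collision in dt
                events.append((t, pair))
    return events

# Hypothetical pair with a 1e-4 per-year collision rate, 5-day steps, ~100 years
events = simulate_collisions({("sat_A", "rb_B"): 1e-4}, dt=5 / 365.25, n_steps=7305)
print(len(events), "collision(s) sampled")
```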
Prediction of normalized biodiesel properties by simulation of multiple feedstock blends.
García, Manuel; Gonzalo, Alberto; Sánchez, José Luis; Arauzo, Jesús; Peña, José Angel
2010-06-01
A continuous process for biodiesel production has been simulated using Aspen HYSYS V7.0 software. As fresh feed, feedstocks with a mild acid content have been used. The process flowsheet follows a traditional alkaline transesterification scheme consisting of esterification, transesterification and purification stages. Kinetic models taking into account the concentrations of the different species have been employed in order to simulate the behavior of the CSTR reactors and the product distribution within the process. The comparison between experimental data found in the literature and the predicted normalized properties is discussed. Additionally, a comparison between different thermodynamic packages has been performed; the NRTL activity model has been selected as the most reliable of them. The combination of these models allows the prediction of 13 out of 25 parameters included in standard EN-14214:2003, and confers on simulators great value as predictive as well as optimization tools. (c) 2010 Elsevier Ltd. All rights reserved.
3D superwide-angle one-way propagator and its application in seismic modeling and imaging
NASA Astrophysics Data System (ADS)
Jia, Xiaofeng; Jiang, Yunong; Wu, Ru-Shan
2018-07-01
Traditional one-way wave-equation-based propagators have been widely used in past decades. Compared with two-way propagators, one-way methods have higher efficiency and lower memory demands. These two features are especially important in solving large-scale 3D problems. However, regular one-way propagators cannot simulate waves that propagate at large angles approaching 90° because of their inherent wide-angle limitation. Traditional one-way methods can only propagate along a predetermined direction (e.g., the z-direction), so simulation of turning waves is beyond their ability. We develop a 3D superwide-angle one-way propagator to overcome the angle limitation and to simulate turning waves with superwide propagation angles (>90°) for modeling and imaging complex geological structures. Wavefields propagating along the vertical and horizontal directions are combined using a typical stacking scheme. A weight function related to the propagation angle is used for combining and updating the wavefields at each propagation step. In the implementation, we use graphics processing units (GPU) to accelerate the process. A typical workflow is designed to exploit the advantages of the GPU architecture. Numerical examples show that the method achieves higher accuracy in modeling and imaging steep structures than regular one-way propagators. In fact, the superwide-angle one-way propagator can be built on any one-way method to improve seismic modeling and imaging.
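A hedged sketch of the angle-weighted combination step is shown below: two wavefields, one extrapolated along z and one along x, are blended with a weight that depends on the local propagation angle. The specific weight function is an assumption made for illustration, not the one used in the paper.

```python
# Toy sketch of angle-weighted blending of vertically and horizontally
# propagated wavefields; the cosine-squared weight is an illustrative choice.
import numpy as np

def combine_wavefields(u_z, u_x, angle):
    """u_z, u_x: wavefields extrapolated along the vertical and horizontal
    directions (same-shaped arrays); angle: local propagation angle measured
    from vertical, in radians (array or scalar)."""
    w = np.clip(np.cos(angle) ** 2, 0.0, 1.0)   # assumed weight: favors u_z near 0 rad
    return w * u_z + (1.0 - w) * u_x
```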
Nataraja, R M; Webb, N; Lopez, P J
2018-04-01
Surgical training has changed radically in the last few decades. The traditional Halstedian model of time-bound apprenticeship has been replaced with competency-based training. In our previous article, we presented an overview of learning theory relevant to clinical teaching; a summary for the busy paediatric surgeon and urologist. We introduced the concepts underpinning current changes in surgical education and training. In this next article, we give an overview of the various modalities of surgical simulation, the educational principles that underlie them, and potential applications in clinical practice. These modalities include: open surgical models and trainers, laparoscopic bench trainers, virtual reality trainers, simulated patients and role-play, hybrid simulation, scenario-based simulation, distributed simulation, virtual reality, and online simulation. Specific examples of technology that may be used for these modalities are included, but this is not a comprehensive review of all available products. Copyright © 2018 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.
Wen, Tingxi; Medveczky, David; Wu, Jackie; Wu, Jianhuang
2018-01-25
Colonoscopy plays an important role in the clinical screening and management of colorectal cancer. The traditional 'see one, do one, teach one' training style for such an invasive procedure is resource intensive and ineffective. Given that colonoscopy is difficult and time-consuming to master, the use of virtual reality simulators to train gastroenterologists in colonoscopy operations offers a promising alternative. In this paper, a realistic and real-time interactive simulator for training the colonoscopy procedure is presented, which can even include polypectomy simulation. Our approach models the colonoscope as a thick flexible elastic rod with different resolutions that adapt dynamically to the curvature of the colon. Further material characteristics of this deformable material are integrated into our discrete model to realistically simulate the behavior of the colonoscope. In addition, we propose a set of key aspects of our simulator that give fast, high-fidelity feedback to trainees. We also conducted an initial validation of this colonoscopic simulator to determine its clinical utility and efficacy.
NASA Technical Reports Server (NTRS)
Striepe, Scott A.; Blanchard, Robert C.; Kirsch, Michael F.; Fowler, Wallace T.
2007-01-01
On January 14, 2005, ESA's Huygens probe separated from NASA's Cassini spacecraft, entered the Titan atmosphere and landed on its surface. As part of the NASA Engineering and Safety Center Independent Technical Assessment of the Huygens entry, descent, and landing, and an agreement with ESA, NASA provided results of all EDL analyses and associated findings to the Huygens project team prior to probe entry. In return, NASA was provided the flight data from the probe so that trajectory reconstruction could be done and simulation models assessed. Trajectory reconstruction of the Huygens entry probe at Titan was accomplished using two independent approaches: a traditional method and a POST2-based method. Results from both approaches are discussed in this paper.
Sun, Chenjing; Qi, Xiaokun
2018-01-01
Lumbar puncture (LP) is an essential part of adult neurology residency training. Technologic as well as nontechnologic training is needed. However, current assessment tools mostly focus on the technologic aspects of LP. We propose a training method, problem- and simulator-based learning (PSBL), for LP residency training to develop the overall skills of neurology residents. We enrolled 60 neurology postgraduate-year-1 residents from our standardized residents training center and randomly divided them into 2 groups: a traditional teaching group and a PSBL group. After training, we assessed the extent to which the residents were ready to perform LP and tracked successful LPs performed by the residents. We then asked residents to complete questionnaires about the training models. Performance scores and the results of the questionnaires were compared between the 2 groups. Students and faculty concluded that PSBL provided a more effective learning experience than the traditional teaching model. Although no statistical difference was found in the pretest, posttest, and improvement rate scores between the 2 groups, based on questionnaire scores and the number of successful LPs after training, the PSBL group showed a statistically significant improvement compared with the traditional group. Findings indicated that nontechnical elements, such as planning before the procedure and controlling uncertainties during the procedure, are more crucial than technical elements. Compared with the traditional teaching model, PSBL for LP training can develop overall surgical skills, including technical and nontechnical elements, improving performance. Residents in the PSBL group were more confident and effective in performing LP. Copyright © 2017 Elsevier Inc. All rights reserved.
Stefanidis, Dimitrios; Scerbo, Mark W; Montero, Paul N; Acker, Christina E; Smith, Warren D
2012-01-01
We hypothesized that novices would perform better in the operating room after simulator training to automaticity compared with traditional proficiency-based training (the current standard training paradigm). Simulator-acquired skill translates to the operating room, but the skill transfer is incomplete. Secondary task metrics reflect the ability of trainees to multitask (automaticity) and may improve performance assessment on simulators and skill transfer by indicating when learning is complete. Novices (N = 30) were enrolled in an IRB-approved, blinded, randomized, controlled trial. Participants were randomized into an intervention (n = 20) and a control (n = 10) group. The intervention group practiced on the FLS suturing task until they achieved expert levels of time and errors (proficiency), were tested on a live porcine fundoplication model, continued simulator training until they achieved expert levels on a visual-spatial secondary task (automaticity), and were retested on the operating room (OR) model. The control group participated only during testing sessions. Performance scores were compared within and between groups during testing sessions. Intervention group participants achieved proficiency after 54 ± 14 repetitions and automaticity after an additional 109 ± 57 repetitions. Participants achieved better scores in the OR after automaticity training [345 (range, 0-537)] compared with after proficiency-based training [220 (range, 0-452); P < 0.001]. Simulator training to automaticity takes more time but is superior to proficiency-based training, as it leads to improved skill acquisition and transfer. Secondary task metrics that reflect trainee automaticity should be implemented during simulator training to improve learning and skill transfer.
Subgrid-scale Condensation Modeling for Entropy-based Large Eddy Simulations of Clouds
NASA Astrophysics Data System (ADS)
Kaul, C. M.; Schneider, T.; Pressel, K. G.; Tan, Z.
2015-12-01
An entropy- and total water-based formulation of LES thermodynamics, such as that used by the recently developed code PyCLES, is advantageous from physical and numerical perspectives. However, existing closures for subgrid-scale thermodynamic fluctuations assume more traditional choices for prognostic thermodynamic variables, such as liquid potential temperature, and are not directly applicable to entropy-based modeling. Since entropy and total water are generally nonlinearly related to diagnosed quantities like temperature and condensate amounts, neglecting their small-scale variability can lead to bias in simulation results. Here we present the development of a subgrid-scale condensation model suitable for use with entropy-based thermodynamic formulations.
Calibrating cellular automaton models for pedestrians walking through corners
NASA Astrophysics Data System (ADS)
Dias, Charitha; Lovreglio, Ruggiero
2018-05-01
Cellular Automata (CA) based pedestrian simulation models have gained remarkable popularity as they are simpler and easier to implement compared to other microscopic modeling approaches. However, incorporating traditional floor field representations in CA models to simulate pedestrian corner navigation behavior could result in unrealistic behaviors. Even though several previous studies have attempted to enhance CA models to realistically simulate pedestrian maneuvers around bends, such modifications have not been calibrated or validated against empirical data. In this study, two static floor field (SFF) representations, namely 'discrete representation' and 'continuous representation', are calibrated for CA-models to represent pedestrians' walking behavior around 90° bends. Trajectory data collected through a controlled experiment are used to calibrate these model representations. Calibration results indicate that although both floor field representations can represent pedestrians' corner navigation behavior, the 'continuous' representation fits the data better. Output of this study could be beneficial for enhancing the reliability of existing CA-based models by representing pedestrians' corner navigation behaviors more realistically.
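For reference, a generic static-floor-field CA move rule looks like the sketch below: a pedestrian chooses among free neighbouring cells with probabilities proportional to exp(k_S · S). This is the textbook formulation, not the calibrated 'discrete' or 'continuous' corner representations of the study.

```python
# Minimal sketch of one static-floor-field CA update step (generic formulation).
import numpy as np

def choose_move(sff, pos, occupied, k_s=2.0, rng=np.random.default_rng()):
    """sff: 2-D static floor field (higher values = closer to the target);
    pos: (row, col) tuple of the pedestrian; occupied: boolean occupancy grid."""
    rows, cols = sff.shape
    candidates, weights = [], []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = pos[0] + dr, pos[1] + dc
            in_bounds = 0 <= r < rows and 0 <= c < cols
            if in_bounds and ((r, c) == pos or not occupied[r, c]):
                candidates.append((r, c))
                weights.append(np.exp(k_s * sff[r, c]))   # floor-field attraction
    probs = np.array(weights) / np.sum(weights)
    return candidates[rng.choice(len(candidates), p=probs)]
```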
A Complex-Valued Firing-Rate Model That Approximates the Dynamics of Spiking Networks
Schaffer, Evan S.; Ostojic, Srdjan; Abbott, L. F.
2013-01-01
Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons. PMID:24204236
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error combined with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
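The dimensionality-reduction step can be sketched as follows: decompose the rainfall series with a discrete wavelet transform, keep only the low-order approximation coefficients, and reconstruct. The wavelet family, decomposition level, and input file below are illustrative assumptions (the pywt package is assumed available), not the settings calibrated for the Warwick catchment.

```python
# Hedged sketch of DWT-based rainfall dimensionality reduction: retain only the
# approximation coefficients and reconstruct the series from them.
import numpy as np
import pywt

rain = np.loadtxt("rainfall_mm.txt")            # hypothetical rainfall series
coeffs = pywt.wavedec(rain, "db4", level=6)     # multilevel DWT decomposition

# Zero the detail coefficients; the approximation coefficients are the reduced
# parameter set that would be estimated jointly with the model parameters.
reduced = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
rain_approx = pywt.waverec(reduced, "db4")[: len(rain)]
```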
Tack, Ignace L M M; Logist, Filip; Noriega Fernández, Estefanía; Van Impe, Jan F M
2015-02-01
Traditional kinetic models in predictive microbiology reliably predict macroscopic dynamics of planktonically-growing cell cultures in homogeneous liquid food systems. However, most food products have a semi-solid structure, where microorganisms grow locally in colonies. Individual colony cells exhibit strongly different and non-normally distributed behavior due to local nutrient competition. As a result, traditional models considering average population behavior in a homogeneous system do not describe colony dynamics in full detail. To incorporate local resource competition and individual cell differences, an individual-based modeling approach has been applied to Escherichia coli K-12 MG1655 colonies, considering the microbial cell as modeling unit. The first contribution of this individual-based model is to describe single colony growth under nutrient-deprived conditions. More specifically, the linear and stationary phase in the evolution of the colony radius, the evolution from a disk-like to branching morphology, and the emergence of a starvation zone in the colony center are simulated and compared to available experimental data. These phenomena occur earlier at more severe nutrient depletion conditions, i.e., at lower nutrient diffusivity and initial nutrient concentration in the medium. Furthermore, intercolony interactions have been simulated. Higher inoculum densities lead to stronger intercolony interactions, such as colony merging and smaller colony sizes, due to nutrient competition. This individual-based model contributes to the elucidation of characteristic experimentally observed colony behavior from mechanistic information about cellular physiology and interactions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Jenkins, Paul J; McDonald, David A; Van Der Meer, Robert; Morton, Alec; Nugent, Margaret; Rymaszewski, Lech A
2017-01-01
Objective Healthcare faces the continual challenge of improving outcome while aiming to reduce cost. The aim of this study was to determine the micro cost differences of the Glasgow non-operative trauma virtual pathway in comparison to a traditional pathway. Design Discrete event simulation was used to model and analyse cost and resource utilisation with an activity-based costing approach. Data for a full comparison before the process change was unavailable so we used a modelling approach, comparing a virtual fracture clinic (VFC) with a simulated traditional fracture clinic (TFC). Setting The orthopaedic unit VFC pathway pioneered at Glasgow Royal Infirmary has attracted significant attention and interest and is the focus of this cost study. Outcome measures Our study focused exclusively on patients with non-operative trauma attending emergency department or the minor injuries unit and the subsequent step in the patient pathway. Retrospective studies of patient outcomes as a result of the protocol introductions for specific injuries are presented in association with activity costs from the models. Results Patients are satisfied with the new pathway, the information provided and the outcome of their injuries (Evidence Level IV). There was a 65% reduction in the number of first outpatient face-to-face (f2f) attendances in orthopaedics. In the VFC pathway, the resources required per day were significantly lower for all staff groups (p≤0.001). The overall cost per patient of the VFC pathway was £22.84 (95% CI 21.74 to 23.92) per patient compared with £36.81 (95% CI 35.65 to 37.97) for the TFC pathway. Conclusions Our results give a clearer picture of the cost comparison of the virtual pathway over a wholly traditional f2f clinic system. The use of simulation-based stochastic costings in healthcare economic analysis has been limited to date, but this study provides evidence for adoption of this method as a basis for its application in other healthcare settings. PMID:28882905
Collins, L. A.; Boehly, T. R.; Ding, Y. H.; ...
2018-03-23
Polystyrene (CH), commonly known as “plastic,” has been one of the widely used ablator materials for capsule designs in inertial confinement fusion (ICF). Knowing its precise properties under high-energy-density conditions is crucial to understanding and designing ICF implosions through radiation–hydrodynamic simulations. For this purpose, systematic ab initio studies on the static, transport, and optical properties of CH, in a wide range of density and temperature conditions (ρ = 0.1 to 100 g/cm³ and T = 10³ to 4 × 10⁶ K), have been conducted using quantum molecular dynamics (QMD) simulations based on the density functional theory. We have built several wide-ranging, self-consistent material-properties tables for CH, such as the first-principles equation of state (FPEOS), the QMD-based thermal conductivity (Κ_QMD) and ionization, and the first-principles opacity table (FPOT). This paper is devoted to providing a review on (1) what results were obtained from these systematic ab initio studies; (2) how these self-consistent results were compared with both traditional plasma-physics models and available experiments; and (3) how these first-principles–based properties of polystyrene affect the predictions of ICF target performance, through both 1-D and 2-D radiation–hydrodynamic simulations. In the warm dense regime, our ab initio results, which can significantly differ from predictions of traditional plasma-physics models, compared favorably with experiments. When incorporated into hydrocodes for ICF simulations, these first-principles material properties of CH have produced significant differences over traditional models in predicting 1-D/2-D target performance of ICF implosions on OMEGA and direct-drive–ignition designs for the National Ignition Facility. Lastly, we will discuss the implications of these studies on the current small-margin ICF target designs using a CH ablator.
A Single Column Model Ensemble Approach Applied to the TWP-ICE Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, Laura; Jakob, Christian; Cheung, K.
2013-06-27
Single column models (SCM) are useful testbeds for investigating the parameterisation schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale data prescribed. One method to address this uncertainty is to perform ensemble simulations of the SCM. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. This data is then used to carry out simulations with 11 SCM and 2 cloud-resolving models (CRM). Best-estimate simulations are also performed. All models show that moisture related variables are close to observations and there are limited differences between the best-estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the moisture budget between the SCM and CRM. Systematic differences are also apparent in the ensemble mean vertical structure of cloud variables. The ensemble is further used to investigate relations between cloud variables and precipitation, identifying large differences between CRM and SCM. This study highlights that additional information can be gained by performing ensemble simulations, enhancing the information derived from models using the more traditional single best-estimate simulation.
Growth and yield models for central hardwoods
Martin E. Dale; Donald E. Hilt
1989-01-01
Over the last 20 years computers have become an efficient tool to estimate growth and yield. Computerized yield estimates vary from simple approximation or interpolation of traditional normal yield tables to highly sophisticated programs that simulate the growth and yield of each individual tree.
NASA Technical Reports Server (NTRS)
Dubos, Gregory F.; Cornford, Steven
2012-01-01
While the ability to model the state of a space system over time is essential during spacecraft operations, the use of time-based simulations remains rare in preliminary design. The absence of the time dimension in most traditional early design tools can, however, become a hurdle when designing complex systems whose development and operations can be disrupted by various events, such as delays or failures. As the value delivered by a space system is highly affected by such events, exploring the trade space for designs that yield the maximum value calls for the explicit modeling of time. This paper discusses the use of discrete-event models to simulate spacecraft development schedules as well as operational scenarios and on-orbit resources in the presence of uncertainty. It illustrates how such simulations can be utilized to support trade studies, through the example of a tool developed for DARPA's F6 program to assist the design of "fractionated spacecraft".
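A minimal discrete-event sketch of this idea, written with the simpy package (an assumption; the paper does not name a specific tool), simulates a development schedule whose phases can be hit by random delays and collects the resulting completion dates across Monte Carlo runs.

```python
# Toy discrete-event simulation of a development schedule with random delays.
# Phase durations, delay probability, and delay magnitude are illustrative.
import random
import simpy

def development(env, results, delay_prob=0.3, mean_delay=90):
    for phase, duration in [("design", 365), ("build", 540), ("test", 180)]:
        yield env.timeout(duration)                  # nominal phase duration (days)
        if random.random() < delay_prob:             # assumed disruption model
            yield env.timeout(random.expovariate(1.0 / mean_delay))
    results.append(env.now)                          # completion date for this run

completion_days = []
for seed in range(1000):                             # Monte Carlo over schedules
    random.seed(seed)
    env = simpy.Environment()
    env.process(development(env, completion_days))
    env.run()
print(sum(completion_days) / len(completion_days), "mean days to completion")
```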
Model-based verification and validation of the SMAP uplink processes
NASA Astrophysics Data System (ADS)
Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.
Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V&V) increases geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between design of the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based development efforts.
Erdemir, Ahmet; Guess, Trent M.; Halloran, Jason P.; Modenese, Luca; Reinbolt, Jeffrey A.; Thelen, Darryl G.; Umberger, Brian R.
2016-01-01
Objective The overall goal of this document is to demonstrate that dissemination of models and analyses for assessing the reproducibility of simulation results can be incorporated in the scientific review process in biomechanics. Methods As part of a special issue on model sharing and reproducibility in IEEE Transactions on Biomedical Engineering, two manuscripts on computational biomechanics were submitted: A. Rajagopal et al., IEEE Trans. Biomed. Eng., 2016 and A. Schmitz and D. Piovesan, IEEE Trans. Biomed. Eng., 2016. Models used in these studies were shared with the scientific reviewers and the public. In addition to the standard review of the manuscripts, the reviewers downloaded the models and performed simulations that reproduced results reported in the studies. Results There was general agreement between simulation results of the authors and those of the reviewers. Discrepancies were resolved during the necessary revisions. The manuscripts and instructions for download and simulation were updated in response to the reviewers’ feedback; changes that may otherwise have been missed if explicit model sharing and simulation reproducibility analysis were not conducted in the review process. Increased burden on the authors and the reviewers, to facilitate model sharing and to repeat simulations, were noted. Conclusion When the authors of computational biomechanics studies provide access to models and data, the scientific reviewers can download and thoroughly explore the model, perform simulations, and evaluate simulation reproducibility beyond the traditional manuscript-only review process. Significance Model sharing and reproducibility analysis in scholarly publishing will result in a more rigorous review process, which will enhance the quality of modeling and simulation studies and inform future users of computational models. PMID:28072567
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rian, D.T.; Hage, A.
1994-12-31
A numerical simulator is often used as a reservoir management tool. One of its main purposes is to aid in the evaluation of the number of wells, well locations and start times for wells. Traditionally, the optimization of a field development is done by a manual trial-and-error process. In this paper, an example of an automated technique is given. The core of the automation process is the reservoir simulator Frontline. Frontline is based on front-tracking techniques, which makes it fast and accurate compared to traditional finite-difference simulators. Due to its CPU efficiency, the simulator has been coupled with an optimization module, which enables automatic optimization of the locations of wells, the number of wells and start-up times. The simulator was used as an alternative method in the evaluation of waterflooding in a North Sea fractured chalk reservoir. Since Frontline, in principle, is 2D, Buckley-Leverett pseudo functions were used to represent the 3rd dimension. The area full field simulation model was run with up to 25 wells for 20 years in less than one minute of Vax 9000 CPU time. The automatic Frontline evaluation indicated that a peripheral waterflood could double incremental recovery compared to a central pattern drive.
A new model for two-dimensional numerical simulation of pseudo-2D gas-solids fluidized beds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tingwen; Zhang, Yongmin
2013-10-11
Pseudo-two-dimensional (pseudo-2D) fluidized beds, for which the thickness of the system is much smaller than the other two dimensions, are widely used to perform fundamental studies on bubble behavior, solids mixing, or clustering phenomena in different gas-solids fluidization systems. The abundant data from such experimental systems are very useful for numerical model development and validation. However, it has been reported that two-dimensional (2D) computational fluid dynamic (CFD) simulations of pseudo-2D gas-solids fluidized beds usually show poor quantitative agreement with the experimental data, especially for the solids velocity field. In this paper, a new model is proposed to improve the 2D numerical simulations of pseudo-2D gas-solids fluidized beds by properly accounting for the frictional effect of the front and back walls. Two previously reported pseudo-2D experimental systems were simulated with this model. Compared to the traditional 2D simulations, significant improvements in the numerical predictions have been observed and the predicted results are in better agreement with the available experimental data.
Robustness and Uncertainty: Applications for Policy in Climate and Hydrological Modeling
NASA Astrophysics Data System (ADS)
Fields, A. L., III
2015-12-01
Policymakers must often decide how to proceed when presented with conflicting simulation data from hydrological, climatological, and geological models. While laboratory sciences often appeal to the reproducibility of results to argue for the validity of their conclusions, simulations cannot use this strategy for a number of pragmatic and methodological reasons. However, robustness of predictions and causal structures can serve the same function for simulations as reproducibility does for laboratory experiments and field observations in either adjudicating between conflicting results or showing that there is insufficient justification to externally validate the results. Additionally, an interpretation of the argument from robustness is presented that involves appealing to the convergence of many well-built and diverse models rather than the more common version which involves appealing to the probability that one of a set of models is likely to be true. This interpretation strengthens the case for taking robustness as an additional requirement for the validation of simulation results and ultimately supports the idea that computer simulations can provide information about the world that is just as trustworthy as data from more traditional laboratory studies and field observations. Understanding the importance of robust results for the validation of simulation data is especially important for policymakers making decisions on the basis of potentially conflicting models. Applications will span climate, hydrological, and hydroclimatological models.
NASA Astrophysics Data System (ADS)
Luo, W.; Pelletier, J. D.; Smith, T.; Whalley, K.; Shelhamer, A.; Darling, A.; Ormand, C. J.; Duffin, K.; Hung, W. C.; Iverson, E. A. R.; Shernoff, D.; Zhai, X.; Chiang, J. L.; Lotter, N.
2016-12-01
The Web-based Interactive Landform Simulation Model - Grand Canyon (WILSIM-GC, http://serc.carleton.edu/landform/) is a simplified version of a physically-based model that simulates bedrock channel erosion, cliff retreat, and base level change. Students can observe the landform evolution in animation under different scenarios by changing parameter values. In addition, cross-sections and profiles at different time intervals can be displayed and saved for further quantitative analysis. Students were randomly assigned to a treatment group (using WILSIM-GC simulation) or a control group (using traditional paper-based material). Pre- and post-tests were administered to measure students' understanding of the concepts and processes related to Grand Canyon formation and evolution. Results from the ANOVA showed that for both groups there was statistically significant growth in scores from pre-test to post-test [F(1, 47) = 25.82, p < .001], but the growth in scores between the two groups was not statistically significant [F(1, 47) = 0.08, p = .774]. In semester 1, the WILSIM-GC group showed greater growth, while in semester 2, the paper-based group showed greater growth. Additionally, a significant time × group × gender × semester interaction effect was observed [F(1, 47) = 4.76, p = .034]. Here, in semester 1 female students were more strongly advantaged by the WILSIM-GC intervention than male students, while in semester 2, female students were less strongly advantaged than male students. The new results are consistent with our initial findings (Luo et al., 2016) and others reported in the literature, i.e., the simulation approach is at least as effective as the traditional paper-based method in teaching students about landform evolution. Survey data indicate that students favor the simulation approach. Further study is needed to investigate the reasons for the difference by gender.
Pan, Feng; Reifsnider, Odette; Zheng, Ying; Proskorovsky, Irina; Li, Tracy; He, Jianming; Sorensen, Sonja V
2018-04-01
Treatment landscape in prostate cancer has changed dramatically with the emergence of new medicines in the past few years. The traditional survival partition model (SPM) cannot accurately predict long-term clinical outcomes because it is limited in its ability to capture the key consequences associated with this changing treatment paradigm. The objective of this study was to introduce and validate a discrete-event simulation (DES) model for prostate cancer. A DES model was developed to simulate overall survival (OS) and other clinical outcomes based on patient characteristics, treatment received, and disease progression history. We tested and validated this model with clinical trial data from the abiraterone acetate phase III trial (COU-AA-302). The model was constructed with interim data (55% death) and validated with the final data (96% death). Predicted OS values were also compared with those from the SPM. The DES model's predicted time to chemotherapy and OS are highly consistent with the final observed data. The model accurately predicts the OS hazard ratio from the final data cut (predicted: 0.74; 95% confidence interval [CI] 0.64-0.85 and final actual: 0.74; 95% CI 0.6-0.88). The log-rank test to compare the observed and predicted OS curves indicated no statistically significant difference between observed and predicted curves. However, the predictions from the SPM based on interim data deviated significantly from the final data. Our study showed that a DES model with properly developed risk equations presents considerable improvements over the more traditional SPM in flexibility and predictive accuracy of long-term outcomes. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
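The core discrete-event idea can be sketched in a few lines: per-patient event times are sampled from parametric risk equations and overall survival is read off the simulated event history. The distributions and parameters below are illustrative placeholders, not those of the COU-AA-302 model.

```python
# Toy discrete-event survival simulation: sample time-to-progression and
# post-progression survival per patient, then summarize overall survival.
import numpy as np

rng = np.random.default_rng(7)
n_patients = 100_000

ttp = rng.weibull(1.3, n_patients) * 16       # months to progression (assumed Weibull)
post = rng.exponential(20, n_patients)        # post-progression survival (assumed)
os_months = ttp + post                        # overall survival per simulated patient

print("median OS (months):", round(float(np.median(os_months)), 1))
```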
NASA Astrophysics Data System (ADS)
Li, L.; Xu, C.-Y.; Engeland, K.
2012-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used methods for uncertainty assessment of hydrological models; it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats well the uncertainty in the extreme flows simulated by hydrological models. This study proposes a Bayesian modularization approach to the uncertainty assessment of conceptual hydrological models that takes the extreme flows into account. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and by traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used in combination with the traditional Bayesian approach: the AR(1) plus Normal and time-period-independent model (Model 1), the AR(1) plus Normal and time-period-dependent model (Model 2), and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of the entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian methods. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
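A minimal sketch of a Metropolis-Hastings sampler with an AR(1)-plus-Normal error model (in the spirit of likelihood Model 1 above) is given below; `simulate(theta)` stands in for a WASMOD run and is hypothetical, the prior is implicitly flat, and the AR(1) parameters are fixed rather than estimated as they would be in practice.

```python
# Hedged sketch of Metropolis-Hastings sampling with an AR(1)+Normal likelihood.
import numpy as np

def log_likelihood(theta, obs, simulate, rho, sigma):
    res = obs - simulate(theta)                    # residuals of the hydrological model
    innov = res[1:] - rho * res[:-1]               # AR(1)-filtered innovations
    return -0.5 * np.sum(innov**2) / sigma**2 - len(innov) * np.log(sigma)

def metropolis(obs, simulate, theta0, n_iter=5000, step=0.05, rho=0.7, sigma=1.0):
    rng = np.random.default_rng(42)
    theta = np.asarray(theta0, dtype=float)
    logp = log_likelihood(theta, obs, simulate, rho, sigma)
    chain = [theta.copy()]
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)   # random-walk proposal
        logp_prop = log_likelihood(prop, obs, simulate, rho, sigma)
        if np.log(rng.random()) < logp_prop - logp:              # accept/reject
            theta, logp = prop, logp_prop
        chain.append(theta.copy())
    return np.array(chain)
```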
Developing Flexible Discrete Event Simulation Models in an Uncertain Policy Environment
NASA Technical Reports Server (NTRS)
Miranda, David J.; Fayez, Sam; Steele, Martin J.
2011-01-01
On February 1st, 2010, U.S. President Barack Obama submitted to Congress his proposed budget request for Fiscal Year 2011. This budget included significant changes to the National Aeronautics and Space Administration (NASA), including the proposed cancellation of the Constellation Program. This change proved to be controversial, and Congressional approval of the program's official cancellation would take many months to complete. During this same period an end-to-end discrete event simulation (DES) model of Constellation operations was being built through the joint efforts of Productivity Apex Inc. (PAI) and Science Applications International Corporation (SAIC) teams under the guidance of NASA. The uncertainty regarding the Constellation program presented a major challenge to the DES team: to continue the development of this program-of-record simulation while at the same time remaining prepared for possible changes to the program. This required the team to rethink how it would develop its model and make it flexible enough to support possible future vehicles while at the same time being specific enough to support the program of record. This challenge was compounded by the fact that the model was being developed through the traditional DES process orientation, which lacked the flexibility of object-oriented approaches. The team met this challenge through significant pre-planning that led to the "modularization" of the model's structure by identifying what was generic, finding natural logic break points, and standardizing the interlogic numbering system. The outcome of this work was a model that not only was ready to be easily modified to support any future rocket programs, but was also extremely structured and organized in a way that facilitated rapid verification. This paper discusses in detail the process the team followed to build this model and the many advantages this method provides to builders of traditional process-oriented discrete event simulations.
Genetic Adaptive Control for PZT Actuators
NASA Technical Reports Server (NTRS)
Kim, Jeongwook; Stover, Shelley K.; Madisetti, Vijay K.
1995-01-01
A piezoelectric transducer (PZT) is capable of providing linear motion if controlled correctly and could provide a replacement for traditional heavy and large servo systems using motors. This paper focuses on a genetic model reference adaptive control (GMRAC) technique for a PZT that moves a mirror, where the goal is to keep the mirror velocity constant. Genetic Algorithms (GAs) are an integral part of the GMRAC technique, acting as the search engine for an optimal PID controller. Two methods are suggested to control the actuator in this research. The first is to change the PID parameters, and the other is to add an additional reference input to the system. The simulation results of these two methods are compared. Simulated Annealing (SA) is also used to solve the problem, and the simulation results of the GA and SA approaches are compared; the GAs show the best results. The entire model is designed using the MathWorks Simulink tool.
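The GA-as-search-engine idea can be sketched as below: a small population of PID gain triples is evolved to minimize the step-response error of a simple first-order plant. The plant, cost function, and GA settings are illustrative assumptions, not the PZT/mirror model or the GMRAC scheme of the paper.

```python
# Toy genetic algorithm that searches for PID gains minimizing the integral
# absolute error of a unit-step response on an assumed first-order plant.
import random

def step_response_cost(kp, ki, kd, dt=0.001, steps=2000):
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y                              # unit-step setpoint error
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        y += dt * (-y + u)                         # first-order plant: dy/dt = -y + u
        if abs(y) > 1e6:                           # diverged: heavily penalize
            return 1e9
        cost += abs(err) * dt                      # integral of absolute error
    return cost

def genetic_pid(pop_size=30, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 20) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: step_response_cost(*g))   # rank by fitness
        survivors = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.5) for x, y in zip(a, b)]
            children.append([max(0.0, g) for g in child])  # crossover + mutation
        pop = survivors + children
    return min(pop, key=lambda g: step_response_cost(*g))

print(genetic_pid())    # best [kp, ki, kd] found
```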
Single-pass memory system evaluation for multiprogramming workloads
NASA Technical Reports Server (NTRS)
Conte, Thomas M.; Hwu, Wen-Mei W.
1990-01-01
Modern memory systems are composed of levels of cache memories, a virtual memory system, and a backing store. Varying more than a few design parameters and measuring the performance of such systems has traditionally been constrained by the high cost of simulation. Recently introduced models of cache performance reduce the cost of simulation, but at the expense of accuracy of performance prediction. Stack-based methods predict performance accurately using one pass over the trace for all cache sizes, but these techniques have been limited to fully-associative organizations. This paper presents a stack-based method of evaluating the performance of cache memories using a recurrence/conflict model for the miss ratio. Unlike previous work, the performance of realistic cache designs, such as direct-mapped caches, is predicted by the method. The method also includes a new approach to the problem of the effects of multiprogramming. This new technique separates the characteristics of the individual program from those of the workload. The recurrence/conflict method is shown to be practical, general, and powerful by comparing its performance to that of a popular traditional cache simulator. The authors expect that the availability of such a tool will have a large impact on future architectural studies of memory systems.
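For context, the classic single-pass stack-distance algorithm that underlies such one-pass methods can be sketched as follows: one traversal of the address trace yields LRU miss ratios for every fully associative cache size at once. The paper's recurrence/conflict extension to direct-mapped caches is not reproduced here.

```python
# Single-pass LRU stack-distance histogram (classic Mattson-style sketch);
# naive O(N*M) list search, sufficient for illustration.
from collections import Counter

def stack_distances(trace):
    stack, counts = [], Counter()            # LRU stack and distance histogram
    for block in trace:
        if block in stack:
            d = stack.index(block)           # 0 = most recently used
            stack.pop(d)
            counts[d] += 1
        else:
            counts["inf"] += 1               # cold (compulsory) miss
        stack.insert(0, block)
    return counts

def miss_ratio(counts, cache_blocks):
    total = sum(counts.values())
    hits = sum(v for k, v in counts.items() if k != "inf" and k < cache_blocks)
    return 1.0 - hits / total

trace = [1, 2, 3, 1, 2, 4, 1, 5, 2, 1]       # toy block-address trace
hist = stack_distances(trace)
print({size: round(miss_ratio(hist, size), 2) for size in (1, 2, 4, 8)})
```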
Application of physics engines in virtual worlds
NASA Astrophysics Data System (ADS)
Norman, Mark; Taylor, Tim
2002-03-01
Dynamic virtual worlds can potentially provide a much richer and more enjoyable experience than static ones. To realize such worlds, three approaches are commonly used. The first of these, and still widely applied, involves importing traditional animations from a modeling system such as 3D Studio Max. This approach is therefore limited to predefined animation scripts or combinations/blends thereof. The second approach involves the integration of some specific-purpose simulation code, such as car dynamics, and is thus generally limited to one (class of) application(s). The third approach involves the use of general-purpose physics engines, which promise to enable a range of compelling dynamic virtual worlds and to considerably speed up development. By far the largest market today for real-time simulation is computer games, with revenues exceeding those of the movie industry. Traditionally, the simulation is produced by game developers in-house for specific titles. However, off-the-shelf middleware physics engines are now available for use in games and related domains. In this paper, we report on our experiences of using middleware physics engines to create a virtual world as an interactive experience, and an advanced scenario in which artificial life techniques generate controllers for physically modeled characters.
ERIC Educational Resources Information Center
Graf, Edith Aurora
2014-01-01
In "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games," Almond, Kim, Velasquez, and Shute have prepared a thought-provoking piece contrasting the roles of task model variables in a traditional assessment of mathematics word problems to their roles in "Newton's Playground," a game designed…
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1993-01-01
Computations are presented for one-dimensional, strong shock waves that are typical of those that form in front of a reentering spacecraft. The fluid mechanics and thermochemistry are modeled using two different approaches. The first employs traditional continuum techniques in solving the Navier-Stokes equations. The second approach employs a particle simulation technique (the direct simulation Monte Carlo method, DSMC). The thermochemical models employed in these two techniques are quite different. The present investigation evaluates thermochemical models for nitrogen under hypersonic flow conditions. Four separate cases are considered. The cases are governed, respectively, by the following: vibrational relaxation; weak dissociation; strong dissociation; and weak ionization. In near-continuum, hypersonic flow, the nonequilibrium thermochemical models employed in continuum and particle simulations produce nearly identical solutions. Further, the two approaches are evaluated successfully against available experimental data for weakly and strongly dissociating flows.
Kawamura, Kazuya; Kobayashi, Yo; Fujie, Masakatsu G
2007-01-01
Medical technology has advanced with the introduction of robot technology, making previous medical treatments that were very difficult far more possible. However, operation of a surgical robot demands substantial training and continual practice on the part of the surgeon because it requires difficult techniques that are different from those of traditional surgical procedures. We focused on a simulation technology based on the physical characteristics of organs. In this research, we proposed the development of surgical simulation, based on a physical model, for intra-operative navigation by a surgeon. In this paper, we describe the design of our system, in particular our organ deformation calculator. The proposed simulation system consists of an organ deformation calculator and virtual slave manipulators. We obtained adequate experimental results of a target node at a nearby point of interaction, because this point ensures better accuracy for our simulation model. The next research step would be to focus on a surgical environment in which internal organ models would be integrated into a slave simulation system.
Fadda, Elisa; Woods, Robert J
2010-08-01
The characterization of the 3D structure of oligosaccharides, their conjugates and analogs is particularly challenging for traditional experimental methods. Molecular simulation methods provide a basis for interpreting sparse experimental data and for independently predicting conformational and dynamic properties of glycans. Here, we summarize and analyze the issues associated with modeling carbohydrates, with a detailed discussion of four of the most recently developed carbohydrate force fields, reviewed in terms of applicability to natural glycans, carbohydrate-protein complexes and the emerging area of glycomimetic drugs. In addition, we discuss prospects for, and new applications of, carbohydrate modeling in drug discovery.
Understanding sources of organic aerosol during CalNex-2010 using the CMAQ-VBS
Community Multiscale Air Quality (CMAQ) model simulations utilizing the traditional organic aerosol (OA) treatment (CMAQ-AE6) and a volatility basis set (VBS) treatment for OA (CMAQ-VBS) were evaluated against measurements collected at routine monitoring networks (Chemical Specia...
Hu, Suxing; Collins, Lee A.; Goncharov, V. N.; ...
2016-05-26
Using first-principles (FP) methods, we have performed ab initio computations of the equation of state (EOS), thermal conductivity, and opacity of deuterium-tritium (DT) over a wide range of densities and temperatures for inertial confinement fusion (ICF) applications. These systematic investigations have recently been expanded to accurately compute the plasma properties of CH ablators under extreme conditions. In particular, the first-principles EOS and thermal-conductivity tables of CH are self-consistently built from such FP calculations, which are benchmarked by experimental measurements. When compared with the traditional models used for these plasma properties in hydrocodes, significant differences have been identified in the warm dense plasma regime. When these FP-calculated properties of DT and CH were used in our hydrodynamic simulations of ICF implosions, we found that the target performance in terms of neutron yield and energy gain can vary by a factor of 2 to 3, relative to traditional model simulations.
Optimal Wastewater Loading under Conflicting Goals and Technology Limitations in a Riverine System.
Rafiee, Mojtaba; Lyon, Steve W; Zahraie, Banafsheh; Destouni, Georgia; Jaafarzadeh, Nemat
2017-03-01
This paper investigates a novel simulation-optimization (S-O) framework for identifying optimal treatment levels and treatment processes for multiple wastewater dischargers to rivers. A commonly used water quality simulation model, Qual2K, was linked to a Genetic Algorithm optimization model for exploration of relevant fuzzy objective-function formulations for addressing imprecision and conflicting goals of pollution control agencies and various dischargers. Results showed a dynamic flow dependence of optimal wastewater loading, with good convergence to a near-global optimum. Explicit consideration of real-world technological limitations, developed here in a new S-O framework, led to better compromise solutions between conflicting goals than those identified within traditional S-O frameworks. The newly developed framework, in addition to being more technologically realistic, is also less complicated and converges on solutions more rapidly than traditional frameworks. This technique marks a significant step forward for the development of holistic, riverscape-based approaches that balance the conflicting needs of the stakeholders.
Rao, Gauri G; Ly, Neang S; Haas, Curtis E; Garonzik, Samira; Forrest, Alan; Bulitta, Jurgen B; Kelchlin, Pamela A; Holden, Patricia N; Nation, Roger L; Li, Jian; Tsuji, Brian T
2014-01-01
Increasing evidence suggests that colistin monotherapy is suboptimal at currently recommended doses. We hypothesized that front-loading provides an improved dosing strategy for polymyxin antibiotics to maximize killing and minimize total exposure. Here, we utilized an in vitro pharmacodynamic model to examine the impact of front-loaded colistin regimens against a high bacterial density (10^8 CFU/ml) of Pseudomonas aeruginosa. The pharmacokinetics were simulated for patients with hepatic (half-life [t1/2] of 3.2 h) or renal (t1/2 of 14.8 h) disease. Front-loaded regimens (n=5) demonstrated improvement in bacterial killing, with reduced overall free drug areas under the concentration-time curve (fAUC) compared to those with traditional dosing regimens (n=14) with various dosing frequencies (every 12 h [q12h] and q24h). In the renal failure simulations, front-loaded regimens at lower exposures (fAUC of 143 mg · h/liter) obtained killing activity similar to that of traditional regimens (fAUC of 268 mg · h/liter), with an ∼97% reduction in the area under the viable count curve over 48 h. In hepatic failure simulations, front-loaded regimens yielded rapid initial killing by up to 7 log10 within 2 h, but considerable regrowth occurred for both front-loaded and traditional regimens. No regimen eradicated the high bacterial inoculum of P. aeruginosa. The current study, which utilizes an in vitro pharmacodynamic infection model, demonstrates the potential benefits of front-loading strategies for polymyxins simulating differential pharmacokinetics in patients with hepatic and renal failure at a range of doses. Our findings may have important clinical implications, as front-loading polymyxins as a part of a combination regimen may be a viable strategy for aggressive treatment of high-bacterial-burden infections.
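To make the front-loading idea concrete, the sketch below compares a hypothetical front-loaded schedule (large first dose, smaller maintenance doses) against an even q12h schedule in a one-compartment, first-order pharmacokinetic model, for the two half-lives quoted above. The doses, volume of distribution, and rectangle-rule fAUC proxy are illustrative assumptions, not the study's regimens.

```python
import math

def concentration_profile(doses, t_half, vd=30.0, dt=0.1, horizon=48.0):
    """Superpose first-order decays of bolus doses; return a crude AUC and Cmax."""
    k = math.log(2.0) / t_half
    times = [i * dt for i in range(int(horizon / dt) + 1)]
    conc = []
    for t in times:
        c = sum(d / vd * math.exp(-k * (t - td)) for td, d in doses if t >= td)
        conc.append(c)
    auc = sum(conc) * dt            # rectangle-rule proxy for fAUC over the horizon
    return auc, max(conc)

# equal total dose (600 units) distributed differently over 48 h
front = [(0.0, 300.0)] + [(t, 100.0) for t in (12.0, 24.0, 36.0)]
trad = [(t, 150.0) for t in (0.0, 12.0, 24.0, 36.0)]
for name, regimen in (("front-loaded", front), ("traditional", trad)):
    for t_half in (3.2, 14.8):       # hepatic- and renal-disease half-lives from above
        auc, cmax = concentration_profile(regimen, t_half)
        print(name, t_half, round(auc, 1), round(cmax, 2))
```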
Global Fluxon Modeling of the Solar Corona and Inner Heliosphere
NASA Astrophysics Data System (ADS)
Lamb, D. A.; DeForest, C. E.
2017-12-01
The fluxon approach to MHD modeling enables simulations of low-beta plasmas in the absence of undesirable numerical effects such as diffusion and magnetic reconnection. The magnetic field can be modeled as a collection of discrete field lines ("fluxons") containing a set amount of magnetic flux in a prescribed field topology. Due to the fluxon model's pseudo-Lagrangian grid, simulations can be completed in a fraction of the time of traditional grid-based simulations, enabling near-real-time simulations of the global magnetic field structure and its influence on solar wind properties. Using SDO/HMI synoptic magnetograms as lower magnetic boundary conditions, and a separate one-dimensional fluid flow model attached to each fluxon, we compare the resulting fluxon relaxations with other commonly used global models (such as PFSS), and with white-light images of the corona (including the August 2017 total solar eclipse). Finally, we show the computed magnetic field expansion ratio, and the modeled solar wind speed near the coronal-heliospheric transition. Development of the fluxon MHD model FLUX (the Field Line Universal relaXer) has been funded by NASA's Living with a Star program and by Southwest Research Institute.
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU
Xia, Yong; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a major challenge for traditional CPU-based computing resources, which either cannot meet the full computational demand or are not easily available due to high cost. GPUs, as a parallel computing environment, therefore provide an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single cell model (ordinary differential equation) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations. PMID:26581957
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.
Xia, Yong; Wang, Kuanquan; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a major challenge for traditional CPU-based computing resources, which either cannot meet the full computational demand or are not easily available due to high cost. GPUs, as a parallel computing environment, therefore provide an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single cell model (ordinary differential equation) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations.
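A minimal sketch of the ODE/PDE decoupling described above is given below, assuming operator splitting on a 2D sheet: a FitzHugh-Nagumo cell model (not the sheep atrial model) supplies the reaction step, and an explicit finite-difference Laplacian supplies the diffusion step, with NumPy vectorization standing in for the GPU kernels. All parameters are illustrative.

```python
import numpy as np

nx = ny = 100
dx, dt, D = 0.1, 0.01, 0.1
v = np.zeros((nx, ny))          # "membrane potential" variable
w = np.zeros((nx, ny))          # recovery variable
v[:10, :10] = 1.0               # initial stimulus in one corner

def cell_step(v, w, dt, a=0.7, b=0.8, eps=0.08):
    """Reaction step: FitzHugh-Nagumo cell model applied pointwise."""
    dv = v - v**3 / 3.0 - w
    dw = eps * (v + a - b * w)
    return v + dt * dv, w + dt * dw

def diffusion_step(v, dt, D, dx):
    """Diffusion step: explicit 5-point Laplacian with periodic boundaries (np.roll)."""
    lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
           np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4.0 * v) / dx**2
    return v + dt * D * lap

for _ in range(1000):
    v, w = cell_step(v, w, dt)          # "single cell model" half of the splitting
    v = diffusion_step(v, dt, D, dx)    # "diffusion term" half of the splitting

print(float(v.mean()))
```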
Simulating a Thin Accretion Disk Using PLUTO
NASA Astrophysics Data System (ADS)
Phillipson, Rebecca; Vogeley, Michael S.; Boyd, Patricia T.
2017-08-01
Accreting black hole systems such as X-ray binaries and active galactic nuclei exhibit variability in their luminosity on many timescales, ranging from milliseconds to tens of days, and even hundreds of days. The mechanism(s) driving this variability and the relationship between short- and long-term variability are poorly understood. Current studies of accretion disks seek to determine how changes in black hole mass, the rate at which mass accretes onto the central black hole, and the external environment affect the variability on scales ranging from stellar-mass black holes to supermassive black holes. Traditionally, the fluid mechanics equations governing accretion disks have been simplified by considering only the kinematics of the disk, and perhaps magnetic fields, so that their phenomenological behavior can be predicted analytically. We seek to employ numerical techniques to study accretion disks, including more complicated physics that is traditionally ignored, in order to understand their behavior over time more accurately. We present a proof-of-concept three-dimensional, global simulation, using the astrophysical hydrodynamic code PLUTO, of a simplified thin-disk model about a central black hole, which will serve as the basis for the development of more complicated models including external effects such as radiation and magnetic fields. We also develop a tool to generate a synthetic light curve that displays the variability in luminosity of the simulation over time. The preliminary simulation and accompanying synthetic light curve demonstrate that PLUTO is a reliable code for performing sophisticated simulations of accretion disk systems, which can then be compared to observational results.
[Simulation and data analysis of stereological modeling based on virtual slices].
Wang, Hao; Shen, Hong; Bai, Xiao-yan
2008-05-01
To establish a computer-assisted stereological model for simulating the process of slice sectioning and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically as Win32 software based on MFC, using Microsoft Visual Studio as the IDE, to simulate the (in principle infinite) process of sectioning and to analyze the data derived from the model. The linearity of the fit of the model was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high values (>94.5% and 92%) in homogeneity and independence tests. The data on density, shape, and size of the sections were tested to conform to a normal distribution. The output of the model and that from the image analysis system showed statistical correlation and consistency. The algorithm described can be used for evaluating the stereological parameters of the structure of tissue slices.
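The virtual-slice idea lends itself to a very small Monte Carlo sketch: spheres of equal radius are placed at random heights in a unit cube and cut by random horizontal planes, and the resulting profile diameters are collected for analysis. The geometry and parameters below are illustrative and do not reproduce the paper's software.

```python
import math
import random

def simulate_sections(n_spheres=500, radius=0.05, n_planes=200):
    """Cut randomly placed equal spheres with random horizontal planes; return profile diameters."""
    centers = [random.random() for _ in range(n_spheres)]   # only z-coordinates matter here
    profiles = []
    for _ in range(n_planes):
        z = random.random()                                  # height of the virtual slice
        for cz in centers:
            h = abs(z - cz)
            if h < radius:                                   # the plane cuts this sphere
                profiles.append(2.0 * math.sqrt(radius**2 - h**2))
    return profiles

diameters = simulate_sections()
print(len(diameters), sum(diameters) / len(diameters))       # profile count and mean diameter
```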
Stochastic Time Models of Syllable Structure
Shaw, Jason A.; Gafos, Adamantios I.
2015-01-01
Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure proving resilient to multiple sources of variability in experimental data including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153
NASA Astrophysics Data System (ADS)
Fitkov-Norris, Elena; Yeghiazarian, Ara
2016-11-01
The analytical tools available to social scientists have traditionally been adapted from tools originally designed for the analysis of natural science phenomena. This article discusses the applicability of systems dynamics, a qualitative modelling approach, as a possible analysis and simulation tool that bridges the gap between the social and natural sciences. After a brief overview of the systems dynamics modelling methodology, the advantages as well as the limiting factors of systems dynamics for potential applications in the field of social sciences and human interactions are discussed. Issues arise with regard to the operationalization and quantification of latent constructs at the simulation-building stage of the systems dynamics methodology, and measurement theory is proposed as a ready solution to the problem of dynamic model calibration, with a view to improving simulation model reliability and validity and encouraging the development of standardised, modular system dynamics models that can be used in social science research.
Research on frequency control strategy of interconnected region based on fuzzy PID
NASA Astrophysics Data System (ADS)
Zhang, Yan; Li, Chunlan
2018-05-01
In order to improve the frequency control performance of the interconnected power grid and to overcome the poor robustness and slow adjustment of traditional regulation, this paper puts forward a frequency control method based on fuzzy PID. The method takes the frequency deviation and tie-line deviation of each area as the control objective, takes the regional frequency deviation and its rate of change as inputs, and uses fuzzy mathematics theory to adjust the PID control parameters online. By establishing a regional frequency control model of complementary hydro-thermal power generation in MATLAB, the regional frequency control strategy is given, and three control modes (TBC-FTC, FTC-FTC, FFC-FTC) are simulated and analyzed. The simulation and experimental results show that this method has better control performance than traditional regional frequency regulation.
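As a hedged sketch of the fuzzy gain-scheduling idea, the snippet below nudges base PID gains according to triangular memberships of the normalized frequency deviation e and its rate of change de; the membership shapes, rule, and base gains are illustrative placeholders, not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pid_gains(e, de, base=(2.0, 0.5, 0.1)):
    """Scale base (Kp, Ki, Kd) according to how far e and de sit from zero."""
    big = tri(abs(e), 0.0, 0.5, 1.0) + tri(abs(de), 0.0, 0.5, 1.0)
    kp0, ki0, kd0 = base
    scale = 1.0 + 0.5 * big / 2.0   # larger deviation -> stronger proportional action
    return kp0 * scale, ki0 / scale, kd0 / scale

print(fuzzy_pid_gains(0.3, -0.1))   # gains adjusted online for one sample of (e, de)
```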
NASA Astrophysics Data System (ADS)
Chang, Jiang-Hao; Yu, Jing-Cun; Liu, Zhi-Xin
2016-09-01
The full-space transient electromagnetic response of water-filled goaves in coal mines was numerically modeled. Traditional numerical modeling methods cannot be used to simulate the underground full-space transient electromagnetic field. We used multiple transmitting loops instead of the traditional single transmitting loop to load the transmitting loop into Cartesian grids, and we improved the method for calculating the z-component of the magnetic field based on the characteristics of full space. We then established a full-space 3D geoelectrical model using geological data for coal mines. In addition, the transient electromagnetic responses of water-filled goaves of variable shape at different locations were simulated using the finite-difference time-domain (FDTD) method, and the apparent resistivity results were evaluated. The numerical modeling results suggested that the resistivity differences between the coal seam and its roof and floor greatly affect the distribution of apparent resistivity, resulting in nearly circular contours with the roadway head at the center. The actual distribution of apparent resistivity for different geoelectrical models of water in goaves was consistent with the models. However, when the goaf water was located on one side, a false low-resistivity anomaly would appear on the other side owing to the full-space effect, but the response was much weaker. The modeling results were subsequently confirmed by drilling, suggesting that the proposed method is effective.
Evolution of tag-based cooperation with emotion on complex networks
NASA Astrophysics Data System (ADS)
Lima, F. W. S.
2018-04-01
We study the evolution of four strategies, ethnocentric, altruistic, egoistic and cosmopolitan, in a community of individuals through Monte Carlo simulations. Interactions and reproduction among computational agents are simulated on undirected Barabási-Albert (UBA) networks and Erdős-Rényi (ER) random graphs. We study the Hammond-Axelrod model on both UBA networks and ER random graphs for the asexual reproduction case. We use a modified version of the traditional Hammond-Axelrod model in which agents' decisions about one of the strategies also take into account the emotion among their peers. Our simulations show that egoism and altruism win, in contrast to other results in the literature, where the ethnocentric strategy is common.
Add Control: plant virtualization for control solutions in WWTP.
Maiza, M; Bengoechea, A; Grau, P; De Keyser, W; Nopens, I; Brockmann, D; Steyer, J P; Claeys, F; Urchegui, G; Fernández, O; Ayesa, E
2013-01-01
This paper summarizes part of the research work carried out in the Add Control project, which proposes an extension of the wastewater treatment plant (WWTP) models and modelling architectures used in traditional WWTP simulation tools, addressing, in addition to the classical mass transformations (transport, physico-chemical phenomena, biological reactions), all the instrumentation, actuation and automation & control components (sensors, actuators, controllers), considering their real behaviour (signal delays, noise, failures and power consumption of actuators). Its ultimate objective is to allow a rapid transition from the simulation of the control strategy to its implementation at full-scale plants. Thus, this paper presents the application of the Add Control simulation platform for the design and implementation of new control strategies at the WWTP of Mekolalde.
Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.
Beentjes, Casper H L; Baker, Ruth E
2018-05-25
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from the typically slow O(N^(-1/2)) convergence rate as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely tau-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
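A hedged sketch of the combination is shown below for a pure-death toy process: tau-leaping draws its Poisson leap counts by inverse transform from a uniform input stream, and the pseudo-random stream can be swapped for a randomly shifted rank-1 lattice (one flavour of randomised quasi-Monte Carlo points). The process, rates, and lattice parameter are illustrative and unrelated to the paper's test problems.

```python
import math
import random

def poisson_inverse(u, lam, kmax=1000):
    """Smallest k with P(Poisson(lam) <= k) >= u (inverse-transform sampling); kmax is a safety cap."""
    k, p = 0, math.exp(-lam)
    cdf = p
    while cdf < u and k < kmax:
        k += 1
        p *= lam / k
        cdf += p
    return k

def tau_leap_path(uniforms, x0=100, c=0.5, tau=0.1):
    """Tau-leap a pure-death process X -> X - 1 with propensity c*X, one uniform per leap."""
    x = x0
    for u in uniforms:
        x -= min(x, poisson_inverse(u, c * x * tau))   # crude guard against x < 0
    return x

N, d = 256, 20                         # number of paths, leaps per path
a = 76                                 # illustrative Korobov lattice parameter
z = [pow(a, j, N) for j in range(d)]
shift = [random.random() for _ in range(d)]

mc = [tau_leap_path([random.random() for _ in range(d)]) for _ in range(N)]
qmc = [tau_leap_path([((i * z[j]) % N / N + shift[j]) % 1.0 for j in range(d)])
       for i in range(N)]
print(sum(mc) / N, sum(qmc) / N)       # estimates of E[X] after d leaps
```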
Simulation-based medical education: time for a pedagogical shift.
Kalaniti, Kaarthigeyan; Campbell, Douglas M
2015-01-01
The purpose of medical education at all levels is to prepare physicians with the knowledge and comprehensive skills required to deliver safe and effective patient care. The traditional 'apprentice' learning model in medical education is undergoing a pedagogical shift to a 'simulation-based' learning model. Experiential learning, deliberate practice and the ability to provide immediate feedback are the primary advantages of simulation-based medical education. It is an effective way to develop new skills, identify knowledge gaps, reduce medical errors, and maintain infrequently used clinical skills even among experienced clinical teams, with the overall goal of improving patient care. Although simulation cannot replace clinical exposure as a form of experiential learning, it promotes learning without compromising patient safety. This new paradigm shift is revolutionizing medical education in the Western world. It is time for developing countries to embrace this pedagogical shift as well.
Strom, Suzanne L; Anderson, Craig L; Yang, Luanna; Canales, Cecilia; Amin, Alpesh; Lotfipour, Shahram; McCoy, C Eric; Osborn, Megan Boysen; Langdorf, Mark I
2015-11-01
Traditional Advanced Cardiac Life Support (ACLS) courses are evaluated using written multiple-choice tests. High-fidelity simulation is a widely used adjunct to didactic content, and has been used in many specialties as a training resource as well as an evaluative tool. There are no data to our knowledge that compare simulation examination scores with written test scores for ACLS courses. To compare and correlate a novel high-fidelity simulation-based evaluation with traditional written testing for senior medical students in an ACLS course. We performed a prospective cohort study to determine the correlation between simulation-based evaluation and traditional written testing in a medical school simulation center. Students were tested on a standard acute coronary syndrome/ventricular fibrillation cardiac arrest scenario. Our primary outcome measure was correlation of exam results for 19 volunteer fourth-year medical students after a 32-hour ACLS-based Resuscitation Boot Camp course. Our secondary outcome was comparison of simulation-based vs. written outcome scores. The composite average score on the written evaluation was substantially higher (93.6%) than the simulation performance score (81.3%, absolute difference 12.3%, 95% CI [10.6-14.0%], p<0.00005). We found a statistically significant moderate correlation between simulation scenario test performance and traditional written testing (Pearson r=0.48, p=0.04), validating the new evaluation method. Simulation-based ACLS evaluation methods correlate with traditional written testing and demonstrate resuscitation knowledge and skills. Simulation may be a more discriminating and challenging testing method, as students scored higher on written evaluation methods compared to simulation.
Development of IR imaging system simulator
NASA Astrophysics Data System (ADS)
Xiang, Xinglang; He, Guojing; Dong, Weike; Dong, Lu
2017-02-01
To overcome the disadvantages of traditional semi-physical simulation and injection simulation equipment in evaluating the performance of an infrared imaging system (IRIS), a low-cost and reconfigurable IRIS simulator, which can simulate the realistic physical process of infrared imaging, is proposed to test and evaluate IRIS performance. According to the theoretical simulation framework and the theoretical models of the IRIS, the architecture of the IRIS simulator is constructed. The 3D scenes are generated and the infrared atmospheric transmission effects are simulated in real time on the computer using OGRE technology. The physical effects of the IRIS are classified as the signal response characteristic, the modulation transfer characteristic, and the noise characteristic, and they are simulated in real time using a high-speed parallel computation method on a single-board signal processing platform built around an FPGA.
How to assess the impact of a physical parameterization in simulations of moist convection?
NASA Astrophysics Data System (ADS)
Grabowski, Wojciech
2017-04-01
A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or a large-eddy simulation model) consists of a fluid flow solver combined with required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigating the impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation describes a novel modeling methodology, piggybacking, that allows the impact of a physical parameterization on cloud dynamics to be studied with confidence. The focus will be on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized deep-convection invigoration in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
A cellular automata model for social-learning processes in a classroom context
NASA Astrophysics Data System (ADS)
Bordogna, C. M.; Albano, E. V.
2002-02-01
A model for teaching-learning processes that take place in the classroom is proposed and simulated numerically. Recent ideas taken from the fields of sociology, educational psychology, statistical physics and computational science are key ingredients of the model. Results of simulations are consistent with well-established empirical results obtained in classrooms by means of different evaluation tools. It is shown that students engaged in collaborative groupwork reach higher achievements than those attending traditional lectures only. However, in many cases, this difference is subtle and consequently very difficult to be detected using tests. The influence of the number of students forming the collaborative groups on the average knowledge achieved is also studied and discussed.
PSYCHE: An Object-Oriented Approach to Simulating Medical Education
Mullen, Jamie A.
1990-01-01
Traditional approaches to computer-assisted instruction (CAI) do not provide realistic simulations of medical education, in part because they do not utilize heterogeneous knowledge bases for their source of domain knowledge. PSYCHE, a CAI program designed to teach hypothetico-deductive psychiatric decision-making to medical students, uses an object-oriented implementation of an intelligent tutoring system (ITS) to model the student, domain expert, and tutor. It models the transactions between the participants in complex transaction chains, and uses heterogeneous knowledge bases to represent both domain and procedural knowledge in clinical medicine. This object-oriented approach to modeling is flexible and dynamic, and represents a potentially valuable tool for the investigation of medical education and decision-making.
NASA Astrophysics Data System (ADS)
van der Plas, Peter; Guerriero, Suzanne; Cristiano, Leorato; Rugina, Ana
2012-08-01
Modelling and simulation can support a number of use cases across the spacecraft development life-cycle. Given the increasing complexity of space missions, the observed general trend is for a more extensive usage of simulation already in the early phases. A major perceived advantage is that modelling and simulation can enable the validation of critical aspects of the spacecraft design before the actual development is started, thus reducing the risk in later phases. Failure Detection, Isolation, and Recovery (FDIR) is one of the areas with a high potential to benefit from early modelling and simulation. With the increasing level of required spacecraft autonomy, FDIR specifications can grow in such a way that the traditional document-based review process soon becomes inadequate. This paper shows that FDIR modelling and simulation in a system context can provide a powerful tool to support the FDIR verification process. It is highlighted that FDIR modelling at this early stage requires heterogeneous modelling tools and languages, in order to provide an adequate functional description of the different components (i.e. FDIR functions, environment, equipment, etc.) to be modelled. For this reason, an FDIR simulation framework is proposed in this paper. This framework is based on a number of tools already available in the Avionics Systems Laboratory at ESTEC, which are the Avionics Test Bench Functional Engineering Simulator (ATB FES), Matlab/Simulink, TASTE, and Real Time Developer Studio (RTDS). The paper then discusses the application of the proposed simulation framework to a real case-study, i.e. the FDIR modelling of a satellite in support of an actual ESA mission. Challenges and benefits of the approach are described. Finally, lessons learned and the generality of the proposed approach are discussed.
Schlüter, Daniela K; Ramis-Conde, Ignacio; Chaplain, Mark A J
2015-02-06
Studying the biophysical interactions between cells is crucial to understanding how normal tissue develops, how it is structured and also when malfunctions occur. Traditional experiments try to infer events at the tissue level after observing the behaviour of and interactions between individual cells. This approach assumes that cells behave in the same biophysical manner in isolated experiments as they do within colonies and tissues. In this paper, we develop a multi-scale multi-compartment mathematical model that accounts for the principal biophysical interactions and adhesion pathways not only at a cell-cell level but also at the level of cell colonies (in contrast to the traditional approach). Our results suggest that adhesion/separation forces between cells may be lower in cell colonies than traditional isolated single-cell experiments infer. As a consequence, isolated single-cell experiments may be insufficient to deduce important biological processes such as single-cell invasion after detachment from a solid tumour. The simulations further show that kinetic rates and cell biophysical characteristics such as pressure-related cell-cycle arrest have a major influence on cell colony patterns and can allow for the development of protrusive cellular structures as seen in invasive cancer cell lines independent of expression levels of pro-invasion molecules.
Schlüter, Daniela K.; Ramis-Conde, Ignacio; Chaplain, Mark A. J.
2015-01-01
Studying the biophysical interactions between cells is crucial to understanding how normal tissue develops, how it is structured and also when malfunctions occur. Traditional experiments try to infer events at the tissue level after observing the behaviour of and interactions between individual cells. This approach assumes that cells behave in the same biophysical manner in isolated experiments as they do within colonies and tissues. In this paper, we develop a multi-scale multi-compartment mathematical model that accounts for the principal biophysical interactions and adhesion pathways not only at a cell–cell level but also at the level of cell colonies (in contrast to the traditional approach). Our results suggest that adhesion/separation forces between cells may be lower in cell colonies than traditional isolated single-cell experiments infer. As a consequence, isolated single-cell experiments may be insufficient to deduce important biological processes such as single-cell invasion after detachment from a solid tumour. The simulations further show that kinetic rates and cell biophysical characteristics such as pressure-related cell-cycle arrest have a major influence on cell colony patterns and can allow for the development of protrusive cellular structures as seen in invasive cancer cell lines independent of expression levels of pro-invasion molecules. PMID:25519994
Modeling and Simulation of Agents in Resource Strategy Games
2008-01-01
reference to psychic concepts; these actors are emotionless geniuses. Descriptive agents: following the new tradition of BGT, these agents are… Followers tend to be sons of Moderate Y Followers who were Wahhabi and college-trained, unemployed, running religious schools in family homes. Earlier…
A Sequential Monte Carlo Approach for Streamflow Forecasting
NASA Astrophysics Data System (ADS)
Hsu, K.; Sorooshian, S.
2008-12-01
As alternatives to traditional physically based models, Artificial Neural Network (ANN) models offer some advantages, such as the flexibility of not requiring a precise quantitative description of the process mechanism and the ability to be trained directly from data. In this study, an ANN model was used to generate one-day-ahead streamflow forecasts from the precipitation input over a catchment, and the ANN model parameters were trained using a Sequential Monte Carlo (SMC) approach, namely the Regularized Particle Filter (RPF). SMC approaches are known for their ability to track the states and parameters of a nonlinear dynamic process based on Bayes' rule and effective sampling and resampling strategies. In this study, five years of daily rainfall and streamflow measurements were used for model training. Variable RPF sample sizes, from 200 to 2000, were tested. The results show that, beyond 1000 RPF samples, the simulation statistics, in terms of correlation coefficient, root mean square error, and bias, were stable. It is also shown that the forecasted daily flows fit the observations very well, with a correlation coefficient higher than 0.95. The results of the RPF simulations were also compared with those from the popular back-propagation ANN training approach. The pros and cons of the SMC approach and the traditional back-propagation approach will be discussed.
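For readers unfamiliar with the approach, a bootstrap particle filter with a small post-resampling jitter (a crude stand-in for the regularisation step of an RPF) is sketched below on a one-parameter linear rainfall-runoff toy rather than the paper's ANN; all constants, priors, and noise levels are illustrative assumptions.

```python
import math
import random

def observe(theta_true, rain):
    """Synthetic noisy streamflow observation for a linear rainfall-runoff toy."""
    return theta_true * rain + random.gauss(0.0, 0.2)

def particle_filter(rains, obs, n=500, sigma_obs=0.2, jitter=0.02):
    particles = [random.uniform(0.0, 2.0) for _ in range(n)]   # prior on theta
    for r, y in zip(rains, obs):
        weights = [math.exp(-0.5 * ((y - p * r) / sigma_obs) ** 2) for p in particles]
        s = sum(weights)
        weights = [w / s for w in weights] if s > 0.0 else [1.0 / n] * n
        # multinomial resampling followed by Gaussian jitter (regularisation)
        particles = random.choices(particles, weights=weights, k=n)
        particles = [p + random.gauss(0.0, jitter) for p in particles]
    return sum(particles) / n                                   # posterior-mean estimate

random.seed(1)
rains = [random.uniform(0.0, 5.0) for _ in range(50)]
obs = [observe(0.8, r) for r in rains]
print(particle_filter(rains, obs))   # should land near the true value 0.8
```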
A Single-column Model Ensemble Approach Applied to the TWP-ICE Experiment
NASA Technical Reports Server (NTRS)
Davies, L.; Jakob, C.; Cheung, K.; DelGenio, A.; Hill, A.; Hume, T.; Keane, R. J.; Komori, T.; Larson, V. E.; Lin, Y.;
2013-01-01
Single-column models (SCM) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the prescribed best-estimate large-scale observations. Errors in estimating the observations will result in uncertainty in the modeled simulations. One method to address this modeled uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCM and two cloud-resolving models (CRM). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCM and CRM. Differences are also apparent between the models in the ensemble-mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to forcing. The ensemble is further used to investigate cloud variables and precipitation and identifies differences between CRM and SCM, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations and hence enables a more complete model investigation compared to using only the more traditional single best-estimate simulation.
Development of Quantitative Specifications for Simulating the Stress Environment
1992-03-01
reconsideration of Broadbent's filter model of selective attention. Quarterly Journal of Experimental Psychology, 167-178. Szpiler, J. A., & Epstein, S. (1976)… [references, distribution-form, and list-of-figures residue omitted; recoverable figure title: "Model of Stress and Performance"] …no tradition of performance in the face of combat, no role models, no weapons, and little preparation for this environment. In discussing maintenance
Model-Based Verification and Validation of Spacecraft Avionics
NASA Technical Reports Server (NTRS)
Khan, M. Omair; Sievers, Michael; Standley, Shaun
2012-01-01
Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system-level V&V using modeling and simulation, and to use scarce hardware testing time to validate models, as has been the norm for thermal and structural V&V for some time. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated, enabling a more complete set of test cases than is possible on flight hardware. SysML simulations provide access to and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability, we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated that such an approach is possible.
An Object-Oriented Serial DSMC Simulation Package
NASA Astrophysics Data System (ADS)
Liu, Hongli; Cai, Chunpei
2011-05-01
A newly developed three-dimensional direct simulation Monte Carlo (DSMC) simulation package, named GRASP ("Generalized Rarefied gAs Simulation Package"), is reported in this paper. This package utilizes the concept of a simulation engine, many C++ features, and software design patterns. The package has an open architecture, which can benefit further development and maintenance of the code. In order to reduce the engineering time for three-dimensional models, a hybrid grid scheme, combined with a flexible data structure written in C++, is implemented in this package. This scheme utilizes a local data structure based on the computational cell to achieve high performance on workstation processors. This data structure allows the DSMC algorithm to be very efficiently parallelized with domain decomposition, and it provides much flexibility in terms of grid types. This package can utilize traditional structured, unstructured, or hybrid grids within the framework of a single code to model arbitrarily complex geometries and to simulate rarefied gas flows. Benchmark test cases indicate that this package has satisfactory accuracy for complex rarefied gas flows.
Stochastic simulation by image quilting of process-based geological models
NASA Astrophysics Data System (ADS)
Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef
2017-09-01
Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.
Hu, S. X.; Collins, Lee A.; Goncharov, V. N.; ...
2016-04-14
Using quantum molecular-dynamics (QMD) methods based on density functional theory, we have performed first-principles investigations of the ionization and thermal conductivity of polystyrene (CH) over a wide range of plasma conditions (ρ = 0.5 to 100 g/cm 3 and T = 15,625 to 500,000 K). The ionization data from orbital-free molecular-dynamics calculations have been fitted with a “Saha-type” model as a function of the CH plasma density and temperature, which exhibits the correct behaviors of continuum lowering and pressure ionization. The thermal conductivities (κ_QMD) of CH, derived directly from the Kohn–Sham molecular-dynamics calculations, are then analytically fitted with a generalized Coulomb logarithm [(lnΛ)_QMD] over a wide range of plasma conditions. When compared with the traditional ionization and thermal-conductivity models used in radiation–hydrodynamics codes for inertial confinement fusion simulations, the QMD results show a large difference in the low-temperature regime, in which strong coupling and electron degeneracy play an essential role in determining plasma properties. Furthermore, hydrodynamic simulations of cryogenic deuterium–tritium targets with CH ablators on OMEGA and the National Ignition Facility using the QMD-derived ionization and thermal conductivity of CH have predicted ~20% variation in target performance in terms of hot-spot pressure and neutron yield (gain) with respect to traditional model simulations.
Self-consistent core-pedestal transport simulations with neural network accelerated models
Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.; ...
2017-07-12
Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes, and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
Self-consistent core-pedestal transport simulations with neural network accelerated models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.
Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes, and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
Self-consistent core-pedestal transport simulations with neural network accelerated models
NASA Astrophysics Data System (ADS)
Meneghini, O.; Smith, S. P.; Snyder, P. B.; Staebler, G. M.; Candy, J.; Belli, E.; Lao, L.; Kostuk, M.; Luce, T.; Luda, T.; Park, J. M.; Poli, F.
2017-08-01
Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes, and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitudes. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. The NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
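To illustrate the surrogate idea behind these records, the sketch below trains a tiny NumPy multilayer perceptron to reproduce a placeholder analytic "flux" function standing in for an expensive theory-based model such as TGLF or EPED1; once fitted, the surrogate evaluates in microseconds. The architecture, training loop, and target function are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    """Placeholder for an expensive theory-based flux model (illustrative analytic form)."""
    return np.sin(3.0 * x[:, :1]) * np.exp(-x[:, 1:2] ** 2)

X = rng.uniform(-1.0, 1.0, size=(2000, 2))
Y = expensive_model(X)

# one hidden layer of 32 tanh units, trained by full-batch gradient descent on MSE
W1 = rng.normal(0.0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    err = P - Y
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

Xtest = rng.uniform(-1.0, 1.0, size=(200, 2))
surrogate = np.tanh(Xtest @ W1 + b1) @ W2 + b2
print(float(np.abs(surrogate - expensive_model(Xtest)).mean()))   # mean surrogate error
```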
Cai, Junmeng; Liu, Ronghou
2008-05-01
In the present paper, a new distributed activation energy model (DAEM) has been developed, considering the reaction order and the dependence of the frequency factor on temperature. The proposed DAEM cannot be solved directly in a closed form, thus a numerical method was used to solve the new DAEM equation. Two numerical examples are presented to illustrate the proposed method. The traditional DAEM and the new DAEM have been used to simulate the pyrolytic processes of several types of biomass. The new DAEM fitted the experimental data much better than the traditional DAEM, as the dependence of the frequency factor on temperature was taken into account.
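A numerical sketch of the traditional first-order DAEM with a Gaussian activation-energy distribution is given below: the unreacted fraction is an integral over E of the exponential of minus the temperature integral of the Arrhenius rate, both approximated on coarse grids. The kinetic parameters and heating rate are illustrative, not the values fitted in the paper, and the new model's temperature-dependent frequency factor is not included.

```python
import math

R = 8.314  # J/(mol K)

def daem_conversion(T, A=1e13, beta=10.0 / 60.0, E0=180e3, sigma=20e3, nE=200, nT=200, T0=300.0):
    """1 - alpha(T) = integral over E of exp(-(1/beta) * int_{T0}^{T} A exp(-E/RT') dT') f(E) dE."""
    Es = [E0 + sigma * (6.0 * i / (nE - 1) - 3.0) for i in range(nE)]   # E0 +/- 3 sigma
    dE = Es[1] - Es[0]
    unreacted = 0.0
    for E in Es:
        # inner temperature integral of k(T') = A exp(-E/RT') by the trapezoid rule
        Ts = [T0 + (T - T0) * j / (nT - 1) for j in range(nT)]
        k = [A * math.exp(-E / (R * t)) for t in Ts]
        inner = sum(0.5 * (k[j] + k[j + 1]) * (Ts[j + 1] - Ts[j]) for j in range(nT - 1))
        f = math.exp(-0.5 * ((E - E0) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        unreacted += math.exp(-inner / beta) * f * dE
    return 1.0 - unreacted

for T in (600.0, 700.0, 800.0):
    print(T, round(daem_conversion(T), 3))   # conversion at a few temperatures
```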
Interactive Correlation Analysis and Visualization of Climate Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Kwan-Liu
The relationship between our ability to analyze and extract insights from visualizations of climate model output and the capability of the available resources to make those visualizations has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old visualization workflow. The traditional methods for visualizing climate output have also not kept pace with changes in the types of grids used, the number of variables involved, the number of different simulations performed with a climate model, or the feature richness of high-resolution simulations. This project has developed new and faster methods for visualization in order to get the most knowledge out of the new generation of high-resolution climate models. While traditional climate images will continue to be useful, there is a need for new approaches to the visualization and analysis of climate data if we are to gain all the insights available in the ultra-large data sets produced by high-resolution model output and ensemble integrations of climate models, such as those produced for the Coupled Model Intercomparison Project. Toward that end, we have developed new visualization techniques for performing correlation analysis. We have also introduced highly scalable, parallel rendering methods for visualizing large-scale 3D data. This project was done jointly with climate scientists and visualization researchers at Argonne National Laboratory and NCAR.
Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks
Vestergaard, Christian L.; Génois, Mathieu
2015-01-01
Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks.
Vestergaard, Christian L; Génois, Mathieu
2015-10-01
Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling.
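A minimal sketch of how such a temporal Gillespie loop can drive SIS spreading on a time-varying contact network (given as a per-step edge list) is shown below. The bookkeeping is simplified, in particular the remainder of a time step after an event fires is treated as a full step, and all rates and contacts are illustrative; see the paper and its pseudocode for the exact algorithm.

```python
import random

def sis_temporal_gillespie(contacts, beta=0.5, mu=0.1, dt=1.0, seed_node=0):
    """SIS dynamics on a temporal network; `contacts` is a list of edge lists, one per step."""
    infected = {seed_node}
    tau = random.expovariate(1.0)                  # normalised waiting time
    for edges in contacts:                         # advance one time step at a time
        while True:
            # transitions active during this step, with their rates
            transitions = [(("inf", v), beta) for u, v in edges
                           if u in infected and v not in infected]
            transitions += [(("inf", u), beta) for u, v in edges
                            if v in infected and u not in infected]
            transitions += [(("rec", n), mu) for n in infected]
            total = sum(rate for _, rate in transitions)
            if total == 0.0 or tau > total * dt:
                tau -= total * dt                  # no event fires in this step
                break
            # an event fires: choose it proportionally to its rate
            r = random.uniform(0.0, total)
            acc = 0.0
            for (kind, node), rate in transitions:
                acc += rate
                if r <= acc:
                    if kind == "inf":
                        infected.add(node)
                    else:
                        infected.discard(node)
                    break
            tau = random.expovariate(1.0)          # draw the next normalised waiting time
    return len(infected)

# toy temporal network: 3 nodes, the single contact alternating over 50 steps
contacts = [[(0, 1)] if t % 2 == 0 else [(1, 2)] for t in range(50)]
print(sis_temporal_gillespie(contacts))            # number of infected nodes at the end
```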
Spinello, Elio F; Fischbach, Ronald
2008-01-01
This study investigated the use of a Web-based community health simulation as a problem-based learning (PBL) experience for undergraduate students majoring in public health. The study sought to determine whether students who participated in the online simulation achieved differences in academic and attitudinal outcomes compared with students who participated in a traditional PBL exercise. Using a nonexperimental comparative design, 21 undergraduate students enrolled in a health-behavior course were each randomly assigned to one of four workgroups. Each workgroup was randomly assigned the semester-long simulation project or the traditional PBL exercise. Survey instruments were used to measure students' attitudes toward the course, their perceptions of the learning community, and perceptions of their own cognitive learning. Content analysis of final essay exams and group reports was used to identify differences in academic outcomes and students' level of conceptual understanding of health-behavior theory. Findings indicated that students participating in the simulation produced higher mean final exam scores compared with students participating in the traditional PBL (p=0.03). Students in the simulation group also outperformed students in the traditional group with respect to their understanding of health-behavior theory (p=0.04). Students in the simulation group, however, rated their own level of cognitive learning lower than did students in the traditional group (p=0.03). By bridging time and distance constraints of the traditional classroom setting, an online simulation may be an effective PBL approach for public health students. Recommendations include further research using a larger sample to explore students' perceptions of learning when participating in simulated real-world activities. Additional research focusing on possible differences between actual and perceived learning relative to PBL methods and student workgroup dynamics is also recommended.
The changing paradigm for integrated simulation in support of Command and Control (C2)
NASA Astrophysics Data System (ADS)
Riecken, Mark; Hieb, Michael
2016-05-01
Modern software and network technologies are on the verge of enabling what has eluded the simulation and operational communities for more than two decades, truly integrating simulation functionality into operational Command and Control (C2) capabilities. This deep integration will benefit multiple stakeholder communities from experimentation and test to training by providing predictive and advanced analytics. There is a new opportunity to support operations with simulation once a deep integration is achieved. While it is true that doctrinal and acquisition issues remain to be addressed, nonetheless it is increasingly obvious that few technical barriers persist. How will this change the way in which common simulation and operational data is stored and accessed? As the Services move towards single networks, will there be technical and policy issues associated with sharing those operational networks with simulation data, even if the simulation data is operational in nature (e.g., associated with planning)? How will data models that have traditionally been simulation only be merged in with operational data models? How will the issues of trust be addressed?
Ritchie, Marylyn D; White, Bill C; Parker, Joel S; Hahn, Lance W; Moore, Jason H
2003-01-01
Background Appropriate definition of neural network architecture prior to data analysis is crucial for successful data mining. This can be challenging when the underlying model of the data is unknown. The goal of this study was to determine whether optimizing neural network architecture using genetic programming as a machine learning strategy would improve the ability of neural networks to model and detect nonlinear interactions among genes in studies of common human diseases. Results Using simulated data, we show that a genetic programming optimized neural network approach is able to model gene-gene interactions as well as a traditional back propagation neural network. Furthermore, the genetic programming optimized neural network is better than the traditional back propagation neural network approach in terms of predictive ability and power to detect gene-gene interactions when non-functional polymorphisms are present. Conclusion This study suggests that a machine learning strategy for optimizing neural network architecture may be preferable to traditional trial-and-error approaches for the identification and characterization of gene-gene interactions in common, complex human diseases. PMID:12846935
Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars; Cayé-Thomasen, Per; Sørensen, Mads Sølvsten
2016-01-01
The cognitive load (CL) theoretical framework suggests that working memory is limited, which has implications for learning and skills acquisition. Complex learning situations such as surgical skills training can potentially induce a cognitive overload, inhibiting learning. This study aims to compare CL in traditional cadaveric dissection training and virtual reality (VR) simulation training of mastoidectomy. A prospective, crossover study. Participants performed cadaveric dissection before VR simulation of the procedure or vice versa. CL was estimated by secondary-task reaction time testing at baseline and during the procedure in both training modalities. The national Danish temporal bone course. A total of 40 novice otorhinolaryngology residents. Reaction time was increased by 20% in VR simulation training and 55% in cadaveric dissection training of mastoidectomy compared with baseline measurements. Traditional dissection training increased CL significantly more than VR simulation training (p < 0.001). VR simulation training imposed a lower CL than traditional cadaveric dissection training of mastoidectomy. Learning complex surgical skills can be a challenge for the novice and mastoidectomy skills training could potentially be optimized by employing VR simulation training first because of the lower CL. Traditional dissection training could then be used to supplement skills training after basic competencies have been acquired in the VR simulation. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Colvin, M.E.; Bettoli, Phillip William; Scholten, G.D.
2013-01-01
Equilibrium yield models predict the total biomass removed from an exploited stock; however, traditional yield models must be modified to simulate roe yields because a linear relationship between age (or length) and mature ovary weight does not typically exist. We extended the traditional Beverton-Holt equilibrium yield model to predict roe yields of Paddlefish Polyodon spathula in Kentucky Lake, Tennessee-Kentucky, as a function of varying conditional fishing mortality rates (10-70%), conditional natural mortality rates (cm; 9% and 18%), and four minimum size limits ranging from 864 to 1,016mm eye-to-fork length. These results were then compared to a biomass-based yield assessment. Analysis of roe yields indicated the potential for growth overfishing at lower exploitation rates and smaller minimum length limits than were suggested by the biomass-based assessment. Patterns of biomass and roe yields in relation to exploitation rates were similar regardless of the simulated value of cm, thus indicating that the results were insensitive to changes in cm. Our results also suggested that higher minimum length limits would increase roe yield and reduce the potential for growth overfishing and recruitment overfishing at the simulated cm values. Biomass-based equilibrium yield assessments are commonly used to assess the effects of harvest on other caviar-based fisheries; however, our analysis demonstrates that such assessments likely underestimate the probability and severity of growth overfishing when roe is targeted. Therefore, equilibrium roe yield-per-recruit models should also be considered to guide the management process for caviar-producing fish species.
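A hedged sketch of the per-recruit bookkeeping underlying such equilibrium yield models (not the authors' implementation): conditional mortality rates are converted to instantaneous rates, a single recruit is followed across ages, and the Baranov catch equation is weighted either by body weight (biomass yield) or by mature ovary weight (roe yield). The age schedules and parameter values below are hypothetical.

```python
import math

def per_recruit_yield(cf, cm, age_at_entry, max_age, weight_at_age, roe_at_age):
    """Equilibrium yield per recruit for biomass and roe (hedged sketch).

    cf, cm           : conditional fishing / natural mortality (annual fractions)
    age_at_entry     : first age vulnerable to harvest (stand-in for a length limit)
    weight_at_age(a) : body weight at age a; roe_at_age(a): mature ovary weight at age a
    """
    F = -math.log(1.0 - cf)              # instantaneous fishing mortality
    M = -math.log(1.0 - cm)              # instantaneous natural mortality
    n = 1.0                              # follow a single recruit
    biomass_yield = roe_yield = 0.0
    for age in range(1, max_age + 1):
        f_a = F if age >= age_at_entry else 0.0
        z_a = f_a + M
        catch = (f_a / z_a) * (1.0 - math.exp(-z_a)) * n    # Baranov catch equation
        biomass_yield += catch * weight_at_age(age)
        roe_yield += catch * roe_at_age(age)
        n *= math.exp(-z_a)              # survivors entering the next age
    return biomass_yield, roe_yield

# Hypothetical schedules: asymptotic weight growth, maturity (ovary weight) from age 8.
biomass, roe = per_recruit_yield(
    cf=0.30, cm=0.09, age_at_entry=10, max_age=30,
    weight_at_age=lambda a: 20.0 * (1.0 - math.exp(-0.08 * a)) ** 3,
    roe_at_age=lambda a: 0.0 if a < 8 else 0.15 * 20.0 * (1.0 - math.exp(-0.08 * a)) ** 3)
```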
Sutton, Steven C; Hu, Mingxiu
2006-05-05
Many mathematical models have been proposed for establishing an in vitro/in vivo correlation (IVIVC). The traditional IVIVC model building process consists of 5 steps: deconvolution, model fitting, convolution, prediction error evaluation, and cross-validation. This is a time-consuming process and typically a few models at most are tested for any given data set. The objectives of this work were to (1) propose a statistical tool to screen models for further development of an IVIVC, (2) evaluate the performance of each model under different circumstances, and (3) investigate the effectiveness of common statistical model selection criteria for choosing IVIVC models. A computer program was developed to explore which model(s) would be most likely to work well with a random variation from the original formulation. The process used Monte Carlo simulation techniques to build IVIVC models. Data-based model selection criteria (Akaike Information Criteria [AIC], R2) and the probability of passing the Food and Drug Administration "prediction error" requirement were calculated. Several real data sets representing a broad range of release profiles are used to illustrate the process and to demonstrate the advantages of this automated process over the traditional approach. The Hixson-Crowell and Weibull models were often preferred over the linear model. When evaluating whether a Level A IVIVC model was possible, the model selection criterion AIC generally selected the best model. We believe that the approach we proposed may be a rapid tool to determine which IVIVC model (if any) is the most applicable.
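As a minimal illustration of the screening step described above, the sketch below fits one candidate release model (Weibull) to a hypothetical dissolution profile and scores it with a least-squares AIC; the same loop would be repeated over Hixson-Crowell, linear, and other candidates. The data values and starting guesses are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(t, fmax, td, b):
    """Weibull in vitro release model, one common IVIVC candidate."""
    return fmax * (1.0 - np.exp(-(t / td) ** b))

def aic(n, rss, k):
    """Least-squares AIC: n*ln(RSS/n) + 2k (constant terms dropped)."""
    return n * np.log(rss / n) + 2 * k

t = np.array([0.5, 1, 2, 4, 8, 12, 24])                         # hypothetical times (h)
frac = np.array([0.12, 0.22, 0.38, 0.58, 0.78, 0.88, 0.97])     # fraction released

params, _ = curve_fit(weibull, t, frac, p0=[1.0, 4.0, 1.0])
rss = np.sum((frac - weibull(t, *params)) ** 2)
print("Weibull AIC:", aic(len(t), rss, k=3))
```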
Hydrogeologic unit flow characterization using transition probability geostatistics.
Jones, Norman L; Walker, Justin R; Carle, Steven F
2005-01-01
This paper describes a technique for applying the transition probability geostatistics method for stochastic simulation to a MODFLOW model. Transition probability geostatistics has some advantages over traditional indicator kriging methods including a simpler and more intuitive framework for interpreting geologic relationships and the ability to simulate juxtapositional tendencies such as fining upward sequences. The indicator arrays generated by the transition probability simulation are converted to layer elevation and thickness arrays for use with the new Hydrogeologic Unit Flow package in MODFLOW 2000. This makes it possible to preserve complex heterogeneity while using reasonably sized grids and/or grids with nonuniform cell thicknesses.
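Full transition probability geostatistics (as in T-PROGS) works with continuous-lag transition rate matrices and conditional simulation; as a toy illustration of the underlying idea, the sketch below generates a single facies column from a discrete one-step transition probability matrix. The matrix entries and facies labels are hypothetical.

```python
import numpy as np

# Hypothetical per-cell (vertical) transition probabilities between three facies:
# 0 = sand, 1 = silt, 2 = clay. Rows sum to one.
T = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.75, 0.15],
              [0.05, 0.20, 0.75]])

def simulate_column(T, n_cells, start=0, rng=None):
    """Generate one stochastic facies column from the transition matrix."""
    rng = rng or np.random.default_rng()
    column = [start]
    for _ in range(n_cells - 1):
        column.append(rng.choice(len(T), p=T[column[-1]]))
    return np.array(column)

print(simulate_column(T, 20))
```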
Atomic-level simulation of ferroelectricity in perovskite solid solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sepliarsky, M.; Instituto de Fisica Rosario, CONICET-UNR, Rosario,; Phillpot, S. R.
2000-06-26
Building on the insights gained from electronic-structure calculations and from experience obtained with an earlier atomic-level method, we developed an atomic-level simulation approach based on the traditional Buckingham potential with shell model which correctly reproduces the ferroelectric phase behavior and dielectric and piezoelectric properties of KNbO3. This approach now enables the simulation of solid solutions and defected systems; we illustrate this capability by elucidating the ferroelectric properties of a KTa0.5Nb0.5O3 random solid solution. (c) 2000 American Institute of Physics.
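For reference, here is a minimal sketch of the interaction forms named in the abstract, assuming the usual conventions: a Buckingham pair term (exponential repulsion plus dispersion) and a core-shell spring. The parameter values below are placeholders, not the fitted KNbO3 potential.

```python
import math

def buckingham(r, A, rho, C):
    """Buckingham pair energy: exponential short-range repulsion minus dispersion."""
    return A * math.exp(-r / rho) - C / r ** 6

def core_shell_energy(d, k2, k4=0.0):
    """Core-shell coupling energy for core-shell separation d (harmonic plus optional quartic)."""
    return 0.5 * k2 * d ** 2 + (k4 / 24.0) * d ** 4

# Placeholder parameters, purely for illustration.
print(buckingham(2.8, A=1500.0, rho=0.30, C=30.0), core_shell_energy(0.05, k2=40.0))
```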
Computer modeling describes gravity-related adaptation in cell cultures.
Alexandrov, Ludmil B; Alexandrova, Stoyana; Usheva, Anny
2009-12-16
Questions about the changes of biological systems in response to hostile environmental factors are important but not easy to answer. Often, the traditional description with differential equations is difficult due to the overwhelming complexity of the living systems. Another way to describe complex systems is by simulating them with phenomenological models such as the well-known evolutionary agent-based model (EABM). Here we developed an EABM to simulate cell colonies as a multi-agent system that adapts to hyper-gravity in starvation conditions. In the model, the cell's heritable characteristics are generated and transferred randomly to offspring cells. After a qualitative validation of the model at normal gravity, we simulate cellular growth in hyper-gravity conditions. The obtained data are consistent with previously confirmed theoretical and experimental findings for bacterial behavior in environmental changes, including the experimental data from the microgravity Atlantis and the Hypergravity 3000 experiments. Our results demonstrate that it is possible to utilize an EABM with realistic qualitative description to examine the effects of hypergravity and starvation on complex cellular entities.
Costa, Paulo R; Caldas, Linda V E
2002-01-01
This work presents the development and evaluation of modern techniques for calculating radiation protection barriers in clinical radiographic facilities. Our methodology uses realistic primary and scattered spectra. The primary spectra were computer simulated using a waveform generalization and a semiempirical model (the Tucker-Barnes-Chakraborty model). The scattered spectra were obtained from published data. An analytical function was used to produce attenuation curves from polychromatic radiation for specified kVp, waveform, and filtration. The results of this analytical function are given in ambient dose equivalent units. The attenuation curves were obtained by application of Archer's model to computer simulation data. The parameters for the best fit to the model using primary and secondary radiation data from different radiographic procedures were determined. They resulted in an optimized model for shielding calculation for any radiographic room. The shielding costs were about 50% lower than those calculated using the traditional method based on Report No. 49 of the National Council on Radiation Protection and Measurements.
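Archer's model expresses broad-beam transmission through a barrier with three fit parameters and can be inverted in closed form for the required thickness. The sketch below shows that standard form under the usual conventions; the alpha, beta, gamma values are placeholders, not the fitted parameters reported in the paper.

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Archer-model broad-beam transmission B(x) for barrier thickness x."""
    return ((1.0 + beta / alpha) * math.exp(alpha * gamma * x) - beta / alpha) ** (-1.0 / gamma)

def barrier_thickness(B, alpha, beta, gamma):
    """Invert the Archer model: thickness needed to reach transmission B."""
    return (1.0 / (alpha * gamma)) * math.log((B ** -gamma + beta / alpha) / (1.0 + beta / alpha))

# Placeholder fit parameters, purely for illustration.
a, b, g = 2.3, 7.9, 0.5
x = barrier_thickness(B=1e-3, alpha=a, beta=b, gamma=g)
print(x, archer_transmission(x, a, b, g))   # round trip: transmission should return ~1e-3
```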
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Lixin; Jiang, Runqing; Osei, Ernest K.
2014-08-15
Flattening filter free (FFF) beams have been adopted by many clinics and used for patient treatment. However, compared to the traditional flattened beams, we have limited knowledge of FFF beams. In this study, we successfully modeled the 6 MV FFF beam for Varian TrueBeam accelerator with the Monte Carlo (MC) method. Both the percentage depth dose and profiles match well to the Golden Beam Data (GBD) from Varian. MC simulations were then performed to predict the relative output factors. The in-water output ratio, Scp, was simulated in water phantom and data obtained agrees well with GBD. The in-air output ratio, Sc, was obtained by analyzing the phase space placed at isocenter, in air, and computing the ratio of water Kerma rates for different field sizes. The phantom scattering factor, Sp, can then be obtained from the traditional way of taking the ratio of Scp and Sc. We also simulated Sp using a recently proposed method based on only the primary beam dose delivery in water phantom. Because there is no concern of lateral electronic disequilibrium, this method is more suitable for small fields. The results from both methods agree well with each other. The flattened 6 MV beam was simulated and compared to 6 MV FFF. The comparison confirms that 6 MV FFF has less scattering from the Linac head and less phantom scattering contribution to the central axis dose, which will be helpful for improving accuracy in beam modeling and dose calculation in treatment planning systems.
Li, Jia; Xu, Zhenming; Zhou, Yaohe
2008-05-30
Traditionally, the mixed metals from waste printed circuit boards (PCBs) were sent to smelters to refine pure copper. Some valuable metals (aluminum, zinc, and tin) present in PCBs at low content were lost during smelting. A new method that uses a roll-type electrostatic separator (RES) to recover low-content metals from waste PCBs is presented in this study. A theoretical model, established by computing the electric field and analyzing the forces on the particles, was implemented as a MATLAB program. The program was designed to simulate the process of separating mixed metal particles. Electrical, material, and mechanical factors were analyzed to optimize the operating parameters of the separator. The experimental results for separating copper and aluminum particles with the RES agreed well with the computer simulation results. The model could be used to simulate the separation of other metal (tin, zinc, etc.) particles during the process of recycling waste PCBs by RES.
Realistic Modeling of Multi-Scale MHD Dynamics of the Solar Atmosphere
NASA Technical Reports Server (NTRS)
Kitiashvili, Irina; Mansour, Nagi N.; Wray, Alan; Couvidat, Sebastian; Yoon, Seokkwan; Kosovichev, Alexander
2014-01-01
Realistic 3D radiative MHD simulations open new perspectives for understanding the turbulent dynamics of the solar surface, its coupling to the atmosphere, and the physical mechanisms of generation and transport of non-thermal energy. Traditionally, plasma eruptions and wave phenomena in the solar atmosphere are modeled by prescribing artificial driving mechanisms using magnetic or gas pressure forces that might arise from magnetic field emergence or reconnection instabilities. In contrast, our 'ab initio' simulations provide a realistic description of solar dynamics naturally driven by solar energy flow. By simulating the upper convection zone and the solar atmosphere, we can investigate in detail the physical processes of turbulent magnetoconvection, generation and amplification of magnetic fields, excitation of MHD waves, and plasma eruptions. We present recent simulation results of the multi-scale dynamics of quiet-Sun regions, and energetic effects in the atmosphere and compare with observations. For the comparisons we calculate synthetic spectro-polarimetric data to model observational data of SDO, Hinode, and New Solar Telescope.
Xue, Lianqing; Yang, Fan; Yang, Changbing; Wei, Guanghui; Li, Wenqian; He, Xinlin
2018-01-11
Understanding the mechanisms of complicated hydrological processes is important for sustainable management of water resources in arid areas. This paper carried out simulations of water movement for the Manas River Basin (MRB) using an improved semi-distributed topographic hydrologic model (TOPMODEL) with a snowmelt model and a topographic index algorithm. A new algorithm is proposed to calculate the topographic index curve using an internal tangent circle on a conical surface. Building on the traditional model, an improved temperature index that accounts for solar radiation is used to calculate the amount of snowmelt. The parameter uncertainty of TOPMODEL was analyzed using the generalized likelihood uncertainty estimation (GLUE) method. The proposed model shows that the distribution of the topographic index is concentrated in high mountains, and the accuracy of the runoff simulation improves when radiation is considered. Our results revealed that the performance of the improved TOPMODEL is acceptable and comparable for runoff simulation in the MRB. The uncertainty of the simulations resulted from the model parameters and structure as well as climatic and anthropogenic factors. This study is expected to serve as a valuable complement for wider application of TOPMODEL and to help identify the mechanisms of hydrological processes in arid areas.
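As a rough sketch of the two ingredients named above, assuming the conventional definitions: the TOPMODEL topographic index ln(a / tan(beta)) and a temperature-index melt rate augmented with a shortwave-radiation term. The authors' internal-tangent-circle algorithm is not reproduced here, and all coefficients are placeholders.

```python
import numpy as np

def topographic_index(upslope_area, slope_rad, cell_size):
    """Classic TOPMODEL topographic index ln(a / tan(beta))."""
    a = upslope_area / cell_size                           # specific catchment area (m)
    return np.log(a / np.maximum(np.tan(slope_rad), 1e-6))

def melt_rate(temp_c, shortwave_wm2, ddf=3.0, rf=0.01, t_melt=0.0):
    """Radiation-augmented temperature-index snowmelt (mm/day); coefficients are placeholders."""
    return np.maximum(ddf * (temp_c - t_melt) + rf * shortwave_wm2, 0.0)

print(topographic_index(upslope_area=5.0e4, slope_rad=np.radians(12.0), cell_size=30.0))
print(melt_rate(temp_c=2.5, shortwave_wm2=180.0))
```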
Model Performance Evaluation and Scenario Analysis ...
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors. The performance measures include error analysis, coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back to time series provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is due to the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool ...
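Two of the pieces mentioned above are easy to make concrete. The sketch below shows the Nash-Sutcliffe efficiency and a simple magnitude/sequence split of a time series in the spirit of MPESA's decomposition; it is an illustration of the idea, not the tool's actual code.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def magnitude_and_sequence(series):
    """Split a series into a magnitude component (sorted values) and a sequence
    component (the ordering), echoing the decomposition described above."""
    series = np.asarray(series, float)
    order = np.argsort(series)
    return series[order], order

obs = [1.2, 3.4, 2.2, 5.6, 4.1]
sim = [1.0, 3.0, 2.8, 5.0, 4.5]
print(nash_sutcliffe(obs, sim))
```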
Exitus: An Agent-Based Evacuation Simulation Model for Heterogeneous Populations
ERIC Educational Resources Information Center
Manley, Matthew T.
2012-01-01
Evacuation planning for private-sector organizations is an important consideration given the continuing occurrence of both natural and human-caused disasters that inordinately affect them. Unfortunately, the traditional management approach that is focused on fire drills presents several practical challenges at the scale required for many…
ERIC Educational Resources Information Center
Dolan, Conor V.; Molenaar, Peter C. M.
1994-01-01
In multigroup covariance structure analysis with structured means, the traditional latent selection model is formulated as a special case of phenotypic selection. Illustrations with real and simulated data demonstrate how one can test specific hypotheses concerning selection on latent variables. (SLD)
Developmental and Reproductive Toxicity (DART) testing is important for assessing the potential consequences of drug and chemical exposure on human health and well-being. Complexity of pregnancy and the reproductive cycle makes DART testing challenging and costly for traditional ...
Jensen, Katrine; Ringsted, Charlotte; Hansen, Henrik Jessen; Petersen, René Horsleben; Konge, Lars
2014-06-01
Video-assisted thoracic surgery is gradually replacing conventional open thoracotomy as the method of choice for the treatment of early-stage non-small cell lung cancers, and thoracic surgical trainees must learn and master this technique. Simulation-based training could help trainees overcome the first part of the learning curve, but no virtual-reality simulators for thoracoscopy are commercially available. This study aimed to investigate whether training on a laparoscopic simulator enables trainees to perform a thoracoscopic lobectomy. Twenty-eight surgical residents were randomized to either virtual-reality training on a nephrectomy module or traditional black-box simulator training. After a retention period they performed a thoracoscopic lobectomy on a porcine model and their performance was scored using a previously validated assessment tool. The groups did not differ in age or gender. All participants were able to complete the lobectomy. The performance of the black-box group was significantly faster during the test scenario than the virtual-reality group: 26.6 min (SD 6.7 min) versus 32.7 min (SD 7.5 min). No difference existed between the two groups when comparing bleeding and anatomical and non-anatomical errors. Simulation-based training and targeted instructions enabled the trainees to perform a simulated thoracoscopic lobectomy. Traditional black-box training was more effective than virtual-reality laparoscopy training. Thus, a dedicated simulator for thoracoscopy should be available before establishing systematic virtual-reality training programs for trainees in thoracic surgery.
Guo, Lei; Li, Zhengyan; Gao, Pei; Hu, Hong; Gibson, Mark
2015-11-01
Bisphenol A (BPA) occurs widely in natural waters with both traditional and reproductive toxicity to various aquatic species. The water quality criteria (WQC), however, have not been established in China, which hinders the ecological risk assessment for the pollutant. This study therefore aims to derive the water quality criteria for BPA based on both acute and chronic toxicity endpoints and to assess the ecological risk in surface waters of China. A total of 15 acute toxicity values tested with aquatic species resident in China were found in published literature, which were simulated with the species sensitivity distribution (SSD) model for the derivation of the criterion maximum concentration (CMC). 18 chronic toxicity values with traditional endpoints were simulated for the derivation of the traditional criterion continuous concentration (CCC), and 12 chronic toxicity values with reproductive endpoints were used for the reproductive CCC. Based on the derived WQC, the ecological risk of BPA in surface waters of China was assessed with the risk quotient (RQ) method. The results showed that the CMC, traditional CCC, and reproductive CCC were 1518 μg L(-1), 2.19 μg L(-1), and 0.86 μg L(-1), respectively. The acute risk of BPA was negligible, with RQ values much lower than 0.1. The chronic risk was however much higher, with RQ values of 0.01-3.76 and 0.03-9.57 based on traditional and reproductive CCC, respectively. The chronic RQ values on reproductive endpoints were about three times as high as those on traditional endpoints, indicating that ecological risk assessment based on traditional effects may not guarantee the safety of aquatic biota. Copyright © 2015 Elsevier Ltd. All rights reserved.
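A hedged sketch of the general SSD-based derivation and risk-quotient step (not the authors' exact procedure, which may include assessment factors and different endpoint handling): fit a log-normal species sensitivity distribution to species-level toxicity values, take the 5th percentile (HC5) as the basis of a criterion, and compare an exposure concentration against it. All numbers below are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical chronic toxicity values (ug/L), one per species.
tox = np.array([3.2, 5.8, 9.4, 12.0, 20.5, 33.0, 41.0, 66.0, 95.0, 150.0, 240.0, 410.0])

# Fit a log-normal species sensitivity distribution (SSD) in log10 space.
mu, sigma = stats.norm.fit(np.log10(tox))

# HC5: concentration intended to protect 95% of species; often the basis of a criterion.
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)

# Risk quotient: exposure concentration divided by the criterion.
mec = 2.0                                  # hypothetical exposure concentration, ug/L
print("HC5 =", round(hc5, 2), "ug/L;  RQ =", round(mec / hc5, 2))
```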
Hsieh, Chih-Chen; Jain, Semant; Larson, Ronald G
2006-01-28
A very stiff finitely extensible nonlinear elastic (FENE)-Fraenkel spring is proposed to replace the rigid rod in the bead-rod model. This allows the adoption of a fast predictor-corrector method so that large time steps can be taken in Brownian dynamics (BD) simulations without over- or understretching the stiff springs. In contrast to the simple bead-rod model, BD simulations with beads and FENE-Fraenkel (FF) springs yield a random-walk configuration at equilibrium. We compare the simulation results of the free-draining bead-FF-spring model with those for the bead-rod model in relaxation, start-up of uniaxial extensional, and simple shear flows, and find that both methods generate nearly identical results. The computational cost per time step for a free-draining BD simulation with the proposed bead-FF-spring model is about twice as high as the traditional bead-rod model with the midpoint algorithm of Liu [J. Chem. Phys. 90, 5826 (1989)]. Nevertheless, computations with the bead-FF-spring model are as efficient as those with the bead-rod model in extensional flow because the former allows larger time steps. Moreover, the Brownian contribution to the stress for the bead-FF-spring model is isotropic and therefore simplifies the calculation of the polymer stresses. In addition, hydrodynamic interaction can more easily be incorporated into the bead-FF-spring model than into the bead-rod model since the metric force arising from the non-Cartesian coordinates used in bead-rod simulations is absent from bead-spring simulations. Finally, with our newly developed bead-FF-spring model, existing computer codes for the bead-spring models can trivially be converted to ones for effective bead-rod simulations merely by replacing the usual FENE or Cohen spring law with a FENE-Fraenkel law, and this convertibility provides a very convenient way to perform multiscale BD simulations.
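The spring law itself is compact. Below is a sketch of the FENE-Fraenkel force as it is commonly written, with natural length q0 and maximum allowed extension delta_q about that length; with a small delta_q the link behaves almost like a rigid rod while remaining differentiable for the predictor-corrector step. The symbols are assumptions based on the abstract, not code from the paper.

```python
import numpy as np

def fene_fraenkel_force(q_vec, H, q0, delta_q):
    """FENE-Fraenkel spring force for a connector vector q_vec.

    The force diverges as |q| - q0 approaches +/- delta_q, so the link length stays
    within q0 +/- delta_q; a small delta_q mimics a nearly rigid rod.
    """
    q = np.linalg.norm(q_vec)
    ext = q - q0
    return H * ext / (1.0 - (ext / delta_q) ** 2) * (q_vec / q)

# Illustrative placeholder values only.
print(fene_fraenkel_force(np.array([1.02, 0.0, 0.0]), H=200.0, q0=1.0, delta_q=0.05))
```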
Reducing numerical costs for core wide nuclear reactor CFD simulations by the Coarse-Grid-CFD
NASA Astrophysics Data System (ADS)
Viellieber, Mathias; Class, Andreas G.
2013-11-01
Traditionally, complete nuclear reactor core simulations are performed with subchannel analysis codes that rely on experimental and empirical input. The Coarse-Grid-CFD (CGCFD) is intended to replace the experimental or empirical input with CFD data. The reactor core consists of repetitive flow patterns, allowing the general approach of creating a parametrized model for one segment and composing many of those to obtain the entire reactor simulation. The method is based on a detailed and well-resolved CFD simulation of one representative segment. From this simulation we extract so-called parametrized volumetric forces which close an otherwise strongly under-resolved, coarsely meshed model of a complete reactor setup. While the formulation so far accounts for forces created internally in the fluid, others, e.g., obstruction and flow deviation caused by spacers and wire wraps, still need to be accounted for if the geometric details are not represented in the coarse mesh. These are modelled with an Anisotropic Porosity Formulation (APF). This work focuses on the application of the CGCFD to a complete reactor core setup and the accomplishment of the parametrization of the volumetric forces.
Multiscale predictions of aviation-attributable PM2.5 for US ...
Aviation activities represent an important and unique mode of transportation, but also impact air quality. In this study, we aim to quantify the impact of aircraft on air quality, focusing on aviation-attributable PM2.5 at scales ranging from local (a few kilometers) to continental (spanning hundreds of kilometers) using the Community Multiscale Air Quality-Advanced Plume Treatment (CMAQ-APT) model. In our CMAQ-APT simulations, a plume scale treatment is applied to aircraft emissions from 99 major U.S. airports over the contiguous U.S. in January and July 2005. In addition to the plume scale treatment, we account for the formation of non-traditional secondary organic aerosols (NTSOA) from the oxidation of semivolatile and intermediate volatility organic compounds (S/IVOCs) emitted from aircraft, and utilize alternative emission estimates from the Aerosol Dynamics Simulation Code (ADSC). ADSC is a 1-D plume scale model that estimates engine specific PM and S/IVOC emissions at ambient conditions, accounting for relative humidity and temperature. We estimated monthly and contiguous U.S. average aviation-attributable PM2.5 to be 2.7 ng m−3 in January and 2.6 ng m−3 in July using CMAQ-APT with ADSC emissions. This represents an increase of 40% and 12% in January and July, respectively, over impacts using traditional modeling approaches (traditional emissions without APT). The maximum fine scale (subgrid scale) hourly impacts at a major airport were 133.6 μg m−
NASA Astrophysics Data System (ADS)
Augustine, Carlyn
2018-01-01
Type Ia Supernovae are thermonuclear explosions of white dwarf (WD) stars. Past studies predict the existence of "hybrid" white dwarfs, made of a C/O/Ne core with an O/Ne shell, and that these are viable progenitors for supernovae. More recent work found that the C/O core is mixed with the surrounding O/Ne while the WD cools. Inspired by this scenario, we performed simulations of thermonuclear supernovae in the single degenerate paradigm from these hybrid progenitors. Our investigation began by constructing a hybrid white dwarf model with the one-dimensional stellar evolution code MESA. The model was allowed to go through unstable interior mixing and then ignite carbon burning centrally. The MESA model was then mapped to a two-dimensional initial condition and an explosion was simulated from it with FLASH. For comparison, a similar simulation of an explosion was performed from a traditional C/O progenitor WD. Comparing the yields produced by the explosion simulations allows us to determine which model produces more 56Ni, and therefore brighter events, and how explosions from these models differ from explosions of previous models that did not include mixing during the WD cooling.
Simulations of galaxy cluster collisions with a dark plasma component
NASA Astrophysics Data System (ADS)
Spethmann, Christian; Veermäe, Hardi; Sepp, Tiit; Heikinheimo, Matti; Deshev, Boris; Hektor, Andi; Raidal, Martti
2017-12-01
Context. Dark plasma is an intriguing form of self-interacting dark matter with an effective fluid-like behavior, which is well motivated by various theoretical particle physics models. Aims: We aim to find an explanation for an isolated mass clump in the Abell 520 system, which cannot be explained by traditional models of dark matter, but has been detected in weak lensing observations. Methods: We performed N-body smoothed particle hydrodynamics simulations of galaxy cluster collisions with a two component model of dark matter, which is assumed to consist of a predominant non-interacting dark matter component and a 10-40% mass fraction of dark plasma. Results: The mass of a possible dark clump was calculated for each simulation in a parameter scan over the underlying model parameters. In two higher resolution simulations shock-waves and Mach cones were observed to form in the dark plasma halos. Conclusions: By choosing suitable simulation parameters, the observed distributions of dark matter in both the Bullet cluster (1E 0657-558) and Abell 520 (MS 0451.5+0250) can be qualitatively reproduced. Movies associated to Figs. A.1 and A.2 are available at http://www.aanda.org
Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System
NASA Technical Reports Server (NTRS)
Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.
2016-01-01
Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the hardware in the loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate the goal of the effort; the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.
DEM Based Modeling: Grid or TIN? The Answer Depends
NASA Astrophysics Data System (ADS)
Ogden, F. L.; Moreno, H. A.
2015-12-01
The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.
SPH-based numerical simulations of flow slides in municipal solid waste landfills.
Huang, Yu; Dai, Zili; Zhang, Weijie; Huang, Maosong
2013-03-01
Most municipal solid waste (MSW) is disposed of in landfills. Over the past few decades, catastrophic flow slides have occurred in MSW landfills around the world, causing substantial economic damage and occasionally resulting in human victims. It is therefore important to predict the run-out, velocity and depth of such slides in order to provide adequate mitigation and protection measures. To overcome the limitations of traditional numerical methods for modelling flow slides, a mesh-free particle method entitled smoothed particle hydrodynamics (SPH) is introduced in this paper. The Navier-Stokes equations were adopted as the governing equations and a Bingham model was adopted to analyse the relationship between material stress rates and particle motion velocity. The accuracy of the model is assessed using a series of verifications, and then flow slides that occurred in landfills located in Sarajevo and Bandung were simulated to extend its applications. The simulated results match the field data well and highlight the capability of the proposed SPH modelling method to simulate such complex phenomena as flow slides in MSW landfills.
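Bingham rheology is typically implemented in SPH through an effective viscosity. The sketch below uses a Papanastasiou-style regularization, one common choice (the paper may regularize differently); the parameter values are placeholders, not calibrated MSW properties.

```python
import numpy as np

def bingham_effective_viscosity(shear_rate, mu_plastic, tau_yield, m=1000.0):
    """Papanastasiou-regularized Bingham viscosity, usable in an SPH stress update.

    Below the yield stress the material behaves as if very viscous (little flow);
    above it, stress grows roughly linearly with shear rate.
    """
    shear_rate = np.maximum(shear_rate, 1e-9)
    return mu_plastic + tau_yield * (1.0 - np.exp(-m * shear_rate)) / shear_rate

# Deviatoric stress magnitude for a given shear rate (placeholder values).
gamma_dot = 0.5   # 1/s
tau = bingham_effective_viscosity(gamma_dot, mu_plastic=50.0, tau_yield=200.0) * gamma_dot
print(tau)
```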
Fadda, Elisa; Woods, Robert J.
2014-01-01
The characterization of the 3D structure of oligosaccharides, their conjugates and analogs is particularly challenging for traditional experimental methods. Molecular simulation methods provide a basis for interpreting sparse experimental data and for independently predicting conformational and dynamic properties of glycans. Here, we summarize and analyze the issues associated with modeling carbohydrates, with a detailed discussion of four of the most recently developed carbohydrate force fields, reviewed in terms of applicability to natural glycans, carbohydrate–protein complexes and the emerging area of glycomimetic drugs. In addition, we discuss prospects and new applications of carbohydrate modeling in drug discovery. PMID:20594934
NASA Astrophysics Data System (ADS)
Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi
2016-08-01
The simulation performance over complex building clusters of a wind simulation model (Wind Information Field Fast Analysis model, WIFFA) in a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model can reproduce the vortexes triggered by urban buildings well, and the flow patterns in urban street canyons and building clusters can also be represented. Due to the complex shapes of buildings and their distributions, the simulation deviations/discrepancies from the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields of a complex neighborhood scale urban building canopy (~ 1 km ×1 km) in less than 3 min when run on a personal computer.
ERIC Educational Resources Information Center
Wang, Hung-Yuan; Duh, Henry Been-Lirn; Li, Nai; Lin, Tzung-Jin; Tsai, Chin-Chung
2014-01-01
The purpose of this study is to investigate and compare students' collaborative inquiry learning behaviors and their behavior patterns in an augmented reality (AR) simulation system and a traditional 2D simulation system. Their inquiry and discussion processes were analyzed by content analysis and lag sequential analysis (LSA). Forty…
Simulation of an array-based neural net model
NASA Technical Reports Server (NTRS)
Barnden, John A.
1987-01-01
Research in cognitive science suggests that much of cognition involves the rapid manipulation of complex data structures. However, it is very unclear how this could be realized in neural networks or connectionist systems. A core question is: how could the interconnectivity of items in an abstract-level data structure be neurally encoded? The answer appeals mainly to positional relationships between activity patterns within neural arrays, rather than directly to neural connections in the traditional way. The new method was initially devised to account for abstract symbolic data structures, but it also supports cognitively useful spatial analogue, image-like representations. As the neural model is based on massive, uniform, parallel computations over 2D arrays, the massively parallel processor is a convenient tool for simulation work, although there are complications in using the machine to the fullest advantage. An MPP Pascal simulation program for a small pilot version of the model is running.
PowderSim: Lagrangian Discrete and Mesh-Free Continuum Simulation Code for Cohesive Soils
NASA Technical Reports Server (NTRS)
Johnson, Scott; Walton, Otis; Settgast, Randolph
2013-01-01
PowderSim is a calculation tool that combines a discrete-element method (DEM) module, including calibrated interparticle-interaction relationships, with a mesh-free, continuum, SPH (smoothed-particle hydrodynamics) based module that utilizes enhanced, calibrated, constitutive models capable of mimicking both large deformations and the flow behavior of regolith simulants and lunar regolith under conditions anticipated during in situ resource utilization (ISRU) operations. The major innovation introduced in PowderSim is to use a mesh-free method (SPH-based) with a calibrated and slightly modified critical-state soil mechanics constitutive model to extend the ability of the simulation tool to also address full-scale engineering systems in the continuum sense. The PowderSim software maintains the ability to address particle-scale problems, like size segregation, in selected regions with a traditional DEM module, which has improved contact physics and electrostatic interaction models.
Large-eddy simulation of flow in a plane, asymmetric diffuser
NASA Technical Reports Server (NTRS)
Kaltenbach, Hans-Jakob
1993-01-01
Recent improvements in subgrid-scale modeling as well as increases in computer power make it feasible to investigate flows using large-eddy simulation (LES) which have been traditionally studied with techniques based on Reynolds averaging. However, LES has not yet been applied to many flows of immediate technical interest. Preliminary results from LES of a plane diffuser flow are described. The long term goal of this work is to investigate flow separation as well as separation control in ducts and ramp-like geometries.
Pilot Evaluation of Adaptive Control in Motion-Based Flight Simulator
NASA Technical Reports Server (NTRS)
Kaneshige, John T.; Campbell, Stefan Forrest
2009-01-01
The objective of this work is to assess the strengths, weaknesses, and robustness characteristics of several MRAC (Model-Reference Adaptive Control)-based adaptive control technologies garnering interest from the community as a whole. To facilitate this, a control study using piloted and unpiloted simulations to evaluate sensitivities and handling qualities was conducted. The adaptive control technologies under consideration were ALR (Adaptive Loop Recovery), BLS (Bounded Linear Stability), Hybrid Adaptive Control, L1, OCM (Optimal Control Modification), PMRAC (Predictor-based MRAC), and traditional MRAC.
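For orientation, here is a minimal sketch of the "traditional MRAC" baseline in the list above, for a scalar plant dx/dt = a*x + b*u with unknown a and b (sign of b assumed positive): the controller u = theta_x*x + theta_r*r is adapted with the standard Lyapunov-based laws so the closed loop tracks a first-order reference model. This is a textbook form, not the study's implementation.

```python
def mrac_step(x, x_m, r, theta_x, theta_r, a_m, b_m, gamma, dt):
    """One Euler step of a minimal direct MRAC loop for a scalar plant dx = a*x + b*u.

    Reference model: dx_m = a_m*x_m + b_m*r. Adaptation assumes the sign of b is
    known and positive; gamma is the adaptation gain.
    """
    u = theta_x * x + theta_r * r          # adaptive control law
    e = x - x_m                            # model-following error
    theta_x -= gamma * e * x * dt          # Lyapunov-based parameter updates
    theta_r -= gamma * e * r * dt
    x_m += (a_m * x_m + b_m * r) * dt      # propagate the reference model
    return u, x_m, theta_x, theta_r
```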
The pressure-dilatation correlation in compressible flows
NASA Technical Reports Server (NTRS)
Sarkar, S.
1992-01-01
Simulations of simple compressible flows have been performed to enable the direct estimation of the pressure-dilatation correlation. The generally accepted belief that this correlation may be important in high-speed flows has been verified by the simulations. The pressure-dilatation correlation is theoretically investigated by considering the equation for fluctuating pressure in an arbitrary compressible flow. This leads to the isolation of a component of the pressure-dilatation that exhibits temporal oscillations on a fast time scale. Direct numerical simulations of homogeneous shear turbulence and isotropic turbulence show that this fast component has a negligible contribution to the evolution of turbulent kinetic energy. Then, an analysis for the case of homogeneous turbulence is performed to obtain a formal solution for the nonoscillatory pressure-dilatation. Simplifications lead to a model that algebraically relates the pressure-dilatation to quantities traditionally obtained in incompressible turbulence closures. The model is validated by direct comparison with the simulations.
NASA Astrophysics Data System (ADS)
Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson
2017-03-01
Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising venues for improving climate simulations of water cycle processes.
A Nonlocal Peridynamic Plasticity Model for the Dynamic Flow and Fracture of Concrete.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogler, Tracy; Lammi, Christopher James
A nonlocal, ordinary peridynamic constitutive model is formulated to numerically simulate the pressure-dependent flow and fracture of heterogeneous, quasi-brittle materials, such as concrete. Classical mechanics and traditional computational modeling methods do not accurately model the distributed fracture observed within this family of materials. The peridynamic horizon, or range of influence, provides a characteristic length to the continuum and limits localization of fracture. Scaling laws are derived to relate the parameters of the peridynamic constitutive model to the parameters of the classical Drucker-Prager plasticity model. Thermodynamic analysis of associated and non-associated plastic flow is performed. An implicit integration algorithm is formulated to calculate the accumulated plastic bond extension and force state. The governing equations are linearized and the simulation of the quasi-static compression of a cylinder is compared to the classical theory. A dissipation-based peridynamic bond failure criterion is implemented to model fracture, and the splitting of a concrete cylinder is numerically simulated. Finally, calculation of the impact and spallation of a concrete structure is performed to assess the suitability of the material and failure models for simulating concrete during dynamic loadings. The peridynamic model is found to accurately simulate the inelastic deformation and fracture behavior of concrete during compression, splitting, and dynamically induced spall. The work expands the types of materials that can be modeled using peridynamics. A multi-scale methodology for simulating concrete to be used in conjunction with the plasticity model is presented. The work was funded by LDRD 158806.
Discontinuous Galerkin Methods for Turbulence Simulation
NASA Technical Reports Server (NTRS)
Collis, S. Scott
2002-01-01
A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.
Conservative mixing, competitive mixing and their applications
NASA Astrophysics Data System (ADS)
Klimenko, A. Y.
2010-12-01
In many of the models applied to simulations of turbulent transport and turbulent combustion, the mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. Stochastic particles with properties and mixing can be used not only for simulating turbulent combustion, but also for modeling a large spectrum of physical phenomena. Traditional mixing, which is commonly used in the modeling of turbulent reacting flows, is conservative: the total amount of scalar is (or should be) preserved during a mixing event. It is worthwhile, however, to consider a more general mixing that does not possess these conservative properties; hence, our consideration lies beyond traditional mixing. In non-conservative mixing, the particle post-mixing average becomes biased towards one of the particles participating in mixing. The extreme form of non-conservative mixing can be called competitive mixing or competition: after a mixing event, the loser particle simply receives the properties of the winner particle. Particles with non-conservative mixing can be used to emulate various phenomena involving competition. In particular, we investigate cyclic behavior that can be attributed to complex competing systems. We show that the localness and intransitivity of competitive mixing are linked to the cyclic behavior.
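The distinction between the two mixing rules is easy to state in code. The sketch below contrasts a conservative pairwise mixing step (the pair mean is preserved) with the extreme competitive rule (the loser adopts the winner's value); the interpolation parameter and winner rule are illustrative choices, not the paper's model.

```python
def conservative_mix(phi_p, phi_q, alpha=0.5):
    """Traditional mixing: both particles move toward their mean, so the pair's
    total scalar is preserved."""
    mean = 0.5 * (phi_p + phi_q)
    return phi_p + alpha * (mean - phi_p), phi_q + alpha * (mean - phi_q)

def competitive_mix(phi_p, phi_q, stronger_wins=max):
    """Extreme non-conservative mixing: the loser simply adopts the winner's
    property, biasing the post-mixing average."""
    winner = stronger_wins(phi_p, phi_q)
    return winner, winner

print(conservative_mix(0.2, 0.8))   # -> (0.35, 0.65), sum unchanged
print(competitive_mix(0.2, 0.8))    # -> (0.8, 0.8), sum increases
```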
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob
2013-01-01
This paper covers the development of a model-based engine control (MBEC) methodology featuring a self tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC over a wide range of operating points. The on-board model is a piece-wise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations using the MBEC to provide a stall margin limit for the controller protection logic are presented that could provide benefits over a simple acceleration schedule that is currently used in traditional engine control architectures.
Hysteresis Modeling of Magnetic Shape Memory Alloy Actuator Based on Krasnosel'skii-Pokrovskii Model
Zhou, Miaolei; Wang, Shoubin; Gao, Wei
2013-01-01
As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator manufacturing applications. Compared with traditional actuators, the MSMA actuator has the advantages of fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed in this paper. In order to demonstrate the validity of the proposed modeling approach, simulation experiments are performed; simulations with the improved gradient correction algorithm and the variable step-size recursive least squares estimation algorithm are studied, respectively. Simulation results of both identification algorithms demonstrate that the proposed modeling approach can establish an effective and accurate hysteresis model for the MSMA actuator, and it provides a foundation for improving the control precision of the MSMA actuator. PMID:23737730
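As an illustration of the identification step, a standard recursive least squares update for the KP operator weights is sketched below (plain RLS with a forgetting factor; the paper's variable step-size variant and gradient correction differ in detail). The regressor is assumed to collect the outputs of the individual KP operators.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares step for identifying hysteresis-model weights.

    theta : current weight vector (e.g., weights of the KP operators)
    P     : covariance matrix
    phi   : regressor vector (outputs of the individual KP operators)
    y     : measured actuator output
    lam   : forgetting factor (1.0 gives ordinary RLS)
    """
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)            # gain vector
    theta = theta + (K * (y - float(phi.T @ theta))).ravel()
    P = (P - K @ phi.T @ P) / lam                    # covariance update
    return theta, P
```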
Validating the simulation of large-scale parallel applications using statistical characteristics
Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...
2016-03-01
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
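The contrast between coarse and fine-grained validation can be illustrated with a toy sketch: percent error on total execution time versus a distributional comparison (here a two-sample Kolmogorov-Smirnov statistic from scipy) on per-event durations taken from execution traces. The metric choice is illustrative and is not the toolset described in the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

def percent_error(simulated_total, measured_total):
    return 100.0 * abs(simulated_total - measured_total) / measured_total

def trace_distance(sim_events, real_events):
    """Fine-grained comparison of per-event durations from execution traces."""
    return ks_2samp(sim_events, real_events).statistic

rng = np.random.default_rng(1)
real = rng.gamma(shape=2.0, scale=1.0, size=10_000)   # real per-event times
sim  = rng.gamma(shape=4.0, scale=0.5, size=10_000)   # same mean, wrong shape

print(percent_error(sim.sum(), real.sum()))   # small: totals nearly agree
print(trace_distance(sim, real))              # large: distributions differ
```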
Evaluating synoptic systems in the CMIP5 climate models over the Australian region
NASA Astrophysics Data System (ADS)
Gibson, Peter B.; Uotila, Petteri; Perkins-Kirkpatrick, Sarah E.; Alexander, Lisa V.; Pitman, Andrew J.
2016-10-01
Climate models are our principal tool for generating the projections used to inform climate change policy. Our confidence in projections depends, in part, on how realistically they simulate present day climate and associated variability over a range of time scales. Traditionally, climate models are less commonly assessed at time scales relevant to daily weather systems. Here we explore the utility of a self-organizing maps (SOMs) procedure for evaluating the frequency, persistence and transitions of daily synoptic systems in the Australian region simulated by state-of-the-art global climate models. In terms of skill in simulating the climatological frequency of synoptic systems, large spread was observed between models. A positive association between all metrics was found, implying that relative skill in simulating the persistence and transitions of systems is related to skill in simulating the climatological frequency. Considering all models and metrics collectively, model performance was found to be related to model horizontal resolution but unrelated to vertical resolution or representation of the stratosphere. In terms of the SOM procedure, the timespan over which evaluation was performed had some influence on model performance skill measures, as did the number of circulation types examined. These findings have implications for selecting models most useful for future projections over the Australian region, particularly for projections related to synoptic scale processes and phenomena. More broadly, this study has demonstrated the utility of the SOMs procedure in providing a process-based evaluation of climate models.
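A minimal self-organizing map training loop in the spirit of the procedure described above, assuming daily circulation fields have been flattened into vectors; the grid size, learning rate, and neighborhood schedule are illustrative rather than the study's settings.

```python
import numpy as np

def train_som(data, grid=(4, 3), n_iter=5000, lr0=0.5, sigma0=1.5, seed=0):
    """Train a small SOM; returns node weights of shape (rows, cols, n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.standard_normal((rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), -1)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # Best-matching unit
        bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), (rows, cols))
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # neighborhood function
        w += lr * h * (x - w)
    return w

def classify(data, w):
    """Assign each daily field to its synoptic type (SOM node index)."""
    flat = w.reshape(-1, w.shape[-1])
    return np.argmin(((data[:, None, :] - flat[None]) ** 2).sum(-1), axis=1)

# Toy usage: 1000 "daily circulation anomaly fields" of 50 grid points each.
fields = np.random.default_rng(2).standard_normal((1000, 50))
nodes = classify(fields, train_som(fields))
freq = np.bincount(nodes, minlength=12) / len(nodes)   # climatological frequency
```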
NASA Astrophysics Data System (ADS)
Sanderson, B. M.
2017-12-01
The CMIP ensembles represent the most comprehensive source of information available to decision-makers for climate adaptation, yet it is clear that there are fundamental limitations in our ability to treat the ensemble as an unbiased sample of possible future climate trajectories. There is considerable evidence that models are not independent, and increasing complexity and resolution combined with computational constraints prevent a thorough exploration of parametric uncertainty or internal variability. Although more data than ever is available for calibration, the optimization of each model is influenced by institutional priorities, historical precedent and available resources. The resulting ensemble thus represents a miscellany of climate simulators which defy traditional statistical interpretation. Models are in some cases interdependent, but are sufficiently complex that the degree of interdependency is conditional on the application. Configurations have been updated using available observations to some degree, but not in a consistent or easily identifiable fashion. This means that the ensemble cannot be viewed as a true posterior distribution updated by available data, but nor can observational data alone be used to assess individual model likelihood. We assess recent literature for combining projections from an imperfect ensemble of climate simulators. Beginning with our published methodology for addressing model interdependency and skill in the weighting scheme for the 4th US National Climate Assessment, we consider strategies for incorporating process-based constraints on future response, perturbed parameter experiments and multi-model output into an integrated framework. We focus on a number of guiding questions: Is the traditional framework of confidence in projections inferred from model agreement leading to biased or misleading conclusions? Can the benefits of upweighting skillful models be reconciled with the increased risk of truth lying outside the weighted ensemble distribution? If CMIP is an ensemble of partially informed best-guesses, can we infer anything about the parent distribution of all possible models of the climate system (and if not, are we implicitly under-representing the risk of a climate catastrophe outside of the envelope of CMIP simulations)?
Miniaturization of Micro-Solder Bumps and Effect of IMC on Stress Distribution
NASA Astrophysics Data System (ADS)
Choudhury, Soud Farhan; Ladani, Leila
2016-07-01
As the joints become smaller in more advanced packages and devices, the intermetallic compound (IMC) volume ratio increases, which significantly impacts the overall mechanical behavior of joints. The existence of only a few grains of Sn (tin) and IMC materials results in anisotropic elastic and plastic behavior which is not detectable using conventional finite element (FE) simulation with average properties for polycrystalline material. In this study, crystal plasticity finite element (CPFE) simulation is used to model the whole joint including copper, Sn solder and Cu6Sn5 IMC material. Experimental lap-shear test results for solder joints from the literature were used to validate the models. A comparative analysis between traditional FE, CPFE and experiments was conducted. The CPFE model was able to correlate with the experiments more closely than traditional FE analysis because of its ability to capture micro-mechanical anisotropic behavior. Further analysis was conducted to evaluate the effect of IMC thickness on stress distribution in micro-bumps using a systematic numerical experiment with IMC thickness ranging from 0% to 80%. The analysis was conducted on micro-bumps with single crystal Sn and bicrystal Sn. The overall stress distribution and shear deformation change as the IMC thickness increases. The model with higher IMC thickness shows a stiffer shear response, and provides a higher shear yield strength.
Search for function coefficient distribution in traditional Chinese medicine network
NASA Astrophysics Data System (ADS)
He, Yue; Zhang, Peipei; Sun, Anzheng; Su, Beibei; He, Da-Ren
2004-03-01
We suggest a model for simulating the development of the traditional Chinese medicine system. Suppose there are a certain number of Chinese medicines. Each of them is randomly assigned a "function coefficient" with a value between 0 and 1. The larger it is, the stronger the medicine's function for solving one health problem and serving as an "emperor" in a prescription formulation; the smaller it is, the stronger its function for harmonizing and/or accessorizing a prescription formulation. At every time step a new medicine is discovered. With a probability P(m), determined according to our statistical investigation results, it can produce a new prescription formulation with m-1 other medicines. We assume that the probability for choosing the function coefficients of these m medicines follows a distribution function that is everywhere smooth. A program has been set up to search for this function form so that the simulation results show the best agreement with our statistical data. We believe the resulting function form will be helpful for understanding the real development of the traditional Chinese medicine system.
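A hedged sketch of the growth process as described: at each step a new medicine with a random function coefficient appears and forms a prescription with m-1 existing medicines chosen according to a trial weighting over their function coefficients. Both the size distribution P(m) and the weighting function below are simple placeholders standing in for the empirically fitted forms the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(3)

def p_size(m_max=8):
    """Placeholder distribution P(m) for prescription size; the paper fits this to data."""
    p = 1.0 / np.arange(2, m_max + 1)
    return p / p.sum()

def choice_weight(c, alpha=2.0):
    """Trial smooth weighting over function coefficients (placeholder form)."""
    return np.exp(-alpha * c)          # favors 'harmonizing' medicines as partners

coeffs = [rng.uniform()]               # function coefficient of the first medicine
prescriptions = []
for _ in range(2000):                  # each step: one newly discovered medicine
    coeffs.append(rng.uniform())
    m = rng.choice(np.arange(2, 9), p=p_size())
    if len(coeffs) >= m:
        w = choice_weight(np.array(coeffs[:-1]))
        partners = rng.choice(len(coeffs) - 1, size=m - 1, replace=False, p=w / w.sum())
        prescriptions.append([len(coeffs) - 1, *partners])

# Degree of each medicine = number of prescriptions it participates in.
degree = np.bincount(np.concatenate(prescriptions), minlength=len(coeffs))
print(degree.max(), degree.mean())
```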
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-05-01
Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to comprehensive objective evaluation metrics. Different from traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
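A compact sketch of the "three-step" idea under stated assumptions: a one-at-a-time sensitivity screen, a coarse search for a good starting point for the sensitive parameters, and then scipy's downhill simplex (Nelder-Mead). The objective function below is a cheap stand-in for the comprehensive model-evaluation metric returned by an actual GCM run.

```python
import numpy as np
from scipy.optimize import minimize

def metric(params):
    """Stand-in for the comprehensive evaluation metric returned by a GCM run."""
    x = np.asarray(params)
    return float(((x - np.array([0.3, 1.2, 0.0, 2.0])) ** 2 * [5, 3, 0.01, 4]).sum())

bounds = np.array([[0, 1], [0, 3], [-1, 1], [0, 5]], dtype=float)
x0 = bounds.mean(axis=1)

# Step 1: one-at-a-time sensitivity screening.
sens = []
for i, (lo, hi) in enumerate(bounds):
    lo_p, hi_p = x0.copy(), x0.copy()
    lo_p[i], hi_p[i] = lo, hi
    sens.append(abs(metric(hi_p) - metric(lo_p)))
sensitive = [i for i in range(len(bounds)) if sens[i] > 0.1 * max(sens)]

# Step 2: coarse search over the sensitive parameters for a good starting point.
best = x0.copy()
for i in sensitive:
    trials = [best.copy() for _ in range(5)]
    for t, v in zip(trials, np.linspace(*bounds[i], 5)):
        t[i] = v
    best = min(trials, key=metric)

# Step 3: downhill simplex (Nelder-Mead) from the chosen starting point.
result = minimize(metric, best, method="Nelder-Mead")
print(result.x, result.fun)
```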
Empirical tools for simulating salinity in the estuaries in Everglades National Park, Florida
NASA Astrophysics Data System (ADS)
Marshall, F. E.; Smith, D. T.; Nickerson, D. M.
2011-12-01
Salinity in a shallow estuary is affected by upland freshwater inputs (surface runoff, stream/canal flows, groundwater), atmospheric processes (precipitation, evaporation), marine connectivity, and wind patterns. In Everglades National Park (ENP) in South Florida, the unique Everglades ecosystem exists as an interconnected system of fresh, brackish, and salt water marshes, mangroves, and open water. For this effort a coastal aquifer conceptual model of the Everglades hydrologic system was used with traditional correlation and regression hydrologic techniques to create a series of multiple linear regression (MLR) salinity models from observed hydrologic, marine, and weather data. The 37 ENP MLR salinity models cover most of the estuarine areas of ENP and produce daily salinity simulations that are capable of estimating 65-80% of the daily variability in salinity depending upon the model. The Root Mean Squared Error is typically about 2-4 salinity units, and there is little bias in the predictions. However, the absolute error of a model prediction in the nearshore embayments and the mangrove zone of Florida Bay may be relatively large for a particular daily simulation during the seasonal transitions. Comparisons show that the models group regionally by similar independent variables and salinity regimes. The MLR salinity models have approximately the same expected range of simulation accuracy and error as higher spatial resolution salinity models.
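A minimal version of fitting one such MLR salinity model with ordinary least squares and reporting RMSE on held-out days; the predictor names are illustrative stand-ins for the hydrologic, marine, and weather inputs used in the study.

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares salinity model: y ~ [1, X] @ beta."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Toy daily inputs: stage (water level), flow, rainfall, coastal sea level.
rng = np.random.default_rng(4)
X = rng.normal(size=(3000, 4))
salinity = 20 - 3 * X[:, 0] - 2 * X[:, 1] - 1.5 * X[:, 2] + 2 * X[:, 3] \
           + rng.normal(scale=2.5, size=3000)

beta = fit_mlr(X[:2000], salinity[:2000])
resid = salinity[2000:] - predict(beta, X[2000:])
rmse = np.sqrt((resid ** 2).mean())      # comparable in spirit to the 2-4 unit range cited
print(beta, rmse)
```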
Zhao, Yi Chen; Kennedy, Gregor; Yukawa, Kumiko; Pyman, Brian; O'Leary, Stephen
2011-03-01
A significant benefit of virtual reality (VR) simulation is the ability to provide self-direct learning for trainees. This study aims to determine whether there are any differences in performance of cadaver temporal bone dissections between novices who received traditional teaching methods and those who received unsupervised self-directed learning in a VR temporal bone simulator. Randomized blinded control trial. Royal Victorian Eye and Ear Hospital. Twenty novice trainees. After receiving an hour lecture, participants were randomized into 2 groups to receive an additional 2 hours of training via traditional teaching methods or self-directed learning using a VR simulator with automated guidance. The simulation environment presented participants with structured training tasks, which were accompanied by real-time computer-generated feedback as well as real operative videos and photos. After the training, trainees were asked to perform a cortical mastoidectomy on a cadaveric temporal bone. The dissection was videotaped and assessed by 3 otologists blinded to participants' teaching group. The overall performance scores of the simulator-based training group were significantly higher than those of the traditional training group (67% vs 29%; P < .001), with an intraclass correlation coefficient of 0.93, indicating excellent interrater reliability. Using other assessments of performance, such as injury size, the VR simulator-based training group also performed better than the traditional group. This study indicates that self-directed learning on VR simulators can be used to improve performance on cadaver dissection in novice trainees compared with traditional teaching methods alone.
NASA Astrophysics Data System (ADS)
Guler Yigitoglu, Askin
In the context of long operation of nuclear power plants (NPPs) (i.e., 60-80 years, and beyond), investigation of the aging of passive systems, structures and components (SSCs) is important to assess safety margins and to decide on reactor life extension as indicated within the U.S. Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. In the traditional probabilistic risk assessment (PRA) methodology, evaluating the potential significance of aging of passive SSCs on plant risk is challenging. Although passive SSC failure rates can be added as initiating event frequencies or basic event failure rates in the traditional event-tree/fault-tree methodology, these failure rates are generally based on generic plant failure data, which means that the true state of a specific plant is not reflected in a realistic manner with respect to aging effects. Dynamic PRA methodologies have gained attention recently due to their capability to account for the plant state and thus address the difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models (and also in the modeling of digital instrumentation and control systems). Physics-based models can capture the impact of complex aging processes (e.g., fatigue, stress corrosion cracking, flow-accelerated corrosion, etc.) on SSCs and can be utilized to estimate passive SSC failure rates using realistic NPP data from reactor simulation, as well as considering effects of surveillance and maintenance activities. The objectives of this dissertation are twofold: the development of a methodology for the incorporation of aging modeling of passive SSCs into a reactor simulation environment to provide a framework for evaluation of their risk contribution in both the dynamic and traditional PRA; and the demonstration of the methodology through its application to pressurizer surge line pipe weld and steam generator tubes in commercial nuclear power plants. In the proposed methodology, a multi-state physics-based model is selected to represent the aging process. The model is modified via a sojourn-time approach to reflect the operational and maintenance history dependence of the transition rates. Thermal-hydraulic parameters of the model are calculated via the reactor simulation environment and uncertainties associated with both parameters and the models are assessed via a two-loop Monte Carlo approach (Latin hypercube sampling) to propagate input probability distributions through the physical model. The effort documented in this thesis towards this overall objective consists of: i) defining a process for selecting critical passive components and related aging mechanisms, ii) aging model selection, iii) calculating the probability that aging would cause the component to fail, iv) uncertainty/sensitivity analyses, v) procedure development for modifying an existing PRA to accommodate consideration of passive component failures, and vi) including the calculated failure probability in the modified PRA. The proposed methodology is applied to pressurizer surge line pipe weld aging and steam generator tube degradation in pressurized water reactors.
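A hedged sketch of the two-loop uncertainty propagation described above: an outer Latin hypercube loop samples epistemic model parameters, and an inner loop propagates aleatory variability through a stand-in degradation model to a failure probability. The degradation law, parameter ranges, and thresholds are illustrative, not the surge-line or steam-generator models used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(5)

def latin_hypercube(n_samples, n_dims):
    """Basic Latin hypercube sample on the unit hypercube."""
    u = (rng.uniform(size=(n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        u[:, j] = rng.permutation(u[:, j])
    return u

def failure_probability(growth_rate, threshold, n_inner=2000, years=40):
    """Inner loop: aleatory scatter in yearly crack growth; fails if depth > threshold."""
    growth = rng.lognormal(mean=np.log(growth_rate), sigma=0.3, size=(n_inner, years))
    depth = growth.sum(axis=1)
    return (depth > threshold).mean()

# Outer loop: epistemic uncertainty in mean growth rate and failure threshold.
lhs = latin_hypercube(200, 2)
growth_rates = 0.05 + 0.10 * lhs[:, 0]      # mm/yr, assumed range
thresholds   = 4.0 + 2.0 * lhs[:, 1]        # mm, assumed range

pfail = np.array([failure_probability(g, t) for g, t in zip(growth_rates, thresholds)])
print(pfail.mean(), np.percentile(pfail, [5, 95]))
```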
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2009-01-01
Very large eddy simulation (VLES) of the nonreacting turbulent flow in a single-element lean direct injection (LDI) combustor has been successfully performed via the approach known as the partially resolved numerical simulation (PRNS/VLES) using a nonlinear subscale model. The grid is the same as the one used in a previous RANS simulation, which was considered as too coarse for a traditional LES simulation. In this study, we first carry out a steady RANS simulation to provide the initial flow field for the subsequent PRNS/VLES simulation. We have also carried out an unsteady RANS (URANS) simulation for the purpose of comparing its results with that of the PRNS/VLES simulation. In addition, these calculated results are compared with the experimental data. The present effort has demonstrated that the PRNS/VLES approach, while using a RANS type of grid, is able to reveal the dynamically important, unsteady large-scale turbulent structures occurring in the flow field of a single-element LDI combustor. The interactions of these coherent structures play a critical role in the dispersion of the fuel, hence, the mixing between the fuel and the oxidizer in a combustor.
Statistical physics approaches to Alzheimer's disease
NASA Astrophysics Data System (ADS)
Peng, Shouyong
Alzheimer's disease (AD) is the most common cause of late life dementia. In the brain of an AD patient, neurons are lost and spatial neuronal organizations (microcolumns) are disrupted. An adequate quantitative analysis of microcolumns requires that we automate the neuron recognition stage in the analysis of microscopic images of human brain tissue. We propose a recognition method based on statistical physics. Specifically, Monte Carlo simulations of an inhomogeneous Potts model are applied for image segmentation. Unlike most traditional methods, this method improves the recognition of overlapped neurons, and thus improves the overall recognition percentage. Although the exact causes of AD are unknown, as experimental advances have revealed the molecular origin of AD, they have continued to support the amyloid cascade hypothesis, which states that early stages of aggregation of amyloid beta (Abeta) peptides lead to neurodegeneration and death. X-ray diffraction studies reveal the common cross-beta structural features of the final stable aggregates-amyloid fibrils. Solid-state NMR studies also reveal structural features for some well-ordered fibrils. But currently there is no feasible experimental technique that can reveal the exact structure or the precise dynamics of assembly and thus help us understand the aggregation mechanism. Computer simulation offers a way to understand the aggregation mechanism on the molecular level. Because traditional all-atom continuous molecular dynamics simulations are not fast enough to investigate the whole aggregation process, we apply coarse-grained models and discrete molecular dynamics methods to increase the simulation speed. First we use a coarse-grained two-bead (two beads per amino acid) model. Simulations show that peptides can aggregate into multilayer beta-sheet structures, which agree with X-ray diffraction experiments. To better represent the secondary structure transition happening during aggregation, we refine the model to four beads per amino acid. Typical essential interactions, such as backbone hydrogen bond, hydrophobic and electrostatic interactions, are incorporated into our model. We study the aggregation of Abeta16-22, a peptide that can aggregate into a well-ordered fibrillar structure in experiments. Our results show that randomly-oriented monomers can aggregate into fibrillar subunits, which agree not only with X-ray diffraction experiments but also with solid-state NMR studies. Our findings demonstrate that coarse-grained models and discrete molecular dynamics simulations can help researchers understand the aggregation mechanism of amyloid peptides.
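A small Metropolis sketch of the inhomogeneous Potts idea used above for image segmentation: spin labels mark candidate segments, and the coupling between neighboring pixels is stronger when their intensities are similar. The coupling form, temperature, and sweep count are illustrative, not the parameters of the neuron-recognition method in the thesis.

```python
import numpy as np

rng = np.random.default_rng(6)

def potts_segment(image, q=3, beta=2.0, sweeps=30):
    """Metropolis updates of an inhomogeneous Potts model on a 2D image."""
    h, w = image.shape
    labels = rng.integers(q, size=(h, w))

    def coupling(a, b):
        # Stronger ferromagnetic coupling for similar pixel intensities.
        return np.exp(-(image[a] - image[b]) ** 2)

    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                new = rng.integers(q)
                dE = 0.0
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= ni < h and 0 <= nj < w:
                        J = coupling((i, j), (ni, nj))
                        dE += J * (float(labels[ni, nj] == labels[i, j])
                                   - float(labels[ni, nj] == new))
                if dE <= 0 or rng.uniform() < np.exp(-beta * dE):
                    labels[i, j] = new
    return labels

toy = np.zeros((32, 32)); toy[:, 16:] = 1.0          # two-region test image
toy += 0.2 * rng.standard_normal(toy.shape)
seg = potts_segment(toy)
```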
Anderson, Gillian H; Jenkins, Paul J; McDonald, David A; Van Der Meer, Robert; Morton, Alec; Nugent, Margaret; Rymaszewski, Lech A
2017-09-07
Healthcare faces the continual challenge of improving outcome while aiming to reduce cost. The aim of this study was to determine the micro cost differences of the Glasgow non-operative trauma virtual pathway in comparison to a traditional pathway. Discrete event simulation was used to model and analyse cost and resource utilisation with an activity-based costing approach. Data for a full comparison before the process change was unavailable so we used a modelling approach, comparing a virtual fracture clinic (VFC) with a simulated traditional fracture clinic (TFC). The orthopaedic unit VFC pathway pioneered at Glasgow Royal Infirmary has attracted significant attention and interest and is the focus of this cost study. Our study focused exclusively on patients with non-operative trauma attending emergency department or the minor injuries unit and the subsequent step in the patient pathway. Retrospective studies of patient outcomes as a result of the protocol introductions for specific injuries are presented in association with activity costs from the models. Patients are satisfied with the new pathway, the information provided and the outcome of their injuries (Evidence Level IV). There was a 65% reduction in the number of first outpatient face-to-face (f2f) attendances in orthopaedics. In the VFC pathway, the resources required per day were significantly lower for all staff groups (p≤0.001). The overall cost per patient of the VFC pathway was £22.84 (95% CI 21.74 to 23.92) per patient compared with £36.81 (95% CI 35.65 to 37.97) for the TFC pathway. Our results give a clearer picture of the cost comparison of the virtual pathway over a wholly traditional f2f clinic system. The use of simulation-based stochastic costings in healthcare economic analysis has been limited to date, but this study provides evidence for adoption of this method as a basis for its application in other healthcare settings. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network
NASA Astrophysics Data System (ADS)
Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.
A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low Earth orbit objects relies mainly on ground-based radar; owing to the limited capability of existing radar facilities, a large number of ground-based radars will need to be built in the next few years to meet current space surveillance demands. How to optimize the embattling (deployment) of a ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional method is to run detection simulations of all possible stations against cataloged data, make a comprehensive comparative analysis of the various simulation results by combinational methods, and then select an optimal result as the station layout scheme. This method is time consuming for a single simulation and computationally complex for the combinational analysis; as the number of stations increases, the complexity of the optimization problem grows exponentially and cannot be handled with the traditional method, and no better approach has been available until now. In this paper, the target detection procedure is simplified. First, the space coverage of a ground-based radar is simplified and a projection model of radar coverage at different orbit altitudes is built; then a simplified model of objects crossing the radar coverage is established according to the characteristics of space object orbital motion. After these two simplifications, the computational complexity of target detection is greatly reduced, and simulation results confirm the correctness of the simplified models. In addition, the detection areas of a ground-based radar network can be easily computed with the simplified models, and the embattling of the network can then be optimized with an artificial intelligence algorithm, which greatly reduces the computational complexity. Compared with the traditional method, the proposed method greatly improves computational efficiency.
A Stigmergy Collaboration Approach in the Open Source Software Developer Community
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Xiaohui; Pullum, Laura L; Treadwell, Jim N
2009-01-01
The communication model of some self-organized online communities is significantly different from the traditional social network based community. It is problematic to use social network analysis to analyze the collaboration structure and emergent behaviors in these communities because these communities lack peer-to-peer connections. Stigmergy theory provides an explanation of the collaboration model of these communities. In this research, we present a stigmergy approach for building an agent-based simulation to simulate the collaboration model in the open source software (OSS) developer community. We used a group of actors who collaborate on OSS projects through forums as our frame of reference and investigated how the choices actors make in contributing their work on the projects determine the global status of the whole OSS project. In our simulation, the forum posts serve as the digital pheromone and the modified Pierre-Paul Grasse pheromone model is used for computing the developer agents' behavior selection probability.
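A toy sketch of a stigmergic choice rule in the spirit described above: forum posts act as digital pheromone on projects, each developer agent chooses a project with probability governed by the pheromone level, and the pheromone evaporates and is reinforced. The exponent and evaporation rate are placeholders, not the modified Grasse model's parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def run_stigmergy(n_projects=20, n_agents=200, steps=100,
                  alpha=1.5, evaporation=0.05):
    """Agents deposit 'posts' (pheromone); choice probability follows pheromone**alpha."""
    pheromone = np.ones(n_projects)
    posts = np.zeros(n_projects, dtype=int)
    for _ in range(steps):
        p = pheromone ** alpha
        p /= p.sum()
        choices = rng.choice(n_projects, size=n_agents, p=p)
        deposited = np.bincount(choices, minlength=n_projects)
        posts += deposited
        pheromone = (1 - evaporation) * pheromone + deposited
    return posts

posts = run_stigmergy()
print(sorted(posts)[-5:])   # a few projects attract most of the activity
```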
NASA Astrophysics Data System (ADS)
Yue, Yingchao; Fan, Wenhui; Xiao, Tianyuan; Ma, Cheng
2013-07-01
High level architecture (HLA) is the open standard in the collaborative simulation field. Scholars have been paying close attention to theoretical research on and engineering applications of collaborative simulation based on HLA/RTI, extending HLA in various aspects such as functionality and efficiency. However, related study on the load balancing problem of HLA collaborative simulation is insufficient. Without load balancing, collaborative simulation under HLA/RTI may encounter performance reduction or even fatal errors. In this paper, load balancing is further divided into static and dynamic problems. A multi-objective model is established and the randomness of model parameters is taken into consideration for static load balancing, which makes the model more credible. A Monte Carlo based optimization algorithm (MCOA) is devised to achieve static load balance. For dynamic load balancing, a new type of dynamic load balancing problem is put forward with regard to variable-structured collaborative simulation under HLA/RTI. In order to minimize the influence on the running collaborative simulation, an ordinal optimization based algorithm (OOA) is devised to shorten the optimization time. Furthermore, the two algorithms are applied in simulation experiments of different scenarios, which demonstrate their effectiveness and efficiency. An engineering experiment on collaborative simulation under HLA/RTI of high-speed electric multiple units (EMU) is also conducted to identify the credibility of the proposed models and the supportive utility of MCOA and OOA for practical engineering systems. The proposed research ensures compatibility with traditional HLA, enhances the ability to assign simulation loads onto computing units both statically and dynamically, improves the performance of the collaborative simulation system, and makes full use of hardware resources.
A Teamwork-Oriented Air Traffic Control Simulator
2006-06-01
In the software development methodology of this work, this chapter is viewed as the acquisition phase of this model. ... traditional operations such as scaling the airport and personalizing the working environment. 4. Pilot Specification ...
Towards New Multiplatform Hybrid Online Laboratory Models
ERIC Educational Resources Information Center
Rodriguez-Gil, Luis; García-Zubia, Javier; Orduña, Pablo; López-de-Ipiña, Diego
2017-01-01
Online laboratories have traditionally been split between virtual labs, with simulated components; and remote labs, with real components. The former tend to provide less realism but to be easily scalable and less expensive to maintain, while the latter are fully real but tend to require a higher maintenance effort and be more error-prone. This…
Simulation of metal additive manufacturing microstructures using kinetic Monte Carlo
Rodgers, Theron M.; Madison, Jonathan D.; Tikare, Veena
2017-04-19
Additive manufacturing (AM) is of tremendous interest given its ability to realize complex, non-traditional geometries in engineered structural materials. But, microstructures generated from AM processes can be equally, if not more, complex than their conventionally processed counterparts. While some microstructural features observed in AM may also occur in more traditional solidification processes, the introduction of spatially and temporally mobile heat sources can result in significant microstructural heterogeneity. While grain size and shape in metal AM structures are understood to be highly dependent on both local and global temperature profiles, the exact form of this relation is not well understood. We implement an idealized molten zone and temperature-dependent grain boundary mobility in a kinetic Monte Carlo model to predict three-dimensional grain structure in additively manufactured metals. In order to demonstrate the flexibility of the model, synthetic microstructures are generated under conditions mimicking relatively diverse experimental results present in the literature. Simulated microstructures are then qualitatively and quantitatively compared to their experimental complements and are shown to be in good agreement.
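A very reduced caricature of the ingredients named above: a Potts-style grid of grain labels, an idealized molten zone that re-randomizes labels as it passes, and boundary flip attempts whose acceptance decays with distance from the heat source. The geometry, mobility law, and scan speed are illustrative only and are not the published kinetic Monte Carlo model.

```python
import numpy as np

rng = np.random.default_rng(8)

def am_grain_growth(width=200, height=60, melt_radius=6, steps_per_col=300):
    grains = rng.integers(10_000, size=(height, width))       # initial fine grains
    for x_src in range(width):                                 # heat source scans in +x
        # Molten zone: cells near the source get new random orientations.
        for y in range(height):
            for x in range(max(0, x_src - melt_radius), min(width, x_src + melt_radius)):
                if (x - x_src) ** 2 + (y - height // 2) ** 2 < melt_radius ** 2:
                    grains[y, x] = rng.integers(10_000)
        # Monte Carlo growth attempts with mobility decaying away from the source.
        for _ in range(steps_per_col):
            y, x = rng.integers(height), rng.integers(width)
            mobility = np.exp(-abs(x - x_src) / 10.0)          # hotter near the source
            if rng.uniform() < mobility:
                ny, nx = y + rng.integers(-1, 2), x + rng.integers(-1, 2)
                if 0 <= ny < height and 0 <= nx < width:
                    grains[y, x] = grains[ny, nx]              # adopt neighbor orientation
    return grains

structure = am_grain_growth()
print(len(np.unique(structure)))   # fewer, larger grains after the pass
```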
NASA Astrophysics Data System (ADS)
Tran, H. N. Q.; Tran, T. T.; Mansfield, M. L.; Lyman, S. N.
2014-12-01
Contributions of emissions from oil and gas activities to elevated ozone concentrations in the Uintah Basin, Utah were evaluated using the CMAQ Integrated Source Apportionment Method (CMAQ-ISAM) technique, and were compared with the results of traditional budgeting methods. Unlike the traditional budgeting method, which compares simulations with and without emissions of the source(s) in question to quantify its impacts, the CMAQ-ISAM technique assigns tags to emissions of each source and tracks their evolution through physical and chemical processes to quantify the final ozone product yield from the source. Model simulations were performed for two episodes in winter 2013 of low and high ozone to provide better understanding of source contributions under different weather conditions. Due to the highly nonlinear ozone chemistry, results obtained from the two methods differed significantly. The growing oil and gas industry in the Uintah Basin is the largest contributor to the elevated ozone (>75 ppb) observed in the Basin. This study therefore provides insight into the impact of the oil and gas industry on the ozone issue and helps in determining effective control strategies.
Evaluating climate models: Should we use weather or climate observations?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oglesby, Robert J; Erickson III, David J
2009-12-01
Calling the numerical models that we use for simulations of climate change 'climate models' is a bit of a misnomer. These 'general circulation models' (GCMs, AKA global climate models) and their cousins the 'regional climate models' (RCMs) are actually physically-based weather simulators. That is, these models simulate, either globally or locally, daily weather patterns in response to some change in forcing or boundary condition. These simulated weather patterns are then aggregated into climate statistics, very much as we aggregate observations into 'real climate statistics'. Traditionally, the output of GCMs has been evaluated using climate statistics, as opposed to their ability to simulate realistic daily weather observations. At the coarse global scale this may be a reasonable approach, however, as RCM's downscale to increasingly higher resolutions, the conjunction between weather and climate becomes more problematic. We present results from a series of present-day climate simulations using the WRF ARW for domains that cover North America, much of Latin America, and South Asia. The basic domains are at a 12 km resolution, but several inner domains at 4 km have also been simulated. These include regions of complex topography in Mexico, Colombia, Peru, and Sri Lanka, as well as a region of low topography and fairly homogeneous land surface type (the U.S. Great Plains). Model evaluations are performed using standard climate analyses (e.g., reanalyses; NCDC data) but also using time series of daily station observations. Preliminary results suggest little difference in the assessment of long-term mean quantities, but the variability on seasonal and interannual timescales is better described. Furthermore, the value-added by using daily weather observations as an evaluation tool increases with the model resolution.
The lz(p)* Person-Fit Statistic in an Unfolding Model Context.
Tendeiro, Jorge N
2017-01-01
Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded unfolding model is used. Results from a simulation study indicate that the person-fit statistic performed relatively well in detecting midpoint response style patterns and not so well in detecting extreme response style patterns.
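A generic sketch of a standardized log-likelihood person-fit statistic for polytomous items, given category probabilities from any fitted IRT model (the generalized graded unfolding model in the article); the probability matrix below is synthetic, and the correction for estimated ability in the lz(p)* variant is not included.

```python
import numpy as np

def lz_statistic(responses, probs):
    """Standardized log-likelihood person-fit statistic.

    responses : (n_items,) observed category index per item
    probs     : (n_items, n_cats) model category probabilities at the person's theta
    """
    n_items = len(responses)
    logp = np.log(probs)
    l_obs = logp[np.arange(n_items), responses].sum()
    e_l = (probs * logp).sum()
    var_l = (probs * logp ** 2).sum(axis=1) - (probs * logp).sum(axis=1) ** 2
    return (l_obs - e_l) / np.sqrt(var_l.sum())

rng = np.random.default_rng(9)
P = rng.dirichlet(np.ones(4), size=20)              # 20 items, 4 categories (synthetic)
x_typical = np.array([rng.choice(4, p=p) for p in P])
x_odd = 3 - x_typical                               # reversed pattern, e.g., a response style
print(lz_statistic(x_typical, P), lz_statistic(x_odd, P))
```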
A review of simulation platforms in surgery of the temporal bone.
Bhutta, M F
2016-10-01
Surgery of the temporal bone is a high-risk activity in an anatomically complex area. Simulation enables rehearsal of such surgery. The traditional simulation platform is the cadaveric temporal bone, but in recent years other simulation platforms have been created, including plastic and virtual reality platforms. To undertake a review of simulation platforms for temporal bone surgery, specifically assessing their educational value in terms of validity and in enabling transition to surgery. Systematic qualitative review. Search of the Pubmed, CINAHL, BEI and ERIC databases. Assessment of reported outcomes in terms of educational value. A total of 49 articles were included, covering cadaveric, animal, plastic and virtual simulation platforms. Cadaveric simulation is highly rated as an educational tool, but there may be a ceiling effect on educational outcomes after drilling 8-10 temporal bones. Animal models show significant anatomical variation from man. Plastic temporal bone models offer much potential, but at present lack sufficient anatomical or haptic validity. Similarly, virtual reality platforms lack sufficient anatomical or haptic validity, but with technological improvements they are advancing rapidly. At present, cadaveric simulation remains the best platform for training in temporal bone surgery. Technological advances enabling improved materials or modelling mean that in the future plastic or virtual platforms may become comparable to cadaveric platforms, and also offer additional functionality including patient-specific simulation from CT data. © 2015 John Wiley & Sons Ltd.
Hybrid model for simulation of plasma jet injection in tokamak
NASA Astrophysics Data System (ADS)
Galkin, Sergei A.; Bogatu, I. N.
2016-10-01
The hybrid kinetic model of plasma treats the ions as kinetic particles and the electrons as a charge-neutralizing massless fluid. The model is essentially applicable when most of the energy is concentrated in the ions rather than in the electrons, i.e. it is well suited for the high-density hyper-velocity C60 plasma jet. The hybrid model separates the slower ion time scale from the faster electron time scale, which can then be disregarded. That is why hybrid codes consistently outperform the traditional PIC codes in computational efficiency, while still resolving kinetic ion effects. We discuss a 2D hybrid model and code with an exact energy conservation numerical algorithm and present some results of its application to simulation of C60 plasma jet penetration through a tokamak-like magnetic barrier. We also examine the 3D model/code extension and its possible applications to tokamak and ionospheric plasmas. The work is supported in part by US DOE DE-SC0015776 Grant.
Consequence modeling using the fire dynamics simulator.
Ryder, Noah L; Sutula, Jason A; Schemel, Christopher F; Hamer, Andrew J; Van Brunt, Vincent
2004-11-11
The use of Computational Fluid Dynamics (CFD) and in particular Large Eddy Simulation (LES) codes to model fires provides an efficient tool for the prediction of large-scale effects that include plume characteristics, combustion product dispersion, and heat effects to adjacent objects. This paper illustrates the strengths of the Fire Dynamics Simulator (FDS), an LES code developed by the National Institute of Standards and Technology (NIST), through several small and large-scale validation runs and process safety applications. The paper presents two fire experiments--a small room fire and a large (15 m diameter) pool fire. The model results are compared to experimental data and demonstrate good agreement between the models and data. The validation work is then extended to demonstrate applicability to process safety concerns by detailing a model of a tank farm fire and a model of the ignition of a gaseous fuel in a confined space. In this simulation, a room was filled with propane, given time to disperse, and was then ignited. The model yields accurate results of the dispersion of the gas throughout the space. This information can be used to determine flammability and explosive limits in a space and can be used in subsequent models to determine the pressure and temperature waves that would result from an explosion. The model dispersion results were compared to an experiment performed by Factory Mutual. Using the above examples, this paper will demonstrate that FDS is ideally suited to build realistic models of process geometries in which large scale explosion and fire failure risks can be evaluated with several distinct advantages over more traditional CFD codes. Namely, transient solutions to fire and explosion growth can be produced with less sophisticated hardware (lower cost) than needed for traditional CFD codes (PC type computer versus UNIX workstation) and can be solved for longer time histories (on the order of hundreds of seconds of computed time) with minimal computer resources and length of model run. Additionally, results that are produced can be analyzed, viewed, and tabulated during and following a model run within a PC environment. There are some tradeoffs, however, as rapid computations in PCs may require a sacrifice in the grid resolution or in the sub-grid modeling, depending on the size of the geometry modeled.
NASA Astrophysics Data System (ADS)
Nan, Miao; Junfeng, Li; Tianshu, Wang
2017-01-01
Subjected to external lateral excitations, large-amplitude sloshing may take place in propellant tanks, especially for spacecraft in low-gravity conditions, such as landers in the process of hover and obstacle avoidance during lunar soft landing. Due to lateral force of the order of gravity in magnitude, the amplitude of liquid sloshing becomes too big for the traditional equivalent model to be accurate. Therefore, a new equivalent mechanical model, denominated the "composite model", that can address large-amplitude lateral sloshing in partially filled spherical tanks is established in this paper, with both translational and rotational excitations considered. The hypothesis of liquid equilibrium position following equivalent gravity is first proposed. By decomposing the large-amplitude motion of a liquid into bulk motion following the equivalent gravity and additional small-amplitude sloshing, a better simulation of large-amplitude liquid sloshing is presented. The effectiveness and accuracy of the model are verified by comparing the slosh forces and moments to results of the traditional model and CFD software.
Ru, Sushan; Hardner, Craig; Carter, Patrick A; Evans, Kate; Main, Dorrie; Peace, Cameron
2016-01-01
Seedling selection identifies superior seedlings as candidate cultivars based on predicted genetic potential for traits of interest. Traditionally, genetic potential is determined by phenotypic evaluation. With the availability of DNA tests for some agronomically important traits, breeders have the opportunity to include DNA information in their seedling selection operations—known as marker-assisted seedling selection. A major challenge in deploying marker-assisted seedling selection in clonally propagated crops is a lack of knowledge of the genetic gain achievable from alternative strategies. Existing models based on additive effects considering seed-propagated crops are not directly relevant for seedling selection of clonally propagated crops, as clonal propagation captures all genetic effects, not just additive. This study modeled genetic gain from traditional and various marker-based seedling selection strategies on a single trait basis through analytical derivation and stochastic simulation, based on a generalized seedling selection scheme of clonally propagated crops. Various trait-test scenarios with a range of broad-sense heritability and proportion of genotypic variance explained by DNA markers were simulated for two populations with different segregation patterns. Both derived and simulated results indicated that marker-based strategies tended to achieve higher genetic gain than phenotypic seedling selection for a trait where the proportion of genotypic variance explained by marker information was greater than the broad-sense heritability. Results from this study provide guidance in optimizing genetic gain from seedling selection for single traits where DNA tests providing marker information are available. PMID:27148453
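A stochastic toy version of the comparison: each seedling gets a total genotypic value g, a phenotype g + e whose reliability is set by broad-sense heritability H2, and a marker score that explains a chosen fraction of the genotypic variance; genetic gain is the mean g of the selected fraction. Parameter values are illustrative, not the study's scenarios.

```python
import numpy as np

rng = np.random.default_rng(10)

def genetic_gain(n=5000, H2=0.3, marker_r2=0.5, selected_frac=0.05):
    g = rng.standard_normal(n)                               # total genotypic value
    phenotype = g + rng.normal(scale=np.sqrt((1 - H2) / H2), size=n)
    marker = np.sqrt(marker_r2) * g \
             + rng.normal(scale=np.sqrt(1 - marker_r2), size=n)
    k = int(n * selected_frac)
    gain_pheno  = g[np.argsort(phenotype)[-k:]].mean()       # phenotypic selection
    gain_marker = g[np.argsort(marker)[-k:]].mean()          # marker-based selection
    return gain_pheno, gain_marker

# Marker-based selection tends to win when marker_r2 exceeds H2, as the paper reports.
print(genetic_gain(H2=0.2, marker_r2=0.5))
print(genetic_gain(H2=0.6, marker_r2=0.3))
```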
NASA Astrophysics Data System (ADS)
Zhong, Fulin; Li, Ting; Pan, Boan; Wang, Pengbo
2017-02-01
Laser acupuncture is an effective photochemical and nonthermal stimulation of traditional acupuncture points with low-intensity laser irradiation, which has the advantages of being painless, sterile, and safe compared to traditional acupuncture. A laser diode (LD) provides single-wavelength and relatively high-power light for phototherapy. The quantitative effect of LD illumination parameters is crucial for practical operation of laser acupuncture. However, this issue is not fully demonstrated, especially since experimental methodologies with animals or humans are hard to apply to it. For example, in order to protect the viability of cells and tissue and obtain a better therapeutic effect, it is necessary to keep the output power within the 5 mW-10 mW range, while the optimal power is still not clear. This study aimed to quantitatively optimize the laser output power, wavelength, and irradiation direction with highly realistic modeling of light transport in acupunctured tissue. A Monte Carlo simulation software for 3D voxelized media and the highest-precision human anatomical model, the Visible Chinese Human (VCH), were employed. Our 3D simulation results showed that the longer the wavelength and the higher the illumination power, the larger the absorption in laser acupuncture; vertical emission of the acupuncture laser results in a higher amount of light absorbed in both the acupunctured voxel of tissue and the muscle layer. Our 3D light distribution of laser acupuncture within the VCH tissue model has the potential to be used in optimization and real-time guidance in clinical manipulation of laser acupuncture.
Teaching Physics Using Virtual Reality
NASA Astrophysics Data System (ADS)
Savage, C.; McGrath, D.; McIntyre, T.; Wegener, M.; Williamson, M.
2010-07-01
We present an investigation of game-like simulations for physics teaching. We report on the effectiveness of the interactive simulation "Real Time Relativity" for learning special relativity. We argue that the simulation not only enhances traditional learning, but also enables new types of learning that challenge the traditional curriculum. The lessons drawn from this work are being applied to the development of a simulation for enhancing the learning of quantum mechanics.
High-resolution regional climate model evaluation using variable-resolution CESM over California
NASA Astrophysics Data System (ADS)
Huang, X.; Rhoades, A.; Ullrich, P. A.; Zarzycki, C. M.
2015-12-01
Understanding the effect of climate change at regional scales remains a topic of intensive research. Though computational constraints remain a problem, high horizontal resolution is needed to represent topographic forcing, which is a significant driver of local climate variability. Although regional climate models (RCMs) have traditionally been used at these scales, variable-resolution global climate models (VRGCMs) have recently arisen as an alternative for studying regional weather and climate allowing two-way interaction between these domains without the need for nudging. In this study, the recently developed variable-resolution option within the Community Earth System Model (CESM) is assessed for long-term regional climate modeling over California. Our variable-resolution simulations will focus on relatively high resolutions for climate assessment, namely 28km and 14km regional resolution, which are much more typical for dynamically downscaled studies. For comparison with the more widely used RCM method, the Weather Research and Forecasting (WRF) model will be used for simulations at 27km and 9km. All simulations use the AMIP (Atmospheric Model Intercomparison Project) protocols. The time period is from 1979-01-01 to 2005-12-31 (UTC), and year 1979 was discarded as spin-up time. The mean climatology across California's diverse climate zones, including temperature and precipitation, is analyzed and contrasted with the Weather Research and Forecasting (WRF) model (as a traditional RCM), regional reanalysis, gridded observational datasets and uniform high-resolution CESM at 0.25 degree with the finite volume (FV) dynamical core. The results show that variable-resolution CESM is competitive in representing regional climatology on both annual and seasonal time scales. This assessment adds value to the use of VRGCMs for projecting climate change over the coming century and improves our understanding of both past and future regional climate related to fine-scale processes. This assessment is also relevant for addressing the scale limitation of current RCMs or VRGCMs when next-generation model resolution increases to ~10km and beyond.
Benchmarking nitrogen removal suspended-carrier biofilm systems using dynamic simulation.
Vanhooren, H; Yuan, Z; Vanrolleghem, P A
2002-01-01
We are witnessing an enormous growth in biological nitrogen removal from wastewater. It presents specific challenges beyond traditional COD (carbon) removal. A possibility for optimised process design is the use of biomass-supporting media. In this paper, attached growth processes (AGP) are evaluated using dynamic simulations. The advantages of these systems, which were qualitatively described elsewhere, are validated quantitatively based on a simulation benchmark for activated sludge treatment systems. This simulation benchmark is extended with a biofilm model that allows for fast and accurate simulation of the conversion of different substrates in a biofilm. The economic feasibility of this system is evaluated using the data generated with the benchmark simulations. Capital savings due to volume reduction and reduced sludge production are weighed against increased aeration costs. In this evaluation, effluent quality is integrated as well.
A Non-Incompressible Non-Boussinesq (NINB) framework for studying atmospheric turbulence
NASA Astrophysics Data System (ADS)
Yan, C.; Archer, C. L.; Xie, S.; Ghaisas, N.
2015-12-01
The incompressible assumption is widely used for studying the turbulent atmospheric boundary layer (ABL) and is generally accepted when the Mach number < ~0.3 (velocity < ~100 m/s). Since the tips of modern wind turbine blades can reach and exceed this threshold, neglecting air compressibility will introduce errors. In addition, if air incompressibility does not hold, then the Boussinesq approximation, by which air density is treated as a constant except in the gravity term of the Navier-Stokes equation, is also invalidated. Here, we propose a new theoretical framework, called NINB for Non-Incompressible Non-Boussinesq, in which air is not considered incompressible and air density is treated as a non-turbulent 4D variable. First, the NINB mass, momentum, and energy conservation equations are developed using Reynolds averaging. Second, numerical simulations of the NINB equations, coupled with a k-epsilon turbulence model, are performed with the finite-volume method. Wind turbines are modeled with the actuator-line model using SOWFA (Software for Offshore/onshore Wind Farm Applications). Third, NINB results are compared with the traditional incompressible buoyant simulations performed by SOWFA with the same set up. The results show differences between NINB and traditional simulations in the neutral atmosphere with a wind turbine. The largest differences in wind speed (up to 1 m/s), turbulent kinetic energy (~10%), dissipation rate (~5%), and shear stress (~10%) occur near the turbine tip region. The power generation differences are 5-15% (depending on setup). These preliminary results suggest that compressibility effects are non-negligible around wind turbines and should be taken into account when forecasting wind power. Since only a few extra terms are introduced, the NINB framework may be an alternative to the traditional incompressible Boussinesq framework for studying the turbulent ABL in general (i.e., without turbines) in the absence of shock waves.
Validation and Verification of LADEE Models and Software
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen
2013-01-01
The Lunar Atmosphere Dust Environment Explorer (LADEE) mission will orbit the moon in order to measure the density, composition and time variability of the lunar dust environment. The ground-side and onboard flight software for the mission is being developed using a Model-Based Software methodology. In this technique, models of the spacecraft and flight software are developed in a graphical dynamics modeling package. Flight Software requirements are prototyped and refined using the simulated models. After the model is shown to work as desired in this simulation framework, C-code software is automatically generated from the models. The generated software is then tested in real time Processor-in-the-Loop and Hardware-in-the-Loop test beds. Travelling Road Show test beds were used for early integration tests with payloads and other subsystems. Traditional techniques for verifying computational sciences models are used to characterize the spacecraft simulation. A lightweight set of formal methods analysis, static analysis, formal inspection and code coverage analyses are utilized to further reduce defects in the onboard flight software artifacts. These techniques are applied early and often in the development process, iteratively increasing the capabilities of the software and the fidelity of the vehicle models and test beds.
Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique
2011-05-01
In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
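A toy illustration of the voltage-stepping idea for a quadratic integrate-and-fire neuron dV/dt = V^2 + I: instead of stepping time, the voltage axis is discretized and the time to cross each voltage step is estimated from the local dynamics (a midpoint rule here), with an explicit Euler time-stepping run as the reference. This is a simplified caricature of the scheme, not the published algorithm, and it ignores adaptation and synaptic coupling.

```python
import numpy as np

def qif_rhs(v, i_ext):
    return v * v + i_ext

def spike_time_voltage_stepping(v0, v_th, i_ext, dv=0.01):
    """Advance voltage in fixed steps; accumulate the time each step takes."""
    t, v = 0.0, v0
    while v < v_th:
        v_mid = v + 0.5 * dv
        t += dv / qif_rhs(v_mid, i_ext)     # local (midpoint) estimate of dt
        v += dv
    return t

def spike_time_time_stepping(v0, v_th, i_ext, dt=1e-4):
    """Reference: explicit Euler in time."""
    t, v = 0.0, v0
    while v < v_th:
        v += dt * qif_rhs(v, i_ext)
        t += dt
    return t

# Analytic spike time for the QIF with constant input I > 0 and V(0) = v0:
# t = (arctan(v_th / sqrt(I)) - arctan(v0 / sqrt(I))) / sqrt(I)
i_ext, v0, v_th = 1.0, -1.0, 10.0
exact = (np.arctan(v_th) - np.arctan(v0)) / np.sqrt(i_ext)
print(exact, spike_time_voltage_stepping(v0, v_th, i_ext),
      spike_time_time_stepping(v0, v_th, i_ext))
```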
NASA Astrophysics Data System (ADS)
Câmara, L. D. T.
2015-09-01
The solvent-gradient simulated moving bed (SG-SMB) process is the new trend for performance improvement compared to traditional isocratic solvent conditions. In such an SG-SMB separation process, modulation of the solvent strength leads to a significant increase in purity and productivity, followed by a reduction in solvent consumption. A stepwise modelling approach was utilized to represent the interconnected chromatographic columns of the system, combined with lumped mass transfer models between the solid and liquid phases. The influence of the solvent modifier was considered by applying the Abel model, which takes into account the effect of modifier volume fraction on the partition coefficient. The modelling and simulations were carried out and compared to the experimental SG-SMB separation of the amino acids phenylalanine and tryptophan. A lumped mass transfer kinetic model was applied to both the modifier (ethanol) and the solutes. The simulation results showed that such simple, global mass transfer models are enough to represent all the mass transfer effects between the solid adsorbent and the liquid phase. The separation performance can be improved by reducing the interaction, or the mass transfer kinetic effect, between the solid adsorbent phase and the modifier. The simulations showed good agreement with the experimental amino acid concentration data both in the extract and in the raffinate.
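A minimal sketch of the lumped kinetic picture for a single well-mixed slice of the bed: a linear driving force dq/dt = k (K c - q) in which the partition coefficient K decreases with the local modifier loading, in the spirit of the modifier dependence mentioned above. The exponential form of K and all parameter values are placeholders, not the fitted Abel-model parameters.

```python
import numpy as np

def simulate_slice(t_end=50.0, dt=0.01, k_solute=0.8, k_modifier=1.2,
                   K0=5.0, s=6.0, c_feed=1.0, phi_feed=0.2):
    """Lumped mass-transfer kinetics in one well-mixed slice of the bed.

    K(phi) = K0 * exp(-s * phi) is a placeholder modifier dependence of the
    partition coefficient (stronger modifier -> weaker retention).
    """
    n = int(t_end / dt)
    q_solute = q_mod = 0.0                 # adsorbed-phase concentrations
    history = np.empty((n, 2))
    for i in range(n):
        K = K0 * np.exp(-s * q_mod)        # retention set by local modifier loading
        q_solute += dt * k_solute * (K * c_feed - q_solute)
        q_mod    += dt * k_modifier * (phi_feed - q_mod)
        history[i] = q_solute, q_mod
    return history

h = simulate_slice()
print(h[-1])   # solute loading approaches K(phi) * c_feed once the modifier equilibrates
```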
Klerman, Elizabeth B; Beckett, Scott A; Landrigan, Christopher P
2016-09-13
In 2011 the U.S. Accreditation Council for Graduate Medical Education began limiting first-year resident physicians (interns) to shifts of ≤16 consecutive hours. Controversy persists regarding the effectiveness of this policy for reducing errors and accidents while promoting education and patient care. Using a mathematical model of the effects of circadian rhythms and length of time awake on objective performance and subjective alertness, we quantitatively compared predictions for traditional intern schedules to those that limit work to ≤16 consecutive hours. We simulated two traditional schedules and three novel schedules using the mathematical model. The traditional schedules had extended-duration work shifts (≥24 h) with overnight work shifts every second shift (including every third night, Q3) or every third shift (including every fourth night, Q4); the novel schedules had two different cross-cover (XC) night team schedules (XC-V1 and XC-V2) and a Rapid Cycle Rotation (RCR) schedule. Predicted objective performance and subjective alertness for each work shift were computed for each individual's schedule within a team and then combined for the team as a whole. Our primary outcome was the amount of time within a work shift during which a team's model-predicted objective performance and subjective alertness were lower than those expected after 16 or 24 h of continuous wake in an otherwise rested individual. The model predicted fewer hours with poor performance and alertness, especially during night-time work hours, for all three novel schedules than for either the traditional Q3 or Q4 schedules. Three proposed schedules that eliminate extended shifts may improve performance and alertness compared with traditional Q3 or Q4 schedules. Predicted times of worse performance and alertness were at night, which is also a time when supervision of trainees is lower. Mathematical modeling provides a quantitative comparison approach with potential to aid residency programs in schedule analysis and redesign.
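A toy stand-in for the kind of biomathematical model used in such comparisons: predicted alertness combines a homeostatic penalty that grows with hours awake and a sinusoidal circadian component, and a schedule is then scored by the hours spent below a threshold. This is a generic caricature for illustration, not the specific validated model in the paper.

```python
import numpy as np

def alertness(hours_awake, clock_hour):
    """Toy alertness index: homeostatic decline plus a circadian rhythm."""
    homeostatic = -0.25 * hours_awake
    circadian = 1.5 * np.sin(2 * np.pi * (clock_hour - 9) / 24)   # trough near 3 a.m.
    return 10 + homeostatic + circadian

def hours_below_threshold(shift_start, shift_len, wake_before=1.0, threshold=5.0):
    """Hours of a single shift with predicted alertness below the threshold."""
    t = np.arange(0, shift_len, 0.1)
    a = alertness(wake_before + t, (shift_start + t) % 24)
    return 0.1 * (a < threshold).sum()

# Compare one 28-hour extended shift with a 16-hour limited shift, both starting at 7 a.m.
print(hours_below_threshold(shift_start=7, shift_len=28))
print(hours_below_threshold(shift_start=7, shift_len=16))
```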
An Evolutionary Game Theory Model of Revision-Resistant Motivations and Strategic Reasoning
2008-08-01
The model is also consistent with a number of findings on the nature of emotions and related forms of motivation. ... because human beings have some kinds of motivations that are not reducible to the economist's traditional notion of ordered preferences. ... The simulation is designed in light of the most current human subject research on a widely studied game: the Ultimatum Game. This allows us to test ...
Gore, Teresa
2017-06-15
The purpose of this study was to explore the relationship of baccalaureate nursing (BSN) students' perceived learning effectiveness, measured with the Clinical Learning Environments Comparison Survey, across different levels of simulation fidelity and traditional clinical experiences. A convenience sample of 103 first-semester BSN students enrolled in a fundamentals/assessment clinical course and 155 fifth-semester BSN students enrolled in a leadership clinical course participated in this study. A descriptive correlational design was used for this cross-sectional study to evaluate students' perceptions after a simulation experience and the completion of the traditional clinical experiences. The subscales measured were communication, nursing leadership, and the teaching-learning dyad. No statistical differences were noted based on the learning objectives. The communication subscale showed a tendency toward preference for traditional clinical experiences in meeting students' perceived learning of communication. For students' perceived learning effectiveness, faculty should determine the appropriate level of simulation fidelity based on the learning objectives.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, Jiun-Dar
2017-01-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain of several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. In this study, the impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE) model and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.
NASA Astrophysics Data System (ADS)
Tao, Wei-Kuo; Chern, Jiun-Dar
2017-06-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain with several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multiscale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. The impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE, a CRM) model and Goddard MMF that uses the GCEs as its embedded CRMs. Both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the Goddard MMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.
Ciesielski, Peter N.; Crowley, Michael F.; Nimlos, Mark R.; ...
2014-12-09
Biomass exhibits a complex microstructure of directional pores that impact how heat and mass are transferred within biomass particles during conversion processes. However, models of biomass particles used in simulations of conversion processes typically employ oversimplified geometries such as spheres and cylinders and neglect intraparticle microstructure. In this study, we develop 3D models of biomass particles with size, morphology, and microstructure based on parameters obtained from quantitative image analysis. We obtain measurements of particle size and morphology by analyzing large ensembles of particles that result from typical size reduction methods, and we delineate several representative size classes. Microstructural parameters, including cell wall thickness and cell lumen dimensions, are measured directly from micrographs of sectioned biomass. A general constructive solid geometry algorithm is presented that produces models of biomass particles based on these measurements. Next, we employ the parameters obtained from image analysis to construct models of three different particle size classes from two different feedstocks representing a hardwood poplar species (Populus tremuloides, quaking aspen) and a softwood pine (Pinus taeda, loblolly pine). Finally, we demonstrate the utility of the models and the effects of explicit microstructure by performing finite-element simulations of intraparticle heat and mass transfer, and the results are compared to similar simulations using traditional simplified geometries. In conclusion, we show how the behavior of particle models with more realistic morphology and explicit microstructure departs from that of spherical models in simulations of transport phenomena and that species-dependent differences in microstructure impact simulation results in some cases.
A Review of Numerical Simulation and Analytical Modeling for Medical Devices Safety in MRI
Kabil, J.; Belguerras, L.; Trattnig, S.; Pasquier, C.; Missoffe, A.
2016-01-01
Objectives: To review past and present challenges and ongoing trends in numerical simulation for MRI (Magnetic Resonance Imaging) safety evaluation of medical devices. Methods: A wide literature review of numerical and analytical simulation of simple and complex medical devices in MRI electromagnetic fields shows the evolution through time and a growing concern for MRI safety over the years. Major issues and achievements are described, as well as current trends and perspectives in this research field. Results: Numerical simulation of medical devices is constantly evolving, supported by now well-established calculation methods. Implants with simple geometry can often be simulated in a computational human model, but one issue remaining today is the experimental validation of these human models. A great concern is to assess RF heating on implants too complex to be simulated traditionally, like pacemaker leads. Thus, ongoing research focuses on alternative hybrid methods, both numerical and experimental, such as the transfer function method. For the static field and gradient fields, analytical models can be used for dimensioning simple implant shapes, but they are limited for complex geometries that cannot be studied with simplifying assumptions. Conclusions: Numerical simulation is an essential tool for MRI safety testing of medical devices. The main issues remain the accuracy of simulations compared to real life and the study of complex devices; but as the research field is constantly evolving, some promising ideas are now under investigation to take up these challenges. PMID:27830244
ERIC Educational Resources Information Center
Baser, Mustafa
2006-01-01
The objective of this research is to investigate the effects of simulations based on conceptual change conditions (CCS) and traditional confirmatory simulations (TCS) on pre-service elementary school teachers' understanding of direct current electric circuits. The data was collected from a sample consisting of 89 students; 48 students in the…
Kent, Dea J
2010-01-01
I compared the effects of a just-in-time educational intervention (educational materials for dressing application attached to the manufacturer's dressing package) to traditional wound care education on reported confidence and dressing application in a simulated model. Nurses from a variety of backgrounds were recruited for this study. The nurses possessed all levels of education ranging from licensed practical nurse to master of science in nursing. Both novice and seasoned nurses were included, with no stipulations regarding years of nursing experience. Exclusion criteria included nurses who spent less than 50% of their time in direct patient care and nurses with advanced wound care training and/or certification (CWOCN, CWON). Study settings included community-based acute care facilities, critical access hospitals, long-term care facilities, long-term acute care facilities, and home care agencies. No level 1 trauma centers were included in the study for geographical reasons. Participants were randomly allocated to control or intervention groups. Each participant completed the Kent Dressing Confidence Assessment tool. Subjects were then asked to apply the dressing to a wound model under the observation of either the principal investigator or a trained observer, who scored the accuracy of dressing application according to established criteria. None of the 139 nurses who received traditional dressing packaging were able to apply the dressing to a wound model correctly. In contrast, 88% of the nurses who received the package with the educational guide attached to it were able to apply the dressing to a wound model correctly (χ2 = 107.22, df = 1, P = .0001). Nurses who received the dressing package with the attached educational guide agreed that this feature gave them confidence to correctly apply the dressing (88%), while no nurse agreed that the traditional package gave him or her the confidence to apply the dressing correctly (χ2 = 147.47, df = 4, P < .0001). A just-in-time education intervention improved nurses' confidence when applying an unfamiliar dressing and accuracy of application when applying the dressing to a simulated model compared to traditional wound care education.
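For readers who want to reproduce this style of analysis, a minimal sketch of the 2 x 2 chi-square comparison is shown below; the intervention-group size is hypothetical (the abstract reports proportions, not the full table), so the computed statistic is not expected to match the reported value.

```python
from scipy.stats import chi2_contingency

# Rows: [correct application, incorrect application]
# Columns: [traditional packaging, packaging with educational guide]
# The 139 traditional-packaging nurses with 0 correct applications come from the
# abstract; the intervention column (assumed 139 nurses, 88% correct) is hypothetical.
observed = [[0, 122],
            [139, 17]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2g}")
```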
Integration of multiple theories for the simulation of laser interference lithography processes
NASA Astrophysics Data System (ADS)
Lin, Te-Hsun; Yang, Yin-Kuang; Fu, Chien-Chung
2017-11-01
The periodic structure of laser interference lithography (LIL) fabrication is superior to other lithography technologies. In contrast to traditional lithography, LIL has the advantages of being a simple optical system with no mask requirements, low cost, high depth of focus, and large patterning area in a single exposure. Generally, a simulation pattern for the periodic structure is obtained through optical interference prior to its fabrication through LIL. However, the LIL process is complex and combines the fields of optical and polymer materials; thus, a single simulation theory cannot reflect the real situation. Therefore, this research integrates multiple theories, including those of optical interference, standing waves, and photoresist characteristics, to create a mathematical model for the LIL process. The mathematical model can accurately estimate the exposure time and reduce the LIL process duration through trial and error.
Coarse Grid CFD for underresolved simulation
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Viellieber, Mathias O.; Himmel, Steffen R.
2010-11-01
CFD simulation of the complete reactor core of a nuclear power plant requires exceedingly huge computational resources, so this brute-force approach has not been pursued yet. The traditional approach is 1D subchannel analysis employing calibrated transport models. Coarse grid CFD is an attractive alternative technique based on strongly under-resolved CFD and the inviscid Euler equations. Obviously, using inviscid equations and coarse grids does not resolve all the physics, so additional volumetric source terms are required to model viscosity and other sub-grid effects. The source terms are implemented via correlations derived from fully resolved representative simulations, which can be tabulated or computed on the fly. The technique is demonstrated for a Carnot diffusor and a wire-wrap fuel assembly [1]. [1] Himmel, S.R., PhD thesis, Stuttgart University, Germany, 2009, http://bibliothek.fzk.de/zb/berichte/FZKA7468.pdf
Impacts of high resolution data on traveler compliance levels in emergency evacuation simulations
Lu, Wei; Han, Lee D.; Liu, Cheng; ...
2016-05-05
In this article, we conducted a comparison study of evacuation assignment based on Traffic Analysis Zones (TAZ) and high-resolution LandScan USA Population Cells (LPC) with a detailed real-world road network. A platform for evacuation modeling built on high-resolution population distribution data and activity-based microscopic traffic simulation was proposed. This platform can be extended to any city in the world. The results indicated that evacuee compliance behavior affects evacuation efficiency with traditional TAZ assignment, but it did not significantly compromise the performance with high-resolution LPC assignment. The TAZ assignment also underestimated the real travel time during evacuation. This suggests that high data resolution can improve the accuracy of traffic modeling and simulation. The evacuation manager should consider more diverse assignment during emergency evacuation to avoid congestion.
Origin of coronal mass ejection and magnetic cloud: Thermal or magnetic driven?
NASA Technical Reports Server (NTRS)
Zhang, Gong-Liang; Wang, Chi; He, Shuang-Hua
1995-01-01
A fundamental problem in Solar-Terrestrial Physics is the origin of the solar transient plasma output, which includes the coronal mass ejection (CME) and its interplanetary manifestation, e.g. the magnetic cloud. The traditional blast-wave model, driven by a solar thermal pressure impulse, has faced challenges in recent years. In an MHD numerical simulation study of CMEs, the authors find that the basic features of the asymmetrical event of 18 August 1980 can be reproduced neither by a thermal pressure increment nor by a speed increment. The thermal pressure model also fails to simulate the interplanetary structure with low thermal pressure and strong magnetic field strength that is representative of a typical magnetic cloud. Instead, the numerical simulation results favor magnetic field expansion as the likely mechanism for both the asymmetrical CME event and the magnetic cloud.
Integration of multiple theories for the simulation of laser interference lithography processes.
Lin, Te-Hsun; Yang, Yin-Kuang; Fu, Chien-Chung
2017-11-24
The periodic structure of laser interference lithography (LIL) fabrication is superior to other lithography technologies. In contrast to traditional lithography, LIL has the advantages of being a simple optical system with no mask requirements, low cost, high depth of focus, and large patterning area in a single exposure. Generally, a simulation pattern for the periodic structure is obtained through optical interference prior to its fabrication through LIL. However, the LIL process is complex and combines the fields of optical and polymer materials; thus, a single simulation theory cannot reflect the real situation. Therefore, this research integrates multiple theories, including those of optical interference, standing waves, and photoresist characteristics, to create a mathematical model for the LIL process. The mathematical model can accurately estimate the exposure time and reduce the LIL process duration through trial and error.
Roadmap to an Engineering-Scale Nuclear Fuel Performance & Safety Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, John A; Clarno, Kevin T; Hansen, Glen A
2009-09-01
Developing new fuels and qualifying them for large-scale deployment in power reactors is a lengthy and expensive process, typically spanning a period of two decades from concept to licensing. Nuclear fuel designers serve an indispensable role in the process, at the initial exploratory phase as well as in analysis of the testing results. In recent years fuel performance capabilities based on first principles have been playing more of a role in what has traditionally been an empirically dominated process. Nonetheless, nuclear fuel behavior is based on the interaction of multiple complex phenomena, and recent evolutionary approaches are being applied more on a phenomenon-by-phenomenon basis, targeting localized problems, as opposed to a systematic approach based on a fundamental understanding of all interacting parameters. Advanced nuclear fuels are generally more complex, and less understood, than the traditional fuels used in existing reactors (ceramic UO2 with burnable poisons and other minor additives). The added challenges are primarily caused by a less complete empirical database and, in the case of recycled fuel, the inherent variability in fuel compositions. It is clear that using the traditional approach to develop and qualify fuels over the entire range of variables pertinent to the U.S. Department of Energy (DOE) Office of Nuclear Energy on a timely basis with available funds would be very challenging, if not impossible. As a result the DOE Office of Nuclear Energy has launched the Nuclear Energy Advanced Modeling and Simulation (NEAMS) approach to revolutionize fuel development. This new approach is predicated upon transferring the recent advances in computational sciences and computer technologies into the fuel development program. The effort will couple computational science with recent advances in the fundamental understanding of physical phenomena through ab initio modeling and targeted phenomenological testing to leapfrog many fuel-development activities. Realizing the full benefits of this approach will likely take some time. However, it is important that the developmental activities for modeling and simulation be tightly coupled with the experimental activities to maximize feedback effects and accelerate both the experimental and analytical elements of the program toward a common objective. The close integration of modeling and simulation and experimental activities is key to developing a useful fuel performance simulation capability, providing a validated design and analysis tool, and understanding the uncertainties within the models and design process. The efforts of this project are integrally connected to the Transmutation Fuels Campaign (TFC), which maintains as a primary objective to formulate, fabricate, and qualify a transuranic-based fuel with added minor actinides for use in future fast reactors. Additional details of the TFC scope can be found in the Transmutation Fuels Campaign Execution Plan. This project is an integral component of the TFC modeling and simulation effort, and this multiyear plan borrowed liberally from the Transmutation Fuels Campaign Modeling and Simulation Roadmap. This document provides the multiyear staged development plan to develop a continuum-level Integrated Performance and Safety Code (IPSC) to predict the behavior of the fuel and cladding during normal reactor operations and anticipated transients up to the point of clad breach.
NASA Astrophysics Data System (ADS)
He, Yingqing; Ai, Bin; Yao, Yao; Zhong, Fajun
2015-06-01
Cellular automata (CA) have proven to be very effective for simulating and predicting the spatio-temporal evolution of complex geographical phenomena. Traditional methods generally pose problems in determining the structure and parameters of CA for a large, complex region or a long-term simulation. This study presents a self-adaptive CA model integrated with an artificial immune system (AIS) to discover dynamic transition rules automatically. The model's parameters are allowed to self-modify with the application of multi-temporal remote sensing images; that is, the CA can adapt itself to a changing and complex environment. Therefore, urban dynamic evolution rules over time can be efficiently retrieved by using this integrated model. The proposed AIS-based CA model was then used to simulate the rural-urban land conversion of Guangzhou city, located in the core of China's Pearl River Delta. The initial urban land was classified directly from a TM satellite image of 1990. Urban land in the years 1995, 2000, 2005, 2009 and 2012 was correspondingly used as the observed data to calibrate the model's parameters. Using the quantitative figure-of-merit (FoM) index and pattern similarity, the AIS-based model was further compared with a logistic CA model. The results indicate that the AIS-based CA model performs better, with higher precision, in simulating urban evolution, and the simulated spatial pattern is closer to the actual development situation.
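A compact sketch of the figure-of-merit (FoM) index commonly used to validate land-change simulations of this kind, computed from boolean change maps; the toy arrays are illustrative only.

```python
import numpy as np

def figure_of_merit(observed_change, simulated_change):
    """FoM = hits / (hits + misses + false alarms), for binary change maps."""
    obs = np.asarray(observed_change, dtype=bool)
    sim = np.asarray(simulated_change, dtype=bool)
    hits = np.sum(obs & sim)           # observed change simulated as change
    misses = np.sum(obs & ~sim)        # observed change simulated as persistence
    false_alarms = np.sum(~obs & sim)  # observed persistence simulated as change
    return hits / float(hits + misses + false_alarms)

# Toy change maps (1 = cell converted to urban land over the calibration period).
obs = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])
sim = np.array([[0, 1, 0], [0, 1, 1], [0, 0, 0]])
print("FoM =", round(figure_of_merit(obs, sim), 3))
```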
Improving Hydrological Simulations by Incorporating GRACE Data for Parameter Calibration
NASA Astrophysics Data System (ADS)
Bai, P.
2017-12-01
Hydrological model parameters are commonly calibrated against observed streamflow data. This calibration strategy is questionable when the modeled hydrological variables of interest are not limited to streamflow. Well-performed streamflow simulations do not guarantee the reliable reproduction of other hydrological variables. One of the reasons is that hydrological model parameters are not reasonably identified. The Gravity Recovery and Climate Experiment (GRACE) satellite-derived total water storage change (TWSC) data provide an opportunity to constrain hydrological model parameterizations in combination with streamflow observations. We constructed a multi-objective calibration scheme based on GRACE-derived TWSC and streamflow observations, with the aim of improving the parameterizations of hydrological models. The multi-objective calibration scheme was compared with the traditional single-objective calibration scheme, which is based only on streamflow observations. Two monthly hydrological models were employed on 22 Chinese catchments with different hydroclimatic conditions. The model evaluation was performed using observed streamflows, GRACE-derived TWSC, and evapotranspiration (ET) estimates from flux towers and from the water balance approach. Results showed that the multi-objective calibration provided more reliable TWSC and ET simulations than the single-objective calibration, without significant deterioration in the accuracy of streamflow simulations. In addition, the improvements in TWSC and ET simulations were more significant in relatively dry catchments than in relatively wet catchments. This study highlights the importance of including additional constraints besides streamflow observations in parameter estimation to improve the performance of hydrological models.
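A minimal sketch of how a multi-objective criterion of this kind can be assembled from streamflow and GRACE-derived TWSC series, using Nash-Sutcliffe efficiency (NSE) for both terms; the equal weighting and function names are illustrative assumptions rather than the study's exact formulation.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (1 is perfect; below 0 is worse than the observed mean)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_objective(sim_q, obs_q, sim_twsc, obs_twsc, w_q=0.5):
    """Weighted combination of streamflow and TWSC skill, to be maximized by the calibrator."""
    return w_q * nse(sim_q, obs_q) + (1.0 - w_q) * nse(sim_twsc, obs_twsc)

# Toy monthly series (illustrative numbers only).
obs_q, sim_q = [10, 30, 55, 20], [12, 28, 50, 22]
obs_s, sim_s = [-5, 15, 25, 0], [-3, 12, 28, -2]
print("combined objective:", round(multi_objective(sim_q, obs_q, sim_s, obs_s), 3))
```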
NASA Astrophysics Data System (ADS)
Kuramochi, Kazuki; Akiyama, Kazunori; Ikeda, Shiro; Tazaki, Fumie; Fish, Vincent L.; Pu, Hung-Yi; Asada, Keiichi; Honma, Mareki
2018-05-01
We propose a new imaging technique for interferometry using sparse modeling, utilizing two regularization terms: the ℓ1-norm and a new function named total squared variation (TSV) of the brightness distribution. First, we demonstrate that our technique may achieve a superresolution of ∼30% compared with the traditional CLEAN beam size using synthetic observations of two point sources. Second, we present simulated observations of three physically motivated static models of Sgr A* with the Event Horizon Telescope (EHT) to show the performance of the proposed techniques in greater detail. Remarkably, in both the image and gradient domains, the optimal beam size minimizing root-mean-squared errors is ≲10% of the traditional CLEAN beam size for ℓ1+TSV regularization, and non-convolved reconstructed images have smaller errors than beam-convolved reconstructed images. This indicates that TSV is well matched to the expected physical properties of the astronomical images and that the traditional post-processing technique of Gaussian convolution in interferometric imaging may not be required. We also propose a feature-extraction method to detect circular features from the image of a black hole shadow and use it to evaluate the performance of the image reconstruction. With this method and the reconstructed images, the EHT can constrain the radius of the black hole shadow with an accuracy of ∼10%–20% in the present simulations for Sgr A*, suggesting that the EHT would be able to provide useful independent measurements of the mass of the supermassive black holes in Sgr A* and also another primary target, M87.
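As a rough illustration of the two regularization terms named above, the sketch below evaluates the ℓ1 and total squared variation (TSV) penalties and a simple regularized least-squares cost for a trial image; the dense random matrix standing in for the measurement operator is a placeholder, not the EHT visibility sampling.

```python
import numpy as np

def tsv(image):
    """Total squared variation: sum of squared differences between neighboring pixels."""
    dx = np.diff(image, axis=1)
    dy = np.diff(image, axis=0)
    return np.sum(dx ** 2) + np.sum(dy ** 2)

def cost(image, A, v_obs, lam_l1, lam_tsv):
    """Least-squares data term plus l1 and TSV regularization (quantity to minimize)."""
    residual = A @ image.ravel() - v_obs
    return (np.sum(np.abs(residual) ** 2)
            + lam_l1 * np.sum(np.abs(image))
            + lam_tsv * tsv(image))

rng = np.random.default_rng(0)
img = rng.random((8, 8))          # trial brightness distribution
A = rng.normal(size=(32, 64))     # placeholder measurement operator
v = A @ img.ravel() + 0.01 * rng.normal(size=32)
print("regularized cost:", round(cost(img, A, v, lam_l1=1.0, lam_tsv=0.1), 2))
```

In a real solver this cost would be minimized over the image; the quadratic TSV term keeps the problem smooth apart from the ℓ1 part, which is one reason it is attractive for gradient-based optimization.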
Direct Numerical Simulation and Theories of Wall Turbulence with a Range of Pressure Gradients
NASA Technical Reports Server (NTRS)
Coleman, G. N.; Garbaruk, A.; Spalart, P. R.
2014-01-01
A new Direct Numerical Simulation (DNS) of Couette-Poiseuille flow at a higher Reynolds number is presented and compared with DNS of other wall-bounded flows. It is analyzed in terms of testing semi-theoretical proposals for universal behavior of the velocity, mixing length, or eddy viscosity in pressure gradients, and in terms of assessing the accuracy of two turbulence models. These models are used in two modes, the traditional one with only a dependence on the wall-normal coordinate y, and a newer one in which a lateral dependence on z is added. For pure Couette flow and the Couette-Poiseuille case considered here, this z-dependence allows some models to generate steady streamwise vortices, which generally improves the agreement with DNS and experiment. On the other hand, it complicates the comparison between DNS and models.
NASA Astrophysics Data System (ADS)
Song, Yang; Srinivasan, Bhuvana
2017-10-01
The discontinuous Galerkin (DG) method has the advantage of resolving shocks and sharp gradients that occur in neutral fluids and plasmas. An unstructured DG code has been developed in this work to study plasma instabilities using the two-fluid plasma model. Unstructured meshes are known to produce small and randomized grid errors compared to traditional structured meshes. Computational tests for Rayleigh-Taylor instabilities in radially-converging flows are performed using the MHD model. Choice of grid geometry is not obvious for simulations of instabilities in these circular configurations. Comparisons of the effects for different grids are made. A 2D magnetic nozzle simulation using the two-fluid plasma model is also performed. A vacuum boundary condition technique is applied to accurately solve the Riemann problem on the edge of the plume.
Jia, Shiyu; Zhang, Weizhong; Yu, Xiaokang; Pan, Zhenkuan
2015-09-01
Surgical simulators need to simulate interactive cutting of deformable objects in real time. The goal of this work was to design an interactive cutting algorithm that eliminates traditional cutting state classification and can work simultaneously with real-time GPU-accelerated deformation without affecting its numerical stability. A modified virtual node method for cutting is proposed. The deformable object is modeled as a real tetrahedral mesh embedded in a virtual tetrahedral mesh; the former is used for graphics rendering and collision, while the latter is used for deformation. The cutting algorithm first subdivides real tetrahedrons to eliminate all face and edge intersections, then splits faces, edges and vertices along the cutting tool trajectory to form cut surfaces. Next, virtual tetrahedrons containing more than one connected real tetrahedral fragment are duplicated, and connectivity between virtual tetrahedrons is updated. Finally, the embedding relationship between the real and virtual tetrahedral meshes is updated. The co-rotational linear finite element method is used for deformation. Cutting and collision are processed by the CPU, while deformation is carried out by the GPU using OpenCL. The efficiency of the GPU-accelerated deformation algorithm was tested using block models with varying numbers of tetrahedrons. The effectiveness of our cutting algorithm under multiple cuts and self-intersecting cuts was tested using a block model and a cylinder model. Cutting of a more complex liver model was performed, and detailed performance characteristics of cutting, deformation and collision were measured and analyzed. Our cutting algorithm can produce continuous cut surfaces when the traditional minimal-element-creation algorithm fails. Our GPU-accelerated deformation algorithm remains stable with a constant time step under multiple arbitrary cuts and works on both NVIDIA and AMD GPUs. The GPU-CPU speed ratio can be as high as 10 for models with 80,000 tetrahedrons. Forty to sixty percent real-time performance and a 100-200 Hz simulation rate are achieved for the liver model with 3,101 tetrahedrons. Major bottlenecks for simulation efficiency are cutting, collision processing and CPU-GPU data transfer. Future work needs to improve on these areas.
A method of LED free-form tilted lens rapid modeling based on scheme language
NASA Astrophysics Data System (ADS)
Dai, Yidan
2017-10-01
According to nonimaging optical principles and the traditional LED free-form surface lens, a new kind of LED free-form tilted lens was designed, and a method of rapid modeling based on the Scheme language was proposed. The mesh division method was applied to obtain the corresponding surface configuration according to the character of the light source and the desired energy distribution on the illumination plane. Then 3D modeling software and Scheme language programming were used to generate the lens model, respectively. With the help of optical simulation software, a light source with dimensions of 1 mm × 1 mm × 1 mm was used in the experiment, with a lateral migration distance of the illumination area of 0.5 m, and a total of one million rays were traced. Simulated results were obtained for both models. The simulation results show that the Scheme language approach prevents the model deformation problems caused by model transfer; the illumination uniformity reaches 82% and the offset angle is 26°. Also, the efficiency of the modeling process is greatly increased by using the Scheme language.
NASA Astrophysics Data System (ADS)
Li, Guangquan; Field, Malcolm S.
2014-03-01
Documenting and understanding water balances in a karst watershed in which groundwater and surface water resources are strongly interconnected are important aspects for managing regional water resources. Assessing water balances in karst watersheds can be difficult, however, because karst watersheds are so very strongly affected by groundwater flows through solution conduits that are often connected to one or more sinkholes. In this paper we develop a mathematical model to approximate sinkhole porosity from discharge at a downstream spring. The model represents a combination of a traditional linear reservoir model with turbulent hydrodynamics in the solution conduit connecting the downstream spring with the upstream sinkhole, which allows for the simulation of spring discharges and estimation of sinkhole porosity. Noting that spring discharge is an integral of all aspects of water storage and flow, it is mainly dependent on the behavior of the karst aquifer as a whole and can be adequately simulated using the analytical model described in this paper. The model is advantageous in that it obviates the need for a sophisticated numerical model that is much more costly to calibrate and operate. The model is demonstrated using the St. Marks River Watershed in northwestern Florida.
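A minimal sketch of the linear-reservoir element underlying this class of models, driven by a recharge pulse and paired with a Darcy-Weisbach estimate of turbulent conduit velocity; the recession constant, recharge series, and conduit parameters are illustrative placeholders, not the calibrated model of the paper.

```python
import numpy as np

def linear_reservoir(recharge, k=10.0, dt=1.0, s0=0.0):
    """Explicit time stepping of dS/dt = R - Q with the linear law Q = S / k."""
    storage, discharge = s0, []
    for r in recharge:
        q = storage / k
        storage += dt * (r - q)
        discharge.append(q)
    return np.array(discharge)

def conduit_velocity(head_drop, length, diameter, friction=0.03, g=9.81):
    """Darcy-Weisbach estimate of turbulent velocity in a water-filled conduit."""
    return np.sqrt(2.0 * g * head_drop * diameter / (friction * length))

recharge = np.concatenate([np.full(5, 2.0), np.zeros(25)])  # a short sinkhole recharge pulse
q = linear_reservoir(recharge)
print("peak spring discharge:", round(q.max(), 3))
print("conduit velocity (m/s):", round(conduit_velocity(5.0, 2000.0, 2.0), 2))
```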
Flexible multibody simulation of automotive systems with non-modal model reduction techniques
NASA Astrophysics Data System (ADS)
Shiiba, Taichi; Fehr, Jörg; Eberhard, Peter
2012-12-01
The stiffness of the body structure of an automobile has a strong relationship with its noise, vibration, and harshness (NVH) characteristics. In this paper, the effect of the stiffness of the body structure upon ride quality is discussed with flexible multibody dynamics. In flexible multibody simulation, the local elastic deformation of the vehicle has traditionally been described with modal shape functions. Recently, linear model reduction techniques from system dynamics and mathematics came into focus to find more sophisticated elastic shape functions. In this work, the NVH-relevant states of a racing kart are simulated, whereas the elastic shape functions are calculated with modern model reduction techniques like moment matching by projection on Krylov subspaces, singular value decomposition-based reduction techniques, and combinations of those. The whole elastic multibody vehicle model consisting of tyres, steering, axle, etc. is considered, and an excitation with vibration characteristics in a wide frequency range is evaluated in this paper. The accuracy and the calculation performance of those modern model reduction techniques are investigated, including a comparison with the modal reduction approach.
Guo, Ruiying; Nendel, Claas; Rahn, Clive; Jiang, Chunguang; Chen, Qing
2010-06-01
Vegetable production in China is associated with high inputs of nitrogen, posing a risk of losses to the environment. Organic matter mineralisation is a considerable source of nitrogen (N) which is hard to quantify. In a two-year greenhouse cucumber experiment with different N treatments in North China, non-observed pathways of the N cycle were estimated using the EU-Rotate_N simulation model. EU-Rotate_N was calibrated against crop dry matter and soil moisture data to predict crop N uptake, soil mineral N contents, N mineralisation and N loss. Crop N uptake (Modelling Efficiencies (ME) between 0.80 and 0.92) and soil mineral N contents in different soil layers (ME between 0.24 and 0.74) were satisfactorily simulated by the model for all N treatments except for the traditional N management. The model predicted high N mineralisation rates and N leaching losses, suggesting that previously published estimates of N leaching for these production systems strongly underestimated the mineralisation of N from organic matter. Copyright 2010 Elsevier Ltd. All rights reserved.
A study of different modeling choices for simulating platelets within the immersed boundary method
Shankar, Varun; Wright, Grady B.; Fogelson, Aaron L.; Kirby, Robert M.
2012-01-01
The Immersed Boundary (IB) method is a widely-used numerical methodology for the simulation of fluid–structure interaction problems. The IB method utilizes an Eulerian discretization for the fluid equations of motion while maintaining a Lagrangian representation of structural objects. Operators are defined for transmitting information (forces and velocities) between these two representations. Most IB simulations represent their structures with piecewise linear approximations and utilize Hookean spring models to approximate structural forces. Our specific motivation is the modeling of platelets in hemodynamic flows. In this paper, we study two alternative representations – radial basis functions (RBFs) and Fourier-based (trigonometric polynomials and spherical harmonics) representations – for the modeling of platelets in two and three dimensions within the IB framework, and compare our results with the traditional piecewise linear approximation methodology. For different representative shapes, we examine the geometric modeling errors (position and normal vectors), force computation errors, and computational cost and provide an engineering trade-off strategy for when and why one might select to employ these different representations. PMID:23585704
Human Papillomavirus Vaccination at a Time of Changing Sexual Behavior
Lazzarato, Fulvio; Brisson, Marc; Franceschi, Silvia
2016-01-01
Human papillomavirus (HPV) prevalence varies widely worldwide. We used a transmission model to show links between age-specific sexual patterns and HPV vaccination effectiveness. We considered rural India and the United States as examples of 2 heterosexual populations with traditional age-specific sexual behavior and gender-similar age-specific sexual behavior, respectively. We simulated these populations by using age-specific rates of sexual activity and age differences between sexual partners and found that transitions from traditional to gender-similar sexual behavior in women <35 years of age can result in increased (2.6-fold in our study) HPV16 prevalence. Our model shows that reductions in HPV16 prevalence are larger if vaccination occurs in populations before transitions in sexual behavior and that increased risk for HPV infection attributable to transition is preventable by early vaccination. Our study highlights the importance of using time-limited opportunities to introduce HPV vaccination in traditional populations before changes in age-specific sexual patterns occur. PMID:26691673
Human Papillomavirus Vaccination at a Time of Changing Sexual Behavior.
Baussano, Iacopo; Lazzarato, Fulvio; Brisson, Marc; Franceschi, Silvia
2016-01-01
Human papillomavirus (HPV) prevalence varies widely worldwide. We used a transmission model to show links between age-specific sexual patterns and HPV vaccination effectiveness. We considered rural India and the United States as examples of 2 heterosexual populations with traditional age-specific sexual behavior and gender-similar age-specific sexual behavior, respectively. We simulated these populations by using age-specific rates of sexual activity and age differences between sexual partners and found that transitions from traditional to gender-similar sexual behavior in women <35 years of age can result in increased (2.6-fold in our study) HPV16 prevalence. Our model shows that reductions in HPV16 prevalence are larger if vaccination occurs in populations before transitions in sexual behavior and that increased risk for HPV infection attributable to transition is preventable by early vaccination. Our study highlights the importance of using time-limited opportunities to introduce HPV vaccination in traditional populations before changes in age-specific sexual patterns occur.
Krejci, Caroline C; Stone, Richard T; Dorneich, Michael C; Gilbert, Stephen B
2016-02-01
Factors influencing long-term viability of an intermediated regional food supply network (food hub) were modeled using agent-based modeling techniques informed by interview data gathered from food hub participants. Previous analyses of food hub dynamics focused primarily on financial drivers rather than social factors and have not used mathematical models. Based on qualitative and quantitative data gathered from 22 customers and 11 vendors at a midwestern food hub, an agent-based model (ABM) was created with distinct consumer personas characterizing the range of consumer priorities. A comparison study determined if the ABM behaved differently than a model based on traditional economic assumptions. Further simulation studies assessed the effect of changes in parameters, such as producer reliability and the consumer profiles, on long-term food hub sustainability. The persona-based ABM model produced different and more resilient results than the more traditional way of modeling consumers. Reduced producer reliability significantly reduced trade; in some instances, a modest reduction in reliability threatened the sustainability of the system. Finally, a modest increase in price-driven consumers at the outset of the simulation quickly resulted in those consumers becoming a majority of the overall customer base. Results suggest that social factors, such as desire to support the community, can be more important than financial factors. An ABM of food hub dynamics, based on human factors data gathered from the field, can be a useful tool for policy decisions. Similar approaches can be used for modeling customer dynamics with other sustainable organizations. © 2015, Human Factors and Ergonomics Society.
Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.
2013-01-01
Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887
Educating the delivery of bad news in medicine: Preceptorship versus simulation
Jacques, Andrew P; Adkins, Eric J; Knepel, Sheri; Boulger, Creagh; Miller, Jessica; Bahner, David P
2011-01-01
Simulation experiences have begun to replace traditional education models of teaching the skill of bad news delivery in medical education. The tiered apprenticeship model of medical education emphasizes experiential learning. Studies have described a lack of support in bad news delivery and inadequacy of training in this important clinical skill as well as poor familial comprehension and dissatisfaction on the part of physicians in training regarding the resident delivery of bad news. Many residency training programs lacked a formalized training curriculum in the delivery of bad news. Simulation teaching experiences may address these noted clinical deficits in the delivery of bad news to patients and their families. Unique experiences can be role-played with this educational technique to simulate perceived learner deficits. A variety of scenarios can be constructed within the framework of the simulation training method to address specific cultural and religious responses to bad news in the medical setting. Even potentially explosive and violent scenarios can be role-played in order to prepare physicians for these rare and difficult situations. While simulation experiences cannot supplant the model of positive, real-life clinical teaching in the delivery of bad news, simulation of clinical scenarios with scripting, self-reflection, and peer-to-peer feedback can be powerful educational tools. Simulation training can help to develop the skills needed to effectively and empathetically deliver bad news to patients and families in medical practice. PMID:22229135
Educating the delivery of bad news in medicine: Preceptorship versus simulation.
Jacques, Andrew P; Adkins, Eric J; Knepel, Sheri; Boulger, Creagh; Miller, Jessica; Bahner, David P
2011-07-01
Simulation experiences have begun to replace traditional education models of teaching the skill of bad news delivery in medical education. The tiered apprenticeship model of medical education emphasizes experiential learning. Studies have described a lack of support in bad news delivery and inadequacy of training in this important clinical skill as well as poor familial comprehension and dissatisfaction on the part of physicians in training regarding the resident delivery of bad news. Many residency training programs lacked a formalized training curriculum in the delivery of bad news. Simulation teaching experiences may address these noted clinical deficits in the delivery of bad news to patients and their families. Unique experiences can be role-played with this educational technique to simulate perceived learner deficits. A variety of scenarios can be constructed within the framework of the simulation training method to address specific cultural and religious responses to bad news in the medical setting. Even potentially explosive and violent scenarios can be role-played in order to prepare physicians for these rare and difficult situations. While simulation experiences cannot supplant the model of positive, real-life clinical teaching in the delivery of bad news, simulation of clinical scenarios with scripting, self-reflection, and peer-to-peer feedback can be powerful educational tools. Simulation training can help to develop the skills needed to effectively and empathetically deliver bad news to patients and families in medical practice.
Use of High-resolution WRF Simulations to Forecast Lightning Threat
NASA Technical Reports Server (NTRS)
McCaul, William E.; LaCasse, K.; Goodman, S. J.
2006-01-01
Recent observational studies have confirmed the existence of a robust statistical relationship between lightning flash rates and the amount of large precipitating ice hydrometeors in storms. This relationship is exploited, in conjunction with the capabilities of recent forecast models such as WRF, to forecast the threat of lightning from convective storms using the output fields from the model forecasts. The simulated vertical flux of graupel at -15C is used in this study as a proxy for charge separation processes and their associated lightning risk. Six-h simulations are conducted for a number of case studies for which three-dimensional lightning validation data from the North Alabama Lightning Mapping Array are available. Experiments indicate that initialization of the WRF model on a 2 km grid using Eta boundary conditions, Doppler radar radial velocity and reflectivity fields, and METAR and ACARS data yield the most realistic simulations. An array of subjective and objective statistical metrics are employed to document the utility of the WRF forecasts. The simulation results are also compared to other more traditional means of forecasting convective storms, such as those based on inspection of the convective available potential energy field.
Simulation of Urban Rainfall-Runoff in Piedmont Cities: A Case Study in Jinan City, China
NASA Astrophysics Data System (ADS)
Chang, X.; Xu, Z.; Zhao, G.; Li, H.
2017-12-01
Over the past decades, frequent flooding disasters in urban areas have resulted in catastrophic impacts such as casualties and property damage, especially in piedmont cities because of their specific topography. In this study, a piedmont urban flooding model was developed for the Huangtaiqiao catchment based on SWMM. The sub-catchments in this piedmont area were divided into mountainous, plain and main urban areas according to variations in the underlying surface topography. The impact of different routing modes and channel roughness on the simulation results was quantitatively analyzed under different scenarios, and a genetic algorithm was used to optimize the model parameters. Results show that the simulation is poor (with a mean Nash coefficient of 0.61) when using the traditional routing mode in the SWMM model, which ignores terrain variation in the piedmont area. However, when differences in routing mode, percent routed and channel roughness are considered, the prediction precision of the model is significantly increased (with a mean Nash coefficient of 0.86), indicating that differences in surface topography significantly affect the simulation results in piedmont cities. These results provide a scientific basis and technical support for rainfall-runoff simulation, flood control and disaster alleviation in piedmont cities.
Interactive visualization to advance earthquake simulation
Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.
2008-01-01
The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, to evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists, who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhäuser 2008.
Fuzzy PID control algorithm based on PSO and application in BLDC motor
NASA Astrophysics Data System (ADS)
Lin, Sen; Wang, Guanglong
2017-06-01
A fuzzy PID control algorithm based on improved particle swarm optimization (PSO) is studied for brushless DC (BLDC) motor control, offering high accuracy, good anti-jamming capability and good steady-state accuracy compared with traditional PID control. A mathematical and simulation model of the BLDC motor is established in Simulink, and the speed-loop fuzzy PID controller is designed. The simulation results show that the fuzzy PID control algorithm based on PSO has higher stability, higher control precision and a faster dynamic response.
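A rough sketch of the underlying idea of tuning PID gains with particle swarm optimization, applied to a generic first-order plant with a plain (non-fuzzy) PID controller; the plant, swarm settings, and cost function are illustrative assumptions, not the BLDC motor model of the paper.

```python
import numpy as np

def step_response_cost(gains, t_end=5.0, dt=0.01, tau=0.5, k_plant=1.0):
    """Integral absolute error of a PID-controlled first-order plant (illustrative plant)."""
    kp, ki, kd = gains
    y, integ, prev_err, iae = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y                        # unit-step setpoint
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + k_plant * u) / tau   # first-order plant: tau*dy/dt = -y + K*u
        iae += abs(err) * dt
        prev_err = err
    return iae

def pso(cost, bounds, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best particle swarm optimization over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

gains, best = pso(step_response_cost, bounds=[(0, 20), (0, 10), (0, 2)])
print("tuned (Kp, Ki, Kd):", np.round(gains, 2), "IAE:", round(best, 3))
```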
NASA Astrophysics Data System (ADS)
Goldenberg, J.; Libai, B.; Solomon, S.; Jan, N.; Stauffer, D.
2000-09-01
A percolation model is presented, with computer simulations for illustrations, to show how the sales of a new product may penetrate the consumer market. We review the traditional approach in the marketing literature, which is based on differential or difference equations similar to the logistic equation (Bass, Manage. Sci. 15 (1969) 215). This mean-field approach is contrasted with the discrete percolation on a lattice, with simulations of "social percolation" (Solomon et al., Physica A 277 (2000) 239) in two to five dimensions giving power laws instead of exponential growth, and strong fluctuations right at the percolation threshold.
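The mean-field benchmark referred to above (the Bass 1969 diffusion model) reduces to a one-line difference equation; a minimal sketch follows, with illustrative parameter values.

```python
import numpy as np

def bass_adopters(p=0.03, q=0.38, m=1.0, steps=40):
    """Cumulative adoption F(t) from the Bass difference equation:
    F(t+1) = F(t) + (p + q*F(t)/m) * (m - F(t))   (mean-field, no lattice, no fluctuations)."""
    F = np.zeros(steps)
    for t in range(steps - 1):
        F[t + 1] = F[t] + (p + q * F[t] / m) * (m - F[t])
    return F

F = bass_adopters()
print("cumulative adoption after 10, 20, 40 periods:", np.round(F[[9, 19, 39]], 3))
```

The smooth S-shaped curve this produces is exactly the exponential-like growth that the lattice percolation simulations replace with power laws and strong fluctuations near the threshold.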
NASA Astrophysics Data System (ADS)
Aburas, Maher Milad; Ho, Yuek Ming; Ramli, Mohammad Firuz; Ash'aari, Zulfa Hanan
2017-07-01
The creation of an accurate simulation of future urban growth is considered one of the most important challenges in urban studies that involve spatial modeling. The purpose of this study is to improve the simulation capability of an integrated CA-Markov Chain (CA-MC) model using CA-MC based on the Analytical Hierarchy Process (AHP) and CA-MC based on Frequency Ratio (FR), both applied in Seremban, Malaysia, as well as to compare the performance and accuracy of the traditional and hybrid models. Various physical, socio-economic, utility, and environmental criteria were used as predictors, including elevation, slope, soil texture, population density, distance to commercial area, distance to educational area, distance to residential area, distance to industrial area, distance to roads, distance to highway, distance to railway, distance to power line, distance to stream, and land cover. For calibration, three models were applied to simulate urban growth trends in 2010; the actual data for 2010 were used for model validation using the Relative Operating Characteristic (ROC) and Kappa coefficient methods. Consequently, future urban growth maps for 2020 and 2030 were created. The validation findings confirm that integrating the CA-MC model with the FR model and employing the significant driving forces of urban growth in the simulation process improved the simulation capability of the CA-MC model. This study provides a novel approach for improving the CA-MC model based on FR, which will provide powerful support to planners and decision-makers in the development of future sustainable urban planning.
Simulating the Use of Alternative Fuels in a Turbofan Engine
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Chin, Jeffrey Chevoor; Liu, Yuan
2013-01-01
The interest in alternative fuels for aviation has created a need to evaluate their effect on engine performance. The use of dynamic turbofan engine simulations enables the comparative modeling of the performance of these fuels on a realistic test bed in terms of dynamic response and control compared to traditional fuels. The analysis of overall engine performance and response characteristics can lead to a determination of the practicality of using specific alternative fuels in commercial aircraft. This paper describes a procedure to model the use of alternative fuels in a large commercial turbofan engine, and quantifies their effects on engine and vehicle performance. In addition, the modeling effort notionally demonstrates that engine performance may be maintained by modifying engine control system software parameters to account for the alternative fuel.
Electron-phonon interaction within classical molecular dynamics
Tamm, A.; Samolyuk, G.; Correa, A. A.; ...
2016-07-14
Here, we present a model for nonadiabatic classical molecular dynamics simulations that captures with high accuracy the wave-vector q dependence of the phonon lifetimes, in agreement with quantum mechanics calculations. It is based on a local view of the e-ph interaction where individual atom dynamics couples to electrons via a damping term that is obtained as the low-velocity limit of the stopping power of a moving ion in a host. The model is parameter free, as its components are derived from ab initio-type calculations, is readily extended to the case of alloys, and is adequate for large-scale molecular dynamics computer simulations. We also show how this model removes some oversimplifications of the traditional ionic damped dynamics commonly used to describe situations beyond the Born-Oppenheimer approximation.
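A minimal sketch of the damping-term idea for a single one-dimensional atom: a velocity-Verlet step with an extra drag force -beta*v added to a harmonic interatomic force; the friction coefficient and all other numbers are illustrative, not values derived from ab initio calculations as in the paper.

```python
import numpy as np

def damped_verlet(x, v, mass, k_spring, beta, dt, steps):
    """Velocity-Verlet integration with an electron-phonon-like drag force -beta * v."""
    def force(x, v):
        return -k_spring * x - beta * v   # harmonic restoring force plus electronic drag
    traj = []
    f = force(x, v)
    for _ in range(steps):
        x += v * dt + 0.5 * f / mass * dt ** 2
        # velocity-dependent drag evaluated at a half-step velocity estimate (sketch only)
        f_new = force(x, v + 0.5 * f / mass * dt)
        v += 0.5 * (f + f_new) / mass * dt
        f = f_new
        traj.append(x)
    return np.array(traj)

traj = damped_verlet(x=1.0, v=0.0, mass=1.0, k_spring=1.0, beta=0.05, dt=0.05, steps=2000)
print("late-time oscillation amplitude:", round(np.abs(traj[-200:]).max(), 3))
```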
NASA Technical Reports Server (NTRS)
Brown, Robert B.
1994-01-01
A software pilot model for Space Shuttle proximity operations is developed, utilizing fuzzy logic. The model is designed to emulate a human pilot during the terminal phase of a Space Shuttle approach to the Space Station. The model uses the same sensory information available to a human pilot and is based upon existing piloting rules and techniques determined from analysis of human pilot performance. Such a model is needed to generate numerous rendezvous simulations to various Space Station assembly stages for analysis of current NASA procedures and plume impingement loads on the Space Station. The advantages of a fuzzy logic pilot model are demonstrated by comparing its performance with NASA's man-in-the-loop simulations and with a similar model based upon traditional Boolean logic. The fuzzy model is shown to respond well from a number of initial conditions, with results typical of an average human. In addition, the ability to model different individual piloting techniques and new piloting rules is demonstrated.
Model-order reduction of lumped parameter systems via fractional calculus
NASA Astrophysics Data System (ADS)
Hollkamp, John P.; Sen, Mihir; Semperlotti, Fabio
2018-04-01
This study investigates the use of fractional order differential models to simulate the dynamic response of non-homogeneous discrete systems and to achieve efficient and accurate model order reduction. The traditional integer order approach to the simulation of non-homogeneous systems dictates the use of numerical solutions and often imposes stringent compromises between accuracy and computational performance. Fractional calculus provides an alternative approach where complex dynamical systems can be modeled with compact fractional equations that not only can still guarantee analytical solutions, but can also enable high levels of order reduction without compromising on accuracy. Different approaches are explored in order to transform the integer order model into a reduced order fractional model able to match the dynamic response of the initial system. Analytical and numerical results show that, under certain conditions, an exact match is possible and the resulting fractional differential models have both a complex and frequency-dependent order of the differential operator. The implications of this type of approach for both model order reduction and model synthesis are discussed.
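For readers unfamiliar with the notation, one commonly used definition of a fractional derivative of order \(\alpha\) is the Caputo form shown below; this is background only, and the paper's specific operator, which has a complex and frequency-dependent order, is not reproduced here:

\[
{}^{C}\!D_t^{\alpha} x(t) \;=\; \frac{1}{\Gamma(n-\alpha)} \int_0^t \frac{x^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau , \qquad n-1 < \alpha < n ,
\]

whose memory kernel is what allows a single compact fractional equation to mimic the collective response of many coupled integer-order degrees of freedom.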
A family of dynamic models for large-eddy simulation
NASA Technical Reports Server (NTRS)
Carati, D.; Jansen, K.; Lund, T.
1995-01-01
Since its first application, the dynamic procedure has been recognized as an effective means to compute rather than prescribe the unknown coefficients that appear in a subgrid-scale model for Large-Eddy Simulation (LES). The dynamic procedure is usually used to determine the nondimensional coefficient in the Smagorinsky (1963) model. In reality the procedure is quite general and it is not limited to the Smagorinsky model by any theoretical or practical constraints. The purpose of this note is to consider a generalized family of dynamic eddy viscosity models that do not necessarily rely on the local equilibrium assumption built into the Smagorinsky model. By invoking an inertial range assumption, it will be shown that the coefficients in the new models need not be nondimensional. This additional degree of freedom allows the use of models that are scaled on traditionally unknown quantities such as the dissipation rate. In certain cases, the dynamic models with dimensional coefficients are simpler to implement, and allow for a 30% reduction in the number of required filtering operations.
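As background for the generalization, the standard dynamic procedure determines the Smagorinsky coefficient from the resolved field through the Germano identity; in Lilly's least-squares form (sign and averaging conventions vary between references):

\[
C \;=\; \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle}, \qquad
L_{ij} \;=\; \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i \hat{\bar{u}}_j , \qquad
M_{ij} \;=\; 2\bar{\Delta}^2\!\left( \widehat{|\bar{S}|\bar{S}_{ij}} - \alpha^2 |\hat{\bar{S}}| \hat{\bar{S}}_{ij} \right),
\]

where the hat denotes the test filter, \(\alpha\) the test-to-grid filter-width ratio, and \(\langle\cdot\rangle\) a suitable averaging. The generalized models considered in the note replace the \(\bar{\Delta}^2 |\bar{S}|\) scaling with alternatives based on, for example, the dissipation rate, so the resulting coefficient need not be nondimensional.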
NASA Astrophysics Data System (ADS)
Munigety, Caleb Ronald
2018-04-01
The traditional traffic microscopic simulation models consider the driver and vehicle as a single unit to represent the movements of drivers in a traffic stream. Due to this very fact, the traditional car-following models have driver behavior related parameters but ignore the vehicle related aspects. This approach is appropriate for homogeneous traffic conditions where the car is the major vehicle type. However, in heterogeneous traffic conditions where multiple vehicle types are present, it becomes important to incorporate the vehicle related parameters explicitly to account for the varying dynamic and static characteristics. Thus, this paper presents a driver-vehicle integrated model hinged on the principles of a physics-based spring-mass-damper mechanical system. While the spring constant represents the driver’s aggressiveness, the damping constant and the mass component take care of the stability and size/weight related aspects, respectively. The proposed model, when tested, behaved pragmatically in representing the vehicle-type dependent longitudinal movements of vehicles.
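A minimal sketch of a spring-mass-damper car-following update of the kind described; the coefficients, desired-gap rule, and numbers below are illustrative placeholders, not the paper's calibrated model.

```python
def follower_acceleration(gap, rel_speed, mass, k, c, desired_gap):
    """Spring-mass-damper analogy for car following: the 'spring' (k, driver
    aggressiveness) pulls the follower toward its desired gap, the 'damper'
    (c) resists changes in relative speed, and the mass term brings vehicle
    size/weight into the response."""
    return (k * (gap - desired_gap) - c * rel_speed) / mass

# Illustrative numbers only: a heavy truck 20 m behind its leader, closing at
# 1.5 m/s, with a 25 m desired gap.
a = follower_acceleration(gap=20.0, rel_speed=1.5, mass=12000.0,
                          k=3000.0, c=8000.0, desired_gap=25.0)
print(round(a, 2))   # negative value: the truck decelerates
```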
Traversari, Roberto; Goedhart, Rien; Schraagen, Jan Maarten
2013-01-01
The objective is to evaluate a traditionally designed operating room using simulation of various surgical workflows. A literature search showed that there is no evidence for an optimal operating room layout regarding the position and size of an ultraclean ventilation (UCV) canopy with a separate preparation room for laying out instruments and in which patients are induced in the operating room itself. Neither was literature found reporting on process simulation being used for this application. Many technical guidelines and designs have mainly evolved over time, and there is no evidence on whether the proposed measures are also effective for the optimization of the layout for workflows. The study was conducted by applying observational techniques to simulated typical surgical procedures. Process simulations which included complete surgical teams and the equipment required for the intervention were carried out for four typical interventions. Four observers used a form to record conflicts with the clean area boundaries and the height of the supply bridge. Preferences for particular layouts were discussed with the surgical team after each simulated procedure. We established that a clean area measuring 3 × 3 m and a supply bridge height of 2.05 m was satisfactory for most situations, provided a movable operation table is used. The only cases in which conflicts with the supply bridge were observed were during the use of a surgical robot (Da Vinci) and a surgical microscope. During multiple trauma interventions, bottlenecks regarding the dimensions of the clean area will probably arise. The process simulation of four typical interventions has led to significantly different operating room layouts than were arrived at through the traditional design process. Keywords: evidence-based design, human factors, work environment, operating room, traditional design, process simulation, surgical workflows. Preferred Citation: Traversari, R., Goedhart, R., & Schraagen, J. M. (2013). Process simulation during the design process makes the difference: Process simulations applied to a traditional design. Health Environments Research & Design Journal, 6(2), pp. 58-76.
Developmental forecast simulations with the Eta-CMAQ modeling system over the continental U.S. were initiated in 2005. This paper presents an evaluation of surface O3 forecasts over different regions of the continental U.S. In addition to the traditional operational e...
ERIC Educational Resources Information Center
Fraley, R. Chris; Roisman, Glenn I.; Haltigan, John D.
2013-01-01
Psychologists have long debated the role of early experience in social and cognitive development. However, traditional approaches to studying this issue are not well positioned to address this debate. The authors present simulations that indicate that the associations between early experiences and later outcomes should approach different…
Integrated Personal Protective Equipment Standards Support Model
2008-04-01
traditional SCBA showed that the distribution of the weight is important as well. Twelve firefighters performed simulated fire-fighting and rescue exercises...respiratory equipment standards and five National Fire Protection Association (NFPA) protective suit, clothing, and respirator standards. The...respirators. The clothing standards were for protective ensembles for urban search and rescue operations, open circuit SCBA for fire and emergency
A Probabilistic Framework for the Validation and Certification of Computer Simulations
NASA Technical Reports Server (NTRS)
Ghanem, Roger; Knio, Omar
2000-01-01
The paper presents a methodology for quantifying, propagating, and managing the uncertainty in the data required to initialize computer simulations of complex phenomena. The purpose of the methodology is to permit the quantitative assessment of a certification level to be associated with the predictions from the simulations, as well as the design of a data acquisition strategy to achieve a target level of certification. The value of a methodology that can address the above issues is obvious, especially in light of the trend in the availability of computational resources, as well as the trend in sensor technology. These two trends make it possible to probe physical phenomena both with physical sensors and with complex models, at previously inconceivable levels. With these new abilities arises the need to develop the knowledge to integrate the information from sensors and computer simulations. This is achieved in the present work by tracing both activities back to a level of abstraction that highlights their commonalities, thus allowing them to be manipulated in a mathematically consistent fashion. In particular, the mathematical theory underlying computer simulations has long been associated with partial differential equations and functional analysis concepts such as Hilbert spaces and orthogonal projections. By relying on a probabilistic framework for the modeling of data, a Hilbert space framework emerges that permits the modeling of coefficients in the governing equations as random variables, or equivalently, as elements in a Hilbert space. This permits the development of an approximation theory for probabilistic problems that parallels that of deterministic approximation theory. According to this formalism, the solution of the problem is identified by its projection on a basis in the Hilbert space of random variables, as opposed to more traditional techniques where the solution is approximated by its first or second-order statistics. The present representation, in addition to capturing significantly more information than the traditional approach, facilitates the linkage between different interacting stochastic systems as is typically observed in real-life situations.
Enabling parallel simulation of large-scale HPC network systems
Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...
2016-04-07
Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.
NASA Astrophysics Data System (ADS)
Mechem, David B.; Giangrande, Scott E.
2018-03-01
Controls on precipitation onset and the transition from shallow cumulus to congestus are explored using a suite of 16 large-eddy simulations based on the 25 May 2011 event from the Midlatitude Continental Convective Clouds Experiment (MC3E). The thermodynamic variables in the model are relaxed at various timescales to observationally constrained temperature and moisture profiles in order to better reproduce the observed behavior of precipitation onset and total precipitation. Three of the simulations stand out as best matching the precipitation observations and also perform well for independent comparisons of cloud fraction, precipitation area fraction, and evolution of cloud top occurrence. All three simulations exhibit a destabilization over time, which leads to a transition to deeper clouds, but the evolution of traditional stability metrics by themselves is not able to explain differences in the simulations. Conditionally sampled cloud properties (in particular, mean cloud buoyancy), however, do elicit differences among the simulations. The inability of environmental profiles alone to discern subtle differences among the simulations and the usefulness of conditionally sampled model quantities argue for hybrid observational/modeling approaches. These combined approaches enable a more complete physical understanding of cloud systems by combining observational sampling of time-varying three-dimensional meteorological quantities and cloud properties, along with detailed representation of cloud microphysical and dynamical processes from numerical models.
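A minimal sketch of the profile relaxation (nudging) that the simulations vary, assuming the model's potential temperature field is stored as a 3-D array ordered (z, y, x) and an observed target profile is available; the names and the example timescale are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def nudging_tendency(theta_model, theta_obs, tau_seconds):
    """Relax the horizontally averaged model profile toward the observed
    profile on timescale tau; a shorter tau forces closer agreement with the
    observationally constrained sounding."""
    theta_mean = theta_model.mean(axis=(1, 2))            # domain-mean profile (z)
    return (theta_obs - theta_mean)[:, None, None] / tau_seconds

# Example: a 3-hour relaxation timescale applied every model time step dt
# theta += (physics_tendency + nudging_tendency(theta, theta_obs, 3 * 3600.0)) * dt
```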
The TensorMol-0.1 model chemistry: a neural network augmented with long-range physics.
Yao, Kun; Herr, John E; Toth, David W; Mckintyre, Ryker; Parkhill, John
2018-02-28
Traditional force fields cannot model chemical reactivity, and suffer from low generality without re-fitting. Neural network potentials promise to address these problems, offering energies and forces with near ab initio accuracy at low cost. However, a data-driven approach is naturally inefficient for long-range interatomic forces that have simple physical formulas. In this manuscript we construct a hybrid model chemistry consisting of a nearsighted neural network potential with screened long-range electrostatic and van der Waals physics. This trained potential, simply dubbed "TensorMol-0.1", is offered in an open-source Python package capable of many of the simulation types commonly used to study chemistry: geometry optimizations, harmonic spectra, open or periodic molecular dynamics, Monte Carlo, and nudged elastic band calculations. We describe the robustness and speed of the package, demonstrating its millihartree accuracy and scalability to tens of thousands of atoms on ordinary laptops. We demonstrate the performance of the model by reproducing vibrational spectra, and simulating the molecular dynamics of a protein. Our comparisons with electronic structure theory and experimental data demonstrate that neural network molecular dynamics is poised to become an important tool for molecular simulation, lowering the resource barrier to simulating chemistry.
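The decomposition at the heart of such hybrid potentials can be sketched as a learned short-range term plus analytic long-range terms. The Python sketch below is a minimal illustration only; the function name, screening form, and dispersion damping are assumptions for exposition and are not the TensorMol-0.1 API.

```python
import math
import numpy as np

def hybrid_energy(coords, charges, nn_short_range, alpha=0.2, c6=6.0):
    """Hybrid model chemistry: a nearsighted neural network supplies the
    short-range energy, while screened electrostatics and a damped dispersion
    term are added analytically (illustrative functional forms)."""
    e_long = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = float(np.linalg.norm(coords[i] - coords[j]))
            e_long += charges[i] * charges[j] * math.erf(alpha * r) / r  # screened Coulomb
            e_long += -c6 / r**6 * (1.0 - math.exp(-r))                  # damped dispersion
    return nn_short_range(coords) + e_long

# Illustrative use with a stand-in "network" (a constant function):
# e = hybrid_energy(np.random.rand(5, 3) * 3.0, np.zeros(5), lambda c: 0.0)
```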
Analysis of Gas-Particle Flows through Multi-Scale Simulations
NASA Astrophysics Data System (ADS)
Gu, Yile
Multi-scale structures are inherent in gas-solid flows, which render the modeling efforts challenging. On one hand, detailed simulations where the fine structures are resolved and particle properties can be directly specified can account for complex flow behaviors, but they are too computationally expensive to apply to larger systems. On the other hand, coarse-grained simulations demand much less computation, but they necessitate constitutive models which are often not readily available for given particle properties. The present study focuses on addressing this issue, as it seeks to provide a general framework through which one can obtain the required constitutive models from detailed simulations. To demonstrate the viability of this general framework in which closures can be proposed for different particle properties, we focus on the van der Waals force of interaction between particles. We start with Computational Fluid Dynamics (CFD) - Discrete Element Method (DEM) simulations where the fine structures are resolved and the van der Waals force between particles can be directly specified, and obtain closures for stress and drag that are required for coarse-grained simulations. Specifically, we develop a new cohesion model that appropriately accounts for the van der Waals force between particles to be used for CFD-DEM simulations. We then validate this cohesion model and the CFD-DEM approach by showing that it can qualitatively capture experimental results where the addition of small particles to gas fluidization reduces bubble sizes. Based on the DEM and CFD-DEM simulation results, we propose stress models that account for the van der Waals force between particles. Finally, we apply machine learning, specifically neural networks, to obtain a drag model that captures the effects from fine structures and inter-particle cohesion. We show that this novel approach using neural networks, which can be readily applied to closures other than drag, can take advantage of the large amount of data generated from simulations, and therefore offer superior modeling performance over traditional approaches.
A Comparison of Two Balance Calibration Model Building Methods
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Ulbrich, Norbert
2007-01-01
Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inversion simulation of groundwater has been widely applied in groundwater studies. Compared to traditional forward modeling, inverse modeling offers more room for investigation. Zonation and cell-by-cell inversion are the conventional methods. The traditional inverse modeling method often uses software to divide the model into several zones, so only a few parameters need to be inverted; however, the resulting distribution is usually too simple for the modeler, and the simulation results deviate. Cell-by-cell inversion would, in theory, recover the most realistic parameter distribution, but it demands great computational effort and a large quantity of survey data for geostatistical simulation of the area. Compared to those methods, the pilot point method distributes a set of points throughout the different model domains for parameter estimation. Property values are assigned to model cells by Kriging to ensure that the geological units retain the heterogeneity of the parameters. This reduces the geostatistical data requirements for the simulation area and bridges the gap between the above methods. Pilot points not only save calculation time and improve the goodness of fit, but also reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we use pilot points in a field whose structural heterogeneity and hydraulic parameters were unknown, and we compare the inversion modeling results of the zonation and pilot point methods. Through comparative analysis, we explore the characteristics of pilot points in groundwater inversion models. First, the modeler generates an initial spatially correlated field from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6. Second, Kriging is used to obtain the values of the field function (hydraulic conductivity) over the model domain on the basis of its values at the measurement and pilot point locations; we then assign pilot points to the interpolated field, which has been divided into 4 zones, and add a range of disturbance values to the inversion targets to calculate hydraulic conductivity. Third, after the inversion calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is thus an optimization problem. After the inversion modeling, the following major conclusions can be drawn: (1) In a field with heterogeneous structure, the results of the pilot point method are more realistic: the parameters fit better, and the numerical simulation is more stable (stable residual distribution). Compared to zonation, it better reflects the heterogeneity of the study field. (2) The pilot point method ensures that each parameter is sensitive and not entirely dependent on other parameters, which guarantees the relative independence and authenticity of the parameter estimation results. However, it costs more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
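To make the idea concrete, here is a minimal sketch (not the study's Groundwater Vistas/PEST workflow) of how parameter values estimated at a few pilot points can be spread over a model grid; SciPy's RBFInterpolator stands in for the Kriging step, and all coordinates and conductivity values are illustrative placeholders.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Pilot point locations (x, y) and log10 hydraulic conductivity values that an
# inversion code (e.g. PEST) would adjust; the numbers below are placeholders.
pilot_xy = np.array([[100., 200.], [400., 150.], [250., 400.], [500., 450.]])
pilot_logk = np.array([-3.2, -4.1, -3.7, -4.5])

# Spread the pilot-point values onto every model cell centre.
xx, yy = np.meshgrid(np.arange(0., 600., 10.), np.arange(0., 500., 10.))
cells = np.column_stack([xx.ravel(), yy.ravel()])
logk_field = RBFInterpolator(pilot_xy, pilot_logk)(cells).reshape(xx.shape)
k_field = 10.0 ** logk_field   # heterogeneous conductivity field for the flow model
```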
Fully Coupled Simulation of Lithium Ion Battery Cell Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trembacki, Bradley L.; Murthy, Jayathi Y.; Roberts, Scott Alan
Lithium-ion battery particle-scale (non-porous electrode) simulations applied to resolved electrode geometries predict localized phenomena and can lead to better informed decisions on electrode design and manufacturing. This work develops and implements a fully-coupled finite volume methodology for the simulation of the electrochemical equations in a lithium-ion battery cell. The model implementation is used to investigate 3D battery electrode architectures that offer potential energy density and power density improvements over traditional layer-by-layer particle bed battery geometries. Advancement of micro-scale additive manufacturing techniques has made it possible to fabricate these 3D electrode microarchitectures. A variety of 3D battery electrode geometries are simulated and compared across various battery discharge rates and length scales in order to quantify performance trends and investigate geometrical factors that improve battery performance. The energy density and power density of the 3D battery microstructures are compared in several ways, including a uniform surface area to volume ratio comparison as well as a comparison requiring a minimum manufacturable feature size. Significant performance improvements over traditional particle bed electrode designs are observed, and electrode microarchitectures derived from minimal surfaces are shown to be superior. A reduced-order volume-averaged porous electrode theory formulation for these unique 3D batteries is also developed, allowing simulations on the full-battery scale. Electrode concentration gradients are modeled using the diffusion length method, and results for plate and cylinder electrode geometries are compared to particle-scale simulation results. Additionally, effective diffusion lengths that minimize error with respect to particle-scale results for gyroid and Schwarz P electrode microstructures are determined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redline, Erica Marie; Bolintineanu, Dan S.; Lane, J. Matthew
The aim of this study was to alter polymerization chemistry to improve network homogeneity in free-radical crosslinked systems. It was hypothesized that a reduction in heterogeneity of the network would lead to improved mechanical performance. Experiments and simulations were carried out to investigate the connection between polymerization chemistry, network structure and mechanical properties. Experiments were conducted on two different monomer systems - the first is a single monomer system, urethane dimethacrylate (UDMA), and the second is a two-monomer system consisting of bisphenol A glycidyl dimethacrylate (BisGMA) and triethylene glycol dimethacrylate (TEGDMA) in a ratio of 70/30 BisGMA/TEGDMA by weight. The methacrylate systems were crosslinked using traditional radical polymerization (TRP) with azobisisobutyronitrile (AIBN) or benzoyl peroxide (BPO) as an initiator; TRP systems were used as the control. The monomers were also cross-linked using activator regenerated by electron transfer atom transfer radical polymerization (ARGET ATRP) as a type of controlled radical polymerization (CRP). FTIR and DSC were used to monitor reaction kinetics of the systems. The networks were analyzed using NMR, DSC, X-ray diffraction (XRD), atomic force microscopy (AFM), and small angle X-ray scattering (SAXS). These techniques were employed in an attempt to quantify differences between the traditional and controlled radical polymerizations. While a quantitative methodology for characterizing network morphology was not established, SAXS and AFM have shown some promising initial results. Additionally, differences in mechanical behavior were observed between traditional and controlled radical polymerized thermosets in the BisGMA/TEGDMA system but not in the UDMA materials; this finding may be the result of network ductility variations between the two materials. Coarse-grained molecular dynamics simulations employing a novel model of the CRP reaction were carried out for the UDMA system, with parameters calibrated based on fully atomistic simulations of the UDMA monomer in the liquid state. Detailed metrics based on network graph theoretical approaches were implemented to quantify the bond network topology resulting from simulations. For a broad range of polymerization parameters, no discernible differences were seen between TRP and CRP UDMA simulations at equal conversions, although clear differences exist as a function of conversion. Both findings are consistent with experiments. Despite a number of shortcomings, these models have demonstrated the potential of molecular simulations for studying network topology in these systems.
Luo, Yong; Wu, Wenqi; Babu, Ravindra; Tang, Kanghua; Luo, Bing
2012-01-01
COMPASS is an indigenously developed Chinese global navigation satellite system and will share many features in common with GPS (Global Positioning System). Since ultra-tight GPS/INS (Inertial Navigation System) integration shows advantages over independent GPS receivers in many scenarios, the federated ultra-tight COMPASS/INS integration has been investigated in this paper, particularly by proposing a simplified prefilter model. Compared with a traditional prefilter model, the state space of this simplified system contains only carrier phase, carrier frequency and carrier frequency rate tracking errors. A two-quadrant arctangent discriminator output is used as a measurement. Since the code tracking error related parameters present in traditional prefilter models were excluded from the state space, the code/carrier divergence would destroy the carrier tracking process; therefore, an adaptive Kalman filter algorithm that tunes the process noise covariance matrix based on the state correction sequence was incorporated to compensate for the divergence. The federated ultra-tight COMPASS/INS integration was implemented with a hardware sampling system for the COMPASS intermediate frequency (IF) signal and the INS accelerometer and gyroscope signals. Field and simulation test results showed almost similar tracking and navigation performances for both the traditional prefilter model and the proposed system; however, the latter largely decreased the computational load. PMID:23012564
NASA Astrophysics Data System (ADS)
Liang, Yingjie; Chen, Wen
2018-04-01
The mean squared displacement (MSD) of the traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous time random walk model is employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotical waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special case includes the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function, and is observed to increase even more slowly than that of the logarithmic function model. The occurrence of very long waiting time in the case of the inverse Mittag-Leffler function has the largest probability compared with the power law model and the logarithmic function model. The Monte Carlo simulations of one dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting the general ultraslow random motion.
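A minimal sketch of the kind of single-particle continuous time random walk used in such Monte Carlo simulations: the walker alternates a heavy-tailed waiting time with a Gaussian jump. The Pareto-type waiting-time sampler below is only a generic stand-in; the paper's inverse Mittag-Leffler waiting-time density is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrw_path(t_max, wait_sampler, step_std=1.0):
    """Continuous-time random walk: alternate a (possibly very long) waiting
    time with a Gaussian jump; heavy-tailed waits produce ultraslow growth of
    the mean squared displacement."""
    t, x, times, positions = 0.0, 0.0, [0.0], [0.0]
    while t < t_max:
        t += wait_sampler()
        x += rng.normal(0.0, step_std)
        times.append(t)
        positions.append(x)
    return np.array(times), np.array(positions)

# Stand-in heavy-tailed waiting times with P(w > t) ~ t^(-1/2) for large t.
times, xs = ctrw_path(1e6, lambda: (1.0 - rng.random()) ** (-2.0) - 1.0)
```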
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-11-01
Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive set of objective evaluation metrics. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
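A minimal sketch of the final step (downhill simplex search over the sensitive parameters), using SciPy's Nelder-Mead implementation; the objective function, parameter values, and starting point below are illustrative placeholders, not the GCM evaluation pipeline described above.

```python
import numpy as np
from scipy.optimize import minimize

def objective(params):
    """Stand-in for the comprehensive evaluation metric: in practice this would
    run the model with the given cloud/convection parameters and return a
    misfit against observations; here a simple quadratic is used."""
    target = np.array([0.8, 1.5e-3, 600.0])
    return float(np.sum(((params - target) / target) ** 2))

# Step 3 of the "three-step" method: downhill simplex from the chosen start point.
x0 = np.array([0.5, 1.0e-3, 500.0])   # optimum initial values from step 2
result = minimize(objective, x0, method="Nelder-Mead",
                  options={"xatol": 1e-4, "fatol": 1e-6})
best_params = result.x
```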
Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Crown, William; Padula, William V; Wong, Peter K; Pasupathy, Kalyan S; Higashi, Mitchell K; Osgood, Nathaniel D
2015-03-01
In a previous report, the ISPOR Task Force on Dynamic Simulation Modeling Applications in Health Care Delivery Research Emerging Good Practices introduced the fundamentals of dynamic simulation modeling and identified the types of health care delivery problems for which dynamic simulation modeling can be used more effectively than other modeling methods. The hierarchical relationship between the health care delivery system, providers, patients, and other stakeholders exhibits a level of complexity that ought to be captured using dynamic simulation modeling methods. As a tool to help researchers decide whether dynamic simulation modeling is an appropriate method for modeling the effects of an intervention on a health care system, we presented the System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence (SIMULATE) checklist consisting of eight elements. This report builds on the previous work, systematically comparing each of the three most commonly used dynamic simulation modeling methods-system dynamics, discrete-event simulation, and agent-based modeling. We review criteria for selecting the most suitable method depending on 1) the purpose-type of problem and research questions being investigated, 2) the object-scope of the model, and 3) the method to model the object to achieve the purpose. Finally, we provide guidance for emerging good practices for dynamic simulation modeling in the health sector, covering all aspects, from the engagement of decision makers in the model design through model maintenance and upkeep. We conclude by providing some recommendations about the application of these methods to add value to informed decision making, with an emphasis on stakeholder engagement, starting with the problem definition. Finally, we identify areas in which further methodological development will likely occur given the growing "volume, velocity and variety" and availability of "big data" to provide empirical evidence and techniques such as machine learning for parameter estimation in dynamic simulation models. Upon reviewing this report in addition to using the SIMULATE checklist, the readers should be able to identify whether dynamic simulation modeling methods are appropriate to address the problem at hand and to recognize the differences of these methods from those of other, more traditional modeling approaches such as Markov models and decision trees. This report provides an overview of these modeling methods and examples of health care system problems in which such methods have been useful. The primary aim of the report was to aid decisions as to whether these simulation methods are appropriate to address specific health systems problems. The report directs readers to other resources for further education on these individual modeling methods for system interventions in the emerging field of health care delivery science and implementation. Copyright © 2015. Published by Elsevier Inc.
Modelling, simulation and applications of longitudinal train dynamics
NASA Astrophysics Data System (ADS)
Cole, Colin; Spiryagin, Maksym; Wu, Qing; Sun, Yan Quan
2017-10-01
Significant developments in longitudinal train simulation are first presented, along with an overview of the approaches to train models and to modelling vehicle force inputs. The most important modelling task, that of the wagon connection (consisting of energy absorption devices such as draft gears and buffers, draw gear stiffness, coupler slack and structural stiffness), is then presented. Detailed attention is given to the modelling approaches for friction wedge damped and polymer draft gears. A significant issue in longitudinal train dynamics is the modelling and calculation of the input forces - the co-dimensional problem. The need to push traction performances higher has led to research and improvement in the accuracy of traction modelling, which is discussed. A co-simulation method that combines longitudinal train simulation, locomotive traction control and locomotive vehicle dynamics is presented. The modelling of other forces (braking, propulsion resistance, curve drag and grade forces) is also discussed. As extensions to conventional longitudinal train dynamics, lateral forces and coupler impacts are examined with regard to their interaction with wagon lateral and vertical dynamics. Various applications of longitudinal train dynamics are then presented. As an alternative to the traditional single-wagon-mass approach to longitudinal train dynamics, an example incorporating fully detailed wagon dynamics is presented for a crash analysis problem. Further applications of starting traction, air braking, distributed power, energy analysis and tippler operation are also presented.
NASA Astrophysics Data System (ADS)
Yang, Xuhong; Jin, Xiaobin; Guo, Beibei; Long, Ying; Zhou, Yinkang
2015-05-01
Constructing a spatially explicit time series of historical cultivated land is of utmost importance for climatic and ecological studies that make use of Land Use and Cover Change (LUCC) data. Some scholars have made efforts to simulate and reconstruct the quantitative information on historical land use at the global or regional level based on "top-down" decision-making behaviors to match overall cropland area to land parcels using land arability and universal parameters. Considering the concentrated distribution of cultivated land and various factors influencing cropland distribution, including environmental and human factors, this study developed a "bottom-up" model of historical cropland based on a constrained Cellular Automaton (CA). Our model takes a historical cropland area as an external variable and the cropland distribution in 1980 as the maximum potential scope of historical cropland. We selected elevation, slope, water availability, average annual precipitation, and distance to the nearest rural settlement as the main influencing factors of land use suitability. Then, an available labor force index is used as a proxy for the amount of cropland to inspect and calibrate these spatial patterns. This paper applies the model to a traditional cultivated region in China and reconstructs its spatial distribution of cropland during 6 periods. The results are shown as follows: (1) a constrained CA is well suited for simulating and reconstructing the spatial distribution of cropland in China's traditional cultivated region. (2) Taking the different factors affecting the spatial pattern of cropland into consideration, the partitioning of the research area effectively reflected the spatial differences in cropland evolution rules and rates. (3) Compared with the HYDE datasets, this research has formed higher-resolution Boolean spatial distribution datasets of historical cropland with a more definitive concept of spatial pattern in terms of fractional format. We conclude that our reconstruction is closer to the actual change pattern of the traditional cultivated region in China.
Computational study on cortical spreading depression based on a generalized cellular automaton model
NASA Astrophysics Data System (ADS)
Chen, Shangbin; Hu, Lele; Li, Bing; Xu, Changcheng; Liu, Qian
2009-02-01
Cortical spreading depression (CSD) is an important neurophysiological phenomenon correlating with some neural disorders, such as migraine, cerebral ischemia and epilepsy. By now, we are still not clear about the mechanisms of CSD's initiation and propagation, or about the relevance between CSD and those neural diseases. Nevertheless, characterization of CSD, especially its spatiotemporal evolution, will promote the understanding of CSD's nature and mechanisms. Building on previous experimental work characterizing the spatiotemporal evolution of CSD in rats by optical intrinsic signal imaging, a computational study based on a generalized cellular automaton (CA) model is proposed here. In the model, we exploited a generalized neighborhood connection rule: a central CA cell is related to a group of surrounding CA cells with different weight coefficients. By selecting special parameters, the generalized CA model can be transformed into the traditional CA models with von Neumann, Moore and hexagonal neighborhood connection schemes. Hence, the new model covered several properties of CSD simulated in traditional CA models: 1) expanding from the origin site like a circular wave; 2) annihilation of two waves traveling in opposite directions after colliding; 3) the wavefront of CSD breaking and recovering when and after encountering an obstacle. By setting different refractory periods in different CA lattice fields and different connection coefficients in different directions within the defined neighborhood, inhomogeneous propagation of CSD was simulated with high fidelity. The computational results were analogous to the time-varying CSD waves reported by optical imaging. Thus, the generalized CA model would be useful for studying CSD because of its intuitive appeal and computational efficiency.
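A minimal sketch of a weighted-neighbourhood cellular automaton of this kind, assuming three cell states (0 resting, 1 depolarized, 2 refractory); the weights, threshold, and refractory length are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np
from scipy.signal import convolve2d

def ca_step(state, refractory_timer, weights, threshold=1.0, refractory_steps=5):
    """One update of a generalized CA: a resting cell fires when the weighted
    sum of firing neighbours exceeds a threshold; a fired cell then enters a
    refractory period during which it cannot fire again."""
    firing = (state == 1).astype(float)
    drive = convolve2d(firing, weights, mode="same", boundary="fill")
    new_state = state.copy()
    new_state[(state == 0) & (drive >= threshold)] = 1    # depolarize
    new_state[state == 1] = 2                              # enter refractory period
    timer = refractory_timer.copy()
    timer[state == 1] = refractory_steps
    timer[state == 2] -= 1
    new_state[(state == 2) & (timer <= 0)] = 0             # recover to resting
    return new_state, timer

# Anisotropic weights generalize the von Neumann/Moore neighbourhoods.
weights = np.array([[0.2, 0.5, 0.2],
                    [0.5, 0.0, 0.5],
                    [0.2, 0.5, 0.2]])
state = np.zeros((100, 100), dtype=int)
timer = np.zeros_like(state)
state[50, 50] = 1          # seed a depolarized focus at the centre
for _ in range(40):
    state, timer = ca_step(state, timer, weights)
```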
Tang, Chen; Lu, Wenjing; Chen, Song; Zhang, Zhen; Li, Botao; Wang, Wenping; Han, Lin
2007-10-20
We extend and refine previous work [Appl. Opt. 46, 2907 (2007)]. Combining the coupled nonlinear partial differential equations (PDEs) denoising model with the ordinary differential equations enhancement method, we propose the new denoising and enhancing model for electronic speckle pattern interferometry (ESPI) fringe patterns. Meanwhile, we propose the backpropagation neural networks (BPNN) method to obtain unwrapped phase values based on a skeleton map instead of traditional interpolations. We test the introduced methods on the computer-simulated speckle ESPI fringe patterns and experimentally obtained fringe pattern, respectively. The experimental results show that the coupled nonlinear PDEs denoising model is capable of effectively removing noise, and the unwrapped phase values obtained by the BPNN method are much more accurate than those obtained by the well-known traditional interpolation. In addition, the accuracy of the BPNN method is adjustable by changing the parameters of networks such as the number of neurons.
Staging scientific controversies: a gallery test on science museums' interactivity.
Yaneva, Albena; Rabesandratana, Tania Mara; Greiner, Birgit
2009-01-01
The "transfer" model in science communication has been addressed critically from different perspectives, while the advantages of the interactive model have been continuously praised. Yet, little is done to account for the specific role of the interactive model in communicating "unfinished science." The traditional interactive methods in museums are not sufficient to keep pace with rapid scientific developments. Interactive exchanges between laypeople and experts are thought mainly through the lens of a dialogue that is facilitated and framed by the traditional "conference room" architecture. Drawing on the results of a small-scale experiment in a gallery space, we argue for the need for a new "architecture of interaction" in museum settings based on art installation and simulation techniques, which will enhance the communication potentials of science museums and will provide conditions for a fruitful even-handed exchange of expert and lay knowledge.
Approximate Bayesian computation for spatial SEIR(S) epidemic models.
Brown, Grant D; Porter, Aaron T; Oleson, Jacob J; Hinman, Jessica A
2018-02-01
Approximate Bayesian Computation (ABC) provides an attractive approach to estimation in complex Bayesian inferential problems for which evaluation of the kernel of the posterior distribution is impossible or computationally expensive. These highly parallelizable techniques have been successfully applied to many fields, particularly in cases where more traditional approaches such as Markov chain Monte Carlo (MCMC) are impractical. In this work, we demonstrate the application of approximate Bayesian inference to spatially heterogeneous Susceptible-Exposed-Infectious-Removed (SEIR) stochastic epidemic models. These models have a tractable posterior distribution; nevertheless, MCMC techniques become computationally infeasible for moderately sized problems. We discuss the practical implementation of these techniques via the open-source ABSEIR package for R. The performance of ABC relative to traditional MCMC methods in a small problem is explored under simulation, as well as in the spatially heterogeneous context of the 2014 epidemic of Chikungunya in the Americas. Copyright © 2017 Elsevier Ltd. All rights reserved.
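A minimal sketch of the rejection flavour of ABC (the ABSEIR package implements far more sophisticated sequential variants); the simulator, prior, distance threshold, and data below are placeholders for exposition.

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_rejection(observed, simulate, prior_sampler, n_draws=10000, epsilon=5.0):
    """Keep parameter draws whose simulated epidemic lies within epsilon of the
    observed data; the retained draws approximate the posterior distribution."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        distance = np.linalg.norm(simulate(theta) - observed)
        if distance < epsilon:
            accepted.append(theta)
    return np.array(accepted)

# Placeholder pieces: a toy 'simulator' returning weekly case counts for a given
# transmission scaling beta, and a uniform prior on beta.
observed_cases = np.array([3., 9., 25., 40., 31., 18., 7.])
toy_simulator = lambda beta: np.array([3., 9., 25., 40., 31., 18., 7.]) * beta
posterior_beta = abc_rejection(observed_cases, toy_simulator,
                               lambda: rng.uniform(0.5, 1.5))
```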
The evolution of social learning mechanisms and cultural phenomena in group foragers.
van der Post, Daniel J; Franz, Mathias; Laland, Kevin N
2017-02-10
Advanced cognitive abilities are widely thought to underpin cultural traditions and cumulative cultural change. In contrast, recent simulation models have found that basic social influences on learning suffice to support both cultural phenomena. In the present study we test the predictions of these models in the context of skill learning, in a model with stochastic demographics, variable group sizes, and evolved parameter values, exploring the cultural ramifications of three different social learning mechanisms. Our results show that simple forms of social learning, such as local enhancement, can generate traditional differences in the context of skill learning. In contrast, we find cumulative cultural change is supported by observational learning, but not local or stimulus enhancement, which supports the idea that advanced cognitive abilities are important for generating this cultural phenomenon in the context of skill learning. Our results help to explain the observation that animal cultures are widespread, but cumulative cultural change might be rare.
Model and Study on Cascade Control System Based on IGBT Chopping Control
NASA Astrophysics Data System (ADS)
Niu, Yuxin; Chen, Liangqiao; Wang, Shuwen
2018-01-01
The thyristor cascade control system has a wide range of applications in the industrial field, but the traditional cascade control system has shortcomings such as a low power factor and serious harmonic pollution. In this paper, we not only analyze its system structure and working principle but also discuss the two main factors affecting the power factor. A chopping-control cascade control system, which adopts the new power switching device IGBT, can efficiently overcome the two main drawbacks of the traditional cascade control system. The basic principle of this cascade control system is discussed in this paper, and a model of the speed control system is built using MATLAB/Simulink software. Finally, the simulation results show that the system works efficiently. This system is worth spreading widely in engineering applications.
System Dynamics Modeling of Transboundary Systems: The Bear River Basin Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerald Sehlke; Jake Jacobson
2005-09-01
System dynamics is a computer-aided approach to evaluating the interrelationships of different components and activities within complex systems. Recently, system dynamics models have been developed in areas such as policy design, biological and medical modeling, energy and the environmental analysis, and in various other areas in the natural and social sciences. The Idaho National Engineering and Environmental Laboratory, a multi-purpose national laboratory managed by the Department of Energy, has developed a systems dynamics model in order to evaluate its utility for modeling large complex hydrological systems. We modeled the Bear River Basin, a transboundary basin that includes portions of Idaho, Utah and Wyoming. We found that system dynamics modeling is very useful for integrating surface water and groundwater data and for simulating the interactions between these sources within a given basin. In addition, we also found system dynamics modeling is useful for integrating complex hydrologic data with other information (e.g., policy, regulatory and management criteria) to produce a decision support system. Such decision support systems can allow managers and stakeholders to better visualize the key hydrologic elements and management constraints in the basin, which enables them to better understand the system via the simulation of multiple “what-if” scenarios. Although system dynamics models can be developed to conduct traditional hydraulic/hydrologic surface water or groundwater modeling, we believe that their strength lies in their ability to quickly evaluate trends and cause–effect relationships in large-scale hydrological systems; for integrating disparate data; for incorporating output from traditional hydraulic/hydrologic models; and for integration of interdisciplinary data, information and criteria to support better management decisions.
Graphic and haptic simulation system for virtual laparoscopic rectum surgery.
Pan, Jun J; Chang, Jian; Yang, Xiaosong; Zhang, Jian J; Qureshi, Tahseen; Howell, Robert; Hickish, Tamas
2011-09-01
Medical simulators with vision and haptic feedback techniques offer a cost-effective and efficient alternative to the traditional medical trainings. They have been used to train doctors in many specialties of medicine, allowing tasks to be practised in a safe and repetitive manner. This paper describes a virtual-reality (VR) system which will help to influence surgeons' learning curves in the technically challenging field of laparoscopic surgery of the rectum. Data from MRI of the rectum and real operation videos are used to construct the virtual models. A haptic force filter based on radial basis functions is designed to offer realistic and smooth force feedback. To handle collision detection efficiently, a hybrid model is presented to compute the deformation of intestines. Finally, a real-time cutting technique based on mesh is employed to represent the incision operation. Despite numerous research efforts, fast and realistic solutions of soft tissues with large deformation, such as intestines, prove extremely challenging. This paper introduces our latest contribution to this endeavour. With this system, the user can haptically operate with the virtual rectum and simultaneously watch the soft tissue deformation. Our system has been tested by colorectal surgeons who believe that the simulated tactile and visual feedbacks are realistic. It could replace the traditional training process and effectively transfer surgical skills to novices. Copyright © 2011 John Wiley & Sons, Ltd.
Simulation in an Undergraduate Nursing Pharmacology Course: A Pilot Study.
Tinnon, Elizabeth; Newton, Rebecca
This study examined the effectiveness of simulation as a method of teaching pharmacological concepts to nursing students; perceptions of satisfaction with simulation as a teaching strategy were also evaluated. Second-semester juniors participated in three simulations and completed the National League for Nursing Student Satisfaction and Self-Confidence in Learning Questionnaire and the Student Evaluation of Educational Quality Survey; a control group received traditional lectures. A unit exam on anticoagulant therapy content was administered to measure effectiveness. Findings support that simulation is as effective as traditional lecture for an undergraduate pharmacology course.
Lattice Boltzmann model for simulation of magnetohydrodynamics
NASA Technical Reports Server (NTRS)
Chen, Shiyi; Chen, Hudong; Martinez, Daniel; Matthaeus, William
1991-01-01
A numerical method, based on a discrete Boltzmann equation, is presented for solving the equations of magnetohydrodynamics (MHD). The algorithm provides advantages similar to the cellular automaton method in that it is local and easily adapted to parallel computing environments. Because of much lower noise levels and less stringent requirements on lattice size, the method appears to be more competitive with traditional solution methods. Examples show that the model accurately reproduces both linear and nonlinear MHD phenomena.
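As a rough sketch of the lattice Boltzmann idea described above (a local collide-and-stream update that adapts easily to parallel computing), the following minimal D2Q9 BGK step for plain hydrodynamics may be illustrative. It is not the authors' MHD formulation, which also evolves magnetic-field distributions; the relaxation time and grid size here are arbitrary assumptions.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their standard weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1], [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8          # assumed relaxation time (sets the viscosity)
nx, ny = 64, 64    # assumed grid size

def equilibrium(rho, u):
    """Standard D2Q9 equilibrium distribution."""
    cu = np.einsum('id,xyd->xyi', c, u)           # c_i . u
    usq = np.einsum('xyd,xyd->xy', u, u)          # |u|^2
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

def lbm_step(f):
    """One collide-and-stream update of the distributions f[x, y, i]."""
    rho = f.sum(axis=-1)
    u = np.einsum('xyi,id->xyd', f, c) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau       # BGK collision (local to each node)
    for i, ci in enumerate(c):                    # streaming with periodic boundaries
        f[..., i] = np.roll(f[..., i], shift=tuple(ci), axis=(0, 1))
    return f

# start from rest with a small density perturbation and relax it
rho0 = np.ones((nx, ny)); rho0[nx//2, ny//2] += 0.01
f = equilibrium(rho0, np.zeros((nx, ny, 2)))
for _ in range(100):
    f = lbm_step(f)
```

Because both the collision and the streaming touch only a node and its immediate neighbours, the loop maps naturally onto the parallel, cellular-automaton-like execution the abstract points to.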
NASA Astrophysics Data System (ADS)
Clegg, Richard A.; Hayhurst, Colin J.
1999-06-01
Ceramic materials, including glass, are commonly used as ballistic protection materials. The response of a ceramic to impact, perforation and penetration is complex, and it is difficult and/or expensive to instrument experiments to obtain detailed physical data. This paper demonstrates how a hydrocode, such as AUTODYN, can be used to aid in the understanding of the response of brittle materials to high pressure impact loading and thus promote an efficient and cost effective design process. Hydrocode simulations cannot be made without appropriate characterisation of the material. Because of the complexity of the response of ceramic materials, this often requires a number of complex material tests. Here we present a methodology for using the results of flyer plate tests, in conjunction with numerical simulations, to derive input to the Johnson-Holmquist material model for ceramics. Most of the research effort in relation to the development of hydrocode material models for ceramics has concentrated on the material behaviour under compression and shear. While the penetration process is dominated by these aspects of the material response, the final damaged state of the material can be significantly influenced by the tensile behaviour. Modelling of the final damage state is important since this is often the only physical information which is available. In this paper we present a unique implementation, in a hydrocode, for improved modelling of brittle materials in the tensile regime. Tensile failure initiation is based on any combination of principal stress or strain, while the post-failure tensile response of the material is controlled through a Rankine plasticity damaging failure surface. The tensile failure surface can be combined with any of the traditional plasticity and/or compressive damage models. Finally, the models and data are applied in both traditional grid-based Lagrangian and Eulerian solution techniques and the relatively new SPH (Smoothed Particle Hydrodynamics) meshless technique. Simulations of long rod impacts onto ceramic faced armour and hypervelocity impacts on glass solar array space structures are presented and compared with experiments.
Cannon, Robert C; Gleeson, Padraig; Crook, Sharon; Ganapathy, Gautham; Marin, Boris; Piasini, Eugenio; Silver, R Angus
2014-01-01
Computational models are increasingly important for studying complex neurophysiological systems. As scientific tools, it is essential that such models can be reproduced and critically evaluated by a range of scientists. However, published models are currently implemented using a diverse set of modeling approaches, simulation tools, and computer languages making them inaccessible and difficult to reproduce. Models also typically contain concepts that are tightly linked to domain-specific simulators, or depend on knowledge that is described exclusively in text-based documentation. To address these issues we have developed a compact, hierarchical, XML-based language called LEMS (Low Entropy Model Specification), that can define the structure and dynamics of a wide range of biological models in a fully machine readable format. We describe how LEMS underpins the latest version of NeuroML and show that this framework can define models of ion channels, synapses, neurons and networks. Unit handling, often a source of error when reusing models, is built into the core of the language by specifying physical quantities in models in terms of the base dimensions. We show how LEMS, together with the open source Java and Python based libraries we have developed, facilitates the generation of scripts for multiple neuronal simulators and provides a route for simulator free code generation. We establish that LEMS can be used to define models from systems biology and map them to neuroscience-domain specific simulators, enabling models to be shared between these traditionally separate disciplines. LEMS and NeuroML 2 provide a new, comprehensive framework for defining computational models of neuronal and other biological systems in a machine readable format, making them more reproducible and increasing the transparency and accessibility of their underlying structure and properties.
Numerical simulation of a compressible homogeneous, turbulent shear flow. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Feiereisen, W. J.; Reynolds, W. C.; Ferziger, J. H.
1981-01-01
A direct, low Reynolds number, numerical simulation was performed on a homogeneous turbulent shear flow. The full compressible Navier-Stokes equations were used in a simulation on the ILLIAC IV computer with a 64,000-point mesh. The flow fields generated by the code are used as an experimental data base to examine the behavior of the Reynolds stresses in this simple, compressible flow. The variation of the structure of the stresses and their dynamic equations as the character of the flow changed is emphasized. The structure of the stress tensor is more heavily dependent on the shear number and less on the fluctuating Mach number. The pressure-strain correlation tensor in the dynamic equations is directly calculated in this simulation. These correlations are decomposed into several parts, as contrasted with the traditional incompressible decomposition into two parts. The performance of existing models for the conventional terms is examined, and a model is proposed for the 'mean fluctuating' part.
Key technology research of HILS based on real-time operating system
NASA Astrophysics Data System (ADS)
Wang, Fankai; Lu, Huiming; Liu, Che
2018-03-01
To address the long development cycle of traditional simulation and the lack of real-time capability in digital simulation, this paper presents a HILS (hardware-in-the-loop simulation) system based on the real-time operating platform xPC. The system handles communication between the HMI and Simulink models through the MATLAB engine interface, and provides functions for system configuration, offline simulation, and model compilation and downloading. The xPC application interface, together with the TeeChart ActiveX chart component, is used to monitor the real-time target application. Each functional block in the system is encapsulated as a DLL, and data interaction between modules is realized with MySQL database technology. When the HILS system runs, the address of the online xPC target is located with the Ping command in order to establish TCP/IP communication between the two machines. The technical effectiveness of the developed system is verified on a typical power station control system.
Wang, Hong-Hua
2014-01-01
A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Different from the traditional linear model, the model of a PV module is nonlinear and has multiple parameters. Since conventional methods are incapable of identifying the parameters of the PV module, an excellent optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by the simulation of the collective behavior of real fish swarms, is proposed to quickly and accurately extract the parameters of the PV module. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated with various parameters of the PV module under different environmental conditions, and the testing results are compared with other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of obtaining higher parameter identification precision. PMID:25243233
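The nonlinear, multi-parameter character of the PV module model mentioned above is commonly captured by the standard single-diode equation. The sketch below evaluates its current residual, which is the kind of quantity a parameter-extraction search such as AFSA would minimize over measured I-V points; all numerical values are illustrative assumptions, not results from the paper.

```python
import numpy as np

def single_diode_residual(V, I, Iph, I0, n, Rs, Rsh, T=298.15, Ns=36):
    """Residual of the single-diode model I = Iph - Id - Ish at measured (V, I) points.

    Iph: photocurrent [A], I0: diode saturation current [A], n: ideality factor,
    Rs/Rsh: series/shunt resistance [ohm], Ns: cells in series (assumed)."""
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = Ns * n * k * T / q                        # module thermal voltage
    Id = I0 * (np.exp((V + I * Rs) / Vt) - 1.0)    # diode current
    Ish = (V + I * Rs) / Rsh                       # shunt-leakage current
    return Iph - Id - Ish - I

def fitness(params, V_meas, I_meas):
    """RMS residual over all measured points: the objective an AFSA-like search would minimize."""
    Iph, I0, n, Rs, Rsh = params
    r = single_diode_residual(V_meas, I_meas, Iph, I0, n, Rs, Rsh)
    return float(np.sqrt(np.mean(r ** 2)))

# illustrative synthetic I-V curve and one candidate parameter set (assumed values)
V_meas = np.linspace(0.0, 21.0, 20)
I_meas = np.clip(3.8 * (1 - np.exp((V_meas - 21.0) / 1.5)), 0, None)
print(fitness([3.8, 3e-7, 1.3, 0.3, 300.0], V_meas, I_meas))
```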
A comparative study of inelastic scattering models at energy levels ranging from 0.5 keV to 10 keV
NASA Astrophysics Data System (ADS)
Hu, Chia-Yu; Lin, Chun-Hung
2017-03-01
Six models, including a single-scattering model, four hybrid models, and one dielectric function model, were evaluated using Monte Carlo simulations for aluminum and copper at incident beam energies ranging from 0.5 keV to 10 keV. The inelastic mean free path, mean energy loss per unit path length, and backscattering coefficients obtained by these models are compared and discussed to understand the merits of the various models. ANOVA (analysis of variance) statistical models were used to quantify the effects of the inelastic cross section and energy loss models on the basis of the deviation of the simulated results from the experimental data for the inelastic mean free path, the mean energy loss per unit path length, and the backscattering coefficient, as well as their correlations. This study is believed to be the first application of ANOVA models to the evaluation of inelastic electron beam scattering models. The approach improves on the traditional approach, which involves only visual estimation of the difference between the experimental data and simulated results. The data suggest that the optimization of the effective electron number per atom, binding energy, and cut-off energy of an inelastic model for different materials at different beam energies is more important than the selection of inelastic models for Monte Carlo electron scattering simulation. During the simulations, parameters in the equations should be tuned according to different materials and beam energies rather than merely employing default parameters for an arbitrary material. Energy loss models and cross-section formulas are not the main factors influencing energy loss. Comparison of the deviation of the simulated results from the experimental data shows a significant correlation (p < 0.05) between the backscattering coefficient and the energy loss per unit path length. The inclusion of backscattering electrons generated by both primary and secondary electrons in backscattering coefficient simulation is recommended for elements with high atomic numbers. In hybrid models, introducing the inner shell ionization model improves the accuracy of the simulated results.
Bridging the gap: enhancing interprofessional education using simulation.
Robertson, James; Bandali, Karim
2008-10-01
Simulated learning and interprofessional education (IPE) are becoming increasingly prevalent in health care curricula. As the focus shifts to patient-centred care, health professionals will need to learn with, from and about one another in real-life settings in order to facilitate teamwork and collaboration. The provision of simulated learning in an interprofessional environment helps replicate these settings, thereby providing the traditional medical education model with opportunities for growth and innovation. Learning in context is an essential psychological and cognitive aspect of education. This paper offers a conceptual analysis of the salient issues related to IPE and medical simulation. In addition, the paper argues for the integration of simulation into IPE in order to develop innovative approaches for the delivery of education and improved clinical practice that may benefit students and all members of the health care team.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, S. X., E-mail: shu@lle.rochester.edu; Goncharov, V. N.; McCrory, R. L.
2016-04-15
Using quantum molecular-dynamics (QMD) methods based on density functional theory, we have performed first-principles investigations of the ionization and thermal conductivity of polystyrene (CH) over a wide range of plasma conditions (ρ = 0.5 to 100 g/cm³ and T = 15,625 to 500,000 K). The ionization data from orbital-free molecular-dynamics calculations have been fitted with a “Saha-type” model as a function of the CH plasma density and temperature, which gives an increasing ionization as the CH density increases even at low temperatures (T < 50 eV). The orbital-free molecular-dynamics method is only used to gauge the average ionization behavior of CH under the average-atom model in conjunction with the pressure-matching mixing rule. The thermal conductivities (κ_QMD) of CH, derived directly from the Kohn–Sham molecular-dynamics calculations, are then analytically fitted with a generalized Coulomb logarithm [(ln Λ)_QMD] over a wide range of plasma conditions. When compared with the traditional ionization and thermal conductivity models used in radiation–hydrodynamics codes for inertial confinement fusion simulations, the QMD results show a large difference in the low-temperature regime in which strong coupling and electron degeneracy play an essential role in determining plasma properties. Hydrodynamic simulations of cryogenic deuterium–tritium targets with CH ablators on OMEGA and the National Ignition Facility using the QMD-derived ionization and thermal conductivity of CH have predicted ∼20% variation in target performance in terms of hot-spot pressure and neutron yield (gain) with respect to traditional model simulations.
Varadhan, Ravi; Wang, Sue-Jane
2016-01-01
Treatment effect heterogeneity is a well-recognized phenomenon in randomized controlled clinical trials. In this paper, we discuss subgroup analyses with prespecified subgroups of clinical or biological importance. We explore various alternatives to the naive (the traditional univariate) subgroup analyses to address the issues of multiplicity and confounding. Specifically, we consider a model-based Bayesian shrinkage (Bayes-DS) and a nonparametric, empirical Bayes shrinkage approach (Emp-Bayes) to temper the optimism of traditional univariate subgroup analyses; a standardization approach (standardization) that accounts for correlation between baseline covariates; and a model-based maximum likelihood estimation (MLE) approach. The Bayes-DS and Emp-Bayes methods model the variation in subgroup-specific treatment effect rather than testing the null hypothesis of no difference between subgroups. The standardization approach addresses the issue of confounding in subgroup analyses. The MLE approach is considered only for comparison in simulation studies as the “truth” since the data were generated from the same model. Using the characteristics of a hypothetical large outcome trial, we perform simulation studies and articulate the utilities and potential limitations of these estimators. Simulation results indicate that Bayes-DS and Emp-Bayes can protect against optimism present in the naïve approach. Due to its simplicity, the naïve approach should be the reference for reporting univariate subgroup-specific treatment effect estimates from exploratory subgroup analyses. Standardization, although it tends to have a larger variance, is suggested when it is important to address the confounding of univariate subgroup effects due to correlation between baseline covariates. The Bayes-DS approach is available as an R package (DSBayes). PMID:26485117
Research on the Diesel Engine with Sliding Mode Variable Structure Theory
NASA Astrophysics Data System (ADS)
Ma, Zhexuan; Mao, Xiaobing; Cai, Le
2018-05-01
This study constructed a nonlinear mathematical model of the diesel engine high-pressure common rail (HPCR) system through two polynomial fits, treating it as an affine nonlinear system. Based on sliding-mode variable structure control (SMVSC) theory, a sliding-mode controller for affine nonlinear systems was designed to control the common rail pressure and the diesel engine's rotational speed. Finally, the designed nonlinear HPCR system was simulated in MATLAB. The simulation results demonstrated that the sliding-mode variable structure control algorithm achieves favourable control performance, overcoming the shortcomings of traditional PID control in terms of overshoot, parameter tuning, precision, settling time and rise time.
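For a single-state affine nonlinear system x_dot = f(x) + g(x)u, a textbook sliding-mode tracking controller drives the sliding variable s = x - x_ref to zero with a switching term. The toy simulation below illustrates that idea only; it is not the paper's HPCR model, and f, g, the gains, and the reference trajectory are all assumed for illustration.

```python
import numpy as np

# assumed affine nonlinear plant: x_dot = f(x) + g(x) * u
f = lambda x: -0.5 * x - 0.1 * x ** 3
g = lambda x: 2.0 + 0.2 * np.sin(x)

def smc_control(x, x_ref, x_ref_dot, k=5.0, phi=0.05):
    """Sliding-mode law u = g(x)^-1 * (-f(x) + x_ref_dot - k * sat(s/phi))."""
    s = x - x_ref                                  # sliding variable
    sat = np.clip(s / phi, -1.0, 1.0)              # boundary layer to limit chattering
    return (-f(x) + x_ref_dot - k * sat) / g(x)

dt, T = 1e-3, 3.0
t = np.arange(0.0, T, dt)
x_ref = 1.0 + 0.3 * np.sin(2.0 * t)                # assumed reference (e.g., normalized rail pressure)
x_ref_dot = 0.6 * np.cos(2.0 * t)

x = 0.0
for i in range(len(t) - 1):
    u = smc_control(x, x_ref[i], x_ref_dot[i])
    x = x + dt * (f(x) + g(x) * u)                 # explicit Euler integration of the plant
print("final tracking error:", abs(x - x_ref[-1]))
```

The saturation (boundary-layer) term stands in for the ideal sign function; it is a standard way to trade a little tracking precision for reduced chattering.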
Spatio-temporal Eigenvector Filtering: Application on Bioenergy Crop Impacts
NASA Astrophysics Data System (ADS)
Wang, M.; Kamarianakis, Y.; Georgescu, M.
2017-12-01
A suite of 10-year ensemble-based simulations was conducted to investigate the hydroclimatic impacts due to large-scale deployment of perennial bioenergy crops across the continental United States. Given the large size of the simulated dataset (about 60Tb), traditional hierarchical spatio-temporal statistical modelling cannot be implemented for the evaluation of physics parameterizations and biofuel impacts. In this work, we propose a filtering algorithm that takes into account the spatio-temporal autocorrelation structure of the data while avoiding spatial confounding. This method is used to quantify the robustness of simulated hydroclimatic impacts associated with bioenergy crops to alternative physics parameterizations and observational datasets. Results are evaluated against those obtained from three alternative Bayesian spatio-temporal specifications.
Multi-agent fare optimization model of two modes problem and its analysis based on edge of chaos
NASA Astrophysics Data System (ADS)
Li, Xue-yan; Li, Xue-mei; Li, Xue-wei; Qiu, He-ting
2017-03-01
This paper proposes a new framework of fare optimization and game modeling for studying the competition between two travel modes (high speed railway and civil aviation) in which passengers' group behavior is taken into consideration. A small-world network is introduced to construct the multi-agent model of passengers' travel mode choice. Cumulative prospect theory is adopted to depict passengers' bounded rationality, and the heterogeneity of passengers' reference points is depicted using the idea of group emotion computing. The concepts of the "Langton parameter" and "evolution entropy" from the theory of the "edge of chaos" are introduced to define passengers' "decision coefficient" and "evolution entropy of travel mode choice", which are used to quantify passengers' group behavior. The numerical simulation and the analysis of passengers' behavior show that (1) the new model inherits the features of the traditional model well, and the idea of self-organizing traffic flow evolution fully embodies passengers' bounded rationality; and (2) compared with the traditional (logit) model, the total profit of the transportation system is higher when passengers are in the "edge of chaos" state.
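Bounded rationality via cumulative prospect theory typically rests on the Tversky-Kahneman value and probability-weighting functions. The sketch below implements those standard forms, which a mode-choice agent could apply to travel-time or cost outcomes measured against its reference point; the parameter values are the commonly cited defaults and the two prospects are assumed examples, none of them taken from the paper.

```python
import numpy as np

def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave for gains, convex and steeper for losses.
    x is the outcome relative to the reference point (gain > 0, loss < 0)."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, x ** alpha, -lam * (-x) ** beta)

def cpt_weight(p, gamma=0.61):
    """Probability weighting: overweights small probabilities, underweights large ones.
    (For simplicity the same curve is used for gains and losses.)"""
    p = np.asarray(p, dtype=float)
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1.0 / gamma)

# illustrative comparison of two alternatives relative to a reference travel time:
# outcomes are minutes saved (+) or lost (-) with their probabilities (assumed numbers)
rail = [(+10, 0.7), (-20, 0.3)]
air  = [(+30, 0.5), (-60, 0.5)]
score = lambda prospect: sum(cpt_weight(p) * cpt_value(x) for x, p in prospect)
print("rail:", float(score(rail)), " air:", float(score(air)))
```

An agent comparing such scores, rather than expected utilities, is what gives the model its departure from the traditional logit benchmark mentioned in the abstract.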
Molecular Dynamics based on a Generalized Born solvation model: application to protein folding
NASA Astrophysics Data System (ADS)
Onufriev, Alexey
2004-03-01
An accurate description of the aqueous environment is essential for realistic biomolecular simulations, but may become very expensive computationally. We have developed a version of the Generalized Born model suitable for describing large conformational changes in macromolecules. The model represents the solvent implicitly as a continuum with the dielectric properties of water, and includes the charge-screening effects of salt. The computational cost associated with the use of this model in Molecular Dynamics simulations is generally considerably smaller than the cost of representing water explicitly. Also, compared to traditional Molecular Dynamics simulations based on an explicit water representation, conformational changes occur much faster in an implicit solvation environment due to the absence of viscosity. The combined speed-up allows one to probe conformational changes that occur on much longer effective time scales. We apply the model to the folding of a 46-residue three-helix-bundle protein (residues 10-55 of protein A, PDB ID 1BDD). Starting from an unfolded structure at 450 K, the protein folds to the lowest energy state in 6 ns of simulation time, which takes about a day on a 16-processor SGI machine. The predicted structure differs from the native one by 2.4 Å (backbone RMSD). Analysis of the structures seen on the folding pathway reveals details of the folding process unavailable from experiment.
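The polarization energy in a Generalized Born model is commonly written in the Still et al. pairwise form; the sketch below evaluates that expression for a set of point charges with precomputed effective Born radii. It is a generic GB calculation, not the specific variant developed in the abstract: the charges, coordinates, and radii are illustrative assumptions, and the salt-screening term is omitted.

```python
import numpy as np

def gb_polarization_energy(q, xyz, R, eps_in=1.0, eps_out=78.5):
    """Generalized Born polarization energy (Still et al. functional form).

    q   : charges (e), xyz : coordinates (angstrom), R : effective Born radii (angstrom).
    Returns energy in kcal/mol using the Coulomb constant 332.06 kcal*angstrom/(mol*e^2)."""
    q = np.asarray(q, float); xyz = np.asarray(xyz, float); R = np.asarray(R, float)
    d2 = np.sum((xyz[:, None, :] - xyz[None, :, :]) ** 2, axis=-1)   # squared pair distances
    RiRj = R[:, None] * R[None, :]
    f_gb = np.sqrt(d2 + RiRj * np.exp(-d2 / (4.0 * RiRj)))           # GB effective distance
    prefac = -0.5 * (1.0 / eps_in - 1.0 / eps_out) * 332.06
    return prefac * np.sum(q[:, None] * q[None, :] / f_gb)           # double sum includes self terms

# three illustrative charges (assumed values)
q = [0.4, -0.8, 0.4]
xyz = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.5, 1.0, 0.0]]
R = [1.5, 1.7, 1.5]
print(gb_polarization_energy(q, xyz, R))
```

Because the whole solvent effect reduces to this analytic pairwise sum, each energy/force evaluation avoids the thousands of explicit water molecules that drive the cost of conventional simulations.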
Explicit simulation of ice particle habits in a Numerical Weather Prediction Model
NASA Astrophysics Data System (ADS)
Hashino, Tempei
2007-05-01
This study developed a scheme for explicit simulation of ice particle habits in Numerical Weather Prediction (NWP) models. The scheme is called the Spectral Ice Habit Prediction System (SHIPS), and the goal is to retain the growth history of ice particles in the Eulerian dynamics framework. It diagnoses characteristics of ice particles based on a series of particle property variables (PPVs) that reflect the history of microphysical processes and the transport between mass bins and air parcels in space. Therefore, the categorization of ice particles typically used in bulk microphysical parameterizations and traditional bin models is not necessary, so that errors that stem from the categorization can be avoided. SHIPS predicts polycrystals as well as hexagonal monocrystals based on empirically derived habit frequency and growth rate, and simulates the habit-dependent aggregation and riming processes by use of the stochastic collection equation with predicted PPVs. Idealized two-dimensional simulations were performed with SHIPS in a NWP model. The predicted spatial distribution of ice particle habits and types, and the evolution of particle size distributions, showed good quantitative agreement with observations. This comprehensive model of ice particle properties, distributions, and evolution in clouds can be used to better understand problems facing a wide range of research disciplines, including microphysical processes, radiative transfer in a cloudy atmosphere, data assimilation, and weather modification.
Computer simulations for lab experiences in secondary physics
NASA Astrophysics Data System (ADS)
Murphy, David Shannon
Physical science instruction often involves modeling natural systems, such as electricity, whose particles are invisible to the unaided eye. The effect of these particles' motion is observable, but the particles themselves are not directly observable to humans. Simulations have been developed in physics, chemistry and biology that, under certain circumstances, have been found to allow students to gain insight into the operation of the systems they model. This study compared the use of a DC circuit simulation, a modified simulation, static graphics, and traditional bulbs and wires in terms of gains in DC circuit knowledge as measured by the DIRECT instrument, a multiple-choice test previously developed to assess DC circuit knowledge. Gender, prior DC circuit knowledge and subsets of DC circuit knowledge of students were also compared. The sample (n = 166) comprised high school freshmen from an eastern Kentucky public school of 1,100 students, and the study followed a quantitative quasi-experimental research design. Differences between treatment groups were not statistically significant. Keywords: Simulations, Static Images, Science Education, DC Circuit Instruction, PhET.
NASA Astrophysics Data System (ADS)
He, Xiulan; Sonnenborg, Torben O.; Jørgensen, Flemming; Jensen, Karsten H.
2017-03-01
Stationarity has traditionally been a requirement of geostatistical simulations. A common way to deal with non-stationarity is to divide the system into stationary sub-regions and subsequently merge the realizations for each region. Recently, the so-called partition approach that has the flexibility to model non-stationary systems directly was developed for multiple-point statistics simulation (MPS). The objective of this study is to apply the MPS partition method with conventional borehole logs and high-resolution airborne electromagnetic (AEM) data, for simulation of a real-world non-stationary geological system characterized by a network of connected buried valleys that incise deeply into layered Miocene sediments (case study in Denmark). The results show that, based on fragmented information of the formation boundaries, the MPS partition method is able to simulate a non-stationary system including valley structures embedded in a layered Miocene sequence in a single run. Besides, statistical information retrieved from the AEM data improved the simulation of the geology significantly, especially for the deep-seated buried valley sediments where borehole information is sparse.
Computer Simulation and Digital Resources for Plastic Surgery Psychomotor Education.
Diaz-Siso, J Rodrigo; Plana, Natalie M; Stranix, John T; Cutting, Court B; McCarthy, Joseph G; Flores, Roberto L
2016-10-01
Contemporary plastic surgery residents are increasingly challenged to learn a greater number of complex surgical techniques within a limited period. Surgical simulation and digital education resources have the potential to address some limitations of the traditional training model, and have been shown to accelerate knowledge and skills acquisition. Although animal, cadaver, and bench models are widely used for skills and procedure-specific training, digital simulation has not been fully embraced within plastic surgery. Digital educational resources may play a future role in a multistage strategy for skills and procedures training. The authors present two virtual surgical simulators addressing procedural cognition for cleft repair and craniofacial surgery. Furthermore, the authors describe how partnerships among surgical educators, industry, and philanthropy can be a successful strategy for the development and maintenance of digital simulators and educational resources relevant to plastic surgery training. It is our responsibility as surgical educators not only to create these resources, but to demonstrate their utility for enhanced trainee knowledge and technical skills development. Currently available digital resources should be evaluated in partnership with plastic surgery educational societies to guide trainees and practitioners toward effective digital content.
Simulation of the regional groundwater-flow system of the Menominee Indian Reservation, Wisconsin
Juckem, Paul F.; Dunning, Charles P.
2015-01-01
The likely extent of the Neopit wastewater plume was simulated by using the groundwater-flow model and Monte Carlo techniques to evaluate the sensitivity of predictive simulations to a range of model parameter values. Wastewater infiltrated from the currently operating lagoons flows predominantly south toward Tourtillotte Creek. Some of the infiltrated wastewater is simulated as having a low probability of flowing beneath Tourtillotte Creek to the nearby West Branch Wolf River. Results for the probable extent of the wastewater plume are considered to be qualitative because the method only considers advective flow and does not account for processes affecting contaminant transport in porous media. Therefore, results for the probable extent of the wastewater plume are sensitive to the number of particles used to represent flow from the lagoon and the resolution of a synthetic grid used for the analysis. Nonetheless, it is expected that the qualitative results may be of use for identifying potential downgradient areas of concern that can then be evaluated using the quantitative “area contributing recharge to wells” method or traditional contaminant-transport simulations.
NASA Astrophysics Data System (ADS)
Chen, Tzikang J.; Shiao, Michael
2016-04-01
This paper verified a generic and efficient assessment concept for probabilistic fatigue life management. The concept is developed based on an integration of damage tolerance methodology, simulation methods [1, 2], and a probabilistic algorithm, RPI (recursive probability integration) [3-9], considering maintenance for damage tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subject to various uncertainties, such as the variability in material properties including crack growth rate, initial flaw size, repair quality, random-process modeling of flight loads for failure analysis, and inspection reliability represented by the probability of detection (POD). In addition, unlike traditional Monte Carlo simulation (MCS), which requires a rerun of MCS when the maintenance plan is changed, RPI can repeatedly use a small set of baseline random crack growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. In order to fully appreciate the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion were conducted for various flight conditions, material properties, inspection schedules, POD and repair/replacement strategies. Since MC simulation is time-consuming, the simulations were run in parallel on DoD High Performance Computing (HPC) systems using a specialized random number generator for parallel computing. The study has shown that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulation.
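A brute-force Monte Carlo baseline of the kind RPI is compared against can be sketched as repeated Paris-law crack-growth histories with scheduled inspections, where each inspection detects the crack with a size-dependent probability of detection (POD). Every numerical value below (Paris constants, flaw-size distribution, POD curve, load, inspection times) is an assumed placeholder, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def pod(a, a50=2e-3, slope=4.0):
    """Assumed log-logistic probability of detection for crack length a (metres)."""
    return 1.0 / (1.0 + (a50 / np.maximum(a, 1e-12)) ** slope)

def simulate_one(n_cycles=200_000, block=1_000, dsigma=150.0, Y=1.12,
                 inspections=(50_000, 100_000, 150_000), a_crit=0.025):
    """One crack-growth history; returns True if the crack reaches a_crit before end of life."""
    a = rng.lognormal(mean=np.log(5e-4), sigma=0.4)        # initial flaw size, m (assumed)
    C = 10 ** rng.normal(-10.5, 0.3)                       # Paris coefficient, da/dN = C*(dK)^m (assumed)
    m = 3.0
    for N in range(0, n_cycles, block):
        dK = Y * dsigma * np.sqrt(np.pi * a)               # stress-intensity range, MPa*sqrt(m)
        a += C * dK ** m * block                           # Paris-law growth over the block
        if a >= a_crit:
            return True
        if N in inspections and rng.random() < pod(a):
            a = rng.lognormal(mean=np.log(5e-4), sigma=0.4)  # detected -> repaired, new small flaw
    return False

n = 5_000
print("probability of failure ~", sum(simulate_one() for _ in range(n)) / n)
```

The abstract's point is that every change to the inspection or repair plan forces this entire loop to be rerun, whereas RPI reuses a single set of baseline crack-growth histories.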
2014-03-01
The purpose of the study was to determine if the use of a simulator is at least as effective in marksmanship training as traditional dry fire techniques. A between-groups study with a ... marksmanship. Naval commands could use the information to effectively maintain gun qualifications for inport duty section watch bills and constant anti...
Atomistic to continuum modeling of solidification microstructures
Karma, Alain; Tourret, Damien
2015-09-26
We summarize recent advances in modeling of solidification microstructures using computational methods that bridge atomistic to continuum scales. We first discuss progress in atomistic modeling of equilibrium and non-equilibrium solid–liquid interface properties influencing microstructure formation, as well as interface coalescence phenomena influencing the late stages of solidification. The latter is relevant in the context of hot tearing reviewed in the article by M. Rappaz in this issue. We then discuss progress to model microstructures on a continuum scale using phase-field methods. We focus on selected examples in which modeling of 3D cellular and dendritic microstructures has been directly linked to experimental observations. Finally, we discuss a recently introduced coarse-grained dendritic needle network approach to simulate the formation of well-developed dendritic microstructures. The approach reliably bridges the well-separated scales traditionally simulated by phase-field and grain structure models, hence opening new avenues for quantitative modeling of complex intra- and inter-grain dynamical interactions on a grain scale.
A Practical Philosophy of Complex Climate Modelling
NASA Technical Reports Server (NTRS)
Schmidt, Gavin A.; Sherwood, Steven
2014-01-01
We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.
NASA Technical Reports Server (NTRS)
Rogers, Ralph V.
1993-01-01
The TATSS Project's goal was to develop a design for computer software that would support the attainment of the following objectives for the air traffic simulation model: (1) Full freedom of movement for each aircraft object in the simulation model. Each aircraft object may follow any designated flight plan or flight path necessary as required by the experiment under consideration. (2) Object position precision up to +/- 3 meters vertically and +/- 15 meters horizontally. (3) Aircraft maneuvering in three space with the object position precision identified above. (4) Air traffic control operations and procedures. (5) Radar, communication, navaid, and landing aid performance. (6) Weather. (7) Ground obstructions and terrain. (8) Detection and recording of separation violations. (9) Measures of performance including deviations from flight plans, air space violations, air traffic control messages per aircraft, and traditional temporal based measures.
Conceptual Hierarchies in a Flat Attractor Network
O’Connor, Christopher M.; Cree, George S.; McRae, Ken
2009-01-01
The structure of people’s conceptual knowledge of concrete nouns has traditionally been viewed as hierarchical (Collins & Quillian, 1969). For example, superordinate concepts (vegetable) are assumed to reside at a higher level than basic-level concepts (carrot). A feature-based attractor network with a single layer of semantic features developed representations of both basic-level and superordinate concepts. No hierarchical structure was built into the network. In Experiment and Simulation 1, the graded structure of categories (typicality ratings) is accounted for by the flat attractor-network. Experiment and Simulation 2 show that, as with basic-level concepts, such a network predicts feature verification latencies for superordinate concepts (vegetable
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnamurthy, Dheepak
This paper is an overview of the Power System Simulation Toolbox (psst). psst is an open-source Python application for the simulation and analysis of power system models. psst simulates wholesale market operation by solving a DC Optimal Power Flow (DCOPF), a Security Constrained Unit Commitment (SCUC) and a Security Constrained Economic Dispatch (SCED). psst also includes models for the various entities in a power system, such as Generator Companies (GenCos), Load Serving Entities (LSEs) and an Independent System Operator (ISO). psst features an open, modular, object-oriented architecture that makes it useful for researchers to customize, extend, and experiment beyond solving traditional problems. psst also includes a web-based Graphical User Interface (GUI) that allows for user-friendly interaction and for deployment on remote High Performance Computing (HPC) clusters for parallelized operations. This paper also provides an illustrative application of psst and benchmarks with standard IEEE test cases to show the advanced features and the performance of the toolbox.
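The wholesale-market step named above, a DC optimal power flow, reduces in its simplest form to a linear program. Below is a generic two-bus illustration using scipy's linprog rather than psst's own API (which is not shown here); the costs, load, and line limit are assumed numbers chosen only to make the line constraint bind.

```python
from scipy.optimize import linprog

# Two-bus DCOPF: cheap generator at bus 1, expensive generator at bus 2, 150 MW load at bus 2,
# and a single line from bus 1 to bus 2 limited to 100 MW. Decision variables x = [g1, g2] in MW.
cost = [20.0, 50.0]                      # $/MWh marginal costs (assumed)
A_eq = [[1.0, 1.0]]                      # power balance: g1 + g2 = load
b_eq = [150.0]
A_ub = [[1.0, 0.0]]                      # flow on the line equals g1 (all of g1 is exported)
b_ub = [100.0]                           # thermal limit of the line, MW
bounds = [(0.0, 120.0), (0.0, 120.0)]    # generator capacity limits, MW

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
g1, g2 = res.x
print(f"dispatch: g1 = {g1:.1f} MW, g2 = {g2:.1f} MW, total cost = {res.fun:.0f} $/h")
```

With these numbers the cheap unit is limited by the line to 100 MW and the expensive unit covers the remaining 50 MW, which is the congestion behaviour a DCOPF is meant to capture; SCUC and SCED add integer commitment decisions and security constraints on top of this core.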
Incorporating simulation in vascular surgery education.
Bismuth, Jean; Donovan, Michael A; O'Malley, Marcia K; El Sayed, Hosam F; Naoum, Joseph J; Peden, Eric K; Davies, Mark G; Lumsden, Alan B
2010-10-01
The traditional apprenticeship model introduced by Halsted of "learning by doing" may just not be valid in the modern practice of vascular surgery. The model is often criticized for being somewhat unstructured because a resident's experience is based on what comes through the "door." In an attempt to promote uniformity of training, multiple national organizations are currently delineating standard curricula for each trainee to govern the knowledge and cases required in a vascular residency. However, the outcomes are anything but uniform. This means that we graduate vascular specialists with a surprisingly wide spectrum of abilities. Use of simulation may benefit trainees in attaining a level of technical expertise that will benefit themselves and their patients. Furthermore, there is likely a need to establish a simulation-based certification process for graduating trainees to further ascertain minimum technical abilities. Copyright © 2010 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.
A Bayesian network model for predicting pregnancy after in vitro fertilization.
Corani, G; Magli, C; Giusti, A; Gianaroli, L; Gambardella, L M
2013-11-01
We present a Bayesian network model for predicting the outcome of in vitro fertilization (IVF). The problem is characterized by a particular missingness process; we propose a simple but effective averaging approach which improves parameter estimates compared to the traditional MAP estimation. We present results with generated data and the analysis of a real data set. Moreover, we assess by means of a simulation study the effectiveness of the model in supporting the selection of the embryos to be transferred. © 2013 Elsevier Ltd. All rights reserved.
Parallel computing in enterprise modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
Hanuschkin, Alexander; Kunkel, Susanne; Helias, Moritz; Morrison, Abigail; Diesmann, Markus
2010-01-01
Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision. PMID:21031031
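The key operation the abstract contrasts with spike prediction, retrospective detection of a threshold crossing within the most recent time step, can be sketched for a leaky integrate-and-fire neuron as below. The spike time is recovered by linear interpolation between the grid points bracketing the crossing; the model parameters are generic textbook values, not those of the study.

```python
import numpy as np

def lif_time_driven(I, dt=0.1, tau=10.0, v_rest=-70.0, v_th=-55.0, v_reset=-70.0, R=10.0):
    """Time-driven LIF integration (exact exponential update per step) with
    retrospective, linearly interpolated detection of threshold crossings.

    I: input current per step (nA), dt in ms, voltages in mV, R in Mohm."""
    spikes, v = [], v_rest
    decay = np.exp(-dt / tau)
    for k, i_k in enumerate(I):
        v_target = v_rest + R * i_k                    # steady state for this step's input
        v_new = v_target + (v - v_target) * decay      # exact solution over one step
        if v_new >= v_th:                              # a crossing happened within (t_k, t_k + dt]
            frac = (v_th - v) / (v_new - v)            # linear interpolation of the crossing time
            spikes.append((k + frac) * dt)
            v_new = v_reset
        v = v_new
    return spikes

I = np.full(2000, 2.0)                                 # constant 2 nA drive for 200 ms (assumed)
print(lif_time_driven(I)[:5])
```

The check-and-interpolate step is all that has to run per incoming update, which is the source of the cost advantage over predicting future crossings in the event-driven scheme.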
Accuracy of finite-difference modeling of seismic waves : Simulation versus laboratory measurements
NASA Astrophysics Data System (ADS)
Arntsen, B.
2017-12-01
The finite-difference technique for numerical modeling of seismic waves is still important and, in some areas, extensively used. For exploration purposes, finite-difference simulation is at the core of both traditional imaging techniques such as reverse-time migration and more elaborate full-waveform inversion techniques. The accuracy and fidelity of finite-difference simulation of seismic waves are hard to quantify, and meaningful error analysis is really only available for simplistic media. A possible alternative to theoretical error analysis is provided by comparing finite-difference simulated data with laboratory data created using a scale model. The advantage of this approach is the accurate knowledge of the model, within measurement precision, and of the location of sources and receivers. We use a model made of PVC immersed in water and containing horizontal and tilted interfaces together with several spherical objects to generate ultrasonic pressure reflection measurements. The physical dimensions of the model are of the order of a meter, which after scaling represents a model with dimensions of the order of 10 kilometers and frequencies in the range of one to thirty hertz. We find that for plane horizontal interfaces the laboratory data can be reproduced by the finite-difference scheme with relatively small error, but for steeply tilted interfaces the error increases. For spherical interfaces the discrepancy between laboratory data and simulated data is sometimes much more severe, to the extent that it is not possible to simulate reflections from parts of highly curved bodies. The results are important in view of the fact that finite-difference modeling is often at the core of imaging and inversion algorithms tackling complicated geological areas with highly curved interfaces.
Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
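The kind of closed-form accuracy prediction the extended model enables can be illustrated with the standard signal-detection expression for localizing one target among M positions under a max-response rule, P(correct) = integral of phi(x - d') * Phi(x)^(M-1) dx. The sketch below evaluates this integral numerically; it is a generic SDT calculation for comparison purposes, not the Guided Search model's own equations, and the d' value and set sizes are assumed.

```python
from scipy.stats import norm
from scipy.integrate import quad

def p_correct_localization(d_prime, M):
    """Probability that the target location gives the maximum response among M locations,
    assuming unit-variance Gaussian responses and a max-rule observer."""
    integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (M - 1)
    p, _ = quad(integrand, -10, 10)
    return p

for M in (2, 4, 8, 16):                 # set sizes (assumed for illustration)
    print(M, round(p_correct_localization(1.5, M), 3))
```

Accuracy falls as set size grows even with a fixed target strength, which is the qualitative set-size effect that both Guided Search and SDT accounts must reproduce.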
NASA Astrophysics Data System (ADS)
Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.
2016-12-01
Multi-scale models of the atmosphere provide an opportunity to investigate processes that are unresolved by traditional Global Climate Models while remaining computationally viable for climate-length time scales. The Multiscale Modeling Framework (MMF) represents a shift away from the large horizontal grid spacing in traditional GCMs, which leads to overabundant light precipitation and a lack of heavy events, toward a model in which precipitation intensity is allowed to vary over a much wider range of values. Resolving atmospheric motions on the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously only obtained by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but are significant in terms of interaction with the land surface and potential impact on human life. Three versions of the Community Earth System Model were used in this study: the standard CESM; the multi-scale 'Super-Parameterized' CESM, where large-scale parameterizations have been replaced with a 2D cloud-permitting model; and a multi-instance land version of the SP-CESM, where each column of the 2D CRM is allowed to interact with an individual land unit. These simulations were carried out using prescribed sea surface temperatures for the period 1979-2006, with daily precipitation saved for all 28 years. Comparisons of the statistical properties of precipitation between model architectures and against observations from rain gauges were made, with specific focus on the detection and evaluation of extreme precipitation events.
NASA Astrophysics Data System (ADS)
Hoepfer, Matthias
Over the last two decades, computer modeling and simulation have evolved as the tools of choice for the design and engineering of dynamic systems. With increased system complexities, modeling and simulation become essential enablers for the design of new systems. Some of the advantages that modeling and simulation-based system design allows for are the replacement of physical tests to ensure product performance, reliability and quality, the shortening of design cycles due to the reduced need for physical prototyping, the design for mission scenarios, the invoking of currently nonexistent technologies, and the reduction of technological and financial risks. Traditionally, dynamic systems are modeled in a monolithic way. Such monolithic models include all the data, relations and equations necessary to represent the underlying system. With the increased complexity of these models, the monolithic model approach reaches certain limits regarding, for example, model handling and maintenance. Furthermore, while the available computer power has been steadily increasing according to Moore's Law (a doubling in computational power roughly every two years), the ever-increasing complexity of new models has negated the increased resources available. Lastly, modern systems and design processes are interdisciplinary, making it necessary for models to be flexible enough to incorporate different modeling and design approaches. The solution to bypassing the shortcomings of monolithic models is co-simulation. In a very general sense, co-simulation addresses the issue of linking together different dynamic sub-models into a model which represents the overall, integrated dynamic system. It is therefore an important enabler for the design of interdisciplinary, interconnected, highly complex dynamic systems. While a basic co-simulation setup can be very easy, complications can arise when sub-models display behaviors such as algebraic loops, singularities, or constraints. This work frames the co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising when co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.
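At its simplest, the co-simulation idea described here is a master algorithm that advances two sub-model solvers over a shared macro time step and exchanges coupling variables at each communication point. The sketch below shows such a fixed-step, Jacobi-style master for two toy ODE sub-models; the sub-models, coupling variables, and step sizes are illustrative assumptions rather than anything from this work.

```python
class SubModel:
    """Minimal stand-in for a sub-model solver: dx/dt = a*x + b*u, integrated internally."""
    def __init__(self, a, b, x0):
        self.a, self.b, self.x = a, b, x0

    def step(self, u, dt, n_internal=10):
        """Advance one macro step dt with n_internal Euler micro-steps, holding the input u constant."""
        h = dt / n_internal
        for _ in range(n_internal):
            self.x += h * (self.a * self.x + self.b * u)
        return self.x                              # output fed to the other sub-model

# two coupled sub-models: each one's output drives the other (weak coupling, assumed constants)
m1 = SubModel(a=-1.0, b=0.5, x0=1.0)
m2 = SubModel(a=-2.0, b=0.3, x0=0.0)

dt_macro, t_end = 0.05, 5.0                        # communication step: the critical choice
y1, y2 = m1.x, m2.x
for _ in range(int(t_end / dt_macro)):
    # Jacobi scheme: both sub-models use the coupling values from the previous macro step
    y1_new = m1.step(u=y2, dt=dt_macro)
    y2_new = m2.step(u=y1, dt=dt_macro)
    y1, y2 = y1_new, y2_new
print("final states:", y1, y2)
```

The choice of dt_macro is exactly the trade-off the abstract highlights: a larger communication step lowers cost but increases the coupling error introduced by holding the exchanged inputs constant within each step.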
Pre-compression volume on flow ripple reduction of a piston pump
NASA Astrophysics Data System (ADS)
Xu, Bing; Song, Yuechao; Yang, Huayong
2013-11-01
An axial piston pump with a pre-compression volume (PCV) has lower flow ripple over a wide range of operating conditions than a traditional pump. However, a precise simulation model of the axial piston pump with PCV has been lacking, so the PCV parameters are difficult to determine. A finite element simulation model for a piston pump with PCV is built by considering the piston movement, the fluid characteristics (including fluid compressibility and viscosity) and the leakage flow rate. Then a test of the pump flow ripple, called the secondary source method, is implemented to validate the simulation model. Thirdly, by comparing results among the simulation results, test results and results from other publications at the same operating condition, the simulation model is validated and used in optimizing the axial piston pump with PCV. According to the pump flow ripples obtained by the simulation model with different PCV parameters, the flow ripple is smallest when the PCV angle is 13° and the PCV volume is 1.3×10⁻⁴ m³ at the operating condition where the pump suction pressure is 2 MPa, the pump delivery pressure is 15 MPa, the pump speed is 1,000 r/min, and the swash plate angle is 13°. The flow ripple can also be reduced when the pump suction pressure is 2 MPa, the pump delivery pressure is 5 MPa, 15 MPa, or 22 MPa, the pump speed is 400 r/min, 1,000 r/min, or 1,500 r/min, and the swash plate angle is 11°, 13°, 15° or 17°, respectively. The proposed finite element simulation model provides a method for optimizing the PCV structure and guidance for designing a quieter axial piston pump.
Simulations of ecosystem hydrological processes using a unified multi-scale model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofan; Liu, Chongxuan; Fang, Yilin
2015-01-01
This paper presents a unified multi-scale model (UMSM) that we developed to simulate hydrological processes in an ecosystem containing both surface water and groundwater. The UMSM approach modifies the Navier–Stokes equation by adding a Darcy force term to formulate a single set of equations to describe fluid momentum, and uses a generalized equation to describe fluid mass balance. The advantage of the approach is that the single set of equations can describe hydrological processes in both surface water and groundwater, where different models are traditionally required to simulate fluid flow. This feature of the UMSM significantly facilitates modelling of hydrological processes in ecosystems, especially at locations where soil/sediment may be frequently inundated and drained in response to precipitation and regional hydrological and climate changes. In this paper, the UMSM was benchmarked using WASH123D, a model commonly used for simulating coupled surface water and groundwater flow. The Disney Wilderness Preserve (DWP) site in Kissimmee, Florida, where active field monitoring and measurements are ongoing to understand hydrological and biogeochemical processes, was then used as an example to illustrate the UMSM modelling approach. The simulation results demonstrated that the DWP site is subject to frequent changes in soil saturation, the geometry and volume of surface water bodies, and groundwater and surface water exchange. All the hydrological phenomena in the surface water and groundwater components, including inundation and draining, river bank flow, groundwater table change, soil saturation, hydrological interactions between groundwater and surface water, and the migration of surface water and groundwater interfaces, can be simultaneously simulated using the UMSM. Overall, the UMSM offers a cross-scale approach that is particularly suitable for simulating coupled surface and ground water flow in ecosystems with strong surface water and groundwater interactions.
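A schematic of the kind of momentum equation the UMSM description implies (a Navier-Stokes equation augmented with a Darcy drag term, written here in a generic Darcy-Brinkman-type form; the exact formulation used by the authors may differ) is:

```latex
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu \nabla^{2}\mathbf{u} - \frac{\mu}{K}\,\mathbf{u} + \rho\,\mathbf{g}
```

With the permeability K effectively infinite in open surface water the Darcy term vanishes and the equation reduces to ordinary Navier-Stokes, while in the subsurface the drag term dominates and Darcy-like flow is recovered, which is how a single equation set can span both regimes.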
A review of haptic simulator for oral and maxillofacial surgery based on virtual reality.
Chen, Xiaojun; Hu, Junlei
2018-06-01
Traditional medical training in oral and maxillofacial surgery (OMFS) may be limited by its low efficiency and high cost due to the shortage of cadaver resources. With the combination of visual rendering and force feedback, surgical simulators are becoming increasingly popular in hospitals and medical schools as an alternative to traditional training. Areas covered: The major goal of this review is to provide a comprehensive reference source on current and future developments of haptic OMFS simulators based on virtual reality (VR) for relevant researchers. Expert commentary: Visual rendering, haptic rendering, tissue deformation, and evaluation are key components of a haptic surgery simulator based on VR. Compared with traditional medical training, the fusion of visual and tactile cues in the virtual environment of a surgery simulator enables a considerably vivid sensation, and operators have more opportunities to practice surgical skills and receive objective evaluation as a reference.
NASA Astrophysics Data System (ADS)
Razurel, Pierre; Niayifar, Amin; Perona, Paolo
2017-04-01
Hydropower plays an important role in supplying worldwide energy demand, contributing approximately 16% of global electricity production. Although hydropower, as an emission-free renewable energy, is a reliable source of energy to mitigate climate change, its development will increase river exploitation. The environmental impacts associated with both small hydropower plants (SHP) and traditional dammed systems have been found to be the consequence of replacing the natural flow regime with other release policies, e.g. the minimal flow. Nowadays, in some countries, proportional allocation rules are also applied, aiming to mimic the natural flow variability. For example, these dynamic rules are part of the environmental guidance in the United Kingdom and constitute an improvement over static rules. In a context in which the full hydropower potential might be reached in the near future, a solution to optimize the water allocation seems essential. In this work, we present a model that makes it possible to simulate a wide range of water allocation rules (static and dynamic) for a specific hydropower plant and to evaluate their associated economic and ecological benefits. It is developed in the form of a graphical user interface (GUI) in which, depending on the specific type of hydropower plant (i.e., SHP or traditional dammed system), the user can specify the different characteristics (e.g., hydrological data and turbine characteristics) of the studied system. As an alternative to commonly used policies, a new class of dynamic allocation functions (non-proportional repartition rules) is introduced (e.g., Razurel et al., 2016). The efficiency plot resulting from the simulations shows the environmental indicator and the energy produced for each allocation policy. The optimal water distribution rules can be identified on the Pareto frontier, which is obtained by stochastic optimization in the case of storage systems (e.g., Niayifar and Perona, submitted) and by direct simulation for small hydropower ones (Razurel et al., 2016). Compared to proportional and constant minimal flows, economic and ecological efficiencies are found to be substantially improved when non-proportional water allocation rules are used for both SHP and traditional systems.
Sokolova, Ekaterina; Aström, Johan; Pettersson, Thomas J R; Bergstedt, Olof; Hermansson, Malte
2012-01-17
The implementation of microbial fecal source tracking (MST) methods in drinking water management is limited by the lack of knowledge on the transport and decay of host-specific genetic markers in water sources. To address these limitations, the decay and transport of human (BacH) and ruminant (BacR) fecal Bacteroidales 16S rRNA genetic markers in a drinking water source (Lake Rådasjön in Sweden) were simulated using a microbiological model coupled to a three-dimensional hydrodynamic model. The microbiological model was calibrated using data from outdoor microcosm trials performed in March, August, and November 2010 to determine the decay of BacH and BacR markers in relation to traditional fecal indicators. The microcosm trials indicated that the persistence of BacH and BacR in the microcosms was not significantly different from the persistence of traditional fecal indicators. The modeling of BacH and BacR transport within the lake illustrated that the highest levels of genetic markers at the raw water intakes were associated with human fecal sources (on-site sewers and emergency sewer overflow). This novel modeling approach improves the interpretation of MST data, especially when fecal pollution from the same host group is released into the water source from different sites in the catchment.
NASA Astrophysics Data System (ADS)
Adams, Jordan M.; Gasparini, Nicole M.; Hobley, Daniel E. J.; Tucker, Gregory E.; Hutton, Eric W. H.; Nudurupati, Sai S.; Istanbulluoglu, Erkan
2017-04-01
Representation of flowing water in landscape evolution models (LEMs) is often simplified compared to hydrodynamic models, as LEMs make assumptions reducing physical complexity in favor of computational efficiency. The Landlab modeling framework can be used to bridge the divide between complex runoff models and more traditional LEMs, creating a new type of framework not commonly used in the geomorphology or hydrology communities. Landlab is a Python-language library that includes tools and process components that can be used to create models of Earth-surface dynamics over a range of temporal and spatial scales. The Landlab OverlandFlow component is based on a simplified inertial approximation of the shallow water equations, following the solution of de Almeida et al.(2012). This explicit two-dimensional hydrodynamic algorithm simulates a flood wave across a model domain, where water discharge and flow depth are calculated at all locations within a structured (raster) grid. Here, we illustrate how the OverlandFlow component contained within Landlab can be applied as a simplified event-based runoff model and how to couple the runoff model with an incision model operating on decadal timescales. Examples of flow routing on both real and synthetic landscapes are shown. Hydrographs from a single storm at multiple locations in the Spring Creek watershed, Colorado, USA, are illustrated, along with a map of shear stress applied on the land surface by flowing water. The OverlandFlow component can also be coupled with the Landlab DetachmentLtdErosion component to illustrate how the non-steady flow routing regime impacts incision across a watershed. The hydrograph and incision results are compared to simulations driven by steady-state runoff. Results from the coupled runoff and incision model indicate that runoff dynamics can impact landscape relief and channel concavity, suggesting that, on landscape evolution timescales, the OverlandFlow model may lead to differences in simulated topography in comparison with traditional methods. The exploratory test cases described within demonstrate how the OverlandFlow component can be used in both hydrologic and geomorphic applications.
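A minimal sketch of how the OverlandFlow component might be driven as an event-based runoff model on a synthetic grid. The field names, constructor arguments and stepping call follow the standard Landlab component interface as best recalled here and should be checked against the Landlab documentation; the inclined-plane topography, Manning's n and the instantaneous 10 mm "storm" are illustrative assumptions, not settings from the study.

```python
import numpy as np
from landlab import RasterModelGrid
from landlab.components import OverlandFlow

# Synthetic 1 km x 1 km inclined plane at 10 m resolution.
grid = RasterModelGrid((100, 100), xy_spacing=10.0)
z = grid.add_zeros("topographic__elevation", at="node")
z += grid.x_of_node * 0.01                    # 1% slope draining toward x = 0

# Represent the storm as an instantaneous 10 mm water depth everywhere.
h = grid.add_zeros("surface_water__depth", at="node")
h += 0.010

of = OverlandFlow(grid, mannings_n=0.03)

# Route the flood wave for one hour with a short fixed step.
dt, t_final, t = 5.0, 3600.0, 0.0
while t < t_final:
    of.run_one_step(dt)
    t += dt

print("max remaining water depth [m]:",
      grid.at_node["surface_water__depth"].max())
```

A fuller experiment would read a real DEM into the grid, apply a time-varying rainfall field, and pass the resulting discharge or shear-stress field to an incision component, as in the coupled runs described above.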
Analysis on design and performance of a solar rotary house
NASA Astrophysics Data System (ADS)
Fan, Xuhong; Zhang, Zhaochang; Yang, Fan; Cao, Lilin; Xu, Jing; Yuan, Mingyang
2017-04-01
A solar rotary house is designed, composed of a rotating main structure, a fixed cylinder, a rotating drive system, a solar photovoltaic system and so on, to achieve 360° rotation. It can thus avoid the dark and humid conditions found in the permanent shade of a traditional fixed house. Its bearing capacity, driving force and safety are analyzed. The rotary driving force and household energy are provided by the solar photovoltaic system on the roof and walls. PHOENICS and Ecotect simulation analyses conclude that, compared with traditional houses, the rotating house has better indoor natural ventilation, more uniform daylighting and longer sunshine hours, making it a green, energy-saving and comfortable building model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krein, Gastao; Leme, Rafael R.; Woitek, Marcio
Traditional Monte Carlo simulations of QCD in the presence of a baryon chemical potential are plagued by the complex phase problem, and new numerical approaches are necessary for studying the phase diagram of the theory. In this work we consider a Z₃ Polyakov loop model for the deconfining phase transition in QCD and discuss how a flux representation of the model in terms of dimer and monomer variables solves the complex action problem. We present results of numerical simulations using a worm algorithm for the specific heat and two-point correlation function of Polyakov loops. Evidence of a first-order deconfinement phase transition is discussed.
Eiber, Calvin D; Morley, John W; Lovell, Nigel H; Suaning, Gregg J
2014-01-01
We present a computational model of the optic pathway which has been adapted to simulate cortical responses to visual-prosthetic stimulation. This model reproduces the statistically observed distributions of spikes for cortical recordings of sham and maximum-intensity stimuli, while simultaneously generating cellular receptive fields consistent with those observed using traditional visual neuroscience methods. By inverting this model to generate candidate phosphenes which could generate the responses observed to novel stimulation strategies, we hope to aid the development of said strategies in-vivo before being deployed in clinical settings.
The impact of mesoscale convective systems on global precipitation: A modeling study
NASA Astrophysics Data System (ADS)
Tao, Wei-Kuo
2017-04-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. Typical MCSs have horizontal scales of a few hundred kilometers (km); therefore, a large domain and high resolution are required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) with 32 CRM grid points and 4 km grid spacing also might not have sufficient resolution and domain size for realistically simulating MCSs. In this study, the impact of MCSs on precipitation processes is examined by conducting numerical model simulations using the Goddard Cumulus Ensemble model (GCE) and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to simulations with fewer grid points (i.e., 32 and 64) and lower resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are either weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures (SSTs) is conducted and results in both reduced surface rainfall and evaporation.
Frank K. Lake
2013-01-01
Indigenous people's detailed traditional knowledge about fire, although superficially referenced in various writings, has not for the most part been analyzed in detail or simulated by resource managers, wildlife biologists, and ecologists. . . . Instead, scientists have developed the principles and theories of fire ecology, fire behavior and effects models, and...
Minneti, Michael; Baker, Craig J; Sullivan, Maura E
The landscape of graduate medical education has changed dramatically over the past decade and the traditional apprenticeship model has undergone scrutiny and modifications. The mandate of the 80-hour work-week, the introduction of integrated residency programs, increased global awareness about patient safety along with financial constraints have spurred changes in graduate educational practices. In addition, new technologies, more complex procedures, and a host of external constraints have changed where and how we teach technical and procedural skills. Simulation-based training has been embraced by the surgical community and has quickly become an essential component of most residency programs as a method to add efficacy to the traditional learning model. The purpose of this paper is twofold: (1) to describe the development of a perfused cadaver model with dynamic vital sign regulation, and (2) to assess the impact of a curriculum using this model and real world scenarios to teach surgical skills and error management. By providing a realistic training environment our aim is to enhance the acquisition of surgical skills and provide a more thorough assessment of resident performance. Twenty-six learners participated in the scenarios. Qualitative data showed that participants felt that the simulation model was realistic, and that participating in the scenarios helped them gain new knowledge, learn new surgical techniques and increase their confidence performing the skill in a clinical setting. Identifying the importance of both technical and nontechnical skills in surgical education has hastened the need for more realistic simulators and environments in which they are placed. Team members should be able to interact in ways that allow for a global display of their skills thus helping to provide a more comprehensive assessment by faculty and learners. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function; then the regression coefficients are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so the estimation precision can be improved when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of the residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
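A minimal sketch of the two-stage idea on simulated data: an ordinary least squares fit, a local-linear (degree-1 local polynomial) smooth of the squared residuals to estimate the unknown variance function, and a feasible GLS refit weighted by its inverse. The data, Gaussian kernel and bandwidth below are illustrative choices, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated heteroscedastic data: y = 1 + 2*x1 - x2 + eps, sd(eps) depends on x1.
n = 500
X = rng.uniform(-1, 1, size=(n, 2))
sigma = 0.2 + 0.8 * X[:, 0] ** 2
y = 1 + 2 * X[:, 0] - X[:, 1] + sigma * rng.standard_normal(n)

# Stage 1: OLS fit and squared residuals.
Xd = np.column_stack([np.ones(n), X])
beta_ols = np.linalg.lstsq(Xd, y, rcond=None)[0]
r2 = (y - Xd @ beta_ols) ** 2

# Stage 2: local-linear smoothing of the squared residuals, i.e. a
# non-parametric estimate of the variance function with no assumed form.
def local_linear(X, t, x0, h=0.3):
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * h ** 2))   # Gaussian kernel
    Z = np.column_stack([np.ones(len(t)), X - x0])
    ZW = Z * w[:, None]
    return np.linalg.solve(Z.T @ ZW, ZW.T @ t)[0]                # fitted value at x0

var_hat = np.array([max(local_linear(X, r2, x0), 1e-6) for x0 in X])

# Stage 3: generalized (weighted) least squares with weights 1 / var_hat.
w = 1.0 / var_hat
beta_gls = np.linalg.solve(Xd.T @ (Xd * w[:, None]), Xd.T @ (w * y))
print("OLS :", np.round(beta_ols, 3))
print("FGLS:", np.round(beta_gls, 3))
```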
Lemkul, Justin A; Roux, Benoît; van der Spoel, David; MacKerell, Alexander D
2015-07-15
Explicit treatment of electronic polarization in empirical force fields used for molecular dynamics simulations represents an important advancement in simulation methodology. A straightforward means of treating electronic polarization in these simulations is the inclusion of Drude oscillators, which are auxiliary, charge-carrying particles bonded to the cores of atoms in the system. The additional degrees of freedom make these simulations more computationally expensive relative to simulations using traditional fixed-charge (additive) force fields. Thus, efficient tools are needed for conducting these simulations. Here, we present the implementation of highly scalable algorithms in the GROMACS simulation package that allow for the simulation of polarizable systems using extended Lagrangian dynamics with a dual Nosé-Hoover thermostat as well as simulations using a full self-consistent field treatment of polarization. The performance of systems of varying size is evaluated, showing that the present code parallelizes efficiently and is the fastest implementation of the extended Lagrangian methods currently available for simulations using the Drude polarizable force field. © 2015 Wiley Periodicals, Inc.
A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification
NASA Astrophysics Data System (ADS)
Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun
2016-12-01
Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.
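A single-machine sketch of the Adaboost-BP ensemble itself, leaving out the MapReduce/Hadoop parallelization and the paper's image features. The digits dataset, network size, 15 rounds and SAMME-style weighted resampling are illustrative choices; scikit-learn's MLPClassifier stands in for a BP network and does not accept sample weights directly, hence the resampling.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

n_rounds, n_classes = 15, len(np.unique(y))
w = np.full(len(ytr), 1.0 / len(ytr))            # AdaBoost sample weights
learners, alphas = [], []

for m in range(n_rounds):
    # Apply the boosting weights by weighted resampling of the training set.
    idx = rng.choice(len(ytr), size=len(ytr), p=w)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                        random_state=m).fit(Xtr[idx], ytr[idx])
    miss = clf.predict(Xtr) != ytr
    err = np.clip(np.sum(w * miss), 1e-10, 1 - 1e-10)
    alpha = np.log((1 - err) / err) + np.log(n_classes - 1)     # SAMME weight
    w *= np.exp(alpha * miss)
    w /= w.sum()
    learners.append(clf)
    alphas.append(alpha)

# Strong classifier: weighted vote over the weak BP networks.
votes = np.zeros((len(Xte), n_classes))
for clf, a in zip(learners, alphas):
    votes[np.arange(len(Xte)), clf.predict(Xte)] += a
print("ensemble accuracy:", round((votes.argmax(axis=1) == yte).mean(), 3))
```

In the paper's setting, each boosting round (training a weak BP network) and the feature extraction would be expressed as Map tasks, with the vote aggregation as a Reduce task on the Hadoop cluster.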
Dynamics of synthetic drugs transmission model with psychological addicts and general incidence rate
NASA Astrophysics Data System (ADS)
Ma, Mingju; Liu, Sanyang; Xiang, Hong; Li, Jun
2018-02-01
Synthetic drugs are gradually replacing traditional ones as the most popular, which has given rise to serious social issues in recent years. In this paper, a synthetic drug transmission model with psychological addicts and a general contact rate is proposed. The local and global stabilities are determined by the basic reproduction number R0. By analyzing parameter sensitivities, we find that controlling psychological addiction is more effective than drug treatment. These results are verified by numerical simulations.
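To make the flavour of such an analysis concrete, here is a generic compartmental sketch with hypothetical compartments and rates chosen purely for illustration, not the authors' system or parameters: susceptibles S become psychological addicts P through contact with drug users D, progress to D, enter treatment T and may relapse. Integrating the system numerically shows whether use persists or dies out, which for this toy structure is governed by its basic reproduction number.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rates (per unit time); not taken from the paper.
Lam, mu = 1.0, 0.01        # recruitment and natural removal
beta, k = 0.4, 0.2         # contact transmission and progression P -> D
gamma, sigma = 0.1, 0.05   # treatment entry D -> T and relapse T -> D

def rhs(t, x):
    S, P, D, T = x
    N = S + P + D + T
    new = beta * S * D / N                 # simple mass-action incidence
    return [Lam - new - mu * S,
            new - (mu + k) * P,
            k * P + sigma * T - (mu + gamma) * D,
            gamma * D - (mu + sigma) * T]

sol = solve_ivp(rhs, (0.0, 500.0), [95.0, 3.0, 2.0, 0.0])
S, P, D, T = sol.y[:, -1]
print(f"long-run fraction of addicts (P+D): {(P + D) / (S + P + D + T):.3f}")

# For this toy structure a next-generation calculation gives
# R0 = beta*k / ((mu + k) * (mu + gamma - gamma*sigma/(mu + sigma)));
# drug use dies out when R0 < 1 and persists when R0 > 1.
```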
Energy performance of building fabric - Comparing two types of vernacular residential houses
NASA Astrophysics Data System (ADS)
Draganova, Vanya Y.; Matsumoto, Hiroshi; Tsuzuki, Kazuyo
2017-10-01
Notwithstanding apparent differences, Japanese and Bulgarian traditional residential houses share a lot of common features - building materials, building techniques, even layout design. Despite the similarities, these two types of houses have not been compared so far. This study initiates such a comparison. The focus is on houses in areas with similar climate in both countries. Current legislation requirements are compared, as well as the criteria for the thermal comfort of people. Achieving high energy performance results from a dynamic system of four main factors - thermal comfort range, heating/cooling source, building envelope and climatic conditions. A change in any single one of them can affect the final energy performance; however, it can be expected that a combination of changes in more than one factor will usually occur. The aim of this study is to evaluate the correlation between the thermal performance of a building envelope designed under current regulations and that of a traditional one, bearing in mind the different thermal comfort ranges in the two countries. A sample building model is calculated in Scenario 1 - Japanese traditional building fabric, Scenario 2 - Bulgarian traditional building fabric and Scenario 3 - meeting the requirements of the more demanding current regulations. The energy modelling is conducted using EnergyPlus through the OpenStudio cross-platform of software tools. The 3D geometry for the simulation is created using the OpenStudio SketchUp Plug-in. Equal numbers of inhabitants, equal electricity consumption and equal natural ventilation are assumed. The results show that overall low energy consumption can be achieved using traditional building fabric as well, when paired with a wider thermal comfort range. Under these conditions traditional building design is still viable today. This knowledge can re-establish the use of traditional building fabric in contemporary design and stimulate preservation of local culture, building traditions and community identity.
Bayesian Approaches for Model and Multi-mission Satellites Data Fusion
NASA Astrophysics Data System (ADS)
Khaki, M., , Dr; Forootan, E.; Awange, J.; Kuhn, M.
2017-12-01
Traditionally, data assimilation is formulated as a Bayesian approach that allows one to update model simulations using new incoming observations. This integration is necessary due to the uncertainty in model outputs, which mainly results from several drawbacks, e.g., limitations in accounting for the complexity of real-world processes, uncertainties of (unknown) empirical model parameters, and the absence of high resolution (both spatially and temporally) data. Data assimilation, however, requires knowledge of the physical process of a model, which may be either poorly described or entirely unavailable. Therefore, an alternative method is required to avoid this dependency. In this study we present a novel approach which can be used in hydrological applications. A non-parametric framework based on the Kalman filtering technique is proposed to improve hydrological model estimates without using the model dynamics. In particular, we assess Kalman-Takens formulations that take advantage of the delay coordinate method to reconstruct nonlinear dynamics in the absence of the physical process. This empirical relationship is then used instead of the model equations to integrate satellite products with model outputs. We use water storage variables from World-Wide Water Resources Assessment (W3RA) simulations and update them using terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) and surface soil moisture data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) over Australia for the period 2003 to 2011. The performance of the proposed integration method is compared with data obtained from the more traditional assimilation scheme using the Ensemble Square-Root Filter (EnSRF) filtering technique (Khaki et al., 2017), as well as by evaluating them against ground-based soil moisture and groundwater observations within the Murray-Darling Basin.
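A toy one-dimensional sketch of the model-free idea: the "model" forecast is built from delay-coordinate (Takens) embedding of the state's own history via nearest-neighbour analogues, and each new observation is blended in with a scalar Kalman-style update. Synthetic data stand in for W3RA/GRACE here, and the embedding dimension, neighbour count and error variances are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth": a seasonal storage-like signal; observations are noisy.
t = np.arange(400)
truth = 10 * np.sin(2 * np.pi * t / 36.5) + 0.5 * rng.standard_normal(len(t))
obs = truth + 2.0 * rng.standard_normal(len(t))
obs_var, forecast_var = 4.0, 1.0          # assumed error variances

m = 4                                      # embedding dimension (delay = 1 step)

def takens_forecast(history, k=10):
    """Predict the next value as the mean successor of the k delay vectors
    most similar to the current one (a stand-in for model dynamics)."""
    H = np.asarray(history)
    lib = np.stack([H[i:i + m] for i in range(len(H) - m)])   # past delay vectors
    nxt = H[m:]                                               # their successors
    d = np.linalg.norm(lib - H[-m:], axis=1)
    return nxt[np.argsort(d)[:k]].mean()

state = list(obs[:30])                     # spin-up with raw observations
analysis = []
for z in obs[30:]:
    xf = takens_forecast(state)                       # model-free forecast
    K = forecast_var / (forecast_var + obs_var)       # scalar Kalman gain
    xa = xf + K * (z - xf)                            # update with observation
    state.append(xa)
    analysis.append(xa)

rmse = np.sqrt(np.mean((np.array(analysis) - truth[30:]) ** 2))
print("analysis RMSE against truth:", round(float(rmse), 2))
```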
NASA Astrophysics Data System (ADS)
Powell, Gavin; Markham, Keith C.; Marshall, David
2000-06-01
This paper presents the results of an investigation leading into an implementation of FLIR and LADAR data simulation for use in a multi sensor data fusion automated target recognition system. At present the main areas of application are in military environments but systems can easily be adapted to other areas such as security applications, robotics and autonomous cars. Recent developments have been away from traditional sensor modeling and toward modeling of features that are external to the system, such as atmosphere and part occlusion, to create a more realistic and rounded system. We have implemented such techniques and introduced a means of inserting these models into a highly detailed scene model to provide a rich data set for later processing. From our study and implementation we are able to embed sensor model components into a commercial graphics and animation package, along with object and terrain models, which can be easily used to create a more realistic sequence of images.
Individual Colorimetric Observer Model
Asano, Yuta; Fairchild, Mark D.; Blondé, Laurent
2016-01-01
This study proposes a vision model for individual colorimetric observers. The proposed model can be beneficial in many color-critical applications such as color grading and soft proofing to assess ranges of color matches instead of a single average match. We extended the CIE 2006 physiological observer by adding eight additional physiological parameters to model individual color-normal observers. These eight parameters control lens pigment density, macular pigment density, optical densities of L-, M-, and S-cone photopigments, and λmax shifts of L-, M-, and S-cone photopigments. By identifying the variability of each physiological parameter, the model can simulate color matching functions among color-normal populations using Monte Carlo simulation. The variabilities of the eight parameters were identified through two steps. In the first step, extensive reviews of past studies were performed for each of the eight physiological parameters. In the second step, the obtained variabilities were scaled to fit a color matching dataset. The model was validated using three different datasets: traditional color matching, applied color matching, and Rayleigh matches. PMID:26862905
Hulme, Adam; Thompson, Jason; Nielsen, Rasmus Oestergaard; Read, Gemma J M; Salmon, Paul M
2018-06-18
There have been recent calls for the application of the complex systems approach in sports injury research. However, beyond theoretical description and static models of complexity, little progress has been made towards formalising this approach in a way that is practical to sports injury scientists and clinicians. Therefore, our objective was to use a computational modelling method and develop a dynamic simulation in sports injury research. Agent-based modelling (ABM) was used to model the occurrence of sports injury in a synthetic athlete population. The ABM was developed based on sports injury causal frameworks and was applied in the context of distance running-related injury (RRI). Using the acute:chronic workload ratio (ACWR), we simulated the dynamic relationship between changes in weekly running distance and RRI through the manipulation of various 'athlete management tools'. The findings confirmed that building weekly running distances over time, even within the reported ACWR 'sweet spot', will eventually result in RRI as athletes reach and surpass their individual physical workload limits. Introducing training-related error into the simulation and the modelling of a 'hard ceiling' dynamic resulted in a higher RRI incidence proportion across the population at higher absolute workloads. The presented simulation offers a practical starting point to further apply more sophisticated computational models that can account for the complex nature of sports injury aetiology. Alongside traditional forms of scientific inquiry, the use of ABM and other simulation-based techniques could be considered as a complementary and alternative methodological approach in sports injury research. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
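As a concrete illustration of the workload logic above, a short sketch of the acute:chronic workload ratio for a runner who builds weekly distance by 10%, using one common convention (the current week's distance divided by the rolling mean of the previous four weeks, with 0.8-1.3 as the commonly cited 'sweet spot'); the base distance and progression are illustrative, not values from the study.

```python
import numpy as np

def acwr(weekly_km):
    """ACWR per week: current week's load over the mean of the previous 4 weeks."""
    weekly_km = np.asarray(weekly_km, dtype=float)
    out = np.full(len(weekly_km), np.nan)
    for i in range(4, len(weekly_km)):
        chronic = weekly_km[i - 4:i].mean()
        out[i] = weekly_km[i] / chronic if chronic > 0 else np.nan
    return out

weeks = np.arange(20)
distance = 20.0 * 1.10 ** weeks          # 10% weekly progression from 20 km
ratio = acwr(distance)

for w in range(4, len(weeks)):
    print(f"week {w:2d}: {distance[w]:6.1f} km  ACWR = {ratio[w]:.2f}")
```

With this progression the ratio stays near 1.26, inside the usual 'sweet spot', even though absolute weekly distance keeps climbing, which is exactly the mechanism by which the simulated athletes eventually exceed their individual workload ceilings.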
NASA Astrophysics Data System (ADS)
Zhao, Lifei; Li, Zhen; Caswell, Bruce; Ouyang, Jie; Karniadakis, George Em
2018-06-01
We simulate complex fluids by means of an on-the-fly coupling of the bulk rheology to the underlying microstructure dynamics. In particular, a continuum model of polymeric fluids is constructed without a pre-specified constitutive relation, but instead it is actively learned from mesoscopic simulations where the dynamics of polymer chains is explicitly computed. To couple the bulk rheology of polymeric fluids and the microscale dynamics of polymer chains, the continuum approach (based on the finite volume method) provides the transient flow field as inputs for the (mesoscopic) dissipative particle dynamics (DPD), and in turn DPD returns an effective constitutive relation to close the continuum equations. In this multiscale modeling procedure, we employ an active learning strategy based on Gaussian process regression (GPR) to minimize the number of expensive DPD simulations, where adaptively selected DPD simulations are performed only as necessary. Numerical experiments are carried out for flow past a circular cylinder of a non-Newtonian fluid, modeled at the mesoscopic level by bead-spring chains. The results show that only five DPD simulations are required to achieve an effective closure of the continuum equations at Reynolds number Re = 10. Furthermore, when Re is increased to 100, only one additional DPD simulation is required for constructing an extended GPR-informed model closure. Compared to traditional message-passing multiscale approaches, applying an active learning scheme to multiscale modeling of non-Newtonian fluids can significantly increase the computational efficiency. Although the method demonstrated here obtains only a local viscosity from the polymer dynamics, it can be extended to other multiscale models of complex fluids whose macro-rheology is unknown.
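A minimal sketch of the active-learning loop itself with scikit-learn's Gaussian process regression, where an inexpensive analytic shear-thinning curve stands in for the expensive DPD runs; the function, kernel, tolerance and candidate grid are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def expensive_simulation(shear_rate):
    """Stand-in for a mesoscale (DPD) run returning an effective viscosity."""
    return 1.0 + 4.0 / (1.0 + (shear_rate / 5.0) ** 0.8)

# Shear rates the continuum solver might request from the closure.
candidates = np.linspace(0.1, 100.0, 400)[:, None]

# Seed the surrogate with two anchor simulations at the range ends.
X = np.array([[0.1], [100.0]])
y = np.array([expensive_simulation(s) for s in X.ravel()])

gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1e-4),
                              normalize_y=True)

tol = 0.02
for _ in range(20):
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    if std.max() < tol:                      # surrogate trusted everywhere
        break
    xq = candidates[np.argmax(std)]          # run "DPD" only where most uncertain
    X = np.vstack([X, xq])
    y = np.append(y, expensive_simulation(xq[0]))

print(f"GPR closure built from {len(X)} 'mesoscale' evaluations")
```

The same pattern (fit, check predictive uncertainty, query only where it exceeds a tolerance) is what keeps the number of DPD simulations in the single digits in the study above.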
Extending rule-based methods to model molecular geometry and 3D model resolution.
Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia
2016-08-01
Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.
The power of structural modeling of sub-grid scales - application to astrophysical plasmas
NASA Astrophysics Data System (ADS)
Georgiev Vlaykov, Dimitar; Grete, Philipp
2015-08-01
In numerous astrophysical phenomena the dynamical range can span 10s of orders of magnitude. This implies more than billions of degrees-of-freedom and precludes direct numerical simulations from ever being a realistic possibility. A physical model is necessary to capture the unresolved physics occurring at the sub-grid scales (SGS). Structural modeling is a powerful concept which renders itself applicable to various physical systems. It stems from the idea of capturing the structure of the SGS terms in the evolution equations based on the scale-separation mechanism and independently of the underlying physics. It originates in the hydrodynamics field of large-eddy simulations. We apply it to the study of astrophysical MHD. Here, we present a non-linear SGS model for compressible MHD turbulence. The model is validated a priori at the tensorial, vectorial and scalar levels against a set of high-resolution simulations of stochastically forced homogeneous isotropic turbulence in a periodic box. The parameter space spans 2 decades in sonic Mach number (0.2 - 20) and approximately one decade in magnetic Mach number (~1 - 8). This covers the super-Alfvenic sub-, trans-, and hyper-sonic regimes, with a range of plasma beta from 0.05 to 25. The Reynolds number is of the order of 10³. At the tensor level, the model components correlate well with the turbulence ones, at the level of 0.8 and above. Vectorially, the alignment with the true SGS terms is encouraging, with more than 50% of the model within 30° of the data. At the scalar level we look at the dynamics of the SGS energy and cross-helicity. The corresponding SGS flux terms have median correlations of ~0.8. Physically, the model represents well the two directions of the energy cascade. In comparison, traditional functional models exhibit poor local correlations with the data already at the scalar level. Vectorially, they are indifferent to the anisotropy of the SGS terms. They often struggle to represent the energy backscatter from small to large scales as well as the turbulent dynamo mechanism. Overall, the new model surpasses the traditional ones in all tests by a large margin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akhil Datta-Gupta
2006-12-31
We explore the use of efficient streamline-based simulation approaches for modeling partitioning interwell tracer tests in hydrocarbon reservoirs. Specifically, we utilize the unique features of streamline models to develop an efficient approach for interpretation and history matching of field tracer response. A critical aspect here is the underdetermined and highly ill-posed nature of the associated inverse problems. We have investigated the relative merits of the traditional history matching ('amplitude inversion') and a novel travel time inversion in terms of robustness of the method and convergence behavior of the solution. We show that the traditional amplitude inversion is orders of magnitude more non-linear and the solution here is likely to get trapped in local minimum, leading to inadequate history match. The proposed travel time inversion is shown to be extremely efficient and robust for practical field applications. The streamline approach is generalized to model water injection in naturally fractured reservoirs through the use of a dual media approach. The fractures and matrix are treated as separate continua that are connected through a transfer function, as in conventional finite difference simulators for modeling fractured systems. A detailed comparison with a commercial finite difference simulator shows very good agreement. Furthermore, an examination of the scaling behavior of the computation time indicates that the streamline approach is likely to result in significant savings for large-scale field applications. We also propose a novel approach to history matching finite-difference models that combines the advantage of the streamline models with the versatility of finite-difference simulation. In our approach, we utilize the streamline-derived sensitivities to facilitate history matching during finite-difference simulation. The use of finite-difference model allows us to account for detailed process physics and compressibility effects. The approach is very fast and avoids much of the subjective judgments and time-consuming trial-and-errors associated with manual history matching. We demonstrate the power and utility of our approach using a synthetic example and two field examples. We have also explored the use of a finite difference reservoir simulator, UTCHEM, for field-scale design and optimization of partitioning interwell tracer tests. The finite-difference model allows us to include detailed physics associated with reactive tracer transport, particularly those related with transverse and cross-streamline mechanisms. We have investigated the potential use of downhole tracer samplers and also the use of natural tracers for the design of partitioning tracer tests. Finally, we discuss several alternative ways of using partitioning interwell tracer tests (PITTs) in oil fields for the calculation of oil saturation, swept pore volume and sweep efficiency, and assess the accuracy of such tests under a variety of reservoir conditions.
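A toy illustration of why the travel-time formulation is attractive: the misfit is simply the lag that best aligns the simulated and observed tracer breakthrough curves, which tends to vary much more smoothly with model perturbations than a point-by-point amplitude misfit, in line with the quasi-linearity noted above. The curves and numbers below are synthetic.

```python
import numpy as np

def travel_time_shift(t, observed, simulated):
    """Lag (in time units) that best aligns the simulated breakthrough
    curve with the observed one, found by cross-correlation."""
    dt = t[1] - t[0]
    o = (observed - observed.mean()) / observed.std()
    s = (simulated - simulated.mean()) / simulated.std()
    lag = np.argmax(np.correlate(o, s, mode="full")) - (len(t) - 1)
    return lag * dt

t = np.linspace(0.0, 500.0, 1001)                        # days
observed = np.exp(-0.5 * ((t - 220.0) / 40.0) ** 2)      # field response
simulated = np.exp(-0.5 * ((t - 180.0) / 40.0) ** 2)     # model arrives too early

shift = travel_time_shift(t, observed, simulated)
print(f"simulated breakthrough should be delayed by about {shift:.0f} days")
```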
Hogan, Michael P; Pace, David E; Hapgood, Joanne; Boone, Darrell C
2006-11-01
Situation awareness (SA) is defined as the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future. This construct is vital to decision making in intense, dynamic environments. It has been used in aviation as it relates to pilot performance, but has not been applied to medical education. The most widely used objective tool for measuring trainee SA is the Situation Awareness Global Assessment Technique (SAGAT). The purpose of this study was to design and validate SAGAT for assessment of practical trauma skills, and to compare SAGAT results to traditional checklist style scoring. Using the Human Patient Simulator, we designed SAGAT for practical trauma skills assessment based on Advanced Trauma Life Support objectives. Sixteen subjects (four staff surgeons, four senior residents, four junior residents, and four medical students) participated in three scenarios each. They were assessed using SAGAT and traditional checklist assessment. A questionnaire was used to assess possible confounding factors in attaining SA and overall trainee satisfaction. SAGAT was found to show significant difference (analysis of variance; p < 0.001) in scores based on level of training lending statistical support to construct validity. SAGAT was likewise found to display reliability (Cronbach's alpha 0.767), and significant scoring correlation with traditional checklist performance measures (Pearson's coefficient 0.806). The questionnaire revealed no confounding factors and universal satisfaction with the human patient simulator and SAGAT. SAGAT is a valid, reliable assessment tool for trauma trainees in the dynamic clinical environment created by human patient simulation. Information provided by SAGAT could provide specific feedback, direct individualized teaching, and support curriculum change. Introduction of SAGAT could improve the current assessment model for practical trauma education.
Gussmann, Maya; Kirkeby, Carsten; Græsbøll, Kaare; Farre, Michael; Halasa, Tariq
2018-07-14
Intramammary infections (IMI) in dairy cattle lead to economic losses for farmers, both through reduced milk production and disease control measures. We present the first strain-, cow- and herd-specific bio-economic simulation model of intramammary infections in a dairy cattle herd. The model can be used to investigate the cost-effectiveness of different prevention and control strategies against IMI. The objective of this study was to describe a transmission framework, which simulates spread of IMI causing pathogens through different transmission modes. These include the traditional contagious and environmental spread and a new opportunistic transmission mode. In addition, the within-herd transmission dynamics of IMI causing pathogens were studied. Sensitivity analysis was conducted to investigate the influence of input parameters on model predictions. The results show that the model is able to represent various within-herd levels of IMI prevalence, depending on the simulated pathogens and their parameter settings. The parameters can be adjusted to include different combinations of IMI causing pathogens at different prevalence levels, representing herd-specific situations. The model is most sensitive to varying the transmission rate parameters and the strain-specific recovery rates from IMI. It can be used for investigating both short term operational and long term strategic decisions for the prevention and control of IMI in dairy cattle herds. Copyright © 2018 Elsevier Ltd. All rights reserved.
Ramsingh, Davinder; Alexander, Brenton; Le, Khanhvan; Williams, Wendell; Canales, Cecilia; Cannesson, Maxime
2014-09-01
To expose residents to two methods of education for point-of-care ultrasound, a traditional didactic lecture and a model/simulation-based lecture, which focus on concepts of cardiopulmonary function, volume status, and evaluation of severe thoracic/abdominal injuries; and to assess which method is more effective. Single-center, prospective, blinded trial. University hospital. Anesthesiology residents who were assigned to an educational day during the two-month research study period. Residents were allocated to two groups to receive either a 90-minute, one-on-one didactic lecture or a 90-minute lecture in a simulation center, during which they practiced on a human model and simulation mannequin (normal pathology). Data points included a pre-lecture multiple-choice test, post-lecture multiple-choice test, and post-lecture, human model-based examination. Post-lecture tests were performed within three weeks of the lecture. An experienced sonographer who was blinded to the education modality graded the model-based skill assessment examinations. Participants completed a follow-up survey to assess the perceptions of the quality of their instruction between the two groups. 20 residents completed the study. No differences were noted between the two groups in pre-lecture test scores (P = 0.97), but significantly higher scores for the model/simulation group occurred on both the post-lecture multiple choice (P = 0.038) and post-lecture model (P = 0.041) examinations. Follow-up resident surveys showed significantly higher scores in the model/simulation group regarding overall interest in perioperative ultrasound (P = 0.047) as well understanding of the physiologic concepts (P = 0.021). A model/simulation-based based lecture series may be more effective in teaching the skills needed to perform a point-of-care ultrasound examination to anesthesiology residents. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Sieh-Bliss, Selina
2014-01-01
While there is evidence in the literature measuring effective clinical teacher characteristics in traditional experiences, little is known of effective characteristics expected from clinical teachers during simulated clinical experiences. This study examined which clinical teaching behaviors and characteristics are perceived by nursing students'…
Test-Retest Reliability of Computerized, Everyday Memory Measures and Traditional Memory Tests.
ERIC Educational Resources Information Center
Youngjohn, James R.; And Others
Test-retest reliabilities and practice effect magnitudes were considered for nine computer-simulated tasks of everyday cognition and five traditional neuropsychological tests. The nine simulated everyday memory tests were from the Memory Assessment Clinic battery as follows: (1) simple reaction time while driving; (2) divided attention (driving…
Student Learning Opportunities in Traditional and Computer-Mediated Internships
ERIC Educational Resources Information Center
Bayerlein, Leopold; Jeske, Debora
2018-01-01
Purpose: The purpose of this paper is to provide a student learning outcome focussed assessment of the benefits and limitations of traditional internships, e-internships, and simulated internships to evaluate the potential of computer-mediated internships (CMIs) (e-internships and simulated internships) within higher education from a student…
Epidemic spreading in weighted networks: an edge-based mean-field solution.
Yang, Zimo; Zhou, Tao
2012-05-01
Weight distribution greatly impacts the epidemic spreading taking place on top of networks. This paper presents a study of a susceptible-infected-susceptible model on regular random networks with different kinds of weight distributions. Simulation results show that the more homogeneous weight distribution leads to higher epidemic prevalence, which, unfortunately, could not be captured by the traditional mean-field approximation. This paper gives an edge-based mean-field solution for general weight distribution, which can quantitatively reproduce the simulation results. This method could be applied to characterize the nonequilibrium steady states of dynamical processes on weighted networks.
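A minimal discrete-time Monte Carlo sketch of SIS spreading on a weighted regular random network, which can be rerun with different weight distributions (e.g. exponential versus nearly uniform with the same mean) to probe the effect described above; the rates, time step and network size are illustrative, and the edge-based mean-field equations themselves are in the paper.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Regular random network with i.i.d. edge weights (swap in
# rng.uniform(0.5, 1.5) for a more homogeneous distribution).
G = nx.random_regular_graph(d=6, n=2000, seed=0)
for u, v in G.edges:
    G[u][v]["w"] = rng.exponential(1.0)

lam, mu, dt, steps = 0.4, 1.0, 0.1, 400   # infection rate per unit weight, recovery rate
infected = set(rng.choice(G.number_of_nodes(), size=20, replace=False))

for _ in range(steps):
    nxt = set()
    for i in infected:
        if rng.random() > mu * dt:                        # no recovery this step
            nxt.add(i)
        for j in G[i]:
            if j not in infected:
                if rng.random() < 1 - np.exp(-lam * G[i][j]["w"] * dt):
                    nxt.add(j)                            # weighted transmission
    infected = nxt

print("steady-state prevalence:", len(infected) / G.number_of_nodes())
```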
Simulating competitive egress of noncircular pedestrians.
Hidalgo, R C; Parisi, D R; Zuriguel, I
2017-04-01
We present a numerical framework to simulate pedestrian dynamics in highly competitive conditions by means of a force-based model implemented with spherocylindrical particles instead of the traditional, symmetric disks. This modification of the individuals' shape allows one to naturally reproduce recent experimental findings of room evacuations through narrow doors in situations where the contact pressure among the pedestrians was rather large. In particular, we obtain a power-law tail distribution of the time lapses between the passage of consecutive individuals. In addition, we show that this improvement leads to new features where the particles' rotation acquires great significance.
One-Dimensional Fast Transient Simulator for Modeling Cadmium Sulfide/Cadmium Telluride Solar Cells
NASA Astrophysics Data System (ADS)
Guo, Da
Solar energy, including solar heating, solar architecture, solar thermal electricity and solar photovoltaics, is one of the primary alternative energy sources to fossil fuels. Significant research has been conducted on solar cell efficiency improvement, one of the most important techniques in the field. Simulation of various structures and materials of solar cells provides a deeper understanding of device operation and ways to improve their efficiency. Over the last two decades, polycrystalline thin-film Cadmium-Sulfide and Cadmium-Telluride (CdS/CdTe) solar cells fabricated on glass substrates have been considered one of the most promising candidates in photovoltaic technologies, owing to their similar efficiency and lower costs compared to traditional silicon-based solar cells. In this work a fast one-dimensional time-dependent/steady-state drift-diffusion simulator for modeling solar cells, accelerated by an adaptive non-uniform mesh and automatic time-step control, has been developed and used to simulate a CdS/CdTe solar cell. These models are used to reproduce transients of carrier transport in response to step-function signals of different bias and varied light intensity. The time-step control models are also used to help convergence in steady-state simulations where constrained material constants, such as carrier lifetimes on the order of nanoseconds and carrier mobility on the order of 100 cm²/(V·s), must be applied.
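For reference, the system such a drift-diffusion simulator marches in time couples Poisson's equation for the electrostatic potential with electron and hole continuity equations and drift-diffusion current relations (a textbook form in standard notation, not necessarily the simulator's exact discretization):

```latex
\nabla \cdot \big(\varepsilon \nabla \psi\big) = -q\,\big(p - n + N_D^{+} - N_A^{-}\big), \\[4pt]
\frac{\partial n}{\partial t} = \frac{1}{q}\,\nabla \cdot \mathbf{J}_n + G - R,
\qquad
\frac{\partial p}{\partial t} = -\frac{1}{q}\,\nabla \cdot \mathbf{J}_p + G - R, \\[4pt]
\mathbf{J}_n = q\,\mu_n\, n\,\mathbf{E} + q\,D_n \nabla n,
\qquad
\mathbf{J}_p = q\,\mu_p\, p\,\mathbf{E} - q\,D_p \nabla p .
```

Typically, an adaptive non-uniform mesh concentrates nodes where ψ, n and p vary rapidly (e.g. near the junction), and automatic time-step control shrinks the step during the fast transient following a bias or illumination step, although the abstract does not detail the exact strategy used here.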
NASA Astrophysics Data System (ADS)
Fang, Ye; Feng, Sheng; Tam, Ka-Ming; Yun, Zhifeng; Moreno, Juana; Ramanujam, J.; Jarrell, Mark
2014-10-01
Monte Carlo simulations of the Ising model play an important role in the field of computational statistical physics, and they have revealed many properties of the model over the past few decades. However, the effect of frustration due to random disorder, in particular the possible spin glass phase, remains a crucial but poorly understood problem. One of the obstacles in the Monte Carlo simulation of random frustrated systems is their long relaxation time making an efficient parallel implementation on state-of-the-art computation platforms highly desirable. The Graphics Processing Unit (GPU) is such a platform that provides an opportunity to significantly enhance the computational performance and thus gain new insight into this problem. In this paper, we present optimization and tuning approaches for the CUDA implementation of the spin glass simulation on GPUs. We discuss the integration of various design alternatives, such as GPU kernel construction with minimal communication, memory tiling, and look-up tables. We present a binary data format, Compact Asynchronous Multispin Coding (CAMSC), which provides an additional 28.4% speedup compared with the traditionally used Asynchronous Multispin Coding (AMSC). Our overall design sustains a performance of 33.5 ps per spin flip attempt for simulating the three-dimensional Edwards-Anderson model with parallel tempering, which significantly improves the performance over existing GPU implementations.
Interactive Visualization to Advance Earthquake Simulation
NASA Astrophysics Data System (ADS)
Kellogg, Louise H.; Bawden, Gerald W.; Bernardin, Tony; Billen, Magali; Cowgill, Eric; Hamann, Bernd; Jadamec, Margarete; Kreylos, Oliver; Staadt, Oliver; Sumner, Dawn
2008-04-01
The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, to evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth’s surface and interior. Virtual mapping tools allow virtual “field studies” in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method’s strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists, who are trained to interpret the often limited geological and geophysical data available from field observations.