Science.gov

Sample records for based simulation methods

  1. Kinetic Plasma Simulation Using a Quadrature-based Moment Method

    NASA Astrophysics Data System (ADS)

    Larson, David J.

    2008-11-01

    The recently developed quadrature-based moment method [Desjardins, Fox, and Villedieu, J. Comp. Phys. 227 (2008)] is an interesting alternative to standard Lagrangian particle simulations. The two-node quadrature formulation allows multiple flow velocities within a cell, thus correctly representing crossing particle trajectories and lower-order velocity moments without resorting to Lagrangian methods. Instead of following many particles per cell, the Eulerian transport equations are solved for selected moments of the kinetic equation. The moments are then inverted to obtain a discrete representation of the velocity distribution function. Potential advantages include reduced computational cost, elimination of statistical noise, and a simpler treatment of collisional effects. We present results obtained using the quadrature-based moment method applied to the Vlasov equation in simple one-dimensional electrostatic plasma simulations. In addition we explore the use of the moment inversion process in modeling collisional processes within the Complex Particle Kinetics framework.
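
    As a concrete illustration of the moment-inversion step described above, the sketch below recovers a two-node velocity quadrature (weights and node velocities) from the first four velocity moments via the classical Gaussian-quadrature construction. It is a minimal sketch of the idea, not the algorithm of Desjardins, Fox, and Villedieu; the moment values are hypothetical.

    ```python
    import numpy as np

    def invert_two_node(m0, m1, m2, m3):
        """Recover a two-node quadrature {(w1, u1), (w2, u2)} that
        reproduces the velocity moments m0..m3 exactly. Nodes are the
        roots of the degree-2 orthogonal polynomial of the underlying
        velocity distribution, written here in central moments."""
        mu = m1 / m0                        # mean velocity
        c2 = m2 / m0 - mu**2                # variance (must be > 0)
        c3 = m3 / m0 - 3.0*mu*c2 - mu**3    # third central moment
        half = c3 / (2.0 * c2)
        d = np.sqrt(half**2 + c2)
        u1, u2 = mu + half - d, mu + half + d
        # Weights follow from w1 + w2 = m0 and w1*u1 + w2*u2 = m1.
        w1 = m0 * (u2 - mu) / (u2 - u1)
        w2 = m0 * (mu - u1) / (u2 - u1)
        return (w1, u1), (w2, u2)

    # Hypothetical moments of a velocity distribution.
    m = [1.0, 0.3, 0.5, 0.4]
    (w1, u1), (w2, u2) = invert_two_node(*m)
    for k in range(4):   # all four moments are reproduced
        print(k, w1 * u1**k + w2 * u2**k, m[k])
    ```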

  2. A finite mass based method for Vlasov-Poisson simulations

    NASA Astrophysics Data System (ADS)

    Larson, David; Young, Christopher

    2014-10-01

    A method for the numerical simulation of plasma dynamics using discrete particles is introduced. The shape function kinetics (SFK) method is based on decomposing the mass into discrete particles using shape functions of compact support. The particle positions and shape evolve in response to internal velocity spread and external forces. Remapping is necessary in order to maintain accuracy and two strategies for remapping the particles are discussed. Numerical simulations of standard test problems illustrate the advantages of the method which include very low noise compared to the standard particle-in-cell technique, inherent positivity, large dynamic range, and ease of implementation. This work was performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344. C. V. Young acknowledges the support of the DOE NNSA Stewardship Science Graduate Fellowship under Contract DE-FC52-08NA28752.

  3. Optimal grid-based methods for thin film micromagnetics simulations

    NASA Astrophysics Data System (ADS)

    Muratov, C. B.; Osipov, V. V.

    2006-08-01

    Thin film micromagnetics are a broad class of materials with many technological applications, primarily in magnetic memory. The dynamics of the magnetization distribution in these materials is traditionally modeled by the Landau-Lifshitz-Gilbert (LLG) equation. Numerical simulations of the LLG equation are complicated by the need to compute the stray field due to the inhomogeneities in the magnetization, which presents the chief bottleneck for the simulation speed. Here, we introduce a new method for computing the stray field in a sample for a reduced model of ultra-thin film micromagnetics. The method uses a recently proposed idea of optimal finite difference grids for approximating Neumann-to-Dirichlet maps and has the advantage of being able to use non-uniform discretization in the film plane, as well as an efficient way of dealing with the boundary conditions at infinity for the stray field. We present several examples of the method's implementation and give a detailed comparison of its performance with conventional FFT-based methods for studying domain wall structures.

  4. Study of Flapping Flight Using Discrete Vortex Method Based Simulations

    NASA Astrophysics Data System (ADS)

    Devranjan, S.; Jalikop, Shreyas V.; Sreenivas, K. R.

    2013-12-01

    In recent times, research in the area of flapping flight has attracted renewed interest with an endeavor to use this mechanism in Micro Air Vehicles (MAVs). For sustained, high-endurance flight with a larger payload-carrying capacity, a simple and efficient flapping kinematics needs to be identified. In this paper, we have used flow visualizations and Discrete Vortex Method (DVM) based simulations to study flapping flight. Our results highlight that a simple flapping kinematics with a down-stroke period (tD) shorter than the upstroke period (tU) produces sustained lift. We have identified an optimal asymmetry ratio (Ar = tD/tU) for which flapping wings produce maximum lift, and find that introducing optimal wing flexibility further enhances the lift.
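
    For readers unfamiliar with the Discrete Vortex Method mentioned above, the sketch below shows its basic kernel: point vortices inducing velocity on one another through the 2D Biot-Savart law, with a standard vortex-blob regularization. It is a generic DVM fragment under assumed parameters, not the authors' flapping-wing code.

    ```python
    import numpy as np

    def induced_velocity(x, y, xv, yv, gamma, delta=0.05):
        """2D Biot-Savart velocity at (x, y) induced by point vortices
        at (xv, yv) with circulations gamma; delta is a blob core radius
        regularizing the 1/r singularity."""
        dx, dy = x - xv, y - yv
        r2 = dx**2 + dy**2 + delta**2
        u = np.sum(-gamma * dy / (2.0 * np.pi * r2))
        v = np.sum( gamma * dx / (2.0 * np.pi * r2))
        return u, v

    def step(xv, yv, gamma, dt):
        """Advect every vortex in the field of all the others
        (forward Euler for brevity; RK2/RK4 is typical in practice)."""
        u = np.empty_like(xv); v = np.empty_like(yv)
        for i in range(len(xv)):
            mask = np.arange(len(xv)) != i        # exclude self-induction
            u[i], v[i] = induced_velocity(xv[i], yv[i],
                                          xv[mask], yv[mask], gamma[mask])
        return xv + dt * u, yv + dt * v

    # Two co-rotating vortices orbit their common centroid.
    xv = np.array([-0.5, 0.5]); yv = np.zeros(2); gamma = np.array([1.0, 1.0])
    for _ in range(100):
        xv, yv = step(xv, yv, gamma, dt=0.01)
    print(xv, yv)
    ```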

  5. Numerical simulation of thermal discharge based on FVM method

    NASA Astrophysics Data System (ADS)

    Yu, Yunli; Wang, Deguan; Wang, Zhigang; Lai, Xijun

    2006-01-01

    A two-dimensional numerical model is proposed to simulate the thermal discharge from a power plant in Jiangsu Province. The equations in the model consist of the two-dimensional non-steady shallow water equations and thermal waste transport equations. The finite volume method (FVM) is used to discretize the shallow water equations, and a flux difference splitting (FDS) scheme is applied. The calculated area with the same temperature increment shows the effect of the thermal discharge on the sea water. A comparison between simulated results and experimental data shows good agreement, indicating that this method can give high precision in heat transfer simulations in coastal areas.
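
    A minimal illustration of the finite-volume discretization described above is sketched below for the 1D homogeneous shallow water equations. It uses the simpler Rusanov (local Lax-Friedrichs) numerical flux as a stand-in for the paper's flux difference splitting scheme, and it omits the thermal transport equation.

    ```python
    import numpy as np

    g = 9.81  # gravity

    def flux(h, hu):
        """Physical flux of the 1D shallow water equations."""
        u = hu / h
        return np.array([hu, hu * u + 0.5 * g * h**2])

    def rusanov_step(h, hu, dx, dt):
        """One finite-volume update with the Rusanov numerical flux
        (a simpler stand-in for the paper's FDS scheme)."""
        hl, hr = h[:-1], h[1:]
        hul, hur = hu[:-1], hu[1:]
        fl, fr = flux(hl, hul), flux(hr, hur)
        # Maximum wave speed |u| + sqrt(g h) at each interface.
        s = np.maximum(np.abs(hul/hl) + np.sqrt(g*hl),
                       np.abs(hur/hr) + np.sqrt(g*hr))
        fi = 0.5 * (fl + fr) - 0.5 * s * np.array([hr - hl, hur - hul])
        h[1:-1]  -= dt/dx * (fi[0][1:] - fi[0][:-1])   # boundary cells fixed
        hu[1:-1] -= dt/dx * (fi[1][1:] - fi[1][:-1])
        return h, hu

    # Dam-break initial condition on [0, 1].
    n = 200; dx = 1.0 / n
    h = np.where(np.arange(n) < n // 2, 2.0, 1.0).astype(float)
    hu = np.zeros(n)
    for _ in range(100):
        h, hu = rusanov_step(h, hu, dx, dt=0.5 * dx / np.sqrt(g * 2.0))
    print(h.min(), h.max())
    ```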

  6. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.

    PubMed

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-01-01

    A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, the models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, which indicates that the proposed method will contribute to achieving a low-cost, convenient and safe way of recharging implantable biosensors.
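
    The sketch below illustrates the kind of Monte Carlo photon transport calculation described above, estimating how much incident solar energy is deposited in each skin layer. The optical coefficients are placeholder values and layer crossings within one free path are ignored; it is a toy version of such a simulation, not the authors' model.

    ```python
    import random

    random.seed(1)

    # Hypothetical absorption/scattering coefficients (1/mm) and layer
    # thicknesses (mm); realistic values depend on wavelength and tissue.
    LAYERS = [           # (mu_a, mu_s, thickness)
        (0.2, 10.0, 0.1),    # epidermis
        (0.1,  8.0, 1.5),    # dermis
        (0.05, 5.0, 3.0),    # subcutis
    ]

    def layer_index(z):
        """Index of the layer containing depth z, or None outside tissue."""
        depth = 0.0
        for i, (_, _, t) in enumerate(LAYERS):
            depth += t
            if z < depth:
                return i
        return None

    def simulate(n_photons):
        """1D Monte Carlo estimate of the fraction of incident light
        absorbed in each layer: at every interaction a fraction
        mu_a/mu_t of the photon weight is deposited, the rest is
        scattered up or down (layer crossings within one free path
        are ignored -- a crude simplification)."""
        absorbed = [0.0] * len(LAYERS)
        for _ in range(n_photons):
            z, direction, w = 0.0, 1.0, 1.0
            while w > 1e-4:
                mu_a, mu_s, _ = LAYERS[layer_index(z)]
                mu_t = mu_a + mu_s
                z += direction * random.expovariate(mu_t)   # next interaction
                if z < 0.0 or layer_index(z) is None:
                    break                                   # photon escaped
                absorbed[layer_index(z)] += w * mu_a / mu_t
                w *= mu_s / mu_t
                direction = random.choice((-1.0, 1.0))      # isotropic in 1D
        return [a / n_photons for a in absorbed]

    print(simulate(20_000))
    ```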

  7. A method for MREIT-based source imaging: simulation studies.

    PubMed

    Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun

    2016-08-01

    This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping by probing the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few %), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time to get more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violate the Nyquist criterion, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time-change of the Laplacian of the nonlinearly wrapped data. PMID:27401235

  8. Human swallowing simulation based on videofluorography images using Hamiltonian MPS method

    NASA Astrophysics Data System (ADS)

    Kikuchi, Takahiro; Michiwaki, Yukihiro; Kamiya, Tetsu; Toyama, Yoshio; Tamai, Tasuku; Koshizuka, Seiichi

    2015-09-01

    In developed nations, swallowing disorders and aspiration pneumonia have become serious problems. We developed a method to simulate the behavior of the organs involved in swallowing to clarify the mechanisms of swallowing and aspiration. The shape model is based on anatomically realistic geometry, and the motion model utilizes forced displacements based on realistic dynamic images to reflect the mechanisms of human swallowing. The soft tissue organs are modeled as nonlinear elastic material using the Hamiltonian MPS method. This method allows for stable simulation of the complex swallowing movement. A penalty method using metaballs is employed to simulate contact between organ walls and smooth sliding along the walls. We performed four numerical simulations under different analysis conditions to represent four cases of swallowing, including a healthy volunteer and a patient with a swallowing disorder. The simulation results were compared to examine the epiglottic downfolding mechanism, which strongly influences the risk of aspiration.

  9. A Research of Weapon System Storage Reliability Simulation Method Based on Fuzzy Theory

    NASA Astrophysics Data System (ADS)

    Shi, Yonggang; Wu, Xuguang; Chen, Haijian; Xu, Tingxue

    To address the problem of analyzing the storage reliability of new, complicated weapon equipment systems, this paper investigates the methods of fuzzy fault tree analysis and fuzzy system storage reliability simulation, discusses the approach of regarding the weapon system as a fuzzy system, and studies the storage reliability of the weapon system based on fuzzy theory, providing a storage reliability research method for new, complicated weapon equipment systems. As an example, the fuzzy fault tree of one type of missile control instrument was built based on function analysis, and the fuzzy system storage reliability simulation method was used to analyze the storage reliability index of the control instrument.
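
    As an illustration of how fuzzy failure probabilities propagate through a fuzzy fault tree of the kind described above, the sketch below evaluates AND/OR gates on triangular fuzzy numbers using the common component-wise approximation. The tree structure and the numbers are hypothetical, not the missile control instrument data from the record.

    ```python
    # Triangular fuzzy failure probability: (low, mode, high).

    def fuzzy_and(*events):
        """AND gate: all inputs fail, so probabilities multiply
        (component-wise approximation for triangular fuzzy numbers)."""
        lo = mo = hi = 1.0
        for a, b, c in events:
            lo *= a; mo *= b; hi *= c
        return (lo, mo, hi)

    def fuzzy_or(*events):
        """OR gate: p = 1 - prod(1 - p_i); this expression is increasing
        in every input, so the endpoints map through directly."""
        lo = mo = hi = 1.0
        for a, b, c in events:
            lo *= 1.0 - a; mo *= 1.0 - b; hi *= 1.0 - c
        return (1.0 - lo, 1.0 - mo, 1.0 - hi)

    # Hypothetical basic events of a small storage-failure tree.
    seal_aging = (0.010, 0.020, 0.040)
    corrosion  = (0.005, 0.010, 0.020)
    drift_ch1  = (0.020, 0.030, 0.050)
    drift_ch2  = (0.020, 0.030, 0.050)

    # Both redundant channels must drift (AND); the top event occurs if
    # either the electronic or the mechanical branch fails (OR).
    electronic = fuzzy_and(drift_ch1, drift_ch2)
    mechanical = fuzzy_or(seal_aging, corrosion)
    top_event  = fuzzy_or(mechanical, electronic)
    print("top-event fuzzy probability (low, mode, high):", top_event)
    ```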

  10. Real-time simulation of ultrasound refraction phenomena using ray-trace based wavefront construction method.

    PubMed

    Szostek, Kamil; Piórkowski, Adam

    2016-10-01

    Ultrasound (US) imaging is one of the most popular techniques used in clinical diagnosis, mainly due to the lack of adverse effects on patients and the simplicity of US equipment. However, the characteristics of the medium cause US imaging to reconstruct the examined tissues imprecisely. The artifacts are the results of wave phenomena, i.e. diffraction or refraction, and should be recognized during examination to avoid misinterpretation of a US image. Currently, US training is based on teaching materials and simulators, and ultrasound simulation has become an active research area in medical computer science. Many US simulators are limited by the complexity of the wave phenomena, which leads to intensive, sophisticated computation that makes it difficult for systems to operate in real time. To achieve the required frame rate, the vast majority of simulators simplify away wave diffraction and refraction. The following paper proposes a solution for an ultrasound simulator based on methods known in geophysics. To improve simulation quality, a wavefront construction method was adapted which takes the refraction phenomena into account. This technique uses ray tracing and velocity averaging to construct wavefronts in the simulation. Instead of a geological medium, real CT scans are applied. This approach can produce more realistic projections of pathological findings and is also capable of providing real-time simulation. PMID:27586490

  11. Evaluation of a clinical simulation-based assessment method for EHR-platforms.

    PubMed

    Jensen, Sanne; Rasmussen, Stine Loft; Lyng, Karen Marie

    2014-01-01

    In a procurement process, assessment of issues like human factors and the interaction between technology and end-users can be challenging. In a large public procurement of an electronic health record platform (EHR-platform) in Denmark, a clinical simulation-based method for assessing and comparing human factor issues was developed and evaluated. This paper describes the evaluation of the method and its advantages and disadvantages. Our findings showed that clinical simulation is beneficial for assessing user satisfaction, usefulness and patient safety, although it is resource-demanding. The method made it possible to assess qualitative topics during the procurement, and it provides an excellent basis for user involvement.

  12. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  13. Apparatus and method for interaction phenomena with world modules in data-flow-based simulation

    DOEpatents

    Xavier, Patrick G.; Gottlieb, Eric J.; McDonald, Michael J.; Oppel, III, Fred J.

    2006-08-01

    A method and apparatus accommodate interaction phenomena in a data-flow-based simulation of a system of elements, by establishing meta-modules to simulate system elements and by establishing world modules associated with interaction phenomena. World modules are associated with proxy modules from a group of meta-modules associated with one of the interaction phenomena. The world modules include a communication world, a sensor world, a mobility world, and a contact world. World modules can be further associated with other world modules if necessary. Interaction phenomena are simulated in corresponding world modules by accessing member functions in the associated group of proxy modules. Proxy modules can be dynamically allocated at a desired point in the simulation to accommodate the addition of elements in the system of elements, such as a system of robots, a system of communication terminals, or a system of vehicles, being simulated.

  14. Methods for simulation-based analysis of fluid-structure interaction.

    SciTech Connect

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
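
    The POD/Galerkin procedure mentioned above can be summarized in a few lines: assemble simulation snapshots, extract a reduced basis by SVD, and project the full operator onto it. The sketch below does this for a toy linear diffusion model; it is a generic POD illustration, not the report's aeroelastic ROM.

    ```python
    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        """POD via thin SVD of the snapshot matrix (columns = states at
        different times). Keeps the smallest number of modes capturing
        the requested fraction of snapshot 'energy'."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        cum = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(cum, energy)) + 1
        return U[:, :r]

    # Toy full-order model: du/dt = A u (a discretized diffusion operator).
    n = 400
    A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    u = np.exp(-((np.arange(n) - n / 2) ** 2) / 50.0)   # initial condition

    # Collect snapshots with explicit Euler.
    dt, snaps = 0.1, []
    for _ in range(200):
        snaps.append(u.copy())
        u = u + dt * (A @ u)
    Phi = pod_basis(np.array(snaps).T)

    # Galerkin projection: reduced operator Ar = Phi^T A Phi.
    Ar = Phi.T @ A @ Phi
    print("full dim:", n, "-> reduced dim:", Ar.shape[0])
    ```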

  15. Two-dimensional finite element method simulation to determine the brain capacitance based on ECVT measurement

    NASA Astrophysics Data System (ADS)

    Sirait, S. H.; Taruno, W. P.; Khotimah, S. N.; Haryanto, F.

    2016-03-01

    A simulation to determine the capacitance of the brain's electrical activity based on a two-electrode ECVT was conducted in this study. The study began with the construction of a 2D coronal head geometry with five different layers and the design of an ECVT sensor, after which the two designs were merged. Boundary conditions were then applied to the two electrodes in the ECVT sensor: the first electrode was set to a Dirichlet boundary condition of 20 V and the other electrode to a Dirichlet boundary condition of 0 V. Simulated Hodgkin-Huxley-based action potentials were applied as the electrical activity of the brain and were placed sequentially at three different cross-sectional positions. The Poisson equation was implemented as the governing equation in the geometry and was solved by the finite element method. The simulation showed that the simulated capacitance values were affected by the action potentials and by their cross-sectional positions.
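
    The boundary-value problem above (a Poisson/Laplace equation with a 20 V electrode and a grounded electrode) can be illustrated on a toy rectangular grid. The sketch below uses a 5-point finite-difference Laplacian with Jacobi iteration as a simple stand-in for the record's finite element solve on the head geometry; the electrode positions are illustrative.

    ```python
    import numpy as np

    # Toy rectangular stand-in for the head cross-section: Laplace's
    # equation with two electrode segments on the boundary.
    nx, ny = 60, 60
    V = np.zeros((ny, nx))
    fixed = np.zeros((ny, nx), dtype=bool)

    # 20 V electrode on part of the top edge, grounded electrode on the
    # bottom edge; non-electrode boundary cells stay at 0 V here.
    V[0, 20:40] = 20.0; fixed[0, 20:40] = True
    V[-1, 20:40] = 0.0; fixed[-1, 20:40] = True

    # Jacobi iteration on the 5-point Laplacian until the update stalls.
    for it in range(20_000):
        Vn = V.copy()
        Vn[1:-1, 1:-1] = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] +
                                 V[1:-1, :-2] + V[1:-1, 2:])
        Vn[fixed] = V[fixed]          # re-impose Dirichlet electrodes
        if np.max(np.abs(Vn - V)) < 1e-6:
            break
        V = Vn
    print("iterations:", it, "potential range:", V.min(), V.max())
    ```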

  1. Simulation of ultrasonic wave propagation in welds using ray-based methods

    NASA Astrophysics Data System (ADS)

    Gardahaut, A.; Jezzine, K.; Cassereau, D.; Leymarie, N.

    2014-04-01

    Austenitic or bimetallic welds are particularly difficult to inspect due to their anisotropic and inhomogeneous properties. In this paper, we present a ray-based method to simulate the propagation of ultrasonic waves in such structures, taking into account their internal properties. The method is applied to a smooth representation of the grain orientation in the weld. The propagation model consists in solving the eikonal and transport equations in an inhomogeneous anisotropic medium. Simulation results are presented and compared to finite element results for a grain orientation distribution expressed in closed form.

  2. GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method

    NASA Astrophysics Data System (ADS)

    Wei, J.; Kruis, F. E.

    2013-09-01

    Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and computing cost. Currently, the lowest computing costs are obtained by applying a graphics processing unit (GPU), originally developed for speeding up graphics processing in the consumer market. In this article we present an implementation that accelerates a Monte Carlo method based on the inverse scheme for simulating particle coagulation on the GPU. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the computational accuracy of the MC on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains from using the GPU, the computing time on the GPU was compared against that of its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the n² dependence of coagulation.
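
    A serial sketch of the inverse-transform idea behind the inverse scheme mentioned above: the next coagulating pair is drawn by inverse transform sampling from the cumulative pair-rate distribution, and the waiting time is exponential in the total rate. The sum kernel and all parameters are illustrative, and the GPU parallelization of the article is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def kernel(v, w):
        """Coagulation kernel; a sum kernel K = v + w is used purely
        for illustration (the physical kernel depends on the regime)."""
        return v + w

    def mc_coagulation(volumes, t_end):
        """Event-driven MC: pick the next coagulating pair by inverse
        transform sampling over the cumulative pair-rate distribution."""
        v = list(volumes)
        t = 0.0
        while t < t_end and len(v) > 1:
            n = len(v)
            iu, ju = np.triu_indices(n, k=1)
            rates = kernel(np.array(v)[iu], np.array(v)[ju])
            total = rates.sum()
            t += rng.exponential(1.0 / total)       # waiting time to next event
            cdf = np.cumsum(rates) / total
            k = int(np.searchsorted(cdf, rng.random()))  # inverse transform
            i, j = int(iu[k]), int(ju[k])
            v[i] += v[j]                            # merge the pair
            del v[j]
        return v

    v = mc_coagulation(np.ones(200), t_end=0.01)
    print("particles left:", len(v), "mean volume:", float(np.mean(v)))
    ```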

  3. Misclassification Errors in Unsupervised Classification Methods. Comparison Based on the Simulation of Targeted Proteomics Data

    PubMed Central

    Andreev, Victor P; Gillespie, Brenda W; Helfand, Brian T; Merion, Robert M

    2016-01-01

    Unsupervised classification methods are gaining acceptance in omics studies of complex common diseases, which are often vaguely defined and are likely collections of disease subtypes. Unsupervised classification based on the molecular signatures identified in omics studies has the potential to reflect the molecular mechanisms of the subtypes of the disease and to lead to more targeted and successful interventions for the identified subtypes. Multiple classification algorithms exist, but none is ideal for all types of data. Importantly, there are no established methods to estimate sample size in unsupervised classification (unlike power analysis in hypothesis testing). Therefore, we developed a simulation approach allowing comparison of misclassification errors and estimation of the required sample size for a given effect size, number, and correlation matrix of the differentially abundant proteins in targeted proteomics studies. All the experiments were performed in silico. The simulated data imitated those expected from a study of the plasma of patients with lower urinary tract dysfunction with the aptamer proteomics assay SOMAscan (SomaLogic Inc., Boulder, CO), which targeted 1129 proteins, including 330 involved in inflammation, 180 in stress response, and 80 in aging. Three popular clustering methods (hierarchical, k-means, and k-medoids) were compared. K-means clustering performed much better for the simulated data than the other two methods and enabled classification with a misclassification error below 5% in the simulated cohort of 100 patients based on the molecular signatures of 40 differentially abundant proteins (effect size 1.5) from among the 1129-protein panel. PMID:27524871
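
    The simulation approach above can be illustrated end to end in a few lines: generate two subtypes differing in a subset of proteins, cluster, and score the misclassification error. The sketch below uses a minimal k-means (k = 2) and the study's nominal design numbers (100 patients, 1129 proteins, 40 differentially abundant at effect size 1.5); it is a simplified illustration, not the authors' full comparison.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def kmeans2(X, iters=100):
        """Minimal Lloyd's k-means with k = 2 clusters."""
        centers = X[rng.choice(len(X), size=2, replace=False)]
        labels = np.zeros(len(X), dtype=int)
        for _ in range(iters):
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            new = centers.copy()
            for k in (0, 1):
                if np.any(labels == k):      # guard against empty clusters
                    new[k] = X[labels == k].mean(axis=0)
            if np.allclose(new, centers):
                break
            centers = new
        return labels

    # Simulated cohort mimicking the study design above: two hypothetical
    # subtypes, effect size in units of within-group SD.
    n, p, p_da, effect = 100, 1129, 40, 1.5
    truth = np.repeat([0, 1], n // 2)
    X = rng.normal(size=(n, p))
    X[truth == 1, :p_da] += effect

    labels = kmeans2(X)
    # Cluster numbering is arbitrary, so score the better assignment.
    err = min(np.mean(labels != truth), np.mean(labels == truth))
    print(f"misclassification error: {err:.1%}")
    ```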

  4. Agent-based modeling: Methods and techniques for simulating human systems

    PubMed Central

    Bonabeau, Eric

    2002-01-01

    Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems. After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed by using real-world applications: flow simulation, organizational simulation, market simulation, and diffusion simulation. For each category, one or several business applications are described and analyzed. PMID:12011407

  5. The method of infrared image simulation based on the measured image

    NASA Astrophysics Data System (ADS)

    Lou, Shuli; Liu, Liang; Ren, Jiancun

    2015-10-01

    The development of infrared imaging guidance technology has promoted research into infrared imaging simulation technology, and the key to infrared imaging simulation is the generation of IR images, which is valuable both militarily and economically. In order to solve the problems of credibility and economy in infrared scene generation, a method of infrared scene generation based on measured images is proposed. Based on a study of the optical properties of the ship target and the sea background, ship-target images with various attitudes are extracted from recorded images using digital image processing techniques. The ship-target image is zoomed in and out to simulate the relative motion between the viewpoint and the target, according to the field of view and the distance between the target and the sensor. The gray scale of the ship-target image is adjusted to simulate the change in the target's radiation, according to the distance between the viewpoint and the target and the atmospheric transmission. Frames of recorded infrared images without the target are interpolated to simulate the high frame rate of the missile. The processed ship-target images and sea-background infrared images are then synthesized to obtain infrared scenes for different viewpoints. Experiments proved that this method is flexible and applicable, and that the fidelity and reliability of the synthesized infrared images can be guaranteed.

  6. [A rapid prototype fabrication method of dental splint based on 3D simulation and technology].

    PubMed

    Lin, Yanping; Chen, Xiaojun; Zhang, Shilei; Wang, Chengtao

    2006-04-01

    The conventional design and fabrication of the dental splint (in orthognathic surgery) is based on preoperative planning and model surgery, so the process is of low precision and efficiency. In order to solve these problems and keep up with the trend of computer-assisted surgery, we have developed a novel method to design and fabricate the dental splint--the computer-generated dental splint--based on three-dimensional model simulation and rapid prototyping technology. After surgical planning and simulation on the 3D model, we can modify the model to be superior in chewing action (functional) and overall facial appearance (aesthetic). Then, through a Boolean operation between the dental splint blank and the maxillofacial bone model, the model of the dental splint is formed. Finally, the dental splint model is fabricated on a rapid prototyping machine and applied in the clinic. The results indicate that, with the use of this method, surgical precision and efficiency are improved.

  7. Efficient Molecular Dynamics Simulations of Multiple Radical Center Systems Based on the Fragment Molecular Orbital Method

    SciTech Connect

    Nakata, Hiroya; Schmidt, Michael W; Fedorov, Dmitri G; Kitaura, Kazuo; Nakamura, Shinichiro; Gordon, Mark S

    2014-10-16

    The fully analytic energy gradient has been developed and implemented for the restricted open-shell Hartree–Fock (ROHF) method based on the fragment molecular orbital (FMO) theory for systems that have multiple open-shell molecules. The accuracy of the analytic ROHF energy gradient is compared with the corresponding numerical gradient, illustrating the accuracy of the analytic gradient. The ROHF analytic gradient is used to perform molecular dynamics simulations of an unusual open-shell system, liquid oxygen, and mixtures of oxygen and nitrogen. These molecular dynamics simulations provide some insight about how triplet oxygen molecules interact with each other. Timings reveal that the method can calculate the energy gradient for a system containing 4000 atoms in only 6 h. Therefore, it is concluded that the FMO-ROHF method will be useful for investigating systems with multiple open shells.

  8. Three-dimensional imaging simulation of active laser detection based on DLOS method

    NASA Astrophysics Data System (ADS)

    Zhang, Chuanxin; Zhou, Honghe; Chen, Xiang; Yuan, Yuan; Shuai, Yong; Tan, Heping

    2016-07-01

    The technology of active laser detection is widely used in many different fields nowadays. With the development of computer technology, programmable software simulation can provide a reference for the design of active laser detection systems, and the characteristics of such systems can be judged more visually. Based on the features of active laser detection, an improved method of radiative transfer calculation (Double Line Of Sight, DLOS) was developed, and simulation models of complete active laser detection imaging were established. Comparison with results calculated by the Monte Carlo method verified the correctness of the improved method. The results of active laser detection imaging of complex three-dimensional targets in different atmospheric scenes were compared, and the influence of different atmospheric dielectric properties was analyzed, which provides an effective reference for the design of active laser detection.

  9. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by implementing the F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the amount of calculation and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
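
    The forward half of the method above (replace the expensive model by a polynomial response surface, then run Monte Carlo through the surrogate) is sketched below for a toy two-parameter model. The surrogate here is quadratic rather than incomplete fourth-order, and the inverse optimization step is omitted; all names and values are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def model(k1, k2):
        """Toy 'simulation output', e.g. a natural frequency of a
        two-spring system (a stand-in for the full FE model)."""
        return np.sqrt(k1 + k2) / (2.0 * np.pi)

    # Build the response surface from a small design of experiments.
    k1s = rng.uniform(0.8, 1.2, 50); k2s = rng.uniform(1.6, 2.4, 50)
    y = model(k1s, k2s)
    Phi = np.column_stack([np.ones_like(k1s), k1s, k2s,
                           k1s**2, k2s**2, k1s * k2s])   # quadratic basis
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    def rsm(k1, k2):
        return (coef[0] + coef[1]*k1 + coef[2]*k2 +
                coef[3]*k1**2 + coef[4]*k2**2 + coef[5]*k1*k2)

    # Monte Carlo through the cheap surrogate instead of the model.
    k1 = rng.normal(1.0, 0.05, 100_000); k2 = rng.normal(2.0, 0.10, 100_000)
    samples = rsm(k1, k2)
    print("surrogate MCS: mean =", samples.mean(), "std =", samples.std())
    print("direct model : mean =", model(k1, k2).mean(),
          "std =", model(k1, k2).std())
    ```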

  10. Thermoelastic Simulations Based on Discontinuous Galerkin Methods: Formulation and Application in Gas Turbines

    NASA Astrophysics Data System (ADS)

    Hao, Zengrong; Gu, Chunwei; Song, Yin

    2016-06-01

    This study extends the discontinuous Galerkin (DG) methods to simulations of thermoelasticity. A thermoelastic formulation of the interior penalty DG (IP-DG) method is presented, and aspects of the numerical implementation are discussed in matrix form. The content related to thermal expansion effects is illustrated explicitly in the discretized equation system. The feasibility of the method for general thermoelastic simulations is validated through typical test cases, including tackling stress discontinuities caused by jumps of thermal expansive properties and controlling the accompanying non-physical oscillations by adjusting the magnitude of the IP term. The simulation platform developed upon the method is applied to the engineering analysis of thermoelastic performance for a turbine vane and a series of vanes with various types of simplified thermal barrier coating (TBC) systems. This analysis demonstrates that while the heat conduction properties of the TBC are generally the major consideration for protecting the alloy base vanes, the mechanical properties may have more significant effects on the protection of the coatings themselves. The changing characteristics of the normal tractions on the TBC/base interface, which are closely related to the occurrence of coating failures, are summarized and analysed over diverse component distributions along the TBC thickness of the functionally graded materials, illustrating opposite tendencies in situations with different thermal-stress-free temperatures for the coatings.

  11. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks

    PubMed Central

    2014-01-01

    Background: Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented as a probability model can better reflect authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has a relatively high computational complexity. Methods: In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism combines the analysis of circuit topology with the related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs, and thereby avoids the traditional possible world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results: The experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The subgraphs discovered in our study contain all probability motifs reported in the experiments published in other related papers. Conclusions: The algorithm of probability graph isomorphism

  12. A flood map based DOI decoding method for block detector: a GATE simulation study.

    PubMed

    Shi, Han; Du, Dong; Su, Zhihong; Peng, Qiyu

    2014-01-01

    Positron Emission Tomography (PET) systems using detectors with Depth of Interaction (DOI) capabilities can achieve higher spatial resolution and better image quality than those without DOI. Until now, most of the DOI methods developed have not been cost-efficient for a whole-body PET system. In this paper, we present a DOI decoding method based on the flood map for a low-cost conventional block detector with four-PMT readout. Using this method, the DOI information can be extracted directly from the DOI-related deformation of the crystal spots in the flood map. GATE simulations are then carried out to validate the method, confirming a DOI sorting accuracy of 85.27%. We therefore conclude that this method has the potential to be applied in conventional detectors to achieve a reasonable DOI measurement without dramatically increasing the complexity and cost of an entire PET system.

  13. Using simulations to evaluate Mantel-based methods for assessing landscape resistance to gene flow.

    PubMed

    Zeller, Katherine A; Creech, Tyler G; Millette, Katie L; Crowhurst, Rachel S; Long, Robert A; Wagner, Helene H; Balkenhol, Niko; Landguth, Erin L

    2016-06-01

    Mantel-based tests have been the primary analytical methods for understanding how landscape features influence observed spatial genetic structure. Simulation studies examining Mantel-based approaches have highlighted major challenges associated with the use of such tests and fueled debate on when the Mantel test is appropriate for landscape genetics studies. We aim to provide some clarity in this debate using spatially explicit, individual-based, genetic simulations to examine the effects of the following on the performance of Mantel-based methods: (1) landscape configuration, (2) spatial genetic nonequilibrium, (3) nonlinear relationships between genetic and cost distances, and (4) correlation among cost distances derived from competing resistance models. Under most conditions, Mantel-based methods performed poorly. Causal modeling identified the true model only 22% of the time. Using relative support and simple Mantel r values boosted performance to approximately 50%. Across all methods, performance increased when landscapes were more fragmented, spatial genetic equilibrium was reached, and the relationship between cost distance and genetic distance was linearized. Performance depended on cost distance correlations among resistance models rather than cell-wise resistance correlations. Given these results, we suggest that the use of Mantel tests with linearized relationships is appropriate for discriminating among resistance models that have cost distance correlations <0.85 with each other for causal modeling, or <0.95 for relative support or simple Mantel r. Because most alternative parameterizations of resistance for the same landscape variable will result in highly correlated cost distances, the use of Mantel test-based methods to fine-tune resistance values will often not be effective. PMID:27516868
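
    For reference, a simple Mantel test of the kind evaluated above correlates the upper triangles of two distance matrices and obtains significance by jointly permuting the rows and columns of one of them. The sketch below is a minimal implementation on hypothetical cost and genetic distance matrices, not the authors' causal-modeling pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def mantel(D1, D2, n_perm=999):
        """Simple Mantel test: Pearson correlation between the upper
        triangles of two distance matrices, with a permutation p-value
        from shuffling rows/columns of one matrix jointly."""
        iu = np.triu_indices_from(D1, k=1)
        r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
        count = 0
        for _ in range(n_perm):
            p = rng.permutation(len(D1))
            r = np.corrcoef(D1[p][:, p][iu], D2[iu])[0, 1]
            if abs(r) >= abs(r_obs):
                count += 1
        return r_obs, (count + 1) / (n_perm + 1)

    # Hypothetical example: genetic distances weakly related to cost
    # distances among 30 individuals.
    X = rng.normal(size=(30, 2))
    cost = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    gen = cost + rng.normal(scale=2.0, size=cost.shape)
    gen = (gen + gen.T) / 2.0
    np.fill_diagonal(gen, 0.0)

    r, pval = mantel(cost, gen)
    print(f"Mantel r = {r:.3f}, p = {pval:.3f}")
    ```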

  14. Simulations of Ground Motion in Southern California based upon the Spectral-Element Method

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Komatitsch, D.; Liu, Q.

    2003-12-01

    We use the spectral-element method to simulate ground motion generated by recent well-recorded small earthquakes in Southern California. Simulations are performed using a new sedimentary basin model that is constrained by hundreds of petroleum industry well logs and more than twenty thousand kilometers of seismic reflection profiles. The numerical simulations account for 3D variations of seismic wave speeds and density, topography and bathymetry, and attenuation. Simulations for several small recent events demonstrate that the combination of a detailed sedimentary basin model and an accurate numerical technique facilitates the simulation of ground motion at periods of 2 seconds and longer inside the Los Angeles basin and 6 seconds and longer elsewhere. Peak ground displacement, velocity and acceleration maps illustrate that significant amplification occurs in the basin. Centroid-Moment Tensor mechanisms are obtained based upon Pnl and surface waveforms and numerically calculated 3D Frechet derivatives. We use a combination of waveform and waveform-envelope misfit criteria, and facilitate pure double-couple or zero-trace moment-tensor inversions.

  15. Development and elaboration of numerical method for simulating gas–liquid–solid three-phase flows based on particle method

    NASA Astrophysics Data System (ADS)

    Takahashi, Ryohei; Mamori, Hiroya; Yamamoto, Makoto

    2016-02-01

    A numerical method for simulating gas-liquid-solid three-phase flows based on the moving particle semi-implicit (MPS) approach was developed in this study. Computational instability often occurs in multiphase flow simulations if the deformations of the free surfaces between different phases are large, among other reasons. To avoid this instability, this paper proposes an improved coupling procedure between different phases in which the physical quantities of particles in different phases are calculated independently. We performed numerical tests on two illustrative problems: a dam-break problem and a solid-sphere impingement problem. The former problem is a gas-liquid two-phase problem, and the latter is a gas-liquid-solid three-phase problem. The computational results agree reasonably well with the experimental results. Thus, we confirmed that the proposed MPS method reproduces the interaction between different phases without inducing numerical instability.

  16. Simulation of 2D Brain's Potential Distribution Based on Two Electrodes ECVT Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Sirait, S. H.; Edison, R. E.; Baidillah, M. R.; Taruno, W. P.; Haryanto, F.

    2016-08-01

    The aim of this study is to simulate the potential distribution of a 2D brain geometry based on a two-electrode ECVT. ECVT (electrical capacitance tomography) is a tomography modality which produces a dielectric distribution image of a subject from measurements on several capacitance electrodes. This study begins by producing the geometry of a 2D brain based on an MRI image and then setting conditions on the boundaries of the geometry. The boundary values follow the potentials used in two-electrode brain ECVT: the first boundary is set to a 20 V, 2.5 MHz signal and the other boundary is set to ground. The Poisson equation is implemented as the governing equation in the 2D brain geometry and is solved by the finite element method. A simulated Hodgkin-Huxley action potential is applied as a disturbance potential in the geometry. The study comprises two parts: a simulation without the disturbance potential and a simulation with it. From this study, the time-dependent potential distributions of the 2D brain geometry have been generated both with and without the disturbance potential.

  17. Method and simulation for spacecraft clock correction based on x-ray pulsars signal

    NASA Astrophysics Data System (ADS)

    Gui, Xianzhou; Sun, Chen; Huang, Senlin

    2015-07-01

    X-ray pulsar-based spacecraft navigation is emerging as a new kind of autonomous navigation technology with high potential, owing to its advantages of high reliability, good autonomy, high precision and wide applicability. Timing and the determination of position and attitude are the main prospects of using X-ray pulsars [1,2]. To realize pulse signal timing, a Phase-Locked Loop (PLL) circuit for tracking the pulsar signal frequency is designed in this paper; the PLL is built in the Simulink environment and tested using a simple pulse signal to obtain circuit parameters with a good tracking effect. The Crab Nebula pulse profile, which is used as the simulation signal source, is modelled using the mathematical method of [3]. The simulation results show that the PLL circuit designed in this paper can track the frequency of the pulse signal precisely and can be used for spacecraft clock correction.

  18. Stray light analysis and suppression method of dynamic star simulator based on LCOS splicing technology

    NASA Astrophysics Data System (ADS)

    Meng, Yao; Zhang, Guo-yu

    2015-10-01

    The star simulator serves as ground calibration equipment for the star sensor, testing the star sensor's related parameters and performance. At present, when a dynamic star simulator based on LCOS splicing is identified by the star sensor, the major problem is the poor LCOS contrast. In this paper, we analyze the cause of the LCOS stray light, namely the relation between the incident angle of the light and the contrast ratio, and establish the functional relationship between the angle and the irradiance of the stray light. According to this relationship, we propose a scheme for controlling the incident angle. A popular approach is the compound parabolic concentrator (CPC); although in theory it can control any angle we want, in practice it is usually restricted to angles above +/-15° because of its length and manufacturing cost. We therefore place a telescopic system in front of the CPC, working on the same principle as a laser beam expander. We simulate the CPC with TracePro to obtain the exit surface irradiance. The telescopic system is designed in ZEMAX to correct chromatic aberration. As a result, we obtain a collimating light source with a viewing angle of less than +/-5° and a uniform irradiation area greater than 20 mm × 20 mm.

  1. Copula-based method for multisite monthly and daily streamflow simulation

    NASA Astrophysics Data System (ADS)

    Chen, Lu; Singh, Vijay P.; Guo, Shenglian; Zhou, Jianzhong; Zhang, Junhong

    2015-09-01

    Multisite stochastic simulation of streamflow sequences is needed for water resources planning and management. In this study, a new copula-based method is proposed for generating long-term multisite monthly and daily streamflow data. A multivariate copula, which is established based on bivariate copulas and conditional probability distributions, is employed to describe temporal dependences (single site) and spatial dependences (between sites). Monthly or daily streamflows at multiple sites are then generated by sampling from the conditional copula. Three tributaries of the Colorado River and the upper Yangtze River are selected to evaluate the proposed methodology. Results show that the generated data at both higher and lower time scales can capture the distribution properties of each single site and preserve the spatial correlation of streamflows at different locations. The main advantage of the method is that the trivariate copula can be established using three bivariate copulas and the model parameters can be easily estimated using the Kendall tau rank correlation coefficient, which makes it possible to generate daily streamflow data. The method provides a new tool for multisite stochastic simulation.
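
    The copula sampling idea above can be illustrated with a deliberately simplified stand-in: a bivariate Gaussian copula whose parameter is estimated from Kendall's tau, with assumed lognormal marginals at two sites. The paper's construction from bivariate copulas and conditional distributions is richer; the data below are synthetic.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Hypothetical 'observed' monthly flows (m^3/s) at two nearby sites.
    obs_a = rng.lognormal(mean=5.0, sigma=0.5, size=360)
    obs_b = np.exp(0.8 * np.log(obs_a) + rng.normal(0.0, 0.3, size=360))

    # Fit an assumed lognormal marginal at each site.
    pa = stats.lognorm.fit(obs_a, floc=0)
    pb = stats.lognorm.fit(obs_b, floc=0)

    # Copula parameter from the Kendall tau rank correlation, as in the
    # abstract; for the Gaussian copula rho = sin(pi * tau / 2).
    tau, _ = stats.kendalltau(obs_a, obs_b)
    rho = np.sin(np.pi * tau / 2.0)

    # Generate synthetic flows: correlated normals -> uniforms -> marginals.
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]],
                                size=1000)
    u = stats.norm.cdf(z)
    sim_a = stats.lognorm.ppf(u[:, 0], *pa)
    sim_b = stats.lognorm.ppf(u[:, 1], *pb)

    print("observed tau:", round(tau, 3),
          "simulated tau:", round(stats.kendalltau(sim_a, sim_b)[0], 3))
    ```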

  2. Is social projection based on simulation or theory? Why new methods are needed for differentiating.

    PubMed

    Bazinger, Claudia; Kühberger, Anton

    2012-12-01

    The literature on social cognition reports many instances of a phenomenon titled 'social projection' or 'egocentric bias'. These terms indicate egocentric predictions, i.e., an over-reliance on the self when predicting the cognition, emotion, or behavior of other people. The classic method to diagnose egocentric prediction is to establish high correlations between our own and other people's cognition, emotion, or behavior. We argue that this method is incorrect because there is a different way to come to a correlation between own and predicted states, namely, through the use of theoretical knowledge. Thus, the use of correlational measures is not sufficient to identify the source of social predictions. Based on the distinction between simulation theory and theory theory, we propose the following alternative methods for inferring prediction strategies: independent vs. juxtaposed predictions, the use of 'hot' mental processes, and the use of participants' self-reports.

  3. Broken wires diagnosis method numerical simulation based on smart cable structure

    NASA Astrophysics Data System (ADS)

    Li, Sheng; Zhou, Min; Yang, Yan

    2014-12-01

    A smart cable with embedded distributed fiber optic Bragg grating (FBG) sensors was chosen as the object for studying a new method of diagnosing broken wires in bridge cables. A diagnosis strategy based on the cable force and the stress distribution state of the steel wires is put forward. By establishing bridge-cable and cable-steel-wire models, a sample database of broken wires was simulated numerically. A method of characterizing the cable state pattern, representing both the degree and the location of broken wires inside a cable, is proposed. The training and prediction results for the sample database with a back propagation (BP) neural network showed that the proposed broken wires diagnosis method is feasible, and it expands the broken wires diagnosis research area by using the smart cable, which previously was used only to represent cable force.

  6. Evaluation of FTIR-based analytical methods for the analysis of simulated wastes

    SciTech Connect

    Rebagay, T.V.; Cash, R.J.; Dodd, D.A.; Lockrem, L.L.; Meacham, J.E.; Winkelman, W.D.

    1994-09-30

    Three FTIR-based analytical methods with the potential to characterize simulated waste tank materials have been evaluated. These include: (1) fiber optics, (2) modular transfer optic using light guides equipped with non-contact sampling peripherals, and (3) photoacoustic spectroscopy. Pertinent instrumentation and experimental procedures for each method are described. The results show that the near-infrared (NIR) region of the infrared spectrum is the region of choice for the measurement of moisture in waste simulants. Differentiation of the NIR spectrum, as a preprocessing step, will improve the analytical result. Preliminary data indicate that prominent combination bands of water and the first overtone band of the ferrocyanide stretching vibration may be utilized to measure water and ferrocyanide species simultaneously. Both near-infrared and mid-infrared spectra must be collected, however, to measure ferrocyanide species unambiguously and accurately. For ease of sample handling and the potential for field or waste tank deployment, the FTIR-Fiber Optic method is preferred over the other two methods. Modular transfer optic using light guides and photoacoustic spectroscopy may be used as backup systems and for the validation of the fiber optic data.
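
    Since the abstract singles out differentiation of the NIR spectrum as the key preprocessing step, here is a minimal sketch of one common way to do it, assuming a Savitzky-Golay filter and a synthetic band; the window length and polynomial order are arbitrary choices, not values from the study.

        # Hedged sketch: Savitzky-Golay first-derivative preprocessing of a
        # synthetic NIR spectrum (toy band shape; window/order are assumptions).
        import numpy as np
        from scipy.signal import savgol_filter

        wavenumber = np.linspace(4000, 10000, 1200)                     # cm^-1
        spectrum = np.exp(-0.5 * ((wavenumber - 6900.0) / 150.0) ** 2)  # toy water band

        # differentiate and smooth in one pass; suppresses baseline offsets
        d1 = savgol_filter(spectrum, window_length=15, polyorder=2, deriv=1,
                           delta=wavenumber[1] - wavenumber[0])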

  7. A simple numerical method for snowmelt simulation based on the equation of heat energy.

    PubMed

    Stojković, Milan; Jaćimović, Nenad

    2016-01-01

    This paper presents a one-dimensional numerical model for snowmelt/accumulation simulations, based on the equation of heat energy. It is assumed that the snow column is homogeneous at the current time step; however, its characteristics, such as snow density and thermal conductivity, are treated as functions of time. The equation of heat energy for the snow column is solved using the implicit finite difference method. The incoming energy at the snow surface includes the following parts: conduction, convection, radiation and raindrop energy. Along with the snowmelt process, the model includes a model for snow accumulation. The Euler method is utilized for the numerical integration of the balance equation. The model applicability is demonstrated at the meteorological station Zlatibor, located in the western region of Serbia at 1,028 meters above sea level (m.a.s.l.). Simulation results of snowmelt/accumulation suggest that the proposed model achieves better agreement with observed data than the temperature index method. The proposed method may be utilized as part of a deterministic hydrological model in order to improve short- and long-term predictions of possible flood events. PMID:27054726
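
    A minimal sketch of the implicit (backward Euler) finite-difference step the abstract describes, for heat conduction in a homogeneous snow column; the diffusivity, grid spacing and boundary temperatures are invented for illustration.

        # Hedged sketch: one backward-Euler (implicit) step of the 1-D heat
        # equation on a homogeneous snow column; alpha = k/(rho*c) and all
        # numbers below are invented for illustration.
        import numpy as np
        from scipy.linalg import solve_banded

        def implicit_step(T, dt, dz, alpha, T_surface, T_ground):
            # assemble the tridiagonal system (I + r*L) T_new = T_old
            n = len(T)
            r = alpha * dt / dz**2
            ab = np.zeros((3, n))            # banded storage for solve_banded
            ab[0, 1:] = -r                   # super-diagonal
            ab[1, :] = 1.0 + 2.0 * r         # main diagonal
            ab[2, :-1] = -r                  # sub-diagonal
            rhs = T.copy()
            ab[1, 0] = ab[1, -1] = 1.0       # Dirichlet rows at both boundaries
            ab[0, 1] = ab[2, -2] = 0.0
            rhs[0], rhs[-1] = T_surface, T_ground
            return solve_banded((1, 1), ab, rhs)

        T = np.full(50, -5.0)                # initial snow temperatures, deg C
        T = implicit_step(T, dt=3600.0, dz=0.02, alpha=4e-7,
                          T_surface=-1.0, T_ground=0.0)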

  9. Simulating rare events using a weighted ensemble-based string method.

    PubMed

    Adelman, Joshua L; Grabe, Michael

    2013-01-28

    We introduce an extension to the weighted ensemble (WE) path sampling method to restrict sampling to a one-dimensional path through a high dimensional phase space. Our method, which is based on the finite-temperature string method, permits efficient sampling of both equilibrium and non-equilibrium systems. Sampling obtained from the WE method guides the adaptive refinement of a Voronoi tessellation of order parameter space, whose generating points, upon convergence, coincide with the principal reaction pathway. We demonstrate the application of this method to several simple, two-dimensional models of driven Brownian motion and to the conformational change of the nitrogen regulatory protein C receiver domain using an elastic network model. The simplicity of the two-dimensional models allows us to directly compare the efficiency of the WE method to conventional brute force simulations and other path sampling algorithms, while the example of protein conformational change demonstrates how the method can be used to efficiently study transitions in the space of many collective variables.

  10. IR imaging simulation and analysis for aeroengine exhaust system based on reverse Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Chen, Shiguo; Chen, Lihai; Mo, Dongla; Shi, Jingcheng

    2014-11-01

    The IR radiation characteristics of an aeroengine are an important basis for the IR stealth design and anti-stealth detection of aircraft, and with the development of IR imaging sensor technology the importance of aircraft IR stealth increases. This work explores target IR radiation imaging simulation based on the Reverse Monte Carlo Method (RMCM) combined with commercial CFD software. The flow and IR radiation characteristics of an aeroengine exhaust system are investigated: a full-size geometry model is developed from the actual parameters, a flow-IR integrated structured mesh is used, the engine performance parameters are taken as the inlet boundary conditions of the mixer section, and a numerical simulation model of the IR radiation characteristics of the engine exhaust system is constructed based on RMCM. With the above models, the IR radiation characteristics of the aeroengine exhaust system are given, focusing on IR spectral radiance imaging in the typical detection band at azimuth 20°. The results show that: (1) at small azimuth angles, the IR radiation comes mainly from the center cone among all hot parts; near azimuth 15°, the mixer makes the largest radiation contribution, while the center cone, turbine and flame stabilizer contribute comparably; (2) the main radiation components and their spatial distribution differ between spectral bands, with CO2 absorbing and emitting strongly at 4.18, 4.33 and 4.45 micron, and H2O at 3.0 and 5.0 micron.

  11. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

    A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
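
    A toy sketch of the message-loop idea in the claim, with agents conditioned on the three named events; the class and message names are illustrative, not the patent's actual interfaces.

        # Hedged sketch: agents reacting to the three discrete events named in
        # the claim (clock tick, resources received, request for output
        # production); names are illustrative, not the patent's API.
        from collections import deque

        class ProcessAgent:
            def __init__(self, name):
                self.name, self.buffer = name, 0

            def handle(self, event, payload=None):
                if event == "clock_tick":
                    pass                           # advance internal schedule
                elif event == "resources_received":
                    self.buffer += payload         # stock incoming material
                elif event == "request_output":
                    made = min(self.buffer, payload)
                    self.buffer -= made
                    return made                    # produced output units

        agents = [ProcessAgent("cutting"), ProcessAgent("assembly")]
        loop = deque([("resources_received", 5), ("clock_tick", None),
                      ("request_output", 3)])
        while loop:                                # single-processor message loop
            event, payload = loop.popleft()
            for agent in agents:
                agent.handle(event, payload)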

  12. Density-of-states based Monte Carlo methods for simulation of biological systems

    NASA Astrophysics Data System (ADS)

    Rathore, Nitin; Knotts, Thomas A.; de Pablo, Juan J.

    2004-03-01

    We have developed density-of-states [1] based Monte Carlo techniques for simulation of biological molecules. Two such methods are discussed. The first, Configurational Temperature Density of States (CTDOS) [2], relies on computing the density of states of a peptide system from knowledge of its configurational temperature. The reciprocal of this intrinsic temperature, computed from instantaneous configurational information of the system, is integrated to arrive at the density of states. The method shows improved efficiency and accuracy over techniques that are based on histograms of random visits to distinct energy states. The second approach, Expanded Ensemble Density of States (EXEDOS), incorporates elements from both the random walk method and the expanded ensemble formalism. It is used in this work to study mechanical deformation of model peptides. Results are presented in the form of force-extension curves and the corresponding potentials of mean force. The application of this proposed technique is further generalized to other biological systems; results will be presented for ion transport through protein channels, base stacking in nucleic acids and hybridization of DNA strands. [1]. F. Wang and D. P. Landau, Phys. Rev. Lett., 86, 2050 (2001). [2]. N. Rathore, T. A. Knotts IV and J. J. de Pablo, Biophys. J., Dec. (2003).
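
    Both CTDOS and EXEDOS build on the Wang-Landau flat-histogram scheme of ref. [1]; the sketch below shows that underlying random walk on a tiny 1-D Ising ring (not the authors' code, and the flatness criterion and modification schedule are common textbook choices).

        # Hedged sketch: Wang-Landau estimate of the density of states g(E)
        # for a 10-spin 1-D Ising ring (J = 1, periodic boundaries).
        import numpy as np

        rng = np.random.default_rng(1)
        N = 10
        spins = rng.choice([-1, 1], size=N)
        energy = lambda s: -int(np.sum(s * np.roll(s, 1)))

        E_levels = np.arange(-N, N + 1, 4)           # allowed energies
        idx = {E: i for i, E in enumerate(E_levels)}
        lng = np.zeros(len(E_levels))                # ln g(E) estimate
        hist = np.zeros(len(E_levels))
        f = 1.0                                      # ln modification factor
        E = energy(spins)

        while f > 1e-4:
            k = rng.integers(N)
            spins[k] *= -1                           # trial single-spin flip
            E_new = energy(spins)
            # accept with min(1, g(E)/g(E_new)) so all energies are visited
            if np.log(rng.random()) < lng[idx[E]] - lng[idx[E_new]]:
                E = E_new
            else:
                spins[k] *= -1                       # reject: flip back
            lng[idx[E]] += f
            hist[idx[E]] += 1
            if hist.min() > 0.8 * hist.mean():       # histogram "flat" enough
                hist[:] = 0
                f *= 0.5                             # refine and continue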

  13. Spin tracking simulations in AGS based on ray-tracing methods - bare lattice, no snakes -

    SciTech Connect

    Meot, F.; Ahrens, L.; Gleen, J.; Huang, H.; Luccio, A.; MacKay, W. W.; Roser, T.; Tsoupas, N.

    2009-09-01

    This Note reports on the first simulations of spin dynamics in the AGS using the ray-tracing code Zgoubi. It includes lattice analysis, comparisons with MAD, DA tracking, numerical calculation of depolarizing resonance strengths and comparisons with analytical models, etc. It also includes details on the setting-up of Zgoubi input data files and on the various numerical methods of concern in, and available from, Zgoubi. Simulations of the crossing and neighboring of spin resonances in the AGS ring, bare lattice, without snakes, have been performed in order to assess the capabilities of Zgoubi in that matter, and are reported here. This yields a rather long document. The two main reasons for that are, on the one hand, the desire for an extended investigation of the energy span, and on the other hand, a thorough comparison of Zgoubi results with analytical models such as the 'thin lens' approximation, the weak resonance approximation, and the static case. Section 2 details the working hypotheses: AGS lattice data, formulae used for deriving various resonance-related quantities from the ray-tracing based 'numerical experiments', etc. Section 3 gives inventories of the intrinsic and imperfection resonances together with, in a number of cases, the strengths derived from the ray-tracing. Section 4 gives the details of the numerical simulations of resonance crossing, including the behavior of various quantities (closed orbit, synchrotron motion, etc.) aimed at checking that the conditions of particle and spin motion are correct. In a similar manner, Section 5 gives the details of the numerical simulations of spin motion in the static case: fixed energy in the neighborhood of the resonance. In Section 6, weak resonances are explored, and Zgoubi results are compared with the Fresnel integrals model. Section 7 shows the computation of the n⃗ vector in the AGS lattice and tuning considered. Many details on the numerical conditions, such as data files, are given in the Appendix.

  14. A new method to extract stable feature points based on self-generated simulation images

    NASA Astrophysics Data System (ADS)

    Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen

    2015-10-01

    Recently, image processing has received considerable attention in fields such as photogrammetry and medical image processing. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem, and feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge-response points; the disadvantage is the need to set the thresholds manually. The main idea of this paper is to obtain stable extrema via a machine learning algorithm. First, we use the ASIFT approach, coupled with illumination changes and blur, to generate multi-view simulated images, which make up the simulated image set of the original image. Because of the way the simulated image set is generated, the affine transformation of each generated image is also known; compared with the traditional matching process, which relies on the unstable RANSAC method to estimate the affine transformation, this approach is more stable and accurate. Second, we calculate the stability value of each feature point from the image set and its affine transformations, and collect the feature properties of each point, such as DoG features, scales, and edge point density. These form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, training with Rank-SVM yields a weight vector. In use, based on the feature properties of each point and the weight vector obtained from training, we compute a sort value for each feature point that reflects its stability, and sort the feature points accordingly. In conclusion, we tested our algorithm against the original SIFT detector; under different viewpoint changes, blurs and illuminations, the experimental results show that our algorithm is more efficient.

  15. [Method for environmental management in paper industry based on pollution control technology simulation].

    PubMed

    Zhang, Xue-Ying; Wen, Zong-Guo

    2014-11-01

    To evaluate the reduction potential of industrial water pollutant emissions and to study the application of technology simulation in pollutant control and environmental management, an Industrial Reduction Potential Analysis and Environment Management (IRPAEM) model was developed based on the coupling of "material-process-technology-product". The model integrates bottom-up modeling and scenario analysis, and was applied to China's paper industry. Results show that under the CM scenario, the reduction potentials of wastewater, COD and ammonia nitrogen would reach 7×10^8 t, 39×10^4 t and 0.3×10^4 t, respectively, in 2015, and 13.8×10^8 t, 56×10^4 t and 0.5×10^4 t, respectively, in 2020. Strengthening end-of-pipe treatment would remain the key means of reducing emissions during 2010-2020, while the reduction effect of structural adjustment would be more obvious during 2015-2020. Pollution production could basically reach the domestic or international advanced level of cleaner production in 2015 and 2020; wastewater and ammonia nitrogen would basically meet the emission standards in 2015 and 2020, while COD would not.

  16. Mechanical Stress Simulation of Scored Tablets Based on the Finite Element Method and Experimental Verification.

    PubMed

    Okada, Nobuto; Hayashi, Yoshihiro; Onuki, Yoshinori; Miura, Takahiro; Obata, Yasuko; Takayama, Kozo

    2016-01-01

    Scored tablets can be divided into equal halves for the individual treatment of patients. However, the relationships between score shapes and tablet characteristics such as the dividing strength, halving equality, and breaking strength are poorly understood. The purpose of this study was to simulate the mechanical stress distribution of scored tablets by using the finite element method (FEM). A groove with a triangular-prism cross section on the top surface of flat tablets was used as the score shape. The depth and angle of the score were selected as design variables. Elastic parameters such as Young's modulus and the Poisson ratio of the model powder bed were measured. FEM simulation was then applied to the scored tablets, represented as a continuum elastic model. Stress distributions in the inner structure of the tablets were simulated after applying an external force. The adequacy of the simulation was evaluated in experiments using scored tablets. As a result, we observed relatively good agreement between the FEM simulation and the experiments, suggesting that FEM simulation is advantageous for designing scored tablets. PMID:27477653

  17. [Simulation of water and carbon fluxes in harvard forest area based on data assimilation method].

    PubMed

    Zhang, Ting-Long; Sun, Rui; Zhang, Rong-Hua; Zhang, Lei

    2013-10-01

    Model simulation and in situ observation are the two most important means of studying the water and carbon cycles of terrestrial ecosystems, but each has its own advantages and shortcomings. Combining these two means helps to reflect the dynamic changes of ecosystem water and carbon fluxes more accurately, and data assimilation provides an effective way to integrate model simulation and in situ observation. Based on observation data from the Harvard Forest Environmental Monitoring Site (EMS), and by using the ensemble Kalman filter algorithm, this paper assimilated field-measured LAI and remote sensing LAI into the Biome-BGC model to simulate the water and carbon fluxes in the Harvard Forest area. Compared with the original model simulated without data assimilation, the improved Biome-BGC model with assimilation of the field-measured LAI in 1998, 1999, and 2006 increased the coefficient of determination R2 between model simulation and flux observation for the net ecosystem exchange (NEE) and evapotranspiration by 8.4% and 10.6%, decreased the sum of absolute error (SAE) and root mean square error (RMSE) of NEE by 17.7% and 21.2%, and decreased the SAE and RMSE of evapotranspiration by 26.8% and 28.3%, respectively. After assimilating the MODIS LAI products of 2000-2004 into the improved Biome-BGC model, the R2 between simulated and observed NEE and evapotranspiration increased by 7.8% and 4.7%, the SAE and RMSE of NEE decreased by 21.9% and 26.3%, and the SAE and RMSE of evapotranspiration decreased by 24.5% and 25.5%, respectively. These results suggest that the simulation accuracy of ecosystem water and carbon fluxes can be effectively improved if field-measured LAI or remote sensing LAI is integrated into the model.
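
    A generic sketch of the ensemble Kalman filter analysis step used to merge an observed LAI value into an ensemble of model states; the two-variable state, ensemble size and error values are invented, and this is not Biome-BGC's actual interface.

        # Hedged sketch: perturbed-observation EnKF analysis for one scalar
        # observation (an LAI value); all numbers are illustrative.
        import numpy as np

        def enkf_update(X, y_obs, obs_var, H, rng):
            # X: (n_state, n_ens) ensemble; H: (n_state,) observation operator
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)      # ensemble anomalies
            HX = H @ X                                 # simulated observations
            P_hh = HX.var(ddof=1) + obs_var            # innovation variance
            P_xh = A @ (HX - HX.mean()) / (n_ens - 1)  # state-obs covariance
            K = P_xh / P_hh                            # Kalman gain
            for j in range(n_ens):                     # update each member
                y_j = y_obs + rng.normal(0.0, np.sqrt(obs_var))
                X[:, j] += K * (y_j - HX[j])
            return X

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(3.0, 0.4, 30),       # LAI ensemble (observed)
                       rng.normal(0.2, 0.05, 30)])     # soil-water ensemble
        X = enkf_update(X, y_obs=3.6, obs_var=0.01,
                        H=np.array([1.0, 0.0]), rng=rng)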

  18. A simulation-based marginal method for longitudinal data with dropout and mismeasured covariates.

    PubMed

    Yi, Grace Y

    2008-07-01

    Longitudinal data often contain missing observations and error-prone covariates. Extensive attention has been directed to analysis methods to adjust for the bias induced by missing observations. There is relatively little work on investigating the effects of covariate measurement error on estimation of the response parameters, especially on simultaneously accounting for the biases induced by both missing values and mismeasured covariates. It is not clear what the impact of ignoring measurement error is when analyzing longitudinal data with both missing observations and error-prone covariates. In this article, we study the effects of covariate measurement error on estimation of the response parameters for longitudinal studies. We develop an inference method that adjusts for the biases induced by measurement error as well as by missingness. The proposed method does not require the full specification of the distribution of the response vector but only requires modeling its mean and variance structures. Furthermore, the proposed method employs the so-called functional modeling strategy to handle the covariate process, with the distribution of covariates left unspecified. These features, plus the simplicity of implementation, make the proposed method very attractive. In this paper, we establish the asymptotic properties for the resulting estimators. With the proposed method, we conduct sensitivity analyses on a cohort data set arising from the Framingham Heart Study. Simulation studies are carried out to evaluate the impact of ignoring covariate measurement error and to assess the performance of the proposed method. PMID:18199691

  19. Two methods for transmission line simulation model creation based on time domain measurements

    NASA Astrophysics Data System (ADS)

    Rinas, D.; Frei, S.

    2011-07-01

    The emission from transmission lines plays an important role in the electromagnetic compatibility of automotive electronic systems. In the frequency range below 200 MHz, radiation from cables is often the dominant emission factor; in higher frequency ranges, radiation from PCBs and their housings becomes more relevant, with the conducting traces as the main sources of this emission. The established field measurement methods according to CISPR 25 for the evaluation of emissions suffer from the need to use large anechoic chambers. Furthermore, the measurement data cannot be used for simulation model creation in order to compute the overall fields radiated from a car. In this paper, a method to determine the far fields and a simulation model of radiating transmission lines, especially cable bundles and conducting traces on planar structures, is proposed. The method measures the electromagnetic near field above the test object. Measurements are done in the time domain in order to obtain phase information and to reduce measurement time. On the basis of the near-field data, equivalent source identification can be done. By considering correlations between sources along each conductive structure in the model creation process, the model accuracy increases and computational costs can be reduced.

  20. Simulation of the electrode shape change in electrochemical machining based on the level set method

    NASA Astrophysics Data System (ADS)

    Topa, V.; Purcar, M.; Avram, A.; Munteanu, C.; Chereches, R.; Grindei, L.

    2012-04-01

    This paper proposes a generally applicable numerical algorithm for the simulation of two dimensional electrode shape changes during electrochemical machining processes. The computational model consists of two coupled problems: an electrode shape change rate analysis and a moving boundary problem. The innovative aspect is that the workpiece shape is computed over a number of predefined time steps by convection of its surface with a velocity proportional and in the direction of the local electrode shape change rate. An example related to the electrochemical machining of a slot in a stainless steel plate is presented here to demonstrate the strong features of the proposed method.
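
    A minimal sketch of the level-set update behind such a moving-boundary step: the interface is convected with a normal speed v by solving phi_t + v*|grad phi| = 0 with a first-order Godunov upwind scheme; the grid, speed field and time step are invented.

        # Hedged sketch: explicit upwind time step of the level-set equation
        # phi_t + v*|grad phi| = 0 for outward interface motion (v >= 0).
        import numpy as np

        def level_set_step(phi, v, dx, dt):
            dxm = (phi - np.roll(phi, 1, axis=0)) / dx   # backward differences
            dxp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward differences
            dym = (phi - np.roll(phi, 1, axis=1)) / dx
            dyp = (np.roll(phi, -1, axis=1) - phi) / dx
            grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                         + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
            return phi - dt * v * grad

        n, dx = 128, 1.0 / 128
        Y, X = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
        phi = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2) - 0.2   # circle interface
        v = 0.1 * np.ones_like(phi)                        # uniform etch rate
        for _ in range(50):
            phi = level_set_step(phi, v, dx, dt=0.5 * dx)  # CFL-limited step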

  1. Full wave simulation of waves in ECRIS plasmas based on the finite element method

    SciTech Connect

    Torrisi, G.; Mascali, D.; Neri, L.; Castro, G.; Patti, G.; Celona, L.; Gammino, S.; Ciavola, G.; Di Donato, L.; Sorbello, G.; Isernia, T.

    2014-02-12

    This paper describes the modeling and full wave numerical simulation of electromagnetic wave propagation and absorption in an anisotropic magnetized plasma filling the resonant cavity of an electron cyclotron resonance ion source (ECRIS). The model assumes inhomogeneous, dispersive and tensorial constitutive relations. Maxwell's equations are solved by the finite element method (FEM), using the COMSOL Multiphysics® suite. All the relevant details have been considered in the model, including the non-uniform external magnetostatic field used for plasma confinement and the local electron density profile, resulting in a fully 3D non-uniform magnetized-plasma complex dielectric tensor. The more accurate plasma simulations clearly show the importance of the cavity effect on wave propagation and the effects of a resonant surface. These studies are the pillars of an improved ECRIS plasma modeling, which is mandatory to optimize the ion source output (beam intensity distribution and charge state, especially). Any new project concerning advanced ECRIS design will benefit from adequate modeling based on self-consistent wave absorption simulations.

  2. Monte Carlo method based radiative transfer simulation of stochastic open forest generated by circle packing application

    NASA Astrophysics Data System (ADS)

    Jin, Shengye; Tamura, Masayuki

    2013-10-01

    The Monte Carlo Ray Tracing (MCRT) method is a versatile application for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, for example a forest area. Due to its robustness to changes in the complexity of the 3-D scene, the MCRT method is also employed for simulating the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling within vegetation, one basic step is the set-up of the canopy scene. 3-D scanning applications have been used to represent canopy structure as accurately as possible, but they are time consuming. A botanical growth function can be used to model single-tree growth, but cannot express the interaction among trees. The L-system is also a function-controlled tree growth simulation model, but it costs a large amount of computing memory and, in addition, it only models the current tree pattern rather than tree growth while the radiative transfer regime is simulated. Therefore, it is much more practical to use regular solids such as ellipsoids, cones and cylinders to represent single canopies. Considering the allelopathy phenomenon seen in some open-forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the tree amount (N) of the 3-D scene are declared first, similar to the random open forest image. Accordingly, we randomly generate each canopy radius (rc), and then set the circle central coordinates on the XY-plane while keeping the circles separate from each other by the circle packing algorithm. To model the individual tree, we employ Ishikawa's tree growth regression model to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
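
    A rough sketch of the rejection-based circle packing step described above, with each crown kept outside every previously placed crown's 'domain'; the domain size, radius range and tree count are arbitrary.

        # Hedged sketch: random circle packing for non-overlapping crowns on
        # the XY-plane; all parameters are invented for illustration.
        import numpy as np

        rng = np.random.default_rng(7)

        def pack_circles(n_trees, domain=100.0, r_lo=2.0, r_hi=5.0,
                         max_tries=20000):
            placed = []                              # (x, y, r) per crown
            tries = 0
            while len(placed) < n_trees and tries < max_tries:
                tries += 1
                r = rng.uniform(r_lo, r_hi)
                x, y = rng.uniform(r, domain - r, size=2)
                # "allelopathy": reject any crown overlapping an existing one
                if all((x - px)**2 + (y - py)**2 >= (r + pr)**2
                       for px, py, pr in placed):
                    placed.append((x, y, r))
            return np.array(placed)

        crowns = pack_circles(60)
        coverage = np.pi * (crowns[:, 2]**2).sum() / 100.0**2  # cover fraction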

  3. Entropy in biomolecular simulations: A comprehensive review of atomic fluctuations-based methods.

    PubMed

    Kassem, Summer; Ahmed, Marawan; El-Sheikh, Salah; Barakat, Khaled H

    2015-11-01

    The entropy of binding constitutes a major, and in many cases a detrimental, component of the binding affinity in biomolecular interactions. While the enthalpic part of the binding free energy is easier to calculate, estimating the entropy of binding is far more complicated. A precise evaluation of entropy requires a comprehensive exploration of the complete phase space of the interacting entities. As this task is extremely hard to accomplish in the context of conventional molecular simulations, calculating entropy has involved many approximations. Most of these gold-standard methods have focused on developing a reliable estimation of the conformational part of the entropy. Here, we review these methods with a particular emphasis on the different techniques that extract entropy from atomic fluctuations. The theoretical formalism behind each method is explained, highlighting its strengths as well as its limitations, followed by a description of a number of case studies for each method. We hope that this brief, yet comprehensive, review provides a useful tool for understanding these methods and realizing the practical issues that may arise in such calculations.
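
    As one concrete representative of the atomic-fluctuation family reviewed here, the sketch below implements Schlitter's covariance-based upper bound S <= (kB/2) ln det(I + (kB*T*e^2/hbar^2) M^(1/2) C M^(1/2)); the two-atom toy trajectory is synthetic, not data from the review.

        # Hedged sketch: Schlitter's entropy bound from the covariance of
        # Cartesian atomic fluctuations (toy trajectory, SI units).
        import numpy as np

        kB = 1.380649e-23        # J/K
        hbar = 1.054571817e-34   # J*s
        amu = 1.66053906660e-27  # kg

        def schlitter_entropy(coords, masses, T=300.0):
            # coords: (n_frames, 3*n_atoms) in meters; masses in amu
            C = np.cov(coords, rowvar=False)          # coordinate covariance
            m = np.repeat(masses * amu, 3)            # per-DOF masses, kg
            Mh = np.sqrt(np.outer(m, m))              # gives M^(1/2) C M^(1/2)
            arg = np.eye(C.shape[0]) + (kB * T * np.e**2 / hbar**2) * (Mh * C)
            sign, logdet = np.linalg.slogdet(arg)
            return 0.5 * kB * logdet                  # J/K, upper bound

        rng = np.random.default_rng(3)
        traj = rng.normal(0.0, 0.5e-10, size=(2000, 6))  # ~0.5 A fluctuations
        S = schlitter_entropy(traj, masses=np.array([12.0, 16.0]))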

  4. Incompressible SPH method based on Rankine source solution for violent water wave simulation

    NASA Astrophysics Data System (ADS)

    Zheng, X.; Ma, Q. W.; Duan, W. Y.

    2014-11-01

    With wide applications, the smoothed particle hydrodynamics (SPH) method has become an important numerical tool for solving complex flows, in particular those with a rapidly moving free surface. For such problems, incompressible Smoothed Particle Hydrodynamics (ISPH) has been shown in many papers in the literature to yield better and more stable pressure time histories than traditional SPH. However, the existing ISPH method directly approximates the second order derivatives of the functions to be solved by using the Poisson equation. The order of accuracy of the method becomes low, especially when particles are distributed in a disorderly manner, which generally happens when modelling violent water waves. This paper introduces a new formulation using the Rankine source solution. In the new approach to ISPH, the Poisson equation is first transformed into another form that does not include any derivative of the functions to be solved and, as a result, does not need numerical approximation of derivatives. The advantage of this, obviously, is a potentially more robust numerical method. The newly formulated method is tested by simulating various water waves, and its convergence behaviour is studied numerically in this paper. Its results are compared with experimental data in some cases, and reasonably good agreement is achieved. More importantly, the numerical results clearly show that the newly developed method needs fewer particles, and therefore lower computational costs, to achieve a similar level of accuracy, or produces more accurate results with the same number of particles, compared with traditional SPH and existing ISPH when applied to modelling water waves.

  5. Data base simulator

    SciTech Connect

    Pack, D. J.

    1982-03-01

    This document describes the features of and input to a computer program written for the purpose of generating data bases whose data values contain deterministically known errors. The development of the computer program was motivated by the need to assess automatic data editing procedures for data validation of real data bases. The observed values in the simulated data are the sum of generated true values and generated error values. For a given variable, true data values may be generated by any of the following six methods: frequency distribution, conditional frequency distribution, analysis of variance model, multiple regression model, ARIMA time series model, membership within a defined constrained region. The error values for a given variable may be simulated from an independent distribution or from a distribution dependent upon the error values of other specified variables. The computer program described can be used to satisfy other needs in the area of data simulation beyond the specific need expressed above. Since the addition of errors to the true values is optional, one may readily simulate observed data for variables using one or more of the six previously listed methods.
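
    A minimal sketch of the simulator's basic recipe, observed = true + error, with true values from a multiple-regression model and independently drawn errors; the coefficients and error scale are invented, and the oracle flags at the end illustrate how deterministically known errors support scoring an automatic editing procedure.

        # Hedged sketch: observed = true + error, with regression-model truth
        # and independent errors (all parameters invented for illustration).
        import numpy as np

        rng = np.random.default_rng(11)
        n = 1000
        x1 = rng.normal(50.0, 10.0, n)
        x2 = rng.uniform(0.0, 1.0, n)
        true_y = 2.0 + 0.3 * x1 + 5.0 * x2            # regression "truth"
        error = rng.normal(0.0, 1.5, n)               # independent error values
        observed_y = true_y + error                   # known, generated errors
        # an editing procedure can now be scored against the known error field
        flagged = np.abs(observed_y - true_y) > 3.0   # oracle validation flags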

  6. The Corrected Simulation Method of Critical Heat Flux Prediction for Water-Cooled Divertor Based on Euler Homogeneous Model

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyang; Han, Le; Chang, Haiping; Liu, Nan; Xu, Tiejun

    2016-02-01

    An accurate critical heat flux (CHF) prediction method is the key factor for realizing the steady-state operation of a water-cooled divertor that works under one-sided high heat flux conditions. An improved CHF prediction method, based on Euler's homogeneous model for flow boiling combined with the realizable k-ɛ model for single-phase flow, is adopted in this paper, in which the time relaxation coefficients are corrected by the Hertz-Knudsen formula in order to improve the calculation accuracy of the vapor-liquid conversion efficiency under high heat flux conditions. Moreover, large local differences in liquid physical properties due to the extremely non-uniform heating flux on the cooling wall along the circumferential direction are revised by the formula IAPWS-IF97. This method can therefore improve the calculation accuracy of the heat and mass transfer between the liquid phase and the vapor phase in CHF prediction simulations of water-cooled divertors under one-sided high heating conditions. An experimental example is simulated based on the improved and the uncorrected methods. The simulation results, such as temperature, void fraction and heat transfer coefficient, are analyzed to achieve the CHF prediction. The results show that the maximum error of the CHF based on the improved method is 23.7%, while that of the CHF based on the uncorrected method is up to 188%, as compared with the experimental results of Ref. [12]. Finally, the method is verified by comparison with experimental data obtained by the International Thermonuclear Experimental Reactor (ITER), with a maximum error of only 6%. This method provides an efficient tool for the CHF prediction of water-cooled divertors. Supported by the National Magnetic Confinement Fusion Science Program of China (No. 2010GB104005) and the National Natural Science Foundation of China (No. 51406085).

  7. DETECTORS AND EXPERIMENTAL METHODS Design and simulations for the detector based on DSSSD

    NASA Astrophysics Data System (ADS)

    Xu, Yan-Bing; Wang, Huan-Yu; Meng, Xiang-Cheng; Wang, Hui; Lu, Hong; Ma, Yu-Qian; Li, Xin-Qiao; Shi, Feng; Wang, Ping; Zhao, Xiao-Yun; Wu, Feng

    2010-12-01

    The present paper describes the design and simulation results of a position-sensitive charged particle detector based on the Double Sided Silicon Strip Detector (DSSSD). The characteristics of the DSSSD and its testing results are also discussed. With the application of the DSSSD, the position-sensitive charged particle detector can not only provide particle flux and energy spectrum information and identify different types of charged particles, but also measure the location and incidence angle of particles. As the detector can make multiparameter measurements of charged particles, it is widely used in space detection and exploration missions, such as charged particle detection related to earthquakes, space environment monitoring and solar activity inspection.

  8. Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions

    SciTech Connect

    Chen, Xiaodong; Yang, Vigor

    2014-07-15

    In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales. A notable example is liquid-jet atomization and subsequent droplet dynamics. It is essential to characterize the detailed flow physics with variable length scales with high fidelity, in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed based on distance- and topology-oriented criteria for thin regions with confining wall/plane of symmetry and in any situation, respectively. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares against a critical value, the ratio of an interfacial cell size to the distance between the mass center of the cell and a reference plane. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The requirement for interfacial mesh refinement can be detected swiftly, without the need of thickness information, equation solving, variable averaging or mesh repairing. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness. These include the dynamics of colliding droplets, droplet motions in a microchannel, and atomization of liquid impinging jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.

  9. Methods of channeling simulation

    SciTech Connect

    Barrett, J.H.

    1989-06-01

    Many computer simulation programs have been used to interpret experiments almost since the first channeling measurements were made. Certain aspects of these programs are important in how accurately they simulate ions in crystals; among these are the manner in which the structure of the crystal is incorporated, how any quantity of interest is computed, what ion-atom potential is used, how deflections are computed from the potential, incorporation of thermal vibrations of the lattice atoms, correlations of thermal vibrations, and form of stopping power. Other aspects of the programs are included to improve the speed; among these are table lookup, importance sampling, and the multiparameter method. It is desirable for programs to facilitate incorporation of special features of interest in special situations; examples are relaxations and enhanced vibrations of surface atoms, easy substitution of an alternate potential for comparison, change of row directions from layer to layer in strained-layer lattices, and different vibration amplitudes for substitutional solute or impurity atoms. Ways of implementing all of these aspects and features and the consequences of them will be discussed. 30 refs., 3 figs.

  10. A method to generate equivalent energy spectra and filtration models based on measurement for multidetector CT Monte Carlo dosimetry simulations.

    PubMed

    Turner, Adam C; Zhang, Di; Kim, Hyun J; DeMarco, John J; Cagnon, Chris H; Angel, Erin; Cody, Dianna D; Stevens, Donna M; Primak, Andrew N; McCollough, Cynthia H; McNitt-Gray, Michael F

    2009-06-01

    The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called "equivalent" source models consist of an energy spectrum and filtration description that are generated based wholly on the measured values and can be used in place of proprietary manufacturer's data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with the calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: One using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. Both equivalent source model types result in
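
    A small sketch of the HVL computation such an equivalent-spectrum search has to reproduce: the transmitted, energy-weighted signal is halved by root finding over absorber thickness; the toy spectrum and the aluminum attenuation curve are placeholders, not measured data.

        # Hedged sketch: numerically locating the first half-value layer
        # (HVL1) of a candidate spectrum behind aluminum (invented mu(E)).
        import numpy as np
        from scipy.optimize import brentq

        E = np.linspace(20.0, 120.0, 101)       # photon energy grid, keV
        spectrum = np.maximum(120.0 - E, 0.0)   # toy bremsstrahlung shape
        mu_al = 2.0 * (30.0 / E) ** 2.7 + 0.02  # invented Al attenuation, 1/cm

        def exposure(t_cm):
            # exposure-like signal behind t_cm of Al (uniform energy grid)
            return float(np.sum(spectrum * np.exp(-mu_al * t_cm) * E))

        hvl1 = brentq(lambda t: exposure(t) - 0.5 * exposure(0.0), 0.0, 10.0)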

  11. Efficacy of laser-based irrigant activation methods in removing debris from simulated root canal irregularities.

    PubMed

    Deleu, Ellen; Meire, Maarten A; De Moor, Roeland J G

    2015-02-01

    In root canal therapy, irrigating solutions are essential to assist in debridement and disinfection, but their spread and action are often restricted by canal anatomy. Hence, activation of irrigants is suggested to improve their distribution in the canal system, increasing irrigation effectiveness. Activation can be done with lasers, termed laser-activated irrigation (LAI). The purpose of this in vitro study was to compare the efficacy of different irrigant activation methods in removing debris from simulated root canal irregularities. Twenty-five straight human canine roots were embedded in resin, split, and their canals prepared to a standardized shape. A groove was cut in the wall of each canal and filled with dentin debris. Canals were filled with sodium hypochlorite and six irrigant activation procedures were tested: conventional needle irrigation (CI), manual-dynamic irrigation with a tapered gutta-percha cone (MDI), passive ultrasonic irrigation, LAI with a 2,940-nm erbium-doped yttrium aluminum garnet (Er:YAG) laser with a plain fiber tip inside the canal (Er-flat), LAI with an Er:YAG laser with a conical tip held at the canal entrance (Er-PIPS), and LAI with a 980-nm diode laser moving the fiber inside the canal (diode). The amount of remaining debris in the groove was scored and compared among the groups using non-parametric tests. Conventional irrigation removed significantly less debris than all other methods. The Er:YAG laser with a plain fiber tip was more efficient than MDI, CI, the diode laser, and the Er:YAG laser with the PIPS tip in removing debris from simulated root canal irregularities.

  12. Early breast cancer detection method based on a simulation study of single-channel passive microwave radiometry imaging

    NASA Astrophysics Data System (ADS)

    Kostopoulos, Spiros A.; Savva, Andonis D.; Asvestas, Pantelis A.; Nikolopoulos, Christos D.; Capsalis, Christos N.; Cavouras, Dionisis A.

    2015-09-01

    The aim of the present study is to provide a methodology for detecting temperature alterations in the human breast, based on single-channel microwave radiometer imaging. Radiometer measurements were simulated by modelling the human breast, the temperature distribution, and the antenna characteristics. Moreover, a simulated lesion of variable size and position in the breast was employed to provide slight temperature changes in the breast. To detect the presence of a lesion, the temperature distribution in the breast was reconstructed. This was accomplished by assuming that the temperature distribution is a mixture of distributions with unknown parameters, which were determined by means of the least squares and singular value decomposition methods. The proposed method was validated in a variety of scenarios by altering the lesion size, the lesion location and the radiometer position. The method proved capable of identifying temperature alterations caused by lesions at different locations in the breast.

  13. A new variable parallel holes collimator for scintigraphic device with validation method based on Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Trinci, G.; Massari, R.; Scandellari, M.; Boccalini, S.; Costantini, S.; Di Sero, R.; Basso, A.; Sala, R.; Scopinaro, F.; Soluri, A.

    2010-09-01

    The aim of this work is to present a new scintigraphic device able to change the length of its collimator automatically in order to adapt the spatial resolution to the gamma source distance. This patented technique replaces the collimator changes that standard gamma cameras still require. Monte Carlo simulations represent the best tool for investigating new technological solutions for such an innovative collimation structure. They also provide a valid analysis of the gamma camera performance as well as of the advantages and limits of this new solution. Specifically, the Monte Carlo simulations are realized with the GEANT4 (GEometry ANd Tracking) framework, and the specific simulation object is a collimation method based on separate blocks that can be brought closer together and farther apart, in order to reach and maintain specific spatial resolution values for all source-detector distances. To verify the accuracy and faithfulness of these simulations, we carried out experimental measurements with an identical setup and conditions. This confirms the power of simulation as an extremely useful tool, especially where new technological solutions need to be studied, tested and analyzed before their practical realization. The final aim of this new collimation system is the improvement of SPECT techniques, with real control of the spatial resolution during tomographic acquisitions. This principle allowed us to simulate a tomographic acquisition of two capillaries of radioactive solution, in order to verify the possibility of clearly distinguishing them.

  14. Ground motion simulation for the 23 August 2011, Mineral, Virginia earthquake using physics-based and stochastic broadband methods

    USGS Publications Warehouse

    Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz

    2015-01-01

    Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare with observed motions. The methods include a physics‐based model by Hartzell et al. (1999, 2005), a stochastic source‐based model by Boore (2009), and a stochastic site‐based model by Rezaeian and Der Kiureghian (2010, 2012). The ground‐motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground‐motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics‐based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site‐based model also yields more realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source‐based model has larger bias at lower frequencies (<0.5  Hz) and cannot reproduce the varying frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows that there is no obvious pattern with distance in the simulation bias, but there is some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall‐off with distance for all three models, comparable PGA and PSA amplitudes for the physics‐based and stochastic site‐based models, and systematic lower amplitudes for the stochastic source‐based model at lower frequencies (<0.5  Hz).

  15. Parallel adaptive mesh refinement method based on WENO finite difference scheme for the simulation of multi-dimensional detonation

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang

    2015-10-01

    For the numerical simulation of detonation, the computational cost of using uniform meshes is large due to the vast separation in both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for the numerical investigation of multi-dimensional detonation. A well-designed AMR method based on the finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes; it makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that the AMR&WENO method is accurate and has high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide further insight into the high performance of the parallel AMR&WENO method.

  16. Physical parameter identification method based on modal analysis for two-axis on-road vehicles: Theory and simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Minyi; Zhang, Bangji; Zhang, Jie; Zhang, Nong

    2016-03-01

    Physical parameters are very important for vehicle dynamic modeling and analysis. However, most physical parameter identification methods assume that some physical parameters of the vehicle are known so that the remaining unknown parameters can be identified. In order to identify the physical parameters of a vehicle in the case where all physical parameters are unknown, a methodology based on the State Variable Method (SVM) for the physical parameter identification of two-axis on-road vehicles is presented. The modal parameters of the vehicle are identified by the SVM; furthermore, the physical parameters of the vehicle are estimated by the least squares method. In the numerical simulations, the physical parameters of a Ford Granada are chosen as the parameters of the vehicle model, and a half-sine bump function is chosen to simulate a tire stimulated by an impulse excitation. The first numerical simulation shows that the present method can identify all of the physical parameters, with the largest absolute percentage error of the identified physical parameters being 0.205%. The effects of errors in the additional mass, the structural parameters and measurement noise are discussed in the following simulations; the results show that when the signal contains 30 dB noise, the largest absolute percentage error of the identification is 3.78%. These simulations verify that the presented method is effective and accurate for the physical parameter identification of two-axis on-road vehicles. The proposed methodology can identify all physical parameters of a 7-DOF vehicle model by using the free-decay responses of the vehicle, without needing to assume that some physical parameters are known.

  18. A fast flux tube-based method for solute-transport simulation

    NASA Astrophysics Data System (ADS)

    Atteia, Olivier; Huberson, Serge; Dupuy, Alain

    2011-03-01

    A new method to calculate the transport of dissolved species in aquifers is presented. This approach is an extension of the stream tubes which are used for flow computation. The flux tubes defined here are conservative for solutes, but not for water mass. The flux tubes are first defined in a general domain and then calculated in a two-dimensional uniform flow field. The tubes' computation is based on a parametric solution. The method is extended further in order to deal with heterogeneous media. A particle-tracking algorithm is used where the deviation of the flux-tube boundaries due to dispersion is accounted for. The approximate solution obtained by this approach is compared to classical numerical solutions given by a finite difference code (RT3D) and a finite element code (FEFLOW). This comparison was performed for several test cases with increasing complexity. The differences between the flux-tube approach and the other methods always remain small, even regarding mass conservation. The major advantage of the flux-tube approach is the ability to reach a solution quickly, as the method is hundreds to thousands of times faster than classical finite difference or finite element models.

  19. On-orbit servicing system assessment and optimization methods based on lifecycle simulation under mixed aleatory and epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel

    2013-06-01

    To assess the on-orbit servicing (OOS) paradigm and optimize its utilities by taking advantage of its inherent flexibility and responsiveness, the OOS system assessment and optimization methods based on lifecycle simulation under uncertainties are studied. The uncertainty sources considered in this paper include both the aleatory (random launch/OOS operation failure and on-orbit component failure) and the epistemic (the unknown trend of the end-used market price) types. Firstly, the lifecycle simulation under uncertainties is discussed. The chronological flowchart is presented. The cost and benefit models are established, and the uncertainties thereof are modeled. The dynamic programming method to make optimal decision in face of the uncertain events is introduced. Secondly, the method to analyze the propagation effects of the uncertainties on the OOS utilities is studied. With combined probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, based on which the OOS utility assessment tool under mixed uncertainties is developed. Thirdly, to further optimize the OOS system under mixed uncertainties, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of the traditional RBO method which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourthly, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool. Furthermore, the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.

  20. Simulation of metal cutting using the particle finite-element method and a physically based plasticity model

    NASA Astrophysics Data System (ADS)

    Rodríguez, J. M.; Jonsén, P.; Svoboda, A.

    2016-08-01

    Metal cutting is one of the most common metal-shaping processes. In this process, specified geometrical and surface properties are obtained through the break-up of material and removal by a cutting edge into a chip. The chip formation is associated with large strains, high strain rates and locally high temperatures due to adiabatic heating. These phenomena together with numerical complications make modeling of metal cutting difficult. Material models, which are crucial in metal-cutting simulations, are usually calibrated based on data from material testing. Nevertheless, the magnitudes of strains and strain rates involved in metal cutting are several orders of magnitude higher than those generated from conventional material testing. Therefore, a highly desirable feature is a material model that can be extrapolated outside the calibration range. In this study, a physically based plasticity model based on dislocation density and vacancy concentration is used to simulate orthogonal metal cutting of AISI 316L. The material model is implemented into an in-house particle finite-element method software. Numerical simulations are in agreement with experimental results, but also with previous results obtained with the finite-element method.

  1. National Clinical Skills Competition: an effective simulation-based method to improve undergraduate medical education in China

    PubMed Central

    Jiang, Guanchao; Chen, Hong; Wang, Qiming; Chi, Baorong; He, Qingnan; Xiao, Haipeng; Zhou, Qinghuan; Liu, Jing; Wang, Shan

    2016-01-01

Background The National Clinical Skills Competition has been held in China for 5 consecutive years since 2010 to promote undergraduate education reform and improve teaching quality. This study analyzes the effects of the simulation-based competition. Methods Participation in the competitions and the compilation of the questions used in the competition finals are summarized, and the competition's influence and guidance quality are further analyzed. Through the nationwide distribution of questionnaires in medical colleges, the effects of the simulation-based competition on promoting undergraduate medical education reform were evaluated. Results The results show that approximately 450 students from more than 110 colleges (accounting for 81% of colleges providing undergraduate clinical medical education in China) participated in the competition each year. Knowledge, skills, and attitudes were comprehensively evaluated by simulation-based assessment. Eight hundred and eighty questionnaires were distributed to 110 participating medical schools in 2015. In total, 752 valid responses were received from 95 schools. The majority of the interviewees agreed or strongly agreed that the competition promoted the adoption of advanced educational principles (76.8%), updated the curriculum model and instructional methods (79.8%), strengthened faculty development (84.0%), improved educational resources (82.1%), and benefited all students (53.4%). Conclusions The National Clinical Skills Competition is widely accepted in China. It has effectively promoted the reform and development of undergraduate medical education in China. PMID:26894586

  2. Task-based image quality evaluation of iterative reconstruction methods for low dose CT using computer simulations

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.

    2015-04-01

Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improving image quality or reducing the radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 down to 25% D0. A lesion of fixed size and contrast was inserted at different locations into the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed with the vendor-provided analytic (WFBP) and IR (SAFIRE) methods of a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany). The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal-performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of the different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low-dose protocol, lower than the standard dose due to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
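
    A hedged sketch of the observer-study machinery described (a channelized Hotelling observer with simple rotationally symmetric difference-of-Gaussian channels on synthetic white-noise images; the channel widths, lesion model, and noise are illustrative stand-ins for the reconstructed CT data):

        import numpy as np

        rng = np.random.default_rng(0)
        npix, ntrain = 64, 200

        # Rotationally symmetric difference-of-Gaussian channels (widths assumed)
        x = np.arange(npix) - npix / 2
        X, Y = np.meshgrid(x, x)
        r2 = X**2 + Y**2
        U = np.stack([np.exp(-r2 / (2 * w**2)) - np.exp(-r2 / (2 * (2 * w)**2))
                      for w in (2, 4, 8, 16)]).reshape(4, -1).T

        # Synthetic images: Gaussian lesion on white noise (stand-in for CT)
        signal = 0.5 * np.exp(-r2 / (2 * 3.0**2)).ravel()
        noise = lambda n: rng.normal(0, 1, (n, npix * npix))

        # Channelize training images; build the Hotelling template in channel space
        t_sig, t_bkg = (noise(ntrain) + signal) @ U, noise(ntrain) @ U
        S = 0.5 * (np.cov(t_sig.T) + np.cov(t_bkg.T))
        w = np.linalg.solve(S, t_sig.mean(0) - t_bkg.mean(0))

        # Decision variables on fresh test images; AUC via the Mann-Whitney statistic
        lam_sig = (noise(500) + signal) @ U @ w
        lam_bkg = noise(500) @ U @ w
        auc = (lam_sig[:, None] > lam_bkg[None, :]).mean()
        print("CHO AUC: %.3f" % auc)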

  3. A low numerical dissipation patch-based adaptive mesh refinement method for large-eddy simulation of compressible flows

    NASA Astrophysics Data System (ADS)

    Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.

    2007-01-01

    We present a methodology for the large-eddy simulation of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). A description of a conservative, flux-based hybrid numerical method that uses both centered finite-difference and a weighted essentially non-oscillatory (WENO) scheme is given, encompassing the cases of scheme alternation and internal mesh interfaces resulting from SAMR. In this method, the centered scheme is used in turbulent flow regions while WENO is employed to capture shocks. One-, two- and three-dimensional numerical experiments and example simulations are presented including homogeneous shock-free turbulence, a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability.

  4. Evaluation of methods for thermal management in a coal-based SOFC turbine hybrid through numerical simulation

    SciTech Connect

    Tucker, D.A.; VanOsdol, J.G.; Liese, E.A.; Lawson, L.; Zitney, S.E.; Gemmen, R.S.; Ford, J.C.; Haynes, C.

    2001-01-01

Managing the temperatures and heat transfer in the fuel cell of a solid oxide fuel cell (SOFC) gas turbine (GT) hybrid fired on coal syngas presents certain challenges over a natural-gas-based system, in that the latter can take advantage of internal reforming to offset heat generated in the fuel cell. Three coal-based SOFC/GT configuration designs for thermal management in the main power block are evaluated using steady-state numerical simulations developed in ASPEN Plus. A comparison is made on the basis of efficiency, operability issues and component integration. To focus on the effects of different power block configurations, the analysis assumes a consistent syngas composition in each case, and does not explicitly include gasification or syngas cleanup. A fuel cell module rated at 240 MW was used as a common basis for three different methods. Advantages and difficulties for each configuration are identified in the simulations.

  5. Occurrence and simulation of trihalomethanes in swimming pool water: A simple prediction method based on DOC and mass balance.

    PubMed

    Peng, Di; Saravia, Florencia; Abbt-Braun, Gudrun; Horn, Harald

    2016-01-01

Trihalomethanes (THM) are the most typical disinfection by-products (DBPs) found in public swimming pool water. DBPs are produced when organic and inorganic matter in water reacts with chemical disinfectants. The irregular contribution of substances from pool visitors and the long contact time with disinfectant make forecasting THM in pool water a challenge. In this work the occurrence of THM in a public indoor swimming pool was investigated and correlated with the dissolved organic carbon (DOC). Daily sampling of pool water for 26 days showed a positive correlation between DOC and THM with a time delay of about two days, while THM and DOC did not correlate directly with the number of visitors. Based on the results and a mass balance in the pool water, a simple simulation model for estimating the THM concentration in indoor swimming pool water was proposed. Formation of THM from DOC, volatilization into air and elimination by pool water treatment were included in the simulation. The THM formation ratio obtained from laboratory analysis of native pool water, together with information from a field study in an indoor swimming pool, reduced the uncertainty of the simulation. The simulation was validated by measurements in the swimming pool over 50 days. The simulated results were in good agreement with the measured results. This work provides a useful and simple method for predicting the THM concentration and its long-term accumulation trend in indoor swimming pool water.
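
    A minimal numerical sketch of the kind of mass balance described (formation from DOC, volatilization into air, elimination by treatment); the linear formation term and all rate constants are illustrative assumptions, not the paper's fitted values.

        import numpy as np

        # Illustrative rate constants (per day); not the paper's fitted values
        k_form, k_vol, k_treat = 0.02, 0.10, 0.15

        t = np.arange(0.0, 50.0, 0.1)                 # 50 days, like the validation period
        dt = t[1] - t[0]
        doc = 2.0 + 0.5 * np.sin(2 * np.pi * t / 7)   # weekly DOC pattern, mg/L
        thm = np.zeros_like(t)
        for i in range(1, t.size):
            # dTHM/dt = formation from DOC - (volatilization + treatment) losses
            dthm = k_form * doc[i - 1] - (k_vol + k_treat) * thm[i - 1]
            thm[i] = thm[i - 1] + dthm * dt

        print("quasi-steady THM after 50 d: %.1f ug/L" % (thm[-1] * 1000))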

  6. Numerical Simulation of Evacuation Process in Malaysia By Using Distinct-Element-Method Based Multi-Agent Model

    NASA Astrophysics Data System (ADS)

    Abustan, M. S.; Rahman, N. A.; Gotoh, H.; Harada, E.; Talib, S. H. A.

    2016-07-01

In Malaysia, little research on crowd evacuation simulation has been reported. Hence, the development of numerical crowd evacuation models that take into account people's behavioral patterns and psychological characteristics is crucial for Malaysia. Tsunami disasters began to gain the attention of Malaysian citizens after the 2004 Indian Ocean Tsunami, which demanded a quick evacuation process. In relation to the above circumstances, we have conducted simulations of the tsunami evacuation process at Miami Beach on Penang Island using a Distinct Element Method (DEM)-based crowd behavior simulator. The main objectives are to investigate and reproduce the current conditions of the evacuation process at the said location under different hypothetical scenarios in order to study the efficiency of the evacuation. Sim-1 represents the initial evacuation plan, while sim-2 improves on it by adding a new evacuation area. The simulation results show that sim-2 achieves a shorter evacuation time than sim-1, reducing it by 53 seconds. The effect of the additional evacuation place is confirmed by the decrease in the evacuation completion time. The numerical simulation may thus be promoted as an effective tool for studying crowd evacuation processes.

  7. Simulation of two-dimensional target motion based on a liquid crystal beam steering method

    NASA Astrophysics Data System (ADS)

    Lin, Yixiang; Ai, Yong; Shan, Xin; Liu, Min

    2015-05-01

A simulation platform is established for target motion using a liquid crystal (LC) spatial light modulator as a nonmechanical beam steering control device. By controlling the period and orientation of the phase grating generated by the spatial light modulator, the platform realizes two-dimensional (2-D) beam steering using a single LC device. The zenith and azimuth angles range from 0 deg to 2.89 deg and from 0 deg to 360 deg, respectively, with control resolutions of 0.0226 deg and 0.0300 deg, respectively. The response time of the beam steering is always less than 0.04 s, irrespective of steering angle. Three typical aircraft tracks are simulated to evaluate the performance of the simulation platform. The correlation coefficients between the theoretical and simulated motions are larger than 0.9822. Results show that it is highly feasible to realize 2-D target motion simulation using the LC spatial light modulator.
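
    The mapping from steering command to grating parameters follows the first-order grating equation, so the zenith angle is set by the grating period and the azimuth by its orientation. A small hedged sketch (the wavelength and pixel pitch are assumed values, not those of the paper's device):

        import numpy as np

        wavelength = 1.55e-6   # assumed laser wavelength, m
        pitch = 8e-6           # assumed SLM pixel pitch, m

        def steering(period_pixels, orientation_deg):
            # First-order grating equation: sin(zenith) = wavelength / period
            period = period_pixels * pitch
            zenith = np.degrees(np.arcsin(wavelength / period))
            return zenith, orientation_deg % 360

        # The shortest period (2 pixels) gives the largest zenith angle
        print("max zenith: %.2f deg" % steering(2, 0)[0])
        print("example (12-pixel period at 135 deg):", steering(12, 135))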

  8. Simulation of an orifice scrubber performance based on Eulerian/Lagrangian method.

    PubMed

    Mohebbi, A; Taheri, M; Fathikaljahi, J; Talaie, M R

    2003-06-27

A mathematical model based on an Eulerian/Lagrangian method has been developed to predict particle collection efficiency from a gas stream in an orifice scrubber. The model uses an Eulerian approach for particle dispersion, a Lagrangian approach for droplet movement, and the particle-source-in-cell (PSI-CELL) model for calculating the droplet concentration distribution. To compute the fluid velocity profiles, the standard k-epsilon turbulence model, including the body force due to drag between fluid and droplets, has been used. Experimental data of Taheri et al. [J. Air Pollut. Control Assoc. 23 (11) (1973) 963] have been used to test the results of the mathematical model. The results from the model are in good agreement with the experimental data. After validating the model, the effects of operating parameters such as the liquid-to-gas flow rate ratio, the gas velocity at the orifice opening, and the particle diameter on the collection efficiency were investigated.

  9. Genetic Algorithm Based Simulated Annealing Method for Solving Unit Commitment Problem in Utility System

    NASA Astrophysics Data System (ADS)

    Rajan, C. Christober Asir

    2010-10-01

The objective of this paper is to find a generation schedule that minimizes the total operating cost subject to a variety of constraints; that is, to find the optimal generating unit commitment in the power system for the next H hours. Genetic Algorithms (GAs) are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination and survival of the fittest. Here, the unit commitment schedule is coded as a string of symbols. An initial population of parent solutions is generated at random, with each schedule formed by committing all the units according to their initial status ("flat start"). The parents are obtained from a pre-defined set of solutions, i.e., each solution is adjusted to meet the requirements. Then, a random recommitment is carried out with respect to the units' minimum down times, and simulated annealing (SA) improves the solution. A 66-bus utility power system with twelve generating units in India demonstrates the effectiveness of the proposed approach. Numerical results compare the cost solutions and computation time obtained using the Genetic Algorithm method with those of other conventional methods.
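
    A compact, hedged sketch of the hybrid idea (GA selection and mutation with an SA-style acceptance step) on a toy unit-commitment cost; the cost model, loads, and parameters are invented for illustration and omit real constraints such as minimum up/down times.

        import math
        import random

        random.seed(1)
        N_UNITS, HORIZON = 4, 6
        cap = [100, 80, 50, 30]       # unit capacities, MW (illustrative)
        mcost = [8, 10, 14, 20]       # marginal costs, $/MWh (illustrative)
        fixed = [500, 300, 150, 80]   # hourly commitment costs, $ (illustrative)
        load = [180, 200, 160, 140, 190, 210]

        def total_cost(s):
            # s[h][u] = 1 if unit u is committed in hour h; shortages penalized
            c = 0.0
            for h in range(HORIZON):
                on = [u for u in range(N_UNITS) if s[h][u]]
                avail = sum(cap[u] for u in on)
                c += sum(fixed[u] for u in on)
                c += 1e4 * max(0, load[h] - avail)   # unserved-load penalty
                rem = min(load[h], avail)
                for u in sorted(on, key=lambda u: mcost[u]):  # merit-order dispatch
                    take = min(rem, cap[u])
                    c += take * mcost[u]
                    rem -= take
            return c

        def mutate(s):
            t = [row[:] for row in s]
            t[random.randrange(HORIZON)][random.randrange(N_UNITS)] ^= 1
            return t

        pop = [[[random.randint(0, 1) for _ in range(N_UNITS)]
                for _ in range(HORIZON)] for _ in range(20)]
        temp = 1e4
        for gen in range(500):
            pop.sort(key=total_cost)                  # GA: rank the population
            child = mutate(random.choice(pop[:10]))   # select a parent, mutate
            delta = total_cost(child) - total_cost(pop[-1])
            if delta < 0 or random.random() < math.exp(-delta / temp):
                pop[-1] = child                       # SA-style acceptance
            temp *= 0.99                              # cooling schedule
        print("best cost: $%.0f" % total_cost(min(pop, key=total_cost)))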

  10. Aerothermoelastic modeling and simulation of aerospace vehicles using particle-based methods

    NASA Astrophysics Data System (ADS)

    Mason, Matthew Scott

As hypersonic aerospace vehicles are designed to ever higher performance specifications using lighter-weight, higher-strength materials, fluid-structure interaction (FSI) effects become increasingly important to model, especially considering the increasing use of numerical models in many phases of design. When a fluid flows over a solid, a force is imparted on the solid and the solid deforms. This deformation, in turn, changes the fluid flow field, which modifies the force distribution on the structure. This FSI-induced deformation is a primary area of study within the field of aeroelasticity. To further complicate the matter, thermodynamic and chemical effects are vitally important to model in the hypersonic flow regime. Traditionally, two separate numerical models are used for the fluid and solid phases, and a coupling algorithm accomplishes the task of modeling FSI. Coupling between the two solvers introduces numerical inaccuracies and inefficiencies, and for many mesh-based solvers large deformations cannot be modeled. For this research, a combined Eulerian grid-based and Lagrangian particle-based solver known as the Material Point Method (MPM) is introduced and defined from prior research by others, and the particular MPM numerical code utilized in this research is outlined. The code combines the two separate solvers into a single numerical algorithm with separate constitutive relations for the fluid and solid phases, thereby allowing FSI modeling within a single computational framework. A limiter is applied to reduce numerical noise and oscillations around shock and expansion waves and exhibits a large reduction in oscillation amplitude and frequency. A Fourier's Law of Conduction heat transfer algorithm is implemented for heat transfer at a fluid-structure interface. The results from this heat transfer algorithm are compared with an independently developed numerical code for the single ramp case and experimental data for the double cone

  11. Effects of Simulated Marker Placement Deviations on Running Kinematics and Evaluation of a Morphometric-Based Placement Feedback Method

    PubMed Central

    Osis, Sean T.; Hettinga, Blayne A.; Macdonald, Shari; Ferber, Reed

    2016-01-01

    In order to provide effective test-retest and pooling of information from clinical gait analyses, it is critical to ensure that the data produced are as reliable as possible. Furthermore, it has been shown that anatomical marker placement is the largest source of inter-examiner variance in gait analyses. However, the effects of specific, known deviations in marker placement on calculated kinematic variables are unclear, and there is currently no mechanism to provide location-based feedback regarding placement consistency. The current study addresses these disparities by: applying a simulation of marker placement deviations to a large (n = 411) database of runners; evaluating a recently published method of morphometric-based deviation detection; and pilot-testing a system of location-based feedback for marker placements. Anatomical markers from a standing neutral trial were moved virtually by up to 30 mm to simulate deviations. Kinematic variables during running were then calculated using the original, and altered static trials. Results indicate that transverse plane angles at the knee and ankle are most sensitive to deviations in marker placement (7.59 degrees of change for every 10 mm of marker error), followed by frontal plane knee angles (5.17 degrees for every 10 mm). Evaluation of the deviation detection method demonstrated accuracies of up to 82% in classifying placements as deviant. Finally, pilot testing of a new methodology for providing location-based feedback demonstrated reductions of up to 80% in the deviation of outcome kinematics. PMID:26765846

  12. Applying Synchronous Methods during the Development of an Online Classroom-Based Simulation

    ERIC Educational Resources Information Center

    Ferry, Brian; Kervin, Lisa

    2006-01-01

    Purpose: The purpose of this paper is to report the impact of an online simulation that was designed to provide pre-service teachers with experience in dealing with complex classroom situations associated with the teaching of literacy. Design/methodology/approach: A developmental approach to the research was used. This is also known as "design…

  13. A simulation based method to assess inversion algorithms for transverse relaxation data

    NASA Astrophysics Data System (ADS)

    Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong

    2008-04-01

NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) at low field. Most samples have a distribution of T2 values, and extracting this distribution from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to solve this problem to date. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of one inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285], was performed by means of simulated CPMG data generation. Through our simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We converged to the true distribution by matching the inversion results from a series of noise-free and noisy simulated decay data. In addition to the simulation studies, the same approach was also applied to real experimental data to support the simulation results.
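
    A hedged sketch of the simulation side of such a study: generate a CPMG decay from a known T2 distribution, add Gaussian noise, and invert it. The inversion below is a generic Tikhonov-regularized non-negative least squares, a simple stand-in for UPEN's more sophisticated uniform-penalty scheme.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(2)
        t = np.linspace(2e-3, 2.0, 500)          # echo times, s
        T2 = np.logspace(-3, 0.5, 80)            # T2 grid, s
        K = np.exp(-t[:, None] / T2[None, :])    # multiexponential kernel

        # True distribution: two log-normal peaks; simulate a noisy CPMG decay
        f_true = (np.exp(-0.5 * ((np.log(T2) - np.log(0.05)) / 0.3) ** 2)
                  + 0.6 * np.exp(-0.5 * ((np.log(T2) - np.log(0.5)) / 0.25) ** 2))
        data = K @ f_true + rng.normal(0, 0.01, t.size)

        # Tikhonov-regularized NNLS: augment K with sqrt(alpha) * identity
        alpha = 0.1
        K_aug = np.vstack([K, np.sqrt(alpha) * np.eye(T2.size)])
        d_aug = np.concatenate([data, np.zeros(T2.size)])
        f_est, _ = nnls(K_aug, d_aug)

        print("recovered dominant T2: %.1f ms" % (T2[np.argmax(f_est)] * 1e3))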

  14. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    SciTech Connect

    Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.

    2015-03-01

We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate the convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedies the difficulties of local energy conservation and mitigates statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method and compare it directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.

  15. Simulation modeling based method for choosing an effective set of fault tolerance mechanisms for real-time avionics systems

    NASA Astrophysics Data System (ADS)

    Bakhmurov, A. G.; Balashov, V. V.; Glonina, A. B.; Pashkov, V. N.; Smeliansky, R. L.; Volkanov, D. Yu.

    2013-12-01

In this paper, the reliability allocation problem (RAP) for real-time avionics systems (RTAS) is considered. The proposed method for solving this problem consists of two steps: (i) creation of an RTAS simulation model at the necessary level of abstraction and (ii) application of a metaheuristic algorithm to find an optimal solution (i.e., to choose an optimal set of fault tolerance techniques). When, during algorithm execution, it is necessary to measure the execution time of software components, simulation modeling is applied. The simulation modeling procedure also consists of two steps: automatic construction of a simulation model of the RTAS configuration, and running this model in a simulation environment to measure the required time. This method was implemented as an experimental software tool that works in cooperation with the DYANA simulation environment. The results of experiments with the implemented method are presented. Finally, future plans for the development of the presented method and tool are briefly described.

  16. Simulation of magnetization process of Pure-type superconductor magnet undulator based on T-method

    NASA Astrophysics Data System (ADS)

    Deri, Yi; Kawaguchi, Hideki; Tsuchimoto, Masanori; Tanaka, Takashi

    2015-11-01

For the next-generation Free Electron Laser, a Pure-type undulator made of high-Tc superconductors (HTSs) has been considered as a way to achieve a small, high-intensity-field undulator. In general, it is very difficult to adjust the undulator magnet alignment after HTS magnetization, since the entire undulator is installed inside a cryostat; the appropriate HTS alignment therefore has to be determined at the design stage. This paper presents the development of a numerical simulation code for the magnetization process of the Pure-type HTS undulator to assist in the design of the optimal size and alignment of the HTS magnets.

  17. Finite analytic method based on mixed-form Richards' equation for simulating water flow in vadose zone

    NASA Astrophysics Data System (ADS)

    Zhang, Zaiyong; Wang, Wenke; Yeh, Tian-chyi Jim; Chen, Li; Wang, Zhoufeng; Duan, Lei; An, Kedong; Gong, Chengcheng

    2016-06-01

In this paper, we develop a finite analytic method (FAMM), which combines the flexibility of numerical methods with the advantages of analytical solutions, to solve the mixed-form Richards' equation. This new approach minimizes the mass balance errors and truncation errors associated with most numerical approaches. We use numerical experiments to demonstrate that FAMM obtains more accurate numerical solutions and controls the global mass balance better than the modified Picard finite difference method (MPFD) when compared against analytical solutions. In addition, FAMM is superior to the finite analytic method based on the head-based Richards' equation (FAMH). Furthermore, FAMM solutions are compared to analytical solutions for wetting and drying processes in Brindabella Silty Clay Loam and Yolo Light Clay soils. Finally, we demonstrate that FAMM yields results comparable to those from MPFD and Hydrus-1D for simulating infiltration into other soils under wet and dry conditions. These numerical experiments further confirm that, as long as a hydraulic constitutive model captures the general behavior of other models, it can be used to yield comparable flow fields.
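
    For reference, the mixed (theta-h) form of Richards' equation that FAMM discretizes can be written, in one vertical dimension, as

        \frac{\partial \theta(h)}{\partial t} = \frac{\partial}{\partial z}\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right],

    where \theta is the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity, and z the vertical coordinate (positive upward). Advancing \theta in time while evaluating gradients of h is what gives the mixed form its mass-conservation advantage over the purely head-based (FAMH) form.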

  18. Continuous surface force based lattice Boltzmann equation method for simulating thermocapillary flow

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Zheng, Song; Zhai, Qinglan

    2016-02-01

In this paper, we extend a lattice Boltzmann equation (LBE) method with continuous surface force (CSF) to simulate thermocapillary flows. The model builds on our previous CSF LBE for athermal two-phase flow, in which the interfacial tension forces and the Marangoni stresses resulting from interface interactions between different phases are described by the CSF concept. In this model, the sharp interfaces between different phases are replaced by narrow transition layers, and the kinetics and morphology evolution of phase separation are characterized by an order parameter via the Cahn-Hilliard equation, which is solved within the LBE framework. The scalar convection-diffusion equation for the temperature field is solved by a thermal LBE. The model is validated against thermal two-layered Poiseuille flow, and against two superimposed planar fluids at negligibly small Reynolds and Marangoni numbers for thermocapillary-driven convection, both of which have analytical solutions for the velocity and temperature. Thermocapillary migration of two- and three-dimensional deformable droplets is then simulated. Numerical results show that the predictions of the present LBE agree with the analytical solutions and other numerical results.

  19. Large Eddy simulation of compressible flows with a low-numerical dissipation patch-based adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Pantano, Carlos

    2005-11-01

We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and, as such, it utilizes refinement to computational advantage. The numerical method for the resolved-scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions, while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model, Misra & Pullin (1997)

  20. Method for estimation of structural composition of skin layers based on light propagation simulation for liposuction applications.

    PubMed

    Song, Sangha; Elguezua, Inko; Kobayashi, Yo; Fujie, Masakatsu G

    2014-01-01

    Skin surface irregularity is the most common side effect after liposuction. To reduce this, it is necessary to devise a systematic method to provide structural composition details of skin layers, such as fat thickness and fat boundary tilt angle, for the plastic surgeon. Several commercial portable devices are available to measure skin layer information, working on the principle of a near-infrared technique using the light penetration properties of tissue in optical windows. However, these can only measure general fat thickness and not the structural compositions of skin layers with irregularities. Therefore, our goal in this paper is to propose a method to estimate the structural compositions of skin layers by analyzing and validating the relationship between light distribution and structural composition from simulation data based on specific structural conditions.

  1. Finger milling-cutter CNC generating hypoid pinion tooth surfaces based on modified-roll method and machining simulation

    NASA Astrophysics Data System (ADS)

    Li, Genggeng; Deng, Xiaozhong; Wei, Bingyang; Lei, Baozhen

    2011-05-01

The two coordinate systems of the cradle-type hypoid generator and the free-form CNC machine tool, which use a disc milling-cutter to generate hypoid pinion tooth surfaces based on the modified-roll method, were set up, and the principle and method for transforming machine-tool settings between the two coordinate systems were studied. A finger milling-cutter was mounted in place of the imagined disc milling-cutter, and its motion was controlled directly by the CNC axes to reproduce the effective cutting motion of the disc milling-cutter blades. Finger milling-cutter generation accomplished by ordered circular interpolation was determined, and the interpolation center, starting point and ending point were worked out. Finally, a hypoid pinion was virtually machined using the CNC machining simulation software VERICUT.

  2. Finger milling-cutter CNC generating hypoid pinion tooth surfaces based on modified-roll method and machining simulation

    NASA Astrophysics Data System (ADS)

    Li, Genggeng; Deng, Xiaozhong; Wei, Bingyang; Lei, Baozhen

    2010-12-01

The two coordinate systems of the cradle-type hypoid generator and the free-form CNC machine tool, which use a disc milling-cutter to generate hypoid pinion tooth surfaces based on the modified-roll method, were set up, and the principle and method for transforming machine-tool settings between the two coordinate systems were studied. A finger milling-cutter was mounted in place of the imagined disc milling-cutter, and its motion was controlled directly by the CNC axes to reproduce the effective cutting motion of the disc milling-cutter blades. Finger milling-cutter generation accomplished by ordered circular interpolation was determined, and the interpolation center, starting point and ending point were worked out. Finally, a hypoid pinion was virtually machined using the CNC machining simulation software VERICUT.

  3. An Optimization-oriented Simulation-based Job Shop Scheduling Method with Four Parameters Using Pattern Search

    NASA Astrophysics Data System (ADS)

    Arakawa, Masahiro; Fuyuki, Masahiko; Inoue, Ichiro

Aiming at the elimination of tardy jobs in a job shop production schedule, an optimization-oriented simulation-based scheduling (OSBS) method incorporating a capacity adjustment function is proposed. In order to determine the pertinent additional capacities and to control job allocations simultaneously, the proposed method incorporates the parameter-space search improvement (PSSI) method into the scheduling procedure. In previous papers, we introduced four parameters: two of them control the upper limit on the additional capacity and the balance of the capacity distribution among machines, while the others control the job allocation procedure. We found that a 'direct' optimization procedure using the enumeration method produces a best solution with practical significance, but it takes too much computation time for practical use. In this paper, we propose a new method that adopts a pattern search method in the schedule generation procedure to obtain an approximate optimal solution. The computation time is found to become short enough for practical use. Moreover, extending the parameter domain yields an approximate optimal solution that is better than the best solution obtained by the 'direct' optimization.

  4. Cluster-based computational methods for mass univariate analyses of event-related brain potentials/fields: A simulation study

    PubMed Central

    Pernet, C.R.; Latinus, M.; Nichols, T.E.; Rousselet, G.A.

    2015-01-01

Background In recent years, analyses of event-related potentials/fields have moved from the selection of a few components and peaks to a mass-univariate approach in which the whole data space is analyzed. Such extensive testing increases the number of false positives, and correction for multiple comparisons is needed. Method Here we review cluster-based corrections for multiple comparisons (cluster-height, cluster-size, cluster-mass, and threshold-free cluster enhancement – TFCE), in conjunction with two computational approaches (permutation and bootstrap). Results Data-driven Monte Carlo simulations comparing two conditions within subjects (two-sample Student's t-test) showed that, on average, all cluster-based methods using permutation or bootstrap alike control the family-wise error rate (FWER) well, with a few caveats. Conclusions (i) A minimum of 800 iterations is necessary to obtain stable results; (ii) below 50 trials, bootstrap methods are too conservative; (iii) for low critical family-wise error rates (e.g. p = 1%), permutations can be too liberal; (iv) TFCE controls the type I error rate best with an attenuated extent parameter (i.e. power < 1). PMID:25128255
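
    A hedged sketch of one of the reviewed corrections (cluster-mass with permutation) on synthetic one-dimensional ERP-like data; the effect size, trial count, and cluster-forming threshold are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n_trials, n_time = 60, 200
        # Two within-subject conditions; B carries an effect at samples 80-120
        a = rng.normal(0, 1, (n_trials, n_time))
        b = rng.normal(0, 1, (n_trials, n_time))
        b[:, 80:120] += 0.6

        def cluster_masses(x, y, thr):
            # Sum |t| within contiguous supra-threshold runs (cluster mass)
            tvals = np.abs(stats.ttest_rel(x, y, axis=0).statistic)
            masses, m = [], 0.0
            for above, tv in zip(tvals > thr, tvals):
                if above:
                    m += tv
                elif m:
                    masses.append(m)
                    m = 0.0
            if m:
                masses.append(m)
            return masses or [0.0]

        thr = stats.t.ppf(0.975, n_trials - 1)   # cluster-forming threshold
        observed = max(cluster_masses(a, b, thr))

        # Permutation null: randomly swap condition labels within subjects
        null = []
        for _ in range(800):                     # the minimum the paper recommends
            flip = rng.random((n_trials, 1)) < 0.5
            null.append(max(cluster_masses(np.where(flip, a, b),
                                           np.where(flip, b, a), thr)))
        p = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
        print("largest cluster mass %.1f, FWER-corrected p = %.3f" % (observed, p))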

  5. Numerical simulation and experimental validation of biofilm in a multi-physics framework using an SPH based method

    NASA Astrophysics Data System (ADS)

    Soleimani, Meisam; Wriggers, Peter; Rath, Henryke; Stiesch, Meike

    2016-10-01

In this paper, a 3D computational model has been developed to investigate biofilms in a multi-physics framework using smoothed particle hydrodynamics (SPH) based on a continuum approach. Biofilm formation is a complex process in the sense that several physical phenomena are coupled and consequently different time-scales are involved. On one hand, biofilm growth is driven by biological reaction and nutrient diffusion; on the other hand, it is influenced by fluid flow causing biofilm deformation and interface erosion in the context of fluid and deformable-solid interaction. The geometrical and numerical complexity arising from these phenomena poses serious challenges for grid-based techniques such as finite elements. Here the solution is based on SPH, one of the most powerful meshless methods. SPH-based computational modeling is quite new in the biological community, and the method is uniquely robust in capturing the interface-related processes of biofilm formation such as erosion. The obtained results show good agreement with experimental and published data, which demonstrates that the model is capable of simulating and predicting the overall spatial and temporal evolution of biofilms.

  6. Numerical simulation and experimental validation of biofilm in a multi-physics framework using an SPH based method

    NASA Astrophysics Data System (ADS)

    Soleimani, Meisam; Wriggers, Peter; Rath, Henryke; Stiesch, Meike

    2016-06-01

In this paper, a 3D computational model has been developed to investigate biofilms in a multi-physics framework using smoothed particle hydrodynamics (SPH) based on a continuum approach. Biofilm formation is a complex process in the sense that several physical phenomena are coupled and consequently different time-scales are involved. On one hand, biofilm growth is driven by biological reaction and nutrient diffusion; on the other hand, it is influenced by fluid flow causing biofilm deformation and interface erosion in the context of fluid and deformable-solid interaction. The geometrical and numerical complexity arising from these phenomena poses serious challenges for grid-based techniques such as finite elements. Here the solution is based on SPH, one of the most powerful meshless methods. SPH-based computational modeling is quite new in the biological community, and the method is uniquely robust in capturing the interface-related processes of biofilm formation such as erosion. The obtained results show good agreement with experimental and published data, which demonstrates that the model is capable of simulating and predicting the overall spatial and temporal evolution of biofilms.

  7. Comparison of Ensemble Kalman Filter groundwater-data assimilation methods based on stochastic moment equations and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2014-04-01

    Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
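
    A minimal sketch of the MC-based EnKF analysis step that the ME approach is compared against (a toy linear observation of one state component, not the authors' groundwater model):

        import numpy as np

        rng = np.random.default_rng(4)
        n_state, n_ens = 20, 100

        # Forecast ensemble (e.g., heads/log-conductivities) and a linear
        # observation operator that picks out one head value (illustrative)
        X = rng.normal(0, 1, (n_state, n_ens))
        H = np.zeros((1, n_state))
        H[0, 5] = 1.0
        R = np.array([[0.01]])        # observation-error covariance
        y_obs = np.array([0.8])

        def enkf_update(X, H, R, y_obs):
            # Kalman gain from ensemble covariances: K = Cxy (Cyy + R)^-1
            Xp = X - X.mean(axis=1, keepdims=True)
            Y = H @ X
            Yp = Y - Y.mean(axis=1, keepdims=True)
            Cxy = Xp @ Yp.T / (n_ens - 1)
            Cyy = Yp @ Yp.T / (n_ens - 1)
            K = Cxy @ np.linalg.inv(Cyy + R)
            # Perturbed observations keep the analysis-ensemble spread consistent
            Y_pert = y_obs[:, None] + rng.normal(0, np.sqrt(R[0, 0]), (1, n_ens))
            return X + K @ (Y_pert - Y)

        Xa = enkf_update(X, H, R, y_obs)
        print("prior mean at obs: %.3f, posterior: %.3f" % (X[5].mean(), Xa[5].mean()))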

  8. Spectral-Element Simulations of Wave Propagation in Porous Media: Finite-Frequency Sensitivity Kernels Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Morency, C.; Tromp, J.

    2008-12-01

The mathematical formulation of wave propagation in porous media developed by Biot is based upon the principle of virtual work, ignores processes at the microscopic level, and does not explicitly incorporate gradients in porosity. Based on recent studies focusing on averaging techniques, we derive the macroscopic porous medium equations from the microscale, with particular emphasis on the effects of gradients in porosity. In doing so, we are able to naturally determine two key terms in the momentum equations and constitutive relationships that directly translate the coupling between the solid and fluid phases, namely a drag force and an interfacial strain tensor. In both terms, gradients in porosity arise. One remarkable result is that when we rewrite this set of equations in terms of the well-known Biot variables (u_s, w), terms involving gradients in porosity are naturally accommodated by gradients involving w, the fluid motion relative to the solid, and Biot's formulation is recovered, i.e., it remains valid in the presence of porosity gradients. We have developed a numerical implementation of the Biot equations for two-dimensional problems based upon the spectral-element method (SEM) in the time domain. The SEM is a high-order variational method, which has the advantage of accommodating complex geometries like a finite-element method, while keeping the exponential convergence rate of (pseudo)spectral methods. As in the elastic and acoustic cases, poroelastic wave propagation based upon the SEM involves a diagonal mass matrix, which leads to explicit time integration schemes that are well-suited to simulations on parallel computers. Effects associated with physical dispersion and attenuation and frequency-dependent viscous resistance are addressed by using a memory variable approach. Various benchmarks involving poroelastic wave propagation in the high- and low-frequency regimes, and acoustic-poroelastic and poroelastic-poroelastic discontinuities have been

  9. Numerical biaxial tensile test for sheet metal forming simulation of aluminium alloy sheets based on the homogenized crystal plasticity finite element method

    NASA Astrophysics Data System (ADS)

    Yamanaka, A.; Ishii, Y.; Hakoyama, T.; Eyckens, P.; Kuwabara, T.

    2016-08-01

The simulation of the stretch forming of A5182-O aluminum alloy sheet with a spherical punch is performed using the crystal plasticity (CP) finite element method based on the mathematical homogenization theory. In the simulation, the CP constitutive equations and their parameters, calibrated by numerical and experimental biaxial tensile tests with a cruciform specimen, are used. The simulated sheet thickness distribution shows relatively good agreement with the experimental results.

  10. System simulation method for fiber-based homodyne multiple target interferometers using short coherence length laser sources

    NASA Astrophysics Data System (ADS)

    Fox, Maik; Beuth, Thorsten; Streck, Andreas; Stork, Wilhelm

    2015-09-01

Homodyne laser interferometers for velocimetry are well-known optical systems used in many applications. While the detector power output signal of such a system, using a long coherence length laser and a single target, is easily modelled using the Doppler shift, scenarios with a short coherence length source, e.g. an unstabilized semiconductor laser, and multiple weak targets demand a more elaborate approach to simulation. Especially when fiber components are used, the actual setup is an important factor for system performance, as effects like return losses and multiple-path propagation have to be taken into account. If the power received from the targets is of the same order as the stray light created in the fiber setup, a complete system simulation becomes a necessity. In previous work, a phasor-based signal simulation approach for interferometers based on short coherence length laser sources was evaluated. To facilitate the use of the signal simulation, a fiber component ray tracer has since been developed that allows the creation of input files for the signal simulation environment. The software uses object-oriented MATLAB code, simplifying the entry of different fiber setups and the extension of the ray tracer. Thus, a seamless path from a system description based on arbitrarily interconnected fiber components to a signal simulation for different target scenarios has been established. The ray tracer and signal simulation are being used for the evaluation of interferometer concepts incorporating delay lines to compensate for short coherence length.

  11. On the direct numerical simulation of moderate-Stokes-number turbulent particulate flows using algebraic-closure-based and kinetic-based moments methods

    NASA Astrophysics Data System (ADS)

    Vie, Aymeric; Masi, Enrica; Simonin, Olivier; Massot, Marc; EM2C/Ecole Centrale Paris Team; IMFT Team

    2012-11-01

To simulate particulate flows, a convenient formalism for HPC is to use Eulerian moment methods, which describe the evolution of velocity moments instead of tracking the number density function (NDF) of the droplets directly. By using a conditional PDF approach, the Mesoscopic Eulerian Formalism (MEF) of Février et al. 2005 offers a solution for the direct numerical simulation of turbulent particulate flows, even at relatively high Stokes number. Here, we compare two existing approaches used to solve this formalism: the Algebraic-Closure-Based Moment method (Kaufmann et al. 2008, Masi et al. 2011) and the Kinetic-Based Moment Method (Yuan et al. 2010, Chalons et al. 2010, Vié et al. 2012). The goal of the current work is therefore to evaluate both strategies in turbulent test cases. For the ACBMM, viscosity-type and non-linear closures are envisaged, whereas for the KBMM, isotropic and anisotropic closures are investigated. A key aspect of the methodology is that the same numerical methods are used for both approaches. Results show that the new non-linear closure and the Anisotropic Gaussian closure are both accurate in shear flows, whereas viscosity-type and isotropic closures lead to erroneous results.

  12. Global approach for transient shear wave inversion based on the adjoint method: a comprehensive 2D simulation study.

    PubMed

    Arnal, B; Pinton, G; Garapon, P; Pernot, M; Fink, M; Tanter, M

    2013-10-01

Shear wave imaging (SWI) maps soft tissue elasticity by measuring shear wave propagation with ultrafast ultrasound acquisitions (10,000 frames/s). This spatiotemporal data can be used as an input for an inverse problem that determines a shear modulus map. Common inversion methods are local: the shear modulus at each point is calculated based on the values of its neighbours (e.g. time-of-flight, wave equation inversion). However, these approaches are sensitive to information loss such as noise or the lack of backscattered signal. In this paper, we evaluate the benefits of a global approach to elasticity inversion using a least-squares formulation, derived from full waveform inversion in geophysics and known as the adjoint method. We simulate an acoustic waveform in a medium with a soft and a hard lesion. For this initial application, full elastic propagation and viscosity are ignored. We demonstrate that the reconstruction of the shear modulus map is robust with a non-uniform background or in the presence of noise with regularization. Compared to regular local inversions, the global approach leads to an increase in contrast (∼3 dB) and a decrease in the quantification error (∼2%). We demonstrate that the inversion is reliable in the case when there is no signal measured within the inclusions, as for hypoechoic lesions, which could have an impact on medical diagnosis.
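
    Schematically, the global approach minimizes a least-squares waveform misfit rather than inverting point by point. With a modeled field u depending on the shear modulus map \mu and measured data d at pixels \mathbf{x}_r, the cost function takes the generic full-waveform-inversion form

        J(\mu) = \frac{1}{2} \sum_{r} \int_0^T \left| u(\mathbf{x}_r, t; \mu) - d(\mathbf{x}_r, t) \right|^2 \mathrm{d}t,

    and the adjoint method supplies the full gradient \nabla_\mu J from only two simulations per iteration (one forward run and one residual-driven adjoint run propagated backward in time), instead of one simulation per parameter. The exact gradient kernel depends on the wave equation used; this is the generic form, not the paper's specific discretization.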

  13. Bootstrapping Methods Applied for Simulating Laboratory Works

    ERIC Educational Resources Information Center

    Prodan, Augustin; Campean, Remus

    2005-01-01

    Purpose: The aim of this work is to implement bootstrapping methods into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical methods to infer the truth from sample…
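
    As a hedged illustration of the kind of routine such simulation e-tools implement (a generic percentile bootstrap in Python, not the authors' Java implementation; the sample values are invented):

        import random

        random.seed(5)
        sample = [4.1, 5.3, 3.8, 4.9, 5.1, 4.4, 4.7, 5.6, 4.2, 4.8]

        def bootstrap_ci(data, n_boot=10000, level=0.95):
            # Resample with replacement, collect the statistic, take percentiles
            means = sorted(
                sum(random.choices(data, k=len(data))) / len(data)
                for _ in range(n_boot)
            )
            lo = means[int((1 - level) / 2 * n_boot)]
            hi = means[int((1 + level) / 2 * n_boot)]
            return lo, hi

        print("95%% bootstrap CI for the mean: (%.2f, %.2f)" % bootstrap_ci(sample))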

  14. Simulation of optimal arctic routes using a numerical sea ice model based on an ice-coupled ocean circulation method

    NASA Astrophysics Data System (ADS)

    Nam, Jong-Ho; Park, Inha; Lee, Ho Jin; Kwon, Mi Ok; Choi, Kyungsik; Seo, Young-Kyo

    2013-06-01

Ever since the Arctic region opened its mysterious passage to mankind, continuous attempts have been made to take advantage of the fastest route across the region. The Arctic is still covered by thick ice, so finding a feasible navigation route is essential for an economical voyage. To find the optimal route, it is necessary to establish an efficient transit model that enables every possible route to be simulated in advance. In this work, an enhanced algorithm to determine the optimal route in the Arctic region is introduced. A transit model is developed based on numerically modeled sea ice and environmental data for the Arctic. By integrating the simulated data into the transit model, further applications such as route simulation, cost estimation or hindcasting can easily be performed. An interactive simulation system that determines the optimal Arctic route using the transit model is developed. The simulation of optimal routes is carried out and the validity of the results is discussed.

  15. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    PubMed

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimal function expression that describes the behavior of the time series. In order to deal with the weaknesses of the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the optimization performance; the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation, and the effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and that the forecasting precision in the presence of noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.

  16. Simulation method for evaluating progressive addition lenses.

    PubMed

    Qin, Linling; Qian, Lin; Yu, Jingchi

    2013-06-20

Since progressive addition lenses (PALs) are currently the state of the art in multifocal correction for presbyopia, it is important to study methods for evaluating PALs. A nonoptical simulation method used to accurately characterize PALs during the design and optimization process is proposed in this paper. It involves the direct calculation of each surface of the lens according to the lens heights of the front and rear surfaces. The validity of this simulation method for the evaluation of PALs is verified by its good agreement with the Rotlex method. In particular, the simulation with a "correction action" included in the design process is potentially a useful method, with the advantages of time-saving, convenience, and accuracy. Based on an eye-plus-lens model, established through an accurate ray tracing calculation along the gaze direction, the method can find excellent application in evaluating wearer performance for the optimal design of more comfortable, satisfactory, and personalized PALs. PMID:23842170

  17. A low-numerical dissipation, patch-based adaptive-mesh-refinement method for large-eddy simulation of compressible flows

    NASA Astrophysics Data System (ADS)

    Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.

    2006-09-01

    This paper describes a hybrid finite-difference method for the large-eddy simulation of compressible flows with low-numerical dissipation and structured adaptive mesh refinement (SAMR). A conservative flux-based approach is described with an explicit centered scheme used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. Three-dimensional numerical simulations of a Richtmyer-Meshkov instability are presented.

  18. A Parallel Adaptive Finite Element Method for the Simulation of Photon Migration with the Radiative-Transfer-Based Model

    PubMed Central

    Lu, Yujie; Chatziioannou, Arion F.

    2009-01-01

Whole-body optical molecular imaging of mouse models in preclinical research has been developing rapidly in recent years. In this context, it is essential to develop novel simulation methods for light propagation in optical imaging, especially when a priori knowledge, a large-volume domain and a wide range of optical properties need to be considered in the reconstruction algorithm. In this paper, we propose a three-dimensional parallel adaptive finite element method with the simplified spherical harmonics (SPN) approximation to simulate optical photon propagation in large volumes of heterogeneous tissue. The simulation speed is significantly improved by a posteriori parallel adaptive mesh refinement and dynamic mesh repartitioning. Compared with the diffusion equation and Monte Carlo methods, the SPN method shows improved performance and demonstrates the necessity of high-order approximation in heterogeneous domains. Optimal solver selection and time-cost analysis in a real mouse geometry further improve the performance of the proposed algorithm and show the superiority of the proposed parallel adaptive framework for whole-body optical molecular imaging in murine models. PMID:20052300

  19. A Parallel Adaptive Finite Element Method for the Simulation of Photon Migration with the Radiative-Transfer-Based Model.

    PubMed

    Lu, Yujie; Chatziioannou, Arion F

    2009-01-01

Whole-body optical molecular imaging of mouse models in preclinical research has been developing rapidly in recent years. In this context, it is essential to develop novel simulation methods for light propagation in optical imaging, especially when a priori knowledge, a large-volume domain and a wide range of optical properties need to be considered in the reconstruction algorithm. In this paper, we propose a three-dimensional parallel adaptive finite element method with the simplified spherical harmonics (SP(N)) approximation to simulate optical photon propagation in large volumes of heterogeneous tissue. The simulation speed is significantly improved by a posteriori parallel adaptive mesh refinement and dynamic mesh repartitioning. Compared with the diffusion equation and Monte Carlo methods, the SP(N) method shows improved performance and demonstrates the necessity of high-order approximation in heterogeneous domains. Optimal solver selection and time-cost analysis in a real mouse geometry further improve the performance of the proposed algorithm and show the superiority of the proposed parallel adaptive framework for whole-body optical molecular imaging in murine models.

  20. Jacobian Free-Newton Krylov Discontinuous Galerkin Method and Physics-Based Preconditioning for Nuclear Reactor Simulations

    SciTech Connect

    HyeongKae Park; R. Nourgaliev; Richard C. Martineau; Dana A. Knoll

    2008-09-01

Multidimensional, higher-order (2nd and higher) numerical methods have come to the forefront in recent years due to significant advances in computer technology and numerical algorithms, and have shown great potential as viable design tools for realistic applications. To achieve this goal, implicit, high-order accurate coupling of the multiphysics simulations is a critical component. One of the issues that arises in multiphysics simulation is the necessity to resolve multiple time scales. For example, the dynamical time scales of neutron kinetics, fluid dynamics and heat conduction differ significantly (typically by more than 10 orders of magnitude), with the dominant (fastest) physical mode also changing during the course of a transient [Pope and Mousseau, 2007]. This leads to severe time step restrictions for stability in traditional multiphysics (i.e. operator-split, semi-implicit discretization) simulations. Lower-order methods suffer from undesirable numerical dissipation. Thus an implicit, higher-order accurate scheme is necessary to perform seamlessly coupled multiphysics simulations that can be used to analyze "what-if" regulatory accident scenarios, or to design and optimize engineering systems.
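
    A hedged sketch of the Jacobian-free idea on a small nonlinear system: the Krylov solver only ever needs Jacobian-vector products, which are approximated by finite differences of the residual, so the Jacobian matrix is never formed. The two-point boundary-value residual below is a toy stand-in for coupled multiphysics residuals; SciPy's newton_krylov does the matrix-free work.

        import numpy as np
        from scipy.optimize import newton_krylov

        N = 100
        h = 1.0 / (N + 1)

        def residual(u):
            # Nonlinear Poisson problem u'' = exp(u), u(0) = u(1) = 0
            r = np.empty_like(u)
            r[0] = (u[1] - 2 * u[0]) / h**2 - np.exp(u[0])
            r[-1] = (u[-2] - 2 * u[-1]) / h**2 - np.exp(u[-1])
            r[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 - np.exp(u[1:-1])
            return r

        # newton_krylov approximates J*v by finite differences of `residual`,
        # so no Jacobian matrix is ever assembled (the "Jacobian-free" part)
        u = newton_krylov(residual, np.zeros(N), f_tol=1e-10)
        print("solution minimum: %.4f" % u.min())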

  1. Partially Averaged Navier-Stokes method based on k-ω model for simulating unsteady cavitating flows

    NASA Astrophysics Data System (ADS)

    Hu, C. L.; Wang, G. Y.; Wang, Z. Y.

    2015-01-01

The turbulence closure is significant for unsteady cavitating flow computations, as the flow is frequently time-dependent and accompanied by multiple scales of vortices. A turbulence bridging model named PANS (Partially-Averaged Navier-Stokes), formulated for arbitrary filter width, has been developed recently. The model filter width is controlled through two parameters: the unresolved-to-total ratios of kinetic energy fk and dissipation rate fω. In the present paper, the PANS method based on the k-ω model is used to simulate unsteady cavitating flows over a Clark-Y hydrofoil. The main objective of this work is to present the characteristics of the PANS k-ω model and to evaluate it against experimental data. The PANS k-ω model is implemented with various filter parameters (fk = 0.2-1, fω = 1/fk). Comparisons with the experimental data show that, as the filter parameter fk decreases, the PANS model can reasonably predict the time evolution of the cavity shapes and the fluctuations of the lift force in time. As the PANS model with smaller fk overcomes the over-prediction of turbulent kinetic energy of the original k-ω model, the time-averaged eddy viscosity at the rear of the attached cavity decreases and more physical turbulent fluctuations are resolved. Moreover, it is found that the value of ω in the free stream significantly affects the numerical results, such as the time-averaged cavity and the fluctuations of the lift coefficient. With decreasing fk, the sensitivity of the ω-equation to the free stream becomes much weaker.

  2. Dynamic light scattering-based method to determine primary particle size of iron oxide nanoparticles in simulated gastrointestinal fluid.

    PubMed

    Yang, Seung-Chul; Paik, Sae-Yeol-Rim; Ryu, Jina; Choi, Kyeong-Ok; Kang, Tae Seok; Lee, Jong Kwon; Song, Chi Won; Ko, Sanghoon

    2014-10-15

    Simple dynamic light scattering (DLS)-based methodologies were developed to determine the primary particle size distribution of iron oxide particles in simulated gastrointestinal fluid. Iron oxide particles, which easily agglomerate in aqueous media, were converted into dispersed particles by modifying the surface charge with citric acid and sodium citrate. After the modification, the zeta-potential value decreased to -40 mV at pH 7. Mean particle diameters in suspensions of iron oxide nano- and microparticles stabilized by the citric acid/sodium citrate mixture decreased dramatically, to 166 and 358 nm, respectively, close to the particle size distributions observed in the micrographs. In simulated gastrointestinal fluid, both iron oxide nano- and microparticles agglomerated heavily, with particle diameters of almost 2600 and 5200 nm, respectively, owing to charge shielding of the citrate-modified surface by ions in the media. To determine the primary particle size distribution with the DLS-based approach, the iron oxide particles incubated in the simulated gastrointestinal fluid were converted to monodisperse particles by adjusting the pH to 7 and eliminating electrolytes. The simple DLS-based methodologies are well suited to determining the primary particle size distribution of mineral nanoparticles under various physical, chemical, and biological conditions.
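
    The physics underlying DLS sizing is the Stokes-Einstein relation, which converts a measured diffusion coefficient into a hydrodynamic diameter. A minimal sketch (with illustrative values, not data from the study):

      # Stokes-Einstein conversion behind DLS sizing; values are illustrative.
      import math

      KB = 1.380649e-23          # Boltzmann constant, J/K

      def hydrodynamic_diameter(D, T=298.15, eta=0.89e-3):
          """d_H = k_B*T / (3*pi*eta*D), with D in m^2/s and eta in Pa*s."""
          return KB * T / (3.0 * math.pi * eta * D)

      # A diffusion coefficient of ~2.95e-12 m^2/s corresponds to ~166 nm at 25 C:
      print(hydrodynamic_diameter(2.95e-12) * 1e9, "nm")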

  3. Implicit methods in particle simulation

    SciTech Connect

    Cohen, B.I.

    1982-03-16

    This paper surveys recent advances in the application of implicit integration schemes to particle simulation of plasmas. The use of implicit integration schemes is motivated by the goal of efficiently studying low-frequency plasma phenomena using a large timestep, while retaining accuracy and kinetics. Implicit schemes achieve numerical stability and provide selective damping of unwanted high-frequency waves. This paper reviews the implicit moment and direct implicit methods. Lastly, the merging of implicit methods with orbit averaging can result in additional computational savings.
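
    As a minimal illustration of why implicit schemes permit large timesteps and selectively damp high-frequency modes (my sketch, not from the paper), consider a theta-scheme applied to a harmonic oscillator far above its explicit stability limit:

      # Theta-scheme for x'' = -w^2 x: stable at large dt and, for theta > 1/2,
      # it damps the unresolved high-frequency oscillation. Illustrative only.
      import numpy as np

      def theta_step(x, v, w, dt, theta=0.6):
          # Solve the 2x2 linear system of the theta-discretized oscillator:
          #   x1 = x + dt*((1-theta)*v + theta*v1)
          #   v1 = v - dt*w^2*((1-theta)*x + theta*x1)
          A = np.array([[1.0, -dt * theta],
                        [dt * theta * w**2, 1.0]])
          b = np.array([x + dt * (1 - theta) * v,
                        v - dt * (1 - theta) * w**2 * x])
          return np.linalg.solve(A, b)

      x, v, w, dt = 1.0, 0.0, 100.0, 0.1   # w*dt = 10: explicit leapfrog blows up
      for _ in range(1000):
          x, v = theta_step(x, v, w, dt)
      print(x, v)   # amplitude decays: the fast mode is selectively damped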

  4. A simulation method for the fruitage body

    NASA Astrophysics Data System (ADS)

    Lu, Ling; Song, Weng-lin; Wang, Lei

    2009-07-01

    An effective visual model for creating a fruit body is presented. Based on the geometric shape characteristics of the fruit, we build a surface model using ellipsoid deformation, parameterized by radius. Different radii correspond to different surfaces within the fruit, and the same method is used to simulate the interior shape. The body model is formed by combining the surface model with the radial direction, so the method can simulate both the internal and external structure of the fruit body while greatly reducing the amount of data and increasing display speed. In addition, the fruit texture model is defined as a sum of basis functions, which is simple and fast. We show the feasibility of our method by creating a winter jujube and an apricot, each including exocarp, mesocarp, and endocarp. The method is useful for developing virtual plants.

  5. A 3-Dimensional Absorbed Dose Calculation Method Based on Quantitative SPECT for Radionuclide Therapy: Evaluation for 131I Using Monte Carlo Simulation

    PubMed Central

    Ljungberg, Michael; Sjögreen, Katarina; Liu, Xiaowei; Frey, Eric; Dewaraja, Yuni; Strand, Sven-Erik

    2009-01-01

    A general method is presented for patient-specific 3-dimensional absorbed dose calculations based on quantitative SPECT activity measurements. Methods: The computational scheme includes a method for registration of the CT image to the SPECT image and position-dependent compensation for attenuation, scatter, and collimator detector response, performed as part of an iterative reconstruction method. A method for conversion of the measured activity distribution to a 3-dimensional absorbed dose distribution, based on the EGS4 (electron-gamma shower, version 4) Monte Carlo code, is also included. The accuracy of the activity quantification and the absorbed dose calculation is evaluated on the basis of realistic Monte Carlo-simulated SPECT data, using the SIMIND (simulation of imaging nuclear detectors) program and a voxel-based computer phantom. CT images are obtained from the computer phantom, and realistic patient movements are added relative to the SPECT image. The SPECT-based activity concentration and absorbed dose distributions are compared with the true ones. Results: Correction could be made for object scatter, photon attenuation, and scatter penetration in the collimator. However, inaccuracies were imposed by the limited spatial resolution of the SPECT system, for which the collimator response correction did not fully compensate. Conclusion: The presented method includes compensation for most parameters that degrade the quantitative image information. The compensation methods are based on physical models and therefore are generally applicable to other radionuclides. The proposed evaluation methodology may be used as a basis for future intercomparison of different methods. PMID:12163637
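
    The activity-to-dose conversion step can be pictured as a convolution of the activity map with a dose point kernel. The sketch below uses a made-up Gaussian kernel as a stand-in for the paper's EGS4-derived kernel; array sizes and uptake values are hypothetical:

      # Voxel dose as activity (x) dose-point-kernel convolution; illustrative.
      import numpy as np
      from scipy.signal import fftconvolve

      def gaussian_kernel(size=9, sigma=1.5):
          ax = np.arange(size) - size // 2
          xx, yy, zz = np.meshgrid(ax, ax, ax, indexing="ij")
          k = np.exp(-(xx**2 + yy**2 + zz**2) / (2 * sigma**2))
          return k / k.sum()      # normalized: unit energy deposited per decay

      activity = np.zeros((64, 64, 64))
      activity[30:34, 30:34, 30:34] = 1.0     # hypothetical uptake region
      dose = fftconvolve(activity, gaussian_kernel(), mode="same")
      print(dose.max())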

  6. Symplectic partitioned Runge-Kutta method based on the eighth-order nearly analytic discrete operator and its wavefield simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Chao-Yuan; Ma, Xiao; Yang, Lei; Song, Guo-Jie

    2014-03-01

    We propose a symplectic partitioned Runge-Kutta (SPRK) method with eighth-order spatial accuracy based on the extended Hamiltonian system of the acoustic wave equation. Known as the eighth-order NSPRK method, this technique uses an eighth-order accurate nearly analytic discrete (NAD) operator to discretize high-order spatial differential operators and employs a second-order SPRK method to discretize temporal derivatives. The stability criteria and numerical dispersion relations of the eighth-order NSPRK method are given by a semi-analytical method and are tested by numerical experiments. We also show the differences in numerical dispersion between the eighth-order NSPRK method and conventional numerical methods such as the fourth-order NSPRK method, the eighth-order Lax-Wendroff correction (LWC) method, and the eighth-order staggered-grid (SG) method. The results show that the ability of the eighth-order NSPRK method to suppress numerical dispersion is clearly superior to that of the conventional methods. In the same computational environment, to eliminate visible numerical dispersion, the eighth-order NSPRK is approximately 2.5 times faster than the fourth-order NSPRK and 3.4 times faster than the fourth-order SPRK, and its memory requirement is only approximately 47.17% of that of the fourth-order NSPRK method and 49.41% of that of the fourth-order SPRK method, indicating the highest computational efficiency. Modeling examples for two-layer, heterogeneous, and Marmousi models show that the wavefields generated by the eighth-order NSPRK method are very clear, with no visible numerical dispersion. These numerical experiments illustrate that the eighth-order NSPRK method can effectively suppress numerical dispersion when coarse grids are adopted, and can therefore greatly decrease computer memory requirements and accelerate forward modeling. In general, the eighth-order NSPRK method has tremendous potential.
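
    The temporal building block of the scheme is a second-order symplectic partitioned Runge-Kutta step; for a separable Hamiltonian it reduces to the familiar Stormer-Verlet kick-drift-kick pattern sketched below (the paper's eighth-order NAD spatial operator is not reproduced here):

      # Second-order SPRK (Stormer-Verlet) for H = p^2/2 + V(q); illustrative.
      import numpy as np

      def verlet_step(q, p, grad_V, dt):
          p_half = p - 0.5 * dt * grad_V(q)           # kick
          q_new = q + dt * p_half                     # drift
          p_new = p_half - 0.5 * dt * grad_V(q_new)   # kick
          return q_new, p_new

      # Usage: toy harmonic system, V(q) = q^2/2 per node
      grad_V = lambda q: q
      q, p = np.ones(8), np.zeros(8)
      for _ in range(1000):
          q, p = verlet_step(q, p, grad_V, dt=0.1)
      print(0.5 * (p @ p + q @ q))   # energy stays bounded (symplectic)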

  7. Matrix method for acoustic levitation simulation.

    PubMed

    Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C

    2011-08-01

    A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort. PMID:21859587
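
    A minimal sketch of the Rayleigh-integral evaluation at the heart of the method (geometry and drive values are illustrative, and the multiple transducer-reflector reflections the matrix method accounts for are omitted):

      # Rayleigh integral over a discretized baffled source; illustrative setup.
      import numpy as np

      RHO, C, F = 1.2, 343.0, 37.9e3           # air density, sound speed, frequency
      K, OMEGA = 2 * np.pi * F / C, 2 * np.pi * F

      def rayleigh_pressure(field_pts, src_pts, src_areas, v_n):
          # p(r) = (j*omega*rho / 2*pi) * sum_i v_n_i * exp(-j*k*R_i)/R_i * dS_i
          R = np.linalg.norm(field_pts[:, None, :] - src_pts[None, :, :], axis=2)
          G = np.exp(-1j * K * R) / R
          return (1j * OMEGA * RHO / (2 * np.pi)) * G @ (v_n * src_areas)

      # Usage: a 1 cm square source meshed into point elements
      xs = np.linspace(-5e-3, 5e-3, 21)
      sx, sy = np.meshgrid(xs, xs)
      src = np.column_stack([sx.ravel(), sy.ravel(), np.zeros(sx.size)])
      area = np.full(len(src), (xs[1] - xs[0])**2)
      field = np.array([[0.0, 0.0, z] for z in np.linspace(5e-3, 50e-3, 10)])
      print(np.abs(rayleigh_pressure(field, src, area, 0.1 * np.ones(len(src)))))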

  8. Evaluation of a transient, simultaneous, arbitrary Lagrange-Euler based multi-physics method for simulating the mitral heart valve.

    PubMed

    Espino, Daniel M; Shepherd, Duncan E T; Hukins, David W L

    2014-01-01

    A transient multi-physics model of the mitral heart valve has been developed, which allows simultaneous calculation of fluid flow and structural deformation. A recently developed contact method has been applied to enable simulation of systole (the stage when blood pressure is elevated within the heart to pump blood to the body). The geometry was simplified to represent the mitral valve within the heart walls in two dimensions, with only the mitral valve undergoing deformation. A moving arbitrary Lagrange-Euler mesh is used to allow true fluid-structure interaction (FSI). In the FSI model, blood flow induces valve closure, producing strains in the region of 10-20%. Model predictions were found to be consistent with the existing literature, and the model will undergo further development.

  9. Fourier transform-based scattering-rate method for self-consistent simulations of carrier transport in semiconductor heterostructures

    NASA Astrophysics Data System (ADS)

    Schrottke, L.; Lü, X.; Grahn, H. T.

    2015-04-01

    We present a self-consistent model for carrier transport in periodic semiconductor heterostructures completely formulated in the Fourier domain. In addition to the Hamiltonian for the layer system, all expressions for the scattering rates, the applied electric field, and the carrier distribution are treated in reciprocal space. In particular, for slowly converging cases of the self-consistent solution of the Schrödinger and Poisson equations, numerous transformations between real and reciprocal space during the iterations can be avoided by using the presented method, which results in a significant reduction of computation time. Therefore, it is a promising tool for the simulation and efficient design of complex heterostructures such as terahertz quantum-cascade lasers.

  10. Fourier transform-based scattering-rate method for self-consistent simulations of carrier transport in semiconductor heterostructures

    SciTech Connect

    Schrottke, L.; Lü, X.; Grahn, H. T.

    2015-04-21

    We present a self-consistent model for carrier transport in periodic semiconductor heterostructures completely formulated in the Fourier domain. In addition to the Hamiltonian for the layer system, all expressions for the scattering rates, the applied electric field, and the carrier distribution are treated in reciprocal space. In particular, for slowly converging cases of the self-consistent solution of the Schrödinger and Poisson equations, numerous transformations between real and reciprocal space during the iterations can be avoided by using the presented method, which results in a significant reduction of computation time. Therefore, it is a promising tool for the simulation and efficient design of complex heterostructures such as terahertz quantum-cascade lasers.

  11. Implicit temperature-correction-based immersed-boundary thermal lattice Boltzmann method for the simulation of natural convection

    NASA Astrophysics Data System (ADS)

    Seta, Takeshi

    2013-06-01

    In the present paper, we apply the implicit-correction method to the immersed-boundary thermal lattice Boltzmann method (IB-TLBM) for natural convection between two concentric horizontal cylinders and in a square enclosure containing a circular cylinder. The Chapman-Enskog multiscale expansion proves the existence of an extra term in the temperature equation arising from the source term of the kinetic equation. In order to eliminate the extra term, we redefine the temperature and the source term in the lattice Boltzmann equation. When the relaxation time is less than unity, the new definition of the temperature and source term enhances the accuracy of the thermal lattice Boltzmann method. The implicit-correction method is required in order to calculate the thermal interaction between a fluid and a rigid solid using the redefined temperature. Simulation of the heat conduction between two concentric cylinders indicates that the error at each boundary point of the proposed IB-TLBM is reduced as the number of Lagrangian points constituting the boundaries increases. We derive the theoretical relation between the temperature slip at the boundary and the relaxation time, and demonstrate that the IB-TLBM requires a small relaxation time in order to avoid temperature distortion around the immersed boundary. The streamlines, isotherms, and average Nusselt numbers calculated by the proposed method agree well with those of previous numerical studies of natural convection. The proposed IB-TLBM improves the accuracy of the boundary conditions for the temperature and velocity by using an adequate discrete area for each of the Lagrangian nodes, and reduces the penetration of streamlines through the surface of the body.

  12. Implicit temperature-correction-based immersed-boundary thermal lattice Boltzmann method for the simulation of natural convection.

    PubMed

    Seta, Takeshi

    2013-06-01

    In the present paper, we apply the implicit-correction method to the immersed-boundary thermal lattice Boltzmann method (IB-TLBM) for natural convection between two concentric horizontal cylinders and in a square enclosure containing a circular cylinder. The Chapman-Enskog multiscale expansion proves the existence of an extra term in the temperature equation arising from the source term of the kinetic equation. In order to eliminate the extra term, we redefine the temperature and the source term in the lattice Boltzmann equation. When the relaxation time is less than unity, the new definition of the temperature and source term enhances the accuracy of the thermal lattice Boltzmann method. The implicit-correction method is required in order to calculate the thermal interaction between a fluid and a rigid solid using the redefined temperature. Simulation of the heat conduction between two concentric cylinders indicates that the error at each boundary point of the proposed IB-TLBM is reduced as the number of Lagrangian points constituting the boundaries increases. We derive the theoretical relation between the temperature slip at the boundary and the relaxation time, and demonstrate that the IB-TLBM requires a small relaxation time in order to avoid temperature distortion around the immersed boundary. The streamlines, isotherms, and average Nusselt numbers calculated by the proposed method agree well with those of previous numerical studies of natural convection. The proposed IB-TLBM improves the accuracy of the boundary conditions for the temperature and velocity by using an adequate discrete area for each of the Lagrangian nodes, and reduces the penetration of streamlines through the surface of the body.

  13. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region.

    PubMed

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing lead to uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station on the semi-arid Loess Plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm with the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to obtain by observation were optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process on the semi-arid Loess Plateau; the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm improved the simulations of soil moisture and latent heat flux in all tests, and differences between simulated results and observational data were clearly reduced; however, adopting the optimized parameters could not simultaneously improve the simulation results for net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on the different datasets have the same order of magnitude but are not identical; the soil parameters vary only slightly, but the variation range of the vegetation parameters is large. PMID:26991786

  14. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region

    PubMed Central

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing lead to uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station on the semi-arid Loess Plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm with the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to obtain by observation were optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process on the semi-arid Loess Plateau; the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm improved the simulations of soil moisture and latent heat flux in all tests, and differences between simulated results and observational data were clearly reduced; however, adopting the optimized parameters could not simultaneously improve the simulation results for net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on the different datasets have the same order of magnitude but are not identical; the soil parameters vary only slightly, but the variation range of the vegetation parameters is large. PMID:26991786
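
    A minimal particle swarm optimization sketch in the spirit of this model-optimization approach: tune parameters to minimize the RMSE between a model and observed soil moisture. The toy linear "model" below is a stand-in for SHAW, and all values are illustrative:

      # Standard PSO (inertia + cognitive + social terms); toy objective.
      import numpy as np

      rng = np.random.default_rng(0)

      def rmse(params, forcing, observed):
          simulated = params[0] * forcing + params[1]   # hypothetical toy model
          return np.sqrt(np.mean((simulated - observed) ** 2))

      def pso(cost, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
          lo, hi = bounds[:, 0], bounds[:, 1]
          x = rng.uniform(lo, hi, (n_particles, len(lo)))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
          g = pbest[pbest_f.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([cost(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              g = pbest[pbest_f.argmin()].copy()
          return g

      forcing = rng.random(50)
      observed = 0.3 * forcing + 0.05               # synthetic "truth"
      bounds = np.array([[0.0, 1.0], [0.0, 0.2]])
      print(pso(lambda p: rmse(p, forcing, observed), bounds))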

  15. Epistemology of knowledge based simulation

    SciTech Connect

    Reddy, R.

    1987-04-01

    Combining artificial intelligence concepts with traditional simulation methodologies yields a powerful design support tool known as knowledge-based simulation. This approach turns a descriptive simulation tool into a prescriptive one, which recommends specific goals. Much work remains to be done in the area of general goal processing and explanation of recommendations.

  16. Three-Dimensional Carotid Plaque Progression Simulation Using Meshless Generalized Finite Difference Method Based on Multi-Year MRI Patient-Tracking Data.

    PubMed

    Yang, Chun; Tang, Dalin; Atluri, Satya

    2010-01-01

    Cardiovascular disease (CVD) is becoming the number one cause of death worldwide. Atherosclerotic plaque rupture and progression are closely related to the most severe cardiovascular syndromes, such as heart attack and stroke, yet the mechanisms governing plaque rupture and progression are not well understood. A computational procedure based on a three-dimensional meshless generalized finite difference (MGFD) method and serial magnetic resonance imaging (MRI) data was introduced to quantify patient-specific carotid atherosclerotic plaque growth functions and to simulate plaque progression. Participating patients were scanned three times (T1, T2, and T3, at intervals of about 18 months) to obtain plaque progression data, with vessel wall thickness (WT) changes used as the measure of progression. Since the available data were insufficient to quantify the growth of individual plaque components, the whole plaque was assumed to be uniform, homogeneous, isotropic, linear, and nearly incompressible, and a linear elastic model was used. The 3D plaque model was discretized and solved using a meshless generalized finite difference (GFD) method. Four growth functions with different combinations of wall thickness, stress, and neighboring-point terms were introduced to predict future plaque growth based on previous time-point data. Starting from the T2 plaque geometry, plaque progression was simulated by solving the solid model and adjusting wall thickness using the plaque growth functions iteratively until T3 was reached. The numerically simulated plaque progression agreed very well with the target T3 plaque geometry, with errors of 11.56%, 6.39%, 8.24%, and 4.45% for the four growth functions. We believe this is the first time 3D plaque progression simulation based on multi-year patient-tracking data has been reported. Serial MRI-based progression simulation adds a time dimension to plaque vulnerability assessment and will improve prediction accuracy for potential plaque rupture.
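
    The iterative progression loop can be sketched as follows; the linear growth-function form and its coefficients are hypothetical illustrations, not the paper's fitted functions, and the stress solve is a placeholder for the MGFD computation:

      # Hypothetical progression loop: adjust wall thickness with a growth
      # function of thickness and stress, re-solve, repeat until T3 is reached.
      import numpy as np

      def growth_function(wt, stress, a=(0.0, 0.01, -0.002)):
          # dWT = a0 + a1*WT + a2*sigma  (one of several candidate combinations)
          return a[0] + a[1] * wt + a[2] * stress

      def simulate_progression(wt, solve_wall_stress, n_steps=18):
          for _ in range(n_steps):                 # e.g. one step per month
              stress = solve_wall_stress(wt)       # placeholder for MGFD solve
              wt = wt + growth_function(wt, stress)
          return wt

      # Usage with a toy stress model (thinner wall -> higher stress):
      wt0 = np.full(100, 1.0)                      # mm, around the vessel
      print(simulate_progression(wt0, lambda w: 5.0 / w).mean())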

  17. Simulation-based surgical education.

    PubMed

    Evgeniou, Evgenios; Loizou, Peter

    2013-09-01

    The reduction in time for training at the workplace has created a challenge for the traditional apprenticeship model of training. Simulation offers the opportunity for repeated practice in a safe and controlled environment, focusing on trainees and tailored to their needs. Recent technological advances have led to the development of various simulators, which have already been introduced in surgical training. The complexity and fidelity of the available simulators vary; therefore, depending on our resources, we should select the appropriate simulator for the task or skill we want to teach. Educational theory informs us about the importance of context in professional learning, so simulation should recreate the clinical environment and its complexity. Contemporary approaches to simulation have introduced novel ideas for teaching teamwork, communication skills, and professionalism. In order for simulation-based training to be successful, simulators have to be validated appropriately and integrated into a training curriculum. Within a surgical curriculum, trainees should have protected time for simulation-based training under appropriate supervision. Simulation-based surgical education should allow the appropriate practice of technical skills without ignoring the clinical context and must strike an adequate balance between the simulation environment and the simulators. PMID:23088646

  18. A method to find correlations between steering feel and vehicle handling properties using a moving base driving simulator

    NASA Astrophysics Data System (ADS)

    Rothhämel, Malte; IJkema, Jolle; Drugge, Lars

    2011-12-01

    There have been several investigations of how drivers experience a change in vehicle-handling behaviour. However, the hypothesis that there is a correlation between what the driver perceives and vehicle-handling properties remains to be verified. To define what people feel, the human perception of steering systems was divided into dimensions of perception. Then 28 test drivers rated different steering system characteristics of a semi-trailer tractor combination in a moving-base driving simulator. The characteristics of the steering system differed in friction, damping, inertia, and stiffness. The same steering system characteristics were also tested in accordance with international standards for vehicle-handling tests, resulting in characteristic quantities. The instrumental measurements and the non-instrumental ratings were analysed for correlations with each other using regression analysis and neural networks. The results show that there are correlations between measurements and ratings; moreover, it is shown which of the handling variables influence the different dimensions of steering feel.

  19. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index, and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built, and a multilayer perceptron model based on the neural network approach was implemented. The two models were then compared. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge exactly, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining sampling configurations and mapping the spatial distribution of soil organic matter with low cost and high efficiency. PMID:26211074
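
    A generic simulated-annealing sketch for sample-layout optimization in this spirit: move sample points among candidate road-accessible sites to reduce a layout cost. The cost used here (negative minimum inter-sample distance, favoring spatial spread) is an illustrative stand-in for the study's criterion:

      # Simulated annealing over candidate sampling sites; illustrative cost.
      import math, random

      random.seed(0)

      def cost(layout):
          d_min = min(math.dist(a, b) for i, a in enumerate(layout)
                      for b in layout[i + 1:])
          return -d_min                  # spread points as far apart as possible

      def anneal(candidates, n_samples=13, t0=1.0, cooling=0.995, steps=5000):
          layout = random.sample(candidates, n_samples)
          best, best_c, t = layout[:], cost(layout), t0
          for _ in range(steps):
              trial = layout[:]
              trial[random.randrange(n_samples)] = random.choice(candidates)
              dc = cost(trial) - cost(layout)
              if dc < 0 or random.random() < math.exp(-dc / t):
                  layout = trial
                  if cost(layout) < best_c:
                      best, best_c = layout[:], cost(layout)
              t *= cooling
          return best

      sites = [(random.random(), random.random()) for _ in range(200)]
      print(anneal(sites))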

  20. Parallel node placement method by bubble simulation

    NASA Astrophysics Data System (ADS)

    Nie, Yufeng; Zhang, Weiwei; Qi, Nan; Li, Yiqiang

    2014-03-01

    An efficient parallel node placement method by bubble simulation (PNPBS), employing METIS-based domain decomposition (DD) for an arbitrary number of processors, is introduced. Automatic generation of node sets by bubble simulation, in accordance with the desired nodal density and Newton's second law of motion, has been demonstrated in previous work. Since the interaction force between nodes is short-range, the positions and velocities of two distant nodes can be updated simultaneously and independently during the dynamic simulation; this inherent parallelism makes the method well suited to parallel computing. In the PNPBS method, the METIS-based DD scheme has been investigated for uniform and non-uniform node sets, and dynamic load balancing is obtained by evenly distributing work among the processors. Nodes near the common interface of two neighboring subdomains need no special treatment after the dynamic simulation; they have good geometrical properties and a smooth density distribution, which is desirable in the numerical solution of partial differential equations (PDEs). The results of numerical examples show that quasi-linear speedup in the number of processors and high efficiency are achieved.

  1. Influence of River Bed Elevation Survey Configurations and Interpolation Methods on the Accuracy of LIDAR DTM-Based River Flow Simulations

    NASA Astrophysics Data System (ADS)

    Santillan, J. R.; Serviano, J. L.; Makinano-Santillan, M.; Marqueso, J. T.

    2016-09-01

    In this paper, we investigated how the survey configuration and the type of interpolation method affect the accuracy of river flow simulations that use a LIDAR DTM integrated with an interpolated river bed as the main source of topographic information. Aside from determining the accuracy of the individually generated river bed topographies, we also assessed the overall accuracy of the river flow simulations in terms of maximum flood depth and extent. Four survey configurations consisting of river bed elevation data points arranged as cross-sections (XS), zig-zag (ZZ), river banks-centerline (RBCL), and river banks-centerline-zig-zag (RBCLZZ), and two interpolation methods (inverse distance-weighted and ordinary kriging) were considered. The major results show that the choice of survey configuration, rather than the interpolation method, has a significant effect on the accuracy of interpolated river bed surfaces, and subsequently on the accuracy of river flow simulations. The RMSEs of the interpolated surfaces and the model results vary from one configuration to another, and depend on how evenly each configuration collects river bed elevation data points. The large RMSEs for the RBCL configuration and the low RMSEs for the XS configuration confirm that as the data points become evenly spaced and cover more portions of the river, the resulting interpolated surface and the river flow simulation in which it is used also become more accurate. The XS configuration with ordinary kriging (OK) as the interpolation method provided the best river bed interpolation and river flow simulation results, while the RBCL configuration, regardless of the interpolation algorithm, resulted in the least accurate river bed surfaces and simulation results. Based on the accuracy analysis, collecting river bed data points in the XS configuration and applying the OK method to interpolate the river bed topography are the best choices for producing satisfactory river flow simulation outputs.
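
    Of the two interpolation methods compared, inverse distance weighting is the simpler; a minimal sketch (power parameter and survey data are illustrative):

      # Inverse distance-weighted (IDW) interpolation of bed elevations.
      import numpy as np

      def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
          d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
          w = 1.0 / (d ** power + eps)       # eps guards against zero distance
          return (w @ z_known) / w.sum(axis=1)

      # Usage: surveyed points -> bed elevation at unsampled channel locations
      pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 5.0], [10.0, 5.0]])
      z = np.array([98.2, 98.9, 97.8, 98.4])   # hypothetical elevations (m)
      query = np.array([[5.0, 2.5], [2.0, 1.0]])
      print(idw(pts, z, query))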

  2. Development of an Output-based Adaptive Method for Multi-Dimensional Euler and Navier-Stokes Simulations

    NASA Technical Reports Server (NTRS)

    Darmofal, David L.

    2003-01-01

    The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptation strategy for reducing simulation errors in integral outputs (functionals), such as lift or drag, from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.

  3. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique in which the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed finite-element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential therefore exists to develop algorithms that run several advective steps, followed by one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is the most computationally expensive, such schemes can be more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation of the solution and with virtually no artificial diffusion.
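
    The time-splitting idea can be sketched in one dimension: an explicit upwind (Godunov-type) advection step followed by an implicit diffusion step. Grid, velocity, and dispersion values below are made up, and the real engine uses finite volumes plus mixed finite elements on unstructured meshes:

      # Operator-split advection-dispersion step; illustrative 1-D periodic grid.
      import numpy as np

      def split_step(c, u, D, dx, dt):
          # 1) explicit first-order upwind advection (u > 0 assumed)
          c = c - u * dt / dx * (c - np.roll(c, 1))
          # 2) implicit (backward Euler) diffusion: (I - dt*D*L) c_new = c
          n, r = len(c), D * dt / dx**2
          A = np.eye(n) * (1 + 2 * r)
          A += np.eye(n, k=1) * -r + np.eye(n, k=-1) * -r
          A[0, -1] = A[-1, 0] = -r           # periodic boundary for brevity
          return np.linalg.solve(A, c)

      c = np.zeros(100); c[45:55] = 1.0      # sharp solute front
      for _ in range(200):
          c = split_step(c, u=1.0, D=1e-3, dx=0.01, dt=0.005)
      print(c.max())                          # front advects without oscillation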

  4. Investigating internal architecture effect in plastic deformation and failure for TPMS-based scaffolds using simulation methods and experimental procedure.

    PubMed

    Kadkhodapour, J; Montazerian, H; Raeisi, S

    2014-10-01

    Rapid prototyping (RP) is a promising technique for producing tissue engineering scaffolds that mimic the behavior of host tissue as closely as possible. Biodegradability and good feasibility of cell growth and migration, alongside mechanical properties such as strength and energy absorption, have to be considered in the design procedure. In order to study the effect of internal architecture on plastic deformation and failure patterns, architectures based on triply periodic minimal surfaces (TPMS), which are observed in nature, were used. P and D surfaces at 30% and 60% volume fraction were modeled with 3×3×3 unit cells and sent to an Objet EDEN 260 3-D printer. Models were printed in VeroBlue FullCure 840 photopolymer resin. Mechanical compression tests were performed to investigate the compressive behavior of the scaffolds. The deformation process and stress-strain curves were simulated by FEA and exhibited good agreement with the experimental observations. Current approaches for predicting the dominant deformation mode under compression, comprising Maxwell's criterion and scaling laws, were also investigated to achieve an understanding of the relationships between deformation pattern and the mechanical properties of porous structures. It was observed that stress concentration in TPMS-based scaffolds, resulting from heterogeneous mass distribution, particularly at lower volume fractions, led to behavior different from that of typical cellular materials. As a result, although more parameters are considered for determining the dominant deformation mode in scaling laws, the two approaches cannot exclusively be used to compare the mechanical response of cellular materials at the same volume fraction. PMID:25175253
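
    The P and D architectures are commonly approximated by trigonometric level-set functions; sweeping the threshold sets the scaffold volume fraction. A sketch (grid resolution illustrative; these are the standard approximations, not necessarily the exact equations used in the paper):

      # Level-set approximations of the Schwarz P and D surfaces.
      import numpy as np

      def tpms_volume_fraction(surface, t, n=64, cells=3):
          s = np.linspace(0, 2 * np.pi * cells, n)
          x, y, z = np.meshgrid(s, s, s, indexing="ij")
          if surface == "P":      # Schwarz Primitive
              f = np.cos(x) + np.cos(y) + np.cos(z)
          elif surface == "D":    # Schwarz Diamond
              f = (np.sin(x) * np.sin(y) * np.sin(z)
                   + np.sin(x) * np.cos(y) * np.cos(z)
                   + np.cos(x) * np.sin(y) * np.cos(z)
                   + np.cos(x) * np.cos(y) * np.sin(z))
          return np.mean(f <= t)   # solid fraction of the voxelized 3x3x3 block

      for t in (-0.3, 0.0, 0.3):
          print(t, tpms_volume_fraction("P", t), tpms_volume_fraction("D", t))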

  5. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    SciTech Connect

    Boring, Ronald Laurids; Shirley, Rachel Elizabeth; Joe, Jeffrey Clark; Mandelli, Diego

    2014-12-01

    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  6. A heterogeneous graph-based recommendation simulator

    SciTech Connect

    Yeonchan, Ahn; Sungchan, Park; Lee, Matt Sangkeun; Sang-goo, Lee

    2013-01-01

    Heterogeneous graph-based recommendation frameworks are flexible in that they can incorporate various recommendation algorithms and various kinds of information to produce better results. In this demonstration, we present a heterogeneous graph-based recommendation simulator that enables participants to experience the flexibility of a heterogeneous graph-based recommendation method. With our system, participants can simulate various recommendation semantics by expressing the semantics via meaningful paths such as User-Movie-User-Movie. The simulator then returns the recommendation results on the fly, based on the user-customized semantics, using a fast Monte Carlo algorithm.
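
    A toy illustration of meta-path semantics like User-Movie-User-Movie: Monte Carlo random walks along the path accumulate recommendation scores. The graph data and scoring below are made up:

      # Monte Carlo walks along a User-Movie-User-Movie meta-path; toy data.
      import random
      from collections import Counter

      random.seed(0)
      watched = {"u1": ["m1", "m2"], "u2": ["m2", "m3"], "u3": ["m1", "m3"]}
      watchers = {}
      for u, ms in watched.items():
          for m in ms:
              watchers.setdefault(m, []).append(u)

      def recommend(user, walks=2000):
          hits = Counter()
          for _ in range(walks):                 # Monte Carlo over the meta-path
              m1 = random.choice(watched[user])  # User -> Movie
              u2 = random.choice(watchers[m1])   # Movie -> User
              m2 = random.choice(watched[u2])    # User -> Movie
              if m2 not in watched[user]:
                  hits[m2] += 1
          return hits.most_common()

      print(recommend("u1"))    # movies co-watched by similar users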

  7. Simulation-based medical teaching and learning

    PubMed Central

    Al-Elq, Abdulmohsen H.

    2010-01-01

    One of the most important steps in curriculum development is the introduction of simulation-based medical teaching and learning. Simulation is a generic term that refers to an artificial representation of a real-world process used to achieve educational goals through experiential learning. Simulation-based medical education is defined as any educational activity that utilizes simulation aids to replicate clinical scenarios. Although medical simulation is relatively new, simulation has been used for a long time in other high-risk professions such as aviation. Medical simulation allows the acquisition of clinical skills through deliberate practice rather than an apprentice style of learning. Simulation tools serve as an alternative to real patients: a trainee can make mistakes and learn from them without the fear of harming the patient. There are different types and classifications of simulators, and their costs vary according to the degree of their resemblance to reality, or 'fidelity'. Simulation-based learning is expensive; however, it is cost-effective if utilized properly. Medical simulation has been found to enhance clinical competence at the undergraduate and postgraduate levels. It has also been found to have many advantages that can improve patient safety and reduce health care costs through the improvement of the medical provider's competencies. The objective of this narrative review article is to highlight the importance of simulation as a new teaching method in undergraduate and postgraduate education. PMID:22022669

  8. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule-based modeling systems, and a method for converting a procedural model to a rule-based model are described. Simulation models are used to represent real-time engineering systems. A real-time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language; therefore, they must be enhanced with a reaction capability. Rule-based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule-based system can be generated by a knowledge acquisition tool or a source-level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table, and generate the knowledge network; thus, existing procedural models can be translated and executed by a rule-based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.

  9. A Lattice Boltzmann Method for Turbomachinery Simulations

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Lopez, I.

    2003-01-01

    The lattice Boltzmann (LB) method is a relatively new method for flow simulations. Its starting point is statistical mechanics and the Boltzmann equation: the LB method sets up its model at the molecular scale and simulates the flow at the macroscopic scale. LBM has mostly been applied to incompressible flows and simple geometries.
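
    A minimal D2Q9 BGK lattice Boltzmann sketch (collision plus streaming) illustrating the mechanics described above; the channel setup is illustrative and no turbomachinery geometry is modeled:

      # D2Q9 BGK lattice Boltzmann: collide toward equilibrium, then stream.
      import numpy as np

      # D2Q9 velocity set and weights
      c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
      w = np.array([4/9] + [1/9]*4 + [1/36]*4)
      tau = 0.8                                 # relaxation time (sets viscosity)

      def equilibrium(rho, ux, uy):
          cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
          usq = ux**2 + uy**2
          return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

      nx, ny = 64, 32
      f = equilibrium(np.ones((nx, ny)), np.full((nx, ny), 0.05), np.zeros((nx, ny)))
      for _ in range(500):
          rho = f.sum(axis=0)
          ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
          uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
          f += -(f - equilibrium(rho, ux, uy)) / tau       # BGK collision
          for i, (cx, cy) in enumerate(c):                 # periodic streaming
              f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
      print(rho.mean(), ux.mean())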

  10. Multigrid methods with applications to reservoir simulation

    SciTech Connect

    Xiao, Shengyou

    1994-05-01

    Multigrid methods are studied for solving elliptic partial differential equations, with a focus on parallel multigrid methods and their use for reservoir simulation. Multicolor Fourier analysis is used to analyze the behavior of standard multigrid methods for problems in one and two dimensions, and the relation between multicolor and standard Fourier analysis is established. Multiple coarse grid methods for solving model problems in 1 and 2 dimensions are considered; at each coarse grid level, more than one coarse grid is used to improve convergence. For a given Dirichlet problem, a related extended problem is first constructed; a purification procedure can then be used to obtain Moore-Penrose solutions of the singular systems encountered. For solving anisotropic equations, semicoarsening and line smoothing techniques are used with multiple coarse grid methods to improve convergence. Two-level convergence factors are estimated using multicolor Fourier analysis. In the case where each operator has the same stencil at each grid point on one level, exact multilevel convergence factors can be obtained. For solving partial differential equations with discontinuous coefficients, the interpolation and restriction operators should include information about the equation coefficients; matrix-dependent interpolation and restriction operators based on the Schur complement can be used in nonsymmetric cases. A semicoarsening multigrid solver with these operators is used in UTCOMP, a 3-D, multiphase, multicomponent, compositional reservoir simulator. The numerical experiments, carried out on several computing systems, indicate that the multigrid methods are promising.
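
    The basic two-grid correction cycle underlying these methods, sketched for the 1-D Poisson problem -u'' = f with a weighted-Jacobi smoother (transfer operators simplified for brevity; the paper's matrix-dependent operators are not reproduced):

      # Two-grid cycle: smooth, restrict residual, coarse-solve, prolong, smooth.
      import numpy as np

      def smooth(u, f, h, sweeps=3, omega=2/3):
          for _ in range(sweeps):
              u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
                  u[:-2] + u[2:] + h**2 * f[1:-1])
          return u

      def two_grid(u, f, h):
          u = smooth(u, f, h)
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2   # residual
          rc = r[::2].copy()                   # restriction (injection, for brevity)
          ec = np.zeros_like(rc)
          for _ in range(50):                  # "solve" coarse problem by smoothing
              ec = smooth(ec, rc, 2*h)
          e = np.zeros_like(u)
          e[::2] = ec                          # prolongation: copy + interpolate
          e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])
          return smooth(u + e, f, h)

      n = 65; h = 1.0 / (n - 1)
      x = np.linspace(0, 1, n)
      f = np.pi**2 * np.sin(np.pi * x)         # exact solution: sin(pi x)
      u = np.zeros(n)
      for _ in range(10):
          u = two_grid(u, f, h)
      print(np.abs(u - np.sin(np.pi * x)).max())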

  11. High-fidelity simulations of CdTe vapor deposition from a bond-order potential-based molecular dynamics method

    NASA Astrophysics Data System (ADS)

    Zhou, X. W.; Ward, D. K.; Wong, B. M.; Doty, F. P.; Zimmerman, J. A.; Nielson, G. N.; Cruz-Campa, J. L.; Gupta, V. P.; Granata, J. E.; Chavez, J. J.; Zubia, D.

    2012-06-01

    CdTe is a key semiconductor for constructing the lowest-cost solar cells, and the CdTe-based Cd1-xZnxTe alloy is the leading semiconductor for radiation detection applications. The performance currently achieved with these materials, however, is still far below theoretical expectations, because the property-limiting nanoscale defects that easily form during the growth of CdTe crystals are difficult to explore in experiments. Here, we demonstrate the capability of a bond-order potential-based molecular dynamics method for predicting the crystalline growth of CdTe films during vapor deposition simulations. Such a method may begin to enable defects generated during vapor deposition of CdTe crystals to be accurately explored.

  12. Formability analysis of aluminum alloy sheets at elevated temperatures with numerical simulation based on the M-K method

    SciTech Connect

    Bagheriasl, Reza; Ghavam, Kamyar; Worswick, Michael

    2011-05-04

    The effect of temperature on the formability of aluminum alloy sheet is studied by developing forming limit diagrams (FLDs) for a 3000-series aluminum alloy using the Marciniak-Kuczynski (M-K) technique in numerical simulation. The numerical model is built in LS-DYNA and incorporates Barlat's YLD2000 anisotropic yield function and the temperature-dependent Bergstrom hardening law. Three temperatures are studied: room temperature, 250 °C, and 300 °C. For each temperature, various loading conditions are applied to the M-K defect model, and the effect of material anisotropy is considered by varying the defect angle. A simplified failure criterion is used to predict the onset of necking. Minor and major strains are obtained from the simulations and plotted for each temperature level. It is demonstrated that elevated temperature improves the forming limit of 3000-series aluminum alloy sheet.

  13. An automated method for predicting full-scale CO2 flood performance based on detailed pattern flood simulations

    SciTech Connect

    Rester, S.; Todd, M.R.

    1984-04-01

    A procedure is described for estimating the response of a field-scale CO2 flood from a limited number of simulations of pattern flood symmetry elements. This procedure accounts for areally varying reservoir properties, areally varying conditions when CO2 injection is initiated, phased conversion of injectors to CO2, and shut-in criteria for producers. Examples of the use of this procedure are given.

  14. DEA based neonatal lung simulator

    NASA Astrophysics Data System (ADS)

    Schlatter, Samuel; Haemmerle, Enrico; Chang, Robin; O'Brien, Benjamin M.; Gisby, Todd; Anderson, Iain

    2011-04-01

    To reduce the likelihood of ventilator-induced lung injury, a neonatal lung simulator based on dielectric elastomer actuators (DEAs) has been developed. DEAs are particularly suited to this application due to their lifelike response as well as their self-sensing ability. By actively controlling the DEA, the pressure and volume inside the lung simulator can be controlled, giving rise to active compliance control. Additionally, the capacitance of the DEA can be used as a measurement of volume, eliminating the integration errors that plague flow sensors. Based on simulations conducted with the FEA package ABAQUS and on experimental data, the characteristics of the lung simulator were explored. A relationship between volume and capacitance was derived based on the self-sensing of a bubble actuator and was then used to calculate the compliance of the experimental bubble actuator. The current results are promising and show that mimicking a neonatal lung with DEAs may be possible.

  15. Medical students’ satisfaction with the Applied Basic Clinical Seminar with Scenarios for Students, a novel simulation-based learning method in Greece

    PubMed Central

    2016-01-01

    Purpose: The integration of simulation-based learning (SBL) methods holds promise for improving the medical education system in Greece. The Applied Basic Clinical Seminar with Scenarios for Students (ABCS3) is a novel two-day SBL course that was designed by the Scientific Society of Hellenic Medical Students. The ABCS3 targeted undergraduate medical students and consisted of three core components: the case-based lectures, the ABCDE hands-on station, and the simulation-based clinical scenarios. The purpose of this study was to evaluate the general educational environment of the course, as well as the skills and knowledge acquired by the participants. Methods: Two sets of questions were distributed to the participants: the Dundee Ready Educational Environment Measure (DREEM) questionnaire and an internally designed feedback questionnaire (InEv). A multiple-choice examination was also distributed prior to the course and following its completion. A total of 176 participants answered the DREEM questionnaire, 56 the InEv, and 60 the MCQs. Results: The overall DREEM score was 144.61 (±28.05) out of 200. Delegates who participated in both the case-based lectures and the interactive scenarios core components scored higher than those who only completed the case-based lecture session (P=0.038). The mean overall feedback score was 4.12 (±0.56) out of 5. Students scored significantly higher on the post-test than on the pre-test (P<0.001). Conclusion: The ABCS3 was found to be an effective SBL program, as medical students reported positive opinions about their experiences and exhibited improvements in their clinical knowledge and skills. PMID:27012313

  16. Angioplasty simulation using ChainMail method

    NASA Astrophysics Data System (ADS)

    Le Fol, Tanguy; Acosta-Tamayo, Oscar; Lucas, Antoine; Haigron, Pascal

    2007-03-01

    Our work addresses transluminal angioplasty planning, aiming to bring patient-specific solutions to clinical problems. It focuses on simple simulation scenarios that take into account the macroscopic behaviors of stenoses: simulating the geometry and physics of balloon inflation while integrating tissue-analysis data and parameters for virtual tool-tissue interactions. In this context, three main behaviors have been identified: soft tissues crush completely under the balloon; calcified plaques do not deform but can move within deformable structures; and the vessel wall is compressed and tends to recover its original shape. We investigated the use of ChainMail, in which elements are linked to their neighbors through geometric constraints. Compared with time-consuming or low-realism methods, ChainMail provides a good compromise between physical and geometrical approaches. In this study, the constraints are defined from pixel densities in angio-CT images. The 2D method proposed in this paper first initializes the balloon in the blood vessel lumen; the balloon then inflates, and the propagation of moves gives an approximate tissue reaction. Finally, a minimal energy level is calculated to locally adjust element positions during an elastic relaxation stage. Preliminary experimental results obtained on 2D computed tomography (CT) images (100x100 pixels) show that the method is fast enough to handle a great number of linked elements. The simulation achieves real-time, realistic interactions, particularly for hard and soft plaques.
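
    A minimal 1-D ChainMail sketch (my illustration, not the authors' code): when one element moves, neighbors shift only as far as needed to keep inter-element spacing within [d_min, d_max], which is what makes move propagation so fast:

      # 1-D ChainMail move propagation with min/max spacing constraints.
      def chainmail_move(x, idx, new_pos, d_min=0.5, d_max=1.5):
          x = x[:]; x[idx] = new_pos
          for i in range(idx + 1, len(x)):     # propagate to the right
              gap = x[i] - x[i - 1]
              if gap < d_min:   x[i] = x[i - 1] + d_min
              elif gap > d_max: x[i] = x[i - 1] + d_max
              else: break                       # constraint satisfied: stop
          for i in range(idx - 1, -1, -1):     # propagate to the left
              gap = x[i + 1] - x[i]
              if gap < d_min:   x[i] = x[i + 1] - d_min
              elif gap > d_max: x[i] = x[i + 1] - d_max
              else: break
          return x

      chain = [float(i) for i in range(8)]     # elements at unit spacing
      print(chainmail_move(chain, 3, 5.2))     # push element 3 to the right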

  17. Identification of substance in complicated mixture of simulants under the action of THz radiation on the base of SDA (spectral dynamics analysis) method

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Krotkus, Arunas; Molis, Gediminas

    2010-10-01

    The SDA (spectral dynamics analysis) method, which analyzes the dynamics of the THz spectrum across the THz frequency range, is used for the detection and identification of substances with similar THz Fourier spectra (such substances are usually called simulants) in two- or three-component media. This method allows us to obtain a unique 2D THz signature of the substance, the spectrogram, and to analyze the dynamics of many spectral lines of the THz signal passed through or reflected from the substance from a single set of integral measurements, even when the measurements are made on short time intervals (less than 20 ps). For long time intervals (100 ps and more), the SDA method makes it possible to determine the relaxation times of excited energy levels of molecules. This provides a further means of identifying the substance, because relaxation times differ between the molecules of different substances. The signal is restored from its integral values using the SVD (singular value decomposition) technique. We consider three examples of PTFE pellets mixed with small amounts of L-tartaric acid and sucrose, at concentrations of about 5%-10%. Our investigations show that the spectrograms and the dynamics of the spectral lines of a THz pulse passed through pure PTFE differ from those of the compound medium containing PTFE and L-tartaric acid, sucrose, or both substances together. It is therefore possible to detect the presence of small amounts of additional substances in the sample even when their THz Fourier spectra are practically identical. The SDA method can thus be very effective for defense and security applications and for quality control in the pharmaceutical industry. We also show that, for simulants, auto- and cross-correlation functions have much poorer resolving power than the SDA method.
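
    A generic time-frequency sketch (not the authors' SDA implementation): a spectrogram of a synthetic two-line pulse shows how spectral-line dynamics can be tracked over time from a single recorded trace, including the different decay (relaxation) times of the lines:

      # Spectrogram of a synthetic two-line "THz" pulse; parameters illustrative.
      import numpy as np
      from scipy.signal import spectrogram

      fs = 20e12                                # 20 THz sampling rate (synthetic)
      t = np.arange(0, 100e-12, 1 / fs)         # 100 ps trace
      # Two decaying spectral lines with different relaxation times:
      sig = (np.exp(-t / 10e-12) * np.sin(2 * np.pi * 0.5e12 * t)
             + np.exp(-t / 40e-12) * np.sin(2 * np.pi * 1.2e12 * t))
      f, tt, Sxx = spectrogram(sig, fs=fs, nperseg=256, noverlap=192)
      # Each column of Sxx is a short-time spectrum; following one row tracks a
      # single line's decay, from which a relaxation time could be fitted.
      print(Sxx.shape)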

  18. Jacobian-free Newton Krylov discontinuous Galerkin method and physics-based preconditioning for nuclear reactor simulations

    SciTech Connect

    HyeongKae Park; Robert R. Nourgaliev; Richard C. Martineau; Dana A. Knoll

    2008-09-01

    We present high-order accurate spatiotemporal discretization of all-speed flow solvers using Jacobian-free Newton Krylov framework. One of the key developments in this work is the physics-based preconditioner for the all-speed flow, which makes use of traditional semi-implicit schemes. The physics-based preconditioner is developed in the primitive variable form, which allows a straightforward separation of physical phenomena. Numerical examples demonstrate that the developed preconditioner effectively reduces the number of the Krylov iterations, and the efficiency is independent of the Mach number and mesh sizes under a fixed CFL condition.

  19. Using LC-MS Based Methods for Testing the Digestibility of a Nonpurified Transgenic Membrane Protein in Simulated Gastric Fluid.

    PubMed

    Skinner, Wayne S; Phinney, Brett S; Herren, Anthony; Goodstal, Floyd J; Dicely, Isabel; Facciotti, Daniel

    2016-06-29

    The digestibility of a nonpurified transgenic membrane protein in pepsin was determined as part of the food safety evaluation of its resistance to digestion and allergenic potential. Delta-6-desaturase (D6D) from Saprolegnia diclina, a transmembrane protein expressed in safflower for the production of gamma-linolenic acid in the seed, could not be obtained in a pure, native form as normally required for this assay. As a novel approach, the endoplasmic reticulum isolated from immature seeds was digested in simulated gastric fluid (SGF), and the degradation of delta-6-desaturase was selectively followed by SDS-PAGE and targeted LC-MS/MS quantification using stable isotope-labeled peptides as internal standards. The digestion of delta-6-desaturase in SGF was shown to be both rapid and complete: less than 10% of the initial amount of D6D remained intact after 30 s, and no fragments large enough (>3 kDa) to elicit a type I allergenic response remained after 60 min. PMID:27255301

  20. A hybrid-Vlasov model based on the current advance method for the simulation of collisionless magnetized plasma

    SciTech Connect

    Valentini, F. (E-mail: valentin@fis.unical.it); Travnicek, P.; Califano, F.; Hellinger, P.; Mangeney, A.

    2007-07-01

    We present a numerical scheme for the integration of the Vlasov-Maxwell system of equations for a non-relativistic plasma in the hybrid approximation, where the Vlasov equation is solved for the ion distribution function and the electrons are treated as a fluid. In the Ohm equation for the electric field, the effects of electron inertia have been retained in order to include small-scale dynamics down to characteristic lengths of the order of the electron skin depth. The low-frequency approximation is used by neglecting the time derivative of the electric field, i.e., the displacement current in the Ampere equation. The numerical algorithm couples the splitting method proposed by Cheng and Knorr in 1976 [C.Z. Cheng, G. Knorr, J. Comput. Phys. 22 (1976) 330-351] with the current advance method (CAM) introduced by Matthews in 1994 [A.P. Matthews, J. Comput. Phys. 112 (1994) 102-116]. In its present version, the code solves the Vlasov-Maxwell equations in a five-dimensional phase space (2-D in physical space and 3-D in velocity space) and is implemented in a parallel version to exploit the computational power of modern massively parallel supercomputers. The structure of the algorithm and the coupling between the splitting method and the CAM method (extended to the hybrid case) are discussed in detail. Furthermore, in order to test the hybrid-Vlasov code, numerical results on the propagation and damping of linear ion-acoustic modes and on the time evolution of linear elliptically polarized Alfven waves (including the so-called whistler regime) are compared to the analytical solutions. Finally, the numerical results of the hybrid-Vlasov code on the parametric instability of Alfven waves are compared with those obtained using a two-fluid approach.
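
    The Cheng-Knorr splitting alternates advection of f(x,v) in x and in v; a minimal semi-Lagrangian sketch with linear interpolation is given below. The field solve is omitted, with a hypothetical constant force standing in for the coupled CAM field update:

      # 1D-1V Vlasov splitting: x-advection, v-advection, x-advection (Strang).
      import numpy as np

      nx, nv = 64, 64
      x = np.linspace(0, 2*np.pi, nx, endpoint=False)
      v = np.linspace(-4, 4, nv)
      f = np.exp(-v**2 / 2)[None, :] * (1 + 0.05*np.cos(x))[:, None]

      def advect_x(f, v, dt, dx):
          for j in range(f.shape[1]):              # each velocity column
              s = v[j] * dt / dx                   # shift in grid cells
              i = (np.arange(f.shape[0]) - s) % f.shape[0]
              lo = np.floor(i).astype(int); w = i - lo
              f[:, j] = (1-w)*f[lo, j] + w*f[(lo+1) % f.shape[0], j]
          return f

      def advect_v(f, E, dt, dv):
          s = E * dt / dv                          # uniform shift (fixed E here)
          i = np.arange(f.shape[1]) - s
          lo = np.clip(np.floor(i).astype(int), 0, f.shape[1]-2)
          w = np.clip(i - lo, 0.0, 1.0)
          return (1-w)*f[:, lo] + w*f[:, lo+1]

      dx, dv, dt = x[1]-x[0], v[1]-v[0], 0.05
      for _ in range(100):
          f = advect_x(f, v, dt/2, dx)
          f = advect_v(f, -0.1, dt, dv)            # hypothetical constant force
          f = advect_x(f, v, dt/2, dx)
      print(f.sum())                               # mass is (nearly) conserved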

  1. A fluid model simulation of a simplified plasma limiter based on spectral-element time-domain method

    SciTech Connect

    Qian, Cheng; Ding, Dazhi; Fan, Zhenhong; Chen, Rushan

    2015-03-15

    A simplified plasma limiter prototype is proposed, and a fluid model coupled with Maxwell's equations is established to describe the operating mechanism of the plasma limiter. A three-dimensional (3-D) simplified sandwich-structure plasma limiter model is analyzed with the spectral-element time-domain (SETD) method. The field breakdown thresholds of air and argon at different frequencies are predicted and compared with experimental data, showing good agreement for gas microwave breakdown discharge problems. Numerical results demonstrate that the two-layer plasma limiter (plasma-slab-plasma) has better protective characteristics than a one-layer plasma limiter (slab-plasma-slab) with the same length of gas chamber.

  2. A fluid model simulation of a simplified plasma limiter based on spectral-element time-domain method

    NASA Astrophysics Data System (ADS)

    Qian, Cheng; Ding, Dazhi; Fan, Zhenhong; Chen, Rushan

    2015-03-01

    A simplified plasma limiter prototype is proposed, and a fluid model coupled with Maxwell's equations is established to describe the operating mechanism of the plasma limiter. A three-dimensional (3-D) simplified sandwich-structure plasma limiter model is analyzed with the spectral-element time-domain (SETD) method. The field breakdown thresholds of air and argon at different frequencies are predicted and compared with experimental data, showing good agreement for gas microwave breakdown discharge problems. Numerical results demonstrate that the two-layer plasma limiter (plasma-slab-plasma) has better protective characteristics than a one-layer plasma limiter (slab-plasma-slab) with the same length of gas chamber.

  3. Methods of sound simulation and applications in flight simulators

    NASA Technical Reports Server (NTRS)

    Gaertner, K. P.

    1980-01-01

    An overview of methods for electronically synthesizing sounds is presented. A given amount of hardware and computer capacity places an upper limit on the degree of realism attainable in sound simulation. Good sound realism for aircraft simulators can be especially expensive because of the complexity of flight sounds and their changing patterns through time. Nevertheless, the flight simulator developed at the Research Institute for Human Engineering, West Germany, shows that it is possible to design an inexpensive sound simulator with the required acoustic properties using analog computer elements. The characteristics of the sub-sound elements produced by this simulator for take-off, cruise, and approach are discussed.

  4. A Method to Estimate the Size and Characteristics of HIV-positive Populations Using an Individual-based Stochastic Simulation Model

    PubMed Central

    Nakagawa, Fumiyo; van Sighem, Ard; Thiebaut, Rodolphe; Smith, Colette; Ratmann, Oliver; Cambiano, Valentina; Albert, Jan; Amato-Gauci, Andrew; Bezemer, Daniela; Campbell, Colin; Commenges, Daniel; Donoghoe, Martin; Ford, Deborah; Kouyos, Roger; Lodwick, Rebecca; Lundgren, Jens; Pantazis, Nikos; Pharris, Anastasia; Quinten, Chantal; Thorne, Claire; Touloumi, Giota; Delpech, Valerie; Phillips, Andrew

    2016-01-01

    It is important not only to collect epidemiologic data on HIV but also to fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900–45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160–17,350) were undiagnosed. There were an estimated 3,210 (1,730–5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population are thought to have viral load <500 copies/ml. In the pseudo-epidemic example, the greater the data availability used to calibrate the model, the narrower the plausibility ranges and the closer the estimates to the true number. We demonstrate that our method can be applied to settings with less data; however, plausibility ranges for estimates will be wider, reflecting the greater uncertainty of the data used to fit the model. PMID:26605814
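
    For readers unfamiliar with the calibration step, the toy sketch below shows approximate Bayesian computation (ABC) rejection applied to an individual-based epidemic model: draw a parameter from its prior, simulate, and accept the draw if a summary statistic lands near the observed value. The model, prior, tolerance, and "observed" count are all invented for illustration and are far simpler than the HIV model above.

```python
# Toy ABC rejection sketch for calibrating an individual-based model.
# Every number here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)

def simulate_epidemic(beta, n=1000, steps=52):
    """Crude stochastic SI model; returns the number ever infected."""
    infected = np.zeros(n, dtype=bool)
    infected[:5] = True                        # seed cases
    for _ in range(steps):
        p = beta * infected.mean()             # per-capita weekly infection risk
        infected |= (~infected) & (rng.random(n) < p)
    return infected.sum()

observed = 480                                 # pretend surveillance estimate
accepted = []
for _ in range(4000):
    beta = rng.uniform(0.0, 1.0)               # draw from the prior
    if abs(simulate_epidemic(beta) - observed) <= 30:   # ABC tolerance
        accepted.append(beta)

if accepted:
    print(f"posterior mean beta: {np.mean(accepted):.3f} from {len(accepted)} draws")
else:
    print("no draws accepted; widen the tolerance")
```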

  5. A Method to Estimate the Size and Characteristics of HIV-positive Populations Using an Individual-based Stochastic Simulation Model.

    PubMed

    Nakagawa, Fumiyo; van Sighem, Ard; Thiebaut, Rodolphe; Smith, Colette; Ratmann, Oliver; Cambiano, Valentina; Albert, Jan; Amato-Gauci, Andrew; Bezemer, Daniela; Campbell, Colin; Commenges, Daniel; Donoghoe, Martin; Ford, Deborah; Kouyos, Roger; Lodwick, Rebecca; Lundgren, Jens; Pantazis, Nikos; Pharris, Anastasia; Quinten, Chantal; Thorne, Claire; Touloumi, Giota; Delpech, Valerie; Phillips, Andrew

    2016-03-01

    It is important not only to collect epidemiologic data on HIV but also to fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900-45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160-17,350) were undiagnosed. There were an estimated 3,210 (1,730-5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population are thought to have viral load <500 copies/ml. In the pseudo-epidemic example, the greater the data availability used to calibrate the model, the narrower the plausibility ranges and the closer the estimates to the true number. We demonstrate that our method can be applied to settings with less data; however, plausibility ranges for estimates will be wider, reflecting the greater uncertainty of the data used to fit the model.

  6. Large-Eddy Simulation and Multigrid Methods

    SciTech Connect

    Falgout,R D; Naegle,S; Wittum,G

    2001-06-18

    A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to model the unresolved scales of motion and are compared with each other on different grids. The behavior of the models is shown, and the effect of adaptive grid refinement is investigated. Parallelization aspects are also addressed.

  7. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    NASA Astrophysics Data System (ADS)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
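
    The sketch below imitates the two parallelism levels described: a thread pool hands characteristic tracks to workers dynamically (task-based load balancing), while the per-segment attenuation update is vectorized across energy groups. Cross sections, sources, and track geometry are synthetic stand-ins, not data from the proxy applications.

```python
# Two-level parallelism sketch for an MOC-style transport sweep.
# All physics data below are invented placeholders.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

groups = 64
rng = np.random.default_rng(0)
sigma_t = rng.uniform(0.1, 2.0, groups)       # total cross section per group

def sweep_track(track):
    """Attenuation sweep along one characteristic track, all groups at once."""
    segments, q = track                        # segment lengths, flat source
    psi = np.zeros(groups)
    tally = np.zeros(groups)
    for s in segments:                         # sequential along the track
        att = np.exp(-sigma_t * s)             # vectorized over energy groups
        psi = psi * att + q * (1.0 - att) / sigma_t
        tally += psi * s                       # crude scalar-flux tally
    return tally

tracks = [(rng.uniform(0.01, 0.1, rng.integers(50, 200)),
           rng.uniform(0.5, 1.5, groups)) for _ in range(1000)]

with ThreadPoolExecutor(max_workers=8) as pool:
    flux = sum(pool.map(sweep_track, tracks))  # tracks assigned dynamically
print(flux[:4])
```

    In the real proxy applications the inner loop maps onto SIMD lanes in compiled code; numpy's vectorization plays that role here.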

  8. Combined quantum mechanics and molecular mechanics simulation of Ca2+/ammonia solution based on the ONIOM-XS method: Octahedral coordination and implication to biology

    NASA Astrophysics Data System (ADS)

    Kerdcharoen, Teerakiat; Morokuma, Keiji

    2003-05-01

    An extension of the ONIOM (Own N-layered Integrated molecular Orbital and molecular Mechanics) method [M. Svensson, S. Humbel, R. D. J. Froese, T. Matsubara, S. Sieber, and K. Morokuma, J. Phys. Chem. 100, 19357 (1996)] for simulation in the condensed phase, called ONIOM-XS (XS=eXtension to Solvation) [T. Kerdcharoen and K. Morokuma, Chem. Phys. Lett. 355, 257 (2002)], was applied to investigate the coordination of Ca2+ in liquid ammonia. A coordination number of 6 is found. Previous simulations based on a pair potential or a pair potential plus three-body correction gave values of 9 and 8.2, respectively. The new value is the same as the coordination number most frequently listed in the Cambridge Structural Database (CSD) and the Protein Data Bank (PDB). The N-Ca-N angular distribution reveals a near-octahedral coordination structure. Inclusion of many-body interactions (which amount to 25% of the pair interactions) in the potential energy surface is essential for obtaining a reasonable coordination number. Analyses of the metal coordination in water, in water-ammonia mixtures, and in proteins reveal that a cation/ammonia solution can be used to approximate the coordination environment in proteins.

  9. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
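
    A minimal re-creation of the simulation design, under assumed parameters: one random on/off behavior stream is scored by momentary time sampling (MTS), partial-interval recording (PIR), and whole-interval recording (WIR), and each estimate is compared against the true fraction of time occupied. PIR's overestimation and WIR's underestimation fall out immediately.

```python
# Interval-sampling error sketch; session length, interval size, and the
# event process are illustrative assumptions, not the study's parameters.
import numpy as np

rng = np.random.default_rng(2)
T, dt = 600.0, 0.1                       # 10-min session on a 0.1 s grid
t = np.arange(0, T, dt)

# Random on/off behavior stream: alternating exponential state durations
spans, now, on = [], 0.0, False
while now < T:
    dur = rng.exponential(8.0 if on else 20.0)
    spans.append((now, min(now + dur, T), on))
    now, on = now + dur, not on
occupied = np.zeros_like(t, dtype=bool)
for a, b, s in spans:
    if s:
        occupied[(t >= a) & (t < b)] = True

interval = 10.0                          # 10 s observation intervals
edges = np.arange(0, T + interval, interval)
idx = np.digitize(t, edges) - 1          # interval index of each grid point
n_int = len(edges) - 1
pir = np.array([occupied[idx == k].any() for k in range(n_int)])   # any part
wir = np.array([occupied[idx == k].all() for k in range(n_int)])   # whole
mts = np.array([occupied[idx == k][-1] for k in range(n_int)])     # moment

true_frac = occupied.mean()
for name, est in [("MTS", mts.mean()), ("PIR", pir.mean()), ("WIR", wir.mean())]:
    print(f"{name}: est={est:.3f} true={true_frac:.3f} error={est - true_frac:+.3f}")
```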

  10. Constraint methods that accelerate free-energy simulations of biomolecules.

    PubMed

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions. PMID:26723628

  11. Constraint methods that accelerate free-energy simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.

    2015-12-01

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  12. Constraint methods that accelerate free-energy simulations of biomolecules

    SciTech Connect

    Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  13. Molecular dynamic simulation methods for anisotropic liquids.

    PubMed

    Aoki, Keiko M; Yoneya, Makoto; Yokoyama, Hiroshi

    2004-03-22

    Methods of molecular dynamics simulations for anisotropic molecules are presented. The new methods, with an anisotropic factor in the cell dynamics, dramatically reduce the artifacts related to cell shapes and overcome the difficulties of simulating anisotropic molecules under constant hydrostatic pressure or constant volume. The methods are especially effective for anisotropic liquids, such as smectic liquid crystals and membranes, in which the stacks of layers are compressible (elastic in the direction perpendicular to the layers) while the layer itself is liquid and only elastic under uniform compressive force. The methods can be used for crystals and isotropic liquids as well.

  14. Inversion based on computational simulations

    SciTech Connect

    Hanson, K.M.; Cunningham, G.S.; Saquib, S.S.

    1998-09-01

    A standard approach to solving inversion problems that involve many parameters uses gradient-based optimization to find the parameters that best match the data. The authors discuss enabling techniques that facilitate application of this approach to large-scale computational simulations, which are the only way to investigate many complex physical phenomena. Such simulations may not seem to lend themselves to calculation of the gradient with respect to numerous parameters. However, adjoint differentiation allows one to efficiently compute the gradient of an objective function with respect to all the variables of a simulation. When combined with advanced gradient-based optimization algorithms, adjoint differentiation permits one to solve very large problems of optimization or parameter estimation. These techniques will be illustrated through the simulation of the time-dependent diffusion of infrared light through tissue, which has been used to perform optical tomography. The techniques discussed have a wide range of applicability to modeling including the optimization of models to achieve a desired design goal.
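
    The sketch below shows the adjoint trick on a toy problem: one forward sweep of an explicit 1-D diffusion simulation (storing states) plus one backward sweep yields the gradient of a misfit with respect to every per-cell diffusivity, at roughly the cost of two simulations and independent of the number of parameters. A finite-difference check confirms one component. The setup is our assumption, not the paper's optical-tomography model.

```python
# Hand-coded adjoint differentiation through an explicit diffusion solver.
import numpy as np

n, steps, dt = 50, 200, 1e-4
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2     # 1-D Laplacian

def forward(d, u0):
    """Explicit time stepping u <- u + dt * d * (L u); states are stored."""
    us = [u0]
    for _ in range(steps):
        us.append(us[-1] + dt * d * (L @ us[-1]))
    return us

rng = np.random.default_rng(3)
d = rng.uniform(0.5, 1.5, n)                 # per-cell diffusivity field
u0 = np.sin(np.pi * np.linspace(0, 1, n))
target = 0.5 * u0

us = forward(d, u0)
lam = us[-1] - target                        # adjoint terminal condition
grad = np.zeros(n)
for un in reversed(us[:-1]):                 # backward (adjoint) sweep
    grad += dt * (L @ un) * lam              # accumulate dJ/dd
    lam = lam + dt * L.T @ (d * lam)         # propagate adjoint state

# Finite-difference check of one gradient component
i, eps = 7, 1e-6
dp = d.copy(); dp[i] += eps
Jp = 0.5 * np.sum((forward(dp, u0)[-1] - target) ** 2)
J0 = 0.5 * np.sum((us[-1] - target) ** 2)
print("adjoint:", grad[i], " finite difference:", (Jp - J0) / eps)
```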

  15. Meta-Analysis of a Continuous Outcome Combining Individual Patient Data and Aggregate Data: A Method Based on Simulated Individual Patient Data

    ERIC Educational Resources Information Center

    Yamaguchi, Yusuke; Sakamoto, Wataru; Goto, Masashi; Staessen, Jan A.; Wang, Jiguang; Gueyffier, Francois; Riley, Richard D.

    2014-01-01

    When some trials provide individual patient data (IPD) and the others provide only aggregate data (AD), meta-analysis methods for combining IPD and AD are required. We propose a method that reconstructs the missing IPD for AD trials by a Bayesian sampling procedure and then applies an IPD meta-analysis model to the mixture of simulated IPD and…

  16. A simple method for simulating gasdynamic systems

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.

    1991-01-01

    A simple method for performing digital simulation of gasdynamic systems is presented. The approach is somewhat intuitive and requires some knowledge of the physics of the problem as well as an understanding of finite difference theory. The method is shown explicitly in appendix A, which is taken from P.J. Roache, 'Computational Fluid Dynamics,' Hermosa Publishers, 1982. The resulting method is relatively fast, though it sacrifices some accuracy.
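
    In the same spirit (simple, fast, knowingly approximate), here is a textbook first-order upwind update for 1-D linear advection. It is a generic example of the finite-difference flavor involved, not the particular scheme reproduced in the report's appendix.

```python
# First-order upwind advection: fast, stable under the CFL condition, and
# deliberately diffusive (it "sacrifices some accuracy").
import numpy as np

nx = 200
c, dx = 1.0, 1.0 / 200
dt = 0.8 * dx / c                        # CFL number 0.8
x = np.linspace(0, 1, nx)
u = np.exp(-200 * (x - 0.3) ** 2)        # Gaussian pulse

for _ in range(100):
    u[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])   # upwind update
    u[0] = u[-1]                         # crude periodic inflow

print("peak after transport:", u.max())  # < 1 due to numerical diffusion
```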

  17. Spectral Methods in General Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Garrison, David

    2012-03-01

    In this talk I discuss the use of spectral methods in improving the accuracy of a General Relativistic Magnetohydrodynamic (GRMHD) computer code. I introduce SpecCosmo, a GRMHD code developed as a Cactus arrangement at UHCL, and show simulation results using both Fourier spectral methods and finite differencing. This work demonstrates the use of spectral methods with the FFTW 3.3 Fast Fourier Transform package integrated with the Cactus Framework to perform spectral differencing using MPI.
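
    The core operation, spectral differencing via the FFT, fits in a few lines of single-process numpy; the talk's code uses FFTW 3.3 with MPI inside Cactus, so this is only the mathematical kernel.

```python
# Fourier spectral derivative on a periodic grid.
import numpy as np

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(3 * x)

k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi    # angular wavenumbers
df = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))   # spectral d/dx

print("max error vs 3*cos(3x):", np.max(np.abs(df - 3 * np.cos(3 * x))))
```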

  18. Rainfall Simulation: methods, research questions and challenges

    NASA Astrophysics Data System (ADS)

    Ries, J. B.; Iserloh, T.

    2012-04-01

    In erosion research, rainfall simulations are used for the improvement of process knowledge as well as in the field for the assessment of overland flow generation, infiltration, and erosion rates. In all these fields of research, rainfall experiments have become an indispensable part of the research methods. In this context, small portable rainfall simulators with small test-plot sizes of one square-meter or even less, and devices of low weight and water consumption are in demand. Accordingly, devices with manageable technical effort like nozzle-type simulators seem to prevail against larger simulators. The reasons are obvious: lower costs and less time consumption needed for mounting enable a higher repetition rate. Regarding the high number of research questions, of different fields of application, and not least also due to the great technical creativity of our research staff, a large number of different experimental setups is available. Each of the devices produces a different rainfall, leading to different kinetic energy amounts influencing the soil surface and accordingly, producing different erosion results. Hence, important questions contain the definition, the comparability, the measurement and the simulation of natural rainfall and the problem of comparability in general. Another important discussion topic will be the finding of an agreement on an appropriate calibration method for the simulated rainfalls, in order to enable a comparison of the results of different rainfall simulator set-ups. In most of the publications, only the following "nice" sentence can be read: "Our rainfall simulator generates a rainfall spectrum that is similar to natural rainfall!". The most substantial and critical properties of a simulated rainfall are the drop-size distribution, the fall velocities of the drops, and the spatial distribution of the rainfall on the plot-area. In a comparison of the most important methods, the Laser Distrometer turned out to be the most up

  19. Reduced Basis Method for Nanodevices Simulation

    SciTech Connect

    Pau, George Shu Heng

    2008-05-23

    Ballistic transport simulation in nanodevices, which involves self-consistently solving a coupled Schrodinger-Poisson system of equations, is usually computationally intensive. Here, we propose coupling the reduced basis method with the subband decomposition method to improve the overall efficiency of the simulation. By exploiting an a posteriori error estimation procedure and a greedy sampling algorithm, we are able to design an algorithm where the computational cost is reduced significantly. In addition, the computational cost only grows marginally with the number of grid points in the confined direction.
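
    A toy version of the greedy construction: from a training set of parametrized solution snapshots, repeatedly add the snapshot that the current basis approximates worst. Here the exact projection error stands in for the a posteriori error estimator, and the parametrized profiles are invented rather than Schrodinger-Poisson solutions.

```python
# Greedy reduced-basis construction over a synthetic training set.
import numpy as np

n, n_train = 200, 100
x = np.linspace(0, 1, n)
# Training snapshots: solution-like profiles parametrized by mu
snapshots = np.array([np.sin(np.pi * x) / (1.0 + mu * x)
                      for mu in np.linspace(0.1, 10.0, n_train)])

basis = np.zeros((0, n))                  # orthonormal rows
for _ in range(8):                        # greedy sweep
    proj = snapshots @ basis.T @ basis    # projection onto current basis
    err = np.linalg.norm(snapshots - proj, axis=1)
    worst = int(np.argmax(err))           # stand-in for the error estimator
    v = snapshots[worst] - proj[worst]    # residual is orthogonal to basis
    basis = np.vstack([basis, v / np.linalg.norm(v)])
    print(f"basis size {len(basis)}: worst training error {err[worst]:.2e}")
```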

  20. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.

  1. Bridging the gap: simulations meet knowledge bases

    NASA Astrophysics Data System (ADS)

    King, Gary W.; Morrison, Clayton T.; Westbrook, David L.; Cohen, Paul R.

    2003-09-01

    Tapir and Krill are declarative languages for specifying actions and agents, respectively, that can be executed in simulation. As such, they bridge the gap between strictly declarative knowledge bases and strictly executable code. Tapir and Krill components can be combined to produce models of activity which can answer questions about mechanisms and processes using conventional inference methods and simulation. Tapir was used in DARPA's Rapid Knowledge Formation (RKF) project to construct models of military tactics from the Army Field Manual FM3-90. These were then used to build Courses of Action (COAs) which could be critiqued by declarative reasoning or via Monte Carlo simulation. Tapir and Krill can be read and written by non-knowledge engineers, making them an excellent vehicle for Subject Matter Experts to build and critique knowledge bases.

  2. Effective medium based optical analysis with finite element method simulations to study photochromic transitions in Ag-TiO2 nanocomposite films

    NASA Astrophysics Data System (ADS)

    Abhilash, T.; Balasubrahmaniyam, M.; Kasiviswanathan, S.

    2016-03-01

    Photochromic transitions in silver nanoparticle (AgNP)-embedded titanium dioxide (TiO2) films under green light illumination are marked by a reduction in strength and a blue shift in the position of the localized surface plasmon resonance (LSPR) associated with AgNPs. These transitions, which happen on the sub-nanometer length scale, have been analysed using the variations observed in the effective dielectric properties of the Ag-TiO2 nanocomposite films in response to the size reduction of AgNPs and subsequent changes in the surrounding medium due to photo-oxidation. The Bergman-Milton formulation based on the spectral density approach is used to extract dielectric properties and information about the geometrical distribution of the effective medium. Combined with finite element method simulations, we isolate the effects due to the change in average size of the nanoparticles from those due to the change in the dielectric function of the surrounding medium. By analysing the dynamics of photochromic transitions in the effective medium, we conclude that the observed blue shift in LSPR is mainly because of the change in the dielectric function of the surrounding medium, while a shape-preserving effective size reduction of the AgNPs causes the decrease in the strength of LSPR.

  3. A method for simulating a flux-locked DC SQUID

    NASA Technical Reports Server (NTRS)

    Gutt, G. M.; Kasdin, N. J.; Condron, M. R., II; Muhlfelder, B.; Lockhart, J. M.; Cromar, M. W.

    1993-01-01

    The authors describe a computationally efficient and accurate method for simulating a dc SQUID's V-Phi (voltage-flux) and I-V characteristics which has proven valuable in evaluating and improving various SQUID readout methods. The simulation of the SQUID is based on fitting of previously acquired data from either a real or a modeled device using the Fourier transform of the V-Phi curve. This method does not predict SQUID behavior, but rather is a way of replicating a known behavior efficiently with portability into various simulation programs such as SPICE. The authors discuss the methods used to simulate the SQUID and the flux-locking control electronics, and present specific examples of this approach. Results include an estimate of the slew rate and linearity of a simple flux-locked loop using a characterized dc SQUID.
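
    A minimal version of the fitting idea: represent a measured (here, synthesized) V-Phi curve by a few Fourier coefficients, then drive the fitted model inside a simple integrator flux-locked loop. The coefficient values, loop gain, and units are illustrative assumptions, not the characterized device of the paper.

```python
# Fourier-series V-Phi model of a dc SQUID plus a toy flux-locked loop.
import numpy as np

phi0 = 1.0                                 # flux quantum, arbitrary units
phi_meas = np.linspace(0, phi0, 256, endpoint=False)
v_meas = 0.5 * np.sin(2 * np.pi * phi_meas) + 0.1 * np.sin(6 * np.pi * phi_meas)

c = np.fft.rfft(v_meas) / len(v_meas)      # Fourier fit of the V-Phi curve

def v_squid(phi, n_harm=8):
    """Evaluate the fitted periodic V-Phi model at arbitrary flux."""
    k = np.arange(1, n_harm + 1)
    ang = 2 * np.pi * k * (phi / phi0)
    return c[0].real + 2 * np.sum(c[1:n_harm + 1].real * np.cos(ang)
                                  - c[1:n_harm + 1].imag * np.sin(ang))

# Flux-locked loop: integrator feedback drives the SQUID output to null
applied, feedback, gain = 0.3 * phi0, 0.0, 0.05
for step in range(500):
    v = v_squid(applied + feedback)
    feedback -= gain * v                   # integrate the error signal
print("residual flux:", (applied + feedback) % phi0)
```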

  4. Method for Constructing Standardized Simulated Root Canals.

    ERIC Educational Resources Information Center

    Schulz-Bongert, Udo; Weine, Franklin S.

    1990-01-01

    The construction of visual and manipulative aids, clear resin blocks with root-canal-like spaces, for simulation of root canals is explained. Time, materials, and techniques are discussed. The method allows for comparison of canals, creation of any configuration of canals, and easy presentation during instruction. (MSE)

  5. A Simulation Method Measuring Psychomotor Nursing Skills.

    ERIC Educational Resources Information Center

    McBride, Helena; And Others

    1981-01-01

    The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.…

  6. A Method for Simulating Bank Reconciliation

    ERIC Educational Resources Information Center

    Klemin, Vernon W.

    1974-01-01

    A method of simulation to tie check writing, making deposits, finding outstanding checks, receiving bank statements, and bank reconciliation into a process is presented as a way to convey to students a feeling of a procedure completed. A step-by-step teaching procedure and examples of bank statements are included. (AG)

  7. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods aimed at improving computational modeling capabilities for multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness, and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics, and smoothed particle hydrodynamics, exploiting their unifying concepts, such as the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.

  8. Mixed-level optical simulations of light-emitting diodes based on a combination of rigorous electromagnetic solvers and Monte Carlo ray-tracing methods

    NASA Astrophysics Data System (ADS)

    Bahl, Mayank; Zhou, Gui-Rong; Heller, Evan; Cassarly, William; Jiang, Mingming; Scarmozzino, Robert; Gregory, G. Groot; Herrmann, Daniel

    2015-04-01

    Over the last two decades, extensive research has been done to improve light-emitting diode (LED) designs. Increasingly complex designs have necessitated the use of computational simulations, which have provided numerous insights for improving LED performance. Depending upon the focus of the design and the scale of the problem, simulations are carried out using rigorous electromagnetic (EM) wave optics-based techniques, such as finite-difference time-domain and rigorous coupled wave analysis, or through ray optics-based techniques such as Monte Carlo ray-tracing (RT). The former are typically used for modeling nanostructures on the LED die, and the latter for modeling encapsulating structures, die placement, back-reflection, and phosphor downconversion. This paper presents the use of a mixed-level simulation approach that unifies the use of EM wave-level and ray-level tools. This approach uses rigorous EM wave-based tools to characterize the nanostructured die and generates both a bidirectional scattering distribution function and a far-field angular intensity distribution. These characteristics are then incorporated into the RT simulator to obtain the overall performance. Such a mixed-level approach allows for comprehensive modeling of the optical characteristics of LEDs, including polarization effects, and can potentially lead to more accurate performance predictions than individual modeling tools alone.

  9. Mesoscopic Simulation Methods for Polymer Dynamics

    NASA Astrophysics Data System (ADS)

    Larson, Ronald

    2015-03-01

    We assess the accuracy and efficiency of mesoscopic simulation methods, namely Brownian Dynamics (BD), Stochastic Rotation Dynamics (SRD) and Dissipative Particle Dynamics (DPD), for polymers in solution at equilibrium and in flows in microfluidic geometries. Both SRD and DPD use solvent ``particles'' to carry momentum, and so account automatically for hydrodynamic interactions both within isolated polymer coils, and with other polymer molecules and with nearby solid boundaries. We assess quantitatively the effects of artificial particle inertia and fluid compressibility and show that they can be made small with appropriate choice of simulation parameters. We then use these methods to study flow-induced migration of polymer chains produced by: 1) hydrodynamic interactions, 2) streamline curvature or stress-gradients, and 3) convection of wall depletion zones. We show that huge concentration gradients can be produced by these mechanisms in microfluidic geometries that can be exploited for separation of polymers by size in periodic contraction-expansion geometries. We also assess the range of conditions for which BD, SRD or DPD is preferable for mesoscopic simulations. Finally, we show how such methods can be used to simulate quantitatively the swimming of micro-organisms such as E. coli. In collaboration with Lei Jiang and Tongyang Zhao, University of Michigan, Ann Arbor, MI.

  10. Discontinuous Galerkin Methods for Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Collis, S. Scott

    2002-01-01

    A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.

  11. Twitter's tweet method modelling and simulation

    NASA Astrophysics Data System (ADS)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper proposes a set of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them, and uses the design science research methodology for the proof of concept of the models and modelling processes. The models were developed for a Twitter marketing agent/company and tested in real circumstances with real numbers, and were finalized through a number of revisions and iterations of design, development, simulation, testing, and evaluation. The paper also addresses the methods that best suit organized promotion through targeting on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are validated by the management of the company organization. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.

  12. Computer Based Simulation of Laboratory Experiments.

    ERIC Educational Resources Information Center

    Edward, Norrie S.

    1997-01-01

    Examines computer based simulations of practical laboratory experiments in engineering. Discusses the aims and achievements of lab work (cognitive, process, psychomotor, and affective); types of simulations (model building and behavioral); and the strengths and weaknesses of simulations. Describes the development of a centrifugal pump simulation,…

  13. Automated Simulation Updates based on Flight Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Ward, David G.

    2007-01-01

    A statistically-based method for using flight data to update aerodynamic data tables used in flight simulators is explained and demonstrated. A simplified wind-tunnel aerodynamic database for the F/A-18 aircraft is used as a starting point. Flight data from the NASA F-18 High Alpha Research Vehicle (HARV) is then used to update the data tables so that the resulting aerodynamic model characterizes the aerodynamics of the F-18 HARV. Prediction cases are used to show the effectiveness of the automated method, which requires no ad hoc adjustments by the analyst.

  14. A method to produce and validate a digitally reconstructed radiograph-based computer simulation for optimisation of chest radiographs acquired with a computed radiography imaging system

    PubMed Central

    Moore, C S; Liney, G P; Beavis, A W; Saunderson, J R

    2011-01-01

    Objectives: The purpose of this study was to develop and validate a computer model to produce realistic simulated computed radiography (CR) chest images using CT data sets of real patients. Methods: Anatomical noise, which is the limiting factor in determining pathology in chest radiography, is realistically simulated by the CT data, and frequency-dependent noise has been added post-digitally reconstructed radiograph (DRR) generation to simulate exposure reduction. Realistic scatter and scatter fractions were measured in images of a chest phantom acquired on the CR system simulated by the computer model and added post-DRR calculation. Results: The model has been validated with a phantom and patients and shown to provide predictions of signal-to-noise ratios (SNRs), tissue-to-rib ratios (TRRs: a measure of soft tissue pixel value to that of rib) and pixel value histograms that lie within the range of values measured with patients and the phantom. The maximum difference in measured SNR to that calculated was 10%. TRR values differed by a maximum of 1.3%. Conclusion: Experienced image evaluators have responded positively to the DRR images, are satisfied they contain adequate anatomical features and have deemed them clinically acceptable. Therefore, the computer model can be used by image evaluators to grade chest images presented at different tube potentials and doses in order to optimise image quality and patient dose for clinical CR chest radiographs without the need for repeat patient exposures. PMID:21933979

  15. Vibratory compaction method for preparing lunar regolith drilling simulant

    NASA Astrophysics Data System (ADS)

    Chen, Chongbin; Quan, Qiquan; Deng, Zongquan; Jiang, Shengyuan

    2016-07-01

    Drilling and coring is an effective way to acquire lunar regolith samples along the depth direction. To facilitate the modeling and simulation of lunar drilling, ground verification experiments for drilling and coring should be performed using lunar regolith simulant. The simulant should mimic actual lunar regolith, and the distribution of its mechanical properties should vary along the longitudinal direction. Furthermore, an appropriate preparation method is required to ensure that the simulant has consistent mechanical properties so that the experimental results can be repeatable. Vibratory compaction actively changes the relative density of a raw material, making it suitable for building a multilayered drilling simulant. It is necessary to determine the relation between the preparation parameters and the expected mechanical properties of the drilling simulant. A vibratory compaction model based on the ideal elastoplastic theory is built to represent the dynamical properties of the simulant during compaction. Preparation experiments indicated that the preparation method can be used to obtain drilling simulant with the desired mechanical property distribution along the depth direction.

  16. Density based visualization for molecular simulation.

    PubMed

    Rozmanov, Dmitri; Baoukina, Svetlana; Tieleman, D Peter

    2014-01-01

    Molecular visualization of structural information obtained from computer simulations is an important part of research work flow. A good visualization technique should be capable of eliminating redundant information and highlight important effects clarifying the key phenomena in the system. Current methods of presenting structural data are mostly limited to variants of the traditional ball-and-stick representation. This approach becomes less attractive when very large biological systems are simulated at microsecond timescales, and is less effective when coarse-grained models are used. Real time rendering of such large systems becomes a difficult task; the amount of information in one single frame of a simulation trajectory is enormous given the large number of particles; at the same time, each structure contains information about one configurational point of the system and no information about statistical weight of this specific configuration. In this paper we report a novel visualization technique based on spatial particle densities. The atomic densities are sampled on a high resolution 3-dimensional grid along a relatively short molecular dynamics trajectory using hundreds of configurations. The density information is then analyzed and visualized using the open-source ParaView software. The performance and capability of the method are demonstrated on two large systems simulated with the MARTINI coarse-grained force field: a lipid nanoparticle for delivering siRNA molecules and monolayers with a complex composition under conditions that induce monolayer collapse.
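
    The density-sampling step reduces to accumulating particle positions from many trajectory frames into a 3-D histogram, which can then be handed to a volume renderer (ParaView, in the paper). In the sketch below, random-walk "frames" stand in for a real MD trajectory.

```python
# Spatial particle-density accumulation over trajectory frames.
# The toy dynamics and box dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
n_atoms, n_frames, bins = 500, 200, 64
box = np.array([10.0, 10.0, 10.0])

pos = rng.uniform(0, box, (n_atoms, 3))
grid = np.zeros((bins, bins, bins))
for _ in range(n_frames):                  # average density over many frames
    pos = (pos + rng.normal(0, 0.1, pos.shape)) % box   # toy dynamics step
    h, _ = np.histogramdd(pos, bins=bins, range=[(0, b) for b in box])
    grid += h
grid /= n_frames

print("mean occupancy per voxel:", grid.mean())
# `grid` could now be written out as a volume file for rendering in ParaView.
```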

  17. Matching methods to create paired survival data based on an exposure occurring over time: a simulation study with application to breast cancer

    PubMed Central

    2014-01-01

    Background: Paired survival data are often used in clinical research to assess the prognostic effect of an exposure. Matching generates correlated censored data, the expectation being that the paired subjects differ only in the exposure. Creating pairs when the exposure is an event occurring over time can be tricky. We applied a commonly used method, Method 1, which creates pairs a posteriori, and propose an alternative method, Method 2, which creates pairs in “real-time”. We used two semi-parametric models devoted to correlated censored data to estimate the average effect of the exposure, HR̄(t): the Holt and Prentice (HP) model, and the Lee, Wei and Amato (LWA) model. Contrary to the HP, the LWA allows adjustment for the matching covariates (LWAa) and for an interaction (LWAi) between exposure and covariates (assimilated to prognostic profiles). The aim of our study was to compare the performance of each model according to the two matching methods. Methods: Extensive simulations were conducted. We simulated cohort data sets on which we applied the two matching methods, the HP and the LWA. We used our conclusions to assess the prognostic effect of subsequent pregnancy after treatment for breast cancer in a female cohort treated and followed up in eight French hospitals. Results: In terms of bias and RMSE, Method 2 performed better than Method 1 in designing the pairs, and LWAa was the best model in all situations except when there was an interaction between exposure and covariates, for which LWAi was more appropriate. On our real data set, we found opposite effects of pregnancy according to the six prognostic profiles, but none were statistically significant. We probably lacked statistical power or reached the limits of our approach. The pairs' censoring options chosen for the combination Method 2 - LWA had to be compared with others. Conclusions: Correlated censored data designed by Method 2 seemed to be the most pertinent method to create pairs, when the criterion

  18. Comparison of EBSD patterns simulated by two multislice methods.

    PubMed

    Liu, Q B; Cai, C Y; Zhou, G W; Wang, Y G

    2016-10-01

    The extraction of crystallography information from electron backscatter diffraction (EBSD) patterns can be facilitated by diffraction simulations based on the dynamical electron diffraction theory. In this work, the EBSD patterns are successfully simulated by two multislice methods, that is, the real space (RS) method and the revised real space (RRS) method. The calculation results by the two multislice methods are compared and analyzed in detail with respect to different accelerating voltages, Debye-Waller factors and aperture radii. It is found that the RRS method provides a larger view field of the EBSD patterns than that by the RS method under the same calculation conditions. Moreover, the Kikuchi bands of the EBSD patterns obtained by the RRS method have a better match with the experimental patterns than those by the RS method. Especially, the lattice parameters obtained by the RRS method are more accurate than those by the RS method. These results demonstrate that the RRS method is more accurate for simulating the EBSD patterns than the RS method within the accepted computation time.

  19. Interactive methods for exploring particle simulation data

    SciTech Connect

    Co, Christopher S.; Friedman, Alex; Grote, David P.; Vay, Jean-Luc; Bethel, E. Wes; Joy, Kenneth I.

    2004-05-01

    In this work, we visualize high-dimensional particle simulation data using a suite of scatter plot-based visualizations coupled with interactive selection tools. We use traditional 2D and 3D projection scatter plots as well as a novel oriented disk rendering style to convey various information about the data. Interactive selection tools allow physicists to manually classify "interesting" sets of particles that are highlighted across multiple, linked views of the data. The power of our application is the ability to relate new visual representations of the simulation data to traditional, well-understood visualizations. This approach supports interactive exploration of the high-dimensional space while promoting discovery of new particle behavior.

  20. A cloud-based simulation architecture for pandemic influenza simulation.

    PubMed

    Eriksson, Henrik; Raciti, Massimiliano; Basile, Maurizio; Cunsolo, Alessandro; Fröberg, Anders; Leifler, Ola; Ekberg, Joakim; Timpka, Toomas

    2011-01-01

    High-fidelity simulations of pandemic outbreaks are resource consuming. Cluster-based solutions have been suggested for executing such complex computations. We present a cloud-based simulation architecture that utilizes computing resources both locally available and dynamically rented online. The approach uses the Condor framework for job distribution and management of the Amazon Elastic Compute Cloud (EC2) as well as local resources. The architecture has a web-based user interface that allows users to monitor and control simulation execution. In a benchmark test, the best cost-adjusted performance was recorded for the EC2 High-CPU Medium instance, while a field trial showed that the job configuration had significant influence on the execution time and that the network capacity of the master node could become a bottleneck. We conclude that it is possible to develop a scalable simulation environment that uses cloud-based solutions, while providing an easy-to-use graphical user interface. PMID:22195089

  1. TU-C-17A-08: Improving IMRT Planning and Reducing Inter-Planner Variability Using the Stochastic Frontier Method: Validation Based On Clinical and Simulated Data

    SciTech Connect

    Gagne, MC; Archambault, L; Tremblay, D; Varfalvy, N

    2014-06-15

    Purpose: Intensity modulated radiation therapy always requires compromises between PTV coverage and organs at risk (OAR) sparing. We previously developed metrics that correlate doses to OAR with specific patients' morphology using stochastic frontier analysis (SFA). Here, we aim to examine the validity of this approach using a large set of realistically simulated dosimetric and geometric data. Methods: SFA describes a set of treatment plans as an asymmetric distribution with respect to a frontier defining optimal plans. Eighty head and neck IMRT plans were used to establish a metric predicting the mean dose to parotids as a function of simple geometric parameters. A database of 140 parotids was used as a basis distribution to simulate physically plausible data of geometry and dose. Distributions comprising between 20 and 5000 organs were simulated, and the SFA was applied to obtain new frontiers, which were compared to the original frontier. Results: It was possible to simulate distributions consistent with the original dataset. Below 160 organs, the SFA could not always describe distributions as asymmetric: a few cases showed a Gaussian or half-Gaussian distribution. In order to converge to a stable solution, the number of organs in a distribution must ideally be above 100, but in many cases stable parameters could be achieved with as few as 60 samples of organ data. The mean RMS error of the new frontiers was significantly reduced when additional organs were used. Conclusion: The number of organs in a distribution was shown to have an impact on the effectiveness of the model. It is always possible to obtain a frontier, but if the number of organs in the distribution is small (< 160), it may not represent the lowest dose achievable. These results will be used to determine the number of cases necessary to adapt the model to other organs.

  2. A discrete event method for wave simulation

    SciTech Connect

    Nutaro, James J

    2006-01-01

    This article describes a discrete event interpretation of the finite difference time domain (FDTD) and digital wave guide network (DWN) wave simulation schemes. The discrete event method is formalized using the discrete event system specification (DEVS). The scheme is shown to have errors that are proportional to the resolution of the spatial grid. A numerical example demonstrates the relative efficiency of the scheme with respect to FDTD and DWN schemes. The potential for the discrete event scheme to reduce numerical dispersion and attenuation errors is discussed.
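
    For context, the kind of update being reinterpreted is the conventional time-stepped 1-D FDTD leapfrog shown below; the article's actual contribution, recasting such updates as quantized discrete events within the DEVS formalism, is omitted from this sketch.

```python
# Conventional time-stepped 1-D FDTD leapfrog (normalized units), shown as
# the reference scheme that the discrete event formulation reinterprets.
import numpy as np

nx, steps = 200, 300
e = np.zeros(nx)                     # electric field on integer nodes
h = np.zeros(nx - 1)                 # magnetic field on staggered nodes
for n in range(steps):
    h += e[1:] - e[:-1]              # Courant number 1 in these units
    e[1:-1] += h[1:] - h[:-1]
    e[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source
print("field energy ~", np.sum(e**2) + np.sum(h**2))
```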

  3. A Method of Simulating Fluid Structure Interactions for Deformable Decelerators

    NASA Astrophysics Data System (ADS)

    Gidzak, Vladimyr Mykhalo

    A method is developed for performing simulations that contain fluid-structure interactions between deployable decelerators and a high-speed compressible flow. The problem of coupling together multiple physical systems is examined, with discussion of the strength of coupling for various methods. A non-monolithic, strongly coupled option is presented for fluid-structure systems based on grid deformation. A class of algebraic grid deformation methods is then presented, with examples of increasing complexity. The strength of the fluid-structure coupling is validated against two analytic problems, chosen to test the time-dependent behavior of structure-on-fluid interactions and of fluid-on-structure interactions. A one-dimensional material heating model is also validated against experimental data. Results are provided for simulations of a wind-tunnel-scale disk-gap-band parachute, with comparison to experimental data. Finally, a simulation is performed on a flight-scale tension cone decelerator, with examination of time-dependent material stress and heating.

  4. Physics-Based Simulator for NEO Exploration Analysis & Simulation

    NASA Technical Reports Server (NTRS)

    Balaram, J.; Cameron, J.; Jain, A.; Kline, H.; Lim, C.; Mazhar, H.; Myint, S.; Nayar, H.; Patton, R.; Pomerantz, M.; Quadrelli, M.; Shakkotai, P.; Tso, K.

    2011-01-01

    As part of the Space Exploration Analysis and Simulation (SEAS) task, the National Aeronautics and Space Administration (NASA) is using physics-based simulations at NASA's Jet Propulsion Laboratory (JPL) to explore potential surface and near-surface mission operations at Near Earth Objects (NEOs). The simulator is under development at JPL and can be used to provide detailed analysis of various surface and near-surface NEO robotic and human exploration concepts. In this paper we describe the SEAS simulator and provide examples of recent mission systems and operations concepts investigated using the simulation. We also present related analysis work and tools developed for both the SEAS task and general modeling, analysis, and simulation capabilities for asteroid/small-body objects.

  5. Apparatus for and method of simulating turbulence

    DOEpatents

    Dimas, Athanassios; Lottati, Isaac; Bernard, Peter; Collins, James; Geiger, James C.

    2003-01-01

    In accordance with a preferred embodiment of the invention, a novel apparatus for and method of simulating physical processes such as fluid flow is provided. Fluid flow near a boundary or wall of an object is represented by a collection of vortex sheet layers. The layers are composed of a grid or mesh of one or more geometrically shaped space filling elements. In the preferred embodiment, the space filling elements take on a triangular shape. An Eulerian approach is employed for the vortex sheets, where a finite-volume scheme is used on the prismatic grid formed by the vortex sheet layers. A Lagrangian approach is employed for the vortical elements (e.g., vortex tubes or filaments) found in the remainder of the flow domain. To reduce the computational time, a hairpin removal scheme is employed to reduce the number of vortex filaments, and a Fast Multipole Method (FMM), preferably implemented using parallel processing techniques, reduces the computation of the velocity field.

  6. Physiological Based Simulator Fidelity Design Guidance

    NASA Technical Reports Server (NTRS)

    Schnell, Thomas; Hamel, Nancy; Postnikov, Alex; Hoke, Jaclyn; McLean, Angus L. M. Thom, III

    2012-01-01

    The evolution of the role of flight simulation has reinforced assumptions in aviation that the degree of realism in a simulation system directly correlates with the training benefit, i.e., more fidelity is always better. The construct of fidelity has several dimensions, including physical fidelity, functional fidelity, and cognitive fidelity. Interaction of the different fidelity dimensions has an impact on trainee immersion, presence, and transfer of training. This paper discusses the results of a recent study that investigated whether physiologically based methods could be used to determine the required level of simulator fidelity. Pilots performed a relatively complex flight task consisting of mission task elements of various levels of difficulty in a fixed-base flight simulator and in a real fighter jet trainer aircraft. Flight runs were performed using one forward visual channel of 40 deg field of view for the lowest level of fidelity, 120 deg field of view for the middle level of fidelity, and unrestricted field of view and full dynamic acceleration in the real airplane. Neuro-cognitive and physiological measures were collected under these conditions using the Cognitive Avionics Tool Set (CATS), and nonlinear closed-form models for workload prediction were generated from these data for the various mission task elements. One finding of the work described herein is that simple heart rate is a relatively good predictor of cognitive workload, even for short tasks with dynamic changes in cognitive loading. Additionally, we found that models that use a wide range of physiological and neuro-cognitive measures can further boost the accuracy of the workload prediction.

  7. Developing a Theory of Digitally-Enabled Trial-Based Problem Solving through Simulation Methods: The Case of Direct-Response Marketing

    ERIC Educational Resources Information Center

    Clark, Joseph Warren

    2012-01-01

    In turbulent business environments, change is rapid, continuous, and unpredictable. Turbulence undermines those adaptive problem solving methods that generate solutions by extrapolating from what worked (or did not work) in the past. To cope with this challenge, organizations utilize trial-based problem solving (TBPS) approaches in which they…

  8. A performance-based method for calculating the design thickness of compacted clay liners exposed to high strength leachate under simulated landfill conditions.

    PubMed

    Safari, Edwin; Jalili Ghazizade, Mahdi; Abdoli, Mohammad Ali

    2012-09-01

    Compacted clay liners (CCLs), when feasible, are preferred to composite geosynthetic liners. The thickness of CCLs is typically prescribed by each country's environmental protection regulations. However, considering that construction of CCLs represents a significant portion of overall landfill construction costs, a performance-based design of liner thickness would be preferable to 'one size fits all' prescriptive standards. In this study, researchers analyzed the hydraulic behaviour of a compacted clayey soil in three laboratory pilot-scale columns exposed to high strength leachate under simulated landfill conditions. The temperature of the simulated CCL at the surface was maintained at 40 ± 2 °C, and a vertical pressure of 250 kPa was applied to the soil through a gravel layer on top of the 50 cm thick CCL, where high strength fresh leachate was circulated at heads of 15 and 30 cm, simulating the flow over the CCL. Inverse modelling using HYDRUS-1D indicated that the hydraulic conductivity after 180 days had decreased by about three orders of magnitude in comparison with the values measured prior to the experiment. A number of scenarios of different leachate heads and persistence times were considered, and the saturation depth of the CCL was predicted through modelling. Under a typical leachate head of 30 cm, the saturation depth was predicted to be less than 60 cm for a persistence time of 3 years. This approach can be generalized to estimate an effective thickness of a CCL instead of using prescribed values, which may be conservatively overdesigned and thus unduly costly. PMID:22617473

  10. Annular subaperture stitching method based on autocollimation

    NASA Astrophysics Data System (ADS)

    Chen, Yiwei; Miao, Erlong; Sui, Yongxin; Yang, Huaijiang

    2014-11-01

    In this paper, we propose an annular subaperture stitching method based on autocollimation to relax the requirements on mechanical location accuracy. In this approach, we move a ball instead of the interferometer and the aspheric surface so that testing results for adjacent annular subapertures are registered. The stitching algorithm can therefore easily stitch the subaperture testing results together even when large mechanical location errors exist. To verify the new method, we performed a simulation experiment; the results demonstrate that the method can stitch subaperture testing results together under large mechanical location errors.

  11. Fast simulations of patient-specific haemodynamics of coronary artery bypass grafts based on a POD-Galerkin method and a vascular shape parametrization

    NASA Astrophysics Data System (ADS)

    Ballarin, Francesco; Faggiano, Elena; Ippolito, Sonia; Manzoni, Andrea; Quarteroni, Alfio; Rozza, Gianluigi; Scrofani, Roberto

    2016-06-01

    In this work, a reduced-order computational framework for the study of haemodynamics in three-dimensional patient-specific configurations of coronary artery bypass grafts, covering a wide range of scenarios, is proposed. We combine several efficient algorithms to face at the same time both the geometrical complexity involved in the description of the vascular network and the huge computational cost entailed by time-dependent patient-specific flow simulations. Medical imaging procedures allow the reconstruction of patient-specific configurations from clinical data. A centerlines-based parametrization is proposed to efficiently handle geometrical variations. POD-Galerkin reduced-order models are employed to cut down the large computational costs. This computational framework makes it possible to characterize blood flows for different physical and geometrical variations relevant in clinical practice, such as stenosis factors and anastomosis variations, in a rapid and reliable way. Several numerical results are discussed, highlighting the computational performance of the proposed framework, as well as its capability to carry out sensitivity analysis studies that were so far out of reach. In particular, a reduced-order simulation takes only a few minutes to run, resulting in computational savings of 99% of CPU time with respect to the full-order discretization. Moreover, the error between full-order and reduced-order solutions is also studied, and is numerically found to be less than 1% for reduced-order solutions obtained with just O(100) online degrees of freedom.
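
    As a schematic of the POD-Galerkin reduction step itself (not the authors' haemodynamics framework), the sketch below builds a POD basis from solution snapshots via the SVD and projects a stand-in linear full-order operator onto it; all matrices here are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, n_snap, r = 500, 40, 5       # full dimension, snapshot count, reduced dimension

    A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))  # stand-in full-order operator
    snapshots = rng.standard_normal((n, n_snap))         # stand-in solution snapshots

    # POD basis: leading left singular vectors of the snapshot matrix.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    V = U[:, :r]

    # Galerkin projection: an r x r system replaces the n x n one.
    A_r = V.T @ A @ V
    b = rng.standard_normal(n)
    x_r = np.linalg.solve(A_r, V.T @ b)   # solve in the reduced space
    x_full = V @ x_r                      # lift back to the full space
    print("reduced system size:", A_r.shape, "| lifted solution size:", x_full.shape)
    ```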

  12. Analytic Methods for Simulated Light Transport

    NASA Astrophysics Data System (ADS)

    Arvo, James Richard

    1995-01-01

    This thesis presents new mathematical and computational tools for the simulation of light transport in realistic image synthesis. New algorithms are presented for exact computation of direct illumination effects related to light emission, shadowing, and first-order scattering from surfaces. New theoretical results are presented for the analysis of global illumination algorithms, which account for all interreflections of light among surfaces of an environment. First, a closed-form expression is derived for the irradiance Jacobian, which is the derivative of a vector field representing radiant energy flux. The expression holds for diffuse polygonal scenes and correctly accounts for shadowing, or partial occlusion. Three applications of the irradiance Jacobian are demonstrated: locating local irradiance extrema, direct computation of isolux contours, and surface mesh generation. Next, the concept of irradiance is generalized to tensors of arbitrary order. A recurrence relation for irradiance tensors is derived that extends a widely used formula published by Lambert in 1760. Several formulas with applications in computer graphics are derived from this recurrence relation and are independently verified using a new Monte Carlo method for sampling spherical triangles. The formulas extend the range of non-diffuse effects that can be computed in closed form to include illumination from directional area light sources and reflections from and transmissions through glossy surfaces. Finally, new analysis for global illumination is presented, which includes both direct illumination and indirect illumination due to multiple interreflections of light. A novel operator equation is proposed that clarifies existing deterministic algorithms for simulating global illumination and facilitates error analysis. Basic properties of the operators and solutions are identified which are not evident from previous formulations. A taxonomy of errors that arise in simulating global illumination is

  13. Component-Based Framework for Subsurface Simulations

    SciTech Connect

    Palmer, Bruce J.; Fang, Yilin; Hammond, Glenn E.; Gurumoorthi, Vidhya

    2007-08-01

    Simulations in the subsurface environment represent a broad range of phenomena covering an equally broad range of scales. Developing modelling capabilities that can integrate models representing different phenomena acting at different scales presents formidable challenges from both the algorithmic and computer science perspectives. This paper describes the development of an integrated framework that will be used to combine different models into a single simulation. Initial work has focused on creating two frameworks: one for performing smoothed particle hydrodynamics (SPH) simulations of fluid systems, and the other for performing grid-based continuum simulations of reactive subsurface flow. The SPH framework is based on a parallel code developed for pore-scale simulations; the continuum grid-based framework is based on the STOMP (Subsurface Transport Over Multiple Phases) code developed at PNNL. Future work will focus on combining the two frameworks to perform multiscale, multiphysics simulations of reactive subsurface flow.

  14. Implicit methods for efficient musculoskeletal simulation and optimal control

    PubMed Central

    van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter

    2011-01-01

    The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first-order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder-arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow the solution of previously intractable problems, considerable numerical challenges remain, especially related to the convergence of gradient-based solvers. PMID:22102983
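
    A bare-bones illustration of a first-order Rosenbrock (linearly implicit) step of the general kind the paper employs, applied here to a simple stiff test equation rather than to musculoskeletal dynamics:

    ```python
    import numpy as np

    def rosenbrock1_step(f, jac, x, h):
        """One first-order Rosenbrock step: solve (I - h*J) dx = h*f(x).
        One linear solve per step, no Newton iteration required."""
        J = jac(x)
        dx = np.linalg.solve(np.eye(len(x)) - h * J, h * f(x))
        return x + dx

    # Stiff scalar test problem x' = -1000 x; explicit Euler would need h < 0.002.
    f = lambda x: -1000.0 * x
    jac = lambda x: np.array([[-1000.0]])

    x, h = np.array([1.0]), 0.01
    for _ in range(100):
        x = rosenbrock1_step(f, jac, x, h)
    print("x(t=1) =", x[0], "(exact solution ~0; no blow-up despite the large step)")
    ```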

  16. An improved method for simulating microcalcifications in digital mammograms

    PubMed Central

    Zanca, Federica; Chakraborty, Dev Prasad; Van Ongeval, Chantal; Jacobs, Jurgen; Claus, Filip; Marchal, Guy; Bosmans, Hilde

    2008-01-01

    The assessment of the performance of a digital mammography system requires an observer study with a relatively large number of cases with known truth, which is often difficult to assemble. Several investigators have developed methods for generating hybrid abnormal images containing simulated microcalcifications. This article addresses some of the limitations of earlier methods. The new method is based on digital images of needle biopsy specimens. Since the specimens are imaged separately from the breast, the microcalcification attenuation profiles can be deduced without the effects of over- and underlying tissues. The resulting templates are normalized for image-acquisition-specific parameters and reprocessed to simulate microcalcifications appropriate to other imaging systems with different x-ray, detector, and image processing parameters than the original acquisition system. This capability is not shared by previous simulation methods that have relied on extracting microcalcifications from breast images. The method was validated by five experienced mammographers who compared 59 pairs of simulated and real microcalcifications in a two-alternative forced choice task designed to test whether they could distinguish the real from the simulated lesions. They also classified the shapes of the microcalcifications according to a standardized clinical lexicon. The observed probability of correct choice was 0.415, 95% confidence interval (0.284, 0.546), showing that the radiologists were unable to distinguish the lesions. The shape classification revealed substantial agreement with the truth (mean kappa=0.70), showing that we were able to accurately simulate the lesion morphology. While currently limited to single microcalcifications, the method is extensible to more complex clusters of microcalcifications and to three-dimensional images. It can be used to objectively assess an imaging technology, especially with respect to its ability to adequately visualize the morphology of the

  17. Etch Profile Simulation Using Level Set Methods

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    Etching and deposition of materials are critical steps in semiconductor processing for device manufacturing. Both etching and deposition may have isotropic and anisotropic components, due to directional sputtering and redeposition of materials, for example. Previous attempts at modeling profile evolution have used so-called "string theory" to simulate the moving solid-gas interface between the semiconductor and the plasma. One complication of this method is that extensive de-looping schemes are required at the profile corners. We present a 2D profile evolution simulation that uses level set theory to model the surface. By embedding the location of the interface in a field variable, the need for de-looping schemes is eliminated and profile corners are more accurately modeled. This level set profile evolution model calculates both isotropic and anisotropic etch and deposition rates of a substrate in low-pressure (tens of mTorr) plasmas, taking into account the incident ion energy angular distribution functions and neutral fluxes. We present etching profiles of Si substrates in Ar/Cl2 discharges for various incident ion energies and trench geometries.
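
    The core of the level-set idea can be shown in a few lines: the surface is the zero contour of a field phi, and an isotropic etch of speed F advances it through phi_t + F|grad(phi)| = 0 with an upwind scheme. This toy sketch (flat surface, uniform etch rate) is only a schematic of the approach, not the plasma-etch model described above:

    ```python
    import numpy as np

    nx, ny, dx, F, dt = 100, 100, 1.0, 1.0, 0.4   # grid, etch speed, CFL-safe step
    x, y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx, indexing="ij")
    phi = y - 50.0                                # zero level set: surface at y = 50

    def grad_mag_upwind(phi, dx):
        # Godunov upwind gradient magnitude for a front moving with speed F > 0;
        # edge padding gives one-sided differences at the domain boundaries.
        p = np.pad(phi, 1, mode="edge")
        dxm = (phi - p[:-2, 1:-1]) / dx
        dxp = (p[2:, 1:-1] - phi) / dx
        dym = (phi - p[1:-1, :-2]) / dx
        dyp = (p[1:-1, 2:] - phi) / dx
        return np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                       np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)

    for _ in range(50):                           # evolve to t = 20
        phi -= dt * F * grad_mag_upwind(phi, dx)

    # No de-looping is needed: the interface stays implicit in phi.
    print("surface has moved to y index:", np.argmin(np.abs(phi[0])))  # ~70
    ```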

  18. Numeric Modified Adomian Decomposition Method for Power System Simulations

    SciTech Connect

    Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth

    2016-01-01

    This paper investigates the applicability of the numeric Wazwaz-El Sayed modified Adomian decomposition method (WES-ADM) for time domain simulation of power systems. WES-ADM is a numerical approximation method, based on a modified Adomian decomposition (ADM) technique, for the solution of nonlinear ordinary differential equations. The nonlinear terms in the differential equations are approximated using Adomian polynomials. In this paper, WES-ADM is applied to time domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system were used to test the applicability of the approach, and several fault scenarios were examined. The proposed approach was found to be faster than the trapezoidal method with comparable accuracy.
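
    For readers unfamiliar with Adomian decomposition, the following sketch shows the plain (not WES-modified) idea on the linear test problem y' = -y, y(0) = 1: the solution is expanded as a series whose terms are generated by repeated integration; nonlinear terms would be expanded with Adomian polynomials in the general case:

    ```python
    import numpy as np

    t = np.linspace(0.0, 2.0, 201)
    dt = t[1] - t[0]

    def cumtrapz(f, dt):
        """Cumulative trapezoidal integral of f starting from t[0]."""
        out = np.zeros_like(f)
        out[1:] = np.cumsum(0.5 * (f[1:] + f[:-1])) * dt
        return out

    # ADM recursion for y' = -y: y_0 = y(0), y_{n+1}(t) = -int_0^t y_n(s) ds.
    y_n = np.ones_like(t)
    y = y_n.copy()
    for _ in range(10):          # accumulate ten series terms
        y_n = -cumtrapz(y_n, dt)
        y += y_n

    print("max error vs exp(-t):", np.max(np.abs(y - np.exp(-t))))
    ```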

  19. Simulating cardiac ultrasound image based on MR diffusion tensor imaging

    PubMed Central

    Qin, Xulei; Wang, Silun; Shen, Ming; Lu, Guolan; Zhang, Xiaodong; Wagner, Mary B.; Fei, Baowei

    2015-01-01

    Purpose: Cardiac ultrasound simulation can have important applications in the design of ultrasound systems, understanding the interaction effect between ultrasound and tissue and setting the ground truth for validating quantification methods. Current ultrasound simulation methods fail to simulate the myocardial intensity anisotropies. New simulation methods are needed in order to simulate realistic ultrasound images of the heart. Methods: The proposed cardiac ultrasound image simulation method is based on diffusion tensor imaging (DTI) data of the heart. The method utilizes both the cardiac geometry and the fiber orientation information to simulate the anisotropic intensities in B-mode ultrasound images. Before the simulation procedure, the geometry and fiber orientations of the heart are obtained from high-resolution structural MRI and DTI data, respectively. The simulation includes two important steps. First, the backscatter coefficients of the point scatterers inside the myocardium are processed according to the fiber orientations using an anisotropic model. Second, the cardiac ultrasound images are simulated with anisotropic myocardial intensities. The proposed method was also compared with two other nonanisotropic intensity methods using 50 B-mode ultrasound image volumes of five different rat hearts. The simulated images were also compared with the ultrasound images of a diseased rat heart in vivo. A new segmental evaluation method is proposed to validate the simulation results. The average relative errors (AREs) of five parameters, i.e., mean intensity, Rayleigh distribution parameter σ, and first, second, and third quartiles, were utilized as the evaluation metrics. The simulated images were quantitatively compared with real ultrasound images in both ex vivo and in vivo experiments. Results: The proposed ultrasound image simulation method can realistically simulate cardiac ultrasound images of the heart using high-resolution MR-DTI data. The AREs of their

  20. Lensless ghost imaging based on mathematical simulation and experimental simulation

    NASA Astrophysics Data System (ADS)

    Liu, Yanyan; Wang, Biyi; Zhao, Yingchao; Dong, Junzhang

    2014-02-01

    The differences between conventional imaging and correlated imaging are discussed in this paper. A mathematical model of a lensless ghost imaging system is set up, and the image of a double slit is computed by mathematical simulation. The results are confirmed by experimental verification. Both the theoretical simulation and the experimental results show that the mathematical model, based on statistical optics principles, is consistent with the real experimental results.
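
    The statistical-optics model behind such a system is easy to demonstrate: random speckle patterns illuminate a double-slit mask, a bucket detector records the total transmitted intensity, and the image emerges from the intensity correlation. The sketch below is a generic correlated-imaging illustration, not the authors' specific setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_pix, n_shots = 200, 20000

    mask = np.zeros(n_pix)
    mask[60:70] = mask[130:140] = 1.0     # double-slit transmission function

    # Pseudo-thermal speckle: exponentially distributed intensities per pixel.
    patterns = rng.exponential(1.0, size=(n_shots, n_pix))
    bucket = patterns @ mask              # bucket (single-pixel) detector signal

    # Ghost image: covariance of the bucket signal with the reference patterns.
    ghost = (bucket[:, None] * patterns).mean(0) - bucket.mean() * patterns.mean(0)
    print("mean ghost signal at slits:", ghost[60:70].mean(),
          "| away from slits:", ghost[:50].mean())
    ```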

  1. Agent-Based Simulations for Project Management

    NASA Technical Reports Server (NTRS)

    White, J. Chris; Sholtes, Robert M.

    2011-01-01

    Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique in use at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.

  2. An example-based brain MRI simulation framework

    NASA Astrophysics Data System (ADS)

    He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L.

    2015-03-01

    The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms, such as image segmentation, due to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition, these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from a hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on the statistical model of the training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more closely than simulations produced by a physics-based model.

  3. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  4. Computational simulation methods for composite fracture mechanics

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1988-01-01

    The structural integrity, durability, and damage tolerance of advanced composites are assessed quantitatively and qualitatively by studying damage initiation at various scales (micro, macro, and global) and its accumulation and growth leading to global failure. In addition, various fracture toughness parameters associated with a typical damage and its growth must be determined. Computational structural analysis codes were developed to aid the composite design engineer in performing these tasks. CODSTRAN (COmposite Durability STRuctural ANalysis) is used to qualitatively and quantitatively assess the progressive damage occurring in composite structures due to mechanical and environmental loads. Next, methods are covered that are currently being developed and used at Lewis to predict interlaminar fracture toughness and related parameters of fiber composites given a prescribed damage. The general-purpose finite element code MSC/NASTRAN was used to simulate interlaminar fracture and the associated individual as well as mixed-mode strain energy release rates in fiber composites.

  5. Numerical simulation of the boat growth method

    NASA Astrophysics Data System (ADS)

    Oda, K.; Saito, T.; Nishihama, J.; Ishihara, T.

    1989-09-01

    This paper presents a three-dimensional mathematical model for thermal convection in molten metals, applicable to the heat transfer phenomena in boat-shaped crucibles. The governing equations are solved using an extended version, developed by Saito et al. (1986), of the Amsden and Harlow (1968) simplified marker and cell method. It is shown that the following factors must be incorporated for an accurate simulation of melt growth: (1) the radiative heat transfer in the furnace, (2) the complex crucible configuration, (3) the melt flow, and (4) the solid-liquid interface shape. The velocity and temperature distributions calculated from this model are compared with the results of previous studies.

  6. Development of semiclassical molecular dynamics simulation method.

    PubMed

    Nakamura, Hiroki; Nanbu, Shinkoh; Teranishi, Yoshiaki; Ohta, Ayumi

    2016-04-28

    Various quantum mechanical effects such as nonadiabatic transitions, quantum mechanical tunneling and coherence play crucial roles in a variety of chemical and biological systems. In this paper, we propose a method to incorporate tunneling effects into the molecular dynamics (MD) method, which is purely based on classical mechanics. Caustics, which define the boundary between classically allowed and forbidden regions, are detected along classical trajectories, and the optimal tunneling path with minimum action is determined by starting from each appropriate caustic. The real phase associated with tunneling can also be estimated. A numerical demonstration using the simple collinear chemical reaction O + HCl → OH + Cl is presented to help the reader fully understand the proposed method. Generalization to an on-the-fly ab initio version is rather straightforward. By treating the nonadiabatic transitions at conical intersections with the Zhu-Nakamura theory, new semiclassical MD methods can be developed. PMID:27067383

  8. Discrete Stochastic Simulation Methods for Chemically Reacting Systems

    PubMed Central

    Cao, Yang; Samuels, David C.

    2012-01-01

    Discrete stochastic chemical kinetics describe the time evolution of a chemically reacting system by taking into account the fact that, in reality, chemical species are present in integer populations and exhibit some degree of randomness in their dynamical behavior. In recent years, with the development of new techniques to study biochemical dynamics in a single cell, an increasing number of studies have used this approach to chemical kinetics in cellular systems, where the small copy number of some reactant species in the cell may lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. This chapter reviews the fundamental theory related to stochastic chemical kinetics and several simulation methods based on that theory. We focus on non-stiff biochemical systems and the two most important discrete stochastic simulation methods: Gillespie's Stochastic Simulation Algorithm (SSA) and the tau-leaping method. Different implementation strategies for these two methods are discussed. We then recommend a relatively simple and efficient strategy that combines the strengths of the two methods: the hybrid SSA/tau-leaping method. The implementation details of the hybrid strategy are given here, and a related software package is introduced. Finally, the hybrid method is applied to simple biochemical systems as a demonstration of its application. PMID:19216925
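
    A minimal direct-method (SSA) implementation for a toy birth-death system conveys the core loop that tau-leaping and the hybrid strategy accelerate (the rate constants below are arbitrary illustrative values):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    k1, k2 = 10.0, 0.1        # 0 -> X at rate k1; X -> 0 at rate k2 * X
    x, t, t_end = 0, 0.0, 100.0

    while t < t_end:
        a1, a2 = k1, k2 * x                 # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)      # exponentially distributed waiting time
        if rng.uniform() * a0 < a1:         # pick which reaction fires
            x += 1
        else:
            x -= 1

    print("copy number at t=100:", x, "(stationary mean is k1/k2 = 100)")
    ```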

  9. High-performance liquid chromatography - Ultraviolet method for the determination of total specific migration of nine ultraviolet absorbers in food simulants based on 1,1,3,3-Tetramethylguanidine and organic phase anion exchange solid phase extraction to remove glyceride.

    PubMed

    Wang, Jianling; Xiao, Xiaofeng; Chen, Tong; Liu, Tingfei; Tao, Huaming; He, Jun

    2016-06-17

    The glyceride in oil food simulant usually causes serious interference with target analytes and leads to failure of the normal function of the RP-HPLC column. In this work, a convenient HPLC-UV method for the determination of the total specific migration of nine ultraviolet (UV) absorbers in food simulants was developed based on 1,1,3,3-tetramethylguanidine (TMG) and organic phase anion exchange (OPAE) SPE to efficiently remove glyceride in olive oil simulant. In contrast to normal ion exchange carried out in an aqueous solution or aqueous phase environment, the OPAE SPE was performed in an organic phase environment, and the time-consuming and challenging extraction of the nine UV absorbers from vegetable oil with aqueous solution could be readily omitted. The method was shown to have good linearity (r ≥ 0.99992), precision (intra-day RSD ≤ 3.3%), and accuracy (91.0% ≤ recoveries ≤ 107%); furthermore, low limits of quantification (0.05-0.2 mg/kg) were observed in five types of food simulants (10% ethanol, 3% acetic acid, 20% ethanol, 50% ethanol and olive oil). The method was found to be well suited for quantitative determination of the total specific migration of the nine UV absorbers in both aqueous and vegetable oil simulants according to Commission Regulation (EU) No. 10/2011. Migration levels of the nine UV absorbers were determined in 31 plastic samples; UV-24, UV-531, HHBP and UV-326 were frequently detected, especially UV-326 in olive oil simulant from PE samples. In addition, the OPAE SPE procedure was also applied to efficiently enrich or purify seven antioxidants in olive oil simulant. The results indicate that this procedure will have broader applications in the enrichment or purification of extremely weak acidic compounds with phenol hydroxyl groups that are relatively stable in TMG n-hexane solution and that can barely be extracted from vegetable oil. PMID:27189432

  11. MDMS: Molecular Dynamics Meta-Simulator for evaluating exchange type sampling methods.

    PubMed

    Smith, Daniel B; Okur, Asim; Brooks, Bernard

    2012-08-30

    Replica exchange methods have become popular tools for exploring conformational space for small proteins. For larger biological systems, even with enhanced sampling methods, exploring the free energy landscape remains computationally challenging. This problem has led to the development of many improved replica exchange methods. Unfortunately, testing these methods remains expensive. We propose a Molecular Dynamics Meta-Simulator (MDMS) based on transition state theory to simulate a replica exchange simulation, eliminating the need to run explicit dynamics between exchange attempts. MDMS simulations allow for rapid testing of new replica-exchange-based methods, greatly reducing the amount of time needed for new method development.
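
    The meta-simulation idea can be caricatured in a few lines: rather than running dynamics between exchange attempts, draw each replica's potential energy from an assumed distribution and apply the usual exchange test. The Gaussian energy distributions below are a stand-in assumption; the actual MDMS uses transition state theory:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    temps = np.array([300.0, 330.0, 363.0, 400.0])
    betas = 1.0 / temps                        # units with k_B = 1
    mu = 50.0 * temps                          # assumed mean energy per replica
    sigma = 300.0                              # assumed energy fluctuation

    attempts, accepted = 10000, np.zeros(len(temps) - 1)
    for _ in range(attempts):
        E = rng.normal(mu, sigma)              # "instantaneous" energies, no MD run
        for i in range(len(temps) - 1):
            # Standard replica-exchange (parallel tempering) acceptance test.
            log_a = (betas[i] - betas[i + 1]) * (E[i] - E[i + 1])
            if rng.uniform() < min(1.0, np.exp(log_a)):
                accepted[i] += 1

    print("estimated acceptance ratios:", accepted / attempts)
    ```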

  12. Irreversible simulated tempering algorithm with skew detailed balance conditions: a learning method of weight factors in simulated tempering

    NASA Astrophysics Data System (ADS)

    Sakai, Yuji; Hukushima, Koji

    2016-09-01

    Recent numerical studies concerning the simulated tempering algorithm without the detailed balance condition are reviewed, and an irreversible simulated tempering algorithm based on the skew detailed balance condition is described. A method to estimate weight factors in simulated tempering by sequentially implementing the irreversible simulated tempering algorithm is studied in comparison with the conventional simulated tempering algorithm satisfying the detailed balance condition. It is found that the total number of Monte Carlo steps needed to estimate the weight factors is successfully reduced by applying the proposed method to a two-dimensional ferromagnetic Ising model.
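
    For orientation, a conventional (detailed-balance) simulated tempering loop for a single harmonic degree of freedom looks like the sketch below; the weight factors g_k control how uniformly the temperature ladder is visited, which is what the reviewed method learns. The skew-detailed-balance updates themselves are not reproduced here:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    betas = np.array([1.0, 0.7, 0.5, 0.35])
    g = 0.5 * np.log(betas)   # ideal weights g_k = -ln Z_k for E = x^2/2 (up to a constant)

    x, k = 0.0, 0
    visits = np.zeros(len(betas))
    for _ in range(200000):
        # Metropolis move in x at the current inverse temperature.
        x_new = x + rng.normal()
        if rng.uniform() < np.exp(-betas[k] * (x_new**2 - x**2) / 2.0):
            x = x_new
        # Attempt a temperature move k -> k +/- 1 using the weight factors.
        k_new = k + rng.choice([-1, 1])
        if 0 <= k_new < len(betas):
            log_a = -(betas[k_new] - betas[k]) * x**2 / 2.0 + g[k_new] - g[k]
            if rng.uniform() < np.exp(log_a):
                k = k_new
        visits[k] += 1

    print("visit fraction per temperature:", visits / visits.sum())  # ~uniform
    ```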

  13. Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-09-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have been designed solely to archive model files, but databases should also give users a chance to validate models before downloading them. In this paper, we report our ongoing project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On each virtual machine, various software packages are pre-installed, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without needing to install any software other than a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages. PMID:21741207

  15. Atomistic hybrid DSMC/NEMD method for nonequilibrium multiscale simulations

    SciTech Connect

    Gu, Kai; Watkins, Charles B.; Koplik, Joel

    2010-03-01

    A multiscale hybrid method for coupling the direct simulation Monte Carlo (DSMC) method to the nonequilibrium molecular dynamics (NEMD) method is introduced. The method addresses Knudsen-layer-type gas flows within a few mean free paths of an interface, or about an object with dimensions of the order of a few mean free paths. It employs the NEMD method to resolve nanoscale phenomena closest to the interface, along with coupled DSMC simulation of the remainder of the Knudsen layer. The hybrid DSMC/NEMD method is a particle-based algorithm without a buffer zone. It incorporates a new, modified generalized soft sphere (MGSS) molecular collision model to improve the poor computational efficiency of the traditional generalized soft sphere (GSS) model and to achieve DSMC compatibility with Lennard-Jones NEMD molecular interactions. An equilibrium gas, a Fourier thermal flow, and an oscillatory Couette flow are simulated to validate the method. The method shows good agreement with Maxwell-Boltzmann theory for the equilibrium system, Chapman-Enskog theory for Fourier flow, and pure DSMC simulations for oscillatory Couette flow. Speedup in CPU time of the hybrid solver is benchmarked against a pure NEMD solver baseline for different system sizes and solver domain partitions. Finally, the hybrid method is applied to investigate the interaction of argon gas with solid surface molecules in a parametric study of the influence of wetting effects and solid molecular mass on energy transfer and thermal accommodation coefficients. It is determined that wetting effect strength and solid molecular mass have a significant impact on the energy transfer between gas and solid phases and on the thermal accommodation coefficient.

  16. A multiscale quantum mechanics/electromagnetics method for device simulations.

    PubMed

    Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua

    2015-04-01

    Multiscale modeling has become a popular tool for research in different areas including materials science, microelectronics, biology, and chemistry. In this tutorial review, we describe a newly developed multiscale computational method that incorporates quantum mechanics into electronic device modeling, with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between the QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated by the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied, and the simulations demonstrate that multiple QM regions can be coupled through the classical EM model. Finally, the study of a carbon-nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.

  17. On the simulation of space based manipulators with contact

    NASA Technical Reports Server (NTRS)

    Walker, Michael W.; Dionise, Joseph

    1989-01-01

    An efficient method of simulating the motion of space-based manipulators is presented. Since such manipulators will come into contact with different objects in their environment while carrying out different tasks, an important part of the simulation is the modeling of those contacts. An inverse dynamics controller is used to control a two-armed manipulator whose task is to grasp an object floating in space. Simulation results are presented, and the performance of the controller is evaluated.

  18. The gap-tooth method in particle simulations

    NASA Astrophysics Data System (ADS)

    Gear, C. William; Li, Ju; Kevrekidis, Ioannis G.

    2003-09-01

    We explore the gap-tooth method for multiscale modeling of systems represented by microscopic physics-based simulators, when coarse-grained evolution equations are not available in closed form. A biased random walk particle simulation, motivated by the viscous Burgers equation, serves as an example. We construct macro-to-micro (lifting) and micro-to-macro (restriction) operators, and drive the coarse time-evolution by particle simulations in appropriately coupled microdomains (“teeth”) separated by large spatial gaps. A macroscopically interpolative mechanism for communication between the teeth at the particle level is introduced. The results demonstrate the feasibility of a “closure-on-demand” approach to solving some hydrodynamics problems.

  19. First Principles based methods and applications for realistic simulations on complex soft materials to develop new materials for energy, health, and environmental sustainability

    NASA Astrophysics Data System (ADS)

    Goddard, William

    2013-03-01

    For soft materials applications, it is essential to obtain accurate descriptions of the weak (London dispersion, electrostatic) interactions between nonbonded units, to include interactions with and stabilization by solvent, and to obtain accurate free energies and entropic changes during chemical, physical, and thermal processing. We will describe some of the advances being made in first-principles-based methods for treating soft materials, with applications selected from new organic electrodes and electrolytes for batteries and fuel cells, forward osmosis for water cleanup, extended matter stable at ambient conditions, and drugs for modulating the activation of GPCR membrane proteins.

  20. Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method

    NASA Astrophysics Data System (ADS)

    Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han

    2015-12-01

    Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.
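
    The kernel being accelerated is the classic Yee leapfrog update; a plain (CPU, NumPy-vectorized) one-dimensional version in normalized units, with none of the advanced features listed above, looks like this:

    ```python
    import numpy as np

    nz, steps = 400, 600
    c, dz = 1.0, 1.0
    dt = 0.5 * dz / c                  # Courant-stable time step
    Ex = np.zeros(nz)                  # E field on integer grid points
    Hy = np.zeros(nz - 1)              # H field on half-integer points

    for n in range(steps):
        Hy += dt / dz * (Ex[1:] - Ex[:-1])              # half-step H update
        Ex[1:-1] += dt / dz * (Hy[1:] - Hy[:-1])        # E update (PEC ends)
        Ex[nz // 4] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

    print("total field energy proxy:", float(np.sum(Ex**2)))
    ```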

  1. XML-based resources for simulation

    SciTech Connect

    Kelsey, R. L.; Riese, J. M.; Young, G. A.

    2004-01-01

    As simulations and the machines they run on become larger and more complex, the inputs and outputs become more unwieldy. Increased complexity makes the setup of simulation problems difficult. It also contributes to the burden of handling and analyzing large amounts of output results. Another problem is that among a class of simulation codes (such as those for physical system simulation) there is often no single standard format or resource for input data. Running the same problem on different simulation codes requires a different setup for each code. The Extensible Markup Language (XML) is used to represent a general set of data resources including physical system problems, materials, and test results. These resources provide a 'plug and play' approach to simulation setup. For example, a particular material for a physical system can be selected from a material database. The XML-based representation of the selected material is then converted to the native format of the simulation being run and plugged into the simulation input file. In this manner a user can quickly and more easily put together a simulation setup. In the case of output data, an XML approach to regression testing includes tests and test results with XML-based representations. This facilitates the ability to query for specific tests and make comparisons between results. Also, output results can easily be converted to other formats for publishing online or on paper.
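
    A toy illustration of the 'plug and play' conversion step (the tags, attributes, and output format below are invented for illustration, not the actual schema used at LANL):

    ```python
    import xml.etree.ElementTree as ET

    # A material resource represented in XML (hypothetical schema)...
    xml_src = """
    <material name="copper">
      <property key="density" units="g/cc">8.96</property>
      <property key="specific_heat" units="J/g-K">0.385</property>
    </material>
    """

    # ...converted to a made-up simulation code's native key = value input format.
    root = ET.fromstring(xml_src)
    lines = ["MATERIAL " + root.attrib["name"]]
    for prop in root.findall("property"):
        lines.append("  {} = {}  ! {}".format(
            prop.attrib["key"], prop.text.strip(), prop.attrib["units"]))
    print("\n".join(lines))
    ```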

  2. Simulated Radiative Transfer DOAS - A new method for improving volcanic SO2 emissions retrievals from ground-based UV-spectroscopic measurements of scattered solar radiation

    NASA Astrophysics Data System (ADS)

    Kern, C.; Deutschmann, T.; Vogel, L.; Bobrowski, N.; Hoermann, C.; Werner, C. A.; Sutton, A. J.; Elias, T.

    2011-12-01

    Passive Differential Optical Absorption Spectroscopy (DOAS) has become a standard tool for measuring SO2 at volcanoes. More recently, ultraviolet (UV) cameras have also been applied to obtain 2D images of SO2-bearing plumes. Both techniques can be used to derive SO2 emission rates by measuring SO2 column densities, integrating these along the plume cross-section, and multiplying by the wind speed. Recent measurements and model studies have revealed that the dominant source of uncertainty in these techniques often originates from an inaccurate assessment of radiative transfer through the volcanic plume. The typical assumption that all detected radiation is scattered behind the volcanic plume and takes a straight path from there to the instrument is often incorrect. We recently showed that the straight-path assumption can lead to column density errors of 50% or more in cases where plumes with high SO2 and aerosol concentrations are measured from several kilometers' distance, or where the background atmosphere contains a large amount of scattering aerosols. Both under- and overestimation are possible depending on the atmospheric conditions and geometry during spectral acquisition. Simulated Radiative Transfer (SRT) DOAS is a new evaluation scheme that combines radiative transfer modeling with spectral analysis of passive DOAS measurements in the UV region to derive more accurate SO2 column densities than conventional DOAS retrievals, which in turn leads to considerably more accurate emission rates. A three-dimensional backward Monte Carlo radiative transfer model is used to simulate realistic light paths in and around the volcanic plume containing variable amounts of SO2 and aerosols. An inversion algorithm is then applied to derive the true SO2 column density. For fast processing of large datasets, a linearized algorithm based on lookup tables was developed and tested on a number of example datasets. In some cases, the information content of the spectral data is

  3. Simulation-based training: the next revolution in radiology education?

    PubMed

    Desser, Terry S

    2007-11-01

    Simulation-based training methods have been widely adopted in hazardous professions such as aviation, nuclear power, and the military. Their use in medicine has been accelerating lately, fueled by the public's concerns over medical errors as well as new Accreditation Council for Graduate Medical Education requirements for outcome-based and proficiency-based assessment methods. This article reviews the rationale for simulator-based training, types of simulators, their historical development and validity testing, and some results to date in laparoscopic surgery and endoscopic procedures. A number of companies have developed endovascular simulators for interventional radiologic procedures; although they cannot as yet replicate the experience of performing cases in real patients, they promise to play an increasingly important role in procedural training in the future.

  4. A High Order Element Based Method for the Simulation of Velocity Damping in the Hyporheic Zone of a High Mountain River

    NASA Astrophysics Data System (ADS)

    Preziosi-Ribero, Antonio; Peñaloza-Giraldo, Jorge; Escobar-Vargas, Jorge; Donado-Garzón, Leonardo

    2016-04-01

    Groundwater - surface water interaction is a topic that has gained relevance among the scientific community over the past decades. However, several questions within this topic remain unsolved, and almost all past research concerns transport phenomena rather than the dynamics of the flow patterns of the above-mentioned interactions. The aim of this research is to verify the attenuation of the water velocity that comes from the free surface and enters the porous medium under the bed of a high mountain river. Understanding this process is a key step in characterizing and quantifying the interactions between groundwater and surface water. However, the lack of information and the difficulties that arise when measuring groundwater flows under streams make direct physical quantification unreliable for scientific purposes. These issues suggest that numerical simulations and in-stream velocity measurements can be used to characterize these flows. Previous studies have simulated the attenuation of a sinusoidal pulse of vertical velocity that comes from a stream and enters a porous medium, using the Burgers equation and the 1-D Navier-Stokes equations as governing equations. However, the boundary conditions of the problem, and the results obtained when varying the different parameters of the equations, show that the understanding of the process is not yet complete. To begin with, a Spectral Multidomain Penalty Method (SMPM) was proposed for quantifying the velocity damping, solving the Navier-Stokes equations in 1D. The main assumptions are incompressibility and a hydrostatic approximation for the pressure distribution. This method was tested with theoretical signals that are mainly trigonometric pulses or functions. Afterwards, in order to test the results with real signals, velocity profiles were captured near the Gualí River bed (Honda, Colombia), with an Acoustic Doppler

  5. Lattice-Boltzmann-based Simulations of Diffusiophoresis

    NASA Astrophysics Data System (ADS)

    Castigliego, Joshua; Kreft Pearce, Jennifer

    We present results from a lattice-Boltzmann-based Brownian dynamics simulation of diffusiophoresis and the separation of particles within the system. A gradient in viscosity that simulates a concentration gradient in a dissolved polymer allows us to separate various types of particles by their deformability. As seen in previous experiments, simulated particles with higher deformability react differently to the polymer matrix than those with lower deformability, and the particles can therefore be separated from each other. This particular simulation was intended to model an oceanic system in which the particles of interest were zooplankton, phytoplankton, and microplastics. The separation of the plankton from the microplastics was achieved.

  6. PIXE simulation: Models, methods and technologies

    SciTech Connect

    Batic, M.; Pia, M. G.; Saracco, P.; Weidenspointner, G.

    2013-04-19

    The simulation of PIXE (Particle Induced X-ray Emission) is discussed in the context of general-purpose Monte Carlo systems for particle transport. Dedicated PIXE codes are mainly concerned with the application of the technique to elemental analysis, but they lack the capability of dealing with complex experimental configurations. General-purpose Monte Carlo codes provide powerful tools to model the experimental environment in great detail, but so far they have provided limited functionality for PIXE simulation. This paper reviews recent developments that have endowed the Geant4 simulation toolkit with advanced capabilities for PIXE simulation, and related efforts for quantitative validation of cross sections and other physical parameters relevant to PIXE simulation.

  7. Method for simulating discontinuous physical systems

    DOEpatents

    Baty, Roy S.; Vaughn, Mark R.

    2001-01-01

    The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.

  8. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments is based upon a novel approach that relies on the global momentum conservation of the closed fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. A numerical example illustrates the method's application to prediction of bulk fluid behavior during a spacecraft ullage settling maneuver.
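
    As a point of reference for the approach, a bare-bones single-phase D2Q9 lattice-Boltzmann stream-and-collide update is sketched below; the flight code described above additionally uses a modified MRT collision operator and a non-ideal equation of state, both of which are omitted here, and all parameters are illustrative.

      # Minimal single-phase D2Q9 lattice-Boltzmann (BGK) update on a periodic box.
      import numpy as np

      # D2Q9 lattice velocities and weights
      c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
      w = np.array([4/9] + [1/9]*4 + [1/36]*4)

      def equilibrium(rho, u):
          cu = np.einsum("qa,xya->xyq", c, u)
          usq = (u**2).sum(axis=-1)[..., None]
          return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

      def step(f, tau=0.8):
          rho = f.sum(axis=-1)                             # density moment
          u = np.einsum("xyq,qa->xya", f, c) / rho[..., None]  # velocity moment
          f += (equilibrium(rho, u) - f) / tau             # BGK collision
          for q in range(9):                               # periodic streaming
              f[..., q] = np.roll(f[..., q], tuple(c[q]), axis=(0, 1))
          return f

      f = equilibrium(np.ones((64, 64)), np.zeros((64, 64, 2)))
      for _ in range(100):
          f = step(f)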

  9. A generic reaction-based biogeochemical simulator

    SciTech Connect

    Fang, Yilin; Yabusaki, Steven B.; Yeh, Gour T.; Miller, C. T.; Farthing, M. W.; Gray, W. G.; Pinder, G. F.

    2004-06-17

    This paper presents a generic biogeochemical simulator, BIOGEOCHEM. The simulator can read a thermodynamic database based on the EQ3/EQ6 database. It can also read user-specified equilibrium and kinetic reactions (reactions not defined in the EQ3/EQ6 database format) symbolically. BIOGEOCHEM is developed with a general paradigm: it overcomes the requirement, common to most available reaction-based models, that reactions and rate laws be specified in a limited number of canonical forms. The simulator interprets reactions and rate laws of virtually any type for input to the MAPLE symbolic mathematical software package. MAPLE then generates Fortran code for the analytical Jacobian matrix used in the Newton-Raphson technique, which is compiled and linked into the BIOGEOCHEM executable. With this feature, users are spared from recoding the simulator to accept new equilibrium expressions or kinetic rate laws. Two examples are used to demonstrate the new features of the simulator.
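
    The symbolic step described above can be illustrated in miniature. In the sketch below, sympy stands in for MAPLE, and the two-species rate law is invented purely for illustration; the real simulator parses user-specified reactions and emits compilable Fortran.

      # Minimal sketch of symbolic Jacobian generation for a Newton-Raphson
      # kinetic solver, in the spirit of BIOGEOCHEM's MAPLE step.
      import sympy as sp

      c1, c2, k1, k2 = sp.symbols("c1 c2 k1 k2", positive=True)

      # user-specified kinetic rate laws, entered symbolically (illustrative)
      rates = sp.Matrix([-k1 * c1 * c2,
                         k1 * c1 * c2 - k2 * c2])

      # analytical Jacobian with respect to the concentrations
      J = rates.jacobian([c1, c2])

      # turn the symbolic matrix into an executable function (BIOGEOCHEM
      # instead emits Fortran that is compiled into the executable)
      jac = sp.lambdify((c1, c2, k1, k2), J, "numpy")
      print(jac(1.0, 0.5, 2.0, 0.1))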

  10. A Carbonaceous Chondrite Based Simulant of Phobos

    NASA Technical Reports Server (NTRS)

    Rickman, Douglas L.; Patel, Manish; Pearson, V.; Wilson, S.; Edmunson, J.

    2016-01-01

    In support of an ESA-funded concept study considering a sample return mission, a simulant of the Martian moon Phobos was needed. There are no samples of the Phobos regolith, therefore none of the four characteristics normally used to design a simulant are explicitly known for Phobos. Because of this, specifications for a Phobos simulant were based on spectroscopy, other remote measurements, and judgment. A composition based on the Tagish Lake meteorite was assumed. The requirement that sterility be achieved, especially given the required organic content, was unusual and problematic. The final design mixed JSC-1A, antigorite, pseudo-agglutinates and gilsonite. Sterility was achieved by radiation in a commercial facility.

  11. Methods for simulating solute breakthrough curves in pumping groundwater wells

    USGS Publications Warehouse

    Starn, J. Jeffrey; Bagtzoglou, Amvrossios C.; Robbins, Gary A.

    2012-01-01

    In modeling there is always a trade-off between execution time and accuracy. For gradient-based parameter estimation methods, where a simulation model is run repeatedly to populate a Jacobian (sensitivity) matrix, there exists a need for rapid simulation methods of known accuracy that can decrease execution time, and thus make the model more useful without sacrificing accuracy. Convolution-based methods can be executed rapidly for any desired input function once the residence-time distribution is known. The residence-time distribution can be calculated efficiently using particle tracking, but particle tracking can be ambiguous near a pumping well if the grid is too coarse. We present several embedded analytical expressions for improving particle tracking near a pumping well and compare them with a finely gridded finite-difference solution in terms of accuracy and CPU usage. Even though the embedded analytical approach can improve particle tracking near a well, particle methods reduce, but do not eliminate, reliance on a grid because velocity fields typically are calculated on a grid, and additional error is incurred using linear interpolation of velocity. A dilution rate can be calculated for a given grid and pumping well to determine if the grid is sufficiently refined. Embedded analytical expressions increase accuracy but add significantly to CPU usage. Structural error introduced by the numerical solution method may affect parameter estimates.
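
    The convolution step itself is simple once the residence-time distribution is in hand. In the sketch below a gamma density stands in for a particle-tracking-derived distribution, and all numbers are illustrative.

      # Convolution-based breakthrough simulation: well concentration equals the
      # input history convolved with the residence-time distribution (RTD).
      import numpy as np
      from scipy import stats

      dt = 1.0                                       # days
      t = np.arange(0.0, 2000.0, dt)

      rtd = stats.gamma(a=3.0, scale=120.0).pdf(t)   # stand-in residence-time density

      c_in = np.where(t < 365.0, 1.0, 0.0)           # one-year solute pulse at the input

      # discrete convolution integral c_out(t) = sum g(tau) c_in(t - tau) dtau
      c_out = np.convolve(rtd, c_in)[: len(t)] * dt

      print("peak breakthrough:", c_out.max(), "at day", t[c_out.argmax()])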

  12. An implicit finite element method for simulating inhomogeneous deformation and shear bands of amorphous alloys based on the free-volume model

    SciTech Connect

    Gao, Yanfei

    2006-01-01

    Inhomogeneous deformation of amorphous alloys is caused by the initiation, multiplication and interaction of shear bands (i.e., narrow bands with large plastic deformation). Based on the free volume model under the generalized multiaxial stress state, this work develops a finite element scheme to model the individual processes of shear bands that contribute to the macroscopic plasticity behavior. In this model, the stress-driven increase of the free volume reduces the viscosity and thus leads to the strain localization in the shear band. Using the small-strain and rate-dependent plasticity framework, the plastic strain is assumed to be proportional to the deviatoric stress, and the flow stress is a function of the free volume, while the temporal change of the free volume is also coupled with the stress state. Nonlinear equations from the incremental finite element formulation are solved by the Newton-Raphson method, in which the corresponding material tangent is obtained by simultaneously and implicitly integrating the plastic flow equation and the evolution equation of the free volume field. This micromechanical model allows us to study the interaction between individual shear bands and between the shear bands and the background stress fields. To illustrate its capabilities, the method is used to solve representative boundary value problems.

  13. Parallel implementation of the particle simulation method with dynamic load balancing: Toward realistic geodynamical simulation

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Nishiura, D.

    2015-12-01

    Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are suitable for problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flows and tectonic processes, for example tsunamis with free surfaces and floating bodies, magma intrusion with rock fracture, and shear-zone pattern generation in granular deformation. In order to investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for a realistic simulation. Parallel computing is therefore important for handling such a huge computational cost. An efficient parallel implementation of SPH and DEM methods is, however, known to be difficult, especially on distributed-memory architectures. Lagrangian methods inherently suffer from workload imbalance when parallelized over domains fixed in space, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods utilizing dynamic load-balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in the execution time of each MPI process as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. In order to perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our

  14. A simulation based approach to optimize inventory replenishment with RAND algorithm: An extended study of corrected demand using Holt's method for textile industry

    NASA Astrophysics Data System (ADS)

    Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam

    2016-07-01

    Inventory has been a major concern in supply chains, and much research has been done lately on inventory control, bringing forth a number of methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research aims to provide a better replenishment policy for multi-product, single-supplier situations for chemical raw materials in the textile industries of Bangladesh. It is assumed that industries currently pursue an individual replenishment system. The purpose is to find the optimal ideal cycle time and the individual replenishment cycle time of each product that yield the lowest annual holding and ordering cost, and also to find the optimal ordering quantity. In this paper an indirect grouping strategy has been used; it is suggested that indirect grouping outperforms direct grouping when the major ordering cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is employed for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for each individual item, so the replenishment cycle time for each product is found as T×ki. First, based on the data, a comparison between the currently prevailing (individual) process and RAND using the actual demands shows a 49% improvement in the total cost of replenishment. Second, discrepancies in demand are corrected using Holt's method; however, demand can only be forecasted one or two months into the future because of the demand pattern of the industry under consideration. Evidently, applying RAND with the corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
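
    Holt's method itself is compact enough to sketch. The smoothing constants and demand series below are illustrative rather than the study's fitted values; the corrected forecasts would then feed the RAND-derived cycle times T×ki.

      # Holt's linear (double) exponential smoothing for short-horizon demand
      # forecasts; alpha, beta and the demand series are invented for illustration.
      def holt_forecast(demand, alpha=0.3, beta=0.1, horizon=2):
          level, trend = demand[0], demand[1] - demand[0]
          for d in demand[1:]:
              prev_level = level
              level = alpha * d + (1 - alpha) * (level + trend)
              trend = beta * (level - prev_level) + (1 - beta) * trend
          return [level + h * trend for h in range(1, horizon + 1)]

      monthly_demand = [120, 135, 128, 150, 162, 158, 171]   # hypothetical data
      print(holt_forecast(monthly_demand))   # corrected demand, 1-2 months ahead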

  15. Multinomial tau-leaping method for stochastic kinetic simulations.

    PubMed

    Pettigrew, Michel F; Resat, Haluk

    2007-02-28

    We introduce the multinomial tau-leaping (MtauL) method for general reaction networks with multichannel reactant dependencies. The MtauL method is an extension of the binomial tau-leaping method where efficiency is improved in several ways. First, tau-leaping steps are determined simply and efficiently using a priori information and Poisson distribution-based estimates of expectation values for reaction numbers over a tentative tau-leaping step. Second, networks are partitioned into closed groups of reactions and corresponding reactants in which no group reactant set is found in any other group. Third, product formation is factored into upper-bound estimation of the number of times a particular reaction occurs. Together, these features allow larger time steps where the numbers of reactions occurring simultaneously in a multichannel manner are estimated accurately using a multinomial distribution. Furthermore, we develop a simple procedure that places a specific upper bound on the total reaction number to ensure non-negativity of species populations over a single multiple-reaction step. Using two disparate test case problems involving cellular processes--epidermal growth factor receptor signaling and a lactose operon model--we show that the tau-leaping based methods such as the MtauL algorithm can significantly reduce the number of simulation steps thus increasing the numerical efficiency over the exact stochastic simulation algorithm by orders of magnitude. PMID:17343434

  16. Multinomial tau-leaping method for stochastic kinetic simulations

    NASA Astrophysics Data System (ADS)

    Pettigrew, Michel F.; Resat, Haluk

    2007-02-01

    We introduce the multinomial tau-leaping (MτL) method for general reaction networks with multichannel reactant dependencies. The MτL method is an extension of the binomial tau-leaping method where efficiency is improved in several ways. First, τ-leaping steps are determined simply and efficiently using a priori information and Poisson distribution-based estimates of expectation values for reaction numbers over a tentative τ-leaping step. Second, networks are partitioned into closed groups of reactions and corresponding reactants in which no group reactant set is found in any other group. Third, product formation is factored into upper-bound estimation of the number of times a particular reaction occurs. Together, these features allow larger time steps where the numbers of reactions occurring simultaneously in a multichannel manner are estimated accurately using a multinomial distribution. Furthermore, we develop a simple procedure that places a specific upper bound on the total reaction number to ensure non-negativity of species populations over a single multiple-reaction step. Using two disparate test case problems involving cellular processes—epidermal growth factor receptor signaling and a lactose operon model—we show that the τ-leaping based methods such as the MτL algorithm can significantly reduce the number of simulation steps thus increasing the numerical efficiency over the exact stochastic simulation algorithm by orders of magnitude.
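
    The core leap can be sketched as follows; the step-size selection, reaction grouping, and the paper's sharper non-negativity bound are all omitted, and the two-channel network is hypothetical.

      # One multinomial tau-leaping step: a bounded total number of firings is
      # drawn, then apportioned among channels with a multinomial whose
      # probabilities follow the propensities.
      import numpy as np

      rng = np.random.default_rng(1)

      def mtl_step(x, a, stoich, tau):
          a0 = a.sum()
          if a0 == 0.0:
              return x
          n_total = rng.poisson(a0 * tau)
          # each firing here consumes at most one molecule of any species, so
          # capping total firings at the smallest population among species
          # consumed by active channels keeps counts non-negative (the paper
          # derives a sharper bound)
          active = a > 0
          consumed = (stoich[:, active] < 0).any(axis=1)
          if consumed.any():
              n_total = min(n_total, int(x[consumed].min()))
          k = rng.multinomial(n_total, a / a0)    # split firings among channels
          return x + stoich @ k

      # hypothetical reversible isomerization: A -> B (0.5*A), B -> A (0.3*B)
      stoich = np.array([[-1, 1],
                         [1, -1]])                # rows: species, columns: channels
      x = np.array([100, 0])
      for _ in range(200):
          x = mtl_step(x, np.array([0.5 * x[0], 0.3 * x[1]]), stoich, tau=0.05)
      print(x)                                    # settles near the 3:5 equilibrium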

  17. Improvements in the gyrokinetic simulation method

    SciTech Connect

    Matsuda, Y.; Cohen, B.I.; Williams, T.J.

    1991-01-01

    Gyrokinetic particle-in-cell (PIC) simulations have proven to be an important and useful tool for studying low-frequency waves and instabilities below the ion cyclotron frequency. The gyrokinetic formalism eliminates the cyclotron motion by analytically averaging the equation of motion in time, while keeping finite-Larmor-radius effects, and therefore allows the integration time step to be significantly longer than the cyclotron period. At the same time, the thermal fluctuation level is reduced well below that of a conventional PIC simulation code. Recent simulations have been performed over a number of wave periods to study the nonlinear evolution of drift waves and ion-temperature-gradient modes and the associated transport. With about a quarter million particles and a 64 × 128 × 32 grid in three dimensions, it takes about 100 hours on the Cray-2 single processor to follow the modes to a nonlinear quasi-steady state for relatively strong gradients and strong growth rates. Much more efficient simulations are needed in order to understand these low-frequency waves and the transport associated with them by the use of this tool, and to facilitate the simulation of more weakly unstable plasmas with parameters more relevant to experimental conditions. We have set a goal of achieving an efficiency gain of a factor of 100 on a present-day computer over what has been achieved on the Cray-2 for gyrokinetic simulations. To reach this goal we have begun a project with two components: one is the use of new PIC techniques such as subcycling, orbit averaging, and semi-implicit algorithms, and the other is the use of massively parallel computers such as the BBN TC2000 and the Thinking Machines CM-2. 6 refs.

  18. Numerical simulation of self-sustained oscillation of a voice-producing element based on Navier-Stokes equations and the finite element method

    NASA Astrophysics Data System (ADS)

    de Vries, Martinus P.; Hamburg, Marc C.; Schutte, Harm K.; Verkerke, Gijsbertus J.; Veldman, Arthur E. P.

    2003-04-01

    Surgical removal of the larynx results in radically reduced production of voice and speech. To improve voice quality, a voice-producing element (VPE) is developed, based on the lip principle, named after the lips of a musician playing a brass instrument. To optimize the VPE, a numerical model is developed. In this model, the finite element method is used to describe the mechanical behavior of the VPE. The flow is described by the two-dimensional incompressible Navier-Stokes equations. The interaction between the VPE and the airflow is modeled by placing the grid of the VPE model in the grid of the aerodynamical model and requiring continuity of forces and velocities. By applying an increasing pressure to the numerical model, pulses comparable to glottal volume velocity waveforms are obtained. By varying geometric parameters, their influence can be determined. To validate this numerical model, an in vitro test with a prototype of the VPE was performed. Experimental and numerical results show acceptable agreement.

  19. A new automatic baseline correction method based on iterative method

    NASA Astrophysics Data System (ADS)

    Bao, Qingjia; Feng, Jiwen; Chen, Fang; Mao, Wenping; Liu, Zao; Liu, Kewen; Liu, Chaoyang

    2012-05-01

    A new automatic baseline correction method for Nuclear Magnetic Resonance (NMR) spectra is presented. It is based on an improved baseline recognition method and a new iterative baseline modeling method. The baseline recognition method takes advantage of three baseline recognition algorithms in order to recognize all signals in a spectrum. In the iterative baseline modeling method, besides the well-recognized baseline points in signal-free regions, 'quasi-baseline points' in signal-crowded regions are also identified and then utilized to improve robustness by preventing negative regions. Experimental results on both simulated data and real metabolomics spectra with over-crowded peaks show the efficiency of this automatic method.
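
    A generic iterative baseline model conveys the flavor of such schemes. The sketch below is the common polynomial-clipping iteration, not necessarily the authors' algorithm, which additionally exploits recognized baseline and quasi-baseline points; the synthetic spectrum is invented.

      # Iterative baseline modeling by polynomial clipping: fit a low-order
      # polynomial, pull peaks down to it, refit, repeat.
      import numpy as np

      def iterative_baseline(spectrum, order=4, n_iter=30):
          x = np.linspace(-1.0, 1.0, spectrum.size)   # well-conditioned abscissa
          work = spectrum.astype(float).copy()
          for _ in range(n_iter):
              base = np.polyval(np.polyfit(x, work, order), x)
              work = np.minimum(work, base)           # clip peaks down each pass
          return base

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 1000)
      spec = 0.5 * x**2 + np.exp(-((x - 0.3) / 0.01)**2) \
          + 0.01 * rng.standard_normal(x.size)        # curved baseline + peak + noise
      corrected = spec - iterative_baseline(spec)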

  20. Simulation-based training for colonoscopy: establishing criteria for competency.

    PubMed

    Preisler, Louise; Svendsen, Morten Bo Søndergaard; Nerup, Nikolaj; Svendsen, Lars Bo; Konge, Lars

    2015-01-01

    The aim of this study was to create simulation-based tests with credible pass/fail standards for 2 different fidelities of colonoscopy models. Only competent practitioners should perform colonoscopy. Reliable and valid simulation-based tests could be used to establish basic competency in colonoscopy before practicing on patients. Twenty-five physicians (10 consultants with endoscopic experience and 15 fellows with very little endoscopic experience) were tested on 2 different simulator models: a virtual-reality simulator and a physical model. Tests were repeated twice on each simulator model. Metrics with discriminatory ability were identified for both modalities and reliability was determined. The contrasting-groups method was used to create pass/fail standards, and their consequences were explored. The consultants performed significantly faster and scored significantly higher than the fellows on both models (P < 0.001). Reliability analysis showed Cronbach α = 0.80 and 0.87 for the virtual-reality and the physical model, respectively. The established pass/fail standards failed one of the consultants (virtual-reality simulator) and allowed one fellow to pass (physical model). The 2 tested simulation-based modalities provided reliable and valid assessments of competence in colonoscopy, and credible pass/fail standards were established for both tests. We propose to use these standards in simulation-based training programs before proceeding to supervised training on patients. PMID:25634177
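
    The contrasting-groups standard can be illustrated numerically: fit a distribution to each group's scores and place the cut where the densities cross. The scores below are invented, and the normal fit is one common implementation choice rather than the study's exact procedure.

      # Contrasting-groups pass/fail standard: the cut score sits where the
      # novice and expert score densities intersect.
      import numpy as np
      from scipy import stats, optimize

      novices     = np.array([41, 48, 52, 55, 58, 60, 63, 65])   # hypothetical
      consultants = np.array([70, 74, 78, 80, 83, 86, 90, 94])   # hypothetical

      f_nov = stats.norm(novices.mean(), novices.std(ddof=1)).pdf
      f_con = stats.norm(consultants.mean(), consultants.std(ddof=1)).pdf

      # the densities cross somewhere between the two group means
      cut = optimize.brentq(lambda s: f_nov(s) - f_con(s),
                            novices.mean(), consultants.mean())
      print(f"pass/fail standard: {cut:.1f}")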

  1. Improving the performance of a filling line based on simulation

    NASA Astrophysics Data System (ADS)

    Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.

    2016-08-01

    The paper describes a method of improving the performance of a filling line based on simulation. This study concerns a production line located in a manufacturing centre of an FMCG company. A discrete-event simulation model was built using data provided by the maintenance data acquisition system. Two types of failures were identified in the system and were approximated using continuous statistical distributions. The model was validated taking into consideration line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation. The outcomes of the simulations were the basis of a financial analysis. NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.

  2. The parallel subdomain-levelset deflation method in reservoir simulation

    NASA Astrophysics Data System (ADS)

    van der Linden, J. H.; Jönsthövel, T. B.; Lukyanov, A. A.; Vuik, C.

    2016-01-01

    Extreme and isolated eigenvalues are known to be harmful to the convergence of an iterative solver. These eigenvalues can be produced by strong heterogeneity in the underlying physics. We can improve the quality of the spectrum by 'deflating' the harmful eigenvalues. In this work, deflation is applied to linear systems in reservoir simulation. In particular, large, sudden differences in the permeability produce extreme eigenvalues, and the number and magnitude of these eigenvalues are linked to the number and magnitude of the permeability jumps. Two deflation methods are discussed. First, we argue that harmonic Ritz eigenvector deflation, which computes the deflation vectors from the information produced by the linear solver, is unfeasible in modern reservoir simulation due to high costs and lack of parallelism. Second, we test a physics-based subdomain-levelset deflation algorithm that constructs the deflation vectors a priori. Numerical experiments show that both methods can improve the performance of the linear solver. We highlight the fact that subdomain-levelset deflation is particularly suitable for a parallel implementation. For cases with well-defined permeability jumps of a factor of 10⁴ or higher, parallel physics-based deflation has potential in commercial applications. In particular, the good scalability of parallel subdomain-levelset deflation, combined with the robust parallel preconditioner for the deflated system, suggests the use of this method as an alternative to AMG.
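
    A dense toy version of a-priori deflation makes the construction concrete: indicator vectors of regions with similar permeability form Z, and the solver then works with the deflated operator PA. Everything below (the 1D operator, the 10⁴ jump, the dense algebra) is illustrative; production codes assemble all of this sparsely and in parallel.

      # Subdomain-style deflation: P = I - A Z (Z^T A Z)^{-1} Z^T projects out
      # the coarse modes associated with the permeability regions.
      import numpy as np

      def deflation_projector(A, labels):
          n = A.shape[0]
          levels = np.unique(labels)
          Z = np.zeros((n, levels.size))
          for j, lev in enumerate(levels):
              Z[labels == lev, j] = 1.0       # indicator vector of one region
          E = Z.T @ A @ Z                     # small coarse (levels x levels) system
          return np.eye(n) - A @ Z @ np.linalg.solve(E, Z.T)

      # 1D diffusion operator with a 1e4 permeability jump halfway along the domain
      n = 64
      perm = np.where(np.arange(n) < n // 2, 1.0, 1.0e4)
      t = 2.0 / (1.0 / perm[:-1] + 1.0 / perm[1:])     # harmonic face transmissibility
      A = np.diag(np.r_[t, 1.0] + np.r_[1.0, t]) - np.diag(t, 1) - np.diag(t, -1)

      P = deflation_projector(A, labels=(np.arange(n) >= n // 2).astype(int))
      # the two region modes are projected to (near-)zero eigenvalues
      print(np.sort(np.abs(np.linalg.eigvals(P @ A)))[:4])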

  3. Multigrid hierarchical simulated annealing method for reconstructing heterogeneous media

    NASA Astrophysics Data System (ADS)

    Pant, Lalit M.; Mitra, Sushanta K.; Secanell, Marc

    2015-12-01

    A reconstruction methodology based on different-phase-neighbor (DPN) pixel swapping and multigrid hierarchical annealing is presented. The method performs reconstructions by starting from a coarse image and successively refining it. The DPN information is used at each refinement stage to freeze interior pixels of preformed structures. This preserves the large-scale structures in refined images and also reduces the number of pixels to be swapped, thereby decreasing the computational time needed to reach a solution. Compared to conventional single-grid simulated annealing, this method was found to reduce the computation time required to achieve a reconstruction by around a factor of 70-90, with the potential for even higher speedups for larger reconstructions. The method is able to perform medium-sized (up to 300³ voxels) three-dimensional reconstructions with multiple correlation functions in 36-47 h.

  4. Microcanonical ensemble simulation method applied to discrete potential fluids

    NASA Astrophysics Data System (ADS)

    Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro

    2015-09-01

    In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed that measures the transition-rate probabilities between macroscopic states; its advantage over conventional Monte Carlo NVT (MC-NVT) simulations is that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data from MC-NVT simulations. These results are important in the context of applying the Hüller-Pleimling method to discrete-potential systems, which are based on generalizations of the SW and square-shoulder fluids.

  5. High Performance Parallel Methods for Space Weather Simulations

    NASA Technical Reports Server (NTRS)

    Hunter, Paul (Technical Monitor); Gombosi, Tamas I.

    2003-01-01

    This is the final report of our NASA AISRP grant entitled 'High Performance Parallel Methods for Space Weather Simulations'. The main thrust of the proposal was to achieve significant progress towards new high-performance methods which would greatly accelerate global MHD simulations and eventually make it possible to develop first-principles based space weather simulations which run much faster than real time. We are pleased to report that with the help of this award we made major progress in this direction and developed the first parallel implicit global MHD code with adaptive mesh refinement. The main limitation of all earlier global space physics MHD codes was the explicit time stepping algorithm. Explicit time steps are limited by the Courant-Friedrichs-Lewy (CFL) condition, which essentially ensures that no information travels more than a cell size during a time step. This condition represents a non-linear penalty for highly resolved calculations, since finer grid resolution (and consequently smaller computational cells) not only results in more computational cells, but also in smaller time steps.

  6. Knowledge-based simulation using object-oriented programming

    NASA Technical Reports Server (NTRS)

    Sidoran, Karen M.

    1993-01-01

    Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.

  7. Structured Debriefing in Simulation-Based Education.

    PubMed

    Palaganas, Janice C; Fey, Mary; Simon, Robert

    2016-02-01

    Debriefing following a simulation event is a conversational period for reflection and feedback aimed at sustaining or improving future performance. It is considered by many simulation educators to be a critical activity for learning in simulation-based education. Deep learning can be achieved during debriefing and often depends on the facilitation skills of the debriefer as well as the learner's perceptions of a safe and supportive learning environment as created by the debriefer. On the other hand, poorly facilitated debriefings may create adverse learning, generate bad feelings, and may lead to a degradation of clinical performance, self-reflection, or harm to the educator-learner relationship. The use of a structure that recognizes logical and sequential phases during debriefing can assist simulation educators to achieve a deep level of learning. PMID:26909457

  8. Simulation-Based Education with Mastery Learning Improves Paracentesis Skills

    PubMed Central

    Barsuk, Jeffrey H.; Cohen, Elaine R.; Vozenilek, John A.; O'Connor, Lanty M.; McGaghie, William C.; Wayne, Diane B.

    2012-01-01

    Background Paracentesis is a commonly performed bedside procedure that has the potential for serious complications. Therefore, simulation-based education for paracentesis is valuable for clinicians. Objective To assess internal medicine residents' procedural skills before and after simulation-based mastery learning on a paracentesis simulator. Methods A team with expertise in simulation and procedural skills developed and created a high-fidelity, ultrasound-compatible paracentesis simulator. Fifty-eight first-year internal medicine residents completed a mastery learning-based intervention using the paracentesis simulator. Residents underwent baseline skill assessment (pretest) using a 25-item checklist. Residents completed a posttest after a 3-hour education session featuring a demonstration of the procedure, deliberate practice, ultrasound training, and feedback. All residents were expected to meet or exceed a minimum passing score (MPS) at posttest, the key feature of mastery learning. We compared pretest and posttest checklist scores to evaluate the effect of the educational intervention. Residents rated the training sessions. Results Residents' paracentesis skills improved from an average pretest score of 33.0% (SD = 15.2%) to 92.7% (SD = 5.4%) at posttest (P < .001). After the training intervention, all residents met or exceeded the MPS. The training sessions and realism of the simulation were rated highly by learners. Conclusion This study demonstrates the ability of a paracentesis simulator to significantly improve procedural competence. PMID:23451302

  9. PACO: PArticle COunting Method To Enforce Concentrations in Dynamic Simulations.

    PubMed

    Berti, Claudio; Furini, Simone; Gillespie, Dirk

    2016-03-01

    We present PACO, a computationally efficient method for concentration boundary conditions in nonequilibrium particle simulations. Because it requires only particle counting, its computational effort is significantly smaller than other methods. PACO enables Brownian dynamics simulations of micromolar electrolytes (3 orders of magnitude lower than previously simulated). PACO for Brownian dynamics is integrated in the BROWNIES package (www.phys.rush.edu/BROWNIES). We also introduce a molecular dynamics PACO implementation that allows for very accurate control of concentration gradients.
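
    The particle-counting idea can be sketched directly: each dynamics step, count the particles in a boundary buffer and insert or delete the difference from the target. The geometry and numbers below are illustrative, not the published implementation.

      # Particle-counting concentration boundary in the spirit of PACO.
      import numpy as np

      rng = np.random.default_rng(0)
      box, buf = 100.0, 10.0               # domain length and buffer width
      target = 25                          # particles the buffer should hold

      pos = rng.uniform(0.0, box, size=300)

      def enforce_buffer(pos):
          in_buf = pos < buf
          n = in_buf.sum()
          if n < target:                                   # insert the deficit
              new = rng.uniform(0.0, buf, size=target - n)
              pos = np.concatenate([pos, new])
          elif n > target:                                 # delete the surplus
              drop = rng.choice(np.flatnonzero(in_buf), size=n - target,
                                replace=False)
              pos = np.delete(pos, drop)
          return pos

      pos = enforce_buffer(pos)            # call once per dynamics step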

  10. A Multiscale simulation method for ice crystallization and frost growth

    NASA Astrophysics Data System (ADS)

    Yazdani, Miad

    2015-11-01

    Formation of ice crystals and frost is associated with physical mechanisms at immensely separated scales. The primary focus of this work is on crystallization and frost growth on a cold plate exposed to humid air. Nucleation is addressed through a Gibbs energy barrier method based on the interfacial energy of crystal and condensate as well as the ambient and surface conditions. The supercooled crystallization of ice crystals is simulated through a phase-field method in which the variation of the degree and mode of surface-tension anisotropy in the fluid medium is represented statistically. In addition, the mesoscale width of the interface is quantified asymptotically and serves as a length-scale criterion in a so-called "Adaptive" AMR (AAMR) algorithm that ties the grid resolution at the interface to local physical properties. Moreover, due to the exposure of the crystal to humid air, a secondary non-equilibrium growth process contributes to the formation of frost at the tip of the crystal. A Monte-Carlo implementation of the Diffusion Limited Aggregation method addresses the formation of frost during crystallization. Finally, a virtual-boundary-based Immersed Boundary Method (IBM) is adapted to address the interaction of the ice crystal with convective air during its growth.
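
    The diffusion-limited aggregation step is easy to sketch in its classic on-lattice form; the square lattice, walker count and sticking rule below are the textbook version, not the paper's implementation, which is coupled to the phase-field crystal.

      # On-lattice DLA: random walkers released far from the aggregate stick on
      # first contact, producing the dendritic growth invoked above.
      import numpy as np

      rng = np.random.default_rng(2)
      N = 61
      grid = np.zeros((N, N), dtype=bool)
      grid[N // 2, N // 2] = True                    # seed crystal at the center

      moves = ((0, 1), (0, -1), (1, 0), (-1, 0))
      for _ in range(200):                           # release 200 random walkers
          i, j = 1, int(rng.integers(N))             # start near one edge
          for _ in range(200_000):                   # give up on very long walks
              di, dj = moves[rng.integers(4)]
              i, j = (i + di) % N, (j + dj) % N      # periodic random walk
              if grid[(i + 1) % N, j] or grid[(i - 1) % N, j] \
                      or grid[i, (j + 1) % N] or grid[i, (j - 1) % N]:
                  grid[i, j] = True                  # stick on first contact
                  break
      print(grid.sum(), "sites aggregated")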

  11. The frontal method in hydrodynamics simulations

    USGS Publications Warehouse

    Walters, R.A.

    1980-01-01

    The frontal solution method has proven to be an effective means of solving the matrix equations resulting from the application of the finite element method to a variety of problems. In this study, several versions of the frontal method were compared in efficiency for several hydrodynamics problems. Three basic modifications were shown to be of value: 1. Elimination of equations with boundary conditions beforehand, 2. Modification of the pivoting procedures to allow dynamic management of the equation size, and 3. Storage of the eliminated equations in a vector. These modifications are sufficiently general to be applied to other classes of problems. © 1980.

  12. Microcomputer-Based Programs for Pharmacokinetic Simulations.

    ERIC Educational Resources Information Center

    Li, Ronald C.; And Others

    1995-01-01

    Microcomputer software that simulates drug-concentration time profiles based on user-assigned pharmacokinetic parameters such as central volume of distribution, elimination rate constant, absorption rate constant, dosing regimens, and compartmental transfer rate constants is described. The software is recommended for use in undergraduate…
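
    The kind of profile such courseware generates follows from the standard one-compartment model with first-order absorption; the closed form below uses illustrative parameter values and assumes complete bioavailability.

      # One-compartment pharmacokinetics with first-order absorption (ka) and
      # elimination (ke); dose and parameters are illustrative.
      import numpy as np

      def concentration(t, dose=500.0, V=40.0, ka=1.2, ke=0.15):
          """Plasma concentration (mg/L) after an oral dose, bioavailability 1."""
          return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

      t = np.linspace(0.0, 24.0, 200)                # hours after dosing
      c = concentration(t)
      print(f"Cmax = {c.max():.2f} mg/L at t = {t[c.argmax()]:.1f} h")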

  13. Simulation reduction using the Taguchi method

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Lautenschlager, Ume; Erikstad, Stein Owe; Allen, Janet K.

    1993-01-01

    A large amount of engineering effort is consumed in conducting experiments to obtain information needed for making design decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and having high-quality products. The principal focus of this project is to develop and implement applications of Taguchi's quality engineering techniques. In particular, we show how these techniques are applied to reduce the number of experiments for trajectory simulation of the LifeSat space vehicle. Orthogonal arrays are used to study many parameters simultaneously with a minimum of time and resources. Taguchi's signal-to-noise ratio is employed to measure quality. A compromise Decision Support Problem and Robust Design are applied to demonstrate how quality is designed into a product in the early stages of design.
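
    The bookkeeping is compact: an orthogonal array assigns factor levels to a handful of runs, and a signal-to-noise ratio scores each run. The L4(2³) array below is the standard one; the dispersion values are invented for illustration.

      # Taguchi-style experiment reduction: 3 two-level factors covered in
      # 4 runs instead of 2^3 = 8, scored by a smaller-is-better S/N ratio.
      import numpy as np

      L4 = np.array([[1, 1, 1],         # each row: one simulation configuration
                     [1, 2, 2],
                     [2, 1, 2],
                     [2, 2, 1]])

      # hypothetical landing-dispersion results (km) from replicate trajectories
      y = np.array([[2.1, 2.4], [3.0, 3.3], [1.2, 1.4], [2.7, 2.5]])

      # smaller-is-better S/N ratio: -10 log10(mean(y^2)); larger S/N is better
      sn = -10.0 * np.log10((y**2).mean(axis=1))
      print("best run:", L4[sn.argmax()], "S/N =", sn.max())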

  14. Multinomial Tau-Leaping Method for Stochastic Kinetic Simulations

    SciTech Connect

    Pettigrew, Michel F.; Resat, Haluk

    2007-02-28

    We introduce the multinomial tau-leaping (MtL) method, an improved version of the binomial tau-leaping method, for general reaction networks. Improvements in efficiency are achieved in several ways. First, tau-leaping steps are determined simply and efficiently using a priori information. Second, networks are partitioned into closed groups of reactions and corresponding reactants in which no group reactant or reaction is found in any other group. Third, product formation is factored into upper-bound estimation of the number of times a particular reaction occurs. Together, these features allow for larger time steps where the numbers of reactions occurring simultaneously in a multichannel manner are estimated accurately using a multinomial distribution. Using a wide range of test case problems of scientific and practical interest involving cellular processes, such as epidermal growth factor receptor signaling and a lactose operon model incorporating gene transcription and translation, we show that tau-leaping based methods like the MtL algorithm can significantly reduce the number of simulation steps, thus increasing the numerical efficiency over the exact stochastic simulation algorithm by orders of magnitude. Furthermore, the simultaneous multichannel representation capability of the MtL algorithm makes it a candidate for FPGA implementation or for parallelization in parallel computing environments.

  15. Development of an Agent-Based Model (ABM) to Simulate the Immune System and Integration of a Regression Method to Estimate the Key ABM Parameters by Fitting the Experimental Data.

    PubMed

    Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le

    2015-01-01

    Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for an ABM to estimate key model parameters by incorporating experimental data, whereas a differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing an ABM to mimic the multi-scale immune system with various phenotypes and types of cells, and by using the input and output of the ABM to build a Loess regression for key parameter estimation. Next, we employed a greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set and used the ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer key parameters as a DE model does. Therefore, this study innovatively developed a complex-system development mechanism that can simulate the complicated immune system in detail like an ABM and validate the model's reliability and efficiency like a DE by fitting the experimental data.

  16. Engineering virtual-environment-based training simulators

    NASA Astrophysics Data System (ADS)

    Jense, Hans; Kuijper, Frido

    1998-04-01

    While the potential of Virtual Environments (VEs) for training simulators has been recognized from the start of the technology's emergence, to date most VE systems that claim to be training simulators have been developed in an ad hoc fashion. Based on requirements of the Royal Netherlands Army and Air Force, we have recently developed VE-based training simulators following basic systems engineering practice. This paper reports on our approach in general, and specifically focuses on two examples. The first is a distributed VE system for training Forward Air Controllers (FACs). This system comprises an immersive VE for the FAC trainee, as well as a number of other components, all interconnected in a network infrastructure utilizing the DIS/HLA standard protocols for distributed simulation. The prototype VE FAC simulator is currently being used in the training program of the Netherlands Integrated Air/Ground Operations School. Feedback from the users is being collected as input for a follow-on development activity. A second development is aimed at evaluating VE technology for training gunnery procedures with the Stinger man-portable air-defense system. In this project, a system is being developed that enables us to evaluate a number of different configurations with respect to both human and system performance characteristics.

  17. Accelerated GPU based SPECT Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency

  18. Simulation of the 2-dimensional Drude’s model using molecular dynamics method

    SciTech Connect

    Naa, Christian Fredy; Amin, Aisyah; Ramli; Suprijadi; Djamal, Mitra; Wahyoedi, Seramika Ari; Viridi, Sparisoma

    2015-04-16

    In this paper, we report the results of a simulation of electronic conduction in solids. The simulation is based on the Drude model, applying the molecular dynamics (MD) method with a fifth-order predictor-corrector algorithm. A formula for the electrical conductivity as a function of lattice length and ion diameter, τ(L, d), can be obtained empirically from the simulation results.

  19. Simulation of solid body motion in a Newtonian fluid using a vorticity-based pseudo-spectral immersed boundary method augmented by the radial basis functions

    NASA Astrophysics Data System (ADS)

    Sabetghadam, Fereidoun; Soltani, Elshan

    2015-10-01

    The moving boundary conditions are implemented in the Fourier pseudo-spectral solution of the two-dimensional incompressible Navier-Stokes equations (NSE) in the vorticity-velocity form, using radial basis functions (RBF). Without explicit definition of an external forcing function, the desired immersed boundary conditions are imposed by direct modification of the convection and diffusion terms. At the beginning of each time step, the solenoidal velocities satisfying the desired moving boundary conditions, along with a modified vorticity, are obtained and used to modify the convection and diffusion terms of the vorticity evolution equation. Time integration is performed by the explicit fourth-order Runge-Kutta method, and the boundary conditions are set at the beginning of each sub-step. The method is applied to a couple of moving boundary problems, and more than second-order accuracy in space is demonstrated for Reynolds numbers up to Re = 550. Moreover, the performance of the method is shown in comparison with the classical Fourier pseudo-spectral method.

  20. Situating Computer Simulation Professional Development: Does It Promote Inquiry-Based Simulation Use?

    ERIC Educational Resources Information Center

    Gonczi, Amanda L.; Maeng, Jennifer L.; Bell, Randy L.; Whitworth, Brooke A.

    2016-01-01

    This mixed-methods study sought to identify professional development implementation variables that may influence participant (a) adoption of simulations, and (b) use for inquiry-based science instruction. Two groups (Cohort 1, N = 52; Cohort 2, N = 104) received different professional development. Cohort 1 was focused on Web site use mechanics.…

  1. Multigrid methods for numerical simulation of laminar diffusion flames

    NASA Technical Reports Server (NTRS)

    Liu, C.; Liu, Z.; Mccormick, S.

    1993-01-01

    This paper documents the results of a computational study of multigrid methods for the numerical simulation of 2D diffusion flames. The focus is on a simplified combustion model, which is assumed to be a single-step, infinitely fast, irreversible chemical reaction with five species (C3H8, O2, N2, CO2 and H2O). A fully implicit second-order hybrid scheme is developed on a staggered grid, which is stretched in the streamwise coordinate direction. A full approximation multigrid scheme (FAS) based on line-distributive relaxation is developed as a fast solver for the algebraic equations arising at each time step. Convergence of the process for the simplified model problem is more than two orders of magnitude faster than other iterative methods, and the computational results show good grid convergence, with second-order accuracy, as well as qualitative agreement with the results of other researchers.
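
    The multigrid idea is easiest to convey on a linear model problem. The sketch below is a two-grid cycle for 1D Poisson with weighted-Jacobi smoothing, far simpler than the paper's FAS scheme for the combustion system, but it exhibits the same smooth-restrict-correct structure.

      # Two-grid cycle for -u'' = f on [0,1] with homogeneous Dirichlet ends.
      import numpy as np

      def jacobi(u, f, h, sweeps=3, w=2/3):
          for _ in range(sweeps):            # weighted Jacobi smoother
              u[1:-1] = (1-w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
          return u

      def two_grid(u, f, h):
          u = jacobi(u, f, h)                                       # pre-smooth
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] + (u[:-2] - 2*u[1:-1] + u[2:]) / h**2   # residual
          rc = r[::2]                                               # restrict
          nc, hc = rc.size, 2*h
          Ac = (2*np.eye(nc-2) - np.eye(nc-2, k=1) - np.eye(nc-2, k=-1)) / hc**2
          ec = np.zeros(nc)
          ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])                  # coarse solve
          e = np.interp(np.linspace(0, 1, u.size), np.linspace(0, 1, nc), ec)
          return jacobi(u + e, f, h)                                # post-smooth

      n = 129
      x = np.linspace(0.0, 1.0, n)
      f = np.pi**2 * np.sin(np.pi * x)       # exact solution: sin(pi x)
      u = np.zeros(n)
      for _ in range(10):
          u = two_grid(u, f, 1.0 / (n - 1))
      print("max error:", np.abs(u - np.sin(np.pi * x)).max())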

  2. Transcending Competency Testing in Hospital-Based Simulation.

    PubMed

    Lassche, Madeline; Wilson, Barbara

    2016-02-01

    Simulation is a frequently used method for training students in health care professions and has recently gained acceptance in acute care hospital settings for use in educational programs and competency testing. Although hospital-based simulation is currently limited primarily to use in skills acquisition, expansion of the use of simulation via a modified Quality Health Outcomes Model to address systems factors such as the physical environment and human factors such as fatigue, reliance on memory, and reliance on vigilance could drive system-wide changes. Simulation is an expensive resource and should not be limited to use for education and competency testing. Well-developed, peer-reviewed simulations can be used for environmental factors, human factors, and interprofessional education to improve patients' outcomes and drive system-wide change for quality improvement initiatives. PMID:26909459

  3. Multiple time-scale methods in particle simulations of plasmas

    SciTech Connect

    Cohen, B.I.

    1985-02-14

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large time step while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.
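
    Of the surveyed techniques, subcycling is the simplest to sketch: fast particle orbits advance with a small substep while the slowly varying field is recomputed only once per large step. The restoring-force 'field solve' below is a toy stand-in, not any of the surveyed physics models.

      # Subcycling: M cheap particle substeps per (expensive) field update.
      import numpy as np

      def subcycled_push(x, v, n_slow=100, M=10, dt_slow=0.2):
          dt = dt_slow / M                   # fast (particle) substep
          for _ in range(n_slow):
              E = -x                         # toy 'field solve', once per big step
              for _ in range(M):             # M substeps with the field frozen
                  v += E * dt
                  x += v * dt
          return x, v

      print(subcycled_push(1.0, 0.0))        # toy oscillator with a lagged field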

  4. Simulating multiple diffraction in imaging systems using a path integration method.

    PubMed

    Mout, Marco; Wick, Michael; Bociort, Florian; Petschulat, Jörg; Urbach, Paul

    2016-05-10

    We present a method for simulating multiple diffraction in imaging systems based on the Huygens-Fresnel principle. The method accounts for the effects of both aberrations and diffraction and is entirely performed using Monte Carlo ray tracing. We compare the results of this method to those of reference simulations for field propagation through optical systems and for the calculation of point spread functions. The method can accurately model a wide variety of optical systems beyond the exit pupil approximation. PMID:27168302

  5. Cluster growth processes by direct simulation monte carlo method

    NASA Astrophysics Data System (ADS)

    Mizuseki, H.; Jin, Y.; Kawazoe, Y.; Wille, L. T.

    Thin films obtained by cluster deposition have attracted strong attention, both as a new manufacturing technique for realizing high-density magnetic recording media and as a means to create systems with unique magnetic properties. Because the film's features are influenced by the cluster properties during the flight path, the relevant physical scale to be studied is as large as centimeters. In this paper, a new model of cluster growth processes based on a combination of the Direct Simulation Monte Carlo (DSMC) method and a cluster growth model is introduced to examine the effects of experimental conditions on cluster growth in an adiabatic expansion process. From the macroscopic viewpoint, we simulate the behavior of clusters and inert gas in the flight path under different experimental conditions. The internal energy of the cluster, which consists of rotational and vibrational energies, is limited by the binding energy, which depends on the cluster size. These internal and binding energies are used as criteria for cluster growth. The binding energy is estimated from surface and volume terms. Several types of size distributions of the generated clusters under various conditions are obtained with the present model. The results of the present numerical simulations reveal that the size distribution is strongly related to the experimental conditions and can be controlled.

  6. Task simulation in computer-based training

    SciTech Connect

    Gardner, P.R.

    1988-02-01

    Westinghouse Hanford Company (WHC) makes extensive use of job-task simulations in company-developed computer-based training (CBT) courseware. This courseware is different from most others because it does not simulate process control machinery or other computer programs; instead, the WHC Exercises model day-to-day tasks such as physical work preparations, progress, and incident handling. These Exercises provide a higher level of motivation and enable the testing of more complex patterns of behavior than those typically measured by multiple-choice and short-answer questions. Examples from the WHC Radiation Safety and Crane Safety courses will be used as illustrations. 3 refs.

  7. Simulating Biofilm Deformation and Detachment with the Immersed Boundary Method

    NASA Astrophysics Data System (ADS)

    Sudarsan, Rangarajan; Ghosh, Sudeshna; Stockie, John M.; Eberl, Hermann J.

    2016-03-01

    We apply the immersed boundary (IB) method to simulate the deformation and detachment of a periodic array of wall-bounded biofilm colonies in response to a linear shear flow. The biofilm material is represented as a network of Hookean springs placed along the edges of a triangulation of the biofilm region. The interfacial shear stress and the lift and drag forces acting on the biofilm colony are computed using the fluid stress jump method developed by Williams, Fauci and Gaver [Disc. Contin. Dyn. Sys. B 11(2):519-540, 2009], with a modified version of their exclusion filter. Our detachment criterion is based on the novel concept of an averaged equivalent continuum stress tensor defined at each IB point in the biofilm, which is then used to determine a corresponding von Mises yield stress; wherever this yield stress exceeds a given critical threshold, the connections to that node are severed, signalling the onset of a detachment event. In order to capture the deformation and detachment behaviour of a biofilm colony at different stages of growth, we consider a family of four biofilm shapes with varying aspect ratio. Our numerical simulations focus on the behaviour of weak biofilms (with relatively low yield stress threshold) and investigate features of the fluid-structure interaction such as locations of maximum shear and increased drag. The most important conclusion of this work is that the commonly employed detachment strategy in biofilm models, based only on interfacial shear stress, can lead to incorrect or inaccurate results when applied to the study of shear-induced detachment of weak biofilms. Our detachment strategy based on equivalent continuum stresses provides a unified and consistent IB framework that handles both sloughing and erosion modes of biofilm detachment, and is consistent with strategies employed in many other continuum-based biofilm models.
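
    The detachment test itself reduces to a few lines once the averaged stress tensor at each node is in hand; computing that tensor is the paper's machinery and is assumed done here, while the plane-stress von Mises formula and the spring bookkeeping are standard.

      # Von Mises detachment check for a spring-network IB structure.
      import numpy as np

      def von_mises_2d(sigma):
          """Equivalent stress from a 2x2 (plane) stress tensor."""
          sxx, syy, sxy = sigma[0, 0], sigma[1, 1], sigma[0, 1]
          return np.sqrt(sxx**2 - sxx * syy + syy**2 + 3.0 * sxy**2)

      def detach(node_stresses, springs, sigma_crit):
          """Sever every spring touching a node whose yield stress is exceeded."""
          failed = {i for i, s in enumerate(node_stresses)
                    if von_mises_2d(s) > sigma_crit}
          return [(a, b) for (a, b) in springs
                  if a not in failed and b not in failed]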

  8. Does a motion base prevent simulator sickness?

    NASA Technical Reports Server (NTRS)

    Sharkey, Thomas J.; Mccauley, Michael E.

    1992-01-01

    The use of high-fidelity motion cues to reduce the discrepancy between visually implied motion and actual motion is tested experimentally using the NASA Vertical Motion Simulator (VMS). Ten pilot subjects use the VMS to fly simulated S-turns and sawtooths which generate a high incidence of motion sickness. The subjects fly the maneuvers on separate days both with and without use of a motion base provided by the VMS, and data are collected regarding symptoms, dark focus, and postural equilibrium. The motion-base condition is shown to be practically irrelevant with respect to the incidence and severity of motion sickness. It is suggested that the data-collection procedure cannot detect differences in sickness levels, and the false cues of the motion condition are theorized to have an adverse impact approximately equivalent to the absence of cues in a fixed-base condition.

  9. Microcomputer based software for biodynamic simulation

    NASA Technical Reports Server (NTRS)

    Rangarajan, N.; Shams, T.

    1993-01-01

    This paper presents a description of a microcomputer based software package, called DYNAMAN, which has been developed to allow an analyst to simulate the dynamics of a system consisting of a number of mass segments linked by joints. One primary application is in predicting the motion of a human occupant in a vehicle under the influence of a variety of external forces, specially those generated during a crash event. Extensive use of a graphical user interface has been made to aid the user in setting up the input data for the simulation and in viewing the results from the simulation. Among its many applications, it has been successfully used in the prototype design of a moving seat that aids in occupant protection during a crash, by aircraft designers in evaluating occupant injury in airplane crashes, and by users in accident reconstruction for reconstructing the motion of the occupant and correlating the impacts with observed injuries.

  10. Streamflow simulation methods for ungauged and poorly gauged watersheds

    NASA Astrophysics Data System (ADS)

    Loukas, A.; Vasiliades, L.

    2014-02-01

    Rainfall-runoff modelling procedures for ungauged and poorly gauged watersheds are developed in this study. A well-established hydrological model, the UBC watershed model, is selected and applied in five different river basins located in Canada, Cyprus and Pakistan. Catchments from cold, temperate, continental and semiarid climate zones are included to demonstrate the developed procedures. Two methodologies for streamflow modelling are proposed and analysed. The first method uses the UBC watershed model with a universal set of parameters for water allocation and flow routing, and precipitation gradients estimated from the available annual precipitation data as well as from regional information on the distribution of orographic precipitation. This method is proposed for watersheds without streamflow gauge data and limited meteorological station data. The second, hybrid method proposes the coupling of the UBC watershed model with artificial neural networks (ANNs) and is intended for use in poorly gauged watersheds which have limited streamflow measurements. The two proposed methods have been applied to five mountainous watersheds with largely varying climatic, physiographic and hydrological characteristics. The evaluation of the applied methods is based on a combination of graphical results, statistical evaluation metrics, and normalized goodness-of-fit statistics. The results show that the first method satisfactorily simulates the observed hydrograph assuming that the basins are ungauged. When limited streamflow measurements are available, the coupling of ANNs with the regional non-calibrated UBC flow model components is considered a successful alternative to the conventional calibration of a hydrological model, based on the employed evaluation criteria for streamflow modelling and flood frequency estimation.

  11. Streamflow simulation methods for ungauged and poorly gauged watersheds

    NASA Astrophysics Data System (ADS)

    Loukas, A.; Vasiliades, L.

    2014-07-01

    Rainfall-runoff modelling procedures for ungauged and poorly gauged watersheds are developed in this study. A well-established hydrological model, the University of British Columbia (UBC) watershed model, is selected and applied in five different river basins located in Canada, Cyprus, and Pakistan. Catchments from cold, temperate, continental, and semiarid climate zones are included to demonstrate the procedures developed. Two methodologies for streamflow modelling are proposed and analysed. The first method uses the UBC watershed model with a universal set of parameters for water allocation and flow routing, and precipitation gradients estimated from the available annual precipitation data as well as from regional information on the distribution of orographic precipitation. This method is proposed for watersheds without streamflow gauge data and limited meteorological station data. The second hybrid method proposes the coupling of UBC watershed model with artificial neural networks (ANNs) and is intended for use in poorly gauged watersheds which have limited streamflow measurements. The two proposed methods have been applied to five mountainous watersheds with largely varying climatic, physiographic, and hydrological characteristics. The evaluation of the applied methods is based on the combination of graphical results, statistical evaluation metrics, and normalized goodness-of-fit statistics. The results show that the first method satisfactorily simulates the observed hydrograph assuming that the basins are ungauged. When limited streamflow measurements are available, the coupling of ANNs with the regional, non-calibrated UBC flow model components is considered a successful alternative method to the conventional calibration of a hydrological model based on the evaluation criteria employed for streamflow modelling and flood frequency estimation.
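
    A minimal sketch of the hybrid idea above, assuming entirely synthetic data: an ANN (scikit-learn's MLPRegressor) is trained on a short gauged record to correct the output of a non-calibrated conceptual model. The forcing, the stand-in model, and the feature choice are illustrative, not the authors' actual UBC/ANN coupling.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_days = 1000
    precip = rng.gamma(shape=0.6, scale=8.0, size=n_days)      # synthetic forcing
    kernel = np.exp(-np.arange(10) / 3.0)
    q_obs = np.convolve(precip, kernel, mode="same")           # "observed" flow
    q_model = 0.7 * q_obs + rng.normal(0.0, 0.5, n_days)       # biased regional model

    # Features: non-calibrated model flow plus a short precipitation memory.
    X = np.column_stack([q_model, precip, np.roll(precip, 1), np.roll(precip, 2)])

    # Pretend only the first 200 days were ever gauged; train the correction there.
    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    ann.fit(X[:200], q_obs[:200])

    q_hyb = ann.predict(X[200:])
    rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
    print("RMSE raw model: %.2f   RMSE hybrid: %.2f"
          % (rmse(q_model[200:], q_obs[200:]), rmse(q_hyb, q_obs[200:])))
    ```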

  12. Simulation of Free Surface Dynamics in a Random Heterogeneous Porous Medium by the Method Based on Mapping the Regular Domain on the Flow Domain.

    NASA Astrophysics Data System (ADS)

    Lazarev, Y.; Petrov, P.; Tartakovsky, D. M.

    2002-12-01

    In this paper the problem of a vacuum-incompressible fluid interface moving in a porous medium is considered by treating the conductivity of the medium as a random field with known statistics. The flow is described by a combination of mass conservation and Darcy's law. The use of a coordinate system tied to the moving fluid allows reducing the problem to the well-explored class of problems with fixed boundaries and an effective conductivity tensor in place of the initial scalar conductivity. The hydraulic head is represented as a series in powers of the effective conductivity fluctuations. The applied procedure is close to a perturbation expansion in the amplitude of the hydraulic conductivity fluctuations $\tilde{K}$ when the solution is sought with accuracy up to $\tilde{K}^2$. In both cases the fluctuation of a physical quantity is considered to be proportional to its cause: $\tilde{V} = A \cdot \tilde{K}$, where $A$ is a linear operator. Yet unlike perturbation theory, where $A$ is taken to depend only on the undisturbed flow parameters, $A = A(\bar{K})$, in the present approach $A$ depends on the averaged flow parameters, $A = A(\bar{K}, \langle \tilde{K}^2 \rangle)$. Equations for the mean hydraulic head and mean flux, as well as expressions for the respective variances, are derived in the 2-D case. For 1-D flow the derived solution agrees with the exact one to terms of order $\sigma_K^2$ for arbitrary free-surface fluctuations. Within this approach the free-surface motion, the time evolution of the mean hydraulic head spatial distribution, the mean flux, and the relative correlation functions are described by a set of first-order partial differential equations. The preconditioned conjugate gradient method is proposed as the general numerical method for finding the statistical moments of the hydraulic head; the symmetry and positive definiteness of the problem matrix provide the foundation for the method's applicability. The RFLOW code has been developed to solve this set of equations numerically. Testing data of the
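
    The abstract proposes a preconditioned conjugate gradient solver for the symmetric positive-definite moment equations. Below is a minimal sketch of that solver with a Jacobi preconditioner on a stand-in SPD system (a 1-D Laplacian); the preconditioner and test matrix are illustrative choices, not those of the RFLOW code.

    ```python
    import numpy as np

    def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
        """Solve A x = b for SPD A with a diagonal (Jacobi) preconditioner."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv_diag * r            # apply M^{-1}
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv_diag * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # SPD test problem: 1-D Laplacian (a stand-in for the discretized moment equations).
    n = 100
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
    print("residual:", np.linalg.norm(b - A @ x))
    ```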

  13. Space-based radar array system simulation and validation

    NASA Astrophysics Data System (ADS)

    Schuman, H. K.; Pflug, D. R.; Thompson, L. D.

    1981-08-01

    The present status of the space-based radar phased array lens simulator is discussed. Huge arrays of thin wire radiating elements on either side of a ground screen are modeled by the simulator. Also modeled are amplitude and phase adjust modules connecting radiating elements between arrays, feedline-to-radiator mismatch, and lens warping. A successive approximation method is employed. The first approximation is based on a plane wave expansion (infinite array) moment method especially suited to large array analysis. The first-approximation results then facilitate higher-approximation computations that account for effects of nonuniform periodicities (lens edge, lens section interfaces, failed modules, etc.). The programming to date is discussed via flow diagrams. An improved theory is presented in a consolidated development. The use of the simulator is illustrated by computing active impedances and radiating element current distributions for infinite planar arrays of straight and 'swept back' dipoles (arms inclined with respect to the array plane) with feedline scattering taken into account.

  14. Numerical Methods and Simulations of Complex Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Brady, Peter

    Multiphase flows are an important part of many natural and technological phenomena such as ocean-air coupling (which is important for climate modeling) and the atomization of liquid fuel jets in combustion engines. The unique challenges of multiphase flow often make analytical solutions to the governing equations impossible and experimental investigations very difficult. Thus, high-fidelity numerical simulations can play a pivotal role in understanding these systems. This dissertation describes numerical methods developed for complex multiphase flows and the simulations performed using these methods. First, the issue of multiphase code verification is addressed. Code verification answers the question "Is this code solving the equations correctly?" The method of manufactured solutions (MMS) is a procedure for generating exact benchmark solutions which can test the most general capabilities of a code. The chief obstacle to applying MMS to multiphase flow lies in the discontinuous nature of the material properties at the interface. An extension of the MMS procedure to multiphase flow is presented, using an adaptive marching tetrahedron style algorithm to compute the source terms near the interface. Guidelines for the use of the MMS to help locate coding mistakes are also detailed. Three multiphase systems are then investigated: (1) the thermocapillary motion of three-dimensional and axisymmetric drops in a confined apparatus, (2) the flow of two immiscible fluids completely filling an enclosed cylinder and driven by the rotation of the bottom endwall, and (3) the atomization of a single drop subjected to a high shear turbulent flow. The systems are simulated numerically by solving the full multiphase Navier-Stokes equations coupled to the various equations of state and a level set interface tracking scheme based on the refined level set grid method. The codes have been parallelized using MPI in order to take advantage of today's very large parallel computational
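
    As a single-phase illustration of the MMS verification step described above (the multiphase interface treatment is not shown), the sketch below manufactures u(x,t) = sin(pi x) exp(-t) for u_t = u_xx + S, derives the source term S analytically, and checks the observed convergence order of an explicit finite-difference solver.

    ```python
    import numpy as np

    def max_error(n, t_end=0.1):
        x = np.linspace(0.0, 1.0, n + 1)
        dx = x[1] - x[0]
        dt = 0.25 * dx ** 2                   # stable explicit diffusion step
        u = np.sin(np.pi * x)                 # manufactured initial condition
        t = 0.0
        while t < t_end:
            # Manufactured source S = u_t - u_xx for u = sin(pi x) exp(-t).
            S = (np.pi ** 2 - 1.0) * np.sin(np.pi * x) * np.exp(-t)
            u[1:-1] += dt * ((u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2 + S[1:-1])
            u[0] = u[-1] = 0.0                # exact boundary values
            t += dt
        return np.max(np.abs(u - np.sin(np.pi * x) * np.exp(-t)))

    e32, e64 = max_error(32), max_error(64)
    print("observed order ~ %.2f" % np.log2(e32 / e64))   # should approach 2
    ```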

  15. Hybrid optimization schemes for simulation-based problems.

    SciTech Connect

    Fowler, Katie; Gray, Genetha Anne; Griffin, Joshua D.

    2010-05-01

    The inclusion of computer simulations in the study and design of complex engineering systems has created a need for efficient approaches to simulation-based optimization. For example, in water resources management problems, optimization problems regularly consist of objective functions and constraints that rely on output from a PDE-based simulator. Various assumptions can be made to simplify either the objective function or the physical system so that gradient-based methods apply; however, the incorporation of realistic objective functions can be accomplished given the availability of derivative-free optimization methods. A wide variety of derivative-free methods exist, and each method has both advantages and disadvantages. Therefore, to address such problems, we propose a hybrid approach, which allows the combining of beneficial elements of multiple methods in order to more efficiently search the design space. Specifically, in this paper, we illustrate the capabilities of two novel algorithms: one which hybridizes pattern search optimization with Gaussian process emulation, and another which hybridizes pattern search and a genetic algorithm. We describe the hybrid methods and give some numerical results for a hydrological application which illustrate that the hybrids find an optimal solution under conditions for which traditional optimal search methods fail.
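
    A hedged sketch of one way such a hybrid might look: a compass-style pattern search whose poll set is augmented each iteration by a point proposed by a Gaussian-process surrogate fit to all previous evaluations. The objective, kernel, and update rules are illustrative, not the authors' algorithms.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def f(x):  # stand-in for an expensive simulation-based objective
        return (x[0] - 1.0) ** 2 + 3.0 * (x[1] + 0.5) ** 2

    rng = np.random.default_rng(1)
    dirs = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
    x = np.array([4.0, 4.0])
    fx = f(x)
    step = 1.0
    X_hist, y_hist = [x.copy()], [fx]
    gp = GaussianProcessRegressor(alpha=1e-6)   # jitter tolerates repeated points

    for _ in range(100):
        if step < 1e-3:
            break
        cands = [x + step * d for d in dirs]               # compass poll points
        gp.fit(np.array(X_hist), np.array(y_hist))         # surrogate on all history
        cloud = x + step * rng.normal(size=(50, 2))
        cands.append(cloud[np.argmin(gp.predict(cloud))])  # GP-suggested extra point

        vals = [f(c) for c in cands]
        X_hist += cands
        y_hist += vals
        i = int(np.argmin(vals))
        if vals[i] < fx:            # successful iteration: accept the best point
            x, fx = cands[i], vals[i]
        else:                       # unsuccessful: contract the pattern
            step *= 0.5

    print("minimizer ~", x, "value ~", fx)
    ```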

  16. Daylighting simulation: methods, algorithms, and resources

    SciTech Connect

    Carroll, William L.

    1999-12-01

    This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and components properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: the commercial computer graphics community (commerce, entertainment); the lighting industry; architectural rendering and visualization for projects; and academia (course materials, research). This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and are used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of

  17. Kinetic Method for Hydrogen-Deuterium-Tritium Mixture Distillation Simulation

    SciTech Connect

    Sazonov, A.B.; Kagramanov, Z.G.; Magomedbekov, E.P.

    2005-07-15

    Simulation of hydrogen distillation plants requires mathematical procedures suitable for multicomponent systems. In most present-day simulation methods a distillation column is assumed to be composed of theoretical stages, or plates. However, in the case of a multicomponent mixture the theoretical plate does not exist. An alternative kinetic method of simulation is presented in this work. According to this method, a system of mass-transfer differential equations is solved numerically, with the mass-transfer coefficients estimated using experimental results and empirical equations. The developed method allows calculation of the steady state of a distillation column as well as of any non-steady state when initial conditions are given. The results for steady states are compared with those obtained via the Thiele-Geddes theoretical stage technique, and the necessity of using the kinetic method is demonstrated. Examples of column startup and periodic distillation simulations are shown as well.

  18. Simulations of 6-DOF Motion with a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographically derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.

  19. A web-based virtual lighting simulator

    SciTech Connect

    Papamichael, Konstantinos; Lai, Judy; Fuller, Daniel; Tariq, Tara

    2002-05-06

    This paper is about a web-based "virtual lighting simulator," which is intended to allow architects and lighting designers to quickly assess the effect of key parameters on the daylighting and lighting performance in various space types. The virtual lighting simulator consists of a web-based interface that allows navigation through a large database of images and data, which were generated through parametric lighting simulations. In its current form, the virtual lighting simulator has two main modules, one for daylighting and one for electric lighting. The daylighting module includes images and data for a small office space, varying most key daylighting parameters, such as window size and orientation, glazing type, surface reflectance, sky conditions, time of the year, etc. The electric lighting module includes images and data for five space types (classroom, small office, large open office, warehouse and small retail), varying key lighting parameters, such as the electric lighting system, surface reflectance, dimming/switching, etc. The computed images include perspectives and plans and are displayed in various formats to support qualitative as well as quantitative assessment. The quantitative information is in the form of iso-contour lines superimposed on the images, as well as false color images and statistical information on work plane illuminance. The qualitative information includes images that are adjusted to account for the sensitivity and adaptation of the human eye. The paper also includes a section on the major technical issues and their resolution.

  20. Mathematical modeling and simulation in animal health - Part II: principles, methods, applications, and value of physiologically based pharmacokinetic modeling in veterinary medicine and food safety assessment.

    PubMed

    Lin, Z; Gehring, R; Mochel, J P; Lavé, T; Riviere, J E

    2016-10-01

    This review provides a tutorial for individuals interested in quantitative veterinary pharmacology and toxicology and offers a basis for establishing guidelines for physiologically based pharmacokinetic (PBPK) model development and application in veterinary medicine. This is important as the application of PBPK modeling in veterinary medicine has evolved over the past two decades. PBPK models can be used to predict drug tissue residues and withdrawal times in food-producing animals, to estimate chemical concentrations at the site of action and target organ toxicity to aid risk assessment of environmental contaminants and/or drugs in both domestic animals and wildlife, as well as to help design therapeutic regimens for veterinary drugs. This review provides a comprehensive summary of PBPK modeling principles, model development methodology, and the current applications in veterinary medicine, with a focus on predictions of drug tissue residues and withdrawal times in food-producing animals. The advantages and disadvantages of PBPK modeling compared to other pharmacokinetic modeling approaches (i.e., classical compartmental/noncompartmental modeling, nonlinear mixed-effects modeling, and interspecies allometric scaling) are further presented. The review finally discusses contemporary challenges and our perspectives on model documentation, evaluation criteria, quality improvement, and offers solutions to increase model acceptance and applications in veterinary pharmacology and toxicology.
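
    A minimal flow-limited PBPK sketch in the spirit of the models reviewed: one plasma and one tissue compartment exchanging drug via blood flow, with linear clearance from plasma, integrated with SciPy. All parameter values are hypothetical, and a real food-safety model would carry many more organs.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    V_p, V_t = 3.0, 10.0   # plasma / tissue volumes (L)
    Q = 1.2                # tissue blood flow (L/h)
    P = 4.0                # tissue:plasma partition coefficient
    CL = 0.8               # linear clearance from plasma (L/h)

    def rhs(t, y):
        C_p, C_t = y
        flux = Q * (C_p - C_t / P)          # flow-limited exchange with the tissue
        return [(-flux - CL * C_p) / V_p, flux / V_t]

    # 100 mg i.v. bolus into plasma; simulate 300 h.
    sol = solve_ivp(rhs, (0.0, 300.0), y0=[100.0 / V_p, 0.0], dense_output=True)

    # A withdrawal-time style question: when does tissue conc last exceed a limit?
    t = np.linspace(0.0, 300.0, 3001)
    C_t = sol.sol(t)[1]
    above = t[C_t >= 1.0]
    print("tissue conc falls below 1.0 mg/L at ~%.0f h" % above[-1])
    ```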

  1. Improved computational methods for simulating inertial confinement fusion

    NASA Astrophysics Data System (ADS)

    Fatenejad, Milad

    This dissertation describes the development of two multidimensional Lagrangian codes for simulating inertial confinement fusion (ICF) on structured meshes. The first is DRACO, a production code primarily developed by the Laboratory for Laser Energetics. Several significant new capabilities were implemented, including the ability to model radiative transfer using Implicit Monte Carlo [Fleck et al., JCP 8, 313 (1971)]. DRACO was also extended to operate in 3D Cartesian geometry on hexahedral meshes; originally the code was used only in 2D cylindrical geometry. This extension included implementing thermal conduction and a flux-limited multigroup diffusion model for radiative transfer. Diffusion equations are solved by extending the 2D Kershaw method [Kershaw, JCP 39, 375 (1981)] to three dimensions. The second radiation-hydrodynamics code developed as part of this thesis is Cooper, a new 3D code which operates on structured hexahedral meshes. Cooper supports the compatible hydrodynamics framework [Caramana et al., JCP 146, 227 (1998)] to obtain round-off-error levels of global energy conservation. This level of energy conservation is maintained even when two-temperature thermal conduction, ion/electron equilibration, and multigroup-diffusion-based radiative transfer are active. Cooper is parallelized using domain decomposition and photon energy group decomposition. The Mesh Oriented datABase (MOAB) computational library is used to exchange information between processes when domain decomposition is used. Cooper's performance is analyzed through direct comparisons with DRACO. Cooper also contains a method for preserving spherical symmetry during target implosions [Caramana et al., JCP 157, 89 (1999)]. Several deceleration-phase implosion simulations were used to compare instability growth using traditional hydrodynamics and compatible hydrodynamics with/without symmetry modification. These simulations demonstrate increased symmetry preservation errors when traditional hydrodynamics

  2. Modelling and Simulation as a Recognizing Method in Education

    ERIC Educational Resources Information Center

    Stoffa, Veronika

    2004-01-01

    Computer animation-simulation models of complex processes and events, used as a method of instruction, can be an effective didactic device. Gaining deeper knowledge about the objects modelled helps to plan simulation experiments oriented on the processes and events researched. Animation experiments realized on multimedia computers can aid easier…

  3. Simulation-based assessment for construction helmets.

    PubMed

    Long, James; Yang, James; Lei, Zhipeng; Liang, Daan

    2015-01-01

    In recent years, there has been a concerted effort for greater job safety in all industries. Personnel protective equipment (PPE) has been developed to help mitigate the risk of injury to humans that might be exposed to hazardous situations. The human head is the most vulnerable to impact, as an impact of moderate magnitude can cause serious injury or death. That is why industries have required the use of an industrial hard hat or helmet. Only a few articles published to date focus on the risk of head injury when wearing an industrial helmet, and a full understanding of the effectiveness of construction helmets in reducing injury is lacking. This paper presents a simulation-based method to determine the threshold at which a human will sustain injury when wearing a construction helmet and assesses the risk of injury for wearers of construction helmets or hard hats. Advanced finite element (FE) models were developed to study the impact on construction helmets. The FE model consists of two parts: the helmet and the human models. The human model consists of a brain, enclosed by a skull and an outer layer of skin. The level and probability of injury to the head were determined using both the head injury criterion (HIC) and the tolerance limits set by Deck and Willinger. The HIC has been widely used to assess the likelihood of head injury in vehicles. The tolerance levels proposed by Deck and Willinger are more suited for finite element models but lack wide-scale validation. Different cases of impact were studied using LSTC's LS-DYNA. PMID:23495784
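
    The head injury criterion mentioned above has a standard definition: HIC = max over windows [t1, t2] of (t2 - t1) * ((1/(t2 - t1)) * integral of a(t) dt)^2.5, with the resultant acceleration a in g and the window length capped (36 ms here). The brute-force evaluation below on a synthetic pulse is an illustration, not the paper's LS-DYNA post-processing.

    ```python
    import numpy as np

    def hic(t, a_g, max_window=0.036):
        """Brute-force HIC over all windows up to max_window; a_g in g, t in s."""
        # Cumulative trapezoidal integral of a(t) makes window means cheap.
        ca = np.concatenate([[0.0],
                             np.cumsum(0.5 * (a_g[1:] + a_g[:-1]) * np.diff(t))])
        best = 0.0
        for i in range(len(t) - 1):
            for j in range(i + 1, len(t)):
                dt = t[j] - t[i]
                if dt > max_window:
                    break
                mean_a = (ca[j] - ca[i]) / dt
                best = max(best, dt * mean_a ** 2.5)
        return best

    t = np.linspace(0.0, 0.05, 501)                        # 50 ms trace
    a = 150.0 * np.exp(-0.5 * ((t - 0.01) / 0.003) ** 2)   # synthetic 150 g pulse
    print("HIC36 ~ %.0f" % hic(t, a))
    ```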

  4. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monnie, Michael I; Colvin, Joshua; Faseder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine whether a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
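
    A toy null-event simulation in the spirit of STOCHSIM/DYNSTOC, under heavy simplification: at each fixed time step two molecules are drawn at random and react only if a rule applies and a probability test passes, so most draws are null events. The single binding rule A + B -> AB is a stand-in for a BioNetGen-style rule set.

    ```python
    import random

    random.seed(0)
    molecules = ["A"] * 500 + ["B"] * 500    # well-mixed compartment
    p_bind = 0.05                            # per-encounter probability for A + B -> AB

    t, dt = 0.0, 1e-3
    for _ in range(200_000):
        i, j = random.sample(range(len(molecules)), 2)
        if {molecules[i], molecules[j]} == {"A", "B"} and random.random() < p_bind:
            # The rule fires; any other draw is a null event and only advances time.
            molecules = [m for k, m in enumerate(molecules) if k not in (i, j)] + ["AB"]
        t += dt

    print("t =", round(t, 3), "counts:",
          {s: molecules.count(s) for s in set(molecules)})
    ```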

  5. Bayesian individualization via sampling-based methods.

    PubMed

    Wakefield, J

    1996-02-01

    We consider the situation where we wish to adjust the dosage regimen of a patient based on (in general) sparse concentration measurements taken on-line. A Bayesian decision theory approach is taken which requires the specification of an appropriate prior distribution and loss function. A simple method for obtaining samples from the posterior distribution of the pharmacokinetic parameters of the patient is described. In general, these samples are used to obtain a Monte Carlo estimate of the expected loss which is then minimized with respect to the dosage regimen. Some special cases which yield analytic solutions are described. When the prior distribution is based on a population analysis then a method of accounting for the uncertainty in the population parameters is described. Two simulation studies showing how the methods work in practice are presented. PMID:8827585
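
    A sketch of the sampling-based decision step described above, assuming a hypothetical lognormal posterior for the patient's clearance: the expected squared loss around a target steady-state concentration is estimated by Monte Carlo for each candidate dose rate and minimized on a grid.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # Hypothetical posterior draws for the patient's clearance (L/h).
    cl_samples = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=2000)
    target = 10.0                              # target steady-state conc (mg/L)

    def expected_loss(dose_rate):
        c_ss = dose_rate / cl_samples          # steady state: C_ss = R / CL
        return np.mean((c_ss - target) ** 2)   # Monte Carlo expected squared loss

    doses = np.linspace(10.0, 120.0, 400)      # candidate infusion rates (mg/h)
    best = doses[int(np.argmin([expected_loss(d) for d in doses]))]
    print("dose rate minimizing expected loss: %.1f mg/h" % best)
    ```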

  6. Fast spot-based multiscale simulations of granular drainage

    SciTech Connect

    Rycroft, Chris H.; Wong, Yee Lok; Bazant, Martin Z.

    2009-05-22

    We develop a multiscale simulation method for dense granular drainage, based on the recently proposed spot model, where the particle packing flows by local collective displacements in response to diffusing "spots" of interstitial free volume. By comparing with discrete-element method (DEM) simulations of 55,000 spheres in a rectangular silo, we show that the spot simulation is able to approximately capture many features of drainage, such as packing statistics, particle mixing, and flow profiles. The spot simulation runs two to three orders of magnitude faster than DEM, making it an appropriate method for real-time control or optimization. We demonstrate extensions for modeling particle heaping and avalanching at the free surface, and for simulating the boundary layers of slower flow near walls. We show that the spot simulations are robust and flexible by demonstrating that they can be used in both event-driven and fixed-timestep approaches, and by showing that the elastic relaxation step used in the model can be applied much less frequently and still produce good results.

  7. Deployment Simulation Methods for Ultra-Lightweight Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Johnson, Arthur R.

    2003-01-01

    Two dynamic inflation simulation methods are employed for modeling the deployment of folded thin-membrane tubes. The simulations are necessary because ground tests include gravity effects and may poorly represent deployment in space. The two simulation methods are referred to as the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method. They are available in the LS-DYNA nonlinear dynamic finite element code. Both methods are suitable for modeling the interactions between the inflation gas and the thin-membrane tube structures. The CV method only considers the pressure induced by the inflation gas in the simulation, while the ALE method models the actual flow of the inflation gas. Thus, the transient fluid properties at any location within the tube can be predicted by the ALE method. Deployment simulations of three packaged tube models, namely coiled, Z-folded, and telescopically folded configurations, are performed. Results predicted by both methods for the telescopically folded configuration are correlated, and computational efficiency issues are discussed.

  8. Discrete-element method simulations: from micro to macro scales.

    PubMed

    Heyes, D M; Baxter, J; Tüzün, U; Qin, R S

    2004-09-15

    Many liquid systems encountered in environmental science are often complex mixtures of many components which place severe demands on traditional computational modelling techniques. A meso scale description is required to account adequately for their flow behaviour on the meso and macro scales. Traditional techniques of computational fluid dynamics and molecular simulation are not well suited to tackling these systems, and researchers are increasingly turning to a range of relatively new computational techniques that offer the prospect of addressing the factors relevant to multicomponent multiphase liquids on length- and time-scales between the molecular level and the macro scale. In this category, we discuss the off-lattice techniques of 'smooth particle hydrodynamics' (SPH) and 'dissipative particle dynamics' (DPD), and the grid-based techniques of 'lattice gas' and 'lattice Boltzmann' (LB). We highlight the main conceptual and technical features underpinning these methods, their strengths and weaknesses, and provide a few examples of the applications of these techniques that illustrate their utility.

  9. Interactive physically-based sound simulation

    NASA Astrophysics Data System (ADS)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual, properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation

  10. Collaborative virtual experience based on reconfigurable simulation

    NASA Astrophysics Data System (ADS)

    Shahab, Qonita M.; Kwon, Yong-Moo; Ko, Heedong

    2006-10-01

    Virtual Reality simulation enables immersive 3D experience of a Virtual Environment. A simulation-based Virtual Environment can be used to map real-world phenomena onto virtual experience. With a reconfigurable simulation, users can reconfigure the parameters of the involved objects, so that they can see different effects from the different configurations. This concept is suitable for classroom learning of physics laws. This research studies the Virtual Reality simulation of Newtonian physics on rigid-body objects. With network support, collaborative interaction is enabled so that people from different places can interact with the same set of objects in an immersive Collaborative Virtual Environment. The taxonomy of interaction at different levels of collaboration is described as: distinct objects and same object, the latter comprising same object - sequentially, same object - concurrently - same attribute, and same object - concurrently - distinct attributes. The case studies are the interactions of users in two scenarios: destroying and creating a set of arranged rigid bodies. In Virtual Domino, users can observe physics laws while applying force to the domino blocks in order to destroy the arrangements. In Virtual Dollhouse, users can observe physics laws while constructing a dollhouse using existing building blocks, under gravity effects.

  11. Performance Analysis of an Actor-Based Distributed Simulation

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.

  12. Ocean Wave Simulation Based on Wind Field.

    PubMed

    Li, Zhongyi; Wang, Hao

    2016-01-01

    Ocean wave simulation has a wide range of applications in movies, video games, and training systems. Wind force is the main energy resource for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods for simulating oceans and other fluid phenomena have undergone rapid development during the past years in the field of computer graphics, few of them construct the ocean surface height field from the perspective of wind force driving ocean waves. We introduce wind force into the construction of the ocean surface height field by applying wind field data and wind-driven wave particles. Continual and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy is proposed to control these discrete wave particles and simulate an endless ocean surface. The results show that the new method is capable of producing a realistic ocean scene under the influence of wind fields at real-time rates. PMID:26808718

  14. Ocean Wave Simulation Based on Wind Field

    PubMed Central

    2016-01-01

    Ocean wave simulation has a wide range of applications in movies, video games, and training systems. Wind force is the main energy resource for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods for simulating oceans and other fluid phenomena have undergone rapid development during the past years in the field of computer graphics, few of them construct the ocean surface height field from the perspective of wind force driving ocean waves. We introduce wind force into the construction of the ocean surface height field by applying wind field data and wind-driven wave particles. Continual and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy is proposed to control these discrete wave particles and simulate an endless ocean surface. The results show that the new method is capable of producing a realistic ocean scene under the influence of wind fields at real-time rates. PMID:26808718

  15. Meshless thin-shell simulation based on global conformal parameterization.

    PubMed

    Guo, Xiaohu; Li, Xin; Bao, Yunfan; Gu, Xianfeng; Qin, Hong

    2006-01-01

    This paper presents a new approach to the physically-based thin-shell simulation of point-sampled geometry via explicit, global conformal point-surface parameterization and meshless dynamics. The point-based global parameterization is founded upon the rigorous mathematics of Riemann surface theory and Hodge theory. The parameterization is globally conformal everywhere except for a minimum number of zero points. Within our parameterization framework, any well-sampled point surface is functionally equivalent to a manifold, enabling popular and powerful surface-based modeling and physically-based simulation tools to be readily adapted for point geometry processing and animation. In addition, we propose a meshless surface computational paradigm in which the partial differential equations (for dynamic physical simulation) can be applied and solved directly over point samples via Moving Least Squares (MLS) shape functions defined on the global parametric domain without explicit connectivity information. The global conformal parameterization provides a common domain to facilitate accurate meshless simulation and efficient discontinuity modeling for complex branching cracks. Through our experiments on thin-shell elastic deformation and fracture simulation, we demonstrate that our integrative method is very natural, and that it has great potential to further broaden the application scope of point-sampled geometry in graphics and relevant fields.

  16. Remote Sensing Requirements Development: A Simulation-Based Approach

    NASA Technical Reports Server (NTRS)

    Zanoni, Vicki; Davis, Bruce; Ryan, Robert; Gasser, Gerald; Blonski, Slawomir

    2002-01-01

    Earth science research and application requirements for multispectral data have often been driven by currently available remote sensing technology. Few parametric studies exist that specify data required for certain applications. Consequently, data requirements are often defined based on the best data available or on what has worked successfully in the past. Since properties such as spatial resolution, swath width, spectral bands, signal-to-noise ratio (SNR), data quantization and band-to-band registration drive sensor platform and spacecraft system architecture and cost, analysis of these criteria is important to optimize system design objectively. Remote sensing data requirements are also linked to calibration and characterization methods. Parameters such as spatial resolution, radiometric accuracy and geopositional accuracy affect the complexity and cost of calibration methods. However, few studies have quantified the true accuracies required for specific problems. As calibration methods and standards are proposed, it is important that they be tied to well-known data requirements. The Application Research Toolbox (ART) developed at the John C. Stennis Space Center provides a simulation-based method for multispectral data requirements development. The ART produces simulated datasets from hyperspectral data through band synthesis. Parameters such as spectral band shape and width, SNR, data quantization, spatial resolution and band-to-band registration can be varied to create many different simulated data products. Simulated data utility can then be assessed for different applications so that requirements can be better understood.

  17. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high-performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high-resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  18. Coherent-wave Monte Carlo method for simulating light propagation in tissue

    NASA Astrophysics Data System (ADS)

    Kraszewski, Maciej; Pluciński, Jerzy

    2016-03-01

    Simulating the propagation and scattering of coherent light in turbid media, such as biological tissues, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g. finite-difference or finite-element methods) require a large amount of computer memory and long computation times, which makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, allows simulation only of light propagation averaged over the ensemble of turbid medium realizations, which makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating the propagation of coherent light (e.g. a laser beam) in biological tissue, which we call the coherent-wave Monte Carlo method. This method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the amount of memory and computation time required for simulation. We present the theoretical basis of the proposed method and its comparison with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.

  19. A tool for simulating parallel branch-and-bound methods

    NASA Astrophysics Data System (ADS)

    Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.

  20. An Efficient, Semi-implicit Pressure-based Scheme Employing a High-resolution Finite Element Method for Simulating Transient and Steady, Inviscid and Viscous, Compressible Flows on Unstructured Grids

    SciTech Connect

    Richard C. Martineau; Ray A. Berry

    2003-04-01

    A new semi-implicit pressure-based Computational Fluid Dynamics (CFD) scheme for simulating a wide range of transient and steady, inviscid and viscous compressible flows on unstructured finite elements is presented here. This new CFD scheme, termed the PCICE-FEM (Pressure-Corrected ICE Finite Element Method) scheme, is composed of three computational phases: an explicit predictor, an elliptic pressure Poisson solution, and a semi-implicit pressure correction of the flow variables. The PCICE-FEM scheme is capable of second-order temporal accuracy by incorporating a combination of a time-weighted form of the two-step Taylor-Galerkin Finite Element Method scheme as an explicit predictor for the balance of momentum equations and the finite element form of a time-weighted trapezoid rule method for the semi-implicit form of the governing hydrodynamic equations. Second-order spatial accuracy is accomplished by linear unstructured finite element discretization. The PCICE-FEM scheme employs Flux-Corrected Transport as a high-resolution filter for shock capturing. The scheme is capable of simulating flows from the nearly incompressible to the high supersonic flow regimes. The PCICE-FEM scheme represents an advancement in mass-momentum coupled, pressure-based schemes. The governing hydrodynamic equations for this scheme are the conservative form of the balance of momentum equations (Navier-Stokes), the mass conservation equation, and the total energy equation. An operator splitting process is performed along explicit and implicit operators of the semi-implicit governing equations to render the PCICE-FEM scheme in the class of predictor-corrector schemes. The complete set of semi-implicit governing equations in the PCICE-FEM scheme are cast in this form, an explicit predictor phase and a semi-implicit pressure-correction phase with the elliptic pressure Poisson solution coupling the predictor-corrector phases. The result of this predictor-corrector formulation is that the pressure Poisson

  1. Current concepts in simulation-based trauma education.

    PubMed

    Cherry, Robert A; Ali, Jameel

    2008-11-01

    The use of simulation-based technology in trauma education has focused on providing a safe and effective alternative to the more traditional methods that are used to teach technical skills and critical concepts in trauma resuscitation. Trauma team training using simulation-based technology is also being used to develop skills in leadership, team-information sharing, communication, and decision-making. The integration of simulators into medical student curriculum, residency training, and continuing medical education has been strongly recommended by the American College of Surgeons as an innovative means of enhancing patient safety, reducing medical errors, and performing a systematic evaluation of various competencies. Advanced human patient simulators are increasingly being used in trauma as an evaluation tool to assess clinical performance and to teach and reinforce essential knowledge, skills, and abilities. A number of specialty simulators in trauma and critical care have also been designed to meet these educational objectives. Ongoing educational research is still needed to validate long-term retention of knowledge and skills, provide reliable methods to evaluate teaching effectiveness and performance, and to demonstrate improvement in patient safety and overall quality of care.

  2. Classification method based on KCCA

    NASA Astrophysics Data System (ADS)

    Wang, Zhanqing; Zhang, Guilin; Zhao, Guangzhou

    2007-11-01

    Nonlinear CCA extends linear CCA in that it operates in the kernel space and thus implies nonlinear combinations in the original space. This paper presents a classification method based on kernel canonical correlation analysis (KCCA). We introduce probabilistic label vectors (PLV) for a given pattern, which extend the conventional concept of a class label, and investigate the correlation between feature variables and PLV variables. A PLV predictor is presented based on KCCA, and classification is then performed on the predicted PLV. We formulate a framework for classification by integrating class information through the PLV. Experimental results on Iris data set classification and facial expression recognition show the efficiency of the proposed method.
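
    A hedged sketch of the PLV idea: kernel rows serve as features (one standard way to emulate kernel CCA), CCA relates them to one-hot label vectors standing in for the PLV, and a test pattern is assigned the class with the largest predicted PLV entry. The dataset, kernel width, and component count are illustrative; the paper's exact formulation may differ.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.datasets import load_iris
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    K_tr = rbf_kernel(Xtr, Xtr, gamma=0.1)    # kernel rows as features (kernel trick)
    K_te = rbf_kernel(Xte, Xtr, gamma=0.1)
    PLV_tr = np.eye(3)[ytr]                   # one-hot rows standing in for the PLV

    cca = CCA(n_components=2)
    cca.fit(K_tr, PLV_tr)
    pred = np.argmax(cca.predict(K_te), axis=1)   # class = largest predicted PLV entry
    print("test accuracy:", round(float(np.mean(pred == yte)), 3))
    ```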

  3. Simulation-based design using wavelets

    NASA Astrophysics Data System (ADS)

    Williams, John R.; Amaratunga, Kevin S.

    1994-03-01

    The design of large-scale systems requires methods of analysis which have the flexibility to provide a fast interactive simulation capability, while retaining the ability to provide high-order solution accuracy when required. This suggests that a hierarchical solution procedure is required that allows us to trade off accuracy for solution speed in a rational manner. In this paper, we examine the properties of the biorthogonal wavelets recently constructed by Dahlke and Weinreich and show how they can be used to implement a highly efficient multiscale solution procedure for solving a certain class of one-dimensional problems.

  4. Simulation-Based Rule Generation Considering Readability

    PubMed Central

    Yahagi, H.; Shimizu, S.; Ogata, T.; Hara, T.; Ota, J.

    2015-01-01

    A rule generation method is proposed for an aircraft control problem in an airport. Designing appropriate rules for motion coordination of taxiing aircraft in the airport, which is conducted by ground control, is important. However, previous studies did not consider the readability of rules, which matters because rules must be operated and maintained by humans. Therefore, in this study, using an indicator of readability, we propose a method of rule generation based on parallel algorithm discovery and orchestration (PADO). Applied to the aircraft control problem, the proposed algorithm generates more readable and more robust rules and is found to be superior to previous methods. PMID:27347501

  5. Simulation-based disassembly systems design

    NASA Astrophysics Data System (ADS)

    Ohlendorf, Martin; Herrmann, Christoph; Hesselbach, Juergen

    2004-02-01

    Recycling of Waste of Electrical and Electronic Equipment (WEEE) is a matter of current concern, driven by economic, ecological and legislative reasons. Here, disassembly, as the first step of the treatment process, plays a key role. To achieve sustainable progress in WEEE disassembly, the key is not to limit analysis and planning to merely disassembly processes in a narrow sense, but to consider entire disassembly plants, including additional aspects such as internal logistics, storage, sorting, etc. In this regard, the paper presents ways of designing, dimensioning, structuring and modeling different disassembly systems. The goal is to achieve efficient and economic disassembly systems that allow recycling processes complying with legal requirements. Moreover, advantages of applying simulation software tools that are widespread and successfully utilized in conventional industry sectors are addressed. They support systematic disassembly planning by means of simulation experiments including consecutive efficiency evaluation. Consequently, anticipatory recycling planning considering various scenarios is enabled, and decisions about which types of disassembly systems are appropriate for specific circumstances, such as product spectrum, throughput, disassembly depth, etc., are supported. Furthermore, the integration of simulation-based disassembly planning into a holistic concept, with configuration of interfaces and data utilization including cost aspects, is described.

  6. Fault diagnosis based on continuous simulation models

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    1987-01-01

    The results are described of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like that? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest-precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.

  7. A modified method of characteristics and its application in forward and inversion simulations of underwater explosion

    NASA Astrophysics Data System (ADS)

    Zhang, Chengjiao; Li, Xiaojie; Yang, Chenchen

    2016-07-01

    This paper introduces a modified method of characteristics and its application in forward and inversion simulations of underwater explosion. Compared with the standard method of characteristics, which is appropriate to homentropic flow problems, the modified method can also be used to deal with isentropic flow problems such as underwater explosion. Underwater explosions of spherical TNT and composition B explosives are simulated using the modified method. Peak pressures and flow field pressures are obtained, and they are consistent with those from empirical formulas; the comparison demonstrates that the modified method is feasible and reliable in underwater explosion simulation. Based on the modified method, inverse difference schemes and an inverse method are introduced. Combined with the modified method, the inverse schemes can be used to deal with gas-water interface inversion of underwater explosion. Inversion simulations of underwater explosion of the explosives are performed in water, and the equation of state (EOS) of the detonation products is not needed. The peak pressures from the forward simulations are provided as boundary conditions in the inversion simulations. Inversion interfaces are obtained, and they are mainly in good agreement with those from the forward simulations in the near field. The comparison indicates that the inverse method and the inverse difference schemes are reliable and reasonable in interface inversion simulation.

  8. Optical simulation of surface textured TCO using FDTD method

    NASA Astrophysics Data System (ADS)

    Elviyanti, I. L.; Purwanto, H.; Kusumandari

    2016-02-01

    The purpose of this research is to simulate the transmittance of surface-textured transparent conducting oxide (TCO) for dye-sensitized solar cell (DSSC) applications. The simulation, based on the finite-difference time-domain (FDTD) method, was performed using MatLab software for flat and pyramid-textured TCO surfaces. Fluorine-doped tin oxide (FTO) and indium tin oxide (ITO) were used as TCO materials. The simulated transmittance of flat TCO was compared to UV-Vis spectrophotometer measurements of real TCO to ensure the accuracy of the simulation. The simulated transmittance of pyramid-textured TCO is then found to be higher than that of a flat surface, suggesting that surface texturing lengthens the path of light through dispersion and reflection by the surface pattern. This result indicates that surface texturing increases the transmittance of TCO through a complex light-trapping mechanism, which might be used to increase light harvesting for DSSC applications.
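
    A minimal 1-D FDTD (Yee) sketch of the kind of calculation underlying such a study: a Gaussian pulse passes through a thin high-index film and the transmitted field is recorded behind it. The grid, material values, and film thickness are illustrative, not the paper's FTO/ITO stack, and the pyramid texture itself would require a 2-D or 3-D grid.

    ```python
    import numpy as np

    nx, nt = 400, 900
    ez = np.zeros(nx)              # electric field on integer grid points
    hy = np.zeros(nx - 1)          # magnetic field on staggered half-grid points
    eps = np.ones(nx)
    eps[200:240] = 4.0             # thin high-index film (hypothetical TCO layer)

    probe = []
    for n in range(nt):
        left, right = ez[1], ez[-2]            # saved for the exact S=1 absorbing ends
        hy += np.diff(ez)                      # update H from the curl of E
        ez[1:-1] += np.diff(hy) / eps[1:-1]    # update E from the curl of H
        ez[0], ez[-1] = left, right            # first-order Mur boundaries
        ez[50] += np.exp(-0.5 * ((n - 60) / 15.0) ** 2)   # soft Gaussian source
        probe.append(ez[350])                  # transmitted field behind the film

    print("peak transmitted field:", float(np.max(np.abs(probe))))
    ```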

  9. An experimental method for the assessment of color simulation tools.

    PubMed

    Lillo, Julio; Alvaro, Leticia; Moreira, Humberto

    2014-07-22

    The Simulcheck method for evaluating the accuracy of color simulation tools in relation to dichromats is described and used to test three color simulation tools: Variantor, Coblis, and Vischeck. A total of 10 dichromats (five protanopes, five deuteranopes) and 10 normal trichromats participated in the current study. Simulcheck includes two psychophysical tasks: the Pseudoachromatic Stimuli Identification task and the Minimum Achromatic Contrast task. The Pseudoachromatic Stimuli Identification task allows determination of the two chromatic angles (h_uv values) that generate a minimum response in the yellow-blue opponent mechanism and, consequently, pseudoachromatic stimuli (greens or reds). The Minimum Achromatic Contrast task requires the selection of the gray background that produces minimum contrast (near zero change in the achromatic mechanism) for each pseudoachromatic stimulus selected in the previous task (L_R values). Results showed important differences in the colorimetric transformations performed by the three evaluated simulation tools and their accuracy levels. Vischeck simulation accurately implemented the algorithm of Brettel, Viénot, and Mollon (1997). Only Vischeck appeared accurate (similarity in h_uv and L_R values between real and simulated dichromats) and, consequently, could render reliable color selections. It is concluded that Simulcheck is a consistent method because it provided an equivalent pattern of results for h_uv and L_R values irrespective of the stimulus set used to evaluate a simulation tool. Simulcheck was also considered valid because real dichromats provided the expected h_uv and L_R values when performing the two psychophysical tasks included in this method.

  10. Simulation of parachute FSI using the front tracking method

    NASA Astrophysics Data System (ADS)

    Kim, Joung-Dong; Li, Yan; Li, Xiaolin

    2013-02-01

    We use the front tracking method on a spring system to model the dynamic evolution of parachute canopy and risers. The canopy surface and the riser string chord of a parachute are represented by a triangulated surface mesh with a preset equilibrium length on each side of the simplices. The stretching and wrinkling of the canopy and its supporting string chords (risers) are modeled by the spring system. The spring constants of the canopy and the risers are chosen based on an analysis of Young's surface modulus for the canopy fabric and Young's string modulus for the string chord. Damping is added to dissipate excessive spring internal energy. The current model does not include radial reinforcement cables and does not take canopy porosity into account. This mechanical structure is coupled with the incompressible Navier-Stokes solver through the "Impulse Method". We analyze the numerical stability of the spring system and use this computational module to simulate the flow pattern around a static parachute canopy and the dynamic evolution during the parachute inflation process. The numerical solutions have been compared with the available experimental data, and there is good agreement in the terminal descent velocity and breathing frequency of the parachute.
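
    The edge-spring force computation that such a model rests on can be sketched compactly. The function below is a hedged NumPy illustration of Hooke's-law forces with damping on a triangulated mesh; all names and parameters are hypothetical rather than taken from the authors' code.

        import numpy as np

        def spring_forces(x, v, edges, rest_len, k, damp):
            """Mass-spring forces on a triangulated canopy mesh (sketch).
            x, v: (N, 3) vertex positions and velocities; edges: (M, 2) index
            pairs; rest_len: (M,) preset equilibrium lengths; k: spring constant
            derived from the fabric modulus (assumed given); damp: damping."""
            f = np.zeros_like(x)
            d = x[edges[:, 1]] - x[edges[:, 0]]
            length = np.linalg.norm(d, axis=1, keepdims=True)
            u = d / length                                   # unit edge vectors
            f_spring = k * (length - rest_len[:, None]) * u  # Hooke's law
            rel_v = v[edges[:, 1]] - v[edges[:, 0]]
            f_damp = damp * np.sum(rel_v * u, axis=1, keepdims=True) * u
            np.add.at(f, edges[:, 0], f_spring + f_damp)     # equal and opposite
            np.add.at(f, edges[:, 1], -(f_spring + f_damp))
            return f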

  11. Simulating rotationally inelastic collisions using a direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Schullian, O.; Loreau, J.; Vaeck, N.; van der Avoird, A.; Heazlewood, B. R.; Rennick, C. J.; Softley, T. P.

    2015-12-01

    A new approach to simulating rotational cooling using a direct simulation Monte Carlo (DSMC) method is described and applied to the rotational cooling of ammonia seeded into a helium supersonic jet. The method makes use of ab initio rotational state changing cross sections calculated as a function of collision energy. Each particle in the DSMC simulations is labelled with a vector of rotational populations that evolves with time. Transfer of energy into translation is calculated from the mean energy transfer for this population at the specified collision energy. The simulations are compared with a continuum model for the on-axis density, temperature and velocity; rotational temperature as a function of distance from the nozzle is in accord with expectations from experimental measurements. The method could be applied to other types of gas mixture dynamics under non-uniform conditions, such as buffer gas cooling of NH3 by He.

  12. SPH-based simulation of multi-material asteroid collisions

    NASA Astrophysics Data System (ADS)

    Maindl, T. I.; Schäfer, C.; Speith, R.; Süli, Á.; Forgács-Dajka, E.; Dvorak, R.

    2013-11-01

    We give a brief introduction to smoothed particle hydrodynamics methods for continuum mechanics. Specifically, we present our 3D SPH code to simulate and analyze collisions of asteroids consisting of two types of material: basaltic rock and ice. We consider effects like brittle failure, fragmentation, and merging in different impact scenarios. After validating our code against previously published results we present first collision results based on measured values for the Weibull flaw distribution parameters of basalt.
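
    For readers unfamiliar with the SPH building blocks mentioned here, the sketch below shows the standard 3D cubic-spline kernel and the density summation it enters. It is a generic textbook form, not the authors' code, and the brute-force neighbor loop is for clarity only.

        import numpy as np

        def w_cubic(r, h):
            """Standard 3D cubic-spline SPH kernel with support radius 2h."""
            q = r / h
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
            return w / (np.pi * h**3)

        def density(x, m, h):
            """SPH density summation: rho_i = sum_j m_j W(|x_i - x_j|, h).
            Brute force O(N^2) for clarity; real codes use neighbor lists."""
            r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
            return (m[None, :] * w_cubic(r, h)).sum(axis=1)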

  13. Efficient methods and practical guidelines for simulating isotope effects

    NASA Astrophysics Data System (ADS)

    Ceriotti, Michele; Markland, Thomas E.

    2013-01-01

    The shift in chemical equilibria due to isotope substitution is frequently exploited to obtain insight into a wide variety of chemical and physical processes. It is a purely quantum mechanical effect, which can be computed exactly using simulations based on the path integral formalism. Here we discuss how these techniques can be made dramatically more efficient, and how they ultimately outperform quasi-harmonic approximations to treat quantum liquids not only in terms of accuracy, but also in terms of computational cost. To achieve this goal we introduce path integral quantum mechanics estimators based on free energy perturbation, which enable the evaluation of isotope effects using only a single path integral molecular dynamics trajectory of the naturally abundant isotope. We use as an example the calculation of the free energy change associated with H/D and 16O/18O substitutions in liquid water, and of the fractionation of those isotopes between the liquid and the vapor phase. In doing so, we demonstrate and discuss quantitatively the relative benefits of each approach, thereby providing a set of guidelines that should facilitate the choice of the most appropriate method in different, commonly encountered scenarios. The efficiency of the estimators we introduce and the analysis that we perform should in particular facilitate accurate ab initio calculation of isotope effects in condensed phase systems.
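
    The free energy perturbation idea underlying these estimators can be stated in a few lines. The sketch below implements the generic Zwanzig exponential-averaging form with made-up samples; it is only a schematic of the path integral estimators developed in the paper.

        import numpy as np

        kB = 0.0019872041      # Boltzmann constant in kcal/(mol K) (assumed units)

        def fep_estimate(dU, T):
            """Zwanzig free-energy-perturbation estimator
            dA = -kB*T * ln < exp(-dU / (kB*T)) >, with dU sampled along a
            single trajectory of the naturally abundant isotope."""
            beta = 1.0 / (kB * T)
            return -np.log(np.mean(np.exp(-beta * dU))) / beta

        rng = np.random.default_rng(1)
        dU = rng.normal(0.2, 0.05, 10000)   # placeholder energy differences
        print(fep_estimate(dU, 300.0))      # free energy change in kcal/mol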

  14. Efficient methods and practical guidelines for simulating isotope effects.

    PubMed

    Ceriotti, Michele; Markland, Thomas E

    2013-01-01

    The shift in chemical equilibria due to isotope substitution is frequently exploited to obtain insight into a wide variety of chemical and physical processes. It is a purely quantum mechanical effect, which can be computed exactly using simulations based on the path integral formalism. Here we discuss how these techniques can be made dramatically more efficient, and how they ultimately outperform quasi-harmonic approximations to treat quantum liquids not only in terms of accuracy, but also in terms of computational cost. To achieve this goal we introduce path integral quantum mechanics estimators based on free energy perturbation, which enable the evaluation of isotope effects using only a single path integral molecular dynamics trajectory of the naturally abundant isotope. We use as an example the calculation of the free energy change associated with H/D and (16)O/(18)O substitutions in liquid water, and of the fractionation of those isotopes between the liquid and the vapor phase. In doing so, we demonstrate and discuss quantitatively the relative benefits of each approach, thereby providing a set of guidelines that should facilitate the choice of the most appropriate method in different, commonly encountered scenarios. The efficiency of the estimators we introduce and the analysis that we perform should in particular facilitate accurate ab initio calculation of isotope effects in condensed phase systems. PMID:23298033

  15. Replica exchange simulation method using temperature and solvent viscosity

    NASA Astrophysics Data System (ADS)

    Nguyen, Phuong H.

    2010-04-01

    We propose an efficient and simple method for fast conformational sampling that introduces the solvent viscosity as a parameter in the conventional temperature replica exchange molecular dynamics (T-REMD) simulation method. The method, named V-REMD (V stands for viscosity), uses both low solvent viscosity and high temperature to enhance sampling in each replica, and therefore requires fewer replicas than T-REMD. To reduce the solvent viscosity by a factor of λ in a molecular dynamics simulation, one can simply reduce the mass of the solvent molecules by a factor of λ^2. This makes the method as simple as the conventional one. Moreover, the thermodynamic and conformational properties of structures in the replicas remain useful as long as the Boltzmann ensemble has been sufficiently sampled. The advantage of the present method is demonstrated with simulations of trialanine, deca-alanine, and a 16-residue β-hairpin peptide, which show that the method can reduce the number of replicas by a factor of 1.5 to 2 compared with T-REMD.
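
    The viscosity-scaling trick described above is simple enough to show directly. The snippet below is a schematic illustration with assumed numbers, not an excerpt from any MD package.

        import numpy as np

        # To reduce the solvent viscosity by a factor lam, scale solvent masses
        # by lam**2; configurational (equilibrium) averages are unaffected
        # because masses factor out of the Boltzmann distribution over positions.
        lam = 2.0
        solvent_masses = np.full(1000, 18.015)    # e.g. water in amu (assumption)
        scaled_masses = solvent_masses / lam**2   # used only to speed up sampling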

  16. Broadening the interface bandwidth in simulation based training

    NASA Technical Reports Server (NTRS)

    Somers, Larry E.

    1989-01-01

    Currently most computer based simulations rely exclusively on computer generated graphics to create the simulation. When training is involved, the method almost exclusively used to display information to the learner is text displayed on the cathode ray tube. MICROEXPERT Systems is concentrating on broadening the communications bandwidth between the computer and user by employing a novel approach to video image storage combined with sound and voice output. An expert system is used to combine and control the presentation of analog video, sound, and voice output with computer based graphics and text. Researchers are currently involved in the development of several graphics based user interfaces for NASA, the U.S. Army, and the U.S. Navy. Here, the focus is on the human factors considerations, software modules, and hardware components being used to develop these interfaces.

  17. Intercomparison Of Bias-Correction Methods For Monthly Temperature And Precipitation Simulated By Multiple Climate Models

    NASA Astrophysics Data System (ADS)

    Watanabe, S.; Kanae, S.; Seto, S.; Hirabayashi, Y.; Oki, T.

    2012-12-01

    Bias-correction methods applied to monthly temperature and precipitation data simulated by multiple General Circulation Models (GCMs) are evaluated in this study. Although various methods have been proposed recently, an intercomparison among them using multiple GCM simulations has seldom been reported. Here, five previous methods and a newly proposed method are compared. Before the comparison, the previous methods are classified into four types based on two criteria: 1) whether statistics (e.g. the mean, standard deviation, or coefficient of variation) of the future simulation are used in the bias correction; and 2) whether an estimation of cumulative probability is included in the bias correction. Methods that require future statistics depend on the data in the projection period, while those that do not are independent of it. This classification characterizes each bias-correction method. The methods are applied to temperature and precipitation simulated by 12 GCMs in the Coupled Model Intercomparison Project (CMIP3) archives. Parameters of each method are calibrated on 1948-1972 observed data and validated over the 1974-1998 period. The methods are then applied to GCM future simulations (2073-2097), and the bias-corrected data are intercompared. For the historical simulation, the difference between observed and bias-corrected data is negligible. For the future simulation, however, the differences are large and depend on the characteristics of each method. The frequency (probability) that the 2073-2097 bias-corrected data exceed the 95th percentile of the 1948-1972 observed data is estimated in order to evaluate the differences among methods; the difference between the proposed method and one of the previous methods exceeds 10% in many areas. The differences in bias-corrected data among methods are discussed based on their respective characteristics.
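
    As a concrete example of the kind of transformation being compared (one that includes the estimation of cumulative probability, in the abstract's classification), here is a generic empirical quantile-mapping sketch. It is not necessarily one of the six methods evaluated in the study.

        import numpy as np

        def quantile_map(obs, sim_hist, sim_fut):
            """Empirical quantile mapping: place each future simulated value on
            the historical simulated CDF, then read off the observed CDF."""
            q = np.linspace(0.0, 1.0, 101)
            sim_q = np.quantile(sim_hist, q)      # simulated historical quantiles
            obs_q = np.quantile(obs, q)           # observed quantiles
            cdf = np.interp(sim_fut, sim_q, q)    # cumulative probability
            return np.interp(cdf, q, obs_q)       # bias-corrected values

    In the abstract's terms, this variant estimates cumulative probabilities but uses no statistics of the future period beyond the values being corrected.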

  18. Fabrication of plasmonic thin films and their characterization by optical method and FDTD simulation technique

    NASA Astrophysics Data System (ADS)

    Kuzma, A.; Uherek, F.; Å kriniarová, J.; Pudiš, D.; Weis, M.; Donoval, M.

    2015-08-01

    In this paper we present the optical properties of thin metal films deposited on glass substrates by physical vapor deposition. Localized surface plasmon polaritons in films of different thicknesses have been spectrally characterized by optical methods. The presence of Au nanoparticles in the deposited thin films has been demonstrated by scanning electron microscopy (SEM) and atomic force microscopy (AFM), and their dimensions as well as their separations have been evaluated. As a first approximation, a simulation model of the deposited nanoparticles was created without assuming distributions of their dimensions and separations; the model defines the relation between the nanoparticle dimensions and their separations. This model was simulated with the finite-difference time-domain (FDTD) method: pulsed excitation was used, and the transmission of optical radiation was calculated from the spectral response by fast Fourier transform (FFT) analysis. Plasmonic extinctions were calculated from both the measured and the simulated spectral characteristics and compared with each other. The nanoparticle dimensions and separations were evaluated from the agreement between the simulated and experimental spectral characteristics. The surface morphology of the thin metal film, based on experimental observation of the metal nanoparticle distribution, was then used as input for a detailed simulation study; this simulation therefore includes the appropriate coupling effects between nanoparticles and provides more reliable results. The results obtained are helpful for a deeper understanding of the plasmonic properties of thin metal films, and the simulation method is demonstrated to be a powerful tool for optimizing the deposition technology.

  19. Assessment of Human Patient Simulation-Based Learning

    PubMed Central

    Schwartz, Catrina R.; Odegard, Peggy Soule; Hammer, Dana P.; Seybert, Amy L.

    2011-01-01

    The most common types of assessment of human patient simulation are satisfaction and/or confidence surveys or tests of knowledge acquisition. There is an urgent need to develop valid, reliable assessment instruments related to simulation-based learning. Assessment practices for simulation-based activities in the pharmacy curricula are highlighted, with a focus on human patient simulation. Examples of simulation-based assessment activities are reviewed according to type of assessment or domain being assessed. Assessment strategies are suggested for faculty members and programs that use simulation-based learning. PMID:22345727

  20. A General Simulation Method for Multiple Bodies in Proximate Flight

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    2003-01-01

    Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.

  1. Finite element methods for enhanced oil recovery Simulation

    SciTech Connect

    Cohen, M.F.

    1985-02-01

    A general, finite element procedure for reservoir simulation is presented. This effort is directed toward improving the numerical behavior of standard upstream, or upwind, finite difference techniques, without significantly increasing the computational costs. Two methods from previous authors' work are modified and developed: upwind finite elements and the Petrov-Galerkin method. These techniques are applied in a one- and two-dimensional, surfactant/polymer simulator. The paper sets forth the mathematical formulation and several details concerning the implementation. The results indicate that the Petrov-Galerkin method does significantly reduce numerical diffusion errors, while it retains the stability of the first-order, upwind methods. It is also relatively simple to implement. Both the upwind and Petrov-Galerkin finite element methods demonstrate little sensitivity to grid orientation.
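
    To make the trade-off concrete, the sketch below solves a 1D steady advection-diffusion problem with the classical optimal upwind parameter that Petrov-Galerkin weighting reduces to on a uniform grid. It is a generic illustration, not the paper's surfactant/polymer simulator.

        import numpy as np

        # 1D steady advection-diffusion: u c' = D c'' on [0, 1], c(0)=0, c(1)=1.
        # Petrov-Galerkin weighting on a uniform grid is equivalent to adding
        # artificial diffusion via alpha = coth(Pe) - 1/Pe (optimal upwinding),
        # rather than the full upwind value alpha = 1.
        n, u, D = 41, 1.0, 0.01
        h = 1.0 / (n - 1)
        Pe = u * h / (2.0 * D)                     # cell Peclet number
        alpha = 1.0 / np.tanh(Pe) - 1.0 / Pe       # optimal PG upwind parameter
        D_eff = D + alpha * u * h / 2.0            # effective diffusivity

        A = np.zeros((n, n)); b = np.zeros(n)
        for i in range(1, n - 1):                  # central scheme with D_eff
            A[i, i - 1] = -D_eff / h**2 - u / (2.0 * h)
            A[i, i] = 2.0 * D_eff / h**2
            A[i, i + 1] = -D_eff / h**2 + u / (2.0 * h)
        A[0, 0] = A[-1, -1] = 1.0; b[-1] = 1.0     # Dirichlet boundary values
        c = np.linalg.solve(A, b)                  # nodally exact for this alpha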

  2. Direct simulation Monte Carlo method with a focal mechanism algorithm

    NASA Astrophysics Data System (ADS)

    Rachman, Asep Nur; Chung, Tae Woong; Yoshimoto, Kazuo; Yun, Sukyoung

    2015-01-01

    To simulate the observed radiation pattern of an earthquake, the direct simulation Monte Carlo (DSMC) method is modified by implanting a focal mechanism algorithm. We compare the results of the modified DSMC method (DSMC-2) with those of the original DSMC method (DSMC-1). For events recorded at 12 or more stations, DSMC-2 gives results as reliable as or more reliable than DSMC-1 when hypocentral distances of less than 80 km are given double weight. Not only the number of stations but also other factors, such as rough topography, event magnitude, and the analysis method, influence the reliability of DSMC-2. The most reliable DSMC-2 results are obtained with the best azimuthal coverage from the largest number of stations. The DSMC-2 method requires shorter time steps and a larger number of particles than DSMC-1 in order to capture a sufficient number of arriving particles in the small-sized receiver.

  3. Tools for evaluating team performance in simulation-based training

    PubMed Central

    Rosen, Michael A; Weaver, Sallie J; Lazzara, Elizabeth H; Salas, Eduardo; Wu, Teresa; Silvestri, Salvatore; Schiebel, Nicola; Almeida, Sandra; King, Heidi B

    2010-01-01

    Teamwork training constitutes one of the core approaches for moving healthcare systems toward increased levels of quality and safety, and simulation provides a powerful method of delivering this training, especially for fast-paced and dynamic specialty areas such as Emergency Medicine. Team performance measurement and evaluation plays an integral role in ensuring that simulation-based training for teams (SBTT) is systematic and effective. However, this component of SBTT systems is frequently overlooked. This article addresses this gap by providing a review and practical introduction to the process of developing and implementing evaluation systems in SBTT. First, an overview of team performance evaluation is provided. Second, best practices for measuring team performance in simulation are reviewed. Third, some of the prominent measurement tools in the literature are summarized and discussed relative to the best practices. Subsequently, implications of the review are discussed for the practice of training teamwork in Emergency Medicine. PMID:21063558

  4. Computational methods for coupling microstructural and micromechanical materials response simulations

    SciTech Connect

    Holm, Elizabeth A.; Battaile, Corbett C.; Buchheit, Thomas E.; Fang, Huei Eliot; Rintoul, Mark Daniel; Vedula, Venkata R.; Glass, S. Jill; Knorovsky, Gerald A.; Neilsen, Michael K.; Wellman, Gerald W.; Sulsky, Deborah; Shen, Yu-Lin; Schreyer, H. Buck

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  5. Simulation methods with extended stability for stiff biochemical Kinetics

    PubMed Central

    2010-01-01

    Background With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems. PMID:20701766
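
    For orientation, the baseline Poisson τ-leap step that the paper extends to Runge-Kutta form is only a few lines. The reaction system and rates below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def tau_leap_step(x, tau, stoich, propensities):
            """One Poisson tau-leap step: fire each channel a Poisson number
            of times with mean a_j(x) * tau, then apply the state changes."""
            k = rng.poisson(propensities(x) * tau)
            return x + k @ stoich

        # Example system: dimerisation 2A -> B and decay B -> 0 (assumed rates).
        stoich = np.array([[-2, 1],
                           [0, -1]])
        props = lambda x: np.array([0.001 * x[0] * (x[0] - 1), 0.1 * x[1]])
        x = np.array([1000, 0])
        for _ in range(100):
            x = np.maximum(tau_leap_step(x, 0.1, stoich, props), 0)  # crude guard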

  6. A new lattice Monte Carlo method for simulating dielectric inhomogeneity

    NASA Astrophysics Data System (ADS)

    Duan, Xiaozheng; Wang, Zhen-Gang; Nakamura, Issei

    We present a new lattice Monte Carlo method for simulating systems involving dielectric contrast between different species, modifying an algorithm originally proposed by Maggs et al. The original algorithm is known to generate attractive interactions between particles whose dielectric constant differs from that of the solvent. Here we show that this attractive force is spurious, arising from an incorrectly biased statistical weight caused by particle motion during the Monte Carlo moves. We propose a new, simple algorithm to resolve this erroneous sampling. We demonstrate the application of our algorithm by simulating an uncharged polymer in a solvent with a different dielectric constant. Further, we show that the electrostatic fields in ionic crystals obtained from our simulations with a relatively small simulation box correspond well with results from the analytical solution. Thus, our Monte Carlo method avoids the need for the Ewald summation used in conventional simulation methods for charged systems. This work was supported by the National Natural Science Foundation of China (21474112 and 21404103). We are grateful to the Computing Center of Jilin Province for essential support.

  7. Parallel methods for dynamic simulation of multiple manipulator systems

    NASA Technical Reports Server (NTRS)

    Mcmillan, Scott; Sadayappan, P.; Orin, David E.

    1993-01-01

    In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.

  8. Simulation models of ecological economics developed with energy language methods

    SciTech Connect

    Odum, H.T. . Dept. of Environmental Engineering Sciences)

    1989-08-01

    The energy-systems language method of modelling and simulation, because of its energy-constrained rules, is a means for transferring homologous concepts between levels of the hierarchies of nature. The mathematics of self-organization may justify emulation as the simulation of a system overview without details. Here, these methods are applied to the new fields of ecological economics and ecological engineering. Since the vitality of national economies depends on the symbiotic coupling of environmental resources and human economic behavior, the energy language is adapted to develop overview models of nations relevant to public policies. An overview model of a developing nation is given as an example, with simulations for alternative policies. Maximum economic vitality was obtained with trade for external resources, but ultimate economic carrying capacity and standard of living were determined by indigenous resources, optimal utilization, and the absence of foreign debt.

  9. Green's Function and Super-Particle Methods for Kinetic Simulation of Heteroepitaxy

    NASA Astrophysics Data System (ADS)

    Lam, Chi-Hang; Lung, M. T.

    Arrays of nanosized three-dimensional islands are known to self-assemble spontaneously on strained heteroepitaxial thin films. We simulate the dynamics using a kinetic Monte Carlo method based on a ball-and-spring lattice model. Green's function and super-particle methods, which greatly enhance the computational efficiency, are explained.

  10. A comparative study of divergence cleaning methods of magnetic field in the solar coronal numerical simulation

    NASA Astrophysics Data System (ADS)

    Feng, Xueshang; Zhang, Man

    2016-03-01

    This paper presents a comparative study of divergence cleaning methods for the magnetic field in three-dimensional numerical simulations of the solar corona. For this purpose, the diffusive method, the projection method, the generalized Lagrange multiplier method, and the constrained-transport method are used. All these methods are combined with a finite-volume scheme based on a six-component grid system in spherical coordinates. To compare the performance of the four divergence cleaning methods, a solar coronal numerical simulation for Carrington rotation 2056 has been studied. Numerical results show that the average relative divergence error is around 10^-4.5 for the constrained-transport method, and about 10^-3.1 to 10^-3.6 for the other three methods. Although the average relative divergence errors differ among the four methods, our tests show that all of them reproduce the basic structured solar wind.

  11. A comparative study of interface reconstruction methods for multi-material ALE simulations

    SciTech Connect

    Kucharik, Milan; Garimalla, Rao; Schofield, Samuel; Shashkov, Mikhail

    2009-01-01

    In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells, and the Moment-of-Fluid (MOF) method. We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two, while the solutions with VOF using the wrong material order are considerably worse.

  12. Vectorization of a particle simulation method for hypersonic rarefied flow

    NASA Technical Reports Server (NTRS)

    Mcdonald, Jeffrey D.; Baganoff, Donald

    1988-01-01

    An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry.

  13. Numerical Simulation of High Velocity Impact Phenomenon by the Distinct Element Method (DEM)

    NASA Astrophysics Data System (ADS)

    Tsukahara, Y.; Matsuo, A.; Tanaka, K.

    2007-12-01

    A Continuous-DEM (Distinct Element Method) approach for impact analysis is proposed in this paper. Continuous-DEM is based on the DEM and the idea of continuum theory. Numerical simulations of impacts between a SUS 304 projectile and a concrete target have been performed using the proposed method. The results agreed quantitatively with the impedance matching method. The experimental elastic-plastic behavior, with compression and rarefaction waves under plate impact, was also qualitatively reproduced, matching the result from AUTODYN®.

  14. New lattice Boltzmann method for the simulation of three-dimensional radiation transfer in turbid media.

    PubMed

    McHardy, Christopher; Horneber, Tobias; Rauh, Cornelia

    2016-07-25

    Based on the kinetic theory of photons, a new lattice Boltzmann method for the simulation of 3D radiation transport is presented. The method was successfully validated with Monte Carlo simulations of radiation transport in optical thick absorbing and non-absorbing turbid media containing either isotropic or anisotropic scatterers. Moreover, for the approximation of Mie-scattering, a new iterative algebraic approach for the discretization of the scattering phase function was developed, ensuring full conservation of energy and asymmetry after discretization. It was found that the main error sources of the method are caused by linearization and ray effects and suggestions for further improvement of the method are made. PMID:27464152

  15. A rainfall simulator based on multifractal generator

    NASA Astrophysics Data System (ADS)

    Akrour, Nawal; mallet, Cecile; barthes, Laurent; chazottes, Aymeric

    2015-04-01

    Examples illustrating the simulator's capabilities are provided. They show that the simulated two-dimensional fields have statistical properties coherent with observations, in terms of the cumulative rain-rate distribution as well as the power spectrum and structure function, at different spatial scales (1, 4, and 16 km²), implying that scale features are well represented by the model.

  16. Operational characteristic analysis of conduction cooling HTS SMES for Real Time Digital Simulator based power quality enhancement simulation

    NASA Astrophysics Data System (ADS)

    Kim, A. R.; Kim, G. H.; Kim, K. M.; Kim, D. W.; Park, M.; Yu, I. K.; Kim, S. H.; Sim, K.; Sohn, M. H.; Seong, K. C.

    2010-11-01

    This paper analyzes the operational characteristics of a conduction-cooled Superconducting Magnetic Energy Storage (SMES) system through a real-hardware-based simulation. To analyze the operational characteristics, the authors manufactured a small-scale toroidal-type SMES and implemented a Real Time Digital Simulator (RTDS) based power quality enhancement simulation. The method can consider not only electrical characteristics, such as inductance and current, but also thermal characteristics, by using the real SMES system. To prove the effectiveness of the proposed method, a voltage sag compensation simulation has been implemented using the RTDS connected with the High Temperature Superconducting (HTS) model coil and a DC/DC converter system, and the simulation results are discussed in detail.

  17. Saltwater Intrusion Simulation in Heterogeneous Aquifer Using Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Servan-Camas, B.; Tsai, F. T.

    2006-12-01

    This study develops a saltwater intrusion simulation model using a lattice Boltzmann method (LBM) in a two-dimensional coastal confined aquifer. The saltwater intrusion phenomenon is described by density-dependent groundwater flow and mass transport equations, where a freshwater-saltwater mixing zone is considered. Although primarily developed as a mesoscopic approach to solving macroscopic fluid dynamics problems (e.g. the Navier-Stokes equations), LBM can also be adopted to solve physically based diffusion-type governing equations such as the groundwater flow and mass transport equations. The challenge of using LBM in saltwater intrusion modeling is to recover hydraulic conductivity heterogeneity. In this study, the Darcy equation and the advection-dispersion equation (ADE) are recovered in the lattice Boltzmann modeling; specifically, the hydraulic conductivity heterogeneity is represented by the speed of sound in the LBM. Because low storativity makes the groundwater flow effectively steady-state, the flow problem is recast as a Poisson equation in each time step and solved by LBM. Nevertheless, the groundwater flow remains a time-marching problem, with spatial-temporal variation in salinity concentration and hence density. The Henry problem is used to compare the LBM results against the Henry analytic solution and the SUTRA result. We also show that LBM is capable of handling the Dirichlet, Neumann, and Cauchy concentration boundary conditions at the sea side. Finally, we compare the saltwater intrusion results using LBM for the Henry problem when heterogeneous hydraulic conductivity is considered.
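
    The stream-and-collide structure by which LBM recovers a diffusion-type equation is compact. The D2Q5 sketch below advects and diffuses a scalar on a periodic grid; it is a generic illustration (uniform velocity, assumed relaxation time), not the Henry problem setup.

        import numpy as np

        # D2Q5 lattice Boltzmann sketch for an advection-diffusion equation.
        # The lattice diffusivity is (tau - 0.5) / 3 in lattice units.
        nx, ny, tau = 100, 50, 0.8
        cx = np.array([0, 1, -1, 0, 0]); cy = np.array([0, 0, 0, 1, -1])
        w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])
        ux, uy = 0.05, 0.0                        # uniform advecting velocity
        C = np.zeros((ny, nx)); C[:, 0] = 1.0     # scalar, fixed inlet at left
        f = w[:, None, None] * C[None]            # start from equilibrium

        for step in range(1000):
            C = f.sum(axis=0)
            feq = w[:, None, None] * C[None] * (
                1.0 + 3.0 * (cx[:, None, None] * ux + cy[:, None, None] * uy))
            f += (feq - f) / tau                  # BGK collision
            for i in range(5):                    # streaming (periodic wrap)
                f[i] = np.roll(np.roll(f[i], cx[i], axis=1), cy[i], axis=0)
            f[:, :, 0] = w[:, None] * 1.0         # re-impose inlet concentration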

  18. Simulation on the Measurement Method of Geometric Distortion of Telescopes

    NASA Astrophysics Data System (ADS)

    Fan, Li; Shu-lin, Ren

    2016-07-01

    The accurate measurement of telescope geometric distortion helps improve the astrometric positioning accuracy of telescopes, which is of significant importance for many disciplines of astronomy, such as studies of stellar clusters, natural satellites, asteroids, comets, and other celestial bodies in the solar system. For this reason, previous researchers developed an iterative self-calibration method that measures geometric distortion from dithered observations of a dense star field, and achieved fine results. However, that work did not constrain the density of the star field or the dithering mode, but chose empirically favorable conditions (for example, a denser star field and a larger number of dithers) for the observations, which took up much observing time and resulted in rather low efficiency. To explore the validity of the self-calibration method and to optimize its observational conditions, the corresponding simulations are necessary. In this paper, we first introduce the self-calibration method in detail; then, by simulation, we verify its effectiveness and further optimize the observational conditions, such as the star field density and the dithering number, to achieve higher accuracy in geometric distortion measurement. Finally, considering the practical application of correcting the geometric distortion effect, we analyze, again by simulation, the relationship between the number of reference stars in the field of view and the astrometric accuracy.

  19. Model-based microwave image reconstruction: simulations and experiments

    SciTech Connect

    Ciocan, Razvan; Jiang, Huabei

    2004-12-01

    We describe an integrated microwave imaging system that can provide spatial maps of the dielectric properties of heterogeneous media from tomographically collected data. The hardware system (800-1200 MHz) was built around a lock-in amplifier with 16 fixed antennas. The reconstruction algorithm was implemented using a Newton iterative method with combined Marquardt-Tikhonov regularizations. System performance was evaluated using heterogeneous media mimicking human breast tissue. The finite element method, coupled with the Bayliss and Turkel radiation boundary conditions, was applied to compute the electric field distribution in the heterogeneous media of interest. The results show that inclusions embedded in a 76-mm-diameter background medium can be quantitatively reconstructed from both simulated and experimental data. Quantitative analysis of the microwave images obtained suggests that an inclusion 14 mm in diameter is the smallest object that can currently be fully characterized using experimental data, while objects as small as 10 mm in diameter can be quantitatively resolved with simulated data.

  20. Crystal level simulations using Eulerian finite element methods

    SciTech Connect

    Becker, R; Barton, N R; Benson, D J

    2004-02-06

    Over the last several years, significant progress has been made in the use of crystal level material models in simulations of forming operations. However, in Lagrangian finite element approaches simulation capabilities are limited in many cases by mesh distortion associated with deformation heterogeneity. Contexts in which such large distortions arise include: bulk deformation to strains approaching or exceeding unity, especially in highly anisotropic or multiphase materials; shear band formation and intersection of shear bands; and indentation with sharp indenters. Investigators have in the past used Eulerian finite element methods with material response determined from crystal aggregates to study steady state forming processes. However, Eulerian and Arbitrary Lagrangian-Eulerian (ALE) finite element methods have not been widely utilized for simulation of transient deformation processes at the crystal level. The advection schemes used in Eulerian and ALE codes control mesh distortion and allow for simulation of much larger total deformations. We will discuss material state representation issues related to advection and will present results from ALE simulations.

  1. Impact and Implementation of Simulation-Based Training for Safety

    PubMed Central

    Bilotta, Federico F.; Werner, Samantha M.; Bergese, Sergio D.; Rosa, Giovanni

    2013-01-01

    Patient safety is an issue of imminent concern in the high-risk field of medicine, and systematic changes that alter the way medical professionals approach patient care are needed. Simulation-based training (SBT) is an exemplary solution for addressing the dynamic medical environment of today. Grounded in methodologies developed by the aviation industry, SBT exceeds traditional didactic and apprenticeship models in terms of speed of learning, amount of information retained, and capability for deliberate practice. SBT remains merely optional in many medical schools and continuing medical education (CME) curricula, even though its use in training has been shown to improve clinical practice. Future research on simulation-based anesthesiology training needs to develop methods for measuring both the degree to which training translates into increased practitioner competency and the effect of training on safety improvements for patients. PMID:24311981

  2. Simulation-based education and performance assessments for pediatric surgeons.

    PubMed

    Barsness, Katherine

    2014-08-01

    Education in the knowledge, skills, and attitudes necessary for a surgeon to perform at an expert level in the operating room, and beyond, must address all potential cognitive and technical performance gaps, professionalism and personal behaviors, and effective team communication. Educational strategies should also seek to replicate the stressors and distractions that might occur during a high-risk operation or critical care event. Finally, education cannot remain fixed in an apprenticeship model of "See one, do one, teach one," whereby patients are exposed to the risk of harm inherent to any learning curve. The majority of these educational goals can be achieved with the addition of simulation-based education (SBE) as a valuable adjunct to traditional training methods. This article will review relevant principles of SBE, explore currently available simulation-based educational tools for pediatric surgeons, and finally make projections for the future of SBE and performance assessments for pediatric surgeons.

  3. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation exhausts the PC memory and its run time grows to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.

  4. System and Method for Finite Element Simulation of Helicopter Turbulence

    NASA Technical Reports Server (NTRS)

    McFarland, R. E. (Inventor); Dulsenberg, Ken (Inventor)

    1999-01-01

    The present invention provides a turbulence model that has been developed for blade-element helicopter simulation. This model uses an innovative temporal and geometrical distribution algorithm that preserves the statistical characteristics of the turbulence spectra over the rotor disc, while providing velocity components in real time to each of five blade-element stations along each of four blades, for a total of twenty blade-element stations. The simulator system includes a software implementation of flight dynamics that adheres to the guidelines for turbulence set forth in military specifications. One of the features of the present simulator system is that it applies simulated turbulence to the rotor blades of the helicopter, rather than to its center of gravity. The simulator system accurately models the rotor penetration into a gust field. It includes time correlation between the front and rear of the main rotor, as well as between the side forces felt at the center of gravity and at the tail rotor. It also includes features for added realism, such as patchy turbulence and vertical gusts into which the rotor disc penetrates. These features are realized by a unique real-time implementation of the turbulence filters. The new simulator system uses two arrays, one on either side of the main rotor, to record the turbulence field and to produce time correlation from the front to the rear of the rotor disc. The use of Gaussian interpolation between the two arrays maintains the statistical properties of the turbulence across the rotor disc. The present simulator system and method may be used in future and existing real-time helicopter simulations with minimal increase in computational workload.

  5. A virtual reality based simulator for learning nasogastric tube placement.

    PubMed

    Choi, Kup-Sze; He, Xuejian; Chiang, Vico Chung-Lim; Deng, Zhaohong

    2015-02-01

    Nasogastric tube (NGT) placement is a common clinical procedure where a plastic tube is inserted into the stomach through the nostril for feeding or drainage. However, the placement is a blind process in which the tube may be mistakenly inserted into other locations, leading to unexpected complications or fatal incidents. The placement techniques are conventionally acquired by practising on unrealistic rubber mannequins or on humans. In this paper, a virtual reality based training simulation system is proposed to facilitate the training of NGT placement. It focuses on the simulation of tube insertion and the rendering of the feedback forces with a haptic device. A hybrid force model is developed to compute the forces analytically or numerically under different conditions, including the situations when the patient is swallowing or when the tube is buckled at the nostril. To ensure real-time interactive simulations, an offline simulation approach is adopted to obtain the relationship between the insertion depth and insertion force using a non-linear finite element method. The offline dataset is then used to generate real-time feedback forces by interpolation. The virtual training process is logged quantitatively with metrics that can be used for assessing objective performance and tracking progress. The system has been evaluated by nursing professionals. They found that the haptic feeling produced by the simulated forces is similar to their experience during real NGT insertion. The proposed system provides a new educational tool to enhance conventional training in NGT placement.
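
    The offline/online split described above reduces, at run time, to a table lookup. The sketch below interpolates a precomputed depth-force curve; all values are made-up placeholders rather than the system's FEM output.

        import numpy as np

        # Offline stage (here faked): tabulate insertion force vs. depth.
        depth_mm = np.linspace(0.0, 300.0, 31)
        force_N = 0.5 + 0.01 * depth_mm + 0.3 * np.sin(depth_mm / 40.0)

        def haptic_force(d_mm):
            """Online stage: real-time feedback force by linear interpolation
            of the precomputed table (clamped at the table ends)."""
            return np.interp(d_mm, depth_mm, force_N)

        print(haptic_force(123.4))   # queried every haptic frame, e.g. at 1 kHz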

  6. Visual Fidelity and Learning via Computer-Based Simulations.

    ERIC Educational Resources Information Center

    Holmes, Glen A.

    Effective classroom simulations can provide opportunities for end-users to analyze human teaching and learning behaviors and can also help prepare teachers for real-world experiences. This paper proposes a simulation project based on an aggregation of ideas associated with knowledge-based simulations, behavior observations, visualization, and the…

  7. A real-time infrared imaging simulation method with physical effects modeling of infrared sensors

    NASA Astrophysics Data System (ADS)

    Li, Ni; Huai, Wenqing; Wang, Shaodan; Ren, Lei

    2016-09-01

    Infrared imaging simulation technology can provide infrared data sources for the development, improvement, and evaluation of infrared imaging systems under different environments, statuses, and weather conditions; it is reusable and more economical than physical experiments. A real-time infrared imaging simulation process is established to reproduce the complete physical imaging chain. Emphasis is placed on the modeling of infrared sensors, involving physical effects in both the spatial and frequency domains. An improved image convolution method based on GPU parallel processing is proposed to enhance real-time performance while maintaining simulation accuracy. Finally, the effectiveness of these methods is validated by simulation analysis and comparison of results.
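
    The spatial-domain sensor effects mentioned above typically reduce to convolving the simulated scene with a point-spread function. The CPU sketch below uses SciPy's FFT convolution as a stand-in for the paper's GPU-parallel implementation; the Gaussian PSF is an assumption.

        import numpy as np
        from scipy.signal import fftconvolve

        def sensor_blur(image, sigma=2.0, size=15):
            """Blur a simulated IR frame with a Gaussian PSF (assumed shape);
            fftconvolve is a CPU stand-in for the GPU convolution in the paper."""
            ax = np.arange(size) - size // 2
            g = np.exp(-0.5 * (ax / sigma) ** 2)
            psf = np.outer(g, g)
            psf /= psf.sum()                      # conserve total intensity
            return fftconvolve(image, psf, mode="same")

        frame = np.random.rand(256, 256)          # placeholder simulated scene
        blurred = sensor_blur(frame)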

  8. Numerical simulation of the blast impact problem using the Direct Simulation Monte Carlo (DSMC) method

    NASA Astrophysics Data System (ADS)

    Sharma, Anupam; Long, Lyle N.

    2004-10-01

    A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to model the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shocktube problem and against experiments on interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented.
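
    A common starting point for solid boundaries in particle methods is specular reflection. The sketch below mirrors particles that cross a plane wall; it is only a schematic of the simplest case, not the novel boundary treatment introduced in the paper.

        import numpy as np

        def reflect_specular(x, v, wall_x=0.0):
            """Specular reflection at the plane x = wall_x: particles that have
            crossed are mirrored back and their normal velocity is reversed."""
            crossed = x[:, 0] < wall_x
            x[crossed, 0] = 2.0 * wall_x - x[crossed, 0]
            v[crossed, 0] *= -1.0
            return x, v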

  9. The Co-simulation of Humanoid Robot Based on Solidworks, ADAMS and Simulink

    NASA Astrophysics Data System (ADS)

    Song, Dalei; Zheng, Lidan; Wang, Li; Qi, Weiwei; Li, Yanli

    A simulation method for an adaptive controller of a humanoid robot system is proposed, based on co-simulation with Solidworks, ADAMS, and Simulink. The method avoids a complex mathematical modeling process and fully exploits the real-time dynamic simulation capability of Simulink, and it can be generalized to other complicated control systems. The method is used to build and analyze the humanoid robot model, and both trajectory tracking and adaptive controller design proceed on this basis. Trajectory-tracking performance is evaluated by least-squares curve fitting, and comparative analysis shows that the robot's anti-interference capability is considerably improved.

  10. PNS and statistical experiments simulation in subcritical systems using Monte-Carlo method on example of Yalina-Thermal assembly

    NASA Astrophysics Data System (ADS)

    Sadovich, Sergey; Talamo, A.; Burnos, V.; Kiyavitskaya, H.; Fokov, Yu.

    2014-06-01

    In subcritical systems driven by an external neutron source, experimental methods based on a pulsed neutron source (PNS) and statistical techniques play an important role in reactivity measurement. Simulating these methods is a very time-consuming procedure, and several improvements to the neutronic calculations in Monte Carlo programs have been made to support it. This paper introduces a new method for simulating PNS and statistical measurements. In this method, all events occurring in the detector during the simulation are stored in a file using the PTRAC feature of MCNP; PNS and statistical methods can then be simulated with a special post-processing code. Additionally, different neutron pulse shapes and lengths, as well as detector dead time, can be included in the simulation. The methods described above were tested on the Yalina-Thermal subcritical assembly, located at the Joint Institute for Power and Nuclear Research SOSNY, Minsk, Belarus. Good agreement between experimental and simulated results was shown.
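
    The post-processing step described above amounts to folding detector event times over the pulse period. The sketch below shows that reduction, including a simple dead-time filter; the file name, pulse period, and dead time are all hypothetical, and this is not MCNP itself.

        import numpy as np

        period, dead = 1.0e-3, 1.0e-6   # pulse period and dead time [s] (assumed)
        t = np.sort(np.loadtxt("detector_events.txt"))  # hypothetical event times

        # Dead-time filter: drop events closer than 'dead' to the previous one.
        keep = np.diff(np.concatenate(([-np.inf], t))) > dead
        t = t[keep]

        # Fold over the pulse period and histogram the die-away curve.
        phase = np.mod(t, period)
        counts, edges = np.histogram(phase, bins=200, range=(0.0, period))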

  11. Computer simulations of enzyme catalysis: methods, progress, and insights.

    PubMed

    Warshel, Arieh

    2003-01-01

    Understanding the action of enzymes on an atomistic level is one of the important aims of modern biophysics. This review describes the state of the art in addressing this challenge by simulating enzymatic reactions. It considers different modeling methods including the empirical valence bond (EVB) and more standard molecular orbital quantum mechanics/molecular mechanics (QM/MM) methods. The importance of proper configurational averaging of QM/MM energies is emphasized, pointing out that at present such averages are performed most effectively by the EVB method. It is clarified that all properly conducted simulation studies have identified electrostatic preorganization effects as the source of enzyme catalysis. It is argued that the ability to simulate enzymatic reactions also provides the chance to examine the importance of nonelectrostatic contributions and the validity of the corresponding proposals. In fact, simulation studies have indicated that prominent proposals such as desolvation, steric strain, near attack conformation, entropy traps, and coherent dynamics do not account for a major part of the catalytic power of enzymes. Finally, it is pointed out that although some of the issues are likely to remain controversial for some time, computer modeling approaches can provide a powerful tool for understanding enzyme catalysis.

  12. Efficient simulation of stochastic chemical kinetics with the Stochastic Bulirsch-Stoer extrapolation method

    PubMed Central

    2014-01-01

    Background Biochemical systems with relatively low numbers of components must be simulated stochastically in order to capture their inherent noise. Although there has recently been considerable work on discrete stochastic solvers, there is still a need for numerical methods that are both fast and accurate. The Bulirsch-Stoer method is an established method for solving ordinary differential equations that possesses both of these qualities. Results In this paper, we present the Stochastic Bulirsch-Stoer method, a new numerical method for simulating discrete chemical reaction systems, inspired by its deterministic counterpart. It is able to achieve an excellent efficiency due to the fact that it is based on an approach with high deterministic order, allowing for larger stepsizes and leading to fast simulations. We compare it to the Euler τ-leap, as well as two more recent τ-leap methods, on a number of example problems, and find that as well as being very accurate, our method is the most robust, in terms of efficiency, of all the methods considered in this paper. The problems it is most suited for are those with increased populations that would be too slow to simulate using Gillespie’s stochastic simulation algorithm. For such problems, it is likely to achieve higher weak order in the moments. Conclusions The Stochastic Bulirsch-Stoer method is a novel stochastic solver that can be used for fast and accurate simulations. Crucially, compared to other similar methods, it better retains its high accuracy when the timesteps are increased. Thus the Stochastic Bulirsch-Stoer method is both computationally efficient and robust. These are key properties for any stochastic numerical method, as they must typically run many thousands of simulations. PMID:24939084

  13. Simulation of extrudate swell using an extended finite element method

    NASA Astrophysics Data System (ADS)

    Choi, Young Joon; Hulsen, Martien A.

    2011-09-01

    An extended finite element method (XFEM) is presented for the simulation of extrudate swell. A temporary arbitrary Lagrangian-Eulerian (ALE) scheme is incorporated to cope with the movement of the free surface. The main advantage of the proposed method is that the movement of the free surface can be simulated on a fixed Eulerian mesh without any need of re-meshing. The swell ratio of an upper-convected Maxwell fluid is compared with those of the moving boundary-fitted mesh problems of the conventional ALE technique, and those of Crochet & Keunings (1980). The proposed XFEM combined with the temporary ALE scheme can provide similar accuracy to the boundary-fitted mesh problems for low Deborah numbers. For high Deborah numbers, the method seems to be more stable for the extrusion problem.

  14. Energy Efficient Biomolecular Simulations with FPGA-based Reconfigurable Computing

    SciTech Connect

    Hampton, Scott S; Agarwal, Pratul K

    2010-05-01

    Reconfigurable computing (RC) is being investigated as a hardware solution for improving time-to-solution for biomolecular simulations. A number of popular molecular dynamics (MD) codes are used to study various aspects of biomolecules. These codes are now capable of simulating nanosecond time-scale trajectories per day on conventional microprocessor-based hardware, but biomolecular processes often occur at the microsecond time-scale or longer. A wide gap exists between the desired and achievable simulation capability; therefore, there is considerable interest in alternative algorithms and hardware for improving the time-to-solution of MD codes. The fine-grain parallelism provided by Field Programmable Gate Arrays (FPGA), combined with their low power consumption, makes them an attractive solution for improving the performance of MD simulations. In this work, we use an FPGA-based coprocessor to accelerate the compute-intensive calculations of LAMMPS, a popular MD code, achieving up to a 5.5-fold speed-up on the non-bonded force computations of the particle mesh Ewald method and up to a 2.2-fold speed-up in overall time-to-solution, and potentially a factor-of-9 increase in power-performance efficiency for the pair-wise computations. The results presented here provide an example of the multi-faceted benefits to an application in a heterogeneous computing environment.

  15. Applications of large eddy simulation methods to gyrokinetic turbulence

    SciTech Connect

    Bañón Navarro, A.; Happel, T.; Teaca, B.; Jenko, F.; Hammett, G. W.; Collaboration: ASDEX Upgrade Team

    2014-03-15

    The large eddy simulation (LES) approach—solving numerically the large scales of a turbulent system and accounting for the small-scale influence through a model—is applied to nonlinear gyrokinetic systems that are driven by a number of different microinstabilities. Comparisons between modeled, lower resolution, and higher resolution simulations are performed for an experimentally measurable quantity, the electron density fluctuation spectrum. Moreover, the validation and applicability of LES is demonstrated through a series of diagnostics based on the free energetics of the system.
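
    The scale separation underlying any LES can be illustrated independently of gyrokinetics: a low-pass filter splits a field into resolved large scales and small scales whose influence must be modeled. A minimal one-dimensional sketch with a sharp spectral cutoff (the synthetic field and the cutoff wavenumber are arbitrary choices):

      import numpy as np

      def sharp_spectral_filter(u, k_cut):
          # Return the large-scale (resolved) part of a periodic 1D field by
          # zeroing all Fourier modes with |k| > k_cut.
          u_hat = np.fft.fft(u)
          k = np.fft.fftfreq(u.size, d=1.0 / u.size)  # integer wavenumbers
          u_hat[np.abs(k) > k_cut] = 0.0
          return np.real(np.fft.ifft(u_hat))

      # Synthetic field: a few large-scale modes plus fine-scale noise.
      rng = np.random.default_rng(1)
      x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
      u = np.sin(x) + 0.5 * np.sin(4 * x) + 0.1 * rng.standard_normal(x.size)

      u_large = sharp_spectral_filter(u, k_cut=8)
      u_small = u - u_large  # the part an LES model must account for
      print("resolved fraction of energy:",
            np.sum(u_large**2) / np.sum(u**2))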

  16. Simulation: A Complementary Method for Teaching Health Services Strategic Management

    PubMed Central

    Reddick, W. T.

    1990-01-01

    Rapid change in the health care environment mandates a more comprehensive approach to the education of future health administrators. The area of consideration in this study is that of health care strategic management. A comprehensive literature review suggests microcomputer-based simulation as an appropriate vehicle for addressing the needs of both educators and students. Seven strategic management software packages are reviewed and rated with an instrument adapted from the Infoworld review format. The author concludes that a primary concern is the paucity of health care specific strategic management simulations.

  17. Simulation of 3D tumor cell growth using nonlinear finite element method.

    PubMed

    Dong, Shoubing; Yan, Yannan; Tang, Liqun; Meng, Junping; Jiang, Yi

    2016-01-01

    We propose a novel parallel computing framework for a nonlinear finite element method (FEM)-based cell model and apply it to simulate avascular tumor growth. We derive computation formulas to simplify the simulation and design the basic algorithms. As tumor cells proliferate over successive generations, the FEM elements may become larger and more distorted. We therefore describe a remeshing and refinement process for distorted or overly large finite elements, together with a parallel implementation based on the Message Passing Interface, to improve the accuracy and efficiency of the simulation. We demonstrate the feasibility and effectiveness of the FEM model and the parallelization methods in simulations of early tumor growth. PMID:26213205

  18. Applying dynamic simulation modeling methods in health care delivery research-the SIMULATE checklist: report of the ISPOR simulation modeling emerging good practices task force.

    PubMed

    Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Osgood, Nathaniel D; Padula, William V; Higashi, Mitchell K; Wong, Peter K; Pasupathy, Kalyan S; Crown, William

    2015-01-01

    Health care delivery systems are inherently complex, consisting of multiple tiers of interdependent subsystems and processes that are adaptive to changes in the environment and behave in a nonlinear fashion. Traditional health technology assessment and modeling methods often neglect the wider health system impacts that can be critical for achieving desired health system goals and are often of limited usefulness when applied to complex health systems. Researchers and health care decision makers can either underestimate or fail to consider the interactions among the people, processes, technology, and facility designs. Health care delivery system interventions need to incorporate the dynamics and complexities of the health care system context in which the intervention is delivered. This report provides an overview of common dynamic simulation modeling methods and examples of health care system interventions in which such methods could be useful. Three dynamic simulation modeling methods are presented to evaluate system interventions for health care delivery: system dynamics, discrete event simulation, and agent-based modeling. In contrast to conventional evaluations, a dynamic systems approach incorporates the complexity of the system and anticipates the upstream and downstream consequences of changes in complex health care delivery systems. This report assists researchers and decision makers in deciding whether these simulation methods are appropriate to address specific health system problems through an eight-point checklist referred to as the SIMULATE (System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence) tool. It is a primer for researchers and decision makers working in health care delivery and implementation sciences who face complex challenges in delivering effective and efficient care that can be addressed with system interventions. On reviewing this report, the readers should be able to identify whether these simulation modeling

  20. Parameter Studies, time-dependent simulations and design with automated Cartesian methods

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael

    2005-01-01

    Over the past decade, NASA has made a substantial investment in developing adaptive Cartesian grid methods for aerodynamic simulation. Cartesian-based methods played a key role in both the Space Shuttle Accident Investigation and in NASA's return-to-flight activities. The talk will provide an overview of recent technological developments, focusing on the generation of large-scale aerodynamic databases, automated CAD-based design, and time-dependent simulations of bodies in relative motion. Automation, scalability and robustness underlie all of these applications, and research in each of these topics will be presented.

  1. Making the Case for Simulation-Based Assessments to Overcome the Challenges in Evaluating Clinical Competency.

    PubMed

    Leigh, Gwen; Stueben, Frances; Harrington, Deedra; Hetherman, Stephen

    2016-05-13

    The use of simulation in nursing has increased substantially in the last few decades. Most schools of nursing have incorporated simulation into their curriculum, but few are using simulation to evaluate clinical competency at the end of a semester or prior to graduation. Using simulation for such high-stakes evaluation is somewhat novel to nursing. Educators are now being challenged to move simulation to the next level and use it as a tool for evaluating clinical competency. Can the use of simulation for high-stakes evaluation add to or improve our current evaluation methods? Using patient simulation for evaluation, in contrast to using it as a teaching modality, has important differences that must be considered. This article discusses the difficulties of evaluating clinical competency and makes the case for using simulation-based assessment as a method of high-stakes evaluation. Using simulation for high-stakes evaluation has the potential to significantly impact nursing education.

  2. A Simulation-Based Investigation of High Latency Space Systems Operations

    NASA Technical Reports Server (NTRS)

    Li, Zu Qun; Crues, Edwin Z.; Bielski, Paul; Moore, Michael

    2017-01-01

    NASA's human space program has developed considerable experience with near Earth space operations. Although NASA has experience with deep space robotic missions, NASA has little substantive experience with human deep space operations. Even in the Apollo program, the missions lasted only a few weeks and the communication latencies were on the order of seconds. Human missions beyond the relatively close confines of the Earth-Moon system will involve durations measured in months and communication latencies measured in minutes. To minimize crew risk and to maximize mission success, NASA needs to develop a better understanding of the implications of these mission durations and communication latencies on vehicle design, mission design, and flight controller interaction with the crew. To begin to address these needs, NASA performed a study using a physics-based subsystem simulation to investigate the interactions between the spacecraft crew and a ground-based mission control center for vehicle subsystem operations across long communication delays. The simulation, built with a subsystem modeling tool developed at NASA's Johnson Space Center, models the life support system of a Mars transit vehicle. The simulation contains models of the cabin atmosphere and pressure control system, electrical power system, drinking and waste water systems, internal and external thermal control systems, and crew metabolic functions. The simulation has three interfaces: 1) a real-time crew interface that can be used to monitor and control the vehicle subsystems; 2) a mission control center interface with data transport delays up to 15 minutes each way; 3) a real-time simulation test conductor interface that can be used to insert subsystem malfunctions and observe the interactions between the crew, ground, and simulated vehicle. The study was conducted during the 21st NASA Extreme Environment Mission Operations (NEEMO) mission between July 18 and August 3, 2016. The NEEMO
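
    The transport delay central to such a study can be mimicked with a simple release-time buffer: a message sent at time t becomes visible to the receiving side only at t + delay. The sketch below is a generic illustration of that mechanism, not the JSC modeling tool; the class name and message are invented.

      import heapq

      class DelayedLink:
          # One-way communication link with a fixed transport delay (seconds).

          def __init__(self, delay):
              self.delay = delay
              self._queue = []  # min-heap of (arrival_time, message)

          def send(self, t_now, message):
              heapq.heappush(self._queue, (t_now + self.delay, message))

          def receive(self, t_now):
              # Pop every message whose arrival time has passed.
              out = []
              while self._queue and self._queue[0][0] <= t_now:
                  out.append(heapq.heappop(self._queue)[1])
              return out

      # Crew-to-ground link with a 15-minute one-way delay.
      link = DelayedLink(delay=15 * 60)
      link.send(0.0, "cabin pressure warning")
      print(link.receive(600.0))  # [] -- nothing arrives after 10 minutes
      print(link.receive(900.0))  # ['cabin pressure warning']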

  3. Simulated scaling method for localized enhanced sampling and simultaneous "alchemical" free energy simulations: a general method for molecular mechanical, quantum mechanical, and quantum mechanical/molecular mechanical simulations.

    PubMed

    Li, Hongzhi; Fajer, Mikolai; Yang, Wei

    2007-01-14

    A potential scaling version of simulated tempering is presented to efficiently sample configuration space in a localized region. The present "simulated scaling" method is developed with a Wang-Landau type of updating scheme in order to quickly flatten the distributions in the scaling parameter λ_m space. This proposal is meaningful for a broad range of biophysical problems in which localized sampling is required. Besides its superior capability and robustness in localized conformational sampling, the simulated scaling method can also naturally lead to efficient "alchemical" free energy predictions when a dual-topology alchemical hybrid potential is applied; thereby, both the chemically and conformationally distinct portions of two end-point chemical states can be sampled efficiently at the same time. As demonstrated in this work, the present method is also feasible for quantum mechanical and quantum mechanical/molecular mechanical simulations.
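
    The flat-histogram bookkeeping at the heart of such a scheme can be sketched for a toy one-dimensional potential: a Metropolis walk in x is combined with weighted moves along a λ_m ladder, running log-weights g_m are incremented Wang-Landau style, and the modification factor is halved whenever the visitation histogram is roughly flat. The potential, the ladder, and all tuning constants below are illustrative, not the authors' settings.

      import numpy as np

      rng = np.random.default_rng(2)

      def u0(x):
          # Toy unscaled potential: a double well with minima near x = +/-1.
          return (x**2 - 1.0) ** 2

      lambdas = np.linspace(0.0, 1.0, 11)  # scaling ladder lambda_m
      g = np.zeros_like(lambdas)           # running log-weights (Wang-Landau)
      hist = np.zeros_like(lambdas)        # visitation histogram
      ln_f, beta, x, m = 1.0, 1.0, 0.0, 0

      for step in range(100_000):
          # Metropolis move in configuration space at fixed lambda_m.
          x_new = x + rng.normal(0.0, 0.5)
          if np.log(rng.random()) < -beta * lambdas[m] * (u0(x_new) - u0(x)):
              x = x_new
          # Attempted move up/down the lambda ladder, biased by the weights.
          m_new = m + rng.choice((-1, 1))
          if 0 <= m_new < lambdas.size:
              d = (-beta * (lambdas[m_new] - lambdas[m]) * u0(x)
                   + g[m] - g[m_new])
              if np.log(rng.random()) < d:
                  m = m_new
          g[m] += ln_f
          hist[m] += 1.0
          if hist.min() > 0.8 * hist.mean():  # roughly flat: refine and reset
              ln_f *= 0.5
              hist[:] = 0.0

      print("estimated log-weights g_m:", np.round(g - g[0], 2))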

  4. Exact hybrid particle/population simulation of rule-based models of biochemical systems.

    PubMed

    Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R

    2014-04-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings
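
    For reference, the exact network-based baseline mentioned above, Gillespie's direct method, fits in a few lines: draw an exponential waiting time from the total propensity, then pick a reaction with probability proportional to its propensity. The dimerization system and rate constants below are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)

      def gillespie_direct(x0, t_end, k_bind, k_unbind):
          # Gillespie's direct method for A + A <-> A2.
          # State x = [n_A, n_A2]; returns event times and states.
          x = np.array(x0, dtype=float)
          t, ts, xs = 0.0, [0.0], [x.copy()]
          while t < t_end:
              a = np.array([k_bind * x[0] * (x[0] - 1) / 2.0,  # A + A -> A2
                            k_unbind * x[1]])                  # A2 -> A + A
              a_total = a.sum()
              if a_total == 0:
                  break
              t += rng.exponential(1.0 / a_total)  # exponential waiting time
              if rng.random() < a[0] / a_total:    # pick which reaction fires
                  x += [-2, 1]
              else:
                  x += [2, -1]
              ts.append(t)
              xs.append(x.copy())
          return np.array(ts), np.array(xs)

      ts, xs = gillespie_direct(x0=[100, 0], t_end=10.0,
                                k_bind=0.01, k_unbind=0.1)
      print("final state [A, A2]:", xs[-1])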

  5. A demonstration device to simulate the radial velocity method for exoplanet detection

    NASA Astrophysics Data System (ADS)

    Choopan, W.; Liewrian, W.; Ketpichainarong, W.; Panijpan, B.

    2016-07-01

    A device for simulating exoplanet detection by the radial velocity method, based on the Doppler principle, has been constructed. The spectral shift of light from a distant star revolving mutually with its exoplanet is simulated by the frequency shift of the sound wave emitted by the device’s ‘star’ as it approaches and recedes relative to a static frequency detector. The detected sound frequency shift reflects the relative velocity of the ‘star’ very well. Both teachers and students benefit from the demonstrations of the radial velocity method and of the transit method (published by us previously) provided by this device.
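
    The acoustic analogy the device exploits is the classic moving-source Doppler relation f_obs = f_src * v_sound / (v_sound - v_src), with v_src positive for approach and negative for recession. A quick numerical check with made-up numbers:

      def doppler_shift(f_source, v_source, v_sound=343.0):
          # Observed frequency for a source moving directly toward (+v) or
          # away from (-v) a stationary detector, classic acoustic Doppler.
          return f_source * v_sound / (v_sound - v_source)

      f0 = 1000.0  # Hz, emitted by the orbiting "star" speaker
      for v in (+2.0, -2.0):  # m/s along the line of sight
          print(f"v = {v:+.1f} m/s -> observed {doppler_shift(f0, v):.2f} Hz")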

  6. Momentum-exchange method in lattice Boltzmann simulations of particle-fluid interactions.

    PubMed

    Chen, Yu; Cai, Qingdong; Xia, Zhenhua; Wang, Moran; Chen, Shiyi

    2013-07-01

    The momentum exchange method has been widely used in lattice Boltzmann simulations of particle-fluid interactions. Although proven accurate for stationary walls, it results in inaccurate particle dynamics unless corrected. In this work, we reveal the physical cause of this problem and find that the initial momentum of the net mass transfer through boundaries in the moving-boundary treatment is not counted in the conventional momentum exchange method. A corrected momentum exchange method is then proposed by taking into account the initial momentum of the net mass transfer at each time step. The method is easy to implement, with negligible extra computational cost. Direct numerical simulations of the sedimentation of a single elliptical particle are carried out to evaluate the accuracy of our method, as well as of other lattice Boltzmann-based methods, by comparison with the results of the finite element method. A shear flow test shows that our method is Galilean invariant.
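
    In the conventional momentum exchange method, each boundary link contributes the momentum of the incoming population minus that of the bounced-back one; the paper's correction additionally accounts for the momentum of the mass transferred across a moving boundary, which is not attempted here. A schematic D2Q9 sketch of the conventional sum, with an assumed (invented) data layout:

      import numpy as np

      # D2Q9 lattice velocities; OPP[i] is the direction opposite to i.
      C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                    [1, 1], [-1, 1], [-1, -1], [1, -1]])
      OPP = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

      def momentum_exchange_force(boundary_links, f):
          # Conventional momentum-exchange force on a solid particle.
          # boundary_links: (fluid_node, i) pairs where direction i points
          #   from the fluid node into the solid.
          # f: distributions, shape (n_nodes, 9), after streaming with
          #   bounce-back applied on the links.
          force = np.zeros(2)
          for node, i in boundary_links:
              # Momentum given to the wall: incoming f_i along c_i minus the
              # bounced-back population leaving along the opposite direction.
              force += f[node, i] * C[i] - f[node, OPP[i]] * C[OPP[i]]
          return force

      # Toy example: one boundary link with extra momentum streaming in +x.
      f = np.full((1, 9), 1.0 / 9.0)
      f[0, 1] = 0.2
      print(momentum_exchange_force([(0, 1)], f))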

  7. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    SciTech Connect

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.

  8. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments relies upon the global momentum conservation of the fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. Numerical examples illustrate the method's application to predicting bulk fluid motion including lateral propellant slosh in low-g conditions.

  9. Calibration of three rainfall simulators with automatic measurement methods

    NASA Astrophysics Data System (ADS)

    Roldan, Margarita

    2010-05-01

    The rainfall erosivity is the potential ability of rain to cause erosion. It is a function of the physical characteristics of rainfall (Hudson, 1971). Most expressions describing erosivity are related to kinetic energy or momentum, and thus to drop mass or size and fall velocity. Therefore, research on the factors determining erosivity leads to the need to study the relation between fall height and fall velocity for different drop sizes generated in a rainfall simulator (Epema and Riezebos, 1983). Rainfall simulators are one of the most widely used tools for erosion studies and are used to determine fall velocity and drop size; they allow repeated and multiple measurements. The main reason for using rainfall simulation as a research tool is to reproduce in a controlled way the behaviour expected in the natural environment. On many occasions, however, when simulated rain is compared with natural rain, there is a lack of correspondence between the two, which can cast doubt on the validity of the data, because the characteristics of natural rain are not adequately represented in rainfall simulation research (Dunkerley, 2008). Many rainfall simulations use high rain rates that do not resemble natural rain events, so the measurements are not comparable. Besides, the intensity is related to the kinetic energy, which

  10. Agent-based simulation of building evacuation using a grid graph-based model

    NASA Astrophysics Data System (ADS)

    Tan, L.; Lin, H.; Hu, M.; Che, W.

    2014-02-01

    Shifting from macroscopic to microscopic models, the agent-based approach has been widely used to model crowd evacuation as more attention is paid to individualized behaviour. Since indoor evacuation behaviour is closely related to the spatial features of the building, effective representation of indoor space is essential for the simulation of building evacuation. The traditional cell-based representation has limitations in reflecting spatial structure and is not suitable for topology analysis. Aiming at incorporating the powerful topology analysis functions of GIS to facilitate agent-based simulation of building evacuation, we used a grid graph-based model in this study to represent the indoor space. Such a model allows us to establish an evacuation network at a micro level. Potential escape routes from each node can thus be analysed through GIS network analysis functions, considering both the spatial structure and route capacity. This better supports agent-based modelling of evacuees' behaviour, including route choice and local movements. As a case study, we conducted a simulation of emergency evacuation from the second floor of an office building using Agent Analyst as the simulation platform. The results demonstrate the feasibility of the proposed method, as well as the potential of GIS in visualizing and analysing simulation results.
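
    On a grid graph of the kind described, route choice reduces to shortest-path queries. A minimal sketch using Dijkstra's algorithm on a 4-connected grid with blocked cells (the floor plan, start and exit are made up; a real model would also weight edges by route capacity):

      import heapq

      def dijkstra_grid(grid, start, exits):
          # Shortest walking distance from `start` to the nearest exit on a
          # 4-connected grid graph; grid[r][c] == 1 marks an obstacle.
          rows, cols = len(grid), len(grid[0])
          dist = {start: 0.0}
          pq = [(0.0, start)]
          while pq:
              d, (r, c) = heapq.heappop(pq)
              if (r, c) in exits:
                  return d
              if d > dist.get((r, c), float("inf")):
                  continue
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                      nd = d + 1.0
                      if nd < dist.get((nr, nc), float("inf")):
                          dist[(nr, nc)] = nd
                          heapq.heappush(pq, (nd, (nr, nc)))
          return float("inf")

      floor = [[0, 0, 0, 0],
               [1, 1, 0, 1],
               [0, 0, 0, 0]]
      print(dijkstra_grid(floor, start=(2, 0), exits={(0, 3)}))  # -> 5.0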

  11. Use of simulated data sets to evaluate the fidelity of metagenomic processing methods

    SciTech Connect

    Mavromatis, Konstantinos; Ivanova, Natalia; Barry, Kerri; Shapiro, Harris; Goltsman, Eugene; McHardy, Alice C.; Rigoutsos, Isidore; Salamov, Asaf; Korzeniewski, Frank; Land, Miriam; Lapidus, Alla; Grigoriev, Igor; Richardson, Paul; Hugenholtz, Philip; Kyrpides, Nikos C.

    2006-12-01

    Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (BLAST hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.

  12. Rupture Dynamics Simulation for Non-Planar fault by a Curved Grid Finite Difference Method

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Zhu, G.; Chen, X.

    2011-12-01

    We implement a non-staggered, split-node finite difference method to solve the dynamic rupture problem for non-planar faults. The split-node approach is widely used in dynamic simulations because it represents the fault plane more precisely than alternatives such as the thick-fault or stress-glut treatments. The finite difference method is also a popular numerical technique for solving kinematic and dynamic problems in seismology. Previous work, however, has focused mostly on staggered-grid schemes because of their simplicity and computational efficiency, yet such schemes are less convenient than non-staggered ones for describing boundary conditions, especially irregular boundaries such as non-planar faults. Zhang and Chen (2006) proposed a high-order non-staggered MacCormack finite difference method based on curved grids that solves irregular boundary problems precisely. Building on this non-staggered grid method, we successfully simulate the spontaneous rupture process. Because the fault plane is itself a boundary condition, and one that may be irregular, the method should be able to simulate rupture on any kind of bending fault plane. We first validate the method in Cartesian coordinates; for bending faults, curvilinear grids are used.
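
    The MacCormack scheme mentioned is a two-stage predictor-corrector. For the 1D linear advection equation u_t + a*u_x = 0 on a periodic domain it takes the textbook form below; this is only the classical second-order scheme, not the high-order curved-grid variant of Zhang and Chen (2006).

      import numpy as np

      def maccormack_advection(u, a, dx, dt, n_steps):
          # Classic MacCormack predictor-corrector for u_t + a u_x = 0,
          # periodic boundaries via np.roll.
          c = a * dt / dx  # Courant number, stable for |c| <= 1
          for _ in range(n_steps):
              # Predictor: forward difference in space.
              u_star = u - c * (np.roll(u, -1) - u)
              # Corrector: backward difference on the predicted field.
              u = 0.5 * (u + u_star - c * (u_star - np.roll(u_star, 1)))
          return u

      x = np.linspace(0, 1, 200, endpoint=False)
      u0 = np.exp(-200 * (x - 0.5) ** 2)  # Gaussian pulse
      u1 = maccormack_advection(u0.copy(), a=1.0, dx=x[1] - x[0],
                                dt=0.8 * (x[1] - x[0]), n_steps=250)
      print("pulse peak after advection:", u1.max())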

  13. The Local Variational Multiscale Method for Turbulence Simulation.

    SciTech Connect

    Collis, Samuel Scott; Ramakrishnan, Srinivas

    2005-05-01

    Accurate and efficient turbulence simulation in complex geometries is a formidable challenge. Traditional methods are often limited by low accuracy and/or restrictions to simple geometries. We explore the merger of Discontinuous Galerkin (DG) spatial discretizations with Variational Multi-Scale (VMS) modeling, termed Local VMS (LVMS), to overcome these limitations. DG spatial discretizations support arbitrarily high-order accuracy on unstructured grids amenable for complex geometries. Furthermore, high-order, hierarchical representation within DG provides a natural framework for a priori scale separation crucial for VMS implementation. We show that the combined benefits of DG and VMS within the LVMS method lead to a promising new approach to LES for use in complex geometries. The efficacy of LVMS for turbulence simulation is assessed by application to fully-developed turbulent channel flow. First, a detailed spatial resolution study is undertaken to record the effects of the DG discretization on turbulence statistics. Here, the local hp-refinement capabilities of DG are exploited to obtain reliable low-order statistics efficiently. Likewise, resolution guidelines for simulating wall-bounded turbulence using DG are established. We also explore the influence of enforcing Dirichlet boundary conditions indirectly through numerical fluxes in DG, which allows the solution to jump (slip) at the channel walls. These jumps are effective in simulating the influence of the wall commensurate with the local resolution, and this feature of DG is effective in mitigating near-wall resolution requirements. In particular, we show that by locally modifying the numerical viscous flux used at the wall, we are able to regulate the near-wall slip through a penalty that leads to improved shear-stress predictions. This work demonstrates the potential of the numerical viscous flux to act as a numerically consistent wall model, and this success warrants future research. As in any high-order numerical method, some

  14. Distributed adaptive simulation through standards-based integration of simulators and adaptive learning systems.

    PubMed

    Bergeron, Bryan; Cline, Andrew; Shipley, Jaime

    2012-01-01

    We have developed a distributed, standards-based architecture that enables simulation and simulator designers to leverage adaptive learning systems. Our approach, which incorporates an electronic competency record, open source LMS, and open source microcontroller hardware, is a low-cost, pragmatic option to integrating simulators with traditional courseware. PMID:22356955

  15. An Implicit Characteristic Based Method for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Briley, W. Roger

    2001-01-01

    An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.
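
    The explicit FDTD reference scheme used for comparison is compact enough to show in one dimension: E and H live on staggered grids and are leapfrogged in time. A vacuum sketch in normalized units (the grid size, Courant number, and source are arbitrary choices, and the domain ends are simply left reflecting):

      import numpy as np

      def fdtd_1d(n_cells=400, n_steps=600, courant=0.5):
          # 1D vacuum FDTD (Yee leapfrog) in normalized units where c = 1,
          # with a hard sinusoidal source and fixed (reflecting) ends.
          ez = np.zeros(n_cells)      # E at integer grid points
          hy = np.zeros(n_cells - 1)  # H staggered half a cell
          for n in range(n_steps):
              hy += courant * np.diff(ez)             # update H from curl E
              ez[1:-1] += courant * np.diff(hy)       # update E from curl H
              ez[50] += np.sin(2 * np.pi * 0.02 * n)  # hard source at cell 50
          return ez

      print("peak |Ez| after 600 steps:", np.abs(fdtd_1d()).max())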

  16. Individualized feedback during simulated laparoscopic training: a mixed methods study

    PubMed Central

    Weurlander, Maria; Hedman, Leif; Nisell, Henry; Lindqvist, Pelle G.; Felländer-Tsai, Li; Enochsson, Lars

    2015-01-01

    Objectives This study aimed to explore the value of individualized feedback on performance, flow and self-efficacy during simulated laparoscopy. Furthermore, we wished to explore attitudes towards feedback and simulator training among medical students. Methods Sixteen medical students were included in the study and randomized to laparoscopic simulator training with or without feedback. A teacher provided individualized feedback continuously throughout the procedures to the target group. Validated questionnaires and scales were used to evaluate self-efficacy and flow. The Mann-Whitney U test was used to evaluate differences between groups regarding laparoscopic performance (instrument path length), self-efficacy and flow. Qualitative data was collected by group interviews and interpreted using inductive thematic analyses. Results Sixteen students completed the simulator training and questionnaires. Instrument path length was shorter in the feedback group (median 3.9 m; IQR: 3.3-4.9) as compared to the control group (median 5.9 m; IQR: 5.0-8.1), p<0.05. Self-efficacy improved in both groups. Eleven students participated in the focus interviews. Participants in the control group expressed that they had fun, whereas participants in the feedback group were more concentrated on the task and also more anxious. Both groups had high ambitions to succeed and also expressed the importance of getting feedback. The authenticity of the training scenario was important for the learning process. Conclusions This study highlights the importance of individualized feedback during simulated laparoscopy training. The next step is to further optimize feedback and to transfer standardized and individualized feedback from the simulated setting to the operating room. PMID:26223033

  17. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam associated with the external tank of the U.S. space shuttle has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.
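
    The linear least squares half of such a comparison is a short exercise with standard tools: fit coefficients w minimizing ||A w - y||^2. The two-predictor synthetic data below is invented purely to show the mechanics, not the external tank data.

      import numpy as np

      rng = np.random.default_rng(4)

      # Made-up data: response y depends on two predictors plus noise.
      x1 = rng.uniform(0, 1, 50)  # e.g., a normalized void size
      x2 = rng.uniform(0, 1, 50)  # e.g., a normalized local pressure
      y = 2.0 * x1 + 0.5 * x2 + 1.0 + 0.05 * rng.standard_normal(50)

      # Design matrix with an intercept column; solve min ||A w - y||^2.
      A = np.column_stack([x1, x2, np.ones_like(x1)])
      w, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
      print("fitted coefficients:", np.round(w, 3))  # near [2.0, 0.5, 1.0]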

  18. Particle Splitting: A New Method for SPH Star Formation Simulations

    NASA Astrophysics Data System (ADS)

    Kitsionas, Spyridon

    2003-07-01

    We have invented a new algorithm to use with self-gravitating SPH Star Formation codes. The new method is designed to enable SPH simulations to self-regulate their numerical resolution, i.e. the number of SPH particles; the latter is calculated using the Jeans condition (Bate & Burkert 1997) and the local hydrodynamic conditions of the gas. We apply our SPH with Particle Splitting code to cloud-cloud collision simulations. Chapter 2 lists the properties of our standard SPH code. Chapter 3 discusses the efficiency of the standard code as this is applied to simulations of rotating, uniform clouds with m=2 density perturbations. Chapter 4 [astro-ph/0203057] describes the new method and the tests that it has successfully been applied to. It also contains the results of the application of Particle Splitting to the case of rotating clouds as those of Chapter 3, where, with great computational efficiency, we have reproduced the results of FD codes and SPH simulations with large numbers of particles. Chapter 5 gives a detailed account of the cloud-cloud collisions studied, starting from a variety of initial conditions produced by altering the cloud mass, cloud velocity and the collision impact parameter. In the majority of the cases studied, the collisions produced filaments (similar to those observed in ammonia in nearby Star Forming Regions) or networks of filaments; groups of protostellar cores have been produced by fragmentation of the filaments. The accretion rates at these cores are comparable to those of Class 0 objects. Due to time-step constraints the simulations stop early in their evolution. The star formation efficiency of this mechanism is extrapolated in time and is found to be 10-20%.

  19. Simulation of an array-based neural net model

    NASA Technical Reports Server (NTRS)

    Barnden, John A.

    1987-01-01

    Research in cognitive science suggests that much of cognition involves the rapid manipulation of complex data structures. However, it is very unclear how this could be realized in neural networks or connectionist systems. A core question is: how could the interconnectivity of items in an abstract-level data structure be neurally encoded? The answer appeals mainly to positional relationships between activity patterns within neural arrays, rather than directly to neural connections in the traditional way. The new method was initially devised to account for abstract symbolic data structures, but it also supports cognitively useful spatial analogue, image-like representations. As the neural model is based on massive, uniform, parallel computations over 2D arrays, the massively parallel processor is a convenient tool for simulation work, although there are complications in using the machine to the fullest advantage. An MPP Pascal simulation program for a small pilot version of the model is running.

  20. A web-based repository of surgical simulator projects.

    PubMed

    Leskovský, Peter; Harders, Matthias; Székely, Gábor

    2006-01-01

    The use of computer-based surgical simulators for training of prospective surgeons has been a topic of research for more than a decade. As a result, a large number of academic projects have been carried out, and a growing number of commercial products are available on the market. Keeping track of all these endeavors for established groups as well as for newly started projects can be quite arduous. Gathering information on existing methods, already traveled research paths, and problems encountered is a time consuming task. To alleviate this situation, we have established a modifiable online repository of existing projects. It contains detailed information about a large number of simulator projects gathered from web pages, papers and personal communication. The database is modifiable (with password protected sections) and also allows for a simple statistical analysis of the collected data. For further information, the surgical repository web page can be found at www.virtualsurgery.vision.ee.ethz.ch. PMID:16404068

  1. An Ellipsoidal Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 1

    NASA Technical Reports Server (NTRS)

    Shivarama, Ravishankar; Fahrenthold, Eric P.

    2004-01-01

    A number of coupled particle-element and hybrid particle-element methods have been developed for the simulation of hypervelocity impact problems, to avoid certain disadvantages associated with the use of pure continuum-based or pure particle-based methods. To date these methods have employed spherical particles. In recent work a hybrid formulation has been extended to the ellipsoidal particle case. A model formulation approach based on Lagrange's equations, with particle entropies serving as generalized coordinates, avoids the angular momentum conservation problems that have been reported with ellipsoidal smoothed particle hydrodynamics models.

  2. Simulation Of A Photofission-Based Cargo Interrogation System

    SciTech Connect

    King, Michael; Gozani, Tsahi; Stevenson, John; Shaw, Timothy

    2011-06-01

    A comprehensive model has been developed to characterize and optimize the detection of Bremsstrahlung x-ray induced fission signatures from nuclear materials hidden in cargo containers. An effective active interrogation system should not only induce a large number of fission events but also efficiently detect their signatures. The proposed scanning system utilizes a 9-MV commercially available linear accelerator and the detection of strong fission signals, i.e. delayed gamma rays and prompt neutrons. Because the scanning system is complex and the cargo containers are large and often highly attenuating, the simulation method segments the model into several physical steps, representing each change of radiation particle. Each approximation is carried out separately, resulting in a major reduction in computational time and a significant improvement in tally statistics. The model investigates the effect of various cargo types, densities and distributions on the fission rate and detection rate. Hydrogenous and metallic cargos, homogeneous and heterogeneous, as well as various locations of the nuclear material inside the cargo container, were studied. We show that for the photofission-based interrogation system simulation, the final results are not only in good agreement with a full, single-step simulation but also with experimental results, further validating the full-system simulation.

  3. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit is shown to dramatically increase discretization error, requiring additional mesh resolution to control it. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.

  4. Axon voltage-clamp simulations. I. Methods and tests.

    PubMed Central

    Moore, J W; Ramón, F; Joyner, R W

    1975-01-01

    This is the first in a series of four papers in which we present the numerical simulation of the application of the voltage clamp technique to excitable cells. In this paper we describe the application of the Crank-Nicolson (1947) method for the solution of the parabolic partial differential equations that describe a cylindrical cell in which the ionic conductances are functions of voltage and time (Hodgkin and Huxley, 1952). This method is compared with other methods in terms of accuracy and speed of solution for a propagated action potential. In addition, differential equations representing a simple voltage-clamp electronic circuit are presented. Using the voltage clamp circuit equations, we simulate the voltage clamp of a single isopotential membrane patch and show how the parameters of the circuit affect the transient response of the patch to a step change in the control potential. The simulation methods presented in this series of papers allow the evaluation of voltage clamp control of an excitable cell or a syncytium of excitable cells. To the extent that membrane parameters and geometrical factors can be determined, the methods presented here provide solutions for the voltage profile as a function of time. PMID:1174640
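
    The Crank-Nicolson discretization referenced here averages the spatial operator between time levels, (I - (r/2)L) v^{n+1} = (I + (r/2)L) v^n, which is unconditionally stable and second-order accurate in time. A minimal sketch for a constant-coefficient diffusion term with zero values just outside both ends (illustrative parameters, dense linear algebra for brevity):

      import numpy as np

      def crank_nicolson_diffusion(v0, d, dx, dt, n_steps):
          # Crank-Nicolson for v_t = d * v_xx with zero ghost values at the
          # ends. Solves (I - r/2 L) v_new = (I + r/2 L) v_old each step.
          n = v0.size
          r = d * dt / dx**2
          lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
                 + np.diag(np.ones(n - 1), -1))  # 1D Laplacian stencil
          a = np.eye(n) - 0.5 * r * lap
          b = np.eye(n) + 0.5 * r * lap
          v = v0.copy()
          for _ in range(n_steps):
              v = np.linalg.solve(a, b @ v)
          return v

      x = np.linspace(0, 1, 101)
      v0 = np.exp(-100 * (x - 0.5) ** 2)  # initial voltage bump
      v = crank_nicolson_diffusion(v0, d=1.0, dx=x[1] - x[0],
                                   dt=1e-3, n_steps=100)
      print("peak voltage after diffusion:", v.max())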

  5. Validation of Ultrafilter Performance Model Based on Systematic Simulant Evaluation

    SciTech Connect

    Russell, Renee L.; Billing, Justin M.; Smith, Harry D.; Peterson, Reid A.

    2009-11-18

    Because of limited availability of test data with actual Hanford tank waste samples, a method was developed to estimate expected filtration performance based on physical characterization data for the Hanford Tank Waste Treatment and Immobilization Plant. A test with simulated waste was analyzed to demonstrate that filtration of this class of waste is consistent with a concentration polarization model. Subsequently, filtration data from actual waste samples were analyzed to demonstrate that centrifuged solids concentrations provide a reasonable estimate of the limiting concentration for filtration.

  6. Fuzzy-based simulation of real color blindness.

    PubMed

    Lee, Jinmi; dos Santos, Wellington P

    2010-01-01

    About 8% of men are affected by color blindness. That population is at a disadvantage since they cannot perceive a substantial amount of the visual information. This work presents two computational tools developed to assist color blind people. The first one tests for color blindness and assesses its severity. The second tool is based on fuzzy logic, and implements a proposed method to simulate real red and green color blindness, in order to generate synthetic cases of color vision disturbance in statistically significant numbers. Our purpose is to develop correction tools and obtain a deeper understanding of the accessibility problems faced by people with chromatic visual impairment.

  7. Some Developments of the Equilibrium Particle Simulation Method for the Direct Simulation of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Macrossan, M. N.

    1995-01-01

    The direct simulation Monte Carlo (DSMC) method is the established technique for the simulation of rarefied gas flows. In some flows of engineering interest, such as occur for aero-braking spacecraft in the upper atmosphere, DSMC can become prohibitively expensive in CPU time because some regions of the flow, particularly on the windward side of blunt bodies, become collision dominated. As an alternative to using a hybrid DSMC and continuum gas solver (Euler or Navier-Stokes solver), this work is aimed at making the particle simulation method efficient in the high density regions of the flow. A high density, infinite collision rate limit of DSMC, the Equilibrium Particle Simulation method (EPSM), was proposed some 15 years ago. EPSM is developed here for the flow of a gas consisting of many different species of molecules and is shown to be computationally efficient (compared to DSMC) for high collision rate flows. It thus offers great potential as part of a hybrid DSMC/EPSM code which could handle flows in the transition regime between rarefied gas flows and fully continuum flows. As a first step towards this goal a pure EPSM code is described. The next step of combining DSMC and EPSM is not attempted here but should be straightforward. EPSM and DSMC are applied to Taylor-Couette flow with Kn = 0.02 and 0.0133 and S(ω) = 3. Toroidal vortices develop for both methods but some differences are found, as might be expected for the given flow conditions. EPSM appears to be less sensitive to the sequence of random numbers used in the simulation than is DSMC and may also be more dissipative. The question of the origin and the magnitude of the dissipation in EPSM is addressed. It is suggested that this analysis is also relevant to DSMC when the usual accuracy requirements on the cell size and decoupling time step are relaxed in the interests of computational efficiency.

  8. Multigrid Methods for Fully Implicit Oil Reservoir Simulation

    NASA Technical Reports Server (NTRS)

    Molenaar, J.

    1996-01-01

    In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations. A two-level FAS algorithm is presented for the black-oil equations, and linear multigrid for

  9. Quadrature Moments Method for the Simulation of Turbulent Reactive Flows

    NASA Technical Reports Server (NTRS)

    Raman, Venkatramanan; Pitsch, Heinz; Fox, Rodney O.

    2003-01-01

    A sub-filter model for reactive flows, namely the DQMOM model, was formulated for Large Eddy Simulation (LES) using the filtered mass density function. Transport equations required to determine the location and size of the delta peaks were then formulated for a 2-peak decomposition of the FDF. The DQMOM scheme was implemented in an existing structured-grid LES solver. Simulations of a scalar shear layer in an experimental configuration showed that the first and second moments of both reactive and inert scalars are in good agreement with a conventional Lagrangian scheme that evolves the same FDF. Comparisons with LES simulations performed using a laminar chemistry assumption for the reactive scalar show that the new method provides vast improvements at minimal computational cost. Currently, the DQMOM model is being implemented for use with the progress variable/mixture fraction model of Pierce. Comparisons with experimental results and with LES simulations using a single environment for the progress variable are planned. Future studies will aim at understanding the effect of an increase in the number of environments on the predictions.

  10. A mixed RKPM/RBF immersed method for FSI simulations

    NASA Astrophysics Data System (ADS)

    Giometto, M.; Fang, J.; Putti, M.; Saetta, A.; Lanzoni, S.; Parlange, M. B.

    2012-04-01

    Simulating fluid-structure interaction still represents a challenging multiphysics application in the framework of Civil and Environmental Engineering. Techniques to couple the two continua in an efficient way have become more and more sophisticated, and they all aim to handle the two different frames of reference efficiently and to avoid the need to re-mesh when the element aspect ratio becomes unacceptable (e.g. under large deformations). To overcome such problems we propose a mixed RKPM/RBF immersed method in which a Lagrangian meshless solid domain moves on top of a background Eulerian fluid mesh that spans the entire computational domain. This method is similar to the original Immersed Boundary Method introduced by C. Peskin, except that the structure has the same spatial dimension as the fluid domain and therefore the effects of the fluid-embedded bodies are summarized into a volumetric source term. The governing equations for the viscous fluid are discretized and solved on a regular, Cartesian mesh using a pseudo-spectral approach, while the solid equations are solved by means of RKPM basis functions. The use of a Reproducing Kernel Particle Method for the solid domain enables us to easily handle large deformations without the typical mesh distortion of Finite Element Methods, while still providing sufficient accuracy in the solution. At the moment we are validating the code by running simple-geometry, low-Reynolds-number DNS simulations; a further step will be to include a subgrid-scale model in the fluid formulation and to run simulations with turbulent flows at high Reynolds numbers, to study the effects of flexible structures (e.g. trees, bridges, towers) on the Atmospheric Boundary Layer.

  11. Construction of dynamic stochastic simulation models using knowledge-based techniques

    NASA Technical Reports Server (NTRS)

    Williams, M. Douglas; Shiva, Sajjan G.

    1990-01-01

    Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).

  12. Comparison of AGE and Spectral Methods for the Simulation of Far-Wakes

    NASA Technical Reports Server (NTRS)

    Bisset, D. K.; Rogers, M. M.; Kega, Dennis (Technical Monitor)

    1999-01-01

    Turbulent flow simulation methods based on finite differences are attractive for their simplicity, flexibility and efficiency, but not always for accuracy or stability. This report demonstrates that a good compromise is possible with the Advected Grid Explicit (AGE) method. AGE has proven to be both efficient and accurate for simulating turbulent free-shear flows, including planar mixing layers and planar jets. Its efficiency results from its localized fully explicit finite difference formulation (Bisset 1998a,b) that is very straightforward to compute, outweighing the need for a fairly small timestep. Also, most of the successful simulations were slightly under-resolved, and therefore they were, in effect, large-eddy simulations (LES) without a sub-grid-scale (SGS) model, rather than direct numerical simulations (DNS). The principle is that the role of the smallest scales of turbulent motion (when the Reynolds number is not too low) is to dissipate turbulent energy, and therefore they do not have to be simulated when the numerical method is inherently dissipative at its resolution limits. Such simulations are termed 'auto-LES' (LES with automatic SGS modeling) in this report.

  13. A Survey of Stochastic Simulation and Optimization Methods in Signal Processing

    NASA Astrophysics Data System (ADS)

    Pereyra, Marcelo; Schniter, Philip; Chouzenoux, Emilie; Pesquet, Jean-Christophe; Tourneret, Jean-Yves; Hero, Alfred O.; McLaughlin, Steve

    2016-03-01

    Modern signal processing (SP) methods rely heavily on probability and statistics to solve challenging SP problems. SP methods are now expected to deal with ever more complex models, requiring ever more sophisticated computational inference techniques. This has driven the development of statistical SP methods based on stochastic simulation and optimization. Stochastic simulation and optimization algorithms are computationally intensive tools for performing statistical inference in models that are analytically intractable and beyond the scope of deterministic inference methods. They have recently been applied successfully to many difficult problems involving complex statistical models and sophisticated (often Bayesian) statistical inference techniques. This survey paper offers an introduction to stochastic simulation and optimization methods in signal and image processing. The paper addresses a variety of high-dimensional Markov chain Monte Carlo (MCMC) methods as well as deterministic surrogate methods, such as variational Bayes, the Bethe approach, belief and expectation propagation, and approximate message passing algorithms. It also discusses a range of optimization methods that have been adopted to solve stochastic problems, as well as stochastic methods for deterministic optimization. Finally, areas of overlap between simulation and optimization, in particular optimization-within-MCMC and MCMC-driven optimization, are discussed.
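
    As a concrete example of the simplest member of the MCMC family the survey covers, here is a minimal random-walk Metropolis-Hastings sampler; the target density and step size are arbitrary illustrative choices.

```python
# Random-walk Metropolis-Hastings: propose a Gaussian step, accept with
# probability min(1, target ratio); works with any log-density.
import numpy as np

def metropolis(log_target, x0, n_samples, step=0.5, rng=np.random.default_rng(0)):
    x, logp = x0, log_target(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal()
        logp_prop = log_target(prop)
        if np.log(rng.random()) < logp_prop - logp:   # accept/reject
            x, logp = prop, logp_prop
        samples.append(x)
    return np.array(samples)

chain = metropolis(lambda x: -0.5 * x**2, x0=0.0, n_samples=10_000)  # standard normal target
```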

  14. Particle-based simulations of red blood cells-A review.

    PubMed

    Ye, Ting; Phan-Thien, Nhan; Lim, Chwee Teck

    2016-07-26

    Particle-based methods have become increasingly attractive for solving biofluid flow problems because of the ease and flexibility they afford in modeling fluids with complex structure. In this review, we focus on popular particle-based methods widely used in red blood cell (RBC) simulations, including dissipative particle dynamics (DPD), smoothed particle hydrodynamics (SPH), and the lattice Boltzmann method (LBM). We introduce their basic ideas and formulations, and present their applications in RBC simulations, which we divide into three classes according to the number of RBCs simulated: a single RBC, two or multiple RBCs, and RBC suspensions. Furthermore, we analyze their advantages and disadvantages. On weighing the pros and cons of the methods, a combination of the immersed boundary (IB) method and some form of smoothed dissipative particle hydrodynamics (SDPD) may be required to deal effectively with RBC simulations. PMID:26706718
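
    For concreteness, a sketch of the pairwise force at the heart of one of the reviewed methods, DPD, with its conservative, dissipative and random contributions; all parameter values below are illustrative only.

```python
# Standard DPD pair force (Groot-Warren form): soft repulsion + pairwise friction
# + pairwise noise, with sigma fixed by the fluctuation-dissipation relation.
import numpy as np

def dpd_pair_force(r_ij, v_ij, a=25.0, gamma=4.5, kT=1.0, rc=1.0, dt=0.01,
                   rng=np.random.default_rng()):
    r = np.linalg.norm(r_ij)
    if r >= rc:
        return np.zeros(3)
    e = r_ij / r                       # unit vector from j to i
    w = 1.0 - r / rc                   # common weight function
    sigma = np.sqrt(2.0 * gamma * kT)  # fluctuation-dissipation relation
    f_c = a * w * e                                            # conservative
    f_d = -gamma * w**2 * np.dot(e, v_ij) * e                  # dissipative
    f_r = sigma * w * rng.standard_normal() * e / np.sqrt(dt)  # random
    return f_c + f_d + f_r
```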

  15. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    SciTech Connect

    Mehmani, Yashar; Schoenherr, Martin; Pasquali, Andrea; Perkins, William A.; Kim, Kyungjoo; Perego, Mauro; Parks, Michael L.; Balhoff, Matthew T.; Richmond, Marshall C.; Geier, Martin; Krafczyk, Manfred; Luo, Li-Shi; Tartakovsky, Alexandre M.; Yang, Xiaofan; Scheibe, Timothy D.; Trask, Nathaniel

    2015-09-28

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This paper provides support for confidence in a variety of pore-scale modeling methods, and motivates further development and application of pore-scale simulation methods.
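
    As an illustration of one of the macroscopic comparison metrics used in such intercomparisons, here is a sketch of estimating permeability from a simulated pore-scale velocity field via Darcy's law; the field and all parameters below are placeholders, not data from the study.

```python
# Darcy's law: k = -mu * <u> / (dP/dx), with <u> the superficial velocity
# (x-velocity averaged over the whole domain, solid voxels counted as zero).
import numpy as np

def darcy_permeability(u_mean, mu, dP_dx):
    return -mu * u_mean / dP_dx

u_field = np.full((10, 10, 10), 1e-4)       # placeholder for a code's x-velocity output [m/s]
u_superficial = u_field.mean()              # superficial (volume-averaged) velocity
k = darcy_permeability(u_superficial, mu=1e-3, dP_dx=-10.0)  # ~1e-8 m^2 with these placeholders
```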

  18. Study on self-calibration angle encoder using simulation method

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Xue, Zi; Huang, Yao; Wang, Xiaona

    2016-01-01

    Angle measurement technology is very important in precision manufacturing, the optical industry, aerospace, aviation and navigation. The angle encoder, which uses the concept of 'subdivision of the full circle (2π rad = 360°)' and transforms the angle into a number of electronic pulses, is the most common instrument for angle measurement. To improve the accuracy of the angle encoder, a novel self-calibration method was proposed that enables the angle encoder to calibrate itself without an external angle reference. For the study of the self-calibration method, an angle deviation curve over 0° to 360° was simulated with equal-weight Fourier components, and the self-calibration algorithm was applied to this deviation curve. The simulation result shows the relationship between the arrangement of multiple reading heads and the Fourier-component distribution of the encoder deviation curve. In addition, an actual self-calibration angle encoder was calibrated against a polygon angle standard at the National Institute of Metrology, China. The experimental result indicates the actual self-calibration effect on the Fourier-component distribution of the encoder deviation curve. Finally, the comparison between the simulated and the experimental self-calibration results shows good consistency and demonstrates the reliability of the self-calibration angle encoder.
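
    The relationship between head arrangement and Fourier components can be reproduced in a few lines: averaging the deviation sensed by N equally spaced reading heads cancels every Fourier order of the deviation curve except multiples of N. This is a toy sketch of that known property, not the authors' algorithm.

```python
# Simulate an equal-weight deviation curve (orders 1..10), average four equally
# spaced reading heads, and check which Fourier orders survive.
import numpy as np

n_heads, n = 4, 3600
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
dev = sum(np.sin(k * theta) for k in range(1, 11))   # simulated deviation curve

shift = n // n_heads                                 # heads spaced 2*pi/n_heads apart
avg = sum(np.roll(dev, -h * shift) for h in range(n_heads)) / n_heads

spec = np.abs(np.fft.rfft(avg)) / n
print(np.nonzero(spec > 1e-6)[0])                    # -> [4 8]: only multiples of 4 remain
```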

  19. IR characteristic simulation of city scenes based on radiosity model

    NASA Astrophysics Data System (ADS)

    Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu

    2013-09-01

    Reliable modeling of the thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the scene involves convective and conductive interactions between objects as well as radiative interactions between them. A method based on a radiosity model, which describes these complex effects, has been developed to enable an accurate simulation of the radiance distribution of city scenes. Firstly, the physical processes affecting the IR characteristics of city scenes were described. Secondly, heat balance equations were formed by combining the atmospheric conditions, shadow maps and the scene geometry. Finally, a finite difference method was used to calculate the kinetic temperature of each object surface, and a radiosity model was introduced to describe the scattering of radiation between surface elements in the scene. By synthesizing the radiance distribution of the objects in the infrared range, the IR characteristics of the scene were obtained. Real infrared images and model predictions were shown and compared. The results demonstrate that this method can realistically simulate the IR characteristics of city scenes; it effectively reproduces infrared shadow effects and the radiative interactions between objects.
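
    The radiosity step at the core of such a method reduces to a linear system B = E + diag(ρ)FB over the surface elements. A minimal sketch, with placeholder emissions, reflectivities and form factors:

```python
# Solve (I - diag(rho) F) B = E for the radiosity B of each surface element;
# F[i, j] is the form factor from element i to element j (toy values here).
import numpy as np

n = 4                                    # number of surface elements
E = np.array([10.0, 0.0, 0.0, 0.0])      # self-emission (e.g. one hot roof element)
rho = np.array([0.3, 0.5, 0.5, 0.2])     # infrared reflectivities
F = np.full((n, n), 1.0 / (n - 1))
np.fill_diagonal(F, 0.0)                 # toy closed-enclosure form factors (rows sum to 1)

B = np.linalg.solve(np.eye(n) - rho[:, None] * F, E)   # radiosity of every element
```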

  20. New Simulation Methods to Facilitate Achieving a Mechanistic Understanding of Basic Pharmacology Principles in the Classroom

    ERIC Educational Resources Information Center

    Grover, Anita; Lam, Tai Ning; Hunt, C. Anthony

    2008-01-01

    We present a simulation tool to aid the study of basic pharmacology principles. By taking advantage of the properties of agent-based modeling, the tool facilitates taking a mechanistic approach to learning basic concepts, in contrast to the traditional empirical methods. Pharmacodynamics is a particular aspect of pharmacology that can benefit from…

  1. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  2. Graph-based simulation of quantum computation in the density matrix representation

    NASA Astrophysics Data System (ADS)

    Viamontes, George F.; Markov, Igor L.; Hayes, John P.

    2004-08-01

    Quantum-mechanical phenomena are playing an increasing role in information processing, as transistor sizes approach the nanometer level, and quantum circuits and data encoding methods appear in the securest forms of communication. Simulating such phenomena efficiently is exceedingly difficult because of the vast size of the quantum state space involved. A major complication is caused by errors (noise) due to unwanted interactions between the quantum states and the environment. Consequently, simulating quantum circuits and their associated errors using the density matrix representation is potentially significant in many applications, but is well beyond the computational abilities of most classical simulation techniques in both time and memory resources. The size of a density matrix grows exponentially with the number of qubits simulated, rendering array-based simulation techniques that explicitly store the density matrix intractable. In this work, we propose a new technique aimed at efficiently simulating quantum circuits that are subject to errors. In particular, we describe new graph-based algorithms implemented in the simulator QuIDDPro/D. While previously reported graph-based simulators operate in terms of the state-vector representation, these new algorithms use the density matrix representation. To gauge the improvements offered by QuIDDPro/D, we compare its simulation performance with an optimized array-based simulator called QCSim. Empirical results, generated by both simulators on a set of quantum circuit benchmarks involving error correction, reversible logic, communication, and quantum search, show that the graph-based approach far outperforms the array-based approach.
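
    For contrast with the graph-based representation, a dense array-based density-matrix step is easy to write down. The sketch below, which is illustrative and not code from either simulator, applies a Hadamard gate followed by a bit-flip error channel expressed with Kraus operators.

```python
# Single-qubit density-matrix evolution: unitary gate, then a noisy channel.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
rho = np.array([[1, 0], [0, 0]], dtype=complex)   # qubit initialized to |0><0|

rho = H @ rho @ H.conj().T                        # unitary evolution
p = 0.1                                           # bit-flip probability
rho = (1 - p) * rho + p * (X @ rho @ X)           # Kraus map of the error channel

# For n qubits the density matrix is 2^n x 2^n, which is why explicit arrays
# become intractable and motivate compressed graph-based representations.
```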

  3. Mosquito population dynamics from cellular automata-based simulation

    NASA Astrophysics Data System (ADS)

    Syafarina, Inna; Sadikin, Rifki; Nuraini, Nuning

    2016-02-01

    In this paper we present an innovative model for simulating mosquito-vector population dynamics. The simulation consists of two stages: demography and dispersal dynamics. For the demography simulation, we follow an existing model of the mosquito life cycle, while dispersal of the vector is simulated with a cellular automata-based model. In the simulation, each individual vector can move to other grid cells via a random walk, and the model can also represent an immunity factor for each grid cell. We ran simulations to evaluate the model's correctness and conclude that the model behaves correctly; however, it still needs realistic parameter values in order to match real data.
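
    A toy sketch of the dispersal stage under the stated assumptions (random walk on a grid); grid size, population and step counts below are illustrative.

```python
# Each mosquito random-walks on a periodic 2D grid; density counts vectors per cell.
import numpy as np

rng = np.random.default_rng(1)
grid_n, n_mosq = 50, 200
pos = rng.integers(0, grid_n, size=(n_mosq, 2))       # cell coordinates of each vector

moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0], [0, 0]])
for _ in range(100):                                  # dispersal steps
    pos = (pos + moves[rng.integers(0, len(moves), n_mosq)]) % grid_n

density = np.zeros((grid_n, grid_n), int)
np.add.at(density, (pos[:, 0], pos[:, 1]), 1)         # mosquito count per cell
```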

  5. Amyloid oligomer structure characterization from simulations: A general method

    NASA Astrophysics Data System (ADS)

    Nguyen, Phuong H.; Li, Mai Suan; Derreumaux, Philippe

    2014-03-01

    Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ9-40, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.
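
    The permutation-invariant bookkeeping can be sketched in a few lines: sort the per-chain and per-pair state labels so that any relabeling of identical molecules yields the same oligomer state. This is a toy illustration of the idea, not the authors' code; state labels and the pair-state function are invented.

```python
# Label an oligomer by sorted single-molecule states plus sorted pair states,
# making the label invariant under permutation of chemically identical chains.
from itertools import combinations

def oligomer_state(chain_states, pair_state):
    intra = tuple(sorted(chain_states))                       # intramolecular part
    pairs = tuple(sorted(pair_state(chain_states[i], chain_states[j])
                         for i, j in combinations(range(len(chain_states)), 2)))
    return intra, pairs                                       # product-basis label

ps = lambda a, b: tuple(sorted((a, b)))                       # toy symmetric pair state

# Two orderings of the same tetramer give the same overall label:
assert oligomer_state([2, 0, 1, 1], ps) == oligomer_state([1, 2, 1, 0], ps)
```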

  6. An HLA based design of space system simulation environment

    NASA Astrophysics Data System (ADS)

    Li, Yinghua; Li, Yong; Liu, Jie

    2007-06-01

    Space system simulation is involved in many application fields, such as space remote sensing and space communication. A simulation environment that can be shared by different space system simulations is therefore needed. Two rules, called object template towing and hierarchical reusability, are proposed. Based on these two rules, the architecture, the network structure and the function structure of the simulation environment are designed. Then, mechanisms for utilizing data resources, inheriting object models and running simulation systems are constructed. These mechanisms allow simulation objects defined in advance to be easily inherited by different HLA federates, and fundamental simulation models to be shared by different simulation systems. The simulation environment is therefore highly universal and reusable.

  7. Simulation Parameters Settings Methodology Proposal Based on Leverage Points

    NASA Astrophysics Data System (ADS)

    Janošek, Michal; Kocian, Václav

    Simulation is one of the most time-consuming phases of complex system design. It is necessary to test the model with a number of parameters of the entire simulation, its control mechanism and its components, and to test various scenarios and strategies. In this article we present a methodology proposal for setting simulation parameters based on the leverage-points hierarchy developed by Donella H. Meadows, to aid the simulation process.

  8. Discrete Element Method Simulation of Nonlinear Viscoelastic Stress Wave Problems

    NASA Astrophysics Data System (ADS)

    Tang, Zhiping; Horie, Y.; Wang, Wenqiang

    2002-07-01

    A DEM (Discrete Element Method) simulation of nonlinear viscoelastic stress wave problems is carried out. The interaction forces among elements are described using a model in which neighboring elements are linked by a nonlinear spring and a number of Maxwell components in parallel. By making use of exponential relaxation moduli, it is shown that numerical computation of the convolution integral does not require storing and repeatedly processing the strain history, so that the computational cost is dramatically reduced. To validate the viscoelastic DM2 code, stress wave propagation in a Maxwell rod with one end subjected to a constant stress loading is simulated. The results closely match those from the method-of-characteristics calculation. The code is then used to investigate the problem of meso-scale damage in a plastic-bonded explosive under shock loading. Results not only show "compression damage", but also reveal a complex damage evolution. They demonstrate a unique capability of DEM in modeling heterogeneous materials.
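
    The key computational point, that exponential relaxation moduli turn the hereditary integral into a one-step recurrence, can be sketched as follows. The moduli, relaxation times and strain input are toy values, and this is not the DM2 code.

```python
# With G(t) = G_inf + sum_i G_i exp(-t/tau_i), each Maxwell branch carries one
# internal stress h_i updated by a recurrence, so no strain history is stored.
import numpy as np

G_inf = 1.0
G_i, tau_i = np.array([0.5, 0.2]), np.array([0.1, 1.0])
dt, n_steps = 1e-3, 1000
h = np.zeros_like(G_i)                      # internal stress of each Maxwell branch
eps_old = 0.0

for n in range(n_steps):
    eps_new = 0.01 * (n + 1) * dt           # prescribed strain ramp (example input)
    d_eps = eps_new - eps_old
    h = np.exp(-dt / tau_i) * h + G_i * np.exp(-dt / (2 * tau_i)) * d_eps  # recurrence
    sigma = G_inf * eps_new + h.sum()       # total stress at this step
    eps_old = eps_new
```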

  9. Large-eddy simulation using the finite element method

    SciTech Connect

    McCallen, R.C.; Gresho, P.M.; Leone, J.M. Jr.; Kollmann, W.

    1993-10-01

    In a large-eddy simulation (LES) of turbulent flows, the large-scale motion is calculated explicitly, while the effect of the unresolved small scales is modeled (i.e., approximated with semi-empirical relations). Typically, finite difference or spectral numerical schemes are used to generate an LES; the use of finite element methods (FEM) has been far less prominent. In this study, we demonstrate that FEM in combination with LES provides a viable tool for the study of turbulent, separating channel flows, specifically the flow over a two-dimensional backward-facing step. The combination of these methodologies brings together the advantages of each: LES provides a high degree of accuracy with a minimum of empiricism for turbulence modeling, and FEM provides a robust way to simulate flow in very complex domains of practical interest. Such a combination should prove very valuable to the engineering community.
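
    The semi-empirical subgrid closure that LES relies on is typified by the Smagorinsky model, nu_t = (Cs Δ)^2 |S|. A sketch for a resolved 2D field; this is one common choice, not necessarily the closure used in this study, and the constant and grid are illustrative.

```python
# Smagorinsky eddy viscosity from a resolved 2D velocity field (u, v) on a
# uniform grid with spacing dx; |S| is the strain-rate magnitude.
import numpy as np

def smagorinsky_nu_t(u, v, dx, Cs=0.17):
    dudx, dudy = np.gradient(u, dx, dx)
    dvdx, dvdy = np.gradient(v, dx, dx)
    S11, S22, S12 = dudx, dvdy, 0.5 * (dudy + dvdx)      # resolved strain-rate tensor
    S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))
    return (Cs * dx) ** 2 * S_mag

nu_t = smagorinsky_nu_t(np.random.rand(32, 32), np.random.rand(32, 32), dx=0.01)
```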

  10. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what the most suitable criteria for adaptation are. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters; tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task, fundamentally limited by the turnaround times.
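
    A toy version of a local refinement indicator illustrates the adaptive idea; the goal-oriented (adjoint-based) estimator described above would replace this simple gradient heuristic, and all values here are invented.

```python
# Flag 1D cells whose solution jump exceeds a threshold, so refinement
# concentrates near sharp features such as an internal layer.
import numpy as np

x = np.linspace(0.0, 1.0, 65)
u = np.tanh((x - 0.5) / 0.02)                   # solution with a sharp internal layer
indicator = np.abs(np.diff(u))                  # jump across each cell
refine = indicator > 0.1 * indicator.max()      # cells marked for refinement
print(x[:-1][refine])                           # clusters around x = 0.5
```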

  11. Traffic and Driving Simulator Based on Architecture of Interactive Motion

    PubMed Central

    Paz, Alexander; Veeramisti, Naveen; Khaddar, Romesh; de la Fuente-Mella, Hanns; Modorcea, Luiza

    2015-01-01

    This study proposes an architecture for an interactive motion-based traffic simulation environment. In order to enhance modeling realism involving actual human beings, the proposed architecture integrates multiple types of simulation, including: (i) motion-based driving simulation, (ii) pedestrian simulation, (iii) motorcycling and bicycling simulation, and (iv) traffic flow simulation. The architecture has been designed to enable the simulation of the entire network; as a result, the actual driver, pedestrian, and bike rider can navigate anywhere in the system. In addition, the background traffic interacts with the actual human beings. This is accomplished by using a hybrid meso-microscopic traffic flow simulation modeling approach. The mesoscopic traffic flow simulation model loads the results of a user equilibrium traffic assignment solution and propagates the corresponding traffic through the entire system. The microscopic traffic flow simulation model provides background traffic around the vicinities where actual human beings are navigating the system. The two traffic flow simulation models interact continuously to update system conditions based on the interactions between actual humans and the fully simulated entities. Implementation efforts are currently in progress and some preliminary tests of individual components have been conducted. The implementation of the proposed architecture faces significant challenges ranging from multiplatform and multilanguage integration to multievent communication and coordination. PMID:26491711

  14. Simulated evaluation of an intraoperative surface modeling method for catheter ablation by a real phantom simulation experiment

    NASA Astrophysics Data System (ADS)

    Sun, Deyu; Rettmann, Maryam E.; Packer, Douglas; Robb, Richard A.; Holmes, David R.

    2015-03-01

    In this work, we propose a phantom experiment to quantitatively evaluate an intraoperative left-atrial model-update method. In prior work, we proposed an update procedure that refines the preoperative surface model with information from real-time tracked 2D ultrasound; those studies, however, did not evaluate the reconstruction using an anthropomorphic phantom. In the present approach, a silicone heart phantom, based on a high-resolution human atrial surface model reconstructed from CT images, was made to simulate the atria. A surface model of the left atrium of the phantom was deformed by a morphological operation, simulating the shape difference caused by organ deformation between preoperative scanning and intraoperative guidance. During the simulated procedure, a tracked ultrasound catheter was inserted into the right atrial phantom, scanning the left atrial phantom in a manner mimicking a cardiac ablation procedure. By merging the preoperative model and the intraoperative ultrasound images, an intraoperative left atrial model was reconstructed. According to the results, the reconstruction error of the modeling method is smaller than the initial geometric difference caused by organ deformation, and the reconstruction error decreases as the area of the left atrial phantom scanned by ultrasound increases. The study validated the efficacy of the modeling method.

  15. A hybrid Boundary Element Unstructured Transmission-line (BEUT) method for accurate 2D electromagnetic simulation

    NASA Astrophysics Data System (ADS)

    Simmons, Daniel; Cools, Kristof; Sewell, Phillip

    2016-11-01

    Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former, surface-based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter, volume-based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes, which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method, and it incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method are described in this paper, along with the relevant implementation details. The method is corroborated by studying its correctness and efficiency relative to the traditional UTLM method when applied to complex problems such as transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.

  16. Fluid, solid and fluid-structure interaction simulations on patient-based abdominal aortic aneurysm models.

    PubMed

    Kelly, Sinead; O'Rourke, Malachy

    2012-04-01

    This article describes the use of fluid, solid and fluid-structure interaction simulations on three patient-based abdominal aortic aneurysm geometries. All simulations were carried out using OpenFOAM, which uses the finite volume method to solve both the fluid and solid equations. Initially a fluid-only simulation was carried out on a single patient-based geometry and results from this simulation were compared with experimental results. There was good qualitative and quantitative agreement between the experimental and numerical results, suggesting that OpenFOAM is capable of predicting the main features of unsteady flow through a complex patient-based abdominal aortic aneurysm geometry. The intraluminal thrombus and arterial wall were then included, and solid stress and fluid-structure interaction simulations were performed on this and two other patient-based abdominal aortic aneurysm geometries. It was found that the solid stress simulations under-estimated the maximum stress by up to 5.9% when compared with the fluid-structure interaction simulations. In the fluid-structure interaction simulations, flow-induced pressure within the aneurysm was found to be up to 4.8% higher than the peak systolic pressure imposed in the solid stress simulations, which is likely to be the cause of the variation in the stress results. In comparing the results from the initial fluid-only simulation with results from the fluid-structure interaction simulation on the same patient, it was found that wall shear stress values varied by up to 35% between the two simulation methods. It was concluded that solid stress simulations are adequate to predict the maximum stress in an aneurysm wall, while fluid-structure interaction simulations should be performed if accurate prediction of the fluid wall shear stress is necessary. Therefore, the decision to perform fluid-structure interaction simulations should be based on the particular variables of interest in a given study.

  17. A hybrid method for flood simulation in small catchments combining hydrodynamic and hydrological techniques

    NASA Astrophysics Data System (ADS)

    Bellos, Vasilis; Tsakiris, George

    2016-09-01

    The study presents a new hybrid method for the simulation of flood events in small catchments, combining a physically-based two-dimensional hydrodynamic model with the hydrological unit hydrograph theory. Unit hydrographs are derived using the FLOW-R2D model, which is based on the full form of the two-dimensional Shallow Water Equations solved by a modified MacCormack numerical scheme. The method is tested on a small catchment in a suburb of Athens, Greece, for a storm event which occurred in February 2013. The catchment is divided into three friction zones and unit hydrographs of 15 and 30 min are produced. The infiltration process is simulated by the empirical Kostiakov equation and the Green-Ampt model. The results of the proposed hybrid method are compared with data recorded at the hydrometric station at the outlet of the catchment and with results from the fully hydrodynamic FLOW-R2D model. It is concluded that, for the case studied, the proposed hybrid method produces results close to those of the fully hydrodynamic simulation at a substantially shorter computational time. This finding, if further verified in a variety of case studies, can be useful in devising effective hybrid tools for two-dimensional flood simulation that yield accurate results considerably faster than fully hydrodynamic simulations.
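
    The hydrological stage of such a hybrid method amounts to convolving the effective (post-infiltration) rainfall with the derived unit hydrograph. A sketch with placeholder ordinates standing in for a FLOW-R2D-derived UH; all numbers are illustrative.

```python
# Outlet hydrograph as the discrete convolution of rainfall excess with a UH.
import numpy as np

uh = np.array([0.05, 0.20, 0.35, 0.25, 0.10, 0.05])   # 15-min unit hydrograph [m^3/s per mm]
rain = np.array([2.0, 6.0, 4.0, 1.0])                 # rainfall depth per step [mm]
infiltration = np.array([1.0, 1.5, 1.2, 0.8])         # e.g. from Kostiakov / Green-Ampt [mm]

excess = np.clip(rain - infiltration, 0.0, None)      # effective rainfall [mm]
q = np.convolve(excess, uh)                           # outlet hydrograph [m^3/s]
```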

  18. Comparison of different simulation methods for multiplane computer generated holograms

    NASA Astrophysics Data System (ADS)

    Kämpfe, Thomas; Hudelist, Florian; Waddie, Andrew J.; Taghizadeh, Mohammad R.; Kley, Ernst-Bernhard; Tünnermann, Andreas

    2008-04-01

    Computer generated holograms (CGHs) are used to transform an incoming light distribution into a desired output. Recently, multi-plane CGHs have become of interest since they allow the combination of some well known design methods for thin CGHs with the unique properties of thick holograms. Iterative methods like the iterative Fourier transform algorithm (IFTA) require an operator that transforms a required optical function into an actual physical structure (e.g. a height structure). Commonly the thin element approximation (TEA) is used for this purpose. Together with the angular spectrum of plane waves (ASPW), it has also been successfully applied to multi-plane CGHs. Of course, due to the approximations inherent in TEA, it can only be applied above a certain feature size. In this contribution we give a first comparison of the TEA & ASPW approach with simulation results from the Fourier modal method (FMM), for the example of one-dimensional, pattern-generating, multi-plane CGHs.
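
    The ASPW propagation step between CGH planes is compact enough to sketch directly: FFT the field, multiply by the free-space transfer function, inverse FFT. A 1D scalar example with illustrative parameters:

```python
# Angular-spectrum propagation of a 1D complex field over a gap dz.
import numpy as np

n, dx, wl, dz = 1024, 0.5e-6, 633e-9, 50e-6     # samples, pitch, wavelength, gap
fx = np.fft.fftfreq(n, d=dx)                    # spatial frequencies
kz = 2 * np.pi * np.sqrt(np.maximum(1.0 / wl**2 - fx**2, 0.0))  # evanescent waves clipped

field = np.ones(n, dtype=complex)               # field leaving plane 1 (after TEA phase)
field = np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))   # field arriving at plane 2
```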

  19. Temporal coarse-graining method to simulate the movement of atoms

    SciTech Connect

    Ichinomiya, Takashi

    2013-10-15

    We propose a novel method to simulate the movement of atoms at finite temperature. The main idea is to derive “renormalized”, i.e., coarse-grained in time, dynamics from the Euler–Maruyama scheme, the standard method for numerically solving stochastic differential equations. Based on this renormalization, we propose a new algorithm for solving overdamped Langevin equations. We test the renormalization scheme on two models and demonstrate that the results obtained by this method are consistent with those obtained by the standard method, while our algorithm performs better than the standard scheme, especially at low temperatures and with multiple processors.
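
    For reference, the Euler-Maruyama baseline that the method coarse-grains, applied to an overdamped Langevin equation in units where the friction coefficient is 1; the potential and parameters are illustrative.

```python
# Euler-Maruyama for dx = -grad U(x) dt + sqrt(2 kT) dW.
import numpy as np

def euler_maruyama(grad_U, x0, kT, dt, n_steps, rng=np.random.default_rng(0)):
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        x += -grad_U(x) * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal(x.shape)
        traj.append(x.copy())
    return np.array(traj)

traj = euler_maruyama(lambda x: 4 * x * (x**2 - 1), x0=[1.0], kT=0.2, dt=1e-3,
                      n_steps=100_000)          # double-well potential U = (x^2 - 1)^2
```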

  20. A multi-stage method for connecting participatory sensing and noise simulations.

    PubMed

    Hu, Mingyuan; Che, Weitao; Zhang, Qiuju; Luo, Qingli; Lin, Hui

    2015-01-22

    Most simulation-based noise maps are important for official noise assessment but lack local noise characteristics. The main reasons are that official noise simulations provide only expected noise levels, are limited by the use of large-scale monitoring of noise sources, and are updated infrequently. With the emergence of smart cities and ubiquitous sensing, sensing technologies offer a way to resolve this problem. This study proposes an integrated methodology to propel participatory sensing from its current random and distributed sampling origins to professional noise simulation. The aims of this study were to effectively organize the participatory noise data, to dynamically refine the granularity of the noise features on road segments (e.g., different portions of a road segment), and then to provide a reasonable spatio-temporal data foundation to support noise simulations, which can help researchers understand how participatory sensing can play a role in smart cities. The study first discusses the potential limitations of current participatory sensing and of simulation-based official noise maps. Next, we explain how participatory noise data can contribute to a simulation-based noise map by providing: (1) spatial matching of the participatory noise data to virtual partitions at a more microscopic level of road networks; (2) multi-temporal-scale noise estimations at the spatial level of virtual partitions; and (3) dynamic aggregation of virtual partitions, by comparing the noise values at the relevant temporal scale, to form a dynamic segmentation of each road segment supporting multiple spatio-temporal noise simulations. In the case study, we demonstrate how this method could play a significant role in a simulation-based noise map. Together, these results demonstrate the potential benefits of participatory noise data as dynamic input sources for noise simulations.

  1. Comparison of Different Methods of Grading a Level Turn Task on a Flight Simulator

    NASA Technical Reports Server (NTRS)

    Heath, Bruce E.; Crier, Tomyka

    2003-01-01

    With the advancements in the computing power of personal computers, PC-based flight simulators and trainers have opened new avenues in the training of airplane pilots. It may be desirable to have the flight simulator make a quantitative evaluation of the progress of a pilot's training, thereby reducing the physical requirement on the flight instructor, who must otherwise watch every flight. In an experiment, university students conducted six different flights, each three minutes in duration and consisting of two level turns. By evaluating videotapes, two certified flight instructors provided separate letter grades for each turn. These level turns were also evaluated using two computer-based grading methods: one determined automated grades based on prescribed tolerances in bank angle, airspeed and altitude, while the other used deviations in altitude and bank angle to compute a performance index and performance grades.
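
    A sketch of the tolerance-based automated grading idea: score each turn by the fraction of samples that stay within prescribed bank-angle, airspeed and altitude bands, then map that fraction to a letter grade. The tolerances, references and grade boundaries below are invented for illustration, not those of the study.

```python
# Tolerance-based autograding of a level turn from sampled flight data.
import numpy as np

def grade_turn(bank, airspeed, altitude, bank_ref=30.0, ias_ref=95.0, alt_ref=3000.0,
               tol=(5.0, 10.0, 100.0)):
    ok = ((np.abs(bank - bank_ref) <= tol[0]) &
          (np.abs(airspeed - ias_ref) <= tol[1]) &
          (np.abs(altitude - alt_ref) <= tol[2]))
    frac = ok.mean()                                  # fraction of time within tolerance
    return "ABCDF"[min(4, int((1.0 - frac) * 5))]     # map to a letter grade

t = np.linspace(0, 60, 600)
print(grade_turn(30 + 3 * np.sin(t), 95 + 5 * np.sin(0.5 * t), 3000 + 40 * np.cos(t)))
```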

  2. High-order finite element methods for cardiac monodomain simulations.

    PubMed

    Vincent, Kevin P; Gonzales, Matthew J; Gillette, Andrew K; Villongco, Christopher T; Pezzuto, Simone; Omens, Jeffrey H; Holst, Michael J; McCulloch, Andrew D

    2015-01-01

    Computational modeling of tissue-scale cardiac electrophysiology requires numerically converged solutions to avoid spurious artifacts. The steep gradients inherent to cardiac action potential propagation necessitate fine spatial scales and therefore a substantial computational burden. The use of high-order interpolation methods has previously been proposed for these simulations due to their theoretical convergence advantage. In this study, we compare the convergence behavior of linear Lagrange, cubic Hermite, and the newly proposed cubic Hermite-style serendipity interpolation methods for finite element simulations of the cardiac monodomain equation. The high-order methods reach converged solutions with fewer degrees of freedom and longer element edge lengths than traditional linear elements. Additionally, we propose a dimensionless number, the cell Thiele modulus, as a more useful metric for determining solution convergence than element size alone. Finally, we use the cell Thiele modulus to examine convergence criteria for obtaining clinically useful activation patterns for applications such as patient-specific modeling where the total activation time is known a priori. PMID:26300783

  4. Benchmark Study of 3D Pore-scale Flow and Solute Transport Simulation Methods

    NASA Astrophysics Data System (ADS)

    Scheibe, T. D.; Yang, X.; Mehmani, Y.; Perkins, W. A.; Pasquali, A.; Schoenherr, M.; Kim, K.; Perego, M.; Parks, M. L.; Trask, N.; Balhoff, M.; Richmond, M. C.; Geier, M.; Krafczyk, M.; Luo, L. S.; Tartakovsky, A. M.

    2015-12-01

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that benchmark study to include additional models of the first type based on the immersed-boundary method (IMB), lattice Boltzmann method (LBM), and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries in the manner of PNMs has not been fully determined. We apply all five approaches (FVM-based CFD, IMB, LBM, SPH and PNM) to simulate pore-scale velocity distributions and nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The benchmark study was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support for confidence in a variety of pore-scale modeling methods, and motivates further development and application of pore-scale simulation methods.

  5. Rapid simulation of spatial epidemics: a spectral method.

    PubMed

    Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J

    2015-04-01

    Spatial structure, and hence the spatial position of host populations, plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast-Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane; and the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. PMID:25659478
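
    The FSR core is a kernel convolution evaluated with FFTs: with hosts binned onto a grid, the force of infection is the convolution of the isotropic transmission kernel with the infection 'image'. A sketch with an illustrative kernel and grid (note the FFT implies periodic wrap-around, acceptable for a toy example):

```python
# Force of infection on a grid as kernel * infection image, in O(N log N) via FFT.
import numpy as np

n = 256
infected = np.zeros((n, n))
infected[100, 120] = 5; infected[40, 200] = 2           # infectious hosts per cell
x = np.minimum(np.arange(n), n - np.arange(n))          # periodic distances from origin
r = np.hypot(*np.meshgrid(x, x, indexing="ij"))
kernel = np.exp(-r / 8.0)                               # isotropic transmission kernel

foi = np.real(np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(infected)))
# foi[i, j] is the rate at which susceptibles in cell (i, j) acquire infection.
```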

  7. Simulation-based transthoracic echocardiography: “An anesthesiologist's perspective”

    PubMed Central

    Magoon, Rohan; Sharma, Amita; Ladha, Suruchi; Kapoor, Poonam Malhotra; Hasija, Suruchi

    2016-01-01

    With the growing requirement for echocardiography in perioperative management, anesthesiologists need to be well trained in transthoracic echocardiography (TTE), yet the lack of a formal, structured teaching program precludes this. The present article reviews the expanding domain of TTE, simulation-based TTE training and its advancements, current limitations, and the importance of simulation-based training for anesthesiologists. PMID:27397457

  8. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic and complex behaviours. Studying such phenomena in the laboratory is costly and in most cases impossible; miniaturizing world phenomena within a model in order to simulate them is therefore a reasonable, scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a relatively new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of users' growing interest in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge of ABMS, however, is the difficulty of validation and verification: because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Attempts to find appropriate validation techniques for ABM therefore seem necessary. In this paper, after reviewing the principles, concepts and applications of ABM, validation techniques and the challenges of ABM validation are discussed.

  9. Structure identification methods for atomistic simulations of crystalline materials

    DOE PAGESBeta

    Stukowski, Alexander

    2012-05-28

    Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis and Voronoi analysis. In addition we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.

  10. Methodical aspects of text testing in a driving simulator.

    PubMed

    Sundin, A; Patten, C J D; Bergmark, M; Hedberg, A; Iraeus, I-M; Pettersson, I

    2012-01-01

    A test with 30 test persons was conducted in a driving simulator. The test was a concept exploration and comparison of existing user-interaction technologies for text message handling, with a focus on traffic safety and experience (technology familiarity and learning effects). Particular attention was paid to methodological aspects: how to measure and how to analyze the data. The results show difficulties with the eye-tracking system (calibration, etc.) itself, as well as with the subsequent raw-data preparation. The physical setup in the car was found to be important for completing the test. PMID:22317503

  11. Simulating Electric Double Layer Capacitance by Using Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Sun, Ning; Gersappe, Dilip

    2015-03-01

    By using the Lattice Boltzmann Method (LBM) we studied diffuse-charge dynamics in electrochemical systems. We use the LBM to solve the Poisson-Nernst-Planck equations (PNP) and Modified Poisson-Nernst-Planck equations (MPNP). The isotropic permittivity of the electrolyte is modeled using the Booth model. The results show that both the steric effect (MPNP) and the isotropic permittivity (Booth model) can have a large influence on diffuse-charge dynamics, especially when the electrolyte concentration or the applied potential is high. This model can be applied to simulate the electric double layer capacitance of supercapacitors with complex geometry, and can also incorporate other effects, such as heat convection, in a modular manner.
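
    A minimal sketch, assuming lattice units (dx = dt = 1), of the lattice Boltzmann machinery involved, reduced to pure diffusion of a single concentration field on a D1Q3 lattice; the paper's solver additionally couples the (M)PNP electromigration terms and the Booth permittivity model, which are omitted here.

      import numpy as np

      # D1Q3 lattice Boltzmann solver for pure diffusion of a scalar field.
      w = np.array([2/3, 1/6, 1/6])      # D1Q3 weights, c_s^2 = 1/3
      e = np.array([0, 1, -1])           # lattice velocities
      tau = 0.8                          # BGK relaxation time
      D = (tau - 0.5) / 3.0              # resulting diffusivity (lattice units)

      nx, steps = 200, 1000
      C = np.zeros(nx); C[nx // 2] = 1.0          # initial concentration spike
      f = w[:, None] * C[None, :]                 # start at equilibrium

      for _ in range(steps):
          feq = w[:, None] * C[None, :]           # diffusion equilibrium
          f += -(f - feq) / tau                   # BGK collision
          for i in range(3):                      # periodic streaming
              f[i] = np.roll(f[i], e[i])
          C = f.sum(axis=0)                       # zeroth moment = concentration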

  12. Evaluation of a novel method of noise reduction using computer-simulated mammograms.

    PubMed

    Tischenko, Oleg; Hoeschen, Christoph; Dance, David R; Hunt, Roger A; Maidment, Andrew D A; Bakic, Predrag R

    2005-01-01

    A novel method of noise reduction has been tested for mammography using computer-simulated images for which the truth is known exactly. The method is based on comparing two images: the images are compared at different scales, using a cross-correlation function as a measure of similarity to define the image modifications in the wavelet domain. The computer-simulated images were calculated for noise-free primary radiation using a quasi-realistic voxel phantom, and two images corresponding to slightly different geometries were produced. Gaussian noise with appropriate properties was added to simulate quantum noise. The added noise could be reduced by >70% using the proposed method without any noticeable corruption of the structures. It is thus possible to save 50% of the dose in mammography by producing two images (each at 25% of the dose of a standard mammogram). Additionally, a reduction of the anatomical noise and, therefore, better detection rates of breast cancer in mammography are possible.

  13. Handbook of Scaling Methods in Aquatic Ecology: Measurement, Analysis, Simulation

    NASA Astrophysics Data System (ADS)

    Marrasé, Celia

    2004-03-01

    Researchers in aquatic sciences have long been interested in describing temporal and biological heterogeneities at different observation scales. During the 1970s, scaling studies received a boost from the application of spectral analysis to ecological sciences. Since then, new insights have evolved in parallel with advances in observation technologies and computing power. In particular, during the last two decades, novel theoretical achievements were facilitated by the use of microstructure profilers, the application of mathematical tools derived from fractal and wavelet analyses, and the increase in computing power that allowed more complex simulations. The idea of publishing the Handbook of Scaling Methods in Aquatic Ecology arose out of a special session of the 2001 Aquatic Science Meeting of the American Society of Limnology and Oceanography. The publication of the book is timely, because it compiles a good amount of the work done in these last two decades. The book comprises three sections: measurements, analysis, and simulation. Each contains some review chapters and a number of more specialized contributions. The contents are multidisciplinary and focus on biological and physical processes and their interactions over a broad range of scales, from micro-layers to ocean basins. The handbook topics include high-resolution observation methodologies, as well as applications of different mathematical tools for analysis and simulation of spatial structures, time variability of physical and biological processes, and individual organism behavior. The scientific background of the authors is highly diverse, ensuring broad interest for the scientific community.

  14. Simulation based analysis of laser beam brazing

    NASA Astrophysics Data System (ADS)

    Dobler, Michael; Wiethop, Philipp; Schmid, Daniel; Schmidt, Michael

    2016-03-01

    Laser beam brazing is a well-established joining technology in car body manufacturing, with main applications in the joining of divided tailgates and the joining of roof and side panels. A key advantage of laser brazed joints is the seam's visual quality, which satisfies the highest requirements. However, the laser beam brazing process is very complex and its process dynamics are only partially understood. In order to gain deeper knowledge of the laser beam brazing process, to determine optimal process parameters and to test process variants, a transient three-dimensional simulation model of laser beam brazing is developed. This model takes into account energy input, heat transfer as well as fluid and wetting dynamics that lead to the formation of the brazing seam. A validation of the simulation model is performed by metallographic analysis and thermocouple measurements for different parameter sets of the brazing process. These results show that the multi-physical simulation model not only can be used to gain insight into the laser brazing process but also offers the possibility of process optimization in industrial applications. The model's capabilities in determining optimal process parameters are shown using the example of the laser power. Small deviations in the energy input can affect the brazing results significantly. Therefore, the simulation model is used to analyze the effect of the lateral laser beam position on the energy input and the resulting brazing seam.

  15. PWORLD: A Precedent-Based Global Simulation.

    ERIC Educational Resources Information Center

    Schrodt, Philip A.

    A "world model" is constructed where precedent-searching is one of the primary driving mechanisms. The simulation assumes that nations in the system are utility maximizers but that they have relatively primitive decision mechanisms and that they are strongly influenced by their previous short-term successful behavior and the short-term success of…

  16. Issues of Simulation-Based Route Assignment

    SciTech Connect

    Nagel, K.; Rickert, M.

    1999-07-20

    The authors use an iterative re-planning scheme with simulation feedback to generate a self-consistent route-set for a given street network and origin-destination matrix. The iteration process is controlled by three parameters, which were found to influence the speed of the relaxation, but not necessarily its final state.

  17. A pseudo non-linear method for fast simulations of ultrasonic reverberation

    NASA Astrophysics Data System (ADS)

    Byram, Brett; Shu, Jasmine

    2016-04-01

    There is growing evidence that reverberation is a primary mechanism of clinical image degradation. This has led to a number of new approaches to suppress reverberation, including our recently proposed model-based algorithm. The algorithm can work well, but it must be trained to reject clutter, while preserving the signal of interest. One way to do this is to use simulated data, but current simulation methods that include multipath scattering are slow and do not readily allow separation of clutter and signal. Here, we propose a more convenient pseudo non-linear simulation method that utilizes existing linear simulation tools like Field II. The approach functions by linearly simulating scattered wavefronts at shallow depths, and then time-shifting these wavefronts to deeper depths. The simulation only requires specification of the first and last scatterers encountered by a multiply reflected wave and a third point that establishes the arrival time of the reverberation. To maintain appropriate 2D correlation, this set of three points is fixed for the entire simulation and is shifted as with a normal linear simulation scattering field. We show example images, and we compute first order speckle statistics as a function of scatterer density. We perform ex vivo measures of reverberation where we find that the average speckle SNR is 1.73, which we can simulate with 2 reverberation scatterers per resolution cell. We also compare ex vivo lateral speckle statistics to those from linear and pseudo non-linear simulation data. Finally, the van Cittert-Zernike curve was shown to match empirical and theoretical observations.

  18. Comparative Study on Two Melting Simulation Methods: Melting Curve of Gold

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-Li; Sun, Jun-Sheng; Li, Rui; Zhang, Xiu-Lu; Cai, Ling-Cang

    2016-05-01

    Melting simulation methods are of crucial importance to determining the melting temperature of materials efficiently. A high-efficiency melting simulation method saves much simulation time and computational resources. To compare the efficiency of our newly developed shock melting (SM) method with that of the well-established two-phase (TP) method, we calculate the high-pressure melting curve of Au using the two methods based on optimally selected interatomic potentials. Although we use only 640 atoms to determine the melting temperature of Au in the SM method, the resulting melting curve accords very well with the results from the TP method using many more atoms. This shows that a much smaller system size in the SM method can still achieve a fully converged melting curve compared with the TP method, implying the robustness and efficiency of the SM method. Supported by the National Natural Science Foundation of China under Grant No. 41574076 and the NSAF of China under Grant No. U1230201/A06, and the Young Core Teacher Scheme of Henan Province under Grant No. 2014GGJS-108

  19. A coupled finite-element, boundary-integral method for simulating ultrasonic flowmeters.

    PubMed

    Bezdĕk, Michal; Landes, Hermann; Rieder, Alfred; Lerch, Reinhard

    2007-03-01

    Today's most popular technology of ultrasonic flow measurement is based on the transit-time principle. In this paper, a numerical simulation technique applicable to the analysis of transit-time flowmeters is presented. A flowmeter represents a large simulation problem that also requires computation of acoustic fields in moving media. For this purpose, a novel boundary integral method, the Helmholtz integral-ray tracing method (HIRM), is derived and validated. HIRM is applicable to acoustic radiation problems in arbitrary mean flows at low Mach numbers and significantly reduces the memory demands in comparison with the finite-element method (FEM). It relies on an approximate free-space Green's function which makes use of the ray tracing technique. For simulation of practical acoustic devices, a hybrid simulation scheme consisting of FEM and HIRM is proposed. The coupling of FEM and HIRM is facilitated by means of absorbing boundaries in combination with a new, reflection-free, acoustic-source formulation. Using the coupled FEM-HIRM scheme, a full three-dimensional (3-D) simulation of a complete transit-time flowmeter is performed for the first time. The obtained simulation results are in good agreement with measurements both at zero flow and under flow conditions. PMID:17375833

  20. Correlation theory-based signal processing method for CMF signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-lin; Tu, Ya-qing

    2016-06-01

    Signal processing precision of Coriolis mass flowmeter (CMF) signals directly affects the measurement accuracy of Coriolis mass flowmeters. To improve the measurement accuracy of CMFs, a correlation theory-based signal processing method for CMF signals is proposed, comprising a correlation theory-based frequency estimation method and a phase difference estimation method. Theoretical analysis shows that the proposed method eliminates the effect of non-integral-period sampling on frequency and phase difference estimation. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of frequency and phase difference estimation. For frequency estimation it outperforms the adaptive notch filter, discrete Fourier transform and autocorrelation methods, and for phase difference estimation it outperforms the data extension-based correlation, Hilbert transform, quadrature delay estimator and discrete Fourier transform methods, which contributes to improving the measurement accuracy of Coriolis mass flowmeters.
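
    As a generic illustration of correlation-based phase-difference estimation (not the authors' exact estimator), the sketch below locates the peak of the cross-correlation of two equal-frequency sensor signals, refines it by parabolic interpolation, and converts the sub-sample lag to a phase difference. Signal and parameter values are invented for the example.

      import numpy as np

      def phase_difference(x1, x2, fs, f0):
          """Phase of x2 relative to x1 (positive: x2 leads), from the
          cross-correlation peak with parabolic sub-sample refinement.
          The result is modulo one period of f0."""
          x1 = x1 - x1.mean(); x2 = x2 - x2.mean()
          r = np.correlate(x2, x1, mode="full")
          k = np.argmax(r)
          y0, y1, y2 = r[k - 1], r[k], r[k + 1]       # parabolic interpolation
          delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
          lag = (k + delta) - (len(x1) - 1)           # lag in samples
          return -2 * np.pi * f0 * lag / fs

      fs, f0 = 5000.0, 147.3
      t = np.arange(4096) / fs
      a = np.sin(2 * np.pi * f0 * t) + 0.05 * np.random.randn(t.size)
      b = np.sin(2 * np.pi * f0 * t + 0.31) + 0.05 * np.random.randn(t.size)
      print(phase_difference(a, b, fs, f0))   # ~0.31 rad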

  1. Agent-based modeling to simulate the dengue spread

    NASA Astrophysics Data System (ADS)

    Deng, Chengbin; Tao, Haiyan; Ye, Zhiwei

    2008-10-01

    In this paper, we introduce agent-based modelling (ABM) as a novel method for simulating the unique process of dengue spread. Dengue is an acute infectious disease with a history of over 200 years. Unlike diseases that can be transmitted directly from person to person, dengue spreads only through a mosquito vector. There is still no specific effective medicine or vaccine for dengue, so the best way to prevent its spread is to take precautions beforehand. It is therefore crucial to detect and study the dynamic process of dengue spread, which closely relates to human-environment interactions, a setting where Agent-Based Modeling (ABM) works effectively. The model attempts to simulate dengue spread in a more realistic, bottom-up way, and to overcome a limitation of ABM, namely overlooking the influence of geographic and environmental factors. By considering the influence of the environment, Aedes aegypti ecology and other epidemiological characteristics of dengue spread, ABM can be regarded as a useful way to simulate the whole process and so disclose the essence of the evolution of dengue spread.
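
    A schematic host-vector agent loop of the kind such a model builds on, with humans in SIR states and mosquitoes in SI states. All parameter values are illustrative, not calibrated to dengue data, and the paper's geographic and environmental layers are omitted.

      import random

      # Humans are infected only via mosquito bites; mosquitoes only by
      # biting infectious humans. Parameters below are placeholders.
      N_H, N_M, DAYS = 1000, 5000, 120
      P_H2M, P_M2H, BITES = 0.30, 0.30, 0.5   # transmission probs, bite prob/day
      RECOVER = 1 / 7.0                        # human recovery prob per day

      humans = ["S"] * N_H; humans[0] = "I"
      mosqs = ["S"] * N_M

      for day in range(DAYS):
          for m in range(N_M):
              if random.random() < BITES:
                  h = random.randrange(N_H)
                  if mosqs[m] == "I" and humans[h] == "S" and random.random() < P_M2H:
                      humans[h] = "I"
                  elif mosqs[m] == "S" and humans[h] == "I" and random.random() < P_H2M:
                      mosqs[m] = "I"
          humans = ["R" if s == "I" and random.random() < RECOVER else s
                    for s in humans]
      print(humans.count("I"), "infectious,", humans.count("R"), "recovered")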

  2. A Computer-Based Simulation of an Acid-Base Titration

    ERIC Educational Resources Information Center

    Boblick, John M.

    1971-01-01

    Reviews the advantages of computer simulated environments for experiments, referring in particular to acid-base titrations. Includes pre-lab instructions and a sample computer printout of a student's use of an acid-base simulation. Ten references. (PR)

  3. Time-domain simulation of a guitar: model and method.

    PubMed

    Derveaux, Grégoire; Chaigne, Antoine; Joly, Patrick; Bécache, Eliane

    2003-12-01

    This paper presents a three-dimensional time-domain numerical model of the vibration and acoustic radiation from a guitar. The model involves the transverse displacement of the string excited by a force pulse, the flexural motion of the soundboard, and the sound radiation. A specific spectral method is used for solving the Kirchhoff-Love's dynamic top plate model for a damped, heterogeneous orthotropic material. The air-plate interaction is solved with a fictitious domain method, and a conservative scheme is used for the time discretization. Frequency analysis is performed on the simulated sound pressure and plate velocity waveforms in order to evaluate quantitatively the transfer of energy through the various components of the coupled system: from the string to the soundboard and from the soundboard to the air. The effects of some structural changes in soundboard thickness and cavity volume on the produced sounds are presented and discussed. Simulations of the same guitar in three different cases are also performed: "in vacuo," in air with a perfectly rigid top plate, and in air with an elastic top plate. This allows comparisons between structural, acoustic, and structural-acoustic modes of the instrument. Finally, attention is paid to the evolution with time of the spatial pressure field. This shows, in particular, the complex evolution of the directivity pattern in the near field of the instrument, especially during the attack.

  4. Simulation of FEL pulse length calculation with THz streaking method.

    PubMed

    Gorgisyan, I; Ischebeck, R; Prat, E; Reiche, S; Rivkin, L; Juranić, P

    2016-05-01

    Having accurate and comprehensive photon diagnostics for the X-ray pulses delivered by free-electron laser (FEL) facilities is of utmost importance. Along with various parameters of the photon beam (such as photon energy, beam intensity, etc.), the pulse length measurements are particularly useful both for the machine operators to measure the beam parameters and monitor the stability of the machine performance, and for the users carrying out pump-probe experiments at such facilities to better understand their measurement results. One of the most promising pulse length measurement techniques used for photon diagnostics is the THz streak camera which is capable of simultaneously measuring the lengths of the photon pulses and their arrival times with respect to the pump laser. This work presents simulations of a THz streak camera performance. The simulation procedure utilizes FEL pulses with two different photon energies in hard and soft X-ray regions, respectively. It recreates the energy spectra of the photoelectrons produced by the photon pulses and streaks them by a single-cycle THz pulse. Following the pulse-retrieval procedure of the THz streak camera, the lengths were calculated from the streaked spectra. To validate the pulse length calculation procedure, the precision and the accuracy of the method were estimated for streaking configuration corresponding to previously performed experiments. The obtained results show that for the discussed setup the method is capable of measuring FEL pulses with about a femtosecond accuracy and precision. PMID:27140142

  6. The Impact of Content Area Focus on the Effectiveness of a Web-Based Simulation

    ERIC Educational Resources Information Center

    Adcock, Amy B.; Duggan, Molly H.; Watson, Ginger S.; Belfore, Lee A.

    2010-01-01

    This paper describes an assessment of a web-based interview simulation designed to teach empathetic helping skills. The system includes an animated character acting as a client and responses designed to recreate a simulated role-play, a common assessment method used for teaching these skills. The purpose of this study was to determine whether…

  7. Motion control simulation based on VR for humanoid robot

    NASA Astrophysics Data System (ADS)

    He, Huaiqing; Tang, Haoxuan

    2004-03-01

    This paper describes VR-based motion control simulation for a humanoid robot, aimed at walking and running. To ensure that the motion rhythm of the humanoid robot conforms to the motion laws of humans, a skeleton-based body geometrical model and kinematics models based on graphs of time sequences are presented first. A control algorithm based on the Jacobian matrix is then proposed to generate periodic walking and running. Finally, computer simulation experiments demonstrate the feasibility of the models and the algorithm. The simulation system developed allows the motion direction and velocity of the humanoid robot to be regulated interactively.
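
    A minimal sketch of Jacobian-based (resolved-rate) motion control for a planar two-link leg tracking a hypothetical foot trajectory; the paper's skeleton model and gait generator are far richer than this, and all lengths, gains and trajectories below are our own assumptions.

      import numpy as np

      def fk(q, l1=0.5, l2=0.5):
          """Planar 2-link forward kinematics (hip -> foot position)."""
          return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                           l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

      def jacobian(q, eps=1e-6):
          """Numerical Jacobian of foot position w.r.t. joint angles."""
          J = np.zeros((2, 2))
          for i in range(2):
              dq = np.zeros(2); dq[i] = eps
              J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
          return J

      # Resolved-rate control: joint velocities from desired foot velocity.
      q = np.array([-1.2, 1.0])
      dt = 0.01
      for step in range(200):
          t = step * dt
          target = np.array([0.2 * np.sin(2 * np.pi * t), -0.8])  # foot path
          err = target - fk(q)
          dq = np.linalg.pinv(jacobian(q)) @ (err / dt)           # J^+ x_dot
          q += dq * dt

    Replacing the pseudo-inverse with a damped least-squares solve is a common way to keep such a controller well behaved near kinematic singularities.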

  8. Modeling and simulation of dynamic problems in solid mechanics using material point method

    NASA Astrophysics Data System (ADS)

    Dobšíček, Miroslav

    The Material Point Method (MPM), a relatively new computational method developed by Prof. Sulsky from the Particle-In-Cell (PIC) method of computational fluid mechanics, was used for simulations of dynamic problems. Various dynamic and material simulations have been carried out, including dynamic crack growth using a cohesive zone model, microstructure evolution of closed-cell polymer foam in compression, and simulation of granular materials. In the process, the MPM algorithm was developed further, either by implementing entirely new simulation capabilities or by refining older versions for increased robustness and versatility. Incorporating a characteristic length scale in MPM through the cohesive zone model allowed investigation of physics-based dynamic crack propagation. The simulations are capable of handling crack growth with crack-tip velocities in both the sub-Rayleigh and intersonic regimes. Crack initiation and propagation are natural outcomes of simulations incorporating the cohesive zone model. Good qualitative agreement was observed between the numerical results presented here and experimental results in terms of the photoelastic stress patterns ahead of the crack tip. MPM will allow prediction of material properties for microstructures, driving the optimization of processing and performance in foam materials through simulation of real microstructures. The simulations are able to capture the various stages of deformation in foam compression. The stress-strain curve simulated with MPM compares reasonably with experimental results. Based on the results from micro-CT and MPM simulations, it was found that elastic buckling of cell walls occurs even in the elastic regime of compression. Within the elastic region, less than 35% of the cell-wall material carries the majority of the compressive load. The particle nature of MPM was found suitable for simulation of granular materials. A contact algorithm has been implemented in MPM to

  9. DOM Based XSS Detecting Method Based on Phantomjs

    NASA Astrophysics Data System (ADS)

    Dong, Ri-Zhan; Ling, Jie; Liu, Yi

    Because the malicious code does not appear in the HTML source code, DOM-based XSS cannot be detected by traditional methods. By analyzing the causes of DOM-based XSS, this paper proposes a detection method for DOM-based XSS based on PhantomJS. The method uses function hijacking to detect dangerous operations, and a prototype system has been implemented. Comparison with existing tools shows that the system improves the detection rate and that the method is effective for detecting DOM-based XSS.

  10. Shrinkage regression-based methods for microarray missing value imputation

    PubMed Central

    2013-01-01

    Background Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than other types of methods on many testing microarray datasets. Results To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation on six testing microarray datasets than the existing regression-based methods do. Conclusions Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods. PMID:24565159
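
    A schematic numpy version of the pipeline described: select the genes most correlated with the target gene, fit least squares on the observed samples, shrink the slopes, and predict the missing entries. The fixed shrinkage factor and function name are placeholders of ours; the paper derives its shrinkage estimate from the data.

      import numpy as np

      def impute_gene(X, g, k=10, shrink=0.9):
          """Impute missing entries of row g of a gene-by-sample matrix X
          (missing values marked as NaN)."""
          target = X[g]
          miss = np.isnan(target); obs = ~miss
          complete = [i for i in range(X.shape[0])
                      if i != g and not np.isnan(X[i]).any()]
          # Rank candidate predictor genes by |Pearson r| on observed samples.
          r = [abs(np.corrcoef(X[i, obs], target[obs])[0, 1]) for i in complete]
          top = [complete[i] for i in np.argsort(r)[-k:]]
          A = np.c_[np.ones(obs.sum()), X[top][:, obs].T]
          beta, *_ = np.linalg.lstsq(A, target[obs], rcond=None)
          beta[1:] *= shrink                      # shrink slopes toward zero
          filled = target.copy()
          filled[miss] = np.c_[np.ones(miss.sum()), X[top][:, miss].T] @ beta
          return filled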

  11. A methodology towards virtualisation-based high performance simulation platform supporting multidisciplinary design of complex products

    NASA Astrophysics Data System (ADS)

    Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin

    2012-08-01

    Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise an HPS platform. This research is driven by the issues concerning large-scale simulation resource deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources, and highly reliable simulation with fault tolerance. A framework of a virtualisation-based simulation platform (VSIM) is first proposed. The article then investigates and discusses key approaches in VSIM, including simulation resource modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction, and (3) achieve fault-tolerant simulation.

  12. Synchrotron-based EUV lithography illuminator simulator

    DOEpatents

    Naulleau, Patrick P.

    2004-07-27

    A lithographic illuminator to illuminate a reticle to be imaged with a range of angles is provided. The illumination can be employed to generate a pattern in the pupil of the imaging system, where spatial coordinates in the pupil plane correspond to illumination angles in the reticle plane. In particular, a coherent synchrotron beamline is used along with a potentially decoherentizing holographic optical element (HOE), as an experimental EUV illuminator simulation station. The pupil fill is completely defined by a single HOE, thus the system can be easily modified to model a variety of illuminator fill patterns. The HOE can be designed to generate any desired angular spectrum and such a device can serve as the basis for an illuminator simulator.

  13. Limits of simulation based high resolution EBSD.

    PubMed

    Alkorta, Jon

    2013-08-01

    High resolution electron backscattered diffraction (HREBSD) is a novel technique for the relative determination of both orientation and stress state in crystals through digital image correlation techniques. Recent works have tried to use simulated EBSD patterns as reference patterns to recover the absolute orientation and stress state of crystals. However, a precise calibration of the pattern centre location is needed to avoid the occurrence of phantom stresses. A careful analysis of the projective transformation involved in the formation of EBSD patterns has made it possible to understand these phantom stresses. This geometrical analysis has been confirmed by numerical simulations. The results indicate that certain combinations of crystal strain states and sample locations (pattern centre locations) lead to virtually identical EBSD patterns. This ambiguity makes the problem of solving the absolute stress state of a crystal unfeasible in a single-detector configuration. PMID:23676453

  14. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  15. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  16. Comment on: ‘A Poisson resampling method for simulating reduced counts in nuclear medicine images’

    NASA Astrophysics Data System (ADS)

    de Nijs, Robin

    2015-07-01

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated, based on a Poisson and a Gaussian distribution respectively. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100; only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images: it correctly simulates the statistical properties, also in the case of rounding off of the images.
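
    The original note and comment work in Matlab; the numpy sketch below reproduces the idea under our own toy settings. Poisson resampling amounts to binomial thinning of the acquired counts, which preserves Poisson statistics exactly, unlike redrawing from a fitted Poisson or Gaussian distribution.

      import numpy as np

      rng = np.random.default_rng(1)
      full = rng.poisson(50.0, size=(256, 256))           # full-count image

      # Poisson resampling: keep each detected count with p = 0.5, which
      # again yields Poisson statistics at half the mean.
      half_resampled = rng.binomial(full, 0.5)

      # Redrawing alternatives compared in the comment:
      half_poisson = rng.poisson(full / 2.0)                     # Poisson redraw
      half_gauss = rng.normal(full / 2.0, np.sqrt(full / 2.0))   # Gaussian redraw

      for img, name in [(half_resampled, "resampled"),
                        (half_poisson, "Poisson redraw"),
                        (half_gauss, "Gaussian redraw")]:
          # For true Poisson statistics, mean and variance are both ~25.
          # Applying np.round before saving, as the comment warns, distorts
          # these statistics at low counts.
          print(name, img.mean(), img.var())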

  17. Fast integral methods for integrated optical systems simulations: a review

    NASA Astrophysics Data System (ADS)

    Kleemann, Bernd H.

    2015-09-01

    Boundary integral equation methods (BIM), or simply integral methods (IM), in the context of optical design and simulation are rigorous electromagnetic methods solving the Helmholtz or Maxwell equations on the boundary (the surface or interface of the structures between two materials) for scattering and/or diffraction purposes. This work is mainly restricted to integral methods for diffracting structures such as gratings, kinoforms, diffractive optical elements (DOEs), micro Fresnel lenses, computer generated holograms (CGHs), holographic or digital phase holograms, periodic lithographic structures, and the like. In most cases all of the mentioned structures have dimensions of thousands of wavelengths in diameter. Therefore, the basic methods necessary for the numerical treatment are locally applied electromagnetic grating diffraction algorithms. Interestingly, integral methods were among the first electromagnetic methods investigated for grating diffraction. The development started in the mid-1960s for gratings with infinite conductivity, mainly owing to the good convergence of the integral methods, especially for TM polarization. The first integral equation methods (IEM) for finite conductivity were the methods by D. Maystre at the Fresnel Institute in Marseille: in 1972/74 for dielectric and metallic gratings, and later for multiprofile and other types of gratings and for photonic crystals. Other methods, such as differential and modal methods, suffered from unstable behaviour and slow convergence compared to BIMs for metallic gratings in TM polarization from the beginning until the mid-1990s. The first BIM for gratings using a parametrization of the profile was developed at the Karl-Weierstrass Institute in Berlin under a contract with the Carl Zeiss Jena works in 1984-1986 by A. Pomp, J. Creutziger, and the author. Due to the parametrization, this method was able to deal with any kind of surface grating from the beginning: whether profiles with edges, overhanging non

  18. Simulation and optimization of an experimental membrane wastewater treatment plant using computational intelligence methods.

    PubMed

    Ludwig, T; Kern, P; Bongards, M; Wolf, C

    2011-01-01

    The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.
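
    A toy genetic algorithm over (filtration time, relaxation time) pairs, of the kind described above. The analytic fitness function is entirely made up for illustration; in the paper, fitness is evaluated by the calibrated GPS-X/ASM1 simulation of the membrane bioreactor.

      import random

      def fitness(filt, relax):
          # Stand-in objective: reward useful filtration fraction,
          # penalise fouling; NOT a real membrane model.
          flux = filt / (filt + relax)
          fouling = 0.02 * filt**1.5 / (1.0 + relax)
          return flux - fouling

      pop = [(random.uniform(1, 15), random.uniform(0.2, 5)) for _ in range(40)]
      for gen in range(60):
          pop.sort(key=lambda g: fitness(*g), reverse=True)
          parents = pop[:10]                       # elitist selection
          children = []
          while len(children) < 30:
              a, b = random.sample(parents, 2)
              child = [(a[i] + b[i]) / 2 for i in range(2)]    # crossover
              if random.random() < 0.3:                        # mutation
                  i = random.randrange(2)
                  child[i] *= random.uniform(0.8, 1.2)
              children.append(tuple(child))
          pop = parents + children
      print(max(pop, key=lambda g: fitness(*g)))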

  19. Review of Methods Related to Assessing Human Performance in Nuclear Power Plant Control Room Simulations

    SciTech Connect

    Katya L Le Blanc; Ronald L Boring; David I Gertman

    2001-11-01

    With the increased use of digital systems in Nuclear Power Plant (NPP) control rooms comes a need to thoroughly understand the human performance issues associated with digital systems. A common way to evaluate human performance is to test operators and crews in NPP control room simulators. However, it is often challenging to characterize human performance in meaningful ways when measuring performance in NPP control room simulations. A review of the literature in NPP simulator studies reveals a variety of ways to measure human performance in NPP control room simulations including direct observation, automated computer logging, recordings from physiological equipment, self-report techniques, protocol analysis and structured debriefs, and application of model-based evaluation. These methods and the particular measures used are summarized and evaluated.

  20. A machine learning method for the prediction of receptor activation in the simulation of synapses.

    PubMed

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; Defelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is
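
    A hedged sketch of the train-once, predict-cheaply idea using scikit-learn. The corpus here is synthetic stand-in data with invented feature names and an arbitrary response surface, whereas the paper trains on a corpus of Monte Carlo synapse simulations and adds separate validation and curve-fitting stages.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      # Stand-in corpus: features = (cleft width, receptor count, time since
      # release), target = fraction of open receptors. Purely illustrative.
      rng = np.random.default_rng(0)
      n = 5000
      width = rng.uniform(15, 30, n)          # nm, hypothetical range
      n_rec = rng.integers(20, 200, n)
      t = rng.uniform(0, 5, n)                # ms
      frac_open = (np.exp(-t / (0.2 + width / 100))
                   * t / (t + 0.1) * (n_rec / (n_rec + 50)))
      X = np.c_[width, n_rec, t]

      model = RandomForestRegressor(n_estimators=200, random_state=0)
      model.fit(X, frac_open)

      # Predict a full open-fraction curve for an unseen synapse, cheaply,
      # instead of running a new Monte Carlo simulation.
      t_query = np.linspace(0, 5, 50)
      curve = model.predict(np.c_[np.full(50, 22.0), np.full(50, 80), t_query])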

  3. Bluff Body Flow Simulation Using a Vortex Element Method

    SciTech Connect

    Anthony Leonard; Phillippe Chatelain; Michael Rebel

    2004-09-30

    Heavy ground vehicles, especially those involved in long-haul freight transportation, consume a significant part of our nation's energy supply. It is therefore of utmost importance to improve their efficiency, both to reduce emissions and to decrease reliance on imported oil. At highway speeds, more than half of the power consumed by a typical semi truck goes into overcoming aerodynamic drag, a fraction which increases with speed and crosswind. Thanks to better tools and increased awareness, recent years have seen substantial aerodynamic improvements by the truck industry, such as tractor/trailer height matching, radiator area reduction, and swept fairings. However, there remains substantial room for improvement as understanding of turbulent fluid dynamics grows. The group's research effort focused on vortex particle methods, a novel approach to computational fluid dynamics (CFD). Where common CFD methods solve or model the Navier-Stokes equations on a grid which stretches from the truck surface outward, vortex particle methods solve the vorticity equation on a Lagrangian basis of smooth particles and do not require a grid. The group worked to advance the state of the art in vortex particle methods, improving their ability to handle the complicated, high-Reynolds-number flow around heavy vehicles. Specific challenges addressed include strategies to accurately capture vorticity generation and the resultant forces at the truck wall, handling the aerodynamics of spinning bodies such as tires, application of the method to the GTS model, computation-time reduction through improved integration methods, a closest-point transform for particle methods in complex geometries, and work on large eddy simulation (LES) turbulence modeling.
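
    The kernel of any such vortex particle method is the Biot-Savart velocity evaluation. The sketch below shows a regularized (vortex-blob) 2D version with direct O(N^2) summation under our own toy setup; the function name and parameters are ours, and production codes replace the double sum with fast summation and add the wall treatments discussed above.

      import numpy as np

      def biot_savart_velocities(pos, gamma, delta=0.05):
          """Velocity induced at every vortex particle by all others,
          using a smoothed 2D Biot-Savart (vortex-blob) kernel."""
          dx = pos[:, 0, None] - pos[None, :, 0]
          dy = pos[:, 1, None] - pos[None, :, 1]
          r2 = dx**2 + dy**2 + delta**2            # blob regularization
          u = -np.sum(gamma[None, :] * dy / r2, axis=1) / (2 * np.pi)
          v = np.sum(gamma[None, :] * dx / r2, axis=1) / (2 * np.pi)
          return np.c_[u, v]

      # Co-rotating vortex pair, advanced with forward Euler.
      pos = np.array([[-0.5, 0.0], [0.5, 0.0]])
      gamma = np.array([1.0, 1.0])
      dt = 0.01
      for _ in range(1000):
          pos += dt * biot_savart_velocities(pos, gamma)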

  4. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods.

    PubMed

    Lee, Anthony; Yau, Christopher; Giles, Michael B; Doucet, Arnaud; Holmes, Christopher C

    2010-12-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups of 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276

  5. Grid-based Methods in Relativistic Hydrodynamics and Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Martí, José María; Müller, Ewald

    2015-12-01

    An overview of grid-based numerical methods used in relativistic hydrodynamics (RHD) and magnetohydrodynamics (RMHD) is presented. Special emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods. Results of a set of demanding test bench simulations obtained with different numerical methods are compared in an attempt to assess the present capabilities and limits of the various numerical strategies. Applications to three astrophysical phenomena are briefly discussed to motivate the need for and to demonstrate the success of RHD and RMHD simulations in their understanding. The review further provides FORTRAN programs to compute the exact solution of the Riemann problem in RMHD, and to simulate 1D RMHD flows in Cartesian coordinates.

  6. Selecting a dynamic simulation modeling method for health care delivery research-part 2: report of the ISPOR Dynamic Simulation Modeling Emerging Good Practices Task Force.

    PubMed

    Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Crown, William; Padula, William V; Wong, Peter K; Pasupathy, Kalyan S; Higashi, Mitchell K; Osgood, Nathaniel D

    2015-03-01

    In a previous report, the ISPOR Task Force on Dynamic Simulation Modeling Applications in Health Care Delivery Research Emerging Good Practices introduced the fundamentals of dynamic simulation modeling and identified the types of health care delivery problems for which dynamic simulation modeling can be used more effectively than other modeling methods. The hierarchical relationship between the health care delivery system, providers, patients, and other stakeholders exhibits a level of complexity that ought to be captured using dynamic simulation modeling methods. As a tool to help researchers decide whether dynamic simulation modeling is an appropriate method for modeling the effects of an intervention on a health care system, we presented the System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence (SIMULATE) checklist consisting of eight elements. This report builds on the previous work, systematically comparing each of the three most commonly used dynamic simulation modeling methods-system dynamics, discrete-event simulation, and agent-based modeling. We review criteria for selecting the most suitable method depending on 1) the purpose-type of problem and research questions being investigated, 2) the object-scope of the model, and 3) the method to model the object to achieve the purpose. Finally, we provide guidance for emerging good practices for dynamic simulation modeling in the health sector, covering all aspects, from the engagement of decision makers in the model design through model maintenance and upkeep. We conclude by providing some recommendations about the application of these methods to add value to informed decision making, with an emphasis on stakeholder engagement, starting with the problem definition. Finally, we identify areas in which further methodological development will likely occur given the growing "volume, velocity and variety" and availability of "big data" to provide empirical evidence and techniques

  8. Web-based system for surgical planning and simulation

    NASA Astrophysics Data System (ADS)

    Eldeib, Ayman M.; Ahmed, Mohamed N.; Farag, Aly A.; Sites, C. B.

    1998-10-01

    The growing scientific knowledge and rapid progress in medical imaging techniques has led to an increasing demand for better and more efficient methods of remote access to high-performance computer facilities. This paper introduces a web-based telemedicine project that provides interactive tools for surgical simulation and planning. The presented approach makes use of client-server architecture based on new internet technology where clients use an ordinary web browser to view, send, receive and manipulate patients' medical records while the server uses the supercomputer facility to generate online semi-automatic segmentation, 3D visualization, surgical simulation/planning and neuroendoscopic procedures navigation. The supercomputer (SGI ONYX 1000) is located at the Computer Vision and Image Processing Lab, University of Louisville, Kentucky. This system is under development in cooperation with the Department of Neurological Surgery, Alliant Health Systems, Louisville, Kentucky. The server is connected via a network to the Picture Archiving and Communication System at Alliant Health Systems through a DICOM standard interface that enables authorized clients to access patients' images from different medical modalities.

  9. Simulation-based learning: From theory to practice.

    PubMed

    DeCaporale-Ryan, Lauren N; Dadiz, Rita; Peyre, Sarah E

    2016-06-01

    Comments on the article "Stimulating Reflective Practice Using Collaborative Reflective Training in Breaking Bad News Simulations" by Kim, Hernandez, Lavery, and Denmark (see record 2016-18380-001). Kim et al. are applauded for engaging and supporting the development of simulation-based education, and for their efforts to create an interprofessional learning environment. However, we hope that further work on alternate methods of debriefing leverages the inherent activation of learners, which builds on previous experience, fosters reflection and builds skills. What is needed is the transfer of learning theories into educational research efforts that measure the effectiveness, validity, and reliability of behavior-based performance change. The majority of breaking bad news (BBN) curricula limit program evaluations to reports of learner satisfaction, confidence and self-efficacy, rather than determining the successful translation of effective and humanistic interpersonal skills into long-term clinical practice (Rosenbaum et al., 2004). Research is needed to investigate how educational programs affect provider-patient-family interaction, and ultimately patient and family understanding, to better inform the teaching of BBN skills. PMID:27270248

  10. Physics-Based Haptic Simulation of Bone Machining.

    PubMed

    Arbabtafti, M; Moghaddam, M; Nahvi, A; Mahvash, M; Richardson, B; Shirinzadeh, B

    2011-01-01

    We present a physics-based training simulator for bone machining. Based on experimental studies, the energy required to remove a unit volume of bone is a constant for every particular bone material. We use this physical principle to obtain the forces required to remove bone material with a milling tool rotating at high speed. The rotating blades of the tool are modeled as a set of small cutting elements. The force of interaction between a cutting element and bone is calculated from the energy required to remove a bone chip with an estimated thickness and known material stiffness. The total force acting on the cutter at a particular instant is obtained by integrating the differential forces over all cutting elements engaged. A voxel representation is used to represent the virtual bone and removed chips for calculating forces of machining. We use voxels that carry bone material properties to represent the volumetric haptic body and to apply underlying physical changes during machining. Experimental results of machining samples of a real bone confirm the force model. A real-time haptic implementation of the method in a dental training simulator is described.
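
    A sketch of the stated force model: with the specific cutting energy (energy to remove a unit volume of bone) as a material constant, each engaged cutting element contributes a force equal to that energy times the chip cross-section it removes, and the element forces are summed over the tool. The function name and all numerical values below are placeholders, not measured bone properties.

      import numpy as np

      def machining_force(es, depths, widths, directions):
          """Total tool force as the sum over engaged cutting elements.

          es         : specific cutting energy (J/mm^3), placeholder input
          depths     : chip thickness removed by each element (mm)
          widths     : cutting width of each element (mm)
          directions : unit vectors opposing each element's cutting velocity
          """
          # Energy/volume has the dimension of force/area, so
          # es * thickness * width is a force per element.
          mags = es * depths * widths
          return (mags[:, None] * directions).sum(axis=0)

      # Hypothetical: 8 engaged elements on the half of a burr spinning
      # about z that is in contact with the bone.
      ang = np.linspace(0, np.pi, 8)
      dirs = np.c_[-np.sin(ang), np.cos(ang), np.zeros(8)]
      F = machining_force(es=25.0, depths=np.full(8, 1e-3),
                          widths=np.full(8, 0.2), directions=dirs)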

  11. Generating volumetric composition maps from particle based computational geodynamic simulations.

    NASA Astrophysics Data System (ADS)

    May, D. A.

    2012-04-01

    The advent of large-scale, high-resolution, three-dimensional hybrid particle-grid methods for studying geodynamic processes is upon us. Visualizing and interpreting the three-dimensional geometry of the material configuration after severe deformation has occurred is a challenging task when adopting such a point-based representation. In two dimensions, the material configuration is readily visualized by creating a simple (x,y) scatter plot, using the particles' position vectors and coloring the points according to the lithology each particle represents. Using only colored points (which need not be rendered as spheres), this approach unambiguously fills the 2D model domain with information defining the current material configuration. Along with the increased volume (i.e., MBytes) of output data generated by three-dimensional simulations, the higher dimensionality introduces additional complexities for visualization. First, the geometry of the deformed material in three-space becomes topologically more complex than its two-dimensional counterpart. Second, the scatter-plot approach used in 2D simply does not extend to three dimensions, as the technique is unable to convey any sense of depth. To address some of the visualization challenges posed by such methods, we describe how an Approximate Voronoi Diagram (AVD) can be used to produce a volumetric representation of point-based data. The AVD approach allows us to efficiently construct a volumetric partitioning of any subset of the model domain among a set of points. From this partitioning we can efficiently generate a representation of the material configuration that can be volume rendered, contoured, or sliced into cross sections. The types of volumetric representation possible and the performance characteristics of the AVD algorithm are demonstrated by applying the technique to simulation results from models of continental collision and salt
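
    As a rough illustration of the idea (not the paper's AVD algorithm), the Python sketch below builds a discrete Voronoi partition of a voxel grid among particles by nearest-neighbour lookup, turning scattered lithology-tagged particles into a volumetric composition field that can be volume rendered or contoured. The particle data here are synthetic.

      # Discrete Voronoi partition of a voxel grid among particles
      # (illustrative sketch; the paper's AVD algorithm approximates this
      # partition efficiently, here computed by nearest-neighbour lookup).
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      n_particles = 5000
      pos = rng.random((n_particles, 3))       # particle positions in [0,1]^3
      lith = rng.integers(0, 3, n_particles)   # lithology index per particle

      # Voxel-centre coordinates for a 64^3 grid covering the domain.
      n = 64
      axis = (np.arange(n) + 0.5) / n
      X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
      voxels = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])

      # Each voxel inherits the lithology of its nearest particle.
      tree = cKDTree(pos)
      _, nearest = tree.query(voxels)
      composition = lith[nearest].reshape(n, n, n)  # volumetric composition map
      print(np.bincount(composition.ravel()))       # voxel count per lithology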

  12. Science Classroom Inquiry (SCI) Simulations: A Novel Method to Scaffold Science Learning

    PubMed Central

    Peffer, Melanie E.; Beckler, Matthew L.; Schunn, Christian; Renken, Maggie; Revak, Amanda

    2015-01-01

    Science education is progressively more focused on employing inquiry-based learning methods in the classroom and increasing scientific literacy among students. However, due to time and resource constraints, many classroom science activities and laboratory experiments focus on simple inquiry, with a step-by-step approach to reach predetermined outcomes. The science classroom inquiry (SCI) simulations were designed to give students real-life, authentic science experiences within the confines of a typical classroom. The SCI simulations allow students to engage with a science problem in a meaningful, inquiry-based manner. Three discrete SCI simulations were created as website applications for use with middle school and high school students. For each simulation, students were tasked with solving a scientific problem through investigation and hypothesis testing. After completion of the simulation, 67% of students reported a change in how they perceived authentic science practices, specifically related to the complex and dynamic nature of scientific research and how scientists approach problems. Moreover, 80% of the students who did not report a change in how they viewed the practice of science indicated that the simulation confirmed or strengthened their prior understanding. Additionally, we found a statistically significant positive correlation between students’ self-reported changes in understanding of authentic science practices and the degree to which each simulation benefitted learning. Since SCI simulations were effective in promoting both student learning and student understanding of authentic science practices with both middle and high school students, we propose that SCI simulations are a valuable and versatile technology that can be used to educate and inspire a wide range of science students on the real-world complexities inherent in scientific study. PMID:25786245

  14. Simulating Cancer Growth with Multiscale Agent-Based Modeling

    PubMed Central

    Wang, Zhihui; Butner, Joseph D.; Kerketta, Romica; Cristini, Vittorio; Deisboeck, Thomas S.

    2014-01-01

    Many techniques have been developed in recent years to model a variety of cancer behaviors in silico. Agent-based modeling is a discrete-based hybrid modeling approach that allows simulating the role of diversity in cell populations as well as within each individual cell; it has therefore become a powerful modeling method widely used by computational cancer researchers. Many aspects of tumor morphology, including phenotype-changing mutations, adaptation to the microenvironment, the process of angiogenesis, the influence of the extracellular matrix, reactions to chemotherapy or surgical intervention, the effects of oxygen and nutrient availability, and metastasis and invasion of healthy tissues, have been incorporated and investigated in agent-based models. In this review, we introduce some of the most recent agent-based models that have provided insight into the understanding of cancer growth and invasion, spanning multiple biological scales in time and space, and we further describe several experimentally testable hypotheses generated by those models. We also discuss some of the current challenges facing multiscale agent-based cancer models. PMID:24793698
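
    To give a flavor of the discrete, cell-level approach this review surveys, here is a deliberately minimal 2D lattice agent-based model of tumor growth in Python. It is a toy sketch, not any specific published model; the division and death probabilities are assumed for illustration.

      # Toy 2D lattice agent-based model of tumor growth (illustrative
      # only). Each occupied site may die or divide into a random empty
      # neighbour site, with assumed per-step probabilities.
      import numpy as np

      rng = np.random.default_rng(42)
      N = 101
      grid = np.zeros((N, N), dtype=bool)
      grid[N // 2, N // 2] = True              # single founding cell

      P_DIVIDE, P_DIE = 0.3, 0.02              # assumed per-step rates
      NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

      for step in range(200):
          # np.nonzero snapshots the current cells, so daughters placed
          # this step do not act until the next step.
          for i, j in zip(*np.nonzero(grid)):
              if rng.random() < P_DIE:
                  grid[i, j] = False           # cell death frees the site
                  continue
              if rng.random() < P_DIVIDE:
                  di, dj = NEIGHBOURS[rng.integers(4)]
                  ni, nj = i + di, j + dj
                  if 0 <= ni < N and 0 <= nj < N and not grid[ni, nj]:
                      grid[ni, nj] = True      # daughter occupies empty site

      print("tumor size after 200 steps:", int(grid.sum()))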

  15. Cognitive Modeling for Agent-Based Simulation of Child Maltreatment

    NASA Astrophysics Data System (ADS)

    Hu, Xiaolin; Puddy, Richard

    This paper extends previous work to develop cognitive modeling for agent-based simulation of child maltreatment (CM). The