Science.gov

Sample records for based simulation methods

  1. Fast simulation method for airframe analysis based on big data

    NASA Astrophysics Data System (ADS)

    Liu, Dongliang; Zhang, Lixin

    2016-10-01

    In this paper, we apply the big data method to structural analysis by considering the correlations between loads, between loads and results, and between results. By means of fundamental mathematics and physical rules, the principle, feasibility and error control of the method are discussed. We then establish the analysis process and procedures. The method is validated by two examples. The results show that the fast simulation method based on big data is both fast and precise when applied to structural analysis.

  2. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    PubMed

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-01-25

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials.

  3. A method for MREIT-based source imaging: simulation studies

    NASA Astrophysics Data System (ADS)

    Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun

    2016-08-01

    This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping by probing the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few %), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time to get more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violates the Nyquist criterion, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time-change of the Laplacian of the nonlinearly wrapped data.
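
    The two quantitative statements above can be summarized compactly; a sketch in LaTeX notation, where N_avg is the number of averages and psi denotes the nonlinearly wrapped flux-density data (the symbols d and psi are introduced here for illustration and are not the paper's notation):

        \mathrm{SNR} \propto \sqrt{N_{\mathrm{avg}}},
        \qquad
        d(\mathbf{r}, t) \;=\; \frac{\partial}{\partial t}\,\nabla^{2}\psi(\mathbf{r}, t)

    Local conductivity changes are then flagged where |d| is large.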

  4. Understanding exoplanet populations with simulation-based methods

    NASA Astrophysics Data System (ADS)

    Morehead, Robert Charles

    The Kepler candidate catalog represents an unprecedented sample of exoplanet host stars. This dataset is ideal for probing the populations of exoplanet systems and exploring their architectures. Confirming transiting exoplanet candidates through traditional follow-up methods is challenging, especially for faint host stars. Most of Kepler's validated planets relied on statistical methods to separate true planets from false-positives. Multiple transiting planet systems (MTPS) have been previously shown to have low false-positive rates and over 850 planets in MTPSs have been statistically validated so far. We show that the period-normalized transit duration ratio (xi) offers additional information that can be used to establish the planetary nature of these systems. We briefly discuss the observed distribution of xi for the Q1-Q17 Kepler Candidate Search. We also use xi to develop a Bayesian statistical framework combined with Monte Carlo methods to determine which pairs of planet candidates in an MTPS are consistent with the planet hypothesis for a sample of 862 MTPSs that include candidate planets, confirmed planets, and known false-positives. This analysis proves to be efficient and advantageous in that it only requires catalog-level bulk candidate properties and galactic population modeling to compute the probabilities of a myriad of feasible scenarios composed of background and companion stellar blends in the photometric aperture, without needing additional observational follow-up. Our results agree with the previous results of a low false-positive rate in the Kepler MTPSs. This implies, independently of any other estimates, that most of the MTPSs detected by Kepler are planetary in nature, but that a substantial fraction could be orbiting stars other than the putative target star, and therefore may be subject to significant error in the inferred planet parameters resulting from unknown or mismeasured stellar host attributes. We also apply approximate
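
    For reference, the period-normalized transit duration ratio for an inner/outer pair of candidates is commonly defined as follows (a standard definition from the Kepler multiplicity literature; the abstract itself does not spell it out):

        \xi \;\equiv\; \frac{T_{\mathrm{dur,in}}}{T_{\mathrm{dur,out}}}
        \left(\frac{P_{\mathrm{out}}}{P_{\mathrm{in}}}\right)^{1/3}

    For pairs of planets transiting the same star, xi clusters near unity, which is what makes it useful as evidence of the planetary nature of a multiple system.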

  5. Human swallowing simulation based on videofluorography images using Hamiltonian MPS method

    NASA Astrophysics Data System (ADS)

    Kikuchi, Takahiro; Michiwaki, Yukihiro; Kamiya, Tetsu; Toyama, Yoshio; Tamai, Tasuku; Koshizuka, Seiichi

    2015-09-01

    In developed nations, swallowing disorders and aspiration pneumonia have become serious problems. We developed a method to simulate the behavior of the organs involved in swallowing to clarify the mechanisms of swallowing and aspiration. The shape model is based on anatomically realistic geometry, and the motion model utilizes forced displacements based on realistic dynamic images to reflect the mechanisms of human swallowing. The soft tissue organs are modeled as nonlinear elastic material using the Hamiltonian MPS method. This method allows for stable simulation of the complex swallowing movement. A penalty method using metaballs is employed to simulate contact between organ walls and smooth sliding along the walls. We performed four numerical simulations under different analysis conditions to represent four cases of swallowing, including a healthy volunteer and a patient with a swallowing disorder. The simulation results were compared to examine the epiglottic downfolding mechanism, which strongly influences the risk of aspiration.

  6. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor

    PubMed Central

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-01-01

    A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, the models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, which indicates that the proposed method will help to achieve a low-cost, convenient and safe method for recharging implantable biosensors. PMID:27626422
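
    A minimal Monte Carlo sketch, in Python, of the kind of layered light-transport calculation described above; the layer thicknesses, optical coefficients and isotropic scattering are placeholder assumptions, not the paper's skin model.

        # Minimal Monte Carlo sketch of light deposition in layered skin (illustrative
        # only; layer values and isotropic scattering are assumptions, not the paper's model).
        import numpy as np

        rng = np.random.default_rng(0)

        # layer boundaries (cm) and optical properties: absorption mu_a, scattering mu_s (1/cm)
        boundaries = np.array([0.0, 0.01, 0.2, 0.4])   # epidermis, dermis, subcutis (assumed)
        mu_a = np.array([3.0, 1.5, 1.0])
        mu_s = np.array([100.0, 50.0, 20.0])

        def layer_of(z):
            return min(np.searchsorted(boundaries, z, side="right") - 1, len(mu_a) - 1)

        absorbed = np.zeros(len(mu_a))
        n_photons = 20000
        for _ in range(n_photons):
            z, w = 0.0, 1.0                      # depth and photon-packet weight
            uz = 1.0                             # launched straight down
            while w > 1e-4 and 0.0 <= z < boundaries[-1]:
                k = layer_of(z)
                mu_t = mu_a[k] + mu_s[k]
                # free path sampled with the current layer's mu_t; layer crossings
                # within a single step are ignored for brevity
                z += uz * rng.exponential(1.0 / mu_t)
                if not (0.0 <= z < boundaries[-1]):
                    break                        # photon escaped the slab
                k = layer_of(z)
                frac = mu_a[k] / (mu_a[k] + mu_s[k])
                absorbed[k] += w * frac          # deposit absorbed fraction in this layer
                w *= 1.0 - frac
                uz = rng.uniform(-1.0, 1.0)      # isotropic re-direction (simplification)

        print("fraction of launched energy absorbed per layer:", absorbed / n_photons)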

  7. A stable cutting method for finite elements based virtual surgery simulation.

    PubMed

    Jerábková, Lenka; Jerábek, Jakub; Chudoba, Rostislav; Kuhlen, Torsten

    2007-01-01

    In this paper we present a novel approach for stable interactive cutting of deformable objects in virtual environments. Our method is based on the extended finite element method, allowing for the modeling of discontinuities without remeshing. As no new elements are created, the impact on simulation performance is minimized. We also propose an appropriate mass lumping technique to guarantee the stability of the simulation regardless of the position of the cut.

  8. Real-time simulation of ultrasound refraction phenomena using ray-trace based wavefront construction method.

    PubMed

    Szostek, Kamil; Piórkowski, Adam

    2016-10-01

    Ultrasound (US) imaging is one of the most popular techniques used in clinical diagnosis, mainly due to the lack of adverse effects on patients and the simplicity of US equipment. However, the characteristics of the medium cause US imaging to imprecisely reconstruct examined tissues. The artifacts are the results of wave phenomena, i.e. diffraction or refraction, and should be recognized during examination to avoid misinterpretation of an US image. Currently, US training is based on teaching materials and simulators, and ultrasound simulation has become an active research area in medical computer science. Many US simulators are limited by the complexity of the wave phenomena, leading to computationally intensive processing that makes it difficult for systems to operate in real time. To achieve the required frame rate, the vast majority of simulators simplify or neglect wave diffraction and refraction. This paper proposes a solution for an ultrasound simulator based on methods known in geophysics. To improve simulation quality, a wavefront construction method was adapted which takes the refraction phenomena into account. This technique uses ray tracing and velocity averaging to construct wavefronts in the simulation. Instead of a geological medium, real CT scans are applied. This approach can produce more realistic projections of pathological findings and is also capable of providing real-time simulation.
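
    The building block of a refraction-aware ray tracer is Snell's law applied at each tissue interface; a minimal Python sketch in vector form (the sound speeds and geometry are assumed values; this is not the wavefront-construction code itself):

        # Vector form of Snell's law for a single ray/interface, the basic building block
        # of refraction-aware ray tracing (illustrative sketch only).
        import numpy as np

        def refract(d, n, c1, c2):
            """Refract unit direction d at a surface with unit normal n (pointing against d),
            going from a medium with sound speed c1 into one with speed c2.
            Returns the refracted unit direction, or None on total internal reflection."""
            d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
            eta = c2 / c1                     # acoustic Snell's law: sin(t)/sin(i) = c2/c1
            cos_i = -np.dot(n, d)
            sin2_t = eta**2 * (1.0 - cos_i**2)
            if sin2_t > 1.0:
                return None                   # total internal reflection
            cos_t = np.sqrt(1.0 - sin2_t)
            return eta * d + (eta * cos_i - cos_t) * n

        # Example: ray hitting a tissue interface at 30 degrees, speeds 1540 -> 1450 m/s (assumed)
        d = np.array([np.sin(np.radians(30)), -np.cos(np.radians(30)), 0.0])
        print(refract(d, np.array([0.0, 1.0, 0.0]), 1540.0, 1450.0))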

  9. Evaluation of a clinical simulation-based assessment method for EHR-platforms.

    PubMed

    Jensen, Sanne; Rasmussen, Stine Loft; Lyng, Karen Marie

    2014-01-01

    In a procurement process, assessment of issues like human factors and the interaction between technology and end-users can be challenging. In a large public procurement of an Electronic health record-platform (EHR-platform) in Denmark, a clinical simulation-based method for assessing and comparing human factor issues was developed and evaluated. This paper describes the evaluation of the method, its advantages and disadvantages. Our findings showed that clinical simulation is beneficial for assessing user satisfaction, usefulness and patient safety, although it is resource demanding. The method made it possible to assess qualitative topics during the procurement and it provides an excellent basis for user involvement.

  10. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  11. A novel method for simulation of brushless DC motor servo-control system based on MATLAB

    NASA Astrophysics Data System (ADS)

    Tao, Keyan; Yan, Yingmin

    2006-11-01

    This paper presents research on the simulation of a brushless DC motor (BLDCM) servo control system. Based on the mathematical model of the BLDCM, the system simulation model is built with the MATLAB software. In building the system model, the isolated functional blocks, such as the BLDCM block, the rotor position detection block and the phase-change logic block, are modeled first. By the organic combination of these blocks, the model of the BLDCM can be established easily. The simulation results confirm the reasonability and validity of the model, and this novel method offers a new way of thinking for designing and debugging actual motors.

  12. Apparatus and method for interaction phenomena with world modules in data-flow-based simulation

    DOEpatents

    Xavier, Patrick G.; Gottlieb, Eric J.; McDonald, Michael J.; Oppel, III, Fred J.

    2006-08-01

    A method and apparatus accommodate interaction phenomena in a data-flow-based simulation of a system of elements, by establishing meta-modules to simulate system elements and by establishing world modules associated with interaction phenomena. World modules are associated with proxy modules from a group of meta-modules associated with one of the interaction phenomena. The world modules include a communication world, a sensor world, a mobility world, and a contact world. World modules can be further associated with other world modules if necessary. Interaction phenomena are simulated in corresponding world modules by accessing member functions in the associated group of proxy modules. Proxy modules can be dynamically allocated at a desired point in the simulation to accommodate the addition of elements in the system of elements, such as a system of robots, a system of communication terminals, or a system of vehicles, being simulated.

  13. Methods for simulation-based analysis of fluid-structure interaction.

    SciTech Connect

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.

  14. Parallel electromagnetic simulator based on the Finite-Difference Time Domain method

    NASA Astrophysics Data System (ADS)

    Walendziuk, Wojciech

    2006-03-01

    This paper presents a parallel tool for the analysis of electromagnetic field distribution. The main simulation program is based on a parallel algorithm for the Finite-Difference Time-Domain method and uses the Message Passing Interface as the communication library. The paper also presents the communication schemes among computation nodes in the parallel environment and the efficiency of the parallel algorithm.

  15. Validation of population-based disease simulation models: a review of concepts and methods

    PubMed Central

    2010-01-01

    Background Computer simulation models are used increasingly to support public health research and policy, but questions about their quality persist. The purpose of this article is to review the principles and methods for validation of population-based disease simulation models. Methods We developed a comprehensive framework for validating population-based chronic disease simulation models and used this framework in a review of published model validation guidelines. Based on the review, we formulated a set of recommendations for gathering evidence of model credibility. Results Evidence of model credibility derives from examining: 1) the process of model development, 2) the performance of a model, and 3) the quality of decisions based on the model. Many important issues in model validation are insufficiently addressed by current guidelines. These issues include a detailed evaluation of different data sources, graphical representation of models, computer programming, model calibration, between-model comparisons, sensitivity analysis, and predictive validity. The role of external data in model validation depends on the purpose of the model (e.g., decision analysis versus prediction). More research is needed on the methods of comparing the quality of decisions based on different models. Conclusion As the role of simulation modeling in population health is increasing and models are becoming more complex, there is a need for further improvements in model validation methodology and common standards for evaluating model credibility. PMID:21087466

  16. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    A simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if the model simulation is extremely time-consuming. Statistical models have been examined as a surrogate of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high-dimensions and discontinuities of the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating methods, namely, bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. Normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrated the feasibility of this highly efficient calibration framework.
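
    The calibration objective can be written out explicitly; a sketch in LaTeX, assuming the common convention of normalizing the RMSE by the range of the observed heads (the abstract does not state which normalization is used):

        \mathrm{NRMSE}
        \;=\;
        \frac{\sqrt{\dfrac{1}{N}\sum_{i=1}^{N}\left(h_i^{\mathrm{sim}} - h_i^{\mathrm{obs}}\right)^{2}}}
             {h^{\mathrm{obs}}_{\max} - h^{\mathrm{obs}}_{\min}}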

  17. A neural-network-based method of model reduction for the dynamic simulation of MEMS

    NASA Astrophysics Data System (ADS)

    Liang, Y. C.; Lin, W. Z.; Lee, H. P.; Lim, S. P.; Lee, K. H.; Feng, D. P.

    2001-05-01

    This paper proposes a neural-network-based method for model reduction that combines the generalized Hebbian algorithm (GHA) with the Galerkin procedure to perform the dynamic simulation and analysis of nonlinear microelectromechanical systems (MEMS). An unsupervised neural network is adopted to find the principal eigenvectors of a correlation matrix of snapshots. Extensive computational results have shown that principal component analysis using the GHA neural network can extract an empirical basis from numerical or experimental data, which can be used to convert the original system into a lumped low-order macromodel. The macromodel can be employed to carry out the dynamic simulation of the original system, resulting in a dramatic reduction of computation time while not losing flexibility or accuracy. Compared with other existing model reduction methods for the dynamic simulation of MEMS, the present method does not need to compute the input correlation matrix in advance. It needs only to find very few required basis functions, which can be learned directly from the input data, and this means that the method possesses potential advantages when the measured data set is large. The method is evaluated to simulate the pull-in dynamics of a doubly-clamped microbeam subjected to different input voltage spectra of electrostatic actuation. The efficiency and the flexibility of the proposed method are examined by comparing the results with those of the fully meshed finite-difference method.
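
    For illustration, the generalized Hebbian algorithm (Sanger's rule) that the method relies on can be sketched in a few lines of Python; the data, dimensions and learning rate below are placeholder assumptions, not values from the paper:

        # Generalized Hebbian Algorithm (Sanger's rule) sketch: learn the leading principal
        # components of snapshot data without forming the full correlation matrix.
        import numpy as np

        rng = np.random.default_rng(1)
        n_dim, n_modes, eta = 50, 3, 1e-3
        # toy snapshot matrix (2000 snapshots of a 50-dimensional state), assumed data
        snapshots = rng.standard_normal((2000, n_dim)) @ rng.standard_normal((n_dim, n_dim)) * 0.1

        W = rng.standard_normal((n_modes, n_dim)) * 0.01   # rows converge to principal eigenvectors
        for epoch in range(20):
            for x in snapshots:
                y = W @ x
                # Sanger's update: dW = eta * (y x^T - lower_triangular(y y^T) W)
                W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

        # The learned rows can serve as an empirical basis for a Galerkin macromodel.
        print("basis orthonormality check:\n", np.round(W @ W.T, 3))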

  18. GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method

    NASA Astrophysics Data System (ADS)

    Wei, J.; Kruis, F. E.

    2013-09-01

    Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and the computing cost. Currently, the lowest computing costs are obtained when applying a graphic processing unit (GPU) originally developed for speeding up graphic processing in the consumer market. In this article we present an implementation that accelerates a Monte Carlo method based on the inverse scheme for simulating particle coagulation on the GPU. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the computational accuracy of the MC on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains of using the GPU, the computing time on the GPU was compared against that of its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the n² dependence of coagulation.
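
    A minimal CPU-side sketch of acceptance-rejection Monte Carlo coagulation in Python; the kernel, the normalized units and the event-driven time stepping are illustrative assumptions, not the paper's GPU/inverse implementation:

        # Acceptance-rejection (thinning) Monte Carlo for particle coagulation, in
        # normalized units; the sum kernel below is a placeholder choice.
        import numpy as np

        rng = np.random.default_rng(2)
        v = np.ones(10000)                          # particle volumes, initially monodisperse
        t = 0.0

        def kernel(vi, vj):
            return vi + vj                          # assumed size-dependent kernel (sum kernel)

        while len(v) > 1000:
            n = len(v)
            k_max = 2.0 * v.max()                   # upper bound of the kernel for rejection sampling
            rate = 0.5 * n * (n - 1) * k_max        # majorant total coagulation rate
            t += rng.exponential(1.0 / rate)        # waiting time to the next candidate event
            i, j = rng.choice(n, size=2, replace=False)
            if rng.random() < kernel(v[i], v[j]) / k_max:
                v[i] += v[j]                        # merge particles i and j
                v = np.delete(v, j)

        print(f"t = {t:.3e} (normalized), particles left: {len(v)}, mean volume: {v.mean():.2f}")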

  19. Agent-based modeling: Methods and techniques for simulating human systems

    PubMed Central

    Bonabeau, Eric

    2002-01-01

    Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems. After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed by using real-world applications: flow simulation, organizational simulation, market simulation, and diffusion simulation. For each category, one or several business applications are described and analyzed. PMID:12011407

  20. A general parallelization strategy for random path based geostatistical simulation methods

    NASA Astrophysics Data System (ADS)

    Mariethoz, Grégoire

    2010-07-01

    The size of simulation grids used for numerical models has increased by many orders of magnitude in the past years, and this trend is likely to continue. Efficient pixel-based geostatistical simulation algorithms have been developed, but for very large grids and complex spatial models, the computational burden remains heavy. As cluster computers become widely available, using parallel strategies is a natural step for increasing the usable grid size and the complexity of the models. These strategies must profit from the possibilities offered by machines with a large number of processors. On such machines, the bottleneck is often the communication time between processors. We present a strategy distributing grid nodes among all available processors while minimizing communication and latency times. It consists in centralizing the simulation on a master processor that calls the other slave processors as if they were functions, each simulating one node at a time. The key is to decouple the sending and the receiving operations to avoid synchronization. Centralization allows a conflict management system to ensure that nodes being simulated simultaneously do not interfere in terms of neighborhood. The strategy is computationally efficient and is versatile enough to be applicable to all random path based simulation methods.

  1. The method of infrared image simulation based on the measured image

    NASA Astrophysics Data System (ADS)

    Lou, Shuli; Liu, Liang; Ren, Jiancun

    2015-10-01

    The development of infrared imaging guidance technology has promoted research into infrared imaging simulation technology, and the key to infrared imaging simulation is the generation of the IR image. The generation of IR images is valuable both militarily and economically. In order to solve the problem of the credibility and economy of infrared scene generation, a method of infrared scene generation based on measured images is proposed. Through research on the optical properties of the ship target and the sea background, ship-target images with various attitudes are extracted from recorded images using digital image processing technology. The ship-target image is zoomed in and out to simulate the relative motion between the viewpoint and the target, according to the field of view and the distance between the target and the sensor. The gray scale of the ship-target image is adjusted to simulate the radiation change of the ship target according to the distance between the viewpoint and the target and the atmospheric transmission. Frames of recorded infrared images without the target are interpolated to simulate the high frame rate of the missile. Processed ship-target images and sea-background infrared images are synthesized to obtain infrared scenes for different viewpoints. Experiments proved that this method is flexible and applicable, and that the fidelity and reliability of the synthesized infrared images can be guaranteed.

  2. Simulation of the Recharging Method of Implantable Biosensors Based on a Wearable Incoherent Light Source

    PubMed Central

    Song, Yong; Hao, Qun; Kong, Xianyue; Hu, Lanxin; Cao, Jie; Gao, Tianxin

    2014-01-01

    Recharging implantable electronics from the outside of the human body is very important for applications such as implantable biosensors and other implantable electronics. In this paper, a recharging method for implantable biosensors based on a wearable incoherent light source has been proposed and simulated. Firstly, we develop a model of the incoherent light source and a multi-layer model of skin tissue. Secondly, the recharging processes of the proposed method have been simulated and tested experimentally, whereby some important conclusions have been reached. Our results indicate that the proposed method will offer a convenient, safe and low-cost recharging method for implantable biosensors, which should promote the application of implantable electronics. PMID:25372616

  3. Wang-Landau Reaction Ensemble Method: Simulation of Weak Polyelectrolytes and General Acid-Base Reactions.

    PubMed

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-02-14

    We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
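
    The flat-histogram machinery can be illustrated on a toy system; a minimal Wang-Landau sketch in Python for a small 2D Ising model (the toy model, sweep length and loose flatness tolerance are assumptions chosen for brevity, not the reaction-ensemble system of the paper):

        # Wang-Landau sampling sketch: estimate ln g(E) by biasing moves with the
        # running density-of-states estimate and enforcing a flat energy histogram.
        import numpy as np

        rng = np.random.default_rng(3)
        L = 8
        spins = rng.choice([-1, 1], size=(L, L))

        def total_energy(s):
            return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

        ln_g, hist = {}, {}                  # running estimates of ln(density of states)
        ln_f = 1.0                           # modification factor, reduced when histogram is flat
        E = total_energy(spins)

        while ln_f > 1e-2:                   # loose tolerance so the sketch runs quickly
            for _ in range(10000):
                i, j = rng.integers(L, size=2)
                dE = 2 * spins[i, j] * (spins[(i+1) % L, j] + spins[(i-1) % L, j] +
                                        spins[i, (j+1) % L] + spins[i, (j-1) % L])
                E_new = E + dE
                # accept with probability min(1, g(E_old)/g(E_new))
                if np.log(rng.random()) < ln_g.get(E, 0.0) - ln_g.get(E_new, 0.0):
                    spins[i, j] *= -1
                    E = E_new
                ln_g[E] = ln_g.get(E, 0.0) + ln_f    # update density of states and histogram
                hist[E] = hist.get(E, 0) + 1
            h = np.array(list(hist.values()))
            if h.min() > 0.8 * h.mean():             # flat-histogram criterion
                ln_f *= 0.5
                hist = {}

        print(f"visited {len(ln_g)} energy levels; ln f reduced to {ln_f:.1e}")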

  4. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters have been selected by implementing the F-test evaluation and design of experiments, and then the incomplete fourth-order polynomial response surface model (RSM) has been developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the amount of computation and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization, and thus a better correlation between simulation and test is achieved. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and a GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
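
    One plausible concrete form of the equally weighted mean-and-covariance objective described above, written in LaTeX (the exact norms and scaling of the two terms are assumptions, not stated in the abstract):

        J(\boldsymbol{\theta})
        \;=\;
        \tfrac{1}{2}\,\big\lVert \boldsymbol{\mu}_{\mathrm{sim}}(\boldsymbol{\theta}) - \boldsymbol{\mu}_{\mathrm{test}} \big\rVert^{2}
        \;+\;
        \tfrac{1}{2}\,\big\lVert \boldsymbol{\Sigma}_{\mathrm{sim}}(\boldsymbol{\theta}) - \boldsymbol{\Sigma}_{\mathrm{test}} \big\rVert_{F}^{2}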

  5. Efficient Molecular Dynamics Simulations of Multiple Radical Center Systems Based on the Fragment Molecular Orbital Method

    SciTech Connect

    Nakata, Hiroya; Schmidt, Michael W; Fedorov, Dmitri G; Kitaura, Kazuo; Nakamura, Shinichiro; Gordon, Mark S

    2014-10-16

    The fully analytic energy gradient has been developed and implemented for the restricted open-shell Hartree–Fock (ROHF) method based on the fragment molecular orbital (FMO) theory for systems that have multiple open-shell molecules. The accuracy of the analytic ROHF energy gradient is compared with the corresponding numerical gradient, illustrating the accuracy of the analytic gradient. The ROHF analytic gradient is used to perform molecular dynamics simulations of an unusual open-shell system, liquid oxygen, and mixtures of oxygen and nitrogen. These molecular dynamics simulations provide some insight about how triplet oxygen molecules interact with each other. Timings reveal that the method can calculate the energy gradient for a system containing 4000 atoms in only 6 h. Therefore, it is concluded that the FMO-ROHF method will be useful for investigating systems with multiple open shells.

  6. Efficient molecular dynamics simulations of multiple radical center systems based on the fragment molecular orbital method.

    PubMed

    Nakata, Hiroya; Schmidt, Michael W; Fedorov, Dmitri G; Kitaura, Kazuo; Nakamura, Shinichiro; Gordon, Mark S

    2014-10-16

    The fully analytic energy gradient has been developed and implemented for the restricted open-shell Hartree-Fock (ROHF) method based on the fragment molecular orbital (FMO) theory for systems that have multiple open-shell molecules. The accuracy of the analytic ROHF energy gradient is compared with the corresponding numerical gradient, illustrating the accuracy of the analytic gradient. The ROHF analytic gradient is used to perform molecular dynamics simulations of an unusual open-shell system, liquid oxygen, and mixtures of oxygen and nitrogen. These molecular dynamics simulations provide some insight about how triplet oxygen molecules interact with each other. Timings reveal that the method can calculate the energy gradient for a system containing 4000 atoms in only 6 h. Therefore, it is concluded that the FMO-ROHF method will be useful for investigating systems with multiple open shells.

  7. Thermoelastic Simulations Based on Discontinuous Galerkin Methods: Formulation and Application in Gas Turbines

    NASA Astrophysics Data System (ADS)

    Hao, Zengrong; Gu, Chunwei; Song, Yin

    2016-06-01

    This study extends discontinuous Galerkin (DG) methods to simulations of thermoelasticity. A thermoelastic formulation of the interior penalty DG (IP-DG) method is presented and aspects of the numerical implementation are discussed in matrix form. The content related to thermal expansion effects is illustrated explicitly in the discretized equation system. The feasibility of the method for general thermoelastic simulations is validated through typical test cases, including tackling stress discontinuities caused by jumps of thermal expansion properties and controlling the accompanying non-physical oscillations by adjusting the magnitude of the IP term. The simulation platform developed upon the method is applied to the engineering analysis of thermoelastic performance for a turbine vane and a series of vanes with various types of simplified thermal barrier coating (TBC) systems. This analysis demonstrates that while the heat-conduction properties of the TBC are generally the major consideration for protecting the alloy base vanes, the mechanical properties may have more significant effects on the protection of the coatings themselves. The changing characteristics of the normal tractions on the TBC/base interface, which are closely related to the occurrence of coating failures, are summarized and analysed for various component distributions along the TBC thickness of the functionally graded materials, illustrating opposite tendencies for coatings with different thermal-stress-free temperatures.

  8. An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming

    2017-02-01

    In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). We first perturb the IP core assignment for each TAM to produce a new solution for SA, allocate the TAM width for each TAM using a greedy algorithm and calculate the corresponding testing time. The core assignment is then accepted or rejected according to the simulated annealing criterion, and the optimum solution is finally attained. We ran the test scheduling experiment with the international reference circuits provided by the International Test Conference 2002 (ITC'02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA) and the genetic algorithm (GA). When the TAM width reaches 48, 56 and 64, the testing time based on our algorithm is less than that of the classic methods, with optimization rates of 30.74%, 3.32% and 16.13%, respectively. Moreover, the testing time based on our algorithm is very close to that of the improved genetic algorithm (IGA), which is the current state of the art.
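
    The accept/reject core of such a scheduling loop is standard simulated annealing; a minimal Python sketch (the cost function, neighbourhood move and cooling parameters are toy stand-ins, not the paper's TAM-width/greedy test-time evaluation):

        # Generic simulated-annealing skeleton for a core-to-TAM assignment problem.
        import math, random

        random.seed(4)
        core_times = [random.uniform(1, 10) for _ in range(20)]    # assumed per-core test times

        def cost(assignment):
            # toy stand-in: longest "test time" across 3 TAMs (lower is better)
            loads = [0.0, 0.0, 0.0]
            for core, tam in enumerate(assignment):
                loads[tam] += core_times[core]
            return max(loads)

        state = [random.randrange(3) for _ in core_times]          # initial core-to-TAM assignment
        best, best_cost = state[:], cost(state)
        T, alpha = 50.0, 0.97                                       # initial temperature and cooling rate

        while T > 1e-3:
            for _ in range(50):
                cand = state[:]
                cand[random.randrange(len(cand))] = random.randrange(3)   # perturb one core's TAM
                d = cost(cand) - cost(state)
                if d < 0 or random.random() < math.exp(-d / T):            # Metropolis acceptance
                    state = cand
                    if cost(state) < best_cost:
                        best, best_cost = state[:], cost(state)
            T *= alpha                                                      # geometric cooling schedule

        print(f"best makespan-style test time: {best_cost:.2f}")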

  9. Simulations of Ground Motion in Southern California based upon the Spectral-Element Method

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Komatitsch, D.; Liu, Q.

    2003-12-01

    We use the spectral-element method to simulate ground motion generated by recent well-recorded small earthquakes in Southern California. Simulations are performed using a new sedimentary basin model that is constrained by hundreds of petroleum industry well logs and more than twenty thousand kilometers of seismic reflection profiles. The numerical simulations account for 3D variations of seismic wave speeds and density, topography and bathymetry, and attenuation. Simulations for several small recent events demonstrate that the combination of a detailed sedimentary basin model and an accurate numerical technique facilitates the simulation of ground motion at periods of 2 seconds and longer inside the Los Angeles basin and 6 seconds and longer elsewhere. Peak ground displacement, velocity and acceleration maps illustrate that significant amplification occurs in the basin. Centroid-Moment Tensor mechanisms are obtained based upon Pnl and surface waveforms and numerically calculated 3D Frechet derivatives. We use a combination of waveform and waveform-envelope misfit criteria, and facilitate pure double-couple or zero-trace moment-tensor inversions.

  10. Simulation of 2D Brain's Potential Distribution Based on Two Electrodes ECVT Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Sirait, S. H.; Edison, R. E.; Baidillah, M. R.; Taruno, W. P.; Haryanto, F.

    2016-08-01

    The aim of this study is to simulate the potential distribution of a 2D brain geometry based on two-electrode ECVT. ECVT (electrical capacitance tomography) is a tomography modality which produces a dielectric distribution image of a subject from measurements at several capacitance electrodes. This study begins by producing the 2D brain geometry from an MRI image and then setting the boundary conditions on the boundaries of the geometry. The boundary-condition values follow the potential values used in two-electrode brain ECVT; for this reason, the first boundary is set to a 20 V, 2.5 MHz signal and the other boundary is set to ground. The Poisson equation is implemented as the governing equation in the 2D brain geometry and the finite element method is used to solve it. A simulated Hodgkin-Huxley action potential is applied as a disturbance potential in the geometry. The study comprises two parts: simulation without the disturbance potential and simulation with it. From this study, the time-dependent potential distributions of the 2D brain geometry have been generated for both the undisturbed and disturbed cases.

  11. A Local Order Parameter-Based Method for Simulation of Free Energy Barriers in Crystal Nucleation.

    PubMed

    Eslami, Hossein; Khanjari, Neda; Müller-Plathe, Florian

    2017-03-14

    While global order parameters have been widely used as reaction coordinates in nucleation and crystallization studies, their use in nucleation studies is claimed to have a serious drawback. In this work, a local order parameter is introduced as a local reaction coordinate to drive the simulation from the liquid phase to the solid phase and vice versa. This local order parameter holds information regarding the order in the first- and second-shell neighbors of a particle and has different well-defined values for local crystallites and disordered neighborhoods but is insensitive to the type of the crystal structure. The order parameter is employed in metadynamics simulations to calculate the solid-liquid phase equilibria and free energy barrier to nucleation. Our results for repulsive soft spheres and the Lennard-Jones potential, LJ(12-6), reveal better-resolved solid and liquid basins compared with the case in which a global order parameter is used. It is also shown that the configuration space is sampled more efficiently in the present method, allowing a more accurate calculation of the free energy barrier and the solid-liquid interfacial free energy. Another feature of the present local order parameter-based method is that it is possible to apply the bias potential to regions of interest in the order parameter space, for example, on the largest nucleus in the case of nucleation studies. In the present scheme for metadynamics simulation of the nucleation in supercooled LJ(12-6) particles, unlike the cases in which global order parameters are employed, there is no need to have an estimate of the size of the critical nucleus and to refine the results with the results of umbrella sampling simulations. The barrier heights and the nucleation pathway obtained from this method agree very well with the results of former umbrella sampling simulations.
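
    The metadynamics bookkeeping that such an order parameter is plugged into can be sketched briefly; a minimal Python illustration of Gaussian-hill deposition along a collective variable s (all numerical values and the dummy trajectory are assumptions, not the paper's setup):

        # Metadynamics bias bookkeeping: a history-dependent bias built from Gaussian
        # hills deposited along an order parameter s, plus the corresponding bias force.
        import numpy as np

        hill_centers, hill_height, hill_width = [], 0.5, 0.05   # assumed units and values

        def bias(s):
            """History-dependent bias V(s) = sum of deposited Gaussian hills."""
            if not hill_centers:
                return 0.0
            c = np.asarray(hill_centers)
            return float(np.sum(hill_height * np.exp(-(s - c) ** 2 / (2 * hill_width ** 2))))

        def bias_force(s):
            """-dV/ds, added to the physical force acting on the collective variable."""
            if not hill_centers:
                return 0.0
            c = np.asarray(hill_centers)
            return float(np.sum(hill_height * (s - c) / hill_width ** 2
                                * np.exp(-(s - c) ** 2 / (2 * hill_width ** 2))))

        # during the MD loop, a hill is dropped at the current value of s every tau steps
        # (a dummy random trajectory stands in for the simulation here):
        for step, s_current in enumerate(np.random.default_rng(5).uniform(0, 1, 5000)):
            if step % 500 == 0:
                hill_centers.append(s_current)

        # at convergence, the free energy along s is estimated as F(s) ~ -V(s) + const.
        print(bias(0.5), bias_force(0.5))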

  12. Full wave simulation of lower hybrid waves in Maxwellian plasma based on the finite element method

    SciTech Connect

    Meneghini, O.; Shiraiwa, S.; Parker, R.

    2009-09-15

    A full wave simulation of the lower-hybrid (LH) wave based on the finite element method is presented. For the LH wave, the most important terms of the dielectric tensor are the cold plasma contribution and the electron Landau damping (ELD) term, which depends only on the component of the wave vector parallel to the background magnetic field. The nonlocal hot plasma ELD effect was expressed as a convolution integral along the magnetic field lines and the resultant integro-differential Helmholtz equation was solved iteratively. The LH wave propagation in a Maxwellian tokamak plasma based on the Alcator C experiment was simulated for electron temperatures in the range of 2.5-10 keV. Comparison with ray tracing simulations showed good agreement when the single pass damping is strong. The advantages of the new approach include a significant reduction of computational requirements compared to full wave spectral methods and seamless treatment of the core, the scrape off layer and the launcher regions.

  13. A Volume-of-Fluid based simulation method for wave impact problems

    NASA Astrophysics Data System (ADS)

    Kleefsman, K. M. T.; Fekken, G.; Veldman, A. E. P.; Iwanowski, B.; Buchner, B.

    2005-06-01

    In this paper, some aspects of water impact and green water loading are considered by numerically investigating a dambreak problem and water entry problems. The numerical method is based on the Navier-Stokes equations that describe the flow of an incompressible viscous fluid. The equations are discretised on a fixed Cartesian grid using the finite volume method. Even though very small cut cells can appear when moving an object through the fixed grid, the method is stable. The free surface is displaced using the Volume-of-Fluid method together with a local height function, resulting in a strictly mass conserving method. The choice of boundary conditions at the free surface appears to be crucial for the accuracy and robustness of the method. For validation, results of a dambreak simulation are shown that can be compared with measurements. A box has been placed in the flow, as a model for a container on the deck of an offshore floater on which forces are calculated. The water entry problem has been investigated by dropping wedges with different dead-rise angles, a cylinder and a cone into calm water with a prescribed velocity. The resulting free surface dynamics, with the sideways jets, has been compared with photographs of experiments. Also a comparison of slamming coefficients with theory and experimental results has been made. Finally, a drop test with a free falling wedge has been simulated.

  14. Minimizing the Discrepancy between Simulated and Historical Failures in Turbine Engines: A Simulation-Based Optimization Method (Postprint)

    DTIC Science & Technology

    2015-01-01

    AFRL-RX-WP-JA-2015-0169; report period 15 November 2011 – 30 December 2014; the final publication is available at http://dx.doi.org/10.1155/2015/813565. The reliability modeling of a module in a turbine engine

  15. On a Wavelet-Based Method for the Numerical Simulation of Wave Propagation

    NASA Astrophysics Data System (ADS)

    Hong, Tae-Kyung; Kennett, B. L. N.

    2002-12-01

    A wavelet-based method for the numerical simulation of acoustic and elastic wave propagation is developed. Using a displacement-velocity formulation and treating spatial derivatives with linear operators, the wave equations are rewritten as a system of equations whose evolution in time is controlled by first-order derivatives. The linear operators for spatial derivatives are implemented in wavelet bases using an operator projection technique with nonstandard forms of wavelet transform. Using a semigroup approach, the discretized solution in time can be represented in an explicit recursive form, based on Taylor expansion of exponential functions of operator matrices. The boundary conditions are implemented by augmenting the system of equations with equivalent force terms at the boundaries. The wavelet-based method is applied to the acoustic wave equation with rigid boundary conditions at both ends in a 1-D domain and to the elastic wave equation with traction-free boundary conditions at a free surface in 2-D spatial media. The method can be applied directly to media with plane surfaces, and surface topography can be included with the aid of distortion of the grid describing the properties of the medium. The numerical results are compared with analytic solutions based on the Cagniard technique and show high accuracy. The wavelet-based approach is also demonstrated for complex media including highly varying topography or stochastic heterogeneity with rapid variations in physical parameters. These examples indicate the value of the approach as an accurate and stable tool for the simulation of wave propagation in general complex media.
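
    The explicit recursive time stepping described above amounts to applying a truncated Taylor expansion of exp(dt*A) to the state vector; a minimal Python sketch, with a finite-difference second derivative standing in for the wavelet-domain operator (grid, time step and truncation order are assumptions):

        # Truncated-Taylor time stepping u(t+dt) ~= sum_{k<=K} (dt*A)^k/k! u(t) for the
        # 1-D acoustic wave equation in displacement-velocity form, rigid ends.
        import numpy as np

        n, c, dx = 200, 1.0, 1.0 / 200
        dt, n_terms = 0.4 * dx / c, 4        # 4th-order truncation keeps this dt stable

        # second-derivative operator with rigid (u = 0) ends, standing in for the wavelet operator
        D2 = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
              + np.diag(np.full(n - 1, 1.0), 1)) / dx**2

        # first-order system d/dt [u, v] = A [u, v] with du/dt = v, dv/dt = c^2 u_xx
        A = np.block([[np.zeros((n, n)), np.eye(n)],
                      [c**2 * D2,        np.zeros((n, n))]])

        x = np.linspace(0, 1, n)
        state = np.concatenate([np.exp(-((x - 0.5) / 0.05) ** 2), np.zeros(n)])  # pulse at rest

        def step(u, A, dt, k_max):
            """Advance one step with the truncated Taylor expansion of exp(dt*A)."""
            out, term = u.copy(), u.copy()
            for k in range(1, k_max + 1):
                term = (dt / k) * (A @ term)
                out += term
            return out

        for _ in range(300):
            state = step(state, A, dt, n_terms)

        print("max |u| after 300 steps:", np.abs(state[:n]).max())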

  16. Stray light analysis and suppression method of dynamic star simulator based on LCOS splicing technology

    NASA Astrophysics Data System (ADS)

    Meng, Yao; Zhang, Guo-yu

    2015-10-01

    A star simulator serves as ground calibration equipment for the star sensor; it tests the related parameters and performance of the star sensor. At present, when a dynamic star simulator based on LCOS splicing is identified by the star sensor, a major problem is the poor contrast of the LCOS. In this paper, we analyse the cause of the LCOS stray light, namely the relation between the incident angle of the light and the contrast ratio, and set up the functional relationship between the angle and the irradiance of the stray light. According to this relationship, we propose a scheme that controls the incident angle. A popular approach is to use a compound parabolic concentrator (CPC); although in theory it can control any angle we want, in practice it is usually used above +/-15° because of its length and manufacturing cost. We therefore place a telescopic system in front of the CPC, whose principle is the same as that of a laser beam expander. The CPC is simulated with TracePro to obtain the exit-surface irradiance, and the telescopic system is designed in ZEMAX to correct the chromatic aberration. As a result, we obtain a collimating light source whose viewing angle is less than +/-5° and whose uniform irradiation area is greater than 20 mm × 20 mm.

  17. A novel antibody humanization method based on epitopes scanning and molecular dynamics simulation.

    PubMed

    Zhang, Ding; Chen, Cai-Feng; Zhao, Bin-Bin; Gong, Lu-Lu; Jin, Wen-Jing; Liu, Jing-Jun; Wang, Jing-Fei; Wang, Tian-Tian; Yuan, Xiao-Hui; He, You-Wen

    2013-01-01

    1-17-2 is a rat anti-human DEC-205 monoclonal antibody that induces internalization and delivers antigen to dendritic cells (DCs). The potential clinical application of this antibody is limited by its murine origin. Traditional humanization methods such as complementarity-determining region (CDR) grafting often lead to decreased or even lost affinity. Here we have developed a novel antibody humanization method based on computer modeling and bioinformatics analysis. First, we used homology modeling technology to build a precise model of the Fab. A novel epitope scanning algorithm was designed to identify antigenic residues in the framework regions (FRs) that need to be mutated to their human counterparts in the humanization process. Then virtual mutation and molecular dynamics (MD) simulation were used to assess the conformational impact imposed by all the mutations. By comparing the root-mean-square deviations (RMSDs) of the CDRs, we found five key residues whose mutations would destroy the original conformation of the CDRs. These residues need to be back-mutated to rescue the antibody binding affinity. Finally we constructed the antibodies in vitro and compared their binding affinity by flow cytometry and surface plasmon resonance (SPR) assay. The binding affinity of the refined humanized antibody was similar to that of the original rat antibody. Our results have established a novel method based on epitope scanning and MD simulation for antibody humanization.

  18. Parallel octree-based multiresolution mesh method for large-scale earthquake ground motion simulation

    NASA Astrophysics Data System (ADS)

    Kim, Eui Joong

    Large scale ground motion simulation requires supercomputing systems in order to obtain reliable and useful results within reasonable elapsed time. In this study, we develop a framework for terascale ground motion simulations in highly heterogeneous basins. As part of the development, we present a parallel octree-based multiresolution finite element methodology for the elastodynamic wave propagation problem. The octree-based multiresolution finite element method reduces memory use significantly and improves overall computational performance. The framework comprises three parts: (1) an octree-based mesh generator, Euclid, developed by Tu and O'Hallaron; (2) a parallel mesh partitioner, ParMETIS, developed by Karypis et al. [2]; and (3) a parallel octree-based multiresolution finite element solver, QUAKE, developed in this study. Realistic earthquake parameters, soil material properties, and sedimentary basin dimensions produce extremely large meshes. The out-of-core version of the octree-based mesh generator, Euclid, overcomes the resulting severe memory limitations. By using a parallel, distributed-memory graph partitioning algorithm, ParMETIS partitions large meshes, overcoming the memory and cost problem. Despite the capability of the octree-based multiresolution mesh method (OBM3), large problem sizes necessitate parallelism to handle the large memory and work requirements. The parallel OBM3 elastic wave propagation code, QUAKE, has been developed to address these issues. The numerical methodology and the framework have been used to simulate the seismic response of both idealized systems and of the Greater Los Angeles basin to simple pulses and to a mainshock of the 1994 Northridge Earthquake, for frequencies of up to 1 Hz and domain size of 80 km x 80 km x 30 km. In the idealized models, QUAKE shows good agreement with the analytical Green's function solutions. In the realistic models for the Northridge earthquake mainshock, QUAKE qualitatively agrees, with at most

  19. A simple numerical method for snowmelt simulation based on the equation of heat energy.

    PubMed

    Stojković, Milan; Jaćimović, Nenad

    2016-01-01

    This paper presents a one-dimensional numerical model for snowmelt/accumulation simulations, based on the equation of heat energy. It is assumed that the snow column is homogeneous at the current time step; however, its characteristics such as snow density and thermal conductivity are treated as functions of time. The equation of heat energy for the snow column is solved using the implicit finite difference method. The incoming energy at the snow surface includes the following parts: conduction, convection, radiation and the raindrop energy. Along with the snow melting process, the model includes a model for snow accumulation. The Euler method for the numerical integration of the balance equation is utilized in the proposed model. The model applicability is demonstrated at the meteorological station Zlatibor, located in the western region of Serbia at 1,028 meters above sea level (m.a.s.l.). Simulation results of snowmelt/accumulation suggest that the proposed model achieved better agreement with observed data in comparison with the temperature index method. The proposed method may be utilized as part of a deterministic hydrological model in order to improve short- and long-term predictions of possible flood events.
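
    A minimal Python sketch of the implicit finite-difference step for heat conduction in a snow column; the material constants, grid, forcing and boundary temperatures are placeholder assumptions, not the paper's calibrated values:

        # Backward-Euler (implicit) finite differences for 1-D heat conduction in snow.
        import numpy as np

        n, depth = 50, 1.0                       # grid points, snow depth (m)
        dz = depth / (n - 1)
        rho, cp, k = 300.0, 2100.0, 0.3          # assumed snow density, heat capacity, conductivity
        alpha = k / (rho * cp)                   # thermal diffusivity (m2/s)
        dt = 600.0                               # 10-minute time step (s)

        T = np.full(n, -5.0)                     # initial snow temperature (deg C)
        r = alpha * dt / dz**2

        # assemble the constant tridiagonal system (I + r*L) T_new = T_old
        A = np.zeros((n, n))
        for i in range(1, n - 1):
            A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
        A[0, 0] = A[-1, -1] = 1.0                # Dirichlet rows: surface and ground temperature

        for step in range(144):                  # one day of 10-minute steps
            rhs = T.copy()
            rhs[0] = 2.0                         # surface forced above freezing (assumed forcing)
            rhs[-1] = 0.0                        # ground interface held at 0 deg C
            T = np.linalg.solve(A, rhs)

        print("temperature profile (top 5 nodes):", np.round(T[:5], 2))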

  20. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

    A method for distributed agent based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and, programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.

  1. Simulation and evaluation of tablet-coating burst based on finite element method.

    PubMed

    Yang, Yan; Li, Juan; Miao, Kong-Song; Shan, Wei-Guang; Tang, Lan; Yu, Hai-Ning

    2016-09-01

    The objective of this study was to simulate and evaluate the burst behavior of coated tablets. Three-dimensional finite element models of tablet coating were established using the software ANSYS. The swelling pressure of the cores was measured with a self-made device and applied at the internal surface of the models. Mechanical properties of the polymer film were determined using a texture analyzer and applied as material properties of the models. The resulting finite element models were validated by experimental data. The validated models were used to assess the factors that influence burst behavior and to predict the coating burst behavior. The simulated coating burst and failure location matched the experimental data well. It was found that internal swelling pressure, inside corner radius and corner thickness were the three main factors controlling the stress distribution and burst behavior. Based on the linear relationship between the internal pressure and the maximum principal stress on the coating, the burst pressure of the coatings was calculated and used to predict the burst behavior. This study demonstrated that the burst behavior of coated tablets can be simulated and evaluated by the finite element method.

  2. Density-of-states based Monte Carlo methods for simulation of biological systems

    NASA Astrophysics Data System (ADS)

    Rathore, Nitin; Knotts, Thomas A.; de Pablo, Juan J.

    2004-03-01

    We have developed density-of-states [1] based Monte Carlo techniques for simulation of biological molecules. Two such methods are discussed. The first, Configurational Temperature Density of States (CTDOS) [2], relies on computing the density of states of a peptide system from knowledge of its configurational temperature. The reciprocal of this intrinsic temperature, computed from instantaneous configurational information of the system, is integrated to arrive at the density of states. The method shows improved efficiency and accuracy over techniques that are based on histograms of random visits to distinct energy states. The second approach, Expanded Ensemble Density of States (EXEDOS), incorporates elements from both the random walk method and the expanded ensemble formalism. It is used in this work to study mechanical deformation of model peptides. Results are presented in the form of force-extension curves and the corresponding potentials of mean force. The application of this proposed technique is further generalized to other biological systems; results will be presented for ion transport through protein channels, base stacking in nucleic acids and hybridization of DNA strands. [1]. F. Wang and D. P. Landau, Phys. Rev. Lett., 86, 2050 (2001). [2]. N. Rathore, T. A. Knotts IV and J. J. de Pablo, Biophys. J., Dec. (2003).

  3. Spin tracking simulations in AGS based on ray-tracing methods - bare lattice, no snakes -

    SciTech Connect

    Meot, F.; Ahrens, L.; Glenn, J.; Huang, H.; Luccio, A.; MacKay, W. W.; Roser, T.; Tsoupas, N.

    2009-09-01

    This Note reports on the first simulations of spin dynamics in the AGS using the ray-tracing code Zgoubi. It includes lattice analysis, comparisons with MAD, DA tracking, numerical calculation of depolarizing resonance strengths and comparisons with analytical models, etc. It also includes details on the setting-up of Zgoubi input data files and on the various numerical methods of concern in and available from Zgoubi. Simulations of the crossing and neighboring of spin resonances in the AGS ring, bare lattice, without snakes, have been performed, in order to assess the capabilities of Zgoubi in that matter, and are reported here. This yields a rather long document. The two main reasons for that are, on the one hand, the desire for an extended investigation of the energy span, and on the other hand, a thorough comparison of Zgoubi results with analytical models such as the 'thin lens' approximation, the weak resonance approximation, and the static case. Section 2 details the working hypotheses: AGS lattice data, formulae used for deriving various resonance related quantities from the ray-tracing based 'numerical experiments', etc. Section 3 gives inventories of the intrinsic and imperfection resonances together with, in a number of cases, the strengths derived from the ray-tracing. Section 4 gives the details of the numerical simulations of resonance crossing, including the behavior of various quantities (closed orbit, synchrotron motion, etc.) aimed at controlling that the conditions of particle and spin motion are correct. In a similar manner, Section 5 gives the details of the numerical simulations of spin motion in the static case: fixed energy in the neighborhood of the resonance. In Section 6, weak resonances are explored and Zgoubi results are compared with the Fresnel integral model. Section 7 shows the computation of the n-vector in the AGS lattice and the tuning considered. Many details on the numerical conditions, such as data files, are given in the Appendix.

  4. A VOF-based method for the simulation of thermocapillary flow

    NASA Astrophysics Data System (ADS)

    Ma, Chen; Bothe, Dieter

    2010-11-01

    This contribution concerns 3D direct numerical simulation of surface tension-driven two-phase flow with free deformable interface. The two-phase Navier-Stokes equations together with the energy balance in temperature form for incompressible, immiscible fluids are solved. We employ an extended VOF (volume of fluid) method, where the interface is kept sharp using the PLIC-method (piecewise linear interface construction). The surface tension, modeled as a body force via the interface delta-function, is assumed to be linearly dependent on temperature. The surface temperature gradient calculation is based on carefully computed interface temperatures. Numerical results on thermocapillary migration of droplets are obtained for a wide range of Marangoni numbers. Both the terminal and initial stage of the migration are studied and very good agreement with theoretical and experimental results is achieved. In addition, simulation of the Bénard-Marangoni instability in square containers with small aspect ratio and high-Prandtl-number fluids is discussed concerning the development and numbers of convection cells in relation to the aspect ratio.

  5. A simulation-based probabilistic design method for arctic sea transport systems

    NASA Astrophysics Data System (ADS)

    Martin, Bergström; Ove, Erikstad Stein; Sören, Ehlers

    2016-12-01

    When designing an arctic cargo ship, it is necessary to consider multiple stochastic factors. This paper evaluates the merits of a simulation-based probabilistic design method specifically developed to deal with this challenge. The outcome of the paper indicates that the incorporation of simulations and probabilistic design parameters into the design process enables more informed design decisions. For instance, it enables the assessment of the stochastic transport capacity of an arctic ship, as well as its long-term ice exposure, which can be used to determine an appropriate level of ice-strengthening. The outcome of the paper also indicates that significant gains in transport system cost-efficiency can be obtained by extending the boundaries of the design task beyond the individual vessel. In the case of industrial shipping, this allows, for instance, the consideration of port-based cargo storage facilities that tolerate temporary shortages in transport capacity and thus a reduction in the required fleet size or ship capacity.
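    To illustrate what a "stochastic transport capacity" assessment can look like, the following minimal Monte Carlo sketch samples ice-induced voyage delays and accumulates the cargo a small fleet can deliver in a year; every number (fleet size, voyage times, delay distribution) is a hypothetical placeholder, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inputs -- not taken from the paper.
n_years = 10_000          # Monte Carlo replications (one simulated operating year each)
n_ships = 3               # fleet size
cargo_per_voyage_t = 25_000
open_water_days = 14.0    # round-trip duration in ice-free conditions
ice_delay_median = 4.0    # median extra days per round trip due to ice (lognormal)
ice_delay_sigma = 0.6

capacity = np.empty(n_years)
for y in range(n_years):
    delivered = 0.0
    for _ in range(n_ships):
        days_used = 0.0
        while True:
            voyage = open_water_days + rng.lognormal(np.log(ice_delay_median), ice_delay_sigma)
            if days_used + voyage > 365.0:
                break
            days_used += voyage
            delivered += cargo_per_voyage_t
    capacity[y] = delivered

print(f"mean annual capacity: {capacity.mean():,.0f} t")
print(f"5th percentile (capacity met in 95% of years): {np.percentile(capacity, 5):,.0f} t")
```

    The 5th percentile is the kind of quantity that could then be compared against a contracted annual cargo volume when sizing the fleet or the port-based storage buffer.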

  6. A method based on Monte Carlo simulation for the determination of the G(E) function.

    PubMed

    Chen, Wei; Feng, Tiancheng; Liu, Jun; Su, Chuanying; Tian, Yanjie

    2015-02-01

    The G(E) function method is a spectrometric method for exposure dose estimation; this paper describes a Monte Carlo based method to determine the G(E) function of a 4″ × 4″ × 16″ NaI(Tl) detector. Simulated spectra of various monoenergetic gamma rays in the region of 40-3200 keV, and the corresponding deposited energy in an air ball in the energy region of the full-energy peak, were obtained using the Monte Carlo N-Particle Transport Code. The absorbed dose rate in air was obtained from the deposited energy and divided by the counts of the corresponding full-energy peak to get the G(E) function value at energy E in the spectra. The curve-fitting software 1stOpt was used to determine the coefficients of the G(E) function. Experimental results show that the dose rates calculated using the G(E) function determined by the authors' method agree well with those obtained with an ionisation chamber, with a maximum deviation of 6.31%.
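    A minimal sketch of how a fitted G(E) function is applied in practice is given below: the dose rate is the count-weighted sum of G(E) over the spectrum channels. The polynomial form and coefficients are placeholders, not the coefficients fitted in the paper.

```python
import numpy as np

# Hypothetical G(E) function: a polynomial in ln(E), a common form for
# spectrum-to-dose conversion; the coefficients below are placeholders.
coeffs = [-2.1, 1.3, -0.25, 0.02]   # c0 + c1*ln(E) + c2*ln(E)^2 + ...

def g_of_e(energy_keV):
    x = np.log(energy_keV)
    return np.exp(sum(c * x**k for k, c in enumerate(coeffs)))

# Toy measured spectrum: counts per second in each energy channel.
channel_energy_keV = np.linspace(40.0, 3200.0, 1024)
counts_per_s = np.random.default_rng(1).poisson(5.0, size=channel_energy_keV.size)

# Dose rate is the count-weighted sum of G(E) over all channels.
dose_rate = float(np.sum(counts_per_s * g_of_e(channel_energy_keV)))
print(f"estimated air dose rate (arbitrary units): {dose_rate:.3g}")
```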

  7. Numerical Simulation of Drosophila Flight Based on Arbitrary Lagrangian-Eulerian Method

    NASA Astrophysics Data System (ADS)

    Erzincanli, Belkis; Sahin, Mehmet

    2012-11-01

    A parallel unstructured finite volume algorithm based on the Arbitrary Lagrangian Eulerian (ALE) method has been developed in order to investigate the wake structure around a pair of flapping Drosophila wings. The numerical method uses a side-centered arrangement of the primitive variables that does not require any ad-hoc modifications to enhance pressure coupling. A radial basis function (RBF) interpolation method is also implemented in order to achieve large mesh deformations. For the parallel solution of the resulting large-scale algebraic equations, a matrix factorization similar to that of the projection method is introduced for the whole coupled system, and a two-cycle BoomerAMG solver from the HYPRE library, accessed through the PETSc library, is used for the scaled discrete Laplacian. The present numerical algorithm is initially validated for the flow past an oscillating circular cylinder in a channel and the flow induced by an oscillating sphere in a cubic cavity. The algorithm is then applied to the numerical simulation of the flow field around a pair of flapping Drosophila wings in hover flight. The time variation of the near-wake structure is shown along with the aerodynamic loads and particle traces. The authors acknowledge financial support from the Turkish National Scientific and Technical Research Council (TUBITAK) through project number 111M332. The authors would like to thank Michael Dickinson and Michael Elzinga for providing the experimental data.
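    The RBF mesh-deformation step can be illustrated independently of the flow solver: displacements prescribed on moving boundary nodes are interpolated to interior nodes by solving a small linear system for RBF weights. The sketch below uses a thin-plate-spline basis in 2-D and is only a schematic of the idea, not the authors' implementation.

```python
import numpy as np

def rbf_deform(boundary_pts, boundary_disp, interior_pts, eps=1e-10):
    """Propagate prescribed boundary displacements to interior mesh nodes with a
    thin-plate-spline radial basis function (2-D, no polynomial term)."""
    def phi(r):
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r > 0.0, r**2 * np.log(r), 0.0)

    # Interpolation matrix between boundary points.
    rb = np.linalg.norm(boundary_pts[:, None, :] - boundary_pts[None, :, :], axis=-1)
    A = phi(rb) + eps * np.eye(len(boundary_pts))      # small shift for conditioning
    weights = np.linalg.solve(A, boundary_disp)        # one weight column per displacement component

    # Evaluate the interpolant at the interior nodes.
    ri = np.linalg.norm(interior_pts[:, None, :] - boundary_pts[None, :, :], axis=-1)
    return phi(ri) @ weights

# Toy example: the top edge of a unit square moves upward; interior nodes follow smoothly.
boundary = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0],
                     [0.0, 1.0], [0.5, 1.0], [1.0, 1.0]])
disp = np.array([[0.0, 0.0], [0.0, 0.0], [0.0, 0.0],
                 [0.0, 0.2], [0.0, 0.2], [0.0, 0.2]])
interior = np.random.default_rng(0).uniform(0.1, 0.9, size=(5, 2))
print(rbf_deform(boundary, disp, interior))
```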

  8. Simulating underwater propulsion using an immersed boundary method based open-source solver

    NASA Astrophysics Data System (ADS)

    Senturk, Utku; Hemmati, Arman; Smits, Alexander J.

    2016-11-01

    The performance of a newly developed Immersed Boundary Method (IBM) incorporated into a finite volume solver is examined using foam-extend-3.2. IBM uses a discrete forcing approach based on the weighted least squares interpolation to preserve the sharpness of the boundary, which decreases the computational complexity of the problem. Initially, four case studies with gradually increasing complexities are considered to verify the accuracy of the IBM approach. These include the flow past 2D stationary and transversely oscillating cylinders and 3D wake of stationary and pitching flat plates with aspect ratio 1.0 at Re=2000. The primary objective of this study, which is pursued by an ongoing simulation of the wake formed behind a pitching deformable 3D flat plate, is to investigate the underwater locomotion of a fish at Re=10000. The results of the IBM based solver are compared to the experimental results, which suggest that the force computations are accurate in general. Spurious oscillations in the forces are observed for problems with moving bodies which change based on spatial and temporal grid resolutions. Although it still has the full advantage of the main code features, the IBM-based solver in foam-extend-3.2 requires further development to be exploited for complex grids. The work was supported by ONR under MURI Grant N00014-14-1-0533.

  9. Video-Based Method of Quantifying Performance and Instrument Motion During Simulated Phonosurgery

    PubMed Central

    Conroy, Ellen; Surender, Ketan; Geng, Zhixian; Chen, Ting; Dailey, Seth; Jiang, Jack

    2015-01-01

    Objectives/Hypothesis: To investigate the use of the Video-Based Phonomicrosurgery Instrument Tracking System to collect instrument position data during simulated phonomicrosurgery and calculate motion metrics using these data. We used this system to determine if novice subject motion metrics improved over 1 week of training. Study Design: Prospective cohort study. Methods: Ten subjects performed simulated surgical tasks once per day for 5 days. Instrument position data were collected and used to compute motion metrics (path length, depth perception, and motion smoothness). Data were analyzed to determine if motion metrics improved with practice time. Task outcome was also determined each day, and relationships between task outcome and motion metrics were used to evaluate the validity of motion metrics as indicators of surgical performance. Results: Significant decreases over time were observed for path length (P <.001), depth perception (P <.001), and task outcome (P <.001). No significant change was observed for motion smoothness. Significant relationships were observed between task outcome and path length (P <.001), depth perception (P <.001), and motion smoothness (P <.001). Conclusions: Our system can estimate instrument trajectory and provide quantitative descriptions of surgical performance. It may be useful for evaluating phonomicrosurgery performance. Path length and depth perception may be particularly useful indicators. PMID:24737286
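    A sketch of how such metrics can be computed from sampled instrument-tip positions is given below; the definitions (total path length, travel along the viewing axis for depth perception, integrated squared jerk for smoothness) are common choices and may differ in detail from the formulas used by the authors.

```python
import numpy as np

def motion_metrics(xyz, dt):
    """Simple motion metrics from an instrument-tip trajectory sampled at fixed dt."""
    steps = np.diff(xyz, axis=0)
    path_length = float(np.sum(np.linalg.norm(steps, axis=1)))
    depth_perception = float(np.sum(np.abs(steps[:, 2])))            # travel along the viewing (z) axis
    jerk = np.diff(xyz, n=3, axis=0) / dt**3                         # third finite difference
    smoothness = float(np.sum(np.linalg.norm(jerk, axis=1)**2) * dt) # integrated squared jerk
    return path_length, depth_perception, smoothness

# Toy trajectory sampled at 30 Hz.
t = np.arange(0, 5, 1 / 30)
xyz = np.c_[np.cos(t), np.sin(t), 0.1 * np.sin(3 * t)]
print(motion_metrics(xyz, dt=1 / 30))
```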

  10. A new method to extract stable feature points based on self-generated simulation images

    NASA Astrophysics Data System (ADS)

    Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen

    2015-10-01

    Recently, image processing has received a lot of attention in fields such as photogrammetry and medical imaging. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem, and feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge-response points; their disadvantage is the need to set the thresholds manually. The main idea of this paper is to obtain stable extrema with a machine learning algorithm. First, we use the ASIFT approach, coupled with illumination changes and blur, to generate multi-view simulated images, which form the set of simulated images of the original image. Because of the way the simulated image set is generated, the affine transformation of each generated image is also known; compared with the traditional matching process, which relies on the unstable RANSAC method to estimate the affine transformation, this approach is more stable and accurate. Second, we calculate the stability value of each feature point from the set of images and their known affine transformations, and we also extract feature properties of each point, such as DoG response, scale, and edge point density. These two together form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, training with Rank-SVM yields a weight vector. In use, the feature properties of each point and the learned weight vector give a sort value for each feature point that reflects its stability, and the feature points are ranked accordingly. In conclusion, we applied our algorithm and the original SIFT detector in comparative tests; under different viewpoint changes, blurs, and illuminations, the experimental results show that our algorithm is more efficient.

  11. [Method for environmental management in paper industry based on pollution control technology simulation].

    PubMed

    Zhang, Xue-Ying; Wen, Zong-Guo

    2014-11-01

    To evaluate the reduction potential of industrial water pollutant emissions and to study the application of technology simulation in pollutant control and environmental management, an Industrial Reduction Potential Analysis and Environment Management (IRPAEM) model was developed based on a coupling of "material-process-technology-product". The model integrates bottom-up modeling and scenario analysis, and was applied to China's paper industry. Results showed that under the CM scenario, the reduction potentials of wastewater, COD and ammonia nitrogen would reach 7 × 10^8 t, 39 × 10^4 t and 0.3 × 10^4 t, respectively, in 2015, and 13.8 × 10^8 t, 56 × 10^4 t and 0.5 × 10^4 t, respectively, in 2020. Strengthening end-of-pipe treatment would still be the key method to reduce emissions during 2010-2020, while the reduction effect of structural adjustment would be more obvious during 2015-2020. Pollution generation could basically reach the domestic or international advanced level of cleaner production in 2015 and 2020; the indices for wastewater and ammonia nitrogen would basically meet the emission standards in 2015 and 2020, while COD would not.

  12. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method based on Bayesian analysis to classify time series data from the international emissions trading market generated by agent-based simulation, and compares it with a Discrete Fourier Transform analysis. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods have revealed the following results: (1) the classification methods map the time series data onto distances, which are easier to understand and draw inferences from than the raw time series; (2) the methods can analyze uncertain time series data, including both stationary and non-stationary processes, using the distances obtained via agent-based simulation; and (3) the Bayesian analysis can resolve a 1% difference in the agents' emission reduction targets.

  13. [Simulation of water and carbon fluxes in the Harvard Forest area based on a data assimilation method].

    PubMed

    Zhang, Ting-Long; Sun, Rui; Zhang, Rong-Hua; Zhang, Lei

    2013-10-01

    Model simulation and in situ observation are the two most important means of studying the water and carbon cycles of terrestrial ecosystems, but each has its own advantages and shortcomings. Combining these two means would help to reflect the dynamic changes of ecosystem water and carbon fluxes more accurately, and data assimilation provides an effective way to integrate model simulation and in situ observation. Based on the observation data from the Harvard Forest Environmental Monitoring Site (EMS), and by using the ensemble Kalman filter algorithm, this paper assimilated the field-measured LAI and remote sensing LAI into the Biome-BGC model to simulate the water and carbon fluxes in the Harvard Forest area. Compared with the original model simulation without data assimilation, the improved Biome-BGC model with the assimilation of the field-measured LAI in 1998, 1999, and 2006 increased the coefficient of determination R² between model simulation and flux observation for the net ecosystem exchange (NEE) and evapotranspiration by 8.4% and 10.6%, decreased the sum of absolute error (SAE) and root mean square error (RMSE) of NEE by 17.7% and 21.2%, and decreased the SAE and RMSE of the evapotranspiration by 26.8% and 28.3%, respectively. After assimilating the MODIS LAI products of 2000-2004 into the improved Biome-BGC model, the R² between simulated and observed NEE and evapotranspiration increased by 7.8% and 4.7%, the SAE and RMSE of NEE decreased by 21.9% and 26.3%, and the SAE and RMSE of evapotranspiration decreased by 24.5% and 25.5%, respectively. These results suggest that the simulation accuracy of ecosystem water and carbon fluxes can be effectively improved if field-measured LAI or remote sensing LAI is integrated into the model.
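    A minimal sketch of the ensemble Kalman filter analysis step used in this kind of assimilation is shown below: an ensemble of model states is nudged toward an LAI observation through ensemble-estimated covariances. The two-variable state and all numbers are illustrative; the actual coupling to Biome-BGC is not reproduced.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H, seed=0):
    """One ensemble Kalman filter analysis step with perturbed observations.
    ensemble : (n_members, n_state) array of model states
    obs      : scalar observation (e.g. measured LAI)
    obs_var  : observation error variance
    H        : (n_state,) linear observation operator mapping state -> observed quantity
    """
    rng = np.random.default_rng(seed)
    n = ensemble.shape[0]
    Hx = ensemble @ H                                  # predicted observations, shape (n,)
    X = ensemble - ensemble.mean(0)
    Y = Hx - Hx.mean()
    P_xy = X.T @ Y / (n - 1)                           # state-observation covariance
    P_yy = Y @ Y / (n - 1) + obs_var                   # innovation variance
    K = P_xy / P_yy                                    # Kalman gain, shape (n_state,)
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), size=n)
    return ensemble + np.outer(perturbed_obs - Hx, K)

# Toy state: [LAI, soil water]; the observation measures LAI directly.
ens = np.random.default_rng(1).normal([3.0, 0.25], [0.5, 0.05], size=(50, 2))
print(enkf_update(ens, obs=3.6, obs_var=0.1**2, H=np.array([1.0, 0.0])).mean(0))
```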

  14. Advanced Spacecraft EM Modelling Based on Geometric Simplification Process and Multi-Methods Simulation

    NASA Astrophysics Data System (ADS)

    Leman, Samuel; Hoeppe, Frederic

    2016-05-01

    This paper presents the first results of a new generation of ElectroMagnetic (EM) methodology applied to spacecraft system modelling in the low frequency range (the system's dimensions are of the same order of magnitude as the wavelength). This innovative approach aims at implementing appropriate simplifications of the real system based on the identification of the dominant electrical and geometrical parameters driving the global EM behaviour. One rigorous but expensive simulation is performed to quantify the error introduced by the use of the simpler multi-models. If both the speed-up of the simulation time and the quality of the EM response are satisfactory, uncertainty simulations can be performed based on the simple-model library, implemented in a flexible and robust Kron network formalism. This methodology is expected to open up new perspectives concerning fast parametric analysis and deep understanding of system behaviour. It will support the identification of the main radiated and conducted coupling paths and of the sensitive EM parameters, in order to optimize protections and control disturbance sources during spacecraft design phases.

  15. A simulation-based marginal method for longitudinal data with dropout and mismeasured covariates.

    PubMed

    Yi, Grace Y

    2008-07-01

    Longitudinal data often contain missing observations and error-prone covariates. Extensive attention has been directed to analysis methods to adjust for the bias induced by missing observations. There is relatively little work on investigating the effects of covariate measurement error on estimation of the response parameters, especially on simultaneously accounting for the biases induced by both missing values and mismeasured covariates. It is not clear what the impact of ignoring measurement error is when analyzing longitudinal data with both missing observations and error-prone covariates. In this article, we study the effects of covariate measurement error on estimation of the response parameters for longitudinal studies. We develop an inference method that adjusts for the biases induced by measurement error as well as by missingness. The proposed method does not require the full specification of the distribution of the response vector but only requires modeling its mean and variance structures. Furthermore, the proposed method employs the so-called functional modeling strategy to handle the covariate process, with the distribution of covariates left unspecified. These features, plus the simplicity of implementation, make the proposed method very attractive. In this paper, we establish the asymptotic properties for the resulting estimators. With the proposed method, we conduct sensitivity analyses on a cohort data set arising from the Framingham Heart Study. Simulation studies are carried out to evaluate the impact of ignoring covariate measurement error and to assess the performance of the proposed method.

  16. Two methods for transmission line simulation model creation based on time domain measurements

    NASA Astrophysics Data System (ADS)

    Rinas, D.; Frei, S.

    2011-07-01

    The emission from transmission lines plays an important role in the electromagnetic compatibility of automotive electronic systems. In a frequency range below 200 MHz, radiation from cables is often the dominant emission factor; in higher frequency ranges, radiation from PCBs and their housing becomes more relevant. The main sources of this emission are the conducting traces. The established field measurement methods according to CISPR 25 for the evaluation of emissions suffer from the need to use large anechoic chambers. Furthermore, the measurement data cannot be used for simulation model creation in order to compute the overall fields radiated from a car. In this paper, a method to determine the far fields and a simulation model of radiating transmission lines, especially cable bundles and conducting traces on planar structures, is proposed. The method measures the electromagnetic near-field above the test object. Measurements are done in the time domain in order to obtain phase information and to reduce measurement time. On the basis of the near-field data, equivalent source identification can be done. By considering correlations between sources along each conductive structure in the model creation process, the model accuracy increases and computational costs can be reduced.

  17. Full wave simulation of waves in ECRIS plasmas based on the finite element method

    SciTech Connect

    Torrisi, G.; Mascali, D.; Neri, L.; Castro, G.; Patti, G.; Celona, L.; Gammino, S.; Ciavola, G.; Di Donato, L.; Sorbello, G.; Isernia, T.

    2014-02-12

    This paper describes the modeling and the full wave numerical simulation of electromagnetic wave propagation and absorption in an anisotropic magnetized plasma filling the resonant cavity of an electron cyclotron resonance ion source (ECRIS). The model assumes inhomogeneous, dispersive and tensorial constitutive relations. Maxwell's equations are solved by the finite element method (FEM), using the COMSOL Multiphysics® suite. All the relevant details have been considered in the model, including the non-uniform external magnetostatic field used for plasma confinement and the local electron density profile, resulting in the full-3D non-uniform magnetized plasma complex dielectric tensor. The more accurate plasma simulations clearly show the importance of the cavity effect on wave propagation and the effects of a resonant surface. These studies are the pillars for an improved ECRIS plasma modeling, which is mandatory to optimize the ion source output (especially the beam intensity distribution and charge state). Any new project concerning advanced ECRIS design will benefit from adequate modeling of self-consistent wave absorption.

  18. Monte Carlo method based radiative transfer simulation of stochastic open forest generated by circle packing application

    NASA Astrophysics Data System (ADS)

    Jin, Shengye; Tamura, Masayuki

    2013-10-01

    The Monte Carlo Ray Tracing (MCRT) method is a versatile tool for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, such as a forest area. Owing to its robustness to the complexity of 3-D scene alterations, the MCRT method is also employed for simulating the canopy radiative transfer regime, serving as a validation source for other radiative transfer models. In MCRT modeling within vegetation, one basic step is the set-up of the canopy scene. 3-D scanning applications have been used to represent canopy structure as accurately as possible, but they are time consuming. Botanical growth functions can be used to model single-tree growth, but they cannot express the interaction among trees. The L-system is also a functionally controlled tree growth simulation model, but it requires large computing memory; additionally, it only models the current tree pattern rather than tree growth while the radiative transfer regime is being simulated. Therefore, it is much more practical to use regular solids such as ellipsoids, cones, and cylinders to represent individual canopies. Considering the allelopathy phenomenon seen in some open-forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the number of trees (N) of the 3-D scene are declared first, similarly to a random open-forest image. Each canopy radius (rc) is then generated randomly, and the circle centres are placed on the XY-plane by the circle packing algorithm so that the circles do not overlap. To model the individual trees, we employ Ishikawa's tree growth regression model to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
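    A minimal sketch of the stochastic circle packing idea, placing non-overlapping circular canopies by rejection sampling until a requested crown coverage is reached, is given below; the domain size, radius range and coverage are illustrative, and the tree parameters from Ishikawa's model (DBH, height) are not included.

```python
import numpy as np

def pack_canopies(target_coverage, n_trees, domain=100.0, r_range=(2.0, 6.0),
                  max_tries=20000, seed=0):
    """Place up to n_trees non-overlapping circular canopies on a square domain by
    rejection sampling until the requested crown coverage (fraction) is reached."""
    rng = np.random.default_rng(seed)
    circles = []          # (x, y, r)
    covered = 0.0
    for _ in range(max_tries):
        if len(circles) >= n_trees or covered >= target_coverage:
            break
        r = rng.uniform(*r_range)
        x, y = rng.uniform(r, domain - r, size=2)
        # keep the candidate only if it does not overlap any accepted canopy
        if all((x - cx)**2 + (y - cy)**2 >= (r + cr)**2 for cx, cy, cr in circles):
            circles.append((x, y, r))
            covered += np.pi * r**2 / domain**2
    return circles, covered

circles, cov = pack_canopies(target_coverage=0.30, n_trees=60)
print(f"{len(circles)} canopies placed, coverage = {cov:.1%}")
```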

  19. Entropy in bimolecular simulations: A comprehensive review of atomic fluctuations-based methods.

    PubMed

    Kassem, Summer; Ahmed, Marawan; El-Sheikh, Salah; Barakat, Khaled H

    2015-11-01

    Entropy of binding constitutes a major, and in many cases a detrimental, component of the binding affinity in biomolecular interactions. While the enthalpic part of the binding free energy is easier to calculate, estimating the entropy of binding is far more complicated. A precise evaluation of entropy requires a comprehensive exploration of the complete phase space of the interacting entities. As this task is extremely hard to accomplish in the context of conventional molecular simulations, calculating entropy has involved many approximations. Most of these gold-standard methods have focused on developing a reliable estimation of the conformational part of the entropy. Here, we review these methods with a particular emphasis on the different techniques that extract entropy from atomic fluctuations. The theoretical formalism behind each method is explained, highlighting its strengths as well as its limitations, followed by a description of a number of case studies for each method. We hope that this brief, yet comprehensive, review provides a useful tool to understand these methods and realize the practical issues that may arise in such calculations.
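    As one concrete example of a fluctuation-based estimator of the kind reviewed here, Schlitter's formula bounds the configurational entropy using the covariance matrix of atomic positional fluctuations collected from a simulation:

```latex
S_{\mathrm{conf}} \;\le\; S_{\mathrm{Schlitter}}
  \;=\; \tfrac{1}{2}\,k_B \,\ln\det\!\left[\mathbf{1}
        \;+\; \frac{k_B T\, e^{2}}{\hbar^{2}}\,\mathbf{M}\,\boldsymbol{\sigma}\right],
\qquad
\sigma_{ij} \;=\; \left\langle (x_i - \langle x_i\rangle)\,(x_j - \langle x_j\rangle)\right\rangle
```

    where M is the diagonal matrix of atomic masses, T the temperature and e Euler's number; quasi-harmonic analysis uses the same covariance matrix with a slightly different functional form. This is offered only as an orientation example, not as the specific formalism of any one method covered by the review.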

  20. RC Model-based Comparison Tests of the Added Compliance Method with Computer Simulations and a Standard Method

    NASA Astrophysics Data System (ADS)

    Pałko, Krzysztof J.; Rogalski, Andrzej; Zieliński, Krzysztof; Glapiński, Jarosław; Kozarski, Maciej; Pałko, Tadeusz; Darowski, Marek

    2007-01-01

    Ventilation of the lungs involves the exchange of gases during inhalation and exhalation, moving respiratory gases between the alveoli and the atmosphere as a result of a pressure difference between them. During artificial ventilation, what is most important is to monitor specific mechanical parameters of the lungs, such as the total compliance of the respiratory system Cp (consisting of the lung and thorax compliances) and the airway resistance Rp, while the patient is ventilated. Therefore, the main goals of this work, as a first step towards using our earlier method of added lung compliance in clinical practice, were: 1) to carry out computer simulations to compare the application of this method during different expiratory phases, and 2) to compare this method with the standard method in terms of accuracy. Primary tests of the added-compliance method for measuring the main lung parameters have been made using an RC mechanical model of the lungs.
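    For orientation, the single-compartment RC description underlying such bench models relates airway pressure to flow and volume as P_aw(t) = Rp·Q(t) + V(t)/Cp + PEEP. The sketch below evaluates this relation for a constant-flow inflation using typical textbook values, not the parameters of the authors' test rig.

```python
import numpy as np

# Illustrative single-compartment RC model of passive ventilation:
#   P_aw(t) = Rp * Q(t) + V(t) / Cp + PEEP
# Parameter values are typical textbook numbers, not the article's bench setup.
Rp = 10.0        # airway resistance [cmH2O / (L/s)]
Cp = 0.05        # total respiratory compliance [L / cmH2O]
PEEP = 5.0       # end-expiratory pressure [cmH2O]
Q = 0.5          # constant inspiratory flow [L/s]
t_insp = 1.0     # inspiratory time [s]

t = np.linspace(0.0, t_insp, 200)
V = Q * t                              # delivered volume during constant-flow inflation
P_aw = Rp * Q + V / Cp + PEEP          # airway pressure waveform

print(f"peak pressure    : {P_aw[-1]:.1f} cmH2O")
print(f"plateau pressure : {V[-1] / Cp + PEEP:.1f} cmH2O (after flow stops)")
```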

  1. Waveform-based simulated annealing of crosshole transmission data: a semi-global method for estimating seismic anisotropy

    NASA Astrophysics Data System (ADS)

    Afanasiev, Michael V.; Pratt, R. Gerhard; Kamei, Rie; McDowell, Glenn

    2014-12-01

    We successfully apply the semi-global inverse method of simulated annealing to determine the best-fitting 1-D anisotropy model for use in acoustic frequency domain waveform tomography. Our forward problem is based on a numerical solution of the frequency domain acoustic wave equation, and we minimize wavefield phase residuals through random perturbations to a 1-D vertically varying anisotropy profile. Both real and synthetic examples are presented in order to demonstrate and validate the approach. For the real data example, we processed and inverted a cross-borehole data set acquired by Vale Technology Development (Canada) Ltd. in the Eastern Deeps deposit, located in Voisey's Bay, Labrador, Canada. The inversion workflow comprises the full suite of acquisition, data processing, starting model building through traveltime tomography, simulated annealing and finally waveform tomography. Waveform tomography is a high resolution method that requires an accurate starting model. A cycle-skipping issue observed in our initial starting model was hypothesized to be due to an erroneous anisotropy model from traveltime tomography. This motivated the use of simulated annealing as a semi-global method for anisotropy estimation. We initially tested the simulated annealing approach on a synthetic data set based on the Voisey's Bay environment; these tests were successful and led to the application of the simulated annealing approach to the real data set. Similar behaviour was observed in the anisotropy models obtained through traveltime tomography in both the real and synthetic data sets, where simulated annealing produced an anisotropy model which solved the cycle-skipping issue. In the real data example, simulated annealing led to a final model that compares well with the velocities independently estimated from borehole logs. By comparing the calculated ray paths and wave paths, we attributed the failure of anisotropic traveltime tomography to the breakdown of the ray
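    The structure of such a simulated annealing search can be sketched with a generic misfit function standing in for the waveform phase residual computed by the frequency-domain forward solver; the layer count, perturbation size and cooling schedule below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(profile):
    """Placeholder objective. In the application this would be the wavefield phase
    residual from the frequency-domain forward solver for this 1-D anisotropy
    profile; here we just use the distance to a hidden 'true' profile."""
    true = np.linspace(0.02, 0.10, profile.size)      # hypothetical anisotropy parameter per layer
    return float(np.sum((profile - true)**2))

n_layers = 20
profile = np.full(n_layers, 0.05)                      # starting 1-D anisotropy profile
best, best_cost = profile.copy(), misfit(profile)
cost, T = best_cost, 1.0

for it in range(20000):
    trial = profile.copy()
    trial[rng.integers(n_layers)] += rng.normal(0.0, 0.01)   # perturb one layer
    trial_cost = misfit(trial)
    # Metropolis acceptance: always take improvements, sometimes take uphill moves
    if trial_cost < cost or rng.random() < np.exp(-(trial_cost - cost) / T):
        profile, cost = trial, trial_cost
        if cost < best_cost:
            best, best_cost = profile.copy(), cost
    T *= 0.9995                                        # geometric cooling schedule

print(f"best misfit {best_cost:.3e} after annealing")
```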

  2. Incompressible SPH method based on Rankine source solution for violent water wave simulation

    NASA Astrophysics Data System (ADS)

    Zheng, X.; Ma, Q. W.; Duan, W. Y.

    2014-11-01

    With wide applications, the smoothed particle hydrodynamics (SPH) method has become an important numerical tool for solving complex flows, in particular those with a rapidly moving free surface. For such problems, incompressible Smoothed Particle Hydrodynamics (ISPH) has been shown in many papers in the literature to yield better and more stable pressure time histories than traditional SPH. However, the existing ISPH method directly approximates the second-order derivatives of the functions to be solved in the Poisson equation. The order of accuracy of the method becomes low, especially when particles are distributed in a disorderly manner, which generally happens when modelling violent water waves. This paper introduces a new formulation using the Rankine source solution. In the new approach to ISPH, the Poisson equation is first transformed into another form that does not include any derivative of the functions to be solved and, as a result, does not require numerical approximation of derivatives. The advantage of avoiding the numerical approximation of derivatives is obvious, potentially leading to a more robust numerical method. The newly formulated method is tested by simulating various water waves, and its convergence behaviour is studied numerically in this paper. Its results are compared with experimental data in some cases and reasonably good agreement is achieved. More importantly, the numerical results clearly show that the newly developed method needs fewer particles, and therefore lower computational cost, to achieve a similar level of accuracy, or produces more accurate results with the same number of particles, compared with traditional SPH and existing ISPH when applied to modelling water waves.

  3. The Corrected Simulation Method of Critical Heat Flux Prediction for Water-Cooled Divertor Based on Euler Homogeneous Model

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyang; Han, Le; Chang, Haiping; Liu, Nan; Xu, Tiejun

    2016-02-01

    An accurate critical heat flux (CHF) prediction method is the key factor for realizing the steady-state operation of a water-cooled divertor that works under one-sided high heat flux conditions. An improved CHF prediction method, based on Euler's homogeneous model for flow boiling combined with the realizable k-ɛ model for single-phase flow, is adopted in this paper, in which the time relaxation coefficients are corrected by the Hertz-Knudsen formula in order to improve the calculation accuracy of the vapor-liquid conversion efficiency under high heat flux conditions. Moreover, large local differences in the liquid physical properties, due to the extremely nonuniform heating flux on the cooling wall along the circumferential direction, are corrected using the IAPWS-IF97 formulation. Therefore, this method can improve the calculation accuracy of heat and mass transfer between the liquid phase and vapor phase in a CHF prediction simulation of water-cooled divertors under one-sided high heating conditions. An experimental example is simulated using both the improved and the uncorrected methods. The simulation results, such as temperature, void fraction and heat transfer coefficient, are analyzed to achieve the CHF prediction. The results show that the maximum error of the CHF based on the improved method is 23.7%, while that of the CHF based on the uncorrected method is up to 188%, as compared with the experimental results of Ref. [12]. Finally, this method is verified by comparison with the experimental data obtained by the International Thermonuclear Experimental Reactor (ITER), with a maximum error of only 6%. This method provides an efficient tool for the CHF prediction of water-cooled divertors. This work was supported by the National Magnetic Confinement Fusion Science Program of China (No. 2010GB104005) and the National Natural Science Foundation of China (No. 51406085).
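    For reference, the Hertz-Knudsen expression commonly used to relate the interphase mass flux to the departure from saturation, and hence to calibrate evaporation/condensation relaxation coefficients, has the form below; the exact variant used in the paper may differ.

```latex
j \;=\; \alpha\,\sqrt{\frac{M}{2\pi R}}\;
        \left(\frac{p_{\mathrm{sat}}(T_l)}{\sqrt{T_l}} \;-\; \frac{p_v}{\sqrt{T_v}}\right)
```

    Here j is the net evaporation mass flux per unit interface area, α the accommodation coefficient, M the molar mass, R the universal gas constant, T_l and T_v the liquid and vapor temperatures, p_sat(T_l) the saturation pressure and p_v the vapor partial pressure.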

  4. A simplified numerical simulation method of bending properties for glass fiber cloth reinforced denture base resin.

    PubMed

    Tanimoto, Yasuhiro; Nishiwaki, Tsuyoshi; Nishiyama, Norihiro; Nemoto, Kimiya; Maekawa, Zen-ichiro

    2002-06-01

    The purpose of this study was to propose a new numerical model of glass fiber cloth reinforced denture base resin (GFRP). The proposed model is constructed with isotropic shell, beam and orthotropic shell elements representing the outermost resin, interlaminar resin and glass fiber cloth, respectively. The proposed model was applied to failure progress analysis under three-point bending conditions, and its validity was checked through comparisons with experimental results. The failure progress behaviors involving local failures, such as interlaminar delamination and resin failure, could be simulated using the numerical model for analyzing the failure progress of GFRP. It is concluded that the model is effective for the failure progress analysis of GFRP.

  5. DETECTORS AND EXPERIMENTAL METHODS Design and simulations for the detector based on DSSSD

    NASA Astrophysics Data System (ADS)

    Xu, Yan-Bing; Wang, Huan-Yu; Meng, Xiang-Cheng; Wang, Hui; Lu, Hong; Ma, Yu-Qian; Li, Xin-Qiao; Shi, Feng; Wang, Ping; Zhao, Xiao-Yun; Wu, Feng

    2010-12-01

    The present paper describes the design and simulation results of a position-sensitive charged particle detector based on the Double Sided Silicon Strip Detector (DSSSD). The characteristics of the DSSSD and its test results are also discussed. With the application of the DSSSD, the position-sensitive charged particle detector can not only provide particle flux and energy spectrum information and identify different types of charged particles, but also measure the location and incidence angle of the particles. As the detector can make multiparameter measurements of charged particles, it is widely used in space detection and exploration missions, such as charged particle detection related to earthquakes, space environment monitoring and solar activity inspection.

  6. Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions

    SciTech Connect

    Chen, Xiaodong; Yang, Vigor

    2014-07-15

    In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales. A notable example is liquid-jet atomization and subsequent droplet dynamics. It is essential to characterize the detailed flow physics with variable length scales with high fidelity, in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed based on distance- and topology-oriented criteria for thin regions with confining wall/plane of symmetry and in any situation, respectively. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares against a critical value, the ratio of an interfacial cell size to the distance between the mass center of the cell and a reference plane. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The requirement for interfacial mesh refinement can be detected swiftly, without the need of thickness information, equation solving, variable averaging or mesh repairing. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness. These include the dynamics of colliding droplets, droplet motions in a microchannel, and atomization of liquid impinging jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.

  7. Cut-cell method based large-eddy simulation of tip-leakage flow

    NASA Astrophysics Data System (ADS)

    Pogorelov, Alexej; Meinke, Matthias; Schröder, Wolfgang

    2015-07-01

    The turbulent low Mach number flow through an axial fan at a Reynolds number of 9.36 × 10^5 based on the outer casing diameter is investigated by large-eddy simulation. A finite-volume flow solver in an unstructured hierarchical Cartesian setup for the compressible Navier-Stokes equations is used. To account for sharp edges, a fully conservative cut-cell approach is applied. A newly developed rotational periodic boundary condition for Cartesian meshes is introduced such that the simulations are performed just for a 72° segment, i.e., the flow field over one out of five axial blades is resolved. The focus of this numerical analysis is on the development of the vortical flow structures in the tip-gap region. A detailed grid convergence study is performed on four computational grids with 50 × 10^6, 250 × 10^6, 1 × 10^9, and 1.6 × 10^9 cells. Results of the instantaneous and the mean fan flow field are thoroughly analyzed based on the solution with 1 × 10^9 cells. High levels of turbulent kinetic energy and pressure fluctuations are generated by a tip-gap vortex upstream of the blade, the separating vortices inside the tip gap, and a counter-rotating vortex on the outer casing wall. An intermittent interaction of the turbulent wake, generated by the tip-gap vortex, with the downstream blade, leads to a cyclic transition with high pressure fluctuations on the suction side of the blade and a decay of the tip-gap vortex. The disturbance of the tip-gap vortex results in an unsteady behavior of the turbulent wake causing the intermittent interaction. For this interaction and the cyclic transition, two dominant frequencies are identified which perfectly match with the characteristic frequencies in the experimental sound power level and therefore explain their physical origin.

  8. Detached eddy simulation for turbulent fluid-structure interaction of moving bodies using the constraint-based immersed boundary method

    NASA Astrophysics Data System (ADS)

    Nangia, Nishant; Bhalla, Amneet P. S.; Griffith, Boyce E.; Patankar, Neelesh A.

    2016-11-01

    Flows over bodies of industrial importance often contain both an attached boundary layer region near the structure and a region of massively separated flow near its trailing edge. When simulating these flows with turbulence modeling, the Reynolds-averaged Navier-Stokes (RANS) approach is more efficient in the former, whereas large-eddy simulation (LES) is more accurate in the latter. Detached-eddy simulation (DES), based on the Spalart-Allmaras model, is a hybrid method that switches from RANS mode of solution in attached boundary layers to LES in detached flow regions. Simulations of turbulent flows over moving structures on a body-fitted mesh incur an enormous remeshing cost every time step. The constraint-based immersed boundary (cIB) method eliminates this operation by placing the structure on a Cartesian mesh and enforcing a rigidity constraint as an additional forcing in the Navier-Stokes momentum equation. We outline the formulation and development of a parallel DES-cIB method using adaptive mesh refinement. We show preliminary validation results for flows past stationary bodies with both attached and separated boundary layers along with results for turbulent flows past moving bodies. This work is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1324585.

  9. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    NASA Astrophysics Data System (ADS)

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; Bettencourt, Matthew

    2016-12-01

    We propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.

  10. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    SciTech Connect

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; Bettencourt, Matthew

    2016-08-09

    Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.

  11. Statistical modification analysis of helical planetary gears based on response surface method and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Guo, Fan

    2015-11-01

    Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on the gear modification effects. In order to investigate the effect of uncertainty in the tooth modification amounts on the dynamic behavior of a helical planetary gear train, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. By using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process onto variations of the tooth modification amounts, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behavior does not obey the normal distribution even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
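    The combination of a fitted response surface with Monte Carlo sampling can be sketched as follows: normally distributed modification amounts are drawn and pushed through a quadratic surrogate for the DTE fluctuation, and the skewness of the output shows why the response need not remain normally distributed. The surrogate coefficients and input statistics below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fitted quadratic response surface for the DTE fluctuation as a
# function of two tooth-modification amounts x1, x2 (micrometres).
def dte_fluctuation(x1, x2):
    return 4.0 - 0.12 * x1 - 0.08 * x2 + 0.004 * x1**2 + 0.003 * x2**2 + 0.002 * x1 * x2

# Random manufacturing/installation errors shifted onto the modification amounts.
n = 100_000
x1 = rng.normal(15.0, 2.0, n)     # nominal 15 um, std 2 um (illustrative)
x2 = rng.normal(12.0, 2.0, n)

dte = dte_fluctuation(x1, x2)
skew = float(((dte - dte.mean())**3).mean() / dte.std()**3)
print(f"mean DTE fluctuation : {dte.mean():.3f}")
print(f"std  DTE fluctuation : {dte.std():.3f}")
print(f"skewness (non-zero => non-normal output): {skew:.2f}")
```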

  12. A method for motion simulator design based on modeling characteristics of the human operator

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1978-01-01

    A design criterion is obtained to compare two simulators and evaluate their equivalence or credibility. In the subsequent analysis, the comparison of two simulators can be considered the same problem as the comparison of a real-world situation and a simulation's representation of this real-world situation. The design criterion developed involves modeling the human operator and defining simple parameters to describe his behavior in the simulator and in the real-world situation. In the process of obtaining human operator parameters that define characteristics for evaluating simulators, measures are also obtained of the human operator characteristics that describe the human as an information processor and controller. First, a study is conducted of the simulator design problem in such a manner that this modeling approach can be used to develop a criterion for the comparison of two simulators.

  13. Fast Simulation Method for Ocean Wave Based on Ocean Wave Spectrum and Improved Gerstner Model with GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Wenqiao; Zhang, Jing; Zhang, Tianchi

    2017-01-01

    Because of the randomness and complexity of ocean waves, the simulation of a large-scale ocean requires a great amount of computation; the computational efficiency is low and the real-time performance is poor. A fast method of wave simulation is therefore proposed, based on observations and research results from oceanography. It takes advantage of a grid combined with level-of-detail (LOD) and projection techniques, and uses a height map of the ocean, formed by retrieval of the ocean wave spectrum and directional spectrum and computed with the FFT. The height map is cyclically mapped onto the LOD/projected grid on the GPU to obtain the dynamic height data and the ocean simulation. The experimental results show that the method is vivid and consistent with the randomness and complexity of ocean waves; it effectively improves the simulation speed of the waves and satisfies the real-time and fidelity requirements of an ocean simulation system.
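    The spectrum-to-height-map step can be illustrated with a small CPU sketch: complex Gaussian noise is shaped by a directional wave spectrum and inverse-FFT'd into a height field. A Phillips-type spectrum is used below as a stand-in for the retrieved ocean wave and directional spectra, and the GPU mapping onto the LOD/projected grid is not reproduced.

```python
import numpy as np

def ocean_heightfield(N=256, L=500.0, wind=(12.0, 0.0), A=1e-4, g=9.81, seed=0):
    """Synthesize one frame of an ocean height map on an N x N grid of side L [m]
    by filtering complex Gaussian noise with a Phillips-type directional spectrum
    and taking an inverse FFT; a stand-in, not the paper's retrieved spectrum."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)       # wavenumbers [rad/m]
    kx, ky = np.meshgrid(k, k)
    k_mag = np.hypot(kx, ky)
    k_mag[0, 0] = 1.0                                  # avoid division by zero at k = 0

    w_speed = np.hypot(*wind)
    w_dir = np.array(wind) / w_speed
    L_w = w_speed**2 / g                               # largest waves from a sustained wind
    cos_factor = (kx * w_dir[0] + ky * w_dir[1])**2 / k_mag**2
    phillips = A * np.exp(-1.0 / (k_mag * L_w)**2) / k_mag**4 * cos_factor
    phillips[0, 0] = 0.0                               # no mean (DC) component

    noise = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    spectrum = noise * np.sqrt(phillips / 2.0)
    height = np.fft.ifft2(spectrum).real * N * N       # height map, arbitrary vertical scale
    return height

h = ocean_heightfield()
print(h.shape, float(h.std()))
```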

  14. Ground motion simulation for the 23 August 2011, Mineral, Virginia earthquake using physics-based and stochastic broadband methods

    USGS Publications Warehouse

    Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz

    2015-01-01

    Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare with observed motions. The methods include a physics‐based model by Hartzell et al. (1999, 2005), a stochastic source‐based model by Boore (2009), and a stochastic site‐based model by Rezaeian and Der Kiureghian (2010, 2012). The ground‐motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground‐motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics‐based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site‐based model also yields more realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source‐based model has larger bias at lower frequencies (<0.5  Hz) and cannot reproduce the varying frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows that there is no obvious pattern with distance in the simulation bias, but there is some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall‐off with distance for all three models, comparable PGA and PSA amplitudes for the physics‐based and stochastic site‐based models, and systematic lower amplitudes for the stochastic source‐based model at lower frequencies (<0.5  Hz).

  15. Parallel adaptive mesh refinement method based on WENO finite difference scheme for the simulation of multi-dimensional detonation

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang

    2015-10-01

    For the numerical simulation of detonation, the computational cost of using uniform meshes is large due to the vast separation of both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for the numerical investigation of multi-dimensional detonation. A well-designed AMR method based on the finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that AMR&WENO is accurate and has a high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide further insight into the high performance of the parallel AMR&WENO method.

  16. Physical parameter identification method based on modal analysis for two-axis on-road vehicles: Theory and simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Minyi; Zhang, Bangji; Zhang, Jie; Zhang, Nong

    2016-07-01

    Physical parameters are very important for vehicle dynamic modeling and analysis. However, most physical parameter identification methods assume that some physical parameters of the vehicle are known so that the remaining unknown parameters can be identified. In order to identify the physical parameters of a vehicle in the case where all physical parameters are unknown, a methodology based on the State Variable Method (SVM) for physical parameter identification of two-axis on-road vehicles is presented. The modal parameters of the vehicle are identified by the SVM, and the physical parameters of the vehicle are then estimated by the least squares method. In the numerical simulations, the physical parameters of a Ford Granada are chosen as the parameters of the vehicle model, and a half-sine bump function is chosen to simulate a tire excited by an impulse. The first numerical simulation shows that the present method can identify all of the physical parameters, and the largest absolute percentage error of the identified physical parameters is 0.205%; the effects of errors in the additional mass, structural parameters and measurement noise are discussed in the following simulations, which show that when the signal contains 30 dB of noise, the largest absolute percentage error of the identification is 3.78%. These simulations verify that the presented method is effective and accurate for physical parameter identification of two-axis on-road vehicles. The proposed methodology can identify all physical parameters of a 7-DOF vehicle model by using free-decay responses of the vehicle, without needing to assume that any physical parameters are known.

  17. On-orbit servicing system assessment and optimization methods based on lifecycle simulation under mixed aleatory and epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel

    2013-06-01

    To assess the on-orbit servicing (OOS) paradigm and optimize its utilities by taking advantage of its inherent flexibility and responsiveness, the OOS system assessment and optimization methods based on lifecycle simulation under uncertainties are studied. The uncertainty sources considered in this paper include both the aleatory (random launch/OOS operation failure and on-orbit component failure) and the epistemic (the unknown trend of the end-used market price) types. Firstly, the lifecycle simulation under uncertainties is discussed. The chronological flowchart is presented. The cost and benefit models are established, and the uncertainties thereof are modeled. The dynamic programming method to make optimal decision in face of the uncertain events is introduced. Secondly, the method to analyze the propagation effects of the uncertainties on the OOS utilities is studied. With combined probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, based on which the OOS utility assessment tool under mixed uncertainties is developed. Thirdly, to further optimize the OOS system under mixed uncertainties, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of the traditional RBO method which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourthly, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool. Furthermore, the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.

  18. Inferring Population Decline and Expansion From Microsatellite Data: A Simulation-Based Evaluation of the Msvar Method

    PubMed Central

    Girod, Christophe; Vitalis, Renaud; Leblois, Raphaël; Fréville, Hélène

    2011-01-01

    Reconstructing the demographic history of populations is a central issue in evolutionary biology. Using likelihood-based methods coupled with Monte Carlo simulations, it is now possible to reconstruct past changes in population size from genetic data. Using simulated data sets under various demographic scenarios, we evaluate the statistical performance of Msvar, a full-likelihood Bayesian method that infers past demographic change from microsatellite data. Our simulation tests show that Msvar is very efficient at detecting population declines and expansions, provided the event is neither too weak nor too recent. We further show that Msvar outperforms two moment-based methods (the M-ratio test and Bottleneck) for detecting population size changes, whatever the time and the severity of the event. The same trend emerges from a compilation of empirical studies. The latest version of Msvar provides estimates of the current and the ancestral population size and the time since the population started changing in size. We show that, in the absence of prior knowledge, Msvar provides little information on the mutation rate, which results in biased estimates and/or wide credibility intervals for each of the demographic parameters. However, scaling the population size parameters with the mutation rate and scaling the time with current population size, as coalescent theory requires, significantly improves the quality of the estimates for contraction but not for expansion scenarios. Finally, our results suggest that Msvar is robust to moderate departures from a strict stepwise mutation model. PMID:21385729

  19. Simulation of metal cutting using the particle finite-element method and a physically based plasticity model

    NASA Astrophysics Data System (ADS)

    Rodríguez, J. M.; Jonsén, P.; Svoboda, A.

    2017-01-01

    Metal cutting is one of the most common metal-shaping processes. In this process, specified geometrical and surface properties are obtained through the break-up of material and removal by a cutting edge into a chip. The chip formation is associated with large strains, high strain rates and locally high temperatures due to adiabatic heating. These phenomena together with numerical complications make modeling of metal cutting difficult. Material models, which are crucial in metal-cutting simulations, are usually calibrated based on data from material testing. Nevertheless, the magnitudes of strains and strain rates involved in metal cutting are several orders of magnitude higher than those generated from conventional material testing. Therefore, a highly desirable feature is a material model that can be extrapolated outside the calibration range. In this study, a physically based plasticity model based on dislocation density and vacancy concentration is used to simulate orthogonal metal cutting of AISI 316L. The material model is implemented into an in-house particle finite-element method software. Numerical simulations are in agreement with experimental results, but also with previous results obtained with the finite-element method.

  20. National Clinical Skills Competition: an effective simulation-based method to improve undergraduate medical education in China.

    PubMed

    Jiang, Guanchao; Chen, Hong; Wang, Qiming; Chi, Baorong; He, Qingnan; Xiao, Haipeng; Zhou, Qinghuan; Liu, Jing; Wang, Shan

    2016-01-01

    Background The National Clinical Skills Competition has been held in China for 5 consecutive years since 2010 to promote undergraduate education reform and improve teaching quality. The effects of the simulation-based competition are analyzed in this study. Methods Participation in the competitions and the compilation of the questions used in the competition finals are summarized, and the influence and guidance quality are further analyzed. Through the nationwide distribution of questionnaires in medical colleges, the effects of the simulation-based competition on promoting undergraduate medical education reform were evaluated. Results The results show that approximately 450 students from more than 110 colleges (accounting for 81% of colleges providing undergraduate clinical medical education in China) participated in the competition each year. Knowledge, skills, and attitudes were comprehensively evaluated by simulation-based assessment. Eight hundred and eighty copies of the questionnaire were distributed to 110 participating medical schools in 2015. In total, 752 valid responses were received across 95 schools. The majority of the interviewees agreed or strongly agreed that the competition promoted the adoption of advanced educational principles (76.8%), updated the curriculum model and instructional methods (79.8%), strengthened faculty development (84.0%), improved educational resources (82.1%), and benefited all students (53.4%). Conclusions The National Clinical Skills Competition is widely accepted in China. It has effectively promoted the reform and development of undergraduate medical education in China.

  1. Task-based image quality evaluation of iterative reconstruction methods for low dose CT using computer simulations

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.

    2015-04-01

    Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improving image quality or reducing the radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 to 25% D0. A lesion of fixed size and contrast was inserted at different locations into the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal-performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low-dose protocol, lower than the standard dose due to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
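
    The record above describes the observer model only in outline; the following is a minimal Python sketch of how a channelized Hotelling observer and its AUC summary could be computed from reconstructed image patches. The channel matrix, array names, and the nonparametric (Mann-Whitney) AUC estimate are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def cho_auc(signal_patches, noise_patches, channels):
            """Channelized Hotelling observer with an AUC summary (illustrative sketch).

            signal_patches, noise_patches : (n_images, n_pixels) arrays of ROI patches
            channels                      : (n_channels, n_pixels) channel templates
                                            (e.g. rotationally symmetric profiles)
            """
            vs = signal_patches @ channels.T            # channel outputs, signal present
            vn = noise_patches @ channels.T             # channel outputs, signal absent
            # Hotelling template in channel space
            s_bar = vs.mean(axis=0) - vn.mean(axis=0)
            k = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
            w = np.linalg.solve(k, s_bar)
            ts, tn = vs @ w, vn @ w                     # decision variables
            # Nonparametric AUC via the Mann-Whitney statistic
            auc = (np.mean(ts[:, None] > tn[None, :]) +
                   0.5 * np.mean(ts[:, None] == tn[None, :]))
            return auc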

  2. Numerical Simulation of Evacuation Process in Malaysia By Using Distinct-Element-Method Based Multi-Agent Model

    NASA Astrophysics Data System (ADS)

    Abustan, M. S.; Rahman, N. A.; Gotoh, H.; Harada, E.; Talib, S. H. A.

    2016-07-01

    In Malaysia, little research on crowd evacuation simulation has been reported. Hence, developing numerical models of the crowd evacuation process that take into account people's behavioral patterns and psychological characteristics is crucial for Malaysia. Moreover, tsunami disasters, which require a rapid evacuation process, began to gain the attention of Malaysian citizens after the 2004 Indian Ocean Tsunami. In relation to the above circumstances, we have conducted simulations of the tsunami evacuation process at Miami Beach on Penang Island using a Distinct Element Method (DEM)-based crowd behavior simulator. The main objectives are to investigate and reproduce the current conditions of the evacuation process at this location under different hypothetical scenarios in order to study evacuation efficiency. Sim-1 represents the initial evacuation plan, while sim-2 improves on it by adding a new evacuation area. In the simulation results, sim-2 shows a shorter evacuation time than sim-1, with the evacuation time reduced by 53 seconds. The effect of the additional evacuation area is confirmed by the decrease in the evacuation completion time. The numerical simulation may thus be promoted as an effective tool for studying the crowd evacuation process.

  3. Occurrence and simulation of trihalomethanes in swimming pool water: A simple prediction method based on DOC and mass balance.

    PubMed

    Peng, Di; Saravia, Florencia; Abbt-Braun, Gudrun; Horn, Harald

    2016-01-01

    Trihalomethanes (THM) are the most typical disinfection by-products (DBPs) found in public swimming pool water. DBPs are produced when organic and inorganic matter in water reacts with chemical disinfectants. The irregular contribution of substances from pool visitors and the long contact time with disinfectant make forecasting THM in pool water a challenge. In this work, the occurrence of THM in a public indoor swimming pool was investigated and correlated with the dissolved organic carbon (DOC). Daily sampling of pool water for 26 days showed a positive correlation between DOC and THM with a time delay of about two days, while THM and DOC did not correlate directly with the number of visitors. Based on these results and a mass balance of the pool water, a simple simulation model for estimating the THM concentration in indoor swimming pool water was proposed. Formation of THM from DOC, volatilization into air, and elimination by pool water treatment were included in the simulation. The THM formation ratio obtained from laboratory analysis of native pool water, together with information from the field study in an indoor swimming pool, reduced the uncertainty of the simulation. The simulation was validated by measurements in the swimming pool over 50 days. The simulated results were in good agreement with the measured results. This work provides a useful and simple method for predicting the THM concentration and its long-term accumulation trend in indoor swimming pool water.
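
    As a rough illustration of the mass-balance structure described above (formation from DOC, volatilization into air, removal by treatment), the following sketch integrates a single-compartment balance with an explicit Euler step; the rate constants, time step, and concentration units are placeholder assumptions, not the calibrated values of the study.

        # Illustrative single-compartment THM mass balance (all values hypothetical).
        def simulate_thm(doc_series, dt_h=1.0, k_form=1e-3, k_vol=5e-3, k_treat=2e-3, thm0=0.0):
            """doc_series : DOC concentration per hourly time step
               k_form     : THM formation rate constant from DOC (assumed)
               k_vol      : first-order volatilization rate constant (assumed)
               k_treat    : first-order removal by pool water treatment (assumed)
               Returns the THM concentration at each time step (explicit Euler)."""
            thm, out = thm0, []
            for doc in doc_series:
                formation = k_form * doc              # THM formed from DOC
                losses = (k_vol + k_treat) * thm      # volatilization + treatment removal
                thm += dt_h * (formation - losses)
                out.append(thm)
            return out

        # e.g. a constant DOC level of 3 mg/L over 50 days of hourly steps
        print(simulate_thm([3.0] * 24 * 50)[-1])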

  4. Simulation of two-dimensional target motion based on a liquid crystal beam steering method

    NASA Astrophysics Data System (ADS)

    Lin, Yixiang; Ai, Yong; Shan, Xin; Liu, Min

    2015-05-01

    A simulation platform is established for target motion using a liquid crystal (LC) spatial light modulator as a nonmechanical beam steering control device. By controlling the period and orientation of the phase grating generated by the spatial light modulator, the platform realizes two-dimensional (2-D) beam steering using a single LC device. The zenith and azimuth angle range from 0 deg to 2.89 deg and from 0 deg to 360 deg, respectively, with control resolution of 0.0226 deg and 0.0300 deg, respectively. The response time of the beam steering is always less than 0.04 s, irrespective of steering angle. Three typical aircraft tracks are imitated to evaluate the performance of the simulation platform. The correlation coefficients between the theoretical and simulated motions are larger than 0.9822. Results show that it is highly feasible to realize 2-D target motion simulation using the LC spatial light modulator.
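
    The zenith and azimuth control described above can be illustrated with the standard first-order grating relation, where the deflection angle is set by the programmed grating period and the azimuth by the in-plane grating orientation; the wavelength and period values in this sketch are assumptions for illustration, not the parameters of the reported platform.

        import numpy as np

        def steering_angles(period_um, orientation_deg, wavelength_um=0.633):
            """First-order grating deflection: the zenith angle is set by the grating
            period, the azimuth by the in-plane grating orientation (illustrative values)."""
            zenith = np.degrees(np.arcsin(wavelength_um / period_um))
            azimuth = orientation_deg % 360.0
            return zenith, azimuth

        # e.g. a 12.5-um period deflects 0.633-um light by roughly 2.9 degrees
        print(steering_angles(12.5, 45.0))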

  5. Genetic Algorithm Based Simulated Annealing Method for Solving Unit Commitment Problem in Utility System

    NASA Astrophysics Data System (ADS)

    Rajan, C. Christober Asir

    2010-10-01

    The objective of this paper is to find a generation schedule such that the total operating cost is minimized, subject to a variety of constraints. This also means that it is desirable to find the optimal generating unit commitment in the power system for the next H hours. Genetic algorithms (GAs) are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination and survival of the fittest. Here, the unit commitment schedule is coded as a string of symbols, and an initial population of parent solutions is generated at random. Each schedule is formed by committing all the units according to their initial status ("flat start"). The parents are obtained from a pre-defined set of solutions, i.e., each solution is adjusted to meet the requirements. Then, a random recommitment is carried out with respect to the units' minimum down times, and simulated annealing (SA) improves the resulting solutions. A 66-bus utility power system with twelve generating units in India demonstrates the effectiveness of the proposed approach. Numerical results are shown comparing the cost solutions and computation time obtained by the genetic algorithm method and other conventional methods.
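
    To make the hybrid scheme concrete, the sketch below couples a small genetic algorithm with a simulated-annealing acceptance step on a toy unit commitment problem; the unit data, demand profile, penalty weight, and cooling schedule are invented for illustration, and constraints such as minimum up/down times handled in the paper are omitted.

        import math
        import random

        # Toy unit data and demand (placeholders, not the 66-bus Indian utility system).
        UNITS = [(100, 18.0), (80, 21.0), (60, 25.0), (40, 30.0)]   # (capacity MW, cost $/MWh)
        DEMAND = [120, 150, 180, 160, 130, 110]                     # hourly demand, MW

        def cost(schedule):
            """Fuel cost of a commitment schedule plus a penalty for unserved demand."""
            total = 0.0
            for hour, on_flags in enumerate(schedule):
                committed = [u for u, on in zip(UNITS, on_flags) if on]
                capacity = sum(c for c, _ in committed)
                remaining = min(capacity, DEMAND[hour])
                for cap, price in sorted(committed, key=lambda u: u[1]):   # merit-order dispatch
                    dispatched = min(cap, remaining)
                    total += dispatched * price
                    remaining -= dispatched
                total += 1e4 * max(0, DEMAND[hour] - capacity)             # shortfall penalty
            return total

        def random_schedule():
            return [[random.random() < 0.7 for _ in UNITS] for _ in DEMAND]

        def sa_mutate(schedule, temperature):
            """Flip one unit/hour bit and accept worse schedules with SA probability."""
            new = [row[:] for row in schedule]
            h, u = random.randrange(len(DEMAND)), random.randrange(len(UNITS))
            new[h][u] = not new[h][u]
            delta = cost(new) - cost(schedule)
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                return new
            return schedule

        # GA outer loop with an SA-style improvement of each offspring
        random.seed(0)
        population = [random_schedule() for _ in range(20)]
        temperature = 1e4
        for generation in range(200):
            population.sort(key=cost)
            parents = population[:10]
            children = []
            for _ in range(10):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(DEMAND))
                children.append(sa_mutate(a[:cut] + b[cut:], temperature))   # one-point crossover
            population = parents + children
            temperature *= 0.98                                              # cooling schedule
        print("best cost:", cost(min(population, key=cost)))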

  6. Evaluation of deformation accuracy of a virtual pneumoperitoneum method based on clinical trials for patient-specific laparoscopic surgery simulator

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Qu, Jia Di; Nimura, Yukitaka; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku

    2012-02-01

    This paper evaluates the deformation accuracy of a virtual pneumoperitoneum method by utilizing measurement data of real deformations of patient bodies. Laparoscopic surgery is a surgical option that is less invasive than traditional open surgery. In laparoscopic surgery, the pneumoperitoneum process is performed to create a viewing and working space. Although a virtual pneumoperitoneum method based on 3D CT image deformation has been proposed for patient-specific laparoscopy simulators, quantitative evaluation based on measurements obtained in real surgery has not been performed. In this paper, we evaluate the deformation accuracy of the virtual pneumoperitoneum method based on real deformation data of the abdominal wall measured in operating rooms (ORs). The evaluation results are used to find optimal deformation parameters of the virtual pneumoperitoneum method. We measure landmark positions on the abdominal wall on a 3D CT image taken before performing a pneumoperitoneum process. The landmark positions are defined based on the anatomical structure of the patient body. We also measure the landmark positions on a 3D CT image deformed by the virtual pneumoperitoneum method. To measure real deformations of the abdominal wall, we measure the landmark positions on the abdominal wall of a patient before and after the pneumoperitoneum process in the OR. We transform the landmark positions measured in the OR from the tracker coordinate system to the CT coordinate system. A positional error of the virtual pneumoperitoneum method is calculated based on positional differences between the landmark positions on the 3D CT image and the transformed landmark positions. Experimental results based on eight surgical cases showed that the minimal positional error was 13.8 mm. The positional error can be decreased from the previous method by calculating optimal deformation parameters of the virtual pneumoperitoneum method from the experimental

  7. Comparison of different methods to calculate total runoff and sediment yield based on aliquot sampling from rainfall simulations

    NASA Astrophysics Data System (ADS)

    Tresch, Simon; Fister, Wolfgang; Marzen, Miriam; Kuhn, Nikolaus J.

    2015-04-01

    The quality of data obtained by rainfall experiments depends mainly on the quality of the rainfall simulation itself. However, even the best rainfall simulation cannot deliver valuable data if runoff and sediment discharge from the plot are not sampled at a proper interval or if poor interpolation methods are used. The safest way to get good results would be to collect all runoff and sediment that comes off the plot at the shortest possible intervals. Unfortunately, high rainfall amounts often coincide with limited transport and analysis capacities. Therefore, it is in most cases necessary to find a good compromise between sampling frequency, interpolation method, and available analysis capacities. The aim of this study was to compare different methods to calculate total sediment yield based on aliquot sampling intervals. The methods tested were (1) simple extrapolation of one sample until the next sample was collected; (2) averaging between two successive samples; (3) extrapolation of the sediment concentration; (4) extrapolation using a regression function. The results indicate that all methods could, theoretically, be used to calculate total sediment yields, but errors of 10-25% would have to be taken into account when interpreting the data. The highest deviations were always found for the first measurement interval, which shows that it is very important to capture the initial flush of sediment from the plot in order to calculate reliable total values.
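
    A small numerical sketch of how the four interpolation rules listed above could be applied to a set of aliquot samples is given below; the sampling times, concentrations, and runoff rates are invented, and the regression variant uses a simple linear fit as a placeholder.

        import numpy as np

        # Hypothetical aliquot samples: time (min), sediment concentration (g/L), runoff rate (L/min)
        t = np.array([2.0, 6.0, 10.0, 14.0])
        conc = np.array([4.0, 2.5, 1.8, 1.5])
        runoff = np.array([0.8, 1.1, 1.2, 1.2])
        dt = 4.0                              # minutes represented by each sample
        flux = conc * runoff                  # sediment flux (g/min) at each sampling time

        # (1) step extrapolation: each sampled flux holds until the next sample is taken
        total_1 = np.sum(flux * dt)

        # (2) averaging between two successive samples (trapezoid on the flux,
        #     ignoring the period before the first sample)
        total_2 = np.trapz(flux, t)

        # (3) extrapolating the sediment concentration onto a denser time grid
        #     and multiplying by the (interpolated) runoff rate
        minutes = np.arange(0, 16)
        total_3 = np.sum(np.interp(minutes, t, conc) * np.interp(minutes, t, runoff))

        # (4) regression of the flux against time (a linear fit as a placeholder)
        slope, intercept = np.polyfit(t, flux, 1)
        total_4 = np.sum(slope * minutes + intercept)

        print(total_1, total_2, total_3, total_4)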

  8. Simulation of the early stage of binary alloy decomposition, based on the free energy density functional method

    NASA Astrophysics Data System (ADS)

    L'vov, P. E.; Svetukhin, V. V.

    2016-07-01

    Based on the free energy density functional method, the early stage of decomposition of a one-dimensional binary alloy corresponding to the approximation of regular solutions has been simulated. In the simulation, Gaussian composition fluctuations caused by the initial alloy state are taken into account. The calculation uses a block approach, in which the extensive solution volume is discretized into independent fragments, the decomposition process is calculated for each fragment, and a joint analysis of the resulting second-phase segregations is then performed. It was possible to trace all stages of solid solution decomposition: nucleation, growth, and coalescence (initial stage). The time dependences of the main phase distribution characteristics are calculated: the average size and concentration of the second-phase particles, their size distribution function, and the nucleation rate of the second-phase particles (clusters). Cluster trajectories in the size-composition space are constructed for the cases of growth and dissolution.

  9. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    SciTech Connect

    Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.

    2015-03-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.

  10. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    DOE PAGES

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...

    2016-08-09

    Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.

  11. Simulation modeling based method for choosing an effective set of fault tolerance mechanisms for real-time avionics systems

    NASA Astrophysics Data System (ADS)

    Bakhmurov, A. G.; Balashov, V. V.; Glonina, A. B.; Pashkov, V. N.; Smeliansky, R. L.; Volkanov, D. Yu.

    2013-12-01

    In this paper, the reliability allocation problem (RAP) for real-time avionics systems (RTAS) is considered. The proposed method for solving this problem consists of two steps: (i) creation of an RTAS simulation model at the necessary level of abstraction and (ii) application of a metaheuristic algorithm to find an optimal solution (i.e., to choose an optimal set of fault tolerance techniques). When it is necessary during algorithm execution to measure the execution time of some software components, simulation modeling is applied. The simulation modeling procedure itself consists of the following steps: automatic construction of a simulation model of the RTAS configuration and running this model in a simulation environment to measure the required time. The method was implemented as an experimental software tool that works in cooperation with the DYANA simulation environment. The results of experiments with the implemented method are presented. Finally, future plans for development of the presented method and tool are briefly described.

  12. Finite analytic method based on mixed-form Richards' equation for simulating water flow in vadose zone

    NASA Astrophysics Data System (ADS)

    Zhang, Zaiyong; Wang, Wenke; Yeh, Tian-chyi Jim; Chen, Li; Wang, Zhoufeng; Duan, Lei; An, Kedong; Gong, Chengcheng

    2016-06-01

    In this paper, we develop a finite analytic method (FAMM), which combines the flexibility of numerical methods with the advantages of analytical solutions, to solve the mixed-form Richards' equation. This new approach minimizes mass balance errors and truncation errors associated with most numerical approaches. We use numerical experiments to demonstrate that FAMM obtains more accurate numerical solutions and controls the global mass balance better than the modified Picard finite difference method (MPFD) when compared with analytical solutions. In addition, FAMM is superior to the finite analytic method based on the head-based Richards' equation (FAMH). FAMM solutions are also compared to analytical solutions for wetting and drying processes in Brindabella Silty Clay Loam and Yolo Light Clay soils. Finally, we demonstrate that FAMM yields results comparable with those from MPFD and Hydrus-1D for simulating infiltration into other different soils under wet and dry conditions. These numerical experiments further confirm that as long as a hydraulic constitutive model captures the general behaviors of other models, it can be used to yield flow fields comparable to those based on other models.

  13. Continuous surface force based lattice Boltzmann equation method for simulating thermocapillary flow

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Zheng, Song; Zhai, Qinglan

    2016-02-01

    In this paper, we extend a lattice Boltzmann equation (LBE) with a continuous surface force (CSF) to simulate thermocapillary flows. The model builds on our previous CSF LBE for athermal two-phase flow, in which the interfacial tension forces and the Marangoni stresses resulting from the interactions between different phases are described by the CSF concept. In this model, the sharp interfaces between different phases are replaced by narrow transition layers, and the kinetics and morphology evolution of phase separation are characterized by an order parameter via the Cahn-Hilliard equation, which is solved in the framework of the LBE. The scalar convection-diffusion equation for the temperature field is solved by a thermal LBE. The model is validated by thermal two-layered Poiseuille flow and by two superimposed planar fluids at negligibly small Reynolds and Marangoni numbers for thermocapillary-driven convection, both of which have analytical solutions for the velocity and temperature. Thermocapillary migration of two- and three-dimensional deformable droplets is then simulated. Numerical results show that the predictions of the present LBE agree with the analytical solutions and other numerical results.

  14. A numerical simulation of the hole-tone feedback cycle based on an axisymmetric discrete vortex method and Curle's equation

    NASA Astrophysics Data System (ADS)

    Langthjem, M. A.; Nakano, M.

    2005-11-01

    An axisymmetric numerical simulation approach to the hole-tone self-sustained oscillation problem is developed, based on the discrete vortex method for the incompressible flow field and a representation of flow noise sources on an acoustically compact impingement plate by Curle's equation. The shear layer of the jet is represented by 'free' discrete vortex rings, and the jet nozzle and the end plate by bound vortex rings. A vortex ring is released from the nozzle at each time step in the simulation. The newly released vortex rings are disturbed by acoustic feedback. It is found that the basic feedback cycle works hydrodynamically. The effect of the acoustic feedback is to suppress the broadband noise and reinforce the characteristic frequency and its higher harmonics. An experimental investigation is also described. A hot wire probe was used to measure velocity fluctuations in the shear layer, and a microphone to measure acoustic pressure fluctuations. Comparisons between simulated and experimental results show quantitative agreement with respect to both frequency and amplitude of the shear layer velocity fluctuations. As to acoustic pressure fluctuations, there is quantitative agreement with respect to frequencies, and reasonable qualitative agreement with respect to the peaks of the characteristic frequency and its higher harmonics. Both simulated and measured frequencies f follow the criterion L/u_c + L/c_0 = n/f, where L is the gap length between the nozzle exit and the end plate, u_c is the shear layer convection velocity, c_0 is the speed of sound, and n is a mode number (n = 1/2, 1, 3/2, ...). The experimental results, however, display a complicated pattern of mode jumps, which the numerical method cannot capture.
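
    The feedback criterion quoted above fixes the tone frequencies once the gap length, convection velocity, and speed of sound are known; the short sketch below evaluates it for a few mode numbers using made-up values of L and u_c, purely to illustrate the arithmetic.

        def hole_tone_frequencies(L=0.05, u_c=10.0, c_0=340.0, modes=(0.5, 1.0, 1.5, 2.0)):
            """Frequencies satisfying L/u_c + L/c_0 = n/f.
            L : nozzle-to-plate gap (m), u_c : shear-layer convection velocity (m/s),
            c_0 : speed of sound (m/s); all numbers are illustrative, not the
            experimental values of the paper."""
            delay = L / u_c + L / c_0
            return {n: n / delay for n in modes}

        print(hole_tone_frequencies())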

  15. An Effective Correction Method for Seriously Oblique Remote Sensing Images Based on Multi-View Simulation and a Piecewise Model

    PubMed Central

    Wang, Chunyuan; Liu, Xiang; Zhao, Xiaoli; Wang, Yongqi

    2016-01-01

    Conventional correction approaches are unsuitable for effectively correcting remote sensing images acquired under seriously oblique conditions, which exhibit severe distortions and resolution disparity. Considering that the extraction of control points (CPs) and the parameter estimation of the correction model play important roles in correction accuracy, this paper introduces an effective correction method for large angle (LA) images. Firstly, a new CP extraction algorithm is proposed based on multi-view simulation (MVS) to ensure the effective matching of CP pairs between the reference image and the LA image. Then, a new piecewise correction algorithm is developed with the optimized CPs, in which a concept of distribution measurement (DM) is introduced to quantify the CP distribution. The whole image is partitioned into contiguous subparts which are corrected by different correction formulae to guarantee the accuracy of each subpart. Extensive experimental results demonstrate that the proposed method significantly outperforms conventional approaches. PMID:27763538

  16. A closed-loop dynamic simulation-based design method for articulated heavy vehicles with active trailer steering systems

    NASA Astrophysics Data System (ADS)

    Manjurul Islam, Md.; Ding, Xuejun; He, Yuping

    2012-05-01

    This paper presents a closed-loop dynamic simulation-based design method for articulated heavy vehicles (AHVs) with active trailer steering (ATS) systems. AHVs have poor manoeuvrability at low speeds and exhibit low lateral stability at high speeds. From the design point of view, there exists a trade-off relationship between AHVs' manoeuvrability and stability. For example, fewer articulation points and longer wheelbases will improve high-speed lateral stability, but they will degrade low-speed manoeuvrability. To tackle this conflicting design problem, a systematic method is proposed for the design of AHVs with ATS systems. In order to evaluate vehicle performance measures under a well-defined testing manoeuvre, a driver model is introduced and it 'drives' the vehicle model to follow a prescribed route at a given speed. Considering the interactions between the mechanical trailer and the ATS system, the proposed design method simultaneously optimises the active design variables of the controllers and the passive design variables of the trailer in a single design loop (SDL). Through the design optimisation of an ATS system for an AHV with a truck and drawbar-trailer combination, this SDL method is compared against a published two-design-loop method. The benchmark investigation shows that the former can determine better trade-off design solutions than those derived by the latter. This SDL method provides an effective approach to automatically implement the design synthesis of AHVs with ATS systems.

  17. Performance Simulation: The Method.

    ERIC Educational Resources Information Center

    Rucker, Lance M.

    A logical, performer-based approach to teaching psychomotor skills is described. Four phases of surgical psychomotor skills training are identified, using an example from a dental preclinical training curriculum: (1) dental students are acquainted with the postural and positional parameters of balanced psychomotor performances; (2) students learn…

  18. Numerical simulation and experimental validation of biofilm in a multi-physics framework using an SPH based method

    NASA Astrophysics Data System (ADS)

    Soleimani, Meisam; Wriggers, Peter; Rath, Henryke; Stiesch, Meike

    2016-10-01

    In this paper, a 3D computational model has been developed to investigate biofilms in a multi-physics framework using smoothed particle hydrodynamics (SPH) based on a continuum approach. Biofilm formation is a complex process in the sense that several physical phenomena are coupled and consequently different time scales are involved. On one hand, biofilm growth is driven by biological reaction and nutrient diffusion; on the other hand, it is influenced by fluid flow causing biofilm deformation and interface erosion in the context of fluid and deformable solid interaction. The geometrical and numerical complexity arising from these phenomena poses serious complications and challenges in grid-based techniques such as the finite element method. Here, the solution is based on SPH, one of the powerful meshless methods. SPH-based computational modeling is quite new in the biological community, and the method is uniquely robust in capturing the interface-related processes of biofilm formation such as erosion. The obtained results show good agreement with experimental and published data, which demonstrates that the model is capable of simulating and predicting the overall spatial and temporal evolution of biofilm.

  19. Basis set generation for quantum dynamics simulations using simple trajectory-based methods.

    PubMed

    Saller, Maximilian A C; Habershon, Scott

    2015-01-13

    Methods for solving the time-dependent Schrödinger equation generally employ either a global static basis set, which is fixed at the outset, or a dynamic basis set, which evolves according to classical-like or variational equations of motion; the former approach results in the well-known exponential scaling with system size, while the latter can suffer from challenging numerical problems, such as singular matrices, as well as violation of energy conservation. Here, we suggest a middle road: building a basis set using trajectories to place time-independent basis functions in the regions of phase space relevant to wave function propagation. This simple approach, which potentially circumvents many of the problems traditionally associated with global or dynamic basis sets, is successfully demonstrated for two challenging benchmark problems in quantum dynamics, namely, relaxation dynamics following photoexcitation in pyrazine, and the spin Boson model.

  20. Development of modern approach to absorbed dose assessment in radionuclide therapy, based on Monte Carlo method simulation of patient scintigraphy

    NASA Astrophysics Data System (ADS)

    Lysak, Y. V.; Klimanov, V. A.; Narkevich, B. Ya

    2017-01-01

    One of the most difficult problems of modern radionuclide therapy (RNT) is control of the absorbed dose in the pathological volume. This research presents a new approach based on estimating the accumulated activity of the radiopharmaceutical (RP) in the tumor volume from planar scintigraphic images of the patient, with the radiation transport calculated by the Monte Carlo method, including absorption and scattering in the biological tissues of the patient and in the elements of the gamma camera itself. To obtain the data, we modeled gamma-camera scintigraphy of a vial containing the RP activity administered to the patient, with the vial placed at a certain distance from the collimator, and a similar study was performed in identical geometry with the same RP activity located in the pathological target inside the patient's body. For correct calculation results, an adapted Fisher-Snyder human phantom was simulated in the MCNP program. Within our technique, calculations were performed for different sizes of pathological targets and various tumor depths inside the patient's body, using radiopharmaceuticals based on mixed β-γ-emitting (131I, 177Lu) and pure β-emitting (89Sr, 90Y) therapeutic radionuclides. The presented method can be used in clinical practice to estimate, with sufficient accuracy, the absorbed doses in the regions of interest on the basis of planar scintigraphy of the patient.

  1. A novel state-space based method for direct numerical simulation of particle-laden turbulent flows

    NASA Astrophysics Data System (ADS)

    Ranjan, Reetesh; Pantano, Carlos

    2012-11-01

    We present a novel state-space-based numerical method for transport of the particle density function, which can be used to investigate particle-laden turbulent flows. Here, the problem can be stated purely in a deterministic Eulerian framework. The method is coupled to an incompressible three-dimensional flow solver. We consider a dilute suspension where the volume fraction and mass loading of the particles in the flow are low enough that the approximation of one-way coupling remains valid. The particle transport equation is derived from the governing equation of the particle dynamics described in a Lagrangian frame, by treating the position and velocity of the particle as state-space variables. The application and features of this method are demonstrated by simulating a particle-laden decaying isotropic turbulent flow. It is well known that even in an isotropic turbulent flow, the distribution of particles is not uniform. For example, heavier-than-fluid particles tend to accumulate in regions of low vorticity and high strain rate. This leads to large regions in the flow where particles remain sparsely distributed. The new approach can capture the statistics of the particles in such sparsely populated regions more accurately than other numerical methods.

  2. Comparison of Ensemble Kalman Filter groundwater-data assimilation methods based on stochastic moment equations and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2014-04-01

    Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
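
    For contrast with the moment-equation variant discussed above, the following is a minimal sketch of one analysis step of a standard Monte Carlo (perturbed-observation) EnKF update as it might be applied to an ensemble of subsurface states; the linear observation operator, array shapes, and noise handling are generic assumptions rather than the authors' implementation.

        import numpy as np

        def enkf_update(ensemble, H, y_obs, obs_cov, rng=np.random.default_rng(0)):
            """One stochastic (perturbed-observation) EnKF analysis step.

            ensemble : (n_ens, n_state) prior realizations, e.g. log-conductivities + heads
            H        : (n_obs, n_state) linear observation operator
            y_obs    : (n_obs,) observed heads
            obs_cov  : (n_obs, n_obs) observation-error covariance
            """
            X = ensemble - ensemble.mean(axis=0)                 # state anomalies
            HX = X @ H.T                                         # predicted-observation anomalies
            P_xy = X.T @ HX / (len(ensemble) - 1)                # state/observation cross-covariance
            P_yy = HX.T @ HX / (len(ensemble) - 1) + obs_cov     # innovation covariance
            K = P_xy @ np.linalg.inv(P_yy)                       # Kalman gain
            perturbed = y_obs + rng.multivariate_normal(
                np.zeros(len(y_obs)), obs_cov, size=len(ensemble))
            return ensemble + (perturbed - ensemble @ H.T) @ K.T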

  3. Final Performance Report on Grant FA9550-07-1-0366 (Simulation-Based and Sampling Method for Global Optimization)

    DTIC Science & Technology

    2010-01-25

    Lipschitz-type conditions, then V^{π̂_k}(x) → V^{π*}(x) for all x ∈ S w.p.1. 2.2.3 Simulation-Based Approach to POMDPs: In a simulation-based approach to POMDPs, we... Advances in Mathematical Finance, Birkhäuser, 2007. 4.5 Awards: Michael Fu, elected Fellow of the Institute of Electrical and Electronics Engineers (IEEE).

  4. Simulation of collaborative studies for real-time PCR-based quantitation methods for genetically modified crops.

    PubMed

    Watanabe, Satoshi; Sawada, Hiroshi; Naito, Shigehiro; Akiyama, Hiroshi; Teshima, Reiko; Furui, Satoshi; Kitta, Kazumi; Hino, Akihiro

    2013-01-01

    To study the impacts of various random effects and parameters of collaborative studies on the precision of quantitation methods for genetically modified (GM) crops, we developed a set of random effects models for cycle time values of a standard curve-based relative real-time PCR that makes use of an endogenous gene sequence as the internal standard. The models and data from a published collaborative study for six GM lines at four concentration levels were used to simulate collaborative studies under various conditions. Results suggested that by reducing the number of well replications from three to two, and the number of standard levels of the endogenous sequence from five to three, the number of unknown samples analyzable on a 96-well PCR plate in routine analyses could be almost doubled, while the acceptable repeatability RSD (RSDr ≤ 25%) and reproducibility RSD (RSDR < 35%) of the collaborative study could still be met. Further, RSDr and RSDR were found to be most sensitive to random effects attributable to inhomogeneity among blind replicates, but they were little influenced by those attributable to DNA extractions. The proposed models are expected to be useful for optimizing standard curve-based relative quantitation methods for GM crops by real-time PCR and their collaborative studies.

  5. Application of Wavelet-Based Methods for Accelerating Multi-Time-Scale Simulation of Bistable Heterogeneous Catalysis

    DOE PAGES

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...

    2017-02-16

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.

  6. System simulation method for fiber-based homodyne multiple target interferometers using short coherence length laser sources

    NASA Astrophysics Data System (ADS)

    Fox, Maik; Beuth, Thorsten; Streck, Andreas; Stork, Wilhelm

    2015-09-01

    Homodyne laser interferometers for velocimetry are well-known optical systems used in many applications. While the detector power output signal of such a system, using a long coherence length laser and a single target, is easily modelled using the Doppler shift, scenarios with a short coherence length source, e.g. an unstabilized semiconductor laser, and multiple weak targets demand a more elaborate simulation approach. Especially when using fiber components, the actual setup is an important factor for system performance, as effects like return losses and multiple-path propagation have to be taken into account. If the power received from the targets is in the same region as the stray light created in the fiber setup, a complete system simulation becomes a necessity. In previous work, a phasor-based signal simulation approach for interferometers based on short coherence length laser sources was evaluated. To facilitate the use of the signal simulation, a fiber component ray tracer has since been developed that allows the creation of input files for the signal simulation environment. The software uses object-oriented MATLAB code, simplifying the entry of different fiber setups and the extension of the ray tracer. Thus, a seamless path from a system description based on arbitrarily interconnected fiber components to a signal simulation for different target scenarios has been established. The ray tracer and signal simulation are being used for the evaluation of interferometer concepts incorporating delay lines to compensate for short coherence length.

  7. On the direct numerical simulation of moderate-Stokes-number turbulent particulate flows using algebraic-closure-based and kinetic-based moments methods

    NASA Astrophysics Data System (ADS)

    Vie, Aymeric; Masi, Enrica; Simonin, Olivier; Massot, Marc; EM2C/Ecole Centrale Paris Team; IMFT Team

    2012-11-01

    To simulate particulate flows, a convenient formalism for HPC is to use Eulerian moment methods, which describe the evolution of velocity moments instead of directly tracking the number density function (NDF) of the droplets. By using a conditional PDF approach, the Mesoscopic Eulerian Formalism (MEF) of Février et al. 2005 offers a solution for the direct numerical simulation of turbulent particulate flows, even at relatively high Stokes number. Here, we compare two existing approaches used to solve this formalism: the Algebraic-Closure-Based Moment method (Kaufmann et al. 2008, Masi et al. 2011) and the Kinetic-Based Moment Method (Yuan et al. 2010, Chalons et al. 2010, Vié et al. 2012). The goal of the current work is therefore to evaluate both strategies in turbulent test cases. For the ACBMM, viscosity-type and non-linear closures are envisaged, whereas for the KBMM, isotropic and anisotropic closures are investigated. A main aspect of the current methodology for the comparison is that the same numerical methods are used for both approaches. Results show that the new non-linear closure and the Anisotropic Gaussian closure are both accurate in shear flows, whereas viscosity-type and isotropic closures lead to incorrect results.

  8. Numerical hydrodynamic simulations based on semi-analytic galaxy merger trees: method and Milky Way-like galaxies

    NASA Astrophysics Data System (ADS)

    Moster, Benjamin P.; Macciò, Andrea V.; Somerville, Rachel S.

    2014-01-01

    We present a new approach to study galaxy evolution in a cosmological context. We combine cosmological merger trees and semi-analytic models of galaxy formation to provide the initial conditions for multimerger hydrodynamic simulations. In this way, we exploit the advantages of merger simulations (high resolution and inclusion of the gas physics) and semi-analytic models (cosmological background and low computational cost), and integrate them to create a novel tool. This approach allows us to study the evolution of various galaxy properties, including the treatment of the hot gaseous halo from which gas cools and accretes on to the central disc, which has been neglected in many previous studies. This method shows several advantages over other methods. As only the particles in the regions of interest are included, the run time is much shorter than in traditional cosmological simulations, leading to greater computational efficiency. Using cosmological simulations, we show that multiple mergers are expected to be more common than sequences of isolated mergers, and therefore studies of galaxy mergers should take this into account. In this pilot study, we present our method and illustrate the results of simulating 10 Milky Way-like galaxies since z = 1. We find good agreement with observations for the total stellar masses, star formation rates, cold gas fractions and disc scalelength parameters. We expect that this novel numerical approach will be very useful for pursuing a number of questions pertaining to the transformation of galaxy internal structure through cosmic time.

  9. Bootstrapping Methods Applied for Simulating Laboratory Works

    ERIC Educational Resources Information Center

    Prodan, Augustin; Campean, Remus

    2005-01-01

    Purpose: The aim of this work is to implement bootstrapping methods into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical methods to infer the truth from sample…

  10. Data mining of the GAW14 simulated data using rough set theory and tree-based methods.

    PubMed

    Wei, Liang-Ying; Huang, Cheng-Lung; Chen, Chien-Hsiun

    2005-12-30

    Rough set theory and decision trees are data mining methods used for dealing with vagueness and uncertainty. They have been utilized to unearth hidden patterns in complicated datasets collected for industrial processes. The Genetic Analysis Workshop 14 simulated data were generated using a system that implemented multiple correlations among four consequential layers of genetic data (disease-related loci, endophenotypes, phenotypes, and one disease trait). When information from one layer was blocked and uncertainty was created in the correlations among these layers, the correlation between the first and last layers (susceptibility genes and the disease trait in this case) was not easily detected directly. In this study, we proposed a two-stage process that applied rough set theory and decision trees to identify genes susceptible to the disease trait. During the first stage, based on phenotypes of subjects and their parents, decision trees were built to predict trait values. Phenotypes retained in the decision trees were then advanced to the second stage, where rough set theory was applied to discover the minimal subsets of genes associated with the disease trait. For comparison, decision trees were also constructed to map susceptible genes during the second stage. Our results showed that the decision trees of the first stage had accuracy rates of about 99% in predicting the disease trait. The decision trees and rough set theory failed to identify the true disease-related loci.

  11. Combination of a latin hypercube sampling and of an simulated annealing method to optimize a physically based hydrological model

    NASA Astrophysics Data System (ADS)

    Robert, D.; Braud, I.; Cohard, J.; Zin, I.; Vauclin, M.

    2010-12-01

    Physically based hydrological models involve a large number of parameters and data. Each of them is associated with uncertainties, because some characteristics are measured indirectly while others vary in space or time. Thus, even if many data are measured in the field or in the laboratory, ignorance and uncertainty about the data persist, and a large degree of freedom remains for modeling. Moreover, the choice of physical parameterization also induces uncertainties and errors in model behavior and simulation results. To address this problem, sensitivity analyses are useful. They allow the determination of the influence of each parameter on the modeling results and the adjustment of an optimal parameter set by minimizing a cost function. However, the larger the number of parameters, the more expensive the computational cost to explore the whole parameter space. In this context, we carried out an original approach in the hydrology domain to perform this sensitivity analysis using a 1D Soil - Vegetation - Atmosphere Transfer model. The chosen method is a global one: it focuses on the output data variability due to the input parameter uncertainties. Latin hypercube sampling is adopted to sample the analyzed input parameter space; this method has the advantage of reducing the computational cost. The method is applied using the SiSPAT (Simple Soil Vegetation Atmosphere Transfer) model over a complete one-year period with observations collected in a small catchment in Benin, within the AMMA project. It involves sensitivity to 30 parameters sampled in 40 intervals. The quality of the modeled results is evaluated by calculating several criteria: the bias, the root mean square error and the Nash-Sutcliffe efficiency coefficient between modeled and observed time series of net radiation, heat fluxes, soil temperatures and volumetric water contents, etc. To hierarchize the influence of the various input parameters on the results, the study of
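
    As a concrete illustration of the sampling step described above, the sketch below draws a Latin hypercube design over a handful of parameter ranges; the three parameters and their bounds are placeholders, not the 30 SiSPAT parameters of the study.

        import numpy as np

        def latin_hypercube(n_samples, bounds, rng=np.random.default_rng(42)):
            """Latin hypercube sample: one point per stratum in every dimension.

            bounds : list of (low, high) tuples, one per parameter.
            """
            n_dims = len(bounds)
            # one random point inside each of the n_samples strata, then an
            # independent permutation of the strata for each dimension
            u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
            for d in range(n_dims):
                u[:, d] = rng.permutation(u[:, d])
            lows = np.array([b[0] for b in bounds])
            highs = np.array([b[1] for b in bounds])
            return lows + u * (highs - lows)

        # e.g. 40 samples of three hypothetical soil/vegetation parameters
        design = latin_hypercube(40, [(0.05, 0.45), (1e-7, 1e-5), (0.5, 5.0)])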

  12. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    PubMed

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to deal with the weaknesses associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance optimization performance; in addition, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision in the presence of a certain level of noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior.

  13. A method of simulating polarization-sensitive optical coherence tomography based on a polarization-sensitive Monte Carlo program and a sphere cylinder birefringence model

    NASA Astrophysics Data System (ADS)

    Chen, Dongsheng; Zeng, Nan; Liu, Celong; Ma, Hui

    2012-12-01

    In this paper, we present a new method to simulate the signal of polarization-sensitive optical coherence tomography (PS-OCT for short) by using the sphere-cylinder birefringence Monte Carlo program developed by our laboratory. Using the program, we can simulate various turbid media based on different optical models and analyze the scattering and polarization information of the simulated media. The detection area, the detection angle range, and the number of scattering events of the photons are the three main conditions we can use to screen out the photons that contribute to the PS-OCT signal. In this paper, we study the effects of these three factors on the simulation results using our program and find that the number of scattering events is the main factor affecting the signal, while the detection area and angle range are less important but still necessary conditions. In order to test and verify the feasibility of our simulation, we use two methods as references. One is the extended Huygens-Fresnel (EHF) method, which is based on electromagnetic theory and can describe both single and multiple scattering of light. By comparing the results obtained from the EHF method and ours, we explore the photon screening rules in the simulation. We also compare our simulation with another polarization-related simulation presented by a Russian group and with our experimental results. Both comparisons show that our simulation is effective for PS-OCT in the superficial depth range and should be further corrected in order to simulate the PS-OCT signal at greater depths.

  14. Jacobian Free-Newton Krylov Discontinuous Galerkin Method and Physics-Based Preconditioning for Nuclear Reactor Simulations

    SciTech Connect

    HyeongKae Park; R. Nourgaliev; Richard C. Martineau; Dana A. Knoll

    2008-09-01

    Multidimensional, higher-order (2nd and higher) numerical methods have come to the forefront in recent years due to significant advances in computer technology and numerical algorithms, and have shown great potential as viable design tools for realistic applications. To achieve this goal, implicit, high-order accurate coupling of the multiphysics simulations is a critical component. One of the issues that arises from multiphysics simulation is the necessity to resolve multiple time scales. For example, the dynamical time scales of neutron kinetics, fluid dynamics and heat conduction differ significantly (typically by more than 10 orders of magnitude), with the dominant (fastest) physical mode also changing during the course of a transient [Pope and Mousseau, 2007]. This leads to a severe time step restriction for stability in traditional multiphysics (i.e., operator-split, semi-implicit discretization) simulations. Lower-order methods suffer from undesirable numerical dissipation. Thus, an implicit, higher-order accurate scheme is necessary to perform seamlessly coupled multiphysics simulations that can be used to analyze "what-if" regulatory accident scenarios, or to design and optimize engineering systems.

  15. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    NASA Astrophysics Data System (ADS)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures, and adopts the sequential updating scheme of Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
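
    A serial CPU sketch of the sequential-updating Metropolis scheme mentioned above is shown below for a small 1:1 primitive-model-like system; it uses a minimum-image, truncated Coulomb sum instead of the exact energy evaluation and GPU parallelization of the paper, and every numerical value is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(1)
        N, L, beta, bjerrum, sigma = 64, 10.0, 1.0, 0.7, 1.0   # reduced units, all illustrative
        charge = np.where(np.arange(N) % 2 == 0, 1.0, -1.0)    # 1:1 electrolyte
        grid = (np.arange(4) + 0.5) * (L / 4)                  # start from a simple cubic lattice
        pos = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T.copy()

        def energy_of(i, r_i):
            """Energy of particle i at r_i: hard core + minimum-image Coulomb sum.
            A production run for long-range interactions would use Ewald/PME instead."""
            d = pos - r_i
            d -= L * np.round(d / L)                            # minimum-image convention
            r = np.sqrt((d ** 2).sum(axis=1))
            r[i] = np.inf                                       # exclude self-interaction
            if r.min() < sigma:                                 # hard-core overlap
                return np.inf
            return bjerrum * charge[i] * np.sum(charge / r)

        accepted = 0
        for sweep in range(200):
            for i in range(N):                                  # sequential Metropolis updating
                trial = (pos[i] + rng.uniform(-0.3, 0.3, 3)) % L
                dE = energy_of(i, trial) - energy_of(i, pos[i])
                if dE < 0 or rng.random() < np.exp(-beta * dE):
                    pos[i], accepted = trial, accepted + 1
        print("acceptance ratio:", accepted / (200 * N))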

  16. Partially Averaged Navier-Stokes method based on k-ω model for simulating unsteady cavitating flows

    NASA Astrophysics Data System (ADS)

    Hu, C. L.; Wang, G. Y.; Wang, Z. Y.

    2015-01-01

    The turbulence closure is important for unsteady cavitating flow computations, as the flow is frequently time-dependent and accompanied by vortices at multiple scales. A turbulence bridging model named PANS (Partially-Averaged Navier-Stokes), intended for any filter width, has been developed recently. The model filter width is controlled through two parameters: the unresolved-to-total ratios of kinetic energy fk and dissipation rate fω. In the present paper, the PANS method based on the k-ω model is used to simulate unsteady cavitating flows over a Clark-Y hydrofoil. The main objective of this work is to present the characteristics of the PANS k-ω model and evaluate it against experimental data. The PANS k-ω model is implemented with various filter parameters (fk = 0.2-1, fω = 1/fk). The comparisons with the experimental data show that with decreasing filter parameter fk, the PANS model can reasonably predict the time evolution of the cavity shapes and the lift force fluctuations in time. Because the PANS model with smaller fk can overcome the over-prediction of turbulent kinetic energy of the original k-ω model, the time-averaged eddy viscosity at the rear of the attached cavity decreases and more physical turbulent fluctuations are resolved. Moreover, it is found that the value of ω in the free stream significantly affects the numerical results, such as the time-averaged cavity and the fluctuations of the lift coefficient. With decreasing fk, the sensitivity of the ω-equation to the free stream becomes much weaker.

  17. A simulation method for the fruitage body

    NASA Astrophysics Data System (ADS)

    Lu, Ling; Song, Weng-lin; Wang, Lei

    2009-07-01

    An effective visual modeling method for creating the fruitage body is presented. According to the geometric shape characteristics of fruitage, we build its face model based on ellipsoid deformation. The face model is related to the radius; different radii become faces within the fruitage, and the same method is used to simulate the interior shape of the fruitage. The body model is formed by combining the face model with the radius direction. Our method can simulate the virtual inner and outer structure of the fruitage body. The method decreases the amount of data considerably and increases the display speed. In addition, the texture model of the fruitage is defined as a sum of different basis functions. This kind of method is simple and fast. We show the feasibility of our method by creating a winter jujube and an apricot; both include the exocarp, mesocarp and endocarp. It is useful for developing virtual plants.

  18. An image-based reaction field method for electrostatic interactions in molecular dynamics simulations of aqueous solutions

    NASA Astrophysics Data System (ADS)

    Lin, Yuchun; Baumketner, Andrij; Deng, Shaozhong; Xu, Zhenli; Jacobs, Donald; Cai, Wei

    2009-10-01

    In this paper, a new solvation model is proposed for simulations of biomolecules in aqueous solutions that combines the strengths of explicit and implicit solvent representations. Solute molecules are placed in a spherical cavity filled with explicit water, thus providing microscopic detail where it is most needed. Solvent outside of the cavity is modeled as a dielectric continuum whose effect on the solute is treated through the reaction field corrections. With this explicit/implicit model, the electrostatic potential represents a solute molecule in an infinite bath of solvent, thus avoiding unphysical interactions between periodic images of the solute commonly used in the lattice-sum explicit solvent simulations. For improved computational efficiency, our model employs an accurate and efficient multiple-image charge method to compute reaction fields together with the fast multipole method for the direct Coulomb interactions. To minimize the surface effects, periodic boundary conditions are employed for nonelectrostatic interactions. The proposed model is applied to study liquid water. The effect of model parameters, which include the size of the cavity, the number of image charges used to compute reaction field, and the thickness of the buffer layer, is investigated in comparison with the particle-mesh Ewald simulations as a reference. An optimal set of parameters is obtained that allows for a faithful representation of many structural, dielectric, and dynamic properties of the simulated water, while maintaining manageable computational cost. With controlled and adjustable accuracy of the multiple-image charge representation of the reaction field, it is concluded that the employed model achieves convergence with only one image charge in the case of pure water. Future applications to pKa calculations, conformational sampling of solvated biomolecules and electrolyte solutions are briefly discussed.
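
    For orientation, the classical single-image (Friedman-type) approximation below gives the flavor of how an image charge can represent the reaction field of a spherical dielectric cavity; the multiple-image refinement of the paper and its coefficients are not reproduced here, and the dielectric constants in the sketch are generic assumptions.

        def friedman_image(q, s, a, eps_in=1.0, eps_out=80.0):
            """Single image charge approximating the reaction field of a spherical cavity.

            q   : source charge inside the cavity
            s   : its distance from the cavity centre (s < a)
            a   : cavity radius; eps_in/eps_out : dielectric constants inside/outside
            Returns (image charge, its distance from the centre along the same radius).
            """
            gamma = (eps_out - eps_in) / (eps_out + eps_in)
            return -gamma * q * a / s, a * a / s

        # e.g. a unit charge 8 A from the centre of a 20-A cavity in water-like solvent
        print(friedman_image(1.0, 8.0, 20.0))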

  19. A new method for the prediction of chatter stability lobes based on dynamic cutting force simulation model and support vector machine

    NASA Astrophysics Data System (ADS)

    Peng, Chong; Wang, Lun; Liao, T. Warren

    2015-10-01

    Currently, chatter has become a critical factor hindering machining quality and productivity in machining processes. To avoid cutting chatter, a new method based on a dynamic cutting force simulation model and a support vector machine (SVM) is presented for the prediction of chatter stability lobes. The cutting force is selected as the monitoring signal, and wavelet energy entropy theory is used to extract the feature vectors. A support vector machine is constructed using the MATLAB LIBSVM toolbox for pattern classification based on the feature vectors derived from the experimental cutting data. The stability lobe diagram (SLD) is then estimated by combining the classifier with the dynamic cutting force simulation model. Finally, the predicted results are compared with existing methods such as the zero-order analytical (ZOA) and semi-discretization (SD) methods, as well as with actual cutting experimental results, to confirm the validity of the new method.
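
    As a rough sketch of the feature-extraction and classification stage described above (not the authors' code: the wavelet family, decomposition depth, class labels, and synthetic force signals are assumptions, and scikit-learn's SVC stands in for the MATLAB LIBSVM toolbox):

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_entropy(signal, wavelet="db4", level=4):
    """Feature vector from a cutting-force record: relative energy per wavelet
    sub-band plus the Shannon entropy of that energy distribution."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    return np.append(p, -np.sum(p * np.log(p + 1e-12)))

# Toy training set: label 0 = stable cut, 1 = chatter (synthetic stand-ins for
# the experimental cutting-force data used in the paper).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
stable = [np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
chatter = [np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 320 * t)
           + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
X = np.array([wavelet_energy_entropy(s) for s in stable + chatter])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))
```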

  20. Sensitive and rapid reversed-phase liquid chromatography-fluorescence method for determining bisphenol A diglycidyl ether in aqueous-based food simulants.

    PubMed

    Paseiro Losada, P; López Mahía, P; Vázquez Odériz, L; Simal Lozano, J; Simal Gándara, J

    1991-01-01

    A method has been developed for determination of bisphenol A diglycidyl ether (BADGE) in 3 aqueous-based food simulants: water, 15% (v/v) ethanol, and 3% (w/v) acetic acid. BADGE is extracted with C18 cartridges and the extract is concentrated under a stream of nitrogen. BADGE is quantitated by reversed-phase liquid chromatography with fluorescence detection. Relative precision at 200 micrograms/L was 3.4%, the detection limit of the method was 0.1 micrograms/L, and recoveries of spiking concentrations from 1 to 8 micrograms/L were nearly 100%. Relative standard deviations for the method ranged from 3.5 to 5.9%, depending on the identity of the spiked aqueous-based food simulant.

  1. Symplectic partitioned Runge-Kutta method based on the eighth-order nearly analytic discrete operator and its wavefield simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Chao-Yuan; Ma, Xiao; Yang, Lei; Song, Guo-Jie

    2014-03-01

    We propose a symplectic partitioned Runge-Kutta (SPRK) method with eighth-order spatial accuracy based on the extended Hamiltonian system of the acoustic wave equation. Known as the eighth-order NSPRK method, this technique uses an eighth-order accurate nearly analytic discrete (NAD) operator to discretize high-order spatial differential operators and employs a second-order SPRK method to discretize temporal derivatives. The stability criteria and numerical dispersion relations of the eighth-order NSPRK method are given by a semi-analytical method and are tested by numerical experiments. We also show the differences in numerical dispersion between the eighth-order NSPRK method and conventional numerical methods such as the fourth-order NSPRK method, the eighth-order Lax-Wendroff correction (LWC) method and the eighth-order staggered-grid (SG) method. The results show that the ability of the eighth-order NSPRK method to suppress numerical dispersion is clearly superior to that of the conventional numerical methods. In the same computational environment, to eliminate visible numerical dispersion, the eighth-order NSPRK is approximately 2.5 times faster than the fourth-order NSPRK and 3.4 times faster than the fourth-order SPRK, and its memory requirement is only approximately 47.17% of the fourth-order NSPRK method and 49.41% of the fourth-order SPRK method, which indicates the highest computational efficiency. Modeling examples, including two-layer, heterogeneous and Marmousi models, show that the wavefields generated by the eighth-order NSPRK method are very clear with no visible numerical dispersion. These numerical experiments illustrate that the eighth-order NSPRK method can effectively suppress numerical dispersion when coarse grids are adopted. Therefore, this method can greatly decrease computer memory requirements and accelerate forward modeling. In general, the eighth-order NSPRK method has tremendous potential
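
    To make the temporal scheme concrete, here is a minimal second-order SPRK (Stormer-Verlet) step for the 1-D acoustic wave equation written in partitioned form u_t = v, v_t = c^2 u_xx. A plain central-difference Laplacian stands in for the eighth-order NAD spatial operator, and the grid and step sizes are illustrative assumptions only.

```python
import numpy as np

def laplacian_1d(u, dx):
    """Second-order central-difference placeholder for the spatial operator
    (the paper uses an eighth-order nearly analytic discrete operator instead)."""
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    return lap

def sprk2_step(u, v, dt, dx, c):
    """One kick-drift-kick (Stormer-Verlet) step, a second-order symplectic
    partitioned Runge-Kutta method for u_t = v, v_t = c^2 * u_xx."""
    v_half = v + 0.5 * dt * c ** 2 * laplacian_1d(u, dx)
    u_new = u + dt * v_half
    v_new = v_half + 0.5 * dt * c ** 2 * laplacian_1d(u_new, dx)
    return u_new, v_new

# Propagate a Gaussian pulse on a small 1-D grid.
nx, dx, c = 400, 5.0, 2000.0
dt = 0.4 * dx / c                       # CFL-limited time step
x = np.arange(nx) * dx
u = np.exp(-((x - x.mean()) / (20 * dx)) ** 2)
v = np.zeros_like(u)
for _ in range(200):
    u, v = sprk2_step(u, v, dt, dx, c)
print("max |u| after 200 steps:", np.abs(u).max())
```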

  2. A New Combined Stepwise-Based High-Order Decoupled Direct and Reduced-Form Method To Improve Uncertainty Analysis in PM2.5 Simulations.

    PubMed

    Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Yuan, Zibing; Russell, Armistead G; Ou, Jiamin; Zhong, Zhuangmin

    2017-04-04

    The traditional reduced-form model (RFM), based on the high-order decoupled direct method (HDDM), is an efficient uncertainty analysis approach for air quality models, but it has large biases in uncertainty propagation because of the limitation of the HDDM in predicting nonlinear responses to large perturbations of model inputs. To overcome this limitation, a new stepwise-based RFM that combines several sets of local sensitivity coefficients fitted under different conditions is proposed. Evaluations reveal that the new RFM improves the prediction of nonlinear responses. The new method is applied to quantify uncertainties in simulated PM2.5 concentrations in the Pearl River Delta (PRD) region of China as a case study. Results show that the average uncertainty range of hourly PM2.5 concentrations is -28% to 57%, which covers approximately 70% of the observed PM2.5 concentrations, while the traditional RFM underestimates the upper bound of the uncertainty range by 1-6%. Using a variance-based method, the PM2.5 boundary conditions and primary PM2.5 emissions are found to be the two major uncertainty sources in PM2.5 simulations. The new RFM better quantifies the uncertainty range in model simulations and can be applied to improve applications that rely on uncertainty information.
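
    The stepwise idea of switching among locally fitted sensitivity sets can be sketched as a piecewise Taylor expansion; the structure below is our reading of the abstract, and the coefficient table, breakpoints, and values are purely illustrative assumptions.

```python
def stepwise_rfm_response(coeff_table, delta):
    """coeff_table maps a fitting perturbation delta0 -> (C, S1, S2): the simulated
    concentration and its first- and second-order HDDM sensitivities evaluated at
    that perturbation level. The stepwise RFM expands around the nearest fitting
    point instead of always extrapolating the base-case Taylor expansion."""
    delta0 = min(coeff_table, key=lambda d: abs(d - delta))
    c0, s1, s2 = coeff_table[delta0]
    d = delta - delta0
    return c0 + s1 * d + 0.5 * s2 * d ** 2

# Hypothetical PM2.5 response (ug/m3) to fractional emission perturbations:
# sensitivity sets fitted at the base case and at +/-50% perturbations.
table = {0.0: (45.0, 20.0, -6.0), -0.5: (36.0, 24.0, -4.0), 0.5: (54.0, 15.0, -8.0)}
for delta in (-0.6, -0.1, 0.3, 0.7):
    print(delta, round(stepwise_rfm_response(table, delta), 2))
```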

  3. Including anatomical and functional information in MC simulation of PET and SPECT brain studies. Brain-VISET: a voxel-based iterative method.

    PubMed

    Marti-Fuster, Berta; Esteban, Oscar; Thielemans, Kris; Setoain, Xavier; Santos, Andres; Ros, Domenec; Pavia, Javier

    2014-10-01

    Monte Carlo (MC) simulation provides a flexible and robust framework to efficiently evaluate and optimize image processing methods in emission tomography. In this work we present Brain-VISET (Voxel-based Iterative Simulation for Emission Tomography), a method that aims to simulate realistic [(99m)Tc]-SPECT and [(18)F]-PET brain databases by including anatomical and functional information. To this end, activity and attenuation maps generated using high-resolution anatomical images from patients were used as input maps in a MC projector to simulate SPECT or PET sinograms. The reconstructed images were compared with the corresponding real SPECT or PET studies in an iterative process in which the activity input maps were modified at each iteration. Datasets of 30 refractory epileptic patients were used to assess the new method. Each set consisted of structural images (MRI and CT) and functional studies (SPECT and PET), thereby allowing the inclusion of anatomical and functional variability in the simulation input models. SPECT and PET sinograms were obtained using the SimSET package and were reconstructed with the same protocols as those employed for the clinical studies. The convergence of Brain-VISET was evaluated by studying the behavior throughout iterations of the correlation coefficient, the quotient image histogram and a ROI analysis comparing simulated with real studies. The realism of the generated maps was also evaluated. Our findings show that Brain-VISET is able to generate realistic SPECT and PET studies and that four iterations is a suitable number of iterations to guarantee a good agreement between simulated and real studies.

  4. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region.

    PubMed

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing, there are uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to obtain by observation were optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm improved simulations of soil moisture and latent heat flux in all simulation tests, and differences between simulated results and observational data were clearly reduced, but the tests adopting optimized parameters could not simultaneously improve the simulation results for the net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on different datasets have the same order of magnitude but are not identical; soil parameters vary only to a small degree, but the variation range of vegetation parameters is large.
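
    The parameter-estimation loop can be illustrated with a bare-bones particle swarm optimizer; in the study the objective would wrap a SHAW run and return the misfit against observed soil moisture, whereas the toy objective, bounds, and swarm settings below are assumptions for illustration.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=20, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over box-constrained parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy stand-in objective (a real run would wrap a SHAW simulation and return an RMSE).
best, cost = pso_minimize(lambda p: float(np.sum((p - 0.3) ** 2)), bounds=[(0, 1)] * 4)
print(best, cost)
```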

  5. Epistemology of knowledge based simulation

    SciTech Connect

    Reddy, R.

    1987-04-01

    Combining artificial intelligence concepts with traditional simulation methodologies yields a powerful design support tool known as knowledge based simulation. This approach turns a descriptive simulation tool into a prescriptive tool, one which recommends specific goals. Much work in the area of general goal processing and explanation of recommendations remains to be done.

  6. Simulation-based surgical education.

    PubMed

    Evgeniou, Evgenios; Loizou, Peter

    2013-09-01

    The reduction in time for training at the workplace has created a challenge for the traditional apprenticeship model of training. Simulation offers the opportunity for repeated practice in a safe and controlled environment, focusing on trainees and tailored to their needs. Recent technological advances have led to the development of various simulators, which have already been introduced in surgical training. The complexity and fidelity of the available simulators vary; therefore, depending on our resources, we should select the appropriate simulator for the task or skill we want to teach. Educational theory informs us about the importance of context in professional learning. Simulation should therefore recreate the clinical environment and its complexity. Contemporary approaches to simulation have introduced novel ideas for teaching teamwork, communication skills and professionalism. In order for simulation-based training to be successful, simulators have to be validated appropriately and integrated in a training curriculum. Within a surgical curriculum, trainees should have protected time for simulation-based training, under appropriate supervision. Simulation-based surgical education should allow the appropriate practice of technical skills without ignoring the clinical context and must strike an adequate balance between the simulation environment and simulators.

  7. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plan curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing soil sampling schemes. The optimized configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining sampling configurations and mapping the spatial distribution of soil organic matter at low cost and with high efficiency.
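
    A bare-bones version of the simulated annealing step used to optimize the spatial configuration is sketched below; the candidate sites, cost function, and cooling schedule are illustrative assumptions, not the study's actual settings (which scored configurations against the road layout, legacy samples, and terrain data).

```python
import math, random

def simulated_annealing(candidates, n_select, cost, n_iter=5000, t0=1.0, cooling=0.999, seed=0):
    """Anneal over sampling configurations: a configuration is a subset of candidate
    sites, and `cost` scores how well that subset captures the spatial pattern."""
    rng = random.Random(seed)
    current = rng.sample(range(len(candidates)), n_select)
    cur_cost = cost([candidates[i] for i in current])
    best, best_cost, temp = list(current), cur_cost, t0
    for _ in range(n_iter):
        trial = list(current)                      # neighbour: swap one selected site
        trial[rng.randrange(n_select)] = rng.choice(
            [i for i in range(len(candidates)) if i not in current])
        trial_cost = cost([candidates[i] for i in trial])
        if trial_cost < cur_cost or rng.random() < math.exp((cur_cost - trial_cost) / temp):
            current, cur_cost = trial, trial_cost
            if cur_cost < best_cost:
                best, best_cost = list(current), cur_cost
        temp *= cooling
    return best, best_cost

# Toy example: pick 5 of 50 random sites so they spread out (maximise the minimum spacing).
random.seed(1)
sites = [(random.random(), random.random()) for _ in range(50)]
spread = lambda pts: -min(math.dist(a, b) for i, a in enumerate(pts) for b in pts[i + 1:])
print(simulated_annealing(sites, 5, spread))
```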

  8. Simulations of light scattering spectra of a nanoshell on plane interface based on the discrete sources method

    NASA Astrophysics Data System (ADS)

    Eremina, Elena; Eremin, Yuri; Wriedt, Thomas

    2006-11-01

    The resonance properties of nanoshells are of great interest in nanosensing applications such as surface-enhanced Raman scattering and biological sensing. In this paper the discrete sources method is applied to analyze the spectrum of evanescent light scattering from a nanoshell particle deposited near a plane surface. The scattering spectrum of nanoshells was calculated using a rigorous theoretical model that accounts for all features of the scattering problem, such as a medium with frequency dispersion, the presence of the interface, the objective aperture and its location, and core-shell asphericity. The dependence of the local nanoshell spectral density behavior on the particle's properties is discussed.

  9. Influence of River Bed Elevation Survey Configurations and Interpolation Methods on the Accuracy of LIDAR Dtm-Based River Flow Simulations

    NASA Astrophysics Data System (ADS)

    Santillan, J. R.; Serviano, J. L.; Makinano-Santillan, M.; Marqueso, J. T.

    2016-09-01

    In this paper, we investigated how the survey configuration and the type of interpolation method affect the accuracy of river flow simulations that utilize a LIDAR DTM integrated with an interpolated river bed as the main source of topographic information. Aside from determining the accuracy of the individually generated river bed topographies, we also assessed the overall accuracy of the river flow simulations in terms of maximum flood depth and extent. Four survey configurations consisting of river bed elevation data points arranged as cross-section (XS), zig-zag (ZZ), river banks-centerline (RBCL), and river banks-centerline-zig-zag (RBCLZZ), and two interpolation methods (Inverse Distance-Weighted and Ordinary Kriging) were considered. Major results show that the choice of survey configuration, rather than the interpolation method, has a significant effect on the accuracy of the interpolated river bed surfaces, and subsequently on the accuracy of the river flow simulations. The RMSEs of the interpolated surfaces and the model results vary from one configuration to another and depend on how evenly each configuration collects river bed elevation data points. The large RMSEs for the RBCL configuration and the low RMSEs for the XS configuration confirm that as the data points become evenly spaced and cover more portions of the river, the resulting interpolated surface and the river flow simulation where it was used also become more accurate. The XS configuration with Ordinary Kriging (OK) as the interpolation method provided the best river bed interpolation and river flow simulation results. The RBCL configuration, regardless of the interpolation algorithm used, resulted in the least accurate river bed surfaces and simulation results. Based on the accuracy analysis, collecting river bed data points in the XS configuration and applying the OK method to interpolate the river bed topography are the best choices to produce satisfactory river flow simulation outputs. The use of
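
    Of the two interpolators compared, the Inverse Distance-Weighted one is simple enough to sketch directly; the survey points, elevations, and query locations below are illustrative, and a real workflow would interpolate onto the LIDAR DTM grid before burning in the river bed.

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighted interpolation of river-bed elevations.
    xy_known: (n, 2) surveyed points, z_known: (n,) elevations,
    xy_query: (m, 2) locations where the bed elevation is needed."""
    xy_known, z_known, xy_query = map(np.asarray, (xy_known, z_known, xy_query))
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero at sample points
    w = 1.0 / d ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

# Cross-section style survey points along a short reach (illustrative values, metres).
pts = np.array([[0, 0], [10, 0], [20, 0], [0, 50], [10, 50], [20, 50]], dtype=float)
z = np.array([98.2, 96.5, 98.0, 98.4, 96.8, 98.1])
grid = np.array([[5, 25], [10, 25], [15, 25]], dtype=float)
print(idw_interpolate(pts, z, grid))
```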

  10. 3-D simulation of soot formation in a direct-injection diesel engine based on a comprehensive chemical mechanism and method of moments

    NASA Astrophysics Data System (ADS)

    Zhong, Bei-Jing; Dang, Shuai; Song, Ya-Na; Gong, Jing-Song

    2012-02-01

    Here, we propose both a comprehensive chemical mechanism and a reduced mechanism for a three-dimensional combustion simulation describing the formation of polycyclic aromatic hydrocarbons (PAHs) in a direct-injection diesel engine. A soot model based on the reduced mechanism and a method of moments is also presented. The turbulent diffusion flame and PAH formation in the diesel engine were modelled using the reduced mechanism, derived from the detailed mechanism, with a fixed wall temperature as a boundary condition. The spatial distribution of PAH concentrations and the characteristic parameters for soot formation in the engine cylinder were obtained by coupling a detailed chemical kinetic model with the three-dimensional computational fluid dynamics (CFD) model. Comparison of the simulated results with limited experimental data shows that the chemical mechanisms and soot model are realistic and correctly describe the basic physics of diesel combustion, but require further development to improve their accuracy.

  11. Spectral simulation methods for enhancing qualitative and quantitative analyses based on infrared spectroscopy and quantitative calibration methods for passive infrared remote sensing of volatile organic compounds

    NASA Astrophysics Data System (ADS)

    Sulub, Yusuf Ismail

    Infrared spectroscopy (IR) has over the years found a myriad of applications, including passive environmental remote sensing of toxic pollutants and the development of a blood glucose sensor. In this dissertation, the capabilities of both these applications are further enhanced with data analysis strategies employing digital signal processing and novel simulation approaches. Both quantitative and qualitative determinations of volatile organic compounds are investigated in the passive IR remote sensing research described in this dissertation. In the quantitative work, partial least-squares (PLS) regression analysis is used to generate multivariate calibration models for passive Fourier transform IR remote sensing measurements of open-air generated vapors of ethanol in the presence of methanol as an interfering species. A step-wise co-addition scheme coupled with a digital filtering approach is used to attenuate the effects of variation in optical path length or plume width. For the qualitative study, an IR imaging line scanner is used to acquire remote sensing data in both the spatial and spectral domains. This technology is capable of not only identifying but also locating the sample under investigation. Successful implementation of this methodology is hampered by the huge costs incurred in conducting these experiments and the impracticality of acquiring large amounts of representative training data. To address this problem, a novel simulation approach is developed that generates training data based on synthetic analyte-active and measured analyte-inactive data. Subsequently, automated pattern classifiers are generated using piecewise linear discriminant analysis to predict the presence of the analyte signature in measured imaging data acquired in remote sensing applications. Near infrared glucose determinations based on the region of 5000-4000 cm-1 are the focus of the research in the latter part of this dissertation. A six-component aqueous matrix of glucose

  12. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
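
    A minimal one-dimensional sketch of the time-splitting idea (cheap explicit advection sub-steps followed by one implicit dispersion step) is given below. The actual engine uses a finite-volume Godunov scheme and mixed finite elements on unstructured meshes, so the first-order upwind step, dense solver, and grid below are only structural illustrations.

```python
import numpy as np

def advect_upwind(c, u, dx, dt):
    """Explicit first-order upwind advection step (assumes u > 0; the first cell
    is held fixed as the inflow boundary)."""
    c_new = c.copy()
    c_new[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])
    return c_new

def diffuse_implicit(c, D, dx, dt):
    """Implicit (backward Euler) diffusion step; being unconditionally stable it
    can span several advection sub-steps, as described for TaRSE."""
    n, r = c.size, D * dt / dx ** 2
    A = np.eye(n) * (1 + 2 * r) + np.diag(np.full(n - 1, -r), 1) + np.diag(np.full(n - 1, -r), -1)
    A[0, 0] = A[-1, -1] = 1 + r          # zero-flux boundaries
    return np.linalg.solve(A, c)

# A sharp solute front advected and dispersed: four advective steps per dispersive step.
nx, dx = 200, 1.0
c = np.where(np.arange(nx) < 20, 1.0, 0.0)
u, D = 0.5, 0.05
dt_adv = 0.8 * dx / u                    # advective CFL number of 0.8
for _ in range(50):
    for _ in range(4):
        c = advect_upwind(c, u, dx, dt_adv)
    c = diffuse_implicit(c, D, dx, 4 * dt_adv)
print("front position (first cell below c = 0.5):", np.argmax(c < 0.5))
```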

  13. Investigating internal architecture effect in plastic deformation and failure for TPMS-based scaffolds using simulation methods and experimental procedure.

    PubMed

    Kadkhodapour, J; Montazerian, H; Raeisi, S

    2014-10-01

    Rapid prototyping (RP) has been a promising technique for producing tissue engineering scaffolds that mimic the behavior of host tissue as closely as possible. Biodegradability, feasibility of cell growth and migration, and mechanical properties such as strength and energy absorption have to be considered in the design procedure. In order to study the effect of internal architecture on the plastic deformation and failure pattern, architectures based on triply periodic minimal surfaces (TPMS), which have been observed in nature, were used. P and D surfaces at 30% and 60% volume fractions were modeled with 3×3×3 unit cells and imported into an Objet EDEN 260 3-D printer. Models were printed with VeroBlue FullCure 840 photopolymer resin. Mechanical compression tests were performed to investigate the compressive behavior of the scaffolds. The deformation process and stress-strain curves were simulated by FEA and exhibited good agreement with the experimental observations. Current approaches for predicting the dominant deformation mode under compression, including Maxwell's criteria and scaling laws, were also investigated to achieve an understanding of the relationships between the deformation pattern and the mechanical properties of porous structures. It was observed that the effect of stress concentration in TPMS-based scaffolds, resulting from heterogeneous mass distribution, particularly at lower volume fractions, led to a behavior different from that of typical cellular materials. As a result, although more parameters are considered for determining the dominant deformation mode in scaling laws, the two mentioned approaches cannot exclusively be used to compare the mechanical response of cellular materials at the same volume fraction.

  14. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    SciTech Connect

    Boring, Ronald Laurids; Shirley, Rachel Elizabeth; Joe, Jeffrey Clark; Mandelli, Diego

    2014-12-01

    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  15. Simulation-based medical teaching and learning.

    PubMed

    Al-Elq, Abdulmohsen H

    2010-01-01

    One of the most important steps in curriculum development is the introduction of simulation-based medical teaching and learning. Simulation is a generic term that refers to an artificial representation of a real-world process used to achieve educational goals through experiential learning. Simulation-based medical education is defined as any educational activity that utilizes simulation aids to replicate clinical scenarios. Although medical simulation is relatively new, simulation has been used for a long time in other high-risk professions such as aviation. Medical simulation allows the acquisition of clinical skills through deliberate practice rather than an apprentice style of learning. Simulation tools serve as an alternative to real patients. A trainee can make mistakes and learn from them without the fear of harming the patient. There are different types and classifications of simulators, and their costs vary according to the degree of their resemblance to reality, or 'fidelity'. Simulation-based learning is expensive. However, it is cost-effective if utilized properly. Medical simulation has been found to enhance clinical competence at the undergraduate and postgraduate levels. It has also been found to have many advantages that can improve patient safety and reduce health care costs through the improvement of the medical provider's competencies. The objective of this narrative review article is to highlight the importance of simulation as a new teaching method in undergraduate and postgraduate education.

  16. A heterogeneous graph-based recommendation simulator

    SciTech Connect

    Yeonchan, Ahn; Sungchan, Park; Lee, Matt Sangkeun; Sang-goo, Lee

    2013-01-01

    Heterogeneous graph-based recommendation frameworks have flexibility in that they can incorporate various recommendation algorithms and various kinds of information to produce better results. In this demonstration, we present a heterogeneous graph-based recommendation simulator that enables participants to experience the flexibility of a heterogeneous graph-based recommendation method. With our system, participants can simulate various recommendation semantics by expressing the semantics via meaningful meta-paths such as User-Movie-User-Movie. The simulator then returns the recommendation results on the fly, based on the user-customized semantics, using a fast Monte Carlo algorithm.
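
    The path-based semantics can be illustrated with a tiny Monte Carlo walker over a toy user-movie graph; the graph, names, and walk counts below are illustrative assumptions, not the demonstrated system.

```python
import random
from collections import Counter, defaultdict

# Tiny heterogeneous graph: users connected to the movies they rated.
ratings = {"u1": ["m1", "m2"], "u2": ["m1", "m3"], "u3": ["m2", "m3", "m4"]}
movie_to_users = defaultdict(list)
for u, movies in ratings.items():
    for m in movies:
        movie_to_users[m].append(u)

def recommend(user, n_walks=2000, seed=0):
    """Monte Carlo estimate of a User-Movie-User-Movie meta-path recommendation:
    walk from the target user to a rated movie, to another user of that movie,
    to another of that user's movies, and count where the walks end."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_walks):
        m1 = rng.choice(ratings[user])
        u2 = rng.choice(movie_to_users[m1])
        m2 = rng.choice(ratings[u2])
        if m2 not in ratings[user]:          # only recommend unseen movies
            counts[m2] += 1
    return counts.most_common()

print(recommend("u1"))
```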

  17. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language; therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high-capacity data manipulation required by the most complex real time models.

  18. Multigrid methods with applications to reservoir simulation

    SciTech Connect

    Xiao, Shengyou

    1994-05-01

    Multigrid methods are studied for solving elliptic partial differential equations. The focus is on parallel multigrid methods and their use for reservoir simulation. Multicolor Fourier analysis is used to analyze the behavior of standard multigrid methods for problems in one and two dimensions, and the relation between multicolor and standard Fourier analysis is established. Multiple coarse grid methods for solving model problems in 1 and 2 dimensions are considered; at each coarse grid level, more than one coarse grid is used to improve convergence. For a given Dirichlet problem, a related extended problem is first constructed; a purification procedure can be used to obtain Moore-Penrose solutions of the singular systems encountered. For solving anisotropic equations, semicoarsening and line smoothing techniques are used with multiple coarse grid methods to improve convergence. Two-level convergence factors are estimated using multicolor Fourier analysis. In the case where each operator has the same stencil at each grid point on one level, exact multilevel convergence factors can be obtained. For solving partial differential equations with discontinuous coefficients, the interpolation and restriction operators should include information about the equation coefficients. Matrix-dependent interpolation and restriction operators based on the Schur complement can be used in nonsymmetric cases. A semicoarsening multigrid solver with these operators is used in UTCOMP, a 3-D, multiphase, multicomponent, compositional reservoir simulator. The numerical experiments were carried out on different computing systems. Results indicate that the multigrid methods are promising.
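
    As a structural illustration of the coarse-grid correction at the heart of these methods, here is a two-grid cycle for the 1-D Poisson problem with weighted-Jacobi smoothing; the smoother, transfer operators, and model problem are generic textbook choices, not the reservoir-simulation setup described above.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2 * u[1:-1] - u[2:]) / h**2
    return r

def two_grid(u, f, h):
    """Pre-smooth, restrict the residual, solve the coarse problem directly,
    prolong the correction, post-smooth."""
    u = smooth(u, f, h)
    r = residual(u, f, h)
    rc = r[::2].copy()                                   # coarse-grid residual
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]   # full weighting
    nc, hc = rc.size, 2 * h
    A = (np.diag(np.full(nc - 2, 2.0)) - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])              # direct coarse solve of -e'' = r
    e = np.zeros_like(u)
    e[::2] = ec                                          # prolongation: copy coarse values...
    e[1::2] = 0.5 * (e[:-1:2] + e[2::2])                 # ...and average for in-between nodes
    return smooth(u + e, f, h)

n = 65                                                   # 64 fine intervals, 33 coarse nodes
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print("max error vs sin(pi*x):", np.abs(u - np.sin(np.pi * x)).max())
```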

  19. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations

    PubMed Central

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the -server parameter de/activated, altogether 12,800 data points were collected and consequently analyzed. An illustrative decision-making scenario was used that allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models. PMID:27806061
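
    One of the compared methods, TOPSIS, is compact enough to sketch in full; the decision matrix, weights, and criteria below are illustrative stand-ins for whatever an agent would actually evaluate in an ACE model.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Minimal TOPSIS ranking. decision_matrix: alternatives x criteria;
    weights: criterion weights; benefit: True for criteria to maximise,
    False for cost criteria. Returns (ranking indices, closeness scores)."""
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)
    V = w * X / np.linalg.norm(X, axis=0)          # vector normalisation + weighting
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(-closeness), closeness

# Toy agent decision: 3 alternatives scored on price (cost), quality (benefit),
# and delivery time (cost).
matrix = [[200, 7, 4], [250, 9, 3], [180, 6, 6]]
rank, score = topsis(matrix, weights=[0.4, 0.4, 0.2], benefit=[False, True, False])
print(rank, np.round(score, 3))
```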

  20. Evaluation of six scatter correction methods based on spectral analysis in (99m)Tc SPECT imaging using SIMIND Monte Carlo simulation.

    PubMed

    Asl, Mahsa Noori; Sadremomtaz, Alireza; Bitarafan-Rajabi, Ahmad

    2013-10-01

    Compton-scattered photons included within the photopeak pulse-height window result in the degradation of SPECT images both qualitatively and quantitatively. The purpose of this study is to evaluate and compare six scatter correction methods based on setting energy windows in the (99m)Tc spectrum. SIMIND Monte Carlo simulation is used to generate the projection images from a cold-sphere hot-background phantom. For the evaluation of the different scatter correction methods, three assessment criteria are considered: image contrast, signal-to-noise ratio (SNR) and relative noise of the background (RNB). Except for the dual-photopeak window (DPW) method, the image contrast of the five cold spheres is improved in the range of 2.7-26%. Among the methods considered, two methods show a nonuniform correction performance. The RNB for the scatter correction methods ranges from a minimum of 0.03 for the DPW method to a maximum of 0.0727 for the three-energy-window (TEW) method using the trapezoidal approximation. The TEW method using the triangular approximation is proposed as the most appropriate correction method because of its ease of implementation, the good improvement of the image contrast and the SNR for the five cold spheres, and its low noise level.
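
    For reference, the triple-energy-window estimate infers the scatter inside the photopeak window from narrow sub-windows on either side of it; the sketch below uses the standard trapezoidal form and assumes the triangular variant simply drops the upper-window term, with all counts and widths being illustrative.

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_main, triangular=False):
    """Triple-energy-window scatter estimate for counts in the main photopeak window.

    Trapezoidal form: S = (c_lower / w_lower + c_upper / w_upper) * w_main / 2.
    Triangular form (assumed here): the upper-window term is dropped, i.e. scatter
    above the photopeak is taken as negligible.
    """
    if triangular:
        return (c_lower / w_lower) * w_main / 2.0
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0


# Illustrative per-pixel counts for a (99m)Tc acquisition: a 20% main window
# (about 28 keV wide) flanked by 3 keV sub-windows.
c_main, c_low, c_up = 1500.0, 90.0, 10.0
for tri in (False, True):
    scatter = tew_scatter_estimate(c_low, c_up, 3.0, 3.0, 28.0, triangular=tri)
    print("triangular" if tri else "trapezoidal", "primary estimate:", c_main - scatter)
```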

  1. Simulation analysis of airflow alteration in the trachea following the vascular ring surgery based on CT images using the computational fluid dynamics method.

    PubMed

    Chen, Fong-Lin; Horng, Tzyy-Leng; Shih, Tzu-Ching

    2014-01-01

    This study presents a computational fluid dynamics (CFD) model to simulate the three-dimensional airflow in the trachea before and after the vascular ring surgery (VRS). The simulation was based on CT-scan images of the patients with the vascular ring diseases. The surface geometry of the tracheal airway was reconstructed using triangular mesh by the Amira software package. The unstructured tetrahedral volume meshes were generated by the ANSYS ICEM CFD software package. The airflow in the tracheal airway was solved by the ESI CFD-ACE+ software package. Numerical simulation shows that the pressure drops across the tracheal stenosis before and after the surgery were 0.1789 and 0.0967 Pa, respectively, with the inspiratory inlet velocity 0.1 m/s. Meanwhile, the improvement percentage by the surgery was 45.95%. In the expiratory phase, by contrast, the improvement percentage was 40.65%. When the inspiratory velocity reached 1 m/s, the pressure drop became 4.988 Pa and the improvement percentage was 43.32%. Simulation results further show that after treatment the pressure drop in the tracheal airway was significantly decreased, especially for low inspiratory and expiratory velocities. The CFD method can be applied to quantify the airway pressure alteration and to evaluate the treatment outcome of the vascular ring surgery under different respiratory velocities.

  2. Formability analysis of aluminum alloy sheets at elevated temperatures with numerical simulation based on the M-K method

    SciTech Connect

    Bagheriasl, Reza; Ghavam, Kamyar; Worswick, Michael

    2011-05-04

    The effect of temperature on the formability of aluminum alloy sheet is studied by developing forming limit diagrams (FLDs) for a 3000-series aluminum alloy using the Marciniak and Kuczynski technique and numerical simulation. The numerical model is built in LS-DYNA and incorporates Barlat's YLD2000 anisotropic yield function and the temperature-dependent Bergstrom hardening law. Three different temperatures are studied: room temperature, 250 deg. C, and 300 deg. C. For each temperature case, various loading conditions are applied to the M-K defect model. The effect of material anisotropy is considered by varying the defect angle. A simplified failure criterion is used to predict the onset of necking. Minor and major strains are obtained from the simulations and plotted for each temperature level. It is demonstrated that temperature improves the forming limit of 3000-series aluminum alloy sheet.

  3. Collaborative simulation method with spatiotemporal synchronization process control

    NASA Astrophysics Data System (ADS)

    Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian

    2016-10-01

    When designing a complex mechatronic system, such as a high-speed train, it is relatively difficult to effectively simulate the entire system's dynamic behavior because it involves multi-disciplinary subsystems. Currently, the most practical approach for multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization among the multi-directional coupling simulations of subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupling simulation of a given complex mechatronic system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism to define the interfacing and interaction mechanisms among subsystems, and 2) a simulation process control algorithm to realize the coupling simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method 1) can be used to simulate subsystem interactions under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupling simulation among multi-disciplinary subsystems. This method has been successfully applied in China's high-speed train design and development processes, demonstrating that it can be applied to a wide range of engineering system design and simulation tasks with improved efficiency and effectiveness.

  4. Simulating marine propellers with vortex particle method

    NASA Astrophysics Data System (ADS)

    Wang, Youjiang; Abdel-Maksoud, Moustafa; Song, Baowei

    2017-01-01

    The vortex particle method is applied to compute the open water characteristics of marine propellers. It is based on the large-eddy simulation technique, and the Smagorinsky-Lilly sub-grid scale model is implemented for the eddy viscosity. The vortex particle method is combined with the boundary element method, in the sense that the body is modelled with boundary elements and the slipstream is modelled with vortex particles. Rotational periodic boundaries are adopted, which leads to a cylindrical sector domain for the slipstream. The particle redistribution scheme and the fast multipole method are modified to account for the rotational periodic boundaries. The open water characteristics of three propellers with different skew angles are calculated with the proposed method, and the results are compared with those obtained with the boundary element method and with experiments. It is found that the proposed method predicts the open water characteristics more accurately than the boundary element method, especially for high loading conditions and highly skewed propellers. The influence of the Smagorinsky constant is also studied, which shows that the results have a low sensitivity to it.

  5. Plume base flow simulation technology

    NASA Technical Reports Server (NTRS)

    Roberts, B. B.; Wallace, R. O.; Sims, J. L.

    1983-01-01

    A combined analytical/empirical approach was studied in an effort to define the plume simulation parameters for base flow. For design purposes, rocket exhaust simulation (i.e., plume simulation) is determined by wind tunnel testing. Cold gas testing was concluded to be a cost- and schedule-effective way to build a data base of substantial scope. The results fell short of the target, although the work conducted was conclusive and advanced the state of the art. Comparisons of wind tunnel predictions with Space Transportation System (STS) flight data showed considerable differences. However, a review of the technology program data base has yielded an additional parameter that may correlate flight and cold gas test data. Data from the plume technology program and the NASA test flights are presented to substantiate the proposed simulation parameters.

  6. Simulation methods for looping transitions.

    PubMed

    Gaffney, B J; Silverstone, H J

    1998-09-01

    Looping transitions occur in field-swept electron magnetic resonance spectra near avoided crossings and involve a single pair of energy levels that are in resonance at two magnetic field strengths, before and after the avoided crossing. When the distance between the two resonances approaches a linewidth, the usual simulation of the spectra, which results from a linear approximation of the dependence of the transition frequency on magnetic field, breaks down. A cubic approximation to the transition frequency, which can be obtained from the two resonance fields and the field-derivatives of the transition frequencies, along with linear (or better) interpolation of the transition-probability factor, restores accurate simulation. The difference is crucial for accurate line shapes at fixed angles, as in an oriented single crystal, but the difference turns out to be a smaller change in relative intensity for a powder spectrum. Spin-3/2 Cr3+ in ruby and spin-5/2 Fe3+ in transferrin oxalate are treated as examples.
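
    The cubic approximation described above amounts to Hermite interpolation between the two resonance fields using the transition frequencies and their field-derivatives there; a minimal sketch (with made-up field and derivative values) follows.

```python
import numpy as np

def cubic_transition_frequency(b1, f1, df1, b2, f2, df2):
    """Cubic Hermite approximation to the transition frequency as a function of
    magnetic field, built from the two resonance fields b1 and b2, the transition
    frequencies f1 and f2 there, and the field-derivatives df1 and df2.
    Returns a callable f(B)."""
    h = b2 - b1
    def f(B):
        t = (np.asarray(B, dtype=float) - b1) / h
        h00 = 2 * t**3 - 3 * t**2 + 1
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        return h00 * f1 + h10 * h * df1 + h01 * f2 + h11 * h * df2
    return f

# Illustrative values: a level pair resonant with a 9.5 GHz field at two fields
# bracketing an avoided crossing (fields in tesla, slopes in GHz/T).
f = cubic_transition_frequency(b1=0.30, f1=9.5, df1=25.0, b2=0.36, f2=9.5, df2=-22.0)
print(np.round(f(np.linspace(0.30, 0.36, 7)), 3))
```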

  7. Angioplasty simulation using ChainMail method

    NASA Astrophysics Data System (ADS)

    Le Fol, Tanguy; Acosta-Tamayo, Oscar; Lucas, Antoine; Haigron, Pascal

    2007-03-01

    Tackling transluminal angioplasty planning, the aim of our work is to provide patient-specific solutions to clinical problems. This work focuses on the realization of simple simulation scenarios that take into account the macroscopic behavior of a stenosis. It means simulating geometrical and physical data from the inflation of a balloon while integrating data from tissue analysis and parameters from virtual tool-tissue interactions. In this context, three main behaviors have been identified: soft tissues crush completely under the effect of the balloon; calcified plaques do not admit any deformation but can move within deformable structures; and the blood vessel wall undergoes compression and tries to recover its original shape. We investigated the use of ChainMail, which is based on elements linked to one another by geometric constraints. Compared with time-consuming methods on one hand and low-realism ones on the other, ChainMail methods provide a good compromise between physical and geometrical approaches. In this study, constraints are defined from pixel density in angio-CT images. The 2D method proposed in this paper first initializes the balloon in the blood vessel lumen. Then the balloon inflates, and the propagation of element movements gives an approximate reaction of the tissues. Finally, a minimal energy level is calculated to locally adjust element positions throughout an elastic relaxation stage. Preliminary experimental results obtained on 2D computed tomography (CT) images (100x100 pixels) show that the method is fast enough to handle a great number of linked elements. The simulation achieves real-time, realistic interactions, particularly for hard and soft plaques.

  8. Medical students’ satisfaction with the Applied Basic Clinical Seminar with Scenarios for Students, a novel simulation-based learning method in Greece

    PubMed Central

    2016-01-01

    Purpose: The integration of simulation-based learning (SBL) methods holds promise for improving the medical education system in Greece. The Applied Basic Clinical Seminar with Scenarios for Students (ABCS3) is a novel two-day SBL course that was designed by the Scientific Society of Hellenic Medical Students. The ABCS3 targeted undergraduate medical students and consisted of three core components: the case-based lectures, the ABCDE hands-on station, and the simulation-based clinical scenarios. The purpose of this study was to evaluate the general educational environment of the course, as well as the skills and knowledge acquired by the participants. Methods: Two sets of questions were distributed to the participants: the Dundee Ready Educational Environment Measure (DREEM) questionnaire and an internally designed feedback questionnaire (InEv). A multiple-choice examination was also distributed prior to the course and following its completion. A total of 176 participants answered the DREEM questionnaire, 56 the InEv, and 60 the MCQs. Results: The overall DREEM score was 144.61 (±28.05) out of 200. Delegates who participated in both the case-based lectures and the interactive scenarios core components scored higher than those who only completed the case-based lecture session (P=0.038). The mean overall feedback score was 4.12 (±0.56) out of 5. Students scored significantly higher on the post-test than on the pre-test (P<0.001). Conclusion: The ABCS3 was found to be an effective SBL program, as medical students reported positive opinions about their experiences and exhibited improvements in their clinical knowledge and skills. PMID:27012313

  9. Pipette-based Method to Study Embryoid Body Formation Derived from Mouse and Human Pluripotent Stem Cells Partially Recapitulating Early Embryonic Development Under Simulated Microgravity Conditions

    NASA Astrophysics Data System (ADS)

    Shinde, Vaibhav; Brungs, Sonja; Hescheler, Jürgen; Hemmersbach, Ruth; Sachinidis, Agapios

    2016-06-01

    The in vitro differentiation of pluripotent stem cells partially recapitulates early in vivo embryonic development. More recently, embryonic development under the influence of microgravity has become a primary focus of space life sciences. In order to integrate the technique of pluripotent stem cell differentiation with simulated microgravity approaches, the 2-D clinostat compatible pipette-based method was experimentally investigated and adapted for investigating stem cell differentiation processes under simulated microgravity conditions. In order to keep residual accelerations as low as possible during clinorotation, while also guaranteeing enough material for further analysis, stem cells were exposed in 1-mL pipettes with a diameter of 3.5 mm. The differentiation of mouse and human pluripotent stem cells inside the pipettes resulted in the formation of embryoid bodies at normal gravity (1 g) after 24 h and 3 days. Differentiation of the mouse pluripotent stem cells on a 2-D pipette-clinostat for 3 days also resulted in the formation of embryoid bodies. Interestingly, the expression of myosin heavy chain was downregulated when cultivation was continued for an additional 7 days at normal gravity. This paper describes the techniques for culturing and differentiation of pluripotent stem cells and exposure to simulated microgravity during culturing or differentiation on a 2-D pipette clinostat. The implementation of these methodologies along with -omics technologies will contribute to understand the mechanisms regulating how microgravity influences early embryonic development.

  10. Jacobian-free Newton Krylov discontinuous Galerkin method and physics-based preconditioning for nuclear reactor simulations

    SciTech Connect

    HyeongKae Park; Robert R. Nourgaliev; Richard C. Martineau; Dana A. Knoll

    2008-09-01

    We present high-order accurate spatiotemporal discretization of all-speed flow solvers using Jacobian-free Newton Krylov framework. One of the key developments in this work is the physics-based preconditioner for the all-speed flow, which makes use of traditional semi-implicit schemes. The physics-based preconditioner is developed in the primitive variable form, which allows a straightforward separation of physical phenomena. Numerical examples demonstrate that the developed preconditioner effectively reduces the number of the Krylov iterations, and the efficiency is independent of the Mach number and mesh sizes under a fixed CFL condition.
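
    The Jacobian-free part of the framework rests on approximating Jacobian-vector products by finite differences of the residual, so that a Krylov method never needs the Jacobian matrix itself; a minimal unpreconditioned sketch (with a toy residual, not the flow solver) is shown below.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov_solve(F, u0, newton_tol=1e-10, max_newton=20, fd_eps=1e-7):
    """Minimal Jacobian-free Newton-Krylov loop: J(u) v is approximated by a
    finite difference of the residual F, so the Jacobian is never formed.
    (No physics-based preconditioner is included in this sketch.)"""
    u = u0.astype(float).copy()
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < newton_tol:
            break
        def jv(v):
            # J(u) v ~= (F(u + eps * v) - F(u)) / eps
            eps = fd_eps * (1.0 + np.linalg.norm(u)) / max(np.linalg.norm(v), 1e-30)
            return (F(u + eps * v) - r) / eps
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(J, -r)
        u += du
    return u

# Small nonlinear test problem: F_i(u) = u_i**3 + 2*u_i - 1 - i/10.
F = lambda u: u**3 + 2 * u - 1 - np.arange(u.size) / 10.0
print(newton_krylov_solve(F, np.zeros(5)))
```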

  11. Parametrizing Physics-Based Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Schultz, Kasey W.; Yoder, Mark R.; Wilson, John M.; Heien, Eric M.; Sachs, Michael K.; Rundle, John B.; Turcotte, Don L.

    2016-11-01

    Utilizing earthquake source parameter scaling relations, we formulate an extensible slip weakening friction law for quasi-static earthquake simulations. This algorithm is based on the method used to generate fault strengths for a recent earthquake simulator comparison study of the California fault system. Here we focus on the application of this algorithm in the Virtual Quake earthquake simulator. As a case study we probe the effects of the friction law's parameters on simulated earthquake rates for the UCERF3 California fault model, and present the resulting conditional probabilities for California earthquake scenarios. The new friction model significantly extends the moment magnitude range over which simulated earthquake rates match observed rates in California, as well as substantially improving the agreement between simulated and observed scaling relations for mean slip and total rupture area.

  12. Large-Eddy Simulation of Shallow Water Langmuir Turbulence Using Isogeometric Analysis and the Residual-Based Variational Multiscale Method

    DTIC Science & Technology

    2012-01-01

    generating turbulence in the ocean; others include wind- and tidal-driven shear, buoyancy-driven convection and wave breaking. Wind speeds greater than 3 m... structure to the primary, mean component of the flow driven by the wind. LC results from surface wave-current interaction and often occurs within the... equations with an extra vortex force term accounting for wave-current interaction giving rise to LC. The RBVMS method with quadratic NURBS is shown to

  13. Methods of sound simulation and applications in flight simulators

    NASA Technical Reports Server (NTRS)

    Gaertner, K. P.

    1980-01-01

    An overview of methods for electronically synthesizing sounds is presented. A given amount of hardware and computer capacity places an upper limit on the degree and fidelity of realism of sound simulation which is attainable. Good sound realism for aircraft simulators can be especially expensive because of the complexity of flight sounds and their changing patterns through time. Nevertheless, the flight simulator developed at the Research Institute for Human Engineering, West Germany, shows that it is possible to design an inexpensive sound simulator with the required acoustic properties using analog computer elements. The characteristics of the sub-sound elements produced by this sound simulator for take-off, cruise and approach are discussed.

  14. [Comparison of two types of double-lined simulated landfill leakage detection based on high voltage DC method].

    PubMed

    Yang, Ping; Nai, Chang-Xin; Dong, Lu; Wang, Qi; Wang, Yan-Wen

    2006-01-01

    Two types of double high-density polyethylene (HDPE) liner landfills were simulated, in which clay or geogrid was added between the two HDPE liners. The general resistance of the second type is 15% larger than that of the first type in the primary HDPE liner detection, and 20% larger in the secondary HDPE liner detection. The high-voltage DC method can accomplish leakage detection and location for both types of landfill, and the error of leakage location is less than 10 cm when the electrode spacing is 1 m.

  15. Large-Eddy Simulation and Multigrid Methods

    SciTech Connect

    Falgout,R D; Naegle,S; Wittum,G

    2001-06-18

    A method to simulate turbulent flows with large-eddy simulation on unstructured grids is presented. Two kinds of dynamic models are used to model the unresolved scales of motion and are compared with each other on different grids. The behavior of the models is thereby shown, and the effect of adaptive grid refinement is additionally investigated. Furthermore, the parallelization aspect is addressed.

  16. A fluid model simulation of a simplified plasma limiter based on spectral-element time-domain method

    SciTech Connect

    Qian, Cheng; Ding, Dazhi; Fan, Zhenhong; Chen, Rushan

    2015-03-15

    A simplified plasma limiter prototype is proposed, and a fluid model coupled with Maxwell's equations is established to describe the operating mechanism of the plasma limiter. A three-dimensional (3-D) simplified sandwich-structure plasma limiter model is analyzed with the spectral-element time-domain (SETD) method. The field breakdown threshold of air and argon at different frequencies is predicted and compared with the experimental data, and there is good agreement between them for gas microwave breakdown discharge problems. Numerical results demonstrate that the two-layer plasma limiter (plasma-slab-plasma) has better protective characteristics than a one-layer plasma limiter (slab-plasma-slab) with the same length of gas chamber.

  17. A Method to Estimate the Size and Characteristics of HIV-positive Populations Using an Individual-based Stochastic Simulation Model.

    PubMed

    Nakagawa, Fumiyo; van Sighem, Ard; Thiebaut, Rodolphe; Smith, Colette; Ratmann, Oliver; Cambiano, Valentina; Albert, Jan; Amato-Gauci, Andrew; Bezemer, Daniela; Campbell, Colin; Commenges, Daniel; Donoghoe, Martin; Ford, Deborah; Kouyos, Roger; Lodwick, Rebecca; Lundgren, Jens; Pantazis, Nikos; Pharris, Anastasia; Quinten, Chantal; Thorne, Claire; Touloumi, Giota; Delpech, Valerie; Phillips, Andrew

    2016-03-01

    It is important not only to collect epidemiologic data on HIV but also to fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900-45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160-17,350) were undiagnosed. There were an estimated 3,210 (1,730-5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population are thought to have viral load <500 copies/ml. In the pseudo-epidemic example, HIV estimates have narrower plausibility ranges and are closer to the true number, the greater the data availability to calibrate the model. We demonstrate that our method can be applied to settings with less data; however, plausibility ranges for estimates will be wider to reflect greater uncertainty of the data used to fit the model.

  18. A Method to Estimate the Size and Characteristics of HIV-positive Populations Using an Individual-based Stochastic Simulation Model

    PubMed Central

    van Sighem, Ard; Thiebaut, Rodolphe; Smith, Colette; Ratmann, Oliver; Cambiano, Valentina; Albert, Jan; Amato-Gauci, Andrew; Bezemer, Daniela; Campbell, Colin; Commenges, Daniel; Donoghoe, Martin; Ford, Deborah; Kouyos, Roger; Lodwick, Rebecca; Lundgren, Jens; Pantazis, Nikos; Pharris, Anastasia; Quinten, Chantal; Thorne, Claire; Touloumi, Giota; Delpech, Valerie; Phillips, Andrew

    2016-01-01

    It is important not only to collect epidemiologic data on HIV but also to fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900–45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160–17,350) were undiagnosed. There were an estimated 3,210 (1,730–5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population are thought to have viral load <500 copies/ml. In the pseudo-epidemic example, HIV estimates have narrower plausibility ranges and are closer to the true number, the greater the data availability to calibrate the model. We demonstrate that our method can be applied to settings with less data; however, plausibility ranges for estimates will be wider to reflect greater uncertainty of the data used to fit the model. PMID:26605814
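
    The calibration step can be illustrated with a bare-bones rejection flavour of approximate Bayesian computation; the real model is an individual-based HIV simulation with many parameters and summary statistics, so the toy simulator, priors, observed summaries, and tolerance below are assumptions for illustration only.

```python
import numpy as np

def abc_rejection(simulate, prior_sample, observed, n_draws=20000, tol=0.05, seed=0):
    """Keep parameter draws whose simulated summary statistics fall within a
    relative tolerance of the observed summaries."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        sim = simulate(theta, rng)
        if np.all(np.abs(sim - observed) / observed < tol):
            accepted.append(theta)
    return np.array(accepted)

# Toy stand-in: theta = (annual infections, diagnosis rate); the "model" returns
# (people diagnosed and living with HIV, new diagnoses last year) as crude proxies.
def simulate(theta, rng):
    infections, diag_rate = theta
    return np.array([rng.poisson(infections * 12.0), rng.poisson(infections * diag_rate)])

prior = lambda rng: np.array([rng.uniform(1000, 6000), rng.uniform(0.2, 0.9)])
observed = np.array([38000.0, 2500.0])
posterior = abc_rejection(simulate, prior, observed)
print(posterior.shape, posterior.mean(axis=0) if len(posterior) else "no draws accepted")
```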

  19. Accelerated simulation methods for plasma kinetics

    NASA Astrophysics Data System (ADS)

    Caflisch, Russel

    2016-11-01

    Collisional kinetics is a multiscale phenomenon due to the disparity between the continuum (fluid) and the collisional (particle) length scales. This paper describes a class of simulation methods for gases and plasmas, and acceleration techniques for improving their speed and accuracy. Starting from the Landau-Fokker-Planck equation for plasmas, the focus will be on a binary collision model that is solved using a Direct Simulation Monte Carlo (DSMC) method. Acceleration of this method is achieved by coupling the particle method to a continuum fluid description. The velocity distribution function f is represented as a combination of a Maxwellian M (the thermal component) and a set of discrete particles fp (the kinetic component). For systems that are close to (local) equilibrium, this reduces the number N of simulated particles that are required to represent f for a given level of accuracy. We present two methods for exploiting this representation. In the first method, equilibration of particles in fp, as well as disequilibration of particles from M, due to the collision process, is represented by a thermalization/dethermalization step that employs an entropy criterion. Efficiency of the representation is greatly increased by inclusion of particles with negative weights. This significantly complicates the simulation, but the second method is a tractable approach for negatively weighted particles. The accelerated simulation method is compared with standard PIC-DSMC method for both spatially homogeneous problems such as a bump-on-tail and inhomogeneous problems such as nonlinear Landau damping.
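
    For orientation, the sketch below shows the core binary-collision step of a plain DSMC solver for a spatially homogeneous, equal-mass hard-sphere gas in Python. It is only the baseline ingredient referred to above; the Maxwellian/particle splitting, negative weights, and the entropy-based thermalization step of the accelerated method are not reproduced here, and all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        def dsmc_collision_step(v, n_pairs, vr_max):
            # One homogeneous DSMC collision step for equal-mass hard spheres.
            # v: (N, 3) velocities; n_pairs: candidate pairs; vr_max: bound on relative speed.
            n = len(v)
            for _ in range(n_pairs):
                i, j = rng.choice(n, size=2, replace=False)
                g = v[i] - v[j]                                   # relative velocity
                if rng.random() < np.linalg.norm(g) / vr_max:     # accept-reject on speed
                    # isotropic post-collision scattering; energy and momentum conserved
                    cos_t = 2.0 * rng.random() - 1.0
                    sin_t = np.sqrt(1.0 - cos_t ** 2)
                    phi = 2.0 * np.pi * rng.random()
                    g_new = np.linalg.norm(g) * np.array(
                        [sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
                    vcm = 0.5 * (v[i] + v[j])
                    v[i], v[j] = vcm + 0.5 * g_new, vcm - 0.5 * g_new
            return v

        velocities = rng.normal(size=(2000, 3))
        velocities = dsmc_collision_step(velocities, n_pairs=500, vr_max=10.0)
        print("mean velocity:", velocities.mean(axis=0))
        print("mean kinetic energy per particle:", 0.5 * (velocities ** 2).sum(axis=1).mean())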

  20. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    NASA Astrophysics Data System (ADS)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
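
    As a rough sketch of the two ideas named above (dynamic assignment of independent tracks to threads, and a vectorized inner loop over energy groups), the Python fragment below sweeps a flat-source attenuation kernel along randomly generated tracks, with NumPy providing group-wise vectorization and a thread pool dispatching tracks. Region data, track geometry, and quadrature weights are illustrative assumptions; this is not the authors' optimized proxy application.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def sweep_track(track, sigma_t, q):
            # Attenuate angular flux along one characteristic track (flat-source form).
            # track: list of (region_index, segment_length); arrays are (regions, groups).
            psi = np.zeros(sigma_t.shape[1])                     # vacuum incoming flux
            tally = np.zeros_like(q)
            for region, length in track:
                atten = np.exp(-sigma_t[region] * length)        # vectorized over groups
                psi_out = psi * atten + q[region] / sigma_t[region] * (1.0 - atten)
                tally[region] += (psi - psi_out) / sigma_t[region] \
                                 + q[region] / sigma_t[region] * length
                psi = psi_out
            return tally                                         # quadrature weights omitted

        rng = np.random.default_rng(2)
        n_regions, n_groups = 50, 8
        sigma_t = rng.uniform(0.3, 1.5, (n_regions, n_groups))
        q = rng.uniform(0.0, 1.0, (n_regions, n_groups))
        tracks = [[(int(rng.integers(n_regions)), float(rng.uniform(0.1, 1.0)))
                   for _ in range(30)] for _ in range(200)]

        # independent tracks are handed to worker threads as they become free
        with ThreadPoolExecutor(max_workers=4) as pool:
            flux = sum(pool.map(sweep_track, tracks, [sigma_t] * 200, [q] * 200))
        print("scalar flux tally shape:", flux.shape)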

  1. Constraint methods that accelerate free-energy simulations of biomolecules

    PubMed Central

    MacCallum, Justin L.; Dill, Ken A.

    2015-01-01

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions. PMID:26723628
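
    As a concrete example of the spring-like restraints mentioned above, the sketch below evaluates a flat-bottom harmonic distance restraint (zero penalty inside a tolerance window, quadratic outside) together with the forces it adds to two atoms. The functional form is a common generic choice; the coordinates, force constant, and window width are illustrative and not tied to any specific method in the review.

        import numpy as np

        def flat_bottom_restraint(r_i, r_j, r0, width, k):
            # Energy and per-atom forces for a flat-bottom harmonic distance restraint.
            # Zero penalty while the distance stays within r0 +/- width; harmonic outside.
            d = r_i - r_j
            r = np.linalg.norm(d)
            excess = 0.0
            if r > r0 + width:
                excess = r - (r0 + width)
            elif r < r0 - width:
                excess = r - (r0 - width)
            energy = 0.5 * k * excess ** 2
            # force on atom i is -dE/dr_i; atom j receives the opposite force
            f_i = -k * excess * d / r if r > 0 else np.zeros(3)
            return energy, f_i, -f_i

        e, fi, fj = flat_bottom_restraint(np.array([0.0, 0.0, 0.0]),
                                          np.array([0.0, 0.0, 5.2]),
                                          r0=4.0, width=1.0, k=10.0)
        print("energy:", e, "force on i:", fi, "force on j:", fj)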

  2. Constraint methods that accelerate free-energy simulations of biomolecules

    SciTech Connect

    Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  3. Constraint methods that accelerate free-energy simulations of biomolecules.

    PubMed

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  4. Constraint methods that accelerate free-energy simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.

    2015-12-01

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  5. Estimating School Efficiency: A Comparison of Methods Using Simulated Data.

    ERIC Educational Resources Information Center

    Bifulco, Robert; Bretschneider, Stuart

    2001-01-01

    Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…

  6. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
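
    A minimal sketch of the kind of simulation described above: behaviour bouts are scattered over an observation period, and the same record is scored with momentary time sampling (MTS), partial-interval recording (PIR), and whole-interval recording (WIR) so the direction of each method's error can be seen. Session length, bout statistics, and interval length are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        def score_intervals(event, interval_len):
            # Score one observation with three interval sampling methods.
            # event: boolean array, one entry per time unit (True = behaviour occurring).
            n_int = len(event) // interval_len
            chunks = event[:n_int * interval_len].reshape(n_int, interval_len)
            momentary = chunks[:, -1].mean()        # sample only the last instant of each interval
            partial = chunks.any(axis=1).mean()     # scored if the behaviour occurs at all
            whole = chunks.all(axis=1).mean()       # scored only if it fills the whole interval
            return momentary, partial, whole

        # one simulated session: 600 time units, event bouts of random onset and duration
        session = np.zeros(600, dtype=bool)
        for _ in range(12):
            start = rng.integers(0, 580)
            session[start:start + rng.integers(5, 40)] = True

        print("true proportion:", round(float(session.mean()), 3))
        print("MTS / PIR / WIR:",
              [round(float(x), 3) for x in score_intervals(session, interval_len=30)])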

  7. Meta-Analysis of a Continuous Outcome Combining Individual Patient Data and Aggregate Data: A Method Based on Simulated Individual Patient Data

    ERIC Educational Resources Information Center

    Yamaguchi, Yusuke; Sakamoto, Wataru; Goto, Masashi; Staessen, Jan A.; Wang, Jiguang; Gueyffier, Francois; Riley, Richard D.

    2014-01-01

    When some trials provide individual patient data (IPD) and the others provide only aggregate data (AD), meta-analysis methods for combining IPD and AD are required. We propose a method that reconstructs the missing IPD for AD trials by a Bayesian sampling procedure and then applies an IPD meta-analysis model to the mixture of simulated IPD and…

  8. Inversion based on computational simulations

    SciTech Connect

    Hanson, K.M.; Cunningham, G.S.; Saquib, S.S.

    1998-09-01

    A standard approach to solving inversion problems that involve many parameters uses gradient-based optimization to find the parameters that best match the data. The authors discuss enabling techniques that facilitate application of this approach to large-scale computational simulations, which are the only way to investigate many complex physical phenomena. Such simulations may not seem to lend themselves to calculation of the gradient with respect to numerous parameters. However, adjoint differentiation allows one to efficiently compute the gradient of an objective function with respect to all the variables of a simulation. When combined with advanced gradient-based optimization algorithms, adjoint differentiation permits one to solve very large problems of optimization or parameter estimation. These techniques will be illustrated through the simulation of the time-dependent diffusion of infrared light through tissue, which has been used to perform optical tomography. The techniques discussed have a wide range of applicability to modeling including the optimization of models to achieve a desired design goal.

  9. Simulating protein dynamics: Novel methods and applications

    NASA Astrophysics Data System (ADS)

    Vishal, V.

    This Ph.D. dissertation describes several methodological advances in molecular dynamics (MD) simulations. Methods like Markov State Models can be used effectively in combination with distributed computing to obtain long time scale behavior from an ensemble of short simulations. Advanced computing architectures like Graphics Processors can be used to greatly extend the scope of MD. Applications of MD techniques to problems like Alzheimer's Disease and fundamental questions in protein dynamics are described.
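
    A minimal sketch of the Markov State Model idea mentioned above: transition counts at a fixed lag are accumulated from an ensemble of short, discretized trajectories, normalized into a row-stochastic transition matrix, and the stationary distribution is read off from the leading left eigenvector. The three-state toy system and lag time are illustrative; this is not the dissertation's machinery.

        import numpy as np

        def transition_matrix(trajectories, n_states, lag=1):
            # Count state-to-state transitions at a fixed lag across many short trajectories.
            counts = np.zeros((n_states, n_states))
            for traj in trajectories:
                for a, b in zip(traj[:-lag], traj[lag:]):
                    counts[a, b] += 1
            counts += 1e-8                              # avoid division by zero for empty rows
            return counts / counts.sum(axis=1, keepdims=True)

        rng = np.random.default_rng(4)
        true_T = np.array([[0.95, 0.05, 0.00],
                           [0.10, 0.80, 0.10],
                           [0.00, 0.05, 0.95]])
        trajs = []
        for _ in range(200):                            # ensemble of short trajectories
            s, traj = rng.integers(3), []
            for _ in range(50):
                traj.append(int(s))
                s = rng.choice(3, p=true_T[s])
            trajs.append(traj)

        T = transition_matrix(trajs, n_states=3)
        w, v = np.linalg.eig(T.T)                       # stationary distribution: left eigenvector
        pi = np.real(v[:, np.argmax(np.real(w))])
        print(T.round(2))
        print("stationary distribution:", (pi / pi.sum()).round(3))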

  10. Spectral Methods in General Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Garrison, David

    2012-03-01

    In this talk I discuss the use of spectral methods in improving the accuracy of a General Relativistic Magnetohydrodynamic (GRMHD) computer code. I introduce SpecCosmo, a GRMHD code developed as a Cactus arrangement at UHCL, and show simulation results using both Fourier spectral methods and finite differencing. This work demonstrates the use of spectral methods with the FFTW 3.3 Fast Fourier Transform package integrated with the Cactus Framework to perform spectral differencing using MPI.
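
    The core of Fourier spectral differencing is compact enough to sketch: a periodic field is differentiated by multiplying its FFT by i*k and transforming back. The example below uses NumPy's FFT rather than FFTW, Cactus, or MPI, so it is only a single-process illustration of the idea, not the SpecCosmo implementation.

        import numpy as np

        def spectral_derivative(f, L):
            # First derivative of a periodic, uniformly sampled field via FFT.
            n = f.size
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
            return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

        L = 2.0 * np.pi
        x = np.linspace(0.0, L, 256, endpoint=False)
        f = np.sin(3.0 * x)
        err = np.max(np.abs(spectral_derivative(f, L) - 3.0 * np.cos(3.0 * x)))
        print("max error:", err)        # spectrally accurate: error near machine precision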

  11. Rainfall Simulation: methods, research questions and challenges

    NASA Astrophysics Data System (ADS)

    Ries, J. B.; Iserloh, T.

    2012-04-01

    In erosion research, rainfall simulations are used for the improvement of process knowledge as well as in the field for the assessment of overland flow generation, infiltration, and erosion rates. In all these fields of research, rainfall experiments have become an indispensable part of the research methods. In this context, small portable rainfall simulators with small test-plot sizes of one square-meter or even less, and devices of low weight and water consumption are in demand. Accordingly, devices with manageable technical effort like nozzle-type simulators seem to prevail against larger simulators. The reasons are obvious: lower costs and less time consumption needed for mounting enable a higher repetition rate. Regarding the high number of research questions, of different fields of application, and not least also due to the great technical creativity of our research staff, a large number of different experimental setups is available. Each of the devices produces a different rainfall, leading to different kinetic energy amounts influencing the soil surface and accordingly, producing different erosion results. Hence, important questions contain the definition, the comparability, the measurement and the simulation of natural rainfall and the problem of comparability in general. Another important discussion topic will be the finding of an agreement on an appropriate calibration method for the simulated rainfalls, in order to enable a comparison of the results of different rainfall simulator set-ups. In most of the publications, only the following "nice" sentence can be read: "Our rainfall simulator generates a rainfall spectrum that is similar to natural rainfall!". The most substantial and critical properties of a simulated rainfall are the drop-size distribution, the fall velocities of the drops, and the spatial distribution of the rainfall on the plot-area. In a comparison of the most important methods, the Laser Distrometer turned out to be the most up

  12. Reduced Basis Method for Nanodevices Simulation

    SciTech Connect

    Pau, George Shu Heng

    2008-05-23

    Ballistic transport simulation in nanodevices, which involves self-consistently solving a coupled Schrodinger-Poisson system of equations, is usually computationally intensive. Here, we propose coupling the reduced basis method with the subband decomposition method to improve the overall efficiency of the simulation. By exploiting a posteriori error estimation procedure and greedy sampling algorithm, we are able to design an algorithm where the computational cost is reduced significantly. In addition, the computational cost only grows marginally with the number of grid points in the confined direction.

  13. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.

  14. Bridging the gap: simulations meet knowledge bases

    NASA Astrophysics Data System (ADS)

    King, Gary W.; Morrison, Clayton T.; Westbrook, David L.; Cohen, Paul R.

    2003-09-01

    Tapir and Krill are declarative languages for specifying actions and agents, respectively, that can be executed in simulation. As such, they bridge the gap between strictly declarative knowledge bases and strictly executable code. Tapir and Krill components can be combined to produce models of activity which can answer questions about mechanisms and processes using conventional inference methods and simulation. Tapir was used in DARPA's Rapid Knowledge Formation (RKF) project to construct models of military tactics from the Army Field Manual FM3-90. These were then used to build Courses of Actions (COAs) which could be critiqued by declarative reasoning or via Monte Carlo simulation. Tapir and Krill can be read and written by non-knowledge engineers making it an excellent vehicle for Subject Matter Experts to build and critique knowledge bases.

  15. A method for simulating a flux-locked DC SQUID

    NASA Technical Reports Server (NTRS)

    Gutt, G. M.; Kasdin, N. J.; Condron, M. R., II; Muhlfelder, B.; Lockhart, J. M.; Cromar, M. W.

    1993-01-01

    The authors describe a computationally efficient and accurate method for simulating a dc SQUID's V-Phi (voltage-flux) and I-V characteristics which has proven valuable in evaluating and improving various SQUID readout methods. The simulation of the SQUID is based on fitting of previously acquired data from either a real or a modeled device using the Fourier transform of the V-Phi curve. This method does not predict SQUID behavior, but rather is a way of replicating a known behavior efficiently with portability into various simulation programs such as SPICE. The authors discuss the methods used to simulate the SQUID and the flux-locking control electronics, and present specific examples of this approach. Results include an estimate of the slew rate and linearity of a simple flux-locked loop using a characterized dc SQUID.
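
    A minimal sketch of the replication idea described above: a (here synthetic) V-Phi curve is reduced to a few Fourier harmonics, the fitted curve is evaluated at arbitrary flux, and a crude integrating feedback loop holds the working point, standing in for the flux-locking electronics. The synthetic curve, harmonic count, and loop gain are illustrative assumptions, not the authors' characterized device.

        import numpy as np

        # fit a measured (here synthetic) V-Phi curve with a truncated Fourier series
        phi = np.linspace(0.0, 1.0, 256, endpoint=False)        # flux in units of Phi_0
        v_measured = 20e-6 * (np.cos(2 * np.pi * phi) + 0.3 * np.cos(4 * np.pi * phi))
        coeffs = np.fft.rfft(v_measured) / phi.size             # harmonic amplitudes

        def v_of_phi(flux, n_harmonics=5):
            # Evaluate the fitted V-Phi curve at arbitrary flux (periodic in Phi_0).
            k = np.arange(1, n_harmonics)
            return coeffs[0].real + 2.0 * np.real(
                np.sum(coeffs[1:n_harmonics] * np.exp(2j * np.pi * k * flux)))

        # crude flux-locked loop: integrate the error signal to hold the working point
        applied, feedback, gain = 0.18, 0.0, 4000.0             # gain in Phi_0 per volt
        v_target = v_of_phi(0.25)                               # steep part of the curve
        for _ in range(200):
            error = v_of_phi(applied + feedback) - v_target
            feedback += gain * error
        print("locked total flux (Phi_0):", round(float(applied + feedback), 3))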

  16. A Simulation Method Measuring Psychomotor Nursing Skills.

    ERIC Educational Resources Information Center

    McBride, Helena; And Others

    1981-01-01

    The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.…

  17. Method for Constructing Standardized Simulated Root Canals.

    ERIC Educational Resources Information Center

    Schulz-Bongert, Udo; Weine, Franklin S.

    1990-01-01

    The construction of visual and manipulative aids, clear resin blocks with root-canal-like spaces, for simulation of root canals is explained. Time, materials, and techniques are discussed. The method allows for comparison of canals, creation of any configuration of canals, and easy presentation during instruction. (MSE)

  18. Novel methods for molecular dynamics simulations.

    PubMed

    Elber, R

    1996-04-01

    In the past year, significant progress was made in the development of molecular dynamics methods for the liquid phase and for biological macromolecules. Specifically, faster algorithms to pursue molecular dynamics simulations were introduced and advances were made in the design of new optimization algorithms guided by molecular dynamics protocols. A technique to calculate the quantum spectra of protein vibrations was introduced.

  19. Effective medium based optical analysis with finite element method simulations to study photochromic transitions in Ag-TiO2 nanocomposite films

    NASA Astrophysics Data System (ADS)

    Abhilash, T.; Balasubrahmaniyam, M.; Kasiviswanathan, S.

    2016-03-01

    Photochromic transitions in silver nanoparticles (AgNPs) embedded titanium dioxide (TiO2) films under green light illumination are marked by reduction in strength and blue shift in the position of the localized surface plasmon resonance (LSPR) associated with AgNPs. These transitions, which happen in the sub-nanometer length scale, have been analysed using the variations observed in the effective dielectric properties of the Ag-TiO2 nanocomposite films in response to the size reduction of AgNPs and subsequent changes in the surrounding medium due to photo-oxidation. Bergman-Milton formulation based on spectral density approach is used to extract dielectric properties and information about the geometrical distribution of the effective medium. Combined with finite element method simulations, we isolate the effects due to the change in average size of the nanoparticles and those due to the change in the dielectric function of the surrounding medium. By analysing the dynamics of photochromic transitions in the effective medium, we conclude that the observed blue shift in LSPR is mainly because of the change in the dielectric function of surrounding medium, while a shape-preserving effective size reduction of the AgNPs causes decrease in the strength of LSPR.

  20. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods aimed at improving computational modeling capabilities for multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness, and efficiency of particle methods make them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and (iv) the parallelization of processes such as tree traversal and grid-particle interpolation. We are conducting simulations using vortex methods, molecular dynamics, and smoothed particle hydrodynamics, exploiting their unifying concepts such as the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.

  1. Parallel methods for the flight simulation model

    SciTech Connect

    Xiong, Wei Zhong; Swietlik, C.

    1994-06-01

    The Advanced Computer Applications Center (ACAC) has been involved in evaluating advanced parallel architecture computers and the applicability of these machines to computer simulation models. The advanced systems investigated include parallel machines with shared-memory and distributed architectures, consisting of an eight-processor Alliant FX/8, a twenty-four-processor Sequent Symmetry, a Cray XMP, an IBM RISC 6000 model 550, and the Intel Touchstone eight-processor Gamma and 512-processor Delta machines. Since parallelizing a truly efficient application program for a parallel machine is a difficult task, implementation on these machines in a realistic setting has been largely overlooked. The ACAC has developed considerable expertise in optimizing and parallelizing application models on a collection of advanced multiprocessor systems. One such application is the Flight Simulation Model, which uses a set of differential equations to describe the flight characteristics of a launched missile by means of a trajectory. The Flight Simulation Model was written in the FORTRAN language with approximately 29,000 lines of source code. Depending on the number of trajectories, the computation can require several hours to a full day of CPU time on a DEC/VAX 8650 system. There is an impetus to reduce the execution time and utilize the advanced parallel architecture computing environment available. ACAC researchers developed a parallel method that allows the Flight Simulation Model to run in parallel on the multiprocessor system. For the benchmark data tested, the parallel Flight Simulation Model implemented on the Alliant FX/8 achieved nearly linear speedup. In this paper, we describe a parallel method for the Flight Simulation Model. We believe the method presented here provides a general concept for the design of parallel applications that, in most cases, can be adapted to many other sequential application programs.
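
    The trajectory-level parallelism described above maps naturally onto a process pool; the sketch below integrates independent, drag-free point-mass trajectories in parallel with Python's multiprocessing. The dynamics and launch conditions are deliberately trivial stand-ins for the 29,000-line FORTRAN model, so only the parallel decomposition is being illustrated.

        import numpy as np
        from multiprocessing import Pool

        def fly_trajectory(params):
            # Integrate one simple ballistic trajectory (drag-free point mass).
            speed, angle_deg = params
            angle = np.radians(angle_deg)
            v = np.array([speed * np.cos(angle), speed * np.sin(angle)])
            pos, dt, g = np.zeros(2), 0.01, np.array([0.0, -9.81])
            while pos[1] >= 0.0:
                pos = pos + v * dt
                v = v + g * dt
            return pos[0]                                  # downrange distance at impact

        if __name__ == "__main__":
            cases = [(300.0 + 10 * i, 30.0 + i) for i in range(16)]   # independent launch conditions
            with Pool(processes=4) as pool:                # each trajectory runs on its own worker
                ranges = pool.map(fly_trajectory, cases)
            print([round(r, 1) for r in ranges])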

  2. Mesoscopic Simulation Methods for Polymer Dynamics

    NASA Astrophysics Data System (ADS)

    Larson, Ronald

    2015-03-01

    We assess the accuracy and efficiency of mesoscopic simulation methods, namely Brownian Dynamics (BD), Stochastic Rotation Dynamics (SRD) and Dissipative Particle Dynamics (DPD), for polymers in solution at equilibrium and in flows in microfluidic geometries. Both SRD and DPD use solvent ``particles'' to carry momentum, and so account automatically for hydrodynamic interactions both within isolated polymer coils, and with other polymer molecules and with nearby solid boundaries. We assess quantitatively the effects of artificial particle inertia and fluid compressibility and show that they can be made small with appropriate choice of simulation parameters. We then use these methods to study flow-induced migration of polymer chains produced by: 1) hydrodynamic interactions, 2) streamline curvature or stress-gradients, and 3) convection of wall depletion zones. We show that huge concentration gradients can be produced by these mechanisms in microfluidic geometries that can be exploited for separation of polymers by size in periodic contraction-expansion geometries. We also assess the range of conditions for which BD, SRD or DPD is preferable for mesoscopic simulations. Finally, we show how such methods can be used to simulate quantitatively the swimming of micro-organisms such as E. coli. In collaboration with Lei Jiang and Tongyang Zhao, University of Michigan, Ann Arbor, MI.

  3. Discontinuous Galerkin Methods for Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Collis, S. Scott

    2002-01-01

    A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.

  4. A method based on Monte Carlo simulations and voxelized anatomical atlases to evaluate and correct uncertainties on radiotracer accumulation quantitation in beta microprobe studies in the rat brain

    NASA Astrophysics Data System (ADS)

    Pain, F.; Dhenain, M.; Gurden, H.; Routier, A. L.; Lefebvre, F.; Mastrippolito, R.; Lanièce, P.

    2008-10-01

    The β-microprobe is a simple and versatile technique complementary to small animal positron emission tomography (PET). It relies on local measurements of the concentration of positron-labeled molecules. So far, it has been successfully used in anesthetized rats for pharmacokinetics experiments and for the study of brain energetic metabolism. However, the ability of the technique to provide accurate quantitative measurements using 18F, 11C and 15O tracers is likely to suffer from the contribution of 511 keV gamma rays background to the signal and from the contribution of positrons from brain loci surrounding the locus of interest. The aim of the present paper is to provide a method of evaluating several parameters, which are supposed to affect the quantification of recordings performed in vivo with this methodology. We have developed realistic voxelized phantoms of the rat whole body and brain, and used them as input geometries for Monte Carlo simulations of previous β-microprobe reports. In the context of realistic experiments (binding of 11C-Raclopride to D2 dopaminergic receptors in the striatum; local glucose metabolic rate measurement with 18F-FDG and H2O15 blood flow measurements in the somatosensory cortex), we have calculated the detection efficiencies and corresponding contribution of 511 keV gammas from peripheral organs accumulation. We confirmed that the 511 keV gammas background does not impair quantification. To evaluate the contribution of positrons from adjacent structures, we have developed β-Assistant, a program based on a rat brain voxelized atlas and matrices of local detection efficiencies calculated by Monte Carlo simulations for several probe geometries. This program was used to calculate the 'apparent sensitivity' of the probe for each brain structure included in the detection volume. For a given localization of a probe within the brain, this allows us to quantify the different sources of beta signal. Finally, since stereotaxic accuracy is

  5. Physalis: a New Method for Particle Simulations

    NASA Astrophysics Data System (ADS)

    Takagi, Shu; Oguz, Hasan; Prosperetti, Andrea

    2000-11-01

    A new computational method for the full Navier-Stokes viscous flow past cylinders and spheres is described and illustrated with preliminary results. Since, in the rest frame, the velocity vanishes on the particle, the Stokes equations apply in the immediate neighborhood of the surface. The analytic solutions of these equations, available for both spheres and cylinders, make it possible to effectively remove the particle, whose effect is replaced by a consistency condition on the nodes of the computational grid that surround it. This condition is satisfied iteratively by a method that solves the field equations over the entire computational domain disregarding the presence of the particles, so that fast solvers can be used. The procedure eliminates the geometrical complexity of multi-particle simulations and makes it possible to simulate disperse flows containing a large number of particles at moderate computational cost. Supported by DOE and the Japanese MESSC.

  6. Twitter's tweet method modelling and simulation

    NASA Astrophysics Data System (ADS)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper proposes the concept of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model these marketing tools and methods, using the iThink™ system to implement them, and uses the design science research methodology as proof of concept for the models and modelling processes. The following models have been developed for a Twitter marketing agent/company and tested in real circumstances with real numbers. These models were finalized through a number of revisions and iterations of the design, develop, simulate, test and evaluate cycle. The paper also addresses the methods best suited to organized, targeted promotion on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are confirmed by the management of the company. The paper applies system dynamics concepts to Twitter marketing method modelling and produces models of various Twitter marketing situations. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.

  7. Decision-Theoretic Methods in Simulation Optimization

    DTIC Science & Technology

    2014-09-24

    Los Alamos National Lab: Frazier visited LANL, hosted by Frank Alexander, in January 2013, where he discussed the use of simulation optimization methods for... Alexander, Turab Lookman, and others from LANL, at the Materials Informatics Workshop at the Sante Fe Institute in April 2013. In February 2014, Frazier

  8. Computer Based Simulation of Laboratory Experiments.

    ERIC Educational Resources Information Center

    Edward, Norrie S.

    1997-01-01

    Examines computer based simulations of practical laboratory experiments in engineering. Discusses the aims and achievements of lab work (cognitive, process, psychomotor, and affective); types of simulations (model building and behavioral); and the strengths and weaknesses of simulations. Describes the development of a centrifugal pump simulation,…

  9. A method to produce and validate a digitally reconstructed radiograph-based computer simulation for optimisation of chest radiographs acquired with a computed radiography imaging system

    PubMed Central

    Moore, C S; Liney, G P; Beavis, A W; Saunderson, J R

    2011-01-01

    Objectives: The purpose of this study was to develop and validate a computer model to produce realistic simulated computed radiography (CR) chest images using CT data sets of real patients. Methods: Anatomical noise, which is the limiting factor in determining pathology in chest radiography, is realistically simulated by the CT data, and frequency-dependent noise has been added post-digitally reconstructed radiograph (DRR) generation to simulate exposure reduction. Realistic scatter and scatter fractions were measured in images of a chest phantom acquired on the CR system simulated by the computer model and added post-DRR calculation. Results: The model has been validated with a phantom and patients and shown to provide predictions of signal-to-noise ratios (SNRs), tissue-to-rib ratios (TRRs: a measure of soft tissue pixel value to that of rib) and pixel value histograms that lie within the range of values measured with patients and the phantom. The maximum difference between measured and calculated SNR was 10%. TRR values differed by a maximum of 1.3%. Conclusion: Experienced image evaluators have responded positively to the DRR images, are satisfied they contain adequate anatomical features and have deemed them clinically acceptable. Therefore, the computer model can be used by image evaluators to grade chest images presented at different tube potentials and doses in order to optimise image quality and patient dose for clinical CR chest radiographs without the need for repeat patient exposures. PMID:21933979

  10. Interactive methods for exploring particle simulation data

    SciTech Connect

    Co, Christopher S.; Friedman, Alex; Grote, David P.; Vay, Jean-Luc; Bethel, E. Wes; Joy, Kenneth I.

    2004-05-01

    In this work, we visualize high-dimensional particle simulation data using a suite of scatter plot-based visualizations coupled with interactive selection tools. We use traditional 2D and 3D projection scatter plots as well as a novel oriented disk rendering style to convey various information about the data. Interactive selection tools allow physicists to manually classify "interesting" sets of particles that are highlighted across multiple, linked views of the data. The power of our application is the ability to link new visual representations of the simulation data with traditional, well-understood visualizations. This approach supports the interactive exploration of the high-dimensional space while promoting discovery of new particle behavior.

  11. Physics-Based Simulations of Natural Hazards

    NASA Astrophysics Data System (ADS)

    Schultz, Kasey William

    Earthquakes and tsunamis are some of the most damaging natural disasters that we face. Just two recent events, the 2004 Indian Ocean earthquake and tsunami and the 2011 Haiti earthquake, claimed more than 400,000 lives. Despite their catastrophic impacts on society, our ability to predict these natural disasters is still very limited. The main challenge in studying the earthquake cycle is the non-linear and multi-scale properties of fault networks. Earthquakes are governed by physics across many orders of magnitude of spatial and temporal scales; from the scale of tectonic plates and their evolution over millions of years, down to the scale of rock fracturing over milliseconds to minutes at the sub-centimeter scale during an earthquake. Despite these challenges, there are useful patterns in earthquake occurrence. One such pattern, the frequency-magnitude relation, relates the number of large earthquakes to small earthquakes and forms the basis for assessing earthquake hazard. However the utility of these relations is proportional to the length of our earthquake records, and typical records span at most a few hundred years. Utilizing physics based interactions and techniques from statistical physics, earthquake simulations provide rich earthquake catalogs allowing us to measure otherwise unobservable statistics. In this dissertation I will discuss five applications of physics-based simulations of natural hazards, utilizing an earthquake simulator called Virtual Quake. The first is an overview of computing earthquake probabilities from simulations, focusing on the California fault system. The second uses simulations to help guide satellite-based earthquake monitoring methods. The third presents a new friction model for Virtual Quake and describes how we tune simulations to match reality. The fourth describes the process of turning Virtual Quake into an open source research tool. This section then focuses on a resulting collaboration using Virtual Quake for a detailed

  12. A discrete event method for wave simulation

    SciTech Connect

    Nutaro, James J

    2006-01-01

    This article describes a discrete event interpretation of the finite difference time domain (FDTD) and digital wave guide network (DWN) wave simulation schemes. The discrete event method is formalized using the discrete event system specification (DEVS). The scheme is shown to have errors that are proportional to the resolution of the spatial grid. A numerical example demonstrates the relative efficiency of the scheme with respect to FDTD and DWN schemes. The potential for the discrete event scheme to reduce numerical dispersion and attenuation errors is discussed.
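
    For reference, the conventional time-stepped scheme against which the discrete event method is compared can be sketched in a few lines: a leapfrog (FDTD-style) update of the 1D wave equation on a uniform grid with fixed ends. Grid size, initial pulse, and CFL number are illustrative; the DEVS-based discrete event scheme itself is not reproduced here.

        import numpy as np

        # 1D wave equation u_tt = c^2 u_xx, leapfrog (FDTD-style) update with fixed ends
        n, steps = 200, 400
        c, dx = 1.0, 1.0
        dt = 0.9 * dx / c                          # CFL-stable time step
        r2 = (c * dt / dx) ** 2

        x = np.arange(n) * dx
        u_prev = np.exp(-0.05 * (x - 50.0) ** 2)   # initial Gaussian pulse
        u = u_prev.copy()                          # zero initial velocity

        for _ in range(steps):
            u_next = np.empty_like(u)
            u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
            u_next[0] = u_next[-1] = 0.0           # Dirichlet boundaries
            u_prev, u = u, u_next

        print("peak amplitude after", steps, "steps:", round(float(np.max(np.abs(u))), 3))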

  13. A novel load balancing method for hierarchical federation simulation system

    NASA Astrophysics Data System (ADS)

    Bin, Xiao; Xiao, Tian-yuan

    2013-07-01

    In contrast with a single HLA federation framework, a hierarchical federation framework can improve the performance of a large-scale simulation system to a certain degree by distributing load across several RTIs. However, in a hierarchical federation framework the RTI is still the center of message exchange and remains the performance bottleneck of the federation; the data explosion in a large-scale HLA federation may overload the RTI, degrading federation performance or even causing fatal errors. To address this problem, this paper proposes a load balancing method for hierarchical federation simulation systems based on queuing theory, comprising three main modules: queue length prediction, a load control policy, and a controller. The method promotes better use of federate node resources and improves the performance of the HLA simulation system by balancing load across the RTIG and federates. Finally, experimental results are presented to demonstrate the efficient control achieved by the method.

  14. A Method of Simulating Fluid Structure Interactions for Deformable Decelerators

    NASA Astrophysics Data System (ADS)

    Gidzak, Vladimyr Mykhalo

    A method is developed for performing simulations that contain fluid-structure interactions between deployable decelerators and a high-speed compressible flow. The problem of coupling together multiple physical systems is examined with discussion of the strength of coupling for various methods. A non-monolithic strongly coupled option is presented for fluid-structure systems based on grid deformation. A class of algebraic grid deformation methods is then presented with examples of increasing complexity. The strength of the fluid-structure coupling is validated against two analytic problems, chosen to test the time-dependent behavior of structure on fluid interactions, and of fluid on structure interactions. A one-dimensional material heating model is also validated against experimental data. Results are provided for simulations of a wind tunnel scale disk-gap-band parachute with comparison to experimental data. Finally, a simulation is performed on a flight scale tension cone decelerator, with examination of time-dependent material stress and heating.

  15. Efficient method for transport simulations in quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Maczka, Mariusz; Pawlowski, Stanislaw

    2016-12-01

    An efficient method for simulating quantum transport in quantum cascade lasers is presented. The calculations are performed within a simple approximation inspired by Büttiker probes and based on a finite model for semiconductor superlattices. The formalism of non-equilibrium Green's functions is applied to determine the selected transport parameters in a typical structure of a terahertz laser. Results were compared with those obtained for an infinite model as well as other methods described in the literature.

  16. RELAP5 based engineering simulator

    SciTech Connect

    Charlton, T.R.; Laats, E.T.; Burtt, J.D.

    1990-01-01

    The INEL Engineering Simulation Center was established in 1988 to provide a modern, flexible, state-of-the-art simulation facility. This facility and two of the major projects which are part of the simulation center, the Advanced Test Reactor (ATR) engineering simulator project and the Experimental Breeder Reactor II (EBR-II) advanced reactor control system, have been the subject of several papers in the past few years. Two components of the ATR engineering simulator project, RELAP5 and the Nuclear Plant Analyzer (NPA), have recently been improved significantly. This paper will present an overview of the INEL Engineering Simulation Center, and discuss the RELAP5/MOD3 and NPA/MOD1 codes, specifically how they are being used at the INEL Engineering Simulation Center. It will provide an update on the modifications to these two codes and their application to the ATR engineering simulator project, as well as a discussion on the reactor system representation, control system modeling, two-phase flow and heat transfer modeling. It will also discuss how these two codes are providing desktop, stand-alone reactor simulation. 12 refs., 2 figs.

  17. TU-C-17A-08: Improving IMRT Planning and Reducing Inter-Planner Variability Using the Stochastic Frontier Method: Validation Based On Clinical and Simulated Data

    SciTech Connect

    Gagne, MC; Archambault, L; Tremblay, D; Varfalvy, N

    2014-06-15

    Purpose: Intensity modulated radiation therapy always requires compromises between PTV coverage and organs at risk (OAR) sparing. We previously developed metrics that correlate doses to OAR to specific patients’ morphology using stochastic frontier analysis (SFA). Here, we aim to examine the validity of this approach using a large set of realistically simulated dosimetric and geometric data. Methods: SFA describes a set of treatment plans as an asymmetric distribution with respect to a frontier defining optimal plans. Eighty head and neck IMRT plans were used to establish a metric predicting the mean dose to parotids as a function of simple geometric parameters. A database of 140 parotids was used as a basis distribution to simulate physically plausible data of geometry and dose. Distributions comprising between 20 and 5000 organs were simulated, and the SFA was applied to obtain new frontiers, which were compared to the original frontier. Results: It was possible to simulate distributions consistent with the original dataset. Below 160 organs, the SFA could not always describe distributions as asymmetric: a few cases showed a Gaussian or half-Gaussian distribution. In order to converge to a stable solution, the number of organs in a distribution must ideally be above 100, but in many cases stable parameters could be achieved with as few as 60 organ samples. The mean RMS error of the new frontiers was significantly reduced when additional organs were used. Conclusion: The number of organs in a distribution was shown to have an impact on the effectiveness of the model. It is always possible to obtain a frontier, but if the number of organs in the distribution is small (< 160), it may not represent the lowest dose achievable. These results will be used to determine the number of cases necessary to adapt the model to other organs.

  18. Physics-Based Simulator for NEO Exploration Analysis & Simulation

    NASA Technical Reports Server (NTRS)

    Balaram, J.; Cameron, J.; Jain, A.; Kline, H.; Lim, C.; Mazhar, H.; Myint, S.; Nayar, H.; Patton, R.; Pomerantz, M.; Quadrelli, M.; Shakkotai, P.; Tso, K.

    2011-01-01

    As part of the Space Exploration Analysis and Simulation (SEAS) task, the National Aeronautics and Space Administration (NASA) is using physics-based simulations at NASA's Jet Propulsion Laboratory (JPL) to explore potential surface and near-surface mission operations at Near Earth Objects (NEOs). The simulator is under development at JPL and can be used to provide detailed analysis of various surface and near-surface NEO robotic and human exploration concepts. In this paper we describe the SEAS simulator and provide examples of recent mission systems and operations concepts investigated using the simulation. We also present related analysis work and tools developed both for the SEAS task and for general modeling, analysis and simulation capabilities for asteroid/small-body objects.

  19. Nonstationary multiscale turbulence simulation based on local PCA.

    PubMed

    Beghi, Alessandro; Cenedese, Angelo; Masiero, Andrea

    2014-09-01

    Turbulence simulation methods are of fundamental importance for evaluating the performance of control strategies for Adaptive Optics (AO) systems. In order to obtain a reliable evaluation of the performance, a statistically accurate turbulence simulation method has to be used. This work generalizes a previously proposed method for turbulence simulation based on the use of a multiscale stochastic model. The main contributions of this work are: first, a multiresolution local PCA representation is considered. In typical operating conditions, the computational load for turbulence simulation is reduced approximately by a factor of 4, with respect to the previously proposed method, by means of this PCA representation. Second, thanks to a different low resolution method, based on a moving average model, the wind velocity can be in any direction (not necessarily along the spatial axes). Finally, this paper extends the simulation procedure to generate, if needed, turbulence samples by using a more general model than that of the frozen flow hypothesis.

  20. Physiological Based Simulator Fidelity Design Guidance

    NASA Technical Reports Server (NTRS)

    Schnell, Thomas; Hamel, Nancy; Postnikov, Alex; Hoke, Jaclyn; McLean, Angus L. M. Thom, III

    2012-01-01

    The evolution of the role of flight simulation has reinforced assumptions in aviation that the degree of realism in a simulation system directly correlates to the training benefit, i.e., more fidelity is always better. The construct of fidelity has several dimensions, including physical fidelity, functional fidelity, and cognitive fidelity. Interaction of different fidelity dimensions has an impact on trainee immersion, presence, and transfer of training. This paper discusses research results of a recent study that investigated if physiological-based methods could be used to determine the required level of simulator fidelity. Pilots performed a relatively complex flight task consisting of mission task elements of various levels of difficulty in a fixed base flight simulator and a real fighter jet trainer aircraft. Flight runs were performed using one forward visual channel of 40 deg. field of view for the lowest level of fidelity, 120 deg. field of view for the middle level of fidelity, and unrestricted field of view and full dynamic acceleration in the real airplane. Neuro-cognitive and physiological measures were collected under these conditions using the Cognitive Avionics Tool Set (CATS) and nonlinear closed form models for workload prediction were generated based on these data for the various mission task elements. One finding of the work described herein is that simple heart rate is a relatively good predictor of cognitive workload, even for short tasks with dynamic changes in cognitive loading. Additionally, we found that models that used a wide range of physiological and neuro-cognitive measures can further boost the accuracy of the workload prediction.

  1. Developing a Theory of Digitally-Enabled Trial-Based Problem Solving through Simulation Methods: The Case of Direct-Response Marketing

    ERIC Educational Resources Information Center

    Clark, Joseph Warren

    2012-01-01

    In turbulent business environments, change is rapid, continuous, and unpredictable. Turbulence undermines those adaptive problem solving methods that generate solutions by extrapolating from what worked (or did not work) in the past. To cope with this challenge, organizations utilize trial-based problem solving (TBPS) approaches in which they…

  2. A performance-based method for calculating the design thickness of compacted clay liners exposed to high strength leachate under simulated landfill conditions.

    PubMed

    Safari, Edwin; Jalili Ghazizade, Mahdi; Abdoli, Mohammad Ali

    2012-09-01

    Compacted clay liners (CCLs) when feasible, are preferred to composite geosynthetic liners. The thickness of CCLs is typically prescribed by each country's environmental protection regulations. However, considering the fact that construction of CCLs represents a significant portion of overall landfill construction costs; a performance based design of liner thickness would be preferable to 'one size fits all' prescriptive standards. In this study researchers analyzed the hydraulic behaviour of a compacted clayey soil in three laboratory pilot scale columns exposed to high strength leachate under simulated landfill conditions. The temperature of the simulated CCL at the surface was maintained at 40 ± 2 °C and a vertical pressure of 250 kPa was applied to the soil through a gravel layer on top of the 50 cm thick CCL where high strength fresh leachate was circulated at heads of 15 and 30 cm simulating the flow over the CCL. Inverse modelling using HYDRUS-1D indicated that the hydraulic conductivity after 180 days was decreased about three orders of magnitude in comparison with the values measured prior to the experiment. A number of scenarios of different leachate heads and persistence time were considered and saturation depth of the CCL was predicted through modelling. Under a typical leachate head of 30 cm, the saturation depth was predicted to be less than 60 cm for a persistence time of 3 years. This approach can be generalized to estimate an effective thickness of a CCL instead of using prescribed values, which may be conservatively overdesigned and thus unduly costly.

  3. Implicit methods for efficient musculoskeletal simulation and optimal control

    PubMed Central

    van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter

    2011-01-01

    The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers. PMID:22102983
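
    The first-order Rosenbrock idea referred to above (a single linear solve per step against the Jacobian of the implicit formulation) can be sketched on a small stiff test system; the two-state problem, step size, and Jacobian below are illustrative and unrelated to the shoulder-arm or gait models in the paper.

        import numpy as np

        def rosenbrock_euler_step(f, jac, y, h):
            # One linearly implicit step: solve (I - h*J) k = h*f(y), then y_new = y + k.
            J = jac(y)
            k = np.linalg.solve(np.eye(len(y)) - h * J, h * f(y))
            return y + k

        # stiff test system: y[0] relaxes quickly toward cos(y[1]); y[1] drifts slowly
        def f(y):
            return np.array([-1000.0 * (y[0] - np.cos(y[1])), 1.0])

        def jac(y):
            return np.array([[-1000.0, -1000.0 * np.sin(y[1])],
                             [0.0, 0.0]])

        y, h = np.array([0.0, 0.0]), 0.05      # step much larger than the fast time scale
        for _ in range(100):
            y = rosenbrock_euler_step(f, jac, y, h)
        print("state:", y, "  slow-manifold reference:", np.cos(y[1]))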

  4. Implicit methods for efficient musculoskeletal simulation and optimal control.

    PubMed

    van den Bogert, Antonie J; Blana, Dimitra; Heinrich, Dieter

    2011-01-01

    The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers.

  5. Etch Profile Simulation Using Level Set Methods

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    Etching and deposition of materials are critical steps in semiconductor processing for device manufacturing. Both etching and deposition may have isotropic and anisotropic components, due to directional sputtering and redeposition of materials, for example. Previous attempts at modeling profile evolution have used so-called "string theory" to simulate the moving solid-gas interface between the semiconductor and the plasma. One complication of this method is that extensive de-looping schemes are required at the profile corners. We will present a 2D profile evolution simulation using level set theory to model the surface. By embedding the location of the interface in a field variable, the need for de-looping schemes is eliminated and profile corners are more accurately modeled. This level set profile evolution model will calculate both isotropic and anisotropic etch and deposition rates of a substrate in low-pressure (tens of mTorr) plasmas, considering the incident ion energy and angular distribution functions and neutral fluxes. We will present etching profiles of Si substrates in Ar/Cl2 discharges for various incident ion energies and trench geometries.

  6. Numeric Modified Adomian Decomposition Method for Power System Simulations

    SciTech Connect

    Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth

    2016-01-01

    This paper investigates the applicability of the numeric Wazwaz-El-Sayed modified Adomian Decomposition Method (WES-ADM) for time domain simulation of power systems. WES-ADM is a numerical method based on a modified Adomian decomposition (ADM) technique; it is a numerical approximation method for the solution of nonlinear ordinary differential equations in which the nonlinear terms are approximated using Adomian polynomials. In this paper WES-ADM is applied to time domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach. Several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
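
    For readers unfamiliar with Adomian decomposition, the following is a brief illustration of the classic ADM idea (not the WES modification used in the paper; the test equation is an assumption chosen for clarity) on dy/dt = -y^2, y(0) = 1, whose exact solution is 1/(1+t): the nonlinear term is expanded in Adomian polynomials and each series term is obtained by integrating the previous one.

        import sympy as sp

        t = sp.symbols('t')
        n_terms = 6

        def adomian_polynomials(y_terms):
            """Adomian polynomials for the nonlinearity N(y) = y**2."""
            lam = sp.symbols('lambda')
            y_lam = sum(lam**k * yk for k, yk in enumerate(y_terms))
            N = y_lam**2
            return [sp.diff(N, lam, k).subs(lam, 0) / sp.factorial(k)
                    for k in range(len(y_terms))]

        # dy/dt = -y**2, y(0) = 1  ->  y_{k+1} = -Integral(A_k, 0..t)
        y_terms = [sp.Integer(1)]
        for k in range(n_terms - 1):
            A_k = adomian_polynomials(y_terms)[k]
            y_terms.append(sp.integrate(-A_k, (t, 0, t)))

        print(sp.expand(sum(y_terms)))             # 1 - t + t**2 - t**3 + ... ~ 1/(1 + t)
        print(sp.series(1/(1 + t), t, 0, n_terms)) # matches the exact solution term by term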

  7. IMPACT OF SIMULANT PRODUCTION METHODS ON SRAT PRODUCT

    SciTech Connect

    EIBLING, R

    2006-03-22

    The research and development programs in support of the Defense Waste Processing Facility (DWPF) and other high level waste vitrification processes require the use of both nonradioactive waste simulants and actual waste samples. The nonradioactive waste simulants have been used for laboratory testing, pilot-scale testing and full-scale integrated facility testing. Recent efforts have focused on matching the physical properties of actual sludge. These waste simulants were designed to reproduce the chemical and, if possible, the physical properties of the actual high level waste. This technical report documents a study of simulant production methods for high level waste simulated sludge and their impact on the physical properties of the resultant SRAT product. The sludge simulants used in support of DWPF have been based on average waste compositions and on expected or actual batch compositions. These sludge simulants were created to primarily match the chemical properties of the actual waste. These sludges were produced by generating manganese dioxide, MnO{sub 2}, from permanganate ion (MnO{sub 4}{sup -}) and manganous nitrate, precipitating ferric nitrate and nickel nitrate with sodium hydroxide, washing with inhibited water and then addition of other waste species. While these simulated sludges provided a good match for chemical reaction studies, they did not adequately match the physical properties (primarily rheology) measured on the actual waste. A study was completed in FY04 to determine the impact of simulant production methods on the physical properties of Sludge Batch 3 simulant. This study produced eight batches of sludge simulant, all prepared to the same chemical target, by varying the sludge production methods. The sludge batch, which most closely duplicated the actual SB3 sludge physical properties, was Test 8. Test 8 sludge was prepared by coprecipitating all of the major metals (including Al). After the sludge was washed to meet the target, the sludge

  8. Fast simulations of patient-specific haemodynamics of coronary artery bypass grafts based on a POD-Galerkin method and a vascular shape parametrization

    NASA Astrophysics Data System (ADS)

    Ballarin, Francesco; Faggiano, Elena; Ippolito, Sonia; Manzoni, Andrea; Quarteroni, Alfio; Rozza, Gianluigi; Scrofani, Roberto

    2016-06-01

    In this work a reduced-order computational framework for the study of haemodynamics in three-dimensional patient-specific configurations of coronary artery bypass grafts dealing with a wide range of scenarios is proposed. We combine several efficient algorithms to face at the same time both the geometrical complexity involved in the description of the vascular network and the huge computational cost entailed by time-dependent patient-specific flow simulations. Medical imaging procedures allow patient-specific configurations to be reconstructed from clinical data. A centerlines-based parametrization is proposed to efficiently handle geometrical variations. POD-Galerkin reduced-order models are employed to cut down the large computational costs. This computational framework allows blood flows to be characterized for different physical and geometrical variations relevant in clinical practice, such as stenosis factors and anastomosis variations, in a rapid and reliable way. Several numerical results are discussed, highlighting the computational performance of the proposed framework, as well as its capability to carry out sensitivity analysis studies that were so far out of reach. In particular, a reduced-order simulation takes only a few minutes to run, resulting in computational savings of 99% of CPU time with respect to the full-order discretization. Moreover, the error between full-order and reduced-order solutions is also studied, and is numerically found to be less than 1% for reduced-order solutions obtained with just O(100) online degrees of freedom.
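
    The offline/online split behind a POD-Galerkin reduced-order model can be sketched in a few lines. The toy linear system below is an assumption standing in for the discretized flow problem; the point is only to show the snapshot SVD (offline) and the Galerkin projection onto the POD basis (online).

        import numpy as np

        # Full-order model (toy stand-in for a discretized flow problem): A x = b(mu)
        n = 500
        rng = np.random.default_rng(0)
        A = np.eye(n) + 0.01 * rng.standard_normal((n, n))

        def full_solve(mu):
            b = np.sin(mu * np.arange(n) / n)     # parameter-dependent right-hand side
            return np.linalg.solve(A, b)

        # Offline stage: collect snapshots and build a POD basis by truncated SVD
        snapshots = np.column_stack([full_solve(mu) for mu in np.linspace(1.0, 5.0, 20)])
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999) + 1
        V = U[:, :r]                              # POD basis, r << n

        # Online stage: Galerkin projection onto the POD basis for a new parameter
        A_r = V.T @ A @ V
        mu_new = 2.7
        b_new = np.sin(mu_new * np.arange(n) / n)
        x_r = V @ np.linalg.solve(A_r, V.T @ b_new)

        x_full = full_solve(mu_new)
        print(r, np.linalg.norm(x_r - x_full) / np.linalg.norm(x_full))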

  9. Application of particle method to the casting process simulation

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Zulaida, Y. M.; Anzai, K.

    2012-07-01

    Casting processes involve many significant phenomena such as fluid flow, solidification, and deformation, and it is known that casting defects are strongly influenced by these phenomena. However, the phenomena interact with each other in complex ways and are difficult to observe directly, because the melt and other apparatus components are at very high temperatures and are generally opaque; therefore, computer simulation is expected to provide considerable insight into what happens in these processes. Recently, the particle method, a fully Lagrangian approach, has attracted considerable attention. Particle methods, which require no computational lattice, have been developed rapidly because of their applicability to multi-physics problems. In this study, we combined the fluid flow, heat transfer and solidification simulation programs, and tried to simulate various casting processes such as continuous casting, centrifugal casting and ingot making. As a result of the continuous casting simulation, the powder flow could be calculated as well as the melt flow, and the resulting shape of the interface between the melt and the powder was obtained. In the centrifugal casting simulation, the mold was smoothly modeled along the shape of the real mold, and the fluid flow and the rotating mold were simulated directly. As a result, the flow of the melt dragged by the rotating mold was calculated well. The eccentric rotation and the influence of the Coriolis force were also reproduced directly and naturally. For the ingot making simulation, the shrinkage formation behavior was calculated and the shape of the shrinkage agreed well with the experimental result.

  10. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  11. Simulating cardiac ultrasound image based on MR diffusion tensor imaging

    PubMed Central

    Qin, Xulei; Wang, Silun; Shen, Ming; Lu, Guolan; Zhang, Xiaodong; Wagner, Mary B.; Fei, Baowei

    2015-01-01

    Purpose: Cardiac ultrasound simulation can have important applications in the design of ultrasound systems, understanding the interaction effect between ultrasound and tissue and setting the ground truth for validating quantification methods. Current ultrasound simulation methods fail to simulate the myocardial intensity anisotropies. New simulation methods are needed in order to simulate realistic ultrasound images of the heart. Methods: The proposed cardiac ultrasound image simulation method is based on diffusion tensor imaging (DTI) data of the heart. The method utilizes both the cardiac geometry and the fiber orientation information to simulate the anisotropic intensities in B-mode ultrasound images. Before the simulation procedure, the geometry and fiber orientations of the heart are obtained from high-resolution structural MRI and DTI data, respectively. The simulation includes two important steps. First, the backscatter coefficients of the point scatterers inside the myocardium are processed according to the fiber orientations using an anisotropic model. Second, the cardiac ultrasound images are simulated with anisotropic myocardial intensities. The proposed method was also compared with two other nonanisotropic intensity methods using 50 B-mode ultrasound image volumes of five different rat hearts. The simulated images were also compared with the ultrasound images of a diseased rat heart in vivo. A new segmental evaluation method is proposed to validate the simulation results. The average relative errors (AREs) of five parameters, i.e., mean intensity, Rayleigh distribution parameter σ, and first, second, and third quartiles, were utilized as the evaluation metrics. The simulated images were quantitatively compared with real ultrasound images in both ex vivo and in vivo experiments. Results: The proposed ultrasound image simulation method can realistically simulate cardiac ultrasound images of the heart using high-resolution MR-DTI data. The AREs of their

  12. An example-based brain MRI simulation framework

    NASA Astrophysics Data System (ADS)

    He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L.

    2015-03-01

    The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of the MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on the statistical model of training data. Results show that the example based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.
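
    The core of the example-based idea, learning a mapping from anatomical labels to MR intensities and reusing it on a new anatomy, can be sketched as follows. The 2D toy images, the patch size and the linear regressor are assumptions for illustration only (the paper uses a patch-based regression on real atlases), and scikit-learn is assumed to be available.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)

        def extract_patches(img, size=3):
            """Return one flattened size x size patch per interior pixel, with its center."""
            h, w = img.shape
            r = size // 2
            patches, centers = [], []
            for i in range(r, h - r):
                for j in range(r, w - r):
                    patches.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
                    centers.append((i, j))
            return np.array(patches), centers

        # "Atlas": an anatomical label image and its corresponding MR image
        labels_atlas = rng.integers(0, 3, size=(64, 64)).astype(float)
        mri_atlas = 50.0 * labels_atlas + 5.0 * rng.standard_normal(labels_atlas.shape)

        # Learn the label-patch -> intensity mapping (implicitly encodes the "MR physics")
        X, centers = extract_patches(labels_atlas)
        y = np.array([mri_atlas[i, j] for i, j in centers])
        model = LinearRegression().fit(X, y)

        # Simulate an MR image for a new anatomical model
        labels_new = rng.integers(0, 3, size=(64, 64)).astype(float)
        X_new, centers_new = extract_patches(labels_new)
        simulated = np.zeros_like(labels_new)
        for (i, j), value in zip(centers_new, model.predict(X_new)):
            simulated[i, j] = value
        print(simulated.shape, simulated.mean())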

  13. Modelica-based TCP simulation

    NASA Astrophysics Data System (ADS)

    Velieva, T. R.; Eferina, E. G.; Korolkova, A. V.; Kulyabov, D. S.; Sevastianov, L. A.

    2017-01-01

    For the study and verification of our mathematical model of telecommunication systems, a discrete simulation model and a continuous analytical model were developed. However, for various reasons, these implementations are not entirely satisfactory. It is necessary to develop a more adequate simulation model, possibly using a different modeling paradigm. To model the TCP source, a hybrid (continuous-discrete) approach is proposed. For computer implementation of the model, the physical modeling language Modelica is used. The hybrid approach allows us to take into account the transitions between different states in the continuous model of the TCP protocol. This approach yielded a simple simulation model of the TCP source. The model has great potential for expansion; it is possible to implement different types of TCP.

  14. Agent-Based Simulations for Project Management

    NASA Technical Reports Server (NTRS)

    White, J. Chris; Sholtes, Robert M.

    2011-01-01

    Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique being used at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.

  15. 3D numerical simulation for the transient electromagnetic field excited by the central loop based on the vector finite-element method

    NASA Astrophysics Data System (ADS)

    Li, J. H.; Zhu, Z. Q.; Liu, S. C.; Zeng, S. H.

    2011-12-01

    Based on the principle of abnormal field algorithms, Helmholtz equations for the electromagnetic field have been deduced. We made the electric field Helmholtz equation the governing equation, and derived the corresponding system of vector finite element equations using the Galerkin method. For solving the governing equation with the vector finite element method, we divided the computing domain into homogeneous brick elements and used Whitney-type vector basis functions. After obtaining the electric field's anomaly field in the Laplace domain using the vector finite element method, we used the Gaver-Stehfest algorithm to transform the anomaly field to the time domain, and obtained the impulse response of the magnetic field's anomaly field through the Faraday law of electromagnetic induction. By comparison with 1D analytic solutions for quasi-H-type geoelectric models, the accuracy of the vector finite element method is verified. For the low-resistivity brick geoelectric model, the plot shape of the electromotive force computed using the vector finite element method coincides with those of the integral equation method and finite-difference time-domain solutions.

  16. Agent Based Simulation Output Analysis

    DTIC Science & Technology

    2011-12-01

    over long periods of time) not to have a steady state, but apparently does. These simulation models are available free from sigmawiki.com 2.1...are used in computer animations and movies (for example, in the movie Jurassic Park) as well as to look for emergent social behavior in groups

  17. High-performance liquid chromatography - Ultraviolet method for the determination of total specific migration of nine ultraviolet absorbers in food simulants based on 1,1,3,3-Tetramethylguanidine and organic phase anion exchange solid phase extraction to remove glyceride.

    PubMed

    Wang, Jianling; Xiao, Xiaofeng; Chen, Tong; Liu, Tingfei; Tao, Huaming; He, Jun

    2016-06-17

    The glyceride in oil food simulant usually causes serious interference with target analytes and leads to failure of the normal function of the RP-HPLC column. In this work, a convenient HPLC-UV method for the determination of the total specific migration of nine ultraviolet (UV) absorbers in food simulants was developed based on 1,1,3,3-tetramethylguanidine (TMG) and organic phase anion exchange (OPAE) SPE to efficiently remove glyceride in olive oil simulant. In contrast to normal ion exchange carried out in an aqueous solution or aqueous phase environment, the OPAE SPE was performed in an organic phase environment, and the time-consuming and challenging extraction of the nine UV absorbers from vegetable oil with aqueous solution could be readily omitted. The method was shown to have good linearity (r ≥ 0.99992), precision (intra-day RSD ≤ 3.3%), and accuracy (91.0% ≤ recoveries ≤ 107%); furthermore, low limits of quantification (0.05-0.2 mg/kg) were observed in five types of food simulants (10% ethanol, 3% acetic acid, 20% ethanol, 50% ethanol and olive oil). The method was found to be well suited for quantitative determination of the total specific migration of the nine UV absorbers both in aqueous and vegetable oil simulants according to Commission Regulation (EU) No. 10/2011. Migration levels of the nine UV absorbers were determined in 31 plastic samples, and UV-24, UV-531, HHBP and UV-326 were frequently detected, especially UV-326 in olive oil simulant for PE samples. In addition, the OPAE SPE procedure was also applied to efficiently enrich or purify seven antioxidants in olive oil simulant. Results indicate that this procedure will have more extensive applications in the enrichment or purification of extremely weak acidic compounds with a phenol hydroxyl group that are relatively stable in TMG n-hexane solution and that can barely be extracted from vegetable oil.

  18. A multiscale quantum mechanics/electromagnetics method for device simulations.

    PubMed

    Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua

    2015-04-07

    Multiscale modeling has become a popular tool for research in different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied, and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.

  19. Atomistic hybrid DSMC/NEMD method for nonequilibrium multiscale simulations

    NASA Astrophysics Data System (ADS)

    Gu, Kai; Watkins, Charles B.; Koplik, Joel

    2010-03-01

    A multiscale hybrid method for coupling the direct simulation Monte Carlo (DSMC) method to the nonequilibrium molecular dynamics (NEMD) method is introduced. The method addresses Knudsen layer type gas flows within a few mean free paths of an interface or about an object with dimensions of the order of a few mean free paths. It employs the NEMD method to resolve nanoscale phenomena closest to the interface along with coupled DSMC simulation of the remainder of the Knudsen layer. The hybrid DSMC/NEMD method is a particle based algorithm without a buffer zone. It incorporates a new, modified generalized soft sphere (MGSS) molecular collision model to improve the poor computational efficiency of the traditional generalized soft sphere (GSS) model and to achieve DSMC compatibility with Lennard-Jones NEMD molecular interactions. An equilibrium gas, a Fourier thermal flow, and an oscillatory Couette flow are simulated to validate the method. The method shows good agreement with Maxwell-Boltzmann theory for the equilibrium system, Chapman-Enskog theory for Fourier flow, and pure DSMC simulations for oscillatory Couette flow. Speedup in CPU time of the hybrid solver is benchmarked against a pure NEMD solver baseline for different system sizes and solver domain partitions. Finally, the hybrid method is applied to investigate the interaction of argon gas with solid surface molecules in a parametric study of the influence of wetting effects and solid molecular mass on energy transfer and thermal accommodation coefficients. It is determined that wetting effect strength and solid molecular mass have a significant impact on the energy transfer between gas and solid phases and the thermal accommodation coefficient.

  20. Research on BOM based composable modeling method

    NASA Astrophysics Data System (ADS)

    Zhang, Mingxin; He, Qiang; Gong, Jianxing

    2013-03-01

    Composable modeling has been a research hotspot in the area of Modeling and Simulation for a long time. In order to increase the reuse and interoperability of BOM-based models, this paper put forward a composable modeling method based on BOM, studied the basic theory of BOM-based composable modeling, designed a general structure for the coupled model based on BOM, and traversed the structure of atomic and coupled models based on BOM. Finally, the paper introduced the process of BOM-based composable modeling and drew conclusions about the method. From the prototype we developed and the accumulated model stocks, we found that this method can increase the reuse and interoperability of models.

  1. Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-09-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have solely been designed to archive model files, but the databases should also give users a chance to validate the models before downloading them. In this paper, we report our on-going project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave, is pre-installed. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user accesses the machine remotely through a web browser and carries out the simulation, without the need to install any software other than a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages.

  2. Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method

    NASA Astrophysics Data System (ADS)

    Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han

    2015-12-01

    Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.

  3. Changing the Paradigm: Simulation, a Method of First Resort

    DTIC Science & Technology

    2011-09-01

    PARADIGM: SIMULATION, A METHOD OF FIRST RESORT by Ben L. Anderson September 2011 Thesis Advisor: Thomas W. Lucas Second Reader: Devaushi...COVERED Master’s Thesis 4. TITLE AND SUBTITLE Changing the Paradigm: Simulation, a Method of First Resort 5. FUNDING NUMBERS 6. AUTHOR(S...is over 1,000,000,000 times more powerful than the first simulation pioneers had sixty years ago, yet the concept that simulation is a “ method of

  4. First Principles based methods and applications for realistic simulations on complex soft materials to develop new materials for energy, health, and environmental sustainability

    NASA Astrophysics Data System (ADS)

    Goddard, William

    2013-03-01

    For soft materials applications it is essential to obtain accurate descriptions of the weak (London dispersion, electrostatic) interactions between nonbond units, to include interactions with and stabilization by solvent, and to obtain accurate free energies and entropic changes during chemical, physical, and thermal processing. We will describe some of the advances being made in first principles based methods for treating soft materials, with applications selected from new organic electrodes and electrolytes for batteries and fuel cells, forward osmosis for water cleanup, extended matter stable at ambient conditions, and drugs for modulating activation of GPCR membrane proteins.

  5. Base Camp Design Simulation Training

    DTIC Science & Technology

    2011-07-01

    It is a treasure-trove of engineering blueprints, bills of materials (BOMs), and plans. This 600-man base camp, Figure 6, inputted into VBS2TM...the technical proficiencies required in the construction of base camps. There are no blueprints or bills of materials included in the training. Yet...that “base camps need DOTMLPF (doctrine, organization, training, material, leadership and education, personnel, and facilities) solutions to address

  6. XML-based resources for simulation

    SciTech Connect

    Kelsey, R. L.; Riese, J. M.; Young, G. A.

    2004-01-01

    As simulations and the machines they run on become larger and more complex the inputs and outputs become more unwieldy. Increased complexity makes the setup of simulation problems difficult. It also contributes to the burden of handling and analyzing large amounts of output results. Another problem is that among a class of simulation codes (such as those for physical system simulation) there is often no single standard format or resource for input data. To run the same problem on different simulations requires a different setup for each simulation code. The extensible Markup Language (XML) is used to represent a general set of data resources including physical system problems, materials, and test results. These resources provide a 'plug and play' approach to simulation setup. For example, a particular material for a physical system can be selected from a material database. The XML-based representation of the selected material is then converted to the native format of the simulation being run and plugged into the simulation input file. In this manner a user can quickly and more easily put together a simulation setup. In the case of output data, an XML approach to regression testing includes tests and test results with XML-based representations. This facilitates the ability to query for specific tests and make comparisons between results. Also, output results can easily be converted to other formats for publishing online or on paper.
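
    A small sketch of the 'plug and play' idea: a material described once in XML is converted into a simulation's native keyword-style input. The element names, units attribute and target format below are hypothetical, chosen only to illustrate the conversion step.

        import xml.etree.ElementTree as ET

        # Hypothetical XML description of a material resource (names and units are illustrative)
        xml_text = """
        <material name="copper">
          <density units="g/cm3">8.96</density>
          <specific_heat units="J/(g*K)">0.385</specific_heat>
          <thermal_conductivity units="W/(m*K)">401</thermal_conductivity>
        </material>
        """

        def to_native_input(xml_string):
            """Convert the XML material record into a plain keyword-style input block."""
            root = ET.fromstring(xml_string)
            lines = ["MATERIAL " + root.get('name')]
            for child in root:
                lines.append("  {} = {}  ! {}".format(child.tag.upper(),
                                                      child.text.strip(),
                                                      child.get('units')))
            return "\n".join(lines)

        print(to_native_input(xml_text))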

  7. Method for simulating discontinuous physical systems

    DOEpatents

    Baty, Roy S.; Vaughn, Mark R.

    2001-01-01

    The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.

  8. PIXE simulation: Models, methods and technologies

    SciTech Connect

    Batic, M.; Pia, M. G.; Saracco, P.; Weidenspointner, G.

    2013-04-19

    The simulation of PIXE (Particle Induced X-ray Emission) is discussed in the context of general-purpose Monte Carlo systems for particle transport. Dedicated PIXE codes are mainly concerned with the application of the technique to elemental analysis, but they lack the capability of dealing with complex experimental configurations. General-purpose Monte Carlo codes provide powerful tools to model the experimental environment in great detail, but so far they have provided limited functionality for PIXE simulation. This paper reviews recent developments that have endowed the Geant4 simulation toolkit with advanced capabilities for PIXE simulation, and related efforts for quantitative validation of cross sections and other physical parameters relevant to PIXE simulation.

  9. A new rapid method of solar simulator calibration

    NASA Technical Reports Server (NTRS)

    Ross, B.

    1976-01-01

    A quick method for checking solar simulator spectral content is presented. The method is based upon a solar cell of extended spectral sensitivity and known absolute response, and a dichroic mirror with the reflection-transmission transition close to the peak wavelength of the Thekaekara AM0 distribution. It balances the need for spectral discrimination against the ability to integrate wide spectral regions of the distribution, which was considered important due to the spiky nature of the high-pressure xenon lamp in common use. The results are expressed in terms of a single number, the blue/red ratio, which, combined with the total (unfiltered) output, provides a simple but adequate characterization. Measurements were conducted at eleven major facilities across the country and a total of eighteen simulators were measured, including five pulsed units.

  10. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments is based upon a novel approach that relies on the global momentum conservation of the closed fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. A numerical example illustrates the method's application to prediction of bulk fluid behavior during a spacecraft ullage settling maneuver.

  11. Lattice-Boltzmann-based Simulations of Diffusiophoresis

    NASA Astrophysics Data System (ADS)

    Castigliego, Joshua; Kreft Pearce, Jennifer

    We present results from a lattice-Boltzmann-based Brownian dynamics simulation of diffusiophoresis and the separation of particles within the system. A gradient in viscosity that simulates a concentration gradient in a dissolved polymer allows us to separate various types of particles by their deformability. As seen in previous experiments, simulated particles that have a higher deformability react differently to the polymer matrix than those with a lower deformability. Therefore, the particles can be separated from each other. This simulation, in particular, was intended to model an oceanic system where the particles of interest were zooplankton, phytoplankton and microplastics. The separation of plankton from the microplastics was achieved.

  12. A High Order Element Based Method for the Simulation of Velocity Damping in the Hyporheic Zone of a High Mountain River

    NASA Astrophysics Data System (ADS)

    Preziosi-Ribero, Antonio; Peñaloza-Giraldo, Jorge; Escobar-Vargas, Jorge; Donado-Garzón, Leonardo

    2016-04-01

    Groundwater-surface water interaction is a topic that has gained relevance among the scientific community over the past decades. However, several questions within this topic remain unsolved, and almost all the past research concerns transport phenomena and has little to do with understanding the dynamics of the flow patterns of the above-mentioned interactions. The aim of this research is to verify the attenuation of the water velocity that comes from the free surface and enters the porous medium under the bed of a high mountain river. The understanding of this process is a key feature in order to characterize and quantify the interactions between groundwater and surface water. However, the lack of information and the difficulties that arise when measuring groundwater flows under streams make direct physical quantification unreliable for scientific purposes. These issues suggest that numerical simulations and in-stream velocity measurements can be used in order to characterize these flows. Previous studies have simulated the attenuation of a sinusoidal pulse of vertical velocity that comes from a stream and goes into a porous medium. These studies used the Burgers equation and the 1-D Navier-Stokes equations as governing equations. However, the boundary conditions of the problem and the results obtained when varying the parameters of the equations show that the understanding of the process is not yet complete. To begin with, a Spectral Multi Domain Penalty Method (SMPM) was proposed for quantifying the velocity damping by solving the Navier-Stokes equations in 1D. The main assumptions are incompressibility and a hydrostatic approximation for the pressure distributions. This method was tested with theoretical signals that are mainly trigonometric pulses or functions. Afterwards, in order to test the results with real signals, velocity profiles were captured near the Gualí River bed (Honda, Colombia), with an Acoustic Doppler

  13. Meshless lattice Boltzmann method for the simulation of fluid flows.

    PubMed

    Musavi, S Hossein; Ashrafizaadeh, Mahmud

    2015-02-01

    A meshless lattice Boltzmann numerical method is proposed. The collision and streaming operators of the lattice Boltzmann equation are separated, as in the usual lattice Boltzmann models. While the purely local collision equation remains the same, we rewrite the streaming equation as a pure advection equation and discretize the resulting partial differential equation using the Lax-Wendroff scheme in time and the meshless local Petrov-Galerkin scheme based on augmented radial basis functions in space. The meshless feature of the proposed method makes it a more powerful lattice Boltzmann solver, especially for cases in which using meshes introduces significant numerical errors into the solution, or when improving the mesh quality is a complex and time-consuming process. Three well-known benchmark fluid flow problems, namely the plane Couette flow, the circular Couette flow, and the impulsively started cylinder flow, are simulated for the validation of the proposed method. Excellent agreement with analytical solutions or with previous experimental and numerical results in the literature is observed in all the simulations. Although the computational resources required for the meshless method per node are higher compared to that of the standard lattice Boltzmann method, it is shown that for cases in which the total number of nodes is significantly reduced, the present method actually outperforms the standard lattice Boltzmann method.

  14. Methods for simulating solute breakthrough curves in pumping groundwater wells

    USGS Publications Warehouse

    Starn, J. Jeffrey; Bagtzoglou, Amvrossios C.; Robbins, Gary A.

    2012-01-01

    In modeling there is always a trade-off between execution time and accuracy. For gradient-based parameter estimation methods, where a simulation model is run repeatedly to populate a Jacobian (sensitivity) matrix, there exists a need for rapid simulation methods of known accuracy that can decrease execution time, and thus make the model more useful without sacrificing accuracy. Convolution-based methods can be executed rapidly for any desired input function once the residence-time distribution is known. The residence-time distribution can be calculated efficiently using particle tracking, but particle tracking can be ambiguous near a pumping well if the grid is too coarse. We present several embedded analytical expressions for improving particle tracking near a pumping well and compare them with a finely gridded finite-difference solution in terms of accuracy and CPU usage. Even though the embedded analytical approach can improve particle tracking near a well, particle methods reduce, but do not eliminate, reliance on a grid because velocity fields typically are calculated on a grid, and additional error is incurred using linear interpolation of velocity. A dilution rate can be calculated for a given grid and pumping well to determine if the grid is sufficiently refined. Embedded analytical expressions increase accuracy but add significantly to CPU usage. Structural error introduced by the numerical solution method may affect parameter estimates.
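
    The convolution step described above is simple to sketch. Here the residence-time distribution is assumed to be lognormal (in practice it would come from particle tracking) and the input is a finite concentration pulse; the breakthrough curve at the well is then the convolution of the two. Parameter values are illustrative.

        import numpy as np
        from scipy import stats

        dt = 1.0                                  # time step (days)
        t = np.arange(0.0, 3000.0, dt)

        # Assumed residence-time distribution (stand-in for one derived from particle tracking)
        rtd = stats.lognorm.pdf(t, s=0.5, scale=500.0)
        rtd /= np.trapz(rtd, t)                   # normalize to unit area

        # Input concentration history at the source: a 200-day unit pulse
        c_in = np.where(t < 200.0, 1.0, 0.0)

        # Breakthrough curve at the pumping well: convolution of the input with the RTD
        c_out = np.convolve(c_in, rtd)[:len(t)] * dt

        print(c_out.max(), t[np.argmax(c_out)])

    Because the residence-time distribution only needs to be computed once, the convolution can be re-evaluated cheaply for any input history, which is what makes this approach attractive inside a gradient-based parameter-estimation loop.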

  15. Cloud GPU-based simulations for SQUAREMR

    NASA Astrophysics Data System (ADS)

    Kantasis, George; Xanthis, Christos G.; Haris, Kostas; Heiberg, Einar; Aletras, Anthony H.

    2017-01-01

    Quantitative Magnetic Resonance Imaging (MRI) is a research tool, used more and more in clinical practice, as it provides objective information with respect to the tissues being imaged. Pixel-wise T1 quantification (T1 mapping) of the myocardium is one such application with diagnostic significance. A number of mapping sequences have been developed for myocardial T1 mapping with a wide range in terms of measurement accuracy and precision. Furthermore, measurement results obtained with these pulse sequences are affected by errors introduced by the particular acquisition parameters used. SQUAREMR is a new method which has the potential of improving the accuracy of these mapping sequences through the use of massively parallel simulations on Graphical Processing Units (GPUs) by taking into account different acquisition parameter sets. This method has been shown to be effective in myocardial T1 mapping; however, execution times may exceed 30 min which is prohibitively long for clinical applications. The purpose of this study was to accelerate the construction of SQUAREMR's multi-parametric database to more clinically acceptable levels. The aim of this study was to develop a cloud-based cluster in order to distribute the computational load to several GPU-enabled nodes and accelerate SQUAREMR. This would accommodate high demands for computational resources without the need for major upfront equipment investment. Moreover, the parameter space explored by the simulations was optimized in order to reduce the computational load without compromising the T1 estimates compared to a non-optimized parameter space approach. A cloud-based cluster with 16 nodes resulted in a speedup of up to 13.5 times compared to a single-node execution. Finally, the optimized parameter set approach allowed for an execution time of 28 s using the 16-node cluster, without compromising the T1 estimates by more than 10 ms. The developed cloud-based cluster and optimization of the parameter set reduced

  16. A generic reaction-based biogeochemical simulator

    SciTech Connect

    Fang, Yilin; Yabusaki, Steven B.; Yeh, Gour T.; C.T. Miller, M.W. Farthing, W.G. Gray, and G.F. Pinder

    2004-06-17

    This paper presents a generic biogeochemical simulator, BIOGEOCHEM. The simulator can read a thermodynamic database based on the EQ3/EQ6 database. It can also read user-specified equilibrium and kinetic reactions (reactions not defined in the format of the EQ3/EQ6 database) symbolically. BIOGEOCHEM is developed with a general paradigm. It overcomes the requirement in most available reaction-based models that reactions and rate laws be specified in a limited number of canonical forms. The simulator interprets reactions and rate laws of virtually any type for input to the MAPLE symbolic mathematical software package. MAPLE then generates Fortran code for the analytical Jacobian matrix used in the Newton-Raphson technique, which is compiled and linked into the BIOGEOCHEM executable. With this feature, users are exempted from recoding the simulator to accept new equilibrium expressions or kinetic rate laws. Two examples are used to demonstrate the new features of the simulator.
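
    The symbolic step in this kind of simulator, turning user-specified rate laws into executable code for the analytical Jacobian, can be illustrated with SymPy standing in for the MAPLE-to-Fortran pipeline. The small reversible reaction network and rate constants below are assumptions for illustration.

        import sympy as sp

        # User-specified species and kinetic rate laws for a reversible reaction A + B <-> C
        A, B, C = sp.symbols('A B C', positive=True)
        k1, k2 = sp.symbols('k1 k2', positive=True)
        rates = [
            -k1 * A * B + k2 * C,     # dA/dt
            -k1 * A * B + k2 * C,     # dB/dt
             k1 * A * B - k2 * C,     # dC/dt
        ]

        # Symbolically derive the analytical Jacobian, then generate an executable function
        species = [A, B, C]
        J = sp.Matrix(rates).jacobian(species)
        jac_func = sp.lambdify((species, [k1, k2]), J, 'numpy')

        print(J)                                         # analytical form
        print(jac_func([1.0, 2.0, 0.5], [0.3, 0.1]))     # numeric evaluation, e.g. for a Newton step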

  17. Parallel implementation of the particle simulation method with dynamic load balancing: Toward realistic geodynamical simulation

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Nishiura, D.

    2015-12-01

    Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are suitable for problems with complex geometry and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes, for example, tsunami with free surface and floating bodies, magma intrusion with fracture of rock, and shear zone pattern generation of granular deformation. In order to investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulation. Parallel computing is therefore important for handling such huge computational cost. An efficient parallel implementation of SPH and DEM methods is, however, known to be difficult, especially for distributed-memory architectures. Lagrangian methods inherently show a workload imbalance problem for parallelization with a fixed domain decomposition in space, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for SPH and DEM utilizing dynamic load balancing algorithms toward high-resolution simulation over large domains using massively parallel supercomputer systems. Our method treats the imbalance in execution time across MPI processes as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. In order to perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our

  18. A Carbonaceous Chondrite Based Simulant of Phobos

    NASA Technical Reports Server (NTRS)

    Rickman, Douglas L.; Patel, Manish; Pearson, V.; Wilson, S.; Edmunson, J.

    2016-01-01

    In support of an ESA-funded concept study considering a sample return mission, a simulant of the Martian moon Phobos was needed. There are no samples of the Phobos regolith; therefore, none of the four characteristics normally used to design a simulant are explicitly known for Phobos. Because of this, specifications for a Phobos simulant were based on spectroscopy, other remote measurements, and judgment. A composition based on the Tagish Lake meteorite was assumed. The requirement that sterility be achieved, especially given the required organic content, was unusual and problematic. The final design mixed JSC-1A, antigorite, pseudo-agglutinates and gilsonite. Sterility was achieved by irradiation at a commercial facility.

  19. Computer Simulation Methods for Defect Configurations and Nanoscale Structures

    SciTech Connect

    Gao, Fei

    2010-01-01

    This chapter will describe general computer simulation methods, including ab initio calculations, molecular dynamics and the kinetic Monte Carlo method, and their applications to the calculation of defect configurations in various materials (metals, ceramics and oxides) and the simulation of nanoscale structures due to ion-solid interactions. The multiscale theory, modeling, and simulation techniques (in both time scale and space scale) will be emphasized, and comparisons between computer simulation results and experimental observations will be made.

  20. 3-D Quantum Transport Solver Based on the Perfectly Matched Layer and Spectral Element Methods for the Simulation of Semiconductor Nanodevices

    PubMed Central

    Cheng, Candong; Lee, Joon-Ho; Lim, Kim Hwa; Massoud, Hisham Z.; Liu, Qing Huo

    2007-01-01

    A 3-D quantum transport solver based on the spectral element method (SEM) and perfectly matched layer (PML) is introduced to solve the 3-D Schrödinger equation with a tensor effective mass. In this solver, the influence of the environment is replaced with the artificial PML open boundary extended beyond the contact regions of the device. These contact regions are treated as waveguides with known incident waves from waveguide mode solutions. As the transmitted wave function is treated as a total wave, there is no need to decompose it into waveguide modes, thus significantly simplifying the problem in comparison with conventional open boundary conditions. The spectral element method leads to an exponentially improving accuracy with the increase in the polynomial order and sampling points. The PML region can be designed such that less than −100 dB outgoing waves are reflected by this artificial material. The computational efficiency of the SEM solver is demonstrated by comparing the numerical and analytical results from waveguide and plane-wave examples, and its utility is illustrated by multiple-terminal devices and semiconductor nanotube devices. PMID:18037971

  1. A fast mollified impulse method for biomolecular atomistic simulations

    NASA Astrophysics Data System (ADS)

    Fath, L.; Hochbruck, M.; Singh, C. V.

    2017-03-01

    Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious in implementation since they require either analytical Hessians or they need to solve nonlinear systems from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementation in standard softwares without Hessians or solving constraint systems. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice-ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift less than 1% on a 50 ps simulation.

  2. Alloy Surface Structure: Computer Simulations Using the BFS Method

    NASA Astrophysics Data System (ADS)

    Bozzolo, Guillermo; Ferrante, John

    The use of semiempirical methods for modeling alloy properties has proven to be difficult and limited. The two primary approaches to this modeling, the embedded atom method and the phenomenological method of Miedema, have serious limitations in the range of materials studied and the degree of success in predicting properties of such systems. Recently, a new method has been developed by Bozzolo, Ferrante and Smith (BFS) which has had considerable success in predicting a wide range of alloy properties. In this work, we reference previous BFS applications to surface alloy formation and alloy surface structure, leading to the analysis of binary and ternary Ni-based alloy surfaces. We present Monte Carlo simulation results of thin films of NiAl and Ni-Al-Ti alloys, for a wide range of concentration of the Ti alloying addition. The composition of planes close to the surface as well as bulk features are discussed.

  3. Multinomial tau-leaping method for stochastic kinetic simulations

    NASA Astrophysics Data System (ADS)

    Pettigrew, Michel F.; Resat, Haluk

    2007-02-01

    We introduce the multinomial tau-leaping (MτL) method for general reaction networks with multichannel reactant dependencies. The MτL method is an extension of the binomial tau-leaping method where efficiency is improved in several ways. First, τ-leaping steps are determined simply and efficiently using a priori information and Poisson distribution-based estimates of expectation values for reaction numbers over a tentative τ-leaping step. Second, networks are partitioned into closed groups of reactions and corresponding reactants in which no group reactant set is found in any other group. Third, product formation is factored into upper-bound estimation of the number of times a particular reaction occurs. Together, these features allow larger time steps where the numbers of reactions occurring simultaneously in a multichannel manner are estimated accurately using a multinomial distribution. Furthermore, we develop a simple procedure that places a specific upper bound on the total reaction number to ensure non-negativity of species populations over a single multiple-reaction step. Using two disparate test case problems involving cellular processes—epidermal growth factor receptor signaling and a lactose operon model—we show that the τ-leaping based methods such as the MτL algorithm can significantly reduce the number of simulation steps thus increasing the numerical efficiency over the exact stochastic simulation algorithm by orders of magnitude.
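
    A plain Poisson tau-leaping step (the baseline that the multinomial method refines) can be sketched for a toy two-reaction network; the rate constants, step size and rejection rule are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(42)

        # Toy network: S1 -> S2 with propensity c1*S1, and S2 -> 0 with propensity c2*S2
        stoich = np.array([[-1, +1],
                           [ 0, -1]])       # rows: reactions, columns: species
        c = np.array([0.5, 0.2])

        def propensities(x):
            return np.array([c[0] * x[0], c[1] * x[1]])

        x = np.array([1000, 0])
        t, t_end, tau = 0.0, 10.0, 0.05
        while t < t_end:
            a = propensities(x)
            k = rng.poisson(a * tau)         # number of firings of each reaction during the leap
            x_new = x + k @ stoich
            if np.any(x_new < 0):            # reject leaps that drive populations negative
                tau /= 2.0
                continue
            x, t = x_new, t + tau
        print(x)

    The multinomial variant described in the abstract instead distributes a bounded total number of reaction firings across channels with a multinomial distribution, which is how the method ensures non-negative populations over a single multiple-reaction step.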

  4. A simulation based approach to optimize inventory replenishment with RAND algorithm: An extended study of corrected demand using Holt's method for textile industry

    NASA Astrophysics Data System (ADS)

    Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam

    2016-07-01

    Inventory has been a major concern in supply chains, and numerous studies have recently been carried out on inventory control, bringing forth a number of methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research is aimed at providing a better replenishment policy for multi-product, single-supplier situations for chemical raw materials of textile industries in Bangladesh. It is assumed that industries currently pursue an individual replenishment system. The purpose is to find the optimum ideal cycle time and the individual replenishment cycle time of each product that will yield the lowest annual holding and ordering cost, and also to find the optimum ordering quantity. In this paper an indirect grouping strategy has been used; it is suggested that indirect grouping outperforms direct grouping when the major cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is used for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for each individual item, so the replenishment cycle time for each product is T×ki. Firstly, based on the data, a comparison between the currently prevailing (individual) process and RAND using actual demands shows a 49% improvement in the total cost of replenishment. Secondly, discrepancies in demand are corrected using Holt's method; however, demands can only be forecasted one or two months into the future because of the demand pattern of the industry under consideration. Evidently, application of RAND with corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
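
    A simplified iterative heuristic in the spirit of RAND (for a single starting cycle time; the full algorithm scans several candidate values of T) is sketched below. The major and minor ordering costs, holding costs and demands are illustrative assumptions.

        import math

        # Illustrative data: major ordering cost S; per item: minor cost s, holding cost h, annual demand d
        S = 200.0
        items = [dict(s=20.0, h=2.0, d=1200.0),
                 dict(s=10.0, h=1.5, d=800.0),
                 dict(s=35.0, h=4.0, d=300.0)]

        def total_cost(T, ks):
            ordering = (S + sum(it['s'] / k for it, k in zip(items, ks))) / T
            holding = (T / 2.0) * sum(k * it['h'] * it['d'] for it, k in zip(items, ks))
            return ordering + holding

        T = 0.05                         # initial ideal (basic) cycle time, in years
        for _ in range(20):
            # Best integer multiplier for each item, given the current cycle time T
            ks = [max(1, round(math.sqrt(2.0 * it['s'] / (it['h'] * it['d'])) / T)) for it in items]
            # Re-optimize the cycle time, given the multipliers
            T = math.sqrt(2.0 * (S + sum(it['s'] / k for it, k in zip(items, ks)))
                          / sum(k * it['h'] * it['d'] for it, k in zip(items, ks)))

        print(T, ks, total_cost(T, ks))
        # Each item i is then replenished every ki*T, with order quantity d_i * ki * T.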

  5. Different percentages of false-positive results obtained using five methods for the calculation of reference change values based on simulated normal and ln-normal distributions of data.

    PubMed

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G; Sölétormos, György

    2016-11-01

    Background Reference change values provide objective tools to assess the significance of a change in two consecutive results for a biomarker from an individual. The reference change value calculation is based on the assumption that within-subject biological variation has random fluctuation around a homeostatic set point that follows a normal (Gaussian) distribution. This set point (or baseline in steady state) should be estimated from a set of previous samples, but, in practice, decisions based on the reference change value are often based on only two consecutive results. The original reference change value was based on standard deviations according to the assumption of normality, but was soon changed to coefficients of variation (CV) in the formula reference change value = ±Z · √2 · CV, where Z depends on the desired probability of significance, which also defines the percentage of false-positive results. The aim of this study was to investigate false-positive results using five different published methods for calculation of the reference change value. Methods The five reference change value methods were examined using normally and ln-normally distributed simulated data. Results One method performed best in approaching the theoretical false-positive percentages on normally distributed data and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without use of an estimated set point) performed worst on both normally distributed and ln-normally distributed data. Conclusions The optimal choice of method to calculate reference change value limits requires knowledge of the distribution of data (normal or ln-normal) and, if possible, knowledge of the homeostatic set point.
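
    The two-result case can be checked with a small Monte Carlo experiment: pairs of results are drawn for a stable individual and the fraction of pairs flagged by the reference change value RCV = Z · √2 · CV is counted. The CV, Z and set-point values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(7)

        cv = 0.05                 # within-subject coefficient of variation (5%)
        z = 1.96                  # two-sided 95% limit
        rcv = z * np.sqrt(2.0) * cv * 100.0          # reference change value, in percent

        set_point = 100.0
        n = 100_000
        x1 = rng.normal(set_point, cv * set_point, n)
        x2 = rng.normal(set_point, cv * set_point, n)

        # Percentage change between two consecutive results from a stable individual
        pct_change = 100.0 * (x2 - x1) / x1
        false_positive = np.mean(np.abs(pct_change) > rcv)
        print(rcv, false_positive)                   # roughly 5% flagged despite no true change

    Because the percentage change here is computed relative to the first result rather than the (unknown) set point, the observed false-positive fraction deviates slightly from the nominal level, illustrating how the choice of calculation method shifts the false-positive rate.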

  6. The parallel subdomain-levelset deflation method in reservoir simulation

    NASA Astrophysics Data System (ADS)

    van der Linden, J. H.; Jönsthövel, T. B.; Lukyanov, A. A.; Vuik, C.

    2016-01-01

    Extreme and isolated eigenvalues are known to be harmful to the convergence of an iterative solver. These eigenvalues can be produced by strong heterogeneity in the underlying physics. We can improve the quality of the spectrum by 'deflating' the harmful eigenvalues. In this work, deflation is applied to linear systems in reservoir simulation. In particular, large, sudden differences in the permeability produce extreme eigenvalues. The number and magnitude of these eigenvalues are linked to the number and magnitude of the permeability jumps. Two deflation methods are discussed. Firstly, we state that harmonic Ritz eigenvector deflation, which computes the deflation vectors from the information produced by the linear solver, is unfeasible in modern reservoir simulation due to high costs and lack of parallelism. Secondly, we test a physics-based subdomain-levelset deflation algorithm that constructs the deflation vectors a priori. Numerical experiments show that both methods can improve the performance of the linear solver. We highlight the fact that subdomain-levelset deflation is particularly suitable for a parallel implementation. For cases with well-defined permeability jumps of a factor of 10^4 or higher, parallel physics-based deflation has potential in commercial applications. In particular, the good scalability of parallel subdomain-levelset deflation combined with the robust parallel preconditioner for the deflated system suggests the use of this method as an alternative to AMG.

  7. A distributed UNIX-based simulator

    SciTech Connect

    Wyatt, P.W.; Arnold, T.R.; Hammer, K.E. ); Peery, J.S.; McKaskle, G.A. . Dept. of Nuclear Engineering)

    1990-01-01

    One of the problems confronting the designers of simulators over the last ten years -- particularly the designers of nuclear plant simulators -- has been how to accommodate the demands of their customers for increasing verisimilitude, especially in the modeling of as-faulted conditions. The demand for the modeling of multiphase multi-component thermal-hydraulics, for example, imposed a requirement that taxed the ingenuity of the simulator software developers. Difficulty was encountered in fitting such models into the existing simulator framework -- not least because the real-time requirement of training simulation imposed severe limits on the minimum time step. In the mid-1980s, two evolutions that had been proceeding for some time culminated in mature products of potentially great utility to simulation. One was the emergence of low-cost work stations featuring not only versatile, object-oriented graphics, but also considerable number-crunching capabilities of their own. The other was the adoption of UNIX as a "standard" operating system common to at least some machines offered by virtually all vendors. As a result, it is possible to design a simulator whose graphics and executive functions are off-loaded to one or more work stations, which are designed to handle such tasks. The number-crunching duties are assigned to another machine, which has been designed expressly for that purpose. This paper deals with such a distributed UNIX-based simulator developed at the Savannah River Laboratory using graphics supplied by Texas A&M University under contract to SRL.

  8. Simulation-based medical education in pediatrics.

    PubMed

    Lopreiato, Joseph O; Sawyer, Taylor

    2015-01-01

    The use of simulation-based medical education (SBME) in pediatrics has grown rapidly over the past 2 decades and is expected to continue to grow. Similar to other instructional formats used in medical education, SBME is an instructional methodology that facilitates learning. Successful use of SBME in pediatrics requires attention to basic educational principles, including the incorporation of clear learning objectives. To facilitate learning during simulation the psychological safety of the participants must be ensured, and when done correctly, SBME is a powerful tool to enhance patient safety in pediatrics. Here we provide an overview of SBME in pediatrics and review key topics in the field. We first review the tools of the trade and examine various types of simulators used in pediatric SBME, including human patient simulators, task trainers, standardized patients, and virtual reality simulation. Then we explore several uses of simulation that have been shown to lead to effective learning, including curriculum integration, feedback and debriefing, deliberate practice, mastery learning, and range of difficulty and clinical variation. Examples of how these practices have been successfully used in pediatrics are provided. Finally, we discuss the future of pediatric SBME. As a community, pediatric simulation educators and researchers have been a leading force in the advancement of simulation in medicine. As the use of SBME in pediatrics expands, we hope this perspective will serve as a guide for those interested in improving the state of pediatric SBME.

  9. Novel Methods for Electromagnetic Simulation and Design

    DTIC Science & Technology

    2016-08-03

    [Report documentation boilerplate removed; only abstract fragments are recoverable: extension of the Lorenz-Mie-Debye formalism for the Maxwell equations to the time domain, scattering from a perfectly conducting half-space, simulation of layered and microstructured metamaterials, and analysis of time-domain integral equations.]

  10. Improving the performance of a filling line based on simulation

    NASA Astrophysics Data System (ADS)

    Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.

    2016-08-01

    The paper describes a method of improving the performance of a filling line based on simulation. This study concerns a production line located in a manufacturing centre of an FMCG company. A discrete event simulation model was built using data provided by a maintenance data acquisition system. Two types of failures were identified in the system and were approximated using continuous statistical distributions. The model was validated taking into consideration line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation. The outcomes of the simulations were the basis of a financial analysis. NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.
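
    The financial screening step can be illustrated with a standard NPV/ROI calculation; the cash-flow figures below are placeholders, not values from the study:

      def npv(cash_flows, discount_rate):
          # cash_flows[0] is the (negative) initial investment of the improvement
          # scenario; later entries are yearly net benefits after tax and depreciation.
          return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

      def roi(total_gain, investment):
          return (total_gain - investment) / investment

      # Placeholder example: invest 100k, save 40k per year for four years, 8% rate.
      print(npv([-100_000, 40_000, 40_000, 40_000, 40_000], 0.08))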

  11. Development of a numerical simulator of human swallowing using a particle method (part 1. Preliminary evaluation of the possibility of numerical simulation using the MPS method).

    PubMed

    Kamiya, Tetsu; Toyama, Yoshio; Michiwaki, Yukihiro; Kikuchi, Takahiro

    2013-01-01

    The aim of the present study was to evaluate the possibility of numerical simulation of the swallowing process using a moving particle simulation (MPS) method, which defined the food bolus as a number of particles in a fluid, a solid, and an elastic body. In order to verify the accuracy of the simulation results, a simple water bolus falling model was solved using the three-dimensional (3D) MPS method. We also examined the simplified swallowing simulation using a two-dimensional (2D) MPS method to confirm the interactions between the liquid, solid, elastic bolus, and organ structure. In a comparison of the 3D MPS simulation and experiments, the falling time of the water bolus and the configuration of the interface between the liquid and air corresponded exactly to the experimental measurements and the visualization images. The results showed that the accuracy of the 3D MPS simulation was qualitatively high for the simple falling model. Based on the results of the simplified swallowing simulation using the 2D MPS method, each bolus, defined as a liquid, solid, and elastic body, exhibited different behavior when the organs were transformed forcedly. This confirmed that the MPS method could be used for coupled simulations of the fluid, the solid, the elastic body, and the organ structures. The results suggested that the MPS method could be used to develop a numerical simulator of the swallowing process.

  12. Microcanonical ensemble simulation method applied to discrete potential fluids.

    PubMed

    Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro

    2015-09-01

    In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002)], to the case of simple fluids. An algorithm is developed that measures the transition-rate probabilities between macroscopic states; its advantage with respect to conventional Monte Carlo NVT (MC-NVT) simulations is that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of the application of the Hüller-Pleimling method to discrete-potential systems, which are based on a generalization of the SW and square-shoulder fluid properties.

  13. High Performance Parallel Methods for Space Weather Simulations

    NASA Technical Reports Server (NTRS)

    Hunter, Paul (Technical Monitor); Gombosi, Tamas I.

    2003-01-01

    This is the final report of our NASA AISRP grant entitled 'High Performance Parallel Methods for Space Weather Simulations'. The main thrust of the proposal was to achieve significant progress towards new high-performance methods which would greatly accelerate global MHD simulations and eventually make it possible to develop first-principles based space weather simulations which run much faster than real time. We are pleased to report that with the help of this award we made major progress in this direction and developed the first parallel implicit global MHD code with adaptive mesh refinement. The main limitation of all earlier global space physics MHD codes was the explicit time stepping algorithm. Explicit time steps are limited by the Courant-Friedrichs-Lewy (CFL) condition, which essentially ensures that no information travels more than a cell size during a time step. This condition represents a non-linear penalty for highly resolved calculations, since finer grid resolution (and consequently smaller computational cells) not only results in more computational cells, but also in smaller time steps.
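
    The CFL restriction mentioned above can be stated in one line; the function below is a generic illustration, not code from the reported MHD solver:

      def explicit_time_step(dx, v_max, courant=0.8):
          # CFL condition: v_max * dt / dx <= C, so the admissible explicit step is
          # dt = C * dx / v_max. Refining the mesh shrinks dx and therefore dt,
          # which is the non-linear cost penalty described in the report.
          return courant * dx / v_max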

  14. Modeling and simulation of wheeled polishing method for aspheric surface

    NASA Astrophysics Data System (ADS)

    Zong, Liang; Xie, Bin; Wang, Ansu

    2016-10-01

    This paper describes a new polishing tool for the polishing process of aspheric lenses: the wheeled polishing tool, equipped with an elastic polishing wheel that automatically adapts to the surface shape of the lens, is used to obtain a high-precision surface based on the grinding action between the polishing wheel and the workpiece. In this paper, a 3D model of the polishing wheel structure is established using finite element analysis software. The distribution of the contact pressure between the polishing wheel and the optical element is analyzed, and the contact pressure distribution function is deduced using the least squares method based on Hertz contact theory. The removal functions are deduced under different loading conditions based on the Preston hypothesis. Finally, the dwell time function is calculated. The simulation results show that the removal function and dwell time function are suitable for the wheeled polishing system, thus establishing a theoretical foundation for future research.
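
    The Preston hypothesis used for the removal functions has the familiar form dz/dt = K_p * P * V; a minimal sketch follows (the coefficient and loading values are assumptions, not the paper's fitted data):

      def preston_removal_rate(k_p, pressure, velocity):
          # Preston hypothesis: removal rate proportional to contact pressure P
          # and relative velocity V, with an empirically fitted coefficient K_p.
          return k_p * pressure * velocity

      def removed_depth(k_p, pressure, velocity, dwell_time):
          # Depth removed at a surface point, assuming P and V constant over the dwell.
          return preston_removal_rate(k_p, pressure, velocity) * dwell_time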

  15. Experiential Learning Methods, Simulation Complexity and Their Effects on Different Target Groups

    ERIC Educational Resources Information Center

    Kluge, Annette

    2007-01-01

    This article empirically supports the thesis that there is no clear and unequivocal argument in favor of simulations and experiential learning. Instead the effectiveness of simulation-based learning methods depends strongly on the target group's characteristics. Two methods of supporting experiential learning are compared in two different complex…

  16. Development of a numerical simulator of human swallowing using a particle method (Part 2. Evaluation of the accuracy of a swallowing simulation using the 3D MPS method).

    PubMed

    Kamiya, Tetsu; Toyama, Yoshio; Michiwaki, Yukihiro; Kikuchi, Takahiro

    2013-01-01

    The aim of this study was to develop and evaluate the accuracy of a three-dimensional (3D) numerical simulator of the swallowing action using the 3D moving particle simulation (MPS) method, which can simulate splashes and rapid changes in the free surfaces of food materials. The 3D numerical simulator of the swallowing action using the MPS method was developed based on accurate organ models, which contain forced transformation by elapsed time. The validity of the simulation results was evaluated qualitatively based on comparisons with videofluorography (VF) images. To evaluate the validity of the simulation results quantitatively, the normalized brightness around the vallecula was used as the evaluation parameter. The positions and configurations of the food bolus during each time step were compared in the simulated and VF images. The simulation results corresponded to the VF images during each time step in the visual evaluations, which suggested that the simulation was qualitatively correct. The normalized brightness of the simulated and VF images corresponded exactly at all time steps. This showed that the simulation results, which contained information on changes in the organs and the food bolus, were numerically correct. Based on these results, the accuracy of this simulator was high and it could be used to study the mechanism of disorders that cause dysphagia. This simulator also calculated the shear rate at a specific point and time for Newtonian and non-Newtonian fluids. We think that the information provided by this simulator could be useful for the development of food products and medicines, and in rehabilitation facilities.

  17. Knowledge-based simulation using object-oriented programming

    NASA Technical Reports Server (NTRS)

    Sidoran, Karen M.

    1993-01-01

    Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.
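
    The object-oriented view described above, a simulation entity carrying factual knowledge as attributes and behavioral knowledge as methods, can be sketched as follows (the Radar entity and event fields are hypothetical and not taken from RASE):

      class Radar:
          """Discrete-event simulation entity: attributes hold factual knowledge,
          methods hold behavioral knowledge (how the entity reacts to events)."""

          def __init__(self, position, detection_range):
              self.position = position                # factual knowledge
              self.detection_range = detection_range  # factual knowledge

          def handle_event(self, event, now, schedule):
              # Behavioral knowledge: rules applied when an event is delivered.
              if event["type"] == "target_enters" and event["range"] <= self.detection_range:
                  schedule(now + 1.0, {"type": "track_update", "source": self})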

  18. A Multiscale simulation method for ice crystallization and frost growth

    NASA Astrophysics Data System (ADS)

    Yazdani, Miad

    2015-11-01

    Formation of ice crystals and frost is associated with physical mechanisms at immensely separated scales. The primary focus of this work is on crystallization and frost growth on a cold plate exposed to humid air. Nucleation is addressed through a Gibbs energy barrier method based on the interfacial energy of the crystal and condensate as well as the ambient and surface conditions. The supercooled crystallization of ice crystals is simulated through a phase-field based method in which the variation of the degree of surface tension anisotropy and its mode in the fluid medium are represented statistically. In addition, the mesoscale width of the interface is quantified asymptotically, which serves as a length-scale criterion in a so-called "Adaptive" AMR (AAMR) algorithm to tie the grid resolution at the interface to local physical properties. Moreover, due to the exposure of the crystal to humid air, a secondary non-equilibrium growth process contributes to the formation of frost at the tip of the crystal. A Monte Carlo implementation of the Diffusion Limited Aggregation method addresses the formation of frost during the crystallization. Finally, a virtual boundary based Immersed Boundary Method (IBM) is adapted to address the interaction of the ice crystal with convective air during its growth.

  19. Spectral methods for multiscale plasma-physics simulations

    NASA Astrophysics Data System (ADS)

    Delzanno, Gian Luca; Manzini, Gianmarco; Vencels, Juris; Markidis, Stefano; Roytershteyn, Vadim

    2016-10-01

    In this talk, we present the SpectralPlasmaSolver (SPS) simulation method for the numerical approximation of the Vlasov-Maxwell equations. SPS either uses spectral methods both in physical and velocity space or combines spectral methods for the velocity space and a Discontinuous Galerkin (DG) discretization in space. The spectral methods are based on generalized Hermite's functions or Legendre polynomials, thus resulting in a time-dependent hyperbolic system for the spectral coefficients. The DG method is applied to numerically solve this system after a characteristic decomposition that properly ensures the upwinding in the scheme. This numerical approach can be seen as a generalization of the method of moment expansion and makes it possible to incorporate microscopic kinetic effects in a macroscale fluid-like behavior. The numerical approximation error for a given computational cost and the computational costs for a prescribed accuracy are orders of magnitude less than those provided by the standard PIC method. Moreover, conservation of physical quantities like mass, momentum, and energy can be proved theoretically. Finally, numerical examples are shown to prove the effectiveness of the approach.

  20. Simulations of Ground and Space-Based Oxygen Atom Experiments

    NASA Technical Reports Server (NTRS)

    Finchum, A. (Technical Monitor); Cline, J. A.; Minton, T. K.; Braunstein, M.

    2003-01-01

    A low-earth orbit (LEO) materials erosion scenario and the ground-based experiment designed to simulate it are compared using the direct-simulation Monte Carlo (DSMC) method. The DSMC model provides a detailed description of the interactions between the hyperthermal gas flow and a normally oriented flat plate for each case. We find that while the general characteristics of the LEO exposure are represented in the ground-based experiment, multi-collision effects can potentially alter the impact energy and directionality of the impinging molecules in the ground-based experiment. Multi-collision phenomena also affect downstream flux measurements.

  1. The frontal method in hydrodynamics simulations

    USGS Publications Warehouse

    Walters, R.A.

    1980-01-01

    The frontal solution method has proven to be an effective means of solving the matrix equations resulting from the application of the finite element method to a variety of problems. In this study, several versions of the frontal method were compared in efficiency for several hydrodynamics problems. Three basic modifications were shown to be of value: 1. Elimination of equations with boundary conditions beforehand, 2. Modification of the pivoting procedures to allow dynamic management of the equation size, and 3. Storage of the eliminated equations in a vector. These modifications are sufficiently general to be applied to other classes of problems. © 1980.

  2. Simulation reduction using the Taguchi method

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Lautenschlager, Ume; Erikstad, Stein Owe; Allen, Janet K.

    1993-01-01

    A large amount of engineering effort is consumed in conducting experiments to obtain information needed for making design decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and having high-quality products. The principal focus of this project is to develop and implement applications of Taguchi's quality engineering techniques. In particular, we show how these techniques are applied to reduce the number of experiments for trajectory simulation of the LifeSat space vehicle. Orthogonal arrays are used to study many parameters simultaneously with a minimum of time and resources. Taguchi's signal-to-noise ratio is employed to measure quality. A compromise Decision Support Problem and Robust Design are applied to demonstrate how quality is designed into a product in the early stages of design.
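
    Taguchi's signal-to-noise ratios have standard textbook forms; two are sketched below (which form the LifeSat study used is not stated here, so treat these as generic definitions):

      import math

      def snr_smaller_the_better(responses):
          # SNR = -10 log10( mean(y^2) ); a larger SNR indicates a more robust setting.
          n = len(responses)
          return -10.0 * math.log10(sum(y * y for y in responses) / n)

      def snr_larger_the_better(responses):
          # SNR = -10 log10( mean(1/y^2) ).
          n = len(responses)
          return -10.0 * math.log10(sum(1.0 / (y * y) for y in responses) / n)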

  3. Multinomial Tau-Leaping Method for Stochastic Kinetic Simulations

    SciTech Connect

    Pettigrew, Michel F.; Resat, Haluk

    2007-02-28

    We introduce the multinomial tau-leaping (MtL) method, an improved version of the binomial tau-leaping method, for general reaction networks. Improvements in efficiency are achieved in several ways. Firstly, tau-leaping steps are determined simply and efficiently using a priori information. Secondly, networks are partitioned into closed groups of reactions and corresponding reactants in which no group reactant or reaction is found in any other group. Thirdly, product formation is factored into upper bound estimation of the number of times a particular reaction occurs. Together, these features allow for larger time steps where the numbers of reactions occurring simultaneously in a multi-channel manner are estimated accurately using a multinomial distribution. Using a wide range of test case problems of scientific and practical interest involving cellular processes, such as epidermal growth factor receptor signaling and a lactose operon model incorporating gene transcription and translation, we show that tau-leaping based methods like the MtL algorithm can significantly reduce the number of simulation steps, thus increasing the numerical efficiency over the exact stochastic simulation algorithm by orders of magnitude. Furthermore, the simultaneous multi-channel representation capability of the MtL algorithm makes it a candidate for FPGA implementation or for parallelization in parallel computing environments.
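
    A simplified illustration of the multinomial idea, drawing the firings of a group of reaction channels in one leap with the total capped by reactant availability (this omits the paper's grouping and bound-estimation details):

      import numpy as np

      rng = np.random.default_rng(0)

      def multinomial_leap(propensities, tau, n_max):
          # Total number of events in the leap, capped by the upper bound n_max
          # implied by reactant availability; the events are then apportioned
          # among the channels in proportion to their propensities.
          a0 = float(sum(propensities))
          if a0 == 0.0:
              return [0] * len(propensities)
          n_events = min(rng.poisson(a0 * tau), n_max)
          probabilities = [a / a0 for a in propensities]
          return list(rng.multinomial(n_events, probabilities))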

  4. Simulation of the 2-dimensional Drude’s model using molecular dynamics method

    SciTech Connect

    Naa, Christian Fredy; Amin, Aisyah; Ramli,; Suprijadi,; Djamal, Mitra; Wahyoedi, Seramika Ari; Viridi, Sparisoma

    2015-04-16

    In this paper, we report the results of a simulation of electronic conduction in solids. The simulation is based on the Drude model and applies the molecular dynamics (MD) method, using the fifth-order predictor-corrector algorithm. A formula for the electrical conductivity as a function of lattice length and ion diameter, τ(L, d), can be obtained empirically from the simulation results.
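
    For context, the textbook Drude expression for the DC conductivity is sigma = n e^2 tau / m; the helper below evaluates it and is not the paper's empirical tau(L, d) fit:

      def drude_conductivity(n_carriers, tau, e=1.602e-19, m=9.109e-31):
          # n_carriers: carrier density (1/m^3); tau: mean time between collisions (s).
          # Returns the DC conductivity in S/m according to the classical Drude model.
          return n_carriers * e ** 2 * tau / m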

  5. Simulation of 3D tumor cell growth using nonlinear finite element method.

    PubMed

    Dong, Shoubing; Yan, Yannan; Tang, Liqun; Meng, Junping; Jiang, Yi

    2016-01-01

    We propose a novel parallel computing framework for a nonlinear finite element method (FEM)-based cell model and apply it to simulate avascular tumor growth. We derive computation formulas to simplify the simulation and design the basic algorithms. As the number of tumor cell proliferation generations increases, the FEM elements may become larger and more distorted. We therefore describe a remeshing and refinement procedure for distorted or overly large finite elements, and a parallel implementation based on the Message Passing Interface, to improve the accuracy and efficiency of the simulation. We demonstrate the feasibility and effectiveness of the FEM model and the parallelization methods in simulations of early tumor growth.

  6. Selecting magnet laminations recipes using the method of simulated annealing

    SciTech Connect

    Russell, A.D.; Baiod, R.; Brown, B.C.

    1997-05-01

    The Fermilab Main Injector project is building 344 dipoles using more than 7000 tons of steel. There were significant run-to-run variations in the magnetic properties of the steel. Differences in stress relief in the steel after stamping resulted in variations of gap height. To minimize magnet-to-magnet strength and field shape variations, the laminations were shuffled based on the available magnetic and mechanical data and assigned to magnets using a computer program based on the method of simulated annealing. The lamination sets selected by the program have produced magnets which easily satisfy the design requirements. This paper discusses the observed gap variations, the program structure, and the strength uniformity results for the magnets produced.
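
    A generic simulated annealing loop of the kind described (the cost and neighbor functions, e.g. swapping two laminations between magnets, are illustrative and not the Fermilab program):

      import math, random

      def anneal(state, cost, neighbor, t0=1.0, cooling=0.995, steps=20000):
          # state: an assignment of laminations to magnets; cost: magnet-to-magnet
          # strength/field-shape spread; neighbor: a small random modification.
          current, current_cost, t = state, cost(state), t0
          for _ in range(steps):
              candidate = neighbor(current)
              delta = cost(candidate) - current_cost
              # Accept improvements always, and worse moves with Boltzmann probability.
              if delta < 0 or random.random() < math.exp(-delta / t):
                  current, current_cost = candidate, current_cost + delta
              t *= cooling  # geometric cooling schedule
          return current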

  7. Development of an Agent-Based Model (ABM) to Simulate the Immune System and Integration of a Regression Method to Estimate the Key ABM Parameters by Fitting the Experimental Data.

    PubMed

    Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le

    2015-01-01

    Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for ABM to estimate key parameters of the model by incorporating experimental data, whereas the differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It can combine the advantages of ABM and DE by employing ABM to mimic the multi-scale immune system with various phenotypes and types of cells as well as using the input and output of ABM to build up the Loess regression for key parameter estimation. Next, we employed the greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set and used ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer the key parameters like the DE model. Therefore, this study innovatively developed a complex system development mechanism that could simulate the complicated immune system in detail like ABM and validate the reliability and efficiency of the model, like DE, by fitting the experimental data.

  8. Development of an Agent-Based Model (ABM) to Simulate the Immune System and Integration of a Regression Method to Estimate the Key ABM Parameters by Fitting the Experimental Data

    PubMed Central

    Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le

    2015-01-01

    Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for ABM to estimate key parameters of the model by incorporating experimental data, whereas the differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It can combine the advantages of ABM and DE by employing ABM to mimic the multi-scale immune system with various phenotypes and types of cells as well as using the input and output of ABM to build up the Loess regression for key parameter estimation. Next, we employed the greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set and used ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer the key parameters like the DE model. Therefore, this study innovatively developed a complex system development mechanism that could simulate the complicated immune system in detail like ABM and validate the reliability and efficiency of the model, like DE, by fitting the experimental data. PMID:26535589

  9. Situating Computer Simulation Professional Development: Does It Promote Inquiry-Based Simulation Use?

    ERIC Educational Resources Information Center

    Gonczi, Amanda L.; Maeng, Jennifer L.; Bell, Randy L.; Whitworth, Brooke A.

    2016-01-01

    This mixed-methods study sought to identify professional development implementation variables that may influence participant (a) adoption of simulations, and (b) use for inquiry-based science instruction. Two groups (Cohort 1, N = 52; Cohort 2, N = 104) received different professional development. Cohort 1 was focused on Web site use mechanics.…

  10. Simulation of secondary fault shear displacements - method and application

    NASA Astrophysics Data System (ADS)

    Fälth, Billy; Hökmark, Harald; Lund, Björn; Mai, P. Martin; Munier, Raymond

    2014-05-01

    We present an earthquake simulation method to calculate dynamically and statically induced shear displacements on faults near a large earthquake. Our results are aimed at improved safety assessment of underground waste storage facilities, e.g. a nuclear waste repository. For our simulations, we use the distinct element code 3DEC. We benchmark 3DEC by running an earthquake simulation and then compare the displacement waveforms at a number of surface receivers with the corresponding results obtained from the COMPSYN code package. The benchmark test shows a good agreement in terms of both phase and amplitude. In our application to a potential earthquake near a storage facility, we use a model with a pre-defined earthquake fault plane (primary fault) surrounded by numerous smaller discontinuities (target fractures) representing faults in which shear movements may be induced by the earthquake. The primary fault and the target fractures are embedded in an elastic medium. Initial stresses are applied and the fault rupture mechanism is simulated through a programmed reduction of the primary fault shear strength, which is initiated at a pre-defined hypocenter. The rupture is propagated at a typical rupture propagation speed and arrested when it reaches the fault plane boundaries. The primary fault residual strength properties are uniform over the fault plane. The method allows for calculation of target fracture shear movements induced by static stress redistribution as well as by dynamic effects. We apply the earthquake simulation method in a model of the Forsmark nuclear waste repository site in Sweden with rock mass properties, in situ stresses and fault geometries according to the description of the site established by the Swedish Nuclear Fuel and Waste Management Co (SKB). The target fracture orientations are based on the Discrete Fracture Network model developed for the site. With parameter values set to provide reasonable upper bound estimates of target fracture

  11. Accelerated GPU based SPECT Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency

  12. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m) Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational

  13. Transcending Competency Testing in Hospital-Based Simulation.

    PubMed

    Lassche, Madeline; Wilson, Barbara

    2016-02-01

    Simulation is a frequently used method for training students in health care professions and has recently gained acceptance in acute care hospital settings for use in educational programs and competency testing. Although hospital-based simulation is currently limited primarily to use in skills acquisition, expansion of the use of simulation via a modified Quality Health Outcomes Model to address systems factors such as the physical environment and human factors such as fatigue, reliance on memory, and reliance on vigilance could drive system-wide changes. Simulation is an expensive resource and should not be limited to use for education and competency testing. Well-developed, peer-reviewed simulations can be used for environmental factors, human factors, and interprofessional education to improve patients' outcomes and drive system-wide change for quality improvement initiatives.

  14. Constraint-based soft tissue simulation for virtual surgical training.

    PubMed

    Tang, Wen; Wan, Tao Ruan

    2014-11-01

    Most surgical simulators employ a linear elastic model to simulate soft tissue material properties due to its computational efficiency and simplicity. However, soft tissues often have elaborate nonlinear material characteristics. Most prominently, soft tissues are soft and compliant to small strains, but after initial deformations they are very resistant to further deformations even under large forces. This material characteristic, referred to as nonlinear material incompliance, is computationally expensive and numerically difficult to simulate. This paper presents a constraint-based finite-element algorithm to simulate the nonlinear incompliant tissue materials efficiently for interactive simulation applications such as virtual surgery. Firstly, the proposed algorithm models the material stiffness behavior of soft tissues with a set of 3-D strain limit constraints on deformation strain tensors. By enforcing a large number of geometric constraints to achieve the material stiffness, the algorithm reduces the task of solving stiff equations of motion with a general numerical solver to iteratively resolving a set of constraints with a nonlinear Gauss-Seidel iterative process. Secondly, as a Gauss-Seidel method processes constraints individually, in order to speed up the global convergence of the large constrained system, a multiresolution hierarchy structure is also used to accelerate the computation significantly, making interactive simulations possible at a high level of detail. Finally, this paper also presents a simple-to-build data acquisition system to validate simulation results with ex vivo tissue measurements. An interactive virtual reality-based simulation system is also demonstrated.

  15. Correlated EEG Signals Simulation Based on Artificial Neural Networks.

    PubMed

    Tomasevic, Nikola M; Neskovic, Aleksandar M; Neskovic, Natasa J

    2016-09-30

    In recent years, simulation of human electroencephalogram (EEG) data has found an important role in the medical domain and neuropsychology. In this paper, a novel approach to simulation of two cross-correlated EEG signals is proposed. The proposed method is based on the principles of artificial neural networks (ANN). In contrast to existing EEG data simulators, the ANN-based approach relies solely on experimentally acquired EEG data. More precisely, measured EEG data were utilized to optimize the simulator, which consisted of two ANN models (each model responsible for generation of one EEG sequence). In order to acquire the EEG recordings, the measurement campaign was carried out on a healthy awake adult having no cognitive, physical or mental load. For the evaluation of the proposed approach, comprehensive quantitative and qualitative statistical analysis was performed considering probability distribution, correlation properties and spectral characteristics of the generated EEG processes. The obtained results clearly indicated satisfactory agreement with the measurement data.

  16. Solution of partial differential equations by agent-based simulation

    NASA Astrophysics Data System (ADS)

    Szilagyi, Miklos N.

    2014-01-01

    The purpose of this short note is to demonstrate that partial differential equations can be quickly solved by agent-based simulation with high accuracy. There is no need for the solution of large systems of algebraic equations. This method is especially useful for quick determination of potential distributions and demonstration purposes in teaching electromagnetism.
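
    One common agent-based reading of this idea is the random-walk estimate of a potential governed by Laplace's equation: each agent wanders on the grid until it hits the boundary, and the average boundary value it reports converges to the potential at the starting point. The sketch below is an illustration, not the note's own code:

      import random

      def potential_at(start, is_boundary, boundary_value, n_agents=2000):
          # Estimate the solution of Laplace's equation at a grid point by releasing
          # n_agents random walkers and averaging the boundary potentials they reach.
          total = 0.0
          for _ in range(n_agents):
              x, y = start
              while not is_boundary(x, y):
                  dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                  x, y = x + dx, y + dy
              total += boundary_value(x, y)
          return total / n_agents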

  17. Multi-pass Monte Carlo simulation method in nuclear transmutations.

    PubMed

    Mateescu, Liviu; Kadambi, N Prasad; Ravindra, Nuggehalli M

    2016-12-01

    Monte Carlo methods, in their direct brute simulation incarnation, bring realistic results if the involved probabilities, be they geometrical or otherwise, remain constant for the duration of the simulation. However, there are physical setups where the evolution of the simulation represents a modification of the simulated system itself. Chief among such evolving simulated systems are the activation/transmutation setups. That is, the simulation starts with a given set of probabilities, which are determined by the geometry of the system, the components and by the microscopic interaction cross-sections. However, the relative weight of the components of the system changes along with the steps of the simulation. A natural measure would be adjusting probabilities after every step of the simulation. On the other hand, the physical system typically has a number of components of the order of Avogadro's number, usually 10^25 or 10^26 members. A simulation step changes the characteristics for just a few of these members; a probability will therefore shift by a quantity of 1/10^25. Such a change cannot be accounted for within a simulation, because the simulation would then need at least 10^28 steps in order to have some significance. This is not feasible, of course. For our computing devices, a simulation of one million steps is comfortable, but a further order of magnitude becomes too big a stretch for the computing resources. We propose here a method of dealing with the changing probabilities, leading to increased precision. This method is intended as a fast approximating approach, and also as a simple introduction (for the benefit of students) to the very branched subject of Monte Carlo simulations vis-à-vis nuclear reactors.

  18. Numerical Methods and Simulations of Complex Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Brady, Peter

    Multiphase flows are an important part of many natural and technological phenomena such as ocean-air coupling (which is important for climate modeling) and the atomization of liquid fuel jets in combustion engines. The unique challenges of multiphase flow often make analytical solutions to the governing equations impossible and experimental investigations very difficult. Thus, high-fidelity numerical simulations can play a pivotal role in understanding these systems. This dissertation describes numerical methods developed for complex multiphase flows and the simulations performed using these methods. First, the issue of multiphase code verification is addressed. Code verification answers the question "Is this code solving the equations correctly?" The method of manufactured solutions (MMS) is a procedure for generating exact benchmark solutions which can test the most general capabilities of a code. The chief obstacle to applying MMS to multiphase flow lies in the discontinuous nature of the material properties at the interface. An extension of the MMS procedure to multiphase flow is presented, using an adaptive marching tetrahedron style algorithm to compute the source terms near the interface. Guidelines for the use of the MMS to help locate coding mistakes are also detailed. Three multiphase systems are then investigated: (1) the thermocapillary motion of three-dimensional and axisymmetric drops in a confined apparatus, (2) the flow of two immiscible fluids completely filling an enclosed cylinder and driven by the rotation of the bottom endwall, and (3) the atomization of a single drop subjected to a high shear turbulent flow. The systems are simulated numerically by solving the full multiphase Navier-Stokes equations coupled to the various equations of state and a level set interface tracking scheme based on the refined level set grid method. The codes have been parallelized using MPI in order to take advantage of today's very large parallel computational
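
    The MMS idea can be shown on a single-phase 1D Poisson problem: choose an exact solution, derive the matching source term, and confirm the discretization error falls at the expected rate. This toy example does not include the interface-source treatment that the dissertation adds for multiphase flow:

      import numpy as np

      def mms_poisson_error(n):
          # Manufactured solution u(x) = sin(pi x) on [0, 1], so u'' = -pi^2 sin(pi x).
          x = np.linspace(0.0, 1.0, n + 1)
          h = x[1] - x[0]
          u_exact = np.sin(np.pi * x)
          f = -np.pi ** 2 * np.sin(np.pi * x)
          # Second-order central differences for the interior unknowns (Dirichlet BCs).
          A = (np.diag(-2.0 * np.ones(n - 1)) +
               np.diag(np.ones(n - 2), 1) +
               np.diag(np.ones(n - 2), -1)) / h ** 2
          rhs = f[1:-1].copy()
          rhs[0] -= u_exact[0] / h ** 2
          rhs[-1] -= u_exact[-1] / h ** 2
          u = np.linalg.solve(A, rhs)
          return np.max(np.abs(u - u_exact[1:-1]))

      # Halving h should cut the error by about a factor of four (second-order scheme).
      print(mms_poisson_error(32), mms_poisson_error(64))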

  19. Daylighting simulation: methods, algorithms, and resources

    SciTech Connect

    Carroll, William L.

    1999-12-01

    This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: Geometry; Light modeling; Characterization of the natural illumination resource; Materials and component properties and representations; and Usability issues (interfaces, interoperability, representation of analysis results, etc). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: The commercial computer graphics community (commerce, entertainment); The lighting industry; Architectural rendering and visualization for projects; and Academia: Course materials, research. This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of

  20. Kinetic Method for Hydrogen-Deuterium-Tritium Mixture Distillation Simulation

    SciTech Connect

    Sazonov, A.B.; Kagramanov, Z.G.; Magomedbekov, E.P.

    2005-07-15

    Simulation of hydrogen distillation plants requires mathematical procedures suitable for multicomponent systems. In most present-day simulation methods a distillation column is assumed to be composed of theoretical stages, or plates; however, for a multicomponent mixture a theoretical plate does not exist. An alternative kinetic method of simulation is presented in this work. In this method a system of mass-transfer differential equations is solved numerically, with mass-transfer coefficients estimated from experimental results and empirical equations. The developed method allows calculation of the steady state of a distillation column as well as any non-steady state when initial conditions are given. The results for steady states are compared with those obtained via the Thiele-Geddes theoretical stage technique, and the necessity of using the kinetic method is demonstrated. Examples of a column startup period and of periodic distillation simulations are shown as well.

  1. Simulations of 6-DOF Motion with a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographic-derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.

  2. Parallel Performance Optimization of the Direct Simulation Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gao, Da; Zhang, Chonglin; Schwartzentruber, Thomas

    2009-11-01

    Although the direct simulation Monte Carlo (DSMC) particle method is more computationally intensive compared to continuum methods, it is accurate for conditions ranging from continuum to free-molecular, accurate in highly non-equilibrium flow regions, and holds potential for incorporating advanced molecular-based models for gas-phase and gas-surface interactions. As available computer resources continue their rapid growth, the DSMC method is continually being applied to increasingly complex flow problems. Although processor clock speed continues to increase, a trend of increasing multi-core-per-node parallel architectures is emerging. To effectively utilize such current and future parallel computing systems, a combined shared/distributed memory parallel implementation (using both Open Multi-Processing (OpenMP) and Message Passing Interface (MPI)) of the DSMC method is under development. The parallel implementation of a new state-of-the-art 3D DSMC code employing an embedded 3-level Cartesian mesh will be outlined. The presentation will focus on performance optimization strategies for DSMC, which includes, but is not limited to, modified algorithm designs, practical code-tuning techniques, and parallel performance optimization. Specifically, key issues important to the DSMC shared memory (OpenMP) parallel performance are identified as (1) granularity (2) load balancing (3) locality and (4) synchronization. Challenges and solutions associated with these issues as they pertain to the DSMC method will be discussed.

  3. Microcomputer based software for biodynamic simulation

    NASA Technical Reports Server (NTRS)

    Rangarajan, N.; Shams, T.

    1993-01-01

    This paper presents a description of a microcomputer-based software package, called DYNAMAN, which has been developed to allow an analyst to simulate the dynamics of a system consisting of a number of mass segments linked by joints. One primary application is in predicting the motion of a human occupant in a vehicle under the influence of a variety of external forces, especially those generated during a crash event. Extensive use of a graphical user interface has been made to aid the user in setting up the input data for the simulation and in viewing the results from the simulation. Among its many applications, it has been successfully used in the prototype design of a moving seat that aids in occupant protection during a crash, by aircraft designers in evaluating occupant injury in airplane crashes, and by users in accident reconstruction for reconstructing the motion of the occupant and correlating the impacts with observed injuries.

  4. Simulation Improves Resident Performance in Catheter-Based Intervention

    PubMed Central

    Chaer, Rabih A.; DeRubertis, Brian G.; Lin, Stephanie C.; Bush, Harry L.; Karwowski, John K.; Birk, Daniel; Morrissey, Nicholas J.; Faries, Peter L.; McKinsey, James F.; Kent, K Craig

    2006-01-01

    Objectives: Surgical simulation has been shown to enhance the training of general surgery residents. Since catheter-based techniques have become an important part of the vascular surgeon's armamentarium, we explored whether simulation might impact the acquisition of catheter skills by surgical residents. Methods: Twenty general surgery residents received didactic training in the techniques of catheter intervention. Residents were then randomized with 10 receiving additional training with the Procedicus, computer-based, haptic simulator. All 20 residents then participated in 2 consecutive mentored catheter-based interventions for lower extremity occlusive disease in an OR/angiography suite. Resident performance was graded by attending surgeons blinded to the resident's training status, using 18 procedural steps as well as a global rating scale. Results: There were no differences between the 2 resident groups with regard to demographics or scores on a visuospatial test administered at study outset. Overall, residents exposed to simulation scored higher than controls during the first angio/OR intervention: procedural steps (simulation/control) (50 ± 6 vs. 33 ± 9, P = 0.0015); global rating scale (30 ± 7 vs. 19 ± 5, P = 0.0052). The advantage provided by simulator training persisted with the second intervention (53 ± 6 vs. 36 ± 7, P = 0.0006); global rating scale (33 ± 6 vs. 21 ± 6, P = 0.0015). Moreover, simulation training, particularly for the second intervention, led to enhancement in almost all of the individual measures of performance. Conclusion: Simulation is a valid tool for instructing surgical residents and fellows in basic endovascular techniques and should be incorporated into surgical training programs. Moreover, simulators may also benefit the large number of vascular surgeons who seek retraining in catheter-based intervention. PMID:16926560

  5. Space-based radar array system simulation and validation

    NASA Astrophysics Data System (ADS)

    Schuman, H. K.; Pflug, D. R.; Thompson, L. D.

    1981-08-01

    The present status of the space-based radar phased array lens simulator is discussed. Huge arrays of thin wire radiating elements on either side of a ground screen are modeled by the simulator. Also modeled are amplitude and phase adjust modules connecting radiating elements between arrays, feedline to radiator mismatch, and lens warping. A successive approximation method is employed. The first approximation is based on a plane wave expansion (infinite array) moment method especially suited to large array analysis. The first approximation results then facilitate higher approximation computations that account for effects of nonuniform periodicities (lens edge, lens section interfaces, failed modules, etc.). The programming to date is discussed via flow diagrams. An improved theory is presented in a consolidated development. The use of the simulator is illustrated by computing active impedances and radiating element current distributions for infinite planar arrays of straight and 'swept back' dipoles (arms inclined with respect to the array plane) with feedline scattering taken into account.

  6. Dual Energy Method for Breast Imaging: A Simulation Study.

    PubMed

    Koukou, V; Martini, N; Michail, C; Sotiropoulou, P; Fountzoula, C; Kalyvas, N; Kandarakis, I; Nikiforidis, G; Fountos, G

    2015-01-01

    Dual energy methods can suppress the contrast between adipose and glandular tissues in the breast and therefore enhance the visibility of calcifications. In this study, a dual energy method based on analytical modeling was developed for the detection of the minimum microcalcification thickness. To this aim, a modified radiographic X-ray unit was considered, in order to overcome the limited kVp range of mammographic units used in previous DE studies, combined with a high resolution CMOS sensor (pixel size of 22.5 μm) for improved resolution. Various filter materials were examined based on their K-absorption edge. Hydroxyapatite (HAp) was used to simulate microcalcifications. The contrast-to-noise ratio (CNRtc) of the subtracted images was calculated for both monoenergetic and polyenergetic X-ray beams. The optimum monoenergetic pair was 23/58 keV for the low and high energy, respectively, resulting in a minimum detectable microcalcification thickness of 100 μm. In the polyenergetic X-ray study, the optimal spectral combination was 40/70 kVp filtered with 100 μm cadmium and 1000 μm copper, respectively. In this case, the minimum detectable microcalcification thickness was 150 μm. The proposed dual energy method provides improved microcalcification detectability in breast imaging with mean glandular dose values within acceptable levels.
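
    A standard weighted log-subtraction, the usual starting point for dual-energy calcification imaging (the paper's analytical model is more detailed; the weight and ROI handling below are generic illustrations):

      import numpy as np

      def dual_energy_subtraction(low_kv_image, high_kv_image, w):
          # Weighted log-subtraction: w is tuned so the adipose/glandular contrast
          # cancels, leaving the calcification signal in the subtracted image.
          return np.log(high_kv_image) - w * np.log(low_kv_image)

      def contrast_to_noise_ratio(signal_roi, background_roi):
          # CNR of the subtracted image, as used to judge detectability.
          return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()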

  7. Wavelet based Simulation of Reservoir Flow

    NASA Astrophysics Data System (ADS)

    Siddiqi, A. H.; Verma, A. K.; Noor-E-Zahra, Noor-E.-Zahra; Chandiok, Ashish; Hasan, A.

    2009-07-01

    Petroleum reservoirs consist of hydrocarbons and other chemicals trapped in the pores of a rock. The exploration and production of hydrocarbon reservoirs is still the most important technology to develop natural energy resources. Therefore, fluid flow simulators play a key role in helping oil companies. In fact, simulation is the most important tool to model changes in a reservoir over time. The main problem in petroleum reservoir simulation is to model the displacement of one fluid by another within a porous medium. A typical problem is characterized by the injection of a wetting fluid, for example water, into the reservoir at a particular location, displacing the non-wetting fluid, for example oil, which is extracted or produced at another location. The Buckley-Leverett equation [1] models this process, and its numerical simulation and visualization is of paramount importance. There are several numerical methods applied for the numerical solution of partial differential equations modeling real-world problems. We review in this paper the numerical solution of the Buckley-Leverett equation for flat and non-flat structures with special focus on the wavelet method. We also indicate a few new avenues for further research.
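
    As a concrete reference point for the equation discussed above, the following Python sketch solves the 1D Buckley-Leverett saturation equation with a first-order upwind finite-difference scheme; it is not the wavelet method of the paper, and the mobility ratio, grid, and time step are assumed values chosen only to keep the explicit scheme stable.

    ```python
    import numpy as np

    def buckley_leverett(nx=200, nt=400, length=1.0, t_end=0.4, M=0.5):
        """Water saturation S(x) after injecting water at the left boundary.

        Solves dS/dt + d f(S)/dx = 0 with fractional flow
        f(S) = S^2 / (S^2 + M * (1 - S)^2).
        """
        dx, dt = length / nx, t_end / nt
        S = np.zeros(nx)
        S[0] = 1.0                      # injected wetting fluid at the inlet
        f = lambda s: s**2 / (s**2 + M * (1.0 - s)**2)
        for _ in range(nt):
            flux = f(S)
            S[1:] -= dt / dx * (flux[1:] - flux[:-1])   # first-order upwind update
            S[0] = 1.0
        return S

    saturation_profile = buckley_leverett()
    ```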

  8. A web-based virtual lighting simulator

    SciTech Connect

    Papamichael, Konstantinos; Lai, Judy; Fuller, Daniel; Tariq, Tara

    2002-05-06

    This paper is about a web-based "virtual lighting simulator," which is intended to allow architects and lighting designers to quickly assess the effect of key parameters on the daylighting and lighting performance in various space types. The virtual lighting simulator consists of a web-based interface that allows navigation through a large database of images and data, which were generated through parametric lighting simulations. In its current form, the virtual lighting simulator has two main modules, one for daylighting and one for electric lighting. The daylighting module includes images and data for a small office space, varying most key daylighting parameters, such as window size and orientation, glazing type, surface reflectance, sky conditions, time of the year, etc. The electric lighting module includes images and data for five space types (classroom, small office, large open office, warehouse and small retail), varying key lighting parameters, such as the electric lighting system, surface reflectance, dimming/switching, etc. The computed images include perspectives and plans and are displayed in various formats to support qualitative as well as quantitative assessment. The quantitative information is in the form of iso-contour lines superimposed on the images, as well as false color images and statistical information on work plane illuminance. The qualitative information includes images that are adjusted to account for the sensitivity and adaptation of the human eye. The paper also includes a section on the major technical issues and their resolution.

  9. Mathematical modeling and simulation in animal health - Part II: principles, methods, applications, and value of physiologically based pharmacokinetic modeling in veterinary medicine and food safety assessment.

    PubMed

    Lin, Z; Gehring, R; Mochel, J P; Lavé, T; Riviere, J E

    2016-10-01

    This review provides a tutorial for individuals interested in quantitative veterinary pharmacology and toxicology and offers a basis for establishing guidelines for physiologically based pharmacokinetic (PBPK) model development and application in veterinary medicine. This is important as the application of PBPK modeling in veterinary medicine has evolved over the past two decades. PBPK models can be used to predict drug tissue residues and withdrawal times in food-producing animals, to estimate chemical concentrations at the site of action and target organ toxicity to aid risk assessment of environmental contaminants and/or drugs in both domestic animals and wildlife, as well as to help design therapeutic regimens for veterinary drugs. This review provides a comprehensive summary of PBPK modeling principles, model development methodology, and the current applications in veterinary medicine, with a focus on predictions of drug tissue residues and withdrawal times in food-producing animals. The advantages and disadvantages of PBPK modeling compared to other pharmacokinetic modeling approaches (i.e., classical compartmental/noncompartmental modeling, nonlinear mixed-effects modeling, and interspecies allometric scaling) are further presented. The review finally discusses contemporary challenges and our perspectives on model documentation, evaluation criteria, quality improvement, and offers solutions to increase model acceptance and applications in veterinary pharmacology and toxicology.
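
    To make the modeling approach concrete, here is a deliberately small flow-limited PBPK sketch in Python with plasma, liver, and muscle compartments after an intravenous bolus; every parameter value (flows, volumes, partition coefficients, clearance, dose) is a hypothetical placeholder rather than anything taken from the review.

    ```python
    from scipy.integrate import solve_ivp

    Q = {"liver": 90.0, "muscle": 60.0}                  # blood flows (L/h)
    V = {"plasma": 5.0, "liver": 1.8, "muscle": 30.0}    # compartment volumes (L)
    P = {"liver": 4.0, "muscle": 1.5}                    # tissue:plasma partition coefficients
    CL_hepatic = 20.0                                    # hepatic clearance (L/h)

    def pbpk(t, y):
        Cp, Cl, Cm = y                                   # plasma, liver, muscle concentrations
        dCl = (Q["liver"] * (Cp - Cl / P["liver"]) - CL_hepatic * Cl / P["liver"]) / V["liver"]
        dCm = Q["muscle"] * (Cp - Cm / P["muscle"]) / V["muscle"]
        dCp = (Q["liver"] * (Cl / P["liver"] - Cp)
               + Q["muscle"] * (Cm / P["muscle"] - Cp)) / V["plasma"]
        return [dCp, dCl, dCm]

    dose_mg = 100.0
    y0 = [dose_mg / V["plasma"], 0.0, 0.0]               # IV bolus into plasma
    solution = solve_ivp(pbpk, (0.0, 24.0), y0, max_step=0.1)
    ```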

  10. Simulations of infrared atmospheric transmittance based on measured data

    NASA Astrophysics Data System (ADS)

    Song, Fu-yin; Lu, Yuan; Qiao, Ya; Tao, Hui-feng; Tang, Cong; Ling, Yong-shun

    2016-10-01

    There are two common methods to calculate infrared atmospheric transmittance: empirical formulas and professional software. However, empirical formulas produce large deviations, and professional software is complicated to use and difficult to integrate into other infrared simulation systems. Therefore, based on atmospheric data measured in a particular area over many years, this article used the method of molecular single absorption to calculate the absorption coefficients of water vapor and carbon dioxide at different temperatures. Temperatures, pressures, and the resulting scattering coefficients at different altitudes were fitted with analytical formulas for each month. A simulation model of infrared atmospheric transmittance was then built. The simulated results are very close to the accurate results calculated by a user-defined MODTRAN model. The method is easy and convenient to use and has reference value for engineering applications.
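
    The core bookkeeping behind such a model is a Beer-Lambert sum of optical depths over path segments; the short Python sketch below illustrates it with made-up layer coefficients and path lengths, and is not the fitted model described in the abstract.

    ```python
    import numpy as np

    def transmittance(k_h2o, k_co2, k_scatter, path_lengths):
        """Total transmittance along a path split into discrete layers.

        k_* : per-layer extinction coefficients (1/km)
        path_lengths : geometric path length through each layer (km)
        """
        optical_depth = np.sum((k_h2o + k_co2 + k_scatter) * path_lengths)
        return np.exp(-optical_depth)

    tau = transmittance(np.array([0.12, 0.08, 0.05]),   # water vapor absorption
                        np.array([0.03, 0.02, 0.01]),   # carbon dioxide absorption
                        np.array([0.02, 0.015, 0.01]),  # scattering
                        np.array([1.0, 1.0, 2.0]))      # slant path per layer
    ```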

  11. Deployment Simulation Methods for Ultra-Lightweight Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Johnson, Arthur R.

    2003-01-01

    Two dynamic inflation simulation methods are employed for modeling the deployment of folded thin-membrane tubes. The simulations are necessary because ground tests include gravity effects and may poorly represent deployment in space. The two simulation methods are referred to as the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method. They are available in the LS-DYNA nonlinear dynamic finite element code. Both methods are suitable for modeling the interactions between the inflation gas and the thin-membrane tube structures. The CV method only considers the pressure induced by the inflation gas in the simulation, while the ALE method models the actual flow of the inflation gas. Thus, the transient fluid properties at any location within the tube can be predicted by the ALE method. Deployment simulations of three packaged tube models, namely coiled, Z-folded, and telescopically-folded configurations, are performed. Results predicted by both methods for the telescopically-folded configuration are correlated and computational efficiency issues are discussed.

  12. Weak turbulence simulations with the Hermite-Fourier spectral method

    NASA Astrophysics Data System (ADS)

    Vencels, Juris; Delzanno, Gian Luca; Manzini, Gianmarco; Roytershteyn, Vadim; Markidis, Stefano

    2015-11-01

    Recently, a new (transform) method based on a Fourier-Hermite (FH) discretization of the Vlasov-Maxwell equations has been developed. The resulting set of moment equations is discretized implicitly in time with a Crank-Nicolson scheme and solved with a nonlinear Newton-Krylov technique. For periodic boundary conditions, this discretization delivers a scheme that conserves the total mass, momentum and energy of the system exactly. In this work, we apply the FH method to study a problem of Langmuir turbulence, where low noise is needed to follow the turbulent cascade, which might require a lot of computational resources if studied with PIC. We simulate a weak (low density) electron beam moving in a Maxwellian plasma and subject to an instability that generates Langmuir waves and a weak turbulence field. We also discuss some optimization techniques to optimally select the Hermite basis in terms of its shift and scaling argument, and show that this technique improves the overall accuracy of the method. Finally, we discuss the applicability of the FH method for studying kinetic plasma turbulence. This work was funded by LDRD under the auspices of the NNSA of the U.S. by LANL under contract DE-AC52-06NA25396 and by EC through the EPiGRAM project (grant agreement no. 610598, epigram-project.eu).

  13. Numerical simulation on snow melting phenomena by CIP method

    NASA Astrophysics Data System (ADS)

    Mizoe, H.; Yoon, Seong Y.; Josho, M.; Yabe, T.

    2001-04-01

    A numerical scheme based on the C-CUP method to simulate melting phenomena in snow is proposed. To calculate these complex phenomena we introduce phase change, elastic-plastic, and porous models, and verify each model by using some simple examples. This scheme is applied to a practical model, such as snow piled on the insulator of an electrical transmission line, in which snow is modeled as a compound material composed of air, water, and ice, and is calculated by the elastic-plastic model. The electric field between two electrodes is solved by the Poisson equation, giving the Joule heating term in the energy conservation equation that eventually leads to snow melting. Comparison is made by changing the fraction of water in the snow to see its effect on the melting process for applied voltages of 50 and 500 kV on the two electrodes.

  14. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monnie, Michael I; Colvin, Joshua; Faseder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogenous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
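
    The following Python toy illustrates the null-event idea in the spirit of the algorithm described above (randomly chosen molecules either react according to a rule, or the step is discarded as a null event); it is a simplified stand-in with a single hypothetical A + B -> C rule and made-up rate parameters, not the DYNSTOC code.

    ```python
    import random

    def null_event_sim(n_a=500, n_b=500, p_react=0.01, dt=1e-3, t_end=5.0):
        """Null-event stochastic simulation of A + B -> C with fixed time steps."""
        counts = {"A": n_a, "B": n_b, "C": 0}
        t = 0.0
        while t < t_end:
            pool = ["A"] * counts["A"] + ["B"] * counts["B"] + ["C"] * counts["C"]
            if len(pool) < 2:
                break
            m1, m2 = random.sample(pool, 2)          # pick two random molecules
            if {m1, m2} == {"A", "B"} and random.random() < p_react:
                counts["A"] -= 1
                counts["B"] -= 1
                counts["C"] += 1
            # otherwise: a null event, the state is left unchanged
            t += dt
        return counts

    final_counts = null_event_sim()
    ```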

  15. Fast spot-based multiscale simulations of granular drainage

    SciTech Connect

    Rycroft, Chris H.; Wong, Yee Lok; Bazant, Martin Z.

    2009-05-22

    We develop a multiscale simulation method for dense granular drainage, based on the recently proposed spot model, where the particle packing flows by local collective displacements in response to diffusing "spots" of interstitial free volume. By comparing with discrete-element method (DEM) simulations of 55,000 spheres in a rectangular silo, we show that the spot simulation is able to approximately capture many features of drainage, such as packing statistics, particle mixing, and flow profiles. The spot simulation runs two to three orders of magnitude faster than DEM, making it an appropriate method for real-time control or optimization. We demonstrate extensions for modeling particle heaping and avalanching at the free surface, and for simulating the boundary layers of slower flow near walls. We show that the spot simulations are robust and flexible, by demonstrating that they can be used in both event-driven and fixed-timestep approaches, and showing that the elastic relaxation step used in the model can be applied much less frequently and still produce good results.

  16. Research on data communication method in periscope semi-physical training simulation system

    NASA Astrophysics Data System (ADS)

    Xiao, Jianbo; Hu, Dabin

    2013-03-01

    Data communication plays a very important role in hardware-in-the-loop simulation systems. The system architecture of the periscope semi-physical simulation system is proposed first. Then the data communication method based on FINS between the PLC and the PC is introduced; the user's interaction with the scene is handled through the PLC. The TCP-based communication between the 2D chart console and the scene simulation system is also introduced. The 6-DOF motion model and the scene simulation system are connected by TCP, and a DR method is introduced to address the data volume problem. Testing shows that the simulation system produces no erroneous packets and loses no packets during a simulation cycle. It meets the requirements of training and also shows good reliability and real-time performance.

  17. Experiential Learning through Computer-Based Simulations.

    ERIC Educational Resources Information Center

    Maynes, Bill; And Others

    1992-01-01

    Describes experiential learning instructional model and simulation for student principals. Describes interactive laser videodisc simulation. Reports preliminary findings about student principal learning from simulation. Examines learning approaches by unsuccessful and successful students and learning levels of model learners. Simulation's success…

  18. Interactive physically-based sound simulation

    NASA Astrophysics Data System (ADS)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation

  19. Exploring Solute Transport and Streamline Connectivity Using Two-point and Multipoint Simulation Methods

    NASA Astrophysics Data System (ADS)

    Klise, K. A.; McKenna, S. A.; Tidwell, V. C.; Lane, J. W.; Weissmann, G. S.; Wawrzyniec, T. F.; Nichols, E. M.

    2008-12-01

    Sequential indicator simulation is widely used to create lithofacies models based on the two-point correlation of the desired heterogeneous field. However, two-point correlation (i.e. the variogram) is not capable of preserving complex patterns such as connected curvilinear structures often noted in realistic geologic media. As an alternative, several multipoint simulation methods have been suggested to replicate structural patterns based on a training image. To understand the implications that two-point and multipoint methods have on predicting solute transport, rigorous tests are needed that use realistic aquifer analogs. For this study, we use high-resolution terrestrial lidar scans to identify sand and gravel lithofacies at the outcrop (meter) scale. The lithofacies map serves as the aquifer analog and is used as a training image. Two-point (sisim) and multipoint (filtersim and snesim) stochastic simulation methods are then compared based on the ability of the resulting simulations to replicate solute transport characteristics using the aquifer analog. Detailed particle tracking simulations are used to explore the streamline-based connectivity that is preserved using each method. Based on the three simulation methods tested here, filtersim, a multipoint method that replicates structural patterns seen in the aquifer analog, best predicts non-Fickian solute transport characteristics by matching the connectivity of facies along streamlines. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
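
    For reference, the two-point statistic mentioned above can be illustrated with a few lines of Python computing an experimental indicator variogram of a binary facies grid along one axis; the toy random field and lag range are assumptions for illustration, not the lidar-derived analog used in the study.

    ```python
    import numpy as np

    def indicator_variogram_x(facies, max_lag):
        """Experimental indicator variogram along the x direction.

        facies : 2D array of 0/1 lithofacies codes
        """
        gammas = []
        for h in range(1, max_lag + 1):
            diffs = facies[:, h:] - facies[:, :-h]
            gammas.append(0.5 * np.mean(diffs.astype(float) ** 2))
        return np.array(gammas)

    rng = np.random.default_rng(0)
    toy_facies = (rng.random((64, 64)) > 0.7).astype(int)   # stand-in facies map
    gamma = indicator_variogram_x(toy_facies, max_lag=20)
    ```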

  20. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  1. Methods for increased computational efficiency of multibody simulations

    NASA Astrophysics Data System (ADS)

    Epple, Alexander

    This thesis is concerned with the efficient numerical simulation of finite element based flexible multibody systems. Scaling operations are systematically applied to the governing index-3 differential algebraic equations in order to solve the problem of ill conditioning for small time step sizes. The importance of augmented Lagrangian terms is demonstrated. The use of fast sparse solvers is justified for the solution of the linearized equations of motion resulting in significant savings of computational costs. Three time stepping schemes for the integration of the governing equations of flexible multibody systems are discussed in detail. These schemes are the two-stage Radau IIA scheme, the energy decaying scheme, and the generalized-α method. Their formulations are adapted to the specific structure of the governing equations of flexible multibody systems. The efficiency of the time integration schemes is comprehensively evaluated on a series of test problems. Formulations for structural and constraint elements are reviewed and the problem of interpolation of finite rotations in geometrically exact structural elements is revisited. This results in the development of a new improved interpolation algorithm, which preserves the objectivity of the strain field and guarantees stable simulations in the presence of arbitrarily large rotations. Finally, strategies for the spatial discretization of beams in the presence of steep variations in cross-sectional properties are developed. These strategies reduce the number of degrees of freedom needed to accurately analyze beams with discontinuous properties, resulting in improved computational efficiency.

  2. Performance Analysis of an Actor-Based Distributed Simulation

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.
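
    A bare-bones Python sketch of the actor idea described above: an active object with its own thread of control whose only interface is a message queue. The class and message names are illustrative, not the components of the simulation analyzed in the paper.

    ```python
    import queue
    import threading

    class Actor(threading.Thread):
        """Active object with its own thread; communicates only via messages."""

        def __init__(self, name):
            super().__init__(daemon=True)
            self.name = name
            self.inbox = queue.Queue()

        def send(self, message):
            self.inbox.put(message)

        def run(self):
            while True:
                message = self.inbox.get()
                if message == "stop":
                    break
                print(f"{self.name} processed: {message}")

    component = Actor("component-1")
    component.start()
    component.send("advance simulation step 1")
    component.send("stop")
    component.join()
    ```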

  3. Coherent-wave Monte Carlo method for simulating light propagation in tissue

    NASA Astrophysics Data System (ADS)

    Kraszewski, Maciej; Pluciński, Jerzy

    2016-03-01

    Simulating propagation and scattering of coherent light in turbid media, such as biological tissues, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g. finite-difference or finite-element methods) require large amounts of computer memory and long computation times. This makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, allows simulation only of light propagation averaged over the ensemble of turbid medium realizations. This makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating propagation of coherent light (e.g. a laser beam) in biological tissue, which we call the Coherent-Wave Monte Carlo method. This method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the amount of memory and computation time required for the simulation. We present the theoretical basis of the proposed method and its comparison with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.

  4. A tool for simulating parallel branch-and-bound methods

    NASA Astrophysics Data System (ADS)

    Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of solving the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
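
    The replacement of the real search tree by a stochastic branching process, as described above, can be pictured with the small Python toy below; the branching probability and fan-out are arbitrary assumptions, and the real simulator additionally models load balancing and logical-time data exchanges that are omitted here.

    ```python
    import random

    def simulate_branching(p_branch=0.48, max_children=3, max_nodes=100_000):
        """Number of nodes generated by a random B&B-like search tree."""
        frontier, nodes = [0], 0
        while frontier and nodes < max_nodes:
            depth = frontier.pop()
            nodes += 1
            if random.random() < p_branch:          # node branches instead of being pruned
                for _ in range(random.randint(1, max_children)):
                    frontier.append(depth + 1)
        return nodes

    tree_sizes = [simulate_branching() for _ in range(10)]
    ```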

  5. Ocean Wave Simulation Based on Wind Field.

    PubMed

    Li, Zhongyi; Wang, Hao

    2016-01-01

    Ocean wave simulation has a wide range of applications in movies, video games and training systems. Wind force is the main energy source for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods for simulating oceans and other fluid phenomena have undergone rapid development during the past years in the field of computer graphics, few of them construct the ocean-surface height field from the perspective of wind force driving ocean waves. We introduce wind force into the construction of the ocean-surface height field by applying wind field data and wind-driven wave particles. Continual and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy was proposed to control these discrete wave particles and simulate an endless ocean surface. The results showed that the new method is capable of obtaining a realistic ocean scene under the influence of wind fields at real-time rates.
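
    One way to picture the overlap of wave particles described above is the following Python sketch, where each particle deposits a localized cosine bump onto a height-field grid; the particle positions, amplitudes, and radii are invented here, and a wind-driven version would derive them from the wind field data.

    ```python
    import numpy as np

    def height_field(grid_x, grid_y, particles):
        """Sum localized cosine bumps from wave particles onto a height field.

        particles : list of (x, y, amplitude, radius) tuples
        """
        H = np.zeros_like(grid_x)
        for px, py, amp, radius in particles:
            r = np.hypot(grid_x - px, grid_y - py)
            inside = r < radius
            H[inside] += 0.5 * amp * (1.0 + np.cos(np.pi * r[inside] / radius))
        return H

    x, y = np.meshgrid(np.linspace(0, 100, 256), np.linspace(0, 100, 256))
    H = height_field(x, y, [(30.0, 40.0, 0.6, 12.0), (60.0, 55.0, 0.4, 9.0)])
    ```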

  6. Meshless thin-shell simulation based on global conformal parameterization.

    PubMed

    Guo, Xiaohu; Li, Xin; Bao, Yunfan; Gu, Xianfeng; Qin, Hong

    2006-01-01

    This paper presents a new approach to the physically-based thin-shell simulation of point-sampled geometry via explicit, global conformal point-surface parameterization and meshless dynamics. The point-based global parameterization is founded upon the rigorous mathematics of Riemann surface theory and Hodge theory. The parameterization is globally conformal everywhere except for a minimum number of zero points. Within our parameterization framework, any well-sampled point surface is functionally equivalent to a manifold, enabling popular and powerful surface-based modeling and physically-based simulation tools to be readily adapted for point geometry processing and animation. In addition, we propose a meshless surface computational paradigm in which the partial differential equations (for dynamic physical simulation) can be applied and solved directly over point samples via Moving Least Squares (MLS) shape functions defined on the global parametric domain without explicit connectivity information. The global conformal parameterization provides a common domain to facilitate accurate meshless simulation and efficient discontinuity modeling for complex branching cracks. Through our experiments on thin-shell elastic deformation and fracture simulation, we demonstrate that our integrative method is very natural, and that it has great potential to further broaden the application scope of point-sampled geometry in graphics and relevant fields.
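
    As a rough illustration of the Moving Least Squares shape functions mentioned above, the Python snippet below fits a 1D MLS approximation with a linear basis and Gaussian weights; the node layout, support radius, and test function are assumptions, and the paper's formulation operates on the 2D global parametric domain rather than this 1D toy.

    ```python
    import numpy as np

    def mls_fit(x_eval, nodes, values, support=0.3):
        """Moving Least Squares value at x_eval with a linear basis [1, x]."""
        w = np.exp(-((x_eval - nodes) / support) ** 2)   # Gaussian weights per node
        P = np.vstack([np.ones_like(nodes), nodes]).T    # basis evaluated at nodes
        A = P.T @ (w[:, None] * P)                       # weighted moment matrix
        b = P.T @ (w * values)
        coeffs = np.linalg.solve(A, b)
        return coeffs[0] + coeffs[1] * x_eval

    nodes = np.linspace(0.0, 1.0, 11)
    values = np.sin(2 * np.pi * nodes)
    approx = mls_fit(0.37, nodes, values)
    ```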

  7. Comparing Intravenous Insertion Instructional Methods with Haptic Simulators

    PubMed Central

    Malecha, Ann

    2017-01-01

    Objective. The objective of this review was to compare traditional intravenous (IV) insertion instructional methods with the use of haptic IV simulators. Design. An integrative research design was used to analyze the current literature. Data Sources. A search was conducted using key words intravenous (IV) insertion or cannulation or venipuncture and simulation from 2000 to 2015 in the English language. The databases included Academic Search Complete, CINAHL Complete, Education Resource Information Center, and Medline. Review Methods. Whittemore and Knafl's (2005) strategies were used to critique the articles for themes and similarities. Results. Comparisons of outcomes between traditional IV instructional methods and the use of haptic IV simulators continue to show various results. Positive results indicate that the use of the haptic IV simulator decreases both band constriction and total procedure time. While students are satisfied with practicing on the haptic simulators, they still desire faculty involvement. Conclusion. Combining the haptic IV simulator with practical experience on the IV arm may be the best practice for learning IV insertion. Research employing active learning strategies while using a haptic IV simulator during the learning process may reduce cost and faculty time. PMID:28250987

  8. Comparing Intravenous Insertion Instructional Methods with Haptic Simulators.

    PubMed

    McWilliams, Lenora A; Malecha, Ann

    2017-01-01

    Objective. The objective of this review was to compare traditional intravenous (IV) insertion instructional methods with the use of haptic IV simulators. Design. An integrative research design was used to analyze the current literature. Data Sources. A search was conducted using key words intravenous (IV) insertion or cannulation or venipuncture and simulation from 2000 to 2015 in the English language. The databases included Academic Search Complete, CINAHL Complete, Education Resource Information Center, and Medline. Review Methods. Whittemore and Knafl's (2005) strategies were used to critique the articles for themes and similarities. Results. Comparisons of outcomes between traditional IV instructional methods and the use of haptic IV simulators continue to show various results. Positive results indicate that the use of the haptic IV simulator decreases both band constriction and total procedure time. While students are satisfied with practicing on the haptic simulators, they still desire faculty involvement. Conclusion. Combining the haptic IV simulator with practical experience on the IV arm may be the best practice for learning IV insertion. Research employing active learning strategies while using a haptic IV simulator during the learning process may reduce cost and faculty time.

  9. Remote Sensing Requirements Development: A Simulation-Based Approach

    NASA Technical Reports Server (NTRS)

    Zanoni, Vicki; Davis, Bruce; Ryan, Robert; Gasser, Gerald; Blonski, Slawomir

    2002-01-01

    Earth science research and application requirements for multispectral data have often been driven by currently available remote sensing technology. Few parametric studies exist that specify data required for certain applications. Consequently, data requirements are often defined based on the best data available or on what has worked successfully in the past. Since properties such as spatial resolution, swath width, spectral bands, signal-to-noise ratio (SNR), data quantization and band-to-band registration drive sensor platform and spacecraft system architecture and cost, analysis of these criteria is important to optimize system design objectively. Remote sensing data requirements are also linked to calibration and characterization methods. Parameters such as spatial resolution, radiometric accuracy and geopositional accuracy affect the complexity and cost of calibration methods. However, few studies have quantified the true accuracies required for specific problems. As calibration methods and standards are proposed, it is important that they be tied to well-known data requirements. The Application Research Toolbox (ART) developed at the John C. Stennis Space Center provides a simulation-based method for multispectral data requirements development. The ART produces simulated datasets from hyperspectral data through band synthesis. Parameters such as spectral band shape and width, SNR, data quantization, spatial resolution and band-to-band registration can be varied to create many different simulated data products. Simulated data utility can then be assessed for different applications so that requirements can be better understood.

  10. Remote Sensing System Requirements Development: A Simulation-Based Approach

    NASA Technical Reports Server (NTRS)

    Zanoni, Vicki; Davis, Bruce; Ryan, Robert; Blonski, Slavomir; Gasser, Gerald

    2002-01-01

    Earth science research and application requirements for multispectral data have often been driven by currently available remote sensing technology. Few parametric studies exist that specify data required for certain applications. Consequently, data requirements are often defined based on the best data available or on what has worked successfully in the past. Since properties such as spatial resolution, swath width, spectral bands, signal-to-noise ratio (SNR), data quantization, and band-to-band registration drive sensor platform and spacecraft system architecture and cost, analysis of these criteria is important to objectively optimize system design. Remote sensing data requirements are also linked to calibration and characterization methods. Parameters such as spatial resolution, radiometric accuracy, and geopositional accuracy affect the complexity and cost of calibration methods. However, there are few studies that quantify the true accuracies required for specific problems. As calibration methods and standards are proposed, it is important that they be tied to well-known data requirements. The Application Research Toolbox (ART) developed at Stennis Space Center provides a simulation-based method for multispectral data requirements development. The ART produces simulated data sets from hyperspectral data through band synthesis. Parameters such as spectral band shape and width, SNR, data quantization, spatial resolution, and band-to-band registration can be varied to create many different simulated data products. Simulated data utility can then be assessed for different applications so that requirements can be better understood. This paper describes the ART and its applicability for rigorously deriving remote sensing data requirements.

  11. Simulation of nonpoint source contamination based on adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Kourakos, G.; Harter, T.

    2014-12-01

    Contamination of groundwater aquifers from nonpoint sources is a worldwide problem. Typical agricultural groundwater basins receive contamination from a large array (on the order of 10^5-10^6) of spatially and temporally heterogeneous sources such as fields, crops, dairies, etc., while the received contaminants emerge at significantly uncertain time lags to a large array of discharge surfaces such as public supply, domestic and irrigation wells and streams. To support decision making in such complex regimes several approaches have been developed, which can be grouped into 3 categories: i) index methods, ii) regression methods, and iii) physically based methods. Among the three, physically based methods are considered more accurate, but at the cost of computational demand. In this work we present a physically based simulation framework which exploits the latest hardware and software developments to simulate large (>>1,000 km^2) groundwater basins. First, we simulate groundwater flow using a sufficiently detailed mesh to capture the spatial heterogeneity. To achieve optimal mesh quality we combine adaptive mesh refinement with the nonlinear solution for unconfined flow. Starting from a coarse grid, the mesh is refined iteratively in the parts of the domain where the flow heterogeneity is higher, resulting in an optimal grid. Second, we simulate the nonpoint source pollution based on the detailed velocity field computed from the previous step. In our approach we use the streamline model where the 3D transport problem is decomposed into multiple 1D transport problems. The proposed framework is applied to simulate nonpoint source pollution in the Central Valley aquifer system, California.

  12. Simulation-based instruction of technical skills

    NASA Technical Reports Server (NTRS)

    Towne, Douglas M.; Munro, Allen

    1991-01-01

    A rapid intelligent tutoring development system (RAPIDS) was developed to facilitate the production of interactive, real-time graphical device models for use in instructing the operation and maintenance of complex systems. The tools allowed subject matter experts to produce device models by creating instances of previously defined objects and positioning them in the emerging device model. These simulation authoring functions, as well as those associated with demonstrating procedures and functional effects on the completed model, required no previous programming experience or use of frame-based instructional languages. Three large simulations were developed in RAPIDS, each involving more than a dozen screen-sized sections. Seven small, single-view applications were developed to explore the range of applicability. Three workshops were conducted to train others in the use of the authoring tools. Participants learned to employ the authoring tools in three to four days and were able to produce small working device models on the fifth day.

  13. An Efficient, Semi-implicit Pressure-based Scheme Employing a High-resolution Finite Element Method for Simulating Transient and Steady, Inviscid and Viscous, Compressible Flows on Unstructured Grids

    SciTech Connect

    Richard C. Martineau; Ray A. Berry

    2003-04-01

    A new semi-implicit pressure-based Computational Fluid Dynamics (CFD) scheme for simulating a wide range of transient and steady, inviscid and viscous compressible flow on unstructured finite elements is presented here. This new CFD scheme, termed the PCICE-FEM (Pressure-Corrected ICE-Finite Element Method) scheme, is composed of three computational phases, an explicit predictor, an elliptic pressure Poisson solution, and a semi-implicit pressure-correction of the flow variables. The PCICE-FEM scheme is capable of second-order temporal accuracy by incorporating a combination of a time-weighted form of the two-step Taylor-Galerkin Finite Element Method scheme as an explicit predictor for the balance of momentum equations and the finite element form of a time-weighted trapezoid rule method for the semi-implicit form of the governing hydrodynamic equations. Second-order spatial accuracy is accomplished by linear unstructured finite element discretization. The PCICE-FEM scheme employs Flux-Corrected Transport as a high-resolution filter for shock capturing. The scheme is capable of simulating flows from the nearly incompressible to the high supersonic flow regimes. The PCICE-FEM scheme represents an advancement in mass-momentum coupled, pressure-based schemes. The governing hydrodynamic equations for this scheme are the conservative form of the balance of momentum equations (Navier-Stokes), mass conservation equation, and total energy equation. An operator splitting process is performed along explicit and implicit operators of the semi-implicit governing equations to render the PCICE-FEM scheme in the class of predictor-corrector schemes. The complete set of semi-implicit governing equations in the PCICE-FEM scheme are cast in this form, an explicit predictor phase and a semi-implicit pressure-correction phase with the elliptic pressure Poisson solution coupling the predictor-corrector phases. The result of this predictor-corrector formulation is that the pressure Poisson

  14. Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes.

    PubMed

    Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J; Wang, Liliang; Lin, Jianguo

    2016-12-13

    The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions.

  15. Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes

    PubMed Central

    Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J.; Wang, Liliang; Lin, Jianguo

    2016-01-01

    The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions. PMID:28060298

  16. Current concepts in simulation-based trauma education.

    PubMed

    Cherry, Robert A; Ali, Jameel

    2008-11-01

    The use of simulation-based technology in trauma education has focused on providing a safe and effective alternative to the more traditional methods that are used to teach technical skills and critical concepts in trauma resuscitation. Trauma team training using simulation-based technology is also being used to develop skills in leadership, team-information sharing, communication, and decision-making. The integration of simulators into medical student curriculum, residency training, and continuing medical education has been strongly recommended by the American College of Surgeons as an innovative means of enhancing patient safety, reducing medical errors, and performing a systematic evaluation of various competencies. Advanced human patient simulators are increasingly being used in trauma as an evaluation tool to assess clinical performance and to teach and reinforce essential knowledge, skills, and abilities. A number of specialty simulators in trauma and critical care have also been designed to meet these educational objectives. Ongoing educational research is still needed to validate long-term retention of knowledge and skills, provide reliable methods to evaluate teaching effectiveness and performance, and to demonstrate improvement in patient safety and overall quality of care.

  17. Numerical methods for large eddy simulation of acoustic combustion instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton T.

    Acoustic combustion instabilities occur when interaction between the combustion process and acoustic modes in a combustor results in periodic oscillations in pressure, velocity, and heat release. If sufficiently large in amplitude, these instabilities can cause operational difficulties or the failure of combustor hardware. In many situations, the dominant instability is the result of the interaction between a low frequency acoustic mode of the combustor and the large scale hydrodynamics. Large eddy simulation (LES), therefore, is a promising tool for the prediction of these instabilities, since both the low frequency acoustic modes and the large scale hydrodynamics are well resolved in LES. Problems with the tractability of such simulations arise, however, due to the difficulty of solving the compressible Navier-Stokes equations efficiently at low Mach number and due to the large number of acoustic periods that are often required for such instabilities to reach limit cycles. An implicit numerical method for the solution of the compressible Navier-Stokes equations has been developed which avoids the acoustic CFL restriction, allowing for significant efficiency gains at low Mach number, while still resolving the low frequency acoustic modes of interest. In the limit of a uniform grid the numerical method causes no artificial damping of acoustic waves. New, non-reflecting boundary conditions have also been developed for use with the characteristic-based approach of Poinsot and Lele (1992). The new boundary conditions are implemented in a manner which allows for significant reduction of the computational domain of an LES by eliminating the need to perform LES in regions where one-dimensional acoustics significantly affect the instability but details of the hydrodynamics do not. These new numerical techniques have been demonstrated in an LES of an experimental combustor. The new techniques are shown to be an efficient means of performing LES of acoustic combustion

  18. Reduction and reconstruction methods for simulation and control of fluids

    NASA Astrophysics Data System (ADS)

    Ma, Zhanhua

    POD/ERA algorithms, they can be applied to linear time-varying systems. A motivating and model problem of stabilization of an unstable vortex shedding cycle with high average lift is then shown as an application of the lifted ERA method. We consider the flow past a flat plate at a post-stall angle of attack with periodic forcing at the trailing edge. The Newton-GMRES method is used to find a high-lift unstable orbit at a forcing period slightly larger than the natural period. A six-dimensional reduced-order model is constructed using lifted ERA to reconstruct the full (with a dimension of about 1.4 × 10^5) linearized input-output dynamics about the orbit. An observer-based feedback controller is then designed using the reduced-order model. Simulation results show that the controller stabilizes the unstable orbit, and the reduced-order model correctly predicts the behavior of the full simulation. The second part of the thesis addresses a different type of reduction, namely symmetry reduction. In particular, we exploit symmetries to design special numerical integrators for a general class of systems (Lie-Poisson Hamiltonian systems) such that conservation laws, such as conservation of energy and momentum, are obeyed in numerical simulations. The motivating problem is a system of N point vortices evolving on a sphere that possesses a Lie-Poisson Hamiltonian structure. The design approach is a variational one on the Hamiltonian side that directly discretizes the corresponding Lie-Poisson variational principle, in which the Lie-Poisson system is regarded as a system reduced from a full canonical Hamiltonian system by symmetry. A modified version of the Lie-Poisson variational principle is also proposed in this work. By construction the resulting integrators will not only simulate the Lie-Poisson dynamics, but also reconstruct some dynamics for the full system or the dual system (the so-called Euler-Poincaré reduced Lagrangian system). The integrators are then applied to a free

  19. Improved Collision Modeling for Direct Simulation Monte Carlo Methods

    DTIC Science & Technology

    2011-03-01

    The Knudsen number is a measure of the rarefaction of a gas, and will be explained more thoroughly in the following chapter. Continuum solvers that use the Navier-Stokes equations are generally considered valid up to about Kn = 0.1, and the flow can be considered rarefied above that value. Direct Simulation Monte Carlo (DSMC) is a stochastic method which utilizes the Monte Carlo statistical model to simulate gas behavior, which is very useful for these rarefied atmosphere hypersonic

  20. Evaluation methods of a middleware for networked surgical simulations.

    PubMed

    Cai, Qingbo; Liberatore, Vincenzo; Cavuşoğlu, M Cenk; Yoo, Youngjin

    2006-01-01

    Distributed surgical virtual environments are desirable since they substantially extend the accessibility of computational resources by network communication. However, network conditions critically affect the quality of a networked surgical simulation in terms of bandwidth limits, delays, packet losses, etc. A solution to this problem is to introduce a middleware between the simulation application and the network so that it can take actions to enhance the user-perceived simulation performance. To comprehensively assess the effectiveness of such a middleware, we propose several evaluation methods in this paper, i.e., semi-automatic evaluation, middleware overhead measurement, and usability testing.

  1. Fault diagnosis based on continuous simulation models

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    1987-01-01

    The results are described of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like that? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.

  2. An experimental method for the assessment of color simulation tools.

    PubMed

    Lillo, Julio; Alvaro, Leticia; Moreira, Humberto

    2014-07-22

    The Simulcheck method for evaluating the accuracy of color simulation tools in relation to dichromats is described and used to test three color simulation tools: Variantor, Coblis, and Vischeck. A total of 10 dichromats (five protanopes, five deuteranopes) and 10 normal trichromats participated in the current study. Simulcheck includes two psychophysical tasks: the Pseudoachromatic Stimuli Identification task and the Minimum Achromatic Contrast task. The Pseudoachromatic Stimuli Identification task allows determination of the two chromatic angles (h(uv) values) that generate a minimum response in the yellow–blue opponent mechanism and, consequently, pseudoachromatic stimuli (greens or reds). The Minimum Achromatic Contrast task requires the selection of the gray background that produces minimum contrast (near zero change in the achromatic mechanism) for each pseudoachromatic stimulus selected in the previous task (L(R) values). Results showed important differences in the colorimetric transformations performed by the three evaluated simulation tools and their accuracy levels. Vischeck simulation accurately implemented the algorithm of Brettel, Viénot, and Mollon (1997). Only Vischeck appeared accurate (similarity in h(uv) and L(R) values between real and simulated dichromats) and, consequently, could render reliable color selections. It is concluded that Simulcheck is a consistent method because it provided an equivalent pattern of results for h(uv) and L(R) values irrespective of the stimulus set used to evaluate a simulation tool. Simulcheck was also considered valid because real dichromats provided expected h(uv) and L(R) values when performing the two psychophysical tasks included in this method.

  3. Replica exchange simulation method using temperature and solvent viscosity

    NASA Astrophysics Data System (ADS)

    Nguyen, Phuong H.

    2010-04-01

    We propose an efficient and simple method for fast conformational sampling by introducing the solvent viscosity as a parameter to the conventional temperature replica exchange molecular dynamics (T-REMD) simulation method. The method, named V-REMD (V stands for viscosity), uses both low solvent viscosity and high temperature to enhance sampling for each replica; therefore it requires fewer replicas than the T-REMD method. To reduce the solvent viscosity by a factor of λ in a molecular dynamics simulation, one can simply reduce the mass of solvent molecules by a factor of λ². This makes the method as simple as the conventional method. Moreover, thermodynamic and conformational properties of structures in replicas are still useful as long as one has sufficiently sampled the Boltzmann ensemble. The advantage of the present method has been demonstrated with simulations of trialanine, deca-alanine, and a 16-residue β-hairpin peptide. The results show that the method could reduce the number of replicas by a factor of 1.5 to 2 as compared with the T-REMD method.
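
    A short, hedged sketch of the scaling argument behind the mass trick stated above (not quoted from the paper): with the interaction potential fixed, the only dynamical time scale of the solvent comes from the thermal velocity, so quantities carrying one factor of time, such as the viscosity, scale as the square root of the solvent mass.

    ```latex
    \[
    v_{\mathrm{th}} \propto \sqrt{k_B T / m}, \qquad
    \tau \propto \sqrt{m}, \qquad
    \eta \propto \tau \propto \sqrt{m}
    \quad\Longrightarrow\quad
    m \to m/\lambda^{2} \;\;\Rightarrow\;\; \eta \to \eta/\lambda .
    \]
    ```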

  4. Simulation method for interference fringe patterns in measuring gear tooth flanks by laser interferometry.

    PubMed

    Fang, Suping; Wang, Leijie; Komori, Masaharu; Kubo, Aizoh

    2010-11-20

    We present a ray-tracing-based method for simulation of interference fringe patterns (IFPs) for measuring gear tooth flanks with a two-path interferometer. This simulation method involves two steps. In the first step, the profile of an IFP is achieved by means of ray tracing within the object path of the interferometer. In the second step, the profile of an IFP is filled with interference fringes, according to a set of functions from an optical path length to a fringe gray level. To examine the correctness of this simulation method, simulations are performed for two spur involute gears, and the simulated IFPs are verified by experiments using the actual two-path interferometer built on an optical platform.

  5. Simulation and design of high precision unit processes via numerical methods

    NASA Astrophysics Data System (ADS)

    Stafford, Roger

    1988-08-01

    SDRC has developed new computer codes specifically tailored for precise and fast simulations of manufacturing processes. Critical aspects of unit processes involve nonlinear transient heat transfer coupled with slow creeping flow. Finite element methods are chosen. Numerical algorithms are adopted which are specifically suited to the problem. Key elements of these simulations are outlined. SDRC has integrated unit process simulations with CAD/CAM design systems, analysis graphics systems, automated inspection, and databases. An example illustrates data flow, simulation results, and how engineers are using these tools to design new processes for large complex parts.

  6. A General Simulation Method for Multiple Bodies in Proximate Flight

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    2003-01-01

    Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.

  7. Rotor dynamic simulation and system identification methods for application to vacuum whirl data

    NASA Technical Reports Server (NTRS)

    Berman, A.; Giansante, N.; Flannelly, W. G.

    1980-01-01

    Methods of using rotor vacuum whirl data to improve the ability to model helicopter rotors were developed. The work consisted of the formulation of the equations of motion of elastic blades on a hub using a Galerkin method; the development of a general computer program for simulation of these equations; the study and implementation of a procedure for determining physical parameters based on measured data; and the application of a method for computing the normal modes and natural frequencies based on test data.

  8. Broadening the interface bandwidth in simulation based training

    NASA Technical Reports Server (NTRS)

    Somers, Larry E.

    1989-01-01

    Currently most computer-based simulations rely exclusively on computer-generated graphics to create the simulation. When training is involved, the method used almost exclusively to display information to the learner is text displayed on the cathode ray tube. MICROEXPERT Systems is concentrating on broadening the communications bandwidth between the computer and the user by employing a novel approach to video image storage combined with sound and voice output. An expert system is used to combine and control the presentation of analog video, sound, and voice output with computer-based graphics and text. Researchers are currently involved in the development of several graphics-based user interfaces for NASA, the U.S. Army, and the U.S. Navy. Here, the focus is on the human factors considerations, software modules, and hardware components being used to develop these interfaces.

  9. Computational methods for coupling microstructural and micromechanical materials response simulations

    SciTech Connect

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  10. Assessment of Human Patient Simulation-Based Learning

    PubMed Central

    Schwartz, Catrina R.; Odegard, Peggy Soule; Hammer, Dana P.; Seybert, Amy L.

    2011-01-01

    The most common types of assessment of human patient simulation are satisfaction and/or confidence surveys or tests of knowledge acquisition. There is an urgent need to develop valid, reliable assessment instruments related to simulation-based learning. Assessment practices for simulation-based activities in the pharmacy curricula are highlighted, with a focus on human patient simulation. Examples of simulation-based assessment activities are reviewed according to type of assessment or domain being assessed. Assessment strategies are suggested for faculty members and programs that use simulation-based learning. PMID:22345727

  11. Assessment of human patient simulation-based learning.

    PubMed

    Bray, Brenda S; Schwartz, Catrina R; Odegard, Peggy Soule; Hammer, Dana P; Seybert, Amy L

    2011-12-15

    The most common types of assessment of human patient simulation are satisfaction and/or confidence surveys or tests of knowledge acquisition. There is an urgent need to develop valid, reliable assessment instruments related to simulation-based learning. Assessment practices for simulation-based activities in the pharmacy curricula are highlighted, with a focus on human patient simulation. Examples of simulation-based assessment activities are reviewed according to type of assessment or domain being assessed. Assessment strategies are suggested for faculty members and programs that use simulation-based learning.

  12. A fully nonlinear characteristic method for gyrokinetic simulation

    SciTech Connect

    Parker, S.E.; Lee, W.W.

    1992-07-01

    We present a new scheme which evolves the perturbed part of the distribution function along a set of characteristics that solves the fully nonlinear gyrokinetic equations. This nonlinear characteristic method for particle simulation is an extension of the partially linear weighting scheme, and may be considered an improvement of existing δf methods. Some of the features of this new method are: the ability to keep all of the nonlinearities, particularly those associated with parallel acceleration; the loading of the physical equilibrium distribution function f₀ (e.g., a Maxwellian), with or without the multiple spatial scale approximation; the use of a single set of trajectories for the particles; and the retention of the conservation properties of the original gyrokinetic system in the numerically converged limit. Therefore, one can take advantage of the low noise property of the weighting scheme together with quiet start techniques to simulate weak instabilities with a substantially smaller number of particles than required for a conventional simulation. The new method is used to study a one-dimensional drift wave model which isolates the parallel velocity nonlinearity. A mode coupling calculation of the saturation mechanism is given, which is in good agreement with the simulation results and predicts a considerably lower saturation level than the estimate of Sagdeev and Galeev. Finally, we extend the nonlinear characteristic method to the electromagnetic gyrokinetic equations in general geometry.

  13. MASSIS: a mass spectrum simulation system 1. Principle and method.

    PubMed

    Chen, HaiFeng; Fan, BoTao; Xia, HaiRong; Petitjean, Michael; Yuan, ShenGang; Panaye, Annick; Doucet, Jean-Pierre

    2003-01-01

    A mass spectrum simulation system was developed. The simulated spectrum for a given target structure is computed based on cleavage knowledge and statistical rules established and stored in pivot databases: cleavage rule knowledge, functional groups, small fragments, and fragment-intensity relationships. These databases were constructed from correlation charts and statistical analysis of a large population of organic mass spectra using data mining techniques. Since 1980, several systems have been proposed for mass spectrum simulation, but at present no commercial software is available, which illustrates the complexity and difficulty of developing such a system. The mass spectral simulation system reported in this paper could be the first general software for organic chemistry use.

  14. Atomistic simulation of Voronoi-based coated nanoporous metals

    NASA Astrophysics Data System (ADS)

    Onur Yildiz, Yunus; Kirca, Mesut

    2017-02-01

    In this study, a new method for generating periodic atomistic models of coated and uncoated nanoporous metals (NPMs) is presented, and the thermodynamic stability of coated nanoporous structures is examined. The proposed method is mainly based on the Voronoi tessellation technique, which provides the ability to control the cross-sectional dimension and slenderness of ligaments as well as the thickness of the coating. By utilizing the method, molecular dynamics (MD) simulations of randomly structured NPMs with coating can be performed efficiently in order to investigate their physical characteristics. In this context, to demonstrate the functionality of the method, sample atomistic models of Au/Pt NPMs are generated and the effects of coating and porosity on thermodynamic stability are investigated using MD simulations. In addition, uniaxial tensile loading simulations are performed via the MD technique to validate the nanoporous models by comparing the effective Young’s modulus values with results from the literature. The results demonstrate that coating the nanoporous structures slightly decreases structural stability by causing atomistic configurational changes, and that the stability of the atomistic models is higher at lower porosities. Furthermore, adaptive common neighbour analysis is performed to identify the stabilized atomistic structure after the coating process, which provides direct insight into the mechanical behaviour of coated nanoporous structures.
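
    The sketch below illustrates, in two dimensions only, how a Voronoi tessellation of random seed points yields a ligament skeleton whose edges could later be dressed with atoms and coated; it is a toy under stated assumptions, not the authors' periodic three-dimensional generator, and the box size and seed count are arbitrary.

      import numpy as np
      from scipy.spatial import Voronoi

      # Minimal 2D sketch: treat each finite Voronoi ridge as one ligament axis of a
      # nanoporous skeleton (illustrative only, not the authors' generator).
      rng = np.random.default_rng(0)
      seeds = rng.uniform(0.0, 100.0, size=(20, 2))   # seed points in a 100x100 box
      vor = Voronoi(seeds)

      ligaments = []
      for (v0, v1) in vor.ridge_vertices:
          if v0 == -1 or v1 == -1:
              continue                                 # skip ridges extending to infinity
          p0, p1 = vor.vertices[v0], vor.vertices[v1]
          ligaments.append((p0, p1, np.linalg.norm(p1 - p0)))

      print(f"{len(ligaments)} finite ligaments; mean length = "
            f"{np.mean([length for _, _, length in ligaments]):.2f}")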

  15. Tools for evaluating team performance in simulation-based training.

    PubMed

    Rosen, Michael A; Weaver, Sallie J; Lazzara, Elizabeth H; Salas, Eduardo; Wu, Teresa; Silvestri, Salvatore; Schiebel, Nicola; Almeida, Sandra; King, Heidi B

    2010-10-01

    Teamwork training constitutes one of the core approaches for moving healthcare systems toward increased levels of quality and safety, and simulation provides a powerful method of delivering this training, especially for fast-paced and dynamic specialty areas such as Emergency Medicine. Team performance measurement and evaluation plays an integral role in ensuring that simulation-based training for teams (SBTT) is systematic and effective. However, this component of SBTT systems is overlooked frequently. This article addresses this gap by providing a review and practical introduction to the process of developing and implementing evaluation systems in SBTT. First, an overview of team performance evaluation is provided. Second, best practices for measuring team performance in simulation are reviewed. Third, some of the prominent measurement tools in the literature are summarized and discussed relative to the best practices. Subsequently, implications of the review are discussed for the practice of training teamwork in Emergency Medicine.

  16. Tools for evaluating team performance in simulation-based training

    PubMed Central

    Rosen, Michael A; Weaver, Sallie J; Lazzara, Elizabeth H; Salas, Eduardo; Wu, Teresa; Silvestri, Salvatore; Schiebel, Nicola; Almeida, Sandra; King, Heidi B

    2010-01-01

    Teamwork training constitutes one of the core approaches for moving healthcare systems toward increased levels of quality and safety, and simulation provides a powerful method of delivering this training, especially for fast-paced and dynamic specialty areas such as Emergency Medicine. Team performance measurement and evaluation plays an integral role in ensuring that simulation-based training for teams (SBTT) is systematic and effective. However, this component of SBTT systems is overlooked frequently. This article addresses this gap by providing a review and practical introduction to the process of developing and implementing evaluation systems in SBTT. First, an overview of team performance evaluation is provided. Second, best practices for measuring team performance in simulation are reviewed. Third, some of the prominent measurement tools in the literature are summarized and discussed relative to the best practices. Subsequently, implications of the review are discussed for the practice of training teamwork in Emergency Medicine. PMID:21063558

  17. Particle-based sampling of N-body simulations

    NASA Astrophysics Data System (ADS)

    Faber, N. T.; Stibbe, D.; Portegies Zwart, S.; McMillan, S. L. W.; Boily, C. M.

    2010-01-01

    This paper introduces a novel approach for sampling the orbits of an N-body simulation. The gist of the method is to exploit individual phase-space coordinates acquired during integration of the equations of motion. This technique, which we dub the 'particle-based sampling scheme' (PBaSS), is tailor-made for resolving rapid time-variation of coordinates when needed. The PBaSS requires less disk space (by factors of 10 or more) to retrieve orbits at a chosen accuracy than the classic snapshot approach. Furthermore, the PBaSS also allows a reconstruction of the system at any time resolution not smaller than the smallest integration time-step in a post-simulation treatment, thus avoiding costly simulation reruns.

  18. Parallel methods for dynamic simulation of multiple manipulator systems

    NASA Technical Reports Server (NTRS)

    Mcmillan, Scott; Sadayappan, P.; Orin, David E.

    1993-01-01

    In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.

  19. Low dimensional gyrokinetic PIC simulation by δf method

    NASA Astrophysics Data System (ADS)

    Chen, C. M.; Nishimura, Yasutaro; Cheng, C. Z.

    2015-11-01

    A step-by-step development of our low dimensional gyrokinetic Particle-in-Cell (PIC) simulation is reported. A one-dimensional PIC simulation of Langmuir wave dynamics is benchmarked. We then take the temporal plasma echo as a test problem to incorporate the δf method. Electrostatic drift-wave simulation in one-dimensional slab geometry is resumed in the presence of finite density gradients. By carefully diagnosing contour plots of the δf values in phase space, we discuss the saturation mechanism of the drift-wave instabilities. A v∥ formulation is employed in our new electromagnetic gyrokinetic method by solving a Helmholtz equation for the time derivative of the vector potential. Electron and ion momentum balance equations are employed in the time derivative of Ampère's law. This work is supported by the Ministry of Science and Technology of Taiwan, MOST 103-2112-M-006-007 and MOST 104-2112-M-006-019.
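
    For readers unfamiliar with the δf approach mentioned above, the toy sketch below advances marker weights for a one-dimensional electrostatic plasma with a Maxwellian background and a prescribed, fixed electric field; it omits the self-consistent field solve and all gyrokinetic machinery, and the normalization (q = m = T = 1) and field amplitude are assumptions made for the example.

      import numpy as np

      # Toy delta-f weight update in 1D. For a Maxwellian background f0 ~ exp(-v**2/2)
      # the standard weight equation along the characteristics is dw/dt = (1 - w)*E*v.
      def push_delta_f(x, v, w, efield, dt, box=2.0 * np.pi):
          E = efield(x)
          w = w + dt * (1.0 - w) * E * v     # weight equation
          v = v + dt * E                      # acceleration (q/m = 1)
          x = (x + dt * v) % box              # periodic box
          return x, v, w

      rng = np.random.default_rng(1)
      n = 10000
      x = rng.uniform(0.0, 2.0 * np.pi, n)
      v = rng.normal(0.0, 1.0, n)            # markers sampled from the background f0
      w = np.zeros(n)                         # delta-f starts at zero

      efield = lambda x: 0.01 * np.sin(x)     # small prescribed test field (no field solve)
      for _ in range(200):
          x, v, w = push_delta_f(x, v, w, efield, dt=0.05)

      print("max |w| =", np.abs(w).max())     # weights stay small for a weak perturbation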

  20. Hybrid-CVFE method for flexible-grid reservoir simulation

    SciTech Connect

    Fung, L.S.K.; Buchanan, L.; Sharma, R.

    1994-08-01

    Well flows and pressures are the most important boundary conditions in reservoir simulation. In a typical simulation, rapid changes and large pressure, temperature, saturation, and composition gradients occur in near-well regions. Treatment of these near-well phenomena significantly affects the accuracy of reservoir simulation results; therefore, extensive efforts have been devoted to the numerical treatment of wells and near-well flows. The flexible control-volume finite-element (CVFE) method is used to construct hybrid grids. The method involves use of a local cylindrical or elliptical grid to represent near-well flow accurately while honoring complex reservoir boundaries. The grid transition is smooth without any special discretization approximation, which eliminates the grid transition problem experienced with Cartesian local grid refinement and hybrid Cartesian gridding techniques.

  1. Validation of chemistry models employed in a particle simulation method

    NASA Technical Reports Server (NTRS)

    Haas, Brian L.; Mcdonald, Jeffrey D.

    1991-01-01

    The chemistry models employed in a statistical particle simulation method, as implemented on the Intel iPSC/860 multiprocessor computer, are validated and applied. Chemical relaxation of five-species air involves 34 simultaneous dissociation, recombination, and atomic-exchange reactions. The reaction rates employed in the analytic solutions are obtained from Arrhenius experimental correlations as functions of temperature for adiabatic gas reservoirs in thermal equilibrium. Favorable agreement with the analytic solutions validates the simulation when applied to relaxation of O2 toward equilibrium in reservoirs dominated by dissociation and recombination, respectively, and when applied to relaxation of air in the temperature range 5000 to 30,000 K. A flow of O2 over a circular cylinder at high Mach number is simulated to demonstrate application of the method to multidimensional reactive flows.

  2. A Simulation Study of Methods for Assessing Differential Item Functioning in Computerized Adaptive Tests.

    ERIC Educational Resources Information Center

    Zwick, Rebecca; And Others

    1994-01-01

    Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel method of differential item functioning (DIF) analysis in computerized adaptive tests (CAT). Results indicate that CAT-based DIF procedures perform well and support the use of item response theory-based matching variables in DIF analysis. (SLD)

  3. Simulation as a Teaching Method in Family Communication Class.

    ERIC Educational Resources Information Center

    Parmenter, C. Irvin

    Simulation was used as a teaching method in a family communication class to foster a feeling of empathy with others. Although the course was originally designed to be taught as a seminar, the large number of students prompted the division of students into groups of five or six, characterized as families, each of which was to discuss concepts and…

  4. Numerical Simulation of Turbulent Flames using Vortex Methods.

    DTIC Science & Technology

    1987-10-05

    layer," Phys. Fluids , 30, pp. 706-721, 1987. (11) Ghoniem, A.F., and Knio, O.M., "Numerical Simulation of Flame Propagation in Constant Volume Chambers...1985. 4. "Numerical solution of a confined shear layer using vortex methods," The International Symposium on Computational Fluid Dynamics, Tokyo...Symposium on Computational Fluid Dynamics, Tokyo, Japan, September 1985. 8. "Application of Computational Methods in Turbulent Reacting Flow

  5. A new battery-charging method suggested by molecular dynamics simulations.

    PubMed

    Abou Hamad, Ibrahim; Novotny, M A; Wipf, D O; Rikvold, P A

    2010-03-20

    Based on large-scale molecular dynamics simulations, we propose a new charging method that should be capable of charging a lithium-ion battery in a fraction of the time needed when using traditional methods. This charging method uses an additional applied oscillatory electric field. Our simulation results show that this charging method offers a great reduction in the average intercalation time for Li(+) ions, which dominates the charging time. The oscillating field not only increases the diffusion rate of Li(+) ions in the electrolyte but, more importantly, also enhances intercalation by lowering the corresponding overall energy barrier.

  6. Numerical Simulation of High Velocity Impact Phenomenon by the Distinct Element Method (DEM)

    NASA Astrophysics Data System (ADS)

    Tsukahara, Y.; Matsuo, A.; Tanaka, K.

    2007-12-01

    Continuous-DEM (Distinct Element Method) for impact analysis is proposed in this paper. Continuous-DEM is based on DEM and the idea of continuum theory. Numerical simulations of impacts between a SUS 304 projectile and a concrete target have been performed using the proposed method. The results agreed quantitatively with the impedance matching method. Experimental elastic-plastic behavior with compression and rarefaction waves under plate impact was also qualitatively reproduced, matching the result by AUTODYN®.

  7. A comparative study of interface reconstruction methods for multi-material ALE simulations

    SciTech Connect

    Kucharik, Milan; Garimella, Rao; Schofield, Samuel; Shashkov, Mikhail

    2009-01-01

    In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells and the Moment-of-Fluid method (MOF). We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two while the solutions with VOF using the wrong material order are considerably worse.

  8. A comparative study of interface reconstruction methods for multi-material ALE simulations

    SciTech Connect

    Kucharik, Milan; Garimella, Rao V.; Schofield, Samuel P.; Shashkov, Mikhail J.

    2010-04-01

    In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells and the Moment-of-Fluid method (MOF). We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two while the solutions with VOF using the wrong material order are considerably worse.

  9. Cost efficiency of simulation-based acquisition

    NASA Astrophysics Data System (ADS)

    Luzeaux, Dominique J. P.

    2003-09-01

    Reduction of risks and acquisition delays is a major issue for procurement services, as it contributes directly to the cost and availability of the system. A new approach, known as simulation-based acquisition (SBA), has been used increasingly over the past few years. In this paper, we address the cost-effectiveness of SBA. Using the standard cost estimates familiar to program managers, we first show that the cost overhead of using SBA instead of a "conservative" approach is cancelled out and turned into a financial gain as soon as the first unforeseen event arises. We then show that reuse within SBA of a system-of-systems yields financial gains that effectively make the design of the encompassing meta-system free.

  10. Surrogate modeling of ultrasonic simulations using data-driven methods

    NASA Astrophysics Data System (ADS)

    Du, Xiaosong; Grandin, Robert; Leifsson, Leifur

    2017-02-01

    Ultrasonic testing (UT) is used to detect internal flaws in materials and to characterize material properties. In many applications, computational simulations are an important part of the inspection-design and analysis processes. Having fast surrogate models for UT simulations is key to enabling efficient inverse analysis and model-assisted probability of detection (MAPOD). In many cases, it is impractical to perform the aforementioned tasks in a timely manner using current simulation models directly. Fast surrogate models can make these processes computationally tractable. This paper presents investigations of using surrogate modeling techniques to create fast approximate models of UT simulator responses. In particular, we propose to integrate data-driven methods (here, kriging interpolation) with variable-fidelity models to construct an accurate and fast surrogate model. These techniques are investigated using test cases involving UT simulations of solid components immersed in a water bath during the inspection process. We apply the full ultrasonic solver and the surrogate model to the detection and characterization of a flaw, and compare the methods in terms of the quality of the responses.
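
    A minimal single-fidelity kriging surrogate is sketched below to make the idea concrete; the toy response function standing in for the UT simulator, the fixed squared-exponential kernel, and its hyperparameters are all assumptions, and a real MAPOD study would tune the kernel and fuse low- and high-fidelity data.

      import numpy as np

      # Minimal kriging (Gaussian-process) surrogate sketch under assumed settings.
      def expensive_ut_sim(x):
          return np.sin(3.0 * x) + 0.3 * x        # placeholder for a slow UT simulation

      def sq_exp_kernel(a, b, length=0.3, sigma=1.0):
          d = a[:, None] - b[None, :]
          return sigma**2 * np.exp(-0.5 * (d / length) ** 2)

      # Training data: a handful of "expensive" simulator runs.
      x_train = np.linspace(0.0, 2.0, 8)
      y_train = expensive_ut_sim(x_train)

      # Kriging predictor (noise-free interpolation with a small jitter for stability).
      K = sq_exp_kernel(x_train, x_train) + 1e-10 * np.eye(len(x_train))
      alpha = np.linalg.solve(K, y_train)

      def surrogate(x_new):
          return sq_exp_kernel(x_new, x_train) @ alpha

      x_test = np.linspace(0.0, 2.0, 5)
      print("surrogate:", np.round(surrogate(x_test), 3))
      print("simulator:", np.round(expensive_ut_sim(x_test), 3))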

  11. Crystal level simulations using Eulerian finite element methods

    SciTech Connect

    Becker, R; Barton, N R; Benson, D J

    2004-02-06

    Over the last several years, significant progress has been made in the use of crystal level material models in simulations of forming operations. However, in Lagrangian finite element approaches simulation capabilities are limited in many cases by mesh distortion associated with deformation heterogeneity. Contexts in which such large distortions arise include: bulk deformation to strains approaching or exceeding unity, especially in highly anisotropic or multiphase materials; shear band formation and intersection of shear bands; and indentation with sharp indenters. Investigators have in the past used Eulerian finite element methods with material response determined from crystal aggregates to study steady state forming processes. However, Eulerian and Arbitrary Lagrangian-Eulerian (ALE) finite element methods have not been widely utilized for simulation of transient deformation processes at the crystal level. The advection schemes used in Eulerian and ALE codes control mesh distortion and allow for simulation of much larger total deformations. We will discuss material state representation issues related to advection and will present results from ALE simulations.

  12. System and Method for Finite Element Simulation of Helicopter Turbulence

    NASA Technical Reports Server (NTRS)

    McFarland, R. E. (Inventor); Dulsenberg, Ken (Inventor)

    1999-01-01

    The present invention provides a turbulence model that has been developed for blade-element helicopter simulation. This model uses an innovative temporal and geometrical distribution algorithm that preserves the statistical characteristics of the turbulence spectra over the rotor disc, while providing velocity components in real time to each of five blade-element stations along each of four blades, for a total of twenty blade-element stations. The simulator system includes a software implementation of flight dynamics that adheres to the guidelines for turbulence set forth in military specifications. One of the features of the present simulator system is that it applies simulated turbulence to the rotor blades of the helicopter, rather than to its center of gravity. The simulator system accurately models the rotor penetration into a gust field. It includes time correlation between the front and rear of the main rotor, as well as between the side forces felt at the center of gravity and at the tail rotor. It also includes features for added realism, such as patchy turbulence and vertical gusts into which the rotor disc penetrates. These features are realized by a unique real-time implementation of the turbulence filters. The new simulator system uses two arrays, one on either side of the main rotor, to record the turbulence field and to produce time correlation from the front to the rear of the rotor disc. The use of Gaussian interpolation between the two arrays maintains the statistical properties of the turbulence across the rotor disc. The present simulator system and method may be used in future and existing real-time helicopter simulations with minimal increase in computational workload.

  13. A rainfall simulator based on multifractal generator

    NASA Astrophysics Data System (ADS)

    Akrour, Nawal; mallet, Cecile; barthes, Laurent; chazottes, Aymeric

    2015-04-01

    Examples illustrating the simulator's capabilities show that the simulated two-dimensional fields have statistical properties coherent with the observed ones at different spatial scales (1, 4, and 16 km²), in terms of the cumulative rain-rate distribution as well as the power spectrum and structure function, indicating that scale features are well represented by the model. Keywords: precipitation, multifractal modeling, variogram, structure function, scale invariance, rain intermittency.
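
    As a rough illustration of the kind of generator underlying such a simulator, the sketch below builds a discrete multiplicative cascade with log-normal, mean-one weights; this is a textbook multifractal construction under assumed parameters, not the authors' calibrated rainfall model.

      import numpy as np

      # Generic discrete multiplicative cascade: each level splits every cell 2x2 and
      # multiplies it by i.i.d. mean-one weights, producing an intermittent field.
      def multiplicative_cascade(levels=7, sigma=0.6, seed=0):
          rng = np.random.default_rng(seed)
          field = np.ones((1, 1))
          for _ in range(levels):
              field = np.kron(field, np.ones((2, 2)))             # refine the grid 2x2
              weights = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=field.shape)
              field = field * weights                              # mean-one multipliers
          return field

      rain = multiplicative_cascade()
      print(rain.shape, "mean =", rain.mean().round(3), "max =", rain.max().round(1))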

  14. Numerical simulation of the blast impact problem using the Direct Simulation Monte Carlo (DSMC) method

    NASA Astrophysics Data System (ADS)

    Sharma, Anupam; Long, Lyle N.

    2004-10-01

    A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to model the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shock tube problem and against experiments on the interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses a domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented.

  15. A short introduction to numerical methods used in cosmological N-body simulations

    NASA Astrophysics Data System (ADS)

    Hellwing, Wojciech

    2015-12-01

    We give a short introduction to modern numerical methods commonly used in cosmological N-body simulations. First, we present some simple considerations based on linear perturbation theory which indicate the necessity for N-body simulations. Then, based on the working example of the publicly available gadget-2 code, we describe the particle-mesh and Barnes-Hut oct-tree methods used in numerical gravity N-body solvers. We also briefly discuss methods used in an elementary hydrodynamic implementation for the baryonic gas. Next, we give a very basic description of the time integration of the equations of motion commonly used in N-body codes. Finally, we describe the Zel'dovich approximation as an example method for generating initial conditions for computer simulations.
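
    As an example of the time integration mentioned above, the sketch below implements one kick-drift-kick (leapfrog) step with direct-summation softened gravity; actual codes such as gadget-2 replace the O(N²) force loop with tree or particle-mesh forces and use adaptive, per-particle time steps, and the units and parameter values here are assumptions for the toy example.

      import numpy as np

      # Direct-summation softened gravity (G = 1) and one kick-drift-kick step.
      def accelerations(pos, mass, soft=1e-2):
          acc = np.zeros_like(pos)
          for i in range(len(pos)):
              d = pos - pos[i]
              r3 = (np.sum(d * d, axis=1) + soft**2) ** 1.5
              r3[i] = np.inf                              # no self-force
              acc[i] = np.sum(mass[:, None] * d / r3[:, None], axis=0)
          return acc

      def kdk_step(pos, vel, mass, dt):
          vel = vel + 0.5 * dt * accelerations(pos, mass)   # half kick
          pos = pos + dt * vel                              # full drift
          vel = vel + 0.5 * dt * accelerations(pos, mass)   # half kick
          return pos, vel

      rng = np.random.default_rng(2)
      pos = rng.normal(size=(64, 3))
      vel = np.zeros((64, 3))
      mass = np.full(64, 1.0 / 64)
      for _ in range(100):
          pos, vel = kdk_step(pos, vel, mass, dt=0.01)
      # Pairwise forces are equal and opposite, so total momentum should stay ~0.
      print("total momentum:", np.linalg.norm((mass[:, None] * vel).sum(axis=0)))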

  16. Treecode-based generalized Born method

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao

    2011-02-01

    We have developed a treecode-based O(N log N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW surface based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.

  17. PNS and statistical experiments simulation in subcritical systems using Monte-Carlo method on example of Yalina-Thermal assembly

    NASA Astrophysics Data System (ADS)

    Sadovich, Sergey; Talamo, A.; Burnos, V.; Kiyavitskaya, H.; Fokov, Yu.

    2014-06-01

    In subcritical systems driven by an external neutron source, experimental methods based on a pulsed neutron source (PNS) and statistical techniques play an important role in reactivity measurement. Simulating these methods is a very time-consuming procedure. Several improvements to the neutronic calculations have been made for simulations in Monte Carlo programs. This paper introduces a new method for simulating PNS and statistical measurements. In this method, all events occurring in the detector during the simulation are stored in a file using the PTRAC feature in MCNP. After that, PNS and statistical methods can be simulated with a special post-processing code. Additionally, different neutron pulse shapes and lengths, as well as detector dead time, can be included in the simulation. The methods described above were tested on the subcritical assembly Yalina-Thermal, located at the Joint Institute for Power and Nuclear Research SOSNY, Minsk, Belarus. Good agreement between experimental and simulated results was shown.

  18. Histology-based simulations of ultrasound imaging: methodology.

    PubMed

    Gyöngy, Miklós; Balogh, Lajos; Szalai, Klára; Kalló, Imre

    2013-10-01

    Simulations of ultrasound (US) images based on histology may shed light on the process by which microscopic tissue features translate to a US image and may enable predictions of feature detectability as a function of US system parameters. This technical note describes how whole-slide hematoxylin and eosin-stained histology images can be used to generate maps of fractional change in bulk modulus, whose convolution with the impulse response of the US system yields simulated US images. The method is illustrated by two canine mastocytoma histology images, one with and the other without signs of intra-operative hemorrhaging. Quantitative comparisons of the envelope statistics with corresponding clinical US images provide preliminary validation of the method.
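
    A minimal sketch of the convolution model described above is given below; a random map stands in for the histology-derived fractional change in bulk modulus, a Gaussian-windowed cosine stands in for the measured pulse-echo impulse response, and only the axial (depth) direction is convolved, so it illustrates the idea rather than reproducing the authors' implementation.

      import numpy as np

      # Convolve a bulk-modulus fluctuation map with an assumed axial impulse response.
      def simulate_us_image(kappa_map, f0=5e6, c=1540.0, dz=25e-6, pulse_cycles=3.0):
          lam = c / f0                                        # wavelength (m)
          sigma = pulse_cycles * lam / 2.355                  # assumed pulse envelope width
          z = np.arange(-4 * sigma, 4 * sigma, dz)
          pulse = np.cos(2 * np.pi * z / lam) * np.exp(-0.5 * (z / sigma) ** 2)
          rf = np.apply_along_axis(lambda col: np.convolve(col, pulse, mode="same"),
                                   axis=0, arr=kappa_map)     # axial (depth) convolution
          return np.abs(rf)                                   # crude envelope detection

      rng = np.random.default_rng(3)
      kappa = rng.normal(0.0, 0.01, size=(512, 128))          # fractional modulus change
      image = simulate_us_image(kappa)
      print(image.shape, image.mean().round(5))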

  19. Impact and Implementation of Simulation-Based Training for Safety

    PubMed Central

    Bilotta, Federico F.; Werner, Samantha M.; Bergese, Sergio D.; Rosa, Giovanni

    2013-01-01

    Patient safety is an issue of imminent concern in the high-risk field of medicine, and systematic changes that alter the way medical professionals approach patient care are needed. Simulation-based training (SBT) is an exemplary solution for addressing the dynamic medical environment of today. Grounded in methodologies developed by the aviation industry, SBT exceeds traditional didactic and apprenticeship models in terms of speed of learning, amount of information retained, and capability for deliberate practice. SBT remains an option in many medical schools and continuing medical education (CME) curricula, though its use in training has been shown to improve clinical practice. Future simulation-based anesthesiology training research needs to develop methods for measuring both the degree to which training translates into increased practitioner competency and the effect of training on safety improvements for patients. PMID:24311981

  20. Imaging Earth's Interior Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Komatitsch, D.; Liu, Q.; Tape, C.; Maggi, A.

    2008-12-01

    Modern numerical methods in combination with rapid advances in parallel computing have enabled the simulation of seismic wave propagation in 3D Earth models at unprecedented resolution and accuracy. On a modest PC cluster one can now simulate global seismic wave propagation at periods of 20 s and longer, accounting for heterogeneity in the crust and mantle, topography, anisotropy, attenuation, fluid-solid interactions, self-gravitation, rotation, and the oceans. On the 'Ranger' system at the Texas Advanced Computing Center one can break the 2 s barrier. By drawing connections between seismic tomography, adjoint methods popular in climate and ocean dynamics, time-reversal imaging, and finite-frequency 'banana-doughnut' kernels, it has been demonstrated that Fréchet derivatives for tomographic and (finite) source inversions in complex 3D Earth models may be obtained based upon just two numerical simulations for each earthquake: one calculation for the current model and a second, 'adjoint', calculation that uses time-reversed signals at the receivers as simultaneous, fictitious sources. The adjoint wavefield is calculated while the regular wavefield is reconstructed on the fly by propagating the last frame of the wavefield saved by a previous forward simulation backward in time. This approach has been used to calculate sensitivity kernels in regional and global Earth models for various body- and surface-wave arrivals. These kernels illustrate the sensitivity of the observations to the structural parameters and form the basis of 'adjoint tomography'. We use a non-linear conjugate gradient method in combination with a source subspace projection preconditioning technique to iteratively minimize the misfit function. Using an automated time window selection algorithm, our emphasis is on matching targeted, frequency-dependent body-wave traveltimes and surface-wave phase anomalies, rather than entire waveforms. To avoid reaching a local minimum in the optimization procedure, we
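
    The "two simulations per earthquake" idea can be illustrated on a toy scalar problem, far removed from seismic wave propagation: for du/dt = -m u with misfit J = ½(u(T) - d)², one forward and one time-reversed adjoint integration suffice to obtain dJ/dm. The model, discretization, and values below are assumptions chosen only to make the adjoint bookkeeping visible.

      import numpy as np

      # Toy adjoint-gradient example: forward model du/dt = -m*u, u(0) = 1.
      # Adjoint: dlam/dt = m*lam backwards in time with lam(T) = u(T) - d,
      # and dJ/dm = -integral( lam(t)*u(t) ) dt.
      def forward(m, u0=1.0, T=1.0, n=1000):
          dt = T / n
          u = np.empty(n + 1)
          u[0] = u0
          for k in range(n):
              u[k + 1] = u[k] - dt * m * u[k]
          return u, dt

      def adjoint_gradient(m, d):
          u, dt = forward(m)
          lam = np.empty_like(u)
          lam[-1] = u[-1] - d                       # adjoint "source": the data residual
          for k in range(len(u) - 1, 0, -1):        # integrate backwards in time
              lam[k - 1] = lam[k] - dt * m * lam[k]
          return -np.sum(lam * u) * dt              # gradient from the two fields

      m_true, m_guess = 2.0, 1.5
      d = forward(m_true)[0][-1]                    # synthetic "observed" datum
      grad = adjoint_gradient(m_guess, d)

      # Finite-difference check: the two numbers should agree closely.
      eps = 1e-6
      J = lambda m: 0.5 * (forward(m)[0][-1] - d) ** 2
      print(grad, (J(m_guess + eps) - J(m_guess - eps)) / (2 * eps))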

  1. Codependency in nursing: using a simulation/gaming teaching method.

    PubMed

    Farnsworth, B J; Thomas, K J

    1993-01-01

    Practicing nurses can benefit by learning to differentiate their caretaking (potentially destructive) from their caregiving (constructive) behaviors, and by learning strategies to facilitate caregiving. A new simulation/game was developed to assist nurses to recognize codependent behaviors in themselves and others and to practice some alternative patterns of behavior. This team-based simulation/game, "The Climb," uses the metaphor of a mountain-climbing expedition. The experiences of the journey promote dynamic insights into the consequences of codependency in the professional and personal lives of the nurse.

  2. Applications of large eddy simulation methods to gyrokinetic turbulence

    SciTech Connect

    Bañón Navarro, A.; Happel, T.; Teaca, B.; Jenko, F.; Hammett, G. W.; Collaboration: ASDEX Upgrade Team

    2014-03-15

    The large eddy simulation (LES) approach—solving numerically the large scales of a turbulent system and accounting for the small-scale influence through a model—is applied to nonlinear gyrokinetic systems that are driven by a number of different microinstabilities. Comparisons between modeled, lower resolution, and higher resolution simulations are performed for an experimental measurable quantity, the electron density fluctuation spectrum. Moreover, the validation and applicability of LES is demonstrated through a series of diagnostics based on the free energetics of the system.

  3. Simulation: A Complementary Method for Teaching Health Services Strategic Management

    PubMed Central

    Reddick, W. T.

    1990-01-01

    Rapid change in the health care environment mandates a more comprehensive approach to the education of future health administrators. The area of consideration in this study is that of health care strategic management. A comprehensive literature review suggests microcomputer-based simulation as an appropriate vehicle for addressing the needs of both educators and students. Seven strategic management software packages are reviewed and rated with an instrument adapted from the Infoworld review format. The author concludes that a primary concern is the paucity of health care specific strategic management simulations.

  4. A comparison of methods for melting point calculation using molecular dynamics simulations.

    PubMed

    Zhang, Yong; Maginn, Edward J

    2012-04-14

    Accurate and efficient prediction of melting points for complex molecules is still a challenging task for molecular simulation, although many methods have been developed. Four melting point computational methods, including one free energy-based method (the pseudo-supercritical path (PSCP) method) and three direct methods (two interface-based methods and the voids method) were applied to argon and a widely studied ionic liquid 1-n-butyl-3-methylimidazolium chloride ([BMIM][Cl]). The performance of each method was compared systematically. All the methods under study reproduce the argon experimental melting point with reasonable accuracy. For [BMIM][Cl], the melting point was computed to be 320 K using a revised PSCP procedure, which agrees with the experimental value 337-339 K very well. However, large errors were observed in the computed results using the direct methods, suggesting that these methods are inappropriate for large molecules with sluggish dynamics. The strengths and weaknesses of each method are discussed.

  5. Smoothed Profile Method to Simulate Colloidal Particles in Complex Fluids

    NASA Astrophysics Data System (ADS)

    Yamamoto, Ryoichi; Nakayama, Yasuya; Kim, Kang

    A new direct numerical simulation scheme, called the "Smoothed Profile (SP) method," is presented. The SP method, as a direct numerical simulation of particulate flow, provides a way to couple continuum fluid dynamics with rigid-body dynamics through smoothed profiles of colloidal particles. Our formulation includes extensions to colloids in multicomponent solvents, such as charged colloids in electrolyte solutions. This method enables us to compute the time evolutions of colloidal particles, ions, and host fluids simultaneously by solving the Newton, advection-diffusion, and Navier-Stokes equations, so that the electro-hydrodynamic couplings can be fully taken into account. The electrophoretic mobilities of charged spherical particles are calculated in several situations. Comparisons with approximation theories show quantitative agreement for dilute dispersions without any empirical parameters.
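
    The sketch below constructs a smoothed profile field for a single spherical particle on a regular grid, using the common tanh form with interface thickness ξ; the grid spacing, radius, and ξ are illustrative, and the coupling of this field to the Navier-Stokes, advection-diffusion, and Newton equations that constitutes the full SP method is not reproduced.

      import numpy as np

      # Smoothed-profile indicator: phi ~ 1 inside the particle, ~ 0 in the fluid.
      def smoothed_profile(grid_shape, h, center, radius, xi):
          axes = [np.arange(n) * h for n in grid_shape]
          X, Y, Z = np.meshgrid(*axes, indexing="ij")
          r = np.sqrt((X - center[0])**2 + (Y - center[1])**2 + (Z - center[2])**2)
          return 0.5 * (1.0 + np.tanh((radius - r) / xi))

      phi = smoothed_profile((64, 64, 64), h=0.5, center=(16.0, 16.0, 16.0),
                             radius=4.0, xi=0.5)
      # The discrete particle volume should be close to (4/3)*pi*radius**3.
      print(phi.sum() * 0.5**3, 4.0 / 3.0 * np.pi * 4.0**3)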

  6. Simulation of ground motion using the stochastic method

    USGS Publications Warehouse

    Boore, D.M.

    2003-01-01

    A simple and powerful method for simulating ground motions is to combine parametric or functional descriptions of the ground motion's amplitude spectrum with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to the distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers (generally, f>0.1 Hz), and it is widely used to predict ground motions for regions of the world in which recordings of motion from potentially damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude and in diverse tectonic environments. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms. This provides a means by which the results of the rigorous studies reported in other papers in this volume can be incorporated into practical predictions of ground motion.
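
    A heavily simplified version of that recipe is sketched below: windowed Gaussian noise supplies the random phase, a Brune-type omega-squared shape stands in for the full source/path/site spectrum, and a simple t·exp(-t) shape stands in for the duration envelope; all parameter values are assumptions for illustration, not the published implementation.

      import numpy as np

      # Core stochastic-method recipe: shape a random-phase spectrum with a target
      # amplitude spectrum and transform back to the time domain.
      def stochastic_ground_motion(duration=20.0, dt=0.01, f_corner=1.0, seed=0):
          rng = np.random.default_rng(seed)
          n = int(duration / dt)
          t = np.arange(n) * dt

          noise = rng.normal(size=n) * t * np.exp(-t)            # windowed white noise
          spec = np.fft.rfft(noise)
          spec /= np.abs(spec).mean()                            # keep phase and relative fluctuations

          freq = np.fft.rfftfreq(n, dt)
          target = freq**2 / (1.0 + (freq / f_corner) ** 2)      # omega-squared acceleration shape
          return t, np.fft.irfft(spec * target, n)

      t, acc = stochastic_ground_motion()
      print("peak acceleration (arbitrary units):", np.abs(acc).max().round(3))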

  7. Efficient simulation of stochastic chemical kinetics with the Stochastic Bulirsch-Stoer extrapolation method

    PubMed Central

    2014-01-01

    Background Biochemical systems with relatively low numbers of components must be simulated stochastically in order to capture their inherent noise. Although there has recently been considerable work on discrete stochastic solvers, there is still a need for numerical methods that are both fast and accurate. The Bulirsch-Stoer method is an established method for solving ordinary differential equations that possesses both of these qualities. Results In this paper, we present the Stochastic Bulirsch-Stoer method, a new numerical method for simulating discrete chemical reaction systems, inspired by its deterministic counterpart. It is able to achieve an excellent efficiency due to the fact that it is based on an approach with high deterministic order, allowing for larger stepsizes and leading to fast simulations. We compare it to the Euler τ-leap, as well as two more recent τ-leap methods, on a number of example problems, and find that as well as being very accurate, our method is the most robust, in terms of efficiency, of all the methods considered in this paper. The problems it is most suited for are those with increased populations that would be too slow to simulate using Gillespie’s stochastic simulation algorithm. For such problems, it is likely to achieve higher weak order in the moments. Conclusions The Stochastic Bulirsch-Stoer method is a novel stochastic solver that can be used for fast and accurate simulations. Crucially, compared to other similar methods, it better retains its high accuracy when the timesteps are increased. Thus the Stochastic Bulirsch-Stoer method is both computationally efficient and robust. These are key properties for any stochastic numerical method, as they must typically run many thousands of simulations. PMID:24939084
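
    For orientation, the sketch below shows the baseline Euler τ-leap idea that the Stochastic Bulirsch-Stoer method is compared against (it is not the authors' method), applied to a toy reversible isomerisation with made-up rate constants.

      import numpy as np

      # Euler tau-leap for A <-> B: fire Poisson-distributed reaction counts per step.
      def tau_leap(x0, k_f, k_r, tau, n_steps, seed=0):
          rng = np.random.default_rng(seed)
          a, b = x0
          for _ in range(n_steps):
              prop_f, prop_r = k_f * a, k_r * b              # reaction propensities
              n_f = rng.poisson(prop_f * tau)                # firings of A -> B
              n_r = rng.poisson(prop_r * tau)                # firings of B -> A
              a = max(a - n_f + n_r, 0)
              b = max(b + n_f - n_r, 0)
          return a, b

      print(tau_leap(x0=(1000, 0), k_f=1.0, k_r=0.5, tau=0.01, n_steps=1000))
      # Expected equilibrium is roughly A ~ 333, B ~ 667 for these rates.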

  8. A virtual reality based simulator for learning nasogastric tube placement.

    PubMed

    Choi, Kup-Sze; He, Xuejian; Chiang, Vico Chung-Lim; Deng, Zhaohong

    2015-02-01

    Nasogastric tube (NGT) placement is a common clinical procedure where a plastic tube is inserted into the stomach through the nostril for feeding or drainage. However, the placement is a blind process in which the tube may be mistakenly inserted into other locations, leading to unexpected complications or fatal incidents. The placement techniques are conventionally acquired by practising on unrealistic rubber mannequins or on humans. In this paper, a virtual reality based training simulation system is proposed to facilitate the training of NGT placement. It focuses on the simulation of tube insertion and the rendering of the feedback forces with a haptic device. A hybrid force model is developed to compute the forces analytically or numerically under different conditions, including the situations when the patient is swallowing or when the tube is buckled at the nostril. To ensure real-time interactive simulations, an offline simulation approach is adopted to obtain the relationship between the insertion depth and insertion force using a non-linear finite element method. The offline dataset is then used to generate real-time feedback forces by interpolation. The virtual training process is logged quantitatively with metrics that can be used for assessing objective performance and tracking progress. The system has been evaluated by nursing professionals. They found that the haptic feeling produced by the simulated forces is similar to their experience during real NGT insertion. The proposed system provides a new educational tool to enhance conventional training in NGT placement.
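
    The offline/online split described above can be pictured with the sketch below, where a made-up insertion-depth versus force table stands in for the offline non-linear finite element results and is interpolated at run time for haptic rendering; the swallowing adjustment factor is likewise an assumption for the example.

      import numpy as np

      # Offline: tabulated insertion depth (cm) vs. resistance force (N) -- made-up values.
      depth_table = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
      force_table = np.array([0.0, 0.3, 0.5, 1.2, 0.8, 0.6])

      def feedback_force(depth_cm, swallowing=False):
          """Online step: interpolate the offline table for real-time haptic rendering."""
          force = np.interp(depth_cm, depth_table, force_table)
          if swallowing:
              force *= 0.5          # assumed reduction while the virtual patient swallows
          return force

      for d in (2.5, 12.0, 18.0):
          print(d, "cm ->", round(float(feedback_force(d)), 3), "N")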

  9. How to qualify and validate wear simulation devices and methods.

    PubMed

    Heintze, S D

    2006-08-01

    The clinical significance of increased wear can mainly be attributed to impaired aesthetic appearance and/or functional restrictions. Little is known about the systemic effects of swallowed or inhaled worn particles that derive from restorations. As wear measurements in vivo are complicated and time-consuming, wear simulation devices and methods had been developed without, however, systematically looking at the factors that influence important wear parameters. Wear simulation devices shall simulate processes that occur in the oral cavity during mastication, namely force, force profile, contact time, sliding movement, clearance of worn material, etc. Different devices that use different force actuator principles are available. Those with the highest citation frequency in the literature are - in descending order - the Alabama, ACTA, OHSU, Zurich and MTS wear simulators. When following the FDA guidelines on good laboratory practice (GLP) only the expensive MTS wear simulator is a qualified machine to test wear in vitro; the force exerted by the hydraulic actuator is controlled and regulated during all movements of the stylus. All the other simulators lack control and regulation of force development during dynamic loading of the flat specimens. This may be an explanation for the high coefficient of variation of the results in some wear simulators (28-40%) and the poor reproducibility of wear results if dental databases are searched for wear results of specific dental materials (difference of 22-72% for the same material). As most of the machines are not qualifiable, wear methods applying the machine may have a sound concept but cannot be validated. Only with the MTS method have wear parameters and influencing factors been documented and verified. A good compromise with regard to costs, practicability and robustness is the Willytec chewing simulator, which uses weights as force actuator and step motors for vertical and lateral movements. The Ivoclar wear method run on

  10. Applying dynamic simulation modeling methods in health care delivery research-the SIMULATE checklist: report of the ISPOR simulation modeling emerging good practices task force.

    PubMed

    Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Osgood, Nathaniel D; Padula, William V; Higashi, Mitchell K; Wong, Peter K; Pasupathy, Kalyan S; Crown, William

    2015-01-01

    Health care delivery systems are inherently complex, consisting of multiple tiers of interdependent subsystems and processes that are adaptive to changes in the environment and behave in a nonlinear fashion. Traditional health technology assessment and modeling methods often neglect the wider health system impacts that can be critical for achieving desired health system goals and are often of limited usefulness when applied to complex health systems. Researchers and health care decision makers can either underestimate or fail to consider the interactions among the people, processes, technology, and facility designs. Health care delivery system interventions need to incorporate the dynamics and complexities of the health care system context in which the intervention is delivered. This report provides an overview of common dynamic simulation modeling methods and examples of health care system interventions in which such methods could be useful. Three dynamic simulation modeling methods are presented to evaluate system interventions for health care delivery: system dynamics, discrete event simulation, and agent-based modeling. In contrast to conventional evaluations, a dynamic systems approach incorporates the complexity of the system and anticipates the upstream and downstream consequences of changes in complex health care delivery systems. This report assists researchers and decision makers in deciding whether these simulation methods are appropriate to address specific health system problems through an eight-point checklist referred to as the SIMULATE (System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence) tool. It is a primer for researchers and decision makers working in health care delivery and implementation sciences who face complex challenges in delivering effective and efficient care that can be addressed with system interventions. On reviewing this report, the readers should be able to identify whether these simulation modeling

  11. Simulation-Based Learning Environment for Assisting Error-Correction

    NASA Astrophysics Data System (ADS)

    Horiguchi, Tomoya; Hirashima, Tsukasa

    In simulation-based learning environments, 'unexpected' phenomena often work as counterexamples which prompt a learner to reconsider the problem. It is important that counterexamples contain sufficient information which leads a learner to correct understanding. This paper proposes a method for creating such counterexamples. Error-Based Simulation (EBS) is used for this purpose, which simulates the erroneous motion in mechanics based on a learner's erroneous equation. Our framework is as follows: (1) to identify the cause of errors by comparing a learner's answer with the problem-solver's correct one, (2) to visualize the cause of errors by the unnatural motions in EBS. To perform (1), misconceptions are classified based on a problem-solving model and related to their appearance in a learner's answers (error-identification rules). To perform (2), objects' motions in EBS are classified and related to the misconceptions they suggest (error-visualization rules). A prototype system is implemented and evaluated through a preliminary test to confirm the usefulness of the framework.

  12. Modeling electrokinetic flow by Lagrangian particle-based method

    NASA Astrophysics Data System (ADS)

    Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; Tartakovsky, Alexandre; Parks, Mike

    2015-11-01

    This work focuses on mathematical models and numerical schemes based on a Lagrangian particle-based method that can effectively capture the mesoscale multiphysics (hydrodynamics, electrostatics, and advection-diffusion) associated with applications in micro-/nano-transport and technology. The order of accuracy of the particle-based method is significantly improved with the presented implicit consistent numerical scheme. Specifically, we show simulation results on electrokinetic flows and microfluidic mixing processes in micro-/nano-channels and through semi-permeable porous structures.

  13. Simulated scaling method for localized enhanced sampling and simultaneous "alchemical" free energy simulations: a general method for molecular mechanical, quantum mechanical, and quantum mechanical/molecular mechanical simulations.

    PubMed

    Li, Hongzhi; Fajer, Mikolai; Yang, Wei

    2007-01-14

    A potential scaling version of simulated tempering is presented to efficiently sample configuration space in a localized region. The present "simulated scaling" method is developed with a Wang-Landau type of updating scheme in order to quickly flatten the distributions in the scaling parameter (lambda_m) space. This proposal is meaningful for a broad range of biophysical problems in which localized sampling is required. Besides its superior capability and robustness in localized conformational sampling, the simulated scaling method can also naturally lead to efficient "alchemical" free energy predictions when a dual-topology alchemical hybrid potential is applied; both the chemically and the conformationally distinct portions of the two end-point chemical states can then be sampled efficiently and simultaneously. As demonstrated in this work, the present method is also feasible for quantum mechanical and quantum mechanical/molecular mechanical simulations.
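
    The core of the scheme is a random walk in a discrete set of scaling parameters whose biasing weights are updated on the fly, Wang-Landau style, until all scaling states are visited roughly uniformly. The sketch below is a generic, hypothetical illustration of that idea on a toy one-dimensional potential; it is not the authors' code, and the lambda ladder, toy potential, and update schedule are placeholders.

        # Generic Wang-Landau-style random walk in a discrete lambda (scaling) space.
        # The biasing log-weights are updated until visits to all lambda states are roughly flat.
        import numpy as np

        rng = np.random.default_rng(0)
        lambdas = np.linspace(0.0, 1.0, 11)     # hypothetical scaling ladder
        bias = np.zeros_like(lambdas)           # running log-weights (penalties)
        hist = np.zeros_like(lambdas)
        f = 1.0                                 # Wang-Landau modification factor (log units)

        def energy(x, lam):
            return lam * 0.5 * x ** 2           # toy scaled potential U(x; lambda) = lambda * U0(x)

        x, m = 0.0, 5                           # configuration and current lambda index
        for step in range(200000):
            # configurational move at fixed lambda
            xn = x + rng.normal(scale=0.5)
            if rng.random() < np.exp(-(energy(xn, lambdas[m]) - energy(x, lambdas[m]))):
                x = xn
            # attempt a move in lambda space, biased by the running weights
            mn = max(0, min(len(lambdas) - 1, m + rng.choice([-1, 1])))
            dlogp = -(energy(x, lambdas[mn]) - energy(x, lambdas[m])) + (bias[m] - bias[mn])
            if rng.random() < np.exp(min(0.0, dlogp)):
                m = mn
            bias[m] += f                        # penalize the visited lambda state
            hist[m] += 1
            if step % 20000 == 0 and hist.min() > 0.8 * hist.mean():
                f *= 0.5                        # histogram flat enough: refine the factor
                hist[:] = 0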

  14. Image-based numerical simulation of hemodynamics in an intracranial aneurysm

    NASA Astrophysics Data System (ADS)

    Le, Trung; Ge, Liang; Sotiropoulos, Fotis; Kallmes, David; Cloft, Harry; Lewis, Debra; Dai, Daying; Ding, Yonghong; Kadirvel, Ramanathan

    2007-11-01

    Image-based numerical simulations of hemodynamics in an intracranial aneurysm are carried out. The numerical solver, based on the CURVIB (curvilinear grid/immersed boundary) approach developed in Ge and Sotiropoulos, JCP 2007, is used to simulate the blood flow. A curvilinear grid system that gradually follows the curved geometry of the artery wall and consists of approximately 5M grid nodes is constructed as the background grid system, and the boundaries of the investigated artery and aneurysm are treated as immersed boundaries. The surface geometry of the aneurysm wall is reconstructed from an angiography study of an aneurysm formed on the common carotid artery (CCA) of a rabbit and discretized with triangular meshes. At the inlet a physiological flow waveform is specified and direct numerical simulations are used to simulate the blood flow. Very rich vortical dynamics is observed within the aneurysm area: a ring-like vortex sheds from the proximal side of the aneurysm, develops and impinges onto the distal side as the flow evolves, and breaks down into smaller vortices later in the cardiac cycle. This work was supported in part by the University of Minnesota Supercomputing Institute.

  15. Energy Efficient Biomolecular Simulations with FPGA-based Reconfigurable Computing

    SciTech Connect

    Hampton, Scott S; Agarwal, Pratul K

    2010-05-01

    Reconfigurable computing (RC) is being investigated as a hardware solution for improving time-to-solution for biomolecular simulations. A number of popular molecular dynamics (MD) codes are used to study various aspects of biomolecules. These codes are now capable of simulating nanosecond time-scale trajectories per day on conventional microprocessor-based hardware, but biomolecular processes often occur at the microsecond time-scale or longer. A wide gap exists between the desired and achievable simulation capability; therefore, there is considerable interest in alternative algorithms and hardware for improving the time-to-solution of MD codes. The fine-grain parallelism provided by Field Programmable Gate Arrays (FPGA) combined with their low power consumption make them an attractive solution for improving the performance of MD simulations. In this work, we use an FPGA-based coprocessor to accelerate the compute-intensive calculations of LAMMPS, a popular MD code, achieving up to 5.5 fold speed-up on the non-bonded force computations of the particle mesh Ewald method and up to 2.2 fold speed-up in overall time-to-solution, and potentially an increase by a factor of 9 in power-performance efficiencies for the pair-wise computations. The results presented here provide an example of the multi-faceted benefits to an application in a heterogeneous computing environment.

  16. Supernova Simulations with Boltzmann Neutrino Transport: A Comparison of Methods

    SciTech Connect

    Liebendoerfer, M.; Rampp, M.; Janka, H.-Th.; Mezzacappa, Anthony

    2005-02-01

    Accurate neutrino transport has been built into spherically symmetric simulations of stellar core collapse and postbounce evolution. The results of such simulations agree that spherically symmetric models with standard microphysical input fail to explode by the delayed, neutrino-driven mechanism. Independent groups implemented fundamentally different numerical methods to tackle the Boltzmann neutrino transport equation. Here we present a direct and detailed comparison of such neutrino radiation-hydrodynamics simulations for two codes, AGILE-BOLTZTRAN of the Oak Ridge-Basel group and VERTEX of the Garching group. The former solves the Boltzmann equation directly by an implicit, general relativistic discrete-angle method on the adaptive grid of a conservative implicit hydrodynamics code with second-order TVD advection. In contrast, the latter couples a variable Eddington factor technique with an explicit, moving-grid, conservative high-order Riemann solver with important relativistic effects treated by an effective gravitational potential. The presented study is meant to test our neutrino radiation-hydrodynamics implementations and to provide a data basis for comparisons and verifications of supernova codes to be developed in the future. Results are discussed for simulations of the core collapse and postbounce evolution of a 13 M⊙ star with Newtonian gravity and a 15 M⊙ star with relativistic gravity.

  17. Flood frequency estimation by hydrological continuous simulation and classical methods

    NASA Astrophysics Data System (ADS)

    Brocca, L.; Camici, S.; Melone, F.; Moramarco, T.; Tarpanelli, A.

    2009-04-01

    In recent years, the effects of flood damage have motivated the development of new, complex methodologies for simulating the hydrologic/hydraulic behaviour of river systems, which are fundamental for guiding territorial planning as well as for floodplain management and risk analysis. The evaluation of flood-prone areas can be carried out through various procedures that are usually based on the estimation of the peak discharge for an assigned probability of exceedance. In the case of ungauged or scarcely gauged catchments this is not straightforward, as the limited availability of historical peak flow data introduces considerable uncertainty into the flood frequency analysis. A possible way to overcome this problem is to use hydrological simulation studies to generate long synthetic discharge time series. For this purpose, new methodologies based on the stochastic generation of rainfall and temperature data have recently been proposed. The generated information can be used as input for a continuous hydrological model to produce a synthetic time series of peak river flow and, hence, the flood frequency distribution at a given site. In this study stochastic rainfall data have been generated via the Neyman-Scott Rectangular Pulses (NSRP) model, which is characterized by a flexible structure in which the model parameters broadly relate to underlying physical features observed in rainfall fields, and which is capable of preserving the statistical properties of a rainfall time series over a range of time scales. The peak river flow time series have been generated through a continuous hydrological model aimed at flood prediction and developed for the purpose (hereinafter named MISDc) (Brocca, L., Melone, F., Moramarco, T., Singh, V.P., 2008. A continuous rainfall-runoff model as tool for the critical hydrological scenario assessment in natural channels. In: M. Taniguchi, W.C. Burnett, Y. Fukushima, M. Haigh, Y. Umezawa (Eds), From headwater to the ocean
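
    To make the structure of such a stochastic generator concrete, here is a minimal, hypothetical sketch of a Neyman-Scott rectangular-pulse rainfall series (Poisson storm arrivals, a random number of displaced rain cells per storm, exponentially distributed cell durations and intensities); the parameter values are placeholders and are not calibrated to any catchment or to MISDc.

        # Schematic Neyman-Scott Rectangular Pulses generator: storm origins arrive as a
        # Poisson process, each storm spawns several rain cells displaced from the origin,
        # and every cell is a rectangular pulse with random duration and intensity.
        import numpy as np

        rng = np.random.default_rng(1)

        def nsrp_series(hours, storm_rate=0.01, cells_per_storm=4.0,
                        cell_delay=2.0, cell_duration=3.0, cell_intensity=1.5):
            rain = np.zeros(hours)
            t = rng.exponential(1.0 / storm_rate)
            while t < hours:
                n_cells = 1 + rng.poisson(cells_per_storm - 1)       # at least one cell per storm
                for _ in range(n_cells):
                    start = t + rng.exponential(cell_delay)
                    dur = rng.exponential(cell_duration)
                    inten = rng.exponential(cell_intensity)          # mm/h
                    i0, i1 = int(start), int(min(start + dur, hours))
                    rain[i0:i1] += inten
                t += rng.exponential(1.0 / storm_rate)
            return rain

        series = nsrp_series(24 * 365 * 10)      # ten years of hourly rainfall
        print(series.mean(), series.var())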

  18. A Simulation-Based Investigation of High Latency Space Systems Operations

    NASA Technical Reports Server (NTRS)

    Li, Zu Qun; Crues, Edwin Z.; Bielski, Paul; Moore, Michael

    2017-01-01

    NASA's human space program has developed considerable experience with near-Earth space operations. Although NASA has experience with deep space robotic missions, it has little substantive experience with human deep space operations. Even in the Apollo program, the missions lasted only a few weeks and the communication latencies were on the order of seconds. Human missions beyond the relatively close confines of the Earth-Moon system will involve durations measured in months and communication latencies measured in minutes. To minimize crew risk and to maximize mission success, NASA needs to develop a better understanding of the implications of these mission durations and communication latencies for vehicle design, mission design, and flight controller interaction with the crew. To begin to address these needs, NASA performed a study using a physics-based subsystem simulation to investigate the interactions between the spacecraft crew and a ground-based mission control center for vehicle subsystem operations across long communication delays. The simulation, built with a subsystem modeling tool developed at NASA's Johnson Space Center, models the life support system of a Mars transit vehicle. The simulation contains models of the cabin atmosphere and pressure control system, electrical power system, drinking and waste water systems, internal and external thermal control systems, and crew metabolic functions. The simulation has three interfaces: 1) a real-time crew interface that can be used to monitor and control the vehicle subsystems; 2) a mission control center interface with data transport delays up to 15 minutes each way; 3) a real-time simulation test conductor interface that can be used to insert subsystem malfunctions and observe the interactions between the crew, the ground, and the simulated vehicle. The study was conducted during the 21st NASA Extreme Environment Mission Operations (NEEMO) mission, between July 18 and August 3, 2016. The NEEMO

  19. Exact hybrid particle/population simulation of rule-based models of biochemical systems.

    PubMed

    Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R

    2014-04-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings

  20. Assessment of simulation-based calibration of rectangular pulse models

    NASA Astrophysics Data System (ADS)

    Vanhaute, Willem Jan; Vandenberghe, Sander; Willems, Patrick; Verhoest, Niko E. C.

    2013-04-01

    The use of stochastic rainfall models has become widespread in many hydrologic applications, especially when historical rainfall records lack the length or quality needed for practical purposes. Among a variety of models, rectangular pulse models such as the Neyman-Scott and Bartlett-Lewis type models are known for their parsimonious nature and relative ease in simulating long rainfall time series. The aforementioned models are often calibrated using the generalized method of moments, which fits modeled to observed moments. To ease the computational burden, the expected values of the modeled moments are usually expressed as functions of the model parameters through analytical expressions. The derivation of such analytical expressions is considered to be an important bottleneck in the development of these rectangular pulse models: any adjustment to the model structure must be accompanied by an adjustment of the analytical moments in order to be able to calibrate the adjusted model. To avoid the use of analytical moments during calibration, a simulation-based calibration is needed. The latter would enable the modeler to make and validate adjustments in a more organic manner. However, such simulation-based calibration must account for the randomness of the simulation. As such, ensemble runs must be made for every objective function evaluation, resulting in considerable computational requirements. The presented research investigates how to exploit today's available computational resources in order to enable simulation-based calibration. Once this type of calibration is feasible, it will open the door to implementing adjustments to the model structure (such as the introduction of dependencies between model variables by using copulas) without the need to rely on analytical expressions of the different moments.
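
    A minimal sketch of what such a simulation-based objective function can look like is given below: for every candidate parameter set, an ensemble of synthetic series is generated and its averaged moments are compared with the observed moments, in place of analytical moment expressions. The toy rainfall generator, the chosen moments, and the ensemble size are all hypothetical placeholders.

        # Simulation-based method-of-moments objective: ensemble averages of simulated
        # moments replace analytical moment expressions. The toy generator below stands
        # in for any rectangular pulse model (Neyman-Scott, Bartlett-Lewis, ...).
        import numpy as np

        def simulate_rainfall(params, seed):
            rate, mean_depth = params                 # hypothetical two-parameter toy model
            rng = np.random.default_rng(seed)
            wet = rng.random(24 * 365) < rate
            return wet * rng.exponential(mean_depth, size=wet.size)

        def moments(series):
            return np.array([series.mean(), series.var(), (series > 0).mean()])

        def objective(params, observed_moments, n_ensemble=50):
            ens = np.array([moments(simulate_rainfall(params, seed=k)) for k in range(n_ensemble)])
            rel_err = (ens.mean(axis=0) - observed_moments) / observed_moments
            return float(np.sum(rel_err ** 2))

        obs = moments(simulate_rainfall((0.10, 2.0), seed=999))   # pretend these are observations
        print(objective((0.10, 2.0), obs), objective((0.20, 1.0), obs))

    Because each evaluation averages many stochastic runs, the objective is noisy and expensive, which is precisely why the computational requirements discussed above become the limiting factor.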

  1. Momentum-exchange method in lattice Boltzmann simulations of particle-fluid interactions.

    PubMed

    Chen, Yu; Cai, Qingdong; Xia, Zhenhua; Wang, Moran; Chen, Shiyi

    2013-07-01

    The momentum exchange method has been widely used in lattice Boltzmann simulations of particle-fluid interactions. Although proven accurate for stationary walls, it results in inaccurate particle dynamics without corrections. In this work, we reveal the physical cause of this problem and find that the initial momentum of the net mass transfer through boundaries in the moving-boundary treatment is not counted in the conventional momentum exchange method. A corrected momentum exchange method is then proposed by taking into account the initial momentum of the net mass transfer at each time step. The method is easy to implement, with negligible extra computational cost. Direct numerical simulations of the sedimentation of a single elliptical particle are carried out to evaluate the accuracy of our method as well as other lattice Boltzmann-based methods by comparison with the results of the finite element method. A shear flow test shows that our method is Galilean invariant.

  2. Patch-based iterative conditional geostatistical simulation using graph cuts

    NASA Astrophysics Data System (ADS)

    Li, Xue; Mariethoz, Gregoire; Lu, DeTang; Linde, Niklas

    2016-08-01

    Training image-based geostatistical methods are increasingly popular in groundwater hydrology, even if existing algorithms present limitations that often make real-world applications difficult. These limitations include a computational cost that can be prohibitive for high-resolution 3-D applications, the presence of visual artifacts in the model realizations, and a low variability between model realizations due to the limited pool of patterns available in a finite-size training image. In this paper, we address these issues by proposing an iterative patch-based algorithm which adapts a graph cuts methodology that is widely used in computer graphics. Our adapted graph cuts method optimally cuts patches of pixel values borrowed from the training image and assembles them successively, each time accounting for the information of previously stitched patches. The initial simulation result might display artifacts, which are identified as regions of high cost. These artifacts are reduced by iteratively placing new patches in high-cost regions. In contrast to most patch-based algorithms, the proposed scheme can also efficiently address point conditioning. An advantage of the method is that the cut process results in the creation of new patterns that are not present in the training image, thereby increasing pattern variability. To quantify this effect, a new measure of variability, the merging index, is developed; it quantifies the pattern variability in the realizations with respect to the training image. A series of sensitivity analyses demonstrates the stability of the proposed graph cuts approach, which produces satisfying simulations for a wide range of parameter values. Applications to 2-D and 3-D cases are compared to state-of-the-art multiple-point methods. The results show that the proposed approach obtains significant speedups and increases variability between realizations. Connectivity functions applied to 2-D models and transport simulations in 3-D models are used to

  3. Calibration of three rainfall simulators with automatic measurement methods

    NASA Astrophysics Data System (ADS)

    Roldan, Margarita

    2010-05-01

    CALIBRATION OF THREE RAINFALL SIMULATORS WITH AUTOMATIC MEASUREMENT METHODS M. Roldán (1), I. Martín (2), F. Martín (2), S. de Alba (3), M. Alcázar (3), F.I. Cermeño (3) 1 Grupo de Investigación Ecología y Gestión Forestal Sostenible. ECOGESFOR-Universidad Politécnica de Madrid. E.U.I.T. Forestal. Avda. Ramiro de Maeztu s/n. Ciudad Universitaria. 28040 Madrid. margarita.roldan@upm.es 2 E.U.I.T. Forestal. Avda. Ramiro de Maeztu s/n. Ciudad Universitaria. 28040 Madrid. 3 Facultad de Ciencias Geológicas. Universidad Complutense de Madrid. Ciudad Universitaria s/n. 28040 Madrid. Rainfall erosivity is the potential ability of rain to cause erosion; it is a function of the physical characteristics of rainfall (Hudson, 1971). Most expressions describing erosivity are related to kinetic energy or momentum, and thus to drop mass or size and fall velocity. Therefore, research on the factors determining erosivity leads to the need to study the relation between fall height and fall velocity for the different drop sizes generated in a rainfall simulator (Epema, G.F. and Riezebos, H.Th., 1983). Rainfall simulators are among the most widely used tools in erosion studies and are used to determine fall velocity and drop size; they allow repeated and multiple measurements. The main reason for using rainfall simulation as a research tool is to reproduce, in a controlled way, the behaviour expected in the natural environment. However, when simulated rain is compared with natural rain, there is often a lack of correspondence between the two, and this can cast doubt on the validity of the data because the characteristics of natural rain are not adequately represented in rainfall simulation research (Dunkerley, D., 2008). Rainfall simulations often use high rain rates that do not resemble natural rain events, so the measurements are not comparable. Besides, the intensity is related to the kinetic energy, which

  4. NEW APPROACHES: Addressing students' common difficulties in basic electricity by qualitative simulation-based activities

    NASA Astrophysics Data System (ADS)

    Ronen, M.; Eliahu, M.

    1997-11-01

    Simulation-based activities provide students with an opportunity to compare their physical intuition with the behaviour of the model and can sometimes offer unique advantages over other methods. This article presents various approaches to the development of qualitative simulation-based activities and describes how these activities can address students' common difficulties in basic electricity.

  5. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments relies upon the global momentum conservation of the fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. Numerical examples illustrate the method's application to predicting bulk fluid motion including lateral propellant slosh in low-g conditions.

  6. Methods for variance reduction in Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reduction in the variance of dose distribution in a computational volume. Dose distribution is computed via tracing of a large number of rays, and tracking the absorption and scattering of the rays within discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bi-lateral filtering. These methods, along with the corresponding performance enhancements are detailed here.
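
    Of the techniques listed, quasi-random sampling is the simplest to illustrate in isolation. The sketch below (a hypothetical toy, not the authors' ray tracer) compares pseudo-random and scrambled Sobol sampling of photon free path lengths using the scipy.stats.qmc generator; the attenuation coefficient and the estimated quantity are illustrative only.

        # Quasi-random versus pseudo-random sampling of photon free path lengths.
        # The Sobol sequence covers the unit interval more evenly, which reduces the
        # variance of simple Monte Carlo estimates such as the mean penetration depth.
        import numpy as np
        from scipy.stats import qmc

        mu_t = 10.0                                    # attenuation coefficient, 1/cm (illustrative)
        n = 2 ** 12

        u_pseudo = np.random.default_rng(0).random(n)
        u_sobol = qmc.Sobol(d=1, scramble=True, seed=0).random(n).ravel()

        def mean_depth(u):
            # free path lengths s = -ln(1 - u) / mu_t, averaged over the sample
            return float((-np.log(1.0 - u) / mu_t).mean())

        print(mean_depth(u_pseudo), mean_depth(u_sobol), "exact:", 1.0 / mu_t)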

  7. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    SciTech Connect

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.

  8. Transformation-optics simulation method for stimulated Brillouin scattering

    NASA Astrophysics Data System (ADS)

    Zecca, Roberto; Bowen, Patrick T.; Smith, David R.; Larouche, Stéphane

    2016-12-01

    We develop an approach to enable the full-wave simulation of stimulated Brillouin scattering and related phenomena in a frequency-domain, finite-element environment. The method uses transformation-optics techniques to implement a time-harmonic coordinate transform that reconciles the different frames of reference used by electromagnetic and mechanical finite-element solvers. We show how this strategy can be successfully applied to bulk and guided systems, comparing the results with the predictions of established theory.

  9. Use of simulated data sets to evaluate the fidelity of metagenomic processing methods.

    PubMed

    Mavromatis, Konstantinos; Ivanova, Natalia; Barry, Kerrie; Shapiro, Harris; Goltsman, Eugene; McHardy, Alice C; Rigoutsos, Isidore; Salamov, Asaf; Korzeniewski, Frank; Land, Miriam; Lapidus, Alla; Grigoriev, Igor; Richardson, Paul; Hugenholtz, Philip; Kyrpides, Nikos C

    2007-06-01

    Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene-finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (blast hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.
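
    The construction step can be pictured with the following hypothetical sketch, which draws fixed-length reads at random positions from a few placeholder genome sequences according to a chosen abundance profile and records the true origin of every read; real data sets of this kind use full isolate genomes and realistic read simulators.

        # Build a toy simulated metagenome: draw reads at random positions from isolate
        # genomes according to an abundance profile, and keep the true origin of each
        # read so downstream assembly/binning results can be scored against the truth.
        import random

        random.seed(42)
        genomes = {                       # placeholder sequences; real genomes are Mb-scale
            "org_A": "ACGT" * 5000,
            "org_B": "GGCATTA" * 3000,
            "org_C": "TTAGGCAC" * 2500,
        }
        abundance = {"org_A": 0.6, "org_B": 0.3, "org_C": 0.1}   # simulated community structure
        read_len, n_reads = 100, 10000

        names = list(abundance)
        weights = [abundance[n] for n in names]
        reads = []
        for i in range(n_reads):
            org = random.choices(names, weights)[0]
            seq = genomes[org]
            start = random.randrange(len(seq) - read_len)
            reads.append((f"read_{i}", org, seq[start:start + read_len]))

        # `reads` now pairs every sequence with its known source genome (the ground truth).
        print(len(reads), reads[0][:2])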

  10. Use of simulated data sets to evaluate the fidelity of metagenomic processing methods

    SciTech Connect

    Mavromatis, Konstantinos; Ivanova, Natalia; Barry, Kerri; Shapiro, Harris; Goltsman, Eugene; McHardy, Alice C.; Rigoutsos, Isidore; Salamov, Asaf; Korzeniewski, Frank; Land, Miriam; Lapidus, Alla; Grigoriev, Igor; Richardson, Paul; Hugenholtz, Philip; Kyrpides, Nikos C.

    2006-12-01

    Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity--based (blast hit distribution) and two sequence composition--based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.

  11. Nonnull interferometer simulation for aspheric testing based on ray tracing.

    PubMed

    Tian, Chao; Yang, Yongying; Wei, Tao; Zhuo, Yongmo

    2011-07-10

    The nonnull interferometric method, which employs a partial compensation system to compensate for the longitudinal aberration of the aspheric under test and a reverse optimization procedure to correct retrace errors, is a useful technique for general aspheric testing. However, accurate system modeling and simulation are required to correct retrace errors and reconstruct the fabrication error of the aspheric. Here, we propose a ray-tracing-based method to simulate the nonnull interferometer, which calculates the optical path difference by tracing rays through the reference path and the test path. To model a nonrotationally symmetric fabrication error, we represent it mathematically with a set of Zernike polynomials (i.e., Zernike deformation) and derive ray-tracing formulas for the deformed surface, which can also handle misalignment situations (i.e., a surface with tilts and/or decenters) and thus greatly simplifies system modeling. Simulation results of systems with (relatively) large and small Zernike deformations, and their comparison with the lens design program Zemax, demonstrate the correctness and effectiveness of the method.

  12. Computer Simulations of Valveless Pumping using the Immersed Boundary Method

    NASA Astrophysics Data System (ADS)

    Jung, Eunok; Peskin, Charles

    2000-03-01

    Pumping blood in one direction is the main function of the heart, and the heart is equipped with valves that ensure unidirectional flow. Is it possible, though, to pump blood without valves? This report is intended to show by numerical simulation the possibility of a net flow that is generated by a valveless mechanism in a circulatory system. Simulations of valveless pumping are motivated by biomedical applications: cardiopulmonary resuscitation (CPR) and the human foetus before the development of the heart valves. The numerical method used in this work is the immersed boundary method, which is applicable to problems involving an elastic structure interacting with a viscous incompressible fluid. This method has already been applied to blood flow in the heart, platelet aggregation during blood clotting, aquatic animal locomotion, and flow in collapsible tubes. The direction of flow inside a loop of tubing which consists of (almost) rigid and flexible parts is investigated when the boundary of one end of the flexible segment is forced periodically in time. Despite the absence of valves, net flow around the loop may appear in these simulations. Furthermore, we present the new, unexpected result that the direction of this flow is determined not only by the position of the periodic compression, but also by the frequency and amplitude of the driving force.

  13. Simulation Methods for Self-Assembled Polymers and Rings

    NASA Astrophysics Data System (ADS)

    Kindt, James T.

    2003-11-01

    New off-lattice grand canonical Monte Carlo simulation methods have been developed and used to model the equilibrium structure and phase diagrams of equilibrium polymers and rings. A scheme called Polydisperse Insertion, Removal, and Resizing (PDIRR) is used to accelerate the equilibration of the size distribution of self-assembled aggregates. This method allows the insertion or removal of aggregates (e.g., chains) containing an arbitrary number of monomers in a single Monte Carlo move, or the re-sizing of an existing aggregate. For the equilibrium polymer model under semi-dilute conditions, a several-fold increase in equilibration rate compared with single-monomer moves is observed, facilitating the study of the isotropic-nematic transition of semiflexible, self-assembled chains. Combined with the pivot-coupled GCMC method for ring simulation, the PDIRR approach also allows the phenomenological simulation of a polydisperse equilibrium phase of rings, 2-dimensional fluid domains, or flat self-assembled disks in three dimensions.

  14. The Local Variational Multiscale Method for Turbulence Simulation.

    SciTech Connect

    Collis, Samuel Scott; Ramakrishnan, Srinivas

    2005-05-01

    Accurate and efficient turbulence simulation in complex geometries is a formidable challenge. Traditional methods are often limited by low accuracy and/or restrictions to simple geometries. We explore the merger of Discontinuous Galerkin (DG) spatial discretizations with Variational Multi-Scale (VMS) modeling, termed Local VMS (LVMS), to overcome these limitations. DG spatial discretizations support arbitrarily high-order accuracy on unstructured grids amenable to complex geometries. Furthermore, the high-order, hierarchical representation within DG provides a natural framework for a priori scale separation crucial for VMS implementation. We show that the combined benefits of DG and VMS within the LVMS method lead to a promising new approach to LES for use in complex geometries. The efficacy of LVMS for turbulence simulation is assessed by application to fully developed turbulent channel flow. First, a detailed spatial resolution study is undertaken to record the effects of the DG discretization on turbulence statistics. Here, the local hp-refinement capabilities of DG are exploited to obtain reliable low-order statistics efficiently. Likewise, resolution guidelines for simulating wall-bounded turbulence using DG are established. We also explore the influence of enforcing Dirichlet boundary conditions indirectly through numerical fluxes in DG, which allows the solution to jump (slip) at the channel walls. These jumps are effective in simulating the influence of the wall commensurate with the local resolution, and this feature of DG is effective in mitigating near-wall resolution requirements. In particular, we show that by locally modifying the numerical viscous flux used at the wall, we are able to regulate the near-wall slip through a penalty that leads to improved shear-stress predictions. This work demonstrates the potential of the numerical viscous flux to act as a numerically consistent wall model, and this success warrants future research. As in any high-order numerical method some

  15. Modeling and simulation of crack detection for underwater structures using an ACFM method

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Guoming; Yin, Xiaokang; Zhang, Chuanrong; Liu, Tao

    2013-01-01

    Considering the influence of the seawater environment, a numerical model of the alternating current field measurement (ACFM) method was built for surface defect detection on underwater structures in this article. Based on the ACFM principle and the ANSYS simulation software, finite element simulation was performed to investigate the rules and characteristics of the electromagnetic signal distributions in regions with defects, and these were verified by an underwater artificial crack detection experiment. The experimental results show that the distributions of the electromagnetic signals picked up in the artificial crack experiment agree with the simulation results, and the numerical model is thereby validated.

  16. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam, associated with the external tank of the U.S. space shuttle, has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.
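
    The contrast between the two surrogate types can be sketched on synthetic data as follows; the quadratic response surface below merely stands in for the quadratic neural network idea, and the inputs, coefficients, and noise level are all made up rather than taken from the external tank tests.

        # Linear least-squares fit versus a quadratic (second-order) response surface on
        # synthetic data with an interaction term; the quadratic model captures it, the
        # purely linear one cannot. All data and coefficients are hypothetical.
        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.uniform(0, 1, size=(200, 2))                    # e.g. void size, pressure (made up)
        y = 1.0 + 2.0 * x[:, 0] + 0.5 * x[:, 1] + 3.0 * x[:, 0] * x[:, 1] + rng.normal(0, 0.05, 200)

        A_lin = np.column_stack([np.ones(200), x])              # linear design matrix
        A_quad = np.column_stack([A_lin, x ** 2, (x[:, 0] * x[:, 1])[:, None]])

        c_lin, *_ = np.linalg.lstsq(A_lin, y, rcond=None)
        c_quad, *_ = np.linalg.lstsq(A_quad, y, rcond=None)

        rms = lambda A, c: float(np.sqrt(np.mean((A @ c - y) ** 2)))
        print("linear fit RMS:", rms(A_lin, c_lin), " quadratic fit RMS:", rms(A_quad, c_quad))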

  17. The impact of cloud vertical profile on liquid water path retrieval based on the bispectral method: A theoretical study based on large-eddy simulations of shallow marine boundary layer clouds

    NASA Astrophysics Data System (ADS)

    Miller, Daniel J.; Zhang, Zhibo; Ackerman, Andrew S.; Platnick, Steven; Baum, Bryan A.

    2016-04-01

    Passive optical retrievals of cloud liquid water path (LWP), like those implemented for Moderate Resolution Imaging Spectroradiometer (MODIS), rely on cloud vertical profile assumptions to relate optical thickness (τ) and effective radius (re) retrievals to LWP. These techniques typically assume that shallow clouds are vertically homogeneous; however, an adiabatic cloud model is plausibly more realistic for shallow marine boundary layer cloud regimes. In this study a satellite retrieval simulator is used to perform MODIS-like satellite retrievals, which in turn are compared directly to the large-eddy simulation (LES) output. This satellite simulator creates a framework for rigorous quantification of the impact that vertical profile features have on LWP retrievals, and it accomplishes this while also avoiding sources of bias present in previous observational studies. The cloud vertical profiles from the LES are often more complex than either of the two standard assumptions, and the favored assumption was found to be sensitive to cloud regime (cumuliform/stratiform). Confirming previous studies, drizzle and cloud top entrainment of dry air are identified as physical features that bias LWP retrievals away from adiabatic and toward homogeneous assumptions. The mean bias induced by drizzle-influenced profiles was shown to be on the order of 5-10 g/m2. In contrast, the influence of cloud top entrainment was found to be smaller by about a factor of 2. A theoretical framework is developed to explain variability in LWP retrievals by introducing modifications to the adiabatic re profile. In addition to analyzing bispectral retrievals, we also compare results with the vertical profile sensitivity of passive polarimetric retrieval techniques.
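
    The sensitivity discussed above ultimately comes from the constant that maps the retrieved (tau, re) pair to LWP under each vertical-profile assumption. The short sketch below uses the commonly quoted closed forms, LWP = (2/3) rho_w tau re for a vertically homogeneous cloud and (5/9) rho_w tau re for an adiabatic one; treat these factors as illustrative rather than as the paper's exact formulation.

        # Liquid water path implied by a MODIS-like (tau, re) pair under the two standard
        # vertical-profile assumptions. The 2/3 (homogeneous) and 5/9 (adiabatic) factors
        # are the commonly quoted closed forms and are used here for illustration only.
        RHO_W = 1.0e6            # density of liquid water, g/m^3

        def lwp_homogeneous(tau, re_m):
            return (2.0 / 3.0) * RHO_W * tau * re_m

        def lwp_adiabatic(tau, re_m):
            return (5.0 / 9.0) * RHO_W * tau * re_m

        tau, re = 10.0, 12e-6    # optical thickness and effective radius (12 micrometres)
        print(lwp_homogeneous(tau, re), "g/m^2 vs", lwp_adiabatic(tau, re), "g/m^2")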

  18. Rupture Dynamics Simulation for Non-Planar fault by a Curved Grid Finite Difference Method

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Zhu, G.; Chen, X.

    2011-12-01

    We first implement the non-staggered finite difference method to solve the dynamic rupture problem, with split-node, for non-planar fault. Split-node method for dynamic simulation has been used widely, because of that it's more precise to represent the fault plane than other methods, for example, thick fault, stress glut and so on. The finite difference method is also a popular numeric method to solve kinematic and dynamic problem in seismology. However, previous works focus most of theirs eyes on the staggered-grid method, because of its simplicity and computational efficiency. However this method has its own disadvantage comparing to non-staggered finite difference method at some fact for example describing the boundary condition, especially the irregular boundary, or non-planar fault. Zhang and Chen (2006) proposed the MacCormack high order non-staggered finite difference method based on curved grids to precisely solve irregular boundary problem. Based upon on this non-staggered grid method, we make success of simulating the spontaneous rupture problem. The fault plane is a kind of boundary condition, which could be irregular of course. So it's convinced that we could simulate rupture process in the case of any kind of bending fault plane. We will prove this method is valid in the case of Cartesian coordinate first. In the case of bending fault, the curvilinear grids will be used.

  19. Investigation of Ribosomes Using Molecular Dynamics Simulation Methods.

    PubMed

    Makarov, G I; Makarova, T M; Sumbatyan, N V; Bogdanov, A A

    2016-12-01

    The ribosome as a complex molecular machine undergoes significant conformational changes while synthesizing a protein molecule. Molecular dynamics simulations have been used as complementary approaches to X-ray crystallography and cryoelectron microscopy, as well as biochemical methods, to answer many questions that modern structural methods leave unsolved. In this review, we demonstrate that all-atom modeling of ribosome molecular dynamics is particularly useful in describing the process of tRNA translocation, atomic details of behavior of nascent peptides, antibiotics, and other small molecules in the ribosomal tunnel, and the putative mechanism of allosteric signal transmission to functional sites of the ribosome.

  20. A Precision and High-Speed Behavioral Simulation Method for Transient Response and Frequency Characteristics of Switching Converters

    NASA Astrophysics Data System (ADS)

    Sai, Toru; Sugimoto, Shoko; Sugimoto, Yasuhiro

    We propose a fast and precise simulation method for the transient response and frequency characteristics of switching converters. This method uses a behavioral simulation tool rather than a SPICE-like analog simulator. The nonlinear operation of the circuit is considered, and the nonlinear function is realized by defining a nonlinear formula based on the circuit operation and by applying feedback. To assess the accuracy and simulation time of the proposed method, we designed current-mode buck and boost converters and fabricated them using a 0.18-µm high-voltage CMOS process. The comparison of the transient response and frequency characteristics among SPICE, the proposed program implemented on a behavioral simulation tool which we named NSTVR (New Simulation Tool for Voltage Regulators), and measurements of the fabricated IC chips showed good agreement, while in a boost converter simulation NSTVR was more than 22 times faster than SPICE in CPU time for the transient response and 85 times faster for the frequency characteristics.
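
    As a generic illustration of behavioral (non-SPICE) converter simulation, the sketch below integrates an averaged model of a buck converter, driving the inductor and capacitor equations directly from the duty cycle; it is not the NSTVR program, and the component values, duty cycle, and time step are hypothetical.

        # Averaged behavioral model of a buck converter transient: instead of switching-level
        # SPICE devices, the inductor/capacitor ODEs are driven by the duty cycle directly.
        # Generic illustration of behavioral simulation; all values are placeholders.
        import numpy as np

        Vin, L, C, R = 5.0, 10e-6, 47e-6, 2.0       # illustrative component values
        dt, steps = 50e-9, 200000
        duty = 0.36                                  # open-loop duty cycle (target Vout ~ 1.8 V)

        iL, vOut = 0.0, 0.0
        trace = np.empty(steps)
        for k in range(steps):
            diL = (duty * Vin - vOut) / L            # averaged inductor voltage
            dvO = (iL - vOut / R) / C                # capacitor current balance
            iL += diL * dt
            vOut += dvO * dt
            trace[k] = vOut

        print("steady-state output ~", trace[-1], "V")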

  1. An Ellipsoidal Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 1

    NASA Technical Reports Server (NTRS)

    Shivarama, Ravishankar; Fahrenthold, Eric P.

    2004-01-01

    A number of coupled particle-element and hybrid particle-element methods have been developed for the simulation of hypervelocity impact problems, to avoid certain disadvantages associated with the use of pure continuum-based or pure particle-based methods. To date these methods have employed spherical particles. In recent work a hybrid formulation has been extended to the ellipsoidal particle case. A model formulation approach based on Lagrange's equations, with particle entropies serving as generalized coordinates, avoids the angular momentum conservation problems which have been reported with ellipsoidal smooth particle hydrodynamics models.

  2. Some Developments of the Equilibrium Particle Simulation Method for the Direct Simulation of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Macrossan, M. N.

    1995-01-01

    The direct simulation Monte Carlo (DSMC) method is the established technique for the simulation of rarefied gas flows. In some flows of engineering interest, such as occur for aero-braking spacecraft in the upper atmosphere, DSMC can become prohibitively expensive in CPU time because some regions of the flow, particularly on the windward side of blunt bodies, become collision dominated. As an alternative to using a hybrid DSMC and continuum gas solver (Euler or Navier-Stokes solver) this work is aimed at making the particle simulation method efficient in the high density regions of the flow. A high density, infinite collision rate limit of DSMC, the Equilibrium Particle Simulation method (EPSM) was proposed some 15 years ago. EPSM is developed here for the flow of a gas consisting of many different species of molecules and is shown to be computationally efficient (compared to DSMC) for high collision rate flows. It thus offers great potential as part of a hybrid DSMC/EPSM code which could handle flows in the transition regime between rarefied gas flows and fully continuum flows. As a first step towards this goal a pure EPSM code is described. The next step of combining DSMC and EPSM is not attempted here but should be straightforward. EPSM and DSMC are applied to Taylor-Couette flow with Kn = 0.02 and 0.0133 and S(omega) = 3). Toroidal vortices develop for both methods but some differences are found, as might be expected for the given flow conditions. EPSM appears to be less sensitive to the sequence of random numbers used in the simulation than is DSMC and may also be more dissipative. The question of the origin and the magnitude of the dissipation in EPSM is addressed. It is suggested that this analysis is also relevant to DSMC when the usual accuracy requirements on the cell size and decoupling time step are relaxed in the interests of computational efficiency.

  3. Distributed adaptive simulation through standards-based integration of simulators and adaptive learning systems.

    PubMed

    Bergeron, Bryan; Cline, Andrew; Shipley, Jaime

    2012-01-01

    We have developed a distributed, standards-based architecture that enables simulation and simulator designers to leverage adaptive learning systems. Our approach, which incorporates an electronic competency record, open source LMS, and open source microcontroller hardware, is a low-cost, pragmatic option to integrating simulators with traditional courseware.

  4. Axon voltage-clamp simulations. I. Methods and tests.

    PubMed Central

    Moore, J W; Ramón, F; Joyner, R W

    1975-01-01

    This is the first in a series of four papers in which we present the numerical simulation of the application of the voltage clamp technique to excitable cells. In this paper we describe the application of the Crank-Nicolson (1947) method for the solution of the parabolic partial differential equations that describe a cylindrical cell in which the ionic conductances are functions of voltage and time (Hodgkin and Huxley, 1952). This method is compared with other methods in terms of accuracy and speed of solution for a propagated action potential. In addition, differential equations representing a simple voltage-clamp electronic circuit are presented. Using the voltage clamp circuit equations, we simulate the voltage clamp of a single isopotential membrane patch and show how the parameters of the circuit affect the transient response of the patch to a step change in the control potential. The simulation methods presented in this series of papers allow the evaluation of voltage clamp control of an excitable cell or a syncytium of excitable cells. To the extent that membrane parameters and geometrical factors can be determined, the methods presented here provide solutions for the voltage profile as a function of time. PMID:1174640
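
    For readers unfamiliar with the Crank-Nicolson scheme referred to above, the following hypothetical sketch applies it to the passive part of a one-dimensional cable-type equation, dV/dt = D d2V/dx2 - V/tau, with a banded (tridiagonal) solve per time step; the ionic conductances and the clamp circuit of the original papers are deliberately omitted, and all constants are illustrative.

        # Crank-Nicolson step for a passive 1-D cable-type equation with Dirichlet ends.
        # Each time step solves a tridiagonal system; Hodgkin-Huxley currents are omitted.
        import numpy as np
        from scipy.linalg import solve_banded

        nx, dt, dx = 200, 1e-5, 1e-3
        D, tau = 1e-2, 5e-3                          # illustrative diffusion and decay constants
        r = D * dt / (2 * dx ** 2)

        # banded form of (I - dt/2 * A); rows hold the super-, main, and sub-diagonal
        ab = np.zeros((3, nx))
        ab[0, 2:] = -r
        ab[1, 1:-1] = 1 + 2 * r + dt / (2 * tau)
        ab[2, :-2] = -r
        ab[1, 0] = ab[1, -1] = 1.0                   # Dirichlet rows: V fixed at both ends

        V = np.zeros(nx)
        V[0] = 50.0                                  # command step at the clamped end (mV)
        for _ in range(2000):
            interior = r * (V[2:] - 2 * V[1:-1] + V[:-2]) - dt / (2 * tau) * V[1:-1]
            rhs = V.copy()
            rhs[1:-1] += interior
            rhs[0], rhs[-1] = 50.0, 0.0
            V = solve_banded((1, 1), ab, rhs)

        print(V[::40])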

  5. Wargame Simulation Theory and Evaluation Method for Emergency Evacuation of Residents from Urban Waterlogging Disaster Area

    PubMed Central

    Chen, Peng; Zhang, Jiquan; Sun, Yingyue; Liu, Xiaojing

    2016-01-01

    Urban waterlogging seriously threatens the safety of urban residents and properties. Wargame simulation research on resident emergency evacuation from waterlogged areas can determine the effectiveness of emergency response plans for high-risk events at low cost. Based on wargame theory and emergency evacuation plans, we used a wargame exercise method, incorporating qualitative and quantitative aspects, to build a wargame exercise and evaluation model for urban waterlogging disaster emergency sheltering. The simulation was empirically tested in Daoli District of Harbin. The results showed that the wargame simulation scored 96.40 points, evaluated as good. The simulation results indicate that wargame simulation of urban waterlogging emergency procedures for disaster response can improve the flexibility and capacity for command, management and decision-making in emergency management departments. PMID:28009805

  6. Multigrid Methods for Fully Implicit Oil Reservoir Simulation

    NASA Technical Reports Server (NTRS)

    Molenaar, J.

    1996-01-01

    In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations. Two-level FAS algorithm is presented for the black-oil equations, and linear multigrid for
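
    The linear multigrid idea discussed above can be illustrated on a one-dimensional model of the elliptic pressure equation. The sketch below is a hypothetical two-grid correction cycle for -u'' = f with weighted Jacobi smoothing; it is only a stand-in for the pressure solve and does not attempt the FAS or black-oil machinery of a reservoir simulator.

        # Two-grid correction scheme for a 1-D Poisson problem -u'' = f: pre-smooth,
        # restrict the residual, approximately solve the coarse error equation,
        # prolongate the correction, and post-smooth.
        import numpy as np

        def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
            for _ in range(sweeps):
                u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            return u

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            return r

        def two_grid(u, f, h):
            u = jacobi(u, f, h)                       # pre-smoothing on the fine grid
            r = residual(u, f, h)
            rc = r[::2].copy()                        # restriction by injection to the coarse grid
            ec = jacobi(np.zeros_like(rc), rc, 2 * h, sweeps=60)   # approximate coarse solve
            e = np.zeros_like(u)
            e[::2] = ec                               # prolongation: copy plus linear interpolation
            e[1::2] = 0.5 * (e[:-1:2] + e[2::2])
            u += e
            return jacobi(u, f, h)                    # post-smoothing

        n = 129
        h = 1.0 / (n - 1)
        x = np.linspace(0, 1, n)
        f = np.sin(np.pi * x)
        u = np.zeros(n)
        for _ in range(20):
            u = two_grid(u, f, h)
        print("residual norm:", np.linalg.norm(residual(u, f, h)))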

  7. Evaluation of null-point detection methods on simulation data

    NASA Astrophysics Data System (ADS)

    Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano

    2014-05-01

    We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. The simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as is the case for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate on how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.

  8. Auditorium acoustics evaluation based on simulated impulse response

    NASA Astrophysics Data System (ADS)

    Wu, Shuoxian; Wang, Hongwei; Zhao, Yuezhe

    2001-05-01

    The impulse responses and other acoustical parameters of the Huangpu Teenager Palace in Guangzhou were measured. Meanwhile, acoustical simulation and auralization based on the software ODEON were also carried out. A comparison between the parameters obtained from computer simulation and those from measurement is given. This case study shows that the auralization technique based on computer simulation can be used to predict the acoustical quality of a hall at its design stage.

  9. A web-based repository of surgical simulator projects.

    PubMed

    Leskovský, Peter; Harders, Matthias; Székely, Gábor

    2006-01-01

    The use of computer-based surgical simulators for training of prospective surgeons has been a topic of research for more than a decade. As a result, a large number of academic projects have been carried out, and a growing number of commercial products are available on the market. Keeping track of all these endeavors for established groups as well as for newly started projects can be quite arduous. Gathering information on existing methods, already traveled research paths, and problems encountered is a time consuming task. To alleviate this situation, we have established a modifiable online repository of existing projects. It contains detailed information about a large number of simulator projects gathered from web pages, papers and personal communication. The database is modifiable (with password protected sections) and also allows for a simple statistical analysis of the collected data. For further information, the surgical repository web page can be found at www.virtualsurgery.vision.ee.ethz.ch.

  10. Grid generation and adaptation for the Direct Simulation Monte Carlo Method. [for complex flows past wedges and cones

    NASA Technical Reports Server (NTRS)

    Olynick, David P.; Hassan, H. A.; Moss, James N.

    1988-01-01

    A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.
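
    The transfinite interpolation step can be written down compactly: the interior grid is a blend of the four boundary curves minus a bilinear correction built from the corners. The sketch below is a hypothetical two-dimensional example with a made-up wedge-like lower wall, not the grids of the cited study.

        # Transfinite interpolation: blend four boundary curves of a quadrilateral domain
        # into an interior structured grid, subtracting the bilinear corner correction.
        # The boundary curves (a gentle wedge-like lower wall) are made up for illustration.
        import numpy as np

        ni, nj = 41, 21
        xi = np.linspace(0.0, 1.0, ni)[:, None]      # (ni, 1)
        eta = np.linspace(0.0, 1.0, nj)[None, :]     # (1, nj)

        def bottom(s):   return np.stack([s, 0.1 * s], axis=-1)          # inclined lower wall
        def top(s):      return np.stack([s, np.ones_like(s)], axis=-1)  # flat upper boundary
        def left(t):     return np.stack([np.zeros_like(t), t], axis=-1)
        def right(t):    return np.stack([np.ones_like(t), 0.1 + 0.9 * t], axis=-1)

        s = np.broadcast_to(xi, (ni, nj))
        t = np.broadcast_to(eta, (ni, nj))

        grid = ((1 - t)[..., None] * bottom(s) + t[..., None] * top(s)
                + (1 - s)[..., None] * left(t) + s[..., None] * right(t)
                - (1 - s)[..., None] * (1 - t)[..., None] * bottom(np.zeros_like(s))
                - (1 - s)[..., None] * t[..., None] * top(np.zeros_like(s))
                - s[..., None] * (1 - t)[..., None] * bottom(np.ones_like(s))
                - s[..., None] * t[..., None] * top(np.ones_like(s)))

        print(grid.shape)          # (ni, nj, 2): x, y coordinates of every grid node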

  11. Comparison of AGE and Spectral Methods for the Simulation of Far-Wakes

    NASA Technical Reports Server (NTRS)

    Bisset, D. K.; Rogers, M. M.; Kega, Dennis (Technical Monitor)

    1999-01-01

    Turbulent flow simulation methods based on finite differences are attractive for their simplicity, flexibility and efficiency, but not always for accuracy or stability. This report demonstrates that a good compromise is possible with the Advected Grid Explicit (AGE) method. AGE has proven to be both efficient and accurate for simulating turbulent free-shear flows, including planar mixing layers and planar jets. Its efficiency results from its localized fully explicit finite difference formulation (Bisset 1998a,b) that is very straightforward to compute, outweighing the need for a fairly small timestep. Also, most of the successful simulations were slightly under-resolved, and therefore they were, in effect, large-eddy simulations (LES) without a sub-grid-scale (SGS) model, rather than direct numerical simulations (DNS). The principle is that the role of the smallest scales of turbulent motion (when the Reynolds number is not too low) is to dissipate turbulent energy, and therefore they do not have to be simulated when the numerical method is inherently dissipative at its resolution limits. Such simulations are termed 'auto-LES' (LES with automatic SGS modeling) in this report.

  12. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database of 120 cases computed for a NACA 0012 airfoil. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.

  13. Fuzzy-based simulation of real color blindness.

    PubMed

    Lee, Jinmi; dos Santos, Wellington P

    2010-01-01

    About 8% of men are affected by color blindness. That population is at a disadvantage because they cannot perceive a substantial amount of visual information. This work presents two computational tools developed to assist color blind people. The first one tests for color blindness and assesses its severity. The second tool is based on Fuzzy Logic and implements a proposed method for simulating real red and green color blindness, in order to generate synthetic cases of color vision disturbance in statistically significant numbers. Our purpose is to develop correction tools and obtain a deeper understanding of the accessibility problems faced by people with chromatic visual impairment.

  14. Validation of Ultrafilter Performance Model Based on Systematic Simulant Evaluation

    SciTech Connect

    Russell, Renee L.; Billing, Justin M.; Smith, Harry D.; Peterson, Reid A.

    2009-11-18

    Because of limited availability of test data with actual Hanford tank waste samples, a method was developed to estimate expected filtration performance based on physical characterization data for the Hanford Tank Waste Treatment and Immobilization Plant. A test with simulated waste was analyzed to demonstrate that filtration of this class of waste is consistent with a concentration polarization model. Subsequently, filtration data from actual waste samples were analyzed to demonstrate that centrifuged solids concentrations provide a reasonable estimate of the limiting concentration for filtration.

  15. An Implicit Characteristic Based Method for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Briley, W. Roger

    2001-01-01

    An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.

  16. Multi-scale simulation method for electroosmotic flows

    NASA Astrophysics Data System (ADS)

    Guo, Lin; Chen, Shiyi; Robbins, Mark O.

    2016-10-01

    Electroosmotic transport in micro- and nano-channels has important applications in biological and engineering systems but is difficult to model because nanoscale structure near surfaces impacts flow throughout the channel. We develop an efficient multi-scale simulation method that treats near-wall and bulk subdomains with different physical descriptions and couples them through a finite overlap region. Molecular dynamics is used in the near-wall subdomain where the ion density is inconsistent with continuum models and the discrete structure of solvent molecules is important. In the bulk region the solvent is treated as a continuum fluid described by the incompressible Navier-Stokes equations with thermal fluctuations. A discrete description of ions is retained because of the low density of ions and the long range of electrostatic interactions. A stochastic Euler-Lagrangian method is used to simulate the dynamics of these ions in the implicit continuum solvent. The overlap region allows free exchange of solvent and ions between the two subdomains. The hybrid approach is validated against full molecular dynamics simulations for different geometries and types of flows.

  17. Study on self-calibration angle encoder using simulation method

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Xue, Zi; Huang, Yao; Wang, Xiaona

    2016-01-01

    Angle measurement technology is very important in precision manufacturing, the optical industry, aerospace, aviation and navigation. The angle encoder, which is based on the concept of subdividing the full circle (2π rad = 360°) and transforms an angle into a number of electronic pulses, is the most common instrument for angle measurement. To improve the accuracy of the angle encoder, a novel self-calibration method was proposed that enables the angle encoder to calibrate itself without an angle reference. For the study of the self-calibration method, an angle deviation curve over 0° to 360° was simulated from Fourier components with equal weights, and the self-calibration algorithm was applied to this deviation curve. The simulation results show the relationship between the arrangement of multiple reading heads and the Fourier component distribution of the encoder deviation curve. In addition, an actual self-calibrating angle encoder was calibrated against a polygon angle standard at the National Institute of Metrology, China. The experimental results indicate the actual self-calibration effect on the Fourier component distribution of the encoder deviation curve. The comparison between the simulated and experimental self-calibration results shows good consistency and demonstrates the reliability of the self-calibrating angle encoder.
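
    The core of the simulation described above can be reproduced with a short script: a deviation curve is synthesized from equal-weight Fourier components, and averaging the curve as seen by N equally spaced reading heads cancels every Fourier order except multiples of N. The number of harmonics, their phases, and the head counts below are illustrative assumptions, not the values used in the study.

        import numpy as np

        theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
        orders = np.arange(1, 51)                     # Fourier orders present in the deviation curve
        phases = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, orders.size)

        # Deviation curve built from equal-weight Fourier components (arbitrary angle units).
        deviation = sum(np.cos(k * theta + p) for k, p in zip(orders, phases))

        def eda_average(n_heads):
            # Average of the deviation curve as seen by n_heads equally spaced reading heads.
            shifts = 2.0 * np.pi * np.arange(n_heads) / n_heads
            avg = np.zeros_like(theta)
            for k, p in zip(orders, phases):
                for s in shifts:
                    avg += np.cos(k * (theta + s) + p)
            return avg / n_heads

        print(f"single head rms deviation: {deviation.std():.3f}")
        for n_heads in (3, 5, 8):
            residual = eda_average(n_heads)
            surviving = [int(k) for k in orders if k % n_heads == 0]
            print(f"{n_heads} heads: rms residual {residual.std():.3f}, surviving orders {surviving}")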

  18. Investigation on Accelerating Dust Storm Simulation via Domain Decomposition Methods

    NASA Astrophysics Data System (ADS)

    Yu, M.; Gui, Z.; Yang, C. P.; Xia, J.; Chen, S.

    2014-12-01

    Dust storm simulation is a data- and computing-intensive process which requires high efficiency and adequate computing resources. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in parallel, the computing performance can be significantly improved. However, it remains an open question how to allocate these subdomain processes to computing nodes without introducing imbalanced task loads and unnecessary communication among nodes. Here we propose a domain decomposition and allocation framework that carefully balances the computing cost and communication cost of each computing node to minimize total execution time and reduce overall communication cost for the entire system. The framework is tested with the NMM (Nonhydrostatic Mesoscale Model)-dust model, where a 72-hour dust load process is simulated. Performance results using the proposed scheduling method are compared with those obtained using the default MPI scheduling. The results demonstrate that the system improves simulation performance by 20% to 80%.
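
    The allocation problem described above can be illustrated with a simple greedy load-balancing sketch: subdomains are assigned, largest estimated cost first, to whichever node currently carries the least load. This is a generic heuristic, not the scheduling algorithm proposed in the study, and the per-subdomain costs are hypothetical.

        import heapq

        def allocate(subdomain_costs, n_nodes):
            # Greedy longest-processing-time heuristic: assign each subdomain, largest
            # estimated cost first, to the currently least-loaded computing node.
            heap = [(0.0, node, []) for node in range(n_nodes)]
            heapq.heapify(heap)
            for name, cost in sorted(subdomain_costs.items(), key=lambda kv: -kv[1]):
                load, node, assigned = heapq.heappop(heap)
                assigned.append(name)
                heapq.heappush(heap, (load + cost, node, assigned))
            return sorted(heap, key=lambda entry: entry[1])

        # Hypothetical per-subdomain compute costs (e.g. proportional to cells x time steps).
        costs = {f"sub{i}": c for i, c in enumerate([9.0, 7.5, 7.0, 4.0, 3.5, 3.0, 2.0, 1.0])}
        for load, node, assigned in allocate(costs, 3):
            print(f"node {node}: load {load:4.1f}, subdomains {assigned}")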

  19. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    SciTech Connect

    Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.; Pasquali, Andrea; Schonherr, Martin; Kim, Kyungjoo; Perego, Mauro; Parks, Michael L.; Trask, Nathaniel; Balhoff, Matthew T.; Richmond, Marshall C.; Geier, Martin; Krafczyk, Manfred; Luo, Li -Shi; Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2015-09-28

    In this study, multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include (1) methods that explicitly model the three-dimensional geometry of pore spaces and (2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support

  20. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.; Pasquali, Andrea; Schönherr, Martin; Kim, Kyungjoo; Perego, Mauro; Parks, Michael L.; Trask, Nathaniel; Balhoff, Matthew T.; Richmond, Marshall C.; Geier, Martin; Krafczyk, Manfred; Luo, Li-Shi; Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2016-09-01

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support for confidence

  1. Model-Building Tools for Simulation-Based Training.

    ERIC Educational Resources Information Center

    Towne, Douglas M.; And Others

    1990-01-01

    Explains the Intelligent Maintenance Training System that allows a nonprogramming subject matter expert to produce an interactive graphical model of a complex device for computer simulation. Previous simulation-based training systems are reviewed; simulation algorithms are described; and the student interface is discussed. (Contains 24…

  2. Porting a Ptolemy-Based Simulation Between Sites,

    DTIC Science & Technology

    2007-11-02

    Sanders RASSP contract and to document the process of porting a Ptolemy-based simulation between two sites. It is beneficial to have this capability to...as well as the design environment in which the simulation was built, Ptolemy. The difficulties encountered in porting this particular simulation

  3. Simulation of Experimental Parameters of RC Beams by Employing the Polynomial Regression Method

    NASA Astrophysics Data System (ADS)

    Sayin, B.; Sevgen, S.; Samli, R.

    2016-07-01

    A numerical model based on the polynomial regression method is developed to simulate the mechanical behavior of reinforced concrete beams strengthened with a carbon-fiber-reinforced polymer and subjected to four-point bending. The results obtained are in good agreement with data from laboratory tests.
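
    As a schematic of the approach, the sketch below fits a polynomial regression model to a hypothetical load-deflection record from a four-point bending test; the data points and polynomial degree are invented for illustration and are not the experimental values from the study.

        import numpy as np

        # Hypothetical four-point-bending test data: applied load (kN) vs. midspan deflection (mm).
        load = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
        deflection = np.array([0.0, 1.1, 2.3, 3.8, 5.6, 7.9, 10.8, 14.5, 19.1])

        # Fit a cubic polynomial regression model as a stand-in for the paper's regression model.
        coeffs = np.polyfit(load, deflection, deg=3)
        model = np.poly1d(coeffs)

        for p in (25.0, 55.0, 75.0):
            print(f"load {p:5.1f} kN -> predicted deflection {model(p):.2f} mm")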

  4. Citizen Decision Making, Reflective Thinking and Simulation Gaming: A Marriage of Purpose, Method and Strategy.

    ERIC Educational Resources Information Center

    White, Charles S.

    1985-01-01

    A conception of citizen decision making based on participatory democratic theory is most likely to foster effective citizenship. An examination of social studies traditions suggests that reflective thinking as a teaching method is congenial to this conception. Simulation gaming is a potentially powerful instructional strategy for supporting…

  5. New Simulation Methods to Facilitate Achieving a Mechanistic Understanding of Basic Pharmacology Principles in the Classroom

    ERIC Educational Resources Information Center

    Grover, Anita; Lam, Tai Ning; Hunt, C. Anthony

    2008-01-01

    We present a simulation tool to aid the study of basic pharmacology principles. By taking advantage of the properties of agent-based modeling, the tool facilitates taking a mechanistic approach to learning basic concepts, in contrast to the traditional empirical methods. Pharmacodynamics is a particular aspect of pharmacology that can benefit from…

  6. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  7. Amyloid oligomer structure characterization from simulations: A general method

    SciTech Connect

    Nguyen, Phuong H.; Li, Mai Suan

    2014-03-07

    Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ9-40, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.
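
    One way to picture the product-basis construction is sketched below: each chain carries an intramolecular state label, each pair of chains an intermolecular state label, and the oligomer state is the combination of the two sets, made permutation-invariant here simply by sorting the labels. This is a schematic illustration of how permutation degeneracy can be removed, not the clustering procedure used in the paper, and all state labels are hypothetical.

        from itertools import combinations

        def oligomer_state(single_states, pair_states):
            # Combine per-chain (intramolecular) and per-pair (intermolecular) state labels
            # into one permutation-invariant oligomer label; sorting removes the degeneracy
            # due to chemically identical chains (a coarse, lossy canonicalization).
            intra = tuple(sorted(single_states))
            inter = tuple(sorted(pair_states.values()))
            return intra, inter

        # Hypothetical tetramer snapshot: chain conformation classes A/B and pairwise
        # arrangement classes (e.g. parallel 'p', antiparallel 'a', unbound '-').
        chains = ["A", "B", "A", "A"]
        pairs = {pair: s for pair, s in zip(combinations(range(4), 2),
                                            ["p", "a", "-", "p", "-", "a"])}

        # Any relabeling (permutation) of the identical chains yields the same oligomer label.
        print(oligomer_state(chains, pairs))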

  8. Amyloid oligomer structure characterization from simulations: A general method

    NASA Astrophysics Data System (ADS)

    Nguyen, Phuong H.; Li, Mai Suan; Derreumaux, Philippe

    2014-03-01

    Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ9-40, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.

  9. Amyloid oligomer structure characterization from simulations: a general method.

    PubMed

    Nguyen, Phuong H; Li, Mai Suan; Derreumaux, Philippe

    2014-03-07

    Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ9-40, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.

  10. Construction of dynamic stochastic simulation models using knowledge-based techniques

    NASA Technical Reports Server (NTRS)

    Williams, M. Douglas; Shiva, Sajjan G.

    1990-01-01

    Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).

  11. Large-eddy simulation using the finite element method

    SciTech Connect

    McCallen, R.C.; Gresho, P.M.; Leone, J.M. Jr.; Kollmann, W.

    1993-10-01

    In a large-eddy simulation (LES) of turbulent flows, the large-scale motion is calculated explicitly, while the effect of the unresolved small scales is modeled (i.e., approximated with semi-empirical relations). Typically, finite difference or spectral numerical schemes are used to generate an LES; the use of finite element methods (FEM) has been far less prominent. In this study, we demonstrate that FEM in combination with LES provides a viable tool for the study of turbulent, separating channel flows, specifically the flow over a two-dimensional backward-facing step. The combination of these methodologies brings together the advantages of each: LES provides a high degree of accuracy with a minimum of empiricism for turbulence modeling, and FEM provides a robust way to simulate flow in very complex domains of practical interest. Such a combination should prove very valuable to the engineering community.

  12. [Numerical flow simulation : A new method for assessing nasal breathing].

    PubMed

    Hildebrandt, T; Osman, J; Goubergrits, L

    2016-08-01

    The current options for objective assessment of nasal breathing are limited. The maximum they can determine is the total nasal resistance. Possibilities to analyze the endonasal airstream are lacking. In contrast, numerical flow simulation is able to provide detailed information of the flow field within the nasal cavity. Thus, it has the potential to analyze the nasal airstream of an individual patient in a comprehensive manner and only a computed tomography (CT) scan of the paranasal sinuses is required. The clinical application is still limited due to the necessary technical and personnel resources. In particular, a statistically based referential characterization of normal nasal breathing does not yet exist in order to be able to compare and classify the simulation results.

  13. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what the most suitable criteria for adaptation are. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times

  14. Computer-based simulator for radiology: an educational tool.

    PubMed

    Towbin, Alexander J; Paterson, Brian E; Chang, Paul J

    2008-01-01

    In the past decade, radiology has moved from being predominantly film based to predominantly digital. Although in clinical terms the transition has been relatively smooth, the method in which radiology is taught has not kept pace. Simulator programs have proved effective in other specialties as a method for teaching a specific skill set. Because many radiologists already work in the digital environment, a simulator could easily and safely be integrated with a picture archiving and communication system (PACS) and become a powerful tool for radiology education. Thus, a simulator program was designed for the specific purpose of giving residents practice in reading images independently, thereby helping them to prepare more fully for the rigors of being on call. The program is similar to a typical PACS, thus allowing a more interactive learning process, and closely mimics the real-world practice of radiology to help prepare the user for a variety of clinical scenarios. Besides education, other possible uses include certification, testing, and the creation of teaching files.

  15. Light and Human Vision Based Simulation Technology

    DTIC Science & Technology

    2009-10-01

    windshields and materials, display emission, infrared observation, street lighting, opponents and moving targets. 1.0 INTRODUCTION Designing a realtime...in a scene. Targeted effects include, but are not limited to, natural lighting, human machine interfaces, reflections on windshields and materials...qualities. In driving simulation, some systems are focused on night driving and virtual testing of headlamps enabling you to simulate the effect of

  16. A hybrid method for flood simulation in small catchments combining hydrodynamic and hydrological techniques

    NASA Astrophysics Data System (ADS)

    Bellos, Vasilis; Tsakiris, George

    2016-09-01

    The study presents a new hybrid method for the simulation of flood events in small catchments. It combines a physically-based two-dimensional hydrodynamic model and the hydrological unit hydrograph theory. Unit hydrographs are derived using the FLOW-R2D model, which is based on the full form of the two-dimensional Shallow Water Equations, solved by a modified McCormack numerical scheme. The method is tested on a small catchment in a suburb of Athens, Greece, for a storm event which occurred in February 2013. The catchment is divided into three friction zones, and unit hydrographs of 15 and 30 min are produced. The infiltration process is simulated by the empirical Kostiakov equation and the Green-Ampt model. The results from the implementation of the proposed hybrid method are compared with recorded data at the hydrometric station at the outlet of the catchment and with the results derived from the fully hydrodynamic model FLOW-R2D. It is concluded that, for the case studied, the proposed hybrid method produces results close to those of the fully hydrodynamic simulation at substantially shorter computational time. This finding, if further verified in a variety of case studies, can be useful in devising effective hybrid tools for two-dimensional flood simulations which lead to accurate and considerably faster results than those achieved by fully hydrodynamic simulations.
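
    For the infiltration component mentioned above, the Kostiakov equation gives cumulative infiltration F(t) = a t^b and infiltration rate f(t) = a b t^(b-1). The sketch below subtracts Kostiakov infiltration from a hypothetical hyetograph to obtain the rainfall excess that would drive the unit hydrograph; the parameter values and rainfall series are illustrative, not those of the Athens catchment.

        import numpy as np

        def kostiakov_rate(t_hours, a=12.0, b=0.45):
            # Kostiakov infiltration: cumulative F = a*t**b  ->  rate f = a*b*t**(b-1)  (mm/h).
            t = np.maximum(t_hours, 1e-6)          # avoid the singularity at t = 0
            return a * b * t ** (b - 1.0)

        # Hypothetical storm hyetograph (mm/h) at 15-minute steps.
        dt = 0.25
        rain = np.array([4, 8, 20, 35, 25, 12, 6, 2], dtype=float)
        t = dt * (np.arange(rain.size) + 0.5)

        infil = np.minimum(kostiakov_rate(t), rain)   # cannot infiltrate more than it rains
        effective = rain - infil                      # rainfall excess feeding the unit hydrograph

        print("effective rainfall per step (mm):", np.round(effective * dt, 2))
        print("total rainfall excess (mm):", round(float((effective * dt).sum()), 2))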

  17. Mosquito population dynamics from cellular automata-based simulation

    NASA Astrophysics Data System (ADS)

    Syafarina, Inna; Sadikin, Rifki; Nuraini, Nuning

    2016-02-01

    In this paper we present an innovative model for simulating mosquito-vector population dynamics. The simulation consists of two stages: demography and dispersal dynamics. For the demography simulation, we follow an existing model of the mosquito life cycle. For dispersal of the vector, we use a cellular automata-based model in which each individual vector can move to other grid cells by a random walk. Our model is also capable of representing an immunity factor for each grid cell. We ran simulations to evaluate the model's correctness and conclude that the model behaves correctly. However, it still needs to be improved with realistic parameters to match real data.
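
    The dispersal stage described above can be sketched as a simple stochastic cellular automaton: each mosquito attempts a random-walk move to a neighboring cell, and the move is accepted with a probability given by that cell's suitability (standing in for the per-cell immunity factor). Grid size, step count, and the suitability field are illustrative assumptions, and the demography stage is omitted.

        import numpy as np

        rng = np.random.default_rng(1)
        GRID = 50
        steps = np.array([(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)])   # 4-neighbour moves or stay

        # Hypothetical per-cell suitability factor in [0, 1]; a move into a cell is accepted
        # with this probability, otherwise the mosquito stays where it is.
        suitability = rng.uniform(0.3, 1.0, size=(GRID, GRID))

        # Initial adult mosquito positions.
        pos = rng.integers(0, GRID, size=(500, 2))

        for _ in range(100):                                           # dispersal steps
            move = steps[rng.integers(0, len(steps), size=len(pos))]
            trial = (pos + move) % GRID                                # periodic boundaries
            accept = rng.random(len(pos)) < suitability[trial[:, 0], trial[:, 1]]
            pos[accept] = trial[accept]

        density, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=GRID,
                                       range=[[0, GRID], [0, GRID]])
        print("occupied cells:", int((density > 0).sum()), "| max per cell:", int(density.max()))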

  18. A multi-stage method for connecting participatory sensing and noise simulations.

    PubMed

    Hu, Mingyuan; Che, Weitao; Zhang, Qiuju; Luo, Qingli; Lin, Hui

    2015-01-22

    Most simulation-based noise maps are important for official noise assessment but lack local noise characteristics. The main reasons for this lack of information are that official noise simulations only provide information about expected noise levels, which is limited by the use of large-scale monitoring of noise sources, and are updated infrequently. With the emergence of smart cities and ubiquitous sensing, the possible improvements enabled by sensing technologies provide the possibility to resolve this problem. This study proposed an integrated methodology to propel participatory sensing from its current random and distributed sampling origins to professional noise simulation. The aims of this study were to effectively organize the participatory noise data, to dynamically refine the granularity of the noise features on road segments (e.g., different portions of a road segment), and then to provide a reasonable spatio-temporal data foundation to support noise simulations, which can be of help to researchers in understanding how participatory sensing can play a role in smart cities. This study first discusses the potential limitations of the current participatory sensing and simulation-based official noise maps. Next, we explain how participatory noise data can contribute to a simulation-based noise map by providing (1) spatial matching of the participatory noise data to the virtual partitions at a more microscopic level of road networks; (2) multi-temporal scale noise estimations at the spatial level of virtual partitions; and (3) dynamic aggregation of virtual partitions by comparing the noise values at the relevant temporal scale to form a dynamic segmentation of each road segment to support multiple spatio-temporal noise simulations. In this case study, we demonstrate how this method could play a significant role in a simulation-based noise map. Together, these results demonstrate the potential benefits of participatory noise data as dynamic input sources for

  19. Lattice Boltzmann simulation of rising bubble dynamics using an effective buoyancy method

    NASA Astrophysics Data System (ADS)

    Ngachin, Merlin; Galdamez, Rinaldo G.; Gokaltun, Seckin; Sukop, Michael C.

    2015-08-01

    This study describes the behavior of bubbles rising under gravity using the Shan and Chen-type multicomponent multiphase lattice Boltzmann method (LBM) [X. Shan and H. Chen, Phys. Rev. E 47, 1815 (1993)]. Two-dimensional (2D) single bubble motions were simulated, considering the buoyancy effect, for which the topology of the bubble was characterized by the nondimensional Eötvös (Eo) and Morton (M) numbers. In this study, a new approach based on the "effective buoyancy" was adopted and proven to be consistent with the expected bubble shape deformation. This approach expands the range of effective density differences between the bubble and the liquid that can be simulated. Based on the balance of forces acting on the bubble, it can deform from a spherical to an ellipsoidal shape with skirts appearing at high Eo numbers. A benchmark computational case for qualitative and quantitative validation was performed using COMSOL Multiphysics based on the level set method. Simulations were conducted for 1 ≤ Eo ≤ 100 and 3 × 10-6 ≤ M ≤ 2.73 × 10-3. Interfacial tension was checked through simulations without gravity, where Laplace's law was satisfied. Finally, quantitative analyses based on the terminal rise velocity and the degree of circularity were performed for various Eo and M values. Our results were compared with both the theoretical shape regimes given in the literature and available simulation results.
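
    The two dimensionless groups used above, and the Laplace-law check, follow directly from their definitions: Eo = Δρ g d²/σ, M = g μ⁴ Δρ/(ρ² σ³), and for a 2D bubble the Laplace pressure jump is σ/R. The fluid properties in the sketch below are hypothetical and serve only to show the calculation, not the parameter sets used in the simulations.

        def eotvos(drho, g, d, sigma):
            # Eo = buoyancy force / surface tension force.
            return drho * g * d ** 2 / sigma

        def morton(drho, g, mu_l, rho_l, sigma):
            # M groups liquid properties only: M = g * mu_l**4 * drho / (rho_l**2 * sigma**3).
            return g * mu_l ** 4 * drho / (rho_l ** 2 * sigma ** 3)

        # Hypothetical air bubble in a water-like liquid (SI units).
        rho_l, rho_g = 1000.0, 1.2
        mu_l, sigma, g, d = 1.0e-3, 0.072, 9.81, 5.0e-3

        print("Eo =", round(eotvos(rho_l - rho_g, g, d, sigma), 2))
        print("M  =", f"{morton(rho_l - rho_g, g, mu_l, rho_l, sigma):.2e}")

        # Laplace-law check used to verify interfacial tension without gravity:
        # for a circular (2D) bubble of radius R, delta_p = sigma / R.
        R = 1.0e-3
        print("2D Laplace pressure jump =", sigma / R, "Pa")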

  20. Development of modelling method selection tool for health services management: From problem structuring methods to modelling and simulation methods

    PubMed Central

    2011-01-01

    Background There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. Aim The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. Methods This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). Results The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce let alone money and time. Conclusions A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection. PMID:21595946

  1. A hybrid Boundary Element Unstructured Transmission-line (BEUT) method for accurate 2D electromagnetic simulation

    NASA Astrophysics Data System (ADS)

    Simmons, Daniel; Cools, Kristof; Sewell, Phillip

    2016-11-01

    Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former surface based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter volume based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.

  2. Simulation of polarization-sensitive optical coherence tomography images by a Monte Carlo method.

    PubMed

    Meglinski, Igor; Kirillin, Mikhail; Kuzmin, Vladimir; Myllylä, Risto

    2008-07-15

    We introduce a new Monte Carlo (MC) method for simulating optical coherence tomography (OCT) images of complex multilayered turbid scattering media. We demonstrate, for the first time to our knowledge, the use of an MC technique to imitate two-dimensional polarization-sensitive OCT images with nonplanar layer boundaries in a medium such as human skin. The simulation of polarized low-coherence optical radiation is based on the vector approach generalized from the iterative procedure for the solution of the Bethe-Salpeter equation. The performance of the developed method is demonstrated for both conventional and polarization-sensitive OCT modalities.

  3. Accelerated GPU simulation of compressible flow by the discontinuous evolution Galerkin method

    NASA Astrophysics Data System (ADS)

    Block, B. J.; Lukáčová-Medvid'ová, M.; Virnau, P.; Yelash, L.

    2012-08-01

    The aim of the present paper is to report on our recent results for GPU-accelerated simulations of compressible flows. For the numerical simulation, the adaptive discontinuous Galerkin method with the multidimensional bicharacteristic-based evolution Galerkin operator has been used. For time discretization we have applied the explicit third-order Runge-Kutta method. Evaluation of the genuinely multidimensional evolution operator has been accelerated using a GPU implementation. We have obtained a speedup of up to 30 (in comparison to a single CPU core) for the calculation of the evolution Galerkin operator on a typical discretization mesh consisting of 16384 mesh cells.

  4. Comparison of Different Methods of Grading a Level Turn Task on a Flight Simulator

    NASA Technical Reports Server (NTRS)

    Heath, Bruce E.; Crier, Tomyka

    2003-01-01

    With the advancements in the computing power of personal computers, PC-based flight simulators and trainers have opened new avenues in the training of airplane pilots. It may be desirable to have the flight simulator make a quantitative evaluation of the progress of a pilot's training, thereby reducing the physical requirement on the flight instructor who must, in turn, watch every flight. In an experiment, university students conducted six different flights, each consisting of two level turns. The flights were three minutes in duration. By evaluating videotapes, two certified flight instructors provided separate letter grades for each turn. These level turns were also evaluated using two other computer-based grading methods. One method determined automated grades based on prescribed tolerances in bank angle, airspeed and altitude. The other method used deviations in altitude and bank angle to form a performance index and performance grades.
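
    A tolerance-based automated grade of the kind described above can be sketched as follows: the fraction of telemetry samples within prescribed bank, altitude, and airspeed tolerances is mapped to a letter grade. The targets, tolerances, and grade cutoffs below are hypothetical, not the values used in the experiment.

        def grade_turn(samples, bank_target=30.0, alt_target=3000.0, speed_target=100.0,
                       bank_tol=5.0, alt_tol=100.0, speed_tol=10.0):
            # Fraction of samples within all three tolerances, mapped to a letter grade.
            within = [abs(b - bank_target) <= bank_tol and
                      abs(a - alt_target) <= alt_tol and
                      abs(v - speed_target) <= speed_tol
                      for b, a, v in samples]
            score = sum(within) / len(within)
            for cutoff, letter in ((0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")):
                if score >= cutoff:
                    return letter, score
            return "F", score

        # Hypothetical telemetry samples: (bank angle deg, altitude ft, airspeed kt).
        samples = [(29, 3010, 101), (33, 3040, 103), (27, 2980, 97),
                   (38, 3120, 108), (31, 3005, 100), (30, 2995, 99)]
        print(grade_turn(samples))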

  5. Traffic and Driving Simulator Based on Architecture of Interactive Motion

    PubMed Central

    Paz, Alexander; Veeramisti, Naveen; Khaddar, Romesh; de la Fuente-Mella, Hanns; Modorcea, Luiza

    2015-01-01

    This study proposes an architecture for an interactive motion-based traffic simulation environment. In order to enhance modeling realism involving actual human beings, the proposed architecture integrates multiple types of simulation, including: (i) motion-based driving simulation, (ii) pedestrian simulation, (iii) motorcycling and bicycling simulation, and (iv) traffic flow simulation. The architecture has been designed to enable the simulation of the entire network; as a result, the actual driver, pedestrian, and bike rider can navigate anywhere in the system. In addition, the background traffic interacts with the actual human beings. This is accomplished by using a hybrid mesomicroscopic traffic flow simulation modeling approach. The mesoscopic traffic flow simulation model loads the results of a user equilibrium traffic assignment solution and propagates the corresponding traffic through the entire system. The microscopic traffic flow simulation model provides background traffic around the vicinities where actual human beings are navigating the system. The two traffic flow simulation models interact continuously to update system conditions based on the interactions between actual humans and the fully simulated entities. Implementation efforts are currently in progress and some preliminary tests of individual components have been conducted. The implementation of the proposed architecture faces significant challenges ranging from multiplatform and multilanguage integration to multievent communication and coordination. PMID:26491711

  6. Traffic and Driving Simulator Based on Architecture of Interactive Motion.

    PubMed

    Paz, Alexander; Veeramisti, Naveen; Khaddar, Romesh; de la Fuente-Mella, Hanns; Modorcea, Luiza

    2015-01-01

    This study proposes an architecture for an interactive motion-based traffic simulation environment. In order to enhance modeling realism involving actual human beings, the proposed architecture integrates multiple types of simulation, including: (i) motion-based driving simulation, (ii) pedestrian simulation, (iii) motorcycling and bicycling simulation, and (iv) traffic flow simulation. The architecture has been designed to enable the simulation of the entire network; as a result, the actual driver, pedestrian, and bike rider can navigate anywhere in the system. In addition, the background traffic interacts with the actual human beings. This is accomplished by using a hybrid mesomicroscopic traffic flow simulation modeling approach. The mesoscopic traffic flow simulation model loads the results of a user equilibrium traffic assignment solution and propagates the corresponding traffic through the entire system. The microscopic traffic flow simulation model provides background traffic around the vicinities where actual human beings are navigating the system. The two traffic flow simulation models interact continuously to update system conditions based on the interactions between actual humans and the fully simulated entities. Implementation efforts are currently in progress and some preliminary tests of individual components have been conducted. The implementation of the proposed architecture faces significant challenges ranging from multiplatform and multilanguage integration to multievent communication and coordination.

  7. High-order finite element methods for cardiac monodomain simulations

    PubMed Central

    Vincent, Kevin P.; Gonzales, Matthew J.; Gillette, Andrew K.; Villongco, Christopher T.; Pezzuto, Simone; Omens, Jeffrey H.; Holst, Michael J.; McCulloch, Andrew D.

    2015-01-01

    Computational modeling of tissue-scale cardiac electrophysiology requires numerically converged solutions to avoid spurious artifacts. The steep gradients inherent to cardiac action potential propagation necessitate fine spatial scales and therefore a substantial computational burden. The use of high-order interpolation methods has previously been proposed for these simulations due to their theoretical convergence advantage. In this study, we compare the convergence behavior of linear Lagrange, cubic Hermite, and the newly proposed cubic Hermite-style serendipity interpolation methods for finite element simulations of the cardiac monodomain equation. The high-order methods reach converged solutions with fewer degrees of freedom and longer element edge lengths than traditional linear elements. Additionally, we propose a dimensionless number, the cell Thiele modulus, as a more useful metric for determining solution convergence than element size alone. Finally, we use the cell Thiele modulus to examine convergence criteria for obtaining clinically useful activation patterns for applications such as patient-specific modeling where the total activation time is known a priori. PMID:26300783

  8. High-order finite element methods for cardiac monodomain simulations.

    PubMed

    Vincent, Kevin P; Gonzales, Matthew J; Gillette, Andrew K; Villongco, Christopher T; Pezzuto, Simone; Omens, Jeffrey H; Holst, Michael J; McCulloch, Andrew D

    2015-01-01

    Computational modeling of tissue-scale cardiac electrophysiology requires numerically converged solutions to avoid spurious artifacts. The steep gradients inherent to cardiac action potential propagation necessitate fine spatial scales and therefore a substantial computational burden. The use of high-order interpolation methods has previously been proposed for these simulations due to their theoretical convergence advantage. In this study, we compare the convergence behavior of linear Lagrange, cubic Hermite, and the newly proposed cubic Hermite-style serendipity interpolation methods for finite element simulations of the cardiac monodomain equation. The high-order methods reach converged solutions with fewer degrees of freedom and longer element edge lengths than traditional linear elements. Additionally, we propose a dimensionless number, the cell Thiele modulus, as a more useful metric for determining solution convergence than element size alone. Finally, we use the cell Thiele modulus to examine convergence criteria for obtaining clinically useful activation patterns for applications such as patient-specific modeling where the total activation time is known a priori.

  9. Rapid simulation of spatial epidemics: a spectral method.

    PubMed

    Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J

    2015-04-07

    Spatial structure, and hence the spatial position of host populations, plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding, especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such, this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane, and the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel.
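
    The key step of the FSR method, computing the force of infection on every susceptible as a convolution of the transmission kernel with an 'image' of the infection state, can be sketched on a regular grid with FFTs. The grid size, kernel shape, and rate constants below are illustrative assumptions; the paper's implementation handles irregularly located habitats and farms.

        import numpy as np

        n = 256                                         # grid cells per side
        rng = np.random.default_rng(2)

        # "Image" of infection: number of infectious individuals in each grid cell.
        infectious = np.zeros((n, n))
        infectious[rng.integers(0, n, 30), rng.integers(0, n, 30)] = 1.0

        # Isotropic transmission kernel (exponential decay, purely illustrative),
        # arranged with its origin at cell [0, 0] for circular convolution.
        x = np.arange(n)
        dx = np.minimum(x, n - x)                       # periodic distance along one axis
        r = np.hypot(dx[:, None], dx[None, :])
        kernel = np.exp(-r / 5.0)

        # Spatial force of infection on every cell via FFT convolution (the FSR idea).
        foi = np.real(np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(infectious)))

        # Stochastic per-cell infection probability over a small time step dt.
        dt, beta = 0.1, 0.02
        p_inf = 1.0 - np.exp(-beta * foi * dt)
        print("max per-cell infection probability:", float(p_inf.max()))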

  10. Benchmark Study of 3D Pore-scale Flow and Solute Transport Simulation Methods

    NASA Astrophysics Data System (ADS)

    Scheibe, T. D.; Yang, X.; Mehmani, Y.; Perkins, W. A.; Pasquali, A.; Schoenherr, M.; Kim, K.; Perego, M.; Parks, M. L.; Trask, N.; Balhoff, M.; Richmond, M. C.; Geier, M.; Krafczyk, M.; Luo, L. S.; Tartakovsky, A. M.

    2015-12-01

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that benchmark study to include additional models of the first type based on the immersed-boundary method (IMB), lattice Boltzmann method (LBM), and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries in the manner of PNMs has not been fully determined. We apply all five approaches (FVM-based CFD, IMB, LBM, SPH and PNM) to simulate pore-scale velocity distributions and nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The benchmark study was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support for confidence in a variety of pore-scale modeling methods, and motivates further development and application of pore-scale simulation methods.

  11. Methodical aspects of text testing in a driving simulator.

    PubMed

    Sundin, A; Patten, C J D; Bergmark, M; Hedberg, A; Iraeus, I-M; Pettersson, I

    2012-01-01

    A test with 30 test persons was conducted in a driving simulator. The test was a concept exploration and comparison of existing user interaction technologies for text message handling, with a focus on traffic safety and experience (technology familiarity and learning effects). Emphasis was placed on methodical aspects of how to measure and how to analyze the data. Results show difficulties with the eye tracking system itself (calibration, etc.) as well as with the subsequent raw data preparation. The physical setup in the car was found to be important for test completion.

  12. Structure identification methods for atomistic simulations of crystalline materials

    DOE PAGES

    Stukowski, Alexander

    2012-05-28

    Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis and Voronoi analysis. In addition we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.
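
    Of the techniques compared above, centrosymmetry analysis is the simplest to sketch: for each atom, opposite pairs of nearest-neighbour vectors are summed, and the parameter is the sum of the squared pair sums, near zero in a perfect centrosymmetric lattice and large near defects. The greedy choice of the smallest pair sums below is one common way to form the pairs; neighbour-list construction is omitted and the example vectors are simply a perfect fcc shell.

        import numpy as np
        from itertools import combinations

        def centrosymmetry(neighbor_vectors):
            # Centrosymmetry parameter: sum |r_i + r_j|^2 over the N/2 disjoint pairs
            # with the smallest pair sums (greedy pairing of near-opposite neighbours).
            n = len(neighbor_vectors)
            pairs = sorted((float(np.sum((neighbor_vectors[i] + neighbor_vectors[j]) ** 2)), i, j)
                           for i, j in combinations(range(n), 2))
            used, csp, taken = set(), 0.0, 0
            for value, i, j in pairs:
                if i not in used and j not in used:
                    used.update((i, j))
                    csp += value
                    taken += 1
                    if taken == n // 2:
                        break
            return csp

        # The 12 nearest-neighbour vectors of a perfect fcc atom (lattice constant a = 1).
        fcc = np.array([(0.5, 0.5, 0), (-0.5, -0.5, 0), (0.5, -0.5, 0), (-0.5, 0.5, 0),
                        (0.5, 0, 0.5), (-0.5, 0, -0.5), (0.5, 0, -0.5), (-0.5, 0, 0.5),
                        (0, 0.5, 0.5), (0, -0.5, -0.5), (0, 0.5, -0.5), (0, -0.5, 0.5)])
        print("perfect fcc     :", centrosymmetry(fcc))                         # ~0
        print("atom off-centre :", round(centrosymmetry(fcc + [0.05, 0, 0]), 3))  # > 0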

  13. Simulations of string vibrations with boundary conditions of third kind using the functional transformation method

    NASA Astrophysics Data System (ADS)

    Trautmann, L.; Petrausch, S.; Bauer, M.

    2005-09-01

    The functional transformation method (FTM) is an established mathematical method for accurate simulation of multidimensional physical systems from various fields of science, including optics, heat and mass transfer, electrical engineering, and acoustics. It is a frequency-domain method based on the decomposition into eigenvectors and eigenfrequencies of the underlying physical problem. In this article, the FTM is applied to real-time simulations of vibrating strings which are ideally fixed at one end, while the termination at the other end is modeled by a frequency-dependent input impedance. Thus, boundary conditions of the third kind are applied to the model at the end terminated with the input impedance. It is shown that accurate and stable simulations are achieved with nearly the same computational cost as for strings ideally fixed at both ends.

  14. Anisotropic interpolation method of silicon carbide oxidation growth rates for three-dimensional simulation

    NASA Astrophysics Data System (ADS)

    Šimonka, Vito; Nawratil, Georg; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried

    2017-02-01

    We investigate anisotropic and geometrical aspects of the hexagonal structure of Silicon Carbide and propose a direction-dependent interpolation method for oxidation growth rates. We compute three-dimensional oxidation rates and perform one-, two-, and three-dimensional simulations of 4H- and 6H-Silicon Carbide thermal oxidation. The oxidation rates are computed from the four known growth rate values for the Si-face (0001), a-face (11-20), m-face (1-100), and C-face (000-1). The simulations are based on the proposed interpolation method together with available thermal oxidation models. We additionally analyze the temperature dependence of Silicon Carbide oxidation rates for the different crystal faces using Arrhenius plots. The proposed interpolation method is an essential step towards highly accurate three-dimensional oxide growth simulations, which help to better understand the anisotropic nature and oxidation mechanism of Silicon Carbide.
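
    A direction-dependent growth rate of the kind described above can be sketched by blending the four face values with simple trigonometric weights that respect the six-fold in-plane symmetry of the hexagonal lattice. Both this blend and the face rate values are assumptions for illustration; the interpolation function proposed in the paper may differ.

        import numpy as np

        # Known face growth rates (relative units; the values here are hypothetical).
        K_SI, K_A, K_M, K_C = 0.2, 0.5, 0.45, 1.0

        def growth_rate(theta_deg, phi_deg):
            # theta: polar angle from the [0001] (Si-face) normal; phi: in-plane azimuth.
            # Simple blend: polar faces weighted by cos^2(theta), in-plane rate oscillating
            # with 6-fold symmetry between the a- and m-face values.
            theta, phi = np.radians(theta_deg), np.radians(phi_deg)
            k_pole = np.where(np.cos(theta) >= 0.0, K_SI, K_C)
            k_plane = 0.5 * (K_A + K_M) + 0.5 * (K_A - K_M) * np.cos(6.0 * phi)
            w = np.cos(theta) ** 2
            return w * k_pole + (1.0 - w) * k_plane

        print("Si-face :", float(growth_rate(0, 0)))
        print("C-face  :", float(growth_rate(180, 0)))
        print("a-face  :", float(growth_rate(90, 0)))
        print("m-face  :", float(growth_rate(90, 30)))
        print("45 deg off-axis:", round(float(growth_rate(45, 15)), 3))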

  15. A collision-selection rule for a particle simulation method suited to vector computers

    NASA Technical Reports Server (NTRS)

    Baganoff, D.; Mcdonald, J. D.

    1990-01-01

    A theory is developed for a selection rule governing collisions in a particle simulation of rarefied gas-dynamic flows. The selection rule leads to an algorithmic form highly compatible with fine-grain parallel decomposition, allowing for efficient utilization of supercomputers having vector or massively parallel single-instruction multiple-data architectures. A comparison of shock-wave profiles obtained using both the selection rule and Bird's direct simulation Monte Carlo (DSMC) method shows excellent agreement. The equation on which the selection rule is based is shown to be directly related to the time-counter procedure in the DSMC method. The results of several example simulations of representative rarefied flows are presented, for which the number of particles used ranged from 10^6 to 10^7, demonstrating the greatly improved computational efficiency of the method.

  16. Improved transition path sampling methods for simulation of rare events.

    PubMed

    Chopra, Manan; Malshe, Rohit; Reddy, Allam S; de Pablo, J J

    2008-04-14

    The free energy surfaces of a wide variety of systems encountered in physics, chemistry, and biology are characterized by the existence of deep minima separated by numerous barriers. One of the central aims of recent research in computational chemistry and physics has been to determine how transitions occur between deep local minima on rugged free energy landscapes, and transition path sampling (TPS) Monte-Carlo methods have emerged as an effective means for numerical investigation of such transitions. Many of the shortcomings of TPS-like approaches generally stem from their high computational demands. Two new algorithms are presented in this work that improve the efficiency of TPS simulations. The first algorithm uses biased shooting moves to render the sampling of reactive trajectories more efficient. The second algorithm is shown to substantially improve the accuracy of the transition state ensemble by introducing a subset of local transition path simulations in the transition state. The system considered in this work consists of a two-dimensional rough energy surface that is representative of numerous systems encountered in applications. When taken together, these algorithms provide gains in efficiency of over two orders of magnitude when compared to traditional TPS simulations.

  17. Simulation of photonic crystals antenna using ADI-FDTD method

    NASA Astrophysics Data System (ADS)

    Wu, Zhengzhong; Zhong, Xianxin; Yu, Wenge; Chen, Yu

    2004-11-01

    In order to meet the demand for miniaturization and excellent performance of antennas that send and receive wireless signals, a novel Photonic Band Gap (PBG) structure, a two-dimensional square lattice array etched on one side of a silicon wafer, is proposed as the ground of a microstrip patch antenna. An analysis of the performance of a patch antenna with a PBG ground is carried out, and two rectangular MEMS microstrip antennas, one with a conventional and one with a PBG ground, are designed. The alternating direction implicit finite-difference time-domain (ADI-FDTD) method is adopted to perform time-domain simulations of Gaussian pulse propagation in the microstrip antennas, from which the frequency-dependent scattering parameters and input impedance are derived. An important reduction of the surface waves in the PBG antenna is observed in the simulations, which consequently leads to an improvement of the antenna efficiency and bandwidth. Subsequently, the MEMS PBG antenna is micromachined and measured, and the simulated characteristics are verified by the measured curves of the MEMS PBG antenna. The measured peak return loss of the PBG patch antenna is -21 dB at 5.36 GHz with a bandwidth of 8.5%, three times wider than that of the conventional patch, so both the gain and the bandwidth are enhanced by the PBG process.

  18. Development of Human Posture Simulation Method for Assessing Posture Angles and Spinal Loads

    PubMed Central

    Lu, Ming-Lun; Waters, Thomas; Werren, Dwight

    2015-01-01

    Video-based posture analysis employing a biomechanical model is gaining popularity for ergonomic assessments. A human posture simulation method of estimating multiple body postural angles and spinal loads from a video record was developed to expedite ergonomic assessments. The method was evaluated by a repeated measures study design with three trunk flexion levels, two lift asymmetry levels, three viewing angles and three trial repetitions as experimental factors. The study comprised two phases evaluating the accuracy of simulating self and other people's lifting postures via a proxy of a computer-generated humanoid. The mean values of the accuracy of simulating self and humanoid postures were 12° and 15°, respectively. The repeatability of the method for the same lifting condition was excellent (~2°). The least simulation error was associated with the side viewing angle. The estimated back compressive force and moment, calculated by a three-dimensional biomechanical model, were underestimated by approximately 5%. The posture simulation method enables researchers to simultaneously quantify body posture angles and spinal loading variables with accuracy and precision comparable to on-screen posture matching methods. PMID:26361435

  19. Microelectronics mounted on a piezoelectric transducer: method, simulations, and measurements.

    PubMed

    Johansson, Jonny; Delsing, Jerker

    2006-01-01

    This paper describes the design of a highly integrated ultrasound sensor where the piezoelectric ceramic transducer is used as the carrier for the driver electronics. Intended as one part in a complete portable, battery operated ultrasound sensor system, focus has been to achieve small size and low power consumption. An optimized ASIC driver stage is mounted directly on the piezoelectric transducer and connected using wire bond technology. The absence of wiring between driver and transducer provides excellent pulse control possibilities and eliminates the need for broad band matching networks. Estimates of the sensor power consumption are made based on the capacitive behavior of the piezoelectric transducer. System behavior and power consumption are simulated using SPICE models of the ultrasound transducer together with transistor level modelling of the driver stage. Measurements and simulations are presented of system power consumption and echo energy in a pulse echo setup. It is shown that the power consumption varies with the excitation pulse width, which also affects the received ultrasound energy in a pulse echo setup. The measured power consumption for a 16 mm diameter 4.4 MHz piezoelectric transducer varies between 95 microW and 130 microW at a repetition frequency of 1 kHz. As a lower repetition frequency gives a linearly lower power consumption, very long battery operating times can be achieved. The measured results come very close to simulations as well as estimated ideal minimum power consumption.
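
    The power estimate described above is based on the capacitive behavior of the transducer; a textbook idealization is that roughly C·V² is drawn from the supply per charge/discharge cycle (half stored, half dissipated in the driver), multiplied by the number of pulses per excitation and the repetition rate. The sketch below uses placeholder numbers, not the paper's measured values or its SPICE model.

      def pulse_power(capacitance_f, drive_voltage_v, pulses_per_excitation, repetition_hz):
          # Rough average power for a capacitive transducer driver, assuming the
          # supply delivers ~C*V^2 per charge/discharge cycle (idealization).
          energy_per_pulse = capacitance_f * drive_voltage_v**2
          return energy_per_pulse * pulses_per_excitation * repetition_hz

      # Placeholder numbers: a ~2 nF transducer driven with a single 5 V pulse,
      # repeated at 1 kHz.
      print(pulse_power(2e-9, 5.0, 1, 1000.0), "W")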

  20. Rapid simulation of electromagnetic telemetry using an axisymmetric semianalytical finite element method

    NASA Astrophysics Data System (ADS)

    Chen, Jiefu; Zeng, Shubin; Dong, Qiuzhao; Huang, Yueqin

    2017-02-01

    An axisymmetric semianalytical finite element method is proposed and employed for rapid simulations of electromagnetic telemetry in layered underground formation. In this method, the layered media is decomposed into several subdomains and the interfaces between subdomains are discretized by conventional finite elements. Then a Riccati equation based high precision integration scheme is applied to exploit the homogeneity along the vertical direction in each layer. This semianalytical finite element scheme is very efficient in modeling electromagnetic telemetry in layered formation. Numerical examples as well as a field case with water based mud as drilling fluid are given to demonstrate the validity and effectiveness of this method.

  1. Aeroacoustic simulation of slender partially covered cavities using a Lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    de Jong, A. T.; Bijl, H.; Hazir, A.; Wiedemann, J.

    2013-04-01

    The present investigation focuses on simulation of the aero-acoustic resonance of partially covered cavities with a width much larger than their length or depth, which represent simplified door and trunk lid gaps. These cavities are under the influence of a low Mach number flow with a relatively thick boundary layer. Under certain conditions, flow-induced acoustic resonance can occur. The requirements to simulate the resonance behavior using a Lattice Boltzmann method (LBM) model are investigated. Special focus is put on the effect of simulation spanwise width and inflow conditions. In order to validate the simulations, experiments have been conducted on simplified geometries. The configuration consists of a partially covered, rectangular cavity geometry 32×50×250 mm³ in size, with opening dimensions of 8×250 mm. The cavity's flow-induced acoustic response is measured with microphones at different spanwise locations inside the cavity. Hot-wire measurements are performed to quantify the boundary layer characteristics. Furthermore, high-speed time-resolved particle image velocimetry is used to capture the instantaneous velocity field around the opening geometry. Flow simulations show that the turbulent fluctuation content of the boundary layer is important to correctly simulate the flow-induced resonance response. A minimum simulation spanwise width is needed to show good resemblance with experimental cavity pressure spectra. When a full spanwise width simulation is employed, the base mode and higher modes are retrieved.

  2. A pseudo non-linear method for fast simulations of ultrasonic reverberation

    NASA Astrophysics Data System (ADS)

    Byram, Brett; Shu, Jasmine

    2016-04-01

    There is growing evidence that reverberation is a primary mechanism of clinical image degradation. This has led to a number of new approaches to suppress reverberation, including our recently proposed model-based algorithm. The algorithm can work well, but it must be trained to reject clutter, while preserving the signal of interest. One way to do this is to use simulated data, but current simulation methods that include multipath scattering are slow and do not readily allow separation of clutter and signal. Here, we propose a more convenient pseudo non-linear simulation method that utilizes existing linear simulation tools like Field II. The approach functions by linearly simulating scattered wavefronts at shallow depths, and then time-shifting these wavefronts to deeper depths. The simulation only requires specification of the first and last scatterers encountered by a multiply reflected wave and a third point that establishes the arrival time of the reverberation. To maintain appropriate 2D correlation, this set of three points is fixed for the entire simulation and is shifted as with a normal linear simulation scattering field. We show example images, and we compute first order speckle statistics as a function of scatterer density. We perform ex vivo measures of reverberation where we find that the average speckle SNR is 1.73, which we can simulate with 2 reverberation scatterers per resolution cell. We also compare ex vivo lateral speckle statistics to those from linear and pseudo non-linear simulation data. Finally, the van Cittert-Zernike curve was shown to match empirical and theoretical observations.
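
    A rough sketch of the time-shifting idea: a linearly simulated wavefront (here a simple Gaussian-windowed pulse stands in for a Field II output) is delayed to the arrival time implied by the chosen scatterer geometry. The pulse model, geometry and path-length rule below are illustrative assumptions, not the authors' implementation.

      import numpy as np

      fs, c = 40e6, 1540.0             # sampling rate [Hz], speed of sound [m/s]
      t = np.arange(0, 80e-6, 1 / fs)  # receive time axis [s]

      def pulse_at(arrival_time, f0=5e6, cycles=2):
          # Stand-in for a linearly simulated scattered wavefront (e.g. Field II output).
          tau = t - arrival_time
          env = np.exp(-0.5 * (tau / (cycles / f0)) ** 2)
          return env * np.cos(2 * np.pi * f0 * tau)

      def reverb_arrival(z_last, z_bounce):
          # Arrival time of a reverberation echo: direct round trip to the last
          # scatterer plus an extra round trip over an assumed bounce distance.
          return 2 * (z_last + z_bounce) / c

      # Signal of interest from 30 mm depth plus a weaker, delayed reverberation echo.
      rf = pulse_at(2 * 0.030 / c) + 0.4 * pulse_at(reverb_arrival(0.030, 0.006))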

  3. Mars Simulation Chamber 2 - goals, instrumentation and methods

    NASA Astrophysics Data System (ADS)

    Ehrenfreund, P.; ten Kate, I. L.; Ruiterkamp, R.; Botta, O.; Lehmann, B.; Boudin, N.; Foing, B. H.

    2003-04-01

    We have installed at ESTEC and instrumented a Mars Simulation Chamber (MSC), in order to answer a range of questions on the subject of the apparent absence of organic compounds on Mars. We shall investigate: A. The effects of the changes of the Martian atmosphere over the history of Mars. B. The effect of UV irradiation on organic molecules embedded in the soil. C. The effect of oxidation on organic molecules embedded in the soil. D. The effect of thermal cycling on the surface. E. A combination of the above mentioned parameters. Techniques to be used include gas analysis, environmental sensors, HPLC, spectroscopy and other analytical techniques. We shall also assess the sensitivity of instruments for the detection of minerals and organic compounds of exobiological relevance in Martian analogue soils (mixed under controlled conditions with traces of these organics). The results concerning the simulation of complex organics on Mars, as well as lander instrument chamber simulations will be included in a database to serve for the interpretation of Beagle 2 data and other future Mars missions. The results of the experiments can also provide constraints for the observations from orbit, such as spectroscopy of minerals, measurements of the water cycle, frost and subsurface water, the CO2 cycle, and the landing site selection. In summary, the experiments have as a main goal to simulate various processes on organics, such as the effects of UV radiation, diffusion, and temperature, as a function of their depth in the soil. The specific organics will be embedded in either porous or compact Martian soil analogues or quartz beads. In this presentation we will concentrate on the goals, the instrumentation and the methods, used to operate the chamber.

  4. Budget Time: A Gender-Based Negotiation Simulation

    ERIC Educational Resources Information Center

    Barkacs, Linda L.; Barkacs, Craig B.

    2017-01-01

    This article presents a gender-based negotiation simulation designed to make participants aware of gender-based stereotypes and their effect on negotiation outcomes. In this simulation, the current research on gender issues is animated via three role sheets: (a) Vice president (VP), (b) advantaged department head, and (c) disadvantaged department…

  5. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling indicates the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and applied to a wider range of applications than traditional simulation. A key challenge of ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Therefore, finding appropriate validation techniques for ABM is necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  6. Performance optimization of web-based medical simulation.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2013-01-01

    This paper presents a technique for performance optimization of multimodal interactive web-based medical simulation. A web-based simulation framework is promising for easy access and wide dissemination of medical simulation. However, the real-time performance of the simulation depends heavily on the hardware capability of the client side. Providing consistent simulation on different hardware is critical for reliable medical simulation. This paper proposes a non-linear mixed integer programming model to optimize the performance of visualization and physics computation while considering hardware capability and application-specific constraints. The optimization model identifies and parameterizes the rendering and computing capabilities of the client hardware using an exploratory proxy code. The parameters are utilized to determine the optimized simulation conditions including texture sizes, mesh sizes and canvas resolution. The test results show that the optimization model not only achieves a desired frame rate but also resolves visual artifacts due to low-performance hardware.

  7. Comparative Study on Two Melting Simulation Methods: Melting Curve of Gold

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-Li; Sun, Jun-Sheng; Li, Rui; Zhang, Xiu-Lu; Cai, Ling-Cang

    2016-05-01

    Melting simulation methods are of crucial importance to determining the melting temperatures of materials efficiently. A high-efficiency melting simulation method saves much simulation time and computational resources. To compare the efficiency of our newly developed shock melting (SM) method with that of the well-established two-phase (TP) method, we calculate the high-pressure melting curve of Au using the two methods based on the optimally selected interatomic potentials. Although we only use 640 atoms to determine the melting temperature of Au in the SM method, the resulting melting curve accords very well with the results from the TP method using many more atoms. Thus, this shows that a much smaller system size in the SM method can still achieve a fully converged melting curve compared with the TP method, implying the robustness and efficiency of the SM method. Supported by the National Natural Science Foundation of China under Grant No. 41574076 and the NSAF of China under Grant No. U1230201/A06, and the Young Core Teacher Scheme of Henan Province under Grant No. 2014GGJS-108

  8. Time-domain simulation of a guitar: model and method.

    PubMed

    Derveaux, Grégoire; Chaigne, Antoine; Joly, Patrick; Bécache, Eliane

    2003-12-01

    This paper presents a three-dimensional time-domain numerical model of the vibration and acoustic radiation from a guitar. The model involves the transverse displacement of the string excited by a force pulse, the flexural motion of the soundboard, and the sound radiation. A specific spectral method is used for solving the Kirchhoff-Love's dynamic top plate model for a damped, heterogeneous orthotropic material. The air-plate interaction is solved with a fictitious domain method, and a conservative scheme is used for the time discretization. Frequency analysis is performed on the simulated sound pressure and plate velocity waveforms in order to evaluate quantitatively the transfer of energy through the various components of the coupled system: from the string to the soundboard and from the soundboard to the air. The effects of some structural changes in soundboard thickness and cavity volume on the produced sounds are presented and discussed. Simulations of the same guitar in three different cases are also performed: "in vacuo," in air with a perfectly rigid top plate, and in air with an elastic top plate. This allows comparisons between structural, acoustic, and structural-acoustic modes of the instrument. Finally, attention is paid to the evolution with time of the spatial pressure field. This shows, in particular, the complex evolution of the directivity pattern in the near field of the instrument, especially during the attack.

  9. Time-domain simulation of a guitar: Model and method

    NASA Astrophysics Data System (ADS)

    Derveaux, Grégoire; Chaigne, Antoine; Joly, Patrick; Bécache, Eliane

    2003-12-01

    This paper presents a three-dimensional time-domain numerical model of the vibration and acoustic radiation from a guitar. The model involves the transverse displacement of the string excited by a force pulse, the flexural motion of the soundboard, and the sound radiation. A specific spectral method is used for solving the Kirchhoff-Love's dynamic top plate model for a damped, heterogeneous orthotropic material. The air-plate interaction is solved with a fictitious domain method, and a conservative scheme is used for the time discretization. Frequency analysis is performed on the simulated sound pressure and plate velocity waveforms in order to evaluate quantitatively the transfer of energy through the various components of the coupled system: from the string to the soundboard and from the soundboard to the air. The effects of some structural changes in soundboard thickness and cavity volume on the produced sounds are presented and discussed. Simulations of the same guitar in three different cases are also performed: "in vacuo," in air with a perfectly rigid top plate, and in air with an elastic top plate. This allows comparisons between structural, acoustic, and structural-acoustic modes of the instrument. Finally, attention is paid to the evolution with time of the spatial pressure field. This shows, in particular, the complex evolution of the directivity pattern in the near field of the instrument, especially during the attack.
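
    The model above couples a string, a soundboard and the surrounding air; the sketch below only illustrates the first ingredient, an explicit time-stepping scheme for a damped string excited by a short force pulse. All parameters are invented for illustration, and the soundboard/air coupling and fictitious domain method are not reproduced.

      import numpy as np

      L, c, damping = 0.65, 400.0, 4.0   # string length [m], wave speed [m/s], damping [1/s]
      nx = 200
      dx = L / nx
      dt = 0.5 * dx / c                  # CFL-stable time step
      u_prev = np.zeros(nx + 1)          # displacement at step n-1
      u = np.zeros(nx + 1)               # displacement at step n
      force_point = int(0.2 * nx)        # excitation position along the string

      for step in range(4000):
          pulse = 1e3 * np.exp(-((step * dt - 0.5e-3) / 2e-4) ** 2)  # short force pulse
          lap = np.zeros_like(u)
          lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
          u_next = (2 * u - u_prev
                    + dt**2 * c**2 * lap
                    - damping * dt * (u - u_prev))
          u_next[force_point] += dt**2 * pulse
          u_next[0] = u_next[-1] = 0.0   # fixed ends
          u_prev, u = u, u_next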

  10. Probabilistic-Based Modeling and Simulation Assessment

    DTIC Science & Technology

    2010-06-01

    mph crash simulation at 100 ms with an unbelted Hybrid III model. The Hybrid III dummy model was then restrained using a finite element seatbelt ... true physics of the impact, and can thus be qualified as unwanted noise in the model response. Unfortunately, it is difficult to quantify the

  11. Simulation based analysis of laser beam brazing

    NASA Astrophysics Data System (ADS)

    Dobler, Michael; Wiethop, Philipp; Schmid, Daniel; Schmidt, Michael

    2016-03-01

    Laser beam brazing is a well-established joining technology in car body manufacturing with main applications in the joining of divided tailgates and the joining of roof and side panels. A key advantage of laser brazed joints is the seam's visual quality, which satisfies the highest requirements. However, the laser beam brazing process is very complex and process dynamics are only partially understood. In order to gain deeper knowledge of the laser beam brazing process, to determine optimal process parameters and to test process variants, a transient three-dimensional simulation model of laser beam brazing is developed. This model takes into account energy input, heat transfer as well as fluid and wetting dynamics that lead to the formation of the brazing seam. A validation of the simulation model is performed by metallographic analysis and thermocouple measurements for different parameter sets of the brazing process. These results show that the multi-physical simulation model not only can be used to gain insight into the laser brazing process but also offers the possibility of process optimization in industrial applications. The model's capabilities in determining optimal process parameters are demonstrated using the example of laser power. Small deviations in the energy input can affect the brazing results significantly. Therefore, the simulation model is used to analyze the effect of the lateral laser beam position on the energy input and the resulting brazing seam.

  12. Issues of Simulation-Based Route Assignment

    SciTech Connect

    Nagel, K.; Rickert, M.

    1999-07-20

    The authors use an iterative re-planning scheme with simulation feedback to generate a self-consistent route set for a given street network and origin-destination matrix. The iteration process is defined by three parameters. They found that these parameters influence the speed of the relaxation, but not necessarily its final state.
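
    A minimal sketch of such a feedback loop (assign routes, simulate congestion, let a fraction of travellers re-plan) on a toy two-route network is given below. The congestion function, re-planning fraction and iteration count stand in for the iteration parameters mentioned above and are assumptions, not the authors' setup.

      import numpy as np

      rng = np.random.default_rng(0)
      n_travellers = 1000
      routes = np.zeros(n_travellers, dtype=int)     # everyone starts on route 0

      def travel_time(load, free_time, capacity):
          # Toy congestion function (BPR-like), a stand-in for the traffic simulation.
          return free_time * (1.0 + 0.15 * (load / capacity) ** 4)

      free_times = np.array([10.0, 12.0])            # minutes
      capacities = np.array([400.0, 600.0])
      replan_fraction, n_iterations = 0.1, 50

      for it in range(n_iterations):
          loads = np.bincount(routes, minlength=2)
          times = travel_time(loads, free_times, capacities)
          # A fraction of travellers re-plans onto the currently faster route.
          replan = rng.random(n_travellers) < replan_fraction
          routes[replan] = np.argmin(times)

      print("final loads:", np.bincount(routes, minlength=2), "times:", times.round(2))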

  13. SimTool - An object based approach to simulation construction

    NASA Technical Reports Server (NTRS)

    Crues, Edwin Z.; Yazbeck, Marwan E.; Edwards, H. C.; Barnette, Randall D.

    1993-01-01

    The creation and maintenance of large complex simulations can be a difficult and error prone task. A number of interactive and automated tools have been developed to aid in simulation construction and maintenance. Many of these tools are based upon object oriented analysis and design concepts. One such tool, SimTool, is an object based integrated tool set for the development, maintenance, and operation of large, complex and long lived simulations. This paper discusses SimTool's object based approach to simulation design, construction and execution. It also discusses the services provided to various levels of SimTool users to assist them in a wide range of simulation tasks. Also, with the aid of an implemented and working simulation example, this paper discusses SimTool's key design and operational features. Finally, this paper presents a condensed discussion of SimTool's Entity-Relationship-Attribute (ERA) modeling approach.

  14. Dshell++: A Component Based, Reusable Space System Simulation Framework

    NASA Technical Reports Server (NTRS)

    Lim, Christopher S.; Jain, Abhinandan

    2009-01-01

    This paper describes the multi-mission Dshell++ simulation framework for high-fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.

  15. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    PubMed

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment, their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or less) images from full-count images. It correctly simulates the statistical properties, even when the images are rounded off.
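
    A small numerical check in the spirit of this comment (which used Matlab) can be written with NumPy: Poisson resampling is implemented as binomial thinning of the measured counts and compared against direct Poisson and Gaussian redrawing with half the mean. The synthetic flood image below replaces the Co-57 measurement and is an assumption.

      import numpy as np

      rng = np.random.default_rng(0)
      full = rng.poisson(lam=50.0, size=(256, 256))   # synthetic full-count flood image

      # Poisson resampling: binomial thinning of the actually measured counts.
      half_resample = rng.binomial(full, 0.5)

      # Direct redrawing from Poisson and Gaussian distributions with half the mean.
      half_poisson = rng.poisson(full / 2.0)
      half_gauss = np.round(rng.normal(full / 2.0, np.sqrt(full / 2.0)))

      # Compare half/full ratios of the first two moments (ideal: 0.5 and ~0.707).
      for name, img in [("resample", half_resample), ("poisson", half_poisson), ("gauss", half_gauss)]:
          print(name, img.mean() / full.mean(), img.std() / full.std())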

  16. Review of Methods Related to Assessing Human Performance in Nuclear Power Plant Control Room Simulations

    SciTech Connect

    Katya L Le Blanc; Ronald L Boring; David I Gertman

    2001-11-01

    With the increased use of digital systems in Nuclear Power Plant (NPP) control rooms comes a need to thoroughly understand the human performance issues associated with digital systems. A common way to evaluate human performance is to test operators and crews in NPP control room simulators. However, it is often challenging to characterize human performance in meaningful ways when measuring performance in NPP control room simulations. A review of the literature in NPP simulator studies reveals a variety of ways to measure human performance in NPP control room simulations including direct observation, automated computer logging, recordings from physiological equipment, self-report techniques, protocol analysis and structured debriefs, and application of model-based evaluation. These methods and the particular measures used are summarized and evaluated.

  17. Simulation and optimization of an experimental membrane wastewater treatment plant using computational intelligence methods.

    PubMed

    Ludwig, T; Kern, P; Bongards, M; Wolf, C

    2011-01-01

    The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.

  18. Agent-based modeling to simulate the dengue spread

    NASA Astrophysics Data System (ADS)

    Deng, Chengbin; Tao, Haiyan; Ye, Zhiwei

    2008-10-01

    In this paper, we introduce agent-based modeling (ABM) as a novel method for simulating the unique process of dengue spread. Dengue is an acute infectious disease with a history of over 200 years. Unlike diseases that can be transmitted directly from person to person, dengue spreads only through a mosquito vector. There is still no specific effective medicine or vaccine for dengue. The best way to prevent dengue spread is to take precautions beforehand. Thus, it is crucial to detect and study the dynamic process of dengue spread, which closely relates to human-environment interactions, and this is where Agent-Based Modeling (ABM) works effectively. The model attempts to simulate dengue spread more realistically, in a bottom-up fashion, and to overcome a limitation of ABM, namely overlooking the influence of geographic and environmental factors. Considering the influence of the environment, Aedes aegypti ecology and other epidemiological characteristics of dengue spread, ABM can be regarded as a useful way to simulate the whole process and thereby reveal the essence of the evolution of dengue spread.

  19. A Computer-Based Simulation of an Acid-Base Titration

    ERIC Educational Resources Information Center

    Boblick, John M.

    1971-01-01

    Reviews the advantages of computer simulated environments for experiments, referring in particular to acid-base titrations. Includes pre-lab instructions and a sample computer printout of a student's use of an acid-base simulation. Ten references. (PR)

  20. The Impact of Content Area Focus on the Effectiveness of a Web-Based Simulation

    ERIC Educational Resources Information Center

    Adcock, Amy B.; Duggan, Molly H.; Watson, Ginger S.; Belfore, Lee A.

    2010-01-01

    This paper describes an assessment of a web-based interview simulation designed to teach empathetic helping skills. The system includes an animated character acting as a client and responses designed to recreate a simulated role-play, a common assessment method used for teaching these skills. The purpose of this study was to determine whether…

  1. Fast integral methods for integrated optical systems simulations: a review

    NASA Astrophysics Data System (ADS)

    Kleemann, Bernd H.

    2015-09-01

    Boundary integral equation methods (BIM) or simply integral methods (IM) in the context of optical design and simulation are rigorous electromagnetic methods solving Helmholtz or Maxwell equations on the boundary (surface or interface of the structures between two materials) for scattering or/and diffraction purposes. This work is mainly restricted to integral methods for diffracting structures such as gratings, kinoforms, diffractive optical elements (DOEs), micro Fresnel lenses, computer generated holograms (CGHs), holographic or digital phase holograms, periodic lithographic structures, and the like. In most cases all of the mentioned structures have dimensions of thousands of wavelengths in diameter. Therefore, the basic methods necessary for the numerical treatment are locally applied electromagnetic grating diffraction algorithms. Interestingly, integral methods belong to the first electromagnetic methods investigated for grating diffraction. The development started in the mid-1960s for gratings with infinite conductivity, mainly owing to the good convergence of the integral methods, especially for TM polarization. The first integral equation methods (IEM) for finite conductivity were the methods by D. Maystre at Fresnel Institute in Marseille: in 1972/74 for dielectric and metallic gratings, and later for multiprofile and other types of gratings and for photonic crystals. Other methods such as differential and modal methods suffered from unstable behaviour and slow convergence compared to BIMs for metallic gratings in TM polarization from the beginning until the mid-1990s. The first BIM for gratings using a parametrization of the profile was developed at the Karl-Weierstrass Institute in Berlin under a contract with Carl Zeiss Jena works in 1984-1986 by A. Pomp, J. Creutziger, and the author. Due to the parametrization, this method was able to deal with any kind of surface grating from the beginning: whether profiles with edges, overhanging non

  2. A machine learning method for the prediction of receptor activation in the simulation of synapses.

    PubMed

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; Defelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is

  3. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    PubMed Central

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is
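
    The five-stage procedure described above is not reproduced here; the sketch below only illustrates the core idea of replacing repeated Monte Carlo runs with a learned regressor. The input features, the synthetic stand-in for the simulation corpus, and the choice of scikit-learn's random forest are assumptions made for illustration.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)

      # Stand-in corpus: each row holds (cleft width [nm], receptor count, time since
      # release [ms]) and a synthetic "simulated" fraction of open receptors.  The
      # generating function below is not Monte Carlo output, only a placeholder.
      n = 5000
      X = np.column_stack([
          rng.uniform(15, 40, n),
          rng.integers(20, 200, n),
          rng.uniform(0, 5, n),
      ])
      y = (X[:, 2] / 0.5) * np.exp(1 - X[:, 2] / 0.5) / (1 + X[:, 0] / 40.0)
      y += rng.normal(0, 0.01, n)

      # Train on part of the corpus, predict the rest without new "simulations".
      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:4000], y[:4000])
      pred = model.predict(X[4000:])
      print("RMSE on held-out cases:", np.sqrt(np.mean((pred - y[4000:]) ** 2)))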

  4. Bluff Body Flow Simulation Using a Vortex Element Method

    SciTech Connect

    Anthony Leonard; Phillippe Chatelain; Michael Rebel

    2004-09-30

    Heavy ground vehicles, especially those involved in long-haul freight transportation, consume a significant part of our nation's energy supply. It is therefore of utmost importance to improve their efficiency, both to reduce emissions and to decrease reliance on imported oil. At highway speeds, more than half of the power consumed by a typical semi truck goes into overcoming aerodynamic drag, a fraction which increases with speed and crosswind. Thanks to better tools and increased awareness, recent years have seen substantial aerodynamic improvements by the truck industry, such as tractor/trailer height matching, radiator area reduction, and swept fairings. However, there remains substantial room for improvement as understanding of turbulent fluid dynamics grows. The group's research effort focused on vortex particle methods, a novel approach for computational fluid dynamics (CFD). Where common CFD methods solve or model the Navier-Stokes equations on a grid which stretches from the truck surface outward, vortex particle methods solve the vorticity equation on a Lagrangian basis of smooth particles and do not require a grid. They worked to advance the state of the art in vortex particle methods, improving their ability to handle the complicated, high Reynolds number flow around heavy vehicles. Specific challenges that they have addressed include finding strategies to accurately capture vorticity generation and resultant forces at the truck wall, handling the aerodynamics of spinning bodies such as tires, application of the method to the GTS model, computation time reduction through improved integration methods, a closest-point transform for particle methods in complex geometries, and work on large eddy simulation (LES) turbulence modeling.
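
    The core operation of a vortex particle method, evaluating the velocity induced by regularized vortex particles and advecting them, can be sketched in two dimensions as below. This direct O(N²) Biot-Savart sum with a Gaussian-blob kernel omits walls, viscosity and turbulence modelling, which is precisely where the effort described above went; all parameters are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      pos = rng.normal(scale=0.5, size=(n, 2))   # particle positions
      gamma = rng.normal(scale=0.1, size=n)      # particle circulations
      core = 0.05                                # smoothing radius of the Gaussian blob

      def induced_velocity(pos, gamma, core):
          # Velocity at every particle from all others (direct O(N^2) Biot-Savart sum).
          dx = pos[:, None, :] - pos[None, :, :]          # pairwise separations
          r2 = np.sum(dx**2, axis=-1) + 1e-12
          # Regularized 2D kernel: (1 - exp(-r^2 / (2 core^2))) / (2 pi r^2)
          k = (1.0 - np.exp(-r2 / (2 * core**2))) / (2 * np.pi * r2)
          u = -np.sum(k * gamma[None, :] * dx[:, :, 1], axis=1)
          v = np.sum(k * gamma[None, :] * dx[:, :, 0], axis=1)
          return np.column_stack([u, v])

      dt = 0.01
      for _ in range(100):                       # inviscid advection (explicit Euler step)
          pos += dt * induced_velocity(pos, gamma, core)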

  5. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  6. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  7. Correlation theory-based signal processing method for CMF signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-lin; Tu, Ya-qing

    2016-06-01

    Signal processing precision of Coriolis mass flowmeter (CMF) signals affects measurement accuracy of Coriolis mass flowmeters directly. To improve the measurement accuracy of CMFs, a correlation theory-based signal processing method for CMF signals is proposed, which is comprised of the correlation theory-based frequency estimation method and phase difference estimation method. Theoretical analysis shows that the proposed method eliminates the effect of non-integral period sampling signals on frequency and phase difference estimation. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of frequency and phase difference estimation and has better estimation performance than the adaptive notch filter, discrete Fourier transform and autocorrelation methods in terms of frequency estimation and the data extension-based correlation, Hilbert transform, quadrature delay estimator and discrete Fourier transform methods in terms of phase difference estimation, which contributes to improving the measurement accuracy of Coriolis mass flowmeters.
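
    A simple correlation-based baseline for the two quantities discussed above, drive frequency and phase difference between the two sensor signals, is sketched below: the frequency is taken from the first period peak of the autocorrelation, and the phase difference from the normalized zero-lag cross-correlation of the two channels. This is an illustrative baseline on synthetic signals, not the estimators proposed in the paper.

      import numpy as np

      fs = 10_000.0                    # sampling rate [Hz]
      f0, dphi = 147.3, 0.02           # true drive frequency [Hz] and phase difference [rad]
      t = np.arange(0, 0.2, 1 / fs)
      rng = np.random.default_rng(0)
      s1 = np.sin(2 * np.pi * f0 * t) + 0.001 * rng.normal(size=t.size)
      s2 = np.sin(2 * np.pi * f0 * t + dphi) + 0.001 * rng.normal(size=t.size)

      # Frequency from the first period peak of the autocorrelation of s1
      # (the search skips lags shorter than half a period to avoid the zero-lag peak).
      ac = np.correlate(s1, s1, mode="full")[s1.size - 1:]
      start = int(0.5 * fs / f0)
      lag = np.argmax(ac[start:]) + start
      f_est = fs / lag

      # Phase difference from the normalized zero-lag cross-correlation
      # (the cosine of the angle between the two sinusoids; coarse for tiny angles).
      cos_dphi = np.dot(s1, s2) / np.sqrt(np.dot(s1, s1) * np.dot(s2, s2))
      dphi_est = np.arccos(np.clip(cos_dphi, -1.0, 1.0))

      print(f_est, dphi_est)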

  8. Maintain rigid structures in Verlet based cartesian molecular dynamics simulations.

    PubMed

    Tao, Peng; Wu, Xiongwu; Brooks, Bernard R

    2012-10-07

    An algorithm is presented to maintain rigid structures in Verlet-based Cartesian molecular dynamics (MD) simulations. After each unconstrained MD step, the coordinates of selected particles are corrected to maintain rigid structures through an iterative procedure of rotation matrix computation. This algorithm, named SHAPE and implemented in the CHARMM program suite, avoids the calculation of Lagrange multipliers, so that the complexity of computation does not increase with the number of particles in a rigid structure. The implementation of this algorithm does not require significant modification of the propagation integrator, and can be plugged into any Cartesian-based MD integration scheme. A unique feature of the SHAPE method is that it is interchangeable with SHAKE for any object that can be constrained as a rigid structure using multiple SHAKE constraints. Unlike SHAKE, the SHAPE method can be applied to large linear (with three or more centers) and planar (with four or more centers) rigid bodies. Numerical tests with four model systems including two proteins demonstrate that the accuracy and reliability of the SHAPE method are comparable to those of the SHAKE method, but with much broader applicability and better efficiency.
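
    The SHAPE algorithm corrects coordinates through an iterative rotation-matrix procedure inside CHARMM; the sketch below only illustrates the underlying idea with a single best-fit rotation (the Kabsch algorithm via an SVD) that snaps a group of particles back onto a rigid reference after an unconstrained step. It is an illustration of the concept, not the CHARMM implementation.

      import numpy as np

      def project_rigid(coords, reference):
          # Replace 'coords' by the rigid-body placement of 'reference' that best
          # fits them in the least-squares sense (Kabsch algorithm via SVD).
          c_mean, r_mean = coords.mean(axis=0), reference.mean(axis=0)
          H = (reference - r_mean).T @ (coords - c_mean)
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid improper rotations
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          return (reference - r_mean) @ R.T + c_mean

      # Toy usage: a planar four-site rigid group after a noisy unconstrained step.
      reference = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0],
                            [0.0, 1.2, 0.0], [1.2, 1.2, 0.0]])
      rng = np.random.default_rng(0)
      after_step = reference + 0.05 * rng.normal(size=reference.shape) + np.array([0.3, -0.1, 0.2])
      rigid = project_rigid(after_step, reference)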

  9. Synchrotron-based EUV lithography illuminator simulator

    DOEpatents

    Naulleau, Patrick P.

    2004-07-27

    A lithographic illuminator to illuminate a reticle to be imaged with a range of angles is provided. The illumination can be employed to generate a pattern in the pupil of the imaging system, where spatial coordinates in the pupil plane correspond to illumination angles in the reticle plane. In particular, a coherent synchrotron beamline is used along with a potentially decoherentizing holographic optical element (HOE), as an experimental EUV illuminator simulation station. The pupil fill is completely defined by a single HOE, thus the system can be easily modified to model a variety of illuminator fill patterns. The HOE can be designed to generate any desired angular spectrum and such a device can serve as the basis for an illuminator simulator.

  10. Simulation-based training in echocardiography.

    PubMed

    Biswas, Monodeep; Patel, Rajendrakumar; German, Charles; Kharod, Anant; Mohamed, Ahmed; Dod, Harvinder S; Kapoor, Poonam Malhotra; Nanda, Navin C

    2016-10-01

    The knowledge gained from echocardiography is paramount for the clinician in diagnosing, interpreting, and treating various forms of disease. While cardiologists traditionally have undergone training in this imaging modality during their fellowship, many other specialties are beginning to show interest as well, including intensive care, anesthesia, and primary care trainees, in both transesophageal and transthoracic echocardiography. Advances in technology have led to the development of simulation programs accessible to trainees to help gain proficiency in the nuances of obtaining quality images, in a low stress, pressure free environment, often with a functioning ultrasound probe and mannequin that can mimic many of the pathologies seen in living patients. Although there are various training simulation programs each with their own benefits and drawbacks, it is clear that these programs are a powerful tool in educating the trainee and likely will lead to improved patient outcomes.

  11. Air Pollution Simulation based on different seasons

    NASA Astrophysics Data System (ADS)

    Muhaimin

    2017-01-01

    Simulations of the distribution of pollutants (SOx and NOx) emitted by the Cirebon power plant have been carried out. Gaussian models and scenarios are used to predict the concentrations of the pollutant gases. The purposes of this study were to determine the distribution of the flue gas from the power plant activity and the differences in pollutant gas concentrations between the wet and dry seasons. The results show that the concentrations of pollutant gases in the dry season are higher than in the wet season. The difference in pollutant concentration is due to differences in wind speed, gas flow rate, and the temperature of the gas flowing out of the chimney. The maximum concentration of pollutant gases in the wet season is 30.14 µg/m3 for SOx and 26.35 µg/m3 for NOx, while the simulation for the dry season gives 42.38 µg/m3 for SOx and 34.78 µg/m3 for NOx.
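
    A minimal ground-level Gaussian plume sketch in the spirit of the model used above is given below. The emission rate, wind speed and effective stack height are placeholders, and the dispersion coefficients use generic power-law expressions for a neutral stability class rather than values calibrated to the Cirebon site.

      import numpy as np

      def gaussian_plume(Q, u, H, x, y, z=0.0):
          # Gaussian plume concentration [g/m^3] with ground reflection.
          # Q: emission rate [g/s], u: wind speed [m/s], H: effective stack height [m],
          # x, y, z: downwind, crosswind and vertical receptor coordinates [m].
          # The sigma(x) expressions are generic neutral-class power laws (assumption).
          sigma_y = 0.08 * x * (1 + 0.0001 * x) ** -0.5
          sigma_z = 0.06 * x * (1 + 0.0015 * x) ** -0.5
          return (Q / (2 * np.pi * u * sigma_y * sigma_z)
                  * np.exp(-0.5 * (y / sigma_y) ** 2)
                  * (np.exp(-0.5 * ((z - H) / sigma_z) ** 2)
                     + np.exp(-0.5 * ((z + H) / sigma_z) ** 2)))

      # Placeholder stack: 120 g/s of SOx, 3 m/s wind, 150 m effective stack height,
      # receptor on the plume centreline 2 km downwind.
      print(gaussian_plume(Q=120.0, u=3.0, H=150.0, x=2000.0, y=0.0) * 1e6, "µg/m^3")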

  12. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods

    PubMed Central

    Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

    2011-01-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35 to 500 fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276

  13. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods.

    PubMed

    Lee, Anthony; Yau, Christopher; Giles, Michael B; Doucet, Arnaud; Holmes, Christopher C

    2010-12-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35 to 500 fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design.

  14. Flow simulation of a Pelton bucket using finite volume particle method

    NASA Astrophysics Data System (ADS)

    Vessaz, C.; Jahanbakhsh, E.; Avellan, F.

    2014-03-01

    The objective of the present paper is to perform an accurate numerical simulation of the high-speed water jet impinging on a Pelton bucket. To reach this goal, the Finite Volume Particle Method (FVPM) is used to discretize the governing equations. FVPM is an arbitrary Lagrangian-Eulerian method, which combines attractive features of Smoothed Particle Hydrodynamics and the conventional mesh-based Finite Volume Method. This method is able to satisfy free surface and no-slip wall boundary conditions precisely. The fluid flow is assumed weakly compressible and the wall boundary is represented by one layer of particles located on the bucket surface. In the present study, the simulations of the flow in a stationary bucket are investigated for three different impinging angles: 72°, 90° and 108°. The particle resolution is first validated by a convergence study. Then, the FVPM results are validated with available experimental data and conventional grid-based Volume Of Fluid simulations. It is shown that the wall pressure field is in good agreement with the experimental and numerical data. Finally, the torque evolution and water sheet location are presented for a simulation of five rotating Pelton buckets.

  15. Selecting a dynamic simulation modeling method for health care delivery research-part 2: report of the ISPOR Dynamic Simulation Modeling Emerging Good Practices Task Force.

    PubMed

    Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Crown, William; Padula, William V; Wong, Peter K; Pasupathy, Kalyan S; Higashi, Mitchell K; Osgood, Nathaniel D

    2015-03-01

    In a previous report, the ISPOR Task Force on Dynamic Simulation Modeling Applications in Health Care Delivery Research Emerging Good Practices introduced the fundamentals of dynamic simulation modeling and identified the types of health care delivery problems for which dynamic simulation modeling can be used more effectively than other modeling methods. The hierarchical relationship between the health care delivery system, providers, patients, and other stakeholders exhibits a level of complexity that ought to be captured using dynamic simulation modeling methods. As a tool to help researchers decide whether dynamic simulation modeling is an appropriate method for modeling the effects of an intervention on a health care system, we presented the System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence (SIMULATE) checklist consisting of eight elements. This report builds on the previous work, systematically comparing each of the three most commonly used dynamic simulation modeling methods-system dynamics, discrete-event simulation, and agent-based modeling. We review criteria for selecting the most suitable method depending on 1) the purpose-type of problem and research questions being investigated, 2) the object-scope of the model, and 3) the method to model the object to achieve the purpose. Finally, we provide guidance for emerging good practices for dynamic simulation modeling in the health sector, covering all aspects, from the engagement of decision makers in the model design through model maintenance and upkeep. We conclude by providing some recommendations about the application of these methods to add value to informed decision making, with an emphasis on stakeholder engagement, starting with the problem definition. Finally, we identify areas in which further methodological development will likely occur given the growing "volume, velocity and variety" and availability of "big data" to provide empirical evidence and techniques

  16. Cyber-Based Turbulent Combustion Simulation

    DTIC Science & Technology

    2012-02-28

    within the combustion chamber of a scramjet with nonequilibrium chemical reactions. Additional information is also sought on the scaling of the combustor ... regarding the ignition delay. The overall length of the simulated scramjet is 140 cm with a cylindrical combustor with the dimensions of 100...5.21 and the fuel is injected into the combustor at a speed of 600 m/s and at a pressure of 2 atm. The chemical kinetic model of the hydrogen-air

  17. The Simulation-Based Acquisition Research Laboratory

    DTIC Science & Technology

    1998-12-01

    Joseph G., Piplani, Lalit K., and Roop, Richard O., "Systems Acquisition Manager’s Guide for the Use of Models and Simulations," Defense Systems...

  18. Science Classroom Inquiry (SCI) Simulations: A Novel Method to Scaffold Science Learning

    PubMed Central

    Peffer, Melanie E.; Beckler, Matthew L.; Schunn, Christian; Renken, Maggie; Revak, Amanda

    2015-01-01

    Science education is progressively more focused on employing inquiry-based learning methods in the classroom and increasing scientific literacy among students. However, due to time and resource constraints, many classroom science activities and laboratory experiments focus on simple inquiry, with a step-by-step approach to reach predetermined outcomes. The science classroom inquiry (SCI) simulations were designed to give students real life, authentic science experiences within the confines of a typical classroom. The SCI simulations allow students to engage with a science problem in a meaningful, inquiry-based manner. Three discrete SCI simulations were created as website applications for use with middle school and high school students. For each simulation, students were tasked with solving a scientific problem through investigation and hypothesis testing. After completion of the simulation, 67% of students reported a change in how they perceived authentic science practices, specifically related to the complex and dynamic nature of scientific research and how scientists approach problems. Moreover, 80% of the students who did not report a change in how they viewed the practice of science indicated that the simulation confirmed or strengthened their prior understanding. Additionally, we found a statistically significant positive correlation between students’ self-reported changes in understanding of authentic science practices and the degree to which each simulation benefitted learning. Since SCI simulations were effective in promoting both student learning and student understanding of authentic science practices with both middle and high school students, we propose that SCI simulations are a valuable and versatile technology that can be used to educate and inspire a wide range of science students on the real-world complexities inherent in scientific study. PMID:25786245

  19. Science classroom inquiry (SCI) simulations: a novel method to scaffold science learning.

    PubMed

    Peffer, Melanie E; Beckler, Matthew L; Schunn, Christian; Renken, Maggie; Revak, Amanda

    2015-01-01

    Science education is progressively more focused on employing inquiry-based learning methods in the classroom and increasing scientific literacy among students. However, due to time and resource constraints, many classroom science activities and laboratory experiments focus on simple inquiry, with a step-by-step approach to reach predetermined outcomes. The science classroom inquiry (SCI) simulations were designed to give students real life, authentic science experiences within the confines of a typical classroom. The SCI simulations allow students to engage with a science problem in a meaningful, inquiry-based manner. Three discrete SCI simulations were created as website applications for use with middle school and high school students. For each simulation, students were tasked with solving a scientific problem through investigation and hypothesis testing. After completion of the simulation, 67% of students reported a change in how they perceived authentic science practices, specifically related to the complex and dynamic nature of scientific research and how scientists approach problems. Moreover, 80% of the students who did not report a change in how they viewed the practice of science indicated that the simulation confirmed or strengthened their prior understanding. Additionally, we found a statistically significant positive correlation between students' self-reported changes in understanding of authentic science practices and the degree to which each simulation benefitted learning. Since SCI simulations were effective in promoting both student learning and student understanding of authentic science practices with both middle and high school students, we propose that SCI simulations are a valuable and versatile technology that can be used to educate and inspire a wide range of science students on the real-world complexities inherent in scientific study.

  20. B-spline methods and zonal grids for numerical simulations of turbulent flows

    NASA Astrophysics Data System (ADS)

    Kravchenko, Arthur Grigorievich

    1998-12-01

    A novel numerical technique is developed for simulations of complex turbulent flows on zonal embedded grids. This technique is based on the Galerkin method with basis functions constructed using B-splines. The technique permits fine meshes to be embedded in physically significant flow regions without placing a large number of grid points in the rest of the computational domain. The numerical technique has been tested successfully in simulations of a fully developed turbulent channel flow. Large eddy simulations of turbulent channel flow at Reynolds numbers up to Rec = 110,000 (based on centerline velocity and channel half-width) show good agreement with the existing experimental data. These tests indicate that the method provides an efficient information transfer between zones without accumulation of errors in the regions of sudden grid changes. The numerical solutions on multi-zone grids are of the same accuracy as those on a single-zone grid but require fewer computer resources. The performance of the numerical method in a generalized coordinate system is assessed in simulations of laminar flows over a circular cylinder at low Reynolds numbers and three-dimensional simulations at ReD = 300 (based on free-stream velocity and cylinder diameter). The drag coefficients, the size of the recirculation region, and the vortex shedding frequency all agree well with the experimental data and previous simulations of these flows. Large eddy simulations of a flow over a circular cylinder at a sub-critical Reynolds number, ReD = 3900, are performed and compared with previous upwind-biased and central finite-difference computations. In the very near-wake, all three simulations are in agreement with each other and agree fairly well with the PIV experimental data of Lourenco & Shih (1993). Farther downstream, the results of the B-spline computations are in better agreement with the hot-wire experiment of Ong & Wallace (1996) than those obtained in finite-difference simulations
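
    The basis functions underlying the technique above can be evaluated with the Cox-de Boor recursion; the sketch below does this on a clamped, zonally refined knot vector. The knot values and polynomial degree are examples only, and the Galerkin assembly and zonal embedding themselves are not shown.

      import numpy as np

      def bspline_basis(i, k, knots, x):
          # Value of the i-th B-spline basis function of degree k at x
          # (Cox-de Boor recursion; 0/0 terms are treated as zero).
          if k == 0:
              return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
          left = 0.0
          if knots[i + k] != knots[i]:
              left = ((x - knots[i]) / (knots[i + k] - knots[i])
                      * bspline_basis(i, k - 1, knots, x))
          right = 0.0
          if knots[i + k + 1] != knots[i + 1]:
              right = ((knots[i + k + 1] - x) / (knots[i + k + 1] - knots[i + 1])
                       * bspline_basis(i + 1, k - 1, knots, x))
          return left + right

      # Example knot vector: coarse spacing away from the wall, fine spacing near it.
      knots = np.array([0.0, 0.0, 0.0, 0.25, 0.5, 0.75, 0.875, 0.9375, 1.0, 1.0, 1.0])
      degree = 2
      x = 0.9
      values = [bspline_basis(i, degree, knots, x) for i in range(len(knots) - degree - 1)]
      print(values, "sum =", sum(values))   # partition of unity: sum = 1 in the interior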