Science.gov

Sample records for based simulation methods

  1. Fast simulation method for airframe analysis based on big data

    NASA Astrophysics Data System (ADS)

    Liu, Dongliang; Zhang, Lixin

    2016-10-01

    In this paper, we apply the big data method to structural analysis by considering the correlations between loads and loads, between loads and results, and between results and results. By means of fundamental mathematics and physical rules, the principle, feasibility and error control of the method are discussed. We then establish the analysis process and procedures. The method is validated by two examples. The results show that the big-data-based fast simulation method is both fast and precise when applied to structural analysis.

  2. Kinetic Plasma Simulation Using a Quadrature-based Moment Method

    NASA Astrophysics Data System (ADS)

    Larson, David J.

    2008-11-01

    The recently developed quadrature-based moment method [Desjardins, Fox, and Villedieu, J. Comp. Phys. 227 (2008)] is an interesting alternative to standard Lagrangian particle simulations. The two-node quadrature formulation allows multiple flow velocities within a cell, thus correctly representing crossing particle trajectories and lower-order velocity moments without resorting to Lagrangian methods. Instead of following many particles per cell, the Eulerian transport equations are solved for selected moments of the kinetic equation. The moments are then inverted to obtain a discrete representation of the velocity distribution function. Potential advantages include reduced computational cost, elimination of statistical noise, and a simpler treatment of collisional effects. We present results obtained using the quadrature-based moment method applied to the Vlasov equation in simple one-dimensional electrostatic plasma simulations. In addition we explore the use of the moment inversion process in modeling collisional processes within the Complex Particle Kinetics framework.
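    As a concrete illustration of the two-node idea, consider the following sketch (Python; a simplified toy inversion of our own, not the algorithm of the cited reference, and all names are illustrative). Given the first four velocity moments in a cell, it recovers two weights and two velocity abscissas, so that two crossing particle streams can be represented exactly:

    ```python
    import numpy as np

    def two_node_inversion(m0, m1, m2, m3):
        """Invert moments m_k = sum_i w_i * u_i**k (k = 0..3) into a
        two-node quadrature (w1, u1), (w2, u2).  Assumes nonzero variance."""
        mu = m1 / m0                          # mean velocity
        var = m2 / m0 - mu**2                 # velocity variance
        sigma = np.sqrt(var)
        # standardized skewness of the velocity distribution
        s = (m3 / m0 - 3.0 * mu * var - mu**3) / sigma**3
        # weight fraction p placed on the positive standardized node
        p = 0.5 - s / (2.0 * np.sqrt(4.0 + s**2))
        u1 = mu + sigma * np.sqrt((1.0 - p) / p)
        u2 = mu - sigma * np.sqrt(p / (1.0 - p))
        return m0 * p, u1, m0 * (1.0 - p), u2

    # two crossing streams: weights 0.3 and 0.7 at velocities +1 and -2
    w = np.array([0.3, 0.7]); u = np.array([1.0, -2.0])
    m = [np.sum(w * u**k) for k in range(4)]
    print(two_node_inversion(*m))   # recovers both streams exactly
    ```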

  3. Recent advances in implicit solvent based methods for biomolecular simulations

    PubMed Central

    Chen, Jianhan; Brooks, Charles L.; Khandogin, Jana

    2008-01-01

    Implicit solvent based methods play an increasingly important role in molecular modeling of biomolecular structure and dynamics. Recent methodological developments have mainly focused on extension of the generalized Born (GB) formalism for variable dielectric environments and accurate treatment of nonpolar solvation. Extensive efforts in parameterization of GB models and implicit solvent force fields have enabled ab initio simulation of protein folding to native or near-native structures. Another exciting area that has benefitted from the advances in implicit solvent models is the development of constant pH molecular dynamics methods, which have recently been applied to calculations of protein pKa values and studies of pH-dependent peptide and protein folding. PMID:18304802
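    For context, the GB formalism mentioned here approximates the electrostatic solvation free energy with a pairwise closed-form expression; one widely used form (Still et al.) is

    ```latex
    \Delta G_{\mathrm{GB}}
      = -\frac{1}{2}\left(\frac{1}{\varepsilon_{\mathrm{in}}}
                          -\frac{1}{\varepsilon_{\mathrm{out}}}\right)
        \sum_{i,j}\frac{q_i q_j}{f_{\mathrm{GB}}(r_{ij})},
    \qquad
    f_{\mathrm{GB}}(r_{ij})
      = \sqrt{r_{ij}^{2} + \alpha_i \alpha_j
              \exp\!\left(-\frac{r_{ij}^{2}}{4\alpha_i\alpha_j}\right)},
    ```

    where the q_i are atomic charges, the alpha_i are effective Born radii, and epsilon_in/epsilon_out are the solute and solvent dielectric constants; the "variable dielectric environments" discussed in the review generalize the treatment of the dielectric constants.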

  4. Study of Flapping Flight Using Discrete Vortex Method Based Simulations

    NASA Astrophysics Data System (ADS)

    Devranjan, S.; Jalikop, Shreyas V.; Sreenivas, K. R.

    2013-12-01

    In recent times, research in the area of flapping flight has attracted renewed interest, with an endeavor to use this mechanism in Micro Air Vehicles (MAVs). For sustained, high-endurance flight with a larger payload-carrying capacity, a simple and efficient flapping kinematics must be identified. In this paper, we use flow visualizations and Discrete Vortex Method (DVM) based simulations to study flapping flight. Our results highlight that a simple flapping kinematics with a down-stroke period (tD) shorter than the up-stroke period (tU) produces sustained lift. We identify the optimal asymmetry ratio (Ar = tD/tU) for which flapping wings produce maximum lift, and find that introducing optimal wing flexibility further enhances the lift.
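    A minimal sketch of the DVM building block (Python; the regularizing core radius and all names are ours, not the authors' implementation): each shed point vortex induces a velocity on every other vortex through the 2D Biot-Savart law, and the wake is advanced by integrating those velocities.

    ```python
    import numpy as np

    def induced_velocity(pos, gamma, core=1e-3):
        """2D Biot-Savart velocity at each vortex due to all others.
        pos: (n, 2) vortex positions; gamma: (n,) circulations."""
        dx = pos[:, 0][:, None] - pos[:, 0][None, :]
        dy = pos[:, 1][:, None] - pos[:, 1][None, :]
        r2 = dx**2 + dy**2 + core**2        # regularized core avoids singularity
        u = np.sum(-gamma[None, :] * dy / (2.0 * np.pi * r2), axis=1)
        v = np.sum( gamma[None, :] * dx / (2.0 * np.pi * r2), axis=1)
        return np.column_stack([u, v])

    # advance a small cloud of shed vortices with forward Euler time stepping
    rng = np.random.default_rng(0)
    pos = rng.random((10, 2)); gamma = 0.1 * rng.standard_normal(10)
    dt = 0.01
    for _ in range(100):
        pos += dt * induced_velocity(pos, gamma)
    ```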

  5. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    PubMed

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-01-25

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials.

  6. A method for MREIT-based source imaging: simulation studies

    NASA Astrophysics Data System (ADS)

    Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun

    2016-08-01

    This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping by probing the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few %), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time to get more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violates the Nyquist criterion, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time-change of the Laplacian of the nonlinearly wrapped data.

  7. Understanding exoplanet populations with simulation-based methods

    NASA Astrophysics Data System (ADS)

    Morehead, Robert Charles

    The Kepler candidate catalog represents an unprecedented sample of exoplanet host stars. This dataset is ideal for probing the populations of exoplanet systems and exploring their architectures. Confirming transiting exoplanet candidates through traditional follow-up methods is challenging, especially for faint host stars. Most of Kepler's validated planets relied on statistical methods to separate true planets from false-positives. Multiple transiting planet systems (MTPS) have previously been shown to have low false-positive rates, and over 850 planets in MTPSs have been statistically validated so far. We show that the period-normalized transit duration ratio (xi) offers additional information that can be used to establish the planetary nature of these systems. We briefly discuss the observed distribution of xi for the Q1-Q17 Kepler Candidate Search. We also use xi to develop a Bayesian statistical framework combined with Monte Carlo methods to determine which pairs of planet candidates in an MTPS are consistent with the planet hypothesis, for a sample of 862 MTPSs that include candidate planets, confirmed planets, and known false-positives. This analysis proves to be efficient and advantageous in that it only requires catalog-level bulk candidate properties and galactic population modeling to compute the probabilities of a myriad of feasible scenarios composed of background and companion stellar blends in the photometric aperture, without needing additional observational follow-up. Our results agree with previous findings of a low false-positive rate in the Kepler MTPSs. This implies, independently of any other estimates, that most of the MTPSs detected by Kepler are planetary in nature, but that a substantial fraction could be orbiting stars other than the putative target star, and therefore may be subject to significant error in the inferred planet parameters resulting from unknown or mismeasured stellar host attributes. We also apply approximate
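    For reference, the period-normalized transit duration ratio for an inner/outer candidate pair is commonly defined as

    ```latex
    \xi = \frac{T_{\mathrm{dur,in}} / T_{\mathrm{dur,out}}}
               {\left(P_{\mathrm{in}} / P_{\mathrm{out}}\right)^{1/3}},
    ```

    and, since transit duration scales roughly as P^{1/3} for planets transiting the same star, xi clusters near unity for true multi-planet systems, which is what makes it a useful planet-hypothesis diagnostic.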

  8. Utility of Combining a Simulation-Based Method With a Lecture-Based Method for Fundoscopy Training in Neurology Residency.

    PubMed

    Gupta, Deepak K; Khandker, Namir; Stacy, Kristin; Tatsuoka, Curtis M; Preston, David C

    2017-09-11

    Fundoscopic examination is an essential component of the neurologic examination. Competence in its performance is mandated as a required clinical skill for neurology residents by the Accreditation Council for Graduate Medical Education. Government and private insurance agencies require its performance and documentation for moderate- and high-level neurologic evaluations. Traditionally, assessment and teaching of this key clinical examination technique have been difficult in neurology residency training. To evaluate the utility of a simulation-based method and the traditional lecture-based method for assessment and teaching of fundoscopy to neurology residents. This study was a prospective, single-blinded, education research study of 48 neurology residents recruited from July 1, 2015, through June 30, 2016, at a large neurology residency training program. Participants were equally divided into control and intervention groups after stratification by training year. Baseline and postintervention assessments were performed using questionnaire, survey, and fundoscopy simulators. After baseline assessment, both groups initially received lecture-based training, which covered fundamental knowledge on the components of fundoscopy and key neurologic findings observed on fundoscopic examination. The intervention group additionally received simulation-based training, which consisted of an instructor-led, hands-on workshop that covered practical skills of performing fundoscopic examination and identifying neurologically relevant findings on another fundoscopy simulator. The primary outcome measures were the postintervention changes in fundoscopy knowledge, skills, and total scores. A total of 30 men and 18 women were equally distributed between the 2 groups. The intervention group had significantly higher mean (SD) increases in skills (2.5 [2.3] vs 0.8 [1.8], P = .01) and total (9.3 [4.3] vs 5.3 [5.8], P = .02) scores compared with the control group. Knowledge scores (6.8 [3

  9. Human swallowing simulation based on videofluorography images using Hamiltonian MPS method

    NASA Astrophysics Data System (ADS)

    Kikuchi, Takahiro; Michiwaki, Yukihiro; Kamiya, Tetsu; Toyama, Yoshio; Tamai, Tasuku; Koshizuka, Seiichi

    2015-09-01

    In developed nations, swallowing disorders and aspiration pneumonia have become serious problems. We developed a method to simulate the behavior of the organs involved in swallowing to clarify the mechanisms of swallowing and aspiration. The shape model is based on anatomically realistic geometry, and the motion model utilizes forced displacements based on realistic dynamic images to reflect the mechanisms of human swallowing. The soft tissue organs are modeled as nonlinear elastic material using the Hamiltonian MPS method. This method allows for stable simulation of the complex swallowing movement. A penalty method using metaballs is employed to simulate contact between organ walls and smooth sliding along the walls. We performed four numerical simulations under different analysis conditions to represent four cases of swallowing, including a healthy volunteer and a patient with a swallowing disorder. The simulation results were compared to examine the epiglottic downfolding mechanism, which strongly influences the risk of aspiration.

  10. The simulation of the recharging method of active medical implant based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Kong, Xianyue; Song, Yong; Hao, Qun; Cao, Jie; Zhang, Xiaoyu; Dai, Pantao; Li, Wansong

    2014-11-01

    The recharging of Active Medical Implants (AMIs) is an important issue for their future application. In this paper, a method for recharging an active medical implant using a wearable incoherent light source has been proposed. Firstly, the models of the recharging method are developed. Secondly, the recharging processes of the proposed method have been simulated using the Monte Carlo (MC) method. Finally, some important conclusions have been reached. The results indicate that the proposed approach can lead to a convenient, safe and low-cost recharging method for AMIs, which will promote the application of this kind of implantable device.

  11. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor

    PubMed Central

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-01-01

    A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, the models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, which indicates that the proposed method will contribute to a low-cost, convenient and safe method for recharging implantable biosensors. PMID:27626422
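    A minimal sketch of the kind of MC photon transport such simulations rely on (Python; the layer thicknesses and optical coefficients below are placeholders rather than the paper's values, scattering is taken as isotropic instead of the Henyey-Greenstein phase function usually used for skin, and boundary crossings are handled only approximately):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # layer boundaries (cm) and absorption/scattering coefficients (1/cm)
    z_edges = np.array([0.0, 0.01, 0.2, 1.0])   # epidermis, dermis, subcutis
    mu_a = np.array([3.0, 1.0, 0.5])
    mu_s = np.array([100.0, 50.0, 20.0])
    absorbed = np.zeros(3)
    n_packets = 10000

    for _ in range(n_packets):
        z, w, cos_t = 0.0, 1.0, 1.0             # depth, weight, direction cosine
        while 0.0 <= z < z_edges[-1] and w > 1e-4:
            k = np.searchsorted(z_edges, z, side='right') - 1
            mu_t = mu_a[k] + mu_s[k]
            z += cos_t * (-np.log(rng.random()) / mu_t)  # exponential free path
            absorbed[k] += w * mu_a[k] / mu_t   # deposit absorbed fraction
            w *= mu_s[k] / mu_t                 # surviving (scattered) weight
            cos_t = 2.0 * rng.random() - 1.0    # isotropic re-scattering

    print(absorbed / n_packets)                 # absorbed energy per layer
    ```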

  12. A stable cutting method for finite elements based virtual surgery simulation.

    PubMed

    Jerábková, Lenka; Jerábek, Jakub; Chudoba, Rostislav; Kuhlen, Torsten

    2007-01-01

    In this paper we present a novel approach for stable interactive cutting of deformable objects in virtual environments. Our method is based on the extended finite element method, allowing the modeling of discontinuities without remeshing. As no new elements are created, the impact on simulation performance is minimized. We also propose an appropriate mass lumping technique to guarantee the stability of the simulation regardless of the position of the cut.

  13. A Research of Weapon System Storage Reliability Simulation Method Based on Fuzzy Theory

    NASA Astrophysics Data System (ADS)

    Shi, Yonggang; Wu, Xuguang; Chen, Haijian; Xu, Tingxue

    To address the storage reliability analysis of new, complicated weapon equipment systems, this paper investigates the methods of fuzzy fault tree analysis and fuzzy system storage reliability simulation, discusses the approach of regarding the weapon system as a fuzzy system, and studies the storage reliability of weapon systems based on fuzzy theory, thereby providing a storage reliability research method for new, complicated weapon equipment systems. As an example, the fuzzy fault tree of one type of missile control instrument is built up based on function analysis, and the fuzzy system storage reliability simulation method is used to analyze the storage reliability index of the control instrument.

  14. Simulation of the recharging method of implantable biosensors based on a wearable incoherent light source.

    PubMed

    Song, Yong; Hao, Qun; Kong, Xianyue; Hu, Lanxin; Cao, Jie; Gao, Tianxin

    2014-11-03

    Recharging implantable electronics from the outside of the human body is very important for applications such as implantable biosensors and other implantable electronics. In this paper, a recharging method for implantable biosensors based on a wearable incoherent light source has been proposed and simulated. Firstly, we develop a model of the incoherent light source and a multi-layer model of skin tissue. Secondly, the recharging processes of the proposed method have been simulated and tested experimentally, whereby some important conclusions have been reached. Our results indicate that the proposed method will offer a convenient, safe and low-cost recharging method for implantable biosensors, which should promote the application of implantable electronics.

  15. Real-time simulation of ultrasound refraction phenomena using ray-trace based wavefront construction method.

    PubMed

    Szostek, Kamil; Piórkowski, Adam

    2016-10-01

    Ultrasound (US) imaging is one of the most popular techniques used in clinical diagnosis, mainly due to its lack of adverse effects on patients and the simplicity of US equipment. However, the characteristics of the medium cause US imaging to reconstruct the examined tissues imprecisely. The artifacts are the result of wave phenomena, i.e. diffraction or refraction, and should be recognized during examination to avoid misinterpretation of a US image. Currently, US training is based on teaching materials and simulators, and ultrasound simulation has become an active research area in medical computer science. Many US simulators are limited by the complexity of the wave phenomena, which leads to computationally intensive calculations that make it difficult for systems to operate in real time. To achieve the required frame rate, the vast majority of simulators simplify or neglect wave diffraction and refraction. This paper proposes a solution for an ultrasound simulator based on methods known from geophysics. To improve simulation quality, a wavefront construction method was adapted which takes the refraction phenomena into account. This technique uses ray tracing and velocity averaging to construct wavefronts in the simulation. Instead of a geological medium, real CT scans are used. This approach can produce more realistic projections of pathological findings and is also capable of providing real-time simulation.
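    The refraction step at a tissue interface has a compact vector form; a sketch under the usual acoustic convention (sin θt / sin θi = c2/c1; function and variable names are ours, not the paper's):

    ```python
    import numpy as np

    def refract(d, n, c1, c2):
        """Refract unit ray direction d at an interface with unit normal n
        (oriented against d) for sound speeds c1 -> c2.
        Returns None on total internal reflection."""
        eta = c2 / c1                        # acoustic Snell ratio
        cos_i = -np.dot(n, d)
        k = 1.0 - eta**2 * (1.0 - cos_i**2)
        if k < 0.0:
            return None                      # total internal reflection
        return eta * d + (eta * cos_i - np.sqrt(k)) * n

    d = np.array([np.sin(0.3), -np.cos(0.3)])  # ray heading down into interface
    n = np.array([0.0, 1.0])                   # interface normal (points up)
    print(refract(d, n, 1540.0, 1450.0))       # tissue -> fat: bends toward normal
    ```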

  16. Evaluation of a clinical simulation-based assessment method for EHR-platforms.

    PubMed

    Jensen, Sanne; Rasmussen, Stine Loft; Lyng, Karen Marie

    2014-01-01

    In a procurement process, the assessment of issues like human factors and the interaction between technology and end-users can be challenging. In a large public procurement of an electronic health record platform (EHR-platform) in Denmark, a clinical simulation-based method for assessing and comparing human factor issues was developed and evaluated. This paper describes the evaluation of the method and its advantages and disadvantages. Our findings showed that clinical simulation is beneficial for assessing user satisfaction, usefulness and patient safety, although it is resource demanding. The method made it possible to assess qualitative topics during the procurement, and it provides an excellent basis for user involvement.

  17. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  18. A novel method for simulation of brushless DC motor servo-control system based on MATLAB

    NASA Astrophysics Data System (ADS)

    Tao, Keyan; Yan, Yingmin

    2006-11-01

    This paper presents research on the simulation of a brushless DC motor (BLDCM) servo control system. Based on the mathematical model of the BLDCM, the system simulation model is built with the MATLAB software. In building the system model, the isolated functional blocks, such as the BLDCM block, the rotor position detection block and the phase-commutation logic block, are modeled first. By the organic combination of these blocks, the model of the BLDCM can be established easily. The simulation results testify to the reasonability and validity of the approach, and this novel method offers a new way of thinking about designing and debugging actual motors.

  19. Apparatus and method for interaction phenomena with world modules in data-flow-based simulation

    DOEpatents

    Xavier, Patrick G.; Gottlieb, Eric J.; McDonald, Michael J.; Oppel, III, Fred J.

    2006-08-01

    A method and apparatus accommodate interaction phenomena in a data-flow-based simulation of a system of elements, by establishing meta-modules to simulate system elements and by establishing world modules associated with interaction phenomena. World modules are associated with proxy modules from a group of meta-modules associated with one of the interaction phenomena. The world modules include a communication world, a sensor world, a mobility world, and a contact world. World modules can be further associated with other world modules if necessary. Interaction phenomena are simulated in corresponding world modules by accessing member functions in the associated group of proxy modules. Proxy modules can be dynamically allocated at a desired point in the simulation to accommodate the addition of elements in the system of elements, such as a system of robots, a system of communication terminals, or a system of vehicles, being simulated.

  20. Methods for simulation-based analysis of fluid-structure interaction.

    SciTech Connect

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
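    A minimal sketch of the POD/Galerkin step described above (Python; the snapshot data and the full-order operator are synthetic placeholders): the POD basis consists of the leading left singular vectors of the snapshot matrix, and a linear operator is reduced by Galerkin projection onto that basis.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 40))        # snapshots: 500 DOFs x 40 samples
    X -= X.mean(axis=1, keepdims=True)        # center the snapshots

    U, S, _ = np.linalg.svd(X, full_matrices=False)
    r = int(np.searchsorted(np.cumsum(S**2) / np.sum(S**2), 0.99)) + 1
    Phi = U[:, :r]                            # POD modes capturing 99% of energy

    A = rng.standard_normal((500, 500)) / 500.0  # full-order operator (placeholder)
    A_r = Phi.T @ A @ Phi                     # Galerkin-projected reduced operator

    x0 = rng.standard_normal(500)
    q = Phi.T @ x0                            # reduced coordinates of a state
    x_approx = Phi @ q                        # lift back to the full space
    ```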

  1. Two-dimensional finite element method simulation to determine the brain capacitance based on ECVT measurement

    NASA Astrophysics Data System (ADS)

    Sirait, S. H.; Taruno, W. P.; Khotimah, S. N.; Haryanto, F.

    2016-03-01

    A simulation to determine the capacitance of the brain's electrical activity based on a two-electrode ECVT was conducted in this study. The study began with the construction of a 2D coronal head geometry with five different layers and an ECVT sensor design, after which the two designs were merged. Boundary conditions were then applied to the two electrodes in the ECVT sensor: the first electrode was defined as a Dirichlet boundary condition with a potential of 20 V, and the other electrode as a Dirichlet boundary condition with a potential of 0 V. Simulated Hodgkin-Huxley-based action potentials were applied as the electrical activity of the brain and were placed sequentially at three different cross-sectional positions. The Poisson equation was implemented in the geometry as the governing equation and was solved by the finite element method. The simulation showed that the simulated capacitance values were affected by the action potentials and by their cross-sectional positions.

  2. Parallel electromagnetic simulator based on the Finite-Difference Time Domain method

    NASA Astrophysics Data System (ADS)

    Walendziuk, Wojciech

    2006-03-01

    In the following paper a parallel tool for the analysis of electromagnetic field distributions is presented. The main simulation programme is based on a parallel algorithm of the Finite-Difference Time-Domain method and uses the Message Passing Interface as the communication library. The paper also presents the ways of communicating among computation nodes in a parallel environment and the efficiency of the parallel algorithm.
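    For orientation, the serial kernel being parallelized is the Yee leapfrog update; a minimal 1D sketch in normalized units (Python; grid sizes and the source are hypothetical) shows the E/H updates that each MPI rank would apply to its slab of the domain before exchanging field values at the slab edges with its neighbors.

    ```python
    import numpy as np

    nx, nt = 400, 1000
    E = np.zeros(nx)            # electric field at integer grid points
    H = np.zeros(nx - 1)        # magnetic field at half grid points
    S = 1.0                     # Courant number c*dt/dx (stable for S <= 1 in 1D)

    for n in range(nt):
        H += S * (E[1:] - E[:-1])                      # update H from curl of E
        E[1:-1] += S * (H[1:] - H[:-1])                # update E from curl of H
        E[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
        # an MPI version would exchange E/H values at slab boundaries here
    ```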

  3. Validation of population-based disease simulation models: a review of concepts and methods

    PubMed Central

    2010-01-01

    Background Computer simulation models are used increasingly to support public health research and policy, but questions about their quality persist. The purpose of this article is to review the principles and methods for validation of population-based disease simulation models. Methods We developed a comprehensive framework for validating population-based chronic disease simulation models and used this framework in a review of published model validation guidelines. Based on the review, we formulated a set of recommendations for gathering evidence of model credibility. Results Evidence of model credibility derives from examining: 1) the process of model development, 2) the performance of a model, and 3) the quality of decisions based on the model. Many important issues in model validation are insufficiently addressed by current guidelines. These issues include a detailed evaluation of different data sources, graphical representation of models, computer programming, model calibration, between-model comparisons, sensitivity analysis, and predictive validity. The role of external data in model validation depends on the purpose of the model (e.g., decision analysis versus prediction). More research is needed on the methods of comparing the quality of decisions based on different models. Conclusion As the role of simulation modeling in population health is increasing and models are becoming more complex, there is a need for further improvements in model validation methodology and common standards for evaluating model credibility. PMID:21087466

  4. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    Simulation-optimization entails a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating methods, namely bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate the global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
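    For clarity, the objective function referred to above can be written as (our notation; normalizing by the range of the observed heads is one common convention and an assumption here)

    ```latex
    \mathrm{NRMSE}
      = \frac{1}{h^{\mathrm{obs}}_{\max} - h^{\mathrm{obs}}_{\min}}
        \sqrt{\frac{1}{N}\sum_{i=1}^{N}
              \left(h^{\mathrm{sim}}_{i} - h^{\mathrm{obs}}_{i}\right)^{2}},
    ```

    where the sum runs over the N head measurements at the observation wells.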

  5. A neural-network-based method of model reduction for the dynamic simulation of MEMS

    NASA Astrophysics Data System (ADS)

    Liang, Y. C.; Lin, W. Z.; Lee, H. P.; Lim, S. P.; Lee, K. H.; Feng, D. P.

    2001-05-01

    This paper proposes a neural-network-based method for model reduction that combines the generalized Hebbian algorithm (GHA) with the Galerkin procedure to perform the dynamic simulation and analysis of nonlinear microelectromechanical systems (MEMS). An unsupervised neural network is adopted to find the principal eigenvectors of a correlation matrix of snapshots. Extensive computations show that the principal component analysis using the GHA neural network can extract an empirical basis from numerical or experimental data, which can be used to convert the original system into a lumped low-order macromodel. The macromodel can be employed to carry out the dynamic simulation of the original system, resulting in a dramatic reduction of computation time while losing neither flexibility nor accuracy. Compared with other existing model reduction methods for the dynamic simulation of MEMS, the present method does not need to compute the input correlation matrix in advance. It needs only to find the very few required basis functions, which can be learned directly from the input data, meaning that the method possesses potential advantages when the measured data sets are large. The method is evaluated by simulating the pull-in dynamics of a doubly clamped microbeam subjected to different input voltage spectra of electrostatic actuation. The efficiency and flexibility of the proposed method are examined by comparing the results with those of the fully meshed finite-difference method.
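    The GHA at the core of the method is Sanger's rule; a compact sketch (Python, synthetic data, hypothetical dimensions) extracts the leading principal directions from a stream of samples without ever forming the correlation matrix, which is the property highlighted in the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_modes, eta = 50, 3, 1e-3
    W = 0.1 * rng.standard_normal((n_modes, dim))  # rows -> principal directions

    for _ in range(20000):
        x = rng.standard_normal(dim) * np.linspace(3.0, 0.5, dim)  # anisotropic data
        y = W @ x
        # Sanger's rule: Hebbian term minus lower-triangular decorrelation term
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

    # rows of W now approximate the top eigenvectors of the data covariance
    print(np.round(W @ W.T, 2))   # ~ identity matrix: orthonormal modes
    ```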

  6. A vascular image registration method based on network structure and circuit simulation.

    PubMed

    Chen, Li; Lian, Yuxi; Guo, Yi; Wang, Yuanyuan; Hatsukami, Thomas S; Pimentel, Kristi; Balu, Niranjan; Yuan, Chun

    2017-05-02

    Image registration is an important research topic in the field of image processing. Applying image registration to vascular images allows multiple images to be strengthened and fused, which has practical value in disease detection, clinically assisted therapy, etc. However, it is hard to register vascular structures with high noise and large differences in an efficient and effective way. Different from common image registration methods based on areas or features, which are sensitive to distortion and uncertainty in vascular structure, we propose a novel registration method based on network structure and circuit simulation. Vessel images are transformed to graph networks and segmented into branches to reduce the computational complexity. The weighted graph networks are then converted to circuits, in which the node voltages of the circuit, reflecting the vessel structures, are used for node registration. Experiments on two-dimensional and three-dimensional simulated and clinical image sets showed the success of our proposed method in registration. The proposed vascular image registration method based on network structure and circuit simulation is stable, fault tolerant and efficient, making it a useful complement to the current mainstream image registration methods.
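    A toy version of the circuit step (Python; the graph below is a stand-in, not a real vessel network): edge weights become conductances, two reference nodes are held at fixed voltages, and the remaining node voltages, the per-node signatures used for registration, follow from solving the graph Laplacian system given by Kirchhoff's laws.

    ```python
    import numpy as np

    # weighted adjacency of a small vessel-like graph (edge conductances)
    W = np.zeros((5, 5))
    for i, j, g in [(0, 1, 2.0), (1, 2, 1.0), (1, 3, 1.0), (2, 4, 2.0), (3, 4, 1.0)]:
        W[i, j] = W[j, i] = g

    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
    fixed = {0: 1.0, 4: 0.0}                  # source and sink node voltages
    free = [k for k in range(5) if k not in fixed]

    # Kirchhoff's laws: L[free, free] v_free = -L[free, fixed] v_fixed
    v_fixed = np.array(list(fixed.values()))
    v_free = np.linalg.solve(L[np.ix_(free, free)],
                             -L[np.ix_(free, list(fixed))] @ v_fixed)
    print(dict(zip(free, np.round(v_free, 3))))   # node-voltage signatures
    ```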

  7. Numerical simulation of separating flows using computational models based on the vorticity confinement method

    NASA Astrophysics Data System (ADS)

    Wang, Lesong

    The objective of the present research is to investigate recent developments of the vorticity confinement method. First, a new formulation of the vorticity confinement term is studied. Advantages of the new formulation over the original one include the ability to conserve momentum and the ability to preserve the centroid motion of some flow properties, such as the vorticity magnitude. Next, new difference schemes, which are simpler and more efficient than the old schemes, are discussed. Finally, two computational models based on the vorticity confinement method are investigated. One of the models is devised to simulate inviscid flows over bodies with surfaces not aligned with the grid. The other is a surface boundary layer model, which is intended for efficiently simulating viscous flows with separation from the body surfaces. To validate the computational models, numerical simulations of three-dimensional flows over a 6:1 ellipsoid at incidence are performed. Comparisons have been made with exact solutions for the inviscid simulations, with experimental data for the viscous simulations, and with data obtained using conventional CFD methods. Both the inviscid and the viscous solutions obtained with the new models exhibit good agreement with the exact solutions or the experimental data. The new models can achieve much higher efficiency than conventional CFD methods and are able to obtain solutions of comparable accuracy.
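    For reference, the original (Steinhoff) confinement term adds a body force to the momentum equation of the form (our rendering)

    ```latex
    \mathbf{f}_{\mathrm{conf}}
      = -\,\varepsilon\,\rho\,\left(\hat{\mathbf{n}}\times\boldsymbol{\omega}\right),
    \qquad
    \hat{\mathbf{n}}
      = \frac{\nabla\lvert\boldsymbol{\omega}\rvert}
             {\bigl\lvert\nabla\lvert\boldsymbol{\omega}\rvert\bigr\rvert},
    ```

    where epsilon is the confinement parameter and omega the vorticity; the new formulation studied here modifies this term so that momentum is conserved, but keeps the same basic structure of an anti-diffusive force directed up the gradient of the vorticity magnitude.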

  8. GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method

    NASA Astrophysics Data System (ADS)

    Wei, J.; Kruis, F. E.

    2013-09-01

    Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and computing cost. Currently, the lowest computing costs are obtained by employing a graphics processing unit (GPU), originally developed for speeding up graphics processing in the consumer market. In this article we present an implementation on the GPU of a Monte Carlo method based on the inverse scheme for simulating particle coagulation. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the computational accuracy of the MC on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains of using the GPU, the computing time on the GPU was compared against that of its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the n² dependence of coagulation.

  9. Misclassification Errors in Unsupervised Classification Methods. Comparison Based on the Simulation of Targeted Proteomics Data

    PubMed Central

    Andreev, Victor P; Gillespie, Brenda W; Helfand, Brian T; Merion, Robert M

    2016-01-01

    Unsupervised classification methods are gaining acceptance in omics studies of complex common diseases, which are often vaguely defined and are likely collections of disease subtypes. Unsupervised classification based on the molecular signatures identified in omics studies has the potential to reflect the molecular mechanisms of the subtypes of the disease and to lead to more targeted and successful interventions for the identified subtypes. Multiple classification algorithms exist, but none is ideal for all types of data. Importantly, there are no established methods to estimate sample size in unsupervised classification (unlike power analysis in hypothesis testing). Therefore, we developed a simulation approach allowing comparison of misclassification errors and estimation of the required sample size for a given effect size, number, and correlation matrix of the differentially abundant proteins in targeted proteomics studies. All the experiments were performed in silico. The simulated data imitated the data expected from a study of the plasma of patients with lower urinary tract dysfunction with the aptamer-based proteomics assay Somascan (SomaLogic Inc, Boulder, CO), which targets 1129 proteins, including 330 involved in inflammation, 180 in stress response, and 80 in aging, among others. Three popular clustering methods (hierarchical, k-means, and k-medoids) were compared. K-means clustering performed much better on the simulated data than the other two methods and enabled classification with a misclassification error below 5% in the simulated cohort of 100 patients, based on the molecular signatures of 40 differentially abundant proteins (effect size 1.5) from among the 1129-protein panel. PMID:27524871

  10. Agent-based modeling: Methods and techniques for simulating human systems

    PubMed Central

    Bonabeau, Eric

    2002-01-01

    Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems. After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed by using real-world applications: flow simulation, organizational simulation, market simulation, and diffusion simulation. For each category, one or several business applications are described and analyzed. PMID:12011407

  11. A general parallelization strategy for random path based geostatistical simulation methods

    NASA Astrophysics Data System (ADS)

    Mariethoz, Grégoire

    2010-07-01

    The size of simulation grids used for numerical models has increased by many orders of magnitude in the past years, and this trend is likely to continue. Efficient pixel-based geostatistical simulation algorithms have been developed, but for very large grids and complex spatial models, the computational burden remains heavy. As cluster computers become widely available, using parallel strategies is a natural step for increasing the usable grid size and the complexity of the models. These strategies must profit from the possibilities offered by machines with a large number of processors. On such machines, the bottleneck is often the communication time between processors. We present a strategy that distributes grid nodes among all available processors while minimizing communication and latency times. It consists in centralizing the simulation on a master processor that calls the other slave processors as if they were functions simulating one node at a time. The key is to decouple the sending and the receiving operations to avoid synchronization. Centralization allows a conflict management system to ensure that nodes being simulated simultaneously do not interfere in terms of neighborhood. The strategy is computationally efficient and is versatile enough to be applicable to all random path based simulation methods.

  12. The method of infrared image simulation based on the measured image

    NASA Astrophysics Data System (ADS)

    Lou, Shuli; Liu, Liang; Ren, Jiancun

    2015-10-01

    The development of infrared imaging guidance technology has promoted research on infrared imaging simulation technology, and the key to infrared imaging simulation is the generation of the IR image. The generation of IR images is valuable both militarily and economically. In order to solve the problems of credibility and economy in infrared scene generation, a method of infrared scene generation based on measured images is proposed. Through research on the optical properties of ship targets and the sea background, ship-target images with various attitudes are extracted from recorded images using digital image processing technology. The ship-target image is zoomed in and out to simulate the relative motion between the viewpoint and the target, according to the field of view and the distance between the target and the sensor. The gray scale of the ship-target image is adjusted to simulate the change in radiation of the ship target, according to the distance between the viewpoint and the target and the atmospheric transmission. Frames of recorded infrared images without a target are interpolated to simulate the high frame rate of the missile. The processed ship-target images and sea-background infrared images are synthesized to obtain infrared scenes for different viewpoints. Experiments proved that this method is flexible and applicable, and that the fidelity and reliability of the synthesized infrared images can be guaranteed.

  13. Wang-Landau Reaction Ensemble Method: Simulation of Weak Polyelectrolytes and General Acid-Base Reactions.

    PubMed

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-02-14

    We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
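    The Wang-Landau half of the combination can be illustrated independently of the reaction chemistry; in this toy sketch (Python, our own example) the macrostate is the number of "protonated" sites among N independent two-state sites, the exact density of states is the binomial coefficient, and the standard flat-histogram recipe, with the modification factor halved at each flatness check, recovers it.

    ```python
    import numpy as np
    from math import comb

    N = 20
    state = np.zeros(N, dtype=int)      # N two-state sites
    n = 0                               # macrostate: number of occupied sites
    ln_g = np.zeros(N + 1)              # running estimate of ln g(n)
    hist = np.zeros(N + 1)
    ln_f = 1.0                          # Wang-Landau modification factor
    rng = np.random.default_rng(0)

    while ln_f > 1e-6:
        i = rng.integers(N)             # propose flipping one site
        n_new = n + (1 - 2 * state[i])
        # acceptance min(1, g(old)/g(new)) steers the walk to rare macrostates
        if np.log(rng.random()) < ln_g[n] - ln_g[n_new]:
            state[i] ^= 1
            n = n_new
        ln_g[n] += ln_f
        hist[n] += 1
        if hist.min() > 0.8 * hist.mean():   # histogram flat enough
            hist[:] = 0
            ln_f /= 2.0

    ln_g -= ln_g[0]                     # fix the arbitrary additive constant
    exact = np.array([np.log(comb(N, k)) for k in range(N + 1)])
    print(np.round(ln_g - exact, 2))    # deviations from exact ln g(n): ~0
    ```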

  14. Simulation of the Recharging Method of Implantable Biosensors Based on a Wearable Incoherent Light Source

    PubMed Central

    Song, Yong; Hao, Qun; Kong, Xianyue; Hu, Lanxin; Cao, Jie; Gao, Tianxin

    2014-01-01

    Recharging implantable electronics from the outside of the human body is very important for applications such as implantable biosensors and other implantable electronics. In this paper, a recharging method for implantable biosensors based on a wearable incoherent light source has been proposed and simulated. Firstly, we develop a model of the incoherent light source and a multi-layer model of skin tissue. Secondly, the recharging processes of the proposed method have been simulated and tested experimentally, whereby some important conclusions have been reached. Our results indicate that the proposed method will offer a convenient, safe and low-cost recharging method for implantable biosensors, which should promote the application of implantable electronics. PMID:25372616

  15. A comparison of different estimation methods for simulation-based sample size determination in longitudinal studies

    NASA Astrophysics Data System (ADS)

    Bahçecitapar, Melike Kaya

    2017-07-01

    Determining the sample size necessary for correct results is a crucial step in the design of longitudinal studies. Simulation-based statistical power calculation is a flexible approach to determining the number of subjects and repeated measures of longitudinal studies, especially for complex designs. Several papers have provided sample size/statistical power calculations for longitudinal studies incorporating data analysis by linear mixed effects models (LMMs). In this study, different estimation methods (based on maximum likelihood (ML) and restricted ML) with different iterative algorithms (quasi-Newton and ridge-stabilized Newton-Raphson) for fitting LMMs to generated longitudinal data in simulation-based power calculation are compared. The study examines the statistical power of the F-test statistic for the parameter representing the difference in responses over time between two treatment groups in an LMM with a longitudinal covariate. The most common procedures in SAS, PROC GLIMMIX using the quasi-Newton algorithm and PROC MIXED using the ridge-stabilized algorithm, are used to analyze the generated longitudinal data in the simulation. Both procedures are found to give similar results. Moreover, the magnitude of the parameter of interest in the model substantially affects the statistical power calculations in both procedures.

  16. Face-based smoothed finite element method for real-time simulation of soft tissue

    NASA Astrophysics Data System (ADS)

    Mendizabal, Andrea; Bessard Duparc, Rémi; Bui, Huu Phuoc; Paulus, Christoph J.; Peterlik, Igor; Cotin, Stéphane

    2017-03-01

    In soft tissue surgery, a tumor and other anatomical structures are usually located using preoperative CT or MR images. However, due to the deformation of the concerned tissues, this information suffers from inaccuracy when employed directly during the surgery. In order to account for these deformations in the planning process, the use of a bio-mechanical model of the tissues is needed. Such models are often designed using the finite element method (FEM), which is, however, computationally expensive, in particular when a high accuracy of the simulation is required. In our work, we propose to use a smoothed finite element method (S-FEM) in the context of modeling the soft tissue deformation. This numerical technique has been introduced recently to overcome the overly stiff behavior of the standard FEM and to improve the solution accuracy and the convergence rate in solid mechanics problems. In this paper, a face-based smoothed finite element method (FS-FEM) using 4-node tetrahedral elements is presented. We show that in some cases the method allows for reducing the number of degrees of freedom while preserving the accuracy of the discretization. The method is evaluated on a simulation of a cantilever beam loaded at the free end and on a simulation of a 3D cube under traction and compression forces. Further, it is applied to the simulation of the brain shift and of the kidney's deformation. The results demonstrate that the method outperforms the standard FEM in a bending scenario and that it has accuracy similar to that of the standard FEM in the simulations of the brain shift and of the kidney's deformation.

  17. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by implementing F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the amount of calculation and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of the mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, so that a better correlation between simulation and test is achieved. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.

  18. Efficient Molecular Dynamics Simulations of Multiple Radical Center Systems Based on the Fragment Molecular Orbital Method

    SciTech Connect

    Nakata, Hiroya; Schmidt, Michael W; Fedorov, Dmitri G; Kitaura, Kazuo; Nakamura, Shinichiro; Gordon, Mark S

    2014-10-16

    The fully analytic energy gradient has been developed and implemented for the restricted open-shell Hartree–Fock (ROHF) method based on the fragment molecular orbital (FMO) theory for systems that have multiple open-shell molecules. The accuracy of the analytic ROHF energy gradient is compared with the corresponding numerical gradient, illustrating the accuracy of the analytic gradient. The ROHF analytic gradient is used to perform molecular dynamics simulations of an unusual open-shell system, liquid oxygen, and mixtures of oxygen and nitrogen. These molecular dynamics simulations provide some insight about how triplet oxygen molecules interact with each other. Timings reveal that the method can calculate the energy gradient for a system containing 4000 atoms in only 6 h. Therefore, it is concluded that the FMO-ROHF method will be useful for investigating systems with multiple open shells.

  19. Efficient molecular dynamics simulations of multiple radical center systems based on the fragment molecular orbital method.

    PubMed

    Nakata, Hiroya; Schmidt, Michael W; Fedorov, Dmitri G; Kitaura, Kazuo; Nakamura, Shinichiro; Gordon, Mark S

    2014-10-16

    The fully analytic energy gradient has been developed and implemented for the restricted open-shell Hartree-Fock (ROHF) method based on the fragment molecular orbital (FMO) theory for systems that have multiple open-shell molecules. The accuracy of the analytic ROHF energy gradient is compared with the corresponding numerical gradient, illustrating the accuracy of the analytic gradient. The ROHF analytic gradient is used to perform molecular dynamics simulations of an unusual open-shell system, liquid oxygen, and mixtures of oxygen and nitrogen. These molecular dynamics simulations provide some insight about how triplet oxygen molecules interact with each other. Timings reveal that the method can calculate the energy gradient for a system containing 4000 atoms in only 6 h. Therefore, it is concluded that the FMO-ROHF method will be useful for investigating systems with multiple open shells.

  20. Thermoelastic Simulations Based on Discontinuous Galerkin Methods: Formulation and Application in Gas Turbines

    NASA Astrophysics Data System (ADS)

    Hao, Zengrong; Gu, Chunwei; Song, Yin

    2016-06-01

    This study extends discontinuous Galerkin (DG) methods to simulations of thermoelasticity. A thermoelastic formulation of the interior penalty DG (IP-DG) method is presented, and aspects of the numerical implementation are discussed in matrix form. The content related to thermal expansion effects is illustrated explicitly in the discretized equation system. The feasibility of the method for general thermoelastic simulations is validated through typical test cases, including tackling stress discontinuities caused by jumps in thermal expansive properties and controlling the accompanying non-physical oscillations by adjusting the magnitude of the IP term. The simulation platform developed upon the method is applied to an engineering analysis of the thermoelastic performance of a turbine vane and of a series of vanes with various types of simplified thermal barrier coating (TBC) systems. This analysis demonstrates that while the heat conduction properties of the TBC are generally the major consideration for protecting the alloy base vanes, the mechanical properties may have more significant effects on the protection of the coatings themselves. The changing characteristics of the normal tractions on the TBC/base interface, which are closely related to the occurrence of coating failures, are summarized and analysed for diverse component distributions along the TBC thickness of the functionally graded materials, illustrating opposite tendencies in situations with different thermal-stress-free temperatures for the coatings.

  1. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks

    PubMed Central

    2014-01-01

    Background Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented as a probability model can better reflect authenticity and biological significance; therefore, it is more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has a relatively high computational complexity. Methods In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism combines the analysis of circuit topology with the related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism avoids using the traditional possible world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and to discover frequent probability patterns from the clusters. Results The experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The subgraphs discovered in our study contain all probability motifs reported in the experiments published in other related papers. Conclusions The algorithm of probability graph isomorphism

  2. An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming

    2017-02-01

    In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). We first perturb the IP core assignment of each TAM to produce a new solution for SA, allocate the TAM width for each TAM using a greedy algorithm, and calculate the corresponding testing time. The core assignment is then accepted or rejected according to the simulated annealing criterion, and the optimum solution is finally attained. We ran the test scheduling experiment with the international reference circuits provided by the International Test Conference 2002 (ITC'02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA) and the genetic algorithm (GA). When the TAM width reaches 48, 56 and 64, the testing time of our algorithm is less than that of the classic methods, with optimization rates of 30.74%, 3.32% and 16.13%, respectively. Moreover, the testing time of our algorithm is very close to that of the improved genetic algorithm (IGA), which is state-of-the-art at present.
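    The core SA loop is independent of the TAM-width details; in this simplified sketch (Python; toy core test times, fixed-width TAMs, and a plain makespan objective stand in for the paper's full cost model) worse solutions are accepted with Boltzmann probability at high temperature, which is what lets the search escape local minima.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    test_time = rng.integers(5, 50, size=12)   # toy test lengths of 12 IP cores
    n_tam = 3

    def makespan(assign):
        # total test time = longest per-TAM schedule
        return max(test_time[assign == t].sum() for t in range(n_tam))

    assign = rng.integers(0, n_tam, size=12)
    cur = best = makespan(assign)
    T = 100.0
    while T > 0.1:
        cand = assign.copy()
        cand[rng.integers(12)] = rng.integers(n_tam)  # move one core to another TAM
        c = makespan(cand)
        # always accept improvements; accept worse moves with prob exp(-dE/T)
        if c <= cur or rng.random() < np.exp((cur - c) / T):
            assign, cur = cand, c
            best = min(best, cur)
        T *= 0.995                                    # geometric cooling
    print(best)
    ```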

  3. Simulation of Electromagnetic Wave Logging Response in Deviated Wells Based on Vector Finite Element Method

    NASA Astrophysics Data System (ADS)

    Lv, Wei-Guo; Chu, Zhao-Tan; Zhao, Xiao-Qing; Fan, Yu-Xiu; Song, Ruo-Long; Han, Wei

    2009-01-01

    The vector finite element method with tetrahedral elements is used to model the 3D electromagnetic wave logging response. The tangential component of the vector field at the mesh edges is used as the degree of freedom to overcome the shortcomings of node-based finite element methods. The algorithm can simulate inhomogeneous media with arbitrary distributions of conductivity and magnetic permeability. The electromagnetic response of well logging tools is studied in dipping bed layers, with the borehole and invasion included. In order to simulate realistic logging tools, we take transmitter antennas consisting of circular wire loops instead of magnetic dipoles. We also investigate the apparent resistivity of inhomogeneous formations for different dip angles.

  4. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    PubMed Central

    Pezo, Danilo; Soudry, Daniel; Orio, Patricio

    2014-01-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive when the number of channels is high. Many recent works aim to speed up simulation using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties, such as the bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired by granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC, which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models, in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels each. PMID:25404914
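
    The gist of the Langevin DA can be shown in a few lines of Python. The sketch below integrates a single Hodgkin-Huxley-style gating variable with a 1/sqrt(N) channel-noise term and a naive clamp to [0,1]; the papers compared above differ precisely in how that bounding is handled, so this is only a generic baseline, not any one of their schemes.

        import numpy as np

        # Illustrative HH-type rate functions for a gating variable (per ms, mV).
        def alpha(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        def beta(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

        def simulate_da(v=-40.0, n_channels=200, dt=0.01, t_end=50.0, seed=0):
            rng = np.random.default_rng(seed)
            a, b = alpha(v), beta(v)
            n = a / (a + b)                      # start at steady state
            trace = []
            for _ in range(int(t_end / dt)):
                drift = a * (1.0 - n) - b * n
                # channel noise shrinks as 1/sqrt(N); this is the DA term
                sigma = np.sqrt((a * (1.0 - n) + b * n) / n_channels)
                n += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                n = min(max(n, 0.0), 1.0)        # naive bounding to [0, 1]
                trace.append(n)
            return np.array(trace)

        print(simulate_da().mean())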

  5. An Observationally-Based Method for Simulating Stochasticity in NWP Model Physics

    NASA Astrophysics Data System (ADS)

    Bao, Jian-Wen; Penland, Cecile; Tulich, Stefan; Pegion, Philip Phil; Whitaker, Jeffrey S.; Michelson, Sara A.; Grell, Evelyn D.

    2017-04-01

    We have developed a method, based on observations and on datasets from large-eddy simulations, that accounts more generally for model physics uncertainty in ensemble modeling systems. The essence of the method is a physically based stochastic differential equation that can efficiently generate the stochastically generated skew (SGS) distribution commonly seen in the statistics of atmospheric variables. A critical objective of this development is to replace the current operational algorithms for generating the model-error component of ensemble spread with algorithms that are more process-based and physically sound. The ongoing development involves (i) analyzing observations and large-eddy simulation output to specify the parameters required for generating the SGS distribution, and (ii) implementing and testing the newly developed method in NOAA's GEFS. We use the stochastic parameterization of subgrid-scale convection-induced momentum transport to demonstrate the advantage of the newly developed method.
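
    One standard route to an SGS-type (skewed, heavy-tailed) distribution from a physically based stochastic differential equation is a linear SDE with correlated additive and multiplicative (CAM) noise. The Python sketch below integrates such an SDE with Euler-Maruyama and checks that the stationary statistics are skewed; the equation form and all parameter values are illustrative assumptions, not the operational scheme described above.

        import numpy as np

        # Euler-Maruyama integration of a linear SDE with CAM noise:
        #   dx = -lam * x dt + (E * x + g) dW1 + b dW2
        def simulate_cam(lam=1.0, E=0.6, g=0.4, b=0.3, dt=1e-3, n=500_000, seed=1):
            rng = np.random.default_rng(seed)
            x = 0.0
            out = np.empty(n)
            sq = np.sqrt(dt)
            for i in range(n):
                dw1, dw2 = rng.standard_normal(2) * sq
                x += -lam * x * dt + (E * x + g) * dw1 + b * dw2
                out[i] = x
            return out

        x = simulate_cam()
        m, s = x.mean(), x.std()
        skew = ((x - m) ** 3).mean() / s ** 3
        kurt = ((x - m) ** 4).mean() / s ** 4 - 3.0
        print(f"skewness={skew:.2f}, excess kurtosis={kurt:.2f}")  # both nonzero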

  6. Simulations of Ground Motion in Southern California based upon the Spectral-Element Method

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Komatitsch, D.; Liu, Q.

    2003-12-01

    We use the spectral-element method to simulate ground motion generated by recent well-recorded small earthquakes in Southern California. Simulations are performed using a new sedimentary basin model that is constrained by hundreds of petroleum industry well logs and more than twenty thousand kilometers of seismic reflection profiles. The numerical simulations account for 3D variations of seismic wave speeds and density, topography and bathymetry, and attenuation. Simulations for several small recent events demonstrate that the combination of a detailed sedimentary basin model and an accurate numerical technique facilitates the simulation of ground motion at periods of 2 seconds and longer inside the Los Angeles basin and 6 seconds and longer elsewhere. Peak ground displacement, velocity and acceleration maps illustrate that significant amplification occurs in the basin. Centroid-Moment Tensor mechanisms are obtained based upon Pnl and surface waveforms and numerically calculated 3D Frechet derivatives. We use a combination of waveform and waveform-envelope misfit criteria, and facilitate pure double-couple or zero-trace moment-tensor inversions.

  7. A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data.

    PubMed

    He, Jingjing; Ran, Yunmeng; Liu, Bin; Yang, Jinsong; Guan, Xuefei

    2017-09-13

    This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of the construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, namely the normalized amplitude and the phase change, with the crack length through a response surface model; the two features are extracted from the first received S₀ mode wave package, and the model parameters are estimated using finite element simulation data. To account for the uncertainties in numerical modeling, geometry, material and manufacturing between the baseline model and the target structure, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions.
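
    A minimal Python sketch of the two-stage idea, with entirely synthetic numbers standing in for the finite element data and the measurements: a quadratic response surface is fitted offline, and a Metropolis sampler then updates a single model-bias parameter from three target-structure measurements.

        import numpy as np

        rng = np.random.default_rng(42)

        # Baseline response surface fitted to (synthetic) FE simulation data:
        # feature = c0 + c1 * crack + c2 * crack**2.  All numbers illustrative.
        crack_fe = np.linspace(1.0, 10.0, 20)                 # crack length, mm
        feat_fe = 0.05 + 0.08 * crack_fe - 0.002 * crack_fe**2
        coef = np.polyfit(crack_fe, feat_fe, 2)

        def baseline(crack, bias=0.0):
            return np.polyval(coef, crack) + bias

        # A few measurements from the "target structure" (synthetic offset + noise).
        crack_meas = np.array([3.0, 5.0, 7.0])
        feat_meas = baseline(crack_meas) + 0.02 + rng.normal(0, 0.005, 3)

        # Metropolis sampling of the model-bias parameter, prior N(0, 0.05^2).
        def log_post(bias, noise=0.005):
            resid = feat_meas - baseline(crack_meas, bias)
            return -0.5 * np.sum(resid**2) / noise**2 - 0.5 * bias**2 / 0.05**2

        bias, samples = 0.0, []
        for _ in range(20000):
            prop = bias + rng.normal(0, 0.01)
            if np.log(rng.uniform()) < log_post(prop) - log_post(bias):
                bias = prop                     # accept the proposed bias
            samples.append(bias)
        print("posterior bias ~", np.mean(samples[5000:]))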

  8. Evaluation of a Validation Method for MR Imaging-Based Motion Tracking Using Image Simulation

    NASA Astrophysics Data System (ADS)

    Moerman, Kevin M.; Kerskens, Christian M.; Lally, Caitríona; Flamini, Vittoria; Simms, Ciaran K.

    2009-12-01

    Magnetic Resonance (MR) imaging-based motion and deformation tracking combined with finite element (FE) analysis is a powerful method for soft tissue constitutive model parameter identification. However, deriving deformation data from MR images is complex and generally requires validation. Validation of in vivo medical imaging techniques is challenging because appropriate reference data are often lacking. In this paper a validation method is presented based on a silicone gel phantom containing contrasting spherical markers, whose tracking provides a direct measure of deformation; the method is evaluated using simulated MR image data. The simulated data provide an appropriate reference, allow different error sources to be studied independently, and allow the method to be evaluated at various signal-to-noise ratios (SNRs). The geometric bias error was between 0 and [InlineEquation not available: see fulltext.] voxels, while the noisy magnitude MR image simulations demonstrated errors under 0.1161 voxels (SNR: 5-35).

  9. Development and elaboration of numerical method for simulating gas-liquid-solid three-phase flows based on particle method

    NASA Astrophysics Data System (ADS)

    Takahashi, Ryohei; Mamori, Hiroya; Yamamoto, Makoto

    2016-02-01

    A numerical method for simulating gas-liquid-solid three-phase flows based on the moving particle semi-implicit (MPS) approach was developed in this study. Computational instability often occurs in multiphase flow simulations if the deformations of the free surfaces between different phases are large, among other reasons. To avoid this instability, this paper proposes an improved coupling procedure between different phases in which the physical quantities of particles in different phases are calculated independently. We performed numerical tests on two illustrative problems: a dam-break problem and a solid-sphere impingement problem. The former problem is a gas-liquid two-phase problem, and the latter is a gas-liquid-solid three-phase problem. The computational results agree reasonably well with the experimental results. Thus, we confirmed that the proposed MPS method reproduces the interaction between different phases without inducing numerical instability.

  10. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks.

    PubMed

    He, Jieyue; Wang, Chunyan; Qiu, Kunpu; Zhong, Wei

    2014-01-01

    Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, because of inevitable experimental error and noisy data, biological network data represented by a probability model better reflect the underlying reality and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and thus has a relatively high computational complexity. In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to mining non-tree-like subgraphs, whose probability of occurrence in random networks is small. Then, a probability isomorphism algorithm based on circuit simulation is proposed; it combines the analysis of circuit topology with the related physical properties of voltage to evaluate the probability isomorphism between probability subgraphs, thereby avoiding the traditional possible world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Experiments on Protein-Protein Interaction (PPI) networks and on the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method efficiently discovers the frequent probability subgraphs, and the discovered subgraphs contain all probability motifs reported in related published experiments. The circuit simulation-based probability graph isomorphism algorithm thus avoids the possible world model and makes frequent probability pattern mining in uncertain biological networks computationally tractable.
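
    The circuit idea can be made concrete with a toy in Python: edge existence probabilities are treated as conductances, and the multiset of pairwise effective resistances of the resulting network, obtained from the pseudoinverse of the weighted graph Laplacian, serves as a relabeling-invariant voltage-based signature for comparing probability subgraphs. This is only an illustration of the analogy, not the paper's algorithm.

        import numpy as np

        def resistance_signature(n_nodes, prob_edges):
            """Sorted pairwise effective resistances of a network whose edge
            conductances equal the edge existence probabilities."""
            L = np.zeros((n_nodes, n_nodes))
            for i, j, p in prob_edges:
                L[i, i] += p; L[j, j] += p
                L[i, j] -= p; L[j, i] -= p
            Lp = np.linalg.pinv(L)            # pseudoinverse of the singular Laplacian
            d = np.diag(Lp)
            R = d[:, None] + d[None, :] - 2 * Lp   # effective resistance matrix
            iu = np.triu_indices(n_nodes, 1)
            return np.sort(R[iu])             # sorting removes node labels

        g1 = resistance_signature(3, [(0, 1, 0.9), (1, 2, 0.8), (0, 2, 0.5)])
        g2 = resistance_signature(3, [(1, 2, 0.9), (0, 2, 0.8), (0, 1, 0.5)])  # relabeled g1
        print(np.allclose(g1, g2))  # True: same signature under relabeling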

  11. Simulation of 2D Brain's Potential Distribution Based on Two Electrodes ECVT Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Sirait, S. H.; Edison, R. E.; Baidillah, M. R.; Taruno, W. P.; Haryanto, F.

    2016-08-01

    The aim of this study is to simulate the potential distribution of a 2D brain geometry in two-electrode ECVT. ECVT (electrical capacitance volume tomography) is a tomography modality that produces an image of the dielectric distribution of a subject from measurements on several capacitance electrodes. The study begins by producing a 2D brain geometry from an MRI image and setting conditions on the boundaries of the geometry. The boundary values follow the potentials used in two-electrode brain ECVT: one boundary is driven with a 20 V, 2.5 MHz signal and the other is grounded. The Poisson equation is the governing equation in the 2D brain geometry, and the finite element method is used to solve it. A simulated Hodgkin-Huxley action potential is applied as a disturbance potential in the geometry. Two simulations are run, one without and one with the disturbance potential, and the time-dependent potential distribution of the 2D brain geometry is generated for each case.
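
    A finite-difference stand-in conveys the boundary-value setup in a few lines of Python: Laplace's equation is iterated on a square slice with one edge driven at 20 V and the opposite edge grounded. The paper itself solves the Poisson equation with FEM on a real MRI-derived brain geometry, so the domain and solver here are simplifications.

        import numpy as np

        n = 50
        v = np.zeros((n, n))
        v[:, 0] = 20.0          # driven electrode boundary (20 V)
        v[:, -1] = 0.0          # grounded boundary
        for _ in range(5000):   # Jacobi iteration of the 5-point Laplacian
            v[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1] +
                                    v[1:-1, :-2] + v[1:-1, 2:])
            v[:, 0], v[:, -1] = 20.0, 0.0   # re-impose Dirichlet conditions
        print(v[n // 2, ::10])  # potential sampled along the midline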

  12. A Local Order Parameter-Based Method for Simulation of Free Energy Barriers in Crystal Nucleation.

    PubMed

    Eslami, Hossein; Khanjari, Neda; Müller-Plathe, Florian

    2017-03-14

    While global order parameters have been widely used as reaction coordinates in nucleation and crystallization studies, their use in nucleation studies is claimed to have a serious drawback. In this work, a local order parameter is introduced as a local reaction coordinate to drive the simulation from the liquid phase to the solid phase and vice versa. This local order parameter holds information regarding the order in the first- and second-shell neighbors of a particle and has different well-defined values for local crystallites and disordered neighborhoods but is insensitive to the type of the crystal structure. The order parameter is employed in metadynamics simulations to calculate the solid-liquid phase equilibria and free energy barrier to nucleation. Our results for repulsive soft spheres and the Lennard-Jones potential, LJ(12-6), reveal better-resolved solid and liquid basins compared with the case in which a global order parameter is used. It is also shown that the configuration space is sampled more efficiently in the present method, allowing a more accurate calculation of the free energy barrier and the solid-liquid interfacial free energy. Another feature of the present local order parameter-based method is that it is possible to apply the bias potential to regions of interest in the order parameter space, for example, on the largest nucleus in the case of nucleation studies. In the present scheme for metadynamics simulation of the nucleation in supercooled LJ(12-6) particles, unlike the cases in which global order parameters are employed, there is no need to have an estimate of the size of the critical nucleus and to refine the results with the results of umbrella sampling simulations. The barrier heights and the nucleation pathway obtained from this method agree very well with the results of former umbrella sampling simulations.

  13. A Volume-of-Fluid based simulation method for wave impact problems

    NASA Astrophysics Data System (ADS)

    Kleefsman, K. M. T.; Fekken, G.; Veldman, A. E. P.; Iwanowski, B.; Buchner, B.

    2005-06-01

    In this paper, some aspects of water impact and green water loading are considered by numerically investigating a dambreak problem and water entry problems. The numerical method is based on the Navier-Stokes equations that describe the flow of an incompressible viscous fluid. The equations are discretised on a fixed Cartesian grid using the finite volume method. Even though very small cut cells can appear when moving an object through the fixed grid, the method is stable. The free surface is displaced using the Volume-of-Fluid method together with a local height function, resulting in a strictly mass conserving method. The choice of boundary conditions at the free surface appears to be crucial for the accuracy and robustness of the method. For validation, results of a dambreak simulation are shown that can be compared with measurements. A box has been placed in the flow, as a model for a container on the deck of an offshore floater on which forces are calculated. The water entry problem has been investigated by dropping wedges with different dead-rise angles, a cylinder and a cone into calm water with a prescribed velocity. The resulting free surface dynamics, with the sideways jets, has been compared with photographs of experiments. Also a comparison of slamming coefficients with theory and experimental results has been made. Finally, a drop test with a free falling wedge has been simulated.

  14. Full wave simulation of lower hybrid waves in Maxwellian plasma based on the finite element method

    SciTech Connect

    Meneghini, O.; Shiraiwa, S.; Parker, R.

    2009-09-15

    A full wave simulation of the lower-hybrid (LH) wave based on the finite element method is presented. For the LH wave, the most important terms of the dielectric tensor are the cold plasma contribution and the electron Landau damping (ELD) term, which depends only on the component of the wave vector parallel to the background magnetic field. The nonlocal hot plasma ELD effect was expressed as a convolution integral along the magnetic field lines and the resultant integro-differential Helmholtz equation was solved iteratively. The LH wave propagation in a Maxwellian tokamak plasma based on the Alcator C experiment was simulated for electron temperatures in the range of 2.5-10 keV. Comparison with ray tracing simulations showed good agreement when the single pass damping is strong. The advantages of the new approach include a significant reduction of computational requirements compared to full wave spectral methods and seamless treatment of the core, the scrape-off layer and the launcher regions.

  15. Models and Methods for Adaptive Management of Individual and Team-Based Training Using a Simulator

    NASA Astrophysics Data System (ADS)

    Lisitsyna, L. S.; Smetyuh, N. P.; Golikov, S. P.

    2017-05-01

    An analysis of research on adaptive individual and team-based training shows that, both in Russia and abroad, the training and retraining of AASTM operators usually includes production training, training in general computer and office equipment skills, and simulator training, including virtual simulators that use computers to simulate real-world manufacturing situations; as a rule, the knowledge of AASTM operators is evaluated by the completeness and adequacy of their actions under the simulated conditions. Such an approach to the training and retraining of AASTM operators provides only technical training and tests operators' knowledge solely by assessing their actions in a simulated environment.

  16. Minimizing the Discrepancy between Simulated and Historical Failures in Turbine Engines: A Simulation-Based Optimization Method (Postprint)

    DTIC Science & Technology

    2015-01-01

    Report AFRL-RX-WP-JA-2015-0169; period of performance 15 November 2011 - 30 December 2014. Title: Minimizing the Discrepancy between Simulated and Historical Failures in Turbine Engines: A Simulation-Based Optimization Method. The final publication is available at http://dx.doi.org/10.1155/2015/813565. Abstract (truncated): The reliability modeling of a module in a turbine engine

  17. On a Wavelet-Based Method for the Numerical Simulation of Wave Propagation

    NASA Astrophysics Data System (ADS)

    Hong, Tae-Kyung; Kennett, B. L. N.

    2002-12-01

    A wavelet-based method for the numerical simulation of acoustic and elastic wave propagation is developed. Using a displacement-velocity formulation and treating spatial derivatives with linear operators, the wave equations are rewritten as a system of equations whose evolution in time is controlled by first-order derivatives. The linear operators for spatial derivatives are implemented in wavelet bases using an operator projection technique with nonstandard forms of the wavelet transform. Using a semigroup approach, the discretized solution in time can be represented in an explicit recursive form, based on a Taylor expansion of exponential functions of the operator matrices. The boundary conditions are implemented by augmenting the system of equations with equivalent force terms at the boundaries. The wavelet-based method is applied to the acoustic wave equation with rigid boundary conditions at both ends of a 1-D domain and to the elastic wave equation with traction-free boundary conditions at a free surface in 2-D media. The method can be applied directly to media with plane surfaces, and surface topography can be included with the aid of distortion of the grid describing the properties of the medium. The numerical results are compared with analytic solutions based on the Cagniard technique and show high accuracy. The wavelet-based approach is also demonstrated for complex media including highly varying topography or stochastic heterogeneity with rapid variations in physical parameters. These examples indicate the value of the approach as an accurate and stable tool for the simulation of wave propagation in general complex media.
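
    The semigroup time stepping can be sketched in Python independently of the wavelet machinery: write the 1-D wave equation as a first-order system u_t = A u and advance it with a truncated Taylor expansion of exp(dt A). Finite-difference matrices stand in for the wavelet-projected operators, and all sizes and constants are illustrative.

        import numpy as np

        n, dx, c, dt = 200, 1.0, 1.0, 0.4
        # Centered first-derivative matrix; applied twice it gives a Laplacian.
        D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dx)
        Z = np.zeros((n, n))
        A = np.block([[Z, np.eye(n)], [c**2 * D @ D, Z]])  # d/dt [u, v] = [v, c^2 u_xx]

        def step(state, order=4):
            """One step of u <- (I + dt*A + ... + (dt*A)^order / order!) u."""
            out, term = state.copy(), state.copy()
            for k in range(1, order + 1):
                term = dt * (A @ term) / k      # (dt*A)^k / k! built recursively
                out += term
            return out

        u = np.exp(-0.01 * (np.arange(n) - n / 2) ** 2)   # Gaussian pulse
        state = np.concatenate([u, np.zeros(n)])
        for _ in range(100):
            state = step(state)
        print(np.abs(state[:n]).max())   # ~0.5: pulse split into two traveling waves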

  18. Stray light analysis and suppression method of dynamic star simulator based on LCOS splicing technology

    NASA Astrophysics Data System (ADS)

    Meng, Yao; Zhang, Guo-yu

    2015-10-01

    A star simulator serves as ground calibration equipment for star sensors, testing their related parameters and performance. At present, when a dynamic star simulator based on LCOS splicing is identified by the star sensor, the major problem is the poor contrast of the LCOS. In this paper, we analyze the cause of LCOS stray light, namely the relation between the incident angle of the light and the contrast ratio, and establish the functional relationship between the incident angle and the irradiance of the stray light. Based on this relationship, we propose a scheme to control the incident angle. A popular approach is the compound parabolic concentrator (CPC); although in theory it can restrict the light to any desired angle, in practice it is usually used above +/-15° because of its length and manufacturing cost. We therefore place a telescopic system in front of the CPC, working on the same principle as a laser beam expander. We simulate the CPC in TracePro to obtain the irradiance at its exit surface, and the telescopic system is designed in ZEMAX to correct chromatic aberration. As a result, we obtain a collimated light source whose viewing angle is less than +/-5° and whose uniform irradiation area is greater than 20 mm × 20 mm.

  19. An agent-based method for simulating porous fluid-saturated structures with indistinguishable components

    NASA Astrophysics Data System (ADS)

    Kashani, Jamal; Pettet, Graeme John; Gu, YuanTong; Zhang, Lihai; Oloyede, Adekunle

    2017-10-01

    Single-phase porous materials contain multiple components that intermingle down to the ultramicroscopic level. Although the structures of porous materials have been simulated with agent-based methods, the available methods continue to produce patterns of distinguishable solid and fluid agents, which do not represent materials with indistinguishable phases. This paper introduces a new agent (the hybrid agent) and a new category of rules (intra-agent rules) that can be used to create emergent structures that more accurately represent single-phase structures and materials. The novel hybrid agent carries the characteristics of the system's elements and is capable of changing within itself, while also responding to its neighbours as they change. As an example, the hybrid agent, under a one-dimensional cellular automata formalism in a two-dimensional domain, is used to generate patterns that demonstrate striking morphological similarities with porous saturated single-phase structures, in which each agent of the structure carries a semi-permeability property and consists of both fluid and solid in space at all times. We conclude that the ability of the hybrid agent to change locally provides an enhanced protocol for simulating complex porous structures such as biological tissues, which could facilitate models for agent-based techniques and numerical methods.
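
    A toy Python version of the hybrid-agent idea under a 1-D cellular automata formalism: each cell holds a continuous solid fraction rather than a binary solid/fluid label, an intra-agent rule changes the cell internally, and a neighbour rule couples it to adjacent cells. The specific rules and constants below are invented for illustration only.

        import numpy as np

        rng = np.random.default_rng(3)
        cells = rng.uniform(0.0, 1.0, 80)      # initial solid fractions per cell

        for step in range(200):
            neigh = 0.5 * (np.roll(cells, 1) + np.roll(cells, -1))
            cells = 0.8 * cells + 0.2 * neigh              # inter-agent rule: relax toward neighbours
            cells += 0.05 * np.sin(6.0 * cells)            # intra-agent rule: internal change
            cells = np.clip(cells, 0.0, 1.0)

        # every cell stays part solid, part fluid: the phases remain indistinguishable
        print(cells.min(), cells.max())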

  20. A novel antibody humanization method based on epitopes scanning and molecular dynamics simulation.

    PubMed

    Zhang, Ding; Chen, Cai-Feng; Zhao, Bin-Bin; Gong, Lu-Lu; Jin, Wen-Jing; Liu, Jing-Jun; Wang, Jing-Fei; Wang, Tian-Tian; Yuan, Xiao-Hui; He, You-Wen

    2013-01-01

    1-17-2 is a rat anti-human DEC-205 monoclonal antibody that induces internalization and delivers antigen to dendritic cells (DCs). The potential clinical application of this antibody is limited by its murine origin. Traditional humanization methods such as complementarity determining region (CDR) grafting often lead to decreased or even lost affinity. Here we have developed a novel antibody humanization method based on computer modeling and bioinformatics analysis. First, we used homology modeling to build a precise model of the Fab. A novel epitope scanning algorithm was designed to identify antigenic residues in the framework regions (FRs) that need to be mutated to their human counterparts in the humanization process. Virtual mutation and molecular dynamics (MD) simulation were then used to assess the conformational impact imposed by all the mutations. By comparing the root-mean-square deviations (RMSDs) of the CDRs, we found five key residues whose mutations would destroy the original conformation of the CDRs; these residues needed to be back-mutated to rescue the antibody binding affinity. Finally, we constructed the antibodies in vitro and compared their binding affinities by flow cytometry and surface plasmon resonance (SPR) assay. The binding affinity of the refined humanized antibody was similar to that of the original rat antibody. Our results establish a novel antibody humanization method based on epitope scanning and MD simulation.

  1. Parallel octree-based multiresolution mesh method for large-scale earthquake ground motion simulation

    NASA Astrophysics Data System (ADS)

    Kim, Eui Joong

    Large-scale ground motion simulation requires supercomputing systems in order to obtain reliable and useful results within a reasonable elapsed time. In this study, we develop a framework for terascale ground motion simulations in highly heterogeneous basins. As part of the development, we present a parallel octree-based multiresolution finite element methodology for the elastodynamic wave propagation problem. The octree-based multiresolution finite element method reduces memory use significantly and improves overall computational performance. The framework comprises three parts: (1) an octree-based mesh generator, Euclid, developed by Tu and O'Hallaron; (2) a parallel mesh partitioner, ParMETIS, developed by Karypis et al. [2]; and (3) a parallel octree-based multiresolution finite element solver, QUAKE, developed in this study. Realistic earthquake parameters, soil material properties, and sedimentary basin dimensions produce extremely large meshes; the out-of-core version of the octree-based mesh generator Euclid overcomes the resulting severe memory limitations. By using a parallel, distributed-memory graph partitioning algorithm, ParMETIS partitions large meshes, overcoming the memory and cost problem. Despite the capability of the octree-based multiresolution mesh method (OBM3), large problem sizes necessitate parallelism to handle the large memory and work requirements, and the parallel OBM3 elastic wave propagation code QUAKE has been developed to address these issues. The numerical methodology and the framework have been used to simulate the seismic response of both idealized systems and the Greater Los Angeles basin to simple pulses and to the mainshock of the 1994 Northridge Earthquake, for frequencies of up to 1 Hz and a domain of 80 km x 80 km x 30 km. In the idealized models, QUAKE shows good agreement with analytical Green's function solutions. In the realistic models of the Northridge earthquake mainshock, QUAKE qualitatively agrees, with at most

  2. A simple numerical method for snowmelt simulation based on the equation of heat energy.

    PubMed

    Stojković, Milan; Jaćimović, Nenad

    2016-01-01

    This paper presents a one-dimensional numerical model for snowmelt/accumulation simulations based on the equation of heat energy. It is assumed that the snow column is homogeneous at the current time step; however, characteristics such as snow density and thermal conductivity are treated as functions of time. The equation of heat energy for the snow column is solved using the implicit finite difference method. The incoming energy at the snow surface comprises conduction, convection, radiation and raindrop energy. Along with the snowmelt process, the model includes a model for snow accumulation; the Euler method is utilized for the numerical integration of the balance equation. The model's applicability is demonstrated at the Zlatibor meteorological station, located in the western region of Serbia at 1,028 meters above sea level (m.a.s.l.). Simulation results of snowmelt/accumulation suggest that the proposed model achieves better agreement with observed data than the temperature index method. The proposed method may be utilized as part of a deterministic hydrological model to improve short- and long-term predictions of possible flood events.
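
    The core numerical step, a backward Euler finite-difference solve of the heat equation through the snow column, can be sketched in Python as follows; the layer count, time step and material constants are illustrative, and the paper additionally lets density and conductivity vary in time.

        import numpy as np

        n, dz, dt = 20, 0.01, 60.0            # 20 layers of 1 cm, 60 s time step
        k, rho, cp = 0.3, 300.0, 2100.0       # W/m/K, kg/m3, J/kg/K (illustrative)
        r = k * dt / (rho * cp * dz**2)

        T = np.full(n, -5.0)                  # initial snow temperature, deg C
        T_surface, T_ground = -1.0, 0.0       # Dirichlet boundary temperatures

        # Backward Euler system matrix (I + 2r on the diagonal, -r off-diagonal).
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = 1 + 2 * r
            if i > 0: A[i, i - 1] = -r
            if i < n - 1: A[i, i + 1] = -r

        for _ in range(60):                   # one hour of simulation
            b = T.copy()
            b[0] += r * T_surface             # fold boundary values into the rhs
            b[-1] += r * T_ground
            T = np.linalg.solve(A, b)
        print(T.round(2))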

  3. Evaluation of FTIR-based analytical methods for the analysis of simulated wastes

    SciTech Connect

    Rebagay, T.V.; Cash, R.J.; Dodd, D.A.; Lockrem, L.L.; Meacham, J.E.; Winkelman, W.D.

    1994-09-30

    Three FTIR-based analytical methods with the potential to characterize simulated waste tank materials have been evaluated: (1) fiber optics, (2) modular transfer optics using light guides equipped with non-contact sampling peripherals, and (3) photoacoustic spectroscopy. Pertinent instrumentation and experimental procedures for each method are described. The results show that the near-infrared (NIR) region of the infrared spectrum is the region of choice for the measurement of moisture in waste simulants, and that differentiating the NIR spectrum as a preprocessing step improves the analytical result. Preliminary data indicate that prominent combination bands of water and the first overtone band of the ferrocyanide stretching vibration may be utilized to measure water and ferrocyanide species simultaneously; both near-infrared and mid-infrared spectra must be collected, however, to measure ferrocyanide species unambiguously and accurately. For ease of sample handling and the potential for field or waste tank deployment, the FTIR-fiber optic method is preferred over the other two; modular transfer optics using light guides and photoacoustic spectroscopy may be used as backup systems and to validate the fiber optic data.

  4. IR imaging simulation and analysis for aeroengine exhaust system based on reverse Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Chen, Shiguo; Chen, Lihai; Mo, Dongla; Shi, Jingcheng

    2014-11-01

    The IR radiation characteristics of an aeroengine are an important basis for the IR stealth design and anti-stealth detection of aircraft, and the importance of aircraft IR stealth grows with the development of IR imaging sensor technology. This paper explores target IR radiation imaging simulation based on the Reverse Monte Carlo Method (RMCM) combined with commercial CFD software. The flow and IR radiation characteristics of an aeroengine exhaust system are investigated by developing a full-size geometry model based on the actual parameters, using an integrated flow-IR structured mesh, taking the engine performance parameters as the inlet boundary conditions of the mixer section, and constructing an RMCM-based numerical simulation model of the IR radiation characteristics of the engine exhaust system. With these models, the IR radiation characteristics of the aeroengine exhaust system are given, focusing on IR spectral radiance imaging in the typical detection band at an azimuth of 20°. The results show that: (1) at small azimuth angles, the IR radiation of the hot parts comes mainly from the center cone; near an azimuth of 15°, the mixer makes the biggest radiation contribution, while the center cone, turbine and flame stabilizer contribute comparably; (2) the main radiation components and their spatial distributions differ across the spectrum, with CO2 absorbing and emitting strongly at 4.18, 4.33 and 4.45 microns, and H2O at 3.0 and 5.0 microns.

  5. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

    A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each said process; and programming each said agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
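
    A minimal sketch of the claimed message loop in Python, with a hypothetical agent class responding to the three event types named in the patent (clock tick, resources received, request for output production); the production logic is invented for the example.

        # Process agents on one processor reacting to discrete events in a loop.
        class ProcessAgent:
            def __init__(self, name):
                self.name, self.resources, self.output = name, 0, 0

            def handle(self, event):
                if event == "clock_tick" and self.resources > 0:
                    self.resources -= 1
                    self.output += 1            # one unit produced per tick
                elif event == "resources_received":
                    self.resources += 5
                elif event == "request_output":
                    print(f"{self.name}: produced {self.output} units")

        agents = [ProcessAgent("casting"), ProcessAgent("machining")]
        events = ["resources_received"] + ["clock_tick"] * 3 + ["request_output"]
        for event in events:                    # the message loop
            for agent in agents:
                agent.handle(event)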

  6. Simulation and evaluation of tablet-coating burst based on finite element method.

    PubMed

    Yang, Yan; Li, Juan; Miao, Kong-Song; Shan, Wei-Guang; Tang, Lan; Yu, Hai-Ning

    2016-09-01

    The objective of this study was to simulate and evaluate the burst behavior of coated tablets. Three-dimensional finite element models of tablet coatings were established using the software ANSYS. The swelling pressure of the cores was measured with a purpose-built device and applied at the internal surface of the models, and the mechanical properties of the polymer film were determined using a texture analyzer and applied as the material properties of the models. The resulting finite element models were validated against experimental data and then used to assess the factors influencing burst behavior and to predict it. The simulated coating burst and failure locations matched the experimental data closely. Internal swelling pressure, inside corner radius and corner thickness were found to be the three main factors controlling the stress distribution and burst behavior. Based on the linear relationship between the internal pressure and the maximum principal stress on the coating, the burst pressure of the coatings was calculated and used to predict the burst behavior. This study demonstrates that the burst behavior of coated tablets can be simulated and evaluated by the finite element method.

  7. Density-of-states based Monte Carlo methods for simulation of biological systems

    NASA Astrophysics Data System (ADS)

    Rathore, Nitin; Knotts, Thomas A.; de Pablo, Juan J.

    2004-03-01

    We have developed density-of-states [1] based Monte Carlo techniques for simulation of biological molecules. Two such methods are discussed. The first, Configurational Temperature Density of States (CTDOS) [2], relies on computing the density of states of a peptide system from knowledge of its configurational temperature. The reciprocal of this intrinsic temperature, computed from instantaneous configurational information of the system, is integrated to arrive at the density of states. The method shows improved efficiency and accuracy over techniques that are based on histograms of random visits to distinct energy states. The second approach, Expanded Ensemble Density of States (EXEDOS), incorporates elements from both the random walk method and the expanded ensemble formalism. It is used in this work to study mechanical deformation of model peptides. Results are presented in the form of force-extension curves and the corresponding potentials of mean force. The application of this proposed technique is further generalized to other biological systems; results will be presented for ion transport through protein channels, base stacking in nucleic acids and hybridization of DNA strands. [1]. F. Wang and D. P. Landau, Phys. Rev. Lett., 86, 2050 (2001). [2]. N. Rathore, T. A. Knotts IV and J. J. de Pablo, Biophys. J., Dec. (2003).
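
    The flavor of a density-of-states calculation can be shown with a Wang-Landau random walk (reference [1] above) on a toy system. The Python sketch below estimates g(E) for a 10-spin periodic 1-D Ising chain, a stand-in for the peptide systems discussed; for this chain the exact degeneracies (2, 90, 420, 420, 90, 2) are known, so the output can be checked.

        import math
        import random

        N = 10
        spins = [1] * N
        def energy(s):
            return -sum(s[i] * s[(i + 1) % N] for i in range(N))

        levels = list(range(-N, N + 1, 4))          # reachable energy levels
        lng = {E: 0.0 for E in levels}              # running estimate of log g(E)
        hist = {E: 0 for E in levels}
        f, E = 1.0, energy(spins)
        while f > 1e-6:
            i = random.randrange(N)
            spins[i] *= -1                          # propose a single spin flip
            E_new = energy(spins)
            if random.random() < math.exp(lng[E] - lng[E_new]):
                E = E_new                           # accept with prob min(1, g(E)/g(E'))
            else:
                spins[i] *= -1                      # reject: undo the flip
            lng[E] += f
            hist[E] += 1
            if min(hist.values()) > 20 * len(levels):   # crude flatness test
                hist = {k: 0 for k in hist}
                f /= 2.0                            # refine the modification factor
        ref = math.log(2.0)                         # g(-10) = 2 (all up / all down)
        print({k: round(math.exp(v - lng[-N] + ref), 1) for k, v in lng.items()})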

  8. Spin tracking simulations in AGS based on ray-tracing methods - bare lattice, no snakes -

    SciTech Connect

    Meot, F.; Ahrens, L.; Gleen, J.; Huang, H.; Luccio, A.; MacKay, W. W.; Roser, T.; Tsoupas, N.

    2009-09-01

    This Note reports on the first simulations of spin dynamics in the AGS using the ray-tracing code Zgoubi. It includes lattice analysis, comparisons with MAD, DA tracking, numerical calculation of depolarizing resonance strengths and comparisons with analytical models, as well as details on the setting up of Zgoubi input data files and on the various relevant numerical methods available in Zgoubi. Simulations of the crossing and neighborhood of spin resonances in the AGS ring, bare lattice, without snakes, have been performed in order to assess the capabilities of Zgoubi in this matter, and are reported here. This yields a rather long document, for two main reasons: on the one hand, the desire for an extended investigation of the energy span, and on the other hand, a thorough comparison of Zgoubi results with analytical models such as the 'thin lens' approximation, the weak resonance approximation, and the static case. Section 2 details the working hypotheses: AGS lattice data, the formulae used for deriving various resonance-related quantities from the ray-tracing-based 'numerical experiments', etc. Section 3 gives inventories of the intrinsic and imperfection resonances together with, in a number of cases, the strengths derived from the ray tracing. Section 4 gives the details of the numerical simulations of resonance crossing, including the behavior of various quantities (closed orbit, synchrotron motion, etc.) aimed at verifying that the conditions of particle and spin motion are correct. In a similar manner, Section 5 gives the details of the numerical simulations of spin motion in the static case: fixed energy in the neighborhood of the resonance. In Section 6, weak resonances are explored and Zgoubi results are compared with the Fresnel integral model. Section 7 shows the computation of the n-vector in the AGS lattice and tuning considered. Many details on the numerical conditions, such as data files, are given in the Appendix.

  9. Simulation on Temperature Field of Radiofrequency Lesions System Based on Finite Element Method

    NASA Astrophysics Data System (ADS)

    Xiao, D.; Qian, L.; Qian, Z.; Li, W.

    2011-01-01

    This paper describes how to obtain a volume model of the damaged region from simulations of the temperature field of a radiofrequency lesioning system used to treat Parkinson's disease, based on the finite element method. The volume model reflects, to some degree, the shape and size of the damaged tissue during treatment and its evolution over time or with core temperature. Using the Pennes equation as the heat conduction equation for radiofrequency ablation of biological tissue, the temperature distribution in the tissue is obtained by solving the equations with the finite element method. To establish damage models at temperatures of 60°C, 65°C, 70°C, 75°C, 80°C, 85°C and 90°C and times of 30 s, 60 s, 90 s and 120 s, the Parkinson's disease model of the nuclei is reduced to a uniform, infinite model with the RF pin at the origin. Theoretical simulations of these models are presented, focusing on how the effective lesion size varies horizontally and vertically under a variety of conditions. The results yield bivariate complete quadratic nonlinear joint temperature-time models of the maximum damage diameter and maximum height, which comprehensively reflect the degeneration of the target tissue caused by the radiofrequency temperature and duration. This lays the foundation for accurate monitoring of clinical RF treatment of Parkinson's disease in the future.

  10. A VOF-based method for the simulation of thermocapillary flow

    NASA Astrophysics Data System (ADS)

    Ma, Chen; Bothe, Dieter

    2010-11-01

    This contribution concerns 3D direct numerical simulation of surface tension-driven two-phase flow with a free deformable interface. The two-phase Navier-Stokes equations together with the energy balance in temperature form for incompressible, immiscible fluids are solved. We employ an extended VOF (volume of fluid) method, where the interface is kept sharp using the PLIC method (piecewise linear interface construction). The surface tension, modeled as a body force via the interface delta function, is assumed to be linearly dependent on temperature. The surface temperature gradient calculation is based on carefully computed interface temperatures. Numerical results on the thermocapillary migration of droplets are obtained for a wide range of Marangoni numbers. Both the terminal and initial stages of the migration are studied and very good agreement with theoretical and experimental results is achieved. In addition, simulation of the Bénard-Marangoni instability in square containers with small aspect ratio and high-Prandtl-number fluids is discussed with regard to the development and number of convection cells in relation to the aspect ratio.

  11. A method based on Monte Carlo simulation for the determination of the G(E) function.

    PubMed

    Chen, Wei; Feng, Tiancheng; Liu, Jun; Su, Chuanying; Tian, Yanjie

    2015-02-01

    The G(E) function method is a spectrometric method for estimating exposure dose; this paper describes a Monte Carlo-based method to determine the G(E) function of a 4″ × 4″ × 16″ NaI(Tl) detector. Simulated spectra of monoenergetic gamma rays in the region of 40-3200 keV, and the corresponding energy deposited in an air sphere within the full-energy-peak region, were obtained using the Monte Carlo N-Particle Transport Code. The absorbed dose rate in air was obtained from the deposited energy and divided by the counts of the corresponding full-energy peak to obtain the G(E) function value at energy E in the spectra. The curve-fitting software 1stOpt was used to determine the coefficients of the G(E) function. Experimental results show that dose rates calculated using the G(E) function determined by the authors' method accord well with the values obtained by an ionisation chamber, with a maximum deviation of 6.31%.
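
    The calibration step reduces to dividing a simulated dose rate by the corresponding full-energy-peak counts at each energy and fitting a smooth curve. The Python sketch below does this with synthetic dose and count values (made up for the example; the paper derives them from Monte Carlo N-Particle runs) and a cubic fit in log(E).

        import numpy as np

        E = np.array([60., 122., 344., 662., 1173., 1332., 2614.])   # keV
        dose_rate = 1e-9 * E ** 1.1          # synthetic absorbed dose rate in air
        peak_counts = 5e4 * E ** -0.8        # synthetic full-energy-peak counts
        G = dose_rate / peak_counts          # G(E) value at each simulated energy

        coef = np.polyfit(np.log(E), np.log(G), 3)   # smooth G(E) parameterization
        def G_of_E(e):
            return np.exp(np.polyval(coef, np.log(e)))

        print(G_of_E(500.0))                 # interpolated G(E) at 500 keV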

  12. A simulation-based probabilistic design method for arctic sea transport systems

    NASA Astrophysics Data System (ADS)

    Martin, Bergström; Ove, Erikstad Stein; Sören, Ehlers

    2016-12-01

    When designing an arctic cargo ship, it is necessary to consider multiple stochastic factors. This paper evaluates the merits of a simulation-based probabilistic design method specifically developed to deal with this challenge. The outcome of the paper indicates that the incorporation of simulations and probabilistic design parameters into the design process enables more informed design decisions. For instance, it enables the assessment of the stochastic transport capacity of an arctic ship, as well as of its long-term ice exposure that can be used to determine an appropriate level of ice-strengthening. The outcome of the paper also indicates that significant gains in transport system cost-efficiency can be obtained by extending the boundaries of the design task beyond the individual vessel. In the case of industrial shipping, this allows for instance the consideration of port-based cargo storage facilities allowing for temporary shortages in transport capacity and thus a reduction in the required fleet size / ship capacity.

  13. Numerical Simulation of Drosophila Flight Based on the Arbitrary Lagrangian-Eulerian Method

    NASA Astrophysics Data System (ADS)

    Erzincanli, Belkis; Sahin, Mehmet

    2012-11-01

    A parallel unstructured finite volume algorithm based on the Arbitrary Lagrangian-Eulerian (ALE) method has been developed in order to investigate the wake structure around a pair of flapping Drosophila wings. The numerical method uses a side-centered arrangement of the primitive variables that does not require any ad-hoc modifications to enhance pressure coupling, and a radial basis function (RBF) interpolation method is implemented to achieve large mesh deformations. For the parallel solution of the resulting large-scale algebraic equations, a matrix factorization similar to that of the projection method is introduced for the whole coupled system, and a two-cycle BoomerAMG solver from the HYPRE library, accessed through PETSc, is used for the scaled discrete Laplacian. The algorithm is initially validated for the flow past an oscillating circular cylinder in a channel and the flow induced by an oscillating sphere in a cubic cavity, and is then applied to the simulation of the flow field around a pair of flapping Drosophila wings in hover flight. The time variation of the near-wake structure is shown along with the aerodynamic loads and particle traces. The authors acknowledge financial support from the Turkish National Scientific and Technical Research Council (TUBITAK) through project number 111M332, and thank Michael Dickinson and Michael Elzinga for providing the experimental data.

  14. Simulating underwater propulsion using an immersed boundary method based open-source solver

    NASA Astrophysics Data System (ADS)

    Senturk, Utku; Hemmati, Arman; Smits, Alexander J.

    2016-11-01

    The performance of a newly developed Immersed Boundary Method (IBM) incorporated into a finite volume solver is examined using foam-extend-3.2. The IBM uses a discrete forcing approach based on weighted least squares interpolation to preserve the sharpness of the boundary, which decreases the computational complexity of the problem. Initially, four case studies of gradually increasing complexity are considered to verify the accuracy of the IBM approach: the flow past 2D stationary and transversely oscillating cylinders, and the 3D wakes of stationary and pitching flat plates with aspect ratio 1.0 at Re=2000. The primary objective of this study, pursued through an ongoing simulation of the wake formed behind a pitching deformable 3D flat plate, is to investigate the underwater locomotion of a fish at Re=10000. The results of the IBM-based solver are compared to experimental results, which suggests that the force computations are accurate in general. Spurious oscillations in the forces are observed for problems with moving bodies; these oscillations change with the spatial and temporal grid resolutions. Although it retains the full advantage of the main code features, the IBM-based solver in foam-extend-3.2 requires further development before it can be exploited for complex grids. The work was supported by ONR under MURI Grant N00014-14-1-0533.

  15. Comparison of meaningful learning characteristics in simulated nursing practice after traditional versus computer-based simulation method: a qualitative videography study.

    PubMed

    Poikela, Paula; Ruokamo, Heli; Teräs, Marianne

    2015-02-01

    Nursing educators must ensure that nursing students acquire the necessary competencies; finding the most purposeful teaching methods and encouraging learning through meaningful learning opportunities is necessary to meet this goal. We investigated student learning in a simulated nursing practice using videography. The purpose of this paper is to examine how two different teaching methods affected students' meaningful learning in a simulated nursing experience. The 6-hour study was divided into three parts: part I, general information; part II, training; and part III, simulated nursing practice. Part II was delivered by two different methods: a computer-based simulation and a lecture. The study was carried out in the simulated nursing practices of two universities of applied sciences in Northern Finland. The participants in parts I and II were 40 first-year nursing students; 12 student volunteers continued to part III. A qualitative analysis method was used: the data were collected using video recordings and analyzed by videography. The students who used the computer-based simulation program were more likely to express meaningful learning themes than those who were first exposed to the lecture method. Educators should be encouraged to use computer-based simulation teaching in conjunction with other teaching methods to ensure that nursing students receive the greatest educational benefit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Comparison of three-dimensional Poisson solution methods for particle-based simulation and inhomogeneous dielectrics.

    PubMed

    Berti, Claudio; Gillespie, Dirk; Bardhan, Jaydeep P; Eisenberg, Robert S; Fiegna, Claudio

    2012-07-01

    Particle-based simulation represents a powerful approach to modeling physical systems in electronics, molecular biology, and chemical physics. Accounting for the interactions occurring among charged particles requires an accurate and efficient solution of Poisson's equation. For a system of discrete charges with inhomogeneous dielectrics, i.e., a system with discontinuities in the permittivity, the boundary element method (BEM) is frequently adopted. It provides the solution of Poisson's equation, accounting for polarization effects due to the discontinuity in the permittivity by computing the induced charges at the dielectric boundaries. In this framework, the total electrostatic potential is then found by superimposing the elemental contributions from both source and induced charges. In this paper, we present a comparison between two BEMs to solve a boundary-integral formulation of Poisson's equation, with emphasis on the BEMs' suitability for particle-based simulations in terms of solution accuracy and computation speed. The two approaches are the collocation and qualocation methods. Collocation is implemented following the induced-charge computation method of D. Boda et al. [J. Chem. Phys. 125, 034901 (2006)]. The qualocation method is described by J. Tausch et al. [IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, 1398 (2001)]. These approaches are studied using both flat and curved surface elements to discretize the dielectric boundary, using two challenging test cases: a dielectric sphere embedded in a different dielectric medium and a toy model of an ion channel. Earlier comparisons of the two BEM approaches did not address curved surface elements or semiatomistic models of ion channels. Our results support the earlier findings that for flat-element calculations, qualocation is always significantly more accurate than collocation. On the other hand, when the dielectric boundary is discretized with curved surface elements, the

  17. Video-Based Method of Quantifying Performance and Instrument Motion During Simulated Phonosurgery

    PubMed Central

    Conroy, Ellen; Surender, Ketan; Geng, Zhixian; Chen, Ting; Dailey, Seth; Jiang, Jack

    2015-01-01

    Objectives/Hypothesis To investigate the use of the Video-Based Phonomicrosurgery Instrument Tracking System to collect instrument position data during simulated phonomicrosurgery and calculate motion metrics using these data. We used this system to determine if novice subject motion metrics improved over 1 week of training. Study Design Prospective cohort study. Methods Ten subjects performed simulated surgical tasks once per day for 5 days. Instrument position data were collected and used to compute motion metrics (path length, depth perception, and motion smoothness). Data were analyzed to determine if motion metrics improved with practice time. Task outcome was also determined each day, and relationships between task outcome and motion metrics were used to evaluate the validity of motion metrics as indicators of surgical performance. Results Significant decreases over time were observed for path length (P <.001), depth perception (P <.001), and task outcome (P <.001). No significant change was observed for motion smoothness. Significant relationships were observed between task outcome and path length (P <.001), depth perception (P <.001), and motion smoothness (P <.001). Conclusions Our system can estimate instrument trajectory and provide quantitative descriptions of surgical performance. It may be useful for evaluating phonomicrosurgery performance. Path length and depth perception may be particularly useful indicators. PMID:24737286
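
    Metrics of this kind are straightforward to compute from tracked positions. The Python sketch below derives a path length, a depth proxy along the viewing axis, and an integrated-squared-jerk smoothness measure from a T x 3 position array; the exact definitions used in the study may differ, and the trajectory here is random rather than measured.

        import numpy as np

        def motion_metrics(pos, dt=1.0 / 30.0, view_axis=2):
            """pos: (T, 3) array of instrument tip positions in mm."""
            steps = np.diff(pos, axis=0)
            path_length = np.linalg.norm(steps, axis=1).sum()
            depth = np.abs(steps[:, view_axis]).sum()      # travel along viewing axis
            jerk = np.diff(pos, n=3, axis=0) / dt**3       # third finite difference
            smoothness = (jerk ** 2).sum() * dt            # integrated squared jerk
            return path_length, depth, smoothness

        rng = np.random.default_rng(0)
        trajectory = np.cumsum(rng.normal(0, 0.1, (300, 3)), axis=0)  # fake track
        print(motion_metrics(trajectory))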

  18. Surface defects evaluation system based on electromagnetic model simulation and inverse-recognition calibration method

    NASA Astrophysics Data System (ADS)

    Yang, Yongying; Chai, Huiting; Li, Chen; Zhang, Yihui; Wu, Fan; Bai, Jian; Shen, Yibing

    2017-05-01

    Digitized evaluation of sparse microscale defects on large fine optical surfaces is one of the challenges in the field of optical manufacturing and inspection. The surface defects evaluation system (SDES) for large fine optical surfaces is developed based on our previously reported work. In this paper, an electromagnetic simulation model based on the Finite-Difference Time-Domain (FDTD) method for vector diffraction theory is first established to study the law of microscopic scattering dark-field imaging. Given the aberration in actual optical systems, a point spread function (PSF) approximated by a Gaussian function is introduced in the extrapolation from the near field to the far field, and the scattered intensity distribution in the image plane is deduced. Analysis shows that both diffraction-broadened imaging and geometrical imaging should be considered in the precise size evaluation of defects. Thus, a novel inverse-recognition calibration method is put forward to avoid the confusion caused by the diffraction-broadening effect, and is applied to the quantitative evaluation of defect information. The evaluation results of samples of many materials obtained by SDES are compared with those of an OLYMPUS microscope to verify micron-scale resolution and precision. The established system has been applied to inspect defects on large fine optical surfaces and can inspect surfaces as large as 850 mm × 500 mm with a resolution of 0.5 μm.

  19. A new method to extract stable feature points based on self-generated simulation images

    NASA Astrophysics Data System (ADS)

    Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen

    2015-10-01

    Image processing has recently received much attention in photogrammetry, medical image processing and other fields. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem, and feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating points with low contrast and strong edge response, with the disadvantage that the thresholds must be set manually. The main idea of this paper is to identify stable extrema with a machine learning algorithm. First, we use the ASIFT approach, coupled with illumination changes and blur, to generate multi-view simulated images, which make up the simulated image set of the original image. Because of the way the simulated images are generated, the affine transformation of each of them is known exactly, which is more stable and accurate than the traditional matching process that relies on the unstable RANSAC method to estimate the affine transformation. Second, we calculate the stability value of each feature point from the image set and its known affine transformations, and we extract feature properties of each point such as DoG features, scale and edge point density. These form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, training with Rank-SVM yields a weight vector. In use, the feature properties of each point and the learned weight vector give a score corresponding to the stability value, by which the feature points are sorted. In conclusion, we compared our algorithm with the original SIFT detectors, and experimental results under different viewpoint changes, blurs and illuminations show that our algorithm is more efficient.

  20. Numerical simulation for the Gross-Pitaevskii equation based on the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Wang, Huimin

    2017-09-01

    A lattice Boltzmann model for the Gross-Pitaevskii equation is proposed in this paper. Numerical tests for the one- and two-dimensional Gross-Pitaevskii equation have been conducted, and the waves of the equation are simulated. The numerical results show that the lattice Boltzmann method is an effective method for simulating the waves of the Gross-Pitaevskii equation.

  1. [Method for environmental management in paper industry based on pollution control technology simulation].

    PubMed

    Zhang, Xue-Ying; Wen, Zong-Guo

    2014-11-01

    To evaluate the reduction potential of industrial water pollutant emissions and to study the application of technology simulation in pollutant control and environmental management, an Industrial Reduction Potential Analysis and Environment Management (IRPAEM) model was developed based on a coupling of "material-process-technology-product". The model integrated bottom-up modeling and scenario analysis, and was applied to China's paper industry. Results showed that under the CM scenario, the reduction potentials of wastewater, COD and ammonia nitrogen would reach 7 × 10^8 t, 39 × 10^4 t and 0.3 × 10^4 t, respectively, in 2015, and 13.8 × 10^8 t, 56 × 10^4 t and 0.5 × 10^4 t, respectively, in 2020. Strengthening end-of-pipe treatment would still be the key means of reducing emissions during 2010-2020, while the reduction effect of structural adjustment would be more obvious during 2015-2020. Pollution generation could basically reach the domestic or international advanced level of cleaner production in 2015 and 2020; wastewater and ammonia nitrogen would basically meet the emission standards in 2015 and 2020, while COD would not.

  2. An efficient parallel algebraic multigrid method for 3D injection moulding simulation based on finite volume method

    NASA Astrophysics Data System (ADS)

    Hu, Zixiang; Zhang, Yun; Liang, Junjie; Shi, Songxin; Zhou, Huamin

    2014-07-01

    Elapsed time is always one of the most important performance measures for polymer injection moulding simulation. Solving the pressure correction equations is the most time-consuming part of mould filling simulation using the finite volume method with SIMPLE-like algorithms. Algebraic multigrid (AMG) is one of the most promising methods for this type of elliptic equation: it performs better than common one-level iterative methods, especially for large problems, and it is also suitable for parallel computing. However, AMG is not easy to apply due to its complex theory and its limited generality across the wide range of computational fluid dynamics applications. This paper presents a robust and efficient parallel AMG solver, A1-pAMG, for 3D mould filling simulation of injection moulding. Numerical experiments demonstrate that A1-pAMG has better parallel performance than classical AMG, and also has algorithmic scalability in the context of 3D unstructured problems.

  3. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method based on Bayesian analysis to classify time series data from an agent-based simulation of the international emissions trading market, and compares it with a method based on the Discrete Fourier Transform (DFT). The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods express the time series data as distances in a mapped space, which are easier to understand and draw inferences from than the raw time series; (2) the methods can analyze the uncertain time series data produced by the agent-based simulation, including both stationary and non-stationary processes, using these distances; and (3) the Bayesian method can resolve a 1% difference in the emission reduction targets of agents.
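
    As a rough illustration of the Fourier-based mapping idea (this is not the authors' implementation; the function names, coefficient count and toy series below are assumptions), a time series such as a simulated market price path can be mapped to a small vector of DFT magnitudes and then compared by Euclidean distance:

        import numpy as np

        def dft_features(series, n_coeff=8):
            """Map a time series (e.g. a simulated market price path) to the
            magnitudes of its first n_coeff Fourier coefficients."""
            spectrum = np.fft.rfft(series - np.mean(series))
            return np.abs(spectrum[:n_coeff])

        def distance(series_a, series_b, n_coeff=8):
            """Euclidean distance between two series in the mapped feature space."""
            return np.linalg.norm(dft_features(series_a, n_coeff)
                                  - dft_features(series_b, n_coeff))

        # toy stationary vs. non-stationary price paths (illustrative only)
        rng = np.random.default_rng(0)
        t = np.arange(256)
        stationary = rng.normal(0.0, 1.0, t.size)            # stationary process
        trending = 0.05 * t + rng.normal(0.0, 1.0, t.size)   # non-stationary process
        print("distance in mapped space:", distance(stationary, trending))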

  4. A grouping method based on grid density and relationship for crowd evacuation simulation

    NASA Astrophysics Data System (ADS)

    Li, Yan; Liu, Hong; Liu, Guang-peng; Li, Liang; Moore, Philip; Hu, Bin

    2017-05-01

    Psychological factors affect the movement of people in the competitive or panic mode of evacuation, in which the density of pedestrians is relatively large and the distance among them is small. In this paper, a crowd is divided into groups according to their social relations, and a group attraction force is added to the social force model to simulate the actual movement of an evacuating crowd more realistically. The group attraction force is the synthesis of two forces: one is the mutual attraction of individuals, generated by their social relations, which makes them gather; the other is the attraction of the group leader on the individuals within the group, which ensures that the individuals follow the leader. The synthetic force determines the trajectory of each individual. The evacuation process is demonstrated using this improved social force model, in which individuals with close social relations gradually present closer and more coordinated motion while following the leader. A grouping algorithm based on grid density and relationship is proposed, and computer simulation is used to illustrate the features of the improved social force model. The parameters involved in the algorithm are defined, the effect of the relationship value on grouping is tested, and reasonable numbers of grids and weights are selected. The effectiveness of the algorithm is shown through simulation experiments, and a simulation platform combining the proposed grouping algorithm with the improved social force model is established for crowd evacuation simulation.
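
    A minimal sketch of the synthesized group attraction described above (not the authors' code; the gains k_social and k_leader, the unit-vector force form and the toy positions are hypothetical):

        import numpy as np

        def group_attraction(pos, leader_idx, relation, k_social=0.5, k_leader=1.0):
            """Synthesize the group attraction force for each individual.

            pos        : (n, 2) positions of group members
            leader_idx : index of the group leader
            relation   : (n, n) social-relation weights in [0, 1]
            """
            n = len(pos)
            force = np.zeros_like(pos)
            for i in range(n):
                for j in range(n):
                    if i != j:                       # gathering term from social relations
                        d = pos[j] - pos[i]
                        force[i] += k_social * relation[i, j] * d / (np.linalg.norm(d) + 1e-9)
                if i != leader_idx:                  # leader-following term
                    d_leader = pos[leader_idx] - pos[i]
                    force[i] += k_leader * d_leader / (np.linalg.norm(d_leader) + 1e-9)
            return force

        pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.5]])
        relation = np.ones((3, 3))                   # illustrative: everyone closely related
        print(group_attraction(pos, leader_idx=0, relation=relation))

    In a full simulation this force would simply be added to the repulsive and driving terms of the social force model at each time step.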

  5. Reduction of very large reaction mechanisms using methods based on simulation error minimization

    SciTech Connect

    Nagy, Tibor; Turanyi, Tamas

    2009-02-15

    A new species reduction method called the Simulation Error Minimization Connectivity Method (SEM-CM) was developed. According to the SEM-CM algorithm, a mechanism building procedure is started from the important species. Strongly connected sets of species, identified on the basis of the normalized Jacobian, are added, and several consistent mechanisms are produced. The combustion model is simulated with each of these mechanisms, and the mechanism causing the smallest error (i.e. the smallest deviation from the model that uses the full mechanism), considering the important species only, is selected. Then, in several steps, other strongly connected sets of species are added, the size of the mechanism is gradually increased, and the procedure is terminated when the error becomes smaller than the required threshold. A new method for the elimination of redundant reactions is also presented, called the Principal Component Analysis of Matrix F with Simulation Error Minimization (SEM-PCAF). According to this method, several reduced mechanisms are produced by using various PCAF thresholds, and the reduced mechanism having the lowest CPU time requirement among those with almost the smallest error is selected. Application of SEM-CM and SEM-PCAF together provides a very efficient way to eliminate redundant species and reactions from large mechanisms. The suggested approach was tested on a mechanism containing 6874 irreversible reactions of 345 species that describes methane partial oxidation to high conversion. The aim was to accurately reproduce the concentration-time profiles of 12 major species with less than 5% error at the conditions of an industrial application. The reduced mechanism consists of 246 reactions of 47 species, and its simulation is 116 times faster than using the full mechanism. SEM-CM was found to be more effective than the classic Connectivity Method, as well as the DRG, two-stage DRG, DRGASA, basic DRGEP and extended DRGEP methods. (author)

  6. [Simulation of water and carbon fluxes in harvard forest area based on data assimilation method].

    PubMed

    Zhang, Ting-Long; Sun, Rui; Zhang, Rong-Hua; Zhang, Lei

    2013-10-01

    Model simulation and in situ observation are the two most important means of studying the water and carbon cycles of terrestrial ecosystems, but each has its own advantages and shortcomings. Combining the two helps to capture the dynamic changes of ecosystem water and carbon fluxes more accurately, and data assimilation provides an effective way to integrate model simulation with in situ observation. Based on observation data from the Harvard Forest Environmental Monitoring Site (EMS), and using an ensemble Kalman filter algorithm, this paper assimilated field-measured LAI and remote sensing LAI into the Biome-BGC model to simulate the water and carbon fluxes in the Harvard forest area. Compared with the original model simulation without data assimilation, the improved Biome-BGC model with assimilation of the field-measured LAI in 1998, 1999, and 2006 increased the coefficient of determination R2 between model simulation and flux observation for net ecosystem exchange (NEE) and evapotranspiration by 8.4% and 10.6%, decreased the sum of absolute error (SAE) and root mean square error (RMSE) of NEE by 17.7% and 21.2%, and decreased the SAE and RMSE of evapotranspiration by 26.8% and 28.3%, respectively. After assimilating the MODIS LAI products of 2000-2004 into the improved Biome-BGC model, the R2 between simulated and observed NEE and evapotranspiration was increased by 7.8% and 4.7%, the SAE and RMSE of NEE were decreased by 21.9% and 26.3%, and the SAE and RMSE of evapotranspiration were decreased by 24.5% and 25.5%, respectively. These results suggest that the simulation accuracy of ecosystem water and carbon fluxes can be effectively improved by integrating field-measured or remote sensing LAI into the model.
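
    For orientation, a single ensemble Kalman filter analysis step of the kind used to assimilate an LAI observation might look like the sketch below. This is not the Biome-BGC implementation; the two-variable state layout, ensemble size and error variances are illustrative assumptions.

        import numpy as np

        def enkf_update(ensemble, obs, obs_err_var, H):
            """One ensemble Kalman filter analysis step.

            ensemble    : (n_ens, n_state) ensemble of model state vectors
            obs         : scalar observation (e.g. measured LAI)
            obs_err_var : observation error variance
            H           : (n_state,) linear observation operator, y = H @ x
            """
            n_ens = ensemble.shape[0]
            X = ensemble - ensemble.mean(axis=0)        # state anomalies
            y = ensemble @ H                            # predicted observations
            P_xy = X.T @ (y - y.mean()) / (n_ens - 1)   # state-obs covariance
            P_yy = np.var(y, ddof=1) + obs_err_var
            K = P_xy / P_yy                             # Kalman gain
            rng = np.random.default_rng(1)
            perturbed = obs + rng.normal(0.0, np.sqrt(obs_err_var), n_ens)
            return ensemble + np.outer(perturbed - y, K)

        # toy state: [LAI, some other pool]; we observe LAI directly
        ens = np.random.default_rng(0).normal([3.0, 0.5], [0.6, 0.1], (50, 2))
        H = np.array([1.0, 0.0])
        print(enkf_update(ens, obs=4.2, obs_err_var=0.25, H=H).mean(axis=0))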

  7. An Effective Method of Teaching Advanced Cardiac Life Support (ACLS) Skills in Simulation-Based Training.

    PubMed

    Yoo, Hyo Bin; Park, Jae Hyun; Ko, Jin Kyung

    2012-03-01

    In this study, we compared the effects of constructivist and traditional teaching strategies for teaching advanced cardiac life support (ACLS) skills during simulation-based training (SBT). A randomized, pre- and post-test control group study was designed to examine this issue in 29 third-year emergency medical technician (EMT) students. Participants received SBT through either constructivist SBT (CSBT) or traditional lecture-based SBT (TSBT) teaching strategies. We evaluated the effects of the simulation training on ACLS knowledge and performance immediately after practice and at retention. The knowledge and performance scores of the CSBT group were higher than those of the TSBT group (mean knowledge 33.3+/-5.03 vs. 29.5+/-5.33, p=0.36; mean performance 12.20+/-1.85 vs. 8.85+/-3.54, p=0.010). However, there was no difference between the two groups at retention 1 month later (mean knowledge 31.86+/-4.45 vs. 31.50+/-4.65, p=0.825; mean performance 12.13+/-0.99 vs. 12.57+/-1.78, p=0.283). CSBT is more effective with regard to knowledge acquisition and performance than TSBT. Further studies are needed to explore ways of improving retention and transfer of knowledge from simulated to real situations with SBT.

  8. Advanced Spacecraft EM Modelling Based on Geometric Simplification Process and Multi-Methods Simulation

    NASA Astrophysics Data System (ADS)

    Leman, Samuel; Hoeppe, Frederic

    2016-05-01

    This paper presents the first results of a new generation of ElectroMagnetic (EM) methodology applied to spacecraft system modelling in the low frequency range (where the system's dimensions are of the same order of magnitude as the wavelength). This innovative approach aims at implementing appropriate simplifications of the real system based on the identification of the dominant electrical and geometrical parameters driving the global EM behaviour. One rigorous but expensive simulation is performed to quantify the error introduced by the use of simpler multi-models. If both the speed-up of the simulation time and the quality of the EM response are satisfactory, uncertainty simulations can be performed based on the simple-model library, implemented in a flexible and robust Kron's network formalism. This methodology is expected to open up new perspectives for fast parametric analysis and a deep understanding of system behaviour. It will ensure the identification of the main radiated and conducted coupling paths and of the sensitive EM parameters, in order to optimize the protections and to control the disturbance sources in spacecraft design phases.

  9. A simulation-based marginal method for longitudinal data with dropout and mismeasured covariates.

    PubMed

    Yi, Grace Y

    2008-07-01

    Longitudinal data often contain missing observations and error-prone covariates. Extensive attention has been directed to analysis methods to adjust for the bias induced by missing observations. There is relatively little work on investigating the effects of covariate measurement error on estimation of the response parameters, especially on simultaneously accounting for the biases induced by both missing values and mismeasured covariates. It is not clear what the impact of ignoring measurement error is when analyzing longitudinal data with both missing observations and error-prone covariates. In this article, we study the effects of covariate measurement error on estimation of the response parameters for longitudinal studies. We develop an inference method that adjusts for the biases induced by measurement error as well as by missingness. The proposed method does not require the full specification of the distribution of the response vector but only requires modeling its mean and variance structures. Furthermore, the proposed method employs the so-called functional modeling strategy to handle the covariate process, with the distribution of covariates left unspecified. These features, plus the simplicity of implementation, make the proposed method very attractive. In this paper, we establish the asymptotic properties for the resulting estimators. With the proposed method, we conduct sensitivity analyses on a cohort data set arising from the Framingham Heart Study. Simulation studies are carried out to evaluate the impact of ignoring covariate measurement error and to assess the performance of the proposed method.

  10. Two methods for transmission line simulation model creation based on time domain measurements

    NASA Astrophysics Data System (ADS)

    Rinas, D.; Frei, S.

    2011-07-01

    The emission from transmission lines plays an important role in the electromagnetic compatibility of automotive electronic systems. In the frequency range below 200 MHz, radiation from cables is often the dominant emission factor; in higher frequency ranges, radiation from PCBs and their housings becomes more relevant, with the conducting traces being the main sources of this emission. The established field measurement methods according to CISPR 25 for the evaluation of emissions suffer from the need for large anechoic chambers, and the measurement data cannot be used to create simulation models for computing the overall fields radiated from a car. In this paper, a method is proposed to determine the far fields and a simulation model of radiating transmission lines, especially cable bundles and conducting traces on planar structures. The method measures the electromagnetic near field above the test object. Measurements are done in the time domain in order to obtain phase information and to reduce measurement time. On the basis of the near-field data, equivalent sources can be identified. By considering correlations between the sources along each conductive structure in the model creation process, the model accuracy increases and computational costs can be reduced.
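
    A simplified sketch of the equivalent-source identification step: complex amplitudes of assumed point sources along the conductor are fitted to the measured near-field samples by linear least squares. The scalar free-space Green's function used as the coupling model, and all geometry and values below, are assumptions for illustration; the paper's actual source model may differ.

        import numpy as np

        def identify_sources(src_pos, probe_pos, measured, k):
            """Least-squares fit of equivalent source amplitudes to near-field data.

            src_pos   : (n_src, 3) assumed source locations along the conductor
            probe_pos : (n_probe, 3) near-field scan points
            measured  : (n_probe,) complex field samples (amplitude + phase from
                        the time-domain measurement)
            k         : wavenumber 2*pi/lambda
            """
            # free-space scalar Green's function as a simplified coupling model
            r = np.linalg.norm(probe_pos[:, None, :] - src_pos[None, :, :], axis=2)
            A = np.exp(-1j * k * r) / (4 * np.pi * r)
            amplitudes, *_ = np.linalg.lstsq(A, measured, rcond=None)
            return amplitudes   # reuse A @ amplitudes to predict fields elsewhere

        # synthetic test: 5 sources on a 1 m trace, scan line 5 cm above it
        src = np.column_stack([np.linspace(0, 1, 5), np.zeros(5), np.zeros(5)])
        probes = np.column_stack([np.linspace(0, 1, 20), np.zeros(20),
                                  0.05 * np.ones(20)])
        truth = np.array([1, 0, 0.5, 0, 1j])
        k = 2 * np.pi / 1.5                       # roughly 200 MHz in free space
        r = np.linalg.norm(probes[:, None, :] - src[None, :, :], axis=2)
        meas = (np.exp(-1j * k * r) / (4 * np.pi * r)) @ truth
        print(np.round(identify_sources(src, probes, meas, k), 3))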

  11. Full wave simulation of waves in ECRIS plasmas based on the finite element method

    SciTech Connect

    Torrisi, G.; Mascali, D.; Neri, L.; Castro, G.; Patti, G.; Celona, L.; Gammino, S.; Ciavola, G.; Di Donato, L.; Sorbello, G.; Isernia, T.

    2014-02-12

    This paper describes the modeling and full-wave numerical simulation of electromagnetic wave propagation and absorption in an anisotropic magnetized plasma filling the resonant cavity of an electron cyclotron resonance ion source (ECRIS). The model assumes inhomogeneous, dispersive and tensorial constitutive relations. Maxwell's equations are solved by the finite element method (FEM), using the COMSOL Multiphysics® suite. All the relevant details have been considered in the model, including the non-uniform external magnetostatic field used for plasma confinement and the local electron density profile, resulting in a fully 3D non-uniform magnetized-plasma complex dielectric tensor. The more accurate plasma simulations clearly show the importance of the cavity effect on wave propagation and the effects of a resonant surface. These studies are the pillars of an improved ECRIS plasma modeling, which is mandatory to optimize the ion source output (especially the beam intensity distribution and charge states). Any new project concerning advanced ECRIS design will benefit from adequate modeling based on self-consistent wave absorption simulations.
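
    For reference, the kind of constitutive relation such a model supplies is the cold magnetized-plasma (Stix) dielectric tensor. The sketch below assumes a collisionless, electron-only plasma with the magnetostatic field along z, and is only an illustration of the tensorial relation, not the paper's fully 3D, profile-dependent tensor; sign conventions vary between texts.

        import numpy as np

        EPS0 = 8.8541878128e-12
        QE, ME = 1.602176634e-19, 9.1093837015e-31

        def stix_tensor(n_e, B0, omega):
            """Cold, collisionless, electron-only plasma dielectric tensor
            (Stix components S, D, P) for wave frequency omega, electron
            density n_e (m^-3) and magnetostatic field B0 (T) along z."""
            wpe2 = n_e * QE**2 / (EPS0 * ME)     # plasma frequency squared
            wce = QE * B0 / ME                   # electron cyclotron frequency
            S = 1 - wpe2 / (omega**2 - wce**2)
            D = wce * wpe2 / (omega * (omega**2 - wce**2))
            P = 1 - wpe2 / omega**2
            return np.array([[S, -1j * D, 0],
                             [1j * D, S, 0],
                             [0, 0, P]])

        # illustrative values: 14.5 GHz heating wave in a modest-density plasma
        print(np.round(stix_tensor(n_e=1e17, B0=0.5, omega=2 * np.pi * 14.5e9), 3))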

  12. Pseudospectral method based on prolate spheroidal wave functions for semiconductor nanodevice simulation

    NASA Astrophysics Data System (ADS)

    Lin, Wenbin; Kovvali, Narayan; Carin, Lawrence

    2006-07-01

    We solve Schrödinger's equation for semiconductor nanodevices by applying prolate spheroidal wave functions of order zero as basis functions in the pseudospectral method. When the functions involved in the problem are bandlimited, the prolate pseudospectral method outperforms conventional pseudospectral methods based on trigonometric and orthogonal polynomials and related functions, asymptotically achieving similar accuracy using a factor of π/2 fewer unknowns than the latter. The prolate pseudospectral method also employs a more uniform spatial grid, achieving better resolution near the center of the domain.

  13. Monte Carlo method based radiative transfer simulation of stochastic open forest generated by circle packing application

    NASA Astrophysics Data System (ADS)

    Jin, Shengye; Tamura, Masayuki

    2013-10-01

    Monte Carlo Ray Tracing (MCRT) is a versatile method for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, such as a forest area. Due to its robustness to the complexity of 3-D scene alterations, the MCRT method is also employed for simulating the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling of vegetation, one basic step is the set-up of the canopy scene. 3-D scanning can represent the canopy structure very accurately, but it is time consuming. Botanical growth functions can be used to model the growth of a single tree, but cannot express the interaction among trees. The L-system is also a function-controlled tree growth simulation model, but it costs a large amount of computing memory; additionally, it only models the current tree pattern rather than tree growth while the radiative transfer regime is simulated. Therefore, it is much more practical to use regular solids such as ellipsoids, cones and cylinders to represent individual canopies. Considering the allelopathy phenomenon seen in some open-forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle-packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the number of trees (N) of the 3-D scene are declared first, similar to a random open-forest image. Accordingly, each canopy radius (rc) is generated randomly. The circle centres are then placed on the XY-plane while the circles are kept separate from each other by the circle-packing algorithm. To model an individual tree, Ishikawa's regressive tree-growth model is employed to set the tree parameters, including the DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
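
    A minimal sketch of the stochastic circle-packing step described above: draw random canopy radii, place centres by rejection sampling so the circles stay disjoint, and stop once the declared coverage is reached. The domain size, radius range and coverage target are illustrative assumptions, not the study's values.

        import random, math

        def pack_circles(coverage_target, domain=100.0, r_min=2.0, r_max=6.0,
                         max_tries=20000, seed=0):
            """Place non-overlapping canopy circles on a square XY-plane until
            the requested canopy coverage (fraction of domain area) is reached."""
            rng = random.Random(seed)
            circles, covered = [], 0.0
            for _ in range(max_tries):
                if covered / domain**2 >= coverage_target:
                    break
                r = rng.uniform(r_min, r_max)             # random canopy radius rc
                x = rng.uniform(r, domain - r)
                y = rng.uniform(r, domain - r)
                # rejection step: keep circles separate from one another
                if all(math.hypot(x - cx, y - cy) >= r + cr
                       for cx, cy, cr in circles):
                    circles.append((x, y, r))
                    covered += math.pi * r * r
            return circles

        scene = pack_circles(coverage_target=0.30)
        print(len(scene), "trees, coverage =",
              round(sum(math.pi * r * r for _, _, r in scene) / 100.0**2, 3))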

  14. Entropy in biomolecular simulations: A comprehensive review of atomic fluctuations-based methods.

    PubMed

    Kassem, Summer; Ahmed, Marawan; El-Sheikh, Salah; Barakat, Khaled H

    2015-11-01

    Entropy of binding constitutes a major, and in many cases a detrimental, component of the binding affinity in biomolecular interactions. While the enthalpic part of the binding free energy is easier to calculate, estimating the entropy of binding is far more complicated. A precise evaluation of entropy requires a comprehensive exploration of the complete phase space of the interacting entities. As this task is extremely hard to accomplish in the context of conventional molecular simulations, calculating entropy has involved many approximations. Most of these gold-standard methods focus on developing a reliable estimate of the conformational part of the entropy. Here, we review these methods with a particular emphasis on the different techniques that extract entropy from atomic fluctuations. The theoretical formalism behind each method is explained, highlighting its strengths as well as its limitations, followed by a description of a number of case studies for each method. We hope that this brief, yet comprehensive, review provides a useful tool for understanding these methods and appreciating the practical issues that may arise in such calculations. Copyright © 2015 Elsevier Inc. All rights reserved.
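
    As a concrete example of an atomic-fluctuations-based estimator of the kind such reviews cover, the sketch below implements Schlitter's upper-bound formula, which bounds the configurational entropy from the mass-weighted covariance of positional fluctuations. It uses a toy single-atom trajectory in SI units and is a sketch of the general technique, not the authors' code.

        import numpy as np

        KB = 1.380649e-23      # Boltzmann constant, J/K
        HBAR = 1.054571817e-34 # reduced Planck constant, J*s

        def schlitter_entropy(traj, masses, T=300.0):
            """Schlitter upper-bound configurational entropy (J/K).

            traj   : (n_frames, 3*n_atoms) Cartesian coordinates in metres
            masses : (n_atoms,) masses in kg
            """
            cov = np.cov(traj, rowvar=False)        # positional covariance matrix
            m = np.repeat(masses, 3)                # per-coordinate masses
            # S <= (kB/2) ln det(1 + kB*T*e^2/hbar^2 * M*cov); e is Euler's number
            A = np.eye(len(m)) + (KB * T * np.e**2 / HBAR**2) * cov * m[None, :]
            sign, logdet = np.linalg.slogdet(A)
            return 0.5 * KB * logdet

        # toy example: one carbon atom fluctuating with 0.1 A std per direction
        rng = np.random.default_rng(0)
        traj = rng.normal(0.0, 0.1e-10, (5000, 3))
        print(schlitter_entropy(traj, masses=np.array([12 * 1.6605e-27])), "J/K")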

  15. RC Model-based Comparison Tests of the Added Compliance Method with Computer Simulations and a Standard Method

    NASA Astrophysics Data System (ADS)

    Pałko, Krzysztof J.; Rogalski, Andrzej; Zieliński, Krzysztof; Glapiński, Jarosław; Kozarski, Maciej; Pałko, Tadeusz; Darowski, Marek

    2007-01-01

    Ventilation of the lungs involves the exchange of gases during inhalation and exhalation, in which respiratory gases move between the alveoli and the atmosphere as a result of a pressure drop between them. During artificial ventilation, what is most important is to track the specific mechanical parameters of the lungs, such as the total compliance of the respiratory system Cp (consisting of the lung and thorax compliances) and the airway resistance Rp, while the patient is ventilated. Therefore, the main goals of this work, as a first step towards using our earlier method of added lung compliance in clinical practice, were: 1) to carry out computer simulations comparing the application of this method during different expiratory phases, and 2) to compare the accuracy of this method with that of the standard method. The primary tests of the added-compliance method for measuring the main lung parameters were made using an RC mechanical model of the lungs.

  16. Qualitative Simulation of Photon Transport in Free Space Based on Monte Carlo Method and Its Parallel Implementation

    PubMed Central

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Bin; Wang, Lin; Peng, Kuan; Liang, Jimin; Tian, Jie

    2010-01-01

    During the past decade, the Monte Carlo method has been widely applied in optical imaging to simulate the photon transport process inside tissues. However, the method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, which consists of the simulation of photon transport both in tissues and in free space. Specifically, lens-system simplification theory is utilized to model the camera lens of the optical imaging system, and the Monte Carlo method is employed to describe the energy transfer from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered, in order to establish the correspondence between points on the tissue surface and on the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and efficient. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results.

  17. Waveform-based simulated annealing of crosshole transmission data: a semi-global method for estimating seismic anisotropy

    NASA Astrophysics Data System (ADS)

    Afanasiev, Michael V.; Pratt, R. Gerhard; Kamei, Rie; McDowell, Glenn

    2014-12-01

    We successfully apply the semi-global inverse method of simulated annealing to determine the best-fitting 1-D anisotropy model for use in acoustic frequency domain waveform tomography. Our forward problem is based on a numerical solution of the frequency domain acoustic wave equation, and we minimize wavefield phase residuals through random perturbations to a 1-D vertically varying anisotropy profile. Both real and synthetic examples are presented in order to demonstrate and validate the approach. For the real data example, we processed and inverted a cross-borehole data set acquired by Vale Technology Development (Canada) Ltd. in the Eastern Deeps deposit, located in Voisey's Bay, Labrador, Canada. The inversion workflow comprises the full suite of acquisition, data processing, starting model building through traveltime tomography, simulated annealing and finally waveform tomography. Waveform tomography is a high resolution method that requires an accurate starting model. A cycle-skipping issue observed in our initial starting model was hypothesized to be due to an erroneous anisotropy model from traveltime tomography. This motivated the use of simulated annealing as a semi-global method for anisotropy estimation. We initially tested the simulated annealing approach on a synthetic data set based on the Voisey's Bay environment; these tests were successful and led to the application of the simulated annealing approach to the real data set. Similar behaviour was observed in the anisotropy models obtained through traveltime tomography in both the real and synthetic data sets, where simulated annealing produced an anisotropy model which solved the cycle-skipping issue. In the real data example, simulated annealing led to a final model that compares well with the velocities independently estimated from borehole logs. By comparing the calculated ray paths and wave paths, we attributed the failure of anisotropic traveltime tomography to the breakdown of the ray
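
    For intuition, the semi-global search can be sketched as a Metropolis-style annealing loop over a 1-D vertically varying anisotropy profile: random perturbations that reduce the phase-residual misfit are always kept, and worse ones are occasionally accepted according to the annealing temperature. In the sketch below the misfit function is a stand-in for the norm that would come from forward-modelling the crosshole data, and all tuning constants are assumptions.

        import numpy as np

        def anneal_profile(misfit, profile0, n_iter=2000, step=0.01,
                           t0=1.0, cooling=0.995, seed=0):
            """Simulated annealing over a 1-D anisotropy profile (one value
            per depth level).

            misfit : callable returning the wavefield phase-residual norm of a
                     profile (a stand-in here; in practice each evaluation
                     forward-models the crosshole data)
            """
            rng = np.random.default_rng(seed)
            profile, cost, temp = profile0.copy(), misfit(profile0), t0
            for _ in range(n_iter):
                trial = profile.copy()
                trial[rng.integers(len(trial))] += rng.normal(0.0, step)
                c = misfit(trial)
                # Metropolis rule: accept improvements, sometimes accept worse
                if c < cost or rng.random() < np.exp((cost - c) / temp):
                    profile, cost = trial, c
                temp *= cooling
            return profile, cost

        target = np.linspace(0.02, 0.10, 20)          # toy "true" anisotropy profile
        toy_misfit = lambda p: float(np.sum((p - target) ** 2))
        best, final_cost = anneal_profile(toy_misfit, np.zeros(20))
        print("final misfit:", round(final_cost, 6))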

  18. Incompressible SPH method based on Rankine source solution for violent water wave simulation

    NASA Astrophysics Data System (ADS)

    Zheng, X.; Ma, Q. W.; Duan, W. Y.

    2014-11-01

    With wide applications, smoothed particle hydrodynamics (SPH) has become an important numerical tool for solving complex flows, in particular those with a rapidly moving free surface. For such problems, incompressible SPH (ISPH) has been shown by many papers in the literature to yield better and more stable pressure time histories than traditional SPH. However, the existing ISPH method directly approximates the second-order derivatives of the functions to be solved in the Poisson equation, and the order of accuracy of the method becomes low, especially when particles are distributed in a disorderly manner, which generally happens when modelling violent water waves. This paper introduces a new formulation using the Rankine source solution. In the new approach to ISPH, the Poisson equation is first transformed into another form that does not include any derivative of the functions to be solved, so that derivatives need not be approximated numerically; this is an obvious advantage, potentially leading to a more robust numerical method. The newly formulated method is tested by simulating various water waves, and its convergence behaviour is studied numerically in this paper. Its results are compared with experimental data in some cases, and reasonably good agreement is achieved. More importantly, the numerical results clearly show that, when applied to modelling water waves, the newly developed method needs fewer particles, and thus lower computational costs, to achieve a level of accuracy similar to that of traditional SPH and existing ISPH, or produces more accurate results with the same number of particles.

  19. The Corrected Simulation Method of Critical Heat Flux Prediction for Water-Cooled Divertor Based on Euler Homogeneous Model

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyang; Han, Le; Chang, Haiping; Liu, Nan; Xu, Tiejun

    2016-02-01

    An accurate critical heat flux (CHF) prediction method is the key factor for realizing the steady-state operation of a water-cooled divertor that works under one-sided high heat flux conditions. An improved CHF prediction method, based on Euler's homogeneous model for flow boiling combined with the realizable k-ε model for single-phase flow, is adopted in this paper, in which the time relaxation coefficients are corrected by the Hertz-Knudsen formula in order to improve the calculation accuracy of the vapor-liquid conversion efficiency under high heat flux conditions. Moreover, large local differences in liquid physical properties, caused by the extremely nonuniform heating flux on the cooling wall along the circumferential direction, are revised using the IAPWS-IF97 formulation. This method can therefore improve the calculation accuracy of heat and mass transfer between the liquid and vapor phases in CHF prediction simulations of water-cooled divertors under one-sided high heating conditions. An experimental example is simulated using both the improved and the uncorrected methods. The simulation results, such as temperature, void fraction and heat transfer coefficient, are analyzed to obtain the CHF prediction. The results show that the maximum error of the CHF based on the improved method is 23.7%, while that based on the uncorrected method is up to 188%, as compared with the experimental results of Ref. [12]. Finally, the method is verified by comparison with experimental data obtained by the International Thermonuclear Experimental Reactor (ITER), with a maximum error of only 6%. This method provides an efficient tool for the CHF prediction of water-cooled divertors. Supported by the National Magnetic Confinement Fusion Science Program of China (No. 2010GB104005) and the National Natural Science Foundation of China (No. 51406085)
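
    For reference, the Hertz-Knudsen relation behind the correction gives the net interfacial mass flux from the departure of the vapor pressure from saturation. A minimal sketch follows, with an illustrative accommodation coefficient and water properties; it shows only the flux relation itself, not the paper's coupled CFD implementation.

        import math

        R_UNIVERSAL = 8.314462618  # J/(mol*K)

        def hertz_knudsen_flux(p_sat, p_vap, T_interface, molar_mass, accom=1.0):
            """Net evaporation mass flux (kg m^-2 s^-1) across a liquid-vapor
            interface from the Hertz-Knudsen relation; positive = evaporation.

            p_sat      : saturation pressure at the interface temperature (Pa)
            p_vap      : actual vapor pressure (Pa)
            molar_mass : kg/mol (water: 0.018)
            accom      : accommodation coefficient (illustrative value here)
            """
            return accom * (p_sat - p_vap) * math.sqrt(
                molar_mass / (2.0 * math.pi * R_UNIVERSAL * T_interface))

        # superheated water surface: interface slightly above saturation
        print(hertz_knudsen_flux(p_sat=1.10e5, p_vap=1.00e5, T_interface=375.0,
                                 molar_mass=0.018), "kg/m^2/s")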

  20. DETECTORS AND EXPERIMENTAL METHODS Design and simulations for the detector based on DSSSD

    NASA Astrophysics Data System (ADS)

    Xu, Yan-Bing; Wang, Huan-Yu; Meng, Xiang-Cheng; Wang, Hui; Lu, Hong; Ma, Yu-Qian; Li, Xin-Qiao; Shi, Feng; Wang, Ping; Zhao, Xiao-Yun; Wu, Feng

    2010-12-01

    The present paper describes the design and simulation results of a position-sensitive charged particle detector based on the Double Sided Silicon Strip Detector (DSSSD). The characteristics of the DSSSD and its test results are also discussed. With the application of the DSSSD, the position-sensitive charged particle detector can not only provide particle flux and energy spectrum information and identify different types of charged particles, but also measure the location and angle of incident particles. As the detector can make multiparameter measurements of charged particles, it is well suited to space detection and exploration missions, such as charged particle detection related to earthquakes, space environment monitoring and solar activity inspection.

  1. A simplified numerical simulation method of bending properties for glass fiber cloth reinforced denture base resin.

    PubMed

    Tanimoto, Yasuhiro; Nishiwaki, Tsuyoshi; Nishiyama, Norihiro; Nemoto, Kimiya; Maekawa, Zen-ichiro

    2002-06-01

    The purpose of this study was to propose a new numerical model of glass fiber cloth reinforced denture base resin (GFRP). The proposed model is constructed from an isotropic shell, beam and orthotropic shell elements representing the outermost resin, the interlaminar resin and the glass fiber cloth, respectively. The model was applied to failure progress analysis under three-point bending conditions, and its validity was checked through comparisons with experimental results. The failure progress behaviors involving local failures, such as interlaminar delamination and resin failure, could be simulated using the numerical model. It is concluded that the model is effective for the failure progress analysis of GFRP.

  2. Cut-cell method based large-eddy simulation of tip-leakage flow

    NASA Astrophysics Data System (ADS)

    Pogorelov, Alexej; Meinke, Matthias; Schröder, Wolfgang

    2015-07-01

    The turbulent low Mach number flow through an axial fan at a Reynolds number of 9.36 × 10^5 based on the outer casing diameter is investigated by large-eddy simulation. A finite-volume flow solver in an unstructured hierarchical Cartesian setup for the compressible Navier-Stokes equations is used. To account for sharp edges, a fully conservative cut-cell approach is applied. A newly developed rotational periodic boundary condition for Cartesian meshes is introduced such that the simulations are performed just for a 72° segment, i.e., the flow field over one out of five axial blades is resolved. The focus of this numerical analysis is on the development of the vortical flow structures in the tip-gap region. A detailed grid convergence study is performed on four computational grids with 50 × 10^6, 250 × 10^6, 1 × 10^9, and 1.6 × 10^9 cells. Results of the instantaneous and the mean fan flow field are thoroughly analyzed based on the solution with 1 × 10^9 cells. High levels of turbulent kinetic energy and pressure fluctuations are generated by a tip-gap vortex upstream of the blade, the separating vortices inside the tip gap, and a counter-rotating vortex on the outer casing wall. An intermittent interaction of the turbulent wake, generated by the tip-gap vortex, with the downstream blade, leads to a cyclic transition with high pressure fluctuations on the suction side of the blade and a decay of the tip-gap vortex. The disturbance of the tip-gap vortex results in an unsteady behavior of the turbulent wake causing the intermittent interaction. For this interaction and the cyclic transition, two dominant frequencies are identified which perfectly match with the characteristic frequencies in the experimental sound power level and therefore explain their physical origin.

  3. Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions

    SciTech Connect

    Chen, Xiaodong; Yang, Vigor

    2014-07-15

    In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales. A notable example is liquid-jet atomization and subsequent droplet dynamics. It is essential to characterize the detailed flow physics with variable length scales with high fidelity, in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed based on distance- and topology-oriented criteria for thin regions with confining wall/plane of symmetry and in any situation, respectively. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares against a critical value, the ratio of an interfacial cell size to the distance between the mass center of the cell and a reference plane. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The requirement for interfacial mesh refinement can be detected swiftly, without the need of thickness information, equation solving, variable averaging or mesh repairing. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness. These include the dynamics of colliding droplets, droplet motions in a microchannel, and atomization of liquid impinging jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.
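
    A minimal sketch of the distance-oriented criterion: an interfacial cell is flagged for refinement when the ratio of its size to the distance between its mass centre and the reference plane exceeds a critical value. The critical ratio and geometry below are illustrative assumptions, and the plane normal is assumed to have unit length.

        def needs_refinement(cell_size, center, ref_plane_point, ref_plane_normal,
                             ratio_crit=0.5):
            """Distance-oriented thin-region test for an interfacial cell.

            Refine when cell_size / distance(center, reference plane) exceeds
            ratio_crit, i.e. when the film between the interface and the
            confining wall or symmetry plane is only a few cells thick.
            ref_plane_normal is assumed to be a unit vector.
            """
            d = abs(sum((c - p) * n for c, p, n in zip(center, ref_plane_point,
                                                       ref_plane_normal)))
            return d == 0.0 or cell_size / d > ratio_crit

        # a 0.1-wide cell whose centre sits 0.15 above the wall z = 0 gets refined
        print(needs_refinement(0.1, (0.0, 0.0, 0.15),
                               (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))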

  4. Detached eddy simulation for turbulent fluid-structure interaction of moving bodies using the constraint-based immersed boundary method

    NASA Astrophysics Data System (ADS)

    Nangia, Nishant; Bhalla, Amneet P. S.; Griffith, Boyce E.; Patankar, Neelesh A.

    2016-11-01

    Flows over bodies of industrial importance often contain both an attached boundary layer region near the structure and a region of massively separated flow near its trailing edge. When simulating these flows with turbulence modeling, the Reynolds-averaged Navier-Stokes (RANS) approach is more efficient in the former, whereas large-eddy simulation (LES) is more accurate in the latter. Detached-eddy simulation (DES), based on the Spalart-Allmaras model, is a hybrid method that switches from RANS mode of solution in attached boundary layers to LES in detached flow regions. Simulations of turbulent flows over moving structures on a body-fitted mesh incur an enormous remeshing cost every time step. The constraint-based immersed boundary (cIB) method eliminates this operation by placing the structure on a Cartesian mesh and enforcing a rigidity constraint as an additional forcing in the Navier-Stokes momentum equation. We outline the formulation and development of a parallel DES-cIB method using adaptive mesh refinement. We show preliminary validation results for flows past stationary bodies with both attached and separated boundary layers along with results for turbulent flows past moving bodies. This work is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1324585.

  5. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    SciTech Connect

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; Bettencourt, Matthew

    2016-08-09

    Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.

  6. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    NASA Astrophysics Data System (ADS)

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; Bettencourt, Matthew

    2016-12-01

    We propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.


  8. Emission profile variability in hot star winds. A pseudo-3D method based on radiation hydrodynamics simulations

    NASA Astrophysics Data System (ADS)

    Dessart, L.; Owocki, S. P.

    2002-03-01

    We present theoretical calculations of emission line profile variability based on hot star wind structure calculated numerically using radiation hydrodynamics simulations. A principal goal is to examine how well short-time-scale variations observed in wind emission lines can be modelled by wind structure arising from small-scale instabilities intrinsic to the line-driving of these winds. The simulations here use a new implementation of the Smooth Source Function formalism for line-driving within a one-dimensional (1D) operation of the standard hydrodynamics code ZEUS-2D. As in previous wind instability simulations, the restriction to 1D is necessitated by the computational costs of non-local integrations needed for the line-driving force; but we find that naive application of such simulations within an explicit assumption of spherically symmetric structure leads to an unobserved strong concentration of profile variability toward the line wings. We thus introduce a new "patch method" for mimicking a full 3D wind structure by collecting random sequences of 1D simulations to represent the structure evolution along radial rays that extend over a selectable patch size of solid angle. We provide illustrative results for a selection of patch sizes applied to a simulation with standard assumptions that govern the details of instability-generated wind structure, and show in particular that a typical model with a patch size of about 3 deg can qualitatively reproduce the fundamental properties of observed profile variations. We conclude with a discussion of prospects for extending the simulation method to optically thick winds of Wolf-Rayet (WR) stars, and for thereby applying our "patch method" to dynamical modelling of the extensive variability observed in wind emission lines from these WR stars.

  9. Statistical modification analysis of helical planetary gears based on response surface method and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Guo, Fan

    2015-11-01

    Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on the modification effects. In order to investigate the effect of uncertainty in the tooth modification amounts on the dynamic behavior of a helical planetary gear train, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By transferring the inevitable random errors arising from the manufacturing and installation process to variations in the tooth modification amounts, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behavior does not obey the normal distribution even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
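
    A rough sketch of the combined approach: propagate normally distributed modification-amount errors through a fitted response surface and inspect the output distribution. The quadratic surface and all coefficients below are hypothetical stand-ins, not the paper's fitted model; the non-zero skewness illustrates why the output need not stay normal.

        import numpy as np
        from scipy import stats

        def dte_response_surface(x1, x2):
            """Hypothetical fitted quadratic response surface: DTE fluctuation
            as a function of two tooth modification amounts."""
            return (5.0 - 1.2 * x1 - 0.8 * x2
                    + 0.30 * x1**2 + 0.25 * x2**2 + 0.10 * x1 * x2)

        # Monte Carlo: shift manufacturing/installation errors onto the
        # modification amounts as normal scatter about nominal design values
        rng = np.random.default_rng(0)
        x1 = rng.normal(2.0, 0.2, 100_000)   # modification amount 1 (nominal 2.0)
        x2 = rng.normal(1.6, 0.2, 100_000)   # modification amount 2 (nominal 1.6)
        dte = dte_response_surface(x1, x2)

        print("mean/std:", dte.mean().round(4), dte.std().round(4))
        print("skewness:", stats.skew(dte).round(4))  # non-zero: not normal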

  10. Efficacy of laser-based irrigant activation methods in removing debris from simulated root canal irregularities.

    PubMed

    Deleu, Ellen; Meire, Maarten A; De Moor, Roeland J G

    2015-02-01

    In root canal therapy, irrigating solutions are essential to assist in debridement and disinfection, but their spread and action are often restricted by canal anatomy. Hence, activation of irrigants is suggested to improve their distribution in the canal system, increasing irrigation effectiveness. Activation can be done with lasers, termed laser-activated irrigation (LAI). The purpose of this in vitro study was to compare the efficacy of different irrigant activation methods in removing debris from simulated root canal irregularities. Twenty-five straight human canine roots were embedded in resin, split, and their canals prepared to a standardized shape. A groove was cut in the wall of each canal and filled with dentin debris. Canals were filled with sodium hypochlorite, and six irrigant activation procedures were tested: conventional needle irrigation (CI), manual-dynamic irrigation with a tapered gutta-percha cone (MDI), passive ultrasonic irrigation, LAI with a 2,940-nm erbium-doped yttrium aluminum garnet (Er:YAG) laser with a plain fiber tip inside the canal (Er-flat), LAI with an Er:YAG laser with a conical tip held at the canal entrance (Er-PIPS), and LAI with a 980-nm diode laser moving the fiber inside the canal (diode). The amount of remaining debris in the groove was scored and compared among the groups using non-parametric tests. Conventional irrigation removed significantly less debris than all other groups. The Er:YAG laser with the plain fiber tip was more efficient than MDI, CI, the diode laser and the Er:YAG laser with the PIPS tip in removing debris from simulated root canal irregularities.

  11. A method for motion simulator design based on modeling characteristics of the human operator

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1978-01-01

    A design criterion is obtained to compare two simulators and evaluate their equivalence or credibility. In the subsequent analysis, the comparison of two simulators can be considered the same problem as the comparison of a real-world situation with a simulation's representation of that situation. The design criterion developed involves modeling the human operator and defining simple parameters to describe his behavior in the simulator and in the real-world situation. In the process of obtaining human operator parameters that define characteristics for evaluating simulators, measures are also obtained of these human operator characteristics which can be used to describe the human as an information processor and controller. First, a study is conducted of the simulator design problem in such a manner that this modeling approach can be used to develop a criterion for the comparison of two simulators.

  12. Fast Simulation Method for Ocean Wave Base on Ocean Wave Spectrum and Improved Gerstner Model with GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Wenqiao; Zhang, Jing; Zhang, Tianchi

    2017-01-01

    Because of the randomness and complexity of ocean waves, the simulation of a large-scale ocean requires a great amount of computation, yet computational efficiency is low and real-time performance is poor. A fast method of wave simulation is therefore proposed based on the observations and research results of oceanography. It takes advantage of a grid combined with level-of-detail (LOD) and projection techniques, and uses an ocean height map, formed by retrieval from the ocean wave spectrum and the directional spectrum and computed with the FFT. The height map is cyclically mapped onto the LOD/projected grid on the GPU to obtain the dynamic height data and the ocean simulation. The experimental results show that the method is vivid and conforms with the randomness and complexity of ocean waves; it effectively improves the simulation speed of the waves and satisfies the real-time and fidelity requirements of an ocean simulation system.
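
    The spectrum-to-height-map step can be sketched by filtering complex Gaussian noise with a wave spectrum and applying an inverse 2-D FFT. The sketch below substitutes a simple Phillips-type spectrum for the retrieved ocean wave and directional spectra of the paper, and all constants (grid size, patch length, wind, amplitude) are illustrative.

        import numpy as np

        def height_map(n=128, patch=200.0, wind=(1.0, 0.0), A=1e-4, g=9.81, seed=0):
            """Generate one ocean height field by filtering complex Gaussian
            noise with a Phillips-type spectrum and applying an inverse FFT."""
            rng = np.random.default_rng(seed)
            k = 2 * np.pi * np.fft.fftfreq(n, d=patch / n)
            kx, ky = np.meshgrid(k, k)
            kk = np.hypot(kx, ky)
            kk[0, 0] = 1e-6                               # avoid division by zero
            L = 10.0**2 / g                               # largest wave, 10 m/s wind
            wdir = np.array(wind) / np.hypot(*wind)
            cos_f = (kx * wdir[0] + ky * wdir[1]) / kk    # alignment with the wind
            phillips = A * np.exp(-1.0 / (kk * L) ** 2) / kk**4 * cos_f**2
            phillips[0, 0] = 0.0
            h_tilde = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) \
                      * np.sqrt(phillips / 2.0)
            return np.fft.ifft2(h_tilde).real * n * n     # height map on the grid

        h = height_map()
        print("height field:", h.shape, "rms =", round(float(np.std(h)), 4))

    On the GPU the same spectrum would be re-phased each frame and the resulting height map tiled over the LOD grid.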

  13. Ground motion simulation for the 23 August 2011, Mineral, Virginia earthquake using physics-based and stochastic broadband methods

    USGS Publications Warehouse

    Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz

    2015-01-01

    Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare with observed motions. The methods include a physics‐based model by Hartzell et al. (1999, 2005), a stochastic source‐based model by Boore (2009), and a stochastic site‐based model by Rezaeian and Der Kiureghian (2010, 2012). The ground‐motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground‐motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics‐based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site‐based model also yields more realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source‐based model has larger bias at lower frequencies (<0.5  Hz) and cannot reproduce the varying frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows that there is no obvious pattern with distance in the simulation bias, but there is some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall‐off with distance for all three models, comparable PGA and PSA amplitudes for the physics‐based and stochastic site‐based models, and systematic lower amplitudes for the stochastic source‐based model at lower frequencies (<0.5  Hz).

  14. Post-OPC verification using a full-chip pattern-based simulation verification method

    NASA Astrophysics Data System (ADS)

    Hung, Chi-Yuan; Wang, Ching-Heng; Ma, Cliff; Zhang, Gary

    2005-11-01

    In this paper, we evaluated and investigated techniques for performing fast full-chip post-OPC verification using a commercial product platform. A number of databases from several technology nodes, i.e. 0.13um, 0.11um and 90nm, are used in the investigation. Although our OPC technology has proven robust for most cases, the variety of tape-outs with complicated design styles and technologies makes it difficult to develop a "complete or bullet-proof" OPC algorithm that covers every possible layout pattern. In the evaluation, among dozens of databases, errors were found in some OPC databases by model-based post-OPC checking; such errors could be costly in manufacturing - reticle, wafer process, and more importantly production delay. From such full-chip OPC database verification, we have learned that optimizing OPC models and recipes on a limited set of test chip designs may not provide sufficient coverage across the range of designs to be produced in the process, and fatal errors (such as pinching or bridging), poor CD distribution and process-sensitive patterns may still occur. As a result, more than one reticle tape-out cycle is not uncommon to prove models and recipes that approach the center of the process window for a range of designs. We therefore describe a full-chip pattern-based simulation verification flow that serves both OPC model and recipe development as well as post-OPC verification after production release of the OPC. Lastly, we discuss the differences between the new pattern-based and conventional edge-based verification tools and summarize the advantages of our new tool and methodology: 1). Accuracy: superior inspection algorithms, down to 1nm accuracy, with the new pattern-based approach 2). High-speed performance: pattern-centric algorithms give the best full-chip inspection efficiency 3). Powerful analysis capability: flexible error distribution, grouping, interactive viewing and hierarchical pattern extraction to narrow

  15. Physical parameter identification method based on modal analysis for two-axis on-road vehicles: Theory and simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Minyi; Zhang, Bangji; Zhang, Jie; Zhang, Nong

    2016-07-01

    Physical parameters are very important for vehicle dynamic modeling and analysis. However, most physical parameter identification methods assume that some physical parameters of the vehicle are known, so that the remaining unknown parameters can be identified. In order to identify the physical parameters of a vehicle when all of them are unknown, a methodology based on the State Variable Method (SVM) for the physical parameter identification of two-axis on-road vehicles is presented. The modal parameters of the vehicle are identified by the SVM, and the physical parameters are then estimated by the least squares method. In the numerical simulations, the physical parameters of a Ford Granada are chosen as the vehicle model parameters, and a half-sine bump function is chosen to simulate a tire excited by an impulse. The first numerical simulation shows that the present method can identify all of the physical parameters, with a largest absolute percentage error of 0.205%. The effects of errors in the additional mass, the structural parameters and measurement noise are discussed in the subsequent simulations, which show that when the signal contains 30 dB noise, the largest absolute percentage error of the identification is 3.78%. These simulations verify that the presented method is effective and accurate for the physical parameter identification of two-axis on-road vehicles. The proposed methodology can identify all physical parameters of a 7-DOF vehicle model using the free-decay responses of the vehicle, without the need to assume that some physical parameters are known.

  16. Parallel adaptive mesh refinement method based on WENO finite difference scheme for the simulation of multi-dimensional detonation

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang

    2015-10-01

    For numerical simulation of detonation, computational cost using uniform meshes is large due to the vast separation in both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for numerical investigation of multi-dimensional detonation. A well-designed AMR method based on finite difference weighted essentially non-oscillatory (WENO) scheme, named as AMR&WENO is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that the AMR&WENO is accurate and has a high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide us further insight into the high performance of the parallel AMR&WENO method.

  17. A novel method based on maximum likelihood estimation for the construction of seismic fragility curves using numerical simulations

    NASA Astrophysics Data System (ADS)

    Dang, Cong-Thuat; Le, Thien-Phu; Ray, Pascal

    2017-10-01

    Seismic fragility curves, which give the probability of failure or of exceeding a damage state versus seismic intensity, can be established by engineering judgment, empirical approaches or numerical approaches. This paper focuses on the latter. In recent studies, three popular methods based on numerical simulations, comprising scaled seismic intensity, maximum likelihood estimation and probabilistic seismic demand/capacity models, have been studied and compared. The results obtained show that the maximum likelihood estimation (MLE) method is in general better than the others. However, previous publications also indicated the dependence of the MLE method on the ground excitation input. The objective of this paper is thus to propose a novel method that improves the existing MLE one. The improvements are based on probabilistic ground motion information, which is taken into account in the proposed procedure. The validity of this new approach is verified by analytical tests and numerical examples.
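
    For context, the baseline MLE approach referred to above is commonly implemented by fitting a lognormal fragility curve to binary failure outcomes from the simulations. A minimal sketch, with invented intensity levels and failure counts, might look like this:

    ```python
    # Fit a lognormal fragility curve P(fail | IM) = Phi((ln IM - ln theta)/beta)
    # to binary simulation outcomes by maximum likelihood (illustrative data).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    im = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.8])   # intensity levels, g
    n_sim = np.full(im.shape, 40)                   # simulations per level
    n_fail = np.array([1, 5, 14, 22, 33, 38])       # observed failures

    def neg_log_lik(params):
        ln_theta, beta = params
        p = norm.cdf((np.log(im) - ln_theta) / beta)
        p = np.clip(p, 1e-12, 1 - 1e-12)            # guard against log(0)
        return -np.sum(n_fail * np.log(p) + (n_sim - n_fail) * np.log(1 - p))

    res = minimize(neg_log_lik, x0=[np.log(0.3), 0.4], method="Nelder-Mead")
    theta, beta = np.exp(res.x[0]), res.x[1]
    print(f"median capacity = {theta:.3f} g, dispersion = {beta:.3f}")
    ```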

  18. On-orbit servicing system assessment and optimization methods based on lifecycle simulation under mixed aleatory and epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel

    2013-06-01

    To assess the on-orbit servicing (OOS) paradigm and optimize its utility by taking advantage of its inherent flexibility and responsiveness, OOS system assessment and optimization methods based on lifecycle simulation under uncertainties are studied. The uncertainty sources considered in this paper include both the aleatory (random launch/OOS operation failure and on-orbit component failure) and the epistemic (the unknown trend of the end-user market price) types. Firstly, lifecycle simulation under uncertainties is discussed: the chronological flowchart is presented, the cost and benefit models are established and their uncertainties are modeled, and a dynamic programming method for making optimal decisions in the face of uncertain events is introduced. Secondly, a method to analyze how the uncertainties propagate to the OOS utilities is studied. Combining probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, based on which an OOS utility assessment tool under mixed uncertainties is developed. Thirdly, to further optimize the OOS system under mixed uncertainties, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of the traditional RBO method, which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourthly, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool, and the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.
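
    The core of such a lifecycle analysis under mixed uncertainty can be sketched compactly: aleatory events are sampled inside a Monte Carlo loop, while an epistemic quantity known only to lie within an interval is swept over its bounds to yield a utility envelope. Everything below (failure rates, lifetimes, prices) is hypothetical.

    ```python
    # Minimal mixed-uncertainty lifecycle Monte Carlo sketch (toy numbers).
    import numpy as np

    rng = np.random.default_rng(0)

    def lifecycle_utility(price_trend, n_runs=10_000):
        # Aleatory: launch failure and on-orbit component failure
        launch_ok = rng.random(n_runs) > 0.05          # 5% launch failure
        lifetime = rng.exponential(8.0, n_runs)        # mean 8-year life
        revenue = price_trend * np.minimum(lifetime, 15.0) * launch_ok
        cost = 100.0 + 20.0 * (~launch_ok)             # re-launch penalty
        return np.mean(revenue - cost)

    # Epistemic interval for the yearly revenue rate (end-user market price)
    bounds = [lifecycle_utility(p) for p in (15.0, 25.0)]
    print(f"expected utility envelope: [{min(bounds):.1f}, {max(bounds):.1f}]")
    ```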

  19. Inferring Population Decline and Expansion From Microsatellite Data: A Simulation-Based Evaluation of the Msvar Method

    PubMed Central

    Girod, Christophe; Vitalis, Renaud; Leblois, Raphaël; Fréville, Hélène

    2011-01-01

    Reconstructing the demographic history of populations is a central issue in evolutionary biology. Using likelihood-based methods coupled with Monte Carlo simulations, it is now possible to reconstruct past changes in population size from genetic data. Using simulated data sets under various demographic scenarios, we evaluate the statistical performance of Msvar, a full-likelihood Bayesian method that infers past demographic change from microsatellite data. Our simulation tests show that Msvar is very efficient at detecting population declines and expansions, provided the event is neither too weak nor too recent. We further show that Msvar outperforms two moment-based methods (the M-ratio test and Bottleneck) for detecting population size changes, whatever the time and the severity of the event. The same trend emerges from a compilation of empirical studies. The latest version of Msvar provides estimates of the current and the ancestral population size and the time since the population started changing in size. We show that, in the absence of prior knowledge, Msvar provides little information on the mutation rate, which results in biased estimates and/or wide credibility intervals for each of the demographic parameters. However, scaling the population size parameters with the mutation rate and scaling the time with current population size, as coalescent theory requires, significantly improves the quality of the estimates for contraction but not for expansion scenarios. Finally, our results suggest that Msvar is robust to moderate departures from a strict stepwise mutation model. PMID:21385729

  20. Fragment Molecular Orbital method-based Molecular Dynamics (FMO-MD) as a simulator for chemical reactions in explicit solvation.

    PubMed

    Komeiji, Yuto; Ishikawa, Takeshi; Mochizuki, Yuji; Yamataka, Hiroshi; Nakano, Tatsuya

    2009-01-15

    Fragment Molecular Orbital-based Molecular Dynamics (FMO-MD, Komeiji et al., Chem Phys Lett 2003, 372, 342) is an ab initio MD method suitable for large molecular systems. Here, FMO-MD was implemented to conduct fully quantum simulations of chemical reactions in explicit solvation. Several FMO-MD simulations were performed on a sphere of water to find a suitable simulation protocol. It was found that annealing of the initial configuration by classical MD brought the subsequent FMO-MD trajectory to faster stabilization, and that the use of bond constraints in the FMO-MD heating stage effectively reduced the computation time. Then, the blue moon ensemble method (Sprik and Ciccotti, J Chem Phys 1998, 109, 7737) was implemented and tested by calculating free energy profiles of the Menschutkin reaction (H3N + CH3Cl → H3NCH3+ + Cl−) in the presence and absence of the solvent water via FMO-MD. The obtained free energy profiles were consistent with the Hammond postulate in that stabilization of the product by the solvent, namely hydration of Cl−, shifted the transition state to the reactant side. Based on these FMO-MD results, plans for further improvement of the method are discussed. Copyright 2008 Wiley Periodicals, Inc.
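
    For orientation, the blue moon ensemble recovers the free energy profile along a reaction coordinate ξ by integrating the conditioned mean constraint force; in schematic form (mass-metric correction factors omitted for brevity):

    ```latex
    % Free-energy profile along reaction coordinate \xi from the mean
    % constraint force at fixed \xi' (metric corrections omitted):
    \Delta F(\xi) = F(\xi) - F(\xi_0)
                 = -\int_{\xi_0}^{\xi} \langle f \rangle_{\xi'} \,\mathrm{d}\xi'
    ```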

  1. National Clinical Skills Competition: an effective simulation-based method to improve undergraduate medical education in China

    PubMed Central

    Jiang, Guanchao; Chen, Hong; Wang, Qiming; Chi, Baorong; He, Qingnan; Xiao, Haipeng; Zhou, Qinghuan; Liu, Jing; Wang, Shan

    2016-01-01

    Background: The National Clinical Skills Competition has been held in China for five consecutive years since 2010 to promote undergraduate education reform and improve teaching quality. The effects of the simulation-based competition are analyzed in this study. Methods: Participation in the competitions and the compilation of the questions used in the competition finals are summarized, and the influence and guidance quality are further analyzed. Through the nationwide distribution of questionnaires to medical colleges, the effects of the simulation-based competition on promoting undergraduate medical education reform were evaluated. Results: Approximately 450 students from more than 110 colleges (accounting for 81% of the colleges providing undergraduate clinical medical education in China) participated in the competition each year. Knowledge, skills, and attitudes were comprehensively evaluated by simulation-based assessment. Eight hundred and eighty copies of the questionnaire were distributed to 110 participating medical schools in 2015, and 752 valid responses were received across 95 schools. The majority of the interviewees agreed or strongly agreed that the competition promoted the adoption of advanced educational principles (76.8%), updated the curriculum model and instructional methods (79.8%), strengthened faculty development (84.0%), improved educational resources (82.1%), and benefited all students (53.4%). Conclusions: The National Clinical Skills Competition is widely accepted in China. It has effectively promoted the reform and development of undergraduate medical education in China. PMID:26894586

  3. Simulation of metal cutting using the particle finite-element method and a physically based plasticity model

    NASA Astrophysics Data System (ADS)

    Rodríguez, J. M.; Jonsén, P.; Svoboda, A.

    2017-01-01

    Metal cutting is one of the most common metal-shaping processes, in which specified geometrical and surface properties are obtained by breaking material away from the workpiece as a chip with a cutting edge. Chip formation is associated with large strains, high strain rates and locally high temperatures due to adiabatic heating, and these phenomena, together with numerical complications, make modeling of metal cutting difficult. Material models, which are crucial in metal-cutting simulations, are usually calibrated with data from material testing. Nevertheless, the strains and strain rates involved in metal cutting are several orders of magnitude higher than those reached in conventional material testing, so a material model that can be extrapolated outside its calibration range is highly desirable. In this study, a physically based plasticity model built on dislocation density and vacancy concentration is used to simulate orthogonal metal cutting of AISI 316L. The material model is implemented in an in-house particle finite-element method software. Numerical simulations agree with experimental results, as well as with previous results obtained with the finite-element method.

  4. Task-based image quality evaluation of iterative reconstruction methods for low dose CT using computer simulations

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.

    2015-04-01

    Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improving image quality or reducing radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 down to 25% of D0. A lesion of fixed size and contrast was inserted at different locations in the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20), in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal-performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement between the relative AUC values of the different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over the analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low-dose protocol, lower than the standard dose due to the use of IR methods. At 75% of D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task: for the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
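
    The CHO machinery itself is compact; the sketch below shows the template construction and an empirical AUC on synthetic data, with random matrices standing in for the RS/RO channels and the reconstructed images.

    ```python
    # Channelized Hotelling observer sketch for an SKE detection task
    # (channels, images, and signal amplitude are all synthetic stand-ins).
    import numpy as np

    rng = np.random.default_rng(1)

    def cho_auc(signal_imgs, noise_imgs, channels):
        # Project images (n, npix) onto the channel matrix (npix, nc)
        vs, vn = signal_imgs @ channels, noise_imgs @ channels
        S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))          # pooled covariance
        w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))  # Hotelling template
        ts, tn = vs @ w, vn @ w                          # decision variables
        return (ts[:, None] > tn[None, :]).mean()        # empirical AUC

    npix, nc, n = 64 * 64, 10, 200
    channels = rng.standard_normal((npix, nc))     # stand-in for RS/RO channels
    lesion = 0.2 * rng.standard_normal(npix)       # fixed weak-signal profile
    signal_imgs = rng.standard_normal((n, npix)) + lesion
    noise_imgs = rng.standard_normal((n, npix))
    print(f"AUC = {cho_auc(signal_imgs, noise_imgs, channels):.3f}")
    ```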

  5. Numerical Simulation of Evacuation Process in Malaysia By Using Distinct-Element-Method Based Multi-Agent Model

    NASA Astrophysics Data System (ADS)

    Abustan, M. S.; Rahman, N. A.; Gotoh, H.; Harada, E.; Talib, S. H. A.

    2016-07-01

    In Malaysia, little research on crowd evacuation simulation has been reported. The development of numerical models of the crowd evacuation process that take into account behavioral patterns and psychological characteristics is therefore crucial in Malaysia. Moreover, tsunami disasters began to gain the attention of Malaysian citizens after the 2004 Indian Ocean Tsunami, which demonstrated the need for rapid evacuation. In view of these circumstances, we have conducted simulations of the tsunami evacuation process at Miami Beach on Penang Island using a Distinct Element Method (DEM)-based crowd behavior simulator. The main objectives are to investigate and reproduce the current conditions of the evacuation process at this location under different hypothetical scenarios in order to study the efficiency of the evacuation. Sim-1 represents the initial evacuation plan, while sim-2 improves the plan by adding a new evacuation area. The simulation results show that sim-2 achieves a shorter evacuation time than sim-1, reducing it by 53 seconds. The effect of the additional evacuation area is thus confirmed by the decrease in evacuation completion time. These results indicate that numerical simulation can serve as an effective tool for studying the crowd evacuation process.

  6. Occurrence and simulation of trihalomethanes in swimming pool water: A simple prediction method based on DOC and mass balance.

    PubMed

    Peng, Di; Saravia, Florencia; Abbt-Braun, Gudrun; Horn, Harald

    2016-01-01

    Trihalomethanes (THM) are the most typical disinfection by-products (DBPs) found in public swimming pool water. DBPs are produced when organic and inorganic matter in water reacts with chemical disinfectants. The irregular contribution of substances from pool visitors and the long contact time with disinfectant make forecasting THM in pool water a challenge. In this work, the occurrence of THM in a public indoor swimming pool was investigated and correlated with the dissolved organic carbon (DOC). Daily sampling of pool water for 26 days showed a positive correlation between DOC and THM with a time delay of about two days, while THM and DOC did not correlate directly with the number of visitors. Based on these results and a mass balance for the pool water, a simple simulation model for estimating the THM concentration in indoor swimming pool water was proposed. Formation of THM from DOC, volatilization into air and elimination by pool water treatment were included in the simulation. The THM formation ratio obtained from laboratory analysis of native pool water, together with information from the field study, reduced the uncertainty of the simulation. The simulation was validated against measurements in the swimming pool over 50 days, and the simulated results agreed well with the measured ones. This work provides a useful and simple method for predicting the THM concentration and its long-term accumulation trend in indoor swimming pool water.
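
    A mass balance of the type described reduces to a single ordinary differential equation for the THM concentration; the sketch below uses invented rate constants purely to illustrate the structure (formation from DOC minus volatilization and treatment losses).

    ```python
    # Toy THM mass balance: dTHM/dt = k_form*DOC(t) - (k_vol + k_treat)*THM
    # (all rate constants are hypothetical).
    import numpy as np
    from scipy.integrate import solve_ivp

    k_form = 0.8e-3    # THM formation per unit DOC, 1/h
    k_vol = 0.02       # volatilization into air, 1/h
    k_treat = 0.03     # removal by water treatment, 1/h

    def rhs(t, y, doc):
        return [k_form * doc(t) - (k_vol + k_treat) * y[0]]

    doc = lambda t: 2.5 + 0.5 * np.sin(2 * np.pi * t / 24)   # daily DOC, mg/L
    sol = solve_ivp(rhs, (0, 24 * 50), [0.0], args=(doc,), max_step=1.0)
    print(f"THM after 50 days: {sol.y[0, -1] * 1000:.1f} ug/L")
    ```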

  7. Simulation of two-dimensional target motion based on a liquid crystal beam steering method

    NASA Astrophysics Data System (ADS)

    Lin, Yixiang; Ai, Yong; Shan, Xin; Liu, Min

    2015-05-01

    A simulation platform is established for target motion using a liquid crystal (LC) spatial light modulator as a nonmechanical beam steering device. By controlling the period and orientation of the phase grating generated by the spatial light modulator, the platform realizes two-dimensional (2-D) beam steering with a single LC device. The zenith and azimuth angles range from 0 deg to 2.89 deg and from 0 deg to 360 deg, with control resolutions of 0.0226 deg and 0.0300 deg, respectively. The response time of the beam steering is always less than 0.04 s, irrespective of the steering angle. Three typical aircraft tracks are simulated to evaluate the performance of the platform; the correlation coefficients between the theoretical and simulated motions are larger than 0.9822. The results show that 2-D target motion simulation using the LC spatial light modulator is highly feasible.

  8. Genetic Algorithm Based Simulated Annealing Method for Solving Unit Commitment Problem in Utility System

    NASA Astrophysics Data System (ADS)

    Rajan, C. Christober Asir

    2010-10-01

    The objective of this paper is to find a generation schedule that minimizes the total operating cost subject to a variety of constraints, i.e., to find the optimal commitment of generating units in the power system for the next H hours. Genetic Algorithms (GAs) are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination and survival of the fittest. Here, the unit commitment schedule is coded as a string of symbols, and an initial population of parent solutions is generated at random. Each schedule is formed by committing all the units according to their initial status ("flat start"). The parents are obtained from a pre-defined set of solutions, i.e., each solution is adjusted to meet the requirements. Then, a random recommitment is carried out with respect to the units' minimum down-times, and simulated annealing (SA) improves the resulting schedules. A 66-bus utility power system in India with twelve generating units demonstrates the effectiveness of the proposed approach. Numerical results compare the cost solutions and computation times obtained using the Genetic Algorithm method and other conventional methods.
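
    The hybrid scheme can be illustrated with a toy problem: a GA evolves binary commitment matrices while an SA pass refines each offspring. The unit data, demands and penalty weights below are invented, and a real implementation must also handle minimum up/down-times and economic dispatch.

    ```python
    # Toy GA-plus-SA unit commitment sketch (4 units, 6 hours; invented data).
    import numpy as np

    rng = np.random.default_rng(2)
    cap = np.array([300.0, 250.0, 150.0, 100.0])    # unit capacities, MW
    run_cost = np.array([18.0, 20.0, 25.0, 30.0])   # $/MWh at full output
    demand = np.array([400, 500, 650, 700, 550, 450.0])

    def cost(u):                                    # u: units x hours 0/1 matrix
        short = np.maximum(demand - u.T @ cap, 0.0)
        return float((u.T @ (cap * run_cost)).sum() + 1e4 * short.sum())

    def sa_improve(u, temp=500.0, steps=50):
        best = u.copy()
        for _ in range(steps):
            trial = best.copy()
            trial[rng.integers(4), rng.integers(6)] ^= 1   # flip one bit
            d = cost(trial) - cost(best)
            if d < 0 or rng.random() < np.exp(-d / temp):  # SA acceptance
                best = trial
            temp *= 0.95
        return best

    pop = [rng.integers(0, 2, (4, 6)) for _ in range(20)]
    for gen in range(40):
        pop.sort(key=cost)
        cut = rng.integers(1, 6)                    # one-point crossover
        child = np.hstack([pop[0][:, :cut], pop[1][:, cut:]])
        pop[-1] = sa_improve(child)                 # SA refines the offspring
    print(f"best schedule cost: ${cost(min(pop, key=cost)):,.0f}")
    ```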

  9. Evaluation of deformation accuracy of a virtual pneumoperitoneum method based on clinical trials for patient-specific laparoscopic surgery simulator

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Qu, Jia Di; Nimura, Yukitaka; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku

    2012-02-01

    This paper evaluates the deformation accuracy of a virtual pneumoperitoneum method by utilizing measurements of real deformations of patient bodies. Laparoscopic surgery is less invasive than traditional open surgery; in laparoscopic surgery, the pneumoperitoneum process is performed to create a viewing and working space. Although a virtual pneumoperitoneum method based on 3D CT image deformation has been proposed for patient-specific laparoscopy simulators, quantitative evaluation based on measurements obtained in real surgery has not been performed. In this paper, we evaluate the deformation accuracy of the virtual pneumoperitoneum method based on real deformation data of the abdominal wall measured in operating rooms (ORs). The evaluation results are used to find optimal deformation parameters of the virtual pneumoperitoneum method. We measure landmark positions on the abdominal wall on a 3D CT image taken before the pneumoperitoneum process; the landmark positions are defined based on the anatomical structure of the patient body. We also measure the landmark positions on a 3D CT image deformed by the virtual pneumoperitoneum method. To measure real deformations of the abdominal wall, we measure the landmark positions on the abdominal wall of a patient before and after the pneumoperitoneum process in the OR, and transform these positions from the tracker coordinate system to the CT coordinate system. The positional error of the virtual pneumoperitoneum method is calculated from the differences between the landmark positions on the 3D CT image and the transformed landmark positions. Experimental results based on eight surgical cases showed that the minimal positional error was 13.8 mm. The positional error can be decreased from that of the previous method by calculating optimal deformation parameters of the virtual pneumoperitoneum method from the experimental results.

  10. Improving the degree-day method for sub-daily melt simulations with physically-based diurnal variations

    NASA Astrophysics Data System (ADS)

    Tobin, Cara; Schaefli, Bettina; Nicótina, Ludovico; Simoni, Silvia; Barrenetxea, Guillermo; Smith, Russell; Parlange, Marc; Rinaldo, Andrea

    2013-05-01

    This paper proposes a new extension of the classical degree-day snowmelt model applicable to hourly simulations for regions with limited data and adaptable to a broad range of spatially-explicit hydrological models. The snowmelt schemes have been tested against a point measurement dataset at the Cotton Creek Experimental Watershed (CCEW) in British Columbia, Canada and against a detailed dataset available from the Dranse de Ferret catchment, an extensively monitored catchment in the Swiss Alps. The snowmelt model performance is quantified with the use of a spatially-explicit model of the hydrologic response. Comparative analyses are presented with the widely known, grid-based method proposed by Hock, which combines a local, temperature-index approach with potential radiation. The results suggest that a simple diurnal cycle of the degree-day melt parameter based on minimum and maximum temperatures is competitive with the Hock approach for sub-daily melt simulations. Advantages of this extension of the classical degree-day method over other temperature-index methods include its use of physically-based diurnal variations and its adaptability to data-constrained hydrological models that are lumped in nature.
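
    A minimal sketch of the proposed idea, a degree-day factor that varies through the day with the position of temperature within its daily min-max range, is given below; the exact functional form used by the authors may differ, and all constants are illustrative.

    ```python
    # Hourly degree-day melt with a Tmin/Tmax-driven diurnal melt factor
    # (constants and the scaling rule are illustrative, not the authors').
    import numpy as np

    ddf_min, ddf_max = 2.0, 6.0    # melt factor bounds, mm/(degC day)
    t_base = 0.0                   # melt threshold temperature, degC

    def hourly_melt(temp_hourly):
        t_min, t_max = temp_hourly.min(), temp_hourly.max()
        # Scale the melt factor with T's position in its daily range
        frac = (temp_hourly - t_min) / max(t_max - t_min, 1e-6)
        ddf = ddf_min + (ddf_max - ddf_min) * frac
        return ddf / 24.0 * np.maximum(temp_hourly - t_base, 0.0)

    hours = np.arange(24)
    temp = 3.0 + 6.0 * np.sin(np.pi * (hours - 6) / 12)   # synthetic day
    print(f"daily melt: {hourly_melt(temp).sum():.1f} mm w.e.")
    ```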

  11. A free energy-based surface tension force model for simulation of multiphase flows by level-set method

    NASA Astrophysics Data System (ADS)

    Yuan, H. Z.; Chen, Z.; Shu, C.; Wang, Y.; Niu, X. D.; Shu, S.

    2017-09-01

    In this paper, a free energy-based surface tension force (FESF) model is presented for accurately resolving the surface tension force in numerical simulations of multiphase flows by the level-set method. By using the analytical form of the order parameter along the normal direction to the interface in the phase-field method together with the free energy principle, the FESF model offers an explicit, analytical formulation for the surface tension force. The only variable in this formulation is the normal distance to the interface, which can be substituted by the distance function solved by the level-set method. On one hand, compared to the conventional continuum surface force (CSF) model in the level-set method, the FESF model introduces no regularized delta function, so it suffers less from numerical diffusion and performs better in mass conservation. On the other hand, compared to the phase-field surface tension force (PFSF) model, the surface tension force in the FESF model is evaluated analytically rather than through numerical approximations of spatial derivatives, so better numerical stability and higher accuracy can be expected. Various numerical examples are tested to validate the robustness of the proposed FESF model. It turns out that the FESF model performs better than the CSF and PFSF models in terms of accuracy, stability, convergence speed and mass conservation. Numerical tests also show that the FESF model can effectively simulate problems with high density/viscosity ratios, high Reynolds numbers and severe topological interfacial changes.
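
    For reference, the conventional CSF formulation that FESF is compared against spreads the surface tension over a finite interface thickness via a regularized delta function; in standard textbook form (σ is the surface tension coefficient and φ the level-set function):

    ```latex
    % Conventional CSF surface-tension body force in a level-set framework;
    % \delta_\epsilon is the regularized delta function that FESF avoids.
    \mathbf{F}_{s} = \sigma \,\kappa(\phi)\,\delta_\epsilon(\phi)\,\nabla\phi,
    \qquad
    \kappa(\phi) = \nabla \cdot \frac{\nabla\phi}{\lvert \nabla\phi \rvert}
    ```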

  13. Applying Synchronous Methods during the Development of an Online Classroom-Based Simulation

    ERIC Educational Resources Information Center

    Ferry, Brian; Kervin, Lisa

    2006-01-01

    Purpose: The purpose of this paper is to report the impact of an online simulation that was designed to provide pre-service teachers with experience in dealing with complex classroom situations associated with the teaching of literacy. Design/methodology/approach: A developmental approach to the research was used. This is also known as…

  14. Comparison of different methods to calculate total runoff and sediment yield based on aliquot sampling from rainfall simulations

    NASA Astrophysics Data System (ADS)

    Tresch, Simon; Fister, Wolfgang; Marzen, Miriam; Kuhn, Nikolaus J.

    2015-04-01

    The quality of data obtained from rainfall experiments depends mainly on the quality of the rainfall simulation itself. However, even the best rainfall simulation cannot deliver valuable data if runoff and sediment discharge from the plot are not sampled at a proper interval or if poor interpolation methods are used. The safest way to get good results would be to collect all runoff and sediment that comes off the plot at the shortest possible intervals. Unfortunately, high rainfall amounts often coincide with limited transport and analysis capacities; therefore, it is in most cases necessary to find a good compromise between sampling frequency, interpolation method, and available analysis capacity. The aim of this study was to compare different methods of calculating total sediment yield based on aliquot sampling intervals. The methods tested were (1) simple extrapolation of one sample until the next sample is collected; (2) averaging between two successive samples; (3) extrapolation of the sediment concentration; and (4) extrapolation using a regression function. The results indicate that all methods could, in principle, be used to calculate total sediment yields, but errors of 10-25% would have to be taken into account when interpreting the data. The highest deviations were always found for the first measurement interval, which shows that capturing the initial flush of sediment from the plot is very important for calculating reliable totals.
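
    The effect of the interpolation choice is easy to demonstrate on synthetic data; in the sketch below the sampling times, concentrations and runoff rate are invented, and scheme (3) is omitted for brevity.

    ```python
    # Compare total sediment yield from aliquot samples under three of the
    # interpolation schemes named above (all data are synthetic).
    import numpy as np

    t = np.array([1.0, 3.0, 5.0, 10.0, 15.0, 20.0, 30.0])  # sample times, min
    conc = np.array([12.0, 8.0, 6.0, 4.0, 3.5, 3.0, 2.8])  # sediment, g/L
    q = 0.5                                                # runoff, L/min

    width = np.diff(np.concatenate(([0.0], t)))            # interval lengths
    yield_step = np.sum(conc * q * width)                  # (1) stepwise hold
    mid = (conc[1:] + conc[:-1]) / 2                       # (2) averaging
    yield_mean = conc[0] * q * width[0] + np.sum(mid * q * width[1:])
    slope, intercept = np.polyfit(t, np.log(conc), 1)      # (4) regression
    tt = np.linspace(0.0, 30.0, 3000)
    yield_reg = np.sum(np.exp(slope * tt + intercept) * q) * (tt[1] - tt[0])
    print(f"step {yield_step:.1f} g, averaged {yield_mean:.1f} g, "
          f"regression {yield_reg:.1f} g")
    ```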

  15. Simulation of the early stage of binary alloy decomposition, based on the free energy density functional method

    NASA Astrophysics Data System (ADS)

    L'vov, P. E.; Svetukhin, V. V.

    2016-07-01

    Based on the free energy density functional method, the early stage of decomposition of a one-dimensional binary alloy in the regular solution approximation has been simulated. The simulation takes into account Gaussian composition fluctuations arising from the initial alloy state. The calculation uses a block approach, in which the extensive solution volume is discretized into independent fragments, the decomposition process is calculated for each fragment, and the resulting second-phase segregations are then analyzed jointly. It was possible to trace all stages of solid solution decomposition: nucleation, growth, and the initial stage of coalescence. The time dependences of the main phase distribution characteristics are calculated: the average size and concentration of second-phase particles, their size distribution function, and the nucleation rate of second-phase particles (clusters). Cluster trajectories in size-composition space are constructed for the cases of growth and dissolution.

  16. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    SciTech Connect

    Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.

    2015-03-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption-emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedies the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method and compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck-Cummings algorithm.

  17. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.

    PubMed

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-12-01

    ...by the 4D images, and also the accuracy of the average intensity projection (AIP) of the 4D images. Probability-based sorting showed improved similarity between the breathing motion PDF from the 4D images and the reference PDF compared to single-cycle sorting, indicated by a significant increase in the Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03; single-cycle sorting, DSC = 0.83 ± 0.05; p < 0.001). In the simulation study on the XCAT phantom, the probability-based method outperformed the conventional phase-based methods in the qualitative evaluation of motion artifacts and in the quantitative evaluation of tumor volume precision and accuracy and of the accuracy of the AIP of the 4D images. In this paper the authors demonstrated the feasibility of a novel probability-based multicycle 4D image sorting method. The preliminary results showed that the new method can improve the accuracy of the tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management.

  18. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    DOE PAGES

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...

    2016-08-09

    Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.

  19. A simulation-based evaluation of methods for inferring linear barriers to gene flow

    Treesearch

    Christopher Blair; Dana E. Weigel; Matthew Balazik; Annika T. H. Keeley; Faith M. Walker; Erin Landguth; Sam Cushman; Melanie Murphy; Lisette Waits; Niko Balkenhol

    2012-01-01

    Different analytical techniques used on the same data set may lead to different conclusions about the existence and strength of genetic structure. Therefore, reliable interpretation of the results from different methods depends on the efficacy and reliability of different statistical methods. In this paper, we evaluated the performance of multiple analytical methods to...

  20. Simulation modeling based method for choosing an effective set of fault tolerance mechanisms for real-time avionics systems

    NASA Astrophysics Data System (ADS)

    Bakhmurov, A. G.; Balashov, V. V.; Glonina, A. B.; Pashkov, V. N.; Smeliansky, R. L.; Volkanov, D. Yu.

    2013-12-01

    In this paper, the reliability allocation problem (RAP) for real-time avionics systems (RTAS) is considered. The proposed method for solving this problem consists of two steps: (i) creation of an RTAS simulation model at the necessary level of abstraction, and (ii) application of a metaheuristic algorithm to find an optimal solution (i.e., to choose an optimal set of fault tolerance techniques). When the execution time of a software component must be measured during execution of the algorithm, simulation modeling is applied. The simulation modeling procedure itself consists of the following steps: automatic construction of a simulation model of the RTAS configuration, and running this model in a simulation environment to measure the required time. The method was implemented as an experimental software tool that works in cooperation with the DYANA simulation environment. The results of experiments with the implemented method are presented. Finally, future plans for the development of the presented method and tool are briefly described.

  1. Finite analytic method based on mixed-form Richards' equation for simulating water flow in vadose zone

    NASA Astrophysics Data System (ADS)

    Zhang, Zaiyong; Wang, Wenke; Yeh, Tian-chyi Jim; Chen, Li; Wang, Zhoufeng; Duan, Lei; An, Kedong; Gong, Chengcheng

    2016-06-01

    In this paper, we develop a finite analytic method (FAMM), which combines the flexibility of numerical methods with the advantages of analytical solutions, to solve the mixed-form Richards' equation. This new approach minimizes the mass balance errors and truncation errors associated with most numerical approaches. We use numerical experiments to demonstrate that FAMM obtains more accurate numerical solutions and controls the global mass balance better than the modified Picard finite difference method (MPFD) when compared with analytical solutions. In addition, FAMM is superior to the finite analytic method based on the head-based Richards' equation (FAMH). FAMM solutions are also compared to analytical solutions for wetting and drying processes in Brindabella Silty Clay Loam and Yolo Light Clay soils. Finally, we demonstrate that FAMM yields results comparable to those from MPFD and Hydrus-1D for simulating infiltration into other soils under wet and dry conditions. These numerical experiments further confirm that as long as a hydraulic constitutive model captures the general behaviors of other models, it can be used to yield flow fields comparable to those based on the other models.
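
    For reference, the mixed form of Richards' equation couples the water content θ and the pressure head h; for one-dimensional vertical flow with hydraulic conductivity K(h), the standard textbook form is:

    ```latex
    % Mixed-form Richards' equation for 1D vertical unsaturated flow:
    \frac{\partial \theta(h)}{\partial t}
      = \frac{\partial}{\partial z}
        \left[ K(h) \left( \frac{\partial h}{\partial z} + 1 \right) \right]
    ```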

  2. Continuous surface force based lattice Boltzmann equation method for simulating thermocapillary flow

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Zheng, Song; Zhai, Qinglan

    2016-02-01

    In this paper, we extend a lattice Boltzmann equation (LBE) method with a continuous surface force (CSF) to simulate thermocapillary flows. The model builds on our previous CSF LBE for athermal two-phase flow, in which the interfacial tension forces and the Marangoni stresses resulting from interface interactions between different phases are described through the CSF concept. In this model, the sharp interfaces between different phases are replaced by narrow transition layers, and the kinetics and morphological evolution of phase separation are characterized by an order parameter via the Cahn-Hilliard equation, which is solved in the framework of the LBE. The scalar convection-diffusion equation for the temperature field is solved by a thermal LBE. The model is validated against thermal two-layered Poiseuille flow and against thermocapillary-driven convection of two superimposed planar fluids at negligibly small Reynolds and Marangoni numbers, both of which have analytical solutions for the velocity and temperature. Thermocapillary migration of two- and three-dimensional deformable droplets is then simulated; the numerical results show that the predictions of the present LBE agree with the analytical solutions and other numerical results.

  3. A Simulation-Based Comparison of Several Stochastic Linear Regression Methods in the Presence of Outliers.

    ERIC Educational Resources Information Center

    Rule, David L.

    Several regression methods were examined within the framework of weighted structural regression (WSR), comparing their regression weight stability and score estimation accuracy in the presence of outlier contamination. The methods compared are: (1) ordinary least squares; (2) WSR ridge regression; (3) minimum risk regression; (4) minimum risk 2;…

  4. A numerical simulation of the hole-tone feedback cycle based on an axisymmetric discrete vortex method and Curle's equation

    NASA Astrophysics Data System (ADS)

    Langthjem, M. A.; Nakano, M.

    2005-11-01

    An axisymmetric numerical simulation approach to the hole-tone self-sustained oscillation problem is developed, based on the discrete vortex method for the incompressible flow field and a representation of flow noise sources on an acoustically compact impingement plate by Curle's equation. The shear layer of the jet is represented by 'free' discrete vortex rings, and the jet nozzle and the end plate by bound vortex rings. A vortex ring is released from the nozzle at each time step in the simulation, and the newly released vortex rings are disturbed by acoustic feedback. It is found that the basic feedback cycle works hydrodynamically. The effect of the acoustic feedback is to suppress the broadband noise and reinforce the characteristic frequency and its higher harmonics. An experimental investigation is also described, in which a hot wire probe was used to measure velocity fluctuations in the shear layer and a microphone to measure acoustic pressure fluctuations. Comparisons between simulated and experimental results show quantitative agreement with respect to both frequency and amplitude of the shear layer velocity fluctuations. As to the acoustic pressure fluctuations, there is quantitative agreement with respect to frequencies, and reasonable qualitative agreement with respect to the peaks of the characteristic frequency and its higher harmonics. Both simulated and measured frequencies f follow the criterion L/u_c + L/c_0 = n/f, where L is the gap length between the nozzle exit and the end plate, u_c is the shear layer convection velocity, c_0 is the speed of sound, and n is a mode number (n = 1/2, 1, 3/2, ...). The experimental results, however, display a complicated pattern of mode jumps, which the numerical method cannot capture.

  5. A computational method to model radar return range in a polygonally based, computer-generated-imagery simulation

    NASA Technical Reports Server (NTRS)

    Moran, F. J.; Phillips, J. D.

    1986-01-01

    Described is a method for modeling a ground-mapping radar system for use in simulations where the terrain is in a polygonal form commonly used with computer generated imagery (CGI). The method employs a unique approach for rapidly rejecting polygons not visible to the radar to facilitate the real-time simulation of the radar return. This rapid rejection of the nonvisible polygons requires the precalculation and storage of a set of parameters that do not vary during the simulation. The calculation of a radar range as a function of the radar forward-looking angle to the CGI terrain is carried out only for the visible polygons. This method was used as part of a simulation for terrain-following helicopter operations on the vertical motion simulator at the NASA Ames Research Center. It proved to be an efficient means for returning real-time simulated radar range data.
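
    The central idea, precomputing simulation-invariant polygon data so that each nonvisible polygon is rejected with a single dot product, can be sketched as follows; the geometry and array sizes are invented for illustration.

    ```python
    # Fast rejection of polygons facing away from the radar using
    # precomputed plane normals and reference points (synthetic terrain).
    import numpy as np

    rng = np.random.default_rng(5)
    # Precalculated, simulation-invariant data: one outward unit normal
    # and one reference vertex per terrain polygon
    normals = rng.standard_normal((1000, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    ref_pts = rng.uniform(-500, 500, (1000, 3))

    def visible_polygons(radar_pos):
        # A polygon can face the radar only if the radar lies on the
        # outer side of its plane: n . (radar - ref) > 0
        return np.einsum("ij,ij->i", normals, radar_pos - ref_pts) > 0

    mask = visible_polygons(np.array([0.0, 0.0, 300.0]))
    print(f"{mask.sum()} of {len(mask)} polygons kept for range computation")
    ```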

  6. ANS Based Submarine Simulation

    DTIC Science & Technology

    1994-08-01

    ...computer-based simulation program supplied by Dr. John Ware at Computer Sciences Corporation (CSC). There are two reasons to use simulated data instead... an ANS (Artificial Neural System) capable of modeling submarine performance based on full-scale data generated using a computer-based simulation program... The Optimized Entropy algorithm enables the solution of difficult problems on a desktop computer within an acceptable time frame. Objective for w...

  7. Mirage events & driver haptic steering alerts in a motion-base driving simulator: A method for selecting an optimal HMI.

    PubMed

    Talamonti, Walter; Tijerina, Louis; Blommer, Mike; Swaminathan, Radhakrishnan; Curry, Reates; Ellis, R Darin

    2017-11-01

    This paper describes a new method, a 'mirage scenario,' to support formative evaluation of driver alerting or warning displays for manual and automated driving. The method presents driving contexts (e.g., various times-to-collision (TTCs) to a lead vehicle) briefly and then removes them. In the present study, a haptic steering display was evaluated during each mirage event; this haptic display indicated that a steering response could be initiated to drive around an obstacle ahead. A motion-base simulator was used in a 32-participant study to present vehicle motion cues similar to the actual application. Surprise was neither present nor of concern, as it would be for a summative evaluation of a forward collision warning system. Furthermore, no collision avoidance maneuvers were performed, thereby reducing the risk of simulator sickness. This paper illustrates the mirage scenario procedures, the rating methods and definitions used with the mirage scenario, and the analysis of the ratings obtained, together with a multi-attribute utility theory (MAUT) approach for evaluating and selecting among alternative designs for future summative evaluation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. An Effective Correction Method for Seriously Oblique Remote Sensing Images Based on Multi-View Simulation and a Piecewise Model

    PubMed Central

    Wang, Chunyuan; Liu, Xiang; Zhao, Xiaoli; Wang, Yongqi

    2016-01-01

    Conventional correction approaches are unsuitable for effectively correcting remote sensing images acquired under seriously oblique conditions, which suffer severe distortions and resolution disparity. Considering that the extraction of control points (CPs) and the parameter estimation of the correction model play important roles in correction accuracy, this paper introduces an effective correction method for large-angle (LA) images. Firstly, a new CP extraction algorithm based on multi-view simulation (MVS) is proposed to ensure effective matching of CP pairs between the reference image and the LA image. Then, a new piecewise correction algorithm using the optimized CPs is developed, in which the concept of a distribution measurement (DM) is introduced to quantify the CP distribution. The whole image is partitioned into contiguous subparts, which are corrected by different correction formulae to guarantee the accuracy of each subpart. Extensive experimental results demonstrate that the proposed method significantly outperforms conventional approaches. PMID:27763538

  9. Numerical simulation of mechanical deformation of semi-solid material using a level-set based finite element method

    NASA Astrophysics Data System (ADS)

    Sun, Zhidan; Bernacki, Marc; Logé, Roland; Gu, Guochao

    2017-09-01

    In this work, a level-set based finite element method was used to numerically evaluate the mechanical behavior, in the small-deformation range, of semi-solid materials with different microstructure configurations. For this purpose, a finite element model of the semi-solid phase was built based on a Voronoï diagram. Interfaces between the solid and liquid phases were implicitly described by level-set functions coupled to an anisotropic meshing technique. The liquid phase was considered a Newtonian fluid, whereas the behavior of the solid phase was described by a viscoplastic law. Simulations were performed to study the effect of parameters such as solid phase fraction and solid bridging. The results show that the macroscopic mechanical behavior of a semi-solid material strongly depends on the solid fraction and the local microstructure, which play important roles in the formation of hot tearing. These results could provide valuable information for the processing of semi-solid materials.

  10. Performance Simulation: The Method.

    ERIC Educational Resources Information Center

    Rucker, Lance M.

    A logical, performer-based approach to teaching psychomotor skills is described. Four phases of surgical psychomotor skills training are identified, using an example from a dental preclinical training curriculum: (1) dental students are acquainted with the postural and positional parameters of balanced psychomotor performances; (2) students learn…

  11. Numerical simulation and experimental validation of biofilm in a multi-physics framework using an SPH based method

    NASA Astrophysics Data System (ADS)

    Soleimani, Meisam; Wriggers, Peter; Rath, Henryke; Stiesch, Meike

    2016-10-01

    In this paper, a 3D computational model has been developed to investigate biofilms in a multi-physics framework using smoothed particle hydrodynamics (SPH) based on a continuum approach. Biofilm formation is a complex process in the sense that several physical phenomena are coupled and consequently different time scales are involved. On one hand, biofilm growth is driven by biological reaction and nutrient diffusion; on the other hand, it is influenced by fluid flow causing biofilm deformation and interface erosion, in the context of interaction between a fluid and a deformable solid. The geometrical and numerical complexity arising from these phenomena poses serious challenges for grid-based techniques such as the finite element method. Here the solution is based on SPH, a powerful meshless method. SPH-based computational modeling is quite new in the biological community, and the method is uniquely robust in capturing interface-related processes of biofilm formation such as erosion. The obtained results show good agreement with experimental and published data, which demonstrates that the model is capable of simulating and predicting the overall spatial and temporal evolution of biofilms.

  12. Basis set generation for quantum dynamics simulations using simple trajectory-based methods.

    PubMed

    Saller, Maximilian A C; Habershon, Scott

    2015-01-13

    Methods for solving the time-dependent Schrödinger equation generally employ either a global static basis set, which is fixed at the outset, or a dynamic basis set, which evolves according to classical-like or variational equations of motion; the former approach results in the well-known exponential scaling with system size, while the latter can suffer from challenging numerical problems, such as singular matrices, as well as violation of energy conservation. Here, we suggest a middle road: building a basis set using trajectories to place time-independent basis functions in the regions of phase space relevant to wave function propagation. This simple approach, which potentially circumvents many of the problems traditionally associated with global or dynamic basis sets, is successfully demonstrated for two challenging benchmark problems in quantum dynamics, namely, relaxation dynamics following photoexcitation in pyrazine, and the spin Boson model.

  13. Development of modern approach to absorbed dose assessment in radionuclide therapy, based on Monte Carlo method simulation of patient scintigraphy

    NASA Astrophysics Data System (ADS)

    Lysak, Y. V.; Klimanov, V. A.; Narkevich, B. Ya

    2017-01-01

    One of the most difficult problems of modern radionuclide therapy (RNT) is control of the absorbed dose in the pathological volume. This research presents a new approach based on estimating the accumulated activity of the radiopharmaceutical (RP) in the tumor volume from planar scintigraphic images of the patient, with the radiation transport calculated by the Monte Carlo method, including absorption and scattering in the biological tissues of the patient and in the elements of the gamma camera itself. To obtain the data, we simulated gamma-camera scintigraphy of a vial containing the activity of RP administered to the patient, with the vial placed at a certain distance from the collimator, and a similar study was performed in identical geometry with the same RP activity in the pathological target inside the patient's body. For correct calculation results, an adapted Fisher-Snyder human phantom was simulated in the MCNP program. Within this technique, calculations were performed for different sizes of pathological targets and various tumor depths inside the patient's body, using radiopharmaceuticals based on mixed beta-gamma-emitting (131I, 177Lu) and pure beta-emitting (89Sr, 90Y) therapeutic radionuclides. The presented method can be implemented in clinical practice to estimate absorbed doses in the regions of interest with sufficient accuracy on the basis of planar scintigraphy of the patient.

  14. A novel state-space based method for direct numerical simulation of particle-laden turbulent flows

    NASA Astrophysics Data System (ADS)

    Ranjan, Reetesh; Pantano, Carlos

    2012-11-01

    We present a novel state-space-based numerical method for transport of the particle density function, which can be used to investigate particle-laden turbulent flows. In this approach, the problem can be stated purely in a deterministic Eulerian framework. The method is coupled to an incompressible three-dimensional flow solver. We consider a dilute suspension in which the volume fraction and mass loading of the particles are low enough that the one-way coupling approximation remains valid. The particle transport equation is derived from the governing equation of the particle dynamics described in a Lagrangian frame, by treating the position and velocity of the particle as state-space variables. The application and features of this method are demonstrated by simulating a particle-laden decaying isotropic turbulent flow. It is well known that even in an isotropic turbulent flow the distribution of particles is not uniform; for example, heavier-than-fluid particles tend to accumulate in regions of low vorticity and high strain rate, leaving large regions of the flow where particles remain sparsely distributed. The new approach can capture particle statistics in such sparsely populated regions more accurately than other numerical methods.

  15. Comparison of Ensemble Kalman Filter groundwater-data assimilation methods based on stochastic moment equations and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2014-04-01

    Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
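
    The MC-based EnKF update against which the moment-equation approach is compared reduces to a few lines of linear algebra; the sketch below uses the classical perturbed-observation form with illustrative dimensions and a synthetic observation operator.

    ```python
    # Perturbed-observation EnKF analysis step (sizes and data synthetic).
    import numpy as np

    rng = np.random.default_rng(3)
    n_state, n_obs, n_ens = 50, 5, 100

    X = rng.standard_normal((n_state, n_ens))       # forecast head ensemble
    H = np.zeros((n_obs, n_state))
    H[np.arange(n_obs), np.arange(0, n_state, 10)] = 1.0  # observe every 10th node
    R = 0.1 * np.eye(n_obs)                         # observation-error covariance
    d = rng.standard_normal(n_obs)                  # synthetic measured heads

    A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
    PHt = A @ (H @ A).T / (n_ens - 1)               # Pf H^T without forming Pf
    K = PHt @ np.linalg.inv(H @ PHt + R)            # Kalman gain
    D = d[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    Xa = X + K @ (D - H @ X)                        # analysis ensemble
    print("analysis ensemble:", Xa.shape)
    ```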

  16. Nonlinear gyrokinetic simulation of ion temperature gradient turbulence based on a numerical Lie-transform perturbation method

    NASA Astrophysics Data System (ADS)

    Xu, Yingfeng; Ye, Lei; Dai, Zongliang; Xiao, Xiaotao; Wang, Shaojie

    2017-08-01

    The electrostatic gyrokinetic nonlinear turbulence code NLT, based on a numerical Lie-transform perturbation method, is developed. To improve computational efficiency and avoid numerical instabilities, field-aligned coordinates and a Fourier filter are adopted in the NLT code. Nonlinear tests of ion temperature gradient driven turbulence with adiabatic electrons are performed to verify the NLT code against other gyrokinetic codes. The time evolution of the ion heat diffusivity and the relation between the ion heat diffusivity and the ion temperature gradient are compared in the nonlinear tests, and good agreement is achieved in the benchmarks between the NLT code and the other codes. The mode structures of the perturbed electric potential representing different phases have been simulated.

  17. Behavior simulation for electrically actuated bow-tie shaped fixed-fixed beams based on nodal analysis method

    NASA Astrophysics Data System (ADS)

    Li, Min; Huang, Qing-an; Li, Wei-hua

    2009-07-01

    This paper reports a nodal model for a trapeziform beam element with gradually changing cross-sections. Using this model, the electromechanical behavior of electrically actuated bow-tie shaped fixed-fixed beams can be simulated at the system level. The model is developed by treating the governing equations of the trapeziform beam with the Galerkin residual method and decomposing the 4th-order partial differential equation into discrete modal ordinary differential equations; the equivalent circuits and the corresponding nodal model are then established. The nonlinearities of mid-plane stretching and electrostatic forcing are included in the model. The accuracy of the developed model is verified by extensive comparison of static and dynamic analysis results with those obtained from FEA and available experimental data. The developed model is also applicable to beam-like structures with uniform cross-sections.

  18. Final Performance Report on Grant FA9550-07-1-0366 (Simulation-Based and Sampling Method for Global Optimization)

    DTIC Science & Technology

    2010-01-25

    Lipschitz-type conditions, then Vπ̂k(x) → Vπ∗(x) for all x ∈ S w.p.1. … In a simulation-based approach to POMDPs, we … Advances in Mathematical Finance, Birkhauser, 2007. … Awards: Michael Fu, elected Fellow of the Institute of Electrical and Electronics Engineers (IEEE).

  19. Inferring past demographic changes from contemporary genetic data: A simulation-based evaluation of the ABC methods implemented in diyabc.

    PubMed

    Cabrera, Andrea A; Palsbøll, Per J

    2017-06-27

    Inferring the demographic history of species and their populations is crucial to understanding their contemporary distribution, abundance and adaptations. The high computational overhead of likelihood-based inference approaches severely restricts their applicability to large data sets or complex models. In response to these restrictions, approximate Bayesian computation (ABC) methods have been developed to infer the demographic past of populations and species. Here, we present the results of an evaluation of the ABC-based approach implemented in the popular software package diyabc using simulated data sets (mitochondrial DNA sequences, microsatellite genotypes and single nucleotide polymorphisms). We simulated population genetic data under five different simple, single-population models to assess the model recovery rates as well as the bias and error of the parameter estimates. The ability of diyabc to recover the correct model was relatively low (0.49): 0.6 for the simplest models and 0.3 for the more complex models. The recovery rate improved significantly when the number of candidate models was reduced from five to three (from 0.57 to 0.71). Among the parameters of interest, the effective population size was estimated with higher accuracy than the timing of events. Increased amounts of genetic data did not significantly improve the accuracy of the parameter estimates. Some gains in accuracy and decreases in error were observed for scaled parameters (e.g., Neμ) compared to unscaled parameters (e.g., Ne and μ). We concluded that diyabc-based assessments are not suited to capturing a detailed demographic history but may be efficient at capturing simple, major demographic changes. © 2017 John Wiley & Sons Ltd.

  20. Simulation of collaborative studies for real-time PCR-based quantitation methods for genetically modified crops.

    PubMed

    Watanabe, Satoshi; Sawada, Hiroshi; Naito, Shigehiro; Akiyama, Hiroshi; Teshima, Reiko; Furui, Satoshi; Kitta, Kazumi; Hino, Akihiro

    2013-01-01

    To study the impacts of various random effects and parameters of collaborative studies on the precision of quantitation methods for genetically modified (GM) crops, we developed a set of random effects models for cycle time values of a standard curve-based relative real-time PCR that uses an endogenous gene sequence as the internal standard. The models and data from a published collaborative study of six GM lines at four concentration levels were used to simulate collaborative studies under various conditions. Results suggested that by reducing the number of well replicates from three to two and the number of standard levels of the endogenous sequence from five to three, the number of unknown samples analyzable on a 96-well PCR plate in routine analyses could be almost doubled while still meeting the repeatability RSD (RSDr ≤ 25%) and reproducibility RSD (RSDR < 35%) criteria of the collaborative study. Further, RSDr and RSDR were found to be most sensitive to random effects attributable to inhomogeneity among blind replicates, but little influenced by those attributable to DNA extraction. The proposed models are expected to be useful for optimizing standard curve-based relative quantitation methods for GM crops by real-time PCR and their collaborative studies.

  1. A GPU accelerated, discrete time random walk model for simulating reactive transport in porous media using colocation probability function based reaction methods

    NASA Astrophysics Data System (ADS)

    Barnard, J. M.; Augarde, C. E.

    2012-12-01

    The simulation of reactions in flow through unsaturated porous media is a more complicated process when using particle tracking based models than in continuum based models. In the former, particles are reacted on an individual particle-to-particle basis using either deterministic or probabilistic methods. This means that particle tracking methods, especially when simulations of reactions are included, are computationally intensive, as the reaction simulations require tens of thousands of nearest neighbour searches per time step. Despite this, particle tracking methods merit further study due to their ability to eliminate numerical dispersion and to simulate anomalous transport and incomplete mixing of reactive solutes. A new model has been developed using discrete time random walk particle tracking methods to simulate reactive mass transport in porous media, which includes a variation of the colocation probability function based methods of reaction simulation presented by Benson & Meerschaert (2008). Model development has also included code acceleration via graphics processing units (GPUs). The nature of particle tracking methods means that they are well suited to parallelization on GPUs. The architecture of GPUs is single instruction, multiple data (SIMD): only one operation can be performed at any one time, but it is performed on multiple data simultaneously. This allows significant speed gains where long loops of independent operations are performed. Computationally expensive code elements, such as the nearest neighbour searches required by the reaction simulation, are therefore prime targets for GPU acceleration.
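
    As a rough serial illustration of the kernel described above, the following sketch advances two reactive species by Gaussian random-walk jumps and applies a colocation-probability-style A + B reaction test to nearest-neighbour pairs. The search radius, probability form and rate constant are illustrative assumptions, not the authors' exact scheme:

      import numpy as np
      from scipy.spatial import cKDTree

      def rw_react_step(xA, xB, dt, D, k_f, rng):
          # Gaussian random-walk jumps for both species (diffusion only)
          xA = xA + np.sqrt(2 * D * dt) * rng.standard_normal(xA.shape)
          xB = xB + np.sqrt(2 * D * dt) * rng.standard_normal(xB.shape)
          # Nearest-neighbour search: the expensive kernel targeted for the GPU
          search_r = 4 * np.sqrt(2 * D * dt)
          pairs = cKDTree(xA).query_ball_point(xB, r=search_r)
          dim = xA.shape[1]
          sigma2 = 8 * D * dt          # variance of the A-B separation density
          deadA, deadB = set(), set()
          for j, neighbours in enumerate(pairs):
              for i in neighbours:
                  if i in deadA or j in deadB:
                      continue
                  s2 = np.sum((xA[i] - xB[j]) ** 2)
                  # Reaction probability ~ rate * colocation density of the pair
                  p = k_f * dt * np.exp(-s2 / (2 * sigma2)) / (2 * np.pi * sigma2) ** (dim / 2)
                  if rng.random() < p:
                      deadA.add(i)
                      deadB.add(j)
          keepA = np.array([i for i in range(len(xA)) if i not in deadA], dtype=int)
          keepB = np.array([j for j in range(len(xB)) if j not in deadB], dtype=int)
          return xA[keepA], xB[keepB]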

  2. Application of Wavelet-Based Methods for Accelerating Multi-Time-Scale Simulation of Bistable Heterogeneous Catalysis

    DOE PAGES

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...

    2017-02-16

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
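
    A hedged sketch of the kind of wavelet-based regularity estimate the record describes, using PyWavelets; the windowing, wavelet choice and the paper's exact Lipschitz-exponent algorithm may differ:

      import numpy as np
      import pywt

      def lipschitz_proxy(window, wavelet="db4", max_level=5):
          # Decompose; coeffs = [cA_n, cD_n, ..., cD_1]
          coeffs = pywt.wavedec(window, wavelet, level=max_level)
          peaks = [np.max(np.abs(d)) + 1e-30 for d in coeffs[1:]]
          levels = np.arange(max_level, 0, -1)      # coarse -> fine
          # For a Lipschitz-alpha feature, |W f| ~ scale**(alpha + 1/2),
          # so the log-log slope across levels estimates alpha + 1/2
          slope, _ = np.polyfit(levels, np.log2(peaks), 1)
          return slope - 0.5

      # Applied over a sliding window of a submodel's time series, a sudden
      # drop of this proxy would flag an impending state shift (bifurcation).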

  3. System simulation method for fiber-based homodyne multiple target interferometers using short coherence length laser sources

    NASA Astrophysics Data System (ADS)

    Fox, Maik; Beuth, Thorsten; Streck, Andreas; Stork, Wilhelm

    2015-09-01

    Homodyne laser interferometers for velocimetry are well-known optical systems used in many applications. While the detector power output signal of such a system, using a long coherence length laser and a single target, is easily modelled using the Doppler shift, scenarios with a short coherence length source, e.g. an unstabilized semiconductor laser, and multiple weak targets demand a more elaborate approach to simulation. Especially when using fiber components, the actual setup is an important factor for system performance, as effects like return losses and multipath propagation have to be taken into account. If the power received from the targets is in the same region as the stray light created in the fiber setup, a complete system simulation becomes a necessity. In previous work, a phasor-based signal simulation approach for interferometers based on short coherence length laser sources was evaluated. To facilitate the use of the signal simulation, a fiber component ray tracer has since been developed that allows the creation of input files for the signal simulation environment. The software uses object-oriented MATLAB code, simplifying the entry of different fiber setups and the extension of the ray tracer. Thus, a seamless path from a system description based on arbitrarily interconnected fiber components to a signal simulation for different target scenarios has been established. The ray tracer and signal simulation are being used for the evaluation of interferometer concepts incorporating delay lines to compensate for short coherence length.

  4. On the direct numerical simulation of moderate-Stokes-number turbulent particulate flows using algebraic-closure-based and kinetic-based moments methods

    NASA Astrophysics Data System (ADS)

    Vie, Aymeric; Masi, Enrica; Simonin, Olivier; Massot, Marc; EM2C/Ecole Centrale Paris Team; IMFT Team

    2012-11-01

    To simulate particulate flows, a convenient formalism for HPC is to use Eulerian moment methods, which describe the evolution of velocity moments instead of directly tracking the number density function (NDF) of the droplets. By using a conditional PDF approach, the Mesoscopic Eulerian Formalism (MEF) of Février et al. 2005 offers a solution for the direct numerical simulation of turbulent particulate flows, even at relatively high Stokes number. Here, we compare two existing approaches used to solve this formalism: the Algebraic-Closure-Based Moment Method (Kaufmann et al. 2008, Masi et al. 2011) and the Kinetic-Based Moment Method (Yuan et al. 2010, Chalons et al. 2010, Vié et al. 2012). The goal of the current work is to evaluate both strategies in turbulent test cases. For the ACBMM, viscosity-type and non-linear closures are envisaged, whereas for the KBMM, isotropic and anisotropic closures are investigated. A main aspect of the methodology is that the same numerical methods are used for both approaches in the comparison. Results show that the new non-linear closure and the anisotropic Gaussian closure are both accurate in shear flows, whereas viscosity-type and isotropic closures lead to erroneous results.

  5. Simulation-Based Bronchoscopy Training

    PubMed Central

    Kennedy, Cassie C.; Maldonado, Fabien

    2013-01-01

    Background: Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. Methods: We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. Results: From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n = 8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n = 7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, −1.47 to 2.69]) and process (0.33 [95% CI, −1.46 to 2.11]) outcomes (n = 2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Conclusions: Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few. PMID:23370487

  6. Deformation Behavior of Powder Metallurgy Connecting Rod Preform During Hot Forging Based on Hot Compression and Finite Element Method Simulation

    NASA Astrophysics Data System (ADS)

    Li, Fengxian; Yi, Jianhong; Eckert, Jürgen

    2017-06-01

    Powder-forged connecting rods with complex geometries commonly suffer from nonuniform density distribution. Moreover, the physical properties of the preform play a critical role in optimizing connecting rod quality. The flow behavior of a Fe-3Cu-0.5C (wt pct) alloy with a relative density of 0.8 manufactured by powder metallurgy (P/M, Fe-Cu-C) was studied using isothermal compression tests. The material constitutive equation, power dissipation (η) maps, and hot processing maps of the P/M Fe-Cu-C alloy were established. The hot forging process of the connecting rod preforms was then simulated using the material constitutive model within a finite element method simulation. The calculated results agree well with the experimental ones. The results show that the flow stress increases with decreasing temperature and increasing strain rate. The activation energy of the P/M Fe-Cu-C alloy with a relative density of 0.8 is 188.42 kJ/mol. The optimum temperature at a strain of 0.4 for good hot workability of the sintered Fe-Cu-C alloy ranges from 1333 K to 1380 K (1060 °C to 1107 °C). The relative density of the hot-forged connecting rod at the central part changed significantly compared with that at the big end and at the small end. The present theoretical and experimental investigations provide a methodology for accurately predicting the densification behavior of the P/M connecting rod preform during hot forging and help to optimize the processing parameters.

  7. Numerical hydrodynamic simulations based on semi-analytic galaxy merger trees: method and Milky Way-like galaxies

    NASA Astrophysics Data System (ADS)

    Moster, Benjamin P.; Macciò, Andrea V.; Somerville, Rachel S.

    2014-01-01

    We present a new approach to study galaxy evolution in a cosmological context. We combine cosmological merger trees and semi-analytic models of galaxy formation to provide the initial conditions for multimerger hydrodynamic simulations. In this way, we exploit the advantages of merger simulations (high resolution and inclusion of the gas physics) and semi-analytic models (cosmological background and low computational cost), and integrate them to create a novel tool. This approach allows us to study the evolution of various galaxy properties, including the treatment of the hot gaseous halo from which gas cools and accretes on to the central disc, which has been neglected in many previous studies. This method shows several advantages over other methods. As only the particles in the regions of interest are included, the run time is much shorter than in traditional cosmological simulations, leading to greater computational efficiency. Using cosmological simulations, we show that multiple mergers are expected to be more common than sequences of isolated mergers, and therefore studies of galaxy mergers should take this into account. In this pilot study, we present our method and illustrate the results of simulating 10 Milky Way-like galaxies since z = 1. We find good agreement with observations for the total stellar masses, star formation rates, cold gas fractions and disc scalelength parameters. We expect that this novel numerical approach will be very useful for pursuing a number of questions pertaining to the transformation of galaxy internal structure through cosmic time.

  8. Bootstrapping Methods Applied for Simulating Laboratory Works

    ERIC Educational Resources Information Center

    Prodan, Augustin; Campean, Remus

    2005-01-01

    Purpose: The aim of this work is to implement bootstrapping methods into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical methods to infer the truth from sample…

  10. Application of a flexible lattice Boltzmann method based simulation tool for modelling physico-chemical processes at different scales

    NASA Astrophysics Data System (ADS)

    Patel, Ravi A.; Perko, Janez; Jacques, Diederik

    2017-04-01

    Often, especially in disciplines related to natural porous media, such as vadose zone or aquifer hydrology or contaminant transport, the relevant spatial and temporal scales on which we need to provide information are larger than the scales on which the processes actually occur. The usual techniques for dealing with these problems assume the existence of a representative elementary volume (REV). However, in order to understand the behavior at larger scales it is important to downscale the problem onto the relevant scale of the processes. Due to limited resources (time, memory), downscaling can only be carried down to a certain lower scale. At this lower scale, several scales may still coexist: the scale that can be explicitly described and a scale that must be conceptualized through effective properties. Hence, models intended to provide effective properties on relevant scales should be flexible enough to represent complex pore structure by explicit geometry on the one hand and, on the other, processes defined differently (e.g. by effective properties) that emerge at a lower scale. In this work we present a state-of-the-art lattice Boltzmann method based simulation tool applicable to the advection-diffusion equation coupled to geochemical processes. The lattice Boltzmann transport solver can be coupled with an external geochemical solver, which allows a wide range of geochemical reaction networks to be accounted for through thermodynamic databases. Extension to multiphase systems is ongoing. We provide several examples of calculating effective diffusion properties, permeability, and effective reaction rates at the continuum scale based on the pore-scale geometry.
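
    For orientation, a minimal single-relaxation-time lattice Boltzmann solver for the advection-diffusion equation on a D2Q5 lattice is sketched below. Grid size, velocity field and relaxation time are placeholder choices; the tool described above additionally couples such a transport solver to an external geochemical code:

      import numpy as np

      # D2Q5 lattice: rest direction plus four axis directions, standard weights
      w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])
      cx = np.array([0, 1, -1, 0, 0])
      cy = np.array([0, 0, 0, 1, -1])

      nx, ny, tau = 100, 100, 0.8          # lattice units; D = (tau - 0.5) / 3
      ux = np.full((ny, nx), 0.05)         # prescribed pore-water velocity
      uy = np.zeros((ny, nx))

      C = np.zeros((ny, nx))
      C[ny // 2, nx // 4] = 1.0            # initial solute pulse
      f = w[:, None, None] * C             # start from equilibrium populations

      for step in range(1000):
          C = f.sum(axis=0)                # concentration = zeroth moment
          for k in range(5):
              feq = w[k] * C * (1 + 3 * (cx[k] * ux + cy[k] * uy))
              f[k] -= (f[k] - feq) / tau   # BGK collision
              # Periodic streaming along the lattice directions
              f[k] = np.roll(np.roll(f[k], cx[k], axis=1), cy[k], axis=0)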

  11. Data mining of the GAW14 simulated data using rough set theory and tree-based methods.

    PubMed

    Wei, Liang-Ying; Huang, Cheng-Lung; Chen, Chien-Hsiun

    2005-12-30

    Rough set theory and decision trees are data mining methods used for dealing with vagueness and uncertainty. They have been utilized to unearth hidden patterns in complicated datasets collected for industrial processes. The Genetic Analysis Workshop 14 simulated data were generated using a system that implemented multiple correlations among four consequential layers of genetic data (disease-related loci, endophenotypes, phenotypes, and one disease trait). When information of one layer was blocked and uncertainty was created in the correlations among these layers, the correlation between the first and last layers (susceptibility genes and the disease trait in this case), was not easily directly detected. In this study, we proposed a two-stage process that applied rough set theory and decision trees to identify genes susceptible to the disease trait. During the first stage, based on phenotypes of subjects and their parents, decision trees were built to predict trait values. Phenotypes retained in the decision trees were then advanced to the second stage, where rough set theory was applied to discover the minimal subsets of genes associated with the disease trait. For comparison, decision trees were also constructed to map susceptible genes during the second stage. Our results showed that the decision trees of the first stage had accuracy rates of about 99% in predicting the disease trait. The decision trees and rough set theory failed to identify the true disease-related loci.

  12. Combination of a Latin hypercube sampling and a simulated annealing method to optimize a physically based hydrological model

    NASA Astrophysics Data System (ADS)

    Robert, D.; Braud, I.; Cohard, J.; Zin, I.; Vauclin, M.

    2010-12-01

    Physically based hydrological models involve a large number of parameters and data. Each of them is associated with uncertainty, because some characteristics are measured only indirectly and others vary in space or time. Even when many data are measured in the field or in the laboratory, uncertainty about the data persists and a large degree of freedom remains in the modeling. Moreover, the choice of physical parameterization also induces uncertainties and errors in model behavior and simulation results. To address this problem, sensitivity analyses are useful. They allow the influence of each parameter on modeling results to be determined and an optimal parameter set to be adjusted by minimizing a cost function. However, the larger the number of parameters, the more expensive it is to explore the whole parameter space. In this context, we carried out an approach that is original in the hydrology domain to perform this sensitivity analysis using a 1D Soil - Vegetation - Atmosphere Transfer model. The chosen method is global: it focuses on the variability of the output data due to the input parameter uncertainties. Latin hypercube sampling is adopted to sample the analyzed input parameter space; this method has the advantage of reducing the computational cost. The method is applied using the SiSPAT (Simple Soil Vegetation Atmosphere Transfer) model over a complete year with observations collected in a small catchment in Benin, within the AMMA project. It involves sensitivity to 30 parameters sampled in 40 intervals. The quality of the modeled results is evaluated by calculating several criteria: the bias, the root mean square error and the Nash-Sutcliffe efficiency coefficient between modeled and observed time series of net radiation, heat fluxes, soil temperatures and volumetric water contents. To hierarchize the influence of the various input parameters on the results, the study of …
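
    A minimal sketch of the sampling step described above, using SciPy's quasi-Monte Carlo module (requires SciPy >= 1.7; the unit bounds are placeholders for the real ranges of the 30 SiSPAT parameters):

      import numpy as np
      from scipy.stats import qmc

      n_params, n_runs = 30, 40                  # matches the design above
      sampler = qmc.LatinHypercube(d=n_params, seed=42)
      unit_sample = sampler.random(n=n_runs)     # stratified points in [0, 1)^d
      lower = np.zeros(n_params)                 # real SiSPAT bounds go here
      upper = np.ones(n_params)
      runs = qmc.scale(unit_sample, lower, upper)
      # Each of the 40 rows is one candidate parameter set; model outputs are
      # then scored with bias, RMSE and the Nash-Sutcliffe efficiency:
      #   NSE = 1 - sum((obs - sim)**2) / sum((obs - mean(obs))**2)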

  13. Aerosol source apportionment based on multi-wavelength photoacoustic light absorption measurements: a simulation method for system's optimisation

    NASA Astrophysics Data System (ADS)

    Simon, Károly; Ajtai, Tibor; Kiss-Albert, Gergely; Utry, Noémi; Pintér, Máté; Szabó, Gábor; Bozóki, Zoltán

    2017-04-01

    Aerosol source apportionment is currently one of the outstanding challenges in environmental monitoring. In most cases atmospheric aerosol is a heterogeneous mixture, as it typically originates from various sources, and each aerosol type has distinct chemical and physical properties. In contrast to chemical properties, the optical absorption and size distribution of airborne particles can be measured in real time with high time resolution, i.e. their measurement facilitates real-time source apportionment (Favez et al., 2009; Ajtai et al., 2011; Favez et al., 2010). The wavelength dependence of the optical absorption coefficient (OAC) is usually characterised by the Absorption Angström Exponent (AAE). So far, the selection of light sources (lasers) for a photoacoustic aerosol measuring system has been based only on rule-of-thumb estimates. Recently, we proposed a simulation method that can be used to estimate the accuracy of aerosol source apportionment in the case of a dual-wavelength photoacoustic system (Simon et al., 2017). This simulation is based on the assumption that the atmospheric aerosol load is dominated by two distinct sources, each strongly light absorbing with a specific AAE value. This is a typical scenario, e.g., for urban measurements under wintry conditions, when the dominant aerosol sources are fossil fuel and wood burning with characteristic AAE values of 1 and 2, respectively. The wavelength pair of 405 and 1064 nm was found to be optimal for source apportionment in this case. In the present study we investigated the situation where aerosol components have only slightly different AAE values and searched for a photoacoustic system that is optimal for distinguishing these components. Ajtai, T.; Filep, Á.; Utry, N.; Schnaiter, M.; Linke, C.; Bozóki, Z.; Szabó, G.; Leisner, T. (2011) Journal of Aerosol Science 42, 859-866. Favez, O.; Cachier, H.; Sciare, J.; Sarda-Estève, R.; Martinon, L. (2009) Atmospheric Environment 43
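
    Once the AAE of each source is fixed, the two-source apportionment reduces to a small linear system. A worked sketch with the wavelength pair and AAE values quoted above; the measured absorption values are invented for illustration:

      import numpy as np

      # Two-wavelength, two-source model: measured absorption at each wavelength
      # is the sum of a fossil-fuel and a wood-burning component, each following
      # b(lam) = b(lam0) * (lam / lam0)**(-AAE).
      lam = np.array([405.0, 1064.0])      # nm, the optimal pair quoted above
      aae = np.array([1.0, 2.0])           # fossil fuel, wood burning
      babs = np.array([60.0, 15.0])        # measured absorption, Mm^-1 (made up)

      lam0 = 1064.0
      # Column s: scaling of source s from lam0 to each measurement wavelength
      A = np.column_stack([(lam / lam0) ** (-a) for a in aae])
      b_ff, b_wb = np.linalg.solve(A, babs)
      print(f"at {lam0:.0f} nm: fossil fuel {b_ff:.1f} Mm^-1, wood burning {b_wb:.1f} Mm^-1")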

  14. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    PubMed

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. To address the weaknesses of the genetic algorithm, the proposed algorithm incorporates a simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance optimization performance; in addition, the fitness function and genetic operators are also improved. Finally, the method is applied to chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision under a certain level of noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.
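
    As a reference point for the annealing half of the hybrid, a generic simulated annealing loop is sketched below. The objective, neighbourhood move and cooling schedule are placeholders; the paper's IGSA embeds such a step inside genetic-algorithm generations:

      import math, random

      def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.995, steps=20000):
          # cost: objective to minimise (e.g. forecast error of an expression);
          # neighbour: function returning a random modification of a candidate.
          x, fx = x0, cost(x0)
          best, fbest = x, fx
          t = t0
          for _ in range(steps):
              y = neighbour(x)
              fy = cost(y)
              # Accept improvements always; accept worse moves with
              # Boltzmann probability, which gives the local-search escape
              if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                  x, fx = y, fy
                  if fx < fbest:
                      best, fbest = x, fx
              t *= cooling             # geometric cooling schedule
          return best, fbest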

  16. A method of simulating polarization-sensitive optical coherence tomography based on a polarization-sensitive Monte Carlo program and a sphere cylinder birefringence model

    NASA Astrophysics Data System (ADS)

    Chen, Dongsheng; Zeng, Nan; Liu, Celong; Ma, Hui

    2012-12-01

    In this paper, we present a new method to simulate the signal of polarization-sensitive optical coherence tomography (PS-OCT) using the sphere cylinder birefringence Monte Carlo program developed by our laboratory. Using the program, we can simulate various turbid media based on different optical models and analyze the scattering and polarization information of the simulated media. The detection area, the detection angle range, and the number of scattering events are the three main criteria for screening out the photons that contribute to the PS-OCT signal. In this paper, we study the effects of these three factors on the simulation results and find that the number of scattering events is the main factor affecting the signal, while the detection area and angle range are less important but still necessary conditions. To test and verify the feasibility of our simulation, we use two methods as references. One is the extended Huygens-Fresnel (EHF) method, which is based on electromagnetic theory and can describe both single and multiple scattering of light. By comparing the results obtained from the EHF method with ours, we explore the screening regularities of the photons in the simulation. We also compare our simulation with another polarization-related simulation presented by a Russian group, as well as with our experimental results. Both comparisons show that our simulation is valid for PS-OCT in the superficial depth range and should be further corrected in order to simulate the PS-OCT signal at greater depths.

  17. Jacobian Free-Newton Krylov Discontinuous Galerkin Method and Physics-Based Preconditioning for Nuclear Reactor Simulations

    SciTech Connect

    HyeongKae Park; R. Nourgaliev; Richard C. Martineau; Dana A. Knoll

    2008-09-01

    Multidimensional, higher-order (2nd and higher) numerical methods have come to the forefront in recent years due to significant advances in computer technology and numerical algorithms, and have shown great potential as viable design tools for realistic applications. To achieve this goal, implicit, high-order accurate coupling of the multiphysics simulations is a critical component. One of the issues that arises in multiphysics simulation is the need to resolve multiple time scales. For example, the dynamical time scales of neutron kinetics, fluid dynamics and heat conduction differ significantly (typically by more than 10^10), with the dominant (fastest) physical mode also changing during the course of a transient [Pope and Mousseau, 2007]. This leads to severe time-step restrictions for stability in traditional multiphysics (i.e. operator-split, semi-implicit discretization) simulations. Lower-order methods suffer from undesirable numerical dissipation. Thus an implicit, higher-order accurate scheme is necessary to perform seamlessly coupled multiphysics simulations that can be used to analyze "what-if" regulatory accident scenarios, or to design and optimize engineering systems.
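
    A minimal sketch of the Jacobian-free idea named in the title, using SciPy's matrix-free GMRES. The physics-based preconditioner is omitted and the function names are illustrative:

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def jfnk_solve(F, u0, tol=1e-8, max_newton=20, eps=1e-7):
          # F: residual function F(u) = 0 to be solved; u0: initial guess.
          u = u0.astype(float).copy()
          for _ in range(max_newton):
              r = F(u)
              if np.linalg.norm(r) < tol:
                  break
              # Matrix-free Jacobian-vector product:
              #   J v ~ (F(u + eps v) - F(u)) / eps
              # so the Jacobian matrix is never formed or stored
              Jv = LinearOperator((u.size, u.size),
                                  matvec=lambda v: (F(u + eps * v) - r) / eps)
              du, info = gmres(Jv, -r)    # inner Krylov solve of J du = -r
              u = u + du                  # Newton update (no line search shown)
          return u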

  18. Student perceptions of a simulation-based flipped classroom for the surgery clerkship: A mixed-methods study.

    PubMed

    Liebert, Cara A; Mazer, Laura; Bereknyei Merrell, Sylvia; Lin, Dana T; Lau, James N

    2016-09-01

    The flipped classroom, a blended learning paradigm that uses pre-session online videos reinforced with interactive sessions, has been proposed as an alternative to traditional lectures. This article investigates medical students' perceptions of a simulation-based, flipped classroom for the surgery clerkship and suggests best practices for implementation in this setting. A prospective cohort of students (n = 89), who were enrolled in the surgery clerkship during a 1-year period, was taught via a simulation-based, flipped classroom approach. Students completed an anonymous, end-of-clerkship survey regarding their perceptions of the curriculum. Quantitative analysis of Likert responses and qualitative analysis of narrative responses were performed. Students' perceptions of the curriculum were positive, with 90% rating it excellent or outstanding. The majority reported the curriculum should be continued (95%) and applied to other clerkships (84%). The component received most favorably by the students was the simulation-based skill sessions. Students rated the effectiveness of the Khan Academy-style videos the highest compared with other video formats (P < .001). Qualitative analysis identified 21 subthemes in 4 domains: general positive feedback, educational content, learning environment, and specific benefits to medical students. The students reported that the learning environment fostered accountability and self-directed learning. Specific perceived benefits included preparation for the clinical rotation and the National Board of Medical Examiners shelf exam, decreased class time, socialization with peers, and faculty interaction. Medical students' perceptions of a simulation-based, flipped classroom in the surgery clerkship were overwhelmingly positive. The flipped classroom approach can be applied successfully in a surgery clerkship setting and may offer additional benefits compared with traditional lecture-based curricula. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    NASA Astrophysics Data System (ADS)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of canonical Monte Carlo simulation for Coulomb many-body systems on graphics processing units (GPUs). Our method takes advantage of the GPU's single instruction, multiple data (SIMD) architecture and adopts the sequential updating scheme of the Metropolis algorithm. It makes no approximation in the computation of energy and reaches a remarkable 440-fold speedup compared with the serial implementation on a CPU. We further use this method to simulate primitive-model electrolytes and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of the constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
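
    A serial, didactic version of the sequential Metropolis update described above, with a toy pairwise Coulomb energy under minimum-image periodicity. The paper's GPU code evaluates the energy without approximation and in parallel; this stand-in is only meant to show the acceptance logic:

      import numpy as np

      def metropolis_sweep(pos, charges, box, beta, max_disp, rng):
          # pos: (n, 3) particle positions; one sequential sweep over all particles
          n = len(pos)
          for i in range(n):
              trial = pos[i] + max_disp * rng.uniform(-1, 1, 3)
              trial -= box * np.round(trial / box)           # periodic wrap
              dE = 0.0
              for j in range(n):
                  if j == i:
                      continue
                  r_old = pos[i] - pos[j]
                  r_new = trial - pos[j]
                  r_old -= box * np.round(r_old / box)       # minimum image
                  r_new -= box * np.round(r_new / box)
                  # Toy bare-Coulomb pair energy change (no Ewald correction)
                  dE += charges[i] * charges[j] * (1 / np.linalg.norm(r_new)
                                                   - 1 / np.linalg.norm(r_old))
              # Metropolis acceptance: downhill always, uphill with exp(-beta dE)
              if dE <= 0 or rng.random() < np.exp(-beta * dE):
                  pos[i] = trial
          return pos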

  20. Partially Averaged Navier-Stokes method based on k-ω model for simulating unsteady cavitating flows

    NASA Astrophysics Data System (ADS)

    Hu, C. L.; Wang, G. Y.; Wang, Z. Y.

    2015-01-01

    The turbulence closure is significant for unsteady cavitating flow computations, as the flow is frequently time-dependent and accompanied by vortices at multiple scales. A turbulence bridging model named PANS (partially averaged Navier-Stokes), intended for any filter width, has been developed recently. The model filter width is controlled through two parameters: the unresolved-to-total ratios of kinetic energy fk and dissipation rate fω. In the present paper, the PANS method based on the k-ω model is used to simulate unsteady cavitating flows over a Clark-Y hydrofoil. The main objective of this work is to present the characteristics of the PANS k-ω model and evaluate it against experimental data. The PANS k-ω model is implemented with various filter parameters (fk = 0.2-1, fω = 1/fk). Comparisons with the experimental data show that, with decreasing filter parameter fk, the PANS model can reasonably predict the time evolution of cavity shapes and the fluctuating lift force. As the PANS model with smaller fk overcomes the over-prediction of turbulent kinetic energy of the original k-ω model, the time-averaged eddy viscosity at the rear of the attached cavity decreases and more physical turbulent fluctuations are resolved. Moreover, it is found that the value of ω in the free stream significantly affects the numerical results, such as the time-averaged cavity and the fluctuations of the lift coefficient. With decreasing fk, the sensitivity of the ω-equation to the free stream becomes much weaker.

  1. Dynamic light scattering-based method to determine primary particle size of iron oxide nanoparticles in simulated gastrointestinal fluid.

    PubMed

    Yang, Seung-Chul; Paik, Sae-Yeol-Rim; Ryu, Jina; Choi, Kyeong-Ok; Kang, Tae Seok; Lee, Jong Kwon; Song, Chi Won; Ko, Sanghoon

    2014-10-15

    Simple dynamic light scattering (DLS)-based methodologies were developed to determine the primary particle size distribution of iron oxide particles in simulated gastrointestinal fluid. Iron oxide particles, which easily agglomerate in aqueous media, were converted into dispersed particles by modification of surface charge using citric acid and sodium citrate. After the modification, the zeta-potential value decreased to -40 mV at pH 7. Mean particle diameters in suspensions of iron oxide nano- and microparticles stabilized by the mixture of citric acid and sodium citrate decreased dramatically to 166 and 358 nm, respectively, which were close to the particle size distributions observed in the micrographs. In simulated gastrointestinal fluid, both iron oxide nano- and microparticles were heavily agglomerated, with particle diameters of almost 2600 and 5200 nm, respectively, due to charge shielding on the citrate-modified surface by ions in the media. For determining the primary particle size distribution by the DLS-based approach, the iron oxide particles incubated in the simulated gastrointestinal fluid were converted to monodisperse particles by altering the pH to 7 and eliminating electrolytes. The simple DLS-based methodologies are well suited to determining the primary particle size distribution of mineral nanoparticles under various physical, chemical, and biological conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. A simulation method for the fruitage body

    NASA Astrophysics Data System (ADS)

    Lu, Ling; Song, Weng-lin; Wang, Lei

    2009-07-01

    An effective visual modeling method for creating fruit bodies is presented. According to the geometric shape characteristics of the fruit, we build its surface model based on ellipsoid deformation. The surface model is parameterized by radius: different radii generate the successive surfaces within the fruit, and the same method is used to simulate the internal shape of the fruit. The body model is formed by combining the surface model with the radius direction. Our method can simulate the virtual internal and external structure of a fruit body, while decreasing the amount of data and increasing display speed. In addition, the texture model of the fruit is defined as a sum of different basis functions, which is simple and fast. We show the feasibility of our method by creating a winter jujube and an apricot, each including exocarp, mesocarp and endocarp. The method is useful for developing virtual plants.

  3. An image-based reaction field method for electrostatic interactions in molecular dynamics simulations of aqueous solutions

    NASA Astrophysics Data System (ADS)

    Lin, Yuchun; Baumketner, Andrij; Deng, Shaozhong; Xu, Zhenli; Jacobs, Donald; Cai, Wei

    2009-10-01

    In this paper, a new solvation model is proposed for simulations of biomolecules in aqueous solutions that combines the strengths of explicit and implicit solvent representations. Solute molecules are placed in a spherical cavity filled with explicit water, thus providing microscopic detail where it is most needed. Solvent outside of the cavity is modeled as a dielectric continuum whose effect on the solute is treated through the reaction field corrections. With this explicit/implicit model, the electrostatic potential represents a solute molecule in an infinite bath of solvent, thus avoiding unphysical interactions between periodic images of the solute commonly used in the lattice-sum explicit solvent simulations. For improved computational efficiency, our model employs an accurate and efficient multiple-image charge method to compute reaction fields together with the fast multipole method for the direct Coulomb interactions. To minimize the surface effects, periodic boundary conditions are employed for nonelectrostatic interactions. The proposed model is applied to study liquid water. The effect of model parameters, which include the size of the cavity, the number of image charges used to compute reaction field, and the thickness of the buffer layer, is investigated in comparison with the particle-mesh Ewald simulations as a reference. An optimal set of parameters is obtained that allows for a faithful representation of many structural, dielectric, and dynamic properties of the simulated water, while maintaining manageable computational cost. With controlled and adjustable accuracy of the multiple-image charge representation of the reaction field, it is concluded that the employed model achieves convergence with only one image charge in the case of pure water. Future applications to pKa calculations, conformational sampling of solvated biomolecules and electrolyte solutions are briefly discussed.

  4. A new method for the prediction of chatter stability lobes based on dynamic cutting force simulation model and support vector machine

    NASA Astrophysics Data System (ADS)

    Peng, Chong; Wang, Lun; Liao, T. Warren

    2015-10-01

    Chatter has become a critical factor hindering machining quality and productivity in machining processes. To avoid cutting chatter, a new method based on a dynamic cutting force simulation model and a support vector machine (SVM) is presented for the prediction of chatter stability lobes. The cutting force is selected as the monitoring signal, and wavelet energy entropy theory is used to extract the feature vectors. A support vector machine is constructed using the MATLAB LIBSVM toolbox for pattern classification based on the feature vectors derived from the experimental cutting data. Combined with the dynamic cutting force simulation model, the stability lobe diagram (SLD) can then be estimated. Finally, the predicted results are compared with existing methods such as the zero-order analytical (ZOA) and semi-discretization (SD) methods, as well as actual cutting experimental results, to confirm the validity of this new method.
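
    A hedged sketch of the classification stage: wavelet-energy-entropy features from cutting-force signals fed to an SVM. scikit-learn's SVC, which wraps LIBSVM, stands in for the MATLAB LIBSVM toolbox; the feature definition and training data here are illustrative, not the paper's exact recipe:

      import numpy as np
      import pywt
      from sklearn.svm import SVC

      def wavelet_energy_entropy(signal, wavelet="db4", level=4):
          # Relative wavelet energies per band plus their entropy -- one common
          # reading of 'wavelet energy entropy'; details may differ in the paper
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          energies = np.array([np.sum(c ** 2) for c in coeffs])
          p = energies / energies.sum()
          entropy = -np.sum(p * np.log(p + 1e-12))
          return np.append(p, entropy)

      # Hypothetical training data: cutting-force records with labels
      # 1 = chatter, 0 = stable, taken from instrumented cutting tests
      rng = np.random.default_rng(0)
      signals = rng.standard_normal((40, 1024))      # stand-in force signals
      labels = rng.integers(0, 2, 40)                # stand-in labels

      X = np.array([wavelet_energy_entropy(s) for s in signals])
      clf = SVC(kernel="rbf").fit(X, labels)         # SVC wraps LIBSVM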

  5. Mass Conservation of the Unified Continuous and Discontinuous Element-Based Galerkin Methods on Dynamically Adaptive Grids with Application to Atmospheric Simulations

    DTIC Science & Technology

    2015-09-01

    … that matter. [12] (with a later follow-up in [13]) addressed the conservation issue of the conforming continuous Galerkin method on the cubed-sphere … atmospheric dynamical core on the cubed-sphere grid, in: J. Phys. Conf. Ser., Vol. 78, IOP Publishing, 2007, p. 012074. [13] M. A. Taylor, A. Fournier, A. …

  6. Matrix method for acoustic levitation simulation.

    PubMed

    Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C

    2011-08-01

    A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.
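
    A compact illustration of the Rayleigh-integral building block the matrix method is based on: the pressure radiated by a discretized circular source, summed element by element. Geometry and medium constants are illustrative; the full method additionally assembles the transducer-reflector reflections into matrices:

      import numpy as np

      rho, c, f = 1.2, 343.0, 37.9e3       # air density, sound speed, frequency
      k = 2 * np.pi * f / c                # wavenumber
      u0 = 1.0                             # uniform normal velocity on the face

      # Discretize a circular transducer face of radius a into surface elements
      a, n = 0.01, 40
      xs = np.linspace(-a, a, n)
      X, Y = np.meshgrid(xs, xs)
      mask = X**2 + Y**2 <= a**2
      src = np.column_stack([X[mask], Y[mask], np.zeros(mask.sum())])
      dS = (xs[1] - xs[0]) ** 2            # element area

      def pressure(point):
          r = np.linalg.norm(src - point, axis=1)
          # Rayleigh integral: p = (j rho c k u0 / 2 pi) * sum exp(-j k r) / r * dS
          return 1j * rho * c * k * u0 / (2 * np.pi) * np.sum(np.exp(-1j * k * r) / r) * dS

      print(abs(pressure(np.array([0.0, 0.0, 0.05]))))   # on-axis field point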

  7. Sensitive and rapid reversed-phase liquid chromatography-fluorescence method for determining bisphenol A diglycidyl ether in aqueous-based food simulants.

    PubMed

    Paseiro Losada, P; López Mahía, P; Vázquez Odériz, L; Simal Lozano, J; Simal Gándara, J

    1991-01-01

    A method has been developed for determination of bisphenol A diglycidyl ether (BADGE) in 3 aqueous-based food simulants: water, 15% (v/v) ethanol, and 3% (w/v) acetic acid. BADGE is extracted with C18 cartridges and the extract is concentrated under a stream of nitrogen. BADGE is quantitated by reversed-phase liquid chromatography with fluorescence detection. Relative precision at 200 micrograms/L was 3.4%, the detection limit of the method was 0.1 micrograms/L, and recoveries of spiking concentrations from 1 to 8 micrograms/L were nearly 100%. Relative standard deviations for the method ranged from 3.5 to 5.9%, depending on the identity of the spiked aqueous-based food simulant.

  8. Symplectic partitioned Runge-Kutta method based on the eighth-order nearly analytic discrete operator and its wavefield simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Chao-Yuan; Ma, Xiao; Yang, Lei; Song, Guo-Jie

    2014-03-01

    We propose a symplectic partitioned Runge-Kutta (SPRK) method with eighth-order spatial accuracy based on the extended Hamiltonian system of the acoustic wave equation. Known as the eighth-order NSPRK method, this technique uses an eighth-order accurate nearly analytic discrete (NAD) operator to discretize high-order spatial differential operators and employs a second-order SPRK method to discretize temporal derivatives. The stability criteria and numerical dispersion relations of the eighth-order NSPRK method are given by a semi-analytical method and are tested by numerical experiments. We also show the differences in numerical dispersion between the eighth-order NSPRK method and conventional numerical methods such as the fourth-order NSPRK method, the eighth-order Lax-Wendroff correction (LWC) method and the eighth-order staggered-grid (SG) method. The results show that the ability of the eighth-order NSPRK method to suppress numerical dispersion is clearly superior to that of the conventional numerical methods. In the same computational environment, to eliminate visible numerical dispersion, the eighth-order NSPRK is approximately 2.5 times faster than the fourth-order NSPRK and 3.4 times faster than the fourth-order SPRK, and the memory requirement is only approximately 47.17% of the fourth-order NSPRK method and 49.41% of the fourth-order SPRK method, which indicates the highest computational efficiency. Modeling examples for two-layer heterogeneous and Marmousi models show that the wavefields generated by the eighth-order NSPRK method are very clear with no visible numerical dispersion. These numerical experiments illustrate that the eighth-order NSPRK method can effectively suppress numerical dispersion when coarse grids are adopted. Therefore, this method can greatly decrease computer memory requirements and accelerate forward modeling productivity. In general, the eighth-order NSPRK method has tremendous potential
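
    The temporal half of the scheme is a second-order symplectic partitioned Runge-Kutta step. Shown below on a single harmonic oscillator for clarity, since the spatially discretized acoustic wave equation behaves as a large system of such coupled oscillators; this is not the authors' eighth-order spatial operator:

      import numpy as np

      def sprk2_oscillator(q0, p0, omega, dt, steps):
          # Second-order SPRK (leapfrog / Stormer-Verlet) for the separable
          # Hamiltonian H = p**2 / 2 + omega**2 * q**2 / 2
          q, p = q0, p0
          out = np.empty((steps, 2))
          for n in range(steps):
              p -= 0.5 * dt * omega**2 * q      # half kick
              q += dt * p                       # drift
              p -= 0.5 * dt * omega**2 * q      # half kick
              out[n] = q, p
          return out

      # Symplecticity keeps the discrete energy bounded over long runs, which
      # is why SPRK time stepping suits long wavefield simulations.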

  9. A Monte Carlo method for the simulation of coagulation and nucleation based on weighted particles and the concepts of stochastic resolution and merging

    NASA Astrophysics Data System (ADS)

    Kotalczyk, G.; Kruis, F. E.

    2017-07-01

    Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow a solution framework to be formulated for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented within the scope of a constant-number scheme: low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method, which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in (1) the free-molecular regime or (2) the continuum regime are simulated for this purpose.
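
    One plausible reading of the 'low-weight merging' concept, sketched for scalar particle properties. The pairing rule and the weight-proportional property selection below are assumptions for illustration, not the paper's exact algorithm:

      import numpy as np

      def merge_low_weight(x, w, n_target, rng):
          # Keep the particle count constant after nucleation by repeatedly
          # pairing the two lowest-weight simulation particles and replacing
          # them with a single particle carrying the combined weight
          while len(w) > n_target:
              i, j = np.argsort(w)[:2]          # two lowest-weight particles
              w_new = w[i] + w[j]
              # Retain one of the two properties with probability ~ its weight
              keep = x[i] if rng.random() < w[i] / w_new else x[j]
              x = np.delete(x, [i, j])
              w = np.delete(w, [i, j])
              x = np.append(x, keep)
              w = np.append(w, w_new)
          return x, w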

  10. Evaluation of a transient, simultaneous, arbitrary Lagrange-Euler based multi-physics method for simulating the mitral heart valve.

    PubMed

    Espino, Daniel M; Shepherd, Duncan E T; Hukins, David W L

    2014-01-01

    A transient multi-physics model of the mitral heart valve has been developed, which allows simultaneous calculation of fluid flow and structural deformation. A recently developed contact method has been applied to enable simulation of systole (the stage when blood pressure is elevated within the heart to pump blood to the body). The geometry was simplified to represent the mitral valve within the heart walls in two dimensions. Only the mitral valve undergoes deformation. A moving arbitrary Lagrange-Euler mesh is used to allow true fluid-structure interaction (FSI). The FSI model requires blood flow to induce valve closure by inducing strains in the region of 10-20%. Model predictions were found to be consistent with existing literature and will undergo further development.

  11. Implicit temperature-correction-based immersed-boundary thermal lattice Boltzmann method for the simulation of natural convection.

    PubMed

    Seta, Takeshi

    2013-06-01

    In the present paper, we apply the implicit-correction method to the immersed-boundary thermal lattice Boltzmann method (IB-TLBM) for the natural convection between two concentric horizontal cylinders and in a square enclosure containing a circular cylinder. The Chapman-Enskog multiscale expansion proves the existence of an extra term in the temperature equation arising from the source term of the kinetic equation. In order to eliminate the extra term, we redefine the temperature and the source term in the lattice Boltzmann equation. When the relaxation time is less than unity, the new definition of the temperature and source term enhances the accuracy of the thermal lattice Boltzmann method. The implicit-correction method is required in order to calculate the thermal interaction between a fluid and a rigid solid using the redefined temperature. Simulation of the heat conduction between two concentric cylinders indicates that the error at each boundary point of the proposed IB-TLBM is reduced by increasing the number of Lagrangian points constituting the boundaries. We derive the theoretical relation between a temperature slip at the boundary and the relaxation time, and demonstrate that the IB-TLBM requires a small relaxation time in order to avoid temperature distortion around the immersed boundary. The streamlines, isotherms, and average Nusselt numbers calculated by the proposed method agree well with those of previous numerical studies involving natural convection. The proposed IB-TLBM improves the accuracy of the boundary conditions for the temperature and velocity by using an adequate discrete area for each of the Lagrangian nodes, and reduces streamline penetration at the surface of the body.

  12. A New Combined Stepwise-Based High-Order Decoupled Direct and Reduced-Form Method To Improve Uncertainty Analysis in PM2.5 Simulations.

    PubMed

    Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Yuan, Zibing; Russell, Armistead G; Ou, Jiamin; Zhong, Zhuangmin

    2017-04-04

    The traditional reduced-form model (RFM), based on the high-order decoupled direct method (HDDM), is an efficient uncertainty analysis approach for air quality models, but it has large biases in uncertainty propagation due to the limitation of the HDDM in predicting nonlinear responses to large perturbations of model inputs. To overcome this limitation, a new stepwise RFM that combines several sets of local sensitivity coefficients under different conditions is proposed. Evaluations reveal that the new RFM improves the prediction of nonlinear responses. The new method is applied to quantify uncertainties in simulated PM2.5 concentrations in the Pearl River Delta (PRD) region of China as a case study. Results show that the average uncertainty range of hourly PM2.5 concentrations is -28% to 57%, which covers approximately 70% of the observed PM2.5 concentrations, while the traditional RFM underestimates the upper bound of the uncertainty range by 1-6%. Using a variance-based method, the PM2.5 boundary conditions and primary PM2.5 emissions are found to be the two major uncertainty sources in PM2.5 simulations. The new RFM better quantifies the uncertainty range in model simulations and can be applied to improve applications that rely on uncertainty information.

  13. Including anatomical and functional information in MC simulation of PET and SPECT brain studies. Brain-VISET: a voxel-based iterative method.

    PubMed

    Marti-Fuster, Berta; Esteban, Oscar; Thielemans, Kris; Setoain, Xavier; Santos, Andres; Ros, Domenec; Pavia, Javier

    2014-10-01

    Monte Carlo (MC) simulation provides a flexible and robust framework to efficiently evaluate and optimize image processing methods in emission tomography. In this work we present Brain-VISET (Voxel-based Iterative Simulation for Emission Tomography), a method that aims to simulate realistic [99mTc]-SPECT and [18F]-PET brain databases by including anatomical and functional information. To this end, activity and attenuation maps generated using high-resolution anatomical images from patients were used as input maps in an MC projector to simulate SPECT or PET sinograms. The reconstructed images were compared with the corresponding real SPECT or PET studies in an iterative process in which the activity input maps were modified at each iteration. Datasets of 30 refractory epileptic patients were used to assess the new method. Each set consisted of structural images (MRI and CT) and functional studies (SPECT and PET), thereby allowing the inclusion of anatomical and functional variability in the simulation input models. SPECT and PET sinograms were obtained using the SimSET package and were reconstructed with the same protocols as those employed for the clinical studies. The convergence of Brain-VISET was evaluated by studying, across iterations, the behavior of the correlation coefficient, the quotient image histogram, and an ROI analysis comparing simulated with real studies. The realism of the generated maps was also evaluated. Our findings show that Brain-VISET is able to generate realistic SPECT and PET studies and that four iterations are sufficient to guarantee good agreement between simulated and real studies.

  14. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region.

    PubMed

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing, there exist uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to obtain by observation were optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm improved the simulations of soil moisture and latent heat flux in all simulation tests, and differences between simulated results and observational data were clearly reduced; however, adopting the optimized parameters could not simultaneously improve the simulation results for net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on different datasets have the same order of magnitude but are not identical; soil parameters vary only to a small degree, while the variation range of vegetation parameters is large.
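
    A minimal global-best PSO loop of the kind used for the calibration described above. The inertia and acceleration coefficients are typical textbook values, and the cost function would wrap a SHAW run scored against observed soil moisture (e.g. by RMSE):

      import numpy as np

      def pso_minimise(cost, bounds, n_particles=30, iters=200,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
          # bounds: (lower, upper) arrays, one entry per calibrated parameter
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, len(lo)))    # positions
          v = np.zeros_like(x)                               # velocities
          pbest = x.copy()
          pbest_f = np.array([cost(xi) for xi in x])
          g = pbest[np.argmin(pbest_f)].copy()               # global best
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              # Inertia + cognitive pull (own best) + social pull (swarm best)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)                     # stay in bounds
              f = np.array([cost(xi) for xi in x])
              improved = f < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], f[improved]
              g = pbest[np.argmin(pbest_f)].copy()
          return g, pbest_f.min()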

  15. Binding mechanism of CDK5 with roscovitine derivatives based on molecular dynamics simulations and MM/PBSA methods.

    PubMed

    Dong, Keke; Wang, Xuan; Yang, Xueyu; Zhu, Xiaolei

    2016-07-01

    Roscovitine derivatives are potent inhibitors of cyclin-dependent kinase 5 (CDK5), but they exhibit different activities, and the reason for this has not yet been clearly understood. Drug design is made more difficult by this unclear binding mechanism. In this context, molecular docking, molecular dynamics (MD) simulation, and binding free energy analysis are applied to investigate and reveal the detailed binding mechanism of four roscovitine derivatives with CDK5. The electrostatic and van der Waals interactions of the four inhibitors with CDK5 are analyzed and discussed. The binding free energies calculated with the MM-PBSA method are consistent with the experimental ranking of effectiveness for the four inhibitors. Hydrogen bonds of the inhibitors with Cys83 and Lys33 stabilize the inhibitors in the binding site. The van der Waals interactions, especially the pivotal contacts with Ile10 and Leu133, make larger contributions to the binding free energy and play critical roles in distinguishing the variant bioactivity of the four inhibitors. Based on the binding mechanism of the four inhibitors with CDK5 and the energy contributions of the fragments of each inhibitor, two new CDK5 inhibitors with stronger predicted inhibitory potency are designed. Copyright © 2016 Elsevier Inc. All rights reserved.
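
    For reference, the MM-PBSA estimate referred to above combines gas-phase molecular mechanics terms with implicit-solvation terms, averaged over MD snapshots. In the standard textbook form (notation ours, not reproduced from the paper):

```latex
\Delta G_{\mathrm{bind}} \;\approx\;
\underbrace{\Delta E_{\mathrm{ele}} + \Delta E_{\mathrm{vdW}}}_{\Delta E_{\mathrm{MM}}}
\;+\;
\underbrace{\Delta G_{\mathrm{PB}} + \Delta G_{\mathrm{SA}}}_{\Delta G_{\mathrm{solv}}}
\;-\; T\Delta S
```

    Here the polar solvation term comes from solving the Poisson-Boltzmann equation and the nonpolar term from the solvent-accessible surface area; the entropic term is often estimated by normal-mode analysis or omitted when only relative rankings are needed.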

  16. Three-Dimensional Carotid Plaque Progression Simulation Using Meshless Generalized Finite Difference Method Based on Multi-Year MRI Patient-Tracking Data

    PubMed Central

    Yang, Chun; Tang, Dalin; Atluri, Satya

    2010-01-01

    Cardiovascular disease (CVD) is becoming the number one cause of death worldwide. Atherosclerotic plaque rupture and progression are closely related to the most severe cardiovascular syndromes, such as heart attack and stroke. The mechanisms governing plaque rupture and progression are not well understood. A computational procedure based on a three-dimensional meshless generalized finite difference (MGFD) method and serial magnetic resonance imaging (MRI) data was introduced to quantify patient-specific carotid atherosclerotic plaque growth functions and simulate plaque progression. Participating patients were scanned three times (T1, T2, and T3, at intervals of about 18 months) to obtain plaque progression data. Vessel wall thickness (WT) changes were used as the measure of plaque progression. Since the data available with current technology are insufficient to quantify individual plaque component growth, the whole plaque was assumed to be uniform, homogeneous, isotropic, linear, and nearly incompressible, and a linear elastic model was used. The 3D plaque model was discretized and solved using a meshless generalized finite difference (GFD) method. Four growth functions with different combinations of wall thickness, stress, and neighboring-point terms were introduced to predict future plaque growth based on previous time-point data. Starting from the T2 plaque geometry, plaque progression was simulated by solving the solid model and adjusting wall thickness using the plaque growth functions iteratively until T3 was reached. Numerically simulated plaque progression agreed very well with the target T3 plaque geometry, with errors of 11.56%, 6.39%, 8.24%, and 4.45% for the four growth functions, respectively. We believe this is the first time 3D plaque progression simulation based on multi-year patient-tracking data has been reported. Serial MRI-based progression simulation adds a time dimension to plaque vulnerability assessment and will improve prediction accuracy for potential plaque rupture.

  17. Epistemology of knowledge based simulation

    SciTech Connect

    Reddy, R.

    1987-04-01

    Combining artificial intelligence concepts with traditional simulation methodologies yields a powerful design support tool known as knowledge based simulation. This approach turns a descriptive simulation tool into a prescriptive one, which recommends specific goals. Much work remains to be done in the areas of general goal processing and explanation of recommendations.

  18. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index, and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built, and a multilayer perceptron model based on the neural network approach was implemented; the two models were then compared. The results revealed that the proposed approach is practicable for optimizing soil sampling schemes. The optimal configuration captured soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining sampling configurations and mapping the spatial distribution of soil organic matter with low cost and high efficiency.
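
    The core of such an optimization is a standard simulated annealing loop over candidate sampling configurations. The Python sketch below is generic and illustrative only; the neighbor move (e.g., relocating one sample site to another candidate location along the road network) and the cost function are placeholders the reader would supply, not details taken from the paper.

```python
import math, random

def simulated_annealing(initial, neighbor, cost, t0=1.0, cooling=0.995, n_iters=5000):
    """Generic simulated annealing loop for sampling-scheme optimization.

    neighbor : proposes a modified configuration (user-supplied move).
    cost     : scalar objective, e.g. a kriging-variance or coverage criterion.
    """
    state, c = initial, cost(initial)
    best, best_c = state, c
    t = t0
    for _ in range(n_iters):
        cand = neighbor(state)
        cc = cost(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability,
        # which shrinks as the temperature t decreases.
        if cc < c or random.random() < math.exp(-(cc - c) / t):
            state, c = cand, cc
            if c < best_c:
                best, best_c = state, c
        t *= cooling   # geometric cooling schedule
    return best, best_c
```

    The acceptance of occasional uphill moves is what lets the search escape locally optimal sampling layouts before the temperature drops.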

  19. Simulations of light scattering spectra of a nanoshell on plane interface based on the discrete sources method

    NASA Astrophysics Data System (ADS)

    Eremina, Elena; Eremin, Yuri; Wriedt, Thomas

    2006-11-01

    The resonance properties of nanoshells are of great interest in nanosensing applications such as surface-enhanced Raman scattering or biological sensing. In this paper the discrete sources method is applied to analyze the spectrum of evanescent light scattering from a nanoshell particle deposited near a plane surface. Based on a rigorous theoretical model, which takes into account all features of the scattering problem (a medium with frequency dispersion, the presence of the interface, the objective aperture and its location, and core-shell asphericity), the scattering spectrum of nanoshells was calculated. The dependence of the local nanoshell spectral density behavior on the particle's properties is discussed.

  20. Simulation-based surgical education.

    PubMed

    Evgeniou, Evgenios; Loizou, Peter

    2013-09-01

    The reduction in time for training at the workplace has created a challenge for the traditional apprenticeship model of training. Simulation offers the opportunity for repeated practice in a safe and controlled environment, focusing on trainees and tailored to their needs. Recent technological advances have led to the development of various simulators, which have already been introduced in surgical training. The complexity and fidelity of the available simulators vary; therefore, depending on our resources, we should select the appropriate simulator for the task or skill we want to teach. Educational theory informs us about the importance of context in professional learning. Simulation should therefore recreate the clinical environment and its complexity. Contemporary approaches to simulation have introduced novel ideas for teaching teamwork, communication skills and professionalism. In order for simulation-based training to be successful, simulators have to be validated appropriately and integrated in a training curriculum. Within a surgical curriculum, trainees should have protected time for simulation-based training, under appropriate supervision. Simulation-based surgical education should allow the appropriate practice of technical skills without ignoring the clinical context and must strike an adequate balance between the simulation environment and simulators.

  1. Influence of River Bed Elevation Survey Configurations and Interpolation Methods on the Accuracy of LIDAR DTM-Based River Flow Simulations

    NASA Astrophysics Data System (ADS)

    Santillan, J. R.; Serviano, J. L.; Makinano-Santillan, M.; Marqueso, J. T.

    2016-09-01

    In this paper, we investigated how the survey configuration and the type of interpolation method affect the accuracy of river flow simulations that utilize a LIDAR DTM integrated with an interpolated river bed as the main source of topographic information. Aside from determining the accuracy of the individually generated river bed topographies, we also assessed the overall accuracy of the river flow simulations in terms of maximum flood depth and extent. Four survey configurations consisting of river bed elevation data points arranged as cross-section (XS), zig-zag (ZZ), river banks-centerline (RBCL), and river banks-centerline-zig-zag (RBCLZZ), and two interpolation methods (Inverse Distance-Weighted and Ordinary Kriging) were considered. Major results show that the choice of survey configuration, rather than the interpolation method, has a significant effect on the accuracy of interpolated river bed surfaces, and subsequently on the accuracy of river flow simulations. The RMSEs of the interpolated surfaces and the model results vary from one configuration to another, and depend on how evenly each configuration collects river bed elevation data points. The large RMSEs for the RBCL configuration and the low RMSEs for the XS configuration confirm that as the data points become evenly spaced and cover more portions of the river, the resulting interpolated surface, and the river flow simulation in which it is used, also become more accurate. The XS configuration with Ordinary Kriging (OK) as the interpolation method provided the best river bed interpolation and river flow simulation results. The RBCL configuration, regardless of the interpolation algorithm used, resulted in the least accurate river bed surfaces and simulation results. Based on the accuracy analysis, collecting river bed data points in the XS configuration and applying the OK method to interpolate the river bed topography are the best choices for producing satisfactory river flow simulation outputs. The use of
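
    Of the two interpolation methods compared, Inverse Distance Weighting is the simpler to state: each unsampled point receives a weighted mean of the surveyed elevations, with weights decaying as a power of distance. The NumPy sketch below is a minimal, illustrative implementation (function name and array layout are ours), not the workflow used in the paper.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance-Weighted interpolation of river bed elevations.

    xy_known : (n, 2) coordinates of surveyed points; z_known : (n,) elevations.
    xy_query : (m, 2) points to interpolate. Weights are 1 / d**power, so
    nearer survey points dominate the estimate.
    """
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # eps avoids division by zero at data points
    return (w * z_known).sum(axis=1) / w.sum(axis=1)
```

    Ordinary Kriging replaces these purely geometric weights with weights derived from a fitted variogram, which is why it can outperform IDW when the spatial correlation structure of the bed is well captured.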

  2. A Lattice Boltzmann Method for Turbomachinery Simulations

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Lopez, I.

    2003-01-01

    The Lattice Boltzmann (LB) method is a relatively new method for flow simulations. Its starting point is statistical mechanics and the Boltzmann equation: the LB method sets up its model at the molecular scale and simulates the flow at the macroscopic scale. LBM has mostly been applied to incompressible flows and simple geometries.
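
    To make the molecular-to-macroscopic idea concrete, the following Python sketch shows one streaming-and-collision step of a standard D2Q9 BGK lattice Boltzmann scheme on a periodic grid. This is a textbook formulation offered for orientation; it is not taken from the report.

```python
import numpy as np

# D2Q9 lattice: nine discrete velocities and their weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def lbm_step(f, tau):
    """One BGK lattice Boltzmann step: streaming followed by collision.

    f : (9, nx, ny) particle populations; tau : relaxation time.
    """
    # Streaming: shift each population along its lattice velocity (periodic).
    for i, ci in enumerate(c):
        f[i] = np.roll(np.roll(f[i], ci[0], axis=0), ci[1], axis=1)
    # Macroscopic moments: density and velocity.
    rho = f.sum(axis=0)
    u = np.einsum('iab,ix->xab', f, c) / rho
    # Equilibrium populations (second-order low-Mach expansion).
    cu = np.einsum('ix,xab->iab', c, u)
    usq = (u**2).sum(axis=0)
    feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    # BGK collision: relax toward equilibrium with relaxation time tau.
    return f - (f - feq) / tau
```

    The fluid viscosity is set implicitly by tau, which is one reason the method handles incompressible flows so conveniently.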

  3. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is the most computationally expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
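
    The time-splitting pattern generalizes well beyond TaRSE. The 1-D Python sketch below (ours, with periodic boundaries and first-order upwind advection standing in for the Godunov step) shows the structure of several explicit advective substeps followed by one implicit dispersive step over the same interval.

```python
import numpy as np

def transport_step(u, vel, D, dx, dt, n_adv_substeps=4):
    """Time-split advection-dispersion step in 1-D (illustrative sketch).

    Several explicit upwind advection substeps are followed by a single
    backward-Euler diffusion solve spanning the same time interval.
    """
    # Explicit first-order upwind advection, several small substeps.
    dta = dt / n_adv_substeps
    for _ in range(n_adv_substeps):
        flux = vel * (u if vel > 0 else np.roll(u, -1))   # upwind flux
        u = u - dta / dx * (flux - np.roll(flux, 1))
    # One implicit (backward Euler) diffusion step over the full dt.
    n = u.size
    r = D * dt / dx**2
    A = np.eye(n) * (1 + 2*r) - r * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = -r        # periodic boundaries for simplicity
    return np.linalg.solve(A, u)
```

    Because the implicit solve is unconditionally stable, only the cheap advective substeps are bound by a CFL-type restriction, which is the efficiency argument made above.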

  4. Spectral simulation methods for enhancing qualitative and quantitative analyses based on infrared spectroscopy and quantitative calibration methods for passive infrared remote sensing of volatile organic compounds

    NASA Astrophysics Data System (ADS)

    Sulub, Yusuf Ismail

    Infrared spectroscopy (IR) has over the years found a myriad of applications, including passive environmental remote sensing of toxic pollutants and the development of a blood glucose sensor. In this dissertation, the capabilities of both these applications are further enhanced with data analysis strategies employing digital signal processing and novel simulation approaches. Both quantitative and qualitative determinations of volatile organic compounds are investigated in the passive IR remote sensing research described in this dissertation. In the quantitative work, partial least-squares (PLS) regression analysis is used to generate multivariate calibration models for passive Fourier transform IR remote sensing measurements of open-air generated vapors of ethanol in the presence of methanol as an interfering species. A step-wise co-addition scheme coupled with a digital filtering approach is used to attenuate the effects of variation in optical path length or plume width. For the qualitative study, an IR imaging line scanner is used to acquire remote sensing data in both the spatial and spectral domains. This technology is capable of not only identifying but also locating the sample under investigation. Successful implementation of this methodology is hampered by the huge costs incurred in conducting these experiments and the impracticality of acquiring large amounts of representative training data. To address this problem, a novel simulation approach is developed that generates training data based on synthetic analyte-active and measured analyte-inactive data. Subsequently, automated pattern classifiers are generated using piecewise linear discriminant analysis to predict the presence of the analyte signature in measured imaging data acquired in remote sensing applications. Near-infrared glucose determinations based on the region of 5000-4000 cm(-1) are the focus of the research in the latter part of this dissertation. A six-component aqueous matrix of glucose

  5. 3-D simulation of soot formation in a direct-injection diesel engine based on a comprehensive chemical mechanism and method of moments

    NASA Astrophysics Data System (ADS)

    Zhong, Bei-Jing; Dang, Shuai; Song, Ya-Na; Gong, Jing-Song

    2012-02-01

    Here, we propose both a comprehensive chemical mechanism and a reduced mechanism for a three-dimensional combustion simulation describing the formation of polycyclic aromatic hydrocarbons (PAHs) in a direct-injection diesel engine. A soot model based on the reduced mechanism and a method of moments is also presented. The turbulent diffusion flame and PAH formation in the diesel engine were modelled using the reduced mechanism, derived from the detailed mechanism, with a fixed wall temperature as a boundary condition. The spatial distribution of PAH concentrations and the characteristic parameters for soot formation in the engine cylinder were obtained by coupling a detailed chemical kinetic model with the three-dimensional computational fluid dynamics (CFD) model. Comparison of the simulated results with limited experimental data shows that the chemical mechanisms and soot model are realistic and correctly describe the basic physics of diesel combustion, but require further development to improve their accuracy.

  6. Development of an Output-based Adaptive Method for Multi-Dimensional Euler and Navier-Stokes Simulations

    NASA Technical Reports Server (NTRS)

    Darmofal, David L.

    2003-01-01

    The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals), such as lift or drag, from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.

  8. Investigating internal architecture effect in plastic deformation and failure for TPMS-based scaffolds using simulation methods and experimental procedure.

    PubMed

    Kadkhodapour, J; Montazerian, H; Raeisi, S

    2014-10-01

    Rapid prototyping (RP) has been a promising technique for producing tissue engineering scaffolds that mimic the behavior of the host tissue as closely as possible. Biodegradability and the feasibility of cell growth and migration, along with mechanical properties such as strength and energy absorption, have to be considered in the design procedure. In order to study the effect of internal architecture on plastic deformation and failure pattern, architectures based on triply periodic minimal surfaces (TPMS), which have been observed in nature, were used. P and D surfaces at 30% and 60% volume fractions were modeled with 3x3x3 unit cells and imported to an Objet EDEN 260 3-D printer. The models were printed in VeroBlue FullCure 840 photopolymer resin. Mechanical compression tests were performed to investigate the compressive behavior of the scaffolds. The deformation process and stress-strain curves were simulated by FEA and exhibited good agreement with the experimental observations. Current approaches for predicting the dominant deformation mode under compression, including Maxwell's criteria and scaling laws, were also investigated to achieve an understanding of the relationships between the deformation pattern and the mechanical properties of porous structures. It was observed that the stress concentration in TPMS-based scaffolds resulting from heterogeneous mass distribution, particularly at lower volume fractions, led to behavior different from that of typical cellular materials. As a result, although more parameters are considered for determining the dominant deformation mode in scaling laws, the two approaches cannot exclusively be used to compare the mechanical response of cellular materials at the same volume fraction.

  9. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    SciTech Connect

    Boring, Ronald Laurids; Shirley, Rachel Elizabeth; Joe, Jeffrey Clark; Mandelli, Diego

    2014-12-01

    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  10. Simulation-based medical teaching and learning.

    PubMed

    Al-Elq, Abdulmohsen H

    2010-01-01

    One of the most important steps in curriculum development is the introduction of simulation-based medical teaching and learning. Simulation is a generic term that refers to an artificial representation of a real-world process used to achieve educational goals through experiential learning. Simulation-based medical education is defined as any educational activity that utilizes simulation aides to replicate clinical scenarios. Although medical simulation is relatively new, simulation has been used for a long time in other high-risk professions such as aviation. Medical simulation allows the acquisition of clinical skills through deliberate practice rather than an apprentice style of learning. Simulation tools serve as an alternative to real patients. A trainee can make mistakes and learn from them without the fear of harming the patient. There are different types and classifications of simulators, and their costs vary according to the degree of their resemblance to reality, or 'fidelity'. Simulation-based learning is expensive. However, it is cost-effective if utilized properly. Medical simulation has been found to enhance clinical competence at the undergraduate and postgraduate levels. It has also been found to have many advantages that can improve patient safety and reduce health care costs through the improvement of the medical provider's competencies. The objective of this narrative review article is to highlight the importance of simulation as a new teaching method in undergraduate and postgraduate education.

  11. Multigrid methods with applications to reservoir simulation

    SciTech Connect

    Xiao, Shengyou

    1994-05-01

    Multigrid methods are studied for solving elliptic partial differential equations. The focus is on parallel multigrid methods and their use for reservoir simulation. Multicolor Fourier analysis is used to analyze the behavior of standard multigrid methods for problems in one and two dimensions, and the relation between multicolor and standard Fourier analysis is established. Multiple coarse grid methods for solving model problems in one and two dimensions are considered; at each coarse grid level, more than one coarse grid is used to improve convergence. For a given Dirichlet problem, a related extended problem is first constructed; a purification procedure can be used to obtain Moore-Penrose solutions of the singular systems encountered. For solving anisotropic equations, semicoarsening and line smoothing techniques are used with multiple coarse grid methods to improve convergence. Two-level convergence factors are estimated using multicolor Fourier analysis. In the case where each operator has the same stencil at each grid point on one level, exact multilevel convergence factors can be obtained. For solving partial differential equations with discontinuous coefficients, the interpolation and restriction operators should include information about the equation coefficients; matrix-dependent interpolation and restriction operators based on the Schur complement can be used in nonsymmetric cases. A semicoarsening multigrid solver with these operators is used in UTCOMP, a 3-D, multiphase, multicomponent, compositional reservoir simulator. The numerical experiments were carried out on different computing systems. Results indicate that the multigrid methods are promising.
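
    As a concrete reminder of how the smoothing and coarse-grid-correction steps interlock, here is a minimal recursive V-cycle for the 1-D Poisson problem -u'' = f (grid size 2^k + 1 assumed). It is a textbook sketch in Python for orientation only and contains none of the parallel or matrix-dependent machinery studied in the report.

```python
import numpy as np

def v_cycle(u, f, h, n_smooth=3):
    """One multigrid V-cycle for -u'' = f on a 1-D grid of size 2^k + 1.

    Weighted-Jacobi smoothing, full-weighting restriction, linear
    interpolation; the recursion bottoms out on a 3-point grid.
    """
    def smooth(u, f, h, iters):
        for _ in range(iters):   # weighted Jacobi, omega = 2/3
            u[1:-1] += (2/3) * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1] - 2*u[1:-1])
        return u
    u = smooth(u, f, h, n_smooth)          # pre-smoothing
    if u.size <= 3:
        return u                           # coarsest grid: smoothing suffices here
    # Residual r = f - A u, restricted to the coarse grid by full weighting.
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2*u[1:-1] + u[2:]) / (h*h)
    rc = np.zeros((u.size - 1)//2 + 1)
    rc[1:-1] = 0.25*r[1:-3:2] + 0.5*r[2:-2:2] + 0.25*r[3:-1:2]
    # Solve the coarse error equation recursively, interpolate, correct.
    ec = v_cycle(np.zeros_like(rc), rc, 2*h, n_smooth)
    e = np.zeros_like(u)
    e[::2] = ec
    e[1::2] = 0.5*(ec[:-1] + ec[1:])
    return smooth(u + e, f, h, n_smooth)   # post-smoothing
```

    The parallel and multiple-coarse-grid variants discussed in the report change how the coarse problems are formed and distributed, but the smoothing/correction skeleton remains the one shown here.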

  12. A heterogeneous graph-based recommendation simulator

    SciTech Connect

    Yeonchan, Ahn; Sungchan, Park; Lee, Matt Sangkeun; Sang-goo, Lee

    2013-01-01

    Heterogeneous graph-based recommendation frameworks are flexible in that they can incorporate various recommendation algorithms and various kinds of information to produce better results. In this demonstration, we present a heterogeneous graph-based recommendation simulator which enables participants to experience the flexibility of a heterogeneous graph-based recommendation method. With our system, participants can simulate various recommendation semantics by expressing the semantics via meaningful paths like User-Movie-User-Movie. The simulator then returns the recommendation results on the fly, based on the user-customized semantics, using a fast Monte Carlo algorithm.
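
    A meta-path like User-Movie-User-Movie can be evaluated by Monte Carlo random walks: sample walks that follow the node-type sequence and rank the end nodes by visit frequency. The Python sketch below illustrates that idea under an assumed (hypothetical) adjacency layout; it is not the demonstrated system's algorithm.

```python
import random
from collections import Counter

def metapath_recommend(graph, start_user,
                       metapath=("user", "movie", "user", "movie"),
                       n_walks=1000, seed=0):
    """Monte Carlo meta-path recommendation on a heterogeneous graph.

    graph : hypothetical layout mapping (node_type, node_id) to a dict of
            neighbor lists per type, e.g. graph[("user", 42)]["movie"] = [3, 17].
    Walks following the meta-path are sampled; end nodes are ranked by visits.
    """
    rng = random.Random(seed)
    hits = Counter()
    for _ in range(n_walks):
        node, ok = start_user, True
        for src_type, dst_type in zip(metapath, metapath[1:]):
            neighbors = graph.get((src_type, node), {}).get(dst_type, [])
            if not neighbors:
                ok = False          # walk dead-ends: discard it
                break
            node = rng.choice(neighbors)
        if ok:
            hits[node] += 1
    return hits.most_common(10)     # top-10 recommended items
```

    Changing the meta-path string changes the recommendation semantics without touching the walking machinery, which is the flexibility the demonstration highlights.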

  13. A comparison of model-based imputation methods for handling missing predictor values in a linear regression model: A simulation study

    NASA Astrophysics Data System (ADS)

    Hasan, Haliza; Ahmad, Sanizah; Osman, Balkish Mohd; Sapri, Shamsiah; Othman, Nadirah

    2017-08-01

    In regression analysis, missing covariate data is a common problem. Many researchers use ad hoc methods to overcome it because they are easy to implement; however, these methods require assumptions about the data that rarely hold in practice. Model-based methods, such as maximum likelihood (ML) using the expectation-maximization (EM) algorithm and multiple imputation (MI), are more promising when dealing with the difficulties caused by missing data. Then again, inappropriate methods of missing value imputation can lead to serious bias that severely affects the parameter estimates. The main objective of this study is to provide a better understanding of the missing-data concept to assist researchers in selecting appropriate imputation methods. A simulation study was performed to assess the effects of different missing-data techniques on the performance of a regression model. The covariate data were generated from an underlying multivariate normal distribution, and the dependent variable was generated as a combination of the explanatory variables. Missing values in a covariate were simulated using the missing at random (MAR) mechanism, with four levels of missingness imposed (10%, 20%, 30% and 40%). The ML and MI techniques available within SAS software were investigated. A linear regression model was fitted, and the model performance measures, MSE and R-squared, were obtained. Results of the analysis showed that MI is superior in handling missing data, with the highest R-squared and lowest MSE, when the percentage of missingness is less than 30%. Both methods are unable to handle levels of missingness larger than 30%.
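
    The MAR mechanism used in such simulations makes the probability of missingness depend only on observed values, never on the missing values themselves. The NumPy sketch below (ours, illustrative; the column roles and logistic form are assumptions, not the study's exact design) imposes MAR missingness on one covariate.

```python
import numpy as np

def impose_mar(X, miss_col=0, driver_col=1, rate=0.3, seed=0):
    """Impose missing-at-random (MAR) values on one covariate.

    The probability that X[:, miss_col] is missing depends, via a logistic
    model, on the fully observed X[:, driver_col] -- not on the values that
    go missing, which is what distinguishes MAR from MNAR.
    """
    rng = np.random.default_rng(seed)
    z = (X[:, driver_col] - X[:, driver_col].mean()) / X[:, driver_col].std()
    p = 1.0 / (1.0 + np.exp(-z))               # larger driver -> more likely missing
    p = np.clip(p * (rate / p.mean()), 0.0, 1.0)  # rescale to the target rate
    X = X.copy()
    X[rng.random(X.shape[0]) < p, miss_col] = np.nan
    return X
```

    Varying `rate` over 0.1 to 0.4 reproduces the four missingness levels examined in the study.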

  14. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language; therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high-capacity data manipulation required by the most complex real time models.

  16. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations.

    PubMed

    Tučník, Petr; Bureš, Vladimír

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting, and with all configurations tested separately with the -server parameter activated and deactivated, altogether 12,800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used which allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all the methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
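
    Of the four methods compared, TOPSIS is the most compact to state: alternatives are ranked by their relative closeness to an ideal solution and distance from an anti-ideal one. The NumPy sketch below is a standard formulation offered for illustration (array layout and names are ours, and VIKOR, the study's overall winner, follows a different aggregation):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """TOPSIS ranking: closeness to the ideal vs. the anti-ideal alternative.

    matrix  : (n_alternatives, n_criteria) decision matrix.
    weights : criteria weights summing to 1.
    benefit : boolean array, True where larger criterion values are better.
    """
    # Vector-normalize each criterion column, then apply the weights.
    v = matrix / np.linalg.norm(matrix, axis=0) * weights
    # Ideal and anti-ideal points: best/worst value per criterion.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)    # distance to ideal
    d_minus = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal
    return d_minus / (d_plus + d_minus)           # higher score = better
```

    In an agent-based setting, each of the thousands of agents would evaluate such a score vector at every decision step, which is why the per-call cost differences measured in the study matter.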

  17. Evaluation of six scatter correction methods based on spectral analysis in (99m)Tc SPECT imaging using SIMIND Monte Carlo simulation.

    PubMed

    Asl, Mahsa Noori; Sadremomtaz, Alireza; Bitarafan-Rajabi, Ahmad

    2013-10-01

    Compton-scattered photons included within the photopeak pulse-height window degrade SPECT images both qualitatively and quantitatively. The purpose of this study is to evaluate and compare six scatter correction methods based on setting energy windows in the (99m)Tc spectrum. SIMIND Monte Carlo simulation is used to generate the projection images from a cold-sphere hot-background phantom. For the evaluation of the different scatter correction methods, three assessment criteria are considered: image contrast, signal-to-noise ratio (SNR), and relative noise of the background (RNB). Except for the dual-photopeak window (DPW) method, the image contrast of the five cold spheres is improved in the range of 2.7-26%. Among the methods considered, two show a nonuniform correction performance. The RNB for the scatter correction methods ranges from a minimum of 0.03 for the DPW method to a maximum of 0.0727 for the three-energy-window (TEW) method using the trapezoidal approximation. The TEW method using the triangular approximation, because of its ease of implementation, good improvement of the image contrast and SNR for the five cold spheres, and low noise level, is proposed as the most appropriate correction method.
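
    For orientation, the TEW estimate is conventionally written as follows (standard formulation from the literature, not reproduced from this paper): counts C_low and C_up acquired in two narrow sub-windows of width w_s flanking a photopeak window of width w_p give the scatter estimate

```latex
S \;\approx\; \left(\frac{C_{\mathrm{low}}}{w_s} + \frac{C_{\mathrm{up}}}{w_s}\right)\frac{w_p}{2},
\qquad
C_{\mathrm{primary}} \;=\; C_{\mathrm{peak}} - S
```

    The trapezoidal variant keeps both flanking terms, while the triangular approximation neglects the upper-window term; the extra noise of the narrow flanking windows is what drives the RNB differences reported above.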

  18. Designing and conducting simulation-based research.

    PubMed

    Cheng, Adam; Auerbach, Marc; Hunt, Elizabeth A; Chang, Todd P; Pusic, Martin; Nadkarni, Vinay; Kessler, David

    2014-06-01

    As simulation is increasingly used to study questions pertaining to pediatrics, it is important that investigators use rigorous methods to conduct their research. In this article, we discuss several important aspects of conducting simulation-based research in pediatrics. First, we describe, from a pediatric perspective, the 2 main types of simulation-based research: (1) studies that assess the efficacy of simulation as a training methodology and (2) studies where simulation is used as an investigative methodology. We provide a framework to help structure research questions for each type of research and describe illustrative examples of published research in pediatrics using these 2 frameworks. Second, we highlight the benefits of simulation-based research and how these apply to pediatrics. Third, we describe simulation-specific confounding variables that serve as threats to the internal validity of simulation studies and offer strategies to mitigate these confounders. Finally, we discuss the various types of outcome measures available for simulation research and offer a list of validated pediatric assessment tools that can be used in future simulation-based studies. Copyright © 2014 by the American Academy of Pediatrics.

  19. Simulation analysis of airflow alteration in the trachea following the vascular ring surgery based on CT images using the computational fluid dynamics method.

    PubMed

    Chen, Fong-Lin; Horng, Tzyy-Leng; Shih, Tzu-Ching

    2014-01-01

    This study presents a computational fluid dynamics (CFD) model to simulate the three-dimensional airflow in the trachea before and after the vascular ring surgery (VRS). The simulation was based on CT-scan images of the patients with the vascular ring diseases. The surface geometry of the tracheal airway was reconstructed using triangular mesh by the Amira software package. The unstructured tetrahedral volume meshes were generated by the ANSYS ICEM CFD software package. The airflow in the tracheal airway was solved by the ESI CFD-ACE+ software package. Numerical simulation shows that the pressure drops across the tracheal stenosis before and after the surgery were 0.1789 and 0.0967 Pa, respectively, with the inspiratory inlet velocity 0.1 m/s. Meanwhile, the improvement percentage by the surgery was 45.95%. In the expiratory phase, by contrast, the improvement percentage was 40.65%. When the inspiratory velocity reached 1 m/s, the pressure drop became 4.988 Pa and the improvement percentage was 43.32%. Simulation results further show that after treatment the pressure drop in the tracheal airway was significantly decreased, especially for low inspiratory and expiratory velocities. The CFD method can be applied to quantify the airway pressure alteration and to evaluate the treatment outcome of the vascular ring surgery under different respiratory velocities.
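
    As a quick arithmetic check of the quoted inspiratory improvement (our verification, using the pressure drops stated above):

```latex
\frac{\Delta P_{\mathrm{pre}} - \Delta P_{\mathrm{post}}}{\Delta P_{\mathrm{pre}}}
= \frac{0.1789 - 0.0967}{0.1789}
\approx 0.4595 = 45.95\%
```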

  20. Collaborative simulation method with spatiotemporal synchronization process control

    NASA Astrophysics Data System (ADS)

    Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian

    2016-10-01

    When designing a complex mechatronics system, such as a high speed train, it is relatively difficult to effectively simulate the entire system's dynamic behavior because it involves multi-disciplinary subsystems. Currently, the most practical approach for multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal unsynchronization among the multi-directional coupling simulations of the subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupling simulation of a given complex mechatronics system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism to define the interfacing and interaction mechanisms among subsystems, and 2) a simulation process control algorithm to realize the coupling simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method 1) can certainly be used to simulate the subsystem interactions under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupling simulation among multi-disciplinary subsystems. This method has been successfully applied in China's high speed train design and development processes, demonstrating that it can be applied to a wide range of engineering system design and simulation tasks with improved efficiency and effectiveness.

  1. Formability analysis of aluminum alloy sheets at elevated temperatures with numerical simulation based on the M-K method

    SciTech Connect

    Bagheriasl, Reza; Ghavam, Kamyar; Worswick, Michael

    2011-05-04

    The effect of temperature on the formability of aluminum alloy sheet is studied by developing Forming Limit Diagrams (FLDs) for a 3000-series aluminum alloy using the Marciniak and Kuczynski (M-K) technique by numerical simulation. The numerical model is built in LS-DYNA and incorporates Barlat's YLD2000 anisotropic yield function and the temperature-dependent Bergstrom hardening law. Three temperatures are studied: room temperature, 250 deg. C, and 300 deg. C. For each temperature case, various loading conditions are applied to the M-K defect model, and the effect of material anisotropy is considered by varying the defect angle. A simplified failure criterion is used to predict the onset of necking. Minor and major strains are obtained from the simulations and plotted for each temperature level. It is demonstrated that temperature improves the forming limit of 3000-series aluminum alloy sheet.

  2. Simulating marine propellers with vortex particle method

    NASA Astrophysics Data System (ADS)

    Wang, Youjiang; Abdel-Maksoud, Moustafa; Song, Baowei

    2017-01-01

    The vortex particle method is applied to compute the open water characteristics of marine propellers. It is based on the large-eddy simulation technique, and the Smagorinsky-Lilly sub-grid scale model is implemented for the eddy viscosity. The vortex particle method is combined with the boundary element method, in the sense that the body is modelled with boundary elements and the slipstream is modelled with vortex particles. Rotational periodic boundaries are adopted, which leads to a cylindrical sector domain for the slipstream. The particle redistribution scheme and the fast multipole method are modified to account for the rotational periodic boundaries. The open water characteristics of three propellers with different skew angles are calculated with the proposed method, and the results are compared with those obtained with the boundary element method and with experiments. It is found that the proposed method predicts the open water characteristics more accurately than the boundary element method, especially for high loading conditions and highly skewed propellers. The influence of the Smagorinsky constant is also studied; the results show a low sensitivity to it.

  3. Researches on a novel severe plastic deformation method combining direct extrusion and shearings for AZ61 magnesium alloy based on numerical simulation and experiments

    NASA Astrophysics Data System (ADS)

    Hu, Hongjun; Sun, Zhao; Ou, zhongwen; Wang, xiaoqing

    2017-05-01

    A new severe plastic deformation method called extrusion-shearing ("ES" for short) has been developed to fabricate ultra-fine grained AZ61 magnesium alloys. The theory relevant to the ES process, including the cumulative strain and the Zener-Hollomon parameter, has been studied. Simulations of the ES process for wrought AZ61 magnesium alloy have been performed using a three-dimensional finite element method. ES dies with one-step shearing and two-step shearing have been designed, manufactured, and installed on a thermo-mechanical simulator and an industrial horizontal extruder, respectively. The microstructure evolution has been observed and analysed, and the influence of the ES process on the grain refinement of AZ61 magnesium alloys during the multistage process has been investigated. Based on the experimental, simulation, and theoretical results, the ES process can increase the cumulative strain enormously and refine grain sizes through direct extrusion and the additional shearing, producing severe plastic deformation and improving the volume fraction of dynamic recrystallization. Continuous dynamic recrystallization is the main reason for the grain refinement during the ES process.
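
    For reference, the Zener-Hollomon parameter mentioned above is the temperature-compensated strain rate (standard definition; symbols ours):

```latex
Z = \dot{\varepsilon}\,\exp\!\left(\frac{Q}{RT}\right)
```

    where \(\dot{\varepsilon}\) is the strain rate, \(Q\) the deformation activation energy, \(R\) the gas constant, and \(T\) the absolute temperature. Larger \(Z\) (faster or colder deformation) is generally associated with finer dynamically recrystallized grains, which is why the parameter appears in analyses of grain refinement during ES.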

  4. Simulation methods for looping transitions.

    PubMed

    Gaffney, B J; Silverstone, H J

    1998-09-01

    Looping transitions occur in field-swept electron magnetic resonance spectra near avoided crossings and involve a single pair of energy levels that are in resonance at two magnetic field strengths, before and after the avoided crossing. When the distance between the two resonances approaches a linewidth, the usual simulation of the spectra, which results from a linear approximation of the dependence of the transition frequency on magnetic field, breaks down. A cubic approximation to the transition frequency, which can be obtained from the two resonance fields and the field-derivatives of the transition frequencies, along with linear (or better) interpolation of the transition-probability factor, restores accurate simulation. The difference is crucial for accurate line shapes at fixed angles, as in an oriented single crystal, but the difference turns out to be a smaller change in relative intensity for a powder spectrum. Spin-3/2 Cr3+ in ruby and spin-5/2 Fe3+ in transferrin oxalate are treated as examples.

  5. Simulator certification methods and the vertical motion simulator

    NASA Technical Reports Server (NTRS)

    Showalter, T. W.

    1981-01-01

    The vertical motion simulator (VMS) is designed to simulate a variety of experimental helicopter and STOL/VTOL aircraft, as well as other kinds of aircraft with special pitch and Z-axis characteristics. The VMS includes a large motion base with extensive vertical and lateral travel capabilities, a computer-generated-image visual system, and a high-speed CDC 7600 computer system, which performs the aero model calculations. Guidelines on how to measure and evaluate VMS performance were developed. A survey of simulation users was conducted to ascertain how they evaluated and certified simulators for use. The results are presented.

  6. Determining design gust loads for nonlinear aircraft: similarity between methods based on matched filter theory and on stochastic simulation

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1992-01-01

    This is a work-in-progress paper. It explores the similarity between the results from two different analysis methods - one deterministic, the other stochastic - for computing maximized and time-correlated gust loads for nonlinear aircraft. To date, numerical studies have been performed using two different nonlinear aircraft configurations. These studies demonstrate that results from the deterministic analysis method are realizable in the stochastic analysis method.

  7. Angioplasty simulation using ChainMail method

    NASA Astrophysics Data System (ADS)

    Le Fol, Tanguy; Acosta-Tamayo, Oscar; Lucas, Antoine; Haigron, Pascal

    2007-03-01

    Tackling transluminal angioplasty planning, the aim of our work is to bring solutions to clinical problems in a patient-specific way. This work focuses on the realization of simple simulation scenarios taking into account the macroscopic behavior of the stenosis. It means simulating geometrical and physical data from the inflation of a balloon while integrating data from tissue analysis and parameters from virtual tool-tissue interactions. In this context, three main behaviors have been identified: soft tissues crush completely under the effect of the balloon; calcified plaques do not admit any deformation but can move within deformable structures; and the blood vessel wall undergoes compression and tries to recover its original form. We investigated the use of ChainMail, which is based on elements linked to their neighbors by geometric constraints. Compared with time-consuming methods on the one hand and low-realism ones on the other, ChainMail methods provide a good compromise between physical and geometrical approaches. In this study, the constraints are defined from pixel density in angio-CT images. The 2D method proposed in this paper first initializes the balloon in the blood vessel lumen. The balloon then inflates, and the motion propagation gives an approximate reaction of the tissues. Finally, a minimal energy level is calculated to locally adjust element positions during an elastic relaxation stage. Preliminary experimental results obtained on 2D computed tomography (CT) images (100x100 pixels) show that the method is fast enough to handle a great number of linked elements. The simulation achieves real-time, realistic interactions, particularly for hard and soft plaques.
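
    The ChainMail idea is easiest to see in one dimension: when an element is displaced, its neighbors move only as far as needed to keep inter-element spacing within bounds, and the disturbance propagates outward until every constraint is satisfied. The Python sketch below is a minimal illustration of that propagation rule (names and the [min_gap, max_gap] bounds are ours; in the paper the constraints are derived from pixel density, and the elastic relaxation stage is a separate step):

```python
def chainmail_1d(x, moved_idx, new_pos, min_gap, max_gap):
    """Minimal 1-D ChainMail propagation (illustrative sketch).

    x is a list of element positions in ascending order. After one element
    is displaced, neighbors shift just enough to keep each gap within
    [min_gap, max_gap]; propagation stops at the first satisfied constraint.
    """
    x = list(x)
    x[moved_idx] = new_pos
    # Propagate to the right.
    for i in range(moved_idx + 1, len(x)):
        gap = x[i] - x[i - 1]
        if gap > max_gap:
            x[i] = x[i - 1] + max_gap   # pulled along by the neighbor
        elif gap < min_gap:
            x[i] = x[i - 1] + min_gap   # pushed away by the neighbor
        else:
            break                        # constraint satisfied: stop propagating
    # Propagate to the left, symmetrically.
    for i in range(moved_idx - 1, -1, -1):
        gap = x[i + 1] - x[i]
        if gap > max_gap:
            x[i] = x[i + 1] - max_gap
        elif gap < min_gap:
            x[i] = x[i + 1] - min_gap
        else:
            break
    return x
```

    Because each element moves at most once per disturbance and propagation stops early, the cost stays low even for large meshes, which is what makes the method attractive for real-time balloon inflation.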

  8. Reconsidering fidelity in simulation-based training.

    PubMed

    Hamstra, Stanley J; Brydges, Ryan; Hatala, Rose; Zendejas, Benjamin; Cook, David A

    2014-03-01

    In simulation-based health professions education, the concept of simulator fidelity is usually understood as the degree to which a simulator looks, feels, and acts like a human patient. Although this can be a useful guide in designing simulators, this definition emphasizes technological advances and physical resemblance over principles of educational effectiveness. In fact, several empirical studies have shown that the degree of fidelity appears to be independent of educational effectiveness. The authors confronted these issues while conducting a recent systematic review of simulation-based health professions education, and in this Perspective they use their experience in conducting that review to examine key concepts and assumptions surrounding the topic of fidelity in simulation.Several concepts typically associated with fidelity are more useful in explaining educational effectiveness, such as transfer of learning, learner engagement, and suspension of disbelief. Given that these concepts more directly influence properties of the learning experience, the authors make the following recommendations: (1) abandon the term fidelity in simulation-based health professions education and replace it with terms reflecting the underlying primary concepts of physical resemblance and functional task alignment; (2) make a shift away from the current emphasis on physical resemblance to a focus on functional correspondence between the simulator and the applied context; and (3) focus on methods to enhance educational effectiveness using principles of transfer of learning, learner engagement, and suspension of disbelief. These recommendations clarify underlying concepts for researchers in simulation-based health professions education and will help advance this burgeoning field.

  9. Adjoint-based Simultaneous Estimation Method of Fault Slip and Asthenosphere Viscosity Using Large-Scale Finite Element Simulation of Viscoelastic Deformation

    NASA Astrophysics Data System (ADS)

    Agata, R.; Ichimura, T.; Hori, T.; Hirahara, K.; Hashimoto, C.; Hori, M.

    2016-12-01

    Estimation of coseismic/postseismic slip using postseismic deformation observation data is an important topic in the field of geodetic inversion. Estimation methods for this purpose are expected to be improved by introducing numerical simulation tools (e.g., the finite element (FE) method) for viscoelastic deformation, in which the computation model is of high fidelity to the available high-resolution crustal data. The authors have proposed a large-scale simulation method using such FE high-fidelity models (HFM), assuming the use of a large-scale computation environment such as the K computer in Japan (Ichimura et al. 2016). On the other hand, the values of viscosity in the heterogeneous viscoelastic structure of the high-fidelity model are not trivial. In this study, we developed an adjoint-based optimization method incorporating an HFM, in which fault slip and asthenosphere viscosity are estimated simultaneously. We carried out numerical experiments using synthetic crustal deformation data. We constructed an HFM in a domain of 2048x1536x850 km, which includes the Tohoku region in northeast Japan, based on Ichimura et al. (2013). We used the model geometry data sets of JTOPO30 (2003), Koketsu et al. (2008), and the CAMP standard model (Hashimoto et al. 2004). The geometry of the crustal structures in the HFM is at 1 km resolution, resulting in 36 billion degrees of freedom. Synthetic crustal deformation data due to prescribed coseismic slip and afterslip at the locations of GEONET and GPS/A observation points and S-net are used. The target inverse analysis is formulated as minimization of the L2 norm of the difference between the FE simulation results and the observation data with respect to viscosity and fault slip, combining the quasi-Newton algorithm with the adjoint method. Use of this combination decreases the necessary number of forward analyses in the optimization calculation. As a result, we are now able to finish the estimation using 2560 computer nodes of the K computer for less
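
    In symbols (our notation, restating the description above), the inverse problem is

```latex
\min_{s,\,\eta}\; J(s,\eta) \;=\; \bigl\lVert\, d_{\mathrm{obs}} - G(s,\eta) \,\bigr\rVert_2^2
```

    where \(s\) is the fault slip, \(\eta\) the asthenosphere viscosity, \(G\) the FE viscoelastic forward model, and \(d_{\mathrm{obs}}\) the observed deformation. The adjoint method supplies the gradient \(\nabla J\) at the cost of roughly one additional simulation per quasi-Newton iteration, which is why the combination reduces the number of forward analyses.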

  10. Plume base flow simulation technology

    NASA Technical Reports Server (NTRS)

    Roberts, B. B.; Wallace, R. O.; Sims, J. L.

    1983-01-01

    A combined analytical/empirical approach was studied in an effort to define the plume simulation parameters for base flow. For design purposes, rocket exhaust simulation (i.e., plume simulation) is determined by wind tunnel testing. Cold gas testing was concluded to be a cost- and schedule-effective way to build a database of substantial scope. The results fell short of the target, although the work conducted was conclusive and advanced the state of the art. Comparisons of wind tunnel predictions with Space Transportation System (STS) flight data showed considerable differences. However, a review of the technology program database has yielded an additional parameter that may correlate flight and cold gas test data. Data from the plume technology program and the NASA test flights are presented to substantiate the proposed simulation parameters.

  11. Medical students’ satisfaction with the Applied Basic Clinical Seminar with Scenarios for Students, a novel simulation-based learning method in Greece

    PubMed Central

    2016-01-01

    Purpose: The integration of simulation-based learning (SBL) methods holds promise for improving the medical education system in Greece. The Applied Basic Clinical Seminar with Scenarios for Students (ABCS3) is a novel two-day SBL course that was designed by the Scientific Society of Hellenic Medical Students. The ABCS3 targeted undergraduate medical students and consisted of three core components: the case-based lectures, the ABCDE hands-on station, and the simulation-based clinical scenarios. The purpose of this study was to evaluate the general educational environment of the course, as well as the skills and knowledge acquired by the participants. Methods: Two sets of questions were distributed to the participants: the Dundee Ready Educational Environment Measure (DREEM) questionnaire and an internally designed feedback questionnaire (InEv). A multiple-choice examination was also distributed prior to the course and following its completion. A total of 176 participants answered the DREEM questionnaire, 56 the InEv, and 60 the MCQs. Results: The overall DREEM score was 144.61 (±28.05) out of 200. Delegates who participated in both the case-based lectures and the interactive scenarios core components scored higher than those who only completed the case-based lecture session (P=0.038). The mean overall feedback score was 4.12 (±0.56) out of 5. Students scored significantly higher on the post-test than on the pre-test (P<0.001). Conclusion: The ABCS3 was found to be an effective SBL program, as medical students reported positive opinions about their experiences and exhibited improvements in their clinical knowledge and skills. PMID:27012313

  12. Identification of substance in complicated mixture of simulants under the action of THz radiation on the base of SDA (spectral dynamics analysis) method

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Krotkus, Arunas; Molis, Gediminas

    2010-10-01

    The SDA (Spectral Dynamics Analysis) method, an analysis of THz spectrum dynamics in the THz frequency range, is used for the detection and identification of substances with similar THz Fourier spectra (such substances are usually called simulants) in two- or three-component media. This method allows us to obtain a unique 2D THz signature of the substance, the spectrogram, and to analyze the dynamics of many spectral lines of the THz signal, passed through or reflected from the substance, from one set of integral measurements simultaneously, even when the measurements are made on short time intervals (less than 20 ps). For long time intervals (100 ps and more), the SDA method makes it possible to determine the relaxation times of excited energy levels of the molecules. This information provides a new way to identify the substance, because the relaxation times differ between molecules of different substances. The restoration of the signal from its integral values is performed using the SVD (Singular Value Decomposition) technique. We consider three examples of PTFE pellets mixed with small amounts of L-Tartaric Acid and Sucrose, at concentrations of about 5%-10%. Our investigations show that the spectrograms and the dynamics of the spectral lines of a THz pulse passed through pure PTFE differ from those of the compound medium containing PTFE and L-Tartaric Acid, Sucrose, or both substances together. It is therefore possible to detect the presence of a small amount of additional substances in the sample even when their THz Fourier spectra are practically identical. The SDA method can thus be very effective for defense and security applications and for quality control in the pharmaceutical industry. We also show that, in the case of substances-simulants, the use of auto- and cross-correlation functions has much worse resolvability in comparison with the SDA method.

  13. Pipette-based Method to Study Embryoid Body Formation Derived from Mouse and Human Pluripotent Stem Cells Partially Recapitulating Early Embryonic Development Under Simulated Microgravity Conditions

    NASA Astrophysics Data System (ADS)

    Shinde, Vaibhav; Brungs, Sonja; Hescheler, Jürgen; Hemmersbach, Ruth; Sachinidis, Agapios

    2016-06-01

    The in vitro differentiation of pluripotent stem cells partially recapitulates early in vivo embryonic development. More recently, embryonic development under the influence of microgravity has become a primary focus of space life sciences. In order to integrate the technique of pluripotent stem cell differentiation with simulated microgravity approaches, a 2-D clinostat-compatible pipette-based method was experimentally investigated and adapted for studying stem cell differentiation processes under simulated microgravity conditions. In order to keep residual accelerations as low as possible during clinorotation, while also guaranteeing enough material for further analysis, stem cells were exposed in 1-mL pipettes with a diameter of 3.5 mm. The differentiation of mouse and human pluripotent stem cells inside the pipettes resulted in the formation of embryoid bodies at normal gravity (1 g) after 24 h and 3 days. Differentiation of the mouse pluripotent stem cells on a 2-D pipette clinostat for 3 days also resulted in the formation of embryoid bodies. Interestingly, the expression of myosin heavy chain was downregulated when cultivation was continued for an additional 7 days at normal gravity. This paper describes the techniques for culturing and differentiation of pluripotent stem cells and for exposure to simulated microgravity during culturing or differentiation on a 2-D pipette clinostat. The implementation of these methodologies, along with -omics technologies, will contribute to understanding the mechanisms by which microgravity influences early embryonic development.

  14. Jacobian-free Newton Krylov discontinuous Galerkin method and physics-based preconditioning for nuclear reactor simulations

    SciTech Connect

    HyeongKae Park; Robert R. Nourgaliev; Richard C. Martineau; Dana A. Knoll

    2008-09-01

    We present a high-order accurate spatiotemporal discretization of all-speed flow solvers using a Jacobian-free Newton-Krylov framework. One of the key developments in this work is the physics-based preconditioner for all-speed flow, which makes use of traditional semi-implicit schemes. The physics-based preconditioner is developed in primitive-variable form, which allows a straightforward separation of physical phenomena. Numerical examples demonstrate that the developed preconditioner effectively reduces the number of Krylov iterations, and that the efficiency is independent of the Mach number and mesh size under a fixed CFL condition.
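
    A minimal sketch of the Jacobian-free Newton-Krylov pattern, using SciPy's newton_krylov on a 1D nonlinear diffusion-reaction problem. The abstract's preconditioner is built from semi-implicit all-speed flow schemes; here, as a stand-in, an incomplete LU factorization of the linearized diffusion operator plays the preconditioner role through the inner_M argument.

      import numpy as np
      from scipy.optimize import newton_krylov
      from scipy.sparse import diags, csc_matrix
      from scipy.sparse.linalg import spilu, LinearOperator

      # 1D model problem: -u'' + exp(u) = 10 on (0,1), u(0) = u(1) = 0.
      n = 200
      h = 1.0/(n + 1)

      def residual(u):
          up = np.r_[u[1:], 0.0]    # u_{i+1}, zero Dirichlet boundary
          um = np.r_[0.0, u[:-1]]   # u_{i-1}
          return (2*u - up - um)/h**2 + np.exp(u) - 10.0

      # Preconditioner: incomplete LU of the linearized diffusion
      # operator (a stand-in for the paper's semi-implicit physics).
      A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))/h**2
      ilu = spilu(csc_matrix(A))
      M = LinearOperator((n, n), ilu.solve)

      sol = newton_krylov(residual, np.zeros(n), method='lgmres', inner_M=M)
      print("max |residual| =", np.abs(residual(sol)).max())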

  15. The impact of web-based and face-to-face simulation on patient deterioration and patient safety: protocol for a multi-site multi-method design.

    PubMed

    Cooper, Simon J; Kinsman, Leigh; Chung, Catherine; Cant, Robyn; Boyle, Jayne; Bull, Loretta; Cameron, Amanda; Connell, Cliff; Kim, Jeong-Ah; McInnes, Denise; McKay, Angela; Nankervis, Katrina; Penz, Erika; Rotter, Thomas

    2016-09-07

    There are international concerns in relation to the management of patient deterioration, which has led to a body of evidence known as the 'failure to rescue' literature. Nursing staff are known to miss cues of deterioration and often fail to call for assistance. Medical Emergency Teams (Rapid Response Teams) do improve the management of acutely deteriorating patients, but first responders need the requisite skills to have an impact on patient safety. In this study we aim to address these issues in a mixed-methods interventional trial with the objective of measuring and comparing the cost and clinical impact of face-to-face and web-based simulation programs on the management of patient deterioration and related patient outcomes. The education programs, known as 'FIRST(2)ACT', have been found to have an educational impact and will be tested in four hospitals in the State of Victoria, Australia. Nursing staff will be trained in primary (the first 8 min) responses to emergencies in two medical wards using a face-to-face approach and in two medical wards using a web-based version, FIRST(2)ACTWeb. The impact of these interventions will be determined through quantitative and qualitative approaches, cost analyses and patient notes review (time series analyses) to measure quality of care and patient outcomes. In this 18-month study it is hypothesised that both simulation programs will improve the detection and management of deteriorating patients, but that the web-based program will have lower total costs. The study will also add to our overall understanding of the utility of simulation approaches in the preparation of nurses working in hospital wards. (ACTRN12616000468426, retrospectively registered 8.4.2016).

  16. Methods of sound simulation and applications in flight simulators

    NASA Technical Reports Server (NTRS)

    Gaertner, K. P.

    1980-01-01

    An overview of methods for electronically synthesizing sounds is presented. A given amount of hardware and computer capacity places an upper limit on the attainable degree and fidelity of sound-simulation realism. Good sound realism for aircraft simulators can be especially expensive because of the complexity of flight sounds and their changing patterns through time. Nevertheless, the flight simulator developed at the Research Institute for Human Engineering, West Germany, shows that it is possible to design an inexpensive sound simulator with the required acoustic properties using analog computer elements. The characteristics of the sub-sound elements produced by this sound simulator for take-off, cruise and approach are discussed.
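
    A toy additive-synthesis sketch in the spirit of the overview: a few harmonics tied to an assumed engine speed profile plus low-passed noise. All parameters are invented for illustration; the original simulator used analog computer elements, not digital synthesis.

      import numpy as np

      # Engine-like sound: harmonics tied to an assumed rpm profile
      # plus crudely low-passed noise; every number is invented.
      fs = 16000
      t = np.arange(0, 2.0, 1/fs)
      rpm = np.linspace(2000, 4000, t.size)            # spool-up
      phase = 2*np.pi*np.cumsum(rpm/60.0)/fs           # integrated frequency

      tone = sum(0.5/k*np.sin(k*phase) for k in (1, 2, 3))
      noise = np.convolve(np.random.default_rng(0).normal(0, 0.2, t.size),
                          np.ones(32)/32, mode='same') # moving-average low-pass
      sound = (tone + noise)*np.linspace(0.3, 1.0, t.size)
      # 'sound' can be written out with, e.g., scipy.io.wavfile.write.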

  17. Parametrizing Physics-Based Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Schultz, Kasey W.; Yoder, Mark R.; Wilson, John M.; Heien, Eric M.; Sachs, Michael K.; Rundle, John B.; Turcotte, Don L.

    2017-06-01

    Utilizing earthquake source parameter scaling relations, we formulate an extensible slip weakening friction law for quasi-static earthquake simulations. This algorithm is based on the method used to generate fault strengths for a recent earthquake simulator comparison study of the California fault system. Here we focus on the application of this algorithm in the Virtual Quake earthquake simulator. As a case study we probe the effects of the friction law's parameters on simulated earthquake rates for the UCERF3 California fault model, and present the resulting conditional probabilities for California earthquake scenarios. The new friction model significantly extends the moment magnitude range over which simulated earthquake rates match observed rates in California, as well as substantially improving the agreement between simulated and observed scaling relations for mean slip and total rupture area.
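
    A generic linear slip-weakening friction law of the kind the abstract parametrizes (the actual Virtual Quake parametrization is tied to source scaling relations and is not reproduced here); coefficient values below are illustrative only.

      import numpy as np

      def slip_weakening_mu(slip, mu_s=0.7, mu_d=0.5, d_c=0.4):
          """Friction drops linearly from static (mu_s) to dynamic
          (mu_d) over a critical slip distance d_c; values invented."""
          s = np.minimum(np.asarray(slip, dtype=float), d_c)
          return mu_s - (mu_s - mu_d)*s/d_c

      sigma_n = 50e6                         # normal stress, Pa
      for slip in (0.0, 0.2, 0.4, 1.0):      # slip in meters
          strength = slip_weakening_mu(slip)*sigma_n
          print("slip %.1f m -> fault strength %.1f MPa" % (slip, strength/1e6))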

  19. Simulation-Based Training for Colonoscopy

    PubMed Central

    Preisler, Louise; Svendsen, Morten Bo Søndergaard; Nerup, Nikolaj; Svendsen, Lars Bo; Konge, Lars

    2015-01-01

    Abstract The aim of this study was to create simulation-based tests with credible pass/fail standards for 2 different fidelities of colonoscopy models. Only competent practitioners should perform colonoscopy. Reliable and valid simulation-based tests could be used to establish basic competency in colonoscopy before practicing on patients. Twenty-five physicians (10 consultants with endoscopic experience and 15 fellows with very little endoscopic experience) were tested on 2 different simulator models: a virtual-reality simulator and a physical model. Tests were repeated twice on each simulator model. Metrics with discriminatory ability were identified for both modalities and reliability was determined. The contrasting-groups method was used to create pass/fail standards and the consequences of these were explored. The consultants performed significantly faster and scored significantly higher than the fellows on both models (P < 0.001). Reliability analysis showed Cronbach α = 0.80 and 0.87 for the virtual-reality and the physical model, respectively. The established pass/fail standards failed one of the consultants (virtual-reality simulator) and allowed one fellow to pass (physical model). The 2 tested simulation-based modalities provided reliable and valid assessments of competence in colonoscopy, and credible pass/fail standards were established for both tests. We propose to use these standards in simulation-based training programs before proceeding to supervised training on patients. PMID:25634177
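
    The contrasting-groups standard can be pictured as the score at which the fitted score distributions of the experienced and novice groups cross. A minimal sketch with invented scores (not the study's data), assuming normal distributions:

      import numpy as np
      from scipy.stats import norm

      # Invented test scores for the two contrasting groups.
      consultants = np.array([82, 88, 75, 90, 85, 79, 86, 84, 91, 80])
      fellows = np.array([55, 62, 48, 70, 58, 65, 60, 52, 57, 66,
                          61, 59, 63, 50, 54])

      m1, s1 = consultants.mean(), consultants.std(ddof=1)
      m0, s0 = fellows.mean(), fellows.std(ddof=1)

      # Pass/fail standard: where the two fitted densities cross,
      # searched between the two group means.
      xs = np.linspace(m0, m1, 2000)
      cut = xs[np.argmin(np.abs(norm.pdf(xs, m0, s0) - norm.pdf(xs, m1, s1)))]
      print("pass/fail standard ~ %.1f" % cut)
      print("consultants failing:", int((consultants < cut).sum()),
            "fellows passing:", int((fellows >= cut).sum()))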

  20. Large-Eddy Simulation of Shallow Water Langmuir Turbulence Using Isogeometric Analysis and the Residual-Based Variational Multiscale Method

    DTIC Science & Technology

    2012-01-01

    generating turbulence in the ocean; others include wind- and tidal-driven shear, buoyancy-driven convection and wave breaking. Wind speeds greater than 3 m... structure to the primary, mean component of the flow driven by the wind. LC results from surface wave-current interaction and often occurs within the... equations with an extra vortex force term accounting for wave-current interaction giving rise to LC. The RBVMS method with quadratic NURBS is shown to

  1. Large-Eddy Simulation and Multigrid Methods

    SciTech Connect

    Falgout,R D; Naegle,S; Wittum,G

    2001-06-18

    A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to model the unresolved scales of motion and are compared with each other on different grids. The behavior of the models is demonstrated, the use of adaptive grid refinement is investigated, and parallelization aspects are addressed.

  2. [Comparison of two types of double-lined simulated landfill leakage detection based on high voltage DC method].

    PubMed

    Yang, Ping; Nai, Chang-Xin; Dong, Lu; Wang, Qi; Wang, Yan-Wen

    2006-01-01

    Two types of double high-density polyethylene (HDPE) liner landfills were simulated, in which clay or geogrid was added between the two HDPE liners. The general resistance of the second type is 15% larger than that of the first type in the primary HDPE liner detection, and 20% larger in the secondary HDPE liner detection. The high voltage DC method can accomplish leakage detection and location for both types of landfill, and the error of leakage location is less than 10 cm when the electrode spacing is 1 m.

  3. Fast and accurate algorithm for repeated optical trapping simulations on arbitrarily shaped particles based on boundary element method

    NASA Astrophysics Data System (ADS)

    Xu, Kai-Jiang; Pan, Xiao-Min; Li, Ren-Xian; Sheng, Xin-Qing

    2017-07-01

    In optical trapping applications, the optical force should be investigated within a wide range of the parameter space of beam configurations to reach the desired performance. A simple but reliable way of conducting the related investigation is to evaluate the optical forces corresponding to all possible beam configurations. Although the optical force exerted on arbitrarily shaped particles can be well predicted by the boundary element method (BEM), such an investigation is time-consuming because it involves many repetitions of expensive computations, where the forces are calculated from the equivalent surface currents. An algorithm is proposed to alleviate the difficulty by exploiting our previously developed skeletonization framework. The proposed algorithm succeeds in reducing the number of repetitions. Since the number of skeleton beams is always much smaller than the number of beams in question, the computation can be very efficient. The proposed algorithm is accurate because the skeletonization is accuracy-controllable.
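
    The skeletonization idea can be imitated with generic low-rank linear algebra: choose a few "skeleton" configurations whose responses span the rest, so only those need a full solve. The sketch below uses a pivoted QR on a random low-rank response matrix; it is a stand-in, not the authors' BEM framework.

      import numpy as np
      from scipy.linalg import qr

      rng = np.random.default_rng(2)

      # Response matrix: rows = candidate beam configurations, columns =
      # sampled force values; built numerically low-rank on purpose.
      A = rng.normal(size=(500, 12)) @ rng.normal(size=(12, 200))

      # Pick k "skeleton" beams whose responses span the rest
      # (column-pivoted QR on the transpose).
      k = 12
      _, _, piv = qr(A.T, pivoting=True, mode='economic')
      skel = piv[:k]

      # Every other beam's response is a combination of the skeleton
      # responses, so only the skeleton needs a full (expensive) solve.
      coeff, *_ = np.linalg.lstsq(A[skel].T, A.T, rcond=None)
      err = np.linalg.norm(A - (A[skel].T @ coeff).T)/np.linalg.norm(A)
      print("relative reconstruction error:", err)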

  4. A fluid model simulation of a simplified plasma limiter based on spectral-element time-domain method

    SciTech Connect

    Qian, Cheng; Ding, Dazhi; Fan, Zhenhong; Chen, Rushan

    2015-03-15

    A simplified plasma limiter prototype is proposed, and a fluid model coupled with Maxwell's equations is established to describe the operating mechanism of the plasma limiter. A three-dimensional (3-D) simplified sandwich-structure plasma limiter model is analyzed with the spectral-element time-domain (SETD) method. The field breakdown thresholds of air and argon at different frequencies are predicted and compared with experimental data, with good agreement for gas microwave breakdown discharge problems. Numerical results demonstrate that a two-layer plasma limiter (plasma-slab-plasma) has better protective characteristics than a one-layer plasma limiter (slab-plasma-slab) with the same gas chamber length.

  5. A Method to Estimate the Size and Characteristics of HIV-positive Populations Using an Individual-based Stochastic Simulation Model

    PubMed Central

    van Sighem, Ard; Thiebaut, Rodolphe; Smith, Colette; Ratmann, Oliver; Cambiano, Valentina; Albert, Jan; Amato-Gauci, Andrew; Bezemer, Daniela; Campbell, Colin; Commenges, Daniel; Donoghoe, Martin; Ford, Deborah; Kouyos, Roger; Lodwick, Rebecca; Lundgren, Jens; Pantazis, Nikos; Pharris, Anastasia; Quinten, Chantal; Thorne, Claire; Touloumi, Giota; Delpech, Valerie; Phillips, Andrew

    2016-01-01

    It is important not only to collect epidemiologic data on HIV but to also fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900–45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160–17,350) were undiagnosed. There were an estimated 3,210 (1,730–5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population are thought to have viral load <500 copies/ml. In the pseudo-epidemic example, HIV estimates have narrower plausibility ranges and are closer to the true number, the greater the data availability to calibrate the model. We demonstrate that our method can be applied to settings with less data, however plausibility ranges for estimates will be wider to reflect greater uncertainty of the data used to fit the model. PMID:26605814

  6. A Method to Estimate the Size and Characteristics of HIV-positive Populations Using an Individual-based Stochastic Simulation Model.

    PubMed

    Nakagawa, Fumiyo; van Sighem, Ard; Thiebaut, Rodolphe; Smith, Colette; Ratmann, Oliver; Cambiano, Valentina; Albert, Jan; Amato-Gauci, Andrew; Bezemer, Daniela; Campbell, Colin; Commenges, Daniel; Donoghoe, Martin; Ford, Deborah; Kouyos, Roger; Lodwick, Rebecca; Lundgren, Jens; Pantazis, Nikos; Pharris, Anastasia; Quinten, Chantal; Thorne, Claire; Touloumi, Giota; Delpech, Valerie; Phillips, Andrew

    2016-03-01

    It is important not only to collect epidemiologic data on HIV but to also fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900-45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160-17,350) were undiagnosed. There were an estimated 3,210 (1,730-5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population are thought to have viral load <500 copies/ml. In the pseudo-epidemic example, HIV estimates have narrower plausibility ranges and are closer to the true number, the greater the data availability to calibrate the model. We demonstrate that our method can be applied to settings with less data, however plausibility ranges for estimates will be wider to reflect greater uncertainty of the data used to fit the model.
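
    A toy sketch of the approximate Bayesian computation (ABC) rejection idea used for calibration: draw parameters from a prior, run a (here, drastically simplified) epidemic bookkeeping model, and keep draws whose simulated summaries land near the observed ones. All numbers and the model are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      obs = np.array([37900.0, 30000.0])   # diagnosed, virally suppressed

      def simulate(incidence, diag_rate, years=10):
          # Toy bookkeeping; the real model tracks individual histories.
          undiag = diag = 0.0
          for _ in range(years):
              undiag += incidence
              newly = diag_rate*undiag
              undiag -= newly
              diag += newly
          return np.array([diag, 0.8*diag])   # assume 80% suppressed

      accepted = []
      for _ in range(20000):
          theta = rng.uniform([1000, 0.05], [8000, 0.6])  # prior draw
          sim = simulate(*theta)
          if np.linalg.norm((sim - obs)/obs) < 0.1:       # keep close fits
              accepted.append(theta)
      accepted = np.array(accepted)
      print("posterior mean incidence/yr: %.0f" % accepted[:, 0].mean())
      print("90%% plausibility range:", np.percentile(accepted[:, 0], [5, 95]))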

  7. Accelerated simulation methods for plasma kinetics

    NASA Astrophysics Data System (ADS)

    Caflisch, Russel

    2016-11-01

    Collisional kinetics is a multiscale phenomenon due to the disparity between the continuum (fluid) and collisional (particle) length scales. This paper describes a class of simulation methods for gases and plasmas, and acceleration techniques for improving their speed and accuracy. Starting from the Landau-Fokker-Planck equation for plasmas, the focus is on a binary collision model that is solved using a Direct Simulation Monte Carlo (DSMC) method. Acceleration of this method is achieved by coupling the particle method to a continuum fluid description. The velocity distribution function f is represented as a combination of a Maxwellian M (the thermal component) and a set of discrete particles fp (the kinetic component). For systems that are close to (local) equilibrium, this reduces the number N of simulated particles required to represent f at a given level of accuracy. We present two methods for exploiting this representation. In the first method, equilibration of particles in fp, as well as disequilibration of particles from M, due to the collision process, is represented by a thermalization/dethermalization step that employs an entropy criterion. Efficiency of the representation is greatly increased by the inclusion of particles with negative weights. This significantly complicates the simulation, but the second method provides a tractable approach for handling negatively weighted particles. The accelerated simulation method is compared with the standard PIC-DSMC method for both spatially homogeneous problems, such as a bump-on-tail distribution, and inhomogeneous problems, such as nonlinear Landau damping.

  8. Characterization of soil lead by comparing sequential Gaussian simulation, simulated annealing simulation and kriging methods

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Pin; Chang, Tsun-Kuo; Teng, Tung-Po

    2001-11-01

    This study attempted to characterize the spatial patterns of lead (Pb) for further soil monitoring and remediation by comparing sequential Gaussian simulation, simulated annealing simulation and ordinary kriging methods for delineating soil lead in a rice paddy field in the north of Changhua County, Taiwan. In reproducing the statistics of Pb and natural-log Pb (ln(Pb)), the simulation techniques yielded better results than ordinary kriging. Both sequential Gaussian simulation and simulated annealing reproduced the spatial variation of the measured Pb and ln(Pb), and identified the global spatial continuity and discontinuity patterns. Furthermore, the simulated annealing method matched the global measurement statistics and spatial patterns of Pb and ln(Pb) more closely than sequential Gaussian simulation and kriging. Finally, the realizations generated by sequential Gaussian simulation displayed significantly higher local heterogeneity than those generated by simulated annealing. The realizations of simulated annealing are consistent in presenting the spatial patterns of soil Pb.
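
    The contrast between kriging (a smooth conditional mean) and conditional simulation (realizations that preserve variability) can be sketched with a Gaussian process; scikit-learn's sample_y plays the role of a conditional simulator here, standing in for sequential Gaussian simulation. Data and kernel are invented.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)

      # Invented log-Pb observations at random field coordinates.
      X = rng.uniform(0, 100, size=(40, 2))
      y = np.log(20 + 10*np.sin(X[:, 0]/15) + rng.normal(0, 1, 40))

      gp = GaussianProcessRegressor(RBF(20.0) + WhiteKernel(0.05),
                                    normalize_y=True).fit(X, y)

      gx, gy = np.meshgrid(np.linspace(0, 100, 25), np.linspace(0, 100, 25))
      G = np.c_[gx.ravel(), gy.ravel()]

      mean, sd = gp.predict(G, return_std=True)           # kriging: smooth mean
      sims = gp.sample_y(G, n_samples=5, random_state=0)  # realizations

      # Realizations retain the modeled variability that kriging smooths away.
      print("kriged field std %.3f vs simulated field std %.3f"
            % (mean.std(), sims[:, 0].std()))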

  9. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
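
    The three sampling methods are easy to reproduce on a synthetic event stream; a minimal sketch (parameters invented) shows the well-known pattern that partial-interval recording overestimates and whole-interval recording underestimates cumulative duration:

      import numpy as np

      rng = np.random.default_rng(42)

      # A 10-min observation period at 1-s resolution; the event is
      # "on" about 30% of the time (all parameters invented).
      true = rng.random(600) < 0.3
      bins = true.reshape(-1, 10)              # 10-s intervals

      mts = bins[:, -1].mean()                 # momentary time sampling
      pir = bins.any(axis=1).mean()            # partial-interval recording
      wir = bins.all(axis=1).mean()            # whole-interval recording

      print("true %.2f  MTS %.2f  PIR %.2f  WIR %.2f"
            % (true.mean(), mts, pir, wir))
      # PIR overestimates and WIR underestimates cumulative duration,
      # while MTS is roughly unbiased for this kind of event stream.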

  10. Constraint methods that accelerate free-energy simulations of biomolecules

    SciTech Connect

    Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
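
    A minimal sketch of the spring-like restraints the review discusses: harmonic distance restraints added to the force field, here as a standalone energy/force routine with illustrative parameters.

      import numpy as np

      def harmonic_restraints(coords, pairs, r0, k=10.0):
          """Spring-like distance restraints E = sum k*(r - r0)^2,
          returning energy and forces (illustrative parameters)."""
          E, F = 0.0, np.zeros_like(coords)
          for (i, j), d0 in zip(pairs, r0):
              rij = coords[i] - coords[j]
              r = np.linalg.norm(rij)
              E += k*(r - d0)**2
              dEdr = 2*k*(r - d0)
              F[i] -= dEdr*rij/r               # F = -dE/dx
              F[j] += dEdr*rij/r
          return E, F

      # Added to the physical force field, such terms bias sampling
      # toward structures consistent with external knowledge.
      coords = np.array([[0.0, 0.0, 0.0], [3.5, 0.0, 0.0], [0.0, 4.2, 0.0]])
      E, F = harmonic_restraints(coords, [(0, 1), (0, 2)], [3.8, 4.0])
      print("restraint energy:", round(E, 3))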

  11. Constraint methods that accelerate free-energy simulations of biomolecules.

    PubMed

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  12. Constraint methods that accelerate free-energy simulations of biomolecules

    PubMed Central

    MacCallum, Justin L.; Dill, Ken A.

    2015-01-01

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions. PMID:26723628

  13. Constraint methods that accelerate free-energy simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.

    2015-12-01

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  14. Estimating School Efficiency: A Comparison of Methods Using Simulated Data.

    ERIC Educational Resources Information Center

    Bifulco, Robert; Bretschneider, Stuart

    2001-01-01

    Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…
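
    A minimal sketch of corrected ordinary least squares (COLS), one of the two techniques compared: fit OLS, shift the fitted line up by the largest residual to form a frontier, and score efficiency against it. The simulated data below are invented; note how measurement noise is folded into the frontier, which is the inadequacy the abstract reports.

      import numpy as np

      rng = np.random.default_rng(3)

      # Invented schools: one input (spending) producing one output
      # (score), with both inefficiency and measurement noise.
      x = rng.uniform(5, 15, 100)
      score = 40 + 3.0*x - rng.exponential(4.0, 100) + rng.normal(0, 1, 100)

      # COLS: fit OLS, then shift the line up by the largest residual
      # so that it envelopes the data, forming a production frontier.
      b1, b0 = np.polyfit(x, score, 1)
      resid = score - (b0 + b1*x)
      frontier = b0 + resid.max() + b1*x
      efficiency = score/frontier

      # Noise is folded into the frontier, biasing the scores; that is
      # the inadequacy the simulated-data comparison demonstrates.
      print("mean estimated efficiency:", efficiency.mean().round(3))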

  15. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    NASA Astrophysics Data System (ADS)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
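
    The parallel pattern, not the transport physics, is easy to sketch: independent "tracks" are handed to a worker pool dynamically, and the inner loop over energy groups is a wide vectorized array operation. The attenuation model below is a toy stand-in for a real MOC sweep.

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      rng = np.random.default_rng(4)

      ngroups = 64                                  # energy groups
      sigma_t = rng.uniform(0.1, 1.0, ngroups)      # total cross sections

      def sweep(track):
          lengths, flux = track
          for s in lengths:                         # loop over segments
              flux = flux*np.exp(-sigma_t*s)        # wide vector update
          return flux

      tracks = [(rng.uniform(0.01, 0.2, int(rng.integers(50, 300))),
                 np.ones(ngroups)) for _ in range(1000)]

      # Independent tracks are assigned to workers dynamically,
      # which is the load-balancing idea in the abstract.
      with ThreadPoolExecutor() as pool:
          out = list(pool.map(sweep, tracks))
      print("mean exiting flux: %.4f" % np.mean([f.mean() for f in out]))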

  16. Image based SAR product simulation for analysis

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new method of product simulation is described that also employs a real SAR input image; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and their advantages and disadvantages are discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  17. INTERVAL SAMPLING METHODS AND MEASUREMENT ERROR: A COMPUTER SIMULATION

    PubMed Central

    Wirth, Oliver; Slaven, James; Taylor, Matthew A.

    2015-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method’s inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. PMID:24127380

  18. Meta-Analysis of a Continuous Outcome Combining Individual Patient Data and Aggregate Data: A Method Based on Simulated Individual Patient Data

    ERIC Educational Resources Information Center

    Yamaguchi, Yusuke; Sakamoto, Wataru; Goto, Masashi; Staessen, Jan A.; Wang, Jiguang; Gueyffier, Francois; Riley, Richard D.

    2014-01-01

    When some trials provide individual patient data (IPD) and the others provide only aggregate data (AD), meta-analysis methods for combining IPD and AD are required. We propose a method that reconstructs the missing IPD for AD trials by a Bayesian sampling procedure and then applies an IPD meta-analysis model to the mixture of simulated IPD and…

  20. Simulating protein dynamics: Novel methods and applications

    NASA Astrophysics Data System (ADS)

    Vishal, V.

    This Ph.D. dissertation describes several methodological advances in molecular dynamics (MD) simulations. Methods like Markov State Models can be used effectively in combination with distributed computing to obtain long-timescale behavior from an ensemble of short simulations. Advanced computing architectures like graphics processors can be used to greatly extend the scope of MD. Applications of MD techniques to problems like Alzheimer's disease and fundamental questions in protein dynamics are described.
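
    A minimal Markov State Model sketch: harvest many short trajectories (here from a known 3-state chain standing in for short MD runs), count transitions, and read long timescales off the eigenvalues of the estimated transition matrix.

      import numpy as np

      rng = np.random.default_rng(7)

      # A known 3-state chain stands in for the slow dynamics that many
      # short MD trajectories sample in pieces.
      T_true = np.array([[0.97, 0.02, 0.01],
                         [0.05, 0.90, 0.05],
                         [0.01, 0.04, 0.95]])

      C = np.zeros((3, 3))                      # transition counts
      for _ in range(500):                      # 500 short trajectories
          s = int(rng.integers(3))
          for _ in range(50):
              nxt = rng.choice(3, p=T_true[s])
              C[s, nxt] += 1
              s = nxt

      T = C/C.sum(axis=1, keepdims=True)        # estimated MSM

      # Long timescales from short data: implied timescales follow from
      # the eigenvalues of the estimated transition matrix.
      lam = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
      print("implied timescales (steps):", -1.0/np.log(lam[1:]))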

  1. Simulation of the «COSMONAUT-ROBOT» System Interaction on the Lunar Surface Based on Methods of Machine Vision and Computer Graphics

    NASA Astrophysics Data System (ADS)

    Kryuchkov, B. I.; Usov, V. M.; Chertopolokhov, V. A.; Ronzhin, A. L.; Karpov, A. A.

    2017-05-01

    Extravehicular activity (EVA) on the lunar surface, necessary for the future exploration of the Moon, involves extensive use of robots. One factor in safe EVA is proper interaction between cosmonauts and robots in extreme environments. This requires a simple and natural man-machine interface, e.g. a multimodal contactless interface based on recognition of gestures and the cosmonaut's poses. When travelling in the "Follow Me" mode (master/slave), a robot uses onboard tools to track the cosmonaut's position and movements, and builds its itinerary on the basis of these data. Interaction in the "cosmonaut-robot" system on the lunar surface differs significantly from that on the Earth's surface. For example, a man dressed in a space suit has limited fine motor skills. In addition, EVA is quite tiring for cosmonauts, and a tired human being performs movements less accurately and makes mistakes more often. All this leads to new requirements for the convenient use of the man-machine interface designed for EVA. To improve the reliability and stability of human-robot communication, it is necessary to provide options for duplicating commands at the task stages and for gesture recognition. New tools and techniques for space missions must be examined first in laboratory conditions and then in field tests (proof tests at the site of application). The article analyzes methods for detecting and tracking the movements and recognizing the gestures of the cosmonaut during EVA, which can be used in the design of the human-machine interface. A scenario for testing these methods by constructing a virtual environment simulating EVA on the lunar surface is proposed. The simulation involves visualization of the environment and modeling of the use of the robot's "vision" to track a moving cosmonaut dressed in a spacesuit.

  2. Spectral Methods in General Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Garrison, David

    2012-03-01

    In this talk I discuss the use of spectral methods in improving the accuracy of a General Relativistic Magnetohydrodynamic (GRMHD) computer code. I introduce SpecCosmo, a GRMHD code developed as a Cactus arrangement at UHCL, and show simulation results using both Fourier spectral methods and finite differencing. This work demonstrates the use of spectral methods with the FFTW 3.3 Fast Fourier Transform package integrated with the Cactus Framework to perform spectral differencing using MPI.
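
    The core operation a Fourier spectral method adds is differentiation in wavenumber space. A minimal sketch with NumPy's FFT (standing in for FFTW) contrasts its accuracy with second-order finite differencing on a smooth periodic function:

      import numpy as np

      n = 64
      x = 2*np.pi*np.arange(n)/n
      u = np.exp(np.sin(x))                       # smooth periodic test field

      ik = 1j*np.fft.fftfreq(n, d=1.0/n)          # i * integer wavenumbers
      du_spec = np.real(np.fft.ifft(ik*np.fft.fft(u)))
      du_fd = (np.roll(u, -1) - np.roll(u, 1))/(2*(2*np.pi/n))

      exact = np.cos(x)*u
      print("spectral max error:", np.abs(du_spec - exact).max())
      print("2nd-order FD max error:", np.abs(du_fd - exact).max())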

  3. Inversion based on computational simulations

    SciTech Connect

    Hanson, K.M.; Cunningham, G.S.; Saquib, S.S.

    1998-09-01

    A standard approach to solving inversion problems that involve many parameters uses gradient-based optimization to find the parameters that best match the data. The authors discuss enabling techniques that facilitate application of this approach to large-scale computational simulations, which are the only way to investigate many complex physical phenomena. Such simulations may not seem to lend themselves to calculation of the gradient with respect to numerous parameters. However, adjoint differentiation allows one to efficiently compute the gradient of an objective function with respect to all the variables of a simulation. When combined with advanced gradient-based optimization algorithms, adjoint differentiation permits one to solve very large problems of optimization or parameter estimation. These techniques will be illustrated through the simulation of the time-dependent diffusion of infrared light through tissue, which has been used to perform optical tomography. The techniques discussed have a wide range of applicability to modeling including the optimization of models to achieve a desired design goal.
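
    A minimal sketch of adjoint differentiation on a toy time-dependent diffusion model: one backward sweep yields the gradient of the data-misfit objective at roughly the cost of one extra simulation, independent of the number of parameters (one scalar here for brevity; grid scaling is absorbed into D). The result is verified against a finite difference.

      import numpy as np

      n, steps, dt = 50, 200, 0.2
      L = (np.diag(-2*np.ones(n)) + np.diag(np.ones(n-1), 1)
           + np.diag(np.ones(n-1), -1))          # 1D Laplacian stencil

      def forward(D, u0):
          us = [u0]
          for _ in range(steps):
              us.append(us[-1] + dt*D*(L @ us[-1]))
          return us

      u0 = np.exp(-((np.arange(n) - 25.0)**2)/20.0)
      data = forward(0.3, u0)[-1]                # synthetic measurements

      def grad_adjoint(D):
          us = forward(D, u0)
          lam = 2*(us[-1] - data)                # dJ/du_N
          g = 0.0
          for uprev in reversed(us[:-1]):
              g += dt*(lam @ (L @ uprev))        # accumulate dJ/dD
              lam = lam + dt*D*(L.T @ lam)       # adjoint step backward
          return g

      J = lambda D: np.sum((forward(D, u0)[-1] - data)**2)
      fd = (J(0.25 + 1e-6) - J(0.25 - 1e-6))/2e-6
      print("adjoint gradient:", grad_adjoint(0.25), " finite diff:", fd)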

  4. Rainfall Simulation: methods, research questions and challenges

    NASA Astrophysics Data System (ADS)

    Ries, J. B.; Iserloh, T.

    2012-04-01

    In erosion research, rainfall simulations are used to improve process knowledge and, in the field, to assess overland flow generation, infiltration, and erosion rates. In all these fields of research, rainfall experiments have become an indispensable part of the methodology. In this context, small portable rainfall simulators with small test-plot sizes of one square meter or less, low weight, and low water consumption are in demand. Accordingly, devices with manageable technical effort, such as nozzle-type simulators, seem to prevail over larger simulators. The reasons are obvious: lower costs and less time needed for mounting enable a higher repetition rate. Given the large number of research questions and fields of application, and not least the great technical creativity of research staff, a large number of different experimental setups is available. Each device produces a different rainfall, leading to different kinetic energy inputs at the soil surface and, accordingly, different erosion results. Hence, important questions concern the definition, comparability, measurement and simulation of natural rainfall, and the problem of comparability in general. Another important discussion topic is reaching agreement on an appropriate calibration method for simulated rainfall, in order to enable comparison of results from different rainfall simulator set-ups. In most publications, only the following "nice" sentence can be read: "Our rainfall simulator generates a rainfall spectrum that is similar to natural rainfall!" The most substantial and critical properties of a simulated rainfall are the drop-size distribution, the fall velocities of the drops, and the spatial distribution of the rainfall over the plot area. In a comparison of the most important methods, the Laser Distrometer turned out to be the most up

  5. Estimation of total CH4 emission from Japanese rice paddies using a new estimation method based on the DNDC-Rice simulation model.

    PubMed

    Katayanagi, Nobuko; Fumoto, Tamon; Hayano, Michiko; Shirato, Yasuhito; Takata, Yusuke; Leon, Ai; Yagi, Kazuyuki

    2017-12-01

    Methane (CH4) is a greenhouse gas, and paddy fields are one of its main anthropogenic sources. In Japan, country-specific emission factors (EFs) have been applied since 2003 to estimate national-scale CH4 emission from paddy fields. However, these EFs did not consider the effects of factors that influence CH4 emission (e.g., amount of organic C inputs, field drainage rate, climate) and can therefore produce estimates with high uncertainty. To improve the reliability of national-scale estimates, we revised the EFs based on simulations by the DeNitrification-DeComposition-Rice (DNDC-Rice) model in a previous study. Here, we estimated total CH4 emission from paddy fields in Japan from 1990 to 2010 using these revised EFs and databases on independent variables that influence emission (organic C application rate, paddy area, and proportions of paddy area for each drainage rate class and water management regime). CH4 emission ranged from 323 to 455 kt C yr(-1) (1.1 to 2.2 times the range of 206 to 285 kt C yr(-1) calculated using previous EFs). Although our method may have overestimated CH4 emissions, most of the above differences were presumably caused by underestimation by the previous method due to a lack of emission data from slow-drainage fields, lower organic C inputs than recent levels, neglect of regional climatic differences, and underestimation of the area of continuously flooded paddies. Our estimate (406 kt C in 2000) was higher than that by the IPCC Tier 1 method (305 kt C in 2000), presumably because regional variations in CH4 emission rates are not accounted for by the Tier 1 method. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Novel Methods for Electromagnetic Simulation and Design

    DTIC Science & Technology

    2016-08-03

    Report AFRL-AFOSR-VA-TR-2016-0272: Novel Methods for Electromagnetic Simulation and Design. Leslie Greengard, New York University, 70 Washington Square S, New York. Grant number FA9550-10-1-0180; program element 61102F. The project addressed electromagnetic scattering in realistic environments involving complex geometry over a six-year performance period (including a one-year no-cost extension)...

  7. Reduced Basis Method for Nanodevices Simulation

    SciTech Connect

    Pau, George Shu Heng

    2008-05-23

    Ballistic transport simulation in nanodevices, which involves self-consistently solving a coupled Schrodinger-Poisson system of equations, is usually computationally intensive. Here, we propose coupling the reduced basis method with the subband decomposition method to improve the overall efficiency of the simulation. By exploiting an a posteriori error estimation procedure and a greedy sampling algorithm, we are able to design an algorithm where the computational cost is reduced significantly. In addition, the computational cost grows only marginally with the number of grid points in the confined direction.

  8. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.

  9. A novel, simple method to simulate gelling process of injectable biodegradable in situ forming drug delivery system based on determination of electrical conductivity.

    PubMed

    Wang, Keke; Jia, Qiang; Yuan, Jing; Li, Sanming

    2011-02-14

    The purpose of the present study was to develop a novel, simple method, based on determination of electrical conductivity, to trace the gelling process of injectable biodegradable in situ forming organogels after administration. The electrical conductivity of pH 7.4 PBS solution with different amounts of N-methyl-2-pyrrolidone (NMP), and of a drug-free organogel formulation containing 0.6 mL NMP, was determined at 37 °C. The electrical conductivity of the PBS solution was linearly proportional to the amount of NMP. The organogel containing 0.6 mL NMP in PBS solution showed decreasing electrical conductivity over time, while the electrical conductivity became almost constant at 7.58 mS/cm after 110 min, which nearly equaled the electrical conductivity of 0.6 mL NMP in PBS solution (7.59 mS/cm). These data indicated that the diffusion of NMP caused the decrease in system electrical conductivity and that NMP had completely diffused from the organogel after 110 min, which led to the constant electrical conductivity. Meanwhile, photographs of the organogel showed that the gel formed gradually from periphery to center, and had totally formed after 110 min. The end point of NMP diffusion from the organogel could be anticipated and controlled by this method. Consequently, this electrochemical method visually simulated the gelling process and located the gelling time of the organogel in the medium solution by measuring the variation of electrical conductivity. Copyright © 2010 Elsevier B.V. All rights reserved.
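
    The method is essentially a calibration curve: conductivity is linear in the amount of NMP, so a measured conductivity maps back to the amount of NMP released. A sketch with assumed calibration numbers (only the 0.6 mL end point, 7.59 mS/cm, is taken from the abstract):

      import numpy as np

      # Assumed calibration points: conductivity (mS/cm) of PBS with
      # increasing amounts of NMP added.
      nmp_ml = np.array([0.0, 0.15, 0.30, 0.45, 0.60])
      cond = np.array([8.90, 8.57, 8.24, 7.91, 7.59])

      slope, intercept = np.polyfit(nmp_ml, cond, 1)   # linear calibration

      def nmp_released(conductivity):
          # Invert the calibration line: reading -> NMP diffused out.
          return (conductivity - intercept)/slope

      # 7.59 mS/cm corresponds to complete release of the 0.6 mL load,
      # i.e. the gelling end point identified in the abstract.
      print("NMP released at 7.59 mS/cm: %.2f mL" % nmp_released(7.59))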

  10. A Simulation Method Measuring Psychomotor Nursing Skills.

    ERIC Educational Resources Information Center

    McBride, Helena; And Others

    1981-01-01

    The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.…

  11. Method for Constructing Standardized Simulated Root Canals.

    ERIC Educational Resources Information Center

    Schulz-Bongert, Udo; Weine, Franklin S.

    1990-01-01

    The construction of visual and manipulative aids, clear resin blocks with root-canal-like spaces, for simulation of root canals is explained. Time, materials, and techniques are discussed. The method allows for comparison of canals, creation of any configuration of canals, and easy presentation during instruction. (MSE)

  12. Novel methods for molecular dynamics simulations.

    PubMed

    Elber, R

    1996-04-01

    In the past year, significant progress was made in the development of molecular dynamics methods for the liquid phase and for biological macromolecules. Specifically, faster algorithms to pursue molecular dynamics simulations were introduced and advances were made in the design of new optimization algorithms guided by molecular dynamics protocols. A technique to calculate the quantum spectra of protein vibrations was introduced.

  14. A method for simulating a flux-locked DC SQUID

    NASA Technical Reports Server (NTRS)

    Gutt, G. M.; Kasdin, N. J.; Condron, M. R., II; Muhlfelder, B.; Lockhart, J. M.; Cromar, M. W.

    1993-01-01

    The authors describe a computationally efficient and accurate method for simulating a dc SQUID's V-Phi (voltage-flux) and I-V characteristics which has proven valuable in evaluating and improving various SQUID readout methods. The simulation of the SQUID is based on fitting of previously acquired data from either a real or a modeled device using the Fourier transform of the V-Phi curve. This method does not predict SQUID behavior, but rather is a way of replicating a known behavior efficiently with portability into various simulation programs such as SPICE. The authors discuss the methods used to simulate the SQUID and the flux-locking control electronics, and present specific examples of this approach. Results include an estimate of the slew rate and linearity of a simple flux-locked loop using a characterized dc SQUID.
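
    A minimal sketch of the Fourier-series idea: fit coefficients to sampled V-Phi data and evaluate the fitted curve at arbitrary flux inside a simulation loop. The sample data below are a stand-in, not measured SQUID characteristics.

      import numpy as np

      # Stand-in V-Phi samples over one flux quantum (microvolts).
      phi = np.linspace(0, 1, 64, endpoint=False)
      v_meas = 20 + 15*np.cos(2*np.pi*phi) + 4*np.cos(4*np.pi*phi)

      c = np.fft.rfft(v_meas)/phi.size           # Fourier coefficients

      def v_of_phi(phi_ext, nharm=5):
          """Evaluate the fitted periodic V(Phi) at arbitrary flux."""
          phi_ext = np.atleast_1d(phi_ext)
          out = np.full(phi_ext.shape, c[0].real)
          for k in range(1, nharm + 1):
              out += 2*(c[k].real*np.cos(2*np.pi*k*phi_ext)
                        - c[k].imag*np.sin(2*np.pi*k*phi_ext))
          return out

      print("fit error:", np.abs(v_of_phi(phi) - v_meas).max())
      print("V at 0.37 flux quanta:", v_of_phi(0.37))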

  16. A method for ensemble wildland fire simulation

    Treesearch

    Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain

    2011-01-01

    An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...

  17. Bridging the gap: simulations meet knowledge bases

    NASA Astrophysics Data System (ADS)

    King, Gary W.; Morrison, Clayton T.; Westbrook, David L.; Cohen, Paul R.

    2003-09-01

    Tapir and Krill are declarative languages for specifying actions and agents, respectively, that can be executed in simulation. As such, they bridge the gap between strictly declarative knowledge bases and strictly executable code. Tapir and Krill components can be combined to produce models of activity which can answer questions about mechanisms and processes using conventional inference methods and simulation. Tapir was used in DARPA's Rapid Knowledge Formation (RKF) project to construct models of military tactics from the Army Field Manual FM3-90. These were then used to build Courses of Actions (COAs) which could be critiqued by declarative reasoning or via Monte Carlo simulation. Tapir and Krill can be read and written by non-knowledge engineers making it an excellent vehicle for Subject Matter Experts to build and critique knowledge bases.

  18. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods oriented at improving computational modeling capabilities for multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: [i] the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; [ii] the resolution of the wide range of time and length scales governing the phenomena under investigation; [iii] the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and [iv] the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics and smoothed particle hydrodynamics, exploiting their unifying concepts, such as the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to move across seemingly unrelated areas of research.

  19. Effective medium based optical analysis with finite element method simulations to study photochromic transitions in Ag-TiO2 nanocomposite films

    NASA Astrophysics Data System (ADS)

    Abhilash, T.; Balasubrahmaniyam, M.; Kasiviswanathan, S.

    2016-03-01

    Photochromic transitions in silver nanoparticle (AgNP)-embedded titanium dioxide (TiO2) films under green light illumination are marked by a reduction in strength and a blue shift in position of the localized surface plasmon resonance (LSPR) associated with the AgNPs. These transitions, which happen on the sub-nanometer length scale, have been analysed using the variations observed in the effective dielectric properties of the Ag-TiO2 nanocomposite films in response to the size reduction of the AgNPs and subsequent changes in the surrounding medium due to photo-oxidation. The Bergman-Milton formulation, based on the spectral density approach, is used to extract the dielectric properties and information about the geometrical distribution of the effective medium. Combined with finite element method simulations, we isolate the effects due to the change in average size of the nanoparticles from those due to the change in the dielectric function of the surrounding medium. By analysing the dynamics of photochromic transitions in the effective medium, we conclude that the observed blue shift in the LSPR is mainly due to the change in the dielectric function of the surrounding medium, while a shape-preserving effective size reduction of the AgNPs causes the decrease in LSPR strength.
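
    The Bergman-Milton spectral representation is more general than any single mixing rule; as a simpler illustration of extracting an effective dielectric response, the sketch below uses the classical Maxwell-Garnett rule with an assumed Drude-like silver permittivity. All parameter values are invented.

      import numpy as np

      def maxwell_garnett(eps_i, eps_m, f):
          # Effective permittivity of inclusions (eps_i) at volume
          # fraction f in a matrix (eps_m).
          num = eps_i + 2*eps_m + 2*f*(eps_i - eps_m)
          den = eps_i + 2*eps_m - f*(eps_i - eps_m)
          return eps_m*num/den

      hw = np.linspace(1.5, 3.5, 200)                  # photon energy, eV
      eps_ag = 5.0 - 9.0**2/(hw**2 + 1j*0.1*hw)        # Drude-like silver
      eps_eff = maxwell_garnett(eps_ag, 6.0 + 0j, f=0.05)

      # The LSPR appears as a peak in Im(eps_eff); size reduction and an
      # oxidizing environment shift and damp this resonance.
      print("LSPR near %.2f eV" % hw[np.argmax(eps_eff.imag)])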

  20. Parallel methods for the flight simulation model

    SciTech Connect

    Xiong, Wei Zhong; Swietlik, C.

    1994-06-01

    The Advanced Computer Applications Center (ACAC) has been involved in evaluating advanced parallel-architecture computers and the applicability of these machines to computer simulation models. The advanced systems investigated include parallel machines with shared-memory and distributed architectures, consisting of an eight-processor Alliant FX/8, a twenty-four-processor Sequent Symmetry, a Cray XMP, an IBM RISC 6000 model 550, and the Intel Touchstone eight-processor Gamma and 512-processor Delta machines. Since parallelizing a truly efficient application program for a parallel machine is a difficult task, implementation on these machines in a realistic setting has been largely overlooked. The ACAC has developed considerable expertise in optimizing and parallelizing application models on a collection of advanced multiprocessor systems. One such application model is the Flight Simulation Model, which uses a set of differential equations to describe the flight characteristics of a launched missile by means of a trajectory. The Flight Simulation Model was written in FORTRAN, with approximately 29,000 lines of source code. Depending on the number of trajectories, the computation can require several hours to a full day of CPU time on a DEC/VAX 8650 system. There is an impetus to reduce the execution time and utilize the advanced parallel-architecture computing environment available. ACAC researchers developed a parallel method that allows the Flight Simulation Model to run in parallel on a multiprocessor system. For the benchmark data tested, the parallel Flight Simulation Model implemented on the Alliant FX/8 achieved nearly linear speedup. In this paper, we describe a parallel method for the Flight Simulation Model. We believe the method presented here provides a general concept for the design of parallel applications. This concept, in most cases, can be adapted to many other sequential application programs.
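
    The enabling observation is that trajectories are independent, so they map cleanly onto processors. A toy sketch with a point-mass ballistic model (invented physics and parameters, standing in for the model's differential equations):

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def fly(params):
          # One trajectory: point mass with quadratic drag (toy physics).
          v0, angle = params
          dt, g, k = 0.01, 9.81, 0.02
          x = np.zeros(2)
          v = v0*np.array([np.cos(angle), np.sin(angle)])
          while x[1] >= 0.0:
              a = np.array([0.0, -g]) - k*np.linalg.norm(v)*v
              v += a*dt
              x += v*dt
          return x[0]                       # downrange distance

      if __name__ == "__main__":
          cases = [(300.0, np.deg2rad(a)) for a in range(15, 76, 5)]
          # Independent trajectories map directly onto worker processes.
          with ProcessPoolExecutor() as pool:
              ranges = list(pool.map(fly, cases))
          print([round(r, 1) for r in ranges])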

  1. Mesoscopic Simulation Methods for Polymer Dynamics

    NASA Astrophysics Data System (ADS)

    Larson, Ronald

    2015-03-01

    We assess the accuracy and efficiency of mesoscopic simulation methods, namely Brownian Dynamics (BD), Stochastic Rotation Dynamics (SRD) and Dissipative Particle Dynamics (DPD), for polymers in solution at equilibrium and in flows in microfluidic geometries. Both SRD and DPD use solvent "particles" to carry momentum, and so account automatically for hydrodynamic interactions both within isolated polymer coils, and with other polymer molecules and with nearby solid boundaries. We assess quantitatively the effects of artificial particle inertia and fluid compressibility and show that they can be made small with appropriate choice of simulation parameters. We then use these methods to study flow-induced migration of polymer chains produced by: 1) hydrodynamic interactions, 2) streamline curvature or stress-gradients, and 3) convection of wall depletion zones. We show that huge concentration gradients can be produced by these mechanisms in microfluidic geometries that can be exploited for separation of polymers by size in periodic contraction-expansion geometries. We also assess the range of conditions for which BD, SRD or DPD is preferable for mesoscopic simulations. Finally, we show how such methods can be used to simulate quantitatively the swimming of micro-organisms such as E. coli. In collaboration with Lei Jiang and Tongyang Zhao, University of Michigan, Ann Arbor, MI.
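
    A minimal free-draining Brownian dynamics sketch of a bead-spring chain (Euler-Maruyama update; illustrative parameters). Hydrodynamic interactions, which SRD and DPD capture via explicit solvent particles, are deliberately absent here.

      import numpy as np

      rng = np.random.default_rng(5)

      nbeads, dt, steps = 10, 1e-3, 20000
      kT, zeta, ks = 1.0, 1.0, 50.0          # all parameters illustrative
      x = rng.normal(0, 0.5, (nbeads, 3))

      def spring_forces(x):
          F = np.zeros_like(x)
          bond = x[1:] - x[:-1]
          F[:-1] += ks*bond                  # Hookean nearest-neighbor springs
          F[1:] -= ks*bond
          return F

      for _ in range(steps):                 # Euler-Maruyama BD update
          x += dt*spring_forces(x)/zeta
          x += rng.normal(0, np.sqrt(2*kT*dt/zeta), x.shape)

      print("end-to-end distance: %.2f" % np.linalg.norm(x[-1] - x[0]))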

  2. Discontinuous Galerkin Methods for Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Collis, S. Scott

    2002-01-01

    A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.

  3. A method to incorporate the effect of beam quality on image noise in a digitally reconstructed radiograph (DRR) based computer simulation for optimisation of digital radiography

    NASA Astrophysics Data System (ADS)

    Moore, Craig S.; Wood, Tim J.; Saunderson, John R.; Beavis, Andrew W.

    2017-09-01

    The use of computer simulated digital x-radiographs for optimisation purposes has become widespread in recent years. To make these optimisation investigations effective, it is vital that simulated radiographs contain accurate anatomical and system noise. Computer algorithms that simulate radiographs based solely on the incident detector x-ray intensity (‘dose’) have been reported extensively in the literature. However, while it has been established for digital mammography that x-ray beam quality is an important factor when modelling noise in simulated images, there are no such studies for diagnostic imaging of the chest, abdomen and pelvis. This study investigates the influence of beam quality on image noise in a digital radiography (DR) imaging system, and incorporates these effects into a digitally reconstructed radiograph (DRR) computer simulator. Image noise was measured on a real DR imaging system as a function of dose (absorbed energy) over a range of clinically relevant beam qualities. Simulated ‘absorbed energy’ and ‘beam quality’ DRRs were then created for each patient and tube voltage under investigation. Simulated noise images, corrected for dose and beam quality, were subsequently produced from the absorbed energy and beam quality DRRs, using the measured noise, absorbed energy and beam quality relationships. The noise images were superimposed onto the noiseless absorbed energy DRRs to create the final images. Signal-to-noise measurements in simulated chest, abdomen and spine images were within 10% of the corresponding measurements in real images. This compares favourably with our previous algorithm, in which images were corrected for dose only and agreement was within 20%.
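
    The correction scheme described above can be pictured as a lookup: measured noise as a function of both absorbed energy and beam quality is interpolated per pixel and superimposed on the noiseless DRR. A minimal sketch, with an invented calibration table standing in for the measured relationships:

    ```python
    # Sketch of the noise-injection idea: noise measured as a function of
    # absorbed energy AND beam quality is interpolated and superimposed on
    # a noiseless DRR. The lookup table values are invented placeholders.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical calibration: sigma measured at (absorbed energy, kV).
    energies = np.array([1.0, 2.0, 4.0, 8.0])        # arbitrary units
    kilovolts = np.array([60.0, 80.0, 100.0, 120.0])
    sigma_table = np.array([[8.0, 7.0, 6.0, 5.0],
                            [6.0, 5.0, 4.5, 4.0],
                            [4.0, 3.5, 3.0, 2.8],
                            [3.0, 2.5, 2.2, 2.0]])
    sigma_of = RegularGridInterpolator((energies, kilovolts), sigma_table)

    def add_noise(drr_energy, drr_quality, rng=np.random.default_rng(2)):
        """Superimpose per-pixel Gaussian noise whose width follows the
        measured dose/beam-quality relationship."""
        pts = np.stack([drr_energy.ravel(), drr_quality.ravel()], axis=-1)
        sigma = sigma_of(pts).reshape(drr_energy.shape)
        return drr_energy + rng.normal(scale=sigma)

    drr = np.full((128, 128), 4.0)       # noiseless 'absorbed energy' DRR
    kv = np.full((128, 128), 90.0)       # 'beam quality' DRR
    print(add_noise(drr, kv).std())      # noise level set by the table
    ```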

  4. Physalis: a New Method for Particle Simulations

    NASA Astrophysics Data System (ADS)

    Takagi, Shu; Oguz, Hasan; Prosperetti, Andrea

    2000-11-01

    A new computational method for full Navier-Stokes viscous flow past cylinders and spheres is described and illustrated with preliminary results. Since, in the rest frame, the velocity vanishes on the particle, the Stokes equations apply in the immediate neighborhood of the surface. The analytic solutions of these equations available for both spheres and cylinders make it possible to effectively remove the particle, whose effect is replaced by a consistency condition on the nodes of the computational grid that surround it. This condition is satisfied iteratively by a method that solves the field equations over the entire computational domain disregarding the presence of the particles, so that fast solvers can be used. The procedure eliminates the geometrical complexity of multi-particle simulations and permits the simulation of disperse flows containing a large number of particles at a moderate computational cost. Supported by DOE and the Japanese MESSC.

  5. Twitter's tweet method modelling and simulation

    NASA Astrophysics Data System (ADS)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper proposes a model of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them, and uses the design science research methodology for the proof of concept of the models and modelling processes. The models have been developed for a Twitter marketing agent/company and tested in real circumstances with real numbers; they were finalized through a number of revisions and iterations of the design, develop, simulate, test and evaluate cycle. The paper also addresses the methods best suited to organized, targeted promotion on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are authenticated by the management of the company organization. The paper implements system dynamics concepts of Twitter marketing methods modelling and produces models of various Twitter marketing situations. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.
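
    The iThink models referred to above are stock-and-flow (system dynamics) models. A minimal sketch of that modelling style, with invented rates rather than the paper's calibrated values, might look like:

    ```python
    # Minimal stock-and-flow sketch in the spirit of an iThink model:
    # followers are a stock; tweets drive an inflow. All rates invented.
    def simulate_followers(days=90, tweets_per_day=5.0,
                           gain_per_tweet=3.0, churn_rate=0.01,
                           followers=1000.0):
        history = []
        for _ in range(days):                  # Euler integration, dt = 1 day
            inflow = tweets_per_day * gain_per_tweet
            outflow = churn_rate * followers
            followers += inflow - outflow
            history.append(followers)
        return history

    print(round(simulate_followers()[-1]))
    ```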

  6. A method based on Monte Carlo simulations and voxelized anatomical atlases to evaluate and correct uncertainties on radiotracer accumulation quantitation in beta microprobe studies in the rat brain

    NASA Astrophysics Data System (ADS)

    Pain, F.; Dhenain, M.; Gurden, H.; Routier, A. L.; Lefebvre, F.; Mastrippolito, R.; Lanièce, P.

    2008-10-01

    The β-microprobe is a simple and versatile technique complementary to small animal positron emission tomography (PET). It relies on local measurements of the concentration of positron-labeled molecules. So far, it has been successfully used in anesthetized rats for pharmacokinetics experiments and for the study of brain energetic metabolism. However, the ability of the technique to provide accurate quantitative measurements using 18F, 11C and 15O tracers is likely to suffer from the contribution of the 511 keV gamma-ray background to the signal and from the contribution of positrons from brain loci surrounding the locus of interest. The aim of the present paper is to provide a method of evaluating several parameters that are expected to affect the quantification of recordings performed in vivo with this methodology. We have developed realistic voxelized phantoms of the rat whole body and brain, and used them as input geometries for Monte Carlo simulations of previous β-microprobe reports. In the context of realistic experiments (binding of 11C-Raclopride to D2 dopaminergic receptors in the striatum; local glucose metabolic rate measurement with 18F-FDG; and H215O blood flow measurements in the somatosensory cortex), we have calculated the detection efficiencies and the corresponding contribution of 511 keV gammas from accumulation in peripheral organs. We confirmed that the 511 keV gamma background does not impair quantification. To evaluate the contribution of positrons from adjacent structures, we have developed β-Assistant, a program based on a rat brain voxelized atlas and matrices of local detection efficiencies calculated by Monte Carlo simulations for several probe geometries. This program was used to calculate the 'apparent sensitivity' of the probe for each brain structure included in the detection volume. For a given localization of a probe within the brain, this allows us to quantify the different sources of beta signal. Finally, since stereotaxic accuracy is
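
    The 'apparent sensitivity' bookkeeping described above reduces, per probe position, to weighting each atlas structure's activity by its Monte Carlo detection efficiency. A minimal sketch with invented structure names and numbers:

    ```python
    # Sketch of the apparent-sensitivity bookkeeping: the beta signal is a
    # sum over atlas structures of (activity concentration) x (local
    # detection efficiency from Monte Carlo). All values are invented.
    efficiency = {"striatum": 0.62, "cortex": 0.05, "ventricle": 0.01}
    activity = {"striatum": 35.0, "cortex": 12.0, "ventricle": 0.0}  # kBq/cc

    signal = {s: efficiency[s] * activity[s] for s in efficiency}
    total = sum(signal.values())
    for structure, contribution in signal.items():
        print(f"{structure}: {100 * contribution / total:.1f}% of beta signal")
    ```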

  7. An improved method for simulating radiographs

    SciTech Connect

    Laguna, G.W.

    1986-09-30

    The parameters involved in generating actual radiographs and what can and cannot be modeled are examined in this report. Using the spectral distribution of the radiation source and the mass absorption curve for the material comprising the part to be modeled, the actual amount of radiation that would pass through the part and reach the film is determined. This method increases confidence in the results of the simulation and enables the modeling of parts made of multiple materials. (LEW)
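
    The core computation described above is a polychromatic Beer-Lambert integral: the source spectrum is attenuated by each material thickness along the ray. A minimal sketch with illustrative (not measured) spectrum and attenuation data:

    ```python
    # Sketch of the core physics: integrate the source spectrum against
    # the Beer-Lambert transmission of a (possibly multi-material) part.
    # Spectrum and attenuation values are illustrative placeholders.
    import numpy as np

    energies = np.linspace(20, 150, 14)                    # keV bins
    spectrum = np.exp(-0.5 * ((energies - 70) / 30) ** 2)  # photons per bin

    # linear attenuation coefficient (1/cm) per material, per energy bin
    mu = {"steel": 1.2 * np.exp(-energies / 60.0),
          "aluminium": 0.3 * np.exp(-energies / 80.0)}

    def transmitted(thicknesses_cm):
        """Total transmitted intensity through stacked materials."""
        atten = sum(mu[m] * t for m, t in thicknesses_cm.items())
        return np.sum(spectrum * np.exp(-atten))

    print(transmitted({"steel": 0.5, "aluminium": 2.0}) / np.sum(spectrum))
    ```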

  8. A method to produce and validate a digitally reconstructed radiograph-based computer simulation for optimisation of chest radiographs acquired with a computed radiography imaging system

    PubMed Central

    Moore, C S; Liney, G P; Beavis, A W; Saunderson, J R

    2011-01-01

    Objectives The purpose of this study was to develop and validate a computer model to produce realistic simulated computed radiography (CR) chest images using CT data sets of real patients. Methods Anatomical noise, which is the limiting factor in determining pathology in chest radiography, is realistically simulated by the CT data, and frequency-dependent noise has been added post-digitally reconstructed radiograph (DRR) generation to simulate exposure reduction. Realistic scatter and scatter fractions were measured in images of a chest phantom acquired on the CR system simulated by the computer model and added post-DRR calculation. Results The model has been validated with a phantom and patients and shown to provide predictions of signal-to-noise ratios (SNRs), tissue-to-rib ratios (TRRs: a measure of soft tissue pixel value to that of rib) and pixel value histograms that lie within the range of values measured with patients and the phantom. The maximum difference in measured SNR to that calculated was 10%. TRR values differed by a maximum of 1.3%. Conclusion Experienced image evaluators have responded positively to the DRR images, are satisfied they contain adequate anatomical features and have deemed them clinically acceptable. Therefore, the computer model can be used by image evaluators to grade chest images presented at different tube potentials and doses in order to optimise image quality and patient dose for clinical CR chest radiographs without the need for repeat patient exposures. PMID:21933979

  9. Computer Based Simulation of Laboratory Experiments.

    ERIC Educational Resources Information Center

    Edward, Norrie S.

    1997-01-01

    Examines computer based simulations of practical laboratory experiments in engineering. Discusses the aims and achievements of lab work (cognitive, process, psychomotor, and affective); types of simulations (model building and behavioral); and the strengths and weaknesses of simulations. Describes the development of a centrifugal pump simulation,…

  11. Comparison of EBSD patterns simulated by two multislice methods.

    PubMed

    Liu, Q B; Cai, C Y; Zhou, G W; Wang, Y G

    2016-10-01

    The extraction of crystallography information from electron backscatter diffraction (EBSD) patterns can be facilitated by diffraction simulations based on the dynamical electron diffraction theory. In this work, the EBSD patterns are successfully simulated by two multislice methods, that is, the real space (RS) method and the revised real space (RRS) method. The calculation results by the two multislice methods are compared and analyzed in detail with respect to different accelerating voltages, Debye-Waller factors and aperture radii. It is found that the RRS method provides a larger view field of the EBSD patterns than that by the RS method under the same calculation conditions. Moreover, the Kikuchi bands of the EBSD patterns obtained by the RRS method have a better match with the experimental patterns than those by the RS method. Especially, the lattice parameters obtained by the RRS method are more accurate than those by the RS method. These results demonstrate that the RRS method is more accurate for simulating the EBSD patterns than the RS method within the accepted computation time.

  12. Interactive methods for exploring particle simulation data

    SciTech Connect

    Co, Christopher S.; Friedman, Alex; Grote, David P.; Vay, Jean-Luc; Bethel, E. Wes; Joy, Kenneth I.

    2004-05-01

    In this work, we visualize high-dimensional particle simulation data using a suite of scatter plot-based visualizations coupled with interactive selection tools. We use traditional 2D and 3D projection scatter plots as well as a novel oriented disk rendering style to convey various information about the data. Interactive selection tools allow physicists to manually classify "interesting" sets of particles that are highlighted across multiple, linked views of the data. The power of our application is the ability to correspond new visual representations of the simulation data with traditional, well understood visualizations. This approach supports the interactive exploration of the high-dimensional space while promoting discovery of new particle behavior.

  13. Matching methods to create paired survival data based on an exposure occurring over time: a simulation study with application to breast cancer

    PubMed Central

    2014-01-01

    Background Paired survival data are often used in clinical research to assess the prognostic effect of an exposure. Matching generates correlated censored data, with the expectation that the paired subjects differ only in the exposure. Creating pairs when the exposure is an event occurring over time can be tricky. We applied a commonly used method, Method 1, which creates pairs a posteriori, and propose an alternative method, Method 2, which creates pairs in “real-time”. We used two semi-parametric models devoted to correlated censored data to estimate the average effect of the exposure over time, HR(t): the Holt and Prentice (HP) model, and the Lee, Wei and Amato (LWA) model. Contrary to the HP, the LWA allows adjustment for the matching covariates (LWAa) and for an interaction (LWAi) between exposure and covariates (assimilated to prognostic profiles). The aim of our study was to compare the performance of each model under the two matching methods. Methods Extensive simulations were conducted. We simulated cohort data sets on which we applied the two matching methods, the HP and the LWA. We used our conclusions to assess the prognostic effect of subsequent pregnancy after treatment for breast cancer in a female cohort treated and followed up in eight French hospitals. Results In terms of bias and RMSE, Method 2 performed better than Method 1 in designing the pairs, and LWAa was the best model in all situations except when there was an interaction between exposure and covariates, for which LWAi was more appropriate. On our real data set, we found opposite effects of pregnancy according to the six prognostic profiles, but none were statistically significant. We probably lacked statistical power or reached the limits of our approach. The pair-censoring options chosen for the Method 2 - LWA combination still had to be compared with others. Conclusions Designing correlated censored data with Method 2 seemed to be the most pertinent way to create pairs, when the criterion
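
    Method 2's “real-time” pairing can be understood as risk-set matching: when a subject becomes exposed at time t, a control with the same prognostic profile, still at risk and still unexposed at t, is drawn. A schematic sketch (the field names and the random tie-breaking rule are assumptions, not the paper's exact algorithm):

    ```python
    # Schematic risk-set matching: pair each newly exposed subject with a
    # same-profile control still at risk and unexposed at that time.
    import random

    def match_pairs(subjects, seed=3):
        """subjects: dicts with 'id', 'profile', 'exposure_time'
        (None if never exposed) and 'followup_time'."""
        rng = random.Random(seed)
        used, pairs = set(), []
        for s in sorted((x for x in subjects
                         if x["exposure_time"] is not None),
                        key=lambda x: x["exposure_time"]):
            t = s["exposure_time"]
            risk_set = [c for c in subjects
                        if c["id"] != s["id"] and c["id"] not in used
                        and c["profile"] == s["profile"]
                        and c["followup_time"] > t
                        and (c["exposure_time"] is None
                             or c["exposure_time"] > t)]
            if risk_set:
                control = rng.choice(risk_set)
                used.update({s["id"], control["id"]})
                pairs.append((s["id"], control["id"], t))
        return pairs

    subjects = [
        {"id": 1, "profile": "A", "exposure_time": 2.0, "followup_time": 8.0},
        {"id": 2, "profile": "A", "exposure_time": None, "followup_time": 9.0},
        {"id": 3, "profile": "B", "exposure_time": 6.0, "followup_time": 7.0},
    ]
    print(match_pairs(subjects))  # [(1, 2, 2.0)]; no profile-B control for 3
    ```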

  14. Dispersion Estimation and Its Effect on Test Performance in RNA-seq Data Analysis: A Simulation-Based Comparison of Methods

    PubMed Central

    Landau, William Michael; Liu, Peng

    2013-01-01

    A central goal of RNA sequencing (RNA-seq) experiments is to detect differentially expressed genes. In the ubiquitous negative binomial model for RNA-seq data, each gene is given a dispersion parameter, and correctly estimating these dispersion parameters is vital to detecting differential expression. Since the dispersions control the variances of the gene counts, underestimation may lead to false discovery, while overestimation may lower the rate of true detection. After briefly reviewing several popular dispersion estimation methods, this article describes a simulation study that compares them in terms of point estimation and the effect on the performance of tests for differential expression. The methods that maximize the test performance are the ones that use a moderate degree of dispersion shrinkage: the DSS, Tagwise wqCML, and Tagwise APL. In practical RNA-seq data analysis, we recommend using one of these moderate-shrinkage methods with the QLShrink test in the QuasiSeq R package. PMID:24349066
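
    The moderate-shrinkage idea that the study favours can be caricatured as a weighted compromise between a noisy genewise dispersion estimate and a fitted mean-dispersion trend. A toy sketch of that idea, not any of the cited estimators:

    ```python
    # Toy dispersion shrinkage: pull noisy genewise estimates part-way
    # toward a mean-dependent trend on the log scale. Weight is invented.
    import numpy as np

    def shrink_dispersions(genewise, trend, weight=0.4):
        """weight=0 keeps the trend, weight=1 keeps raw genewise values."""
        return np.exp(weight * np.log(genewise)
                      + (1 - weight) * np.log(trend))

    raw = np.array([0.05, 0.80, 0.20, 0.01])
    trend = np.array([0.10, 0.12, 0.15, 0.08])  # fitted dispersion trend
    print(shrink_dispersions(raw, trend))
    ```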

  15. A discrete event method for wave simulation

    SciTech Connect

    Nutaro, James J

    2006-01-01

    This article describes a discrete event interpretation of the finite difference time domain (FDTD) and digital wave guide network (DWN) wave simulation schemes. The discrete event method is formalized using the discrete event system specification (DEVS). The scheme is shown to have errors that are proportional to the resolution of the spatial grid. A numerical example demonstrates the relative efficiency of the scheme with respect to FDTD and DWN schemes. The potential for the discrete event scheme to reduce numerical dispersion and attenuation errors is discussed.

  16. A novel load balancing method for hierarchical federation simulation system

    NASA Astrophysics Data System (ADS)

    Bin, Xiao; Xiao, Tian-yuan

    2013-07-01

    In contrast with a single-HLA-federation framework, a hierarchical federation framework can improve the performance of a large-scale simulation system to a certain degree by distributing load over several RTIs. However, in a hierarchical federation framework the RTI is still the center of message exchange and thus the performance bottleneck of the federation; the data explosion in a large-scale HLA federation may overload the RTI, degrading federation performance or even causing fatal errors. To address this problem, this paper proposes a load balancing method for hierarchical federation simulation systems based on queuing theory, comprising three main modules: queue-length prediction, a load-control policy, and a controller. The method improves the utilization of federate-node resources and improves the performance of the HLA simulation system by balancing load across the RTIG and the federates. Finally, experimental results are presented to demonstrate the efficient control achieved by the method.
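
    A queue-length prediction module of the kind described can be illustrated with the simplest queuing-theory result, the M/M/1 mean queue length. The rates and threshold below are invented; the abstract does not specify the paper's actual predictor.

    ```python
    # Illustrative M/M/1 queue-length prediction: if the predicted RTI
    # queue exceeds a threshold, trigger rebalancing. Numbers invented.
    def predicted_queue_length(arrival_rate, service_rate):
        rho = arrival_rate / service_rate          # utilization
        if rho >= 1.0:
            return float("inf")                    # unstable: queue grows
        return rho / (1.0 - rho)                   # mean number in system

    def should_rebalance(arrival_rate, service_rate, threshold=10.0):
        return predicted_queue_length(arrival_rate, service_rate) > threshold

    print(should_rebalance(95.0, 100.0))  # True: predicted length is 19
    ```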

  17. A Method of Simulating Fluid Structure Interactions for Deformable Decelerators

    NASA Astrophysics Data System (ADS)

    Gidzak, Vladimyr Mykhalo

    A method is developed for performing simulations that contain fluid-structure interactions between deployable decelerators and a high-speed compressible flow. The problem of coupling together multiple physical systems is examined, with discussion of the strength of coupling for various methods. A non-monolithic, strongly coupled option based on grid deformation is presented for fluid-structure systems. A class of algebraic grid deformation methods is then presented, with examples of increasing complexity. The strength of the fluid-structure coupling is validated against two analytic problems, chosen to test the time-dependent behavior of structure-on-fluid and fluid-on-structure interactions. A one-dimensional material heating model is also validated against experimental data. Results are provided for simulations of a wind-tunnel-scale disk-gap-band parachute with comparison to experimental data. Finally, a simulation is performed on a flight-scale tension cone decelerator, with examination of time-dependent material stress and heating.
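
    Algebraic grid deformation of the kind referenced above propagates a boundary displacement into the interior through an explicit blending function, with no PDE solve. A minimal 2-D sketch (geometry and blending exponent are illustrative, not the dissertation's scheme):

    ```python
    # Minimal algebraic grid deformation: displace the y=0 boundary and
    # blend the displacement to zero at the far field with a polynomial.
    import numpy as np

    ny, nx = 21, 41
    y = np.linspace(0.0, 1.0, ny)[:, None]   # 0 = moving wall, 1 = far field
    x = np.linspace(0.0, 4.0, nx)[None, :]
    X, Y = np.broadcast_arrays(x, y)

    def deform(wall_displacement, decay=3):
        blend = (1.0 - y) ** decay           # 1 at wall, 0 at far field
        return Y + blend * wall_displacement(x)

    new_Y = deform(lambda x: 0.1 * np.sin(np.pi * x / 4.0))
    print(float(new_Y[0, 10] - Y[0, 10]))    # wall node has moved
    ```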

  18. Efficient method for transport simulations in quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Maczka, Mariusz; Pawlowski, Stanislaw

    2016-12-01

    An efficient method for simulating quantum transport in quantum cascade lasers is presented. The calculations are performed within a simple approximation inspired by Büttiker probes and based on a finite model for semiconductor superlattices. The formalism of non-equilibrium Green's functions is applied to determine selected transport parameters in a typical structure of a terahertz laser. Results were compared with those obtained for an infinite model as well as with other methods described in the literature.

  19. Physics-Based Simulations of Natural Hazards

    NASA Astrophysics Data System (ADS)

    Schultz, Kasey William

    Earthquakes and tsunamis are some of the most damaging natural disasters that we face. Just two recent events, the 2004 Indian Ocean earthquake and tsunami and the 2010 Haiti earthquake, claimed more than 400,000 lives. Despite their catastrophic impacts on society, our ability to predict these natural disasters is still very limited. The main challenge in studying the earthquake cycle is the non-linear and multi-scale properties of fault networks. Earthquakes are governed by physics across many orders of magnitude of spatial and temporal scales: from the scale of tectonic plates and their evolution over millions of years, down to the scale of rock fracturing over milliseconds to minutes at the sub-centimeter scale during an earthquake. Despite these challenges, there are useful patterns in earthquake occurrence. One such pattern, the frequency-magnitude relation, relates the number of large earthquakes to small earthquakes and forms the basis for assessing earthquake hazard. However, the utility of these relations is proportional to the length of our earthquake records, and typical records span at most a few hundred years. Utilizing physics-based interactions and techniques from statistical physics, earthquake simulations provide rich earthquake catalogs allowing us to measure otherwise unobservable statistics. In this dissertation I will discuss five applications of physics-based simulations of natural hazards, utilizing an earthquake simulator called Virtual Quake. The first is an overview of computing earthquake probabilities from simulations, focusing on the California fault system. The second uses simulations to help guide satellite-based earthquake monitoring methods. The third presents a new friction model for Virtual Quake and describes how we tune simulations to match reality. The fourth describes the process of turning Virtual Quake into an open source research tool. This section then focuses on a resulting collaboration using Virtual Quake for a detailed
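
    As one concrete example of measuring statistics from a rich simulated catalog, the Gutenberg-Richter b-value can be estimated with the Aki maximum-likelihood formula b = log10(e) / (mean(M) - Mmin). A sketch on a synthetic catalog (Virtual Quake itself is not involved here):

    ```python
    # Estimate the Gutenberg-Richter b-value from a (synthetic) catalog
    # using the Aki (1965) maximum-likelihood estimator.
    import numpy as np

    rng = np.random.default_rng(4)
    m_min = 4.0
    # Synthetic catalog with true b = 1: magnitudes are exponential with
    # rate b * ln(10) above the completeness magnitude m_min.
    mags = m_min + rng.exponential(scale=1.0 / np.log(10), size=50000)

    b_value = np.log10(np.e) / (mags.mean() - m_min)
    print(f"estimated b = {b_value:.3f}")  # ~1.0 for this catalog
    ```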

  20. Investigation on the Transformation of Absorbed Oxygen at ZnO {101̅0} Surface Based on a Novel Thermal Pulse Method and Density Functional Theory Simulation.

    PubMed

    Yang, Tingqiang; Liu, Yueli; Jin, Wei; Han, Yiyang; Yang, Shuang; Chen, Wen

    2017-07-28

    Absorbed oxygen plays a key role in the gas sensing process of ZnO nanomaterials. In this work, the transformation of absorbed oxygen on ZnO (101̅0) and its effects on the gas sensing properties toward ethanol are studied by a novel thermal pulse method and density functional theory (DFT) simulation. Thermal pulse results reveal that the absorbed O2 molecule dissociates into two individual oxygen adatoms by extracting electrons from the ZnO surface layers when the temperature is above 443 K. The temperature at which the absorbed O2 molecule begins to dissociate is the lowest working temperature for gas sensing. DFT simulation demonstrates the dissociation process of O2 at the ZnO (101̅0) surface, and the activation energy (Ea) of dissociation is calculated to be 351.71 kJ/mol, which suggests that the absorbed O2 molecule is not likely to dissociate at room temperature. The reactions between ethanol and the absorbed O2 molecule, as well as reactions between ethanol and the O adatom, are also simulated. The results indicate that ethanol cannot react with the absorbed O2 molecule, while it can be oxidized by the O adatom to acetaldehyde and then to acetic acid spontaneously. Mulliken charge analysis suggests that electrons extracted by the O adatom return to ZnO after the oxidation of ethanol.
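
    The reported barrier supports a quick Arrhenius estimate of how strongly temperature gates the dissociation: with Ea = 351.71 kJ/mol, the (unknown) prefactor cancels when comparing two temperatures, assuming it is the same at both. A sketch of that back-of-the-envelope calculation:

    ```python
    # Arrhenius comparison of relative dissociation rates at 443 K vs
    # room temperature, using the barrier reported in the abstract.
    import numpy as np

    R, Ea = 8.314, 351.71e3          # J/(mol K), J/mol

    def relative_rate(T):            # k(T)/A = exp(-Ea / (R T))
        return np.exp(-Ea / (R * T))

    print(relative_rate(443.0) / relative_rate(298.0))  # ~10^20 speed-up
    ```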

  1. TU-C-17A-08: Improving IMRT Planning and Reducing Inter-Planner Variability Using the Stochastic Frontier Method: Validation Based On Clinical and Simulated Data

    SciTech Connect

    Gagne, MC; Archambault, L; Tremblay, D; Varfalvy, N

    2014-06-15

    Purpose: Intensity modulated radiation therapy always requires compromises between PTV coverage and organ-at-risk (OAR) sparing. We previously developed metrics that correlate doses to OARs with specific patient morphology using stochastic frontier analysis (SFA). Here, we aim to examine the validity of this approach using a large set of realistically simulated dosimetric and geometric data. Methods: SFA describes a set of treatment plans as an asymmetric distribution with respect to a frontier defining optimal plans. Eighty head and neck IMRT plans were used to establish a metric predicting the mean dose to parotids as a function of simple geometric parameters. A database of 140 parotids was used as a basis distribution to simulate physically plausible geometry and dose data. Distributions comprising between 20 and 5000 organs were simulated, and the SFA was applied to obtain new frontiers, which were compared to the original frontier. Results: It was possible to simulate distributions consistent with the original dataset. Below 160 organs, the SFA could not always describe distributions as asymmetric: a few cases showed a Gaussian or half-Gaussian distribution. In order to converge to a stable solution, the number of organs in a distribution must ideally be above 100, but in many cases stable parameters could be achieved with as few as 60 organ samples. The mean RMS error of the new frontiers was significantly reduced when additional organs were used. Conclusion: The number of organs in a distribution was shown to have an impact on the effectiveness of the model. It is always possible to obtain a frontier, but if the number of organs in the distribution is small (< 160), it may not represent the lowest dose achievable. These results will be used to determine the number of cases necessary to adapt the model to other organs.

  2. Apparatus for and method of simulating turbulence

    DOEpatents

    Dimas, Athanassios; Lottati, Isaac; Bernard, Peter; Collins, James; Geiger, James C.

    2003-01-01

    In accordance with a preferred embodiment of the invention, a novel apparatus for and method of simulating physical processes such as fluid flow is provided. Fluid flow near a boundary or wall of an object is represented by a collection of vortex sheet layers. The layers are composed of a grid or mesh of one or more geometrically shaped space filling elements. In the preferred embodiment, the space filling elements take on a triangular shape. An Eulerian approach is employed for the vortex sheets, where a finite-volume scheme is used on the prismatic grid formed by the vortex sheet layers. A Lagrangian approach is employed for the vortical elements (e.g., vortex tubes or filaments) found in the remainder of the flow domain. To reduce the computational time, a hairpin removal scheme is employed to reduce the number of vortex filaments, and a Fast Multipole Method (FMM), preferably implemented using parallel processing techniques, reduces the computation of the velocity field.

  3. RELAP5 based engineering simulator

    SciTech Connect

    Charlton, T.R.; Laats, E.T.; Burtt, J.D.

    1990-01-01

    The INEL Engineering Simulation Center was established in 1988 to provide a modern, flexible, state-of-the-art simulation facility. This facility and two of the major projects which are part of the simulation center, the Advanced Test Reactor (ATR) engineering simulator project and the Experimental Breeder Reactor II (EBR-II) advanced reactor control system, have been the subject of several papers in the past few years. Two components of the ATR engineering simulator project, RELAP5 and the Nuclear Plant Analyzer (NPA), have recently been improved significantly. This paper will present an overview of the INEL Engineering Simulation Center and discuss the RELAP5/MOD3 and NPA/MOD1 codes, specifically how they are being used at the center. It will provide an update on the modifications to these two codes and their application to the ATR engineering simulator project, as well as a discussion of reactor system representation, control system modeling, and two-phase flow and heat transfer modeling. It will also discuss how these two codes provide desktop, stand-alone reactor simulation. 12 refs., 2 figs.

  4. Assessing the performance of the MM/PBSA and MM/GBSA methods: I. The accuracy of binding free energy calculations based on molecular dynamics simulations

    PubMed Central

    Hou, Tingjun; Wang, Junmei; Li, Youyong; Wang, Wei

    2011-01-01

    The Molecular Mechanics/Poisson Boltzmann Surface Area (MM/PBSA) and the Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) methods calculate binding free energies for macromolecules by combining molecular mechanics calculations and continuum solvation models. To systematically evaluate the performance of these methods, we report here an extensive study of 59 ligands interacting with six different proteins. First, we explored the effects of the length of the molecular dynamics (MD) simulation, ranging from 400 to 4800 ps, and of the solute dielectric constant (1, 2 or 4) on the binding free energies predicted by MM/PBSA. Three important conclusions could be drawn: (1) MD simulation length has an obvious impact on the predictions, and longer MD simulations are not always necessary to achieve better predictions; (2) the predictions are quite sensitive to the solute dielectric constant, and this parameter should be carefully determined according to the characteristics of the protein/ligand binding interface; (3) conformational entropy showed large fluctuations in MD trajectories, and a large number of snapshots are necessary to achieve stable predictions. Next, we evaluated the accuracy of the binding free energies calculated by three Generalized Born (GB) models. We found that the GB model developed by Onufriev and Case was the most successful in ranking the binding affinities of the studied inhibitors. Finally, we evaluated the performance of MM/GBSA and MM/PBSA in predicting binding free energies. Our results showed that MM/PBSA performed better in calculating absolute, but not necessarily relative, binding free energies than MM/GBSA. Considering its computational efficiency, MM/GBSA can serve as a powerful tool in drug design, where correct ranking of inhibitors is often emphasized. PMID:21117705
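
    The end-point bookkeeping common to MM/PBSA and MM/GBSA is a snapshot average of G(complex) - G(receptor) - G(ligand). The sketch below shows only that bookkeeping; the fake per-snapshot energies stand in for real MM plus continuum-solvation terms:

    ```python
    # Sketch of end-point binding free energy bookkeeping only: average
    # the per-snapshot difference G(complex) - G(receptor) - G(ligand).
    import numpy as np

    def binding_free_energy(g_complex, g_receptor, g_ligand):
        """Each argument: per-snapshot free energies (kcal/mol)."""
        dg = (np.asarray(g_complex) - np.asarray(g_receptor)
              - np.asarray(g_ligand))
        return dg.mean(), dg.std(ddof=1) / np.sqrt(len(dg))

    rng = np.random.default_rng(5)
    gc, gr, gl = (rng.normal(mu, 2.0, size=200)   # fake snapshot energies
                  for mu in (-120.0, -80.0, -28.0))
    mean_dg, sem = binding_free_energy(gc, gr, gl)
    print(f"dG_bind = {mean_dg:.1f} +/- {sem:.1f} kcal/mol")  # ~ -12
    ```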

  5. Physics-Based Simulator for NEO Exploration Analysis & Simulation

    NASA Technical Reports Server (NTRS)

    Balaram, J.; Cameron, J.; Jain, A.; Kline, H.; Lim, C.; Mazhar, H.; Myint, S.; Nayar, H.; Patton, R.; Pomerantz, M.; Quadrelli, M.; Shakkotai, P.; Tso, K.

    2011-01-01

    As part of the Space Exploration Analysis and Simulation (SEAS) task, the National Aeronautics and Space Administration (NASA) is using physics-based simulations at NASA's Jet Propulsion Laboratory (JPL) to explore potential surface and near-surface mission operations at Near Earth Objects (NEOs). The simulator is under development at JPL and can be used to provide detailed analysis of various surface and near-surface NEO robotic and human exploration concepts. In this paper we describe the SEAS simulator and provide examples of recent mission systems and operations concepts investigated using the simulation. We also present related analysis work and tools developed for both the SEAS task as well as general modeling, analysis and simulation capabilities for asteroid/small-body objects.

  7. Simulation of dry granular flows using discrete element methods

    NASA Astrophysics Data System (ADS)

    Martin, Hugo; Lefebvre, Aline; Maday, Yvon; Mangeney, Anne; Maury, Bertrand; Sainte-Marie, Jacques

    2017-04-01

    Granular flows are composed of interacting particles (for instance, sand grains). While natural flow simulations at the field scale are generally based on continuum models, discrete element methods are very useful for gaining insight into the detailed contact interactions between the particles involved. We consider here both the well-known molecular dynamics (MD) and contact dynamics (CD) methods to simulate granular particle interaction; the difference between the two is the linearisation of contact forces in MD. We are interested in comparing these methods, and especially the effects of this linearisation on simulations. In the present work, we introduce a new rigid-body model at the scale of the particles and its resolution by contact dynamics. The interesting aspect of our CD method is that it treats all contacts in the material system in one step, without the iterative process required when contacts are dealt with one after the other: all contacts are computed at the same time, in a single iteration, and the normal and tangential constraints are treated simultaneously. The present model follows from a convex optimization problem presented in [1] by B. Maury, to which we add a frictional behaviour in the contact law between the particles. To analyse the behaviour of this model, we compare our results to analytical solutions when we can compute them, and otherwise to simulations with the molecular dynamics method. [1] A time-stepping scheme for inelastic collisions. Numerical handling of the nonoverlapping constraint, B. Maury, Numerische Mathematik, 17 January 2006.
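
    The "linearisation of contact forces in MD" mentioned above usually means a penalty-style spring-dashpot law activated on particle overlap. A minimal sketch of such a normal-force law (stiffness and damping values are illustrative):

    ```python
    # Linearised (spring-dashpot) normal contact force of the kind used
    # in molecular-dynamics-style DEM; parameters are illustrative.
    import numpy as np

    k_n, c_n = 1e5, 5.0          # normal stiffness, normal damping

    def contact_force(x_i, x_j, r_i, r_j, v_i, v_j):
        """Force on particle i from particle j (zero when not touching)."""
        d = x_i - x_j
        dist = np.linalg.norm(d)
        overlap = (r_i + r_j) - dist
        if overlap <= 0.0:
            return np.zeros_like(d)
        n = d / dist                                # contact normal
        v_n = np.dot(v_i - v_j, n)                  # normal relative speed
        return (k_n * overlap - c_n * v_n) * n

    print(contact_force(np.array([0.0, 0.0]), np.array([0.9, 0.0]),
                        0.5, 0.5, np.zeros(2), np.zeros(2)))
    ```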

  8. Nonstationary multiscale turbulence simulation based on local PCA.

    PubMed

    Beghi, Alessandro; Cenedese, Angelo; Masiero, Andrea

    2014-09-01

    Turbulence simulation methods are of fundamental importance for evaluating the performance of control strategies for Adaptive Optics (AO) systems. In order to obtain a reliable evaluation of the performance, a statistically accurate turbulence simulation method has to be used. This work generalizes a previously proposed method for turbulence simulation based on the use of a multiscale stochastic model. The main contributions of this work are as follows. First, a multiresolution local PCA representation is considered. In typical operating conditions, this PCA representation reduces the computational load of turbulence simulation by approximately a factor of 4 with respect to the previously proposed method. Second, thanks to a different low-resolution method based on a moving-average model, the wind velocity can be in any direction (not necessarily aligned with the spatial axes). Finally, the simulation procedure is extended to generate, if needed, turbulence samples using a more general model than the frozen flow hypothesis.

  9. Implicit methods for efficient musculoskeletal simulation and optimal control

    PubMed Central

    van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter

    2011-01-01

    The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first-order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder-arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers. PMID:22102983
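
    A first-order Rosenbrock (linearly implicit Euler) step requires one linear solve with the Jacobian per step and no Newton iteration, which is what makes stiff dynamics cheap to advance. A generic sketch on a stiff scalar test problem, not the musculoskeletal model itself:

    ```python
    # First-order Rosenbrock (linearly implicit Euler) step:
    # solve (I - h J) dx = h f(x), then x <- x + dx.
    import numpy as np

    def rosenbrock1_step(f, jac, x, h):
        A = np.eye(len(x)) - h * jac(x)
        dx = np.linalg.solve(A, h * f(x))
        return x + dx

    # Stiff linear test problem x' = -1000 x.
    f = lambda x: -1000.0 * x
    jac = lambda x: np.array([[-1000.0]])
    x = np.array([1.0])
    for _ in range(100):
        x = rosenbrock1_step(f, jac, x, h=0.01)  # stable for h >> 1/1000
    print(x)  # decays toward 0 without oscillation
    ```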

  11. An improved method for simulating microcalcifications in digital mammograms

    SciTech Connect

    Zanca, Federica; Chakraborty, Dev Prasad; Ongeval, Chantal van; Jacobs, Jurgen; Claus, Filip; Marchal, Guy; Bosmans, Hilde

    2008-09-15

    The assessment of the performance of a digital mammography system requires an observer study with a relatively large number of cases with known truth, which is often difficult to assemble. Several investigators have developed methods for generating hybrid abnormal images containing simulated microcalcifications. This article addresses some of the limitations of earlier methods. The new method is based on digital images of needle biopsy specimens. Since the specimens are imaged separately from the breast, the microcalcification attenuation profiles can be deduced without the effects of over- and underlying tissues. The resulting templates are normalized for image-acquisition-specific parameters and reprocessed to simulate microcalcifications appropriate to other imaging systems, with different x-ray, detector and image processing parameters than the original acquisition system. This capability is not shared by previous simulation methods, which have relied on extracting microcalcifications from breast images. The method was validated by five experienced mammographers who compared 59 pairs of simulated and real microcalcifications in a two-alternative forced choice task designed to test if they could distinguish the real from the simulated lesions. They also classified the shapes of the microcalcifications according to a standardized clinical lexicon. The observed probability of correct choice was 0.415, 95% confidence interval (0.284, 0.546), showing that the radiologists were unable to distinguish the lesions. The shape classification revealed substantial agreement with the truth (mean kappa=0.70), showing that we were able to accurately simulate the lesion morphology. While currently limited to single microcalcifications, the method is extensible to more complex clusters of microcalcifications and to three-dimensional images. It can be used to objectively assess an imaging technology, especially with respect to its ability to adequately visualize the morphology of the

  12. Template-Based Geometric Simulation of Flexible Frameworks

    PubMed Central

    Wells, Stephen A.; Sartbaeva, Asel

    2012-01-01

    Specialised modelling and simulation methods implementing simplified physical models are valuable generators of insight. Template-based geometric simulation is a specialised method for modelling flexible framework structures made up of rigid units. We review the background, development and implementation of the method, and its applications to the study of framework materials such as zeolites and perovskites. The “flexibility window” property of zeolite frameworks is a particularly significant discovery made using geometric simulation. Software implementing geometric simulation of framework materials, “GASP”, is freely available to researchers. PMID:28817055

  13. Numeric Modified Adomian Decomposition Method for Power System Simulations

    SciTech Connect

    Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth

    2016-01-01

    This paper investigates the applicability of the numeric Wazwaz-El-Sayed modified Adomian Decomposition Method (WES-ADM) for time domain simulation of power systems. WES-ADM is a numerical method based on a modified Adomian decomposition (ADM) technique: a numerical approximation method for the solution of nonlinear ordinary differential equations in which the nonlinear terms are approximated using Adomian polynomials. In this paper WES-ADM is applied to time domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach, and several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.

  14. IMPACT OF SIMULANT PRODUCTION METHODS ON SRAT PRODUCT

    SciTech Connect

    EIBLING, R

    2006-03-22

    The research and development programs in support of the Defense Waste Processing Facility (DWPF) and other high level waste vitrification processes require the use of both nonradioactive waste simulants and actual waste samples. The nonradioactive waste simulants have been used for laboratory testing, pilot-scale testing and full-scale integrated facility testing. Recent efforts have focused on matching the physical properties of actual sludge. These waste simulants were designed to reproduce the chemical and, if possible, the physical properties of the actual high level waste. This technical report documents a study of simulant production methods for high level waste simulated sludge and their impact on the physical properties of the resultant SRAT product. The sludge simulants used in support of DWPF have been based on average waste compositions and on expected or actual batch compositions. These sludge simulants were created primarily to match the chemical properties of the actual waste. The sludges were produced by generating manganese dioxide, MnO2, from permanganate ion (MnO4-) and manganous nitrate, precipitating ferric nitrate and nickel nitrate with sodium hydroxide, washing with inhibited water, and then adding other waste species. While these simulated sludges provided a good match for chemical reaction studies, they did not adequately match the physical properties (primarily rheology) measured on the actual waste. A study was completed in FY04 to determine the impact of simulant production methods on the physical properties of Sludge Batch 3 simulant. This study produced eight batches of sludge simulant, all prepared to the same chemical target, by varying the sludge production methods. The sludge batch which most closely duplicated the actual SB3 sludge physical properties was Test 8. Test 8 sludge was prepared by coprecipitating all of the major metals (including Al). After the sludge was washed to meet the target, the sludge

  15. Application of particle method to the casting process simulation

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Zulaida, Y. M.; Anzai, K.

    2012-07-01

    Casting processes involve many significant phenomena such as fluid flow, solidification, and deformation, and it is known that casting defects are strongly influenced by these phenomena. However, the phenomena interact with each other in complex ways, and they are difficult to observe directly because the temperature of the melt and of the other apparatus components is quite high and the materials are generally opaque; therefore, computer simulation is expected to offer considerable benefit in understanding what happens in these processes. Recently, particle methods, which are fully Lagrangian methods, have attracted considerable attention. Because they involve no computational lattice, particle methods have developed rapidly and are well suited to multi-physics problems. In this study, we combined fluid flow, heat transfer and solidification simulation programs, and simulated various casting processes such as continuous casting, centrifugal casting and ingot making. In the continuous casting simulation, the powder flow could be calculated as well as the melt flow, and the resulting shape of the interface between the melt and the powder was obtained. In the centrifugal casting simulation, the mold was smoothly modeled along the shape of the real mold, and the fluid flow and the rotating mold were simulated directly. As a result, the flow of the melt dragged by the rotating mold was calculated well; the eccentric rotation and the influence of the Coriolis force were also reproduced directly and naturally. For the ingot making simulation, the shrinkage formation behavior was calculated and the shape of the shrinkage agreed well with the experimental result.
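
    A building block of such particle methods is the kernel-weighted neighbour sum, shown here for an SPH-style density estimate with a standard 2-D cubic spline kernel. This is a generic sketch (O(N^2) for clarity, not speed), not the study's solver:

    ```python
    # SPH-style density estimate: kernel-weighted sum over neighbours
    # using the standard 2-D cubic spline kernel.
    import numpy as np

    def cubic_spline_w(r, h):
        q = r / h
        sigma = 10.0 / (7.0 * np.pi * h**2)          # 2-D normalisation
        return sigma * np.where(
            q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
            np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

    def densities(positions, mass, h):
        diff = positions[:, None, :] - positions[None, :, :]
        r = np.linalg.norm(diff, axis=-1)
        return mass * cubic_spline_w(r, h).sum(axis=1)

    pts = np.random.default_rng(6).uniform(0, 1, size=(200, 2))
    print(densities(pts, mass=1.0 / 200, h=0.1)[:5])
    ```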

  16. Developing a Theory of Digitally-Enabled Trial-Based Problem Solving through Simulation Methods: The Case of Direct-Response Marketing

    ERIC Educational Resources Information Center

    Clark, Joseph Warren

    2012-01-01

    In turbulent business environments, change is rapid, continuous, and unpredictable. Turbulence undermines those adaptive problem solving methods that generate solutions by extrapolating from what worked (or did not work) in the past. To cope with this challenge, organizations utilize trial-based problem solving (TBPS) approaches in which they…

  18. A performance-based method for calculating the design thickness of compacted clay liners exposed to high strength leachate under simulated landfill conditions.

    PubMed

    Safari, Edwin; Jalili Ghazizade, Mahdi; Abdoli, Mohammad Ali

    2012-09-01

    Compacted clay liners (CCLs), when feasible, are preferred to composite geosynthetic liners. The thickness of CCLs is typically prescribed by each country's environmental protection regulations. However, considering that construction of CCLs represents a significant portion of overall landfill construction costs, a performance-based design of liner thickness would be preferable to 'one size fits all' prescriptive standards. In this study, researchers analyzed the hydraulic behaviour of a compacted clayey soil in three laboratory pilot-scale columns exposed to high strength leachate under simulated landfill conditions. The temperature of the simulated CCL at the surface was maintained at 40 ± 2 °C, and a vertical pressure of 250 kPa was applied to the soil through a gravel layer on top of the 50 cm thick CCL, where high strength fresh leachate was circulated at heads of 15 and 30 cm to simulate flow over the CCL. Inverse modelling using HYDRUS-1D indicated that the hydraulic conductivity after 180 days had decreased by about three orders of magnitude in comparison with the values measured prior to the experiment. A number of scenarios of different leachate heads and persistence times were considered, and the saturation depth of the CCL was predicted through modelling. Under a typical leachate head of 30 cm, the saturation depth was predicted to be less than 60 cm for a persistence time of 3 years. This approach can be generalized to estimate an effective thickness of a CCL instead of using prescribed values, which may be conservatively overdesigned and thus unduly costly.

  19. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  20. Physiological Based Simulator Fidelity Design Guidance

    NASA Technical Reports Server (NTRS)

    Schnell, Thomas; Hamel, Nancy; Postnikov, Alex; Hoke, Jaclyn; McLean, Angus L. M. Thom, III

    2012-01-01

    The evolution of the role of flight simulation has reinforced assumptions in aviation that the degree of realism in a simulation system directly correlates with the training benefit, i.e., that more fidelity is always better. The construct of fidelity has several dimensions, including physical fidelity, functional fidelity, and cognitive fidelity. Interaction between the fidelity dimensions has an impact on trainee immersion, presence, and transfer of training. This paper discusses the results of a recent study that investigated whether physiologically based methods could be used to determine the required level of simulator fidelity. Pilots performed a relatively complex flight task consisting of mission task elements of various levels of difficulty in a fixed-base flight simulator and in a real fighter jet trainer aircraft. Flight runs were performed using one forward visual channel with a 40 deg field of view for the lowest level of fidelity, a 120 deg field of view for the middle level of fidelity, and an unrestricted field of view with full dynamic acceleration in the real airplane. Neuro-cognitive and physiological measures were collected under these conditions using the Cognitive Avionics Tool Set (CATS), and nonlinear closed-form models for workload prediction were generated from these data for the various mission task elements. One finding of the work described herein is that simple heart rate is a relatively good predictor of cognitive workload, even for short tasks with dynamic changes in cognitive loading. Additionally, we found that models using a wide range of physiological and neuro-cognitive measures can further boost the accuracy of the workload prediction.

  1. Fast simulations of patient-specific haemodynamics of coronary artery bypass grafts based on a POD-Galerkin method and a vascular shape parametrization

    NASA Astrophysics Data System (ADS)

    Ballarin, Francesco; Faggiano, Elena; Ippolito, Sonia; Manzoni, Andrea; Quarteroni, Alfio; Rozza, Gianluigi; Scrofani, Roberto

    2016-06-01

    In this work a reduced-order computational framework is proposed for the study of haemodynamics in three-dimensional patient-specific configurations of coronary artery bypass grafts, covering a wide range of scenarios. We combine several efficient algorithms to face at the same time both the geometrical complexity involved in the description of the vascular network and the huge computational cost entailed by time-dependent patient-specific flow simulations. Medical imaging procedures allow the reconstruction of patient-specific configurations from clinical data. A centerlines-based parametrization is proposed to efficiently handle geometrical variations. POD-Galerkin reduced-order models are employed to cut down the large computational costs. This computational framework makes it possible to characterize blood flows for different physical and geometrical variations relevant in clinical practice, such as stenosis factors and anastomosis variations, in a rapid and reliable way. Several numerical results are discussed, highlighting the computational performance of the proposed framework, as well as its capability to carry out sensitivity analysis studies that were so far out of reach. In particular, a reduced-order simulation takes only a few minutes to run, resulting in computational savings of 99% of CPU time with respect to the full-order discretization. Moreover, the error between full-order and reduced-order solutions is also studied, and it is numerically found to be less than 1% for reduced-order solutions obtained with just O(100) online degrees of freedom.
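
    The offline stage of a POD-Galerkin reduction can be sketched in a few lines: collect snapshots, extract a basis by SVD, and project the full-order operator onto it. The snapshot data and operator below are synthetic stand-ins for the patient-specific flow solves:

    ```python
    # Offline POD-Galerkin sketch: snapshots -> SVD -> reduced basis,
    # then Galerkin projection of a (stand-in) full-order operator.
    import numpy as np

    rng = np.random.default_rng(7)
    n, n_snap, n_modes = 2000, 60, 8
    snapshots = rng.normal(size=(n, 3)) @ rng.normal(size=(3, n_snap))
    snapshots += 1e-6 * rng.normal(size=(n, n_snap))   # low-rank + noise

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    V = U[:, :n_modes]                                 # POD basis

    A = -np.eye(n)                 # stand-in full-order operator
    A_r = V.T @ A @ V              # reduced (Galerkin-projected) operator
    print("energy in first 3 modes:", (s[:3]**2).sum() / (s**2).sum())
    ```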

  2. Massively parallel simulations of multiphase flows using Lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Ahrenholz, Benjamin

    2010-03-01

    In the last two decades the lattice Boltzmann method (LBM) has matured as an alternative and efficient numerical scheme for the simulation of fluid flows and transport problems. Unlike conventional numerical schemes based on discretizations of macroscopic continuum equations, the LBM is based on microscopic models and mesoscopic kinetic equations. The fundamental idea of the LBM is to construct simplified kinetic models that incorporate the essential physics of microscopic or mesoscopic processes so that the macroscopic averaged properties obey the desired macroscopic equations. Applications involving interfacial dynamics, complex and/or changing boundaries, and complicated constitutive relationships that can be derived from a microscopic picture are especially suitable for the LBM. In this talk a modified and optimized version of a Gunstensen color model is presented to describe the dynamics of the fluid/fluid interface, where the flow field is based on a multi-relaxation-time model. Based on that modeling approach, validation studies of contact-line motion are shown. Because the LB method generally needs only nearest-neighbor information, the algorithm is an ideal candidate for parallelization; hence, it is possible to perform efficient simulations in complex geometries at a large scale by massively parallel computations. Here, the results of drainage and imbibition (more than 2×10^11 degrees of freedom) in natural porous media obtained from microtomography methods are presented. Those fully resolved pore-scale simulations are essential for a better understanding of the physical processes in porous media and therefore important for the determination of constitutive relationships.
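
    The kinetic update underlying all of this is the stream-and-collide cycle. A minimal single-phase D2Q9 BGK sketch on a periodic grid is shown below; the multiphase colour model adds a recolouring step not shown here, and all parameters are illustrative:

    ```python
    # Minimal single-phase D2Q9 BGK lattice Boltzmann sketch:
    # periodic streaming followed by BGK relaxation toward equilibrium.
    import numpy as np

    nx, ny, tau = 64, 64, 0.8
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)

    f = np.ones((9, ny, nx)) * w[:, None, None]      # fluid at rest, rho = 1
    f[1] += 0.01 * np.sin(2 * np.pi * np.arange(nx) / nx)  # perturbation

    def equilibrium(rho, ux, uy):
        cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
        usq = ux**2 + uy**2
        return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    for step in range(500):
        for i in range(9):                           # streaming (periodic)
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)
        rho = f.sum(axis=0)
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        f += -(f - equilibrium(rho, ux, uy)) / tau   # BGK collision

    print("mass conserved:", np.isclose(rho.sum(), nx * ny))
    ```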

  3. Simulating cardiac ultrasound image based on MR diffusion tensor imaging

    PubMed Central

    Qin, Xulei; Wang, Silun; Shen, Ming; Lu, Guolan; Zhang, Xiaodong; Wagner, Mary B.; Fei, Baowei

    2015-01-01

    Purpose: Cardiac ultrasound simulation can have important applications in the design of ultrasound systems, in understanding the interaction effect between ultrasound and tissue, and in setting the ground truth for validating quantification methods. Current ultrasound simulation methods fail to simulate the myocardial intensity anisotropies. New simulation methods are needed in order to simulate realistic ultrasound images of the heart. Methods: The proposed cardiac ultrasound image simulation method is based on diffusion tensor imaging (DTI) data of the heart. The method utilizes both the cardiac geometry and the fiber orientation information to simulate the anisotropic intensities in B-mode ultrasound images. Before the simulation procedure, the geometry and fiber orientations of the heart are obtained from high-resolution structural MRI and DTI data, respectively. The simulation includes two important steps. First, the backscatter coefficients of the point scatterers inside the myocardium are processed according to the fiber orientations using an anisotropic model. Second, the cardiac ultrasound images are simulated with anisotropic myocardial intensities. The proposed method was also compared with two other nonanisotropic intensity methods using 50 B-mode ultrasound image volumes of five different rat hearts. The simulated images were also compared with the ultrasound images of a diseased rat heart in vivo. A new segmental evaluation method is proposed to validate the simulation results. The average relative errors (AREs) of five parameters, i.e., mean intensity, Rayleigh distribution parameter σ, and first, second, and third quartiles, were utilized as the evaluation metrics. The simulated images were quantitatively compared with real ultrasound images in both ex vivo and in vivo experiments. Results: The proposed ultrasound image simulation method can realistically simulate cardiac ultrasound images of the heart using high-resolution MR-DTI data. The AREs of their

  4. A Transfer Voltage Simulation Method for Generator Step Up Transformers

    NASA Astrophysics Data System (ADS)

    Funabashi, Toshihisa; Sugimoto, Toshirou; Ueda, Toshiaki; Ametani, Akihiro

    It has been found from measurements of 13 generator step-up (GSU) transformers that the transfer voltage involves one dominant oscillation frequency. The frequency can be estimated from the inductance and capacitance values of the GSU transformer's low-voltage side. This observation has led to a new method for simulating a GSU transformer transfer voltage. The method is based on the EMTP TRANSFORMER model, but stray capacitances are added. The leakage inductance and the magnetizing resistance are modified using approximate curves for their frequency characteristics determined from the measured results. The new method is validated in comparison with the measured results.
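
    The abstract does not state the estimation formula; presumably it is the standard LC resonance relation applied to the low-voltage-side parameters, sketched here with purely illustrative (not measured) values:

        f_0 = \frac{1}{2\pi\sqrt{LC}}, \qquad
        \text{e.g. } L = 10~\text{mH},\ C = 10~\text{nF}
        \ \Rightarrow\ f_0 = \frac{1}{2\pi\sqrt{10^{-2}\cdot 10^{-8}}} \approx 15.9~\text{kHz}.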

  5. Component-Based Framework for Subsurface Simulations

    SciTech Connect

    Palmer, Bruce J.; Fang, Yilin; Hammond, Glenn E.; Gurumoorthi, Vidhya

    2007-08-01

    Simulations in the subsurface environment represent a broad range of phenomena covering an equally broad range of scales. Developing modelling capabilities that can integrate models representing different phenomena acting at different scales presents formidable challenges from both the algorithmic and computer science perspectives. This paper will describe the development of an integrated framework that will be used to combine different models into a single simulation. Initial work has focused on creating two frameworks, one for performing smooth particle hydrodynamics (SPH) simulations of fluid systems, the other for performing grid-based continuum simulations of reactive subsurface flow. The SPH framework is based on a parallel code developed for doing pore-scale simulations; the continuum grid-based framework is based on the STOMP (Subsurface Transport Over Multiple Phases) code developed at PNNL. Future work will focus on combining the frameworks together to perform multiscale, multiphysics simulations of reactive subsurface flow.

  6. 3D numerical simulation for the transient electromagnetic field excited by the central loop based on the vector finite-element method

    NASA Astrophysics Data System (ADS)

    Li, J. H.; Zhu, Z. Q.; Liu, S. C.; Zeng, S. H.

    2011-12-01

    Based on the principle of anomalous-field algorithms, Helmholtz equations for the electromagnetic field are deduced. We take the electric field Helmholtz equation as the governing equation, and derive the corresponding system of vector finite element equations using the Galerkin method. For solving the governing equation with the vector finite element method, we divide the computing domain into homogeneous brick elements and use Whitney-type vector basis functions. After obtaining the anomalous electric field in the Laplace domain with the vector finite element method, we use the Gaver-Stehfest algorithm to transform it to the time domain, and obtain the impulse response of the anomalous magnetic field through Faraday's law of electromagnetic induction. Comparison with 1D analytic solutions of quasi-H-type geoelectric models tests the accuracy of the vector finite element method. For the low-resistivity brick geoelectric model, the plot shape of the electromotive force computed using the vector finite element method coincides with that of the integral equation method and finite-difference time-domain solutions.
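
    The Gaver-Stehfest inversion is a self-contained piece of this pipeline; a minimal sketch (the order N and the test transform are arbitrary choices, not values from the paper):

        import math

        def stehfest_coeffs(N):
            # Stehfest weights V_k for even N (N = 12 is a common choice).
            V = []
            for k in range(1, N + 1):
                s = 0.0
                for j in range((k + 1) // 2, min(k, N // 2) + 1):
                    s += (j**(N // 2) * math.factorial(2 * j)) / (
                        math.factorial(N // 2 - j) * math.factorial(j)
                        * math.factorial(j - 1) * math.factorial(k - j)
                        * math.factorial(2 * j - k))
                V.append((-1)**(k + N // 2) * s)
            return V

        def gaver_stehfest(F, t, N=12):
            # Approximate f(t) from its Laplace transform F(s), sampled on the real axis.
            ln2t = math.log(2.0) / t
            V = stehfest_coeffs(N)
            return ln2t * sum(V[k - 1] * F(k * ln2t) for k in range(1, N + 1))

        # Sanity check against a known pair: L{exp(-t)} = 1/(s+1).
        print(gaver_stehfest(lambda s: 1.0 / (s + 1.0), 1.0))   # ~ exp(-1) = 0.3679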

  7. An example-based brain MRI simulation framework

    NASA Astrophysics Data System (ADS)

    He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L.

    2015-03-01

    The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition, these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of the MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on the statistical model of training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.

  8. Influence of free parameters on time delay calculations using magnetic field-based methods for solar wind propagation simulation from a single spacecraft

    NASA Astrophysics Data System (ADS)

    Kulchitsky, A. V.

    2009-12-01

    Space weather modeling and forecasting techniques are important for a variety of applications, such as satellite operations, GPS navigation, and magnetosphere and ionosphere modeling. The work described here is intended to help provide a better IMF data forecast near the Earth's magnetosphere from measurements at the L1 Lagrange point. Theory, as well as observations made from different satellites simultaneously, shows that the interplanetary magnetic field (IMF) may consist of current layers and wave fronts along which changes in the magnetic field are small on the scale of the diameter of the ACE satellite's orbit. Knowledge of the current layers and wave fronts along which the changes of the IMF are minimal would significantly improve the IMF forecast near the Earth from measurements at the L1 Lagrange point. Many methods have been developed to determine these structures using measurements at a single spacecraft, based on different fundamental properties of the solar wind and IMF. However, most solar wind parameters, such as density and velocity, cannot be measured with time resolution comparable to that of magnetic field measurements. For this reason, methods based on the magnetic field are most frequently used for practical calculations and forecasting. There are two known methods for IMF calculations, MVAB-0 and the upstream-downstream magnetic field cross-product method. In this work, we propose two new methods based on physical laws of the solar wind and magnetic field measurements. We demonstrate their usefulness through comparison of data from the ACE and WIND satellites over long continuous periods of time. We used model skill analysis based on RMS error and correlation between the model and measurements. All of these methods depend on a series of 4-6 free parameters, depending on the method. We analyzed all free parameters across a wide range. All analysis was performed on massively parallel computers. Computations revealed that there is no set of constant parameters that allows
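
    As background for the magnetic-field-based methods mentioned above, the core of MVAB-type analysis is an eigendecomposition of the magnetic variance matrix (MVAB-0 additionally constrains the mean field to be perpendicular to the normal; the unconstrained variant and the synthetic data below are an illustrative sketch, not the paper's new methods):

        import numpy as np

        def mva_normal(B):
            """Minimum variance analysis: B is an (n_samples, 3) array of IMF
            measurements; the eigenvector of the magnetic covariance matrix with
            the smallest eigenvalue estimates the front/layer normal."""
            M = np.cov(B, rowvar=False)          # 3x3 magnetic variance matrix
            eigvals, eigvecs = np.linalg.eigh(M) # eigh sorts eigenvalues ascending
            return eigvecs[:, 0]

        # Synthetic example: field varies mostly in the x-y plane, so normal ~ z.
        rng = np.random.default_rng(0)
        B = np.column_stack([rng.normal(0, 5, 500),
                             rng.normal(0, 3, 500),
                             rng.normal(0, 0.1, 500)])
        print(mva_normal(B))                     # close to (0, 0, +/-1)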

  9. Atomistic hybrid DSMC/NEMD method for nonequilibrium multiscale simulations

    NASA Astrophysics Data System (ADS)

    Gu, Kai; Watkins, Charles B.; Koplik, Joel

    2010-03-01

    A multiscale hybrid method for coupling the direct simulation Monte Carlo (DSMC) method to the nonequilibrium molecular dynamics (NEMD) method is introduced. The method addresses Knudsen-layer-type gas flows within a few mean free paths of an interface or about an object with dimensions of the order of a few mean free paths. It employs the NEMD method to resolve nanoscale phenomena closest to the interface along with coupled DSMC simulation of the remainder of the Knudsen layer. The hybrid DSMC/NEMD method is a particle-based algorithm without a buffer zone. It incorporates a new, modified generalized soft sphere (MGSS) molecular collision model to improve the poor computational efficiency of the traditional generalized soft sphere (GSS) model and to achieve DSMC compatibility with Lennard-Jones NEMD molecular interactions. An equilibrium gas, a Fourier thermal flow, and an oscillatory Couette flow are simulated to validate the method. The method shows good agreement with Maxwell-Boltzmann theory for the equilibrium system, Chapman-Enskog theory for Fourier flow, and pure DSMC simulations for oscillatory Couette flow. Speedup in CPU time of the hybrid solver is benchmarked against a pure NEMD solver baseline for different system sizes and solver domain partitions. Finally, the hybrid method is applied to investigate interaction of argon gas with solid surface molecules in a parametric study of the influence of wetting effects and solid molecular mass on energy transfer and thermal accommodation coefficients. It is determined that wetting effect strength and solid molecular mass have a significant impact on the energy transfer between gas and solid phases and thermal accommodation coefficient.

  10. A multiscale quantum mechanics/electromagnetics method for device simulations.

    PubMed

    Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua

    2015-04-07

    Multiscale modeling has become a popular tool for research in different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.

  11. Modelica-based TCP simulation

    NASA Astrophysics Data System (ADS)

    Velieva, T. R.; Eferina, E. G.; Korolkova, A. V.; Kulyabov, D. S.; Sevastianov, L. A.

    2017-01-01

    For the study and verification of our mathematical model of telecommunication systems, a discrete simulation model and a continuous analytical model were developed. However, for various reasons, these implementations are not entirely satisfactory. It is necessary to develop a more adequate simulation model, possibly using a different modeling paradigm. To model the TCP source, we propose a hybrid (continuous-discrete) approach. For the computer implementation of the model, the physical modeling language Modelica is used. The hybrid approach allows us to take into account the transitions between different states in the continuous model of the TCP protocol. The considered approach yielded a simple simulation model of the TCP source. This model has great potential for expansion. It is possible to implement different types of TCP.
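
    To illustrate the hybrid idea in general terms (this is not the authors' Modelica model; all constants below are illustrative), a continuous additive-increase window coupled to discrete loss events can be sketched as:

        # Minimal hybrid (continuous-discrete) sketch of a TCP source: the
        # congestion window W grows continuously between loss events and is
        # halved at discrete loss instants (AIMD).
        import random

        random.seed(0)
        W, t, dt, rtt = 1.0, 0.0, 0.01, 0.1     # window (segments), time, step, RTT (s)
        loss_prob_per_packet = 0.001

        while t < 10.0:
            W += dt / rtt                        # continuous additive increase (~1 segment/RTT)
            # packets sent per second ~ W/rtt, so loss events arrive at rate p*W/rtt
            if random.random() < loss_prob_per_packet * (W / rtt) * dt:
                W = max(W / 2.0, 1.0)            # discrete multiplicative decrease
            t += dt
        print(f"final window: {W:.2f} segments")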

  12. Agent-Based Simulations for Project Management

    NASA Technical Reports Server (NTRS)

    White, J. Chris; Sholtes, Robert M.

    2011-01-01

    Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique being used at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.

  13. Research on BOM based composable modeling method

    NASA Astrophysics Data System (ADS)

    Zhang, Mingxin; He, Qiang; Gong, Jianxing

    2013-03-01

    Composable modeling has long been a research hotspot in the area of Modeling and Simulation. In order to increase the reuse and interoperability of BOM-based models, this paper puts forward a composable modeling method based on BOM, studies the basic theory of BOM-based composable modeling, designs a general structure for the BOM-based coupled model, and traverses the structure of BOM-based atomic and coupled models. Finally, the paper describes the process of BOM-based composable modeling and draws conclusions about the method. From the prototype we developed and the accumulated model stocks, we found that this method can increase the reuse and interoperability of models.

  14. Agent Based Simulation Output Analysis

    DTIC Science & Technology

    2011-12-01

    over long periods of time) not to have a steady state, but apparently does. These simulation models are available free from sigmawiki.com. ... are used in computer animations and movies (for example, in the movie Jurassic Park) as well as to look for emergent social behavior in groups

  15. Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method

    NASA Astrophysics Data System (ADS)

    Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han

    2015-12-01

    Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.
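
    For orientation, the core Yee update that such GPU-accelerated FDTD codes parallelize can be shown in one dimension (grid size, Courant number, and source are arbitrary placeholders; real metamaterial runs add the non-uniform grid, TFSF, and dispersive-material machinery described above):

        import numpy as np

        # Minimal 1-D vacuum FDTD (Yee) leapfrog in normalized units.
        nz, nt = 400, 600
        Ex = np.zeros(nz)
        Hy = np.zeros(nz - 1)
        S = 1.0                                 # Courant number (=1 is the 1-D "magic" step)

        for n in range(nt):
            Hy += S * (Ex[1:] - Ex[:-1])        # update H from the curl of E
            Ex[1:-1] += S * (Hy[1:] - Hy[:-1])  # update E from the curl of H
            Ex[nz // 4] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source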

  16. Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-09-01

    For multi-scale and multi-modal neural modeling, one needs to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling the neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have solely been designed to archive model files, but the databases should give users a chance to validate the models before downloading them. In this paper, we report our ongoing project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software packages, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave, are pre-installed. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages.

  17. Attribute-Based Methods

    Treesearch

    Thomas P. Holmes; Wiktor L. Adamowicz

    2003-01-01

    Stated preference methods of environmental valuation have been used by economists for decades where behavioral data have limitations. The contingent valuation method (Chapter 5) is the oldest stated preference approach, and hundreds of contingent valuation studies have been conducted. More recently, and especially over the last decade, a class of stated preference...

  18. On the simulation of space based manipulators with contact

    NASA Technical Reports Server (NTRS)

    Walker, Michael W.; Dionise, Joseph

    1989-01-01

    An efficient method of simulating the motion of space-based manipulators is presented. Since the manipulators will come into contact with different objects in their environment while carrying out different tasks, an important part of the simulation is the modeling of those contacts. An inverse dynamics controller is used to control a two-armed manipulator whose task is to grasp an object floating in space. Simulation results are presented and an evaluation is made of the performance of the controller.

  19. Meshfree simulation of avalanches with the Finite Pointset Method (FPM)

    NASA Astrophysics Data System (ADS)

    Michel, Isabel; Kuhnert, Jörg; Kolymbas, Dimitrios

    2017-04-01

    Meshfree methods are the numerical method of choice in case of applications which are characterized by strong deformations in conjunction with free surfaces or phase boundaries. In the past the meshfree Finite Pointset Method (FPM) developed by Fraunhofer ITWM (Kaiserslautern, Germany) has been successfully applied to problems in computational fluid dynamics such as water crossing of cars, water turbines, and hydraulic valves. Most recently the simulation of granular flows, e.g. soil interaction with cars (rollover), has also been tackled. This advancement is the basis for the simulation of avalanches. Due to the generalized finite difference formulation in FPM, the implementation of different material models is quite simple. We will demonstrate 3D simulations of avalanches based on the Drucker-Prager yield criterion as well as the nonlinear barodesy model. The barodesy model (Division of Geotechnical and Tunnel Engineering, University of Innsbruck, Austria) describes the mechanical behavior of soil by an evolution equation for the stress tensor. The key feature of successful and realistic simulations of avalanches - apart from the numerical approximation of the occurring differential operators - is the choice of the boundary conditions (slip, no-slip, friction) between the different phases of the flow as well as the geometry. We will discuss their influences for simplified one- and two-phase flow examples. This research is funded by the German Research Foundation (DFG) and the FWF Austrian Science Fund.
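
    For reference, the Drucker-Prager yield criterion used in the simulations above can be written, in one common convention (the material constants are not specified in the abstract):

        f(\sigma) = \sqrt{J_2} - \alpha\, I_1 - k \le 0,

    where I_1 = tr(σ) is the first stress invariant, J_2 the second invariant of the deviatoric stress, and α and k material constants relatable to a friction angle and cohesion.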

  20. Relative solvation free energies calculated using an ab initio QM/MM-based free energy perturbation method: dependence of results on simulation length.

    PubMed

    Reddy, M Rami; Erion, Mark D

    2009-12-01

    Molecular dynamics (MD) simulations in conjunction with a thermodynamic perturbation approach were used to calculate relative solvation free energies of five pairs of small molecules, namely: (1) methanol to ethane, (2) acetone to acetamide, (3) phenol to benzene, (4) 1,1,1-trichloroethane to ethane, and (5) phenylalanine to isoleucine. Two studies were performed to evaluate the dependence of the convergence of these calculations on MD simulation length and starting configuration. In the first study, each transformation started from the same well-equilibrated configuration and the simulation length was varied from 230 to 2,540 ps. The results indicated that for transformations involving small structural changes, a simulation length of 860 ps is sufficient to obtain satisfactory convergence. In contrast, transformations involving relatively large structural changes, such as phenylalanine to isoleucine, require a significantly longer simulation length (>2,540 ps) to obtain satisfactory convergence. In the second study, the transformation was completed starting from three different configurations and using in each case 860 ps of MD simulation. The results from this study suggest that performing one long simulation may be better than averaging results from three different simulations using a shorter simulation length and three different starting configurations.
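
    The perturbation step referred to above is conventionally the Zwanzig relation, accumulated over a series of intermediate λ windows between the two end states:

        \Delta A(\lambda \to \lambda') \;=\; -k_B T\,
        \ln \left\langle \exp\!\left[-\frac{U_{\lambda'} - U_{\lambda}}{k_B T}\right] \right\rangle_{\lambda},

    where the average is taken over configurations sampled at window λ, and the window free energy differences are summed to give the total transformation free energy.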

  1. Method for simulating discontinuous physical systems

    DOEpatents

    Baty, Roy S.; Vaughn, Mark R.

    2001-01-01

    The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.

  2. PIXE simulation: Models, methods and technologies

    SciTech Connect

    Batic, M.; Pia, M. G.; Saracco, P.; Weidenspointner, G.

    2013-04-19

    The simulation of PIXE (Particle Induced X-ray Emission) is discussed in the context of general-purpose Monte Carlo systems for particle transport. Dedicated PIXE codes are mainly concerned with the application of the technique to elemental analysis, but they lack the capability of dealing with complex experimental configurations. General-purpose Monte Carlo codes provide powerful tools to model the experimental environment in great detail, but so far they have provided limited functionality for PIXE simulation. This paper reviews recent developments that have endowed the Geant4 simulation toolkit with advanced capabilities for PIXE simulation, and related efforts for quantitative validation of cross sections and other physical parameters relevant to PIXE simulation.

  3. First Principles based methods and applications for realistic simulations on complex soft materials to develop new materials for energy, health, and environmental sustainability

    NASA Astrophysics Data System (ADS)

    Goddard, William

    2013-03-01

    For soft materials applications it is essential to obtain accurate descriptions of the weak (London dispersion, electrostatic) interactions between nonbond units, to include interactions with and stabilization by solvent, and to obtain accurate free energies and entropic changes during chemical, physical, and thermal processing. We will describe some of the advances being made in first principles based methods for treating soft materials, with applications selected from new organic electrodes and electrolytes for batteries and fuel cells, forward osmosis for water cleanup, extended matter stable at ambient conditions, and drugs for modulating activation of GPCR membrane proteins.

  4. A new approach for radiosynoviorthesis: A dose-optimized planning method based on Monte Carlo simulation and synovial measurement using 3D slicer and MRI.

    PubMed

    Torres Berdeguez, Mirta Bárbara; Thomas, Sylvia; Rafful, Patricia; Arruda Sanchez, Tiago; Medeiros Oliveira Ramos, Susie; Souza Albernaz, Marta; Vasconcellos de Sá, Lidia; Lopes de Souza, Sergio Augusto; Mas Milian, Felix; Silva, Ademir Xavier da

    2017-07-01

    Recently, there has been growing interest in a methodology for dose planning in radiosynoviorthesis to replace fixed activity. Clinical practice based on fixed activity frequently does not embrace radiopharmaceutical dose optimization in patients. The aim of this paper is to propose and discuss a dose planning methodology considering the radiological findings of interest obtained by three-dimensional magnetic resonance imaging combined with Monte Carlo simulation in radiosynoviorthesis treatment applied to hemophilic arthropathy. The parameters analyzed were: surface area of the synovial membrane (synovial size), synovial thickness, and joint effusion obtained by 3D MRI of nine knees from nine patients on a SIEMENS AVANTO 1.5 T scanner using a knee coil. The 3D Slicer software performed both the semiautomatic segmentation and quantitation of these radiological findings. A Lucite phantom 3D MRI validated the quantitation methodology. The study used the Monte Carlo N-Particle eXtended code version 2.6 for calculating the S-values required to set the injected activity to deliver a 100 Gy absorbed dose at a determined synovial thickness. The radionuclides assessed were 90Y, 32P, 188Re, 186Re, 153Sm, and 177Lu, and the present study shows their effective treatment ranges. The quantitation methodology was successfully tested, with an error below 5% for different materials. The calculated S-values could provide data on the activity to be injected into the joint, assuming no extra-articular leakage from the joint cavity. Calculation of the effective treatment range could assist with the therapeutic decision, with an optimized protocol for dose prescription in RSO. Using the 3D Slicer software, this study focused on segmentation and quantitation of radiological features such as joint effusion, synovial size, and thickness, all obtained by 3D MRI in patients' knees with hemophilic arthropathy. The combination of synovial size and thickness with the parameters obtained by Monte Carlo
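
    The abstract does not spell out the dose-activity relation; under the MIRD formalism and the no-leakage assumption stated above, a minimal sketch is: the absorbed dose is D = Ã·S, and for a radionuclide fully retained in the joint and decaying in place the cumulated activity is Ã = A_0/λ = A_0 t_{1/2}/ln 2, so the injected activity for a prescribed dose is

        A_0 = \frac{D \,\ln 2}{S \, t_{1/2}},

    with S the Monte Carlo-computed S-value at the relevant synovial thickness.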

  5. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments is based upon a novel approach that relies on the global momentum conservation of the closed fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. A numerical example illustrates the method's application to prediction of bulk fluid behavior during a spacecraft ullage settling maneuver.

  6. A new rapid method of solar simulator calibration

    NASA Technical Reports Server (NTRS)

    Ross, B.

    1976-01-01

    A quick method for checking solar simulator spectral content is presented. The method is based upon a solar cell of extended spectral sensitivity and known absolute response, and a dichroic mirror with the reflection-transmission transition close to the peak wavelength of the Thekaekara AM0 distribution. It compromises between the need for spectral discrimination and the ability to integrate wide spectral regions of the distribution, which was considered important due to the spiky nature of the high-pressure xenon lamp in common use. The results are expressed in terms of a single number, the blue/red ratio, which, combined with the total (unfiltered) output, provides a simple adequate characterization. Measurements were conducted at eleven major facilities across the country and a total of eighteen simulators were measured, including five pulsed units.

  7. A Novel Mobile Phone Application for Pulse Pressure Variation Monitoring Based on Feature Extraction Technology: A Method Comparison Study in a Simulated Environment.

    PubMed

    Desebbe, Olivier; Joosten, Alexandre; Suehiro, Koichi; Lahham, Sari; Essiet, Mfonobong; Rinehart, Joseph; Cannesson, Maxime

    2016-07-01

    Pulse pressure variation (PPV) can be used to assess fluid status in the operating room. This measurement, however, is time consuming when done manually and unreliable through visual assessment. Moreover, its continuous monitoring requires the use of expensive devices. Capstesia™ is a novel Android™/iOS™ application, which calculates PPV from a digital picture of the arterial pressure waveform obtained from any monitor. The application identifies the peaks and troughs of the arterial curve, determines maximum and minimum pulse pressures, and computes PPV. In this study, we compared the accuracy of PPV generated with the smartphone application Capstesia (PPVapp) against the reference method, the manual determination of PPV (PPVman). The Capstesia application was loaded onto a Samsung Galaxy S4 phone. A physiologic simulator including PPV was used to display arterial waveforms on a computer screen. Data were obtained with different sweep speeds (6 and 12 mm/s) and randomly generated PPV values (from 2% to 24%), pulse pressures (30, 45, and 60 mm Hg), heart rates (60-80 bpm), and respiratory rates (10-15 breaths/min) on the simulator. Each metric was recorded 5 times at an arterial height scale X1 (PPV5appX1) and 5 times at an arterial height scale X3 (PPV5appX3). Reproducibility of PPVapp and PPVman was determined from the 5 pictures of the same hemodynamic profile. The effect of sweep speed, arterial waveform scale (X1 or X3), and number of images captured was assessed by a Bland-Altman analysis. The measurement error (ME) was calculated for each pair of data. A receiver operating characteristic curve analysis determined the ability of PPVapp to discriminate a PPVman > 13%. Four hundred eight pairs of PPVapp and PPVman were analyzed. The reproducibility of PPVapp and PPVman was 10% (interquartile range, 7%-14%) and 6% (interquartile range, 3%-10%), respectively, allowing a threshold ME of 12%. The overall mean bias for PPVappX1 was 1.1% within limits of
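
    The quantity being computed is, by its usual definition (maximum and minimum pulse pressure taken over one respiratory cycle):

        \mathrm{PPV}\,(\%) = 100 \times \frac{PP_{\max} - PP_{\min}}{\left(PP_{\max} + PP_{\min}\right)/2}.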

  8. XML-based resources for simulation

    SciTech Connect

    Kelsey, R. L.; Riese, J. M.; Young, G. A.

    2004-01-01

    As simulations and the machines they run on become larger and more complex, the inputs and outputs become more unwieldy. Increased complexity makes the setup of simulation problems difficult. It also contributes to the burden of handling and analyzing large amounts of output results. Another problem is that among a class of simulation codes (such as those for physical system simulation) there is often no single standard format or resource for input data. To run the same problem on different simulations requires a different setup for each simulation code. The Extensible Markup Language (XML) is used to represent a general set of data resources including physical system problems, materials, and test results. These resources provide a 'plug and play' approach to simulation setup. For example, a particular material for a physical system can be selected from a material database. The XML-based representation of the selected material is then converted to the native format of the simulation being run and plugged into the simulation input file. In this manner a user can quickly and more easily put together a simulation setup. In the case of output data, an XML approach to regression testing includes tests and test results with XML-based representations. This facilitates the ability to query for specific tests and make comparisons between results. Also, output results can easily be converted to other formats for publishing online or on paper.
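
    A sketch of the conversion idea, with a hypothetical material entry (the element and attribute names are illustrative, not the paper's actual schema) emitted as a simulator-native key = value deck:

        import xml.etree.ElementTree as ET

        # Hypothetical XML material resource selected from a material database.
        doc = ET.fromstring("""
        <material name="stainless-304">
          <density units="g/cc">7.9</density>
          <conductivity units="W/m-K">16.2</conductivity>
        </material>
        """)

        # Convert each property to a flat native-format input line.
        for prop in doc:
            print(f"{doc.get('name')}.{prop.tag} = {prop.text} [{prop.get('units')}]")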

  9. Approximate Bayesian computation methods for daily spatiotemporal precipitation occurrence simulation

    NASA Astrophysics Data System (ADS)

    Olson, Branden; Kleiber, William

    2017-04-01

    Stochastic precipitation generators (SPGs) produce synthetic precipitation data and are frequently used to generate inputs for physical models throughout many scientific disciplines. Especially for large data sets, statistical parameter estimation is difficult due to the high dimensionality of the likelihood function. We propose techniques to estimate SPG parameters for spatiotemporal precipitation occurrence based on an emerging set of methods called Approximate Bayesian computation (ABC), which bypass the evaluation of a likelihood function. Our statistical model employs a thresholded Gaussian process that reduces to a probit regression at single sites. We identify appropriate ABC penalization metrics for our model parameters to produce simulations whose statistical characteristics closely resemble those of the observations. Spell length metrics are appropriate for single sites, while a variogram-based metric is proposed for spatial simulations. We present numerical case studies at sites in Colorado and Iowa where the estimated statistical model adequately reproduces local and domain statistics.
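
    A minimal ABC rejection sketch of the idea (a Bernoulli toy model and a wet-day-fraction statistic stand in for the thresholded Gaussian process and the spell-length/variogram metrics of the paper; all constants are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)

        # "Observed" daily occurrence data and a toy stochastic simulator.
        obs = rng.random(365) < 0.3
        def simulate(p):
            return rng.random(365) < p

        def wet_fraction(x):                  # summary statistic (penalization metric)
            return x.mean()

        # ABC rejection: keep parameters whose simulations are close to the data,
        # bypassing any likelihood evaluation.
        accepted = []
        for _ in range(5000):
            p = rng.uniform(0, 1)             # draw from the prior
            if abs(wet_fraction(simulate(p)) - wet_fraction(obs)) < 0.01:
                accepted.append(p)

        print(np.mean(accepted))              # approximate posterior mean, ~0.3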

  10. Meshless lattice Boltzmann method for the simulation of fluid flows.

    PubMed

    Musavi, S Hossein; Ashrafizaadeh, Mahmud

    2015-02-01

    A meshless lattice Boltzmann numerical method is proposed. The collision and streaming operators of the lattice Boltzmann equation are separated, as in the usual lattice Boltzmann models. While the purely local collision equation remains the same, we rewrite the streaming equation as a pure advection equation and discretize the resulting partial differential equation using the Lax-Wendroff scheme in time and the meshless local Petrov-Galerkin scheme based on augmented radial basis functions in space. The meshless feature of the proposed method makes it a more powerful lattice Boltzmann solver, especially for cases in which using meshes introduces significant numerical errors into the solution, or when improving the mesh quality is a complex and time-consuming process. Three well-known benchmark fluid flow problems, namely the plane Couette flow, the circular Couette flow, and the impulsively started cylinder flow, are simulated for the validation of the proposed method. Excellent agreement with analytical solutions or with previous experimental and numerical results in the literature is observed in all the simulations. Although the computational resources required for the meshless method per node are higher compared to that of the standard lattice Boltzmann method, it is shown that for cases in which the total number of nodes is significantly reduced, the present method actually outperforms the standard lattice Boltzmann method.

  11. Methods for simulating solute breakthrough curves in pumping groundwater wells

    USGS Publications Warehouse

    Starn, J. Jeffrey; Bagtzoglou, Amvrossios C.; Robbins, Gary A.

    2012-01-01

    In modeling there is always a trade-off between execution time and accuracy. For gradient-based parameter estimation methods, where a simulation model is run repeatedly to populate a Jacobian (sensitivity) matrix, there exists a need for rapid simulation methods of known accuracy that can decrease execution time, and thus make the model more useful without sacrificing accuracy. Convolution-based methods can be executed rapidly for any desired input function once the residence-time distribution is known. The residence-time distribution can be calculated efficiently using particle tracking, but particle tracking can be ambiguous near a pumping well if the grid is too coarse. We present several embedded analytical expressions for improving particle tracking near a pumping well and compare them with a finely gridded finite-difference solution in terms of accuracy and CPU usage. Even though the embedded analytical approach can improve particle tracking near a well, particle methods reduce, but do not eliminate, reliance on a grid because velocity fields typically are calculated on a grid, and additional error is incurred using linear interpolation of velocity. A dilution rate can be calculated for a given grid and pumping well to determine if the grid is sufficiently refined. Embedded analytical expressions increase accuracy but add significantly to CPU usage. Structural error introduced by the numerical solution method may affect parameter estimates.
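
    The convolution step described above can be sketched as follows (the gamma-shaped residence-time distribution and the step input are placeholders for an RTD obtained from particle tracking):

        import numpy as np

        dt = 1.0                                    # time step (days)
        t = np.arange(0, 200, dt)
        g = t * np.exp(-t / 10.0)                   # placeholder residence-time distribution
        g /= g.sum() * dt                           # normalize the RTD to unit mass

        c_in = (t < 50).astype(float)               # 50-day step input concentration

        # Breakthrough curve at the well: superposition (convolution) integral.
        c_out = np.convolve(c_in, g)[:t.size] * dt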

  12. A High Order Element Based Method for the Simulation of Velocity Damping in the Hyporheic Zone of a High Mountain River

    NASA Astrophysics Data System (ADS)

    Preziosi-Ribero, Antonio; Peñaloza-Giraldo, Jorge; Escobar-Vargas, Jorge; Donado-Garzón, Leonardo

    2016-04-01

    Groundwater - surface water interaction is a topic that has gained relevance among the scientific community over the past decades. However, several questions remain open, and most of the research done in the past concerns transport phenomena and has little to do with understanding the dynamics of the flow patterns of these interactions. The aim of this research is to verify the attenuation of the water velocity that comes from the free surface and enters the porous medium under the bed of a high mountain river. The understanding of this process is a key feature in order to characterize and quantify the interactions between groundwater and surface water. However, the lack of information and the difficulties that arise when measuring groundwater flows under streams make physical quantification unreliable for scientific purposes. These issues suggest that numerical simulations and in-stream velocity measurements can be used in order to characterize these flows. Previous studies have simulated the attenuation of a sinusoidal pulse of vertical velocity that comes from a stream and goes into a porous medium. These studies used the Burgers equation and the 1-D Navier-Stokes equations as governing equations. However, the boundary conditions of the problem, and the results when varying the different parameters of the equations, show that the understanding of the process is not complete yet. To begin with, a Spectral Multi Domain Penalty Method (SMPM) was proposed for quantifying the velocity damping by solving the 1-D Navier-Stokes equations. The main assumptions are incompressibility and a hydrostatic approximation for the pressure distributions. This method was tested with theoretical signals that are mainly trigonometric pulses or functions. Afterwards, in order to test the results with real signals, velocity profiles were captured near the Gualí River bed (Honda, Colombia), with an Acoustic Doppler
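
    For reference, the viscous Burgers equation mentioned above as a governing equation for the damped vertical velocity reads, in one dimension:

        \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial z}
        = \nu\,\frac{\partial^2 u}{\partial z^2},

    where ν plays the role of an effective viscosity in the porous medium; the cited studies vary its parameters and boundary conditions.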

  13. Discrete Particle Method for Simulating Hypervelocity Impact Phenomena

    PubMed Central

    Watson, Erkai; Steinhauser, Martin O.

    2017-01-01

    In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulating of impact events for velocities beyond 5 km s−1. We present here the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity speed. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength. PMID:28772739

  14. Discrete Particle Method for Simulating Hypervelocity Impact Phenomena.

    PubMed

    Watson, Erkai; Steinhauser, Martin O

    2017-04-02

    In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulating of impact events for velocities beyond 5 km s(-1). We present here the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity speed. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength.

  15. Computer Simulation Methods for Defect Configurations and Nanoscale Structures

    SciTech Connect

    Gao, Fei

    2010-01-01

    This chapter will describe general computer simulation methods, including ab initio calculations, molecular dynamics, and the kinetic Monte Carlo method, and their applications to the calculations of defect configurations in various materials (metals, ceramics and oxides) and the simulations of nanoscale structures due to ion-solid interactions. The multiscale theory, modeling, and simulation techniques (both time scale and space scale) will be emphasized, and comparisons between computer simulation results and experimental observations will be made.
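
    Of the methods listed, kinetic Monte Carlo is compact enough to sketch; a minimal residence-time (BKL/Gillespie-style) loop for, say, a defect hopping among sites (the event rates are arbitrary placeholders) might look like:

        import numpy as np

        rng = np.random.default_rng(2)

        rates = np.array([1.0e3, 5.0e2, 2.0e2])   # rates of the possible events (1/s)
        t, nsteps = 0.0, 10
        for _ in range(nsteps):
            R = rates.sum()
            event = rng.choice(len(rates), p=rates / R)   # pick an event with probability ~ its rate
            t += -np.log(rng.random()) / R                # advance the clock exponentially
            # ... apply `event` to the configuration and update `rates` here ...
        print(f"simulated time: {t:.3e} s")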

  16. Parallel implementation of the particle simulation method with dynamic load balancing: Toward realistic geodynamical simulation

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Nishiura, D.

    2015-12-01

    Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are suitable for problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes, for example, tsunami with free surfaces and floating bodies, magma intrusion with fracture of rock, and shear zone pattern generation of granular deformation. In order to investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulation. Parallel computing is therefore important for handling such huge computational costs. An efficient parallel implementation of the SPH and DEM methods is, however, known to be difficult, especially for distributed-memory architectures. Lagrangian methods inherently suffer from a workload imbalance problem when parallelized with domains fixed in space, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods utilizing dynamic load-balancing algorithms, toward high-resolution simulation over large domains using massively parallel supercomputer systems. Our method treats the execution-time imbalance across MPI processes as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. In order to perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our

  17. Lattice-Boltzmann-based Simulations of Diffusiophoresis

    NASA Astrophysics Data System (ADS)

    Castigliego, Joshua; Kreft Pearce, Jennifer

    We present results from a lattice-Boltzmann-based Brownian dynamics simulation of diffusiophoresis and the separation of particles within the system. A gradient in viscosity that simulates a concentration gradient in a dissolved polymer allows us to separate various types of particles by their deformability. As seen in previous experiments, simulated particles that have a higher deformability react differently to the polymer matrix than those with a lower deformability. Therefore, the particles can be separated from each other. This simulation, in particular, was intended to model an oceanic system where the particles of interest were zooplankton, phytoplankton and microplastics. The separation of plankton from the microplastics was achieved.

  18. Cloud GPU-based simulations for SQUAREMR

    NASA Astrophysics Data System (ADS)

    Kantasis, George; Xanthis, Christos G.; Haris, Kostas; Heiberg, Einar; Aletras, Anthony H.

    2017-01-01

    Quantitative Magnetic Resonance Imaging (MRI) is a research tool, used more and more in clinical practice, as it provides objective information with respect to the tissues being imaged. Pixel-wise T1 quantification (T1 mapping) of the myocardium is one such application with diagnostic significance. A number of mapping sequences have been developed for myocardial T1 mapping with a wide range in terms of measurement accuracy and precision. Furthermore, measurement results obtained with these pulse sequences are affected by errors introduced by the particular acquisition parameters used. SQUAREMR is a new method which has the potential of improving the accuracy of these mapping sequences through the use of massively parallel simulations on Graphical Processing Units (GPUs) by taking into account different acquisition parameter sets. This method has been shown to be effective in myocardial T1 mapping; however, execution times may exceed 30 min which is prohibitively long for clinical applications. The purpose of this study was to accelerate the construction of SQUAREMR's multi-parametric database to more clinically acceptable levels. The aim of this study was to develop a cloud-based cluster in order to distribute the computational load to several GPU-enabled nodes and accelerate SQUAREMR. This would accommodate high demands for computational resources without the need for major upfront equipment investment. Moreover, the parameter space explored by the simulations was optimized in order to reduce the computational load without compromising the T1 estimates compared to a non-optimized parameter space approach. A cloud-based cluster with 16 nodes resulted in a speedup of up to 13.5 times compared to a single-node execution. Finally, the optimized parameter set approach allowed for an execution time of 28 s using the 16-node cluster, without compromising the T1 estimates by more than 10 ms. The developed cloud-based cluster and optimization of the parameter set reduced

  19. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Moin, Parviz

    2003-11-01

    A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Additionally, new boundary conditions based on the work of Poinsot and Lele have been developed to model the acoustic effect of a long channel upstream of the computational inlet, thus avoiding the need to include such a channel in the computational domain. The turbulent combustion model used is the Level Set model of Duchamp de Lageneste and Pitsch for premixed combustion. Comparison of LES results to the reacting experiments of Besson et al. will be presented.
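
    Schematically, implicit time discretization of the compressible pressure evolution shifts the low-Mach Poisson operator into a Helmholtz operator; a generic form (the exact coefficients of the scheme are not given in the abstract) is

        \nabla^2 p^{\,n+1} = f
        \quad\longrightarrow\quad
        \left(\nabla^2 - \frac{1}{c^2\,\Delta t^2}\right) p^{\,n+1} = \tilde f,

    so that acoustic waves are advanced implicitly and the acoustic CFL restriction disappears.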

  20. A fast mollified impulse method for biomolecular atomistic simulations

    NASA Astrophysics Data System (ADS)

    Fath, L.; Hochbruck, M.; Singh, C. V.

    2017-03-01

    Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious in implementation since they require either analytical Hessians or they need to solve nonlinear systems from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementation in standard softwares without Hessians or solving constraint systems. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice-ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift less than 1% on a 50 ps simulation.
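
    For context, in the impulse (r-RESPA) splitting the slow forces are applied as impulses once per long step; the mollified variant evaluates them at filtered positions and applies the chain rule,

        \tilde F_{\mathrm{slow}}(x) = A'(x)^{T}\, F_{\mathrm{slow}}\big(A(x)\big),

    where A is the averaging or projection filter (so the mollified force is the exact gradient of U_slow composed with A). The corotational filter proposed above is one particular choice of A, designed so that neither Hessians nor constraint solves are needed.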

  1. Simulation of turbulent flows using nodal integral method

    NASA Astrophysics Data System (ADS)

    Singh, Suneet

    Nodal methods are the backbone of the production codes for neutron-diffusion and transport equations. Despite their high accuracy, use of these methods for simulation of fluid flow is relatively new. Recently, a modified nodal integral method (MNIM) has been developed for simulation of laminar flows. In view of its high accuracy and efficiency, extension of this method to the simulation of turbulent flows is a logical step forward. In this dissertation, MNIM is extended in two ways to simulate incompressible turbulent flows: a new MNIM is developed for the 2D k-epsilon equations, and a 3D, parallel MNIM is developed for direct numerical simulations. Both developments are validated, and test problems are solved. In this dissertation, a new nodal numerical scheme is developed to solve the k-epsilon equations to simulate turbulent flows. The MNIM developed earlier for laminar flow equations is modified to incorporate the eddy-viscosity approximation and coupled with the above-mentioned schemes for the k and epsilon equations, to complete the implementation of the numerical scheme for the k-epsilon model. The scheme developed is validated by comparing the results obtained by the developed method with the results available in the literature obtained using direct numerical simulations (DNS). The results of current simulations match reasonably well with the DNS results. The discrepancies in the results are mainly due to the limitations of the k-epsilon model rather than a deficiency in the developed MNIM. A parallel version of the MNIM is needed to enhance its capability, in order to carry out DNS of turbulent flows. The parallelization of the scheme, however, presents some unique challenges as dependencies of the discrete variables are different from those that exist in other schemes (for example in finite volume based schemes). Hence, a parallel MNIM (PMNIM) is developed and implemented into a computer code with communication strategies based on the above mentioned

  2. A generic reaction-based biogeochemical simulator

    SciTech Connect

    Fang, Yilin; Yabusaki, Steven B.; Yeh, Gour T.

    2004-06-17

    This paper presents a generic biogeochemical simulator, BIOGEOCHEM. The simulator can read a thermodynamic database based on the EQ3/EQ6 database. It can also read user-specified equilibrium and kinetic reactions (reactions not defined in the EQ3/EQ6 database format) symbolically. BIOGEOCHEM is developed with a general paradigm. It overcomes the requirement in most available reaction-based models that reactions and rate laws be specified in a limited number of canonical forms. The simulator interprets reactions and rate laws of virtually any type for input to the MAPLE symbolic mathematical software package. MAPLE then generates Fortran code for the analytical Jacobian matrix used in the Newton-Raphson technique, which is compiled and linked into the BIOGEOCHEM executable. With this feature, users are spared from recoding the simulator to accept new equilibrium expressions or kinetic rate laws. Two examples are used to demonstrate the new features of the simulator.
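
    The symbolic step can be illustrated with an analogous open-source stand-in (the paper uses MAPLE with Fortran code generation; sympy with C code emission below is only a sketch, and the toy rate law is hypothetical):

        import sympy as sp

        # A user-specified kinetic rate law is parsed symbolically and an
        # analytical Jacobian is generated for the Newton-Raphson solver.
        c1, c2, k = sp.symbols('c1 c2 k')
        rates = sp.Matrix([-k * c1 * c2, -k * c1 * c2])   # toy bimolecular reaction
        J = rates.jacobian([c1, c2])                      # analytical Jacobian matrix
        print(sp.ccode(J[0, 0]))                          # emit compilable code: -c2*k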

  3. Simulation-based training for prostate surgery.

    PubMed

    Khan, Raheej; Aydin, Abdullatif; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2015-10-01

    models, human cadavers, distributed simulation and advanced training programmes and modules. The currently validated simulators can be used by healthcare organisations to provide supplementary training sessions for trainee surgeons. Further research should be conducted to validate simulated environments, to determine which simulators have greater efficacy than others and to assess the cost-effectiveness of the simulators and the transferability of skills learnt. With surgeons investigating new possibilities for easily reproducible and valid methods of training, simulation offers great scope for implementation alongside traditional methods of training. © 2014 The Authors. BJU International © 2014 BJU International. Published by John Wiley & Sons Ltd.

  4. 3-D Quantum Transport Solver Based on the Perfectly Matched Layer and Spectral Element Methods for the Simulation of Semiconductor Nanodevices

    PubMed Central

    Cheng, Candong; Lee, Joon-Ho; Lim, Kim Hwa; Massoud, Hisham Z.; Liu, Qing Huo

    2007-01-01

    A 3-D quantum transport solver based on the spectral element method (SEM) and perfectly matched layers (PML) is introduced to solve the 3-D Schrödinger equation with a tensor effective mass. In this solver, the influence of the environment is replaced with an artificial PML open boundary extended beyond the contact regions of the device. These contact regions are treated as waveguides with known incident waves from waveguide mode solutions. As the transmitted wave function is treated as a total wave, there is no need to decompose it into waveguide modes, which significantly simplifies the problem in comparison with conventional open boundary conditions. The spectral element method yields exponentially improving accuracy as the polynomial order and the number of sampling points increase. The PML region can be designed such that the reflection of outgoing waves by this artificial material is below −100 dB. The computational efficiency of the SEM solver is demonstrated by comparing numerical and analytical results for waveguide and plane-wave examples, and its utility is illustrated with multiple-terminal devices and semiconductor nanotube devices. PMID:18037971

  5. Alloy Surface Structure: Computer Simulations Using the BFS Method

    NASA Astrophysics Data System (ADS)

    Bozzolo, Guillermo; Ferrante, John

    The use of semiempirical methods for modeling alloy properties has proven difficult and limited. The two primary approaches to such modeling, the embedded atom method and the phenomenological method of Miedema, have serious limitations in the range of materials studied and in their success at predicting the properties of such systems. Recently, a new method developed by Bozzolo, Ferrante and Smith (BFS) has had considerable success in predicting a wide range of alloy properties. In this work, we reference previous BFS applications to surface alloy formation and alloy surface structure, leading to the analysis of binary and ternary Ni-based alloy surfaces. We present Monte Carlo simulation results for thin films of NiAl and Ni-Al-Ti alloys over a wide range of concentrations of the Ti alloying addition. The compositions of planes close to the surface, as well as bulk features, are discussed.

  6. Multinomial tau-leaping method for stochastic kinetic simulations

    NASA Astrophysics Data System (ADS)

    Pettigrew, Michel F.; Resat, Haluk

    2007-02-01

    We introduce the multinomial tau-leaping (MτL) method for general reaction networks with multichannel reactant dependencies. The MτL method is an extension of the binomial tau-leaping method in which efficiency is improved in several ways. First, τ-leaping steps are determined simply and efficiently using a priori information and Poisson distribution-based estimates of the expected reaction numbers over a tentative τ-leaping step. Second, networks are partitioned into closed groups of reactions and corresponding reactants such that no group's reactant set is found in any other group. Third, product formation is factored into the upper-bound estimate of the number of times a particular reaction occurs. Together, these features allow larger time steps in which the numbers of reactions occurring simultaneously, in a multichannel manner, are estimated accurately using a multinomial distribution. Furthermore, we develop a simple procedure that places a specific upper bound on the total reaction number to ensure the non-negativity of species populations over a single multiple-reaction step. Using two disparate test problems involving cellular processes—epidermal growth factor receptor signaling and a lactose operon model—we show that τ-leaping based methods such as the MτL algorithm can significantly reduce the number of simulation steps, increasing the numerical efficiency over the exact stochastic simulation algorithm by orders of magnitude.
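
    The core multinomial step — bound the total number of firings over a leap, then apportion them among channels in proportion to their propensities — can be shown in a toy sketch. This is a simplification, not the full MτL algorithm; the two-reaction network, leap size, and crude non-negativity bound below are hypothetical.

    ```python
    # Toy sketch of multinomial apportioning in tau-leaping. Not the full
    # MtL algorithm: network, leap size, and firing bound are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.array([100, 50])          # species counts [S1, S2]
    V = np.array([[-1, 1],           # R1: S1 -> S2
                  [0, -1]])          # R2: S2 -> 0 (degradation)

    def propensities(x):
        return np.array([0.3 * x[0], 0.05 * x[1]])

    tau = 0.1
    for _ in range(100):
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0:
            break
        # Crude upper bound on total firings so no population goes negative.
        n_total = min(rng.poisson(a0 * tau), x.min())
        # Apportion the bounded total among channels in one multinomial draw.
        fires = rng.multinomial(n_total, a / a0)
        x = x + fires @ V
    print(x)
    ```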

  7. A Carbonaceous Chondrite Based Simulant of Phobos

    NASA Technical Reports Server (NTRS)

    Rickman, Douglas L.; Patel, Manish; Pearson, V.; Wilson, S.; Edmunson, J.

    2016-01-01

    In support of an ESA-funded concept study considering a sample return mission, a simulant of the Martian moon Phobos was needed. There are no samples of the Phobos regolith, so none of the four characteristics normally used to design a simulant are explicitly known for Phobos. Because of this, specifications for a Phobos simulant were based on spectroscopy, other remote measurements, and judgment. A composition based on the Tagish Lake meteorite was assumed. The requirement that sterility be achieved, especially given the required organic content, was unusual and problematic. The final design mixed JSC-1A, antigorite, pseudo-agglutinates, and gilsonite. Sterility was achieved by irradiation in a commercial facility.

  8. Measuring Surgical Skills in Simulation-based Training.

    PubMed

    Atesok, Kivanc; Satava, Richard M; Marsh, J Lawrence; Hurwitz, Shepard R

    2017-10-01

    Simulation-based surgical skills training addresses several concerns associated with the traditional apprenticeship model, including patient safety, efficient acquisition of complex skills, and cost. The surgical specialties already recognize the advantages of surgical training using simulation, and simulation-based methods are appearing in surgical education and assessment for board certification. The necessity of simulation-based methods in surgical education along with valid, objective, standardized techniques for measuring learned skills using simulators has become apparent. The most commonly used surgical skill measurement techniques in simulation-based training include questionnaires and post-training surveys, objective structured assessment of technical skills and global rating scale of performance scoring systems, structured assessments using video recording, and motion tracking software. The literature shows that the application of many of these techniques varies based on investigator preference and the convenience of the technique. As simulators become more accepted as a teaching tool, techniques to measure skill proficiencies will need to be standardized nationally and internationally.

  9. Base Camp Design Simulation Training

    DTIC Science & Technology

    2011-07-01

    Figures: 2D map view (a location is to be selected within the NW region of Yousel Khel); 3D camera view; 2D representation of Yousel Khel. The scenario forces the student to conduct trade-off analysis among competing interests (proximity to a…). For this reason, we designed a 600-man base camp in VBS2™ from an AutoCAD diagram found in the Theater Construction Management System (version 3.2).

  10. A simulation based approach to optimize inventory replenishment with RAND algorithm: An extended study of corrected demand using Holt's method for textile industry

    NASA Astrophysics Data System (ADS)

    Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam

    2016-07-01

    Inventory has been a major concern in supply chain management, and much recent research on inventory control has produced methods that manage inventory and related overheads efficiently by reducing the cost of replenishment. This research aims to provide a better replenishment policy for multi-product, single-supplier situations involving chemical raw materials in the textile industry of Bangladesh. It is assumed that these industries currently pursue an individual replenishment system. The purpose is to find the optimal ideal cycle time and the individual replenishment cycle time of each product that yield the lowest annual holding and ordering cost, and also to find the optimal ordering quantity. In this paper an indirect grouping strategy is used; indirect grouping is suggested to outperform direct grouping when the major cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is used for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for each item, so the replenishment cycle time of each product is T × ki. First, based on the data, a comparison between the currently prevailing (individual) process and RAND using actual demands shows a 49% improvement in the total cost of replenishment. Second, discrepancies in demand are corrected using Holt's method; demand can only be forecast one or two months into the future because of the demand pattern of the industry under consideration. The application of RAND with corrected demand displays an even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm together with exponential smoothing models.
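
    Holt's method used above for demand correction is standard double exponential smoothing with a level and a trend term. A minimal sketch with hypothetical monthly demand data and smoothing constants; the resulting one- and two-month forecasts would then feed the RAND cycle-time search, each item being replenished every T × ki.

    ```python
    # Minimal sketch of Holt's linear (double exponential smoothing) forecast.
    # The demand series and smoothing constants are hypothetical.
    def holt_forecast(y, alpha=0.3, beta=0.1, horizon=2):
        level, trend = y[0], y[1] - y[0]     # simple initialisation
        for t in range(1, len(y)):
            prev_level = level
            level = alpha * y[t] + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        return [level + h * trend for h in range(1, horizon + 1)]

    demand = [120, 132, 128, 141, 150, 147, 160]   # monthly raw-material demand
    print(holt_forecast(demand))                   # forecasts 1 and 2 months out
    ```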

  11. High-performance liquid chromatography - Ultraviolet method for the determination of total specific migration of nine ultraviolet absorbers in food simulants based on 1,1,3,3-Tetramethylguanidine and organic phase anion exchange solid phase extraction to remove glyceride.

    PubMed

    Wang, Jianling; Xiao, Xiaofeng; Chen, Tong; Liu, Tingfei; Tao, Huaming; He, Jun

    2016-06-17

    The glyceride in oil food simulants usually causes serious interference with target analytes and leads to failure of the normal function of the RP-HPLC column. In this work, a convenient HPLC-UV method for the determination of the total specific migration of nine ultraviolet (UV) absorbers in food simulants was developed, based on 1,1,3,3-tetramethylguanidine (TMG) and organic phase anion exchange (OPAE) SPE to efficiently remove glyceride from the olive oil simulant. In contrast to normal ion exchange, which is carried out in an aqueous solution or aqueous phase environment, the OPAE SPE was performed in an organic phase environment, so the time-consuming and challenging extraction of the nine UV absorbers from vegetable oil with an aqueous solution could be readily omitted. The method was shown to have good linearity (r ≥ 0.99992), precision (intra-day RSD ≤ 3.3%), and accuracy (91.0% ≤ recoveries ≤ 107%); furthermore, low limits of quantification (0.05-0.2 mg/kg) were observed in five types of food simulants (10% ethanol, 3% acetic acid, 20% ethanol, 50% ethanol, and olive oil). The method was found to be well suited for quantitative determination of the total specific migration of the nine UV absorbers in both aqueous and vegetable oil simulants according to Commission Regulation (EU) No. 10/2011. Migration levels of the nine UV absorbers were determined in 31 plastic samples; UV-24, UV-531, HHBP and UV-326 were frequently detected, especially UV-326 in PE samples in the olive oil simulant. In addition, the OPAE SPE procedure was also applied to efficiently enrich or purify seven antioxidants in the olive oil simulant. These results indicate that the procedure will find wider application in the enrichment or purification of very weakly acidic compounds with phenolic hydroxyl groups that are relatively stable in TMG n-hexane solution and can barely be extracted from vegetable oil. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Numerical simulation of self-sustained oscillation of a voice-producing element based on Navier-Stokes equations and the finite element method.

    PubMed

    de Vries, Martinus P; Hamburg, Marc C; Schutte, Harm K; Verkerke, Gijsbertus J; Veldman, Arthur E P

    2003-04-01

    Surgical removal of the larynx results in radically reduced production of voice and speech. To improve voice quality, a voice-producing element (VPE) has been developed based on the lip principle, named after the lips of a musician playing a brass instrument. To optimize the VPE, a numerical model was developed. In this model, the finite element method is used to describe the mechanical behavior of the VPE, and the flow is described by the two-dimensional incompressible Navier-Stokes equations. The interaction between the VPE and the airflow is modeled by placing the grid of the VPE model in the grid of the aerodynamic model and requiring continuity of forces and velocities. By applying an increasing pressure to the numerical model, pulses comparable to glottal volume velocity waveforms are obtained, and the influence of geometric parameters can be determined by varying them. To validate the numerical model, an in vitro test with a prototype of the VPE was performed. Experimental and numerical results show acceptable agreement.

  13. The parallel subdomain-levelset deflation method in reservoir simulation

    NASA Astrophysics Data System (ADS)

    van der Linden, J. H.; Jönsthövel, T. B.; Lukyanov, A. A.; Vuik, C.

    2016-01-01

    Extreme and isolated eigenvalues are known to be harmful to the convergence of an iterative solver. Such eigenvalues can be produced by strong heterogeneity in the underlying physics. We can improve the quality of the spectrum by 'deflating' the harmful eigenvalues. In this work, deflation is applied to linear systems in reservoir simulation, where large, sudden differences in permeability produce extreme eigenvalues; the number and magnitude of these eigenvalues are linked to the number and magnitude of the permeability jumps. Two deflation methods are discussed. First, we argue that harmonic Ritz eigenvector deflation, which computes the deflation vectors from information produced by the linear solver, is infeasible in modern reservoir simulation due to high costs and lack of parallelism. Second, we test a physics-based subdomain-levelset deflation algorithm that constructs the deflation vectors a priori. Numerical experiments show that both methods can improve the performance of the linear solver. We highlight the fact that subdomain-levelset deflation is particularly suitable for a parallel implementation. For cases with well-defined permeability jumps of a factor 10^4 or higher, parallel physics-based deflation has potential in commercial applications. In particular, the good scalability of parallel subdomain-levelset deflation, combined with the robust parallel preconditioner for the deflated system, suggests the use of this method as an alternative to AMG.
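
    The subdomain-style construction at the heart of such a priori deflation uses piecewise-constant vectors, one per subdomain, and projects the resulting coarse space out of the operator. A generic numerical sketch follows; the toy matrix and partition are hypothetical, not the authors' reservoir systems.

    ```python
    # Generic sketch of subdomain deflation: piecewise-constant columns of Z,
    # one per subdomain, and the deflated operator P @ A. Toy data only.
    import numpy as np

    n, n_sub = 12, 3
    A = (np.diag(4.0 * np.ones(n))
         + np.diag(-1.0 * np.ones(n - 1), 1)
         + np.diag(-1.0 * np.ones(n - 1), -1))    # toy SPD system

    Z = np.zeros((n, n_sub))                      # column j: 1 on subdomain j
    for j in range(n_sub):
        Z[j * (n // n_sub):(j + 1) * (n // n_sub), j] = 1.0

    E = Z.T @ A @ Z                               # coarse (Galerkin) matrix
    P = np.eye(n) - A @ Z @ np.linalg.solve(E, Z.T)   # deflation projector

    # A Krylov solver then acts on P @ A, whose spectrum has the deflated
    # (extreme) eigenvalues replaced by zeros.
    print(np.sort(np.linalg.eigvals(P @ A).real))
    ```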

  14. Development of a numerical simulator of human swallowing using a particle method (part 1. Preliminary evaluation of the possibility of numerical simulation using the MPS method).

    PubMed

    Kamiya, Tetsu; Toyama, Yoshio; Michiwaki, Yukihiro; Kikuchi, Takahiro

    2013-01-01

    The aim of the present study was to evaluate the possibility of numerical simulation of the swallowing process using a moving particle simulation (MPS) method, which defined the food bolus as a number of particles in a fluid, a solid, and an elastic body. In order to verify the accuracy of the simulation results, a simple water bolus falling model was solved using the three-dimensional (3D) MPS method. We also examined the simplified swallowing simulation using a two-dimensional (2D) MPS method to confirm the interactions between the liquid, solid, elastic bolus, and organ structure. In a comparison of the 3D MPS simulation and experiments, the falling time of the water bolus and the configuration of the interface between the liquid and air corresponded exactly to the experimental measurements and the visualization images. The results showed that the accuracy of the 3D MPS simulation was qualitatively high for the simple falling model. Based on the results of the simplified swallowing simulation using the 2D MPS method, each bolus, defined as a liquid, solid, and elastic body, exhibited different behavior when the organs were transformed forcedly. This confirmed that the MPS method could be used for coupled simulations of the fluid, the solid, the elastic body, and the organ structures. The results suggested that the MPS method could be used to develop a numerical simulator of the swallowing process.

  15. Different percentages of false-positive results obtained using five methods for the calculation of reference change values based on simulated normal and ln-normal distributions of data.

    PubMed

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G; Sölétormos, György

    2016-11-01

    Background: Reference change values provide objective tools to assess the significance of a change between two consecutive results for a biomarker in an individual. The reference change value calculation is based on the assumption that within-subject biological variation fluctuates randomly around a homeostatic set point and follows a normal (Gaussian) distribution. This set point (or baseline in steady state) should be estimated from a set of previous samples, but in practice decisions based on the reference change value are often based on only two consecutive results. The original reference change value was based on standard deviations, in accordance with the assumption of normality, but was soon changed to coefficients of variation (CV) in the formula RCV = ±Z · 2^(1/2) · CV, where Z depends on the desired probability of significance, which also defines the percentage of false-positive results. The aim of this study was to investigate false-positive results using five different published methods for calculating the reference change value. Methods: The five reference change value methods were examined using normally and ln-normally distributed simulated data. Results: One method performed best in approaching the theoretical false-positive percentages on normally distributed data, and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without an estimated set point) performed worst on both normally and ln-normally distributed data. Conclusions: The optimal choice of method for calculating reference change value limits requires knowledge of the distribution of the data (normal or ln-normal) and, if possible, knowledge of the homeostatic set point.
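
    The quoted formula translates directly into a two-result significance check. A minimal sketch, assuming a two-sided test at 95% probability (Z = 1.96) and hypothetical CV and result values:

    ```python
    # Minimal sketch of the basic two-result reference change value check,
    # RCV = Z * sqrt(2) * CV. All input values are hypothetical.
    import math

    def rcv_percent(cv_total, z=1.96):
        """Two-sided RCV (%) from the combined analytical+biological CV (%)."""
        return z * math.sqrt(2.0) * cv_total

    def change_is_significant(result1, result2, cv_total, z=1.96):
        delta_pct = 100.0 * (result2 - result1) / result1
        return abs(delta_pct) > rcv_percent(cv_total, z)

    # Example: biomarker with a combined CV of 8%; two consecutive results.
    print(rcv_percent(8.0))                          # ~22.2% change required
    print(change_is_significant(100.0, 125.0, 8.0))  # True: 25% > RCV
    ```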

  16. Computational Methods for Jet Noise Simulation

    NASA Technical Reports Server (NTRS)

    Goodrich, John W. (Technical Monitor); Hagstrom, Thomas

    2003-01-01

    The purpose of our project is to develop, analyze, and test novel numerical technologies central to the long term goal of direct simulations of subsonic jet noise. Our current focus is on two issues: accurate, near-field domain truncations and high-order, single-step discretizations of the governing equations. The Direct Numerical Simulation (DNS) of jet noise poses a number of extreme challenges to computational technique. In particular, the problem involves multiple temporal and spatial scales as well as flow instabilities and is posed on an unbounded spatial domain. Moreover, the basic phenomenon of interest, the radiation of acoustic waves to the far field, involves only a minuscule fraction of the total energy. The best current simulations of jet noise are at low Reynolds number. It is likely that an increase of one to two orders of magnitude will be necessary to reach a regime where the separation between the energy-containing and dissipation scales is sufficient to make the radiated noise essentially independent of the Reynolds number. Such an increase in resolution cannot be obtained in the near future solely through increases in computing power. Therefore, new numerical methodologies of maximal efficiency and accuracy are required.

  17. Microcanonical ensemble simulation method applied to discrete potential fluids

    NASA Astrophysics Data System (ADS)

    Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro

    2015-09-01

    In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed by measuring the transition-rate probabilities between macroscopic states; its advantage over conventional Monte Carlo NVT (MC-NVT) simulations is that a continuous range of temperatures is covered in a single run. For a given density, the new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data from MC-NVT simulations. These results are important in the context of applying the Hüller-Pleimling method to discrete-potential systems that are based on generalizations of the SW and square-shoulder fluids.

  19. High Performance Parallel Methods for Space Weather Simulations

    NASA Technical Reports Server (NTRS)

    Hunter, Paul (Technical Monitor); Gombosi, Tamas I.

    2003-01-01

    This is the final report of our NASA AISRP grant entitled 'High Performance Parallel Methods for Space Weather Simulations'. The main thrust of the proposal was to achieve significant progress towards new high-performance methods which would greatly accelerate global MHD simulations and eventually make it possible to develop first-principles based space weather simulations which run much faster than real time. We are pleased to report that with the help of this award we made major progress in this direction and developed the first parallel implicit global MHD code with adaptive mesh refinement. The main limitation of all earlier global space physics MHD codes was the explicit time stepping algorithm. Explicit time steps are limited by the Courant-Friedrichs-Lewy (CFL) condition, which essentially ensures that no information travels more than a cell size during a time step. This condition represents a non-linear penalty for highly resolved calculations, since finer grid resolution (and consequently smaller computational cells) not only results in more computational cells, but also in smaller time steps.

  20. A coarse-grained method based on the analysis of short molecular dynamics trajectories for the simulation of non-Markovian dynamics of molecules adsorbed in microporous materials.

    PubMed

    Pintus, Alberto M; Gabrieli, Andrea; Pazzona, Federico G; Demontis, Pierfranco; Suffritti, Giuseppe B

    2014-08-21

    We developed a coarse-grained model suitable for the study of molecules adsorbed in microporous materials. A partition of the space available to the motion of the adsorbed molecules was carried out, which allows the dynamics to be formulated in terms of jumps between discrete regions. The probabilities of observing given pairs of successive jumps were calculated from Molecular Dynamics (MD) simulations performed on small systems and used to drive the motion of molecules in a lattice-gas model. The dynamics is thus reformulated in terms of event-space dynamics, which makes the system tractable despite its inherent non-Markovianity. Despite the assumptions enforced in the algorithm, the results show that it can be applied to various spherical molecules adsorbed in the all-silica zeolite ITQ-29, establishing a direct bridge between MD simulation results and coarse-grained models.

  1. Modeling and simulation of wheeled polishing method for aspheric surface

    NASA Astrophysics Data System (ADS)

    Zong, Liang; Xie, Bin; Wang, Ansu

    2016-10-01

    This paper describes a new polishing tool for the polishing of aspheric lenses: the wheeled polishing tool, equipped with an elastic polishing wheel that automatically adapts to the surface shape of the lens, is used to obtain a high-precision surface through the grinding action between the polishing wheel and the workpiece. In this paper, a 3D model of the polishing wheel structure is established using finite element analysis software. The distribution of the contact pressure between the polishing wheel and the optical element is analyzed, and the contact pressure distribution function is deduced by the least squares method based on Hertz contact theory. Removal functions are deduced under different loading conditions based on the Preston hypothesis. Finally, the dwell time function is calculated. The simulation results show that the removal function and dwell time function are suitable for the wheeled polishing system, establishing a theoretical foundation for future research.
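
    The Preston hypothesis invoked here states that the local removal rate is proportional to contact pressure times relative velocity, dz/dt = k_p · p · v. A minimal sketch with a Hertz-like pressure profile follows; the Preston coefficient, contact half-width, and wheel speed are hypothetical values, not the paper's.

    ```python
    # Minimal sketch of a Preston-equation removal function, dz/dt = kp * p * v.
    # Preston coefficient, pressure profile, and wheel speed are hypothetical.
    import numpy as np

    kp = 1.0e-13      # Preston coefficient, m^2/N (material/slurry dependent)
    a = 2.0e-3        # contact half-width, m
    r = np.linspace(-a, a, 201)                       # position across contact
    p = 2.0e5 * np.sqrt(np.clip(1 - (r / a) ** 2, 0, None))  # Hertz-like, Pa
    v = 1.5           # relative surface speed of the wheel, m/s

    removal_rate = kp * p * v          # removal depth per unit time, m/s
    dwell = 10.0                       # dwell time at this spot, s
    print(removal_rate.max() * dwell * 1e9, "nm peak removal")
    ```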

  2. Simulation-based medical education in pediatrics.

    PubMed

    Lopreiato, Joseph O; Sawyer, Taylor

    2015-01-01

    The use of simulation-based medical education (SBME) in pediatrics has grown rapidly over the past 2 decades and is expected to continue to grow. Similar to other instructional formats used in medical education, SBME is an instructional methodology that facilitates learning. Successful use of SBME in pediatrics requires attention to basic educational principles, including the incorporation of clear learning objectives. To facilitate learning during simulation the psychological safety of the participants must be ensured, and when done correctly, SBME is a powerful tool to enhance patient safety in pediatrics. Here we provide an overview of SBME in pediatrics and review key topics in the field. We first review the tools of the trade and examine various types of simulators used in pediatric SBME, including human patient simulators, task trainers, standardized patients, and virtual reality simulation. Then we explore several uses of simulation that have been shown to lead to effective learning, including curriculum integration, feedback and debriefing, deliberate practice, mastery learning, and range of difficulty and clinical variation. Examples of how these practices have been successfully used in pediatrics are provided. Finally, we discuss the future of pediatric SBME. As a community, pediatric simulation educators and researchers have been a leading force in the advancement of simulation in medicine. As the use of SBME in pediatrics expands, we hope this perspective will serve as a guide for those interested in improving the state of pediatric SBME.

  3. A distributed UNIX-based simulator

    SciTech Connect

    Wyatt, P.W.; Arnold, T.R.; Hammer, K.E.; Peery, J.S.; McKaskle, G.A. (Dept. of Nuclear Engineering)

    1990-01-01

    One of the problems confronting the designers of simulators over the last ten years -- particularly the designers of nuclear plant simulators -- has been how to accommodate the demands of their customers for increasing verisimilitude, especially in the modeling of as-faulted conditions. The demand for the modeling of multiphase, multi-component thermal-hydraulics, for example, imposed a requirement that taxed the ingenuity of simulator software developers. Difficulty was encountered in fitting such models into the existing simulator framework, not least because the real-time requirement of training simulation imposed severe limits on the minimum time step. In the mid-1980s, two evolutions that had been proceeding for some time culminated in mature products of potentially great utility to simulation. One was the emergence of low-cost workstations featuring not only versatile, object-oriented graphics but also considerable number-crunching capabilities of their own. The other was the adoption of UNIX as a "standard" operating system common to at least some machines offered by virtually all vendors. As a result, it is possible to design a simulator whose graphics and executive functions are off-loaded to one or more workstations, which are designed to handle such tasks, while the number-crunching duties are assigned to another machine designed expressly for that purpose. This paper deals with such a distributed UNIX-based simulator developed at the Savannah River Laboratory using graphics supplied by Texas A&M University under contract to SRL.

  4. Improving the performance of a filling line based on simulation

    NASA Astrophysics Data System (ADS)

    Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.

    2016-08-01

    The paper describes a method of improving the performance of a filling line based on simulation. The study concerns a production line located in a manufacturing centre of an FMCG company. A discrete event simulation model was built using data provided by a maintenance data acquisition system. Two types of failures were identified in the system and approximated using continuous statistical distributions. The model was validated against line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation. The outcomes of the simulations formed the basis of a financial analysis; NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate, and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.

  5. PACO: PArticle COunting Method To Enforce Concentrations in Dynamic Simulations.

    PubMed

    Berti, Claudio; Furini, Simone; Gillespie, Dirk

    2016-03-08

    We present PACO, a computationally efficient method for imposing concentration boundary conditions in nonequilibrium particle simulations. Because it requires only particle counting, its computational effort is significantly smaller than that of other methods. PACO enables Brownian dynamics simulations of micromolar electrolytes (3 orders of magnitude more dilute than previously simulated). PACO for Brownian dynamics is integrated into the BROWNIES package (www.phys.rush.edu/BROWNIES). We also introduce a molecular dynamics PACO implementation that allows very accurate control of concentration gradients.
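
    The abstract gives only the principle, but a counting-based boundary condition can be sketched generically: compare the particle count in a boundary buffer cell with the count implied by the target concentration, then insert or delete particles to match. The cell volume, target concentration, and placement rule below are hypothetical, and this is not the BROWNIES implementation.

    ```python
    # Generic sketch of a particle-counting concentration boundary condition:
    # top up or trim a boundary buffer cell each step. All values hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    cell_volume = 1.0e-24        # m^3, buffer cell at the domain boundary
    c_target = 6.0e25            # particles per m^3 (~0.1 M)
    n_target = int(round(c_target * cell_volume))   # desired count in the cell

    positions = list(rng.uniform(0.0, 1.0e-8, size=(55, 3)))  # current particles

    def enforce_concentration(positions):
        while len(positions) < n_target:     # too few: insert at random spots
            positions.append(rng.uniform(0.0, 1.0e-8, size=3))
        while len(positions) > n_target:     # too many: delete random ones
            positions.pop(rng.integers(len(positions)))
        return positions

    positions = enforce_concentration(positions)
    print(len(positions), "particles ==", n_target)
    ```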

  6. Experiential Learning Methods, Simulation Complexity and Their Effects on Different Target Groups

    ERIC Educational Resources Information Center

    Kluge, Annette

    2007-01-01

    This article empirically supports the thesis that there is no clear and unequivocal argument in favor of simulations and experiential learning. Instead the effectiveness of simulation-based learning methods depends strongly on the target group's characteristics. Two methods of supporting experiential learning are compared in two different complex…

  8. Development of a numerical simulator of human swallowing using a particle method (Part 2. Evaluation of the accuracy of a swallowing simulation using the 3D MPS method).

    PubMed

    Kamiya, Tetsu; Toyama, Yoshio; Michiwaki, Yukihiro; Kikuchi, Takahiro

    2013-01-01

    The aim of this study was to develop a three-dimensional (3D) numerical simulator of the swallowing action using the 3D moving particle simulation (MPS) method, which can simulate splashes and rapid changes in the free surfaces of food materials, and to evaluate its accuracy. The simulator was developed based on accurate organ models that undergo forced transformation over elapsed time. The validity of the simulation results was evaluated qualitatively through comparisons with videofluorography (VF) images. To evaluate the validity of the simulation results quantitatively, the normalized brightness around the vallecula was used as the evaluation parameter. The positions and configurations of the food bolus during each time step were compared in the simulated and VF images. The simulation results corresponded to the VF images at each time step in the visual evaluations, suggesting that the simulation was qualitatively correct. The normalized brightness of the simulated and VF images corresponded exactly at all time steps, showing that the simulation results, which contained information on changes in the organs and the food bolus, were numerically correct. Based on these results, the accuracy of this simulator is high, and it can be used to study the mechanisms of disorders that cause dysphagia. The simulator also calculated the shear rate at specific points and times for Newtonian and non-Newtonian fluids. We believe the information provided by this simulator could be useful for the development of food products and medicines and in rehabilitation facilities.

  9. A Multiscale simulation method for ice crystallization and frost growth

    NASA Astrophysics Data System (ADS)

    Yazdani, Miad

    2015-11-01

    The formation of ice crystals and frost involves physical mechanisms at immensely separated scales. The primary focus of this work is crystallization and frost growth on a cold plate exposed to humid air. Nucleation is addressed through a Gibbs energy-barrier method based on the interfacial energy of crystal and condensate as well as the ambient and surface conditions. The supercooled crystallization of ice crystals is simulated through a phase-field based method in which the degree and mode of surface-tension anisotropy in the fluid medium are represented statistically. In addition, the mesoscale width of the interface is quantified asymptotically, and it serves as a length-scale criterion in a so-called "Adaptive" AMR (AAMR) algorithm that ties the grid resolution at the interface to local physical properties. Moreover, because the crystal is exposed to humid air, a secondary non-equilibrium growth process contributes to the formation of frost at the tip of the crystal; a Monte Carlo implementation of the Diffusion Limited Aggregation method addresses the formation of frost during crystallization. Finally, a virtual-boundary based Immersed Boundary Method (IBM) is adapted to address the interaction of the ice crystal with convective air during its growth.
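
    The diffusion-limited aggregation (DLA) ingredient named here is a classic Monte Carlo construction: random walkers are released one at a time and freeze onto the aggregate on first contact. A minimal on-lattice sketch; the grid size, walker injection, and sticking rule are generic illustrative choices, not the authors'.

    ```python
    # Minimal on-lattice diffusion-limited aggregation (DLA) sketch.
    # Grid size, walker injection, and sticking rule are generic choices.
    import numpy as np

    rng = np.random.default_rng(2)
    N = 101
    grid = np.zeros((N, N), dtype=bool)
    grid[N // 2, N // 2] = True          # frozen seed (the "crystal tip")
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

    for _ in range(400):                 # release 400 random walkers
        i, j = rng.integers(1, N - 1, size=2)
        while True:
            di, dj = steps[rng.integers(4)]
            i = int(np.clip(i + di, 1, N - 2))
            j = int(np.clip(j + dj, 1, N - 2))
            # Stick when any 4-neighbour already belongs to the aggregate.
            if grid[i - 1, j] or grid[i + 1, j] or grid[i, j - 1] or grid[i, j + 1]:
                grid[i, j] = True
                break

    print(grid.sum(), "sites aggregated")
    ```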

  10. Spectral methods for multiscale plasma-physics simulations

    NASA Astrophysics Data System (ADS)

    Delzanno, Gian Luca; Manzini, Gianmarco; Vencels, Juris; Markidis, Stefano; Roytershteyn, Vadim

    2016-10-01

    In this talk, we present the SpectralPlasmaSolver (SPS) simulation method for the numerical approximation of the Vlasov-Maxwell equations. SPS either uses spectral methods both in physical and velocity space or combines spectral methods for the velocity space and a Discontinuous Galerkin (DG) discretization in space. The spectral methods are based on generalized Hermite's functions or Legendre polynomials, thus resulting in a time-dependent hyperbolic system for the spectral coefficients. The DG method is applied to numerically solve this system after a characteristic decomposition that properly ensures the upwinding in the scheme. This numerical approach can be seen as a generalization of the method of moment expansion and makes it possible to incorporate microscopic kinetic effects in a macroscale fluid-like behavior. The numerical approximation error for a given computational cost and the computational costs for a prescribed accuracy are orders of magnitude less than those provided by the standard PIC method. Moreover, conservation of physical quantities like mass, momentum, and energy can be proved theoretically. Finally, numerical examples are shown to prove the effectiveness of the approach.

  11. Spectral element method implementation on GPU for Lamb wave simulation

    NASA Astrophysics Data System (ADS)

    Kudela, Pawel; Wandowski, Tomasz; Radzienski, Maciej; Ostachowicz, Wieslaw

    2017-04-01

    Parallel implementation of the time-domain spectral element method on a GPU (Graphics Processing Unit) is presented. The proposed spectral element method implementation is based on sparse-matrix storage of local shape function derivatives calculated at Gauss-Lobatto-Legendre points. The algorithm utilizes two basic operations: multiplication of a sparse matrix by a vector, and element-by-element vector multiplication. Parallel processing is performed at the degree-of-freedom level. The assembly of the resultant force is done with the aid of a mesh coloring algorithm. The implementation enables considerable computational speedup as well as the simulation of complex structural health monitoring systems based on anomalies of propagating Lamb waves; the complexity of various models can thus be tested and compared on modern computers so as to be as close to reality as possible. A comparative example is described in which a composite laminate is modeled by homogenizing the material properties within one layer of 3D brick spectral elements, versus a model in which each ply is simulated by a separate layer of 3D brick spectral elements; the consequences of applying each technique are explained. Further analysis is performed for a composite laminate with delamination. In each case the piezoelectric transducer, as well as the glue layer between the actuator and the host structure, is modeled.

  12. Knowledge-based simulation using object-oriented programming

    NASA Technical Reports Server (NTRS)

    Sidoran, Karen M.

    1993-01-01

    Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.

  13. The frontal method in hydrodynamics simulations

    USGS Publications Warehouse

    Walters, R.A.

    1980-01-01

    The frontal solution method has proven to be an effective means of solving the matrix equations resulting from the application of the finite element method to a variety of problems. In this study, several versions of the frontal method were compared in efficiency for several hydrodynamics problems. Three basic modifications were shown to be of value: (1) elimination of equations with boundary conditions beforehand; (2) modification of the pivoting procedures to allow dynamic management of the equation size; and (3) storage of the eliminated equations in a vector. These modifications are sufficiently general to be applied to other classes of problems. © 1980.

  14. Simulation reduction using the Taguchi method

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Lautenschlager, Ume; Erikstad, Stein Owe; Allen, Janet K.

    1993-01-01

    A large amount of engineering effort is consumed in conducting experiments to obtain information needed for making design decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and producing high-quality products. The principal focus of this project is to develop and implement applications of Taguchi's quality engineering techniques. In particular, we show how these techniques are applied to reduce the number of experiments for trajectory simulation of the LifeSat space vehicle. Orthogonal arrays are used to study many parameters simultaneously with a minimum of time and resources, and Taguchi's signal-to-noise ratio is employed to measure quality. A compromise Decision Support Problem and Robust Design are applied to demonstrate how quality is designed into a product in the early stages of design.
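
    Taguchi's signal-to-noise ratio comes in standard forms depending on whether the response is to be minimized or maximized. A minimal sketch of the two common variants, with hypothetical trial results standing in for one orthogonal-array row:

    ```python
    # Standard Taguchi signal-to-noise ratios in dB. Trial data hypothetical.
    import numpy as np

    def sn_smaller_is_better(y):
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    def sn_larger_is_better(y):
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y ** 2))

    # Hypothetical landing-dispersion results (km) for one orthogonal-array
    # row; a higher S/N means a smaller, more robust dispersion.
    trial = [1.8, 2.1, 1.7, 2.4]
    print(sn_smaller_is_better(trial))
    ```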

  15. Multinomial Tau-Leaping Method for Stochastic Kinetic Simulations

    SciTech Connect

    Pettigrew, Michel F.; Resat, Haluk

    2007-02-28

    We introduce the multinomial tau-leaping (MtL) method, an improved version of the binomial tau-leaping method, for general reaction networks. Improvements in efficiency are achieved in several ways. First, tau-leaping steps are determined simply and efficiently using a priori information. Second, networks are partitioned into closed groups of reactions and corresponding reactants such that no group's reactants or reactions are found in any other group. Third, product formation is factored into the upper-bound estimate of the number of times a particular reaction occurs. Together, these features allow larger time steps in which the numbers of reactions occurring simultaneously, in a multi-channel manner, are estimated accurately using a multinomial distribution. Using a wide range of test problems of scientific and practical interest involving cellular processes, such as epidermal growth factor receptor signaling and a lactose operon model incorporating gene transcription and translation, we show that tau-leaping based methods like the MtL algorithm can significantly reduce the number of simulation steps, increasing the numerical efficiency over the exact stochastic simulation algorithm by orders of magnitude. Furthermore, the simultaneous multi-channel representation capability of the MtL algorithm makes it a candidate for FPGA implementation or for parallelization in parallel computing environments.

  16. Simulation of the 2-dimensional Drude’s model using molecular dynamics method

    SciTech Connect

    Naa, Christian Fredy; Amin, Aisyah; Ramli; Suprijadi; Djamal, Mitra; Wahyoedi, Seramika Ari; Viridi, Sparisoma

    2015-04-16

    In this paper, we report the results of simulations of electronic conduction in solids. The simulation is based on Drude's model and applies the molecular dynamics (MD) method, using the fifth-order predictor-corrector algorithm. A formula for the electrical conductivity as a function of lattice length and ion diameter, τ(L, d), can be obtained empirically from the simulation results.
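
    The classical Drude relation σ = n e² τ / m turns a mean free time τ — here measured empirically as τ(L, d) — into a conductivity. A minimal numerical sketch with copper-like illustrative values (not parameters from the paper):

    ```python
    # Classical Drude DC conductivity, sigma = n * e^2 * tau / m.
    # Carrier density and relaxation time are copper-like illustrative values.
    E_CHARGE = 1.602e-19      # elementary charge, C
    M_ELECTRON = 9.109e-31    # electron mass, kg

    n = 8.5e28                # carrier density, m^-3 (copper-like)
    tau = 2.5e-14             # mean free time between ion collisions, s

    sigma = n * E_CHARGE**2 * tau / M_ELECTRON
    print(f"{sigma:.3e} S/m")  # ~6e7 S/m, the right order for copper
    ```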

  17. Simulation of 3D tumor cell growth using nonlinear finite element method.

    PubMed

    Dong, Shoubing; Yan, Yannan; Tang, Liqun; Meng, Junping; Jiang, Yi

    2016-01-01

    We propose a novel parallel computing framework for a nonlinear finite element method (FEM)-based cell model and apply it to simulate avascular tumor growth. We derive computation formulas to simplify the simulation and design the basic algorithms. As the tumor cells proliferate over generations, the FEM elements may become larger and more distorted. We therefore describe a remeshing and refinement procedure for distorted or overly large finite elements, together with a parallel implementation based on the Message Passing Interface, to improve the accuracy and efficiency of the simulation. We demonstrate the feasibility and effectiveness of the FEM model and the parallelization methods in simulations of early tumor growth.

  18. Selecting magnet laminations recipes using the method of simulated annealing

    SciTech Connect

    Russell, A.D.; Baiod, R.; Brown, B.C.

    1997-05-01

    The Fermilab Main Injector project is building 344 dipoles using more than 7000 tons of steel. There were significant run-to-run variations in the magnetic properties of the steel, and differences in stress relief in the steel after stamping resulted in variations of gap height. To minimize magnet-to-magnet strength and field-shape variations, the laminations were shuffled based on the available magnetic and mechanical data and assigned to magnets using a computer program based on the method of simulated annealing. The lamination sets selected by the program have produced magnets that easily satisfy the design requirements. This paper discusses the observed gap variations, the program structure, and the strength uniformity results for the magnets produced.
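
    The assignment task described here fits the standard Metropolis annealing recipe: propose a swap of laminations between magnets, keep it if it reduces the spread of magnet strengths, and accept worsening swaps with a temperature-dependent probability. A generic sketch with a hypothetical cost function and lot data, not Fermilab's program:

    ```python
    # Generic simulated-annealing sketch: assign lamination lots to magnets
    # so that per-magnet mean strength is uniform. All data are hypothetical.
    import math, random

    random.seed(3)
    strengths = [random.gauss(1.0, 0.05) for _ in range(40)]   # lot strengths
    n_magnets, per_magnet = 8, 5
    assign = list(range(40))              # position k holds lot assign[k]

    def cost(assign):
        means = []
        for m in range(n_magnets):
            lots = assign[m * per_magnet:(m + 1) * per_magnet]
            means.append(sum(strengths[i] for i in lots) / per_magnet)
        mu = sum(means) / n_magnets
        return sum((x - mu) ** 2 for x in means)   # spread of magnet means

    T = 1.0e-3
    for _ in range(20000):
        i, j = random.sample(range(40), 2)
        old = cost(assign)
        assign[i], assign[j] = assign[j], assign[i]      # propose a swap
        new = cost(assign)
        if new > old and random.random() > math.exp((old - new) / T):
            assign[i], assign[j] = assign[j], assign[i]  # reject the swap
        T *= 0.9997                                      # cool slowly

    print(f"final spread of magnet means: {cost(assign):.2e}")
    ```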

  19. Simulations of Ground and Space-Based Oxygen Atom Experiments

    NASA Technical Reports Server (NTRS)

    Finchum, A. (Technical Monitor); Cline, J. A.; Minton, T. K.; Braunstein, M.

    2003-01-01

    A low-earth orbit (LEO) materials erosion scenario and the ground-based experiment designed to simulate it are compared using the direct-simulation Monte Carlo (DSMC) method. The DSMC model provides a detailed description of the interactions between the hyperthermal gas flow and a normally oriented flat plate for each case. We find that while the general characteristics of the LEO exposure are represented in the ground-based experiment, multi-collision effects can potentially alter the impact energy and directionality of the impinging molecules in the ground-based experiment. Multi-collision phenomena also affect downstream flux measurements.

  1. The finite cell method for bone simulations: verification and validation.

    PubMed

    Ruess, Martin; Tal, David; Trabelsi, Nir; Yosibash, Zohar; Rank, Ernst

    2012-03-01

    Standard methods for predicting bone's mechanical response from quantitative computer tomography (qCT) scans are mainly based on classical h-version finite element methods (FEMs). Due to the low-order polynomial approximation, the need for segmentation and the simplified approach to assign a constant material property to each element in h-FE models, these often compromise the accuracy and efficiency of h-FE solutions. Herein, a non-standard method, the finite cell method (FCM), is proposed for predicting the mechanical response of the human femur. The FCM is free of the above limitations associated with h-FEMs and is orders of magnitude more efficient, allowing its use in the setting of computational steering. This non-standard method applies a fictitious domain approach to simplify the modeling of a complex bone geometry obtained directly from a qCT scan and takes into consideration easily the heterogeneous material distribution of the various bone regions of the femur. The fundamental principles and properties of the FCM are briefly described in relation to bone analysis, providing a theoretical basis for the comparison with the p-FEM as a reference analysis and simulation method of high quality. Both p-FEM and FCM results are validated by comparison with an in vitro experiment on a fresh-frozen femur.

  2. Simulation of secondary fault shear displacements - method and application

    NASA Astrophysics Data System (ADS)

    Fälth, Billy; Hökmark, Harald; Lund, Björn; Mai, P. Martin; Munier, Raymond

    2014-05-01

    We present an earthquake simulation method to calculate dynamically and statically induced shear displacements on faults near a large earthquake. Our results are aimed at improved safety assessment of underground waste storage facilities, e.g. a nuclear waste repository. For our simulations, we use the distinct element code 3DEC. We benchmark 3DEC by running an earthquake simulation and then compare the displacement waveforms at a number of surface receivers with the corresponding results obtained from the COMPSYN code package. The benchmark test shows a good agreement in terms of both phase and amplitude. In our application to a potential earthquake near a storage facility, we use a model with a pre-defined earthquake fault plane (primary fault) surrounded by numerous smaller discontinuities (target fractures) representing faults in which shear movements may be induced by the earthquake. The primary fault and the target fractures are embedded in an elastic medium. Initial stresses are applied and the fault rupture mechanism is simulated through a programmed reduction of the primary fault shear strength, which is initiated at a pre-defined hypocenter. The rupture is propagated at a typical rupture propagation speed and arrested when it reaches the fault plane boundaries. The primary fault residual strength properties are uniform over the fault plane. The method allows for calculation of target fracture shear movements induced by static stress redistribution as well as by dynamic effects. We apply the earthquake simulation method in a model of the Forsmark nuclear waste repository site in Sweden with rock mass properties, in situ stresses and fault geometries according to the description of the site established by the Swedish Nuclear Fuel and Waste Management Co (SKB). The target fracture orientations are based on the Discrete Fracture Network model developed for the site. With parameter values set to provide reasonable upper bound estimates of target fracture

  3. Development of an Agent-Based Model (ABM) to Simulate the Immune System and Integration of a Regression Method to Estimate the Key ABM Parameters by Fitting the Experimental Data.

    PubMed

    Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le

    2015-01-01

    Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for ABM to estimate key parameters of the model by incorporating experimental data, whereas the differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing ABM to mimic the multi-scale immune system, with its various phenotypes and types of cells, and using the input and output of the ABM to build a Loess regression for key parameter estimation. Next, we employed a greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set and used the ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales and with various phenotypes and cell types, but can also accurately infer key parameters as a DE model does. This study therefore develops a mechanism that can simulate the complicated immune system in detail, as ABM does, while validating the model's reliability and efficiency by fitting experimental data, as DE models do.

  5. An Adaptive Multiscale Finite Element Method for Large Scale Simulations

    DTIC Science & Technology

    2015-09-28

    …the method. Using the above definitions, the weak statement of the non-linear local problem at the kth… (AFRL-AFOSR-VA-TR-2015-0305: An Adaptive Multiscale Generalized Finite Element Method for Large Scale Simulations; Carlos Duarte, University of Illinois at Urbana-Champaign.)

  6. Solution-phase synthesis of silver nanodiscs in HPMC-matrix and simulation of UV-vis extinction spectra using DDA based method

    NASA Astrophysics Data System (ADS)

    Sarkar, Priyanka; Pyne, Santanu; Sahoo, Gobinda P.; Bhui, Dipak K.; Bar, Harekrishna; Samanta, Sadhan; Misra, Ajay

    2011-11-01

    The present investigation demonstrates a very simple seed-mediated route, using hydroxypropyl methyl cellulose (HPMC) as the stabilizing agent, for the synthesis of silver nanodiscs in aqueous solution. Central to the concept of seed-mediated growth of nanoparticles is that small nanoparticle seeds serve as nucleation centres for growing nanoparticles to a desired size and shape. It is found that the additional citrate ions in the growth solution play the pivotal role in controlling the size of the silver nanodiscs. Like the polymers in the solution, citrate ions can be dynamically adsorbed on the growing silver nanoparticles and promote their two-dimensional (2D) growth. Morphological, structural, and spectral changes associated with the seed-mediated growth of the nanoparticles in the presence of HPMC are characterized using UV-vis spectroscopy and TEM. Metal nanoparticles have received increasing attention for their peculiar capability to control local surface plasmon resonance (LSPR) when interacting with incident light waves. An extensive simulation study of the UV-vis extinction spectra of the synthesized silver nanodiscs has been carried out using the discrete dipole approximation (DDA) methodology.

  7. Simulation of acoustic wave propagation in a borehole surrounded by cracked media using a finite difference method based on Hudson’s approach

    NASA Astrophysics Data System (ADS)

    Yue, Chongwang; Yue, Xiaopeng

    2017-06-01

    Cracked media are a common geophysical phenomenon. It is important to study wave propagation characteristics in boreholes for sonic logging theory, as this can provide the basis for sonic log interpretation. This paper derives velocity-stress staggered-grid finite difference equations for elastic wave propagation in cylindrical coordinates for cracked media. The sound field in the borehole is numerically simulated using a finite-difference technique of second order in time and tenth order in space. Relationship curves are given between the P-wave and S-wave velocities, the anisotropy factor, crack density, and aspect ratio. Furthermore, snapshots are given of the borehole acoustic wave field in cracked media with different crack densities and aspect ratios. The calculated results show that, in dry conditions, the P-wave velocity decreases in both the axial and radial directions as the crack density increases, and more rapidly in the axial direction. The S-wave velocity decreases slowly with increasing crack density. The attenuation of wave energy increases with increasing crack density. In fluid-saturated cracked media, both the P-wave and S-wave velocities increase with the aspect ratio of the cracks, while the anisotropy of the P-wave decreases with the aspect ratio. The aspect ratio of the cracks does not noticeably affect energy attenuation.

  8. Situating Computer Simulation Professional Development: Does It Promote Inquiry-Based Simulation Use?

    ERIC Educational Resources Information Center

    Gonczi, Amanda L.; Maeng, Jennifer L.; Bell, Randy L.; Whitworth, Brooke A.

    2016-01-01

    This mixed-methods study sought to identify professional development implementation variables that may influence participant (a) adoption of simulations, and (b) use for inquiry-based science instruction. Two groups (Cohort 1, N = 52; Cohort 2, N = 104) received different professional development. Cohort 1 was focused on Web site use mechanics.…

  10. Transcending Competency Testing in Hospital-Based Simulation.

    PubMed

    Lassche, Madeline; Wilson, Barbara

    2016-02-01

    Simulation is a frequently used method for training students in health care professions and has recently gained acceptance in acute care hospital settings for use in educational programs and competency testing. Although hospital-based simulation is currently limited primarily to use in skills acquisition, expansion of the use of simulation via a modified Quality Health Outcomes Model to address systems factors such as the physical environment and human factors such as fatigue, reliance on memory, and reliance on vigilance could drive system-wide changes. Simulation is an expensive resource and should not be limited to use for education and competency testing. Well-developed, peer-reviewed simulations can be used for environmental factors, human factors, and interprofessional education to improve patients' outcomes and drive system-wide change for quality improvement initiatives.

  11. Multi-pass Monte Carlo simulation method in nuclear transmutations.

    PubMed

    Mateescu, Liviu; Kadambi, N Prasad; Ravindra, Nuggehalli M

    2016-12-01

    Monte Carlo methods, in their direct brute-force simulation incarnation, give realistic results when the probabilities involved, geometrical or otherwise, remain constant for the duration of the simulation. However, there are physical setups where the evolution of the simulation modifies the simulated system itself. Chief among such evolving simulated systems are activation/transmutation setups. That is, the simulation starts with a given set of probabilities, determined by the geometry of the system, its components, and the microscopic interaction cross-sections; however, the relative weights of the components change as the simulation proceeds. The natural remedy would be to adjust the probabilities after every step of the simulation. On the other hand, a physical system typically has a number of components of the order of Avogadro's number, usually 10^25 or 10^26 members. A simulation step changes the characteristics of just a few of these members, so a probability shifts by a quantity of order 1/10^25. Such a change cannot be accounted for within a simulation, because the simulation would then need at least 10^28 steps to have any significance, which is clearly infeasible. For our computing devices, a simulation of one million steps is comfortable, but a further order of magnitude becomes too big a stretch for the computing resources. We propose here a method of dealing with the changing probabilities that increases the precision. The method is intended as a fast approximate approach, and also as a simple introduction (for the benefit of students) to the wide-ranging subject of Monte Carlo simulations vis-à-vis nuclear reactors.
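
    A minimal sketch of the multi-pass idea under stated assumptions: instead of simulating ~10^25 atoms one event at a time, each pass samples a modest number of Monte Carlo trials with frozen probabilities and then rescales the full inventory; the nuclide pair, rate, and pass length below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical two-nuclide system: species A transmutes to B.
        # Physical inventories are ~1e25 atoms, so per-atom simulation is
        # infeasible; we advance in passes with frozen probabilities and
        # rescale the inventory after each pass.
        N_A, N_B = 1.0e25, 0.0       # atom inventories
        sigma_phi = 1.0e-9           # hypothetical reaction rate per atom-second
        dt = 1.0e6                   # seconds covered by one pass
        samples = 100_000            # MC samples drawn per pass

        for p in range(10):
            # In a multi-nuclide system, prob would be recomputed here from
            # the updated inventory -- the "multi-pass" probability adjustment.
            prob = 1.0 - np.exp(-sigma_phi * dt)
            # Sample the fraction of A atoms reacting, then rescale to the
            # full inventory instead of simulating every atom.
            reacted_fraction = rng.binomial(samples, prob) / samples
            dN = reacted_fraction * N_A
            N_A -= dN
            N_B += dN
            print(f"pass {p}: N_A={N_A:.3e}  N_B={N_B:.3e}")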

  12. Streamflow simulation methods for ungauged and poorly gauged watersheds

    NASA Astrophysics Data System (ADS)

    Loukas, A.; Vasiliades, L.

    2014-02-01

    Rainfall-runoff modelling procedures for ungauged and poorly gauged watersheds are developed in this study. A well-established hydrological model, the UBC watershed model, is selected and applied in five different river basins located in Canada, Cyprus and Pakistan. Catchments from cold, temperate, continental and semiarid climate zones are included to demonstrate the developed procedures. Two methodologies for streamflow modelling are proposed and analysed. The first method uses the UBC watershed model with a universal set of parameters for water allocation and flow routing, and precipitation gradients estimated from the available annual precipitation data as well as from regional information on the distribution of orographic precipitation. This method is proposed for watersheds without streamflow gauge data and with limited meteorological station data. The second, hybrid method proposes the coupling of the UBC watershed model with artificial neural networks (ANNs) and is intended for use in poorly gauged watersheds which have limited streamflow measurements. The two proposed methods have been applied to five mountainous watersheds with widely varying climatic, physiographic and hydrological characteristics. The evaluation of the applied methods is based on a combination of graphical results, statistical evaluation metrics, and normalized goodness-of-fit statistics. The results show that the first method satisfactorily simulates the observed hydrograph assuming that the basins are ungauged. When limited streamflow measurements are available, the coupling of ANNs with the regional non-calibrated UBC flow model components is considered a successful alternative to the conventional calibration of a hydrological model, based on the employed evaluation criteria for streamflow modelling and flood frequency estimation.

  13. Streamflow simulation methods for ungauged and poorly gauged watersheds

    NASA Astrophysics Data System (ADS)

    Loukas, A.; Vasiliades, L.

    2014-07-01

    Rainfall-runoff modelling procedures for ungauged and poorly gauged watersheds are developed in this study. A well-established hydrological model, the University of British Columbia (UBC) watershed model, is selected and applied in five different river basins located in Canada, Cyprus, and Pakistan. Catchments from cold, temperate, continental, and semiarid climate zones are included to demonstrate the procedures developed. Two methodologies for streamflow modelling are proposed and analysed. The first method uses the UBC watershed model with a universal set of parameters for water allocation and flow routing, and precipitation gradients estimated from the available annual precipitation data as well as from regional information on the distribution of orographic precipitation. This method is proposed for watersheds without streamflow gauge data and limited meteorological station data. The second hybrid method proposes the coupling of UBC watershed model with artificial neural networks (ANNs) and is intended for use in poorly gauged watersheds which have limited streamflow measurements. The two proposed methods have been applied to five mountainous watersheds with largely varying climatic, physiographic, and hydrological characteristics. The evaluation of the applied methods is based on the combination of graphical results, statistical evaluation metrics, and normalized goodness-of-fit statistics. The results show that the first method satisfactorily simulates the observed hydrograph assuming that the basins are ungauged. When limited streamflow measurements are available, the coupling of ANNs with the regional, non-calibrated UBC flow model components is considered a successful alternative method to the conventional calibration of a hydrological model based on the evaluation criteria employed for streamflow modelling and flood frequency estimation.
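
    A minimal sketch of the hybrid step under stated assumptions (this is not the UBC model itself): a synthetic "simulated" hydrograph with systematic bias stands in for the non-calibrated model output, and an ANN trained on a short gauged period corrects the full series.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)

        # Hypothetical stand-in for non-calibrated watershed-model output:
        # a crude simulated hydrograph with systematic bias.
        t = np.arange(1000)
        true_flow = 50 + 30*np.sin(2*np.pi*t/365) + rng.gamma(2.0, 2.0, t.size)
        simulated = 0.6*true_flow + 10 + rng.normal(0, 3, t.size)  # biased model

        # Hybrid step: train an ANN to map recent simulated flows to the few
        # available observations, then correct the whole simulated series.
        lags = np.column_stack([simulated[2:], simulated[1:-1], simulated[:-2]])
        gauged = slice(0, 200)  # pretend only the first 200 days are gauged
        ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
        ann.fit(lags[gauged], true_flow[2:][gauged])

        corrected = ann.predict(lags)
        rmse_raw = np.sqrt(np.mean((simulated[2:] - true_flow[2:])**2))
        rmse_ann = np.sqrt(np.mean((corrected - true_flow[2:])**2))
        print(f"RMSE raw {rmse_raw:.1f} -> ANN-corrected {rmse_ann:.1f}")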

  14. Constraint-based soft tissue simulation for virtual surgical training.

    PubMed

    Tang, Wen; Wan, Tao Ruan

    2014-11-01

    Most surgical simulators employ a linear elastic model to simulate soft tissue material properties because of its computational efficiency and simplicity. However, soft tissues often have elaborate nonlinear material characteristics. Most prominently, soft tissues are soft and compliant at small strains, but after the initial deformation they strongly resist further deformation even under large forces. This material characteristic, referred to as nonlinear incompliance, is computationally expensive and numerically difficult to simulate. This paper presents a constraint-based finite-element algorithm that simulates nonlinear incompliant tissue materials efficiently for interactive simulation applications such as virtual surgery. First, the proposed algorithm models the material stiffness behavior of soft tissues with a set of 3-D strain-limit constraints on the deformation strain tensors. By enforcing a large number of geometric constraints to achieve the material stiffness, the algorithm reduces the task of solving stiff equations of motion with a general numerical solver to iteratively resolving a set of constraints with a nonlinear Gauss-Seidel iterative process. Second, because a Gauss-Seidel method processes constraints individually, a multiresolution hierarchy structure is used to speed up the global convergence of the large constrained system, accelerating the computation significantly and making interactive simulation possible at a high level of detail. Finally, the paper also presents a simple-to-build data acquisition system to validate simulation results against ex vivo tissue measurements. An interactive virtual reality-based simulation system is also demonstrated.
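
    A minimal sketch of the Gauss-Seidel strain-limiting idea in one dimension, assuming a toy chain of nodes: each edge's strain is clamped to a limit, constraints are processed one at a time, and updated positions are reused immediately; the positions, rest lengths, and the 10% limit are illustrative.

        import numpy as np

        # Gauss-Seidel strain-limiting on a 1-D chain of nodes: each edge's
        # strain is clamped to +/-10%, processing constraints one at a time
        # and reusing updated positions immediately (the Gauss-Seidel trait).
        x = np.array([0.0, 1.5, 2.2, 4.0, 4.3])   # deformed node positions
        rest = np.array([1.0, 1.0, 1.0, 1.0])     # rest lengths
        s_max = 0.10                               # strain limit

        for sweep in range(50):
            moved = 0.0
            for i in range(len(rest)):
                length = x[i+1] - x[i]
                lo, hi = rest[i]*(1 - s_max), rest[i]*(1 + s_max)
                target = np.clip(length, lo, hi)
                corr = 0.5 * (length - target)   # split correction between nodes
                x[i]   += corr
                x[i+1] -= corr
                moved += abs(corr)
            if moved < 1e-9:                     # all constraints satisfied
                break

        print("final strains:", (np.diff(x) - rest) / rest)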

  15. Accelerated GPU based SPECT Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms, derived from the same cylindrical phantom acquisition, was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  16. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms, derived from the same cylindrical phantom acquisition, was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  17. Numerical Methods and Simulations of Complex Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Brady, Peter

    Multiphase flows are an important part of many natural and technological phenomena such as ocean-air coupling (which is important for climate modeling) and the atomization of liquid fuel jets in combustion engines. The unique challenges of multiphase flow often make analytical solutions to the governing equations impossible and experimental investigations very difficult. Thus, high-fidelity numerical simulations can play a pivotal role in understanding these systems. This dissertation describes numerical methods developed for complex multiphase flows and the simulations performed using these methods. First, the issue of multiphase code verification is addressed. Code verification answers the question "Is this code solving the equations correctly?" The method of manufactured solutions (MMS) is a procedure for generating exact benchmark solutions which can test the most general capabilities of a code. The chief obstacle to applying MMS to multiphase flow lies in the discontinuous nature of the material properties at the interface. An extension of the MMS procedure to multiphase flow is presented, using an adaptive marching tetrahedron style algorithm to compute the source terms near the interface. Guidelines for the use of the MMS to help locate coding mistakes are also detailed. Three multiphase systems are then investigated: (1) the thermocapillary motion of three-dimensional and axisymmetric drops in a confined apparatus, (2) the flow of two immiscible fluids completely filling an enclosed cylinder and driven by the rotation of the bottom endwall, and (3) the atomization of a single drop subjected to a high-shear turbulent flow. The systems are simulated numerically by solving the full multiphase Navier-Stokes equations coupled to the various equations of state and a level set interface tracking scheme based on the refined level set grid method. The codes have been parallelized using MPI in order to take advantage of today's very large parallel computational resources.
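
    A worked MMS example in one dimension (far simpler than the multiphase case in the text): choose u(x) = sin(pi x), derive the source term f = pi^2 sin(pi x) for -u'' = f, feed f to the solver, and confirm the expected second-order convergence.

        import numpy as np

        # Method of manufactured solutions for -u'' = f on [0, 1]:
        # pick u(x) = sin(pi x), so f(x) = pi^2 sin(pi x) and u(0) = u(1) = 0.
        def solve(n):
            x = np.linspace(0.0, 1.0, n + 1)
            h = x[1] - x[0]
            f = np.pi**2 * np.sin(np.pi * x[1:-1])
            # Tridiagonal system from the standard 3-point stencil.
            A = (np.diag(2.0*np.ones(n-1)) - np.diag(np.ones(n-2), 1)
                 - np.diag(np.ones(n-2), -1)) / h**2
            u = np.zeros(n + 1)
            u[1:-1] = np.linalg.solve(A, f)
            return np.max(np.abs(u - np.sin(np.pi * x)))  # error vs exact u

        e1, e2 = solve(40), solve(80)
        print(f"observed order = {np.log2(e1/e2):.2f}  (expected ~2)")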

  18. Simulations of 6-DOF Motion with a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographically derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.

  19. Kinetic Method for Hydrogen-Deuterium-Tritium Mixture Distillation Simulation

    SciTech Connect

    Sazonov, A.B.; Kagramanov, Z.G.; Magomedbekov, E.P.

    2005-07-15

    Simulation of hydrogen distillation plants requires mathematical procedures suitable for multicomponent systems. In most present-day simulation methods, a distillation column is assumed to be composed of theoretical stages, or plates. However, in the case of a multicomponent mixture, the theoretical plate does not exist. An alternative kinetic method of simulation is described in this work. In this method, a system of mass-transfer differential equations is solved numerically, with mass-transfer coefficients estimated from experimental results and empirical equations. The developed method allows calculating the steady state of a distillation column, as well as any non-steady state when initial conditions are given. The results for steady states are compared with ones obtained via the Thiele-Geddes theoretical-stage technique, and the necessity of using the kinetic method is demonstrated. Examples of simulations of a column startup period and of periodic distillation are shown as well.

  20. Daylighting simulation: methods, algorithms, and resources

    SciTech Connect

    Carroll, William L.

    1999-12-01

    This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and components properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, driven in part by other forces: the commercial computer graphics community (commerce, entertainment); the lighting industry; architectural rendering and visualization for projects; and academia (course materials, research). This has led to a very rich set of information resources with direct applicability to the small daylighting analysis community, much of which is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and are used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations. This in turn allows the report to be very concise; the links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of

  1. Parallel Performance Optimization of the Direct Simulation Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gao, Da; Zhang, Chonglin; Schwartzentruber, Thomas

    2009-11-01

    Although the direct simulation Monte Carlo (DSMC) particle method is more computationally intensive compared to continuum methods, it is accurate for conditions ranging from continuum to free-molecular, accurate in highly non-equilibrium flow regions, and holds potential for incorporating advanced molecular-based models for gas-phase and gas-surface interactions. As available computer resources continue their rapid growth, the DSMC method is continually being applied to increasingly complex flow problems. Although processor clock speed continues to increase, a trend of increasing multi-core-per-node parallel architectures is emerging. To effectively utilize such current and future parallel computing systems, a combined shared/distributed memory parallel implementation (using both Open Multi-Processing (OpenMP) and Message Passing Interface (MPI)) of the DSMC method is under development. The parallel implementation of a new state-of-the-art 3D DSMC code employing an embedded 3-level Cartesian mesh will be outlined. The presentation will focus on performance optimization strategies for DSMC, which includes, but is not limited to, modified algorithm designs, practical code-tuning techniques, and parallel performance optimization. Specifically, key issues important to the DSMC shared memory (OpenMP) parallel performance are identified as (1) granularity (2) load balancing (3) locality and (4) synchronization. Challenges and solutions associated with these issues as they pertain to the DSMC method will be discussed.

  2. Correlated EEG Signals Simulation Based on Artificial Neural Networks.

    PubMed

    Tomasevic, Nikola M; Neskovic, Aleksandar M; Neskovic, Natasa J

    2016-09-30

    In recent years, simulation of human electroencephalogram (EEG) data has found an important role in the medical domain and neuropsychology. In this paper, a novel approach to the simulation of two cross-correlated EEG signals is proposed. The proposed method is based on the principles of artificial neural networks (ANN). Contrary to existing EEG data simulators, the ANN-based approach relies solely on experimentally acquired EEG data. More precisely, measured EEG data were utilized to optimize the simulator, which consisted of two ANN models (each model responsible for generating one EEG sequence). To acquire the EEG recordings, a measurement campaign was carried out on a healthy awake adult under no cognitive, physical or mental load. For the evaluation of the proposed approach, a comprehensive quantitative and qualitative statistical analysis was performed considering the probability distribution, correlation properties and spectral characteristics of the generated EEG processes. The obtained results clearly indicated satisfactory agreement with the measurement data.

  3. Electromechanical properties of a textured ceramic material in the (1 - x)PMN-xPT system: Simulation based on the effective-medium method

    NASA Astrophysics Data System (ADS)

    Aleshin, V. I.; Raevskiĭ, I. P.; Sitalo, E. I.

    2008-11-01

    A complete set of dielectric, piezoelectric, and elastic parameters for the textured ceramic material 0.67PMN-0.33PT is calculated by the self-consistency method with due regard for the anisotropy and piezoelectric activity of the medium. It is shown that the best piezoelectric properties corresponding to those of a single crystal are observed for the ceramic material with a texture in which all crystallites are oriented parallel to the [001] direction of the parent perovskite cubic cell. The simplest models of the polarization of an untextured ceramic material with a random initial orientation of crystallites are considered. The results obtained are compared with experimental data.

  4. Solution of partial differential equations by agent-based simulation

    NASA Astrophysics Data System (ADS)

    Szilagyi, Miklos N.

    2014-01-01

    The purpose of this short note is to demonstrate that partial differential equations can be quickly solved by agent-based simulation with high accuracy. There is no need for the solution of large systems of algebraic equations. This method is especially useful for quick determination of potential distributions and demonstration purposes in teaching electromagnetism.
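
    A minimal sketch of one way agents can solve a potential problem (the note does not specify its algorithm, so this is an assumption): the solution of Laplace's equation at a point equals the average boundary value seen by random walkers released there; the grid size, walker count, and electrode configuration are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        # Agent-based estimate of an electrostatic potential: random walkers
        # released at a point wander until they hit the boundary, and the
        # average boundary potential they see estimates the solution of
        # Laplace's equation at the release point.
        n = 16                       # grid is (n+1) x (n+1) on the unit square
        def boundary(i, j):          # hypothetical electrodes: top edge at 1 V
            return 1.0 if j == n else 0.0

        def potential(i0, j0, walkers=3000):
            total = 0.0
            for _ in range(walkers):
                i, j = i0, j0
                while 0 < i < n and 0 < j < n:
                    step = rng.integers(4)     # unbiased 4-neighbour walk
                    if   step == 0: i += 1
                    elif step == 1: i -= 1
                    elif step == 2: j += 1
                    else:           j -= 1
                total += boundary(i, j)
            return total / walkers

        print(f"V at centre ~ {potential(n//2, n//2):.3f} (continuum value 0.25)")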

  5. Dual Energy Method for Breast Imaging: A Simulation Study

    PubMed Central

    2015-01-01

    Dual energy methods can suppress the contrast between adipose and glandular tissues in the breast and therefore enhance the visibility of calcifications. In this study, a dual energy method based on analytical modeling was developed for the detection of minimum microcalcification thickness. To this aim, a modified radiographic X-ray unit was considered, in order to overcome the limited kVp range of mammographic units used in previous DE studies, combined with a high resolution CMOS sensor (pixel size of 22.5 μm) for improved resolution. Various filter materials were examined based on their K-absorption edge. Hydroxyapatite (HAp) was used to simulate microcalcifications. The contrast to noise ratio (CNRtc) of the subtracted images was calculated for both monoenergetic and polyenergetic X-ray beams. The optimum monoenergetic pair was 23/58 keV for the low and high energy, respectively, resulting in a minimum detectable microcalcification thickness of 100 μm. In the polyenergetic X-ray study, the optimal spectral combination was 40/70 kVp filtered with 100 μm cadmium and 1000 μm copper, respectively. In this case, the minimum detectable microcalcification thickness was 150 μm. The proposed dual energy method provides improved microcalcification detectability in breast imaging with mean glandular dose values within acceptable levels. PMID:26246848
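
    A toy numerical illustration of the dual-energy principle, with made-up attenuation coefficients and thicknesses rather than the paper's modelled spectra: the weight w cancels the adipose/glandular contrast in a weighted log subtraction, leaving the calcification signal, whose contrast-to-noise ratio is then estimated.

        import numpy as np

        rng = np.random.default_rng(4)

        # Illustrative linear attenuation coefficients (low, high energy),
        # in 1/cm; these are hypothetical numbers, not the paper's values.
        mu = {"gland": (0.80, 0.25), "adipose": (0.45, 0.20), "calc": (2.5, 0.9)}
        t_tissue, t_calc = 4.0, 0.015          # cm path lengths (hypothetical)

        def signal(tissue, energy, with_calc=False):
            path = mu[tissue][energy]*t_tissue
            if with_calc:
                path += mu["calc"][energy]*t_calc
            return np.exp(-path)

        # Weight cancelling gland/adipose contrast in ln(I_low) - w*ln(I_high).
        w = (mu["gland"][0] - mu["adipose"][0]) / (mu["gland"][1] - mu["adipose"][1])

        def de_value(tissue, with_calc=False, noise=2e-4):
            low  = signal(tissue, 0, with_calc) + rng.normal(0, noise)
            high = signal(tissue, 1, with_calc) + rng.normal(0, noise)
            return np.log(low) - w*np.log(high)

        bg   = np.array([de_value(t) for t in ("gland", "adipose")
                         for _ in range(500)])
        calc = np.array([de_value("gland", True) for _ in range(500)])
        cnr = abs(calc.mean() - bg.mean()) / bg.std()
        print(f"w = {w:.2f}, CNR of the subtracted calcification = {cnr:.1f}")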

  6. Dual Energy Method for Breast Imaging: A Simulation Study.

    PubMed

    Koukou, V; Martini, N; Michail, C; Sotiropoulou, P; Fountzoula, C; Kalyvas, N; Kandarakis, I; Nikiforidis, G; Fountos, G

    2015-01-01

    Dual energy methods can suppress the contrast between adipose and glandular tissues in the breast and therefore enhance the visibility of calcifications. In this study, a dual energy method based on analytical modeling was developed for the detection of minimum microcalcification thickness. To this aim, a modified radiographic X-ray unit was considered, in order to overcome the limited kVp range of mammographic units used in previous DE studies, combined with a high resolution CMOS sensor (pixel size of 22.5 μm) for improved resolution. Various filter materials were examined based on their K-absorption edge. Hydroxyapatite (HAp) was used to simulate microcalcifications. The contrast to noise ratio (CNR tc ) of the subtracted images was calculated for both monoenergetic and polyenergetic X-ray beams. The optimum monoenergetic pair was 23/58 keV for the low and high energy, respectively, resulting in a minimum detectable microcalcification thickness of 100 μm. In the polyenergetic X-ray study, the optimal spectral combination was 40/70 kVp filtered with 100 μm cadmium and 1000 μm copper, respectively. In this case, the minimum detectable microcalcification thickness was 150 μm. The proposed dual energy method provides improved microcalcification detectability in breast imaging with mean glandular dose values within acceptable levels.

  7. Microcomputer based software for biodynamic simulation

    NASA Technical Reports Server (NTRS)

    Rangarajan, N.; Shams, T.

    1993-01-01

    This paper presents a description of a microcomputer-based software package, called DYNAMAN, which has been developed to allow an analyst to simulate the dynamics of a system consisting of a number of mass segments linked by joints. One primary application is in predicting the motion of a human occupant in a vehicle under the influence of a variety of external forces, especially those generated during a crash event. Extensive use of a graphical user interface has been made to aid the user in setting up the input data for the simulation and in viewing the results from the simulation. Among its many applications, it has been successfully used in the prototype design of a moving seat that aids in occupant protection during a crash, by aircraft designers in evaluating occupant injury in airplane crashes, and by users in accident reconstruction for reconstructing the motion of the occupant and correlating the impacts with observed injuries.

  8. Simulation Improves Resident Performance in Catheter-Based Intervention

    PubMed Central

    Chaer, Rabih A.; DeRubertis, Brian G.; Lin, Stephanie C.; Bush, Harry L.; Karwowski, John K.; Birk, Daniel; Morrissey, Nicholas J.; Faries, Peter L.; McKinsey, James F.; Kent, K Craig

    2006-01-01

    Objectives: Surgical simulation has been shown to enhance the training of general surgery residents. Since catheter-based techniques have become an important part of the vascular surgeon's armamentarium, we explored whether simulation might impact the acquisition of catheter skills by surgical residents. Methods: Twenty general surgery residents received didactic training in the techniques of catheter intervention. Residents were then randomized with 10 receiving additional training with the Procedicus, computer-based, haptic simulator. All 20 residents then participated in 2 consecutive mentored catheter-based interventions for lower extremity occlusive disease in an OR/angiography suite. Resident performance was graded by attending surgeons blinded to the resident's training status, using 18 procedural steps as well as a global rating scale. Results: There were no differences between the 2 resident groups with regard to demographics or scores on a visuospatial test administered at study outset. Overall, residents exposed to simulation scored higher than controls during the first angio/OR intervention: procedural steps (simulation/control) (50 ± 6 vs. 33 ± 9, P = 0.0015); global rating scale (30 ± 7 vs. 19 ± 5, P = 0.0052). The advantage provided by simulator training persisted with the second intervention (53 ± 6 vs. 36 ± 7, P = 0.0006); global rating scale (33 ± 6 vs. 21 ± 6, P = 0.0015). Moreover, simulation training, particularly for the second intervention, led to enhancement in almost all of the individual measures of performance. Conclusion: Simulation is a valid tool for instructing surgical residents and fellows in basic endovascular techniques and should be incorporated into surgical training programs. Moreover, simulators may also benefit the large number of vascular surgeons who seek retraining in catheter-based intervention. PMID:16926560

  9. Space-based radar array system simulation and validation

    NASA Astrophysics Data System (ADS)

    Schuman, H. K.; Pflug, D. R.; Thompson, L. D.

    1981-08-01

    The present status of the space-based radar phased-array lens simulator is discussed. Huge arrays of thin-wire radiating elements on either side of a ground screen are modeled by the simulator. Also modeled are amplitude and phase adjust modules connecting radiating elements between arrays, feedline-to-radiator mismatch, and lens warping. A successive approximation method is employed. The first approximation is based on a plane-wave expansion (infinite array) moment method especially suited to large array analysis. The first-approximation results then facilitate higher-approximation computations that account for effects of nonuniform periodicities (lens edge, lens section interfaces, failed modules, etc.). The programming to date is discussed via flow diagrams. An improved theory is presented in a consolidated development. The use of the simulator is illustrated by computing active impedances and radiating-element current distributions for infinite planar arrays of straight and 'swept back' dipoles (arms inclined with respect to the array plane) with feedline scattering taken into account.

  10. Deployment Simulation Methods for Ultra-Lightweight Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Johnson, Arthur R.

    2003-01-01

    Two dynamic inflation simulation methods are employed for modeling the deployment of folded thin-membrane tubes. The simulations are necessary because ground tests include gravity effects and may poorly represent deployment in space. The two simulation methods are referred to as the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method. They are available in the LS-DYNA nonlinear dynamic finite element code. Both methods are suitable for modeling the interactions between the inflation gas and the thin-membrane tube structures. The CV method only considers the pressure induced by the inflation gas in the simulation, while the ALE method models the actual flow of the inflation gas. Thus, the transient fluid properties at any location within the tube can be predicted by the ALE method. Deployment simulations of three packaged tube models, namely coiled, Z-folded, and telescopically folded configurations, are performed. Results predicted by both methods for the telescopically folded configuration are correlated, and computational efficiency issues are discussed.

  11. Wavelet based Simulation of Reservoir Flow

    NASA Astrophysics Data System (ADS)

    Siddiqi, A. H.; Verma, A. K.; Noor-E-Zahra, Noor-E.-Zahra; Chandiok, Ashish; Hasan, A.

    2009-07-01

    Petroleum reservoirs consist of hydrocarbons and other chemicals trapped in the pores of a rock. The exploration and production of hydrocarbon reservoirs is still the most important technology for developing natural energy resources, and fluid flow simulators play a key role in helping oil companies. In fact, simulation is the most important tool for modelling changes in a reservoir over time. The main problem in petroleum reservoir simulation is to model the displacement of one fluid by another within a porous medium. A typical problem is characterized by the injection of a wetting fluid, for example water, into the reservoir at a particular location, displacing the non-wetting fluid, for example oil, which is extracted or produced at another location. The Buckley-Leverett equation [1] models this process, and its numerical simulation and visualization are of paramount importance. Several numerical methods have been applied to the solution of partial differential equations modelling real-world problems. In this paper we review the numerical solution of the Buckley-Leverett equation for flat and non-flat structures, with special focus on the wavelet method, and we also indicate a few new avenues for further research.
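
    A minimal sketch of the Buckley-Leverett problem mentioned above, solved with a first-order upwind finite-volume scheme rather than wavelets; the mobility ratio, grid, and times are illustrative.

        import numpy as np

        # Upwind finite-volume solution of the Buckley-Leverett equation
        # s_t + f(s)_x = 0, with the classic fractional-flow function;
        # water injected at the left displaces oil (hypothetical mobility ratio).
        M = 2.0                                  # oil/water mobility ratio
        f = lambda s: s**2 / (s**2 + M*(1 - s)**2)

        nx, L, T = 200, 1.0, 0.4
        dx = L / nx
        s = np.zeros(nx)                         # initial water saturation
        s[0] = 1.0                               # injection boundary
        dt = 0.4 * dx                            # CFL-limited time step
        for _ in range(int(T/dt)):
            flux = f(s)
            s[1:] -= dt/dx * (flux[1:] - flux[:-1])  # upwind (flow left->right)
            s[0] = 1.0
        print("shock front near x =", dx*np.argmax(s < 0.05))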

  12. Hybrid optimization schemes for simulation-based problems.

    SciTech Connect

    Fowler, Katie; Gray, Genetha Anne; Griffin, Joshua D.

    2010-05-01

    The inclusion of computer simulations in the study and design of complex engineering systems has created a need for efficient approaches to simulation-based optimization. For example, in water resources management problems, optimization problems regularly consist of objective functions and constraints that rely on output from a PDE-based simulator. Various assumptions can be made to simplify either the objective function or the physical system so that gradient-based methods apply; however, the incorporation of realistic objective functions can be accomplished given the availability of derivative-free optimization methods. A wide variety of derivative-free methods exist, and each method has both advantages and disadvantages. Therefore, to address such problems, we propose a hybrid approach, which allows the combining of beneficial elements of multiple methods in order to search the design space more efficiently. Specifically, in this paper, we illustrate the capabilities of two novel algorithms: one which hybridizes pattern search optimization with Gaussian process emulation, and another which hybridizes pattern search and a genetic algorithm. We describe the hybrid methods and give numerical results for a hydrological application which illustrate that the hybrids find an optimal solution under conditions for which traditional optimization methods fail.
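
    A toy sketch in the spirit of the first hybrid (not the authors' implementation): a derivative-free pattern search whose poll points are ordered by a least-squares quadratic surrogate fitted to all evaluations so far, so promising directions are tried first; the objective is a hypothetical stand-in for a PDE-based one.

        import numpy as np

        def expensive(x):                 # hypothetical PDE-based objective
            return (x[0] - 1.2)**2 + 3*(x[1] + 0.7)**2

        def surrogate_rank(points, X, y):
            # Fit y ~ quadratic in x by least squares; rank candidates by it.
            feats = lambda P: np.column_stack([np.ones(len(P)), P, P**2])
            coef, *_ = np.linalg.lstsq(feats(np.array(X)), np.array(y),
                                       rcond=None)
            return sorted(points, key=lambda p: (feats(p[None, :]) @ coef).item())

        x, step = np.zeros(2), 1.0
        fx = expensive(x)
        X, y = [x.copy()], [fx]
        while step > 1e-6:
            polls = [x + step*d for d in (np.array([1, 0]), np.array([-1, 0]),
                                          np.array([0, 1]), np.array([0, -1]))]
            improved = False
            # Use the surrogate ordering once enough evaluations exist.
            for p in (surrogate_rank(polls, X, y) if len(y) >= 6 else polls):
                fp = expensive(p)
                X.append(p.copy()); y.append(fp)
                if fp < fx:               # opportunistic poll: take first win
                    x, fx, improved = p, fp, True
                    break
            if not improved:
                step /= 2                 # contract the pattern
        print("minimum near", np.round(x, 3))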

  13. A numerical simulation method for aircraft infrared imaging

    NASA Astrophysics Data System (ADS)

    Zhou, Yue; Wang, Qiang; Li, Ting; Hu, Haiyang

    2017-06-01

    Numerical simulation of infrared (IR) emission from aircraft is of great significance for military and civilian applications. In this paper, the narrow band k-distribution (NBK) model is used to calculate radiative properties of non-gray gases in the hot exhaust plume. With model parameters derived from the high resolution spectral database HITEMP 2010, the NBK model is validated by comparisons with exact line by line (LBL) results and experimental data. Based on the NBK model, a new finite volume and back ray tracing (FVBRT) method is proposed to solve the radiative transfer equations and produce IR imaging. Calculated results by the FVBRT method are compared with experimental data and available results in open references, which shows the FVBRT method can maintain good accuracy while producing IR images with better rendering effects. Finally, the NBK model and FVBRT method are integrated to calculate IR signature of an aircraft. The IR images and spatial distributions of radiative intensity are compared and analyzed in both 3 - 5 μm band and 8 - 12 μm band to provide references for engineering applications.

  14. Weak turbulence simulations with the Hermite-Fourier spectral method

    NASA Astrophysics Data System (ADS)

    Vencels, Juris; Delzanno, Gian Luca; Manzini, Gianmarco; Roytershteyn, Vadim; Markidis, Stefano

    2015-11-01

    Recently, a new (transform) method based on a Fourier-Hermite (FH) discretization of the Vlasov-Maxwell equations has been developed. The resulting set of moment equations is discretized implicitly in time with a Crank-Nicolson scheme and solved with a nonlinear Newton-Krylov technique. For periodic boundary conditions, this discretization delivers a scheme that conserves the total mass, momentum and energy of the system exactly. In this work, we apply the FH method to a problem of Langmuir turbulence, where a low signal-to-noise ratio is important to follow the turbulent cascade and might require substantial computational resources if studied with PIC. We simulate a weak (low-density) electron beam moving in a Maxwellian plasma and subject to an instability that generates Langmuir waves and a weak turbulence field. We also discuss techniques to optimally select the Hermite basis in terms of its shift and scaling arguments, and show that this improves the overall accuracy of the method. Finally, we discuss the applicability of the FH method for studying kinetic plasma turbulence. This work was funded by LDRD under the auspices of the NNSA of the U.S. by LANL under contract DE-AC52-06NA25396 and by the EC through the EPiGRAM project (grant agreement no. 610598, epigram-project.eu).

  15. A direct simulation method for flows with suspended paramagnetic particles

    SciTech Connect

    Kang, Tae Gon; Hulsen, Martien A.; Toonder, Jaap M.J. den; Anderson, Patrick D.; Meijer, Han E.H.

    2008-04-20

    A direct numerical simulation method based on the Maxwell stress tensor and a fictitious domain method has been developed to solve flows with suspended paramagnetic particles. The numerical scheme enables us to take into account both hydrodynamic and magnetic interactions between particles in a fully coupled manner. Particles are assumed to be non-Brownian with negligible inertia. Rigid body motions of particles in 2D are described by a rigid-ring description implemented by Lagrange multipliers. The magnetic force, acting on the particles due to magnetic fields, is represented by the divergence of the Maxwell stress tensor, which acts as a body force added to the momentum balance equation. Focusing on two-dimensional problems, we solve a single-particle problem for verification. With the magnetic force working on the particle, the proper number of collocation points is found to be two points per element. The convergence with mesh refinement is verified by comparing results from regular mesh problems with those from a boundary-fitted mesh problem as references. We apply the developed method to two application problems: two-particle interaction in a uniform magnetic field and the motion of a magnetic chain in a rotating field, demonstrating the capability of the method to tackle general problems. In the motion of a magnetic chain, especially, the deformation pattern at break-up is similar to the experimentally observed one. The present formulation can be extended to three-dimensional and viscoelastic flow problems.

  16. Mathematical modeling and simulation in animal health - Part II: principles, methods, applications, and value of physiologically based pharmacokinetic modeling in veterinary medicine and food safety assessment.

    PubMed

    Lin, Z; Gehring, R; Mochel, J P; Lavé, T; Riviere, J E

    2016-10-01

    This review provides a tutorial for individuals interested in quantitative veterinary pharmacology and toxicology and offers a basis for establishing guidelines for physiologically based pharmacokinetic (PBPK) model development and application in veterinary medicine. This is important as the application of PBPK modeling in veterinary medicine has evolved over the past two decades. PBPK models can be used to predict drug tissue residues and withdrawal times in food-producing animals, to estimate chemical concentrations at the site of action and target organ toxicity to aid risk assessment of environmental contaminants and/or drugs in both domestic animals and wildlife, as well as to help design therapeutic regimens for veterinary drugs. This review provides a comprehensive summary of PBPK modeling principles, model development methodology, and the current applications in veterinary medicine, with a focus on predictions of drug tissue residues and withdrawal times in food-producing animals. The advantages and disadvantages of PBPK modeling compared to other pharmacokinetic modeling approaches (i.e., classical compartmental/noncompartmental modeling, nonlinear mixed-effects modeling, and interspecies allometric scaling) are further presented. The review finally discusses contemporary challenges and our perspectives on model documentation, evaluation criteria, quality improvement, and offers solutions to increase model acceptance and applications in veterinary pharmacology and toxicology.

  17. Numerical simulation on snow melting phenomena by CIP method

    NASA Astrophysics Data System (ADS)

    Mizoe, H.; Yoon, Seong Y.; Josho, M.; Yabe, T.

    2001-04-01

    A numerical scheme based on the C-CUP method is proposed to simulate melting phenomena in snow. To calculate these complex phenomena we introduce phase change, an elastic-plastic model, and a porous model, and verify each model using simple examples. The scheme is applied to a practical case, snow piled on the insulator of an electrical transmission line, in which the snow is modeled as a compound material composed of air, water, and ice and is treated with the elastic-plastic model. The electric field between two electrodes is obtained by solving the Poisson equation, giving the Joule heating term in the energy conservation equation that eventually leads to snow melting. A comparison is made by changing the fraction of water in the snow to see its effect on the melting process, for applied voltages of 50 and 500 kV on the two electrodes.

  18. A web-based virtual lighting simulator

    SciTech Connect

    Papamichael, Konstantinos; Lai, Judy; Fuller, Daniel; Tariq, Tara

    2002-05-06

    This paper describes a web-based "virtual lighting simulator," which is intended to allow architects and lighting designers to quickly assess the effect of key parameters on daylighting and lighting performance in various space types. The virtual lighting simulator consists of a web-based interface that allows navigation through a large database of images and data generated through parametric lighting simulations. In its current form, the virtual lighting simulator has two main modules, one for daylighting and one for electric lighting. The daylighting module includes images and data for a small office space, varying most key daylighting parameters, such as window size and orientation, glazing type, surface reflectance, sky conditions, time of the year, etc. The electric lighting module includes images and data for five space types (classroom, small office, large open office, warehouse and small retail), varying key lighting parameters, such as the electric lighting system, surface reflectance, and dimming/switching. The computed images include perspectives and plans and are displayed in various formats to support qualitative as well as quantitative assessment. The quantitative information is in the form of iso-contour lines superimposed on the images, as well as false color images and statistical information on work plane illuminance. The qualitative information includes images that are adjusted to account for the sensitivity and adaptation of the human eye. The paper also includes a section on the major technical issues and their resolution.

  19. Marking petroglyphs with calcite and gypsum-based chalks: Interaction with granite under different simulated conditions and the effectiveness and harmfulness of cleaning methods.

    PubMed

    Pozo-Antonio, J S; Fernández-Rodríguez, S; Rocha, C S A; Carrera, F; Rivas, T

    2017-08-25

    Marking petroglyphs with chalk is a common practice for enhancing them for documentation and reproduction. Although this procedure has become less frequently used, little is known about the interaction between the chalk and the rock engravings, or about the effectiveness of common cleaning procedures in removing such markings without damaging the rock. This study evaluates the interaction between two chalks of different composition (calcite-based and gypsum-based) and a granite of the type on which the majority of the NW Iberian Peninsula petroglyphs are carved. Granite samples marked with these chalks were subjected to artificial rain events and to the high temperatures (700°C) associated with fires. After each aging test, chemical and physical modifications of the rock were analysed by means of stereomicroscopy, X-ray diffraction, Fourier transform infrared spectroscopy, scanning electron microscopy and colour spectrophotometry. Moreover, the effectiveness and harmfulness of several mechanical and chemical cleaning procedures commonly used in the field of cultural heritage conservation were evaluated. Both chalks remained to different extents on the surface after the artificial rain events. Water promotes a different penetration depth of the chalks into the stone, depending on their solubility. High temperatures led to mineral phase transformations of the chalks, influencing their interaction with the rock. Regarding cleaning effectiveness, although a few chalk remains were found after all the cleanings, chemical methods showed higher effectiveness than mechanical procedures, even though some of them leave chemical contamination. Benzalkonium chloride gave the best results for extracting both types of chalk from granite. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Simulations of infrared atmospheric transmittance based on measured data

    NASA Astrophysics Data System (ADS)

    Song, Fu-yin; Lu, Yuan; Qiao, Ya; Tao, Hui-feng; Tang, Cong; Ling, Yong-shun

    2016-10-01

    Two methods are commonly used to calculate infrared atmospheric transmittance: empirical formulas and professional software. However, empirical formulas show large deviations, while professional software is complicated to use and difficult to integrate into other infrared simulation systems. Therefore, based on atmospheric data measured in a given area over many years, this article uses the method of molecular single-line absorption to calculate the absorption coefficients of water vapor and carbon dioxide at different temperatures. Temperature, pressure, and the resulting scattering coefficients at different altitudes were fitted with analytical formulas for each month. A simulation model of the atmospheric transmittance of infrared radiation was then built. The simulated results are very close to the accurate results calculated by a user-defined MODTRAN model. The method is easy and convenient to use and offers reference value for engineering applications.
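
    A minimal sketch of the transmittance calculation under a crude Beer-Lambert assumption, with illustrative absorption coefficients and concentrations in place of the measured data the article uses:

        import numpy as np

        # Beer-Lambert band transmittance over a horizontal path. The
        # coefficients and concentrations below are illustrative only; real
        # work would use measured profiles and line data, as in the text.
        k = {"H2O": 0.15, "CO2": 0.05}     # absorption per km per unit conc.
        conc = {"H2O": 1.2, "CO2": 0.8}    # path-averaged concentrations
        path_km = 5.0

        tau = np.exp(-sum(k[g]*conc[g] for g in k) * path_km)
        print(f"transmittance over {path_km} km ~ {tau:.3f}")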

  1. Research on data communication method in periscope semi-physical training simulation system

    NASA Astrophysics Data System (ADS)

    Xiao, Jianbo; Hu, Dabin

    2013-03-01

    Data communication plays a very important role in hardware-in-the-loop simulation systems. The system architecture of the periscope semi-physical simulation system is proposed first. Then a data communication method between the PLC and the PC based on the FINS protocol is introduced; the user's interaction with the scene is handled by the PLC. The TCP-based communication between the 2D chart console and the scene simulation system is also introduced. The 6-DOF motion model and the scene simulation system are connected by TCP, and a DR method is introduced to solve the data-volume problem. Tests show that the simulation system loses no packets and drops no data within a simulation cycle, meets the requirements of training, and performs well in terms of reliability and real-time response.

  2. Laser method for simulating the transient radiation effects of semiconductor

    NASA Astrophysics Data System (ADS)

    Li, Mo; Sun, Peng; Tang, Ge; Wang, Xiaofeng; Wang, Jianwei; Zhang, Jian

    2017-05-01

    In this paper, we demonstrate the adequacy of laser simulation by both theoretical analysis and experiment. We first explain the basic theory and physical mechanisms of laser simulation of transient radiation effects in semiconductors. Based on a simplified semiconductor structure, we describe the reflection, optical absorption and transmission of the laser beam. Considering the two cases of single-photon absorption, when the laser intensity is relatively low, and two-photon absorption, at higher laser intensities, we derive the laser-simulation equivalent dose-rate model. Laser simulation experiments and gamma-ray irradiation experiments are then conducted on two types of BJT transistors. We find a good linear relationship between the laser and gamma-ray results, which demonstrates the reliability of laser simulation.

  3. Fast spot-based multiscale simulations of granular drainage

    SciTech Connect

    Rycroft, Chris H.; Wong, Yee Lok; Bazant, Martin Z.

    2009-05-22

    We develop a multiscale simulation method for dense granular drainage, based on the recently proposed spot model, where the particle packing flows by local collective displacements in response to diffusing "spots" of interstitial free volume. By comparing with discrete-element method (DEM) simulations of 55,000 spheres in a rectangular silo, we show that the spot simulation is able to approximately capture many features of drainage, such as packing statistics, particle mixing, and flow profiles. The spot simulation runs two to three orders of magnitude faster than DEM, making it an appropriate method for real-time control or optimization. We demonstrate extensions for modeling particle heaping and avalanching at the free surface, and for simulating the boundary layers of slower flow near walls. We show that the spot simulations are robust and flexible, by demonstrating that they can be used in both event-driven and fixed-timestep approaches, and by showing that the elastic relaxation step used in the model can be applied much less frequently and still give good results.

  4. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monnie, Michael I; Colvin, Joshua; Faseder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogenous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
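
    A minimal sketch of a null-event step in the spirit of the description above (not DYNSTOC's actual code): randomly chosen molecules are tested against a toy rule, and the attempt is discarded as a null event when no rule applies; the species, counts, and acceptance probability are hypothetical.

        import random

        random.seed(5)

        # Toy rule-based system: kinase K phosphorylates receptor R. Rules
        # are checked on randomly drawn molecules; attempts where no rule
        # applies are "null events", so the network is never enumerated.
        molecules = [{"kind": "R", "phos": False} for _ in range(200)] + \
                    [{"kind": "K"} for _ in range(50)]
        p_phos = 0.3            # acceptance probability encoding the rate

        for _ in range(20000):
            a, b = random.sample(molecules, 2)
            # Rule: K + R(unphosphorylated) -> K + R(phosphorylated)
            if {a["kind"], b["kind"]} == {"R", "K"}:
                r = a if a["kind"] == "R" else b
                if not r["phos"] and random.random() < p_phos:
                    r["phos"] = True
                    continue
            # Otherwise: null event; simulated time still advances.

        print("phosphorylated R:", sum(m.get("phos", False) for m in molecules))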

  5. Simulation-based assessment for construction helmets.

    PubMed

    Long, James; Yang, James; Lei, Zhipeng; Liang, Daan

    2015-01-01

    In recent years, there has been a concerted effort toward greater job safety in all industries. Personal protective equipment (PPE) has been developed to help mitigate the risk of injury to humans exposed to hazardous situations. The human head is the part most vulnerable to impact, as an impact of even moderate magnitude can cause serious injury or death; this is why industries have required the use of an industrial hard hat or helmet. Only a few articles published to date have focused on the risk of head injury when wearing an industrial helmet, and a full understanding of the effectiveness of construction helmets in reducing injury is lacking. This paper presents a simulation-based method to determine the threshold at which a human will sustain injury when wearing a construction helmet, and assesses the risk of injury for wearers of construction helmets or hard hats. Advanced finite element (FE) models were developed to study impacts on construction helmets. The FE model consists of two parts: the helmet and the human models. The human model consists of a brain, enclosed by a skull and an outer layer of skin. The level and probability of injury to the head were determined using both the head injury criterion (HIC) and the tolerance limits set by Deck and Willinger. The HIC has been widely used to assess the likelihood of head injury in vehicles. The tolerance levels proposed by Deck and Willinger are better suited for finite element models but lack wide-scale validation. Different impact cases were studied using LSTC's LS-DYNA.
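
    The head injury criterion itself is straightforward to compute from an acceleration trace. The sketch below implements the standard HIC definition, the maximum over time windows of (t2 - t1) times the mean acceleration raised to the power 2.5, on a synthetic half-sine pulse; the pulse is illustrative, not data from the paper:

```python
import numpy as np

def hic(t, a, max_window=0.015):
    """Head Injury Criterion from a resultant head acceleration trace.

    t : time [s]; a : acceleration [g]. HIC is the maximum over all
    windows (t1, t2) with t2 - t1 <= max_window of
    (t2 - t1) * (mean acceleration over the window) ** 2.5.
    """
    # cumulative trapezoidal integral of a(t) for fast window averages
    ca = np.concatenate([[0.0],
                         np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))])
    best = 0.0
    for i in range(len(t) - 1):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = (ca[j] - ca[i]) / dt
            best = max(best, dt * avg ** 2.5)
    return best

# synthetic half-sine impact pulse: peak 200 g over 10 ms (illustrative)
t = np.linspace(0, 0.02, 401)
a = 200 * np.sin(np.pi * t / 0.01) * (t < 0.01)
print(f"HIC15 = {hic(t, a):.0f}")
```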

  6. Simulation of plume dynamics by the Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Mora, Peter; Yuen, David A.

    2017-09-01

    The Lattice Boltzmann Method (LBM) is a semi-microscopic method that simulates fluid mechanics by modelling distributions of particles moving and colliding on a lattice. We present 2-D LBM simulations of a fluid in a rectangular box heated from below and cooled from above, with a Rayleigh number of Ra = 10^8, similar to current estimates for the Earth's mantle, and a Prandtl number of 5000. At this Prandtl number, the flow is found to be in the non-inertial regime, where the inertial terms, denoted I, satisfy I ≪ 1. Hence, the simulations presented lie within the regime of relevance for geodynamical problems. We obtain narrow upwelling plumes with mushroom heads and chutes of downwelling fluid, as expected of a flow in the non-inertial regime. The method developed demonstrates that the LBM has great potential for simulating thermal convection and plume dynamics relevant to geodynamics, albeit with some limitations.
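
    For readers unfamiliar with the LBM, the sketch below shows the core collide-and-stream loop for an isothermal D2Q9 lattice with BGK collisions on a periodic box; the thermal coupling and boundary conditions of the plume simulations are omitted, and all parameters are illustrative:

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann step (isothermal sketch only).
nx, ny, tau = 64, 64, 0.8
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
              [1,1],[-1,1],[-1,-1],[1,-1]])     # D2Q9 velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)        # D2Q9 weights

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2*np.pi*np.arange(nx)/nx)[:, None] * np.ones((1, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(200):
    rho = f.sum(axis=0)                          # moments
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau   # BGK collision
    for i, (cx, cy) in enumerate(c):             # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)

print("kinetic energy:", 0.5 * (rho * (ux**2 + uy**2)).sum())
```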

  7. A Copula Based Space-Time Rainfall Simulation Model

    NASA Astrophysics Data System (ADS)

    Aghakouchak, A.; Bárdossy, A.; Habib, E.

    2008-05-01

    Stochastically generated rainfall data are used as input to hydrological and meteorological models to assess model uncertainties and climate variability in water resources systems. Currently, there are very well defined methods to generate time series of rainfall data for a single point. However, hydrological and meteorological modeling over large scales requires high resolution rainfall data to capture the temporal and spatial variability of rainfall, which is proven to affect the quality of hydrological predictions (Osborn and Reynolds, 1963; Osborn and Keppel, 1966; Rodda, 1967; Dawdy and Bergman, 1969; Seliga et al., 1992; Corradini and Singh, 1985; Obled et al., 1994; Troutman, 1983; Hamlin, 1983; Faures et al., 1995; Shah et al., 1996; Goodrich et al., 1995). In this paper a copula-based space-time rainfall simulation model is introduced for the simulation of two-dimensional rainfall fields based on observed radar data. In contrast with most rainfall simulation techniques, which describe the spatial dependence structure of rainfall fields with a covariance function or a variogram, we introduce spatial dependence without the influence of the marginal distribution, using copulas. Radar data for the state of Baden-Württemberg in Germany, with a temporal resolution of 5 min and a spatial resolution of 1 km^2, are used in this study. A Gaussian copula and a number of non-Gaussian copulas are used to describe the dependence structure of the radar rainfall data. For each radar image, realizations of radar rainfall patterns are simulated. The simulation technique used in this work preserves the spatial dependence structure as well as the temporal variability of the simulated fields, keeping them similar to the observed radar data. Each simulated realization is then used as input to a hydrological model, resulting in an ensemble of predicted runoff hydrographs. The main conclusions are: (a) copula techniques can be used to describe the spatial dependence structure of rainfall fields instead of a simple covariance
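
    The copula construction separates dependence from the marginal, which a short sketch can make concrete. Assuming a Gaussian copula with an exponential spatial correlation and an exponential rainfall marginal (both placeholders, not the paper's fitted models):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Gaussian-copula field simulation: impose spatial dependence in Gaussian
# space, then attach the marginal distribution separately.
n = 40                                   # n x n grid of "radar pixels"
x = np.arange(n)
X, Y = np.meshgrid(x, x)
pts = np.column_stack([X.ravel(), Y.ravel()])
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C = np.exp(-d / 10.0)                    # correlation in Gaussian space

# correlated standard-normal field -> uniforms via the normal CDF (copula)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n * n))
z = L @ rng.standard_normal(n * n)
u = stats.norm.cdf(z)

# any marginal can now be attached; an exponential intensity is assumed
rain = stats.expon(scale=2.0).ppf(u).reshape(n, n)   # [mm/h], illustrative
print("mean/max simulated intensity:", rain.mean(), rain.max())
```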

  8. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high-performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms, which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.
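
    The sparsification at the heart of wavelet collocation can be illustrated with a plain Haar transform: threshold the detail coefficients of a field with a steep front and count how few survive. This one-dimensional NumPy sketch is only an analogy to the higher-order, GPU-resident implementation described above:

```python
import numpy as np

# Haar analysis/synthesis on a signal of power-of-two length.
def haar(u):
    coeffs = []
    while len(u) > 1:
        s = (u[0::2] + u[1::2]) / np.sqrt(2)   # smooth part
        d = (u[0::2] - u[1::2]) / np.sqrt(2)   # detail part
        coeffs.append(d)
        u = s
    return u, coeffs[::-1]                     # coarse-to-fine details

def ihaar(s, coeffs):
    for d in coeffs:
        u = np.empty(2 * len(s))
        u[0::2] = (s + d) / np.sqrt(2)
        u[1::2] = (s - d) / np.sqrt(2)
        s = u
    return s

x = np.linspace(0, 1, 1024)
u = np.tanh((x - 0.5) / 0.01)                  # steep front, as in a flame
s, coeffs = haar(u.copy())
eps = 1e-3                                     # significance threshold
kept = sum(int((np.abs(d) >= eps).sum()) for d in coeffs)
coeffs = [np.where(np.abs(d) >= eps, d, 0.0) for d in coeffs]
err = np.abs(ihaar(s, coeffs) - u).max()
print(f"kept {kept}/1023 detail coefficients, max error {err:.1e}")
```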

  9. Methods for increased computational efficiency of multibody simulations

    NASA Astrophysics Data System (ADS)

    Epple, Alexander

    This thesis is concerned with the efficient numerical simulation of finite element based flexible multibody systems. Scaling operations are systematically applied to the governing index-3 differential algebraic equations in order to solve the problem of ill conditioning for small time step sizes. The importance of augmented Lagrangian terms is demonstrated. The use of fast sparse solvers is justified for the solution of the linearized equations of motion resulting in significant savings of computational costs. Three time stepping schemes for the integration of the governing equations of flexible multibody systems are discussed in detail. These schemes are the two-stage Radau IIA scheme, the energy decaying scheme, and the generalized-α method. Their formulations are adapted to the specific structure of the governing equations of flexible multibody systems. The efficiency of the time integration schemes is comprehensively evaluated on a series of test problems. Formulations for structural and constraint elements are reviewed and the problem of interpolation of finite rotations in geometrically exact structural elements is revisited. This results in the development of a new improved interpolation algorithm, which preserves the objectivity of the strain field and guarantees stable simulations in the presence of arbitrarily large rotations. Finally, strategies for the spatial discretization of beams in the presence of steep variations in cross-sectional properties are developed. These strategies reduce the number of degrees of freedom needed to accurately analyze beams with discontinuous properties, resulting in improved computational efficiency.

  10. Coherent-wave Monte Carlo method for simulating light propagation in tissue

    NASA Astrophysics Data System (ADS)

    Kraszewski, Maciej; Pluciński, Jerzy

    2016-03-01

    Simulating the propagation and scattering of coherent light in turbid media, such as biological tissues, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g., finite-difference or finite-element methods) require a large amount of computer memory and long computation times, which makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, allows one to simulate only the propagation of light averaged over the ensemble of turbid medium realizations, which makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating the propagation of coherent light (e.g., a laser beam) in biological tissue, which we call the Coherent-Wave Monte Carlo method. This method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the amount of memory and computation time required for the simulation. We present the theoretical basis of the proposed method and its comparison with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.

  11. Exploring Solute Transport and Streamline Connectivity Using Two-point and Multipoint Simulation Methods

    NASA Astrophysics Data System (ADS)

    Klise, K. A.; McKenna, S. A.; Tidwell, V. C.; Lane, J. W.; Weissmann, G. S.; Wawrzyniec, T. F.; Nichols, E. M.

    2008-12-01

    Sequential indicator simulation is widely used to create lithofacies models based on the two-point correlation of the desired heterogeneous field. However, two-point correlation (i.e., the variogram) is not capable of preserving complex patterns such as the connected curvilinear structures often noted in realistic geologic media. As an alternative, several multipoint simulation methods have been suggested that replicate structural patterns based on a training image. To understand the implications that two-point and multipoint methods have for predicting solute transport, rigorous tests are needed that use realistic aquifer analogs. For this study, we use high-resolution terrestrial lidar scans to identify sand and gravel lithofacies at the outcrop (meter) scale. The lithofacies map serves as the aquifer analog and is used as a training image. Two-point (sisim) and multipoint (filtersim and snesim) stochastic simulation methods are then compared based on the ability of the resulting simulations to replicate the solute transport characteristics of the aquifer analog. Detailed particle tracking simulations are used to explore the streamline-based connectivity that is preserved by each method. Of the three simulation methods tested here, filtersim, a multipoint method that replicates structural patterns seen in the aquifer analog, best predicts non-Fickian solute transport characteristics by matching the connectivity of facies along streamlines. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  12. A tool for simulating parallel branch-and-bound methods

    NASA Astrophysics Data System (ADS)

    Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but most resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolving the optimization problem by the B&B method is replaced by a stochastic branching process, as sketched below. Data exchanges are modeled using the concept of logical time. A user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
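
    A toy version of that idea, with a subcritical branching process standing in for the search tree and a naive steal-half balancing rule (both arbitrary choices, not the simulator's actual algorithms):

```python
import random

random.seed(42)

# Replace real branch-and-bound by a stochastic branching process and
# study a load-balancing rule on top of it, in logical time steps.
P = 8                              # number of simulated processors
queues = [0] * P                   # open subproblems per processor
queues[0] = 1                      # root of the search tree
steps = 0

while sum(queues) > 0:
    steps += 1                     # one unit of logical time
    for p in range(P):
        if queues[p] > 0:
            queues[p] -= 1         # expand one node...
            # ...which spawns 0, 1, or 2 children (mean 0.9: subcritical)
            queues[p] += random.choices([0, 1, 2],
                                        weights=[0.4, 0.3, 0.3])[0]
    # balancing rule: each idle processor steals half of the largest queue
    for p in range(P):
        if queues[p] == 0:
            donor = max(range(P), key=lambda q: queues[q])
            moved = queues[donor] // 2
            queues[donor] -= moved
            queues[p] += moved

print("logical steps to exhaust the search tree:", steps)
```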

  13. Experiential Learning through Computer-Based Simulations.

    ERIC Educational Resources Information Center

    Maynes, Bill; And Others

    1992-01-01

    Describes experiential learning instructional model and simulation for student principals. Describes interactive laser videodisc simulation. Reports preliminary findings about student principal learning from simulation. Examines learning approaches by unsuccessful and successful students and learning levels of model learners. Simulation's success…

  14. Comparing Intravenous Insertion Instructional Methods with Haptic Simulators

    PubMed Central

    Malecha, Ann

    2017-01-01

    Objective. The objective of this review was to compare traditional intravenous (IV) insertion instructional methods with the use of haptic IV simulators. Design. An integrative research design was used to analyze the current literature. Data Sources. A search was conducted using key words intravenous (IV) insertion or cannulation or venipuncture and simulation from 2000 to 2015 in the English language. The databases included Academic Search Complete, CINAHL Complete, Education Resource Information Center, and Medline. Review Methods. Whittemore and Knafl's (2005) strategies were used to critique the articles for themes and similarities. Results. Comparisons of outcomes between traditional IV instructional methods and the use of haptic IV simulators continue to show various results. Positive results indicate that the use of the haptic IV simulator decreases both band constriction and total procedure time. While students are satisfied with practicing on the haptic simulators, they still desire faculty involvement. Conclusion. Combining the haptic IV simulator with practical experience on the IV arm may be the best practice for learning IV insertion. Research employing active learning strategies while using a haptic IV simulator during the learning process may reduce cost and faculty time. PMID:28250987

  15. Comparing Intravenous Insertion Instructional Methods with Haptic Simulators.

    PubMed

    McWilliams, Lenora A; Malecha, Ann

    2017-01-01

    Objective. The objective of this review was to compare traditional intravenous (IV) insertion instructional methods with the use of haptic IV simulators. Design. An integrative research design was used to analyze the current literature. Data Sources. A search was conducted using key words intravenous (IV) insertion or cannulation or venipuncture and simulation from 2000 to 2015 in the English language. The databases included Academic Search Complete, CINAHL Complete, Education Resource Information Center, and Medline. Review Methods. Whittemore and Knafl's (2005) strategies were used to critique the articles for themes and similarities. Results. Comparisons of outcomes between traditional IV instructional methods and the use of haptic IV simulators continue to show various results. Positive results indicate that the use of the haptic IV simulator decreases both band constriction and total procedure time. While students are satisfied with practicing on the haptic simulators, they still desire faculty involvement. Conclusion. Combining the haptic IV simulator with practical experience on the IV arm may be the best practice for learning IV insertion. Research employing active learning strategies while using a haptic IV simulator during the learning process may reduce cost and faculty time.

  16. A PDE-based partial discharge simulator

    NASA Astrophysics Data System (ADS)

    Villa, Andrea; Barbieri, Luca; Gondola, Marco; Leon-Garzon, Andres R.; Malgesini, Roberto

    2017-09-01

    Partial discharges are the main ageing and failure mechanism of solid insulating materials subjected to alternating-current stresses. From a simulation point of view, this phenomenon has almost always been tackled using semi-empirical schemes. In this work, a fully physically-based model, built on a set of conservation partial differential equations, is introduced. A numerical algorithm specifically designed to solve this particular problem is developed, and its validation is discussed using experimental data acquired in a simple geometry containing an isolated void.

  17. A fast Chebyshev method for simulating flexible-wing propulsion

    NASA Astrophysics Data System (ADS)

    Moore, M. Nicholas J.

    2017-09-01

    We develop a highly efficient numerical method to simulate small-amplitude flapping propulsion by a flexible wing in a nearly inviscid fluid. We allow the wing's elastic modulus and mass density to vary arbitrarily, with an eye towards optimizing these distributions for propulsive performance. The method to determine the wing kinematics is based on Chebyshev collocation of the 1D beam equation as coupled to the surrounding 2D fluid flow. Through small-amplitude analysis of the Euler equations (with trailing-edge vortex shedding), the complete hydrodynamics can be represented by a nonlocal operator that acts on the 1D wing kinematics. A class of semi-analytical solutions permits fast evaluation of this operator with O(N log N) operations, where N is the number of collocation points on the wing. This is in contrast to the minimum O(N^2) cost of a direct 2D fluid solver. The coupled wing-fluid problem is thus recast as a PDE with a nonlocal operator, which we solve using a preconditioned iterative method. These techniques yield a solver of near-optimal complexity, O(N log N), allowing one to rapidly search the infinite-dimensional parameter space of all possible material distributions and even perform optimization over this space.
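
    The building block of such a solver is the Chebyshev differentiation matrix. The sketch below constructs the standard collocation matrix (following Trefethen's well-known recipe), raises it to the fourth power as in the 1D beam operator, and checks accuracy on a smooth test function; boundary conditions and the fluid coupling are omitted:

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation points and differentiation matrix (Trefethen)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)            # Chebyshev points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))     # off-diagonals
    D -= np.diag(D.sum(axis=1))                         # negative row sums
    return D, x

# fourth-derivative operator D^4, the stiffness part of the beam equation
N = 32
D, x = cheb(N)
D4 = np.linalg.matrix_power(D, 4)
u = np.sin(np.pi * x)
err = np.abs(D4 @ u - np.pi**4 * np.sin(np.pi * x)).max()
print(f"max error of D^4 on sin(pi x): {err:.2e}")
```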

  18. Interactive physically-based sound simulation

    NASA Astrophysics Data System (ADS)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations of previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation

  19. Performance Analysis of an Actor-Based Distributed Simulation

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.
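
    The actor style described here is easy to demonstrate with threads and mailboxes. In this minimal sketch (illustrative, not the paper's analysis program), two actors communicate only by messages placed in each other's queues, never by direct method calls:

```python
import threading
import queue

# Tiny actor model: each actor owns a mailbox and a thread of control.
class Actor(threading.Thread):
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name, self.mailbox = name, queue.Queue()
        self.peers = {}

    def send(self, to, msg):
        self.peers[to].mailbox.put((self.name, msg))

    def run(self):
        while True:
            sender, msg = self.mailbox.get()
            if msg == "stop":
                break
            # toy behavior: bounce a counter until it reaches 100
            if msg < 100:
                self.send(sender, msg + 1)
            else:
                self.send(sender, "stop")
                break

a, b = Actor("a"), Actor("b")
a.peers = b.peers = {"a": a, "b": b}
a.start(); b.start()
a.mailbox.put(("b", 0))     # kick off the exchange
a.join(); b.join()
print("actors finished after bouncing a message 100 times")
```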

  20. Ocean Wave Simulation Based on Wind Field.

    PubMed

    Li, Zhongyi; Wang, Hao

    2016-01-01

    Ocean wave simulation has a wide range of applications in movies, video games and training systems. Wind force is the main energy source for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods for simulating oceans and other fluid phenomena have undergone rapid development during the past years in the field of computer graphics, few of them construct the ocean surface height field from the perspective of wind forces driving ocean waves. We introduce wind force into the construction of the ocean surface height field by applying wind field data and wind-driven wave particles. Continual and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy is proposed to control these discrete wave particles and simulate an endless ocean surface. The results show that the new method is capable of producing a realistic ocean scene under the influence of wind fields at real-time rates.
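
    A height-field sketch of the wave-particle idea: each particle is a small bump advected by a fixed wind vector, and the surface is their superposition. The wind, the particle shapes, and all parameters below are illustrative placeholders, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Wind-driven wave particles: Gaussian bumps advected by the wind; the
# sea surface height field is their superposition.
wind = np.array([1.5, 0.5])            # wind velocity [m/s], illustrative
n_particles = 200
pos = rng.uniform(0, 50, (n_particles, 2))    # particle positions [m]
amp = rng.uniform(0.02, 0.1, n_particles)     # bump amplitudes [m]
sigma = rng.uniform(0.5, 2.0, n_particles)    # bump widths [m]

x = np.linspace(0, 50, 128)
X, Y = np.meshgrid(x, x)

def surface(pos):
    h = np.zeros_like(X)
    for (px, py), a, s in zip(pos, amp, sigma):
        h += a * np.exp(-((X - px) ** 2 + (Y - py) ** 2) / (2 * s ** 2))
    return h

dt = 0.1
for step in range(100):                # advect particles with the wind
    pos += wind * dt
    pos %= 50.0                        # wrap around: an "endless" ocean
h = surface(pos)
print("surface height range:", h.min(), h.max())
```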

  1. Ocean Wave Simulation Based on Wind Field

    PubMed Central

    2016-01-01

    Ocean wave simulation has a wide range of applications in movies, video games and training systems. Wind force is the main energy source for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods for simulating oceans and other fluid phenomena have undergone rapid development during the past years in the field of computer graphics, few of them construct the ocean surface height field from the perspective of wind forces driving ocean waves. We introduce wind force into the construction of the ocean surface height field by applying wind field data and wind-driven wave particles. Continual and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy is proposed to control these discrete wave particles and simulate an endless ocean surface. The results show that the new method is capable of producing a realistic ocean scene under the influence of wind fields at real-time rates. PMID:26808718

  2. Meshless thin-shell simulation based on global conformal parameterization.

    PubMed

    Guo, Xiaohu; Li, Xin; Bao, Yunfan; Gu, Xianfeng; Qin, Hong

    2006-01-01

    This paper presents a new approach to the physically-based thin-shell simulation of point-sampled geometry via explicit, global conformal point-surface parameterization and meshless dynamics. The point-based global parameterization is founded upon the rigorous mathematics of Riemann surface theory and Hodge theory. The parameterization is globally conformal everywhere except for a minimum number of zero points. Within our parameterization framework, any well-sampled point surface is functionally equivalent to a manifold, enabling popular and powerful surface-based modeling and physically-based simulation tools to be readily adapted for point geometry processing and animation. In addition, we propose a meshless surface computational paradigm in which the partial differential equations (for dynamic physical simulation) can be applied and solved directly over point samples via Moving Least Squares (MLS) shape functions defined on the global parametric domain without explicit connectivity information. The global conformal parameterization provides a common domain to facilitate accurate meshless simulation and efficient discontinuity modeling for complex branching cracks. Through our experiments on thin-shell elastic deformation and fracture simulation, we demonstrate that our integrative method is very natural, and that it has great potential to further broaden the application scope of point-sampled geometry in graphics and relevant fields.
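
    The meshless ingredient, MLS approximation over a parametric domain, can be sketched compactly. The code below fits a locally weighted linear polynomial to scattered samples at one evaluation point; the Gaussian weight, the test field, and the support radius are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)

# Moving Least Squares in a 2D parametric domain: fit a local linear
# polynomial around the evaluation point with a Gaussian weight, with no
# explicit connectivity between the point samples.
pts = rng.uniform(0, 1, (300, 2))                 # "point samples"
vals = np.sin(2 * np.pi * pts[:, 0]) * pts[:, 1]  # sampled field

def mls(xq, h=0.08):
    w = np.exp(-((pts - xq) ** 2).sum(axis=1) / (2 * h ** 2))
    B = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    A = B.T @ (w[:, None] * B)                    # weighted normal equations
    coef = np.linalg.solve(A, B.T @ (w * vals))
    return coef @ [1.0, xq[0], xq[1]]             # evaluate the local fit

xq = np.array([0.3, 0.6])
exact = np.sin(2 * np.pi * xq[0]) * xq[1]
print(f"MLS: {mls(xq):.4f}  exact: {exact:.4f}")
```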

  3. Methods for Analysis and Simulation of Ballistic Impact

    DTIC Science & Technology

    2017-04-01

    Methods for Analysis and Simulation of Ballistic Impact, by John D Clayton, Weapons and Materials Research Directorate, ARL. Subject terms: impact physics, materials science, mechanics, terminal ballistics, shock waves. The report concerns the generation and propagation of planar shock waves through solid material specimens, induced by collision with flyer plates or by explosive loading.

  4. Remote Sensing Requirements Development: A Simulation-Based Approach

    NASA Technical Reports Server (NTRS)

    Zanoni, Vicki; Davis, Bruce; Ryan, Robert; Gasser, Gerald; Blonski, Slawomir

    2002-01-01

    Earth science research and application requirements for multispectral data have often been driven by currently available remote sensing technology. Few parametric studies exist that specify data required for certain applications. Consequently, data requirements are often defined based on the best data available or on what has worked successfully in the past. Since properties such as spatial resolution, swath width, spectral bands, signal-to-noise ratio (SNR), data quantization and band-to-band registration drive sensor platform and spacecraft system architecture and cost, analysis of these criteria is important to optimize system design objectively. Remote sensing data requirements are also linked to calibration and characterization methods. Parameters such as spatial resolution, radiometric accuracy and geopositional accuracy affect the complexity and cost of calibration methods. However, few studies have quantified the true accuracies required for specific problems. As calibration methods and standards are proposed, it is important that they be tied to well-known data requirements. The Application Research Toolbox (ART) developed at the John C. Stennis Space Center provides a simulation-based method for multispectral data requirements development. The ART produces simulated datasets from hyperspectral data through band synthesis. Parameters such as spectral band shape and width, SNR, data quantization, spatial resolution and band-to-band registration can be varied to create many different simulated data products. Simulated data utility can then be assessed for different applications so that requirements can be better understood.
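
    Band synthesis of the kind the toolbox performs can be sketched as a weighted sum over narrow hyperspectral bands. The cube, the Gaussian spectral response, and the noise level below are synthetic stand-ins, not ART's actual data or interface:

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulate a broad multispectral band from narrow hyperspectral bands by
# weighting with a spectral response curve, then degrade with noise.
wl = np.arange(400, 1001, 10)                    # band centers [nm]
cube = rng.uniform(0, 1, (64, 64, wl.size))      # synthetic radiance cube

def synthesize(cube, wl, center, fwhm):
    """Weight hyperspectral bands with a Gaussian spectral response."""
    sigma = fwhm / 2.3548                        # FWHM -> standard deviation
    r = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    r /= r.sum()
    return np.tensordot(cube, r, axes=([2], [0]))

green = synthesize(cube, wl, center=560, fwhm=80)   # synthetic "green" band
noisy = green + rng.normal(0, green.mean() / 100, green.shape)  # SNR ~ 100
print("synthesized band shape:", noisy.shape)
```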

  5. Remote Sensing System Requirements Development: A Simulation-Based Approach

    NASA Technical Reports Server (NTRS)

    Zanoni, Vicki; Davis, Bruce; Ryan, Robert; Blonski, Slavomir; Gasser, Gerald

    2002-01-01

    Earth science research and application requirements for multispectral data have often been driven by currently available remote sensing technology. Few parametric studies exist that specify the data required for certain applications. Consequently, data requirements are often defined based on the best data available or on what has worked successfully in the past. Since properties such as spatial resolution, swath width, spectral bands, signal-to-noise ratio (SNR), data quantization, and band-to-band registration drive sensor platform and spacecraft system architecture and cost, analysis of these criteria is important to objectively optimize system design. Remote sensing data requirements are also linked to calibration and characterization methods. Parameters such as spatial resolution, radiometric accuracy, and geopositional accuracy affect the complexity and cost of calibration methods. However, there are few studies that quantify the true accuracies required for specific problems. As calibration methods and standards are proposed, it is important that they be tied to well-known data requirements. The Application Research Toolbox (ART) developed at Stennis Space Center provides a simulation-based method for multispectral data requirements development. The ART produces simulated data sets from hyperspectral data through band synthesis. Parameters such as spectral band shape and width, SNR, data quantization, spatial resolution, and band-to-band registration can be varied to create many different simulated data products. Simulated data utility can then be assessed for different applications so that requirements can be better understood. This paper describes the ART and its applicability for rigorously deriving remote sensing data requirements.

  6. An Efficient, Semi-implicit Pressure-based Scheme Employing a High-resolution Finite Element Method for Simulating Transient and Steady, Inviscid and Viscous, Compressible Flows on Unstructured Grids

    SciTech Connect

    Richard C. Martineau; Ray A. Berry

    2003-04-01

    A new semi-implicit pressure-based Computational Fluid Dynamics (CFD) scheme for simulating a wide range of transient and steady, inviscid and viscous compressible flows on unstructured finite elements is presented here. This new CFD scheme, termed the PCICE-FEM (Pressure-Corrected ICE Finite Element Method) scheme, is composed of three computational phases: an explicit predictor, an elliptic pressure Poisson solution, and a semi-implicit pressure-correction of the flow variables. The PCICE-FEM scheme is capable of second-order temporal accuracy by incorporating a combination of a time-weighted form of the two-step Taylor-Galerkin Finite Element Method scheme as an explicit predictor for the balance of momentum equations and the finite element form of a time-weighted trapezoid rule method for the semi-implicit form of the governing hydrodynamic equations. Second-order spatial accuracy is accomplished by linear unstructured finite element discretization. The PCICE-FEM scheme employs Flux-Corrected Transport as a high-resolution filter for shock capturing. The scheme is capable of simulating flows from the nearly incompressible to the high supersonic regimes. The PCICE-FEM scheme represents an advancement in mass-momentum coupled, pressure-based schemes. The governing hydrodynamic equations for this scheme are the conservative form of the balance of momentum equations (Navier-Stokes), the mass conservation equation, and the total energy equation. An operator splitting process is performed along the explicit and implicit operators of the semi-implicit governing equations to render the PCICE-FEM scheme in the class of predictor-corrector schemes. The complete set of semi-implicit governing equations in the PCICE-FEM scheme is cast in this form, an explicit predictor phase and a semi-implicit pressure-correction phase, with the elliptic pressure Poisson solution coupling the predictor-corrector phases. The result of this predictor-corrector formulation is that the pressure Poisson

  7. Numerical methods for large eddy simulation of acoustic combustion instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton T.

    Acoustic combustion instabilities occur when interaction between the combustion process and acoustic modes in a combustor results in periodic oscillations in pressure, velocity, and heat release. If sufficiently large in amplitude, these instabilities can cause operational difficulties or the failure of combustor hardware. In many situations, the dominant instability is the result of the interaction between a low frequency acoustic mode of the combustor and the large scale hydrodynamics. Large eddy simulation (LES), therefore, is a promising tool for the prediction of these instabilities, since both the low frequency acoustic modes and the large scale hydrodynamics are well resolved in LES. Problems with the tractability of such simulations arise, however, due to the difficulty of solving the compressible Navier-Stokes equations efficiently at low Mach number and due to the large number of acoustic periods that are often required for such instabilities to reach limit cycles. An implicit numerical method for the solution of the compressible Navier-Stokes equations has been developed which avoids the acoustic CFL restriction, allowing for significant efficiency gains at low Mach number, while still resolving the low frequency acoustic modes of interest. In the limit of a uniform grid the numerical method causes no artificial damping of acoustic waves. New, non-reflecting boundary conditions have also been developed for use with the characteristic-based approach of Poinsot and Lele (1992). The new boundary conditions are implemented in a manner which allows for significant reduction of the computational domain of an LES by eliminating the need to perform LES in regions where one-dimensional acoustics significantly affect the instability but details of the hydrodynamics do not. These new numerical techniques have been demonstrated in an LES of an experimental combustor. The new techniques are shown to be an efficient means of performing LES of acoustic combustion

  8. Reduction and reconstruction methods for simulation and control of fluids

    NASA Astrophysics Data System (ADS)

    Ma, Zhanhua

    POD/ERA algorithms, they can be applied to linear time-varying systems. A motivating and model problem of stabilization of an unstable vortex shedding cycle with high average lift is then shown as an application of the lifted ERA method. We consider the flow past a flat plate at a post-stall angle of attack with periodic forcing at the trailing edge. The Newton-GMRES method is used to find a high-lift unstable orbit at a forcing period slightly larger than the natural period. A six-dimensional reduced-order model is constructed using lifted ERA to reconstruct the full (with a dimension of about 1.4 × 10^5) linearized input-output dynamics about the orbit. An observer-based feedback controller is then designed using the reduced-order model. Simulation results show that the controller stabilizes the unstable orbit, and the reduced-order model correctly predicts the behavior of the full simulation. The second part of the thesis addresses a different type of reduction, namely symmetry reduction. In particular, we exploit symmetries to design special numerical integrators for a general class of systems (Lie-Poisson Hamiltonian systems) such that conservation laws, such as conservation of energy and momentum, are obeyed in numerical simulations. The motivating problem is a system of N point vortices evolving on a sphere that possesses a Lie-Poisson Hamiltonian structure. The design approach is a variational one on the Hamiltonian side that directly discretizes the corresponding Lie-Poisson variational principle, in which the Lie-Poisson system is regarded as a system reduced from a full canonical Hamiltonian system by symmetry. A modified version of the Lie-Poisson variational principle is also proposed in this work. By construction, the resulting integrators will not only simulate the Lie-Poisson dynamics, but also reconstruct some dynamics for the full system or the dual system (the so-called Euler-Poincaré reduced Lagrangian system). The integrators are then applied to a free

  9. Simulation of nonpoint source contamination based on adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Kourakos, G.; Harter, T.

    2014-12-01

    Contamination of groundwater aquifers from nonpoint sources is a worldwide problem. Typical agricultural groundwater basins receive contamination from a large array (on the order of ~10^5-6) of spatially and temporally heterogeneous sources such as fields, crops, and dairies, while the received contaminants emerge, after significantly uncertain time lags, at a large array of discharge surfaces such as public supply, domestic and irrigation wells, and streams. To support decision making in such complex regimes, several approaches have been developed, which can be grouped into three categories: i) index methods, ii) regression methods, and iii) physically based methods. Among the three, physically based methods are considered the most accurate, but at the cost of computational demand. In this work we present a physically based simulation framework which exploits the latest hardware and software developments to simulate large (>>1,000 km^2) groundwater basins. First, we simulate groundwater flow using a sufficiently detailed mesh to capture the spatial heterogeneity. To achieve optimal mesh quality, we combine adaptive mesh refinement with the nonlinear solution for unconfined flow: starting from a coarse grid, the mesh is iteratively refined in the parts of the domain where the flow heterogeneity is higher, resulting in an optimal grid. Second, we simulate the nonpoint source pollution based on the detailed velocity field computed in the previous step. In our approach we use the streamline model, where the 3D transport problem is decomposed into multiple 1D transport problems (see the sketch below). The proposed framework is applied to simulate nonpoint source pollution in the Central Valley aquifer system, California.
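
    The streamline decomposition reduces each flow path to a 1D advection-dispersion equation, which a few lines of explicit finite differences can illustrate. The velocity, dispersion, loading, and time horizon below are illustrative, not Central Valley values:

```python
import numpy as np

# One streamline of the decomposition: a 1D advection-dispersion solve
# with constant nonpoint-source loading at the upstream end. Explicit
# upwind advection and central diffusion.
L, n, v, D = 100.0, 200, 1.0, 0.5        # length, cells, velocity, dispersion
dx = L / n
dt = 0.4 * min(dx / v, dx * dx / (2 * D))  # respect CFL and diffusion limits
c = np.zeros(n)
n_steps = int(120.0 / dt)                # simulate 120 time units

for _ in range(n_steps):
    c_in = 1.0                           # constant source concentration
    cm = np.concatenate([[c_in], c[:-1]])     # upwind (upstream) neighbor
    cp = np.concatenate([c[1:], [c[-1]]])     # downstream, zero-gradient
    c = c + dt * (-v * (c - cm) / dx + D * (cp - 2 * c + cm) / dx**2)

print(f"breakthrough at the discharge end (x = L): c = {c[-1]:.3f}")
```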

  10. Simulation-based instruction of technical skills

    NASA Technical Reports Server (NTRS)

    Towne, Douglas M.; Munro, Allen

    1991-01-01

    A rapid intelligent tutoring development system (RAPIDS) was developed to facilitate the production of interactive, real-time graphical device models for use in instructing the operation and maintenance of complex systems. The tools allowed subject matter experts to produce device models by creating instances of previously defined objects and positioning them in the emerging device model. These simulation authoring functions, as well as those associated with demonstrating procedures and functional effects on the completed model, required no previous programming experience or use of frame-based instructional languages. Three large simulations were developed in RAPIDS, each involving more than a dozen screen-sized sections. Seven small, single-view applications were developed to explore the range of applicability. Three workshops were conducted to train others in the use of the authoring tools. Participants learned to employ the authoring tools in three to four days and were able to produce small working device models on the fifth day.

  12. Evaluation methods of a middleware for networked surgical simulations.

    PubMed

    Cai, Qingbo; Liberatore, Vincenzo; Cavuşoğlu, M Cenk; Yoo, Youngjin

    2006-01-01

    Distributed surgical virtual environments are desirable because they substantially extend the accessibility of computational resources through network communication. However, network conditions, such as limited bandwidth, delays, and packet losses, critically affect the quality of a networked surgical simulation. A solution to this problem is to introduce a middleware between the simulation application and the network, so that it can take actions to enhance the user-perceived simulation performance. To comprehensively assess the effectiveness of such a middleware, we propose several evaluation methods in this paper: semi-automatic evaluation, middleware overhead measurement, and usability testing.

  13. Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes

    PubMed Central

    Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J.; Wang, Liliang; Lin, Jianguo

    2016-01-01

    The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However, these alloys, and in particular their high-strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner, thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions. PMID:28060298

  14. Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes.

    PubMed

    Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J; Wang, Liliang; Lin, Jianguo

    2016-12-13

    The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However, these alloys, and in particular their high-strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner, thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions.

  15. Current concepts in simulation-based trauma education.

    PubMed

    Cherry, Robert A; Ali, Jameel

    2008-11-01

    The use of simulation-based technology in trauma education has focused on providing a safe and effective alternative to the more traditional methods used to teach technical skills and critical concepts in trauma resuscitation. Trauma team training using simulation-based technology is also being used to develop skills in leadership, team information sharing, communication, and decision-making