Science.gov

Sample records for based simulation methods

  1. Simulation Method for Wind Tunnel Based Virtual Flight Testing

    NASA Astrophysics Data System (ADS)

    Li, Hao; Zhao, Zhong-Liang; Fan, Zhao-Lin

    The Wind Tunnel Based Virtual Flight Testing (WTBVFT) can replicate actual free flight and explore the nonlinear coupling mechanism between aerodynamics and flight dynamics during maneuvers in the wind tunnel. The basic WTBVFT concept is to mount the test model on a specialized support system that allows free rotational motion of the model, with the aerodynamic loads and motion parameters measured simultaneously during the model motion. Simulations of the 3-DOF pitching motion of a typical missile in the vertical plane are performed with open-loop and closed-loop control methods. The objective is to analyze the effect of the main differences between the WTBVFT and actual free flight, and to study the simulation method for the WTBVFT. Preliminary simulation analyses have been conducted with positive results, indicating that WTBVFT with a closed-loop autopilot control method using a pitch angular rate feedback signal is able to replicate actual free-flight behavior within acceptable differences.

  2. Optimal grid-based methods for thin film micromagnetics simulations

    NASA Astrophysics Data System (ADS)

    Muratov, C. B.; Osipov, V. V.

    2006-08-01

    Thin film micromagnetics are a broad class of materials with many technological applications, primarily in magnetic memory. The dynamics of the magnetization distribution in these materials is traditionally modeled by the Landau-Lifshitz-Gilbert (LLG) equation. Numerical simulations of the LLG equation are complicated by the need to compute the stray field due to inhomogeneities in the magnetization, which presents the chief bottleneck for simulation speed. Here, we introduce a new method for computing the stray field in a sample for a reduced model of ultra-thin film micromagnetics. The method uses a recently proposed idea of optimal finite difference grids for approximating Neumann-to-Dirichlet maps, and has the advantage of allowing non-uniform discretization in the film plane, as well as an efficient treatment of the boundary conditions at infinity for the stray field. We present several examples of the method's implementation and give a detailed comparison of its performance for studying domain wall structures against that of conventional FFT-based methods.

  3. Study of Flapping Flight Using Discrete Vortex Method Based Simulations

    NASA Astrophysics Data System (ADS)

    Devranjan, S.; Jalikop, Shreyas V.; Sreenivas, K. R.

    2013-12-01

    In recent times, research in the area of flapping flight has attracted renewed interest, with an endeavor to use this mechanism in Micro Air Vehicles (MAVs). For sustained, high-endurance flight with a larger payload-carrying capacity, a simple and efficient flapping kinematics needs to be identified. In this paper, we have used flow visualizations and Discrete Vortex Method (DVM) based simulations to study flapping flight. Our results highlight that a simple flapping kinematics with a down-stroke period (tD) shorter than the up-stroke period (tU) produces sustained lift. We have identified the optimal asymmetry ratio (Ar = tD/tU) for which flapping wings produce maximum lift, and find that introducing optimal wing flexibility further enhances the lift.
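
    The role of stroke asymmetry can be illustrated with a toy calculation. Below is a minimal quasi-steady sketch in Python, not the authors' DVM: the quadratic force law and every parameter value (amplitude, period, chord, force coefficient) are illustrative assumptions, and the toy's monotonic trend in Ar cannot reproduce the interior optimum that the unsteady vortex dynamics produce.

    ```python
    # Toy quasi-steady estimate of the cycle-averaged vertical force for
    # asymmetric flapping. A quadratic force law F ~ 0.5*rho*CF*c*v^2 opposes
    # the plunge motion, so a fast downstroke and a slow upstroke leave a net
    # upward force. All values are illustrative assumptions, not the paper's.
    def mean_lift(Ar, amplitude=0.05, period=0.1, rho=1.2, chord=0.05, CF=1.5):
        """Cycle-averaged vertical force per unit span for Ar = tD/tU."""
        tD = period * Ar / (1.0 + Ar)          # downstroke duration
        tU = period - tD                       # upstroke duration
        vD = amplitude / tD                    # downstroke plunge speed
        vU = amplitude / tU                    # upstroke plunge speed
        F_D = 0.5 * rho * CF * chord * vD**2   # upward force while plunging down
        F_U = 0.5 * rho * CF * chord * vU**2   # downward force while moving up
        return (F_D * tD - F_U * tU) / period

    for Ar in (1.0, 0.8, 0.6, 0.4):
        print(f"Ar = {Ar:.1f}: mean vertical force = {mean_lift(Ar):+.4f} N/m")
    ```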

  4. A method for MREIT-based source imaging: simulation studies.

    PubMed

    Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun

    2016-08-01

    This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping that probes the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few percent), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time so as to collect more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violate the Nyquist criterion, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time change of the Laplacian of the nonlinearly wrapped data. PMID:27401235
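
    The detection step lends itself to a small numerical illustration. The sketch below is an assumed reading of the abstract, not the authors' code: it builds a smooth synthetic phase that wraps several times, adds a small sparse change, and locates that change from the time change of the Laplacian of the wrapped data.

    ```python
    # Sketch of the detection idea: a small, sparse change in phase data is
    # recovered from the time change of the Laplacian of the wrapped phase,
    # because the smooth background and its wrap discontinuities are (nearly)
    # identical at both time points and cancel in the difference.
    import numpy as np
    from scipy.ndimage import laplace

    n = 128
    y, x = np.mgrid[0:n, 0:n] / n
    background = 40.0 * np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.15)  # wraps ~6 times
    bump = 0.3 * np.exp(-((x - 0.3)**2 + (y - 0.6)**2) / 0.0005)       # sparse change

    wrap = lambda p: np.angle(np.exp(1j * p))       # nonlinear wrapping to (-pi, pi]
    d_lap = laplace(wrap(background + bump)) - laplace(wrap(background))

    peak = np.unravel_index(np.abs(d_lap).argmax(), d_lap.shape)
    print("bump centred near (row 77, col 38); |delta Laplacian| peaks at", peak)
    ```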

  5. A method for MREIT-based source imaging: simulation studies

    NASA Astrophysics Data System (ADS)

    Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun

    2016-08-01

    This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping that probes the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few percent), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time so as to collect more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violate the Nyquist criterion, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time change of the Laplacian of the nonlinearly wrapped data.

  6. Human swallowing simulation based on videofluorography images using Hamiltonian MPS method

    NASA Astrophysics Data System (ADS)

    Kikuchi, Takahiro; Michiwaki, Yukihiro; Kamiya, Tetsu; Toyama, Yoshio; Tamai, Tasuku; Koshizuka, Seiichi

    2015-09-01

    In developed nations, swallowing disorders and aspiration pneumonia have become serious problems. We developed a method to simulate the behavior of the organs involved in swallowing to clarify the mechanisms of swallowing and aspiration. The shape model is based on anatomically realistic geometry, and the motion model utilizes forced displacements based on realistic dynamic images to reflect the mechanisms of human swallowing. The soft tissue organs are modeled as nonlinear elastic material using the Hamiltonian MPS method. This method allows for stable simulation of the complex swallowing movement. A penalty method using metaballs is employed to simulate contact between organ walls and smooth sliding along the walls. We performed four numerical simulations under different analysis conditions to represent four cases of swallowing, including a healthy volunteer and a patient with a swallowing disorder. The simulation results were compared to examine the epiglottic downfolding mechanism, which strongly influences the risk of aspiration.

  7. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.

    PubMed

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-01-01

    A method of recharging implantable biosensors based on solar radiation is proposed. First, models of the proposed method are developed. Second, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, indicating that the proposed method can contribute to a low-cost, convenient and safe way of recharging implantable biosensors. PMID:27626422
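
    A minimal Monte Carlo sketch of the light-transport part is given below. The layer thicknesses and optical coefficients are assumed values, not the paper's, and refractive-index mismatch, scattering anisotropy and proper boundary crossing are all ignored.

    ```python
    # Crude Monte Carlo photon transport in layered skin: photons carry a
    # weight, deposit mu_a/mu_t of it at each interaction, and rescatter.
    import math
    import random

    random.seed(0)
    # (name, thickness [cm], mu_a [1/cm], mu_s [1/cm]) -- assumed properties
    layers = [("epidermis", 0.01, 4.0, 45.0),
              ("dermis",    0.20, 2.0, 25.0),
              ("subcutis",  0.30, 1.0, 12.0)]
    z_bounds, depth = [], 0.0
    for _, thickness, _, _ in layers:
        depth += thickness
        z_bounds.append(depth)

    def layer_at(z):
        """Index of the layer containing depth z, or None if outside the tissue."""
        if z < 0.0:
            return None
        for i, zb in enumerate(z_bounds):
            if z < zb:
                return i
        return None

    absorbed = [0.0] * len(layers)
    n_photons = 20_000
    for _ in range(n_photons):
        z, w, mu_z = 0.0, 1.0, 1.0            # depth, weight, direction cosine
        while w > 1e-4:
            i = layer_at(z)
            if i is None:
                break                          # reflected out or transmitted
            _, _, mu_a, mu_s = layers[i]
            mu_t = mu_a + mu_s
            z += mu_z * (-math.log(1.0 - random.random()) / mu_t)  # free path
            j = layer_at(z)
            if j is None:
                break
            absorbed[j] += w * mu_a / mu_t     # deposit part of the weight
            w *= mu_s / mu_t                   # survivor keeps scattering
            mu_z = random.uniform(-1.0, 1.0)   # crude isotropic redirection

    for (name, *_), a in zip(layers, absorbed):
        print(f"{name:9s}: absorbed fraction ~ {a / n_photons:.3f}")
    ```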

  8. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    The propagation method and the choice of mesh grid are both very important for obtaining correct propagation results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method, but is freely alterable. However, the choice of mesh grid on the target board directly influences the validity of the simulation results. An adaptive mesh choosing method based on wave characteristics is therefore proposed along with the introduced propagation method, so that appropriate mesh grids on the target board can be calculated to obtain satisfactory results. For a complex initial wave field, or propagation through inhomogeneous media, the mesh grid can also be calculated and set rationally according to the above method. Finally, comparison with theoretical results shows that the simulation results of the proposed method coincide with theory. Comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel number conditions; that is, the method can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity. It can therefore provide better support for wave propagation applications such as atmospheric optics and laser propagation.
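
    For reference, the fixed-grid textbook version of angular spectrum propagation is sketched below; the paper's alterable output grid and adaptive mesh selection are not reproduced, and the beam parameters are illustrative.

    ```python
    # Standard angular-spectrum free-space propagation step in NumPy -- the
    # building block behind the method described above.
    import numpy as np

    def angular_spectrum(u0, wavelength, dx, z):
        """Propagate the complex field u0 (n x n, pixel pitch dx) a distance z."""
        n = u0.shape[0]
        fx = np.fft.fftfreq(n, d=dx)                  # spatial frequency axes
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        prop = arg > 0                                # drop evanescent components
        kz = 2.0 * np.pi * np.sqrt(np.where(prop, arg, 0.0))
        H = np.where(prop, np.exp(1j * kz * z), 0.0)  # transfer function
        return np.fft.ifft2(np.fft.fft2(u0) * H)

    # Example: a 0.5 mm Gaussian beam at 633 nm propagated 0.5 m.
    n, dx = 512, 10e-6
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)
    u0 = np.exp(-(X**2 + Y**2) / (0.5e-3)**2)
    u1 = angular_spectrum(u0, 633e-9, dx, 0.5)
    print("on-axis intensity ratio:",
          abs(u1[n // 2, n // 2])**2 / abs(u0[n // 2, n // 2])**2)
    ```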

  9. A Simulation-Based Comparison of Covariate Adjustment Methods for the Analysis of Randomized Controlled Trials

    PubMed Central

    Chaussé, Pierre; Liu, Jin; Luta, George

    2016-01-01

    Covariate adjustment methods are frequently used when baseline covariate information is available for randomized controlled trials. Using a simulation study, we compared the analysis of covariance (ANCOVA) with three nonparametric covariate adjustment methods with respect to point and interval estimation for the difference between means. The three alternative methods were based on important members of the generalized empirical likelihood (GEL) family, specifically the empirical likelihood (EL) method, the exponential tilting (ET) method, and the continuously updated estimator (CUE) method. Two criteria were considered for the comparison of the four statistical methods: the root mean squared error and the empirical coverage of the nominal 95% confidence intervals for the difference between means. Based on the results of the simulation study, for sensitivity analysis purposes, we recommend the use of ANCOVA (with robust standard errors when heteroscedasticity is present) together with the CUE-based covariate adjustment method. PMID:27077870
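
    The ANCOVA arm of such a simulation is easy to sketch. The snippet below checks the empirical coverage of the 95% confidence interval for the treatment effect under an assumed data-generating model; the GEL-based EL/ET/CUE estimators are not re-implemented here.

    ```python
    # Simulated randomized trial with one baseline covariate: ANCOVA point
    # estimate and 95% CI coverage for the treatment effect, plain OLS.
    import numpy as np

    rng = np.random.default_rng(0)
    true_effect, n, reps = 1.0, 100, 2000
    cover = 0
    for _ in range(reps):
        x = rng.normal(size=n)                          # baseline covariate
        t = rng.permutation(np.repeat([0, 1], n // 2))  # 1:1 randomization
        y = true_effect * t + 0.8 * x + rng.normal(size=n)
        X = np.column_stack([np.ones(n), t, x])         # ANCOVA design matrix
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - 3)
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
        lo, hi = beta[1] - 1.96 * se, beta[1] + 1.96 * se
        cover += (lo < true_effect < hi)
    print(f"empirical 95% CI coverage: {cover / reps:.3f}")
    ```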

  10. Multiscale Simulation of Microcrack Based on a New Adaptive Finite Element Method

    NASA Astrophysics Data System (ADS)

    Xu, Yun; Chen, Jun; Chen, Dong Quan; Sun, Jin Shan

    In this paper, a new adaptive finite element (FE) framework based on the variational multiscale method is proposed and applied to simulate the dynamic behavior of metal under loading. First, the extended bridging scale method is used to couple molecular dynamics and FE. Then, the macroscopic damage evolution arising from micro defects is simulated by the adaptive FE method. Auxiliary strategies, such as conservative mesh remapping, a failure mechanism and a mesh splitting technique, are also included in the adaptive FE computation. The efficiency of our method is validated by numerical experiments.

  11. Real-time simulation of ultrasound refraction phenomena using ray-trace based wavefront construction method.

    PubMed

    Szostek, Kamil; Piórkowski, Adam

    2016-10-01

    Ultrasound (US) imaging is one of the most popular techniques used in clinical diagnosis, mainly due to its lack of adverse effects on patients and the simplicity of US equipment. However, the characteristics of the medium cause US imaging to reconstruct the examined tissues imprecisely. The artifacts are the results of wave phenomena, i.e. diffraction or refraction, and should be recognized during examination to avoid misinterpretation of a US image. Currently, US training is based on teaching materials and simulators, and ultrasound simulation has become an active research area in medical computer science. Many US simulators are limited by the complexity of the wave phenomena, which leads to intensive, sophisticated computation that makes it difficult for systems to operate in real time. To achieve the required frame rate, the vast majority of simulators simplify or neglect wave diffraction and refraction. The following paper proposes a solution for an ultrasound simulator based on methods known in geophysics. To improve simulation quality, a wavefront construction method was adapted which takes the refraction phenomena into account. This technique uses ray tracing and velocity averaging to construct wavefronts in the simulation. Instead of a geological medium, real CT scans are applied. This approach can produce more realistic projections of pathological findings and is also capable of providing real-time simulation. PMID:27586490

  12. Evaluation of a clinical simulation-based assessment method for EHR-platforms.

    PubMed

    Jensen, Sanne; Rasmussen, Stine Loft; Lyng, Karen Marie

    2014-01-01

    In a procurement process, the assessment of issues like human factors and the interaction between technology and end-users can be challenging. In a large public procurement of an electronic health record platform (EHR-platform) in Denmark, a clinical simulation-based method for assessing and comparing human factor issues was developed and evaluated. This paper describes the evaluation of the method and its advantages and disadvantages. Our findings showed that clinical simulation is beneficial for assessing user satisfaction, usefulness and patient safety, although it is resource demanding. The method made it possible to assess qualitative topics during the procurement, and it provides an excellent basis for user involvement. PMID:25160323

  13. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment, extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation, using a database of statistical impedance boundary conditions which incorporates the complexity of building walls into the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields, from which predictions of communications capability may be made.

  14. A novel method for simulation of brushless DC motor servo-control system based on MATLAB

    NASA Astrophysics Data System (ADS)

    Tao, Keyan; Yan, Yingmin

    2006-11-01

    This paper presents research on the simulation of a brushless DC motor (BLDCM) servo-control system. Based on the mathematical model of the BLDCM, the system simulation model was built with the MATLAB software. In building the system model, isolated functional blocks, such as the BLDCM block, the rotor-position detection block and the phase-change logic block, were modeled. By organically combining these blocks, the model of the BLDCM can be established easily. The reasonableness and validity of the model have been verified by the simulation results, and this novel method offers a new approach for designing and debugging actual motors.
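
    A Python stand-in for this kind of block-style motor simulation is sketched below, using a simplified DC-motor model with a PI speed loop; the commutation and phase-change logic of a real BLDCM is omitted, and all motor constants and gains are illustrative assumptions, not parameters from the paper.

    ```python
    # Forward-Euler integration of a simplified DC-motor model with a PI
    # speed controller -- electrical, mechanical and control "blocks".
    R, L, Ke, Kt, J, B = 1.0, 0.5e-3, 0.05, 0.05, 1e-4, 1e-5  # assumed constants
    Kp, Ki = 0.5, 5.0                       # hand-tuned PI gains (assumption)
    dt, T, w_ref = 1e-5, 0.2, 200.0         # step [s], horizon [s], target [rad/s]

    i = w = integ = 0.0
    for _ in range(int(T / dt)):
        err = w_ref - w
        u = Kp * err + Ki * integ
        v = max(-24.0, min(24.0, u))        # 24 V bus limits the voltage command
        if u == v:                          # basic anti-windup: freeze when saturated
            integ += err * dt
        di = (v - R * i - Ke * w) / L       # electrical dynamics block
        dw = (Kt * i - B * w) / J           # mechanical dynamics block
        i += di * dt
        w += dw * dt
    print(f"speed after {T} s: {w:.1f} rad/s (reference {w_ref} rad/s)")
    ```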

  15. Apparatus and method for interaction phenomena with world modules in data-flow-based simulation

    DOEpatents

    Xavier, Patrick G.; Gottlieb, Eric J.; McDonald, Michael J.; Oppel, III, Fred J.

    2006-08-01

    A method and apparatus accommodate interaction phenomena in a data-flow-based simulation of a system of elements, by establishing meta-modules to simulate system elements and by establishing world modules associated with interaction phenomena. World modules are associated with proxy modules from a group of meta-modules associated with one of the interaction phenomena. The world modules include a communication world, a sensor world, a mobility world, and a contact world. World modules can be further associated with other world modules if necessary. Interaction phenomena are simulated in corresponding world modules by accessing member functions in the associated group of proxy modules. Proxy modules can be dynamically allocated at a desired point in the simulation to accommodate the addition of elements to the system of elements being simulated, such as a system of robots, a system of communication terminals, or a system of vehicles.

  16. Methods for simulation-based analysis of fluid-structure interaction.

    SciTech Connect

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.

  17. Two dimensional finite element method simulation to determine the brain capacitance based on ECVT measurement

    NASA Astrophysics Data System (ADS)

    Sirait, S. H.; Taruno, W. P.; Khotimah, S. N.; Haryanto, F.

    2016-03-01

    A simulation to determine the capacitance of the brain's electrical activity based on a two-electrode ECVT sensor was conducted in this study. The study began with the construction of a 2D coronal head geometry with five different layers and an ECVT sensor design, after which the two designs were merged. Boundary conditions were then applied to the two electrodes of the ECVT sensor: the first electrode was defined as a Dirichlet boundary condition at a potential of 20 V, and the other electrode as a Dirichlet boundary condition at a potential of 0 V. Simulated Hodgkin-Huxley-based action potentials were applied as the electrical activity of the brain and were placed sequentially at three different cross-sectional positions. The Poisson equation was implemented in the geometry as the governing equation and solved by the finite element method. The simulation showed that the simulated capacitance values were affected by the action potentials and their cross-sectional positions.
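
    The paper solves the Poisson equation with FEM on a layered head model. As a much simpler stand-in, the sketch below solves the homogeneous, source-free Laplace problem on a square grid with the same two Dirichlet electrode conditions (20 V and 0 V) by Jacobi iteration; the grid, electrode placement and boundary handling are all assumptions.

    ```python
    # Finite-difference Laplace solve with two Dirichlet electrodes.
    import numpy as np

    n = 64
    V = np.zeros((n, n))
    fixed = np.zeros((n, n), dtype=bool)
    V[0, 10:20], fixed[0, 10:20] = 20.0, True    # driven electrode at 20 V
    V[0, 44:54], fixed[0, 44:54] = 0.0, True     # ground electrode at 0 V

    for _ in range(5000):                         # Jacobi relaxation
        Vn = 0.25 * (np.roll(V, 1, 0) + np.roll(V, -1, 0) +
                     np.roll(V, 1, 1) + np.roll(V, -1, 1))
        # crude insulating (zero-flux) outer boundary: copy interior neighbours
        Vn[0, :], Vn[-1, :] = Vn[1, :], Vn[-2, :]
        Vn[:, 0], Vn[:, -1] = Vn[:, 1], Vn[:, -2]
        Vn[fixed] = V[fixed]                      # re-impose electrode potentials
        V = Vn
    print("potential on the surface between the electrodes:",
          round(float(V[0, 32]), 2), "V")
    ```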

  18. Validation of population-based disease simulation models: a review of concepts and methods

    PubMed Central

    2010-01-01

    Background Computer simulation models are used increasingly to support public health research and policy, but questions about their quality persist. The purpose of this article is to review the principles and methods for validation of population-based disease simulation models. Methods We developed a comprehensive framework for validating population-based chronic disease simulation models and used this framework in a review of published model validation guidelines. Based on the review, we formulated a set of recommendations for gathering evidence of model credibility. Results Evidence of model credibility derives from examining: 1) the process of model development, 2) the performance of a model, and 3) the quality of decisions based on the model. Many important issues in model validation are insufficiently addressed by current guidelines. These issues include a detailed evaluation of different data sources, graphical representation of models, computer programming, model calibration, between-model comparisons, sensitivity analysis, and predictive validity. The role of external data in model validation depends on the purpose of the model (e.g., decision analysis versus prediction). More research is needed on the methods of comparing the quality of decisions based on different models. Conclusion As the role of simulation modeling in population health is increasing and models are becoming more complex, there is a need for further improvements in model validation methodology and common standards for evaluating model credibility. PMID:21087466

  19. Simulation of ultrasonic wave propagation in welds using ray-based methods

    NASA Astrophysics Data System (ADS)

    Gardahaut, A.; Jezzine, K.; Cassereau, D.; Leymarie, N.

    2014-04-01

    Austenitic or bimetallic welds are particularly difficult to inspect due to their anisotropic and inhomogeneous properties. In this paper, we present a ray-based method to simulate the propagation of ultrasonic waves in such structures, taking their internal properties into account. The method is applied to a smooth representation of the grain orientation in the weld. The propagation model consists in solving the eikonal and transport equations in an inhomogeneous anisotropic medium. Simulation results are presented and compared to finite element results for a distribution of grain orientation expressed in closed form.

  20. Misclassification Errors in Unsupervised Classification Methods. Comparison Based on the Simulation of Targeted Proteomics Data

    PubMed Central

    Andreev, Victor P; Gillespie, Brenda W; Helfand, Brian T; Merion, Robert M

    2016-01-01

    Unsupervised classification methods are gaining acceptance in omics studies of complex common diseases, which are often vaguely defined and are likely collections of disease subtypes. Unsupervised classification based on the molecular signatures identified in omics studies has the potential to reflect the molecular mechanisms of the subtypes of the disease and to lead to more targeted and successful interventions for the identified subtypes. Multiple classification algorithms exist, but none is ideal for all types of data. Importantly, there are no established methods to estimate sample size in unsupervised classification (unlike power analysis in hypothesis testing). Therefore, we developed a simulation approach allowing comparison of misclassification errors and estimation of the required sample size for a given effect size, number, and correlation matrix of the differentially abundant proteins in targeted proteomics studies. All the experiments were performed in silico. The simulated data imitated those expected from the study of the plasma of patients with lower urinary tract dysfunction with the aptamer proteomics assay Somascan (SomaLogic Inc., Boulder, CO), which targeted 1129 proteins, including 330 involved in inflammation, 180 in stress response, and 80 in aging. Three popular clustering methods (hierarchical, k-means, and k-medoids) were compared. K-means clustering performed much better on the simulated data than the other two methods and enabled classification with a misclassification error below 5% in the simulated cohort of 100 patients, based on the molecular signatures of 40 differentially abundant proteins (effect size 1.5) from among the 1129-protein panel. PMID:27524871
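
    The core of this simulation design can be reproduced in a few lines. The sketch below uses assumed independent Gaussian noise and a plain Lloyd's k-means with random restarts rather than the authors' full pipeline; the dimensions, effect size and cohort size are taken from the abstract.

    ```python
    # Two simulated disease subtypes, 1129 protein features of which 40 differ
    # by 1.5 SD, k-means clustering, and the resulting misclassification error.
    import numpy as np

    rng = np.random.default_rng(1)
    n, p, p_diff, effect = 100, 1129, 40, 1.5
    labels = np.repeat([0, 1], n // 2)
    X = rng.normal(size=(n, p))
    X[labels == 1, :p_diff] += effect        # subtype 2 shifted in 40 proteins

    best = None
    for _ in range(10):                       # Lloyd's algorithm, random restarts
        centers = X[rng.choice(n, 2, replace=False)]
        for _ in range(100):
            d = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2)
            assign = d.argmin(axis=1)
            new = np.array([X[assign == k].mean(axis=0) if (assign == k).any()
                            else centers[k] for k in (0, 1)])
            if np.allclose(new, centers):
                break
            centers = new
        inertia = d.min(axis=1).sum()
        if best is None or inertia < best[0]:
            best = (inertia, assign)

    err = (best[1] != labels).mean()
    print(f"misclassification error: {min(err, 1.0 - err):.3f}")  # label-switch safe
    ```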

  1. Simulation of the Recharging Method of Implantable Biosensors Based on a Wearable Incoherent Light Source

    PubMed Central

    Song, Yong; Hao, Qun; Kong, Xianyue; Hu, Lanxin; Cao, Jie; Gao, Tianxin

    2014-01-01

    Recharging implantable electronics from outside the human body is very important for applications such as implantable biosensors and other implantable electronics. In this paper, a recharging method for implantable biosensors based on a wearable incoherent light source is proposed and simulated. First, we develop a model of the incoherent light source and a multi-layer model of skin tissue. Second, the recharging processes of the proposed method are simulated and tested experimentally, whereby some important conclusions are reached. Our results indicate that the proposed method offers a convenient, safe and low-cost recharging method for implantable biosensors, which should promote the application of implantable electronics. PMID:25372616

  2. Agent-based modeling: Methods and techniques for simulating human systems

    PubMed Central

    Bonabeau, Eric

    2002-01-01

    Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems. After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed by using real-world applications: flow simulation, organizational simulation, market simulation, and diffusion simulation. For each category, one or several business applications are described and analyzed. PMID:12011407
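
    A deliberately small example of the fourth category (diffusion simulation) is sketched below; the adoption probabilities are illustrative assumptions. The payoff of the agent-based framing is that the per-agent decision rule can easily be made heterogeneous, which an aggregate equation cannot capture.

    ```python
    # Minimal agent-based diffusion-of-innovation model: each agent adopts
    # through independent innovation plus imitation of the current adopters.
    import random

    random.seed(42)
    N, p_innovate, q_imitate, steps = 1000, 0.01, 0.4, 40
    adopted = [False] * N
    history = []
    for _ in range(steps):
        frac = sum(adopted) / N                 # current adopter fraction
        for i in range(N):
            # per-agent stochastic decision; rules could differ per agent
            if not adopted[i] and random.random() < p_innovate + q_imitate * frac:
                adopted[i] = True
        history.append(sum(adopted))
    print("cumulative adopters every 5 steps:", history[::5])
    ```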

  3. The method of infrared image simulation based on the measured image

    NASA Astrophysics Data System (ADS)

    Lou, Shuli; Liu, Liang; Ren, Jiancun

    2015-10-01

    The development of infrared imaging guidance technology has promoted research into infrared imaging simulation technology, the key to which is the generation of IR images. IR image generation is valuable both militarily and economically. In order to address the credibility and economy of infrared scene generation, a method of infrared scene generation based on measured images is proposed. By studying the optical properties of ship targets and the sea background, ship-target images at various attitudes are extracted from recorded images using digital image processing technology. The ship-target image is zoomed in and out to simulate the relative motion between the viewpoint and the target, according to the field of view and the distance between the target and the sensor. The gray scale of the ship-target image is adjusted to simulate the change in radiation from the ship target, according to the distance between the viewpoint and the target and the atmospheric transmission. Frames of recorded infrared images without a target are interpolated to simulate the high frame rate of the missile. The processed ship-target images and sea-background infrared images are synthesized to obtain infrared scenes for different viewpoints. Experiments proved that this method is flexible and applicable, and that the fidelity and reliability of the synthesized infrared images can be guaranteed.

  4. Three-dimensional imaging simulation of active laser detection based on DLOS method

    NASA Astrophysics Data System (ADS)

    Zhang, Chuanxin; Zhou, Honghe; Chen, Xiang; Yuan, Yuan; Shuai, Yong; Tan, Heping

    2016-07-01

    The technology of active laser detection is widely used in many different fields nowadays. With the development of computer technology, programmable software simulation can provide a reference for the design of active laser detection systems, and the characteristics of such systems can be judged more visually. Based on the features of active laser detection, an improved method of radiative transfer calculation (Double Line Of Sight, DLOS) was developed, and simulation models of complete active laser detection imaging were established. The correctness of the improved method was verified by comparison with results calculated by the Monte Carlo method. The results of active laser detection imaging of complex three-dimensional targets in different atmospheric scenes were compared, and the influence of different atmospheric dielectric properties was analyzed, providing an effective reference for the design of active laser detection.

  5. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by implementing the F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the amount of calculation and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, so that better correlation between simulation and test is achieved. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
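
    The forward half of the procedure, a polynomial surrogate plus Monte Carlo sampling, is sketched below on a toy one-parameter model; the paper's incomplete RSM, F-test screening, and the inverse updating step with the hybrid particle-swarm/Nelder-Mead optimizer are not reproduced.

    ```python
    # Fit a polynomial response surface to a few "expensive" model runs, then
    # push Monte Carlo samples through the cheap surrogate.
    import numpy as np

    rng = np.random.default_rng(3)

    def expensive_model(k):
        """Stand-in for a costly FE run: natural frequency of a unit-mass oscillator."""
        return np.sqrt(k) / (2.0 * np.pi)

    k_doe = np.linspace(50.0, 150.0, 9)        # design of experiments over stiffness
    coef = np.polyfit(k_doe, expensive_model(k_doe), 4)  # 4th-order RSM surrogate

    k_mc = rng.normal(100.0, 10.0, 100_000)    # assumed stiffness uncertainty
    f_mc = np.polyval(coef, k_mc)              # cheap surrogate evaluations
    print(f"surrogate MC: mean = {f_mc.mean():.4f} Hz, std = {f_mc.std():.4f} Hz")
    print(f"direct MC   : mean = {expensive_model(k_mc).mean():.4f} Hz")
    ```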

  6. Efficient Molecular Dynamics Simulations of Multiple Radical Center Systems Based on the Fragment Molecular Orbital Method

    SciTech Connect

    Nakata, Hiroya; Schmidt, Michael W; Fedorov, Dmitri G; Kitaura, Kazuo; Nakamura, Shinichiro; Gordon, Mark S

    2014-10-16

    The fully analytic energy gradient has been developed and implemented for the restricted open-shell Hartree–Fock (ROHF) method based on the fragment molecular orbital (FMO) theory for systems that have multiple open-shell molecules. The accuracy of the analytic ROHF energy gradient is compared with the corresponding numerical gradient, illustrating the accuracy of the analytic gradient. The ROHF analytic gradient is used to perform molecular dynamics simulations of an unusual open-shell system, liquid oxygen, and mixtures of oxygen and nitrogen. These molecular dynamics simulations provide some insight about how triplet oxygen molecules interact with each other. Timings reveal that the method can calculate the energy gradient for a system containing 4000 atoms in only 6 h. Therefore, it is concluded that the FMO-ROHF method will be useful for investigating systems with multiple open shells.

  7. Thermoelastic Simulations Based on Discontinuous Galerkin Methods: Formulation and Application in Gas Turbines

    NASA Astrophysics Data System (ADS)

    Hao, Zengrong; Gu, Chunwei; Song, Yin

    2016-06-01

    This study extends discontinuous Galerkin (DG) methods to simulations of thermoelasticity. A thermoelastic formulation of the interior penalty DG (IP-DG) method is presented, and aspects of the numerical implementation are discussed in matrix form. The content related to thermal expansion effects is illustrated explicitly in the discretized equation system. The feasibility of the method for general thermoelastic simulations is validated through typical test cases, including tackling stress discontinuities caused by jumps in thermal expansive properties and controlling the accompanying non-physical oscillations by adjusting the magnitude of the IP term. The simulation platform developed upon the method is applied to the engineering analysis of thermoelastic performance for a turbine vane and a series of vanes with various types of simplified thermal barrier coating (TBC) systems. This analysis demonstrates that while TBC properties related to heat conduction are generally the major consideration for protecting the alloy base vanes, the mechanical properties may have more significant effects on the protection of the coatings themselves. Changing characteristics of the normal tractions on the TBC/base interface, which are closely related to the occurrence of coating failures, are summarized and analysed over diverse component distributions along the TBC thickness of the functionally graded materials, illustrating opposite tendencies in situations with different thermal-stress-free temperatures for the coatings.

  8. Availability study of CFD-based Mask3D simulation method for next generation lithography technologies

    NASA Astrophysics Data System (ADS)

    Takahashi, M.; Kawabata, Y.; Washitani, T.; Tanaka, S.; Maeda, S.; Mimotogi, S.

    2014-03-01

    As lithography technologies progress, the importance of Mask3D analysis has been emphasized because the influence of mask topography effects increases explosively and can no longer be avoided. An electromagnetic field simulation method, such as FDTD, RCWA or FEM, is applied to analyze these complicated phenomena. We have investigated the Constrained Interpolation Profile (CIP) method, one of the Methods of Characteristics (MoC), for Mask3D analysis in optical lithography. The CIP method can reproduce the phase of propagating waves with less numerical error by using a high-order polynomial function. The restrictions on grid spacing are relaxed with a spatial grid, so the method reduces the number of grid points in complex structures. In this paper, we study the feasibility of the CIP scheme, applying a non-uniform and spatially interpolated grid to practical mask patterns. The number of grid points can increase for complex layouts and topological structures, since these structures require a dense grid to retain the fidelity of each design. We propose a spatial interpolation method based on the CIP method, analogous to time-domain interpolation, to reduce the number of grid points to be computed. Simulation results of the two meshing methods with spatial interpolation are shown.

  9. A flood map based DOI decoding method for block detector: a GATE simulation study.

    PubMed

    Shi, Han; Du, Dong; Su, Zhihong; Peng, Qiyu

    2014-01-01

    Positron Emission Tomography (PET) systems using detectors with Depth of Interaction (DOI) capabilities can achieve higher spatial resolution and better image quality than those without DOI. Until now, most DOI methods developed have not been cost-efficient for a whole-body PET system. In this paper, we present a DOI decoding method based on the flood map for a low-cost conventional block detector with four-PMT readout. Using this method, the DOI information can be extracted directly from the DOI-related crystal spot deformation in the flood map. GATE simulations are then carried out to validate the method, confirming a DOI sorting accuracy of 85.27%. We therefore conclude that this method has the potential to be applied in conventional detectors to achieve a reasonable DOI measurement without dramatically increasing the complexity and cost of an entire PET system. PMID:25227021

  10. Using simulations to evaluate Mantel-based methods for assessing landscape resistance to gene flow.

    PubMed

    Zeller, Katherine A; Creech, Tyler G; Millette, Katie L; Crowhurst, Rachel S; Long, Robert A; Wagner, Helene H; Balkenhol, Niko; Landguth, Erin L

    2016-06-01

    Mantel-based tests have been the primary analytical methods for understanding how landscape features influence observed spatial genetic structure. Simulation studies examining Mantel-based approaches have highlighted major challenges associated with the use of such tests and fueled debate on when the Mantel test is appropriate for landscape genetics studies. We aim to provide some clarity in this debate using spatially explicit, individual-based, genetic simulations to examine the effects of the following on the performance of Mantel-based methods: (1) landscape configuration, (2) spatial genetic nonequilibrium, (3) nonlinear relationships between genetic and cost distances, and (4) correlation among cost distances derived from competing resistance models. Under most conditions, Mantel-based methods performed poorly. Causal modeling identified the true model only 22% of the time. Using relative support and simple Mantel r values boosted performance to approximately 50%. Across all methods, performance increased when landscapes were more fragmented, spatial genetic equilibrium was reached, and the relationship between cost distance and genetic distance was linearized. Performance depended on cost distance correlations among resistance models rather than cell-wise resistance correlations. Given these results, we suggest that the use of Mantel tests with linearized relationships is appropriate for discriminating among resistance models that have cost distance correlations <0.85 with each other for causal modeling, or <0.95 for relative support or simple Mantel r. Because most alternative parameterizations of resistance for the same landscape variable will result in highly correlated cost distances, the use of Mantel test-based methods to fine-tune resistance values will often not be effective. PMID:27516868
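
    For readers unfamiliar with the basic building block of these methods, a compact simple Mantel test with a permutation p-value is sketched below on synthetic distance matrices; the point configuration and noise level are arbitrary assumptions.

    ```python
    # Simple Mantel test: correlation of two distance matrices with a
    # permutation null obtained by relabeling one matrix's rows and columns.
    import numpy as np

    def mantel(A, B, n_perm=999, seed=7):
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        iu = np.triu_indices(n, 1)              # use the upper triangle only
        a, b = A[iu], B[iu]
        r_obs = np.corrcoef(a, b)[0, 1]
        count = 0
        for _ in range(n_perm):
            p = rng.permutation(n)              # permute one matrix's labels
            count += np.corrcoef(A[np.ix_(p, p)][iu], b)[0, 1] >= r_obs
        return r_obs, (count + 1) / (n_perm + 1)

    # Toy check: "genetic distance" = cost distance + noise.
    rng = np.random.default_rng(7)
    pts = rng.random((25, 2))
    cost = np.linalg.norm(pts[:, None] - pts[None], axis=2)
    gen = cost + rng.normal(0.0, 0.1, cost.shape)
    gen = (gen + gen.T) / 2.0                   # keep the matrix symmetric
    r, p = mantel(cost, gen)
    print(f"simple Mantel r = {r:.3f}, p = {p:.3f}")
    ```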

  11. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    PubMed Central

    Pezo, Danilo; Soudry, Daniel; Orio, Patricio

    2014-01-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired in granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
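
    The MC-versus-DA comparison can be illustrated on a single population of two-state channels; the rates and channel count below are assumed values, and real Hodgkin-Huxley channels have more states than this sketch.

    ```python
    # Exact Markov-chain (Gillespie) simulation of N two-state channels vs. a
    # Langevin diffusion approximation for the open fraction.
    import numpy as np

    rng = np.random.default_rng(5)
    alpha, beta, N, T = 2.0, 1.0, 200, 200.0   # open/close rates [1/ms], count, ms

    # --- Gillespie / Markov chain ---
    t, n_open, acc, t_acc = 0.0, N // 2, 0.0, 0.0
    while t < T:
        rate_open, rate_close = alpha * (N - n_open), beta * n_open
        total = rate_open + rate_close
        dt = rng.exponential(1.0 / total)       # waiting time to the next event
        acc += n_open * min(dt, T - t)          # time-weighted state average
        t_acc += min(dt, T - t)
        t += dt
        n_open += 1 if rng.random() < rate_open / total else -1
    print("MC mean open fraction:", acc / t_acc / N)

    # --- Langevin diffusion approximation (Euler-Maruyama, clipped to [0,1]) ---
    dt, x, xs = 0.01, 0.5, []
    for _ in range(int(T / dt)):
        drift = alpha * (1.0 - x) - beta * x
        diff = np.sqrt((alpha * (1.0 - x) + beta * x) / N)
        x = x + drift * dt + diff * np.sqrt(dt) * rng.normal()
        x = min(max(x, 0.0), 1.0)               # crude bounding of the state
        xs.append(x)
    print("DA mean open fraction:", np.mean(xs),
          "(theory:", alpha / (alpha + beta), ")")
    ```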

  12. Simulations of Ground Motion in Southern California based upon the Spectral-Element Method

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Komatitsch, D.; Liu, Q.

    2003-12-01

    We use the spectral-element method to simulate ground motion generated by recent well-recorded small earthquakes in Southern California. Simulations are performed using a new sedimentary basin model that is constrained by hundreds of petroleum industry well logs and more than twenty thousand kilometers of seismic reflection profiles. The numerical simulations account for 3D variations of seismic wave speeds and density, topography and bathymetry, and attenuation. Simulations for several small recent events demonstrate that the combination of a detailed sedimentary basin model and an accurate numerical technique facilitates the simulation of ground motion at periods of 2 seconds and longer inside the Los Angeles basin and 6 seconds and longer elsewhere. Peak ground displacement, velocity and acceleration maps illustrate that significant amplification occurs in the basin. Centroid-Moment Tensor mechanisms are obtained based upon Pnl and surface waveforms and numerically calculated 3D Frechet derivatives. We use a combination of waveform and waveform-envelope misfit criteria, and facilitate pure double-couple or zero-trace moment-tensor inversions.

  13. Development and elaboration of numerical method for simulating gas–liquid–solid three-phase flows based on particle method

    NASA Astrophysics Data System (ADS)

    Takahashi, Ryohei; Mamori, Hiroya; Yamamoto, Makoto

    2016-02-01

    A numerical method for simulating gas-liquid-solid three-phase flows based on the moving particle semi-implicit (MPS) approach was developed in this study. Computational instability often occurs in multiphase flow simulations if the deformations of the free surfaces between different phases are large, among other reasons. To avoid this instability, this paper proposes an improved coupling procedure between different phases in which the physical quantities of particles in different phases are calculated independently. We performed numerical tests on two illustrative problems: a dam-break problem and a solid-sphere impingement problem. The former problem is a gas-liquid two-phase problem, and the latter is a gas-liquid-solid three-phase problem. The computational results agree reasonably well with the experimental results. Thus, we confirmed that the proposed MPS method reproduces the interaction between different phases without inducing numerical instability.

  14. Full wave simulation of lower hybrid waves in Maxwellian plasma based on the finite element method

    SciTech Connect

    Meneghini, O.; Shiraiwa, S.; Parker, R.

    2009-09-15

    A full wave simulation of the lower-hybrid (LH) wave based on the finite element method is presented. For the LH wave, the most important terms of the dielectric tensor are the cold plasma contribution and the electron Landau damping (ELD) term, which depends only on the component of the wave vector parallel to the background magnetic field. The nonlocal hot plasma ELD effect was expressed as a convolution integral along the magnetic field lines and the resultant integro-differential Helmholtz equation was solved iteratively. The LH wave propagation in a Maxwellian tokamak plasma based on the Alcator C experiment was simulated for electron temperatures in the range of 2.5-10 keV. Comparison with ray tracing simulations showed good agreement when the single pass damping is strong. The advantages of the new approach include a significant reduction of computational requirements compared to full wave spectral methods and seamless treatment of the core, the scrape off layer and the launcher regions.

  15. Aerodynamic flow simulation using a pressure-based method and a two-equation turbulence model

    NASA Astrophysics Data System (ADS)

    Lai, Y. G. J.; Przekwas, A. J.; So, R. M. C.

    1993-07-01

    In the past, most aerodynamic flow calculations were carried out with density-based numerical methods and zero-equation turbulence models. However, pressure-based methods and more advanced turbulence models have been routinely used in industry for many internal flow simulations and for incompressible flows. Unfortunately, their usefulness in calculating aerodynamic flows is still not well demonstrated and accepted. In this study, an advanced pressure-based numerical method and a recently proposed near-wall compressible two-equation turbulence model are used to calculate external aerodynamic flows. Several TVD-type schemes are extended to pressure-based method to better capture discontinuities such as shocks. Some improvements are proposed to accelerate the convergence of the numerical method. A compressible near-wall two-equation turbulence model is then implemented to calculate transonic turbulent flows over NACA 0012 and RAE 2822 airfoils with and without shocks. The calculated results are compared with wind tunnel data as well as with results obtained from the Baldwin-Lomax model. The performance of the two-equation turbulence model is evaluated and its merits or lack thereof are discussed.

  16. An efficient model for solving density driven groundwater flow problems based on the network simulation method

    NASA Astrophysics Data System (ADS)

    Soto Meca, A.; Alhama, F.; González Fernández, C. F.

    2007-06-01

    The Henry and Elder problems are once more numerically studied using an efficient model based on the Network Simulation Method, which takes advantage of the powerful algorithms implemented in modern circuit simulation software. The network model of the volume element, which is directly deduced from the finite-difference differential equations of the spatially discretized governing equations under the streamfunction formulation, is electrically connected to adjacent networks to form the whole model of the medium, to which the boundary conditions are added using adequate electrical devices. Coupling between equations is directly implemented in the model. Very few, simple rules are needed to design the model, which is run in a circuit simulation code to obtain the results with no added mathematical manipulations. Different versions of the Henry problem, as well as the Elder problem, are simulated and the solutions are successfully compared with the analytical and numerical solutions of other authors or codes. A grid convergence study for the Henry problem was also carried out to determine the grid size with negligible numerical dispersion, while a similar study was carried out for the Elder problem in order to compare the patterns of the solution with those of other authors. Computing times are relatively small for this kind of problem.

  17. Stray light analysis and suppression method of dynamic star simulator based on LCOS splicing technology

    NASA Astrophysics Data System (ADS)

    Meng, Yao; Zhang, Guo-yu

    2015-10-01

    A star simulator serves as ground calibration equipment for a star sensor, testing the sensor's parameters and performance. At present, when a dynamic star simulator based on LCOS splicing is identified by the star sensor, a major problem is the poor LCOS contrast. In this paper, we analyze the cause of LCOS stray light, namely the relation between the incident angle of the light and the contrast ratio, and establish the functional relationship between the angle and the irradiance of the stray light. Based on this relationship, we propose a scheme to control the incident angle. A popular approach is to use a compound parabolic concentrator (CPC); although in theory it can restrict the light to any desired angle, in practice it is usually used above +/-15° because of its length and manufacturing cost. We therefore place a telescopic system in front of the CPC, on the same principle as a laser beam expander. We simulate the exit-surface irradiance of the CPC with TracePro, and design the telescopic system in ZEMAX to correct the chromatic aberration. As a result, we obtain a collimating light source with a viewing angle of less than +/-5° and a uniform irradiation area greater than 20 mm × 20 mm.

  18. A Novel Antibody Humanization Method Based on Epitopes Scanning and Molecular Dynamics Simulation

    PubMed Central

    Zhao, Bin-Bin; Gong, Lu-Lu; Jin, Wen-Jing; Liu, Jing-Jun; Wang, Jing-Fei; Wang, Tian-Tian; Yuan, Xiao-Hui; He, You-Wen

    2013-01-01

    1-17-2 is a rat anti-human DEC-205 monoclonal antibody that induces internalization and delivers antigen to dendritic cells (DCs). The potential clinical application of this antibody is limited by its murine origin. Traditional humanization methods such as complementarity determining region (CDR) grafting often lead to decreased or even lost affinity. Here we have developed a novel antibody humanization method based on computer modeling and bioinformatics analysis. First, we used homology modeling technology to build a precise model of the Fab. A novel epitope scanning algorithm was designed to identify antigenic residues in the framework regions (FRs) that need to be mutated to their human counterparts in the humanization process. Virtual mutation and molecular dynamics (MD) simulation were then used to assess the conformational impact imposed by all the mutations. By comparing the root-mean-square deviations (RMSDs) of the CDRs, we found five key residues whose mutations would destroy the original conformation of the CDRs. These residues need to be back-mutated to rescue the antibody binding affinity. Finally, we constructed the antibodies in vitro and compared their binding affinities by flow cytometry and surface plasmon resonance (SPR) assay. The binding affinity of the refined humanized antibody was similar to that of the original rat antibody. Our results have established a novel method based on epitope scanning and MD simulation for antibody humanization. PMID:24278299

  19. Copula-based method for multisite monthly and daily streamflow simulation

    NASA Astrophysics Data System (ADS)

    Chen, Lu; Singh, Vijay P.; Guo, Shenglian; Zhou, Jianzhong; Zhang, Junhong

    2015-09-01

    Multisite stochastic simulation of streamflow sequences is needed for water resources planning and management. In this study, a new copula-based method is proposed for generating long-term multisite monthly and daily streamflow data. A multivariate copula, which is established based on bivariate copulas and conditional probability distributions, is employed to describe temporal dependences (single site) and spatial dependences (between sites). Monthly or daily streamflows at multiple sites are then generated by sampling from the conditional copula. Three tributaries of the Colorado River and the upper Yangtze River are selected to evaluate the proposed methodology. Results show that the generated data at both higher and lower time scales can capture the distribution properties of the single site and preserve the spatial correlation of streamflows at different locations. The main advantage of the method is that the trivariate copula can be established using three bivariate copulas and the model parameters can be easily estimated using the Kendall tau rank correlation coefficient, which makes it possible to generate daily streamflow data. The method provides a new tool for multisite stochastic simulation.
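
    A minimal copula illustration is sketched below: a bivariate Gaussian copula with assumed lognormal margins for two sites. The paper's trivariate construction from bivariate copulas and its conditional sampling scheme are not reproduced; all distributional parameters here are hypothetical.

    ```python
    # Generate spatially correlated flows at two sites via a Gaussian copula:
    # correlated normals -> uniforms -> each site's marginal quantile function.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    rho_spatial, n_months = 0.8, 1200            # assumed inter-site dependence

    cov = [[1.0, rho_spatial], [rho_spatial, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n_months)
    u = stats.norm.cdf(z)                         # copula samples in [0, 1]^2

    # Assumed lognormal marginal fits for the two sites (hypothetical values).
    q_site1 = stats.lognorm.ppf(u[:, 0], s=0.6, scale=100.0)
    q_site2 = stats.lognorm.ppf(u[:, 1], s=0.5, scale=250.0)

    tau, _ = stats.kendalltau(q_site1, q_site2)
    print(f"generated series Kendall tau = {tau:.2f} "
          f"(Gaussian-copula theory: {2.0 / np.pi * np.arcsin(rho_spatial):.2f})")
    ```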

  20. Copula-based method for Multisite Monthly and Daily Streamflow Simulation

    NASA Astrophysics Data System (ADS)

    Chen, L.; Dai, M.; Singh, V. P.; Guo, S.

    2014-12-01

    Multisite stochastic simulation of streamflow sequences is needed for water resources planning and management. In this study, a new copula-based method is proposed for generating long-term multisite monthly and daily streamflow data. A multivariate copula, which is established based on bivariate copulas and conditional probability distributions, is employed to describe temporal dependences (single site) and spatial dependences (between sites). Monthly or daily streamflows at multiple sites are then generated by sampling from the conditional copula. Three tributaries of the Colorado River and the upper Yangtze River are selected to evaluate the proposed methodology. Results show that the generated data at both higher and lower time scales can capture the distribution properties of the single site and preserve the spatial correlation of streamflows at different locations. The main advantage of the method is that the model parameters can be easily estimated using the Kendall tau rank correlation coefficient, which makes it possible to generate daily streamflow data. The method provides a new tool for multisite stochastic simulation.

  1. Broken wires diagnosis method numerical simulation based on smart cable structure

    NASA Astrophysics Data System (ADS)

    Li, Sheng; Zhou, Min; Yang, Yan

    2014-12-01

    A smart cable with embedded distributed fiber Bragg grating (FBG) sensors was chosen as the object of study for a new method of diagnosing broken wires in bridge cables. The diagnosis strategy is based on the cable force and the stress distribution state of the steel wires. By establishing bridge-cable and cable-steel-wire models, a sample database of broken-wire cases was simulated numerically. A method of characterizing the cable state pattern, representing both the degree and the location of broken wires inside a cable, was put forward. The training and prediction results obtained from the sample database with a back propagation (BP) neural network showed that the proposed broken-wire diagnosis method is feasible, extending broken-wire diagnosis research through the use of a smart cable that previously served only to measure cable force.

  2. Evaluation of FTIR-based analytical methods for the analysis of simulated wastes

    SciTech Connect

    Rebagay, T.V.; Cash, R.J.; Dodd, D.A.; Lockrem, L.L.; Meacham, J.E.; Winkelman, W.D.

    1994-09-30

    Three FTIR-based analytical methods with the potential to characterize simulated waste tank materials have been evaluated: (1) fiber optics, (2) modular transfer optics using light guides equipped with non-contact sampling peripherals, and (3) photoacoustic spectroscopy. Pertinent instrumentation and experimental procedures for each method are described. The results show that the near-infrared (NIR) region of the infrared spectrum is the region of choice for the measurement of moisture in waste simulants. Differentiation of the NIR spectrum, as a preprocessing step, improves the analytical result. Preliminary data indicate that prominent combination bands of water and the first overtone band of the ferrocyanide stretching vibration may be utilized to measure water and ferrocyanide species simultaneously. Both near-infrared and mid-infrared spectra must be collected, however, to measure ferrocyanide species unambiguously and accurately. For ease of sample handling and the potential for field or waste tank deployment, the FTIR fiber optic method is preferred over the other two methods. Modular transfer optics using light guides and photoacoustic spectroscopy may be used as backup systems and for validation of the fiber optic data.

  3. A simple numerical method for snowmelt simulation based on the equation of heat energy.

    PubMed

    Stojković, Milan; Jaćimović, Nenad

    2016-01-01

    This paper presents a one-dimensional numerical model for snowmelt/accumulation simulations based on the equation of heat energy. It is assumed that the snow column is homogeneous at the current time step; however, its characteristics, such as snow density and thermal conductivity, are treated as functions of time. The equation of heat energy for the snow column is solved using the implicit finite difference method. The incoming energy at the snow surface includes conduction, convection, radiation and raindrop energy. Along with the snowmelt process, the model includes snow accumulation. The Euler method is utilized for the numerical integration of the balance equation. The model's applicability is demonstrated at the Zlatibor meteorological station, located in the western region of Serbia at 1,028 meters above sea level (m.a.s.l.). Simulation results for snowmelt/accumulation suggest that the proposed model achieves better agreement with observed data than the temperature index method. The proposed method may be utilized as part of a deterministic hydrological model in order to improve short- and long-term predictions of possible flood events. PMID:27054726
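
    The core numerical step, an implicit finite-difference solve of heat conduction in the snow column, is sketched below. The material properties, the surface forcing, and the simple clamp standing in for phase change are all assumed simplifications of the paper's full energy balance.

    ```python
    # Backward-Euler 1D heat conduction in a snow column with a Dirichlet
    # surface condition and a zero-flux bottom, via a dense linear solve.
    import numpy as np

    nz, dz, dt = 20, 0.01, 600.0           # 20 cells of 1 cm, 10-minute steps
    rho, c, k = 250.0, 2100.0, 0.25        # assumed density, c_p, conductivity
    r = k / (rho * c) * dt / dz**2         # dimensionless diffusion number

    T = np.full(nz, -5.0)                  # initial temperature profile [deg C]
    A = (np.diag((1.0 + 2.0 * r) * np.ones(nz))
         + np.diag(-r * np.ones(nz - 1), 1)
         + np.diag(-r * np.ones(nz - 1), -1))
    A[0, :] = 0.0; A[0, 0] = 1.0           # Dirichlet condition at the surface
    A[-1, -2] = -2.0 * r                   # zero-flux (mirror) at the bottom

    for hour in range(48):
        T_surf = -2.0 + 6.0 * np.sin(2.0 * np.pi * hour / 24.0)  # daily forcing
        for _ in range(6):                 # six 10-minute implicit steps per hour
            b = T.copy()
            b[0] = T_surf
            T = np.linalg.solve(A, b)
        T[T > 0.0] = 0.0                   # energy above 0 deg C goes to melt
    print("temperatures in the top 5 cells after 48 h:", np.round(T[:5], 2))
    ```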

  4. IR imaging simulation and analysis for aeroengine exhaust system based on reverse Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Chen, Shiguo; Chen, Lihai; Mo, Dongla; Shi, Jingcheng

    2014-11-01

    The IR radiation characteristics of an aeroengine are an important basis for IR stealth design and anti-stealth detection of aircraft, and with the development of IR imaging sensor technology, the importance of aircraft IR stealth increases. An effort is presented to explore target IR radiation imaging simulation based on the Reverse Monte Carlo Method (RMCM) combined with commercial CFD software. The flow and IR radiation characteristics of an aeroengine exhaust system are investigated by developing a full-size geometry model based on the actual parameters, using a flow-IR integrated structured mesh, taking the engine performance parameters as the inlet boundary conditions of the mixer section, and constructing a numerical simulation model of the IR radiation characteristics of the engine exhaust system based on the RMCM. With the above models, the IR radiation characteristics of the aeroengine exhaust system are given, with a focus on IR spectral radiance imaging in the typical detection bands at an azimuth of 20°. The results show that: (1) at small azimuth angles, the IR radiation comes mainly from the center cone among all the hot parts; near an azimuth of 15°, the mixer makes the biggest radiation contribution, while the center cone, turbine and flame stabilizer contribute comparably; (2) the main radiating components and their spatial distribution differ between spectral bands, with CO2 absorbing and emitting strongly at 4.18, 4.33 and 4.45 microns, and H2O at 3.0 and 5.0 microns.

  5. Numerical method to compute optical conductivity based on pump-probe simulations

    NASA Astrophysics Data System (ADS)

    Shao, Can; Tohyama, Takami; Luo, Hong-Gang; Lu, Hantao

    2016-05-01

    A numerical method to calculate optical conductivity based on a pump-probe setup is presented. Its validity and limits are tested and demonstrated via concrete numerical simulations on the half-filled one-dimensional extended Hubbard model both in and out of equilibrium. By employing either a steplike or a Gaussian-like probing vector potential, it is found that in nonequilibrium, the method in the narrow-probe-pulse limit can be identified with variant types of linear-response theory, which, in equilibrium, produce identical results. The observation reveals the underlying probe-pulse dependence of the optical conductivity calculations in nonequilibrium, which may have applications in the theoretical analysis of ultrafast spectroscopy measurements.

  6. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

    A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
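
    A minimal sketch of the claimed message loop, with hypothetical agent and message names (the patent does not prescribe identifiers):

        # Agents respond to clock-tick, resources-received, and output-request
        # messages delivered by a single-processor message loop.
        class ProcessAgent:
            def __init__(self, name):
                self.name, self.inventory = name, 0

            def handle(self, event):
                if event == "clock_tick":
                    pass                          # advance internal process state
                elif event == "resources_received":
                    self.inventory += 1           # accept upstream material
                elif event == "request_output":
                    if self.inventory > 0:
                        self.inventory -= 1
                        return f"{self.name}: shipped 1 unit"
                return None

        agents = [ProcessAgent("cutting"), ProcessAgent("assembly")]
        for event in ["resources_received", "clock_tick", "request_output"]:
            for agent in agents:                  # the message loop
                result = agent.handle(event)
                if result:
                    print(result)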

  7. Simulation-Based Optimization for Surgery Scheduling in Operation Theatre Management Using Response Surface Method.

    PubMed

    Liang, Feng; Guo, Yuanyuan; Fung, Richard Y K

    2015-11-01

    The operation theatre is one of the most significant assets in a hospital, being both the greatest source of revenue and the largest cost unit. This paper focuses on surgery scheduling optimization, which is one of the most crucial tasks in operation theatre management. A combined scheduling policy composed of three simple scheduling rules is proposed to optimize operation theatre scheduling performance. Based on real-life scenarios, a simulation model of the surgery scheduling system is built. With two optimization objectives, the response surface method is adopted to search for the optimal weights of the simple rules in the combined scheduling policy. Moreover, the weight configuration can be revised to cope with dispatching dynamics according to real-time changes in the operation theatre. Finally, a performance comparison between the proposed combined scheduling policy and a tabu search algorithm indicates that the combined scheduling policy is capable of sequencing surgery appointments more efficiently. PMID:26385551

  8. Simulation and evaluation of tablet-coating burst based on finite element method.

    PubMed

    Yang, Yan; Li, Juan; Miao, Kong-Song; Shan, Wei-Guang; Tang, Lan; Yu, Hai-Ning

    2016-09-01

    The objective of this study was to simulate and evaluate the burst behavior of coated tablets. Three-dimensional finite element models of tablet coatings were established using the software ANSYS. The swelling pressure of the cores was measured by a self-made device and applied at the internal surface of the models. Mechanical properties of the polymer film were determined using a texture analyzer and applied as the material properties of the models. The resulting finite element models were validated by experimental data, and the validated models were used to assess the factors that influence burst behavior and to predict coating burst. The simulation results for coating burst and failure location matched the experimental data closely. It was found that internal swelling pressure, inside corner radius and corner thickness are the three main factors controlling the stress distribution and burst behavior. Based on the linear relationship between the internal pressure and the maximum principal stress on the coating, the burst pressure of coatings was calculated and used to predict the burst behavior. This study demonstrated that the burst behavior of coated tablets can be simulated and evaluated by the finite element method. PMID:26727401

  9. Spin tracking simulations in AGS based on ray-tracing methods - bare lattice, no snakes -

    SciTech Connect

    Meot, F.; Ahrens, L.; Gleen, J.; Huang, H.; Luccio, A.; MacKay, W. W.; Roser, T.; Tsoupas, N.

    2009-09-01

    This Note reports on the first simulations of spin dynamics in the AGS using the ray-tracing code Zgoubi. It includes lattice analysis, comparisons with MAD, DA tracking, numerical calculation of depolarizing resonance strengths and comparisons with analytical models, etc. It also includes details on the setting up of Zgoubi input data files and on the various numerical methods of concern in, and available from, Zgoubi. Simulations of the crossing and neighboring of spin resonances in the AGS ring, bare lattice, without snakes, have been performed in order to assess the capabilities of Zgoubi in that matter, and are reported here. This yields a rather long document; the two main reasons for that are, on the one hand, the desire for an extended investigation of the energy span, and on the other hand, a thorough comparison of Zgoubi results with analytical models such as the 'thin lens' approximation, the weak resonance approximation, and the static case. Section 2 details the working hypotheses: AGS lattice data, formulae used for deriving various resonance-related quantities from the ray-tracing-based 'numerical experiments', etc. Section 3 gives inventories of the intrinsic and imperfection resonances together with, in a number of cases, the strengths derived from the ray-tracing. Section 4 gives the details of the numerical simulations of resonance crossing, including the behavior of various quantities (closed orbit, synchrotron motion, etc.) aimed at verifying that the conditions of particle and spin motion are correct. In a similar manner, Section 5 gives the details of the numerical simulations of spin motion in the static case: fixed energy in the neighborhood of a resonance. In Section 6, weak resonances are explored and Zgoubi results are compared with the Fresnel integrals model. Section 7 shows the computation of the n vector in the AGS lattice and the tunings considered. Many details on the numerical conditions, such as data files, are given in the Appendix.

  10. Numerical Simulation of Drosophila Flight Based on the Arbitrary Lagrangian-Eulerian Method

    NASA Astrophysics Data System (ADS)

    Erzincanli, Belkis; Sahin, Mehmet

    2012-11-01

    A parallel unstructured finite volume algorithm based on the Arbitrary Lagrangian-Eulerian (ALE) method has been developed in order to investigate the wake structure around a pair of flapping Drosophila wings. The numerical method uses a side-centered arrangement of the primitive variables that does not require any ad-hoc modifications to enhance pressure coupling. A radial basis function (RBF) interpolation method is also implemented in order to achieve large mesh deformations. For the parallel solution of the resulting large-scale algebraic equations, a matrix factorization is introduced, similar to that of the projection method, for the whole coupled system, and a two-cycle BoomerAMG solver from the HYPRE library, accessed through the PETSc library, is used for the scaled discrete Laplacian. The present numerical algorithm is initially validated for the flow past an oscillating circular cylinder in a channel and the flow induced by an oscillating sphere in a cubic cavity. The numerical algorithm is then applied to the simulation of the flow field around a pair of flapping Drosophila wings in hovering flight. The time variation of the near-wake structure is shown along with the aerodynamic loads and particle traces. The authors acknowledge financial support from the Turkish National Scientific and Technical Research Council (TUBITAK) through project number 111M332. The authors would like to thank Michael Dickinson and Michael Elzinga for providing the experimental data.
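
    As a sketch of the mesh-deformation role played by RBF interpolation, the snippet below propagates prescribed boundary displacements to interior nodes using SciPy's RBFInterpolator; the geometry, kernel, and displacement field are illustrative, and the solver's own RBF implementation will differ:

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(1)
        boundary = rng.uniform(-1, 1, size=(40, 2))   # wing-surface node positions (toy)
        disp = 0.1 * np.sin(3 * boundary)             # prescribed boundary displacements

        interior = rng.uniform(-1, 1, size=(200, 2))  # interior mesh nodes
        rbf = RBFInterpolator(boundary, disp, kernel="thin_plate_spline")
        interior_new = interior + rbf(interior)       # smoothly deformed mesh
        print("max interior displacement:", np.abs(rbf(interior)).max())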

  11. Comparison of three-dimensional Poisson solution methods for particle-based simulation and inhomogeneous dielectrics

    NASA Astrophysics Data System (ADS)

    Berti, Claudio; Gillespie, Dirk; Bardhan, Jaydeep P.; Eisenberg, Robert S.; Fiegna, Claudio

    2012-07-01

    Particle-based simulation represents a powerful approach to modeling physical systems in electronics, molecular biology, and chemical physics. Accounting for the interactions occurring among charged particles requires an accurate and efficient solution of Poisson's equation. For a system of discrete charges with inhomogeneous dielectrics, i.e., a system with discontinuities in the permittivity, the boundary element method (BEM) is frequently adopted. It provides the solution of Poisson's equation, accounting for polarization effects due to the discontinuity in the permittivity by computing the induced charges at the dielectric boundaries. In this framework, the total electrostatic potential is then found by superimposing the elemental contributions from both source and induced charges. In this paper, we present a comparison between two BEMs to solve a boundary-integral formulation of Poisson's equation, with emphasis on the BEMs' suitability for particle-based simulations in terms of solution accuracy and computation speed. The two approaches are the collocation and qualocation methods. Collocation is implemented following the induced-charge computation method of D. Boda [J. Chem. Phys. 125, 034901 (2006)]. The qualocation method is described by J. Tausch [IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 20, 1398 (2001)]. These approaches are studied using both flat and curved surface elements to discretize the dielectric boundary, using two challenging test cases: a dielectric sphere embedded in a different dielectric medium and a toy model of an ion channel. Earlier comparisons of the two BEM approaches did not address curved surface elements or semiatomistic models of ion channels. Our results support the earlier findings that for flat-element calculations, qualocation is always significantly more accurate than collocation. On the other hand, when the dielectric boundary

  12. Method of simulation and visualization of FDG metabolism based on VHP image

    NASA Astrophysics Data System (ADS)

    Cui, Yunfeng; Bai, Jing

    2005-04-01

    FDG ([18F] 2-fluoro-2-deoxy-D-glucose) is the typical tracer used in clinical PET (positron emission tomography) studies, and FDG-PET is an important imaging tool for the early diagnosis and treatment of malignant tumors and functional disease. The main purpose of this work is to propose a method that represents FDG metabolism in the human body through the dynamic simulation and visualization of the 18F distribution process, based on the segmented VHP (Visible Human Project) image dataset. First, the plasma time-activity curve (PTAC) and the tissue time-activity curves (TTACs) are obtained from previous studies and the literature. According to the obtained PTAC and TTACs, corresponding values are assigned to the segmented VHP image; a set of dynamic images is thus derived to show the 18F distribution in the tissues of interest for the predetermined sampling schedule. Finally, the simulated FDG distribution images are visualized in 3D and 2D formats, respectively, incorporating principal interaction functions. Compared with original PET images, our visualization results present higher resolution, owing to the high resolution of the VHP image data, and show the 18F distribution process dynamically. The results of this work can be used in education and related research, as well as a tool for PET operators to design their PET experiment programs.
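
    A minimal sketch of how one dynamic frame can be formed by assigning each segmentation label its time-activity value; the labels and activities below are invented stand-ins for the VHP segmentation and the published TTACs:

        import numpy as np

        seg = np.zeros((64, 64), dtype=int)   # toy segmented slice: 0=bg, 1=liver, 2=tumor
        seg[10:30, 10:30], seg[40:50, 40:50] = 1, 2

        ttac = {0: 0.0, 1: 3.2, 2: 8.7}       # assumed activities [kBq/ml] at sample time t_k
        frame = np.vectorize(ttac.get)(seg)   # one simulated 18F distribution frame
        print("tumor-to-liver ratio:", frame[45, 45] / frame[20, 20])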

  13. A new method to extract stable feature points based on self-generated simulation images

    NASA Astrophysics Data System (ADS)

    Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen

    2015-10-01

    Recently, image processing has received a lot of attention in fields such as photogrammetry and medical image processing. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem, and feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge-response points; the disadvantage is the need to set thresholds manually. The main idea of this paper is to obtain stable extrema by a machine learning algorithm. First, we use the ASIFT approach, coupled with lighting changes and blur, to generate multi-view simulated images, which make up the set of simulated images of the original image. Because of the way the simulated images are generated, the affine transformation of each generated image is also known; compared with the traditional matching process, which relies on the unstable RANSAC method to recover the affine transformation, this approach is more stable and accurate. Second, we calculate the stability value of each feature point from the image set and its affine transformations, and we extract feature properties of the feature points, such as DoG features, scales, and edge-point density. These form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, training with Rank-SVM yields a weight vector. In use, based on the feature properties of each point and the weight vector obtained from training, we compute a sort value for each feature point, which reflects its stability, and sort the feature points accordingly. In conclusion, we tested our algorithm against the original SIFT detectors; under different view changes, blurs, and illuminations, the experimental results show that our algorithm is more efficient.

  14. Fluorescence volume imaging with an axicon: simulation study based on scalar diffraction method.

    PubMed

    Zheng, Juanjuan; Yang, Yanlong; Lei, Ming; Yao, Baoli; Gao, Peng; Ye, Tong

    2012-10-20

    In a two-photon excitation fluorescence volume imaging (TPFVI) system, an axicon is used to generate a Bessel beam and, at the same time, to collect the generated fluorescence to achieve a large depth of field. A slice-by-slice diffraction propagation model in the framework of the angular spectrum method is proposed to simulate the whole imaging process of TPFVI. The simulation reveals that the Bessel beam can penetrate deep into scattering media due to its self-reconstruction ability. The simulation also demonstrates that TPFVI can image a volume of interest in a single raster scan. Two-photon excitation is crucial to eliminate the signals generated by the side lobes of the Bessel beam; the unwanted signals may be further suppressed by placing a spatial filter in front of the detector. The simulation method will guide system design in improving the performance of a TPFVI system. PMID:23089777
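
    A minimal sketch of a single slice-to-slice step of the angular spectrum method, the propagation kernel underlying the model described above; the grid, wavelength, and step size are illustrative:

        import numpy as np

        n, dx, wl, dz = 256, 0.5e-6, 0.8e-6, 1.0e-6   # grid, pitch, wavelength, step [m]
        x = (np.arange(n) - n // 2) * dx
        X, Y = np.meshgrid(x, x)
        field = np.exp(-(X**2 + Y**2) / (2e-6) ** 2)  # toy input beam

        fx = np.fft.fftfreq(n, d=dx)                  # spatial frequencies [1/m]
        FX, FY = np.meshgrid(fx, fx)
        kz2 = (1.0 / wl) ** 2 - FX**2 - FY**2         # squared axial spatial frequency
        H = np.exp(2j * np.pi * dz * np.sqrt(np.maximum(kz2, 0.0)))
        H[kz2 < 0] = 0.0                              # drop evanescent components

        field_next = np.fft.ifft2(np.fft.fft2(field) * H)   # field one slice further
        print("on-axis intensity after one step:", abs(field_next[n // 2, n // 2]) ** 2)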

  15. [Method for environmental management in paper industry based on pollution control technology simulation].

    PubMed

    Zhang, Xue-Ying; Wen, Zong-Guo

    2014-11-01

    To evaluate the reduction potential of industrial water pollutant emissions and to study the application of technology simulation in pollutant control and environmental management, an Industrial Reduction Potential Analysis and Environment Management (IRPAEM) model was developed based on a coupling of "material-process-technology-product". The model integrated bottom-up modeling and scenario analysis, and was applied to China's paper industry. Results showed that under the CM scenario, the reduction potentials of wastewater, COD and ammonia nitrogen would reach 7×10^8 t, 39×10^4 t and 0.3×10^4 t, respectively, in 2015, and 13.8×10^8 t, 56×10^4 t and 0.5×10^4 t, respectively, in 2020. Strengthening end treatment would still be the key means of reducing emissions during 2010-2020, while the reduction effect of structural adjustment would be more obvious during 2015-2020. Pollution production could basically reach the domestic or international advanced level of cleaner production in 2015 and 2020; wastewater and ammonia nitrogen would basically meet the emission standards in 2015 and 2020, while COD would not. PMID:25639122

  16. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method using Bayesian analysis to classify time series data from an international emissions trading market produced by agent-based simulation, and compares it with a discrete Fourier transform analysis. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods revealed the following: (1) the classification methods express the time series data as distances in the mapping, which are easier to understand and draw inferences from than the raw time series; (2) the methods can analyze uncertain time series data obtained via agent-based simulation, including both stationary and non-stationary processes, using these distances; and (3) the Bayesian method can distinguish a 1% difference in the agents' emission reduction targets.

  17. Canopy BRF simulation of forest with different crown shape and height in larger scale based on Radiosity method

    NASA Astrophysics Data System (ADS)

    Song, Jinling; Qu, Yonghua; Wang, Jindi; Wan, Huawei; Liu, Xiaoqing

    2007-06-01

    The Radiosity method is based on computer simulation of the 3D structure of real vegetation, such as leaves, branches and stems, which are composed of many facets. Using this method we can simulate the canopy reflectance and its bidirectional distribution in the visible and NIR regions. But as vegetation becomes more complex, more facets are needed to compose it, so large memory and long computation times for the view factors are required; these are the bottlenecks of using the Radiosity method to calculate the canopy BRF of larger-scale vegetation scenes. We derived a new method to solve this problem; the main idea is to abstract the vegetation crown shapes and simplify their structures, which lessens the number of facets. The facets are given optical properties according to the reflectance, transmission and absorption of the real-structure canopy. Based on the above work, we can simulate the canopy BRF of mixed scenes with different vegetation species at large scale. In this study, taking broadleaf trees as an example and based on their structural characteristics, we abstracted their crowns as ellipsoid shells and simulated the canopy BRF in the visible and NIR regions for a large-scale scene with ellipsoids of different crown shapes and heights. From this study we conclude that LAI, LAD, the gap probability, and the sunlit and shaded surfaces are the most important parameters for simulating the simplified vegetation canopy BRF, and that the Radiosity method can supply canopy BRF data under various conditions for our research.
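
    At the core of the method is the radiosity system B = E + diag(rho) F B, solved once the view factors F are known. A toy sketch with invented facet values:

        import numpy as np

        n = 4                                     # toy scene with 4 facets
        rho = np.array([0.45, 0.45, 0.05, 0.05])  # leaf vs. soil reflectance (assumed)
        E = np.array([1.0, 0.8, 0.2, 0.1])        # direct solar irradiance on facets
        F = np.full((n, n), 1.0 / (n - 1))        # placeholder view-factor matrix
        np.fill_diagonal(F, 0.0)                  # a facet does not see itself

        # Solve (I - diag(rho) F) B = E for the facet radiosities.
        B = np.linalg.solve(np.eye(n) - np.diag(rho) @ F, E)
        print("facet radiosities:", B)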

  18. Transonic inviscid/turbulent airfoil flow simulations using a pressure based method with high order schemes

    NASA Astrophysics Data System (ADS)

    Zhou, Gang; Davidson, Lars; Olsson, Erik

    This paper presents computations of transonic aerodynamic flows using a pressure-based Euler/Navier-Stokes solver. In this work, emphasis is placed on the implementation of higher-order schemes such as QUICK, LUDS and MUSCL, and a new scheme, CHARM, is proposed for the convection approximation. Inviscid flow simulations are carried out for the NACA 0012 airfoil; the CHARM scheme gives better resolution for this inviscid case. Turbulent flow computations are carried out for the RAE 2822 airfoil, and good results were obtained using the QUICK scheme for the mean motion equations combined with the MUSCL scheme for the k and ɛ equations. No unphysical oscillations were observed. The results also show that the second-order and third-order schemes yielded comparable accuracy when judged against the experimental data.

  19. A New Hybrid Viscoelastic Soft Tissue Model based on Meshless Method for Haptic Surgical Simulation

    PubMed Central

    Bao, Yidong; Wu, Dongmei; Yan, Zhiyuan; Du, Zhijiang

    2013-01-01

    This paper proposes a hybrid soft tissue model that consists of a multilayer structure and many spheres for a meshless surgical simulation system. To improve the accuracy of the model, tension is added to the three-parameter viscoelastic structure that connects two spheres. Driven by a haptic device, the three-parameter viscoelastic model (TPM) produces accurate deformation and also has good stress-strain, stress relaxation and creep properties. Stress relaxation and creep formulas have been obtained by mathematical derivation. Compared with the experimental results on real pig liver reported by Evren et al. and Amy et al., the stress-strain, stress relaxation and creep curves of the TPM are close to the experimental data of the real liver. Simulation results show that the TPM has good real-time performance, stability and accuracy. PMID:24339837
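
    A sketch of the closed-form relaxation and creep responses of a standard three-parameter (standard linear solid) model: a spring E0 in parallel with a Maxwell arm (spring E1 plus dashpot eta). The paper's added tension term is not reproduced, and the moduli below are illustrative rather than fitted liver values:

        import numpy as np

        E0, E1, eta = 2.0e3, 5.0e3, 1.0e4     # [Pa], [Pa], [Pa*s] (assumed)
        t = np.linspace(0.0, 10.0, 6)         # [s]

        tau_r = eta / E1                      # relaxation time
        relax_modulus = E0 + E1 * np.exp(-t / tau_r)        # E(t) under a step strain

        tau_c = eta * (E0 + E1) / (E0 * E1)   # retardation time
        creep_compliance = 1 / E0 - (1 / E0 - 1 / (E0 + E1)) * np.exp(-t / tau_c)

        print("E(t):", relax_modulus)         # decays from E0+E1 toward E0
        print("J(t):", creep_compliance)      # grows from 1/(E0+E1) toward 1/E0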

  20. Two methods for transmission line simulation model creation based on time domain measurements

    NASA Astrophysics Data System (ADS)

    Rinas, D.; Frei, S.

    2011-07-01

    The emission from transmission lines plays an important role in the electromagnetic compatibility of automotive electronic systems. In the frequency range below 200 MHz, radiation from cables is often the dominant emission factor; at higher frequencies, radiation from PCBs and their housings becomes more relevant, with the conducting traces as the main sources of this emission. The established field measurement methods according to CISPR 25 for evaluating emissions suffer from the need to use large anechoic chambers; furthermore, the measurement data cannot be used for simulation model creation in order to compute the overall fields radiated from a car. In this paper, a method to determine the far fields and a simulation model of radiating transmission lines, especially cable bundles and conducting traces on planar structures, is proposed. The method measures the electromagnetic near field above the test object. Measurements are done in the time domain in order to obtain phase information and to reduce measurement time. On the basis of the near-field data, equivalent-source identification can be done. By considering correlations between sources along each conductive structure in the model creation process, the model accuracy increases and computational costs can be reduced.

  1. Simulation of the electrode shape change in electrochemical machining based on the level set method

    NASA Astrophysics Data System (ADS)

    Topa, V.; Purcar, M.; Avram, A.; Munteanu, C.; Chereches, R.; Grindei, L.

    2012-04-01

    This paper proposes a generally applicable numerical algorithm for simulating two-dimensional electrode shape changes during electrochemical machining processes. The computational model consists of two coupled problems: an electrode shape-change-rate analysis and a moving boundary problem. The innovative aspect is that the workpiece shape is computed over a number of predefined time steps by convecting its surface with a velocity proportional to, and in the direction of, the local electrode shape change rate. An example of the electrochemical machining of a slot in a stainless steel plate is presented to demonstrate the strengths of the proposed method.
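
    A minimal sketch of one explicit level set update, phi_t + V |grad phi| = 0, with a first-order Godunov upwind gradient (assuming a non-negative dissolution speed V, as for anodic metal removal); the geometry and rates are illustrative:

        import numpy as np

        n, h, dt = 64, 1.0 / 64, 0.2 / 64
        x = np.linspace(0.0, 1.0, n)
        X, Y = np.meshgrid(x, x)
        phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.25  # signed distance to a circle
        V = np.ones_like(phi)                                   # uniform dissolution rate

        # One-sided differences for the Godunov upwind gradient (V >= 0 case);
        # np.roll wraps at the edges, which is acceptable for this sketch.
        dxm = (phi - np.roll(phi, 1, axis=1)) / h
        dxp = (np.roll(phi, -1, axis=1) - phi) / h
        dym = (phi - np.roll(phi, 1, axis=0)) / h
        dyp = (np.roll(phi, -1, axis=0) - phi) / h
        grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2
                       + np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)

        phi -= dt * V * grad                  # the machined boundary moves outward
        print("interface radius grew by about", dt)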

  2. Full wave simulation of waves in ECRIS plasmas based on the finite element method

    SciTech Connect

    Torrisi, G.; Mascali, D.; Neri, L.; Castro, G.; Patti, G.; Celona, L.; Gammino, S.; Ciavola, G.; Di Donato, L.; Sorbello, G.; Isernia, T.

    2014-02-12

    This paper describes the modeling and full wave numerical simulation of electromagnetic wave propagation and absorption in an anisotropic magnetized plasma filling the resonant cavity of an electron cyclotron resonance ion source (ECRIS). The model assumes inhomogeneous, dispersive and tensorial constitutive relations. Maxwell's equations are solved by the finite element method (FEM), using the COMSOL Multiphysics® suite. All the relevant details have been considered in the model, including the non-uniform external magnetostatic field used for plasma confinement and the local electron density profile, resulting in a fully 3D non-uniform magnetized-plasma complex dielectric tensor. The more accurate plasma simulations clearly show the importance of cavity effects on wave propagation and the effects of a resonant surface. These studies are the pillars of improved ECRIS plasma modeling, which is mandatory to optimize the ion source output (beam intensity distribution and charge state, especially). Any new project concerning advanced ECRIS design will benefit from adequate modeling with self-consistent wave-absorption simulations.
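
    As a sketch of the tensorial constitutive relation such a solver needs, the snippet below evaluates the cold-plasma (Stix) dielectric tensor for electrons with the magnetostatic field along z; sign conventions for the off-diagonal term vary with the charge convention, and all numbers are illustrative:

        import numpy as np

        def stix_tensor(omega, omega_pe, omega_ce):
            """Cold-plasma dielectric tensor, electrons only, B parallel to z."""
            S = 1 - omega_pe**2 / (omega**2 - omega_ce**2)
            D = omega_ce * omega_pe**2 / (omega * (omega**2 - omega_ce**2))
            P = 1 - omega_pe**2 / omega**2
            return np.array([[S, -1j * D, 0],
                             [1j * D, S, 0],
                             [0, 0, P]])

        w = 2 * np.pi * 14.5e9   # a typical ECR heating frequency (assumed)
        print(stix_tensor(w, omega_pe=0.6 * w, omega_ce=0.9 * w))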

  3. Monte Carlo method based radiative transfer simulation of stochastic open forest generated by circle packing application

    NASA Astrophysics Data System (ADS)

    Jin, Shengye; Tamura, Masayuki

    2013-10-01

    The Monte Carlo Ray Tracing (MCRT) method is a versatile tool for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, such as a forest area. Due to its robustness to alterations in the complexity of the 3-D scene, the MCRT method is also employed for simulating the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling within vegetation, one basic step is setting up the canopy scene. 3-D scanning can represent the canopy structure as accurately as possible, but it is time consuming. Botanical growth functions can model single-tree growth but cannot express the interaction among trees. The L-system is also a functionally controlled tree-growth simulation model, but it costs large computing memory, and it only models the current tree pattern rather than tree growth while the radiative transfer regime is simulated. Therefore, it is much more constructive to use regular solids such as ellipsoids, cones and cylinders to represent single canopies. Considering the allelopathy phenomenon seen in some open-forest optical images, each tree repels other trees within its own 'domain'; based on this assumption, a stochastic circle-packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the tree count (N) of the 3-D scene are declared first, similar to a random open-forest image. Accordingly, we randomly generate each canopy radius (rc), then set the circle center coordinates in the XY-plane while keeping the circles separate from each other via the circle-packing algorithm. To model each individual tree, we employ Ishikawa's regressive tree-growth model to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
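
    A minimal sketch of the stochastic circle-packing step under the stated assumptions (a declared tree count, random crown radii, rejection of overlapping crowns); the parameter values are invented:

        import numpy as np

        rng = np.random.default_rng(42)
        target_n, scene = 30, 100.0          # tree count N, scene edge length [m]
        placed = []                          # accepted (x, y, rc) triples

        while len(placed) < target_n:
            rc = rng.uniform(1.5, 4.0)       # random canopy radius rc [m]
            x, y = rng.uniform(rc, scene - rc, size=2)
            # keep the crown only if it stays disjoint from all placed crowns
            if all((x - px)**2 + (y - py)**2 >= (rc + pr)**2 for px, py, pr in placed):
                placed.append((x, y, rc))

        cover = sum(np.pi * r**2 for _, _, r in placed) / scene**2
        print(f"placed {len(placed)} trees, canopy coverage = {cover:.1%}")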

  4. Scientific bases, methods, and results of mathematical simulation and prediction of structure and behavior of petroleum geology systems

    SciTech Connect

    Buryakovsky, L.A. )

    1992-07-01

    This paper reports on the systems approach to geology, which is both a sophisticated ideology and a scientific method for the investigation of very complicated geological systems. As applied to petroleum geology, it includes the methodological base and technology of mathematical simulation used for modeling geological systems: systems that have previously been investigated and estimated from experimental data and/or field studies. Because geological systems develop in time, it is very important to simulate them as dynamic systems. The main tasks in the systems approach to petroleum geology are the numerical simulation of the physical and reservoir properties of rocks, pore (geofluid) pressure in reservoir beds, and hydrocarbon resources. The results of numerical simulation are used for predicting the structure and behavior of petroleum geology systems in both studied and uninvestigated areas.

  5. Estimation of intraoperative blood flow during liver RF ablation using a finite element method-based biomechanical simulation.

    PubMed

    Watanabe, Hiroki; Yamazaki, Nozomu; Kobayashi, Yo; Miyashita, Tomoyuki; Ohdaira, Takeshi; Hashizume, Makoto; Fujie, Masakatsu G

    2011-01-01

    Radiofrequency ablation is increasingly being used for liver cancer because it is a minimally invasive treatment method. However, it is difficult for operators to precisely control the formation of coagulation zones because of the cooling effect of capillary vessels. To overcome this limitation, we have proposed a model-based robotic ablation system using a real-time numerical simulation to analyze temperature distributions in the target organ. This robot can determine the adequate amount of electric power to supply to the organ based on real-time temperature information, reflecting the cooling effect, provided by the simulator. The objective of this study was to develop a method to estimate the intraoperative rate of blood flow in the target organ in order to determine the temperature distribution. In this paper, we propose a simulation-based method to estimate the rate of blood flow, and we performed an in vitro study to validate the proposed method by estimating the rate of blood flow in a hog liver. The experimental results revealed that the proposed method can be used to estimate the rate of blood flow in an organ. PMID:22256059
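
    The cooling effect that such a simulator must capture is conventionally represented by a perfusion sink term in a bioheat equation. As an illustration of the class of model involved (an assumption here, not the paper's exact formulation), a Pennes-type balance reads:

        \rho c \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + \rho_b c_b \omega_b (T_a - T) + Q_{rf}

    where \rho c is the volumetric heat capacity of tissue, k its thermal conductivity, \rho_b c_b the volumetric heat capacity of blood, \omega_b the perfusion (blood flow) rate, T_a the arterial temperature, and Q_{rf} the radiofrequency heat source. Estimating the intraoperative blood flow then amounts to choosing \omega_b so that the simulated temperatures match the measured ones.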

  6. Qualitative Simulation of Photon Transport in Free Space Based on Monte Carlo Method and Its Parallel Implementation

    PubMed Central

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Bin; Wang, Lin; Peng, Kuan; Liang, Jimin; Tian, Jie

    2010-01-01

    During the past decade, the Monte Carlo method has found wide application in optical imaging for simulating the photon transport process inside tissues. However, this method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, consisting of the simulation of photon transport both in tissues and in free space. Specifically, lens-system simplification theory is utilized to model the camera lens of the optical imaging system, and the Monte Carlo method is employed to describe the energy transfer from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered, in order to establish the correspondence between points on the tissue surface and on the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and effective. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results. PMID:20689705

  7. Incompressible SPH method based on Rankine source solution for violent water wave simulation

    NASA Astrophysics Data System (ADS)

    Zheng, X.; Ma, Q. W.; Duan, W. Y.

    2014-11-01

    With wide applications, the smoothed particle hydrodynamics method (abbreviated as SPH) has become an important numerical tool for solving complex flows, in particular those with a rapidly moving free surface. For such problems, incompressible Smoothed Particle Hydrodynamics (ISPH) has been shown by many papers in the literature to yield better and more stable pressure time histories than traditional SPH. However, the existing ISPH method directly approximates the second-order derivatives of the functions to be solved by using the Poisson equation. The order of accuracy of the method becomes low, especially when the particles are distributed in a disorderly manner, which generally happens when modelling violent water waves. This paper introduces a new formulation using the Rankine source solution. In the new approach to ISPH, the Poisson equation is first transformed into another form that does not include any derivative of the functions to be solved and, as a result, does not need numerical approximation of derivatives. The advantage of this approach is obvious, potentially leading to a more robust numerical method. The newly formulated method is tested by simulating various water waves, and its convergence behaviour is studied numerically in this paper. Its results are compared with experimental data in some cases, and reasonably good agreement is achieved. More importantly, the numerical results clearly show that the newly developed method needs fewer particles, and therefore lower computational costs, to achieve a similar level of accuracy, or produces more accurate results with the same number of particles, compared with the traditional SPH and the existing ISPH when applied to modelling water waves.

  8. Waveform-based simulated annealing of crosshole transmission data: a semi-global method for estimating seismic anisotropy

    NASA Astrophysics Data System (ADS)

    Afanasiev, Michael V.; Pratt, R. Gerhard; Kamei, Rie; McDowell, Glenn

    2014-12-01

    We successfully apply the semi-global inverse method of simulated annealing to determine the best-fitting 1-D anisotropy model for use in acoustic frequency domain waveform tomography. Our forward problem is based on a numerical solution of the frequency domain acoustic wave equation, and we minimize wavefield phase residuals through random perturbations to a 1-D vertically varying anisotropy profile. Both real and synthetic examples are presented in order to demonstrate and validate the approach. For the real data example, we processed and inverted a cross-borehole data set acquired by Vale Technology Development (Canada) Ltd. in the Eastern Deeps deposit, located in Voisey's Bay, Labrador, Canada. The inversion workflow comprises the full suite of acquisition, data processing, starting model building through traveltime tomography, simulated annealing and finally waveform tomography. Waveform tomography is a high resolution method that requires an accurate starting model. A cycle-skipping issue observed in our initial starting model was hypothesized to be due to an erroneous anisotropy model from traveltime tomography. This motivated the use of simulated annealing as a semi-global method for anisotropy estimation. We initially tested the simulated annealing approach on a synthetic data set based on the Voisey's Bay environment; these tests were successful and led to the application of the simulated annealing approach to the real data set. Similar behaviour was observed in the anisotropy models obtained through traveltime tomography in both the real and synthetic data sets, where simulated annealing produced an anisotropy model which solved the cycle-skipping issue. In the real data example, simulated annealing led to a final model that compares well with the velocities independently estimated from borehole logs. By comparing the calculated ray paths and wave paths, we attributed the failure of anisotropic traveltime tomography to the breakdown of the ray

  9. Simulation of the reduction process of solid oxide fuel cell composite anode based on phase field method

    NASA Astrophysics Data System (ADS)

    Jiao, Zhenjun; Shikazono, Naoki

    2016-02-01

    It is known that the reduction process influences the initial performance and durability of the nickel-yttria-stabilized zirconia composite anode of the solid oxide fuel cell. In the present study, the reduction process of the nickel-yttria-stabilized zirconia composite anode is simulated based on the phase field method. A three-dimensional reconstructed microstructure of the nickel oxide-yttria-stabilized zirconia composite, obtained by focused ion beam-scanning electron microscopy, is used as the initial microstructure for the simulation. Both the reduction of nickel oxide and nickel sintering mechanisms are considered in the model. The reduction rates of nickel oxide at different interfaces are defined based on literature data. Simulation results are qualitatively compared to experimental anode microstructures at different reduction temperatures.
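
    A minimal sketch of the phase-field machinery involved: one explicit Allen-Cahn update for a single nonconserved order parameter with a double-well potential. The paper's multi-phase NiO/Ni/YSZ model is considerably richer; everything below is illustrative:

        import numpy as np

        n, h, dt, M, kappa = 64, 1.0, 0.05, 1.0, 1.0
        rng = np.random.default_rng(3)
        phi = rng.uniform(0.4, 0.6, size=(n, n))   # 0 = pore/YSZ, 1 = Ni (assumed roles)

        def laplacian(f):
            return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                    np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / h**2

        for _ in range(100):
            # derivative of the double well f(phi) = phi^2 (1 - phi)^2
            dfdphi = 2 * phi * (1 - phi) * (1 - 2 * phi)
            phi += -dt * M * (dfdphi - kappa * laplacian(phi))   # Allen-Cahn step

        print("Ni phase fraction:", (phi > 0.5).mean())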

  10. The Corrected Simulation Method of Critical Heat Flux Prediction for Water-Cooled Divertor Based on Euler Homogeneous Model

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyang; Han, Le; Chang, Haiping; Liu, Nan; Xu, Tiejun

    2016-02-01

    An accurate critical heat flux (CHF) prediction method is the key to realizing the steady-state operation of a water-cooled divertor, which works under one-sided high-heat-flux conditions. An improved CHF prediction method, based on the Euler homogeneous model for flow boiling combined with the realizable k-ɛ model for single-phase flow, is adopted in this paper; the time relaxation coefficients are corrected by the Hertz-Knudsen formula in order to improve the calculation accuracy of the vapor-liquid conversion efficiency under high-heat-flux conditions. Moreover, large local differences in the liquid physical properties, caused by the extremely nonuniform heating flux along the circumference of the cooling wall, are revised using the IAPWS-IF97 formulation. This method can therefore improve the calculation accuracy of heat and mass transfer between the liquid and vapor phases in CHF prediction simulations of water-cooled divertors under one-sided high-heating conditions. An experimental example is simulated with both the improved and the uncorrected methods, and the simulation results, such as temperature, void fraction and heat transfer coefficient, are analyzed to obtain the CHF prediction. The results show that the maximum error of the CHF based on the improved method is 23.7%, while that based on the uncorrected method is up to 188%, compared with the experimental results of Ref. [12]. Finally, the method is verified by comparison with experimental data obtained by the International Thermonuclear Experimental Reactor (ITER), with a maximum error of only 6%. This method provides an efficient tool for the CHF prediction of water-cooled divertors. Supported by the National Magnetic Confinement Fusion Science Program of China (No. 2010GB104005) and the National Natural Science Foundation of China (No. 51406085).

  11. A simplified numerical simulation method of bending properties for glass fiber cloth reinforced denture base resin.

    PubMed

    Tanimoto, Yasuhiro; Nishiwaki, Tsuyoshi; Nishiyama, Norihiro; Nemoto, Kimiya; Maekawa, Zen-ichiro

    2002-06-01

    The purpose of this study was to propose a new numerical model of glass fiber cloth reinforced denture base resin (GFRP). The proposed model is constructed with isotropic shell, beam and orthotropic shell elements representing the outermost resin, interlaminar resin and glass fiber cloth, respectively. The model was applied to failure progress analysis under three-point bending conditions, and its validity was checked through comparisons with experimental results. The failure progress behaviors involving local failures, such as interlaminar delamination and resin failure, could be simulated using this numerical model. It is concluded that the model is effective for the failure progress analysis of GFRP. PMID:12238780

  12. Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions

    SciTech Connect

    Chen, Xiaodong; Yang, Vigor

    2014-07-15

    In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales; a notable example is liquid-jet atomization and subsequent droplet dynamics. It is essential to characterize the detailed flow physics across variable length scales with high fidelity in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed, based on distance- and topology-oriented criteria, for thin regions near a confining wall or plane of symmetry and for general situations, respectively. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares the ratio of an interfacial cell size to the distance between the mass center of the cell and a reference plane against a critical value. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The requirement for interfacial mesh refinement can be detected swiftly, without the need for thickness information, equation solving, variable averaging or mesh repairing. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness, including the dynamics of colliding droplets, droplet motion in a microchannel, and the atomization of liquid impinging jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.
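
    A minimal sketch of the distance-oriented criterion as described: flag an interfacial cell when the ratio of its size to its distance from the reference plane exceeds a critical value (the threshold and cell data below are invented):

        import numpy as np

        cell_size = np.array([0.04, 0.04, 0.02, 0.01])   # interfacial cell sizes
        center_y = np.array([0.30, 0.06, 0.05, 0.04])    # cell mass-center heights
        ref_plane_y, critical_ratio = 0.0, 0.5           # wall at y = 0 (assumed)

        ratio = cell_size / np.abs(center_y - ref_plane_y)
        refine = ratio > critical_ratio                  # cells needing refinement
        print("refine flags:", refine)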

  13. Statistical modification analysis of helical planetary gears based on response surface method and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Guo, Fan

    2015-11-01

    Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effect of uncertainty in tooth modification amounts on the dynamic behavior of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces, as well as the dynamic transmission errors, of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation processes onto tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for the uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behaviors do not obey a normal distribution even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
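
    A sketch of the combined methodology on a toy problem: Monte Carlo sampling of normally distributed modification amounts pushed through a fitted quadratic response surface; the coefficients are invented, not the paper's regression:

        import numpy as np

        rng = np.random.default_rng(7)
        beta = np.array([1.0, -0.8, -0.5, 0.6, 0.4, 0.3])   # invented RSM coefficients

        def dte_fluctuation(x1, x2):
            """Quadratic response surface b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
            return (beta[0] + beta[1] * x1 + beta[2] * x2 +
                    beta[3] * x1**2 + beta[4] * x2**2 + beta[5] * x1 * x2)

        # modification amounts with normally distributed manufacturing errors
        x1 = rng.normal(loc=0.4, scale=0.05, size=100_000)
        x2 = rng.normal(loc=0.3, scale=0.05, size=100_000)
        y = dte_fluctuation(x1, x2)

        print("mean DTE fluctuation:", y.mean())
        print("skewness (non-normality indicator):",
              ((y - y.mean())**3).mean() / y.std()**3)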

  14. Efficacy of laser-based irrigant activation methods in removing debris from simulated root canal irregularities.

    PubMed

    Deleu, Ellen; Meire, Maarten A; De Moor, Roeland J G

    2015-02-01

    In root canal therapy, irrigating solutions are essential to assist in debridement and disinfection, but their spread and action are often restricted by canal anatomy. Hence, activation of irrigants is suggested to improve their distribution in the canal system, increasing irrigation effectiveness. Activation can be done with lasers, termed laser-activated irrigation (LAI). The purpose of this in vitro study was to compare the efficacy of different irrigant activation methods in removing debris from simulated root canal irregularities. Twenty-five straight human canine roots were embedded in resin, split, and their canals prepared to a standardized shape. A groove was cut in the wall of each canal and filled with dentin debris. Canals were filled with sodium hypochlorite, and six irrigant activation procedures were tested: conventional needle irrigation (CI), manual-dynamic irrigation with a tapered gutta-percha cone (MDI), passive ultrasonic irrigation, LAI with a 2,940-nm erbium-doped yttrium aluminum garnet (Er:YAG) laser with a plain fiber tip inside the canal (Er-flat), LAI with an Er:YAG laser with a conical tip held at the canal entrance (Er-PIPS), and LAI with a 980-nm diode laser moving the fiber inside the canal (diode). The amount of remaining debris in the groove was scored and compared among the groups using non-parametric tests. Conventional irrigation removed significantly less debris than all other methods. The Er:YAG laser with the plain fiber tip was more efficient than MDI, CI, diode, and the Er:YAG laser with the PIPS tip in removing debris from simulated root canal irregularities. PMID:24091791

  15. Early breast cancer detection method based on a simulation study of single-channel passive microwave radiometry imaging

    NASA Astrophysics Data System (ADS)

    Kostopoulos, Spiros A.; Savva, Andonis D.; Asvestas, Pantelis A.; Nikolopoulos, Christos D.; Capsalis, Christos N.; Cavouras, Dionisis A.

    2015-09-01

    The aim of the present study is to provide a methodology for detecting temperature alterations in the human breast, based on single-channel microwave radiometer imaging. Radiometer measurements were simulated by modelling the human breast, the temperature distribution, and the antenna characteristics. Moreover, a simulated lesion of variable size and position in the breast was employed to produce slight temperature changes in the breast. To detect the presence of a lesion, the temperature distribution in the breast was reconstructed. This was accomplished by assuming that the temperature distribution is a mixture of distributions with unknown parameters, which were determined by means of the least squares and singular value decomposition methods. The proposed method was validated in a variety of scenarios by altering the lesion size and location and the radiometer position. The method proved capable of identifying temperature alterations caused by lesions at different locations in the breast.
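
    A minimal sketch of the reconstruction step, assuming a Gaussian mixture basis and solving for the mixture weights by an SVD-based least squares fit; the measurement model is an invented stand-in:

        import numpy as np

        rng = np.random.default_rng(5)
        depth = np.linspace(0.0, 6.0, 40)                   # measurement grid [cm]
        centers = np.array([1.0, 2.5, 4.0])                 # assumed mixture components
        A = np.exp(-(depth[:, None] - centers)**2 / 0.8)    # design matrix

        w_true = np.array([0.2, 0.9, 0.3])                  # hidden component weights
        temps = A @ w_true + 0.01 * rng.normal(size=depth.size)   # noisy radiometer data

        w_est = np.linalg.pinv(A) @ temps                   # SVD-based pseudoinverse solve
        print("estimated weights:", np.round(w_est, 2))     # a hot component flags a lesion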

  16. A method for motion simulator design based on modeling characteristics of the human operator

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1978-01-01

    A design criterion is obtained to compare two simulators and evaluate their equivalence or credibility. In the subsequent analysis, the comparison of two simulators can be considered the same problem as the comparison of a real-world situation and a simulation's representation of that situation. The design criterion developed involves modeling the human operator and defining simple parameters to describe his behavior in the simulator and in the real-world situation. In the process of obtaining human operator parameters that define characteristics for evaluating simulators, measures are also obtained of those human operator characteristics which can be used to describe the human as an information processor and controller. First, a study is conducted on the simulator design problem in such a manner that this modeling approach can be used to develop a criterion for the comparison of two simulators.

  17. A variable hard sphere-based phenomenological inelastic collision model for rarefied gas flow simulations by the direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Prasanth, P. S.; Kakkassery, Jose K.; Vijayakumar, R.

    2012-04-01

    A modified phenomenological model is constructed for the simulation of rarefied flows of polyatomic non-polar gas molecules by the direct simulation Monte Carlo (DSMC) method. This variable hard sphere-based model employs a constant rotational collision number, but all of its collisions are inelastic in nature while the correct macroscopic relaxation rate is maintained. In equilibrium conditions there is equipartition of energy between the rotational and translational modes, and the model satisfies the principle of reciprocity or detailed balancing. The present model is applicable at moderate temperatures, at which the molecules are in their vibrational ground state. For verification, the model is applied to DSMC simulations of the translational and rotational energy distributions in nitrogen gas at equilibrium, and the results are compared with their corresponding Maxwellian distributions. Next, Couette flow, the temperature jump and Rayleigh flow are simulated; the viscosity and thermal conductivity coefficients of nitrogen are numerically estimated and compared with experimentally measured values. The model is further applied to the simulation of the rotational relaxation of nitrogen through low- and high-Mach-number normal shock waves in a novel way. In all cases, the results are found to be in good agreement with theoretically expected and experimentally observed values. It is concluded that the inelastic collisions of polyatomic molecules can be predicted well by the constructed variable hard sphere (VHS)-based collision model.

  18. A new variable parallel holes collimator for scintigraphic device with validation method based on Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Trinci, G.; Massari, R.; Scandellari, M.; Boccalini, S.; Costantini, S.; Di Sero, R.; Basso, A.; Sala, R.; Scopinaro, F.; Soluri, A.

    2010-09-01

    The aim of this work is to present a new scintigraphic device able to change the length of its collimator automatically, in order to adapt the spatial resolution to the gamma source distance. This patented technique replaces the collimator changes that standard gamma cameras still require. Monte Carlo simulations represent the best tool for exploring new technological solutions for such an innovative collimation structure; they also provide a valid analysis of gamma camera performance, as well as of the advantages and limits of this new solution. Specifically, the Monte Carlo simulations are realized with the GEANT4 (GEometry ANd Tracking) framework, and the specific simulation object is a collimation method based on separate blocks that can be brought closer together and farther apart, in order to reach and maintain specific spatial resolution values for all source-detector distances. To verify the accuracy and faithfulness of these simulations, we performed experimental measurements with an identical setup and conditions. This confirms the power of simulation as an extremely useful tool, especially where new technological solutions need to be studied, tested and analyzed before their practical realization. The final aim of this new collimation system is the improvement of SPECT techniques, with real control of the spatial resolution value during tomographic acquisitions. This principle allowed us to simulate a tomographic acquisition of two capillaries of radioactive solution, in order to verify the possibility of clearly distinguishing them.
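
    A sketch of the geometry behind a variable-length collimator, using the textbook parallel-hole resolution estimate R_g ≈ d(L + z)/L (hole diameter d, collimator length L, source distance z; septal penetration neglected); solving for L shows how the length must grow with source distance to hold a target resolution:

        import numpy as np

        d = 1.5           # hole diameter [mm] (assumed)
        target_Rg = 8.0   # desired geometric resolution [mm] (assumed)

        for z in np.array([50.0, 100.0, 150.0]):   # source distances [mm]
            L = d * z / (target_Rg - d)            # from R_g = d (L + z) / L
            print(f"z = {z:5.0f} mm -> collimator length L = {L:5.1f} mm")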

  19. Ground motion simulation for the 23 August 2011, Mineral, Virginia earthquake using physics-based and stochastic broadband methods

    USGS Publications Warehouse

    Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz

    2015-01-01

    Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare with observed motions. The methods include a physics-based model by Hartzell et al. (1999, 2005), a stochastic source-based model by Boore (2009), and a stochastic site-based model by Rezaeian and Der Kiureghian (2010, 2012). The ground-motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground-motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics-based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site-based model also yields more realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source-based model has larger bias at lower frequencies (<0.5 Hz) and cannot reproduce the varying frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows that there is no obvious pattern with distance in the simulation bias, but there is some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall-off with distance for all three models, comparable PGA and PSA amplitudes for the physics-based and stochastic site-based models, and systematically lower amplitudes for the stochastic source-based model at lower frequencies (<0.5 Hz).

  20. Voronoi based discrete least squares meshless method for heat conduction simulation in highly irregular geometries

    NASA Astrophysics Data System (ADS)

    Labibzadeh, Mojtaba

    2016-01-01

    A new technique is used in the Discrete Least Squares Meshless (DLSM) method to remove the common deficiencies of meshfree methods in handling problems containing cracks or concave boundaries. An enhanced Discrete Least Squares Meshless method, named VDLSM (Voronoi-based Discrete Least Squares Meshless), is developed in order to solve the steady-state heat conduction problem in irregular solid domains including concave boundaries or cracks. Existing meshless methods cannot precisely estimate the required unknowns in the vicinity of the above-mentioned boundaries, and previous research has been limited to domains with regular convex boundaries. To this end, the advantages of the Voronoi tessellation algorithm are exploited: the support domains of the sampling points are determined using a Voronoi tessellation. For the weight functions, a cubic spline polynomial is used, based on a normalized distance variable that provides a high degree of smoothness near the above-mentioned discontinuities. Finally, Moving Least Squares (MLS) shape functions are constructed using a variational method. This straightforward scheme can properly estimate the unknowns (in this particular study, the temperatures at the nodal points) near and on the crack faces, crack tip or concave boundaries, without extra backward corrective procedures, i.e. the iterative calculations otherwise needed to modify the shape functions of nodes located near or on these complex boundaries. The accuracy and efficiency of the presented method are investigated by analyzing four particular examples. The results obtained from VDLSM are compared with the available analytical results or, when an analytical solution is not available, with the results of the well-known Finite Element Method (FEM). The comparisons reveal that the proposed technique gives high accuracy for the solution of steady-state heat conduction problems within cracked domains or domains with concave boundaries.
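
    A sketch of the cubic spline weight as a function of the normalized distance s = r/r_support, in the standard form used throughout the MLS/meshless literature (assumed here, not quoted from the paper):

        import numpy as np

        def cubic_spline_weight(s):
            """Standard MLS cubic spline weight; zero outside the support (s > 1)."""
            s = np.asarray(s, dtype=float)
            w = np.zeros_like(s)
            inner = s <= 0.5
            outer = (s > 0.5) & (s <= 1.0)
            w[inner] = 2/3 - 4 * s[inner]**2 + 4 * s[inner]**3
            w[outer] = 4/3 - 4 * s[outer] + 4 * s[outer]**2 - (4/3) * s[outer]**3
            return w

        print(cubic_spline_weight([0.0, 0.25, 0.5, 0.75, 1.0]))  # smooth decay to zero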

  1. Methods of channeling simulation

    SciTech Connect

    Barrett, J.H.

    1989-06-01

    Many computer simulation programs have been used to interpret experiments almost since the first channeling measurements were made. Certain aspects of these programs determine how accurately they simulate ions in crystals; among these are the manner in which the structure of the crystal is incorporated, how any quantity of interest is computed, what ion-atom potential is used, how deflections are computed from the potential, the incorporation of thermal vibrations of the lattice atoms, correlations of thermal vibrations, and the form of the stopping power. Other aspects of the programs are included to improve speed; among these are table lookup, importance sampling, and the multiparameter method. It is desirable for programs to facilitate incorporation of special features of interest in special situations; examples are relaxations and enhanced vibrations of surface atoms, easy substitution of an alternate potential for comparison, change of row directions from layer to layer in strained-layer lattices, and different vibration amplitudes for substitutional solute or impurity atoms. Ways of implementing all of these aspects and features, and their consequences, are discussed.

  2. Parallel adaptive mesh refinement method based on WENO finite difference scheme for the simulation of multi-dimensional detonation

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang

    2015-10-01

    For numerical simulation of detonation, computational cost using uniform meshes is large due to the vast separation in both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for numerical investigation of multi-dimensional detonation. A well-designed AMR method based on finite difference weighted essentially non-oscillatory (WENO) scheme, named as AMR&WENO is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that the AMR&WENO is accurate and has a high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide us further insight into the high performance of the parallel AMR&WENO method.
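
    The load-balancing idea can be sketched briefly: cells are ordered along a Hilbert space-filling curve, and the ordered sequence is cut into contiguous chunks of near-equal cost, one per MPI rank. The Python sketch below (not the authors' code; the 2-D setting and the uniform cost model are illustrative assumptions) shows the principle.

    import numpy as np

    def hilbert_index(order, x, y):
        """Map cell coordinates (x, y) on a 2**order x 2**order grid to a
        1-D Hilbert curve index (classic bit-twiddling algorithm)."""
        d = 0
        s = 2**order // 2
        while s > 0:
            rx = 1 if (x & s) > 0 else 0
            ry = 1 if (y & s) > 0 else 0
            d += s * s * ((3 * rx) ^ ry)
            if ry == 0:                      # rotate quadrant
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            s //= 2
        return d

    def partition(cells, costs, n_ranks, order):
        """Sort cells along the Hilbert curve, then cut the sequence into
        contiguous chunks of near-equal summed cost, one chunk per rank."""
        idx = np.argsort([hilbert_index(order, x, y) for x, y in cells])
        cum = np.cumsum(np.asarray(costs)[idx])
        cuts = np.searchsorted(cum, np.linspace(0, cum[-1], n_ranks + 1)[1:-1])
        return np.split(idx, cuts)

    cells = [(x, y) for x in range(8) for y in range(8)]
    parts = partition(cells, np.ones(len(cells)), n_ranks=4, order=3)
    print([len(p) for p in parts])           # near-equal workloads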

  3. Physical parameter identification method based on modal analysis for two-axis on-road vehicles: Theory and simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Minyi; Zhang, Bangji; Zhang, Jie; Zhang, Nong

    2016-03-01

    Physical parameters are very important for vehicle dynamic modeling and analysis. However, most physical parameter identification methods assume that some physical parameters of the vehicle are known and identify only the remaining ones. In order to identify the physical parameters of a vehicle when all of them are unknown, a methodology based on the State Variable Method (SVM) for physical parameter identification of two-axis on-road vehicles is presented. The modal parameters of the vehicle are identified by the SVM, and the physical parameters are then estimated by the least squares method. In numerical simulations, the physical parameters of a Ford Granada are chosen as the vehicle model parameters, and a half-sine bump function is used to simulate a tire excited by an impulse. The first numerical simulation shows that the present method can identify all of the physical parameters, with the largest absolute percentage error of an identified parameter being 0.205%. The effects of errors in the additional mass, structural parameters and measurement noise are discussed in subsequent simulations; the results show that when the signal contains 30 dB noise, the largest absolute percentage error of the identification is 3.78%. These simulations verify that the presented method is effective and accurate for physical parameter identification of two-axis on-road vehicles. The proposed methodology can identify all physical parameters of a 7-DOF vehicle model using free-decay responses of the vehicle, without assuming that some physical parameters are known.
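
    The final least-squares step can be illustrated on a single-degree-of-freedom system; the snippet below is a simplified sketch under that assumption, not the paper's 7-DOF procedure, which first identifies modal parameters with the SVM.

    import numpy as np

    # 1-DOF illustration: simulate a free-decay response of
    # m*x'' + c*x' + k*x = 0 with m = 1, c = 4, k = 400, then recover
    # c/m and k/m by least squares from sampled motion data.
    m, c, k = 1.0, 4.0, 400.0
    dt, n = 0.001, 5000
    t = np.arange(n) * dt
    wn = np.sqrt(k / m)
    zeta = c / (2.0 * np.sqrt(k * m))
    wd = wn * np.sqrt(1.0 - zeta**2)
    x = np.exp(-zeta * wn * t) * np.cos(wd * t)     # analytic free decay

    v = np.gradient(x, dt)                          # numerical velocity
    a = np.gradient(v, dt)                          # numerical acceleration

    # Equation of motion gives a = -(c/m) v - (k/m) x: regress a on (v, x)
    theta, *_ = np.linalg.lstsq(np.column_stack([v, x]), -a, rcond=None)
    print("c/m ~", theta[0], "  k/m ~", theta[1])   # approx. 4 and 400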

  4. On-orbit servicing system assessment and optimization methods based on lifecycle simulation under mixed aleatory and epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel

    2013-06-01

    To assess the on-orbit servicing (OOS) paradigm and optimize its utilities by taking advantage of its inherent flexibility and responsiveness, OOS system assessment and optimization methods based on lifecycle simulation under uncertainties are studied. The uncertainty sources considered in this paper include both aleatory (random launch/OOS operation failure and on-orbit component failure) and epistemic (the unknown trend of the end-user market price) types. Firstly, the lifecycle simulation under uncertainties is discussed. The chronological flowchart is presented. The cost and benefit models are established, and the uncertainties thereof are modeled. The dynamic programming method to make optimal decisions in the face of uncertain events is introduced. Secondly, the method to analyze the propagation effects of the uncertainties on the OOS utilities is studied. With combined probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, based on which the OOS utility assessment tool under mixed uncertainties is developed. Thirdly, to further optimize the OOS system under mixed uncertainties, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of the traditional RBO method, which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourthly, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool. Furthermore, the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.

  5. Efficient HPLC method development using structure-based database search, physico-chemical prediction and chromatographic simulation.

    PubMed

    Wang, Lin; Zheng, Jinjian; Gong, Xiaoyi; Hartman, Robert; Antonucci, Vincent

    2015-02-01

    Development of a robust HPLC method for pharmaceutical analysis can be very challenging and time-consuming. In our laboratory, we have developed a new workflow leveraging ACD/Labs software tools to improve the performance of HPLC method development. First, we established ACD-based analytical method databases that can be searched by chemical structure similarity. By taking advantage of the existing knowledge of HPLC methods archived in the databases, one can find a good starting point for HPLC method development, or even reuse an existing method as is for a new project. Second, we used the software to predict compound physicochemical properties before running actual experiments to help select appropriate method conditions for targeted screening experiments. Finally, after selecting stationary and mobile phases, we used modeling software to simulate chromatographic separations and optimize the temperature and gradient program. The optimized new method was then uploaded to internal databases as knowledge available to assist future method development efforts. Routine implementation of such standardized workflows has the potential to reduce the number of experiments required for method development and facilitate systematic and efficient development of faster, greener and more robust methods leading to greater productivity. In this article, we use Loratadine method development as an example to demonstrate efficient method development with this new workflow. PMID:25481084

  6. Inferring Population Decline and Expansion From Microsatellite Data: A Simulation-Based Evaluation of the Msvar Method

    PubMed Central

    Girod, Christophe; Vitalis, Renaud; Leblois, Raphaël; Fréville, Hélène

    2011-01-01

    Reconstructing the demographic history of populations is a central issue in evolutionary biology. Using likelihood-based methods coupled with Monte Carlo simulations, it is now possible to reconstruct past changes in population size from genetic data. Using simulated data sets under various demographic scenarios, we evaluate the statistical performance of Msvar, a full-likelihood Bayesian method that infers past demographic change from microsatellite data. Our simulation tests show that Msvar is very efficient at detecting population declines and expansions, provided the event is neither too weak nor too recent. We further show that Msvar outperforms two moment-based methods (the M-ratio test and Bottleneck) for detecting population size changes, whatever the time and the severity of the event. The same trend emerges from a compilation of empirical studies. The latest version of Msvar provides estimates of the current and the ancestral population size and the time since the population started changing in size. We show that, in the absence of prior knowledge, Msvar provides little information on the mutation rate, which results in biased estimates and/or wide credibility intervals for each of the demographic parameters. However, scaling the population size parameters with the mutation rate and scaling the time with current population size, as coalescent theory requires, significantly improves the quality of the estimates for contraction but not for expansion scenarios. Finally, our results suggest that Msvar is robust to moderate departures from a strict stepwise mutation model. PMID:21385729

  7. Task-based image quality evaluation of iterative reconstruction methods for low dose CT using computer simulations

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.

    2015-04-01

    Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improve image quality or reduce radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 to 25% D0. A lesion of fixed size and contrast was inserted at different locations into the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low dose protocol, lower than the standard dose due to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
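
    A minimal sketch of a CHO computation of this kind is given below; the random channel matrix and toy images are illustrative assumptions (the study uses rotationally symmetric and rotationally oriented channels on reconstructed CT images).

    import numpy as np

    def cho_auc(imgs_absent, imgs_present, U):
        """Channelized Hotelling observer: U is an (n_pixels, n_channels)
        channel matrix; returns the detection AUC estimated from the
        observer test statistics of the two image classes."""
        v0 = imgs_absent @ U                   # channel outputs, class 0
        v1 = imgs_present @ U                  # channel outputs, class 1
        S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))
        w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))   # Hotelling template
        t0, t1 = v0 @ w, v1 @ w
        return (t1[:, None] > t0[None, :]).mean()         # Mann-Whitney AUC

    # Toy demo: 64-pixel "images", 3 random channels, weak uniform signal
    rng = np.random.default_rng(0)
    U = rng.standard_normal((64, 3))
    absent = rng.standard_normal((200, 64))
    present = rng.standard_normal((200, 64)) + 0.5
    print(cho_auc(absent, present, U))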

  8. Simulation of metal cutting using the particle finite-element method and a physically based plasticity model

    NASA Astrophysics Data System (ADS)

    Rodríguez, J. M.; Jonsén, P.; Svoboda, A.

    2016-08-01

    Metal cutting is one of the most common metal-shaping processes. In this process, specified geometrical and surface properties are obtained by breaking up material and removing it as a chip with a cutting edge. Chip formation is associated with large strains, high strain rates and locally high temperatures due to adiabatic heating. These phenomena, together with numerical complications, make modeling of metal cutting difficult. Material models, which are crucial in metal-cutting simulations, are usually calibrated against data from material testing. Nevertheless, the magnitudes of strains and strain rates involved in metal cutting are several orders of magnitude higher than those generated by conventional material testing. Therefore, a material model that can be extrapolated outside its calibration range is highly desirable. In this study, a physically based plasticity model built on dislocation density and vacancy concentration is used to simulate orthogonal metal cutting of AISI 316L. The material model is implemented into an in-house particle finite-element method software. Numerical simulations are in agreement with experimental results, as well as with previous results obtained with the finite-element method.
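
    The abstract does not spell out the constitutive equations; a widely used dislocation-density relation of this family (an assumption here, not necessarily the authors' exact formulation) couples a Taylor-type flow stress with a Kocks-Mecking evolution law:

        \sigma_y = \sigma_0 + \alpha M G b \sqrt{\rho}, \qquad
        \frac{\mathrm{d}\rho}{\mathrm{d}\varepsilon^{p}} = k_1 \sqrt{\rho} - k_2(\dot{\varepsilon}, T)\,\rho,

    where \rho is the dislocation density, G the shear modulus, b the magnitude of the Burgers vector, M the Taylor factor, \alpha a constant of order 0.3, and k_1, k_2 storage and recovery coefficients; the recovery term's dependence on strain rate and temperature is what lets such models extrapolate toward cutting conditions.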

  9. National Clinical Skills Competition: an effective simulation-based method to improve undergraduate medical education in China

    PubMed Central

    Jiang, Guanchao; Chen, Hong; Wang, Qiming; Chi, Baorong; He, Qingnan; Xiao, Haipeng; Zhou, Qinghuan; Liu, Jing; Wang, Shan

    2016-01-01

    Background The National Clinical Skills Competition has been held in China for 5 consecutive years since 2010 to promote undergraduate education reform and improve the teaching quality. The effects of the simulation-based competition will be analyzed in this study. Methods Participation in the competitions and the compilation of the questions used in the competition finals are summarized, and the influence and guidance quality are further analyzed. Through the nationwide distribution of questionnaires in medical colleges, the effects of the simulation-based competition on promoting undergraduate medical education reform were evaluated. Results The results show that approximately 450 students from more than 110 colleges (accounting for 81% of colleges providing undergraduate clinical medical education in China) participated in the competition each year. The knowledge, skills, and attitudes were comprehensively evaluated by simulation-based assessment. Eight hundred and eighty copies of the questionnaires were distributed to 110 participating medical schools in 2015. In total, 752 valid responses were received across 95 schools. The majority of the interviewees agreed or strongly agreed that competition promoted the adoption of advanced educational principles (76.8%), updated the curriculum model and instructional methods (79.8%), strengthened faculty development (84.0%), improved educational resources (82.1%), and benefited all students (53.4%). Conclusions The National Clinical Skills Competition is widely accepted in China. It has effectively promoted the reform and development of undergraduate medical education in China. PMID:26894586

  10. A low numerical dissipation patch-based adaptive mesh refinement method for large-eddy simulation of compressible flows

    NASA Astrophysics Data System (ADS)

    Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.

    2007-01-01

    We present a methodology for the large-eddy simulation of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). A description of a conservative, flux-based hybrid numerical method that uses both centered finite-difference and a weighted essentially non-oscillatory (WENO) scheme is given, encompassing the cases of scheme alternation and internal mesh interfaces resulting from SAMR. In this method, the centered scheme is used in turbulent flow regions while WENO is employed to capture shocks. One-, two- and three-dimensional numerical experiments and example simulations are presented including homogeneous shock-free turbulence, a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability.
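
    The flux hybridization can be sketched for 1-D linear advection: a shock sensor selects a dissipative flux near discontinuities and a non-dissipative centered flux elsewhere. In the sketch below, plain first-order upwind stands in for the WENO branch purely to keep the example short, and the sensor and threshold are illustrative assumptions.

    import numpy as np

    # 1-D linear advection u_t + a u_x = 0 with a hybrid interface flux:
    # non-dissipative centered flux in smooth regions, dissipative
    # first-order upwind where a shock sensor fires.
    a, nx = 1.0, 200
    dx, dt = 1.0 / nx, 0.002                  # CFL = a*dt/dx = 0.4
    x = np.arange(nx) * dx
    u = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)   # square pulse

    def step(u):
        up = np.roll(u, -1)                   # u_{i+1}
        centered = 0.5 * a * (u + up)         # flux at interface i+1/2
        upwind = a * u                        # upwind flux for a > 0
        curv = np.abs(up - 2 * u + np.roll(u, 1))    # discontinuity sensor
        sensor = curv / (np.abs(up) + np.abs(u) + 1e-12)
        flux = np.where(sensor > 0.1, upwind, centered)
        return u - dt / dx * (flux - np.roll(flux, 1))

    for _ in range(100):
        u = step(u)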

  11. A Comparison of Some Model Order Reduction Methods for Fast Simulation of Soft Tissue Response using the Point Collocation-based Method of Finite Spheres (PCMFS).

    PubMed

    Banihani, Suleiman; De, Suvranu

    2009-01-01

    In this paper we develop the Point Collocation-based Method of Finite Spheres (PCMFS) to simulate the viscoelastic response of soft biological tissues and evaluate the effectiveness of model order reduction methods such as modal truncation, Hankel optimal model and truncated balanced realization techniques for PCMFS. The PCMFS was developed in [1] as a physics-based technique for real time simulation of surgical procedures. It is a meshfree numerical method in which discretization is performed using a set of nodal points with approximation functions compactly supported on spherical subdomains centered at the nodes. The point collocation method is used as the weighted residual technique where the governing differential equations are directly applied at the nodal points. Since computational speed has a significant role in simulation of surgical procedures, model order reduction methods have been compared for relative gains in efficiency and computational accuracy. Of these methods, truncated balanced realization results in the highest accuracy while modal truncation results in the highest efficiency. PMID:20300494
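
    As background, modal truncation of a linear state-space model can be sketched in a few lines; the sorting criterion (slowest-decaying modes first) is one common choice and an assumption here, and complex-conjugate modes would be paired to obtain a real reduced model in practice.

    import numpy as np

    def modal_truncation(A, B, C, r):
        """Keep the r slowest-decaying modes of x' = Ax + Bu, y = Cx.
        Sorting by |Re(lambda)| keeps the modes that dominate the
        long-time (e.g., viscoelastic) response."""
        lam, V = np.linalg.eig(A)
        keep = np.argsort(np.abs(lam.real))[:r]       # slowest modes
        W = np.linalg.pinv(V)                         # left eigenvectors
        return np.diag(lam[keep]), W[keep, :] @ B, C @ V[:, keep]

    # Toy demo: reduce a stable random 10-state system to 4 modal states
    rng = np.random.default_rng(1)
    A = rng.standard_normal((10, 10)) - 5 * np.eye(10)
    B, C = rng.standard_normal((10, 1)), rng.standard_normal((1, 10))
    Ar, Br, Cr = modal_truncation(A, B, C, r=4)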

  12. Numerical Simulation of Evacuation Process in Malaysia By Using Distinct-Element-Method Based Multi-Agent Model

    NASA Astrophysics Data System (ADS)

    Abustan, M. S.; Rahman, N. A.; Gotoh, H.; Harada, E.; Talib, S. H. A.

    2016-07-01

    In Malaysia, little research on crowd evacuation simulation has been reported. Hence, the development of a numerical crowd evacuation model that takes into account people's behavioral patterns and psychological characteristics is crucial for Malaysia. Moreover, tsunami disasters began to gain the attention of Malaysian citizens after the 2004 Indian Ocean Tsunami, which demonstrated the need for a quick evacuation process. In view of these circumstances, we have conducted simulations of the tsunami evacuation process at Miami Beach on Penang Island using a Distinct Element Method (DEM)-based crowd behavior simulator. The main objectives are to investigate and reproduce the current evacuation process at this location under different hypothetical scenarios in order to study evacuation efficiency. Sim-1 represents the initial evacuation plan, while sim-2 improves on it by adding a new evacuation area. The simulation results show that sim-2 achieves a shorter evacuation time than sim-1, with the evacuation time reduced by 53 seconds. The effect of the additional evacuation place is confirmed by the decrease in the evacuation completion time. These results suggest that numerical simulation can serve as an effective tool for studying crowd evacuation processes.

  13. Genetic Algorithm Based Simulated Annealing Method for Solving Unit Commitment Problem in Utility System

    NASA Astrophysics Data System (ADS)

    Rajan, C. Christober Asir

    2010-10-01

    The objective of this paper is to find a generation schedule that minimizes the total operating cost subject to a variety of constraints; that is, to find the optimal generating unit commitment in the power system for the next H hours. Genetic Algorithms (GAs) are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination and survival of the fittest. Here, the unit commitment schedule is coded as a string of symbols, and an initial population of parent solutions is generated at random. Each schedule is formed by committing all the units according to their initial status ("flat start"). The parents are obtained from a pre-defined set of solutions, i.e., each solution is adjusted to meet the requirements. Then, a random recommitment is carried out respecting the units' minimum down times, and simulated annealing (SA) improves the resulting schedules. A 66-bus utility power system with twelve generating units in India demonstrates the effectiveness of the proposed approach. Numerical results compare the cost solutions and computation time obtained using the Genetic Algorithm method with those of other conventional methods.

  14. Improving the degree-day method for sub-daily melt simulations with physically-based diurnal variations

    NASA Astrophysics Data System (ADS)

    Tobin, Cara; Schaefli, Bettina; Nicótina, Ludovico; Simoni, Silvia; Barrenetxea, Guillermo; Smith, Russell; Parlange, Marc; Rinaldo, Andrea

    2013-05-01

    This paper proposes a new extension of the classical degree-day snowmelt model applicable to hourly simulations for regions with limited data and adaptable to a broad range of spatially-explicit hydrological models. The snowmelt schemes have been tested with a point measurement dataset at the Cotton Creek Experimental Watershed (CCEW) in British Columbia, Canada and with a detailed dataset available from the Dranse de Ferret catchment, an extensively monitored catchment in the Swiss Alps. The snowmelt model performance is quantified with the use of a spatially-explicit model of the hydrologic response. Comparative analyses are presented with the widely-known, grid-based method proposed by Hock, which combines a local, temperature-index approach with potential radiation. The results suggest that a simple diurnal cycle of the degree-day melt parameter based on minimum and maximum temperatures is competitive with the Hock approach for sub-daily melt simulations. Advantages of the new extension of the classical degree-day method over other temperature-index methods include its use of physically-based diurnal variations and its ability to be adapted to data-constrained hydrological models that are to some degree lumped.
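
    A minimal sketch of such a diurnally varying degree-day factor is shown below; the sinusoidal form and the afternoon peak hour are illustrative assumptions, as the paper ties the cycle to daily minimum and maximum temperatures.

    import numpy as np

    def diurnal_ddf(hour, ddf_min, ddf_max, peak_hour=15):
        """Degree-day factor with a sinusoidal daily cycle peaking in
        mid-afternoon (peak hour chosen arbitrarily here)."""
        phase = 2 * np.pi * (hour - peak_hour) / 24.0
        return ddf_min + 0.5 * (ddf_max - ddf_min) * (1 + np.cos(phase))

    def hourly_melt(temp, hour, ddf_min=2.0, ddf_max=8.0, t_base=0.0):
        """Hourly melt in mm w.e.; the DDF is per degree-day, hence /24."""
        return diurnal_ddf(hour, ddf_min, ddf_max) / 24.0 * np.maximum(temp - t_base, 0.0)

    hours = np.arange(24)
    temps = 5 + 5 * np.sin(2 * np.pi * (hours - 9) / 24)   # synthetic day
    print(hourly_melt(temps, hours).sum())                 # daily total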

  15. Effects of Simulated Marker Placement Deviations on Running Kinematics and Evaluation of a Morphometric-Based Placement Feedback Method.

    PubMed

    Osis, Sean T; Hettinga, Blayne A; Macdonald, Shari; Ferber, Reed

    2016-01-01

    In order to provide effective test-retest and pooling of information from clinical gait analyses, it is critical to ensure that the data produced are as reliable as possible. Furthermore, it has been shown that anatomical marker placement is the largest source of inter-examiner variance in gait analyses. However, the effects of specific, known deviations in marker placement on calculated kinematic variables are unclear, and there is currently no mechanism to provide location-based feedback regarding placement consistency. The current study addresses these disparities by: applying a simulation of marker placement deviations to a large (n = 411) database of runners; evaluating a recently published method of morphometric-based deviation detection; and pilot-testing a system of location-based feedback for marker placements. Anatomical markers from a standing neutral trial were moved virtually by up to 30 mm to simulate deviations. Kinematic variables during running were then calculated using the original, and altered static trials. Results indicate that transverse plane angles at the knee and ankle are most sensitive to deviations in marker placement (7.59 degrees of change for every 10 mm of marker error), followed by frontal plane knee angles (5.17 degrees for every 10 mm). Evaluation of the deviation detection method demonstrated accuracies of up to 82% in classifying placements as deviant. Finally, pilot testing of a new methodology for providing location-based feedback demonstrated reductions of up to 80% in the deviation of outcome kinematics. PMID:26765846

  16. A Wavelet-Based Method for Simulation of Seismic Wave Propagation

    NASA Astrophysics Data System (ADS)

    Hong, T.; Kennett, B. L.

    2001-12-01

    Seismic wave propagation (e.g., both P-SV and SH in 2-D) can be modeled using wavelets. The governing elastic wave equations are transformed to a first-order differential equation system in time with a displacement-velocity formulation. Spatial derivatives are represented with a wavelet expansion using a semigroup approach. The evolution equations in time are derived from a Taylor expansion in terms of wavelet operators. The wavelet representation allows high accuracy for the spatial derivatives. Absorbing boundary conditions are implemented by including attenuation terms in the formulation of the equations. The traction-free condition at a free surface can be introduced with an equivalent force system. Irregular boundaries can be handled through a remapping of the coordinate system. The method is based on a displacement-velocity scheme which reduces memory requirements by about 30% compared to the use of velocity-stress. The new approach gives excellent agreement with analytic results for simple models including the Rayleigh waves at a free surface. A major strength of the wavelet approach is that the formulation can be employed for highly heterogeneous media and so can be used for complex situations.

  17. A stochastic model updating method for parameter variability quantification based on response surface models and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Ren, Wei-Xin; Perera, Ricardo

    2012-11-01

    Stochastic model updating must be considered to quantify the uncertainties inherent in real-world engineering structures. In this way the statistical properties of structural parameters, rather than deterministic values, can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in terms of theoretical complexity and low computational efficiency. This study proposes a simple and cost-efficient method that decomposes a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting deterministic parameter values, and the parameter means and variances can then be statistically estimated from the parameter predictions over all samples. Meanwhile, the analysis of variance approach is employed to evaluate the significance of parameter variability. The proposed method has been demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method presents similar accuracy while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
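
    The decomposition into deterministic inverse problems can be sketched compactly; in the snippet below the stand-in FE model, the quadratic surrogate, and the single-parameter setting are all illustrative assumptions.

    import numpy as np
    from scipy.optimize import least_squares

    # One-parameter illustration: fit quadratic response surfaces to a
    # few runs of a (stand-in) FE model, draw Monte Carlo samples of the
    # measured responses, invert the surrogate once per sample, and
    # estimate the parameter mean and variance from the predictions.
    rng = np.random.default_rng(42)
    fe_model = lambda t: np.array([10 * np.sqrt(t), 25 * np.sqrt(t)])

    thetas = np.linspace(0.5, 1.5, 7)                    # design points
    Y = np.array([fe_model(t) for t in thetas])
    coeffs = [np.polyfit(thetas, Y[:, j], 2) for j in range(2)]
    surrogate = lambda t: np.array([np.polyval(c, t) for c in coeffs])

    samples = fe_model(1.0) + rng.normal(0, 0.05, size=(500, 2))
    est = [least_squares(lambda p: surrogate(p[0]) - y, x0=[1.0]).x[0]
           for y in samples]
    print(np.mean(est), np.var(est, ddof=1))             # updated stats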

  18. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    NASA Astrophysics Data System (ADS)

    Densmore, J. D.; Park, H.; Wollaber, A. B.; Rauenzahn, R. M.; Knoll, D. A.

    2015-03-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption-emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck-Cummings algorithm.

  19. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    SciTech Connect

    Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.

    2015-03-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.

  20. Simulation of the early stage of binary alloy decomposition, based on the free energy density functional method

    NASA Astrophysics Data System (ADS)

    L'vov, P. E.; Svetukhin, V. V.

    2016-07-01

    Based on the free energy density functional method, the early stage of decomposition of a one-dimensional binary alloy in the regular-solution approximation has been simulated. The simulation takes into account Gaussian composition fluctuations caused by the initial alloy state. The calculation uses a block approach, in which the extensive solution volume is discretized into independent fragments; the decomposition process is calculated for each fragment, and a joint analysis of the resulting second-phase segregations is then performed. All stages of solid solution decomposition could be traced: nucleation, growth, and the initial stage of coalescence. The time dependences of the main phase distribution characteristics are calculated: the average size and concentration of the second-phase particles, their size distribution function, and the nucleation rate of the second-phase particles (clusters). Cluster trajectories in the size-composition space are constructed for the cases of growth and dissolution.

  1. Simulation of magnetization process of Pure-type superconductor magnet undulator based on T-method

    NASA Astrophysics Data System (ADS)

    Deri, Yi; Kawaguchi, Hideki; Tsuchimoto, Masanori; Tanaka, Takashi

    2015-11-01

    For the next generation of Free Electron Lasers, a Pure-type undulator made of high-Tc superconductors (HTSs) has been considered as a way to achieve a small, high-intensity magnetic field undulator. In general, it is very difficult to adjust the undulator magnet alignment after HTS magnetization since the entire undulator is installed inside a cryostat; the appropriate HTS alignment therefore has to be determined at the design stage. This paper presents the development of a numerical simulation code for the magnetization process of the Pure-type HTS undulator to assist in the design of the optimal size and alignment of the HTS magnets.

  2. Simulation modeling based method for choosing an effective set of fault tolerance mechanisms for real-time avionics systems

    NASA Astrophysics Data System (ADS)

    Bakhmurov, A. G.; Balashov, V. V.; Glonina, A. B.; Pashkov, V. N.; Smeliansky, R. L.; Volkanov, D. Yu.

    2013-12-01

    In this paper, the reliability allocation problem (RAP) for real-time avionics systems (RTAS) is considered. The proposed method for solving this problem consists of two steps: (i) creation of an RTAS simulation model at the necessary level of abstraction and (ii) application of a metaheuristic algorithm to find an optimal solution (i.e., to choose an optimal set of fault tolerance techniques). Whenever the algorithm needs to measure the execution time of some software component, simulation modeling is applied. The simulation modeling procedure consists of the following steps: automatic construction of a simulation model of the RTAS configuration and running this model in a simulation environment to measure the required time. This method was implemented as an experimental software tool that works in cooperation with the DYANA simulation environment. The results of experiments with the implemented method are presented. Finally, future plans for development of the presented method and tool are briefly described.

  3. Finite analytic method based on mixed-form Richards' equation for simulating water flow in vadose zone

    NASA Astrophysics Data System (ADS)

    Zhang, Zaiyong; Wang, Wenke; Yeh, Tian-chyi Jim; Chen, Li; Wang, Zhoufeng; Duan, Lei; An, Kedong; Gong, Chengcheng

    2016-06-01

    In this paper, we develop a finite analytic method (FAMM), which combines the flexibility of numerical methods with the advantages of analytical solutions, to solve the mixed-form Richards' equation. This new approach minimizes the mass balance errors and truncation errors associated with most numerical approaches. We use numerical experiments to demonstrate that FAMM can obtain more accurate numerical solutions and control the global mass balance better than the modified Picard finite difference method (MPFD), as judged against analytical solutions. In addition, FAMM is superior to the finite analytic method based on the head-based Richards' equation (FAMH). FAMM solutions are also compared to analytical solutions for wetting and drying processes in Brindabella Silty Clay Loam and Yolo Light Clay soils. Finally, we demonstrate that FAMM yields results comparable with those from MPFD and Hydrus-1D for simulating infiltration into other soils under wet and dry conditions. These numerical experiments further confirm that, as long as a hydraulic constitutive model captures the general behaviors of other models, it can be used to yield flow fields comparable to those based on the other models.
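
    For reference, the mixed-form Richards' equation solved here can be written in its one-dimensional vertical form as

        \frac{\partial \theta(h)}{\partial t} = \frac{\partial}{\partial z}\left[K(h)\left(\frac{\partial h}{\partial z} + 1\right)\right],

    where \theta is the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity, and z the vertical coordinate (positive upward); the "mixed form" refers to the simultaneous use of \theta and h, which is what gives the formulation its favorable mass-balance properties.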

  4. A Simulation-Based Comparison of Several Stochastic Linear Regression Methods in the Presence of Outliers.

    ERIC Educational Resources Information Center

    Rule, David L.

    Several regression methods were examined within the framework of weighted structural regression (WSR), comparing their regression weight stability and score estimation accuracy in the presence of outlier contamination. The methods compared are: (1) ordinary least squares; (2) WSR ridge regression; (3) minimum risk regression; (4) minimum risk 2;…

  5. Continuous surface force based lattice Boltzmann equation method for simulating thermocapillary flow

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Zheng, Song; Zhai, Qinglan

    2016-02-01

    In this paper, we extend a lattice Boltzmann equation (LBE) method with a continuous surface force (CSF) to simulate thermocapillary flows. The model builds on our previous CSF LBE for athermal two-phase flow, in which the interfacial tension forces and the Marangoni stresses resulting from interface interactions between different phases are described by the CSF concept. In this model, the sharp interfaces between different phases are replaced by narrow transition layers, and the kinetics and morphology evolution of phase separation are characterized by an order parameter via the Cahn-Hilliard equation, which is solved in the framework of the LBE. The scalar convection-diffusion equation for the temperature field is solved by a thermal LBE. The model is validated against thermal two-layered Poiseuille flow and against two superimposed planar fluids at negligibly small Reynolds and Marangoni numbers for thermocapillary-driven convection, both of which have analytical solutions for the velocity and temperature. Thermocapillary migration of two- and three-dimensional deformable droplets is then simulated. Numerical results show that the predictions of the present LBE agree with the analytical solutions and other numerical results.

  6. Task-based image quality evaluation of iterative reconstruction methods for low dose CT using computer simulations.

    PubMed

    Xu, Jingyan; Fuld, Matthew K; Fung, George S K; Tsui, Benjamin M W

    2015-04-01

    Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improve image quality or reduce radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 to 25% D0. A lesion of fixed size and contrast was inserted at different locations into the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low dose protocol, lower than the standard dose due to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved. PMID:25776521

  7. A numerical simulation of the hole-tone feedback cycle based on an axisymmetric discrete vortex method and Curle's equation

    NASA Astrophysics Data System (ADS)

    Langthjem, M. A.; Nakano, M.

    2005-11-01

    An axisymmetric numerical simulation approach to the hole-tone self-sustained oscillation problem is developed, based on the discrete vortex method for the incompressible flow field and a representation of flow noise sources on an acoustically compact impingement plate by Curle's equation. The shear layer of the jet is represented by 'free' discrete vortex rings, and the jet nozzle and the end plate by bound vortex rings. A vortex ring is released from the nozzle at each time step in the simulation, and the newly released vortex rings are disturbed by acoustic feedback. It is found that the basic feedback cycle works hydrodynamically. The effect of the acoustic feedback is to suppress the broadband noise and reinforce the characteristic frequency and its higher harmonics. An experimental investigation is also described, in which a hot wire probe was used to measure velocity fluctuations in the shear layer and a microphone to measure acoustic pressure fluctuations. Comparisons between simulated and experimental results show quantitative agreement with respect to both frequency and amplitude of the shear layer velocity fluctuations. For the acoustic pressure fluctuations, there is quantitative agreement with respect to frequencies and reasonable qualitative agreement with respect to the peaks at the characteristic frequency and its higher harmonics. Both simulated and measured frequencies f follow the criterion L/u_c + L/c_0 = n/f, where L is the gap length between the nozzle exit and the end plate, u_c is the shear layer convection velocity, c_0 is the speed of sound, and n is a mode number (n = 1/2, 1, 3/2, ...). The experimental results, however, display a complicated pattern of mode jumps, which the numerical method cannot capture.

  8. Large Eddy simulation of compressible flows with a low-numerical dissipation patch-based adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Pantano, Carlos

    2005-11-01

    We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and, as such, it utilizes refinement to computational advantage. The numerical method for the resolved scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions, while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model, Misra & Pullin (1997)

  9. Finger milling-cutter CNC generating hypoid pinion tooth surfaces based on modified-roll method and machining simulation

    NASA Astrophysics Data System (ADS)

    Li, Genggeng; Deng, Xiaozhong; Wei, Bingyang; Lei, Baozhen

    2011-05-01

    Two coordinate systems, one for the cradle-type hypoid generator and one for a free-form CNC machine tool that uses a disc milling-cutter to generate hypoid pinion tooth surfaces by the modified-roll method, were set up, and the principle and method for transforming machine-tool settings between the two coordinate systems were studied. The finger milling-cutter is treated as mounted on an imagined disc milling-cutter, with its motion controlled directly by the CNC axes so as to reproduce the effective cutting motion of the disc milling-cutter blades. Finger milling-cutter generation is accomplished by ordered circular interpolation, for which the interpolation center and the starting and ending points are worked out. Finally, a hypoid pinion was virtually machined using the CNC machining simulation software VERICUT.

  10. Finger milling-cutter CNC generating hypoid pinion tooth surfaces based on modified-roll method and machining simulation

    NASA Astrophysics Data System (ADS)

    Li, Genggeng; Deng, Xiaozhong; Wei, Bingyang; Lei, Baozhen

    2010-12-01

    Two coordinate systems, one for the cradle-type hypoid generator and one for a free-form CNC machine tool that uses a disc milling-cutter to generate hypoid pinion tooth surfaces by the modified-roll method, were set up, and the principle and method for transforming machine-tool settings between the two coordinate systems were studied. The finger milling-cutter is treated as mounted on an imagined disc milling-cutter, with its motion controlled directly by the CNC axes so as to reproduce the effective cutting motion of the disc milling-cutter blades. Finger milling-cutter generation is accomplished by ordered circular interpolation, for which the interpolation center and the starting and ending points are worked out. Finally, a hypoid pinion was virtually machined using the CNC machining simulation software VERICUT.

  11. Numerical simulation and experimental validation of biofilm in a multi-physics framework using an SPH based method

    NASA Astrophysics Data System (ADS)

    Soleimani, Meisam; Wriggers, Peter; Rath, Henryke; Stiesch, Meike

    2016-06-01

    In this paper, a 3D computational model has been developed to investigate biofilms in a multi-physics framework using smoothed particle hydrodynamics (SPH) based on a continuum approach. Biofilm formation is a complex process in the sense that several physical phenomena are coupled and consequently different time-scales are involved. On one hand, biofilm growth is driven by biological reaction and nutrient diffusion; on the other hand, it is influenced by fluid flow causing biofilm deformation and interface erosion in the context of fluid and deformable solid interaction. The geometrical and numerical complexity arising from these phenomena poses serious complications and challenges for grid-based techniques such as the finite element method. Here the solution is based on SPH, a powerful meshless method. SPH-based computational modeling is quite new in the biological community, and the method is uniquely robust in capturing interface-related processes of biofilm formation such as erosion. The obtained results show good agreement with experimental and published data, which demonstrates that the model is capable of simulating and predicting the overall spatial and temporal evolution of biofilms.
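
    At the core of any SPH model is a smoothing kernel used, for instance, in the density summation rho_i = sum_j m_j W(|r_i - r_j|, h). The sketch below uses the standard cubic spline kernel in 3-D; the abstract does not state which kernel the model employs, so this choice is an assumption.

    import numpy as np

    def cubic_spline_kernel(r, h):
        """Monaghan's cubic spline smoothing kernel in 3-D, normalized
        by 1/(pi*h**3); the support radius is 2*h."""
        q = np.asarray(r, dtype=float) / h
        sigma = 1.0 / (np.pi * h**3)
        W = np.zeros_like(q)
        m1 = q <= 1.0
        m2 = (q > 1.0) & (q <= 2.0)
        W[m1] = sigma * (1.0 - 1.5 * q[m1]**2 + 0.75 * q[m1]**3)
        W[m2] = sigma * 0.25 * (2.0 - q[m2])**3
        return W

    # Density summation over neighbours j: rho_i = sum_j m_j * W(|r_ij|, h)
    print(cubic_spline_kernel(np.array([0.0, 0.5, 1.5, 2.5]), h=1.0))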

  12. Testing planetary transit detection methods with grid-based Monte-Carlo simulations.

    NASA Astrophysics Data System (ADS)

    Bonomo, A. S.; Lanza, A. F.

    The detection of extrasolar planets by means of the transit method is a rapidly growing field of modern astrophysics. The periodic light dips produced by the passage of a planet in front of its parent star can be used to reveal the presence of the planet itself, to measure its orbital period and relative radius, as well as to perform studies on the outer layers of the planet by analysing the light of the star passing through the planet's atmosphere. We have developed a new method to detect transits of Earth-sized planets in front of solar-like stars that allows us to reduce the impact of stellar microvariability on transit detection. A large Monte Carlo numerical experiment has been designed to test the performance of our approach in comparison with other transit detection methods for stars of different magnitudes and planets of different radius and orbital period, as will be observed by the space experiments CoRoT and Kepler. The large computational load of this experiment has been managed by means of the Grid infrastructure of the COMETA consortium.

  13. Numerical simulation the pollutants transport in the Lake base on remote sensing image with Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Qiao, Y.

    2013-12-01

    As China's economy has developed, water pollution incidents have occurred frequently; for example, cyanobacterial bloom events repeatedly occur in Taihu Lake. In this research, we investigate the transport of pollutant solutes released at different points, such as the eutrophication-driving nutrients nitrogen and phosphorus, with the Lattice Boltzmann Method (LBM) performed on real pore geometries. The LBM has emerged as a powerful tool for simulating the behaviour of multi-component fluid systems in complex pore networks. We build a quick-response simulation system based on high-resolution GIS imagery using the LBM. When the release starts at two different points in Meiliang Bay near the city of Wuxi, the simulations show that the pollutant solutes cannot be transported out of the bay to influence the rest of Taihu Lake, and the diffusion areas are similar. On the other hand, when the release point is in the central region of Taihu Lake, the pollutant solute covers almost the whole area of the lake, providing good conditions for cyanobacterial blooms; likewise, a bloom transported into the central area would pollute the whole lake. Therefore, monitoring and treatment of eutrophication substances should focus on the central area of the lake.
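
    The transport kernel of such a quick-response system can be sketched with a minimal D2Q9 lattice Boltzmann solver for a passive scalar advected by a prescribed flow; the uniform velocity, periodic boundaries and lattice units below are illustrative assumptions, and the GIS geometry and real flow field are omitted.

    import numpy as np

    # Minimal D2Q9 lattice Boltzmann solver for a passive scalar C
    # advected by a prescribed uniform flow (lattice units, periodic
    # boundaries).  The diffusivity is D = (tau - 0.5) / 3.
    w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
    e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    nx = ny = 100
    tau, ux, uy = 0.8, 0.05, 0.0

    C = np.zeros((nx, ny))
    C[20:25, 45:55] = 1.0                     # initial pollutant patch
    g = w[:, None, None] * C[None, :, :]      # equilibrium initialization

    for step in range(500):
        eu = e[:, 0, None, None] * ux + e[:, 1, None, None] * uy
        geq = w[:, None, None] * C[None, :, :] * (1 + 3 * eu)
        g += (geq - g) / tau                  # BGK collision
        for i in range(9):                    # streaming
            g[i] = np.roll(g[i], tuple(e[i]), axis=(0, 1))
        C = g.sum(axis=0)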

  14. A free energy simulation method based study of interfacial segregation. Annual progress report, FY 1992

    SciTech Connect

    Srolovitz, D.J.

    1993-05-18

    Binary alloys were investigated. Segregation to and thermodynamics of twist grain boundaries in Cu-Ni were studied. Segregation to and order-disorder phase transitions at grain boundaries in ordered Ni_{3-x}Al_{1+x} were also investigated. Order-disorder transitions at and segregation to the (001), (011), and (111) surfaces in Pd-Cu, Pd-Ag, and Pd-Au alloys were investigated. The (001) surface in Cu-rich alloys undergoes a surface phase transition from disordered to ordered surface phase upon cooling from high temperature, similar to the (001) surface transition in Ni-rich Pt-Ni alloys. Segregation and ordering appear to be correlated. The free energy minimization method was also used to calculate the heat of formation and lattice parameter of Ag-Cu metastable phases. Results of free energy minimization for free energy and entropy of Si agree with experiment and quasiharmonic calculations.

  15. Comparison of Ensemble Kalman Filter groundwater-data assimilation methods based on stochastic moment equations and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2014-04-01

    Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
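
    For orientation, the MC-based EnKF analysis step being compared against can be sketched in a few lines of Python; the perturbed-observation variant shown here is one standard formulation and an assumption, not necessarily the exact implementation used in the study.

    import numpy as np

    def enkf_update(X, d, H, R, rng):
        """Perturbed-observation EnKF analysis step.
        X: (n_state, n_ens) forecast ensemble; d: (n_obs,) data;
        H: (n_obs, n_state) observation operator; R: (n_obs, n_obs)."""
        n = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
        HA = H @ A
        P_hh = HA @ HA.T / (n - 1) + R                  # innovation covariance
        P_xh = A @ HA.T / (n - 1)                       # cross covariance
        K = np.linalg.solve(P_hh, P_xh.T).T             # Kalman gain
        D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, size=n).T
        return X + K @ (D - H @ X)

    # Demo: 3-state ensemble of 50 members, one observed component
    rng = np.random.default_rng(0)
    X = rng.standard_normal((3, 50))
    Xa = enkf_update(X, np.array([0.5]), np.array([[1.0, 0.0, 0.0]]),
                     np.array([[0.1]]), rng)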

  16. An improved simulation of the deep Pacific Ocean using optimally estimated vertical diffusivity based on the Green's function method

    NASA Astrophysics Data System (ADS)

    Toyoda, Takahiro; Sugiura, Nozomi; Masuda, Shuhei; Sasaki, Yuji; Igarashi, Hiromichi; Ishikawa, Yoichi; Hatayama, Takaki; Kawano, Takeshi; Kawai, Yoshimi; Kouketsu, Shinya; Katsumata, Katsuro; Uchida, Hiroshi; Doi, Toshimasa; Fukasawa, Masao; Awaji, Toshiyuki

    2015-11-01

    An improved vertical diffusivity scheme is introduced into an ocean general circulation model to better reproduce the observed features of water property distribution inherent in the deep Pacific Ocean structure. The scheme incorporates (a) a horizontally uniform background profile, (b) a parameterization depending on the local static stability, and (c) a parameterization depending on the bottom topography. Weighting factors for these parameterizations are optimally estimated based on the Green's function method. The optimized values indicate an important role of both the intense vertical diffusivity near rough topography and the background vertical diffusivity. This is consistent with recent reports that indicate the presence of significant vertical mixing associated with finite-amplitude internal wave breaking along the bottom slope and its remote effect. The robust simulation with less artificial trend of water properties in the deep Pacific Ocean illustrates that our approach offers a better modeling analysis for the deep ocean variability.

  17. Numerical simulation of flows around two circular cylinders by mesh-free least square-based finite difference methods

    NASA Astrophysics Data System (ADS)

    Ding, H.; Shu, C.; Yeo, K. S.; Xu, D.

    2007-01-01

    In this paper, the mesh-free least square-based finite difference (MLSFD) method is applied to numerically study the flow field around two circular cylinders arranged in side-by-side and tandem configurations. For each configuration, various geometrical arrangements are considered, in order to reveal the different flow regimes characterized by the gap between the two cylinders. In this work, the flow simulations are carried out in the low Reynolds number range, that is, Re=100 and 200. Instantaneous vorticity contours and streamlines around the two cylinders are used as the visualization aids. Some flow parameters such as Strouhal number, drag and lift coefficients calculated from the solution are provided and quantitatively compared with those provided by other researchers.

  18. Spectral-Element Simulations of Wave Propagation in Porous Media: Finite-Frequency Sensitivity Kernels Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Morency, C.; Tromp, J.

    2008-12-01

    The mathematical formulation of wave propagation in porous media developed by Biot is based upon the principle of virtual work, ignoring processes at the microscopic level, and does not explicitly incorporate gradients in porosity. Based on recent studies focusing on averaging techniques, we derive the macroscopic porous medium equations from the microscale, with a particular emphasis on the effects of gradients in porosity. In doing so, we are able to naturally determine two key terms in the momentum equations and constitutive relationships, directly translating the coupling between the solid and fluid phases, namely a drag force and an interfacial strain tensor. In both terms, gradients in porosity arise. One remarkable result is that when we rewrite this set of equations in terms of the well-known Biot variables (u_s, w), terms involving gradients in porosity are naturally accommodated by gradients involving w, the fluid motion relative to the solid, and Biot's formulation is recovered, i.e., it remains valid in the presence of porosity gradients. We have developed a numerical implementation of the Biot equations for two-dimensional problems based upon the spectral-element method (SEM) in the time domain. The SEM is a high-order variational method, which has the advantage of accommodating complex geometries like a finite-element method, while keeping the exponential convergence rate of (pseudo)spectral methods. As in the elastic and acoustic cases, poroelastic wave propagation based upon the SEM involves a diagonal mass matrix, which leads to explicit time integration schemes that are well-suited to simulations on parallel computers. Effects associated with physical dispersion & attenuation and frequency-dependent viscous resistance are addressed by using a memory variable approach. Various benchmarks involving poroelastic wave propagation in the high- and low-frequency regimes, and acoustic-poroelastic and poroelastic-poroelastic discontinuities have been

  19. A GPU accelerated, discrete time random walk model for simulating reactive transport in porous media using colocation probability function based reaction methods

    NASA Astrophysics Data System (ADS)

    Barnard, J. M.; Augarde, C. E.

    2012-12-01

    The simulation of reactions in flow through unsaturated porous media is a more complicated process when using particle tracking based models than in continuum based models. In the former, particles are reacted on an individual particle-to-particle basis using either deterministic or probabilistic methods. This means that particle tracking methods, especially when simulations of reactions are included, are computationally intensive, as the reaction simulations require tens of thousands of nearest-neighbour searches per time step. Despite this, particle tracking methods merit further study due to their ability to eliminate numerical dispersion and to simulate anomalous transport and incomplete mixing of reactive solutes. A new model has been developed using discrete time random walk particle tracking methods to simulate reactive mass transport in porous media, which includes a variation of the colocation probability function based methods of reaction simulation presented by Benson & Meerschaert (2008). Model development has also included code acceleration via graphics processing units (GPUs). The nature of particle tracking methods means that they are well suited to parallelization using GPUs, whose architecture is single instruction - multiple data (SIMD): only one operation can be performed at any one time, but it can be performed on multiple data simultaneously. This allows for significant speed gains where long loops of independent operations are performed. Computationally expensive code elements, such as the nearest-neighbour searches required by the reaction simulation, are therefore prime targets for GPU acceleration.

  20. System simulation method for fiber-based homodyne multiple target interferometers using short coherence length laser sources

    NASA Astrophysics Data System (ADS)

    Fox, Maik; Beuth, Thorsten; Streck, Andreas; Stork, Wilhelm

    2015-09-01

    Homodyne laser interferometers for velocimetry are well-known optical systems used in many applications. While the detector power output signal of such a system, using a long coherence length laser and a single target, is easily modelled using the Doppler shift, scenarios with a short coherence length source, e.g. an unstabilized semiconductor laser, and multiple weak targets demand a more elaborate approach to simulation. Especially when using fiber components, the actual setup is an important factor for system performance, as effects like return losses and multiple-path propagation have to be taken into account. If the power received from the targets is in the same region as stray light created in the fiber setup, a complete system simulation becomes a necessity. In previous work, a phasor-based signal simulation approach for interferometers based on short coherence length laser sources was evaluated. To facilitate the use of the signal simulation, a fiber component ray tracer has since been developed that allows the creation of input files for the signal simulation environment. The software uses object-oriented MATLAB code, simplifying the entry of different fiber setups and the extension of the ray tracer. Thus, a seamless path from a system description based on arbitrarily interconnected fiber components to a signal simulation for different target scenarios has been established. The ray tracer and signal simulation are being used for the evaluation of interferometer concepts incorporating delay lines to compensate for short coherence length.
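
    The following Python sketch shows what such a phasor-based signal model can look like: each target return is a phasor with a Doppler-shifted phase, weighted by a coherence factor that decays with the path difference relative to the coherence length. The Gaussian coherence envelope and all numbers are assumptions for illustration, not the paper's model.

        import numpy as np

        lam, Lc = 1.55e-6, 1e-3          # wavelength (m), coherence length (m)
        t = np.linspace(0.0, 1e-3, 100_000)
        # per target: (relative amplitude, velocity m/s, path difference m)
        targets = [(0.10, 0.5, 2e-4),    # within L_c: interferes strongly
                   (0.03, -1.2, 5e-3)]   # beyond L_c: largely incoherent

        E = np.full(t.shape, 1.0 + 0j)   # local-oscillator phasor
        for a, vel, dL in targets:
            gamma = np.exp(-(dL / Lc) ** 2)      # coherence weight (assumed)
            f_dopp = 2 * vel / lam               # homodyne Doppler frequency
            E += gamma * a * np.exp(1j * (2 * np.pi * f_dopp * t
                                          + 4 * np.pi * dL / lam))
        power = np.abs(E) ** 2           # detector output before noise terms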

  1. Simulation-Based Bronchoscopy Training

    PubMed Central

    Kennedy, Cassie C.; Maldonado, Fabien

    2013-01-01

    Background: Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. Methods: We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. Results: From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n = 8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n = 7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, −1.47 to 2.69]) and process (0.33 [95% CI, −1.46 to 2.11]) outcomes (n = 2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Conclusions: Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few. PMID:23370487
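
    For readers unfamiliar with the pooling step, the following sketch shows a random-effects computation of the DerSimonian-Laird type, which is the kind of calculation behind a pooled effect size such as 1.21 (95% CI, 0.82-1.60); the study effects and variances below are invented, not the review's data.

        import numpy as np

        def random_effects(y, v):
            """y: per-study effect sizes; v: within-study variances."""
            w = 1.0 / v
            y_fixed = np.sum(w * y) / np.sum(w)
            Q = np.sum(w * (y - y_fixed) ** 2)       # heterogeneity statistic
            c = np.sum(w) - np.sum(w**2) / np.sum(w)
            tau2 = max(0.0, (Q - (len(y) - 1)) / c)  # between-study variance
            w_star = 1.0 / (v + tau2)
            mu = np.sum(w_star * y) / np.sum(w_star)
            se = np.sqrt(1.0 / np.sum(w_star))
            return mu, (mu - 1.96 * se, mu + 1.96 * se)

        y = np.array([1.4, 0.9, 1.1, 1.6])   # illustrative study effects
        v = np.array([0.10, 0.08, 0.12, 0.20])
        print(random_effects(y, v))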

  2. Global approach for transient shear wave inversion based on the adjoint method: a comprehensive 2D simulation study.

    PubMed

    Arnal, B; Pinton, G; Garapon, P; Pernot, M; Fink, M; Tanter, M

    2013-10-01

    Shear wave imaging (SWI) maps soft tissue elasticity by measuring shear wave propagation with ultrafast ultrasound acquisitions (10 000 frames s(-1)). This spatiotemporal data can be used as input for an inverse problem that determines a shear modulus map. Common inversion methods are local: the shear modulus at each point is calculated based on the values of its neighbours (e.g. time-of-flight, wave equation inversion). However, these approaches are sensitive to information loss, such as noise or an absence of backscattered signal. In this paper, we evaluate the benefits of a global approach to elasticity inversion using a least-squares formulation, derived from full waveform inversion in geophysics and known as the adjoint method. We simulate an acoustic waveform in a medium with a soft and a hard lesion. For this initial application, full elastic propagation and viscosity are ignored. We demonstrate that the reconstruction of the shear modulus map is robust with a non-uniform background or in the presence of noise with regularization. Compared to regular local inversions, the global approach leads to an increase of contrast (∼+3 dB) and a decrease of the quantification error (∼+2%). We demonstrate that the inversion is reliable in cases where no signal is measured within the inclusions, such as hypoechoic lesions, which could have an impact on medical diagnosis. PMID:24018867

  3. Optimization of the Homogenization Heat Treatment of Nickel-Based Superalloys Based on Phase-Field Simulations: Numerical Methods and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Rettig, Ralf; Ritter, Nils C.; Müller, Frank; Franke, Martin M.; Singer, Robert F.

    2015-12-01

    A method for predicting the fastest possible homogenization treatment of the as-cast microstructure of nickel-based superalloys is presented and compared with experimental results for the single-crystal superalloy ERBO/1. The computational prediction method is based on phase-field simulations. Experimentally determined compositional fields of the as-cast microstructure from microprobe measurements are used as input data. The software program MICRESS is employed to account for multicomponent diffusion, dissolution of the eutectic phases, and nucleation and growth of the liquid phase (incipient melting). The optimization itself is performed using an iterative algorithm that increases the temperature in such a way that the microstructural state is always very close to the incipient melting limit. Maps are derived that describe the dissolution of primary γ/γ'-islands and the elimination of residual segregation as functions of temperature and time.
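
    The greedy logic of such a schedule can be illustrated with a self-contained toy: residual segregation decays faster at higher temperature, the incipient melting limit rises as segregation decays, and after every step the temperature is pushed up to a fixed safety margin below that limit. Every physical relation and constant below is invented for illustration; the real method drives a MICRESS phase-field simulation instead.

        import numpy as np

        T_eq, k_seg = 1613.0, 200.0   # solidus of the homogenized state (K),
                                      # segregation penalty on melting limit (K)
        margin, dt = 5.0, 60.0        # safety margin (K), time step (s)

        sigma, T = 1.0, 1450.0        # normalized segregation, start T (K)
        schedule = []
        for step in range(600):
            rate = 0.1 * np.exp(-10000.0 / T)   # toy Arrhenius kinetics
            sigma *= np.exp(-rate * dt)         # segregation decay this step
            T_im = T_eq - k_seg * sigma         # current incipient-melting limit
            T = max(T, T_im - margin)           # greedy temperature increase
            schedule.append((step * dt, T, sigma))
        print(schedule[-1])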

  4. Numerical hydrodynamic simulations based on semi-analytic galaxy merger trees: method and Milky Way-like galaxies

    NASA Astrophysics Data System (ADS)

    Moster, Benjamin P.; Macciò, Andrea V.; Somerville, Rachel S.

    2014-01-01

    We present a new approach to study galaxy evolution in a cosmological context. We combine cosmological merger trees and semi-analytic models of galaxy formation to provide the initial conditions for multimerger hydrodynamic simulations. In this way, we exploit the advantages of merger simulations (high resolution and inclusion of the gas physics) and semi-analytic models (cosmological background and low computational cost), and integrate them to create a novel tool. This approach allows us to study the evolution of various galaxy properties, including the treatment of the hot gaseous halo from which gas cools and accretes on to the central disc, which has been neglected in many previous studies. This method shows several advantages over other methods. As only the particles in the regions of interest are included, the run time is much shorter than in traditional cosmological simulations, leading to greater computational efficiency. Using cosmological simulations, we show that multiple mergers are expected to be more common than sequences of isolated mergers, and therefore studies of galaxy mergers should take this into account. In this pilot study, we present our method and illustrate the results of simulating 10 Milky Way-like galaxies since z = 1. We find good agreement with observations for the total stellar masses, star formation rates, cold gas fractions and disc scalelength parameters. We expect that this novel numerical approach will be very useful for pursuing a number of questions pertaining to the transformation of galaxy internal structure through cosmic time.

  5. New methods in plasma simulation

    SciTech Connect

    Mason, R.J.

    1990-02-23

    The development of implicit methods of particle-in-cell (PIC) computer simulation in recent years, and their merger with older hybrid methods, have created a new arsenal of simulation techniques for the treatment of complex practical problems in plasma physics. The new implicit hybrid codes are aimed at transitional problems that lie somewhere between the long time scale, high density regime associated with MHD modeling and the short time scale, low density regime appropriate to traditional PIC techniques. This transitional regime arises in ICF coronal plasmas, in pulsed power plasma switches, in Z-pinches, and in foil implosions. Here, we outline how such a merger of implicit and hybrid methods has been carried out, specifically in the ANTHEM computer code, and demonstrate the utility of implicit hybrid simulation in applications. 25 refs., 5 figs.

  6. Simulation of optimal arctic routes using a numerical sea ice model based on an ice-coupled ocean circulation method

    NASA Astrophysics Data System (ADS)

    Nam, Jong-Ho; Park, Inha; Lee, Ho Jin; Kwon, Mi Ok; Choi, Kyungsik; Seo, Young-Kyo

    2013-06-01

    Ever since the Arctic region opened its passage to mankind, continuous attempts have been made to take advantage of the fastest routes across the region. The Arctic is still covered by thick ice, and thus finding a feasible navigable route is essential for an economical voyage. To find the optimal route, it is necessary to establish an efficient transit model that enables us to simulate every possible route in advance. In this work, an enhanced algorithm to determine the optimal route in the Arctic region is introduced. A transit model based on simulated sea ice and environmental data numerically modeled for the Arctic is developed. By integrating the simulated data into the transit model, further applications such as route simulation, cost estimation or hindcasting can easily be performed. An interactive simulation system that determines the optimal Arctic route using the transit model is developed. The simulation of optimal routes is carried out and the validity of the results is discussed.
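
    A transit model of this kind ultimately reduces to a shortest-path search over a cost field derived from the simulated ice data. The sketch below is a generic illustration of that reduction (Dijkstra over a toy grid whose cell costs stand in for ice-dependent transit times), not the authors' algorithm.

        import heapq

        def optimal_route(cost, start, goal):
            """cost[i][j]: hours to enter cell (i, j); None = impassable ice."""
            n, m = len(cost), len(cost[0])
            dist, prev = {start: 0.0}, {}
            pq = [(0.0, start)]
            while pq:
                d, u = heapq.heappop(pq)
                if u == goal:
                    break
                if d > dist.get(u, float("inf")):
                    continue
                i, j = u
                for v in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if (0 <= v[0] < n and 0 <= v[1] < m
                            and cost[v[0]][v[1]] is not None):
                        nd = d + cost[v[0]][v[1]]
                        if nd < dist.get(v, float("inf")):
                            dist[v], prev[v] = nd, u
                            heapq.heappush(pq, (nd, v))
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1], dist[goal]

        ice_cost = [[1, 1, 4], [None, 2, 3], [1, 1, 1]]   # toy ice-cost field
        print(optimal_route(ice_cost, (0, 0), (2, 2)))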

  7. Finite element numerical simulation of 2.5D direct current method based on mesh refinement and recoarsement

    NASA Astrophysics Data System (ADS)

    Zhang, Qian-Jiang; Dai, Shi-Kun; Chen, Long-Wei; Qiang, Jian-Ke; Li, Kun; Zhao, Dong-Dong

    2016-06-01

    To deal with the problem of low computational precision at the nodes near the source and satisfy the requirements for computational efficiency in inversion imaging and finite-element numerical simulations of the direct current method, we propose a new mesh refinement and recoarsement method for a two-dimensional point source. We introduce the mesh refinement and mesh recoarsement into the traditional structured mesh subdivision. By refining the horizontal grids, the singularity owing to the point source is minimized and the topography is simulated. By recoarsening the horizontal grids, the number of grid cells is reduced significantly and computational efficiency is improved. Model tests show that the proposed method solves the singularity problem and reduces the number of grid cells by 80% compared to the uniform grid refinement.

  8. Bootstrapping Methods Applied for Simulating Laboratory Works

    ERIC Educational Resources Information Center

    Prodan, Augustin; Campean, Remus

    2005-01-01

    Purpose: The aim of this work is to implement bootstrapping methods into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical methods to infer the truth from sample…
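
    The core computation such e-tools expose to students is small enough to show whole. The sketch below is a generic bootstrap of a mean with a percentile confidence interval (the sample data are invented), not code from the paper's Java tools.

        import numpy as np

        rng = np.random.default_rng(42)
        sample = np.array([4.8, 5.1, 5.6, 4.9, 6.2, 5.3, 5.0, 5.8])  # lab data
        # resample with replacement to approximate the sampling distribution
        boot_means = np.array([
            rng.choice(sample, size=sample.size, replace=True).mean()
            for _ in range(10_000)
        ])
        ci = np.percentile(boot_means, [2.5, 97.5])   # 95% percentile interval
        print(f"mean = {sample.mean():.2f}, CI = [{ci[0]:.2f}, {ci[1]:.2f}]")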

  9. A low-numerical dissipation, patch-based adaptive-mesh-refinement method for large-eddy simulation of compressible flows

    NASA Astrophysics Data System (ADS)

    Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.

    2006-09-01

    This paper describes a hybrid finite-difference method for the large-eddy simulation of compressible flows with low-numerical dissipation and structured adaptive mesh refinement (SAMR). A conservative flux-based approach is described with an explicit centered scheme used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. Three-dimensional numerical simulations of a Richtmyer-Meshkov instability are presented.

  10. Jacobian Free-Newton Krylov Discontinuous Galerkin Method and Physics-Based Preconditioning for Nuclear Reactor Simulations

    SciTech Connect

    HyeongKae Park; R. Nourgaliev; Richard C. Martineau; Dana A. Knoll

    2008-09-01

    Multidimensional, higher-order (2nd and higher) numerical methods have come to the forefront in recent years due to significant advances in computer technology and numerical algorithms, and have shown great potential as viable design tools for realistic applications. To achieve this goal, implicit, high-order accurate coupling of the multiphysics simulations is a critical component. One of the issues that arises in multiphysics simulation is the necessity of resolving multiple time scales. For example, the dynamical time scales of neutron kinetics, fluid dynamics and heat conduction differ significantly (typically by >10^10), with the dominant (fastest) physical mode also changing during the course of the transient [Pope and Mousseau, 2007]. This leads to severe time step restrictions for stability in traditional multiphysics (i.e. operator split, semi-implicit discretization) simulations, while lower order methods suffer from undesirable numerical dissipation. Thus, an implicit, higher-order accurate scheme is necessary to perform seamlessly-coupled multiphysics simulations that can be used to analyze the “what-if” regulatory accident scenarios, or to design and optimize engineering systems.
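
    The Jacobian-free Newton-Krylov idea named in the title can be shown in a few lines: the Krylov solver only needs Jacobian-vector products, which are approximated by a finite difference of the nonlinear residual, so the Jacobian is never formed or stored. The sketch below applies it to a toy 2x2 system (nothing reactor-specific); the physics-based preconditioner of the paper would be supplied to GMRES where noted.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def residual(u):                      # toy nonlinear system F(u) = 0
            return np.array([u[0]**2 + u[1] - 3.0,
                             u[0] + u[1]**2 - 5.0])

        u = np.array([1.0, 1.0])
        for it in range(20):
            F = residual(u)
            if np.linalg.norm(F) < 1e-10:
                break
            eps = 1e-7 * (1.0 + np.linalg.norm(u))
            # matrix-free J*v via a finite difference of the residual
            Jv = LinearOperator((2, 2), dtype=float,
                                matvec=lambda v: (residual(u + eps * v) - F) / eps)
            du, info = gmres(Jv, -F)          # a preconditioner M would go here
            u = u + du
        print(u, residual(u))                 # converges to (1, 2) on this toy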

  11. An image-based reaction field method for electrostatic interactions in molecular dynamics simulations of aqueous solutions

    NASA Astrophysics Data System (ADS)

    Lin, Yuchun; Baumketner, Andrij; Deng, Shaozhong; Xu, Zhenli; Jacobs, Donald; Cai, Wei

    2009-10-01

    In this paper, a new solvation model is proposed for simulations of biomolecules in aqueous solutions that combines the strengths of explicit and implicit solvent representations. Solute molecules are placed in a spherical cavity filled with explicit water, thus providing microscopic detail where it is most needed. Solvent outside of the cavity is modeled as a dielectric continuum whose effect on the solute is treated through the reaction field corrections. With this explicit/implicit model, the electrostatic potential represents a solute molecule in an infinite bath of solvent, thus avoiding unphysical interactions between periodic images of the solute commonly used in the lattice-sum explicit solvent simulations. For improved computational efficiency, our model employs an accurate and efficient multiple-image charge method to compute reaction fields together with the fast multipole method for the direct Coulomb interactions. To minimize the surface effects, periodic boundary conditions are employed for nonelectrostatic interactions. The proposed model is applied to study liquid water. The effect of model parameters, which include the size of the cavity, the number of image charges used to compute reaction field, and the thickness of the buffer layer, is investigated in comparison with the particle-mesh Ewald simulations as a reference. An optimal set of parameters is obtained that allows for a faithful representation of many structural, dielectric, and dynamic properties of the simulated water, while maintaining manageable computational cost. With controlled and adjustable accuracy of the multiple-image charge representation of the reaction field, it is concluded that the employed model achieves convergence with only one image charge in the case of pure water. Future applications to pKa calculations, conformational sampling of solvated biomolecules and electrolyte solutions are briefly discussed.

  12. A 3-Dimensional Absorbed Dose Calculation Method Based on Quantitative SPECT for Radionuclide Therapy: Evaluation for 131I Using Monte Carlo Simulation

    PubMed Central

    Ljungberg, Michael; Sjögreen, Katarina; Liu, Xiaowei; Frey, Eric; Dewaraja, Yuni; Strand, Sven-Erik

    2009-01-01

    A general method is presented for patient-specific 3-dimensional absorbed dose calculations based on quantitative SPECT activity measurements. Methods: The computational scheme includes a method for registration of the CT image to the SPECT image and position-dependent compensation for attenuation, scatter, and collimator detector response performed as part of an iterative reconstruction method. A method for conversion of the measured activity distribution to a 3-dimensional absorbed dose distribution, based on the EGS4 (electron-gamma shower, version 4) Monte Carlo code, is also included. The accuracy of the activity quantification and the absorbed dose calculation is evaluated on the basis of realistic Monte Carlo-simulated SPECT data, using the SIMIND (simulation of imaging nuclear detectors) program and a voxel-based computer phantom. CT images are obtained from the computer phantom, and realistic patient movements are added relative to the SPECT image. The SPECT-based activity concentration and absorbed dose distributions are compared with the true ones. Results: Correction could be made for object scatter, photon attenuation, and scatter penetration in the collimator. However, inaccuracies were imposed by the limited spatial resolution of the SPECT system, for which the collimator response correction did not fully compensate. Conclusion: The presented method includes compensation for most parameters degrading the quantitative image information. The compensation methods are based on physical models and therefore are generally applicable to other radionuclides. The proposed evaluation methodology may be used as a basis for future intercomparison of different methods. PMID:12163637

  13. A simulation method for the fruitage body

    NASA Astrophysics Data System (ADS)

    Lu, Ling; Song, Weng-lin; Wang, Lei

    2009-07-01

    An effective visual modeling method for creating fruit bodies is presented. Based on the geometric shape characteristics of fruit, we build a surface model using ellipsoid deformation. The surface model is parameterized by radius: each radius yields a surface within the fruit, and the same method is used to simulate the fruit's internal shape. The body model is formed by combining the surface model with the radial direction, so the method can simulate both the inner and outer structure of a fruit body. The method greatly reduces the amount of data and increases display speed. In addition, the texture model of the fruit is defined as a sum of basis functions, which is simple and fast. We show the feasibility of the method by creating a winter jujube and an apricot, each including exocarp, mesocarp and endocarp. The method is useful for developing virtual plants.

  14. [The validation of the effect of correcting spectral background changes based on floating reference method by simulation].

    PubMed

    Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin

    2015-02-01

    Near-infrared non-invasive blood glucose measurement faces several challenges, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and unpredictable, irregular changes in the measured object. It is therefore difficult to accurately extract information about blood glucose concentration from the complicated signals. Reference measurement methods are usually considered as a way to eliminate the effect of background changes, but no reference substance changes synchronously with the analyte. After many years of research, our group has proposed the floating reference method, which succeeds in eliminating the spectral effects induced by instrument drift and by variations in the measured object's background. However, our studies indicate that the reference point changes with measurement location and wavelength, so the effectiveness of the floating reference method should be verified comprehensively. In this paper, for simplicity, Monte Carlo simulations employing Intralipid solutions with concentrations of 5% and 10% are performed to verify the ability of the floating reference method to eliminate the consequences of light source drift, which is introduced by varying the number of incident photons. The effectiveness of the floating reference method, with the corresponding reference points at different wavelengths, in eliminating the variations due to light source drift is estimated. A comparison of the prediction abilities of calibration models with and without this method shows that the RMSEPs are decreased by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method is clearly effective in eliminating background changes. PMID:25970930

  15. Fourier transform-based scattering-rate method for self-consistent simulations of carrier transport in semiconductor heterostructures

    SciTech Connect

    Schrottke, L.; Lü, X.; Grahn, H. T.

    2015-04-21

    We present a self-consistent model for carrier transport in periodic semiconductor heterostructures completely formulated in the Fourier domain. In addition to the Hamiltonian for the layer system, all expressions for the scattering rates, the applied electric field, and the carrier distribution are treated in reciprocal space. In particular, for slowly converging cases of the self-consistent solution of the Schrödinger and Poisson equations, numerous transformations between real and reciprocal space during the iterations can be avoided by using the presented method, which results in a significant reduction of computation time. Therefore, it is a promising tool for the simulation and efficient design of complex heterostructures such as terahertz quantum-cascade lasers.

  16. Matrix method for acoustic levitation simulation.

    PubMed

    Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C

    2011-08-01

    A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort. PMID:21859587

  17. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region

    PubMed Central

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing, there are uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station on the semi-arid Loess Plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm with the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to obtain by observation were optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process on the semi-arid Loess Plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm improved the simulations of soil moisture and latent heat flux in all tests, and differences between simulated results and observational data were clearly reduced; however, adopting the optimized parameters could not simultaneously improve the simulation results for net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on the different datasets have the same order of magnitude but are not identical; the soil parameters vary only slightly, but the variation range of the vegetation parameters is large. PMID:26991786
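
    A minimal particle swarm optimizer of the kind used for such calibration is sketched below. The objective is a stand-in (RMSE of a toy soil-moisture decay model against synthetic "observations"), not the SHAW model, and all bounds and PSO coefficients are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        days = np.arange(30.0)
        obs = 0.35 * np.exp(-days / 12.0) + rng.normal(0, 0.005, 30)

        def objective(p):                 # p = (initial moisture, decay time)
            return np.sqrt(np.mean((p[0] * np.exp(-days / p[1]) - obs) ** 2))

        lo, hi = np.array([0.1, 1.0]), np.array([0.6, 30.0])  # parameter bounds
        x = rng.uniform(lo, hi, (40, 2))                      # 40 particles
        v = np.zeros_like(x)
        pbest = x.copy()
        pval = np.array([objective(p) for p in x])
        g = pbest[pval.argmin()]                              # global best
        for _ in range(200):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            better = f < pval
            pbest[better], pval[better] = x[better], f[better]
            g = pbest[pval.argmin()]
        print(g, pval.min())              # recovers ~(0.35, 12) on this toy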

  19. Simulation-based surgical education.

    PubMed

    Evgeniou, Evgenios; Loizou, Peter

    2013-09-01

    The reduction in time for training at the workplace has created a challenge for the traditional apprenticeship model of training. Simulation offers the opportunity for repeated practice in a safe and controlled environment, focusing on trainees and tailored to their needs. Recent technological advances have led to the development of various simulators, which have already been introduced into surgical training. The complexity and fidelity of the available simulators vary; therefore, depending on our resources, we should select the appropriate simulator for the task or skill we want to teach. Educational theory informs us about the importance of context in professional learning. Simulation should therefore recreate the clinical environment and its complexity. Contemporary approaches to simulation have introduced novel ideas for teaching teamwork, communication skills and professionalism. In order for simulation-based training to be successful, simulators have to be validated appropriately and integrated into a training curriculum. Within a surgical curriculum, trainees should have protected time for simulation-based training, under appropriate supervision. Simulation-based surgical education should allow the appropriate practice of technical skills without ignoring the clinical context and must strike an adequate balance between the simulation environment and simulators. PMID:23088646

  20. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built, and a multilayer perceptron model based on the neural network approach was implemented. The two models were then compared. The results revealed that the proposed approach is practicable for optimizing soil sampling schemes: the optimized configuration captured soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying soil attribute distributions by referring to the spatial layout of the road network, historical samples, and digital elevation data, providing an effective means, as well as a theoretical basis, for determining sampling configurations and mapping the spatial distribution of soil organic matter with low cost and high efficiency. PMID:26211074
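
    The simulated annealing loop at the heart of such an optimization fits in a page. The sketch below swaps one sample location for a random road-accessible candidate and accepts worsening moves with a temperature-controlled probability; the coverage criterion (mean distance from candidates to their nearest sample) is a generic stand-in for the study's soil-landscape criterion, and all points are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        candidates = rng.uniform(0, 10, (400, 2))      # road-accessible points
        idx = rng.choice(400, size=13, replace=False)  # 13 sample locations

        def criterion(ids):
            d = np.linalg.norm(candidates[:, None] - candidates[ids][None],
                               axis=2)
            return d.min(axis=1).mean()   # mean distance to nearest sample

        T, cur = 1.0, criterion(idx)
        for step in range(5000):
            new = idx.copy()
            new[rng.integers(13)] = rng.integers(400)  # swap one location
            if len(set(new)) < 13:
                continue
            val = criterion(new)
            # accept improvements always, worsenings with Boltzmann probability
            if val < cur or rng.random() < np.exp((cur - val) / T):
                idx, cur = new, val
            T *= 0.999                                  # geometric cooling
        print(cur, sorted(idx))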

  1. A method to find correlations between steering feel and vehicle handling properties using a moving base driving simulator

    NASA Astrophysics Data System (ADS)

    Rothhämel, Malte; IJkema, Jolle; Drugge, Lars

    2011-12-01

    There have been several investigations of how drivers experience a change in vehicle-handling behaviour. However, the hypothesis that there is a correlation between what the driver perceives and vehicle-handling properties remains to be verified. To define what people feel, the human perception of steering systems was divided into dimensions of perception. Then 28 test drivers rated different steering system characteristics of a semi-trailer tractor combination in a moving base driving simulator. The characteristics of the steering system differed in friction, damping, inertia and stiffness. The same steering system characteristics were also tested in accordance with international standards for vehicle-handling tests, resulting in characteristic quantities. The instrumental measurements and the non-instrumental ratings were analysed with respect to the correlation between each other with the help of regression analysis and neural networks. Results show that there are correlations between measurements and ratings. Moreover, it is shown which of the handling variables influence the different dimensions of steering feel.

  2. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
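
    The payoff of the time-splitting design can be made concrete with a small 1D sketch: several CFL-limited explicit upwind advection substeps are followed by one implicit diffusion step spanning the same interval, so the expensive implicit solve is amortized. This is a generic Python illustration of the scheme's structure, not TaRSE code, and the grid, coefficients and boundary treatment are invented.

        import numpy as np

        nx, L, u, D = 200, 1.0, 1.0, 1e-4
        dx = L / nx
        dt_adv = 0.8 * dx / u        # CFL-limited explicit advective step
        n_sub = 4                    # advection substeps per diffusion step
        dt_dif = n_sub * dt_adv      # implicit step covers the whole interval

        x = np.linspace(0.0, L, nx)
        c = np.exp(-((x - 0.2) / 0.02) ** 2)   # sharp solute front

        # backward-Euler diffusion matrix (Dirichlet-like ends, for brevity)
        r = D * dt_dif / dx**2
        A = (np.eye(nx) * (1 + 2 * r)
             + np.diag(np.full(nx - 1, -r), 1)
             + np.diag(np.full(nx - 1, -r), -1))

        for step in range(100):
            for _ in range(n_sub):     # explicit upwind advection substeps
                c[1:] -= u * dt_adv / dx * (c[1:] - c[:-1])
            c = np.linalg.solve(A, c)  # one implicit diffusion step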

  3. Investigating internal architecture effect in plastic deformation and failure for TPMS-based scaffolds using simulation methods and experimental procedure.

    PubMed

    Kadkhodapour, J; Montazerian, H; Raeisi, S

    2014-10-01

    Rapid prototyping (RP) has been a promising technique for producing tissue engineering scaffolds that mimic the behavior of host tissue as closely as possible. Biodegradability and good feasibility of cell growth and migration have to be considered in the design procedure, in parallel with mechanical properties such as strength and energy absorption. In order to study the effect of internal architecture on plastic deformation and failure pattern, architectures based on triply periodic minimal surfaces (TPMS), which are observed in nature, were used. P and D surfaces at 30% and 60% volume fractions were modeled with 3×3×3 unit cells and imported to an Objet EDEN 260 3-D printer. The models were printed in VeroBlue FullCure 840 photopolymer resin. Mechanical compression tests were performed to investigate the compressive behavior of the scaffolds. The deformation process and stress-strain curves were simulated by FEA and exhibited good agreement with the experimental observations. Current approaches for predicting the dominant deformation mode under compression, including Maxwell's criterion and scaling laws, were also investigated to gain an understanding of the relationships between deformation pattern and mechanical properties of porous structures. It was observed that the effect of stress concentration in TPMS-based scaffolds, resulting from heterogeneous mass distribution, particularly at lower volume fractions, led to behavior different from that of typical cellular materials. As a result, although more parameters are considered in scaling laws for determining the dominant deformation, the two approaches could not by themselves be used to compare the mechanical response of cellular materials at the same volume fraction. PMID:25175253
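
    The P and D architectures mentioned here have standard level-set approximants, which makes generating such scaffold geometries straightforward. The sketch below voxelizes 3×3×3 unit cells and estimates the solid volume fraction; using the level-set offset t to tune the volume fraction is a common modeling shortcut and an assumption here, not necessarily the paper's exact procedure.

        import numpy as np

        def schwarz_P(x, y, z, t=0.0):
            return np.cos(x) + np.cos(y) + np.cos(z) - t

        def schwarz_D(x, y, z, t=0.0):
            return (np.sin(x) * np.sin(y) * np.sin(z)
                    + np.sin(x) * np.cos(y) * np.cos(z)
                    + np.cos(x) * np.sin(y) * np.cos(z)
                    + np.cos(x) * np.cos(y) * np.sin(z)) - t

        n = 96                                   # voxels per box edge
        g = np.linspace(0.0, 3 * 2 * np.pi, n)   # 3 unit cells per direction
        X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
        # solid = one side of the level set; t shifts the volume fraction
        print("P volume fraction ~", (schwarz_P(X, Y, Z, 0.3) < 0).mean())
        print("D volume fraction ~", (schwarz_D(X, Y, Z, 0.3) < 0).mean())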

  4. Knowledge-based simulation for aerospace systems

    NASA Technical Reports Server (NTRS)

    Will, Ralph W.; Sliwa, Nancy E.; Harrison, F. Wallace, Jr.

    1988-01-01

    Knowledge-based techniques, which offer many features that are desirable in the simulation and development of aerospace vehicle operations, exhibit many similarities to traditional simulation packages. The eventual solution of these systems' current symbolic processing/numeric processing interface problem will lead to continuous and discrete-event simulation capabilities in a single language, such as TS-PROLOG. Qualitative, totally symbolic simulation methods are noted to possess several intrinsic characteristics that are especially revelatory of the system being simulated, and that are capable of ensuring that all possible behaviors are considered.

  5. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language; therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high capacity data manipulation required by the most complex real time models.

  6. A heterogeneous graph-based recommendation simulator

    SciTech Connect

    Yeonchan, Ahn; Sungchan, Park; Lee, Matt Sangkeun; Sang-goo, Lee

    2013-01-01

    Heterogeneous graph-based recommendation frameworks have flexibility in that they can incorporate various recommendation algorithms and various kinds of information to produce better results. In this demonstration, we present a heterogeneous graph-based recommendation simulator which enables participants to experience the flexibility of a heterogeneous graph-based recommendation method. With our system, participants can simulate various recommendation semantics by expressing the semantics via meaningful paths like User-Movie-User-Movie. The simulator then returns the recommendation results on the fly based on the user-customized semantics, using a fast Monte Carlo algorithm.
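
    How Monte Carlo walks along such a meta-path turn into recommendations can be shown on a toy bipartite graph (invented here, and a simplification of whatever the simulator implements): repeatedly walk User-Movie-User-Movie from the target user and count which movies the walks reach.

        import random
        from collections import Counter, defaultdict

        watched = {                       # User -> Movie edges (toy data)
            "u1": ["m1", "m2"],
            "u2": ["m2", "m3"],
            "u3": ["m1", "m3", "m4"],
        }
        viewers = defaultdict(list)       # Movie -> User reverse edges
        for u, ms in watched.items():
            for m in ms:
                viewers[m].append(u)

        def recommend(user, n_walks=10_000, rng=random.Random(0)):
            scores = Counter()
            for _ in range(n_walks):      # follow User-Movie-User-Movie
                m1 = rng.choice(watched[user])
                u2 = rng.choice(viewers[m1])
                m2 = rng.choice(watched[u2])
                if m2 not in watched[user]:   # skip already-seen movies
                    scores[m2] += 1
            return scores.most_common()

        print(recommend("u1"))            # e.g. m3 and m4 ranked by walk counts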

  7. Parallel node placement method by bubble simulation

    NASA Astrophysics Data System (ADS)

    Nie, Yufeng; Zhang, Weiwei; Qi, Nan; Li, Yiqiang

    2014-03-01

    An efficient Parallel Node Placement method by Bubble Simulation (PNPBS), employing METIS-based domain decomposition (DD) for an arbitrary number of processors, is introduced. In accordance with the desired nodal density and Newton's Second Law of Motion, automatic generation of node sets by bubble simulation has been demonstrated in previous work. Since the interaction force between nodes is short-range, the positions and velocities of two distant nodes can be updated simultaneously and independently during the dynamic simulation; this inherent parallelism makes the method quite suitable for parallel computing. In the PNPBS method, the METIS-based DD scheme has been investigated for uniform and non-uniform node sets, and dynamic load balancing is obtained by evenly distributing work among the processors. For the nodes near the common interface of two neighboring subdomains, no special treatment is needed after the dynamic simulation. These nodes have good geometrical properties and a smooth density distribution, which is desirable in the numerical solution of partial differential equations (PDEs). The results of numerical examples show that quasi-linear speedup in the number of processors and high efficiency are achieved.

  8. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    SciTech Connect

    Boring, Ronald Laurids; Shirley, Rachel Elizabeth; Joe, Jeffrey Clark; Mandelli, Diego

    2014-12-01

    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  9. A Lattice Boltzmann Method for Turbomachinery Simulations

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Lopez, I.

    2003-01-01

    The lattice Boltzmann (LB) method is a relatively new method for flow simulations. Its starting point is statistical mechanics and the Boltzmann equation. The LB method sets up its model at the molecular scale and simulates the flow at the macroscopic scale. LBM has mostly been applied to incompressible flows and simple geometries.
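
    A minimal LB solver makes the method's two-phase structure (stream, then collide) concrete. The D2Q9 BGK sketch below relaxes a periodic shear wave; it is a generic textbook-style illustration in Python, far from turbomachinery, with all parameters chosen arbitrarily.

        import numpy as np

        nx, ny, tau = 64, 64, 0.8                     # grid and relaxation time
        # D2Q9 lattice velocities and weights
        c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])
        w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

        def equilibrium(rho, ux, uy):
            cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
            usq = ux**2 + uy**2
            return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

        X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
        rho = np.ones((nx, ny))
        ux = 0.05 * np.sin(2 * np.pi * Y / ny)        # decaying shear wave
        uy = np.zeros((nx, ny))
        f = equilibrium(rho, ux, uy)

        for step in range(500):
            for i in range(9):                        # streaming (periodic)
                f[i] = np.roll(np.roll(f[i], c[i, 0], 0), c[i, 1], 1)
            rho = f.sum(axis=0)                       # macroscopic moments
            ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
            uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
            f += (equilibrium(rho, ux, uy) - f) / tau # BGK collision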

  10. Formability analysis of aluminum alloy sheets at elevated temperatures with numerical simulation based on the M-K method

    SciTech Connect

    Bagheriasl, Reza; Ghavam, Kamyar; Worswick, Michael

    2011-05-04

    The effect of temperature on the formability of aluminum alloy sheet is studied by developing forming limit diagrams (FLDs) for a 3000-series aluminum alloy using the Marciniak-Kuczynski technique in numerical simulation. The numerical model is built in LS-DYNA and incorporates Barlat's YLD2000 anisotropic yield function and the temperature-dependent Bergstrom hardening law. Three temperatures are studied: room temperature, 250°C and 300°C. For each temperature, various loading conditions are applied to the M-K defect model. The effect of material anisotropy is considered by varying the defect angle. A simplified failure criterion is used to predict the onset of necking. Minor and major strains are obtained from the simulations and plotted for each temperature level. It is demonstrated that elevated temperature improves the forming limit of the aluminum 3000-series alloy sheet.

  11. An automated method for predicting full-scale CO2 flood performance based on detailed pattern flood simulations

    SciTech Connect

    Rester, S.; Todd, M.R.

    1984-04-01

    A procedure is described for estimating the response of a field-scale CO2 flood from a limited number of simulations of pattern flood symmetry elements. This procedure accounts for areally varying reservoir properties, areally varying conditions when CO2 injection is initiated, phased conversion of injectors to CO2, and shut-in criteria for producers. Examples of the use of this procedure are given.

  12. Multigrid methods with applications to reservoir simulation

    SciTech Connect

    Xiao, Shengyou

    1994-05-01

    Multigrid methods are studied for solving elliptic partial differential equations. The focus is on parallel multigrid methods and their use for reservoir simulation. Multicolor Fourier analysis is used to analyze the behavior of standard multigrid methods for problems in one and two dimensions, and the relation between multicolor and standard Fourier analysis is established. Multiple coarse grid methods for solving model problems in 1 and 2 dimensions are considered; at each coarse grid level, more than one coarse grid is used to improve convergence. For a given Dirichlet problem, a related extended problem is first constructed; a purification procedure can be used to obtain Moore-Penrose solutions of the singular systems encountered. For solving anisotropic equations, semicoarsening and line smoothing techniques are used with multiple coarse grid methods to improve convergence. Two-level convergence factors are estimated using multicolor Fourier analysis. In the case where each operator has the same stencil at each grid point on a level, exact multilevel convergence factors can be obtained. For solving partial differential equations with discontinuous coefficients, interpolation and restriction operators should include information about the equation coefficients; matrix-dependent interpolation and restriction operators based on the Schur complement can be used in nonsymmetric cases. A semicoarsening multigrid solver with these operators is used in UTCOMP, a 3-D, multiphase, multicomponent, compositional reservoir simulator. The numerical experiments are carried out on different computing systems. Results indicate that the multigrid methods are promising.

  13. Determining design gust loads for nonlinear aircraft: similarity between methods based on matched filter theory and on stochastic simulation

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1992-01-01

    This is a work-in-progress paper. It explores the similarity between the results from two different analysis methods - one deterministic, the other stochastic - for computing maximized and time-correlated gust loads for nonlinear aircraft. To date, numerical studies have been performed using two different nonlinear aircraft configurations. These studies demonstrate that results from the deterministic analysis method are realizable in the stochastic analysis method.

  14. Medical students’ satisfaction with the Applied Basic Clinical Seminar with Scenarios for Students, a novel simulation-based learning method in Greece

    PubMed Central

    2016-01-01

    Purpose: The integration of simulation-based learning (SBL) methods holds promise for improving the medical education system in Greece. The Applied Basic Clinical Seminar with Scenarios for Students (ABCS3) is a novel two-day SBL course that was designed by the Scientific Society of Hellenic Medical Students. The ABCS3 targeted undergraduate medical students and consisted of three core components: the case-based lectures, the ABCDE hands-on station, and the simulation-based clinical scenarios. The purpose of this study was to evaluate the general educational environment of the course, as well as the skills and knowledge acquired by the participants. Methods: Two sets of questionnaires were distributed to the participants: the Dundee Ready Educational Environment Measure (DREEM) questionnaire and an internally designed feedback questionnaire (InEv). A multiple-choice question (MCQ) examination was also distributed prior to the course and following its completion. A total of 176 participants answered the DREEM questionnaire, 56 the InEv, and 60 the MCQs. Results: The overall DREEM score was 144.61 (±28.05) out of 200. Delegates who participated in both the case-based lectures and the interactive scenarios scored higher than those who only completed the case-based lecture session (P=0.038). The mean overall feedback score was 4.12 (±0.56) out of 5. Students scored significantly higher on the post-test than on the pre-test (P<0.001). Conclusion: The ABCS3 was found to be an effective SBL program, as medical students reported positive opinions about their experiences and exhibited improvements in their clinical knowledge and skills. PMID:27012313

  15. Simulation optimization as a method for lot size determination

    NASA Astrophysics Data System (ADS)

    Vazan, P.; Moravčík, O.; Jurovatá, D.; Juráková, A.

    2011-10-01

    The paper presents simulation optimization as a good tool for solving many problems, not only in research but also in practice, and gives a basic overview of methods in simulation optimization. The authors also describe their own experience and mention the advantages and problems of simulation optimization. The paper is a contribution to more effective use of simulation optimization; its main goal is to give a general procedure for effective usage. The authors present an alternative method for lot size determination that uses simulation optimization as its base procedure, and they demonstrate the important stages of the method. The final procedure involves selecting the algorithm and input variables and setting up their ranges and steps. The solution is compared with classical mathematical methods. The authors point out that carrying out simulation optimization is a compromise between acceptable time and the accuracy of the solution found.

  16. Identification of substance in complicated mixture of simulants under the action of THz radiation on the base of SDA (spectral dynamics analysis) method

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Krotkus, Arunas; Molis, Gediminas

    2010-10-01

    The SDA (spectral dynamics analysis) method, which analyzes the dynamics of the spectrum in the THz frequency range, is used for the detection and identification of substances with similar THz Fourier spectra (such substances are usually called simulants) in two- or three-component media. This method allows us to obtain a unique 2D THz signature of a substance, the spectrogram, and to analyze the dynamics of many spectral lines of the THz signal passed through or reflected from the substance from one set of integral measurements, even when the measurements are made on short time intervals (less than 20 ps). For long time intervals (100 ps and more), the SDA method makes it possible to determine the relaxation times of excited energy levels of the molecules. This information provides a new way to identify a substance, because relaxation times differ between molecules of different substances. The restoration of the signal from its integral values is based on the SVD (singular value decomposition) technique. We consider three examples of PTFE pellets mixed with small amounts of L-tartaric acid and sucrose, at concentrations of about 5%-10%. Our investigations show that the spectrograms and the dynamics of the spectral lines of a THz pulse passed through pure PTFE differ from those for compound media containing PTFE and L-tartaric acid, sucrose, or both substances together. It is thus possible to detect the presence of small amounts of additional substances in a sample even when their THz Fourier spectra are practically identical. The SDA method can therefore be very effective for defense and security applications and for quality control in the pharmaceutical industry. We also show that, in the case of substances-simulants, auto- and cross-correlation functions have much worse resolving power than the SDA method.

  17. Jacobian-free Newton Krylov discontinuous Galerkin method and physics-based preconditioning for nuclear reactor simulations

    SciTech Connect

    HyeongKae Park; Robert R. Nourgaliev; Richard C. Martineau; Dana A. Knoll

    2008-09-01

    We present a high-order accurate spatiotemporal discretization of all-speed flow solvers using the Jacobian-free Newton-Krylov framework. One of the key developments in this work is the physics-based preconditioner for all-speed flow, which makes use of traditional semi-implicit schemes. The physics-based preconditioner is developed in primitive variable form, which allows a straightforward separation of physical phenomena. Numerical examples demonstrate that the developed preconditioner effectively reduces the number of Krylov iterations, and that the efficiency is independent of the Mach number and mesh size under a fixed CFL condition.

  18. Using LC-MS Based Methods for Testing the Digestibility of a Nonpurified Transgenic Membrane Protein in Simulated Gastric Fluid.

    PubMed

    Skinner, Wayne S; Phinney, Brett S; Herren, Anthony; Goodstal, Floyd J; Dicely, Isabel; Facciotti, Daniel

    2016-06-29

    The digestibility of a nonpurified transgenic membrane protein in pepsin was determined as part of the food safety evaluation of its resistance to digestion and allergenic potential. Delta-6-desaturase (D6D) from Saprolegnia diclina, a transmembrane protein expressed in safflower for the production of gamma-linolenic acid in the seed, could not be obtained in a pure, native form as normally required for this assay. As a novel approach, the endoplasmic reticulum isolated from immature seeds was digested in simulated gastric fluid (SGF), and the degradation of delta-6-desaturase was selectively followed by SDS-PAGE and targeted LC-MS/MS quantification using stable isotope-labeled peptides as internal standards. The digestion of delta-6-desaturase by SGF was shown to be both rapid and complete: less than 10% of the initial amount of D6D remained intact after 30 s, and no fragments large enough (>3 kDa) to elicit a type I allergenic response remained after 60 min. PMID:27255301

  19. Simulation-Based Training for Colonoscopy

    PubMed Central

    Preisler, Louise; Svendsen, Morten Bo Søndergaard; Nerup, Nikolaj; Svendsen, Lars Bo; Konge, Lars

    2015-01-01

    The aim of this study was to create simulation-based tests with credible pass/fail standards for 2 different fidelities of colonoscopy models. Only competent practitioners should perform colonoscopy. Reliable and valid simulation-based tests could be used to establish basic competency in colonoscopy before practicing on patients. Twenty-five physicians (10 consultants with endoscopic experience and 15 fellows with very little endoscopic experience) were tested on 2 different simulator models: a virtual-reality simulator and a physical model. Tests were repeated twice on each simulator model. Metrics with discriminatory ability were identified for both modalities and reliability was determined. The contrasting-groups method was used to create pass/fail standards, and the consequences of these were explored. The consultants performed significantly faster and scored significantly higher than the fellows on both models (P < 0.001). Reliability analysis showed Cronbach α = 0.80 and 0.87 for the virtual-reality and the physical model, respectively. The established pass/fail standards failed one of the consultants (virtual-reality simulator) and allowed one fellow to pass (physical model). The 2 tested simulation-based modalities provided reliable and valid assessments of competence in colonoscopy, and credible pass/fail standards were established for both tests. We propose using these standards in simulation-based training programs before proceeding to supervised training on patients. PMID:25634177

  20. A hybrid-Vlasov model based on the current advance method for the simulation of collisionless magnetized plasma

    SciTech Connect

    Valentini, F. (E-mail: valentin@fis.unical.it); Travnicek, P.; Califano, F.; Hellinger, P.; Mangeney, A.

    2007-07-01

    We present a numerical scheme for the integration of the Vlasov-Maxwell system of equations for a non-relativistic plasma, in the hybrid approximation, where the Vlasov equation is solved for the ion distribution function and the electrons are treated as a fluid. In the Ohm equation for the electric field, effects of electron inertia have been retained, in order to include the small-scale dynamics up to characteristic lengths of the order of the electron skin depth. The low frequency approximation is used by neglecting the time derivative of the electric field, i.e. the displacement current in the Ampere equation. The numerical algorithm consists of coupling the splitting method proposed by Cheng and Knorr in 1976 [C.Z. Cheng, G. Knorr, J. Comput. Phys. 22 (1976) 330-351] and the current advance method (CAM) introduced by Matthews in 1994 [A.P. Matthews, J. Comput. Phys. 112 (1994) 102-116]. In its present version, the code solves the Vlasov-Maxwell equations in a five-dimensional phase space (2-D in the physical space and 3-D in the velocity space) and is implemented in a parallel version to exploit the computational power of modern massively parallel supercomputers. The structure of the algorithm and the coupling between the splitting method and the CAM method (extended to the hybrid case) are discussed in detail. Furthermore, in order to test the hybrid-Vlasov code, numerical results on the propagation and damping of linear ion-acoustic modes and on the time evolution of linear elliptically polarized Alfven waves (including the so-called whistler regime) are compared to analytical solutions. Finally, the numerical results of the hybrid-Vlasov code on the parametric instability of Alfven waves are compared with those obtained using a two-fluid approach.
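
    To make the splitting idea concrete, here is a toy 1D-1V semi-Lagrangian step in the spirit of the Cheng-Knorr scheme. It is a minimal sketch under simplifying assumptions: the domain is periodic in x, the electric field is a fixed placeholder rather than a self-consistent Maxwell/Poisson solve, and the grid sizes and time step are illustrative.

        import numpy as np

        # Strang splitting: half advection in x, full acceleration in v, half advection in x.
        nx, nv, dt = 64, 64, 0.05
        x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
        v = np.linspace(-6.0, 6.0, nv)
        f = np.exp(-0.5 * v[None, :] ** 2) * (1.0 + 0.1 * np.cos(x[:, None]))  # f[ix, iv]

        def advect_x(f, dt):
            # shift each constant-v row along x by v*dt (periodic interpolation)
            for j in range(nv):
                xd = (x - v[j] * dt) % (2.0 * np.pi)          # departure points
                f[:, j] = np.interp(xd, x, f[:, j], period=2.0 * np.pi)
            return f

        def advect_v(f, E, dt):
            # shift each constant-x column along v by E*dt (f ~ 0 at the v-grid edges)
            for i in range(nx):
                f[i, :] = np.interp(v - E[i] * dt, v, f[i, :])
            return f

        E = 0.1 * np.sin(x)          # placeholder field; a real code solves for E here
        f = advect_x(f, 0.5 * dt)    # half step in x
        f = advect_v(f, E, dt)       # full step in v
        f = advect_x(f, 0.5 * dt)    # half step in x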

  1. [Comparison of two types of double-lined simulated landfill leakage detection based on high voltage DC method].

    PubMed

    Yang, Ping; Nai, Chang-Xin; Dong, Lu; Wang, Qi; Wang, Yan-Wen

    2006-01-01

    Two types of double high-density polyethylene (HDPE) liner landfills were simulated, in which clay or geogrid was added between the two HDPE liners. The general resistance of the second mode is 15% larger than that of the first mode in the primary HDPE liner detection, and 20% larger in the secondary HDPE liner detection. The high voltage DC method can accomplish leakage detection and location for these two types of landfill, and the error of leakage location is less than 10 cm when the electrode spacing is 1 m. PMID:16599145

  2. Simulator certification methods and the vertical motion simulator

    NASA Technical Reports Server (NTRS)

    Showalter, T. W.

    1981-01-01

    The vertical motion simulator (VMS) is designed to simulate a variety of experimental helicopter and STOL/VTOL aircraft as well as other kinds of aircraft with special pitch and Z axis characteristics. The VMS includes a large motion base with extensive vertical and lateral travel capabilities, a computer generated image visual system, and a high speed CDC 7600 computer system, which performs aero model calculations. Guidelines on how to measure and evaluate VMS performance were developed. A survey of simulation users was conducted to ascertain how they evaluated and certified simulators for use. The results are presented.

  3. A fluid model simulation of a simplified plasma limiter based on spectral-element time-domain method

    SciTech Connect

    Qian, Cheng; Ding, Dazhi; Fan, Zhenhong; Chen, Rushan

    2015-03-15

    A simplified plasma limiter prototype is proposed, and a fluid model coupled with Maxwell's equations is established to describe the operating mechanism of the plasma limiter. A three-dimensional (3-D) simplified sandwich-structure plasma limiter model is analyzed with the spectral-element time-domain (SETD) method. The field breakdown thresholds of air and argon at different frequencies are predicted and compared with experimental data, showing good agreement for gas microwave breakdown discharge problems. Numerical results demonstrate that the two-layer plasma limiter (plasma-slab-plasma) has better protective characteristics than a one-layer plasma limiter (slab-plasma-slab) with the same gas chamber length.

  4. A Method to Estimate the Size and Characteristics of HIV-positive Populations Using an Individual-based Stochastic Simulation Model

    PubMed Central

    van Sighem, Ard; Thiebaut, Rodolphe; Smith, Colette; Ratmann, Oliver; Cambiano, Valentina; Albert, Jan; Amato-Gauci, Andrew; Bezemer, Daniela; Campbell, Colin; Commenges, Daniel; Donoghoe, Martin; Ford, Deborah; Kouyos, Roger; Lodwick, Rebecca; Lundgren, Jens; Pantazis, Nikos; Pharris, Anastasia; Quinten, Chantal; Thorne, Claire; Touloumi, Giota; Delpech, Valerie; Phillips, Andrew

    2016-01-01

    It is important not only to collect epidemiologic data on HIV but also to fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900–45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160–17,350) were undiagnosed. There were an estimated 3,210 (1,730–5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population are thought to have viral load <500 copies/ml. In the pseudo-epidemic example, HIV estimates had narrower plausibility ranges and were closer to the true number the greater the data availability used to calibrate the model. We demonstrate that our method can be applied to settings with less data; however, plausibility ranges for estimates will be wider to reflect the greater uncertainty of the data used to fit the model. PMID:26605814

  5. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    NASA Astrophysics Data System (ADS)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
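
    The two key ideas are easy to sketch outside the paper's proxy applications: tracks are dispatched to workers dynamically for load balancing, and the per-track segment arithmetic is expressed in vectorized form. Everything below (shapes, names, the attenuated line integral, the thread pool) is an illustrative stand-in, not the paper's OpenMP/SIMD kernels; Python threads are also GIL-limited, so this only mimics the scheduling pattern.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        rng = np.random.default_rng(0)
        # each "track" carries per-segment optical thicknesses and sources
        tracks = [rng.random((int(n), 2)) for n in rng.integers(50, 200, size=1000)]

        def sweep_track(seg):
            tau, q = seg[:, 0], seg[:, 1]
            att = np.exp(-tau)                                     # vectorized exponentials
            trans = np.concatenate(([1.0], np.cumprod(att)[:-1]))  # transmission to each segment
            return np.sum(trans * q * (1.0 - att))                 # attenuated line integral

        # dynamic task scheduling: idle workers pull the next track off the queue
        with ThreadPoolExecutor(max_workers=4) as pool:
            total_flux = sum(pool.map(sweep_track, tracks))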

  6. Methods of sound simulation and applications in flight simulators

    NASA Technical Reports Server (NTRS)

    Gaertner, K. P.

    1980-01-01

    An overview of methods for electronically synthesizing sounds is presented. A given amount of hardware and computer capacity places an upper limit on the attainable degree and fidelity of realism in sound simulation. Good sound realism for aircraft simulators can be especially expensive because of the complexity of flight sounds and their changing patterns through time. Nevertheless, the flight simulator developed at the Research Institute for Human Engineering, West Germany, shows that it is possible to design an inexpensive sound simulator with the required acoustic properties using analog computer elements. The characteristics of the sub-sound elements produced by this sound simulator for take-off, cruise and approach are discussed.

  7. Image based SAR product simulation for analysis

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR input image, which can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results were compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  8. Hydraulic performance numerical simulation of high specific speed mixed-flow pump based on quasi three-dimensional hydraulic design method

    NASA Astrophysics Data System (ADS)

    Zhang, Y. X.; Su, M.; Hou, H. C.; Song, P. F.

    2013-12-01

    This research adopts a quasi three-dimensional hydraulic design method for the impeller of a high specific speed mixed-flow pump, with the aim of verifying the hydraulic design method and improving hydraulic performance. Based on the theory of two families of stream surfaces, the direct problem is completed when the meridional flow field of the impeller is obtained by iteratively solving the continuity and momentum equations of the fluid. The inverse problem is completed using the meridional flow field calculated in the direct problem. After several iterations of the direct and inverse problems, the shape of the impeller and the flow field information are obtained once the iteration satisfies the convergence criteria. Subsequently, the internal flow field of the designed pump is simulated using the RANS equations with the RNG k-ε two-equation turbulence model. The static pressure and streamline distributions at the symmetrical cross-section, the velocity vector distribution around the blades, and the reflux phenomenon are analyzed. The numerical results show that the quasi three-dimensional hydraulic design method improves the hydraulic performance of the high specific speed mixed-flow pump, reveals the main characteristics of its internal flow, and provides a basis for judging the rationality of the hydraulic design and for improvement and optimization of the hydraulic model.

  9. Large-Eddy Simulation and Multigrid Methods

    SciTech Connect

    Falgout,R D; Naegle,S; Wittum,G

    2001-06-18

    A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to model the unresolved scales of motion and are compared with each other on different grids. The behavior of the models is shown, the effect of adaptive grid refinement is investigated, and the parallelization aspect is addressed.

  10. Inversion based on computational simulations

    SciTech Connect

    Hanson, K.M.; Cunningham, G.S.; Saquib, S.S.

    1998-09-01

    A standard approach to solving inversion problems that involve many parameters uses gradient-based optimization to find the parameters that best match the data. The authors discuss enabling techniques that facilitate application of this approach to large-scale computational simulations, which are the only way to investigate many complex physical phenomena. Such simulations may not seem to lend themselves to calculation of the gradient with respect to numerous parameters. However, adjoint differentiation allows one to efficiently compute the gradient of an objective function with respect to all the variables of a simulation. When combined with advanced gradient-based optimization algorithms, adjoint differentiation permits one to solve very large problems of optimization or parameter estimation. These techniques will be illustrated through the simulation of the time-dependent diffusion of infrared light through tissue, which has been used to perform optical tomography. The techniques discussed have a wide range of applicability to modeling including the optimization of models to achieve a desired design goal.
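
    A minimal sketch of the adjoint idea: for a linear explicit diffusion step on a periodic grid the step operator is symmetric, so the adjoint of a step is the step itself, and one forward plus one reverse sweep yields the gradient with respect to every component of the initial condition. The grid size, step count, and target data below are invented for illustration.

        import numpy as np

        n, steps, c = 64, 200, 0.2          # c = alpha*dt/dx^2 (explicit stability: c <= 0.5)

        def step(u):
            # one explicit diffusion step on a periodic grid; the operator is
            # linear and symmetric, hence self-adjoint
            return u + c * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

        def forward(u0):
            u = u0.copy()
            for _ in range(steps):
                u = step(u)
            return u

        data = forward(np.exp(-0.5 * ((np.arange(n) - n / 2) / 4.0) ** 2))  # synthetic target

        def gradient(u0):
            """Gradient of J(u0) = 0.5*||forward(u0) - data||^2 with respect to
            all n components of u0, at the cost of ~2 simulations."""
            lam = forward(u0) - data         # dJ/du at the final time
            for _ in range(steps):
                lam = step(lam)              # adjoint (reverse) sweep
            return lam

        g = gradient(np.zeros(n))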

  11. Estimating School Efficiency: A Comparison of Methods Using Simulated Data.

    ERIC Educational Resources Information Center

    Bifulco, Robert; Bretschneider, Stuart

    2001-01-01

    Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…

  12. Constraint methods that accelerate free-energy simulations of biomolecules.

    PubMed

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions. PMID:26723628
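
    As a concrete, simplified example of the spring-like restraints discussed above, the snippet below implements a flat-bottom harmonic distance restraint of the kind used to inject noisy external knowledge without over-constraining the system. The spring constant, target distance, and flat width are arbitrary illustration values, not parameters from the paper.

        import numpy as np

        def restraint_energy_force(xi, xj, r0=3.0, k=10.0, flat=0.5):
            """Energy and force on particle i for a flat-bottom harmonic restraint
            biasing the i-j distance toward the window [r0 - flat, r0 + flat]."""
            d = xi - xj
            r = np.linalg.norm(d)
            excess = abs(r - r0) - flat
            if excess <= 0.0:                             # inside the flat bottom: no bias
                return 0.0, np.zeros(3)
            e = 0.5 * k * excess ** 2                     # harmonic outside the window
            f_i = -k * excess * np.sign(r - r0) * d / r   # force on i (j receives -f_i)
            return e, f_i

        e, f = restraint_energy_force(np.zeros(3), np.array([5.0, 0.0, 0.0]))
        print(e, f)   # pulls i toward j, since r = 5 exceeds r0 + flat = 3.5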

  13. Constraint methods that accelerate free-energy simulations of biomolecules

    SciTech Connect

    Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  14. Bridging the gap: simulations meet knowledge bases

    NASA Astrophysics Data System (ADS)

    King, Gary W.; Morrison, Clayton T.; Westbrook, David L.; Cohen, Paul R.

    2003-09-01

    Tapir and Krill are declarative languages for specifying actions and agents, respectively, that can be executed in simulation. As such, they bridge the gap between strictly declarative knowledge bases and strictly executable code. Tapir and Krill components can be combined to produce models of activity which can answer questions about mechanisms and processes using conventional inference methods and simulation. Tapir was used in DARPA's Rapid Knowledge Formation (RKF) project to construct models of military tactics from the Army Field Manual FM3-90. These were then used to build Courses of Actions (COAs) which could be critiqued by declarative reasoning or via Monte Carlo simulation. Tapir and Krill can be read and written by non-knowledge engineers, making them an excellent vehicle for Subject Matter Experts to build and critique knowledge bases.

  15. Spectral Methods in General Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Garrison, David

    2012-03-01

    In this talk I discuss the use of spectral methods in improving the accuracy of a General Relativistic Magnetohydrodynamic (GRMHD) computer code. I introduce SpecCosmo, a GRMHD code developed as a Cactus arrangement at UHCL, and show simulation results using both Fourier spectral methods and finite differencing. This work demonstrates the use of spectral methods with the FFTW 3.3 Fast Fourier Transform package integrated with the Cactus Framework to perform spectral differencing using MPI.
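
    The accuracy advantage of Fourier spectral differencing is easy to demonstrate in a few lines; the toy below uses NumPy's FFT as a stand-in for the FFTW/Cactus machinery described in the talk and differentiates a periodic field to machine precision.

        import numpy as np

        n = 128
        x = np.linspace(0, 2 * np.pi, n, endpoint=False)
        u = np.sin(3 * x)

        # integer wavenumbers times i: differentiation becomes multiplication in k-space
        k = np.fft.fftfreq(n, d=(x[1] - x[0]) / (2 * np.pi)) * 1j
        dudx = np.fft.ifft(k * np.fft.fft(u)).real

        assert np.allclose(dudx, 3 * np.cos(3 * x), atol=1e-10)  # near machine precision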

  16. A simple method for simulating gasdynamic systems

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.

    1991-01-01

    A simple method for performing digital simulation of gasdynamic systems is presented. The approach is somewhat intuitive, and requires some knowledge of the physics of the problem as well as an understanding of the finite difference theory. The method is explicitly shown in appendix A which is taken from the book by P.J. Roache, 'Computational Fluid Dynamics,' Hermosa Publishers, 1982. The resulting method is relatively fast while it sacrifices some accuracy.

  17. Effective medium based optical analysis with finite element method simulations to study photochromic transitions in Ag-TiO2 nanocomposite films

    NASA Astrophysics Data System (ADS)

    Abhilash, T.; Balasubrahmaniyam, M.; Kasiviswanathan, S.

    2016-03-01

    Photochromic transitions in silver nanoparticle (AgNP)-embedded titanium dioxide (TiO2) films under green light illumination are marked by a reduction in strength and a blue shift in the position of the localized surface plasmon resonance (LSPR) associated with the AgNPs. These transitions, which happen on the sub-nanometer length scale, have been analysed using the variations observed in the effective dielectric properties of the Ag-TiO2 nanocomposite films in response to the size reduction of the AgNPs and subsequent changes in the surrounding medium due to photo-oxidation. The Bergman-Milton formulation based on the spectral density approach is used to extract dielectric properties and information about the geometrical distribution of the effective medium. Combined with finite element method simulations, we isolate the effects due to the change in average size of the nanoparticles from those due to the change in the dielectric function of the surrounding medium. By analysing the dynamics of photochromic transitions in the effective medium, we conclude that the observed blue shift in the LSPR is mainly caused by the change in the dielectric function of the surrounding medium, while a shape-preserving effective size reduction of the AgNPs causes the decrease in the strength of the LSPR.

  18. Rainfall Simulation: methods, research questions and challenges

    NASA Astrophysics Data System (ADS)

    Ries, J. B.; Iserloh, T.

    2012-04-01

    In erosion research, rainfall simulations are used to improve process knowledge as well as, in the field, to assess overland flow generation, infiltration, and erosion rates. In all these fields of research, rainfall experiments have become an indispensable part of the research methods. In this context, small portable rainfall simulators with small test-plot sizes of one square meter or less, and devices of low weight and water consumption, are in demand. Accordingly, devices with manageable technical effort, like nozzle-type simulators, seem to prevail over larger simulators. The reasons are obvious: lower costs and less time needed for mounting enable a higher repetition rate. Given the high number of research questions and fields of application, and not least the great technical creativity of our research staff, a large number of different experimental setups is available. Each device produces a different rainfall, leading to different kinetic energy amounts influencing the soil surface and, accordingly, producing different erosion results. Hence, important questions concern the definition, comparability, measurement and simulation of natural rainfall, and the problem of comparability in general. Another important discussion topic is reaching agreement on an appropriate calibration method for the simulated rainfalls, in order to enable a comparison of the results of different rainfall simulator set-ups. In most publications, only the following "nice" sentence can be read: "Our rainfall simulator generates a rainfall spectrum that is similar to natural rainfall!". The most substantial and critical properties of a simulated rainfall are the drop-size distribution, the fall velocities of the drops, and the spatial distribution of the rainfall over the plot area. In a comparison of the most important methods, the Laser Distrometer turned out to be the most up

  19. Reduced Basis Method for Nanodevices Simulation

    SciTech Connect

    Pau, George Shu Heng

    2008-05-23

    Ballistic transport simulation in nanodevices, which involves self-consistently solving a coupled Schrodinger-Poisson system of equations, is usually computationally intensive. Here, we propose coupling the reduced basis method with the subband decomposition method to improve the overall efficiency of the simulation. By exploiting an a posteriori error estimation procedure and a greedy sampling algorithm, we are able to design an algorithm where the computational cost is reduced significantly. In addition, the computational cost grows only marginally with the number of grid points in the confined direction.

  20. A guided simulated annealing method for crystallography.

    PubMed

    Chou, C I; Lee, T K

    2002-01-01

    A new optimization algorithm, the guided simulated annealing method, for use in X-ray crystallographic studies is presented. In the traditional simulated annealing method, the search for the global minimum of a cost function is determined only by the ratio of the energy change to the temperature. The new method introduces a quality function to guide the search for the minimum. Using a multiresolution process, the method is much more efficient in finding the global minimum than the traditional method. Results for two large molecules, isoleucinomycin (C60H102N6O18) and an alkyl calix (C72H112O8·4C2H6O), with different space groups are reported. PMID:11752762
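
    For reference, here is a bare-bones version of the traditional scheme that the guided method builds on, in which acceptance depends only on the energy change and the temperature. The cost function, neighbor move, and cooling parameters are illustrative placeholders, and the paper's guided variant would add its quality function on top of this loop.

        import math, random

        def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=20000):
            x, e = x0, cost(x0)
            best, best_e = x, e
            t = t0
            for _ in range(steps):
                xn = neighbor(x)
                en = cost(xn)
                # Metropolis criterion: always accept downhill, sometimes uphill
                if en < e or random.random() < math.exp((e - en) / t):
                    x, e = xn, en
                    if e < best_e:
                        best, best_e = x, e
                t *= cooling                # geometric cooling schedule
            return best, best_e

        # toy 1-D cost function with many local minima
        best, best_e = anneal(
            cost=lambda x: x * x + 10 * math.sin(5 * x),
            neighbor=lambda x: x + random.uniform(-0.5, 0.5),
            x0=5.0,
        )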

  1. Mixed-level optical simulations of light-emitting diodes based on a combination of rigorous electromagnetic solvers and Monte Carlo ray-tracing methods

    NASA Astrophysics Data System (ADS)

    Bahl, Mayank; Zhou, Gui-Rong; Heller, Evan; Cassarly, William; Jiang, Mingming; Scarmozzino, Robert; Gregory, G. Groot; Herrmann, Daniel

    2015-04-01

    Over the last two decades, extensive research has been done to improve light-emitting diode (LED) designs. Increasingly complex designs have necessitated the use of computational simulations which have provided numerous insights for improving LED performance. Depending upon the focus of the design and the scale of the problem, simulations are carried out using rigorous electromagnetic (EM) wave optics-based techniques, such as finite-difference time-domain and rigorous coupled wave analysis, or through ray optics-based techniques such as Monte Carlo ray-tracing (RT). The former are typically used for modeling nanostructures on the LED die, and the latter for modeling encapsulating structures, die placement, back-reflection, and phosphor downconversion. This paper presents the use of a mixed-level simulation approach that unifies the use of EM wave-level and ray-level tools. This approach uses rigorous EM wave-based tools to characterize the nanostructured die and generates both a bidirectional scattering distribution function and a far-field angular intensity distribution. These characteristics are then incorporated into the RT simulator to obtain the overall performance. Such a mixed-level approach allows for comprehensive modeling of the optical characteristics of LEDs, including polarization effects, and can potentially lead to more accurate performance predictions than those from individual modeling tools alone.

  2. Mass conservation of the unified continuous and discontinuous element-based Galerkin methods on dynamically adaptive grids with application to atmospheric simulations

    NASA Astrophysics Data System (ADS)

    Kopera, Michal A.; Giraldo, Francis X.

    2015-09-01

    We perform a comparison of mass conservation properties of the continuous (CG) and discontinuous (DG) Galerkin methods on non-conforming, dynamically adaptive meshes for two atmospheric test cases. The two methods are implemented in a unified way which allows for a direct comparison of the non-conforming edge treatment. We outline the implementation details of the non-conforming direct stiffness summation algorithm for the CG method and show that the mass conservation error is similar to the DG method. Both methods conserve to machine precision, regardless of the presence of the non-conforming edges. For lower order polynomials the CG method requires additional stabilization to run for very long simulation times. We addressed this issue by using filters and/or additional artificial viscosity. The mathematical proof of mass conservation for CG with non-conforming meshes is presented in Appendix B.

  3. A method for simulating a flux-locked DC SQUID

    NASA Technical Reports Server (NTRS)

    Gutt, G. M.; Kasdin, N. J.; Condron, M. R., II; Muhlfelder, B.; Lockhart, J. M.; Cromar, M. W.

    1993-01-01

    The authors describe a computationally efficient and accurate method for simulating a dc SQUID's V-Phi (voltage-flux) and I-V characteristics which has proven valuable in evaluating and improving various SQUID readout methods. The simulation of the SQUID is based on fitting of previously acquired data from either a real or a modeled device using the Fourier transform of the V-Phi curve. This method does not predict SQUID behavior, but rather is a way of replicating a known behavior efficiently with portability into various simulation programs such as SPICE. The authors discuss the methods used to simulate the SQUID and the flux-locking control electronics, and present specific examples of this approach. Results include an estimate of the slew rate and linearity of a simple flux-locked loop using a characterized dc SQUID.
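
    The replication idea is straightforward to sketch: fit a measured, periodic V-Phi curve with a truncated Fourier series, then evaluate the fit at arbitrary flux. The "measured" data below are synthetic stand-ins, and the harmonic count is an arbitrary illustration value; a real device characterization would supply the samples.

        import numpy as np

        phi = np.linspace(0, 1, 256, endpoint=False)                   # flux in units of Phi_0
        v_meas = 0.5 * np.cos(2 * np.pi * phi) + 0.1 * np.cos(6 * np.pi * phi)  # fake V-Phi data

        coeffs = np.fft.rfft(v_meas) / phi.size                        # Fourier coefficients

        def v_of_phi(p, n_harmonics=5):
            """Evaluate the fitted V-Phi curve at arbitrary flux p (periodic in Phi_0)."""
            v = coeffs[0].real * np.ones_like(p)
            for k in range(1, n_harmonics + 1):
                v += 2.0 * (coeffs[k] * np.exp(2j * np.pi * k * p)).real
            return v

        print(v_of_phi(np.array([0.0, 0.25, 0.5])))                    # -> [0.6, 0.0, -0.6]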

  4. A Simulation Method Measuring Psychomotor Nursing Skills.

    ERIC Educational Resources Information Center

    McBride, Helena; And Others

    1981-01-01

    The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.…

  5. Method for Constructing Standardized Simulated Root Canals.

    ERIC Educational Resources Information Center

    Schulz-Bongert, Udo; Weine, Franklin S.

    1990-01-01

    The construction of visual and manipulative aids, clear resin blocks with root-canal-like spaces, for simulation of root canals is explained. Time, materials, and techniques are discussed. The method allows for comparison of canals, creation of any configuration of canals, and easy presentation during instruction. (MSE)

  6. A method based on Monte Carlo simulations and voxelized anatomical atlases to evaluate and correct uncertainties on radiotracer accumulation quantitation in beta microprobe studies in the rat brain

    NASA Astrophysics Data System (ADS)

    Pain, F.; Dhenain, M.; Gurden, H.; Routier, A. L.; Lefebvre, F.; Mastrippolito, R.; Lanièce, P.

    2008-10-01

    The β-microprobe is a simple and versatile technique complementary to small animal positron emission tomography (PET). It relies on local measurements of the concentration of positron-labeled molecules. So far, it has been successfully used in anesthetized rats for pharmacokinetics experiments and for the study of brain energetic metabolism. However, the ability of the technique to provide accurate quantitative measurements using 18F, 11C and 15O tracers is likely to suffer from the contribution of 511 keV gamma rays background to the signal and from the contribution of positrons from brain loci surrounding the locus of interest. The aim of the present paper is to provide a method of evaluating several parameters, which are supposed to affect the quantification of recordings performed in vivo with this methodology. We have developed realistic voxelized phantoms of the rat whole body and brain, and used them as input geometries for Monte Carlo simulations of previous β-microprobe reports. In the context of realistic experiments (binding of 11C-Raclopride to D2 dopaminergic receptors in the striatum; local glucose metabolic rate measurement with 18F-FDG and H2O15 blood flow measurements in the somatosensory cortex), we have calculated the detection efficiencies and corresponding contribution of 511 keV gammas from peripheral organs accumulation. We confirmed that the 511 keV gammas background does not impair quantification. To evaluate the contribution of positrons from adjacent structures, we have developed β-Assistant, a program based on a rat brain voxelized atlas and matrices of local detection efficiencies calculated by Monte Carlo simulations for several probe geometries. This program was used to calculate the 'apparent sensitivity' of the probe for each brain structure included in the detection volume. For a given localization of a probe within the brain, this allows us to quantify the different sources of beta signal. Finally, since stereotaxic accuracy is

  7. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics and smoothed particle hydrodynamics, exploiting their unifying concepts, such as the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.

  8. Domain reduction method for atomistic simulations

    SciTech Connect

    Medyanik, Sergey N. (E-mail: medyanik@northwestern.edu); Karpov, Eduard G. (E-mail: edkarpov@gmail.com); Liu, Wing Kam (E-mail: w-liu@northwestern.edu)

    2006-11-01

    In this paper, a quasi-static formulation of the method of multi-scale boundary conditions (MSBCs) is derived and applied to atomistic simulations of carbon nano-structures, namely single graphene sheets and multi-layered graphite. This domain reduction method allows for the simulation of deformable boundaries in periodic atomic lattice structures, reduces the effective size of the computational domain, and consequently decreases the cost of computations. The size of the reduced domain is determined by the value of the domain reduction parameter. This parameter is related to the distance between the boundary of the reduced domain, where MSBCs are applied, and the boundary of the full domain, where the standard displacement boundary conditions are prescribed. Two types of multi-scale boundary conditions are derived: one for simulating in-layer multi-scale boundaries in a single graphene sheet and the other for simulating inter-layer multi-scale boundaries in multi-layered graphite. The method is tested on benchmark nano-indentation problems and the results are consistent with the full domain solutions.

  9. Mesoscopic Simulation Methods for Polymer Dynamics

    NASA Astrophysics Data System (ADS)

    Larson, Ronald

    2015-03-01

    We assess the accuracy and efficiency of mesoscopic simulation methods, namely Brownian Dynamics (BD), Stochastic Rotation Dynamics (SRD) and Dissipative Particle Dynamics (DPD), for polymers in solution at equilibrium and in flows in microfluidic geometries. Both SRD and DPD use solvent ``particles'' to carry momentum, and so account automatically for hydrodynamic interactions both within isolated polymer coils, and with other polymer molecules and with nearby solid boundaries. We assess quantitatively the effects of artificial particle inertia and fluid compressibility and show that they can be made small with appropriate choice of simulation parameters. We then use these methods to study flow-induced migration of polymer chains produced by: 1) hydrodynamic interactions, 2) streamline curvature or stress-gradients, and 3) convection of wall depletion zones. We show that huge concentration gradients can be produced by these mechanisms in microfluidic geometries that can be exploited for separation of polymers by size in periodic contraction-expansion geometries. We also assess the range of conditions for which BD, SRD or DPD is preferable for mesoscopic simulations. Finally, we show how such methods can be used to simulate quantitatively the swimming of micro-organisms such as E. coli. In collaboration with Lei Jiang and Tongyang Zhao, University of Michigan, Ann Arbor, MI.

  10. Discontinuous Galerkin Methods for Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Collis, S. Scott

    2002-01-01

    A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.

  11. Automated Simulation Updates based on Flight Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Ward, David G.

    2007-01-01

    A statistically-based method for using flight data to update aerodynamic data tables used in flight simulators is explained and demonstrated. A simplified wind-tunnel aerodynamic database for the F/A-18 aircraft is used as a starting point. Flight data from the NASA F-18 High Alpha Research Vehicle (HARV) is then used to update the data tables so that the resulting aerodynamic model characterizes the aerodynamics of the F-18 HARV. Prediction cases are used to show the effectiveness of the automated method, which requires no ad hoc adjustments by the analyst.

  12. Computer Based Simulation of Laboratory Experiments.

    ERIC Educational Resources Information Center

    Edward, Norrie S.

    1997-01-01

    Examines computer based simulations of practical laboratory experiments in engineering. Discusses the aims and achievements of lab work (cognitive, process, psychomotor, and affective); types of simulations (model building and behavioral); and the strengths and weaknesses of simulations. Describes the development of a centrifugal pump simulation,…

  13. A method to produce and validate a digitally reconstructed radiograph-based computer simulation for optimisation of chest radiographs acquired with a computed radiography imaging system

    PubMed Central

    Moore, C S; Liney, G P; Beavis, A W; Saunderson, J R

    2011-01-01

    Objectives The purpose of this study was to develop and validate a computer model to produce realistic simulated computed radiography (CR) chest images using CT data sets of real patients. Methods Anatomical noise, which is the limiting factor in determining pathology in chest radiography, is realistically simulated by the CT data, and frequency-dependent noise has been added post-digitally reconstructed radiograph (DRR) generation to simulate exposure reduction. Realistic scatter and scatter fractions were measured in images of a chest phantom acquired on the CR system simulated by the computer model and added post-DRR calculation. Results The model has been validated with a phantom and patients and shown to provide predictions of signal-to-noise ratios (SNRs), tissue-to-rib ratios (TRRs: a measure of soft tissue pixel value to that of rib) and pixel value histograms that lie within the range of values measured with patients and the phantom. The maximum difference in measured SNR to that calculated was 10%. TRR values differed by a maximum of 1.3%. Conclusion Experienced image evaluators have responded positively to the DRR images, are satisfied they contain adequate anatomical features and have deemed them clinically acceptable. Therefore, the computer model can be used by image evaluators to grade chest images presented at different tube potentials and doses in order to optimise image quality and patient dose for clinical CR chest radiographs without the need for repeat patient exposures. PMID:21933979

  14. Matching methods to create paired survival data based on an exposure occurring over time: a simulation study with application to breast cancer

    PubMed Central

    2014-01-01

    Background Paired survival data are often used in clinical research to assess the prognostic effect of an exposure. Matching generates correlated censored data, the expectation being that the paired subjects differ only in the exposure. Creating pairs when the exposure is an event occurring over time can be tricky. We applied a commonly used method, Method 1, which creates pairs a posteriori, and propose an alternative method, Method 2, which creates pairs in “real-time”. We used two semi-parametric models devoted to correlated censored data to estimate the average effect of the exposure, $\overline{HR}(t)$: the Holt and Prentice (HP) model, and the Lee, Wei and Amato (LWA) model. Contrary to the HP, the LWA allowed adjustment for the matching covariates (LWA_a) and for an interaction (LWA_i) between exposure and covariates (assimilated to prognostic profiles). The aim of our study was to compare the performance of each model under the two matching methods. Methods Extensive simulations were conducted. We simulated cohort data sets on which we applied the two matching methods, the HP and the LWA. We used our conclusions to assess the prognostic effect of subsequent pregnancy after treatment for breast cancer in a female cohort treated and followed up in eight French hospitals. Results In terms of bias and RMSE, Method 2 performed better than Method 1 in designing the pairs, and LWA_a was the best model in all situations except when there was an interaction between exposure and covariates, for which LWA_i was more appropriate. On our real data set, we found opposite effects of pregnancy according to the six prognostic profiles, but none were statistically significant. We probably lacked statistical power or reached the limits of our approach. The pairs' censoring options chosen for the combination Method 2 - LWA had to be compared with others. Conclusions Correlated censored data designed by Method 2 seemed to be the most pertinent method to create pairs, when the criterion

  15. A reduced basis method for molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Vincent-Finley, Rachel Elisabeth

    In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid-body and large-scale motions occur on time scales ranging from nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion, and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use PCA to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity with respect to our simulation method is analogous to that observed in the standard MD simulation with simulations on the order of picoseconds.
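
    The PCA step itself is compact: the dominant collective motions are the leading singular vectors of the mean-centered trajectory matrix. The sketch below uses a synthetic random trajectory as a placeholder for a real (n_frames x 3N) coordinate array; the 90% variance cutoff is an arbitrary illustration choice.

        import numpy as np

        rng = np.random.default_rng(1)
        n_frames, n_coords = 500, 3 * 20          # e.g. 20 atoms, flattened xyz
        traj = rng.normal(size=(n_frames, n_coords)) @ np.diag(rng.uniform(0.1, 2.0, n_coords))

        centered = traj - traj.mean(axis=0)

        # SVD of the centered trajectory gives the principal components directly
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        variance = s**2 / (n_frames - 1)
        explained = variance / variance.sum()

        n_keep = np.searchsorted(np.cumsum(explained), 0.9) + 1   # modes for 90% of variance
        modes = vt[:n_keep]                        # reduced basis of collective motions
        reduced = centered @ modes.T               # trajectory in reduced coordinates
        print(f"{n_keep} modes capture 90% of the variance")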

  16. Twitter's tweet method modelling and simulation

    NASA Astrophysics Data System (ADS)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper proposes a set of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them, and follows the design science research methodology for the proof of concept of the models and modelling processes. The models were developed for a Twitter marketing agent/company, tested in real circumstances with real numbers, and finalized through a number of revisions and iterations of design, development, simulation, testing and evaluation. The paper also addresses the methods that best suit organized promotion through targeting on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are authenticated by the management of the company. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.

  17. Convected element method for simulation of angiogenesis.

    PubMed

    Pindera, Maciej Z; Ding, Hui; Chen, Zhijian

    2008-10-01

    We describe a novel Convected Element Method (CEM) for simulation of formation of functional blood vessels induced by tumor-generated growth factors in a process called angiogenesis. Angiogenesis is typically modeled by a convection-diffusion-reaction equation defined on a continuous domain. A difficulty arises when a continuum approach is used to represent the formation of discrete blood vessel structures. CEM solves this difficulty by using a hybrid continuous/discrete solution method allowing lattice-free tracking of blood vessel tips that trace out paths that subsequently are used to define compact vessel elements. In contrast to more conventional angiogenesis modeling, the new branches form evolving grids that are capable of simulating transport of biological and chemical factors such as nutrition and anti-angiogenic agents. The method is demonstrated on expository vessel growth and tumor response simulations for a selected set of conditions, and include effects of nutrient delivery and inhibition of vessel branching. Initial results show that CEM can predict qualitatively the development of biologically reasonable and fully functional vascular structures. Research is being carried out to generalize the approach which will allow quantitative predictions. PMID:18365201

  18. RELAP5 based engineering simulator

    SciTech Connect

    Charlton, T.R.; Laats, E.T.; Burtt, J.D.

    1990-01-01

    The INEL Engineering Simulation Center was established in 1988 to provide a modern, flexible, state-of-the-art simulation facility. This facility and two of the major projects which are part of the simulation center, the Advanced Test Reactor (ATR) engineering simulator project and the Experimental Breeder Reactor II (EBR-II) advanced reactor control system, have been the subject of several papers in the past few years. Two components of the ATR engineering simulator project, RELAP5 and the Nuclear Plant Analyzer (NPA), have recently been improved significantly. This paper will present an overview of the INEL Engineering Simulation Center and discuss the RELAP5/MOD3 and NPA/MOD1 codes, specifically how they are being used at the INEL Engineering Simulation Center. It will provide an update on the modifications to these two codes and their application to the ATR engineering simulator project, as well as a discussion of the reactor system representation, control system modeling, and two-phase flow and heat transfer modeling. It will also discuss how these two codes are providing desktop, stand-alone reactor simulation. 12 refs., 2 figs.

  19. Vibratory compaction method for preparing lunar regolith drilling simulant

    NASA Astrophysics Data System (ADS)

    Chen, Chongbin; Quan, Qiquan; Deng, Zongquan; Jiang, Shengyuan

    2016-07-01

    Drilling and coring is an effective way to acquire lunar regolith samples along the depth direction. To facilitate the modeling and simulation of lunar drilling, ground verification experiments for drilling and coring should be performed using lunar regolith simulant. The simulant should mimic actual lunar regolith, and the distribution of its mechanical properties should vary along the longitudinal direction. Furthermore, an appropriate preparation method is required to ensure that the simulant has consistent mechanical properties so that the experimental results can be repeatable. Vibratory compaction actively changes the relative density of a raw material, making it suitable for building a multilayered drilling simulant. It is necessary to determine the relation between the preparation parameters and the expected mechanical properties of the drilling simulant. A vibratory compaction model based on the ideal elastoplastic theory is built to represent the dynamical properties of the simulant during compaction. Preparation experiments indicated that the preparation method can be used to obtain drilling simulant with the desired mechanical property distribution along the depth direction.

  20. Geant4 Simulation of Air Showers using Thinning Method

    NASA Astrophysics Data System (ADS)

    Sabra, Mohammad S.; Watts, John W.; Christl, Mark J.

    2015-04-01

    Simulation of complete air showers induced by cosmic ray particles becomes prohibitive at extreme energies due to the large number of secondary particles. The computing time of such simulations roughly scales with the energy of the primary cosmic ray particle and becomes excessively large. To mitigate the problem, only a small fraction of the particles is tracked, and the whole shower is then reconstructed from this sample. This method is called thinning. Using this method in Geant4, we have simulated proton and iron air showers at extreme energies (E > 10^16 eV). Secondary particle densities are calculated and compared with the standard simulation program in this field, CORSIKA. This work is supported by the NASA Postdoctoral Program administered by Oak Ridge Associated Universities.
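
    A minimal sketch of the thinning bookkeeping, in the Hillas style commonly used by air-shower codes: below a threshold energy, only one secondary per interaction survives, chosen with probability proportional to its energy and carrying a boosted statistical weight so that averages remain unbiased. The threshold and the particle list are illustrative, not Geant4 or CORSIKA parameters.

        import random

        E_TH = 1e4   # thinning threshold energy (illustrative units)

        def thin(secondaries):
            """secondaries: list of (energy, weight) pairs from one interaction.
            Below the threshold, keep a single particle with probability
            proportional to its energy and boost its weight accordingly."""
            total = sum(e for e, _ in secondaries)
            if total >= E_TH:
                return secondaries               # energetic enough: track everything
            r = random.random() * total
            acc = 0.0
            for e, w in secondaries:
                acc += e
                if r <= acc:
                    return [(e, w * total / e)]  # weight boost = 1/p = total/e
            return [secondaries[-1]]             # numerical safety net

        print(thin([(3000.0, 1.0), (2000.0, 1.0), (1000.0, 1.0)]))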

  1. TU-C-17A-08: Improving IMRT Planning and Reducing Inter-Planner Variability Using the Stochastic Frontier Method: Validation Based On Clinical and Simulated Data

    SciTech Connect

    Gagne, MC; Archambault, L; Tremblay, D; Varfalvy, N

    2014-06-15

    Purpose: Intensity modulated radiation therapy always requires compromises between PTV coverage and organ at risk (OAR) sparing. We previously developed metrics that correlate doses to OARs with specific patient morphology using stochastic frontier analysis (SFA). Here, we aim to examine the validity of this approach using a large set of realistically simulated dosimetric and geometric data. Methods: SFA describes a set of treatment plans as an asymmetric distribution with respect to a frontier defining optimal plans. Eighty head and neck IMRT plans were used to establish a metric predicting the mean dose to parotids as a function of simple geometric parameters. A database of 140 parotids was used as a basis distribution to simulate physically plausible data of geometry and dose. Distributions comprising between 20 and 5000 organs were simulated, and SFA was applied to obtain new frontiers, which were compared to the original frontier. Results: It was possible to simulate distributions consistent with the original dataset. Below 160 organs, SFA could not always describe distributions as asymmetric: a few cases showed a Gaussian or half-Gaussian distribution. In order to converge to a stable solution, the number of organs in a distribution must ideally be above 100, but in many cases stable parameters could be achieved with as few as 60 samples of organ data. The mean RMS error of the new frontiers was significantly reduced when additional organs were used. Conclusion: The number of organs in a distribution was shown to affect the effectiveness of the model. It is always possible to obtain a frontier, but if the number of organs in the distribution is small (< 160), it may not represent the lowest dose achievable. These results will be used to determine the number of cases necessary to adapt the model to other organs.

  2. A Cloud-Based Simulation Architecture for Pandemic Influenza Simulation

    PubMed Central

    Eriksson, Henrik; Raciti, Massimiliano; Basile, Maurizio; Cunsolo, Alessandro; Fröberg, Anders; Leifler, Ola; Ekberg, Joakim; Timpka, Toomas

    2011-01-01

    High-fidelity simulations of pandemic outbreaks are resource consuming. Cluster-based solutions have been suggested for executing such complex computations. We present a cloud-based simulation architecture that utilizes computing resources both locally available and dynamically rented online. The approach uses the Condor framework for job distribution and management of the Amazon Elastic Computing Cloud (EC2) as well as local resources. The architecture has a web-based user interface that allows users to monitor and control simulation execution. In a benchmark test, the best cost-adjusted performance was recorded for the EC2 H-CPU Medium instance, while a field trial showed that the job configuration had significant influence on the execution time and that the network capacity of the master node could become a bottleneck. We conclude that it is possible to develop a scalable simulation environment that uses cloud-based solutions, while providing an easy-to-use graphical user interface. PMID:22195089

  3. A cloud-based simulation architecture for pandemic influenza simulation.

    PubMed

    Eriksson, Henrik; Raciti, Massimiliano; Basile, Maurizio; Cunsolo, Alessandro; Fröberg, Anders; Leifler, Ola; Ekberg, Joakim; Timpka, Toomas

    2011-01-01

    High-fidelity simulations of pandemic outbreaks are resource consuming. Cluster-based solutions have been suggested for executing such complex computations. We present a cloud-based simulation architecture that utilizes computing resources both locally available and dynamically rented online. The approach uses the Condor framework for job distribution and management of the Amazon Elastic Computing Cloud (EC2) as well as local resources. The architecture has a web-based user interface that allows users to monitor and control simulation execution. In a benchmark test, the best cost-adjusted performance was recorded for the EC2 H-CPU Medium instance, while a field trial showed that the job configuration had significant influence on the execution time and that the network capacity of the master node could become a bottleneck. We conclude that it is possible to develop a scalable simulation environment that uses cloud-based solutions, while providing an easy-to-use graphical user interface. PMID:22195089

  4. Assessing the performance of the MM/PBSA and MM/GBSA methods: I. The accuracy of binding free energy calculations based on molecular dynamics simulations

    PubMed Central

    Hou, Tingjun; Wang, Junmei; Li, Youyong; Wang, Wei

    2011-01-01

    The Molecular Mechanics/Poisson Boltzmann Surface Area (MM/PBSA) and the Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) methods calculate binding free energies for macromolecules by combining molecular mechanics calculations and continuum solvation models. To systematically evaluate the performance of these methods, we report here an extensive study of 59 ligands interacting with six different proteins. First, we explored the effects of the length of the molecular dynamics (MD) simulation, ranging from 400 to 4800 ps, and of the solute dielectric constant (1, 2 or 4) on the binding free energies predicted by MM/PBSA. Three important conclusions emerged: (1) MD simulation length has an obvious impact on the predictions, and longer MD simulations are not always necessary to achieve better predictions; (2) the predictions are quite sensitive to the solute dielectric constant, and this parameter should be carefully determined according to the characteristics of the protein/ligand binding interface; (3) conformational entropy showed large fluctuations in MD trajectories, and a large number of snapshots are necessary to achieve stable predictions. Next, we evaluated the accuracy of the binding free energies calculated by three Generalized Born (GB) models. We found that the GB model developed by Onufriev and Case was the most successful model in ranking the binding affinities of the studied inhibitors. Finally, we evaluated the performance of MM/GBSA and MM/PBSA in predicting binding free energies. Our results showed that MM/PBSA performed better in calculating absolute, but not necessarily relative, binding free energies than MM/GBSA. Considering its computational efficiency, MM/GBSA can serve as a powerful tool in drug design, where correct ranking of inhibitors is often emphasized. PMID:21117705
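
    In schematic form, both methods evaluate the same decomposition (the standard MM/PB(GB)SA bookkeeping, written here for reference; exact term grouping varies slightly between implementations, and each average is taken over MD snapshots):

        \Delta G_{\text{bind}} = \langle G_{\text{complex}} \rangle - \langle G_{\text{receptor}} \rangle - \langle G_{\text{ligand}} \rangle,
        \qquad G = E_{\text{MM}} + G_{\text{PB/GB}} + G_{\text{SA}} - T S_{\text{conf}},

    where E_MM collects the internal, electrostatic and van der Waals molecular mechanics terms, G_PB/GB is the polar solvation free energy from the Poisson-Boltzmann or Generalized Born model, G_SA is the nonpolar (surface area) term, and TS_conf is the conformational entropy contribution.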

  5. Regolith simulant preparation methods for hardware testing

    NASA Astrophysics Data System (ADS)

    Gouache, Thibault P.; Brunskill, Christopher; Scott, Gregory P.; Gao, Yang; Coste, Pierre; Gourinat, Yves

    2010-12-01

    To qualify hardware for space flight, great care is taken to replicate the environment encountered in space. Emphasis is placed on presenting the hardware with the most extreme conditions it might encounter during its mission lifetime. The same care should be taken when regolith simulants are prepared to test space system performance. Indeed, the manner in which a granular material is prepared can strongly influence its mechanical properties and the performance of the system interacting with it. Three regolith simulant preparation methods have been tested and are presented here (rain, pour, vibrate). They should enable researchers and hardware developers to test their prototypes under controlled and repeatable conditions. The pour and vibrate techniques are robust but achieve only a single relative density each. The rain technique can achieve a range of relative densities but can be less robust if manually controlled.

  6. Physics-Based Simulator for NEO Exploration Analysis & Simulation

    NASA Technical Reports Server (NTRS)

    Balaram, J.; Cameron, J.; Jain, A.; Kline, H.; Lim, C.; Mazhar, H.; Myint, S.; Nayar, H.; Patton, R.; Pomerantz, M.; Quadrelli, M.; Shakkotai, P.; Tso, K.

    2011-01-01

    As part of the Space Exploration Analysis and Simulation (SEAS) task, the National Aeronautics and Space Administration (NASA) is using physics-based simulations at NASA's Jet Propulsion Laboratory (JPL) to explore potential surface and near-surface mission operations at Near Earth Objects (NEOs). The simulator is under development at JPL and can be used to provide detailed analysis of various surface and near-surface NEO robotic and human exploration concepts. In this paper we describe the SEAS simulator and provide examples of recent mission systems and operations concepts investigated using the simulation. We also present related analysis work and tools developed for both the SEAS task as well as general modeling, analysis and simulation capabilities for asteroid/small-body objects.

  7. Infrared Image Simulation Based On Statistical Learning Theory

    NASA Astrophysics Data System (ADS)

    Chaochao, Huang; Xiaodi, Wu; Wuqin, Tong

    2007-12-01

    A real-time simulation algorithm for infrared images based on statistical learning theory is presented. The method comprises three steps: acquiring the training sample, forecasting the scene temperature field values with a statistical learning machine, and processing and analyzing the temperature field data. The simulation results show that this algorithm, based on ν-support vector regression, has better maneuverability and generalization than other methods, and that the simulation precision and real-time performance are satisfactory.
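    As a rough illustration of the forecasting step, the sketch below fits a ν-support vector regressor mapping scene parameters to surface temperature; the feature names and data are invented placeholders, and scikit-learn's NuSVR stands in for whatever learning machine the authors used.

    import numpy as np
    from sklearn.svm import NuSVR

    # Hypothetical training data: three scene parameters (e.g., air
    # temperature, solar flux, emissivity) -> measured surface temperature.
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(500, 3))
    y = 300.0 + 20.0 * X[:, 0] - 5.0 * X[:, 1] + rng.normal(0.0, 0.5, 500)

    # nu bounds the fraction of support vectors / margin errors.
    model = NuSVR(nu=0.5, C=10.0, kernel="rbf").fit(X, y)
    print(model.predict(X[:3]))  # forecast temperature-field values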

  8. Interactive methods for exploring particle simulation data

    SciTech Connect

    Co, Christopher S.; Friedman, Alex; Grote, David P.; Vay, Jean-Luc; Bethel, E. Wes; Joy, Kenneth I.

    2004-05-01

    In this work, we visualize high-dimensional particle simulation data using a suite of scatter plot-based visualizations coupled with interactive selection tools. We use traditional 2D and 3D projection scatter plots as well as a novel oriented disk rendering style to convey various information about the data. Interactive selection tools allow physicists to manually classify "interesting" sets of particles that are highlighted across multiple, linked views of the data. The power of our application is the ability to link new visual representations of the simulation data with traditional, well-understood visualizations. This approach supports the interactive exploration of the high-dimensional space while promoting discovery of new particle behavior.

  9. Level set method for microfabrication simulations

    NASA Astrophysics Data System (ADS)

    Baranski, Maciej; Kasztelanic, Rafal; Albero, Jorge; Nieradko, Lukasz; Gorecki, Christophe

    2010-05-01

    The article describes the application of the Level Set method to two different microfabrication processes. The first is the shape evolution of a glass structure during reflow. The investigated problem was approximated by viscous flow of the material, so the kinetics of the process were known from a physical model. The second problem is isotropic wet etching of silicon, which is considerably more complicated because the dynamics of the shape evolution are strongly coupled with time and with the history of the geometry. In the etching simulations, the Level Set method is coupled with the Finite Element Method (FEM), which is used to calculate the etching acid concentration that determines the geometry evolution of the structure. The problem arising from working with FEM on time-varying boundaries was solved with a dynamic mesh technique employing the Level Set formalism of a higher-dimensional function for geometry description. Isotropic etching was investigated in the context of micro-lens fabrication. The model was compared with experimental data obtained by etching the silicon moulds used for micro-lens fabrication.
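    The core of such a simulation is the evolution of the level set function whose zero contour is the etch front. A minimal sketch of that idea, for a uniform isotropic etch rate F obeying the Hamilton-Jacobi equation phi_t + F|grad phi| = 0, is given below; it uses central differences for brevity (production codes use upwind schemes) and omits the FEM-computed acid concentration, which would simply make F vary in space.

    import numpy as np

    # Illustrative 2D level set evolution under a uniform isotropic etch
    # rate F; the interface is the zero contour of phi.
    n = 128
    h, F = 1.0 / n, 1.0
    dt = 0.5 * h                                       # small step for stability
    x = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    phi = np.sqrt((X - 0.5)**2 + (Y - 1.0)**2) - 0.1   # initial etch front

    for _ in range(100):
        gx = np.gradient(phi, h, axis=1)
        gy = np.gradient(phi, h, axis=0)
        phi -= dt * F * np.sqrt(gx**2 + gy**2)         # advance the front at speed F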

  10. Developing a Theory of Digitally-Enabled Trial-Based Problem Solving through Simulation Methods: The Case of Direct-Response Marketing

    ERIC Educational Resources Information Center

    Clark, Joseph Warren

    2012-01-01

    In turbulent business environments, change is rapid, continuous, and unpredictable. Turbulence undermines those adaptive problem solving methods that generate solutions by extrapolating from what worked (or did not work) in the past. To cope with this challenge, organizations utilize trial-based problem solving (TBPS) approaches in which they…

  11. Physiological Based Simulator Fidelity Design Guidance

    NASA Technical Reports Server (NTRS)

    Schnell, Thomas; Hamel, Nancy; Postnikov, Alex; Hoke, Jaclyn; McLean, Angus L. M. Thom, III

    2012-01-01

    The evolution of the role of flight simulation has reinforced assumptions in aviation that the degree of realism in a simulation system directly correlates with the training benefit, i.e., that more fidelity is always better. The construct of fidelity has several dimensions, including physical fidelity, functional fidelity, and cognitive fidelity. The interaction of different fidelity dimensions has an impact on trainee immersion, presence, and transfer of training. This paper discusses the results of a recent study that investigated whether physiological methods could be used to determine the required level of simulator fidelity. Pilots performed a relatively complex flight task consisting of mission task elements of various levels of difficulty in a fixed-base flight simulator and a real fighter jet trainer aircraft. Flight runs were performed using one forward visual channel of 40 deg field of view for the lowest level of fidelity, 120 deg field of view for the middle level of fidelity, and unrestricted field of view and full dynamic acceleration in the real airplane. Neuro-cognitive and physiological measures were collected under these conditions using the Cognitive Avionics Tool Set (CATS), and nonlinear closed-form models for workload prediction were generated from these data for the various mission task elements. One finding of the work described herein is that simple heart rate is a relatively good predictor of cognitive workload, even for short tasks with dynamic changes in cognitive loading. Additionally, we found that models that use a wide range of physiological and neuro-cognitive measures can further boost the accuracy of the workload prediction.

  12. A novel load balancing method for hierarchical federation simulation system

    NASA Astrophysics Data System (ADS)

    Bin, Xiao; Xiao, Tian-yuan

    2013-07-01

    In contrast with a single-HLA-federation framework, a hierarchical federation framework can improve the performance of a large-scale simulation system to a certain degree by distributing load across several RTIs. However, in the hierarchical framework the RTI is still the center of message exchange for the federation and thus remains its performance bottleneck: the data explosion in a large-scale HLA federation may overload the RTI, degrading federation performance or even causing fatal errors. To address this problem, this paper proposes a load balancing method for hierarchical federation simulation systems based on queuing theory, comprising three main modules: queue length prediction, a load control policy, and a controller. The method improves the utilization of federate-node resources and the performance of the HLA simulation system by balancing load across the RTIG and the federates. Finally, experimental results are presented to demonstrate the method's efficient control.
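    As a toy version of the queue-length-prediction module, the sketch below uses an M/M/1 approximation to decide when the predicted RTI queue length exceeds a control threshold; the rates, the threshold, and the M/M/1 assumption itself are illustrative, not the paper's actual model.

    # Illustrative queue-length prediction for load control.
    def predicted_queue_length(arrival_rate: float, service_rate: float) -> float:
        rho = arrival_rate / service_rate      # utilization of the RTI
        if rho >= 1.0:
            return float("inf")                # unstable: queue grows without bound
        return rho / (1.0 - rho)               # mean number in an M/M/1 system

    THRESHOLD = 50.0                           # hypothetical control threshold
    if predicted_queue_length(990.0, 1000.0) > THRESHOLD:
        print("shed load: migrate federates or throttle updates")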

  13. A discrete event method for wave simulation

    SciTech Connect

    Nutaro, James J

    2006-01-01

    This article describes a discrete event interpretation of the finite difference time domain (FDTD) and digital wave guide network (DWN) wave simulation schemes. The discrete event method is formalized using the discrete event system specification (DEVS). The scheme is shown to have errors that are proportional to the resolution of the spatial grid. A numerical example demonstrates the relative efficiency of the scheme with respect to FDTD and DWN schemes. The potential for the discrete event scheme to reduce numerical dispersion and attenuation errors is discussed.

  14. Efficiency of ultrasound training simulators: method for assessing image realism.

    PubMed

    Bø, Lars Eirik; Gjerald, Sjur Urdson; Brekken, Reidar; Tangen, Geir Arne; Hernes, Toril A Nagelhus

    2010-04-01

    Although ultrasound has become an important imaging modality within several medical professions, the benefit of ultrasound depends to some degree on the skills of the person operating the probe and interpreting the image. For some applications, the possibility to educate operators in a clinical setting is limited, and the use of training simulators is considered an alternative approach for learning basic skills. To ensure the quality of simulator-based training, it is important to produce simulated ultrasound images that resemble true images to a sufficient degree. This article describes a method that allows corresponding true and simulated ultrasound images to be generated and displayed side by side in real time, thus facilitating an interactive evaluation of ultrasound simulators in terms of image resemblance, real-time characteristics and man-machine interaction. The proposed method could be used to study the realism of ultrasound simulators and how this realism affects the quality of training, as well as being a valuable tool in the development of simulation algorithms. PMID:20337541

  15. A performance-based method for calculating the design thickness of compacted clay liners exposed to high strength leachate under simulated landfill conditions.

    PubMed

    Safari, Edwin; Jalili Ghazizade, Mahdi; Abdoli, Mohammad Ali

    2012-09-01

    Compacted clay liners (CCLs), when feasible, are preferred to composite geosynthetic liners. The thickness of CCLs is typically prescribed by each country's environmental protection regulations. However, considering that construction of CCLs represents a significant portion of overall landfill construction costs, a performance-based design of liner thickness would be preferable to 'one size fits all' prescriptive standards. In this study, the hydraulic behaviour of a compacted clayey soil was analyzed in three laboratory pilot-scale columns exposed to high-strength leachate under simulated landfill conditions. The temperature of the simulated CCL at the surface was maintained at 40 ± 2 °C, and a vertical pressure of 250 kPa was applied to the soil through a gravel layer on top of the 50 cm thick CCL, where high-strength fresh leachate was circulated at heads of 15 and 30 cm, simulating the flow over the CCL. Inverse modelling using HYDRUS-1D indicated that the hydraulic conductivity after 180 days had decreased by about three orders of magnitude in comparison with the values measured prior to the experiment. A number of scenarios of different leachate heads and persistence times were considered, and the saturation depth of the CCL was predicted through modelling. Under a typical leachate head of 30 cm, the saturation depth was predicted to be less than 60 cm for a persistence time of 3 years. This approach can be generalized to estimate an effective thickness of a CCL instead of using prescribed values, which may be conservatively overdesigned and thus unduly costly. PMID:22617473

  16. Fast simulations of patient-specific haemodynamics of coronary artery bypass grafts based on a POD-Galerkin method and a vascular shape parametrization

    NASA Astrophysics Data System (ADS)

    Ballarin, Francesco; Faggiano, Elena; Ippolito, Sonia; Manzoni, Andrea; Quarteroni, Alfio; Rozza, Gianluigi; Scrofani, Roberto

    2016-06-01

    In this work a reduced-order computational framework for the study of haemodynamics in three-dimensional patient-specific configurations of coronary artery bypass grafts dealing with a wide range of scenarios is proposed. We combine several efficient algorithms to face at the same time both the geometrical complexity involved in the description of the vascular network and the huge computational cost entailed by time-dependent patient-specific flow simulations. Medical imaging procedures allow the reconstruction of patient-specific configurations from clinical data. A centerlines-based parametrization is proposed to efficiently handle geometrical variations. POD-Galerkin reduced-order models are employed to cut down large computational costs. This computational framework makes it possible to characterize blood flows for different physical and geometrical variations relevant in clinical practice, such as stenosis factors and anastomosis variations, in a rapid and reliable way. Several numerical results are discussed, highlighting the computational performance of the proposed framework, as well as its capability to carry out sensitivity analysis studies that were so far out of reach. In particular, a reduced-order simulation takes only a few minutes to run, resulting in computational savings of 99% of CPU time with respect to the full-order discretization. Moreover, the error between full-order and reduced-order solutions is also studied, and it is numerically found to be less than 1% for reduced-order solutions obtained with just O(100) online degrees of freedom.
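    The offline stage of a POD-Galerkin reduced-order model can be summarized in a few lines: collect full-order snapshots, extract a reduced basis by a thin SVD, and later project the operators onto that basis. The sketch below shows only the basis-extraction step, with random snapshots standing in for full-order flow solutions.

    import numpy as np

    # POD basis extraction from a snapshot matrix (one column per
    # full-order solve); illustrative sizes and random data.
    n_dofs, n_snapshots, n_modes = 10_000, 100, 20
    S = np.random.rand(n_dofs, n_snapshots)

    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    basis = U[:, :n_modes]                          # POD modes
    energy = np.cumsum(sigma**2) / np.sum(sigma**2)
    print(f"retained energy with {n_modes} modes: {energy[n_modes - 1]:.4f}")

    # Online stage (not shown): Galerkin-project the full-order operators
    # onto `basis`, yielding a small system that solves in minutes.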

  17. Apparatus for and method of simulating turbulence

    DOEpatents

    Dimas, Athanassios; Lottati, Isaac; Bernard, Peter; Collins, James; Geiger, James C.

    2003-01-01

    In accordance with a preferred embodiment of the invention, a novel apparatus for and method of simulating physical processes such as fluid flow is provided. Fluid flow near a boundary or wall of an object is represented by a collection of vortex sheet layers. The layers are composed of a grid or mesh of one or more geometrically shaped space filling elements. In the preferred embodiment, the space filling elements take on a triangular shape. An Eulerian approach is employed for the vortex sheets, where a finite-volume scheme is used on the prismatic grid formed by the vortex sheet layers. A Lagrangian approach is employed for the vortical elements (e.g., vortex tubes or filaments) found in the remainder of the flow domain. To reduce the computational time, a hairpin removal scheme is employed to reduce the number of vortex filaments, and a Fast Multipole Method (FMM), preferably implemented using parallel processing techniques, reduces the computation of the velocity field.

  18. Agent-Based Simulations for Project Management

    NASA Technical Reports Server (NTRS)

    White, J. Chris; Sholtes, Robert M.

    2011-01-01

    Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique being used at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.
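    The contrast with CPM can be stated in one line of code: in a resource-based model, duration is computed from work and staffing instead of being typed in. A deliberately simple sketch (all names and numbers invented):

    # Duration as an *output* of work, productivity, and staffing.
    def task_duration(work_units: float, productivity: float, resources: int) -> float:
        return work_units / (productivity * resources)

    print(task_duration(120.0, 1.5, 2))   # 40.0 time units
    print(task_duration(120.0, 1.5, 4))   # corrective action: add staff -> 20.0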

  19. An example-based brain MRI simulation framework

    NASA Astrophysics Data System (ADS)

    He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L.

    2015-03-01

    The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of the MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on a statistical model of the training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.

  20. Approximate method for estimating plasma ionization characteristics based on numerical simulation of the dynamics of a plasma bunch with a high specific energy in the upper ionosphere

    NASA Astrophysics Data System (ADS)

    Motorin, A. A.; Stupitsky, E. L.; Kholodov, A. S.

    2016-07-01

    The spatiotemporal pattern of the development of a plasma cloud formed in the ionosphere and the cloud's main gas-dynamic characteristics have been obtained from our previous 3D calculations of explosion-type plasmodynamic flows. Based on these results, an approximate method for estimating the plasma temperature and ionization degree, with the introduction of an effective adiabatic index, has been proposed.

  1. Lensless ghost imaging based on mathematical simulation and experimental simulation

    NASA Astrophysics Data System (ADS)

    Liu, Yanyan; Wang, Biyi; Zhao, Yingchao; Dong, Junzhang

    2014-02-01

    The differences between conventional imaging and correlated imaging are discussed in this paper. A mathematical model of a lensless ghost imaging system is set up, and the image of a double slit is computed by mathematical simulation. The results are also verified experimentally. Both the simulations and the experimental verification show that the mathematical model, based on statistical optics, is consistent with the real experimental results.

  2. High-performance liquid chromatography - Ultraviolet method for the determination of total specific migration of nine ultraviolet absorbers in food simulants based on 1,1,3,3-Tetramethylguanidine and organic phase anion exchange solid phase extraction to remove glyceride.

    PubMed

    Wang, Jianling; Xiao, Xiaofeng; Chen, Tong; Liu, Tingfei; Tao, Huaming; He, Jun

    2016-06-17

    The glyceride in oil food simulant usually causes serious interference with target analytes and leads to failure of the normal function of the RP-HPLC column. In this work, a convenient HPLC-UV method for the determination of the total specific migration of nine ultraviolet (UV) absorbers in food simulants was developed based on 1,1,3,3-tetramethylguanidine (TMG) and organic phase anion exchange (OPAE) SPE to efficiently remove glyceride in olive oil simulant. In contrast to normal ion exchange carried out in an aqueous solution or aqueous phase environment, the OPAE SPE was performed in an organic phase environment, and the time-consuming and challenging extraction of the nine UV absorbers from vegetable oil with aqueous solution could be readily omitted. The method was proved to have good linearity (r ≥ 0.99992), precision (intra-day RSD ≤ 3.3%), and accuracy (91.0% ≤ recoveries ≤ 107%); furthermore, low limits of quantification (0.05-0.2 mg/kg) were observed in five types of food simulants (10% ethanol, 3% acetic acid, 20% ethanol, 50% ethanol and olive oil). The method was found to be well suited for quantitative determination of the total specific migration of the nine UV absorbers both in aqueous and vegetable oil simulants according to Commission Regulation (EU) No. 10/2011. Migration levels of the nine UV absorbers were determined in 31 plastic samples, and UV-24, UV-531, HHBP and UV-326 were frequently detected, especially UV-326 in olive oil simulant for PE samples. In addition, the OPAE SPE procedure was also applied to efficiently enrich or purify seven antioxidants in olive oil simulant. The results indicate that this procedure will have more extensive applications in the enrichment or purification of extremely weak acidic compounds with a phenol hydroxyl group that are relatively stable in TMG n-hexane solution and can barely be extracted from vegetable oil. PMID:27189432

  3. An improved method for simulating microcalcifications in digital mammograms.

    PubMed

    Zanca, Federica; Chakraborty, Dev Prasad; Van Ongeval, Chantal; Jacobs, Jurgen; Claus, Filip; Marchal, Guy; Bosmans, Hilde

    2008-09-01

    The assessment of the performance of a digital mammography system requires an observer study with a relatively large number of cases with known truth which is often difficult to assemble. Several investigators have developed methods for generating hybrid abnormal images containing simulated microcalcifications. This article addresses some of the limitations of earlier methods. The new method is based on digital images of needle biopsy specimens. Since the specimens are imaged separately from the breast, the microcalcification attenuation profile scan is deduced without the effects of over and underlying tissues. The resulting templates are normalized for image acquisition specific parameters and reprocessed to simulate microcalcifications appropriate to other imaging systems, with different x-ray, detector and image processing parameters than the original acquisition system. This capability is not shared by previous simulation methods that have relied on extracting microcalcifications from breast images. The method was validated by five experienced mammographers who compared 59 pairs of simulated and real microcalcifications in a two-alternative forced choice task designed to test if they could distinguish the real from the simulated lesions. They also classified the shapes of the microcalcifications according to a standardized clinical lexicon. The observed probability of correct choice was 0.415, 95% confidence interval (0.284, 0.546), showing that the radiologists were unable to distinguish the lesions. The shape classification revealed substantial agreement with the truth (mean kappa = 0.70), showing that we were able to accurately simulate the lesion morphology. While currently limited to single microcalcifications, the method is extensible to more complex clusters of microcalcifications and to three-dimensional images. It can be used to objectively assess an imaging technology, especially with respect to its ability to adequately visualize the morphology of

  4. An improved method for simulating microcalcifications in digital mammograms

    SciTech Connect

    Zanca, Federica; Chakraborty, Dev Prasad; Ongeval, Chantal van; Jacobs, Jurgen; Claus, Filip; Marchal, Guy; Bosmans, Hilde

    2008-09-15

    The assessment of the performance of a digital mammography system requires an observer study with a relatively large number of cases with known truth which is often difficult to assemble. Several investigators have developed methods for generating hybrid abnormal images containing simulated microcalcifications. This article addresses some of the limitations of earlier methods. The new method is based on digital images of needle biopsy specimens. Since the specimens are imaged separately from the breast, the microcalcification attenuation profile scan is deduced without the effects of over and underlying tissues. The resulting templates are normalized for image acquisition specific parameters and reprocessed to simulate microcalcifications appropriate to other imaging systems, with different x-ray, detector and image processing parameters than the original acquisition system. This capability is not shared by previous simulation methods that have relied on extracting microcalcifications from breast images. The method was validated by five experienced mammographers who compared 59 pairs of simulated and real microcalcifications in a two-alternative forced choice task designed to test if they could distinguish the real from the simulated lesions. They also classified the shapes of the microcalcifications according to a standardized clinical lexicon. The observed probability of correct choice was 0.415, 95% confidence interval (0.284, 0.546), showing that the radiologists were unable to distinguish the lesions. The shape classification revealed substantial agreement with the truth (mean kappa=0.70), showing that we were able to accurately simulate the lesion morphology. While currently limited to single microcalcifications, the method is extensible to more complex clusters of microcalcifications and to three-dimensional images. It can be used to objectively assess an imaging technology, especially with respect to its ability to adequately visualize the morphology of the

  5. Implicit methods for efficient musculoskeletal simulation and optimal control

    PubMed Central

    van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter

    2011-01-01

    The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first-order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder-arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers. PMID:22102983
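    The essence of the implicit formulation is that the dynamics are expressed as a residual f(x, xdot) = 0 and each time step solves a root-finding problem. The sketch below applies that idea with a plain backward-Euler step and a generic root finder on a stiff linear test system; the paper uses a Rosenbrock method on full musculoskeletal dynamics, so this illustrates the formulation only, not the authors' solver.

    import numpy as np
    from scipy.optimize import fsolve

    k = np.array([1.0, 1000.0])            # mixed slow/fast rates -> stiff

    def residual(x_new, x_old, dt):
        xdot = (x_new - x_old) / dt        # finite-difference derivative
        return xdot + k * x_new            # implicit dynamics f(x, xdot) = 0

    x, dt = np.array([1.0, 1.0]), 0.01
    for _ in range(100):
        x = fsolve(residual, x, args=(x, dt))   # solve f = 0 for the next state
    print(x)                               # decays stably despite the stiff mode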

  6. An improved method for simulating microcalcifications in digital mammograms

    PubMed Central

    Zanca, Federica; Chakraborty, Dev Prasad; Van Ongeval, Chantal; Jacobs, Jurgen; Claus, Filip; Marchal, Guy; Bosmans, Hilde

    2008-01-01

    The assessment of the performance of a digital mammography system requires an observer study with a relatively large number of cases with known truth which is often difficult to assemble. Several investigators have developed methods for generating hybrid abnormal images containing simulated microcalcifications. This article addresses some of the limitations of earlier methods. The new method is based on digital images of needle biopsy specimens. Since the specimens are imaged separately from the breast, the microcalcification attenuation profile scan is deduced without the effects of over and underlying tissues. The resulting templates are normalized for image acquisition specific parameters and reprocessed to simulate microcalcifications appropriate to other imaging systems, with different x-ray, detector and image processing parameters than the original acquisition system. This capability is not shared by previous simulation methods that have relied on extracting microcalcifications from breast images. The method was validated by five experienced mammographers who compared 59 pairs of simulated and real microcalcifications in a two-alternative forced choice task designed to test if they could distinguish the real from the simulated lesions. They also classified the shapes of the microcalcifications according to a standardized clinical lexicon. The observed probability of correct choice was 0.415, 95% confidence interval (0.284, 0.546), showing that the radiologists were unable to distinguish the lesions. The shape classification revealed substantial agreement with the truth (mean kappa=0.70), showing that we were able to accurately simulate the lesion morphology. While currently limited to single microcalcifications, the method is extensible to more complex clusters of microcalcifications and to three-dimensional images. It can be used to objectively assess an imaging technology, especially with respect to its ability to adequately visualize the morphology of the

  7. Etch Profile Simulation Using Level Set Methods

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    Etching and deposition of materials are critical steps in semiconductor processing for device manufacturing. Both etching and deposition may have isotropic and anisotropic components, due to directional sputtering and redeposition of materials, for example. Previous attempts at modeling profile evolution have used so-called "string theory" to simulate the moving solid-gas interface between the semiconductor and the plasma. One complication of this method is that extensive de-looping schemes are required at the profile corners. We will present a 2D profile evolution simulation using level set theory to model the surface. By embedding the location of the interface in a field variable, the need for de-looping schemes is eliminated and profile corners are more accurately modeled. This level set profile evolution model calculates both isotropic and anisotropic etch and deposition rates of a substrate in low-pressure (tens of mTorr) plasmas, considering the incident ion energy angular distribution functions and neutral fluxes. We will present etching profiles of Si substrates in Ar/Cl2 discharges for various incident ion energies and trench geometries.

  8. Application of particle method to the casting process simulation

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Zulaida, Y. M.; Anzai, K.

    2012-07-01

    Casting processes involve many significant phenomena, such as fluid flow, solidification, and deformation, and it is known that casting defects are strongly influenced by these phenomena. However, the phenomena interact with each other in complex ways and are difficult to observe directly, because the temperature of the melt and of the other apparatus components is quite high and the materials are generally opaque; therefore, computer simulation is expected to offer considerable benefits for understanding what happens in these processes. Recently, particle methods, which are fully Lagrangian methods, have attracted considerable attention. Because they involve no computational lattice, such methods have developed rapidly owing to their applicability to multi-physics problems. In this study, we combined fluid flow, heat transfer and solidification simulation programs, and simulated various casting processes such as continuous casting, centrifugal casting and ingot making. In the continuous casting simulation, the powder flow could be calculated as well as the melt flow, and the resulting shape of the interface between the melt and the powder was obtained. In the centrifugal casting simulation, the mold was smoothly modeled along the shape of the real mold, and the fluid flow and the rotating mold were simulated directly. As a result, the flow of the melt dragged by the rotating mold was calculated well. The eccentric rotation and the influence of the Coriolis force were also reproduced directly and naturally. In the ingot-making simulation, the shrinkage formation behavior was calculated, and the shape of the shrinkage agreed well with the experimental result.

  9. Development of semiclassical molecular dynamics simulation method.

    PubMed

    Nakamura, Hiroki; Nanbu, Shinkoh; Teranishi, Yoshiaki; Ohta, Ayumi

    2016-04-28

    Various quantum mechanical effects such as nonadiabatic transitions, quantum mechanical tunneling and coherence play crucial roles in a variety of chemical and biological systems. In this paper, we propose a method to incorporate tunneling effects into the molecular dynamics (MD) method, which is purely based on classical mechanics. Caustics, which define the boundary between classically allowed and forbidden regions, are detected along classical trajectories and the optimal tunneling path with minimum action is determined by starting from each appropriate caustic. The real phase associated with tunneling can also be estimated. Numerical demonstration with use of a simple collinear chemical reaction O + HCl → OH + Cl is presented in order to help the reader to well comprehend the method proposed here. Generalization to the on-the-fly ab initio version is rather straightforward. By treating the nonadiabatic transitions at conical intersections by the Zhu-Nakamura theory, new semiclassical MD methods can be developed. PMID:27067383

  10. IMPACT OF SIMULANT PRODUCTION METHODS ON SRAT PRODUCT

    SciTech Connect

    EIBLING, R

    2006-03-22

    The research and development programs in support of the Defense Waste Processing Facility (DWPF) and other high level waste vitrification processes require the use of both nonradioactive waste simulants and actual waste samples. The nonradioactive waste simulants have been used for laboratory testing, pilot-scale testing and full-scale integrated facility testing. Recent efforts have focused on matching the physical properties of actual sludge. These waste simulants were designed to reproduce the chemical and, if possible, the physical properties of the actual high level waste. This technical report documents a study of simulant production methods for high level waste simulated sludge and their impact on the physical properties of the resultant SRAT product. The sludge simulants used in support of DWPF have been based on average waste compositions and on expected or actual batch compositions. These sludge simulants were created to primarily match the chemical properties of the actual waste. These sludges were produced by generating manganese dioxide, MnO{sub 2}, from permanganate ion (MnO{sub 4}{sup -}) and manganous nitrate, precipitating ferric nitrate and nickel nitrate with sodium hydroxide, washing with inhibited water and then addition of other waste species. While these simulated sludges provided a good match for chemical reaction studies, they did not adequately match the physical properties (primarily rheology) measured on the actual waste. A study was completed in FY04 to determine the impact of simulant production methods on the physical properties of Sludge Batch 3 simulant. This study produced eight batches of sludge simulant, all prepared to the same chemical target, by varying the sludge production methods. The sludge batch, which most closely duplicated the actual SB3 sludge physical properties, was Test 8. Test 8 sludge was prepared by coprecipitating all of the major metals (including Al). After the sludge was washed to meet the target, the sludge

  11. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  12. Computational simulation methods for composite fracture mechanics

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1988-01-01

    Structural integrity, durability, and damage tolerance of advanced composites are assessed by studying damage initiation at various scales (micro, macro, and global) and accumulation and growth leading to global failure, quantitatively and qualitatively. In addition, various fracture toughness parameters associated with a typical damage and its growth must be determined. Computational structural analysis codes to aid the composite design engineer in performing these tasks were developed. CODSTRAN (COmposite Durability STRuctural ANalysis) is used to qualitatively and quantitatively assess the progressive damage occurring in composite structures due to mechanical and environmental loads. Next, methods are covered that are currently being developed and used at Lewis to predict interlaminar fracture toughness and related parameters of fiber composites given a prescribed damage. The general purpose finite element code MSC/NASTRAN was used to simulate the interlaminar fracture and the associated individual as well as mixed-mode strain energy release rates in fiber composites.

  13. Massively parallel simulations of multiphase flows using Lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Ahrenholz, Benjamin

    2010-03-01

    In the last two decades the lattice Boltzmann method (LBM) has matured as an alternative and efficient numerical scheme for the simulation of fluid flows and transport problems. Unlike conventional numerical schemes based on discretizations of macroscopic continuum equations, the LBM is based on microscopic models and mesoscopic kinetic equations. The fundamental idea of the LBM is to construct simplified kinetic models that incorporate the essential physics of microscopic or mesoscopic processes so that the macroscopic averaged properties obey the desired macroscopic equations. Applications involving interfacial dynamics, complex and/or changing boundaries, and complicated constitutive relationships that can be derived from a microscopic picture are especially suitable for the LBM. In this talk a modified and optimized version of a Gunstensen color model is presented to describe the dynamics of the fluid/fluid interface, where the flow field is based on a multi-relaxation-time model. Validation studies of contact line motion based on this modeling approach are shown. Because the LB method generally needs only nearest-neighbor information, the algorithm is an ideal candidate for parallelization. Hence, it is possible to perform efficient simulations in complex geometries at a large scale by massively parallel computations. Here, results of drainage and imbibition (more than 2E11 degrees of freedom) in natural porous media, with geometries obtained from microtomography methods, are presented. Those fully resolved pore-scale simulations are essential for a better understanding of the physical processes in porous media and therefore important for the determination of constitutive relationships.
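    The locality the abstract appeals to is visible in the basic LBM update itself: collide locally, then stream to nearest neighbours. A minimal single-relaxation-time (BGK) D2Q9 sketch follows; note that the talk's actual scheme is a multi-relaxation-time colour model, so this is the simpler textbook variant.

    import numpy as np

    w = np.array([4/9] + [1/9]*4 + [1/36]*4)          # D2Q9 weights
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
                  [1,1],[-1,1],[-1,-1],[1,-1]])       # lattice velocities
    nx = ny = 64
    tau = 0.8                                         # relaxation time
    f = np.ones((9, ny, nx)) * w[:, None, None]       # start at rest

    def equilibrium(rho, u):
        cu = np.einsum('qd,dyx->qyx', c, u)
        usq = (u**2).sum(axis=0)
        return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    for _ in range(100):
        rho = f.sum(axis=0)
        u = np.einsum('qd,qyx->dyx', c, f) / rho      # macroscopic velocity
        f += -(f - equilibrium(rho, u)) / tau         # local BGK collision
        for q in range(9):                            # nearest-neighbour streaming
            f[q] = np.roll(np.roll(f[q], c[q, 0], axis=1), c[q, 1], axis=0)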

  14. Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-09-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have been designed solely to archive model files, but databases should also give users a chance to validate models before downloading them. In this paper, we report our ongoing project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On a virtual machine, various software is pre-installed, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages. PMID:21741207

  15. Discrete Stochastic Simulation Methods for Chemically Reacting Systems

    PubMed Central

    Cao, Yang; Samuels, David C.

    2012-01-01

    Discrete stochastic chemical kinetics describe the time evolution of a chemically reacting system by taking into account the fact that, in reality, chemical species are present in integer populations and exhibit some degree of randomness in their dynamical behavior. In recent years, with the development of new techniques to study biochemical dynamics in a single cell, an increasing number of studies have used this approach to chemical kinetics in cellular systems, where the small copy number of some reactant species in the cell may lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. This chapter reviews the fundamental theory related to stochastic chemical kinetics and several simulation methods that are based on that theory. We focus on non-stiff biochemical systems and the two most important discrete stochastic simulation methods: Gillespie's Stochastic Simulation Algorithm (SSA) and the tau-leaping method. Different implementation strategies of these two methods are discussed. Then we recommend a relatively simple and efficient strategy that combines the strengths of the two methods: the hybrid SSA/tau-leaping method. The implementation details of the hybrid strategy are given here and a related software package is introduced. Finally, the hybrid method is applied to simple biochemical systems as a demonstration of its application. PMID:19216925
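    For readers unfamiliar with the SSA, the direct method fits in a dozen lines: draw an exponentially distributed waiting time from the total propensity, then pick which reaction fires in proportion to its propensity. A sketch for a toy birth-death process (production at rate k1, degradation at rate k2*n; parameters invented):

    import numpy as np

    rng = np.random.default_rng(42)
    k1, k2 = 10.0, 0.1
    n, t, t_end = 0, 0.0, 100.0

    while t < t_end:
        a = np.array([k1, k2 * n])          # reaction propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)      # waiting time to the next reaction
        if rng.uniform() * a0 < a[0]:       # pick the firing reaction
            n += 1                          # birth
        else:
            n -= 1                          # death
    print(f"copy number at t = {t:.1f}: {n}")  # fluctuates around k1/k2 = 100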

  16. A Transfer Voltage Simulation Method for Generator Step Up Transformers

    NASA Astrophysics Data System (ADS)

    Funabashi, Toshihisa; Sugimoto, Toshirou; Ueda, Toshiaki; Ametani, Akihiro

    It has been found from measurements of 13 sets of generator step-up (GSU) transformers that the transfer voltage of a GSU transformer involves one dominant oscillation frequency. The frequency can be estimated from the inductance and capacitance values of the GSU transformer's low-voltage side. This observation has led to a new method for simulating a GSU transformer transfer voltage. The method is based on the EMTP TRANSFORMER model, but stray capacitances are added. The leakage inductance and the magnetizing resistance are modified using approximate curves for their frequency characteristics determined from the measured results. The new method is validated by comparison with the measured results.
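    Assuming the dominant mode behaves as a simple LC resonance of the low-voltage side, the frequency estimate the abstract refers to is f = 1/(2π√(LC)); the sketch below evaluates it with placeholder component values, not measured ones.

    import math

    L = 5e-3    # low-voltage-side inductance [H] (hypothetical)
    C = 2e-9    # low-voltage-side stray capacitance [F] (hypothetical)
    f = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
    print(f"dominant transfer-voltage frequency ~ {f / 1e3:.0f} kHz")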

  17. First Principles based methods and applications for realistic simulations on complex soft materials to develop new materials for energy, health, and environmental sustainability

    NASA Astrophysics Data System (ADS)

    Goddard, William

    2013-03-01

    For soft materials applications it is essential to obtain accurate descriptions of the weak (London dispersion, electrostatic) interactions between nonbonded units, to include interactions with and stabilization by solvent, and to obtain accurate free energies and entropy changes during chemical, physical, and thermal processing. We will describe some of the advances being made in first-principles-based methods for treating soft materials, with applications selected from new organic electrodes and electrolytes for batteries and fuel cells, forward osmosis for water cleanup, extended matter stable at ambient conditions, and drugs for modulating the activation of GPCR membrane proteins.

  18. Atomistic hybrid DSMC/NEMD method for nonequilibrium multiscale simulations

    SciTech Connect

    Gu, Kai; Watkins, Charles B.; Koplik, Joel

    2010-03-01

    A multiscale hybrid method for coupling the direct simulation Monte Carlo (DSMC) method to the nonequilibrium molecular dynamics (NEMD) method is introduced. The method addresses Knudsen-layer-type gas flows within a few mean free paths of an interface or about an object with dimensions of the order of a few mean free paths. It employs the NEMD method to resolve nanoscale phenomena closest to the interface, along with coupled DSMC simulation of the remainder of the Knudsen layer. The hybrid DSMC/NEMD method is a particle-based algorithm without a buffer zone. It incorporates a new, modified generalized soft sphere (MGSS) molecular collision model to improve the poor computational efficiency of the traditional generalized soft sphere (GSS) model and to achieve DSMC compatibility with Lennard-Jones NEMD molecular interactions. An equilibrium gas, a Fourier thermal flow, and an oscillatory Couette flow are simulated to validate the method. The method shows good agreement with Maxwell-Boltzmann theory for the equilibrium system, Chapman-Enskog theory for Fourier flow, and pure DSMC simulations for oscillatory Couette flow. Speedup in CPU time of the hybrid solver is benchmarked against a pure NEMD solver baseline for different system sizes and solver domain partitions. Finally, the hybrid method is applied to investigate the interaction of argon gas with solid surface molecules in a parametric study of the influence of wetting effects and solid molecular mass on energy transfer and thermal accommodation coefficients. It is determined that wetting effect strength and solid molecular mass have a significant impact on the energy transfer between gas and solid phases and on the thermal accommodation coefficient.

  19. A multiscale quantum mechanics/electromagnetics method for device simulations.

    PubMed

    Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua

    2015-04-01

    Multiscale modeling has become a popular tool for research applying to different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method. PMID:25611987

  20. XML-based resources for simulation

    SciTech Connect

    Kelsey, R. L.; Riese, J. M.; Young, G. A.

    2004-01-01

    As simulations and the machines they run on become larger and more complex, the inputs and outputs become more unwieldy. Increased complexity makes the setup of simulation problems difficult. It also contributes to the burden of handling and analyzing large amounts of output results. Another problem is that among a class of simulation codes (such as those for physical system simulation) there is often no single standard format or resource for input data. Running the same problem on different simulations requires a different setup for each simulation code. The Extensible Markup Language (XML) is used to represent a general set of data resources, including physical system problems, materials, and test results. These resources provide a 'plug and play' approach to simulation setup. For example, a particular material for a physical system can be selected from a material database. The XML-based representation of the selected material is then converted to the native format of the simulation being run and plugged into the simulation input file. In this manner a user can quickly and more easily put together a simulation setup. In the case of output data, an XML approach to regression testing includes tests and test results with XML-based representations. This facilitates the ability to query for specific tests and make comparisons between results. Also, output results can easily be converted to other formats for publishing online or on paper.

  1. Study of coherent synchrotron radiation effects by means of a new simulation code based on the non-linear extension of the operator splitting method

    NASA Astrophysics Data System (ADS)

    Dattoli, G.; Migliorati, M.; Schiavi, A.

    2007-05-01

    The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable: conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treat CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique based on exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.
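    The splitting idea can be demonstrated on the linear part of the problem: alternate exact exponential-operator solves of the free-streaming step with a velocity kick. The sketch below runs a Strang-split 1D-1V cycle with a fixed field and a nearest-grid-point kick; it illustrates only the operator-splitting skeleton, not the non-linear wake-field terms the code adds on top.

    import numpy as np

    nx, nv = 64, 64
    x = np.linspace(0, 2*np.pi, nx, endpoint=False)
    v = np.linspace(-3, 3, nv)
    f = np.exp(-v[None, :]**2) * (1 + 0.1*np.cos(x)[:, None])  # perturbed beam
    kx = 2*np.pi * np.fft.fftfreq(nx, d=x[1] - x[0])
    dt, E = 0.05, 2.0                       # fixed field, for demonstration only

    def stream(f, dt):
        # exact solve of f_t + v f_x = 0 via the exponential operator in Fourier space
        return np.real(np.fft.ifft(np.fft.fft(f, axis=0)
                       * np.exp(-1j * kx[:, None] * v[None, :] * dt), axis=0))

    for _ in range(20):
        f = stream(f, dt/2)                                       # half step in x
        f = np.roll(f, int(round(E*dt / (v[1] - v[0]))), axis=1)  # crude kick in v
        f = stream(f, dt/2)                                       # half step in x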

  2. Simulation-based training: the next revolution in radiology education?

    PubMed

    Desser, Terry S

    2007-11-01

    Simulation-based training methods have been widely adopted in hazardous professions such as aviation, nuclear power, and the military. Their use in medicine has been accelerating lately, fueled by the public's concerns over medical errors as well as new Accreditation Council for Graduate Medical Education requirements for outcome-based and proficiency-based assessment methods. This article reviews the rationale for simulator-based training, types of simulators, their historical development and validity testing, and some results to date in laparoscopic surgery and endoscopic procedures. A number of companies have developed endovascular simulators for interventional radiologic procedures; although they cannot as yet replicate the experience of performing cases in real patients, they promise to play an increasingly important role in procedural training in the future. PMID:17964504

  3. A High Order Element Based Method for the Simulation of Velocity Damping in the Hyporheic Zone of a High Mountain River

    NASA Astrophysics Data System (ADS)

    Preziosi-Ribero, Antonio; Peñaloza-Giraldo, Jorge; Escobar-Vargas, Jorge; Donado-Garzón, Leonardo

    2016-04-01

    Groundwater-surface water interaction is a topic that has gained relevance among the scientific community over the past decades. However, several questions within this topic remain unsolved, and almost all past research concerns transport phenomena and has little to do with understanding the dynamics of the flow patterns of the above-mentioned interactions. The aim of this research is to verify the attenuation of the water velocity that comes from the free surface and enters the porous medium under the bed of a high mountain river. Understanding this process is a key step towards characterizing and quantifying the interactions between groundwater and surface water. However, the lack of information and the difficulties that arise when measuring groundwater flows under streams make physical quantification unreliable for scientific purposes. These issues suggest that numerical simulations and in-stream velocity measurements can be used to characterize these flows. Previous studies have simulated the attenuation of a sinusoidal pulse of vertical velocity that comes from a stream and goes into a porous medium, using the Burgers equation and the 1D Navier-Stokes equations as governing equations. However, the boundary conditions of the problem, and the results when varying the different parameters of the equations, show that the understanding of the process is not yet complete. To begin with, a Spectral Multi-Domain Penalty Method (SMPM) was proposed for quantifying the velocity damping by solving the Navier-Stokes equations in 1D. The main assumptions are incompressibility and a hydrostatic approximation for the pressure distribution. This method was tested with theoretical signals that are mainly trigonometric pulses or functions. Afterwards, in order to test the results with real signals, velocity profiles were captured near the Gualí River bed (Honda, Colombia), with an Acoustic Doppler

  4. Lattice-Boltzmann-based Simulations of Diffusiophoresis

    NASA Astrophysics Data System (ADS)

    Castigliego, Joshua; Kreft Pearce, Jennifer

    We present results from a lattice-Boltzmann-based Brownian dynamics simulation of diffusiophoresis and the separation of particles within the system. A gradient in viscosity, representing a concentration gradient of a dissolved polymer, allows us to separate various types of particles by their deformability. As seen in previous experiments, simulated particles with higher deformability react differently to the polymer matrix than those with lower deformability, so the particle types can be separated from each other. This particular simulation was intended to model an oceanic system in which the particles of interest were zooplankton, phytoplankton, and microplastics. Separation of the plankton from the microplastics was achieved.

  5. A calibration of the mixing-length for solar-type stars based on hydrodynamical simulations. I. Methodical aspects and results for solar metallicity

    NASA Astrophysics Data System (ADS)

    Ludwig, Hans-Günter; Freytag, Bernd; Steffen, Matthias

    1999-06-01

    Based on detailed 2D numerical radiation hydrodynamics (RHD) calculations of time-dependent compressible convection, we have studied the dynamics and thermal structure of the convective surface layers of solar-type stars. The RHD models provide information about the convective efficiency in the superadiabatic region at the top of convective envelopes and predict the asymptotic value of the entropy of the deep, adiabatically stratified layers. This information is translated into an effective mixing-length parameter α_MLT suitable to construct standard stellar structure models. We validate the approach by a detailed comparison to helioseismic data. The grid of RHD models for solar metallicity comprises 58 simulation runs with a helium abundance of Y = 0.28 in the range of effective temperatures 4300 K ≤ Teff ≤ 7100 K and gravities 2.54 ≤ log g ≤ 4.74. We find a moderate, nevertheless significant variation of α_MLT between about 1.3 for F-dwarfs and 1.75 for K-subgiants, with a dominant dependence on Teff. In the close neighbourhood of the Sun we find a plateau where α_MLT remains almost constant. The internal accuracy of the calibration of α_MLT is estimated to be ±0.05 with a possible systematic bias towards lower values. An analogous calibration of the convection theory of Canuto & Mazzitelli (1991, 1992; CMT) gives a different temperature dependence but a similar variation of the free parameter. For the first time, values for the gravity-darkening exponent β are derived independently of mixing-length theory: β = 0.07-0.10. We show that our findings are consistent with constraints from stellar stability considerations and provide compact fitting formulae for the calibrations.

  6. Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method

    NASA Astrophysics Data System (ADS)

    Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han

    2015-12-01

    Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.
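
    The record describes a full 3D GPU implementation with advanced FDTD machinery; as a minimal illustration of the leapfrog kernel that such codes accelerate, here is a 1D vacuum Yee update in NumPy. Swapping NumPy for a GPU array library such as CuPy is one plausible acceleration route, offered only as an assumption; none of this reproduces the authors' code.

      import numpy as np

      # Minimal 1D Yee FDTD sketch (vacuum, normalized units). The two array
      # updates below are the kernel that GPU FDTD codes parallelize;
      # replacing numpy with a GPU array module would offload them.
      nz, nt = 400, 1000
      S = 0.5                                   # Courant number c*dt/dz
      Ex = np.zeros(nz)
      Hy = np.zeros(nz - 1)

      for n in range(nt):
          Hy += S * np.diff(Ex)                 # H half-step update
          Ex[1:-1] += S * np.diff(Hy)           # E update (PEC walls at the ends)
          Ex[nz // 4] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source

      print("peak |Ex| at the final step:", np.abs(Ex).max())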

  7. A generic reaction-based biogeochemical simulator

    SciTech Connect

    Fang, Yilin; Yabusaki, Steven B.; Yeh, Gour T.

    2004-06-17

    This paper presents a generic biogeochemical simulator, BIOGEOCHEM. The simulator can read a thermodynamic database based on the EQ3/EQ6 database, and it can also read user-specified equilibrium and kinetic reactions (reactions not defined in the EQ3/EQ6 database format) symbolically. BIOGEOCHEM is developed with a general paradigm: it removes the requirement, common to most available reaction-based models, that reactions and rate laws be specified in a limited number of canonical forms. The simulator interprets reactions and rate laws of virtually any type and passes them to the MAPLE symbolic mathematical software package. MAPLE then generates Fortran code for the analytical Jacobian matrix used in the Newton-Raphson technique, which is compiled and linked into the BIOGEOCHEM executable. With this feature, users need not recode the simulator to accept new equilibrium expressions or kinetic rate laws. Two examples are used to demonstrate the new features of the simulator.
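
    The MAPLE-to-Fortran pipeline above is specific to BIOGEOCHEM, but the underlying symbolic-Jacobian idea can be sketched with freely available tools. The following hypothetical example uses SymPy to derive the analytical Jacobian of an invented two-reaction kinetic network and applies it in a Newton-Raphson solve of a backward-Euler time step; the network, rate constants, and step size are all made up for illustration.

      import numpy as np
      import sympy as sp

      # Symbolic-Jacobian sketch: rate laws are stated symbolically, the
      # Jacobian is derived analytically, and both are compiled to fast
      # numerical callables (lambdify standing in for the MAPLE->Fortran
      # step). Toy network: A + B <-> C with mass-action kinetics.
      A, B, C = sp.symbols("A B C", positive=True)
      k_f, k_r = 0.5, 0.2
      r = k_f * A * B - k_r * C                 # net forward rate
      rates = sp.Matrix([-r, -r, r])            # dA/dt, dB/dt, dC/dt
      J = rates.jacobian(sp.Matrix([A, B, C]))  # analytical Jacobian

      f_num = sp.lambdify((A, B, C), rates, "numpy")
      J_num = sp.lambdify((A, B, C), J, "numpy")

      def backward_euler_step(c, dt):
          # One implicit step, solved by Newton-Raphson with the exact Jacobian.
          c_new = c.copy()
          for _ in range(20):
              G = c_new - c - dt * np.asarray(f_num(*c_new), float).ravel()
              dG = np.eye(3) - dt * np.asarray(J_num(*c_new), float)
              delta = np.linalg.solve(dG, -G)
              c_new = c_new + delta
              if np.linalg.norm(delta) < 1e-12:
                  break
          return c_new

      conc = np.array([1.0, 0.8, 0.0])          # initial [A, B, C]
      for _ in range(50):
          conc = backward_euler_step(conc, dt=0.5)
      print("concentrations after 25 time units:", conc)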

  8. A Carbonaceous Chondrite Based Simulant of Phobos

    NASA Technical Reports Server (NTRS)

    Rickman, Douglas L.; Patel, Manish; Pearson, V.; Wilson, S.; Edmunson, J.

    2016-01-01

    In support of an ESA-funded concept study considering a sample return mission, a simulant of the Martian moon Phobos was needed. There are no samples of the Phobos regolith, therefore none of the four characteristics normally used to design a simulant are explicitly known for Phobos. Because of this, specifications for a Phobos simulant were based on spectroscopy, other remote measurements, and judgment. A composition based on the Tagish Lake meteorite was assumed. The requirement that sterility be achieved, especially given the required organic content, was unusual and problematic. The final design mixed JSC-1A, antigorite, pseudo-agglutinates and gilsonite. Sterility was achieved by radiation in a commercial facility.

  9. 3-D Quantum Transport Solver Based on the Perfectly Matched Layer and Spectral Element Methods for the Simulation of Semiconductor Nanodevices

    PubMed Central

    Cheng, Candong; Lee, Joon-Ho; Lim, Kim Hwa; Massoud, Hisham Z.; Liu, Qing Huo

    2007-01-01

    A 3-D quantum transport solver based on the spectral element method (SEM) and perfectly matched layer (PML) is introduced to solve the 3-D Schrödinger equation with a tensor effective mass. In this solver, the influence of the environment is replaced with the artificial PML open boundary extended beyond the contact regions of the device. These contact regions are treated as waveguides with known incident waves from waveguide mode solutions. As the transmitted wave function is treated as a total wave, there is no need to decompose it into waveguide modes, thus significantly simplifying the problem in comparison with conventional open boundary conditions. The spectral element method leads to an exponentially improving accuracy with the increase in the polynomial order and sampling points. The PML region can be designed such that less than −100 dB outgoing waves are reflected by this artificial material. The computational efficiency of the SEM solver is demonstrated by comparing the numerical and analytical results from waveguide and plane-wave examples, and its utility is illustrated by multiple-terminal devices and semiconductor nanotube devices. PMID:18037971

  10. Simulation of automatic gain control method for laser radar receiver

    NASA Astrophysics Data System (ADS)

    Cai, Xiping; Shang, Hongbo; Wang, Lina; Yang, Shuang

    2008-12-01

    A receiver with high dynamic response and a wide control range is necessary for a laser radar system. In this paper, an automatic gain control (AGC) scheme for a laser radar receiver is proposed. The scheme is based on a closed-loop logarithmic feedback method. Signal models for a pulsed laser radar system are created and used as the input to the AGC model. The signal is assumed to be very weak, with a pulse width on the order of nanoseconds, in keeping with the properties of laser radar returns. The method and its simulation are presented in detail.
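
    The abstract names the closed-loop logarithmic feedback principle without implementation detail, so the following is a hypothetical sketch of that control idea: the detected output level is compared with a setpoint in the log (dB) domain and the error is integrated to steer the gain, which makes the control action uniform in dB across a wide input dynamic range. All constants and the synthetic pulse train are invented.

      import numpy as np

      # Closed-loop logarithmic AGC sketch: integrate the dB-domain error
      # between the detected output level and a setpoint to drive the gain.
      rng = np.random.default_rng(0)
      n = 2000
      # Input: weak return pulses whose amplitude drifts over 60 dB
      envelope = 10.0 ** np.linspace(-4, -1, n)
      pulses = envelope * (0.5 + rng.random(n))

      target_db = 0.0        # desired output level: 1.0 (0 dB)
      loop_gain = 0.05       # integrator constant of the feedback loop
      gain_db = 0.0
      out = np.empty(n)

      for i, s in enumerate(pulses):
          y = s * 10.0 ** (gain_db / 20.0)                 # apply current gain
          err_db = target_db - 20.0 * np.log10(max(abs(y), 1e-12))
          gain_db += loop_gain * err_db                    # log-domain integration
          gain_db = np.clip(gain_db, 0.0, 120.0)           # finite control range
          out[i] = y

      print("output spread (dB) over the last 500 pulses:",
            20 * np.log10(out[-500:].max() / out[-500:].min()))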

  11. PIXE simulation: Models, methods and technologies

    SciTech Connect

    Batic, M.; Pia, M. G.; Saracco, P.; Weidenspointner, G.

    2013-04-19

    The simulation of PIXE (Particle Induced X-ray Emission) is discussed in the context of general-purpose Monte Carlo systems for particle transport. Dedicated PIXE codes are mainly concerned with the application of the technique to elemental analysis, but they lack the capability of dealing with complex experimental configurations. General-purpose Monte Carlo codes provide powerful tools to model the experimental environment in great detail, but so far they have provided limited functionality for PIXE simulation. This paper reviews recent developments that have endowed the Geant4 simulation toolkit with advanced capabilities for PIXE simulation, and related efforts for quantitative validation of cross sections and other physical parameters relevant to PIXE simulation.

  12. Method for simulating discontinuous physical systems

    DOEpatents

    Baty, Roy S.; Vaughn, Mark R.

    2001-01-01

    The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.

  13. Adaptive wavelet collocation method simulations of Rayleigh-Taylor instability

    NASA Astrophysics Data System (ADS)

    Reckinger, S. J.; Livescu, D.; Vasilyev, O. V.

    2010-12-01

    Numerical simulations of single-mode, compressible Rayleigh-Taylor instability are performed using the adaptive wavelet collocation method (AWCM), which utilizes wavelets for dynamic grid adaptation. Due to the physics-based adaptivity and direct error control of the method, AWCM is ideal for resolving the wide range of scales present in the development of the instability. The problem is initialized consistent with the solutions from linear stability theory. Non-reflecting boundary conditions are applied to prevent the contamination of the instability growth by pressure waves created at the interface. AWCM is used to perform direct numerical simulations that match the early-time linear growth, the terminal bubble velocity and a reacceleration region.

  14. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments is based upon a novel approach that relies on the global momentum conservation of the closed fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. A numerical example illustrates the method's application to prediction of bulk fluid behavior during a spacecraft ullage settling maneuver.

  15. A simulation based approach to optimize inventory replenishment with RAND algorithm: An extended study of corrected demand using Holt's method for textile industry

    NASA Astrophysics Data System (ADS)

    Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam

    2016-07-01

    Inventory has been a major concern in supply chains, and much research has recently been done on inventory control, bringing forth a number of methods that efficiently manage inventory and the related overheads by reducing the cost of replenishment. This research aims to provide a better replenishment policy for the multi-product, single-supplier situation of chemical raw materials in the textile industries of Bangladesh, where industries are assumed to currently pursue individual replenishment. The purpose is to find the optimal ideal cycle time and the individual replenishment cycle time of each product that yield the lowest annual holding and ordering cost, and also to find the optimal ordering quantity. In this paper an indirect grouping strategy is used; it is suggested that the indirect grouping strategy outperforms the direct grouping strategy when the major cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is employed for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for each item, so the replenishment cycle time of each product is T × ki. First, using actual demand data, the currently prevailing (individual) process is compared with RAND, which yields a 49% improvement in the total cost of replenishment. Second, discrepancies in demand are corrected using Holt's method; demand can only be forecasted one or two months ahead because of the demand pattern of the industry under consideration. Application of RAND with the corrected demand displays an even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm together with exponential smoothing models.
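
    The demand-correction step uses Holt's (double exponential smoothing) method, which the abstract does not spell out; here is a minimal sketch under invented smoothing constants and an invented monthly demand series, producing the one- and two-month-ahead forecasts the study is limited to.

      # Holt's linear exponential smoothing: a level and a trend component,
      # each updated with its own smoothing constant. Series and constants
      # are invented for illustration.
      def holt_forecast(y, alpha=0.3, beta=0.1, horizon=2):
          level, trend = y[0], y[1] - y[0]
          for obs in y[1:]:
              prev_level = level
              level = alpha * obs + (1 - alpha) * (level + trend)
              trend = beta * (level - prev_level) + (1 - beta) * trend
          return [level + h * trend for h in range(1, horizon + 1)]

      monthly_demand = [120, 132, 128, 141, 150, 149, 158, 163]   # kg of a raw chemical
      print(holt_forecast(monthly_demand))   # 1- and 2-month-ahead corrected demand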

  16. Methods for simulating solute breakthrough curves in pumping groundwater wells

    USGS Publications Warehouse

    Starn, J. Jeffrey; Bagtzoglou, Amvrossios C.; Robbins, Gary A.

    2012-01-01

    In modeling there is always a trade-off between execution time and accuracy. For gradient-based parameter estimation methods, where a simulation model is run repeatedly to populate a Jacobian (sensitivity) matrix, there exists a need for rapid simulation methods of known accuracy that can decrease execution time, and thus make the model more useful without sacrificing accuracy. Convolution-based methods can be executed rapidly for any desired input function once the residence-time distribution is known. The residence-time distribution can be calculated efficiently using particle tracking, but particle tracking can be ambiguous near a pumping well if the grid is too coarse. We present several embedded analytical expressions for improving particle tracking near a pumping well and compare them with a finely gridded finite-difference solution in terms of accuracy and CPU usage. Even though the embedded analytical approach can improve particle tracking near a well, particle methods reduce, but do not eliminate, reliance on a grid because velocity fields typically are calculated on a grid, and additional error is incurred using linear interpolation of velocity. A dilution rate can be calculated for a given grid and pumping well to determine if the grid is sufficiently refined. Embedded analytical expressions increase accuracy but add significantly to CPU usage. Structural error introduced by the numerical solution method may affect parameter estimates.

  17. Methods for simulating solute breakthrough curves in pumping groundwater wells

    NASA Astrophysics Data System (ADS)

    Jeffrey Starn, J.; Bagtzoglou, Amvrossios C.; Robbins, Gary A.

    2012-11-01

    In modeling there is always a trade-off between execution time and accuracy. For gradient-based parameter estimation methods, where a simulation model is run repeatedly to populate a Jacobian (sensitivity) matrix, there exists a need for rapid simulation methods of known accuracy that can decrease execution time, and thus make the model more useful without sacrificing accuracy. Convolution-based methods can be executed rapidly for any desired input function once the residence-time distribution is known. The residence-time distribution can be calculated efficiently using particle tracking, but particle tracking can be ambiguous near a pumping well if the grid is too coarse. We present several embedded analytical expressions for improving particle tracking near a pumping well and compare them with a finely gridded finite-difference solution in terms of accuracy and CPU usage. Even though the embedded analytical approach can improve particle tracking near a well, particle methods reduce, but do not eliminate, reliance on a grid because velocity fields typically are calculated on a grid, and additional error is incurred using linear interpolation of velocity. A dilution rate can be calculated for a given grid and pumping well to determine if the grid is sufficiently refined. Embedded analytical expressions increase accuracy but add significantly to CPU usage. Structural error introduced by the numerical solution method may affect parameter estimates.
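
    Records 16 and 17 describe the convolution approach only in outline; the sketch below illustrates it under stated assumptions: the residence-time distribution is faked from a lognormal sample of particle travel times (in practice it would come from particle tracking on the flow model), and the breakthrough curve at the pumping well is the convolution of that distribution with an arbitrary input history.

      import numpy as np

      # Convolution-based breakthrough sketch: once the well's residence-time
      # distribution (RTD) is known, the response to any input history is a
      # convolution, which is why the method is fast for repeated model runs.
      rng = np.random.default_rng(1)
      dt = 1.0                                   # days
      t = np.arange(0, 400, dt)

      travel_times = rng.lognormal(mean=4.0, sigma=0.5, size=20000)   # days
      rtd, _ = np.histogram(travel_times, bins=len(t),
                            range=(0, t[-1] + dt), density=True)

      c_in = np.where(t < 30, 1.0, 0.0)          # 30-day contaminant pulse at the source

      c_well = np.convolve(c_in, rtd)[:len(t)] * dt   # breakthrough at the pumping well
      print("peak well concentration:", c_well.max(),
            "at t =", t[np.argmax(c_well)], "days")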

  18. Susceptibility of clinical isolates of Pseudomonas aeruginosa in the Northern Kyushu district of Japan to carbapenem antibiotics, determined by an integrated concentration method: evaluation of the method based on Monte Carlo simulation.

    PubMed

    Nagasawa, Zenzo; Kusaba, Koji; Aoki, Yosuke

    2008-06-01

    In empirical antibacterial therapy, regional surveillance is expected to yield important information for the determination of the class and dosage regimen of antibacterial agents to be used when dealing with infections with organisms such as Pseudomonas aeruginosa, in which strains resistant to antibacterial agents have been increasing. The minimal inhibitory concentrations (MICs) of five carbapenem antibiotics against P. aeruginosa strains isolated in the Northern Kyushu district of Japan between 2005 and 2006 were measured, and 100 strains for which carbapenem MICs were ≤0.5-32 µg/ml were selected. In this study, MIC was measured by two methods, i.e., the common serial twofold dilution method and an integrated concentration method, in which the concentration was changed, in increments of 2 µg/ml, from 2 to 16 µg/ml. The MIC50/MIC90 values for imipenem, meropenem, biapenem, doripenem, and panipenem, respectively, were 8/16, 4/16, 4/16, 2/8, and 16/16 µg/ml with the former method, and 6/10, 4/12, 4/10, 2/6, and 10/16 µg/ml with the latter method. The MIC data obtained with both methods were subjected to pharmacokinetic/pharmacodynamic (PK/PD) analysis with Monte Carlo simulation to calculate the probability of achieving the target of time above MIC (T>MIC) with each carbapenem. The probability of achieving 25% T>MIC (% of T>MIC for dosing intervals) and 40% T>MIC against P. aeruginosa with any dosage regimen was higher with doripenem than with any other carbapenem tested. When the two sets of MIC data were subjected to PK/PD analysis, the difference between the two methods in the probability of achieving each %T>MIC was small, thus endorsing the validity of the serial twofold dilution method. PMID:18574662
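
    The abstract reports the outcome of the Monte Carlo PK/PD step without the model itself; the sketch below shows the general shape of such a calculation under loudly invented assumptions: a one-compartment IV bolus model with lognormal interpatient variability, a made-up MIC distribution, and a 40% T>MIC target. It does not reproduce the study's carbapenem PK models or surveillance data.

      import numpy as np

      # PK/PD Monte Carlo sketch: sample patients and MICs, compute the
      # fraction of the dosing interval with concentration above MIC, and
      # report the probability of target attainment. All numbers invented.
      rng = np.random.default_rng(42)
      n = 100_000
      dose, tau = 500.0, 8.0                    # mg every 8 h
      V = rng.lognormal(np.log(20.0), 0.25, n)  # volume of distribution, L
      k = rng.lognormal(np.log(0.35), 0.30, n)  # elimination rate, 1/h
      mic = rng.choice([0.5, 1, 2, 4, 8, 16], size=n,
                       p=[0.10, 0.20, 0.25, 0.20, 0.15, 0.10])  # toy MIC distribution

      c0 = dose / V                             # bolus peak; C(t) = c0 * exp(-k t)
      t_above = np.where(c0 > mic, np.log(c0 / mic) / k, 0.0)
      frac = np.clip(t_above, 0, tau) / tau     # fraction of interval above MIC

      print("P(T>MIC >= 40% of the interval):", (frac >= 0.40).mean())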

  19. Numerical simulation of self-sustained oscillation of a voice-producing element based on Navier-Stokes equations and the finite element method

    NASA Astrophysics Data System (ADS)

    de Vries, Martinus P.; Hamburg, Marc C.; Schutte, Harm K.; Verkerke, Gijsbertus J.; Veldman, Arthur E. P.

    2003-04-01

    Surgical removal of the larynx results in radically reduced production of voice and speech. To improve voice quality a voice-producing element (VPE) is developed, based on the lip principle, named after the lips of a musician playing a brass instrument. To optimize the VPE, a numerical model is developed. In this model, the finite element method is used to describe the mechanical behavior of the VPE. The flow is described by the two-dimensional incompressible Navier-Stokes equations. The interaction between VPE and airflow is modeled by placing the grid of the VPE model in the grid of the aerodynamic model and requiring continuity of forces and velocities. By applying an increasing pressure to the numerical model, pulses comparable to glottal volume velocity waveforms are obtained. By variation of geometric parameters their influence can be determined. To validate this numerical model, an in vitro test with a prototype of the VPE is performed. Experimental and numerical results show acceptable agreement.

  20. Parallel implementation of the particle simulation method with dynamic load balancing: Toward realistic geodynamical simulation

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Nishiura, D.

    2015-12-01

    Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are suitable for problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes, for example tsunamis with free surfaces and floating bodies, magma intrusion with fracture of rock, and shear-zone pattern generation in granular deformation. In order to investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulation. Parallel computing is therefore important for handling such a huge computational cost. An efficient parallel implementation of SPH and DEM is, however, known to be difficult, especially for distributed-memory architectures: Lagrangian methods inherently suffer a workload imbalance problem when parallelized with domains fixed in space, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore the key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods utilizing dynamic load balancing algorithms, aimed at high-resolution simulation over large domains on massively parallel supercomputer systems. Our method treats the execution-time imbalance of each MPI process as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. In order to perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our

  1. A numerical method for cardiac mechanoelectric simulations.

    PubMed

    Pathmanathan, Pras; Whiteley, Jonathan P

    2009-05-01

    Much effort has been devoted to developing numerical techniques for solving the equations that describe cardiac electrophysiology, namely the monodomain equations and bidomain equations. Only a limited selection of publications, however, address the development of numerical techniques for mechanoelectric simulations where cardiac electrophysiology is coupled with deformation of cardiac tissue. One problem commonly encountered in mechanoelectric simulations is instability of the coupled numerical scheme. In this study, we develop a stable numerical scheme for mechanoelectric simulations. A number of convergence tests are carried out using this stable technique for simulations where deformations are of the magnitude typically observed in a beating heart. These convergence tests demonstrate that accurate computation of tissue deformation requires a nodal spacing of around 1 mm in the mesh used to calculate tissue deformation. This is a much finer computational grid than has previously been acknowledged, and has implications for the computational efficiency of the resulting numerical scheme. PMID:19263223

  2. A numerical method for power plant simulations

    SciTech Connect

    Carcasci, C.; Facchini, B.

    1996-03-01

    This paper describes a highly flexible computerized method of calculating operating data in a power cycle. The computerized method presented here permits the study of steam, gas and combined plants. Its flexibility is not restricted by any defined cycle scheme. A power plant consists of simple elements (turbine, compressor, combustor chamber, pump, etc.). Each power plant component is represented by its typical equations relating to fundamental mechanical and thermodynamic laws, so a power plant system is represented by algebraic equations, which are the typical equations of components, continuity equations, and data concerning plant conditions. This equation system is not linear, but can be reduced to a linear equation system with variable coefficients. The solution is simultaneous for each component and it is determined by an iterative process. An example of a simple gas turbine cycle demonstrates the applied technique. This paper also presents the user interface based on MS-Windows. The input data, the results, and any characteristic parameters of a complex cycle scheme are also shown.
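
    The abstract outlines the numerical strategy (nonlinear component equations reduced to a linear system with variable coefficients, solved iteratively) without giving it explicitly. The sketch below shows that strategy on an invented two-equation "plant": the coefficient matrix is rebuilt from the current iterate and the linear system is re-solved until the iterates converge. All equations and constants are illustrative, not the paper's model.

      import numpy as np

      # Successive substitution on A(x) x = b: the nonlinear energy balance
      # m*cp*T = Q is linearized by freezing m in the coefficient matrix,
      # then the linear system is solved repeatedly until convergence.
      cp = 4.2            # kJ/(kg K), invented working-fluid heat capacity
      Q = 420.0           # kW delivered to the stream
      x = np.array([5.0, 50.0])          # initial guess: [m (kg/s), dT (K)]

      for it in range(50):
          m, T = x
          A = np.array([[1.0, 0.05],     # invented network relation: m + 0.05 dT = 10
                        [0.0, m]])       # energy balance m*cp*dT = Q, with m frozen
          b = np.array([10.0, Q / cp])
          x_new = np.linalg.solve(A, b)
          if np.max(np.abs(x_new - x)) < 1e-10:
              x = x_new
              break
          x = x_new

      print(f"converged in {it} iterations: m = {x[0]:.3f} kg/s, dT = {x[1]:.3f} K")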

  3. Computer Simulation Methods for Defect Configurations and Nanoscale Structures

    SciTech Connect

    Gao, Fei

    2010-01-01

    This chapter describes general computer simulation methods, including ab initio calculations, molecular dynamics, and kinetic Monte Carlo methods, and their applications to the calculation of defect configurations in various materials (metals, ceramics, and oxides) and the simulation of nanoscale structures produced by ion-solid interactions. Multiscale theory, modeling, and simulation techniques (in both time and space scales) are emphasized, and comparisons between computer simulation results and experimental observations are made.

  4. A distributed UNIX-based simulator

    SciTech Connect

    Wyatt, P.W.; Arnold, T.R.; Hammer, K.E.; Peery, J.S.; McKaskle, G.A.

    1990-01-01

    One of the problems confronting the designers of simulators over the last ten years -- particularly the designers of nuclear plant simulators -- has been how to accommodate the demands of their customers for increasing verisimilitude, especially in the modeling of as-faulted conditions. The demand for the modeling of multiphase, multi-component thermal-hydraulics, for example, imposed a requirement that taxed the ingenuity of the simulator software developers. Difficulty was encountered in fitting such models into the existing simulator framework -- not least because the real-time requirement of training simulation imposed severe limits on the minimum time step. In the mid-1980s, two evolutions that had been proceeding for some time culminated in mature products of potentially great utility to simulation. One was the emergence of low-cost workstations featuring not only versatile, object-oriented graphics, but also considerable number-crunching capabilities of their own. The other was the adoption of UNIX as a "standard" operating system common to at least some machines offered by virtually all vendors. As a result, it is possible to design a simulator whose graphics and executive functions are off-loaded to one or more workstations designed to handle such tasks, while the number-crunching duties are assigned to another machine designed expressly for that purpose. This paper deals with such a distributed UNIX-based simulator developed at the Savannah River Laboratory using graphics supplied by Texas A&M University under contract to SRL.

  5. Simulation-based medical education in pediatrics.

    PubMed

    Lopreiato, Joseph O; Sawyer, Taylor

    2015-01-01

    The use of simulation-based medical education (SBME) in pediatrics has grown rapidly over the past 2 decades and is expected to continue to grow. Similar to other instructional formats used in medical education, SBME is an instructional methodology that facilitates learning. Successful use of SBME in pediatrics requires attention to basic educational principles, including the incorporation of clear learning objectives. To facilitate learning during simulation the psychological safety of the participants must be ensured, and when done correctly, SBME is a powerful tool to enhance patient safety in pediatrics. Here we provide an overview of SBME in pediatrics and review key topics in the field. We first review the tools of the trade and examine various types of simulators used in pediatric SBME, including human patient simulators, task trainers, standardized patients, and virtual reality simulation. Then we explore several uses of simulation that have been shown to lead to effective learning, including curriculum integration, feedback and debriefing, deliberate practice, mastery learning, and range of difficulty and clinical variation. Examples of how these practices have been successfully used in pediatrics are provided. Finally, we discuss the future of pediatric SBME. As a community, pediatric simulation educators and researchers have been a leading force in the advancement of simulation in medicine. As the use of SBME in pediatrics expands, we hope this perspective will serve as a guide for those interested in improving the state of pediatric SBME. PMID:25748973

  6. Simulation of turbulent flows using nodal integral method

    NASA Astrophysics Data System (ADS)

    Singh, Suneet

    Nodal methods are the backbone of the production codes for neutron-diffusion and transport equations. Despite their high accuracy, use of these methods for simulation of fluid flow is relatively new. Recently, a modified nodal integral method (MNIM) has been developed for simulation of laminar flows. In view of its high accuracy and efficiency, extension of this method for the simulation of turbulent flows is a logical step forward. In this dissertation, MNIM is extended in two ways to simulate incompressible turbulent flows---a new MNIM is developed for the 2D k-epsilon equations; and 3D, parallel MNIM is developed for direct numerical simulations. Both developments are validated, and test problems are solved. In this dissertation, a new nodal numerical scheme is developed to solve the k-epsilon equations to simulate turbulent flows. The MNIM developed earlier for laminar flow equations is modified to incorporate eddy viscosity approximation and coupled with the above mentioned schemes for the k and epsilon equations, to complete the implementation of the numerical scheme for the k-epsilon model. The scheme developed is validated by comparing the results obtained by the developed method with the results available in the literature obtained using direct numerical simulations (DNS). The results of current simulations match reasonably well with the DNS results. The discrepancies in the results are mainly due to the limitations of the k-epsilon model rather than the deficiency in the developed MNIM. A parallel version of the MNIM is needed to enhance its capability, in order to carry out DNS of the turbulent flows. The parallelization of the scheme, however, presents some unique challenges as dependencies of the discrete variables are different from those that exist in other schemes (for example in finite volume based schemes). Hence, a parallel MNIM (PMNIM) is developed and implemented into a computer code with communication strategies based on the above mentioned

  7. Multinomial tau-leaping method for stochastic kinetic simulations.

    PubMed

    Pettigrew, Michel F; Resat, Haluk

    2007-02-28

    We introduce the multinomial tau-leaping (MtauL) method for general reaction networks with multichannel reactant dependencies. The MtauL method is an extension of the binomial tau-leaping method where efficiency is improved in several ways. First, tau-leaping steps are determined simply and efficiently using a priori information and Poisson distribution-based estimates of expectation values for reaction numbers over a tentative tau-leaping step. Second, networks are partitioned into closed groups of reactions and corresponding reactants in which no group reactant set is found in any other group. Third, product formation is factored into upper-bound estimation of the number of times a particular reaction occurs. Together, these features allow larger time steps where the numbers of reactions occurring simultaneously in a multichannel manner are estimated accurately using a multinomial distribution. Furthermore, we develop a simple procedure that places a specific upper bound on the total reaction number to ensure non-negativity of species populations over a single multiple-reaction step. Using two disparate test case problems involving cellular processes--epidermal growth factor receptor signaling and a lactose operon model--we show that the tau-leaping based methods such as the MtauL algorithm can significantly reduce the number of simulation steps thus increasing the numerical efficiency over the exact stochastic simulation algorithm by orders of magnitude. PMID:17343434

  8. Multinomial tau-leaping method for stochastic kinetic simulations

    NASA Astrophysics Data System (ADS)

    Pettigrew, Michel F.; Resat, Haluk

    2007-02-01

    We introduce the multinomial tau-leaping (MτL) method for general reaction networks with multichannel reactant dependencies. The MτL method is an extension of the binomial tau-leaping method where efficiency is improved in several ways. First, τ-leaping steps are determined simply and efficiently using a priori information and Poisson distribution-based estimates of expectation values for reaction numbers over a tentative τ-leaping step. Second, networks are partitioned into closed groups of reactions and corresponding reactants in which no group reactant set is found in any other group. Third, product formation is factored into upper-bound estimation of the number of times a particular reaction occurs. Together, these features allow larger time steps where the numbers of reactions occurring simultaneously in a multichannel manner are estimated accurately using a multinomial distribution. Furthermore, we develop a simple procedure that places a specific upper bound on the total reaction number to ensure non-negativity of species populations over a single multiple-reaction step. Using two disparate test case problems involving cellular processes—epidermal growth factor receptor signaling and a lactose operon model—we show that the τ-leaping based methods such as the MτL algorithm can significantly reduce the number of simulation steps thus increasing the numerical efficiency over the exact stochastic simulation algorithm by orders of magnitude.
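
    Neither record spells out the baseline algorithm, so the following is a hypothetical sketch of plain Poisson tau-leaping, the starting point that the multinomial (MτL) method improves on; the multinomial grouping, closed reaction groups, and reaction-number bounds of the paper are not reproduced. The gene-expression toy network and all rate constants are invented.

      import numpy as np

      # Plain Poisson tau-leaping: over a leap of length tau, each reaction
      # channel fires a Poisson(a_j * tau) number of times. Toy network:
      #   DNA -> DNA + mRNA (k1), mRNA -> mRNA + P (k2),
      #   mRNA -> 0 (k3),          P -> 0 (k4)
      rng = np.random.default_rng(7)
      k = np.array([0.5, 2.0, 0.1, 0.05])
      state = np.array([1, 0, 0])               # [DNA, mRNA, P]
      stoich = np.array([[0, 1, 0],             # transcription
                         [0, 0, 1],             # translation
                         [0, -1, 0],            # mRNA decay
                         [0, 0, -1]])           # protein decay

      def propensities(s):
          dna, mrna, p = s
          return np.array([k[0] * dna, k[1] * mrna, k[2] * mrna, k[3] * p])

      t, tau = 0.0, 0.2
      while t < 500.0:
          a = propensities(state)
          n_fires = rng.poisson(a * tau)        # leap: fire each channel n times
          proposal = state + n_fires @ stoich
          state = np.maximum(proposal, 0)       # crude non-negativity guard; the
          t += tau                              # paper's bounds avoid this bias

      print("final [DNA, mRNA, P]:", state)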

  9. Knowledge-based simulation using object-oriented programming

    NASA Technical Reports Server (NTRS)

    Sidoran, Karen M.

    1993-01-01

    Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.

  10. Structured Debriefing in Simulation-Based Education.

    PubMed

    Palaganas, Janice C; Fey, Mary; Simon, Robert

    2016-02-01

    Debriefing following a simulation event is a conversational period for reflection and feedback aimed at sustaining or improving future performance. It is considered by many simulation educators to be a critical activity for learning in simulation-based education. Deep learning can be achieved during debriefing and often depends on the facilitation skills of the debriefer as well as the learner's perceptions of a safe and supportive learning environment as created by the debriefer. On the other hand, poorly facilitated debriefings may create adverse learning, generate bad feelings, and may lead to a degradation of clinical performance, self-reflection, or harm to the educator-learner relationship. The use of a structure that recognizes logical and sequential phases during debriefing can assist simulation educators to achieve a deep level of learning. PMID:26909457

  11. Simulations of Ground and Space-Based Oxygen Atom Experiments

    NASA Technical Reports Server (NTRS)

    Finchum, A. (Technical Monitor); Cline, J. A.; Minton, T. K.; Braunstein, M.

    2003-01-01

    A low-earth orbit (LEO) materials erosion scenario and the ground-based experiment designed to simulate it are compared using the direct-simulation Monte Carlo (DSMC) method. The DSMC model provides a detailed description of the interactions between the hyperthermal gas flow and a normally oriented flat plate for each case. We find that while the general characteristics of the LEO exposure are represented in the ground-based experiment, multi-collision effects can potentially alter the impact energy and directionality of the impinging molecules in the ground-based experiment. Multi-collision phenomena also affect downstream flux measurements.

  12. Multigrid hierarchical simulated annealing method for reconstructing heterogeneous media.

    PubMed

    Pant, Lalit M; Mitra, Sushanta K; Secanell, Marc

    2015-12-01

    A reconstruction methodology based on different-phase-neighbor (DPN) pixel swapping and multigrid hierarchical annealing is presented. The method performs reconstructions by starting at a coarse image and successively refining it. The DPN information is used at each refinement stage to freeze interior pixels of preformed structures. This preserves the large-scale structures in refined images and also reduces the number of pixels to be swapped, thereby decreasing the computational time needed to reach a solution. Compared to conventional single-grid simulated annealing, this method was found to reduce the required computation time by a factor of around 70-90, with the potential for even higher speedups on larger reconstructions. The method is able to perform medium-sized (up to 300³ voxels) three-dimensional reconstructions with multiple correlation functions in 36-47 h. PMID:26764849
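
    As a point of reference for what the multigrid/DPN machinery accelerates, here is a hypothetical sketch of the conventional single-grid annealing baseline: pixels of a two-phase image are swapped to match a target row-wise two-point correlation, with a Metropolis acceptance rule and a cooling schedule. Image size, target function, and schedule are invented, and the paper's DPN freezing and hierarchical refinement are deliberately omitted.

      import numpy as np

      # Single-grid simulated-annealing reconstruction: swap solid/void
      # pixels (volume fraction preserved) and accept or reject with a
      # Metropolis rule so the image matches a target correlation S2(r).
      rng = np.random.default_rng(0)
      N, rmax = 64, 16
      r = np.arange(rmax)
      target = 0.4 * (0.4 + 0.6 * np.exp(-r / 4.0))   # S2(0)=0.4, tail -> 0.4^2

      def s2(img):
          # periodic two-point probability along x: P(pixel and its r-shift both solid)
          return np.array([(img * np.roll(img, -s, axis=1)).mean() for s in range(rmax)])

      img = (rng.random((N, N)) < 0.4).astype(np.uint8)
      energy = ((s2(img) - target) ** 2).sum()
      T = 1e-4
      for _ in range(5000):
          ones, zeros = np.argwhere(img == 1), np.argwhere(img == 0)
          p1 = tuple(ones[rng.integers(len(ones))])    # random solid pixel
          p0 = tuple(zeros[rng.integers(len(zeros))])  # random void pixel
          img[p1], img[p0] = 0, 1                      # trial swap
          e_new = ((s2(img) - target) ** 2).sum()
          if e_new > energy and rng.random() > np.exp((energy - e_new) / T):
              img[p1], img[p0] = 1, 0                  # reject: undo the swap
          else:
              energy = e_new
          T *= 0.999                                   # cooling schedule

      print("final correlation error:", energy)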

  13. Method for numerical simulations of metastable states

    SciTech Connect

    Heller, U.M.; Seiberg, N.

    1983-06-15

    We present a numerical simulation of metastable states near a first-order phase transition in the example of a U(1) lattice gauge theory with a generalized action. In order to make measurements in these states possible their decay has to be prevented. We achieve this by using a microcanonical simulation for a finite system. We then obtain the coupling constant (inverse temperature) as a function of the action density. It turns out to be nonmonotonic and hence not uniquely invertible. From it we derive the effective potential for the action density. This effective potential is not always convex, a property that seems to be in contradiction with the standard lore about its convexity. This apparent "paradox" is resolved in a discussion about different definitions of the effective potential.

  14. Microcanonical ensemble simulation method applied to discrete potential fluids.

    PubMed

    Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro

    2015-09-01

    In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed by measuring the transition rate probabilities between macroscopic states; compared with conventional Monte Carlo NVT (MC-NVT) simulations, it has the advantage that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of the application of the Hüller-Pleimling method to discrete-potential systems, which are based on a generalization of the SW and square-shoulder fluid properties. PMID:26465582

  15. Microcanonical ensemble simulation method applied to discrete potential fluids

    NASA Astrophysics Data System (ADS)

    Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro

    2015-09-01

    In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed by measuring the transition rate probabilities between macroscopic states; compared with conventional Monte Carlo NVT (MC-NVT) simulations, it has the advantage that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of the application of the Hüller-Pleimling method to discrete-potential systems, which are based on a generalization of the SW and square-shoulder fluid properties.

  16. High Performance Parallel Methods for Space Weather Simulations

    NASA Technical Reports Server (NTRS)

    Hunter, Paul (Technical Monitor); Gombosi, Tamas I.

    2003-01-01

    This is the final report of our NASA AISRP grant entitled 'High Performance Parallel Methods for Space Weather Simulations'. The main thrust of the proposal was to achieve significant progress towards new high-performance methods which would greatly accelerate global MHD simulations and eventually make it possible to develop first-principles based space weather simulations which run much faster than real time. We are pleased to report that with the help of this award we made major progress in this direction and developed the first parallel implicit global MHD code with adaptive mesh refinement. The main limitation of all earlier global space physics MHD codes was the explicit time stepping algorithm. Explicit time steps are limited by the Courant-Friedrichs-Lewy (CFL) condition, which essentially ensures that no information travels more than a cell size during a time step. This condition represents a non-linear penalty for highly resolved calculations, since finer grid resolution (and consequently smaller computational cells) not only results in more computational cells, but also in smaller time steps.

  17. Computational Methods for Jet Noise Simulation

    NASA Technical Reports Server (NTRS)

    Goodrich, John W. (Technical Monitor); Hagstrom, Thomas

    2003-01-01

    The purpose of our project is to develop, analyze, and test novel numerical technologies central to the long term goal of direct simulations of subsonic jet noise. Our current focus is on two issues: accurate, near-field domain truncations and high-order, single-step discretizations of the governing equations. The Direct Numerical Simulation (DNS) of jet noise poses a number of extreme challenges to computational technique. In particular, the problem involves multiple temporal and spatial scales as well as flow instabilities and is posed on an unbounded spatial domain. Moreover, the basic phenomenon of interest, the radiation of acoustic waves to the far field, involves only a minuscule fraction of the total energy. The best current simulations of jet noise are at low Reynolds number. It is likely that an increase of one to two orders of magnitude will be necessary to reach a regime where the separation between the energy-containing and dissipation scales is sufficient to make the radiated noise essentially independent of the Reynolds number. Such an increase in resolution cannot be obtained in the near future solely through increases in computing power. Therefore, new numerical methodologies of maximal efficiency and accuracy are required.

  18. Microcomputer-Based Programs for Pharmacokinetic Simulations.

    ERIC Educational Resources Information Center

    Li, Ronald C.; And Others

    1995-01-01

    Microcomputer software that simulates drug-concentration time profiles based on user-assigned pharmacokinetic parameters such as central volume of distribution, elimination rate constant, absorption rate constant, dosing regimens, and compartmental transfer rate constants is described. The software is recommended for use in undergraduate…

  19. Computational method for simulation of thermal load distribution in a lithographic lens.

    PubMed

    Yu, Xinfeng; Ni, Mingyang; Rui, Dawei; Qu, Yi; Zhang, Wei

    2016-05-20

    As a crucial step in thermal aberration prediction, thermal simulation is an effective way to acquire the temperature distribution of lenses. For rigorous thermal simulation with the finite volume method, the amount of absorbed energy and its distribution within the lens elements must be provided to guarantee simulation accuracy. In this paper, a computational method for simulating the thermal load distribution, accounting for lens material absorption, is proposed based on the light intensity at the lens elements' surfaces. An algorithm for verification of the method is also introduced, and the results show that the method presented in this paper is an effective solution for the thermal load distribution in a lithographic lens. PMID:27411148

  20. Simulation of solid body motion in a Newtonian fluid using a vorticity-based pseudo-spectral immersed boundary method augmented by the radial basis functions

    NASA Astrophysics Data System (ADS)

    Sabetghadam, Fereidoun; Soltani, Elshan

    2015-10-01

    The moving boundary conditions are implemented into the Fourier pseudo-spectral solution of the two-dimensional incompressible Navier-Stokes equations (NSE) in the vorticity-velocity form, using the radial basis functions (RBF). Without explicit definition of an external forcing function, the desired immersed boundary conditions are imposed by direct modification of the convection and diffusion terms. At the beginning of each time-step the solenoidal velocities, satisfying the desired moving boundary conditions, along with a modified vorticity are obtained and used in modification of the convection and diffusion terms of the vorticity evolution equation. Time integration is performed by the explicit fourth-order Runge-Kutta method and the boundary conditions are set at the beginning of each sub-step. The method is applied to a couple of moving boundary problems and more than second-order of accuracy in space is demonstrated for the Reynolds numbers up to Re = 550. Moreover, performance of the method is shown in comparison with the classical Fourier pseudo-spectral method.
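
    The record combines a pseudo-spectral Navier-Stokes solver with RBF-based enforcement of moving boundary values; the sketch below isolates only the RBF ingredient under invented settings: rigid-body velocities prescribed at scattered points on a rotating circle are interpolated onto nearby evaluation points with Gaussian RBFs. The shape parameter and geometry are made up, and the coupling to the vorticity equation is not reproduced.

      import numpy as np

      # Gaussian RBF interpolation of boundary data: solve for one weight per
      # boundary point so the interpolant matches the prescribed velocities,
      # then evaluate the smooth interpolant anywhere off the boundary.
      rng = np.random.default_rng(3)

      theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
      xb = np.c_[np.cos(theta), np.sin(theta)]       # boundary points on a circle
      omega = 2.0                                    # rotation rate
      ub = omega * np.c_[-xb[:, 1], xb[:, 0]]        # rigid-body velocity u = omega x r

      eps = 3.0                                      # RBF shape parameter (invented)
      def phi(r):
          return np.exp(-(eps * r) ** 2)             # Gaussian radial basis function

      D = np.linalg.norm(xb[:, None, :] - xb[None, :, :], axis=-1)
      # tiny diagonal shift guards against the notorious ill-conditioning
      weights = np.linalg.solve(phi(D) + 1e-10 * np.eye(len(xb)), ub)

      xg = rng.uniform(-1.2, 1.2, size=(5, 2))       # a few nearby grid points
      Dg = np.linalg.norm(xg[:, None, :] - xb[None, :, :], axis=-1)
      ug = phi(Dg) @ weights
      for p, u in zip(xg, ug):
          print(f"u({p[0]:+.2f},{p[1]:+.2f}) ~ ({u[0]:+.3f},{u[1]:+.3f})")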

  1. Experiential Learning Methods, Simulation Complexity and Their Effects on Different Target Groups

    ERIC Educational Resources Information Center

    Kluge, Annette

    2007-01-01

    This article empirically supports the thesis that there is no clear and unequivocal argument in favor of simulations and experiential learning. Instead the effectiveness of simulation-based learning methods depends strongly on the target group's characteristics. Two methods of supporting experiential learning are compared in two different complex…

  2. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational

  3. Accelerated GPU based SPECT Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency

  4. Development of an Agent-Based Model (ABM) to Simulate the Immune System and Integration of a Regression Method to Estimate the Key ABM Parameters by Fitting the Experimental Data

    PubMed Central

    Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le

    2015-01-01

    Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for ABM to estimate key parameters of the model by incorporating experimental data, whereas the differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It can combine the advantages of ABM and DE by employing ABM to mimic the multi-scale immune system with various phenotypes and types of cells as well as using the input and output of ABM to build up the Loess regression for key parameter estimation. Next, we employed the greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set and used ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer the key parameters like DE model. Therefore, this study innovatively developed a complex system development mechanism that could simulate the complicated immune system in detail like ABM and validate the reliability and efficiency of model like DE by fitting the experimental data. PMID:26535589

  5. A Multiscale simulation method for ice crystallization and frost growth

    NASA Astrophysics Data System (ADS)

    Yazdani, Miad

    2015-11-01

    Formation of ice crystals and frost involves physical mechanisms at immensely separated scales. The primary focus of this work is on crystallization and frost growth on a cold plate exposed to humid air. Nucleation is addressed through a Gibbs energy barrier method based on the interfacial energy of crystal and condensate as well as the ambient and surface conditions. The supercooled crystallization of ice crystals is simulated through a phase-field based method, in which the variation of the degree of surface tension anisotropy and its mode in the fluid medium is represented statistically. In addition, the mesoscale width of the interface is quantified asymptotically, which serves as a length-scale criterion for a so-called "adaptive" AMR (AAMR) algorithm that ties the grid resolution at the interface to local physical properties. Moreover, because the crystal is exposed to humid air, a secondary non-equilibrium growth process contributes to the formation of frost at the tip of the crystal. A Monte Carlo implementation of the diffusion-limited aggregation (DLA) method addresses the formation of frost during crystallization. Finally, a virtual boundary based Immersed Boundary Method (IBM) is adapted to address the interaction of the ice crystal with convective air during its growth.
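
    The DLA step is easy to picture with a minimal on-lattice Monte Carlo sketch; the grid size and walker budget are arbitrary choices, not values from this work:

      # Minimal diffusion-limited aggregation (DLA) on a periodic lattice:
      # random walkers freeze on contact with the growing aggregate.
      import numpy as np

      rng = np.random.default_rng(1)
      N = 101
      frozen = np.zeros((N, N), dtype=bool)
      frozen[N // 2, N // 2] = True          # seed: the crystal tip

      steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

      for _ in range(400):                   # launch random walkers (water vapour)
          x, y = rng.integers(1, N - 1, size=2)
          while True:
              dx, dy = steps[rng.integers(4)]
              x, y = (x + dx) % N, (y + dy) % N
              # Freeze on contact with an already-frozen neighbour.
              if frozen[(x + 1) % N, y] or frozen[(x - 1) % N, y] \
                      or frozen[x, (y + 1) % N] or frozen[x, (y - 1) % N]:
                  frozen[x, y] = True
                  break

      print("aggregate size:", frozen.sum())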

  6. Transcending Competency Testing in Hospital-Based Simulation.

    PubMed

    Lassche, Madeline; Wilson, Barbara

    2016-02-01

    Simulation is a frequently used method for training students in health care professions and has recently gained acceptance in acute care hospital settings for use in educational programs and competency testing. Although hospital-based simulation is currently limited primarily to use in skills acquisition, expansion of the use of simulation via a modified Quality Health Outcomes Model to address systems factors such as the physical environment and human factors such as fatigue, reliance on memory, and reliance on vigilance could drive system-wide changes. Simulation is an expensive resource and should not be limited to use for education and competency testing. Well-developed, peer-reviewed simulations can be used for environmental factors, human factors, and interprofessional education to improve patients' outcomes and drive system-wide change for quality improvement initiatives. PMID:26909459

  7. Task simulation in computer-based training

    SciTech Connect

    Gardner, P.R.

    1988-02-01

    Westinghouse Hanford Company (WHC) makes extensive use of job-task simulations in company-developed computer-based training (CBT) courseware. This courseware is different from most others because it does not simulate process control machinery or other computer programs; instead, the WHC Exercises model day-to-day tasks such as physical work preparations, progress, and incident handling. These Exercises provide a higher level of motivation and enable the testing of more complex patterns of behavior than those typically measured by multiple-choice and short-answer questions. Examples from the WHC Radiation Safety and Crane Safety courses are used as illustrations. 3 refs.

  8. Does a motion base prevent simulator sickness?

    NASA Technical Reports Server (NTRS)

    Sharkey, Thomas J.; Mccauley, Michael E.

    1992-01-01

    The use of high-fidelity motion cues to reduce the discrepancy between visually implied motion and actual motion is tested experimentally using the NASA Vertical Motion Simulator (VMS). Ten pilot subjects use the VMS to fly simulated S-turns and sawtooths, maneuvers that generate a high incidence of motion sickness. The subjects fly the maneuvers on separate days both with and without use of a motion base provided by the VMS, and data are collected regarding symptoms, dark focus, and postural equilibrium. The motion-base condition is shown to be practically irrelevant with respect to the incidence and severity of motion sickness. It is suggested that the data-collection procedure cannot detect differences in sickness levels, and the false cues of the motion condition are theorized to have an adverse impact approximately equivalent to the absence of cues in a fixed-base condition.

  9. The frontal method in hydrodynamics simulations

    USGS Publications Warehouse

    Walters, R.A.

    1980-01-01

    The frontal solution method has proven to be an effective means of solving the matrix equations resulting from the application of the finite element method to a variety of problems. In this study, several versions of the frontal method were compared in efficiency for several hydrodynamics problems. Three basic modifications were shown to be of value: (1) elimination of equations with boundary conditions beforehand, (2) modification of the pivoting procedures to allow dynamic management of the equation size, and (3) storage of the eliminated equations in a vector. These modifications are sufficiently general to be applied to other classes of problems.

  10. Microcomputer based software for biodynamic simulation

    NASA Technical Reports Server (NTRS)

    Rangarajan, N.; Shams, T.

    1993-01-01

    This paper presents a description of a microcomputer based software package, called DYNAMAN, which has been developed to allow an analyst to simulate the dynamics of a system consisting of a number of mass segments linked by joints. One primary application is in predicting the motion of a human occupant in a vehicle under the influence of a variety of external forces, especially those generated during a crash event. Extensive use of a graphical user interface has been made to aid the user in setting up the input data for the simulation and in viewing the results from the simulation. Among its many applications, it has been successfully used in the prototype design of a moving seat that aids in occupant protection during a crash, by aircraft designers in evaluating occupant injury in airplane crashes, and in accident reconstruction to reconstruct the motion of the occupant and correlate impacts with observed injuries.

  11. Numerical Simulations of Granular Dynamics: Method and Tests

    NASA Astrophysics Data System (ADS)

    Richardson, Derek C.; Walsh, K. J.; Murdoch, N.; Michel, P.; Schwartz, S. R.

    2010-10-01

    We present a new particle-based numerical method for the simulation of granular dynamics, with application to motions of particles (regolith) on small solar system bodies and planetary surfaces [1]. The method employs the parallel N-body tree code pkdgrav [2] to search for collisions and compute particle trajectories. Particle confinement is achieved through arbitrary combinations of four provided wall primitives, namely infinite plane, finite disk, infinite cylinder, and finite cylinder, and degenerate cases of these. Various wall movements, including translation, oscillation, and rotation, are supported. Several tests of the method are presented, including a model granular "atmosphere" that achieves correct energy equipartition, and a series of tumbler simulations that compare favorably with actual laboratory experiments [3]. DCR and SRS acknowledge NASA Grant No. NNX08AM39G and NSF Grant No. AST0524875; KJW, the Poincaré Fellowship at OCA; NM, Thales Alenia Space and The Open University; and PM and NM, the French Programme National de Planétologie. References: [1] Richardson et al. (2010), Icarus, submitted; [2] Cf. Richardson et al. (2009), P&SS 57, 183 and references therein; [3] Brucks et al. (2007), PRE 75, 032301-1-032301-4.
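
    As a hedged illustration of the simplest wall primitive above, the sketch below bounces a soft sphere on an infinite plane using a spring-dashpot normal force; the constants are invented, and this is not pkdgrav's actual contact model:

      # A sphere bouncing on the infinite-plane wall primitive, with a
      # linear spring-dashpot normal force while the sphere overlaps the wall.
      g, R, m = -9.81, 0.05, 0.1        # gravity, particle radius, mass
      kn, cn = 5e4, 2.0                 # normal spring stiffness and damping
      z, vz, dt = 0.5, 0.0, 1e-5        # height, velocity, time step

      for step in range(200000):
          overlap = R - z               # penetration into the plane z = 0
          fn = kn * overlap - cn * vz if overlap > 0 else 0.0
          vz += (g + fn / m) * dt       # kick: gravity plus contact force
          z += vz * dt                  # drift
          if step % 40000 == 0:
              print(f"t={step*dt:.2f}s  z={z:.4f}m")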

  12. Bayesian individualization via sampling-based methods.

    PubMed

    Wakefield, J

    1996-02-01

    We consider the situation where we wish to adjust the dosage regimen of a patient based on (in general) sparse concentration measurements taken on-line. A Bayesian decision theory approach is taken which requires the specification of an appropriate prior distribution and loss function. A simple method for obtaining samples from the posterior distribution of the pharmacokinetic parameters of the patient is described. In general, these samples are used to obtain a Monte Carlo estimate of the expected loss which is then minimized with respect to the dosage regimen. Some special cases which yield analytic solutions are described. When the prior distribution is based on a population analysis then a method of accounting for the uncertainty in the population parameters is described. Two simulation studies showing how the methods work in practice are presented. PMID:8827585
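
    A minimal sketch of this sampling-based loop, assuming a hypothetical one-compartment model and an already-available set of posterior clearance samples, might look as follows:

      # Pick the dose whose Monte Carlo expected loss (squared distance of
      # steady-state concentration from a target) is smallest. The model and
      # all numbers are illustrative, not from the paper.
      import numpy as np

      rng = np.random.default_rng(2)
      target = 15.0                                  # target concentration (mg/L)
      tau = 12.0                                     # dosing interval (h)

      # Pretend these came from a posterior sampler (e.g. MCMC or importance sampling).
      clearance = rng.lognormal(mean=np.log(2.5), sigma=0.3, size=5000)  # L/h

      def expected_loss(dose):
          css = dose / (clearance * tau)             # steady-state average conc.
          return np.mean((css - target) ** 2)        # quadratic loss

      doses = np.linspace(100, 1200, 111)
      best = doses[np.argmin([expected_loss(d) for d in doses])]
      print(f"dose minimising expected loss: {best:.0f} mg every {tau:.0f} h")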

  13. Simulation reduction using the Taguchi method

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Lautenschlager, Ume; Erikstad, Stein Owe; Allen, Janet K.

    1993-01-01

    A large amount of engineering effort is consumed in conducting experiments to obtain information needed for making design decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and having high-quality products. The principal focus of this project is to develop and implement applications of Taguchi's quality engineering techniques. In particular, we show how these techniques are applied to reduce the number of experiments for trajectory simulation of the LifeSat space vehicle. Orthogonal arrays are used to study many parameters simultaneously with a minimum of time and resources, and Taguchi's signal-to-noise ratio is employed to measure quality. A compromise Decision Support Problem and Robust Design are applied to demonstrate how quality is designed into a product in the early stages of design.
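
    The orthogonal-array bookkeeping is compact enough to sketch; the L4(2^3) array is standard, but the response values below are invented for illustration:

      # An L4(2^3) orthogonal array, a smaller-the-better signal-to-noise
      # ratio, and per-factor level effects.
      import numpy as np

      L4 = np.array([[0, 0, 0],          # each row: levels of factors A, B, C
                     [0, 1, 1],
                     [1, 0, 1],
                     [1, 1, 0]])
      # Two noise replicates of a response (e.g. trajectory dispersion) per run.
      y = np.array([[3.1, 3.4], [2.2, 2.6], [4.0, 4.4], [2.9, 3.2]])

      # Smaller-the-better signal-to-noise ratio per experimental run.
      sn = -10 * np.log10((y ** 2).mean(axis=1))

      for f, name in enumerate("ABC"):
          effects = [sn[L4[:, f] == lvl].mean() for lvl in (0, 1)]
          print(f"factor {name}: best level = {int(np.argmax(effects))}, "
                f"S/N per level = {np.round(effects, 2)}")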

  14. Simulation of the 2-dimensional Drude’s model using molecular dynamics method

    SciTech Connect

    Naa, Christian Fredy; Amin, Aisyah; Ramli,; Suprijadi,; Djamal, Mitra; Wahyoedi, Seramika Ari; Viridi, Sparisoma

    2015-04-16

    In this paper, we report the results of a simulation of electronic conduction in solids. The simulation is based on the Drude model and applies the molecular dynamics (MD) method, using the fifth-order predictor-corrector algorithm. A formula for the electrical conductivity as a function of lattice length and ion diameter, σ(L, d), can be obtained empirically from the simulation results.
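
    The paper's fifth-order predictor-corrector MD is not reproduced here, but the Drude picture itself can be illustrated with a much simpler kinetic sketch in which collisions randomize the electron velocity at rate 1/τ (all values in reduced units, chosen arbitrarily):

      # An electron accelerates in a field and is randomised by collisions
      # with probability dt/tau, giving a drift velocity ~ e*E*tau/m.
      import numpy as np

      rng = np.random.default_rng(3)
      e_over_m, E = 1.0, 0.1          # reduced units
      tau, dt, nsteps = 5.0, 0.01, 200000

      v, vsum = 0.0, 0.0
      for _ in range(nsteps):
          v += e_over_m * E * dt            # acceleration by the applied field
          if rng.random() < dt / tau:       # collision: velocity is reset
              v = rng.normal(0.0, 1.0)      # thermal kick
          vsum += v

      print("measured drift:", vsum / nsteps, " Drude prediction:", e_over_m * E * tau)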

  15. A web-based virtual lighting simulator

    SciTech Connect

    Papamichael, Konstantinos; Lai, Judy; Fuller, Daniel; Tariq, Tara

    2002-05-06

    This paper describes a web-based "virtual lighting simulator," which is intended to allow architects and lighting designers to quickly assess the effect of key parameters on daylighting and lighting performance in various space types. The virtual lighting simulator consists of a web-based interface that allows navigation through a large database of images and data, which were generated through parametric lighting simulations. In its current form, the virtual lighting simulator has two main modules, one for daylighting and one for electric lighting. The daylighting module includes images and data for a small office space, varying most key daylighting parameters, such as window size and orientation, glazing type, surface reflectance, sky conditions, time of the year, etc. The electric lighting module includes images and data for five space types (classroom, small office, large open office, warehouse and small retail), varying key lighting parameters, such as the electric lighting system, surface reflectance, and dimming/switching. The computed images include perspectives and plans and are displayed in various formats to support qualitative as well as quantitative assessment. The quantitative information is in the form of iso-contour lines superimposed on the images, as well as false color images and statistical information on work plane illuminance. The qualitative information includes images that are adjusted to account for the sensitivity and adaptation of the human eye. The paper also includes a section on the major technical issues and their resolution.

  16. Simulating Biofilm Deformation and Detachment with the Immersed Boundary Method

    NASA Astrophysics Data System (ADS)

    Sudarsan, Rangarajan; Ghosh, Sudeshna; Stockie, John M.; Eberl, Hermann J.

    2016-03-01

    We apply the immersed boundary (or IB) method to simulate deformation and detachment of a periodic array of wall-bounded biofilm colonies in response to a linear shear flow. The biofilm material is represented as a network of Hookean springs that are placed along the edges of a triangulation of the biofilm region. The interfacial shear stress, lift and drag forces acting on the biofilm colony are computed by using the fluid stress jump method developed by Williams, Fauci and Gaver [Disc. Contin. Dyn. Sys. B 11(2):519-540, 2009], with a modified version of their exclusion filter. Our detachment criterion is based on the novel concept of an averaged equivalent continuum stress tensor defined at each IB point in the biofilm, which is then used to determine a corresponding von Mises stress; wherever this stress exceeds a given critical yield threshold, the connections to that node are severed, thereby signalling the onset of a detachment event. In order to capture the deformation and detachment behaviour of a biofilm colony at different stages of growth, we consider a family of four biofilm shapes with varying aspect ratio. Our numerical simulations focus on the behaviour of weak biofilms (with relatively low yield stress threshold) and investigate features of the fluid-structure interaction such as locations of maximum shear and increased drag. The most important conclusion of this work is that the commonly employed detachment strategy in biofilm models, based only on interfacial shear stress, can lead to incorrect or inaccurate results when applied to the study of shear-induced detachment of weak biofilms. Our detachment strategy based on equivalent continuum stresses provides a unified and consistent IB framework that handles both sloughing and erosion modes of biofilm detachment, and is consistent with strategies employed in many other continuum-based biofilm models.

  17. Simulation of secondary fault shear displacements - method and application

    NASA Astrophysics Data System (ADS)

    Fälth, Billy; Hökmark, Harald; Lund, Björn; Mai, P. Martin; Munier, Raymond

    2014-05-01

    We present an earthquake simulation method to calculate dynamically and statically induced shear displacements on faults near a large earthquake. Our results are aimed at improved safety assessment of underground waste storage facilities, e.g. a nuclear waste repository. For our simulations, we use the distinct element code 3DEC. We benchmark 3DEC by running an earthquake simulation and then compare the displacement waveforms at a number of surface receivers with the corresponding results obtained from the COMPSYN code package. The benchmark test shows a good agreement in terms of both phase and amplitude. In our application to a potential earthquake near a storage facility, we use a model with a pre-defined earthquake fault plane (primary fault) surrounded by numerous smaller discontinuities (target fractures) representing faults in which shear movements may be induced by the earthquake. The primary fault and the target fractures are embedded in an elastic medium. Initial stresses are applied and the fault rupture mechanism is simulated through a programmed reduction of the primary fault shear strength, which is initiated at a pre-defined hypocenter. The rupture is propagated at a typical rupture propagation speed and arrested when it reaches the fault plane boundaries. The primary fault residual strength properties are uniform over the fault plane. The method allows for calculation of target fracture shear movements induced by static stress redistribution as well as by dynamic effects. We apply the earthquake simulation method in a model of the Forsmark nuclear waste repository site in Sweden with rock mass properties, in situ stresses and fault geometries according to the description of the site established by the Swedish Nuclear Fuel and Waste Management Co (SKB). The target fracture orientations are based on the Discrete Fracture Network model developed for the site. With parameter values set to provide reasonable upper bound estimates of target fracture

  18. Simulation-based assessment for construction helmets.

    PubMed

    Long, James; Yang, James; Lei, Zhipeng; Liang, Daan

    2015-01-01

    In recent years, there has been a concerted effort for greater job safety in all industries. Personal protective equipment (PPE) has been developed to help mitigate the risk of injury to humans that might be exposed to hazardous situations. The human head is especially vulnerable to impact, as even a moderate acceleration can cause serious injury or death; that is why industries have required the use of an industrial hard hat or helmet. Only a few articles published to date focus on the risk of head injury when wearing an industrial helmet, and a full understanding of the effectiveness of construction helmets in reducing injury is lacking. This paper presents a simulation-based method to determine the threshold at which a human will sustain injury when wearing a construction helmet and assesses the risk of injury for wearers of construction helmets or hard hats. Advanced finite element (FE) models were developed to study the impact on construction helmets. The FE model consists of two parts: the helmet and the human models. The human model consists of a brain, enclosed by a skull and an outer layer of skin. The level and probability of injury to the head were determined using both the head injury criterion (HIC) and the tolerance limits set by Deck and Willinger. The HIC has been widely used to assess the likelihood of head injury in vehicles; the tolerance levels proposed by Deck and Willinger are better suited to finite element models but lack wide-scale validation. Different cases of impact were studied using LSTC's LS-DYNA. PMID:23495784
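
    The HIC used here has a standard closed form: HIC = max over windows [t1, t2] of (t2 - t1)·[(1/(t2 - t1))∫a(t)dt]^2.5, with a(t) in g and the window usually capped (e.g. 15 ms for HIC15). A brute-force evaluation on a synthetic pulse (all numbers illustrative) is:

      # Compute HIC15 for a synthetic 10 ms, 80 g half-sine head acceleration.
      import numpy as np

      dt = 1e-4                                        # 0.1 ms sampling
      t = np.arange(0.0, 0.05, dt)
      a = 80 * np.sin(np.pi * t / 0.01) * (t < 0.01)   # acceleration in g

      cum = np.concatenate(([0.0], np.cumsum(a) * dt)) # running integral of a(t)
      hic, max_win = 0.0, int(0.015 / dt)              # 15 ms window cap
      for i in range(len(t)):
          for j in range(i + 1, min(i + max_win, len(t)) + 1):
              T = (j - i) * dt
              hic = max(hic, T * ((cum[j] - cum[i]) / T) ** 2.5)

      print(f"HIC15 = {hic:.0f}")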

  19. Research on the evaluation method for modelling and simulation of infrared imaging sensor

    NASA Astrophysics Data System (ADS)

    Lou, Shuli; Ren, Jiancun; Liu, Liang; Li, Zhaolong

    2015-10-01

    The validity of infrared image guidance simulation is determined by the fidelity and accuracy of the modelling and simulation of the infrared imaging sensor system, so research on assessing such models is important for designing and evaluating IR systems. A method based on full-reference image quality assessment is proposed to evaluate the simulated infrared sensor effects, and an evaluation index system is established to assess the simulation fidelity of the infrared imaging sensor. The accuracy of irradiance and contrast in the simulated infrared image can be assessed with one-dimensional histogram analysis, the spatial correlation of the image with two-dimensional histogram analysis, and geometric and gray-level distribution characteristics with a fidelity function. With this method, the modelling and simulation of an infrared imaging sensor can be effectively assessed.

  1. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monnie, Michael I; Colvin, Joshua; Faseder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine whether a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM; DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
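
    The flavor of a null-event algorithm can be conveyed with a one-rule sketch; this is a generic illustration, not DYNSTOC's actual data structures or BNGL handling:

      # Null-event simulation of one rule, A + B -> AB: each step draws a
      # random pair of molecules; a matching pair reacts with a small fixed
      # probability, and steps that draw a non-matching pair are "null
      # events" that only advance the clock.
      import random

      random.seed(4)
      molecules = ["A"] * 300 + ["B"] * 200
      p_react, dt, t = 0.05, 0.001, 0.0

      while t < 5.0 and len(molecules) >= 2:
          i, j = random.sample(range(len(molecules)), 2)
          if {molecules[i], molecules[j]} == {"A", "B"} and random.random() < p_react:
              # Replace the two reactants by the bound complex.
              for k in sorted((i, j), reverse=True):
                  molecules.pop(k)
              molecules.append("AB")
          t += dt                      # null events also advance time

      print(f"t={t:.2f}: {molecules.count('AB')} AB complexes formed")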

  2. Fast spot-based multiscale simulations of granular drainage

    SciTech Connect

    Rycroft, Chris H.; Wong, Yee Lok; Bazant, Martin Z.

    2009-05-22

    We develop a multiscale simulation method for dense granular drainage, based on the recently proposed spot model, in which the particle packing flows by local collective displacements in response to diffusing "spots" of interstitial free volume. By comparing with discrete-element method (DEM) simulations of 55,000 spheres in a rectangular silo, we show that the spot simulation is able to approximately capture many features of drainage, such as packing statistics, particle mixing, and flow profiles. The spot simulation runs two to three orders of magnitude faster than DEM, making it an appropriate method for real-time control or optimization. We demonstrate extensions for modeling particle heaping and avalanching at the free surface, and for simulating the boundary layers of slower flow near walls. We show that the spot simulations are robust and flexible, by demonstrating that they can be used in both event-driven and fixed-timestep approaches, and by showing that the elastic relaxation step used in the model can be applied much less frequently and still produce good results.
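
    A toy version of the spot mechanism fits in a few lines; the geometry, spot radius and displacement increment below are invented, not the calibrated values of the model:

      # A "spot" of free volume random-walks upward from the orifice, and
      # every particle inside its radius receives a small collective
      # displacement toward the orifice.
      import numpy as np

      rng = np.random.default_rng(5)
      pts = rng.uniform([0, 0], [10, 20], size=(2000, 2))   # particle packing
      y0 = pts[:, 1].copy()
      spot_r, dz = 1.5, 0.1

      for _ in range(300):
          x, y = 5.0, 0.0                    # a spot enters at the orifice
          while y < 20:
              inside = np.hypot(pts[:, 0] - x, pts[:, 1] - y) < spot_r
              pts[inside, 1] -= dz           # collective downward displacement
              x += rng.normal(0.0, 0.3)      # spot diffuses sideways as it rises
              y += 0.5

      print("mean particle descent:", (y0 - pts[:, 1]).mean())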

  3. Parallel Performance Optimization of the Direct Simulation Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gao, Da; Zhang, Chonglin; Schwartzentruber, Thomas

    2009-11-01

    Although the direct simulation Monte Carlo (DSMC) particle method is more computationally intensive than continuum methods, it is accurate for conditions ranging from continuum to free-molecular, accurate in highly non-equilibrium flow regions, and holds potential for incorporating advanced molecular-based models for gas-phase and gas-surface interactions. As available computer resources continue their rapid growth, the DSMC method is continually being applied to increasingly complex flow problems. Although processor clock speed continues to increase, a trend toward multi-core-per-node parallel architectures is emerging. To effectively utilize such current and future parallel computing systems, a combined shared/distributed memory parallel implementation (using both Open Multi-Processing (OpenMP) and Message Passing Interface (MPI)) of the DSMC method is under development. The parallel implementation of a new state-of-the-art 3D DSMC code employing an embedded 3-level Cartesian mesh will be outlined. The presentation will focus on performance optimization strategies for DSMC, including, but not limited to, modified algorithm designs, practical code-tuning techniques, and parallel performance optimization. Specifically, key issues important to DSMC shared-memory (OpenMP) parallel performance are identified as (1) granularity, (2) load balancing, (3) locality, and (4) synchronization. Challenges and solutions associated with these issues as they pertain to the DSMC method will be discussed.

  4. Daylighting simulation: methods, algorithms, and resources

    SciTech Connect

    Carroll, William L.

    1999-12-01

    This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and components properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, driven in part by the commercial computer graphics community (commerce, entertainment), the lighting industry, architectural rendering and visualization for projects, and academia (course materials, research). This has led to a very rich set of information resources with direct applicability to the small daylighting analysis community, much of which is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and are used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations; this in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of

  5. Simulations of 6-DOF Motion with a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographically derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.

  6. Kinetic Method for Hydrogen-Deuterium-Tritium Mixture Distillation Simulation

    SciTech Connect

    Sazonov, A.B.; Kagramanov, Z.G.; Magomedbekov, E.P.

    2005-07-15

    Simulation of hydrogen distillation plants requires mathematical procedures suitable for multicomponent systems. In most present-day simulation methods a distillation column is assumed to be composed of theoretical stages, or plates. However, in the case of a multicomponent mixture a theoretical plate does not exist. An alternative kinetic method of simulation is described in this work. According to this method, a system of mass-transfer differential equations is solved numerically, with mass-transfer coefficients estimated using experimental results and empirical equations. The developed method allows calculation of the steady state of a distillation column as well as any non-steady state for given initial conditions. The results for steady states are compared with ones obtained via the Thiele-Geddes theoretical-stage technique, and the necessity of using the kinetic method is demonstrated. Examples of column startup and periodic distillation simulations are shown as well.

  7. Interactive physically-based sound simulation

    NASA Astrophysics Data System (ADS)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation

  8. Experiential Learning through Computer-Based Simulations.

    ERIC Educational Resources Information Center

    Maynes, Bill; And Others

    1992-01-01

    Describes experiential learning instructional model and simulation for student principals. Describes interactive laser videodisc simulation. Reports preliminary findings about student principal learning from simulation. Examines learning approaches by unsuccessful and successful students and learning levels of model learners. Simulation's success…

  9. Collaborative virtual experience based on reconfigurable simulation

    NASA Astrophysics Data System (ADS)

    Shahab, Qonita M.; Kwon, Yong-Moo; Ko, Heedong

    2006-10-01

    Virtual Reality simulation enables an immersive 3D experience of a Virtual Environment. A simulation-based Virtual Environment can be used to map real-world phenomena onto virtual experience. With a reconfigurable simulation, users can reconfigure the parameters of the objects involved, so that they can see the different effects of different configurations. This concept is suitable for classroom learning of physical laws. This research studies the Virtual Reality simulation of Newtonian physics on rigid-body objects. With network support, collaborative interaction is enabled, so that people in different places can interact with the same set of objects in an immersive Collaborative Virtual Environment. The taxonomy of interaction at different levels of collaboration distinguishes distinct objects from the same object, the latter subdivided into same object - sequentially, same object - concurrently - same attribute, and same object - concurrently - distinct attributes. The case studies are the interactions of users in two tasks: destroying and creating a set of arranged rigid bodies. In Virtual Domino, users can observe physical laws while applying force to domino blocks in order to destroy the arrangements. In Virtual Dollhouse, users can observe physical laws while constructing a dollhouse using existing building blocks, under gravity effects.

  10. Dual Energy Method for Breast Imaging: A Simulation Study

    PubMed Central

    Koukou, V.; Martini, N.; Michail, C.; Sotiropoulou, P.; Fountzoula, C.; Kalyvas, N.; Kandarakis, I.; Nikiforidis, G.; Fountos, G.

    2015-01-01

    Dual energy methods can suppress the contrast between adipose and glandular tissues in the breast and therefore enhance the visibility of calcifications. In this study, a dual energy method based on analytical modeling was developed for the detection of minimum microcalcification thickness. To this aim, a modified radiographic X-ray unit was considered, in order to overcome the limited kVp range of mammographic units used in previous DE studies, combined with a high resolution CMOS sensor (pixel size of 22.5 μm) for improved resolution. Various filter materials were examined based on their K-absorption edge. Hydroxyapatite (HAp) was used to simulate microcalcifications. The contrast to noise ratio (CNRtc) of the subtracted images was calculated for both monoenergetic and polyenergetic X-ray beams. The optimum monoenergetic pair was 23/58 keV for the low and high energy, respectively, resulting in a minimum detectable microcalcification thickness of 100 μm. In the polyenergetic X-ray study, the optimal spectral combination was 40/70 kVp filtered with 100 μm cadmium and 1000 μm copper, respectively. In this case, the minimum detectable microcalcification thickness was 150 μm. The proposed dual energy method provides improved microcalcification detectability in breast imaging with mean glandular dose values within acceptable levels. PMID:26246848
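
    The subtraction and CNR bookkeeping can be sketched on synthetic images; the weighting factor and noise levels below are illustrative, not the optimized values reported above:

      # Dual-energy weighted subtraction and contrast-to-noise ratio (CNR)
      # on synthetic low/high-energy log-signal images.
      import numpy as np

      rng = np.random.default_rng(6)
      shape = (128, 128)
      calc = np.zeros(shape); calc[60:68, 60:68] = 1.0        # microcalcification mask

      # Tissue background + calcification contrast + noise.
      low  = 1.00 * calc + 0.50 + rng.normal(0, 0.02, shape)
      high = 0.40 * calc + 0.25 + rng.normal(0, 0.02, shape)

      w = 0.5                                # weight that cancels the tissue signal
      sub = high - w * low                   # dual-energy subtracted image

      sig = sub[calc == 1].mean()
      bg, bg_sd = sub[calc == 0].mean(), sub[calc == 0].std()
      print(f"CNR = {abs(sig - bg) / bg_sd:.1f}")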

  11. Ocean Wave Simulation Based on Wind Field.

    PubMed

    Li, Zhongyi; Wang, Hao

    2016-01-01

    Ocean wave simulation has a wide range of applications in movies, video games and training systems. Wind force is the main energy source for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods for simulating oceans and other fluid phenomena have undergone rapid development during the past years in the field of computer graphics, few of them construct the ocean surface height field from the perspective of the wind forces driving ocean waves. We introduce wind force into the construction of the ocean surface height field by applying wind field data and wind-driven wave particles. Continual and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy is proposed to control these discrete wave particles and simulate an endless ocean surface. The results show that the new method is capable of producing a realistic ocean scene under the influence of wind fields at real-time rates. PMID:26808718
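
    A minimal wind-driven height field, with wave directions clustered around the wind vector and deep-water dispersion, conveys the idea (all spectra and constants below are invented):

      # Superpose wave components whose directions spread about the wind
      # vector into a height field h(x, y, t).
      import numpy as np

      rng = np.random.default_rng(7)
      wind_dir, g = np.deg2rad(30.0), 9.81
      n_waves = 64

      theta = wind_dir + rng.normal(0, 0.35, n_waves)     # spread about the wind
      k = rng.uniform(0.05, 0.5, n_waves)                 # wavenumbers (rad/m)
      amp = 0.3 / (1 + 10 * k)                            # long waves carry more energy
      phase = rng.uniform(0, 2 * np.pi, n_waves)
      omega = np.sqrt(g * k)                              # deep-water dispersion

      x, y = np.meshgrid(np.linspace(0, 200, 256), np.linspace(0, 200, 256))

      def height(t):
          h = np.zeros_like(x)
          for i in range(n_waves):
              kx, ky = k[i] * np.cos(theta[i]), k[i] * np.sin(theta[i])
              h += amp[i] * np.cos(kx * x + ky * y - omega[i] * t + phase[i])
          return h

      print("surface std at t=0:", height(0.0).std())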

  12. Improved computational methods for simulating inertial confinement fusion

    NASA Astrophysics Data System (ADS)

    Fatenejad, Milad

    This dissertation describes the development of two multidimensional Lagrangian codes for simulating inertial confinement fusion (ICF) on structured meshes. The first is DRACO, a production code primarily developed by the Laboratory for Laser Energetics. Several significant new capabilities were implemented, including the ability to model radiative transfer using Implicit Monte Carlo [Fleck et al., JCP 8, 313 (1971)]. DRACO, originally used only in 2D cylindrical geometry, was also extended to operate in 3D Cartesian geometry on hexahedral meshes. This included implementing thermal conduction and a flux-limited multigroup diffusion model for radiative transfer. Diffusion equations are solved by extending the 2D Kershaw method [Kershaw, JCP 39, 375 (1981)] to three dimensions. The second radiation-hydrodynamics code developed as part of this thesis is Cooper, a new 3D code which operates on structured hexahedral meshes. Cooper supports the compatible hydrodynamics framework [Caramana et al., JCP 146, 227 (1998)] to obtain round-off error levels of global energy conservation. This level of energy conservation is maintained even when two-temperature thermal conduction, ion/electron equilibration, and multigroup diffusion based radiative transfer are active. Cooper is parallelized using domain decomposition and photon energy group decomposition. The Mesh Oriented datABase (MOAB) computational library is used to exchange information between processes when domain decomposition is used. Cooper's performance is analyzed through direct comparisons with DRACO. Cooper also contains a method for preserving spherical symmetry during target implosions [Caramana et al., JCP 157, 89 (1999)]. Several deceleration-phase implosion simulations were used to compare instability growth using traditional hydrodynamics and compatible hydrodynamics with/without symmetry modification. These simulations demonstrate increased symmetry preservation errors when traditional hydrodynamics

  13. Performance Analysis of an Actor-Based Distributed Simulation

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.
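
    The actor style under discussion (active objects exchanging messages rather than making method calls) can be sketched generically with threads and queues; this is an illustration of the pattern, not the paper's simulation framework:

      # Two actors, each with its own thread of control and mailbox,
      # communicating only by messages.
      import threading
      import queue

      def actor(name, inbox, outbox, rounds):
          # Each actor reacts only to the messages it receives.
          for _ in range(rounds):
              msg = inbox.get()
              outbox.put(f"{name} processed {msg}")

      a_in, b_in = queue.Queue(), queue.Queue()

      a = threading.Thread(target=actor, args=("A", a_in, b_in, 3))
      b = threading.Thread(target=actor, args=("B", b_in, a_in, 3))
      a.start(); b.start()

      a_in.put("event-0")            # seed the exchange
      a.join(); b.join()
      print("last message:", a_in.get())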

  14. Modelling and Simulation as a Recognizing Method in Education

    ERIC Educational Resources Information Center

    Stoffa, Veronika

    2004-01-01

    Computer animation-simulation models of complex processes and events, used as a method of instruction, can be an effective didactic device. Gaining deeper knowledge about the objects modelled helps in planning simulation experiments oriented toward the processes and events being studied. Animation experiments realized on multimedia computers can aid easier…

  15. Simulation of nonpoint source contamination based on adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Kourakos, G.; Harter, T.

    2014-12-01

    Contamination of groundwater aquifers from nonpoint sources is a worldwide problem. Typical agricultural groundwater basins receive contamination from a large array (on the order of 10^5-10^6) of spatially and temporally heterogeneous sources such as fields, crops and dairies, while the contaminants emerge, after significantly uncertain time lags, at a large array of discharge surfaces such as public supply, domestic and irrigation wells and streams. To support decision making in such complex regimes several approaches have been developed, which can be grouped into three categories: (i) index methods, (ii) regression methods and (iii) physically based methods. Among the three, physically based methods are considered more accurate, but at the cost of computational demand. In this work we present a physically based simulation framework which exploits the latest hardware and software developments to simulate large (>>1,000 km²) groundwater basins. First, we simulate groundwater flow using a sufficiently detailed mesh to capture the spatial heterogeneity. To achieve optimal mesh quality we combine adaptive mesh refinement with the nonlinear solution for unconfined flow: starting from a coarse grid, the mesh is refined iteratively in the parts of the domain where the flow heterogeneity is higher, resulting in an optimal grid. Second, we simulate the nonpoint source pollution based on the detailed velocity field computed in the previous step. In our approach we use the streamline model, in which the 3D transport problem is decomposed into multiple 1D transport problems. The proposed framework is applied to simulate nonpoint source pollution in the Central Valley aquifer system, California.
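
    Each streamline yields a 1D transport problem of the kind sketched below (an explicit upwind scheme with invented velocities and loading, not the Central Valley configuration):

      # 1D advective transport along one streamline with a first-order
      # upwind scheme and a constant nonpoint source at the inlet.
      import numpy as np

      nx, L = 200, 1000.0            # cells, streamline length (m)
      dx = L / nx
      v = 2.0                        # pore velocity along the streamline (m/yr)
      dt = 0.4 * dx / v              # CFL-stable time step
      c = np.zeros(nx)               # solute concentration

      for step in range(int(50.0 / dt)):            # 50 years of loading
          c[0] = 1.0                                # source at the water table
          c[1:] -= v * dt / dx * (c[1:] - c[:-1])   # upwind update

      print(f"front position ~ {np.argmax(c < 0.5) * dx:.0f} m after 50 yr")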

  16. Simulation-based instruction of technical skills

    NASA Technical Reports Server (NTRS)

    Towne, Douglas M.; Munro, Allen

    1991-01-01

    A rapid intelligent tutoring development system (RAPIDS) was developed to facilitate the production of interactive, real-time graphical device models for use in instructing the operation and maintenance of complex systems. The tools allowed subject matter experts to produce device models by creating instances of previously defined objects and positioning them in the emerging device model. These simulation authoring functions, as well as those associated with demonstrating procedures and functional effects on the completed model, required no previous programming experience or use of frame-based instructional languages. Three large simulations were developed in RAPIDS, each involving more than a dozen screen-sized sections. Seven small, single-view applications were developed to explore the range of applicability. Three workshops were conducted to train others in the use of the authoring tools. Participants learned to employ the authoring tools in three to four days and were able to produce small working device models on the fifth day.

  17. Weak turbulence simulations with the Hermite-Fourier spectral method

    NASA Astrophysics Data System (ADS)

    Vencels, Juris; Delzanno, Gian Luca; Manzini, Gianmarco; Roytershteyn, Vadim; Markidis, Stefano

    2015-11-01

    Recently, a new (transform) method based on a Fourier-Hermite (FH) discretization of the Vlasov-Maxwell equations has been developed. The resulting set of moment equations is discretized implicitly in time with a Crank-Nicolson scheme and solved with a nonlinear Newton-Krylov technique. For periodic boundary conditions, this discretization delivers a scheme that conserves the total mass, momentum and energy of the system exactly. In this work, we apply the FH method to study a problem of Langmuir turbulence, where a low signal-to-noise ratio is important for following the turbulent cascade and might require substantial computational resources if studied with PIC. We simulate a weak (low density) electron beam moving in a Maxwellian plasma and subject to an instability that generates Langmuir waves and a weak turbulence field. We also discuss optimization techniques for selecting the Hermite basis in terms of its shift and scaling arguments, and show that this technique improves the overall accuracy of the method. Finally, we discuss the applicability of the FH method to studying kinetic plasma turbulence. This work was funded by LDRD under the auspices of the NNSA of the U.S. by LANL under contract DE-AC52-06NA25396 and by the EC through the EPiGRAM project (grant agreement no. 610598, epigram-project.eu).

  18. Deployment Simulation Methods for Ultra-Lightweight Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Johnson, Arthur R.

    2003-01-01

    Two dynamic inflation simulation methods are employed for modeling the deployment of folded thin-membrane tubes. The simulations are necessary because ground tests include gravity effects and may poorly represent deployment in space. The two simulation methods are referred to as the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method. They are available in the LS-DYNA nonlinear dynamic finite element code. Both methods are suitable for modeling the interactions between the inflation gas and the thin-membrane tube structures. The CV method only considers the pressure induced by the inflation gas in the simulation, while the ALE method models the actual flow of the inflation gas. Thus, the transient fluid properties at any location within the tube can be predicted by the ALE method. Deployment simulations of three packaged tube models, namely coiled, Z-folded, and telescopically folded configurations, are performed. Results predicted by both methods for the telescopically folded configuration are correlated, and computational efficiency issues are discussed.

  1. An Efficient, Semi-implicit Pressure-based Scheme Employing a High-resolution Finite Element Method for Simulating Transient and Steady, Inviscid and Viscous, Compressible Flows on Unstructured Grids

    SciTech Connect

    Richard C. Martineau; Ray A. Berry

    2003-04-01

    A new semi-implicit pressure-based Computational Fluid Dynamics (CFD) scheme for simulating a wide range of transient and steady, inviscid and viscous compressible flows on unstructured finite elements is presented here. This new CFD scheme, termed the PCICE-FEM (Pressure-Corrected ICE-Finite Element Method) scheme, is composed of three computational phases: an explicit predictor, an elliptic pressure Poisson solution, and a semi-implicit pressure-correction of the flow variables. The PCICE-FEM scheme is capable of second-order temporal accuracy by incorporating a combination of a time-weighted form of the two-step Taylor-Galerkin Finite Element Method scheme as an explicit predictor for the balance of momentum equations and the finite element form of a time-weighted trapezoid rule method for the semi-implicit form of the governing hydrodynamic equations. Second-order spatial accuracy is accomplished by linear unstructured finite element discretization. The PCICE-FEM scheme employs Flux-Corrected Transport as a high-resolution filter for shock capturing. The scheme is capable of simulating flows from the nearly incompressible to the high supersonic flow regimes. The PCICE-FEM scheme represents an advancement in mass-momentum coupled, pressure-based schemes. The governing hydrodynamic equations for this scheme are the conservative form of the balance of momentum equations (Navier-Stokes), the mass conservation equation, and the total energy equation. An operator splitting process is performed along explicit and implicit operators of the semi-implicit governing equations to render the PCICE-FEM scheme in the class of predictor-corrector schemes. The complete set of semi-implicit governing equations in the PCICE-FEM scheme is cast in this form, an explicit predictor phase and a semi-implicit pressure-correction phase with the elliptic pressure Poisson solution coupling the predictor-corrector phases. The result of this predictor-corrector formulation is that the pressure Poisson

  2. Simulation-Based Rule Generation Considering Readability

    PubMed Central

    Yahagi, H.; Shimizu, S.; Ogata, T.; Hara, T.; Ota, J.

    2015-01-01

    A rule generation method is proposed for an aircraft control problem in an airport. Designing appropriate rules for the motion coordination of taxiing aircraft in an airport, a task conducted by ground control, is important. However, previous studies did not consider the readability of rules, which matters because rules must be operated and maintained by humans. Therefore, in this study, using an indicator of readability, we propose a method of rule generation based on parallel algorithm discovery and orchestration (PADO). Applied to the aircraft control problem, the proposed algorithm generates more readable and more robust rules and is found to be superior to previous methods. PMID:27347501

  3. On the simulation of trailing edge noise with a hybrid LES/APE method

    NASA Astrophysics Data System (ADS)

    Ewert, R.; Schröder, W.

    2004-02-01

    A hybrid method is applied to predict trailing edge noise based on a large eddy simulation (LES) of the compressible flow problem and acoustic perturbation equations (APE) for the time-dependent simulation of the acoustic field. The acoustic simulation in general considers the mean flow convection and refraction effects such that the computational domain of the flow simulation has to comprise only the significant acoustic source region. Using a modified rescaling method for the prediction of the unsteady turbulent inflow boundary layer, the LES just resolves the flow field in the immediate vicinity of the trailing edge. The linearized APE completely prevent the unbounded growth of hydrodynamic instabilities in critical mean flows.

  4. Fault diagnosis based on continuous simulation models

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    1987-01-01

    The results are described of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can the model be made to behave like the observed, malfunctioning system? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.

  5. Simulation-based disassembly systems design

    NASA Astrophysics Data System (ADS)

    Ohlendorf, Martin; Herrmann, Christoph; Hesselbach, Juergen

    2004-02-01

    Recycling of Waste Electrical and Electronic Equipment (WEEE) is a matter of current concern, driven by economic, ecological and legislative reasons. Here, disassembly, as the first step of the treatment process, plays a key role. To achieve sustainable progress in WEEE disassembly, the key is not to limit analysis and planning to disassembly processes in a narrow sense, but to consider entire disassembly plants, including additional aspects such as internal logistics, storage and sorting, as well. In this regard, the paper presents ways of designing, dimensioning, structuring and modeling different disassembly systems. The goal is to achieve efficient and economic disassembly systems that allow recycling processes complying with legal requirements. Moreover, advantages of applying simulation software tools that are widespread and successfully utilized in conventional industry sectors are addressed. They support systematic disassembly planning by means of simulation experiments with consecutive efficiency evaluation. Consequently, anticipatory recycling planning considering various scenarios is enabled, and decisions about which types of disassembly systems are appropriate for specific circumstances, such as product spectrum, throughput and disassembly depth, are supported. Furthermore, the integration of simulation-based disassembly planning into a holistic concept, with configuration of interfaces and data utilization including cost aspects, is described.

  7. Coherent-wave Monte Carlo method for simulating light propagation in tissue

    NASA Astrophysics Data System (ADS)

    Kraszewski, Maciej; Pluciński, Jerzy

    2016-03-01

    Simulating the propagation and scattering of coherent light in turbid media, such as biological tissues, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g., finite-difference or finite-element methods) require a large amount of computer memory and long computation times. This makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, allows simulation only of light propagation averaged over the ensemble of turbid medium realizations. This makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating the propagation of coherent light (e.g., a laser beam) in biological tissue, which we call the coherent-wave Monte Carlo method. This method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the amount of memory and computation time required for the simulation. We present the theoretical basis of the proposed method and its comparison with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.

  8. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  9. Methods for increased computational efficiency of multibody simulations

    NASA Astrophysics Data System (ADS)

    Epple, Alexander

    This thesis is concerned with the efficient numerical simulation of finite element based flexible multibody systems. Scaling operations are systematically applied to the governing index-3 differential algebraic equations in order to solve the problem of ill conditioning for small time step sizes. The importance of augmented Lagrangian terms is demonstrated. The use of fast sparse solvers is justified for the solution of the linearized equations of motion resulting in significant savings of computational costs. Three time stepping schemes for the integration of the governing equations of flexible multibody systems are discussed in detail. These schemes are the two-stage Radau IIA scheme, the energy decaying scheme, and the generalized-α method. Their formulations are adapted to the specific structure of the governing equations of flexible multibody systems. The efficiency of the time integration schemes is comprehensively evaluated on a series of test problems. Formulations for structural and constraint elements are reviewed and the problem of interpolation of finite rotations in geometrically exact structural elements is revisited. This results in the development of a new improved interpolation algorithm, which preserves the objectivity of the strain field and guarantees stable simulations in the presence of arbitrarily large rotations. Finally, strategies for the spatial discretization of beams in the presence of steep variations in cross-sectional properties are developed. These strategies reduce the number of degrees of freedom needed to accurately analyze beams with discontinuous properties, resulting in improved computational efficiency.

  10. Large Eddy Simulation and the Filtered Probability Density Function Method

    NASA Astrophysics Data System (ADS)

    Jones, W. P.; Navarro-Martinez, S.

    2009-12-01

    Recently there has been increased interest in modelling combustion processes with high levels of extinction and re-ignition. Such systems often lie beyond the scope of conventional single-scalar-based models. Large Eddy Simulation (LES) has shown large potential for describing turbulent reactive systems, though combustion occurs at the smallest, unresolved scales of the flow and must be modelled. In the sub-grid Probability Density Function (pdf) method, approximations are devised to close the evolution equation for the joint pdf, which is then solved directly. The paper describes such an approach and concerns, in particular, the Eulerian stochastic field method of solving the pdf equation. The paper examines the capabilities of the LES-pdf method in capturing auto-ignition and extinction events in different partially premixed configurations with different fuels (hydrogen, methane and n-heptane). The results show that the LES-pdf formulation can capture different regimes without any parameter adjustments, independent of Reynolds number and fuel type.

  11. DISPLACEMENT BASED SEISMIC DESIGN METHODS.

    SciTech Connect

    Hofmayer, C.; Miller, C.; Wang, Y.; Costello, J.

    2003-07-15

    A research effort was undertaken to determine the need for any changes to USNRC's seismic regulatory practice to reflect the move, in the earthquake engineering community, toward using expected displacement rather than force (or stress) as the basis for assessing design adequacy. The research explored the extent to which displacement based seismic design methods, such as given in FEMA 273, could be useful for reviewing nuclear power stations. Two structures common to nuclear power plants were chosen to compare the results of the analysis models used. The first structure is a four-story frame structure with shear walls providing the primary lateral load system, referred to herein as the shear wall model. The second structure is the turbine building of the Diablo Canyon nuclear power plant. The models were analyzed using both displacement based (pushover) analysis and nonlinear dynamic analysis. In addition, for the shear wall model an elastic analysis with ductility factors applied was also performed. The objectives of the work were to compare the results between the analyses, and to develop insights regarding the work that would be needed before the displacement based analysis methodology could be considered applicable to facilities licensed by the NRC. A summary of the research results, which were published in NUREG/CR-6719 in July 2001, is presented in this paper.

  12. Simulating the daylight performance of fenestration systems and spaces of arbitrary complexity: The IDC method

    NASA Astrophysics Data System (ADS)

    Papamichael, K.; Beltran, L.

    1993-04-01

    A new method to simulate the daylight performance of fenestration systems and spaces is presented. This new method, named IDC (Integration of Directional Coefficients), allows the simulation of the daylight performance of fenestration systems and spaces of arbitrary complexity, under any sun, sky, and ground conditions. The IDC method is based on the combination of scale model photometry and computer-based simulation. Physical scale models are used to experimentally determine a comprehensive set of 'directional illuminance coefficients' at reference points of interest, which are then used in analytical, computer-based routines, to determine daylight factors or actual daylight illuminance values under any sun, sky, and ground conditions. The main advantage of the IDC method is its applicability to any optically complex environment. Moreover, the computer-based analytical routines are fast enough to allow for hourly simulation of the daylight performance over the course of an entire year. However, the method requires appropriate experimental facilities for the determination of the Directional Coefficients. The IDC method has been implemented and used successfully in inter-validation procedures with various daylight simulation computer programs. Currently, it is used to simulate the daylight performance of fenestration systems that incorporate optically complex components, such as Venetian blinds, optically treated light shelves and light pipes.
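
    A minimal sketch of the core IDC evaluation step, assuming the directional illuminance coefficients have already been measured in the scale model: once the coefficients are known, each new sun, sky, and ground condition reduces to a weighted sum (a dot product), which is why hourly annual simulation is fast. All array names and values below are hypothetical placeholders.

    ```python
    import numpy as np

    # Hypothetical directional illuminance coefficients C[i] for one reference
    # point, one per sky patch i (measured once via scale-model photometry),
    # and patch luminances L[i] for a particular sun/sky/ground condition.
    rng = np.random.default_rng(0)
    n_patches = 145                          # e.g. a Tregenza-style subdivision
    C = rng.uniform(0.0, 1e-3, n_patches)    # coefficients (system-dependent)
    L = rng.uniform(0.0, 1e4, n_patches)     # patch luminances, cd/m^2

    # Evaluating a new sky condition is a single dot product per point.
    E = C @ L
    print(f"illuminance at reference point: {E:.1f} (units follow C)")
    ```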

  13. A modified method of characteristics and its application in forward and inversion simulations of underwater explosion

    NASA Astrophysics Data System (ADS)

    Zhang, Chengjiao; Li, Xiaojie; Yang, Chenchen

    2016-07-01

    This paper introduces a modified method of characteristics and its application in forward and inversion simulations of underwater explosion. Compared with the standard method of characteristics, which is appropriate for homentropic flow problems, the modified method can also be used to deal with isentropic flow problems such as underwater explosion. Underwater explosions of spherical TNT and composition B explosives are simulated using the modified method. Peak pressures and flow field pressures are obtained, and they coincide with those from empirical formulas. The comparison demonstrates that the modified method is feasible and reliable in underwater explosion simulation. Based on the modified method, inverse difference schemes and an inverse method are introduced. Combined with the modified method, the inverse schemes can be used to deal with gas-water interface inversion of underwater explosion. Inversion simulations of underwater explosions of the explosives are performed in water, and no equation of state (EOS) of the detonation product is needed. The peak pressures from the forward simulations are provided as boundary conditions in the inversion simulations. Inversion interfaces are obtained, and they are largely in good agreement with those from the forward simulations in the near field. The comparison indicates the inverse method and the inverse difference schemes are reliable and reasonable in interface inversion simulation.

  14. Broadening the interface bandwidth in simulation based training

    NASA Technical Reports Server (NTRS)

    Somers, Larry E.

    1989-01-01

    Currently most computer based simulations rely exclusively on computer generated graphics to create the simulation. When training is involved, the method almost exclusively used to display information to the learner is text displayed on the cathode ray tube. MICROEXPERT Systems is concentrating on broadening the communications bandwidth between the computer and user by employing a novel approach to video image storage combined with sound and voice output. An expert system is used to combine and control the presentation of analog video, sound, and voice output with computer based graphics and text. Researchers are currently involved in the development of several graphics based user interfaces for NASA, the U.S. Army, and the U.S. Navy. Here, the focus is on the human factors considerations, software modules, and hardware components being used to develop these interfaces.

  15. Treecode-based generalized Born method

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao

    2011-02-01

    We have developed a treecode-based O(Nlog N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW surface based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.
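
    For orientation, the sketch below spells out the direct O(N^2) double sum that the treecode approximates in O(N log N), using Still's widely used pairwise GB function; this is a generic GB energy, not the GBr6 R6 integral itself, and the charges, radii, and coordinates are synthetic placeholders.

    ```python
    import numpy as np

    def gb_polarization_energy(q, pos, born_r, eps_in=1.0, eps_out=78.5):
        """Direct O(N^2) generalized Born polarization energy (Still's f_GB).
        q: charges (e); pos: coordinates (Angstrom); born_r: effective radii.
        The i == j terms reduce to the Born self-energies (f_GB = R_i)."""
        e = 0.0
        for i in range(len(q)):
            for j in range(len(q)):
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                f = np.sqrt(r2 + born_r[i] * born_r[j]
                            * np.exp(-r2 / (4.0 * born_r[i] * born_r[j])))
                e += q[i] * q[j] / f
        # 332.06 converts e^2/Angstrom to kcal/mol
        return -0.5 * (1.0 / eps_in - 1.0 / eps_out) * 332.06 * e

    rng = np.random.default_rng(1)
    n = 50
    print(gb_polarization_energy(rng.normal(0.0, 0.3, n),
                                 rng.uniform(0.0, 20.0, (n, 3)),
                                 rng.uniform(1.5, 3.0, n)))
    ```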

  16. Dynamic Multiscale Quantum Mechanics/Electromagnetics Simulation Method.

    PubMed

    Meng, Lingyi; Yam, ChiYung; Koo, SiuKong; Chen, Quan; Wong, Ngai; Chen, GuanHua

    2012-04-10

    A newly developed hybrid quantum mechanics and electromagnetics (QM/EM) method [Yam et al., Phys. Chem. Chem. Phys. 2011, 13, 14365] is generalized to simulate real-time dynamics. Instead of the electric and magnetic fields, the scalar and vector potentials are used to integrate Maxwell's equations in the time domain. The TDDFT-NEGF-EOM method [Zheng et al., Phys. Rev. B 2007, 75, 195127] is employed to simulate the electronic dynamics in the quantum mechanical region. By allowing the penetration of a classical electromagnetic wave into the quantum mechanical region, the electromagnetic wave for the entire simulation region can be determined consistently by solving Maxwell's equations. The transient potential distributions and current density at the interface between the quantum mechanical and classical regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. Charge distribution, current density, and potentials at different temporal steps and spatial scales are integrated seamlessly within a unified computational framework. PMID:26596737

  17. Optical simulation of surface textured TCO using FDTD method

    NASA Astrophysics Data System (ADS)

    Elviyanti, I. L.; Purwanto, H.; Kusumandari

    2016-02-01

    The purpose of this research is to simulate the transmittance of surface-textured transparent conducting oxide (TCO) for dye-sensitized solar cell (DSSC) applications. The simulation, based on the finite-difference time-domain (FDTD) method, was performed using MatLab software for flat and pyramid-textured TCO surfaces. Fluorine-doped tin oxide (FTO) and indium tin oxide (ITO) were used as TCO materials. The simulated transmittance of the flat TCO was compared to UV-Vis spectrophotometer measurements of real TCO to ensure the accuracy of the simulation. The simulated transmittance of the pyramid-textured TCO is higher than that of the flat one, suggesting that surface texturing enhances the path of light through dispersion and reflection by the surface pattern. This result indicates that surface texturing increases the transmittance of TCO through a complex light-trapping mechanism, which might be used to increase light harvesting for DSSC applications.

  18. Assessment of Human Patient Simulation-Based Learning

    PubMed Central

    Schwartz, Catrina R.; Odegard, Peggy Soule; Hammer, Dana P.; Seybert, Amy L.

    2011-01-01

    The most common types of assessment of human patient simulation are satisfaction and/or confidence surveys or tests of knowledge acquisition. There is an urgent need to develop valid, reliable assessment instruments related to simulation-based learning. Assessment practices for simulation-based activities in the pharmacy curricula are highlighted, with a focus on human patient simulation. Examples of simulation-based assessment activities are reviewed according to type of assessment or domain being assessed. Assessment strategies are suggested for faculty members and programs that use simulation-based learning. PMID:22345727

  19. Tools for evaluating team performance in simulation-based training

    PubMed Central

    Rosen, Michael A; Weaver, Sallie J; Lazzara, Elizabeth H; Salas, Eduardo; Wu, Teresa; Silvestri, Salvatore; Schiebel, Nicola; Almeida, Sandra; King, Heidi B

    2010-01-01

    Teamwork training constitutes one of the core approaches for moving healthcare systems toward increased levels of quality and safety, and simulation provides a powerful method of delivering this training, especially for fast-paced and dynamic specialty areas such as Emergency Medicine. Team performance measurement and evaluation plays an integral role in ensuring that simulation-based training for teams (SBTT) is systematic and effective. However, this component of SBTT systems is overlooked frequently. This article addresses this gap by providing a review and practical introduction to the process of developing and implementing evaluation systems in SBTT. First, an overview of team performance evaluation is provided. Second, best practices for measuring team performance in simulation are reviewed. Third, some of the prominent measurement tools in the literature are summarized and discussed relative to the best practices. Subsequently, implications of the review are discussed for the practice of training teamwork in Emergency Medicine. PMID:21063558

  20. Computer-based simulator for catheter insertion training.

    PubMed

    Aloisio, Giovanni; Barone, Luigi; Bergamasco, Massimo; Avizzano, Carlo Alberto; De Paolis, Lucio Tommaso; Franceschini, Marco; Mongelli, Antonio; Pantile, Gianluca; Provenzano, Luciana; Raspolli, Mirko

    2004-01-01

    Minimally invasive surgery procedures are becoming common in surgical practice; however, these new interventional procedures require different skills compared to conventional surgical techniques. A training process is very important in order to successfully and safely execute a surgical procedure. Computer-based simulators, with an appropriate tactile feedback device, can be an efficient method for facilitating the education and training process. In addition, virtual reality surgical simulators can reduce the costs of education and provide realism with regard to tissue behaviour and real-time interaction. This work takes into account the results of the HERMES Project (HEmatology Research virtual MEdical System), conceived and managed by Consorzio CETMA-Research Centre; the aim of this project is to build an integrated system in order to simulate a coronary angioplasty intervention. PMID:15544228

  1. Simulation of parachute FSI using the front tracking method

    NASA Astrophysics Data System (ADS)

    Kim, Joung-Dong; Li, Yan; Li, Xiaolin

    2013-02-01

    We use the front tracking method on a spring system to model the dynamic evolution of parachute canopy and risers. The canopy surface and the riser string chord of a parachute are represented by a triangulated surface mesh with preset equilibrium length on each side of the simplices. The stretching and wrinkling of the canopy and its supporting string chords (risers) are modeled by the spring system. The spring constants of the canopy and the risers are chosen based on the analysis of Young's surface modulus for the canopy fabric and Young's string modulus of the string chord. Damping is added to dissipate the excessive spring internal energy. The current model does not have radial reinforcement cables and has not taken into account the canopy porosity. This mechanical structure is coupled with the incompressible Navier-Stokes solver through the "Impulse Method". We analyzed the numerical stability of the spring system and used this computational module to simulate the flow pattern around a static parachute canopy and the dynamic evolution during the parachute inflation process. The numerical solutions have been compared with the available experimental data and there is good agreement in the terminal descent velocity and breathing frequency of the parachute.
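
    The following is a minimal sketch of the mass-spring force computation described above, for a triangulated mesh with preset equilibrium edge lengths plus a damping term along each edge; the spring and damping constants here are arbitrary placeholders rather than values derived from Young's surface and string moduli as in the paper.

    ```python
    import numpy as np

    def spring_forces(x, v, edges, rest_len, k=500.0, damping=1.0):
        """Internal forces of the mass-spring model: every mesh edge is a
        linear spring about its preset equilibrium length, with damping
        along the edge to dissipate excess spring internal energy."""
        f = np.zeros_like(x)
        for (i, j), L0 in zip(edges, rest_len):
            d = x[j] - x[i]
            L = np.linalg.norm(d)
            u = d / L                              # unit vector from i to j
            rel_v = np.dot(v[j] - v[i], u)         # separation rate
            fij = (k * (L - L0) + damping * rel_v) * u
            f[i] += fij                            # pulls i toward j if stretched
            f[j] -= fij
        return f

    # One slightly stretched triangle:
    x = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.1, 0.0]])
    v = np.zeros_like(x)
    print(spring_forces(x, v, [(0, 1), (1, 2), (0, 2)],
                        rest_len=[1.0, np.sqrt(2.0), 1.0]))
    ```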

  2. Efficient methods and practical guidelines for simulating isotope effects.

    PubMed

    Ceriotti, Michele; Markland, Thomas E

    2013-01-01

    The shift in chemical equilibria due to isotope substitution is frequently exploited to obtain insight into a wide variety of chemical and physical processes. It is a purely quantum mechanical effect, which can be computed exactly using simulations based on the path integral formalism. Here we discuss how these techniques can be made dramatically more efficient, and how they ultimately outperform quasi-harmonic approximations to treat quantum liquids not only in terms of accuracy, but also in terms of computational cost. To achieve this goal we introduce path integral quantum mechanics estimators based on free energy perturbation, which enable the evaluation of isotope effects using only a single path integral molecular dynamics trajectory of the naturally abundant isotope. We use as an example the calculation of the free energy change associated with H/D and (16)O/(18)O substitutions in liquid water, and of the fractionation of those isotopes between the liquid and the vapor phase. In doing so, we demonstrate and discuss quantitatively the relative benefits of each approach, thereby providing a set of guidelines that should facilitate the choice of the most appropriate method in different, commonly encountered scenarios. The efficiency of the estimators we introduce and the analysis that we perform should in particular facilitate accurate ab initio calculation of isotope effects in condensed phase systems. PMID:23298033
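
    As a reminder of the free energy perturbation machinery underlying these estimators, the sketch below evaluates the Zwanzig identity ΔA = -kT ln⟨exp(-ΔU/kT)⟩ on synthetic samples standing in for a single path integral trajectory of the naturally abundant isotope; the unit system and numbers are illustrative only.

    ```python
    import numpy as np

    kB = 0.0019872041  # kcal/(mol K); assumed unit system

    def fep_delta_A(dU, T):
        """Zwanzig free-energy-perturbation estimator:
        dA = -kT ln < exp(-dU/kT) >, where dU = U_new - U_ref is evaluated
        on configurations sampled from the reference ensemble."""
        beta = 1.0 / (kB * T)
        return -np.log(np.mean(np.exp(-beta * dU))) / beta

    # Toy stand-in for the isotope-substitution energy differences collected
    # along one trajectory of the naturally abundant isotope:
    rng = np.random.default_rng(2)
    dU = rng.normal(0.2, 0.1, 100_000)   # kcal/mol
    print(f"dA ~ {fep_delta_A(dU, 300.0):.4f} kcal/mol")
    ```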

  3. Simulating rotationally inelastic collisions using a direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Schullian, O.; Loreau, J.; Vaeck, N.; van der Avoird, A.; Heazlewood, B. R.; Rennick, C. J.; Softley, T. P.

    2015-12-01

    A new approach to simulating rotational cooling using a direct simulation Monte Carlo (DSMC) method is described and applied to the rotational cooling of ammonia seeded into a helium supersonic jet. The method makes use of ab initio rotational state changing cross sections calculated as a function of collision energy. Each particle in the DSMC simulations is labelled with a vector of rotational populations that evolves with time. Transfer of energy into translation is calculated from the mean energy transfer for this population at the specified collision energy. The simulations are compared with a continuum model for the on-axis density, temperature and velocity; rotational temperature as a function of distance from the nozzle is in accord with expectations from experimental measurements. The method could be applied to other types of gas mixture dynamics under non-uniform conditions, such as buffer gas cooling of NH3 by He.

  4. Intercomparison Of Bias-Correction Methods For Monthly Temperature And Precipitation Simulated By Multiple Climate Models

    NASA Astrophysics Data System (ADS)

    Watanabe, S.; Kanae, S.; Seto, S.; Hirabayashi, Y.; Oki, T.

    2012-12-01

    Bias-correction methods applied to monthly temperature and precipitation data simulated by multiple General Circulation Models (GCMs) are evaluated in this study. Although various methods have been proposed recently, an intercomparison among them using multiple GCM simulations has seldom been reported. Here, five previous methods as well as a proposed new method are compared. Before the comparison, we classified the previous methods. The methods proposed in previous studies can be classified into four types based on the following two criteria: 1) whether the statistics (e.g., mean, standard deviation, or coefficient of variation) of the future simulation are used in the bias correction; and 2) whether an estimation of cumulative probability is included in the bias correction. Methods that require future statistics depend on the data in the projection period, while those that do not are independent of it. The proposed classification can characterize each bias-correction method. These methods are applied to temperature and precipitation simulated by 12 GCMs in the Coupled Model Intercomparison Project (CMIP3) archives. Parameters of each method are calibrated using 1948-1972 observed data and validated for the 1974-1998 period. The methods are then applied to GCM future simulations (2073-2097), and the bias-corrected data are intercompared. For the historical simulation, negligible differences are found between observed and bias-corrected data. However, the differences in the future simulation are large, depending on the characteristics of each method. The frequency (probability) that the 2073-2097 bias-corrected data exceed the 95th percentile of the 1948-1972 observed data is estimated in order to evaluate the differences among methods. The difference between the proposed method and one of the previous methods is more than 10% in many areas. The differences in bias-corrected data among the methods are discussed based on their respective characteristics.
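
    As one concrete instance of the class of methods compared here, the sketch below implements plain empirical quantile mapping, a bias correction that estimates cumulative probability (criterion 2) and does not use future-period statistics (criterion 1). It is not the paper's proposed new method, and the gamma-distributed inputs are synthetic placeholders.

    ```python
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_target):
        """Empirical quantile mapping: x -> F_obs^{-1}(F_model_hist(x))."""
        # Cumulative probability of each target value under the
        # historical-model empirical CDF:
        p = np.searchsorted(np.sort(model_hist), model_target) / len(model_hist)
        p = np.clip(p, 0.0, 1.0)
        # Map through the inverse empirical CDF of the observations:
        return np.quantile(obs_hist, p)

    rng = np.random.default_rng(3)
    obs = rng.gamma(2.0, 40.0, 300)      # "observed" monthly precipitation, mm
    model = rng.gamma(2.5, 55.0, 300)    # biased historical simulation
    future = rng.gamma(2.5, 60.0, 12)    # biased future simulation
    print(quantile_map(model, obs, future))
    ```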

  5. Simulation of ultracold plasmas using the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Vrinceanu, D.; Balaraman, G. S.

    2010-03-01

    After creation of an ultracold plasma, the system is far from equilibrium. The electrons equilibrate among themselves and achieve local thermal equilibrium on a time scale of a few nanoseconds. The ions, on the other hand, expand radially due to the thermal pressure exerted by the electrons, on a much slower time scale (microseconds). Molecular dynamics simulation can be used to study the expansion and equilibration of ultracold plasmas; however, a full microsecond simulation is computationally exorbitant. We propose a novel Monte Carlo method for simulating the long-timescale dynamics of a spherically symmetric ultracold plasma cloud [1]. Results from our method for the expansion of the ion plasma size and the electron density distributions show good agreement with molecular dynamics simulations. Our results for the collisionless plasma are in good agreement with the Vlasov equation. Our method is computationally very efficient, taking a few minutes on a desktop to simulate tens of nanoseconds of dynamics of millions of particles. [1] D. Vrinceanu, G. S. Balaraman and L. Collins, "The King model for electrons in a finite-size ultracold plasma," J. Phys. A, 41 425501 (2008)

  6. Rotor dynamic simulation and system identification methods for application to vacuum whirl data

    NASA Technical Reports Server (NTRS)

    Berman, A.; Giansante, N.; Flannelly, W. G.

    1980-01-01

    Methods of using rotor vacuum whirl data to improve the ability to model helicopter rotors were developed. The work consisted of the formulation of the equations of motion of elastic blades on a hub using a Galerkin method; the development of a general computer program for simulation of these equations; the study and implementation of a procedure for determining physical parameters based on measured data; and the application of a method for computing the normal modes and natural frequencies based on test data.

  7. A General Simulation Method for Multiple Bodies in Proximate Flight

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    2003-01-01

    Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.

  8. Fabrication of plasmonic thin films and their characterization by optical method and FDTD simulation technique

    NASA Astrophysics Data System (ADS)

    Kuzma, A.; Uherek, F.; Škriniarová, J.; Pudiš, D.; Weis, M.; Donoval, M.

    2015-08-01

    In this paper we present the optical properties of thin metal films deposited on glass substrates by physical vapor deposition. Localized surface plasmon polaritons for different film thicknesses have been spectrally characterized by optical methods. The presence of Au nanoparticles in the deposited thin films has been demonstrated by Scanning Electron Microscopy (SEM) and Atomic Force Microscopy (AFM), and their dimensions as well as separations have been evaluated. As a first approximation, a simulation model of the deposited nanoparticles, without assuming distributions of their dimensions and separations, was created; this model defines the relation between the nanoparticle dimensions and their separations. The model was simulated with the Finite-Difference Time-Domain (FDTD) method. Pulsed excitation was used, and the transmission of optical radiation was calculated from the spectral response by Fast Fourier Transform (FFT) analysis. Plasmonic extinctions were calculated from both measured and simulated spectral characteristics and compared with each other. The nanoparticle dimensions and separations were evaluated from the agreement between the simulated and experimental spectral characteristics. The surface morphology of the thin metal film was used as input for a detailed simulation study based on the experimental observation of the metal nanoparticle distribution. Hence, this simulation method includes the appropriate coupling effects between nanoparticles and provides more reliable results. The obtained results are helpful for a deeper understanding of the plasmonic properties of thin metal films, and the simulation method is demonstrated to be a powerful tool for deposition technology optimization.

  9. A rainfall simulator based on multifractal generator

    NASA Astrophysics Data System (ADS)

    Akrour, Nawal; Mallet, Cecile; Barthes, Laurent; Chazottes, Aymeric

    2015-04-01

    Examples illustrating the simulator's capabilities will be provided. They show that the simulated two-dimensional fields have statistical properties coherent with the observed ones, in terms of cumulative rain-rate distribution but also in terms of power spectrum and structure function, at different spatial scales (1, 4, and 16 km2), implying that scale features are well represented by the model.

    Keywords: precipitation, multifractal modeling, variogram, structure function, scale invariance, rain intermittency

    References: Akrour, N., Chazottes, A., Verrier, S., Barthes, L., Mallet, C., 2013: Calibrating synthetic multifractal time series with observed data. International Precipitation Conference (IPC 11), Wageningen, The Netherlands, http://www.wageningenur.nl/upload_mm/7/5/e/a72f004a-8e66-445c-bb0b-f489ed0ff0d4_Abstract%20book_TotaalLR-SEC.pdf; Akrour, N., Chazottes, A., Verrier, S., Mallet, C., Barthes, L., 2014: Simulation of yearly rainfall time series at micro-scale resolution with actual properties: intermittency, scale invariance, rainfall distribution, submitted to Water Resources Research (under revision); Schertzer, D., Lovejoy, S., 1987: Physically based rain and cloud modeling by anisotropic, multiplicative turbulent cascades. J. Geophys. Res. 92, 9692-9714; Schleiss, M., Chamoun, S., Berne, A., 2014: Stochastic simulation of intermittent rainfall using the concept of dry drift. Water Resources Research, 50(3), 2329-2349.
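
    A minimal stand-in for the multifractal generator discussed above: a discrete 2D multiplicative cascade with unit-mean lognormal weights, in the spirit of Schertzer and Lovejoy (1987). The number of levels and the weight variance are illustrative, not calibrated to rain observations.

    ```python
    import numpy as np

    def multiplicative_cascade(n_levels=8, sigma=0.4, seed=4):
        """Discrete 2D multiplicative cascade: repeatedly split each cell
        2x2 and multiply by i.i.d. lognormal weights of unit mean."""
        rng = np.random.default_rng(seed)
        field = np.ones((1, 1))
        for _ in range(n_levels):
            field = np.kron(field, np.ones((2, 2)))   # refine the grid
            w = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma,
                              size=field.shape)       # E[w] = 1
            field *= w
        return field

    rain = multiplicative_cascade()                   # 256 x 256 field
    print(rain.shape, rain.mean(), rain.max())        # mean ~ 1, heavy tail
    ```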

  10. Direct simulation Monte Carlo method with a focal mechanism algorithm

    NASA Astrophysics Data System (ADS)

    Rachman, Asep Nur; Chung, Tae Woong; Yoshimoto, Kazuo; Yun, Sukyoung

    2015-01-01

    To simulate observations of the radiation pattern of an earthquake, the direct simulation Monte Carlo (DSMC) method is modified by implementing a focal mechanism algorithm. We compare the results of the modified DSMC method (DSMC-2) with those of the original DSMC method (DSMC-1). DSMC-2 shows more reliable, or similarly reliable, results compared to DSMC-1 for events with 12 or more recording stations, by weighting twice for hypocentral distances of less than 80 km. Not only the number of stations, but also other factors, such as rough topography, the magnitude of the event, and the analysis method, influence the reliability of DSMC-2. The most reliable result from DSMC-2 is obtained with the best azimuthal coverage by the largest number of stations. The DSMC-2 method requires shorter time steps and a larger number of particles than DSMC-1 to capture a sufficient number of arriving particles in the small-sized receiver.

  11. Impact and implementation of simulation-based training for safety.

    PubMed

    Bilotta, Federico F; Werner, Samantha M; Bergese, Sergio D; Rosa, Giovanni

    2013-01-01

    Patient safety is an issue of imminent concern in the high-risk field of medicine, and systematic changes that alter the way medical professionals approach patient care are needed. Simulation-based training (SBT) is an exemplary solution for addressing the dynamic medical environment of today. Grounded in methodologies developed by the aviation industry, SBT exceeds traditional didactic and apprenticeship models in terms of speed of learning, amount of information retained, and capability for deliberate practice. SBT remains an option in many medical schools and continuing medical education curriculums (CMEs), though its use in training has been shown to improve clinical practice. Future simulation-based anesthesiology training research needs to develop methods for measuring both the degree to which training translates into increased practitioner competency and the effect of training on safety improvements for patients. PMID:24311981

  12. Numerical simulations of acoustics problems using the direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Hanford, Amanda Danforth

    In the current study, real gas effects in the propagation of sound waves are simulated using the direct simulation Monte Carlo method for a wide range of systems. This particle method allows treatment of acoustic phenomena over a wide range of Knudsen numbers, defined as the ratio of molecular mean free path to wavelength. Continuum models such as the Euler and Navier-Stokes equations break down for flows with Knudsen numbers greater than approximately 0.05. Continuum models also suffer from the inability to simultaneously model nonequilibrium conditions, diatomic or polyatomic molecules, nonlinearity, and relaxation effects, and are limited in their range of validity. Therefore, direct simulation Monte Carlo is capable of directly simulating acoustic waves with a level of detail not possible with continuum approaches. The basis of direct simulation Monte Carlo lies within kinetic theory, where representative particles are followed as they move and collide with other particles. A parallel, object-oriented DSMC solver was developed for this problem. Despite excellent parallel efficiency, computation time is considerable. Monatomic gases, gases with internal energy, planetary environments, and amplitude effects spanning a large range of Knudsen number have all been modeled with the same method and compared to existing theory. With the direct simulation method, significant deviations from continuum predictions are observed for high Knudsen number flows.

  13. Simulation methods with extended stability for stiff biochemical kinetics

    PubMed Central

    2010-01-01

    Background: With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows.

    Results: In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes.

    Conclusions: The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems. PMID:20701766
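
    For reference, the sketch below shows a basic single-stage Poisson τ-leap update for a toy reversible isomerization A ⇌ B; the paper's contribution is to embed such updates in a Runge-Kutta framework, which is not reproduced here. Rate constants and step size are arbitrary.

    ```python
    import numpy as np

    def tau_leap(x0, c1, c2, tau, n_steps, seed=5):
        """Poisson tau-leaping for A <-> B: each step fires channel j a
        Poisson(a_j(x) * tau) number of times, instead of simulating one
        SSA event at a time."""
        rng = np.random.default_rng(seed)
        a_count, b_count = x0
        for _ in range(n_steps):
            k1 = rng.poisson(c1 * a_count * tau)   # firings of A -> B
            k2 = rng.poisson(c2 * b_count * tau)   # firings of B -> A
            a_count = max(a_count + k2 - k1, 0)    # guard against overshoot
            b_count = max(b_count + k1 - k2, 0)
        return a_count, b_count

    # Equilibrium has c1*A = c2*B, i.e. A ~ 333, B ~ 667 here:
    print(tau_leap((1000, 0), c1=1.0, c2=0.5, tau=0.01, n_steps=1000))
    ```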

  14. Computational methods for coupling microstructural and micromechanical materials response simulations

    SciTech Connect

    Holm, Elizabeth A.; Battaile, Corbett C.; Buchheit, Thomas E.; Fang, Huei Eliot; Rintoul, Mark Daniel; Vedula, Venkata R.; Glass, S. Jill; Knorovsky, Gerald A.; Neilsen, Michael K.; Wellman, Gerald W.; Sulsky, Deborah; Shen, Yu-Lin; Schreyer, H. Buck

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  15. Hybrid Numerical Methods for Multiscale Simulations of Subsurface Biogeochemical Processes

    SciTech Connect

    Scheibe, Timothy D.; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.; Redden, George D.; Meakin, Paul

    2007-08-01

    Many subsurface flow and transport problems of importance today involve coupled non-linear flow, transport, and reaction in media exhibiting complex heterogeneity. In particular, problems involving biological mediation of reactions fall into this class of problems. Recent experimental research has revealed important details about the physical, chemical, and biological mechanisms involved in these processes at a variety of scales ranging from molecular to laboratory scales. However, it has not been practical or possible to translate detailed knowledge at small scales into reliable predictions of field-scale phenomena important for environmental management applications. A large assortment of numerical simulation tools has been developed, each with its own characteristic scale, including molecular (e.g., molecular dynamics), microbial (e.g., cellular automata or particle individual-based models), pore (e.g., lattice-Boltzmann, pore network models, and discrete particle methods such as smoothed particle hydrodynamics) and continuum scales (e.g., traditional partial differential equations solved by finite difference or finite element methods). While many problems can be effectively addressed by one of these models at a single scale, some problems may require explicit integration of models across multiple scales. We are developing a hybrid multi-scale subsurface reactive transport modeling framework that integrates models with diverse representations of physics, chemistry and biology at different scales (sub-pore, pore and continuum). The modeling framework is being designed to take advantage of advanced computational technologies including parallel code components using the Common Component Architecture, parallel solvers, gridding, data and workflow management, and visualization. This paper describes the specific methods/codes being used at each scale, techniques used to directly and adaptively couple across model scales, and preliminary results of application to a

  16. A new lattice Monte Carlo method for simulating dielectric inhomogeneity

    NASA Astrophysics Data System (ADS)

    Duan, Xiaozheng; Wang, Zhen-Gang; Nakamura, Issei

    We present a new lattice Monte Carlo method for simulating systems involving dielectric contrast between different species, by modifying an algorithm originally proposed by Maggs et al. The original algorithm is known to generate attractive interactions between particles whose dielectric constant differs from that of the solvent. Here we show that this attractive force is spurious, arising from an incorrectly biased statistical weight caused by particle motion during the Monte Carlo moves. We propose a new, simple algorithm to resolve this erroneous sampling. We demonstrate the application of our algorithm by simulating an uncharged polymer in a solvent with a different dielectric constant. Further, we show that the electrostatic fields in ionic crystals obtained from our simulations with a relatively small simulation box correspond well with results from the analytical solution. Thus, our Monte Carlo method avoids the need for the Ewald summation used in conventional simulation methods for charged systems. This work was supported by the National Natural Science Foundation of China (21474112 and 21404103). We are grateful to the Computing Center of Jilin Province for essential support.

  17. Operational characteristic analysis of conduction cooling HTS SMES for Real Time Digital Simulator based power quality enhancement simulation

    NASA Astrophysics Data System (ADS)

    Kim, A. R.; Kim, G. H.; Kim, K. M.; Kim, D. W.; Park, M.; Yu, I. K.; Kim, S. H.; Sim, K.; Sohn, M. H.; Seong, K. C.

    2010-11-01

    This paper analyzes the operational characteristics of conduction cooling Superconducting Magnetic Energy Storage (SMES) through a real hardware based simulation. To analyze the operational characteristics, the authors manufactured a small-scale toroidal-type SMES and implemented a Real Time Digital Simulator (RTDS) based power quality enhancement simulation. The method can consider not only electrical characteristics, such as inductance and current, but also temperature characteristics, by using the real SMES system. In order to prove the effectiveness of the proposed method, a voltage sag compensation simulation has been implemented using the RTDS connected with the High Temperature Superconducting (HTS) model coil and DC/DC converter system, and the simulation results are discussed in detail.

  18. Green's Function and Super-Particle Methods for Kinetic Simulation of Heteroepitaxy

    NASA Astrophysics Data System (ADS)

    Lam, Chi-Hang; Lung, M. T.

    Arrays of nanosized three dimensional islands are known to self-assemble spontaneously on strained heteroepitaxial thin films. We simulate the dynamics using kinetic Monte Carlo method based on a ball and spring lattice model. Green's function and super-particle methods which greatly enhance the computational efficiency are explained.

  19. Parallel methods for dynamic simulation of multiple manipulator systems

    NASA Technical Reports Server (NTRS)

    Mcmillan, Scott; Sadayappan, P.; Orin, David E.

    1993-01-01

    In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.

  1. A comparative study of interface reconstruction methods for multi-material ALE simulations

    SciTech Connect

    Kucharik, Milan; Garimella, Rao; Schofield, Samuel; Shashkov, Mikhail

    2009-01-01

    In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells, and the Moment-of-Fluid (MOF) method. We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two, while the solutions with VOF using the wrong material order are considerably worse.

  2. The Co-simulation of Humanoid Robot Based on Solidworks, ADAMS and Simulink

    NASA Astrophysics Data System (ADS)

    Song, Dalei; Zheng, Lidan; Wang, Li; Qi, Weiwei; Li, Yanli

    A simulation method for an adaptive controller is proposed for a humanoid robot system, based on co-simulation with Solidworks, ADAMS, and Simulink. A complex mathematical modeling process is avoided by this method, and the real-time dynamic simulation capability of Simulink is exploited fully. The method could be generalized to other complicated control systems. It is adopted here to build and analyse the model of a humanoid robot, and the trajectory tracking and adaptive controller design also proceed based on it. The quality of trajectory tracking is evaluated by least-squares curve fitting. Comparative analysis shows that the anti-interference capability of the robot is considerably improved.

  3. A virtual reality based simulator for learning nasogastric tube placement.

    PubMed

    Choi, Kup-Sze; He, Xuejian; Chiang, Vico Chung-Lim; Deng, Zhaohong

    2015-02-01

    Nasogastric tube (NGT) placement is a common clinical procedure where a plastic tube is inserted into the stomach through the nostril for feeding or drainage. However, the placement is a blind process in which the tube may be mistakenly inserted into other locations, leading to unexpected complications or fatal incidents. The placement techniques are conventionally acquired by practising on unrealistic rubber mannequins or on humans. In this paper, a virtual reality based training simulation system is proposed to facilitate the training of NGT placement. It focuses on the simulation of tube insertion and the rendering of the feedback forces with a haptic device. A hybrid force model is developed to compute the forces analytically or numerically under different conditions, including the situations when the patient is swallowing or when the tube is buckled at the nostril. To ensure real-time interactive simulations, an offline simulation approach is adopted to obtain the relationship between the insertion depth and insertion force using a non-linear finite element method. The offline dataset is then used to generate real-time feedback forces by interpolation. The virtual training process is logged quantitatively with metrics that can be used for assessing objective performance and tracking progress. The system has been evaluated by nursing professionals. They found that the haptic feeling produced by the simulated forces is similar to their experience during real NGT insertion. The proposed system provides a new educational tool to enhance conventional training in NGT placement. PMID:25546468
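
    A minimal sketch of the offline/online split described above: the expensive non-linear finite element model is evaluated offline to tabulate resistance force against insertion depth, and the haptic loop then interpolates that table in real time. The force-depth curve below is a made-up placeholder, not the paper's FEM data.

    ```python
    import numpy as np

    # Hypothetical offline dataset: insertion depth (cm) vs. force (N),
    # standing in for the curve precomputed with the non-linear FEM model.
    depth_samples = np.linspace(0.0, 55.0, 56)
    force_samples = (0.3 + 0.02 * depth_samples
                     + 0.5 * np.exp(-0.5 * ((depth_samples - 15.0) / 2.0) ** 2))

    def feedback_force(depth_cm):
        """Real-time haptic force: interpolate the offline force-depth
        table instead of solving the FEM model inside the haptics loop."""
        return float(np.interp(depth_cm, depth_samples, force_samples))

    for d in (5.0, 15.0, 30.0):
        print(f"depth {d:4.1f} cm -> force {feedback_force(d):.3f} N")
    ```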

  4. Validation of chemistry models employed in a particle simulation method

    NASA Technical Reports Server (NTRS)

    Haas, Brian L.; Mcdonald, Jeffrey D.

    1991-01-01

    The chemistry models employed in a statistical particle simulation method, as implemented in the Intel iPSC/860 multiprocessor computer, are validated and applied. Chemical relaxation of five-species air in these reservoirs involves 34 simultaneous dissociation, recombination, and atomic-exchange reactions. The reaction rates employed in the analytic solutions are obtained from Arrhenius experimental correlations as functions of temperature for adiabatic gas reservoirs in thermal equilibrium. Favorable agreement with the analytic solutions validates the simulation when applied to relaxation of O2 toward equilibrium in reservoirs dominated by dissociation and recombination, respectively, and when applied to relaxation of air in the temperature range 5000 to 30,000 K. A flow of O2 over a circular cylinder at high Mach number is simulated to demonstrate application of the method to multidimensional reactive flows.

  5. New lattice Boltzmann method for the simulation of three-dimensional radiation transfer in turbid media.

    PubMed

    McHardy, Christopher; Horneber, Tobias; Rauh, Cornelia

    2016-07-25

    Based on the kinetic theory of photons, a new lattice Boltzmann method for the simulation of 3D radiation transport is presented. The method was successfully validated against Monte Carlo simulations of radiation transport in optically thick absorbing and non-absorbing turbid media containing either isotropic or anisotropic scatterers. Moreover, for the approximation of Mie scattering, a new iterative algebraic approach for the discretization of the scattering phase function was developed, ensuring full conservation of energy and asymmetry after discretization. It was found that the main error sources of the method are caused by linearization and ray effects, and suggestions for further improvement of the method are made. PMID:27464152

  6. Energy Efficient Biomolecular Simulations with FPGA-based Reconfigurable Computing

    SciTech Connect

    Hampton, Scott S; Agarwal, Pratul K

    2010-05-01

    Reconfigurable computing (RC) is being investigated as a hardware solution for improving time-to-solution for biomolecular simulations. A number of popular molecular dynamics (MD) codes are used to study various aspects of biomolecules. These codes are now capable of simulating nanosecond time-scale trajectories per day on conventional microprocessor-based hardware, but biomolecular processes often occur at the microsecond time-scale or longer. A wide gap exists between the desired and achievable simulation capability; therefore, there is considerable interest in alternative algorithms and hardware for improving the time-to-solution of MD codes. The fine-grain parallelism provided by Field Programmable Gate Arrays (FPGA) combined with their low power consumption make them an attractive solution for improving the performance of MD simulations. In this work, we use an FPGA-based coprocessor to accelerate the compute-intensive calculations of LAMMPS, a popular MD code, achieving up to 5.5 fold speed-up on the non-bonded force computations of the particle mesh Ewald method and up to 2.2 fold speed-up in overall time-to-solution, and potentially an increase by a factor of 9 in power-performance efficiencies for the pair-wise computations. The results presented here provide an example of the multi-faceted benefits to an application in a heterogeneous computing environment.

  7. Simulation Methods for Teaching the Performance Appraisal Interview.

    ERIC Educational Resources Information Center

    Krayer, Karl J.

    1987-01-01

    Details the steps and some of the rubrics involved in teaching skills for performance appraisal interviewing through classroom simulations. Describes an effective method that maintains interest and enthusiasm among students while exposing them to communication behaviors that are essential for a successful appraisal interview. (NKA)

  8. FETI Methods for the Simulation of Biological Tissues

    PubMed Central

    Augustin, Christoph; Steinbach, Olaf

    2016-01-01

    In this paper we describe the application of finite element tearing and interconnecting methods for the simulation of biological tissues; as a particular application we consider the myocardium. Like most other tissues, this material is characterized by anisotropic and nonlinear behavior. PMID:26925469

  9. The Factorization Method for Simulating Systems with a Complex Action

    NASA Astrophysics Data System (ADS)

    Ambjørn, J.; Anagnostopoulos, K. N.; Nishimura, J.; Verbaarschot, J. J. M.

    2004-04-01

    We propose a method for Monte Carlo simulations of systems with a complex action. The method has the advantage of being applicable, in principle, to any such system, and it provides a solution to the overlap problem. We apply it to the random matrix theory of finite-density QCD, where we compare with analytic results. In this model we find non-commutativity of the limits μ → 0 and N → ∞, which could be of relevance in QCD at finite density.
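
    For context, the sketch below shows the naive phase-reweighting estimator ⟨O⟩ = ⟨O e^{iΓ}⟩₀ / ⟨e^{iΓ}⟩₀ on a toy one-dimensional "model"; the factorization method exists precisely because this estimator degrades badly (the overlap problem) in realistic systems. The action split and observable here are purely illustrative, not the paper's matrix model.

    ```python
    import numpy as np

    # Complex action S = S0 + i*Gamma: sample with weight exp(-S0) and fold
    # the phase into the observable,
    #   <O> = <O exp(i Gamma)>_0 / <exp(i Gamma)>_0.
    rng = np.random.default_rng(6)
    x = rng.normal(0.0, 1.0, 200_000)   # samples of exp(-S0), S0 = x^2 / 2
    gamma = 0.5 * x                     # toy imaginary part of the action
    phase = np.exp(1j * gamma)
    obs = x**2                          # toy observable

    estimate = np.mean(obs * phase) / np.mean(phase)
    print(estimate.real)                # -> 0.75 for this Gaussian toy model
    ```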

  10. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

    2008-06-01

    An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near-continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties, for near-equilibrium flows (DREAM-I), or the instantaneous particle data output by the original unsteady sampling of PDSC, for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks with wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
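
    A minimal sketch of the DREAM-I restart step, assuming the macroscopic temperature and bulk velocity of a cell are known: particle velocities are redrawn from the corresponding Maxwellian so that repeated short runs can be ensemble-averaged from statistically equivalent near-equilibrium states. The function name and the argon-like mass are hypothetical.

    ```python
    import numpy as np

    def maxwellian_restart(n_particles, temperature, bulk_velocity, mass,
                           k_B=1.380649e-23, seed=8):
        """DREAM-I style restart: redraw particle velocities from the
        Maxwellian consistent with the cell's macroscopic properties."""
        rng = np.random.default_rng(seed)
        sigma = np.sqrt(k_B * temperature / mass)   # thermal spread/component
        return bulk_velocity + rng.normal(0.0, sigma, (n_particles, 3))

    v = maxwellian_restart(100_000, 300.0, np.array([500.0, 0.0, 0.0]),
                           mass=6.63e-26)           # argon-like particles
    print(v.mean(axis=0))   # ~ bulk velocity
    print(v.std(axis=0))    # ~ sqrt(k_B * T / m) in each component
    ```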

  11. Simulation on Measurement Method of Geometric Distortion of Telescopes

    NASA Astrophysics Data System (ADS)

    Li, F.; Ren, S. L.

    2015-11-01

    Measuring the geometric distortion is conducive to improving the astrometric accuracy of telescopes, which is meaningful for many disciplines of astronomy, such as stellar clusters, natural satellites, asteroids, comets, and the other celestial bodies in the solar system. For this reason, researchers have developed an iterative self-calibration method to measure the geometric distortion of telescopes by observing a dense star field in the dithering mode, and have achieved many good results. However, the previous work did not constrain the density of the star field or the number of dither positions in the observing mode, but instead chose relatively good conditions to observe, which took up much observing time. In order to explore the validity of the self-calibration method and optimize its observing conditions, it is necessary to carry out the corresponding simulation. Firstly, we introduce the self-calibration method in detail in the present work. Through simulation, the effectiveness of the self-calibration method in deriving the geometric distortion is demonstrated, and the observing conditions, such as the density of the star field and the number of dither positions, are optimized to give the geometric distortion with a high accuracy. Considering the practical application for correcting the geometric distortion, we also analyze the relation between the number of reference stars in the field of view and the astrometric accuracy by means of simulation.

  12. Quantitative and Qualitative Simulation in Computer Based Training.

    ERIC Educational Resources Information Center

    Stevens, Albert; Roberts, Bruce

    1983-01-01

    Computer-based systems combining quantitative simulation with qualitative tutorial techniques provide learners with sophisticated individualized training. The teaching capabilities and operating procedures of Steamer, a simulated steam plant, are described. (Author/MBR)

  13. An Implicit Characteristic Based Method for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Briley, W. Roger

    2001-01-01

    An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.

  14. A Simulation Base Investigation of High Latency Space Systems Operations

    NASA Technical Reports Server (NTRS)

    Li, Zu Qun; Crues, Edwin Z.; Bielski, Paul; Moore, Michael

    2017-01-01

    NASA's human space program has developed considerable experience with near Earth space operations. Although NASA has experience with deep space robotic missions, NASA has little substantive experience with human deep space operations. Even in the Apollo program, the missions lasted only a few weeks and the communication latencies were on the order of seconds. Human missions beyond the relatively close confines of the Earth-Moon system will involve missions with durations measured in months and communications latencies measured in minutes. To minimize crew risk and to maximize mission success, NASA needs to develop a better understanding of the implications of these types of mission durations and communication latencies on vehicle design, mission design and flight controller interaction with the crew. To begin to address these needs, NASA performed a study using a physics-based subsystem simulation to investigate the interactions between spacecraft crew and a ground-based mission control center for vehicle subsystem operations across long communication delays. The simulation, built with a subsystem modeling tool developed at NASA's Johnson Space Center, models the life support system of a Mars transit vehicle. The simulation contains models of the cabin atmosphere and pressure control system, electrical power system, drinking and waste water systems, internal and external thermal control systems, and crew metabolic functions. The simulation has three interfaces: 1) a real-time crew interface that can be used to monitor and control the vehicle subsystems; 2) a mission control center interface with data transport delays up to 15 minutes each way; 3) a real-time simulation test conductor interface that can be used to insert subsystem malfunctions and observe the interactions between the crew, ground, and simulated vehicle. The study was conducted during the 21st NASA Extreme Environment Mission Operations (NEEMO) mission, between July 18 and August 3, 2016.

  15. Crystal level simulations using Eulerian finite element methods

    SciTech Connect

    Becker, R; Barton, N R; Benson, D J

    2004-02-06

    Over the last several years, significant progress has been made in the use of crystal level material models in simulations of forming operations. However, in Lagrangian finite element approaches simulation capabilities are limited in many cases by mesh distortion associated with deformation heterogeneity. Contexts in which such large distortions arise include: bulk deformation to strains approaching or exceeding unity, especially in highly anisotropic or multiphase materials; shear band formation and intersection of shear bands; and indentation with sharp indenters. Investigators have in the past used Eulerian finite element methods with material response determined from crystal aggregates to study steady state forming processes. However, Eulerian and Arbitrary Lagrangian-Eulerian (ALE) finite element methods have not been widely utilized for simulation of transient deformation processes at the crystal level. The advection schemes used in Eulerian and ALE codes control mesh distortion and allow for simulation of much larger total deformations. We will discuss material state representation issues related to advection and will present results from ALE simulations.

  16. Numerical simulation methods for the Rouse model in flow

    NASA Astrophysics Data System (ADS)

    Howard, Michael P.; Milner, Scott T.

    2011-11-01

    Simulation of the Rouse model in flow underlies a great variety of numerical investigations of polymer dynamics, in both entangled melts and solutions and in dilute solution. Typically a simple explicit stochastic Euler method is used to evolve the Rouse model. Here we compare this approach to an operator splitting method, which splits the evolution operator into stochastic linear and deterministic nonlinear parts and takes advantage of an analytical solution for the linear Rouse model in terms of the noise history. We show that this splitting method has second-order weak convergence, whereas the Euler method has only first-order weak convergence. Furthermore, the splitting method is unconditionally stable, in contrast to the limited stability range of the Euler method. Similar splitting methods are applicable to a broad class of problems in stochastic dynamics in which noise competes with ordering and flow to determine steady-state order parameter structures.
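
    The advantage of treating the stochastic linear part analytically can be seen on a single Rouse mode, which obeys an Ornstein-Uhlenbeck process. The sketch below (a toy, not the authors' code) contrasts the explicit Euler update with the exact update that a splitting scheme can use for the linear part; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
tau, sigma, h, n_steps = 1.0, 1.0, 0.2, 100_000  # illustrative parameters
x_euler = x_exact = 0.0
samples_euler, samples_exact = [], []

for _ in range(n_steps):
    xi1, xi2 = rng.standard_normal(2)
    # Explicit Euler: first-order weak accuracy, unstable for h > 2*tau
    x_euler += -x_euler * h / tau + sigma * np.sqrt(h) * xi1
    # Exact update of the linear (Ornstein-Uhlenbeck) part, the kind of
    # analytical solution a splitting method exploits for the linear modes
    decay = np.exp(-h / tau)
    x_exact = x_exact * decay + sigma * np.sqrt(tau / 2 * (1 - decay**2)) * xi2
    samples_euler.append(x_euler)
    samples_exact.append(x_exact)

print("stationary variance, exact update:", np.var(samples_exact))  # ~0.5
print("stationary variance, Euler       :", np.var(samples_euler))  # ~0.56, biased
```

    At this step size the Euler variance is biased high by about 11 percent, while the exact linear update reproduces the stationary variance sigma^2*tau/2 at any stable step size, which is the essence of the splitting method's advantage.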

  17. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory and its run time increases to the point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort made to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.

  18. The impact of cloud vertical profile on liquid water path retrieval based on the bispectral method: A theoretical study based on large-eddy simulations of shallow marine boundary layer clouds

    NASA Astrophysics Data System (ADS)

    Miller, Daniel J.; Zhang, Zhibo; Ackerman, Andrew S.; Platnick, Steven; Baum, Bryan A.

    2016-04-01

    Passive optical retrievals of cloud liquid water path (LWP), like those implemented for Moderate Resolution Imaging Spectroradiometer (MODIS), rely on cloud vertical profile assumptions to relate optical thickness (τ) and effective radius (re) retrievals to LWP. These techniques typically assume that shallow clouds are vertically homogeneous; however, an adiabatic cloud model is plausibly more realistic for shallow marine boundary layer cloud regimes. In this study a satellite retrieval simulator is used to perform MODIS-like satellite retrievals, which in turn are compared directly to the large-eddy simulation (LES) output. This satellite simulator creates a framework for rigorous quantification of the impact that vertical profile features have on LWP retrievals, and it accomplishes this while also avoiding sources of bias present in previous observational studies. The cloud vertical profiles from the LES are often more complex than either of the two standard assumptions, and the favored assumption was found to be sensitive to cloud regime (cumuliform/stratiform). Confirming previous studies, drizzle and cloud top entrainment of dry air are identified as physical features that bias LWP retrievals away from adiabatic and toward homogeneous assumptions. The mean bias induced by drizzle-influenced profiles was shown to be on the order of 5-10 g/m2. In contrast, the influence of cloud top entrainment was found to be smaller by about a factor of 2. A theoretical framework is developed to explain variability in LWP retrievals by introducing modifications to the adiabatic re profile. In addition to analyzing bispectral retrievals, we also compare results with the vertical profile sensitivity of passive polarimetric retrieval techniques.

  19. A real-time infrared imaging simulation method with physical effects modeling of infrared sensors

    NASA Astrophysics Data System (ADS)

    Li, Ni; Huai, Wenqing; Wang, Shaodan; Ren, Lei

    2016-09-01

    Infrared imaging simulation technology can provide infrared data sources for the development, improvement and evaluation of infrared imaging systems under different environment, status and weather conditions, and is reusable and more economical than physical experiments. A real-time infrared imaging simulation process is established to reproduce a complete physical imaging process. Our emphasis is on the modeling of infrared sensors, involving physical effects in both the spatial domain and the frequency domain. An improved image convolution method based on GPU parallel processing is proposed to enhance the real-time simulation capability while ensuring simulation accuracy. Finally, the effectiveness of the above methods is validated by simulation analysis and result comparison.

  20. System and Method for Finite Element Simulation of Helicopter Turbulence

    NASA Technical Reports Server (NTRS)

    McFarland, R. E. (Inventor); Dulsenberg, Ken (Inventor)

    1999-01-01

    The present invention provides a turbulence model that has been developed for blade-element helicopter simulation. This model uses an innovative temporal and geometrical distribution algorithm that preserves the statistical characteristics of the turbulence spectra over the rotor disc, while providing velocity components in real time to each of five blade-element stations along each of four blades, for a total of twenty blade-element stations. The simulator system includes a software implementation of flight dynamics that adheres to the guidelines for turbulence set forth in military specifications. One of the features of the present simulator system is that it applies simulated turbulence to the rotor blades of the helicopter, rather than to its center of gravity. The simulator system accurately models the rotor penetration into a gust field. It includes time correlation between the front and rear of the main rotor, as well as between the side forces felt at the center of gravity and at the tail rotor. It also includes features for added realism, such as patchy turbulence and vertical gusts into which the rotor disc penetrates. These features are realized by a unique real-time implementation of the turbulence filters. The new simulator system uses two arrays, one on either side of the main rotor, to record the turbulence field and to produce time correlation from the front to the rear of the rotor disc. The use of Gaussian interpolation between the two arrays maintains the statistical properties of the turbulence across the rotor disc. The present simulator system and method may be used in future and existing real-time helicopter simulations with minimal increase in computational workload.

  1. PNS and statistical experiments simulation in subcritical systems using Monte-Carlo method on example of Yalina-Thermal assembly

    NASA Astrophysics Data System (ADS)

    Sadovich, Sergey; Talamo, A.; Burnos, V.; Kiyavitskaya, H.; Fokov, Yu.

    2014-06-01

    In subcritical systems driven by an external neutron source, experimental methods based on the pulsed neutron source and statistical techniques play an important role in reactivity measurement. Simulating these methods is a very time-consuming procedure, so several improvements to the neutronic calculations have been made for simulations in Monte Carlo programs. This paper introduces a new method for simulating PNS and statistical measurements. In this method all events occurring in the detector during the simulation are stored in a file using the PTRAC feature of MCNP. After that, PNS and statistical methods can be simulated with a special post-processing code. Additionally, different neutron pulse shapes and lengths, as well as detector dead times, can be included in the simulation. The methods described above were tested on the subcritical assembly Yalina-Thermal, located at the Joint Institute for Power and Nuclear Research SOSNY, Minsk, Belarus. Good agreement between experimental and simulated results was shown.

  2. Simulation of ground motion using the stochastic method

    USGS Publications Warehouse

    Boore, D.M.

    2003-01-01

    A simple and powerful method for simulating ground motions is to combine parametric or functional descriptions of the ground motion's amplitude spectrum with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to the distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers (generally, f>0.1 Hz), and it is widely used to predict ground motions for regions of the world in which recordings of motion from potentially damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude and in diverse tectonic environments. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms. This provides a means by which the results of the rigorous studies reported in other papers in this volume can be incorporated into practical predictions of ground motion.
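
    A bare-bones version of the stochastic method's signal generation, windowed Gaussian noise whose Fourier amplitudes are replaced by a target spectrum while the random phase is kept, can be sketched as follows. The omega-squared source shape is a standard choice, but every parameter value here is illustrative rather than taken from this paper.

```python
import numpy as np

dt, n = 0.01, 4096                      # sample interval (s), record length
t = np.arange(n) * dt
rng = np.random.default_rng(1)

# 1. Gaussian white noise shaped by a simple duration window
t_peak = 2.0                            # window peak time (s), illustrative
window = (t / t_peak) * np.exp(1.0 - t / t_peak)
signal = rng.standard_normal(n) * window

# 2. Impose a target amplitude spectrum, keeping only the random phase
spec = np.fft.rfft(signal)
f = np.fft.rfftfreq(n, dt)
fc = 1.0                                # corner frequency (Hz), illustrative
target = f**2 / (1.0 + (f / fc) ** 2)   # omega-squared source shape
spec = target * np.exp(1j * np.angle(spec))

# 3. Transform back: one realization of simulated ground acceleration
acc = np.fft.irfft(spec, n)
print("peak simulated amplitude (arbitrary units):", np.abs(acc).max())
```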

  3. Agent-based simulation of building evacuation using a grid graph-based model

    NASA Astrophysics Data System (ADS)

    Tan, L.; Lin, H.; Hu, M.; Che, W.

    2014-02-01

    Shifting from macroscopic models to microscopic models, the agent-based approach has been widely used to model crowd evacuation as more attention is paid to individualized behaviour. Since indoor evacuation behaviour is closely related to the spatial features of the building, effective representation of indoor space is essential for the simulation of building evacuation. The traditional cell-based representation has limitations in reflecting spatial structure and is not suitable for topology analysis. Aiming at incorporating the powerful topology analysis functions of GIS to facilitate agent-based simulation of building evacuation, we used a grid graph-based model in this study to represent the indoor space. Such a model allows us to establish an evacuation network at a micro level. Potential escape routes from each node can thus be analysed through GIS network analysis functions, considering both the spatial structure and route capacity. This better supports agent-based modelling of evacuees' behaviour, including route choice and local movements. As a case study, we conducted a simulation of emergency evacuation from the second floor of an office building using Agent Analyst as the simulation platform. The results demonstrate the feasibility of the proposed method, as well as the potential of GIS in visualizing and analysing simulation results.
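
    The route-choice step described above, network analysis over a grid graph of the indoor space, can be mimicked with a generic graph library. The grid size, obstacle cells, and exit location below are invented for illustration and are not from the study.

```python
import networkx as nx

# 10 x 10 grid graph standing in for walkable floor cells
G = nx.grid_2d_graph(10, 10)

# Remove cells blocked by walls or furniture (illustrative obstacles)
for cell in [(4, y) for y in range(1, 9)]:
    G.remove_node(cell)

exit_cell = (9, 9)

# Potential escape route for an agent at (0, 0): shortest path on the
# grid graph, the role network analysis plays in the study above
route = nx.shortest_path(G, source=(0, 0), target=exit_cell)
print("route length:", len(route) - 1)
print("first steps :", route[:5])
```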

  4. Simulation of 3D tumor cell growth using nonlinear finite element method.

    PubMed

    Dong, Shoubing; Yan, Yannan; Tang, Liqun; Meng, Junping; Jiang, Yi

    2016-06-01

    We propose a novel parallel computing framework for a nonlinear finite element method (FEM)-based cell model and apply it to simulate avascular tumor growth. We derive computation formulas to simplify the simulation and design the basic algorithms. With the increment of the proliferation generations of tumor cells, the FEM elements may become larger and more distorted. Then, we describe a remesh and refinement processing of the distorted or over large finite elements and the parallel implementation based on Message Passing Interface to improve the accuracy and efficiency of the simulation. We demonstrate the feasibility and effectiveness of the FEM model and the parallelization methods in simulations of early tumor growth. PMID:26213205

  5. Simulation: A Complementary Method for Teaching Health Services Strategic Management

    PubMed Central

    Reddick, W. T.

    1990-01-01

    Rapid change in the health care environment mandates a more comprehensive approach to the education of future health administrators. The area of consideration in this study is that of health care strategic management. A comprehensive literature review suggests microcomputer-based simulation as an appropriate vehicle for addressing the needs of both educators and students. Seven strategic management software packages are reviewed and rated with an instrument adapted from the Infoworld review format. The author concludes that a primary concern is the paucity of health care specific strategic management simulations.

  6. Applications of large eddy simulation methods to gyrokinetic turbulence

    SciTech Connect

    Bañón Navarro, A.; Happel, T.; Teaca, B.; Jenko, F.; Hammett, G. W.; Collaboration: ASDEX Upgrade Team

    2014-03-15

    The large eddy simulation (LES) approach—solving numerically the large scales of a turbulent system and accounting for the small-scale influence through a model—is applied to nonlinear gyrokinetic systems that are driven by a number of different microinstabilities. Comparisons between modeled, lower resolution, and higher resolution simulations are performed for an experimental measurable quantity, the electron density fluctuation spectrum. Moreover, the validation and applicability of LES is demonstrated through a series of diagnostics based on the free energetics of the system.

  7. Parameter Studies, time-dependent simulations and design with automated Cartesian methods

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael

    2005-01-01

    Over the past decade, NASA has made a substantial investment in developing adaptive Cartesian grid methods for aerodynamic simulation. Cartesian-based methods played a key role in both the Space Shuttle Accident Investigation and in NASA's return to flight activities. The talk will provide an overview of recent technological developments focusing on the generation of large-scale aerodynamic databases, automated CAD-based design, and time-dependent simulations of bodies in relative motion. Automation, scalability and robustness underlie all of these applications, and research in each of these topics will be presented.

  8. Applying dynamic simulation modeling methods in health care delivery research-the SIMULATE checklist: report of the ISPOR simulation modeling emerging good practices task force.

    PubMed

    Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Osgood, Nathaniel D; Padula, William V; Higashi, Mitchell K; Wong, Peter K; Pasupathy, Kalyan S; Crown, William

    2015-01-01

    Health care delivery systems are inherently complex, consisting of multiple tiers of interdependent subsystems and processes that are adaptive to changes in the environment and behave in a nonlinear fashion. Traditional health technology assessment and modeling methods often neglect the wider health system impacts that can be critical for achieving desired health system goals and are often of limited usefulness when applied to complex health systems. Researchers and health care decision makers can either underestimate or fail to consider the interactions among the people, processes, technology, and facility designs. Health care delivery system interventions need to incorporate the dynamics and complexities of the health care system context in which the intervention is delivered. This report provides an overview of common dynamic simulation modeling methods and examples of health care system interventions in which such methods could be useful. Three dynamic simulation modeling methods are presented to evaluate system interventions for health care delivery: system dynamics, discrete event simulation, and agent-based modeling. In contrast to conventional evaluations, a dynamic systems approach incorporates the complexity of the system and anticipates the upstream and downstream consequences of changes in complex health care delivery systems. This report assists researchers and decision makers in deciding whether these simulation methods are appropriate to address specific health system problems through an eight-point checklist referred to as the SIMULATE (System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence) tool. It is a primer for researchers and decision makers working in health care delivery and implementation sciences who face complex challenges in delivering effective and efficient care that can be addressed with system interventions. On reviewing this report, readers should be able to identify whether these simulation modeling methods are appropriate for the problem at hand.

  9. Prediction of plasma simulation data with the Gaussian process method

    SciTech Connect

    Preuss, R.; Toussaint, U. von

    2014-12-05

    The simulation of plasma-wall interactions of fusion plasmas is extremely costly in computer power and time - the running time for a single parameter setting is easily on the order of weeks or months. We propose to exploit the already gathered results in order to predict the outcome of parametric studies within the high-dimensional parameter space. For this we utilize Gaussian processes within the Bayesian framework and perform validation with one- and two-dimensional test cases, from which we learn how to assess the outcome. Finally, the newly implemented method is applied to simulated data from the scrape-off layer of a fusion plasma. Uncertainties of the predictions are provided, which point the way to parameter settings for further (expensive) simulations.
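
    As a schematic of the approach, learn a surrogate from already-computed runs and then predict untried parameter settings with uncertainty, consider the toy below. It uses scikit-learn rather than the authors' Bayesian implementation, and the "simulator" is a stand-in function, not a plasma code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulation(x):
    """Stand-in for a weeks-long plasma-wall simulation."""
    return np.sin(3 * x) + 0.5 * x

# A handful of already-gathered results at scattered parameter settings
X_train = np.array([[0.1], [0.4], [0.9], [1.3], [1.8]])
y_train = expensive_simulation(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# Predict the outcome at an untried setting; the uncertainty flags where
# further (expensive) runs would be most informative
X_new = np.array([[1.0]])
mean, std = gp.predict(X_new, return_std=True)
print(f"predicted {mean[0]:.3f} +/- {std[0]:.3f}; true {expensive_simulation(1.0):.3f}")
```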

  10. Simulation of an array-based neural net model

    NASA Technical Reports Server (NTRS)

    Barnden, John A.

    1987-01-01

    Research in cognitive science suggests that much of cognition involves the rapid manipulation of complex data structures. However, it is very unclear how this could be realized in neural networks or connectionist systems. A core question is: how could the interconnectivity of items in an abstract-level data structure be neurally encoded? The answer appeals mainly to positional relationships between activity patterns within neural arrays, rather than directly to neural connections in the traditional way. The new method was initially devised to account for abstract symbolic data structures, but it also supports cognitively useful spatial analogue, image-like representations. As the neural model is based on massive, uniform, parallel computations over 2D arrays, the massively parallel processor is a convenient tool for simulation work, although there are complications in using the machine to the fullest advantage. An MPP Pascal simulation program for a small pilot version of the model is running.

  11. Momentum-exchange method in lattice Boltzmann simulations of particle-fluid interactions

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Cai, Qingdong; Xia, Zhenhua; Wang, Moran; Chen, Shiyi

    2013-07-01

    The momentum exchange method has been widely used in lattice Boltzmann simulations of particle-fluid interactions. Although it has proved accurate for stationary walls, it results in inaccurate particle dynamics unless corrected. In this work, we reveal the physical cause of this problem and find that the initial momentum of the net mass transfer through boundaries in the moving-boundary treatment is not counted in the conventional momentum exchange method. A corrected momentum exchange method is then proposed by taking into account the initial momentum of the net mass transfer at each time step. The method is easy to implement, with negligible extra computational cost. Direct numerical simulations of the sedimentation of a single elliptical particle are carried out to evaluate the accuracy of our method as well as other lattice Boltzmann-based methods by comparison with results from the finite element method. A shear flow test shows that our method is Galilean invariant.
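
    A minimal sketch of the two force evaluations may help. For each boundary link, the conventional momentum exchange sums the momentum carried by the outgoing and bounced-back populations; the Galilean-invariant correction measures that momentum relative to the wall velocity, which subtracts the initial momentum carried by the net mass transfer. Sign conventions vary between implementations, so treat this as illustrative rather than a verbatim transcription of the paper's formulas.

```python
import numpy as np

def boundary_force(links, u_wall):
    """Momentum-exchange force on a wall from a set of boundary links.

    links: list of (f_out, f_back, c) with f_out the post-collision
    distribution leaving the fluid along lattice vector c and f_back
    the distribution bounced back along -c (lattice units).
    """
    F_conventional = np.zeros(2)
    F_corrected = np.zeros(2)
    for f_out, f_back, c in links:
        c = np.asarray(c, dtype=float)
        # Conventional MEM: momentum carried by the two link populations
        F_conventional += f_out * c - f_back * (-c)
        # Correction: measure momentum relative to the moving wall, i.e.
        # subtract the momentum u_wall carried by the net mass transfer
        # (f_out - f_back), in the spirit of the work above
        F_corrected += f_out * (c - u_wall) - f_back * (-c - u_wall)
    return F_conventional, F_corrected

# One illustrative link on a D2Q9 lattice, wall moving with u = (0.05, 0)
links = [(0.02, 0.018, (1, 0))]
print(boundary_force(links, u_wall=np.array([0.05, 0.0])))
```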

  12. Auditorium acoustics evaluation based on simulated impulse response

    NASA Astrophysics Data System (ADS)

    Wu, Shuoxian; Wang, Hongwei; Zhao, Yuezhe

    2001-05-01

    The impulse responses and other acoustical parameters of the Huangpu Teenager Palace in Guangzhou were measured. Acoustical simulation and auralization based on the ODEON software were also carried out. A comparison between the simulated and measured parameters is given. This case study shows that the auralization technique based on computer simulation can be used to predict the acoustical quality of a hall at its design stage.

  13. Validation of Ultrafilter Performance Model Based on Systematic Simulant Evaluation

    SciTech Connect

    Russell, Renee L.; Billing, Justin M.; Smith, Harry D.; Peterson, Reid A.

    2009-11-18

    Because of limited availability of test data with actual Hanford tank waste samples, a method was developed to estimate expected filtration performance based on physical characterization data for the Hanford Tank Waste Treatment and Immobilization Plant. A test with simulated waste was analyzed to demonstrate that filtration of this class of waste is consistent with a concentration polarization model. Subsequently, filtration data from actual waste samples were analyzed to demonstrate that centrifuged solids concentrations provide a reasonable estimate of the limiting concentration for filtration.

  14. A demonstration device to simulate the radial velocity method for exoplanet detection

    NASA Astrophysics Data System (ADS)

    Choopan, W.; Liewrian, W.; Ketpichainarong, W.; Panijpan, B.

    2016-07-01

    A device for simulating exoplanet detection by the radial velocity method based on the Doppler principle has been constructed. The spectral shift of light from a distant star, mutually revolving with the exoplanet, is simulated by the spectral shift of the sound wave emitted by the device’s star as it approaches and recedes relative to a static frequency detector. The detected sound frequency shift reflects the relative velocity of the ‘star’ very well. Both teachers and students benefit from the demonstrations of the radial velocity method and the transit method (published by us previously) provided by this device.

  15. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
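
    The adapt-until-tolerance strategy, estimate a local error for a selected output, refine the worst cells, and repeat, is easy to sketch on a one-dimensional toy. The interpolation-error indicator here is a crude stand-in for the adjoint-weighted residual estimate used above; only the control loop is analogous.

```python
import numpy as np

f = lambda x: np.tanh(20 * (x - 0.5))           # solution with a sharp layer

def cell_error(a, b):
    """Toy indicator: interpolation error of f at the cell midpoint."""
    return abs(f(0.5 * (a + b)) - 0.5 * (f(a) + f(b)))

edges = list(np.linspace(0.0, 1.0, 5))          # coarse initial mesh
tol = 1e-3
while True:
    errs = [cell_error(a, b) for a, b in zip(edges[:-1], edges[1:])]
    if max(errs) < tol:
        break
    # Refine every cell whose indicator exceeds the tolerance
    new_edges = []
    for (a, b), e in zip(zip(edges[:-1], edges[1:]), errs):
        new_edges.append(a)
        if e >= tol:
            new_edges.append(0.5 * (a + b))
    new_edges.append(edges[-1])
    edges = new_edges

print("adapted mesh size:", len(edges) - 1)     # cells cluster in the layer
```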

  16. Simulation Of A Photofission-Based Cargo Interrogation System

    SciTech Connect

    King, Michael; Gozani, Tsahi; Stevenson, John; Shaw, Timothy

    2011-06-01

    A comprehensive model has been developed to characterize and optimize the detection of Bremsstrahlung x-ray induced fission signatures from nuclear materials hidden in cargo containers. An effective active interrogation system should not only induce a large number of fission events but also efficiently detect their signatures. The proposed scanning system utilizes a 9-MV commercially available linear accelerator and the detection of strong fission signals, i.e. delayed gamma rays and prompt neutrons. Because the scanning system is complex and the cargo containers are large and often highly attenuating, the simulation method segments the model into several physical steps, representing each change of radiation particle. Each approximation is carried out separately, resulting in a major reduction in computational time and a significant improvement in tally statistics. The model investigates the effect of various cargo types, densities and distributions on the fission rate and detection rate. Hydrogenous and metallic cargos, homogeneous and heterogeneous, as well as various locations of the nuclear material inside the cargo container were studied. We will show that for the photofission-based interrogation system simulation, the final results are not only in good agreement with a full, single-step simulation but also with experimental results, further validating the full-system simulation.

  17. Flood frequency estimation by hydrological continuous simulation and classical methods

    NASA Astrophysics Data System (ADS)

    Brocca, L.; Camici, S.; Melone, F.; Moramarco, T.; Tarpanelli, A.

    2009-04-01

    In recent years, the effects of flood damages have motivated the development of new complex methodologies for the simulation of the hydrologic/hydraulic behaviour of river systems, which are fundamental to guide territorial planning as well as floodplain management and risk analysis. The evaluation of flood-prone areas can be carried out through various procedures that are usually based on the estimation of the peak discharge for an assigned probability of exceedance. In the case of ungauged or scarcely gauged catchments this is not straightforward, as the limited availability of historical peak flow data introduces considerable uncertainty into the flood frequency analysis. A possible solution to overcome this problem is the application of hydrological simulation studies in order to generate long synthetic discharge time series. For this purpose, new methodologies based on the stochastic generation of rainfall and temperature data have recently been proposed. The inferred information can be used as input for a continuous hydrological model to generate a synthetic time series of peak river flow and, hence, the flood frequency distribution at a given site. In this study stochastic rainfall data have been generated via the Neyman-Scott Rectangular Pulses (NSRP) model, which is characterized by a flexible structure in which the model parameters broadly relate to underlying physical features observed in rainfall fields, and which is capable of preserving the statistical properties of a rainfall time series over a range of time scales. The peak river flow time series have been generated through a continuous hydrological model aimed at flood prediction and developed for the purpose, hereinafter named MISDc (Brocca et al., 2008).
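
    The NSRP structure described above, Poisson storm origins each spawning a random number of rain cells that are displaced in time and carry rectangular intensity pulses, can be sketched directly. All rate and scale parameters below are invented for illustration; calibrated values would come from fitting observed rainfall statistics, as done in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def nsrp_rainfall(t_end=720.0, dt=1.0, storm_rate=0.01,
                  cells_per_storm=5.0, beta=0.2, eta=0.5, mean_int=2.0):
    """Toy Neyman-Scott Rectangular Pulses rainfall series (mm/h).

    storm_rate      Poisson rate of storm origins (1/h)
    cells_per_storm mean Poisson number of cells per storm
    beta            rate of exponential cell displacement after origin (1/h)
    eta             rate of exponential cell duration (1/h)
    mean_int        mean exponential cell intensity (mm/h)
    """
    t = np.arange(0.0, t_end, dt)
    rain = np.zeros_like(t)
    n_storms = rng.poisson(storm_rate * t_end)
    for origin in rng.uniform(0.0, t_end, n_storms):
        for _ in range(rng.poisson(cells_per_storm)):
            start = origin + rng.exponential(1.0 / beta)
            dur = rng.exponential(1.0 / eta)
            inten = rng.exponential(mean_int)
            rain[(t >= start) & (t < start + dur)] += inten  # rectangular pulse
    return t, rain

t, rain = nsrp_rainfall()
print(f"mean intensity {rain.mean():.2f} mm/h, wet fraction {np.mean(rain > 0):.2f}")
```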

  18. Distributed adaptive simulation through standards-based integration of simulators and adaptive learning systems.

    PubMed

    Bergeron, Bryan; Cline, Andrew; Shipley, Jaime

    2012-01-01

    We have developed a distributed, standards-based architecture that enables simulation and simulator designers to leverage adaptive learning systems. Our approach, which incorporates an electronic competency record, open source LMS, and open source microcontroller hardware, is a low-cost, pragmatic option to integrating simulators with traditional courseware. PMID:22356955

  19. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    SciTech Connect

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.
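
    Stochastic expansions represent a response in orthogonal polynomials of the random inputs, and the convergence-with-expansion-order behavior studied in the report can be reproduced on a one-variable toy using probabilists' Hermite polynomials. The test function is illustrative and unrelated to the Sandia application; this is a hand-rolled sketch, not DAKOTA itself.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi, e

g = np.exp                                # toy response of a N(0,1) input
nodes, weights = He.hermegauss(40)        # Gauss-Hermite(e) quadrature
weights = weights / sqrt(2 * pi)          # normalize to N(0,1) expectations

for order in (1, 2, 4, 8):
    # c_n = E[g(x) He_n(x)] / n!  (He_n are orthogonal with norm n!)
    coeffs = [(weights * g(nodes) * He.hermeval(nodes, [0] * n + [1])).sum()
              / factorial(n) for n in range(order + 1)]
    mean = coeffs[0]
    var = sum(c * c * factorial(n) for n, c in enumerate(coeffs) if n > 0)
    print(f"order {order}: mean {mean:.5f}  var {var:.5f}")

# Exact values for g = exp: mean e^0.5, variance e(e - 1)
print(f"exact   : mean {e**0.5:.5f}  var {e * (e - 1):.5f}")
```

    The estimated mean is exact at every order (it is the zeroth coefficient), while the variance converges rapidly as the expansion order grows, the kind of behavior the report examines for response statistics at varying probability levels.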

  20. Numerical simulation of seismic wave propagation produced by earthquake by using a particle method

    NASA Astrophysics Data System (ADS)

    Takekawa, Junichi; Madariaga, Raul; Mikada, Hitoshi; Goto, Tada-nori

    2012-12-01

    We propose a forward wavefield simulation based on a particle continuum model to simulate seismic waves travelling through a complex subsurface structure with arbitrary topography. The inclusion of arbitrary topography in the numerical simulation is a key issue not only for scientific interests but also for disaster prediction and mitigation purposes. In this study, a Hamiltonian particle method (HPM) is employed. It is easy to introduce traction-free boundary conditions in HPM and to refine the particle density in space. Any model with complex geometry and velocity structure can be simulated by HPM because the connectivity between particles is easily calculated based on their relative positions and the free surfaces are automatically introduced. In addition, the spatial resolution of the simulation can be refined in a simple manner even in a relatively complex velocity structure with arbitrary surface topography. For these reasons, the present method possesses great potential for the simulation of strong ground motions. In this paper, we first investigate the dispersion property of HPM through a plane wave analysis. Next, we simulate surface wave propagation in an elastic half space, and compare the numerical results with analytical solutions. HPM is more dispersive than FDM; however, our local refinement technique shows accuracy improvements in a simple and effective manner. Next, we introduce an earthquake double-couple source in HPM and compare a simulated seismic waveform obtained with HPM with that computed with FDM to demonstrate the performance of the method. Furthermore, we simulate the surface wave propagation in a model with a surface of arbitrary topographical shape and compare with results computed with FEM. In each simulation, HPM shows good agreement with the reference solutions. Finally, we discuss the calculation costs of HPM including its accuracy.

  1. Use of simulated data sets to evaluate the fidelity of metagenomic processing methods

    SciTech Connect

    Mavromatis, K; Ivanova, N; Barry, Kerrie; Shapiro, Harris; Goltsman, Eugene; McHardy, Alice C.; Rigoutsos, Isidore; Salamov, Asaf; Korzeniewski, Frank; Land, Miriam L; Lapidus, Alla L.; Grigoriev, Igor; Hugenholtz, Philip; Kyrpides, Nikos C

    2007-01-01

    Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene-finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (blast hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.

  2. Use of simulated data sets to evaluate the fidelity of metagenomic processing methods

    SciTech Connect

    Mavromatis, Konstantinos; Ivanova, Natalia; Barry, Kerri; Shapiro, Harris; Goltsman, Eugene; McHardy, Alice C.; Rigoutsos, Isidore; Salamov, Asaf; Korzeniewski, Frank; Land, Miriam; Lapidus, Alla; Grigoriev, Igor; Richardson, Paul; Hugenholtz, Philip; Kyrpides, Nikos C.

    2006-12-01

    Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene-finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (blast hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.

  3. Lattice Boltzmann Method for Spacecraft Propellant Slosh Simulation

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Powers, Joseph F.; Yang, Hong Q.

    2015-01-01

    A scalable computational approach to the simulation of propellant tank sloshing dynamics in microgravity is presented. In this work, we use the lattice Boltzmann equation (LBE) to approximate the behavior of two-phase, single-component isothermal flows at very low Bond numbers. Through the use of a non-ideal gas equation of state and a modified multiple relaxation time (MRT) collision operator, the proposed method can simulate thermodynamically consistent phase transitions at temperatures and density ratios consistent with typical spacecraft cryogenic propellants, for example, liquid oxygen. Determination of the tank forces and moments relies upon the global momentum conservation of the fluid domain, and a parametric wall wetting model allows tuning of the free surface contact angle. Development of the interface is implicit and no interface tracking approach is required. Numerical examples illustrate the method's application to predicting bulk fluid motion including lateral propellant slosh in low-g conditions.

  4. Methods for variance reduction in Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is demonstrated using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
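
    Of the three techniques, quasi-random sampling is the easiest to demonstrate in isolation: low-discrepancy points cover the domain more evenly than pseudo-random ones, reducing variance for the same sample count. The pi-estimation toy below is purely illustrative and unrelated to the photon-transport code above.

```python
import numpy as np
from scipy.stats import qmc

n = 2**14                                   # power of 2 suits Sobol sequences
rng = np.random.default_rng(2)

def estimate_pi(pts):
    """Fraction of 2-D points inside the quarter circle, times 4."""
    return 4.0 * np.sum(pts[:, 0]**2 + pts[:, 1]**2 <= 1.0) / len(pts)

pseudo = rng.random((n, 2))                           # pseudo-random points
quasi = qmc.Sobol(d=2, scramble=True, seed=2).random(n)  # quasi-random points

print("pseudo-random error:", abs(estimate_pi(pseudo) - np.pi))
print("quasi-random  error:", abs(estimate_pi(quasi) - np.pi))
```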

  5. Silt motion simulation using finite volume particle method

    NASA Astrophysics Data System (ADS)

    Jahanbakhsh, E.; Vessaz, C.; Avellan, F.

    2014-03-01

    In this paper, we present a 3-D finite volume particle method (FVPM) which features rectangular top-hat kernels. With this method, interaction vectors are computed exactly and efficiently. We introduce a new method to enforce the no-slip boundary condition. With this boundary enforcement, the interaction forces between fluid and wall are computed accurately. We employ the boundary force to predict the motion of rigid spherical silt particles inside the fluid. To validate the model, we simulate the 2-D sedimentation of a single particle in a viscous fluid tank and compare the results with benchmark data. The particle resolution is verified by a convergence study. We also simulate the sedimentation of two particles exhibiting drafting, kissing and tumbling phenomena in 2-D and 3-D. We compare the results with other numerical solutions.

  6. Immersed boundary methods for simulating fluid-structure interaction

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, Fotis; Yang, Xiaolei

    2014-02-01

    Fluid-structure interaction (FSI) problems commonly encountered in engineering and biological applications involve geometrically complex flexible or rigid bodies undergoing large deformations. Immersed boundary (IB) methods have emerged as a powerful simulation tool for tackling such flows due to their inherent ability to handle arbitrarily complex bodies without the need for expensive and cumbersome dynamic re-meshing strategies. Depending on the approach such methods adopt to satisfy boundary conditions on solid surfaces they can be broadly classified as diffused and sharp interface methods. In this review, we present an overview of the fundamentals of both classes of methods with emphasis on solution algorithms for simulating FSI problems. We summarize and juxtapose different IB approaches for imposing boundary conditions, efficient iterative algorithms for solving the incompressible Navier-Stokes equations in the presence of dynamic immersed boundaries, and strong and loose coupling FSI strategies. We also present recent results from the application of such methods to study a wide range of problems, including vortex-induced vibrations, aquatic swimming, insect flying, human walking and renewable energy. Limitations of such methods and the need for future research to mitigate them are also discussed.
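
    In the diffused-interface IB methods surveyed above, the coupling step spreads each Lagrangian force to nearby grid nodes through a regularized delta function. The 1-D sketch below uses Peskin's standard 4-point kernel; the grid spacing, marker position, and force value are illustrative.

```python
import numpy as np

def delta4(r):
    """Peskin's 4-point regularized delta kernel (support |r| < 2)."""
    r = np.abs(r)
    phi = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    phi[m1] = (3 - 2 * r[m1] + np.sqrt(1 + 4 * r[m1] - 4 * r[m1]**2)) / 8
    phi[m2] = (5 - 2 * r[m2] - np.sqrt(-7 + 12 * r[m2] - 4 * r[m2]**2)) / 8
    return phi

h = 0.1                                   # Eulerian grid spacing
x = np.arange(0.0, 1.0 + h, h)            # 1-D grid
X_marker, F_marker = 0.437, 2.0           # Lagrangian point and its force

# Spread the point force onto the grid: f_j = F * delta_h(x_j - X), with
# delta_h(x) = phi(x/h)/h so the discrete integral of f recovers F
f = F_marker * delta4((x - X_marker) / h) / h
print("spread force sums to F:", np.isclose(f.sum() * h, F_marker))
```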

  7. Calibration of three rainfall simulators with automatic measurement methods

    NASA Astrophysics Data System (ADS)

    Roldan, Margarita

    2010-05-01

    Rainfall erosivity is the potential ability of rain to cause erosion. It is a function of the physical characteristics of rainfall (Hudson, 1971). Most expressions describing erosivity are related to kinetic energy or momentum, and thus to drop mass or size and fall velocity. Therefore, research on the factors determining erosivity leads to the necessity of studying the relation between fall height and fall velocity for different drop sizes generated in a rainfall simulator (Epema G.F. and Riezebos H.Th, 1983). Rainfall simulators are one of the most used tools for erosion studies and are used to determine fall velocity and drop size, and they allow repeated and multiple measurements. The main reason for the use of rainfall simulation as a research tool is to reproduce in a controlled way the behaviour expected in the natural environment. But on many occasions when simulated rain is compared with natural rain, there is a lack of correspondence between the two, and this can introduce some doubt about the validity of the data, because the characteristics of natural rain are not adequately represented in rainfall simulation research (Dunkerley D., 2008). Rainfall simulations often use high rain rates that do not resemble natural rain events, and such measurements are not comparable.

  8. Particle-based simulations of red blood cells-A review.

    PubMed

    Ye, Ting; Phan-Thien, Nhan; Lim, Chwee Teck

    2016-07-26

    Particle-based methods have become increasingly attractive for solving biofluid flow problems because of the ease and flexibility they afford in modeling fluids with complex structures. In this review, we focus on popular particle-based methods widely used in red blood cell (RBC) simulations, including dissipative particle dynamics (DPD), smoothed particle hydrodynamics (SPH), and the lattice Boltzmann method (LBM). We introduce their basic ideas and formulations, and present their applications in RBC simulations, which are divided into three classes according to the number of RBCs in the simulation: a single RBC, two or multiple RBCs, and RBC suspensions. Furthermore, we analyze their advantages and disadvantages. On weighing the pros and cons of the methods, a combination of the immersed boundary (IB) method and some form of smoothed dissipative particle hydrodynamics (SDPD) may be required to deal effectively with RBC simulations. PMID:26706718

  9. Construction of dynamic stochastic simulation models using knowledge-based techniques

    NASA Technical Reports Server (NTRS)

    Williams, M. Douglas; Shiva, Sajjan G.

    1990-01-01

    Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).

  10. IR characteristic simulation of city scenes based on radiosity model

    NASA Astrophysics Data System (ADS)

    Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu

    2013-09-01

    Reliable modeling of the thermal infrared (IR) signatures of real-world city scenes is required for the signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between objects. A method based on the radiosity model describes these complex effects. It has been developed to enable an accurate simulation of the radiance distribution of city scenes. Firstly, the physical processes affecting the IR characteristics of city scenes were described. Secondly, heat balance equations were formed by combining the atmospheric conditions, shadow maps and the geometry of the scene. Finally, a finite difference method was used to calculate the kinetic temperature of object surfaces. A radiosity model was introduced to describe the scattering effect of radiation between surface elements in the scene. By synthesizing the radiance distribution of objects in the infrared range, we could obtain the IR characteristics of the scene. Real infrared images and model predictions were shown and compared. The results demonstrate that this method can realistically simulate the IR characteristics of city scenes. It effectively displays infrared shadow effects and the radiative interactions between objects in city scenes.
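
    The inter-surface radiation exchange at the heart of a radiosity model reduces to solving B = E + rho * F * B for the surface radiosities B, given emitted radiance E, reflectivities rho, and form factors F. A minimal iterative solver for an invented three-surface scene (all numbers illustrative, not from the paper):

```python
import numpy as np

# Invented 3-surface scene: emitted radiance, reflectivity, form factors
E = np.array([10.0, 0.0, 0.0])          # only surface 0 emits
rho = np.array([0.3, 0.5, 0.4])         # surface reflectivities
F = np.array([[0.0, 0.4, 0.3],          # F[i, j]: fraction of radiation
              [0.4, 0.0, 0.3],          # leaving i that reaches j
              [0.3, 0.3, 0.0]])

# Jacobi iteration for B = E + rho * (F @ B); it converges here because
# rho times the row sums of F is below 1 (the scene loses some radiation)
B = E.copy()
for _ in range(100):
    B = E + rho * (F @ B)

print("surface radiosities:", B)        # total radiance leaving each surface
```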

  11. Simulating recrystallization in titanium using the phase field method

    NASA Astrophysics Data System (ADS)

    Gentry, S. P.; Thornton, K.

    2015-08-01

    Integrated computational materials engineering (ICME) links physics-based models to predict performance of materials based on their processing history. The recrystallization phase field model is developed and parameterized for commercially pure titanium. Stored energy and nucleation of dislocation-free grains are added into a phase field grain-growth model. A two-dimensional simulation of recrystallization in titanium at 800°C was performed; the recrystallized volume fraction was measured from the simulated microstructures. Fitting the recrystallized volume fraction to the Avrami equation gives the time exponent n as 1.8 and the annealing time to reach 50% recrystallization (t0.5) as 71 s. As expected, the microstructure evolves faster when driven by stored energy than when driven by grain boundary energy.
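
    The Avrami fit quoted above can be checked in a few lines: with X(t) = 1 - exp(-k t^n), the reported n = 1.8 and t0.5 = 71 s fix the rate constant, since X(t0.5) = 0.5 implies k = ln 2 / t0.5^n.

```python
import numpy as np

n, t_half = 1.8, 71.0                  # values reported above for Ti at 800 C
k = np.log(2.0) / t_half**n            # from X(t_half) = 0.5

X = lambda t: 1.0 - np.exp(-k * t**n)  # Avrami recrystallized fraction

for t in (10.0, 71.0, 200.0):
    print(f"t = {t:6.1f} s  ->  X = {X(t):.3f}")
# t = 71 s recovers X = 0.500 by construction
```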

  12. An Ellipsoidal Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 1

    NASA Technical Reports Server (NTRS)

    Shivarama, Ravishankar; Fahrenthold, Eric P.

    2004-01-01

    A number of coupled particle-element and hybrid particle-element methods have been developed for the simulation of hypervelocity impact problems, to avoid certain disadvantages associated with the use of pure continuum-based or pure particle-based methods. To date these methods have employed spherical particles. In recent work a hybrid formulation has been extended to the ellipsoidal particle case. A model formulation approach based on Lagrange's equations, with particle entropies serving as generalized coordinates, avoids the angular momentum conservation problems which have been reported with ellipsoidal smooth particle hydrodynamics models.

  13. Simulation Methods for Self-Assembled Polymers and Rings

    NASA Astrophysics Data System (ADS)

    Kindt, James T.

    2003-11-01

    New off-lattice grand canonical Monte Carlo simulation methods have been developed and used to model the equilibrium structure and phase diagrams of equilibrium polymers and rings. A scheme called Polydisperse Insertion, Removal, and Resizing (PDIRR) is used to accelerate the equilibration of the size distribution of self-assembled aggregates. This method allows the insertion or removal of aggregates (e.g., chains) containing an arbitrary number of monomers in a single Monte Carlo move, or the re-sizing of an existing aggregate. For the equilibrium polymer model under semi-dilute conditions, a several-fold increase in equilibration rate compared with single-monomer moves is observed, facilitating the study of the isotropic-nematic transition of semiflexible, self-assembled chains. Combined with the pivot-coupled GCMC method for ring simulation, the PDIRR approach also allows the phenomenological simulation of a polydisperse equilibrium phase of rings, 2-dimensional fluid domains, or flat self-assembled disks in three dimensions.

  14. Computer Simulations of Valveless Pumping using the Immersed Boundary Method

    NASA Astrophysics Data System (ADS)

    Jung, Eunok; Peskin, Charles

    2000-03-01

    Pumping blood in one direction is the main function of the heart, and the heart is equipped with valves that ensure unidirectional flow. Is it possible, though, to pump blood without valves? This report is intended to show by numerical simulation the possibility of a net flow which is generated by a valveless mechanism in a circulatory system. Simulations of valveless pumping are motivated by biomedical applications: cardiopulmonary resuscitation (CPR); and the human foetus before the development of the heart valves. The numerical method used in this work is the immersed boundary method, which is applicable to problems involving an elastic structure interacting with a viscous incompressible fluid. This method has already been applied to blood flow in the heart, platelet aggregation during blood clotting, aquatic animal locomotion, and flow in collapsible tubes. The direction of flow inside a loop of tubing which consists of (almost) rigid and flexible parts is investigated when the boundary of one end of the flexible segment is forced periodically in time. Despite the absence of valves, net flow around the loop may appear in these simulations. Furthermore, we present the new, unexpected result that the direction of this flow is determined not only by the position of the periodic compression, but also by the frequency and amplitude of the driving force.
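    The core of the immersed boundary method is the exchange between Lagrangian boundary points and the Eulerian fluid grid through a regularized delta function. The sketch below spreads a point force onto a periodic 1D grid with Peskin's 4-point delta; grid size and force values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the force-spreading step of the immersed boundary method:
# a Lagrangian point force is distributed to nearby Eulerian grid points
# through Peskin's smoothed 4-point delta function.
def delta4(r):
    """Peskin's 4-point regularized delta function (1D, in grid units)."""
    r = abs(r)
    if r < 1.0:
        return (3 - 2*r + np.sqrt(1 + 4*r - 4*r*r)) / 8.0
    if r < 2.0:
        return (5 - 2*r - np.sqrt(-7 + 12*r - 4*r*r)) / 8.0
    return 0.0

def spread_force(x_lag, f_lag, n, h):
    """Spread Lagrangian point forces onto a periodic 1D grid of spacing h."""
    f_grid = np.zeros(n)
    for X, F in zip(x_lag, f_lag):
        j0 = int(np.floor(X / h))
        for j in range(j0 - 2, j0 + 3):            # 4-point support
            f_grid[j % n] += F * delta4((X - j*h) / h) / h
    return f_grid

f = spread_force(x_lag=[0.503], f_lag=[1.0], n=64, h=1.0/64)
print(f"total spread force: {f.sum()/64:.6f}")     # integral recovers the force
```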

  15. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam associated with the external tank of the U.S. space shuttle has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.

  16. The Local Variational Multiscale Method for Turbulence Simulation.

    SciTech Connect

    Collis, Samuel Scott; Ramakrishnan, Srinivas

    2005-05-01

    Accurate and efficient turbulence simulation in complex geometries is a formidable challenge. Traditional methods are often limited by low accuracy and/or restrictions to simple geometries. We explore the merger of Discontinuous Galerkin (DG) spatial discretizations with Variational Multi-Scale (VMS) modeling, termed Local VMS (LVMS), to overcome these limitations. DG spatial discretizations support arbitrarily high-order accuracy on unstructured grids amenable to complex geometries. Furthermore, the high-order, hierarchical representation within DG provides a natural framework for a priori scale separation, crucial for VMS implementation. We show that the combined benefits of DG and VMS within the LVMS method lead to a promising new approach to LES for use in complex geometries. The efficacy of LVMS for turbulence simulation is assessed by application to fully-developed turbulent channel flow. First, a detailed spatial resolution study is undertaken to record the effects of the DG discretization on turbulence statistics. Here, the local hp-refinement capabilities of DG are exploited to obtain reliable low-order statistics efficiently. Likewise, resolution guidelines for simulating wall-bounded turbulence using DG are established. We also explore the influence of enforcing Dirichlet boundary conditions indirectly through numerical fluxes in DG, which allows the solution to jump (slip) at the channel walls. These jumps are effective in simulating the influence of the wall commensurate with the local resolution, and this feature of DG is effective in mitigating near-wall resolution requirements. In particular, we show that by locally modifying the numerical viscous flux used at the wall, we are able to regulate the near-wall slip through a penalty that leads to improved shear-stress predictions. This work demonstrates the potential of the numerical viscous flux to act as a numerically consistent wall model, and this success warrants future research. As in any high-order numerical method some

  17. Mosquito population dynamics from cellular automata-based simulation

    NASA Astrophysics Data System (ADS)

    Syafarina, Inna; Sadikin, Rifki; Nuraini, Nuning

    2016-02-01

    In this paper we present an innovative model for simulating mosquito-vector population dynamics. The simulation consists of two stages: demography and dispersal dynamics. For the demography simulation, we follow an existing model of the mosquito life cycle. For the dispersal of the vector, we use a cellular automata-based model in which each individual vector is able to move to other grid cells following a random walk. Our model is also capable of representing an immunity factor for each grid cell. We ran simulations to evaluate the model's correctness. Based on the simulations, we conclude that our model is correct. However, the model still needs to be improved with realistic parameters to match real data.
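    A minimal version of the dispersal stage can be written in a few lines: each vector performs a random walk on a periodic grid. Grid size, population and step counts below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Hedged sketch of the cellular-automata dispersal stage: each adult mosquito
# moves to one of its four neighbouring grid cells per time step.
rng = np.random.default_rng(0)
N = 50                                       # grid is N x N, periodic
pos = rng.integers(0, N, size=(1000, 2))     # 1000 individual vectors

moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])
for _ in range(100):                         # 100 dispersal steps
    step = moves[rng.integers(0, 4, size=len(pos))]
    pos = (pos + step) % N                   # periodic boundaries

density, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=N)
print(f"max mosquitoes in a cell after dispersal: {int(density.max())}")
```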

  18. Individualized feedback during simulated laparoscopic training: a mixed methods study

    PubMed Central

    Weurlander, Maria; Hedman, Leif; Nisell, Henry; Lindqvist, Pelle G.; Felländer-Tsai, Li; Enochsson, Lars

    2015-01-01

    Objectives This study aimed to explore the value of individualized feedback on performance, flow and self-efficacy during simulated laparoscopy. Furthermore, we wished to explore attitudes towards feedback and simulator training among medical students. Methods Sixteen medical students were included in the study and randomized to laparoscopic simulator training with or without feedback. A teacher provided individualized feedback continuously throughout the procedures to the target group. Validated questionnaires and scales were used to evaluate self-efficacy and flow. The Mann-Whitney U test was used to evaluate differences between groups regarding laparoscopic performance (instrument path length), self-efficacy and flow. Qualitative data was collected by group interviews and interpreted using inductive thematic analyses. Results Sixteen students completed the simulator training and questionnaires. Instrument path length was shorter in the feedback group (median 3.9 m; IQR: 3.3-4.9) as compared to the control group (median 5.9 m; IQR: 5.0-8.1), p<0.05. Self-efficacy improved in both groups. Eleven students participated in the focus interviews. Participants in the control group expressed that they had fun, whereas participants in the feedback group were more concentrated on the task and also more anxious. Both groups had high ambitions to succeed and also expressed the importance of getting feedback. The authenticity of the training scenario was important for the learning process. Conclusions This study highlights the importance of individualized feedback during simulated laparoscopy training. The next step is to further optimize feedback and to transfer standardized and individualized feedback from the simulated setting to the operating room. PMID:26223033

  19. An HLA based design of space system simulation environment

    NASA Astrophysics Data System (ADS)

    Li, Yinghua; Li, Yong; Liu, Jie

    2007-06-01

    Space system simulation is involved in many application fields, such as space remote sensing and space communication. A simulation environment that can be shared by different space system simulations is therefore needed. Two rules, called object template towing and hierarchical reusability, are proposed. Based on these two rules, the architecture, the network structure and the function structure of the simulation environment are designed. Then, mechanisms for utilizing data resources, inheriting object models and running simulation systems are constructed. These mechanisms allow simulation objects defined in advance to be easily inherited by different HLA federates, and fundamental simulation models to be shared by different simulation systems. The simulation environment is therefore highly universal and reusable.

  20. The implementation of distributed interactive simulator based on HLA

    NASA Astrophysics Data System (ADS)

    Zhang, Limin; Teng, Jianfu; Feng, Tao

    2004-03-01

    HLA (High Level Architecture) is a new architecture for distributed interactive simulation developed from DIS. We put forward a technical scheme for a distributed interactive simulator based on HLA, introduce the concept of a distributed object-oriented simulator engine, and present an in-depth study of its architecture. This provides a new theoretical and practical approach for migrating simulator architectures to HLA.

  1. DOM Based XSS Detecting Method Based on Phantomjs

    NASA Astrophysics Data System (ADS)

    Dong, Ri-Zhan; Ling, Jie; Liu, Yi

    Because the malicious code does not appear in the HTML source code, DOM-based XSS cannot be detected by traditional methods. By analyzing the causes of DOM-based XSS, this paper proposes a detection method for DOM-based XSS based on PhantomJS. The method uses function hijacking to detect dangerous operations, and a prototype system has been implemented. Comparison with existing tools shows that the system improves the detection rate and that the method is effective in detecting DOM-based XSS.

  2. Grid generation and adaptation for the Direct Simulation Monte Carlo Method. [for complex flows past wedges and cones

    NASA Technical Reports Server (NTRS)

    Olynick, David P.; Hassan, H. A.; Moss, James N.

    1988-01-01

    A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.

  3. Multigrid Methods for Fully Implicit Oil Reservoir Simulation

    NASA Technical Reports Server (NTRS)

    Molenaar, J.

    1996-01-01

    In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations. A two-level FAS algorithm is presented for the black-oil equations, and linear multigrid for
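    The IMPES cycle described above (implicit pressure, explicit saturation) can be sketched for 1D incompressible two-phase flow as follows; mobilities, boundary pressures and grid values are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

# Hedged sketch of one IMPES cycle: implicit pressure solve with total
# mobility, then explicit upwind update of water saturation.
n, dx, dt, phi = 50, 1.0, 0.05, 0.2
S = np.zeros(n); S[0] = 1.0                       # water injected at the left

for _ in range(100):
    lam_w, lam_o = S**2, (1 - S)**2               # Corey-type relperms, mu = 1
    lam_t = lam_w + lam_o
    lam_face = 0.5 * (lam_t[:-1] + lam_t[1:])     # mobility at the n-1 faces
    # Implicit step: assemble and solve the elliptic pressure system A p = b
    # with pressure fixed at both ends (injection left, production right).
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0; b[0] = 50.0        # illustrative pressure drop
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i + 1] = -lam_face[i - 1], -lam_face[i]
        A[i, i] = lam_face[i - 1] + lam_face[i]
    p = np.linalg.solve(A, b)
    # Explicit step: upwind saturation update (flow is left to right).
    u = -lam_face * (p[1:] - p[:-1]) / dx         # total velocity at faces
    flux = u * (lam_w / lam_t)[:-1]               # water flux, upwind cell
    S[1:-1] += dt / (phi * dx) * (flux[:-1] - flux[1:])
    S = np.clip(S, 0.0, 1.0)

print(f"water front has reached cell {int((S > 0.1).sum())}")
```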

  4. Traffic and Driving Simulator Based on Architecture of Interactive Motion.

    PubMed

    Paz, Alexander; Veeramisti, Naveen; Khaddar, Romesh; de la Fuente-Mella, Hanns; Modorcea, Luiza

    2015-01-01

    This study proposes an architecture for an interactive motion-based traffic simulation environment. In order to enhance modeling realism involving actual human beings, the proposed architecture integrates multiple types of simulation, including: (i) motion-based driving simulation, (ii) pedestrian simulation, (iii) motorcycling and bicycling simulation, and (iv) traffic flow simulation. The architecture has been designed to enable the simulation of the entire network; as a result, the actual driver, pedestrian, and bike rider can navigate anywhere in the system. In addition, the background traffic interacts with the actual human beings. This is accomplished by using a hybrid mesomicroscopic traffic flow simulation modeling approach. The mesoscopic traffic flow simulation model loads the results of a user equilibrium traffic assignment solution and propagates the corresponding traffic through the entire system. The microscopic traffic flow simulation model provides background traffic around the vicinities where actual human beings are navigating the system. The two traffic flow simulation models interact continuously to update system conditions based on the interactions between actual humans and the fully simulated entities. Implementation efforts are currently in progress and some preliminary tests of individual components have been conducted. The implementation of the proposed architecture faces significant challenges ranging from multiplatform and multilanguage integration to multievent communication and coordination. PMID:26491711

  5. Traffic and Driving Simulator Based on Architecture of Interactive Motion

    PubMed Central

    Paz, Alexander; Veeramisti, Naveen; Khaddar, Romesh; de la Fuente-Mella, Hanns; Modorcea, Luiza

    2015-01-01

    This study proposes an architecture for an interactive motion-based traffic simulation environment. In order to enhance modeling realism involving actual human beings, the proposed architecture integrates multiple types of simulation, including: (i) motion-based driving simulation, (ii) pedestrian simulation, (iii) motorcycling and bicycling simulation, and (iv) traffic flow simulation. The architecture has been designed to enable the simulation of the entire network; as a result, the actual driver, pedestrian, and bike rider can navigate anywhere in the system. In addition, the background traffic interacts with the actual human beings. This is accomplished by using a hybrid mesomicroscopic traffic flow simulation modeling approach. The mesoscopic traffic flow simulation model loads the results of a user equilibrium traffic assignment solution and propagates the corresponding traffic through the entire system. The microscopic traffic flow simulation model provides background traffic around the vicinities where actual human beings are navigating the system. The two traffic flow simulation models interact continuously to update system conditions based on the interactions between actual humans and the fully simulated entities. Implementation efforts are currently in progress and some preliminary tests of individual components have been conducted. The implementation of the proposed architecture faces significant challenges ranging from multiplatform and multilanguage integration to multievent communication and coordination. PMID:26491711

  6. Hydrological Ensemble Simulation in Huaihe Catchment Based on VIC Model

    NASA Astrophysics Data System (ADS)

    Sun, R.; Yuan, H.

    2013-12-01

    Huaihe catchment plays a very important role in the political, economic, and cultural development of China. However, hydrological disasters frequently occur in Huaihe catchment, and thus hydrological simulation in this area has very important significance. The Variable Infiltration Capacity (VIC) model, a macroscale distributed hydrological model, is applied to the upper Huaihe catchment to simulate the discharge of the basin outlet Bengbu station from 1970 to 1999. The uncertainty in the calibration of VIC model parameters has been analyzed, and the best set of parameters for the training period of 1970~1993 is obtained using the Generalized Likelihood Uncertainty Estimation (GLUE) method. The study also addresses the influence of different likelihood functions on parameter sensitivity as well as on the uncertainty of the discharge simulation. Results show that among the six chosen parameters, the soil thickness of the second layer (d2) is the most sensitive one, followed by the saturation capacity curve shape parameter (B). Moreover, the parameter selection is sensitive to the choice of likelihood function. For example, the soil thickness of the third layer (d3) is sensitive when using the Nash coefficient as the likelihood function, while d3 is not sensitive when using the relative error as the likelihood function. With the 95% confidence interval, the coverage rate of the simulated discharge versus the observed discharge is small (around 0.4), indicating that the uncertainty in the model is large. The coverage rate when selecting the relative error as the likelihood function is larger than that when selecting the Nash coefficient. Based on the calibration and sensitivity studies, hydrological ensemble forecasts have been established using multiple parameter sets. The ensemble mean forecasts show better simulations than the control forecast (i.e. the simulation using the best set of parameters) for the long-term trend of discharge, while the control forecast is better in the simulation of

  7. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

    A particle motion considering thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem with thermophoresis simulation is the computation time, which is proportional to the collision frequency; note that the time step interval becomes very small for simulations considering the motion of large particles. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with a particle in one collision event. This weight factor allows a large time step interval to be adopted; the resulting time step interval is about a million times longer than the conventional DSMC time step when the particle size is 1 μm, so the computation time is reduced by a factor of about one million. We simulate graphite particle motion under thermophoretic force using DSMC-Neutrals (the Particle-PLUS neutral module), commercial software adopting the DSMC method, with the above collision weight factor. The particle is a sphere of 1 μm; particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit is the same as Waldmann's result.

  8. Some Developments of the Equilibrium Particle Simulation Method for the Direct Simulation of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Macrossan, M. N.

    1995-01-01

    The direct simulation Monte Carlo (DSMC) method is the established technique for the simulation of rarefied gas flows. In some flows of engineering interest, such as occur for aero-braking spacecraft in the upper atmosphere, DSMC can become prohibitively expensive in CPU time because some regions of the flow, particularly on the windward side of blunt bodies, become collision dominated. As an alternative to using a hybrid DSMC and continuum gas solver (Euler or Navier-Stokes solver), this work is aimed at making the particle simulation method efficient in the high density regions of the flow. A high density, infinite collision rate limit of DSMC, the Equilibrium Particle Simulation method (EPSM), was proposed some 15 years ago. EPSM is developed here for the flow of a gas consisting of many different species of molecules and is shown to be computationally efficient (compared to DSMC) for high collision rate flows. It thus offers great potential as part of a hybrid DSMC/EPSM code which could handle flows in the transition regime between rarefied gas flows and fully continuum flows. As a first step towards this goal a pure EPSM code is described. The next step of combining DSMC and EPSM is not attempted here but should be straightforward. EPSM and DSMC are applied to Taylor-Couette flow (Kn = 0.02 and 0.0133, S(omega) = 3). Toroidal vortices develop for both methods but some differences are found, as might be expected for the given flow conditions. EPSM appears to be less sensitive to the sequence of random numbers used in the simulation than is DSMC and may also be more dissipative. The question of the origin and the magnitude of the dissipation in EPSM is addressed. It is suggested that this analysis is also relevant to DSMC when the usual accuracy requirements on the cell size and decoupling time step are relaxed in the interests of computational efficiency.

  9. Simulation-based assessment in anesthesiology: requirements for practical implementation.

    PubMed

    Boulet, John R; Murray, David J

    2010-04-01

    Simulations have taken a central role in the education and assessment of medical students, residents, and practicing physicians. The introduction of simulation-based assessments in anesthesiology, especially those used to establish various competencies, has demanded fairly rigorous studies concerning the psychometric properties of the scores. Most important, major efforts have been directed at identifying, and addressing, potential threats to the validity of simulation-based assessment scores. As a result, organizations that wish to incorporate simulation-based assessments into their evaluation practices can access information regarding effective test development practices, the selection of appropriate metrics, the minimization of measurement errors, and test score validation processes. The purpose of this article is to provide a broad overview of the use of simulation for measuring physician skills and competencies. For simulations used in anesthesiology, studies that describe advances in scenario development, the development of scoring rubrics, and the validation of assessment results are synthesized. Based on the summary of relevant research, psychometric requirements for practical implementation of simulation-based assessments in anesthesiology are forwarded. As technology expands, and simulation-based education and evaluation takes on a larger role in patient safety initiatives, the groundbreaking work conducted to date can serve as a model for those individuals and organizations that are responsible for developing, scoring, or validating simulation-based education and assessment programs in anesthesiology. PMID:20234313

  10. Correlation theory-based signal processing method for CMF signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-lin; Tu, Ya-qing

    2016-06-01

    The signal processing precision of Coriolis mass flowmeter (CMF) signals directly affects the measurement accuracy of Coriolis mass flowmeters. To improve the measurement accuracy of CMFs, a correlation theory-based signal processing method for CMF signals is proposed, comprising a correlation theory-based frequency estimation method and a phase difference estimation method. Theoretical analysis shows that the proposed method eliminates the effect of non-integral-period sampling on frequency and phase difference estimation. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of frequency and phase difference estimation. For frequency estimation it performs better than the adaptive notch filter, discrete Fourier transform and autocorrelation methods, and for phase difference estimation it performs better than the data extension-based correlation, Hilbert transform, quadrature delay estimator and discrete Fourier transform methods, which contributes to improving the measurement accuracy of Coriolis mass flowmeters.
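    As a rough illustration of correlation-based estimation (not the authors' exact algorithm), the sketch below estimates the common frequency of two noisy flowmeter-like signals from their cross-spectrum and their phase difference by correlating each signal against quadrature references; fs, f0, dphi and the noise level are assumed values.

```python
import numpy as np

# Hedged sketch of correlation-based frequency and phase-difference
# estimation for two noisy sensor signals.
fs, f0, dphi = 2000.0, 95.3, 0.012               # Hz, Hz, rad (true values)
t = np.arange(4096) / fs
rng = np.random.default_rng(1)
s1 = np.sin(2*np.pi*f0*t) + 0.05*rng.standard_normal(t.size)
s2 = np.sin(2*np.pi*f0*t + dphi) + 0.05*rng.standard_normal(t.size)

# Frequency: peak of the cross-spectrum, which suppresses uncorrelated noise.
X = np.fft.rfft(s1) * np.conj(np.fft.rfft(s2))
f_hat = np.fft.rfftfreq(t.size, 1/fs)[np.argmax(np.abs(X))]

# Phase of each signal by correlation against quadrature references at f_hat;
# the common bias from non-integer-period records cancels in the difference.
def phase(s):
    return np.arctan2(s @ np.cos(2*np.pi*f_hat*t), s @ np.sin(2*np.pi*f_hat*t))

print(f"f = {f_hat:.2f} Hz, phase difference = {phase(s2) - phase(s1):.4f} rad")
```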

  11. Fluid, solid and fluid-structure interaction simulations on patient-based abdominal aortic aneurysm models.

    PubMed

    Kelly, Sinead; O'Rourke, Malachy

    2012-04-01

    This article describes the use of fluid, solid and fluid-structure interaction simulations on three patient-based abdominal aortic aneurysm geometries. All simulations were carried out using OpenFOAM, which uses the finite volume method to solve both fluid and solid equations. Initially a fluid-only simulation was carried out on a single patient-based geometry and results from this simulation were compared with experimental results. There was good qualitative and quantitative agreement between the experimental and numerical results, suggesting that OpenFOAM is capable of predicting the main features of unsteady flow through a complex patient-based abdominal aortic aneurysm geometry. The intraluminal thrombus and arterial wall were then included, and solid stress and fluid-structure interaction simulations were performed on this, and two other patient-based abdominal aortic aneurysm geometries. It was found that the solid stress simulations resulted in an under-estimation of the maximum stress by up to 5.9% when compared with the fluid-structure interaction simulations. In the fluid-structure interaction simulations, flow induced pressure within the aneurysm was found to be up to 4.8% higher than the value of peak systolic pressure imposed in the solid stress simulations, which is likely to be the cause of the variation in the stress results. In comparing the results from the initial fluid-only simulation with results from the fluid-structure interaction simulation on the same patient, it was found that wall shear stress values varied by up to 35% between the two simulation methods. It was concluded that solid stress simulations are adequate to predict the maximum stress in an aneurysm wall, while fluid-structure interaction simulations should be performed if accurate prediction of the fluid wall shear stress is necessary. Therefore, the decision to perform fluid-structure interaction simulations should be based on the particular variables of interest in a given

  12. Quadrature Moments Method for the Simulation of Turbulent Reactive Flows

    NASA Technical Reports Server (NTRS)

    Raman, Venkatramanan; Pitsch, Heinz; Fox, Rodney O.

    2003-01-01

    A sub-filter model for reactive flows, namely the DQMOM model, was formulated for Large Eddy Simulation (LES) using the filtered mass density function. Transport equations required to determine the location and size of the delta peaks were then formulated for a 2-peak decomposition of the FDF. The DQMOM scheme was implemented in an existing structured-grid LES solver. Simulations of a scalar shear layer based on an experimental configuration showed that the first and second moments of both reactive and inert scalars are in good agreement with a conventional Lagrangian scheme that evolves the same FDF. Comparisons with LES simulations performed using a laminar chemistry assumption for the reactive scalar show that the new method provides vast improvements at minimal computational cost. Currently, the DQMOM model is being implemented for use with the progress variable/mixture fraction model of Pierce. Comparisons with experimental results and LES simulations using a single environment for the progress variable are planned. Future studies will aim at understanding the effect of an increased number of environments on the predictions.

  13. Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations

    PubMed Central

    Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey

    2012-01-01

    Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N2) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID:22897254
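    The Krylov idea can be sketched directly: run the Lanczos recurrence on the diffusion matrix D and approximate y = D^(1/2) z from the small tridiagonal matrix, never forming the matrix square root. The random SPD matrix below stands in for a hydrodynamic mobility matrix; the Krylov dimension m is an illustrative choice.

```python
import numpy as np

# Hedged sketch: Lanczos approximation of the Brownian noise vector
# y ~ D^{1/2} z without forming the matrix square root explicitly.
rng = np.random.default_rng(2)
n, m = 200, 30                                   # system size, Krylov dimension
A = rng.standard_normal((n, n))
D = A @ A.T / n + np.eye(n)                      # SPD stand-in for mobility

z = rng.standard_normal(n)                       # uncorrelated Gaussian vector
beta0 = np.linalg.norm(z)
V = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(m - 1)
V[:, 0] = z / beta0
for j in range(m):                               # Lanczos three-term recurrence
    w = D @ V[:, j]
    if j > 0:
        w -= beta[j - 1] * V[:, j - 1]
    alpha[j] = V[:, j] @ w
    w -= alpha[j] * V[:, j]
    if j < m - 1:
        beta[j] = np.linalg.norm(w)
        V[:, j + 1] = w / beta[j]

T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
evals, evecs = np.linalg.eigh(T)                 # small m x m eigenproblem
y = beta0 * V @ (evecs @ (np.sqrt(evals) * evecs[0]))   # beta0 * V sqrt(T) e1

w_d, U = np.linalg.eigh(D)                       # dense reference solution
y_exact = U @ (np.sqrt(w_d) * (U.T @ z))
err = np.linalg.norm(y - y_exact) / np.linalg.norm(y_exact)
print(f"relative error of Krylov noise vector: {err:.2e}")
```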

  14. A Survey of Stochastic Simulation and Optimization Methods in Signal Processing

    NASA Astrophysics Data System (ADS)

    Pereyra, Marcelo; Schniter, Philip; Chouzenoux, Emilie; Pesquet, Jean-Christophe; Tourneret, Jean-Yves; Hero, Alfred O.; McLaughlin, Steve

    2016-03-01

    Modern signal processing (SP) methods rely very heavily on probability and statistics to solve challenging SP problems. SP methods are now expected to deal with ever more complex models, requiring ever more sophisticated computational inference techniques. This has driven the development of statistical SP methods based on stochastic simulation and optimization. Stochastic simulation and optimization algorithms are computationally intensive tools for performing statistical inference in models that are analytically intractable and beyond the scope of deterministic inference methods. They have been recently successfully applied to many difficult problems involving complex statistical models and sophisticated (often Bayesian) statistical inference techniques. This survey paper offers an introduction to stochastic simulation and optimization methods in signal and image processing. The paper addresses a variety of high-dimensional Markov chain Monte Carlo (MCMC) methods as well as deterministic surrogate methods, such as variational Bayes, the Bethe approach, belief and expectation propagation and approximate message passing algorithms. It also discusses a range of optimization methods that have been adopted to solve stochastic problems, as well as stochastic methods for deterministic optimization. Subsequently, areas of overlap between simulation and optimization, in particular optimization-within-MCMC and MCMC-driven optimization are discussed.
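    As a minimal concrete instance of the MCMC methods surveyed here, the sketch below runs a random-walk Metropolis-Hastings sampler on a 1D Gaussian target; the target, step size and chain length are illustrative choices.

```python
import numpy as np

# Minimal random-walk Metropolis-Hastings sampler drawing from an
# unnormalized 1D target density, here N(3, 0.5^2).
log_target = lambda x: -0.5 * (x - 3.0)**2 / 0.25

rng = np.random.default_rng(3)
x, chain = 0.0, []
for _ in range(20000):
    prop = x + 0.5 * rng.standard_normal()             # symmetric proposal
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop                                       # accept; else keep x
    chain.append(x)
samples = np.array(chain[2000:])                       # discard burn-in
print(f"posterior mean = {samples.mean():.2f}, sd = {samples.std():.2f}")
```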

  15. [Simulation in obstetrics and gynecology - a new method to improve the management of acute obstetric emergencies].

    PubMed

    Blum, Ronja; Gairing Bürglin, Anja; Gisin, Stefan

    2008-11-01

    In medical specialties such as anaesthesia, the use of simulation has increased over the past 15 years. Medical simulation attempts to reproduce important clinical situations to practise team skills or individual skills in a risk-free environment. For a long time simulators were only used by the airline industry and the military. Simulation as a training tool for practising critical situations in obstetrics is not very common yet. Experience and routine are crucial to evaluate a medical emergency correctly and to take the appropriate measures. Nowadays the obstetrician requires a combination of manual and communication skills, fast emergency management and decision-making skills, and simulation may help to attain these skills. This may not only satisfy the high expectations and demands of patients towards doctors and midwives but would also help them keep calm in difficult situations and avoid mistakes. The goal is a risk-free delivery for mother and child. We therefore developed a simulation-based curricular unit for hands-on training of four different obstetric emergency scenarios. In this paper we describe the feedback of doctors and midwives on their personal experiences with this simulation-based curricular unit. The results indicate that simulation is an accepted method for team training for emergency situations in obstetrics. Whether patient safety increases with the regular use of drill training needs to be investigated in further studies. PMID:18979433

  16. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    DOE PAGESBeta

    Mehmani, Yashar; Schoenherr, Martin; Pasquali, Andrea; Perkins, William A.; Kim, Kyungjoo; Perego, Mauro; Parks, Michael L.; Balhoff, Matthew T.; Richmond, Marshall C.; Geier, Martin; et al

    2015-09-28

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This paper provides support for

  17. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    SciTech Connect

    Mehmani, Yashar; Schoenherr, Martin; Pasquali, Andrea; Perkins, William A.; Kim, Kyungjoo; Perego, Mauro; Parks, Michael L.; Balhoff, Matthew T.; Richmond, Marshall C.; Geier, Martin; Krafczyk, Manfred; Luo, Li -Shi; Tartakovsky, Alexandre M.; Yang, Xiaofan; Scheibe, Timothy D.; Trask, Nathaniel

    2015-09-28

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This paper provides support for confidence

  18. Comparison of AGE and Spectral Methods for the Simulation of Far-Wakes

    NASA Technical Reports Server (NTRS)

    Bisset, D. K.; Rogers, M. M.; Kega, Dennis (Technical Monitor)

    1999-01-01

    Turbulent flow simulation methods based on finite differences are attractive for their simplicity, flexibility and efficiency, but not always for accuracy or stability. This report demonstrates that a good compromise is possible with the Advected Grid Explicit (AGE) method. AGE has proven to be both efficient and accurate for simulating turbulent free-shear flows, including planar mixing layers and planar jets. Its efficiency results from its localized, fully explicit finite difference formulation (Bisset 1998a,b), which is very straightforward to compute and outweighs the need for a fairly small timestep. Also, most of the successful simulations were slightly under-resolved, and therefore they were, in effect, large-eddy simulations (LES) without a sub-grid-scale (SGS) model, rather than direct numerical simulations (DNS). The principle is that the role of the smallest scales of turbulent motion (when the Reynolds number is not too low) is to dissipate turbulent energy, and therefore they do not have to be simulated when the numerical method is inherently dissipative at its resolution limits. Such simulations are termed 'auto-LES' (LES with automatic SGS modeling) in this report.

  19. Shrinkage regression-based methods for microarray missing value imputation

    PubMed Central

    2013-01-01

    Background Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than other types of methods on many testing microarray datasets. Results To further improve the performance of regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation on six testing microarray datasets than the existing regression-based methods do. Conclusions Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods. PMID:24565159
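    A simplified sketch of this family of estimators (not the authors' exact method): choose the K genes most correlated with the target gene, fit least squares with a ridge-style shrinkage of the coefficients, and predict the missing entry. K, the shrinkage weight and the data are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of correlation-guided, shrinkage-regression imputation.
def impute(expr, gene, sample, K=10, ridge=1.0):
    """expr: genes x samples matrix with np.nan marking missing entries."""
    mask = ~np.isnan(expr[gene])                  # samples observed for target
    mask[sample] = False
    target = expr[gene, mask]
    # Candidate predictors: genes observed on these samples and at `sample`.
    ok = ~np.isnan(expr[:, mask]).any(axis=1) & ~np.isnan(expr[:, sample])
    ok[gene] = False
    cand = np.where(ok)[0]
    corr = np.array([abs(np.corrcoef(expr[g, mask], target)[0, 1]) for g in cand])
    sel = cand[np.argsort(corr)[-K:]]             # K most similar genes
    X = expr[sel][:, mask].T                      # design matrix
    # Shrunken least squares (ridge) coefficients, then predict.
    beta = np.linalg.solve(X.T @ X + ridge * np.eye(K), X.T @ target)
    return expr[sel, sample] @ beta

rng = np.random.default_rng(4)
expr = rng.standard_normal((200, 30))
expr[50] = expr[20] * 0.9 + 0.1 * rng.standard_normal(30)  # correlated pair
true = expr[50, 7]; expr[50, 7] = np.nan
print(f"true {true:.2f}, imputed {impute(expr, 50, 7):.2f}")
```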

  20. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.; Briesch, Amy M.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  1. New Simulation Methods to Facilitate Achieving a Mechanistic Understanding of Basic Pharmacology Principles in the Classroom

    ERIC Educational Resources Information Center

    Grover, Anita; Lam, Tai Ning; Hunt, C. Anthony

    2008-01-01

    We present a simulation tool to aid the study of basic pharmacology principles. By taking advantage of the properties of agent-based modeling, the tool facilitates taking a mechanistic approach to learning basic concepts, in contrast to the traditional empirical methods. Pharmacodynamics is a particular aspect of pharmacology that can benefit from…

  2. Study on self-calibration angle encoder using simulation method

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Xue, Zi; Huang, Yao; Wang, Xiaona

    2016-01-01

    Angle measurement technology is very important in precision manufacturing, the optical industry, aerospace, aviation and navigation. The angle encoder, which uses the concept of subdivision of the full circle (2π rad = 360°) and transforms the angle into a number of electronic pulses, is the most common instrument for angle measurement. To improve the accuracy of the angle encoder, a novel self-calibration method is proposed that enables the angle encoder to calibrate itself without an angle reference. For the study of the self-calibration method, an angle deviation curve over 0° to 360° was simulated with equal-weight Fourier components, and the self-calibration algorithm was applied to this deviation curve. The simulation result shows the relationship between the arrangement of multiple reading heads and the Fourier component distribution of the angle encoder deviation curve. In addition, an actual self-calibrating angle encoder was calibrated against a polygon angle standard at the National Institute of Metrology, China. The experimental result indicates the actual self-calibration effect on the Fourier component distribution of the deviation curve. The comparison between the simulated and experimental self-calibration results shows good consistency and proves the reliability of the self-calibrating angle encoder.
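    The multi-reading-head principle behind such self-calibration can be demonstrated numerically: averaging N equally spaced heads cancels every Fourier component of the deviation curve except orders that are multiples of N. The synthetic deviation curve below is an illustrative assumption.

```python
import numpy as np

# Hedged sketch of the multi-reading-head averaging principle: only Fourier
# orders that are multiples of the head count survive the average.
N_heads, M = 5, 3600                         # heads, samples over 360 degrees
theta = 2*np.pi*np.arange(M)/M
rng = np.random.default_rng(5)
# Synthetic deviation curve with equal-weight Fourier components, orders 1..12.
orders = np.arange(1, 13)
phases = rng.uniform(0, 2*np.pi, orders.size)
dev = sum(np.sin(k*theta + p) for k, p in zip(orders, phases))

# Each head sees the same curve shifted by 2*pi/N; average the N readings.
avg = sum(np.roll(dev, i*M//N_heads) for i in range(N_heads)) / N_heads

spec = 2*np.abs(np.fft.rfft(avg))/M          # surviving Fourier amplitudes
for k in orders:
    print(f"order {k:2d}: amplitude {spec[k]:.3f}")   # ~1 only at k = 5, 10
```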

  3. Investigation on Accelerating Dust Storm Simulation via Domain Decomposition Methods

    NASA Astrophysics Data System (ADS)

    Yu, M.; Gui, Z.; Yang, C. P.; Xia, J.; Chen, S.

    2014-12-01

    Dust storm simulation is a data- and computing-intensive process, which requires high efficiency and adequate computing resources. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. However, it remains an open question how to allocate these subdomain processes to computing nodes without introducing imbalanced task loads and unnecessary communications among computing nodes. Here we propose a domain decomposition and allocation framework that carefully balances the computing cost and communication cost of each computing node to minimize the total execution time and reduce the overall communication cost of the entire system. The framework is tested with the NMM (Nonhydrostatic Mesoscale Model)-dust model, where a 72-hour dust load process is simulated. Performance results using the proposed scheduling method are compared with those using the default MPI scheduling method. Results demonstrate that the system improves simulation performance by 20% to 80%.

  4. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. But a key challenge for ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, the attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  5. Simulation-based transthoracic echocardiography: “An anesthesiologist's perspective”

    PubMed Central

    Magoon, Rohan; Sharma, Amita; Ladha, Suruchi; Kapoor, Poonam Malhotra; Hasija, Suruchi

    2016-01-01

    With the growing requirement for echocardiography in perioperative management, anesthesiologists need to be well trained in transthoracic echocardiography (TTE). The lack of a formal, structured teaching program precludes this. The present article reviews the expanding domain of TTE, simulation-based TTE training and its advancements, current limitations, and the importance of simulation-based training for anesthesiologists. PMID:27397457

  6. Amyloid oligomer structure characterization from simulations: A general method

    SciTech Connect

    Nguyen, Phuong H.; Li, Mai Suan

    2014-03-07

    Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ9-40, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.

  7. Simulation based analysis of laser beam brazing

    NASA Astrophysics Data System (ADS)

    Dobler, Michael; Wiethop, Philipp; Schmid, Daniel; Schmidt, Michael

    2016-03-01

    Laser beam brazing is a well-established joining technology in car body manufacturing, with main applications in the joining of divided tailgates and the joining of roof and side panels. A key advantage of laser-brazed joints is the visual quality of the seam, which satisfies the highest requirements. However, the laser beam brazing process is very complex and the process dynamics are only partially understood. In order to gain deeper knowledge of the laser beam brazing process, to determine optimal process parameters and to test process variants, a transient three-dimensional simulation model of laser beam brazing is developed. This model takes into account the energy input, heat transfer as well as the fluid and wetting dynamics that lead to the formation of the brazing seam. A validation of the simulation model is performed by metallographic analysis and thermocouple measurements for different parameter sets of the brazing process. These results show that the multi-physical simulation model not only can be used to gain insight into the laser brazing process but also offers the possibility of process optimization in industrial applications. The model's capability of determining optimal process parameters is shown exemplarily for the laser power. Small deviations in the energy input can affect the brazing results significantly. Therefore, the simulation model is used to analyze the effect of the lateral laser beam position on the energy input and the resulting brazing seam.

  8. Issues of Simulation-Based Route Assignment

    SciTech Connect

    Nagel, K.; Rickert, M.

    1999-07-20

    The authors use an iterative re-planning scheme with simulation feedback to generate a self-consistent route set for a given street network and origin-destination matrix. The iteration process is defined by three parameters. The authors found that these parameters influence the speed of the relaxation, but not necessarily its final state.

  9. Nonholonomic Hamiltonian Method for Molecular Dynamics Simulations of Reacting Shocks

    NASA Astrophysics Data System (ADS)

    Fahrenthold, Eric; Bass, Joseph

    2015-06-01

    Conventional molecular dynamics simulations of reacting shocks employ a holonomic Hamiltonian formulation: the breaking and forming of covalent bonds is described by potential functions. In general these potential functions: (a) are algebraically complex, (b) must satisfy strict smoothness requirements, and (c) contain many fitted parameters. In recent research the authors have developed a new nonholonomic formulation of reacting molecular dynamics. In this formulation bond orders are determined by rate equations, and the bonding-debonding process need not be described by differentiable functions. This simplifies the representation of complex chemistry and reduces the number of fitted model parameters. Example applications of the method show molecular-level shock-to-detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.

  10. Large-eddy simulation using the finite element method

    SciTech Connect

    McCallen, R.C.; Gresho, P.M.; Leone, J.M. Jr.; Kollmann, W.

    1993-10-01

    In a large-eddy simulation (LES) of turbulent flows, the large-scale motion is calculated explicitly, while the small-scale motion is modeled (i.e., approximated with semi-empirical relations). Typically, finite difference or spectral numerical schemes are used to generate an LES; the use of finite element methods (FEM) has been far less prominent. In this study, we demonstrate that FEM in combination with LES provides a viable tool for the study of turbulent, separating channel flows, specifically the flow over a two-dimensional backward-facing step. The combination of these methodologies brings together the advantages of each: LES provides a high degree of accuracy with a minimum of empiricism for turbulence modeling, and FEM provides a robust way to simulate flow in very complex domains of practical interest. Such a combination should prove very valuable to the engineering community.

  11. Discrete Element Method Simulation of Nonlinear Viscoelastic Stress Wave Problems

    NASA Astrophysics Data System (ADS)

    Tang, Zhiping; Horie, Y.; Wang, Wenqiang

    2002-07-01

    A DEM (Discrete Element Method) simulation of nonlinear viscoelastic stress wave problems is carried out. The interaction forces among elements are described using a model in which neighbor elements are linked by a nonlinear spring and a certain number of Maxwell components in parallel. By making use of exponential relaxation moduli, it is shown that numerical computation of the convolution integral does not require storing and repeatedly calculating the strain history, so that the computational cost is dramatically reduced. To validate the viscoelastic DM2 code [1], stress wave propagation in a Maxwell rod with one end subjected to a constant stress loading is simulated. The results agree excellently with those from the characteristics calculation. The code is then used to investigate the problem of meso-scale damage in a plastic-bonded explosive under shock loading. Results not only show "compression damage", but also reveal a complex damage evolution. They demonstrate a unique capability of DEM in modeling heterogeneous materials.
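    The key computational trick, that exponential relaxation moduli turn the hereditary (convolution) integral into a one-step recursion, can be sketched as follows; the moduli, relaxation times and strain program are illustrative assumptions, not the paper's material model.

```python
import numpy as np

# Hedged sketch: with E(t) = E_inf + sum_i E_i exp(-t/tau_i), the convolution
# (hereditary) integral updates recursively, with no stored strain history.
E_inf, E_i, tau_i = 1.0, np.array([2.0, 0.5]), np.array([0.1, 1.0])
dt, nsteps = 1e-3, 2000
h = np.zeros_like(E_i)                       # internal (Maxwell) stresses
eps_old, sigma_hist = 0.0, []
decay = np.exp(-dt / tau_i)
gain = E_i * tau_i / dt * (1.0 - decay)      # exact for linear strain in a step

for n in range(1, nsteps + 1):
    eps = min(n*dt, 0.5)                     # ramp the strain, then hold
    d_eps = eps - eps_old
    h = decay*h + gain*d_eps                 # recursive convolution update
    sigma_hist.append(E_inf*eps + h.sum())
    eps_old = eps

print(f"stress at end of ramp {sigma_hist[499]:.3f}, "
      f"after relaxation {sigma_hist[-1]:.3f}")
```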

  12. Discrete Element Method Simulation of Nonlinear Viscoelastic Stress Wave Problems

    NASA Astrophysics Data System (ADS)

    Wang, Wenqiang; Tang, Zhiping; Horie, Y.

    2002-07-01

    A DEM (Discrete Element Method) simulation of nonlinear viscoelastic stress wave problems is carried out. The interaction forces among elements are described using a model in which neighbor elements are linked by a nonlinear spring and a certain number of Maxwell components in parallel. By making use of exponential relaxation moduli, it is shown that numerical computation of the convolution integral does not require storing and repeatedly calculating the strain history, so that the computational cost is dramatically reduced. To validate the viscoelastic DM2 code [1], stress wave propagation in a Maxwell rod with one end subjected to a constant stress loading is simulated. The results agree excellently with those from the characteristics calculation. The code is then used to investigate the problem of meso-scale damage in a plastic-bonded explosive under shock loading. Results not only show "compression damage", but also reveal a complex damage evolution. They demonstrate a unique capability of DEM in modeling heterogeneous materials.

  13. Lidar temperature profiling - Performance simulations of Mason's method

    NASA Technical Reports Server (NTRS)

    Schwemmer, G. K.; Wilkerson, T. D.

    1979-01-01

    In Mason's method (1975) atmospheric temperatures are inferred from a measure of the Boltzmann distribution of rotational states in one of the vibrational bands of O2. Differential absorption is measured using three tunable, narrowband pulse lasers. The outputs of two are tuned to wavelengths at the centers of absorption lines at either end of a particular branch in the band; the third wavelength is in a region of no absorption. The temperature-altitude profile can be calculated from the ratio of the two line absorption coefficients plus a priori knowledge of the line parameters. In the present paper, computer simulations of various lidar configurations are made, using different line pairs in the atmospheric bands of O2 (approximately 630, 690, and 760 nm). Simulated results are presented for temperature profiles measured from a Space Shuttle lidar.
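
    The core of the retrieval is the inversion of a Boltzmann factor: the ratio of the two line absorption coefficients depends on temperature as R = C*exp((E2 - E1)/(kB*T)), where E1 and E2 are the lower-state energies of the two lines and C is a temperature-independent line-strength ratio known a priori. The Python sketch below performs this inversion with invented energies and C; real values would come from the O2 line parameters.

      import numpy as np

      KB = 1.380649e-23  # Boltzmann constant, J/K

      # Illustrative lower-state energies (J) for the two lines and the
      # temperature-independent line-strength ratio C; values are made up
      # for the sketch, not taken from the O2 band tables.
      E1, E2 = 2.0e-21, 6.0e-21
      C = 1.0

      def temperature_from_ratio(R):
          """Invert the Boltzmann factor R = C * exp((E2 - E1) / (KB * T))."""
          return (E2 - E1) / (KB * np.log(R / C))

      # Round trip: synthesize the ratio at 250 K and recover the temperature.
      T_true = 250.0
      R = C * np.exp((E2 - E1) / (KB * T_true))
      print(f"recovered T = {temperature_from_ratio(R):.1f} K")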

  14. Discrete Element Method Simulations of Ice Floe Dynamics

    NASA Astrophysics Data System (ADS)

    Calantoni, J.; Bateman, S. P.; Shi, F.; Orzech, M.; Veeramony, J.

    2014-12-01

    Ice floes were modeled using LIGGGHTS, an open source discrete element method (DEM) software, where individual elements were bonded together to make floes. The bonds were allowed to break with a critical stress calibrated to existing laboratory measurements for the compressive, tensile, and flexural strength of ice floes. The DEM allows for heterogeneous shape and size distributions of the ice floes to evolve over time. We simulated the interaction between sea ice and ocean waves in the marginal ice zone using a coupled wave-ice system. The waves were modeled with NHWAVE, a non-hydrostatic wave model that predicts instantaneous surface elevation and the three-dimensional flow field. The ice floes and waves were coupled through buoyancy and drag forces. Preliminary comparisons with field and laboratory measurements for coupled simulations will be presented.
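
    A hedged sketch of the bonded-DEM fracture criterion described above: each bond transmits force until its stress exceeds a critical strength, with separate tensile and compressive limits standing in for the calibrated laboratory values. The numbers and data structures are illustrative, not LIGGGHTS input.

      # Minimal sketch of the bonded-DEM idea: pairs of elements share a
      # bond that transmits force until the bond stress exceeds a critical
      # strength calibrated to ice. The criterion and numbers below are
      # illustrative placeholders.

      CRIT_TENSILE = 0.7e6      # Pa, illustrative tensile strength of an ice bond
      CRIT_COMPRESSIVE = 2.0e6  # Pa, illustrative compressive strength
      BOND_AREA = 1e-2          # m^2, cross-section carried by one bond

      def bond_intact(normal_force):
          """Return False once the bond stress exceeds its critical value.
          Positive normal_force = tension, negative = compression."""
          stress = normal_force / BOND_AREA
          if stress > 0:
              return stress < CRIT_TENSILE
          return -stress < CRIT_COMPRESSIVE

      bonds = {(0, 1): True, (1, 2): True}
      forces = {(0, 1): 5.0e3, (1, 2): 9.0e3}   # N, current bond normal forces
      for pair, f in forces.items():
          if bonds[pair] and not bond_intact(f):
              bonds[pair] = False               # floe fractures along this bond
      print(bonds)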

  15. Modeling and simulation method of target echo energy detection in laser simulation system

    NASA Astrophysics Data System (ADS)

    Cheng, Ye; Lv, Pin; Sun, Quan

    2015-10-01

    When using numerical simulation method study laser system, modeling and simulation energy distribution of the target echo on the detector is studied in order to achieve closed-loop optical path. From the perspective of Fresnel formula, using bidirectional reflectance distribution function (BRDF) model to calculate the intensity distribution of the target reflection; calculation of light vector angle expression reflects the phase change between reflected light and incident light when light travelling in a single medium surface. Setting position parameters and attitude parameters of different components in the laser simulation system, through the calculation of geometric relationship, the energy distribution under the view of the detector is achieved. Target surface shape was respectively set for planar, spherical and cylindrical. Analyzed the influence of targets surface roughness root mean square (RMS), zenith angle and azimuth angle of the incident light to targets reflection characteristics respectively. Results show that this method can accurately achieve the detection simulation of simple geometric shape surface target in laser system.

  16. Performance optimization of web-based medical simulation.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2013-01-01

    This paper presents a technique for performance optimization of multimodal interactive web-based medical simulation. A web-based simulation framework is promising for easy access and wide dissemination of medical simulation. However, the real-time performance of the simulation highly depends on the hardware capability of the client side. Providing consistent simulation across different hardware is critical for reliable medical simulation. This paper proposes a non-linear mixed integer programming model to optimize the performance of visualization and physics computation while considering hardware capability and application-specific constraints. The optimization model identifies and parameterizes the rendering and computing capabilities of the client hardware using an exploratory proxy code. The parameters are utilized to determine the optimized simulation conditions, including texture sizes, mesh sizes and canvas resolution. The test results show that the optimization model not only achieves the desired frames per second but also resolves visual artifacts due to low-performance hardware. PMID:23400151
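
    As a toy stand-in for the optimization described above, the sketch below exhaustively searches discrete texture, mesh, and canvas settings for the best quality score that still meets a frame-time budget predicted by a hypothetical cost model fitted from the proxy benchmark. The paper instead solves a non-linear mixed integer program; every number here is invented.

      import itertools

      # Toy stand-in for the optimization: choose texture size, mesh size,
      # and canvas resolution to maximize a crude quality score subject to
      # a frame-rate constraint. The cost model and numbers are invented.

      TEXTURES = [256, 512, 1024]
      MESHES = [1000, 4000, 16000]      # element counts
      CANVASES = [(640, 480), (1280, 720), (1920, 1080)]
      BUDGET = 1.0 / 30.0               # seconds per frame for 30 FPS

      def frame_time(tex, mesh, canvas):
          """Hypothetical per-frame cost fitted from the proxy benchmark."""
          w, h = canvas
          return 1e-9 * tex * tex + 4e-7 * mesh + 6e-9 * w * h

      best = max(
          (c for c in itertools.product(TEXTURES, MESHES, CANVASES)
           if frame_time(*c) <= BUDGET),
          key=lambda c: c[0] * c[1] * c[2][0] * c[2][1],   # crude quality score
      )
      print("chosen settings:", best)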

  17. A hybrid method for flood simulation in small catchments combining hydrodynamic and hydrological techniques

    NASA Astrophysics Data System (ADS)

    Bellos, Vasilis; Tsakiris, George

    2016-09-01

    The study presents a new hybrid method for the simulation of flood events in small catchments. It combines a physically-based two-dimensional hydrodynamic model and the hydrological unit hydrograph theory. Unit hydrographs are derived using the FLOW-R2D model, which is based on the full form of the two-dimensional Shallow Water Equations, solved by a modified McCormack numerical scheme. The method is tested on a small catchment in a suburb of Athens, Greece, for a storm event which occurred in February 2013. The catchment is divided into three friction zones and unit hydrographs of 15 and 30 min are produced. The infiltration process is simulated by the empirical Kostiakov equation and the Green-Ampt model. The results from the implementation of the proposed hybrid method are compared with recorded data at the hydrometric station at the outlet of the catchment and with the results derived from the fully hydrodynamic model FLOW-R2D. It is concluded that, for the case studied, the proposed hybrid method produces results close to those of the fully hydrodynamic simulation at a substantially shorter computational time. This finding, if further verified in a variety of case studies, can be useful in devising effective hybrid tools for two-dimensional flood simulations which lead to accurate and considerably faster results than those achieved by fully hydrodynamic simulations.
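
    Once the unit hydrographs have been derived from the FLOW-R2D runs, the hydrological half of the hybrid method reduces to a discrete convolution of the effective-rainfall series with the unit hydrograph ordinates. A minimal Python sketch with illustrative ordinates follows.

      import numpy as np

      # Sketch of the hydrological step: the outlet hydrograph is the
      # discrete convolution of the effective-rainfall series with the
      # unit hydrograph ordinates. The numbers below are illustrative;
      # in the paper the ordinates come from the FLOW-R2D runs.

      uh = np.array([0.0, 0.2, 0.5, 0.2, 0.1])   # m^3/s per mm, 15-min ordinates
      excess_rain = np.array([3.0, 8.0, 2.0])    # mm of effective rainfall per step

      # Linear-system assumption of unit hydrograph theory: scale and lag.
      outflow = np.convolve(excess_rain, uh)
      print(outflow)   # simulated direct-runoff hydrograph at the outlet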

  18. Simulated evaluation of an intraoperative surface modeling method for catheter ablation by a real phantom simulation experiment

    NASA Astrophysics Data System (ADS)

    Sun, Deyu; Rettmann, Maryam E.; Packer, Douglas; Robb, Richard A.; Holmes, David R.

    2015-03-01

    In this work, we propose a phantom experiment method to quantitatively evaluate an intraoperative left-atrial modeling update method. In prior work, we proposed an update procedure which updates the preoperative surface model with information from real-time tracked 2D ultrasound. Prior studies did not evaluate the reconstruction using an anthropomorphic phantom. In this approach, a silicone heart phantom (based on a high-resolution human atrial surface model reconstructed from CT images) was made to serve as the simulated atria. A surface model of the left atrium of the phantom was deformed by a morphological operation - simulating the shape difference caused by organ deformation between pre-operative scanning and intra-operative guidance. During the simulated procedure, a tracked ultrasound catheter was inserted into the right atrial phantom - scanning the left atrial phantom in a manner mimicking the cardiac ablation procedure. By merging the preoperative model and the intraoperative ultrasound images, an intraoperative left atrial model was reconstructed. According to the results, the reconstruction error of the modeling method is smaller than the initial geometric difference caused by organ deformation. As the area of the left atrial phantom scanned by ultrasound increases, the reconstruction error of the intraoperative surface model decreases. The study validated the efficacy of the modeling method.

  19. Simulated evaluation of an intraoperative surface modeling method for catheter ablation by a real phantom simulation experiment

    PubMed Central

    Sun, Deyu; Rettmann, Maryam E.; Packer, Douglas; Robb, Richard A.; Holmes, David R.

    2015-01-01

    In this work, we propose a phantom experiment method to quantitatively evaluate an intraoperative left-atrial modeling update method. In prior work, we proposed an update procedure which updates the preoperative surface model with information from real-time tracked 2D ultrasound. Prior studies did not evaluate the reconstruction using an anthropomorphic phantom. In this approach, a silicone heart phantom (based on a high-resolution human atrial surface model reconstructed from CT images) was made to serve as the simulated atria. A surface model of the left atrium of the phantom was deformed by a morphological operation – simulating the shape difference caused by organ deformation between pre-operative scanning and intra-operative guidance. During the simulated procedure, a tracked ultrasound catheter was inserted into the right atrial phantom – scanning the left atrial phantom in a manner mimicking the cardiac ablation procedure. By merging the preoperative model and the intraoperative ultrasound images, an intraoperative left atrial model was reconstructed. According to the results, the reconstruction error of the modeling method is smaller than the initial geometric difference caused by organ deformation. As the area of the left atrial phantom scanned by ultrasound increases, the reconstruction error of the intraoperative surface model decreases. The study validated the efficacy of the modeling method. PMID:26405371

  20. Tutorial on agent-based modeling and simulation.

    SciTech Connect

    Macal, C. M.; North, M. J.; Decision and Information Sciences

    2005-01-01

    Agent-based modeling and simulation (ABMS) is a new approach to modeling systems composed of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS is a third way of doing science, besides deductive and inductive reasoning. Computational advances have made possible a growing number of agent-based applications in a variety of fields. Applications range from modeling agent behavior in the stock market and supply chains, to predicting the spread of epidemics and the threat of bio-warfare, from modeling consumer behavior to understanding the fall of ancient civilizations, to name a few. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing ABMS models, and provides some thoughts on the relationship between ABMS and traditional modeling techniques.

  1. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what the most suitable criteria for adaptation are. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times.

  2. Lattice-Boltzmann method for the simulation of multiphase mass transfer and reaction of dilute species

    NASA Astrophysics Data System (ADS)

    Riaud, Antoine; Zhao, Shufang; Wang, Kai; Cheng, Yi; Luo, Guangsheng

    2014-05-01

    Despite the popularity of the lattice-Boltzmann method (LBM) in simulating multiphase flows, a general approach for modeling dilute species in multiphase systems is still missing. In this report we propose to modify the collision operator of the solute by introducing a modified redistribution scheme. This operator is based on local fluid variables and keeps the parallelism inherent to LBM. After deriving macroscopic transport equations, an analytical equation of state of the solute is exhibited and the method is shown to constitute a unified framework for simulating arbitrary solute distributions between phases, including single-phase soluble compounds, amphiphilic species with a partition coefficient, and surface-adsorbed compounds.

  3. On computer-intensive simulation and estimation methods for rare-event analysis in epidemic models.

    PubMed

    Clémençon, Stéphan; Cousien, Anthony; Felipe, Miraine Dávila; Tran, Viet Chi

    2015-12-10

    This article focuses, in the context of epidemic models, on rare events that may possibly correspond to crisis situations from the perspective of public health. In general, no closed analytic form for their occurrence probabilities is available, and crude Monte Carlo procedures fail. We show how recent intensive computer simulation techniques, such as interacting branching particle methods, can be used for estimation purposes, as well as for generating model paths that correspond to realizations of such events. Applications of these simulation-based methods to several epidemic models fitted from real datasets are also considered and discussed thoroughly. PMID:26242476

  4. Lattice-Boltzmann method for the simulation of multiphase mass transfer and reaction of dilute species.

    PubMed

    Riaud, Antoine; Zhao, Shufang; Wang, Kai; Cheng, Yi; Luo, Guangsheng

    2014-05-01

    Despite the popularity of the lattice-Boltzmann method (LBM) in simulating multiphase flows, a general approach for modeling dilute species in multiphase systems is still missing. In this report we propose to modify the collision operator of the solute by introducing a modified redistribution scheme. This operator is based on local fluid variables and keeps the parallelism inherent to LBM. After deriving macroscopic transport equations, an analytical equation of state of the solute is exhibited and the method is shown to constitute a unified framework for simulating arbitrary solute distributions between phases, including single-phase soluble compounds, amphiphilic species with a partition coefficient, and surface-adsorbed compounds. PMID:25353915

  5. Comparison of Different Methods of Grading a Level Turn Task on a Flight Simulator

    NASA Technical Reports Server (NTRS)

    Heath, Bruce E.; Crier, Tomyka

    2003-01-01

    With the advancements in the computing power of personal computers, PC-based flight simulators and trainers have opened new avenues in the training of airplane pilots. It may be desirable to have the flight simulator make a quantitative evaluation of the progress of a pilot's training, thereby reducing the workload of the flight instructor, who must otherwise watch every flight. In an experiment, university students conducted six different flights, each consisting of two level turns. The flights were three minutes in duration. By evaluating videotapes, two certified flight instructors provided separate letter grades for each turn. These level turns were also evaluated using two other computer-based grading methods. One method determined automated grades based on prescribed tolerances in bank angle, airspeed and altitude. The other method used deviations in altitude and bank angle to compute a performance index and performance grades.
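
    A hedged sketch of the first, tolerance-based grading method: sample the flight state during the turn and grade by the fraction of time spent inside the prescribed bands. The band widths, targets, and letter-grade cutoffs below are invented for illustration; the paper does not publish its exact tolerances.

      # Tolerance-band grading sketch: grade a level turn by how long the
      # pilot stays within prescribed bands. All numbers are invented.

      TOLERANCES = {"bank_deg": 5.0, "airspeed_kt": 10.0, "altitude_ft": 100.0}
      TARGETS = {"bank_deg": 30.0, "airspeed_kt": 95.0, "altitude_ft": 3000.0}

      def grade_turn(samples):
          """samples: list of dicts with bank_deg/airspeed_kt/altitude_ft."""
          in_band = 0
          for s in samples:
              if all(abs(s[k] - TARGETS[k]) <= TOLERANCES[k] for k in TARGETS):
                  in_band += 1
          frac = in_band / len(samples)
          for cutoff, letter in [(0.9, "A"), (0.75, "B"), (0.6, "C"), (0.4, "D")]:
              if frac >= cutoff:
                  return letter
          return "F"

      samples = [{"bank_deg": 31, "airspeed_kt": 97, "altitude_ft": 3030},
                 {"bank_deg": 38, "airspeed_kt": 96, "altitude_ft": 3060}]
      print(grade_turn(samples))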

  6. A Computer-Based Simulation of an Acid-Base Titration

    ERIC Educational Resources Information Center

    Boblick, John M.

    1971-01-01

    Reviews the advantages of computer simulated environments for experiments, referring in particular to acid-base titrations. Includes pre-lab instructions and a sample computer printout of a student's use of an acid-base simulation. Ten references. (PR)

  7. Lattice Boltzmann simulation of rising bubble dynamics using an effective buoyancy method

    NASA Astrophysics Data System (ADS)

    Ngachin, Merlin; Galdamez, Rinaldo G.; Gokaltun, Seckin; Sukop, Michael C.

    2015-08-01

    This study describes the behavior of bubbles rising under gravity using the Shan and Chen-type multicomponent multiphase lattice Boltzmann method (LBM) [X. Shan and H. Chen, Phys. Rev. E 47, 1815 (1993)]. Two-dimensional (2D) single bubble motions were simulated, considering the buoyancy effect, with the topology of the bubble characterized by the nondimensional Eötvös (Eo) and Morton (M) numbers. In this study, a new approach based on the "effective buoyancy" was adopted and proven to be consistent with the expected bubble shape deformation. This approach expands the range of effective density differences between the bubble and the liquid that can be simulated. Based on the balance of forces acting on the bubble, it can deform from a spherical to an ellipsoidal shape, with skirts appearing at high Eo number. A benchmark computational case for qualitative and quantitative validation was performed using COMSOL Multiphysics based on the level set method. Simulations were conducted for 1 ≤ Eo ≤ 100 and 3 × 10-6 ≤ M ≤ 2.73 × 10-3. Interfacial tension was checked through simulations without gravity, where Laplace's law was satisfied. Finally, quantitative analyses based on the terminal rise velocity and the degree of circularity were performed for various Eo and M values. Our results were compared with both the theoretical shape regimes given in the literature and available simulation results.
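
    The two dimensionless groups that organize the shape regimes above are straightforward to compute; the sketch below evaluates them for illustrative air-water-like properties.

      # Eotvos and Morton numbers for a rising bubble; fluid properties
      # are illustrative, not taken from the paper's simulations.

      G = 9.81          # m/s^2
      D = 5e-3          # m, bubble diameter
      RHO_L = 1000.0    # kg/m^3, liquid density
      RHO_G = 1.2       # kg/m^3, gas density
      MU_L = 1e-3       # Pa.s, liquid viscosity
      SIGMA = 0.072     # N/m, interfacial tension

      d_rho = RHO_L - RHO_G
      eo = G * d_rho * D**2 / SIGMA                     # Eotvos number
      mo = G * d_rho * MU_L**4 / (RHO_L**2 * SIGMA**3)  # Morton number
      print(f"Eo = {eo:.2f}, M = {mo:.2e}")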

  8. Simulation on turning aspheric surface method via oscillating feed

    NASA Astrophysics Data System (ADS)

    Kong, Fanxing; Li, Zengqiang; Sun, Tao

    2014-08-01

    It is quite difficult to manufacture optical components that combine a high-gradient ellipsoid and hyperboloid while meeting strict machined-surface requirements. To solve this problem, we present in this paper a turning and forming method based on the oscillating feed of an R-θ layout lathe, analyzing the machining of the ellipsoid segment and the hyperboloid segment separately under oscillating feed. We also calculate the parameters of each tool trajectory during processing and obtain the displacement, velocity, acceleration and other parameters. The simulation results show that this rotary turning method is capable of keeping the cutter on the equidistant line of the meridian cross-section curve of the workpiece while machining a high-gradient aspheric surface, which helps to obtain a high-quality surface. The method also provides a new approach and a theoretical basis for manufacturing high-quality aspheric surfaces and for extending the functionality of available twin-spindle lathes.
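
    The "equidistant line" mentioned above is the offset of the meridian cross-section curve by the tool-nose radius along the local normal, which is where the cutter centre must remain. The sketch below computes such an offset for an illustrative conic aspheric; the coefficients and tool radius are not from the paper.

      import numpy as np

      # Offset (equidistant) curve of an aspheric meridian: shift each
      # point by the tool-nose radius along the unit normal. Parameters
      # are illustrative.

      R_NOSE = 0.5           # mm, tool nose radius
      c, k = 1 / 50.0, -1.2  # curvature and conic constant of the aspheric

      def z(x):
          """Conic aspheric sag."""
          return c * x**2 / (1 + np.sqrt(1 - (1 + k) * c**2 * x**2))

      x = np.linspace(0.0, 20.0, 200)
      dz = np.gradient(z(x), x)                   # slope of the meridian
      norm = np.sqrt(1.0 + dz**2)
      x_off = x - R_NOSE * dz / norm              # offset along the unit normal
      z_off = z(x) + R_NOSE / norm
      print(x_off[:3], z_off[:3])                 # cutter-centre path samples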

  9. A Multi-Stage Method for Connecting Participatory Sensing and Noise Simulations

    PubMed Central

    Hu, Mingyuan; Che, Weitao; Zhang, Qiuju; Luo, Qingli; Lin, Hui

    2015-01-01

    Most simulation-based noise maps are important for official noise assessment but lack local noise characteristics. The main reasons for this lack of information are that official noise simulations only provide information about expected noise levels, which is limited by the use of large-scale monitoring of noise sources, and are updated infrequently. With the emergence of smart cities and ubiquitous sensing, the possible improvements enabled by sensing technologies provide the possibility to resolve this problem. This study proposed an integrated methodology to propel participatory sensing from its current random and distributed sampling origins to professional noise simulation. The aims of this study were to effectively organize the participatory noise data, to dynamically refine the granularity of the noise features on road segments (e.g., different portions of a road segment), and then to provide a reasonable spatio-temporal data foundation to support noise simulations, which can be of help to researchers in understanding how participatory sensing can play a role in smart cities. This study first discusses the potential limitations of the current participatory sensing and simulation-based official noise maps. Next, we explain how participatory noise data can contribute to a simulation-based noise map by providing (1) spatial matching of the participatory noise data to the virtual partitions at a more microscopic level of road networks; (2) multi-temporal scale noise estimations at the spatial level of virtual partitions; and (3) dynamic aggregation of virtual partitions by comparing the noise values at the relevant temporal scale to form a dynamic segmentation of each road segment to support multiple spatio-temporal noise simulations. In this case study, we demonstrate how this method could play a significant role in a simulation-based noise map. Together, these results demonstrate the potential benefits of participatory noise data as dynamic input sources for simulation-based noise maps.

  10. A multi-stage method for connecting participatory sensing and noise simulations.

    PubMed

    Hu, Mingyuan; Che, Weitao; Zhang, Qiuju; Luo, Qingli; Lin, Hui

    2015-01-01

    Most simulation-based noise maps are important for official noise assessment but lack local noise characteristics. The main reasons for this lack of information are that official noise simulations only provide information about expected noise levels, which is limited by the use of large-scale monitoring of noise sources, and are updated infrequently. With the emergence of smart cities and ubiquitous sensing, the possible improvements enabled by sensing technologies provide the possibility to resolve this problem. This study proposed an integrated methodology to propel participatory sensing from its current random and distributed sampling origins to professional noise simulation. The aims of this study were to effectively organize the participatory noise data, to dynamically refine the granularity of the noise features on road segments (e.g., different portions of a road segment), and then to provide a reasonable spatio-temporal data foundation to support noise simulations, which can be of help to researchers in understanding how participatory sensing can play a role in smart cities. This study first discusses the potential limitations of the current participatory sensing and simulation-based official noise maps. Next, we explain how participatory noise data can contribute to a simulation-based noise map by providing (1) spatial matching of the participatory noise data to the virtual partitions at a more microscopic level of road networks; (2) multi-temporal scale noise estimations at the spatial level of virtual partitions; and (3) dynamic aggregation of virtual partitions by comparing the noise values at the relevant temporal scale to form a dynamic segmentation of each road segment to support multiple spatio-temporal noise simulations. In this case study, we demonstrate how this method could play a significant role in a simulation-based noise map. Together, these results demonstrate the potential benefits of participatory noise data as dynamic input sources for simulation-based noise maps.

  11. Method of recovering oil-based fluid

    SciTech Connect

    Brinkley, H.E.

    1993-07-13

    A method is described of recovering oil-based fluid, said method comprising the steps of: applying an oil-based fluid absorbent cloth of man-made fiber to an oil-based fluid, the cloth having at least a portion thereof that is napped so as to raise ends and loops of the man-made fibers and define voids; and absorbing the oil-based fluid into the napped portion of the cloth.

  12. Agent-based modeling to simulate the dengue spread

    NASA Astrophysics Data System (ADS)

    Deng, Chengbin; Tao, Haiyan; Ye, Zhiwei

    2008-10-01

    In this paper, we introduce agent-based modeling (ABM) as a method for simulating the unique process of dengue spread. Dengue is an acute infectious disease with a long history of over 200 years. Unlike diseases that can be transmitted directly from person to person, dengue spreads only through a mosquito vector. There is still no specific effective medicine or vaccine for dengue. The best way to prevent dengue spread is to take precautions beforehand. Thus, it is crucial to detect and study the dynamic process of dengue spread, which closely relates to human-environment interactions, a setting in which ABM works effectively. The model attempts to simulate dengue spread in a more realistic, bottom-up way and to overcome a common limitation of ABM, namely overlooking the influence of geographic and environmental factors. By considering the influence of the environment, Aedes aegypti ecology, and other epidemiological characteristics of dengue spread, ABM can be regarded as a useful way to simulate the whole process and to disclose the essential dynamics of the evolution of dengue spread.
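
    A minimal host-vector sketch in the spirit of the abstract: humans and mosquitoes are individual agents, and infection can pass only human to mosquito to human, never directly between humans. The rates and population sizes are illustrative, and the paper's geographic and environmental layers are omitted.

      import random

      # Minimal host-vector agent sketch: infection only travels
      # human -> mosquito -> human. All parameters are illustrative.

      random.seed(1)
      N_HUMANS, N_MOSQ, STEPS = 200, 1000, 60
      P_BITE = 0.3             # chance a mosquito bites some human each day
      P_H2M, P_M2H = 0.5, 0.5  # transmission probabilities per infectious bite

      humans = ["I"] + ["S"] * (N_HUMANS - 1)   # one initial human case
      mosq = ["S"] * N_MOSQ

      for day in range(STEPS):
          for m in range(N_MOSQ):
              if random.random() < P_BITE:
                  h = random.randrange(N_HUMANS)
                  if mosq[m] == "S" and humans[h] == "I" and random.random() < P_H2M:
                      mosq[m] = "I"             # mosquito acquires the virus
                  elif mosq[m] == "I" and humans[h] == "S" and random.random() < P_M2H:
                      humans[h] = "I"           # human is infected by the bite
      print("infected humans:", humans.count("I"))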

  13. Grid-based Methods in Relativistic Hydrodynamics and Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Martí, José María; Müller, Ewald

    2015-12-01

    An overview of grid-based numerical methods used in relativistic hydrodynamics (RHD) and magnetohydrodynamics (RMHD) is presented. Special emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods. Results of a set of demanding test bench simulations obtained with different numerical methods are compared in an attempt to assess the present capabilities and limits of the various numerical strategies. Applications to three astrophysical phenomena are briefly discussed to motivate the need for and to demonstrate the success of RHD and RMHD simulations in their understanding. The review further provides FORTRAN programs to compute the exact solution of the Riemann problem in RMHD, and to simulate 1D RMHD flows in Cartesian coordinates.

  14. Benchmark Study of 3D Pore-scale Flow and Solute Transport Simulation Methods

    NASA Astrophysics Data System (ADS)

    Scheibe, T. D.; Yang, X.; Mehmani, Y.; Perkins, W. A.; Pasquali, A.; Schoenherr, M.; Kim, K.; Perego, M.; Parks, M. L.; Trask, N.; Balhoff, M.; Richmond, M. C.; Geier, M.; Krafczyk, M.; Luo, L. S.; Tartakovsky, A. M.

    2015-12-01

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that benchmark study to include additional models of the first type based on the immersed-boundary method (IMB), lattice Boltzmann method (LBM), and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries in the manner of PNMs has not been fully determined. We apply all five approaches (FVM-based CFD, IMB, LBM, SPH and PNM) to simulate pore-scale velocity distributions and nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The benchmark study was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support for confidence in a variety of pore-scale modeling methods, and motivates further development and application of pore-scale simulation methods.

  15. Maintain rigid structures in Verlet based Cartesian molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Tao, Peng; Wu, Xiongwu; Brooks, Bernard R.

    2012-10-01

    An algorithm is presented to maintain rigid structures in Verlet-based Cartesian molecular dynamics (MD) simulations. After each unconstrained MD step, the coordinates of selected particles are corrected to maintain rigid structures through an iterative procedure of rotation matrix computation. This algorithm, named SHAPE and implemented in the CHARMM program suite, avoids the calculation of Lagrange multipliers, so that the complexity of computation does not increase with the number of particles in a rigid structure. The implementation of this algorithm does not require significant modification of the propagation integrator, and it can be plugged into any Cartesian-based MD integration scheme. A unique feature of the SHAPE method is that it is interchangeable with SHAKE for any object that can be constrained as a rigid structure using multiple SHAKE constraints. Unlike SHAKE, the SHAPE method can be applied to large linear (with three or more centers) and planar (with four or more centers) rigid bodies. Numerical tests with four model systems, including two proteins, demonstrate that the accuracy and reliability of the SHAPE method are comparable to those of the SHAKE method, but with much wider applicability and greater efficiency.
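
    A hedged sketch of the core correction step: after an unconstrained update, the particles of a rigid group are replaced by the best-fit rigid copy of the reference geometry, obtained here from a least-squares (Kabsch) rotation. This illustrates the rotation-matrix idea only; it is not the SHAPE code in CHARMM.

      import numpy as np

      # Replace the perturbed particles of a rigid group with the best-fit
      # rigid copy of the reference geometry (Kabsch least-squares rotation).

      def rigid_project(ref, moved):
          """Fit the reference geometry to the moved coordinates and return
          the rigidified coordinates. ref, moved: (N, 3) arrays."""
          c_ref = ref.mean(axis=0)
          c_mov = moved.mean(axis=0)
          h = (ref - c_ref).T @ (moved - c_mov)       # covariance matrix
          u, _, vt = np.linalg.svd(h)
          d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflection
          r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
          return (ref - c_ref) @ r.T + c_mov

      ref = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
      moved = ref + 0.05 * np.random.default_rng(0).standard_normal(ref.shape)
      fixed = rigid_project(ref, moved)
      # All pairwise distances of `fixed` now match the reference geometry.
      print(np.allclose(np.linalg.norm(fixed[0] - fixed[1]), 1.0, atol=1e-12))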

  16. Dshell++: A Component Based, Reusable Space System Simulation Framework

    NASA Technical Reports Server (NTRS)

    Lim, Christopher S.; Jain, Abhinandan

    2009-01-01

    This paper describes the multi-mission Dshell++ simulation framework for high fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.

  17. Synchrotron-based EUV lithography illuminator simulator

    DOEpatents

    Naulleau, Patrick P.

    2004-07-27

    A lithographic illuminator to illuminate a reticle to be imaged with a range of angles is provided. The illumination can be employed to generate a pattern in the pupil of the imaging system, where spatial coordinates in the pupil plane correspond to illumination angles in the reticle plane. In particular, a coherent synchrotron beamline is used along with a potentially decoherentizing holographic optical element (HOE), as an experimental EUV illuminator simulation station. The pupil fill is completely defined by a single HOE, thus the system can be easily modified to model a variety of illuminator fill patterns. The HOE can be designed to generate any desired angular spectrum and such a device can serve as the basis for an illuminator simulator.

  18. Limits of simulation based high resolution EBSD.

    PubMed

    Alkorta, Jon

    2013-08-01

    High resolution electron backscattered diffraction (HREBSD) is a novel technique for the relative determination of both orientation and stress state in crystals through digital image correlation techniques. Recent works have tried to use simulated EBSD patterns as reference patterns to obtain the absolute orientation and stress state of crystals. However, a precise calibration of the pattern centre location is needed to avoid the occurrence of phantom stresses. A careful analysis of the projective transformation involved in the formation of EBSD patterns has made it possible to understand these phantom stresses. This geometrical analysis has been confirmed by numerical simulations. The results indicate that certain combinations of crystal strain states and sample locations (pattern centre locations) lead to virtually identical EBSD patterns. This ambiguity makes the problem of solving the absolute stress state of a crystal unfeasible in a single-detector configuration. PMID:23676453

  19. High-order finite element methods for cardiac monodomain simulations.

    PubMed

    Vincent, Kevin P; Gonzales, Matthew J; Gillette, Andrew K; Villongco, Christopher T; Pezzuto, Simone; Omens, Jeffrey H; Holst, Michael J; McCulloch, Andrew D

    2015-01-01

    Computational modeling of tissue-scale cardiac electrophysiology requires numerically converged solutions to avoid spurious artifacts. The steep gradients inherent to cardiac action potential propagation necessitate fine spatial scales and therefore a substantial computational burden. The use of high-order interpolation methods has previously been proposed for these simulations due to their theoretical convergence advantage. In this study, we compare the convergence behavior of linear Lagrange, cubic Hermite, and the newly proposed cubic Hermite-style serendipity interpolation methods for finite element simulations of the cardiac monodomain equation. The high-order methods reach converged solutions with fewer degrees of freedom and longer element edge lengths than traditional linear elements. Additionally, we propose a dimensionless number, the cell Thiele modulus, as a more useful metric for determining solution convergence than element size alone. Finally, we use the cell Thiele modulus to examine convergence criteria for obtaining clinically useful activation patterns for applications such as patient-specific modeling where the total activation time is known a priori. PMID:26300783

  20. High-order finite element methods for cardiac monodomain simulations

    PubMed Central

    Vincent, Kevin P.; Gonzales, Matthew J.; Gillette, Andrew K.; Villongco, Christopher T.; Pezzuto, Simone; Omens, Jeffrey H.; Holst, Michael J.; McCulloch, Andrew D.

    2015-01-01

    Computational modeling of tissue-scale cardiac electrophysiology requires numerically converged solutions to avoid spurious artifacts. The steep gradients inherent to cardiac action potential propagation necessitate fine spatial scales and therefore a substantial computational burden. The use of high-order interpolation methods has previously been proposed for these simulations due to their theoretical convergence advantage. In this study, we compare the convergence behavior of linear Lagrange, cubic Hermite, and the newly proposed cubic Hermite-style serendipity interpolation methods for finite element simulations of the cardiac monodomain equation. The high-order methods reach converged solutions with fewer degrees of freedom and longer element edge lengths than traditional linear elements. Additionally, we propose a dimensionless number, the cell Thiele modulus, as a more useful metric for determining solution convergence than element size alone. Finally, we use the cell Thiele modulus to examine convergence criteria for obtaining clinically useful activation patterns for applications such as patient-specific modeling where the total activation time is known a priori. PMID:26300783

  1. FDCCII-based FDNR simulator topologies

    NASA Astrophysics Data System (ADS)

    Kaçar, Fırat; Yeşil, Abdullah

    2012-02-01

    In this article, three new circuits for realising frequency-dependent negative resistance (FDNR) are proposed. All proposed circuits employ a single fully differential current conveyor, grounded capacitors and a resistor. The proposed circuits consist of a minimum number of passive and active elements, and all of them realise lossless FDNRs. The performance of the proposed FDNR is demonstrated on a third-order Butterworth low-pass filter. Simulation results are included to verify the theory.

  2. The Impact of Content Area Focus on the Effectiveness of a Web-Based Simulation

    ERIC Educational Resources Information Center

    Adcock, Amy B.; Duggan, Molly H.; Watson, Ginger S.; Belfore, Lee A.

    2010-01-01

    This paper describes an assessment of a web-based interview simulation designed to teach empathetic helping skills. The system includes an animated character acting as a client and responses designed to recreate a simulated role-play, a common assessment method used for teaching these skills. The purpose of this study was to determine whether…

  3. Rapid simulation of spatial epidemics: a spectral method.

    PubMed

    Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J

    2015-04-01

    Spatial structure and hence the spatial position of host populations plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast-Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane; the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle-infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. PMID:25659478
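
    The heart of the FSR method is that the force of infection at every susceptible location is the convolution of the isotropic transmission kernel with the "image" of current infection, which FFTs evaluate far faster than pairwise sums. The sketch below does this on a periodic grid with an illustrative Gaussian kernel and a single seed case; kernel scale and rates are invented.

      import numpy as np

      # FFT-based force of infection on a periodic grid: convolve the
      # transmission kernel with the image of current infecteds.

      N = 256                       # grid cells per side
      x = np.arange(N)
      dx = np.minimum(x, N - x)     # wrapped distances for a periodic domain
      r2 = dx[:, None] ** 2 + dx[None, :] ** 2
      kernel = np.exp(-r2 / (2.0 * 4.0 ** 2))    # Gaussian transmission kernel

      infected = np.zeros((N, N))
      infected[N // 2, N // 2] = 1.0             # a single seed infection

      # Force of infection = kernel * infected (cyclic convolution via FFT).
      foi = np.real(np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(infected)))
      p_infection = 1.0 - np.exp(-0.1 * foi)     # per-step infection probability
      print(p_infection.max())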

  4. Lattice-Boltzmann method for the simulation of transport phenomena in charged colloids.

    PubMed

    Horbach, J; Frenkel, D

    2001-12-01

    We present a simulation scheme based on the lattice-Boltzmann method to simulate the dynamics of charged colloids in an electrolyte. In our model we describe the electrostatics on the level of a Poisson-Boltzmann equation and the hydrodynamics of the fluid by the linearized Navier-Stokes equations. We verify our simulation scheme by means of a Chapman-Enskog expansion. Our method is applied to the calculation of the reduced sedimentation velocity U/U(0) for a cubic array of charged spheres in an electrolyte. We show that we recover the analytical solution first derived by Booth [F. Booth, J. Chem. Phys. 22, 1956 (1954)] for a weakly charged, isolated sphere in an unbounded electrolyte. The present method makes it possible to go beyond the Booth theory, and we discuss the dependence of the sedimentation velocity on the charge of the spheres. Finally we compare our results to experimental data. PMID:11736191

  5. A collision-selection rule for a particle simulation method suited to vector computers

    NASA Technical Reports Server (NTRS)

    Baganoff, D.; Mcdonald, J. D.

    1990-01-01

    A theory is developed for a selection rule governing collisions in a particle simulation of rarefied gas-dynamic flows. The selection rule leads to an algorithmic form highly compatible with fine grain parallel decomposition, allowing for efficient utilization of supercomputers having vector or massively parallel single instruction multiple data architectures. A comparison of shock-wave profiles obtained using both the selection rule and Bird's direct simulation Monte Carlo (DSMC) method shows excellent agreement. The equation on which the selection rule is based is shown to be directly related to the time-counter procedure in the DSMC method. The results of several example simulations of representative rarefied flows are presented, for which the number of particles used ranged from 10^6 to 10^7, demonstrating the greatly improved computational efficiency of the method.

  6. Structure identification methods for atomistic simulations of crystalline materials

    DOE PAGES Beta

    Stukowski, Alexander

    2012-05-28

    Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis and Voronoi analysis. In addition we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.

  7. The Immersed Interface Method for Insect Flight Simulation

    NASA Astrophysics Data System (ADS)

    Xu, Sheng

    2008-11-01

    The effect of a fluid-solid interface can be represented as a singular force in the Navier-Stokes equations. Two problems arise from this representation. One is how to calculate the force density, and the other is how to treat the force singularity. In the immersed interface method, the latter is solved with second-order accuracy and a sharp fluid-solid interface by incorporating singularity-induced flow jump conditions into the discretization schemes. This talk focuses on the former problem. In particular, I will present approaches to calculating the force density for both flexible and rigid solids. Results from insect flight simulation will be shown to demonstrate the approaches.

  8. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  9. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  10. A methodology towards virtualisation-based high performance simulation platform supporting multidisciplinary design of complex products

    NASA Astrophysics Data System (ADS)

    Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin

    2012-08-01

    Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise a HPS platform. This research is driven by the issues concerning large-scale simulation resources deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources and highly reliable simulation with fault tolerance. A framework of virtualisation-based simulation platform (VSIM) is first proposed. Then the article investigates and discusses key approaches in VSIM, including simulation resources modelling, a method to automatically deploy simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction and (3) achieve fault-tolerant simulation.

  11. Handbook of Scaling Methods in Aquatic Ecology: Measurement, Analysis, Simulation

    NASA Astrophysics Data System (ADS)

    Marrasé, Celia

    2004-03-01

    Researchers in aquatic sciences have long been interested in describing temporal and biological heterogeneities at different observation scales. During the 1970s, scaling studies received a boost from the application of spectral analysis to ecological sciences. Since then, new insights have evolved in parallel with advances in observation technologies and computing power. In particular, during the last two decades, novel theoretical achievements were facilitated by the use of microstructure profilers, the application of mathematical tools derived from fractal and wavelet analyses, and the increase in computing power that allowed more complex simulations. The idea of publishing the Handbook of Scaling Methods in Aquatic Ecology arose out of a special session of the 2001 Aquatic Science Meeting of the American Society of Limnology and Oceanography. The publication of the book is timely, because it compiles a good amount of the work done in these last two decades. The book comprises three sections: measurements, analysis, and simulation. Each contains some review chapters and a number of more specialized contributions. The contents are multidisciplinary and focus on biological and physical processes and their interactions over a broad range of scales, from micro-layers to ocean basins. The handbook topics include high-resolution observation methodologies, as well as applications of different mathematical tools for the analysis and simulation of spatial structures, time variability of physical and biological processes, and individual organism behavior. The scientific background of the authors is highly diverse, ensuring broad interest for the scientific community.

  12. PRATHAM: Parallel Thermal Hydraulics Simulations using Advanced Mesoscopic Methods

    SciTech Connect

    Joshi, Abhijit S; Jain, Prashant K; Mudrich, Jaime A; Popov, Emilian L

    2012-01-01

    At the Oak Ridge National Laboratory, efforts are under way to develop a 3D, parallel LBM code called PRATHAM (PaRAllel Thermal Hydraulic simulations using Advanced Mesoscopic Methods) to demonstrate the accuracy and scalability of LBM for turbulent flow simulations in nuclear applications. The code has been developed using FORTRAN-90 and parallelized using the Message Passing Interface (MPI) library. The Silo library is used to compact and write the data files, and the VisIt visualization software is used to post-process the simulation data in parallel. Both the single relaxation time (SRT) and multi relaxation time (MRT) LBM schemes have been implemented in PRATHAM. To capture turbulence without prohibitively increasing the grid resolution requirements, an LES approach [5] is adopted, allowing large-scale eddies to be numerically resolved while modeling the smaller (subgrid) eddies. In this work, a Smagorinsky model has been used, which modifies the fluid viscosity by an additional eddy viscosity depending on the magnitude of the rate-of-strain tensor. In LBM, this is achieved by locally varying the relaxation time of the fluid.
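
    In lattice units (cs^2 = 1/3, dx = dt = 1), the local relaxation time follows directly from the total viscosity, molecular plus Smagorinsky eddy viscosity. The sketch below shows this closure with an illustrative Smagorinsky constant and a synthetic strain-rate field; in a real LBM code the strain-rate magnitude would be obtained from the non-equilibrium stress tensor.

      import numpy as np

      # Smagorinsky closure in lattice units: eddy viscosity from |S|,
      # then the local BGK relaxation time from the total viscosity.
      # CS and the strain field are illustrative.

      CS = 0.16          # Smagorinsky constant
      DELTA = 1.0        # filter width = lattice spacing
      NU0 = 1e-4         # molecular (lattice) viscosity

      def local_tau(strain_rate_mag):
          """BGK relaxation time from molecular + eddy viscosity."""
          nu_t = (CS * DELTA) ** 2 * strain_rate_mag
          return 3.0 * (NU0 + nu_t) + 0.5

      # |S| field, e.g. obtained from the non-equilibrium stress tensor.
      s_mag = np.abs(np.random.default_rng(0).normal(0.0, 1e-3, size=(64, 64)))
      tau = local_tau(s_mag)
      print(tau.min(), tau.max())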

  13. Comparative Study on Two Melting Simulation Methods: Melting Curve of Gold

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-Li; Sun, Jun-Sheng; Li, Rui; Zhang, Xiu-Lu; Cai, Ling-Cang

    2016-05-01

    Melting simulation methods are of crucial importance to determining the melting temperature of materials efficiently. A high-efficiency melting simulation method saves much simulation time and computational resources. To compare the efficiency of our newly developed shock melting (SM) method with that of the well-established two-phase (TP) method, we calculate the high-pressure melting curve of Au using the two methods based on optimally selected interatomic potentials. Although we use only 640 atoms to determine the melting temperature of Au in the SM method, the resulting melting curve accords very well with the results from the TP method using many more atoms. This shows that a much smaller system size in the SM method can still achieve a fully converged melting curve compared with the TP method, implying the robustness and efficiency of the SM method. Supported by the National Natural Science Foundation of China under Grant No. 41574076 and the NSAF of China under Grant No. U1230201/A06, and the Young Core Teacher Scheme of Henan Province under Grant No. 2014GGJS-108.

  14. Pedestrian simulation and distribution in urban space based on visibility analysis and agent simulation

    NASA Astrophysics Data System (ADS)

    Ying, Shen; Li, Lin; Gao, Yurong

    2009-10-01

    Spatial visibility analysis is an important avenue for studying pedestrian behavior, because visual perception of space is the most direct way to acquire environmental information and guide one's actions. Based on agent modeling and a top-down method, the paper develops a framework for analyzing pedestrian flow as a function of visibility. We use viewsheds in the visibility analysis and impose the resulting parameters on the agent simulation to direct agents' motion in urban space. We analyze pedestrian behavior at the micro and macro scales of urban open space. Individual agents use visual affordances to determine their direction of motion at the micro scale of urban streets and districts. At the macro scale, we compare the distribution of pedestrian flow with the spatial configuration of the urban environment and mine the relationship between pedestrian flow and the distribution of urban facilities and functions. The paper first computes visibility at vantage points in urban open space, such as the street network, and quantifies the visibility parameters. The agents use these visibility parameters to decide their direction of motion, and the pedestrian flow finally reaches a stable state in the urban environment through the multi-agent simulation. The paper then compares the morphology of the visibility parameters and the pedestrian distribution with urban functions and facility layout to confirm their consistency, which can support decision-making in urban design.

  15. A coupled finite-element, boundary-integral method for simulating ultrasonic flowmeters.

    PubMed

    Bezdĕk, Michal; Landes, Hermann; Rieder, Alfred; Lerch, Reinhard

    2007-03-01

    Today's most popular technology of ultrasonic flow measurement is based on the transit-time principle. In this paper, a numerical simulation technique applicable to the analysis of transit-time flowmeters is presented. A flowmeter represents a large simulation problem that also requires computation of acoustic fields in moving media. For this purpose, a novel boundary integral method, the Helmholtz integral-ray tracing method (HIRM), is derived and validated. HIRM is applicable to acoustic radiation problems in arbitrary mean flows at low Mach numbers and significantly reduces the memory demands in comparison with the finite-element method (FEM). It relies on an approximate free-space Green's function which makes use of the ray tracing technique. For simulation of practical acoustic devices, a hybrid simulation scheme consisting of FEM and HIRM is proposed. The coupling of FEM and HIRM is facilitated by means of absorbing boundaries in combination with a new, reflection-free, acoustic-source formulation. Using the coupled FEM-HIRM scheme, a full three-dimensional (3-D) simulation of a complete transit-time flowmeter is performed for the first time. The obtained simulation results are in good agreement with measurements both at zero flow and under flow conditions. PMID:17375833

  16. An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

    SciTech Connect

    Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

    1998-11-01

    The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

  17. A pseudo non-linear method for fast simulations of ultrasonic reverberation

    NASA Astrophysics Data System (ADS)

    Byram, Brett; Shu, Jasmine

    2016-04-01

    There is growing evidence that reverberation is a primary mechanism of clinical image degradation. This has led to a number of new approaches to suppress reverberation, including our recently proposed model-based algorithm. The algorithm can work well, but it must be trained to reject clutter, while preserving the signal of interest. One way to do this is to use simulated data, but current simulation methods that include multipath scattering are slow and do not readily allow separation of clutter and signal. Here, we propose a more convenient pseudo non-linear simulation method that utilizes existing linear simulation tools like Field II. The approach functions by linearly simulating scattered wavefronts at shallow depths, and then time-shifting these wavefronts to deeper depths. The simulation only requires specification of the first and last scatterers encountered by a multiply reflected wave and a third point that establishes the arrival time of the reverberation. To maintain appropriate 2D correlation, this set of three points is fixed for the entire simulation and is shifted as with a normal linear simulation scattering field. We show example images, and we compute first order speckle statistics as a function of scatterer density. We perform ex vivo measures of reverberation where we find that the average speckle SNR is 1.73, which we can simulate with 2 reverberation scatterers per resolution cell. We also compare ex vivo lateral speckle statistics to those from linear and pseudo non-linear simulation data. Finally, the van Cittert-Zernike curve was shown to match empirical and theoretical observations.
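
    A minimal sketch of the time-shifting step described above, assuming Field II style channel data laid out as samples x channels; the sampling rate, sound speed, and depths are illustrative values, not parameters from the paper.

```python
import numpy as np

FS = 40e6     # assumed RF sampling rate, Hz
C = 1540.0    # assumed speed of sound in tissue, m/s

def shift_wavefront(rf, z_shallow, z_apparent):
    """Delay channel data 'rf' (samples x channels), linearly simulated for
    a scatterer at depth z_shallow, so the wavefront arrives at the two-way
    time of the apparent reverberation depth z_apparent (deeper shifts only)."""
    dt = 2.0 * (z_apparent - z_shallow) / C      # extra two-way delay, s
    shift = int(round(dt * FS))                  # delay in whole samples
    out = np.zeros_like(rf)
    if 0 <= shift < rf.shape[0]:
        out[shift:, :] = rf[: rf.shape[0] - shift, :]
    return out

# Toy usage: a shallow wavefront moved to appear 20 mm deeper.
rf = np.random.randn(4096, 64).astype(np.float32)
rf_deep = shift_wavefront(rf, z_shallow=0.01, z_apparent=0.03)
```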

  18. Development of land surface reflectance models based on multiscale simulation

    NASA Astrophysics Data System (ADS)

    Goodenough, Adam A.; Brown, Scott D.

    2015-05-01

    Modeling and simulation of Earth imaging sensors with large spatial coverage necessitates an understanding of how photons interact with individual land surface processes at an aggregate level. For example, the leaf angle distribution of a deciduous forest canopy has a significant impact on the path of a single photon as it is scattered among the leaves and, consequently, a significant impact on the observed bidirectional reflectance distribution function (BRDF) of the canopy as a whole. In particular, simulation of imagery of heterogeneous scenes for many multispectral/hyperspectral applications requires detailed modeling of regions of the spectrum where many orders of scattering are required due to both high reflectance and transmittance. Radiative transfer modeling based on ray tracing, hybrid Monte Carlo techniques and detailed geometric and optical models of land cover means that it is possible to build effective, aggregate optical models with parameters such as species, spatial distribution, and underlying terrain variation. This paper examines the capability of the Digital Image and Remote Sensing Image Generation (DIRSIG) model to generate BRDF data representing land surfaces at large scale from modeling at a much smaller scale. We describe robust methods for generating optical property models effectively in DIRSIG and present new tools for facilitating the process. The methods and results for forest canopies are described relative to the RAdiation transfer Model Intercomparison (RAMI) benchmark scenes, which also forms the basis for an evaluation of the approach. Additional applications and examples are presented, representing different types of land cover.

  19. Richardson Extrapolation Based Error Estimation for Stochastic Kinetic Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Cartwright, Keigh

    2014-10-01

    To have a high degree of confidence in simulations one needs code verification, validation, solution verification, and uncertainty quantification. This talk will focus on numerical error estimation for stochastic kinetic plasma simulations using the Particle-In-Cell (PIC) method and how it impacts code verification and validation. A technique is developed to determine the fully converged solution, with error bounds, from the stochastic output of a Particle-In-Cell code with multiple convergence parameters (e.g., Δt, Δx, and macro-particle weight). The core of this method is a multi-parameter regression based on a second-order error convergence model with arbitrary convergence rates. Stochastic uncertainties in the data set are propagated through the model using standard bootstrapping on redundant data sets, while a suite of nine regression models introduces uncertainties in the fitting process. These techniques are demonstrated on a Vlasov-Poisson Child-Langmuir diode, the relaxation of an electron distribution to a Maxwellian due to collisions, and undriven sheaths and pre-sheaths. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
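
    As an illustration of the regression at the core of such a method, the sketch below fits a two-parameter power-law error model and bootstraps the data set to put an uncertainty on the extrapolated solution; the model form, starting guesses, and replicate count are assumptions of the sketch, and the talk's suite of nine regression models is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def error_model(X, f0, c1, c2, p1, p2):
    """Converged value f0 plus power-law error terms in two
    convergence parameters (e.g. time step and cell size)."""
    h1, h2 = X
    return f0 + c1 * h1**p1 + c2 * h2**p2

def extrapolate(h1, h2, f, n_boot=200, seed=0):
    """Fit the error model, bootstrapping the data set to put an
    uncertainty on the extrapolated (h -> 0) solution f0."""
    rng = np.random.default_rng(seed)
    p0 = [f[np.argmin(h1 * h2)], 1.0, 1.0, 2.0, 2.0]  # start near 2nd order
    f0_samples = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(f), len(f))  # resample with replacement
        try:
            popt, _ = curve_fit(error_model, (h1[idx], h2[idx]), f[idx],
                                p0=p0, maxfev=10000)
            f0_samples.append(popt[0])
        except RuntimeError:
            continue                           # skip failed replicates
    return np.mean(f0_samples), np.std(f0_samples)

# Synthetic demo: true answer 1.0, second-order errors in both parameters.
rng = np.random.default_rng(1)
h1 = np.repeat([0.1, 0.05, 0.025], 3)
h2 = np.tile([0.2, 0.1, 0.05], 3)
f = 1.0 + 3.0 * h1**2 + 0.5 * h2**2 + 1e-4 * rng.standard_normal(9)
print(extrapolate(h1, h2, f))
```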

  20. GPU-based Efficient Realistic Techniques for Bleeding and Smoke Generation in Surgical Simulators

    PubMed Central

    Halic, Tansel; Sankaranarayanan, Ganesh; De, Suvranu

    2010-01-01

    Background In actual surgery, smoke and bleeding due to cautery processes provide important visual cues to the surgeon, which have been proposed as factors in surgical skill assessment. While several virtual reality (VR)-based surgical simulators have incorporated effects of bleeding and smoke generation, they are not realistic due to the requirement of real-time performance. To be interactive, the visual update must be performed at a rate of at least 30 Hz and haptic (touch) information must be refreshed at 1 kHz. Simulation of smoke and bleeding is, therefore, either ignored or carried out using highly simplified techniques, since other computationally intensive processes compete for the available CPU resources. Methods In this work, we develop a novel low-cost method to generate realistic bleeding and smoke in VR-based surgical simulators which outsources the computations to the graphics processing unit (GPU), thus freeing up the CPU for other time-critical tasks. This method is independent of the complexity of the organ models in the virtual environment. User studies were performed using 20 subjects to determine the visual quality of the simulations compared to real surgical videos. Results The smoke and bleeding simulations were implemented as part of a Laparoscopic Adjustable Gastric Banding (LAGB) simulator. For the bleeding simulation, the original shader-based implementation did not incur noticeable overhead. However, for smoke generation, an I/O (input/output) bottleneck was observed, and two different methods were developed to overcome this limitation. Based on our benchmark results, a buffered approach performed better than a pipelined approach and could support up to 15 video streams in real time. Human subject studies showed that the visual realism of the simulations was as good as in real surgery (median rating of 4 on a 5-point Likert scale). Conclusions Based on the performance results and the subject study, both the bleeding and smoke simulations were concluded to be sufficiently realistic for use in VR-based surgical simulators.
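
    As a schematic of the buffered approach, the sketch below shows a generic double-buffering pattern in which the readback of one frame overlaps the rendering of the next; the GPU calls are stand-in stubs for illustration, not the simulator's actual shader code.

```python
import numpy as np

W, H = 512, 512  # illustrative smoke-texture size

def render_smoke_on_gpu(frame_idx):
    """Stand-in stub for the GPU shader pass (hypothetical)."""
    return np.random.rand(H, W).astype(np.float32)

def composite(frame):
    """Stand-in stub for blending the smoke layer into the scene."""
    return frame.clip(0.0, 1.0)

# Buffered scheme: the frame produced at step i is consumed at step i + 1,
# so reading back one buffer overlaps with the GPU filling the other and
# the transfer never stalls the visual update.
buffers = [None, None]
for i in range(4):
    write, read = i % 2, (i + 1) % 2
    buffers[write] = render_smoke_on_gpu(i)
    if buffers[read] is not None:
        composite(buffers[read])
```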