Simulation Method for Wind Tunnel Based Virtual Flight Testing
NASA Astrophysics Data System (ADS)
Li, Hao; Zhao, Zhong-Liang; Fan, Zhao-Lin
Wind Tunnel Based Virtual Flight Testing (WTBVFT) can replicate actual free flight in the wind tunnel and explore the nonlinear coupling between aerodynamics and flight dynamics during a maneuver. The basic WTBVFT concept is to mount the test model on a specialized support system that allows the model to rotate freely, while the aerodynamic loads and motion parameters are measured simultaneously during the motion. Simulations of the 3-DOF pitching motion of a typical missile in the vertical plane are performed with open-loop and closed-loop control methods. The objective is to analyze the effects of the main differences between WTBVFT and actual free flight, and to study the simulation method for WTBVFT. Preliminary simulation analyses have produced positive results, indicating that WTBVFT using a closed-loop autopilot with pitch angular rate feedback is able to replicate actual free-flight behavior within acceptable differences.
Optimal grid-based methods for thin film micromagnetics simulations
NASA Astrophysics Data System (ADS)
Muratov, C. B.; Osipov, V. V.
2006-08-01
Thin film micromagnetics are a broad class of materials with many technological applications, primarily in magnetic memory. The dynamics of the magnetization distribution in these materials is traditionally modeled by the Landau-Lifshitz-Gilbert (LLG) equation. Numerical simulations of the LLG equation are complicated by the need to compute the stray field due to inhomogeneities in the magnetization, which is the chief bottleneck for simulation speed. Here, we introduce a new method for computing the stray field in a sample for a reduced model of ultra-thin film micromagnetics. The method uses a recently proposed idea of optimal finite difference grids for approximating Neumann-to-Dirichlet maps; its advantages are the ability to use a non-uniform discretization in the film plane and an efficient treatment of the boundary conditions at infinity for the stray field. We present several examples of the method's implementation and give a detailed comparison of its performance against conventional FFT-based methods for studying domain wall structures.
Study of Flapping Flight Using Discrete Vortex Method Based Simulations
NASA Astrophysics Data System (ADS)
Devranjan, S.; Jalikop, Shreyas V.; Sreenivas, K. R.
2013-12-01
In recent times, research in the area of flapping flight has attracted renewed interest, with an endeavor to use this mechanism in Micro Air Vehicles (MAVs). For a sustained, high-endurance flight with a larger payload capacity, a simple and efficient flapping kinematics must be identified. In this paper, we use flow visualizations and Discrete Vortex Method (DVM) based simulations to study flapping flight. Our results highlight that simple flapping kinematics with a down-stroke period (tD) shorter than the upstroke period (tU) produce sustained lift. We identify the optimal asymmetry ratio (Ar = tD/tU) for which flapping wings produce maximum lift, and find that introducing optimal wing flexibility further enhances the lift.
A method for MREIT-based source imaging: simulation studies
NASA Astrophysics Data System (ADS)
Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun
2016-08-01
This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping by probing the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few %), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time to get more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violates the Nyquist criteria, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time-change of the Laplacian of the nonlinearly wrapped data.
PMID:27401235
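The averaging argument above (signal-to-noise ratio proportional to the square root of the number of averages) is easy to check numerically. The following is a minimal stdlib sketch, not the authors' code; the function name and all parameters are illustrative.

```python
import random
import statistics

def averaged_noise_std(n_averages, n_trials=2000, sigma=1.0, seed=0):
    """Std of the mean of n_averages Gaussian noise samples, estimated by MC."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.gauss(0.0, sigma) for _ in range(n_averages))
        for _ in range(n_trials)
    ]
    return statistics.stdev(means)

# Noise on the average falls roughly as 1/sqrt(N), so SNR grows as sqrt(N):
# averaging 16 acquisitions should cut the noise by about a factor of 4.
s1 = averaged_noise_std(1)
s16 = averaged_noise_std(16)
```

This is why shortening the acquisition time per measurement (as the sub-sampling scheme does) pays off: more averages fit into the same total collection time.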
Human swallowing simulation based on videofluorography images using Hamiltonian MPS method
NASA Astrophysics Data System (ADS)
Kikuchi, Takahiro; Michiwaki, Yukihiro; Kamiya, Tetsu; Toyama, Yoshio; Tamai, Tasuku; Koshizuka, Seiichi
2015-09-01
In developed nations, swallowing disorders and aspiration pneumonia have become serious problems. We developed a method to simulate the behavior of the organs involved in swallowing to clarify the mechanisms of swallowing and aspiration. The shape model is based on anatomically realistic geometry, and the motion model utilizes forced displacements based on realistic dynamic images to reflect the mechanisms of human swallowing. The soft tissue organs are modeled as nonlinear elastic material using the Hamiltonian MPS method. This method allows for stable simulation of the complex swallowing movement. A penalty method using metaballs is employed to simulate contact between organ walls and smooth sliding along the walls. We performed four numerical simulations under different analysis conditions to represent four cases of swallowing, including a healthy volunteer and a patient with a swallowing disorder. The simulation results were compared to examine the epiglottic downfolding mechanism, which strongly influences the risk of aspiration.
The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.
Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin
2016-01-01
A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, indicating that the proposed method will help to achieve a low-cost, convenient and safe way of recharging implantable biosensors. PMID:27626422
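The layer-by-layer Monte Carlo photon bookkeeping the abstract describes can be sketched with a simple Beer-Lambert absorption model. This is an illustrative toy, not the authors' simulation: the layer thicknesses and absorption coefficients are invented, and realistic skin optics would also need scattering, anisotropy, and refractive-index mismatches.

```python
import math
import random

# Invented layer parameters: (name, thickness in cm, absorption coeff in 1/cm).
LAYERS = [("epidermis", 0.01, 20.0), ("dermis", 0.20, 2.5), ("subcutis", 0.30, 1.0)]

def absorbed_fractions(n_photons=20000, seed=1):
    """Track where normally incident photons are absorbed, layer by layer."""
    rng = random.Random(seed)
    counts = {name: 0 for name, _, _ in LAYERS}
    transmitted = 0
    for _ in range(n_photons):
        for name, thickness, mu_a in LAYERS:
            # Beer-Lambert: absorbed in this layer with prob 1 - exp(-mu_a * t).
            if rng.random() < 1.0 - math.exp(-mu_a * thickness):
                counts[name] += 1
                break
        else:
            transmitted += 1
    return {k: c / n_photons for k, c in counts.items()}, transmitted / n_photons

fractions, transmitted = absorbed_fractions()
```

The per-layer fractions plus the transmitted fraction account for every photon, which is the kind of energy-distribution output the study discusses.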
NASA Astrophysics Data System (ADS)
Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan
2015-10-01
Both the propagation method and the choice of mesh grid are critical for obtaining correct results in wave-optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer fixed by the propagation method but can be chosen freely. Because the choice of mesh grid on the target plane directly affects the validity of the simulation results, an adaptive mesh-choosing method based on wave characteristics is proposed for the new propagation method, allowing appropriate target-plane grids to be computed in advance. For complex initial wave fields or propagation through inhomogeneous media, the mesh grid can likewise be set rationally by the same procedure. Comparison with theoretical results shows that simulations using the proposed method agree with theory, and comparison with the traditional angular spectrum and direct FFT methods shows that the proposed method adapts to a wider range of Fresnel numbers; that is, it can simulate propagation correctly and efficiently for distances from almost zero to infinity. It can therefore better support wave-propagation applications such as atmospheric optics and laser propagation.
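For reference, the traditional fixed-grid angular spectrum method that the paper builds on can be sketched as below; the alterable-grid variant is not reproduced here. Grid size, wavelength, and distance are arbitrary choices for the example, and evanescent components are simply truncated.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field by distance z on a fixed grid."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components (arg < 0) are dropped.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2.0 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Propagate a Gaussian beam; |H| = 1 on propagating components, so the total
# energy should be preserved as long as nothing is evanescent.
n, dx, wavelength = 128, 1e-5, 633e-9
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
beam = np.exp(-(X**2 + Y**2) / (2.0 * (5e-5) ** 2)).astype(complex)
out = angular_spectrum_propagate(beam, wavelength, dx, z=0.01)
```

The fixed output grid (same `dx` before and after propagation) is exactly the limitation that the paper's alterable-mesh variant is designed to remove.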
Chaussé, Pierre; Liu, Jin; Luta, George
2016-01-01
Covariate adjustment methods are frequently used when baseline covariate information is available for randomized controlled trials. Using a simulation study, we compared the analysis of covariance (ANCOVA) with three nonparametric covariate adjustment methods with respect to point and interval estimation for the difference between means. The three alternative methods were based on important members of the generalized empirical likelihood (GEL) family, specifically on the empirical likelihood (EL) method, the exponential tilting (ET) method, and the continuous updated estimator (CUE) method. Two criteria were considered for the comparison of the four statistical methods: the root mean squared error and the empirical coverage of the nominal 95% confidence intervals for the difference between means. Based on the results of the simulation study, for sensitivity analysis purposes, we recommend the use of ANCOVA (with robust standard errors when heteroscedasticity is present) together with the CUE-based covariate adjustment method. PMID:27077870
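The baseline ANCOVA adjustment in the comparison can be sketched as an ordinary least-squares fit of outcome on treatment and covariate; the GEL-based alternatives (EL, ET, CUE) are substantially more involved and are not reproduced here. The data, effect size, and function name are invented for illustration.

```python
import numpy as np

def ancova_effect(y, treat, x):
    """Treatment effect b1 from the linear model y = b0 + b1*treat + b2*x."""
    X = np.column_stack([np.ones_like(y), treat, x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Simulated randomized trial: true treatment effect 2.0, baseline covariate x.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
treat = rng.integers(0, 2, size=n).astype(float)
y = 1.0 + 2.0 * treat + 1.5 * x + rng.normal(scale=0.5, size=n)
effect = ancova_effect(y, treat, x)
```

Because assignment is randomized, adjusting for x mainly tightens the estimate rather than removing confounding, which is why the comparison focuses on RMSE and interval coverage.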
Multiscale Simulation of Microcrack Based on a New Adaptive Finite Element Method
NASA Astrophysics Data System (ADS)
Xu, Yun; Chen, Jun; Chen, Dong Quan; Sun, Jin Shan
In this paper, a new adaptive finite element (FE) framework based on the variational multiscale method is proposed and applied to simulate the dynamic behavior of metals under loading. First, the extended bridging scale method is used to couple molecular dynamics with FE. Then, the macroscopic evolution of damage arising from microscopic defects is simulated by the adaptive FE method. Auxiliary strategies, such as conservative mesh remapping, a failure mechanism, and a mesh-splitting technique, are also included in the adaptive FE computation. The efficiency of the method is validated by numerical experiments.
Szostek, Kamil; Piórkowski, Adam
2016-10-01
Ultrasound (US) imaging is one of the most popular techniques used in clinical diagnosis, mainly due to lack of adverse effects on patients and the simplicity of US equipment. However, the characteristics of the medium cause US imaging to imprecisely reconstruct examined tissues. The artifacts are the results of wave phenomena, i.e. diffraction or refraction, and should be recognized during examination to avoid misinterpretation of an US image. Currently, US training is based on teaching materials and simulators and ultrasound simulation has become an active research area in medical computer science. Many US simulators are limited by the complexity of the wave phenomena, leading to intensive sophisticated computation that makes it difficult for systems to operate in real time. To achieve the required frame rate, the vast majority of simulators reduce the problem of wave diffraction and refraction. The following paper proposes a solution for an ultrasound simulator based on methods known in geophysics. To improve simulation quality, a wavefront construction method was adapted which takes into account the refraction phenomena. This technique uses ray tracing and velocity averaging to construct wavefronts in the simulation. Instead of a geological medium, real CT scans are applied. This approach can produce more realistic projections of pathological findings and is also capable of providing real-time simulation. PMID:27586490
Evaluation of a clinical simulation-based assessment method for EHR-platforms.
Jensen, Sanne; Rasmussen, Stine Loft; Lyng, Karen Marie
2014-01-01
In a procurement process, assessing issues such as human factors and the interaction between technology and end users can be challenging. In a large public procurement of an electronic health record platform (EHR-platform) in Denmark, a clinical simulation-based method for assessing and comparing human factor issues was developed and evaluated. This paper describes the evaluation of the method and its advantages and disadvantages. Our findings showed that clinical simulation is beneficial for assessing user satisfaction, usefulness and patient safety, although it is resource demanding. The method made it possible to assess qualitative topics during the procurement, and it provides an excellent basis for user involvement. PMID:25160323
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system for electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment, extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of the modal fields excited in the waveguides using a database of statistical impedance boundary conditions that incorporates the complexity of building walls into the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on those statistical parameters, from which predictions of communications capability may be made.
A novel method for simulation of brushless DC motor servo-control system based on MATLAB
NASA Astrophysics Data System (ADS)
Tao, Keyan; Yan, Yingmin
2006-11-01
This paper presents research on the simulation of a brushless DC motor (BLDCM) servo-control system. Based on the mathematical model of the BLDCM, the system simulation model was built with MATLAB. Isolated functional blocks, such as the BLDCM block, the rotor-position detection block, and the commutation logic block, were modeled first; their organic combination then yields the complete BLDCM model. The simulation results testify to the reasonableness and validity of the approach, and this method offers a new way of thinking about designing and debugging actual motors.
Apparatus and method for interaction phenomena with world modules in data-flow-based simulation
Xavier, Patrick G.; Gottlieb, Eric J.; McDonald, Michael J.; Oppel, III, Fred J.
2006-08-01
A method and apparatus accommodate interaction phenomena in a data-flow-based simulation of a system of elements, by establishing meta-modules to simulate system elements and world modules associated with interaction phenomena. Each world module is associated with proxy modules from a group of meta-modules associated with one of the interaction phenomena. The world modules include a communication world, a sensor world, a mobility world, and a contact world, and can be further associated with other world modules if necessary. Interaction phenomena are simulated in the corresponding world modules by accessing member functions in the associated group of proxy modules. Proxy modules can be dynamically allocated at a desired point in the simulation to accommodate the addition of elements to the system being simulated, such as a system of robots, communication terminals, or vehicles.
Methods for simulation-based analysis of fluid-structure interaction.
Barone, Matthew Franklin; Payne, Jeffrey L.
2005-10-01
Methods for analysis of fluid-structure interaction using high-fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted-surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced-order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced-order model (ROM) is presented, along with ideas for extending the methodology to allow construction of ROMs based on data generated from ALE simulations.
NASA Astrophysics Data System (ADS)
Sirait, S. H.; Taruno, W. P.; Khotimah, S. N.; Haryanto, F.
2016-03-01
A simulation to determine the capacitance of the brain's electrical activity based on a two-electrode ECVT sensor was conducted in this study. The study began with the construction of a 2D coronal head geometry with five different layers and an ECVT sensor design; the two designs were then merged. Boundary conditions were applied to the two electrodes of the ECVT sensor: the first electrode was set to a Dirichlet boundary condition at a potential of 20 V, and the other to a Dirichlet boundary condition at 0 V. Simulated Hodgkin-Huxley-based action potentials were applied as the electrical activity of the brain and placed sequentially at three different cross-sectional positions. The Poisson equation was implemented in the geometry as the governing equation and solved by the finite element method. The simulation showed that the computed capacitance values were affected by the action potentials and their cross-sectional positions.
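The core numerical step, solving the Poisson equation with Dirichlet boundary data by finite elements, can be illustrated in one dimension, where linear elements on a uniform mesh reduce to a familiar tridiagonal system. This is a minimal sketch, not the 2D head-geometry solver of the study.

```python
import numpy as np

def solve_poisson_1d(f, n=200, ua=0.0, ub=0.0):
    """Solve -u'' = f on (0, 1), u(0) = ua, u(1) = ub, with linear elements.

    On a uniform mesh the element assembly yields the classic tridiagonal
    stiffness matrix [-1, 2, -1] / h with a lumped load vector h * f(x_i)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)                       # interior nodes
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    b = h * f(x)
    b[0] += ua / h                                       # Dirichlet boundary data
    b[-1] += ub / h
    return x, np.linalg.solve(A, b)

# Verify against -u'' = pi^2 sin(pi x), whose exact solution is u = sin(pi x).
x, u = solve_poisson_1d(lambda s: np.pi**2 * np.sin(np.pi * s))
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
```

The study's 20 V / 0 V electrode potentials enter a 2D solver in exactly the same way the `ua`/`ub` values enter the load vector here.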
Validation of population-based disease simulation models: a review of concepts and methods
2010-01-01
Background Computer simulation models are used increasingly to support public health research and policy, but questions about their quality persist. The purpose of this article is to review the principles and methods for validation of population-based disease simulation models. Methods We developed a comprehensive framework for validating population-based chronic disease simulation models and used this framework in a review of published model validation guidelines. Based on the review, we formulated a set of recommendations for gathering evidence of model credibility. Results Evidence of model credibility derives from examining: 1) the process of model development, 2) the performance of a model, and 3) the quality of decisions based on the model. Many important issues in model validation are insufficiently addressed by current guidelines. These issues include a detailed evaluation of different data sources, graphical representation of models, computer programming, model calibration, between-model comparisons, sensitivity analysis, and predictive validity. The role of external data in model validation depends on the purpose of the model (e.g., decision analysis versus prediction). More research is needed on the methods of comparing the quality of decisions based on different models. Conclusion As the role of simulation modeling in population health is increasing and models are becoming more complex, there is a need for further improvements in model validation methodology and common standards for evaluating model credibility. PMID:21087466
Simulation of ultrasonic wave propagation in welds using ray-based methods
NASA Astrophysics Data System (ADS)
Gardahaut, A.; Jezzine, K.; Cassereau, D.; Leymarie, N.
2014-04-01
Austenitic or bimetallic welds are particularly difficult to inspect due to their anisotropic and inhomogeneous properties. In this paper, we present a ray-based method to simulate the propagation of ultrasonic waves in such structures, taking their internal properties into account. The method is applied to a smooth representation of the grain orientation in the weld. The propagation model consists of solving the eikonal and transport equations in an inhomogeneous anisotropic medium. Simulation results are presented and compared to finite element results for a grain-orientation distribution expressed in closed form.
Andreev, Victor P; Gillespie, Brenda W; Helfand, Brian T; Merion, Robert M
2016-01-01
Unsupervised classification methods are gaining acceptance in omics studies of complex common diseases, which are often vaguely defined and are likely collections of disease subtypes. Unsupervised classification based on the molecular signatures identified in omics studies has the potential to reflect the molecular mechanisms of the disease subtypes and to lead to more targeted and successful interventions for the identified subtypes. Multiple classification algorithms exist, but none is ideal for all types of data. Importantly, there are no established methods to estimate sample size in unsupervised classification (unlike power analysis in hypothesis testing). We therefore developed a simulation approach that allows comparison of misclassification errors and estimation of the required sample size for a given effect size, number, and correlation matrix of the differentially abundant proteins in targeted proteomics studies. All experiments were performed in silico. The simulated data imitated data expected from a study of the plasma of patients with lower urinary tract dysfunction using the aptamer-based proteomics assay SOMAscan (SomaLogic Inc., Boulder, CO), which targeted 1129 proteins, including 330 involved in inflammation, 180 in stress response, and 80 in aging. Three popular clustering methods (hierarchical, k-means, and k-medoids) were compared. K-means clustering performed much better on the simulated data than the other two methods and enabled classification with a misclassification error below 5% in the simulated cohort of 100 patients, based on molecular signatures of 40 differentially abundant proteins (effect size 1.5) from the 1129-protein panel. PMID:27524871
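The clustering experiment can be sketched with a plain Lloyd's-algorithm k-means on simulated data of the kind described (40 differentially abundant proteins at effect size 1.5 in a cohort of 100). The panel here is shrunk to 120 proteins to keep the example small; everything else is illustrative, not the authors' code.

```python
import numpy as np

def kmeans(X, k, n_iter=50, n_init=5, seed=0):
    """Plain Lloyd's algorithm with random restarts; returns the best labels."""
    rng = np.random.default_rng(seed)
    best_labels, best_inertia = None, np.inf
    for _ in range(n_init):
        centers = X[rng.choice(len(X), size=k, replace=False)].copy()
        for _ in range(n_iter):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d2.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        inertia = d2[np.arange(len(X)), labels].sum()
        if inertia < best_inertia:
            best_inertia, best_labels = inertia, labels
    return best_labels

# Two subtypes of 50 patients each; 40 proteins shifted by effect size 1.5,
# embedded in an (illustratively small) 120-protein panel of pure noise.
rng = np.random.default_rng(1)
truth = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 120))
X[truth == 1, :40] += 1.5
labels = kmeans(X, 2)
err = np.mean(labels != truth)
misclassification = min(err, 1.0 - err)   # labels are defined up to permutation
```

With this separation the misclassification error lands well under the 5% threshold the abstract reports for k-means.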
Song, Yong; Hao, Qun; Kong, Xianyue; Hu, Lanxin; Cao, Jie; Gao, Tianxin
2014-01-01
Recharging implantable electronics from the outside of the human body is very important for applications such as implantable biosensors and other implantable electronics. In this paper, a recharging method for implantable biosensors based on a wearable incoherent light source has been proposed and simulated. Firstly, we develop a model of the incoherent light source and a multi-layer model of skin tissue. Secondly, the recharging processes of the proposed method have been simulated and tested experimentally, whereby some important conclusions have been reached. Our results indicate that the proposed method will offer a convenient, safe and low-cost recharging method for implantable biosensors, which should promote the application of implantable electronics. PMID:25372616
Agent-based modeling: Methods and techniques for simulating human systems
Bonabeau, Eric
2002-01-01
Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems. After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed by using real-world applications: flow simulation, organizational simulation, market simulation, and diffusion simulation. For each category, one or several business applications are described and analyzed. PMID:12011407
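One of the four application areas named above, diffusion simulation, can be sketched with a minimal agent-based model in which agents adopt a product through random word-of-mouth contacts. All parameters and names are invented for illustration.

```python
import random

def diffusion_abm(n_agents=500, n_seeds=5, p_adopt=0.3, steps=30, seed=42):
    """Word-of-mouth diffusion: each non-adopter polls one random agent per
    step and adopts with probability p_adopt if that contact has adopted."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    for i in rng.sample(range(n_agents), n_seeds):   # seed the initial adopters
        adopted[i] = True
    history = []
    for _ in range(steps):
        for i in range(n_agents):
            if not adopted[i]:
                contact = rng.randrange(n_agents)
                if adopted[contact] and rng.random() < p_adopt:
                    adopted[i] = True
        history.append(sum(adopted))
    return history

history = diffusion_abm()   # S-shaped adoption curve typical of diffusion models
```

The emergent S-curve is not coded anywhere explicitly; it arises from the agents' local interaction rules, which is the defining feature of agent-based modeling.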
The method of infrared image simulation based on the measured image
NASA Astrophysics Data System (ADS)
Lou, Shuli; Liu, Liang; Ren, Jiancun
2015-10-01
The development of infrared imaging guidance technology has promoted research on infrared imaging simulation, the key of which is the generation of IR images; such generation is valuable both militarily and economically. To address the credibility and cost of infrared scene generation, a method based on measured images is proposed. Drawing on the optical properties of ship targets and the sea background, ship-target images in various attitudes are extracted from recorded images using digital image processing. The ship-target image is zoomed in and out to simulate the relative motion between the viewpoint and the target, according to the field of view and the target-sensor distance. The gray level of the ship-target image is adjusted to simulate changes in the target's radiation, according to the viewpoint-target distance and atmospheric transmission. Frames of recorded infrared images without a target are interpolated to simulate the missile's high frame rate. The processed ship-target images and sea-background infrared images are then composited to obtain infrared scenes for different viewpoints. Experiments showed that this method is flexible and applicable, and that the fidelity and reliability of the synthesized infrared images can be guaranteed.
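The geometric zoom, radiometric adjustment, and compositing steps described above can be sketched with array operations. This is a hedged toy, not the authors' pipeline: nearest-neighbour resampling stands in for their zoom, and a simple per-kilometre transmittance stands in for the atmospheric model.

```python
import numpy as np

def scale_target(img, zoom):
    """Nearest-neighbour zoom of a target chip to mimic a change in range."""
    h, w = img.shape
    nh, nw = max(1, int(h * zoom)), max(1, int(w * zoom))
    rows = np.clip((np.arange(nh) / zoom).astype(int), 0, h - 1)
    cols = np.clip((np.arange(nw) / zoom).astype(int), 0, w - 1)
    return img[np.ix_(rows, cols)]

def attenuate(img, distance_km, tau_per_km=0.85):
    """Scale target radiance by an assumed per-kilometre transmittance."""
    return img * (tau_per_km ** distance_km)

def composite(background, target, top, left):
    """Paste the target chip over the background (brighter pixel wins)."""
    out = background.copy()
    h, w = target.shape
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = np.maximum(region, target)
    return out

bg = np.full((64, 64), 20.0)      # uniform sea-background radiance (arbitrary units)
ship = np.full((8, 16), 100.0)    # extracted ship-target chip
frame = composite(bg, attenuate(scale_target(ship, 0.5), distance_km=5.0), 30, 24)
```

Halving the zoom and attenuating over 5 km shrinks and dims the chip before it is pasted into the background frame, mirroring the range-dependent steps in the method.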
Three-dimensional imaging simulation of active laser detection based on DLOS method
NASA Astrophysics Data System (ADS)
Zhang, Chuanxin; Zhou, Honghe; Chen, Xiang; Yuan, Yuan; Shuai, Yong; Tan, Heping
2016-07-01
The technology of active laser detection is now widely used in many fields. With the development of computer technology, software simulation can inform the design of active laser detection systems and make their characteristics easier to judge visually. Based on the features of active laser detection, an improved radiative-transfer calculation method (Double Line Of Sight, DLOS) was developed, and simulation models of complete active laser detection imaging were built. The correctness of the improved method was verified by comparison with results calculated by the Monte Carlo method. Results of active laser detection imaging of complex three-dimensional targets in different atmospheric scenes were compared, and the influence of different atmospheric dielectric properties was analyzed, providing an effective reference for the design of active laser detection.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Combining the RSM with Monte Carlo simulation (MCS) reduces the computational cost and makes rapid random sampling possible. The inverse uncertainty propagation is formulated as the equally weighted sum of mean and covariance-matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid of particle swarm and Nelder-Mead simplex optimization, thereby achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
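The surrogate-plus-sampling idea, fitting a cheap polynomial response surface to a few runs of an expensive model and then pushing Monte Carlo samples through the surrogate, can be sketched as follows. The "expensive model" here is just the natural frequency of a one-DOF mass-spring system, and a quadratic surface stands in for the paper's incomplete fourth-order polynomial; all numbers are invented.

```python
import numpy as np

def expensive_model(k, m):
    """Stand-in 'expensive' solver: natural frequency of a 1-DOF system (Hz)."""
    return np.sqrt(k / m) / (2.0 * np.pi)

def features(k, m):
    """Quadratic response-surface basis in normalized stiffness and mass."""
    a, b = (k - 1000.0) / 100.0, (m - 1.0) / 0.1
    return np.column_stack([np.ones_like(a), a, b, a * a, b * b, a * b])

# Fit the surrogate from a small design of experiments ...
rng = np.random.default_rng(0)
k_doe = rng.uniform(900.0, 1100.0, size=40)
m_doe = rng.uniform(0.9, 1.1, size=40)
coef, *_ = np.linalg.lstsq(features(k_doe, m_doe), expensive_model(k_doe, m_doe),
                           rcond=None)

# ... then propagate parameter uncertainty with cheap Monte Carlo sampling.
k_mc = rng.normal(1000.0, 20.0, size=100_000)
m_mc = rng.normal(1.0, 0.02, size=100_000)
f_mc = features(k_mc, m_mc) @ coef
mean, std = float(f_mc.mean()), float(f_mc.std())
```

The 100,000 surrogate evaluations cost a fraction of a second, which is the efficiency gain that makes the paper's inverse optimization over means and covariances tractable.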
Nakata, Hiroya; Schmidt, Michael W; Fedorov, Dmitri G; Kitaura, Kazuo; Nakamura, Shinichiro; Gordon, Mark S
2014-10-16
The fully analytic energy gradient has been developed and implemented for the restricted open-shell Hartree–Fock (ROHF) method based on the fragment molecular orbital (FMO) theory for systems that have multiple open-shell molecules. The accuracy of the analytic ROHF energy gradient is compared with the corresponding numerical gradient, illustrating the accuracy of the analytic gradient. The ROHF analytic gradient is used to perform molecular dynamics simulations of an unusual open-shell system, liquid oxygen, and mixtures of oxygen and nitrogen. These molecular dynamics simulations provide some insight about how triplet oxygen molecules interact with each other. Timings reveal that the method can calculate the energy gradient for a system containing 4000 atoms in only 6 h. Therefore, it is concluded that the FMO-ROHF method will be useful for investigating systems with multiple open shells.
NASA Astrophysics Data System (ADS)
Hao, Zengrong; Gu, Chunwei; Song, Yin
2016-06-01
This study extends the discontinuous Galerkin (DG) methods to simulations of thermoelasticity. A thermoelastic formulation of interior penalty DG (IP-DG) method is presented and aspects of the numerical implementation are discussed in matrix form. The content related to thermal expansion effects is illustrated explicitly in the discretized equation system. The feasibility of the method for general thermoelastic simulations is validated through typical test cases, including tackling stress discontinuities caused by jumps of thermal expansive properties and controlling accompanied non-physical oscillations through adjusting the magnitude of IP term. The developed simulation platform upon the method is applied to the engineering analysis of thermoelastic performance for a turbine vane and a series of vanes with various types of simplified thermal barrier coating (TBC) systems. This analysis demonstrates that while TBC properties on heat conduction are generally the major consideration for protecting the alloy base vanes, the mechanical properties may have more significant effects on protections of coatings themselves. Changing characteristics of normal tractions on TBC/base interface, closely related to the occurrence of coating failures, over diverse components distributions along TBC thickness of the functional graded materials are summarized and analysed, illustrating the opposite tendencies in situations with different thermal-stress-free temperatures for coatings.
NASA Astrophysics Data System (ADS)
Takahashi, M.; Kawabata, Y.; Washitani, T.; Tanaka, S.; Maeda, S.; Mimotogi, S.
2014-03-01
As lithography technologies progress, Mask3D analysis has become increasingly important because mask topography effects grow rapidly and can no longer be neglected. Electromagnetic field simulation methods such as FDTD, RCWA, and FEM are applied to analyze these complicated phenomena. We have investigated the Constrained Interpolation Profile (CIP) method, one of the Methods of Characteristics (MoC), for Mask3D analysis in optical lithography. The CIP method reproduces the phase of propagating waves with less numerical error by using high-order polynomial functions, and its relaxed restrictions on grid spacing reduce the number of grid points needed in complex structures. In this paper, we study the feasibility of applying the CIP scheme with a non-uniform, spatially interpolated grid to practical mask patterns. The number of grid points can grow large for complex layouts and topological structures, since these require a dense grid to retain the fidelity of the design. We propose a spatial interpolation method based on the CIP method, analogous to its time-domain interpolation, to reduce the number of grid points to be computed, and we present simulation results for two meshing methods with spatial interpolation.
A flood map based DOI decoding method for block detector: a GATE simulation study.
Shi, Han; Du, Dong; Su, Zhihong; Peng, Qiyu
2014-01-01
Positron Emission Tomography (PET) systems using detectors with Depth of Interaction (DOI) capability can achieve higher spatial resolution and better image quality than those without DOI. To date, however, most DOI methods are not cost-efficient for a whole-body PET system. In this paper, we present a DOI decoding method based on the flood map for a low-cost conventional block detector with four-PMT readout. Using this method, the DOI information can be extracted directly from the DOI-related deformation of the crystal spots in the flood map. GATE simulations were carried out to validate the method, confirming a DOI sorting accuracy of 85.27%. We therefore conclude that this method has the potential to be applied in conventional detectors to achieve a reasonable DOI measurement without dramatically increasing the complexity and cost of an entire PET system. PMID:25227021
Using simulations to evaluate Mantel-based methods for assessing landscape resistance to gene flow.
Zeller, Katherine A; Creech, Tyler G; Millette, Katie L; Crowhurst, Rachel S; Long, Robert A; Wagner, Helene H; Balkenhol, Niko; Landguth, Erin L
2016-06-01
Mantel-based tests have been the primary analytical methods for understanding how landscape features influence observed spatial genetic structure. Simulation studies examining Mantel-based approaches have highlighted major challenges associated with the use of such tests and fueled debate on when the Mantel test is appropriate for landscape genetics studies. We aim to provide some clarity in this debate using spatially explicit, individual-based, genetic simulations to examine the effects of the following on the performance of Mantel-based methods: (1) landscape configuration, (2) spatial genetic nonequilibrium, (3) nonlinear relationships between genetic and cost distances, and (4) correlation among cost distances derived from competing resistance models. Under most conditions, Mantel-based methods performed poorly. Causal modeling identified the true model only 22% of the time. Using relative support and simple Mantel r values boosted performance to approximately 50%. Across all methods, performance increased when landscapes were more fragmented, spatial genetic equilibrium was reached, and the relationship between cost distance and genetic distance was linearized. Performance depended on cost distance correlations among resistance models rather than cell-wise resistance correlations. Given these results, we suggest that the use of Mantel tests with linearized relationships is appropriate for discriminating among resistance models that have cost distance correlations <0.85 with each other for causal modeling, or <0.95 for relative support or simple Mantel r. Because most alternative parameterizations of resistance for the same landscape variable will result in highly correlated cost distances, the use of Mantel test-based methods to fine-tune resistance values will often not be effective. PMID:27516868
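The simple Mantel statistic evaluated in the abstract above correlates the off-diagonal entries of two distance matrices (e.g. genetic distance vs. cost distance) and obtains a p-value by permuting individuals. A minimal sketch, not the authors' implementation; the matrices, permutation count, and seed are illustrative:

```python
import random
from itertools import combinations

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def mantel(D1, D2, n_perm=199, seed=1):
    """Simple Mantel test: correlate the off-diagonal entries of two
    symmetric distance matrices; p-value by permuting the labels of D2."""
    n = len(D1)
    pairs = list(combinations(range(n), 2))
    x = [D1[i][j] for i, j in pairs]
    y = [D2[i][j] for i, j in pairs]
    r_obs = pearson(x, y)
    rng = random.Random(seed)
    count = 1  # the observed value counts as one permutation
    for _ in range(n_perm):
        perm = list(range(n))
        rng.shuffle(perm)
        y_p = [D2[perm[i]][perm[j]] for i, j in pairs]
        if pearson(x, y_p) >= r_obs:
            count += 1
    return r_obs, count / (n_perm + 1)

# Toy data: individuals on a line, genetic distance == cost distance.
D = [[abs(i - j) for j in range(8)] for i in range(8)]
r, p = mantel(D, D)
```

With identical matrices the observed correlation is 1 and the permutation p-value is small, the ideal case; the abstract's point is that real landscapes rarely behave this cleanly.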
Diffusion approximation-based simulation of stochastic ion channels: which method to use?
Pezo, Danilo; Soudry, Daniel; Orio, Patricio
2014-01-01
To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chain (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to reduce simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties, such as bounding the state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired by granule cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC, which is the fastest and most accurate method. For a high number of channels, we recommend the method of Orio and Soudry (2012), possibly combined with that of Schmandt and Galán (2012) for increased speed at slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models, in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels each. PMID:25404914
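The Langevin-based diffusion approximation shared by the cited implementations can be sketched for a single population of two-state (open/closed) channels; the bounding of the state variable to [0,1], which each cited method handles differently, is done here by simple reflection. This is an illustrative sketch under those assumptions, not any one of the reviewed implementations:

```python
import math
import random

def da_two_state(alpha, beta, N, dt, steps, seed=0):
    """Euler-Maruyama integration of the open fraction x of N two-state
    channels: dx = (alpha*(1-x) - beta*x) dt + sqrt((alpha*(1-x)+beta*x)/N) dW.
    The state variable is kept inside [0, 1] by reflecting at the boundaries."""
    rng = random.Random(seed)
    x = alpha / (alpha + beta)  # start at the deterministic steady state
    trace = []
    for _ in range(steps):
        drift = alpha * (1.0 - x) - beta * x
        diff = math.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / N)
        x += drift * dt + diff * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if x < 0.0:      # reflect below 0
            x = -x
        elif x > 1.0:    # reflect above 1
            x = 2.0 - x
        trace.append(x)
    return trace

xs = da_two_state(alpha=1.0, beta=1.0, N=1000, dt=0.01, steps=20000)
mean_open = sum(xs) / len(xs)
```

With symmetric rates the open fraction fluctuates around 0.5 with variance that shrinks as N grows, which is why DA methods pay off only at high channel counts, in line with the abstract's conclusion.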
Simulations of Ground Motion in Southern California based upon the Spectral-Element Method
NASA Astrophysics Data System (ADS)
Tromp, J.; Komatitsch, D.; Liu, Q.
2003-12-01
We use the spectral-element method to simulate ground motion generated by recent well-recorded small earthquakes in Southern California. Simulations are performed using a new sedimentary basin model that is constrained by hundreds of petroleum industry well logs and more than twenty thousand kilometers of seismic reflection profiles. The numerical simulations account for 3D variations of seismic wave speeds and density, topography and bathymetry, and attenuation. Simulations for several small recent events demonstrate that the combination of a detailed sedimentary basin model and an accurate numerical technique facilitates the simulation of ground motion at periods of 2 seconds and longer inside the Los Angeles basin and 6 seconds and longer elsewhere. Peak ground displacement, velocity and acceleration maps illustrate that significant amplification occurs in the basin. Centroid-Moment Tensor mechanisms are obtained based upon Pnl and surface waveforms and numerically calculated 3D Frechet derivatives. We use a combination of waveform and waveform-envelope misfit criteria, and facilitate pure double-couple or zero-trace moment-tensor inversions.
NASA Astrophysics Data System (ADS)
Takahashi, Ryohei; Mamori, Hiroya; Yamamoto, Makoto
2016-02-01
A numerical method for simulating gas-liquid-solid three-phase flows based on the moving particle semi-implicit (MPS) approach was developed in this study. Computational instability often occurs in multiphase flow simulations, notably when the free surfaces between different phases undergo large deformations. To avoid this instability, this paper proposes an improved coupling procedure between different phases in which the physical quantities of particles in different phases are calculated independently. We performed numerical tests on two illustrative problems: a dam-break problem and a solid-sphere impingement problem. The former is a gas-liquid two-phase problem, and the latter is a gas-liquid-solid three-phase problem. The computational results agree reasonably well with the experimental results. Thus, we confirmed that the proposed MPS method reproduces the interaction between different phases without inducing numerical instability.
Full wave simulation of lower hybrid waves in Maxwellian plasma based on the finite element method
Meneghini, O.; Shiraiwa, S.; Parker, R.
2009-09-15
A full wave simulation of the lower hybrid (LH) wave based on the finite element method is presented. For the LH wave, the most important terms of the dielectric tensor are the cold plasma contribution and the electron Landau damping (ELD) term, which depends only on the component of the wave vector parallel to the background magnetic field. The nonlocal hot plasma ELD effect is expressed as a convolution integral along the magnetic field lines, and the resulting integro-differential Helmholtz equation is solved iteratively. LH wave propagation in a Maxwellian tokamak plasma based on the Alcator C experiment was simulated for electron temperatures in the range of 2.5-10 keV. Comparison with ray tracing simulations showed good agreement when the single-pass damping is strong. The advantages of the new approach include a significant reduction of computational requirements compared to full wave spectral methods and seamless treatment of the core, the scrape-off layer and the launcher regions.
Aerodynamic flow simulation using a pressure-based method and a two-equation turbulence model
NASA Astrophysics Data System (ADS)
Lai, Y. G. J.; Przekwas, A. J.; So, R. M. C.
1993-07-01
In the past, most aerodynamic flow calculations were carried out with density-based numerical methods and zero-equation turbulence models. Pressure-based methods and more advanced turbulence models, however, have been routinely used in industry for many internal flow simulations and for incompressible flows. Unfortunately, their usefulness for calculating aerodynamic flows is still not well demonstrated or accepted. In this study, an advanced pressure-based numerical method and a recently proposed near-wall compressible two-equation turbulence model are used to calculate external aerodynamic flows. Several TVD-type schemes are extended to the pressure-based method to better capture discontinuities such as shocks. Some improvements are proposed to accelerate the convergence of the numerical method. A compressible near-wall two-equation turbulence model is then implemented to calculate transonic turbulent flows over NACA 0012 and RAE 2822 airfoils with and without shocks. The calculated results are compared with wind tunnel data as well as with results obtained from the Baldwin-Lomax model. The performance of the two-equation turbulence model is evaluated and its merits, or lack thereof, are discussed.
A Novel Antibody Humanization Method Based on Epitopes Scanning and Molecular Dynamics Simulation
Zhao, Bin-Bin; Gong, Lu-Lu; Jin, Wen-Jing; Liu, Jing-Jun; Wang, Jing-Fei; Wang, Tian-Tian; Yuan, Xiao-Hui; He, You-Wen
2013-01-01
1-17-2 is a rat anti-human DEC-205 monoclonal antibody that induces internalization and delivers antigen to dendritic cells (DCs). The potential clinical application of this antibody is limited by its murine origin. Traditional humanization methods such as complementarity-determining region (CDR) grafting often lead to decreased or even lost affinity. Here we have developed a novel antibody humanization method based on computer modeling and bioinformatics analysis. First, we used homology modeling to build a precise model of the Fab. A novel epitope scanning algorithm was designed to identify antigenic residues in the framework regions (FRs) that need to be mutated to their human counterparts in the humanization process. Virtual mutation and molecular dynamics (MD) simulation were then used to assess the conformational impact imposed by all the mutations. By comparing the root-mean-square deviations (RMSDs) of the CDRs, we found five key residues whose mutations would destroy the original conformation of the CDRs; these residues need to be back-mutated to rescue the antibody binding affinity. Finally, we constructed the antibodies in vitro and compared their binding affinities by flow cytometry and surface plasmon resonance (SPR) assays. The binding affinity of the refined humanized antibody was similar to that of the original rat antibody. Our results establish a novel method based on epitope scanning and MD simulation for antibody humanization. PMID:24278299
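The CDR comparison step in the abstract above reduces to computing root-mean-square deviations between corresponding atom positions before and after the virtual mutations. A minimal RMSD sketch follows; the coordinates are hypothetical, and a production workflow would first apply a least-squares superposition, which is omitted here:

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length lists of
    (x, y, z) coordinates, assumed to be already superposed."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate lists must have equal length")
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Hypothetical CDR backbone atoms before and after a virtual mutation:
before = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.5, 0.0)]
after  = [(0.0, 0.1, 0.0), (1.5, -0.1, 0.0), (3.0, 0.6, 0.0)]
d = rmsd(before, after)
```

In the paper's workflow, a mutation whose post-MD CDR RMSD is large relative to the others flags a residue for back-mutation.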
Copula-based method for multisite monthly and daily streamflow simulation
NASA Astrophysics Data System (ADS)
Chen, Lu; Singh, Vijay P.; Guo, Shenglian; Zhou, Jianzhong; Zhang, Junhong
2015-09-01
Multisite stochastic simulation of streamflow sequences is needed for water resources planning and management. In this study, a new copula-based method is proposed for generating long-term multisite monthly and daily streamflow data. A multivariate copula, which is established based on bivariate copulas and conditional probability distributions, is employed to describe temporal dependences (single site) and spatial dependences (between sites). Monthly or daily streamflows at multiple sites are then generated by sampling from the conditional copula. Three tributaries of the Colorado River and the upper Yangtze River are selected to evaluate the proposed methodology. Results show that the generated data at both higher and lower time scales can capture the distribution properties of the single site and preserve the spatial correlation of streamflows at different locations. The main advantage of the method is that the trivariate copula can be established from three bivariate copulas and the model parameters can be easily estimated using the Kendall tau rank correlation coefficient, which makes it practical to generate daily streamflow data. The method provides a new tool for multisite stochastic simulation.
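The conditional-sampling idea above can be sketched with a bivariate Gaussian copula linking consecutive time steps: each new uniform is drawn conditionally on the previous one, then mapped through the marginal inverse CDF. This is an illustrative sketch under a Gaussian-copula assumption (the paper builds its multivariate copula from bivariate copulas more generally), and the unit-mean exponential marginal is hypothetical; the Kendall tau estimate enters through rho = sin(pi * tau / 2):

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def copula_series(n, tau, inv_cdf, seed=0):
    """Autocorrelated series whose consecutive values are linked by a
    bivariate Gaussian copula; the copula parameter is set from Kendall's tau."""
    rho = math.sin(math.pi * tau / 2.0)  # Gaussian copula <-> Kendall tau
    rng = random.Random(seed)
    z = rng.gauss(0.0, 1.0)
    series = [inv_cdf(norm_cdf(z))]
    for _ in range(n - 1):
        # conditional draw z_t | z_{t-1} for a standard bivariate normal
        z = rho * z + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        series.append(inv_cdf(norm_cdf(z)))
    return series

# Hypothetical unit-mean exponential marginal standing in for monthly flows:
flows = copula_series(20000, tau=0.5, inv_cdf=lambda u: -math.log(1.0 - u))
mean_flow = sum(flows) / len(flows)
```

The generated series preserves both the target marginal (mean near 1 here) and positive lag-1 dependence, which is the single-site temporal dependence the abstract describes.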
Copula-based method for Multisite Monthly and Daily Streamflow Simulation
NASA Astrophysics Data System (ADS)
Chen, L.; Dai, M.; Singh, V. P.; Guo, S.
2014-12-01
Multisite stochastic simulation of streamflow sequences is needed for water resources planning and management. In this study, a new copula-based method is proposed for generating long-term multisite monthly and daily streamflow data. A multivariate copula, which is established based on bivariate copulas and conditional probability distributions, is employed to describe temporal dependences (single site) and spatial dependences (between sites). Monthly or daily streamflows at multiple sites are then generated by sampling from the conditional copula. Three tributaries of Colorado River and the upper Yangtze River are selected to evaluate the proposed methodology. Results show that the generated data at both higher and lower time scales can capture the distribution properties of the single site and preserve the spatial correlation of streamflows at different locations. The main advantage of the method is that the model parameters can be easily estimated using Kendall tau rank correlation coefficient, which makes it possible to generate daily streamflow data. The method provides a new tool for multisite stochastic simulation.
NASA Astrophysics Data System (ADS)
Soto Meca, A.; Alhama, F.; González Fernández, C. F.
2007-06-01
The Henry and Elder problems are once more studied numerically, using an efficient model based on the Network Simulation Method, which takes advantage of the powerful algorithms implemented in modern circuit simulation software. The network model of the volume element, deduced directly from the finite-difference equations of the spatially discretized governing equations under the streamfunction formulation, is electrically connected to adjacent networks to form the whole model of the medium, to which the boundary conditions are added using suitable electrical devices. Coupling between equations is implemented directly in the model. Very few, simple rules are needed to design the model, which is run in a circuit simulation code to obtain the results with no further mathematical manipulation. Different versions of the Henry problem, as well as the Elder problem, are simulated, and the solutions compare successfully with the analytical and numerical solutions of other authors and codes. A grid convergence study for the Henry problem was carried out to determine the grid size with negligible numerical dispersion, and a similar study was carried out for the Elder problem in order to compare the solution patterns with those of other authors. Computing times are relatively small for this kind of problem.
NASA Astrophysics Data System (ADS)
Meng, Yao; Zhang, Guo-yu
2015-10-01
A star simulator serves as ground calibration equipment for a star sensor: it tests the sensor's parameters and performance. At present, when a dynamic star simulator based on LCOS splicing is identified by the star sensor, the main problem is the poor contrast of the LCOS. In this paper, we analyze the cause of LCOS stray light, namely the relation between the incident angle of the light and the contrast ratio, and establish the functional relationship between the angle and the irradiance of the stray light. Based on this relationship, we propose a scheme for controlling the incident angle. A popular approach is the compound parabolic concentrator (CPC); although in theory it can restrict the light to any desired angle, in practice it is usually used above +/-15° because of its length and manufacturing cost. We therefore place a telescopic system in front of the CPC, on the same principle as a laser beam expander. We simulate the CPC with TracePro, which gives the irradiance at the exit surface; the telescopic system is designed in ZEMAX because of the need for chromatic aberration correction. As a result, we obtain a collimated light source whose viewing angle is less than +/-5° and whose uniform irradiation area is greater than 20 mm x 20 mm.
Broken wires diagnosis method numerical simulation based on smart cable structure
NASA Astrophysics Data System (ADS)
Li, Sheng; Zhou, Min; Yang, Yan
2014-12-01
A smart cable with embedded distributed fiber optic Bragg grating (FBG) sensors was chosen as the object of study for a new method of diagnosing broken wires in bridge cables. The diagnosis strategy is based on the cable force and the stress distribution state of the steel wires. By establishing bridge-cable and cable-steel-wire models, a sample database of broken-wire cases was simulated numerically. A characterization of the cable state pattern is put forward that represents both the degree and the location of broken wires inside a cable. Training and prediction results on the sample database with a back propagation (BP) neural network showed that the proposed diagnosis method is feasible, and it expands broken-wire diagnosis research by using a smart cable that previously served only to measure cable force.
A simple numerical method for snowmelt simulation based on the equation of heat energy.
Stojković, Milan; Jaćimović, Nenad
2016-01-01
This paper presents a one-dimensional numerical model for snowmelt/accumulation simulations, based on the equation of heat energy. It is assumed that the snow column is homogeneous at the current time step; however, its characteristics such as snow density and thermal conductivity are treated as functions of time. The equation of heat energy for the snow column is solved using the implicit finite difference method. The incoming energy at the snow surface includes the following parts: conduction, convection, radiation and raindrop energy. Along with the snowmelt process, the model also includes snow accumulation. The Euler method is utilized for the numerical integration of the balance equation in the proposed model. The model's applicability is demonstrated at the meteorological station Zlatibor, located in the western region of Serbia at 1,028 meters above sea level (m.a.s.l.). Simulation results of snowmelt/accumulation suggest that the proposed model achieves better agreement with observed data than the temperature index method. The proposed method may be utilized as part of a deterministic hydrological model in order to improve short- and long-term predictions of possible flood events. PMID:27054726
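The implicit finite-difference step for the heat-energy equation in the snow column can be sketched as a backward-Euler solve of 1D conduction using a tridiagonal (Thomas) solver. Constant thermal diffusivity and fixed Dirichlet boundary temperatures are illustrative assumptions, not the paper's full model, which varies density and conductivity in time and adds convective, radiative and raindrop energy at the surface:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step_heat(T, kappa, dt, dz, T_top, T_bot):
    """One backward-Euler step of dT/dt = kappa * d2T/dz2 on the interior
    nodes of a snow column, with fixed surface and ground temperatures."""
    r = kappa * dt / dz ** 2
    n = len(T)
    a = [-r] * n
    b = [1.0 + 2.0 * r] * n
    c = [-r] * n
    d = list(T)
    d[0] += r * T_top      # boundary contributions enter the right-hand side
    d[-1] += r * T_bot
    a[0], c[-1] = 0.0, 0.0
    return thomas(a, b, c, d)

# Relax a uniform -10 C snow column toward its linear steady-state profile.
T = [-10.0] * 9
for _ in range(5000):
    T = step_heat(T, kappa=4e-7, dt=600.0, dz=0.05, T_top=0.0, T_bot=-4.0)
```

Because the step is implicit, it remains stable even when kappa * dt / dz^2 is large, which is the practical reason abstracts like this one prefer implicit schemes for long simulations.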
Evaluation of FTIR-based analytical methods for the analysis of simulated wastes
Rebagay, T.V.; Cash, R.J.; Dodd, D.A.; Lockrem, L.L.; Meacham, J.E.; Winkelman, W.D.
1994-09-30
Three FTIR-based analytical methods that have potential to characterize simulated waste tank materials have been evaluated. These include: (1) fiber optics, (2) modular transfer optic using light guides equipped with non-contact sampling peripherals, and (3) photoacoustic spectroscopy. Pertinent instrumentation and experimental procedures for each method are described. The results show that the near-infrared (NIR) region of the infrared spectrum is the region of choice for the measurement of moisture in waste simulants. Differentiation of the NIR spectrum, as a preprocessing step, improves the analytical result. Preliminary data indicate that prominent combination bands of water and the first overtone band of the ferrocyanide stretching vibration may be utilized to measure water and ferrocyanide species simultaneously. Both near-infrared and mid-infrared spectra must be collected, however, to measure ferrocyanide species unambiguously and accurately. For ease of sample handling and the potential for field or waste tank deployment, the FTIR-Fiber Optic method is preferred over the other two methods. Modular transfer optic using light guides and photoacoustic spectroscopy may be used as backup systems and for the validation of the fiber optic data.
IR imaging simulation and analysis for aeroengine exhaust system based on reverse Monte Carlo method
NASA Astrophysics Data System (ADS)
Chen, Shiguo; Chen, Lihai; Mo, Dongla; Shi, Jingcheng
2014-11-01
The IR radiation characteristics of an aeroengine are an important basis for IR stealth design and anti-stealth detection of aircraft, and with the development of IR imaging sensor technology, the importance of aircraft IR stealth is increasing. An effort is presented to explore target IR radiation imaging simulation based on the Reverse Monte Carlo Method (RMCM) combined with commercial CFD software. The flow and IR radiation characteristics of an aeroengine exhaust system are investigated: a full-size geometry model is developed from the actual parameters, a structured mesh integrating the flow and IR computations is used, the engine performance parameters are taken as the inlet boundary conditions of the mixer section, and a numerical model of the IR radiation characteristics of the exhaust system is constructed based on the RMCM. With these models, the IR radiation characteristics of the aeroengine exhaust system are computed, focusing on spectral radiance imaging in the typical detection bands at an azimuth of 20°. The results show that: (1) at small azimuth angles, the IR radiation comes mainly from the center cone among all the hot parts; near azimuth 15°, the mixer makes the biggest contribution to the radiation, while the center cone, turbine and flame stabilizer are comparable; (2) the main radiating components and their spatial distribution differ between spectral bands, with CO2 absorbing and emitting strongly at 4.18, 4.33 and 4.45 microns, and H2O at 3.0 and 5.0 microns.
Numerical method to compute optical conductivity based on pump-probe simulations
NASA Astrophysics Data System (ADS)
Shao, Can; Tohyama, Takami; Luo, Hong-Gang; Lu, Hantao
2016-05-01
A numerical method to calculate optical conductivity based on a pump-probe setup is presented. Its validity and limits are tested and demonstrated via concrete numerical simulations on the half-filled one-dimensional extended Hubbard model both in and out of equilibrium. By employing either a steplike or a Gaussian-like probing vector potential, it is found that in nonequilibrium, the method in the narrow-probe-pulse limit can be identified with variant types of linear-response theory, which, in equilibrium, produce identical results. The observation reveals the underlying probe-pulse dependence of the optical conductivity calculations in nonequilibrium, which may have applications in the theoretical analysis of ultrafast spectroscopy measurements.
Method for distributed agent-based non-expert simulation of manufacturing process behavior
Ivezic, Nenad; Potok, Thomas E.
2004-11-30
A method for distributed agent based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each the process; and, programming each the agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
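The claimed steps, object-modeled processes, one agent per process, and a message loop broadcasting discrete events (a clock tick, resources received, and a request for output production), can be sketched as a toy single-processor program. Class names, cycle times, and the dispatch details are invented for illustration and are not from the patent:

```python
class ProcessAgent:
    """Agent bound to one manufacturing process; responds to discrete events."""

    def __init__(self, name, cycle_ticks):
        self.name = name
        self.cycle_ticks = cycle_ticks  # clock ticks needed per finished part
        self.resources = 0
        self.progress = 0
        self.output = 0

    def handle(self, event):
        # each discrete event triggers a programmed response
        if event == "clock_tick":
            if self.resources > 0:
                self.progress += 1
                if self.progress == self.cycle_ticks:
                    self.progress = 0
                    self.resources -= 1
                    self.output += 1
        elif event == "resources_received":
            self.resources += 1
        elif event == "request_output":
            produced, self.output = self.output, 0
            return produced

agents = [ProcessAgent("mill", 2), ProcessAgent("drill", 3)]
for agent in agents:                      # deliver two units of raw material each
    agent.handle("resources_received")
    agent.handle("resources_received")
for _ in range(6):                        # message loop: broadcast clock ticks
    for agent in agents:
        agent.handle("clock_tick")
totals = {a.name: a.handle("request_output") for a in agents}
```

Broadcasting events through a single loop is what lets the patent's "non-expert" simulation run all agents on one processor without explicit concurrency.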
Liang, Feng; Guo, Yuanyuan; Fung, Richard Y K
2015-11-01
The operation theatre is one of the most significant assets in a hospital, being both the greatest source of revenue and the largest cost unit. This paper focuses on surgery scheduling optimization, one of the most crucial tasks in operation theatre management. A combined scheduling policy composed of three simple scheduling rules is proposed to optimize the scheduling performance of the operation theatre. Based on real-life scenarios, a simulation-based model of the surgery scheduling system is built. With two optimization objectives, the response surface method is adopted to search for the optimal weights of the simple rules in the combined scheduling policy. Moreover, the weight configuration can be revised to cope with dispatching dynamics according to real-time changes at the operation theatre. Finally, a performance comparison between the proposed combined scheduling policy and a tabu search algorithm indicates that the combined scheduling policy sequences surgery appointments more efficiently. PMID:26385551
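A combined scheduling policy of the kind described above can be sketched as a weighted sum of simple dispatching-rule scores used to pick the next surgery. The three rules shown (shortest processing time, longest waiting time, highest urgency) and the weights are illustrative placeholders, not the paper's actual rules; in the paper the weights would be tuned by the response surface method:

```python
def combined_score(surgery, weights):
    """Score a waiting surgery by a weighted combination of simple rules;
    a higher score means the surgery is dispatched first."""
    w_spt, w_wait, w_urg = weights
    return (w_spt * (1.0 / surgery["duration"])   # shortest processing time
            + w_wait * surgery["waiting_time"]    # longest waiting time
            + w_urg * surgery["urgency"])         # highest urgency

def next_surgery(queue, weights):
    """Dispatch rule: pick the highest-scoring surgery in the queue."""
    return max(queue, key=lambda s: combined_score(s, weights))

queue = [
    {"id": "A", "duration": 2.0, "waiting_time": 1.0, "urgency": 0.2},
    {"id": "B", "duration": 1.0, "waiting_time": 0.5, "urgency": 0.1},
    {"id": "C", "duration": 3.0, "waiting_time": 4.0, "urgency": 0.9},
]
chosen = next_surgery(queue, weights=(0.5, 0.2, 1.0))
```

Changing the weights changes which simple rule dominates, which is exactly the degree of freedom the response surface search exploits.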
Simulation and evaluation of tablet-coating burst based on finite element method.
Yang, Yan; Li, Juan; Miao, Kong-Song; Shan, Wei-Guang; Tang, Lan; Yu, Hai-Ning
2016-09-01
The objective of this study was to simulate and evaluate the burst behavior of coated tablets. Three-dimensional finite element models of tablet coatings were established using the software ANSYS. The swelling pressure of the cores was measured by a self-made device and applied at the internal surface of the models. The mechanical properties of the polymer film were determined using a texture analyzer and applied as the material properties of the models. The resulting finite element models were validated by experimental data. The validated models were used to assess the factors that influence burst behavior and to predict coating burst. The simulated burst and failure locations agreed closely with the experimental data. It was found that the internal swelling pressure, the inside corner radius and the corner thickness were the three main factors controlling the stress distribution and burst behavior. Based on the linear relationship between the internal pressure and the maximum principal stress in the coating, the burst pressure of the coatings was calculated and used to predict the burst behavior. This study demonstrates that the burst behavior of coated tablets can be simulated and evaluated by the finite element method. PMID:26727401
Spin tracking simulations in AGS based on ray-tracing methods - bare lattice, no snakes -
Meot, F.; Ahrens, L.; Gleen, J.; Huang, H.; Luccio, A.; MacKay, W. W.; Roser, T.; Tsoupas, N.
2009-09-01
This Note reports on the first simulations of spin dynamics in the AGS using the ray-tracing code Zgoubi. It includes lattice analysis, comparisons with MAD, DA tracking, numerical calculation of depolarizing resonance strengths and comparisons with analytical models, etc. It also includes details on the setting up of Zgoubi input data files and on the various numerical methods of concern in, and available from, Zgoubi. Simulations of the crossing and neighboring of spin resonances in the AGS ring (bare lattice, without snakes) have been performed in order to assess the capabilities of Zgoubi in this matter, and are reported here. This yields a rather long document, for two main reasons: on the one hand, the desire for an extended investigation of the energy span; on the other, a thorough comparison of Zgoubi results with analytical models such as the 'thin lens' approximation, the weak resonance approximation, and the static case. Section 2 details the working hypotheses: AGS lattice data, formulae used for deriving various resonance-related quantities from the ray-tracing based 'numerical experiments', etc. Section 3 gives inventories of the intrinsic and imperfection resonances together with, in a number of cases, the strengths derived from the ray-tracing. Section 4 gives the details of the numerical simulations of resonance crossing, including the behavior of various quantities (closed orbit, synchrotron motion, etc.) aimed at verifying that the conditions of particle and spin motion are correct. In a similar manner, Section 5 gives the details of the numerical simulations of spin motion in the static case: fixed energy in the neighborhood of the resonance. In Section 6, weak resonances are explored and Zgoubi results are compared with the Fresnel integrals model. Section 7 shows the computation of the n⃗ vector for the AGS lattice and tuning considered. Many details on the numerical conditions, such as data files, are given in the Appendix.
Numerical Simulation of Drosophila Flight Based on the Arbitrary Lagrangian-Eulerian Method
NASA Astrophysics Data System (ADS)
Erzincanli, Belkis; Sahin, Mehmet
2012-11-01
A parallel unstructured finite volume algorithm based on the Arbitrary Lagrangian-Eulerian (ALE) method has been developed in order to investigate the wake structure around a pair of flapping Drosophila wings. The numerical method uses a side-centered arrangement of the primitive variables that does not require any ad-hoc modifications to enhance pressure coupling. A radial basis function (RBF) interpolation method is also implemented in order to achieve large mesh deformations. For the parallel solution of the resulting large-scale algebraic equations, a matrix factorization is introduced, similar to that of the projection method, for the whole coupled system, and a two-cycle BoomerAMG solver from the HYPRE library, accessed through the PETSc library, is used for the scaled discrete Laplacian. The present numerical algorithm is initially validated for the flow past an oscillating circular cylinder in a channel and the flow induced by an oscillating sphere in a cubic cavity. The numerical algorithm is then applied to the simulation of the flow field around a pair of flapping Drosophila wings in hovering flight. The time variation of the near-wake structure is shown along with the aerodynamic loads and particle traces. The authors acknowledge financial support from the Turkish National Scientific and Technical Research Council (TUBITAK) through project number 111M332. The authors would like to thank Michael Dickinson and Michael Elzinga for providing the experimental data.
NASA Astrophysics Data System (ADS)
Berti, Claudio; Gillespie, Dirk; Bardhan, Jaydeep P.; Eisenberg, Robert S.; Fiegna, Claudio
2012-07-01
Particle-based simulation represents a powerful approach to modeling physical systems in electronics, molecular biology, and chemical physics. Accounting for the interactions occurring among charged particles requires an accurate and efficient solution of Poisson's equation. For a system of discrete charges with inhomogeneous dielectrics, i.e., a system with discontinuities in the permittivity, the boundary element method (BEM) is frequently adopted. It provides the solution of Poisson's equation, accounting for polarization effects due to the discontinuity in the permittivity by computing the induced charges at the dielectric boundaries. In this framework, the total electrostatic potential is then found by superimposing the elemental contributions from both source and induced charges. In this paper, we present a comparison between two BEMs to solve a boundary-integral formulation of Poisson's equation, with emphasis on the BEMs' suitability for particle-based simulations in terms of solution accuracy and computation speed. The two approaches are the collocation and qualocation methods. Collocation is implemented following the induced-charge computation method of D. Boda [J. Chem. Phys. 125, 034901 (2006)]. The qualocation method is described by J. Tausch [IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 20, 1398 (2001)]. These approaches are studied using both flat and curved surface elements to discretize the dielectric boundary, using two challenging test cases: a dielectric sphere embedded in a different dielectric medium and a toy model of an ion channel. Earlier comparisons of the two BEM approaches did not address curved surface elements or semiatomistic models of ion channels. Our results support the earlier findings that for flat-element calculations, qualocation is always significantly more accurate than collocation. On the other hand, when the dielectric boundary
Method of simulation and visualization of FDG metabolism based on VHP image
NASA Astrophysics Data System (ADS)
Cui, Yunfeng; Bai, Jing
2005-04-01
FDG ([18F] 2-fluoro-2-deoxy-D-glucose) is the typical tracer used in clinical PET (positron emission tomography) studies. FDG-PET is an important imaging tool for the early diagnosis and treatment of malignant tumors and functional diseases. The main purpose of this work is to propose a method that represents FDG metabolism in the human body through the dynamic simulation and visualization of the 18F distribution process, based on the segmented VHP (Visible Human Project) image dataset. First, the plasma time-activity curve (PTAC) and the tissue time-activity curves (TTACs) are obtained from previous studies and the literature. According to the obtained PTAC and TTACs, a set of corresponding values is assigned to the segmented VHP image, and a set of dynamic images is thus derived to show the 18F distribution in the tissues of interest for a predetermined sampling schedule. Finally, the simulated FDG distribution images are visualized in 3D and 2D formats, respectively, incorporating principal interaction functions. Compared with original PET images, our visualization results present higher resolution, owing to the high resolution of the VHP image data, and show the distribution process of 18F dynamically. The results of this work can be used in education and related research, as well as a tool for PET operators to design their PET experiment programs.
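The core assignment step described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the label map, the analytic TTAC curves, and all parameter values are hypothetical stand-ins for the segmented VHP data and literature-derived curves.

```python
import numpy as np

def simulate_distribution(seg, ttacs, times):
    """Assign tissue time-activity values to a segmented label image.

    seg   : integer label image (hypothetical segmented slice)
    ttacs : dict mapping label -> callable t -> activity
    times : 1-D array of sampling times (min)
    Returns an array of shape (len(times),) + seg.shape.
    """
    frames = np.zeros((len(times),) + seg.shape)
    for i, t in enumerate(times):
        for label, ttac in ttacs.items():
            frames[i][seg == label] = ttac(t)
    return frames

# Toy uptake curves (illustrative only, not fitted clinical data)
ttacs = {
    1: lambda t: 5.0 * (1 - np.exp(-0.1 * t)),   # tumour-like uptake
    2: lambda t: 2.0 * np.exp(-0.05 * t) + 1.0,  # background tissue
}
seg = np.array([[1, 1, 2],
                [2, 2, 2]])
frames = simulate_distribution(seg, ttacs, np.array([0.0, 10.0, 60.0]))
```

Each frame of `frames` can then be rendered in 2D or 3D for the predetermined sampling schedule.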
A new method to extract stable feature points based on self-generated simulation images
NASA Astrophysics Data System (ADS)
Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen
2015-10-01
Recently, image processing has received considerable attention in fields such as photogrammetry and medical image processing. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem, and feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge-response points; the disadvantage is the need to set the thresholds manually. The main idea of this paper is to obtain stable extrema by a machine learning algorithm. First, we use the ASIFT approach, coupled with illumination changes and blur, to generate multi-view simulated images, which make up the simulated image set of the original image. Because of the way the simulated image set is generated, the affine transformation of each generated image is also known. Compared with the traditional matching process, which relies on the unstable RANSAC method to recover the affine transformation, this approach is more stable and accurate. Second, we calculate a stability value for each feature point from the image set and its known affine transformations. We then extract the feature properties of each feature point, such as DoG features, scales, and edge point density. Together these form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, a Rank-SVM model is trained, yielding a weight vector. In use, based on the feature properties of each point and the weight vector obtained by training, we compute a sort value for each feature point that reflects its stability, and sort the feature points accordingly. In conclusion, we applied our algorithm and the original SIFT detector in a comparative test. Under different view changes, blurs, and illuminations, experimental results show that our algorithm is more efficient.
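The stability-value computation over simulated views can be sketched as a repeatability count: a keypoint is stable if, after mapping it through the known affine transform of each simulated view, a detection is found nearby. This is a minimal illustration with hypothetical data and tolerance; the Rank-SVM training on feature properties is omitted.

```python
import numpy as np

def stability_scores(points, views, tol=1.5):
    """Score each original keypoint by its repeatability across simulated views.

    points : (N, 2) keypoints detected in the original image
    views  : list of (A, b, detections), where A (2x2) and b (2,) define the
             known affine map used to generate the simulated image and
             `detections` is an (M, 2) array of keypoints found in that image
    Returns an (N,) array in [0, 1]: the fraction of views with a match.
    """
    scores = np.zeros(len(points))
    for A, b, det in views:
        proj = points @ A.T + b          # ground-truth positions in the view
        for i, p in enumerate(proj):
            if len(det) and np.min(np.linalg.norm(det - p, axis=1)) <= tol:
                scores[i] += 1
    return scores / len(views)

# Two toy views with identity affine maps (illustrative detections)
I2 = np.eye(2)
views = [
    (I2, np.zeros(2), np.array([[0.3, -0.2]])),
    (I2, np.zeros(2), np.array([[0.1, 0.0], [10.4, 9.8]])),
]
scores = stability_scores(np.array([[0.0, 0.0], [10.0, 10.0]]), views)
```

The resulting scores would serve as the dependent variable in the Rank-SVM training set.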
Fluorescence volume imaging with an axicon: simulation study based on scalar diffraction method.
Zheng, Juanjuan; Yang, Yanlong; Lei, Ming; Yao, Baoli; Gao, Peng; Ye, Tong
2012-10-20
In a two-photon excitation fluorescence volume imaging (TPFVI) system, an axicon is used to generate a Bessel beam and, at the same time, to collect the generated fluorescence to achieve a large depth of field. A slice-by-slice diffraction propagation model in the framework of the angular spectrum method is proposed to simulate the whole imaging process of TPFVI. The simulation reveals that the Bessel beam can penetrate deep into scattering media due to its self-reconstruction ability. The simulation also demonstrates that TPFVI can image a volume of interest in a single raster scan. Two-photon excitation is crucial for eliminating the signals generated by the side lobes of the Bessel beam; the unwanted signals may be further suppressed by placing a spatial filter in front of the detector. The simulation method will guide system design in improving the performance of a TPFVI system. PMID:23089777
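A single slice-to-slice step of the angular spectrum method can be sketched as an FFT-based transfer-function multiplication. This is a generic free-space propagator, not the paper's full scattering model; grid size, wavelength, and sampling below are illustrative.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, dz):
    """Propagate a complex scalar field over a distance dz (angular spectrum).

    field      : (n, n) complex field in the source plane
    wavelength : wavelength, same length unit as dx and dz
    dx         : sampling interval in the plane
    dz         : propagation distance to the next slice
    Evanescent components (spatial frequencies beyond 1/wavelength) are dropped.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)   # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative Gaussian input slice (units: micrometres)
n, dx = 64, 1.0
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / (2 * 8.0**2)).astype(complex)
out = angular_spectrum_propagate(field, wavelength=0.5, dx=dx, dz=50.0)
```

Repeating this step slice by slice, with the medium's modulation applied between steps, yields the propagation model described above.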
Zhang, Xue-Ying; Wen, Zong-Guo
2014-11-01
To evaluate the reduction potential of industrial water pollutant emissions and to study the application of technology simulation in pollutant control and environmental management, an Industrial Reduction Potential Analysis and Environment Management (IRPAEM) model was developed based on the coupling of "material-process-technology-product". The model integrated bottom-up modeling and scenario analysis, and was applied to China's paper industry. Results showed that under the CM scenario, the reduction potentials of wastewater, COD and ammonia nitrogen would reach 7 x 10^8 t, 39 x 10^4 t and 0.3 x 10^4 t, respectively, in 2015, and 13.8 x 10^8 t, 56 x 10^4 t and 0.5 x 10^4 t, respectively, in 2020. Strengthening end-of-pipe treatment would still be the key method for reducing emissions during 2010-2020, while the reduction effect of structural adjustment would be more obvious during 2015-2020. Pollution production could basically reach the domestic or international advanced level of clean production in 2015 and 2020; the indices for wastewater and ammonia nitrogen would basically meet the emission standards in 2015 and 2020, while that for COD would not. PMID:25639122
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method based on Bayesian analysis for time series data from an agent-based simulation of the international emissions trading market, and compares it with a discrete Fourier transform (DFT) based analytical method. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods reveal the following: (1) the classification methods express distances between mappings of the time series data, which are easier to understand and draw inferences from than the raw time series; (2) the methods can analyze uncertain time series data, including both stationary and non-stationary processes, using distances computed via agent-based simulation; and (3) the Bayesian analytical method can resolve a 1% difference in the agents' emission reduction targets.
NASA Astrophysics Data System (ADS)
Song, Jinling; Qu, Yonghua; Wang, Jindi; Wan, Huawei; Liu, Xiaoqing
2007-06-01
The radiosity method is based on computer simulation of the 3D structure of real vegetation, such as leaves, branches and stems, which are composed of many facets. Using this method, we can simulate the canopy reflectance of vegetation and its bidirectional distribution in the visible and NIR regions. However, more complex vegetation requires more facets to represent it, so large amounts of memory and long computation times for the view factors are required; these are the bottlenecks of using the radiosity method to calculate the canopy BRF of large-scale vegetation scenes. We derived a new method to solve this problem. The main idea is to abstract the vegetation crown shapes and simplify their structures, which reduces the number of facets. The facets are assigned optical properties according to the reflectance, transmission and absorption of the real-structure canopy. Based on this, we can simulate the canopy BRF of mixed scenes with different vegetation species at large scale. In this study, taking broadleaf trees as an example and based on their structural characteristics, we abstracted their crowns as ellipsoid shells and simulated the canopy BRF in the visible and NIR regions of a large-scale scene containing ellipsoids of different crown shapes and heights. From this study, we conclude that LAI, LAD, the gap probability, and the sunlit and shaded surfaces are the most important parameters for simulating the simplified vegetation canopy BRF, and that the radiosity method can provide canopy BRF data under various conditions for our research.
NASA Astrophysics Data System (ADS)
Zhou, Gang; Davidson, Lars; Olsson, Erik
This paper presents computations of transonic aerodynamic flows using a pressure-based Euler/Navier-Stokes solver. In this work, emphasis is placed on the implementation of higher-order schemes such as QUICK, LUDS and MUSCL. A new scheme, CHARM, is proposed for the convection approximation. Inviscid flow simulations are carried out for the NACA 0012 airfoil; the CHARM scheme gives better resolution for this inviscid case. Turbulent flow computations are carried out for the RAE 2822 airfoil. Good results were obtained using the QUICK scheme for the mean-motion equations combined with the MUSCL scheme for the k and ɛ equations, and no unphysical oscillations were observed. The results also show that the second-order and third-order schemes yielded comparable accuracy with respect to the experimental data.
A New Hybrid Viscoelastic Soft Tissue Model based on Meshless Method for Haptic Surgical Simulation
Bao, Yidong; Wu, Dongmei; Yan, Zhiyuan; Du, Zhijiang
2013-01-01
This paper proposes a hybrid soft tissue model for a meshless-based surgical simulation system, consisting of a multilayer structure of many spheres. To improve the accuracy of the model, tension is added to the three-parameter viscoelastic structure that connects each pair of spheres. Driven by a haptic device, the three-parameter viscoelastic model (TPM) produces accurate deformation and also exhibits good stress-strain, stress relaxation and creep properties. Stress relaxation and creep formulas have been obtained by mathematical derivation. Compared with the experimental results for real pig liver reported by Evren et al. and Amy et al., the stress-strain, stress relaxation and creep curves of the TPM are close to the experimental data for the real liver. Simulation results show that the TPM offers good real-time performance, stability and accuracy. PMID:24339837
Two methods for transmission line simulation model creation based on time domain measurements
NASA Astrophysics Data System (ADS)
Rinas, D.; Frei, S.
2011-07-01
The emission from transmission lines plays an important role in the electromagnetic compatibility of automotive electronic systems. In the frequency range below 200 MHz, radiation from cables is often the dominant emission factor; at higher frequencies, radiation from PCBs and their housings becomes more relevant, with the conducting traces as the main sources of this emission. The established field measurement methods according to CISPR 25 for the evaluation of emissions suffer from the need for large anechoic chambers. Furthermore, the measurement data cannot be used for simulation model creation in order to compute the overall fields radiated from a car. In this paper, a method is proposed to determine the far fields and a simulation model of radiating transmission lines, especially cable bundles and conducting traces on planar structures. The method measures the electromagnetic near field above the test object. Measurements are done in the time domain in order to obtain phase information and to reduce measurement time. On the basis of the near-field data, equivalent source identification can be performed. By considering correlations between sources along each conductive structure in the model creation process, the model accuracy increases and computational costs can be reduced.
Simulation of the electrode shape change in electrochemical machining based on the level set method
NASA Astrophysics Data System (ADS)
Topa, V.; Purcar, M.; Avram, A.; Munteanu, C.; Chereches, R.; Grindei, L.
2012-04-01
This paper proposes a generally applicable numerical algorithm for the simulation of two-dimensional electrode shape changes during electrochemical machining processes. The computational model consists of two coupled problems: an electrode shape change rate analysis and a moving boundary problem. The innovative aspect is that the workpiece shape is computed over a number of predefined time steps by convecting its surface with a velocity proportional to, and in the direction of, the local electrode shape change rate. An example concerning the electrochemical machining of a slot in a stainless steel plate is presented to demonstrate the strengths of the proposed method.
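The moving-boundary idea (from the level set title above) can be illustrated with a minimal 1-D level set update, where the zero contour of phi plays the role of the workpiece surface convected at the local shape-change rate. This is a first-order upwind sketch under illustrative parameters, not the authors' 2-D solver.

```python
import numpy as np

def advect_level_set(phi, speed, dx, dt, steps):
    """Advance a 1-D level set function: phi_t + F |phi_x| = 0.

    phi   : signed-distance-like function; its zero crossing is the interface
    speed : normal speed F of the interface (here a positive scalar)
    Uses the standard first-order upwind gradient approximation.
    """
    phi = phi.copy()
    for _ in range(steps):
        dminus = np.diff(phi, prepend=phi[0]) / dx   # backward difference
        dplus = np.diff(phi, append=phi[-1]) / dx    # forward difference
        grad = np.where(speed > 0,
                        np.maximum(dminus, 0)**2 + np.minimum(dplus, 0)**2,
                        np.minimum(dminus, 0)**2 + np.maximum(dplus, 0)**2)
        phi -= dt * speed * np.sqrt(grad)
    return phi

# Interface starts at x = 0.3 and is convected at F = 0.5 for t = 0.4
x = np.linspace(0.0, 1.0, 101)
phi0 = x - 0.3
phi1 = advect_level_set(phi0, speed=0.5, dx=0.01, dt=0.01, steps=40)
```

After 40 steps the zero crossing should sit near x = 0.5, i.e. the interface has moved 0.2 in the direction of the (uniform) shape-change rate.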
Full wave simulation of waves in ECRIS plasmas based on the finite element method
Torrisi, G.; Mascali, D.; Neri, L.; Castro, G.; Patti, G.; Celona, L.; Gammino, S.; Ciavola, G.; Di Donato, L.; Sorbello, G.; Isernia, T.
2014-02-12
This paper describes the modeling and full-wave numerical simulation of electromagnetic wave propagation and absorption in the anisotropic magnetized plasma filling the resonant cavity of an electron cyclotron resonance ion source (ECRIS). The model assumes inhomogeneous, dispersive and tensorial constitutive relations. Maxwell's equations are solved by the finite element method (FEM) using the COMSOL Multiphysics® suite. All the relevant details have been considered in the model, including the non-uniform external magnetostatic field used for plasma confinement and the local electron density profile, which yields the fully 3D non-uniform complex dielectric tensor of the magnetized plasma. The more accurate plasma simulations clearly show the importance of cavity effects on wave propagation and the effects of a resonant surface. These studies are the pillars of an improved ECRIS plasma modeling effort, which is mandatory to optimize the ion source output (especially the beam intensity distribution and charge state). Any new project concerning advanced ECRIS design will benefit from adequate modeling based on self-consistent wave absorption simulations.
NASA Astrophysics Data System (ADS)
Jin, Shengye; Tamura, Masayuki
2013-10-01
The Monte Carlo ray tracing (MCRT) method is a versatile tool for simulating the radiative transfer regime of the Sun-atmosphere-landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, such as a forest area. Owing to its robustness to changes in the complexity of the 3-D scene, the MCRT method is also employed to simulate the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling of vegetation, one basic step is setting up the canopy scene. 3-D scanning can be used to represent canopy structure as accurately as possible, but it is time consuming. Botanical growth functions can model the growth of a single tree, but cannot express the interaction among trees. The L-system is also a functionally controlled tree growth simulation model, but it requires large amounts of computing memory; additionally, it only models the current tree patterns rather than tree growth while we simulate the radiative transfer regime. It is therefore more practical to use regular solids such as ellipsoids, cones and cylinders to represent individual canopies. Considering the allelopathy phenomenon seen in some open-forest optical images, each tree repels other trees within its own "domain". Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the number of trees (N) of the 3-D scene are declared first, similar to a random open-forest image. Accordingly, we randomly generate each canopy radius (rc). We then set the circle center coordinates on the XY-plane while keeping the circles separate from each other by the circle packing algorithm. To model an individual tree, we employ Ishikawa's tree growth regression model to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
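A rejection-sampling variant of such a circle packing step might look as follows. Equal radii derived from a target coverage are an assumption for brevity (the study draws each canopy radius randomly), and the function name and parameters are hypothetical.

```python
import math
import random

def pack_circles(n, cover, extent=100.0, tries=10000, rng=None):
    """Randomly place n non-overlapping canopy circles in a square plot.

    n      : number of trees
    cover  : target canopy coverage as a fraction of the plot area
    extent : side length of the square plot
    A common radius is fixed by the target coverage; candidate centers are
    drawn uniformly and rejected if they overlap an existing circle.
    """
    rng = rng or random.Random(0)
    r = math.sqrt(cover * extent**2 / (n * math.pi))
    circles = []
    for _ in range(tries):
        if len(circles) == n:
            break
        x = rng.uniform(r, extent - r)
        y = rng.uniform(r, extent - r)
        if all((x - cx)**2 + (y - cy)**2 >= (2 * r)**2 for cx, cy in circles):
            circles.append((x, y))
    return circles, r

circles, r = pack_circles(20, 0.2)
```

Each accepted circle would then seed one abstracted tree (e.g., an ellipsoidal crown) in the 3-D scene.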
Buryakovsky, L.A.
1992-07-01
This paper reports that the systems approach to geology is both a sophisticated ideology and a scientific method for the investigation of very complicated geological systems. As applied to petroleum geology, it includes the methodological base and technology of mathematical simulation used for modeling geological systems: systems that have previously been investigated and estimated from experimental data and/or field studies. Because geological systems develop in time, it is very important to simulate them as dynamic systems. The main tasks in the systems approach to petroleum geology are the numerical simulation of the physical and reservoir properties of rocks, pore (geofluid) pressure in reservoir beds, and hydrocarbon resources. The results of numerical simulation are used for the prediction of geological system structure and behavior in both studied and noninvestigated areas.
Watanabe, Hiroki; Yamazaki, Nozomu; Kobayashi, Yo; Miyashita, Tomoyuki; Ohdaira, Takeshi; Hashizume, Makoto; Fujie, Masakatsu G
2011-01-01
Radiofrequency ablation is increasingly being used for liver cancer because it is a minimally invasive treatment method. However, it is difficult for operators to precisely control the formation of coagulation zones because of the cooling effect of capillary vessels. To overcome this limitation, we have proposed a model-based robotic ablation system using a real-time numerical simulation to analyze temperature distributions in the target organ. This robot can determine the adequate amount of electric power supplied to the organ based on real-time temperature information, reflecting the cooling effect, provided by the simulator. The objective of this study was to develop a method to estimate the intraoperative rate of blood flow in the target organ in order to determine the temperature distribution. In this paper, we propose a simulation-based method to estimate the rate of blood flow. We also performed an in vitro study to validate the proposed method by estimating the rate of blood flow in a hog liver. The experimental results revealed that the proposed method can be used to estimate the rate of blood flow in an organ. PMID:22256059
Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Bin; Wang, Lin; Peng, Kuan; Liang, Jimin; Tian, Jie
2010-01-01
During the past decade, the Monte Carlo method has found wide application in optical imaging for simulating the photon transport process inside tissues. However, this method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, which consists of the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of lens systems is utilized to model the camera lens of the optical imaging system, and the Monte Carlo method is employed to describe the energy transfer from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered, to establish the correspondence between points on the tissue surface and on the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and efficient. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom against real experimental results. PMID:20689705
Incompressible SPH method based on Rankine source solution for violent water wave simulation
NASA Astrophysics Data System (ADS)
Zheng, X.; Ma, Q. W.; Duan, W. Y.
2014-11-01
With its wide range of applications, the smoothed particle hydrodynamics (SPH) method has become an important numerical tool for solving complex flows, in particular those with a rapidly moving free surface. For such problems, incompressible smoothed particle hydrodynamics (ISPH) has been shown in many papers in the literature to yield better and more stable pressure time histories than traditional SPH. However, the existing ISPH method directly approximates the second-order derivatives of the functions to be solved when using the Poisson equation, and the order of accuracy of the method becomes low, especially when particles are distributed in a disorderly manner, which generally happens when modelling violent water waves. This paper introduces a new formulation using the Rankine source solution. In the new approach to ISPH, the Poisson equation is first transformed into another form that does not include any derivative of the functions to be solved and, as a result, does not require numerical approximation of derivatives. The advantage of avoiding numerical approximation of derivatives is obvious, potentially leading to a more robust numerical method. The newly formulated method is tested by simulating various water waves, and its convergence behaviour is numerically studied in this paper. Its results are compared with experimental data in some cases and reasonably good agreement is achieved. More importantly, the numerical results clearly show that the newly developed method needs fewer particles, and thus lower computational cost, to achieve a similar level of accuracy, or produces more accurate results with the same number of particles, compared with traditional SPH and existing ISPH when applied to modelling water waves.
NASA Astrophysics Data System (ADS)
Afanasiev, Michael V.; Pratt, R. Gerhard; Kamei, Rie; McDowell, Glenn
2014-12-01
We successfully apply the semi-global inverse method of simulated annealing to determine the best-fitting 1-D anisotropy model for use in acoustic frequency domain waveform tomography. Our forward problem is based on a numerical solution of the frequency domain acoustic wave equation, and we minimize wavefield phase residuals through random perturbations to a 1-D vertically varying anisotropy profile. Both real and synthetic examples are presented in order to demonstrate and validate the approach. For the real data example, we processed and inverted a cross-borehole data set acquired by Vale Technology Development (Canada) Ltd. in the Eastern Deeps deposit, located in Voisey's Bay, Labrador, Canada. The inversion workflow comprises the full suite of acquisition, data processing, starting model building through traveltime tomography, simulated annealing and finally waveform tomography. Waveform tomography is a high resolution method that requires an accurate starting model. A cycle-skipping issue observed in our initial starting model was hypothesized to be due to an erroneous anisotropy model from traveltime tomography. This motivated the use of simulated annealing as a semi-global method for anisotropy estimation. We initially tested the simulated annealing approach on a synthetic data set based on the Voisey's Bay environment; these tests were successful and led to the application of the simulated annealing approach to the real data set. Similar behaviour was observed in the anisotropy models obtained through traveltime tomography in both the real and synthetic data sets, where simulated annealing produced an anisotropy model which solved the cycle-skipping issue. In the real data example, simulated annealing led to a final model that compares well with the velocities independently estimated from borehole logs. By comparing the calculated ray paths and wave paths, we attributed the failure of anisotropic traveltime tomography to the breakdown of the ray
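A generic simulated annealing loop over a layered 1-D profile, of the kind described above, might be sketched as below. The misfit function is a toy stand-in for the wavefield phase residual, and the proposal scale, cooling schedule, and all constants are assumptions.

```python
import math
import random
import numpy as np

def anneal_profile(misfit, n_layers, steps=6000, t0=0.01, seed=0):
    """Simulated annealing over a 1-D layered profile.

    misfit : maps a profile (array of n_layers values) to a scalar residual
    Random single-layer Gaussian perturbations; Metropolis acceptance with a
    geometric cooling schedule. Returns the best profile and its misfit.
    """
    rng = random.Random(seed)
    profile = np.zeros(n_layers)
    best = cur = misfit(profile)
    best_profile = profile.copy()
    for k in range(steps):
        temp = t0 * 0.999**k
        cand = profile.copy()
        cand[rng.randrange(n_layers)] += rng.gauss(0.0, 0.05)
        c = misfit(cand)
        # accept downhill moves always, uphill moves with Metropolis probability
        if c < cur or rng.random() < math.exp((cur - c) / temp):
            profile, cur = cand, c
            if c < best:
                best, best_profile = c, cand.copy()
    return best_profile, best

# Toy target: recover a known 4-layer profile from a quadratic misfit
target = np.array([0.1, 0.05, -0.02, 0.08])
sol, res = anneal_profile(lambda p: float(np.sum((p - target)**2)), 4)
```

In the paper's setting, the misfit would be the phase residual from the forward acoustic solver, so each annealing step requires a forward simulation.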
NASA Astrophysics Data System (ADS)
Jiao, Zhenjun; Shikazono, Naoki
2016-02-01
It is known that the reduction process influences the initial performance and durability of the nickel-yttria-stabilized zirconia composite anode of the solid oxide fuel cell. In the present study, the reduction process of the nickel-yttria-stabilized zirconia composite anode is simulated based on the phase field method. A three-dimensional reconstructed microstructure of the nickel oxide-yttria-stabilized zirconia composite, obtained by focused ion beam-scanning electron microscopy, is used as the initial microstructure for the simulation. Both the reduction of nickel oxide and nickel sintering mechanisms are considered in the model. The reduction rates of nickel oxide at different interfaces are defined based on literature data. Simulation results are qualitatively compared with experimental anode microstructures for different reduction temperatures.
NASA Astrophysics Data System (ADS)
Zhang, Jingyang; Han, Le; Chang, Haiping; Liu, Nan; Xu, Tiejun
2016-02-01
An accurate critical heat flux (CHF) prediction method is the key factor for realizing the steady-state operation of a water-cooled divertor that works under one-sided high heat flux conditions. An improved CHF prediction method, based on the Euler homogeneous model for flow boiling combined with the realizable k-ɛ model for single-phase flow, is adopted in this paper, in which the time relaxation coefficients are corrected by the Hertz-Knudsen formula in order to improve the calculation accuracy of the vapor-liquid conversion efficiency under high heat flux conditions. Moreover, the large local differences in liquid physical properties due to the extremely nonuniform heat flux on the cooling wall along the circumferential direction are corrected by the IAPWS-IF97 formulation. This method can therefore improve the accuracy of the calculated heat and mass transfer between the liquid and vapor phases in a CHF prediction simulation of water-cooled divertors under one-sided high heating conditions. An experimental example is simulated based on both the improved and the uncorrected methods. The simulation results, such as temperature, void fraction and heat transfer coefficient, are analyzed to obtain the CHF prediction. The results show that the maximum error of the CHF based on the improved method is 23.7%, while that based on the uncorrected method is up to 188%, compared with the experimental results of Ref. [12]. Finally, the method is verified by comparison with experimental data obtained by the International Thermonuclear Experimental Reactor (ITER), with a maximum error of only 6%. This method provides an efficient tool for the CHF prediction of water-cooled divertors. This work was supported by the National Magnetic Confinement Fusion Science Program of China (No. 2010GB104005) and the National Natural Science Foundation of China (No. 51406085).
Tanimoto, Yasuhiro; Nishiwaki, Tsuyoshi; Nishiyama, Norihiro; Nemoto, Kimiya; Maekawa, Zen-ichiro
2002-06-01
The purpose of this study was to propose a new numerical model of glass fiber cloth reinforced denture base resin (GFRP). The proposed model is constructed from isotropic shell, beam and orthotropic shell elements representing the outermost resin, interlaminar resin and glass fiber cloth, respectively. The model was applied to failure progress analysis under three-point bending conditions, and its validity was checked through comparisons with experimental results. The failure progress behaviors involving local failures, such as interlaminar delamination and resin failure, could be simulated using the numerical model. It is concluded that the model is effective for the failure progress analysis of GFRP. PMID:12238780
Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions
Chen, Xiaodong; Yang, Vigor
2014-07-15
In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales. A notable example is liquid-jet atomization and subsequent droplet dynamics. It is essential to characterize the detailed flow physics across variable length scales with high fidelity, in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed, based on distance- and topology-oriented criteria, for thin regions bounded by a confining wall or plane of symmetry and for thin regions in general, respectively. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares the ratio of an interfacial cell size to the distance between the mass center of the cell and a reference plane against a critical value. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The need for interfacial mesh refinement can be detected swiftly, without thickness information, equation solving, variable averaging or mesh repair. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness, including the dynamics of colliding droplets, droplet motion in a microchannel, and atomization of impinging liquid jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.
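The distance-oriented criterion can be sketched directly from its description. Parameter names and the critical ratio below are hypothetical, and `plane_normal` is assumed to be a unit vector.

```python
import numpy as np

def needs_refinement(cell_size, cell_center, plane_point, plane_normal,
                     critical_ratio=2.0):
    """Distance-oriented thin-region test for an interfacial cell.

    Flags the cell for refinement when the ratio of its size to the distance
    between its mass center and a reference plane (confining wall or plane of
    symmetry) exceeds the critical value, i.e., when the film spanned by the
    cell is thin relative to the local mesh.
    """
    d = abs(np.dot(np.asarray(cell_center, float) - np.asarray(plane_point, float),
                   np.asarray(plane_normal, float)))
    return bool(d == 0.0 or cell_size / d > critical_ratio)
```

A cell of size 0.5 whose center sits 0.1 above the wall (ratio 5) is flagged, while the same cell 3.0 above the wall (ratio ~0.17) is not; the flagged cells then drive the adaptive-mesh-refinement level upward.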
NASA Astrophysics Data System (ADS)
Zhang, Jun; Guo, Fan
2015-11-01
The tooth modification technique is widely used in the gear industry to improve the meshing performance of gearing. However, few of the present studies on tooth modification consider the influence of inevitable random errors on the gear modification effects. In order to investigate the effects of uncertainty in the tooth modification amounts on the dynamic behavior of a helical planetary gear train, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of its dynamics. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair, with and without tooth modifications, are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. By using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to variations in the tooth modification amounts, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for the uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behavior does not obey the normal distribution even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for the reduction of both static and dynamic transmission error fluctuations simultaneously.
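The response-surface plus Monte Carlo combination might be sketched as follows, with a quadratic surface in two design variables standing in for the modification-amount/DTE relationship; the surface form and all numbers are illustrative.

```python
import numpy as np

def fit_response_surface(X, y):
    """Least-squares fit of a full quadratic response surface in two
    design variables: y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def mc_propagate(coef, mean, std, n=100_000, seed=0):
    """Monte Carlo step: draw normally distributed design variables and push
    them through the fitted surface; returns the sample mean and standard
    deviation of the response (the DTE-fluctuation statistic in this setting)."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(mean[0], std[0], n)
    x2 = rng.normal(mean[1], std[1], n)
    A = np.column_stack([np.ones(n), x1, x2, x1**2, x2**2, x1 * x2])
    y = A @ coef
    return y.mean(), y.std()

# Recover a known surface from noiseless samples, then propagate uncertainty
g = np.linspace(-2, 2, 5)
X = np.array([(a, b) for a in g for b in g])
y = 1 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0]**2
coef = fit_response_surface(X, y)
m, s = mc_propagate(coef, mean=(0.0, 0.0), std=(1.0, 1.0))
```

Note that even with normally distributed inputs, the quadratic term makes the propagated response non-Gaussian, mirroring the paper's observation.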
Deleu, Ellen; Meire, Maarten A; De Moor, Roeland J G
2015-02-01
In root canal therapy, irrigating solutions are essential to assist in debridement and disinfection, but their spread and action are often restricted by canal anatomy. Hence, activation of irrigants is suggested to improve their distribution in the canal system, increasing irrigation effectiveness. Activation can be done with lasers, termed laser-activated irrigation (LAI). The purpose of this in vitro study was to compare the efficacy of different irrigant activation methods in removing debris from simulated root canal irregularities. Twenty-five straight human canine roots were embedded in resin, split, and their canals prepared to a standardized shape. A groove was cut in the wall of each canal and filled with dentin debris. Canals were filled with sodium hypochlorite and six irrigant activation procedures were tested: conventional needle irrigation (CI), manual-dynamic irrigation with a tapered gutta-percha cone (MDI), passive ultrasonic irrigation, LAI with a 2,940-nm erbium-doped yttrium aluminum garnet (Er:YAG) laser with a plain fiber tip inside the canal (Er-flat), LAI with an Er:YAG laser with a conical tip held at the canal entrance (Er-PIPS), and LAI with a 980-nm diode laser moving the fiber inside the canal (diode). The amount of remaining debris in the groove was scored and compared among the groups using non-parametric tests. Conventional irrigation removed significantly less debris than all other methods. The Er:YAG laser with a plain fiber tip was more efficient than MDI, CI, diode, and the Er:YAG laser with a PIPS tip in removing debris from simulated root canal irregularities. PMID:24091791
NASA Astrophysics Data System (ADS)
Kostopoulos, Spiros A.; Savva, Andonis D.; Asvestas, Pantelis A.; Nikolopoulos, Christos D.; Capsalis, Christos N.; Cavouras, Dionisis A.
2015-09-01
The aim of the present study is to provide a methodology for detecting temperature alterations in the human breast, based on single channel microwave radiometer imaging. Radiometer measurements were simulated by modelling the human breast, the temperature distribution, and the antenna characteristics. Moreover, a simulated lesion of variable size and position in the breast was employed to introduce slight temperature changes in the breast. To detect the presence of a lesion, the temperature distribution in the breast was reconstructed. This was accomplished by assuming that the temperature distribution is a mixture of distributions with unknown parameters, which were determined by means of the least squares and the singular value decomposition methods. The proposed method was validated in a variety of scenarios by altering the lesion size and location and the radiometer position. The method proved capable of identifying temperature alterations caused by lesions at different locations in the breast.
A method motion simulator design based on modeling characteristics of the human operator
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Junker, A. M.
1978-01-01
A design criteria is obtained to compare two simulators and evaluate their equivalence or credibility. In the subsequent analysis the comparison of two simulators can be considered as the same problem as the comparison of a real world situation and a simulation's representation of this real world situation. The design criteria developed involves modeling of the human operator and defining simple parameters to describe his behavior in the simulator and in the real world situation. In the process of obtaining human operator parameters to define characteristics to evaluate simulators, measures are also obtained on these human operator characteristics which can be used to describe the human as an information processor and controller. First, a study is conducted on the simulator design problem in such a manner that this modeling approach can be used to develop a criteria for the comparison of two simulators.
NASA Astrophysics Data System (ADS)
Prasanth, P. S.; Kakkassery, Jose K.; Vijayakumar, R.
2012-04-01
A modified phenomenological model is constructed for the simulation of rarefied flows of polyatomic non-polar gas molecules by the direct simulation Monte Carlo (DSMC) method. This variable hard sphere-based model employs a constant rotational collision number, but all its collisions are inelastic in nature and at the same time the correct macroscopic relaxation rate is maintained. In equilibrium conditions, there is equi-partition of energy between the rotational and translational modes and it satisfies the principle of reciprocity or detailed balancing. The present model is applicable for moderate temperatures at which the molecules are in their vibrational ground state. For verification, the model is applied to the DSMC simulations of the translational and rotational energy distributions in nitrogen gas at equilibrium and the results are compared with their corresponding Maxwellian distributions. Next, the Couette flow, the temperature jump and the Rayleigh flow are simulated; the viscosity and thermal conductivity coefficients of nitrogen are numerically estimated and compared with experimentally measured values. The model is further applied to the simulation of the rotational relaxation of nitrogen through low- and high-Mach-number normal shock waves in a novel way. In all cases, the results are found to be in good agreement with theoretically expected and experimentally observed values. It is concluded that the inelastic collision of polyatomic molecules can be predicted well by employing the constructed variable hard sphere (VHS)-based collision model.
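The equilibrium verification mentioned above (comparing sampled energy distributions against their Maxwellian expectations) can be shown in miniature. The sketch below is not the paper's collision model; it only demonstrates the equipartition check: mean translational energy 3/2 kT, and mean rotational energy kT for the two rotational degrees of freedom of a diatomic molecule such as nitrogen.

```python
import numpy as np

kB = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                  # temperature, K
m = 4.65e-26               # approximate N2 molecular mass, kg
rng = np.random.default_rng(6)
n = 200_000

# Maxwellian velocity components: independent normals with std sqrt(kT/m)
v = rng.normal(0.0, np.sqrt(kB * T / m), size=(n, 3))
e_tr = 0.5 * m * (v**2).sum(axis=1)

# two rotational dof at equilibrium -> exponential energy distribution
e_rot = rng.exponential(kB * T, size=n)

print(e_tr.mean() / (kB * T))    # close to 3/2
print(e_rot.mean() / (kB * T))   # close to 1
```

In a DSMC code the same averages would be accumulated over simulated particles after relaxation, and deviations from these values would flag a violation of detailed balancing.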
NASA Astrophysics Data System (ADS)
Trinci, G.; Massari, R.; Scandellari, M.; Boccalini, S.; Costantini, S.; Di Sero, R.; Basso, A.; Sala, R.; Scopinaro, F.; Soluri, A.
2010-09-01
The aim of this work is to show a new scintigraphic device able to change automatically the length of its collimator in order to adapt the spatial resolution value to gamma source distance. This patented technique replaces the need for collimator change that standard gamma cameras still feature. Monte Carlo simulations represent the best tool in searching new technological solutions for such an innovative collimation structure. They also provide a valid analysis on response of gamma cameras performances as well as on advantages and limits of this new solution. Specifically, Monte Carlo simulations are realized with GEANT4 (GEometry ANd Tracking) framework and the specific simulation object is a collimation method based on separate blocks that can be brought closer and farther, in order to reach and maintain specific spatial resolution values for all source-detector distances. To verify the accuracy and the faithfulness of these simulations, we have realized experimental measurements with identical setup and conditions. This confirms the power of the simulation as an extremely useful tool, especially where new technological solutions need to be studied, tested and analyzed before their practical realization. The final aim of this new collimation system is the improvement of the SPECT techniques, with the real control of the spatial resolution value during tomographic acquisitions. This principle did allow us to simulate a tomographic acquisition of two capillaries of radioactive solution, in order to verify the possibility to clearly distinguish them.
Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz
2015-01-01
Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare with observed motions. The methods include a physics‐based model by Hartzell et al. (1999, 2005), a stochastic source‐based model by Boore (2009), and a stochastic site‐based model by Rezaeian and Der Kiureghian (2010, 2012). The ground‐motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground‐motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics‐based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site‐based model also yields more realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source‐based model has larger bias at lower frequencies (<0.5 Hz) and cannot reproduce the varying frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows that there is no obvious pattern with distance in the simulation bias, but there is some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall‐off with distance for all three models, comparable PGA and PSA amplitudes for the physics‐based and stochastic site‐based models, and systematic lower amplitudes for the stochastic source‐based model at lower frequencies (<0.5 Hz).
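The "overall bias" validation metric can be sketched compactly. A common choice in ground-motion validation (the paper's exact definition may differ) is the mean natural-log residual ln(observed/simulated) over all stations, computed per frequency; the data below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stations, n_freqs = 40, 20

# synthetic observed spectra, and synthetics that overpredict by ~10%
obs = rng.lognormal(mean=0.0, sigma=0.5, size=(n_stations, n_freqs))
sim = obs * rng.lognormal(mean=0.1, sigma=0.2, size=(n_stations, n_freqs))

residuals = np.log(obs / sim)
bias = residuals.mean(axis=0)                              # per-frequency bias
bias_se = residuals.std(axis=0, ddof=1) / np.sqrt(n_stations)

# negative bias here means the synthetics overpredict on average
print(bias.round(2))
```

Plotting `bias` with a `bias_se` confidence band against frequency is the standard way such model comparisons are presented.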
NASA Astrophysics Data System (ADS)
Labibzadeh, Mojtaba
2016-01-01
A new technique is used in the Discrete Least Squares Meshless (DLSM) method to remove common deficiencies of meshfree methods in handling problems containing cracks or concave boundaries. An enhanced method, named VDLSM (Voronoi-based Discrete Least Squares Meshless), is developed in order to solve the steady-state heat conduction problem in irregular solid domains including concave boundaries or cracks. Existing meshless methods cannot precisely estimate the required unknowns in the vicinity of such boundaries, and previous research has been limited to domains with regular convex boundaries. To this end, the advantages of the Voronoi tessellation algorithm are exploited: the support domains of the sampling points are determined using a Voronoi tessellation. For the weight functions, a cubic spline polynomial of a normalized distance variable is used, which provides a high degree of smoothness near the above-mentioned discontinuities. Finally, Moving Least Squares (MLS) shape functions are constructed using a variational method. This straightforward scheme properly estimates the unknowns (in this particular study, the temperatures at the nodal points) near and on the crack faces, crack tips or concave boundaries without extra backward corrective procedures, i.e. iterative calculations for modifying the shape functions of the nodes located near or on these complex boundaries. The accuracy and efficiency of the presented method are investigated by analyzing four examples. The results of VDLSM are compared with the available analytical results or, when an analytical solution is not available, with the results of the well-known Finite Element Method (FEM). The comparisons reveal that the proposed technique gives high accuracy for the solution of steady-state heat conduction problems within cracked domains or domains with concave boundaries.
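The cubic spline weight function on a normalized distance variable mentioned above is a standard ingredient of MLS constructions. A minimal sketch (this is the commonly used C2 cubic spline; the paper may use a variant):

```python
import numpy as np

def cubic_spline_weight(d):
    """Cubic spline weight on the normalized distance d = |x - x_i| / r_support.

    Equals 2/3 at d = 0, decays smoothly (C2) to 0 at d = 1, zero beyond."""
    d = np.atleast_1d(np.asarray(d, dtype=float))
    w = np.zeros_like(d)
    m1 = d <= 0.5
    m2 = (d > 0.5) & (d <= 1.0)
    w[m1] = 2/3 - 4*d[m1]**2 + 4*d[m1]**3
    w[m2] = 4/3 - 4*d[m2] + 4*d[m2]**2 - (4/3)*d[m2]**3
    return w

print(cubic_spline_weight(np.linspace(0.0, 1.2, 7)))
```

In VDLSM the support radius `r_support` would come from the Voronoi cell of each sampling point, so that nodes separated by a crack face do not share support.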
Methods of channeling simulation
Barrett, J.H.
1989-06-01
Many computer simulation programs have been used to interpret experiments almost since the first channeling measurements were made. Certain aspects of these programs are important in how accurately they simulate ions in crystals; among these are the manner in which the structure of the crystal is incorporated, how any quantity of interest is computed, what ion-atom potential is used, how deflections are computed from the potential, incorporation of thermal vibrations of the lattice atoms, correlations of thermal vibrations, and form of stopping power. Other aspects of the programs are included to improve the speed; among these are table lookup, importance sampling, and the multiparameter method. It is desirable for programs to facilitate incorporation of special features of interest in special situations; examples are relaxations and enhanced vibrations of surface atoms, easy substitution of an alternate potential for comparison, change of row directions from layer to layer in strained-layer lattices, and different vibration amplitudes for substitutional solute or impurity atoms. Ways of implementing all of these aspects and features and the consequences of them will be discussed.
NASA Astrophysics Data System (ADS)
Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang
2015-10-01
For numerical simulation of detonation, the computational cost of uniform meshes is large due to the vast separation of both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for numerical investigation of multi-dimensional detonation. A well-designed AMR method based on a finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that AMR&WENO is accurate and has a high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide further insight into the high performance of the parallel AMR&WENO method.
NASA Astrophysics Data System (ADS)
Zheng, Minyi; Zhang, Bangji; Zhang, Jie; Zhang, Nong
2016-03-01
Physical parameters are very important for vehicle dynamic modeling and analysis. However, most physical parameter identification methods assume that some physical parameters of the vehicle are known and identify only the remaining ones. To identify the physical parameters of a vehicle in the case that all of them are unknown, a methodology based on the State Variable Method (SVM) for physical parameter identification of a two-axis on-road vehicle is presented. The modal parameters of the vehicle are identified by the SVM, and the physical parameters are then estimated by the least squares method. In numerical simulations, the physical parameters of a Ford Granada are chosen as the vehicle model parameters, and a half-sine bump function is used to simulate impulsive tire excitation. The first numerical simulation shows that the present method can identify all of the physical parameters, the largest absolute percentage error of the identified parameters being 0.205%. The effects of errors in the additional mass, the structural parameters and the measurement noise are discussed in the subsequent simulations; the results show that when the signal contains 30 dB noise, the largest absolute percentage error of the identification is 3.78%. These simulations verify that the presented method is effective and accurate for physical parameter identification of two-axis on-road vehicles. The proposed methodology can identify all physical parameters of a 7-DOF vehicle model using the free-decay responses of the vehicle, without assuming that any physical parameters are known.
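The least-squares estimation step can be illustrated on a scalar analogue: identifying the mass-normalized stiffness and damping of a single-DOF oscillator from its free-decay response. This is only a sketch of the idea (the paper works with a 7-DOF model and identifies modal parameters first); the numbers are illustrative.

```python
import numpy as np

# 1-DOF free decay: x'' + (c/m) x' + (k/m) x = 0
wn, zeta = 8.0, 0.05                     # natural frequency (rad/s), damping
a, b = zeta * wn, wn * np.sqrt(1 - zeta**2)

t = np.linspace(0.0, 5.0, 2000)
x = np.exp(-a * t) * np.cos(b * t)       # free-decay displacement
xd = np.exp(-a * t) * (-a * np.cos(b * t) - b * np.sin(b * t))
xdd = np.exp(-a * t) * ((a**2 - b**2) * np.cos(b * t) + 2*a*b*np.sin(b * t))

# least squares: [x, x'] @ [k/m, c/m] = -x''
A = np.column_stack([x, xd])
k_over_m, c_over_m = np.linalg.lstsq(A, -xdd, rcond=None)[0]
print(k_over_m, c_over_m)   # should recover wn**2 = 64 and 2*zeta*wn = 0.8
```

With noise-free analytic responses the recovery is exact; in practice the velocities and accelerations would come from measured or state-estimated free-decay signals.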
NASA Astrophysics Data System (ADS)
Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel
2013-06-01
To assess the on-orbit servicing (OOS) paradigm and optimize its utilities by taking advantage of its inherent flexibility and responsiveness, the OOS system assessment and optimization methods based on lifecycle simulation under uncertainties are studied. The uncertainty sources considered in this paper include both the aleatory (random launch/OOS operation failure and on-orbit component failure) and the epistemic (the unknown trend of the end-user market price) types. Firstly, the lifecycle simulation under uncertainties is discussed. The chronological flowchart is presented. The cost and benefit models are established, and the uncertainties thereof are modeled. The dynamic programming method to make optimal decisions in the face of uncertain events is introduced. Secondly, the method to analyze the propagation effects of the uncertainties on the OOS utilities is studied. With combined probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, based on which the OOS utility assessment tool under mixed uncertainties is developed. Thirdly, to further optimize the OOS system under mixed uncertainties, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of the traditional RBO method, which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourthly, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool. Furthermore, the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.
Wang, Lin; Zheng, Jinjian; Gong, Xiaoyi; Hartman, Robert; Antonucci, Vincent
2015-02-01
Development of a robust HPLC method for pharmaceutical analysis can be very challenging and time-consuming. In our laboratory, we have developed a new workflow leveraging ACD/Labs software tools to improve the performance of HPLC method development. First, we established ACD-based analytical method databases that can be searched by chemical structure similarity. By taking advantage of the existing knowledge of HPLC methods archived in the databases, one can find a good starting point for HPLC method development, or even reuse an existing method as is for a new project. Second, we used the software to predict compound physicochemical properties before running actual experiments to help select appropriate method conditions for targeted screening experiments. Finally, after selecting stationary and mobile phases, we used modeling software to simulate chromatographic separations for optimized temperature and gradient program. The optimized new method was then uploaded to internal databases as knowledge available to assist future method development efforts. Routine implementation of such standardized workflows has the potential to reduce the number of experiments required for method development and facilitate systematic and efficient development of faster, greener and more robust methods leading to greater productivity. In this article, we used Loratadine method development as an example to demonstrate efficient method development using this new workflow. PMID:25481084
Girod, Christophe; Vitalis, Renaud; Leblois, Raphaël; Fréville, Hélène
2011-01-01
Reconstructing the demographic history of populations is a central issue in evolutionary biology. Using likelihood-based methods coupled with Monte Carlo simulations, it is now possible to reconstruct past changes in population size from genetic data. Using simulated data sets under various demographic scenarios, we evaluate the statistical performance of Msvar, a full-likelihood Bayesian method that infers past demographic change from microsatellite data. Our simulation tests show that Msvar is very efficient at detecting population declines and expansions, provided the event is neither too weak nor too recent. We further show that Msvar outperforms two moment-based methods (the M-ratio test and Bottleneck) for detecting population size changes, whatever the time and the severity of the event. The same trend emerges from a compilation of empirical studies. The latest version of Msvar provides estimates of the current and the ancestral population size and the time since the population started changing in size. We show that, in the absence of prior knowledge, Msvar provides little information on the mutation rate, which results in biased estimates and/or wide credibility intervals for each of the demographic parameters. However, scaling the population size parameters with the mutation rate and scaling the time with current population size, as coalescent theory requires, significantly improves the quality of the estimates for contraction but not for expansion scenarios. Finally, our results suggest that Msvar is robust to moderate departures from a strict stepwise mutation model. PMID:21385729
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.
2015-04-01
Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improve image quality or reduce radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 to 25% D0. A lesion of fixed size and contrast was inserted at different locations into the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of the different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low dose protocol, lower than the standard dose due to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
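The Hotelling observer and the AUC summary measure can be demonstrated on synthetic channel outputs. The sketch below uses made-up Gaussian channel data in place of reconstructed-image channel outputs; the Hotelling template and the Mann-Whitney form of the AUC are standard, but nothing here reproduces the paper's channel models or data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ch, n = 10, 500
signal = np.full(n_ch, 0.3)          # assumed mean channel-output signal

# synthetic channel outputs for signal-absent and signal-present images
absent = rng.multivariate_normal(np.zeros(n_ch), np.eye(n_ch), n)
present = rng.multivariate_normal(signal, np.eye(n_ch), n)

# Hotelling template: w = S^-1 (mean_present - mean_absent),
# with the covariance pooled over both classes
pooled = np.vstack([absent, present - signal])
w = np.linalg.solve(np.cov(pooled.T), present.mean(0) - absent.mean(0))
t_a, t_p = absent @ w, present @ w   # observer test statistics

# AUC via the Mann-Whitney statistic
auc = (t_p[:, None] > t_a[None, :]).mean()
print(f"AUC = {auc:.3f}")
```

Comparing such AUC values across reconstruction methods at matched dose is exactly the role the summary measure plays in the study.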
Jiang, Guanchao; Chen, Hong; Wang, Qiming; Chi, Baorong; He, Qingnan; Xiao, Haipeng; Zhou, Qinghuan; Liu, Jing; Wang, Shan
2016-01-01
Background The National Clinical Skills Competition has been held in China for 5 consecutive years since 2010 to promote undergraduate education reform and improve the teaching quality. The effects of the simulation-based competition will be analyzed in this study. Methods Participation in the competitions and the compilation of the questions used in the competition finals are summarized, and the influence and guidance quality are further analyzed. Through the nationwide distribution of questionnaires in medical colleges, the effects of the simulation-based competition on promoting undergraduate medical education reform were evaluated. Results The results show that approximately 450 students from more than 110 colleges (accounting for 81% of colleges providing undergraduate clinical medical education in China) participated in the competition each year. The knowledge, skills, and attitudes were comprehensively evaluated by simulation-based assessment. Eight hundred and eighty copies of the questionnaires were distributed to 110 participating medical schools in 2015. In total, 752 valid responses were received across 95 schools. The majority of the interviewees agreed or strongly agreed that competition promoted the adoption of advanced educational principles (76.8%), updated the curriculum model and instructional methods (79.8%), strengthened faculty development (84.0%), improved educational resources (82.1%), and benefited all students (53.4%). Conclusions The National Clinical Skills Competition is widely accepted in China. It has effectively promoted the reform and development of undergraduate medical education in China. PMID:26894586
NASA Astrophysics Data System (ADS)
Rodríguez, J. M.; Jonsén, P.; Svoboda, A.
2016-08-01
Metal cutting is one of the most common metal-shaping processes. In this process, specified geometrical and surface properties are obtained through the break-up of material and removal by a cutting edge into a chip. The chip formation is associated with large strains, high strain rates and locally high temperatures due to adiabatic heating. These phenomena together with numerical complications make modeling of metal cutting difficult. Material models, which are crucial in metal-cutting simulations, are usually calibrated based on data from material testing. Nevertheless, the magnitudes of strains and strain rates involved in metal cutting are several orders of magnitude higher than those generated from conventional material testing. Therefore, a highly desirable feature is a material model that can be extrapolated outside the calibration range. In this study, a physically based plasticity model based on dislocation density and vacancy concentration is used to simulate orthogonal metal cutting of AISI 316L. The material model is implemented into an in-house particle finite-element method software. Numerical simulations are in agreement with experimental results, but also with previous results obtained with the finite-element method.
NASA Astrophysics Data System (ADS)
Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.
2007-01-01
We present a methodology for the large-eddy simulation of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). A description of a conservative, flux-based hybrid numerical method that uses both centered finite-difference and a weighted essentially non-oscillatory (WENO) scheme is given, encompassing the cases of scheme alternation and internal mesh interfaces resulting from SAMR. In this method, the centered scheme is used in turbulent flow regions while WENO is employed to capture shocks. One-, two- and three-dimensional numerical experiments and example simulations are presented including homogeneous shock-free turbulence, a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability.
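The shock-capturing half of the hybrid scheme can be illustrated in isolation. Below is a minimal sketch of the classic fifth-order WENO (Jiang-Shu) reconstruction of the left-biased interface value from five cell averages; the full method would pair this with a centered flux in smooth regions and a switching criterion.

```python
import numpy as np

def weno5(v, eps=1e-6):
    """Fifth-order WENO reconstruction of v_{i+1/2} from cells i-2..i+2."""
    v0, v1, v2, v3, v4 = v
    # three third-order candidate reconstructions
    p0 = (2*v0 - 7*v1 + 11*v2) / 6
    p1 = (-v1 + 5*v2 + 2*v3) / 6
    p2 = (2*v2 + 5*v3 - v4) / 6
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(v0 - 2*v1 + v2)**2 + 1/4*(v0 - 4*v1 + 3*v2)**2
    b1 = 13/12*(v1 - 2*v2 + v3)**2 + 1/4*(v1 - v3)**2
    b2 = 13/12*(v2 - 2*v3 + v4)**2 + 1/4*(3*v2 - 4*v3 + v4)**2
    # nonlinear weights, biased away from non-smooth stencils
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w @ np.array([p0, p1, p2])

print(weno5([0.0, 1.0, 2.0, 3.0, 4.0]))  # smooth (linear) data -> 2.5
print(weno5([0.0, 0.0, 0.0, 1.0, 1.0]))  # jump -> stays near 0, no overshoot
```

On smooth data the weights revert to the optimal linear values (0.1, 0.6, 0.3), recovering fifth-order accuracy; across the jump the weights collapse onto the smooth stencil, which is the essentially non-oscillatory property.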
Banihani, Suleiman; De, Suvranu
2009-01-01
In this paper we develop the Point Collocation-based Method of Finite Spheres (PCMFS) to simulate the viscoelastic response of soft biological tissues and evaluate the effectiveness of model order reduction methods such as modal truncation, Hankel optimal model and truncated balanced realization techniques for PCMFS. The PCMFS was developed in [1] as a physics-based technique for real time simulation of surgical procedures. It is a meshfree numerical method in which discretization is performed using a set of nodal points with approximation functions compactly supported on spherical subdomains centered at the nodes. The point collocation method is used as the weighted residual technique where the governing differential equations are directly applied at the nodal points. Since computational speed has a significant role in simulation of surgical procedures, model order reduction methods have been compared for relative gains in efficiency and computational accuracy. Of these methods, truncated balanced realization results in the highest accuracy while modal truncation results in the highest efficiency. PMID:20300494
NASA Astrophysics Data System (ADS)
Abustan, M. S.; Rahman, N. A.; Gotoh, H.; Harada, E.; Talib, S. H. A.
2016-07-01
In Malaysia, few studies on crowd evacuation simulation have been reported. Hence, the development of a numerical model of the crowd evacuation process that takes into account people's behavioral patterns and psychological characteristics is crucial for Malaysia. Moreover, tsunami disasters began to gain the attention of Malaysian citizens after the 2004 Indian Ocean Tsunami, which demanded a quick evacuation process. In light of these circumstances, we have conducted simulations of the tsunami evacuation process at the Miami Beach of Penang Island using a Distinct Element Method (DEM)-based crowd behavior simulator. The main objectives are to investigate and reproduce the current conditions of the evacuation process at this location under different hypothetical scenarios in order to study the efficiency of the evacuation. Sim-1 represents the initial evacuation plan, while sim-2 improves the plan by adding a new evacuation area. The simulation results show that sim-2 yields a shorter evacuation process than sim-1, reducing the evacuation time by 53 seconds. The effect of the additional evacuation place is confirmed by the decrease in the evacuation completion time. The numerical simulation may thus be promoted as an effective tool for studying crowd evacuation processes.
NASA Astrophysics Data System (ADS)
Rajan, C. Christober Asir
2010-10-01
The objective of this paper is to find a generation schedule that minimizes the total operating cost subject to a variety of constraints; that is, to find the optimal generating unit commitment in the power system for the next H hours. Genetic Algorithms (GAs) are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination and survival of the fittest. Here, the unit commitment schedule is coded as a string of symbols, and an initial population of parent solutions is generated at random. Each schedule is formed by committing all the units according to their initial status ("flat start"). The parents are obtained from a pre-defined set of solutions, i.e. each solution is adjusted to meet the requirements. Then a random recommitment is carried out with respect to the units' minimum down times, and simulated annealing (SA) improves the status. A 66-bus utility power system in India with twelve generating units demonstrates the effectiveness of the proposed approach. Numerical results compare the cost solutions and computation time obtained using the Genetic Algorithm method with those of other conventional methods.
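The GA encoding can be sketched on a toy single-hour problem: one bit per unit, a penalized cost as fitness, and selection plus bit-flip mutation. Capacities, costs and the demand are illustrative; the paper's problem spans H hours and adds constraints such as minimum down times (handled there by random recommitment) and an SA improvement step, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
cap  = np.array([100.0, 80.0, 60.0, 40.0, 20.0])   # unit capacity, MW
cost = np.array([10.0, 9.0, 12.0, 15.0, 20.0])     # $/MWh at full output
demand = 150.0

def fitness(bits):
    on = bits.astype(bool)
    deficit = max(0.0, demand - cap[on].sum())
    return cost[on] @ cap[on] + 1e4 * deficit      # penalize unmet demand

pop = rng.integers(0, 2, size=(30, 5))             # bit i = unit i committed
best, best_f = None, np.inf
for _ in range(100):
    f = np.array([fitness(ind) for ind in pop])
    if f.min() < best_f:
        best, best_f = pop[f.argmin()].copy(), f.min()
    parents = pop[np.argsort(f)[:10]]              # truncation selection
    children = parents[rng.integers(0, 10, size=30)].copy()
    children[rng.random(children.shape) < 0.1] ^= 1  # bit-flip mutation
    pop = children

print(best, best_f)
```

Even this crude GA reliably finds a feasible low-cost commitment on such a small search space; real unit commitment strings are far longer, which is where recombination and problem-specific repair operators matter.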
NASA Astrophysics Data System (ADS)
Tobin, Cara; Schaefli, Bettina; Nicótina, Ludovico; Simoni, Silvia; Barrenetxea, Guillermo; Smith, Russell; Parlange, Marc; Rinaldo, Andrea
2013-05-01
This paper proposes a new extension of the classical degree-day snowmelt model applicable to hourly simulations for regions with limited data and adaptable to a broad range of spatially-explicit hydrological models. The snowmelt schemes have been tested with a point measurement dataset at the Cotton Creek Experimental Watershed (CCEW) in British Columbia, Canada and with a detailed dataset available from the Dranse de Ferret catchment, an extensively monitored catchment in the Swiss Alps. The snowmelt model performance is quantified with the use of a spatially-explicit model of the hydrologic response. Comparative analyses are presented with the widely-known, grid-based method proposed by Hock which combines a local, temperature-index approach with potential radiation. The results suggest that a simple diurnal cycle of the degree-day melt parameter based on minimum and maximum temperatures is competitive with the Hock approach for sub-daily melt simulations. Advantages of the new extension of the classical degree-day method over other temperature-index methods include its use of physically-based, diurnal variations and its ability to be adapted to data-constrained hydrological models which are lumped in some nature.
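The idea of a diurnally varying degree-day factor can be sketched as follows. The sinusoidal modulation of the melt factor between a minimum and maximum value follows the spirit of the proposed extension; the exact functional form, parameter values and the synthetic temperature series are assumptions for illustration only.

```python
import numpy as np

def hourly_melt(temp_c, a_min=2.0, a_max=6.0, t_base=0.0):
    """Hourly melt (mm) from air temperature (deg C).

    Melt factors a_min/a_max are in mm per deg C per day; the diurnal
    cycle is an assumed sinusoid peaking mid-afternoon (hour 15)."""
    temp_c = np.asarray(temp_c, dtype=float)
    hours = np.arange(temp_c.size) % 24
    ddf = a_min + (a_max - a_min) * 0.5 * (1 + np.sin(2*np.pi*(hours - 9)/24))
    return (ddf / 24.0) * np.maximum(temp_c - t_base, 0.0)

# one synthetic day: freezing night, warm afternoon
t = 5.0 + 8.0 * np.sin(2 * np.pi * (np.arange(24) - 9) / 24)
melt = hourly_melt(t)
print(f"daily melt: {melt.sum():.2f} mm")
```

Compared with a constant degree-day factor, the diurnal modulation concentrates melt in the afternoon, mimicking the radiation-driven behavior that grid-based temperature-index-plus-radiation schemes capture explicitly.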
Osis, Sean T; Hettinga, Blayne A; Macdonald, Shari; Ferber, Reed
2016-01-01
In order to provide effective test-retest and pooling of information from clinical gait analyses, it is critical to ensure that the data produced are as reliable as possible. Furthermore, it has been shown that anatomical marker placement is the largest source of inter-examiner variance in gait analyses. However, the effects of specific, known deviations in marker placement on calculated kinematic variables are unclear, and there is currently no mechanism to provide location-based feedback regarding placement consistency. The current study addresses these disparities by: applying a simulation of marker placement deviations to a large (n = 411) database of runners; evaluating a recently published method of morphometric-based deviation detection; and pilot-testing a system of location-based feedback for marker placements. Anatomical markers from a standing neutral trial were moved virtually by up to 30 mm to simulate deviations. Kinematic variables during running were then calculated using the original, and altered static trials. Results indicate that transverse plane angles at the knee and ankle are most sensitive to deviations in marker placement (7.59 degrees of change for every 10 mm of marker error), followed by frontal plane knee angles (5.17 degrees for every 10 mm). Evaluation of the deviation detection method demonstrated accuracies of up to 82% in classifying placements as deviant. Finally, pilot testing of a new methodology for providing location-based feedback demonstrated reductions of up to 80% in the deviation of outcome kinematics. PMID:26765846
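The marker-perturbation sensitivity analysis can be illustrated with a toy example: perturb one marker of a three-marker joint-angle definition by 10 mm and measure the resulting angular change. The marker positions and the simple included-angle definition below are made up for illustration and are much cruder than the full kinematic model used in the study.

```python
import numpy as np

def angle_deg(a, b, c):
    """Angle at b (degrees) formed by segments b->a and b->c."""
    u, v = a - b, c - b
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# illustrative marker positions in mm (straight leg, markers collinear)
hip   = np.array([0.0, 0.0, 900.0])
knee  = np.array([0.0, 0.0, 450.0])
ankle = np.array([0.0, 0.0, 0.0])

baseline = angle_deg(hip, knee, ankle)            # 180 deg for a straight leg
knee_moved = knee + np.array([10.0, 0.0, 0.0])    # 10 mm placement deviation
perturbed = angle_deg(hip, knee_moved, ankle)

print(f"{baseline:.1f} -> {perturbed:.1f} deg")
```

Even this crude geometry shows a change of a couple of degrees per 10 mm of knee-marker deviation, the same order as the frontal-plane sensitivities reported above.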
A Wavelet-Based Method for Simulation of Seismic Wave Propagation
NASA Astrophysics Data System (ADS)
Hong, T.; Kennett, B. L.
2001-12-01
Seismic wave propagation (e.g., both P-SV and SH in 2-D) can be modeled using wavelets. The governing elastic wave equations are transformed to a first-order differential equation system in time with a displacement-velocity formulation. Spatial derivatives are represented with a wavelet expansion using a semigroup approach. The evolution equations in time are derived from a Taylor expansion in terms of wavelet operators. The wavelet representation allows high accuracy for the spatial derivatives. Absorbing boundary conditions are implemented by including attenuation terms in the formulation of the equations. The traction-free condition at a free surface can be introduced with an equivalent force system. Irregular boundaries can be handled through a remapping of the coordinate system. The method is based on a displacement-velocity scheme which reduces memory requirements by about 30% compared to the use of velocity-stress. The new approach gives excellent agreement with analytic results for simple models including the Rayleigh waves at a free surface. A major strength of the wavelet approach is that the formulation can be employed for highly heterogeneous media and so can be used for complex situations.
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Ren, Wei-Xin; Perera, Ricardo
2012-11-01
Stochastic model updating must be considered for quantifying the uncertainties inherent in real-world engineering structures. In this way the statistical properties of structural parameters, rather than deterministic values, can be sought to indicate parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in terms of theoretical complexity and computational cost. This study proposes a simple and cost-efficient method that decomposes a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted to generate samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting deterministic parameter values. The parameter means and variances can then be statistically estimated from the parameter predictions over all samples. Meanwhile, the analysis of variance approach is employed to evaluate the significance of parameter variability. The proposed method is demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method offers similar accuracy while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
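The decomposition described above (surrogate + Monte Carlo + per-sample deterministic inversion) can be sketched in a few lines. This is a one-parameter toy, not the paper's FE example: the "FE model" is a stand-in function, the surrogate is an ordinary quadratic fit, and the inverse step is a simple grid search on the surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

def fe_response(k):
    # stand-in for an expensive FE model response (e.g. a frequency)
    return np.sqrt(k)

# 1. Build a quadratic response surface (surrogate) from a few FE runs
k_design = np.linspace(1.0, 4.0, 7)
coeffs = np.polyfit(k_design, fe_response(k_design), 2)

# 2. Monte Carlo: draw response samples from their measured distribution
y_samples = rng.normal(loc=np.sqrt(2.5), scale=0.02, size=2000)

# 3. Deterministic inverse per sample, done cheaply on the surrogate
k_grid = np.linspace(1.0, 4.0, 4001)
y_grid = np.polyval(coeffs, k_grid)
k_est = np.array([k_grid[np.argmin((y_grid - y) ** 2)] for y in y_samples])

# 4. Statistics of the parameter over all samples
k_mean, k_var = k_est.mean(), k_est.var(ddof=1)
```

Each Monte Carlo sample triggers one deterministic inversion, exactly as in the decomposition the abstract describes; the surrogate is what makes the 2000 inversions affordable.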
NASA Astrophysics Data System (ADS)
Densmore, J. D.; Park, H.; Wollaber, A. B.; Rauenzahn, R. M.; Knoll, D. A.
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption-emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck-Cummings algorithm.
NASA Astrophysics Data System (ADS)
L'vov, P. E.; Svetukhin, V. V.
2016-07-01
Based on the free energy density functional method, the early stage of decomposition of a one-dimensional binary alloy in the regular-solution approximation has been simulated. The simulation takes into account Gaussian composition fluctuations arising from the initial alloy state. The calculation uses a block approach, in which the extensive solution volume is discretized into independent fragments, the decomposition process is calculated for each fragment, and a joint analysis of the resulting second-phase segregations is then performed. All stages of solid solution decomposition could be traced: nucleation, growth, and the initial stage of coalescence. The time dependences of the main phase distribution characteristics are calculated: the average size and concentration of second-phase particles, their size distribution function, and the nucleation rate of second-phase particles (clusters). Cluster trajectories in the size-composition space are constructed for the cases of growth and dissolution.
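A minimal free-energy-functional decomposition simulation of this type can be sketched with a 1-D Cahn-Hilliard equation for a regular solution, starting from Gaussian composition fluctuations. All parameter values (interaction parameter, gradient coefficient, mobility, grid) are illustrative assumptions, not the paper's; the block decomposition and cluster tracking are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dx, dt = 128, 1.0, 0.01
omega, kappa, M = 2.5, 1.0, 1.0   # omega > 2 gives a miscibility gap

# Gaussian composition fluctuations around the mean composition c0 = 0.5
c = 0.5 + 0.01 * rng.standard_normal(N)

def lap(a):
    # periodic one-dimensional Laplacian
    return (np.roll(a, 1) - 2.0 * a + np.roll(a, -1)) / dx**2

for _ in range(4000):
    # regular-solution chemical potential (units of kT) plus the
    # gradient-energy (interface) term
    mu = np.log(c / (1.0 - c)) + omega * (1.0 - 2.0 * c) - kappa * lap(c)
    c = c + dt * M * lap(mu)                  # Cahn-Hilliard update
    c = np.clip(c, 1e-9, 1.0 - 1e-9)          # guard the logarithm
```

Starting inside the spinodal, the fluctuations amplify and the profile separates into solute-rich and solute-poor domains near the binodal compositions, while the conservative update keeps the mean composition fixed.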
NASA Astrophysics Data System (ADS)
Bakhmurov, A. G.; Balashov, V. V.; Glonina, A. B.; Pashkov, V. N.; Smeliansky, R. L.; Volkanov, D. Yu.
2013-12-01
In this paper, the reliability allocation problem (RAP) for real-time avionics systems (RTAS) is considered. The proposed method for solving this problem consists of two steps: (i) creation of an RTAS simulation model at the necessary level of abstraction and (ii) application of a metaheuristic algorithm to find an optimal solution (i.e., to choose an optimal set of fault tolerance techniques). When the execution time of software components must be measured during algorithm execution, simulation modeling is applied. The simulation modeling procedure itself consists of the following steps: automatic construction of a simulation model of the RTAS configuration and running this model in a simulation environment to measure the required time. This method was implemented as an experimental software tool that works in cooperation with the DYANA simulation environment. The results of experiments with the implemented method are presented. Finally, future plans for development of the presented method and tool are briefly described.
Simulation of magnetization process of Pure-type superconductor magnet undulator based on T-method
NASA Astrophysics Data System (ADS)
Deri, Yi; Kawaguchi, Hideki; Tsuchimoto, Masanori; Tanaka, Takashi
2015-11-01
For the next generation of Free Electron Lasers, a Pure-type undulator made of high-Tc superconductors (HTSs) has been considered as a way to achieve a compact undulator with a high-intensity magnetic field. In general, it is very difficult to adjust the undulator magnet alignment after HTS magnetization, since the entire undulator is installed inside a cryostat. The appropriate HTS alignment therefore has to be determined at the design stage. This paper presents the development of a numerical simulation code for the magnetization process of the Pure-type HTS undulator to assist the design of the optimal size and alignment of the HTS magnets.
NASA Astrophysics Data System (ADS)
Zhang, Zaiyong; Wang, Wenke; Yeh, Tian-chyi Jim; Chen, Li; Wang, Zhoufeng; Duan, Lei; An, Kedong; Gong, Chengcheng
2016-06-01
In this paper, we develop a finite analytic method (FAMM), which combines the flexibility of numerical methods with the advantages of analytical solutions, to solve the mixed-form Richards' equation. This new approach minimizes mass balance errors and truncation errors associated with most numerical approaches. We use numerical experiments to demonstrate that FAMM obtains more accurate numerical solutions and controls the global mass balance better than the modified Picard finite difference method (MPFD), as judged against analytical solutions. In addition, FAMM is superior to the finite analytic method based on the head-based Richards' equation (FAMH). FAMM solutions are also compared to analytical solutions for wetting and drying processes in Brindabella Silty Clay Loam and Yolo Light Clay soils. Finally, we demonstrate that FAMM yields results comparable with those from MPFD and Hydrus-1D for simulating infiltration into other soils under wet and dry conditions. These numerical experiments further confirm that, as long as a hydraulic constitutive model captures the general behaviors of other models, it can be used to yield flow fields comparable to those based on the other models.
ERIC Educational Resources Information Center
Rule, David L.
Several regression methods were examined within the framework of weighted structural regression (WSR), comparing their regression weight stability and score estimation accuracy in the presence of outlier contamination. The methods compared are: (1) ordinary least squares; (2) WSR ridge regression; (3) minimum risk regression; (4) minimum risk 2;…
Continuous surface force based lattice Boltzmann equation method for simulating thermocapillary flow
NASA Astrophysics Data System (ADS)
Zheng, Lin; Zheng, Song; Zhai, Qinglan
2016-02-01
In this paper, we extend a lattice Boltzmann equation (LBE) model with a continuous surface force (CSF) to simulate thermocapillary flows. The model builds on our previous CSF LBE for athermal two-phase flow, in which the interfacial tension forces and the Marangoni stresses arising from interface interactions between different phases are described through the CSF concept. In this model, the sharp interfaces between phases are replaced by narrow transition layers, and the kinetics and morphological evolution of phase separation are characterized by an order parameter governed by the Cahn-Hilliard equation, which is solved in the framework of the LBE. The scalar convection-diffusion equation for the temperature field is solved by a thermal LBE. The model is validated against thermal two-layered Poiseuille flow and against two superimposed planar fluids at negligibly small Reynolds and Marangoni numbers for thermocapillary-driven convection, both of which have analytical solutions for the velocity and temperature. Thermocapillary migration of two- and three-dimensional deformable droplets is then simulated. Numerical results show that the predictions of the present LBE agree with the analytical solutions and other numerical results.
Xu, Jingyan; Fuld, Matthew K; Fung, George S K; Tsui, Benjamin M W
2015-04-01
Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improving image quality or reducing radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 to 25% D0. A fixed size and contrast lesion was inserted at different locations into the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low dose protocol, lower than the standard dose due to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved. PMID:25776521
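The core of a CHO evaluation of this kind is: project each image onto a small set of channels, form the Hotelling template in channel space, and estimate the AUC from the two decision-variable distributions. The sketch below is a toy version on synthetic 1-D "images" with an assumed channel set (a DC channel plus random stand-ins for frequency-selective channels), not the RS/RO channels or XCAT data of the study.

```python
import numpy as np

rng = np.random.default_rng(6)

def cho_auc(signal_imgs, noise_imgs, channels):
    # Channel outputs: one short feature vector per image
    vs = signal_imgs @ channels
    vn = noise_imgs @ channels
    # Hotelling template: average channel covariance, mean difference
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))
    w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))
    ts, tn = vs @ w, vn @ w
    # Nonparametric AUC: P(decision variable | signal > | noise)
    return (ts[:, None] > tn[None, :]).mean()

# Toy study: 64-pixel images, a DC channel plus two random channels
n, npix = 200, 64
channels = np.column_stack([np.ones(npix),
                            rng.standard_normal((npix, 2))])
noise_imgs = rng.standard_normal((n, npix))
signal_imgs = rng.standard_normal((n, npix)) + 0.5  # uniform weak signal
auc = cho_auc(signal_imgs, noise_imgs, channels)
```

In the study, this AUC (computed per reconstruction method and dose level) is the summary measure used to compare WFBP and SAFIRE and to derive the dose reduction factor.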
NASA Astrophysics Data System (ADS)
Langthjem, M. A.; Nakano, M.
2005-11-01
An axisymmetric numerical simulation approach to the hole-tone self-sustained oscillation problem is developed, based on the discrete vortex method for the incompressible flow field, and a representation of flow noise sources on an acoustically compact impingement plate by Curle's equation. The shear layer of the jet is represented by 'free' discrete vortex rings, and the jet nozzle and the end plate by bound vortex rings. A vortex ring is released from the nozzle at each time step in the simulation. The newly released vortex rings are disturbed by acoustic feedback. It is found that the basic feedback cycle works hydrodynamically. The effect of the acoustic feedback is to suppress the broadband noise and reinforce the characteristic frequency and its higher harmonics. An experimental investigation is also described. A hot wire probe was used to measure velocity fluctuations in the shear layer, and a microphone to measure acoustic pressure fluctuations. Comparisons between simulated and experimental results show quantitative agreement with respect to both frequency and amplitude of the shear layer velocity fluctuations. As to acoustic pressure fluctuations, there is quantitative agreement with respect to frequencies, and reasonable qualitative agreement with respect to peaks of the characteristic frequency and its higher harmonics. Both simulated and measured frequencies f follow the criterion L/u_c + L/c_0 = n/f, where L is the gap length between nozzle exit and end plate, u_c is the shear layer convection velocity, c_0 is the speed of sound, and n is a mode number (n = 1/2, 1, 3/2, ...). The experimental results however display a complicated pattern of mode jumps, which the numerical method cannot capture.
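The frequency criterion above solves directly for the mode frequencies. The sketch below evaluates it for illustrative numbers (gap length, convection speed, and sound speed are invented, not taken from the paper's experiment):

```python
def hole_tone_freq(gap_length, u_conv, c_sound, n):
    # Feedback criterion L/u_c + L/c_0 = n/f, solved for f
    return n / (gap_length / u_conv + gap_length / c_sound)

# illustrative values: 50 mm gap, 10 m/s convection, 343 m/s sound speed
modes = [0.5, 1.0, 1.5]
freqs = [hole_tone_freq(0.05, 10.0, 343.0, n) for n in modes]
```

Because f is proportional to n, the predicted mode frequencies form an exact ladder; the mode jumps observed in the experiment correspond to the system hopping between rungs of this ladder.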
NASA Astrophysics Data System (ADS)
Pantano, Carlos
2005-11-01
We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and as such, it utilizes refinement to computational advantage. The numerical method for the resolved scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997).
NASA Astrophysics Data System (ADS)
Li, Genggeng; Deng, Xiaozhong; Wei, Bingyang; Lei, Baozhen
2011-05-01
Two coordinate systems, one for a cradle-type hypoid generator and one for a free-form CNC machine tool that uses a disc milling cutter to generate hypoid pinion tooth surfaces by the modified-roll method, were set up, and the principle and method for transforming machine-tool settings between the two coordinate systems were studied. A finger milling cutter was conceptually mounted on the imagined disc milling cutter, with its motion controlled directly by the CNC axes so as to reproduce the effective cutting motion of the disc milling-cutter blades. Finger milling-cutter generation was accomplished by ordered circular interpolation, for which the interpolation center, starting point, and ending point were worked out. Finally, a hypoid pinion was virtually machined using the CNC machining simulation software VERICUT.
NASA Astrophysics Data System (ADS)
Soleimani, Meisam; Wriggers, Peter; Rath, Henryke; Stiesch, Meike
2016-06-01
In this paper, a 3D computational model has been developed to investigate biofilms in a multi-physics framework using smoothed particle hydrodynamics (SPH) based on a continuum approach. Biofilm formation is a complex process in the sense that several physical phenomena are coupled and consequently different time-scales are involved. On one hand, biofilm growth is driven by biological reaction and nutrient diffusion; on the other hand, it is influenced by fluid flow causing biofilm deformation and interface erosion in the context of fluid and deformable solid interaction. The geometrical and numerical complexity arising from these phenomena poses serious complications and challenges for grid-based techniques such as the finite element method. Here, the solution is based on SPH, a powerful meshless method. SPH-based computational modeling is quite new in the biological community, and the method is uniquely robust in capturing the interface-related processes of biofilm formation such as erosion. The obtained results show a good agreement with experimental and published data, which demonstrates that the model is capable of simulating and predicting the overall spatial and temporal evolution of biofilm.
Testing planetary transit detection methods with grid-based Monte-Carlo simulations.
NASA Astrophysics Data System (ADS)
Bonomo, A. S.; Lanza, A. F.
The detection of extrasolar planets by means of the transit method is a rapidly growing field of modern astrophysics. The periodic light dips produced by the passage of a planet in front of its parent star can be used to reveal the presence of the planet itself, to measure its orbital period and relative radius, as well as to perform studies on the outer layers of the planet by analysing the light of the star passing through the planet's atmosphere. We have developed a new method to detect transits of Earth-sized planets in front of solar-like stars that allows us to reduce the impact of stellar microvariability on transit detection. A large Monte Carlo numerical experiment has been designed to test the performance of our approach in comparison with other transit detection methods for stars of different magnitudes and planets of different radius and orbital period, as will be observed by the space experiments CoRoT and Kepler. The large computational load of this experiment has been managed by means of the Grid infrastructure of the COMETA consortium.
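As an illustration of the kind of transit search being benchmarked, the sketch below folds a synthetic light curve at trial periods and scores a box-shaped dip. It is a drastically simplified stand-in for the authors' method and the BLS-style algorithms they compare against; all numbers (period, depth, cadence, noise level) are invented, and stellar microvariability is ignored.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic light curve: flat star + periodic box transits + white noise
t = np.arange(0.0, 100.0, 0.01)            # days, 0.01-day cadence
period, depth, duration = 7.3, 5e-3, 0.2   # days, relative flux, days
in_transit = (t % period) < duration
flux = 1.0 - depth * in_transit + 2e-4 * rng.standard_normal(t.size)

def box_snr(t, flux, trial_period, duration):
    # Significance of a box-shaped dip when folding at trial_period
    phase = t % trial_period
    mask = phase < duration
    d = flux[~mask].mean() - flux[mask].mean()     # estimated depth
    return d / (flux[mask].std(ddof=1) / np.sqrt(mask.sum()))

periods = np.arange(1.0, 20.0, 0.01)
snr = np.array([box_snr(t, flux, p, duration) for p in periods])
best = periods[np.argmax(snr)]
```

The Monte Carlo experiment in the abstract amounts to repeating searches like this over a grid of magnitudes, radii, and orbital periods, which is why the computational load called for the COMETA Grid infrastructure.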
NASA Astrophysics Data System (ADS)
Qiao, Y.
2013-12-01
With China's economic development, water pollution incidents have occurred frequently; for example, cyanobacterial bloom events repeatedly occur in Taihu Lake. In this research, we investigate the transport of pollutant solutes released at different starting points, such as the eutrophication substances nitrogen and phosphorus, with the Lattice Boltzmann Method (LBM) performed on real pore geometries. The LBM has emerged as a powerful tool for simulating the behaviour of multi-component fluid systems in complex pore networks. We build a quick-response simulation system based on high-resolution GIS maps using the LBM. When the release starts at two different points in Meiliang Bay near Wuxi City, the simulations show that the pollutant solute cannot be transported out of the bay into the wider Taihu Lake, and the diffusion areas are similar. In contrast, when the release starts in the central region of the lake, the pollutant solute covers almost the whole area of the lake, providing good conditions for a cyanobacterial bloom; in the same way, a bloom transported into the central area would pollute the whole lake. Therefore, when monitoring and treating eutrophication substances, we need to focus on the central area of the lake.
Srolovitz, D.J.
1993-05-18
Binary alloys were investigated. Segregation to and thermodynamics of twist grain boundaries in Cu-Ni were studied. Segregation to and order-disorder phase transitions at grain boundaries in ordered Ni_{3-x}Al_{1+x} were also investigated. Order-disorder transitions at and segregation to the (001), (011), and (111) surfaces in Pd-Cu, Pd-Ag, and Pd-Au alloys were investigated. The (001) surface in Cu-rich alloys undergoes a surface phase transition from disordered to ordered surface phase upon cooling from high temperature, similar to the (001) surface transition in Ni-rich Pt-Ni alloys. Segregation and ordering appear to be correlated. The free energy minimization method was also used to calculate the heat of formation and lattice parameter of Ag-Cu metastable phases. Results of free energy minimization for free energy and entropy of Si agree with experiment and quasiharmonic calculations.
NASA Astrophysics Data System (ADS)
Toyoda, Takahiro; Sugiura, Nozomi; Masuda, Shuhei; Sasaki, Yuji; Igarashi, Hiromichi; Ishikawa, Yoichi; Hatayama, Takaki; Kawano, Takeshi; Kawai, Yoshimi; Kouketsu, Shinya; Katsumata, Katsuro; Uchida, Hiroshi; Doi, Toshimasa; Fukasawa, Masao; Awaji, Toshiyuki
2015-11-01
An improved vertical diffusivity scheme is introduced into an ocean general circulation model to better reproduce the observed features of water property distribution inherent in the deep Pacific Ocean structure. The scheme incorporates (a) a horizontally uniform background profile, (b) a parameterization depending on the local static stability, and (c) a parameterization depending on the bottom topography. Weighting factors for these parameterizations are optimally estimated based on the Green's function method. The optimized values indicate an important role of both the intense vertical diffusivity near rough topography and the background vertical diffusivity. This is consistent with recent reports that indicate the presence of significant vertical mixing associated with finite-amplitude internal wave breaking along the bottom slope and its remote effect. The robust simulation with less artificial trend of water properties in the deep Pacific Ocean illustrates that our approach offers a better modeling analysis for the deep ocean variability.
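When each parameterization's model response to a unit perturbation (its Green's function) is available, the optimal weighting factors reduce to a linear least-squares fit of observed anomalies. The sketch below is a generic toy of that estimation step: the response matrix, weights, and noise level are all invented, not the OGCM fields of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# columns: model response at the observation points to a unit
# perturbation of each mixing parameterization (the "Green's functions")
n_obs, n_par = 50, 3
G = rng.standard_normal((n_obs, n_par))

w_true = np.array([0.8, 1.5, 0.4])                  # hypothetical weights
d = G @ w_true + 0.05 * rng.standard_normal(n_obs)  # observed anomalies

# least-squares estimate of the weighting factors
w_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

In practice each column of `G` costs one forward model run, which is why the Green's function approach is attractive: the number of runs scales with the number of parameterizations, not with the state dimension.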
NASA Astrophysics Data System (ADS)
Ding, H.; Shu, C.; Yeo, K. S.; Xu, D.
2007-01-01
In this paper, the mesh-free least square-based finite difference (MLSFD) method is applied to numerically study the flow field around two circular cylinders arranged in side-by-side and tandem configurations. For each configuration, various geometrical arrangements are considered, in order to reveal the different flow regimes characterized by the gap between the two cylinders. In this work, the flow simulations are carried out in the low Reynolds number range, that is, Re=100 and 200. Instantaneous vorticity contours and streamlines around the two cylinders are used as visualization aids. Some flow parameters such as the Strouhal number and the drag and lift coefficients calculated from the solution are provided and quantitatively compared with those reported by other researchers.
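Strouhal numbers in such studies are typically extracted from the periodicity of the lift-coefficient signal. A minimal sketch on a synthetic signal (the shedding frequency St ≈ 0.165 is assumed purely for illustration):

```python
import numpy as np

# synthetic lift-coefficient trace behind a cylinder: pure shedding
# tone at f = St * U / D, with an assumed St for illustration
dt, U, D, St_true = 0.01, 1.0, 1.0, 0.165
t = np.arange(0.0, 200.0, dt)
cl = 0.3 * np.sin(2.0 * np.pi * St_true * U / D * t)

# Strouhal number from the dominant frequency of the lift signal
spec = np.abs(np.fft.rfft(cl))
freqs = np.fft.rfftfreq(t.size, dt)
St = freqs[np.argmax(spec[1:]) + 1] * D / U   # skip the DC bin
```

With a real simulation trace the spectrum also contains harmonics and transient content, so the initial transient is usually discarded before the FFT.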
NASA Astrophysics Data System (ADS)
Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.
2014-04-01
Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
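For reference, the MC-based side of the comparison rests on the standard stochastic EnKF analysis step, sketched below on a toy two-head groundwater state with one observed head. The ensemble size, state, and observation values are invented for illustration; the ME approach in the abstract replaces exactly this sampled update with moment equations.

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_update(X, H, y, obs_std):
    # Stochastic EnKF analysis step; X is (n_state, n_ens)
    n_ens = X.shape[1]
    Y = H @ X                                  # predicted observations
    y_pert = y[:, None] + obs_std * rng.standard_normal((y.size, n_ens))
    Xm, Ym = X.mean(1, keepdims=True), Y.mean(1, keepdims=True)
    A, B = X - Xm, Y - Ym
    Pxy = A @ B.T / (n_ens - 1)                # state-obs covariance
    Pyy = B @ B.T / (n_ens - 1) + obs_std**2 * np.eye(y.size)
    K = Pxy @ np.linalg.inv(Pyy)               # Kalman gain
    return X + K @ (y_pert - Y)

# toy: two hydraulic heads, the first one observed with small error
X = rng.normal([[10.0], [12.0]], 1.0, size=(2, 500))
H = np.array([[1.0, 0.0]])
y = np.array([10.8])
Xa = enkf_update(X, H, y, obs_std=0.1)
```

Filter inbreeding shows up here as `Pyy` being underestimated at small `n_ens`, which inflates `K` and collapses the ensemble spread; the moment-equation variant avoids this by never sampling.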
NASA Astrophysics Data System (ADS)
Morency, C.; Tromp, J.
2008-12-01
The mathematical formulation of wave propagation in porous media developed by Biot is based upon the principle of virtual work, ignoring processes at the microscopic level, and does not explicitly incorporate gradients in porosity. Based on recent studies focusing on averaging techniques, we derive the macroscopic porous medium equations from the microscale, with a particular emphasis on the effects of gradients in porosity. In doing so, we are able to naturally determine two key terms in the momentum equations and constitutive relationships, directly translating the coupling between the solid and fluid phases, namely a drag force and an interfacial strain tensor. In both terms, gradients in porosity arise. One remarkable result is that when we rewrite this set of equations in terms of the well-known Biot variables (u_s, w), terms involving gradients in porosity are naturally accommodated by gradients involving w, the fluid motion relative to the solid, and Biot's formulation is recovered, i.e., it remains valid in the presence of porosity gradients. We have developed a numerical implementation of the Biot equations for two-dimensional problems based upon the spectral-element method (SEM) in the time domain. The SEM is a high-order variational method, which has the advantage of accommodating complex geometries like a finite-element method, while keeping the exponential convergence rate of (pseudo)spectral methods. As in the elastic and acoustic cases, poroelastic wave propagation based upon the SEM involves a diagonal mass matrix, which leads to explicit time integration schemes that are well-suited to simulations on parallel computers. Effects associated with physical dispersion & attenuation and frequency-dependent viscous resistance are addressed by using a memory variable approach. Various benchmarks involving poroelastic wave propagation in the high- and low-frequency regimes, and acoustic-poroelastic and poroelastic-poroelastic discontinuities have been
NASA Astrophysics Data System (ADS)
Barnard, J. M.; Augarde, C. E.
2012-12-01
The simulation of reactions in flow through unsaturated porous media is a more complicated process when using particle tracking based models than in continuum based models. In the former, particles are reacted on an individual particle-to-particle basis using either deterministic or probabilistic methods. This means that particle tracking methods, especially when simulations of reactions are included, are computationally intensive, as the reaction simulations require tens of thousands of nearest neighbour searches per time step. Despite this, particle tracking methods merit further study due to their ability to eliminate numerical dispersion and to simulate anomalous transport and incomplete mixing of reactive solutes. A new model has been developed using discrete time random walk particle tracking methods to simulate reactive mass transport in porous media, which includes a variation of the colocation probability function based methods of reaction simulation presented by Benson & Meerschaert (2008). Model development has also included code acceleration via graphics processing units (GPUs). The nature of particle tracking methods means that they are well suited to parallelization using GPUs. The architecture of GPUs is single instruction - multiple data (SIMD): only one operation can be performed at any one time, but it can be performed on multiple data simultaneously. This allows for significant speed gains where long loops of independent operations are performed. Computationally expensive code elements, such as the nearest neighbour searches required by the reaction simulation, are therefore prime targets for GPU acceleration.
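The scheme can be sketched as a 1-D random walk with a colocation-probability bimolecular reaction A + B → products. This is a simplified illustration, not the authors' model: the reaction radius, probability, and all physical parameters are invented, and the nearest-neighbour search is done by brute force (the step that would be offloaded to a GPU).

```python
import numpy as np

rng = np.random.default_rng(5)

n0, D, dt, L = 400, 1e-3, 1.0, 1.0    # particles/species, diffusivity,
r_react, p_react = 0.01, 0.5          # time step, domain; radius, prob.

A = rng.uniform(0.0, L, n0)           # positions of A particles
B = rng.uniform(0.0, L, n0)           # positions of B particles

for _ in range(200):
    # random-walk (diffusion) step with periodic boundaries
    A = (A + np.sqrt(2.0 * D * dt) * rng.standard_normal(A.size)) % L
    B = (B + np.sqrt(2.0 * D * dt) * rng.standard_normal(B.size)) % L
    if A.size == 0 or B.size == 0:
        break
    # brute-force nearest-neighbour search: the costly, parallelizable step
    d = np.abs(A[:, None] - B[None, :])
    d = np.minimum(d, L - d)                       # periodic distance
    killA, killB = [], []
    usedA, usedB = set(), set()
    for idx in np.argsort(d, axis=None):           # closest pairs first
        i, j = divmod(int(idx), B.size)
        if d[i, j] > r_react:
            break
        if i in usedA or j in usedB:
            continue
        usedA.add(i); usedB.add(j)
        if rng.random() < p_react:                 # colocation probability
            killA.append(i); killB.append(j)
    A = np.delete(A, killA)
    B = np.delete(B, killB)
```

Each A-B pair is removed together, so the two populations stay equal; the pairwise distance matrix is the embarrassingly parallel kernel that maps naturally onto a SIMD GPU.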
NASA Astrophysics Data System (ADS)
Fox, Maik; Beuth, Thorsten; Streck, Andreas; Stork, Wilhelm
2015-09-01
Homodyne laser interferometers for velocimetry are well-known optical systems used in many applications. While the detector power output signal of such a system, using a long coherence length laser and a single target, is easily modelled using the Doppler shift, scenarios with a short coherence length source, e.g. an unstabilized semiconductor laser, and multiple weak targets demand a more elaborate approach for simulation. Especially when using fiber components, the actual setup is an important factor for system performance, as effects like return losses and multi-path propagation have to be taken into account. If the power received from the targets is of the same order as the stray light created in the fiber setup, a complete system simulation becomes a necessity. In previous work, a phasor based signal simulation approach for interferometers based on short coherence length laser sources has been evaluated. To facilitate the use of the signal simulation, a fiber component ray tracer has since been developed that allows the creation of input files for the signal simulation environment. The software uses object oriented MATLAB code, simplifying the entry of different fiber setups and the extension of the ray tracer. Thus, a seamless way from a system description based on arbitrarily interconnected fiber components to a signal simulation for different target scenarios has been established. The ray tracer and signal simulation are being used for the evaluation of interferometer concepts incorporating delay lines to compensate for short coherence length.
Simulation-Based Bronchoscopy Training
Kennedy, Cassie C.; Maldonado, Fabien
2013-01-01
Background: Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. Methods: We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. Results: From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n = 8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n = 7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, −1.47 to 2.69]) and process (0.33 [95% CI, −1.46 to 2.11]) outcomes (n = 2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Conclusions: Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few. PMID:23370487
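The pooled effect sizes with confidence intervals quoted above come from random-effects meta-analysis. A minimal sketch of the standard DerSimonian-Laird pooling is given below; the study effects and variances in the example are made-up numbers, not the review's data.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    # Random-effects pooled estimate with DerSimonian-Laird tau^2
    e, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                               # fixed-effect weights
    mu_fe = (w * e).sum() / w.sum()
    q = (w * (e - mu_fe) ** 2).sum()          # Cochran's Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (e.size - 1)) / c)   # between-study variance
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    mu = (w_re * e).sum() / w_re.sum()
    se = np.sqrt(1.0 / w_re.sum())
    return mu, (mu - 1.96 * se, mu + 1.96 * se)

# pooled effect across four hypothetical studies (effect, variance)
mu, ci = dersimonian_laird([1.21, 0.62, 1.05, 0.88],
                           [0.04, 0.07, 0.05, 0.06])
```

When the studies are homogeneous, tau² shrinks to zero and the estimate reduces to the fixed-effect (inverse-variance) pooled mean.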
Arnal, B; Pinton, G; Garapon, P; Pernot, M; Fink, M; Tanter, M
2013-10-01
Shear wave imaging (SWI) maps soft tissue elasticity by measuring shear wave propagation with ultrafast ultrasound acquisitions (10 000 frames s(-1)). This spatiotemporal data can be used as an input for an inverse problem that determines a shear modulus map. Common inversion methods are local: the shear modulus at each point is calculated from the values of its neighbours (e.g. time-of-flight, wave equation inversion). However, these approaches are sensitive to information loss such as noise or a lack of backscattered signal. In this paper, we evaluate the benefits of a global approach to elasticity inversion using a least-squares formulation, derived from full waveform inversion in geophysics and known as the adjoint method. We simulate an acoustic waveform in a medium with a soft and a hard lesion. For this initial application, full elastic propagation and viscosity are ignored. We demonstrate that the reconstruction of the shear modulus map is robust with a non-uniform background or in the presence of noise with regularization. Compared to regular local inversions, the global approach leads to an increase in contrast (∼+3 dB) and a decrease in the quantification error (∼2%). We demonstrate that the inversion is reliable in cases where no signal is measured within the inclusions, such as hypoechoic lesions, which could have an impact on medical diagnosis. PMID:24018867
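For contrast with the global adjoint approach, the local time-of-flight inversion the paper compares against can be sketched in a few lines: the local wave speed is estimated from arrival times at neighbouring points, and the shear modulus follows from mu = rho * c^2. This is an illustrative 1-D toy with synthetic arrival times and an assumed density, not the paper's method.

```python
def tof_modulus(times, dx, rho=1000.0):
    """Local time-of-flight inversion (illustrative sketch).

    Central differences of the arrival times give the local slowness
    dt/dx; the shear modulus is then rho * c**2 at each interior point.
    """
    mods = []
    for i in range(1, len(times) - 1):
        slowness = (times[i + 1] - times[i - 1]) / (2 * dx)
        c = 1.0 / slowness
        mods.append(rho * c * c)
    return mods

# Synthetic noiseless data: 2 m/s background with a stiff 4 m/s inclusion.
dx = 1e-3
speeds = [2.0] * 10 + [4.0] * 10 + [2.0] * 10
times, t = [], 0.0
for c in speeds:
    times.append(t)
    t += dx / c
mods = tof_modulus(times, dx)
```

On clean data this recovers the background (4 kPa) and inclusion (16 kPa) moduli; the paper's point is that such local estimates degrade quickly once noise or signal voids corrupt the arrival times.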
NASA Astrophysics Data System (ADS)
Rettig, Ralf; Ritter, Nils C.; Müller, Frank; Franke, Martin M.; Singer, Robert F.
2015-12-01
A method for predicting the fastest possible homogenization treatment of the as-cast microstructure of nickel-based superalloys is presented and compared with experimental results for the single-crystal superalloy ERBO/1. The computational prediction method is based on phase-field simulations. Experimentally determined compositional fields of the as-cast microstructure from microprobe measurements are used as input data. The software program MICRESS is employed to account for multicomponent diffusion, dissolution of the eutectic phases, and nucleation and growth of the liquid phase (incipient melting). The optimization itself is performed using an iterative algorithm that increases the temperature in such a way that the microstructural state is always very close to the incipient melting limit. Maps are derived that describe the dissolution of primary γ/γ'-islands and the elimination of residual segregation with respect to temperature and time.
NASA Astrophysics Data System (ADS)
Moster, Benjamin P.; Macciò, Andrea V.; Somerville, Rachel S.
2014-01-01
We present a new approach to study galaxy evolution in a cosmological context. We combine cosmological merger trees and semi-analytic models of galaxy formation to provide the initial conditions for multimerger hydrodynamic simulations. In this way, we exploit the advantages of merger simulations (high resolution and inclusion of the gas physics) and semi-analytic models (cosmological background and low computational cost), and integrate them to create a novel tool. This approach allows us to study the evolution of various galaxy properties, including the treatment of the hot gaseous halo from which gas cools and accretes on to the central disc, which has been neglected in many previous studies. This method shows several advantages over other methods. As only the particles in the regions of interest are included, the run time is much shorter than in traditional cosmological simulations, leading to greater computational efficiency. Using cosmological simulations, we show that multiple mergers are expected to be more common than sequences of isolated mergers, and therefore studies of galaxy mergers should take this into account. In this pilot study, we present our method and illustrate the results of simulating 10 Milky Way-like galaxies since z = 1. We find good agreement with observations for the total stellar masses, star formation rates, cold gas fractions and disc scalelength parameters. We expect that this novel numerical approach will be very useful for pursuing a number of questions pertaining to the transformation of galaxy internal structure through cosmic time.
New methods in plasma simulation
Mason, R.J.
1990-02-23
The development of implicit methods of particle-in-cell (PIC) computer simulation in recent years, and their merger with older hybrid methods, has created a new arsenal of simulation techniques for the treatment of complex practical problems in plasma physics. The new implicit hybrid codes are aimed at transitional problems that lie somewhere between the long time scale, high density regime associated with MHD modeling, and the short time scale, low density regime appropriate to PIC techniques. This transitional regime arises in ICF coronal plasmas, in pulsed power plasma switches, in Z-pinches, and in foil implosions. Here, we outline how such a merger of implicit and hybrid methods has been carried out, specifically in the ANTHEM computer code, and demonstrate the utility of implicit hybrid simulation in applications. 25 refs., 5 figs.
NASA Astrophysics Data System (ADS)
Nam, Jong-Ho; Park, Inha; Lee, Ho Jin; Kwon, Mi Ok; Choi, Kyungsik; Seo, Young-Kyo
2013-06-01
Ever since the Arctic region opened its passage to mankind, continuous attempts have been made to take advantage of the fast routes across the region. The Arctic is still covered by thick ice, so finding a feasible navigation route is essential for an economical voyage. To find the optimal route, it is necessary to establish an efficient transit model that enables every possible route to be simulated in advance. In this work, an enhanced algorithm to determine the optimal route in the Arctic region is introduced. A transit model is developed based on numerically simulated sea-ice and environmental data for the Arctic. By integrating the simulated data into the transit model, further applications such as route simulation, cost estimation or hindcasting can easily be performed. An interactive simulation system that determines the optimal Arctic route using the transit model is developed. The simulation of optimal routes is carried out and the validity of the results is discussed.
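A minimal transit-model route search can be framed as a shortest-path problem over a grid of per-cell transit costs derived from ice conditions, which Dijkstra's algorithm then solves. The ice-cost grid below is a hypothetical stand-in for the simulated sea-ice data, not the paper's transit model.

```python
import heapq

def optimal_route(cost, start, goal):
    """Dijkstra shortest path on a grid of per-cell transit costs.

    cost[i][j] is the (hypothetical) time to traverse cell (i, j),
    e.g. derived from simulated ice thickness; impassable cells can
    use float('inf').
    """
    rows, cols = len(cost), len(cost[0])
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float('inf')):
            continue
        i, j = node
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                nd = d + cost[ni][nj]
                if nd < dist.get((ni, nj), float('inf')):
                    dist[(ni, nj)] = nd
                    prev[(ni, nj)] = node
                    heapq.heappush(queue, (nd, (ni, nj)))
    # Walk the predecessor chain back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Toy ice field: a thick-ice ridge (high cost) forces a southern detour.
ice = [[1, 9, 1],
       [1, 9, 1],
       [1, 1, 1]]
route, total = optimal_route(ice, (0, 0), (0, 2))
```

Here the route detours around the ridge at total cost 6 instead of crossing it at cost 10, the same trade-off a real transit model makes between distance and ice resistance.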
NASA Astrophysics Data System (ADS)
Zhang, Qian-Jiang; Dai, Shi-Kun; Chen, Long-Wei; Qiang, Jian-Ke; Li, Kun; Zhao, Dong-Dong
2016-06-01
To deal with the problem of low computational precision at the nodes near the source and satisfy the requirements for computational efficiency in inversion imaging and finite-element numerical simulations of the direct current method, we propose a new mesh refinement and recoarsement method for a two-dimensional point source. We introduce the mesh refinement and mesh recoarsement into the traditional structured mesh subdivision. By refining the horizontal grids, the singularity owing to the point source is minimized and the topography is simulated. By recoarsening the horizontal grids, the number of grid cells is reduced significantly and computational efficiency is improved. Model tests show that the proposed method solves the singularity problem and reduces the number of grid cells by 80% compared to the uniform grid refinement.
Bootstrapping Methods Applied for Simulating Laboratory Works
ERIC Educational Resources Information Center
Prodan, Augustin; Campean, Remus
2005-01-01
Purpose: The aim of this work is to implement bootstrapping methods into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical methods to infer the truth from sample…
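A bootstrap resampling routine of the kind such e-tools might wrap can be sketched as follows; the measurement data and the percentile-interval choice are invented for illustration.

```python
import random
random.seed(0)

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for a statistic.

    Resamples `sample` with replacement n_boot times and returns the
    empirical (alpha/2, 1 - alpha/2) quantiles of the statistic.
    """
    n = len(sample)
    stats = sorted(
        stat([random.choice(sample) for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)
# Hypothetical repeated lab measurements of the same quantity.
data = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.7, 5.3]
low, high = bootstrap_ci(data, mean)
```

Unlike a textbook t-interval, the bootstrap makes no normality assumption, which is the pedagogical point of simulating laboratory data this way.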
NASA Astrophysics Data System (ADS)
Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.
2006-09-01
This paper describes a hybrid finite-difference method for the large-eddy simulation of compressible flows with low-numerical dissipation and structured adaptive mesh refinement (SAMR). A conservative flux-based approach is described with an explicit centered scheme used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. Three-dimensional numerical simulations of a Richtmyer-Meshkov instability are presented.
HyeongKae Park; R. Nourgaliev; Richard C. Martineau; Dana A. Knoll
2008-09-01
Multidimensional, higher-order (2nd and higher) numerical methods have come to the forefront in recent years due to significant advances in computer technology and numerical algorithms, and have shown great potential as viable design tools for realistic applications. To achieve this goal, implicit, high-order accurate coupling of the multiphysics simulations is a critical component. One of the issues that arises from multiphysics simulation is the necessity to resolve multiple time scales. For example, the dynamical time scales of neutron kinetics, fluid dynamics and heat conduction differ significantly (typically by more than 10^10), with the dominant (fastest) physical mode also changing during the course of a transient [Pope and Mousseau, 2007]. This leads to a severe time step restriction for stability in traditional multiphysics (i.e. operator-split, semi-implicit discretization) simulations. Lower-order methods also suffer from undesirable numerical dissipation. Thus an implicit, higher-order accurate scheme is necessary to perform seamlessly coupled multiphysics simulations that can be used to analyze "what-if" regulatory accident scenarios, or to design and optimize engineering systems.
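The time-scale argument can be made concrete with a scalar stand-in: for a mode 10^4 times faster than the resolved scale, explicit forward Euler at a step sized for the slow physics blows up, while implicit backward Euler remains stable. This is a generic stiff-ODE illustration, not the cited coupling scheme.

```python
# Linear test mode dy/dt = lam * y; lam = -1e4 is a stand-in for the
# fastest physics (e.g. neutron kinetics) while dt resolves only the
# slow scale (e.g. heat conduction).

def forward_euler(y, lam, dt, steps):
    for _ in range(steps):
        y = y + dt * lam * y          # amplification factor 1 + dt*lam
    return y

def backward_euler(y, lam, dt, steps):
    for _ in range(steps):
        y = y / (1 - dt * lam)        # amplification factor 1/(1 - dt*lam)
    return y

dt = 0.01                             # sized for the slow mode only
explicit = forward_euler(1.0, -1e4, dt, 100)   # |1 + dt*lam| = 99: unstable
implicit = backward_euler(1.0, -1e4, dt, 100)  # |1/(1 + 100)| < 1: decays
```

The explicit amplification factor has magnitude 99 per step, so the solution diverges catastrophically; the implicit factor 1/101 damps the fast mode at the same step size, which is the motivation for implicit coupling stated above.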
NASA Astrophysics Data System (ADS)
Lin, Yuchun; Baumketner, Andrij; Deng, Shaozhong; Xu, Zhenli; Jacobs, Donald; Cai, Wei
2009-10-01
In this paper, a new solvation model is proposed for simulations of biomolecules in aqueous solutions that combines the strengths of explicit and implicit solvent representations. Solute molecules are placed in a spherical cavity filled with explicit water, thus providing microscopic detail where it is most needed. Solvent outside of the cavity is modeled as a dielectric continuum whose effect on the solute is treated through the reaction field corrections. With this explicit/implicit model, the electrostatic potential represents a solute molecule in an infinite bath of solvent, thus avoiding unphysical interactions between periodic images of the solute commonly used in the lattice-sum explicit solvent simulations. For improved computational efficiency, our model employs an accurate and efficient multiple-image charge method to compute reaction fields together with the fast multipole method for the direct Coulomb interactions. To minimize the surface effects, periodic boundary conditions are employed for nonelectrostatic interactions. The proposed model is applied to study liquid water. The effect of model parameters, which include the size of the cavity, the number of image charges used to compute reaction field, and the thickness of the buffer layer, is investigated in comparison with the particle-mesh Ewald simulations as a reference. An optimal set of parameters is obtained that allows for a faithful representation of many structural, dielectric, and dynamic properties of the simulated water, while maintaining manageable computational cost. With controlled and adjustable accuracy of the multiple-image charge representation of the reaction field, it is concluded that the employed model achieves convergence with only one image charge in the case of pure water. Future applications to pKa calculations, conformational sampling of solvated biomolecules and electrolyte solutions are briefly discussed.
Ljungberg, Michael; Sjögreen, Katarina; Liu, Xiaowei; Frey, Eric; Dewaraja, Yuni; Strand, Sven-Erik
2009-01-01
A general method is presented for patient-specific 3-dimensional absorbed dose calculations based on quantitative SPECT activity measurements. Methods: The computational scheme includes a method for registration of the CT image to the SPECT image and position-dependent compensation for attenuation, scatter, and collimator detector response performed as part of an iterative reconstruction method. A method for conversion of the measured activity distribution to a 3-dimensional absorbed dose distribution, based on the EGS4 (electron-gamma shower, version 4) Monte Carlo code, is also included. The accuracy of the activity quantification and the absorbed dose calculation is evaluated on the basis of realistic Monte Carlo-simulated SPECT data, using the SIMIND (simulation of imaging nuclear detectors) program and a voxel-based computer phantom. CT images are obtained from the computer phantom, and realistic patient movements are added relative to the SPECT image. The SPECT-based activity concentration and absorbed dose distributions are compared with the true ones. Results: Correction could be made for object scatter, photon attenuation, and scatter penetration in the collimator. However, inaccuracies were imposed by the limited spatial resolution of the SPECT system, for which the collimator response correction did not fully compensate. Conclusion: The presented method includes compensation for most parameters degrading the quantitative image information. The compensation methods are based on physical models and therefore are generally applicable to other radionuclides. The proposed evaluation methodology may be used as a basis for future intercomparison of different methods. PMID:12163637
A simulation method for the fruitage body
NASA Astrophysics Data System (ADS)
Lu, Ling; Song, Weng-lin; Wang, Lei
2009-07-01
An effective visual modeling method for creating fruit bodies is presented. Based on the geometric shape of the fruit, a surface model is built from ellipsoid deformation, parameterized by the radius. Surfaces at different radii are generated with the same method to represent the interior of the fruit, and the body model is formed by combining these surfaces along the radial direction. The method can therefore represent both the inner and outer structure of the fruit body, while greatly reducing the amount of data and increasing display speed. In addition, a texture model for the fruit is defined as a sum of basis functions, which is simple and fast. We show the feasibility of the method by creating a winter jujube and an apricot, each including exocarp, mesocarp and endocarp. The approach is useful for developing virtual plants.
Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin
2015-02-01
There are several challenges in near-infrared non-invasive blood glucose measurement, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and the unpredictable and irregular changes of the measured object. It is therefore difficult to extract the information on blood glucose concentration accurately from the complicated signals. A reference measurement is usually considered as a way to eliminate the effect of background changes, but there is no reference substance that changes synchronously with the analyte. After many years of research, our group has proposed the floating reference method, which succeeds in eliminating the spectral effects induced by instrument drifts and variations in the measured object's background. However, our studies indicate that the reference point changes with measurement location and wavelength, so the effectiveness of the floating reference method should be verified comprehensively. In this paper, for simplicity, Monte Carlo simulations employing Intralipid solutions with concentrations of 5% and 10% are performed to verify the ability of the floating reference method to eliminate the consequences of light source drift, where the drift is introduced by varying the number of incident photons. The effectiveness of the floating reference method, with corresponding reference points at different wavelengths, in eliminating the variations caused by light source drift is estimated. A comparison of the prediction abilities of calibration models with and without the method shows that the RMSEPs are decreased by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method has an obvious effect in eliminating background changes. PMID:25970930
Schrottke, L.; Lü, X.; Grahn, H. T.
2015-04-21
We present a self-consistent model for carrier transport in periodic semiconductor heterostructures completely formulated in the Fourier domain. In addition to the Hamiltonian for the layer system, all expressions for the scattering rates, the applied electric field, and the carrier distribution are treated in reciprocal space. In particular, for slowly converging cases of the self-consistent solution of the Schrödinger and Poisson equations, numerous transformations between real and reciprocal space during the iterations can be avoided by using the presented method, which results in a significant reduction of computation time. Therefore, it is a promising tool for the simulation and efficient design of complex heterostructures such as terahertz quantum-cascade lasers.
Matrix method for acoustic levitation simulation.
Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C
2011-08-01
A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort. PMID:21859587
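The core ingredient of the matrix method, the pressure at field points as a superposition of spherical waves radiated by discretized transducer elements (the Rayleigh integral), can be sketched as follows. Constants are dropped, the geometry is an assumption, and no transducer-reflector multiple reflections are included, so this is only the single-propagation building block, not the full matrix formulation.

```python
import cmath, math

def rayleigh_pressure(source_points, field_point, k):
    """Pressure at a field point from a discretized flat transducer via
    the Rayleigh integral: each surface element radiates a spherical
    wave exp(-j*k*r)/r (amplitude constants dropped; sketch only)."""
    p = 0j
    fx, fy, fz = field_point
    for sx, sy, sz in source_points:
        r = math.sqrt((fx - sx) ** 2 + (fy - sy) ** 2 + (fz - sz) ** 2)
        p += cmath.exp(-1j * k * r) / r
    return p

# ~37.9 kHz in air: wavelength about 9 mm.
k = 2 * math.pi / 9.05e-3
# A 1 cm radius piston sampled on a 2 mm grid (elements inside radius).
h = 2e-3
src = [(x * h, y * h, 0.0) for x in range(-5, 6) for y in range(-5, 6)
       if (x * h) ** 2 + (y * h) ** 2 <= 0.01 ** 2]
on_axis = abs(rayleigh_pressure(src, (0.0, 0.0, 0.05), k))
off_axis = abs(rayleigh_pressure(src, (0.05, 0.0, 0.05), k))
```

The axial field is far stronger than the field 45 degrees off-axis, as expected for a piston of this size; the matrix method chains such source-to-surface propagations into transfer matrices to account for the reverberation between transducer and reflector.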
Yang, Qidong; Zuo, Hongchao; Li, Weidong
2016-01-01
Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing methods, there are uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to obtain by observation are optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm improved the simulations of soil moisture and latent heat flux in all tests; differences between simulated results and observational data are clearly reduced, but tests adopting the optimized parameters cannot simultaneously improve the simulation results for net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on the different datasets have the same order of magnitude but are not identical; soil parameters vary only slightly, while the variation range of the vegetation parameters is large. PMID:26991786
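A minimal particle swarm optimizer of the kind used to calibrate such parameters might look like the following sketch. The two-parameter quadratic misfit stands in for the SHAW-versus-observation error, and the bounds, swarm size, and coefficients are illustrative assumptions.

```python
import random
random.seed(1)

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (sketch, not the study's code).

    `bounds` lists (low, high) per parameter, playing the role of
    physically plausible ranges for soil/vegetation parameters; the
    objective stands in for the model-vs-observation misfit.
    """
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in misfit with a known optimum at (0.3, 0.6).
misfit = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.6) ** 2
best, best_val = pso(misfit, [(0.0, 1.0), (0.0, 1.0)])
```

Each particle is pulled toward its own best position and the swarm's best, so the search needs no gradients of the land-surface model, only forward runs, which is why PSO pairs naturally with a black-box simulator like SHAW.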
Simulation-based surgical education.
Evgeniou, Evgenios; Loizou, Peter
2013-09-01
The reduction in time for training at the workplace has created a challenge for the traditional apprenticeship model of training. Simulation offers the opportunity for repeated practice in a safe and controlled environment, focusing on trainees and tailored to their needs. Recent technological advances have led to the development of various simulators, which have already been introduced in surgical training. The complexity and fidelity of the available simulators vary; therefore, depending on our resources, we should select the appropriate simulator for the task or skill we want to teach. Educational theory informs us about the importance of context in professional learning. Simulation should therefore recreate the clinical environment and its complexity. Contemporary approaches to simulation have introduced novel ideas for teaching teamwork, communication skills and professionalism. In order for simulation-based training to be successful, simulators have to be validated appropriately and integrated in a training curriculum. Within a surgical curriculum, trainees should have protected time for simulation-based training, under appropriate supervision. Simulation-based surgical education should allow the appropriate practice of technical skills without ignoring the clinical context and must strike an adequate balance between the simulation environment and simulators. PMID:23088646
NASA Astrophysics Data System (ADS)
Rothhämel, Malte; IJkema, Jolle; Drugge, Lars
2011-12-01
There have been several investigations into how drivers experience a change in vehicle-handling behaviour. However, the hypothesis that there is a correlation between what the driver perceives and vehicle-handling properties remains to be verified. To define what people feel, the human perception of steering systems was divided into dimensions of perception. Then 28 test drivers rated different steering system characteristics of a semi-trailer tractor combination in a moving-base driving simulator. Characteristics of the steering system differed in friction, damping, inertia and stiffness. The same steering system characteristics were also tested in accordance with international standards for vehicle-handling tests, resulting in characteristic quantities. The instrumental measurements and the non-instrumental ratings were analysed with respect to the correlation between each other with the help of regression analysis and neural networks. Results show that there are correlations between measurements and ratings. Moreover, it is shown which of the handling variables influence the different dimensions of the steering feel.
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. A comparison between these two models was then carried out. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining a sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency. PMID:26211074
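Simulated-annealing optimization of a sampling configuration can be sketched with a maximin spatial-coverage score standing in for the study's objective: swap one site at a time, always accept improvements, and accept worse swaps with a temperature-controlled probability. The candidate grid and all parameters are invented for illustration.

```python
import math, random
random.seed(2)

def anneal_design(candidates, k, iters=4000, t0=1.0, cooling=0.999):
    """Simulated-annealing selection of k sampling sites (sketch).

    Scores a configuration by its smallest pairwise distance (a maximin
    coverage criterion, a stand-in for the paper's objective) and
    cools the acceptance temperature geometrically.
    """
    def score(sel):
        return min(math.dist(a, b)
                   for i, a in enumerate(sel) for b in sel[i + 1:])

    cur = random.sample(candidates, k)
    cur_s = score(cur)
    best, best_s, t = cur[:], cur_s, t0
    for _ in range(iters):
        trial = cur[:]
        trial[random.randrange(k)] = random.choice(candidates)
        if len(set(trial)) < k:        # skip configs with repeated sites
            continue
        s = score(trial)
        # Accept improvements always, degradations with Boltzmann prob.
        if s > cur_s or random.random() < math.exp((s - cur_s) / t):
            cur, cur_s = trial, s
            if s > best_s:
                best, best_s = trial[:], s
        t *= cooling
    return best, best_s

# Candidate sites on a 10x10 grid; pick 5 well-spread ones.
sites = [(x, y) for x in range(10) for y in range(10)]
design, spread = anneal_design(sites, 5)
```

The early high-temperature phase lets the configuration escape poor local arrangements, which is the property that distinguishes annealing from greedy site swapping.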
James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael
2009-01-01
A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
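The time-splitting idea, several cheap explicit advective steps followed by one implicit dispersive solve spanning their combined interval, can be sketched in 1-D. This is an illustrative upwind/backward-Euler toy, not the TaRSE Godunov/mixed-finite-element discretization; boundary values are simply held fixed.

```python
def step_transport(c, u, D, dx, dt, n_adv=4):
    """One time-split step of 1-D advection-dispersion (sketch).

    Runs n_adv explicit upwind advection sub-steps (u > 0 assumed),
    then a single implicit (backward Euler) diffusion solve covering
    their combined interval, via the Thomas tridiagonal algorithm.
    """
    n = len(c)
    lam = u * dt / dx                      # advective Courant number
    for _ in range(n_adv):                 # explicit advection sub-steps
        c = [c[0]] + [c[i] - lam * (c[i] - c[i - 1]) for i in range(1, n)]
    # Implicit diffusion over the full interval: (I - r*Laplacian) x = c,
    # with Dirichlet rows at both ends.
    r = D * (n_adv * dt) / dx ** 2
    sub = [0.0] + [-r] * (n - 2) + [0.0]
    dia = [1.0] + [1.0 + 2.0 * r] * (n - 2) + [1.0]
    sup = [0.0] + [-r] * (n - 2) + [0.0]
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / dia[0], c[0] / dia[0]
    for i in range(1, n):                  # forward elimination
        m = dia[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (c[i] - sub[i] * dp[i - 1]) / m
    out = [0.0] * n
    out[-1] = dp[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        out[i] = dp[i] - cp[i] * out[i + 1]
    return out

# Sharp solute front advected rightward and slightly dispersed.
c0 = [1.0] * 10 + [0.0] * 40
c1 = step_transport(c0, u=1.0, D=0.2, dx=1.0, dt=0.2)
```

Because the implicit dispersive solve is unconditionally stable, its step can cover several advective sub-steps, which is exactly the computational saving the abstract describes.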
Kadkhodapour, J; Montazerian, H; Raeisi, S
2014-10-01
Rapid prototyping (RP) has been a promising technique for producing tissue engineering scaffolds which mimic the behavior of host tissue as closely as possible. Biodegradability and the feasibility of cell growth and migration, in parallel with mechanical properties such as strength and energy absorption, have to be considered in the design procedure. In order to study the effect of internal architecture on plastic deformation and failure pattern, architectures based on triply periodic minimal surfaces (TPMS), which have been observed in nature, were used. P and D surfaces at 30% and 60% volume fractions were modeled with 3×3×3 unit cells and imported into an Objet EDEN 260 3-D printer. Models were printed with VeroBlue FullCure 840 photopolymer resin. Mechanical compression tests were performed to investigate the compressive behavior of the scaffolds. The deformation process and stress-strain curves were simulated by FEA and exhibited good agreement with the experimental observations. Current approaches for predicting the dominant deformation mode under compression, comprising Maxwell's criteria and scaling laws, were also investigated to develop an understanding of the relationships between deformation pattern and mechanical properties of porous structures. It was observed that the effect of stress concentration in TPMS-based scaffolds, resulting from heterogeneous mass distribution, particularly at lower volume fractions, led to behavior different from that of typical cellular materials. As a result, although more parameters are considered for determining the dominant deformation mode in scaling laws, the two mentioned approaches could not exclusively be used to compare the mechanical response of cellular materials at the same volume fraction. PMID:25175253
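The P surface used for such scaffolds has a standard implicit (level-set) approximation, cos(2πx) + cos(2πy) + cos(2πz) = t, where the threshold t sets the solid volume fraction. A voxel estimate illustrates this relation; it is a sketch of the geometry definition only, not the authors' modeling pipeline.

```python
import math

def schwarz_p(x, y, z):
    """Schwarz P minimal-surface field; the scaffold solid can be taken
    as the region where the field value is <= t, with the threshold t
    controlling the volume fraction."""
    return (math.cos(2 * math.pi * x) + math.cos(2 * math.pi * y)
            + math.cos(2 * math.pi * z))

def volume_fraction(t, n=40):
    """Voxel (midpoint-sampling) estimate of the solid volume fraction
    of one unit cell for threshold t."""
    solid = sum(
        1
        for i in range(n) for j in range(n) for k in range(n)
        if schwarz_p((i + 0.5) / n, (j + 0.5) / n, (k + 0.5) / n) <= t
    )
    return solid / n ** 3

vf_half = volume_fraction(0.0)    # symmetric threshold: ~50% solid
vf_thin = volume_fraction(-0.9)   # lower threshold: sparser scaffold
```

By the field's odd symmetry under a half-period shift, t = 0 gives an exactly 50% scaffold, and sweeping t downward produces the 30%-type designs mentioned above.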
Knowledge-based simulation for aerospace systems
NASA Technical Reports Server (NTRS)
Will, Ralph W.; Sliwa, Nancy E.; Harrison, F. Wallace, Jr.
1988-01-01
Knowledge-based techniques, which offer many features that are desirable in the simulation and development of aerospace vehicle operations, exhibit many similarities to traditional simulation packages. The eventual solution of these systems' current symbolic processing/numeric processing interface problem will lead to continuous and discrete-event simulation capabilities in a single language, such as TS-PROLOG. Qualitative, totally-symbolic simulation methods are noted to possess several intrinsic characteristics that are especially revelatory of the system being simulated, and capable of ensuring that all possible behaviors are considered.
NASA Technical Reports Server (NTRS)
Nieten, Joseph L.; Seraphine, Kathleen M.
1991-01-01
Procedural modeling systems, rule-based modeling systems, and a method for converting a procedural model to a rule-based model are described. Simulation models are used to represent real-time engineering systems. A real-time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule-based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule-based system can be generated by a knowledge acquisition tool or a source-level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule-based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.
A heterogeneous graph-based recommendation simulator
Yeonchan, Ahn; Sungchan, Park; Lee, Matt Sangkeun; Sang-goo, Lee
2013-01-01
Heterogeneous graph-based recommendation frameworks have flexibility in that they can incorporate various recommendation algorithms and various kinds of information to produce better results. In this demonstration, we present a heterogeneous graph-based recommendation simulator which enables participants to experience the flexibility of a heterogeneous graph-based recommendation method. With our system, participants can simulate various recommendation semantics by expressing the semantics via meaningful paths like User-Movie-User-Movie. The simulator then returns the recommendation results on the fly based on the user-customized semantics using a fast Monte Carlo algorithm.
Parallel node placement method by bubble simulation
NASA Astrophysics Data System (ADS)
Nie, Yufeng; Zhang, Weiwei; Qi, Nan; Li, Yiqiang
2014-03-01
An efficient Parallel Node Placement method by Bubble Simulation (PNPBS), employing METIS-based domain decomposition (DD) for an arbitrary number of processors, is introduced. In accordance with the desired nodal density and Newton's Second Law of Motion, automatic generation of node sets by bubble simulation has been demonstrated in previous work. Since the interaction force between nodes is short-range, the positions and velocities of two distant nodes can be updated simultaneously and independently during the dynamic simulation; this inherent parallelism makes the method well suited to parallel computing. In the PNPBS method, the METIS-based DD scheme has been investigated for uniform and non-uniform node sets, and dynamic load balancing is obtained by evenly distributing work among the processors. Nodes near the common interface of two neighboring subdomains need no special treatment after the dynamic simulation. These nodes have good geometrical properties and a smooth density distribution, which is desirable in the numerical solution of partial differential equations (PDEs). The results of numerical examples show that quasi-linear speedup in the number of processors and high efficiency are achieved.
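The short-range force update at the heart of bubble-style node placement can be sketched in a few lines. The Python toy below is not the PNPBS implementation; the spring constant, damping factor, and cutoff are illustrative assumptions. It moves nodes under a linear inter-node spring force via Newton's second law; because the force vanishes beyond the cutoff, distant nodes could be updated independently, which is the parallelism the method exploits.

```python
import numpy as np

def bubble_step(pos, vel, dt=0.01, r0=1.0, k=1.0, damping=0.9, cutoff=2.0):
    """One explicit step of bubble-style node relaxation (illustrative only).

    Each node attracts/repels neighbors within `cutoff` via a linear spring
    toward the target spacing r0. Forces are short-range, so nodes far apart
    could be updated simultaneously and independently.
    """
    n = len(pos)
    force = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            if 0.0 < dist < cutoff:
                f = k * (dist - r0) * d / dist  # spring toward spacing r0
                force[i] += f
                force[j] -= f
    vel = damping * (vel + dt * force)          # unit mass, damped dynamics
    return pos + dt * vel, vel
```

Iterating this step until velocities die out yields a node set whose spacing approaches the target density, ready for triangulation.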
Simulation and Non-Simulation Based Human Reliability Analysis Approaches
Boring, Ronald Laurids; Shirley, Rachel Elizabeth; Joe, Jeffrey Clark; Mandelli, Diego
2014-12-01
Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.
A Lattice Boltzmann Method for Turbomachinery Simulations
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Lopez, I.
2003-01-01
The Lattice Boltzmann (LB) method is a relatively new method for flow simulations. Its starting point is statistical mechanics and the Boltzmann equation. The LB method sets up its model at the molecular scale and simulates the flow at the macroscopic scale. To date, LBM has been applied mostly to incompressible flows and simple geometries.
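The stream-and-collide structure of the LB method fits in a short sketch. Below is a minimal D2Q9 BGK update in Python, a generic textbook scheme rather than the authors' turbomachinery code; the relaxation time tau is an illustrative choice. Density and velocity are moments of the distributions, collision relaxes toward a local equilibrium, and streaming shifts each distribution along its lattice velocity.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium for each direction."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return rho * w[:, None, None] * (1.0 + cu + 0.5 * cu**2 - usq)

def lbm_step(f, tau=0.8):
    """One BGK collision + periodic streaming step; f has shape (9, nx, ny)."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau     # collide
    for i, (cx, cy) in enumerate(c):                 # stream (periodic)
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f
```

Because the BGK collision conserves the density moment exactly and streaming only permutes values, total mass is preserved to machine precision, which makes a convenient sanity check.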
Bagheriasl, Reza; Ghavam, Kamyar; Worswick, Michael
2011-05-04
The effect of temperature on the formability of aluminum alloy sheet is studied by developing Forming Limit Diagrams (FLDs) for a 3000-series aluminum alloy using the Marciniak and Kuczynski (M-K) technique by numerical simulation. The numerical model is built in LS-DYNA and incorporates Barlat's YLD2000 anisotropic yield function and the temperature-dependent Bergstrom hardening law. Three temperatures are studied: room temperature, 250 deg. C, and 300 deg. C. For each temperature case, various loading conditions are applied to the M-K defect model. The effect of material anisotropy is considered by varying the defect angle. A simplified failure criterion is used to predict the onset of necking. Minor and major strains are obtained from the simulations and plotted for each temperature level. It is demonstrated that elevated temperature improves the forming limit of 3000-series aluminum alloy sheet.
Rester, S.; Todd, M.R.
1984-04-01
A procedure is described for estimating the response of a field-scale CO2 flood from a limited number of simulations of pattern-flood symmetry elements. This procedure accounts for areally varying reservoir properties, areally varying conditions when CO2 injection is initiated, phased conversion of injectors to CO2, and shut-in criteria for producers. Examples of the use of this procedure are given.
Multigrid methods with applications to reservoir simulation
Xiao, Shengyou
1994-05-01
Multigrid methods are studied for solving elliptic partial differential equations. The focus is on parallel multigrid methods and their use for reservoir simulation. Multicolor Fourier analysis is used to analyze the behavior of standard multigrid methods for problems in one and two dimensions, and the relation between multicolor and standard Fourier analysis is established. Multiple coarse grid methods for solving model problems in 1 and 2 dimensions are considered; at each coarse-grid level, more than one coarse grid is used to improve convergence. For a given Dirichlet problem, a related extended problem is first constructed; a purification procedure can be used to obtain Moore-Penrose solutions of the singular systems encountered. For solving anisotropic equations, semicoarsening and line smoothing techniques are used with multiple coarse grid methods to improve convergence. Two-level convergence factors are estimated using multicolor Fourier analysis. In the case where each operator has the same stencil at every grid point on one level, exact multilevel convergence factors can be obtained. For solving partial differential equations with discontinuous coefficients, interpolation and restriction operators should include information about the equation coefficients. Matrix-dependent interpolation and restriction operators based on the Schur complement can be used in nonsymmetric cases. A semicoarsening multigrid solver with these operators is used in UTCOMP, a 3-D, multiphase, multicomponent, compositional reservoir simulator. The numerical experiments are carried out on different computing systems. Results indicate that the multigrid methods are promising.
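For readers unfamiliar with the mechanics, a bare-bones 1D multigrid V-cycle looks like the following Python sketch: weighted-Jacobi smoothing, full-weighting restriction, and linear prolongation for the model problem -u'' = f with zero Dirichlet boundaries. This is a standard textbook construction, not the UTCOMP solver, and the smoothing counts and damping factor are illustrative.

```python
import numpy as np

def residual(u, f, h):
    """r = f - A u for the standard 3-point Laplacian, zero Dirichlet BCs."""
    up = np.r_[0.0, u, 0.0]
    return f - (2*up[1:-1] - up[:-2] - up[2:]) / h**2

def smooth(u, f, h, sweeps, omega=2/3):
    """Weighted (damped) Jacobi relaxation."""
    for _ in range(sweeps):
        up = np.r_[0.0, u, 0.0]
        u = (1 - omega)*u + omega*0.5*(up[:-2] + up[2:] + h*h*f)
    return u

def restrict(r):
    """Full weighting: n = 2^k - 1 fine points -> 2^(k-1) - 1 coarse points."""
    return 0.25 * (r[:-2:2] + 2*r[1:-1:2] + r[2::2])

def prolong(e):
    """Linear interpolation of the coarse-grid correction to the fine grid."""
    ef = np.zeros(2*len(e) + 1)
    ef[1::2] = e
    ef[0::2] = 0.5*(np.r_[0.0, e] + np.r_[e, 0.0])
    return ef

def vcycle(u, f, h):
    """One recursive V-cycle: smooth, correct on the coarse grid, smooth."""
    u = smooth(u, f, h, 3)
    if len(u) >= 3:
        rc = restrict(residual(u, f, h))
        e = vcycle(np.zeros_like(rc), rc, 2*h)
        u = u + prolong(e)
    return smooth(u, f, h, 3)
```

A handful of V-cycles drives the residual to (near) machine precision at O(n) cost per cycle, which is the property that makes multigrid attractive inside a reservoir simulator.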
NASA Technical Reports Server (NTRS)
Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III
1992-01-01
This is a work-in-progress paper. It explores the similarity between the results from two different analysis methods - one deterministic, the other stochastic - for computing maximized and time-correlated gust loads for nonlinear aircraft. To date, numerical studies have been performed using two different nonlinear aircraft configurations. These studies demonstrate that results from the deterministic analysis method are realizable in the stochastic analysis method.
2016-01-01
Purpose: The integration of simulation-based learning (SBL) methods holds promise for improving the medical education system in Greece. The Applied Basic Clinical Seminar with Scenarios for Students (ABCS3) is a novel two-day SBL course that was designed by the Scientific Society of Hellenic Medical Students. The ABCS3 targeted undergraduate medical students and consisted of three core components: the case-based lectures, the ABCDE hands-on station, and the simulation-based clinical scenarios. The purpose of this study was to evaluate the general educational environment of the course, as well as the skills and knowledge acquired by the participants. Methods: Two sets of questions were distributed to the participants: the Dundee Ready Educational Environment Measure (DREEM) questionnaire and an internally designed feedback questionnaire (InEv). A multiple-choice question (MCQ) examination was also distributed prior to the course and following its completion. A total of 176 participants answered the DREEM questionnaire, 56 the InEv, and 60 the MCQs. Results: The overall DREEM score was 144.61 (±28.05) out of 200. Delegates who participated in both the case-based lectures and the interactive scenarios core components scored higher than those who only completed the case-based lecture session (P=0.038). The mean overall feedback score was 4.12 (±0.56) out of 5. Students scored significantly higher on the post-test than on the pre-test (P<0.001). Conclusion: The ABCS3 was found to be an effective SBL program, as medical students reported positive opinions about their experiences and exhibited improvements in their clinical knowledge and skills. PMID:27012313
Simulation optimization as a method for lot size determination
NASA Astrophysics Data System (ADS)
Vazan, P.; Moravčík, O.; Jurovatá, D.; Juráková, A.
2011-10-01
The paper presents simulation optimization as a practical tool for solving many problems, in research as well as in industrial practice. It gives a basic overview of methods in simulation optimization. The authors also describe their own experiences and discuss the advantages and problems of simulation optimization. The paper is a contribution to more effective use of simulation optimization; its main goal is to give a general procedure for its effective usage. The authors present an alternative method for determining lot size, which uses simulation optimization as its base procedure, and demonstrate the important stages of the method. The final procedure involves selecting the algorithm and input variables and setting up their ranges and steps. The solution is compared with classical mathematical methods. The authors point out that realizing simulation optimization is a compromise between acceptable time and the accuracy of the found solution.
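As a concrete illustration of the general procedure (simulate each candidate lot size, evaluate a cost function, select the minimizer), here is a toy Python sketch. The demand distribution, cost coefficients, and candidate range are all hypothetical and not the authors' model; a real study would replace both the simulator and the search strategy.

```python
import random

def simulate_cost(lot_size, demand_rate=100.0, setup_cost=50.0,
                  holding_cost=0.2, horizon=365, seed=0):
    """Toy stochastic inventory simulation (hypothetical cost model).

    Replenish by one lot whenever stock cannot cover the day's demand;
    cost = setup costs + daily holding cost on positive stock.
    """
    rng = random.Random(seed)           # fixed seed: common random numbers
    stock, cost = 0.0, 0.0
    for _ in range(horizon):
        demand = rng.gauss(demand_rate, 10.0)
        if stock < demand:
            stock += lot_size
            cost += setup_cost
        stock -= demand
        cost += holding_cost * max(stock, 0.0)
    return cost

# "Optimization" here is a plain grid search over candidate lot sizes;
# the paper's point is that any search strategy can sit on top of the simulator.
best_lot = min(range(100, 2001, 100), key=simulate_cost)
```

Small lots incur many setups, large lots incur heavy holding costs; the simulation-optimization loop locates the trade-off without a closed-form cost model.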
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Krotkus, Arunas; Molis, Gediminas
2010-10-01
The SDA (Spectral Dynamics Analysis) - method (method of THz spectrum dynamics analysis in THz range of frequencies) is used for the detection and identification of substances with similar THz Fourier spectra (such substances are named usually as the simulants) in the two- or three-component medium. This method allows us to obtain the unique 2D THz signature of the substance - the spectrogram- and to analyze the dynamics of many spectral lines of the THz signal, passed through or reflected from substance, by one set of its integral measurements simultaneously; even measurements are made on short-term intervals (less than 20 ps). For long-term intervals (100 ps and more) the SDA method gives an opportunity to define the relaxation time for excited energy levels of molecules. This information gives new opportunity to identify the substance because the relaxation time is different for molecules of different substances. The restoration of the signal by its integral values is made on the base of SVD - Single Value Decomposition - technique. We consider three examples for PTFE mixed with small content of the L-Tartaric Acid and the Sucrose in pellets. A concentration of these substances is about 5%-10%. Our investigations show that the spectrograms and dynamics of spectral lines of THz pulse passed through the pure PTFE differ from the spectrograms of the compound medium containing PTFE and the L-Tartaric Acid or the Sucrose or both these substances together. So, it is possible to detect the presence of a small amount of the additional substances in the sample even their THz Fourier spectra are practically identical. Therefore, the SDA method can be very effective for the defense and security applications and for quality control in pharmaceutical industry. We also show that in the case of substances-simulants the use of auto- and correlation functions has much worse resolvability in a comparison with the SDA method.
HyeongKae Park; Robert R. Nourgaliev; Richard C. Martineau; Dana A. Knoll
2008-09-01
We present a high-order accurate spatiotemporal discretization of all-speed flow solvers using a Jacobian-free Newton-Krylov framework. One of the key developments in this work is a physics-based preconditioner for all-speed flow, which makes use of traditional semi-implicit schemes. The preconditioner is developed in primitive-variable form, which allows a straightforward separation of physical phenomena. Numerical examples demonstrate that the developed preconditioner effectively reduces the number of Krylov iterations, and that its efficiency is independent of the Mach number and mesh size under a fixed CFL condition.
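The essence of a Jacobian-free Newton-Krylov iteration is that the inner Krylov solver only ever needs Jacobian-vector products, and those can be approximated by a finite difference of the nonlinear residual. The sketch below is plain Python/NumPy on a toy 1D nonlinear diffusion problem, with a matrix-free conjugate-gradient inner solver standing in for GMRES and no preconditioner; it illustrates the idea, not the authors' all-speed solver.

```python
import numpy as np

def F(u, f, h):
    """Nonlinear residual for -u'' + u^3 = f, zero Dirichlet BCs."""
    up = np.r_[0.0, u, 0.0]
    return (2*up[1:-1] - up[:-2] - up[2:]) / h**2 + u**3 - f

def cg(matvec, b, tol=1e-8, maxit=200):
    """Matrix-free conjugate gradients: only matvec(v) is required."""
    x = np.zeros_like(b)
    r = b.copy(); p = r.copy(); rs = r @ r
    for _ in range(maxit):
        Ap = matvec(p)
        a = rs / (p @ Ap)
        x += a*p; r -= a*Ap
        rs_new = r @ r
        if rs_new < tol**2:
            break
        p = r + (rs_new/rs)*p
        rs = rs_new
    return x

def jfnk(u, f, h, eps=1e-7, iters=10):
    """Newton outer loop with a Jacobian-free inner Krylov solve."""
    for _ in range(iters):
        Fu = F(u, f, h)
        if np.abs(Fu).max() < 1e-6:
            break
        # Jacobian-free product: J v ~ (F(u + eps*v) - F(u)) / eps
        mv = lambda v, u=u, Fu=Fu: (F(u + eps*v, f, h) - Fu) / eps
        u = u + cg(mv, -Fu)
    return u
```

A physics-based preconditioner of the kind the paper develops would be inserted into the inner solve; here the toy Jacobian is well conditioned enough to do without one.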
Skinner, Wayne S; Phinney, Brett S; Herren, Anthony; Goodstal, Floyd J; Dicely, Isabel; Facciotti, Daniel
2016-06-29
The digestibility of a nonpurified transgenic membrane protein was determined in pepsin, as part of the food safety evaluation of its resistance to digestion and allergenic potential. Delta-6-desaturase from Saprolegnia diclina, a transmembrane protein expressed in safflower for the production of gamma linolenic acid in the seed, could not be obtained in a pure, native form as normally required for this assay. As a novel approach, the endoplasmic reticulum isolated from immature seeds was digested in simulated gastric fluid (SGF) and the degradation of delta-6-desaturase was selectively followed by SDS-PAGE and targeted LC-MS/MS quantification using stable isotope-labeled peptides as internal standards. The digestion of delta-6-desaturase by SGF was shown to be both rapid and complete. Less than 10% of the initial amount of D6D remained intact after 30 s, and no fragments large enough (>3 kDa) to elicit a type I allergenic response remained after 60 min. PMID:27255301
Simulation-Based Training for Colonoscopy
Preisler, Louise; Svendsen, Morten Bo Søndergaard; Nerup, Nikolaj; Svendsen, Lars Bo; Konge, Lars
2015-01-01
The aim of this study was to create simulation-based tests with credible pass/fail standards for 2 different fidelities of colonoscopy models. Only competent practitioners should perform colonoscopy. Reliable and valid simulation-based tests could be used to establish basic competency in colonoscopy before practicing on patients. Twenty-five physicians (10 consultants with endoscopic experience and 15 fellows with very little endoscopic experience) were tested on 2 different simulator models: a virtual-reality simulator and a physical model. Tests were repeated twice on each simulator model. Metrics with discriminatory ability were identified for both modalities and reliability was determined. The contrasting-groups method was used to create pass/fail standards and the consequences of these were explored. The consultants performed significantly faster and scored significantly higher than the fellows on both models (P < 0.001). Reliability analysis showed Cronbach α = 0.80 and 0.87 for the virtual-reality and the physical model, respectively. The established pass/fail standards failed one of the consultants (virtual-reality simulator) and allowed one fellow to pass (physical model). The 2 tested simulation-based modalities provided reliable and valid assessments of competence in colonoscopy, and credible pass/fail standards were established for both tests. We propose to use these standards in simulation-based training programs before proceeding to supervised training on patients. PMID:25634177
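The contrasting-groups method referenced above sets the pass/fail score where the score distributions of the experienced and novice groups cross. A minimal Python sketch under a normality assumption follows; the sample scores in the usage test are made up, not the study's data.

```python
import math
import statistics

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd)**2) / (sd * math.sqrt(2 * math.pi))

def contrasting_groups_cutoff(novice, expert, steps=10000):
    """Score at which the two fitted normal densities intersect.

    Searches a grid between the two group means; assumes the novice
    mean is below the expert mean.
    """
    n_mu, n_sd = statistics.mean(novice), statistics.stdev(novice)
    e_mu, e_sd = statistics.mean(expert), statistics.stdev(expert)
    grid = (n_mu + (e_mu - n_mu) * i / steps for i in range(steps + 1))
    return min(grid, key=lambda s: abs(normal_pdf(s, n_mu, n_sd)
                                       - normal_pdf(s, e_mu, e_sd)))
```

When the two groups have equal spread, the cutoff lands midway between the means; unequal spreads pull it toward the tighter distribution, which is why the resulting standards can fail an occasional expert or pass an occasional novice, as observed in the study.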
Valentini, F. . E-mail: valentin@fis.unical.it; Travnicek, P.; Califano, F.; Hellinger, P.; Mangeney, A.
2007-07-01
We present a numerical scheme for the integration of the Vlasov-Maxwell system of equations for a non-relativistic plasma, in the hybrid approximation, where the Vlasov equation is solved for the ion distribution function and the electrons are treated as a fluid. In the Ohm equation for the electric field, effects of electron inertia have been retained, in order to include the small scale dynamics up to characteristic lengths of the order of the electron skin depth. The low frequency approximation is used by neglecting the time derivative of the electric field, i.e. the displacement current in the Ampere equation. The numerical algorithm consists in coupling the splitting method proposed by Cheng and Knorr in 1976 [C.Z. Cheng, G. Knorr, J. Comput. Phys. 22 (1976) 330-351.] and the current advance method (CAM) introduced by Matthews in 1994 [A.P. Matthews, J. Comput. Phys. 112 (1994) 102-116.] In its present version, the code solves the Vlasov-Maxwell equations in a five-dimensional phase space (2-D in the physical space and 3-D in the velocity space) and it is implemented in a parallel version to exploit the computational power of the modern massively parallel supercomputers. The structure of the algorithm and the coupling between the splitting method and the CAM method (extended to the hybrid case) is discussed in detail. Furthermore, in order to test the hybrid-Vlasov code, the numerical results on propagation and damping of linear ion-acoustic modes and time evolution of linear elliptically polarized Alfven waves (including the so-called whistler regime) are compared to the analytical solutions. Finally, the numerical results of the hybrid-Vlasov code on the parametric instability of Alfven waves are compared with those obtained using a two-fluid approach.
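The Cheng-Knorr splitting that the code builds on alternates one-dimensional advections of the distribution function. Below is a reduced 1D-1V Python sketch: a spectral shift in x, linear interpolation in v, and a frozen electric field over the step. The grid sizes and Strang ordering are illustrative, and the CAM field update of the actual hybrid code is omitted.

```python
import numpy as np

def vlasov_step(f, x, v, E, dt):
    """One Strang-split step of f_t + v f_x + E f_v = 0.

    f has shape (len(x), len(v)); x is a uniform periodic grid and
    E is the (frozen) field sampled at each x.
    """
    nx = len(x)
    L = nx * (x[1] - x[0])                       # periodic domain length
    k = 2*np.pi*np.fft.fftfreq(nx, d=L/nx)

    def advect_x(f, dt):                         # exact spectral shift x -> x - v dt
        return np.real(np.fft.ifft(np.fft.fft(f, axis=0)
                                   * np.exp(-1j*np.outer(k, v)*dt), axis=0))

    def advect_v(f, dt):                         # semi-Lagrangian shift v -> v - E dt
        return np.array([np.interp(v - E[i]*dt, v, f[i]) for i in range(nx)])

    return advect_x(advect_v(advect_x(f, dt/2), dt), dt/2)
```

With E = 0 the x-advection is exact for band-limited data, so the split scheme reproduces free streaming to machine precision, a standard first check before coupling in the field solve.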
Yang, Ping; Nai, Chang-Xin; Dong, Lu; Wang, Qi; Wang, Yan-Wen
2006-01-01
Two types of double high-density polyethylene (HDPE) liner landfill are considered, in which clay or geogrid is added between the two HDPE liners. The general resistance of the second type is 15% larger than that of the first in primary HDPE liner detection, and 20% larger in secondary HDPE liner detection. The high-voltage DC method can accomplish leakage detection and location for both types of landfill, and the leakage-location error is less than 10 cm when the electrode spacing is 1 m. PMID:16599145
Simulator certification methods and the vertical motion simulator
NASA Technical Reports Server (NTRS)
Showalter, T. W.
1981-01-01
The vertical motion simulator (VMS) is designed to simulate a variety of experimental helicopter and STOL/VTOL aircraft, as well as other aircraft with special pitch and Z-axis characteristics. The VMS includes a large motion base with extensive vertical and lateral travel capabilities, a computer-generated-image visual system, and a high speed CDC 7600 computer system, which performs the aero-model calculations. Guidelines on how to measure and evaluate VMS performance were developed. A survey of simulation users was conducted to ascertain how they evaluated and certified simulators for use. The results are presented.
A fluid model simulation of a simplified plasma limiter based on spectral-element time-domain method
Qian, Cheng; Ding, Dazhi; Fan, Zhenhong; Chen, Rushan
2015-03-15
A simplified plasma limiter prototype is proposed, and a fluid model coupled with Maxwell's equations is established to describe its operating mechanism. A three-dimensional (3-D) simplified sandwich-structure plasma limiter model is analyzed with the spectral-element time-domain (SETD) method. The field breakdown thresholds of air and argon at different frequencies are predicted and compared with experimental data, showing good agreement for gas microwave breakdown discharge problems. Numerical results demonstrate that a two-layer plasma limiter (plasma-slab-plasma) has better protective characteristics than a one-layer plasma limiter (slab-plasma-slab) with the same gas-chamber length.
van Sighem, Ard; Thiebaut, Rodolphe; Smith, Colette; Ratmann, Oliver; Cambiano, Valentina; Albert, Jan; Amato-Gauci, Andrew; Bezemer, Daniela; Campbell, Colin; Commenges, Daniel; Donoghoe, Martin; Ford, Deborah; Kouyos, Roger; Lodwick, Rebecca; Lundgren, Jens; Pantazis, Nikos; Pharris, Anastasia; Quinten, Chantal; Thorne, Claire; Touloumi, Giota; Delpech, Valerie; Phillips, Andrew
2016-01-01
It is important not only to collect epidemiologic data on HIV but also to fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900–45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160–17,350) were undiagnosed. There were an estimated 3,210 (1,730–5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population are thought to have viral load <500 copies/ml. In the pseudo-epidemic example, the HIV estimates have narrower plausibility ranges and are closer to the true number the greater the data availability to calibrate the model. We demonstrate that our method can be applied to settings with less data; however, plausibility ranges for estimates will be wider to reflect greater uncertainty of the data used to fit the model. PMID:26605814
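The approximate Bayesian computation idea used to calibrate the model can be shown with a rejection-sampling toy in Python. The binomial example below is made up, not the HIV model: draw parameters from the prior, run the forward simulation, and keep only the draws whose simulated summary lands close to the observed one.

```python
import random

rng = random.Random(1)

def simulate(p, n=100):
    """Toy forward model: number of successes in n Bernoulli trials."""
    return sum(rng.random() < p for _ in range(n))

def abc_rejection(observed, n_draws=20000, tol=2):
    """ABC rejection sampling with a uniform(0, 1) prior on p.

    Accepted draws approximate the posterior p(theta | data) without
    ever evaluating a likelihood.
    """
    accepted = []
    for _ in range(n_draws):
        p = rng.random()
        if abs(simulate(p) - observed) <= tol:
            accepted.append(p)
    return accepted
```

The tolerance trades bias against acceptance rate: a tighter tolerance sharpens the posterior approximation but discards more draws, mirroring how the paper's plausibility ranges widen when less data is available to calibrate against.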
NASA Astrophysics Data System (ADS)
Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.
2016-05-01
In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
Image based SAR product simulation for analysis
NASA Technical Reports Server (NTRS)
Domik, G.; Leberl, F.
1987-01-01
SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product-simulation method is described that also employs a real SAR image as input; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and their advantages and disadvantages are discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.
Methods of sound simulation and applications in flight simulators
NASA Technical Reports Server (NTRS)
Gaertner, K. P.
1980-01-01
An overview of methods for electronically synthesizing sounds is presented. A given amount of hardware and computer capacity places an upper limit on the degree of realism attainable in sound simulation. Good sound realism for aircraft simulators can be especially expensive because of the complexity of flight sounds and their changing patterns through time. Nevertheless, the flight simulator developed at the Research Institute for Human Engineering, West Germany, shows that it is possible to design an inexpensive sound simulator with the required acoustic properties using analog computer elements. The characteristics of the sub-sound elements produced by this sound simulator for take-off, cruise and approach are discussed.
NASA Astrophysics Data System (ADS)
Zhang, Y. X.; Su, M.; Hou, H. C.; Song, P. F.
2013-12-01
This research adopts a quasi-three-dimensional hydraulic design method for the impeller of a high-specific-speed mixed-flow pump, in order to verify the design method and improve hydraulic performance. Based on the theory of two families of stream surfaces, the direct problem is completed when the meridional flow field of the impeller is obtained by iteratively solving the continuity and momentum equations of the fluid. The inverse problem is completed using the meridional flow field calculated in the direct problem. After several iterations of the direct and inverse problems, the impeller shape and flow-field information are obtained once the iteration satisfies the convergence criteria. Subsequently, the internal flow field of the designed pump is simulated using the RANS equations with the RNG k-ε two-equation turbulence model. The static pressure and streamline distributions at the symmetric cross-section, the velocity vectors around the blades, and the reflux phenomenon are analyzed. The numerical results show that the quasi-three-dimensional hydraulic design method improves the hydraulic performance of the high-specific-speed mixed-flow pump, reveals the main characteristics of its internal flow, and provides a basis for judging the rationality of the hydraulic design and for improvement and optimization of the hydraulic model.
Large-Eddy Simulation and Multigrid Methods
Falgout,R D; Naegle,S; Wittum,G
2001-06-18
A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to represent the unresolved scales of motion and are compared with each other on different grids, illustrating the behavior of the models; in addition, adaptive grid refinement is investigated, and parallelization aspects are addressed.
Inversion based on computational simulations
Hanson, K.M.; Cunningham, G.S.; Saquib, S.S.
1998-09-01
A standard approach to solving inversion problems that involve many parameters uses gradient-based optimization to find the parameters that best match the data. The authors discuss enabling techniques that facilitate application of this approach to large-scale computational simulations, which are the only way to investigate many complex physical phenomena. Such simulations may not seem to lend themselves to calculation of the gradient with respect to numerous parameters. However, adjoint differentiation allows one to efficiently compute the gradient of an objective function with respect to all the variables of a simulation. When combined with advanced gradient-based optimization algorithms, adjoint differentiation permits one to solve very large problems of optimization or parameter estimation. These techniques will be illustrated through the simulation of the time-dependent diffusion of infrared light through tissue, which has been used to perform optical tomography. The techniques discussed have a wide range of applicability to modeling including the optimization of models to achieve a desired design goal.
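The key property of adjoint differentiation, namely the full gradient for roughly the cost of one extra simulation run regardless of the number of parameters, can be seen in a small Python example. Here the "simulation" is an explicit 1D diffusion march (a stand-in, not the optical-tomography code), the parameters are the initial field, and the backward sweep applies the transpose of each timestep, which for this symmetric operator looks identical to the forward step.

```python
import numpy as np

def lap(u):
    """1D periodic Laplacian stencil."""
    return np.roll(u, 1) - 2*u + np.roll(u, -1)

def forward(u0, steps, nu):
    """Explicit diffusion march: u <- u + nu * lap(u), repeated."""
    u = u0.copy()
    for _ in range(steps):
        u = u + nu * lap(u)
    return u

def objective(u0, data, steps, nu):
    """Sum-of-squares mismatch between the simulation output and data."""
    r = forward(u0, steps, nu) - data
    return float(r @ r)

def adjoint_grad(u0, data, steps, nu):
    """Gradient w.r.t. every component of u0 in one backward sweep.

    The cost is about one forward run, independent of len(u0); each
    timestep operator here is symmetric, so its adjoint is itself.
    """
    lam = 2 * (forward(u0, steps, nu) - data)
    for _ in range(steps):
        lam = lam + nu * lap(lam)
    return lam
```

Checking a few components against central finite differences of the objective is the usual way to validate an adjoint implementation before handing the gradient to an optimizer.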
Estimating School Efficiency: A Comparison of Methods Using Simulated Data.
ERIC Educational Resources Information Center
Bifulco, Robert; Bretschneider, Stuart
2001-01-01
Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…
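Of the two techniques compared, corrected ordinary least squares is simple enough to sketch: fit an average production function by OLS, shift it up so it envelops every unit, and read efficiency as the distance below the shifted frontier. The Python toy below uses one input and a log-linear form with fabricated data in the test; DEA would need a linear-programming solver and is omitted.

```python
import numpy as np

def cols_efficiency(x, y):
    """Corrected OLS efficiency scores for inputs x and outputs y.

    Fits log(y) = b0 + b1*log(x) by OLS, shifts the intercept up until
    no unit lies above the frontier, and returns exp(residual shortfall);
    a score of 1.0 means the unit sits on the corrected frontier.
    """
    b1, b0 = np.polyfit(np.log(x), np.log(y), 1)
    resid = np.log(y) - (b0 + b1 * np.log(x))
    return np.exp(resid - resid.max())
```

As the article's simulations emphasize, measurement error contaminates these scores: any noise in y is absorbed into the "inefficiency" term, which is one reason both COLS and DEA misrank schools on noisy data.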
Constraint methods that accelerate free-energy simulations of biomolecules.
Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A
2015-12-28
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions. PMID:26723628
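A common concrete instance of such spring-like restraints is the flat-bottom harmonic form: flat (agnostic) inside a tolerance window around the target value and quadratic outside it, which is one way to encode noisy structural knowledge without over-constraining the simulation. The Python sketch below is a generic form with illustrative force constant and width, not tied to any specific MD package or to the authors' method.

```python
import numpy as np

def flat_bottom_restraint(r, r0, width, k):
    """Flat-bottom harmonic restraint on a coordinate r.

    Energy is zero for |r - r0| < width and 0.5*k*(|r - r0| - width)^2
    outside; the force (-dE/dr) pushes r back toward the window.
    """
    excess = np.abs(r - r0) - width
    active = excess > 0
    energy = 0.5 * k * np.where(active, excess**2, 0.0)
    force = -k * np.where(active, excess * np.sign(r - r0), 0.0)
    return energy, force
```

The window width expresses the confidence in the underlying knowledge: wide windows tolerate noisy or combinatorially uncertain restraints, which is exactly the failure mode the review discusses when external data contain errors.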
Bridging the gap: simulations meet knowledge bases
NASA Astrophysics Data System (ADS)
King, Gary W.; Morrison, Clayton T.; Westbrook, David L.; Cohen, Paul R.
2003-09-01
Tapir and Krill are declarative languages for specifying actions and agents, respectively, that can be executed in simulation. As such, they bridge the gap between strictly declarative knowledge bases and strictly executable code. Tapir and Krill components can be combined to produce models of activity which can answer questions about mechanisms and processes using conventional inference methods and simulation. Tapir was used in DARPA's Rapid Knowledge Formation (RKF) project to construct models of military tactics from the Army Field Manual FM3-90. These were then used to build Courses of Actions (COAs) which could be critiqued by declarative reasoning or via Monte Carlo simulation. Tapir and Krill can be read and written by non-knowledge engineers making it an excellent vehicle for Subject Matter Experts to build and critique knowledge bases.
Spectral Methods in General Relativistic MHD Simulations
NASA Astrophysics Data System (ADS)
Garrison, David
2012-03-01
In this talk I discuss the use of spectral methods in improving the accuracy of a General Relativistic Magnetohydrodynamic (GRMHD) computer code. I introduce SpecCosmo, a GRMHD code developed as a Cactus arrangement at UHCL, and show simulation results using both Fourier spectral methods and finite differencing. This work demonstrates the use of spectral methods with the FFTW 3.3 Fast Fourier Transform package integrated with the Cactus Framework to perform spectral differencing using MPI.
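As a small illustration of the Fourier spectral differencing the talk describes (here with NumPy's FFT rather than FFTW, and without MPI or the Cactus framework), a periodic derivative is computed by multiplying each Fourier mode by ik:

```python
import numpy as np

# Spectral differentiation of a periodic function on [0, 2*pi):
# transform, multiply mode k by i*k, transform back.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x)
k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
err = np.max(np.abs(du - np.cos(x)))       # spectral accuracy: near machine eps
```

For smooth periodic fields the error decays faster than any power of the grid spacing, which is the accuracy advantage over finite differencing noted in the abstract.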
A simple method for simulating gasdynamic systems
NASA Technical Reports Server (NTRS)
Hartley, Tom T.
1991-01-01
A simple method for performing digital simulation of gasdynamic systems is presented. The approach is somewhat intuitive and requires some knowledge of the physics of the problem as well as an understanding of finite difference theory. The method is shown explicitly in Appendix A, which is taken from the book by P. J. Roache, 'Computational Fluid Dynamics,' Hermosa Publishers, 1982. The resulting method is relatively fast, though it sacrifices some accuracy.
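In the spirit of the simple finite-difference approach described (the specific scheme from Roache's book is not reproduced here), a first-order upwind update for linear advection shows the fast-but-dissipative trade-off; grid and CFL values are arbitrary:

```python
import numpy as np

# First-order upwind differencing for the linear advection equation
# u_t + a u_x = 0 with a > 0 and periodic boundaries: fast and robust,
# but numerically dissipative -- some accuracy traded for speed.
a, dx, dt = 1.0, 0.02, 0.01          # CFL number a*dt/dx = 0.5 (stable)
x = np.arange(0.0, 1.0, dx)
u = np.exp(-100.0 * (x - 0.3) ** 2)  # initial Gaussian pulse
s0 = u.sum()                         # discrete "mass"
for _ in range(20):
    u = u - (a * dt / dx) * (u - np.roll(u, 1))  # periodic upwind step
```

The update is a convex combination of neighbouring values, so it conserves the discrete mass exactly and creates no new extrema, at the cost of smearing the pulse.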
NASA Astrophysics Data System (ADS)
Abhilash, T.; Balasubrahmaniyam, M.; Kasiviswanathan, S.
2016-03-01
Photochromic transitions in silver nanoparticles (AgNPs) embedded titanium dioxide (TiO2) films under green light illumination are marked by reduction in strength and blue shift in the position of the localized surface plasmon resonance (LSPR) associated with AgNPs. These transitions, which happen in the sub-nanometer length scale, have been analysed using the variations observed in the effective dielectric properties of the Ag-TiO2 nanocomposite films in response to the size reduction of AgNPs and subsequent changes in the surrounding medium due to photo-oxidation. Bergman-Milton formulation based on spectral density approach is used to extract dielectric properties and information about the geometrical distribution of the effective medium. Combined with finite element method simulations, we isolate the effects due to the change in average size of the nanoparticles and those due to the change in the dielectric function of the surrounding medium. By analysing the dynamics of photochromic transitions in the effective medium, we conclude that the observed blue shift in LSPR is mainly because of the change in the dielectric function of surrounding medium, while a shape-preserving effective size reduction of the AgNPs causes decrease in the strength of LSPR.
Rainfall Simulation: methods, research questions and challenges
NASA Astrophysics Data System (ADS)
Ries, J. B.; Iserloh, T.
2012-04-01
In erosion research, rainfall simulations are used for the improvement of process knowledge as well as in the field for the assessment of overland flow generation, infiltration, and erosion rates. In all these fields of research, rainfall experiments have become an indispensable part of the research methods. In this context, small portable rainfall simulators with small test-plot sizes of one square meter or even less, and devices of low weight and water consumption, are in demand. Accordingly, devices with manageable technical effort, such as nozzle-type simulators, seem to prevail over larger simulators. The reasons are obvious: lower costs and less time needed for mounting enable a higher repetition rate. Owing to the high number of research questions and fields of application, and not least to the great technical creativity of our research staff, a large number of different experimental setups are available. Each device produces a different rainfall, leading to different kinetic energy inputs to the soil surface and, accordingly, different erosion results. Hence, important questions concern the definition, comparability, measurement and simulation of natural rainfall, and the problem of comparability in general. Another important discussion topic is finding agreement on an appropriate calibration method for simulated rainfall, in order to enable comparison of the results of different rainfall simulator setups. In most publications, only the following "nice" sentence can be read: "Our rainfall simulator generates a rainfall spectrum that is similar to natural rainfall!". The most substantial and critical properties of a simulated rainfall are the drop-size distribution, the fall velocities of the drops, and the spatial distribution of the rainfall over the plot area. In a comparison of the most important methods, the Laser Distrometer turned out to be the most up
Reduced Basis Method for Nanodevices Simulation
Pau, George Shu Heng
2008-05-23
Ballistic transport simulation in nanodevices, which involves self-consistently solving a coupled Schrodinger-Poisson system of equations, is usually computationally intensive. Here, we propose coupling the reduced basis method with the subband decomposition method to improve the overall efficiency of the simulation. By exploiting an a posteriori error estimation procedure and a greedy sampling algorithm, we are able to design an algorithm whose computational cost is reduced significantly. In addition, the computational cost grows only marginally with the number of grid points in the confined direction.
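A toy sketch of the greedy sampling idea behind reduced basis construction: repeatedly add the snapshot worst represented by the current basis. For simplicity the true projection error stands in for the cheap a posteriori error estimator the abstract refers to; all names and data are illustrative.

```python
import numpy as np

def greedy_basis(snapshots, n_basis):
    """Greedy reduced-basis selection: at each step, orthonormalize and
    add the snapshot with the largest projection residual."""
    q = np.empty((snapshots.shape[0], 0))
    for _ in range(n_basis):
        resid = snapshots - q @ (q.T @ snapshots)   # projection residuals
        worst = np.argmax(np.linalg.norm(resid, axis=0))
        v = resid[:, worst]
        q = np.hstack([q, (v / np.linalg.norm(v))[:, None]])
    return q

rng = np.random.default_rng(0)
# Synthetic rank-3 snapshot matrix: 50 unknowns, 40 parameter samples.
snaps = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
basis = greedy_basis(snaps, 3)
err = np.linalg.norm(snaps - basis @ (basis.T @ snaps))  # tiny once rank captured
```

The payoff in a real solver is that the coupled system is then projected onto the few basis vectors instead of the full grid.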
A guided simulated annealing method for crystallography.
Chou, C I; Lee, T K
2002-01-01
A new optimization algorithm, the guided simulated annealing method, for use in X-ray crystallographic studies is presented. In the traditional simulated annealing method, the search for the global minimum of a cost function is determined only by the ratio of the energy change to the temperature. The new method introduces a quality function to guide the search for the minimum. Using a multiresolution process, the method is much more efficient at finding the global minimum than the traditional method. Results for two large molecules, isoleucinomycin (C60H102N6O18) and an alkyl calix (C72H112O8·4C2H6O), with different space groups are reported. PMID:11752762
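For reference, plain (unguided) simulated annealing with Metropolis acceptance can be sketched as below; the guided variant described above additionally steers proposals with a quality function, which is not reproduced here. The cost function, schedule, and all parameters are illustrative.

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=2.0, cooling=0.999, iters=5000):
    """Classical simulated annealing: accept uphill moves with
    probability exp(-dE/T) while the temperature T is slowly lowered."""
    x, e, t = x0, cost(x0), t0
    best_x, best_e = x, e
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        e_cand = cost(cand)
        if e_cand < e or random.random() < math.exp((e - e_cand) / t):
            x, e = cand, e_cand
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e

random.seed(1)
# Double-well toy cost: minima near x = +/-1, the deeper well near x = -1.
xb, eb = simulated_annealing(lambda x: (x * x - 1.0) ** 2 + 0.2 * x, x0=1.0)
```

Starting in the shallower well, the early high-temperature phase lets the walker cross the barrier at x = 0 in most runs.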
NASA Astrophysics Data System (ADS)
Bahl, Mayank; Zhou, Gui-Rong; Heller, Evan; Cassarly, William; Jiang, Mingming; Scarmozzino, Robert; Gregory, G. Groot; Herrmann, Daniel
2015-04-01
Over the last two decades, extensive research has been done to improve light-emitting diode (LED) designs. Increasingly complex designs have necessitated the use of computational simulations, which have provided numerous insights for improving LED performance. Depending upon the focus of the design and the scale of the problem, simulations are carried out using rigorous electromagnetic (EM) wave optics-based techniques, such as finite-difference time-domain and rigorous coupled wave analysis, or through ray optics-based techniques such as Monte Carlo ray-tracing (RT). The former are typically used for modeling nanostructures on the LED die, and the latter for modeling encapsulating structures, die placement, back-reflection, and phosphor downconversion. This paper presents the use of a mixed-level simulation approach that unifies the use of EM wave-level and ray-level tools. This approach uses rigorous EM wave-based tools to characterize the nanostructured die and generates both a bidirectional scattering distribution function and a far-field angular intensity distribution. These characteristics are then incorporated into the RT simulator to obtain the overall performance. Such a mixed-level approach allows for comprehensive modeling of the optical characteristics of LEDs, including polarization effects, and can potentially lead to more accurate performance predictions than those from individual modeling tools alone.
NASA Astrophysics Data System (ADS)
Kopera, Michal A.; Giraldo, Francis X.
2015-09-01
We perform a comparison of mass conservation properties of the continuous (CG) and discontinuous (DG) Galerkin methods on non-conforming, dynamically adaptive meshes for two atmospheric test cases. The two methods are implemented in a unified way which allows for a direct comparison of the non-conforming edge treatment. We outline the implementation details of the non-conforming direct stiffness summation algorithm for the CG method and show that its mass conservation error is similar to that of the DG method. Both methods conserve mass to machine precision, regardless of the presence of non-conforming edges. For lower-order polynomials the CG method requires additional stabilization to run for very long simulation times; we address this issue by using filters and/or additional artificial viscosity. The mathematical proof of mass conservation for CG with non-conforming meshes is presented in Appendix B.
A method for simulating a flux-locked DC SQUID
NASA Technical Reports Server (NTRS)
Gutt, G. M.; Kasdin, N. J.; Condron, M. R., II; Muhlfelder, B.; Lockhart, J. M.; Cromar, M. W.
1993-01-01
The authors describe a computationally efficient and accurate method for simulating a dc SQUID's V-Phi (voltage-flux) and I-V characteristics which has proven valuable in evaluating and improving various SQUID readout methods. The simulation of the SQUID is based on fitting of previously acquired data from either a real or a modeled device using the Fourier transform of the V-Phi curve. This method does not predict SQUID behavior, but rather is a way of replicating a known behavior efficiently with portability into various simulation programs such as SPICE. The authors discuss the methods used to simulate the SQUID and the flux-locking control electronics, and present specific examples of this approach. Results include an estimate of the slew rate and linearity of a simple flux-locked loop using a characterized dc SQUID.
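The data-fitting idea described above can be sketched as representing a measured periodic V-Phi curve by a truncated Fourier series via least squares, after which the fitted model can be evaluated anywhere in a circuit simulation. The "measured" curve below is synthetic and the harmonic count is arbitrary; this is not the authors' actual fitting code.

```python
import numpy as np

# Replicate a periodic V-Phi characteristic with a truncated Fourier
# series fitted by least squares (flux phi in units of Phi_0).
phi = np.linspace(0.0, 1.0, 200, endpoint=False)
v_meas = 0.3 + 0.5 * np.cos(2 * np.pi * phi) + 0.1 * np.cos(4 * np.pi * phi)

n_harm = 3
cols = [np.ones_like(phi)]
for k in range(1, n_harm + 1):
    cols += [np.cos(2 * np.pi * k * phi), np.sin(2 * np.pi * k * phi)]
design = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(design, v_meas, rcond=None)
v_model = design @ coef
err = np.max(np.abs(v_model - v_meas))   # exact here: data is band-limited
```

Because the model is a closed-form series, it ports easily into circuit simulators such as SPICE, matching the portability point made in the abstract.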
A Simulation Method Measuring Psychomotor Nursing Skills.
ERIC Educational Resources Information Center
McBride, Helena; And Others
1981-01-01
The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.…
Method for Constructing Standardized Simulated Root Canals.
ERIC Educational Resources Information Center
Schulz-Bongert, Udo; Weine, Franklin S.
1990-01-01
The construction of visual and manipulative aids, clear resin blocks with root-canal-like spaces, for simulation of root canals is explained. Time, materials, and techniques are discussed. The method allows for comparison of canals, creation of any configuration of canals, and easy presentation during instruction. (MSE)
NASA Astrophysics Data System (ADS)
Pain, F.; Dhenain, M.; Gurden, H.; Routier, A. L.; Lefebvre, F.; Mastrippolito, R.; Lanièce, P.
2008-10-01
The β-microprobe is a simple and versatile technique complementary to small animal positron emission tomography (PET). It relies on local measurements of the concentration of positron-labeled molecules. So far, it has been successfully used in anesthetized rats for pharmacokinetics experiments and for the study of brain energetic metabolism. However, the ability of the technique to provide accurate quantitative measurements using 18F, 11C and 15O tracers is likely to suffer from the contribution of 511 keV gamma rays background to the signal and from the contribution of positrons from brain loci surrounding the locus of interest. The aim of the present paper is to provide a method of evaluating several parameters, which are supposed to affect the quantification of recordings performed in vivo with this methodology. We have developed realistic voxelized phantoms of the rat whole body and brain, and used them as input geometries for Monte Carlo simulations of previous β-microprobe reports. In the context of realistic experiments (binding of 11C-Raclopride to D2 dopaminergic receptors in the striatum; local glucose metabolic rate measurement with 18F-FDG and H2O15 blood flow measurements in the somatosensory cortex), we have calculated the detection efficiencies and corresponding contribution of 511 keV gammas from peripheral organs accumulation. We confirmed that the 511 keV gammas background does not impair quantification. To evaluate the contribution of positrons from adjacent structures, we have developed β-Assistant, a program based on a rat brain voxelized atlas and matrices of local detection efficiencies calculated by Monte Carlo simulations for several probe geometries. This program was used to calculate the 'apparent sensitivity' of the probe for each brain structure included in the detection volume. For a given localization of a probe within the brain, this allows us to quantify the different sources of beta signal. Finally, since stereotaxic accuracy is
Fast Particle Methods for Multiscale Phenomena Simulations
NASA Technical Reports Server (NTRS)
Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew
2000-01-01
We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in : (i) high Reynolds number unsteady vortical flows, (ii) particle laden and interfacial flows, (iii)molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented in parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap of micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena.The current challenges in these simulations are in : [i] the proper formulation of particle methods in the molecular and continuous level for the discretization of the governing equations [ii] the resolution of the wide range of time and length scales governing the phenomena under investigation. [iii] the minimization of numerical artifacts that may interfere with the physics of the systems under consideration. [iv] the parallelization of processes such as tree traversal and grid-particle interpolations We are conducting simulations using vortex methods, molecular dynamics and smooth particle hydrodynamics, exploiting their unifying concepts such as : the solution of the N-body problem in parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFT's and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend among seemingly unrelated areas of research.
Mesoscopic Simulation Methods for Polymer Dynamics
NASA Astrophysics Data System (ADS)
Larson, Ronald
2015-03-01
We assess the accuracy and efficiency of mesoscopic simulation methods, namely Brownian Dynamics (BD), Stochastic Rotation Dynamics (SRD) and Dissipative Particle Dynamics (DPD), for polymers in solution at equilibrium and in flows in microfluidic geometries. Both SRD and DPD use solvent "particles" to carry momentum, and so account automatically for hydrodynamic interactions both within isolated polymer coils, and with other polymer molecules and with nearby solid boundaries. We assess quantitatively the effects of artificial particle inertia and fluid compressibility and show that they can be made small with appropriate choice of simulation parameters. We then use these methods to study flow-induced migration of polymer chains produced by: 1) hydrodynamic interactions, 2) streamline curvature or stress-gradients, and 3) convection of wall depletion zones. We show that huge concentration gradients can be produced by these mechanisms in microfluidic geometries that can be exploited for separation of polymers by size in periodic contraction-expansion geometries. We also assess the range of conditions for which BD, SRD or DPD is preferable for mesoscopic simulations. Finally, we show how such methods can be used to simulate quantitatively the swimming of micro-organisms such as E. coli. In collaboration with Lei Jiang and Tongyang Zhao, University of Michigan, Ann Arbor, MI.
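A minimal free-draining Brownian dynamics sketch (Euler-Maruyama for one bead in a harmonic trap, no hydrodynamic interactions, so far simpler than the SRD/DPD methods assessed above); all parameters are arbitrary reduced units:

```python
import numpy as np

# Euler-Maruyama Brownian dynamics of one bead in a harmonic trap:
# zeta dx/dt = -k x + sqrt(2 kT zeta) * white noise.
# The stationary variance of x should approach kT/k.
rng = np.random.default_rng(42)
k, zeta, kT, dt, n = 1.0, 1.0, 1.0, 0.01, 200_000
noise = np.sqrt(2.0 * kT * dt / zeta) * rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = x[i - 1] - (k / zeta) * x[i - 1] * dt + noise[i]
var = x[n // 10:].var()   # discard the transient; expect roughly kT/k = 1
```

Recovering the fluctuation-dissipation balance (variance near kT/k) is the standard sanity check before adding chain connectivity or explicit solvent particles.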
Domain reduction method for atomistic simulations
Medyanik, Sergey N., E-mail: medyanik@northwestern.edu; Karpov, Eduard G., E-mail: edkarpov@gmail.com; Liu, Wing Kam, E-mail: w-liu@northwestern.edu
2006-11-01
In this paper, a quasi-static formulation of the method of multi-scale boundary conditions (MSBCs) is derived and applied to atomistic simulations of carbon nano-structures, namely single graphene sheets and multi-layered graphite. This domain reduction method allows for the simulation of deformable boundaries in periodic atomic lattice structures, reduces the effective size of the computational domain, and consequently decreases the cost of computations. The size of the reduced domain is determined by the value of the domain reduction parameter. This parameter is related to the distance between the boundary of the reduced domain, where MSBCs are applied, and the boundary of the full domain, where the standard displacement boundary conditions are prescribed. Two types of multi-scale boundary conditions are derived: one for simulating in-layer multi-scale boundaries in a single graphene sheet and the other for simulating inter-layer multi-scale boundaries in multi-layered graphite. The method is tested on benchmark nano-indentation problems and the results are consistent with the full domain solutions.
Automated Simulation Updates based on Flight Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Ward, David G.
2007-01-01
A statistically-based method for using flight data to update aerodynamic data tables used in flight simulators is explained and demonstrated. A simplified wind-tunnel aerodynamic database for the F/A-18 aircraft is used as a starting point. Flight data from the NASA F-18 High Alpha Research Vehicle (HARV) is then used to update the data tables so that the resulting aerodynamic model characterizes the aerodynamics of the F-18 HARV. Prediction cases are used to show the effectiveness of the automated method, which requires no ad hoc adjustments by the analyst.
Discontinuous Galerkin Methods for Turbulence Simulation
NASA Technical Reports Server (NTRS)
Collis, S. Scott
2002-01-01
A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.
Computer Based Simulation of Laboratory Experiments.
ERIC Educational Resources Information Center
Edward, Norrie S.
1997-01-01
Examines computer based simulations of practical laboratory experiments in engineering. Discusses the aims and achievements of lab work (cognitive, process, psychomotor, and affective); types of simulations (model building and behavioral); and the strengths and weaknesses of simulations. Describes the development of a centrifugal pump simulation,…
Moore, C S; Liney, G P; Beavis, A W; Saunderson, J R
2011-01-01
Objectives The purpose of this study was to develop and validate a computer model to produce realistic simulated computed radiography (CR) chest images using CT data sets of real patients. Methods Anatomical noise, which is the limiting factor in determining pathology in chest radiography, is realistically simulated by the CT data, and frequency-dependent noise has been added after digitally reconstructed radiograph (DRR) generation to simulate exposure reduction. Realistic scatter and scatter fractions were measured in images of a chest phantom acquired on the CR system simulated by the computer model and added after DRR calculation. Results The model has been validated with a phantom and patients and shown to provide predictions of signal-to-noise ratios (SNRs), tissue-to-rib ratios (TRRs: a measure of soft tissue pixel value to that of rib) and pixel value histograms that lie within the range of values measured with patients and the phantom. The maximum difference in measured SNR to that calculated was 10%. TRR values differed by a maximum of 1.3%. Conclusion Experienced image evaluators have responded positively to the DRR images, are satisfied they contain adequate anatomical features and have deemed them clinically acceptable. Therefore, the computer model can be used by image evaluators to grade chest images presented at different tube potentials and doses in order to optimise image quality and patient dose for clinical CR chest radiographs without the need for repeat patient exposures. PMID:21933979
2014-01-01
Background Paired survival data are often used in clinical research to assess the prognostic effect of an exposure. Matching generates correlated censored data, the expectation being that the paired subjects differ only in the exposure. Creating pairs when the exposure is an event occurring over time can be tricky. We applied a commonly used method, Method 1, which creates pairs a posteriori, and propose an alternative method, Method 2, which creates pairs in "real time". We used two semi-parametric models devoted to correlated censored data to estimate the average effect of the exposure over time, HR(t): the Holt and Prentice (HP) model, and the Lee, Wei and Amato (LWA) model. Contrary to the HP, the LWA allowed adjustment for the matching covariates (LWA_a) and for an interaction (LWA_i) between exposure and covariates (assimilated to prognostic profiles). The aim of our study was to compare the performance of each model under the two matching methods. Methods Extensive simulations were conducted. We simulated cohort data sets on which we applied the two matching methods, the HP and the LWA. We used our conclusions to assess the prognostic effect of subsequent pregnancy after treatment for breast cancer in a female cohort treated and followed up in eight French hospitals. Results In terms of bias and RMSE, Method 2 performed better than Method 1 in designing the pairs, and LWA_a was the best model in all situations except when there was an interaction between exposure and covariates, for which LWA_i was more appropriate. On our real data set, we found opposite effects of pregnancy according to the six prognostic profiles, but none were statistically significant. We probably lacked statistical power or reached the limits of our approach. The pairs' censoring options chosen for the Method 2 - LWA combination had to be compared with others. Conclusions Correlated censored data designed by Method 2 seemed to be the most pertinent method to create pairs, when the criterion
A reduced basis method for molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Vincent-Finley, Rachel Elisabeth
In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid-body and large-scale motions occur over a range of nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion, and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use principal component analysis (PCA) to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity with respect to our simulation method is analogous to that observed in the standard MD simulation, with simulations on the order of picoseconds.
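The PCA step described above can be sketched on a toy "trajectory" whose frames mix one slow collective mode with fast local noise; the dominant eigenvector of the coordinate covariance then captures the large-scale motion. Data shapes and amplitudes are invented for illustration.

```python
import numpy as np

# PCA of a toy trajectory: rows are time frames, columns are coordinates.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0 * np.pi, 500)
slow = np.outer(np.sin(t), rng.standard_normal(30))   # one slow collective mode
traj = slow + 0.05 * rng.standard_normal((500, 30))   # plus fast local "noise"

centered = traj - traj.mean(axis=0)
cov = centered.T @ centered / (len(centered) - 1)
evals, evecs = np.linalg.eigh(cov)                    # ascending eigenvalues
frac = evals[-1] / evals.sum()                        # top mode's variance share
reduced = centered @ evecs[:, -1:]                    # 1-D reduced coordinates
```

Propagating dynamics in the `reduced` coordinates instead of the full set is what permits the longer effective simulation times discussed in the abstract.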
Twitter's tweet method modelling and simulation
NASA Astrophysics Data System (ADS)
Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.
2015-02-01
This paper proposes a framework for Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them. It uses the design science research methodology for the proof of concept of the models and modelling processes. The following models were developed for a Twitter marketing agent/company and tested in real circumstances with real numbers. These models were finalized through a number of revisions and iterations of the design, develop, simulate, test and evaluate cycle. The paper also addresses the methods best suited to organized, targeted promotion on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are authenticated by the management of the company organization. The paper implements system dynamics concepts of Twitter marketing method modelling and produces models of various Twitter marketing situations. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.
Convected element method for simulation of angiogenesis.
Pindera, Maciej Z; Ding, Hui; Chen, Zhijian
2008-10-01
We describe a novel Convected Element Method (CEM) for simulation of formation of functional blood vessels induced by tumor-generated growth factors in a process called angiogenesis. Angiogenesis is typically modeled by a convection-diffusion-reaction equation defined on a continuous domain. A difficulty arises when a continuum approach is used to represent the formation of discrete blood vessel structures. CEM solves this difficulty by using a hybrid continuous/discrete solution method allowing lattice-free tracking of blood vessel tips that trace out paths that subsequently are used to define compact vessel elements. In contrast to more conventional angiogenesis modeling, the new branches form evolving grids that are capable of simulating transport of biological and chemical factors such as nutrition and anti-angiogenic agents. The method is demonstrated on expository vessel growth and tumor response simulations for a selected set of conditions, and include effects of nutrient delivery and inhibition of vessel branching. Initial results show that CEM can predict qualitatively the development of biologically reasonable and fully functional vascular structures. Research is being carried out to generalize the approach which will allow quantitative predictions. PMID:18365201
RELAP5 based engineering simulator
Charlton, T.R.; Laats, E.T.; Burtt, J.D.
1990-01-01
The INEL Engineering Simulation Center was established in 1988 to provide a modern, flexible, state-of-the-art simulation facility. This facility and two of the major projects which are part of the simulation center, the Advanced Test Reactor (ATR) engineering simulator project and the Experimental Breeder Reactor II (EBR-II) advanced reactor control system, have been the subject of several papers in the past few years. Two components of the ATR engineering simulator project, RELAP5 and the Nuclear Plant Analyzer (NPA), have recently been improved significantly. This paper will present an overview of the INEL Engineering Simulation Center, and discuss the RELAP5/MOD3 and NPA/MOD1 codes, specifically how they are being used at the INEL Engineering Simulation Center. It will provide an update on the modifications to these two codes and their application to the ATR engineering simulator project, as well as a discussion of the reactor system representation, control system modeling, and two-phase flow and heat transfer modeling. It will also discuss how these two codes are providing desktop, stand-alone reactor simulation. 12 refs., 2 figs.
Vibratory compaction method for preparing lunar regolith drilling simulant
NASA Astrophysics Data System (ADS)
Chen, Chongbin; Quan, Qiquan; Deng, Zongquan; Jiang, Shengyuan
2016-07-01
Drilling and coring is an effective way to acquire lunar regolith samples along the depth direction. To facilitate the modeling and simulation of lunar drilling, ground verification experiments for drilling and coring should be performed using lunar regolith simulant. The simulant should mimic actual lunar regolith, and the distribution of its mechanical properties should vary along the longitudinal direction. Furthermore, an appropriate preparation method is required to ensure that the simulant has consistent mechanical properties so that the experimental results can be repeatable. Vibratory compaction actively changes the relative density of a raw material, making it suitable for building a multilayered drilling simulant. It is necessary to determine the relation between the preparation parameters and the expected mechanical properties of the drilling simulant. A vibratory compaction model based on the ideal elastoplastic theory is built to represent the dynamical properties of the simulant during compaction. Preparation experiments indicated that the preparation method can be used to obtain drilling simulant with the desired mechanical property distribution along the depth direction.
Geant4 Simulation of Air Showers using Thinning Method
NASA Astrophysics Data System (ADS)
Sabra, Mohammad S.; Watts, John W.; Christl, Mark J.
2015-04-01
Simulation of complete air showers induced by cosmic ray particles becomes prohibitive at extreme energies due to the large number of secondary particles. Computing time for such simulations roughly scales with the energy of the primary cosmic ray particle and becomes excessively large. To mitigate the problem, only a small fraction of particles can be tracked, and the whole shower is then reconstructed from this sample. This method is called thinning. Using this method in Geant4, we have simulated proton and iron air showers at extreme energies (E > 10^16 eV). Secondary particle densities are calculated and compared with the standard simulation program in this field, CORSIKA. This work is supported by the NASA Postdoctoral Program administered by Oak Ridge Associated Universities.
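The core of statistical thinning can be sketched as follows: particles below an energy threshold are kept only with probability p, and survivors carry weight 1/p so that weighted totals are preserved in expectation. The threshold, keep probability, and toy energy spectrum are illustrative, not the values used in the Geant4 study.

```python
import random

def thin(particles, e_th, keep_prob):
    """Thin a list of (energy, weight) particles: below the threshold,
    keep each with probability keep_prob and boost its weight by
    1/keep_prob, preserving weighted sums in expectation."""
    out = []
    for energy, weight in particles:
        if energy >= e_th:
            out.append((energy, weight))          # track high-energy fully
        elif random.random() < keep_prob:
            out.append((energy, weight / keep_prob))
    return out

random.seed(7)
shower = [(random.expovariate(1.0), 1.0) for _ in range(50_000)]
thinned = thin(shower, e_th=2.0, keep_prob=0.1)
w_before = sum(w for _, w in shower)
w_after = sum(w for _, w in thinned)   # agrees with w_before up to statistics
```

The particle count drops sharply while the weighted sums fluctuate only at the percent level, which is exactly the computing-time trade-off the abstract describes.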
Gagne, MC; Archambault, L; Tremblay, D; Varfalvy, N
2014-06-15
Purpose: Intensity-modulated radiation therapy always requires compromises between PTV coverage and organ-at-risk (OAR) sparing. We previously developed metrics that correlate doses to OARs with specific patient morphology using stochastic frontier analysis (SFA). Here, we aim to examine the validity of this approach using a large set of realistically simulated dosimetric and geometric data. Methods: SFA describes a set of treatment plans as an asymmetric distribution with respect to a frontier defining optimal plans. Eighty head-and-neck IMRT plans were used to establish a metric predicting the mean dose to the parotids as a function of simple geometric parameters. A database of 140 parotids was used as a basis distribution to simulate physically plausible geometry and dose data. Distributions comprising between 20 and 5000 organs were simulated, and the SFA was applied to obtain new frontiers, which were compared to the original frontier. Results: It was possible to simulate distributions consistent with the original dataset. Below 160 organs, the SFA could not always describe distributions as asymmetric: a few cases showed a Gaussian or half-Gaussian distribution. To converge to a stable solution, the number of organs in a distribution should ideally be above 100, but in many cases stable parameters could be achieved with as few as 60 organ samples. The mean RMS error of the new frontiers was significantly reduced when additional organs were used. Conclusion: The number of organs in a distribution was shown to affect the effectiveness of the model. It is always possible to obtain a frontier, but if the number of organs in the distribution is small (< 160), it may not represent the lowest dose achievable. These results will be used to determine the number of cases necessary to adapt the model to other organs.
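The data-generation step can be sketched as follows. The frontier shape, noise levels, and parameter values below are invented for illustration and are not those of the study:

```python
import random

def simulate_plans(n, rng):
    """Draw (geometry, dose) pairs above a toy linear frontier:
    dose = frontier(x) + |u| + v, with half-normal inefficiency |u|
    (plans can only be worse than optimal) and symmetric noise v."""
    sample = []
    for _ in range(n):
        x = rng.uniform(0.0, 1.0)      # geometric parameter (e.g., overlap fraction)
        frontier = 10.0 + 30.0 * x     # hypothetical lowest achievable mean dose (Gy)
        u = abs(rng.gauss(0.0, 3.0))   # one-sided deviation from the optimal plan
        v = rng.gauss(0.0, 0.5)        # residual plan/measurement noise
        sample.append((x, frontier + u + v))
    return sample

plans = simulate_plans(2000, random.Random(3))
residuals = [d - (10.0 + 30.0 * x) for x, d in plans]
```

The positive mean and right skew of these residuals are exactly what an SFA fit exploits; the abstract's observation is that with too few organs this asymmetry can wash out and the fitted distribution degenerates toward a Gaussian or half-Gaussian.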
A Cloud-Based Simulation Architecture for Pandemic Influenza Simulation
Eriksson, Henrik; Raciti, Massimiliano; Basile, Maurizio; Cunsolo, Alessandro; Fröberg, Anders; Leifler, Ola; Ekberg, Joakim; Timpka, Toomas
2011-01-01
High-fidelity simulations of pandemic outbreaks are resource consuming. Cluster-based solutions have been suggested for executing such complex computations. We present a cloud-based simulation architecture that utilizes computing resources both locally available and dynamically rented online. The approach uses the Condor framework for job distribution and management of the Amazon Elastic Computing Cloud (EC2) as well as local resources. The architecture has a web-based user interface that allows users to monitor and control simulation execution. In a benchmark test, the best cost-adjusted performance was recorded for the EC2 H-CPU Medium instance, while a field trial showed that the job configuration had significant influence on the execution time and that the network capacity of the master node could become a bottleneck. We conclude that it is possible to develop a scalable simulation environment that uses cloud-based solutions, while providing an easy-to-use graphical user interface. PMID:22195089
Hou, Tingjun; Wang, Junmei; Li, Youyong; Wang, Wei
2011-01-01
The Molecular Mechanics/Poisson Boltzmann Surface Area (MM/PBSA) and the Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) methods calculate binding free energies for macromolecules by combining molecular mechanics calculations and continuum solvation models. To systematically evaluate the performance of these methods, we report here an extensive study of 59 ligands interacting with six different proteins. First, we explored the effects of the length of the molecular dynamics (MD) simulation, ranging from 400 to 4800 ps, and the solute dielectric constant (1, 2 or 4) to the binding free energies predicted by MM/PBSA. The following three important conclusions could be observed: (1). MD simulation lengths have obvious impact on the predictions, and longer MD simulations are not always necessary to achieve better predictions; (2). The predictions are quite sensitive to solute dielectric constant, and this parameter should be carefully determined according to the characteristics of the protein/ligand binding interface; (3). Conformational entropy showed large fluctuations in MD trajectories and a large number of snapshots are necessary to achieve stable predictions. Next, we evaluated the accuracy of the binding free energies calculated by three Generalized Born (GB) models. We found that the GB model developed by Onufriev and Case was the most successful model in ranking the binding affinities of the studied inhibitors. Finally, we evaluated the performance of MM/GBSA and MM/PBSA in predicting binding free energies. Our results showed that MM/PBSA performed better in calculating absolute, but not necessarily relative, binding free energies than MM/GBSA. Considering its computational efficiency, MM/GBSA can serve as a powerful tool in drug design, where correct ranking of inhibitors is often emphasized. PMID:21117705
Regolith simulant preparation methods for hardware testing
NASA Astrophysics Data System (ADS)
Gouache, Thibault P.; Brunskill, Christopher; Scott, Gregory P.; Gao, Yang; Coste, Pierre; Gourinat, Yves
2010-12-01
To qualify hardware for space flight, great care is taken to replicate the environment encountered in space. Emphasis is placed on presenting the hardware with the most extreme conditions it might encounter during its mission lifetime. The same care should be taken when regolith simulants are prepared to test space system performance. Indeed, the manner in which a granular material is prepared can strongly influence its mechanical properties and the performance of the system interacting with it. Three regolith simulant preparation methods have been tested and are presented here (rain, pour, vibrate). They should enable researchers and hardware developers to test their prototypes in controlled and repeatable conditions. The pour and vibrate techniques are robust but reach only a single relative density each. The rain technique can reach a range of relative densities but can be less robust if manually controlled.
Infrared Image Simulation Based On Statistical Learning Theory
NASA Astrophysics Data System (ADS)
Chaochao, Huang; Xiaodi, Wu; Wuqin, Tong
2007-12-01
A real-time simulation algorithm for infrared images based on statistical learning theory is presented. The method comprises three steps for achieving real-time infrared image simulation: acquiring the training samples, forecasting the scene temperature field with a statistical learning machine, and processing and analyzing the temperature-field data. The simulation results show that this algorithm, based on ν-support vector regression, has better maneuverability and generalization than other methods, and that its simulation precision and real-time performance are satisfactory.
Physics-Based Simulator for NEO Exploration Analysis & Simulation
NASA Technical Reports Server (NTRS)
Balaram, J.; Cameron, J.; Jain, A.; Kline, H.; Lim, C.; Mazhar, H.; Myint, S.; Nayar, H.; Patton, R.; Pomerantz, M.; Quadrelli, M.; Shakkotai, P.; Tso, K.
2011-01-01
As part of the Space Exploration Analysis and Simulation (SEAS) task, the National Aeronautics and Space Administration (NASA) is using physics-based simulations at NASA's Jet Propulsion Laboratory (JPL) to explore potential surface and near-surface mission operations at Near Earth Objects (NEOs). The simulator is under development at JPL and can be used to provide detailed analysis of various surface and near-surface NEO robotic and human exploration concepts. In this paper we describe the SEAS simulator and provide examples of recent mission systems and operations concepts investigated using the simulation. We also present related analysis work and tools developed for the SEAS task, as well as general modeling, analysis, and simulation capabilities for asteroids and other small bodies.
Interactive methods for exploring particle simulation data
Co, Christopher S.; Friedman, Alex; Grote, David P.; Vay, Jean-Luc; Bethel, E. Wes; Joy, Kenneth I.
2004-05-01
In this work, we visualize high-dimensional particle simulation data using a suite of scatter plot-based visualizations coupled with interactive selection tools. We use traditional 2D and 3D projection scatter plots as well as a novel oriented disk rendering style to convey various information about the data. Interactive selection tools allow physicists to manually classify "interesting" sets of particles that are highlighted across multiple, linked views of the data. The power of our application is its ability to relate new visual representations of the simulation data to traditional, well-understood visualizations. This approach supports the interactive exploration of the high-dimensional space while promoting discovery of new particle behavior.
Level set method for microfabrication simulations
NASA Astrophysics Data System (ADS)
Baranski, Maciej; Kasztelanic, Rafal; Albero, Jorge; Nieradko, Lukasz; Gorecki, Christophe
2010-05-01
The article describes the application of the level set method to two different microfabrication processes. The first is the shape evolution of a glass structure during reflow. The investigated problem was approximated as viscous flow of the material, so the kinetics of the process were known from a physical model. The second problem is isotropic wet etching of silicon, which is much more complicated because the dynamics of the shape evolution are strongly coupled with time and with the history of the geometry. In the etching simulations, the level set method is coupled with the finite element method (FEM), which is used to calculate the etching-acid concentration that determines the geometry evolution of the structure. The problems arising from using FEM with time-varying boundaries were solved with a dynamic mesh technique employing the level set formalism of a higher-dimensional function for geometry description. Isotropic etching was investigated in the context of micro-lens fabrication. The model was compared with experimental data obtained by etching the silicon moulds used for micro-lens fabrication.
ERIC Educational Resources Information Center
Clark, Joseph Warren
2012-01-01
In turbulent business environments, change is rapid, continuous, and unpredictable. Turbulence undermines those adaptive problem solving methods that generate solutions by extrapolating from what worked (or did not work) in the past. To cope with this challenge, organizations utilize trial-based problem solving (TBPS) approaches in which they…
Physiological Based Simulator Fidelity Design Guidance
NASA Technical Reports Server (NTRS)
Schnell, Thomas; Hamel, Nancy; Postnikov, Alex; Hoke, Jaclyn; McLean, Angus L. M. Thom, III
2012-01-01
The evolution of the role of flight simulation has reinforced assumptions in aviation that the degree of realism in a simulation system directly correlates to the training benefit, i.e., more fidelity is always better. The construct of fidelity has several dimensions, including physical fidelity, functional fidelity, and cognitive fidelity. Interaction of different fidelity dimensions has an impact on trainee immersion, presence, and transfer of training. This paper discusses research results of a recent study that investigated whether physiology-based methods could be used to determine the required level of simulator fidelity. Pilots performed a relatively complex flight task consisting of mission task elements of various levels of difficulty in a fixed-base flight simulator and a real fighter jet trainer aircraft. Flight runs were performed using one forward visual channel of 40 deg. field of view for the lowest level of fidelity, 120 deg. field of view for the middle level of fidelity, and unrestricted field of view and full dynamic acceleration in the real airplane. Neuro-cognitive and physiological measures were collected under these conditions using the Cognitive Avionics Tool Set (CATS), and nonlinear closed-form models for workload prediction were generated from these data for the various mission task elements. One finding of the work described herein is that simple heart rate is a relatively good predictor of cognitive workload, even for short tasks with dynamic changes in cognitive loading. Additionally, we found that models using a wide range of physiological and neuro-cognitive measures can further boost the accuracy of the workload prediction.
A novel load balancing method for hierarchical federation simulation system
NASA Astrophysics Data System (ADS)
Bin, Xiao; Xiao, Tian-yuan
2013-07-01
In contrast with a single-HLA-federation framework, a hierarchical federation framework can improve the performance of a large-scale simulation system to a certain degree by distributing load over several RTIs. However, in the hierarchical federation framework the RTI remains the center of message exchange for the federation and thus its performance bottleneck: the data explosion in a large-scale HLA federation may overload the RTI, degrading federation performance or even causing fatal errors. To address this problem, this paper proposes a load balancing method for hierarchical federation simulation systems based on queuing theory, comprising three main modules: queue-length prediction, a load-control policy, and a controller. The method improves the use of federate-node resources and the performance of the HLA simulation system by balancing load across the RTIG and the federates. Finally, experimental results are presented to demonstrate the method's efficient control.
A discrete event method for wave simulation
Nutaro, James J
2006-01-01
This article describes a discrete event interpretation of the finite difference time domain (FDTD) and digital wave guide network (DWN) wave simulation schemes. The discrete event method is formalized using the discrete event system specification (DEVS). The scheme is shown to have errors that are proportional to the resolution of the spatial grid. A numerical example demonstrates the relative efficiency of the scheme with respect to FDTD and DWN schemes. The potential for the discrete event scheme to reduce numerical dispersion and attenuation errors is discussed.
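For reference, the FDTD baseline the discrete event scheme is compared against is the standard leapfrog update. A minimal 1D sketch (grid size, pulse shape, and Courant number are chosen purely for illustration):

```python
import math

def fdtd_wave(n=200, steps=300, courant=0.5):
    """Leapfrog FDTD for the 1D wave equation u_tt = c^2 u_xx with
    fixed (u = 0) ends; courant = c*dt/dx <= 1 is the stability limit."""
    r2 = courant ** 2
    curr = [math.exp(-0.05 * (i - n // 2) ** 2) for i in range(n)]
    prev = list(curr)                  # zero initial velocity
    for _ in range(steps):
        nxt = [0.0] * n
        for i in range(1, n - 1):
            nxt[i] = (2.0 * curr[i] - prev[i]
                      + r2 * (curr[i + 1] - 2.0 * curr[i] + curr[i - 1]))
        prev, curr = curr, nxt
    return curr

u = fdtd_wave()
```

The initial pulse splits, propagates, and reflects off the fixed ends while the solution stays bounded; the scheme's numerical dispersion grows as the grid coarsens, which is the error source the discrete event formulation aims to reduce.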
Efficiency of ultrasound training simulators: method for assessing image realism.
Bø, Lars Eirik; Gjerald, Sjur Urdson; Brekken, Reidar; Tangen, Geir Arne; Hernes, Toril A Nagelhus
2010-04-01
Although ultrasound has become an important imaging modality within several medical professions, the benefit of ultrasound depends to some degree on the skills of the person operating the probe and interpreting the image. For some applications, the possibility to educate operators in a clinical setting is limited, and the use of training simulators is considered an alternative approach for learning basic skills. To ensure the quality of simulator-based training, it is important to produce simulated ultrasound images that resemble true images to a sufficient degree. This article describes a method that allows corresponding true and simulated ultrasound images to be generated and displayed side by side in real time, thus facilitating an interactive evaluation of ultrasound simulators in terms of image resemblance, real-time characteristics and man-machine interaction. The proposed method could be used to study the realism of ultrasound simulators and how this realism affects the quality of training, as well as being a valuable tool in the development of simulation algorithms. PMID:20337541
Safari, Edwin; Jalili Ghazizade, Mahdi; Abdoli, Mohammad Ali
2012-09-01
Compacted clay liners (CCLs), when feasible, are preferred to composite geosynthetic liners. The thickness of CCLs is typically prescribed by each country's environmental protection regulations. However, given that construction of CCLs represents a significant portion of overall landfill construction costs, a performance-based design of liner thickness would be preferable to 'one size fits all' prescriptive standards. In this study, the researchers analyzed the hydraulic behaviour of a compacted clayey soil in three laboratory pilot-scale columns exposed to high-strength leachate under simulated landfill conditions. The temperature of the simulated CCL at the surface was maintained at 40 ± 2 °C, and a vertical pressure of 250 kPa was applied to the soil through a gravel layer on top of the 50 cm thick CCL, where high-strength fresh leachate was circulated at heads of 15 and 30 cm, simulating the flow over the CCL. Inverse modelling using HYDRUS-1D indicated that the hydraulic conductivity after 180 days had decreased by about three orders of magnitude in comparison with the values measured prior to the experiment. A number of scenarios of different leachate heads and persistence times were considered, and the saturation depth of the CCL was predicted through modelling. Under a typical leachate head of 30 cm, the saturation depth was predicted to be less than 60 cm for a persistence time of 3 years. This approach can be generalized to estimate an effective thickness of a CCL instead of using prescribed values, which may be conservatively overdesigned and thus unduly costly. PMID:22617473
NASA Astrophysics Data System (ADS)
Ballarin, Francesco; Faggiano, Elena; Ippolito, Sonia; Manzoni, Andrea; Quarteroni, Alfio; Rozza, Gianluigi; Scrofani, Roberto
2016-06-01
In this work, a reduced-order computational framework for the study of haemodynamics in three-dimensional patient-specific configurations of coronary artery bypass grafts, covering a wide range of scenarios, is proposed. We combine several efficient algorithms to address simultaneously both the geometrical complexity involved in the description of the vascular network and the huge computational cost entailed by time-dependent patient-specific flow simulations. Medical imaging procedures allow patient-specific configurations to be reconstructed from clinical data. A centerlines-based parametrization is proposed to efficiently handle geometrical variations. POD-Galerkin reduced-order models are employed to cut down large computational costs. This computational framework makes it possible to characterize blood flows for different physical and geometrical variations relevant in clinical practice, such as stenosis factors and anastomosis variations, in a rapid and reliable way. Several numerical results are discussed, highlighting the computational performance of the proposed framework, as well as its capability to carry out sensitivity analysis studies that were so far out of reach. In particular, a reduced-order simulation takes only a few minutes to run, resulting in computational savings of 99% of CPU time with respect to the full-order discretization. Moreover, the error between full-order and reduced-order solutions is also studied, and it is numerically found to be less than 1% for reduced-order solutions obtained with just O(100) online degrees of freedom.
Apparatus for and method of simulating turbulence
Dimas, Athanassios; Lottati, Isaac; Bernard, Peter; Collins, James; Geiger, James C.
2003-01-01
In accordance with a preferred embodiment of the invention, a novel apparatus for and method of simulating physical processes such as fluid flow is provided. Fluid flow near a boundary or wall of an object is represented by a collection of vortex sheet layers. The layers are composed of a grid or mesh of one or more geometrically shaped space filling elements. In the preferred embodiment, the space filling elements take on a triangular shape. An Eulerian approach is employed for the vortex sheets, where a finite-volume scheme is used on the prismatic grid formed by the vortex sheet layers. A Lagrangian approach is employed for the vortical elements (e.g., vortex tubes or filaments) found in the remainder of the flow domain. To reduce the computational time, a hairpin removal scheme is employed to reduce the number of vortex filaments, and a Fast Multipole Method (FMM), preferably implemented using parallel processing techniques, reduces the computation of the velocity field.
Agent-Based Simulations for Project Management
NASA Technical Reports Server (NTRS)
White, J. Chris; Sholtes, Robert M.
2011-01-01
Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique being used at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.
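For contrast, the duration-based CPM calculation the paper argues against fits in a few lines; the task network below is invented:

```python
def critical_path(tasks):
    """Classic CPM forward pass: a task's earliest finish is its fixed
    duration plus the latest earliest-finish among its predecessors.
    Note the paper's first criticism: duration is an *input*, not an
    outcome of resource levels or productivity."""
    finish = {}
    def ef(name):
        if name not in finish:
            dur, preds = tasks[name]
            finish[name] = dur + max((ef(p) for p in preds), default=0.0)
        return finish[name]
    for name in tasks:
        ef(name)
    end = max(finish, key=finish.get)        # task that finishes last
    path = [end]
    while tasks[path[-1]][1]:                # walk back along latest predecessors
        path.append(max(tasks[path[-1]][1], key=lambda p: finish[p]))
    return finish[end], list(reversed(path))

plan = {'design': (3, []), 'build': (5, ['design']),
        'docs': (4, ['design']), 'test': (2, ['build'])}
duration, path = critical_path(plan)
```

A resource-based simulation instead treats durations as emergent: it steps through time, lets productivity and staffing determine progress, and lets management actions (reassigning resources mid-run) alter the outcome, none of which this forward pass can express.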
An example-based brain MRI simulation framework
NASA Astrophysics Data System (ADS)
He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L.
2015-03-01
The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of the MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on the statistical model of training data. Results show that the example based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.
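The patch-regression idea can be sketched in 1D with a nearest-neighbour lookup standing in for the learned regression; the label maps and intensities below are invented:

```python
def extract_patches(img, radius=1):
    """Overlapping 1D patches of width 2*radius + 1, zero-padded at the ends."""
    pad = [0] * radius + list(img) + [0] * radius
    return [tuple(pad[i:i + 2 * radius + 1]) for i in range(len(img))]

def simulate_mri(atlas_model, atlas_mri, new_model, radius=1):
    """Example-based simulation: for each patch of the new anatomical
    model, copy the atlas MR intensity whose model patch is closest
    (a nearest-neighbour stand-in for the learned patch regression)."""
    keys = extract_patches(atlas_model, radius)
    def nearest(patch):
        best = min(range(len(keys)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(keys[i], patch)))
        return atlas_mri[best]
    return [nearest(p) for p in extract_patches(new_model, radius)]

# atlas: three tissue labels (0, 1, 2) with characteristic intensities
atlas_model = [0, 0, 0, 1, 1, 1, 2, 2, 2]
atlas_mri   = [10, 10, 10, 50, 50, 50, 90, 90, 90]
new_model   = [1, 1, 1, 2, 2, 2]          # a different anatomy, same labels
simulated = simulate_mri(atlas_model, atlas_mri, new_model)
```

Because the mapping is learned from an actual acquired image rather than from an explicit physics model, the simulated intensities inherit the atlas's contrast and noise character, which is the source of the visual realism claimed in the abstract.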
NASA Astrophysics Data System (ADS)
Motorin, A. A.; Stupitsky, E. L.; Kholodov, A. S.
2016-07-01
The spatiotemporal pattern for the development of a plasma cloud formed in the ionosphere and the main cloud gas-dynamic characteristics have been obtained from 3D calculations of the explosion-type plasmodynamic flows previously performed by us. An approximate method for estimating the plasma temperature and ionization degree with the introduction of the effective adiabatic index has been proposed based on these results.
Lensless ghost imaging based on mathematical simulation and experimental simulation
NASA Astrophysics Data System (ADS)
Liu, Yanyan; Wang, Biyi; Zhao, Yingchao; Dong, Junzhang
2014-02-01
The differences between conventional imaging and correlated imaging are discussed in this paper. A mathematical model of a lensless ghost imaging system is set up, and the image of a double slit is computed by mathematical simulation. The results are also tested by experimental verification. Both the theoretical simulation and the experimental verification show that the mathematical model, based on statistical optics, is consistent with real experimental results.
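The intensity correlation at the heart of the scheme is easy to simulate numerically. The speckle model, pixel counts, and pattern count below are illustrative assumptions:

```python
import random

def ghost_image(transmission, n_patterns=20000, rng=None):
    """Correlate the per-pixel reference intensity with the bucket signal:
    G(x) = <I(x) B> - <I(x)><B>, which is proportional to the object's
    transmission T(x) for statistically independent speckle pixels."""
    rng = rng or random.Random(7)
    n = len(transmission)
    sum_i = [0.0] * n
    sum_ib = [0.0] * n
    sum_b = 0.0
    for _ in range(n_patterns):
        speckle = [rng.random() for _ in range(n)]          # one random pattern
        bucket = sum(s * t for s, t in zip(speckle, transmission))
        sum_b += bucket
        for x in range(n):
            sum_i[x] += speckle[x]
            sum_ib[x] += speckle[x] * bucket
    m = float(n_patterns)
    return [sum_ib[x] / m - (sum_i[x] / m) * (sum_b / m) for x in range(n)]

# double slit: two open pixel pairs on an otherwise opaque 16-pixel object
T = [1.0 if x in (3, 4, 11, 12) else 0.0 for x in range(16)]
G = ghost_image(T)
```

Open pixels stand out by roughly the per-pixel speckle variance (1/12 for uniform intensities): a single-pixel bucket detector plus many random patterns recovers the slit image without any lens, which is what distinguishes correlated imaging from conventional imaging.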
Wang, Jianling; Xiao, Xiaofeng; Chen, Tong; Liu, Tingfei; Tao, Huaming; He, Jun
2016-06-17
The glyceride in oil food simulant usually causes serious interference with target analytes and leads to failure of the normal function of the RP-HPLC column. In this work, a convenient HPLC-UV method for the determination of the total specific migration of nine ultraviolet (UV) absorbers in food simulants was developed based on 1,1,3,3-tetramethylguanidine (TMG) and organic phase anion exchange (OPAE) SPE to efficiently remove glyceride from olive oil simulant. In contrast to normal ion exchange, which is carried out in an aqueous solution or aqueous phase environment, the OPAE SPE was performed in an organic phase environment, and the time-consuming and challenging extraction of the nine UV absorbers from vegetable oil with aqueous solution could be readily omitted. The method was shown to have good linearity (r ≥ 0.99992), precision (intra-day RSD ≤ 3.3%), and accuracy (91.0% ≤ recoveries ≤ 107%); furthermore, low limits of quantification (0.05-0.2 mg/kg) were observed in five types of food simulants (10% ethanol, 3% acetic acid, 20% ethanol, 50% ethanol, and olive oil). The method was found to be well suited for quantitative determination of the total specific migration of the nine UV absorbers in both aqueous and vegetable oil simulants according to Commission Regulation (EU) No. 10/2011. Migration levels of the nine UV absorbers were determined in 31 plastic samples, and UV-24, UV-531, HHBP, and UV-326 were frequently detected, especially UV-326 in olive oil simulant for PE samples. In addition, the OPAE SPE procedure has also been applied to efficiently enrich or purify seven antioxidants in olive oil simulant. Results indicate that this procedure will have more extensive applications in the enrichment or purification of extremely weak acidic compounds with phenol hydroxyl groups that are relatively stable in TMG n-hexane solution and that can barely be extracted from vegetable oil. PMID:27189432
Implicit methods for efficient musculoskeletal simulation and optimal control
van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter
2011-01-01
The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first-order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder-arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers. PMID:22102983
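The first-order Rosenbrock (linearly implicit Euler) step mentioned above replaces the nonlinear solve of implicit Euler with a single linear solve per step. A scalar sketch on an invented stiff test problem, not the paper's musculoskeletal model:

```python
def rosenbrock_euler(f, dfdy, y0, t0, t1, n):
    """First-order Rosenbrock step for a scalar ODE y' = f(t, y):
    y_{k+1} = y_k + h * f(t_k, y_k) / (1 - h * J(t_k, y_k)).
    One Jacobian evaluation and one (here scalar) linear solve per step;
    no Newton iteration, yet stable for stiff problems."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y) / (1.0 - h * dfdy(t, y))
        t += h
    return y

# stiff test problem: y' = -1000 y, exact solution exp(-1000 t) -> 0.
# explicit Euler at this step size (h = 0.01, |1 + h*lambda| = 9) diverges.
f = lambda t, y: -1000.0 * y
jac = lambda t, y: -1000.0
y_end = rosenbrock_euler(f, jac, 1.0, 0.0, 1.0, 100)
```

For vector systems the division becomes a linear solve with (I − hJ); the method's appeal for real-time simulation is that this fixed per-step cost allows large stable steps on stiff muscle dynamics.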
An improved method for simulating microcalcifications in digital mammograms.
Zanca, Federica; Chakraborty, Dev Prasad; Van Ongeval, Chantal; Jacobs, Jurgen; Claus, Filip; Marchal, Guy; Bosmans, Hilde
2008-09-01
The assessment of the performance of a digital mammography system requires an observer study with a relatively large number of cases with known truth which is often difficult to assemble. Several investigators have developed methods for generating hybrid abnormal images containing simulated microcalcifications. This article addresses some of the limitations of earlier methods. The new method is based on digital images of needle biopsy specimens. Since the specimens are imaged separately from the breast, the microcalcification attenuation profile scan is deduced without the effects of over and underlying tissues. The resulting templates are normalized for image acquisition specific parameters and reprocessed to simulate microcalcifications appropriate to other imaging systems, with different x-ray, detector and image processing parameters than the original acquisition system. This capability is not shared by previous simulation methods that have relied on extracting microcalcifications from breast images. The method was validated by five experienced mammographers who compared 59 pairs of simulated and real microcalcifications in a two-alternative forced choice task designed to test if they could distinguish the real from the simulated lesions. They also classified the shapes of the microcalcifications according to a standardized clinical lexicon. The observed probability of correct choice was 0.415, 95% confidence interval (0.284, 0.546), showing that the radiologists were unable to distinguish the lesions. The shape classification revealed substantial agreement with the truth (mean kappa = 0.70), showing that we were able to accurately simulate the lesion morphology. While currently limited to single microcalcifications, the method is extensible to more complex clusters of microcalcifications and to three-dimensional images. It can be used to objectively assess an imaging technology, especially with respect to its ability to adequately visualize the morphology of
An improved method for simulating microcalcifications in digital mammograms
Zanca, Federica; Chakraborty, Dev Prasad; Ongeval, Chantal van; Jacobs, Jurgen; Claus, Filip; Marchal, Guy; Bosmans, Hilde
2008-09-15
The assessment of the performance of a digital mammography system requires an observer study with a relatively large number of cases with known truth which is often difficult to assemble. Several investigators have developed methods for generating hybrid abnormal images containing simulated microcalcifications. This article addresses some of the limitations of earlier methods. The new method is based on digital images of needle biopsy specimens. Since the specimens are imaged separately from the breast, the microcalcification attenuation profile scan is deduced without the effects of over and underlying tissues. The resulting templates are normalized for image acquisition specific parameters and reprocessed to simulate microcalcifications appropriate to other imaging systems, with different x-ray, detector and image processing parameters than the original acquisition system. This capability is not shared by previous simulation methods that have relied on extracting microcalcifications from breast images. The method was validated by five experienced mammographers who compared 59 pairs of simulated and real microcalcifications in a two-alternative forced choice task designed to test if they could distinguish the real from the simulated lesions. They also classified the shapes of the microcalcifications according to a standardized clinical lexicon. The observed probability of correct choice was 0.415, 95% confidence interval (0.284, 0.546), showing that the radiologists were unable to distinguish the lesions. The shape classification revealed substantial agreement with the truth (mean kappa=0.70), showing that we were able to accurately simulate the lesion morphology. While currently limited to single microcalcifications, the method is extensible to more complex clusters of microcalcifications and to three-dimensional images. It can be used to objectively assess an imaging technology, especially with respect to its ability to adequately visualize the morphology of the
An improved method for simulating microcalcifications in digital mammograms
Zanca, Federica; Chakraborty, Dev Prasad; Van Ongeval, Chantal; Jacobs, Jurgen; Claus, Filip; Marchal, Guy; Bosmans, Hilde
2008-01-01
The assessment of the performance of a digital mammography system requires an observer study with a relatively large number of cases with known truth, which is often difficult to assemble. Several investigators have developed methods for generating hybrid abnormal images containing simulated microcalcifications. This article addresses some of the limitations of earlier methods. The new method is based on digital images of needle biopsy specimens. Since the specimens are imaged separately from the breast, the microcalcification attenuation profiles can be deduced without the effects of over- and underlying tissues. The resulting templates are normalized for acquisition-specific parameters and reprocessed to simulate microcalcifications appropriate to other imaging systems with different x-ray, detector, and image-processing parameters than the original acquisition system. This capability is not shared by previous simulation methods, which have relied on extracting microcalcifications from breast images. The method was validated by five experienced mammographers who compared 59 pairs of simulated and real microcalcifications in a two-alternative forced choice task designed to test whether they could distinguish the real from the simulated lesions. They also classified the shapes of the microcalcifications according to a standardized clinical lexicon. The observed probability of a correct choice was 0.415, with a 95% confidence interval of (0.284, 0.546), showing that the radiologists were unable to distinguish the lesions. The shape classification revealed substantial agreement with the truth (mean kappa = 0.70), showing that we were able to accurately simulate the lesion morphology. While currently limited to single microcalcifications, the method is extensible to more complex clusters of microcalcifications and to three-dimensional images. It can be used to objectively assess an imaging technology, especially with respect to its ability to adequately visualize the morphology of the microcalcifications.
Etch Profile Simulation Using Level Set Methods
NASA Technical Reports Server (NTRS)
Hwang, Helen H.; Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1997-01-01
Etching and deposition of materials are critical steps in semiconductor processing for device manufacturing. Both etching and deposition may have isotropic and anisotropic components, due to directional sputtering and redeposition of materials, for example. Previous attempts at modeling profile evolution have used so-called "string theory" to simulate the moving solid-gas interface between the semiconductor and the plasma. One complication of this method is that extensive de-looping schemes are required at the profile corners. We will present a 2D profile evolution simulation using level set theory to model the surface. By embedding the location of the interface in a field variable, the need for de-looping schemes is eliminated and profile corners are more accurately modeled. This level set profile evolution model will calculate both isotropic and anisotropic etch and deposition rates of a substrate in low-pressure (tens of mTorr) plasmas, considering the incident ion energy angular distribution functions and neutral fluxes. We will present etching profiles of Si substrates in Ar/Cl2 discharges for various incident ion energies and trench geometries.
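The embedding idea described above can be illustrated with a minimal level-set sketch (not the authors' simulator): a 2D field phi holds the signed distance to the interface, and a first-order Godunov upwind update advances the zero contour at a prescribed normal speed, with no explicit front tracking and hence no de-looping.

```python
import numpy as np

def evolve_level_set(phi, speed, dx, dt, steps):
    """Advance the zero contour of phi at normal speed `speed` (> 0 moves
    it outward) with a first-order Godunov upwind scheme on a periodic grid."""
    for _ in range(steps):
        dxm = (phi - np.roll(phi, 1, axis=0)) / dx     # backward differences
        dxp = (np.roll(phi, -1, axis=0) - phi) / dx    # forward differences
        dym = (phi - np.roll(phi, 1, axis=1)) / dx
        dyp = (np.roll(phi, -1, axis=1) - phi) / dx
        # Godunov upwind gradient magnitude, valid for speed >= 0
        grad = np.sqrt(np.maximum(dxm, 0.0)**2 + np.minimum(dxp, 0.0)**2 +
                       np.maximum(dym, 0.0)**2 + np.minimum(dyp, 0.0)**2)
        phi = phi - dt * speed * grad                  # phi_t + F|grad phi| = 0
    return phi

# Interface: circle of radius 0.3, embedded as a signed-distance field
n = 64
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi0 = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2) - 0.3

# Uniform outward motion of the contour (isotropic, deposition-like growth)
phi1 = evolve_level_set(phi0, speed=1.0, dx=dx, dt=0.5 * dx, steps=10)
```

The interface at any time is simply the zero contour of `phi`; topology changes (merging or pinching fronts) are handled automatically by the field representation.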
Application of particle method to the casting process simulation
NASA Astrophysics Data System (ADS)
Hirata, N.; Zulaida, Y. M.; Anzai, K.
2012-07-01
Casting processes involve many significant phenomena, such as fluid flow, solidification, and deformation, and casting defects are known to be strongly influenced by these phenomena. However, the phenomena interact with one another in complex ways and are difficult to observe directly, because the melt and the surrounding apparatus are at very high temperatures and are generally opaque; computer simulation is therefore expected to offer substantial insight into what happens during these processes. Recently, the particle method, a fully Lagrangian method, has attracted considerable attention. Because Lagrangian particle methods require no computational lattice, they have developed rapidly owing to their applicability to multi-physics problems. In this study, we combined fluid flow, heat transfer, and solidification simulation programs and simulated various casting processes, including continuous casting, centrifugal casting, and ingot making. In the continuous casting simulation, the powder flow was calculated together with the melt flow, and the resulting shape of the interface between the melt and the powder was obtained. In the centrifugal casting simulation, the mold was modeled smoothly along the shape of the real mold, and the fluid flow and the rotating mold were simulated directly. The flow of the melt dragged by the rotating mold was calculated well, and the eccentric rotation and the influence of the Coriolis force were also reproduced directly and naturally. In the ingot-making simulation, shrinkage formation behavior was calculated, and the shape of the shrinkage agreed well with the experimental result.
IMPACT OF SIMULANT PRODUCTION METHODS ON SRAT PRODUCT
Eibling, R.
2006-03-22
The research and development programs in support of the Defense Waste Processing Facility (DWPF) and other high level waste vitrification processes require the use of both nonradioactive waste simulants and actual waste samples. The nonradioactive waste simulants have been used for laboratory testing, pilot-scale testing and full-scale integrated facility testing. Recent efforts have focused on matching the physical properties of actual sludge. These waste simulants were designed to reproduce the chemical and, if possible, the physical properties of the actual high level waste. This technical report documents a study of simulant production methods for high level waste simulated sludge and their impact on the physical properties of the resultant SRAT product. The sludge simulants used in support of DWPF have been based on average waste compositions and on expected or actual batch compositions. These sludge simulants were created primarily to match the chemical properties of the actual waste. These sludges were produced by generating manganese dioxide, MnO2, from permanganate ion (MnO4-) and manganous nitrate, precipitating ferric nitrate and nickel nitrate with sodium hydroxide, washing with inhibited water, and then adding other waste species. While these simulated sludges provided a good match for chemical reaction studies, they did not adequately match the physical properties (primarily rheology) measured on the actual waste. A study was completed in FY04 to determine the impact of simulant production methods on the physical properties of Sludge Batch 3 simulant. This study produced eight batches of sludge simulant, all prepared to the same chemical target, by varying the sludge production methods. The sludge batch which most closely duplicated the actual SB3 sludge physical properties was Test 8. Test 8 sludge was prepared by coprecipitating all of the major metals (including Al). After the sludge was washed to meet the target, the sludge
Development of semiclassical molecular dynamics simulation method.
Nakamura, Hiroki; Nanbu, Shinkoh; Teranishi, Yoshiaki; Ohta, Ayumi
2016-04-28
Various quantum mechanical effects such as nonadiabatic transitions, quantum mechanical tunneling and coherence play crucial roles in a variety of chemical and biological systems. In this paper, we propose a method to incorporate tunneling effects into the molecular dynamics (MD) method, which is purely based on classical mechanics. Caustics, which define the boundary between classically allowed and forbidden regions, are detected along classical trajectories, and the optimal tunneling path with minimum action is determined by starting from each appropriate caustic. The real phase associated with tunneling can also be estimated. A numerical demonstration using the simple collinear chemical reaction O + HCl → OH + Cl is presented to help the reader comprehend the proposed method. Generalization to an on-the-fly ab initio version is rather straightforward. By treating the nonadiabatic transitions at conical intersections with the Zhu-Nakamura theory, new semiclassical MD methods can be developed. PMID:27067383
Computational simulation methods for composite fracture mechanics
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.
1988-01-01
Structural integrity, durability, and damage tolerance of advanced composites are assessed by quantitatively and qualitatively studying damage initiation at various scales (micro, macro, and global) and its accumulation and growth leading to global failure. In addition, various fracture toughness parameters associated with a typical damage state and its growth must be determined. Computational structural analysis codes were developed to aid the composite design engineer in performing these tasks. CODSTRAN (COmposite Durability STRuctural ANalysis) is used to qualitatively and quantitatively assess the progressive damage occurring in composite structures due to mechanical and environmental loads. Next, methods currently being developed and used at Lewis to predict interlaminar fracture toughness and related parameters of fiber composites, given a prescribed damage state, are covered. The general-purpose finite element code MSC/NASTRAN was used to simulate the interlaminar fracture and the associated individual as well as mixed-mode strain energy release rates in fiber composites.
Computational Simulations and the Scientific Method
NASA Technical Reports Server (NTRS)
Kleb, Bil; Wood, Bill
2005-01-01
As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
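The kind of component test the paper advocates can be as small as a published fixture that pins a new model to an independently repeatable reference value. The model and numbers below are illustrative stand-ins, not from the paper:

```python
import math

def lift_coefficient(alpha_rad):
    """Hypothetical 'new model': thin-airfoil lift slope, C_L = 2*pi*alpha."""
    return 2.0 * math.pi * alpha_rad

def test_lift_matches_thin_airfoil_theory():
    # The independently repeatable reference value a model innovator
    # could publish as a component-test fixture alongside the model:
    # at alpha = 0.1 rad, thin-airfoil theory gives C_L = 0.2*pi.
    assert abs(lift_coefficient(0.1) - 0.2 * math.pi) < 1e-12

test_lift_matches_thin_airfoil_theory()
```

Shipping such a fixture with the model lets downstream scientific-software implementors verify their integration against the innovator's own reference, which is exactly the repeatability the Scientific Method premise requires.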
Massively parallel simulations of multiphase flows using Lattice Boltzmann methods
NASA Astrophysics Data System (ADS)
Ahrenholz, Benjamin
2010-03-01
In the last two decades the lattice Boltzmann method (LBM) has matured as an alternative and efficient numerical scheme for the simulation of fluid flows and transport problems. Unlike conventional numerical schemes based on discretizations of macroscopic continuum equations, the LBM is based on microscopic models and mesoscopic kinetic equations. The fundamental idea of the LBM is to construct simplified kinetic models that incorporate the essential physics of microscopic or mesoscopic processes so that the macroscopic averaged properties obey the desired macroscopic equations. Especially applications involving interfacial dynamics, complex and/or changing boundaries and complicated constitutive relationships which can be derived from a microscopic picture are suitable for the LBM. In this talk a modified and optimized version of a Gunstensen color model is presented to describe the dynamics of the fluid/fluid interface where the flow field is based on a multi-relaxation-time model. Based on that modeling approach validation studies of contact line motion are shown. Due to the fact that the LB method generally needs only nearest neighbor information, the algorithm is an ideal candidate for parallelization. Hence, it is possible to perform efficient simulations in complex geometries at a large scale by massively parallel computations. Here, the results of drainage and imbibition (Degree of Freedom > 2E11) in natural porous media gained from microtomography methods are presented. Those fully resolved pore scale simulations are essential for a better understanding of the physical processes in porous media and therefore important for the determination of constitutive relationships.
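A minimal single-phase sketch of the collide-and-stream structure underlying such LBM codes (plain BGK relaxation on a D2Q9 lattice, not the multi-relaxation-time color model of the talk); the nearest-neighbour streaming step is exactly the locality that makes the method so amenable to massive parallelization:

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwellian equilibrium distributions."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One collide-and-stream BGK update on a fully periodic grid."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau           # collision (local)
    for i in range(9):                                     # streaming (nearest
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)  # neighbour)
    return f

n = 32
y = np.arange(n)
rho0 = np.ones((n, n))
ux0 = 0.05 * np.sin(2 * np.pi * y / n)[None, :] * np.ones((n, 1))  # shear wave
f = equilibrium(rho0, ux0, np.zeros((n, n)))
mass0 = f.sum()
for _ in range(100):
    f = lbm_step(f, tau=0.8)  # kinematic viscosity nu = (tau - 0.5)/3
```

Mass is conserved exactly by both sub-steps, and the initial shear wave decays viscously at the rate set by tau, which is a standard sanity check for an LBM kernel.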
Simulation Platform: a cloud-based online simulation environment.
Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro
2011-09-01
For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have been designed solely to archive model files, but such databases should also give users a chance to validate the models before downloading them. In this paper, we report our ongoing project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On each virtual machine, various software is pre-installed, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user accesses the machine remotely through a web browser and carries out the simulation, without the need to install any software other than a web browser on the user's own computer. Simulation Platform is therefore expected to eliminate impediments to handling multiple neural models that require multiple software packages. PMID:21741207
Discrete Stochastic Simulation Methods for Chemically Reacting Systems
Cao, Yang; Samuels, David C.
2012-01-01
Discrete stochastic chemical kinetics describe the time evolution of a chemically reacting system by taking into account the fact that in reality chemical species are present with integer populations and exhibit some degree of randomness in their dynamical behavior. In recent years, with the development of new techniques to study biochemistry dynamics in a single cell, there are increasing studies using this approach to chemical kinetics in cellular systems, where the small copy number of some reactant species in the cell may lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. This chapter reviews the fundamental theory related to stochastic chemical kinetics and several simulation methods that are based on that theory. We focus on non-stiff biochemical systems and the two most important discrete stochastic simulation methods: Gillespie's Stochastic Simulation Algorithm (SSA) and the tau-leaping method. Different implementation strategies of these two methods are discussed. Then we recommend a relatively simple and efficient strategy that combines the strengths of the two methods: the hybrid SSA/tau-leaping method. The implementation details of the hybrid strategy are given here and a related software package is introduced. Finally, the hybrid method is applied to simple biochemical systems as a demonstration of its application. PMID:19216925
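Gillespie's direct-method SSA, as reviewed above, draws an exponential waiting time from the total propensity and then selects a reaction with probability proportional to its propensity. A minimal sketch with an illustrative one-reaction system (the reaction and rate are invented for demonstration, not taken from the chapter):

```python
import math
import random

def ssa(propensities, stoich, x0, t_max, seed=1):
    """Gillespie's direct-method SSA: exponential waiting times drawn from
    the total propensity a0; reaction j fires with probability a_j / a0."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while t < t_max:
        a = [f(x) for f in propensities]
        a0 = sum(a)
        if a0 == 0.0:
            break                                   # nothing left to fire
        t += -math.log(1.0 - rng.random()) / a0     # time to next reaction
        r, acc = rng.random() * a0, 0.0
        for j, aj in enumerate(a):
            acc += aj
            if acc >= r:                            # reaction j fires
                x = [xi + nu for xi, nu in zip(x, stoich[j])]
                break
    return t, x

# Illustrative system: irreversible isomerization A -> B with k = 1.0,
# starting from 100 A molecules (integer populations throughout)
k = 1.0
t_end, state = ssa([lambda x: k * x[0]], [(-1, +1)], x0=(100, 0), t_max=10.0)
```

The state stays integer-valued at every step, which is the defining feature that distinguishes discrete stochastic kinetics from the deterministic rate equations; tau-leaping accelerates exactly this loop by firing many reactions per time increment.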
A Transfer Voltage Simulation Method for Generator Step Up Transformers
NASA Astrophysics Data System (ADS)
Funabashi, Toshihisa; Sugimoto, Toshirou; Ueda, Toshiaki; Ametani, Akihiro
Measurements on 13 sets of generator step-up (GSU) transformers have shown that the transfer voltage of a GSU transformer involves one dominant oscillation frequency, which can be estimated from the inductance and capacitance values of the transformer's low-voltage side. This observation has led to a new method for simulating a GSU transformer transfer voltage. The method is based on the EMTP TRANSFORMER model, with stray capacitances added; the leakage inductance and the magnetizing resistance are modified using approximate curves for their frequency characteristics determined from the measured results. The new method is validated by comparison with the measured results.
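The frequency estimate described above follows the standard LC resonance relation f = 1/(2·pi·sqrt(L·C)). A sketch with illustrative component values (not the measured transformer data):

```python
import math

def dominant_transfer_frequency(l_leak_h, c_stray_f):
    """Estimate the dominant oscillation frequency of a GSU transformer
    transfer voltage from the low-voltage-side leakage inductance (H)
    and stray capacitance (F): f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_leak_h * c_stray_f))

# Illustrative (not measured) values: 10 mH leakage, 10 nF stray capacitance
f = dominant_transfer_frequency(10e-3, 10e-9)   # roughly 16 kHz
```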
NASA Astrophysics Data System (ADS)
Goddard, William
2013-03-01
For soft materials applications it is essential to obtain accurate descriptions of the weak (London dispersion, electrostatic) interactions between nonbond units, to include interactions with and stabilization by solvent, and to obtain accurate free energies and entropic changes during chemical, physical, and thermal processing. We will describe some of the advances being made in first-principles-based methods for treating soft materials, with applications selected from new organic electrodes and electrolytes for batteries and fuel cells, forward osmosis for water cleanup, extended matter stable at ambient conditions, and drugs for modulating activation of GPCR membrane proteins.
Atomistic hybrid DSMC/NEMD method for nonequilibrium multiscale simulations
Gu, Kai; Watkins, Charles B.; Koplik, Joel
2010-03-01
A multiscale hybrid method for coupling the direct simulation Monte Carlo (DSMC) method to the nonequilibrium molecular dynamics (NEMD) method is introduced. The method addresses Knudsen-layer-type gas flows within a few mean free paths of an interface, or about an object with dimensions of the order of a few mean free paths. It employs the NEMD method to resolve nanoscale phenomena closest to the interface, along with coupled DSMC simulation of the remainder of the Knudsen layer. The hybrid DSMC/NEMD method is a particle-based algorithm without a buffer zone. It incorporates a new, modified generalized soft sphere (MGSS) molecular collision model to improve the poor computational efficiency of the traditional generalized soft sphere (GSS) model and to achieve DSMC compatibility with Lennard-Jones NEMD molecular interactions. An equilibrium gas, a Fourier thermal flow, and an oscillatory Couette flow are simulated to validate the method. The method shows good agreement with Maxwell-Boltzmann theory for the equilibrium system, Chapman-Enskog theory for Fourier flow, and pure DSMC simulations for oscillatory Couette flow. Speedup in CPU time of the hybrid solver is benchmarked against a pure NEMD solver baseline for different system sizes and solver domain partitions. Finally, the hybrid method is applied to investigate the interaction of argon gas with solid surface molecules in a parametric study of the influence of wetting effects and solid molecular mass on energy transfer and thermal accommodation coefficients. It is determined that wetting effect strength and solid molecular mass have a significant impact on the energy transfer between gas and solid phases and on the thermal accommodation coefficient.
A multiscale quantum mechanics/electromagnetics method for device simulations.
Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua
2015-04-01
Multiscale modeling has become a popular tool for research applying to different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method. PMID:25611987
XML-based resources for simulation
Kelsey, R. L.; Riese, J. M.; Young, G. A.
2004-01-01
As simulations and the machines they run on become larger and more complex, the inputs and outputs become more unwieldy. Increased complexity makes the setup of simulation problems difficult. It also contributes to the burden of handling and analyzing large amounts of output results. Another problem is that among a class of simulation codes (such as those for physical system simulation) there is often no single standard format or resource for input data; running the same problem on different simulation codes requires a different setup for each. The eXtensible Markup Language (XML) is used to represent a general set of data resources, including physical system problems, materials, and test results. These resources provide a 'plug and play' approach to simulation setup. For example, a particular material for a physical system can be selected from a material database. The XML-based representation of the selected material is then converted to the native format of the simulation being run and plugged into the simulation input file. In this manner a user can quickly and easily put together a simulation setup. In the case of output data, an XML approach to regression testing includes tests and test results with XML-based representations. This facilitates the ability to query for specific tests and to make comparisons between results. Output results can also easily be converted to other formats for publishing online or on paper.
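The 'plug and play' conversion step can be sketched as follows; the XML schema and the native deck format here are invented for illustration, not the actual resource formats of the paper:

```python
import xml.etree.ElementTree as ET

# A hypothetical XML material record of the kind described above; the
# element and attribute names are illustrative, not the actual schema.
record = """
<material name="copper">
  <property key="density" units="g/cc">8.96</property>
  <property key="melting_point" units="K">1357.77</property>
</material>
"""

def to_native_deck(xml_text):
    """Convert an XML material record into a simple key = value input
    deck, standing in for one simulation code's native input format."""
    root = ET.fromstring(xml_text)
    lines = [f"material {root.get('name')}"]
    for prop in root.findall("property"):
        lines.append(f"  {prop.get('key')} = {prop.text.strip()}  # {prop.get('units')}")
    return "\n".join(lines)

deck = to_native_deck(record)
```

Writing one such converter per simulation code lets a single XML material database feed every code, which is the essence of the plug-and-play setup described above.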
NASA Astrophysics Data System (ADS)
Dattoli, G.; Migliorati, M.; Schiavi, A.
2007-05-01
Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be both fast and reliable, requirements that are rarely met at the same time. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators, and the extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, that includes the non-linear contribution due to wake field effects. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.
Simulation-based training: the next revolution in radiology education?
Desser, Terry S
2007-11-01
Simulation-based training methods have been widely adopted in hazardous professions such as aviation, nuclear power, and the military. Their use in medicine has been accelerating lately, fueled by the public's concerns over medical errors as well as new Accreditation Council for Graduate Medical Education requirements for outcome-based and proficiency-based assessment methods. This article reviews the rationale for simulator-based training, types of simulators, their historical development and validity testing, and some results to date in laparoscopic surgery and endoscopic procedures. A number of companies have developed endovascular simulators for interventional radiologic procedures; although they cannot as yet replicate the experience of performing cases in real patients, they promise to play an increasingly important role in procedural training in the future. PMID:17964504
NASA Astrophysics Data System (ADS)
Preziosi-Ribero, Antonio; Peñaloza-Giraldo, Jorge; Escobar-Vargas, Jorge; Donado-Garzón, Leonardo
2016-04-01
Groundwater-surface water interaction is a topic that has gained relevance among the scientific community over the past decades. However, several questions in this area remain unsolved, and almost all past research concerns transport phenomena, with little attention to understanding the dynamics of the flow patterns in these interactions. The aim of this research is to verify the attenuation of the water velocity that comes from the free surface and enters the porous media under the bed of a high mountain river. Understanding this process is key to characterizing and quantifying the interactions between groundwater and surface water. However, the lack of information and the difficulties that arise when measuring groundwater flows under streams make physical quantification unreliable for scientific purposes. These issues suggest that numerical simulations and in-stream velocity measurements can be used to characterize these flows. Previous studies have simulated the attenuation of a sinusoidal pulse of vertical velocity that comes from a stream and enters a porous medium, using the Burgers equation and the 1D Navier-Stokes equations as governing equations. However, the boundary conditions of the problem and the results obtained when varying the different parameters of the equations show that the understanding of the process is not yet complete. To begin with, a Spectral Multidomain Penalty Method (SMPM) was proposed for quantifying the velocity damping, solving the Navier-Stokes equations in 1D. The main assumptions are incompressibility and a hydrostatic approximation for the pressure distribution. This method was tested with theoretical signals that are mainly trigonometric pulses or functions. Afterwards, in order to test the results with real signals, velocity profiles were captured near the Gualí River bed (Honda, Colombia), with an Acoustic Doppler
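A greatly simplified stand-in for the damping calculation described above: a 1D explicit finite-difference model in which an oscillating velocity imposed at the bed interface diffuses into the porous medium. The actual study solves the 1D Navier-Stokes equations with an SMPM discretization; the frequency, viscosity-like coefficient, and depth used below are illustrative only.

```python
import numpy as np

def damp_pulse(nu=1e-2, n=200, depth=1.0, dt=1e-4, steps=2000):
    """Oscillating velocity imposed at the bed surface (z = 0) diffusing
    into the sediment; explicit FTCS scheme, stable since nu*dt/dz**2 < 0.5."""
    dz = depth / (n - 1)
    u = np.zeros(n)
    t = 0.0
    for _ in range(steps):
        u[0] = np.sin(2.0 * np.pi * 5.0 * t)           # 5 Hz surface signal
        u[1:-1] += nu * dt / dz**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        u[-1] = 0.0                                     # still water at depth
        t += dt
    return u

u = damp_pulse()
# The oscillation amplitude decays over a skin depth sqrt(2*nu/omega) ~ 0.025,
# so the signal is strongly attenuated within the top tenth of the column.
```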
Lattice-Boltzmann-based Simulations of Diffusiophoresis
NASA Astrophysics Data System (ADS)
Castigliego, Joshua; Kreft Pearce, Jennifer
We present results from a lattice-Boltzmann-based Brownian dynamics simulation of diffusiophoresis and the separation of particles within the system. A gradient in viscosity, which simulates a concentration gradient in a dissolved polymer, allows us to separate various types of particles by their deformability. As seen in previous experiments, simulated particles with higher deformability react differently to the polymer matrix than those with lower deformability, so the particles can be separated from each other. This simulation, in particular, was intended to model an oceanic system in which the particles of interest were zooplankton, phytoplankton, and microplastics. The separation of plankton from the microplastics was achieved.
NASA Astrophysics Data System (ADS)
Ludwig, Hans-Günter; Freytag, Bernd; Steffen, Matthias
1999-06-01
Based on detailed 2D numerical radiation hydrodynamics (RHD) calculations of time-dependent compressible convection, we have studied the dynamics and thermal structure of the convective surface layers of solar-type stars. The RHD models provide information about the convective efficiency in the superadiabatic region at the top of convective envelopes and predict the asymptotic value of the entropy of the deep, adiabatically stratified layers. This information is translated into an effective mixing-length parameter alpha_MLT suitable for constructing standard stellar structure models. We validate the approach by a detailed comparison to helioseismic data. The grid of RHD models for solar metallicity comprises 58 simulation runs with a helium abundance of Y = 0.28 in the range of effective temperatures 4300 K <= Teff <= 7100 K and gravities 2.54 <= log g <= 4.74. We find a moderate, nevertheless significant, variation of alpha_MLT between about 1.3 for F-dwarfs and 1.75 for K-subgiants, with a dominant dependence on Teff. In the close neighbourhood of the Sun we find a plateau where alpha_MLT remains almost constant. The internal accuracy of the calibration of alpha_MLT is estimated to be +/- 0.05, with a possible systematic bias towards lower values. An analogous calibration of the convection theory of Canuto & Mazzitelli (1991, 1992; CMT) gives a different temperature dependence but a similar variation of the free parameter. For the first time, values for the gravity-darkening exponent beta are derived independently of mixing-length theory: beta = 0.07 to 0.10. We show that our findings are consistent with constraints from stellar stability considerations and provide compact fitting formulae for the calibrations.
Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method
NASA Astrophysics Data System (ADS)
Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han
2015-12-01
Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.
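The core update that a GPU-accelerated FDTD code parallelizes is the leapfrog Yee scheme; below is a minimal serial 1D vacuum version for orientation (none of the paper's advanced techniques: uniform grid, soft source, no TFSF boundary or dispersive-material handling):

```python
import numpy as np

def fdtd_1d(n=400, steps=800, src=100):
    """Leapfrog Yee updates for Ez/Hy in 1D vacuum with a soft Gaussian
    source; normalized units, Courant number 0.5 (stable since <= 1)."""
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(steps):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])           # update H from curl E
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])            # update E from curl H
        ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft source injection
    return ez

ez = fdtd_1d()
```

Every grid point's update depends only on its nearest neighbours, which is why the scheme maps so well onto GPU threads and why inter-GPU communication in a cluster reduces to exchanging thin boundary slabs.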
A generic reaction-based biogeochemical simulator
Fang, Yilin; Yabusaki, Steven B.; Yeh, Gour T.; Miller, C. T.; Farthing, M. W.; Gray, W. G.; Pinder, G. F.
2004-06-17
This paper presents a generic biogeochemical simulator, BIOGEOCHEM. The simulator can read a thermodynamic database based on the EQ3/EQ6 database, and it can also read user-specified equilibrium and kinetic reactions (reactions not defined in the EQ3/EQ6 database format) symbolically. BIOGEOCHEM is developed with a general paradigm: it overcomes the requirement in most available reaction-based models that reactions and rate laws be specified in a limited number of canonical forms. The simulator interprets reactions and rate laws of virtually any type for input to the MAPLE symbolic mathematical software package. MAPLE then generates Fortran code for the analytical Jacobian matrix used in the Newton-Raphson technique, which is compiled and linked into the BIOGEOCHEM executable. With this feature, users need not recode the simulator to accept new equilibrium expressions or kinetic rate laws. Two examples are used to demonstrate the new features of the simulator.
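The symbolic-Jacobian idea can be sketched with SymPy standing in for MAPLE; the two-species reaction system and rate laws below are invented for illustration, not taken from the paper. Arbitrary user-specified rate expressions are differentiated analytically, then emitted as compilable code.

```python
import sympy as sp

# Invented two-species system with a non-canonical rate law, standing in
# for reactions the simulator reads symbolically from user input.
c1, c2, k1, k2 = sp.symbols("c1 c2 k1 k2", positive=True)
rates = sp.Matrix([
    -k1 * c1 * c2 + k2 * sp.sqrt(c2),   # dc1/dt
    -2 * k1 * c1 * c2,                  # dc2/dt
])

# Analytical Jacobian d(rates)/d(concentrations), as MAPLE would derive it
# for the Newton-Raphson iteration
jac = rates.jacobian([c1, c2])

# Emit compilable code for one entry (the paper generates Fortran; C here)
code = sp.ccode(jac[0, 0])
```

Because the differentiation and code generation are automated, adding a new rate law means editing only the symbolic input, not the solver, which is precisely the flexibility claimed for BIOGEOCHEM.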
A Carbonaceous Chondrite Based Simulant of Phobos
NASA Technical Reports Server (NTRS)
Rickman, Douglas L.; Patel, Manish; Pearson, V.; Wilson, S.; Edmunson, J.
2016-01-01
In support of an ESA-funded concept study considering a sample return mission, a simulant of the Martian moon Phobos was needed. There are no samples of the Phobos regolith; therefore, none of the four characteristics normally used to design a simulant are explicitly known for Phobos. Because of this, specifications for a Phobos simulant were based on spectroscopy, other remote measurements, and judgment. A composition based on the Tagish Lake meteorite was assumed. The requirement that sterility be achieved, especially given the required organic content, was unusual and problematic. The final design mixed JSC-1A, antigorite, pseudo-agglutinates, and gilsonite. Sterility was achieved by irradiation in a commercial facility.
Cheng, Candong; Lee, Joon-Ho; Lim, Kim Hwa; Massoud, Hisham Z.; Liu, Qing Huo
2007-01-01
A 3-D quantum transport solver based on the spectral element method (SEM) and perfectly matched layer (PML) is introduced to solve the 3-D Schrödinger equation with a tensor effective mass. In this solver, the influence of the environment is replaced with the artificial PML open boundary extended beyond the contact regions of the device. These contact regions are treated as waveguides with known incident waves from waveguide mode solutions. As the transmitted wave function is treated as a total wave, there is no need to decompose it into waveguide modes, thus significantly simplifying the problem in comparison with conventional open boundary conditions. The spectral element method leads to an exponentially improving accuracy with the increase in the polynomial order and sampling points. The PML region can be designed such that less than −100 dB outgoing waves are reflected by this artificial material. The computational efficiency of the SEM solver is demonstrated by comparing the numerical and analytical results from waveguide and plane-wave examples, and its utility is illustrated by multiple-terminal devices and semiconductor nanotube devices. PMID:18037971
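The validation strategy in the abstract, comparing a numerical scattering solution against known analytic plane-wave results, can be shown in one dimension. This is a hedged stand-in, not the paper's method: the paper's solver is a 3-D spectral-element code with PML, whereas the sketch below uses a simple transfer-matrix calculation for a rectangular barrier in hbar = m = 1 units, with hypothetical parameters:

```python
import numpy as np

# 1-D tunneling through a rectangular barrier: transfer-matrix result
# versus the closed-form transmission coefficient. Illustrative only.
E, V0, a = 0.3, 1.0, 1.0          # incident energy, barrier height, width
k = np.sqrt(2 * E)                # wavenumber outside the barrier
kappa = np.sqrt(2 * (V0 - E))     # decay constant inside (E < V0)

# "numerical" route: transmission from the transfer-matrix element M11,
# obtained by matching psi and psi' at both barrier edges
M11 = (np.cosh(kappa * a)
       + 0.5j * (kappa / k - k / kappa) * np.sinh(kappa * a))
T_matrix = 1.0 / abs(M11) ** 2

# analytic tunneling formula for the same barrier
T_exact = 1.0 / (1.0 + (V0 ** 2 * np.sinh(kappa * a) ** 2)
                 / (4 * E * (V0 - E)))

print(T_matrix, T_exact)          # the two routes agree
```

In the paper's setting the same logic scales up: incident waveguide modes are known analytically in the contact regions, and the solver's transmitted total wave is checked against them, with the PML absorbing the outgoing part to better than -100 dB.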
Simulation of automatic gain control method for laser radar receiver
NASA Astrophysics Data System (ADS)
Cai, Xiping; Shang, Hongbo; Wang, Lina; Yang, Shuang
2008-12-01
A receiver with high dynamic response and a wide control range is necessary for a laser radar system. In this paper, an automatic gain control (AGC) scheme for a laser radar receiver is proposed. The scheme is based on a closed-loop logarithmic feedback method. Signal models for a pulsed laser radar system are created and used as the input to the AGC model. Given the properties of laser radar returns, the signal is assumed to be very weak, with a pulse width on the order of nanoseconds. The AGC method and its simulation are presented in detail.
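The closed-loop logarithmic feedback idea can be sketched as follows. This is a minimal illustration under assumed parameters (the abstract gives no loop gains or time constants, so the values below are hypothetical), showing how a log-domain error signal drives the gain so that widely varying input levels settle to one output level:

```python
import numpy as np

# Closed-loop logarithmic-feedback AGC sketch; all parameters assumed.
target_db = 0.0        # desired output level in dB
loop_gain = 0.2        # feedback gain (sets settling speed)
gain_db = 0.0          # current amplifier gain in dB

# received pulse amplitudes with a 40 dB step in input level,
# standing in for the wide dynamic range of laser radar returns
inputs = np.concatenate([np.full(50, 1e-3), np.full(50, 1e-1)])

outputs = []
for xin in inputs:
    y = xin * 10 ** (gain_db / 20)                 # apply current gain
    err_db = target_db - 20 * np.log10(abs(y))     # log-domain error
    gain_db += loop_gain * err_db                  # integrate error
    outputs.append(y)

# after settling, the output sits near the target level on both sides
# of the 40 dB input step
print(20 * np.log10(outputs[49]), 20 * np.log10(outputs[-1]))
```

Working in the log domain is what gives the loop its wide control range: the error, and hence the gain correction per step, is the same number of dB whether the input is 40 dB below or above the target.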